Docs: Moving Audio files into dedicated subdirectory

Bug: 9743555
Change-Id: Iec775387c01ae6053ab75e02f3d98a521e80552b
diff --git a/src/devices/audio/attributes.jd b/src/devices/audio/attributes.jd
new file mode 100644
index 0000000..473a04e
--- /dev/null
+++ b/src/devices/audio/attributes.jd
@@ -0,0 +1,257 @@
+page.title=Audio Attributes
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Audio players support attributes that define how the audio system handles routing, volume, and
+focus decisions for the specified source. Applications can attach attributes to an audio playback
+(such as music played by a streaming service or a notification for a new email) and then pass the
+audio source attributes to the framework, where the audio system uses the attributes to make mixing
+decisions and to notify applications about the state of the system.</p>
+
+<p class="note"><strong>Note:</strong> Applications can also attach attributes to an audio
+recording (such as audio captured in a video recording), but this functionality is not exposed in
+the public API.</p>
+
+<p>In Android 4.4 and earlier, the framework made mixing decisions using only the audio stream type.
+However, basing such decisions on stream type was too limiting to produce quality output across
+multiple applications and devices. For example, on a mobile device, some applications (such as
+Google Maps) played driving directions on the <code>STREAM_MUSIC</code> stream type; however, on
+mobile devices in projection mode (such as Android Auto), applications cannot mix driving
+directions with other media streams.</p>
+
+<p>Using the <a href="http://developer.android.com/reference/android/media/AudioAttributes.html">
+audio attribute API</a>, applications can now provide the audio system with detailed information
+about a specific audio source:</p>
+
+<ul>
+<li><b>Usage</b>. Specifies why the source is playing and controls routing, focus, and volume
+decisions.</li>
+<li><b>Content type</b>. Specifies what the source is playing (music, movie, speech,
+sonification, unknown).</li>
+<li><b>Flags</b>. Specifies how the source should be played. Includes support for audibility
+enforcement (camera shutter sounds required in some countries) and hardware audio/video
+synchronization.</li>
+</ul>
+
+<p>For dynamics processing, applications must distinguish between movie, music, and speech content.
+Information about the data itself may also matter, such as loudness and peak sample value.</p>
+
+<h2 id="using">Using attributes</h2>
+
+<p>Usage specifies the context in which the stream is used, providing information about why the
+sound is playing and what the sound is used for. Usage information is more expressive than a stream
+type and allows platforms or routing policies to refine volume or routing decisions.</p>
+
+<p>Supply one of the following usage values for any instance:</p>
+
+<ul>
+<li><code>USAGE_UNKNOWN</code></li>
+<li><code>USAGE_MEDIA</code></li>
+<li><code>USAGE_VOICE_COMMUNICATION</code></li>
+<li><code>USAGE_VOICE_COMMUNICATION_SIGNALLING</code></li>
+<li><code>USAGE_ALARM</code></li>
+<li><code>USAGE_NOTIFICATION</code></li>
+<li><code>USAGE_NOTIFICATION_RINGTONE</code></li>
+<li><code>USAGE_NOTIFICATION_COMMUNICATION_REQUEST</code></li>
+<li><code>USAGE_NOTIFICATION_COMMUNICATION_INSTANT</code></li>
+<li><code>USAGE_NOTIFICATION_COMMUNICATION_DELAYED</code></li>
+<li><code>USAGE_NOTIFICATION_EVENT</code></li>
+<li><code>USAGE_ASSISTANCE_ACCESSIBILITY</code></li>
+<li><code>USAGE_ASSISTANCE_NAVIGATION_GUIDANCE</code></li>
+<li><code>USAGE_ASSISTANCE_SONIFICATION</code></li>
+<li><code>USAGE_GAME</code></li>
+</ul>
+
+<p>Audio attribute usage values are mutually exclusive. For examples, refer to <code>
+<a href="http://developer.android.com/reference/android/media/AudioAttributes.html#USAGE_MEDIA">
+USAGE_MEDIA</a></code> and <code>
+<a href="http://developer.android.com/reference/android/media/AudioAttributes.html#USAGE_ALARM">
+USAGE_ALARM</a></code> definitions; for exceptions, refer to the <code>
+<a href="http://developer.android.com/reference/android/media/AudioAttributes.Builder.html">
+AudioAttributes.Builder</a></code> definition.</p>
+
+<h2 id="content-type">Content type</h2>
+
+<p>Content type defines what the sound is and expresses the general category of the content such as
+movie, speech, or beep/ringtone. The audio framework uses content type information to selectively
+configure audio post-processing blocks. While supplying the content type is optional, you should
+include type information whenever the content type is known, such as using
+<code>CONTENT_TYPE_MOVIE</code> for a movie streaming service or <code>CONTENT_TYPE_MUSIC</code>
+for a music playback application.</p>
+
+<p>Supply one of the following content type values for any instance:</p>
+
+<ul>
+<li><code>CONTENT_TYPE_UNKNOWN</code> (default)</li>
+<li><code>CONTENT_TYPE_MOVIE</code></li>
+<li><code>CONTENT_TYPE_MUSIC</code></li>
+<li><code>CONTENT_TYPE_SONIFICATION</code></li>
+<li><code>CONTENT_TYPE_SPEECH</code></li>
+</ul>
+
+<p>Audio attribute content type values are mutually exclusive. For details on content types,
+refer to the <a href="http://developer.android.com/reference/android/media/AudioAttributes.html">
+audio attribute API</a>.</p>
+
+<h2 id="flags">Flags</h2>
+
+<p>Flags specify how the audio framework applies effects to the audio playback. Supply one or more
+of the following flags for an instance:</p>
+
+<ul>
+<li><code>FLAG_AUDIBILITY_ENFORCED</code>. Requests that the system ensure the audibility of the
+sound. Use this flag to address the needs of the legacy <code>STREAM_SYSTEM_ENFORCED</code> stream
+type (such as forcing camera shutter sounds).</li>
+<li><code>FLAG_HW_AV_SYNC</code>. Requests that the system select an output stream that supports
+hardware audio/video synchronization.</li>
+</ul>
+
+<p>Audio attribute flags are non-exclusive (can be combined). For details on these flags,
+refer to the <a href="http://developer.android.com/reference/android/media/AudioAttributes.html">
+audio attribute API</a>.</p>
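+
+<p>At the native level, the usage, content type, and flags travel together in the
+<code>audio_attributes_t</code> struct declared in <code>system/audio.h</code>. The following is
+an illustrative sketch (not taken from the platform sources) of populating that struct, assuming
+the Android 5.0 definitions of the <code>AUDIO_USAGE_*</code>, <code>AUDIO_CONTENT_TYPE_*</code>,
+and <code>AUDIO_FLAG_*</code> constants:</p>
+
+<pre>
+#include &lt;system/audio.h&gt;
+
+// Sketch: values are mutually exclusive for usage and content type,
+// while flags form a bitmask and may be combined with bitwise OR.
+audio_attributes_t attr = {};
+attr.usage        = AUDIO_USAGE_MEDIA;
+attr.content_type = AUDIO_CONTENT_TYPE_MOVIE;
+attr.flags        = (audio_flags_mask_t)
+        (AUDIO_FLAG_AUDIBILITY_ENFORCED | AUDIO_FLAG_HW_AV_SYNC);
+</pre>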
+
+<h2 id="example">Example</h2>
+
+<p>In this example, AudioAttributes.Builder defines the AudioAttributes to be used by a new
+AudioTrack instance:</p>
+
+<pre>
+AudioTrack myTrack = new AudioTrack(
+    new AudioAttributes.Builder()
+        .setUsage(AudioAttributes.USAGE_MEDIA)
+        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
+        .build(),
+    myFormat, myBuffSize, AudioTrack.MODE_STREAM, mySession);
+</pre>
+
+<h2 id="compatibility">Compatibility</h2>
+
+<p>Application developers should use audio attributes when creating or updating applications for
+Android 5.0. However, applications are not required to take advantage of attributes; they can
+handle legacy stream types only or remain unaware of attributes (e.g. a generic media player that
+doesn’t know anything about the content it’s playing).</p>
+
+<p>In such cases, the framework maintains backwards compatibility with older devices and Android
+releases by automatically translating legacy audio stream types to audio attributes. However, the
+framework does not enforce or guarantee this mapping across devices, manufacturers, or Android
+releases.</p>
+
+<p>Compatibility mappings:</p>
+
+<table>
+<tr>
+  <th>Android 5.0</th>
+  <th>Android 4.4 and earlier</th>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_SPEECH</code><br>
+  <code>USAGE_VOICE_COMMUNICATION</code>
+  </td>
+  <td>
+  <code>STREAM_VOICE_CALL</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_SONIFICATION</code><br>
+  <code>USAGE_ASSISTANCE_SONIFICATION</code>
+  </td>
+  <td>
+  <code>STREAM_SYSTEM</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_SONIFICATION</code><br>
+  <code>USAGE_NOTIFICATION_RINGTONE</code>
+  </td>
+  <td>
+  <code>STREAM_RING</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_MUSIC</code><br>
+  <code>USAGE_UNKNOWN</code><br>
+  <code>USAGE_MEDIA</code><br>
+  <code>USAGE_GAME</code><br>
+  <code>USAGE_ASSISTANCE_ACCESSIBILITY</code><br>
+  <code>USAGE_ASSISTANCE_NAVIGATION_GUIDANCE</code><br>
+  </td>
+  <td>
+  <code>STREAM_MUSIC</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_SONIFICATION</code><br>
+  <code>USAGE_ALARM</code>
+  </td>
+  <td>
+  <code>STREAM_ALARM</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_SONIFICATION</code><br>
+  <code>USAGE_NOTIFICATION</code><br>
+  <code>USAGE_NOTIFICATION_COMMUNICATION_REQUEST</code><br>
+  <code>USAGE_NOTIFICATION_COMMUNICATION_INSTANT</code><br>
+  <code>USAGE_NOTIFICATION_COMMUNICATION_DELAYED</code><br>
+  <code>USAGE_NOTIFICATION_EVENT</code><br>
+  </td>
+  <td>
+  <code>STREAM_NOTIFICATION</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_SPEECH</code>
+  </td>
+  <td>
+  (@hide)<code> STREAM_BLUETOOTH_SCO</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>FLAG_AUDIBILITY_ENFORCED</code>
+  </td>
+  <td>
+  (@hide)<code> STREAM_SYSTEM_ENFORCED</code>
+  </td>
+</tr>
+<tr>
+  <td>
+  <code>CONTENT_TYPE_SONIFICATION</code><br>
+  <code>USAGE_VOICE_COMMUNICATION_SIGNALLING</code>
+  </td>
+  <td>
+  (@hide)<code> STREAM_DTMF</code>
+  </td>
+</tr>
+</table>
+
+<p class="note"><strong>Note:</strong> @hide streams are used internally by the framework but are
+not part of the public API.</p>
\ No newline at end of file
diff --git a/src/devices/audio/avoiding_pi.jd b/src/devices/audio/avoiding_pi.jd
new file mode 100644
index 0000000..e884f8c
--- /dev/null
+++ b/src/devices/audio/avoiding_pi.jd
@@ -0,0 +1,326 @@
+page.title=Avoiding Priority Inversion
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+This article explains how Android's audio system attempts to avoid
+priority inversion, as of the Android 4.1 release,
+and highlights techniques that you can use too.
+</p>
+
+<p>
+These techniques may be useful to developers of high-performance
+audio apps, OEMs, and SoC providers who are implementing an audio
+HAL. Please note that implementing these techniques is not
+guaranteed to prevent glitches or other failures, particularly if
+used outside of the audio context.
+Your results may vary, and you should conduct your own
+evaluation and testing.
+</p>
+
+<h2 id="background">Background</h2>
+
+<p>
+The Android AudioFlinger audio server and AudioTrack/AudioRecord
+client implementation are being re-architected to reduce latency.
+This work started in Android 4.1, continued in 4.2 and 4.3, and brought
+further improvements in version 4.4.
+</p>
+
+<p>
+To achieve this lower latency, many changes were needed throughout the system. One
+important change was to assign CPU resources to time-critical
+threads with a more predictable scheduling policy. Reliable scheduling
+allows the audio buffer sizes and counts to be reduced while still
+avoiding artifacts due to underruns.
+</p>
+
+<h2 id="priorityInversion">Priority inversion</h2>
+
+<p>
+<a href="http://en.wikipedia.org/wiki/Priority_inversion">Priority inversion</a>
+is a classic failure mode of real-time systems,
+where a higher-priority task is blocked for an unbounded time waiting
+for a lower-priority task to release a resource such as (shared
+state protected by) a
+<a href="http://en.wikipedia.org/wiki/Mutual_exclusion">mutex</a>.
+</p>
+
+<p>
+In an audio system, priority inversion typically manifests as a
+<a href="http://en.wikipedia.org/wiki/Glitch">glitch</a>
+(click, pop, dropout),
+<a href="http://en.wikipedia.org/wiki/Max_Headroom_(character)">repeated audio</a>
+when circular buffers
+are used, or delay in responding to a command.
+</p>
+
+<p>
+In the Android audio implementation, priority inversion is most
+likely to occur in the following places, so you should focus your attention here:
+</p>
+
+<ul>
+
+<li>
+between the normal mixer thread and the fast mixer thread in AudioFlinger
+</li>
+
+<li>
+between the application callback thread for a fast AudioTrack and
+the fast mixer thread (both have elevated, but slightly
+different, priorities)
+</li>
+
+<li>
+within the audio Hardware Abstraction Layer (HAL) implementation, e.g. for telephony or echo cancellation
+</li>
+
+<li>
+within the audio driver in the kernel
+</li>
+
+<li>
+between the AudioTrack callback thread and other app threads (this is out of our control)
+</li>
+
+</ul>
+
+<p>
+As of this writing, reduced latency for AudioRecord is planned but
+not yet implemented. The likely priority inversion spots will be
+similar to those for AudioTrack.
+</p>
+
+<h2 id="commonSolutions">Common solutions</h2>
+
+<p>
+The typical solutions include:
+</p>
+
+<ul>
+
+<li>
+disabling interrupts
+</li>
+
+<li>
+priority inheritance mutexes
+</li>
+
+</ul>
+
+<p>
+Disabling interrupts is not feasible in Linux user space, and does
+not work on symmetric multiprocessor (SMP) systems.
+</p>
+
+
+<p>
+Priority inheritance
+<a href="http://en.wikipedia.org/wiki/Futex">futexes</a>
+(fast user-space mutexes) are available
+in the Linux kernel, but are not currently exposed by the Android C
+runtime library
+<a href="http://en.wikipedia.org/wiki/Bionic_(software)">Bionic</a>.
+They are not used in the audio system because they are relatively heavyweight,
+and because they rely on a trusted client.
+</p>
+
+<h2 id="androidTechniques">Techniques used by Android</h2>
+
+<p>
+Experiments started with "try lock" and lock with timeout. These are
+non-blocking and bounded blocking variants of the mutex lock
+operation. Try lock and lock with timeout worked fairly well but were
+susceptible to a couple of obscure failure modes: the
+server was not guaranteed to be able to access the shared state if
+the client happened to be busy, and the cumulative timeout could
+be too long if there was a long sequence of unrelated locks that
+all timed out.
+</p>
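+
+<p>
+The sketch below illustrates these two variants using standard C++11
+primitives (an illustration of the technique only; the platform code uses
+its own mutex classes rather than <code>std::timed_mutex</code>):
+</p>
+
+<pre>
+#include &lt;chrono&gt;
+#include &lt;mutex&gt;
+
+std::timed_mutex stateLock;
+
+void tryToPushState() {
+    // Non-blocking variant: give up immediately if the lock is held.
+    if (stateLock.try_lock()) {
+        // ... update shared state ...
+        stateLock.unlock();
+        return;
+    }
+    // Bounded-blocking variant: wait at most 1 ms, then give up.
+    // Note the failure mode described above: a long sequence of such
+    // timeouts can still add up to an unacceptable cumulative delay.
+    if (stateLock.try_lock_for(std::chrono::milliseconds(1))) {
+        // ... update shared state ...
+        stateLock.unlock();
+    }
+}
+</pre>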
+
+
+<p>
+We also use
+<a href="http://en.wikipedia.org/wiki/Linearizability">atomic operations</a>
+such as:
+</p>
+
+<ul>
+<li>increment</li>
+<li>bitwise "or"</li>
+<li>bitwise "and"</li>
+</ul>
+
+<p>
+All of these return the previous value and include the necessary
+SMP barriers. The disadvantage is they can require unbounded retries.
+In practice, we've found that the retries are not a problem.
+</p>
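+
+<p>
+For illustration, here are the same operations expressed with C++11
+<code>std::atomic</code> (the platform code uses its own atomic wrappers):
+each call returns the previous value, and the default sequentially
+consistent memory order includes the necessary SMP barriers.
+</p>
+
+<pre>
+#include &lt;atomic&gt;
+#include &lt;cstdint&gt;
+
+std::atomic&lt;uint32_t&gt; state(0);
+
+uint32_t prev;
+prev = state.fetch_add(1);      // atomic increment; returns old value
+prev = state.fetch_or(0x2);     // atomic bitwise "or"; returns old value
+prev = state.fetch_and(~0x4u);  // atomic bitwise "and"; returns old value
+</pre>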
+
+<p class="note"><strong>Note:</strong> Atomic operations and their interactions with memory barriers
+are notoriously misunderstood and used incorrectly. We include these methods
+here for completeness but recommend you also read the article
+<a href="https://developer.android.com/training/articles/smp.html">
+SMP Primer for Android</a>
+for further information.
+</p>
+
+<p>
+We still have and use most of the above tools, and have recently
+added these techniques:
+</p>
+
+<ul>
+
+<li>
+Use non-blocking single-reader single-writer
+<a href="http://en.wikipedia.org/wiki/Circular_buffer">FIFO queues</a>
+for data (see the sketch after this list).
+</li>
+
+<li>
+Try to
+<i>copy</i>
+state rather than
+<i>share</i>
+state between high- and
+low-priority modules.
+</li>
+
+<li>
+When state does need to be shared, limit the state to the
+maximum-size
+<a href="http://en.wikipedia.org/wiki/Word_(computer_architecture)">word</a>
+that can be accessed atomically in one bus operation
+without retries.
+</li>
+
+<li>
+For complex multi-word state, use a state queue. A state queue
+is basically just a non-blocking single-reader single-writer FIFO
+queue used for state rather than data, except the writer collapses
+adjacent pushes into a single push.
+</li>
+
+<li>
+Pay attention to
+<a href="http://en.wikipedia.org/wiki/Memory_barrier">memory barriers</a>
+for SMP correctness.
+</li>
+
+<li>
+<a href="http://en.wikipedia.org/wiki/Trust,_but_verify">Trust, but verify</a>.
+When sharing
+<i>state</i>
+between processes, don't
+assume that the state is well-formed. For example, check that indices
+are within bounds. This verification isn't needed between threads
+in the same process, or between mutually trusting processes (which
+typically have the same UID). It's also unnecessary for shared
+<i>data</i>
+such as PCM audio where a corruption is inconsequential.
+</li>
+
+</ul>
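+
+<p>
+Below is a minimal sketch of the non-blocking single-reader single-writer
+FIFO technique referenced in the list above. It is illustrative only (the
+actual classes under <code>frameworks/av</code> are more elaborate); it
+assumes a power-of-two capacity and C++11 atomics, and shows the
+"trust, but verify" bounds check that a reader should apply when the
+indices live in shared memory:
+</p>
+
+<pre>
+#include &lt;atomic&gt;
+#include &lt;cstdint&gt;
+
+// One writer thread calls write(); one reader thread calls read().
+// No mutex: each index is a single word advanced by only one side.
+class SpscFifo {
+  public:
+    bool write(int32_t value) {
+        uint32_t front = mFront.load(std::memory_order_acquire);
+        uint32_t rear = mRear.load(std::memory_order_relaxed);
+        if (rear - front &gt;= kCapacity) return false;   // full
+        mBuffer[rear &amp; (kCapacity - 1)] = value;
+        // Release barrier: publish the data before the new rear index.
+        mRear.store(rear + 1, std::memory_order_release);
+        return true;
+    }
+
+    bool read(int32_t *value) {
+        uint32_t front = mFront.load(std::memory_order_relaxed);
+        uint32_t rear = mRear.load(std::memory_order_acquire);
+        // Trust, but verify: indices from an untrusted writer could be
+        // malformed, so check them before use.
+        if (rear - front == 0 || rear - front &gt; kCapacity) return false;
+        *value = mBuffer[front &amp; (kCapacity - 1)];
+        mFront.store(front + 1, std::memory_order_release);
+        return true;
+    }
+
+  private:
+    static const uint32_t kCapacity = 256;   // must be a power of two
+    int32_t mBuffer[kCapacity];
+    std::atomic&lt;uint32_t&gt; mFront{0};   // advanced only by the reader
+    std::atomic&lt;uint32_t&gt; mRear{0};    // advanced only by the writer
+};
+</pre>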
+
+<h2 id="nonBlockingAlgorithms">Non-blocking algorithms</h2>
+
+<p>
+<a href="http://en.wikipedia.org/wiki/Non-blocking_algorithm">Non-blocking algorithms</a>
+have been a subject of much recent study.
+But with the exception of single-reader single-writer FIFO queues,
+we've found them to be complex and error-prone.
+</p>
+
+<p>
+Starting in Android 4.2, you can find our non-blocking,
+single-reader/writer classes in these locations:
+</p>
+
+<ul>
+
+<li>
+frameworks/av/include/media/nbaio/
+</li>
+
+<li>
+frameworks/av/media/libnbaio/
+</li>
+
+<li>
+frameworks/av/services/audioflinger/StateQueue*
+</li>
+
+</ul>
+
+<p>
+These were designed specifically for AudioFlinger and are not
+general-purpose. Non-blocking algorithms are notorious for being
+difficult to debug. You can look at this code as a model, but be
+aware that it may contain bugs and that the classes are not guaranteed to be
+suitable for other purposes.
+</p>
+
+<p>
+For app developers, some of the sample OpenSL ES application code should be updated to
+use non-blocking algorithms or reference a non-Android open source library.
+</p>
+
+<h2 id="tools">Tools</h2>
+
+<p>
+To the best of our knowledge, there are no automatic tools for
+finding priority inversion, especially before it happens. Some
+research static code analysis tools are capable of finding priority
+inversions if they can access the entire codebase. Of course, if
+arbitrary user code is involved (as it is here for the application)
+or the codebase is large (as for the Linux kernel and device drivers),
+static analysis may be impractical. The most important thing is to
+read the code very carefully and get a good grasp on the entire
+system and the interactions. Tools such as
+<a href="http://developer.android.com/tools/help/systrace.html">systrace</a>
+and
+<code>ps -t -p</code>
+are useful for seeing priority inversion after it occurs, but do
+not tell you in advance.
+</p>
+
+<h2 id="aFinalWord">A final word</h2>
+
+<p>
+After all of this discussion, don't be afraid of mutexes. Mutexes
+are your friend when used and implemented correctly in ordinary
+non-time-critical use cases. But between high- and low-priority tasks
+and in time-sensitive systems, mutexes are more likely to cause trouble.
+</p>
+
diff --git a/src/devices/audio/debugging.jd b/src/devices/audio/debugging.jd
new file mode 100644
index 0000000..8cee394
--- /dev/null
+++ b/src/devices/audio/debugging.jd
@@ -0,0 +1,434 @@
+page.title=Audio Debugging
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+This article describes some tips and tricks for debugging Android audio.
+</p>
+
+<h2 id="teeSink">Tee Sink</h2>
+
+<p>
+The "tee sink" is
+an AudioFlinger debugging feature, available in custom builds only,
+for retaining a short fragment of recent audio for later analysis.
+This permits comparison between what was actually played or recorded
+vs. what was expected.
+</p>
+
+<p>
+For privacy reasons, the tee sink is disabled by default, at both compile time and
+run time.  To use the tee sink, you will need to enable it by re-compiling,
+and also by setting a property.  Be sure to disable this feature after you are
+done debugging; the tee sink should not be left enabled in production builds.
+</p>
+
+<p>
+The instructions in the remainder of this section are for Android 4.4,
+and may require changes for other versions.
+</p>
+
+<h3>Compile-time setup</h3>
+
+<ol>
+<li><code>cd frameworks/av/services/audioflinger</code></li>
+<li>Edit <code>Configuration.h</code>.</li>
+<li>Uncomment <code>#define TEE_SINK</code>.</li>
+<li>Re-build <code>libaudioflinger.so</code>.</li>
+<li><code>adb root</code></li>
+<li><code>adb remount</code></li>
+<li>Push or sync the new <code>libaudioflinger.so</code> to the device's <code>/system/lib</code>.</li>
+</ol>
+
+<h3>Run-time setup</h3>
+
+<ol>
+<li><code>adb shell getprop | grep ro.debuggable</code>
+<br />Confirm that the output is: <code>[ro.debuggable]: [1]</code>
+</li>
+<li><code>adb shell</code></li>
+<li><code>ls -ld /data/misc/media</code>
+<br />
+<p>
+Confirm that the output is:
+</p>
+<pre>
+drwx------ media media ... media
+</pre>
+<br />
+<p>
+If the directory does not exist, create it as follows:
+</p>
+<pre>
+mkdir /data/misc/media
+chown media:media /data/misc/media
+</pre>
+</li>
+<li><code>echo af.tee=# &gt; /data/local.prop</code>
+<br />Where the <code>af.tee</code> value is a number described below.
+</li>
+<li><code>chmod 644 /data/local.prop</code></li>
+<li><code>reboot</code></li>
+</ol>
+
+<h4>Values for <code>af.tee</code> property</h4>
+
+<p>
+The value of <code>af.tee</code> is a number between 0 and 7, expressing
+the sum of several bits, one per feature.
+See the code at <code>AudioFlinger::AudioFlinger()</code> in <code>AudioFlinger.cpp</code>
+for an explanation of each bit, but briefly:
+</p>
+<ul>
+<li>1 = input</li>
+<li>2 = FastMixer output</li>
+<li>4 = per-track AudioRecord and AudioTrack</li>
+</ul>
+
+<p>
+There is no bit for the deep buffer or normal mixer outputs yet,
+but you can get similar results using <code>4</code>. For example,
+<code>af.tee=7</code> (1 + 2 + 4) enables all three features listed above.
+</p>
+
+<h3>Test and acquire data</h3>
+
+<ol>
+<li>Run your audio test.</li>
+<li><code>adb shell dumpsys media.audio_flinger</code></li>
+<li>Look for a line in the dumpsys output such as this:<br />
+<code>tee copied to /data/misc/media/20131010101147_2.wav</code>
+<br />This is a PCM .wav file.
+</li>
+<li><code>adb pull</code> any <code>/data/misc/media/*.wav</code> files of interest;
+note that track-specific dump filenames do not appear in the dumpsys output,
+but are still saved to <code>/data/misc/media</code> upon track closure.
+</li>
+<li>Review the dump files for privacy concerns before sharing with others.</li>
+</ol>
+
+<h4>Suggestions</h4>
+
+<p>Try these ideas for more useful results:</p>
+
+<ul>
+<li>Disable touch sounds and key clicks.</li>
+<li>Maximize all volumes.</li>
+<li>Disable apps that make sound or record from microphone,
+if they are not of interest to your test.
+</li>
+<li>Track-specific dumps are only saved when the track is closed;
+you may need to force close an app in order to dump its track-specific data.</li>
+<li>Do the <code>dumpsys</code> immediately after your test;
+there is a limited amount of recording space available.</li>
+<li>To make sure you don't lose your dump files,
+upload them to your host periodically.
+Only a limited number of dump files are preserved;
+older dumps are removed after that limit is reached.</li>
+</ul>
+
+<h3>Restore</h3>
+
+<p>
+As noted above, the tee sink feature should not be left enabled.
+Restore your build and device as follows:
+</p>
+<ol>
+<li>Revert the source code changes to <code>Configuration.h</code>.</li>
+<li>Re-build <code>libaudioflinger.so</code>.</li>
+<li>Push or sync the restored <code>libaudioflinger.so</code>
+to the device's <code>/system/lib</code>.
+</li>
+<li><code>adb shell</code></li>
+<li><code>rm /data/local.prop</code></li>
+<li><code>rm /data/misc/media/*.wav</code></li>
+<li><code>reboot</code></li>
+</ol>
+
+<h2 id="mediaLog">media.log</h2>
+
+<h3>ALOGx macros</h3>
+
+<p>
+The standard Java language logging API in Android SDK is
+<a href="http://developer.android.com/reference/android/util/Log.html">android.util.Log</a>.
+</p>
+
+<p>
+The corresponding C language API in Android NDK is
+<code>__android_log_print</code>
+declared in <code>&lt;android/log.h&gt;</code>.
+</p>
+
+<p>
+Within the native portion of Android framework, we
+prefer macros named <code>ALOGE</code>, <code>ALOGW</code>,
+<code>ALOGI</code>, <code>ALOGV</code>, etc.  They are declared in
+<code>&lt;utils/Log.h&gt;</code>, and for the purposes of this article
+we'll collectively refer to them as <code>ALOGx</code>.
+</p>
+
+<p>
+All of these APIs are easy to use and well understood, so they are pervasive
+throughout the Android platform.  In particular the <code>mediaserver</code>
+process, which includes the AudioFlinger sound server, uses
+<code>ALOGx</code> extensively.
+</p>
+
+<p>
+Nevertheless, there are some limitations to <code>ALOGx</code> and friends:
+</p>
+
+<ul>
+<li>
+They are susceptible to "log spam": the log buffer is a shared resource
+so it can easily overflow due to unrelated log entries, resulting in
+missed information.  The <code>ALOGV</code> variant is disabled at
+compile-time by default.  But of course even it can result in log spam
+if it is enabled.
+</li>
+<li>
+The underlying kernel system calls could block, possibly resulting in
+priority inversion and consequently measurement disturbances and
+inaccuracies.  This is of
+special concern to time-critical threads such as <code>FastMixer</code>.
+</li>
+<li>
+If a particular log is disabled to reduce log spam,
+then any information that would have been captured by that log is lost.
+It is not possible to enable a specific log retroactively,
+<i>after</i> it becomes clear that the log would have been interesting.
+</li>
+</ul>
+
+<h3>NBLOG, media.log, and MediaLogService</h3>
+
+<p>
+The <code>NBLOG</code> APIs and associated <code>media.log</code>
+process and <code>MediaLogService</code>
+service together form a newer logging system for media, and are specifically
+designed to address the issues above.  We will loosely use the term
+"media.log" to refer to all three, but strictly speaking <code>NBLOG</code> is the
+C++ logging API, <code>media.log</code> is a Linux process name, and <code>MediaLogService</code>
+is an Android binder service for examining the logs.
+</p>
+
+<p>
+A <code>media.log</code> "timeline" is a series
+of log entries whose relative ordering is preserved.
+By convention, each thread should use its own timeline.
+</p>
+
+<h3>Benefits</h3>
+
+<p>
+The benefits of the <code>media.log</code> system are that it:
+</p>
+<ul>
+<li>Doesn't spam the main log unless and until it is needed.</li>
+<li>Can be examined even when <code>mediaserver</code> crashes or hangs.</li>
+<li>Is non-blocking per timeline.</li>
+<li>Offers less disturbance to performance.
+(Of course no form of logging is completely non-intrusive.)
+</li>
+</ul>
+
+<h3>Architecture</h3>
+
+<p>
+The diagram below shows the relationship of the <code>mediaserver</code> process
+and the <code>init</code> process, before <code>media.log</code> is introduced:
+</p>
+<img src="images/medialog_before.png" alt="Architecture before media.log" />
+<p>
+Notable points:
+</p>
+<ul>
+<li><code>init</code> forks and execs <code>mediaserver</code>.</li>
+<li><code>init</code> detects the death of <code>mediaserver</code>, and re-forks as necessary.</li>
+<li><code>ALOGx</code> logging is not shown.</li>
+</ul>
+
+<p>
+The diagram below shows the new relationship of the components,
+after <code>media.log</code> is added to the architecture:
+</p>
+<img src="images/medialog_after.png" alt="Architecture after media.log" />
+<p>
+Important changes:
+</p>
+
+<ul>
+
+<li>
+Clients use <code>NBLOG</code> API to construct log entries and append them to
+a circular buffer in shared memory.
+</li>
+
+<li>
+<code>MediaLogService</code> can dump the contents of the circular buffer at any time.
+</li>
+
+<li>
+The circular buffer is designed in such a way that any corruption of the
+shared memory will not crash <code>MediaLogService</code>, and it will still be able
+to dump as much of the buffer as is not affected by the corruption.
+</li>
+
+<li>
+The circular buffer is non-blocking and lock-free for both writing
+new entries and reading existing entries.
+</li>
+
+<li>
+No kernel system calls are required to write to or read from the circular buffer
+(other than optional timestamps).
+</li>
+
+</ul>
+
+<h4>Where to use</h4>
+
+<p>
+As of Android 4.4, there are only a few log points in AudioFlinger
+that use the <code>media.log</code> system.  Though the new APIs are not as
+easy to use as <code>ALOGx</code>, they are not extremely difficult either.
+We encourage you to learn the new logging system for those
+occasions when it is indispensable.
+In particular, it is recommended for AudioFlinger threads that must
+run frequently, periodically, and without blocking such as the
+<code>FastMixer</code> thread.
+</p>
+
+<h3>How to use</h3>
+
+<h4>Add logs</h4>
+
+<p>
+First, you need to add logs to your code.
+</p>
+
+<p>
+In the <code>FastMixer</code> thread, use code such as this:
+</p>
+<pre>
+logWriter->log("string");
+logWriter->logf("format", parameters);
+logWriter->logTimestamp();
+</pre>
+<p>
+As this <code>NBLog</code> timeline is used only by the <code>FastMixer</code> thread,
+there is no need for mutual exclusion.
+</p>
+
+<p>
+In other AudioFlinger threads, use <code>mNBLogWriter</code>:
+</p>
+<pre>
+mNBLogWriter->log("string");
+mNBLogWriter->logf("format", parameters);
+mNBLogWriter->logTimestamp();
+</pre>
+<p>
+For threads other than <code>FastMixer</code>,
+the thread's <code>NBLog</code> timeline can be used by both the thread itself, and
+by binder operations.  <code>NBLog::Writer</code> does not provide any
+implicit mutual exclusion per timeline, so be sure that all logs occur
+within a context where the thread's mutex <code>mLock</code> is held.
+</p>
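+
+<p>
+For example, a log call on such a shared timeline could be wrapped as
+follows (a sketch only; the message and <code>trackId</code> variable are
+hypothetical):
+</p>
+<pre>
+{
+    // Hold the thread's mutex for the duration of the log call, since
+    // NBLog::Writer provides no implicit mutual exclusion per timeline.
+    Mutex::Autolock _l(mLock);
+    mNBLogWriter->logf("track %d underrun", trackId);
+}
+</pre>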
+
+<p>
+After you have added the logs, re-build AudioFlinger.
+</p>
+
+<p class="caution"><strong>Caution:</strong>
+A separate <code>NBLog::Writer</code> timeline is required per thread,
+to ensure thread safety, since timelines omit mutexes by design.  If you
+want more than one thread to use the same timeline, you can protect it with an
+existing mutex (as described above for <code>mLock</code>).  Or you can
+use the <code>NBLog::LockedWriter</code> wrapper instead of <code>NBLog::Writer</code>.
+However, this negates a prime benefit of this API: its non-blocking
+behavior.
+</p>
+
+<p>
+The full <code>NBLog</code> API is at <code>frameworks/av/include/media/nbaio/NBLog.h</code>.
+</p>
+
+<h4>Enable media.log</h4>
+
+<p>
+<code>media.log</code> is disabled by default. It is active only when property
+<code>ro.test_harness</code> is <code>1</code>.  You can enable it by:
+</p>
+
+<pre>
+adb root
+adb shell
+echo ro.test_harness=1 > /data/local.prop
+chmod 644 /data/local.prop
+reboot
+</pre>
+
+<p>
+The <code>adb shell</code> connection is lost during the reboot, so reconnect:
+</p>
+<pre>
+adb shell
+</pre>
+
+<p>
+The command <code>ps media</code> will now show two processes:
+</p>
+<ul>
+<li>media.log</li>
+<li>mediaserver</li>
+</ul>
+<p>
+Note the process ID of <code>mediaserver</code> for later.
+</p>
+
+<h4>Displaying the timelines</h4>
+
+<p>
+You can manually request a log dump at any time.
+This command shows logs from all the active and recent timelines, and then clears them:
+</p>
+<pre>
+dumpsys media.log
+</pre>
+
+<p>
+Note that by design timelines are independent,
+and there is no facility for merging timelines.
+</p>
+
+<h4>Recovering logs after mediaserver death</h4>
+
+<p>
+Now try killing the <code>mediaserver</code> process: <code>kill -9 #</code>, where # is
+the process ID you noted earlier.  You should see a dump from <code>media.log</code>
+in the main <code>logcat</code>, showing all the logs leading up to the crash.
+</p>
diff --git a/src/devices/audio/funplug.jd b/src/devices/audio/funplug.jd
new file mode 100644
index 0000000..9200556
--- /dev/null
+++ b/src/devices/audio/funplug.jd
@@ -0,0 +1,53 @@
+page.title=FunPlug Audio Dongle
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+The diagram and photo below show an audio loopback
+<a href="http://en.wikipedia.org/wiki/Dongle">dongle</a>
+for the
+<a href="http://en.wikipedia.org/wiki/Phone_connector_(audio)">headset connector</a>
+that we call the "FunPlug."
+The Chrome hardware team designed this circuit and plug for functional testing;
+however, it has many other uses too.  The Android audio team uses it to measure
+<a href="latency_measure.html#measuringRoundTrip">round-trip audio latency</a>,
+via the Larsen effect (feedback loop).
+</p>
+
+<h2 id="funplugCircuit">FunPlug circuit</h2>
+
+<img src="images/funplug_circuit.png" alt="FunPlug circuit" />
+
+<p>
+To ensure that the output signal will not overload the microphone input,
+we attenuate it by about 20 dB.
+The resistor loads are there to tell the microphone polarity switch that
+it is a US/CTIA pinout plug.
+</p>
+
+<h2 id="funplugAssembled">FunPlug assembled</h2>
+
+<img src="images/funplug_assembled.jpg" alt="FunPlug fully assembled" />
+
diff --git a/src/devices/audio/images/audio_hal.png b/src/devices/audio/images/audio_hal.png
new file mode 100644
index 0000000..273ac81
--- /dev/null
+++ b/src/devices/audio/images/audio_hal.png
Binary files differ
diff --git a/src/devices/audio/images/breadboard.jpg b/src/devices/audio/images/breadboard.jpg
new file mode 100644
index 0000000..f98422f
--- /dev/null
+++ b/src/devices/audio/images/breadboard.jpg
Binary files differ
diff --git a/src/devices/audio/images/display.jpg b/src/devices/audio/images/display.jpg
new file mode 100644
index 0000000..9748ec3
--- /dev/null
+++ b/src/devices/audio/images/display.jpg
Binary files differ
diff --git a/src/devices/audio/images/pcb.jpg b/src/devices/audio/images/pcb.jpg
new file mode 100644
index 0000000..10a2789
--- /dev/null
+++ b/src/devices/audio/images/pcb.jpg
Binary files differ
diff --git a/src/devices/audio/implement.jd b/src/devices/audio/implement.jd
new file mode 100644
index 0000000..28f06b7
--- /dev/null
+++ b/src/devices/audio/implement.jd
@@ -0,0 +1,296 @@
+page.title=Audio
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>This page explains how to implement the audio Hardware Abstraction Layer (HAL) and configure the
+shared library.</p>
+
+<h2 id="implementing">Implementing the HAL</h2>
+
+<p>The audio HAL is composed of three different interfaces that you must implement:</p>
+
+<ul>
+<li><code>hardware/libhardware/include/hardware/audio.h</code> - represents the main functions
+of an audio device.</li>
+<li><code>hardware/libhardware/include/hardware/audio_policy.h</code> - represents the audio policy
+manager, which handles things like audio routing and volume control policies.</li>
+<li><code>hardware/libhardware/include/hardware/audio_effect.h</code> - represents effects that can
+be applied to audio such as downmixing, echo, or noise suppression.</li>
+</ul>
+
+<p>For an example, refer to the implementation for the Galaxy Nexus at
+<code>device/samsung/tuna/audio</code>.</p>
+
+<p>In addition to implementing the HAL, you need to create a
+<code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio/audio_policy.conf</code> file that
+declares the audio devices present on your product. For an example, see the file for the Galaxy
+Nexus audio hardware in <code>device/samsung/tuna/audio/audio_policy.conf</code>. Also, see the
+<code>system/core/include/system/audio.h</code> and
+<code>system/core/include/system/audio_policy.h</code> header files for a reference of the
+properties that you can define.</p>
+
+<h3 id="multichannel">Multi-channel support</h3>
+
+<p>If your hardware and driver support multichannel audio via HDMI, you can output the audio
+stream directly to the audio hardware. This bypasses the AudioFlinger mixer so the stream doesn't
+get downmixed to two channels.</p>
+
+<p>The audio HAL must expose whether an output stream profile supports multichannel audio
+capabilities. If the HAL exposes its capabilities, the default policy manager allows multichannel
+playback over HDMI.</p>
+
+<p>For more implementation details, see <code>device/samsung/tuna/audio/audio_hw.c</code> in
+the Android 4.1 release.</p>
+
+<p>To specify that your product contains a multichannel audio output, edit the
+<code>audio_policy.conf</code> file to describe the multichannel output for your product. The
+following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the
+audio policy manager queries the actual channel masks supported by the HDMI sink after connection.
+You can also specify a static channel mask like <code>AUDIO_CHANNEL_OUT_5POINT1</code>.</p>
+
+<pre>
+audio_hw_modules {
+  primary {
+    outputs {
+        ...
+        hdmi {
+          sampling_rates 44100|48000
+          channel_masks dynamic
+          formats AUDIO_FORMAT_PCM_16_BIT
+          devices AUDIO_DEVICE_OUT_AUX_DIGITAL
+          flags AUDIO_OUTPUT_FLAG_DIRECT
+        }
+        ...
+    }
+    ...
+  }
+  ...
+}
+</pre>
+
+<p>AudioFlinger's mixer downmixes the content to stereo automatically when sent to an audio device
+that does not support multichannel audio.</p>
+
+<h3 id="codecs">Media codecs</h3>
+
+<p>Ensure the audio codecs your hardware and drivers support are properly declared for your
+product. For details on declaring supported codecs, see <a href="{@docRoot}devices/media.html#expose">Exposing Codecs
+to the Framework</a>.</p>
+
+<h2 id="configuring">Configuring the shared library</h2>
+
+<p>You need to package the HAL implementation into a shared library and copy it to the appropriate
+location by creating an <code>Android.mk</code> file:</p>
+
+<ol>
+<li>Create a <code>device/&lt;company_name&gt;/&lt;device_name&gt;/audio</code> directory to
+contain your library's source files.</li>
+<li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the Makefile
+contains the following line:
+<pre>
+LOCAL_MODULE := audio.primary.&lt;device_name&gt;
+</pre>
+
+<p>Note that your library must be named <code>audio.primary.&lt;device_name&gt;.so</code> so
+that Android can correctly load the library. The "<code>primary</code>" portion of this filename
+indicates that this shared library is for the primary audio hardware located on the device. The
+module names <code>audio.a2dp.&lt;device_name&gt;</code> and
+<code>audio.usb.&lt;device_name&gt;</code> are also available for Bluetooth and USB audio
+interfaces. Here is an example of an <code>Android.mk</code> from the Galaxy Nexus audio hardware:
+</p>
+
+<pre>
+LOCAL_PATH := $(call my-dir)
+
+include $(CLEAR_VARS)
+
+LOCAL_MODULE := audio.primary.tuna
+LOCAL_MODULE_RELATIVE_PATH := hw
+LOCAL_SRC_FILES := audio_hw.c ril_interface.c
+LOCAL_C_INCLUDES += \
+        external/tinyalsa/include \
+        $(call include-path-for, audio-utils) \
+        $(call include-path-for, audio-effects)
+LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
+LOCAL_MODULE_TAGS := optional
+
+include $(BUILD_SHARED_LIBRARY)
+</pre>
+
+</li>
+
+<li>If your product supports low latency audio as specified by the Android CDD, copy the
+corresponding XML feature file into your product. For example, in your product's
+<code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile:
+
+<pre>
+PRODUCT_COPY_FILES := ...
+
+PRODUCT_COPY_FILES += \
+frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml \
+</pre>
+
+</li>
+
+<li>In your product's <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code>
+Makefile, add a rule that copies the <code>audio_policy.conf</code> file you created earlier to
+the <code>system/etc/</code> directory. For example:
+
+<pre>
+PRODUCT_COPY_FILES += \
+        device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
+</pre>
+
+</li>
+
+<li>Declare the shared modules of your audio HAL that are required by your product in the
+product's <code>device/&lt;company_name&gt;/&lt;device_name&gt;/device.mk</code> Makefile. For
+example, the Galaxy Nexus requires the primary and Bluetooth audio HAL modules:
+
+<pre>
+PRODUCT_PACKAGES += \
+        audio.primary.tuna \
+        audio.a2dp.default
+</pre>
+
+</li>
+</ol>
+
+<h2 id="preprocessing">Audio pre-processing effects</h2>
+
+<p>The Android platform provides audio effects on supported devices in the
+<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx
+</a> package, which is available for developers to access. For example, on the Nexus 10, the
+following pre-processing effects are supported:</p>
+
+<ul>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">
+Acoustic Echo Cancellation</a></li>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">
+Automatic Gain Control</a></li>
+<li>
+<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">
+Noise Suppression</a></li>
+</ul>
+
+
+<p>Pre-processing effects are paired with the use case mode in which the pre-processing is
+requested. In Android app development, a use case is referred to as an <code>AudioSource</code>,
+and app developers request the <code>AudioSource</code> abstraction instead of the actual audio
+hardware device. The Android Audio Policy Manager maps an <code>AudioSource</code> to the actual
+hardware with <code>AudioPolicyManagerBase::getDeviceForInputSource(int inputSource)</code>. The
+following sources are exposed to developers:</p>
+
+<ul>
+<li><code>android.media.MediaRecorder.AudioSource.CAMCORDER</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_CALL</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_UPLINK</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.MIC</code></li>
+<li><code>android.media.MediaRecorder.AudioSource.DEFAULT</code></li>
+</ul>
+
+<p>The default pre-processing effects applied for each <code>AudioSource</code> are specified in
+the <code>/system/etc/audio_effects.conf</code> file. To specify your own default effects for every
+<code>AudioSource</code>, create a <code>/system/vendor/etc/audio_effects.conf</code> file and
+specify the pre-processing effects to turn on. For an example, see the implementation for the Nexus
+10 in <code>device/samsung/manta/audio_effects.conf</code>. AudioEffect instances acquire and
+release a session when created and destroyed, enabling the effects (such as the Loudness Enhancer)
+to persist throughout the duration of the session. </p>
+
+<p class="warning"><strong>Warning:</strong> For the <code>VOICE_RECOGNITION</code> use case, do
+not enable the noise suppression pre-processing effect. It should not be turned on by default when
+recording from this audio source, and you should not enable it in your own audio_effects.conf file.
+Turning on the effect by default will cause the device to fail the <a
+href="{@docRoot}compatibility/index.html">compatibility requirement</a>, regardless of whether it
+is on by default due to the configuration file or the audio HAL implementation's default behavior.</p>
+
+<p>The following example enables pre-processing for the VoIP <code>AudioSource</code> and Camcorder
+<code>AudioSource</code>. By declaring the <code>AudioSource</code> configuration in this manner,
+the framework automatically requests the use of those effects from the audio HAL.</p>
+
+<pre>
+pre_processing {
+   voice_communication {
+       aec {}
+       ns {}
+   }
+   camcorder {
+       agc {}
+   }
+}
+</pre>
+
+<h3 id="tuning">Source tuning</h3>
+
+<p>For <code>AudioSource</code> tuning, there are no explicit requirements on audio gain or audio
+processing with the exception of voice recognition (<code>VOICE_RECOGNITION</code>).</p>
+
+<p>The requirements for voice recognition are:</p>
+
+<ul>
+<li>"flat" frequency response (+/- 3dB) from 100Hz to 4kHz</li>
+<li>close-talk config: 90dB SPL reads RMS of 2500 (16-bit samples)</li>
+<li>level tracks linearly from -18dB to +12dB relative to 90dB SPL</li>
+<li>THD &lt; 1% (90dB SPL in 100 to 4000Hz range)</li>
+<li>8kHz sampling rate (anti-aliasing)</li>
+<li>Effects/pre-processing must be disabled by default</li>
+</ul>
+
+<p>Examples of tuning different effects for different sources are:</p>
+
+<ul>
+<li>Noise Suppressor
+<ul>
+<li>Tuned for wind noise suppression for <code>CAMCORDER</code></li>
+<li>Tuned for stationary noise suppression for <code>VOICE_COMMUNICATION</code></li>
+</ul>
+</li>
+<li>Automatic Gain Control
+<ul>
+<li>Tuned for close-talk for <code>VOICE_COMMUNICATION</code> and main phone mic</li>
+<li>Tuned for far-talk for <code>CAMCORDER</code></li>
+</ul>
+</li>
+</ul>
+
+<h3 id="more">More information</h3>
+
+<p>For more information, see:</p>
+
+<ul>
+<li>Android documentation for the
+<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">
+audiofx package</a></li>
+
+<li>Android documentation for
+<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">
+Noise Suppression audio effect</a></li>
+
+<li><code>device/samsung/manta/audio_effects.conf</code> file for the Nexus 10</li>
+</ul>
diff --git a/src/devices/audio/index.jd b/src/devices/audio/index.jd
new file mode 100644
index 0000000..dd3109b
--- /dev/null
+++ b/src/devices/audio/index.jd
@@ -0,0 +1,107 @@
+page.title=Audio
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<p>
+  Android's audio Hardware Abstraction Layer (HAL) connects the higher-level, audio-specific
+  framework APIs in <a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a>
+  to the underlying audio driver and hardware. 
+</p>
+
+<p>
+  The following figure and list describe how audio functionality is implemented and the relevant
+  source code that is involved in the implementation:
+</p>
+<p>
+  <img src="images/audio_hal.png" alt="Audio architecture" />
+</p>
+<dl>
+  <dt>
+    Application framework
+  </dt>
+  <dd>
+    At the application framework level is the app code, which utilizes the
+    <a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a>
+    APIs to interact with the audio hardware. Internally, this code calls corresponding JNI glue
+    classes to access the native code that interacts with the audio hardware.
+  </dd>
+  <dt>
+    JNI
+  </dt>
+  <dd>
+    The JNI code associated with <a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a> is located in the
+    <code>frameworks/base/core/jni/</code> and <code>frameworks/base/media/jni</code> directories.
+    This code calls the lower level native code to obtain access to the audio hardware.
+  </dd>
+  <dt>
+    Native framework
+  </dt>
+  <dd>
+    The native framework is defined in <code>frameworks/av/media/libmedia</code> and provides a
+    native equivalent to the <a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a> package. The native framework calls the Binder
+    IPC proxies to obtain access to audio-specific services of the media server.
+  </dd>
+  <dt>
+    Binder IPC
+  </dt>
+  <dd>
+    The Binder IPC proxies facilitate communication over process boundaries. They are located in
+    the <code>frameworks/av/media/libmedia</code> directory and begin with the letter "I".
+  </dd>
+  <dt>
+    Media Server
+  </dt>
+  <dd>
+    The audio services in the media server, located in
+    <code>frameworks/av/services/audioflinger</code>, are the actual code that interacts with your
+    HAL implementations.
+  </dd>
+  <dt>
+    HAL
+  </dt>
+  <dd>
+    The HAL defines the standard interface that audio services call into
+    and that you must implement to have your audio hardware function correctly. The audio HAL
+    interfaces are located in
+<code>hardware/libhardware/include/hardware</code>. See <a
+href="{@docRoot}devices/halref/audio_8h_source.html">&lt;hardware/audio.h&gt;</a> for additional details.
+  </dd>
+  <dt>
+    Kernel Driver
+  </dt>
+  <dd>
+    The audio driver interacts with the hardware and your implementation of the HAL. You can choose
+    to use ALSA, OSS, or a custom driver of your own at this level. The HAL is driver-agnostic.
+    <p>
+  <strong>Note:</strong> If you do choose ALSA, we recommend using <code>external/tinyalsa</code>
+  for the user portion of the driver because of its compatible licensing (the standard user-mode
+  library is GPL-licensed).
+</p>
+  </dd>
+</dl>
+<p>
+Not shown: Android native audio based on OpenSL ES.
+This API is exposed as part of
+<a href="https://developer.android.com/tools/sdk/ndk/index.html">Android NDK</a>,
+and is at the same architecture level as
+<a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a>.
+</p>
+
+<p>
+   See the rest of the pages within the Audio section for implementation
+   instructions and ways to improve performance.
+</p>
diff --git a/src/devices/audio/latency.jd b/src/devices/audio/latency.jd
new file mode 100644
index 0000000..815f5b9
--- /dev/null
+++ b/src/devices/audio/latency.jd
@@ -0,0 +1,144 @@
+page.title=Audio Latency
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Audio latency is the time delay as an audio signal passes through a system.
+  For a complete description of audio latency for the purposes of Android
+  compatibility, see <em>Section 5.5 Audio Latency</em>
+  in the <a href="{@docRoot}compatibility/index.html">Android CDD</a>.
+  See <a href="latency_design.html">Design For Reduced Latency</a> for an 
+  understanding of Android's audio latency-reduction efforts.
+</p>
+
+<p>
+  This page focuses on the contributors to output latency,
+  but a similar discussion applies to input latency.
+</p>
+<p>
+  Assuming the analog circuitry does not contribute significantly, the major
+  surface-level contributors to audio latency are the following:
+</p>
+
+<ul>
+  <li>Application</li>
+  <li>Total number of buffers in pipeline</li>
+  <li>Size of each buffer, in frames</li>
+  <li>Additional latency after the app processor, such as from a DSP</li>
+</ul>
+
+<p>
+  As accurate as the above list of contributors may be, it is also misleading.
+  The reason is that buffer count and buffer size are more of an
+  <em>effect</em> than a <em>cause</em>.  What usually happens is that
+  a given buffer scheme is implemented and tested, but during testing, an audio
+  underrun is heard as a "click" or "pop."  To compensate, the
+  system designer then increases buffer sizes or buffer counts.
+  This has the desired result of eliminating the underruns, but it also
+  has the undesired side effect of increasing latency.
+</p>
+
+<p>
+  A better approach is to understand the causes of the
+  underruns and then correct those.  This eliminates the
+  audible artifacts and may permit even smaller or fewer buffers
+  and thus reduce latency.
+</p>
+
+<p>
+  In our experience, the most common causes of underruns include:
+</p>
+<ul>
+  <li>Linux CFS (Completely Fair Scheduler)</li>
+  <li>high-priority threads with SCHED_FIFO scheduling</li>
+  <li>long scheduling latency</li>
+  <li>long-running interrupt handlers</li>
+  <li>long interrupt disable time</li>
+</ul>
+
+<h3>Linux CFS and SCHED_FIFO scheduling</h3>
+<p>
+  The Linux CFS is designed to be fair to competing workloads sharing a common CPU
+  resource. This fairness is represented by a per-thread <em>nice</em> parameter.
+  The nice value ranges from -20 (least nice, or most CPU time allocated)
+  to 19 (nicest, or least CPU time allocated). In general, all threads with a given
+  nice value receive approximately equal CPU time and threads with a
+  numerically lower nice value should expect to
+  receive more CPU time. However, CFS is "fair" only over relatively long
+  periods of observation. Over short-term observation windows,
+  CFS may allocate the CPU resource in unexpected ways. For example, it
+  may move the CPU from a thread with numerically low niceness
+  to a thread with numerically high niceness.  In the case of audio,
+  this can result in an underrun.
+</p>
+
+<p>
+  The obvious solution is to avoid CFS for high-performance audio
+  threads. Beginning with Android 4.1, such threads now use the
+  <code>SCHED_FIFO</code> scheduling policy rather than the <code>SCHED_NORMAL</code> (also called
+  <code>SCHED_OTHER</code>) scheduling policy implemented by CFS.
+</p>
+
+<p>
+  Though the high-performance audio threads now use <code>SCHED_FIFO</code>, they
+  can still be preempted by other, higher-priority <code>SCHED_FIFO</code> threads.
+  These are typically kernel worker threads, but there may also be a few
+  non-audio user threads with policy <code>SCHED_FIFO</code>. The available <code>SCHED_FIFO</code>
+  priorities range from 1 to 99.  The audio threads run at priority
+  2 or 3.  This leaves priority 1 available for lower priority threads,
+  and priorities 4 to 99 for higher priority threads.  We recommend 
+  you use priority 1 whenever possible, and reserve priorities 4 to 99 for
+  those threads that are guaranteed to complete within a bounded amount
+  of time and are known to not interfere with scheduling of audio threads.
+</p>
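+
+<p>
+  As an illustration only, the following minimal C sketch shows how a thread
+  could be placed under <code>SCHED_FIFO</code> at priority 2 using the standard
+  POSIX call. Within Android itself, the framework's fast audio threads obtain
+  this policy through privileged internal mechanisms rather than a direct call
+  like this:
+</p>
+
+<pre>
+#include &lt;pthread.h&gt;
+#include &lt;sched.h&gt;
+
+/* Illustrative sketch: request SCHED_FIFO at priority 2 for the calling
+ * thread. Requires appropriate privileges; returns 0 on success. */
+static int use_fifo_priority(void)
+{
+    struct sched_param param;
+    param.sched_priority = 2;   /* audio threads run at 2 or 3; see text */
+    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &amp;param);
+}
+</pre>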
+
+<h3>Scheduling latency</h3>
+<p>
+  Scheduling latency is the time between when a thread becomes
+  ready to run, and when the resulting context switch completes so that the
+  thread actually runs on a CPU. The shorter the latency the better, and 
+  anything over two milliseconds causes problems for audio. Long scheduling
+  latency is most likely to occur during mode transitions, such as
+  bringing up or shutting down a CPU, switching between a security kernel
+  and the normal kernel, switching from full power to low-power mode,
+  or adjusting the CPU clock frequency and voltage.
+</p>
+
+<h3>Interrupts</h3>
+<p>
+  In many designs, CPU 0 services all external interrupts.  So a
+  long-running interrupt handler may delay other interrupts, in particular
+  audio direct memory access (DMA) completion interrupts. Design interrupt handlers
+  to finish quickly and defer any lengthy work to a thread (preferably
+  a CFS thread or <code>SCHED_FIFO</code> thread of priority 1).
+</p>
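+
+<p>
+  For example, the Linux workqueue facility lets a driver keep the interrupt
+  handler minimal and defer the lengthy work to a kernel worker thread, which
+  runs under CFS. The following kernel-side sketch uses hypothetical names and
+  is not taken from any particular driver:
+</p>
+
+<pre>
+#include &lt;linux/interrupt.h&gt;
+#include &lt;linux/workqueue.h&gt;
+
+static struct work_struct my_work;
+
+/* Bottom half: lengthy processing runs later in a CFS kworker thread. */
+static void my_work_handler(struct work_struct *work)
+{
+    /* lengthy processing here */
+}
+
+/* Top half: acknowledge the hardware and defer the rest. */
+static irqreturn_t my_irq_handler(int irq, void *dev)
+{
+    /* no long loops or spin waits here */
+    schedule_work(&amp;my_work);
+    return IRQ_HANDLED;
+}
+
+/* In the driver's probe function: */
+INIT_WORK(&amp;my_work, my_work_handler);
+ret = request_irq(irq, my_irq_handler, 0, "my_device", dev);
+</pre>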
+
+<p>
+  Equivalently, disabling interrupts on CPU 0 for a long period
+  has the same result of delaying the servicing of audio interrupts.
+  Long interrupt disable times typically happen while waiting for a kernel
+  <i>spin lock</i>.  Review these spin locks to ensure that
+  their lock hold times are bounded.
+</p>
+
diff --git a/src/devices/audio/latency_design.jd b/src/devices/audio/latency_design.jd
new file mode 100644
index 0000000..a2ad236
--- /dev/null
+++ b/src/devices/audio/latency_design.jd
@@ -0,0 +1,230 @@
+page.title=Design For Reduced Latency
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+The Android 4.1 release introduced internal framework changes for
+a lower latency audio output path. There were no public client API
+or HAL API changes. This document describes the initial design,
+which is expected to evolve over time.
+Having a good understanding of this design should help device OEM and
+SoC vendors implement the design correctly on their particular devices
+and chipsets.  This article is not intended for application developers.
+</p>
+
+<h2 id="trackCreation">Track Creation</h2>
+
+<p>
+The client can optionally set bit <code>AUDIO_OUTPUT_FLAG_FAST</code> in the
+<code>audio_output_flags_t</code> parameter of the AudioTrack C++ constructor or
+<code>AudioTrack::set()</code>. Currently the only clients that do so are:
+</p>
+
+<ul>
+<li>OpenSL ES</li>
+<li>SoundPool</li>
+<li>ToneGenerator</li>
+</ul>
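+
+<p>
+  For illustration, the following C sketch shows the usual route by which an
+  application reaches the fast path indirectly through OpenSL ES. The buffer
+  queue data locator is what leads the underlying AudioTrack to request
+  <code>AUDIO_OUTPUT_FLAG_FAST</code>; the 48 kHz mono PCM format here is an
+  assumption and must match the device's native output sample rate for the
+  request to be granted (see the factors below). Error checks and buffer
+  callbacks are omitted:
+</p>
+
+<pre>
+#include &lt;SLES/OpenSLES.h&gt;
+#include &lt;SLES/OpenSLES_Android.h&gt;
+
+/* Hedged sketch: create a buffer queue audio player. */
+void create_fast_player(void)
+{
+    SLObjectItf engine, outputMix, player;
+    SLEngineItf engineItf;
+
+    slCreateEngine(&amp;engine, 0, NULL, 0, NULL, NULL);
+    (*engine)-&gt;Realize(engine, SL_BOOLEAN_FALSE);
+    (*engine)-&gt;GetInterface(engine, SL_IID_ENGINE, &amp;engineItf);
+    (*engineItf)-&gt;CreateOutputMix(engineItf, &amp;outputMix, 0, NULL, NULL);
+    (*outputMix)-&gt;Realize(outputMix, SL_BOOLEAN_FALSE);
+
+    /* Buffer queue source; PCM format assumed to match the native rate. */
+    SLDataLocator_AndroidSimpleBufferQueue locBufq =
+            { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
+    SLDataFormat_PCM formatPcm = { SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_48,
+            SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
+            SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN };
+    SLDataSource audioSrc = { &amp;locBufq, &amp;formatPcm };
+    SLDataLocator_OutputMix locOutmix = { SL_DATALOCATOR_OUTPUTMIX, outputMix };
+    SLDataSink audioSnk = { &amp;locOutmix, NULL };
+
+    const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
+    const SLboolean req[] = { SL_BOOLEAN_TRUE };
+    (*engineItf)-&gt;CreateAudioPlayer(engineItf, &amp;player, &amp;audioSrc,
+            &amp;audioSnk, 1, ids, req);
+    (*player)-&gt;Realize(player, SL_BOOLEAN_FALSE);
+    /* ...then obtain the buffer queue and play interfaces and enqueue data */
+}
+</pre>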
+
+<p>
+The AudioTrack C++ implementation reviews the <code>AUDIO_OUTPUT_FLAG_FAST</code>
+request and may deny the request at the client level. If it
+decides to pass the request on, it does so using the <code>TRACK_FAST</code> bit of
+the <code>track_flags_t</code> parameter of the <code>IAudioTrack</code> factory method
+<code>IAudioFlinger::createTrack()</code>.
+</p>
+
+<p>
+The AudioFlinger audio server reviews the <code>TRACK_FAST</code> request and may
+deny the request at the server level. It informs the client
+whether or not the request was accepted, via bit <code>CBLK_FAST</code> of the
+shared memory control block.
+</p>
+
+<p>
+The factors that impact the decision include:
+</p>
+
+<ul>
+<li>Presence of a fast mixer thread for this output (see below)</li>
+<li>Track sample rate</li>
+<li>Presence of a client thread to execute callback handlers for this track</li>
+<li>Track buffer size</li>
+<li>Available fast track slots (see below)</li>
+</ul>
+
+<p>
+If the client's request was accepted, it is called a "fast track."
+Otherwise it's called a "normal track."
+</p>
+
+<h2 id="mixerThreads">Mixer Threads</h2>
+
+<p>
+At the time AudioFlinger creates a normal mixer thread, it decides
+whether or not to also create a fast mixer thread. Neither the normal
+mixer nor the fast mixer is associated with a particular track;
+rather, each is associated with a set of tracks. There is always a normal mixer
+thread. The fast mixer thread, if it exists, is subservient to the
+normal mixer thread and acts under its control.
+</p>
+
+<h3 id="fastMixer">Fast mixer</h3>
+
+<h4>Features</h4>
+
+<p>
+The fast mixer thread provides these features:
+</p>
+
+<ul>
+<li>Mixing of the normal mixer's sub-mix and up to 7 client fast tracks</li>
+<li>Per track attenuation</li>
+</ul>
+
+<p>
+Omitted features:
+</p>
+
+<ul>
+<li>Per track sample rate conversion</li>
+<li>Per track effects</li>
+<li>Per mix effects</li>
+</ul>
+
+<h4>Period</h4>
+
+<p>
+The fast mixer runs periodically, with a recommended period of two
+to three milliseconds (ms), or slightly higher if needed for scheduling stability.
+This number was chosen so that, accounting for the complete
+buffer pipeline, the total latency is on the order of 10 ms. Smaller
+values are possible but may result in increased power consumption
+and chance of glitches depending on CPU scheduling predictability.
+Larger values are possible, up to 20 ms, but result in degraded
+total latency and so should be avoided.
+</p>
+
+<h4>Scheduling</h4>
+
+<p>
+The fast mixer runs at elevated <code>SCHED_FIFO</code> priority. It needs very
+little CPU time, but must run often and with low scheduling jitter.
+Running too late will result in glitches due to underrun. Running
+too early will result in glitches due to pulling from a fast track
+before the track has provided data.
+</p>
+
+<h4>Blocking</h4>
+
+<p>
+Ideally the fast mixer thread never blocks, other than at HAL
+<code>write()</code>. Other occurrences of blocking within the fast mixer are
+considered bugs. In particular, mutexes are avoided.
+</p>
+
+<h4>Relationship to other components</h4>
+
+<p>
+The fast mixer has little direct interaction with clients. In
+particular, it does not see binder-level operations, but it does
+access the client's shared memory control block.
+</p>
+
+<p>
+The fast mixer receives commands from the normal mixer via a state queue.
+</p>
+
+<p>
+Other than pulling track data, interaction with clients is via the normal mixer.
+</p>
+
+<p>
+The fast mixer's primary sink is the audio HAL.
+</p>
+
+<h3 id="normalMixer">Normal mixer</h3>
+
+<h4>Features</h4>
+
+<p>
+All features are enabled:
+</p>
+
+<ul>
+<li>Up to 32 tracks</li>
+<li>Per track attenuation</li>
+<li>Per track sample rate conversion</li>
+<li>Effects processing</li>
+</ul>
+
+<h4>Period</h4>
+
+<p>
+The period is computed to be the first integral multiple of the
+fast mixer period that is >= 20 ms.
+</p>
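+
+<p>
+  In other words, the normal mixer period is the fast mixer period rounded up
+  to at least 20 ms: a 4 ms fast mixer period yields a 20 ms normal mixer
+  period, while a 3 ms fast mixer period yields 21 ms. A sketch of the
+  arithmetic, assuming an integral period in milliseconds:
+</p>
+
+<pre>
+/* First integral multiple of the fast mixer period that is &gt;= 20 ms. */
+unsigned normal_period_ms(unsigned fast_ms)
+{
+    return ((20 + fast_ms - 1) / fast_ms) * fast_ms;
+}
+</pre>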
+
+<h4>Scheduling</h4>
+
+<p>
+The normal mixer runs at elevated <code>SCHED_OTHER</code> priority.
+</p>
+
+<h4>Blocking</h4>
+
+<p>
+The normal mixer is permitted to block, and often does so at various
+mutexes as well as at a blocking pipe to write its sub-mix.
+</p>
+
+<h4>Relationship to other components</h4>
+
+<p>
+The normal mixer interacts extensively with the outside world,
+including binder threads, audio policy manager, fast mixer thread,
+and client tracks.
+</p>
+
+<p>
+The normal mixer's sink is a blocking pipe to the fast mixer's track 0.
+</p>
+
+<h2 id="flags">Flags</h2>
+
+<p>
+The <code>AUDIO_OUTPUT_FLAG_FAST</code> bit is a hint; there is no guarantee the
+request will be fulfilled.
+</p>
+
+<p>
+<code>AUDIO_OUTPUT_FLAG_FAST</code> is a client-level concept. It does not appear
+on the server side.
+</p>
+
+<p>
+<code>TRACK_FAST</code> is a client-to-server concept.
+</p>
+
diff --git a/src/devices/audio/latency_measure.jd b/src/devices/audio/latency_measure.jd
new file mode 100644
index 0000000..cd9df27
--- /dev/null
+++ b/src/devices/audio/latency_measure.jd
@@ -0,0 +1,219 @@
+page.title=Audio Latency
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+  This page describes common methods for measuring input and output latency.
+</p>
+
+
+
+<h2 id="measuringOutput">Measuring Output Latency</h2>
+
+<p>
+  There are several techniques available to measure output latency,
+  with varying degrees of accuracy and ease of running, described below. Also
+see the <a href="testing_circuit.html">Testing circuit</a> for an example test environment.
+</p>
+
+<h3>LED and oscilloscope test</h3>
+<p>
+This test measures latency in relation to the device's LED indicator.
+If your production device does not have an LED, you can install the
+  LED on a prototype form factor device. For even better accuracy
+  on prototype devices with exposed circuitry, connect one
+  oscilloscope probe to the LED directly to bypass the light
+  sensor latency.
+  </p>
+
+<p>
+  If you cannot install an LED on either your production or prototype device,
+  try the following workarounds:
+</p>
+
+<ul>
+  <li>Use a General Purpose Input/Output (GPIO) pin for the same purpose.</li>
+  <li>Use JTAG or another debugging port.</li>
+  <li>Use the screen backlight. This might be risky as the
+  backlight may have a non-negligible latency, and can contribute to
+  an inaccurate latency reading.
+  </li>
+</ul>
+
+<p>To conduct this test:</p>
+
+<ol>
+  <li>Run an app that periodically pulses the LED at
+  the same time it outputs audio. 
+
+  <p class="note"><b>Note:</b> To get useful results, it is crucial to use the correct
+  APIs in the test app so that you're exercising the fast audio output path.
+  See <a href="latency_design.html">Design For Reduced Latency</a> for
+  background.
+  </p>
+  </li>
+  <li>Place a light sensor next to the LED.</li>
+  <li>Connect the probes of a dual-channel oscilloscope to both the wired headphone
+  jack (line output) and light sensor.</li>
+  <li>Use the oscilloscope to measure
+  the time difference between observing the line output signal versus the light
+  sensor signal.</li>
+</ol>
+
+  <p>The difference in time is the approximate audio output latency,
+  assuming that the LED latency and light sensor latency are both zero.
+  Typically, the LED and light sensor each have a relatively low latency
+  on the order of one millisecond or less, which is low enough
+  to ignore.</p>
+
+<h2 id="measuringRoundTrip">Measuring Round-Trip Latency</h2>
+
+<p>
+  Round-trip latency is the sum of output latency and input latency.
+</p>
+
+<h3>Larsen test</h3>
+<p>
+  One of the easiest latency tests is an audio feedback
+  (Larsen effect) test. This provides a crude measure of combined output
+  and input latency by timing an impulse response loop. By itself, this test
+  is not suited to detailed analysis, but it can be useful for calibrating
+  other tests and for establishing an upper bound.</p>
+
+<p>To conduct this test:</p>
+<ol>
+  <li>Run an app that captures audio from the microphone and immediately plays the
+  captured data back over the speaker.</li>
+  <li>Create a sound externally,
+  such as tapping a pencil by the microphone. This noise generates a feedback loop.
+  Alternatively, one can inject an impulse into the loop using software.</li>
+  <li>Measure the time between feedback pulses to get the sum of the output latency, input latency, and application overhead.</li>
+</ol>
+
+  <p>This method does not break down the
+  component times, which matters when the output latency
+  and input latency need to be known separately. So this method is not recommended for measuring
+  precise output latency or input latency values in isolation, but might be useful
+  for establishing rough estimates.</p>
+
+<h3>FunPlug</h3>
+
+<p>
+  The <a href="funplug.html">FunPlug</a> dongle is handy for
+  measuring round-trip latency over the headset connector.
+  The image below demonstrates the result of injecting an impulse
+  into the loop once, and then allowing the feedback loop to oscillate.
+  The period of the oscillations is the round-trip latency.
+  The specific device, software release, and
+  test conditions are not specified here.  The results shown
+  should not be extrapolated.
+</p>
+
+<img src="images/round_trip.png" alt="round-trip measurement" />
+
+<h2 id="measuringInput">Measuring Input Latency</h2>
+
+<p>
+  Input latency is more difficult to measure than output latency. The following
+  tests might help.
+</p>
+
+<p>
+One approach is to first determine the output latency
+  using the LED and oscilloscope method and then use
+  the audio feedback (Larsen) test to determine the sum of output
+  latency and input latency. The difference between these two
+  measurements is the input latency.
+</p>
+
+<p>
+  Another technique is to use a GPIO pin on a prototype device.
+  Externally, pulse a GPIO input at the same time that you present
+  an audio signal to the device.  Run an app that compares the
+  difference in arrival times of the GPIO signal and audio data.
+</p>
+
+<h2 id="reducing">Reducing Latency</h2>
+
+<p>To achieve low audio latency, pay special attention throughout the
+system to scheduling, interrupt handling, power management, and device
+driver design. Your goal is to prevent any part of the platform from
+blocking a <code>SCHED_FIFO</code> audio thread for more than a couple
+of milliseconds. By adopting such a systematic approach, you can reduce
+audio latency and get the side benefit of more predictable performance
+overall.
+</p>
+
+
+ <p>
+  Audio underruns, when they do occur, are often detectable only under certain
+  conditions or only at the transitions. Try stressing the system by launching
+  new apps and scrolling quickly through various displays. But be aware
+  that some test conditions are so stressful as to be beyond the design
+  goals. For example, taking a bugreport puts such an enormous load on the
+  system that it may be acceptable to have an underrun in that case.
+</p>
+
+<p>
+  When testing for underruns:
+</p>
+  <ul>
+  <li>Configure any DSP after the app processor so that it adds
+  minimal latency.</li>
+  <li>Run tests under different conditions
+  such as having the screen on or off, USB plugged in or unplugged,
+  WiFi on or off, Bluetooth on or off, and telephony and data radios
+  on or off.</li>
+  <li>Select relatively quiet music that you know well and in which underruns
+  are easy to hear.</li>
+  <li>Use wired headphones for extra sensitivity.</li>
+  <li>Give yourself breaks so that you don't experience "ear fatigue."</li>
+  </ul>
+
+<p>
+  Once you find and fix the underlying causes of underruns, reduce
+  the buffer counts and sizes to take advantage of your work.
+  Eagerly reducing buffer counts and sizes <i>before</i>
+  analyzing and fixing the causes of underruns only
+  results in frustration.
+</p>
+
+<h3 id="tools">Tools</h3>
+<p>
+  <code>systrace</code> is an excellent general-purpose tool
+  for diagnosing system-level performance glitches.
+</p>
+
+<p>
+  The output of <code>dumpsys media.audio_flinger</code> also contains a
+  useful section called "simple moving statistics." This has a summary
+  of the variability of elapsed times for each audio mix and I/O cycle.
+  Ideally, all the time measurements should be about equal to the mean or
+  nominal cycle time. If you see a very low minimum or high maximum, this is an
+  indication of a problem, likely a high scheduling latency or interrupt
+  disable time. The <i>tail</i> part of the output is especially helpful,
+  as it highlights the variability beyond +/- 3 standard deviations.
+</p>
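+
+<p>
+  For example, to capture this output for offline review:
+</p>
+
+<pre>
+adb shell dumpsys media.audio_flinger &gt; af_dump.txt
+</pre>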
diff --git a/src/devices/audio/src.jd b/src/devices/audio/src.jd
new file mode 100644
index 0000000..6238770
--- /dev/null
+++ b/src/devices/audio/src.jd
@@ -0,0 +1,117 @@
+page.title=Sample Rate Conversion
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<h2 id="srcIntro">Introduction</h2>
+
+<p>
+See the Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Resampling_(audio)">Resampling (audio)</a>
+for a generic definition of sample rate conversion, also known as "resampling."
+The remainder of this article describes resampling within Android.
+</p>
+
+<p>
+Sample rate conversion is the process of changing a
+stream of discrete samples at one sample rate to another stream at
+another sample rate.  A sample rate converter, or resampler, is a module
+that implements sample rate conversion.  With respect to the resampler,
+the original stream is called the source signal, and the resampled stream is
+the sink signal.
+</p>
+
+<p>
+Resamplers are used in several places in Android.
+For example, an MP3 file may be encoded at a 44.1 kHz sample rate but
+needs to be played back on an Android device that supports 48 kHz audio
+internally. In that case, a resampler is used to upsample the MP3
+output audio from the 44.1 kHz source sample rate to the 48 kHz sink sample rate
+used within the Android device.
+</p>
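+
+<p>
+  Conceptually, a rational resampler interpolates by a factor L and decimates
+  by a factor M, where L/M equals the ratio of sink to source sample rates.
+  The small C program below, included purely for illustration, reduces the
+  44.1 kHz to 48 kHz conversion to lowest terms: interpolate by 160, decimate
+  by 147.
+</p>
+
+<pre>
+#include &lt;stdio.h&gt;
+
+static unsigned gcd(unsigned a, unsigned b)
+{
+    while (b != 0) {
+        unsigned t = a % b;
+        a = b;
+        b = t;
+    }
+    return a;
+}
+
+int main(void)
+{
+    unsigned source = 44100, sink = 48000;
+    unsigned g = gcd(source, sink);   /* g == 300 */
+    /* prints: interpolate by 160, decimate by 147 */
+    printf("interpolate by %u, decimate by %u\n", sink / g, source / g);
+    return 0;
+}
+</pre>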
+
+<p>
+The characteristics of a resampler can be expressed using metrics, including:
+</p>
+
+<ul>
+<li>degree of preservation of the overall amplitude of the signal</li>
+<li>degree of preservation of the frequency bandwidth of the signal,
+    subject to limitations of the sink sample rate</li>
+<li>overall latency through the resampler</li>
+<li>consistent phase and group delay with respect to frequency</li>
+<li>computational complexity, expressed in CPU cycles or power draw</li>
+<li>permitted ratios of source and sink sample rates</li>
+<li>ability to dynamically change sample rate ratios</li>
+<li>which digital audio sample formats are supported</li>
+</ul>
+
+<p>
+The ideal resampler would exactly preserve the source signal's amplitude
+and frequency bandwidth (subject to limitations of the sink
+sample rate), have minimal and consistent delay, have minimal
+computational complexity, permit arbitrary and dynamic conversion ratios,
+and support all common digital audio sample formats.
+In practice, ideal resamplers do not exist, and actual resamplers are
+a compromise among these characteristics.
+For example, the goals of ideal quality conflict with short delay and low complexity.
+</p>
+
+<p>
+Android includes a variety of audio resamplers, so that appropriate
+compromises can be made depending on the application use case and load.
+Section <a href="#srcResamplers">Resampler implementations</a>
+below lists the available resamplers, summarizes their characteristics,
+and identifies where they should typically be used.
+</p>
+
+<h2 id="srcResamplers">Resampler implementations</h2>
+
+<p>
+Available resampler implementations change frequently,
+and may be customized by OEMs.
+As of Android 4.4, the default resamplers,
+in descending order of signal distortion and ascending order of
+computational complexity, are:
+</p>
+
+<ul>
+<li>linear</li>
+<li>cubic</li>
+<li>sinc with original coefficients</li>
+<li>sinc with revised coefficients</li>
+</ul>
+
+<p>
+In general, the sinc resamplers are more appropriate for higher-quality
+music playback, and the other resamplers should be reserved for cases
+where quality is less important (an example might be "key clicks" or similar).
+</p>
+
+<p>
+The specific resampler implementation selected depends on
+the use case, load, and the value of system property
+<code>af.resampler.quality</code>.  For details,
+consult the audio resampler source code in AudioFlinger.
+</p>
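+
+<p>
+  As a rough example, on builds where the property is writable you can
+  experiment by setting it from a shell and restarting the media server so
+  that newly created tracks pick it up. The numeric value and the exact
+  restart procedure below are illustrative only and vary by release; consult
+  the AudioResampler source for the actual mapping:
+</p>
+
+<pre>
+adb shell setprop af.resampler.quality 3
+adb shell stop media
+adb shell start media
+</pre>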
diff --git a/src/devices/audio/terminology.jd b/src/devices/audio/terminology.jd
new file mode 100644
index 0000000..b1b12b6
--- /dev/null
+++ b/src/devices/audio/terminology.jd
@@ -0,0 +1,649 @@
+page.title=Audio Terminology
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+This document provides a glossary of audio-related terminology, including
+a list of widely used, generic terms and a list of terms that are specific
+to Android.
+</p>
+
+<h2 id="genericTerm">Generic Terms</h2>
+
+<p>
+These are audio terms that are widely used, with their conventional meanings.
+</p>
+
+<h3 id="digitalAudioTerms">Digital Audio</h3>
+
+<dl>
+
+<dt>acoustics</dt>
+<dd>
+The study of the mechanical properties of sound, for example how the
+physical placement of transducers such as speakers and microphones on
+a device affects perceived audio quality.
+</dd>
+
+<dt>attenuation</dt>
+<dd>
+A multiplicative factor less than or equal to 1.0,
+applied to an audio signal to decrease the signal level.
+Compare to "gain".
+</dd>
+
+<dt>bits per sample or bit depth</dt>
+<dd>
+Number of bits of information per sample.
+</dd>
+
+<dt>channel</dt>
+<dd>
+A single stream of audio information, usually corresponding to one
+location of recording or playback.
+</dd>
+
+<dt>downmixing</dt>
+<dd>
+To decrease the number of channels, e.g. from stereo to mono, or from 5.1 to stereo.
+This can be accomplished by dropping some channels, mixing channels, or more advanced signal processing.
+Simple mixing without attenuation or limiting has the potential for overflow and clipping.
+Compare to "upmixing".
+</dd>
+
+<dt>duck</dt>
+<dd>
+To temporarily reduce the volume of one stream, when another stream
+becomes active.  For example, if music is playing and a notification arrives,
+then the music stream could be ducked while the notification plays.
+Compare to "mute".
+</dd>
+
+<dt>frame</dt>
+<dd>
+A set of samples, one per channel, at a point in time.
+</dd>
+
+<dt>frames per buffer</dt>
+<dd>
+The number of frames handed from one module to the next at once;
+for example the audio HAL interface uses this concept.
+</dd>
+
+<dt>gain</dt>
+<dd>
+A multiplicative factor greater than or equal to 1.0,
+applied to an audio signal to increase the signal level.
+Compare to "attenuation".
+</dd>
+
+<dt>Hz</dt>
+<dd>
+The units for sample rate or frame rate.
+</dd>
+
+<dt>latency</dt>
+<dd>
+Time delay as a signal passes through a system.
+</dd>
+
+<dt>mono</dt>
+<dd>
+One channel.
+</dd>
+
+<dt>multichannel</dt>
+<dd>
+See "surround sound".
+Strictly, since stereo is more than one channel, it is also "multi" channel.
+But that usage would be confusing.
+</dd>
+
+<dt>mute</dt>
+<dd>
+To (temporarily) force volume to be zero, independently from the usual volume controls.
+</dd>
+
+<dt>PCM</dt>
+<dd>
+Pulse Code Modulation, the most common low-level encoding of digital audio.
+The audio signal is sampled at a regular interval, called the sample rate,
+and then quantized to discrete values within a particular range depending on the bit depth.
+For example, for 16-bit PCM, the sample values are integers between -32768 and +32767.
+</dd>
+
+<dt>ramp</dt>
+<dd>
+To gradually increase or decrease the level of a particular audio parameter,
+for example volume or the strength of an effect.
+A volume ramp is commonly applied when pausing and resuming music, to avoid a hard audible transition.
+</dd>
+
+<dt>sample</dt>
+<dd>
+A number representing the audio value for a single channel at a point in time.
+</dd>
+
+<dt>sample rate or frame rate</dt>
+<dd>
+Number of frames per second;
+note that "frame rate" is thus more accurate,
+but "sample rate" is conventionally used to mean "frame rate."
+</dd>
+
+<dt>sonification</dt>
+<dd>
+The use of sound to express feedback or information,
+for example touch sounds and keyboard sounds.
+</dd>
+
+<dt>stereo</dt>
+<dd>
+Two channels.
+</dd>
+
+<dt>stereo widening</dt>
+<dd>
+An effect applied to a stereo signal, to make another stereo signal which sounds fuller and richer.
+The effect can also be applied to a mono signal, in which case it is a type of upmixing.
+</dd>
+
+<dt>surround sound</dt>
+<dd>
+Various techniques for increasing the ability of a listener to perceive
+sound position beyond stereo left and right.
+</dd>
+
+<dt>upmixing</dt>
+<dd>
+To increase the number of channels, e.g. from mono to stereo, or from stereo to surround sound.
+This can be accomplished by duplication, panning, or more advanced signal processing.
+Compare to "downmixing".
+</dd>
+
+<dt>virtualizer</dt>
+<dd>
+An effect that attempts to spatialize audio channels, such as trying to
+simulate more speakers, or give the illusion that various sound sources have position.
+</dd>
+
+<dt>volume</dt>
+<dd>
+Loudness, the subjective strength of an audio signal.
+</dd>
+
+</dl>
+
+<h3 id="hardwareTerms">Hardware and Accessories</h3>
+
+<p>
+These terms are related to audio hardware and accessories.
+</p>
+
+<h4 id="interDeviceTerms">Inter-device interconnect</h4>
+
+<p>
+These technologies connect audio and video components between devices,
+and are readily visible at the external connectors.  The HAL implementor
+may need to be aware of these, as well as the end user.
+</p>
+
+<dl>
+
+<dt>Bluetooth</dt>
+<dd>
+A short range wireless technology.
+The major audio-related
+<a href="http://en.wikipedia.org/wiki/Bluetooth_profile">Bluetooth profiles</a>
+and
+<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>
+are described at these Wikipedia articles:
+
+<ul>
+
+<li><a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a>
+for music
+</li>
+
+<li><a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a>
+for telephony
+</li>
+
+</ul>
+
+</dd>
+
+<dt>DisplayPort</dt>
+<dd>
+Digital display interface by VESA.
+</dd>
+
+<dt>HDMI</dt>
+<dd>
+High-Definition Multimedia Interface, an interface for transferring
+audio and video data.  For mobile devices, either a micro-HDMI (type D) or MHL connector is used.
+</dd>
+
+<dt>MHL</dt>
+<dd>
+Mobile High-Definition Link is a mobile audio/video interface, often
+over micro-USB connector.
+</dd>
+
+<dt>phone connector</dt>
+<dd>
+A mini or sub-mini phone connector
+connects a device to wired headphones, a headset, or a line-level amplifier.
+</dd>
+
+<dt>SlimPort</dt>
+<dd>
+An adapter from micro-USB to HDMI.
+</dd>
+
+<dt>S/PDIF</dt>
+<dd>
+Sony/Philips Digital Interface Format is an interconnect for uncompressed PCM.
+See Wikipedia article <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
+</dd>
+
+<dt>USB</dt>
+<dd>
+Universal Serial Bus.
+See Wikipedia article <a href="http://en.wikipedia.org/wiki/USB">USB</a>.
+</dd>
+
+</dl>
+
+<h4 id="intraDeviceTerms">Intra-device interconnect</h4>
+
+<p>
+These technologies connect internal audio components within a given
+device, and are not visible without disassembling the device.  The HAL
+implementor may need to be aware of these, but not the end user.
+</p>
+
+<p>See these Wikipedia articles:</p>
+<ul>
+<li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
+<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a></li>
+<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a></li>
+<li><a href="http://en.wikipedia.org/wiki/McASP">McASP</a></li>
+<li><a href="http://en.wikipedia.org/wiki/SLIMbus">SLIMbus</a></li>
+<li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus">SPI</a></li>
+</ul>
+
+<h3 id="signalTerms">Audio Signal Path</h3>
+
+<p>
+These terms are related to the signal path that audio data follows from
+an application to the transducer, or vice-versa.
+</p>
+
+<dl>
+
+<dt>ADC</dt>
+<dd>
+Analog to digital converter, a module that converts an analog signal
+(continuous in both time and amplitude) to a digital signal (discrete in
+both time and amplitude).  Conceptually, an ADC consists of a periodic
+sample-and-hold followed by a quantizer, although it does not have to
+be implemented that way.  An ADC is usually preceded by a low-pass filter
+to remove any high frequency components that are not representable using
+the desired sample rate.  See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital converter</a>.
+</dd>
+
+<dt>AP</dt>
+<dd>
+Application processor, the main general-purpose computer on a mobile device.
+</dd>
+
+<dt>codec</dt>
+<dd>
+Coder-decoder, a module that encodes and/or decodes an audio signal
+from one representation to another.  Typically this is analog to PCM, or PCM to analog.
+Strictly, the term "codec" is reserved for modules that both encode and decode;
+however, it can also more loosely refer to only one of these.
+See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Audio_codec">Audio codec</a>.
+</dd>
+
+<dt>DAC</dt>
+<dd>
+Digital to analog converter, a module that converts a digital signal
+(discrete in both time and amplitude) to an analog signal
+(continuous in both time and amplitude).  A DAC is usually followed by
+a low-pass filter to remove any high frequency components introduced
+by digital quantization.
+See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog converter</a>.
+</dd>
+
+<dt>DSP</dt>
+<dd>
+Digital Signal Processor, an optional component which is typically located
+after the application processor (for output), or before the application processor (for input).
+The primary purpose of a DSP is to off-load the application processor,
+and provide signal processing features at a lower power cost.
+</dd>
+
+<dt>PDM</dt>
+<dd>
+Pulse-density modulation
+is a form of modulation used to represent an analog signal by a digital signal,
+where the relative density of 1s versus 0s indicates the signal level.
+It is commonly used by digital to analog converters.
+See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density modulation</a>.
+</dd>
+
+<dt>PWM</dt>
+<dd>
+Pulse-width modulation
+is a form of modulation used to represent an analog signal by a digital signal,
+where the relative width of a digital pulse indicates the signal level.
+It is commonly used by analog to digital converters.
+See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width modulation</a>.
+</dd>
+
+</dl>
+
+<h3 id="srcTerms">Sample Rate Conversion</h3>
+
+<dl>
+
+<dt>downsample</dt>
+<dd>To resample, where sink sample rate &lt; source sample rate.</dd>
+
+<dt>Nyquist frequency</dt>
+<dd>
+The Nyquist frequency, equal to 1/2 of a given sample rate, is the
+maximum frequency component that can be represented by a discretized
+signal at that sample rate.  For example, the human hearing range is
+typically assumed to extend up to approximately 20 kHz, and so a digital
+audio signal must have a sample rate of at least 40 kHz to represent that
+range.  In practice, sample rates of 44.1 kHz and 48 kHz are commonly
+used, with Nyquist frequencies of 22.05 kHz and 24 kHz respectively.
+See
+<a href="http://en.wikipedia.org/wiki/Nyquist_frequency">Nyquist frequency</a>
+and
+<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>
+for more information.
+</dd>
+
+<dt>resampler</dt>
+<dd>Synonym for sample rate converter.</dd>
+
+<dt>resampling</dt>
+<dd>The process of converting sample rate.</dd>
+
+<dt>sample rate converter</dt>
+<dd>A module that resamples.</dd>
+
+<dt>sink</dt>
+<dd>The output of a resampler.</dd>
+
+<dt>source</dt>
+<dd>The input to a resampler.</dd>
+
+<dt>upsample</dt>
+<dd>To resample, where sink sample rate &gt; source sample rate.</dd>
+
+</dl>
+
+<h2 id="androidSpecificTerms">Android-Specific Terms</h2>
+
+<p>
+These are terms specific to the Android audio framework, or that
+may have a special meaning within Android beyond their general meaning.
+</p>
+
+<dl>
+
+<dt>ALSA</dt>
+<dd>
+Advanced Linux Sound Architecture.  As the name suggests, it is an audio
+framework primarily for Linux, but it has influenced other systems.
+See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>
+for the general definition. As used within Android, it refers primarily
+to the kernel audio framework and drivers, not to the user-mode API. See
+tinyalsa.
+</dd>
+
+<dt>AudioEffect</dt>
+<dd>
+An API and implementation framework for output (post-processing) effects
+and input (pre-processing) effects.  The API is defined at
+<a href="http://developer.android.com/reference/android/media/audiofx/AudioEffect.html">android.media.audiofx.AudioEffect</a>.
+</dd>
+
+<dt>AudioFlinger</dt>
+<dd>
+The sound server implementation for Android. AudioFlinger
+runs within the mediaserver process. See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>
+for the generic definition.
+</dd>
+
+<dt>audio focus</dt>
+<dd>
+A set of APIs for managing audio interactions across multiple independent apps.
+See <a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing Audio
+Focus</a> and the focus-related methods and constants of
+<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
+</dd>
+
+<dt>AudioMixer</dt>
+<dd>
+The module within AudioFlinger responsible for
+combining multiple tracks and applying attenuation
+(volume) and certain effects. The Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a>
+may be useful for understanding the generic
+concept. But that article describes a mixer more as a hardware device
+or a software application, rather than a software module within a system.
+</dd>
+
+<dt>audio policy</dt>
+<dd>
+Service responsible for all actions that require a policy decision
+to be made first, such as opening a new I/O stream, re-routing after a
+change, and stream volume management.
+</dd>
+
+<dt>AudioRecord</dt>
+<dd>
+The primary low-level client API for receiving data from an audio
+input device such as microphone.  The data is usually in pulse-code modulation
+(PCM) format.
+The API is defined at
+<a href="http://developer.android.com/reference/android/media/AudioRecord.html">android.media.AudioRecord</a>.
+</dd>
+
+<dt>AudioResampler</dt>
+<dd>
+The module within AudioFlinger responsible for
+<a href="src.html">sample rate conversion</a>.
+</dd>
+
+<dt>AudioTrack</dt>
+<dd>
+The primary low-level client API for sending data to an audio output
+device such as a speaker.  The data is usually in PCM format.
+The API is defined at
+<a href="http://developer.android.com/reference/android/media/AudioTrack.html">android.media.AudioTrack</a>.
+</dd>
+
+<dt>client</dt>
+<dd>
+Usually same as application or app, but sometimes the "client" of
+AudioFlinger is actually a thread running within the mediaserver system
+process. An example of that is when playing media that is decoded by a
+MediaPlayer object.
+</dd>
+
+<dt>HAL</dt>
+<dd>
+Hardware Abstraction Layer. HAL is a generic term in Android. With
+respect to audio, it is a layer between AudioFlinger and the kernel
+device driver with a C API, which replaces the earlier C++ libaudio.
+</dd>
+
+<dt>FastMixer</dt>
+<dd>
+A thread within AudioFlinger that services lower latency "fast tracks"
+and drives the primary output device.
+</dd>
+
+<dt>fast track</dt>
+<dd>
+An AudioTrack or AudioRecord client with lower latency but fewer features, on some devices.
+</dd>
+
+<dt>MediaPlayer</dt>
+<dd>
+A higher-level client API than AudioTrack, for playing either encoded
+content, or content which includes multimedia audio and video tracks.
+</dd>
+
+<dt>media.log</dt>
+<dd>
+An AudioFlinger debugging feature, available in custom builds only,
+for logging audio events to a circular buffer where they can then be
+dumped retroactively when needed.
+</dd>
+
+<dt>mediaserver</dt>
+<dd>
+An Android system process that contains a number of media-related
+services, including AudioFlinger.
+</dd>
+
+<dt>NBAIO</dt>
+<dd>
+An abstraction for "non-blocking" audio input/output ports used within
+AudioFlinger. The name can be misleading, as some implementations of
+the NBAIO API actually do support blocking. The key implementations of
+NBAIO are for pipes of various kinds.
+</dd>
+
+<dt>normal mixer</dt>
+<dd>
+A thread within AudioFlinger that services most full-featured
+AudioTrack clients, and either directly drives an output device or feeds
+its sub-mix into FastMixer via a pipe.
+</dd>
+
+<dt>OpenSL ES</dt>
+<dd>
+An audio API standard by The Khronos Group. Android versions since
+API level 9 support a native audio API which is based on a subset of
+OpenSL ES 1.0.1.
+</dd>
+
+<dt>silent mode</dt>
+<dd>
+A user-settable feature to mute the phone ringer and notifications,
+without affecting media playback (music, videos, games) or alarms.
+</dd>
+
+<dt>SoundPool</dt>
+<dd>
+A higher-level client API than AudioTrack, used for playing sampled
+audio clips. It is useful for triggering UI feedback, game sounds, etc.
+The API is defined at
+<a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a>.
+</dd>
+
+<dt>Stagefright</dt>
+<dd>
+See <a href="{@docRoot}devices/media.html">Media</a>.
+</dd>
+
+<dt>StateQueue</dt>
+<dd>
+A module within AudioFlinger responsible for synchronizing state
+among threads. Whereas NBAIO is used to pass data, StateQueue is used
+to pass control information.
+</dd>
+
+<dt>strategy</dt>
+<dd>
+A grouping of stream types with similar behavior, used by the audio policy service.
+</dd>
+
+<dt>stream type</dt>
+<dd>
+An enumeration that expresses a use case for audio output.
+The audio policy implementation uses the stream type, along with other parameters,
+to determine volume and routing decisions.
+Specific stream types are listed at
+<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
+</dd>
+
+<dt>tee sink</dt>
+<dd>
+See the separate article on tee sink in
+<a href="debugging.html#teeSink">Audio Debugging</a>.
+</dd>
+
+<dt>tinyalsa</dt>
+<dd>
+A small user-mode API above the ALSA kernel interface, with a BSD license, recommended
+for use in HAL implementations.
+</dd>
+
+<dt>ToneGenerator</dt>
+<dd>
+A higher-level client API than AudioTrack, used for playing DTMF signals.
+See the Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone multi-frequency signaling</a>,
+and the API definition at
+<a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a>.
+</dd>
+
+<dt>track</dt>
+<dd>
+An audio stream, controlled by the AudioTrack or AudioRecord API.
+</dd>
+
+<dt>volume attenuation curve</dt>
+<dd>
+A device-specific mapping from a generic volume index to a particular attenuation factor
+for a given output.
+</dd>
+
+<dt>volume index</dt>
+<dd>
+A unitless integer that expresses the desired relative volume of a stream.
+The volume-related APIs of
+<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>
+operate in volume indices rather than absolute attenuation factors.
+</dd>
+
+</dl>
diff --git a/src/devices/audio/testing_circuit.jd b/src/devices/audio/testing_circuit.jd
new file mode 100644
index 0000000..040755d
--- /dev/null
+++ b/src/devices/audio/testing_circuit.jd
@@ -0,0 +1,91 @@
+page.title=Light Testing Circuit
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+The file <a href="http://developer.android.com/downloads/partner/audio/av_sync_board.zip">av_sync_board.zip</a>
+contains CAD files for an A/V sync and latency testing
+printed circuit board (PCB).
+The files include a fabrication drawing, EAGLE CAD, schematic, and BOM. See <a
+href="latency.html">Audio Latency</a> for recommended testing methods.
+</p>
+
+<p>
+This PCB
+can be used to help measure the time between flashing the device's
+notification LED or screen backlight and detecting an audio signal.
+When combined with a dual-channel oscilloscope and suitable test app,
+it can show the difference in time between detecting the light and audio.
+That assumes the LED or backlight response time and light detector's response time
+are negligible relative to the audio.
+</p>
+
+<p>
+This design is supplied "as is", and we are not responsible for any errors in the design.
+But if you have any suggestions for improvement, please post to the <a
+href="https://groups.google.com/forum/#!forum/android-porting">android-porting</a> group.
+</p>
+
+<p>
+Of course, this is not the only (or necessarily best) way to measure A/V sync and latency,
+and we would like to hear about your alternative methods, also at android-porting group.
+</p>
+
+<p>
+There are currently no compatibility requirements to use this particular PCB.
+We supply it to encourage your continued attention to audio performance.
+</p>
+
+<h2 id="images">Images</h2>
+
+<p>
+These photos show the circuit in action.
+</p>
+
+<img style="margin:1.5em auto" src="images/breadboard.jpg" alt="breadboard prototype" />
+<br />
+<center>Breadboard prototype</center>
+
+<img style="margin:1.5em auto" src="images/pcb.jpg" alt="an early run of the PCB" />
+<br />
+<center>An early run of the PCB</center>
+
+<img style="margin:1.5em auto" src="images/display.jpg" alt="example display" />
+<br />
+<center>Example display</center>
+<p>
+This image
+shows the scope display for an unspecified device, software release, and test conditions;
+the results are not typical and cannot be used to extrapolate to other situations.
+</p>
+
+<h2 id="video">Video</h2>
+
+<p>
+This <a href="http://www.youtube.com/watch?v=f95S2IILBJY">YouTube video</a>
+shows the breadboard version of the testing circuit in operation.
+Skip ahead to 1:00 to see the circuit.
+</p>
+
diff --git a/src/devices/audio/tv.jd b/src/devices/audio/tv.jd
new file mode 100644
index 0000000..8cd97b9
--- /dev/null
+++ b/src/devices/audio/tv.jd
@@ -0,0 +1,296 @@
+page.title=TV Audio
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>The TV Input Framework (TIF) manager works with the audio routing API to support flexible audio
+path changes. When a System on Chip (SoC) implements the TV hardware abstraction layer (HAL), each
+TV input (HDMI IN, Tuner, etc.) provides <code>TvInputHardwareInfo</code>, which specifies the AudioPort information for the audio type and address.</p>
+
+<ul>
+<li><b>Physical</b> audio input/output devices have a corresponding AudioPort.</li>
+<li><b>Software</b> audio output/input streams are represented as AudioMixPort (child class of
+AudioPort).</li>
+</ul>
+
+<p>The TIF then uses AudioPort information for the audio routing API.</p>
+
+<p><img src="images/ape_audio_tv_tif.png" alt="Android TV Input Framework (TIF)" />
+<p class="img-caption"><strong>Figure 1.</strong> TV Input Framework (TIF)</p>
+
+<h2 id="Requirements">Requirements</h2>
+
+<p>A SoC must implement the audio HAL with the following audio routing API support:</p>
+
+<table>
+<tbody>
+<tr>
+<th>Audio Ports</th>
+<td>
+<ul>
+<li>TV Audio Input has a corresponding audio source port implementation.</li>
+<li>TV Audio Output has a corresponding audio sink port implementation.</li>
+<li>Can create audio patch between any TV input audio port and any TV output audio port.</li>
+</ul>
+</td>
+</tr>
+<tr>
+<th>Default Input</th>
+<td>AudioRecord (created with DEFAULT input source) must seize <i>virtual null input source</i> for
+AUDIO_DEVICE_IN_DEFAULT acquisition on Android TV.</td>
+</tr>
+<tr>
+<th>Device Loopback</th>
+<td>Requires support for an AUDIO_DEVICE_IN_LOOPBACK input that is a complete mix of all
+the TV audio output (11 kHz 16-bit mono or 48 kHz 16-bit mono). Used only for audio capture.
+</td>
+</tr>
+</tbody>
+</table>
+
+
+<h2 id="Audio Devices">TV audio devices</h2>
+
+<p>Android supports the following audio devices for TV audio input/output.</p>
+
+<h4>system/core/include/system/audio.h</h4>
+
+<pre>
+/* output devices */
+AUDIO_DEVICE_OUT_AUX_DIGITAL  = 0x400,
+AUDIO_DEVICE_OUT_HDMI   = AUDIO_DEVICE_OUT_AUX_DIGITAL,
+/* HDMI Audio Return Channel */
+AUDIO_DEVICE_OUT_HDMI_ARC   = 0x40000,
+/* S/PDIF out */
+AUDIO_DEVICE_OUT_SPDIF    = 0x80000,
+/* input devices */
+AUDIO_DEVICE_IN_AUX_DIGITAL   = AUDIO_DEVICE_BIT_IN | 0x20,
+AUDIO_DEVICE_IN_HDMI      = AUDIO_DEVICE_IN_AUX_DIGITAL,
+/* TV tuner input */
+AUDIO_DEVICE_IN_TV_TUNER    = AUDIO_DEVICE_BIT_IN | 0x4000,
+/* S/PDIF in */
+AUDIO_DEVICE_IN_SPDIF   = AUDIO_DEVICE_BIT_IN | 0x10000,
+AUDIO_DEVICE_IN_LOOPBACK    = AUDIO_DEVICE_BIT_IN | 0x40000,
+</pre>
+
+
+<h2 id="HAL extension">Audio HAL extension</h2>
+
+<p>The Audio HAL extension for the audio routing API is defined by following:</p>
+
+<h4>system/core/include/system/audio.h</h4>
+
+<pre>
+/* audio port configuration structure used to specify a particular configuration of an audio port */
+struct audio_port_config {
+    audio_port_handle_t      id;           /* port unique ID */
+    audio_port_role_t        role;         /* sink or source */
+    audio_port_type_t        type;         /* device, mix ... */
+    unsigned int             config_mask;  /* e.g AUDIO_PORT_CONFIG_ALL */
+    unsigned int             sample_rate;  /* sampling rate in Hz */
+    audio_channel_mask_t     channel_mask; /* channel mask if applicable */
+    audio_format_t           format;       /* format if applicable */
+    struct audio_gain_config gain;         /* gain to apply if applicable */
+    union {
+        struct audio_port_config_device_ext  device;  /* device specific info */
+        struct audio_port_config_mix_ext     mix;     /* mix specific info */
+        struct audio_port_config_session_ext session; /* session specific info */
+    } ext;
+};
+struct audio_port {
+    audio_port_handle_t      id;                /* port unique ID */
+    audio_port_role_t        role;              /* sink or source */
+    audio_port_type_t        type;              /* device, mix ... */
+    unsigned int             num_sample_rates;  /* number of sampling rates in following array */
+    unsigned int             sample_rates[AUDIO_PORT_MAX_SAMPLING_RATES];
+    unsigned int             num_channel_masks; /* number of channel masks in following array */
+    audio_channel_mask_t     channel_masks[AUDIO_PORT_MAX_CHANNEL_MASKS];
+    unsigned int             num_formats;       /* number of formats in following array */
+    audio_format_t           formats[AUDIO_PORT_MAX_FORMATS];
+    unsigned int             num_gains;         /* number of gains in following array */
+    struct audio_gain        gains[AUDIO_PORT_MAX_GAINS];
+    struct audio_port_config active_config;     /* current audio port configuration */
+    union {
+        struct audio_port_device_ext  device;
+        struct audio_port_mix_ext     mix;
+        struct audio_port_session_ext session;
+    } ext;
+};
+</pre>
+
+<h4>hardware/libhardware/include/hardware/audio.h</h4>
+
+<pre>
+struct audio_hw_device {
+  :
+    /**
+     * Routing control
+     */
+
+    /* Creates an audio patch between several source and sink ports.
+     * The handle is allocated by the HAL and should be unique for this
+     * audio HAL module. */
+    int (*create_audio_patch)(struct audio_hw_device *dev,
+                               unsigned int num_sources,
+                               const struct audio_port_config *sources,
+                               unsigned int num_sinks,
+                               const struct audio_port_config *sinks,
+                               audio_patch_handle_t *handle);
+
+    /* Release an audio patch */
+    int (*release_audio_patch)(struct audio_hw_device *dev,
+                               audio_patch_handle_t handle);
+
+    /* Fills the list of supported attributes for a given audio port.
+     * As input, "port" contains the information (type, role, address etc...)
+     * needed by the HAL to identify the port.
+     * As output, "port" contains possible attributes (sampling rates, formats,
+     * channel masks, gain controllers...) for this port.
+     */
+    int (*get_audio_port)(struct audio_hw_device *dev,
+                          struct audio_port *port);
+
+    /* Set audio port configuration */
+    int (*set_audio_port_config)(struct audio_hw_device *dev,
+                         const struct audio_port_config *config);
+</pre>
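+
+<p>
+  The following hedged C sketch shows how these entry points fit together to
+  patch a TV tuner input port to a speaker output port. The port IDs are
+  hypothetical; in practice they are discovered through
+  <code>get_audio_port()</code>, and error handling is omitted:
+</p>
+
+<pre>
+/* Fragment assumed to run inside a HAL test harness; names are hypothetical. */
+struct audio_port_config source = {
+    .id   = tuner_port_id,     /* hypothetical; from get_audio_port() */
+    .role = AUDIO_PORT_ROLE_SOURCE,
+    .type = AUDIO_PORT_TYPE_DEVICE,
+};
+struct audio_port_config sink = {
+    .id   = speaker_port_id,   /* hypothetical; from get_audio_port() */
+    .role = AUDIO_PORT_ROLE_SINK,
+    .type = AUDIO_PORT_TYPE_DEVICE,
+};
+audio_patch_handle_t handle;
+
+int status = dev-&gt;create_audio_patch(dev, 1, &amp;source, 1, &amp;sink, &amp;handle);
+/* ... later, tear the patch down */
+if (status == 0) {
+    dev-&gt;release_audio_patch(dev, handle);
+}
+</pre>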
+
+<h2 id="Testing">Testing DEVICE_IN_LOOPBACK</h2>
+
+<p>To test DEVICE_IN_LOOPBACK for TV monitoring, use the following testing code. After running the
+test, the captured audio is saved to <code>/sdcard/record_loopback.raw</code>, where you can listen to
+it using <code>ffmpeg</code>.</p>
+
+<pre>
+&lt;uses-permission android:name="android.permission.MODIFY_AUDIO_ROUTING" /&gt;
+&lt;uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /&gt;
+
+   AudioRecord mRecorder;
+   Handler mHandler = new Handler();
+   int mMinBufferSize = AudioRecord.getMinBufferSize(RECORD_SAMPLING_RATE,
+           AudioFormat.CHANNEL_IN_MONO,
+           AudioFormat.ENCODING_PCM_16BIT);
+   static final int RECORD_SAMPLING_RATE = 48000;
+   public void doCapture() {
+       mRecorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, RECORD_SAMPLING_RATE,
+               AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mMinBufferSize * 10);
+       AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
+       ArrayList&lt;AudioPort&gt; audioPorts = new ArrayList&lt;AudioPort&gt;();
+       am.listAudioPorts(audioPorts);
+       AudioPortConfig srcPortConfig = null;
+       AudioPortConfig sinkPortConfig = null;
+       for (AudioPort audioPort : audioPorts) {
+           if (srcPortConfig == null
+                   && audioPort.role() == AudioPort.ROLE_SOURCE
+                   && audioPort instanceof AudioDevicePort) {
+               AudioDevicePort audioDevicePort = (AudioDevicePort) audioPort;
+               if (audioDevicePort.type() == AudioManager.DEVICE_IN_LOOPBACK) {
+                   srcPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_IN_DEFAULT,
+                           AudioFormat.ENCODING_DEFAULT, null);
+                   Log.d(LOG_TAG, "Found loopback audio source port : " + audioPort);
+               }
+           }
+           else if (sinkPortConfig == null
+                   && audioPort.role() == AudioPort.ROLE_SINK
+                   && audioPort instanceof AudioMixPort) {
+               sinkPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_OUT_DEFAULT,
+                       AudioFormat.ENCODING_DEFAULT, null);
+               Log.d(LOG_TAG, "Found recorder audio mix port : " + audioPort);
+           }
+       }
+       if (srcPortConfig != null && sinkPortConfig != null) {
+           AudioPatch[] patches = new AudioPatch[] { null };
+           int status = am.createAudioPatch(
+                   patches,
+                   new AudioPortConfig[] { srcPortConfig },
+                   new AudioPortConfig[] { sinkPortConfig });
+           Log.d(LOG_TAG, "Result of createAudioPatch(): " + status);
+       }
+       mRecorder.startRecording();
+       processAudioData();
+       mRecorder.stop();
+       mRecorder.release();
+   }
+   private void processAudioData() {
+       OutputStream rawFileStream = null;
+       byte data[] = new byte[mMinBufferSize];
+       try {
+           rawFileStream = new BufferedOutputStream(
+                   new FileOutputStream(new File("/sdcard/record_loopback.raw")));
+       } catch (FileNotFoundException e) {
+           Log.d(LOG_TAG, "Can't open file.", e);
+           return;  // nothing to record into
+       }
+       long startTimeMs = System.currentTimeMillis();
+       while (System.currentTimeMillis() - startTimeMs &lt; 5000) {
+           int nbytes = mRecorder.read(data, 0, mMinBufferSize);
+           if (nbytes &lt;= 0) {
+               continue;
+           }
+           try {
+               rawFileStream.write(data, 0, nbytes);  // write only the bytes actually read
+           } catch (IOException e) {
+               Log.e(LOG_TAG, "Error on writing raw file.", e);
+           }
+       }
+       try {
+           rawFileStream.close();
+       } catch (IOException e) {
+           // ignore errors on close
+       }
+       Log.d(LOG_TAG, "Exit audio recording.");
+   }
+</pre>
+
+<p>Locate the captured audio file in <code>/sdcard/record_loopback.raw</code>, convert it with
+<code>ffmpeg</code>, and listen to it with <code>ffplay</code>:</p>
+
+<pre>
+adb pull /sdcard/record_loopback.raw
+ffmpeg -f s16le -ar 48k -ac 1 -i record_loopback.raw record_loopback.wav
+ffplay record_loopback.wav
+</pre>
+
+<h2 id="Use cases">Use cases</h2>
+
+<p>This section includes common use cases for TV audio.</p>
+
+<h3>TV tuner with speaker output</h3>
+
+<p>When a TV tuner becomes active, the audio routing API creates an audio patch between the tuner
+and the default output (e.g., the speaker). The tuner output does not require decoding, but the
+final audio output is mixed with the software output_stream.</p>
+
+<p><img src="images/ape_audio_tv_tuner.png" alt="Android TV Tuner Audio Patch" />
+<p class="img-caption">
+<strong>Figure 2.</strong> Audio Patch for TV tuner with speaker output.</p>
+
+
+<h3>HDMI OUT during live TV</h3>
+
+<p>A user watching live TV switches to the HDMI audio output
+(<code>Intent.ACTION_HDMI_AUDIO_PLUG</code>). The output device of all output_streams changes to
+the HDMI_OUT port, and the TIF manager changes the sink port of the existing tuner audio patch to
+the HDMI_OUT port.</p>
+
+<p><img src="images/ape_audio_tv_hdmi_tuner.png" alt="Android TV HDMI-OUT Audio Patch" /></p>
+<p class="img-caption">
+<strong>Figure 3.</strong> Audio Patch for HDMI OUT from live TV.</p>
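+
+<p>For illustration, this re-route can be expressed with the same hidden routing APIs used in the
+capture example above. The following is a minimal sketch, not the actual TIF manager code: it
+assumes the existing patch and its source configuration are held in the hypothetical fields
+<code>mTunerPatch</code> and <code>mTunerSourceConfig</code>, that <code>am</code> is the
+<code>AudioManager</code>, and that the platform exposes the hidden constant
+<code>AudioManager.DEVICE_OUT_HDMI</code>:</p>
+
+<pre>
+   ArrayList&lt;AudioPort&gt; audioPorts = new ArrayList&lt;AudioPort&gt;();
+   am.listAudioPorts(audioPorts);
+   AudioPortConfig hdmiSinkConfig = null;
+   for (AudioPort audioPort : audioPorts) {
+       if (audioPort.role() == AudioPort.ROLE_SINK
+               && audioPort instanceof AudioDevicePort
+               && ((AudioDevicePort) audioPort).type() == AudioManager.DEVICE_OUT_HDMI) {
+           // Found the HDMI_OUT device port; build a sink configuration for it.
+           hdmiSinkConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_OUT_DEFAULT,
+                   AudioFormat.ENCODING_DEFAULT, null);
+           break;
+       }
+   }
+   if (hdmiSinkConfig != null) {
+       // Release the old tuner patch, then re-create it with the HDMI sink port.
+       am.releaseAudioPatch(mTunerPatch[0]);
+       int status = am.createAudioPatch(mTunerPatch,
+               new AudioPortConfig[] { mTunerSourceConfig },
+               new AudioPortConfig[] { hdmiSinkConfig });
+       Log.d(LOG_TAG, "Re-created tuner patch to HDMI_OUT: " + status);
+   }
+</pre>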
diff --git a/src/devices/audio/usb.jd b/src/devices/audio/usb.jd
new file mode 100644
index 0000000..9589146
--- /dev/null
+++ b/src/devices/audio/usb.jd
@@ -0,0 +1,600 @@
+page.title=USB Digital Audio
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+This article reviews Android support for USB digital audio and related
+USB-based protocols.
+</p>
+
+<h3 id="audience">Audience</h3>
+
+<p>
+The target audience of this article is Android device OEMs, SoC vendors,
+USB audio peripheral suppliers, advanced audio application developers,
+and others seeking detailed understanding of USB digital audio internals on Android.
+</p>
+
+<p>
+End users should see the <a href="https://support.google.com/android/">Help Center</a> instead.
+Though this article is not oriented towards end users,
+certain audiophile consumers may find portions of interest.
+</p>
+
+<h2 id="overview">Overview of USB</h2>
+
+<p>
+Universal Serial Bus (USB) is informally described in the Wikipedia article
+<a href="http://en.wikipedia.org/wiki/USB">USB</a>,
+and is formally defined by the standards published by the
+<a href="http://www.usb.org/">USB Implementers Forum, Inc</a>.
+For convenience, we summarize the key USB concepts here,
+but the standards are the authoritative reference.
+</p>
+
+<h3 id="terminology">Basic concepts and terminology</h3>
+
+<p>
+USB is a <a href="http://en.wikipedia.org/wiki/Bus_(computing)">bus</a>
+with a single initiator of data transfer operations, called the <i>host</i>.
+The host communicates with
+<a href="http://en.wikipedia.org/wiki/Peripheral">peripherals</a> via the bus.
+</p>
+
+<p>
+<b>Note:</b> The terms <i>device</i> and <i>accessory</i> are common synonyms for
+<i>peripheral</i>.  We avoid those terms here, as they could be confused with
+Android <a href="http://en.wikipedia.org/wiki/Mobile_device">device</a>
+or the Android-specific concept called
+<a href="http://developer.android.com/guide/topics/connectivity/usb/accessory.html">accessory mode</a>.
+</p>
+
+<p>
+A critical host role is <i>enumeration</i>:
+the process of detecting which peripherals are connected to the bus,
+and querying their properties expressed via <i>descriptors</i>.
+</p>
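+
+<p>
+In host mode, Android exposes the results of enumeration to applications through the
+<code>android.hardware.usb</code> API.  As a minimal sketch (assuming the application
+holds USB host permission and that <code>context</code> and <code>TAG</code> are defined),
+the connected peripherals and their interfaces can be listed like this:
+</p>
+
+<pre>
+UsbManager usbManager = (UsbManager) context.getSystemService(Context.USB_SERVICE);
+for (UsbDevice device : usbManager.getDeviceList().values()) {
+    Log.d(TAG, "peripheral: " + device.getDeviceName()
+            + " vendorId=" + device.getVendorId()
+            + " productId=" + device.getProductId());
+    for (int i = 0; i &lt; device.getInterfaceCount(); i++) {
+        UsbInterface intf = device.getInterface(i);
+        // An audio function reports interface class 1 (UsbConstants.USB_CLASS_AUDIO).
+        Log.d(TAG, "  interface " + i + ": class=" + intf.getInterfaceClass()
+                + " endpoints=" + intf.getEndpointCount());
+    }
+}
+</pre>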
+
+<p>
+A peripheral may be one physical object
+but actually implement multiple logical <i>functions</i>.
+For example, a webcam peripheral could have both a camera function and a
+microphone audio function.
+</p>
+
+<p>
+Each peripheral function has an <i>interface</i> that
+defines the protocol to communicate with that function.
+</p>
+
+<p>
+The host communicates with a peripheral over a
+<a href="http://en.wikipedia.org/wiki/Stream_(computing)">pipe</a>
+to an <a href="http://en.wikipedia.org/wiki/Communication_endpoint">endpoint</a>,
+a data source or sink
+provided by one of the peripheral's functions.
+</p>
+
+<p>
+There are two kinds of pipes: <i>message</i> and <i>stream</i>.
+A message pipe is used for bi-directional control and status.
+A stream pipe is used for uni-directional data transfer.
+</p>
+
+<p>
+The host initiates all data transfers,
+hence the terms <i>input</i> and <i>output</i> are expressed relative to the host.
+An input operation transfers data from the peripheral to the host,
+while an output operation transfers data from the host to the peripheral.
+</p>
+
+<p>
+There are three major data transfer modes:
+<i>interrupt</i>, <i>bulk</i>, and <i>isochronous</i>.
+Isochronous mode will be discussed further in the context of audio.
+</p>
+
+<p>
+The peripheral may have <i>terminals</i> that connect to the outside world,
+beyond the peripheral itself.  In this way, the peripheral serves
+to translate between USB protocol and "real world" signals.
+The terminals are logical objects of the function.
+</p>
+
+<h2 id="androidModes">Android USB modes</h2>
+
+<h3 id="developmentMode">Development mode</h3>
+
+<p>
+<i>Development mode</i> has been present since the initial release of Android.
+The Android device appears as a USB peripheral
+to a host PC running a desktop operating system such as Linux,
+Mac OS X, or Windows.  The only visible peripheral function is either
+<a href="http://en.wikipedia.org/wiki/Android_software_development#Fastboot">Android fastboot</a>
+or
+<a href="http://developer.android.com/tools/help/adb.html">Android Debug Bridge (adb)</a>.
+The fastboot and adb protocols are layered over USB bulk data transfer mode.
+</p>
+
+<h3 id="hostMode">Host mode</h3>
+
+<p>
+<i>Host mode</i> was introduced in Android 3.1 (API level 12).
+</p>
+
+<p>
+As the Android device must act as host, and most Android devices include
+a micro-USB connector that does not directly permit host operation,
+an on-the-go (<a href="http://en.wikipedia.org/wiki/USB_On-The-Go">OTG</a>) adapter,
+such as the one shown below, is usually required:
+</p>
+
+<img src="images/otg.jpg" style="image-orientation: 90deg;" height="50%" width="50%" alt="OTG">
+
+<p>
+An Android device might not provide sufficient power to operate a
+particular peripheral, depending on how much power the peripheral needs,
+and how much the Android device is capable of supplying.  Even if
+adequate power is available, the Android device's battery life may
+be significantly shortened.  For these situations, use a powered
+<a href="http://en.wikipedia.org/wiki/USB_hub">hub</a> such as the one shown below:
+</p>
+
+<img src="images/hub.jpg" alt="Powered hub">
+
+<h3 id="accessoryMode">Accessory mode</h3>
+
+<p>
+<i>Accessory mode</i> was introduced in Android 3.1 (API level 12) and back-ported to Android 2.3.4.
+In this mode, the Android device operates as a USB peripheral,
+under the control of another device such as a dock that serves as host.
+The difference between development mode and accessory mode
+is that additional USB functions are visible to the host, beyond adb.
+The Android device begins in development mode and then
+transitions to accessory mode via a re-negotiation process.
+</p>
+
+<p>
+Accessory mode was extended with additional features in Android 4.1,
+in particular the audio support described below.
+</p>
+
+<h2 id="audioClass">USB audio</h2>
+
+<h3 id="class">USB classes</h3>
+
+<p>
+Each peripheral function has an associated <i>device class</i> document
+that specifies the standard protocol for that function.
+This enables <i>class compliant</i> hosts and peripheral functions
+to inter-operate, without detailed knowledge of each other's workings.
+Class compliance is critical if the host and peripheral are provided by
+different entities.
+</p>
+
+<p>
+The term <i>driverless</i> is a common synonym for <i>class compliant</i>,
+indicating that it is possible to use the standard features of such a
+peripheral without requiring an operating-system specific
+<a href="http://en.wikipedia.org/wiki/Device_driver">driver</a> to be installed.
+One can assume that a peripheral advertised as "no driver needed"
+for major desktop operating systems
+will be class compliant, though there may be exceptions.
+</p>
+
+<h3 id="audioClass">USB audio class</h3>
+
+<p>
+Here we concern ourselves only with peripherals that implement
+audio functions, and thus adhere to the audio device class.  There are two
+editions of the USB audio class specification: class 1 (UAC1) and 2 (UAC2).
+</p>
+
+<h3 id="otherClasses">Comparison with other classes</h3>
+
+<p>
+USB includes many other device classes, some of which may be confused
+with the audio class.  The
+<a href="http://en.wikipedia.org/wiki/USB_mass_storage_device_class">mass storage class</a>
+(MSC) is used for
+sector-oriented access to media, while
+<a href="http://en.wikipedia.org/wiki/Media_Transfer_Protocol">Media Transfer Protocol</a>
+(MTP) is for full file access to media.
+Both MSC and MTP may be used for transferring audio files,
+but only USB audio class is suitable for real-time streaming.
+</p>
+
+<h3 id="audioTerminals">Audio terminals</h3>
+
+<p>
+The terminals of an audio peripheral are typically analog.
+The analog signal presented at the peripheral's input terminal is converted to digital by an
+<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">analog-to-digital converter</a>
+(ADC),
+and is carried over USB protocol to be consumed by
+the host.  The ADC is a data <i>source</i>
+for the host.  Similarly, the host sends a
+digital audio signal over USB protocol to the peripheral, where a
+<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">digital-to-analog converter</a>
+(DAC)
+converts it and presents it at an analog output terminal.
+The DAC is a <i>sink</i> for the host.
+</p>
+
+<h3 id="channels">Channels</h3>
+
+<p>
+A peripheral with an audio function can include a source terminal, a sink terminal, or both.
+Each direction may have one channel (<i>mono</i>), two channels
+(<i>stereo</i>), or more.
+Peripherals with more than two channels are called <i>multichannel</i>.
+It is common to interpret a stereo stream as consisting of
+<i>left</i> and <i>right</i> channels, and by extension to interpret a multichannel stream as having
+spatial locations corresponding to each channel.  However, it is also quite appropriate
+(more so for USB audio than for
+<a href="http://en.wikipedia.org/wiki/HDMI">HDMI</a>)
+not to assign any particular
+standard spatial meaning to each channel.  In this case, it is up to the
+application and user to define how each channel is used.
+For example, a four-channel USB input stream might have the first three
+channels attached to various microphones within a room, and the final
+channel receiving input from an AM radio.
+</p>
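+
+<p>
+As an illustrative sketch only (channel index masks require a later Android release, API level
+23 or higher), an application could capture such a four-channel stream by channel index,
+without implying any spatial positions:
+</p>
+
+<pre>
+AudioFormat format = new AudioFormat.Builder()
+        .setSampleRate(48000)
+        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
+        .setChannelIndexMask(0xF)   // channels 0 through 3, no spatial meaning implied
+        .build();
+AudioRecord recorder = new AudioRecord.Builder()
+        .setAudioFormat(format)
+        .build();
+</pre>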
+
+<h3 id="isochronous">Isochronous transfer mode</h3>
+
+<p>
+USB audio uses isochronous transfer mode for its real-time characteristics,
+at the expense of error recovery.
+In isochronous mode, bandwidth is guaranteed, and data transmission
+errors are detected using a cyclic redundancy check (CRC).  But there is
+no packet acknowledgement or re-transmission in the event of error.
+</p>
+
+<p>
+Isochronous transmissions occur each Start Of Frame (SOF) period.
+The SOF period is one millisecond for full-speed, and 125 microseconds for
+high-speed.  Each full-speed frame carries up to 1023 bytes of payload,
+and a high-speed frame carries up to 1024 bytes.  Putting these together,
+we calculate the maximum transfer rate as 1,023,000 or 8,192,000 bytes
+per second.  This sets a theoretical upper limit on the combined audio
+sample rate, channel count, and bit depth.  The practical limit is lower.
+</p>
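+
+<p>
+For example, the limits work out as follows, with an illustrative check of whether a
+typical stream fits within full-speed bandwidth:
+</p>
+
+<pre>
+full speed:  1,023 bytes/frame x 1,000 frames/sec = 1,023,000 bytes/sec
+high speed:  1,024 bytes/frame x 8,000 frames/sec = 8,192,000 bytes/sec
+
+example stream: 48,000 Hz x 2 channels x 2 bytes/sample = 192,000 bytes/sec
+                192,000 &lt; 1,023,000, so the stream fits within full-speed bandwidth
+</pre>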
+
+<p>
+Within isochronous mode, there are three sub-modes:
+</p>
+
+<ul>
+<li>Adaptive</li>
+<li>Asynchronous</li>
+<li>Synchronous</li>
+</ul>
+
+<p>
+In adaptive sub-mode, the peripheral sink or source adapts to a potentially varying sample rate
+of the host.
+</p>
+
+<p>
+In asynchronous (also called implicit feedback) sub-mode,
+the sink or source determines the sample rate, and the host accommodates.
+The primary theoretical advantage of asynchronous sub-mode is that the source
+or sink USB clock is physically and electrically closer to (and indeed may
+be the same as, or derived from) the clock that drives the DAC or ADC.
+This proximity means that asynchronous sub-mode should be less susceptible
+to clock jitter.  In addition, the clock used by the DAC or ADC may be
+designed for higher accuracy and lower drift than the host clock.
+</p>
+
+<p>
+In synchronous sub-mode, a fixed number of bytes is transferred each SOF period.
+The audio sample rate is effectively derived from the USB clock.
+Synchronous sub-mode is not commonly used with audio because both
+host and peripheral are at the mercy of the USB clock.
+</p>
+
+<p>
+The table below summarizes the isochronous sub-modes:
+</p>
+
+<table>
+<tr>
+  <th>Sub-mode</th>
+  <th>Byte count<br />per packet</th>
+  <th>Sample rate<br />determined by</th>
+  <th>Used for audio</th>
+</tr>
+<tr>
+  <td>adaptive</td>
+  <td>variable</td>
+  <td>host</td>
+  <td>yes</td>
+</tr>
+<tr>
+  <td>asynchronous</td>
+  <td>variable</td>
+  <td>peripheral</td>
+  <td>yes</td>
+</tr>
+<tr>
+  <td>synchronous</td>
+  <td>fixed</td>
+  <td>USB clock</td>
+  <td>no</td>
+</tr>
+</table>
+
+<p>
+In practice, the isochronous sub-mode matters, but it is only one of
+several factors to consider in the overall audio quality of a design.
+</p>
+
+<h2 id="androidSupport">Android support for USB audio class</h2>
+
+<h3 id="developmentAudio">Development mode</h3>
+
+<p>
+USB audio is not supported in development mode.
+</p>
+
+<h3 id="hostAudio">Host mode</h3>
+
+<p>
+Android 5.0 (API level 21) and higher support a subset of USB audio class 1 (UAC1) features:
+</p>
+
+<ul>
+<li>The Android device must act as host</li>
+<li>The audio format must be PCM (interface type I)</li>
+<li>The bit depth must be 16, 24, or 32 bits, where
+24 bits of useful audio data are left-justified within the most significant
+bits of the 32-bit word</li>
+<li>The sample rate must be either 48, 44.1, 32, 24, 22.05, 16, 12, 11.025, or 8 kHz</li>
+<li>The channel count must be 1 (mono) or 2 (stereo)</li>
+</ul>
+
+<p>
+A review of the Android framework source code may reveal additional code
+beyond the minimum needed to support these features.  However, this code
+has not been validated, so more advanced features are not yet claimed.
+</p>
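+
+<p>
+Because routing is automatic (see <a href="#applications">Applications of USB digital audio</a>
+below), no USB-specific API is needed for playback; a standard <code>AudioTrack</code> within
+the constraints above plays through a connected USB peripheral.  A minimal sketch, assuming
+<code>pcmData</code> holds 16-bit stereo PCM at 48 kHz:
+</p>
+
+<pre>
+int sampleRate = 48000;
+int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
+        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
+AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
+        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
+        bufferSize, AudioTrack.MODE_STREAM);
+track.play();
+track.write(pcmData, 0, pcmData.length);   // routed to the USB peripheral by audio policy
+track.stop();
+track.release();
+</pre>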
+
+<h3 id="accessoryAudio">Accessory mode</h3>
+
+<p>
+Android 4.1 (API level 16) added limited support for audio playback to the host.
+While in accessory mode, Android automatically routes its audio output to USB.
+That is, the Android device serves as a data source to the host, for example a dock.
+</p>
+
+<p>
+Accessory mode audio has these features:
+</p>
+
+<ul>
+<li>
+The Android device must be controlled by a knowledgeable host that
+can first transition the Android device from development mode to accessory mode,
+and then the host must transfer audio data from the appropriate endpoint.
+Thus the Android device does not appear "driverless" to the host.
+</li>
+<li>The direction must be <i>input</i>, expressed relative to the host</li>
+<li>The audio format must be 16-bit PCM</li>
+<li>The sample rate must be 44.1 kHz</li>
+<li>The channel count must be 2 (stereo)</li>
+</ul>
+
+<p>
+Accessory mode audio has not been widely adopted,
+and is not currently recommended for new designs.
+</p>
+
+<h2 id="applications">Applications of USB digital audio</h2>
+
+<p>
+As the name indicates, the USB digital audio signal is represented
+by a <a href="http://en.wikipedia.org/wiki/Digital_data">digital</a> data stream
+rather than the <a href="http://en.wikipedia.org/wiki/Analog_signal">analog</a>
+signal used by the common TRS mini
+<a href=" http://en.wikipedia.org/wiki/Phone_connector_(audio)">headset connector</a>.
+Eventually any digital signal must be converted to analog before it can be heard.
+There are tradeoffs in choosing where to place that conversion.
+</p>
+
+<h3 id="comparison">A tale of two DACs</h3>
+
+<p>
+In the example diagram below, we compare two designs.  The first is a
+mobile device (A) with an Application Processor (AP), on-board DAC, amplifier,
+and analog TRS connector attached to headphones.  The second is a
+mobile device (B) connected over USB to an external USB DAC and
+amplifier (C), also attached to headphones.
+</p>
+
+<img src="images/dac.png" alt="DAC comparison">
+
+<p>
+Which design is better?  The answer depends on your needs.
+Each has advantages and disadvantages.
+<b>Note:</b> This is an artificial comparison, since
+a real Android device would probably have both options available.
+</p>
+
+<p>
+Design A is simpler, less expensive, uses less power,
+and is more reliable assuming otherwise equally reliable components.
+However, there are usually audio quality tradeoffs vs. other requirements.
+For example, if this is a mass-market device, it may be designed to fit
+the needs of the general consumer, not for the audiophile.
+</p>
+
+<p>
+In the second design, the external audio peripheral C can be designed for
+higher audio quality and greater power output without impacting the cost of
+the basic mass-market Android device B.  Yes, it is a more expensive design,
+but the cost is absorbed only by those who want it.
+</p>
+
+<p>
+Mobile devices are notorious for having high-density
+circuit boards, which can result in more opportunities for
+<a href="http://en.wikipedia.org/wiki/Crosstalk_(electronics)">crosstalk</a>
+that degrades adjacent analog signals.  Digital communication is less susceptible to
+<a href="http://en.wikipedia.org/wiki/Noise_(electronics)">noise</a>,
+so moving the DAC from the Android device A to an external circuit board
+C allows the final analog stages to be physically and electrically
+isolated from the dense and noisy circuit board, resulting in higher fidelity audio.
+</p>
+
+<p>
+On the other hand,
+the second design is more complex, and with added complexity come more
+opportunities for things to fail.  There is also additional latency
+from the USB controllers.
+</p>
+
+<h3 id="applications">Applications</h3>
+
+<p>
+Typical USB host mode audio applications include:
+</p>
+
+<ul>
+<li>music listening</li>
+<li>telephony</li>
+<li>instant messaging and voice chat</li>
+<li>recording</li>
+</ul>
+
+<p>
+For all of these applications, Android detects a compatible USB digital
+audio peripheral, and automatically routes audio playback and capture
+appropriately, based on the audio policy rules.
+Stereo content is played on the first two channels of the peripheral.
+</p>
+
+<p>
+There are no APIs specific to USB digital audio.
+For advanced usage, the automatic routing may interfere with applications
+that are USB-aware.  For such applications, disable automatic routing
+via the corresponding control in the Media section of
+<a href="http://developer.android.com/tools/index.html">Settings / Developer Options</a>.
+</p>
+
+<h2 id="compatibility">Implementing USB audio</h2>
+
+<h3 id="recommendationsPeripheral">Recommendations for audio peripheral vendors</h3>
+
+<p>
+In order to inter-operate with Android devices, audio peripheral vendors should:
+</p>
+
+<ul>
+<li>design for audio class compliance;
+currently Android targets class 1, but it is wise to plan for class 2</li>
+<li>avoid <a href="http://en.wiktionary.org/wiki/quirk">quirks</a></li>
+<li>test for inter-operability with reference and popular Android devices</li>
+<li>clearly document supported features, audio class compliance, power requirements, etc.
+so that consumers can make informed decisions</li>
+</ul>
+
+<h3 id="recommendationsAndroid">Recommendations for Android device OEMs and SoC vendors</h3>
+
+<p>
+In order to support USB digital audio, device OEMs and SoC vendors should:
+</p>
+
+<ul>
+<li>enable all kernel features needed: USB host mode, USB audio, isochronous transfer mode</li>
+<li>keep up-to-date with recent kernel releases and patches;
+despite the noble goal of class compliance, there are extant audio peripherals
+with <a href="http://en.wiktionary.org/wiki/quirk">quirks</a>,
+and recent kernels have workarounds for such quirks
+</li>
+<li>enable USB audio policy as described below</li>
+<li>test for inter-operability with common USB audio peripherals</li>
+</ul>
+
+<h3 id="enable">How to enable USB audio policy</h3>
+
+<p>
+To enable USB audio, add an entry to the
+audio policy configuration file.  This is typically
+located here:
+<pre>device/oem/codename/audio_policy.conf</pre>
+The pathname component "oem" should be replaced by the name
+of the OEM who manufactures the Android device,
+and "codename" should be replaced by the device code name.
+</p>
+
+<p>
+An example entry is shown here:
+</p>
+
+<pre>
+audio_hw_modules {
+  ...
+  usb {
+    outputs {
+      usb_accessory {
+        sampling_rates 44100
+        channel_masks AUDIO_CHANNEL_OUT_STEREO
+        formats AUDIO_FORMAT_PCM_16_BIT
+        devices AUDIO_DEVICE_OUT_USB_ACCESSORY
+      }
+      usb_device {
+        sampling_rates dynamic
+        channel_masks dynamic
+        formats dynamic
+        devices AUDIO_DEVICE_OUT_USB_DEVICE
+      }
+    }
+    inputs {
+      usb_device {
+        sampling_rates dynamic
+        channel_masks AUDIO_CHANNEL_IN_STEREO
+        formats AUDIO_FORMAT_PCM_16_BIT
+        devices AUDIO_DEVICE_IN_USB_DEVICE
+      }
+    }
+  }
+  ...
+}
+</pre>
+
+<h3 id="sourceCode">Source code</h3>
+
+<p>
+The audio Hardware Abstraction Layer (HAL)
+implementation for USB audio is located here:
+<pre>hardware/libhardware/modules/usbaudio/</pre>
+The USB audio HAL relies heavily on
+<i>tinyalsa</i>, described at <a href="audio_terminology.html">Audio Terminology</a>.
+Though USB audio relies on isochronous transfers,
+this is abstracted away by the ALSA implementation.
+So the USB audio HAL and tinyalsa do not need to concern
+themselves with this part of the USB protocol.
+</p>
diff --git a/src/devices/audio/warmup.jd b/src/devices/audio/warmup.jd
new file mode 100644
index 0000000..777650b
--- /dev/null
+++ b/src/devices/audio/warmup.jd
@@ -0,0 +1,113 @@
+page.title=Audio Warmup
+@jd:body
+
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Audio warmup is the time it takes for the audio amplifier circuit in your device to
+be fully powered and reach its normal operation state. The major contributors
+to audio warmup time are power management and any "de-pop" logic to stabilize
+the circuit.
+</p>
+
+<p>This document describes how to measure audio warmup time and possible ways to decrease
+warmup time.</p>
+
+<h2 id="measuringOutput">Measuring Output Warmup</h2>
+
+<p>
+  AudioFlinger's FastMixer thread automatically measures output warmup
+  and reports it as part of the output of the <code>dumpsys media.audio_flinger</code> command.
+  At warmup, FastMixer calls <code>write()</code> repeatedly until the time between two
+  <code>write()</code>s matches the expected value; that is, it determines warmup by measuring
+  how long a Hardware Abstraction Layer (HAL) <code>write()</code> takes to stabilize.
+</p>
+
+<p>To measure audio warmup, follow these steps for both the built-in speaker and wired headphones,
+  and at several different times after booting. Warmup times are usually different for each output
+  device and right after booting the device:</p>
+
+<ol>
+  <li>Ensure that FastMixer is enabled.</li>
+  <li>Enable touch sounds by selecting <b>Settings > Sound > Touch sounds</b> on the device.</li>
+  <li>Ensure that audio has been off for at least three seconds. Five seconds or more is better, because
+  the hardware itself might have its own power logic beyond the three seconds that AudioFlinger has.</li>
+  <li>Press Home, and you should hear a click sound.</li>
+  <li>Run the following command to receive the measured warmup:
+  <br /><code>adb shell dumpsys media.audio_flinger | grep measuredWarmup</code>
+
+<p>
+You should see output like this:
+</p>
+
+<pre>
+sampleRate=44100 frameCount=256 measuredWarmup=X ms, warmupCycles=X
+</pre>
+
+<p>
+  <code>measuredWarmup=X</code> is the number of milliseconds
+  the first set of HAL <code>write()</code>s took to complete.
+</p>
+
+<p>
+  <code>warmupCycles=X</code> is the number of HAL write requests it took
+  for the execution time of <code>write()</code> to match what is expected.
+</p>
+</li>
+<li>
+  Take five measurements and record them all, as well as the mean.
+  If they are not all approximately the same,
+  then it's likely that a measurement is incorrect.
+  For example, if you don't wait long enough after the audio has been off,
+  you will see a lower warmup time than the mean value.
+</li>
+</ol>
+
+
+<h2 id="measuringInput">Measuring Input Warmup</h2>
+
+<p>
+  There are currently no tools provided for measuring audio input warmup.
+  However, input warmup time can be estimated by observing
+  the time required for <a href="http://developer.android.com/reference/android/media/AudioRecord.html#startRecording()">startRecording()</a>
+  to return. 
+</p>
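+
+<p>
+  A minimal sketch of such an estimate (assuming <code>minBufferSize</code> was obtained from
+  <code>AudioRecord.getMinBufferSize()</code> and <code>TAG</code> is defined):
+</p>
+
+<pre>
+AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT,
+        48000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
+        minBufferSize);
+long startNs = System.nanoTime();
+recorder.startRecording();
+long elapsedMs = (System.nanoTime() - startNs) / 1000000;
+Log.d(TAG, "startRecording() returned after " + elapsedMs + " ms");
+recorder.stop();
+recorder.release();
+</pre>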
+
+
+<h2 id="reducing">Reducing Warmup Time</h2>
+
+<p>
+  Warmup time can usually be reduced by a combination of:
+  <ul>
+  <li>Good circuit design</li>
+  <li>Accurate time delays in kernel device driver</li>
+  <li>Performing independent warmup operations concurrently rather than sequentially</li>
+  <li>Leaving circuits powered on or not reconfiguring clocks (increases idle power consumption)</li>
+  <li>Caching computed parameters</li>
+  </ul>
+<p>
+  However, beware of excessive optimization. You may find that you
+  need to trade off low warmup time against
+  freedom from popping at power transitions.
+</p>