Update s.a.c audio documentation

Latency:
  - rate-monotonic scheduling
  - power management
  - security kernels

USB digital audio:
  - requirements for OEMs
  - debugging over WiFi

Terminology:
  - AVRCP
  - FastCapture
  - jitter
  - overrun
  - panning
  - transducer
  - underrun

Change-Id: Id2138533b80355bdf4ced0619af8e2e3a2f9f0d2
diff --git a/src/devices/audio/avoiding_pi.jd b/src/devices/audio/avoiding_pi.jd
index ec407d7..96d09a2 100644
--- a/src/devices/audio/avoiding_pi.jd
+++ b/src/devices/audio/avoiding_pi.jd
@@ -26,7 +26,7 @@
 
 <p>
-This article explains how the Android's audio system attempts to avoid
-priority inversion, as of the Android 4.1 release,
+This article explains how the Android audio system attempts to avoid
+priority inversion,
 and highlights techniques that you can use too.
 </p>
 
@@ -45,8 +45,8 @@
 <p>
 The Android AudioFlinger audio server and AudioTrack/AudioRecord
 client implementation are being re-architected to reduce latency.
-This work started in Android 4.1, continued in 4.2 and 4.3, and now more
-improvements exist in version 4.4.
+This work started in Android 4.1 and continued with further improvements
+in 4.2, 4.3, 4.4, and 5.0.
 </p>
 
 <p>
@@ -54,7 +54,7 @@
 important change is to assign CPU resources to time-critical
 threads with a more predictable scheduling policy. Reliable scheduling
 allows the audio buffer sizes and counts to be reduced while still
-avoiding artifacts due to underruns.
+avoiding underruns and overruns.
 </p>
 
 <h2 id="priorityInversion">Priority inversion</h2>
@@ -95,6 +95,11 @@
 </li>
 
 <li>
+between application callback thread for a fast AudioRecord and
+fast capture thread (similar to the previous item)
+</li>
+
+<li>
 within the audio Hardware Abstraction Layer (HAL) implementation, e.g. for telephony or echo cancellation
 </li>
 
@@ -103,17 +108,11 @@
 </li>
 
 <li>
-between AudioTrack callback thread and other app threads (this is out of our control)
+between AudioTrack or AudioRecord callback thread and other app threads (this is out of our control)
 </li>
 
 </ul>
 
-<p>
-As of this writing, reduced latency for AudioRecord is planned but
-not yet implemented. The likely priority inversion spots will be
-similar to those for AudioTrack.
-</p>
-
 <h2 id="commonSolutions">Common solutions</h2>
 
 <p>
diff --git a/src/devices/audio/debugging.jd b/src/devices/audio/debugging.jd
index 4dacb5c..78ca801 100644
--- a/src/devices/audio/debugging.jd
+++ b/src/devices/audio/debugging.jd
@@ -46,7 +46,7 @@
 </p>
 
 <p>
-The instructions in the remainder of this section are for Android 4.4,
+The instructions in the remainder of this section are for Android 5.0,
 and may require changes for other versions.
 </p>
 
@@ -212,7 +212,7 @@
 The underlying kernel system calls could block, possibly resulting in
 priority inversion and consequently measurement disturbances and
 inaccuracies.  This is of
-special concern to time-critical threads such as <code>FastMixer</code>.
+special concern to time-critical threads such as <code>FastMixer</code> and <code>FastCapture</code>.
 </li>
 <li>
 If a particular log is disabled to reduce log spam,
@@ -326,7 +326,7 @@
 occasions when it is indispensable.
 In particular, it is recommended for AudioFlinger threads that must
 run frequently, periodically, and without blocking such as the
-<code>FastMixer</code> thread.
+<code>FastMixer</code> and <code>FastCapture</code> threads.
 </p>
 
 <h3>How to use</h3>
@@ -338,7 +338,7 @@
 </p>
 
 <p>
-In <code>FastMixer</code> thread, use code such as this:
+In <code>FastMixer</code> and <code>FastCapture</code> threads, use code such as this:
 </p>
 <pre>
 logWriter->log("string");
@@ -346,7 +346,8 @@
 logWriter->logTimestamp();
 </pre>
 <p>
-As this <code>NBLog</code> timeline is used only by the <code>FastMixer</code> thread,
+As this <code>NBLog</code> timeline is used only by the <code>FastMixer</code> and
+<code>FastCapture</code> threads,
 there is no need for mutual exclusion.
 </p>
 
@@ -359,7 +360,7 @@
 mNBLogWriter->logTimestamp();
 </pre>
 <p>
-For threads other than <code>FastMixer</code>,
+For threads other than <code>FastMixer</code> and <code>FastCapture</code>,
 the thread's <code>NBLog</code> timeline can be used by both the thread itself, and
 by binder operations.  <code>NBLog::Writer</code> does not provide any
 implicit mutual exclusion per timeline, so be sure that all logs occur
diff --git a/src/devices/audio/funplug.jd b/src/devices/audio/funplug.jd
index fb4b527..6245e95 100644
--- a/src/devices/audio/funplug.jd
+++ b/src/devices/audio/funplug.jd
@@ -46,8 +46,8 @@
 <p>
 To ensure that the output signal will not overload the microphone input,
 we cut it down by about 20dB.
-The resistor loads are there to tell the microphone polarity switch that
-it is a US/CTIA pinout plug.
+The resistor loads tell the microphone polarity switch that
+the FunPlug is a US/CTIA pinout Tip Ring Ring Sleeve (TRRS) plug.
 </p>
 
 <h2 id="funplugAssembled">FunPlug assembled</h2>
diff --git a/src/devices/audio/latency.jd b/src/devices/audio/latency.jd
index 815f5b9..a779157 100644
--- a/src/devices/audio/latency.jd
+++ b/src/devices/audio/latency.jd
@@ -38,7 +38,7 @@
 </p>
 <p>
   Assuming the analog circuitry does not contribute significantly, then the major 
-surface-level contributors to audio latency are the following:
+  surface-level contributors to audio latency are the following:
 </p>
 
 <ul>
@@ -53,21 +53,21 @@
   The reason is that buffer count and buffer size are more of an
   <em>effect</em> than a <em>cause</em>.  What usually happens is that
   a given buffer scheme is implemented and tested, but during testing, an audio
-  underrun is heard as a "click" or "pop."  To compensate, the
+  underrun or overrun is heard as a "click" or "pop."  To compensate, the
   system designer then increases buffer sizes or buffer counts.
-  This has the desired result of eliminating the underruns, but it also
+  This has the desired result of eliminating the underruns or overruns, but it also
   has the undesired side effect of increasing latency.
 </p>
 
 <p>
   A better approach is to understand the causes of the
-  underruns and then correct those.  This eliminates the
+  underruns and overruns, and then correct those.  This eliminates the
   audible artifacts and may permit even smaller or fewer buffers
   and thus reduce latency.
 </p>
 
 <p>
-  In our experience, the most common causes of underruns include:
+  In our experience, the most common causes of underruns and overruns include:
 </p>
 <ul>
   <li>Linux CFS (Completely Fair Scheduler)</li>
@@ -75,9 +75,11 @@
   <li>long scheduling latency</li>
   <li>long-running interrupt handlers</li>
   <li>long interrupt disable time</li>
+  <li>power management</li>
+  <li>security kernels</li>
 </ul>
 
-<h3>Linux CFS and SCHED_FIFO scheduling</h3>
+<h3 id="linuxCfs">Linux CFS and SCHED_FIFO scheduling</h3>
 <p>
   The Linux CFS is designed to be fair to competing workloads sharing a common CPU
   resource. This fairness is represented by a per-thread <em>nice</em> parameter.
@@ -90,7 +92,7 @@
   CFS may allocate the CPU resource in unexpected ways. For example, it
   may take the CPU away from a thread with numerically low niceness
   onto a thread with a numerically high niceness.  In the case of audio,
-  this can result in an underrun.
+  this can result in an underrun or overrun.
 </p>
 
 <p>
@@ -100,6 +102,7 @@
   <code>SCHED_OTHER</code>) scheduling policy implemented by CFS.
 </p>
 
+<h3 id="schedFifo">SCHED_FIFO priorities</h3>
 <p>
   Though the high-performance audio threads now use <code>SCHED_FIFO</code>, they
   are still susceptible to other higher priority <code>SCHED_FIFO</code> threads.
@@ -110,10 +113,25 @@
   and priorities 4 to 99 for higher priority threads.  We recommend 
   you use priority 1 whenever possible, and reserve priorities 4 to 99 for
   those threads that are guaranteed to complete within a bounded amount
-  of time and are known to not interfere with scheduling of audio threads.
+  of time, execute with a period shorter than the period of audio threads,
+  and are known to not interfere with scheduling of audio threads.
 </p>
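+
+<p>
+  As a purely illustrative sketch (the function name and error handling are
+  invented, and this is not code from the Android audio implementation), a
+  device-specific native thread could request the recommended priority 1 as follows:
+</p>
+<pre>
+#include &lt;pthread.h>
+#include &lt;sched.h>
+
+// Request SCHED_FIFO priority 1 for the calling thread, per the recommendation
+// above.  Returns 0 on success or an errno value on failure; the process needs
+// permission to use real-time scheduling policies.
+static int requestLowRealtimePriority() {
+    struct sched_param param = {};
+    param.sched_priority = 1;   // reserve 4-99 for bounded, shorter-period work
+    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &amp;param);
+}
+</pre>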
 
-<h3>Scheduling latency</h3>
+<h3 id="rms">Rate-monotonic scheduling</h3>
+<p>
+  For more information on the theory of assignment of fixed priorities,
+  see the Wikipedia article
+  <a href="http://en.wikipedia.org/wiki/Rate-monotonic_scheduling">Rate-monotonic scheduling</a> (RMS).
+  A key point is that fixed priorities should be allocated strictly based on period,
+  with higher priorities assigned to threads with shorter periods, not based on perceived "importance."
+  Non-periodic threads may be modeled as periodic threads, using the maximum frequency of execution
+  and maximum computation per execution.  If a non-periodic thread cannot be modeled as
+  a periodic thread (for example, it could execute with unbounded frequency or unbounded computation
+  per execution), then it should not be assigned a fixed priority, as that would be incompatible
+  with the scheduling of true periodic threads.
+</p>
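+
+<p>
+  To make the rule concrete, here is a hypothetical sketch; the task set, names,
+  and numbers are invented for illustration and are not taken from Android.  It
+  gives the highest priority to the shortest period and checks the classic
+  Liu and Layland utilization bound for rate-monotonic scheduling:
+</p>
+<pre>
+#include &lt;math.h>
+#include &lt;stdio.h>
+
+// A hypothetical periodic task set: period and worst-case computation per
+// period, in ms.  The tasks are listed from shortest to longest period; under
+// RMS the shortest period receives the highest fixed priority, regardless of
+// perceived importance.
+struct Task {
+    const char *name;
+    double periodMs;
+    double computeMs;
+};
+
+int main() {
+    const Task tasks[] = {
+        { "fast mixer",   2.0,   0.3 },   // gets the highest of the chosen priorities
+        { "normal mixer", 20.0,  3.0 },
+        { "housekeeping", 100.0, 5.0 },   // gets the lowest of the chosen priorities
+    };
+    const int n = sizeof(tasks) / sizeof(tasks[0]);
+
+    // Liu and Layland bound: the set is schedulable under RMS if the total
+    // utilization does not exceed n * (2^(1/n) - 1).
+    double utilization = 0.0;
+    for (int i = 0; i &lt; n; i++) {
+        utilization += tasks[i].computeMs / tasks[i].periodMs;
+    }
+    const double bound = n * (pow(2.0, 1.0 / n) - 1.0);
+    printf("utilization = %.3f, RMS bound = %.3f\n", utilization, bound);
+    printf("%s\n", utilization &lt;= bound ? "schedulable under RMS"
+                                        : "not guaranteed schedulable by the bound");
+    return 0;
+}
+</pre>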
+
+<h3 id="schedLatency">Scheduling latency</h3>
 <p>
   Scheduling latency is the time between when a thread becomes
   ready to run, and when the resulting context switch completes so that the
@@ -125,12 +143,12 @@
   or adjusting the CPU clock frequency and voltage.
 </p>
 
-<h3>Interrupts</h3>
+<h3 id="interrupts">Interrupts</h3>
 <p>
   In many designs, CPU 0 services all external interrupts.  So a
   long-running interrupt handler may delay other interrupts, in particular
   audio direct memory access (DMA) completion interrupts. Design interrupt handlers
-  to finish quickly and defer any lengthy work to a thread (preferably
+  to finish quickly and defer lengthy work to a thread (preferably
   a CFS thread or <code>SCHED_FIFO</code> thread of priority 1).
 </p>
 
@@ -142,3 +160,53 @@
   they are bounded.
 </p>
 
+<h3 id="power">Power, performance, and thermal management</h3>
+<p>
+  <a href="http://en.wikipedia.org/wiki/Power_management">Power management</a>
+  is a broad term that encompasses efforts to monitor
+  and reduce power consumption while optimizing performance.
+  <a href="http://en.wikipedia.org/wiki/Thermal_management_of_electronic_devices_and_systems">Thermal management</a>
+  is similar, but seeks to measure and control temperature to avoid damage due to excess heat.
+  In the Linux kernel, the CPU
+  <a href="http://en.wikipedia.org/wiki/Governor_%28device%29">governor</a>
+  is responsible for low-level policy, while user mode configures high-level policy.
+  Techniques used include:
+</p>
+
+<ul>
+  <li>dynamic voltage scaling</li>
+  <li>dynamic frequency scaling</li>
+  <li>dynamic core enabling</li>
+  <li>cluster switching</li>
+  <li>power gating</li>
+  <li>hotplug (hotswap)</li>
+  <li>various sleep modes (halt, stop, idle, suspend, etc.)</li>
+  <li>process migration</li>
+  <li><a href="http://en.wikipedia.org/wiki/Processor_affinity">processor affinity</a></li>
+</ul>
+
+<p>
+  Some management operations can result in "work stoppages," or
+  periods during which the application processor performs no useful work.
+  These work stoppages can interfere with audio, so such management should be designed
+  for an acceptable worst-case work stoppage while audio is active.
+  Of course, when thermal runaway is imminent, avoiding permanent damage
+  is more important than audio!
+</p>
+
+<h3 id="security">Security kernels</h3>
+<p>
+  A <a href="http://en.wikipedia.org/wiki/Security_kernel">security kernel</a> for
+  <a href="http://en.wikipedia.org/wiki/Digital_rights_management">Digital rights management</a>
+  (DRM) may run on the same application processor core(s) as those used
+  for the main operating system kernel and application code.  Any time
+  during which a security kernel operation is active on a core is effectively a
+  stoppage of ordinary work that would normally run on that core.
+  In particular, this may include audio work.  By its nature, the internal
+  behavior of a security kernel is inscrutable from higher-level layers, and thus
+  any performance anomalies caused by a security kernel are especially
+  pernicious.  For example, security kernel operations do not typically appear in
+  context switch traces.  We call this "dark time" &mdash; time that elapses
+  yet cannot be observed.  Security kernels should be designed for an
+  acceptable worst-case work stoppage while audio is active.
+</p>
diff --git a/src/devices/audio/latency_design.jd b/src/devices/audio/latency_design.jd
index a2ad236..21f963f 100644
--- a/src/devices/audio/latency_design.jd
+++ b/src/devices/audio/latency_design.jd
@@ -26,9 +26,10 @@
 
 <p>
 The Android 4.1 release introduced internal framework changes for
-a lower latency audio output path. There were no public client API
+a <a href="http://en.wikipedia.org/wiki/Low_latency">lower latency</a>
+audio output path. There were minimal public client API
 or HAL API changes. This document describes the initial design,
-which is expected to evolve over time.
+which has continued to evolve over time.
 Having a good understanding of this design should help device OEM and
 SoC vendors implement the design correctly on their particular devices
 and chipsets.  This article is not intended for application developers.
@@ -43,9 +44,9 @@
 </p>
 
 <ul>
-<li>OpenSL ES</li>
-<li>SoundPool</li>
-<li>ToneGenerator</li>
+<li>Android native audio based on OpenSL ES</li>
+<li><a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a></li>
+<li><a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a></li>
 </ul>
 
 <p>
@@ -118,7 +119,7 @@
 
 <p>
 The fast mixer runs periodically, with a recommended period of two
-to three milliseconds (ms), or slightly higher if needed for scheduling stability.
+to three milliseconds (ms), or a slightly longer period of five ms if needed for scheduling stability.
 This number was chosen so that, accounting for the complete
 buffer pipeline, the total latency is on the order of 10 ms. Smaller
 values are possible but may result in increased power consumption
@@ -132,6 +133,9 @@
 <p>
 The fast mixer runs at elevated <code>SCHED_FIFO</code> priority. It needs very
 little CPU time, but must run often and with low scheduling jitter.
+<a href="http://en.wikipedia.org/wiki/Jitter">Jitter</a>
+expresses the variation in cycle time: it is the difference between the
+actual cycle time and the expected cycle time.
 Running too late will result in glitches due to underrun. Running
 too early will result in glitches due to pulling from a fast track
 before the track has provided data.
@@ -143,6 +147,9 @@
 Ideally the fast mixer thread never blocks, other than at HAL
 <code>write()</code>. Other occurrences of blocking within the fast mixer are
 considered bugs. In particular, mutexes are avoided.
+Instead, <a href="http://en.wikipedia.org/wiki/Non-blocking_algorithm">non-blocking algorithms</a>
+(also known as lock-free algorithms) are used.
+See <a href="avoiding_pi.html">Avoiding Priority Inversion</a> for more on this topic.
 </p>
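+
+<p>
+The sketch below is purely illustrative; it is not the actual AudioFlinger code,
+and the class and member names are invented.  It shows the essential shape of a
+single-writer, single-reader non-blocking FIFO: each side updates only its own
+index, so no mutex is needed and neither side can block the other.
+</p>
+<pre>
+#include &lt;atomic>
+#include &lt;stdint.h>
+
+// Single-writer, single-reader lock-free FIFO with a fixed power-of-two capacity.
+class SingleReaderWriterFifo {
+public:
+    // Called only from the writer thread.  Returns false if the FIFO is full,
+    // which the caller may treat as an overrun.
+    bool write(int16_t sample) {
+        uint32_t rear = mRear.load(std::memory_order_relaxed);
+        uint32_t front = mFront.load(std::memory_order_acquire);
+        if (rear - front == kCapacity) {
+            return false;
+        }
+        mBuffer[rear &amp; (kCapacity - 1)] = sample;
+        mRear.store(rear + 1, std::memory_order_release);
+        return true;
+    }
+
+    // Called only from the reader thread.  Returns false if the FIFO is empty,
+    // which the caller may treat as an underrun.
+    bool read(int16_t *sample) {
+        uint32_t front = mFront.load(std::memory_order_relaxed);
+        uint32_t rear = mRear.load(std::memory_order_acquire);
+        if (front == rear) {
+            return false;
+        }
+        *sample = mBuffer[front &amp; (kCapacity - 1)];
+        mFront.store(front + 1, std::memory_order_release);
+        return true;
+    }
+
+private:
+    static const uint32_t kCapacity = 256;   // must be a power of two
+    int16_t mBuffer[kCapacity];
+    std::atomic&lt;uint32_t> mFront {0};        // next index to read
+    std::atomic&lt;uint32_t> mRear {0};         // next index to write
+};
+</pre>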
 
 <h4>Relationship to other components</h4>
diff --git a/src/devices/audio/latency_measure.jd b/src/devices/audio/latency_measure.jd
index ae644df..90038bf 100644
--- a/src/devices/audio/latency_measure.jd
+++ b/src/devices/audio/latency_measure.jd
@@ -38,7 +38,7 @@
 see the <a href="testing_circuit.html">Testing circuit</a> for an example test environment.
 </p>
 
-<h3>LED and oscilloscope test</h3>
+<h3 id="ledTest">LED and oscilloscope test</h3>
 <p>
 This test measures latency in relation to the device's LED indicator.
 If your production device does not have an LED, you can install the
@@ -94,7 +94,7 @@
   Round-trip latency is the sum of output latency and input latency.
 </p>
 
-<h3>Larsen test</h3>
+<h3 id="larsenTest">Larsen test</h3>
 <p>
   One of the easiest latency tests is an audio feedback
   (Larsen effect) test. This provides a crude measure of combined output
@@ -119,7 +119,7 @@
   precise output latency or input latency values in isolation, but might be useful
   for establishing rough estimates.</p>
 
-<h3>FunPlug</h3>
+<h3 id="funplug">FunPlug</h3>
 
 <p>
   The <a href="funplug.html">FunPlug</a> dongle is handy for
diff --git a/src/devices/audio/src.jd b/src/devices/audio/src.jd
index 6238770..ab70fee 100644
--- a/src/devices/audio/src.jd
+++ b/src/devices/audio/src.jd
@@ -31,6 +31,7 @@
 <a href="http://en.wikipedia.org/wiki/Resampling_(audio)">Resampling (audio)</a>
 for a generic definition of sample rate conversion, also known as "resampling."
 The remainder of this article describes resampling within Android.
+See <a href="terminology.html#srcTerms">Sample Rate Conversion</a> for related terminology.
 </p>
 
 <p>
diff --git a/src/devices/audio/terminology.jd b/src/devices/audio/terminology.jd
index b1b12b6..bbe4e3f 100644
--- a/src/devices/audio/terminology.jd
+++ b/src/devices/audio/terminology.jd
@@ -51,7 +51,7 @@
 <dd>
 A multiplicative factor less than or equal to 1.0,
 applied to an audio signal to decrease the signal level.
-Compare to "gain".
+Compare to "gain."
 </dd>
 
 <dt>bits per sample or bit depth</dt>
@@ -70,7 +70,7 @@
 To decrease the number of channels, e.g. from stereo to mono, or from 5.1 to stereo.
 This can be accomplished by dropping some channels, mixing channels, or more advanced signal processing.
 Simple mixing without attenuation or limiting has the potential for overflow and clipping.
-Compare to "upmixing".
+Compare to "upmixing."
 </dd>
 
 <dt>duck</dt>
@@ -78,7 +78,7 @@
 To temporarily reduce the volume of one stream, when another stream
 becomes active.  For example, if music is playing and a notification arrives,
 then the music stream could be ducked while the notification plays.
-Compare to "mute".
+Compare to "mute."
 </dd>
 
 <dt>frame</dt>
@@ -96,7 +96,7 @@
 <dd>
 A multiplicative factor greater than or equal to 1.0,
 applied to an audio signal to increase the signal level.
-Compare to "attenuation".
+Compare to "attenuation."
 </dd>
 
 <dt>Hz</dt>
@@ -126,6 +126,20 @@
 To (temporarily) force volume to be zero, independently from the usual volume controls.
 </dd>
 
+<dt>overrun</dt>
+<dd>
+An audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by failure
+to accept supplied data in sufficient time.
+See Wikipedia article <a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>;
+the article titled "buffer overrun" describes an unrelated failure.
+Compare to "underrun."
+</dd>
+
+<dt>panning</dt>
+<dd>
+To direct a signal to a desired position within a stereo or multi-channel field.
+</dd>
+
 <dt>PCM</dt>
 <dd>
 Pulse Code Modulation, the most common low-level encoding of digital audio.
@@ -176,11 +190,19 @@
 sound position beyond stereo left and right.
 </dd>
 
+<dt>underrun</dt>
+<dd>
+An audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by failure
+to supply needed data in sufficient time.
+See Wikipedia article <a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
+Compare to "overrun."
+</dd>
+
 <dt>upmixing</dt>
 <dd>
 To increase the number of channels, e.g. from mono to stereo, or from stereo to surround sound.
 This can be accomplished by duplication, panning, or more advanced signal processing.
-Compare to "downmixing".
+Compare to "downmixing."
 </dd>
 
 <dt>virtualizer</dt>
@@ -231,6 +253,9 @@
 for telephony
 </li>
 
+<li><a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>
+</li>
+
 </ul>
 
 </dd>
@@ -288,8 +313,8 @@
 See these Wikipedia articles:
 <ul>
 <li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
-<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a></li>
-<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a></li>
+<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a>, for control channel</li>
+<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a>, for audio data</li>
 <li><a href="http://en.wikipedia.org/wiki/McASP">McASP</a></li>
 <li><a href="http://en.wikipedia.org/wiki/SLIMbus">SLIMbus</a></li>
 <li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus">SPI</a></li>
@@ -370,6 +395,15 @@
 <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width modulation</a>.
 </dd>
 
+<dt>transducer</dt>
+<dd>
+A transducer converts energy from one form to another.
+In audio, the two forms are an electrical signal and sound pressure:
+a microphone converts sound pressure to an electrical signal,
+and a loudspeaker converts an electrical signal to sound pressure.
+See Wikipedia article
+<a href="http://en.wikipedia.org/wiki/Transducer">Transducer</a>.
+</dd>
+
 </dl>
 
 <h3 id="srcTerms">Sample Rate Conversion</h3>
@@ -514,15 +548,21 @@
 device driver with a C API, which replaces the earlier C++ libaudio.
 </dd>
 
+<dt>FastCapture</dt>
+<dd>
+A thread within AudioFlinger that sends audio data to lower latency "fast tracks"
+and drives the input device when configured for reduced latency.
+</dd>
+
 <dt>FastMixer</dt>
 <dd>
-A thread within AudioFlinger that services lower latency "fast tracks"
-and drives the primary output device.
+A thread within AudioFlinger that receives and mixes audio data from lower latency "fast tracks"
+and drives the primary output device when configured for reduced latency.
 </dd>
 
 <dt>fast track</dt>
 <dd>
-An AudioTrack or AudioRecord client with lower latency but fewer features, on some devices.
+An AudioTrack or AudioRecord client with lower latency but fewer features, on some devices and routes.
 </dd>
 
 <dt>MediaPlayer</dt>
@@ -561,9 +601,10 @@
 
 <dt>OpenSL ES</dt>
 <dd>
-An audio API standard by The Khronos Group. Android versions since
-API level 9 support a native audio API which is based on a subset of
-OpenSL ES 1.0.1.
+An audio API standard by
+<a href="http://www.khronos.org/">The Khronos Group</a>. Android versions since
+API level 9 support a native audio API that is based on a subset of
+<a href="http://www.khronos.org/opensles/">OpenSL ES 1.0.1</a>.
 </dd>
 
 <dt>silent mode</dt>
diff --git a/src/devices/audio/tv.jd b/src/devices/audio/tv.jd
index 8cd97b9..bf60884 100644
--- a/src/devices/audio/tv.jd
+++ b/src/devices/audio/tv.jd
@@ -274,7 +274,7 @@
 
 <p>This section includes common use cases for TV audio.</p>
 
-<h3>TV tuner with speaker output</h3>
+<h3 id="tvSpeaker">TV tuner with speaker output</h3>
 
 <p>When a TV tuner becomes active, the audio routing API creates an audio patch between the tuner
 and the default output (e.g. the speaker). The tuner output does not require decoding, but final
@@ -285,7 +285,7 @@
 <strong>Figure 2.</strong> Audio Patch for TV tuner with speaker output.</p>
 
 
-<h3>HDMI OUT during live TV</h3>
+<h3 id="hdmiOut">HDMI OUT during live TV</h3>
 
 <p>A user is watching live TV then switches to the HDMI audio output (Intent.ACTION_HDMI_AUDIO_PLUG)
 . The output device of all output_streams changes to the HDMI_OUT port, and the TIF manager changes
diff --git a/src/devices/audio/usb.jd b/src/devices/audio/usb.jd
index a033d1b..e805ac4 100644
--- a/src/devices/audio/usb.jd
+++ b/src/devices/audio/usb.jd
@@ -38,7 +38,10 @@
 </p>
 
 <p>
-End users should see the <a href="https://support.google.com/android/">Help Center</a> instead.
+End users of Nexus devices should see the article
+<a href="https://support.google.com/nexus/answer/6127700">Record and play back audio using USB host mode</a>
+at the
+<a href="https://support.google.com/nexus/">Nexus Help Center</a> instead.
 Though this article is not oriented towards end users,
 certain audiophile consumers may find portions of interest.
 </p>
@@ -486,7 +489,7 @@
 from the USB controllers.
 </p>
 
-<h3 id="applications">Applications</h3>
+<h3 id="hostApplications">Host mode applications</h3>
 
 <p>
 Typical USB host mode audio applications include:
@@ -514,6 +517,16 @@
 <a href="http://developer.android.com/tools/index.html">Settings / Developer Options</a>.
 </p>
 
+<h3 id="hostDebugging">Debugging while in host mode</h3>
+
+<p>
+While in USB host mode, adb debugging over USB is unavailable.
+See the <a href="http://developer.android.com/tools/help/adb.html#wireless">Wireless usage</a>
+section of
+<a href="http://developer.android.com/tools/help/adb.html">Android Debug Bridge</a>
+for an alternative.
+</p>
+
 <h2 id="compatibility">Implementing USB audio</h2>
 
 <h3 id="recommendationsPeripheral">Recommendations for audio peripheral vendors</h3>
@@ -538,6 +551,7 @@
 </p>
 
 <ul>
+<li>design hardware to support USB host mode</li>
 <li>enable all kernel features needed: USB host mode, USB audio, isochronous transfer mode</li>
 <li>keep up-to-date with recent kernel releases and patches;
 despite the noble goal of class compliance, there are extant audio peripherals
@@ -545,6 +559,7 @@
 and recent kernels have workarounds for such quirks
 </li>
 <li>enable USB audio policy as described below</li>
+<li>add audio.usb.default to PRODUCT_PACKAGES in device.mk</li>
 <li>test for inter-operability with common USB audio peripherals</li>
 </ul>