Docs: Starting updates for the K release.
Bug: 9391856
Staging location: http://claym.mtv.corp.google.com:8088/devices/audio.html

Change-Id: I58629ae54270ec949a47258ce63eae9643843deb
diff --git a/src/devices/audio.jd b/src/devices/audio.jd
index 8b58d9e..1192830 100644
--- a/src/devices/audio.jd
+++ b/src/devices/audio.jd
@@ -25,7 +25,7 @@
 </div>
 
 <p>
-  Android's audio HAL connects the higher level, audio-specific
+  Android's audio Hardware Abstraction Layer (HAL) connects the higher level, audio-specific
   framework APIs in <a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a>
   to the underlying audio driver and hardware. 
 </p>
@@ -45,7 +45,7 @@
     At the application framework level is the app code, which utilizes the
     <a href="http://developer.android.com/reference/android/media/package-summary.html">android.media</a>
     APIs to interact with the audio hardware. Internally, this code calls corresponding JNI glue
-    classes to access the native code that interacts with the auido hardware.
+    classes to access the native code that interacts with the audio hardware.
   </dd>
   <dt>
     JNI
@@ -82,9 +82,11 @@
     HAL
   </dt>
   <dd>
-    The hardware abstraction layer defines the standard interface that audio services calls into
+    The HAL defines the standard interface that audio services call into
     and that you must implement to have your audio hardware function correctly. The audio HAL
-    interfaces are located in <code>hardware/libhardware/include/hardware</code>.
+    interfaces are located in
+<code>hardware/libhardware/include/hardware</code>. See <a
+href="http://source.android.com/devices/reference/audio_8h_source.html">audio.h</a> for additional details.
   </dd>
   <dt>
     Kernel Driver
@@ -130,14 +132,15 @@
    header files for a reference of the properties that you can define.
 </p>
 <h3 id="multichannel">Multi-channel support</h3>
-<p>If your hardware and driver supports multi-channel audio via HDMI, you can output the audio stream
+<p>If your hardware and driver support multichannel audio via HDMI, you can output the audio stream
   directly to the audio hardware. This bypasses the AudioFlinger mixer so it doesn't get downmixed to two channels. 
   
   <p>
-  The audio HAL must expose whether an output stream profile supports multi-channel audio capabilities.
+  The audio HAL must expose whether an output stream profile supports multichannel audio capabilities.
   If the HAL exposes its capabilities, the default policy manager allows multichannel playback over 
   HDMI.</p>
- <p>For more implementation details, see the <code>device/samsung/tuna/audio/audio_hw.c</code> in the Jellybean release.</p>
+ <p>For more implementation details, see the
+<code>device/samsung/tuna/audio/audio_hw.c</code> in the Android 4.1 release.</p>
 
   <p>
   To specify that your product contains a multichannel audio output, edit the <code>audio_policy.conf</code> file to describe the multichannel
@@ -165,18 +168,18 @@
 </pre>
 
 
-  <p>If your product does not support multichannel audio, AudioFlinger's mixer downmixes the content to stereo
+  <p>AudioFlinger's mixer downmixes the content to stereo
     automatically when sent to an audio device that does not support multichannel audio.</p>
 </p>
 
-<h3 id="codecs">Media Codecs</h3>
+<h3 id="codecs">Media codecs</h3>
 
-<p>Ensure that the audio codecs that your hardware and drivers support are properly declared for your product. See
+<p>Ensure the audio codecs your hardware and drivers support are properly declared for your product. See
   <a href="media.html#expose"> Exposing Codecs to the Framework</a> for information on how to do this.
 </p>
 
 <h2 id="configuring">
-  Configuring the Shared Library
+  Configuring the shared library
 </h2>
 <p>
   You need to package the HAL implementation into a shared library and copy it to the
@@ -192,7 +195,7 @@
 LOCAL_MODULE := audio.primary.&lt;device_name&gt;
 </pre>
     <p>
-      Notice that your library must be named <code>audio_primary.&lt;device_name&gt;.so</code> so
+      Note that your library must be named <code>audio.primary.&lt;device_name&gt;.so</code> so
       that Android can correctly load the library. The "<code>primary</code>" portion of this
       filename indicates that this shared library is for the primary audio hardware located on the
       device. The module names <code>audio.a2dp.&lt;device_name&gt;</code> and
@@ -249,24 +252,27 @@
   </li>
 </ol>
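As a sketch of the packaging step, an <code>Android.mk</code> for a primary audio HAL might look like the following. The source file and library names here are illustrative placeholders ("tuna" follows the example device mentioned elsewhere in this document); adjust them for your device.

```makefile
# Illustrative Android.mk for a device's primary audio HAL.
# Module and file names are placeholders, not a definitive recipe.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
# Builds audio.primary.<device_name>.so; "tuna" is used as an example.
LOCAL_MODULE := audio.primary.tuna
LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
LOCAL_SRC_FILES := audio_hw.c
LOCAL_SHARED_LIBRARIES := liblog libcutils
LOCAL_MODULE_TAGS := optional
include $(BUILD_SHARED_LIBRARY)
```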
 
-<h2 id="preprocessing">Audio preprocessing effects</h2>
+<h2 id="preprocessing">Audio pre-processing effects</h2>
 <p>
-The Android platform supports audio effects on supported devices in the
+The Android platform provides audio effects on supported devices in the
 <a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">audiofx</a>
 package, which is available for developers to access. For example, on the Nexus 10, the following pre-processing effects are supported: </p>
 <ul>
-  <li><a href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">Acoustic Echo Cancellation</a></li>
-  <li><a href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">Automatic Gain Control</a></li>
-  <li><a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression</a></li>
+  <li><a
+href="http://developer.android.com/reference/android/media/audiofx/AcousticEchoCanceler.html">Acoustic Echo Cancellation</a></li>
+  <li><a
+href="http://developer.android.com/reference/android/media/audiofx/AutomaticGainControl.html">Automatic Gain Control</a></li>
+  <li><a
+href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">Noise Suppression</a></li>
 </ul>
 </p>
 
 
 <p>Pre-processing effects are always paired with the use case mode in which the pre-processing is requested. In Android
-app development, a use case is referred to as an <code>AudioSource</code>, and app developers
-request to use the <code>AudioSource</code> abstraction instead of the actual audio hardware device to use.
+app development, a use case is referred to as an <code>AudioSource</code>, and app developers
+request to use the <code>AudioSource</code> abstraction instead of the actual audio hardware device.
 The Android Audio Policy Manager maps an <code>AudioSource</code> to the actual hardware with <code>AudioPolicyManagerBase::getDeviceForInputSource(int 
-inputSource)</code>. In Android 4.2, the following sources are exposed to developers:
+inputSource)</code>. The following sources are exposed to developers:
 </p>
 <ul>
 <code><li>android.media.MediaRecorder.AudioSource.CAMCORDER</li></code>
@@ -288,11 +294,13 @@
 <p class="warning"><strong>Warning:</strong> For the <code>VOICE_RECOGNITION</code> use case, do not enable
 the noise suppression pre-processing effect. It should not be turned on by default when recording from this audio source,
 and you should not enable it in your own audio_effects.conf file. Turning on the effect by default will cause the device to fail
-the <a href="/compatibility/index.html"> compatibility requirement </a>
-regardless of whether is was on by default due to configuration file, or the audio HAL implementation's default behavior.</p>
+the <a href="/compatibility/index.html">compatibility requirement</a> regardless of whether it was enabled by default by the
+configuration file or by the audio HAL implementation's default behavior.</p>
 
 <p>The following example enables pre-processing for the VoIP <code>AudioSource</code> and Camcorder <code>AudioSource</code>.
-By declaring the <code>AudioSource</code> configuration in this manner, the framework will automatically request from the audio HAL the use of those effects</p>
+By declaring the <code>AudioSource</code> configuration in this manner, the
+framework automatically requests the use of those effects from the audio
+HAL.</p>
 
 <pre>
 pre_processing {
diff --git a/src/devices/audio_avoiding_pi.jd b/src/devices/audio_avoiding_pi.jd
index 184a150..a8cd208 100644
--- a/src/devices/audio_avoiding_pi.jd
+++ b/src/devices/audio_avoiding_pi.jd
@@ -11,34 +11,34 @@
 
 <p>
 This article explains how the Android's audio system attempts to avoid
-priority inversion, as of the Android 4.1 (Jellybean) release,
+priority inversion, as of the Android 4.1 release,
 and highlights techniques that you can use too.
 </p>
 
 <p>
 These techniques may be useful to developers of high-performance
 audio apps, OEMs, and SoC providers who are implementing an audio
-HAL. Please note that implementing these techniques is not
+HAL. Please note that implementing these techniques is not
 guaranteed to prevent glitches or other failures, particularly if
 used outside of the audio context.
-Your results may vary and you should conduct your own
+Your results may vary, and you should conduct your own
 evaluation and testing.
 </p>
 
 <h2 id="background">Background</h2>
 
 <p>
-The Android audio server "AudioFlinger" and AudioTrack/AudioRecord
+The Android AudioFlinger audio server and AudioTrack/AudioRecord
 client implementation are being re-architected to reduce latency.
-This work started in Android 4.1 (Jellybean), continued in 4.2
-(Jellybean MR1), and more improvements are likely in "K".
+This work started in Android 4.1, continued in 4.2 and 4.3, and brings
+further improvements in Android 4.4.
 </p>
 
 <p>
-The lower latency needed many changes throughout the system. One
-important change was to  assign CPU resources to time-critical
+To achieve this lower latency, many changes were needed throughout the system. One
+important change was to assign CPU resources to time-critical
 threads with a more predictable scheduling policy. Reliable scheduling
-allows the audio buffer sizes and counts to be reduced, while still
+allows the audio buffer sizes and counts to be reduced while still
 avoiding artifacts due to underruns.
 </p>
 
@@ -64,7 +64,7 @@
 
 <p>
 In the Android audio implementation, priority inversion is most
-likely to occur in these places, and so we focus attention here:
+likely to occur in these places, so we focus attention here:
 </p>
 
 <ul>
@@ -80,7 +80,7 @@
 </li>
 
 <li>
-within the audio HAL implementation, e.g. for telephony or echo cancellation
+within the audio Hardware Abstraction Layer (HAL) implementation, e.g. for telephony or echo cancellation
 </li>
 
 <li>
@@ -119,7 +119,7 @@
 
 <p>
 Disabling interrupts is not feasible in Linux user space, and does
-not work for SMP.
+not work for symmetric multiprocessor (SMP) systems.
 </p>
 
 
@@ -162,15 +162,15 @@
 </ul>
 
 <p>
-All of these return the previous value, and include the necessary
+All of these return the previous value and include the necessary
 SMP barriers. The disadvantage is they can require unbounded retries.
 In practice, we've found that the retries are not a problem.
 </p>
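For illustration, the retry behavior can be seen in a minimal compare-and-swap loop in C11. This is a sketch, not the AudioFlinger code: on contention the weak compare-exchange fails, reloads the current value, and the loop tries again, potentially without bound.

```c
#include <stdatomic.h>

/* Illustrative only: a lock-free add built on a compare-and-swap
   retry loop. Each failed compare_exchange reloads the value another
   thread stored into 'old', and the loop retries. */
static _Atomic int counter;

void lockfree_add(int delta) {
    int old = atomic_load_explicit(&counter, memory_order_relaxed);
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta)) {
        /* 'old' was refreshed with the current value; retry. */
    }
}

int counter_value(void) {
    return atomic_load(&counter);
}
```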
 
 <p>
-Note: atomic operations and their interactions with memory barriers
+<strong>Note</strong>: Atomic operations and their interactions with memory barriers
 are notoriously badly misunderstood and used incorrectly. We include
-these here for completeness, but recommend you also read the article
+these methods here for completeness but recommend you also read the article
 <a href="https://developer.android.com/training/articles/smp.html">
 SMP Primer for Android</a>
 for further information.
@@ -202,7 +202,7 @@
 When state does need to be shared, limit the state to the
 maximum-size
 <a href="http://en.wikipedia.org/wiki/Word_(computer_architecture)">word</a>
-that can be accessed atomically in one bus operation
+that can be accessed atomically in one bus operation
 without retries.
 </li>
 
@@ -244,7 +244,7 @@
 </p>
 
 <p>
-In Android 4.2 (Jellybean MR1), you can find our non-blocking,
+Starting in Android 4.2, you can find our non-blocking,
 single-reader/writer classes in these locations:
 </p>
 
@@ -267,14 +267,14 @@
 <p>
 These were designed specifically for AudioFlinger and are not
 general-purpose. Non-blocking algorithms are notorious for being
-difficult to debug. You can look at this code as a model, but be
+difficult to debug. You can look at this code as a model, but be
 aware there may be bugs, and the classes are not guaranteed to be
 suitable for other purposes.
 </p>
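To give the flavor of such a class, here is a minimal single-reader/single-writer FIFO sketch in C11. It is not the AudioFlinger implementation; the fixed power-of-two capacity and free-running counters are simplifying assumptions.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of a single-producer/single-consumer lock-free FIFO.
   Capacity must be a power of two. 'head' is written only by the
   producer, 'tail' only by the consumer. */
#define FIFO_SIZE 8

typedef struct {
    int buf[FIFO_SIZE];
    _Atomic unsigned head;  /* count of elements ever pushed */
    _Atomic unsigned tail;  /* count of elements ever popped */
} spsc_fifo;

bool fifo_push(spsc_fifo *f, int v) {
    unsigned h = atomic_load_explicit(&f->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&f->tail, memory_order_acquire);
    if (h - t == FIFO_SIZE)
        return false;  /* full */
    f->buf[h % FIFO_SIZE] = v;
    atomic_store_explicit(&f->head, h + 1, memory_order_release);
    return true;
}

bool fifo_pop(spsc_fifo *f, int *out) {
    unsigned t = atomic_load_explicit(&f->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&f->head, memory_order_acquire);
    if (h == t)
        return false;  /* empty */
    *out = f->buf[t % FIFO_SIZE];
    atomic_store_explicit(&f->tail, t + 1, memory_order_release);
    return true;
}
```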
 
 <p>
 For developers, we may update some of the sample OpenSL ES application
-code to use non-blocking, or referencing a non-Android open source
+code to use non-blocking algorithms or reference a non-Android open source
 library.
 </p>
 
diff --git a/src/devices/audio_latency.jd b/src/devices/audio_latency.jd
index 476842b..2a1d2d7 100644
--- a/src/devices/audio_latency.jd
+++ b/src/devices/audio_latency.jd
@@ -26,8 +26,10 @@
 
 <p>Audio latency is the time delay as an audio signal passes through a system.
   For a complete description of audio latency for the purposes of Android
-  compatibility, see <em>Section 5.4 Audio Latency</em>
+  compatibility, see <em>Section 5.5 Audio Latency</em>
   in the <a href="http://source.android.com/compatibility/index.html">Android CDD</a>.
+  See <a href="latency_design.html">Design For Reduced Latency</a> for an 
+  understanding of Android's audio latency-reduction efforts.
 </p>
 
 <h2 id="contributors">Contributors to Latency</h2>
@@ -37,8 +39,8 @@
   but a similar discussion applies to input latency.
 </p>
 <p>
-  Assuming that the analog circuitry does not contribute significantly.
-  Then the major surface-level contributors to audio latency are the following:
+  Assuming the analog circuitry does not contribute significantly, the major 
+surface-level contributors to audio latency are the following:
 </p>
 
 <ul>
@@ -53,14 +55,14 @@
   The reason is that buffer count and buffer size are more of an
   <em>effect</em> than a <em>cause</em>.  What usually happens is that
   a given buffer scheme is implemented and tested, but during testing, an audio
-  underrun is heard as a "click" or "pop".  To compensate, the
+  underrun is heard as a "click" or "pop."  To compensate, the
   system designer then increases buffer sizes or buffer counts.
   This has the desired result of eliminating the underruns, but it also
   has the undesired side effect of increasing latency.
 </p>
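The cost of that compensation can be made concrete with a rough calculation: the latency contributed by buffering alone is buffer size times buffer count divided by sample rate. The values below are illustrative, not taken from any particular device.

```c
/* Rough, illustrative model of the latency contributed by the
   buffer pipeline alone (ignores DSP and analog stages).
   Returns whole milliseconds (truncated). */
int buffer_latency_ms(int frames_per_buffer, int buffer_count,
                      int sample_rate_hz) {
    return frames_per_buffer * buffer_count * 1000 / sample_rate_hz;
}
```

For example, two 240-frame buffers at 48 kHz contribute about 10 ms; four 1024-frame buffers at 44.1 kHz contribute about 92 ms.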
 
 <p>
-  A better approach is to understand the underlying causes of the
+  A better approach is to understand the causes of the
   underruns and then correct those.  This eliminates the
-  audible artifacts and may even permit even smaller or fewer buffers
+  audible artifacts and may permit even smaller or fewer buffers
   and thus reduce latency.
@@ -95,7 +97,7 @@
 
 <p>
   The obvious solution is to avoid CFS for high-performance audio
-  threads. Beginning with Android 4.1 (Jelly Bean), such threads now use the
+  threads. Beginning with Android 4.1, such threads now use the
   <code>SCHED_FIFO</code> scheduling policy rather than the <code>SCHED_NORMAL</code> (also called
   <code>SCHED_OTHER</code>) scheduling policy implemented by CFS.
 </p>
@@ -107,17 +109,17 @@
   non-audio user threads with policy <code>SCHED_FIFO</code>. The available <code>SCHED_FIFO</code>
   priorities range from 1 to 99.  The audio threads run at priority
   2 or 3.  This leaves priority 1 available for lower priority threads,
-  and priorities 4 to 99 for higher priority threads.  We recommend that
+  and priorities 4 to 99 for higher priority threads.  We recommend 
   you use priority 1 whenever possible, and reserve priorities 4 to 99 for
   those threads that are guaranteed to complete within a bounded amount
-  of time, and are known to not interfere with scheduling of audio threads.
+  of time and are known to not interfere with scheduling of audio threads.
 </p>
 
 <h3>Scheduling latency</h3>
 <p>
   Scheduling latency is the time between when a thread becomes
   ready to run, and when the resulting context switch completes so that the
-  thread actually runs on a CPU. The shorter the latency the better and 
+  thread actually runs on a CPU. The shorter the latency the better, and 
   anything over two milliseconds causes problems for audio. Long scheduling
   latency is most likely to occur during mode transitions, such as
   bringing up or shutting down a CPU, switching between a security kernel
@@ -129,7 +131,7 @@
 <p>
   In many designs, CPU 0 services all external interrupts.  So a
   long-running interrupt handler may delay other interrupts, in particular
-  audio DMA completion interrupts. Design interrupt handlers
+  audio direct memory access (DMA) completion interrupts. Design interrupt handlers
   to finish quickly and defer any lengthy work to a thread (preferably
   a CFS thread or <code>SCHED_FIFO</code> thread of priority 1).
 </p>
@@ -148,7 +150,8 @@
 
 <p>
   There are several techniques available to measure output latency,
-  with varying degrees of accuracy and ease of running.
+  with varying degrees of accuracy and ease of running, described below. Also
+see the <a href="testing_circuit.html">Testing circuit</a> for an example test environment.
 </p>
 
 <h3>LED and oscilloscope test</h3>
@@ -167,8 +170,8 @@
 </p>
 
 <ul>
-  <li>Use a General Purpose Input/Output (GPIO) pin for the same purpose</li>
-  <li>Use JTAG or another debugging port</li>
+  <li>Use a General Purpose Input/Output (GPIO) pin for the same purpose.</li>
+  <li>Use JTAG or another debugging port.</li>
   <li>Use the screen backlight. This might be risky as the
-  backlight may have a non-neglible latency, and can contribute to
+  backlight may have a non-negligible latency, and can contribute to
   an inaccurate latency reading.
@@ -183,8 +186,8 @@
 
   <p class="note"><b>Note:</b> To get useful results, it is crucial to use the correct
   APIs in the test app so that you're exercising the fast audio output path.
-  See the separate document "Application developer guidelines for reduced
-  audio latency". <!-- where is this ?-->
+  See <a href="latency_design.html">Design For Reduced Latency</a> for
+  background.
   </p>
   </li>
   <li>Place a light sensor next to the LED.</li>
@@ -198,7 +201,7 @@
   <p>The difference in time is the approximate audio output latency,
   assuming that the LED latency and light sensor latency are both zero.
   Typically, the LED and light sensor each have a relatively low latency
-  on the order of 1 millisecond or less, which is sufficiently low enough
+  on the order of one millisecond or less, which is sufficiently low
   to ignore.</p>
 
 <h3>Larsen test</h3>
@@ -206,7 +209,8 @@
   One of the easiest latency tests is an audio feedback
   (Larsen effect) test. This provides a crude measure of combined output
   and input latency by timing an impulse response loop. This test is not very useful
-  by itself because of the nature of the test, but</p>
+  by itself because of the nature of the test, but it can be useful for calibrating 
+  other tests.</p>
 
 <p>To conduct this test:</p>
 <ol>
@@ -219,7 +223,7 @@
 
   <p>This method does not break down the
   component times, which is important when the output latency
-  and input latency are independent, so this method is not recommended for measuring output latency, but might be useful
-  to help measure output latency.</p>
+  and input latency are independent. So this method is not recommended for measuring output latency, but might be useful
+  to help measure the combined output and input latency.</p>
 
 <h2 id="measuringInput">Measuring Input Latency</h2>
@@ -270,7 +274,7 @@
 </p>
   <ul>
   <li>Configure any DSP after the app processor so that it adds
-  minimal latency</li>
+  minimal latency.</li>
   <li>Run tests under different conditions
   such as having the screen on or off, USB plugged in or unplugged,
   WiFi on or off, Bluetooth on or off, and telephony and data radios
@@ -278,7 +282,7 @@
   <li>Select relatively quiet music that you're very familiar with, and which is easy
   to hear underruns in.</li>
   <li>Use wired headphones for extra sensitivity.</li>
-  <li>Give yourself breaks so that you don't experience "ear fatigue".</li>
+  <li>Give yourself breaks so that you don't experience "ear fatigue."</li>
   </ul>
 
 <p>
@@ -297,11 +301,11 @@
 
 <p>
   The output of <code>dumpsys media.audio_flinger</code> also contains a
-  useful section called "simple moving statistics". This has a summary
+  useful section called "simple moving statistics." This has a summary
   of the variability of elapsed times for each audio mix and I/O cycle.
   Ideally, all the time measurements should be about equal to the mean or
   nominal cycle time. If you see a very low minimum or high maximum, this is an
-  indication of a problem, which is probably a high scheduling latency or interrupt
+  indication of a problem, likely a high scheduling latency or interrupt
   disable time. The <i>tail</i> part of the output is especially helpful,
   as it highlights the variability beyond +/- 3 standard deviations.
-</p>
\ No newline at end of file
+</p>
diff --git a/src/devices/audio_terminology.jd b/src/devices/audio_terminology.jd
index eee03aa..23592d4 100644
--- a/src/devices/audio_terminology.jd
+++ b/src/devices/audio_terminology.jd
@@ -76,7 +76,7 @@
 <dd>
 Number of frames per second;
 note that "frame rate" is thus more accurate,
-but "sample rate" is conventionally used to mean "frame rate".
+but "sample rate" is conventionally used to mean "frame rate."
 </dd>
 
 <dt>stereo</dt>
@@ -89,7 +89,7 @@
 <h2 id="androidSpecificTerms">Android-Specific Terms</h2>
 
 <p>
-These are terms that are specific to Android audio framework, or that
+These are terms specific to the Android audio framework, or that
 may have a special meaning within Android beyond their general meaning.
 </p>
 
@@ -135,7 +135,8 @@
 <dt>AudioRecord</dt>
 <dd>
 The primary low-level client API for receiving data from an audio
-input device such as microphone.  The data is usually in PCM format.
+input device such as a microphone.  The data is usually in pulse-code modulation
+(PCM) format.
 </dd>
 
 <dt>AudioResampler</dt>
@@ -187,7 +188,7 @@
 <dt>MediaPlayer</dt>
 <dd>
 A higher-level client API than AudioTrack, for playing either encoded
-content, or content which includes multi-media audio and video tracks.
+content, or content which includes multimedia audio and video tracks.
 </dd>
 
 <dt>media.log</dt>
@@ -215,7 +216,7 @@
 <dd>
 A thread within AudioFlinger that services most full-featured
 AudioTrack clients, and either directly drives an output device or feeds
-it's sub-mix into FastMixer via a pipe.
+its sub-mix into FastMixer via a pipe.
 </dd>
 
 <dt>OpenSL ES</dt>
@@ -243,7 +244,7 @@
 <dt>tinyalsa</dt>
 <dd>
 A small user-mode API above ALSA kernel with BSD license, recommended
-for use by HAL implementations.
+for use in HAL implementations.
 </dd>
 
 <dt>track</dt>
diff --git a/src/devices/audio_warmup.jd b/src/devices/audio_warmup.jd
index 4beb7e0..ba1217c 100644
--- a/src/devices/audio_warmup.jd
+++ b/src/devices/audio_warmup.jd
@@ -40,12 +40,13 @@
   and reports it as part of the output of the <code>dumpsys media.audio_flinger</code> command.
   At warmup, FastMixer calls <code>write()</code>
   repeatedly until the time between two <code>write()</code>s is the amount expected.
-  FastMixer determines audio warmup by seeing how long a HAL <code>write()</code> takes to stabilize. 
+  FastMixer determines audio warmup by seeing how long a Hardware Abstraction
+Layer (HAL) <code>write()</code> takes to stabilize. 
 </p>
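That stabilization check can be sketched as a pure function over observed <code>write()</code> intervals. This is a simplification for illustration; the tolerance and the single-interval stability criterion are assumptions, and FastMixer's real logic differs in detail.

```c
/* Returns the number of write() calls needed before the interval
   between writes first matches the expected period within the given
   tolerance, or -1 if it never stabilizes. Simplified sketch. */
int warmup_write_count(const int *deltas_us, int n,
                       int expected_us, int tolerance_us) {
    for (int i = 0; i < n; i++) {
        int diff = deltas_us[i] - expected_us;
        if (diff < 0)
            diff = -diff;
        if (diff <= tolerance_us)
            return i + 1;
    }
    return -1;
}
```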
 
-<p>To measure audio warmup, do the following
-steps for the built-in speaker and wired headphones and at different times after booting.
-Warmup times are usually different for each output device and right after booting the device:</p>
+<p>To measure audio warmup, follow these steps for the built-in speaker and wired headphones 
+  and at different times after booting. Warmup times usually vary by output device 
+  and by how recently the device was booted:</p>
 
 <ol>
   <li>Ensure that FastMixer is enabled.</li>
@@ -75,7 +76,7 @@
 </p>
 </li>
 <li>
-  Take five measurements and report them all, as well as the mean.
+  Take five measurements and record them all, as well as the mean.
   If they are not all approximately the same,
   then it’s likely that a measurement is incorrect.
   For example, if you don’t wait long enough after the audio has been off,
@@ -102,7 +103,7 @@
   <li>Good circuit design</li>
   <li>Accurate time delays in kernel device driver</li>
   <li>Performing independent warmup operations concurrently rather than sequentially</li>
-  <li>Leaving circuits powered on or not reconfiguring clocks (increases idle power consumption).
+  <li>Leaving circuits powered on or not reconfiguring clocks (increases idle power consumption)</li>
   <li>Caching computed parameters</li>
   </ul>
   However, beware of excessive optimization. You may find that you
diff --git a/src/devices/latency_design.jd b/src/devices/latency_design.jd
index 8e6d766..eb503f3 100644
--- a/src/devices/latency_design.jd
+++ b/src/devices/latency_design.jd
@@ -10,12 +10,12 @@
 </div>
 
 <p>
-Android 4.1 (Jelly Bean) release introduced internal framework changes for
+The Android 4.1 release introduced internal framework changes for
 a lower latency audio output path. There were no public client API
 or HAL API changes. This document describes the initial design,
 which is expected to evolve over time.
 Having a good understanding of this design should help device OEM and
-SoC vendors to implement the design correctly on their particular devices
+SoC vendors implement the design correctly on their particular devices
 and chipsets.  This article is not intended for application developers.
 </p>
 
@@ -42,7 +42,7 @@
 </p>
 
 <p>
-AudioFlinger (server) reviews the <code>TRACK_FAST</code> request and may
+The AudioFlinger audio server reviews the <code>TRACK_FAST</code> request and may
 optionally deny the request at server level. It informs the client
 whether or not the request was accepted, via bit <code>CBLK_FAST</code> of the
 shared memory control block.
@@ -61,8 +61,8 @@
 </ul>
 
 <p>
-If the client's request was accepted, it is called a "fast track",
-otherwise it's called a "normal track".
+If the client's request was accepted, it is called a "fast track."
+Otherwise it's called a "normal track."
 </p>
 
 <h2 id="mixerThreads">Mixer Threads</h2>
@@ -102,8 +102,8 @@
 <h4>Period</h4>
 
 <p>
-The fast mixer runs periodically, with a recommended period of 2
-to 3 ms, or slightly higher if needed for scheduling stability.
+The fast mixer runs periodically, with a recommended period of two
+to three milliseconds (ms), or slightly higher if needed for scheduling stability.
 This number was chosen so that, accounting for the complete
 buffer pipeline, the total latency is on the order of 10 ms. Smaller
 values are possible but may result in increased power consumption
@@ -169,7 +169,7 @@
 
 <p>
 The period is computed to be the first integral multiple of the
-fast mixer period that is >= 20 milliseconds.
+fast mixer period that is >= 20 ms.
 </p>
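That rule can be written out explicitly as a small rounding-up computation. This is a sketch derived from the rule stated above, not the actual AudioFlinger code.

```c
/* First integral multiple of the fast mixer period that is
   >= 20 ms, per the rule above. Assumes a positive period. */
int normal_mixer_period_ms(int fast_mixer_period_ms) {
    int p = fast_mixer_period_ms;
    return ((20 + p - 1) / p) * p;  /* round 20 up to a multiple of p */
}
```

For example, a 3 ms fast mixer period yields a 21 ms normal mixer period, while 2, 4, or 5 ms periods all yield 20 ms.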
 
 <h4>Scheduling</h4>
diff --git a/src/devices/testing_circuit.jd b/src/devices/testing_circuit.jd
index baee474..3ad6575 100644
--- a/src/devices/testing_circuit.jd
+++ b/src/devices/testing_circuit.jd
@@ -13,7 +13,8 @@
 The file <a href="http://developer.android.com/downloads/partner/audio/av_sync_board.zip">av_sync_board.zip</a>
 contains CAD files for an A/V sync and latency testing
 printed circuit board (PCB).
-The files include a fabrication drawing, EAGLE CAD, schematic, and BOM.
+The files include a fabrication drawing, EAGLE CAD, schematic, and BOM. See <a
+href="audio_latency.html">Audio Latency</a> for recommended testing methods.
 </p>
 
 <p>
@@ -28,7 +29,8 @@
 
 <p>
-This design is supplied "as is", and we aren't be responsible for any errors in the design.
+This design is supplied "as is," and we aren't responsible for any errors in the design.
-But if you have any suggestions for improvement, please post to android-porting group.
+But if you have any suggestions for improvement, please post to the <a
+href="https://groups.google.com/forum/#!forum/android-porting">android-porting</a> group.
 </p>
 
 <p>