Fix markup for audio pages: tags, entities, etc.
Change-Id: I6053a188fb51c9c8a5f7807780094c2dde543e2c
diff --git a/src/devices/audio/attributes.jd b/src/devices/audio/attributes.jd
index 473a04e..0f4beef 100644
--- a/src/devices/audio/attributes.jd
+++ b/src/devices/audio/attributes.jd
@@ -150,7 +150,7 @@
<p>Application developers should use audio attributes when creating or updating applications for
Android 5.0. However, applications are not required to take advantage of attributes; they can
handle legacy stream types only or remain unaware of attributes (i.e. a generic media player that
-doesn’t know anything about the content it’s playing).</p>
+doesn't know anything about the content it's playing).</p>
<p>In such cases, the framework maintains backwards compatibility with older devices and Android
releases by automatically translating legacy audio stream types to audio attributes. However, the
@@ -166,7 +166,7 @@
</tr>
<tr>
<td>
- <code>CONTENT_TYPE_SPEECH</code><br>
+ <code>CONTENT_TYPE_SPEECH</code><br />
<code>USAGE_VOICE_COMMUNICATION</code>
</td>
<td>
@@ -175,7 +175,7 @@
</tr>
<tr>
<td>
- <code>CONTENT_TYPE_SONIFICATION</code><br>
+ <code>CONTENT_TYPE_SONIFICATION</code><br />
<code>USAGE_ASSISTANCE_SONIFICATION</code>
</td>
<td>
@@ -184,7 +184,7 @@
</tr>
<tr>
<td>
- <code>CONTENT_TYPE_SONIFICATION</code><br>
+ <code>CONTENT_TYPE_SONIFICATION</code><br />
<code>USAGE_NOTIFICATION_RINGTONE</code>
</td>
<td>
@@ -193,12 +193,12 @@
</tr>
<tr>
<td>
- <code>CONTENT_TYPE_MUSIC</code><br>
- <code>USAGE_UNKNOWN</code><br>
- <code>USAGE_MEDIA</code><br>
- <code>USAGE_GAME</code><br>
- <code>USAGE_ASSISTANCE_ACCESSIBILITY</code><br>
- <code>USAGE_ASSISTANCE_NAVIGATION_GUIDANCE</code><br>
+ <code>CONTENT_TYPE_MUSIC</code><br />
+ <code>USAGE_UNKNOWN</code><br />
+ <code>USAGE_MEDIA</code><br />
+ <code>USAGE_GAME</code><br />
+ <code>USAGE_ASSISTANCE_ACCESSIBILITY</code><br />
+ <code>USAGE_ASSISTANCE_NAVIGATION_GUIDANCE</code>
</td>
<td>
<code>STREAM_MUSIC</code>
@@ -206,7 +206,7 @@
</tr>
<tr>
<td>
- <code>CONTENT_TYPE_SONIFICATION</code><br>
+ <code>CONTENT_TYPE_SONIFICATION</code><br />
<code>USAGE_ALARM</code>
</td>
<td>
@@ -215,12 +215,12 @@
</tr>
<tr>
<td>
- <code>CONTENT_TYPE_SONIFICATION</code><br>
- <code>USAGE_NOTIFICATION</code><br>
- <code>USAGE_NOTIFICATION_COMMUNICATION_REQUEST</code><br>
- <code>USAGE_NOTIFICATION_COMMUNICATION_INSTANT</code><br>
- <code>USAGE_NOTIFICATION_COMMUNICATION_DELAYED</code><br>
- <code>USAGE_NOTIFICATION_EVENT</code><br>
+ <code>CONTENT_TYPE_SONIFICATION</code><br />
+ <code>USAGE_NOTIFICATION</code><br />
+ <code>USAGE_NOTIFICATION_COMMUNICATION_REQUEST</code><br />
+ <code>USAGE_NOTIFICATION_COMMUNICATION_INSTANT</code><br />
+ <code>USAGE_NOTIFICATION_COMMUNICATION_DELAYED</code><br />
+ <code>USAGE_NOTIFICATION_EVENT</code>
</td>
<td>
<code>STREAM_NOTIFICATION</code>
@@ -244,7 +244,7 @@
</tr>
<tr>
<td>
- <code>CONTENT_TYPE_SONIFICATION</code><br>
+ <code>CONTENT_TYPE_SONIFICATION</code><br />
<code>USAGE_VOICE_COMMUNICATION_SIGNALLING</code>
</td>
<td>
@@ -254,4 +254,4 @@
</table>
<p class="note"><strong>Note:</strong> @hide streams are used internally by the framework but are
-not part of the public API.</p>
\ No newline at end of file
+not part of the public API.</p>
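
The table above is exercised from the client side through AudioAttributes. As a minimal sketch, assuming only the standard android.media APIs named in the table, a player supplies an attribute pair instead of a legacy stream type, and on older releases the framework translates the pair back to STREAM_MUSIC:

<pre>
import android.media.AudioAttributes;
import android.media.MediaPlayer;

// Sketch: declare attributes explicitly rather than a legacy stream type.
// Per the table above, CONTENT_TYPE_MUSIC + USAGE_MEDIA maps to STREAM_MUSIC.
static MediaPlayer buildAttributedPlayer() {
    AudioAttributes attrs = new AudioAttributes.Builder()
            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
            .setUsage(AudioAttributes.USAGE_MEDIA)
            .build();
    MediaPlayer player = new MediaPlayer();
    player.setAudioAttributes(attrs); // replaces setAudioStreamType()
    return player;
}
</pre>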
diff --git a/src/devices/audio/avoiding_pi.jd b/src/devices/audio/avoiding_pi.jd
index 022c766..602c545 100644
--- a/src/devices/audio/avoiding_pi.jd
+++ b/src/devices/audio/avoiding_pi.jd
@@ -302,10 +302,10 @@
<code>frameworks/av/audio_utils</code>:
</p>
<ul>
- <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/include/audio_utils/fifo.h">include/audio_utils/fifo.h</a>
- <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/fifo.c">fifo.c</a>
- <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/include/audio_utils/roundup.h">include/audio_utils/roundup.h</a>
- <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/roundup.c">roundup.c</a>
+ <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/include/audio_utils/fifo.h">include/audio_utils/fifo.h</a></li>
+ <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/fifo.c">fifo.c</a></li>
+ <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/include/audio_utils/roundup.h">include/audio_utils/roundup.h</a></li>
+ <li><a href="https://android.googlesource.com/platform/system/media/+/master/audio_utils/roundup.c">roundup.c</a></li>
</ul>
<h2 id="tools">Tools</h2>
@@ -336,4 +336,3 @@
low-priority tasks and in time-sensitive systems mutexes are more
likely to cause trouble.
</p>
-
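
The linked fifo.c implements a non-blocking, single-reader single-writer pipe in C. As an illustrative sketch only (not the real implementation), the same discipline looks like this in Java: each index is written by exactly one thread, so neither side ever takes a lock.

<pre>
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative single-reader, single-writer non-blocking FIFO.
// Capacity is a power of two so index arithmetic can wrap, as in roundup.c.
public final class SpscFifo {
    private final float[] buffer;
    private final int mask;
    private final AtomicInteger head = new AtomicInteger(0); // reader-owned
    private final AtomicInteger tail = new AtomicInteger(0); // writer-owned

    public SpscFifo(int powerOfTwoCapacity) {
        buffer = new float[powerOfTwoCapacity];
        mask = powerOfTwoCapacity - 1;
    }

    // Writer thread only; returns false instead of blocking when full.
    public boolean offer(float sample) {
        int t = tail.get();
        if (t - head.get() == buffer.length) {
            return false; // full: caller drops or retries, never waits
        }
        buffer[t & mask] = sample;
        tail.set(t + 1); // volatile write publishes the sample
        return true;
    }

    // Reader thread only; returns null instead of blocking when empty.
    public Float poll() {
        int h = head.get();
        if (h == tail.get()) {
            return null; // empty
        }
        float sample = buffer[h & mask];
        head.set(h + 1);
        return sample;
    }
}
</pre>

Because only one thread writes each index, a full/empty check plus a volatile publish is enough; with no mutex there is no opportunity for priority inversion.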
diff --git a/src/devices/audio/debugging.jd b/src/devices/audio/debugging.jd
index cc4a9c5..6b98030 100644
--- a/src/devices/audio/debugging.jd
+++ b/src/devices/audio/debugging.jd
@@ -119,7 +119,7 @@
<li><code>adb shell dumpsys media.audio_flinger</code></li>
<li>Look for a line in dumpsys output such as this:<br />
<code>tee copied to /data/misc/media/20131010101147_2.wav</code>
-<br />This is a PCM .wav file.</br>
+<br />This is a PCM .wav file.
</li>
<li><code>adb pull</code> any <code>/data/misc/media/*.wav</code> files of interest;
note that track-specific dump filenames do not appear in the dumpsys output,
@@ -140,6 +140,7 @@
</li>
<li>Track-specific dumps are only saved when the track is closed;
you may need to force close an app in order to dump its track-specific data
+</li>
<li>Do the <code>dumpsys</code> immediately after test;
there is a limited amount of recording space available.</li>
<li>To make sure you don't lose your dump files,
@@ -271,7 +272,7 @@
<ul>
<li><code>init</code> forks and execs <code>mediaserver</code>.</li>
<li><code>init</code> detects the death of <code>mediaserver</code>, and re-forks as necessary.</li>
-<li><code>ALOGx</code> logging is not shown.
+<li><code>ALOGx</code> logging is not shown.</li>
</ul>
<p>
@@ -341,9 +342,9 @@
In <code>FastMixer</code> and <code>FastCapture</code> threads, use code such as this:
</p>
<pre>
-logWriter->log("string");
-logWriter->logf("format", parameters);
-logWriter->logTimestamp();
+logWriter-&gt;log("string");
+logWriter-&gt;logf("format", parameters);
+logWriter-&gt;logTimestamp();
</pre>
<p>
As this <code>NBLog</code> timeline is used only by the <code>FastMixer</code> and
@@ -355,9 +356,9 @@
In other AudioFlinger threads, use <code>mNBLogWriter</code>:
</p>
<pre>
-mNBLogWriter->log("string");
-mNBLogWriter->logf("format", parameters);
-mNBLogWriter->logTimestamp();
+mNBLogWriter-&gt;log("string");
+mNBLogWriter-&gt;logf("format", parameters);
+mNBLogWriter-&gt;logTimestamp();
</pre>
<p>
For threads other than <code>FastMixer</code> and <code>FastCapture</code>,
diff --git a/src/devices/audio/implement.jd b/src/devices/audio/implement.jd
index 2ab82b0..cf68404 100644
--- a/src/devices/audio/implement.jd
+++ b/src/devices/audio/implement.jd
@@ -49,12 +49,12 @@
Nexus audio hardware in <code>device/samsung/tuna/audio/audio_policy.conf</code>. Also, see the
audio header files for a reference of the properties that you can define.</p>
-<p>In the Android M release and later, the paths are:<br>
-<code>system/media/audio/include/system/audio.h</code><br>
+<p>In the Android M release and later, the paths are:<br />
+<code>system/media/audio/include/system/audio.h</code><br />
<code>system/media/audio/include/system/audio_policy.h</code></p>
-<p>In Android 5.1 and earlier, the paths are:<br>
-<code>system/core/include/system/audio.h</code><br>
+<p>In Android 5.1 and earlier, the paths are:<br />
+<code>system/core/include/system/audio.h</code><br />
<code>system/core/include/system/audio_policy.h</code></p>
<h3 id="multichannel">Multi-channel support</h3>
@@ -263,7 +263,7 @@
<li>"flat" frequency response (+/- 3dB) from 100Hz to 4kHz</li>
<li>close-talk config: 90dB SPL reads RMS of 2500 (16bit samples)</li>
<li>level tracks linearly from -18dB to +12dB relative to 90dB SPL</li>
-<li>THD < 1% (90dB SPL in 100 to 4000Hz range)</li>
+<li>THD &lt; 1% (90dB SPL in 100 to 4000Hz range)</li>
<li>8kHz sampling rate (anti-aliasing)</li>
<li>Effects/pre-processing must be disabled by default</li>
</ul>
@@ -292,7 +292,7 @@
<ul>
<li>Android documentation for
<a href="http://developer.android.com/reference/android/media/audiofx/package-summary.html">
-audiofx package</a>
+audiofx package</a></li>
<li>Android documentation for
<a href="http://developer.android.com/reference/android/media/audiofx/NoiseSuppressor.html">
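
Since effects and pre-processing must be disabled by default, an application opts in per audio session. A short sketch, assuming the public android.media.audiofx API referenced above:

<pre>
import android.media.AudioRecord;
import android.media.audiofx.NoiseSuppressor;

// Sketch: attach noise suppression to an existing capture session.
// Pre-processing is off by default, so the app creates and enables it.
static void enableNoiseSuppression(AudioRecord record) {
    if (NoiseSuppressor.isAvailable()) {
        NoiseSuppressor ns = NoiseSuppressor.create(record.getAudioSessionId());
        if (ns != null) {
            ns.setEnabled(true);
        }
    }
}
</pre>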
diff --git a/src/devices/audio/latency_app.jd b/src/devices/audio/latency_app.jd
index 5672147..9505f9b 100644
--- a/src/devices/audio/latency_app.jd
+++ b/src/devices/audio/latency_app.jd
@@ -28,7 +28,7 @@
<p>For the lowest audio latency possible, we recommend you use Android native audio
based on OpenSL ES 1.0.1.</p>
-<h2 id=implementation>Implementation checklist</h2>
+<h2 id="implementation">Implementation checklist</h2>
<p>To use Android native audio:</p>
@@ -78,9 +78,9 @@
</ol>
-<h2 id=supporting>Supporting documentation</h2>
+<h2 id="supporting">Supporting documentation</h2>
-<h3 id=opensl_es_1_0_1>OpenSL ES 1.0.1</h3>
+<h3 id="opensl_es_1_0_1">OpenSL ES 1.0.1</h3>
<p>
Use a PDF viewer to review the
@@ -99,7 +99,7 @@
are not relevant to Android.
</p>
-<h3 id=opensl_es_for_android>OpenSL ES for Android</h3>
+<h3 id="opensl_es_for_android">OpenSL ES for Android</h3>
<p>
The document "OpenSL ES for Android" is provided in the NDK installation,
@@ -111,7 +111,7 @@
</pre>
<p>
-You’ll want to skim the whole
+You'll want to skim the whole
document, but pay special attention to the "Performance" subsection of the
"Programming notes" section.
</p>
@@ -126,7 +126,7 @@
that aren't included in base OpenSL ES 1.0.1.
</p>
-<h3 id=relationship>Relationship with OpenSL ES 1.0.1</h3>
+<h3 id="relationship">Relationship with OpenSL ES 1.0.1</h3>
<p>
This Venn diagram shows the relationship between
@@ -138,9 +138,9 @@
<strong>Figure 1.</strong> Venn diagram
</p>
-<h2 id=resources>Other resources</h2>
+<h2 id="resources">Other resources</h2>
-<h3 id=source_android_com>source.android.com</h3>
+<h3 id="source_android_com">source.android.com</h3>
<p>
The site <a href="{@docRoot}">source.android.com</a>
@@ -154,14 +154,14 @@
<a href="latency.html">Audio Latency.</a>
</p>
-<h3 id=android_ndk>android-ndk</h3>
+<h3 id="android_ndk">android-ndk</h3>
<p>
If you have questions about how to use Android native audio, you can ask at the discussion group
<a href="https://groups.google.com/forum/#!forum/android-ndk">android-ndk.</a>
</p>
-<h3 id=videos>Videos</h3>
+<h3 id="videos">Videos</h3>
<dl>
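
One item from the implementation checklist deserves a concrete sketch: query the device's native output sample rate and buffer size on the Java side and pass them to the OpenSL ES engine, since matching them keeps the stream on the fast path. nativeInit() below is a hypothetical JNI entry point, not a real NDK function:

<pre>
import android.content.Context;
import android.media.AudioManager;

// Sketch (API 17+): fetch the optimal output parameters for low latency.
static void configureNativeAudio(Context context) {
    AudioManager am =
            (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    String rate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
    String frames =
            am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
    int sampleRate = (rate != null) ? Integer.parseInt(rate) : 44100;
    int framesPerBuffer = (frames != null) ? Integer.parseInt(frames) : 256;
    // nativeInit(sampleRate, framesPerBuffer); // hypothetical JNI call
}
</pre>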
diff --git a/src/devices/audio/latency_design.jd b/src/devices/audio/latency_design.jd
index 21f963f..c931fba 100644
--- a/src/devices/audio/latency_design.jd
+++ b/src/devices/audio/latency_design.jd
@@ -232,6 +232,5 @@
</p>
<p>
-<code>TRACK_FAST</code> is a client -> server concept.
+<code>TRACK_FAST</code> is a client -&gt; server concept.
</p>
-
diff --git a/src/devices/audio/latency_measure.jd b/src/devices/audio/latency_measure.jd
index d0113d2..cf974bd 100644
--- a/src/devices/audio/latency_measure.jd
+++ b/src/devices/audio/latency_measure.jd
@@ -66,7 +66,7 @@
<ol>
<li>Run an app that periodically pulses the LED at
- the same time it outputs audio.
+ the same time it outputs audio.
<p class="note"><strong>Note:</strong> To get useful results, it is crucial to use the correct
APIs in the test app so that you're exercising the fast audio output path.
See <a href="latency_design.html">Design For Reduced Latency</a> for
diff --git a/src/devices/audio/testing_circuit.jd b/src/devices/audio/testing_circuit.jd
index 995062d..12a5bcb 100644
--- a/src/devices/audio/testing_circuit.jd
+++ b/src/devices/audio/testing_circuit.jd
@@ -92,4 +92,3 @@
shows the breadboard version of the testing circuit in operation.
Skip ahead to 1:00 to see the circuit.
</p>
-
diff --git a/src/devices/audio/tv.jd b/src/devices/audio/tv.jd
index 372c27d..9f7afc8 100644
--- a/src/devices/audio/tv.jd
+++ b/src/devices/audio/tv.jd
@@ -36,10 +36,10 @@
<p>The TIF then uses AudioPort information for the audio routing API.</p>
-<p><img src="images/ape_audio_tv_tif.png" alt="Android TV Input Framework (TIF)" />
+<p><img src="images/ape_audio_tv_tif.png" alt="Android TV Input Framework (TIF)" /></p>
<p class="img-caption"><strong>Figure 1.</strong> TV Input Framework (TIF)</p>
-<h2 id="Requirements">Requirements</h2>
+<h2 id="requirements">Requirements</h2>
<p>A SoC must implement the audio HAL with the following audio routing API support:</p>
@@ -70,7 +70,7 @@
</table>
-<h2 id="Audio Devices">TV audio devices</h2>
+<h2 id="audioDevices">TV audio devices</h2>
<p>Android supports the following audio devices for TV audio input/output.</p>
@@ -98,7 +98,7 @@
</pre>
-<h2 id="HAL extension">Audio HAL extension</h2>
+<h2 id="halExtension">Audio HAL extension</h2>
<p>The Audio HAL extension for the audio routing API is defined by following:</p>
@@ -182,7 +182,7 @@
const struct audio_port_config *config);
</pre>
-<h2 id="Testing">Testing DEVICE_IN_LOOPBACK</h2>
+<h2 id="testing">Testing DEVICE_IN_LOOPBACK</h2>
<p>To test DEVICE_IN_LOOPBACK for TV monitoring, use the following testing code. After running the
test, the captured audio saves to <code>/sdcard/record_loopback.raw</code>, where you can listen to
@@ -208,8 +208,8 @@
AudioPortConfig sinkPortConfig = null;
for (AudioPort audioPort : audioPorts) {
if (srcPortConfig == null
- && audioPort.role() == AudioPort.ROLE_SOURCE
- && audioPort instanceof AudioDevicePort) {
+ && audioPort.role() == AudioPort.ROLE_SOURCE
+ && audioPort instanceof AudioDevicePort) {
AudioDevicePort audioDevicePort = (AudioDevicePort) audioPort;
if (audioDevicePort.type() == AudioManager.DEVICE_IN_LOOPBACK) {
srcPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_IN_DEFAULT,
@@ -218,14 +218,14 @@
}
}
else if (sinkPortConfig == null
- && audioPort.role() == AudioPort.ROLE_SINK
- && audioPort instanceof AudioMixPort) {
+ && audioPort.role() == AudioPort.ROLE_SINK
+ && audioPort instanceof AudioMixPort) {
sinkPortConfig = audioPort.buildConfig(48000, AudioFormat.CHANNEL_OUT_DEFAULT,
AudioFormat.ENCODING_DEFAULT, null);
Log.d(LOG_TAG, "Found recorder audio mix port : " + audioPort);
}
}
- if (srcPortConfig != null && sinkPortConfig != null) {
+ if (srcPortConfig != null && sinkPortConfig != null) {
AudioPatch[] patches = new AudioPatch[] { null };
int status = am.createAudioPatch(
patches,
@@ -276,7 +276,7 @@
ffplay record_loopback.wav
</pre>
-<h2 id="Use cases">Use cases</h2>
+<h2 id="useCases">Use cases</h2>
<p>This section includes common use cases for TV audio.</p>
@@ -286,7 +286,7 @@
and the default output (e.g. the speaker). The tuner output does not require decoding, but final
audio output is mixed with software output_stream.</p>
-<p><img src="images/ape_audio_tv_tuner.png" alt="Android TV Tuner Audio Patch" />
+<img src="images/ape_audio_tv_tuner.png" alt="Android TV Tuner Audio Patch" />
<p class="img-caption">
<strong>Figure 2.</strong> Audio Patch for TV tuner with speaker output.</p>
@@ -297,6 +297,6 @@
. The output device of all output_streams changes to the HDMI_OUT port, and the TIF manager changes
the sink port of the existing tuner audio patch to the HDMI_OUT port.</p>
-<p><p><img src="images/ape_audio_tv_hdmi_tuner.png" alt="Android TV HDMI-OUT Audio Patch" />
+<img src="images/ape_audio_tv_hdmi_tuner.png" alt="Android TV HDMI-OUT Audio Patch" />
<p class="img-caption">
<strong>Figure 3.</strong> Audio Patch for HDMI OUT from live TV.</p>
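
The test writes raw PCM, while the ffplay command above expects record_loopback.wav, so the capture needs a WAV header first. A hypothetical helper for that step, assuming 48 kHz, 16-bit, mono capture to match the buildConfig() calls (adjust the channel count if your device's default differs):

<pre>
import java.io.*;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical helper: prepend a 44-byte WAV header to the raw capture.
static void rawToWav(File raw, File wav) throws IOException {
    int sampleRate = 48000, channels = 1, bits = 16;   // assumed format
    int byteRate = sampleRate * channels * bits / 8;
    long dataLen = raw.length();
    ByteBuffer h = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    h.put("RIFF".getBytes()).putInt((int) (36 + dataLen)).put("WAVE".getBytes());
    h.put("fmt ".getBytes()).putInt(16).putShort((short) 1)        // PCM
     .putShort((short) channels).putInt(sampleRate).putInt(byteRate)
     .putShort((short) (channels * bits / 8)).putShort((short) bits);
    h.put("data".getBytes()).putInt((int) dataLen);
    try (OutputStream out = new BufferedOutputStream(new FileOutputStream(wav));
         InputStream in = new BufferedInputStream(new FileInputStream(raw))) {
        out.write(h.array());
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) > 0; ) {
            out.write(buf, 0, n);
        }
    }
}
</pre>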
diff --git a/src/devices/audio/usb.jd b/src/devices/audio/usb.jd
index 1a0ce67..bb0bb69 100644
--- a/src/devices/audio/usb.jd
+++ b/src/devices/audio/usb.jd
@@ -66,7 +66,6 @@
<a href="http://en.wikipedia.org/wiki/Peripheral">peripherals</a> via the bus.
</p>
-<p>
<p class="note"><strong>Note:</strong> The terms <i>device</i> and <i>accessory</i> are common synonyms for
<i>peripheral</i>. We avoid those terms here, as they could be confused with
Android <a href="http://en.wikipedia.org/wiki/Mobile_device">device</a>
@@ -191,7 +190,7 @@
in particular audio described below.
</p>
-<h2 id="audioClass">USB audio</h2>
+<h2 id="usbAudio">USB audio</h2>
<h3 id="class">USB classes</h3>
@@ -333,8 +332,8 @@
<table>
<tr>
<th>Sub-mode</th>
- <th>Byte count<br \>per packet</th>
- <th>Sample rate<br \>determined by</th>
+ <th>Byte count<br />per packet</th>
+ <th>Sample rate<br />determined by</th>
<th>Used for audio</th>
</tr>
<tr>
@@ -429,7 +428,7 @@
by a <a href="http://en.wikipedia.org/wiki/Digital_data">digital</a> data stream
rather than the <a href="http://en.wikipedia.org/wiki/Analog_signal">analog</a>
signal used by the common TRS mini
-<a href=" http://en.wikipedia.org/wiki/Phone_connector_(audio)">headset connector</a>.
+<a href="http://en.wikipedia.org/wiki/Phone_connector_(audio)">headset connector</a>.
Eventually any digital signal must be converted to analog before it can be heard.
There are tradeoffs in choosing where to place that conversion.
</p>
@@ -452,6 +451,7 @@
<p>
Which design is better? The answer depends on your needs.
Each has advantages and disadvantages.
+</p>
<p class="note"><strong>Note:</strong> This is an artificial comparison, since
a real Android device would probably have both options available.
</p>
@@ -538,7 +538,7 @@
<ul>
<li>design for audio class compliance;
currently Android targets class 1, but it is wise to plan for class 2</li>
-<li>avoid <a href="http://en.wiktionary.org/wiki/quirk">quirks</a>
+<li>avoid <a href="http://en.wiktionary.org/wiki/quirk">quirks</a></li>
<li>test for inter-operability with reference and popular Android devices</li>
<li>clearly document supported features, audio class compliance, power requirements, etc.
so that consumers can make informed decisions</li>
@@ -572,7 +572,9 @@
To enable USB audio, add an entry to the
audio policy configuration file. This is typically
located here:
+</p>
<pre>device/oem/codename/audio_policy.conf</pre>
+<p>
The pathname component "oem" should be replaced by the name
of the OEM who manufactures the Android device,
and "codename" should be replaced by the device code name.
@@ -618,7 +620,9 @@
<p>
The audio Hardware Abstraction Layer (HAL)
implementation for USB audio is located here:
+</p>
<pre>hardware/libhardware/modules/usbaudio/</pre>
+<p>
The USB audio HAL relies heavily on
<i>tinyalsa</i>, described at <a href="terminology.html">Audio Terminology</a>.
Though USB audio relies on isochronous transfers,
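
Once the policy entry and HAL are in place, an application can verify that a USB peripheral is actually routable. A minimal sketch, assuming the AudioDeviceInfo enumeration API available in Android M and later:

<pre>
import android.media.AudioDeviceInfo;
import android.media.AudioManager;
import android.util.Log;

// Sketch (API 23+): list attached USB audio sinks.
static void logUsbAudioSinks(AudioManager am) {
    for (AudioDeviceInfo dev : am.getDevices(AudioManager.GET_DEVICES_OUTPUTS)) {
        if (dev.getType() == AudioDeviceInfo.TYPE_USB_DEVICE
                || dev.getType() == AudioDeviceInfo.TYPE_USB_ACCESSORY) {
            Log.d("UsbAudio", "USB audio sink: " + dev.getProductName());
        }
    }
}
</pre>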
diff --git a/src/devices/audio/warmup.jd b/src/devices/audio/warmup.jd
index 777650b..1dec834 100644
--- a/src/devices/audio/warmup.jd
+++ b/src/devices/audio/warmup.jd
@@ -41,11 +41,11 @@
At warmup, FastMixer calls <code>write()</code>
repeatedly until the time between two <code>write()</code>s is the amount expected.
FastMixer determines audio warmup by seeing how long a Hardware Abstraction
-Layer (HAL) <code>write()</code> takes to stabilize.
+Layer (HAL) <code>write()</code> takes to stabilize.
</p>
-<p>To measure audio warmup, follow these steps for the built-in speaker and wired headphones
- and at different times after booting. Warmup times are usually different for each output device
+<p>To measure audio warmup, follow these steps for the built-in speaker and wired headphones
+ and at different times after booting. Warmup times are usually different for each output device
and right after booting the device:</p>
<ol>
@@ -91,7 +91,7 @@
There are currently no tools provided for measuring audio input warmup.
However, input warmup time can be estimated by observing
the time required for <a href="http://developer.android.com/reference/android/media/AudioRecord.html#startRecording()">startRecording()</a>
- to return.
+ to return.
</p>
@@ -99,11 +99,12 @@
<p>
Warmup time can usually be reduced by a combination of:
+</p>
<ul>
<li>Good circuit design</li>
<li>Accurate time delays in kernel device driver</li>
<li>Performing independent warmup operations concurrently rather than sequentially</li>
- <li>Leaving circuits powered on or not reconfiguring clocks (increases idle power consumption)
+ <li>Leaving circuits powered on or not reconfiguring clocks (increases idle power consumption)</li>
<li>Caching computed parameters</li>
</ul>
<p>
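
A rough sketch of the input-warmup estimate described above: time how long startRecording() takes to return. The rate and format are placeholder choices, not requirements:

<pre>
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Sketch: approximate input warmup as the time for startRecording() to return.
static long measureInputWarmupMs() {
    int rate = 48000;                                   // placeholder rate
    int minBuf = AudioRecord.getMinBufferSize(rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
    long t0 = System.nanoTime();
    rec.startRecording();
    long ms = (System.nanoTime() - t0) / 1000000;
    rec.stop();
    rec.release();
    return ms;
}
</pre>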