am 992c9b7b: Merge "Docs: Fix a link in Android Security Overview"

* commit '992c9b7be2cd061c0721099bed8c6da2fd8bee0e':
  Docs: Fix a link in Android Security Overview
diff --git a/src/devices/audio/images/dac.png b/src/devices/audio/images/dac.png
new file mode 100644
index 0000000..6875035
--- /dev/null
+++ b/src/devices/audio/images/dac.png
Binary files differ
diff --git a/src/devices/audio/images/hub.jpg b/src/devices/audio/images/hub.jpg
new file mode 100644
index 0000000..59a9f3b
--- /dev/null
+++ b/src/devices/audio/images/hub.jpg
Binary files differ
diff --git a/src/devices/audio/images/otg.jpg b/src/devices/audio/images/otg.jpg
new file mode 100644
index 0000000..152180d
--- /dev/null
+++ b/src/devices/audio/images/otg.jpg
Binary files differ
diff --git a/src/devices/audio_avoiding_pi.jd b/src/devices/audio_avoiding_pi.jd
index 49b901e..80d0907 100644
--- a/src/devices/audio_avoiding_pi.jd
+++ b/src/devices/audio_avoiding_pi.jd
@@ -1,6 +1,21 @@
 page.title=Avoiding Priority Inversion
 @jd:body
 
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
 <div id="qv-wrapper">
   <div id="qv">
     <h2>In this document</h2>
diff --git a/src/devices/audio_debugging.jd b/src/devices/audio_debugging.jd
index 7ac3a53..9c9b2fb 100644
--- a/src/devices/audio_debugging.jd
+++ b/src/devices/audio_debugging.jd
@@ -1,6 +1,21 @@
 page.title=Audio Debugging
 @jd:body
 
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
 <div id="qv-wrapper">
   <div id="qv">
     <h2>In this document</h2>
diff --git a/src/devices/audio_implement.jd b/src/devices/audio_implement.jd
index 32cd137..5d04074 100644
--- a/src/devices/audio_implement.jd
+++ b/src/devices/audio_implement.jd
@@ -136,7 +136,7 @@
 include $(CLEAR_VARS)
 
 LOCAL_MODULE := audio.primary.tuna
-LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
+LOCAL_MODULE_RELATIVE_PATH := hw
 LOCAL_SRC_FILES := audio_hw.c ril_interface.c
 LOCAL_C_INCLUDES += \
         external/tinyalsa/include \
diff --git a/src/devices/audio_src.jd b/src/devices/audio_src.jd
index ffacba6..6238770 100644
--- a/src/devices/audio_src.jd
+++ b/src/devices/audio_src.jd
@@ -1,6 +1,21 @@
 page.title=Sample Rate Conversion
 @jd:body
 
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
 <div id="qv-wrapper">
   <div id="qv">
     <h2>In this document</h2>
diff --git a/src/devices/audio_terminology.jd b/src/devices/audio_terminology.jd
index 0b876a7..87baacb 100644
--- a/src/devices/audio_terminology.jd
+++ b/src/devices/audio_terminology.jd
@@ -1,6 +1,21 @@
 page.title=Audio Terminology
 @jd:body
 
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
 <div id="qv-wrapper">
   <div id="qv">
     <h2>In this document</h2>
diff --git a/src/devices/audio_usb.jd b/src/devices/audio_usb.jd
new file mode 100644
index 0000000..8e8fdaf
--- /dev/null
+++ b/src/devices/audio_usb.jd
@@ -0,0 +1,600 @@
+page.title=USB Digital Audio
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>
+This article reviews Android support for USB digital audio and related
+USB-based protocols.
+</p>
+
+<h3 id="audience">Audience</h3>
+
+<p>
+The target audience of this article is Android device OEMs, SoC vendors,
+USB audio peripheral suppliers, advanced audio application developers,
+and others seeking detailed understanding of USB digital audio internals on Android.
+</p>
+
+<p>
+End users should see the <a href="https://support.google.com/android/">Help Center</a> instead.
+Though this article is not oriented towards end users,
+certain audiophile consumers may find portions of interest.
+</p>
+
+<h2 id="overview">Overview of USB</h2>
+
+<p>
+Universal Serial Bus (USB) is informally described in the Wikipedia article
+<a href="http://en.wikipedia.org/wiki/USB">USB</a>,
+and is formally defined by the standards published by the
+<a href="http://www.usb.org/">USB Implementers Forum, Inc</a>.
+For convenience, we summarize the key USB concepts here,
+but the standards are the authoritative reference.
+</p>
+
+<h3 id="terminology">Basic concepts and terminology</h3>
+
+<p>
+USB is a <a href="http://en.wikipedia.org/wiki/Bus_(computing)">bus</a>
+with a single initiator of data transfer operations, called the <i>host</i>.
+The host communicates with
+<a href="http://en.wikipedia.org/wiki/Peripheral">peripherals</a> via the bus.
+</p>
+
+<p>
+<b>Note:</b> The terms <i>device</i> and <i>accessory</i> are common synonyms for
+<i>peripheral</i>.  We avoid those terms here, as they could be confused with
+Android <a href="http://en.wikipedia.org/wiki/Mobile_device">device</a>
+or the Android-specific concept called
+<a href="http://developer.android.com/guide/topics/connectivity/usb/accessory.html">accessory mode</a>.
+</p>
+
+<p>
+A critical host role is <i>enumeration</i>:
+the process of detecting which peripherals are connected to the bus,
+and querying their properties expressed via <i>descriptors</i>.
+</p>
+
+<p>
+A peripheral may be one physical object
+but actually implement multiple logical <i>functions</i>.
+For example, a webcam peripheral could have both a camera function and a
+microphone audio function.
+</p>
+
+<p>
+Each peripheral function has an <i>interface</i> that
+defines the protocol to communicate with that function.
+</p>
+
+<p>
+The host communicates with a peripheral over a
+<a href="http://en.wikipedia.org/wiki/Stream_(computing)">pipe</a>
+to an <a href="http://en.wikipedia.org/wiki/Communication_endpoint">endpoint</a>,
+a data source or sink
+provided by one of the peripheral's functions.
+</p>
+
+<p>
+There are two kinds of pipes: <i>message</i> and <i>stream</i>.
+A message pipe is used for bi-directional control and status.
+A stream pipe is used for uni-directional data transfer.
+</p>
+
+<p>
+The host initiates all data transfers,
+hence the terms <i>input</i> and <i>output</i> are expressed relative to the host.
+An input operation transfers data from the peripheral to the host,
+while an output operation transfers data from the host to the peripheral.
+</p>
+
+<p>
+There are three major data transfer modes:
+<i>interrupt</i>, <i>bulk</i>, and <i>isochronous</i>.
+Isochronous mode will be discussed further in the context of audio.
+</p>
+
+<p>
+The peripheral may have <i>terminals</i> that connect to the outside world,
+beyond the peripheral itself.  In this way, the peripheral serves
+to translate between USB protocol and "real world" signals.
+The terminals are logical objects of the function.
+</p>
+
+<h2 id="androidModes">Android USB modes</h2>
+
+<h3 id="developmentMode">Development mode</h3>
+
+<p>
+<i>Development mode</i> has been present since the initial release of Android.
+The Android device appears as a USB peripheral
+to a host PC running a desktop operating system such as Linux,
+Mac OS X, or Windows.  The only visible peripheral function is either
+<a href="http://en.wikipedia.org/wiki/Android_software_development#Fastboot">Android fastboot</a>
+or
+<a href="http://developer.android.com/tools/help/adb.html">Android Debug Bridge (adb)</a>.
+The fastboot and adb protocols are layered over USB bulk data transfer mode.
+</p>
+
+<h3 id="hostMode">Host mode</h3>
+
+<p>
+<i>Host mode</i> was introduced in Android 3.1 (API level 12).
+</p>
+
+<p>
+As the Android device must act as host, and most Android devices include
+a micro-USB connector that does not directly permit host operation,
+an on-the-go (<a href="http://en.wikipedia.org/wiki/USB_On-The-Go">OTG</a>) adapter
+such as this is usually required:
+</p>
+
+<img src="audio/images/otg.jpg" style="image-orientation: 90deg;" height="50%" width="50%" alt="OTG">
+
+<p>
+An Android device might not provide sufficient power to operate a
+particular peripheral, depending on how much power the peripheral needs,
+and how much the Android device is capable of supplying.  Even if
+adequate power is available, the Android device battery charge may
+be significantly shortened.  For these situations, use a powered
+<a href="http://en.wikipedia.org/wiki/USB_hub">hub</a> such as this:
+</p>
+
+<img src="audio/images/hub.jpg" alt="Powered hub">
+
+<h3 id="accessoryMode">Accessory mode</h3>
+
+<p>
+<i>Accessory mode</i> was introduced in Android 3.1 (API level 12) and back-ported to Android 2.3.4.
+In this mode, the Android device operates as a USB peripheral,
+under the control of another device such as a dock that serves as host.
+The difference between development mode and accessory mode
+is that additional USB functions are visible to the host, beyond adb.
+The Android device begins in development mode and then
+transitions to accessory mode via a re-negotiation process.
+</p>
+
+<p>
+Accessory mode was extended with additional features in Android 4.1,
+in particular the audio capabilities described below.
+</p>
+
+<h2 id="audioClass">USB audio</h2>
+
+<h3 id="class">USB classes</h3>
+
+<p>
+Each peripheral function has an associated <i>device class</i> document
+that specifies the standard protocol for that function.
+This enables <i>class compliant</i> hosts and peripheral functions
+to inter-operate, without detailed knowledge of each other's workings.
+Class compliance is critical if the host and peripheral are provided by
+different entities.
+</p>
+
+<p>
+The term <i>driverless</i> is a common synonym for <i>class compliant</i>,
+indicating that it is possible to use the standard features of such a
+peripheral without requiring an operating-system specific
+<a href="http://en.wikipedia.org/wiki/Device_driver">driver</a> to be installed.
+One can assume that a peripheral advertised as "no driver needed"
+for major desktop operating systems
+will be class compliant, though there may be exceptions.
+</p>
+
+<h3 id="audioClass">USB audio class</h3>
+
+<p>
+Here we concern ourselves only with peripherals that implement
+audio functions, and thus adhere to the audio device class.  There are two
+editions of the USB audio class specification: class 1 (UAC1) and 2 (UAC2).
+</p>
+
+<h3 id="otherClasses">Comparison with other classes</h3>
+
+<p>
+USB includes many other device classes, some of which may be confused
+with the audio class.  The
+<a href="http://en.wikipedia.org/wiki/USB_mass_storage_device_class">mass storage class</a>
+(MSC) is used for
+sector-oriented access to media, while
+<a href="http://en.wikipedia.org/wiki/Media_Transfer_Protocol">Media Transfer Protocol</a>
+(MTP) is for full file access to media.
+Both MSC and MTP may be used for transferring audio files,
+but only USB audio class is suitable for real-time streaming.
+</p>
+
+<h3 id="audioTerminals">Audio terminals</h3>
+
+<p>
+The terminals of an audio peripheral are typically analog.
+The analog signal presented at the peripheral's input terminal is converted to digital by an
+<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">analog-to-digital converter</a>
+(ADC),
+and is carried over USB protocol to be consumed by
+the host.  The ADC is a data <i>source</i>
+for the host.  Similarly, the host sends a
+digital audio signal over USB protocol to the peripheral, where a
+<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">digital-to-analog converter</a>
+(DAC)
+converts and presents to an analog output terminal.
+The DAC is a <i>sink</i> for the host.
+</p>
+
+<h3 id="channels">Channels</h3>
+
+<p>
+A peripheral with audio function can include a source terminal, sink terminal, or both.
+Each direction may have one channel (<i>mono</i>), two channels
+(<i>stereo</i>), or more.
+Peripherals with more than two channels are called <i>multichannel</i>.
+It is common to interpret a stereo stream as consisting of
+<i>left</i> and <i>right</i> channels, and by extension to interpret a multichannel stream as having
+spatial locations corresponding to each channel.  However, it is also quite appropriate
+(especially for USB audio more so than
+<a href="http://en.wikipedia.org/wiki/HDMI">HDMI</a>)
+to not assign any particular
+standard spatial meaning to each channel.  In this case, it is up to the
+application and user to define how each channel is used.
+For example, a four-channel USB input stream might have the first three
+channels attached to various microphones within a room, and the final
+channel receiving input from an AM radio.
+</p>
+
+<h3 id="isochronous">Isochronous transfer mode</h3>
+
+<p>
+USB audio uses isochronous transfer mode for its real-time characteristics,
+at the expense of error recovery.
+In isochronous mode, bandwidth is guaranteed, and data transmission
+errors are detected using a cyclic redundancy check (CRC).  But there is
+no packet acknowledgement or re-transmission in the event of error.
+</p>
+
+<p>
+Isochronous transmissions occur each Start Of Frame (SOF) period.
+The SOF period is one millisecond for full-speed, and 125 microseconds for
+high-speed.  Each full-speed frame carries up to 1023 bytes of payload,
+and a high-speed frame carries up to 1024 bytes.  Putting these together,
+we calculate the maximum transfer rate as 1,023,000 or 8,192,000 bytes
+per second.  This sets a theoretical upper limit on the combined audio
+sample rate, channel count, and bit depth.  The practical limit is lower.
+</p>
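+
+<p>
+As a concrete illustration of this arithmetic, the sketch below checks whether
+a hypothetical stream fits within the theoretical limits quoted above.  The
+stream parameters (96 kHz, 8 channels, 4 bytes per sample) are arbitrary
+example values for the calculation, not Android requirements:
+</p>
+
+<pre>
+#include &lt;stdio.h&gt;
+
+int main(void)
+{
+    /* payload limits quoted above: 1023 bytes per 1 ms full-speed frame,
+       1024 bytes per 125 us high-speed microframe */
+    const long full_speed_bytes_per_sec = 1023L * 1000;   /* 1,023,000 */
+    const long high_speed_bytes_per_sec = 1024L * 8000;   /* 8,192,000 */
+
+    /* hypothetical stream: 96 kHz, 8 channels, 4 bytes per sample */
+    const long stream_bytes_per_sec = 96000L * 8 * 4;     /* 3,072,000 */
+
+    printf("fits full speed: %s\n",
+           stream_bytes_per_sec &lt;= full_speed_bytes_per_sec ? "yes" : "no");
+    printf("fits high speed: %s\n",
+           stream_bytes_per_sec &lt;= high_speed_bytes_per_sec ? "yes" : "no");
+    return 0;
+}
+</pre>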
+
+<p>
+Within isochronous mode, there are three sub-modes:
+</p>
+
+<ul>
+<li>Adaptive</li>
+<li>Asynchronous</li>
+<li>Synchronous</li>
+</ul>
+
+<p>
+In adaptive sub-mode, the peripheral sink or source adapts to a potentially varying sample rate
+of the host.
+</p>
+
+<p>
+In asynchronous (also called implicit feedback) sub-mode,
+the sink or source determines the sample rate, and the host accommodates.
+The primary theoretical advantage of asynchronous sub-mode is that the source
+or sink USB clock is physically and electrically closer to (and indeed may
+be the same as, or derived from) the clock that drives the DAC or ADC.
+This proximity means that asynchronous sub-mode should be less susceptible
+to clock jitter.  In addition, the clock used by the DAC or ADC may be
+designed for higher accuracy and lower drift than the host clock.
+</p>
+
+<p>
+In synchronous sub-mode, a fixed number of bytes is transferred each SOF period.
+The audio sample rate is effectively derived from the USB clock.
+Synchronous sub-mode is not commonly used with audio because both
+host and peripheral are at the mercy of the USB clock.
+</p>
+
+<p>
+The table below summarizes the isochronous sub-modes:
+</p>
+
+<table>
+<tr>
+  <th>Sub-mode</th>
+  <th>Byte count<br />per packet</th>
+  <th>Sample rate<br />determined by</th>
+  <th>Used for audio</th>
+</tr>
+<tr>
+  <td>adaptive</td>
+  <td>variable</td>
+  <td>host</td>
+  <td>yes</td>
+</tr>
+<tr>
+  <td>asynchronous</td>
+  <td>variable</td>
+  <td>peripheral</td>
+  <td>yes</td>
+</tr>
+<tr>
+  <td>synchronous</td>
+  <td>fixed</td>
+  <td>USB clock</td>
+  <td>no</td>
+</tr>
+</table>
+
+<p>
+In practice, the sub-mode does matter, but other factors
+should also be considered.
+</p>
+
+<h2 id="androidSupport">Android support for USB audio class</h2>
+
+<h3 id="developmentAudio">Development mode</h3>
+
+<p>
+USB audio is not supported in development mode.
+</p>
+
+<h3 id="hostAudio">Host mode</h3>
+
+<p>
+Android 5.0 (API level 21) and above supports a subset of USB audio class 1 (UAC1) features:
+</p>
+
+<ul>
+<li>The Android device must act as host</li>
+<li>The audio format must be PCM (interface type I)</li>
+<li>The bit depth must be 16, 24, or 32 bits, where
+24 bits of useful audio data are left-justified within the most significant
+bits of the 32-bit word</li>
+<li>The sample rate must be either 48, 44.1, 32, 24, 22.05, 16, 12, 11.025, or 8 kHz</li>
+<li>The channel count must be 1 (mono) or 2 (stereo)</li>
+</ul>
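+
+<p>
+To illustrate the 24-in-32-bit case above, here is a minimal sketch of one way
+to left-justify a signed 24-bit sample within a 32-bit word; the helper name is
+illustrative and is not part of any Android API:
+</p>
+
+<pre>
+#include &lt;stdint.h&gt;
+
+/* Left-justify a signed 24-bit sample (carried in the low 24 bits of raw24)
+   so that the 24 useful bits occupy the most significant bits of the
+   32-bit word and the low 8 bits are zero. */
+static int32_t pack_24_in_32(int32_t raw24)
+{
+    return (int32_t)((uint32_t)raw24 &lt;&lt; 8);
+}
+</pre>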
+
+<p>
+Perusal of the Android framework source code may show additional code
+beyond the minimum needed to support these features.  But this code
+has not been validated, so more advanced features are not yet claimed.
+</p>
+
+<h3 id="accessoryAudio">Accessory mode</h3>
+
+<p>
+Android 4.1 (API level 16) added limited support for audio playback to the host.
+While in accessory mode, Android automatically routes its audio output to USB.
+That is, the Android device serves as a data source to the host, for example a dock.
+</p>
+
+<p>
+Accessory mode audio has these features:
+</p>
+
+<ul>
+<li>
+The Android device must be controlled by a knowledgeable host that
+can first transition the Android device from development mode to accessory mode,
+and then the host must transfer audio data from the appropriate endpoint.
+Thus the Android device does not appear "driverless" to the host.
+</li>
+<li>The direction must be <i>input</i>, expressed relative to the host</li>
+<li>The audio format must be 16-bit PCM</li>
+<li>The sample rate must be 44.1 kHz</li>
+<li>The channel count must be 2 (stereo)</li>
+</ul>
+
+<p>
+Accessory mode audio has not been widely adopted,
+and is not currently recommended for new designs.
+</p>
+
+<h2 id="applications">Applications of USB digital audio</h2>
+
+<p>
+As the name indicates, the USB digital audio signal is represented
+by a <a href="http://en.wikipedia.org/wiki/Digital_data">digital</a> data stream
+rather than the <a href="http://en.wikipedia.org/wiki/Analog_signal">analog</a>
+signal used by the common TRS mini
+<a href=" http://en.wikipedia.org/wiki/Phone_connector_(audio)">headset connector</a>.
+Eventually any digital signal must be converted to analog before it can be heard.
+There are tradeoffs in choosing where to place that conversion.
+</p>
+
+<h3 id="comparison">A tale of two DACs</h3>
+
+<p>
+In the example diagram below, we compare two designs.  First we have a
+mobile device with Application Processor (AP), on-board DAC, amplifier,
+and analog TRS connector attached to headphones.  We also consider a
+mobile device with USB connected to external USB DAC and amplifier,
+also with headphones.
+</p>
+
+<img src="audio/images/dac.png" alt="DAC comparison">
+
+<p>
+Which design is better?  The answer depends on your needs.
+Each has advantages and disadvantages.
+<b>Note:</b> this is an artificial comparison, since
+a real Android device would probably have both options available.
+</p>
+
+<p>
+The first design A is simpler, less expensive, uses less power,
+and will be a more reliable design assuming otherwise equally reliable components.
+However, there are usually audio quality tradeoffs vs. other requirements.
+For example, if this is a mass-market device, it may be designed to fit
+the needs of the general consumer, not for the audiophile.
+</p>
+
+<p>
+In the second design, the external audio peripheral C can be designed for
+higher audio quality and greater power output without impacting the cost of
+the basic mass market Android device B.  Yes, it is a more expensive design,
+but the cost is absorbed only by those who want it.
+</p>
+
+<p>
+Mobile devices are notorious for having high-density
+circuit boards, which can result in more opportunities for
+<a href="http://en.wikipedia.org/wiki/Crosstalk_(electronics)">crosstalk</a>
+that degrades adjacent analog signals.  Digital communication is less susceptible to
+<a href="http://en.wikipedia.org/wiki/Noise_(electronics)">noise</a>,
+so moving the DAC from the Android device A to an external circuit board
+C allows the final analog stages to be physically and electrically
+isolated from the dense and noisy circuit board, resulting in higher fidelity audio.
+</p>
+
+<p>
+On the other hand,
+the second design is more complex, and with added complexity come more
+opportunities for things to fail.  There is also additional latency
+from the USB controllers.
+</p>
+
+<h3 id="applications">Applications</h3>
+
+<p>
+Typical USB host mode audio applications include:
+</p>
+
+<ul>
+<li>music listening</li>
+<li>telephony</li>
+<li>instant messaging and voice chat</li>
+<li>recording</li>
+</ul>
+
+<p>
+For all of these applications, Android detects a compatible USB digital
+audio peripheral, and automatically routes audio playback and capture
+appropriately, based on the audio policy rules.
+Stereo content is played on the first two channels of the peripheral.
+</p>
+
+<p>
+There are no APIs specific to USB digital audio.
+For advanced usage, the automatic routing may interfere with applications
+that are USB-aware.  For such applications, disable automatic routing
+via the corresponding control in the Media section of
+<a href="http://developer.android.com/tools/index.html">Settings / Developer Options</a>.
+</p>
+
+<h2 id="compatibility">Implementing USB audio</h2>
+
+<h3 id="recommendationsPeripheral">Recommendations for audio peripheral vendors</h3>
+
+<p>
+In order to inter-operate with Android devices, audio peripheral vendors should:
+</p>
+
+<ul>
+<li>design for audio class compliance;
+currently Android targets class 1, but it is wise to plan for class 2</li>
+<li>avoid <a href="http://en.wiktionary.org/wiki/quirk">quirks</a></li>
+<li>test for inter-operability with reference and popular Android devices</li>
+<li>clearly document supported features, audio class compliance, power requirements, etc.
+so that consumers can make informed decisions</li>
+</ul>
+
+<h3 id="recommendationsAndroid">Recommendations for Android device OEMs and SoC vendors</h3>
+
+<p>
+In order to support USB digital audio, device OEMs and SoC vendors should:
+</p>
+
+<ul>
+<li>enable all kernel features needed: USB host mode, USB audio, isochronous transfer mode</li>
+<li>keep up-to-date with recent kernel releases and patches;
+despite the noble goal of class compliance, there are extant audio peripherals
+with <a href="http://en.wiktionary.org/wiki/quirk">quirks</a>,
+and recent kernels have workarounds for such quirks
+</li>
+<li>enable USB audio policy as described below</li>
+<li>test for inter-operability with common USB audio peripherals</li>
+</ul>
+
+<h3 id="enable">How to enable USB audio policy</h3>
+
+<p>
+To enable USB audio, add an entry to the
+audio policy configuration file.  This is typically
+located here:
+<pre>device/oem/codename/audio_policy.conf</pre>
+The pathname component "oem" should be replaced by the name
+of the OEM who manufactures the Android device,
+and "codename" should be replaced by the device code name.
+</p>
+
+<p>
+An example entry is shown here:
+</p>
+
+<pre>
+audio_hw_modules {
+  ...
+  usb {
+    outputs {
+      usb_accessory {
+        sampling_rates 44100
+        channel_masks AUDIO_CHANNEL_OUT_STEREO
+        formats AUDIO_FORMAT_PCM_16_BIT
+        devices AUDIO_DEVICE_OUT_USB_ACCESSORY
+      }
+      usb_device {
+        sampling_rates dynamic
+        channel_masks dynamic
+        formats dynamic
+        devices AUDIO_DEVICE_OUT_USB_DEVICE
+      }
+    }
+    inputs {
+      usb_device {
+        sampling_rates dynamic
+        channel_masks AUDIO_CHANNEL_IN_STEREO
+        formats AUDIO_FORMAT_PCM_16_BIT
+        devices AUDIO_DEVICE_IN_USB_DEVICE
+      }
+    }
+  }
+  ...
+}
+</pre>
+
+<h3 id="sourceCode">Source code</h3>
+
+<p>
+The audio Hardware Abstraction Layer (HAL)
+implementation for USB audio is located here:
+<pre>hardware/libhardware/modules/usbaudio/</pre>
+The USB audio HAL relies heavily on
+<i>tinyalsa</i>, described at <a href="audio_terminology.html">Audio Terminology</a>.
+Though USB audio relies on isochronous transfers,
+this is abstracted away by the ALSA implementation.
+So the USB audio HAL and tinyalsa do not need to concern
+themselves with this part of USB protocol.
+</p>
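+
+<p>
+As a rough illustration of that relationship, the sketch below shows how a HAL
+implementation might open a playback stream on a USB sound card through tinyalsa.
+The card and device numbers and the configuration values are placeholders only;
+a real implementation discovers them when the peripheral is connected:
+</p>
+
+<pre>
+#include &lt;tinyalsa/asoundlib.h&gt;
+
+/* Open a 44.1 kHz stereo 16-bit PCM output stream on the USB sound card.
+   Card 1, device 0, and the period sizes are example values only. */
+static struct pcm *open_usb_out(void)
+{
+    struct pcm_config config = {
+        .channels     = 2,
+        .rate         = 44100,
+        .period_size  = 256,
+        .period_count = 4,
+        .format       = PCM_FORMAT_S16_LE,
+    };
+    struct pcm *pcm = pcm_open(1 /* card */, 0 /* device */, PCM_OUT, &amp;config);
+    if (pcm != NULL &amp;&amp; !pcm_is_ready(pcm)) {
+        pcm_close(pcm);     /* isochronous details are handled below ALSA */
+        pcm = NULL;
+    }
+    return pcm;
+}
+</pre>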
diff --git a/src/devices/camera/camera.jd b/src/devices/camera/camera.jd
index 4b4b22c..577224b 100644
--- a/src/devices/camera/camera.jd
+++ b/src/devices/camera/camera.jd
@@ -112,7 +112,7 @@
   <li>Create an <code>Android.mk</code> file to build the shared library. Ensure that the Makefile contains the following lines:
 <pre>
 LOCAL_MODULE := camera.&lt;device_name&gt;
-LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
+LOCAL_MODULE_RELATIVE_PATH := hw
 </pre>
 <p>Notice that your library must be named <code>camera.&lt;device_name&gt;</code> (<code>.so</code> is appended automatically),
 so that Android can correctly load the library. For an example, see the Makefile
diff --git a/src/devices/devices_toc.cs b/src/devices/devices_toc.cs
index 85dcae4..c46a418 100644
--- a/src/devices/devices_toc.cs
+++ b/src/devices/devices_toc.cs
@@ -49,6 +49,7 @@
           <li><a href="<?cs var:toroot ?>devices/audio_avoiding_pi.html">Priority Inversion</a></li>
           <li><a href="<?cs var:toroot ?>devices/audio_src.html">Sample Rate Conversion</a></li>
           <li><a href="<?cs var:toroot ?>devices/audio_debugging.html">Debugging</a></li>
+          <li><a href="<?cs var:toroot ?>devices/audio_usb.html">USB Digital Audio</a></li>
         </ul>
       </li>
       <li><a href="<?cs var:toroot ?>devices/bluetooth.html">Bluetooth</a></li>
@@ -152,11 +153,19 @@
                 <span class="en">Encryption</span>
               </a>
             </li>
-            <li>
+          <li class="nav-section">
+            <div class="nav-section-header">
               <a href="<?cs var:toroot ?>devices/tech/security/se-linux.html">
                 <span class="en">Security-Enhanced Linux</span>
               </a>
-            </li>
+            </div>
+            <ul>
+              <li><a href="<?cs var:toroot ?>devices/tech/security/selinux/concepts.html">Concepts</a></li>
+              <li><a href="<?cs var:toroot ?>devices/tech/security/selinux/implement.html">Implementation</a></li>
+              <li><a href="<?cs var:toroot ?>devices/tech/security/selinux/customize.html">Customization</a></li>
+              <li><a href="<?cs var:toroot ?>devices/tech/security/selinux/validate.html">Validation</a></li>
+            </ul>
+          </li>
           </ul>
       </li>
      <li class="nav-section">
@@ -167,18 +176,48 @@
           </div>
           <ul>
             <li>
-              <a href="<?cs var:toroot ?>devices/sensors/base_triggers.html">
-                <span class="en">Base sensors</span>
+              <a href="<?cs var:toroot ?>devices/sensors/sensor-stack.html">
+                <span class="en">Sensor stack</span>
               </a>
             </li>
             <li>
-              <a href="<?cs var:toroot ?>devices/sensors/composite_sensors.html">
-                <span class="en">Composite sensors</span>
+              <a href="<?cs var:toroot ?>devices/sensors/report-modes.html">
+                <span class="en">Reporting modes</span>
+              </a>
+            </li>
+            <li>
+              <a href="<?cs var:toroot ?>devices/sensors/suspend-mode.html">
+                <span class="en">Suspend mode</span>
+              </a>
+            </li>
+            <li>
+              <a href="<?cs var:toroot ?>devices/sensors/power-use.html">
+                <span class="en">Power consumption</span>
+              </a>
+            </li>
+            <li>
+              <a href="<?cs var:toroot ?>devices/sensors/interaction.html">
+                <span class="en">Interaction</span>
+              </a>
+            </li>
+            <li>
+              <a href="<?cs var:toroot ?>devices/sensors/hal-interface.html">
+                <span class="en">HAL interface</span>
               </a>
             </li>
             <li>
               <a href="<?cs var:toroot ?>devices/sensors/batching.html">
-                <span class="en">Batching results</span>
+                <span class="en">Batching</span>
+              </a>
+            </li>
+            <li>
+              <a href="<?cs var:toroot ?>devices/sensors/sensor-types.html">
+                <span class="en">Sensor types</span>
+              </a>
+            </li>
+            <li>
+              <a href="<?cs var:toroot ?>devices/sensors/versioning.html">
+                <span class="en">Version deprecation</span>
               </a>
             </li>
           </ul>
diff --git a/src/devices/latency_design.jd b/src/devices/latency_design.jd
index 15485a5..a2ad236 100644
--- a/src/devices/latency_design.jd
+++ b/src/devices/latency_design.jd
@@ -1,6 +1,21 @@
 page.title=Design For Reduced Latency
 @jd:body
 
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
 <div id="qv-wrapper">
   <div id="qv">
     <h2>In this document</h2>
diff --git a/src/devices/sensors/base_triggers.jd b/src/devices/sensors/base_triggers.jd
deleted file mode 100644
index 81d2547..0000000
--- a/src/devices/sensors/base_triggers.jd
+++ /dev/null
@@ -1,152 +0,0 @@
-page.title=Base sensors and trigger modes
-@jd:body
-
-<!--
-    Copyright 2013 The Android Open Source Project
-
-    Licensed under the Apache License, Version 2.0 (the "License");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
--->
-<div id="qv-wrapper">
-  <div id="qv">
-    <h2>In this document</h2>
-    <ol id="auto-toc">
-    </ol>
-  </div>
-</div>
-
-<h2 id="triggers">Trigger modes</h2>
-<p>Sensors can report events in different ways called trigger modes; each sensor 
-  type has one and only one trigger mode associated to it. Four trigger modes 
-  exist:</p>
-
-<h3 id="continuous">Continuous</h3>
-<p>Events are reported at a constant rate defined by setDelay(). Example sensors 
-  using the continuous trigger mode are accelerometers and gyroscopes.</p>
-
-<h3 id="on-change">On-change</h3>
-<p>Events are reported only if the sensor's value has changed. Activating the
-sensor also triggers an event, meaning the HAL must return an event immediately
-when an on-change sensor is activated. Example sensors using the on-change
-trigger mode are the step counter and proximity sensor types.</p>
-
-<p>Here is how the <code>period_ns</code> parameter affects setDelay(...) and
-batch(...). The <code>period_ns</code> parameter is used to set a lower limit
-to the reporting period, meaning the minimum time between consecutive events.
-Here is an example: If activating the step counter with period_ns = 10 seconds,
-walking for 1 minute, and then not walking for 1 minute, the events will
-be generated every 10 seconds during the first minute, and no event will be
-generated in the second minute.</p>
-
-<h3 id="one-shot">One-shot</h3>
-<p>Upon detection of an event, the sensor deactivates itself and then sends a 
-  single event. Order matters to avoid race conditions. No other event is sent 
-  until the sensor is reactivated. setDelay() is ignored. 
-<a
-href="{@docRoot}devices/sensors/composite_sensors.html#Significant">Significant
-motion</a> is an example of this kind of sensor.</p>
-<h3 id="special">Special</h3>
-<p>See the individual sensor type descriptions for details.</p>
-<h2 id="categories">Categories</h2>
-<p>Sensors fall into four primary categories:</p>
-<blockquote>
-  <p><em>Base</em> - records core measurements from which all other sensors are derived <br/>
-    <em>Activity</em> - detects user or device movement<br/>
-    <em>Attitude</em> - measures the orientation of the device<br/>
-    <em>Uncalibrated</em> - is identical to the corresponding base sensor except the 
-    dynamic calibration is reported separately rather than applied to the results</p>
-</blockquote>
-<h2 id="base">Base sensors</h2>
-<p>These sensor types are listed first because they are the fundamental sensors 
-  upon which all other sensor types are based.</p>
-<h3 id="Accelerometer">Accelerometer</h3>
-<p><em>Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-<p>All values are in SI units (m/s^2) and measure the acceleration of the device 
-  minus the force of gravity.</p>
-<p>Acceleration sensors return sensor events for all three axes at a constant rate 
-  defined by setDelay().</p>
-<ul>
-  <li>x: Acceleration on the x-axis</li>
-  <li>y: Acceleration on the y-axis</li>
-  <li>z: Acceleration on the z-axis</li>
-</ul>
-<p>Note the readings from the accelerometer include the acceleration due to gravity 
-  (which is opposite the direction of the gravity vector).</p>
-<p>Here are examples:</p>
-<ul>
-  <li>The norm of (x, y, z)  should be close to 0 when in free fall.</li>
-  <li>When the device lies flat on a table and is pushed on its left side toward the 
-    right, the x acceleration value is positive.</li>
-  <li>When the device lies flat on a table, the acceleration value is +9.81, which 
-    corresponds to the acceleration of the device (0 m/s^2) minus the force of 
-    gravity (-9.81 m/s^2).</li>
-  <li>When the device lies flat on a table and is pushed toward the sky, the 
-    acceleration value is greater than +9.81, which corresponds to the 
-    acceleration of the device (+A m/s^2) minus the force of gravity (-9.81 
-    m/s^2).</li>
-</ul>
-<h3 id="Ambient">Ambient temperature</h3>
-<p><em>Trigger-mode: On-change<br/>
-Wake-up sensor: No</em></p>
-<p>This sensor provides the ambient (room) temperature in degrees Celsius.</p>
-<h3 id="Geomagnetic">Geomagnetic field</h3>
-<p><em>Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-<p>All values are in micro-Tesla (uT) and measure the geomagnetic field in the X, Y 
-  and Z axis.</p>
-<p>Returned values include calibration mechanisms so the vector is aligned with the 
-  magnetic declination and heading of the earth's geomagnetic field.</p>
-<p>Magnetic field sensors return sensor events for all three axes at a constant 
-  rate defined by setDelay().</p>
-<h3 id="Gyroscope">Gyroscope</h3>
-<p><em>Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-<p>All values are in radians/second and measure the rate of rotation around the X, 
-  Y and Z axis.  The coordinate system is the same as is used for the acceleration 
-  sensor. Rotation is positive in the counter-clockwise direction (right-hand 
-  rule).</p>
-<p>That is, an observer looking from some positive location on the x, y or z axis 
-  at a device positioned on the origin would report positive rotation if the 
-  device appeared to be rotating counter clockwise. Note that this is the standard 
-  mathematical definition of positive rotation and does not agree with the 
-  definition of roll given elsewhere.</p>
-<p>The range should at least be 17.45 rad/s (ie: ~1000 deg/s).</p>
-<p>Automatic gyro-drift compensation is required.</p>
-<h3 id="Light">Light</h3>
-<p><em>Trigger-mode: On-change<br/>
-Wake-up sensor: No</em></p>
-<p>The light sensor value is returned in SI lux units.</p>
-<h3 id="Proximity">Proximity</h3>
-<p><em>Trigger-mode: On-change<br/>
-Wake-up sensor: Yes</em></p>
-<p>Measures the distance from the sensor to the closest visible surface. As this is 
-  a wake-up sensor, it should wake up the SoC when it is running and detects a 
-  change in proximity. The distance value is measured in centimeters. Note that 
-  some proximity sensors only support a binary &quot;near&quot; or &quot;far&quot; measurement. In 
-  this case, the sensor should report its maxRange value in the &quot;far&quot; state and a 
-  value less than maxRange in the &quot;near&quot; state.</p>
-<p>To ensure the applications have the time to receive the event before the 
-  application processor goes back to sleep, the driver must hold a &quot;timeout wake 
-  lock&quot; for 200 milliseconds for every wake-up sensor. That is, the application 
-  processor should not be allowed to go back to sleep in the 200 milliseconds 
-  following a wake-up interrupt.</p>
-<h3 id="Pressure">Pressure</h3>
-<p><em>Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-<p>The pressure sensor uses a barometer to return the atmospheric pressure in 
-  hectopascal (hPa).</p>
-<h3 id="humidity">Relative humidity</h3>
-<p><em>Trigger-mode: On-change<br/>
-Wake-up sensor: No</em></p>
-<p>A relative humidity sensor measures relative ambient air humidity and returns a 
-  value in percent.</p>
diff --git a/src/devices/sensors/batching.jd b/src/devices/sensors/batching.jd
index 405df88..4986f6a 100644
--- a/src/devices/sensors/batching.jd
+++ b/src/devices/sensors/batching.jd
@@ -1,8 +1,8 @@
-page.title=Batching sensor results
+page.title=Batching
 @jd:body
 
 <!--
-    Copyright 2013 The Android Open Source Project
+    Copyright 2014 The Android Open Source Project
 
     Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
@@ -24,178 +24,196 @@
   </div>
 </div>
 
-<h2 id="Why">Why batch?</h2>
-<p>This page presents the specificities of Batch mode and the expected behaviors
-  of sensors while in batch mode. Batching can enable significant power savings by 
-  preventing the application processor from waking up to receive each event. Instead, these 
-  events can be grouped and processed together.</p>
-<h2 id="batch-function">batch(int handle, int flags, int64_t period_ns, int64_t
-  max_report_latency)</h2>
-<p>Enabling batch mode for a given sensor sets the delay between events.
-  <code>max_report_latency</code> sets the maximum time by which events can be delayed and
-  batched together before being reported to the applications. A value of zero 
-  disables batch mode for the given sensor. The <code>period_ns</code> parameter is equivalent
-  to calling setDelay() -- this function both enables or disables the batch mode 
-  AND sets the event's period in nanoseconds. See setDelay() for a detailed 
-  explanation of the <code>period_ns</code> parameter.</p>
-<p>In non-batch mode, all sensor events must be reported as soon as they are 
-  detected. For example, an accelerometer activated at 50Hz will trigger 
-  interrupts 50 times per second.<br/>
-  While in batch mode, sensor events do not need to be reported as soon as they 
-  are detected. They can be temporarily stored and reported in batches, as long as 
-  no event is delayed by more than <code>maxReportingLatency</code> nanoseconds. That is, all events 
-  since the previous batch are recorded and returned at once. This reduces the 
-  amount of interrupts sent to the SoC and allows the SoC to switch to a lower 
-  power mode (idle) while the sensor is capturing and batching data.</p>
-<p>setDelay() is not affected and it behaves as usual. <br/>
-  <br/>
-  Each event has a timestamp associated with it. The timestamp must be accurate 
-  and correspond to the time at which the event physically happened.</p>
-<p>Batching does not modify the behavior of poll(): batches from different sensors 
-  can be interleaved and split. As usual, all events from the same sensor are 
-  time-ordered.</p>
-<h2 id="Suspend">Behavior outside of suspend mode</h2>
-<p>These are the power modes of the application processor: on, idle, and suspend. 
-  The sensors behave differently in each of these modes. As you would imagine, on 
-  mode is when the application processor is running. Idle mode is a medium power mode 
-  where the application processor is powered but doesn't perform any tasks.
-  Suspend is a low-power mode where the application processor is not powered. The
-  power consumption of the device in this mode is usually 100 times less than in the On
-  mode.</p>
-<p>When the SoC is awake (not in suspend mode), events must be reported in batches 
-  at least every maxReportingLatency. No event shall be dropped or lost. If internal 
-  hardware FIFOs fill up before the maxReportingLatency, then events are reported at that 
-  point to ensure no event is lost.</p>
-<h2 id="Normal">Normal behavior in suspend mode</h2>
-<p>By default, batch mode doesn't significantly change the interaction with suspend 
-  mode. That is, sensors must continue to allow the SoC to go into suspend mode 
-  and sensors must stay active to fill their internal FIFO. In this mode, when the 
-  FIFO fills up, it shall wrap around and behave like a circular buffer, 
-  overwriting older events.<br/>
-  <br/>
-  As soon as the SoC comes out of suspend mode, a batch is produced with as much 
-as the recent history as possible, and batch operation resumes as usual.</p>
-<p>The behavior described above allows applications to record the recent history of 
-  a set of sensor types while keeping the SoC in suspend. It also allows the 
-  hardware to not have to rely on a wake-up interrupt line.</p>
-<h2 id="WAKE_UPON_FIFO_FULL">WAKE_UPON_FIFO_FULL behavior in suspend mode</h2>
-<p>There are cases, however, where an application cannot afford to lose any events, 
-  even when the device goes into suspend mode.</p>
-<p>For a given rate, if a sensor has the capability to store at least 10 seconds 
-  worth of events in its FIFO and is able to wake up the SoC, it can implement an 
-  optional secondary mode: the <code>WAKE_UPON_FIFO_FULL</code> mode.</p>
-<p>The caller will set the <code>SENSORS_BATCH_WAKE_UPON_FIFO_FULL</code> flag to activate this
-  mode. If the sensor does not support this mode, batch() will fail when the flag 
-  is set.</p>
-<p>In batch mode, and only when the flag
-<code>SENSORS_BATCH_WAKE_UPON_FIFO_FULL</code> is
-  set and supported, the specified sensor must be able to wake-up the SoC and be
-  able to buffer at least 10 seconds worth of the requested sensor events.</p>
-<p>When running with the <code>WAKE_UPON_FIFO_FULL</code> flag set, no events can be lost. When
-  the FIFO is getting full, the sensor must wake up the SoC from suspend and 
-  return a batch before the FIFO fills-up.</p>
-<p>Depending on the device, it might take a few milliseconds for the SoC to 
-  entirely come out of suspend and start flushing the FIFO. Enough head room must 
-  be allocated in the FIFO to allow the device to entirely come out of suspend 
-  without the FIFO overflowing (no events shall be lost).</p>
-<p>Implementing the <code>WAKE_UPON_FIFO_FULL</code> mode is optional. If the hardware cannot
-  support this mode, or if the physical FIFO is so small that the device would 
-  never be allowed to go into suspend for at least 10 seconds, then this function 
-  <strong>must</strong> fail when the flag
-<code>SENSORS_BATCH_WAKE_UPON_FIFO_FULL</code> is set, regardless
-  of the value of the maxReportingLatency parameter.</p>
-<h2 id="Implementing">Implementing batching</h2>
-<p>Batch mode, if supported, should happen at the hardware level, typically using 
-  hardware FIFOs. In particular, it SHALL NOT be implemented in the HAL, as this 
-  would be counter productive. The goal here is to save significant amounts of 
-  power. Batching should be implemented without the aid of the SoC, which should
-  be allowed to be in suspend mode during batching.</p>
-<p>In some implementations, events from several sensors can share the same physical 
-  FIFO. In that case, all events in the FIFO can be sent and processed by the HAL 
-  as soon as one batch must be reported.</p>
+<h2 id="what_is_batching">What is batching?</h2>
+<p>“Batching” refers to storing sensor events in a hardware FIFO before reporting
+  them through the <a href="hal-interface.html">HAL</a> instead of reporting them immediately.</p>
+<p>Batching can enable significant power savings by preventing the SoC from waking
+  up to receive each event. Instead, the events can be grouped and processed
+  together. </p>
+<p>The bigger the FIFOs, the more power can be saved. Implementing batching is an
+  exercise in trading off hardware memory for reduced power consumption.</p>
+<p>Batching happens when a sensor possesses a hardware FIFO
+  (<code>sensor_t.fifoMaxEventCount &gt; 0</code>) and we are in one of two situations:</p>
+<ul>
+  <li> <code>max_report_latency &gt; 0</code>, meaning the sensor events for this specific sensor can
+    be delayed up to <code>max_report_latency</code> before being reported through the HAL. </li>
+  <li> or the SoC is in suspend mode and the sensor is a non-wake-up sensor, meaning
+    events must be stored while waiting for the SoC to wake up. </li>
+</ul>
+<p>See the paragraph on the <a
+  href="hal-interface.html#batch_sensor_flags_sampling_period_maximum_report_latency">HAL
+  batch function</a> for more details.</p>
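+<p>For reference, here is a minimal sketch of how batching might be requested
+  through the HAL batch function; the handle, sampling period, and maximum
+  report latency are arbitrary example values:</p>
+<pre>
+#include &lt;stdint.h&gt;
+#include &lt;hardware/sensors.h&gt;
+
+/* Request batching: sample at 50 Hz and allow events to be buffered for up
+   to 5 seconds.  A max_report_latency_ns of 0 would request continuous
+   operation instead. */
+static int request_batching(struct sensors_poll_device_1 *dev, int handle)
+{
+    const int64_t sampling_period_ns    =   20000000LL;  /* 20 ms, i.e. 50 Hz */
+    const int64_t max_report_latency_ns = 5000000000LL;  /* 5 s of buffering  */
+    return dev-&gt;batch(dev, handle, 0 /* flags */,
+                      sampling_period_ns, max_report_latency_ns);
+}
+</pre>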
+<p>The opposite of batching is continuous operation, where events are not
+  buffered, meaning they are reported immediately. Continuous operation
+  corresponds to:</p>
+<ul>
+  <li> when <code>max_report_latency = 0</code> and the events can be delivered to the application,
+    meaning
+    <ul>
+      <li> the SoC is awake </li>
+      <li> or the sensor is a wake-up sensor </li>
+    </ul>
+  </li>
+  <li> or when the sensor doesn’t have a hardware FIFO (<code>sensor_t.fifoMaxEventCount =
+    0</code>), in which case
+    <ul>
+      <li> the events are reported if the SoC is awake or the sensor is a wake-up sensor </li>
+      <li> the events are lost when the SoC is asleep and the sensor is not a wake-up
+        sensor </li>
+    </ul>
+  </li>
+</ul>
+<h2 id="wake-up_fifos_and_non-wake-up_fifos">Wake-up FIFOs and non-wake-up FIFOs</h2>
+<p>Sensor events from <a href="suspend-mode.html#wake-up_sensors">wake-up
+  sensors</a> must be stored into a wake-up FIFO. There can be one wake-up FIFO
+  per sensor, or, more commonly, one big shared wake-up FIFO where events from all wake-up
+  sensors are interleaved. Other options are also possible; for example, some
+  wake-up sensors can have a dedicated FIFO while the rest of the wake-up sensors
+  all share the same one.</p>
+<p>Similarly, sensor events from <a
+  href="suspend-mode.html#non-wake-up_sensors">non-wake-up sensors</a> must be
+  stored in a non-wake-up FIFO, and there can be one or several
+  non-wake-up FIFOs.</p>
+<p>In all cases, wake-up sensor events and non-wake-up sensor events cannot be
+  interleaved into the same FIFO. Wake-up events go in wake-up FIFOs, and
+  non-wake-up events go in non-wake-up FIFOs.</p>
+<p>For the wake-up FIFO, the “one big shared FIFO” design provides the best power
+  benefits. For the non-wake-up FIFO, there is no preference between the “one big
+  shared FIFO” and “several small reserved FIFOs”. See <a
+  href="#fifo_allocation_priority">FIFO allocation priority</a> for suggestions
+  on how to dimension each FIFO.</p>
+<h2 id="behavior_outside_of_suspend_mode">Behavior outside of suspend mode</h2>
+<p>When the SoC is awake (not in suspend mode), the events can be stored
+  temporarily in their FIFO, as long as they are not delayed by more than
+  <code>max_report_latency</code>.</p>
+<p>As long as the SoC doesn’t enter the suspend mode, no event shall be dropped or
+  lost. If an internal hardware FIFO fills up before <code>max_report_latency</code>
+  elapses, then events are reported at that point to ensure that no event is
+  lost.</p>
+<p>If several sensors share the same FIFO and the <code>max_report_latency</code> of one of
+  them elapses, all events from the FIFO are reported, even if the
+  <code>max_report_latency</code> of the other sensors has not yet elapsed. The general goal is
+  to reduce the number of times batches of events must be reported, so as soon as
+  one event must be reported, all events from all sensors can be reported.</p>
 <p>For example, if the following sensors are activated:</p>
 <ul>
-  <li>accelerometer batched with <code>maxReportingLatency</code> = 20s</li>
-  <li>gyroscope batched with <code>maxReportingLatency</code> = 5s</li>
+  <li> accelerometer batched with <code>max_report_latency</code> = 20s </li>
+  <li> gyroscope batched with <code>max_report_latency</code> = 5s </li>
 </ul>
-<p>Then the accelerometer batches can be reported at the same time the gyroscope 
-  batches are reported (every 5 seconds).<br/>
-  <br/>
-  Batch mode can be enabled or disabled at any time, in particular while the 
-  specified sensor is already enabled; and this shall not result in the loss of 
+<p>Then the accelerometer batches can be reported at the same time the gyroscope
+  batches are reported (every 5 seconds), even if the accelerometer and the
+  gyroscope do not share the same FIFO.</p>
+<h2 id="behavior_in_suspend_mode">Behavior in suspend mode</h2>
+<p>Batching is particularly beneficial for collecting sensor data in the
+  background without keeping the SoC awake. Because the sensor drivers and HAL
+  implementation are not allowed to hold a wake-lock*, the SoC can enter the
+  suspend mode even while sensor data is being collected.</p>
+<p>The behavior of sensors while the SoC is suspended depends on whether the
+  sensor is a wake-up sensor. See <a
+href="suspend-mode.html#wake-up_sensors">Wake-up sensors</a> for some
+details.</p>
+<p>When a non-wake-up FIFO fills up, it must wrap around and behave like a
+  circular buffer, overwriting older events: the new events replace the old ones.
+  <code>max_report_latency</code> has no impact on non-wake-up FIFOs while in suspend mode.</p>
+<p>When a wake-up FIFO fills up, or when the <code>max_report_latency</code> of one of the
+  wake-up sensors elapses, the hardware must wake up the SoC and report the data.</p>
+<p>In both cases (wake-up and non-wake-up), as soon as the SoC comes out of
+  suspend mode, a batch is produced with the content of all FIFOs, even if
+  <code>max_report_latency</code> of some sensors has not yet elapsed. This minimizes the risk
+  of having to wake up the SoC again soon if it goes back to suspend. Hence, it
+  minimizes power consumption.</p>
+<p>*One notable exception to the rule that drivers cannot hold a wake lock is when
+  a wake-up sensor with <a href="report-modes.html#continuous">continuous
+  reporting mode</a> is activated with <code>max_report_latency</code> &lt; 1
+  second. In that case, the driver can hold a wake lock because the SoC would
+  not have time to enter suspend mode anyway, as it would be awoken by
+  a wake-up event before reaching suspend mode.</p>
+<h2 id="precautions_to_take_when_batching_wake-up_sensors">Precautions to take when batching wake-up sensors</h2>
+<p>Depending on the device, it might take a few milliseconds for the SoC to
+  entirely come out of suspend and start flushing the FIFO. Enough head room must
+  be allocated in the FIFO to allow the device to entirely come out of suspend
+  without the wake-up FIFO overflowing. No events shall be lost, and the
+  <code>max_report_latency</code> must be respected.</p>
+<h2 id="precautions_to_take_when_batching_non-wake-up_on-change_sensors">Precautions to take when batching non-wake-up on-change sensors</h2>
+<p>On-change sensors only generate events when the value they are measuring is
+  changing. If the measured value changes while the SoC is in suspend mode,
+  applications expect to receive an event as soon as the SoC wakes up. Because of
+  this, batching of <a href="suspend-mode.html#non-wake-up_sensors">non-wake-up</a> on-change sensor events must be performed carefully if the sensor shares its
+  FIFO with other sensors. The last event generated by each on-change sensor must
+  always be saved outside of the shared FIFO so it can never be overwritten by
+  other events. When the SoC wakes up, after all events from the FIFO have been
+  reported, the last on-change sensor event must be reported.</p>
+<p>Here is a situation we want to avoid:</p>
+<ol>
+  <li> An application registers to the non-wake-up step counter (on-change) and the
+    non-wake-up accelerometer (continuous), both sharing the same FIFO </li>
+  <li> The application receives a step counter event “step_count=1000 steps” </li>
+  <li> The SoC goes to suspend </li>
+  <li> The user walks 20 steps, causing step counter and accelerometer events to be
+    interleaved, the last step counter event being “step_count = 1020 steps” </li>
+  <li> The user doesn’t move for a long time, causing accelerometer events to continue
+    accumulating in the FIFO, eventually overwriting every step_count event in the
+    shared FIFO </li>
+  <li> SoC wakes up and all events from the FIFO are sent to the application </li>
+  <li> The application receives only accelerometer events and thinks that the user
+    didn’t walk (bad!) </li>
+</ol>
+<p>By saving the last step counter event outside of the FIFO, the HAL can report
+  this event when the SoC wakes up, even if all other step counter events were
+  overwritten by accelerometer events. This way, the application receives
+  “step_count = 1020 steps” when the SoC wakes up.</p>
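+<p>Here is a minimal sketch of the bookkeeping described above; the array size
+  and function names are illustrative only and are not part of the sensors HAL:</p>
+<pre>
+#include &lt;hardware/sensors.h&gt;
+
+/* Keep the last event of each on-change sensor outside the shared FIFO so it
+   can never be overwritten by other events.  Slot assignment is up to the
+   implementation; 16 slots is an arbitrary example. */
+#define MAX_ON_CHANGE_SENSORS 16
+static sensors_event_t last_on_change[MAX_ON_CHANGE_SENSORS];
+
+static void save_on_change_event(int slot, const sensors_event_t *e)
+{
+    last_on_change[slot] = *e;   /* overwrite the cached copy */
+}
+
+/* Called when the SoC wakes up, after the shared FIFO has been drained:
+   re-report each cached on-change event so that, for example,
+   "step_count = 1020 steps" is never lost even if the FIFO wrapped around. */
+static int report_cached_events(sensors_event_t *out, int max_events)
+{
+    int n = 0;
+    for (int i = 0; i &lt; MAX_ON_CHANGE_SENSORS &amp;&amp; n &lt; max_events; i++) {
+        if (last_on_change[i].sensor != 0) {   /* slot populated */
+            out[n++] = last_on_change[i];
+        }
+    }
+    return n;
+}
+</pre>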
+<h2 id="implementing_batching">Implementing batching</h2>
+<p>Batching cannot be emulated in software. It must be implemented entirely in
+  hardware, with hardware FIFOs. In particular, it cannot be implemented on the
+  SoC, for example in the HAL implementation, as this would be
+  counter-productive. The goal here is to save significant amounts of power.
+  Batching must be implemented without the aid of the SoC, which should be
+  allowed to be in suspend mode during batching.</p>
+<p><code>max_report_latency</code> can be modified at any time, in particular while the
+  specified sensor is already enabled; and this shall not result in the loss of
   events.</p>
-<h2 id="fifo-allocation">FiFo allocation priority</h2>
-<p>On platforms in which hardware FIFO size is limited, the system designers may 
-  have to choose how much FIFO to reserve for each sensor. To help with this 
-  choice, here is a list of applications made possible when batching is 
+<h2 id="fifo_allocation_priority">FIFO allocation priority</h2>
+<p>On platforms in which hardware FIFO size is limited, the system designers may
+  have to choose how much FIFO to reserve for each sensor. To help with this
+  choice, here is a list of applications made possible when batching is
   implemented on the different sensors.</p>
-<p><strong>High value: Low power pedestrian dead reckoning</strong><br/>
-  Target batching time: 20 seconds to 1 minute<br/>
-  Sensors to batch:<br/>
-  - Step detector<br/>
-  - Rotation vector or game rotation vector at 5Hz<br/>
-  Gives us step and heading while letting the SoC go to Suspend.<br/>
-  <br/>
-  <strong>High value: Medium power activity/gesture recognition</strong><br/>
-  Target batching time: 3 seconds<br/>
-  Sensors to batch: accelerometer between 20Hz and 50Hz<br/>
-  Allows recognizing arbitrary activities and gestures without having<br/>
-  to keep the SoC fully awake while the data is collected.<br/>
-  <br/>
-  <strong>Medium-high value: Interrupt load reduction</strong><br/>
-  Target batching time: &lt; 1 second<br/>
-  Sensors to batch: any high frequency sensor.<br/>
-  If the gyroscope is set at 240Hz, even batching just 10 gyro events can<br/>
-  reduce the number of interrupts from 240/second to 24/second.<br/>
-  <br/>
-  <strong>Medium value: Continuous low frequency data collection</strong><br/>
-  Target batching time: &gt; 1 minute<br/>
-  Sensors to batch: barometer, humidity sensor, other low frequency<br/>
-  sensors.<br/>
-  Allows creating monitoring applications at low power.<br/>
-  <br/>
-  <strong>Medium value: Continuous full-sensors collection</strong><br/>
-  Target batching time: &gt; 1 minute<br/>
-  Sensors to batch: all, at high frequencies<br/>
-  Allows full collection of sensor data while leaving the SoC in<br/>
-  suspend mode. Only to consider if fifo space is not an issue.<br/>
-  <br/>
-  In each of the cases above, if <code>WAKE_UPON_FIFO_FULL</code> is implemented, the<br/>
-  applications might decide to let the SoC go to suspend, allowing for even<br/>
-  more power savings.</p>
-<h2 id="Dry-run">Dry run</h2>
-<p>If the flag <code>SENSORS_BATCH_DRY_RUN</code> is set, this function returns without
-  modifying the batch mode or the event period and has no side effects, but 
-  returns errors as usual (as it would if this flag was not set). This flag is 
-  used to check if batch mode is available for a given configuration, in 
-  particular for a given sensor at a given rate.</p>
-<h2 id="Return-values">Return values</h2>
-<p>Because sensors must be independent, the return value must not depend on the 
-  state of the system (whether another sensor is on or not), nor on whether the 
-  flag <code>SENSORS_BATCH_DRY_RUN</code> is set (in other words, if a batch call with
-  <code>SENSORS_BATCH_DRY_RUN</code> is successful, the same call without
-<code>SENSORS_BATCH_DRY_RUN</code>
-  must succeed as well).</p>
-<p>If successful, 0 is returned.</p>
-<p>If the specified sensor doesn't support batch mode, -EINVAL is returned.<br/>
-  If the specified sensor's trigger-mode is one-shot, -EINVAL is returned.</p>
-<p>If WAKE UPON FIFO_FULL is specified and the specified sensor's internal FIFO is 
-  too small to store at least 10 seconds worth of data at the given rate, -EINVAL 
-  is returned. Note that as stated above, this has to be determined at compile 
-  time and not based on the state of the system.</p>
-<p>If some other constraints above cannot be satisfied, -EINVAL is returned.<br/>
-  <br/>
-  Note: The <code>maxReportingLatency</code> parameter when &gt; 0 has no impact on
-  whether this function succeeds or fails.<br/>
-  <br/>
-  If <code>maxReportingLatency</code> is set to 0, this function must succeed.</p>
-<h2 id="Supporting-docs">Supporting documentation</h2>
-<p><a href="http://developer.android.com/guide/topics/sensors/index.html">Developer - Location and Sensors 
-  APIs</a></p>
-<p><a href="http://developer.android.com/guide/topics/sensors/sensors_overview.html">Developer - Sensors 
-  Overview</a></p>
-<p><a href="http://developer.android.com/reference/android/hardware/Sensor.html">Sensors SDK API 
-  reference</a></p>
-<p><a href="{@docRoot}devices/reference/sensors_8h_source.html">Android 
-  Hardware Abstraction Layer - sensors.h</a></p>
-<p><a href="http://developer.android.com/reference/android/hardware/SensorManager.html">SensorManager</a></p>
+<h3 id="high_value_low_power_pedestrian_dead_reckoning">High value: Low power pedestrian dead reckoning</h3>
+<p>Target batching time: 1 to 10 minutes</p>
+<p>Sensors to batch:</p>
+<ul>
+  <li> Wake-up Step detector </li>
+  <li> Wake-up Game rotation vector at 5Hz </li>
+  <li> Wake-up Barometer at 5Hz </li>
+  <li> Wake-up Uncalibrated Magnetometer at 5Hz </li>
+</ul>
+<p>Batching this data allows performing pedestrian dead reckoning while letting
+  the SoC go to suspend.</p>
+<h3 id="high_value_medium_power_intermittent_activity_gesture_recognition">High value: Medium power intermittent activity/gesture recognition</h3>
+<p>Target batching time: 3 seconds</p>
+<p>Sensors to batch: Non-wake-up Accelerometer at 50Hz</p>
+<p>Batching this data allows periodically recognizing arbitrary activities and
+  gestures without having to keep the SoC awake while the data is collected.</p>
+<h3 id="medium_value_medium_power_continuous_activity_gesture_recognition">Medium value: Medium power continuous activity/gesture recognition</h3>
+<p>Target batching time: 1 to 3 minutes</p>
+<p>Sensors to batch: Wake-up Accelerometer at 50Hz</p>
+<p>Batching this data allows continuously recognizing arbitrary activities and
+  gestures without having to keep the SoC awake while the data is collected.</p>
+<h3 id="medium-high_value_interrupt_load_reduction">Medium-high value: Interrupt load reduction</h3>
+<p>Target batching time: &lt; 1 second</p>
+<p>Sensors to batch: any high frequency sensor, usually non-wake-up.</p>
+<p>If the gyroscope is set at 240Hz, even batching just 10 gyro events can reduce
+  the number of interrupts from 240/second to 24/second.</p>
+<h3 id="medium_value_continuous_low_frequency_data_collection">Medium value: Continuous low frequency data collection</h3>
+<p>Target batching time: 1 to 10 minutes</p>
+<p>Sensors to batch:</p>
+<ul>
+  <li> Wake-up barometer at 1Hz </li>
+  <li> Wake-up humidity sensor at 1Hz </li>
+  <li> Other low frequency wake-up sensors at similar rates </li>
+</ul>
+<p>Allows creating monitoring applications at low power.</p>
+<h3 id="medium-low_value_continuous_full-sensors_collection">Medium-low value: Continuous full-sensors collection</h3>
+<p>Target batching time: 1 to 10 minutes</p>
+<p>Sensors to batch: all wake-up sensors, at high frequencies</p>
+<p>Allows full collection of sensor data while leaving the SoC in suspend mode.
+  Consider this only if FIFO space is not an issue.</p>
diff --git a/src/devices/sensors/composite_sensors.jd b/src/devices/sensors/composite_sensors.jd
deleted file mode 100644
index d3fbed2..0000000
--- a/src/devices/sensors/composite_sensors.jd
+++ /dev/null
@@ -1,534 +0,0 @@
-page.title=Composite sensors
-@jd:body
-
-<!--
-    Copyright 2013 The Android Open Source Project
-
-    Licensed under the Apache License, Version 2.0 (the "License");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at
-
-        http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
--->
-<div id="qv-wrapper">
-  <div id="qv">
-    <h2>In this document</h2>
-    <ol id="auto-toc">
-    </ol>
-  </div>
-</div>
-
-<h2 id="summary">Composite sensor type summary</h2>
-
-<p>The following table lists the composite sensor types and their categories, 
-underlying base sensors, and trigger modes. Certain base sensors are required of 
-each sensor for accuracy. Using other tools to approximate results should be 
-avoided as they will invariably provide a poor user experience.</p>
-
-<p>When there is no gyroscope on the device, and
-only when there is no gyroscope, you may implement the rotation vector and
-other composite sensors without using the gyroscope.</p>
-<table>
-  <tr>
-<th>Sensor type</th>
-<th>Category</th>
-<th>Underlying base sensor</th>
-<th>Trigger mode</th>
-</tr>
-<tr>
-<td>Game rotation vector</td>
-<td>Attitude</td>
-<td>Accelerometer, Gyroscope
-MUST NOT USE Magnetometer</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Geomagnetic rotation vector (Magnetometer) <img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" /></td>
-<td>Attitude</td>
-<td>Accelerometer, Magnetometer
-NOT Gyroscope</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Gravity</td>
-<td>Attitude</td>
-<td>Accelerometer, Gyroscope</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Gyroscope uncalibrated</td>
-<td>Uncalibrated</td>
-<td>Gyroscope</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Linear acceleration</td>
-<td>Activity</td>
-<td>Accelerometer, Gyroscope
-AND Magnetometer</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Magnetic field uncalibrated</td>
-<td>Uncalibrated</td>
-<td>Magnetometer</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Orientation</td>
-<td>Attitude</td>
-<td>Accelerometer, Magnetometer
-PREFERRED Gyroscope</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Rotation vector</td>
-<td>Attitude</td>
-<td>Accelerometer, Gyroscope
-AND Magnetometer</td>
-<td>Continuous</td>
-</tr>
-<tr>
-<td>Significant motion
-  <img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" /></td>
-<td>Activity</td>
-<td>Accelerometer (or another as long as very low power)</td>
-<td>One-shot</td>
-</tr>
-<tr>
-<td>Step counter
-  <img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" /></td>
-<td>Activity</td>
-<td>Accelerometer</td>
-<td>On-
-change</td>
-</tr>
-<tr>
-<td>Step detector
-  <img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" /></td>
-<td>Activity</td>
-<td>Accelerometer</td>
-<td>Special</td>
-</tr>
-</table>
-
-<p><img src="images/battery_icon.png" alt="low power icon"/> = 
-Low power sensor</p>
-
-<h2 id="Activity">Activity sensors</h2>
-
-<h3 id="acceleration">Linear acceleration</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer, Gyroscope AND Magnetometer<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p>Indicates the linear acceleration of the device in device coordinates, not 
-including gravity. The output is conceptually:<br/>
-output of <code>TYPE_ACCELERATION</code> minus output of
-<code>TYPE_GRAVITY</code>.</p>
-
-<p>Readings on all axes should be close to 0 when the device is immobile. Units are 
-m/s^2. The coordinate system is the same as is used for the acceleration sensor.</p>
-
-<h3 id="Significant">Significant motion</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer (or another as long as low power)<br/>
-Trigger-mode: One-shot<br/>
-Wake-up sensor: Yes</em></p>
-
-<p>Significant motion allows a device to stay in suspend and idle modes longer and 
-save power. It does this by relying upon last known location until the device 
-experiences "significant motion." Such a movement would trigger on mode and a 
-call to retrieve new location.</p>
-
-<p>Here is an example on how the platform can use significant motion to save
-power. When users are moving, their locations are updated frequently. After some period 
-of inactivity, significant motion presumes the device is static and stops 
-seeking location updates. It instead registers the last known location as valid. 
-The device is then allowed to go into idle and then suspend mode.</p>
-
-<p>This sensor exists to save power by keeping the SoC in suspend mode when the 
-device is at rest. A sensor of this type triggers an event each time significant 
-motion is detected and automatically disables itself. The only allowed value to 
-return is 1.0.</p>
-
-<p>A significant motion is a motion that might lead to a change in the user 
-location. Examples of such significant motions are:</p>
-
-<ul>
-<li>walking or biking</li>
-<li>sitting in a moving car, coach or train</li>
-</ul>
-
-<p>Examples of situations that should not trigger significant motion:</p>
-
-<ul>
-<li>phone in pocket and person is not moving</li>
-<li>phone is on a table and the table shakes a bit due to nearby traffic or 
-washing machine</li>
-</ul>
-
-<p>This sensor makes a tradeoff for power consumption that may result in a small 
-amount of false negatives. This is done for a few reasons:</p>
-
-<ol>
-<li>The goal of this sensor is to save power.</li>
-<li>Triggering an event when the user is not moving (false positive) is costly in 
-terms of power, so it should be avoided.</li>
-<li>Not triggering an event when the user is moving (false negative) is 
-acceptable as long as it is not done repeatedly. If the user has been walking 
-for 10 seconds, not triggering an event within those 10 seconds is not 
-acceptable.</li>
-</ol>
-
-<p>To ensure the applications have the time to receive the significant motion event 
-before the application processor goes back to sleep, the driver must hold a 
-"timeout wake lock" for 200 milliseconds for every wake-up sensor. That is, the 
-application processor should not be allowed to go back to sleep in the 200 
-milliseconds following a wake-up interrupt.</p>
-
-<p><strong>Important</strong>: This sensor is very different from the other types in that it
-must work when the screen is off without the need for holding a partial wake 
-lock (other than the timeout wake lock) and MUST allow the SoC to go into 
-suspend. When significant motion is detected, the sensor must awaken the SoC and 
-the event be reported.</p>
-
-<p>If a particular device cannot support this mode of operation, then this sensor 
-type <strong>must not</strong> be reported by the HAL. ie: it is not acceptable to "emulate" 
-this sensor in the HAL.</p>
-
-<p>When the sensor is not activated, it must also be deactivated in the hardware; 
-it must not wake up the SoC anymore, even in case of significant motion.</p>
-
-<p>setDelay() has no effect and is ignored.</p>
-
-<p>Once a "significant motion" event is returned, a sensor of this type must 
-disable itself automatically, as if activate(..., 0) had been called.</p>
-
-<h3 id="detector">Step detector</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer<br/>
-Trigger-mode: Special<br/>
-Wake-up sensor: No</em></p>
-
-<p>A sensor of this type triggers an event each time a step is taken by the user. 
-The only allowed value to return is 1.0 and an event is generated for each step. 
-Like with any other event, the timestamp indicates when the event (here the 
-step) occurred. This corresponds to when the foot hit the ground, generating a 
-high variation in acceleration.</p>
-
-<p>Compared to the step counter, the step detector should have a lower latency 
-(less than 2 seconds). Both the step detector and the step counter detect when 
-the user is walking, running and walking up the stairs. They should not trigger 
-when the user is biking, driving or in other vehicles.</p>
-
-<p>While this sensor operates, it shall not disrupt any other sensors, in 
-particular, the accelerometer; it might very well be in use.</p>
-
-<p>This sensor must be low power. That is, if the step detection cannot be done in 
-hardware, this sensor should not be defined. Also, when the step detector is 
-activated and the accelerometer is not, only steps should trigger interrupts 
-(not accelerometer data).</p>
-
-<p>setDelay() has no impact on this sensor type.</p>
-
-<h3 id="counter">Step counter</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer<br/>
-Trigger-mode: On-change<br/>
-Wake-up sensor: No</em></p>
-
-<p>A sensor of this type returns the number of steps taken by the user since the 
-last reboot while activated. The value is returned as a uint64_t and is reset to 
-zero only on a system reboot.</p>
-
-<p>The timestamp of the event is set to the time when the last step for that event 
-was taken.<br/>
-See the <a href="#detector">Step detector</a> 
-sensor type for the signification of the time of a step.</p>
-
-<p>Compared to the step detector, the step counter can have a higher latency (less 
-than 10 seconds).  Thanks to this latency, this sensor has a high accuracy; the 
-step count after a full day of measures should be within 10% of the real step 
-count. Both the step detector and the step counter detect when the user is 
-walking, running and walking up the stairs. They should not trigger when the 
-user is biking, driving or in other vehicles.</p>
-
-<p><strong>Important note</strong>: This sensor is different from other types in that it must work 
-when the screen is off without the need of holding a partial wake-lock and MUST 
-allow the SoC to go into suspend.</p>
-
-<p>While in suspend mode this sensor must stay active. No events are reported 
-during that time but steps continue to be accounted for; an event will be 
-reported as soon as the SoC resumes if the timeout has expired.</p>
-
-<p>In other words, when the screen is off and the device is allowed to go into 
-suspend mode, it should not be woken up, regardless of the setDelay() value. But 
-the steps shall continue to be counted.</p>
-
-<p>The driver must however ensure the internal step count never overflows. The 
-minimum size of the hardware's internal counter shall be 16 bits. (This 
-restriction is here to avoid too frequent wake-ups when the delay is very 
-large.) It is allowed in this situation to wake the SoC up so the driver can do 
-the counter maintenance.</p>
-
-<p>While this sensor operates, it shall not disrupt any other sensors, in 
-particular, the accelerometer; it might very well be in use.</p>
-
-<p>If a particular device cannot support these modes of operation, then this sensor 
-type <strong>must not</strong> be reported by the HAL. ie: it is not acceptable to "emulate" 
-this sensor in the HAL.</p>
-
-<p>This sensor must be low power. That is, if the step detection cannot be done in 
-hardware, this sensor should not be defined. Also, when the step counter is 
-activated and the accelerometer is not, only steps should trigger interrupts 
-(not accelerometer data).</p>
-
-<h2 id="Attitude">Attitude sensors</h2>
-
-<h3 id="Rotation-vector">Rotation vector</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer, Gyroscope AND Magnetometer<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p>The rotation vector symbolizes the orientation of the device relative to the 
-East-North-Up coordinates frame. It is usually obtained by integration of 
-accelerometer, gyroscope and magnetometer readings.</p>
-
-<p>The East-North-Up coordinate system is defined as a direct orthonormal basis 
-where:</p>
-
-<ul>
-<li>X points east and is tangential to the ground.</li>
-<li>Y points north and is tangential to the ground.</li>
-<li>Z points towards the sky and is perpendicular to the ground.</li>
-</ul>
-
-<p>The orientation of the phone is represented by the rotation necessary to align 
-the East-North-Up coordinates with the phone's coordinates. That is, applying 
-the rotation to the world frame (X,Y,Z) would align them with the phone 
-coordinates (x,y,z).</p>
-
-<p>The rotation can be seen as rotating the phone by an angle theta around an axis 
-rot_axis to go from the reference (East-North-Up aligned) device orientation to 
-the current device orientation.</p>
-
-<p>The rotation is encoded as the four (reordered) components of a unit quaternion:</p>
-
-<ul>
-<li><code>sensors_event_t.data[0]</code> = rot_axis.x*sin(theta/2)</li>
-<li><code>sensors_event_t.data[1]</code> = rot_axis.y*sin(theta/2)</li>
-<li><code>sensors_event_t.data[2]</code> = rot_axis.z*sin(theta/2)</li>
-<li><code>sensors_event_t.data[3]</code> = cos(theta/2)</li>
-</ul>
-
-<p>Where:</p>
-
-<ul>
-<li>rot_axis.x,y,z are the North-East-Up coordinates of a unit length vector 
-representing the rotation axis</li>
-<li>theta is the rotation angle</li>
-</ul>
-
-<p>The quaternion must be of norm 1. (It is a unit quaternion.) Failure to ensure 
-this will cause erratic client behaviour.</p>
-
-<p>In addition, this sensor reports an estimated heading accuracy:<br/>
-<code>sensors_event_t.data[4]</code> = estimated_accuracy (in radians)</p>
-
-<p>The heading error must be less than estimated_accuracy 95% of the time. This 
-sensor must use a gyroscope and an accelerometer as main orientation change 
-input.</p>
-
-<p>This sensor should also include magnetometer input to make up for gyro drift, 
-but it cannot be implemented using only a magnetometer.</p>
-
-<h3 id="Game-rotation">Game rotation vector</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer, Gyroscope NOT Magnetometer<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p>Similar to the <a href="#Rotation-vector">rotation vector</a> sensor but not using 
-the geomagnetic field. Therefore the Y axis doesn't point north but instead to 
-some other reference. That reference is allowed to drift by the same order of 
-magnitude as the gyroscope drifts around the Z axis.</p>
-
-<p>This sensor does not report an estimated heading accuracy:<br/>
-<code>sensors_event_t.data[4]</code> is reserved and should be set to 0</p>
-
-<p>In an ideal case, a phone rotated and returned to the same real-world 
-orientation should report the same game rotation vector (without using the 
-earth's geomagnetic field).</p>
-
-<p>This sensor must be based on a gyroscope. It cannot be implemented using a 
-magnetometer.</p>
-
-<h3 id="Gravity">Gravity</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer, Gyroscope NOT Magnetometer<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p>The gravity output of this sensor indicates the direction and magnitude of 
-gravity in the device's coordinates. Units are m/s^2. On Earth, the magnitude is 
-9.8 m/s^2. The coordinate system is the same as is used for the acceleration 
-sensor. When the device is at rest, the output of the gravity sensor should be 
-identical to that of the accelerometer.</p>
-
-<h3 id="Magnetometer">Geomagnetic rotation vector (Magnetometer)</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer, Magnetometer NOT Gyroscope<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p>This sensor is similar to the <a href="#Rotation-vector">rotation vector</a> sensor 
-but using a magnetometer instead of a gyroscope.</p>
-
-<p>This sensor must be based on a magnetometer. It cannot be implemented using a 
-gyroscope, and gyroscope input cannot be used by this sensor.</p>
-
-<p>Just like the rotation vector sensor, this sensor reports an estimated heading 
-accuracy:<br/>
-<code>sensors_event_t.data[4]</code> = estimated_accuracy (in radians)</p>
-
-<p>The heading error must be less than estimated_accuracy 95% of the time.</p>
-
-<p>See the <a href="#Rotation-vector">rotation vector</a> sensor description for more 
-details.</p>
-
-<h3 id="Orientation">Orientation</h3>
-
-<p><em>Underlying base sensor(s): Accelerometer, Magnetometer PREFERRED Gyroscope<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p><strong>Note</strong>: This is an older sensor type that has been 
-deprecated in the Android SDK although not yet in the HAL. It has been replaced 
-by the rotation vector sensor, which is more clearly defined, requires a 
-gyroscope, and therefore provides more accurate results. Use the rotation vector 
-sensor over the orientation sensor whenever possible.</p>
-
-<p>The orientation sensor tracks the attitude of the device. All values are angles 
-in degrees. Orientation sensors return sensor events for all three axes at a 
-constant rate defined by setDelay().</p>
-
-<ul>
-<li>azimuth: angle between the magnetic north direction and the Y axis, around <br />
-the Z axis (0&lt;=azimuth&lt;360). 0=North, 90=East, 180=South, 270=West</li>
-<li>pitch: Rotation around X axis (-180&lt;=pitch&lt;=180), with positive values when 
-the z-axis moves toward the y-axis.</li>
-<li>roll: Rotation around Y axis (-90&lt;=roll&lt;=90), with positive values when the 
-x-axis moves towards the z-axis.</li>
-</ul>
-
-<p>Please note, for historical reasons the roll angle is positive in the clockwise 
-direction. (Mathematically speaking, it should be positive in the 
-counter-clockwise direction):</p>
-
-<div class="figure" style="width:264px">
-  <img src="images/axis_positive_roll.png" alt="Depiction of orientation relative to a device" height="253" />
-  <p class="img-caption">
-    <strong>Figure 2.</strong> Orientation relative to a device.
-  </p>
-</div>
-
-<p>This definition is different from yaw, pitch and roll used in aviation where the 
-X axis is along the long side of the plane (tail to nose).</p>
-
-<h2 id="Uncalibrated">Uncalibrated sensors</h2>
-
-<p>Uncalibrated sensors provide more raw results and may include some bias but also 
-contain fewer "jumps" from corrections applied through calibration. Some 
-applications may prefer these uncalibrated results as smoother and more 
-reliable. For instance, if an application is attempting to conduct its own 
-sensor fusion, introducing calibrations can actually distort results.</p>
-
-<h3 id="Gyroscope-uncalibrated">Gyroscope uncalibrated</h3>
-
-<p><em>Underlying base sensor(s): Gyroscope<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p>The uncalibrated gyroscope is useful for post-processing and melding orientation 
-data. All values are in radians/second and measure the rate of rotation around 
-the X, Y and Z axis. An estimation of the drift on each axis is reported as 
-well.</p>
-
-<p>No gyro-drift compensation shall be performed. Factory calibration and 
-temperature compensation should still be applied to the rate of rotation 
-(angular speeds).</p>
-
-<p>The coordinate system is the same as is used for the acceleration sensor. 
-Rotation is positive in the counter-clockwise direction (right-hand rule). That 
-is, an observer looking from some positive location on the x, y or z axis at a 
-device positioned on the origin would report positive rotation if the device 
-appeared to be rotating counter clockwise. Note that this is the standard 
-mathematical definition of positive rotation and does not agree with the 
-definition of roll given elsewhere.</p>
-
-<p>The range should at least be 17.45 rad/s (ie: ~1000 deg/s).</p>
-
-<p>Content of an uncalibrated_gyro event (units are rad/sec):</p>
-
-<ul>
-<li>x_uncalib : angular speed (w/o drift compensation) around the X axis</li>
-<li>y_uncalib : angular speed (w/o drift compensation) around the Y axis</li>
-<li>z_uncalib : angular speed (w/o drift compensation) around the Z axis</li>
-<li>x_bias : estimated drift around X axis in rad/s</li>
-<li>y_bias : estimated drift around Y axis in rad/s</li>
-<li>z_bias : estimated drift around Z axis in rad/s</li>
-</ul>
-
-<p>If the implementation is not able to estimate the drift, then this sensor <strong>must 
-not</strong> be reported by this HAL. Instead, the regular 
-<a href="{@docRoot}devices/sensors/base_triggers.html#Gyroscope">Gyroscope</a> sensor is used without drift compensation.</p>
-
-<p>If this sensor is present, then the corresponding Gyroscope sensor must be 
-present and both must return the same <code>sensor_t::name</code> and
-<code>sensor_t::vendor</code>.</p>
-
-<h3 id="Magnetic-field-uncalibrated">Magnetic field uncalibrated</h3>
-
-<p><em>Underlying base sensor(s): Magnetometer<br/>
-Trigger-mode: Continuous<br/>
-Wake-up sensor: No</em></p>
-
-<p>Similar to <a href="{@docRoot}devices/sensors/base_triggers.html#Geomagnetic">Geomagnetic field</a> sensor, but the hard 
-iron calibration is reported separately instead of being included in the 
-measurement. The uncalibrated magnetometer allows the system to handle bad hard 
-iron estimation.</p>
-
-<p>Factory calibration and temperature compensation should still be applied to the 
-"uncalibrated" measurement. Separating away the hard iron calibration estimation 
-allows the system to better recover from bad hard iron estimation.</p>
-
-<p>All values are in micro-Tesla (uT) and measure the ambient magnetic field in the 
-X, Y and Z axis. Assumptions that the magnetic field is due to the Earth's poles 
-should be avoided.</p>
-
-<p>The uncalibrated_magnetic event contains three fields for uncalibrated measurement: x_uncalib, y_uncalib, z_uncalib. Each is a component of the 
-measured magnetic field, with soft iron and temperature compensation applied, 
-but not hard iron calibration. These values should be continuous (no 
-re-calibration should cause a jump).</p>
-
-<p>The uncalibrated_magnetic event contains three fields for hard iron bias estimates: x_bias, y_bias, z_bias. Each field is a component of the estimated 
-hard iron calibration. They represent the offsets to apply to the calibrated 
-readings to obtain uncalibrated readings (x_uncalib ~= x_calibrated + x_bias). 
-These values are expected to jump as soon as the estimate of the hard iron 
-changes, and they should be stable the rest of the time.</p>
-
-<p>If this sensor is present, then the corresponding Geomagnetic field sensor must 
-be present and both must return the same  <code>sensor_t::name</code> and
-<code>sensor_t::vendor</code>.</p>
-
-<p>See the <a href="{@docRoot}devices/sensors/base_triggers.html#Geomagnetic">geomagnetic field</a> sensor description for more 
-information.<br/></p>
diff --git a/src/devices/sensors/hal-interface.jd b/src/devices/sensors/hal-interface.jd
new file mode 100644
index 0000000..5c232fa
--- /dev/null
+++ b/src/devices/sensors/hal-interface.jd
@@ -0,0 +1,367 @@
+page.title=HAL interface
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>The HAL interface, declared in <a href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a>, represents the interface between the Android <a href="sensor-stack.html#framework">framework</a> and the hardware-specific software. A HAL implementation must define each
+  function declared in sensors.h. The main functions are:</p>
+<ul>
+  <li><code>get_sensors_list</code> - Returns the list of all sensors. </li>
+  <li><code>activate</code> - Starts or stops a sensor. </li>
+  <li><code>batch</code> - Sets a sensor’s parameters such as sampling frequency and maximum
+    reporting latency. </li>
+  <li><code>setDelay</code> - Used only in HAL version 1.0. Sets the sampling frequency for a
+    given sensor. </li>
+  <li><code>flush</code> - Flushes the FIFO of the specified sensor and reports a flush complete
+    event when this is done. </li>
+  <li><code>poll</code> - Returns available sensor events. </li>
+</ul>
+<p>The implementation must be thread safe and allow these functions to be called
+  from different threads.</p>
+<p>The interface also defines several types used by those functions. The main
+  types are:</p>
+<ul>
+  <li><code>sensors_module_t</code></li>
+  <li><code>sensors_poll_device_t</code></li>
+  <li><code>sensor_t</code></li>
+  <li><code>sensors_event_t</code></li>
+</ul>
+<p>In addition to the sections below, see <a href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a> for more information on those types.</p>
+<h2 id="get_sensors_list_list">get_sensors_list(list)</h2>
+<pre>int (*get_sensors_list)(struct sensors_module_t* module, struct sensor_t
+  const** list);</pre>
+<p>Provides the list of sensors implemented by the HAL. See <a href="#sensor_t">sensor_t</a> for details on how the sensors are defined.</p>
+<p>The order in which the sensors appear in the list is the order in which the
+  sensors will be reported to the applications. Usually, the base sensors appear
+  first, followed by the composite sensors.</p>
+<p>If several sensors share the same sensor type and wake-up property, the first
+  one in the list is called the “default” sensor. It is the one returned by
+  <code>getDefaultSensor(int sensorType, bool wakeUp)</code>.</p>
+<p>This function returns the number of sensors in the list.</p>
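+<p>A minimal sketch of this function, assuming the HAL defines its sensor list
+  in a table elsewhere (the names <code>kSensorList</code> and
+  <code>kSensorCount</code> are illustrative, not part of sensors.h):</p>
+<pre>
+extern const struct sensor_t kSensorList[];  /* defined by the HAL, base sensors first */
+extern const int kSensorCount;
+
+static int sensors__get_sensors_list(struct sensors_module_t* module,
+                                     struct sensor_t const** list) {
+    (void) module;
+    *list = kSensorList;
+    return kSensorCount;  /* number of sensors, not an error code */
+}
+</pre>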
+<h2 id="activate_sensor_true_false">activate(sensor, true/false)</h2>
+<pre>int (*activate)(struct sensors_poll_device_t *dev, int sensor_handle, int
+  enabled);</pre>
+<p>Activates or deactivates a sensor.</p>
+<p><code>sensor_handle</code> is the handle of the sensor to activate/deactivate. A sensor’s
+  handle is defined by the <code>handle</code> field of its <a href="#sensor_t">sensor_t</a> structure.</p>
+<p><code>enabled</code> is set to 1 to enable or 0 to disable the sensor.</p>
+<p>One-shot sensors deactivate themselves automatically upon receiving an event,
+  and they must still accept being deactivated through a call to <code>activate(...,
+  enabled=0)</code>.</p>
+<p>Non-wake-up sensors never prevent the SoC from going into suspend mode; that
+  is, the HAL shall not hold a partial wake-lock on behalf of applications.</p>
+<p>Wake-up sensors, when delivering events continuously, can prevent the SoC from
+  going into suspend mode, but if no event needs to be delivered, the partial
+  wake-lock must be released.</p>
+<p>If <code>enabled</code> is 1 and the sensor is already activated, this function is a no-op
+  and succeeds.</p>
+<p>If <code>enabled</code> is 0 and the sensor is already deactivated, this function is a no-op
+  and succeeds.</p>
+<p>This function returns 0 on success and a negative error number otherwise.</p>
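+<p>A hedged sketch of the no-op behavior described above;
+  <code>hw_is_enabled</code> and <code>hw_set_enabled</code> are hypothetical
+  driver calls, not part of sensors.h:</p>
+<pre>
+static int sensors__activate(struct sensors_poll_device_t* dev,
+                             int sensor_handle, int enabled) {
+    (void) dev;
+    int want_on = (enabled != 0);
+    if (want_on == hw_is_enabled(sensor_handle))
+        return 0;  /* already in the requested state: no-op, success */
+    /* hw_set_enabled() returns 0 on success or a negative error number. */
+    return hw_set_enabled(sensor_handle, want_on);
+}
+</pre>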
+<h2 id="batch_sensor_flags_sampling_period_maximum_report_latency">batch(sensor, flags, sampling period, maximum report latency)</h2>
+<pre>
+int (*batch)(
+     struct sensors_poll_device_1* dev,
+     int sensor_handle,
+     int flags,
+     int64_t sampling_period_ns,
+     int64_t max_report_latency_ns);
+</pre>
+<p>Sets a sensor’s parameters, including <a href="#sampling_period_ns">sampling frequency</a> and <a href="#max_report_latency_ns">maximum report latency</a>. This function can be called while the sensor is activated, in which case it
+  must not cause any sensor measurements to be lost: Transitioning from one
+  sampling rate to the other cannot cause lost events, nor can transitioning from
+  a high maximum report latency to a low maximum report latency.</p>
+<p><code>sensor_handle</code> is the handle of the sensor to configure.</p>
+<p><code>flags</code> is currently unused.</p>
+<p><code>sampling_period_ns</code> is the sampling period at which the sensor should run, in
+  nanoseconds. See <a href="#sampling_period_ns">sampling_period_ns</a> for more details.</p>
+<p><code>max_report_latency_ns</code> is the maximum time by which events can be delayed before
+  being reported through the HAL, in nanoseconds. See the <a href="#max_report_latency_ns">max_report_latency_ns</a> paragraph for more details.</p>
+<p>This function returns 0 on success and a negative error number otherwise.</p>
+<h3 id="sampling_period_ns">sampling_period_ns</h3>
+<p>What the <code>sampling_period_ns</code> parameter means depends on the specified sensor's
+  reporting mode:</p>
+<ul>
+  <li> Continuous: <code>sampling_period_ns</code> is the sampling period, which sets the rate at which
+    events are generated. </li>
+  <li> On-change: <code>sampling_period_ns</code> limits the sampling rate of events, meaning
+    events are generated no faster than every <code>sampling_period_ns</code> nanoseconds. There
+    might be periods longer than <code>sampling_period_ns</code> where no event is generated if
+    the measured values do not change for long periods. See <a
+    href="report-modes.html#on-change">on-change</a> reporting mode for more
+    details. </li>
+  <li> One-shot: <code>sampling_period_ns</code> is ignored. It has no effect. </li>
+  <li> Special: See the specific <a href="sensor-types.html">sensor type
+  descriptions</a> for details on how <code>sampling_period_ns</code> is used
+  for special sensors. </li>
+</ul>
+<p>See <a href="report-modes.html">Reporting modes</a> for more information
+  about the impact of <code>sampling_period_ns</code> in the different modes.</p>
+<p>For continuous and on-change sensors,</p>
+<ul>
+  <li> if <code>sampling_period_ns</code> is less than
+    <code>sensor_t.minDelay</code>, then the HAL implementation must silently
+    clamp it to <code>max(sensor_t.minDelay, 1ms)</code>. Android
+    does not support the generation of events at more than 1000Hz. </li>
+  <li> if <code>sampling_period_ns</code> is greater than
+    <code>sensor_t.maxDelay</code>, then the HAL
+    implementation must silently truncate it to <code>sensor_t.maxDelay</code>. </li>
+</ul>
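+<p>The clamping rules above might be implemented as in the following sketch
+  (a helper the HAL could define; it is not part of sensors.h). Note that
+  <code>minDelay</code> and <code>maxDelay</code> are in microseconds while the
+  requested period is in nanoseconds:</p>
+<pre>
+static int64_t clamp_period_ns(int64_t requested_ns,
+                               int64_t min_delay_us, int64_t max_delay_us) {
+    int64_t min_ns = min_delay_us * 1000;
+    int64_t max_ns = max_delay_us * 1000;
+    int64_t one_ms = 1000000;
+
+    if (requested_ns &lt; min_ns)  /* too fast: clamp to max(minDelay, 1 ms) */
+        return (min_ns &gt; one_ms) ? min_ns : one_ms;
+    if (max_ns &gt; 0)
+        if (requested_ns &gt; max_ns)  /* too slow: truncate to maxDelay */
+            return max_ns;
+    return requested_ns;
+}
+</pre>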
+<p>Physical sensors sometimes have limitations on the rates at which they can run
+  and the accuracy of their clocks. To account for this, we allow the actual
+  sampling frequency to differ from the requested frequency, as long as it
+  satisfies the requirements in the table below.</p>
+<table>
+  <tr>
+    <th><p>If the requested frequency is</p></th>
+    <th><p>Then the actual frequency must be</p></th>
+  </tr>
+  <tr>
+    <td><p>below min frequency (&lt;1/maxDelay)</p></td>
+    <td><p>between 90% and 110% of the min frequency</p></td>
+  </tr>
+  <tr>
+    <td><p>between min and max frequency</p></td>
+    <td><p>between 90% and 220% of the requested frequency</p></td>
+  </tr>
+  <tr>
+    <td><p>above max frequency (&gt;1/minDelay)</p></td>
+    <td><p>between 90% and 110% of the max frequency</p>
+      <p>and below 1100Hz</p></td>
+  </tr>
+</table>
+<p>Note that this contract is valid only at the HAL level, where there is always a
+  single client. At the SDK level, applications might get different rates, due to
+  the multiplexing happening in the Framework. See <a
+  href="sensor-stack.html#framework">Framework</a> for more details.</p>
+<h3 id="max_report_latency_ns">max_report_latency_ns</h3>
+<p><code>max_report_latency_ns</code> sets the maximum time in nanoseconds, by which events can
+  be delayed and stored in the hardware FIFO before being reported through the
+  HAL while the SoC is awake.</p>
+<p>A value of zero signifies that the events must be reported as soon as they are
+  measured, either skipping the FIFO altogether, or emptying the FIFO as soon as
+  one event from this sensor is present in it.</p>
+<p>For example, an accelerometer activated at 50Hz with <code>max_report_latency_ns=0</code>
+  will trigger interrupts 50 times per second when the SoC is awake.</p>
+<p>When <code>max_report_latency_ns&gt;0</code>, sensor events do not need to be reported as soon
+  as they are detected. They can be temporarily stored in the hardware FIFO and
+  reported in batches, as long as no event is delayed by more than
+  max_report_latency_ns nanoseconds. That is, all events since the previous batch
+  are recorded and returned at once. This reduces the amount of interrupts sent
+  to the SoC and allows the SoC to switch to a lower power mode (idle) while the
+  sensor is capturing and batching data.</p>
+<p>Each event has a timestamp associated with it. Delaying the time at which an
+  event is reported does not impact the event timestamp. The timestamp must be
+  accurate and correspond to the time at which the event physically happened, not
+  the time it is being reported. </p>
+<p>Allowing sensor events to be stored temporarily in the hardware FIFO does not
+  modify the behavior of <code>poll</code>: events from different sensors can be interleaved,
+  and as usual, all events from the same sensor are time-ordered.</p>
+<p>See <a href="batching.html">Batching</a> for more details on sensor
+batching, including behaviors in suspend mode and out of suspend mode.</p>
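+<p>One possible way for a HAL to honor <code>max_report_latency_ns</code> is to
+  translate it into a hardware FIFO watermark, as in this hedged sketch (the
+  helper is illustrative, not part of sensors.h):</p>
+<pre>
+/* Number of events to let accumulate before raising an interrupt. */
+static int fifo_watermark(int64_t max_report_latency_ns,
+                          int64_t sampling_period_ns,
+                          int fifo_reserved_event_count) {
+    if (max_report_latency_ns == 0 || fifo_reserved_event_count &lt; 2)
+        return 1;  /* no batching: report each event as soon as it is measured */
+    int64_t events = max_report_latency_ns / sampling_period_ns;
+    if (events &lt; 1)
+        events = 1;
+    if (events &gt; fifo_reserved_event_count)
+        events = fifo_reserved_event_count;  /* never exceed the reserved FIFO space */
+    return (int) events;
+}
+</pre>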
+<h2 id="setdelay_sensor_sampling_period">setDelay(sensor, sampling period)</h2>
+<pre>
+int (*setDelay)(
+     struct sensors_poll_device_t *dev,
+     int sensor_handle,
+     int64_t sampling_period_ns);
+</pre>
+<p>After HAL version 1.0, this function is deprecated and is never called.
+  Instead, the <code>batch</code> function is called to set the
+  <code>sampling_period_ns</code> parameter.</p>
+<p>In HAL version 1.0, setDelay was used instead of batch to set <a href="#sampling_period_ns">sampling_period_ns</a>.</p>
+<h2 id="flush_sensor">flush(sensor)</h2>
+<pre>int (*flush)(struct sensors_poll_device_1* dev, int sensor_handle);</pre>
+<p>Adds a <a href="#metadata_flush_complete_events">flush complete event</a> to the end of the hardware FIFO for the specified sensor and flushes the FIFO;
+  those events are delivered as usual (i.e.: as if the maximum reporting latency
+  had expired) and removed from the FIFO.</p>
+<p>The flush happens asynchronously (i.e.: this function must return immediately).
+  If the implementation uses a single FIFO for several sensors, that FIFO is
+  flushed and the flush complete event is added only for the specified sensor.</p>
+<p>If the specified sensor has no FIFO (no buffering possible), or if the FIFO
+  was empty at the time of the call, <code>flush</code> must still succeed and send a flush
+  complete event for that sensor. This applies to all sensors other than one-shot
+  sensors.</p>
+<p>When <code>flush</code> is called, even if a flush event is already in the FIFO for that
+  sensor, an additional one must be created and added to the end of the FIFO, and
+  the FIFO must be flushed. The number of <code>flush</code> calls must be
+  equal to the number of flush complete events created.</p>
+<p><code>flush</code> does not apply to <a href="report-modes.html#one-shot">one-shot</a>
+  sensors; if <code>sensor_handle</code> refers to a one-shot sensor,
+  <code>flush</code> must return <code>-EINVAL</code> and not generate any
+  flush complete metadata event.</p>
+<p>This function returns 0 on success, <code>-EINVAL</code> if the specified sensor is a
+  one-shot sensor or wasn’t enabled, and a negative error number otherwise.</p>
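+<p>A hedged sketch of this behavior; <code>is_one_shot</code>,
+  <code>is_enabled</code>, <code>enqueue_flush_complete</code> and
+  <code>hw_request_fifo_drain</code> are hypothetical helpers, and errno.h is
+  assumed for <code>EINVAL</code>:</p>
+<pre>
+static int sensors__flush(struct sensors_poll_device_1* dev, int sensor_handle) {
+    (void) dev;
+    if (is_one_shot(sensor_handle) || !is_enabled(sensor_handle))
+        return -EINVAL;  /* one-shot or inactive sensors are never flushed */
+    /* Queue a META_DATA_FLUSH_COMPLETE event behind the data already in the
+     * FIFO, then ask the hardware to drain; both happen asynchronously. */
+    enqueue_flush_complete(sensor_handle);
+    hw_request_fifo_drain();
+    return 0;  /* the flush itself completes later */
+}
+</pre>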
+<h2 id="poll">poll()</h2>
+<pre>int (*poll)(struct sensors_poll_device_t *dev, sensors_event_t* data, int
+  count);</pre>
+<p>Returns an array of sensor data by filling the <code>data</code> argument. This function
+  must block until events are available. It will return the number of events read
+  on success, or a negative error number in case of an error.</p>
+<p>The number of events returned in <code>data</code> must be less than or equal to
+  the <code>count</code> argument. This function shall never return 0 (no event).</p>
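+<p>A minimal sketch of the blocking behavior described above;
+  <code>wait_for_events</code> and <code>dequeue_events</code> are hypothetical
+  helpers around the driver’s event queue:</p>
+<pre>
+static int sensors__poll(struct sensors_poll_device_t* dev,
+                         sensors_event_t* data, int count) {
+    (void) dev;
+    int n = 0;
+    while (n == 0) {          /* poll must never return 0 events */
+        wait_for_events();    /* blocks until at least one event is available */
+        n = dequeue_events(data, count);  /* copies at most count events */
+    }
+    return n;  /* number of events read, or a negative error number */
+}
+</pre>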
+<h2 id="sequence_of_calls">Sequence of calls</h2>
+<p>When the device boots, <code>get_sensors_list</code> is called.</p>
+<p>When a sensor gets activated, the <code>batch</code> function will be called with the
+  requested parameters, followed by <code>activate(..., enable=1)</code>.</p>
+<p>Note that in HAL version 1.0, the order was the opposite: <code>activate</code> was called
+  first, followed by <code>setDelay</code>.</p>
+<p>When the requested characteristics of a sensor are changing while it is
+  activated, the <code>batch</code> function is called.</p>
+<p><code>flush</code> can be called at any time, even on non-activated sensors (in which case
+  it must return <code>-EINVAL</code>).</p>
+<p>When a sensor gets deactivated, <code>activate(..., enable=0)</code> will be called.</p>
+<p>In parallel to those calls, the <code>poll</code> function will be called repeatedly to
+  request data. <code>poll</code> can be called even when no sensors are activated.</p>
+<h2 id="sensors_module_t">sensors_module_t</h2>
+<p><code>sensors_module_t</code> is the type used to create the Android hardware module for the
+  sensors. The implementation of the HAL must define an object
+  <code>HAL_MODULE_INFO_SYM</code> of this type to expose the <a
+  href="#get_sensors_list_list">get_sensors_list</a> function. See the definition
+  of <code>sensors_module_t</code> in <a
+  href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a> and the
+  definition of <code>hw_module_t</code> for more information.</p>
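+<p>As an illustration, a HAL could expose the module roughly as follows
+  (version fields are omitted and <code>sensors_module_methods</code> is a
+  hypothetical <code>hw_module_methods_t</code> defined elsewhere):</p>
+<pre>
+struct sensors_module_t HAL_MODULE_INFO_SYM = {
+    .common = {
+        .tag     = HARDWARE_MODULE_TAG,
+        .id      = SENSORS_HARDWARE_MODULE_ID,
+        .name    = "Example Sensors Module",
+        .author  = "Example Vendor",
+        .methods = &amp;sensors_module_methods,
+    },
+    .get_sensors_list = sensors__get_sensors_list,  /* see the sketch above */
+};
+</pre>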
+<h2 id="sensors_poll_device_t_sensors_poll_device_1_t">sensors_poll_device_t / sensors_poll_device_1_t</h2>
+<p><code>sensors_poll_device_1_t</code> contains the rest of the methods defined above:
+  <code>activate</code>, <code>batch</code>, <code>flush</code> and
+  <code>poll</code>. Its <code>common</code> field (of type <a
+  href="{@docRoot}devices/reference/structhw__device__t.html">hw_device_t</a>)
+  defines the version number of the HAL.</p>
+<h2 id="sensor_t">sensor_t</h2>
+<p><code>sensor_t</code> represents an <a href="index.html">Android sensor</a>. Here are some of its important fields:</p>
+<p><strong>name:</strong> A user-visible string that represents the sensor. This string often
+  contains the part name of the underlying sensor, the type of the sensor, and
+  whether it is a wake-up sensor. For example, “LIS2HH12 Accelerometer”,
+  “MAX21000 Uncalibrated Gyroscope”, “BMP280 Wake-up Barometer”, “MPU6515 Game
+  Rotation Vector”</p>
+<p><strong>handle:</strong> The integer used to refer to the sensor when registering to it or
+  generating events from it.</p>
+<p><strong>type:</strong> The type of the sensor. See the explanation of sensor
+type in <a href="index.html">What are Android sensors?</a> for more details, and see <a
+href="sensor-types.html">Sensor types</a> for official sensor types. For
+non-official sensor types, <code>type</code> must start with <code>SENSOR_TYPE_DEVICE_PRIVATE_BASE</code>.</p>
+<p><strong>stringType:</strong> The type of the sensor as a string. When the sensor has an official
+  type, set to <code>SENSOR_STRING_TYPE_*</code>. When the sensor has a manufacturer specific
+  type, <code>stringType</code> must start with the manufacturer reverse domain name. For
+  example, a sensor (say a unicorn detector) defined by the
+  <em>Cool-product</em> team at Fictional-Company could use
+  <code>stringType=”com.fictional_company.cool_product.unicorn_detector”</code>.
+  The <code>stringType</code> is used to uniquely identify non-official sensors types. See <a
+  href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a> for more
+  information on types and string types.</p>
+<p><strong>requiredPermission:</strong> A string representing the permission that applications must
+  possess to see the sensor, register to it and receive its data. An empty string
+  means applications do not require any permission to access this sensor. Some
+  sensor types like the <a href="sensor-types.html#heart_rate">heart rate
+  monitor</a> have a mandatory <code>requiredPermission</code>. All sensors
+  providing sensitive user information (such as the heart rate) must be protected by a permission.</p>
+<p><strong>flags:</strong> Flags for this sensor, defining the sensor’s reporting mode and whether
+  the sensor is a wake-up sensor or not. For example, a one-shot wake-up sensor
+  will have <code>flags = SENSOR_FLAG_ONE_SHOT_MODE | SENSOR_FLAG_WAKE_UP</code>. The bits of
+  the flag that are not used in the current HAL version must be left equal to 0.</p>
+<p><strong>maxRange:</strong> The maximum value the sensor can report, in the same unit as the
+  reported values. The sensor must be able to report values without saturating
+  within <code>[-maxRange; maxRange]</code>. Note that this means the total range of the
+  sensor in the generic sense is <code>2*maxRange</code>. When the sensor reports values over
+  several axes, the range applies to each axis. For example, a “+/- 2g”
+  accelerometer will report <code>maxRange = 2*9.81 = 2g</code>.</p>
+<p><strong>resolution:</strong> The smallest difference in value that the sensor can measure.
+  Usually computed based on <code>maxRange</code> and the number of bits in the measurement.</p>
+<p><strong>power:</strong> The power cost of enabling the sensor, in milliAmps. This is nearly
+  always more than the power consumption reported in the datasheet of the
+  underlying sensor. See <a
+href="sensor-types.html#base_sensors_=_not_equal_to_physical_sensors">Base
+sensors != physical sensors</a> for more details and see <a
+href="power-use.html#power_measurement_process">Power measurement process</a> for details on
+how to measure the power consumption of a sensor. If the
+  sensor’s power consumption depends on whether the device is moving, the power
+  consumption while moving is the one reported in the <code>power</code> field.</p>
+<p><strong>minDelay:</strong> For continuous sensors, the sampling period, in microseconds,
+  corresponding to the fastest rate the sensor supports. See <a href="#sampling_period_ns">sampling_period_ns</a> for details on how this value is used. Beware that <code>minDelay</code> is expressed in
+  microseconds while <code>sampling_period_ns</code> is in nanoseconds. For on-change and
+  special reporting mode sensors, unless otherwise specified, <code>minDelay</code> must be 0.
+  For one-shot sensors, it must be -1.</p>
+<p><strong>maxDelay:</strong> For continuous and on-change sensors, the sampling period, in
+  microseconds, corresponding to the slowest rate the sensor supports. See <a href="#sampling_period_ns">sampling_period_ns</a> for details on how this value is used. Beware that <code>maxDelay</code> is expressed in
+  microseconds while <code>sampling_period_ns</code> is in nanoseconds. For special and
+  one-shot sensors, <code>maxDelay</code> must be 0.</p>
+<p><strong>fifoReservedEventCount:</strong> The number of events reserved for this sensor in the
+  hardware FIFO. If there is a dedicated FIFO for this sensor, then
+  <code>fifoReservedEventCount</code> is the size of this dedicated FIFO. If the FIFO is
+  shared with other sensors, <code>fifoReservedEventCount</code> is the size of the part of
+  the FIFO that is reserved for that sensor. On most shared-FIFO systems, and on
+  systems that do not have a hardware FIFO, this value is 0.</p>
+<p><strong>fifoMaxEventCount:</strong> The maximum number of events that could be stored in the
+  FIFOs for this sensor. This is always greater than or equal to
+  <code>fifoReservedEventCount</code>. This value is used to estimate how quickly the FIFO
+  will get full when registering to the sensor at a specific rate, supposing no
+  other sensors are activated. On systems that do not have a hardware FIFO,
+  <code>fifoMaxEventCount</code> is 0. See <a href="batching.html">Batching</a> for more details.</p>
+<p>For sensors with an official sensor type, some of the fields are overwritten by
+  the framework. For example, <a
+  href="sensor-types.html#accelerometer">accelerometer</a> sensors are forced to
+  have a continuous reporting mode, and <a
+  href="sensor-types.html#heart_rate">heart rate</a> monitors are forced to be
+  protected by the <code>SENSOR_PERMISSION_BODY_SENSORS</code> permission.</p>
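+<p>A purely illustrative <code>sensor_t</code> entry for a hypothetical “+/- 2g”
+  accelerometer, showing how the fields above relate (all values are made up):</p>
+<pre>
+static const struct sensor_t kAccelerometer = {
+    .name       = "Example 3-axis Accelerometer",
+    .vendor     = "Example Vendor",
+    .version    = 1,
+    .handle     = 1,
+    .type       = SENSOR_TYPE_ACCELEROMETER,
+    .maxRange   = 2.0f * 9.81f,             /* +/- 2g, reported in m/s^2 */
+    .resolution = 4.0f * 9.81f / 65536.0f,  /* 16-bit ADC over the 2*maxRange span */
+    .power      = 0.25f,                    /* mA, measured while the device is moving */
+    .minDelay   = 10000,                    /* fastest rate 100Hz, in microseconds */
+    .maxDelay   = 500000,                   /* slowest rate 2Hz, in microseconds */
+    .fifoReservedEventCount = 0,            /* no hardware FIFO in this example */
+    .fifoMaxEventCount      = 0,
+    .flags      = SENSOR_FLAG_CONTINUOUS_MODE,
+};
+</pre>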
+<h2 id="sensors_event_t">sensors_event_t</h2>
+<p>Sensor events generated by Android sensors and reported through the <a
+href="#poll">poll</a> function are of <code>type sensors_event_t</code>. Here are some
+important fields of <code>sensors_event_t</code>:</p>
+<p><strong>version:</strong> Must be <code>sizeof(struct sensors_event_t)</code>.</p>
+<p><strong>sensor:</strong> The handle of the sensor that generated the event, as defined by
+  <code>sensor_t.handle</code>.</p>
+<p><strong>type:</strong> The sensor type of the sensor that generated the event, as defined by
+  <code>sensor_t.type</code>.</p>
+<p><strong>timestamp:</strong> The timestamp of the event in nanoseconds. This is the time the
+  event happened (a step was taken, or an accelerometer measurement was made),
+  not the time the event was reported. <code>timestamp</code> must be synchronized with the
+  <code>elapsedRealtimeNano</code> clock, and in the case of continuous sensors, the jitter
+  must be small. Timestamp filtering is sometimes necessary to satisfy the CDD
+  requirements, as using only the SoC interrupt time to set the timestamps
+  causes too much jitter, and using only the sensor chip time to set the
+  timestamps can cause de-synchronization from the
+  <code>elapsedRealtimeNano</code> clock, as the sensor clock drifts.</p>
+<p><strong>data and overlapping fields:</strong> The values measured by the sensor. The meaning and
+  units of those fields are specific to each sensor type. See <a
+  href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a> and the
+  definition of the different <a href="sensor-types.html">Sensor types</a> for a
+  description of the data fields. For some sensors, the accuracy of the
+  readings is also reported as part of the data, through a <code>status</code> field. This
+  field is only piped through for those select sensor types, appearing at the SDK
+  layer as an accuracy value. For those sensors, the fact that the status field
+  must be set is mentioned in their <a href="sensor-types.html">sensor type</a> definition.</p>
+<h3 id="metadata_flush_complete_events">Metadata flush complete events</h3>
+<p>Metadata events have the same type as normal sensor events:
+  <code>sensors_event_meta_data_t = sensors_event_t</code>. They are returned together with
+  other sensor events through poll. They possess the following fields:</p>
+<p><code>version</code> must be <code>META_DATA_VERSION</code></p>
+<p><code>type</code> must be <code>SENSOR_TYPE_META_DATA</code></p>
+<p><code>sensor</code>, <code>reserved</code>, and <code>timestamp</code> must be 0</p>
+<p><code>meta_data.what</code> contains the metadata type for this event. There is currently a
+  single valid metadata type: <code>META_DATA_FLUSH_COMPLETE</code>.</p>
+<p><code>META_DATA_FLUSH_COMPLETE</code> events represent the completion of the flush of a
+  sensor FIFO. When <code>meta_data.what=META_DATA_FLUSH_COMPLETE</code>, <code>meta_data.sensor</code>
+  must be set to the handle of the sensor that has been flushed. They are
+  generated when and only when <code>flush</code> is called on a sensor. See the section on
+  the <a href="#flush_sensor">flush</a> function for more information.</p>
diff --git a/src/devices/sensors/images/sensor_layers.png b/src/devices/sensors/images/sensor_layers.png
new file mode 100644
index 0000000..7d1ca25
--- /dev/null
+++ b/src/devices/sensors/images/sensor_layers.png
Binary files differ
diff --git a/src/devices/sensors/index.jd b/src/devices/sensors/index.jd
index e5fa438..6f21488 100644
--- a/src/devices/sensors/index.jd
+++ b/src/devices/sensors/index.jd
@@ -1,8 +1,8 @@
-page.title=Sensors HAL overview
+page.title=Sensors
 @jd:body
 
 <!--
-    Copyright 2013 The Android Open Source Project
+    Copyright 2014 The Android Open Source Project
 
     Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
@@ -24,238 +24,138 @@
   </div>
 </div>
 
-<h2 id="intro">Introduction</h2>
-<p><a href="http://developer.android.com/guide/topics/sensors/sensors_overview.html">Android 
-  sensors</a> give applications access to a mobile device's underlying base sensor(s): 
-  accelerometer, gyroscope, and magnetometer. Manufacturers develop the drivers 
-  that define additional composite sensor types from those base sensors. For 
-  instance, Android offers both calibrated and uncalibrated gyroscopes, a 
-  geomagnetic rotation vector and a game rotation vector. This variety gives 
-  developers some flexibility in tuning applications for battery life optimization 
-  and accuracy.</p>
-<p>The <a href="{@docRoot}devices/reference/sensors_8h_source.html">Sensors
-Hardware Abstraction Layer (HAL) API</a> is the interface between the hardware drivers 
-and the Android framework; the <a href="http://developer.android.com/reference/android/hardware/Sensor.html">Sensors Software Development Kit (SDK) 
-  API</a> is the interface between the Android framework and the Java applications. Please note, 
-  the Sensors HAL API described in this documentation is not identical to the 
-  Sensors SDK API described on <a href="http://developer.android.com/reference/android/hardware/Sensor.html">developer.android.com</a>. 
-  For example, some sensors that are deprecated in the SDK may still exist in the 
-  HAL, and vice versa.</p>
-<p>Similarly, audio recorders, Global Positioning System (GPS) devices, and 
-  accessory (pluggable) sensors are not supported by the Android Sensors HAL API 
-  described here. This API covers sensors that are physically part of the device 
-  only. Please see the <a href="{@docRoot}devices/audio.html">Audio</a>, <a href="{@docRoot}devices/reference/gps_8h.html">Location </a><a href="{@docRoot}devices/reference/gps_8h.html">Strategies</a>, 
-  and the <a href="{@docRoot}accessories/index.html">Accessories</a> section 
-  for information on those devices.</p>
-<p>Application framework<br/>
-  At the application framework level is the app code, which utilizes the <a href="http://developer.android.com/reference/android/hardware/package-summary.html">android.hardware</a> APIs to interact with the sensors hardware. Internally, this code calls 
-  corresponding JNI glue classes to access the native code that interacts with the 
-  sensors hardware.</p>
-<p>JNI<br/>
-  The JNI code associated with <a href="http://developer.android.com/reference/android/hardware/package-summary.html">android.hardware</a> is located in the frameworks/base/core/jni/ directory. This code calls the lower 
-  level native code to obtain access to the sensor hardware.</p>
-<p>Native framework<br/>
-  The native framework is defined in <code>frameworks/native/</code> and provides a native
-  equivalent to the <a href="http://developer.android.com/reference/android/hardware/package-summary.html">android.hardware</a> package. The native framework calls the Binder IPC proxies to obtain access to 
-  sensor-specific services.</p>
-<p>Binder IPC<br/>
-  The Binder IPC proxies facilitate communication over process boundaries.</p>
-<p>HAL<br/>
-  The Hardware Abstraction Layer (HAL) defines the standard interface that sensor 
-  services call into and that you must implement to have your sensor hardware 
-  function correctly. The sensor HAL interfaces are located in 
-  <code>hardware/libhardware/include/hardware</code>. See <a
-href="http://source.android.com/devices/reference/sensors_8h.html">sensors.h</a> for
-additional details.</p>
-<p>Kernel Driver<br/>
-  The sensors driver interacts with the hardware and your implementation of the 
-  HAL. The HAL is driver-agnostic.</p>
-<h3 id="axis-def">Sensor axis definition</h3>
-<p>The sensor event values are expressed in a specific frame that is static 
-  relative to the phone. This API is relative only to the NATURAL orientation of 
-  the screen. In other words:</p>
-<ul>
-  <li>the axes are not swapped when the device's screen orientation changes.</li>
-  <li>higher level services <em>may</em> perform this transformation.</li>
-</ul>
-<div class="figure" style="width:269px"> <img src="http://developer.android.com/images/axis_device.png" alt="Coordinate system relative to device for Sensor
-    API" height="225" />
-  <p class="img-caption"> <strong>Figure 1.</strong> Coordinate system (relative to a device) that's used by the Sensor
-    API. </p>
-</div>
-<h3 id="accuracy">Accuracy</h3>
-<p>The sensors included by the manufacturer must be accurate and precise to meet
-the expectations of application developers. The sensors included in Android devices are 
-  tested for sensor interaction and accuracy as part of the <a href="{@docRoot}compatibility/index.html">Android Compatibility 
-    program</a> starting in the 
-  Android 4.4 release. Testing will continue to be improved in future releases. 
-  See the <em>Sensors</em> section of the Android Compatibility Definition Document (CDD) 
-  for the exact requirements.</p>
-<h3 id="power">Power consumption</h3>
-<p>Some defined sensor are higher power than others. Others are lower power by 
-  design and should be implemented as such with their processing done in the 
-  hardware. This means they should not require the application processor to be 
-  running. Here are the low-power sensors:</p>
-<ul>
-  <li><a href="{@docRoot}devices/sensors/composite_sensors.html#Magnetometer">Geomagnetic rotation vector</a></li>
-  <li><a href="{@docRoot}devices/sensors/composite_sensors.html#Significant">Significant motion</a></li>
-  <li><a href="{@docRoot}devices/sensors/composite_sensors.html#counter">Step counter</a></li>
-  <li><a href="{@docRoot}devices/sensors/composite_sensors.html#detector">Step detector</a></li>
-</ul>
-<p>They are accompanied by a low-power <img src="images/battery_icon.png"
-alt="low-power sensors"/>
-  icon in the <a href="{@docRoot}devices/sensors/composite_sensors.html#summary">Sensor summary</a> table. </p>
-<p>These sensor types cannot be implemented at high power as their primary benefit 
-  is low battery use. It is better to not implement a low-power sensor at all 
-  rather than implement it as high power.</p>
-<p>Composite low-power sensor types, such as the step detector, must have their 
-  processing conducted in the hardware; power use is much lower than if done in 
-  the software. Power use is low on small microprocessors and even lower still on 
-  application-specific integrated circuits (ASICs). A hardware implementation of 
-  composite sensor types can also make use of more raw sensor data and a better 
-  synchronization between sensors.</p>
-<h3 id="release">HAL release cycle</h3>
-<p>Functionality is tied to versions of the API. Android maintains two versions of 
-  the Sensors HAL API per release. For instance, if version 1 was the latest and 
-  version 1.1 is released, the version prior to 1 will no longer be supported upon 
-  that release. Only the two latest versions of the Sensors HAL API are supported.</p>
-<h2 id="interaction">Interaction</h2>
-<h3 id="concurrent">Concurrent running</h3>
-<p>Android sensors must work independently of one another. Activating one sensor 
-  shall not deactivate another sensor. Activating one shall not reduce the rate of 
-  another. This is a key element of compatibility testing.</p>
-<h3 id="suspend">Interaction with suspend mode</h3>
-<p>Unless otherwise noted, an enabled sensor shall not prevent the system on a chip 
-  (SoC) from going into suspend mode. It is the responsibility of applications to keep a 
-  partial <a href="http://developer.android.com/reference/android/os/PowerManager.WakeLock.html">wake 
-    lock</a> should they wish to receive sensor events while the screen is off. While in 
-  suspend mode, and unless otherwise noted (<a
-href="{@docRoot}devices/sensors/batching.html">batch</a> mode 
-  and sensor particularities), enabled sensors' events are lost.</p>
-<p>Note that conceptually, the sensor itself is not deactivated while in suspend 
-  mode. Instead, the data it returns is missing. The oldest data is dropped to 
-  accommodate the latest data. As soon as the SoC gets out of suspend mode, 
-  operations resume as normal.</p>
-<p>Most applications should either hold a wake lock to ensure the system doesn't go 
-  to suspend, or unregister from the sensors when they do not need them, unless 
-  batch mode is active. When batching, sensors must continue to fill their 
-  internal FIFO. (See the documentation of <a
-href="{@docRoot}devices/sensors/batching.html">batch</a> mode 
-  to learn how suspend interacts with batch mode.)</p>
-<p>Wake-up sensors are a notable exception to the above. Wake-up sensors must
-wake up the SoC to deliver events. They must still let the SoC go into suspend
-mode, but must also wake it up when an event is triggered.</p>
-<h3 id="fusion">Sensor fusion and virtual sensors</h3>
-<p>Many composite sensor types are or can be implemented as virtual sensors from 
-  underlying base sensors on the device. Examples of composite sensors types 
-  include the rotation vector sensor, orientation sensor, step detector and step 
-  counter.</p>
-<p>From the point of view of this API, these virtual sensors <strong>must</strong> appear as 
-  real, individual sensors. It is the responsibility of the driver and HAL to make 
-  sure this is the case.</p>
-<p>In particular, all sensors must be able to function concurrently. For example, 
-  if defining both an accelerometer and a step counter, then both must be able to 
-  work concurrently.</p>
-<h3 id="hal">HAL interface</h3>
-<p>These are the common sensor calls expected at the HAL level:</p>
-<ol>
-  <li><em>getSensorList()</em> - Gets the list of all sensors.</li>
-  <li><em>activate()</em> - Starts or stops the specified sensor.</li>
-  <li><em>batch()</em> - Sets parameters to group event data collection and optimize power use.</li>
-  <li><em>setDelay()</em> - Sets the event's period in 
-    nanoseconds for a given sensor.</li>
-  <li><em>flush()</em> - Flush adds an event to the end of the 
-    &quot;batch mode&quot; FIFO for the specified sensor and flushes the FIFO.</li>
-  <li><em>poll()</em> - Returns an array of sensor data. </li>
-</ol>
-<p>Please note, the implementation must be thread safe and allow these values to be 
-  called from different threads.</p>
-<h4 id="getSensorList">getSensorList(sensor_type)</h4>
-<p>Provide the list of sensors implemented by the HAL for the given sensor type. </p>
-<p>Developers may then make multiple calls to get sensors of different types or use 
-  <code>Sensor.TYPE_ALL</code> to get all the sensors. See getSensorList() defined on
-  developer.android.com for more details.</p>
-<h4 id="activate">activate(sensor, true/false)</h4>
-<pre>
-            int (*activate)(struct sensors_poll_device_t *dev,
-                    int handle, int enabled);</pre>
-<p>Activates or deactivates the sensor with the specified handle. Handles must be 
-  higher than <code>SENSORS_HANDLE_BASE</code> and must be unique. A handle identifies a given
-  sensor. The handle is used to activate and/or deactivate sensors. In this 
-  version of the API, there can only be 256 handles.</p>
-<p>The handle is the handle of the sensor to change. The enabled argument is set to 
-  1 to enable or 0 to disable the sensor.</p>
-<p>Unless otherwise noted in the individual sensor type descriptions, an activated 
-  sensor never prevents the SoC from going into suspend mode; that is, the HAL 
-  shall not hold a partial wake lock on behalf of applications.<br/>
-  <br/>
-  One-shot sensors deactivate themselves automatically upon receiving an event, 
-  and they must still accept to be deactivated through a call to activate(..., 
-  ..., 0).<br/>
-  <br/>
-  If &quot;enabled&quot; is 1 and the sensor is already activated, this function is a no-op 
-  and succeeds. If &quot;enabled&quot; is 0 and the sensor is already deactivated, this 
-  function is a no-op and succeeds. This returns 0 on success and a negative errno 
-  code otherwise.</p>
-<h4 id="batch">batch(sensor, batching parameters)</h4>
-<pre>
-            int (*batch)(struct sensors_poll_device_1* dev,
-                   int handle, int flags, int64_t period_ns, int64_t timeout);
-</pre>
-<p>Sets parameters to group event data collection and reduce power use. Batching 
-  can enable significant power savings by allowing the application processor to 
-  sleep rather than awake for each notification. Instead, these notifications can 
-  be grouped and processed together. See the <a
-href="{@docRoot}devices/sensors/batching.html">Batching</a> section for details.</p>
-<h4 id="setDelay">setDelay(sensor, delay)</h4>
-<pre>
-            int (*setDelay)(struct sensors_poll_device_t *dev,
-                    int handle, int64_t period_ns);
-</pre>
-<p>Sets the event's period in nanoseconds for a given sensor. What the
-<code>period_ns</code> parameter means depends on the specified sensor's trigger mode:</p>
-<ul>
-  <li>Continuous: setDelay() sets the sampling rate.</li>
-  <li>On-change: setDelay() limits the delivery rate of events.</li>
-  <li>One-shot: setDelay() is ignored. It has no effect.</li>
-  <li>Special: See specific sensor type descriptions.</li>
-</ul>
-<p>For continuous and on-change sensors, if the requested value is less than
-<code>sensor_t::minDelay</code>, then it's silently clamped to
-<code>sensor_t::minDelay</code> unless <code>sensor_t::minDelay</code> is 0,
-in which case it is clamped to &gt;= 1ms. setDelay will not be called when the sensor is
-in batching mode. In this case, batch() will be called with the new period. Return 0 if successful, 
-&lt; 0 on error.</p>
-<p>When calculating the sampling period T in setDelay (or batch), the actual period
-should be smaller than T and no smaller than T/2. Finer granularity is not
-necessary.</p>
-<h4 id="flush">flush()</h4>
-<pre>
-            int (*flush)(struct sensors_poll_device_1* dev, int handle);
-</pre>
-<p>Flush adds a <code>META_DATA_FLUSH_COMPLETE</code> event
-(<code>sensors_event_meta_data_t</code>) to the
-  end of the &quot;batch mode&quot; FIFO for the specified sensor and flushes the FIFO; 
-  those events are delivered as usual (i.e.: as if the batch timeout had expired) 
-  and removed from the FIFO.<br/>
-  <br/>
-  The flush happens asynchronously (i.e.: this function must return immediately). 
-  If the implementation uses a single FIFO for several sensors, that FIFO is 
-  flushed and the <code>META_DATA_FLUSH_COMPLETE</code> event is added only for the specified
-  sensor.<br/>
-  <br/>
-  If the specified sensor wasn't in batch mode, flush succeeds and promptly sends 
-  a <code>META_DATA_FLUSH_COMPLETE</code> event for that sensor.</p>
-<p>If the FIFO was empty at the time of the call, flush returns 0 (success) and 
-  promptly sends a <code>META_DATA_FLUSH_COMPLETE</code> event for that sensor.<br/>
-  <br/>
-  If the specified sensor wasn't enabled, flush returns -EINVAL. return 0 on 
-  success, negative errno code otherwise.</p>
-<h4 id="poll">poll()</h4>
-<pre>            int (*poll)(struct sensors_poll_device_t *dev,
-                    sensors_event_t* data, int count);</pre>
-<p>Returns an array of sensor data. This function must block until events are 
-  available. It will return the number of events read on success, or -errno in 
-  case of an error.</p>
-<p>The number of events returned in data must be less or equal to the &quot;count&quot; 
-  argument. This function shall never return 0 (no event).</p>
+    <h2 id="what_are_android_sensors">What are Android sensors?</h2>
+    <p>Android sensors give applications access to a mobile device's underlying
+      physical sensors. They are data-providing virtual devices defined by the
+      implementation of <a
+      href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a>,
+      the sensor Hardware Abstraction Layer (HAL).</p>
+    <ul>
+      <li> Those virtual devices provide data coming from a set of physical sensors:
+        accelerometer, gyroscope, magnetometer, barometer, humidity, pressure,
+        light, proximity, and heart rate sensors, among others. </li>
+      <li> Notably, the camera, fingerprint sensor, microphone, and touch screen are
+        currently not in the list of physical devices providing data through “Android
+        sensors.” They have their own reporting mechanisms. </li>
+      <li> The separation is arbitrary, but in general, Android sensors provide
+        lower-bandwidth data. For example, “100Hz x 3 channels” for an accelerometer
+        versus “25Hz x 8 MP x 3 channels” for a camera or “44kHz x 1 channel” for a
+        microphone. </li>
+    </ul>
+    <p>How the different physical sensors are connected to the system on chip
+       (SoC) is not defined by Android.</p>
+    <ul>
+      <li> Often, sensor chips are connected to the SoC through a <a href="sensor-stack.html#sensor_hub">sensor hub</a>, allowing some low-power monitoring and processing of the data. </li>
+      <li> Often, Inter-Integrated Circuit (I2C) or Serial Peripheral Interface
+        (SPI) is used as the transport mechanism. </li>
+      <li> To reduce power consumption, some architectures are hierarchical, with
+        minimal processing done in the application-specific integrated circuit
+        (ASIC), such as motion detection on the accelerometer chip, and more done
+        in a microcontroller, such as step detection in a sensor hub. </li>
+      <li> It is up to the device manufacturer to choose an architecture based on
+	accuracy, power, price and package-size characteristics. See <a
+        href="sensor-stack.html">Sensor stack</a> for more information. </li>
+      <li> Batching capabilities are an important consideration for power optimization.
+        See <a href="batching.html">Batching</a> for more information. </li>
+    </ul>
+    <p>Each Android sensor has a “type” representing how the sensor behaves and what
+      data it provides.</p>
+    <ul>
+      <li> The official Android <a href="sensor-types.html">Sensor types</a> are defined in <a href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a> under the names SENSOR_TYPE_…
+        <ul>
+          <li> The vast majority of sensors have an official sensor type. </li>
+          <li> Those types are documented in the Android SDK. </li>
+          <li> The behavior of sensors with those types is tested in the Android
+            Compatibility Test Suite (CTS). </li>
+        </ul>
+      </li>
+      <li> If a manufacturer integrates a new kind of sensor on an Android device, the
+        manufacturer can define its own temporary type to refer to it.
+        <ul>
+          <li> Those types are undocumented, so application developers are unlikely to
+            use them, either because they don’t know about them or because they know
+            they are rarely present (only on some devices from this specific
+            manufacturer). </li>
+          <li> They are not tested by CTS. </li>
+	  <li> Once Android defines an official sensor type for this kind of
+	       sensor, manufacturers must stop using their own temporary type
+	       and use the official type instead. This way, the sensor will be
+               used by more application developers. </li>
+        </ul>
+      </li>
+      <li> The list of all sensors present on the device is reported by the HAL
+        implementation.
+        <ul>
+          <li> There can be several sensors of the same type. For example, two proximity
+            sensors or two accelerometers. </li>
+          <li> The vast majority of applications request only a single sensor of a given type.
+            For example, an application requesting the default accelerometer will get the
+            first accelerometer in the list. </li>
+          <li> Sensors are often defined as <a href="suspend-mode.html#wake-up_sensors">wake-up</a> and <a href="suspend-mode.html#non-wake-up_sensors">non-wake-up</a> pairs, with both sensors sharing the same type but differing
+            in their wake-up characteristic. </li>
+        </ul>
+      </li>
+    </ul>
+<p>Android sensors provide data as a series of sensor events.</p>
+      <p> Each <a href="hal-interface.html#sensors_event_t">event</a> contains:</p>
+        <ul>
+          <li> a handle to the sensor that generated it </li>
+          <li> the timestamp at which the event was detected or measured </li>
+          <li> and some data </li>
+        </ul>
+      <p>The interpretation of the reported data depends on the sensor type.
+          See the <a href="sensor-types.html">sensor type</a> definitions for details on
+          what data is reported for each sensor type.</p>
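+      <p>As a rough sketch (assuming the <code>poll</code> entry point declared in
+        <a href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a>), a
+        caller of the HAL might drain and interpret events as follows;
+        <code>handle_accel_event</code> is a hypothetical consumer.</p>
+<pre>
+#include &lt;hardware/sensors.h&gt;
+
+void handle_accel_event(int handle, int64_t t, float x, float y, float z); /* hypothetical */
+
+/* Sketch only: read events from an opened sensors device and dispatch on type. */
+static void drain_events(struct sensors_poll_device_t *dev) {
+    sensors_event_t buffer[16];
+    for (;;) {
+        int n = dev->poll(dev, buffer, 16);   /* blocks until events are available */
+        if (n &lt; 0)
+            break;                            /* -errno on error */
+        for (int i = 0; i &lt; n; i++) {
+            const sensors_event_t *e = &amp;buffer[i];
+            /* e->sensor is the handle, e->timestamp the event time, and e->type
+               selects how to interpret the data. */
+            if (e->type == SENSOR_TYPE_ACCELEROMETER)
+                handle_accel_event(e->sensor, e->timestamp, e->acceleration.x,
+                                   e->acceleration.y, e->acceleration.z);
+        }
+    }
+}
+</pre>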
+
+<h2 id="existing_documentation2">Existing documentation</h2>
+    <h3 id="targeted_at_developers">Targeted at developers</h3>
+    <ul>
+      <li> Overview
+        <ul>
+          <li><a href="https://developer.android.com/guide/topics/sensors/sensors_overview.html"> https://developer.android.com/guide/topics/sensors/sensors_overview.html </a></li>
+        </ul>
+      </li>
+      <li> SDK reference
+        <ul>
+          <li> <a href="https://developer.android.com/reference/android/hardware/SensorManager.html">https://developer.android.com/reference/android/hardware/SensorManager.html</a></li>
+          <li><a href="https://developer.android.com/reference/android/hardware/SensorEventListener.html"> https://developer.android.com/reference/android/hardware/SensorEventListener.html</a></li>
+          <li> <a href="https://developer.android.com/reference/android/hardware/SensorEvent.html">https://developer.android.com/reference/android/hardware/SensorEvent.html</a></li>
+          <li><a href="https://developer.android.com/reference/android/hardware/Sensor.html"> https://developer.android.com/reference/android/hardware/Sensor.html</a></li>
+        </ul>
+      </li>
+      <li> StackOverflow and tutorial websites
+        <ul>
+          <li> Because sensors documentation was sometimes lacking, developers resorted to Q&amp;A
+            websites like StackOverflow to find answers. </li>
+          <li> Some tutorial websites exist as well, but do not cover the latest features like
+            batching, significant motion and game rotation vectors. </li>
+          <li> The answers there are not always correct, which shows where more
+            documentation is needed. </li>
+        </ul>
+      </li>
+    </ul>
+<h3 id="targeted_at_manufacturers_public">Targeted at manufacturers</h3>
+    <ul>
+      <li> Overview
+        <ul>
+	  <li>This <a href="{@docRoot}devices/sensors/index.html">Sensors</a>
+            page and its sub-pages. </li>
+        </ul>
+      </li>
+      <li> Hardware abstraction layer (HAL)
+        <ul>
+          <li> <a href="{@docRoot}devices/reference/sensors_8h_source.html">https://source.android.com/devices/reference/sensors_8h_source.html</a></li>
+          <li> Also known as “sensors.h” </li>
+          <li> The source of truth. First document to be updated when new features are
+            developed. </li>
+        </ul>
+      </li>
+      <li> Android CDD (Compatibility Definition Document)
+        <ul>
+          <li><a href="{@docRoot}compatibility/android-cdd.pdf">https://source.android.com/compatibility/android-cdd.pdf</a></li>
+          <li> See sections relative to sensors. </li>
+          <li> The CDD is lenient, so satisfying the CDD requirements is not enough to ensure
+            high-quality sensors. </li>
+        </ul>
+      </li>
+    </ul>
diff --git a/src/devices/sensors/interaction.jd b/src/devices/sensors/interaction.jd
new file mode 100644
index 0000000..bb60b2e
--- /dev/null
+++ b/src/devices/sensors/interaction.jd
@@ -0,0 +1,52 @@
+page.title=Interaction
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+
+<p>From the perspective of Android applications, every Android sensor is an
+  independent entity, meaning there is no interaction between the different
+  sensors.</p>
+<ul>
+  <li> This is true even though several Android sensors might share the same
+    underlying physical sensor </li>
+  <li> For example, the step counter, significant motion, and accelerometer sensors, all
+    relying on the same physical accelerometer, must be able to work concurrently </li>
+  <li> This is also true for wake-up and non-wake-up versions of the same sensor </li>
+</ul>
+<p>Android sensors must be able to work simultaneously and independently of one
+  another. That is, any action on one Android sensor must not impact the behavior
+  of the other sensors.</p>
+<p>Specifically, at the HAL level:</p>
+<ul>
+  <li> activating a sensor </li>
+  <li> deactivating a sensor </li>
+  <li> changing the sampling frequency of a sensor </li>
+  <li> changing the maximum reporting latency of a sensor </li>
+</ul>
+<p>cannot cause:</p>
+<ul>
+  <li> another activated sensor to stop working </li>
+  <li> another activated sensor to change sampling rate </li>
+  <li> another activated sensor to decrease the quality of its measurements </li>
+  <li> another non-activated sensor to start delivering events </li>
+</ul>
+<p>Nor can any of the actions above prevent actions (activation, deactivation,
+  and parameter changes) on another sensor from succeeding. For instance,
+  whether we can activate the step counter must be independent of whether the
+  accelerometer is currently activated. </p>
+<p>As another important example, a wake-up sensor activated at 5Hz must generate events
+  at around 5Hz, even if its non-wake-up variant is activated at 100Hz.</p>
diff --git a/src/devices/sensors/power-use.jd b/src/devices/sensors/power-use.jd
new file mode 100644
index 0000000..00c3882
--- /dev/null
+++ b/src/devices/sensors/power-use.jd
@@ -0,0 +1,67 @@
+page.title=Power consumption
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<h2 id="low_power_sensors">Low-power sensors</h2>
+<p>Some sensor types are defined as being low power. Low-power sensors must
+  function at low power, with their processing done in the hardware. This means
+  they should not require the SoC to be running. Here are some low-power sensor
+  types:</p>
+<ul>
+  <li> Geomagnetic rotation vector </li>
+  <li> Significant motion </li>
+  <li> Step counter </li>
+  <li> Step detector </li>
+  <li> Tilt detector </li>
+</ul>
+<p>They are accompanied by a low-power (<img src="images/battery_icon.png"
+width="20" height="20" alt="Low power sensor" />) icon in the <a
+href="sensor-types.html#composite_sensor_type_summary">Composite sensor type
+summary</a> table.</p>
+<p>These sensor types cannot be implemented at high power as their primary benefit
+  is low battery use. These sensors are expected to be activated for very long
+  periods, possibly 24/7. It is better to not implement a low-power sensor at all
+  rather than implement it as high power, as it would cause dramatic battery
+  drain.</p>
+<p>Composite low-power sensor types, such as the step detector, must have their
+  processing conducted in the hardware.</p>
+<p>See the CDD for specific power requirements, and expect tests in CTS to
+  verify those power requirements.</p>
+<h2 id="power_measurement_process">Power measurement process</h2>
+<p>The power is measured at the battery. For values in milliwatts, we use the
+  nominal voltage of the battery, meaning a 1mA current at 4V must be counted as
+  4mW.</p>
+<p>The power is measured while the SoC is asleep and averaged over a few seconds,
+  so that periodic power spikes from the sensor chips are taken into account.</p>
+<p>For one-shot wake-up sensors, the power is measured while the sensor doesn’t
+  trigger (so it doesn’t wake the SoC up). Similarly, for other sensors, the
+  power is measured while the sensor data is stored in the hardware FIFO, so the
+  SoC is not woken up.</p>
+<p>The power is normally measured as a delta relative to the power used when no
+  sensor is activated. When several sensors are activated, the delta in power must
+  be no greater than the sum of the power of each activated sensor. If an
+  accelerometer consumes 0.5mA and a step detector consumes 0.5mA, then activating
+  both at the same time must consume less than 0.5+0.5=1mA.</p>
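+<p>The sketch below illustrates this bookkeeping with the numbers above; all
+  values are examples, not requirements.</p>
+<pre>
+#include &lt;stdio.h&gt;
+
+/* Sketch only: convert measured current at the nominal battery voltage to power
+   and check the combined draw against the sum of the individual sensor budgets. */
+int main(void) {
+    const float nominal_voltage_v   = 4.0f;
+    const float accel_current_ma    = 0.5f;  /* measured with only the accelerometer active */
+    const float step_det_current_ma = 0.5f;  /* measured with only the step detector active */
+    const float combined_current_ma = 0.8f;  /* measured with both active */
+
+    float budget_mw   = (accel_current_ma + step_det_current_ma) * nominal_voltage_v;
+    float combined_mw = combined_current_ma * nominal_voltage_v;
+
+    printf("combined %.1fmW vs budget %.1fmW: %s\n", combined_mw, budget_mw,
+           combined_mw &lt;= budget_mw ? "OK" : "exceeds budget");
+    return 0;
+}
+</pre>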
diff --git a/src/devices/sensors/report-modes.jd b/src/devices/sensors/report-modes.jd
new file mode 100644
index 0000000..b800326
--- /dev/null
+++ b/src/devices/sensors/report-modes.jd
@@ -0,0 +1,66 @@
+page.title=Reporting modes
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Sensors can generate events in different ways called reporting modes; each
+  sensor type has one and only one reporting mode associated with it. Four
+  reporting modes exist.</p>
+<h2 id="continuous">Continuous</h2>
+<p>Events are generated at a constant rate defined by the <a href="hal-interface.html#sampling_period_ns">sampling_period_ns</a> parameter passed to the <code>batch</code> function. Example sensors using the continuous
+  reporting mode are <a href="sensor-types.html#accelerometer">accelerometers</a> and <a href="sensor-types.html#gyroscope">gyroscopes</a>.</p>
+<h2 id="on-change">On-change</h2>
+<p>Events are generated only if the measured values have changed. Activating the
+  sensor at the HAL level (calling <code>activate(..., enable=1)</code> on it) also triggers
+  an event, meaning the HAL must return an event immediately when an on-change
+  sensor is activated. Example sensors using the on-change reporting mode are the
+  step counter, proximity, and heart rate sensor types.</p>
+<p>The <a href="hal-interface.html#sampling_period_ns">sampling_period_ns</a>
+  parameter passed to the <code>batch</code> function is used to set the minimum
+  time between consecutive events, meaning an event should not be generated until
+  <code>sampling_period_ns</code> nanoseconds have elapsed since the last event, even if
+  the value changed since then. If the value changed, an event must be generated as
+  soon as <code>sampling_period_ns</code> has elapsed since the last event.</p>
+<p>For example, suppose:</p>
+<ul>
+  <li> We activate the step counter with <code>sampling_period_ns = 10 * 10^9</code> (10 seconds). </li>
+  <li> And walk for 55 seconds, then stand still for one minute. </li>
+  <li> Then the events will be generated about every 10 seconds during the first
+    minute (including at time t=0 because of the activation of the sensor, and t=60
+    seconds), for a total of seven events, and no event will be generated in the second
+    minute because the value of the step count didn’t change after t=60 seconds. </li>
+</ul>
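+<p>Below is a minimal sketch of this rate-limiting rule as it might look inside a
+  HAL or sensor hub implementation. The <code>emit_event</code> helper and the
+  <code>last_value</code>/<code>last_event_time</code> state are hypothetical names,
+  not part of the HAL interface.</p>
+<pre>
+#include &lt;stdint.h&gt;
+
+void emit_event(float value, int64_t timestamp_ns);   /* hypothetical helper */
+
+static float   last_value;
+static int64_t last_event_time;
+
+/* Sketch only: generate an on-change event when the value changed and at
+   least sampling_period_ns has elapsed since the previous event. */
+static void on_new_sample(float value, int64_t now_ns, int64_t sampling_period_ns) {
+    if (value == last_value)
+        return;                                  /* no change: no event */
+    if (now_ns - last_event_time &lt; sampling_period_ns)
+        return;                                  /* changed, but too soon: report later */
+    emit_event(value, now_ns);
+    last_value = value;
+    last_event_time = now_ns;
+}
+</pre>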
+<h2 id="one-shot">One-shot</h2>
+<p>Upon detection of an event, the sensor deactivates itself and then sends a
+  single event through the HAL. The order matters to avoid race conditions: the
+  sensor must be deactivated before the event is reported through the HAL. No
+  other event is sent until the sensor is reactivated. <a href="sensor-types.html#significant_motion">Significant motion</a> is an example of this kind of sensor.</p>
+<p>One-shot sensors are sometimes referred to as trigger sensors.</p>
+<p>The <code>sampling_period_ns</code> and <code>max_report_latency_ns</code>
+  parameters passed to the <code>batch</code> function are ignored. Events from
+  one-shot sensors cannot be stored in hardware FIFOs; the events must be
+  reported as soon as they are generated.</p>
+<h2 id="special">Special</h2>
+<p>See the individual <a href="sensor-types.html">sensor type descriptions</a>
+for details on when the events are generated.</p>
diff --git a/src/devices/sensors/sensor-stack.jd b/src/devices/sensors/sensor-stack.jd
new file mode 100644
index 0000000..9776c44
--- /dev/null
+++ b/src/devices/sensors/sensor-stack.jd
@@ -0,0 +1,182 @@
+page.title=Sensor stack
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>The figure below represents the Android sensor stack. Each component
+  communicates only with the components directly above and below it, though some
+  sensors can bypass the sensor hub when it is present. Control flows from the
+  applications down to the sensors, and data flows from the sensors up to the
+  applications.</p>
+<img src="images/sensor_layers.png" alt="Layers and owners of the Android sensor stack" />
+<p class="img-caption"><strong>Figure 1.</strong> Layers of the Android sensor stack and their respective owners</p>
+
+<h2 id="sdk">SDK</h2>
+<p>Applications access sensors through the <a href="http://developer.android.com/reference/android/hardware/SensorManager.html">Sensors SDK (Software Development Kit) API</a>. The SDK contains functions to list available sensors and to register to a
+  sensor.</p>
+<p>When registering to a sensor, the application specifies its preferred sampling
+  frequency and its latency requirements.</p>
+<ul>
+  <li> For example, an application might register to the default accelerometer,
+    requesting events at 100Hz, and allowing events to be reported with a 1-second
+    latency. </li>
+  <li> The application will receive events from the accelerometer at a rate of at
+    least 100Hz, and possibly delayed up to 1 second. </li>
+</ul>
+<p>See the <a href="index.html#targeted_at_developers">developer documentation</a> for more information on the SDK.</p>
+<h2 id="framework">Framework</h2>
+<p>The framework is in charge of linking several applications to the <a href="hal-interface.html">HAL</a>. The HAL itself is single-client. Without this multiplexing happening at the
+  framework level, only a single application could access each sensor at any
+  given time.</p>
+<ul>
+  <li> When the first application registers to a sensor, the framework sends a request
+    to the HAL to activate the sensor. </li>
+  <li> When additional applications register to the same sensor, the framework takes
+    into account requirements from each application and sends the updated requested
+    parameters to the HAL, as illustrated in the sketch after this list.
+    <ul>
+      <li> The <a href="hal-interface.html#sampling_period_ns">sampling frequency</a> will be the maximum of the requested sampling frequencies, meaning some
+        applications will receive events at a frequency higher than the one they
+        requested. </li>
+      <li> The <a href="hal-interface.html#max_report_latency_ns">maximum reporting latency</a> will be the minimum of the requested ones. If one application requests one
+        sensor with a maximum reporting latency of 0, all applications will receive the
+        events from this sensor in continuous mode even if some requested the sensor
+        with a non-zero maximum reporting latency. See <a href="batching.html">Batching</a> for more details. </li>
+    </ul>
+  </li>
+  <li> When the last application registered to one sensor unregisters from it, the
+    framework sends a request to the HAL to deactivate the sensor so power is not
+    consumed unnecessarily. </li>
+</ul>
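+<p>As a rough illustration (not the actual framework code), the effective
+  parameters sent to the HAL could be derived from the individual requests as in
+  the sketch below; <code>client_request_t</code> and the request array are
+  hypothetical.</p>
+<pre>
+#include &lt;stdint.h&gt;
+
+/* Sketch only: one entry per application registered to a given sensor. */
+typedef struct {
+    int64_t sampling_period_ns;
+    int64_t max_report_latency_ns;
+} client_request_t;
+
+/* Assumes n &gt;= 1. */
+static client_request_t merge_requests(const client_request_t *reqs, int n) {
+    client_request_t out = reqs[0];
+    for (int i = 1; i &lt; n; i++) {
+        if (reqs[i].sampling_period_ns &lt; out.sampling_period_ns)
+            out.sampling_period_ns = reqs[i].sampling_period_ns;        /* highest frequency wins */
+        if (reqs[i].max_report_latency_ns &lt; out.max_report_latency_ns)
+            out.max_report_latency_ns = reqs[i].max_report_latency_ns;  /* lowest latency wins */
+    }
+    return out;  /* passed down to the HAL batch() call for this sensor */
+}
+</pre>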
+<h3 id="impact_of_multiplexing">Impact of multiplexing</h3>
+<p>This need for a multiplexing layer in the framework explains some design
+  decisions.</p>
+<ul>
+  <li> When an application requests a specific sampling frequency, there is no
+    guarantee that events won’t arrive at a faster rate. If another application
+    requests the same sensor at a faster rate, the first application will also
+    receive events at that faster rate. </li>
+  <li> The same lack of guarantee applies to the requested maximum reporting latency:
+    applications might receive events with much less latency than they requested. </li>
+  <li> Besides sampling frequency and maximum reporting latency, applications cannot
+    configure sensor parameters.
+    <ul>
+      <li> For example, imagine a physical sensor that can function both in “high
+        accuracy” and “low power” modes. </li>
+      <li> Only one of those two modes can be used on an Android device; otherwise, one
+        application could request the high-accuracy mode while another requests the
+        low-power mode, and there would be no way for the framework to satisfy both.
+        The framework must always be able to satisfy all its clients, so
+        this is not an option. </li>
+    </ul>
+  </li>
+  <li> There is no mechanism to send data down from the applications to the sensors or
+    their drivers. This ensures one application cannot modify the behavior of the
+    sensors, breaking other applications. </li>
+</ul>
+<h3 id="sensor_fusion">Sensor fusion</h3>
+<p>The Android framework provides a default implementation for some composite
+  sensors. When a <a href="sensor-types.html#gyroscope">gyroscope</a>, an <a href="sensor-types.html#accelerometer">accelerometer</a> and a <a href="sensor-types.html#magnetic_field_sensor">magnetometer</a> are present on a device, but no <a href="sensor-types.html#rotation_vector">rotation vector</a>, <a href="sensor-types.html#gravity">gravity</a> and <a href="sensor-types.html#linear_acceleration">linear acceleration</a> sensors are present, the framework implements those sensors so applications
+  can still use them.</p>
+<p>The default implementation does not have access to all the data that other
+  implementations have access to, and it must run on the SoC, so it is not as
+  accurate nor as power efficient as other implementations can be. As much as
+  possible, device manufacturers should define their own fused sensors (rotation
+  vector, gravity and linear acceleration, as well as newer composite sensors like
+  the <a href="sensor-types.html#game_rotation_vector">game rotation vector</a>) rather than rely on this default implementation. Device manufacturers can
+  also request sensor chip vendors to provide them with an implementation.</p>
+<p>The default sensor fusion implementation is not being maintained and
+  might cause devices relying on it to fail CTS.</p>
+<h3 id="under_the_hood">Under the hood</h3>
+<p>This section is provided as background information for those maintaining the
+  Android Open Source Project (AOSP) framework code. It is not relevant for
+  hardware manufacturers.</p>
+<h4 id="jni">JNI</h4>
+<p>The framework uses a Java Native Interface (JNI) associated with <a href="http://developer.android.com/reference/android/hardware/package-summary.html">android.hardware</a> and located in the <code>frameworks/base/core/jni/</code> directory. This code calls the
+  lower level native code to obtain access to the sensor hardware.</p>
+<h4 id="native_framework">Native framework</h4>
+<p>The native framework is defined in <code>frameworks/native/</code> and provides a native
+  equivalent to the <a href="http://developer.android.com/reference/android/hardware/package-summary.html">android.hardware</a> package. The native framework calls the Binder IPC proxies to obtain access to
+  sensor-specific services.</p>
+<h4 id="binder_ipc">Binder IPC</h4>
+<p>The Binder IPC proxies facilitate communication over process boundaries.</p>
+<h2 id="hal">HAL</h2>
+<p>The Sensors Hardware Abstraction Layer (HAL) API is the interface between the
+  hardware drivers and the Android framework. It consists of one HAL interface,
+  <code>sensors.h</code>, and one HAL implementation we refer to as sensors.cpp.</p>
+<p>The interface is defined by Android and AOSP contributors, and the
+  implementation is provided by the manufacturer of the device.</p>
+<p>The sensor HAL interface is located in <code>hardware/libhardware/include/hardware</code>.
+  See <a href="{@docRoot}devices/reference/sensors_8h.html">sensors.h</a> for additional details.</p>
+<h3 id="release_cycle">Release cycle</h3>
+<p>The HAL implementation specifies what version of the HAL interface it
+  implements by setting <code>your_poll_device.common.version</code>. The existing HAL
+  interface versions are defined in sensors.h, and functionality is tied to those
+  versions.</p>
+<p>The Android framework currently supports versions 1.0 and 1.3, but 1.0 will
+  soon no longer be supported. This documentation describes the behavior of version
+  1.3, to which all devices should upgrade. For details on how to upgrade to
+  1.3, see <a href="versioning.html">HAL version deprecation</a>.</p>
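+<p>For example, a HAL implementation targeting version 1.3 might advertise it
+  roughly as in the sketch below; only the version-related fields are shown and
+  the rest of the device setup is omitted.</p>
+<pre>
+#include &lt;hardware/sensors.h&gt;
+
+static struct sensors_poll_device_1 sensors_device;
+
+/* Sketch only: declare which HAL interface version this implementation provides. */
+static void init_device(void) {
+    sensors_device.common.tag     = HARDWARE_DEVICE_TAG;
+    sensors_device.common.version = SENSORS_DEVICE_API_VERSION_1_3;
+    /* ... fill in activate, batch, setDelay, flush and poll here ... */
+}
+</pre>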
+<h2 id="kernel_driver">Kernel driver</h2>
+<p>The sensor drivers interact with the physical devices. In some cases, the HAL
+  implementation and the drivers are the same software entity. In other cases,
+  the hardware integrator requests sensor chip manufacturers to provide the
+  drivers, but the hardware integrator writes the HAL implementation itself.</p>
+<p>In all cases, HAL implementation and kernel drivers are the responsibility of
+  the hardware manufacturers, and Android does not provide preferred approaches to
+  write them.</p>
+<h2 id="sensor_hub">Sensor hub</h2>
+<p>The sensor stack of a device can optionally include a sensor hub, useful to
+  perform some low-level computation at low power while the SoC is in
+  suspend mode. For example, step counting or sensor fusion can be performed on
+  those chips. It is also a good place to implement sensor batching, adding
+  hardware FIFOs for the sensor events. See <a
+href="batching.html">Batching</a> for more information.</p>
+<p>How the sensor hub is implemented depends on the architecture. It is sometimes
+  a separate chip, and sometimes included on the same chip as the SoC. Important
+  characteristics of the sensor hub are that it contains sufficient memory
+  for batching and consumes very little power, enabling implementation of the
+  low-power Android sensors. Some sensor hubs contain a microcontroller for generic
+  computation, and hardware accelerators to enable very low power computation for
+  low-power sensors.</p>
+<p>How the sensor hub is architected and how it communicates with the sensors
+  and the SoC (I2C bus, SPI bus, …) is not specified by Android, but it should aim
+  to minimize overall power use.</p>
+<p>One option that appears to have a significant impact on implementation
+  simplicity is having two interrupt lines going from the sensor hub to the SoC:
+  one for wake-up interrupts (for wake-up sensors), and the other for non-wake-up
+  interrupts (for non-wake-up sensors).</p>
+<h2 id="sensors">Sensors</h2>
+<p>Those are the physical MEMS chips making the measurements. In many cases,
+  several physical sensors are present on the same chip. For example, some chips
+  include an accelerometer, a gyroscope and a magnetometer. (Such chips are often
+  called 9-axis chips, as each sensor provides data over 3 axes.)</p>
+<p>Some of those chips also contain logic to perform common computations such
+  as motion detection, step detection, and 9-axis sensor fusion.</p>
+<p>Although the CDD power and accuracy requirements and recommendations target the
+  Android sensor and not the physical sensors, those requirements impact the
+  choice of physical sensors. For example, the accuracy requirement on the game
+  rotation vector has implications on the required accuracy for the physical
+  gyroscope. It is up to the device manufacturer to derive the requirements for
+  physical sensors.</p>
diff --git a/src/devices/sensors/sensor-types.jd b/src/devices/sensors/sensor-types.jd
new file mode 100644
index 0000000..824680f
--- /dev/null
+++ b/src/devices/sensors/sensor-types.jd
@@ -0,0 +1,748 @@
+page.title=Sensor types
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<h2 id="sensor_axis_definition">Sensor axis definition</h2>
+<p>Sensor event values from many sensors are expressed in a specific frame that is
+  static relative to the phone. This API is relative only to the NATURAL
+  orientation of the screen. In other words, the axes are not swapped when the
+  device's screen orientation changes.</p>
+
+<div class="figure" style="width:269px">
+  <img src="http://developer.android.com/images/axis_device.png"
+alt="Coordinate system of sensor API" height="225" />
+  <p class="img-caption">
+    <strong>Figure 1.</strong> Coordinate system (relative to a device) that's
+     used by the Sensor API.
+  </p>
+</div>
+
+<h2 id="base_sensors">Base sensors</h2>
+<p>Some sensor types are named directly after the physical sensors they represent.
+  Sensors with such types are called “base” sensors, referring to the fact that they
+  relay data from a single physical sensor, as opposed to “composite” sensors, for
+  which the data is generated from other sensors.</p>
+<p>Examples of base sensor types:</p>
+<ul>
+  <li><code>SENSOR_TYPE_ACCELEROMETER</code></li>
+  <li><code>SENSOR_TYPE_GYROSCOPE</code></li>
+  <li><code>SENSOR_TYPE_MAGNETOMETER</code></li>
+</ul>
+  <p>See the list of Android sensor types below for more details on each.</p>
+<h3 id="base_sensors_=_not_equal_to_physical_sensors">Base sensors != (not equal to) physical sensors</h3>
+<p>Base sensors are not to be confused with their underlying physical sensor. The
+  data from a base sensor is not the raw output of the physical sensor:
+  corrections are applied, such as bias compensation and temperature
+  compensation.</p>
+<p>The characteristics of a base sensor might be different from the
+  characteristics of its underlying physical sensor.</p>
+<ul>
+  <li> For example, a gyroscope chip might be rated to have a bias range of 1 deg/sec.
+    <ul>
+      <li> After factory calibration, temperature compensation, and bias compensation are
+        applied, the actual bias of the Android sensor will be reduced, possibly to a
+        point where the bias is guaranteed to be below 0.01 deg/sec. </li>
+      <li> In this situation, we say that the Android sensor has a bias below 0.01
+        deg/sec, even though the data sheet of the underlying sensor said 1 deg/sec. </li>
+    </ul>
+  </li>
+  <li> As another example, a barometer might have a power consumption of 100uW.
+    <ul>
+      <li> Because the generated data needs to be transported from the chip to the SoC,
+        the actual power cost to gather data from the barometer Android sensor might be
+        much higher, for example 1000uW. </li>
+      <li> In this situation, we say that the Android sensor has a power consumption of
+        1000uW, even though the power consumption measured at the barometer chip leads
+        is 100uW. </li>
+    </ul>
+  </li>
+  <li> As a third example, a magnetometer might consume 100uW when calibrated, but
+    consume more when calibrating.
+    <ul>
+      <li> Its calibration routine might require activating the gyroscope, consuming
+        5000uW, and running some algorithm, costing another 900uW. </li>
+      <li> In this situation, we say that the maximum power consumption of the
+        (magnetometer) Android sensor is 6000uW. </li>
+      <li> In this case, the average power consumption is the more useful measure, and it
+        is what is reported in the sensor static characteristics through the HAL. </li>
+    </ul>
+  </li>
+</ul>
+<h3 id="accelerometer">Accelerometer</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_ACCELEROMETER)</code> <em>returns a non-wake-up sensor</em></p>
+<p>An accelerometer sensor reports the acceleration of the device along the 3
+  sensor axes. The measured acceleration includes both the physical acceleration
+  (change of velocity) and gravity. The measurement is reported in the x, y
+  and z fields of <code>sensors_event_t.acceleration</code>.</p>
+<p>All values are in SI units (m/s^2) and measure the acceleration of the device
+  minus the force of gravity along the 3 sensor axes.</p>
+<p>Here are examples:</p>
+<ul>
+  <li> The norm of (x, y, z) should be close to 0 when in free fall. </li>
+  <li> When the device lies flat on a table and is pushed on its left side toward the
+    right, the x acceleration value is positive. </li>
+  <li> When the device lies flat on a table, the acceleration value along z is +9.81
+    m/s^2, which corresponds to the acceleration of the device (0 m/s^2) minus the
+    force of gravity (-9.81 m/s^2). </li>
+  <li> When the device lies flat on a table and is pushed toward the sky with an
+    acceleration of A m/s^2, the acceleration value is equal to A+9.81 m/s^2, which
+    corresponds to the acceleration of the device (+A m/s^2) minus the force of
+    gravity (-9.81 m/s^2). </li>
+</ul>
+<p>The readings are calibrated using:</p>
+<ul>
+  <li> temperature compensation </li>
+  <li> online bias calibration </li>
+  <li> online scale calibration </li>
+</ul>
+<p>The bias and scale calibration must only be updated while the sensor is
+  deactivated, so as to avoid causing jumps in values during streaming.</p>
+<p>The accelerometer also reports how accurate it expects its readings to be
+  through <code>sensors_event_t.acceleration.status</code>. See the <a
+  href="http://developer.android.com/reference/android/hardware/SensorManager.html">SensorManager</a>’s
+  <a href="http://developer.android.com/reference/android/hardware/SensorManager.html#SENSOR_STATUS_ACCURACY_HIGH"><code>SENSOR_STATUS_*  </code></a> constants for more information on possible values for this field.</p>
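+<p>A minimal sketch of how a HAL implementation might fill in one accelerometer
+  event from a calibrated sample is shown below; the function name and input
+  variables are hypothetical, and the field names follow sensors.h.</p>
+<pre>
+#include &lt;string.h&gt;
+#include &lt;hardware/sensors.h&gt;
+
+/* Sketch only: build an accelerometer event (values in m/s^2, gravity included). */
+static sensors_event_t make_accel_event(int handle, int64_t timestamp_ns,
+                                        float ax, float ay, float az) {
+    sensors_event_t ev;
+    memset(&amp;ev, 0, sizeof(ev));
+    ev.version   = sizeof(sensors_event_t);
+    ev.sensor    = handle;
+    ev.type      = SENSOR_TYPE_ACCELEROMETER;
+    ev.timestamp = timestamp_ns;
+    ev.acceleration.x = ax;
+    ev.acceleration.y = ay;
+    ev.acceleration.z = az;
+    ev.acceleration.status = SENSOR_STATUS_ACCURACY_HIGH;  /* expected accuracy */
+    return ev;
+}
+</pre>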
+<h3 id="ambient_temperature">Ambient temperature</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#on-change">On-change</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_AMBIENT_TEMPERATURE)</code> <em>returns a non-wake-up sensor</em></p>
+<p>This sensor provides the ambient (room) temperature in degrees Celsius.</p>
+<h3 id="magnetic_field_sensor">Magnetic field sensor</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_MAGNETIC_FIELD)</code> <em>returns a non-wake-up sensor</em></p>
+<p><code>SENSOR_TYPE_GEOMAGNETIC_FIELD == SENSOR_TYPE_MAGNETIC_FIELD</code></p>
+<p>A magnetic field sensor (also known as magnetometer) reports the ambient
+  magnetic field, as measured along the 3 sensor axes.</p>
+<p>The measurement is reported in the x, y and z fields of
+  <code>sensors_event_t.magnetic</code> and all values are in micro-Tesla (uT).</p>
+<p>The magnetometer also reports how accurate it expects its readings to be
+  through <code>sensors_event_t.magnetic.status</code>. See the <a
+href="http://developer.android.com/reference/android/hardware/SensorManager.html">SensorManager</a>’s
+<a href="http://developer.android.com/reference/android/hardware/SensorManager.html#SENSOR_STATUS_ACCURACY_HIGH"><code>SENSOR_STATUS_*</code></a> constants for more information on possible values for this field.</p>
+<p>The readings are calibrated using:</p>
+<ul>
+  <li> temperature compensation </li>
+  <li> factory (or online) soft-iron calibration </li>
+  <li> online hard-iron calibration </li>
+</ul>
+<h3 id="gyroscope">Gyroscope</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_GYROSCOPE)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A gyroscope sensor reports the rate of rotation of the device around the 3
+  sensor axes.</p>
+<p>Rotation is positive in the counterclockwise direction (right-hand rule). That
+  is, an observer looking from some positive location on the x, y or z axis at a
+  device positioned on the origin would report positive rotation if the device
+  appeared to be rotating counter clockwise. Note that this is the standard
+  mathematical definition of positive rotation and does not agree with the
+  aerospace definition of roll.</p>
+<p>The measurement is reported in the x, y and z fields of <code>sensors_event_t.gyro</code>
+  and all values are in radians per second (rad/s).</p>
+<p>The readings are calibrated using:</p>
+<ul>
+  <li> temperature compensation </li>
+  <li> factory (or online) scale compensation </li>
+  <li> online bias calibration (to remove drift) </li>
+</ul>
+<p>The gyroscope also reports how accurate it expects its readings to be through
+  <code>sensors_event_t.gyro.status</code>. See the <a
+  href="http://developer.android.com/reference/android/hardware/SensorManager.html">SensorManager</a>’s
+  <a
+href="http://developer.android.com/reference/android/hardware/SensorManager.html#SENSOR_STATUS_ACCURACY_HIGH"><code>SENSOR_STATUS_*</code></a> constants for more information on possible values for this field.</p>
+<p>The gyroscope cannot be emulated based on magnetometers and accelerometers, as
+  this would cause it to have reduced local consistency and responsiveness. It
+  must be based on an actual gyroscope chip.</p>
+<h3 id="heart_rate">Heart Rate</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#on-change">On-change</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_HEART_RATE)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A heart rate sensor reports the current heart rate of the person touching the
+  device.</p>
+<p>The current heart rate in beats per minute (BPM) is reported in
+  <code>sensors_event_t.heart_rate.bpm</code> and the status of the sensor is reported in
+  <code>sensors_event_t.heart_rate.status</code>. See the <a
+  href="http://developer.android.com/reference/android/hardware/SensorManager.html">SensorManager</a>’s
+  <a href="http://developer.android.com/reference/android/hardware/SensorManager.html#SENSOR_STATUS_ACCURACY_HIGH"><code>SENSOR_STATUS_*</code></a> constants for more information on possible values for this field. In
+  particular, upon the first activation, unless the device is known to not be on
+  the body, the status field of the first event must be set to
+  <code>SENSOR_STATUS_UNRELIABLE</code>. Because this sensor is on-change,
+  events are generated when and only when <code>heart_rate.bpm</code> or
+  <code>heart_rate.status</code> have changed since the last event. The events
+  are generated no faster than every <code>sampling_period</code>.</p>
+<p><code>sensor_t.requiredPermission</code> is always <code>SENSOR_PERMISSION_BODY_SENSORS</code>.</p>
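+<p>For illustration, a sketch of the first event a HAL implementation might
+  report after activation, when the device is not yet known to be on the body;
+  <code>emit_event</code> is a hypothetical helper.</p>
+<pre>
+#include &lt;string.h&gt;
+#include &lt;hardware/sensors.h&gt;
+
+void emit_event(const sensors_event_t *ev);   /* hypothetical helper */
+
+/* Sketch only: first heart rate event after activation, status UNRELIABLE. */
+static void report_first_heart_rate_event(int handle, int64_t now_ns) {
+    sensors_event_t ev;
+    memset(&amp;ev, 0, sizeof(ev));
+    ev.version   = sizeof(sensors_event_t);
+    ev.sensor    = handle;
+    ev.type      = SENSOR_TYPE_HEART_RATE;
+    ev.timestamp = now_ns;
+    ev.heart_rate.bpm    = 0.0f;                     /* no reliable reading yet */
+    ev.heart_rate.status = SENSOR_STATUS_UNRELIABLE; /* not known to be on the body */
+    emit_event(&amp;ev);
+}
+</pre>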
+<h3 id="light">Light</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#on-change">On-change</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_LIGHT)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A light sensor reports the current illumination in SI lux units.</p>
+<p>The measurement is reported in <code>sensors_event_t.light</code>.</p>
+<h3 id="proximity">Proximity</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#on-change">On-change</a></em></p>
+<p>Usually defined as a wake-up sensor</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_PROXIMITY)</code> <em>returns a wake-up sensor</em></p>
+<p>A proximity sensor reports the distance from the sensor to the closest visible
+  surface.</p>
+<p>Up to Android KitKat, the proximity sensors were always wake-up sensors, waking
+  up the SoC when detecting a change in proximity. After Android KitKat, we
+  advise implementing the wake-up version of this sensor first, as it is the one
+  that is used to turn the screen on and off while making phone calls.</p>
+<p>The measurement is reported in centimeters in <code>sensors_event_t.distance</code>. Note
+  that some proximity sensors only support a binary &quot;near&quot; or &quot;far&quot; measurement.
+  In this case, the sensor must report its <code>sensor_t.maxRange</code> value in the &quot;far&quot; state
+  and a value less than <code>sensor_t.maxRange</code> in the &quot;near&quot; state.</p>
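+<p>For a binary sensor, a driver might fill the event as in the following sketch
+  (the handle and range values are illustrative, not mandated):</p>
+<pre class=prettyprint>
+#include &lt;hardware/sensors.h>
+#include &lt;stdbool.h>
+#include &lt;string.h>
+
+#define PROX_HANDLE    3      /* hypothetical handle */
+#define PROX_MAX_RANGE 5.0f   /* must match sensor_t.maxRange */
+
+/* Binary proximity: report maxRange in the "far" state and a smaller value
+ * (here 0) in the "near" state. */
+static void make_proximity_event(sensors_event_t *ev, bool is_near, int64_t now_ns) {
+    memset(ev, 0, sizeof(*ev));
+    ev->version = sizeof(sensors_event_t);
+    ev->sensor = PROX_HANDLE;
+    ev->type = SENSOR_TYPE_PROXIMITY;
+    ev->timestamp = now_ns;
+    ev->distance = is_near ? 0.0f : PROX_MAX_RANGE;
+}
+</pre>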
+<h3 id="pressure">Pressure</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_PRESSURE)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A pressure sensor (also known as barometer) reports the atmospheric pressure in
+  hectopascal (hPa).</p>
+<p>The readings are calibrated using:</p>
+<ul>
+  <li> temperature compensation </li>
+  <li> factory bias calibration </li>
+  <li> factory scale calibration </li>
+</ul>
+<p>The barometer is often used to estimate elevation changes. To estimate absolute
+  elevation, the sea-level pressure (changing depending on the weather) must be
+  used as a reference.</p>
+<h3 id="relative_humidity">Relative humidity</h3>
+<p>Reporting-mode: <em><a href="report-modes.html#on-change">On-change</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_RELATIVE_HUMIDITY)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A relative humidity sensor measures relative ambient air humidity and returns a
+  value in percent.</p>
+<h2 id="composite_sensor_types">Composite sensor types</h2>
+<p>Any sensor that is not a base sensor is called a composite sensor. Composite
+  sensors generate their data by processing and/or fusing data from one or
+  several physical sensors.</p>
+<p>Examples of composite sensor types:</p>
+<ul>
+  <li><a href="#step_detector">Step detector</a> and <a href="#significant_motion">Significant motion</a>, which are usually based on an accelerometer, but could be based on other
+    sensors as well, if the power consumption and accuracy were acceptable. </li>
+  <li><a href="#game_rotation_vector">Game rotation vector</a>, based on an
+    accelerometer and a gyroscope. </li>
+  <li><a href="#gyroscope_uncalibrated">Uncalibrated gyroscope</a>, which is
+    similar to the gyroscope base sensor, but with
+    the bias calibration being reported separately instead of being corrected in
+    the measurement. </li>
+</ul>
+<p>Just like base sensors, the characteristics of the composite sensors come from
+  the characteristics of their final data.</p>
+<ul>
+  <li> For example, the power consumption of a game rotation vector is probably equal
+    to the sum of the power consumptions of: the accelerometer chip, the gyroscope
+    chip, the chip processing the data, and the busses transporting the data. </li>
+  <li> As another example, the drift of a game rotation vector will depend as much on
+    the quality of the calibration algorithm as on the physical sensor
+    characteristics. </li>
+</ul>
+<h2 id="composite_sensor_type_summary">Composite sensor type summary</h2>
+<p>The following table lists the composite sensor types. Each composite sensor
+  relies on data from one or several physical sensors. Choosing other underlying
+  physical sensors to approximate results should be avoided as they will provide
+  a poor user experience.</p>
+<p>When there is no gyroscope on the device, and only when there is no gyroscope,
+  you may implement the rotation vector, linear acceleration and gravity sensors
+  without using the gyroscope.</p>
+<table>
+  <tr>
+    <th><p>Sensor type</p></th>
+    <th><p>Category</p></th>
+    <th><p>Underlying physical sensors</p></th>
+    <th><p>Reporting mode</p></th>
+  </tr>
+  <tr>
+    <td><p><a href="#game_rotation_vector">Game rotation vector</a></p></td>
+    <td><p>Attitude</p></td>
+    <td><p>Accelerometer, Gyroscope, MUST NOT USE Magnetometer</p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#geomagnetic_rotation_vector">Geomagnetic rotation vector</a></p></td>
+    <td><p>Attitude</p></td>
+    <td><p>Accelerometer, Magnetometer, MUST NOT USE Gyroscope</p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#glance_gesture">Glance gesture</a></p></td>
+    <td><p>Interaction</p></td>
+    <td><p>Undefined</p></td>
+    <td><p>One-shot</p></td>
+  </tr>
+  <tr>
+    <td><p><a href="#gravity">Gravity</a></p></td>
+    <td><p>Attitude</p></td>
+    <td><p>Accelerometer, Gyroscope</p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><p><a href="#gyroscope_uncalibrated">Gyroscope uncalibrated</a></p></td>
+    <td><p>Uncalibrated</p></td>
+    <td><p>Gyroscope</p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><p><a href="#linear_acceleration">Linear acceleration</a></p></td>
+    <td><p>Activity</p></td>
+    <td><p>Accelerometer, Gyroscope (if present) or Magnetometer (if gyro not present)</p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><p><a href="#magnetic_field_uncalibrated">Magnetic field uncalibrated</a></p></td>
+    <td><p>Uncalibrated</p></td>
+    <td><p>Magnetometer</p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><p><a href="#orientation_deprecated">Orientation</a> (deprecated)</p></td>
+    <td><p>Attitude</p></td>
+    <td><p>Accelerometer, Magnetometer, PREFERRED Gyroscope</p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#pick_up_gesture">Pick up gesture</a></p></td>
+    <td><p>Interaction</p></td>
+    <td><p>Undefined</p></td>
+    <td><p>One-shot</p></td>
+  </tr>
+  <tr>
+    <td><p><a href="#rotation_vector">Rotation vector</a></p></td>
+    <td><p>Attitude</p></td>
+    <td><p>Accelerometer, Magnetometer, AND (when present) <em>Gyroscope</em></p></td>
+    <td><p>Continuous</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#significant_motion">Significant motion</a></p></td>
+    <td><p>Activity</p></td>
+    <td><p>Accelerometer (or another as long as very low power)</p></td>
+    <td><p>One-shot</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#step_counter">Step counter</a></p></td>
+    <td><p>Activity</p></td>
+    <td><p>Accelerometer</p></td>
+    <td><p>On-change</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#step_detector">Step detector</a></p></td>
+    <td><p>Activity</p></td>
+    <td><p>Accelerometer</p></td>
+    <td><p>Special</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#tilt_detector">Tilt detector</a></p></td>
+    <td><p>Activity</p></td>
+    <td><p>Accelerometer</p></td>
+    <td><p>Special</p></td>
+  </tr>
+  <tr>
+    <td><img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+      <p><a href="#wake_up_gesture">Wake up gesture</a></p></td>
+    <td><p>Interaction</p></td>
+    <td><p>Undefined</p></td>
+    <td><p>One-shot</p></td>
+  </tr>
+</table>
+<img src="images/battery_icon.png" width="20" height="20" alt="Low power sensor" />
+<p> = Low power sensor</p>
+<h2 id="activity_composite_sensors">Activity composite sensors</h2>
+<h3 id="linear_acceleration">Linear acceleration</h3>
+<p>Underlying physical sensors:  Accelerometer and (if present) Gyroscope (or
+  magnetometer if gyroscope not present)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_LINEAR_ACCELERATION)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A linear acceleration sensor reports the linear acceleration of the device in
+  the sensor frame, not including gravity.</p>
+<p>The output is conceptually: output of the <a
+  href="#accelerometer">accelerometer</a> minus the output of the <a
+  href="#gravity">gravity sensor</a>. It is reported in m/s^2 in the x, y and z
+  fields of <code>sensors_event_t.acceleration</code>.</p>
+<p>Readings on all axes should be close to 0 when the device is immobile.</p>
+<p>If the device possesses a gyroscope, the linear acceleration sensor must use
+  the gyroscope and accelerometer as input.</p>
+<p>If the device doesn’t possess a gyroscope, the linear acceleration sensor must
+  use the accelerometer and the magnetometer as input.</p>
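+<p>The following sketch only illustrates the conceptual relation above (real
+  implementations must fuse the gyroscope data rather than simply subtract a
+  gravity event):</p>
+<pre class=prettyprint>
+#include &lt;hardware/sensors.h>
+
+/* Linear acceleration is the accelerometer output with the gravity estimate
+ * removed, all in m/s^2. */
+static void fill_linear_acceleration(sensors_event_t *out,
+                                     const sensors_event_t *accel,
+                                     const sensors_event_t *gravity) {
+    out->acceleration.x = accel->acceleration.x - gravity->acceleration.x;
+    out->acceleration.y = accel->acceleration.y - gravity->acceleration.y;
+    out->acceleration.z = accel->acceleration.z - gravity->acceleration.z;
+}
+</pre>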
+<h3 id="significant_motion">Significant motion</h3>
+<p>Underlying physical sensor: Accelerometer (or another as long as low power)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#one-shot">One-shot</a></em></p>
+<p>Low-power</p>
+<p>Implement only the wake-up version of this sensor.</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_SIGNIFICANT_MOTION)</code> <em>returns a wake-up sensor</em></p>
+<p>A significant motion detector triggers when it detects a “significant
+  motion”: a motion that might lead to a change in the user location.</p>
+<p>Examples of such significant motions are:</p>
+<ul>
+  <li> walking or biking </li>
+  <li> sitting in a moving car, coach or train </li>
+</ul>
+<p>Examples of situations that do not trigger significant motion:</p>
+<ul>
+  <li> phone in pocket and person is not moving </li>
+  <li> phone is on a table and the table shakes a bit due to nearby traffic or washing
+    machine </li>
+</ul>
+<p>At a high level, the significant motion detector is used to reduce the power
+  consumption of location determination. When the localization algorithms detect
+  that the device is static, they can switch to a low power mode, where they rely
+  on significant motion to wake the device up when the user is changing location.</p>
+<p>This sensor must be low power. It makes a tradeoff for power consumption that
+  may result in a small amount of false negatives. This is done for a few
+  reasons:</p>
+<ul>
+  <li> The goal of this sensor is to save power. </li>
+  <li> Triggering an event when the user is not moving (false positive) is costly in
+    terms of power, so it should be avoided. </li>
+  <li> Not triggering an event when the user is moving (false negative) is acceptable
+    as long as it is not done repeatedly. If the user has been walking for 10
+    seconds, not triggering an event within those 10 seconds is not acceptable. </li>
+</ul>
+<p>Each sensor event reports 1 in <code>sensors_event_t.data[0]</code>.</p>
+<h3 id="step_detector">Step detector</h3>
+<p>Underlying physical sensor: Accelerometer (+ possibly others as long as low
+  power)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#special">Special</a> (one event per step taken)</em></p>
+<p>Low-power</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_STEP_DETECTOR)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A step detector generates an event each time a step is taken by the user.</p>
+<p>The timestamp of the event <code>sensors_event_t.timestamp</code> corresponds to when the
+  foot hit the ground, generating a high variation in acceleration.</p>
+<p>Compared to the step counter, the step detector should have a lower latency
+  (less than 2 seconds). Both the step detector and the step counter detect when
+  the user is walking, running and walking up the stairs. They should not trigger
+  when the user is biking, driving or in other vehicles.</p>
+<p>This sensor must be low power. That is, if the step detection cannot be done in
+  hardware, this sensor should not be defined. In particular, when the step
+  detector is activated and the accelerometer is not, only steps should trigger
+  interrupts (not every accelerometer reading).</p>
+<p><code>sampling_period_ns</code> has no impact on step detectors.</p>
+<p>Each sensor event reports 1 in <code>sensors_event_t.data[0]</code>.</p>
+<h3 id="step_counter">Step counter</h3>
+<p>Underlying physical sensor: Accelerometer (+ possibly others as long as low
+  power)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#on-change">On-change</a></em></p>
+<p>Low-power</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_STEP_COUNTER)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A step counter reports the number of steps taken by the user since the last
+  reboot while activated.</p>
+<p>The measurement is reported as a <code>uint64_t</code> in
+  <code>sensors_event_t.step_counter</code> and
+  is reset to zero only on a system reboot.</p>
+<p>The timestamp of the event is set to the time when the last step for that event
+  was taken.</p>
+<p>See the <a href="#step_detector">Step detector</a> sensor type for the meaning of the time of a step.</p>
+<p>Compared to the step detector, the step counter can have a higher latency (up
+  to 10 seconds). Thanks to this latency, this sensor has a high accuracy; the
+  step count after a full day of measurements should be within 10% of the actual step
+  count. Both the step detector and the step counter detect when the user is
+  walking, running and walking up the stairs. They should not trigger when the
+  user is biking, driving or in other vehicles.</p>
+<p>The hardware must ensure the internal step count never overflows. The minimum
+  size of the hardware's internal counter shall be 16 bits. In case of imminent
+  overflow (at most every ~2^16 steps), the SoC can be woken up so the driver can
+  do the counter maintenance.</p>
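+<p>A possible way to fold a 16-bit hardware counter into the 64-bit reported
+  value, assuming a hypothetical driver-side accumulator (not mandated by the
+  HAL):</p>
+<pre class=prettyprint>
+#include &lt;stdint.h>
+
+static uint64_t total_steps;    /* value reported in sensors_event_t.u64.step_counter */
+static uint16_t last_hw_count;  /* last raw reading from the 16-bit hardware counter  */
+
+/* Folds a 16-bit free-running hardware count into the 64-bit step counter.
+ * 16-bit arithmetic makes the wrap-around handling automatic. */
+static uint64_t update_step_counter(uint16_t hw_count) {
+    uint16_t delta = (uint16_t)(hw_count - last_hw_count);
+    last_hw_count = hw_count;
+    total_steps += delta;
+    return total_steps;
+}
+</pre>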
+<p>As stated in <a href="interaction.html">Interaction</a>, while this sensor
+  operates, it shall not disrupt any other sensors, in particular, the
+  accelerometer, which might very well be in use.</p>
+<p>If a particular device cannot support these modes of operation, then this
+  sensor type must not be reported by the HAL; that is, it is not acceptable to
+  &quot;emulate&quot; this sensor in the HAL.</p>
+<p>This sensor must be low power. That is, if the step detection cannot be done in
+  hardware, this sensor should not be defined. In particular, when the step
+  counter is activated and the accelerometer is not, only steps should trigger
+  interrupts (not accelerometer data).</p>
+<h3 id="tilt_detector">Tilt detector</h3>
+<p>Underlying physical sensor: Accelerometer (+ possibly others as long as low
+  power)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#special">Special</a></em></p>
+<p>Low-power</p>
+<p>Implement only the wake-up version of this sensor.</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_TILT_DETECTOR)</code> <em>returns a wake-up sensor</em></p>
+<p>A tilt detector generates an event each time a tilt event is detected.</p>
+<p>A tilt event is defined by the direction of the 2-second window average
+  gravity changing by at least 35 degrees since activation or since the last event
+  generated by the sensor. Here is the algorithm:</p>
+<ul>
+  <li> <code>reference_estimated_gravity</code> = average of accelerometer measurements over the
+    first second after activation or the estimated gravity when the last tilt event
+    was generated. </li>
+  <li> <code>current_estimated_gravity</code> = average of accelerometer measurements over the last
+    2 seconds. </li>
+  <li> trigger when <code>angle(reference_estimated_gravity, current_estimated_gravity) &gt; 35
+    degrees</code> </li>
+</ul>
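+<p>A sketch of the trigger condition only (the 2-second averaging and the
+  sampling are assumed to be handled elsewhere by the driver):</p>
+<pre class=prettyprint>
+#include &lt;math.h>
+#include &lt;stdbool.h>
+
+/* Returns true when the angle between the two gravity estimates exceeds
+ * 35 degrees. Both inputs are 3-element vectors in m/s^2. */
+static bool tilt_triggered(const float reference_gravity[3],
+                           const float current_gravity[3]) {
+    float dot = 0.0f, ref_sq = 0.0f, cur_sq = 0.0f;
+    for (int i = 0; i &lt; 3; i++) {
+        dot    += reference_gravity[i] * current_gravity[i];
+        ref_sq += reference_gravity[i] * reference_gravity[i];
+        cur_sq += current_gravity[i]  * current_gravity[i];
+    }
+    float cos_angle = dot / (sqrtf(ref_sq) * sqrtf(cur_sq));
+    return cos_angle &lt; cosf(35.0f * 3.14159265f / 180.0f);
+}
+</pre>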
+<p>Large accelerations without a change in phone orientation should not trigger a
+  tilt event. For example, a sharp turn or strong acceleration while driving a
+  car should not trigger a tilt event, even though the angle of the average
+  acceleration might vary by more than 35 degrees.</p>
+<p>Typically, this sensor is implemented with the help of only an
+  accelerometer. Other sensors can be used as well if they do not increase the
+  power consumption significantly. This is a low power sensor that should allow
+  the SoC to go into suspend mode. Do not emulate this sensor in the HAL.
+  Each sensor event reports 1 in <code>sensors_event_t.data[0]</code>.</p>
+<h2 id="attitude_composite_sensors">Attitude composite sensors</h2>
+<h3 id="rotation_vector">Rotation vector</h3>
+<p>Underlying physical sensors: Accelerometer, Magnetometer, and (when present)
+  Gyroscope</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_ROTATION_VECTOR)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A rotation vector sensor reports the orientation of the device relative to the
+  East-North-Up coordinates frame. It is usually obtained by integration of
+  accelerometer, gyroscope and magnetometer readings.</p>
+<p>The East-North-Up coordinate system is defined as a direct orthonormal basis
+  where:</p>
+<ul>
+  <li> X points east and is tangential to the ground. </li>
+  <li> Y points north and is tangential to the ground. </li>
+  <li> Z points towards the sky and is perpendicular to the ground. </li>
+</ul>
+<p>The orientation of the phone is represented by the rotation necessary to align
+  the East-North-Up coordinates with the phone's coordinates. That is, applying
+  the rotation to the world frame (X,Y,Z) would align them with the phone
+  coordinates (x,y,z).</p>
+<p>The rotation can be seen as rotating the phone by an angle theta around an axis
+  rot_axis to go from the reference (East-North-Up aligned) device orientation to
+  the current device orientation.</p>
+<p>The rotation is encoded as the four unit-less x, y, z, w components of a unit
+  quaternion:</p>
+<ul>
+  <li> <code>sensors_event_t.data[0] = rot_axis.x*sin(theta/2)</code> </li>
+  <li> <code>sensors_event_t.data[1] = rot_axis.y*sin(theta/2)</code> </li>
+  <li> <code>sensors_event_t.data[2] = rot_axis.z*sin(theta/2)</code> </li>
+  <li> <code>sensors_event_t.data[3] = cos(theta/2)</code> </li>
+</ul>
+<p>Where:</p>
+<ul>
+  <li> the x, y and z fields of <code>rot_axis</code> are the East-North-Up
+  coordinates of a unit length vector representing the rotation axis </li>
+  <li> <code>theta</code> is the rotation angle </li>
+</ul>
+<p>The quaternion is a unit quaternion: it must be of norm 1. Failure to ensure
+  this will cause erratic client behavior.</p>
+<p>In addition, this sensor reports an estimated heading accuracy:</p>
+<p><code>sensors_event_t.data[4] = estimated_accuracy</code> (in radians)</p>
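+<p>A sketch of the encoding, assuming <code>rot_axis</code> is already a unit
+  vector in East-North-Up coordinates and <code>theta</code> is the rotation
+  angle in radians:</p>
+<pre class=prettyprint>
+#include &lt;hardware/sensors.h>
+#include &lt;math.h>
+
+/* Encodes an axis-angle rotation as the unit quaternion expected in
+ * data[0..3], plus the estimated heading accuracy in data[4]. */
+static void encode_rotation_vector(sensors_event_t *ev, const float rot_axis[3],
+                                   float theta, float estimated_accuracy) {
+    float s = sinf(theta / 2.0f);
+    ev->data[0] = rot_axis[0] * s;      /* x */
+    ev->data[1] = rot_axis[1] * s;      /* y */
+    ev->data[2] = rot_axis[2] * s;      /* z */
+    ev->data[3] = cosf(theta / 2.0f);   /* w: keeps the quaternion at norm 1 */
+    ev->data[4] = estimated_accuracy;   /* heading accuracy, in radians */
+}
+</pre>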
+<p>The heading error must be less than <code>estimated_accuracy</code> 95% of the time. This
+  sensor must use a gyroscope as main orientation change input unless there is no
+  gyroscope on the device.</p>
+<p>This sensor also includes the accelerometer and magnetometer input to make up
+  for gyroscope drift, but it cannot be implemented using only the magnetometer
+  and accelerometer, unless there is no gyroscope on the device.</p>
+<h3 id="game_rotation_vector">Game rotation vector</h3>
+<p>Underlying physical sensors: Accelerometer and Gyroscope (no Magnetometer)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_GAME_ROTATION_VECTOR)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A game rotation vector sensor is similar to a rotation vector sensor but not
+  using the geomagnetic field. Therefore the Y axis doesn't point north but
+  instead to some other reference. That reference is allowed to drift by the same
+  order of magnitude as the gyroscope drifts around the Z axis.</p>
+<p>See the <a href="#rotation_vector">Rotation vector</a> sensor for details on
+  how to set <code>sensors_event_t.data[0-3]</code>. This sensor does
+  not report an estimated heading accuracy:
+  <code>sensors_event_t.data[4]</code> is reserved and should be set to 0.</p>
+<p>In an ideal case, a phone rotated and returned to the same real-world
+  orientation should report the same game rotation vector.</p>
+<p>This sensor must be based on a gyroscope and an accelerometer. It cannot use
+  the magnetometer as an input, except indirectly through estimation of the
+  gyroscope bias.</p>
+<h3 id="gravity">Gravity</h3>
+<p>Underlying physical sensors: Accelerometer and (if present) Gyroscope (or
+  magnetometer if gyroscope not present)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_GRAVITY)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A gravity sensor reports the direction and magnitude of gravity in the device's
+  coordinates.</p>
+<p>The gravity vector components are reported in m/s^2 in the x, y and z fields of
+  <code>sensors_event_t.acceleration</code>.</p>
+<p>When the device is at rest, the output of the gravity sensor should be
+  identical to that of the accelerometer. On Earth, the magnitude is around 9.8
+  m/s^2.</p>
+<p>If the device possesses a gyroscope, the gravity sensor must use the
+  gyroscope and accelerometer as input.</p>
+<p>If the device doesn’t possess a gyroscope, the gravity sensor must use the
+  accelerometer and the magnetometer as input.</p>
+<h3 id="geomagnetic_rotation_vector">Geomagnetic rotation vector</h3>
+<p>Underlying physical sensors: Accelerometer and Magnetometer (no Gyroscope)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p>Low-power</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_GEOMAGNETIC_ROTATION_VECTOR)</code> <em>returns a non-wake-up sensor</em></p>
+<p>A geomagnetic rotation vector is similar to a rotation vector sensor but using
+  a magnetometer and no gyroscope.</p>
+<p>This sensor must be based on a magnetometer. It cannot be implemented using a
+  gyroscope, and gyroscope input cannot be used by this sensor.</p>
+<p>See the <a href="#rotation_vector">Rotation vector</a> sensor for details on
+  how to set <code>sensors_event_t.data[0-4]</code>.</p>
+<p>Just like for the rotation vector sensor, the heading error must be less than
+  the estimated accuracy (<code>sensors_event_t.data[4]</code>) 95% of the time.</p>
+<p>This sensor must be low power, so it has to be implemented in hardware.</p>
+<h3 id="orientation_deprecated">Orientation (deprecated)</h3>
+<p>Underlying physical sensors: Accelerometer, Magnetometer and (if present)
+  Gyroscope</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_ORIENTATION)</code> <em>returns a non-wake-up sensor</em></p>
+<p>Note: This is an older sensor type that has been deprecated in the Android SDK.
+  It has been replaced by the rotation vector sensor, which is more clearly
+  defined. Use the rotation vector sensor over the orientation sensor whenever
+  possible.</p>
+<p>An orientation sensor reports the attitude of the device. The measurements are
+  reported in degrees in the x, y and z fields of <code>sensors_event_t.orientation</code>:</p>
+<ul>
+  <li> <code>sensors_event_t.orientation.x</code>: azimuth, the angle between the magnetic north
+    direction and the Y axis, around the Z axis (<code>0&lt;=azimuth&lt;360</code>). 0=North, 90=East,
+    180=South, 270=West </li>
+  <li> <code>sensors_event_t.orientation.y</code>: pitch, rotation around X axis
+    (<code>-180&lt;=pitch&lt;=180</code>), with positive values when the z-axis moves toward the
+    y-axis. </li>
+  <li> <code>sensors_event_t.orientation.z</code>: roll, rotation around Y axis (<code>-90&lt;=roll&lt;=90</code>),
+    with positive values when the x-axis moves towards the z-axis. </li>
+</ul>
+<p>Note that, for historical reasons, the roll angle is positive in the
+  clockwise direction (mathematically speaking, it should be positive in the
+  counterclockwise direction):</p>
+<div class="figure" style="width:264px">
+  <img src="images/axis_positive_roll.png" alt="Depiction of orientation
+   relative to a device" height="253" />
+  <p class="img-caption">
+    <strong>Figure 2.</strong> Orientation relative to a device.
+  </p>
+</div>
+<p>This definition is different from yaw, pitch and roll used in aviation where
+  the X axis is along the long side of the plane (tail to nose).</p>
+<p>The orientation sensor also reports how accurate it expects its readings to be
+  through <code>sensors_event_t.orientation.status</code>. See the <a href="http://developer.android.com/reference/android/hardware/SensorManager.html">SensorManager</a>’s <a href="http://developer.android.com/reference/android/hardware/SensorManager.html#SENSOR_STATUS_ACCURACY_HIGH"><code>SENSOR_STATUS_*</code></a> constants for more information on possible values for this field.</p>
+<h2 id="uncalibrated_sensors">Uncalibrated sensors</h2>
+<p>Uncalibrated sensors provide more raw results and may include some bias but
+  also contain fewer &quot;jumps&quot; from corrections applied through calibration. Some
+  applications may prefer these uncalibrated results as smoother and more
+  reliable. For instance, if an application is attempting to conduct its own
+  sensor fusion, introducing calibrations can actually distort results.</p>
+<h3 id="gyroscope_uncalibrated">Gyroscope uncalibrated</h3>
+<p>Underlying physical sensor: Gyroscope</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_GYROSCOPE_UNCALIBRATED)</code> <em>returns a non-wake-up sensor</em></p>
+<p>An uncalibrated gyroscope reports the rate of rotation around the sensor axes
+  without applying bias compensation to them, along with a bias estimate. All
+  values are in radians/second and are reported in the fields of
+  <code>sensors_event_t.uncalibrated_gyro</code>:</p>
+<ul>
+  <li> <code>x_uncalib</code>: angular speed (w/o drift compensation) around the X axis </li>
+  <li> <code>y_uncalib</code>: angular speed (w/o drift compensation) around the Y axis </li>
+  <li> <code>z_uncalib</code>: angular speed (w/o drift compensation) around the Z axis </li>
+  <li> <code>x_bias</code>: estimated drift around X axis </li>
+  <li> <code>y_bias</code>: estimated drift around Y axis </li>
+  <li> <code>z_bias</code>: estimated drift around Z axis </li>
+</ul>
+<p>Conceptually, the uncalibrated measurement is the sum of the calibrated
+  measurement and the bias estimate: <code>_uncalibrated = _calibrated + _bias</code>.</p>
+<p>The <code>x/y/z_bias</code> values are expected to jump as soon as the estimate of the bias
+  changes, and they should be stable the rest of the time.</p>
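+<p>A sketch of how the fields relate, assuming the driver already has its
+  calibrated readings and bias estimate (the helper name is illustrative):</p>
+<pre class=prettyprint>
+#include &lt;hardware/sensors.h>
+
+/* uncalibrated = calibrated + bias, all in rad/s. calibrated[] and
+ * bias_estimate[] come from the driver's own processing. */
+static void fill_uncalibrated_gyro(sensors_event_t *ev, const float calibrated[3],
+                                   const float bias_estimate[3]) {
+    ev->uncalibrated_gyro.x_uncalib = calibrated[0] + bias_estimate[0];
+    ev->uncalibrated_gyro.y_uncalib = calibrated[1] + bias_estimate[1];
+    ev->uncalibrated_gyro.z_uncalib = calibrated[2] + bias_estimate[2];
+    ev->uncalibrated_gyro.x_bias = bias_estimate[0];
+    ev->uncalibrated_gyro.y_bias = bias_estimate[1];
+    ev->uncalibrated_gyro.z_bias = bias_estimate[2];
+}
+</pre>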
+<p>See the definition of the <a href="#gyroscope">gyroscope</a> sensor for
+  details on the coordinate system used.</p>
+<p>Factory calibration and temperature compensation must be applied to the
+  measurements. Also, gyroscope drift estimation must be implemented so that
+  reasonable estimates can be reported in <code>x_bias</code>,
+  <code>y_bias</code> and <code>z_bias</code>. If the
+  implementation is not able to estimate the drift, then this sensor must not be
+  implemented.</p>
+<p>If this sensor is present, then the corresponding Gyroscope sensor must also be
+  present and both sensors must share the same <code>sensor_t.name</code> and
+  <code>sensor_t.vendor</code> values.</p>
+<h3 id="magnetic_field_uncalibrated">Magnetic field uncalibrated</h3>
+<p>Underlying physical sensor: Magnetometer</p>
+<p>Reporting-mode: <em><a href="report-modes.html#continuous">Continuous</a></em></p>
+<p><code>getDefaultSensor(SENSOR_TYPE_MAGNETIC_FIELD_UNCALIBRATED)</code> <em>returns a non-wake-up sensor</em></p>
+<p>An uncalibrated magnetic field sensor reports the ambient magnetic field
+  together with a hard iron calibration estimate. All values are in micro-Tesla
+  (uT) and are reported in the fields of <code>sensors_event_t.uncalibrated_magnetic</code>:</p>
+<ul>
+  <li> <code>x_uncalib</code>: magnetic field (w/o hard-iron compensation) along the X axis </li>
+  <li> <code>y_uncalib</code>: magnetic field (w/o hard-iron compensation) along the Y axis </li>
+  <li> <code>z_uncalib</code>: magnetic field (w/o hard-iron compensation) along the Z axis </li>
+  <li> <code>x_bias</code>: estimated hard-iron bias along the X axis </li>
+  <li> <code>y_bias</code>: estimated hard-iron bias along the Y axis </li>
+  <li> <code>z_bias</code>: estimated hard-iron bias along the Z axis </li>
+</ul>
+<p>Conceptually, the uncalibrated measurement is the sum of the calibrated
+  measurement and the bias estimate: <code>_uncalibrated = _calibrated + _bias</code>.</p>
+<p>The uncalibrated magnetometer allows higher level algorithms to handle bad hard
+  iron estimation. The <code>x/y/z_bias</code> values are expected to jump as soon as the
+  estimate of the hard-iron changes, and they should be stable the rest of the
+  time.</p>
+<p>Soft-iron calibration and temperature compensation must be applied to the
+  measurements. Also, hard-iron estimation must be implemented so that reasonable
+  estimates can be reported in <code>x_bias</code>, <code>y_bias</code> and
+  <code>z_bias</code>. If the implementation is not able to estimate the bias,
+  then this sensor must not be implemented.</p>
+<p>If this sensor is present, then the corresponding magnetic field sensor must be
+  present and both sensors must share the same <code>sensor_t.name</code> and
+  <code>sensor_t.vendor</code> values.</p>
+<h2 id="interaction_composite_sensors">Interaction composite sensors</h2>
+<p>Some sensors are mostly used to detect interactions with the user. We do not
+  define how those sensors must be implemented, but they must be low power and it
+  is the responsibility of the device manufacturer to verify their quality in
+  terms of user experience.</p>
+<h3 id="wake_up_gesture">Wake up gesture</h3>
+<p>Underlying physical sensors: Undefined (anything low power)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#one-shot">One-shot</a></em></p>
+<p>Low-power</p>
+<p>Implement only the wake-up version of this sensor.</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_WAKE_GESTURE)</code> <em>returns a wake-up sensor</em></p>
+<p>A wake up gesture sensor enables waking up the device based on a device
+  specific motion. When this sensor triggers, the device behaves as if the power
+  button was pressed, turning the screen on. This behavior (turning on the screen
+  when this sensor triggers) might be deactivated by the user in the device
+  settings. Changes in settings do not impact the behavior of the sensor: only
+  whether the framework turns the screen on when it triggers.
+  The actual gesture to be detected is not specified, and can be chosen by the
+  manufacturer of the device.</p>
+<p>This sensor must be low power, as it is likely to be activated 24/7.</p>
+<p>Each sensor event reports 1 in <code>sensors_event_t.data[0]</code>.</p>
+<h3 id="pick_up_gesture">Pick up gesture</h3>
+<p>Underlying physical sensors: Undefined (anything low power)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#one-shot">One-shot</a></em></p>
+<p>Low-power</p>
+<p>Implement only the wake-up version of this sensor.</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_PICK_UP_GESTURE)</code> <em>returns a wake-up sensor</em></p>
+<p>A pick-up gesture sensor triggers when the device is picked up,
+  regardless of where it was before (desk, pocket, bag).</p>
+<p>Each sensor event reports 1 in <code>sensors_event_t.data[0]</code>.</p>
+<h3 id="glance_gesture">Glance gesture</h3>
+<p>Underlying physical sensors: Undefined (anything low power)</p>
+<p>Reporting-mode: <em><a href="report-modes.html#one-shot">One-shot</a></em></p>
+<p>Low-power</p>
+<p>Implement only the wake-up version of this sensor.</p>
+<p><code>getDefaultSensor(SENSOR_TYPE_GLANCE_GESTURE)</code> <em>returns a wake-up sensor</em></p>
+<p>A glance gesture sensor enables briefly turning the screen on so the user
+  can glance at content on the screen based on a specific motion. When this sensor
+  triggers, the device will turn the screen on momentarily to allow the user to
+  glance at notifications or other content while the device remains locked in a
+  non-interactive state (dozing), then the screen will turn off again. This
+  behavior (briefly turning on the screen when this sensor triggers) might be
+  deactivated by the user in the device settings. Changes in settings do not
+  impact the behavior of the sensor: only whether the framework briefly turns the
+  screen on when it triggers. The actual gesture to be detected is not specified,
+  and can be chosen by the manufacturer of the device.</p>
+<p>This sensor must be low power, as it is likely to be activated 24/7.
+  Each sensor event reports 1 in <code>sensors_event_t.data[0]</code>.</p>
diff --git a/src/devices/sensors/suspend-mode.jd b/src/devices/sensors/suspend-mode.jd
new file mode 100644
index 0000000..1f9c351
--- /dev/null
+++ b/src/devices/sensors/suspend-mode.jd
@@ -0,0 +1,81 @@
+page.title=Suspend mode
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<h2 id="soc_power_states">SoC power states</h2>
+<p>The power states of the system on a chip (SoC) are: on, idle, and suspend. “On” is when the
+  SoC is running. “Idle” is a medium power mode where the SoC is powered but
+  doesn't perform any tasks. “Suspend” is a low-power mode where the SoC is not
+  powered. The power consumption of the device in this mode is usually 100 times
+  less than in the “On” mode.</p>
+<h2 id="non-wake-up_sensors">Non-wake-up sensors</h2>
+<p>Non-wake-up sensors are sensors that do not prevent the SoC
+  from going into suspend mode and do not wake the SoC up to report data. In
+  particular, the drivers are not allowed to hold wake-locks. It is the
+  responsibility of applications to keep a partial wake lock should they wish to
+  receive events from non-wake-up sensors while the screen is off. While the SoC
+  is in suspend mode, the sensors must continue to function and generate events,
+  which are put in a hardware FIFO. (See <a
+  href="batching.html">Batching</a> for more details.) The events in the
+  FIFO are delivered to the applications when the SoC wakes up. If the FIFO is
+  too small to store all events, the older events are lost; the oldest data is dropped to accommodate
+  the latest data. In the extreme case where the FIFO is nonexistent, all events
+  generated while the SoC is in suspend mode are lost. One exception is the
+  latest event from each on-change sensor: the last event <a href="batching.html#precautions_to_take_when_batching_non-wake-up_on-change_sensors">must be saved </a>outside of the FIFO so it cannot be lost.</p>
+<p>As soon as the SoC gets out of suspend mode, all events from the FIFO are
+  reported and operations resume as normal.</p>
+<p>Applications using non-wake-up sensors should either hold a wake lock to ensure
+  the system doesn't go to suspend, unregister from the sensors when they do
+  not need them, or expect to lose events while the SoC is in suspend mode.</p>
+<h2 id="wake-up_sensors">Wake-up sensors</h2>
+<p>In contrast to non-wake-up sensors, wake-up sensors ensure that their data is
+  delivered independently of the state of the SoC. While the SoC is awake, the
+  wake-up sensors behave like non-wake-up-sensors. When the SoC is asleep,
+  wake-up sensors must wake up the SoC to deliver events. They must still let the
+  SoC go into suspend mode, but must also wake it up when an event needs to be
+  reported. That is, the sensor must wake the SoC up and deliver the events
+  before the maximum reporting latency has elapsed or the hardware FIFO gets full.
+  See <a href="batching.html">Batching</a> for more details.</p>
+<p>To ensure the applications have the time to receive the event before the SoC
+  goes back to sleep, the driver must hold a &quot;timeout wake lock&quot; for 200
+  milliseconds each time an event is being reported. <em>That is, the SoC should not
+  be allowed to go back to sleep in the 200 milliseconds following a wake-up
+  interrupt.</em> This requirement will disappear in a future Android release;
+  until then, the timeout wake lock is required.</p>
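+<p>One possible way to hold such a timeout wake lock is through the kernel's
+  <code>/sys/power/wake_lock</code> interface, which accepts a lock name followed
+  by an optional timeout in nanoseconds (sketch only; drivers may use a different
+  kernel mechanism):</p>
+<pre class=prettyprint>
+#include &lt;fcntl.h>
+#include &lt;string.h>
+#include &lt;unistd.h>
+
+/* Best-effort 200 ms timeout wake lock around an event report. The lock name
+ * "sensor_report" is arbitrary; the timeout is expressed in nanoseconds. */
+static void hold_timeout_wake_lock(void) {
+    int fd = open("/sys/power/wake_lock", O_WRONLY);
+    if (fd &lt; 0)
+        return;                                   /* interface not available */
+    const char *req = "sensor_report 200000000";  /* 200 ms */
+    ssize_t n = write(fd, req, strlen(req));
+    (void)n;                                      /* best effort */
+    close(fd);
+}
+</pre>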
+<h2 id="how_to_define_wake-up_and_non-wake-up_sensors">How to define wake-up and non-wake-up sensors?</h2>
+<p>Up to KitKat, whether a sensor was a wake-up or a non-wake-up sensor was
+  dictated by the sensor type: most were non-wake-up sensors, with the exception
+  of the <a href="sensor-types.html#proximity">proximity</a> sensor and the <a href="sensor-types.html#significant_motion">significant motion detector</a>.</p>
+<p>Starting in L, whether a given sensor is a wake-up sensor or not is specified
+  by a flag in the sensor definition. Most sensors can be defined by pairs of
+  wake-up and non-wake-up variants of the same sensor, in which case they must
+  behave as two independent sensors, not interacting with one another. See
+  <a href="interaction.html">Interaction</a> for more details.</p>
+<p>Unless specified otherwise in the sensor type definition, it is recommended to
+  implement one wake-up sensor and one non-wake-up sensor for each sensor type
+  listed in <a href="sensor-types.html">Sensor types</a>. In each sensor type
+  definition, see what sensor (wake-up or non-wake-up) will be returned by
+  <code>SensorManager.getDefaultSensor(sensorType)</code>. It is the sensor
+  that most applications will use.</p>
diff --git a/src/devices/sensors/versioning.jd b/src/devices/sensors/versioning.jd
new file mode 100644
index 0000000..130c58c
--- /dev/null
+++ b/src/devices/sensors/versioning.jd
@@ -0,0 +1,183 @@
+page.title=HAL version deprecation
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>In the L release of Android, we are halting support for some sensor HAL
+versions. The only supported versions are <code>SENSORS_DEVICE_API_VERSION_1_0</code>
+and <code>SENSORS_DEVICE_API_VERSION_1_3</code>.</p>
+
+<p>In the next releases, we are likely to drop support for 1_0 as well.</p>
+
+<p>1_0 has no concept of batching. If possible, all devices using 1_0 SHOULD
+upgrade to 1_3.</p>
+
+<p>1_1 and 1_2 suffer from a poor definition of the batching concept and are no
+longer supported.</p>
+
+<p>All devices currently using 1_1 or 1_2 MUST upgrade to 1_3.</p>
+
+<p>In 1_3, we simplified the notion of batching and introduced wake-up
+sensors.</p>
+
+<p>To upgrade to 1_3, follow the changes listed below.</p>
+
+<h2>Implement the batch function</h2>
+
+<p>Even if you do not implement batching (your hardware has no FIFO), you must
+implement the <code>batch</code> function. <code>batch</code> is used to set
+the sampling period and the maximum reporting latency for a given sensor. It
+replaces <code>setDelay</code>. <code>setDelay</code> will not be called
+anymore.</p>
+
+<p>If you do not implement batching, you can implement <code>batch</code> by
+simply calling your existing <code>setDelay</code> function with the provided
+<code>sampling_period_ns</code> parameter.</p>
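+
+<p>A minimal sketch of such a <code>batch</code> implementation (the
+<code>my_set_delay</code> helper is a hypothetical name standing in for the
+driver's existing rate-setting code):</p>
+
+<pre class=prettyprint>
+#include &lt;hardware/sensors.h>
+#include &lt;stdint.h>
+
+/* Existing per-sensor rate-setting path of the driver (hypothetical name). */
+static int my_set_delay(struct sensors_poll_device_1 *dev, int sensor_handle,
+                        int64_t sampling_period_ns);
+
+/* batch() for a HAL without a hardware FIFO: ignore max_report_latency_ns
+ * and reuse the existing rate-setting path. */
+static int my_batch(struct sensors_poll_device_1 *dev, int sensor_handle,
+                    int flags, int64_t sampling_period_ns,
+                    int64_t max_report_latency_ns) {
+    (void)flags;                  /* deprecated, always ignored            */
+    (void)max_report_latency_ns;  /* no FIFO, so events cannot be batched  */
+    return my_set_delay(dev, sensor_handle, sampling_period_ns);
+}
+</pre>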
+
+<h2>Implement the flush function</h2>
+
+<p>Even if you do not implement batching, you must implement the
+<code>flush</code> function.</p>
+
+<p>If you do not implement batching, <code>flush</code> must generate one
+<code>META_DATA_FLUSH_COMPLETE</code> event and return 0 (success).</p>
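+
+<p>A minimal sketch of such a <code>flush</code> implementation (the
+<code>enqueue_event</code> helper is a hypothetical name standing in for the
+driver's own event queue):</p>
+
+<pre class=prettyprint>
+#include &lt;hardware/sensors.h>
+#include &lt;string.h>
+
+/* Driver-side event queue (hypothetical name). */
+static void enqueue_event(const sensors_event_t *ev);
+
+/* flush() for a HAL without a hardware FIFO: immediately queue one
+ * META_DATA_FLUSH_COMPLETE event for the requested sensor and report success. */
+static int my_flush(struct sensors_poll_device_1 *dev, int sensor_handle) {
+    (void)dev;
+    sensors_event_t ev;
+    memset(&amp;ev, 0, sizeof(ev));
+    ev.version = META_DATA_VERSION;
+    ev.type = SENSOR_TYPE_META_DATA;
+    ev.meta_data.what = META_DATA_FLUSH_COMPLETE;
+    ev.meta_data.sensor = sensor_handle;
+    enqueue_event(&amp;ev);
+    return 0;
+}
+</pre>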
+
+<h2>Change your sensors_poll_device_t.common.version</h2>
+
+<pre class=prettyprint>
+your_poll_device.common.version = SENSORS_DEVICE_API_VERSION_1_3
+</pre>
+
+<h2>Add the new fields to the definition of your sensors</h2>
+
+<p>When defining each sensor, in addition to the usual <a
+href="{@docRoot}devices/sensors/hal-interface.html#sensor_t">sensor_t</a>
+fields:</p>
+
+<pre class=prettyprint>
+.name =       "My magnetic field Sensor",
+.vendor =     "My company",
+.version =    1,
+.handle =     mag_handle,
+.type =       SENSOR_TYPE_MAGNETIC_FIELD,
+.maxRange =   200.0f,
+.resolution = CONVERT_M,
+.power =      5.0f,
+.minDelay =   16667,
+</pre>
+
+<p>you also must set the new fields, defined between 1_0 and 1_3:</p>
+
+<pre class=prettyprint>
+.fifoReservedEventCount = 0,
+.fifoMaxEventCount =   0,
+.stringType =         0,
+.requiredPermission = 0,
+.maxDelay =      200000,
+.flags =         SENSOR_FLAG_CONTINUOUS_MODE,
+</pre>
+
+<p><em>fifoReservedEventCount</em>: If not implementing batching, set this one to 0.</p>
+
+<p><em>fifoMaxEventCount</em>: If not implementing batching, set this one to 0</p>
+
+<p><em>stringType</em>: Set to 0 for all official Android sensors (those that are defined in
+sensors.h), as this value will be overwritten by the framework. For
+non-official sensors, see <a
+href="{@docRoot}devices/sensors/hal-interface.html#sensor_t">sensor_t</a> for
+details on how to set it.</p>
+
+<p><em>requiredPermission</em>: This is the permission that applications will be required to have to get
+access to your sensor. You can usually set this to 0 for all of your sensors,
+but sensors with type <code>HEART_RATE</code> must set this to <code>SENSOR_PERMISSION_BODY_SENSORS.</code></p>
+
+<p><em>maxDelay</em>: This value is important and you will need to set it according to the
+capabilities of the sensor and of its driver.</p>
+
+<p>This value is defined only for continuous and on-change sensors. It is the
+delay between two sensor events corresponding to the lowest frequency that this
+sensor supports. When lower frequencies are requested through the
+<code>batch</code> function, the events will be generated at this frequency
+instead. It can be used by the framework or applications to estimate when the
+batch FIFO may be full. If this value is not set properly, CTS will fail.
+For one-shot and special reporting mode sensors, set <code>maxDelay</code> to 0.</p>
+
+<p>For continuous sensors, set it to the maximum sampling period allowed in
+microseconds.</p>
+
+<p>Note:</p>
+<ul>
+<li><code>period_ns</code> is in nanoseconds whereas
+<code>maxDelay</code>/<code>minDelay</code> are in microseconds.</li>
+<li><code>maxDelay </code>should always fit within a 32-bit signed integer. It
+is declared as 64 bit on 64 bit architectures only for binary compatibility reasons.</li>
+</ul>
+
+<p><em>flags</em>: This field defines the reporting mode of the sensor and whether the sensor is
+a wake up sensor.</p>
+
+<p>If you do not implement batching, and are just moving from 1.0 to 1.3, set this
+to:</p>
+
+<p><code>SENSOR_FLAG_WAKE_UP | SENSOR_FLAG_ONE_SHOT_MODE</code> for <a
+href="{@docRoot}devices/sensors/report-modes.html#one-shot">one-shot</a>
+sensors</p>
+
+<p><code>SENSOR_FLAG_CONTINUOUS_MODE</code> for <a
+href="{@docRoot}devices/sensors/report-modes.html#continuous">continuous</a>
+sensors</p>
+
+<p><code>SENSOR_FLAG_ON_CHANGE_MODE</code> for <a
+href="{@docRoot}devices/sensors/report-modes.html#on-change">on-change</a>
+sensors except <a
+href="{@docRoot}devices/sensors/sensor-types.html#proximity">proximity</a></p>
+
+<p><code>SENSOR_FLAG_SPECIAL_REPORTING_MODE</code> for sensors with <a
+href="{@docRoot}devices/sensors/report-modes.html#special">special</a>
+reporting mode except for the <a
+href="{@docRoot}devices/sensors/sensor-types.html#tilt_detector">tilt
+detector</a>.</p>
+
+<p><code>SENSOR_FLAG_WAKE_UP | SENSOR_FLAG_ON_CHANGE_MODE</code> for the <a
+href="{@docRoot}devices/sensors/sensor-types.html#proximity">proximity</a> sensor and the Android official <a
+href="{@docRoot}devices/sensors/sensor-types.html#tilt_detector">tilt detector</a> sensor.</p>
+
+<h2>Notes when upgrading from 1_1 or 1_2</h2>
+<ul>
+  <li> The <code>batch</code> function now nearly always succeeds, even for sensors that do not support
+batching, independently of the value of the timeout argument. The only cases
+where the <code>batch</code> function might fail are internal errors, a bad
+<code>sensor_handle</code>, a negative <code>sampling_period_ns</code>, or a
+negative <code>max_report_latency_ns</code>.
+  <li> Whether a sensor supports batching is defined by whether it has a
+<code>fifoMaxEventCount</code> greater than 0. (In previous versions, it was
+based on the return value of <code>batch()</code>.)
+  <li> Sensors that support batching are always in what we called the “batch
+mode” in previous versions: even if the <code>max_report_latency_ns</code> parameter is 0, the sensor must still be batched, meaning the events must be
+stored in the FIFO when the SoC goes to suspend mode.
+  <li> The <code>flags </code>parameter of the <code>batch</code> function is
+not used anymore. <code>DRY_RUN</code> and <code>WAKE_UPON_FIFO_FULL</code> are
+both deprecated, and will never be passed to the <code>batch</code> function.
+  <li> The batch timeout argument is now referred to as the
+<code>max_report_latency</code> argument.
+</ul>
diff --git a/src/devices/tech/encryption/index.jd b/src/devices/tech/encryption/index.jd
index 87e145c..e3038d4 100644
--- a/src/devices/tech/encryption/index.jd
+++ b/src/devices/tech/encryption/index.jd
@@ -16,339 +16,488 @@
     See the License for the specific language governing permissions and
     limitations under the License.
 -->
-<p>Encryption on Android uses the dm-crypt layer in the Linux kernel.  Read the
-detailed description below of how it is tied into the Android system and what must
-be done on a new device to get this feature working.</p>
 
-<h2 id="quick-summary-for-3rd-parties">Quick summary for 3rd parties.</h2>
-<p>If you want to enable encryption on your device based on Android 3.0
-aka Honeycomb, there are only a few requirements:</p>
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<h2 id=what_is_encryption>What is encryption?</h2>
+
+<p>Encryption is the process of encoding user data on an Android device using an
+encrypted key. Once a device is encrypted, all user-created data is
+automatically encrypted before committing it to disk and all reads
+automatically decrypt data before returning it to the calling process.</p>
+
+<h2 id=what_we’ve_added_for_android_l>What we’ve added for Android L</h2>
+
+<ul>
+  <li>Created fast encryption, which only encrypts used blocks on the data partition
+to avoid first boot taking a long time. Only ext4 and f2fs filesystems
+currently support fast encryption.
+  <li>Added the <code>forceencrypt</code> flag to encrypt on first boot.
+  <li>Added support for patterns and encryption without a password.
+  <li>Added hardware-backed storage of the encryption key. See <a
+       href="#storing_the_encrypted_key">Storing the encrypted key</a> for more details.
+</ul>
+
+<h2 id=how_android_encryption_works>How Android encryption works</h2>
+
+<p>Android disk encryption is based on <code>dm-crypt</code>, which is a kernel feature that works at the block device layer. Because of
+this, encryption works with embedded MultiMediaCard (eMMC) and similar flash devices that present themselves to the kernel as block
+devices. Encryption is not possible with YAFFS, which talks directly to a raw
+NAND flash chip. </p>
+
+<p>The encryption algorithm is 128-bit Advanced Encryption Standard (AES) with
+cipher-block chaining (CBC) and ESSIV:SHA256. The master key is encrypted with
+128-bit AES via calls to the OpenSSL library. You must use 128 bits or more for
+the key (with 256 being optional). </p>
+
+<p class="note"><strong>Note:</strong> OEMs can use 128-bit or higher to encrypt the master key.</p>
+
+<p>In the L release, there are four kinds of encryption states: </p>
+
+<ul>
+  <li>default
+  <li>PIN
+  <li>password
+  <li>pattern
+</ul>
+
+<p>Upon first boot, the device generates a 128-bit key. This key is then encrypted
+with a default password, and the encrypted key is stored in the crypto
+metadata. The 128-bit key generated is valid until the next factory reset. Upon
+factory reset, a new 128-bit key is generated.</p>
+
+<p>When the user sets the PIN, pattern, or password on the device, only the 128-bit key
+is re-encrypted and stored. (That is, user PIN/pattern/password changes do NOT cause
+re-encryption of userdata.) </p>
+
+<p>Encryption is managed by <code>init</code> and <code>vold</code>. <code>init</code> calls <code>vold</code>, and vold sets properties to trigger events in init. Other parts of the system
+also look at the properties to conduct tasks such as report status, ask for a
+password, or prompt to factory reset in the case of a fatal error. To invoke
+encryption features in <code>vold</code>, the system uses the command line tool <code>vdc</code>’s <code>cryptfs</code> commands: <code>checkpw</code>, <code>restart</code>, <code>enablecrypto</code>, <code>changepw</code>, <code>cryptocomplete</code>, <code>verifypw</code>, <code>setfield</code>, <code>getfield</code>, <code>mountdefaultencrypted</code>, <code>getpwtype</code>, <code>getpw</code>, and <code>clearpw</code>.</p>
+
+<p>In order to encrypt, decrypt or wipe <code>/data</code>, <code>/data</code> must not be mounted. However, in order to show any user interface (UI), the
+framework must start and the framework requires <code>/data</code> to run. To resolve this conundrum, a temporary filesystem is mounted on <code>/data</code>. This allows Android to prompt for passwords, show progress, or suggest a data
+wipe as needed. It does impose the limitation that in order to switch from the
+temporary filesystem to the true <code>/data</code> filesystem, the system must stop every process with open files on the
+temporary filesystem and restart those processes on the real <code>/data</code> filesystem. To do this, all services must be in one of three groups: <code>core</code>, <code>main</code>, and <code>late_start</code>.</p>
+
+<ul>
+  <li><code>core</code>: Never shut down after starting.
+  <li><code>main</code>: Shut down and then restart after the disk password is entered.
+  <li><code>late_start</code>: Does not start until after <code>/data</code> has been decrypted and mounted.
+</ul>
+
+<p>To trigger these actions, the  <code>vold.decrypt</code> property is set to <a href="https://android.googlesource.com/platform/system/vold/+/master/cryptfs.c">various strings</a>. To kill and restart services, the <code>init</code> commands are:</p>
+
+<ul>
+  <li><code>class_reset</code>: Stops a service but allows it to be restarted with class_start.
+  <li><code>class_start</code>: Restarts a service.
+  <li><code>class_stop</code>: Stops a service and adds a <code>SVC_DISABLED</code> flag. Stopped services do not respond to <code>class_start</code>.
+</ul>
+
+<h2 id=flows>Flows</h2>
+
+<p>There are four flows for an encrypted device. A device is encrypted just once
+and then follows a normal boot flow.  </p>
+
+<ul>
+  <li>Encrypt a previously unencrypted device:
+  <ul>
+    <li>Encrypt a new device with <code>forceencrypt</code>: Mandatory encryption at first boot (starting in Android L).
+    <li>Encrypt an existing device: User-initiated encryption (Android K and earlier).
+  </ul>
+  <li>Boot an encrypted device:
+  <ul>
+    <li>Starting an encrypted device with no password: Booting an encrypted device that
+has no set password (relevant for devices running Android L and later).
+    <li> Starting an encrypted device with a password: Booting an encrypted device that
+has a set password.
+  </ul>
+</ul>
+
+<p>In addition to these flows, the device can also fail to encrypt <code>/data</code>. Each of the flows are explained in detail below.</p>
+
+<h3 id=encrypt_a_new_device_with_forceencrypt>Encrypt a new device with <code>/forceencrypt</code></h3>
+
+<p>This is the normal first boot for an Android L device. </p>
+
 <ol>
-<li>
-<p>The /data filesystem must be on a device that presents a block device
-    interface.  eMMC is used in the first devices.  This is because the
-    encryption is done by the dm-crypt layer in the kernel, which works
-    at the block device layer.</p>
-</li>
-<li>
-<p>The function get_fs_size() in system/vold/cryptfs.c assumes the filesystem
-    used for /data is ext4.  It's just error checking code to make sure the
-    filesystem doesn't extend into the last 16 Kbytes of the partition where
-    the crypto footer is kept.  It was useful for development when sizes were
-    changing, but should not be required for release.  If you are not using
-    ext4, you can either delete it and the call to it, or fix it to understand
-    the filesystem you are using.</p>
-</li>
-<li>
-<p>Most of the code to handle the setup and teardown of the temporary framework
-    is in files that are not usually required to be changed on a per device
-    basis.  However, the init.<device>.rc file will require some changes.  All
-    services must be put in one of three classes: core, main or late_state.
-    Services in the core class are not shutdown and restarted when the
-    temporary framework gets the disk password.  Services in the main class
-    are restarted when the framework is restarted.  Services in late_start are
-    not started until after the temporary framework is restarted.  Put services
-    here that are not required to be running while the temporary framework
-    gets the disk password.</p>
-<p>Also any directories that need to be created on /data that are device
-specific need to be in the Action for post-fs-data, and that Action must end
-with the command "setprop vold.post_fs_data_done 1".  If your
-init.<device>.rc file does not have a post-fs-data Action, then the
-post-fs-data Action in the main init.rc file must end with the command
-"setprop vold.post_fs_data_done 1".</p>
-</li>
+  <li><strong>Detect unencrypted filesystem with <code>/forceencrypt</code> flag</strong>
+
+<p>
+<code>/data</code> is not encrypted but needs to be because <code>/forceencrypt</code> mandates it.
+Unmount <code>/data</code>.</p>
+
+  <li><strong>Start encrypting <code>/data</code></strong>
+
+<p><code>vold.decrypt = "trigger_encryption"</code> triggers <code>init.rc</code>, which will cause <code>vold</code> to encrypt <code>/data</code> with no password. (None is set because this should be a new device.)</p>
+
+
+  <li><strong>Mount tmpfs</strong>
+
+
+<p><code>vold</code> mounts a tmpfs <code>/data</code> (using the tmpfs options from
+<code>ro.crypto.tmpfs_options</code>) and sets the property <code>vold.encrypt_progress</code> to 0.
+<code>vold</code> prepares the tmpfs <code>/data</code> for booting an encrypted system and sets the
+property <code>vold.decrypt</code> to: <code>trigger_restart_min_framework</code>
+</p>
+
+  <li><strong>Bring up framework to show progress</strong>
+
+
+<p>Because the device has virtually no data to encrypt, the progress bar will
+often not actually appear because encryption happens so quickly. See <a href="#encrypt_an_existing_device">Encrypt an existing device</a> for more details about the progress UI. </p>
+
+  <li><strong>When <code>/data</code> is encrypted, take down the framework</strong>
+
+<p><code>vold</code> sets <code>vold.decrypt</code> to <code>trigger_default_encryption</code>, which starts the <code>defaultcrypto</code> service. (This starts the flow below for mounting a default encrypted
+userdata.) <code>trigger_default_encryption</code> checks the encryption type to see if <code>/data</code> is encrypted with or without a password. Because Android L devices are
+encrypted on first boot, there should be no password set; therefore we decrypt
+and mount <code>/data</code>.</p>
+
+  <li><strong>Mount <code>/data</code></strong>
+
+<p><code>init</code> then mounts <code>/data</code> on a tmpfs RAMDisk using parameters it picks up from <code>ro.crypto.tmpfs_options</code>, which is set in <code>init.rc</code>.</p>
+
+  <li><strong>Start framework</strong>
+
+<p><code>vold</code> sets <code>vold.decrypt</code> to <code>trigger_restart_framework</code>, which continues the usual boot process. (This property handshake is sketched below.)</p>
 </ol>
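+
+<p>The property-driven handshake in these steps can be summarized with the
+following minimal Python sketch. The <code>setprop()</code> stand-in defined in the sketch is
+illustrative only; the real transitions are performed by <code>vold</code> and <code>init</code>, not by a
+script.</p>
+
+<pre>
+# Illustrative only: the step numbers refer to the list above.
+properties = {}
+
+def setprop(name, value):
+    # Stand-in for the Android property service used by vold and init.
+    properties[name] = value
+    print("setprop %s %s" % (name, value))
+
+def first_boot_forced_encryption():
+    # Steps 1-2: /data must be encrypted; trigger encryption with no password.
+    setprop("vold.decrypt", "trigger_encryption")
+
+    # Steps 3-4: mount a tmpfs /data, report progress, and bring up the
+    # minimal framework.
+    setprop("vold.encrypt_progress", "0")
+    setprop("vold.decrypt", "trigger_restart_min_framework")
+
+    # Steps 5-6: encryption finishes; take down the minimal framework and
+    # mount the default-encrypted /data.
+    setprop("vold.decrypt", "trigger_default_encryption")
+
+    # Step 7: continue the usual boot process.
+    setprop("vold.decrypt", "trigger_restart_framework")
+
+first_boot_forced_encryption()
+</pre>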
-<h2 id="how-android-encryption-works">How Android encryption works</h2>
-<p>Disk encryption on Android is based on dm-crypt, which is a kernel feature that
-works at the block device layer.  Therefore, it is not usable with YAFFS, which
-talks directly to a raw nand flash chip, but does work with emmc and similar
-flash devices which present themselves to the kernel as a block device.  The
-current preferred filesystem to use on these devices is ext4, though that is
-independent of whether encryption is used or not.</p>
-<p>While the actual encryption work is a standard linux kernel feature, enabling it
-on an Android device proved somewhat tricky.  The Android system tries to avoid
-incorporating GPL components, so using the cryptsetup command or libdevmapper
-were not available options.  So making the appropriate ioctl(2) calls into the
-kernel was the best choice.  The Android volume daemon (vold) already did this
-to support moving apps to the SD card, so I chose to leverage that work
-for whole disk encryption.  The actual encryption used for the filesystem for
-first release is 128 AES with CBC and ESSIV:SHA256.  The master key is
-encrypted with 128 bit AES via calls to the openssl library.</p>
-<p>Once it was decided to put the smarts in vold, it became obvious that invoking
-the encryption features would be done like invoking other vold commands, by
-adding a new module to vold (called cryptfs) and teaching it various commands.
-The commands are checkpw, restart, enablecrypto, changepw and cryptocomplete.
-They will be described in more detail below.</p>
-<p>The other big issue was how to get the password from the user on boot.  The
-initial plan was to implement a minimal UI that could be invoked from init
-in the initial ramdisk, and then init would decrypt and mount /data.  However,
-the UI engineer said that was a lot of work, and suggested instead that init
-communicate upon startup to tell the framework to pop up the password entry
-screen, get the password, and then shutdown and have the real framework started.
-It was decided to go this route, and this then led to a host of other decisions
-described below.  In particular, init set a property to tell the framework to go
-into the special password entry mode, and that set the stage for much
-communication between vold, init and the framework using properties.  The
-details are described below.</p>
-<p>Finally, there were problems around killing and restarting various services
-so that /data could be unmounted and remounted.  Bringing up the temporary
-framework to get the user password requires that a tmpfs /data filesystem be
-mounted, otherwise the framework will not run.  But to unmount the tmpfs /data
-filesystem so the real decrypted /data filesystem could be mounted meant that
-every process that had open files on the tmpfs /data filesystem had to be killed
-and restarted on the real /data filesystem.  This magic was accomplished by
-requiring all services to be in 1 of 3 groups: core, main and late_start.
-Core services are never shut down after starting.  main services are shutdown
-and then restarted after the disk password is entered.  late_start services
-are not started until after /data has been decrypted and mounted.  The magic
-to trigger these actions is by setting the property vold.decrypt to various
-magic strings, which is described below.  Also, a new init command "class_reset"
-was invented to stop a service, but allow it to be restarted with a
-"class_start" command.  If the command "class_stop" was used instead of the
-new command "class_reset" the flag SVC_DISABLED was added to the state of
-any service stopped, which means it would not be started when the command
-class_start was used on its class.</p>
-<h2 id="booting-an-encrypted-system">Booting an encrypted system.</h2>
+
+<h3 id=encrypt_an_existing_device>Encrypt an existing device</h3>
+
+<p>This is what happens when you encrypt an unencrypted Android K or earlier
+device that has been migrated to L. Note that this is the same flow as used in
+K.</p>
+
+<p>This process is user-initiated and is referred to as “inplace encryption” in
+the code. When a user selects to encrypt a device, the UI makes sure the
+battery is fully charged and the AC adapter is plugged in so there is enough
+power to finish the encryption process.</p>
+
+<p class="warning"><strong>Warning:</strong> If the device runs out of power and shuts down before it has finished
+encrypting, file data is left in a partially encrypted state. The device must
+be factory reset and all data is lost.</p>
+
+<p>To enable inplace encryption, <code>vold</code> starts a loop to read each sector of the real block device and then write it
+to the crypto block device. <code>vold</code> checks to see if a sector is in use before reading and writing it, which makes
+encryption much faster on a new device that has little to no data. </p>
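+
+<p>Conceptually, that loop looks like the following Python sketch. The helpers
+<code>sector_in_use()</code>, <code>read_sector()</code>, <code>write_sector()</code>, and <code>setprop()</code> are hypothetical
+stand-ins; the real loop lives in <code>vold</code> and operates on the raw and crypto block
+devices.</p>
+
+<pre>
+def encrypt_in_place(real_dev, crypto_dev, num_sectors,
+                     sector_in_use, read_sector, write_sector, setprop):
+    last_pct = -1
+    for sector in range(num_sectors):
+        # Skip sectors the filesystem is not using; this is what makes
+        # encrypting a nearly empty device so fast.
+        if sector_in_use(sector):
+            data = read_sector(real_dev, sector)
+            # Writing through the crypto device encrypts the sector on disk.
+            write_sector(crypto_dev, sector, data)
+        pct = sector * 100 // num_sectors
+        if pct != last_pct:
+            # The progress UI polls this property (see below).
+            setprop("vold.encrypt_progress", str(pct))
+            last_pct = pct
+</pre>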
+
+<p><strong>State of device</strong>: Set <code>ro.crypto.state = "unencrypted"</code> and execute the <code>on nonencrypted</code> <code>init</code> trigger to continue booting.</p>
+
 <ol>
-<li>
-<p>When init fails to mount /data, it assumes the filesystem  is encrypted,
-    and sets several properties:
-      ro.crypto.state = "encrypted"
-      vold.decrypt = 1
-    It then mounts a /data on a tmpfs ramdisk, using parameters it picks
-    up from ro.crypto.tmpfs_options, which is set in init.rc.</p>
-<p>If init was able to mount /data, it sets ro.crypto.state to "unencrypted".</p>
-<p>In either case, init then sets 5 properties to save the initial mount
-options given for /data in these properties:
-    ro.crypto.fs_type
-    ro.crypto.fs_real_blkdev
-    ro.crypto.fs_mnt_point
-    ro.crypto.fs_options
-    ro.crypto.fs_flags (saved as an ascii 8 digit hex number preceded by 0x)</p>
-</li>
-<li>
-<p>The framework starts up, and sees that vold.decrypt is set to "1".  This
-    tells the framework that it is booting on a tmpfs /data disk, and it needs
-    to get the user password.  First, however, it needs to make sure that the
-    disk was properly encrypted.  It sends the command "cryptfs cryptocomplete"
-    to vold, and vold returns 0 if encryption was completed successfully, or -1
-    on internal error, or -2 if encryption was not completed successfully. 
-    Vold determines this by looking in the crypto footer for the
-    CRYPTO_ENCRYPTION_IN_PROGRESS flag.  If it's set, the encryption process
-    was interrupted, and there is no usable data on the device.  If vold returns
-    an error, the UI should pop up a message saying the user needs to reboot and
-    factory reset the device, and give the user a button to press to do so.</p>
-</li>
-<li>
-<p>Assuming the "cryptfs cryptocomplete" command returned success, the
-    framework should pop up a UI asking for the disk password.  The UI then
-    sends the command "cryptfs checkpw <passwd>" to vold.  If the password
-    is correct (which is determined by successfully mounting the decrypted
-    at a temporary location, then unmounting it), vold saves the name of the
-    decrypted block device in the property ro.crypto.fs_crypto_blkdev, and
-    returns status 0 to the UI.  If the password is incorrect, it returns -1
-    to the UI.</p>
-</li>
-<li>
-<p>The UI puts up a crypto boot graphic, and then calls vold with the command
-    "cryptfs restart".  vold sets the property vold.decrypt to
-    "trigger_reset_main", which causes init.rc to do "class_reset main".  This
-    stops all services in the main class, which allows the tmpfs /data to be
-    unmounted.  vold then mounts the decrypted real /data partition, and then
-    preps the new partition (which may never have been prepped if it was
-    encrypted with the wipe option, which is not supported on first release).
-    It sets the property vold.post_fs_data_done to "0", and then sets
-    vold.decrypt to "trigger_post_fs_dat".  This causes init.rc to run the
-    post-fs-data commands in init.rc and init.<device>.rc.  They will create
-    any necessary directories, links, et al, and then set vold.post_fs_data_done
-    to "1".  Vold waits until it sees the "1" in that property.  Finally, vold
-    sets the property vold.decrypt to "trigger_restart_framework" which causes
-    init.rc to start services in class main again, and also start services
-    in class late_start for the first time since boot.</p>
-<p>Now the framework boots all its services using the decrypted /data
-filesystem, and the system is ready for use.</p>
-</li>
+  <li><strong>Check password</strong>
+
+<p>The UI calls <code>vold</code> with the command <code>cryptfs enablecrypto inplace passwd</code>, where <code>passwd</code> is the user's lock screen password.</p>
+
+  <li><strong>Take down the framework</strong>
+
+<p><code>vold</code> checks for errors, returns -1 if it can't encrypt, and prints a reason in the
+log. If it can encrypt, it sets the property <code>vold.decrypt</code> to <code>trigger_shutdown_framework</code>. This causes <code>init.rc</code> to stop services in the classes <code>late_start</code> and <code>main</code>. </p>
+
+  <li><strong>Unmount <code>/data</code></strong>
+
+<p><code>vold</code> unmounts <code>/mnt/sdcard</code> and then <code>/data</code>.</p>
+
+  <li><strong>Start encrypting <code>/data</code></strong>
+
+<p><code>vold</code> then sets up the crypto mapping, which creates a virtual crypto block device
+that maps onto the real block device but encrypts each sector as it is written,
+and decrypts each sector as it is read. <code>vold</code> then creates and writes out the crypto metadata.</p>
+
+  <li><strong>While it’s encrypting, mount tmpfs</strong>
+
+<p><code>vold</code> mounts a tmpfs <code>/data</code> (using the tmpfs options from <code>ro.crypto.tmpfs_options</code>) and sets the property <code>vold.encrypt_progress</code> to 0. <code>vold</code> prepares the tmpfs <code>/data</code> for booting an encrypted system and sets the property <code>vold.decrypt</code> to <code>trigger_restart_min_framework</code>.</p>
+
+  <li><strong>Bring up framework to show progress</strong>
+
+<p><code>trigger_restart_min_framework</code> causes <code>init.rc</code> to start the <code>main</code> class of services. When the framework sees that <code>vold.encrypt_progress</code> is set to 0, it brings up the progress bar UI, which queries that property
+every five seconds and updates the progress bar. The encryption loop updates <code>vold.encrypt_progress</code> every time it encrypts another percent of the partition (see the polling sketch after this list).</p>
+
+  <li><strong>When <code>/data</code> is encrypted, reboot</strong>
+
+<p>When <code>/data</code> is successfully encrypted, <code>vold</code> clears the flag <code>ENCRYPTION_IN_PROGRESS</code> in the metadata and reboots the system. </p>
+
+<p> If the reboot fails for some reason, <code>vold</code> sets the property <code>vold.encrypt_progress</code> to <code>error_reboot_failed</code> and the UI should display a message asking the user to press a button to
+reboot. This is not expected to ever occur.</p>
 </ol>
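+
+<p>The progress UI in step 6 amounts to a simple polling loop, sketched below in
+Python. The <code>getprop()</code> helper and the UI callbacks are hypothetical; the real
+implementation is part of the framework.</p>
+
+<pre>
+import time
+
+def watch_encrypt_progress(getprop, update_progress_bar, show_error):
+    # vold reboots the device once encryption reaches 100 percent, so on
+    # success this loop simply runs until the reboot tears it down.
+    while True:
+        value = getprop("vold.encrypt_progress")
+        if value.isdigit():
+            update_progress_bar(int(value))    # 0 to 100
+        elif value.startswith("error_"):
+            show_error(value)                  # e.g. error_reboot_failed
+            return
+        time.sleep(5)   # the UI polls the property every five seconds
+</pre>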
-<h2 id="enabling-encryption-on-the-device">Enabling encryption on the device.</h2>
-<p>For first release, we only support encrypt in place, which requires the
-framework to be shutdown, /data unmounted, and then every sector of the
-device encrypted, after which the device reboots to go through the process
-described above.  Here are the details:</p>
+
+<h3 id=starting_an_encrypted_device_with_default_encryption>Starting an encrypted device with default encryption</h3>
+
+<p>This is what happens when you boot up an encrypted device with no password.
+Because Android L devices are encrypted on first boot, there should be no set
+password and therefore this is the <em>default encryption</em> state.</p>
+
 <ol>
-<li>
-<p>From the UI, the user selects to encrypt the device.  The UI ensures that
-    there is a full charge on the battery, and the AC adapter is plugged in.
-    It does this to make sure there is enough power to finish the encryption
-    process, because if the device runs out of power and shuts down before it
-    has finished encrypting, file data is left in a partially encrypted state,
-    and the device must be factory reset (and all data lost).</p>
-<p>Once the user presses the final button to encrypt the device, the UI calls
-vold with the command "cryptfs enablecrypto inplace <passwd>" where passwd
-is the user's lock screen password.</p>
-</li>
-<li>
-<p>vold does some error checking, and returns -1 if it can't encrypt, and
-    prints a reason in the log.  If it thinks it can, it sets the property
-    vold.decrypt to "trigger_shutdown_framework".  This causes init.rc to
-    stop services in the classes late_start and main.  vold then unmounts
-    /mnt/sdcard and then /data.</p>
-</li>
-<li>
-<p>If doing an inplace encryption, vold then mounts a tmpfs /data (using the
-    tmpfs options from ro.crypto.tmpfs_options) and sets the property
-    vold.encrypt_progress to "0".  It then preps the tmpfs /data filesystem as
-    mentioned in step 3 for booting an encrypted system, and then sets the
-    property vold.decrypt to "trigger_restart_min_framework".  This causes
-    init.rc to start the main class of services.  When the framework sees that
-    vold.encrypt_progress is set to "0", it will bring up the progress bar UI,
-    which queries that property every 5 seconds and updates a progress bar.</p>
-</li>
-<li>
-<p>vold then sets up the crypto mapping, which creates a virtual crypto block
-    device that maps onto the real block device, but encrypts each sector as it
-    is written, and decrypts each sector as it is read.  vold then creates and
-    writes out the crypto footer.</p>
-<p>The crypto footer contains details on the type of encryption, and an
-encrypted copy of the master key to decrypt the filesystem.  The master key
-is a 128 bit number created by reading from /dev/urandom.  It is encrypted
-with a hash of the user password created with the PBKDF2 function from the
-SSL library.  The footer also contains a random salt (also read from
-/dev/urandom) used to add entropy to the hash from PBKDF2, and prevent
-rainbow table attacks on the password.  Also, the flag
-CRYPT_ENCRYPTION_IN_PROGRESS is set in the crypto footer to detect failure
-to complete the encryption process.  See the file cryptfs.h for details
-on the crypto footer layout.  The crypto footer is kept in the last 16
-Kbytes of the partition, and the /data filesystem cannot extend into that
-part of the partition.</p>
-</li>
-<li>
-<p>If told was to enable encryption with wipe, vold invokes the command
-    "make_ext4fs" on the crypto block device, taking care to not include
-    the last 16 Kbytes of the partition in the filesystem.</p>
-<p>If the command was to enable inplace, vold starts a loop to read each sector
-of the real block device, and then write it to the crypto block device.
-This takes about an hour on a 30 Gbyte partition on the Motorola Xoom.
-This will vary on other hardware.  The loop updates the property
-vold.encrypt_progress every time it encrypts another 1 percent of the
-partition.  The UI checks this property every 5 seconds and updates
-the progress bar when it changes.</p>
-</li>
-<li>
-<p>When either encryption method has finished successfully, vold clears the
-    flag ENCRYPTION_IN_PROGRESS in the footer, and reboots the system.
-    If the reboot fails for some reason, vold sets the property
-    vold.encrypt_progress to "error_reboot_failed" and the UI should
-    display a message asking the user to press a button to reboot.
-    This is not expected to ever occur.</p>
-</li>
-<li>
-<p>If vold detects an error during the encryption process, and if no data has
-    been destroyed yet and the framework is up, vold sets the property
-    vold.encrypt_progress to "error_not_encrypted" and the UI should give the
-    user the option to reboot, telling them that the encryption process
-    never started.  If the error occurs after the framework has been torn
-    down, but before the progress bar UI is up, vold will just reboot the
-    system.  If the reboot fails, it sets vold.encrypt_progress to
-    "error_shutting_down" and returns -1, but there will not be anyone
-    to catch the error.  This is not expected to happen.</p>
-<p>If vold detects an error during the encryption process, it sets
-vold.encrypt_progress to "error_partially_encrypted" and returns -1.
-The UI should then display a message saying the encryption failed, and
-provide a button for the user to factory reset the device.</p>
-</li>
+  <li><strong>Detect encrypted <code>/data</code> with no password</strong>
+
+<p>Detect that the Android device is encrypted because <code>/data</code>
+cannot be mounted and one of the flags <code>encryptable</code> or
+<code>forceencrypt</code> is set.</p>
+
+<p><code>vold</code> sets <code>vold.decrypt</code> to <code>trigger_default_encryption</code>, which starts the <code>defaultcrypto</code> service. <code>trigger_default_encryption</code> checks the encryption type to see if <code>/data</code> is  encrypted with or without a  password. </p>
+
+  <li><strong>Decrypt <code>/data</code></strong>
+
+<p><code>vold</code> creates the <code>dm-crypt</code> device over the block device so the device is ready for use.</p>
+
+  <li><strong>Mount <code>/data</code></strong>
+
+<p><code>vold</code> then mounts the decrypted real <code>/data</code> partition and then prepares the new partition. It sets the property <code>vold.post_fs_data_done</code> to 0 and then sets <code>vold.decrypt</code> to <code>trigger_post_fs_data</code>. This causes <code>init.rc</code> to run its <code>post-fs-data</code> commands. They will create any necessary directories or links and then set <code>vold.post_fs_data_done</code> to 1.</p>
+
+<p>Once <code>vold</code> sees the 1 in that property, it sets the property <code>vold.decrypt</code> to <code>trigger_restart_framework</code>. This causes <code>init.rc</code> to start services in class <code>main</code> again and also start services in class <code>late_start</code> for the first time since boot. (A sketch of this handshake follows the list.)</p>
+
+  <li><strong>Start framework</strong>
+
+<p>Now the framework boots all its services using the decrypted <code>/data</code>, and the system is ready for use.</p>
 </ol>
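+
+<p>The <code>post-fs-data</code> handshake in the mount step above can be sketched as follows.
+The <code>setprop()</code> and <code>getprop()</code> helpers are hypothetical, and the real <code>vold</code> waits on
+the property change rather than polling; this illustrates the protocol, not
+<code>vold</code>'s code.</p>
+
+<pre>
+import time
+
+def prepare_data_and_restart_framework(setprop, getprop):
+    # vold asks init to run the post-fs-data commands on the freshly
+    # mounted, decrypted /data.
+    setprop("vold.post_fs_data_done", "0")
+    setprop("vold.decrypt", "trigger_post_fs_data")
+
+    # The post-fs-data action in init.rc creates directories and links, and
+    # ends with "setprop vold.post_fs_data_done 1".
+    while getprop("vold.post_fs_data_done") != "1":
+        time.sleep(0.1)
+
+    # /data is ready: restart main services and start late_start services.
+    setprop("vold.decrypt", "trigger_restart_framework")
+</pre>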
-<h2 id="changing-the-password">Changing the password</h2>
-<p>To change the password for the disk encryption, the UI sends the command
-"cryptfs changepw <newpw>" to vold, and vold re-encrypts the disk master
-key with the new password.</p>
-<h2 id="summary-of-related-properties">Summary of related properties</h2>
-<p>Here is a table summarizing the various properties, their possible values,
-and what they mean:</p>
-<pre><code>vold.decrypt  1                               Set by init to tell the UI to ask
-                                              for the disk pw
 
-vold.decrypt  trigger_reset_main              Set by vold to shutdown the UI
-                                              asking for the disk password
+<h3 id=starting_an_encrypted_device_without_default_encryption>Starting an encrypted device without default encryption</h3>
 
-vold.decrypt  trigger_post_fs_data            Set by vold to prep /data with
-                                              necessary dirs, et al.
+<p>This is what happens when you boot up an encrypted device that has a set
+password. The device’s password can be a PIN, pattern, or password.</p>
 
-vold.decrypt  trigger_restart_framework       Set by vold to start the real
-                                              framework and all services
+<ol>
+  <li><strong>Detect encrypted device with a password</strong>
 
-vold.decrypt  trigger_shutdown_framework      Set by vold to shutdown the full
-                                              framework to start encryption
+<p>Detect that the Android device is encrypted because the property <code>ro.crypto.state</code> is set to <code>"encrypted"</code>.</p>
 
-vold.decrypt  trigger_restart_min_framework   Set by vold to start the progress
-                                              bar UI for encryption.
+<p><code>vold</code> sets <code>vold.decrypt</code> to <code>trigger_restart_min_framework</code> because <code>/data</code> is  encrypted with a password.</p>
 
-vold.enrypt_progress                          When the framework starts up, if
-                                              this property is set, enter the
-                                              progress bar UI mode.
+  <li><strong>Mount tmpfs</strong>
 
-vold.encrypt_progress  0 to 100               The progress bar UI should display
-                                              the percentage value set.
+<p><code>init</code> sets five properties to save the initial mount options given for <code>/data</code> with parameters passed from <code>init.rc</code>.  <code>vold</code> uses these properties to set up the crypto mapping:</p>
 
-vold.encrypt_progress  error_partially_encrypted  The progress bar UI should
-                                                  display a message that the
-                                                  encryption failed, and give
-                                                  the user an option to factory
-                                                  reset the device.
+<ol>
+  <li><code>ro.crypto.fs_type</code>
+  <li><code>ro.crypto.fs_real_blkdev</code>
+  <li><code>ro.crypto.fs_mnt_point</code>
+  <li><code>ro.crypto.fs_options</code>
+  <li><code>ro.crypto.fs_flags</code> (ASCII 8-digit hex number preceded by 0x)
+  </ol>
 
-vold.encrypt_progress  error_reboot_failed    The progress bar UI should display
-                                              a message saying encryption
-                                              completed, and give the user a
-                                              button to reboot the device.
-                                              This error is not expected to
-                                              happen.
+  <li><strong>Start framework to prompt for password</strong>
 
-vold.encrypt_progress  error_not_encrypted    The progress bar UI should display
-                                              a message saying an error occured,
-                                              and no data was encrypted or lost,
-                                              and give the user a button to
-                                              reboot the system.
+<p>The framework starts up and sees that <code>vold.decrypt</code> is set to <code>trigger_restart_min_framework</code>. This tells the framework that it is booting on a tmpfs <code>/data</code> disk and it needs to get the user password.</p>
 
-vold.encrypt_progress  error_shutting_down    The progress bar UI is not
-                                              running, so it's unclear who
-                                              will respond to this error,
-                                              and it should never happen
-                                              anyway.
+<p>First, however, it needs to make sure that the disk was properly encrypted. It
+sends the command <code>cryptfs cryptocomplete</code> to <code>vold</code>. <code>vold</code> returns 0 if encryption was completed successfully, -1 on internal error, or
+-2 if encryption was not completed successfully. <code>vold</code> determines this by looking in the crypto metadata for the <code>CRYPTO_ENCRYPTION_IN_PROGRESS</code> flag. If it's set, the encryption process was interrupted, and there is no
+usable data on the device. If <code>vold</code> returns an error, the UI should display a message to the user to reboot and
+factory reset the device, and give the user a button to press to do so.</p>
 
-vold.post_fs_data_done  0                     Set by vold just before setting
-                                              vold.decrypt to
-                                              trigger_post_fs_data.
+  <li><strong>Decrypt data with password</strong>
 
-vold.post_fs_data_done  1                     Set by init.rc or init.&lt;device&gt;.rc
-                                              just after finishing the task
-                                              post-fs-data.
+<p>Once <code>cryptfs cryptocomplete</code> is successful, the framework displays a UI asking for the disk password. The
+UI checks the password by sending the command <code>cryptfs checkpw</code> to <code>vold</code>. If the password is correct (which is determined by successfully mounting the
+decrypted <code>/data</code> at a temporary location, then unmounting it), <code>vold</code> saves the name of the decrypted block device in the property <code>ro.crypto.fs_crypto_blkdev</code> and returns status 0 to the UI. If the password is incorrect, it returns -1 to
+the UI. (A sketch of this framework-side flow follows the list.)</p>
 
-ro.crypto.fs_crypto_blkdev                    Set by the vold command checkpw
-                                              for later use by the vold command
-                                              restart.
+  <li><strong>Stop framework</strong>
 
-ro.crypto.state unencrypted                   Set by init to say this system is
-                                              running with an unencrypted /data
+<p>The UI puts up a crypto boot graphic and then calls <code>vold</code> with the command <code>cryptfs restart</code>. <code>vold</code> sets the property <code>vold.decrypt</code> to <code>trigger_reset_main</code>, which causes <code>init.rc</code> to do <code>class_reset main</code>. This stops all services in the main class, which allows the tmpfs <code>/data</code> to be unmounted. </p>
 
-ro.crypto.state encrypted                     Set by init to say this system is
-                                              running with an encrypted /data
+  <li><strong>Mount <code>/data</code></strong>
 
-ro.crypto.fs_type                             These 5 properties are set by init
-ro.crypto.fs_real_blkdev                      when it tries to mount /data with
-ro.crypto.fs_mnt_point                        parameters passed in from init.rc.
-ro.crypto.fs_options                          vold uses these to setup the
-ro.crypto.fs_flags                            crypto mapping.
+<p><code>vold</code> then mounts the decrypted real <code>/data</code> partition and prepares the new partition (which may never have been prepared if
+it was encrypted with the wipe option, which is not supported on first
+release). It sets the property <code>vold.post_fs_data_done</code> to 0 and then sets <code>vold.decrypt</code> to <code>trigger_post_fs_data</code>. This causes <code>init.rc</code> to run its <code>post-fs-data</code> commands. They will create any necessary directories or links and then set <code>vold.post_fs_data_done</code> to 1. Once <code>vold</code> sees the 1 in that property, it sets the property <code>vold.decrypt</code> to <code>trigger_restart_framework</code>. This causes <code>init.rc</code> to start services in class <code>main</code> again and also start services in class <code>late_start</code> for the first time since boot.</p>
 
-ro.crypto.tmpfs_options                       Set by init.rc with the options
-                                              init should use when mounting
-                                              the tmpfs /data filesystem.
-</code></pre>
-<h2 id="summary-of-new-init-actions">Summary of new init actions</h2>
-<p>A list of the new Actions that are added to init.rc and/or init.<device>.rc:</p>
-<pre><code>on post-fs-data
+  <li><strong>Start full framework</strong>
+
+<p>Now the framework boots all its services using the decrypted <code>/data</code> filesystem, and the system is ready for use.</p>
+</ol>
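+
+<p>The framework-side password flow (steps 3 through 5 above) can be sketched as
+follows, assuming a hypothetical <code>vold_command()</code> helper that sends a command line
+to <code>vold</code> and returns its status; the real code lives in the framework and <code>vold</code>.</p>
+
+<pre>
+def boot_with_password(vold_command, ask_user_for_password, show_factory_reset_ui):
+    # Step 3: make sure the previous encryption run actually completed.
+    if vold_command("cryptfs cryptocomplete") != 0:
+        show_factory_reset_ui()   # interrupted encryption: no usable data
+        return
+
+    # Step 4: ask for the password until vold can mount the decrypted /data.
+    while vold_command("cryptfs checkpw " + ask_user_for_password()) != 0:
+        pass   # wrong password: vold returned -1
+
+    # Step 5 onward: tear down the tmpfs framework and remount the real /data.
+    vold_command("cryptfs restart")
+</pre>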
+
+<h3 id=failure>Failure</h3>
+
+<p>A device might fail to decrypt for a few reasons. The device
+starts with the normal series of steps to boot:</p>
+
+<ol>
+  <li>Detect encrypted device with a password
+  <li>Mount tmpfs
+  <li>Start framework to prompt for password
+</ol>
+
+<p>But after the framework starts, the device can encounter some errors:</p>
+
+<ul>
+  <li>Password matches but cannot decrypt data
+  <li>User enters wrong password 30 times
+</ul>
+
+<p>If these errors are not resolved, <strong>prompt the user to factory wipe the device</strong>:</p>
+
+<p>If <code>vold</code> detects an error during the encryption process, and if no data has been
+destroyed yet and the framework is up, <code>vold</code> sets the property <code>vold.encrypt_progress</code> to <code>error_not_encrypted</code>. The UI prompts the user to reboot and alerts them the encryption process
+never started. If the error occurs after the framework has been torn down, but
+before the progress bar UI is up, <code>vold</code> will reboot the system. If the reboot fails, it sets <code>vold.encrypt_progress</code> to <code>error_shutting_down</code> and returns -1; but there will not be anything to catch the error. This is not
+expected to happen.</p>
+
+<p>If <code>vold</code> detects an error during the encryption process, it sets <code>vold.encrypt_progress</code> to <code>error_partially_encrypted</code> and returns -1. The UI should then display a message saying the encryption
+failed and provide a button for the user to factory reset the device. </p>
+
+<h2 id=storing_the_encrypted_key>Storing the encrypted key</h2>
+
+<p>The encrypted key is stored in the crypto metadata. Hardware backing is implemented by using the Trusted Execution Environment’s (TEE) signing capability.
+Previously, we encrypted the master key with a key generated by applying scrypt to the user's password and the stored salt. In order to make the key resilient
+against off-box attacks, we extend this algorithm by signing the resultant key with a stored TEE key. The resultant signature is then turned into an appropriate length key by one more application of scrypt. This key is then used to encrypt and decrypt the master key. To store this key:</p>
+
+<ol>
+  <li>Generate random 16-byte disk encryption key (DEK) and 16-byte salt.
+  <li>Apply scrypt to the user password and the salt to produce 32-byte intermediate
+key 1 (IK1).
+  <li>Pad IK1 with zero bytes to the size of the hardware-bound private key (HBK).
+Specifically, we pad as: 00 || IK1 || 00..00; one zero byte, 32 IK1 bytes, 223
+zero bytes.
+  <li>Sign padded IK1 with HBK to produce 256-byte IK2.
+  <li>Apply scrypt to IK2 and salt (same salt as step 2) to produce 32-byte IK3.
+  <li>Use the first 16 bytes of IK3 as KEK and the last 16 bytes as IV.
+  <li>Encrypt DEK with AES_CBC, with key KEK, and initialization vector IV.
+</ol>
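+
+<p>A minimal Python sketch of these steps is shown below. The scrypt parameters
+and the <code>tee_sign()</code> callback (standing in for the HBK signing done inside the
+TEE) are illustrative assumptions, and the final step uses the third-party
+<code>cryptography</code> package for AES-CBC; this is not <code>vold</code>'s actual implementation.</p>
+
+<pre>
+import os
+import hashlib
+from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
+
+SCRYPT_N, SCRYPT_R, SCRYPT_P = 16384, 8, 1   # illustrative parameters only
+
+def scrypt32(secret, salt):
+    # 32-byte scrypt output, as used for IK1 and IK3 above.
+    return hashlib.scrypt(secret, salt=salt,
+                          n=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P, dklen=32)
+
+def store_encrypted_key(password, tee_sign):
+    # password: bytes; tee_sign: callable returning a 256-byte signature.
+    dek = os.urandom(16)                      # 1. random disk encryption key
+    salt = os.urandom(16)                     #    and random salt
+    ik1 = scrypt32(password, salt)            # 2. 32-byte IK1
+    padded = b"\x00" + ik1 + b"\x00" * 223    # 3. pad to the HBK size (256 bytes)
+    ik2 = tee_sign(padded)                    # 4. 256-byte IK2, signed with HBK
+    ik3 = scrypt32(ik2, salt)                 # 5. 32-byte IK3
+    kek, iv = ik3[:16], ik3[16:]              # 6. split IK3 into KEK and IV
+    encryptor = Cipher(algorithms.AES(kek), modes.CBC(iv)).encryptor()
+    encrypted_dek = encryptor.update(dek) + encryptor.finalize()   # 7. AES_CBC
+    return encrypted_dek, salt
+</pre>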
+
+<h2 id=changing_the_password>Changing the password</h2>
+
+<p>When a user elects to change or remove their password in settings, the UI sends
+the command <code>cryptfs changepw</code> with the new password to <code>vold</code>, and <code>vold</code> re-encrypts the disk master key with the new password.</p>
+
+<h2 id=encryption_properties>Encryption properties</h2>
+
+<p><code>vold</code> and <code>init</code> communicate with each other by setting properties. Here is a list of available
+properties for encryption.</p>
+
+<h3 id=vold_properties>Vold properties </h3>
+
+<table>
+  <tr>
+    <th>Property</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td><code>vold.decrypt  trigger_encryption</code></td>
+    <td>Encrypt the drive with no
+    password.</td>
+  </tr>
+  <tr>
+    <td><code>vold.decrypt  trigger_default_encryption</code></td>
+    <td>Check the drive to see if it is encrypted with no password.
+If it is, decrypt and mount it,
+else set <code>vold.decrypt</code> to <code>trigger_restart_min_framework</code>.</td>
+  </tr>
+  <tr>
+    <td><code>vold.decrypt  trigger_reset_main</code></td>
+    <td>Set by vold to shut down the UI asking for the disk password.</td>
+  </tr>
+  <tr>
+    <td><code>vold.decrypt  trigger_post_fs_data</code></td>
+    <td>Set by vold to prep /data with the necessary directories, etc.</td>
+  </tr>
+  <tr>
+    <td><code>vold.decrypt  trigger_restart_framework</code></td>
+    <td>Set by vold to start the real framework and all services.</td>
+  </tr>
+  <tr>
+    <td><code>vold.decrypt  trigger_shutdown_framework</code></td>
+    <td>Set by vold to shut down the full framework to start encryption.</td>
+  </tr>
+  <tr>
+    <td><code>vold.decrypt  trigger_restart_min_framework</code></td>
+    <td>Set by vold to start the
+progress bar UI for encryption or
+prompt for password, depending on
+the value of <code>ro.crypto.state</code>.</td>
+  </tr>
+  <tr>
+    <td><code>vold.encrypt_progress</code></td>
+    <td>When the framework starts up,
+if this property is set, enter
+the progress bar UI mode.</td>
+  </tr>
+  <tr>
+    <td><code>vold.encrypt_progress  0 to 100</code></td>
+    <td>The progress bar UI should
+display the percentage value set.</td>
+  </tr>
+  <tr>
+    <td><code>vold.encrypt_progress  error_partially_encrypted</code></td>
+    <td>The progress bar UI should display a message that the encryption failed, and
+give the user an option to
+factory reset the device.</td>
+  </tr>
+  <tr>
+    <td><code>vold.encrypt_progress  error_reboot_failed</code></td>
+    <td>The progress bar UI should
+display a message saying encryption completed, and give the user a button to reboot the device. This error is not expected to happen.</td>
+  </tr>
+  <tr>
+    <td><code>vold.encrypt_progress  error_not_encrypted</code></td>
+    <td>The progress bar UI should
+display a message saying an error
+occurred, no data was encrypted or
+lost, and give the user a button to reboot the system.</td>
+  </tr>
+  <tr>
+    <td><code>vold.encrypt_progress  error_shutting_down</code></td>
+    <td>The progress bar UI is not running, so it is unclear who will respond to this error; it should never happen anyway.</td>
+  </tr>
+  <tr>
+    <td><code>vold.post_fs_data_done  0</code></td>
+    <td>Set by <code>vold</code> just before setting <code>vold.decrypt</code> to <code>trigger_post_fs_data</code>.</td>
+  </tr>
+  <tr>
+    <td><code>vold.post_fs_data_done  1</code></td>
+    <td>Set by <code>init.rc</code> or
+    <code>init.&lt;device&gt;.rc</code> just after finishing the task <code>post-fs-data</code>.</td>
+  </tr>
+</table>
+<h3 id=init_properties>init properties</h3>
+
+<table>
+  <tr>
+    <th>Property</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td><code>ro.crypto.fs_crypto_blkdev</code></td>
+    <td>Set by the <code>vold</code> command <code>checkpw</code> for later use by the <code>vold</code> command <code>restart</code>.</td>
+  </tr>
+  <tr>
+    <td><code>ro.crypto.state unencrypted</code></td>
+    <td>Set by <code>init</code> to say this system is running with an unencrypted <code>/data</code>.</td>
+  </tr>
+  <tr>
+    <td><code>ro.crypto.state encrypted</code></td>
+    <td>Set by <code>init</code> to say this system is running with an encrypted <code>/data</code>.</td>
+  </tr>
+  <tr>
+    <td><p><code>ro.crypto.fs_type<br>
+      ro.crypto.fs_real_blkdev      <br>
+      ro.crypto.fs_mnt_point<br>
+      ro.crypto.fs_options<br>
+      ro.crypto.fs_flags      <br>
+    </code></p></td>
+    <td> These five properties are set by
+      <code>init</code> when it tries to mount <code>/data</code> with parameters passed in from
+    <code>init.rc</code>. <code>vold</code> uses these to set up the crypto mapping.</td>
+  </tr>
+  <tr>
+    <td><code>ro.crypto.tmpfs_options</code></td>
+    <td>Set by <code>init.rc</code> with the options init should use when mounting the tmpfs /data filesystem.</td>
+  </tr>
+</table>
+<h2 id=init_actions>Init actions</h2>
+
+<pre>
+on post-fs-data
 on nonencrypted
 on property:vold.decrypt=trigger_reset_main
 on property:vold.decrypt=trigger_post_fs_data
 on property:vold.decrypt=trigger_restart_min_framework
 on property:vold.decrypt=trigger_restart_framework
 on property:vold.decrypt=trigger_shutdown_framework
-</code></pre>
+on property:vold.decrypt=trigger_encryption
+on property:vold.decrypt=trigger_default_encryption
+</pre>
diff --git a/src/devices/tech/security/se-linux.jd b/src/devices/tech/security/se-linux.jd
index ad06fee..6c34a02 100644
--- a/src/devices/tech/security/se-linux.jd
+++ b/src/devices/tech/security/se-linux.jd
@@ -1,4 +1,4 @@
-page.title=Validating Security-Enhanced Linux in Android
+page.title=Security-Enhanced Linux in Android
 @jd:body
 
 <!--
@@ -24,416 +24,78 @@
   </div>
 </div>
 
-<h2 id="introduction">Introduction</h2>
-<p>
-As part of the Android <a href="{@docRoot}devices/tech/security/index.html">security
-model</a>, Android uses Security-Enhanced Linux (SELinux) to enforce Mandatory
-Access Control (MAC) over all processes, even processes running
-with root/superuser privileges (a.k.a. Linux capabilities).  SELinux enhances
-Android security, and contributions to it have been made by a number of
-companies and organizations; all Android code and contributors are publicly
-available for review on
-<a href="https://android.googlesource.com/">android.googlesource.com</a>. With
-SELinux, Android can better protect and confine system services, control access
-to application data and system logs, reduce the effects of malicious software,
-and protect users from potential flaws in code on mobile devices.
-</p>
-<p>
-Android includes SELinux in enforcing mode and a corresponding security policy
-that works by default across the <a
-href="https://android.googlesource.com/">Android Open Source
-Project</a>. In enforcing mode, illegitimate
-actions are prevented and all potential violations are logged by the kernel to
-<code>dmesg</code>. Android device manufacturers should gather information about
-errors so they may refine their software and SELinux policies before enforcing
-them.
-</p>
+<h2 id=introduction>Introduction</h2>
 
-<h2 id="background">Background</h2>
-<p>
-SELinux can operate in one of two global modes: permissive mode, in
-which permission denials are logged but not enforced, and enforcing
-mode, in which permission denials are both logged and
-enforced. SELinux also supports a per-domain permissive mode in which
-specific domains (processes) can be made permissive while placing the
-rest of the system in global enforcing mode. A domain is simply a
-label identifying a process or set of processes in the security
-policy, where all processes labeled with the same domain are treated
-identically by the security policy. Per-domain permissive mode enables
-incremental application of SELinux to an ever-increasing portion of
-the system.  Per-domain permissive mode also enables policy
-development for new services while keeping the rest of the system
-enforcing.
-</p>
+<p>The Android security model is based in part on the concept of application
+sandboxes. Each application runs in its own sandbox. Prior to Android 4.3,
+these sandboxes were defined by the creation of a unique Linux UID for each
+application at time of installation. Starting with Android 4.3,
+Security-Enhanced Linux (SELinux) is used to further define the boundaries of
+the Android application sandbox.</p>
 
-<p>
-In Android 4.3, SELinux was fully permissive.  In Android 4.4, SELinux
-was made enforcing for the domains for several root processes:
-<code>installd</code>, <code>netd</code>, <code>vold</code> and
-<code>zygote</code>.  <em>All other processes, including other
-services and all apps, remain in permissive mode to allow further
-evaluation and prevent failures in Android 4.4. Still, an errant
-application could trigger an action in a root process that is not
-allowed, thereby causing the process or the application to crash.</em>
-</p>
-<p>
-For this reason, device manufacturers should retain the default settings
-provided by Android and limit enforcing mode to system services only until
-they've resolved issues reported in dmesg. That said, device manufacturers may
-need to augment their SELinux implementation to account for their additions and
-other changes to the operating system. See the <em>Customization</em> section for
-instructions.
-</p>
+<p>As part of the Android <a href="{@docRoot}devices/tech/security/index.html">security model</a>, Android uses SELinux to enforce mandatory access control (MAC) over all
+processes, even processes running with root/superuser privileges (a.k.a. Linux
+capabilities). SELinux enhances Android security by confining privileged
+processes and automating security policy creation.</p>
 
-<h2 id="mac">Mandatory access control</h2>
-<p>
-In conjunction with other Android security measures, Android's access control
-policy greatly limits the potential damage of compromised
-machines and accounts. Using tools like Android's discretionary and mandatory
-access controls gives you a structure to ensure your software runs
-only at the minimum privilege level. This mitigates the effects of
-attacks and reduces the likelihood of errant processes overwriting or even
-transmitting data.
-</p>
-<p>
-Starting in Android 4.3, SELinux provides a mandatory access control (MAC)
-umbrella over traditional discretionary access control (DAC) environments.
-For instance, software must typically run as the root user account to write
-to raw block devices. In a traditional DAC-based Linux environment, if the root
-user becomes compromised that user can write to every raw block device. However,
-SELinux can be used to label these devices so the process assigned the root
-privilege can write to only those specified in the associated policy. In this
-way, the process cannot overwrite data and system settings outside of the
-specific raw block device.
-</p>
-<p>
-See the <em>Use Cases</em> section for more examples of threats and ways to
-address them with SELinux.
-</p>
+<p>Contributions to SELinux in Android have been made by a number of companies and organizations;
+all Android code and contributors are publicly available for review on <a href="https://android.googlesource.com/">android.googlesource.com</a>. With SELinux, Android can better protect and confine system services, control
+access to application data and system logs, reduce the effects of malicious
+software, and protect users from potential flaws in code on mobile devices.</p>
 
-<h2 id="implementation">Implementation</h2>
-<p>
-Android's SELinux implementation is in enforcing mode - rather than the
-non-functional disabled mode or the notification-only permissive mode - to act
-as a reference and facilitate testing and development. Although enforcing mode
-is set globally, please remember this can be overridden on a per-domain basis
-as is in the case of the application domain.
-</p>
-<p>
-SELinux for Android is accompanied by everything you need to enable SELinux
-now. You merely need to integrate the <a
-href="https://android.googlesource.com/kernel/common/">latest Android
-kernel</a> and then incorporate the files found in the
-<a
-href="https://android.googlesource.com/platform/external/sepolicy/">
-external/sepolicy</a> directory:<br/>
-<a
-href="https://android.googlesource.com/kernel/common/">
-https://android.googlesource.com/kernel/common/</a>
-<br/>
-<a
-href="https://android.googlesource.com/platform/external/sepolicy/">
-https://android.googlesource.com/platform/external/sepolicy/</a>
-</p>
+<p>Android includes SELinux in enforcing mode and a corresponding security policy
+that works by default across the <a href="https://android.googlesource.com/">Android Open Source Project</a>. In enforcing mode, illegitimate actions are prevented and all attempted
+violations are logged by the kernel to <code>dmesg</code> and <code>logcat</code>. Android device manufacturers should gather information about errors so they
+may refine their software and SELinux policies before enforcing them.</p>
 
-<p>
- Those files when compiled comprise the SELinux kernel security policy and cover
-the upstream Android operating system. You should not need to modify
-the <root>external/sepolicy</root> files directly.  Instead, add your own
-device-specific policy files within the
-<root>/device/manufacturer/device-name/sepolicy directory.
-</p>
+<h2 id=background>Background</h2>
 
-<p>
-Then just update your <code>BoardConfig.mk</code> makefile - located in the
-<device-name> directory containing the sepolicy subdirectory - to reference the
-sepolicy subdirectory and any policy file once created, as shown below.  The
-BOARD_SEPOLICY variables and their meaning is documented in the
-external/sepolicy/README file.
-</p>
+<p>SELinux operates on the ethos of default denial. Anything that is not
+explicitly allowed is denied. SELinux can operate in one of two global modes:
+permissive mode, in which permission denials are logged but not enforced, and
+enforcing mode, in which denials are both logged and enforced. SELinux also
+supports a per-domain permissive mode in which specific domains (processes) can
+be made permissive while placing the rest of the system in global enforcing
+mode. A domain is simply a label identifying a process or set of processes in
+the security policy, where all processes labeled with the same domain are
+treated identically by the security policy. Per-domain permissive mode enables
+incremental application of SELinux to an ever-increasing portion of the system.
+Per-domain permissive mode also enables policy development for new services
+while keeping the rest of the system enforcing.</p>
 
-<pre>
-BOARD_SEPOLICY_DIRS += \
-        &lt;root&gt;/device/manufacturer/device-name/sepolicy
+<p>In the L release, Android moves to full enforcement of SELinux. This builds
+upon the permissive release of 4.3 and the partial enforcement of 4.4. In
+short, Android is shifting from enforcement on a limited set of crucial domains
+(<code>installd</code>, <code>netd</code>, <code>vold</code> and <code>zygote</code>) to everything (more than 60 domains). This means manufacturers will have to
+better understand and scale their SELinux implementations to provide compatible
+devices. Understand that:</p>
 
-BOARD_SEPOLICY_UNION += \
-        genfs_contexts \
-        file_contexts \
-        sepolicy.te
-</pre>
-
-<p>
-After rebuilding your device, it is enabled with SELinux. You can now either
-customize your SELinux policies to accommodate your own additions to the Android
-operating system as described in the <em>Customization</em> section or verify
-your existing setup as covered in the <em>Validation</em> section.
-</p>
-
-<h2 id="customization">Customization</h2>
-<p>
-Once you've integrated this base level of functionality and thoroughly analyzed
-the results, you may add your own policy settings to cover your customizations
-to the Android operating system. Of course, these policies must still meet the
-<a href="http://source.android.com/compatibility/index.html">Android
-Compatibility
-program</a> requirements and
-not remove the default SELinux settings.
-</p>
-<p>
-Manufacturers should not remove existing security settings. Otherwise, they risk
-breaking the Android SELinux implementation and the applications it governs.
-This includes third-party applications that will likely need to be improved to
-be compliant and operational. Applications must require no modification to
-continue functioning on SELinux-enabled devices.
-</p>
-<p>
-See the <em>Kernel Security Features</em> section of the Android Compatibility
-Definition document for specific requirements:<br/>
-<a
-href="http://source.android.com/compatibility/index.html">
-http://source.android.com/compatibility/index.html</a>
-</p>
-<p>
-SELinux uses a whitelist approach, meaning all access must be explicitly allowed
-in policy in order to be granted. Since Android's default SELinux policy already
-supports the Android Open Source Project, OEMs are not required to modify
-SELinux settings in any way. If they do customize SELinux settings, they should
-take great care not to break existing applications. Here is how we recommend
-proceeding:
-</p>
-
-<ol>
-<li>Use the <a href="https://android.googlesource.com/kernel/common/">latest
-Android
-kernel</a>.</li>
-<li>Adopt the <a
-href="http://en.wikipedia.org/wiki/Principle_of_least_privilege">principle of
-least
-privilege</a>.</li>
-<li>Address only your own additions to Android. The default policy works with
-the
-<a href="https://android.googlesource.com/">Android Open Source Project</a>
-codebase
-automatically.</li>
-<li>Compartmentalize software components into modules that conduct singular
-tasks.</li>
-<li>Create SELinux policies that isolate those tasks from unrelated
-functions.</li>
-<li>Put those policies in *.te files (the extension for SELinux policy source
-files) within the <root>/device/manufacturer/device-name/sepolicy
-directory and use BOARD_SEPOLICY variables to include them in your build.</li>
-<li>Make new domains permissive initially.  In Android 4.4 and earlier, this
-is done using a permissive declaration.  In later versions of Android,
-per-domain permissive mode is specified using the permissive_or_unconfined()
-macro.</li>
-<li>Analyze results and refine your domain definitions.</li>
-<li>Remove the permissive declaration when no further denials appear
-in userdebug builds.</li>
-</ol>
-
-<p>
-Once integrated, OEM Android development should include a step to ensure
-SELinux
-compatibility going forward. In an ideal software development process, SELinux
-policy changes only when the software model changes and not the actual
-implementation.
-</p>
-<p>
-As device manufacturers begin to customize SELinux, they should first audit
-their additions to Android. If they've added a component that conducts a new
-function, the manufacturers will need to ensure the component meets the security
-policy applied by Android, as well as any associated policy crafted by the OEM,
-before turning on enforcing mode.
-</p>
-<p>
-To prevent unnecessary issues, it is better to be overbroad and over-compatible
-than too restrictive and incompatible, which results in broken device functions.
-Conversely, if a manufacturer's changes will benefit others, it should supply
-the modifications to the default SELinux policy as a
-<a href="http://source.android.com/source/submit-patches.html">patch</a>. If the
-patch is
-applied to the default security policy, the manufacturer will no longer need to
-make this change with each new Android release.
-</p>
-
-<h2 id="use-cases">Use Cases</h2> <p>Here are specific examples of exploits to
-consider when crafting your own software and associated SELinux policies:</p>
-
-<p><strong>Symlinks</strong> - Because symlinks appear as files, they are often
-read just as that. This can lead to exploits. For instance, some privileged
-components such as <code>init</code> change the permissions of certain files,
-sometimes to be excessively open.</p>
-
-<p>Attackers might then replace those files with symlinks to code they control,
-allowing the attacker to overwrite arbitrary files. But if you know your
-application will never traverse a symlink, you can prohibit it from doing so
-with SELinux.</p>
-
-<p><strong>System files</strong> - Consider the class of system files that
-should only be modified by the system server. Still, since <code>netd</code>,
-<code>init</code>, and <code>vold</code> run as root, they can access those
-system files. So if <code>netd</code> became compromised, it could compromise
-those files and potentially the system server itself.</p>
-
-<p>With SELinux, you can identify those files as system server data files.
-Therefore, the only domain that has read/write access to them is system server.
-Even if <code>netd</code> became compromised, it could not switch domains to the
-system server domain and access those system files although it runs as root.</p>
-
-<p><strong>App data</strong> - Another example is the class of functions that
-must run as root but should not get to access app data. This is incredibly
-useful as wide-ranging assertions can be made, such as certain domains
-unrelated to application data being prohibited from accessing the internet.</p>
-
-<p><strong>setattr</strong> - For commands such as <code>chmod</code> and
-<code>chown</code>, you could identify the set of files where the associated
-domain can conduct <code>setattr</code>. Anything outside of that could be
-prohibited from these changes, even by root. So an application might run
-<code>chmod</code> and <code>chown</code> against those labeled app_data_files
-but not shell_data_files or system_data_files.</p>
-
-<h2 id="related-files">Related Files</h2>
-<p>This section serves to guide you once you&rsquo;ve decided to
-customize the SELinux policy settings. See the <em>Customization</em> section
-for steps. We recommend device manufacturers start with the default Android
-SELinux policy and make the minimum possible set of changes to address their
-additions to Android. Existing Android SELinux policy files are found in the
-root of the <a
-href="https://android.googlesource.com/platform/external/sepolicy/">
-external/sepolicy</a> directory.</p>
-
-<p>Android upgraded its SELinux policy version to allow the SELinux mode to be
-set to permissive on a per-domain basis. For example, if you run all of your
-applications in a single domain, you could set that domain to be permissive and
-then have all other functions and their domains set to enforcing. Domains are
-associated with applications by the key used to sign each application. The
-mapping of app certificates to domains is specified via the
-mac_permissions.xml and seapp_contexts configuration files.</p>
-
-<p>Here are the files you must create or edit in order to customize SELinux:</p>
 <ul>
-<li>
-<p><em>New SELinux policy source (*.te) files</em> - Located in the
-&lt;root&gt;/device/manufacturer/device-name/sepolicy directory These files
-define domains and their labels. The new policy files get concatenated with the
-existing policy files during compilation into a single SELinux kernel policy
-file.</p>
-<p><strong>Important</strong>:Do not alter the app.te file provided by the
-Android Open Source Project. Doing so risks breaking all third-party
-applications.
-</p>
-</li>
-<li>
-<p><em>Updated <code>BoardConfig.mk</code> makefile</em> - Located in the
-&lt;device-name&gt; directory containing the sepolicy subdirectory. It must be
-updated to reference the sepolicy subdirectory once created if it wasn&rsquo;t
-in initial implementation.</p> </li>
-<li>
-<p><em>Updated <code>file_contexts</code></em> - Located in
-the sepolicy subdirectory. It labels files and is managed in the userspace. As
-you create new policies, update this file to reference them. In order to apply
-new <code>file_contexts</code>, you must run <code>restorecon</code> on the file
-to be relabeled.</p>
-</li> </ul>
-
-<p>The remaining files in the sepolicy directory are either auto-generated or
-should remain static. The policy rules come in the form: allow <em>domains</em>
-<em>types</em>:<em>classes</em> <em>permissions</em>;, where:</p>
-<ul>
-<li>
-<p><em>Domain</em> - A label for the process or set of processes.
-</p></li>
-<li>
-<p><em>Type</em> - A label for the object (e.g. file, socket) or set of objects.
-</p></li>
-<li>
-<p><em>Class</em> - The kind of object (e.g. file, socket) being accessed.
-</p></li>
-<li>
-<p><em>Permission</em> - The operation (e.g. read, write) being performed.
-</p></li>
-
-<p>And so an example use of this would follow the structure:<br>
-<code>allow appdomain app_data_file:file rw_file_perms;</code></p>
-
-<p>This says an application is allowed to read and write files labeled
-app_data_file. Note that this rule relies upon macros defined in the
-global_macros file, and other helpful macros can also be found in the
-te_macros file.  Macros are provided for common groupings of classes,
-permissions and rules, and should be used whenever possible to help reduce the
-likelihood of failures due to denials on related permissions.  During
-compilation, those overrides are concatenated to the existing SELinux settings
-and into a single security policy. These overrides add to the base security
-policy rather than subtract from existing settings.</p>
-
-<p>Once the new policy files and <code>BoardConfig.mk</code> updates are in
-place, the new policy settings are automatically built into the final kernel
-policy file.</p>
-
-<h2 id="validation">Validation</h2> <p>Android strongly encourages OEMs to test
-their SELinux implementations thoroughly. As manufacturers implement SELinux,
-they should initially release their own policies in permissive mode. If
-possible, apply the new policy to a test pool of devices first.</p>
-
-<p>Once applied, make sure SELinux is running in the correct mode on the device
-by issuing the command: <code>getenforce</code></p>
-
-<p>This will print the global SELinux mode: either Disabled, Enforcing, or
-Permissive.
-Please note, this command shows only the global SELinux mode. To determine the
-SELinux mode for each domain, you must examine the corresponding files.</p>
-
-<p>Then check for errors. Errors are routed as event logs to <code>dmesg</code>
-and viewable locally on the device. Manufacturers should examine the SELinux
-output to <code>dmesg</code> on these devices and refine settings prior to
-public release in permissive mode and eventual switch to enforcing mode.  It is
-possible to capture the ongoing denial logs by running
-<code>cat /proc/kmsg</code> or to capture denial logs from the previous boot by
-running <code>cat /proc/last_kmsg</code>.</p>
-
-<p>With this output, manufacturers can readily identify when system users or
-components are in violation of SELinux policy. Manufacturers can then repair
-this bad behavior, either by changes to the software, SELinux policy, or
-both.</p>
-
-<p>Specifically, these log messages indicate what processes would fail
-under enforcing mode and why. Here is an example:</p>
-
-<pre>
-denied  { connectto } for  pid=2671 comm="ping" path="/dev/socket/dnsproxyd"
-scontext=u:r:shell:s0 tcontext=u:r:netd:s0 tclass=unix_stream_socket
-</pre>
-
-<p>Interpret this output like so:</p>
-<ul>
-<li>The { connectto } above represents the action being taken. Together with the
-tclass at the end (unix_stream_socket) it tells you roughly what was being done
-to what. In this case, something was trying to connect to a unix stream
-socket.</li>
-<li>The scontext (u:r:shell:s0) tells you what context initiated the action. In
-this case this is something running as the shell.</li>
-<li>The tcontext (u:r:netd:s0) tells you the context of the action’s target. In
-this case, that’s a unix_stream_socket owned by netd.</li>
-<li>The comm="ping" at the top gives you an additional hint about what was being
-run at the time the denial was generated. In this case, it’s a pretty good
-hint.</li>
+  <li> Everything is in enforcing mode in the L release
+  <li> No processes other than <code>init</code> should run in the <code>init</code> domain
+  <li> Any generic denial (for a block_device, socket_device, default_service, etc.)
+indicates that the device needs a special domain
 </ul>
 
-<p>Android is taking this information, analyzing
-it and refining its default security policy so that it works on a wide range of
-Android devices with little customization. With this policy, OEMs must only
-accommodate their own changes to the Android operating system.</p>
+<h2 id=supporting_documentation>Supporting documentation</h2>
 
-<p>Then run the SELinux-enabled devices through the <a
-href="{@docRoot}compatibility/cts-intro.html">Android
-Compatibility Test Suite</a> (CTS).</p> <p>As said, any new policies must still
-meet the <a href="{@docRoot}compatibility/index.html">Android
-Compatibility program</a> requirements.</p>
+<p>See the documentation below for details on constructing useful policies:</p>
 
-<p>Finally, if possible, turn on enforcing mode internally (on devices of
-employees) to raise the visibility of failures. Identify any user issues and
-resolve them.  </p> <h2 id="help">Help</h2> Device manufacturers are strongly
-encouraged to work with their Android account managers to analyze SELinux
-results and improve policy settings. Over time, Android intends to support
-common manufacturer additions in its default SELinux policy. For more
-information, contact security@android.com.
+<p><a href="https://seandroid.bitbucket.org/PapersandPresentations.html">https://seandroid.bitbucket.org/PapersandPresentations.html</a></p>
+
+<p><a href="https://www.codeproject.com/Articles/806904/Android-Security-Customization-with-SEAndroid">https://www.codeproject.com/Articles/806904/Android-Security-Customization-with-SEAndroid</a></p>
+
+<p><a href="https://www.nsa.gov/research/_files/publications/implementing_selinux.pdf">https://www.nsa.gov/research/_files/publications/implementing_selinux.pdf</a></p>
+
+<p><a href="https://events.linuxfoundation.org/sites/events/files/slides/abs2014_seforandroid_smalley.pdf">https://events.linuxfoundation.org/sites/events/files/slides/abs2014_seforandroid_smalley.pdf</a></p>
+
+<p><a href="https://www.internetsociety.org/sites/default/files/02_4.pdf">https://www.internetsociety.org/sites/default/files/02_4.pdf</a></p>
+
+<p><a href="https://www.gnu.org/software/m4/manual/index.html">https://www.gnu.org/software/m4/manual/index.html</a></p>
+
+<p><a href="https://freecomputerbooks.com/books/The_SELinux_Notebook-4th_Edition.pdf">https://freecomputerbooks.com/books/The_SELinux_Notebook-4th_Edition.pdf</a></p>
+
+<h2 id=help>Help</h2>
+
+<p>Over time, Android intends to support common manufacturer additions in its
+default SELinux policy. For more information, contact <a href="mailto:security@android.com">security@android.com</a>.</p>
diff --git a/src/devices/tech/security/selinux/concepts.jd b/src/devices/tech/security/selinux/concepts.jd
new file mode 100644
index 0000000..a0eb2cc
--- /dev/null
+++ b/src/devices/tech/security/selinux/concepts.jd
@@ -0,0 +1,174 @@
+page.title=SELinux concepts
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Review this page to become familiar with the concepts at play within SELinux.</p>
+
+<h2 id=mandatory_access_control>Mandatory access control</h2>
+
+<p>Security-Enhanced Linux (SELinux) is a mandatory access control (MAC) system
+for the Linux operating system.  As a MAC system, it differs from Linux’s
+familiar discretionary access control (DAC) system.  In a DAC system, a concept
+of ownership exists, whereby an owner of a particular resource controls access
+permissions associated with it.  This is generally coarse-grained and subject
+to unintended privilege escalation.  A MAC system, however, consults a central
+authority for a decision on all access attempts.</p>
+
+<p>SELinux has been implemented as part of the Linux Security Module (LSM)
+framework, which recognizes various kernel objects, and sensitive actions
+performed on them.  At the point at which each of these actions would be
+performed, an LSM hook function is called to determine whether or not the
+action should be allowed based on the information for it stored in an opaque
+security object. SELinux provides an implementation for these hooks and
+management of these security objects, which combine with its own policy, to
+determine the access decisions.</p>
+
+<p>In conjunction with other Android security measures, Android's access control
+policy greatly limits the potential damage of compromised machines and
+accounts. Using tools like Android's discretionary and mandatory access
+controls gives you a structure to ensure your software runs only at the minimum
+privilege level. This mitigates the effects of attacks and reduces the
+likelihood of errant processes overwriting or even transmitting data.</p>
+
+<p>Starting in Android 4.3, SELinux provides a mandatory access control (MAC)
+umbrella over traditional discretionary access control (DAC) environments. For
+instance, software must typically run as the root user account to write to raw
+block devices. In a traditional DAC-based Linux environment, if the root user
+becomes compromised that user can write to every raw block device. However,
+SELinux can be used to label these devices so the process assigned the root
+privilege can write to only those specified in the associated policy. In this
+way, the process cannot overwrite data and system settings outside of the
+specific raw block device.</p>
+
+<p>See <a href="implement.html#use_cases">Use Cases</a> for more examples of threats and ways to address them with SELinux.</p>
+
+<h2 id=enforcement_levels>Enforcement levels</h2>
+
+<p>Become familiar with the following terms to understand how SELinux can be
+implemented to varying strengths.</p>
+
+<ul>
+  <li><em>Permissive</em> - SELinux security policy is not enforced, only logged.
+  <li><em>Enforcing</em> - Security policy is enforced and logged. Failures appear as EPERM errors.
+</ul>
+
+<p>This choice is binary and determines whether your policy takes action or merely
+allows you to gather potential failures. Permissive is especially useful during
+implementation.</p>
+
+<ul>
+  <li><em>Unconfined</em> - A very light policy that prohibits certain tasks and provides a temporary
+stop-gap during development. Should not be used for anything outside of the
+Android Open Source Project (AOSP).
+  <li><em>Confined</em> - A custom-written policy designed for the service. That policy should define
+precisely what is allowed.
+</ul>
+
+<p>Unconfined policies are available to help implement SELinux in Android quickly.
+They are suitable for most root-level applications. But they should be
+converted to confined policies wherever possible over time to restrict each
+application to precisely the resources it needs.</p>
+
+<p>Ideally, your policy is both in enforcing mode and confined. Unconfined
+policies in enforcement mode can mask potential violations that would have been
+logged in permissive mode with a confined policy. Therefore, we strongly
+recommend partners implement true confined policies.</p>
+
+<h2 id=labels_rules_and_domains>Labels, rules and domains</h2>
+
+<p>SELinux depends upon <em>labels</em> to match actions and policies. Labels determine what is allowed. Sockets,
+files, and processes all have labels in SELinux. SELinux decisions are based
+fundamentally on labels assigned to these objects and the policy defining how
+they may interact. In SELinux, a label takes the form
+user:role:type:mls_level, where the type is the primary component of access
+decisions and may be modified by the other components that make up the label.
+Objects are mapped to classes, and the different kinds of access for each class
+are represented by permissions.</p>
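+
+<p>As an illustration (these particular labels come from the AOSP policy and are
+shown only as examples), a process and a file on an Android device might carry
+contexts such as:</p>
+
+<pre>
+u:r:untrusted_app:s0          # process (domain) label for a third-party app
+u:object_r:app_data_file:s0   # file label for that app's private data files
+</pre>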
+
+<p>The policy rules come in the form: allow <em>domains</em> <em>types</em>:<em>classes</em> <em>permissions</em>;, where:</p>
+
+<ul>
+  <li><em>Domain</em> - A label for the process or set of processes.
+  <li><em>Type</em> - A label for the object (e.g. file, socket) or set of objects.
+  <li><em>Class</em> - The kind of object (e.g. file, socket) being accessed.
+  <li><em>Permission</em> - The operation (e.g. read, write) being performed.
+</ul>
+
+<p>And so an example use of this would follow the structure:</p>
+<code>allow appdomain app_data_file:file rw_file_perms;</code>
+
+<p>This says an application is allowed to read and write files labeled
+app_data_file. Note that this rule relies upon macros defined in the
+global_macros file, and other helpful macros can also be found in the te_macros
+file. Macros are provided for common groupings of classes, permissions and
+rules, and should be used whenever possible to help reduce the likelihood of
+failures due to denials on related permissions. During compilation, these
+overrides are concatenated with the existing SELinux settings into a single
+security policy. These overrides add to the base security policy rather than
+subtract from existing settings.</p>
+
+<p>Use the syntax above to create the avc rules that comprise the essence of an
+SELinux policy. A rule takes the form:</p>
+<pre>
+&lt;rule variant&gt; &lt;source_type&gt; &lt;target_type&gt; : &lt;class&gt; &lt;permissions&gt;
+</pre>
+
+<p>The rule indicates what should happen when an object labeled with the <em>source_type </em>attempts an action corresponding to <em>permission </em>on an object of class <em>class </em>which has the <em>target_type </em>label.  The most common example of one of these rules is an allow rule, e.g.:</p>
+
+<pre>
+allow domain null_device:chr_file { open };
+</pre>
+
+
+<p>
+This rule allows a process with a <em>source_type</em> of ‘domain’ to take the action described by the <em>permission</em> ‘open’ on an object of <em>class</em> ‘chr_file’ that has the <em>target_type</em> label of ‘null_device.’ In practice, this rule may be extended to include other permissions:</p>
+
+<pre>
+allow domain null_device:chr_file { getattr open read ioctl lock append write };
+</pre>
+
+<p>When combined with the knowledge that ‘domain’ is a label for all processes and
+that null_device is the label for the ‘chr_file’ /dev/null, this rule basically
+permits reading and writing to <code>/dev/null</code>.</p>
+
+<p>A <em>domain</em> generally corresponds to a process and will have a label associated with it.</p>
+
+<p>For example, a typical Android app runs in its own process and has the
+untrusted_app label, which grants it certain restricted permissions.</p>
+
+<p>Platform apps built into the system run under a separate label and are granted
+a distinct set of permissions. System apps that are part of the core Android
+system run under the system_app label for yet another set of privileges.</p>
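+
+<p>The mapping from apps to these domains is handled by the seapp_contexts
+configuration. A couple of simplified entries (the actual AOSP file carries
+additional fields) look roughly like this:</p>
+
+<pre>
+user=system domain=system_app type=system_app_data_file
+user=_app domain=untrusted_app type=app_data_file
+</pre>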
+
+<p>These generic labels require further specification:</p>
+
+<ul>
+  <li> socket_device
+  <li> device
+  <li> block_device
+  <li> default_service
+  <li> system_data_type
+  <li> tmpfs
+</ul>
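+
+<p>For example, rather than leaving a new device node under the generic
+block_device label, a device-specific policy would typically declare its own
+type and label the node with it. A minimal sketch, using hypothetical file and
+type names, in the device sepolicy directory:</p>
+
+<pre>
+# device/manufacturer/device-name/sepolicy/device.te (hypothetical type name)
+type foo_block_device, dev_type;
+
+# device/manufacturer/device-name/sepolicy/file_contexts (hypothetical path)
+/dev/block/platform/soc\.0/by-name/foo    u:object_r:foo_block_device:s0
+</pre>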
diff --git a/src/devices/tech/security/selinux/customize.jd b/src/devices/tech/security/selinux/customize.jd
new file mode 100644
index 0000000..79ca5d6
--- /dev/null
+++ b/src/devices/tech/security/selinux/customize.jd
@@ -0,0 +1,274 @@
+page.title=Customizing SELinux
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Once you've integrated this base level of functionality and thoroughly analyzed
+the results, you may add your own policy settings to cover your customizations
+to the Android operating system. Of course, these policies must still meet the <a href="{@docRoot}compatibility/index.html">Android Compatibility program</a> requirements and not remove the default SELinux settings.</p>
+
+<p>Manufacturers should not remove existing security settings. Otherwise, they
+risk breaking the Android SELinux implementation and the applications it
+governs. This includes third-party applications that will likely need to be
+improved to be compliant and operational. Applications must require no
+modification to continue functioning on SELinux-enabled devices.</p>
+
+<p>When embarking upon customizing SELinux, manufacturers should remember to:</p>
+
+<ul>
+  <li>Write SELinux policy for all new daemons
+  <li>Use predefined domains whenever appropriate
+  <li>Assign a domain to any process spawned as an <code>init</code> service
+  <li>Become familiar with the macros before writing policy
+  <li>Submit changes to core policy to AOSP
+</ul>
+
+<p>And not to:</p>
+
+<ul>
+  <li>Create incompatible policy
+  <li>Allow end user policy customization
+  <li>Allow MDM policy customizations
+  <li>Scare users with policy violations
+  <li>Add backdoors
+</ul>
+
+<p>See the <em>Kernel Security Features</em> section of the <a href="{@docRoot}compatibility/android-cdd.pdf">Android Compatibility Definition document</a> for specific requirements.</p>
+
+<p>SELinux uses a whitelist approach, meaning all access must be explicitly
+allowed in policy in order to be granted. Since Android's default SELinux
+policy already supports the Android Open Source Project, OEMs are not required
+to modify SELinux settings in any way. If they do customize SELinux settings,
+they should take great care not to break existing applications. Here is how we
+recommend proceeding:</p>
+
+<ol>
+  <li>Use the <a href="https://android.googlesource.com/kernel/common/">latest Android kernel</a>.
+  <li>Adopt the <a href="http://en.wikipedia.org/wiki/Principle_of_least_privilege">principle of least privilege</a>.
+  <li>Address only your own additions to Android. The default policy works with the <a href="https://android.googlesource.com/">Android Open Source Project</a> codebase automatically.
+  <li>Compartmentalize software components into modules that conduct singular tasks.
+  <li>Create SELinux policies that isolate those tasks from unrelated functions.
+  <li>Put those policies in *.te files (the extension for SELinux policy source
+files) within the <code>/device/manufacturer/device-name/sepolicy</code> directory and use
+<code>BOARD_SEPOLICY</code> variables to include them in your build.
+  <li>Make new domains permissive initially. In Android 4.4 and earlier, this is done
+using a permissive declaration. In later versions of Android, per-domain
+permissive mode is specified using the <code>permissive_or_unconfined()</code> macro (see the sketch after this list).
+  <li>Analyze results and refine your domain definitions.
+  <li>Remove the permissive declaration when no further denials appear in userdebug
+builds.
+</ol>
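+
+<p>As a sketch of step 7, using a hypothetical daemon domain named foo, the two
+forms look like this:</p>
+
+<pre>
+# Android 4.4 and earlier: explicit permissive declaration
+type foo, domain;
+permissive foo;
+
+# Later Android versions: per-domain permissive via the macro
+type foo, domain;
+permissive_or_unconfined(foo)
+</pre>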
+
+<p>Once integrated, OEM Android development should include a step to ensure
+SELinux compatibility going forward. In an ideal software development process,
+SELinux policy changes only when the software model changes and not the actual
+implementation.</p>
+
+<p>As device manufacturers begin to customize SELinux, they should first audit
+their additions to Android. If they've added a component that conducts a new
+function, the manufacturers will need to ensure the component meets the
+security policy applied by Android, as well as any associated policy crafted by
+the OEM, before turning on enforcing mode.</p>
+
+<p>To prevent unnecessary issues, it is better to be overbroad and over-compatible
+than too restrictive and incompatible, which results in broken device
+functions. Conversely, if a manufacturer's changes will benefit others, it
+should supply the modifications to the default SELinux policy as a <a href="{@docRoot}source/submit-patches.html">patch</a>. If the patch is applied to the default security policy, the manufacturer will no longer need to make this change with each new Android release.</p>
+
+<h2 id=example_policy_statements>Example policy statements</h2>
+
+<p>First, note that SELinux policy source is written using the <a href="https://www.gnu.org/software/m4/manual/index.html">M4</a> macro language and therefore supports a variety of macros to save time.</p>
+
+<p>In the following example, all domains are granted access to read from and write to <code>/dev/null</code> and to read from <code>/dev/zero</code>.</p>
+
+<pre>
+# Allow read / write access to /dev/null
+allow domain null_device:chr_file { getattr open read ioctl lock append write };
+
+# Allow read-only access to /dev/zero
+allow domain zero_device:chr_file { getattr open read ioctl lock };
+</pre>
+
+
+<p>This same statement can be written with SELinux <code>*_file_perms</code> macros (shorthand):</p>
+
+<pre>
+# Allow read / write access to /dev/null
+allow domain null_device:chr_file rw_file_perms;
+
+# Allow read-only access to /dev/zero
+allow domain zero_device:chr_file r_file_perms;
+</pre>
+
+<h2 id=example_policy>Example policy</h2>
+
+<p>Here is a complete example policy for DHCP, which we examine below:</p>
+
+<pre>
+type dhcp, domain;
+permissive_or_unconfined(dhcp)
+type dhcp_exec, exec_type, file_type;
+type dhcp_data_file, file_type, data_file_type;
+
+init_daemon_domain(dhcp)
+net_domain(dhcp)
+
+allow dhcp self:capability { setgid setuid net_admin net_raw net_bind_service
+};
+allow dhcp self:packet_socket create_socket_perms;
+allow dhcp self:netlink_route_socket { create_socket_perms nlmsg_write };
+allow dhcp shell_exec:file rx_file_perms;
+allow dhcp system_file:file rx_file_perms;
+# For /proc/sys/net/ipv4/conf/*/promote_secondaries
+allow dhcp proc_net:file write;
+allow dhcp system_prop:property_service set ;
+unix_socket_connect(dhcp, property, init)
+
+type_transition dhcp system_data_file:{ dir file } dhcp_data_file;
+allow dhcp dhcp_data_file:dir create_dir_perms;
+allow dhcp dhcp_data_file:file create_file_perms;
+
+allow dhcp netd:fd use;
+allow dhcp netd:fifo_file rw_file_perms;
+allow dhcp netd:{ dgram_socket_class_set unix_stream_socket } { read write };
+allow dhcp netd:{ netlink_kobject_uevent_socket netlink_route_socket
+netlink_nflog_socket } { read write };
+</pre>
+
+<p>Let’s dissect the example:</p>
+
+<p>In the first line, the type declaration, the DHCP daemon inherits from the base
+security policy (<code>domain</code>). From the previous statement examples, we know DHCP can read from and write
+to <code>/dev/null.</code></p>
+
+<p>In the second line, DHCP is identified as an experimental domain (<code>permissive_or_unconfined</code>) with only minimal rules enforced.</p>
+
+<p>In the <code>init_daemon_domain(dhcp)</code> line, the policy states DHCP is spawned from <code>init</code> and is allowed to communicate with it.</p>
+
+<p>In the <code>net_domain(dhcp)</code> line, the policy allows DHCP to use common network functionality from the <code>net</code> domain such as reading and writing TCP packets, communicating over sockets, and conducting DNS requests.</p>
+
+<p>In the line <code>allow dhcp proc_net:file write;</code>, the policy states DHCP can write to specific files in <code>/proc</code>. This line demonstrates SELinux’s fine-grained file labeling. It uses the <code>proc_net</code> label to limit write access to only the files under <code>/proc/sys/net</code>.</p>
+
+<p>The final block of the example starting with <code>allow dhcp netd:fd use;</code> depicts how applications may be allowed to interact with one another. The
+policy says DHCP and netd may communicate with one another via file
+descriptors, FIFO files, datagram sockets, and UNIX stream sockets. DHCP may
+only read from and write to the datagram sockets and UNIX stream sockets; it may
+not create or open them.</p>
+
+<h2 id=available_controls>Available controls</h2>
+
+<table>
+ <tr>
+    <td>
+<p><strong>Class</strong></p>
+</td>
+    <td>
+<p><strong>Permissions</strong></p>
+</td>
+ </tr>
+ <tr>
+    <td>
+<p>file</p>
+</td>
+    <td>
+<pre>
+ioctl read write create getattr setattr lock relabelfrom relabelto append
+unlink link rename execute swapon quotaon mounton</pre>
+</td>
+ </tr>
+ <tr>
+ <td>
+<p>directory</p>
+</td>
+ <td>
+<pre>
+add_name remove_name reparent search rmdir open audit_access execmod</pre>
+</td>
+ </tr>
+ <tr>
+ <td>
+<p>socket</p>
+</td>
+ <td>
+<pre>
+ioctl read write create getattr setattr lock relabelfrom relabelto append bind
+connect listen accept getopt setopt shutdown recvfrom sendto recv_msg send_msg
+name_bind</pre>
+</td>
+ </tr>
+ <tr>
+ <td>
+<p>filesystem</p>
+</td>
+ <td>
+<pre>
+mount remount unmount getattr relabelfrom relabelto transition associate
+quotamod quotaget</pre>
+ </td>
+ </tr>
+ <tr>
+ <td>
+<p>process</p>
+ </td>
+ <td>
+<pre>
+fork transition sigchld sigkill sigstop signull signal ptrace getsched setsched
+getsession getpgid setpgid getcap setcap share getattr setexec setfscreate
+noatsecure siginh setrlimit rlimitinh dyntransition setcurrent execmem
+execstack execheap setkeycreate setsockcreate</pre>
+</td>
+ </tr>
+ <tr>
+ <td>
+<p>security</p>
+</td>
+ <td>
+<pre>
+compute_av compute_create compute_member check_context load_policy
+compute_relabel compute_user setenforce setbool setsecparam setcheckreqprot
+read_policy</pre>
+</td>
+ </tr>
+ <tr>
+ <td>
+<p>capability</p>
+</td>
+ <td>
+<pre>
+chown dac_override dac_read_search fowner fsetid kill setgid setuid setpcap
+linux_immutable net_bind_service net_broadcast net_admin net_raw ipc_lock
+ipc_owner sys_module sys_rawio sys_chroot sys_ptrace sys_pacct sys_admin
+sys_boot sys_nice sys_resource sys_time sys_tty_config mknod lease audit_write
+audit_control setfcap</pre>
+</td>
+ </tr>
+</table>
+
+<p>This table is not exhaustive; SELinux defines additional object classes and
+permissions beyond those listed here.</p>
diff --git a/src/devices/tech/security/selinux/implement.jd b/src/devices/tech/security/selinux/implement.jd
new file mode 100644
index 0000000..9e2e724
--- /dev/null
+++ b/src/devices/tech/security/selinux/implement.jd
@@ -0,0 +1,184 @@
+page.title=Implementing SELinux
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>SELinux is set up to default-deny, which means that every single access for
+which it has a hook in the kernel must be explicitly allowed by policy.  This
+means a policy file consists of a large amount of information regarding
+rules, types, classes, permissions, and more.  A full consideration of SELinux
+is out of the scope of this document, but an understanding of how to write
+policy rules is now essential when bringing up new Android devices. There is a
+great deal of information available regarding SELinux already. See <a
+href="{@docRoot}devices/tech/security/se-linux.html#supporting_documentation">Supporting
+documentation</a> for suggested resources.</p>
+
+<h2 id=summary_of_steps>Summary of steps</h2>
+
+<p>Here is a brief summary of the steps needed to implement SELinux on your
+Android device:</p>
+
+<ol>
+  <li>Add SELinux support in the kernel and configuration.
+  <li>Grant each service (process or daemon) started from <code>init</code> its own domain (see the sketch after this list).
+  <li>Identify these services by:
+  <ul>
+    <li>Reviewing the init file and finding all services.
+    <li>Examining warnings in <code>dmesg</code>.
+    <li>Searching (<code>grep</code>) through processes to see which run in the init domain.
+  </ul>
+  <li>Label all new processes, drivers, sockets, etc.
+All objects need to be labeled
+properly to ensure they interact correctly with the policies you apply. See the
+labels used in AOSP for examples to follow when creating label names.
+  <li>Institute security policies that fully cover all labels and restrict
+permissions to their absolute minimum.
+</ol>
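+
+<p>For step 2, the usual pattern is to declare a domain for the daemon, mark its
+executable, and let <code>init</code> transition into that domain when it starts
+the service. A minimal sketch with a hypothetical daemon named foo:</p>
+
+<pre>
+# init.device.rc entry that starts the daemon (hypothetical service)
+service foo /system/bin/foo
+    class main
+
+# device/manufacturer/device-name/sepolicy/foo.te (hypothetical names)
+type foo, domain;
+type foo_exec, exec_type, file_type;
+init_daemon_domain(foo)
+
+# device/manufacturer/device-name/sepolicy/file_contexts
+/system/bin/foo    u:object_r:foo_exec:s0
+</pre>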
+
+<p>Ideally, OEMs start with the policies in the AOSP and then build upon them for
+their own customizations.</p>
+
+<h2 id=key_files>Key files</h2>
+
+<p>SELinux for Android is accompanied by everything you need to enable SELinux
+now. You merely need to integrate the <a href="https://android.googlesource.com/kernel/common/">latest Android kernel</a> and then incorporate the files found in the <a href="https://android.googlesource.com/platform/external/sepolicy/">external/sepolicy</a> directory:</p>
+
+<p><a href="https://android.googlesource.com/kernel/common/">https://android.googlesource.com/kernel/common/ </a></p>
+
+<p><a href="https://android.googlesource.com/platform/external/sepolicy/">https://android.googlesource.com/platform/external/sepolicy/</a></p>
+
+<p>Those files when compiled comprise the SELinux kernel security policy and cover
+the upstream Android operating system. You should not need to modify the
+external/sepolicy files directly. Instead, add your own device-specific policy
+files within the /device/manufacturer/device-name/sepolicy directory.</p>
+
+<p>Here are the files you must create or edit in order to implement SELinux:</p>
+
+<ul>
+  <li><em>New SELinux policy source (*.te) files</em> - Located in the &lt;root&gt;/device/manufacturer/device-name/sepolicy directory. These files define domains and their labels. The new policy files get
+concatenated with the existing policy files during compilation into a single
+SELinux kernel policy file.
+<p class="caution"><strong>Important:</strong> Do not alter the app.te file provided by the Android Open Source Project.
+Doing so risks breaking all third-party applications.</p>
+  <li><em>Updated <code>BoardConfig.mk</code> makefile</em> - Located in the &lt;device-name&gt; directory containing the sepolicy subdirectory. It must be updated to reference the sepolicy subdirectory once created if it
+wasn’t in the initial implementation.
+  <li><em>Updated *_contexts files</em> - Located in the sepolicy subdirectory. These files assign labels and
+are maintained in userspace. As you create new policies, update these files to
+reference them. In order to apply new *_contexts, you must run <code>restorecon</code> on the file to be relabeled (see the sketch after this list).
+</ul>
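+
+<p>For instance, a new data directory owned by a hypothetical daemon could be
+covered by a file_contexts entry like the one below (the path and type names are
+illustrative only, and the foo_data_file type would be declared in a *.te file):</p>
+
+<pre>
+# device/manufacturer/device-name/sepolicy/file_contexts (hypothetical entry)
+/data/misc/foo(/.*)?    u:object_r:foo_data_file:s0
+</pre>
+
+<p>To relabel files that already exist on a development device, run, for example:</p>
+
+<pre>
+$ adb shell su -c restorecon /data/misc/foo
+</pre>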
+
+<p>Then just update your BoardConfig.mk makefile - located in the directory
+containing the sepolicy subdirectory - to reference the sepolicy subdirectory
+and each policy file once created, as shown below. The BOARD_SEPOLICY variables
+and their meanings are documented in the external/sepolicy/README file.</p>
+
+<pre>
+BOARD_SEPOLICY_DIRS += \
+        &lt;root>/device/manufacturer/device-name/sepolicy
+
+BOARD_SEPOLICY_UNION += \
+        genfs_contexts \
+        file_contexts \
+        sepolicy.te
+</pre>
+
+<p>After rebuilding your device, it is enabled with SELinux. You can now either
+customize your SELinux policies to accommodate your own additions to the
+Android operating system as described in <a
+href="customize.html">Customization</a> or verify your existing setup as
+covered in <a href="validate.html">Validation</a>.</p>
+
+<p>Once the new policy files and BoardConfig.mk updates are in place, the new
+policy settings are automatically built into the final kernel policy file.</p>
+
+<h2 id=use_cases>Use cases</h2>
+
+<p>Here are specific examples of exploits to consider when crafting your own
+software and associated SELinux policies:</p>
+
+<p><strong>Symlinks</strong> - Because symlinks appear as files, they are often read as if they were the files they point to. This can
+lead to exploits. For instance, some privileged components such as init change
+the permissions of certain files, sometimes to be excessively open.</p>
+
+<p>Attackers might then replace those files with symlinks to code they control,
+allowing the attacker to overwrite arbitrary files. But if you know your
+application will never traverse a symlink, you can prohibit it from doing so
+with SELinux.</p>
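+
+<p>As a sketch of such a restriction (the domain name below is hypothetical, and
+following a symlink maps to read access on the lnk_file class):</p>
+
+<pre>
+# Prevent the foo daemon from following symlinks in app data directories
+neverallow foo app_data_file:lnk_file read;
+</pre>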
+
+<p><strong>System files</strong> - Consider the class of system files that should only be modified by the
+system server. Still, since netd, init, and vold run as root, they can access
+those system files. So if netd became compromised, it could compromise those
+files and potentially the system server itself.</p>
+
+<p>With SELinux, you can identify those files as system server data files.
+Therefore, the only domain that has read/write access to them is system server.
+Even if netd became compromised, it could not switch domains to the system
+server domain and access those system files although it runs as root.</p>
+
+<p><strong>App data</strong> - Another example is the class of functions that must run as root but should
+not get to access app data. This is incredibly useful as wide-ranging
+assertions can be made, such as certain domains unrelated to application data
+being prohibited from accessing the internet.</p>
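+
+<p>A sketch of such assertions, again using a hypothetical root-level daemon
+domain named foo:</p>
+
+<pre>
+# Assert that the foo daemon can never read or write app data
+neverallow foo app_data_file:file { open read write };
+
+# Assert that it never creates network sockets
+neverallow foo self:{ tcp_socket udp_socket } create;
+</pre>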
+
+<p><strong>setattr</strong> - For commands such as chmod and chown, you could identify the set of files
+where the associated domain can conduct setattr. Anything outside of that could
+be prohibited from these changes, even by root. So an application might run
+chmod and chown against those labeled app_data_files but not shell_data_files
+or system_data_files.</p>
+
+<h2 id=steps_in_detail>Steps in detail</h2>
+
+<p>Here is a detailed view of how Android recommends you employ and customize
+SELinux to protect your devices:</p>
+
+<ol>
+  <li>Enable SELinux in the kernel:
+<code>CONFIG_SECURITY_SELINUX=y</code>
+  <li>Change the kernel_cmdline parameter so that:<br/>
+<code>BOARD_KERNEL_CMDLINE := androidboot.selinux=permissive</code>
+  <li>Boot up the system in permissive mode and see what denials are encountered on boot:<br/>
+<code>su -c dmesg | grep denied > ~/t.tmp</code><br/>
+<code>su -c dmesg | grep denied | audit2allow</code>
+  <li>Evaluate the output. See <a href="validate.html">Validation</a> for instructions and tools.
+  <li>Identify devices and other new files that need labeling.
+  <li>Use existing or new labels for your objects.
+Look at the *_contexts files to
+see how things were previously labeled and use knowledge of the label meanings
+to assign a new one. Ideally, this will be an existing label that fits into
+existing policy, but sometimes a new label will be needed, along with rules for
+access to that label.
+  <li>Identify domains/processes that should have their own security domains. A policy will likely need to be written for each of these from scratch. All services spawned from <code>init</code>, for instance, should have their own domain. The following commands help reveal those still running in the <code>init</code> domain (but all services need this treatment):<br/>
+<code>$ adb shell su -c ps -Z | grep init</code><br/>
+<code>$ adb shell su -c dmesg | grep 'avc: '</code>
+  <li>Review init.&lt;device&gt;.rc to identify any services that are without a domain type.
+These should
+be given domains early in order to avoid adding rules to init or otherwise
+confusing <code>init</code> accesses with ones that belong in the services’ own policies.
+  <li>Set up <code>BoardConfig.mk</code> to use <code>BOARD_SEPOLICY_UNION</code> and <code>BOARD_SEPOLICY_DIRS</code>. See
+the README in external/sepolicy for details on setting this up.
+  <li> Examine the init.&lt;device&gt;.rc file and make sure every use of “mount”
+corresponds to a properly labeled filesystem.
+  <li> Go through each denial and create SELinux policy to properly handle each. See
+the examples within <a href="customize.html">Customization</a>.
+</ol>
diff --git a/src/devices/tech/security/selinux/validate.jd b/src/devices/tech/security/selinux/validate.jd
new file mode 100644
index 0000000..2734665
--- /dev/null
+++ b/src/devices/tech/security/selinux/validate.jd
@@ -0,0 +1,146 @@
+page.title=Validating SELinux
+@jd:body
+
+<!--
+    Copyright 2014 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+<div id="qv-wrapper">
+  <div id="qv">
+    <h2>In this document</h2>
+    <ol id="auto-toc">
+    </ol>
+  </div>
+</div>
+
+<p>Android strongly encourages OEMs to test their SELinux implementations
+thoroughly. As manufacturers implement SELinux, they should apply the new
+policy to a test pool of devices first.</p>
+
+<p>Once applied, make sure SELinux is running in the correct mode on the device by
+issuing the command: <code>getenforce</code></p>
+
+<p>This will print the global SELinux mode: either Disabled, Enforcing, or
+Permissive. Please note, this command shows only the global SELinux mode. To
+determine the SELinux mode for each domain, you must examine the corresponding
+files or run the latest version of <code>sepolicy-analyze</code> with the appropriate (<code>-p</code>) flag, found in external/sepolicy/tools/.</p>
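+
+<p>For example, from a development host:</p>
+
+<pre>
+$ adb shell getenforce
+Enforcing
+</pre>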
+
+<h2 id=reading_denials>Reading denials</h2>
+
+<p>Then check for errors. Errors are routed as event logs to <code>dmesg</code> and <code>logcat</code> and are viewable locally on the device. Manufacturers should examine the
+SELinux output to <code>dmesg</code> on these devices and refine settings prior to public
+release in permissive mode and an eventual switch to enforcing mode. SELinux log
+messages contain "avc:" and so may easily be found with <code>grep</code>. It is
+possible to capture the ongoing denial logs by running <code>cat /proc/kmsg</code>
+or to capture denial logs from the previous boot by running <code>cat /proc/last_kmsg</code>.</p>
+
+<p>With this output, manufacturers can readily identify when system users or
+components are in violation of SELinux policy. Manufacturers can then repair
+this bad behavior, either by changes to the software, SELinux policy, or both.</p>
+
+<p>Specifically, these log messages indicate what processes would fail under
+enforcing mode and why. Here is an example:</p>
+
+<pre>
+denied  { connectto } for  pid=2671 comm="ping" path="/dev/socket/dnsproxyd"
+scontext=u:r:shell:s0 tcontext=u:r:netd:s0 tclass=unix_stream_socket
+</pre>
+
+<p>Interpret this output like so:</p>
+
+<ul>
+  <li> The <code>{ connectto }</code> above represents the action being taken. Together with the
+<code>tclass</code> at the end (<code>unix_stream_socket</code>), it tells you roughly what was being done
+to what. In this case, something was trying to connect to a unix stream socket.
+  <li> The <code>scontext (u:r:shell:s0)</code> tells you what context initiated the action. In
+this case this is something running as the shell.
+  <li> The <code>tcontext (u:r:netd:s0)</code> tells you the context of the action’s target. In
+this case, that’s a unix_stream_socket owned by <code>netd</code>.
+  <li> The <code>comm="ping"</code> at the top gives you an additional hint about what was being
+run at the time the denial was generated. In this case, it’s a pretty good hint.
+</ul>
+
+<p>And here is another example:</p>
+
+<pre>
+$ adb shell su -c dmesg | grep 'avc: '
+&lt;5> type=1400 audit: avc:  denied  { read write } for  pid=177
+comm="rmt_storage" name="mem" dev="tmpfs" ino=6004 scontext=u:r:rmt:s0
+tcontext=u:object_r:kmem_device:s0 tclass=chr_file
+</pre>
+
+
+<p>Here are the key elements from this denial:</p>
+
+<ul>
+  <li><em>Action</em> - The attempted action is highlighted in braces, <code>read write</code> or <code>setenforce</code>.
+  <li><em>Actor</em> - The <code>scontext</code> (source context) entry represents the actor, in this case the<code> rmt_storage</code> daemon.
+  <li><em>Object</em> - The <code>tcontext</code> (target context) entry represents the object being acted upon, in this case
+the kernel memory device (<code>kmem_device</code>).
+  <li><em>Object class</em> - The <code>tclass</code> (target class) entry indicates the kind of object being acted upon, in this
+case a <code>chr_file</code> (character device).
+</ul>
+
+<h2 id=switching_to_permissive>Switching to permissive</h2>
+
+<p class="caution"><strong>Important:</strong> Permissive mode is not supported on production devices. CTS tests confirm
+enforcing mode is enabled.</p>
+
+<p>To switch a device’s SELinux enforcement to globally permissive via ADB, issue
+the following as root:</p>
+
+<pre>
+$ adb shell su -c setenforce 0
+</pre>
+
+<p>Or set one of the following SELinux modes on the kernel command line (during early device bring-up):</p>
+
+<pre>
+androidboot.selinux=permissive
+androidboot.selinux=disabled
+androidboot.selinux=enforcing
+</pre>
+
+<h2 id=using_audit2allow>Using audit2allow</h2>
+
+<p>The <code>selinux/policycoreutils/audit2allow</code> tool takes <code>dmesg</code> denials and converts them into corresponding SELinux policy statements. As
+such, it can greatly speed SELinux development. To install it, run:</p>
+
+<pre>
+$ sudo apt-get install policycoreutils
+</pre>
+
+<p>To use it:</p>
+
+<pre>
+$ adb shell su -c dmesg | audit2allow
+</pre>
+
+<p>Nevertheless, care must be taken to examine each potential addition for
+overreaching permissions. For example, feeding audit2allow the <code>rmt_storage</code> denial shown earlier results in the following suggested SELinux policy
+statement:</p>
+
+<pre>
+#============= shell ==============
+allow shell kernel:security setenforce;
+#============= rmt ==============
+allow rmt kmem_device:chr_file { read write };
+</pre>
+
+
+<p>This would grant <code>rmt</code> the ability to write kernel memory, a glaring security hole. Often the <code>audit2allow</code> statements are only a starting point, after which changes to the source
+domain, the label of the target and the incorporation of proper macros may be
+required to arrive at a good policy. Sometimes the denial being examined should
+not result in any policy changes at all, but rather the offending application
+should be changed.</p>
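+
+<p>For instance, rather than granting rmt blanket access to kmem_device, a
+better-scoped change might give the device node the daemon actually needs its
+own label and allow only that access (a hypothetical sketch, not the actual
+fix):</p>
+
+<pre>
+# Label the specific device node rmt_storage needs (hypothetical type)
+type rmt_storage_device, dev_type;
+
+# Allow only that access, using the rw_file_perms macro from global_macros
+allow rmt rmt_storage_device:chr_file rw_file_perms;
+</pre>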
diff --git a/src/devices/testing_circuit.jd b/src/devices/testing_circuit.jd
index 3ad6575..7d672f3 100644
--- a/src/devices/testing_circuit.jd
+++ b/src/devices/testing_circuit.jd
@@ -1,6 +1,21 @@
 page.title=Testing Circuit
 @jd:body
 
+<!--
+    Copyright 2013 The Android Open Source Project
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
 <div id="qv-wrapper">
   <div id="qv">
     <h2>In this document</h2>