Merge "soc: qcom: spcom: Pass channel name as formatted string to device_create"
diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
new file mode 100644
index 0000000..330106b
--- /dev/null
+++ b/Documentation/block/inline-encryption.rst
@@ -0,0 +1,183 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Inline Encryption
+=================
+
+Objective
+=========
+
+We want to support inline encryption (IE) in the kernel.
+To allow for testing, we also want a crypto API fallback when actual
+IE hardware is absent. We also want IE to work with layered devices
+like dm and loopback (i.e. we want to be able to use the IE hardware
+of the underlying devices if present, or else fall back to crypto API
+en/decryption).
+
+
+Constraints and notes
+=====================
+
+- IE hardware has a limited number of "keyslots" that can be programmed
+  with an encryption context (key, algorithm, data unit size, etc.) at any time.
+  One can specify a keyslot in a data request made to the device, and the
+  device will en/decrypt the data using the encryption context programmed into
+  that specified keyslot. When possible, we want to make multiple requests with
+  the same encryption context share the same keyslot.
+
+- We need a way for filesystems to specify an encryption context to use for
+  en/decrypting a struct bio, and a device driver (like UFS) needs to be able
+  to use that encryption context when it processes the bio.
+
+- We need a way for device drivers to expose their capabilities in a unified
+  way to the upper layers.
+
+
+Design
+======
+
+We add a struct bio_crypt_ctx to struct bio that can represent an
+encryption context, because we need to be able to pass this encryption
+context from the FS layer to the device driver to act upon.
+
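+For orientation, the context might look roughly like the following
+sketch (``bc_ksm`` is referred to later in this document; the other
+field names are illustrative, not definitive)::
+
+    struct bio_crypt_ctx {
+            const u8 *bc_key;               /* raw encryption key */
+            u64 bc_dun;                     /* data unit number, used in IVs */
+            struct keyslot_manager *bc_ksm; /* KSM holding our keyslot */
+            int bc_keyslot;                 /* keyslot index, if programmed */
+    };
+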
+While IE hardware works on the notion of keyslots, the FS layer has no
+knowledge of keyslots - it simply wants to specify an encryption context to
+use while en/decrypting a bio.
+
+We introduce a keyslot manager (KSM) that handles the translation from
+encryption contexts specified by the FS to keyslots on the IE hardware.
+This KSM also serves as the way IE hardware can expose its capabilities to
+upper layers. The generic mode of operation is: each device driver that wants
+to support IE will construct a KSM and set it up in its struct request_queue.
+Upper layers that want to use IE on this device can then use this KSM in
+the device's struct request_queue to translate an encryption context into
+a keyslot. The presence of a KSM in the request queue indicates that the
+device supports IE.
+
+On the device driver end of the interface, the device driver needs to tell the
+KSM how to actually manipulate the IE hardware in the device to do things like
+programming a crypto key into a particular keyslot. All
+this is achieved through the :c:type:`struct keyslot_mgmt_ll_ops` that the
+device driver passes to the KSM when creating it.
+
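+As a sketch, such ops plausibly boil down to programming and evicting
+keys (the callback names and signatures here are assumptions for
+illustration, not the definitive interface)::
+
+    struct keyslot_mgmt_ll_ops {
+            /* Program an encryption key into the given keyslot. */
+            int (*keyslot_program)(void *ll_priv_data, const u8 *key,
+                                   unsigned int data_unit_size,
+                                   unsigned int slot);
+            /* Evict whatever key is in the given keyslot. */
+            int (*keyslot_evict)(void *ll_priv_data, unsigned int slot);
+    };
+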
+The KSM uses refcounts to track which keyslots are idle (either they have no
+encryption context programmed, or there are no in-flight struct bios
+referencing that keyslot). When a new encryption context needs a keyslot, it
+tries to find a keyslot that has already been programmed with the same
+encryption context, and if there is no such keyslot, it evicts the least
+recently used idle keyslot and programs the new encryption context into that
+one. If no idle keyslots are available, then the caller will sleep until there
+is at least one.
+
+
+Blk-crypto
+==========
+
+The above is sufficient for simple cases, but does not work if there is a
+need for a crypto API fallback, or if we want to use IE with layered
+devices. To these ends, we introduce blk-crypto. Blk-crypto allows us to
+present a unified view of encryption to the FS (so the FS only needs to specify
+an encryption context and not worry about keyslots at all), and blk-crypto
+can decide whether to delegate the en/decryption to IE hardware or to the
+crypto API. Blk-crypto maintains an internal KSM that serves as the crypto
+API fallback.
+
+Blk-crypto needs to ensure that the encryption context is programmed into the
+"correct" keyslot manager for IE. If a bio is submitted to a layered device
+that eventually passes the bio down to a device that really does support IE, we
+want the encryption context to be programmed into a keyslot for the KSM of the
+device with IE support. However, blk-crypto does not know a priori whether a
+particular device is the final device in the layering structure for a bio or
+not. So when a particular device does not support IE, it may still be the
+final destination device for the bio; in that case, if the bio requires
+encryption (i.e. the bio is doing a write operation), blk-crypto must fall
+back to the crypto API *before* sending the bio to the device.
+
+Blk-crypto ensures that:
+
+- The bio's encryption context is programmed into a keyslot in the KSM of the
+  request queue that the bio is being submitted to (or the crypto API fallback
+  KSM if the request queue doesn't have a KSM), and the ``bc_ksm``
+  in the ``bi_crypt_context`` is set to this KSM.
+
+- The bio has its own individual reference to the keyslot in this KSM.
+  Once the bio passes through blk-crypto, its encryption context is programmed
+  in some KSM. The "its own individual reference to the keyslot" ensures that
+  keyslots can be released by each bio independently of other bios while
+  ensuring that the bio has a valid reference to the keyslot when, for example, the
+  crypto API fallback KSM in blk-crypto performs crypto on the device's behalf.
+  The individual references are ensured by increasing the refcount for the
+  keyslot in the ``bc_ksm`` when a bio with a programmed encryption
+  context is cloned.
+
+
+What blk-crypto does on bio submission
+--------------------------------------
+
+**Case 1:** blk-crypto is given a bio with only an encryption context that hasn't
+been programmed into any keyslot in any KSM (e.g. a bio from the FS).
+  In this case, blk-crypto will program the encryption context into the KSM of the
+  request queue the bio is being submitted to (and if this KSM does not exist,
+  then it will program it into blk-crypto's internal KSM for crypto API
+  fallback). The KSM that this encryption context was programmed into is stored
+  as the ``bc_ksm`` in the bio's ``bi_crypt_context``.
+
+**Case 2:** blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in the *crypto API fallback* KSM.
+  In this case, blk-crypto does nothing; it treats the bio as not having
+  specified an encryption context. Note that we cannot do here what we will do
+  in Case 3 because we would have already encrypted the bio via the crypto API
+  by this point.
+
+**Case 3:** blk-crypto is given a bio whose encryption context has already been
+programmed into a keyslot in some KSM (that is *not* the crypto API fallback
+KSM).
+  In this case, blk-crypto first releases that keyslot from that KSM and then
+  treats the bio as in Case 1.
+
+This way, when a device driver is processing a bio, it can be sure that
+the bio's encryption context has been programmed into some KSM (either the
+device driver's request queue's KSM, or blk-crypto's crypto API fallback KSM).
+It then simply needs to check if the bio's ``bc_ksm`` is the device's
+request queue's KSM. If so, then it should proceed with IE. If not, it should
+simply do nothing with respect to crypto, because some other KSM (perhaps the
+blk-crypto crypto API fallback KSM) is handling the en/decryption.
+
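+In pseudo-C, the driver-side check might look like this (field and
+variable names are illustrative)::
+
+    if (bio->bi_crypt_context &&
+        bio->bi_crypt_context->bc_ksm == q->ksm) {
+            /* Use the programmed keyslot; the IE hardware will
+               en/decrypt the data. */
+    } else {
+            /* Some other KSM (e.g. the crypto API fallback KSM) is
+               handling the crypto; do nothing crypto-related. */
+    }
+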
+Blk-crypto will release the keyslot that is being held by the bio (and also
+decrypt the data if the bio is using the crypto API fallback KSM) once
+``bio_remaining_done`` returns true for the bio.
+
+
+Layered Devices
+===============
+
+Layered devices that wish to support IE need to create their own keyslot
+manager for their request queue, and expose whatever functionality they choose.
+When a layered device wants to pass a bio to another layer (either by
+resubmitting the same bio, or by submitting a clone), it doesn't need to do
+anything special because the bio (or the clone) will once again pass through
+blk-crypto, which will work as described in Case 3. If a layered device wants
+for some reason to do the IO by itself instead of passing it on to a child
+device, but has also chosen to expose IE capabilities by setting up a KSM in its
+request queue, it is then responsible for en/decrypting the data itself. In
+such cases, the device can choose to call the blk-crypto function
+``blk_crypto_fallback_to_kernel_crypto_api`` (TODO: Not yet implemented), which will
+cause the en/decryption to be done via the crypto API fallback.
+
+
+Future Optimizations for layered devices
+========================================
+
+Creating a keyslot manager for the layered device uses up memory for each
+keyslot, and in general, a layered device (like dm-linear) merely passes the
+request on to a "child" device, so the keyslots in the layered device itself
+might be completely unused.  We can instead define a new type of KSM: the
+"passthrough KSM", which layered devices can use to let blk-crypto know that
+this layered device *will* pass the bio to some child device (and hence
+through blk-crypto again, at which point blk-crypto can program the encryption
+context, instead of programming it into the layered device's KSM). Again, if
+the device "lies" and decides to do the IO itself instead of passing it on to
+a child device, it is responsible for doing the en/decryption (and can choose
+to call ``blk_crypto_fallback_to_kernel_crypto_api``). Another use case for the
+"passthrough KSM" is for IE devices that want to manage their own keyslots/do
+not have a limited number of keyslots.
diff --git a/Documentation/crypto/msm/msm_ice_driver.txt b/Documentation/crypto/msm/msm_ice_driver.txt
deleted file mode 100644
index 4d02c22..0000000
--- a/Documentation/crypto/msm/msm_ice_driver.txt
+++ /dev/null
@@ -1,235 +0,0 @@
-Introduction:
-=============
-Storage encryption has been one of the most required feature from security
-point of view. QTI based storage encryption solution uses general purpose
-crypto engine. While this kind of solution provide a decent amount of
-performance, it falls short as storage speed is improving significantly
-continuously. To overcome performance degradation, newer chips are going to
-have Inline Crypto Engine (ICE) embedded into storage device. ICE is supposed
-to meet the line speed of storage devices.
-
-Hardware Description
-====================
-ICE is a HW block that is embedded into storage device such as UFS/eMMC. By
-default, ICE works in bypass mode i.e. ICE HW does not perform any crypto
-operation on data to be processed by storage device. If required, ICE can be
-configured to perform crypto operation in one direction (i.e. either encryption
-or decryption) or in both direction(both encryption & decryption).
-
-When a switch between the operation modes(plain to crypto or crypto to plain)
-is desired for a particular partition, SW must complete all transactions for
-that particular partition before switching the crypto mode i.e. no crypto, one
-direction crypto or both direction crypto operation. Requests for other
-partitions are not impacted due to crypto mode switch.
-
-ICE HW currently supports AES128/256 bit ECB & XTS mode encryption algorithms.
-
-Keys for crypto operations are loaded from SW. Keys are stored in a lookup
-table(LUT) located inside ICE HW. Maximum of 32 keys can be loaded in ICE key
-LUT. A Key inside the LUT can be referred using a key index.
-
-SW Description
-==============
-ICE HW has catagorized ICE registers in 2 groups: those which can be accessed by
-only secure side i.e. TZ and those which can be accessed by non-secure side such
-as HLOS as well. This requires that ICE driver to be split in two pieces: one
-running from TZ space and another from HLOS space.
-
-ICE driver from TZ would configure keys as requested by HLOS side.
-
-ICE driver on HLOS side is responsible for initialization of ICE HW.
-
-SW Architecture Diagram
-=======================
-Following are all the components involved in the ICE driver for control path:
-
-+++++++++++++++++++++++++++++++++++++++++
-+               App layer               +
-+++++++++++++++++++++++++++++++++++++++++
-+             System layer              +
-+   ++++++++         +++++++            +
-+   + VOLD +         + PFM +            +
-+   ++++++++         +++++++            +
-+         ||         ||                 +
-+         ||         ||                 +
-+         \/         \/                 +
-+        ++++++++++++++                 +
-+        + LibQSEECom +                 +
-+        ++++++++++++++                 +
-+++++++++++++++++++++++++++++++++++++++++
-+             Kernel                    +       +++++++++++++++++
-+                                       +       +     KMS       +
-+  +++++++  +++++++++++  +++++++++++    +       +++++++++++++++++
-+  + ICE +  + Storage +  + QSEECom +    +       +   ICE Driver  +
-+++++++++++++++++++++++++++++++++++++++++ <===> +++++++++++++++++
-               ||                                    ||
-               ||                                    ||
-               \/                                    \/
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-+                      Storage Device                           +
-+                      ++++++++++++++                           +
-+                      +   ICE HW   +                           +
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-Use Cases:
-----------
-a) Device bootup
-ICE HW is detected during bootup time and corresponding probe function is
-called. ICE driver parses its data from device tree node. ICE HW and storage
-HW are tightly coupled. Storage device probing is dependent upon ICE device
-probing. ICE driver configures all the required registers to put the ICE HW
-in bypass mode.
-
-b) Configuring keys
-Currently, there are couple of use cases to configure the keys.
-
-1) Full Disk Encryption(FDE)
-System layer(VOLD) at invocation of apps layer would call libqseecom to create
-the encryption key. Libqseecom calls qseecom driver to communicate with KMS
-module on the secure side i.e. TZ. KMS would call ICE driver on the TZ side to
-create and set the keys in ICE HW. At the end of transaction, VOLD would have
-key index of key LUT where encryption key is present.
-
-2) Per File Encryption (PFE)
-Per File Manager(PFM) calls QSEECom api to create the key. PFM has a peer comp-
-onent(PFT) at kernel layer which gets the corresponding key index from PFM.
-
-Following are all the components involved in the ICE driver for data path:
-
-+++++++++++++++++++++++++++++++++++++++++
-+               App layer               +
-+++++++++++++++++++++++++++++++++++++++++
-+              VFS                      +
-+---------------------------------------+
-+         File System (EXT4)            +
-+---------------------------------------+
-+             Block Layer               +
-+ --------------------------------------+
-+                              +++++++  +
-+              dm-req-crypt => + PFT +  +
-+                              +++++++  +
-+                                       +
-+---------------------------------------+
-+    +++++++++++           +++++++      +
-+    + Storage +           + ICE +      +
-+++++++++++++++++++++++++++++++++++++++++
-+                  ||                   +
-+                  || (Storage Req with +
-+                  \/  ICE parameters ) +
-+++++++++++++++++++++++++++++++++++++++++
-+          Storage Device               +
-+          ++++++++++++++               +
-+          +   ICE HW   +               +
-+++++++++++++++++++++++++++++++++++++++++
-
-c) Data transaction
-Once the crypto key has been configured, VOLD/PFM creates device mapping for
-data partition. As part of device mapping VOLD passes key index, crypto
-algorithm, mode and key length to dm layer. In case of PFE, keys are provided
-by PFT as and when request is processed by dm-req-crypt. When any application
-needs to read/write data, it would go through DM layer which would add crypto
-information, provided by VOLD/PFT, to Request. For each Request, Storage driver
-would ask ICE driver to configure crypto part of request. ICE driver extracts
-crypto data from Request structure and provide it to storage driver which would
-finally dispatch request to storage device.
-
-d) Error Handling
-Due to issue # 1 mentioned in "Known Issues", ICE driver does not register for
-any interrupt. However, it enables sources of interrupt for ICE HW. After each
-data transaction, Storage driver receives transaction completion event. As part
-of event handling, storage driver calls  ICE driver to check if any of ICE
-interrupt status is set. If yes, storage driver returns error to upper layer.
-
-Error handling would be changed in future chips.
-
-Interfaces
-==========
-ICE driver exposes interfaces for storage driver to :
-1. Get the global instance of ICE driver
-2. Get the implemented interfaces of the particular ice instance
-3. Initialize the ICE HW
-4. Reset the ICE HW
-5. Resume/Suspend the ICE HW
-6. Get the Crypto configuration for the data request for storage
-7. Check if current data transaction has generated any interrupt
-
-Driver Parameters
-=================
-This driver is built and statically linked into the kernel; therefore,
-there are no module parameters supported by this driver.
-
-There are no kernel command line parameters supported by this driver.
-
-Power Management
-================
-ICE driver does not do power management on its own as it is part of storage
-hardware. Whenever storage driver receives request for power collapse/suspend
-resume, it would call ICE driver which exposes APIs for Storage HW. ICE HW
-during power collapse or reset, wipes crypto configuration data. When ICE
-driver receives request to resume, it would ask ICE driver on TZ side to
-restore the configuration. ICE driver does not do anything as part of power
-collapse or suspend event.
-
-Interface:
-==========
-ICE driver exposes following APIs for storage driver to use:
-
-int (*init)(struct platform_device *, void *, ice_success_cb, ice_error_cb);
-	-- This function is invoked by storage controller during initialization of
-	storage controller. Storage controller would provide success and error call
-	backs which would be invoked asynchronously once ICE HW init is done.
-
-int (*reset)(struct platform_device *);
-	-- ICE HW reset as part of storage controller reset. When storage controller
-	received reset command, it would call reset on ICE HW. As of now, ICE HW
-	does not need to do anything as part of reset.
-
-int (*resume)(struct platform_device *);
-	-- ICE HW while going to reset, wipes all crypto keys and other data from ICE
-	HW. ICE driver would reconfigure those data as part of resume operation.
-
-int (*suspend)(struct platform_device *);
-	-- This API would be called by storage driver when storage device is going to
-	suspend mode. As of today, ICE driver does not do anything to handle suspend.
-
-int (*config)(struct platform_device *, struct request* , struct ice_data_setting*);
-	-- Storage driver would call this interface to get all crypto data required to
-	perform crypto operation.
-
-int (*status)(struct platform_device *);
-	-- Storage driver would call this interface to check if previous data transfer
-	generated any error.
-
-Config options
-==============
-This driver is enabled by the kernel config option CONFIG_CRYPTO_DEV_MSM_ICE.
-
-Dependencies
-============
-ICE driver depends upon corresponding ICE driver on TZ side to function
-appropriately.
-
-Known Issues
-============
-1. ICE HW emits 0s even if it has generated an interrupt
-This issue has significant impact on how ICE interrupts are handled. Currently,
-ICE driver does not register for any of the ICE interrupts but enables the
-sources of interrupt. Once storage driver asks to check the status of interrupt,
-it reads and clears the clear status and provide read status to storage driver.
-This mechanism though not optimal but prevents filesystem curruption.
-This issue has been fixed in newer chips.
-
-2. ICE HW wipes all crypto data during power collapse
-This issue necessiate that ICE driver on TZ side store the crypto material
-which is not required in the case of general purpose crypto engine.
-This issue has been fixed in newer chips.
-
-Further Improvements
-====================
-Currently, Due to PFE use case, ICE driver is dependent upon dm-req-crypt to
-provide the keys as part of request structure. This couples ICE driver with
-dm-req-crypt based solution. It is under discussion to expose an IOCTL based
-and registration based interface APIs from ICE driver. ICE driver would use
-these two interfaces to find out if any key exists for current request. If
-yes, choose the right key index received from IOCTL or registration based
-APIs. If not, dont set any crypto parameter in the request.
diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 82efa41..4ed9d58 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -72,6 +72,9 @@
 fscrypt (and storage encryption in general) can only provide limited
 protection, if any at all, against online attacks.  In detail:
 
+Side-channel attacks
+~~~~~~~~~~~~~~~~~~~~
+
 fscrypt is only resistant to side-channel attacks, such as timing or
 electromagnetic attacks, to the extent that the underlying Linux
 Cryptographic API algorithms are.  If a vulnerable algorithm is used,
@@ -80,29 +83,90 @@
 Side channel attacks may also be mounted against applications
 consuming decrypted data.
 
-After an encryption key has been provided, fscrypt is not designed to
-hide the plaintext file contents or filenames from other users on the
-same system, regardless of the visibility of the keyring key.
-Instead, existing access control mechanisms such as file mode bits,
-POSIX ACLs, LSMs, or mount namespaces should be used for this purpose.
-Also note that as long as the encryption keys are *anywhere* in
-memory, an online attacker can necessarily compromise them by mounting
-a physical attack or by exploiting any kernel security vulnerability
-which provides an arbitrary memory read primitive.
+Unauthorized file access
+~~~~~~~~~~~~~~~~~~~~~~~~
 
-While it is ostensibly possible to "evict" keys from the system,
-recently accessed encrypted files will remain accessible at least
-until the filesystem is unmounted or the VFS caches are dropped, e.g.
-using ``echo 2 > /proc/sys/vm/drop_caches``.  Even after that, if the
-RAM is compromised before being powered off, it will likely still be
-possible to recover portions of the plaintext file contents, if not
-some of the encryption keys as well.  (Since Linux v4.12, all
-in-kernel keys related to fscrypt are sanitized before being freed.
-However, userspace would need to do its part as well.)
+After an encryption key has been added, fscrypt does not hide the
+plaintext file contents or filenames from other users on the same
+system.  Instead, existing access control mechanisms such as file mode
+bits, POSIX ACLs, LSMs, or namespaces should be used for this purpose.
 
-Currently, fscrypt does not prevent a user from maliciously providing
-an incorrect key for another user's existing encrypted files.  A
-protection against this is planned.
+(For the reasoning behind this, understand that while the key is
+added, the confidentiality of the data, from the perspective of the
+system itself, is *not* protected by the mathematical properties of
+encryption but rather only by the correctness of the kernel.
+Therefore, any encryption-specific access control checks would merely
+be enforced by kernel *code* and therefore would be largely redundant
+with the wide variety of access control mechanisms already available.)
+
+Kernel memory compromise
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+An attacker who compromises the system enough to read from arbitrary
+memory, e.g. by mounting a physical attack or by exploiting a kernel
+security vulnerability, can compromise all encryption keys that are
+currently in use.
+
+However, fscrypt allows encryption keys to be removed from the kernel,
+which may protect them from later compromise.
+
+In more detail, the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl (or the
+FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS ioctl) can wipe a master
+encryption key from kernel memory.  If it does so, it will also try to
+evict all cached inodes which had been "unlocked" using the key,
+thereby wiping their per-file keys and making them once again appear
+"locked", i.e. in ciphertext or encrypted form.
+
+However, these ioctls have some limitations:
+
+- Per-file keys for in-use files will *not* be removed or wiped.
+  Therefore, for maximum effect, userspace should close the relevant
+  encrypted files and directories before removing a master key, as
+  well as kill any processes whose working directory is in an affected
+  encrypted directory.
+
+- The kernel cannot magically wipe copies of the master key(s) that
+  userspace might have as well.  Therefore, userspace must wipe all
+  copies of the master key(s) it makes as well; normally this should
+  be done immediately after FS_IOC_ADD_ENCRYPTION_KEY, without waiting
+  for FS_IOC_REMOVE_ENCRYPTION_KEY.  Naturally, the same also applies
+  to all higher levels in the key hierarchy.  Userspace should also
+  follow other security precautions such as mlock()ing memory
+  containing keys to prevent it from being swapped out.
+
+- In general, decrypted contents and filenames in the kernel VFS
+  caches are freed but not wiped.  Therefore, portions thereof may be
+  recoverable from freed memory, even after the corresponding key(s)
+  were wiped.  To partially solve this, you can set
+  CONFIG_PAGE_POISONING=y in your kernel config and add page_poison=1
+  to your kernel command line.  However, this has a performance cost.
+
+- Secret keys might still exist in CPU registers, in crypto
+  accelerator hardware (if used by the crypto API to implement any of
+  the algorithms), or in other places not explicitly considered here.
+
+Limitations of v1 policies
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+v1 encryption policies have some weaknesses with respect to online
+attacks:
+
+- There is no verification that the provided master key is correct.
+  Therefore, a malicious user can temporarily associate the wrong key
+  with another user's encrypted files to which they have read-only
+  access.  Because of filesystem caching, the wrong key will then be
+  used by the other user's accesses to those files, even if the other
+  user has the correct key in their own keyring.  This violates the
+  meaning of "read-only access".
+
+- A compromise of a per-file key also compromises the master key from
+  which it was derived.
+
+- Non-root users cannot securely remove encryption keys.
+
+All the above problems are fixed with v2 encryption policies.  For
+this reason among others, it is recommended to use v2 encryption
+policies on all new encrypted directories.
 
 Key hierarchy
 =============
@@ -123,11 +187,52 @@
 of which protects any number of directory trees on any number of
 filesystems.
 
-Userspace should generate master keys either using a cryptographically
-secure random number generator, or by using a KDF (Key Derivation
-Function).  Note that whenever a KDF is used to "stretch" a
-lower-entropy secret such as a passphrase, it is critical that a KDF
-designed for this purpose be used, such as scrypt, PBKDF2, or Argon2.
+Master keys must be real cryptographic keys, i.e. indistinguishable
+from random bytestrings of the same length.  This implies that users
+**must not** directly use a password as a master key, zero-pad a
+shorter key, or repeat a shorter key.  Security cannot be guaranteed
+if userspace makes any such error, as the cryptographic proofs and
+analysis would no longer apply.
+
+Instead, users should generate master keys either using a
+cryptographically secure random number generator, or by using a KDF
+(Key Derivation Function).  The kernel does not do any key stretching;
+therefore, if userspace derives the key from a low-entropy secret such
+as a passphrase, it is critical that a KDF designed for this purpose
+be used, such as scrypt, PBKDF2, or Argon2.
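+
+For example, a 64-byte master key can be generated from the kernel's
+CSPRNG with the getrandom() system call::
+
+    #include <sys/random.h>
+
+    unsigned char master_key[64];
+
+    if (getrandom(master_key, sizeof(master_key), 0) != sizeof(master_key))
+            /* handle the error */;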
+
+Key derivation function
+-----------------------
+
+With one exception, fscrypt never uses the master key(s) for
+encryption directly.  Instead, they are only used as input to a KDF
+(Key Derivation Function) to derive the actual keys.
+
+The KDF used for a particular master key differs depending on whether
+the key is used for v1 encryption policies or for v2 encryption
+policies.  Users **must not** use the same key for both v1 and v2
+encryption policies.  (No real-world attack is currently known on this
+specific case of key reuse, but its security cannot be guaranteed
+since the cryptographic proofs and analysis would no longer apply.)
+
+For v1 encryption policies, the KDF only supports deriving per-file
+encryption keys.  It works by encrypting the master key with
+AES-128-ECB, using the file's 16-byte nonce as the AES key.  The
+resulting ciphertext is used as the derived key.  If the ciphertext is
+longer than needed, then it is truncated to the needed length.
+
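+For illustration, the same computation could be done in userspace with
+OpenSSL's low-level AES interface as follows (error handling omitted;
+``nonce``, ``master_key``, ``derived_key``, and ``derived_key_size``
+are assumed to be in scope, with the sizes multiples of 16 bytes)::
+
+    #include <openssl/aes.h>
+
+    AES_KEY aes;
+
+    /* derived_key = AES-128-ECB-Encrypt(key=nonce, data=master_key) */
+    AES_set_encrypt_key(nonce, 128, &aes);
+    for (int i = 0; i < derived_key_size; i += AES_BLOCK_SIZE)
+            AES_ecb_encrypt(master_key + i, derived_key + i, &aes,
+                            AES_ENCRYPT);
+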
+For v2 encryption policies, the KDF is HKDF-SHA512.  The master key is
+passed as the "input keying material", no salt is used, and a distinct
+"application-specific information string" is used for each distinct
+key to be derived.  For example, when a per-file encryption key is
+derived, the application-specific information string is the file's
+nonce prefixed with "fscrypt\\0" and a context byte.  Different
+context bytes are used for other types of derived keys.
+
+HKDF-SHA512 is preferred to the original AES-128-ECB based KDF because
+HKDF is more flexible, is nonreversible, and evenly distributes
+entropy from the master key.  HKDF is also standardized and widely
+used by other software, whereas the AES-128-ECB based KDF is ad-hoc.
 
 Per-file keys
 -------------
@@ -138,29 +243,9 @@
 cases, fscrypt does this by deriving per-file keys.  When a new
 encrypted inode (regular file, directory, or symlink) is created,
 fscrypt randomly generates a 16-byte nonce and stores it in the
-inode's encryption xattr.  Then, it uses a KDF (Key Derivation
-Function) to derive the file's key from the master key and nonce.
-
-The Adiantum encryption mode (see `Encryption modes and usage`_) is
-special, since it accepts longer IVs and is suitable for both contents
-and filenames encryption.  For it, a "direct key" option is offered
-where the file's nonce is included in the IVs and the master key is
-used for encryption directly.  This improves performance; however,
-users must not use the same master key for any other encryption mode.
-
-Below, the KDF and design considerations are described in more detail.
-
-The current KDF works by encrypting the master key with AES-128-ECB,
-using the file's nonce as the AES key.  The output is used as the
-derived key.  If the output is longer than needed, then it is
-truncated to the needed length.
-
-Note: this KDF meets the primary security requirement, which is to
-produce unique derived keys that preserve the entropy of the master
-key, assuming that the master key is already a good pseudorandom key.
-However, it is nonstandard and has some problems such as being
-reversible, so it is generally considered to be a mistake!  It may be
-replaced with HKDF or another more standard KDF in the future.
+inode's encryption xattr.  Then, it uses a KDF (as described in `Key
+derivation function`_) to derive the file's key from the master key
+and nonce.
 
 Key derivation was chosen over key wrapping because wrapped keys would
 require larger xattrs which would be less likely to fit in-line in the
@@ -171,10 +256,51 @@
 the master keys may be wrapped in userspace, e.g. as is done by the
 `fscrypt <https://github.com/google/fscrypt>`_ tool.
 
-Including the inode number in the IVs was considered.  However, it was
-rejected as it would have prevented ext4 filesystems from being
-resized, and by itself still wouldn't have been sufficient to prevent
-the same key from being directly reused for both XTS and CTS-CBC.
+DIRECT_KEY policies
+-------------------
+
+The Adiantum encryption mode (see `Encryption modes and usage`_) is
+suitable for both contents and filenames encryption, and it accepts
+long IVs --- long enough to hold both an 8-byte logical block number
+and a 16-byte per-file nonce.  Also, the overhead of each Adiantum key
+is greater than that of an AES-256-XTS key.
+
+Therefore, to improve performance and save memory, for Adiantum a
+"direct key" configuration is supported.  When the user has enabled
+this by setting FSCRYPT_POLICY_FLAG_DIRECT_KEY in the fscrypt policy,
+per-file keys are not used.  Instead, whenever any data (contents or
+filenames) is encrypted, the file's 16-byte nonce is included in the
+IV.  Moreover:
+
+- For v1 encryption policies, the encryption is done directly with the
+  master key.  Because of this, users **must not** use the same master
+  key for any other purpose, even for other v1 policies.
+
+- For v2 encryption policies, the encryption is done with a per-mode
+  key derived using the KDF.  Users may use the same master key for
+  other v2 encryption policies.
+
+IV_INO_LBLK_64 policies
+-----------------------
+
+When FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 is set in the fscrypt policy,
+the encryption keys are derived from the master key, encryption mode
+number, and filesystem UUID.  This normally results in all files
+protected by the same master key sharing a single contents encryption
+key and a single filenames encryption key.  To still encrypt different
+files' data differently, inode numbers are included in the IVs.
+Consequently, shrinking the filesystem may not be allowed.
+
+This format is optimized for use with inline encryption hardware
+compliant with the UFS or eMMC standards, which support only 64 IV
+bits per I/O request and may have only a small number of keyslots.
+
+Key identifiers
+---------------
+
+For master keys used for v2 encryption policies, a unique 16-byte "key
+identifier" is also derived using the KDF.  This value is stored in
+the clear, since it is needed to reliably identify the key itself.
 
 Encryption modes and usage
 ==========================
@@ -192,8 +318,9 @@
 
 AES-128-CBC was added only for low-powered embedded devices with
 crypto accelerators such as CAAM or CESA that do not support XTS.  To
-use AES-128-CBC, CONFIG_CRYPTO_SHA256 (or another SHA-256
-implementation) must be enabled so that ESSIV can be used.
+use AES-128-CBC, CONFIG_CRYPTO_ESSIV and CONFIG_CRYPTO_SHA256 (or
+another SHA-256 implementation) must be enabled so that ESSIV can be
+used.
 
 Adiantum is a (primarily) stream cipher-based mode that is fast even
 on CPUs without dedicated crypto instructions.  It's also a true
@@ -225,10 +352,17 @@
   is encrypted with AES-256 where the AES-256 key is the SHA-256 hash
   of the file's data encryption key.
 
-- In the "direct key" configuration (FS_POLICY_FLAG_DIRECT_KEY set in
-  the fscrypt_policy), the file's nonce is also appended to the IV.
+- With `DIRECT_KEY policies`_, the file's nonce is appended to the IV.
   Currently this is only allowed with the Adiantum encryption mode.
 
+- With `IV_INO_LBLK_64 policies`_, the logical block number is limited
+  to 32 bits and is placed in bits 0-31 of the IV.  The inode number
+  (which is also limited to 32 bits) is placed in bits 32-63.
+
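+As a sketch, the 64 usable IV bits are thus packed as follows
+(variable names illustrative)::
+
+    __le64 iv = cpu_to_le64(((__u64)inode_number << 32) |
+                            logical_block_number);
+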
+Note that because file logical block numbers are included in the IVs,
+filesystems must enforce that blocks are never shifted around within
+encrypted files, e.g. via "collapse range" or "insert range".
+
 Filenames encryption
 --------------------
 
@@ -237,10 +371,10 @@
 filenames of up to 255 bytes, the same IV is used for every filename
 in a directory.
 
-However, each encrypted directory still uses a unique key; or
-alternatively (for the "direct key" configuration) has the file's
-nonce included in the IVs.  Thus, IV reuse is limited to within a
-single directory.
+However, each encrypted directory still uses a unique key, or
+alternatively has the file's nonce (for `DIRECT_KEY policies`_) or
+inode number (for `IV_INO_LBLK_64 policies`_) included in the IVs.
+Thus, IV reuse is limited to within a single directory.
 
 With CTS-CBC, the IV reuse means that when the plaintext filenames
 share a common prefix at least as long as the cipher block size (16
@@ -269,49 +403,80 @@
 Setting an encryption policy
 ----------------------------
 
+FS_IOC_SET_ENCRYPTION_POLICY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 The FS_IOC_SET_ENCRYPTION_POLICY ioctl sets an encryption policy on an
 empty directory or verifies that a directory or regular file already
 has the specified encryption policy.  It takes in a pointer to a
-:c:type:`struct fscrypt_policy`, defined as follows::
+:c:type:`struct fscrypt_policy_v1` or a :c:type:`struct
+fscrypt_policy_v2`, defined as follows::
 
-    #define FS_KEY_DESCRIPTOR_SIZE  8
-
-    struct fscrypt_policy {
+    #define FSCRYPT_POLICY_V1               0
+    #define FSCRYPT_KEY_DESCRIPTOR_SIZE     8
+    struct fscrypt_policy_v1 {
             __u8 version;
             __u8 contents_encryption_mode;
             __u8 filenames_encryption_mode;
             __u8 flags;
-            __u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+            __u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+    };
+    #define fscrypt_policy  fscrypt_policy_v1
+
+    #define FSCRYPT_POLICY_V2               2
+    #define FSCRYPT_KEY_IDENTIFIER_SIZE     16
+    struct fscrypt_policy_v2 {
+            __u8 version;
+            __u8 contents_encryption_mode;
+            __u8 filenames_encryption_mode;
+            __u8 flags;
+            __u8 __reserved[4];
+            __u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
     };
 
 This structure must be initialized as follows:
 
-- ``version`` must be 0.
+- ``version`` must be FSCRYPT_POLICY_V1 (0) if the struct is
+  :c:type:`fscrypt_policy_v1` or FSCRYPT_POLICY_V2 (2) if the struct
+  is :c:type:`fscrypt_policy_v2`.  (Note: we refer to the original
+  policy version as "v1", though its version code is really 0.)  For
+  new encrypted directories, use v2 policies.
 
 - ``contents_encryption_mode`` and ``filenames_encryption_mode`` must
-  be set to constants from ``<linux/fs.h>`` which identify the
-  encryption modes to use.  If unsure, use
-  FS_ENCRYPTION_MODE_AES_256_XTS (1) for ``contents_encryption_mode``
-  and FS_ENCRYPTION_MODE_AES_256_CTS (4) for
-  ``filenames_encryption_mode``.
+  be set to constants from ``<linux/fscrypt.h>`` which identify the
+  encryption modes to use.  If unsure, use FSCRYPT_MODE_AES_256_XTS
+  (1) for ``contents_encryption_mode`` and FSCRYPT_MODE_AES_256_CTS
+  (4) for ``filenames_encryption_mode``.
 
-- ``flags`` must contain a value from ``<linux/fs.h>`` which
-  identifies the amount of NUL-padding to use when encrypting
-  filenames.  If unsure, use FS_POLICY_FLAGS_PAD_32 (0x3).
-  In addition, if the chosen encryption modes are both
-  FS_ENCRYPTION_MODE_ADIANTUM, this can contain
-  FS_POLICY_FLAG_DIRECT_KEY to specify that the master key should be
-  used directly, without key derivation.
+- ``flags`` contains optional flags from ``<linux/fscrypt.h>``:
 
-- ``master_key_descriptor`` specifies how to find the master key in
-  the keyring; see `Adding keys`_.  It is up to userspace to choose a
-  unique ``master_key_descriptor`` for each master key.  The e4crypt
-  and fscrypt tools use the first 8 bytes of
+  - FSCRYPT_POLICY_FLAGS_PAD_*: The amount of NUL padding to use when
+    encrypting filenames.  If unsure, use FSCRYPT_POLICY_FLAGS_PAD_32
+    (0x3).
+  - FSCRYPT_POLICY_FLAG_DIRECT_KEY: See `DIRECT_KEY policies`_.
+  - FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64: See `IV_INO_LBLK_64
+    policies`_.  This is mutually exclusive with DIRECT_KEY and is not
+    supported on v1 policies.
+
+- For v2 encryption policies, ``__reserved`` must be zeroed.
+
+- For v1 encryption policies, ``master_key_descriptor`` specifies how
+  to find the master key in a keyring; see `Adding keys`_.  It is up
+  to userspace to choose a unique ``master_key_descriptor`` for each
+  master key.  The e4crypt and fscrypt tools use the first 8 bytes of
   ``SHA-512(SHA-512(master_key))``, but this particular scheme is not
   required.  Also, the master key need not be in the keyring yet when
   FS_IOC_SET_ENCRYPTION_POLICY is executed.  However, it must be added
   before any files can be created in the encrypted directory.
 
+  For v2 encryption policies, ``master_key_descriptor`` has been
+  replaced with ``master_key_identifier``, which is longer and cannot
+  be arbitrarily chosen.  Instead, the key must first be added using
+  `FS_IOC_ADD_ENCRYPTION_KEY`_.  Then, the ``key_spec.u.identifier``
+  the kernel returned in the :c:type:`struct fscrypt_add_key_arg` must
+  be used as the ``master_key_identifier`` in the :c:type:`struct
+  fscrypt_policy_v2`.
+
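+For example, the following sketch sets a v2 encryption policy on an
+empty directory opened as ``dirfd``, using an ``identifier`` previously
+returned by `FS_IOC_ADD_ENCRYPTION_KEY`_ (error handling omitted)::
+
+    struct fscrypt_policy_v2 policy = { 0 };
+
+    policy.version = FSCRYPT_POLICY_V2;
+    policy.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
+    policy.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
+    policy.flags = FSCRYPT_POLICY_FLAGS_PAD_32;
+    memcpy(policy.master_key_identifier, identifier,
+           FSCRYPT_KEY_IDENTIFIER_SIZE);
+    ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
+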
 If the file is not yet encrypted, then FS_IOC_SET_ENCRYPTION_POLICY
 verifies that the file is an empty directory.  If so, the specified
 encryption policy is assigned to the directory, turning it into an
@@ -327,6 +492,15 @@
 returns 0.  Otherwise, it fails with EEXIST.  This works on both
 regular files and directories, including nonempty directories.
 
+When a v2 encryption policy is assigned to a directory, it is also
+required that either the specified key has been added by the current
+user or that the caller has CAP_FOWNER in the initial user namespace.
+(This is needed to prevent a user from encrypting their data with
+another user's key.)  The key must remain added while
+FS_IOC_SET_ENCRYPTION_POLICY is executing.  However, if the new
+encrypted directory does not need to be accessed immediately, then the
+key can be removed right away afterwards.
+
 Note that the ext4 filesystem does not allow the root directory to be
 encrypted, even if it is empty.  Users who want to encrypt an entire
 filesystem with one key should consider using dm-crypt instead.
@@ -339,7 +513,11 @@
 - ``EEXIST``: the file is already encrypted with an encryption policy
   different from the one specified
 - ``EINVAL``: an invalid encryption policy was specified (invalid
-  version, mode(s), or flags)
+  version, mode(s), or flags; or reserved bits were set)
+- ``ENOKEY``: a v2 encryption policy was specified, but the key with
+  the specified ``master_key_identifier`` has not been added, nor does
+  the process have the CAP_FOWNER capability in the initial user
+  namespace
 - ``ENOTDIR``: the file is unencrypted and is a regular file, not a
   directory
 - ``ENOTEMPTY``: the file is unencrypted and is a nonempty directory
@@ -358,25 +536,79 @@
 Getting an encryption policy
 ----------------------------
 
-The FS_IOC_GET_ENCRYPTION_POLICY ioctl retrieves the :c:type:`struct
-fscrypt_policy`, if any, for a directory or regular file.  See above
-for the struct definition.  No additional permissions are required
-beyond the ability to open the file.
+Two ioctls are available to get a file's encryption policy:
 
-FS_IOC_GET_ENCRYPTION_POLICY can fail with the following errors:
+- `FS_IOC_GET_ENCRYPTION_POLICY_EX`_
+- `FS_IOC_GET_ENCRYPTION_POLICY`_
+
+The extended (_EX) version of the ioctl is more general and should be
+used when possible.  However, on older kernels only the
+original ioctl is available.  Applications should try the extended
+version, and if it fails with ENOTTY fall back to the original
+version.
+
+FS_IOC_GET_ENCRYPTION_POLICY_EX
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_GET_ENCRYPTION_POLICY_EX ioctl retrieves the encryption
+policy, if any, for a directory or regular file.  No additional
+permissions are required beyond the ability to open the file.  It
+takes in a pointer to a :c:type:`struct fscrypt_get_policy_ex_arg`,
+defined as follows::
+
+    struct fscrypt_get_policy_ex_arg {
+            __u64 policy_size; /* input/output */
+            union {
+                    __u8 version;
+                    struct fscrypt_policy_v1 v1;
+                    struct fscrypt_policy_v2 v2;
+            } policy; /* output */
+    };
+
+The caller must initialize ``policy_size`` to the size available for
+the policy struct, i.e. ``sizeof(arg.policy)``.
+
+On success, the policy struct is returned in ``policy``, and its
+actual size is returned in ``policy_size``.  ``policy.version`` should
+be checked to determine the version of policy returned.  Note that the
+version code for the "v1" policy is actually 0 (FSCRYPT_POLICY_V1).
+
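+For example (error handling omitted)::
+
+    struct fscrypt_get_policy_ex_arg arg;
+
+    arg.policy_size = sizeof(arg.policy);
+    ioctl(fd, FS_IOC_GET_ENCRYPTION_POLICY_EX, &arg);
+    /* ... switch on arg.policy.version ... */
+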
+FS_IOC_GET_ENCRYPTION_POLICY_EX can fail with the following errors:
 
 - ``EINVAL``: the file is encrypted, but it uses an unrecognized
-  encryption context format
+  encryption policy version
 - ``ENODATA``: the file is not encrypted
-- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``ENOTTY``: this type of filesystem does not implement encryption,
+  or this kernel is too old to support FS_IOC_GET_ENCRYPTION_POLICY_EX
+  (try FS_IOC_GET_ENCRYPTION_POLICY instead)
 - ``EOPNOTSUPP``: the kernel was not configured with encryption
-  support for this filesystem
+  support for this filesystem, or the filesystem superblock has not
+  had encryption enabled on it
+- ``EOVERFLOW``: the file is encrypted and uses a recognized
+  encryption policy version, but the policy struct does not fit into
+  the provided buffer
 
 Note: if you only need to know whether a file is encrypted or not, on
 most filesystems it is also possible to use the FS_IOC_GETFLAGS ioctl
 and check for FS_ENCRYPT_FL, or to use the statx() system call and
 check for STATX_ATTR_ENCRYPTED in stx_attributes.
 
+FS_IOC_GET_ENCRYPTION_POLICY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_GET_ENCRYPTION_POLICY ioctl can also retrieve the
+encryption policy, if any, for a directory or regular file.  However,
+unlike `FS_IOC_GET_ENCRYPTION_POLICY_EX`_,
+FS_IOC_GET_ENCRYPTION_POLICY only supports the original policy
+version.  It takes in a pointer directly to a :c:type:`struct
+fscrypt_policy_v1` rather than a :c:type:`struct
+fscrypt_get_policy_ex_arg`.
+
+The error codes for FS_IOC_GET_ENCRYPTION_POLICY are the same as those
+for FS_IOC_GET_ENCRYPTION_POLICY_EX, except that
+FS_IOC_GET_ENCRYPTION_POLICY also returns ``EINVAL`` if the file is
+encrypted using a newer encryption policy version.
+
 Getting the per-filesystem salt
 -------------------------------
 
@@ -392,8 +624,144 @@
 Adding keys
 -----------
 
-To provide a master key, userspace must add it to an appropriate
-keyring using the add_key() system call (see:
+FS_IOC_ADD_ENCRYPTION_KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_ADD_ENCRYPTION_KEY ioctl adds a master encryption key to
+the filesystem, making all files on the filesystem which were
+encrypted using that key appear "unlocked", i.e. in plaintext form.
+It can be executed on any file or directory on the target filesystem,
+but using the filesystem's root directory is recommended.  It takes in
+a pointer to a :c:type:`struct fscrypt_add_key_arg`, defined as
+follows::
+
+    struct fscrypt_add_key_arg {
+            struct fscrypt_key_specifier key_spec;
+            __u32 raw_size;
+            __u32 key_id;
+            __u32 __reserved[8];
+            __u8 raw[];
+    };
+
+    #define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR        1
+    #define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER        2
+
+    struct fscrypt_key_specifier {
+            __u32 type;     /* one of FSCRYPT_KEY_SPEC_TYPE_* */
+            __u32 __reserved;
+            union {
+                    __u8 __reserved[32]; /* reserve some extra space */
+                    __u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+                    __u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+            } u;
+    };
+
+    struct fscrypt_provisioning_key_payload {
+            __u32 type;
+            __u32 __reserved;
+            __u8 raw[];
+    };
+
+:c:type:`struct fscrypt_add_key_arg` must be zeroed, then initialized
+as follows:
+
+- If the key is being added for use by v1 encryption policies, then
+  ``key_spec.type`` must contain FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR, and
+  ``key_spec.u.descriptor`` must contain the descriptor of the key
+  being added, corresponding to the value in the
+  ``master_key_descriptor`` field of :c:type:`struct
+  fscrypt_policy_v1`.  To add this type of key, the calling process
+  must have the CAP_SYS_ADMIN capability in the initial user
+  namespace.
+
+  Alternatively, if the key is being added for use by v2 encryption
+  policies, then ``key_spec.type`` must contain
+  FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER, and ``key_spec.u.identifier`` is
+  an *output* field which the kernel fills in with a cryptographic
+  hash of the key.  To add this type of key, the calling process does
+  not need any privileges.  However, the number of keys that can be
+  added is limited by the user's quota for the keyrings service (see
+  ``Documentation/security/keys/core.rst``).
+
+- ``raw_size`` must be the size of the ``raw`` key provided, in bytes.
+  Alternatively, if ``key_id`` is nonzero, this field must be 0, since
+  in that case the size is implied by the specified Linux keyring key.
+
+- ``key_id`` is 0 if the raw key is given directly in the ``raw``
+  field.  Otherwise ``key_id`` is the ID of a Linux keyring key of
+  type "fscrypt-provisioning" whose payload is a :c:type:`struct
+  fscrypt_provisioning_key_payload` whose ``raw`` field contains the
+  raw key and whose ``type`` field matches ``key_spec.type``.  Since
+  ``raw`` is variable-length, the total size of this key's payload
+  must be ``sizeof(struct fscrypt_provisioning_key_payload)`` plus the
+  raw key size.  The process must have Search permission on this key.
+
+  Most users should leave this 0 and specify the raw key directly.
+  The support for specifying a Linux keyring key is intended mainly to
+  allow re-adding keys after a filesystem is unmounted and re-mounted,
+  without having to store the raw keys in userspace memory.
+
+- ``raw`` is a variable-length field which must contain the actual
+  key, ``raw_size`` bytes long.  Alternatively, if ``key_id`` is
+  nonzero, then this field is unused.
+
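+For example, the following sketch adds a 64-byte raw key for use by v2
+encryption policies, with ``fd`` open on the filesystem's root
+directory (error handling omitted)::
+
+    struct fscrypt_add_key_arg *arg;
+
+    arg = calloc(1, sizeof(*arg) + 64);
+    arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+    arg->raw_size = 64;
+    memcpy(arg->raw, master_key, 64);
+    ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
+    /* arg->key_spec.u.identifier now holds the key identifier */
+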
+For v2 policy keys, the kernel keeps track of which user (identified
+by effective user ID) added the key, and only allows the key to be
+removed by that user --- or by "root", if they use
+`FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_.
+
+However, if another user has added the key, it may be desirable to
+prevent that other user from unexpectedly removing it.  Therefore,
+FS_IOC_ADD_ENCRYPTION_KEY may also be used to add a v2 policy key
+*again*, even if it's already added by other user(s).  In this case,
+FS_IOC_ADD_ENCRYPTION_KEY will just install a claim to the key for the
+current user, rather than actually add the key again (but the raw key
+must still be provided, as a proof of knowledge).
+
+FS_IOC_ADD_ENCRYPTION_KEY returns 0 if either the key or a claim to
+the key was added or already exists.
+
+FS_IOC_ADD_ENCRYPTION_KEY can fail with the following errors:
+
+- ``EACCES``: FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR was specified, but the
+  caller does not have the CAP_SYS_ADMIN capability in the initial
+  user namespace; or the raw key was specified by Linux key ID but the
+  process lacks Search permission on the key.
+- ``EDQUOT``: the key quota for this user would be exceeded by adding
+  the key
+- ``EINVAL``: invalid key size or key specifier type, or reserved bits
+  were set
+- ``EKEYREJECTED``: the raw key was specified by Linux key ID, but the
+  key has the wrong type
+- ``ENOKEY``: the raw key was specified by Linux key ID, but no key
+  exists with that ID
+- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``EOPNOTSUPP``: the kernel was not configured with encryption
+  support for this filesystem, or the filesystem superblock has not
+  had encryption enabled on it
+
+Legacy method
+~~~~~~~~~~~~~
+
+For v1 encryption policies, a master encryption key can also be
+provided by adding it to a process-subscribed keyring, e.g. to a
+session keyring, or to a user keyring if the user keyring is linked
+into the session keyring.
+
+This method is deprecated (and not supported for v2 encryption
+policies) for several reasons.  First, it cannot be used in
+combination with FS_IOC_REMOVE_ENCRYPTION_KEY (see `Removing keys`_),
+so for removing a key a workaround such as keyctl_unlink() in
+combination with ``sync; echo 2 > /proc/sys/vm/drop_caches`` would
+have to be used.  Second, it doesn't match the fact that the
+locked/unlocked status of encrypted files (i.e. whether they appear to
+be in plaintext form or in ciphertext form) is global.  This mismatch
+has caused much confusion as well as real problems when processes
+running under different UIDs, such as a ``sudo`` command, need to
+access encrypted files.
+
+Nevertheless, to add a key to one of the process-subscribed keyrings,
+the add_key() system call can be used (see:
 ``Documentation/security/keys/core.rst``).  The key type must be
 "logon"; keys of this type are kept in kernel memory and cannot be
 read back by userspace.  The key description must be "fscrypt:"
@@ -401,12 +769,12 @@
 ``master_key_descriptor`` that was set in the encryption policy.  The
 key payload must conform to the following structure::
 
-    #define FS_MAX_KEY_SIZE 64
+    #define FSCRYPT_MAX_KEY_SIZE            64
 
     struct fscrypt_key {
-            u32 mode;
-            u8 raw[FS_MAX_KEY_SIZE];
-            u32 size;
+            __u32 mode;
+            __u8 raw[FSCRYPT_MAX_KEY_SIZE];
+            __u32 size;
     };
 
 ``mode`` is ignored; just set it to 0.  The actual key is provided in
@@ -418,26 +786,194 @@
 filesystem-specific prefixes are deprecated and should not be used in
 new programs.
 
-There are several different types of keyrings in which encryption keys
-may be placed, such as a session keyring, a user session keyring, or a
-user keyring.  Each key must be placed in a keyring that is "attached"
-to all processes that might need to access files encrypted with it, in
-the sense that request_key() will find the key.  Generally, if only
-processes belonging to a specific user need to access a given
-encrypted directory and no session keyring has been installed, then
-that directory's key should be placed in that user's user session
-keyring or user keyring.  Otherwise, a session keyring should be
-installed if needed, and the key should be linked into that session
-keyring, or in a keyring linked into that session keyring.
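+For example, the following sketch adds such a key to the session
+keyring with add_key(2) via libkeyutils (the descriptor shown is just
+an illustration; error handling omitted)::
+
+    #include <keyutils.h>
+
+    struct fscrypt_key payload = { 0 };
+
+    payload.size = 64;
+    memcpy(payload.raw, master_key, 64);
+    add_key("logon", "fscrypt:0123456789abcdef", &payload,
+            sizeof(payload), KEY_SPEC_SESSION_KEYRING);
+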
+Removing keys
+-------------
 
-Note: introducing the complex visibility semantics of keyrings here
-was arguably a mistake --- especially given that by design, after any
-process successfully opens an encrypted file (thereby setting up the
-per-file key), possessing the keyring key is not actually required for
-any process to read/write the file until its in-memory inode is
-evicted.  In the future there probably should be a way to provide keys
-directly to the filesystem instead, which would make the intended
-semantics clearer.
+Two ioctls are available for removing a key that was added by
+`FS_IOC_ADD_ENCRYPTION_KEY`_:
+
+- `FS_IOC_REMOVE_ENCRYPTION_KEY`_
+- `FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_
+
+These two ioctls differ only in cases where v2 policy keys are added
+or removed by non-root users.
+
+These ioctls don't work on keys that were added via the legacy
+process-subscribed keyrings mechanism.
+
+Before using these ioctls, read the `Kernel memory compromise`_
+section for a discussion of the security goals and limitations of
+these ioctls.
+
+FS_IOC_REMOVE_ENCRYPTION_KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_REMOVE_ENCRYPTION_KEY ioctl removes a claim to a master
+encryption key from the filesystem, and possibly removes the key
+itself.  It can be executed on any file or directory on the target
+filesystem, but using the filesystem's root directory is recommended.
+It takes in a pointer to a :c:type:`struct fscrypt_remove_key_arg`,
+defined as follows::
+
+    struct fscrypt_remove_key_arg {
+            struct fscrypt_key_specifier key_spec;
+    #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY      0x00000001
+    #define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS     0x00000002
+            __u32 removal_status_flags;     /* output */
+            __u32 __reserved[5];
+    };
+
+This structure must be zeroed, then initialized as follows:
+
+- The key to remove is specified by ``key_spec``:
+
+    - To remove a key used by v1 encryption policies, set
+      ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR and fill
+      in ``key_spec.u.descriptor``.  To remove this type of key, the
+      calling process must have the CAP_SYS_ADMIN capability in the
+      initial user namespace.
+
+    - To remove a key used by v2 encryption policies, set
+      ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER and fill
+      in ``key_spec.u.identifier``.
+
+For v2 policy keys, this ioctl is usable by non-root users.  However,
+to make this possible, it actually just removes the current user's
+claim to the key, undoing a single call to FS_IOC_ADD_ENCRYPTION_KEY.
+Only after all claims are removed is the key really removed.
+
+For example, if FS_IOC_ADD_ENCRYPTION_KEY was called with uid 1000,
+then the key will be "claimed" by uid 1000, and
+FS_IOC_REMOVE_ENCRYPTION_KEY will only succeed as uid 1000.  Or, if
+both uids 1000 and 2000 added the key, then for each uid
+FS_IOC_REMOVE_ENCRYPTION_KEY will only remove their own claim.  Only
+once *both* are removed is the key really removed.  (Think of it like
+unlinking a file that may have hard links.)
+
+If FS_IOC_REMOVE_ENCRYPTION_KEY really removes the key, it will also
+try to "lock" all files that had been unlocked with the key.  It won't
+lock files that are still in-use, so this ioctl is expected to be used
+in cooperation with userspace ensuring that none of the files are
+still open.  However, if necessary, this ioctl can be executed again
+later to retry locking any remaining files.
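+
+As a minimal sketch (assuming a v2 policy key whose 16-byte identifier
+is already in hand, and the ``<linux/fscrypt.h>`` definitions), the
+removal might look like::
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/fscrypt.h>
+
+    /* mnt_fd is an open fd on the filesystem's root directory. */
+    static int remove_fscrypt_key(int mnt_fd, const __u8 identifier[16])
+    {
+            struct fscrypt_remove_key_arg arg;
+
+            memset(&arg, 0, sizeof(arg));
+            arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+            memcpy(arg.key_spec.u.identifier, identifier, 16);
+
+            if (ioctl(mnt_fd, FS_IOC_REMOVE_ENCRYPTION_KEY, &arg) != 0)
+                    return -1;
+            /* arg.removal_status_flags now holds the informational
+             * FILES_BUSY / OTHER_USERS flags described below. */
+            return 0;
+    }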
+
+FS_IOC_REMOVE_ENCRYPTION_KEY returns 0 if either the key was removed
+(but may still have files remaining to be locked), the user's claim to
+the key was removed, or the key was already removed but had files
+remaining to be locked so the ioctl retried locking them.  In any
+of these cases, ``removal_status_flags`` is filled in with the
+following informational status flags:
+
+- ``FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY``: set if some file(s)
+  are still in-use.  Not guaranteed to be set in the case where only
+  the user's claim to the key was removed.
+- ``FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS``: set if only the
+  user's claim to the key was removed, not the key itself.
+
+FS_IOC_REMOVE_ENCRYPTION_KEY can fail with the following errors:
+
+- ``EACCES``: The FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR key specifier type
+  was specified, but the caller does not have the CAP_SYS_ADMIN
+  capability in the initial user namespace
+- ``EINVAL``: invalid key specifier type, or reserved bits were set
+- ``ENOKEY``: the key object was not found at all, i.e. it was never
+  added in the first place or was already fully removed, including all
+  files locked; or, the user does not have a claim to the key (but
+  someone else does).
+- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``EOPNOTSUPP``: the kernel was not configured with encryption
+  support for this filesystem, or the filesystem superblock has not
+  had encryption enabled on it
+
+FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS is exactly the same as
+`FS_IOC_REMOVE_ENCRYPTION_KEY`_, except that for v2 policy keys, the
+ALL_USERS version of the ioctl will remove all users' claims to the
+key, not just the current user's.  I.e., the key itself will always be
+removed, no matter how many users have added it.  This difference is
+only meaningful if non-root users are adding and removing keys.
+
+Because of this, FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS also requires
+"root", namely the CAP_SYS_ADMIN capability in the initial user
+namespace.  Otherwise it will fail with EACCES.
+
+Getting key status
+------------------
+
+FS_IOC_GET_ENCRYPTION_KEY_STATUS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The FS_IOC_GET_ENCRYPTION_KEY_STATUS ioctl retrieves the status of a
+master encryption key.  It can be executed on any file or directory on
+the target filesystem, but using the filesystem's root directory is
+recommended.  It takes in a pointer to a :c:type:`struct
+fscrypt_get_key_status_arg`, defined as follows::
+
+    struct fscrypt_get_key_status_arg {
+            /* input */
+            struct fscrypt_key_specifier key_spec;
+            __u32 __reserved[6];
+
+            /* output */
+    #define FSCRYPT_KEY_STATUS_ABSENT               1
+    #define FSCRYPT_KEY_STATUS_PRESENT              2
+    #define FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED 3
+            __u32 status;
+    #define FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF   0x00000001
+            __u32 status_flags;
+            __u32 user_count;
+            __u32 __out_reserved[13];
+    };
+
+The caller must zero all input fields, then fill in ``key_spec``:
+
+    - To get the status of a key for v1 encryption policies, set
+      ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR and fill
+      in ``key_spec.u.descriptor``.
+
+    - To get the status of a key for v2 encryption policies, set
+      ``key_spec.type`` to FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER and fill
+      in ``key_spec.u.identifier``.
+
+On success, 0 is returned and the kernel fills in the output fields:
+
+- ``status`` indicates whether the key is absent, present, or
+  incompletely removed.  Incompletely removed means that the master
+  secret has been removed, but some files are still in use; i.e.,
+  `FS_IOC_REMOVE_ENCRYPTION_KEY`_ returned 0 but set the informational
+  status flag FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY.
+
+- ``status_flags`` can contain the following flags:
+
+    - ``FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF`` indicates that the key
+      has been added by the current user.  This is only set for keys
+      identified by ``identifier`` rather than by ``descriptor``.
+
+- ``user_count`` specifies the number of users who have added the key.
+  This is only set for keys identified by ``identifier`` rather than
+  by ``descriptor``.
+
+FS_IOC_GET_ENCRYPTION_KEY_STATUS can fail with the following errors:
+
+- ``EINVAL``: invalid key specifier type, or reserved bits were set
+- ``ENOTTY``: this type of filesystem does not implement encryption
+- ``EOPNOTSUPP``: the kernel was not configured with encryption
+  support for this filesystem, or the filesystem superblock has not
+  had encryption enabled on it
+
+Among other use cases, FS_IOC_GET_ENCRYPTION_KEY_STATUS can be useful
+for determining whether the key for a given encrypted directory needs
+to be added before prompting the user for the passphrase needed to
+derive the key.
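+
+A rough sketch of that check (reusing the assumptions above: a v2 key
+identifier in hand and the ``<linux/fscrypt.h>`` definitions) might
+look like::
+
+    #include <stdbool.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/fscrypt.h>
+
+    /* Returns true if the key named by 'identifier' is absent and so
+     * still needs to be added before files can be unlocked. */
+    static bool fscrypt_key_needs_add(int mnt_fd, const __u8 identifier[16])
+    {
+            struct fscrypt_get_key_status_arg arg;
+
+            memset(&arg, 0, sizeof(arg));
+            arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+            memcpy(arg.key_spec.u.identifier, identifier, 16);
+
+            if (ioctl(mnt_fd, FS_IOC_GET_ENCRYPTION_KEY_STATUS, &arg) != 0)
+                    return false;   /* see the error list above */
+            return arg.status == FSCRYPT_KEY_STATUS_ABSENT;
+    }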
+
+FS_IOC_GET_ENCRYPTION_KEY_STATUS can only get the status of keys in
+the filesystem-level keyring, i.e. the keyring managed by
+`FS_IOC_ADD_ENCRYPTION_KEY`_ and `FS_IOC_REMOVE_ENCRYPTION_KEY`_.  It
+cannot get the status of a key that has only been added for use by v1
+encryption policies using the legacy mechanism involving
+process-subscribed keyrings.
 
 Access semantics
 ================
@@ -500,7 +1036,7 @@
 
 Some filesystem operations may be performed on encrypted regular
 files, directories, and symlinks even before their encryption key has
-been provided:
+been added, or after their encryption key has been removed:
 
 - File metadata may be read, e.g. using stat().
 
@@ -565,33 +1101,44 @@
 ------------------
 
 An encryption policy is represented on-disk by a :c:type:`struct
-fscrypt_context`.  It is up to individual filesystems to decide where
-to store it, but normally it would be stored in a hidden extended
-attribute.  It should *not* be exposed by the xattr-related system
-calls such as getxattr() and setxattr() because of the special
-semantics of the encryption xattr.  (In particular, there would be
-much confusion if an encryption policy were to be added to or removed
-from anything other than an empty directory.)  The struct is defined
-as follows::
+fscrypt_context_v1` or a :c:type:`struct fscrypt_context_v2`.  It is
+up to individual filesystems to decide where to store it, but normally
+it would be stored in a hidden extended attribute.  It should *not* be
+exposed by the xattr-related system calls such as getxattr() and
+setxattr() because of the special semantics of the encryption xattr.
+(In particular, there would be much confusion if an encryption policy
+were to be added to or removed from anything other than an empty
+directory.)  These structs are defined as follows::
 
-    #define FS_KEY_DESCRIPTOR_SIZE  8
     #define FS_KEY_DERIVATION_NONCE_SIZE 16
 
-    struct fscrypt_context {
-            u8 format;
+    #define FSCRYPT_KEY_DESCRIPTOR_SIZE  8
+    struct fscrypt_context_v1 {
+            u8 version;
             u8 contents_encryption_mode;
             u8 filenames_encryption_mode;
             u8 flags;
-            u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+            u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
             u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
     };
 
-Note that :c:type:`struct fscrypt_context` contains the same
-information as :c:type:`struct fscrypt_policy` (see `Setting an
-encryption policy`_), except that :c:type:`struct fscrypt_context`
-also contains a nonce.  The nonce is randomly generated by the kernel
-and is used to derive the inode's encryption key as described in
-`Per-file keys`_.
+    #define FSCRYPT_KEY_IDENTIFIER_SIZE  16
+    struct fscrypt_context_v2 {
+            u8 version;
+            u8 contents_encryption_mode;
+            u8 filenames_encryption_mode;
+            u8 flags;
+            u8 __reserved[4];
+            u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+            u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
+    };
+
+The context structs contain the same information as the corresponding
+policy structs (see `Setting an encryption policy`_), except that the
+context structs also contain a nonce.  The nonce is randomly generated
+by the kernel and is used as KDF input or as a tweak to cause
+different files to be encrypted differently; see `Per-file keys`_ and
+`DIRECT_KEY policies`_.
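+
+Since both context structs begin with the same ``version`` byte, a
+reader can dispatch on that byte.  A minimal sketch (the union below is
+an in-kernel convenience assumed here, not part of the on-disk format)::
+
+    union fscrypt_context {
+            u8 version;     /* 1 for v1 contexts, 2 for v2 contexts */
+            struct fscrypt_context_v1 v1;
+            struct fscrypt_context_v2 v2;
+    };
+
+    static int fscrypt_context_size(const union fscrypt_context *ctx)
+    {
+            switch (ctx->version) {
+            case 1: return sizeof(ctx->v1);
+            case 2: return sizeof(ctx->v2);
+            default: return -EINVAL;  /* unrecognized version */
+            }
+    }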
 
 Data path changes
 -----------------
diff --git a/MAINTAINERS b/MAINTAINERS
index 14930c2..7bd11ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6013,6 +6013,7 @@
 S:	Supported
 F:	fs/crypto/
 F:	include/linux/fscrypt*.h
+F:	include/uapi/linux/fscrypt.h
 F:	Documentation/filesystems/fscrypt.rst
 
 FSI-ATTACHED I2C DRIVER
diff --git a/Makefile b/Makefile
index d72b773..cb967bd 100644
--- a/Makefile
+++ b/Makefile
@@ -503,6 +503,7 @@
 CLANG_FLAGS	+= $(call cc-option, -Wno-misleading-indentation)
 CLANG_FLAGS	+= $(call cc-option, -Wno-bool-operation)
 CLANG_FLAGS	+= -Werror=unknown-warning-option
+CLANG_FLAGS	+= $(call cc-option, -Wno-unsequenced)
 KBUILD_CFLAGS	+= $(CLANG_FLAGS)
 KBUILD_AFLAGS	+= $(CLANG_FLAGS)
 export CLANG_FLAGS
diff --git a/arch/arm/configs/vendor/bengal-perf_defconfig b/arch/arm/configs/vendor/bengal-perf_defconfig
index c1b5e1b..dc26d22 100644
--- a/arch/arm/configs/vendor/bengal-perf_defconfig
+++ b/arch/arm/configs/vendor/bengal-perf_defconfig
@@ -82,6 +82,7 @@
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_DEADLINE is not set
 CONFIG_CFQ_GROUP_IOSCHED=y
@@ -268,14 +269,15 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
 CONFIG_DM_UEVENT=y
 CONFIG_DM_VERITY=y
 CONFIG_DM_VERITY_FEC=y
+CONFIG_DM_ANDROID_VERITY_AT_MOST_ONCE_DEFAULT_ENABLED=y
 CONFIG_NETDEVICES=y
 CONFIG_BONDING=y
 CONFIG_DUMMY=y
@@ -436,8 +438,9 @@
 CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_CLASS_FLASH=y
@@ -472,6 +475,9 @@
 CONFIG_SM_GPUCC_BENGAL=y
 CONFIG_SM_DISPCC_BENGAL=y
 CONFIG_SM_DEBUGCC_BENGAL=y
+CONFIG_QM_DISPCC_SCUBA=y
+CONFIG_QM_GPUCC_SCUBA=y
+CONFIG_QM_DEBUGCC_SCUBA=y
 CONFIG_HWSPINLOCK=y
 CONFIG_HWSPINLOCK_QCOM=y
 CONFIG_MAILBOX=y
@@ -528,6 +534,8 @@
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
 CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
 CONFIG_ICNSS=y
 CONFIG_ICNSS_QMI=y
 CONFIG_DEVFREQ_GOV_PASSIVE=y
@@ -557,6 +565,7 @@
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -570,7 +579,6 @@
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_LSM_MMAP_MIN_ADDR=4096
@@ -588,7 +596,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_INFO=y
 CONFIG_FRAME_WARN=2048
diff --git a/arch/arm/configs/vendor/bengal_defconfig b/arch/arm/configs/vendor/bengal_defconfig
index c588b8d..61d4f2d 100644
--- a/arch/arm/configs/vendor/bengal_defconfig
+++ b/arch/arm/configs/vendor/bengal_defconfig
@@ -87,6 +87,7 @@
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_DEADLINE is not set
 CONFIG_CFQ_GROUP_IOSCHED=y
@@ -283,14 +284,15 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
 CONFIG_DM_UEVENT=y
 CONFIG_DM_VERITY=y
 CONFIG_DM_VERITY_FEC=y
+CONFIG_DM_ANDROID_VERITY_AT_MOST_ONCE_DEFAULT_ENABLED=y
 CONFIG_NETDEVICES=y
 CONFIG_BONDING=y
 CONFIG_DUMMY=y
@@ -471,8 +473,9 @@
 CONFIG_MMC_IPC_LOGGING=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_CLASS_FLASH=y
@@ -509,6 +512,9 @@
 CONFIG_SM_GPUCC_BENGAL=y
 CONFIG_SM_DISPCC_BENGAL=y
 CONFIG_SM_DEBUGCC_BENGAL=y
+CONFIG_QM_DISPCC_SCUBA=y
+CONFIG_QM_GPUCC_SCUBA=y
+CONFIG_QM_DEBUGCC_SCUBA=y
 CONFIG_HWSPINLOCK=y
 CONFIG_HWSPINLOCK_QCOM=y
 CONFIG_MAILBOX=y
@@ -573,6 +579,8 @@
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
 CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
 CONFIG_ICNSS=y
 CONFIG_ICNSS_DEBUG=y
 CONFIG_ICNSS_QMI=y
@@ -604,6 +612,7 @@
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_F2FS_CHECK_FS=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -619,7 +628,6 @@
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ASCII=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_LSM_MMAP_MIN_ADDR=4096
@@ -637,7 +645,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_XZ_DEC=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
diff --git a/arch/arm/mach-qcom/Kconfig b/arch/arm/mach-qcom/Kconfig
index c108485..01c7488 100644
--- a/arch/arm/mach-qcom/Kconfig
+++ b/arch/arm/mach-qcom/Kconfig
@@ -42,6 +42,44 @@
 	select CLKSRC_OF
 	select COMMON_CLK
 
+config ARCH_SDM660
+	bool "Enable Support for Qualcomm Technologies, Inc. SDM660"
+	select CLKDEV_LOOKUP
+	select HAVE_CLK
+	select HAVE_CLK_PREPARE
+	select PM_OPP
+	select SOC_BUS
+	select MSM_IRQ
+	select THERMAL_WRITABLE_TRIPS
+	select ARM_GIC_V3
+	select ARM_AMBA
+	select SPARSE_IRQ
+	select MULTI_IRQ_HANDLER
+	select HAVE_ARM_ARCH_TIMER
+	select MAY_HAVE_SPARSE_IRQ
+	select COMMON_CLK
+	select COMMON_CLK_QCOM
+	select QCOM_GDSC
+	select PINCTRL_MSM_TLMM
+	select USE_PINCTRL_IRQ
+	select MSM_PM if PM
+	select QMI_ENCDEC
+	select CPU_FREQ
+	select CPU_FREQ_MSM
+	select PM_DEVFREQ
+	select MSM_DEVFREQ_DEVBW
+	select DEVFREQ_SIMPLE_DEV
+	select DEVFREQ_GOV_MSM_BW_HWMON
+	select MSM_BIMC_BWMON
+	select MSM_QDSP6V2_CODECS
+	select MSM_AUDIO_QDSP6V2 if SND_SOC
+	select MSM_RPM_SMD
+	select GENERIC_IRQ_MIGRATION
+	select MSM_JTAGV8 if CORESIGHT_ETMV4
+	help
+	  This enables support for the SDM660 chipset. If you do not
+	  wish to build a kernel that runs on this chipset, say 'N' here.
+
 config ARCH_BENGAL
 	bool "Enable Support for Qualcomm Technologies, Inc. BENGAL"
 	select COMMON_CLK_QCOM
diff --git a/arch/arm/mach-qcom/Makefile b/arch/arm/mach-qcom/Makefile
index f6658a2..621362a 100644
--- a/arch/arm/mach-qcom/Makefile
+++ b/arch/arm/mach-qcom/Makefile
@@ -2,3 +2,4 @@
 obj-$(CONFIG_SMP)	+= platsmp.o
 obj-$(CONFIG_ARCH_BENGAL) += board-bengal.o
 obj-$(CONFIG_ARCH_SCUBA) += board-scuba.o
+obj-$(CONFIG_ARCH_SDM660) += board-660.o
diff --git a/arch/arm/mach-qcom/board-660.c b/arch/arm/mach-qcom/board-660.c
new file mode 100644
index 0000000..f616baa
--- /dev/null
+++ b/arch/arm/mach-qcom/board-660.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2016, 2019-2020, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <asm/mach/arch.h>
+#include "board-dt.h"
+
+static const char *const sdm660_dt_match[] __initconst = {
+	"qcom,sdm660",
+	"qcom,sda660",
+	NULL
+};
+
+static void __init sdm660_init(void)
+{
+	board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(SDM660_DT,
+	"Qualcomm Technologies, Inc. SDM 660 (Flattened Device Tree)")
+	.init_machine = sdm660_init,
+	.dt_compat = sdm660_dt_match,
+MACHINE_END
+
+static const char *const sdm630_dt_match[] __initconst = {
+	"qcom,sdm630",
+	"qcom,sda630",
+	NULL
+};
+
+static void __init sdm630_init(void)
+{
+	board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(SDM630_DT,
+	"Qualcomm Technologies, Inc. SDM 630 (Flattened Device Tree)")
+	.init_machine = sdm630_init,
+	.dt_compat = sdm630_dt_match,
+MACHINE_END
+
+static const char *const sdm658_dt_match[] __initconst = {
+	"qcom,sdm658",
+	"qcom,sda658",
+	NULL
+};
+
+static void __init sdm658_init(void)
+{
+	board_dt_populate(NULL);
+}
+
+DT_MACHINE_START(SDM658_DT,
+	"Qualcomm Technologies, Inc. SDM 658 (Flattened Device Tree)")
+	.init_machine = sdm658_init,
+	.dt_compat = sdm658_dt_match,
+MACHINE_END
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 339eb17..587e2eb 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -18,6 +18,7 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/pgtable.h>
 #include <asm/memory.h>
+#include <asm/cache.h>
 
 #include "proc-macros.S"
 
@@ -548,10 +549,10 @@
 ENDPROC(__v7_setup)
 
 	.bss
-	.align	2
+	.align	L1_CACHE_SHIFT
 __v7_setup_stack:
 	.space	4 * 7				@ 7 registers
-
+	.align	L1_CACHE_SHIFT
 	__INITDATA
 
 	.weak cpu_v7_bugs_init
diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
index 7ae180d..7793552 100644
--- a/arch/arm64/Kconfig.platforms
+++ b/arch/arm64/Kconfig.platforms
@@ -190,6 +190,16 @@
 	  This enables support for the SCUBA chipset. If you do not
 	  wish to build a kernel that runs on this chipset, say 'N' here.
 
+config ARCH_SDM660
+	bool "Enable Support for Qualcomm Technologies, Inc. SDM660"
+	depends on ARCH_QCOM
+	select COMMON_CLK
+	select COMMON_CLK_QCOM
+	select QCOM_GDSC
+	help
+	  This enables support for the SDM660 chipset. If you do not
+	  wish to build a kernel that runs on this chipset, say 'N' here.
+
 config ARCH_REALTEK
 	bool "Realtek Platforms"
 	help
diff --git a/arch/arm64/configs/gki_defconfig b/arch/arm64/configs/gki_defconfig
index 62b98ef..b4b41a3 100644
--- a/arch/arm64/configs/gki_defconfig
+++ b/arch/arm64/configs/gki_defconfig
@@ -81,6 +81,7 @@
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_GKI_HACKS_TO_FIX=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=m
@@ -222,6 +223,7 @@
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
+CONFIG_SCSI_UFS_CRYPTO=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
@@ -392,6 +394,7 @@
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_FS_VERITY=y
 CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
 # CONFIG_DNOTIFY is not set
diff --git a/arch/arm64/configs/vendor/bengal-perf_defconfig b/arch/arm64/configs/vendor/bengal-perf_defconfig
index e9c3a94..3592739 100644
--- a/arch/arm64/configs/vendor/bengal-perf_defconfig
+++ b/arch/arm64/configs/vendor/bengal-perf_defconfig
@@ -97,6 +97,7 @@
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_DEADLINE is not set
 CONFIG_CFQ_GROUP_IOSCHED=y
@@ -288,7 +289,8 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
@@ -466,8 +468,9 @@
 CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_CLASS_FLASH=y
@@ -565,6 +568,9 @@
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
 CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
+CONFIG_QTI_HW_KEY_MANAGER=y
 CONFIG_ICNSS=y
 CONFIG_ICNSS_QMI=y
 CONFIG_DEVFREQ_GOV_PASSIVE=y
@@ -596,6 +602,7 @@
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -610,7 +617,6 @@
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -627,7 +633,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_STACK_HASH_ORDER_SHIFT=12
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_INFO=y
diff --git a/arch/arm64/configs/vendor/bengal_defconfig b/arch/arm64/configs/vendor/bengal_defconfig
index aa57061..4467b21 100644
--- a/arch/arm64/configs/vendor/bengal_defconfig
+++ b/arch/arm64/configs/vendor/bengal_defconfig
@@ -102,6 +102,7 @@
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_DEADLINE is not set
 CONFIG_CFQ_GROUP_IOSCHED=y
@@ -298,8 +299,9 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
 CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
@@ -479,8 +481,9 @@
 CONFIG_MMC_IPC_LOGGING=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_CLASS_FLASH=y
@@ -589,6 +592,9 @@
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
 CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
+CONFIG_QTI_HW_KEY_MANAGER=y
 CONFIG_ICNSS=y
 CONFIG_ICNSS_DEBUG=y
 CONFIG_ICNSS_QMI=y
@@ -622,6 +628,7 @@
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_F2FS_CHECK_FS=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -636,7 +643,6 @@
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -653,7 +659,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_CONSOLE_UNHASHED_POINTERS=y
diff --git a/arch/arm64/configs/vendor/kona-iot-perf_defconfig b/arch/arm64/configs/vendor/kona-iot-perf_defconfig
index 6f2c09d..1b4e76f 100644
--- a/arch/arm64/configs/vendor/kona-iot-perf_defconfig
+++ b/arch/arm64/configs/vendor/kona-iot-perf_defconfig
@@ -294,11 +294,9 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
 CONFIG_DM_SNAPSHOT=y
 CONFIG_DM_UEVENT=y
 CONFIG_DM_VERITY=y
@@ -424,8 +422,7 @@
 CONFIG_MSM_GLOBAL_SYNX=y
 CONFIG_DVB_MPQ=m
 CONFIG_DVB_MPQ_DEMUX=m
-CONFIG_DVB_MPQ_TSPP1=y
-CONFIG_TSPP=m
+CONFIG_DVB_MPQ_SW=y
 CONFIG_VIDEO_V4L2_VIDEOBUF2_CORE=y
 CONFIG_I2C_RTC6226_QCA=y
 CONFIG_DRM=y
@@ -667,7 +664,6 @@
 CONFIG_SDCARD_FS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -685,7 +681,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_FS=y
diff --git a/arch/arm64/configs/vendor/kona-iot_defconfig b/arch/arm64/configs/vendor/kona-iot_defconfig
index b3d2663..dbce12e 100644
--- a/arch/arm64/configs/vendor/kona-iot_defconfig
+++ b/arch/arm64/configs/vendor/kona-iot_defconfig
@@ -308,12 +308,10 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
 CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
-CONFIG_DM_DEFAULT_KEY=y
 CONFIG_DM_SNAPSHOT=y
 CONFIG_DM_UEVENT=y
 CONFIG_DM_VERITY=y
@@ -440,8 +438,7 @@
 CONFIG_MSM_GLOBAL_SYNX=y
 CONFIG_DVB_MPQ=m
 CONFIG_DVB_MPQ_DEMUX=m
-CONFIG_DVB_MPQ_TSPP1=y
-CONFIG_TSPP=m
+CONFIG_DVB_MPQ_SW=y
 CONFIG_VIDEO_V4L2_VIDEOBUF2_CORE=y
 CONFIG_I2C_RTC6226_QCA=y
 CONFIG_DRM=y
@@ -701,7 +698,6 @@
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -720,7 +716,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_XZ_DEC=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
diff --git a/arch/arm64/configs/vendor/kona-perf_defconfig b/arch/arm64/configs/vendor/kona-perf_defconfig
index fbe075e..74b58921 100644
--- a/arch/arm64/configs/vendor/kona-perf_defconfig
+++ b/arch/arm64/configs/vendor/kona-perf_defconfig
@@ -97,6 +97,7 @@
 CONFIG_MODULE_SIG=y
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_CFQ_GROUP_IOSCHED=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
@@ -297,10 +298,10 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
-CONFIG_DM_CRYPT=y
 CONFIG_DM_DEFAULT_KEY=y
 CONFIG_DM_SNAPSHOT=y
 CONFIG_DM_UEVENT=y
@@ -504,6 +505,8 @@
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_QTI_TRI_LED=y
@@ -616,6 +619,8 @@
 CONFIG_QMP_DEBUGFS_CLIENT=y
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
 CONFIG_DEVFREQ_GOV_PASSIVE=y
 CONFIG_QCOM_BIMC_BWMON=y
 CONFIG_ARM_MEMLAT_MON=y
@@ -639,6 +644,7 @@
 CONFIG_RAS=y
 CONFIG_ANDROID=y
 CONFIG_ANDROID_BINDER_IPC=y
+# CONFIG_NVMEM_SYSFS is not set
 CONFIG_QCOM_QFPROM=y
 CONFIG_NVMEM_SPMI_SDAM=y
 CONFIG_SLIMBUS_MSM_NGD=y
@@ -651,9 +657,10 @@
 CONFIG_QCOM_KGSL=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_SECURITY=y
+CONFIG_EXT4_ENCRYPTION=y
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
-CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -668,7 +675,6 @@
 CONFIG_SDCARD_FS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -686,7 +692,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_FS=y
diff --git a/arch/arm64/configs/vendor/kona_defconfig b/arch/arm64/configs/vendor/kona_defconfig
index 1020da6..46b77b9 100644
--- a/arch/arm64/configs/vendor/kona_defconfig
+++ b/arch/arm64/configs/vendor/kona_defconfig
@@ -100,6 +100,7 @@
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_DEADLINE is not set
 CONFIG_CFQ_GROUP_IOSCHED=y
@@ -311,11 +312,11 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
 CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
-CONFIG_DM_CRYPT=y
 CONFIG_DM_DEFAULT_KEY=y
 CONFIG_DM_SNAPSHOT=y
 CONFIG_DM_UEVENT=y
@@ -522,6 +523,8 @@
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_QTI_TRI_LED=y
@@ -646,6 +649,8 @@
 CONFIG_QMP_DEBUGFS_CLIENT=y
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
 CONFIG_DEVFREQ_GOV_PASSIVE=y
 CONFIG_QCOM_BIMC_BWMON=y
 CONFIG_ARM_MEMLAT_MON=y
@@ -670,6 +675,7 @@
 CONFIG_RAS=y
 CONFIG_ANDROID=y
 CONFIG_ANDROID_BINDER_IPC=y
+# CONFIG_NVMEM_SYSFS is not set
 CONFIG_QCOM_QFPROM=y
 CONFIG_NVMEM_SPMI_SDAM=y
 CONFIG_SLIMBUS_MSM_NGD=y
@@ -687,6 +693,7 @@
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -703,7 +710,6 @@
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -722,7 +728,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_XZ_DEC=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
diff --git a/arch/arm64/configs/vendor/lito-perf_defconfig b/arch/arm64/configs/vendor/lito-perf_defconfig
index 142e9ae..7548051 100644
--- a/arch/arm64/configs/vendor/lito-perf_defconfig
+++ b/arch/arm64/configs/vendor/lito-perf_defconfig
@@ -96,6 +96,7 @@
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_DEADLINE is not set
 CONFIG_CFQ_GROUP_IOSCHED=y
@@ -290,7 +291,8 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
@@ -488,8 +490,9 @@
 CONFIG_MMC_TEST=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_QTI_TRI_LED=y
@@ -604,6 +607,8 @@
 CONFIG_QMP_DEBUGFS_CLIENT=y
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
 CONFIG_ICNSS=y
 CONFIG_ICNSS_QMI=y
 CONFIG_DEVFREQ_GOV_PASSIVE=y
@@ -636,6 +641,7 @@
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -651,7 +657,6 @@
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -667,7 +672,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_STACK_HASH_ORDER_SHIFT=12
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_INFO=y
diff --git a/arch/arm64/configs/vendor/lito_defconfig b/arch/arm64/configs/vendor/lito_defconfig
index 9b4fe41..9c80d86 100644
--- a/arch/arm64/configs/vendor/lito_defconfig
+++ b/arch/arm64/configs/vendor/lito_defconfig
@@ -98,6 +98,7 @@
 CONFIG_MODULE_SIG_FORCE=y
 CONFIG_MODULE_SIG_SHA512=y
 # CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_IOSCHED_DEADLINE is not set
 CONFIG_CFQ_GROUP_IOSCHED=y
@@ -296,8 +297,9 @@
 CONFIG_SCSI_UFSHCD=y
 CONFIG_SCSI_UFSHCD_PLATFORM=y
 CONFIG_SCSI_UFS_QCOM=y
-CONFIG_SCSI_UFS_QCOM_ICE=y
 CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
 CONFIG_MD=y
 CONFIG_BLK_DEV_DM=y
 CONFIG_DM_CRYPT=y
@@ -497,8 +499,9 @@
 CONFIG_MMC_IPC_LOGGING=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_MSM_ICE=y
 CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
 CONFIG_LEDS_QTI_TRI_LED=y
@@ -623,6 +626,8 @@
 CONFIG_QMP_DEBUGFS_CLIENT=y
 CONFIG_QCOM_CDSP_RM=y
 CONFIG_QCOM_QHEE_ENABLE_MEM_PROTECTION=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
 CONFIG_ICNSS=y
 CONFIG_ICNSS_DEBUG=y
 CONFIG_ICNSS_QMI=y
@@ -656,6 +661,7 @@
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 CONFIG_QFMT_V2=y
@@ -672,7 +678,6 @@
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
-CONFIG_PFK=y
 CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
 CONFIG_SECURITY=y
 CONFIG_HARDENED_USERCOPY=y
@@ -688,7 +693,6 @@
 CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
 CONFIG_CRYPTO_DEV_QCRYPTO=y
 CONFIG_CRYPTO_DEV_QCEDEV=y
-CONFIG_CRYPTO_DEV_QCOM_ICE=y
 CONFIG_XZ_DEC=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
diff --git a/arch/arm64/configs/vendor/sdm660-perf_defconfig b/arch/arm64/configs/vendor/sdm660-perf_defconfig
new file mode 100644
index 0000000..67089f8
--- /dev/null
+++ b/arch/arm64/configs/vendor/sdm660-perf_defconfig
@@ -0,0 +1,647 @@
+CONFIG_LOCALVERSION="-perf"
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_AUDIT=y
+# CONFIG_AUDITSYSCALL is not set
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_PREEMPT=y
+CONFIG_IRQ_TIME_ACCOUNTING=y
+CONFIG_SCHED_WALT=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
+CONFIG_PSI=y
+CONFIG_PSI_FTRACE=y
+CONFIG_RCU_EXPERT=y
+CONFIG_RCU_FAST_NO_HZ=y
+CONFIG_RCU_NOCB_CPU=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=17
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
+CONFIG_BLK_CGROUP=y
+CONFIG_RT_GROUP_SCHED=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_BPF=y
+CONFIG_SCHED_CORE_CTL=y
+CONFIG_NAMESPACES=y
+# CONFIG_UTS_NS is not set
+# CONFIG_PID_NS is not set
+CONFIG_SCHED_AUTOGROUP=y
+CONFIG_SCHED_TUNE=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_RD_XZ is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+# CONFIG_FHANDLE is not set
+CONFIG_KALLSYMS_ALL=y
+CONFIG_BPF_SYSCALL=y
+CONFIG_EMBEDDED=y
+# CONFIG_SLUB_DEBUG is not set
+# CONFIG_COMPAT_BRK is not set
+CONFIG_SLAB_FREELIST_RANDOM=y
+CONFIG_SLAB_FREELIST_HARDENED=y
+CONFIG_PROFILING=y
+CONFIG_HOTPLUG_SIZE_BITS=29
+CONFIG_ARCH_QCOM=y
+CONFIG_ARCH_SDM660=y
+CONFIG_PCI=y
+CONFIG_PCI_MSM=y
+CONFIG_SCHED_MC=y
+CONFIG_NR_CPUS=8
+CONFIG_HZ_100=y
+CONFIG_SECCOMP=y
+CONFIG_PRINT_VMEMLAYOUT=y
+CONFIG_ARMV8_DEPRECATED=y
+CONFIG_SWP_EMULATION=y
+CONFIG_CP15_BARRIER_EMULATION=y
+CONFIG_SETEND_EMULATION=y
+CONFIG_ARM64_SW_TTBR0_PAN=y
+# CONFIG_ARM64_VHE is not set
+CONFIG_RANDOMIZE_BASE=y
+# CONFIG_EFI is not set
+CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE=y
+CONFIG_COMPAT=y
+CONFIG_PM_AUTOSLEEP=y
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=0
+# CONFIG_PM_WAKELOCKS_GC is not set
+CONFIG_CPU_IDLE=y
+CONFIG_ARM_CPUIDLE=y
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_TIMES=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_BOOST=y
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
+CONFIG_ARM_QCOM_CPUFREQ_HW=y
+CONFIG_CPU_FREQ_MSM=y
+CONFIG_MSM_TZ_LOG=y
+CONFIG_ARM64_CRYPTO=y
+CONFIG_CRYPTO_SHA1_ARM64_CE=y
+CONFIG_CRYPTO_SHA2_ARM64_CE=y
+CONFIG_CRYPTO_GHASH_ARM64_CE=y
+CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
+CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
+CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS=16
+CONFIG_PANIC_ON_REFCOUNT_ERROR=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MODULE_SIG=y
+CONFIG_MODULE_SIG_FORCE=y
+CONFIG_MODULE_SIG_SHA512=y
+CONFIG_BLK_DEV_ZONED=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_CFQ_GROUP_IOSCHED=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_MEMORY_HOTPLUG=y
+CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y
+CONFIG_MEMORY_HOTPLUG_MOVABLE_NODE=y
+CONFIG_MEMORY_HOTREMOVE=y
+CONFIG_CLEANCACHE=y
+CONFIG_CMA=y
+CONFIG_CMA_AREAS=8
+CONFIG_ZSMALLOC=y
+CONFIG_BALANCE_ANON_FILE_RECLAIM=y
+CONFIG_HAVE_USERSPACE_LOW_MEMORY_KILLER=y
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=y
+CONFIG_XFRM_INTERFACE=y
+CONFIG_XFRM_STATISTICS=y
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_NET_IPGRE_DEMUX=y
+CONFIG_SYN_COOKIES=y
+CONFIG_NET_IPVTI=y
+CONFIG_INET_AH=y
+CONFIG_INET_ESP=y
+CONFIG_INET_IPCOMP=y
+CONFIG_INET_UDP_DIAG=y
+CONFIG_INET_DIAG_DESTROY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=y
+CONFIG_INET6_ESP=y
+CONFIG_INET6_IPCOMP=y
+CONFIG_IPV6_MIP6=y
+CONFIG_IPV6_VTI=y
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_IPV6_SUBTREES=y
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CONNTRACK_AMANDA=y
+CONFIG_NF_CONNTRACK_FTP=y
+CONFIG_NF_CONNTRACK_H323=y
+CONFIG_NF_CONNTRACK_IRC=y
+CONFIG_NF_CONNTRACK_NETBIOS_NS=y
+CONFIG_NF_CONNTRACK_PPTP=y
+CONFIG_NF_CONNTRACK_SANE=y
+CONFIG_NF_CONNTRACK_TFTP=y
+CONFIG_NF_CT_NETLINK=y
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
+CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_HARDIDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_LOG=y
+CONFIG_NETFILTER_XT_TARGET_MARK=y
+CONFIG_NETFILTER_XT_TARGET_NFLOG=y
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
+CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
+CONFIG_NETFILTER_XT_TARGET_TEE=y
+CONFIG_NETFILTER_XT_TARGET_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_TRACE=y
+CONFIG_NETFILTER_XT_TARGET_SECMARK=y
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
+CONFIG_NETFILTER_XT_MATCH_BPF=y
+CONFIG_NETFILTER_XT_MATCH_COMMENT=y
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
+CONFIG_NETFILTER_XT_MATCH_DSCP=y
+CONFIG_NETFILTER_XT_MATCH_ESP=y
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_HELPER=y
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
+CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+CONFIG_NETFILTER_XT_MATCH_LIMIT=y
+CONFIG_NETFILTER_XT_MATCH_MAC=y
+CONFIG_NETFILTER_XT_MATCH_MARK=y
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
+CONFIG_NETFILTER_XT_MATCH_OWNER=y
+CONFIG_NETFILTER_XT_MATCH_POLICY=y
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
+# CONFIG_NETFILTER_XT_MATCH_SCTP is not set
+CONFIG_NETFILTER_XT_MATCH_SOCKET=y
+CONFIG_NETFILTER_XT_MATCH_STATE=y
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
+CONFIG_NETFILTER_XT_MATCH_STRING=y
+CONFIG_NETFILTER_XT_MATCH_TIME=y
+CONFIG_NETFILTER_XT_MATCH_U32=y
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=y
+CONFIG_IP_NF_MATCH_ECN=y
+CONFIG_IP_NF_MATCH_RPFILTER=y
+CONFIG_IP_NF_MATCH_TTL=y
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_NAT=y
+CONFIG_IP_NF_TARGET_MASQUERADE=y
+CONFIG_IP_NF_TARGET_NETMAP=y
+CONFIG_IP_NF_TARGET_REDIRECT=y
+CONFIG_IP_NF_MANGLE=y
+CONFIG_IP_NF_RAW=y
+CONFIG_IP_NF_SECURITY=y
+CONFIG_IP_NF_ARPTABLES=y
+CONFIG_IP_NF_ARPFILTER=y
+CONFIG_IP_NF_ARP_MANGLE=y
+CONFIG_IP6_NF_IPTABLES=y
+CONFIG_IP6_NF_MATCH_RPFILTER=y
+CONFIG_IP6_NF_FILTER=y
+CONFIG_IP6_NF_TARGET_REJECT=y
+CONFIG_IP6_NF_MANGLE=y
+CONFIG_IP6_NF_RAW=y
+CONFIG_BRIDGE_NF_EBTABLES=y
+CONFIG_BRIDGE_EBT_BROUTE=y
+CONFIG_IP_SCTP=y
+CONFIG_L2TP=y
+CONFIG_L2TP_V3=y
+CONFIG_L2TP_IP=y
+CONFIG_L2TP_ETH=y
+CONFIG_BRIDGE=y
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_HTB=y
+CONFIG_NET_SCH_PRIO=y
+CONFIG_NET_SCH_MULTIQ=y
+CONFIG_NET_SCH_INGRESS=y
+CONFIG_NET_CLS_FW=y
+CONFIG_NET_CLS_U32=y
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_FLOW=y
+CONFIG_NET_CLS_BPF=y
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_CMP=y
+CONFIG_NET_EMATCH_NBYTE=y
+CONFIG_NET_EMATCH_U32=y
+CONFIG_NET_EMATCH_META=y
+CONFIG_NET_EMATCH_TEXT=y
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_GACT=y
+CONFIG_NET_ACT_MIRRED=y
+CONFIG_NET_ACT_SKBEDIT=y
+CONFIG_DNS_RESOLVER=y
+CONFIG_QRTR=y
+CONFIG_QRTR_SMD=y
+CONFIG_SOCKEV_NLMCAST=y
+CONFIG_BT=y
+CONFIG_MSM_BT_POWER=y
+CONFIG_BTFM_SLIM_WCN3990=y
+CONFIG_CFG80211=y
+CONFIG_CFG80211_CERTIFICATION_ONUS=y
+CONFIG_CFG80211_REG_CELLULAR_HINTS=y
+CONFIG_CFG80211_INTERNAL_REGDB=y
+CONFIG_RFKILL=y
+CONFIG_NFC_NQ=y
+CONFIG_FW_LOADER_USER_HELPER=y
+CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
+CONFIG_REGMAP_WCD_IRQ=y
+CONFIG_DMA_CMA=y
+CONFIG_MTD=m
+CONFIG_ZRAM=y
+CONFIG_ZRAM_DEDUP=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_LOOP_MIN_COUNT=16
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_HDCP_QSEECOM=y
+CONFIG_QSEECOM=y
+CONFIG_UID_SYS_STATS=y
+CONFIG_MEMORY_STATE_TIME=y
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_SCH=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+CONFIG_SCSI_UFSHCD=y
+CONFIG_SCSI_UFSHCD_PLATFORM=y
+CONFIG_SCSI_UFS_QCOM=y
+CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LINEAR=y
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_CRYPT=y
+CONFIG_DM_DEFAULT_KEY=y
+CONFIG_DM_UEVENT=y
+CONFIG_DM_VERITY=y
+CONFIG_DM_VERITY_FEC=y
+CONFIG_DM_ANDROID_VERITY=y
+CONFIG_DM_BOW=y
+CONFIG_NETDEVICES=y
+CONFIG_BONDING=y
+CONFIG_DUMMY=y
+CONFIG_TUN=y
+CONFIG_SKY2=y
+CONFIG_RMNET=y
+CONFIG_SMSC911X=y
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=y
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=y
+CONFIG_PPTP=y
+CONFIG_PPPOL2TP=y
+CONFIG_PPP_ASYNC=y
+CONFIG_PPP_SYNC_TTY=y
+CONFIG_USB_RTL8152=y
+CONFIG_USB_USBNET=y
+CONFIG_WIL6210=m
+CONFIG_WCNSS_MEM_PRE_ALLOC=y
+CONFIG_CLD_LL_CORE=y
+CONFIG_CNSS_GENL=y
+CONFIG_INPUT_EVDEV=y
+CONFIG_INPUT_KEYRESET=y
+CONFIG_KEYBOARD_GPIO=y
+# CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_JOYSTICK=y
+CONFIG_INPUT_TOUCHSCREEN=y
+# CONFIG_TOUCHSCREEN_SYNAPTICS_DSX is not set
+# CONFIG_TOUCHSCREEN_SYNAPTICS_TCM is not set
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_QPNP_POWER_ON=y
+CONFIG_INPUT_QTI_HAPTICS=y
+CONFIG_INPUT_UINPUT=y
+CONFIG_INPUT_GPIO=y
+# CONFIG_SERIO_SERPORT is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_DEVMEM is not set
+CONFIG_SERIAL_DEV_BUS=y
+CONFIG_TTY_PRINTK=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_MSM_LEGACY=y
+# CONFIG_DEVPORT is not set
+CONFIG_DIAG_CHAR=y
+CONFIG_MSM_FASTCVPD=y
+CONFIG_MSM_ADSPRPC=y
+CONFIG_MSM_RDBG=m
+CONFIG_I2C_CHARDEV=y
+CONFIG_SPI=y
+CONFIG_SPI_QUP=y
+CONFIG_SPI_SPIDEV=y
+CONFIG_SPMI=y
+CONFIG_SPMI_MSM_PMIC_ARB_DEBUG=y
+CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_POWER_RESET_QCOM=y
+CONFIG_POWER_RESET_XGENE=y
+CONFIG_POWER_RESET_SYSCON=y
+CONFIG_QPNP_QNOVO5=y
+CONFIG_THERMAL=y
+CONFIG_THERMAL_WRITABLE_TRIPS=y
+CONFIG_THERMAL_GOV_USER_SPACE=y
+CONFIG_THERMAL_GOV_LOW_LIMITS=y
+CONFIG_CPU_THERMAL=y
+CONFIG_DEVFREQ_THERMAL=y
+CONFIG_QCOM_SPMI_TEMP_ALARM=y
+CONFIG_THERMAL_TSENS=y
+CONFIG_QTI_ADC_TM=y
+CONFIG_QTI_VIRTUAL_SENSOR=y
+CONFIG_QTI_BCL_PMIC5=y
+CONFIG_QTI_BCL_SOC_DRIVER=y
+CONFIG_QTI_QMI_COOLING_DEVICE=y
+CONFIG_QTI_THERMAL_LIMITS_DCVS=y
+CONFIG_REGULATOR_COOLING_DEVICE=y
+CONFIG_QTI_CX_IPEAK_COOLING_DEVICE=y
+CONFIG_MFD_I2C_PMIC=y
+CONFIG_MFD_SPMI_PMIC=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_PROXY_CONSUMER=y
+CONFIG_REGULATOR_QCOM_SMD_RPM=y
+CONFIG_REGULATOR_QPNP_LABIBB=y
+CONFIG_REGULATOR_QPNP_LCDB=y
+CONFIG_REGULATOR_QPNP_OLEDB=y
+CONFIG_REGULATOR_RPM_SMD=y
+CONFIG_REGULATOR_STUB=y
+CONFIG_RC_CORE=m
+CONFIG_MEDIA_SUPPORT=y
+CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
+CONFIG_MEDIA_CONTROLLER=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_VIDEO_FIXED_MINOR_RANGES=y
+CONFIG_MEDIA_USB_SUPPORT=y
+CONFIG_USB_VIDEO_CLASS=y
+CONFIG_V4L_PLATFORM_DRIVERS=y
+CONFIG_DVB_MPQ=m
+CONFIG_DVB_MPQ_DEMUX=m
+CONFIG_FB=y
+CONFIG_FB_ARMCLCD=y
+CONFIG_FB_VIRTUAL=y
+CONFIG_BACKLIGHT_QCOM_SPMI_WLED=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_DYNAMIC_MINORS=y
+CONFIG_SND_USB_AUDIO=y
+CONFIG_SND_USB_AUDIO_QMI=y
+CONFIG_SND_SOC=y
+CONFIG_UHID=y
+CONFIG_HID_APPLE=y
+CONFIG_HID_ELECOM=y
+CONFIG_HID_MAGICMOUSE=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MULTITOUCH=y
+CONFIG_HID_PLANTRONICS=y
+CONFIG_HID_SONY=y
+CONFIG_USB=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+CONFIG_USB_XHCI_HCD=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_HCD_PLATFORM=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_DWC3=y
+CONFIG_USB_DWC3_MSM=y
+CONFIG_USB_ISP1760=y
+CONFIG_USB_ISP1760_HOST_ROLE=y
+CONFIG_USB_EHSET_TEST_FIXTURE=y
+CONFIG_USB_LINK_LAYER_TEST=y
+CONFIG_NOP_USB_XCEIV=y
+CONFIG_USB_MSM_SSPHY_QMP=y
+CONFIG_MSM_QUSB_PHY=y
+CONFIG_MSM_HSUSB_PHY=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_GADGET_VBUS_DRAW=500
+CONFIG_USB_CONFIGFS=y
+CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_NCM=y
+CONFIG_USB_CONFIGFS_MASS_STORAGE=y
+CONFIG_USB_CONFIGFS_F_FS=y
+CONFIG_USB_CONFIGFS_F_ACC=y
+CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
+CONFIG_USB_CONFIGFS_F_MIDI=y
+CONFIG_USB_CONFIGFS_F_HID=y
+CONFIG_USB_CONFIGFS_F_DIAG=y
+CONFIG_USB_CONFIGFS_F_CDEV=y
+CONFIG_USB_CONFIGFS_F_CCID=y
+CONFIG_USB_CONFIGFS_F_QDSS=y
+CONFIG_USB_CONFIGFS_F_GSI=y
+CONFIG_USB_CONFIGFS_F_MTP=y
+CONFIG_USB_CONFIGFS_F_PTP=y
+CONFIG_TYPEC=y
+CONFIG_USB_PD_POLICY=y
+CONFIG_QPNP_USB_PDPHY=y
+CONFIG_MMC=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
+CONFIG_MMC_TEST=m
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_QTI_TRI_LED=y
+CONFIG_LEDS_QPNP_FLASH_V2=y
+CONFIG_RTC_CLASS=y
+CONFIG_DMADEVICES=y
+CONFIG_QCOM_GPI_DMA=y
+CONFIG_SYNC_FILE=y
+CONFIG_UIO=y
+CONFIG_UIO_MSM_SHAREDMEM=y
+CONFIG_STAGING=y
+CONFIG_ASHMEM=y
+CONFIG_ION=y
+CONFIG_QPNP_REVID=y
+CONFIG_SPS=y
+CONFIG_SPS_SUPPORT_NDP_BAM=y
+CONFIG_GSI=y
+CONFIG_MSM_11AD=m
+CONFIG_USB_BAM=y
+CONFIG_HWSPINLOCK=y
+CONFIG_HWSPINLOCK_QCOM=y
+CONFIG_MAILBOX=y
+CONFIG_QCOM_APCS_IPC=y
+CONFIG_MSM_QMP=y
+CONFIG_IOMMU_IO_PGTABLE_FAST=y
+CONFIG_ARM_SMMU=y
+CONFIG_ARM_SMMU_TESTBUS_DUMP=y
+CONFIG_QCOM_LAZY_MAPPING=y
+CONFIG_RPMSG_CHAR=y
+CONFIG_RPMSG_QCOM_GLINK_RPM=y
+CONFIG_RPMSG_QCOM_GLINK_SMEM=y
+CONFIG_RPMSG_QCOM_GLINK_SPSS=y
+CONFIG_RPMSG_QCOM_GLINK_SPI=y
+CONFIG_RPMSG_QCOM_SMD=y
+CONFIG_MSM_RPM_SMD=y
+CONFIG_QCOM_COMMAND_DB=y
+CONFIG_QCOM_MEM_OFFLINE=y
+CONFIG_OVERRIDE_MEMORY_LIMIT=y
+CONFIG_QCOM_CPUSS_DUMP=y
+CONFIG_QCOM_RUN_QUEUE_STATS=y
+CONFIG_QPNP_PBS=y
+CONFIG_QCOM_QMI_HELPERS=y
+CONFIG_QCOM_SMEM=y
+CONFIG_QCOM_SMD_RPM=y
+CONFIG_QCOM_EARLY_RANDOM=y
+CONFIG_QCOM_MEMORY_DUMP_V2=y
+CONFIG_QCOM_SMP2P=y
+CONFIG_SETUP_SSR_NOTIF_TIMEOUTS=y
+CONFIG_SSR_SYSMON_NOTIF_TIMEOUT=20000
+CONFIG_SSR_SUBSYS_NOTIF_TIMEOUT=20000
+CONFIG_PANIC_ON_SSR_NOTIF_TIMEOUT=y
+CONFIG_QCOM_SECURE_BUFFER=y
+CONFIG_MSM_SERVICE_LOCATOR=y
+CONFIG_MSM_SERVICE_NOTIFIER=y
+CONFIG_MSM_SUBSYSTEM_RESTART=y
+CONFIG_MSM_PIL=y
+CONFIG_MSM_SYSMON_QMI_COMM=y
+CONFIG_MSM_PIL_SSR_GENERIC=y
+CONFIG_MSM_BOOT_STATS=y
+CONFIG_QCOM_DCC_V2=y
+CONFIG_QCOM_MINIDUMP=y
+CONFIG_MSM_CORE_HANG_DETECT=y
+CONFIG_MSM_GLADIATOR_HANG_DETECT=y
+CONFIG_QCOM_FSA4480_I2C=y
+CONFIG_QCOM_WATCHDOG_V2=y
+CONFIG_QCOM_FORCE_WDOG_BITE_ON_PANIC=y
+CONFIG_QCOM_WDOG_IPI_ENABLE=y
+CONFIG_QCOM_BUS_SCALING=y
+CONFIG_MSM_SPCOM=y
+CONFIG_MSM_SPSS_UTILS=y
+CONFIG_QSEE_IPC_IRQ_BRIDGE=y
+CONFIG_QCOM_GLINK=y
+CONFIG_QCOM_GLINK_PKT=y
+CONFIG_QCOM_SMP2P_SLEEPSTATE=y
+CONFIG_MSM_CDSP_LOADER=y
+CONFIG_QCOM_SMCINVOKE=y
+CONFIG_MSM_EVENT_TIMER=y
+CONFIG_MSM_PM=y
+CONFIG_QTI_RPM_STATS_LOG=y
+CONFIG_QTEE_SHM_BRIDGE=y
+CONFIG_MEM_SHARE_QMI_SERVICE=y
+CONFIG_MSM_PERFORMANCE=y
+CONFIG_QCOM_CDSP_RM=y
+CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
+CONFIG_ICNSS=y
+CONFIG_ICNSS_QMI=y
+CONFIG_DEVFREQ_GOV_PASSIVE=y
+CONFIG_QCOM_BIMC_BWMON=y
+CONFIG_ARM_MEMLAT_MON=y
+CONFIG_DEVFREQ_GOV_QCOM_BW_HWMON=y
+CONFIG_DEVFREQ_GOV_QCOM_CACHE_HWMON=y
+CONFIG_DEVFREQ_GOV_MEMLAT=y
+CONFIG_DEVFREQ_SIMPLE_DEV=y
+CONFIG_QCOM_DEVFREQ_DEVBW=y
+CONFIG_IIO=y
+CONFIG_QCOM_SPMI_ADC5=y
+CONFIG_QCOM_SPMI_VADC=y
+CONFIG_PWM=y
+CONFIG_PWM_QTI_LPG=y
+CONFIG_ARM_GIC_V3_ACL=y
+CONFIG_QCOM_MPM=y
+CONFIG_PHY_XGENE=y
+CONFIG_RAS=y
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_QCOM_QFPROM=y
+CONFIG_NVMEM_SPMI_SDAM=y
+CONFIG_SLIMBUS_MSM_NGD=y
+CONFIG_SENSORS_SSC=y
+CONFIG_QCOM_KGSL=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_EXT4_ENCRYPTION=y
+CONFIG_F2FS_FS=y
+CONFIG_F2FS_FS_SECURITY=y
+CONFIG_F2FS_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
+CONFIG_QUOTA=y
+CONFIG_QUOTA_NETLINK_INTERFACE=y
+CONFIG_QFMT_V2=y
+CONFIG_FUSE_FS=y
+CONFIG_OVERLAY_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_ECRYPT_FS=y
+CONFIG_ECRYPT_FS_MESSAGING=y
+CONFIG_SDCARD_FS=y
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
+CONFIG_SECURITY=y
+CONFIG_HARDENED_USERCOPY=y
+CONFIG_HARDENED_USERCOPY_PAGESPAN=y
+CONFIG_FORTIFY_SOURCE=y
+CONFIG_SECURITY_SELINUX=y
+CONFIG_SECURITY_SMACK=y
+CONFIG_CRYPTO_GCM=y
+CONFIG_CRYPTO_XCBC=y
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_TWOFISH=y
+CONFIG_CRYPTO_ANSI_CPRNG=y
+CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
+CONFIG_CRYPTO_DEV_QCRYPTO=y
+CONFIG_CRYPTO_DEV_QCEDEV=y
+CONFIG_SYSTEM_TRUSTED_KEYS="verity.x509.pem"
+CONFIG_XZ_DEC=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
+CONFIG_PAGE_OWNER=y
+# CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_PANIC_TIMEOUT=5
+CONFIG_SCHEDSTATS=y
+# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_IPC_LOGGING=y
+CONFIG_DEBUG_ALIGN_RODATA=y
+CONFIG_CORESIGHT=y
+CONFIG_CORESIGHT_LINK_AND_SINK_TMC=y
+CONFIG_CORESIGHT_SOURCE_ETM4X=y
+CONFIG_CORESIGHT_STM=y
+CONFIG_CORESIGHT_CTI=y
+CONFIG_CORESIGHT_TPDA=y
+CONFIG_CORESIGHT_TPDM=y
+CONFIG_CORESIGHT_HWEVENT=y
+CONFIG_CORESIGHT_REMOTE_ETM=y
diff --git a/arch/arm64/configs/vendor/sdm660_defconfig b/arch/arm64/configs/vendor/sdm660_defconfig
new file mode 100644
index 0000000..6e561f2
--- /dev/null
+++ b/arch/arm64/configs/vendor/sdm660_defconfig
@@ -0,0 +1,698 @@
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_AUDIT=y
+# CONFIG_AUDITSYSCALL is not set
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_PREEMPT=y
+CONFIG_IRQ_TIME_ACCOUNTING=y
+CONFIG_SCHED_WALT=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
+CONFIG_PSI=y
+CONFIG_PSI_FTRACE=y
+CONFIG_RCU_EXPERT=y
+CONFIG_RCU_FAST_NO_HZ=y
+CONFIG_RCU_NOCB_CPU=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_CPU_MAX_BUF_SHIFT=17
+CONFIG_MEMCG=y
+CONFIG_MEMCG_SWAP=y
+CONFIG_BLK_CGROUP=y
+CONFIG_DEBUG_BLK_CGROUP=y
+CONFIG_RT_GROUP_SCHED=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_BPF=y
+CONFIG_CGROUP_DEBUG=y
+CONFIG_SCHED_CORE_CTL=y
+CONFIG_NAMESPACES=y
+# CONFIG_UTS_NS is not set
+# CONFIG_PID_NS is not set
+CONFIG_SCHED_AUTOGROUP=y
+CONFIG_SCHED_TUNE=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_RD_XZ is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+# CONFIG_FHANDLE is not set
+CONFIG_KALLSYMS_ALL=y
+CONFIG_BPF_SYSCALL=y
+CONFIG_EMBEDDED=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_SLAB_FREELIST_RANDOM=y
+CONFIG_SLAB_FREELIST_HARDENED=y
+CONFIG_PROFILING=y
+# CONFIG_ZONE_DMA32 is not set
+CONFIG_HOTPLUG_SIZE_BITS=29
+CONFIG_ARCH_QCOM=y
+CONFIG_ARCH_SDM660=y
+CONFIG_PCI=y
+CONFIG_PCI_MSM=y
+CONFIG_SCHED_MC=y
+CONFIG_NR_CPUS=8
+CONFIG_HZ_100=y
+CONFIG_SECCOMP=y
+CONFIG_PRINT_VMEMLAYOUT=y
+CONFIG_ARMV8_DEPRECATED=y
+CONFIG_SWP_EMULATION=y
+CONFIG_CP15_BARRIER_EMULATION=y
+CONFIG_SETEND_EMULATION=y
+CONFIG_ARM64_SW_TTBR0_PAN=y
+# CONFIG_ARM64_VHE is not set
+CONFIG_RANDOMIZE_BASE=y
+CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE=y
+CONFIG_COMPAT=y
+CONFIG_PM_AUTOSLEEP=y
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=0
+# CONFIG_PM_WAKELOCKS_GC is not set
+CONFIG_PM_DEBUG=y
+CONFIG_CPU_IDLE=y
+CONFIG_ARM_CPUIDLE=y
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_TIMES=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_BOOST=y
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
+CONFIG_ARM_QCOM_CPUFREQ_HW=y
+CONFIG_MSM_TZ_LOG=y
+CONFIG_ARM64_CRYPTO=y
+CONFIG_CRYPTO_SHA1_ARM64_CE=y
+CONFIG_CRYPTO_SHA2_ARM64_CE=y
+CONFIG_CRYPTO_GHASH_ARM64_CE=y
+CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
+CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
+CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y
+CONFIG_ARCH_MMAP_RND_COMPAT_BITS=16
+CONFIG_PANIC_ON_REFCOUNT_ERROR=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MODULE_SIG=y
+CONFIG_MODULE_SIG_FORCE=y
+CONFIG_MODULE_SIG_SHA512=y
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_DEV_ZONED=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_IOSCHED_DEADLINE is not set
+CONFIG_CFQ_GROUP_IOSCHED=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_MEMORY_HOTPLUG=y
+CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y
+CONFIG_MEMORY_HOTPLUG_MOVABLE_NODE=y
+CONFIG_MEMORY_HOTREMOVE=y
+CONFIG_CLEANCACHE=y
+CONFIG_CMA=y
+CONFIG_CMA_DEBUG=y
+CONFIG_CMA_DEBUGFS=y
+CONFIG_CMA_ALLOW_WRITE_DEBUGFS=y
+CONFIG_CMA_AREAS=8
+CONFIG_ZSMALLOC=y
+CONFIG_BALANCE_ANON_FILE_RECLAIM=y
+CONFIG_HAVE_USERSPACE_LOW_MEMORY_KILLER=y
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=y
+CONFIG_XFRM_INTERFACE=y
+CONFIG_XFRM_STATISTICS=y
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_NET_IPGRE_DEMUX=y
+CONFIG_SYN_COOKIES=y
+CONFIG_NET_IPVTI=y
+CONFIG_INET_AH=y
+CONFIG_INET_ESP=y
+CONFIG_INET_IPCOMP=y
+CONFIG_INET_UDP_DIAG=y
+CONFIG_INET_DIAG_DESTROY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=y
+CONFIG_INET6_ESP=y
+CONFIG_INET6_IPCOMP=y
+CONFIG_IPV6_MIP6=y
+CONFIG_IPV6_VTI=y
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_IPV6_SUBTREES=y
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CONNTRACK_AMANDA=y
+CONFIG_NF_CONNTRACK_FTP=y
+CONFIG_NF_CONNTRACK_H323=y
+CONFIG_NF_CONNTRACK_IRC=y
+CONFIG_NF_CONNTRACK_NETBIOS_NS=y
+CONFIG_NF_CONNTRACK_PPTP=y
+CONFIG_NF_CONNTRACK_SANE=y
+CONFIG_NF_CONNTRACK_TFTP=y
+CONFIG_NF_CT_NETLINK=y
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
+CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_HARDIDLETIMER=y
+CONFIG_NETFILTER_XT_TARGET_LOG=y
+CONFIG_NETFILTER_XT_TARGET_MARK=y
+CONFIG_NETFILTER_XT_TARGET_NFLOG=y
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
+CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
+CONFIG_NETFILTER_XT_TARGET_TEE=y
+CONFIG_NETFILTER_XT_TARGET_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_TRACE=y
+CONFIG_NETFILTER_XT_TARGET_SECMARK=y
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
+CONFIG_NETFILTER_XT_MATCH_BPF=y
+CONFIG_NETFILTER_XT_MATCH_COMMENT=y
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
+CONFIG_NETFILTER_XT_MATCH_DSCP=y
+CONFIG_NETFILTER_XT_MATCH_ESP=y
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_HELPER=y
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
+CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+CONFIG_NETFILTER_XT_MATCH_LIMIT=y
+CONFIG_NETFILTER_XT_MATCH_MAC=y
+CONFIG_NETFILTER_XT_MATCH_MARK=y
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
+CONFIG_NETFILTER_XT_MATCH_OWNER=y
+CONFIG_NETFILTER_XT_MATCH_POLICY=y
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
+CONFIG_NETFILTER_XT_MATCH_SOCKET=y
+CONFIG_NETFILTER_XT_MATCH_STATE=y
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
+CONFIG_NETFILTER_XT_MATCH_STRING=y
+CONFIG_NETFILTER_XT_MATCH_TIME=y
+CONFIG_NETFILTER_XT_MATCH_U32=y
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=y
+CONFIG_IP_NF_MATCH_ECN=y
+CONFIG_IP_NF_MATCH_RPFILTER=y
+CONFIG_IP_NF_MATCH_TTL=y
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_NAT=y
+CONFIG_IP_NF_TARGET_MASQUERADE=y
+CONFIG_IP_NF_TARGET_NETMAP=y
+CONFIG_IP_NF_TARGET_REDIRECT=y
+CONFIG_IP_NF_MANGLE=y
+CONFIG_IP_NF_RAW=y
+CONFIG_IP_NF_SECURITY=y
+CONFIG_IP_NF_ARPTABLES=y
+CONFIG_IP_NF_ARPFILTER=y
+CONFIG_IP_NF_ARP_MANGLE=y
+CONFIG_IP6_NF_IPTABLES=y
+CONFIG_IP6_NF_MATCH_RPFILTER=y
+CONFIG_IP6_NF_FILTER=y
+CONFIG_IP6_NF_TARGET_REJECT=y
+CONFIG_IP6_NF_MANGLE=y
+CONFIG_IP6_NF_RAW=y
+CONFIG_BRIDGE_NF_EBTABLES=y
+CONFIG_BRIDGE_EBT_BROUTE=y
+CONFIG_IP_SCTP=y
+CONFIG_L2TP=y
+CONFIG_L2TP_DEBUGFS=y
+CONFIG_L2TP_V3=y
+CONFIG_L2TP_IP=y
+CONFIG_L2TP_ETH=y
+CONFIG_BRIDGE=y
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_HTB=y
+CONFIG_NET_SCH_PRIO=y
+CONFIG_NET_SCH_MULTIQ=y
+CONFIG_NET_SCH_INGRESS=y
+CONFIG_NET_CLS_FW=y
+CONFIG_NET_CLS_U32=y
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_FLOW=y
+CONFIG_NET_CLS_BPF=y
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_CMP=y
+CONFIG_NET_EMATCH_NBYTE=y
+CONFIG_NET_EMATCH_U32=y
+CONFIG_NET_EMATCH_META=y
+CONFIG_NET_EMATCH_TEXT=y
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_GACT=y
+CONFIG_NET_ACT_MIRRED=y
+CONFIG_NET_ACT_SKBEDIT=y
+CONFIG_DNS_RESOLVER=y
+CONFIG_QRTR=y
+CONFIG_QRTR_SMD=y
+CONFIG_SOCKEV_NLMCAST=y
+CONFIG_BT=y
+CONFIG_MSM_BT_POWER=y
+CONFIG_BTFM_SLIM_WCN3990=y
+CONFIG_CFG80211=y
+CONFIG_CFG80211_CERTIFICATION_ONUS=y
+CONFIG_CFG80211_REG_CELLULAR_HINTS=y
+CONFIG_CFG80211_INTERNAL_REGDB=y
+# CONFIG_CFG80211_CRDA_SUPPORT is not set
+CONFIG_RFKILL=y
+CONFIG_NFC_NQ=y
+CONFIG_FW_LOADER_USER_HELPER=y
+CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
+CONFIG_REGMAP_WCD_IRQ=y
+CONFIG_REGMAP_ALLOW_WRITE_DEBUGFS=y
+CONFIG_DMA_CMA=y
+CONFIG_ZRAM=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_LOOP_MIN_COUNT=16
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_HDCP_QSEECOM=y
+CONFIG_QSEECOM=y
+CONFIG_UID_SYS_STATS=y
+CONFIG_MEMORY_STATE_TIME=y
+CONFIG_SCSI=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_SCH=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+CONFIG_SCSI_UFSHCD=y
+CONFIG_SCSI_UFSHCD_PLATFORM=y
+CONFIG_SCSI_UFS_QCOM=y
+CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
+CONFIG_SCSI_UFS_CRYPTO=y
+CONFIG_SCSI_UFS_CRYPTO_QTI=y
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LINEAR=y
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_CRYPT=y
+CONFIG_DM_DEFAULT_KEY=y
+CONFIG_DM_UEVENT=y
+CONFIG_DM_VERITY=y
+CONFIG_DM_VERITY_FEC=y
+CONFIG_DM_ANDROID_VERITY=y
+CONFIG_DM_BOW=y
+CONFIG_NETDEVICES=y
+CONFIG_BONDING=y
+CONFIG_DUMMY=y
+CONFIG_TUN=y
+CONFIG_RMNET=y
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+CONFIG_PPP_FILTER=y
+CONFIG_PPP_MPPE=y
+CONFIG_PPP_MULTILINK=y
+CONFIG_PPPOE=y
+CONFIG_PPTP=y
+CONFIG_PPPOL2TP=y
+CONFIG_PPP_ASYNC=y
+CONFIG_PPP_SYNC_TTY=y
+CONFIG_USB_RTL8152=y
+CONFIG_USB_USBNET=y
+CONFIG_WIL6210=m
+CONFIG_WCNSS_MEM_PRE_ALLOC=y
+CONFIG_CLD_LL_CORE=y
+CONFIG_CNSS_GENL=y
+CONFIG_INPUT_EVDEV=y
+CONFIG_INPUT_KEYRESET=y
+CONFIG_KEYBOARD_GPIO=y
+# CONFIG_INPUT_MOUSE is not set
+CONFIG_INPUT_JOYSTICK=y
+CONFIG_INPUT_TOUCHSCREEN=y
+# CONFIG_TOUCHSCREEN_SYNAPTICS_DSX is not set
+# CONFIG_TOUCHSCREEN_SYNAPTICS_TCM is not set
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_QPNP_POWER_ON=y
+CONFIG_INPUT_QTI_HAPTICS=y
+CONFIG_INPUT_UINPUT=y
+CONFIG_INPUT_GPIO=y
+# CONFIG_SERIO_SERPORT is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_DEVMEM is not set
+CONFIG_SERIAL_MSM=y
+CONFIG_SERIAL_MSM_CONSOLE=y
+CONFIG_SERIAL_DEV_BUS=y
+CONFIG_TTY_PRINTK=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_MSM_LEGACY=y
+# CONFIG_DEVPORT is not set
+CONFIG_DIAG_CHAR=y
+CONFIG_MSM_FASTCVPD=y
+CONFIG_MSM_ADSPRPC=y
+CONFIG_MSM_RDBG=m
+CONFIG_I2C_CHARDEV=y
+CONFIG_SPI=y
+CONFIG_SPI_QUP=y
+CONFIG_SPI_SPIDEV=y
+CONFIG_SPMI=y
+CONFIG_SPMI_MSM_PMIC_ARB_DEBUG=y
+CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_POWER_RESET_QCOM=y
+CONFIG_POWER_RESET_XGENE=y
+CONFIG_POWER_RESET_SYSCON=y
+CONFIG_THERMAL=y
+CONFIG_THERMAL_WRITABLE_TRIPS=y
+CONFIG_THERMAL_GOV_USER_SPACE=y
+CONFIG_THERMAL_GOV_LOW_LIMITS=y
+CONFIG_CPU_THERMAL=y
+CONFIG_DEVFREQ_THERMAL=y
+CONFIG_QCOM_SPMI_TEMP_ALARM=y
+CONFIG_THERMAL_TSENS=y
+CONFIG_QTI_ADC_TM=y
+CONFIG_QTI_VIRTUAL_SENSOR=y
+CONFIG_QTI_BCL_PMIC5=y
+CONFIG_QTI_BCL_SOC_DRIVER=y
+CONFIG_QTI_QMI_COOLING_DEVICE=y
+CONFIG_QTI_THERMAL_LIMITS_DCVS=y
+CONFIG_REGULATOR_COOLING_DEVICE=y
+CONFIG_MFD_I2C_PMIC=y
+CONFIG_MFD_SPMI_PMIC=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_PROXY_CONSUMER=y
+CONFIG_REGULATOR_QPNP_LABIBB=y
+CONFIG_REGULATOR_QPNP_LCDB=y
+CONFIG_REGULATOR_QPNP_OLEDB=y
+CONFIG_REGULATOR_RPM_SMD=y
+CONFIG_REGULATOR_STUB=y
+CONFIG_MEDIA_SUPPORT=y
+CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
+CONFIG_MEDIA_CONTROLLER=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_VIDEO_ADV_DEBUG=y
+CONFIG_VIDEO_FIXED_MINOR_RANGES=y
+CONFIG_MEDIA_USB_SUPPORT=y
+CONFIG_USB_VIDEO_CLASS=y
+CONFIG_V4L_PLATFORM_DRIVERS=y
+CONFIG_DVB_MPQ=m
+CONFIG_DVB_MPQ_DEMUX=m
+CONFIG_FB=y
+CONFIG_FB_VIRTUAL=y
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+CONFIG_BACKLIGHT_QCOM_SPMI_WLED=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_DYNAMIC_MINORS=y
+CONFIG_SND_USB_AUDIO=y
+CONFIG_SND_USB_AUDIO_QMI=y
+CONFIG_SND_SOC=y
+CONFIG_UHID=y
+CONFIG_HID_APPLE=y
+CONFIG_HID_ELECOM=y
+CONFIG_HID_MAGICMOUSE=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MULTITOUCH=y
+CONFIG_HID_PLANTRONICS=y
+CONFIG_HID_SONY=y
+CONFIG_USB=y
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+CONFIG_USB_XHCI_HCD=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_HCD_PLATFORM=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_DWC3=y
+CONFIG_USB_DWC3_MSM=y
+CONFIG_USB_ISP1760=y
+CONFIG_USB_ISP1760_HOST_ROLE=y
+CONFIG_USB_EHSET_TEST_FIXTURE=y
+CONFIG_USB_LINK_LAYER_TEST=y
+CONFIG_NOP_USB_XCEIV=y
+CONFIG_USB_MSM_SSPHY_QMP=y
+CONFIG_MSM_QUSB_PHY=y
+CONFIG_MSM_HSUSB_PHY=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_GADGET_VBUS_DRAW=500
+CONFIG_USB_CONFIGFS=y
+CONFIG_USB_CONFIGFS_UEVENT=y
+CONFIG_USB_CONFIGFS_NCM=y
+CONFIG_USB_CONFIGFS_MASS_STORAGE=y
+CONFIG_USB_CONFIGFS_F_FS=y
+CONFIG_USB_CONFIGFS_F_ACC=y
+CONFIG_USB_CONFIGFS_F_AUDIO_SRC=y
+CONFIG_USB_CONFIGFS_F_MIDI=y
+CONFIG_USB_CONFIGFS_F_HID=y
+CONFIG_USB_CONFIGFS_F_DIAG=y
+CONFIG_USB_CONFIGFS_F_CDEV=y
+CONFIG_USB_CONFIGFS_F_CCID=y
+CONFIG_USB_CONFIGFS_F_QDSS=y
+CONFIG_USB_CONFIGFS_F_GSI=y
+CONFIG_USB_CONFIGFS_F_MTP=y
+CONFIG_USB_CONFIGFS_F_PTP=y
+CONFIG_TYPEC=y
+CONFIG_USB_PD_POLICY=y
+CONFIG_QPNP_USB_PDPHY=y
+CONFIG_MMC=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
+CONFIG_MMC_TEST=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_CQHCI_CRYPTO=y
+CONFIG_MMC_CQHCI_CRYPTO_QTI=y
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_QPNP_FLASH_V2=y
+CONFIG_RTC_CLASS=y
+CONFIG_DMADEVICES=y
+CONFIG_QCOM_GPI_DMA=y
+CONFIG_QCOM_GPI_DMA_DEBUG=y
+CONFIG_SYNC_FILE=y
+CONFIG_DEBUG_DMA_BUF_REF=y
+CONFIG_UIO=y
+CONFIG_UIO_MSM_SHAREDMEM=y
+CONFIG_STAGING=y
+CONFIG_ASHMEM=y
+CONFIG_ION=y
+CONFIG_QPNP_REVID=y
+CONFIG_SPS=y
+CONFIG_SPS_SUPPORT_NDP_BAM=y
+CONFIG_GSI=y
+CONFIG_MSM_11AD=m
+CONFIG_USB_BAM=y
+CONFIG_HWSPINLOCK=y
+CONFIG_HWSPINLOCK_QCOM=y
+CONFIG_MAILBOX=y
+CONFIG_QCOM_APCS_IPC=y
+CONFIG_MSM_QMP=y
+CONFIG_IOMMU_IO_PGTABLE_FAST=y
+CONFIG_ARM_SMMU=y
+CONFIG_IOMMU_TLBSYNC_DEBUG=y
+CONFIG_ARM_SMMU_TESTBUS_DUMP=y
+CONFIG_QCOM_LAZY_MAPPING=y
+CONFIG_IOMMU_DEBUG=y
+CONFIG_IOMMU_TESTS=y
+CONFIG_RPMSG_CHAR=y
+CONFIG_RPMSG_QCOM_GLINK_RPM=y
+CONFIG_RPMSG_QCOM_GLINK_SMEM=y
+CONFIG_RPMSG_QCOM_GLINK_SPSS=y
+CONFIG_RPMSG_QCOM_GLINK_SPI=y
+CONFIG_RPMSG_QCOM_SMD=y
+CONFIG_MSM_RPM_SMD=y
+CONFIG_QCOM_COMMAND_DB=y
+CONFIG_QCOM_MEM_OFFLINE=y
+CONFIG_OVERRIDE_MEMORY_LIMIT=y
+CONFIG_QCOM_CPUSS_DUMP=y
+CONFIG_QCOM_RUN_QUEUE_STATS=y
+CONFIG_QPNP_PBS=y
+CONFIG_QCOM_QMI_HELPERS=y
+CONFIG_QCOM_SMEM=y
+CONFIG_QCOM_SMD_RPM=y
+CONFIG_QCOM_EARLY_RANDOM=y
+CONFIG_QCOM_MEMORY_DUMP_V2=y
+CONFIG_QCOM_SMP2P=y
+CONFIG_SETUP_SSR_NOTIF_TIMEOUTS=y
+CONFIG_SSR_SYSMON_NOTIF_TIMEOUT=20000
+CONFIG_SSR_SUBSYS_NOTIF_TIMEOUT=20000
+CONFIG_PANIC_ON_SSR_NOTIF_TIMEOUT=y
+CONFIG_QCOM_SECURE_BUFFER=y
+CONFIG_MSM_SERVICE_LOCATOR=y
+CONFIG_MSM_SERVICE_NOTIFIER=y
+CONFIG_MSM_SUBSYSTEM_RESTART=y
+CONFIG_MSM_PIL=y
+CONFIG_MSM_SYSMON_QMI_COMM=y
+CONFIG_MSM_PIL_SSR_GENERIC=y
+CONFIG_MSM_BOOT_STATS=y
+CONFIG_QCOM_DCC_V2=y
+CONFIG_QCOM_MINIDUMP=y
+CONFIG_MSM_CORE_HANG_DETECT=y
+CONFIG_MSM_GLADIATOR_HANG_DETECT=y
+CONFIG_QCOM_FSA4480_I2C=y
+CONFIG_QCOM_WATCHDOG_V2=y
+CONFIG_QCOM_FORCE_WDOG_BITE_ON_PANIC=y
+CONFIG_QCOM_WDOG_IPI_ENABLE=y
+CONFIG_QCOM_BUS_SCALING=y
+CONFIG_MSM_SPCOM=y
+CONFIG_MSM_SPSS_UTILS=y
+CONFIG_QSEE_IPC_IRQ_BRIDGE=y
+CONFIG_QCOM_GLINK=y
+CONFIG_QCOM_GLINK_PKT=y
+CONFIG_QCOM_SMP2P_SLEEPSTATE=y
+CONFIG_MSM_CDSP_LOADER=y
+CONFIG_QCOM_SMCINVOKE=y
+CONFIG_MSM_EVENT_TIMER=y
+CONFIG_MSM_PM=y
+CONFIG_QTI_RPM_STATS_LOG=y
+CONFIG_QTEE_SHM_BRIDGE=y
+CONFIG_MEM_SHARE_QMI_SERVICE=y
+CONFIG_MSM_PERFORMANCE=y
+CONFIG_QCOM_CDSP_RM=y
+CONFIG_QCOM_CX_IPEAK=y
+CONFIG_QTI_CRYPTO_COMMON=y
+CONFIG_QTI_CRYPTO_TZ=y
+CONFIG_ICNSS=y
+CONFIG_ICNSS_DEBUG=y
+CONFIG_ICNSS_QMI=y
+CONFIG_DEVFREQ_GOV_PASSIVE=y
+CONFIG_QCOM_BIMC_BWMON=y
+CONFIG_ARM_MEMLAT_MON=y
+CONFIG_DEVFREQ_GOV_QCOM_BW_HWMON=y
+CONFIG_DEVFREQ_GOV_MEMLAT=y
+CONFIG_QCOM_DEVFREQ_DEVBW=y
+CONFIG_IIO=y
+CONFIG_QCOM_SPMI_ADC5=y
+CONFIG_QCOM_SPMI_VADC=y
+CONFIG_PWM=y
+CONFIG_PWM_QTI_LPG=y
+CONFIG_ARM_GIC_V3_ACL=y
+CONFIG_QCOM_MPM=y
+CONFIG_PHY_XGENE=y
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_QCOM_QFPROM=y
+CONFIG_NVMEM_SPMI_SDAM=y
+CONFIG_SENSORS_SSC=y
+CONFIG_QCOM_KGSL=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_EXT4_ENCRYPTION=y
+CONFIG_F2FS_FS=y
+CONFIG_F2FS_FS_SECURITY=y
+CONFIG_F2FS_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
+CONFIG_QUOTA=y
+CONFIG_QUOTA_NETLINK_INTERFACE=y
+CONFIG_QFMT_V2=y
+CONFIG_FUSE_FS=y
+CONFIG_OVERLAY_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_EFIVAR_FS=y
+CONFIG_ECRYPT_FS=y
+CONFIG_ECRYPT_FS_MESSAGING=y
+CONFIG_SDCARD_FS=y
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
+CONFIG_SECURITY=y
+CONFIG_HARDENED_USERCOPY=y
+CONFIG_HARDENED_USERCOPY_PAGESPAN=y
+CONFIG_SECURITY_SELINUX=y
+CONFIG_SECURITY_SMACK=y
+CONFIG_CRYPTO_GCM=y
+CONFIG_CRYPTO_XCBC=y
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_TWOFISH=y
+CONFIG_CRYPTO_ANSI_CPRNG=y
+CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
+CONFIG_CRYPTO_DEV_QCRYPTO=y
+CONFIG_CRYPTO_DEV_QCEDEV=y
+CONFIG_SYSTEM_TRUSTED_KEYS="verity.x509.pem"
+CONFIG_XZ_DEC=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_DEBUG_MODULE_LOAD_INFO=y
+CONFIG_DEBUG_INFO=y
+CONFIG_PAGE_OWNER=y
+CONFIG_PAGE_OWNER_ENABLE_DEFAULT=y
+CONFIG_DEBUG_SECTION_MISMATCH=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_PAGEALLOC=y
+CONFIG_SLUB_DEBUG_PANIC_ON=y
+CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y
+CONFIG_PAGE_POISONING=y
+CONFIG_PAGE_POISONING_ENABLE_DEFAULT=y
+CONFIG_DEBUG_OBJECTS=y
+CONFIG_DEBUG_OBJECTS_FREE=y
+CONFIG_DEBUG_OBJECTS_TIMERS=y
+CONFIG_DEBUG_OBJECTS_WORK=y
+CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
+CONFIG_DEBUG_OBJECTS_PERCPU_COUNTER=y
+CONFIG_SLUB_DEBUG_ON=y
+CONFIG_DEBUG_KMEMLEAK=y
+CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=4000
+CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y
+CONFIG_DEBUG_STACK_USAGE=y
+CONFIG_DEBUG_MEMORY_INIT=y
+CONFIG_SOFTLOCKUP_DETECTOR=y
+CONFIG_WQ_WATCHDOG=y
+CONFIG_PANIC_TIMEOUT=5
+CONFIG_PANIC_ON_SCHED_BUG=y
+CONFIG_PANIC_ON_RT_THROTTLING=y
+CONFIG_SCHEDSTATS=y
+CONFIG_SCHED_STACK_END_CHECK=y
+# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_DEBUG_SPINLOCK=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_DEBUG_SG=y
+CONFIG_DEBUG_NOTIFIERS=y
+CONFIG_DEBUG_CREDENTIALS=y
+CONFIG_FAULT_INJECTION=y
+CONFIG_FAIL_PAGE_ALLOC=y
+CONFIG_FAULT_INJECTION_DEBUG_FS=y
+CONFIG_UFS_FAULT_INJECTION=y
+CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
+CONFIG_IPC_LOGGING=y
+CONFIG_QCOM_RTB=y
+CONFIG_QCOM_RTB_SEPARATE_CPUS=y
+CONFIG_FUNCTION_TRACER=y
+CONFIG_PREEMPTIRQ_EVENTS=y
+CONFIG_IRQSOFF_TRACER=y
+CONFIG_PREEMPT_TRACER=y
+CONFIG_BLK_DEV_IO_TRACE=y
+CONFIG_LKDTM=m
+CONFIG_MEMTEST=y
+CONFIG_BUG_ON_DATA_CORRUPTION=y
+CONFIG_PANIC_ON_DATA_CORRUPTION=y
+CONFIG_ARM64_STRICT_BREAK_BEFORE_MAKE=y
+CONFIG_CORESIGHT=y
+CONFIG_CORESIGHT_LINK_AND_SINK_TMC=y
+CONFIG_CORESIGHT_SOURCE_ETM4X=y
+CONFIG_CORESIGHT_STM=y
+CONFIG_CORESIGHT_CTI=y
+CONFIG_CORESIGHT_TPDA=y
+CONFIG_CORESIGHT_TPDM=y
+CONFIG_CORESIGHT_HWEVENT=y
+CONFIG_CORESIGHT_REMOTE_ETM=y
diff --git a/arch/x86/configs/gki_defconfig b/arch/x86/configs/gki_defconfig
index 2307b1e..f2e9c4a 100644
--- a/arch/x86/configs/gki_defconfig
+++ b/arch/x86/configs/gki_defconfig
@@ -50,6 +50,7 @@
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_MODVERSIONS=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
 CONFIG_GKI_HACKS_TO_FIX=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=m
@@ -329,6 +330,7 @@
 CONFIG_F2FS_FS=y
 CONFIG_F2FS_FS_SECURITY=y
 CONFIG_F2FS_FS_ENCRYPTION=y
+CONFIG_FS_ENCRYPTION_INLINE_CRYPT=y
 CONFIG_FS_VERITY=y
 CONFIG_FS_VERITY_BUILTIN_SIGNATURES=y
 # CONFIG_DNOTIFY is not set
diff --git a/block/Kconfig b/block/Kconfig
index 1f2469a..1a4929c 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -200,6 +200,23 @@
 	Enabling this option enables users to setup/unlock/lock
 	Locking ranges for SED devices using the Opal protocol.
 
+config BLK_INLINE_ENCRYPTION
+	bool "Enable inline encryption support in block layer"
+	help
+	  Build the blk-crypto subsystem. Enabling this lets the
+	  block layer handle encryption, so users can take
+	  advantage of inline encryption hardware if present.
+
+config BLK_INLINE_ENCRYPTION_FALLBACK
+	bool "Enable crypto API fallback for blk-crypto"
+	depends on BLK_INLINE_ENCRYPTION
+	select CRYPTO
+	select CRYPTO_BLKCIPHER
+	help
+	  Enabling this lets the block layer handle inline encryption
+	  by falling back to the kernel crypto API when inline
+	  encryption hardware is not present.
+
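+# Example (illustrative, not part of this change): a configuration that wants
+# inline encryption with a transparent crypto API fallback enables both:
+#
+#   CONFIG_BLK_INLINE_ENCRYPTION=y
+#   CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
+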
 menu "Partition Types"
 
 source "block/partitions/Kconfig"
diff --git a/block/Makefile b/block/Makefile
index 572b33f..a2e0533 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -37,3 +37,6 @@
 obj-$(CONFIG_BLK_DEBUG_FS)	+= blk-mq-debugfs.o
 obj-$(CONFIG_BLK_DEBUG_FS_ZONED)+= blk-mq-debugfs-zoned.o
 obj-$(CONFIG_BLK_SED_OPAL)	+= sed-opal.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= keyslot-manager.o bio-crypt-ctx.o \
+					   blk-crypto.o
+obj-$(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK)	+= blk-crypto-fallback.o
\ No newline at end of file
diff --git a/block/bio-crypt-ctx.c b/block/bio-crypt-ctx.c
new file mode 100644
index 0000000..75008b2
--- /dev/null
+++ b/block/bio-crypt-ctx.c
@@ -0,0 +1,142 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/bio.h>
+#include <linux/blkdev.h>
+#include <linux/keyslot-manager.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "blk-crypto-internal.h"
+
+static int num_prealloc_crypt_ctxs = 128;
+
+module_param(num_prealloc_crypt_ctxs, int, 0444);
+MODULE_PARM_DESC(num_prealloc_crypt_ctxs,
+		"Number of bio crypto contexts to preallocate");
+
+static struct kmem_cache *bio_crypt_ctx_cache;
+static mempool_t *bio_crypt_ctx_pool;
+
+int __init bio_crypt_ctx_init(void)
+{
+	size_t i;
+
+	bio_crypt_ctx_cache = KMEM_CACHE(bio_crypt_ctx, 0);
+	if (!bio_crypt_ctx_cache)
+		return -ENOMEM;
+
+	bio_crypt_ctx_pool = mempool_create_slab_pool(num_prealloc_crypt_ctxs,
+						      bio_crypt_ctx_cache);
+	if (!bio_crypt_ctx_pool)
+		return -ENOMEM;
+
+	/* This is assumed in various places. */
+	BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0);
+
+	/* Sanity check that no algorithm exceeds the defined limits. */
+	for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) {
+		BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE);
+		BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE);
+	}
+
+	return 0;
+}
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask)
+{
+	return mempool_alloc(bio_crypt_ctx_pool, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(bio_crypt_alloc_ctx);
+
+void bio_crypt_free_ctx(struct bio *bio)
+{
+	mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
+	bio->bi_crypt_context = NULL;
+}
+
+void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
+{
+	const struct bio_crypt_ctx *src_bc = src->bi_crypt_context;
+
+	bio_clone_skip_dm_default_key(dst, src);
+
+	/*
+	 * If a bio is fallback_crypted, then it will be decrypted when
+	 * bio_endio is called. As we only want the data to be decrypted once,
+	 * copies of the bio must not have a crypt context.
+	 */
+	if (!src_bc || bio_crypt_fallback_crypted(src_bc))
+		return;
+
+	dst->bi_crypt_context = bio_crypt_alloc_ctx(gfp_mask);
+	*dst->bi_crypt_context = *src_bc;
+
+	if (src_bc->bc_keyslot >= 0)
+		keyslot_manager_get_slot(src_bc->bc_ksm, src_bc->bc_keyslot);
+}
+EXPORT_SYMBOL_GPL(bio_crypt_clone);
+
+bool bio_crypt_should_process(struct request *rq)
+{
+	struct bio *bio = rq->bio;
+
+	if (!bio || !bio->bi_crypt_context)
+		return false;
+
+	return rq->q->ksm == bio->bi_crypt_context->bc_ksm;
+}
+EXPORT_SYMBOL_GPL(bio_crypt_should_process);
+
+/*
+ * Checks that two bio crypt contexts are compatible - i.e. that
+ * they are mergeable except for data_unit_num continuity.
+ */
+bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+	if (!bc1)
+		return !bc2;
+	return bc2 && bc1->bc_key == bc2->bc_key;
+}
+
+/*
+ * Checks that two bio crypt contexts are compatible, and also
+ * that their data_unit_nums are contiguous (and the bios can hence be merged)
+ * in the order b_1 followed by b_2.
+ */
+bool bio_crypt_ctx_mergeable(struct bio *b_1, unsigned int b1_bytes,
+			     struct bio *b_2)
+{
+	struct bio_crypt_ctx *bc1 = b_1->bi_crypt_context;
+	struct bio_crypt_ctx *bc2 = b_2->bi_crypt_context;
+
+	if (!bio_crypt_ctx_compatible(b_1, b_2))
+		return false;
+
+	return !bc1 || bio_crypt_dun_is_contiguous(bc1, b1_bytes, bc2->bc_dun);
+}
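+
+/*
+ * Worked example (illustrative only): with a 512-byte data unit size, a bio
+ * b_1 of 4096 bytes whose crypt context starts at DUN 8 covers DUNs 8..15,
+ * so a bio b_2 using the same key can be merged after it only if b_2's
+ * context starts at DUN 16.
+ */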
+
+void bio_crypt_ctx_release_keyslot(struct bio_crypt_ctx *bc)
+{
+	keyslot_manager_put_slot(bc->bc_ksm, bc->bc_keyslot);
+	bc->bc_ksm = NULL;
+	bc->bc_keyslot = -1;
+}
+
+int bio_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
+				  struct keyslot_manager *ksm)
+{
+	int slot = keyslot_manager_get_slot_for_key(ksm, bc->bc_key);
+
+	if (slot < 0)
+		return slot;
+
+	bc->bc_keyslot = slot;
+	bc->bc_ksm = ksm;
+	return 0;
+}
diff --git a/block/bio.c b/block/bio.c
index ab2acc2..ee3bae8 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -29,6 +29,7 @@
 #include <linux/workqueue.h>
 #include <linux/cgroup.h>
 #include <linux/blk-cgroup.h>
+#include <linux/blk-crypto.h>
 
 #include <trace/events/block.h>
 #include "blk.h"
@@ -245,6 +246,8 @@
 void bio_uninit(struct bio *bio)
 {
 	bio_disassociate_task(bio);
+
+	bio_crypt_free_ctx(bio);
 }
 EXPORT_SYMBOL(bio_uninit);
 
@@ -580,19 +583,6 @@
 }
 EXPORT_SYMBOL(bio_phys_segments);
 
-inline void bio_clone_crypt_key(struct bio *dst, const struct bio *src)
-{
-#ifdef CONFIG_PFK
-	dst->bi_iter.bi_dun = src->bi_iter.bi_dun;
-#ifdef CONFIG_DM_DEFAULT_KEY
-	dst->bi_crypt_key = src->bi_crypt_key;
-	dst->bi_crypt_skip = src->bi_crypt_skip;
-#endif
-	dst->bi_dio_inode = src->bi_dio_inode;
-#endif
-}
-EXPORT_SYMBOL(bio_clone_crypt_key);
-
 /**
  * 	__bio_clone_fast - clone a bio that shares the original bio's biovec
  * 	@bio: destination bio
@@ -622,7 +612,7 @@
 	bio->bi_write_hint = bio_src->bi_write_hint;
 	bio->bi_iter = bio_src->bi_iter;
 	bio->bi_io_vec = bio_src->bi_io_vec;
-	bio_clone_crypt_key(bio, bio_src);
+
 	bio_clone_blkcg_association(bio, bio_src);
 }
 EXPORT_SYMBOL(__bio_clone_fast);
@@ -645,15 +635,12 @@
 
 	__bio_clone_fast(b, bio);
 
-	if (bio_integrity(bio)) {
-		int ret;
+	bio_crypt_clone(b, bio, gfp_mask);
 
-		ret = bio_integrity_clone(b, bio, gfp_mask);
-
-		if (ret < 0) {
-			bio_put(b);
-			return NULL;
-		}
+	if (bio_integrity(bio) &&
+	    bio_integrity_clone(b, bio, gfp_mask) < 0) {
+		bio_put(b);
+		return NULL;
 	}
 
 	return b;
@@ -966,6 +953,7 @@
 	if (bio_integrity(bio))
 		bio_integrity_advance(bio, bytes);
 
+	bio_crypt_advance(bio, bytes);
 	bio_advance_iter(bio, &bio->bi_iter, bytes);
 }
 EXPORT_SYMBOL(bio_advance);
@@ -1764,6 +1752,10 @@
 again:
 	if (!bio_remaining_done(bio))
 		return;
+
+	if (!blk_crypto_endio(bio))
+		return;
+
 	if (!bio_integrity_endio(bio))
 		return;
 
diff --git a/block/blk-core.c b/block/blk-core.c
index b6c6fb1..f61a9f1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -36,6 +36,7 @@
 #include <linux/debugfs.h>
 #include <linux/bpf.h>
 #include <linux/psi.h>
+#include <linux/blk-crypto.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/block.h>
@@ -1610,9 +1611,6 @@
 	/* q->queue_lock is unlocked at this point */
 	rq->__data_len = 0;
 	rq->__sector = (sector_t) -1;
-#ifdef CONFIG_PFK
-	rq->__dun = 0;
-#endif
 	rq->bio = rq->biotail = NULL;
 	return rq;
 }
@@ -1845,9 +1843,6 @@
 	bio->bi_next = req->bio;
 	req->bio = bio;
 
-#ifdef CONFIG_PFK
-	req->__dun = bio->bi_iter.bi_dun;
-#endif
 	req->__sector = bio->bi_iter.bi_sector;
 	req->__data_len += bio->bi_iter.bi_size;
 	req->ioprio = ioprio_best(req->ioprio, bio_prio(bio));
@@ -1997,9 +1992,6 @@
 	else
 		req->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
 	req->write_hint = bio->bi_write_hint;
-#ifdef CONFIG_PFK
-	req->__dun = bio->bi_iter.bi_dun;
-#endif
 	blk_rq_bio_prep(req->q, req, bio);
 }
 EXPORT_SYMBOL_GPL(blk_init_request_from_bio);
@@ -2471,7 +2463,9 @@
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
 			bio_list_init(&bio_list_on_stack[0]);
-			ret = q->make_request_fn(q, bio);
+
+			if (!blk_crypto_submit_bio(&bio))
+				ret = q->make_request_fn(q, bio);
 
 			/* sort new bios into those for a lower level
 			 * and those for the same level
@@ -2520,7 +2514,7 @@
 {
 	struct request_queue *q = bio->bi_disk->queue;
 	bool nowait = bio->bi_opf & REQ_NOWAIT;
-	blk_qc_t ret;
+	blk_qc_t ret = BLK_QC_T_NONE;
 
 	if (!generic_make_request_checks(bio))
 		return BLK_QC_T_NONE;
@@ -2534,7 +2528,8 @@
 		return BLK_QC_T_NONE;
 	}
 
-	ret = q->make_request_fn(q, bio);
+	if (!blk_crypto_submit_bio(&bio))
+		ret = q->make_request_fn(q, bio);
 	blk_queue_exit(q);
 	return ret;
 }
@@ -3161,13 +3156,8 @@
 	req->__data_len -= total_bytes;
 
 	/* update sector only for requests with clear definition of sector */
-	if (!blk_rq_is_passthrough(req)) {
+	if (!blk_rq_is_passthrough(req))
 		req->__sector += total_bytes >> 9;
-#ifdef CONFIG_PFK
-		if (req->__dun)
-			req->__dun += total_bytes >> 12;
-#endif
-	}
 
 	/* mixed attributes always follow the first bio */
 	if (req->rq_flags & RQF_MIXED_MERGE) {
@@ -3531,9 +3521,6 @@
 {
 	dst->cpu = src->cpu;
 	dst->__sector = blk_rq_pos(src);
-#ifdef CONFIG_PFK
-	dst->__dun = blk_rq_dun(src);
-#endif
 	dst->__data_len = blk_rq_bytes(src);
 	if (src->rq_flags & RQF_SPECIAL_PAYLOAD) {
 		dst->rq_flags |= RQF_SPECIAL_PAYLOAD;
@@ -4009,5 +3996,11 @@
 	blk_debugfs_root = debugfs_create_dir("block", NULL);
 #endif
 
+	if (bio_crypt_ctx_init() < 0)
+		panic("Failed to allocate mem for bio crypt ctxs\n");
+
+	if (blk_crypto_fallback_init() < 0)
+		panic("Failed to init blk-crypto-fallback\n");
+
 	return 0;
 }
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
new file mode 100644
index 0000000..945d23d
--- /dev/null
+++ b/block/blk-crypto-fallback.c
@@ -0,0 +1,656 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * Refer to Documentation/block/inline-encryption.rst for a detailed
+ * explanation.
+ */
+
+#define pr_fmt(fmt) "blk-crypto-fallback: " fmt
+
+#include <crypto/skcipher.h>
+#include <linux/blk-cgroup.h>
+#include <linux/blk-crypto.h>
+#include <linux/crypto.h>
+#include <linux/keyslot-manager.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/random.h>
+
+#include "blk-crypto-internal.h"
+
+static unsigned int num_prealloc_bounce_pg = 32;
+module_param(num_prealloc_bounce_pg, uint, 0);
+MODULE_PARM_DESC(num_prealloc_bounce_pg,
+		 "Number of preallocated bounce pages for the blk-crypto crypto API fallback");
+
+static unsigned int blk_crypto_num_keyslots = 100;
+module_param_named(num_keyslots, blk_crypto_num_keyslots, uint, 0);
+MODULE_PARM_DESC(num_keyslots,
+		 "Number of keyslots for the blk-crypto crypto API fallback");
+
+static unsigned int num_prealloc_fallback_crypt_ctxs = 128;
+module_param(num_prealloc_fallback_crypt_ctxs, uint, 0);
+MODULE_PARM_DESC(num_prealloc_fallback_crypt_ctxs,
+		 "Number of preallocated bio fallback crypto contexts for blk-crypto to use during crypto API fallback");
+
+struct bio_fallback_crypt_ctx {
+	struct bio_crypt_ctx crypt_ctx;
+	/*
+	 * Copy of the bvec_iter when this bio was submitted.
+	 * We only want to en/decrypt the part of the bio described by the
+	 * bvec_iter at submission time, because the bio might be split before
+	 * being resubmitted.
+	 */
+	struct bvec_iter crypt_iter;
+	u64 fallback_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+};
+
+/* The following few vars are only used during the crypto API fallback */
+static struct kmem_cache *bio_fallback_crypt_ctx_cache;
+static mempool_t *bio_fallback_crypt_ctx_pool;
+
+/*
+ * Allocating a crypto tfm during I/O can deadlock, so we have to preallocate
+ * all of a mode's tfms when that mode starts being used. Since each mode may
+ * need all the keyslots at some point, each mode needs its own tfm for each
+ * keyslot; thus, a keyslot may contain tfms for multiple modes.  However, to
+ * match the behavior of real inline encryption hardware (which only supports a
+ * single encryption context per keyslot), we only allow one tfm per keyslot to
+ * be used at a time - the unused tfms have their keys cleared.
+ */
+static DEFINE_MUTEX(tfms_init_lock);
+static bool tfms_inited[BLK_ENCRYPTION_MODE_MAX];
+
+struct blk_crypto_decrypt_work {
+	struct work_struct work;
+	struct bio *bio;
+};
+
+static struct blk_crypto_keyslot {
+	struct crypto_skcipher *tfm;
+	enum blk_crypto_mode_num crypto_mode;
+	struct crypto_skcipher *tfms[BLK_ENCRYPTION_MODE_MAX];
+} *blk_crypto_keyslots;
+
+/* The following few vars are only used during the crypto API fallback */
+static struct keyslot_manager *blk_crypto_ksm;
+static struct workqueue_struct *blk_crypto_wq;
+static mempool_t *blk_crypto_bounce_page_pool;
+static struct kmem_cache *blk_crypto_decrypt_work_cache;
+
+bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc)
+{
+	return bc && bc->bc_ksm == blk_crypto_ksm;
+}
+
+/*
+ * This is the key we set when evicting a keyslot. This *should* be the all 0's
+ * key, but AES-XTS rejects that key, so we use some random bytes instead.
+ */
+static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE];
+
+static void blk_crypto_evict_keyslot(unsigned int slot)
+{
+	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
+	enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode;
+	int err;
+
+	WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID);
+
+	/* Clear the key in the skcipher */
+	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], blank_key,
+				     blk_crypto_modes[crypto_mode].keysize);
+	WARN_ON(err);
+	slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID;
+}
+
+static int blk_crypto_keyslot_program(struct keyslot_manager *ksm,
+				      const struct blk_crypto_key *key,
+				      unsigned int slot)
+{
+	struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
+	const enum blk_crypto_mode_num crypto_mode = key->crypto_mode;
+	int err;
+
+	if (crypto_mode != slotp->crypto_mode &&
+	    slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID) {
+		blk_crypto_evict_keyslot(slot);
+	}
+
+	if (!slotp->tfms[crypto_mode])
+		return -ENOMEM;
+	slotp->crypto_mode = crypto_mode;
+	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->raw,
+				     key->size);
+	if (err) {
+		blk_crypto_evict_keyslot(slot);
+		return err;
+	}
+	return 0;
+}
+
+static int blk_crypto_keyslot_evict(struct keyslot_manager *ksm,
+				    const struct blk_crypto_key *key,
+				    unsigned int slot)
+{
+	blk_crypto_evict_keyslot(slot);
+	return 0;
+}
+
+/*
+ * The crypto API fallback KSM ops - only used for a bio when it specifies a
+ * blk_crypto_mode for which we failed to get a keyslot in the device's inline
+ * encryption hardware (which probably means the device doesn't have inline
+ * encryption hardware that supports that crypto mode).
+ */
+static const struct keyslot_mgmt_ll_ops blk_crypto_ksm_ll_ops = {
+	.keyslot_program	= blk_crypto_keyslot_program,
+	.keyslot_evict		= blk_crypto_keyslot_evict,
+};
+
+static void blk_crypto_encrypt_endio(struct bio *enc_bio)
+{
+	struct bio *src_bio = enc_bio->bi_private;
+	int i;
+
+	for (i = 0; i < enc_bio->bi_vcnt; i++)
+		mempool_free(enc_bio->bi_io_vec[i].bv_page,
+			     blk_crypto_bounce_page_pool);
+
+	src_bio->bi_status = enc_bio->bi_status;
+
+	bio_put(enc_bio);
+	bio_endio(src_bio);
+}
+
+static struct bio *blk_crypto_clone_bio(struct bio *bio_src)
+{
+	struct bvec_iter iter;
+	struct bio_vec bv;
+	struct bio *bio;
+
+	bio = bio_alloc_bioset(GFP_NOIO, bio_segments(bio_src), NULL);
+	if (!bio)
+		return NULL;
+	bio->bi_disk		= bio_src->bi_disk;
+	bio->bi_opf		= bio_src->bi_opf;
+	bio->bi_ioprio		= bio_src->bi_ioprio;
+	bio->bi_write_hint	= bio_src->bi_write_hint;
+	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
+	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
+
+	bio_for_each_segment(bv, bio_src, iter)
+		bio->bi_io_vec[bio->bi_vcnt++] = bv;
+
+	if (bio_integrity(bio_src) &&
+	    bio_integrity_clone(bio, bio_src, GFP_NOIO) < 0) {
+		bio_put(bio);
+		return NULL;
+	}
+
+	bio_clone_blkcg_association(bio, bio_src);
+
+	bio_clone_skip_dm_default_key(bio, bio_src);
+
+	return bio;
+}
+
+static int blk_crypto_alloc_cipher_req(struct bio *src_bio,
+				       struct skcipher_request **ciph_req_ret,
+				       struct crypto_wait *wait)
+{
+	struct skcipher_request *ciph_req;
+	const struct blk_crypto_keyslot *slotp;
+
+	slotp = &blk_crypto_keyslots[src_bio->bi_crypt_context->bc_keyslot];
+	ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
+					  GFP_NOIO);
+	if (!ciph_req) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		return -ENOMEM;
+	}
+
+	skcipher_request_set_callback(ciph_req,
+				      CRYPTO_TFM_REQ_MAY_BACKLOG |
+				      CRYPTO_TFM_REQ_MAY_SLEEP,
+				      crypto_req_done, wait);
+	*ciph_req_ret = ciph_req;
+	return 0;
+}
+
+static int blk_crypto_split_bio_if_needed(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	unsigned int i = 0;
+	unsigned int num_sectors = 0;
+	struct bio_vec bv;
+	struct bvec_iter iter;
+
+	bio_for_each_segment(bv, bio, iter) {
+		num_sectors += bv.bv_len >> SECTOR_SHIFT;
+		if (++i == BIO_MAX_PAGES)
+			break;
+	}
+	if (num_sectors < bio_sectors(bio)) {
+		struct bio *split_bio;
+
+		split_bio = bio_split(bio, num_sectors, GFP_NOIO, NULL);
+		if (!split_bio) {
+			bio->bi_status = BLK_STS_RESOURCE;
+			return -ENOMEM;
+		}
+		bio_chain(split_bio, bio);
+		generic_make_request(bio);
+		*bio_ptr = split_bio;
+	}
+	return 0;
+}
+
+union blk_crypto_iv {
+	__le64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+	u8 bytes[BLK_CRYPTO_MAX_IV_SIZE];
+};
+
+static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+				 union blk_crypto_iv *iv)
+{
+	int i;
+
+	for (i = 0; i < BLK_CRYPTO_DUN_ARRAY_SIZE; i++)
+		iv->dun[i] = cpu_to_le64(dun[i]);
+}
+
+/*
+ * The crypto API fallback's encryption routine.
+ * Allocate a bounce bio for encryption, encrypt the input bio using the
+ * crypto API, and replace *bio_ptr with the bounce bio. May split the input
+ * bio if it's too large.
+ */
+static int blk_crypto_encrypt_bio(struct bio **bio_ptr)
+{
+	struct bio *src_bio;
+	struct skcipher_request *ciph_req = NULL;
+	DECLARE_CRYPTO_WAIT(wait);
+	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+	union blk_crypto_iv iv;
+	struct scatterlist src, dst;
+	struct bio *enc_bio;
+	unsigned int i, j;
+	int data_unit_size;
+	struct bio_crypt_ctx *bc;
+	int err = 0;
+
+	/* Split the bio if it's too big for single page bvec */
+	err = blk_crypto_split_bio_if_needed(bio_ptr);
+	if (err)
+		return err;
+
+	src_bio = *bio_ptr;
+	bc = src_bio->bi_crypt_context;
+	data_unit_size = bc->bc_key->data_unit_size;
+
+	/* Allocate bounce bio for encryption */
+	enc_bio = blk_crypto_clone_bio(src_bio);
+	if (!enc_bio) {
+		src_bio->bi_status = BLK_STS_RESOURCE;
+		return -ENOMEM;
+	}
+
+	/*
+	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
+	 * for the algorithm and key specified for this bio.
+	 */
+	err = bio_crypt_ctx_acquire_keyslot(bc, blk_crypto_ksm);
+	if (err) {
+		src_bio->bi_status = BLK_STS_IOERR;
+		goto out_put_enc_bio;
+	}
+
+	/* and then allocate an skcipher_request for it */
+	err = blk_crypto_alloc_cipher_req(src_bio, &ciph_req, &wait);
+	if (err)
+		goto out_release_keyslot;
+
+	memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
+	sg_init_table(&src, 1);
+	sg_init_table(&dst, 1);
+
+	skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size,
+				   iv.bytes);
+
+	/* Encrypt each page in the bounce bio */
+	for (i = 0; i < enc_bio->bi_vcnt; i++) {
+		struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i];
+		struct page *plaintext_page = enc_bvec->bv_page;
+		struct page *ciphertext_page =
+			mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO);
+
+		enc_bvec->bv_page = ciphertext_page;
+
+		if (!ciphertext_page) {
+			src_bio->bi_status = BLK_STS_RESOURCE;
+			err = -ENOMEM;
+			goto out_free_bounce_pages;
+		}
+
+		sg_set_page(&src, plaintext_page, data_unit_size,
+			    enc_bvec->bv_offset);
+		sg_set_page(&dst, ciphertext_page, data_unit_size,
+			    enc_bvec->bv_offset);
+
+		/* Encrypt each data unit in this page */
+		for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
+			blk_crypto_dun_to_iv(curr_dun, &iv);
+			err = crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
+					      &wait);
+			if (err) {
+				i++;
+				src_bio->bi_status = BLK_STS_RESOURCE;
+				goto out_free_bounce_pages;
+			}
+			bio_crypt_dun_increment(curr_dun, 1);
+			src.offset += data_unit_size;
+			dst.offset += data_unit_size;
+		}
+	}
+
+	enc_bio->bi_private = src_bio;
+	enc_bio->bi_end_io = blk_crypto_encrypt_endio;
+	*bio_ptr = enc_bio;
+
+	enc_bio = NULL;
+	err = 0;
+	goto out_free_ciph_req;
+
+out_free_bounce_pages:
+	while (i > 0)
+		mempool_free(enc_bio->bi_io_vec[--i].bv_page,
+			     blk_crypto_bounce_page_pool);
+out_free_ciph_req:
+	skcipher_request_free(ciph_req);
+out_release_keyslot:
+	bio_crypt_ctx_release_keyslot(bc);
+out_put_enc_bio:
+	if (enc_bio)
+		bio_put(enc_bio);
+
+	return err;
+}
+
+static void blk_crypto_free_fallback_crypt_ctx(struct bio *bio)
+{
+	mempool_free(container_of(bio->bi_crypt_context,
+				  struct bio_fallback_crypt_ctx,
+				  crypt_ctx),
+		     bio_fallback_crypt_ctx_pool);
+	bio->bi_crypt_context = NULL;
+}
+
+/*
+ * The crypto API fallback's main decryption routine.
+ * Decrypts input bio in place.
+ */
+static void blk_crypto_decrypt_bio(struct work_struct *work)
+{
+	struct blk_crypto_decrypt_work *decrypt_work =
+		container_of(work, struct blk_crypto_decrypt_work, work);
+	struct bio *bio = decrypt_work->bio;
+	struct skcipher_request *ciph_req = NULL;
+	DECLARE_CRYPTO_WAIT(wait);
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+	union blk_crypto_iv iv;
+	struct scatterlist sg;
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+	struct bio_fallback_crypt_ctx *f_ctx =
+		container_of(bc, struct bio_fallback_crypt_ctx, crypt_ctx);
+	const int data_unit_size = bc->bc_key->data_unit_size;
+	unsigned int i;
+	int err;
+
+	/*
+	 * Use the crypto API fallback keyslot manager to get a crypto_skcipher
+	 * for the algorithm and key specified for this bio.
+	 */
+	if (bio_crypt_ctx_acquire_keyslot(bc, blk_crypto_ksm)) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out_no_keyslot;
+	}
+
+	/* and then allocate an skcipher_request for it */
+	err = blk_crypto_alloc_cipher_req(bio, &ciph_req, &wait);
+	if (err)
+		goto out;
+
+	memcpy(curr_dun, f_ctx->fallback_dun, sizeof(curr_dun));
+	sg_init_table(&sg, 1);
+	skcipher_request_set_crypt(ciph_req, &sg, &sg, data_unit_size,
+				   iv.bytes);
+
+	/* Decrypt each segment in the bio */
+	__bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) {
+		struct page *page = bv.bv_page;
+
+		sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
+
+		/* Decrypt each data unit in the segment */
+		for (i = 0; i < bv.bv_len; i += data_unit_size) {
+			blk_crypto_dun_to_iv(curr_dun, &iv);
+			if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
+					    &wait)) {
+				bio->bi_status = BLK_STS_IOERR;
+				goto out;
+			}
+			bio_crypt_dun_increment(curr_dun, 1);
+			sg.offset += data_unit_size;
+		}
+	}
+
+out:
+	skcipher_request_free(ciph_req);
+	bio_crypt_ctx_release_keyslot(bc);
+out_no_keyslot:
+	kmem_cache_free(blk_crypto_decrypt_work_cache, decrypt_work);
+	blk_crypto_free_fallback_crypt_ctx(bio);
+	bio_endio(bio);
+}
+
+/*
+ * Queue bio for decryption.
+ * Returns true iff bio was queued for decryption.
+ */
+bool blk_crypto_queue_decrypt_bio(struct bio *bio)
+{
+	struct blk_crypto_decrypt_work *decrypt_work;
+
+	/* If there was an IO error, don't queue for decrypt. */
+	if (bio->bi_status)
+		goto out;
+
+	decrypt_work = kmem_cache_zalloc(blk_crypto_decrypt_work_cache,
+					 GFP_ATOMIC);
+	if (!decrypt_work) {
+		bio->bi_status = BLK_STS_RESOURCE;
+		goto out;
+	}
+
+	INIT_WORK(&decrypt_work->work, blk_crypto_decrypt_bio);
+	decrypt_work->bio = bio;
+	queue_work(blk_crypto_wq, &decrypt_work->work);
+
+	return true;
+out:
+	blk_crypto_free_fallback_crypt_ctx(bio);
+	return false;
+}
+
+/**
+ * blk_crypto_start_using_mode() - Start using a crypto algorithm on a device
+ * @mode_num: the blk_crypto_mode we want to allocate ciphers for.
+ * @data_unit_size: the data unit size that will be used
+ * @q: the request queue for the device
+ *
+ * Upper layers must call this function to ensure that the crypto API fallback
+ * has transforms for this algorithm, should they become necessary.
+ *
+ * Return: 0 on success, or -errno on error.
+ */
+int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
+				unsigned int data_unit_size,
+				struct request_queue *q)
+{
+	struct blk_crypto_keyslot *slotp;
+	unsigned int i;
+	int err = 0;
+
+	/*
+	 * Fast path
+	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+	 * for each i are visible before we try to access them.
+	 */
+	if (likely(smp_load_acquire(&tfms_inited[mode_num])))
+		return 0;
+
+	/*
+	 * If the keyslot manager of the request queue supports this
+	 * crypto mode, then we don't need to allocate this mode.
+	 */
+	if (keyslot_manager_crypto_mode_supported(q->ksm, mode_num,
+						  data_unit_size))
+		return 0;
+
+	mutex_lock(&tfms_init_lock);
+	if (likely(tfms_inited[mode_num]))
+		goto out;
+
+	for (i = 0; i < blk_crypto_num_keyslots; i++) {
+		slotp = &blk_crypto_keyslots[i];
+		slotp->tfms[mode_num] = crypto_alloc_skcipher(
+					blk_crypto_modes[mode_num].cipher_str,
+					0, 0);
+		if (IS_ERR(slotp->tfms[mode_num])) {
+			err = PTR_ERR(slotp->tfms[mode_num]);
+			slotp->tfms[mode_num] = NULL;
+			goto out_free_tfms;
+		}
+
+		crypto_skcipher_set_flags(slotp->tfms[mode_num],
+					  CRYPTO_TFM_REQ_WEAK_KEY);
+	}
+
+	/*
+	 * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
+	 * for each i are visible before we set tfms_inited[mode_num].
+	 */
+	smp_store_release(&tfms_inited[mode_num], true);
+	goto out;
+
+out_free_tfms:
+	for (i = 0; i < blk_crypto_num_keyslots; i++) {
+		slotp = &blk_crypto_keyslots[i];
+		crypto_free_skcipher(slotp->tfms[mode_num]);
+		slotp->tfms[mode_num] = NULL;
+	}
+out:
+	mutex_unlock(&tfms_init_lock);
+	return err;
+}
+EXPORT_SYMBOL_GPL(blk_crypto_start_using_mode);
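+
+/*
+ * Usage sketch (hypothetical caller, not part of this patch): a filesystem
+ * that plans to submit bios encrypted with AES-256-XTS and 4K data units
+ * could do, at key-setup time:
+ *
+ *	err = blk_crypto_start_using_mode(BLK_ENCRYPTION_MODE_AES_256_XTS,
+ *					  4096, bdev_get_queue(bdev));
+ *	if (err)
+ *		return err;
+ */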
+
+int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
+{
+	return keyslot_manager_evict_key(blk_crypto_ksm, key);
+}
+
+int blk_crypto_fallback_submit_bio(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+	struct bio_fallback_crypt_ctx *f_ctx;
+
+	if (bc->bc_key->is_hw_wrapped) {
+		pr_warn_once("HW wrapped key cannot be used with fallback.\n");
+		bio->bi_status = BLK_STS_NOTSUPP;
+		return -EOPNOTSUPP;
+	}
+
+	if (!tfms_inited[bc->bc_key->crypto_mode]) {
+		bio->bi_status = BLK_STS_IOERR;
+		return -EIO;
+	}
+
+	if (bio_data_dir(bio) == WRITE)
+		return blk_crypto_encrypt_bio(bio_ptr);
+
+	/*
+	 * Mark bio as fallback crypted and replace the bio_crypt_ctx with
+	 * another one contained in a bio_fallback_crypt_ctx, so that the
+	 * fallback has space to store the info it needs for decryption.
+	 */
+	bc->bc_ksm = blk_crypto_ksm;
+	f_ctx = mempool_alloc(bio_fallback_crypt_ctx_pool, GFP_NOIO);
+	f_ctx->crypt_ctx = *bc;
+	memcpy(f_ctx->fallback_dun, bc->bc_dun, sizeof(f_ctx->fallback_dun));
+	f_ctx->crypt_iter = bio->bi_iter;
+
+	bio_crypt_free_ctx(bio);
+	bio->bi_crypt_context = &f_ctx->crypt_ctx;
+
+	return 0;
+}
+
+int __init blk_crypto_fallback_init(void)
+{
+	int i;
+	unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX];
+
+	prandom_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
+
+	/* All blk-crypto modes have a crypto API fallback. */
+	for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++)
+		crypto_mode_supported[i] = 0xFFFFFFFF;
+	crypto_mode_supported[BLK_ENCRYPTION_MODE_INVALID] = 0;
+
+	blk_crypto_ksm = keyslot_manager_create(blk_crypto_num_keyslots,
+						&blk_crypto_ksm_ll_ops,
+						crypto_mode_supported, NULL);
+	if (!blk_crypto_ksm)
+		return -ENOMEM;
+
+	blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
+					WQ_UNBOUND | WQ_HIGHPRI |
+					WQ_MEM_RECLAIM, num_online_cpus());
+	if (!blk_crypto_wq)
+		return -ENOMEM;
+
+	blk_crypto_keyslots = kcalloc(blk_crypto_num_keyslots,
+				      sizeof(blk_crypto_keyslots[0]),
+				      GFP_KERNEL);
+	if (!blk_crypto_keyslots)
+		return -ENOMEM;
+
+	blk_crypto_bounce_page_pool =
+		mempool_create_page_pool(num_prealloc_bounce_pg, 0);
+	if (!blk_crypto_bounce_page_pool)
+		return -ENOMEM;
+
+	blk_crypto_decrypt_work_cache = KMEM_CACHE(blk_crypto_decrypt_work,
+						   SLAB_RECLAIM_ACCOUNT);
+	if (!blk_crypto_decrypt_work_cache)
+		return -ENOMEM;
+
+	bio_fallback_crypt_ctx_cache = KMEM_CACHE(bio_fallback_crypt_ctx, 0);
+	if (!bio_fallback_crypt_ctx_cache)
+		return -ENOMEM;
+
+	bio_fallback_crypt_ctx_pool =
+		mempool_create_slab_pool(num_prealloc_fallback_crypt_ctxs,
+					 bio_fallback_crypt_ctx_cache);
+	if (!bio_fallback_crypt_ctx_pool)
+		return -ENOMEM;
+
+	return 0;
+}
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
new file mode 100644
index 0000000..40d826b
--- /dev/null
+++ b/block/blk-crypto-internal.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_INTERNAL_H
+#define __LINUX_BLK_CRYPTO_INTERNAL_H
+
+#include <linux/bio.h>
+
+/* Represents a crypto mode supported by blk-crypto */
+struct blk_crypto_mode {
+	const char *cipher_str; /* crypto API name (for fallback case) */
+	unsigned int keysize; /* key size in bytes */
+	unsigned int ivsize; /* iv size in bytes */
+};
+
+extern const struct blk_crypto_mode blk_crypto_modes[];
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
+
+int blk_crypto_fallback_submit_bio(struct bio **bio_ptr);
+
+bool blk_crypto_queue_decrypt_bio(struct bio *bio);
+
+int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
+
+bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+static inline bool bio_crypt_fallback_crypted(const struct bio_crypt_ctx *bc)
+{
+	return false;
+}
+
+static inline int blk_crypto_fallback_submit_bio(struct bio **bio_ptr)
+{
+	pr_warn_once("crypto API fallback disabled; failing request\n");
+	(*bio_ptr)->bi_status = BLK_STS_NOTSUPP;
+	return -EIO;
+}
+
+static inline bool blk_crypto_queue_decrypt_bio(struct bio *bio)
+{
+	WARN_ON(1);
+	return false;
+}
+
+static inline int
+blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
+{
+	return 0;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+#endif /* __LINUX_BLK_CRYPTO_INTERNAL_H */
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
new file mode 100644
index 0000000..88df1c0
--- /dev/null
+++ b/block/blk-crypto.c
@@ -0,0 +1,260 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * Refer to Documentation/block/inline-encryption.rst for a detailed
+ * explanation.
+ */
+
+#define pr_fmt(fmt) "blk-crypto: " fmt
+
+#include <linux/blk-crypto.h>
+#include <linux/blkdev.h>
+#include <linux/keyslot-manager.h>
+#include <linux/random.h>
+#include <linux/siphash.h>
+
+#include "blk-crypto-internal.h"
+
+const struct blk_crypto_mode blk_crypto_modes[] = {
+	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
+		.cipher_str = "xts(aes)",
+		.keysize = 64,
+		.ivsize = 16,
+	},
+	[BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV] = {
+		.cipher_str = "essiv(cbc(aes),sha256)",
+		.keysize = 16,
+		.ivsize = 16,
+	},
+	[BLK_ENCRYPTION_MODE_ADIANTUM] = {
+		.cipher_str = "adiantum(xchacha12,aes)",
+		.keysize = 32,
+		.ivsize = 32,
+	},
+};
+
+/* Check that all I/O segments are data unit aligned */
+static int bio_crypt_check_alignment(struct bio *bio)
+{
+	const unsigned int data_unit_size =
+				bio->bi_crypt_context->bc_key->data_unit_size;
+	struct bvec_iter iter;
+	struct bio_vec bv;
+
+	bio_for_each_segment(bv, bio, iter) {
+		if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
+			return -EIO;
+	}
+	return 0;
+}
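+
+/*
+ * For example (illustrative): with a 4096-byte data unit size, a 512-byte
+ * segment, or a segment starting at a 512-byte offset within its page, fails
+ * the check above, since en/decryption is only done on complete data units.
+ */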
+
+/**
+ * blk_crypto_submit_bio - handle submitting bio for inline encryption
+ *
+ * @bio_ptr: pointer to original bio pointer
+ *
+ * If the bio doesn't have inline encryption enabled or the submitter already
+ * specified a keyslot for the target device, do nothing.  Otherwise, a raw key
+ * must have been provided, so acquire a device keyslot for it if supported;
+ * failing that, use the crypto API fallback.
+ *
+ * When the crypto API fallback is used for encryption, blk-crypto may split
+ * the bio into two: the first part continues to be processed, while the second
+ * is resubmitted via generic_make_request.  A bounce bio is allocated to hold
+ * the encrypted contents of the first part, and *bio_ptr is updated to point
+ * to this bounce bio.
+ *
+ * Return: 0 if bio submission should continue; nonzero if bio_endio() was
+ *	   already called so bio submission should abort.
+ */
+int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+	struct bio *bio = *bio_ptr;
+	struct request_queue *q;
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+	int err;
+
+	if (!bc || !bio_has_data(bio))
+		return 0;
+
+	/*
+	 * When a read bio is marked for fallback decryption, its bi_iter is
+	 * saved so that when we decrypt the bio later, we know what part of it
+	 * was marked for fallback decryption.  (After the bio is passed down by
+	 * blk_crypto_submit_bio(), it may be split or advanced, so we cannot
+	 * rely on the bi_iter while decrypting in blk_crypto_endio().)
+	 */
+	if (bio_crypt_fallback_crypted(bc))
+		return 0;
+
+	err = bio_crypt_check_alignment(bio);
+	if (err) {
+		bio->bi_status = BLK_STS_IOERR;
+		goto out;
+	}
+
+	q = bio->bi_disk->queue;
+
+	if (bc->bc_ksm) {
+		/* Key already programmed into device? */
+		if (q->ksm == bc->bc_ksm)
+			return 0;
+
+		/* Nope, release the existing keyslot. */
+		bio_crypt_ctx_release_keyslot(bc);
+	}
+
+	/* Get device keyslot if supported */
+	if (keyslot_manager_crypto_mode_supported(q->ksm,
+						  bc->bc_key->crypto_mode,
+						  bc->bc_key->data_unit_size)) {
+		err = bio_crypt_ctx_acquire_keyslot(bc, q->ksm);
+		if (!err)
+			return 0;
+
+		pr_warn_once("Failed to acquire keyslot for %s (err=%d).  Falling back to crypto API.\n",
+			     bio->bi_disk->disk_name, err);
+	}
+
+	/* Fall back to the crypto API */
+	err = blk_crypto_fallback_submit_bio(bio_ptr);
+	if (err)
+		goto out;
+
+	return 0;
+out:
+	bio_endio(*bio_ptr);
+	return err;
+}
+
+/**
+ * blk_crypto_endio - clean up bio w.r.t inline encryption during bio_endio
+ *
+ * @bio: the bio to clean up
+ *
+ * If blk_crypto_submit_bio decided to fall back to the crypto API for this
+ * bio, we queue the bio for decryption into a workqueue, return false, and
+ * call bio_endio(bio) at a later time (after the bio has been decrypted).
+ *
+ * If the bio is not to be decrypted by the crypto API, this function releases
+ * the reference to the keyslot that blk_crypto_submit_bio got.
+ *
+ * Return: true if bio_endio should continue; false otherwise (bio_endio will
+ * be called again when bio has been decrypted).
+ */
+bool blk_crypto_endio(struct bio *bio)
+{
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+
+	if (!bc)
+		return true;
+
+	if (bio_crypt_fallback_crypted(bc)) {
+		/*
+		 * The only bios whose crypto is handled by the blk-crypto
+		 * fallback when they reach here are those with
+		 * bio_data_dir(bio) == READ, since WRITE bios that are
+		 * encrypted by the crypto API fallback are handled by
+		 * blk_crypto_encrypt_endio().
+		 */
+		return !blk_crypto_queue_decrypt_bio(bio);
+	}
+
+	if (bc->bc_keyslot >= 0)
+		bio_crypt_ctx_release_keyslot(bc);
+
+	return true;
+}
+
+/**
+ * blk_crypto_init_key() - Prepare a key for use with blk-crypto
+ * @blk_key: Pointer to the blk_crypto_key to initialize.
+ * @raw_key: Pointer to the raw key.
+ * @raw_key_size: Size of raw key.  Must be at least the required size for the
+ *                chosen @crypto_mode; see blk_crypto_modes[].  (It's allowed
+ *                to be longer than the mode's actual key size, in order to
+ *                support inline encryption hardware that accepts wrapped keys.
+ *                @is_hw_wrapped has to be set for such keys)
+ * @is_hw_wrapped: Denotes that @raw_key is a hardware-wrapped key.
+ * @crypto_mode: identifier for the encryption algorithm to use
+ * @data_unit_size: the data unit size to use for en/decryption
+ *
+ * Return: 0 on success, or -errno on error.
+ */
+int blk_crypto_init_key(struct blk_crypto_key *blk_key,
+			const u8 *raw_key, unsigned int raw_key_size,
+			bool is_hw_wrapped,
+			enum blk_crypto_mode_num crypto_mode,
+			unsigned int data_unit_size)
+{
+	const struct blk_crypto_mode *mode;
+	static siphash_key_t hash_key;
+
+	memset(blk_key, 0, sizeof(*blk_key));
+
+	if (crypto_mode >= ARRAY_SIZE(blk_crypto_modes))
+		return -EINVAL;
+
+	BUILD_BUG_ON(BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE < BLK_CRYPTO_MAX_KEY_SIZE);
+
+	mode = &blk_crypto_modes[crypto_mode];
+	if (is_hw_wrapped) {
+		if (raw_key_size < mode->keysize ||
+		    raw_key_size > BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE)
+			return -EINVAL;
+	} else {
+		if (raw_key_size != mode->keysize)
+			return -EINVAL;
+	}
+
+	if (!is_power_of_2(data_unit_size))
+		return -EINVAL;
+
+	blk_key->crypto_mode = crypto_mode;
+	blk_key->data_unit_size = data_unit_size;
+	blk_key->data_unit_size_bits = ilog2(data_unit_size);
+	blk_key->size = raw_key_size;
+	blk_key->is_hw_wrapped = is_hw_wrapped;
+	memcpy(blk_key->raw, raw_key, raw_key_size);
+
+	/*
+	 * The keyslot manager uses the SipHash of the key to implement O(1) key
+	 * lookups while avoiding leaking information about the keys.  It's
+	 * precomputed here so that it only needs to be computed once per key.
+	 */
+	get_random_once(&hash_key, sizeof(hash_key));
+	blk_key->hash = siphash(raw_key, raw_key_size, &hash_key);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(blk_crypto_init_key);
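+
+/*
+ * Example (hypothetical, for illustration only): initializing a standard
+ * (non-wrapped) AES-256-XTS key, which takes a 64-byte raw key, with
+ * 4096-byte data units:
+ *
+ *	u8 raw[64];
+ *	struct blk_crypto_key key;
+ *	int err;
+ *
+ *	get_random_bytes(raw, sizeof(raw));
+ *	err = blk_crypto_init_key(&key, raw, sizeof(raw), false,
+ *				  BLK_ENCRYPTION_MODE_AES_256_XTS, 4096);
+ */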
+
+/**
+ * blk_crypto_evict_key() - Evict a key from any inline encryption hardware
+ *			    it may have been programmed into
+ * @q: The request queue whose keyslot manager this key might have been
+ *     programmed into
+ * @key: The key to evict
+ *
+ * Upper layers (filesystems) should call this function to ensure that a key
+ * is evicted from hardware that it might have been programmed into. This
+ * will call keyslot_manager_evict_key() on the queue's keyslot manager, if one
+ * exists and supports the crypto algorithm with the specified data unit size.
+ * Otherwise, it will evict the key from the blk-crypto-fallback's ksm.
+ *
+ * Return: 0 on success, or -errno on error.
+ */
+int blk_crypto_evict_key(struct request_queue *q,
+			 const struct blk_crypto_key *key)
+{
+	if (q->ksm &&
+	    keyslot_manager_crypto_mode_supported(q->ksm, key->crypto_mode,
+						  key->data_unit_size))
+		return keyslot_manager_evict_key(q->ksm, key);
+
+	return blk_crypto_fallback_evict_key(key);
+}
+EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
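+
+/*
+ * Usage sketch (hypothetical caller): a filesystem dropping a key when its
+ * last user goes away might do:
+ *
+ *	err = blk_crypto_evict_key(bdev_get_queue(bdev), &key);
+ *	if (err)
+ *		pr_warn("blk-crypto: failed to evict key (err %d)\n", err);
+ */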
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8c8c285..ac7ff16 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -9,7 +9,7 @@
 #include <linux/scatterlist.h>
 
 #include <trace/events/block.h>
-#include <linux/pfk.h>
+
 #include "blk.h"
 
 static struct bio *blk_bio_discard_split(struct request_queue *q,
@@ -515,13 +515,13 @@
 	if (blk_integrity_rq(req) &&
 	    integrity_req_gap_back_merge(req, bio))
 		return 0;
-	if (blk_try_merge(req, bio) != ELEVATOR_BACK_MERGE)
-		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
 		req_set_nomerge(q, req);
 		return 0;
 	}
+	if (!bio_crypt_ctx_mergeable(req->bio, blk_rq_bytes(req), bio))
+		return 0;
 	if (!bio_flagged(req->biotail, BIO_SEG_VALID))
 		blk_recount_segments(q, req->biotail);
 	if (!bio_flagged(bio, BIO_SEG_VALID))
@@ -539,13 +539,13 @@
 	if (blk_integrity_rq(req) &&
 	    integrity_req_gap_front_merge(req, bio))
 		return 0;
-	if (blk_try_merge(req, bio) != ELEVATOR_FRONT_MERGE)
-		return 0;
 	if (blk_rq_sectors(req) + bio_sectors(bio) >
 	    blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
 		req_set_nomerge(q, req);
 		return 0;
 	}
+	if (!bio_crypt_ctx_mergeable(bio, bio->bi_iter.bi_size, req->bio))
+		return 0;
 	if (!bio_flagged(bio, BIO_SEG_VALID))
 		blk_recount_segments(q, bio);
 	if (!bio_flagged(req->bio, BIO_SEG_VALID))
@@ -622,6 +622,9 @@
 	if (blk_integrity_merge_rq(q, req, next) == false)
 		return 0;
 
+	if (!bio_crypt_ctx_mergeable(req->bio, blk_rq_bytes(req), next->bio))
+		return 0;
+
 	/* Merge is OK... */
 	req->nr_phys_segments = total_phys_segments;
 	return 1;
@@ -674,11 +677,6 @@
 	}
 }
 
-static bool crypto_not_mergeable(const struct bio *bio, const struct bio *nxt)
-{
-	return (!pfk_allow_merge_bio(bio, nxt));
-}
-
 /*
  * For non-mq, this has to be called with the request spinlock acquired.
  * For mq with scheduling, the appropriate queue wide lock should be held.
@@ -717,9 +715,6 @@
 	if (req->write_hint != next->write_hint)
 		return NULL;
 
-	if (crypto_not_mergeable(req->bio, next->bio))
-		return 0;
-
 	/*
 	 * If we are allowed to merge, then append bio list
 	 * from next to rq and release next. merge_requests_fn
@@ -850,6 +845,10 @@
 	if (rq->write_hint != bio->bi_write_hint)
 		return false;
 
+	/* Only merge if the crypt contexts are compatible */
+	if (!bio_crypt_ctx_compatible(bio, rq->bio))
+		return false;
+
 	return true;
 }
 
@@ -858,16 +857,9 @@
 	if (req_op(rq) == REQ_OP_DISCARD &&
 	    queue_max_discard_segments(rq->q) > 1)
 		return ELEVATOR_DISCARD_MERGE;
-	else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
-						bio->bi_iter.bi_sector) {
-		if (crypto_not_mergeable(rq->bio, bio))
-			return ELEVATOR_NO_MERGE;
+	else if (blk_rq_pos(rq) + blk_rq_sectors(rq) == bio->bi_iter.bi_sector)
 		return ELEVATOR_BACK_MERGE;
-	} else if (blk_rq_pos(rq) - bio_sectors(bio) ==
-						bio->bi_iter.bi_sector) {
-		if (crypto_not_mergeable(bio, rq->bio))
-			return ELEVATOR_NO_MERGE;
+	else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
 		return ELEVATOR_FRONT_MERGE;
-	}
 	return ELEVATOR_NO_MERGE;
 }
diff --git a/block/blk.h b/block/blk.h
index 34fcead..1a5b67b 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -55,6 +55,24 @@
 		lockdep_assert_held(q->queue_lock);
 }
 
+static inline void queue_flag_set_unlocked(unsigned int flag,
+					   struct request_queue *q)
+{
+	if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
+	    kref_read(&q->kobj.kref))
+		lockdep_assert_held(q->queue_lock);
+	__set_bit(flag, &q->queue_flags);
+}
+
+static inline void queue_flag_clear_unlocked(unsigned int flag,
+					     struct request_queue *q)
+{
+	if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
+	    kref_read(&q->kobj.kref))
+		lockdep_assert_held(q->queue_lock);
+	__clear_bit(flag, &q->queue_flags);
+}
+
 static inline int queue_flag_test_and_clear(unsigned int flag,
 					    struct request_queue *q)
 {
diff --git a/block/bounce.c b/block/bounce.c
index c6a5536..dc37375 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -267,17 +267,14 @@
 		break;
 	}
 
-	if (bio_integrity(bio_src)) {
-		int ret;
+	bio_crypt_clone(bio, bio_src, gfp_mask);
 
-		ret = bio_integrity_clone(bio, bio_src, gfp_mask);
-		if (ret < 0) {
-			bio_put(bio);
-			return NULL;
-		}
+	if (bio_integrity(bio_src) &&
+	    bio_integrity_clone(bio, bio_src, gfp_mask) < 0) {
+		bio_put(bio);
+		return NULL;
 	}
 
-	bio_clone_crypt_key(bio, bio_src);
 	bio_clone_blkcg_association(bio, bio_src);
 
 	return bio;
diff --git a/block/elevator.c b/block/elevator.c
index 3d88ab3..6d940446 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -422,7 +422,7 @@
 {
 	struct elevator_queue *e = q->elevator;
 	struct request *__rq;
-	enum elv_merge ret;
+
 	/*
 	 * Levels of merges:
 	 * 	nomerges:  No merges at all attempted
@@ -435,11 +435,9 @@
 	/*
 	 * First try one-hit cache.
 	 */
-	if (q->last_merge) {
-		if (!elv_bio_merge_ok(q->last_merge, bio))
-			return ELEVATOR_NO_MERGE;
+	if (q->last_merge && elv_bio_merge_ok(q->last_merge, bio)) {
+		enum elv_merge ret = blk_try_merge(q->last_merge, bio);
 
-		ret = blk_try_merge(q->last_merge, bio);
 		if (ret != ELEVATOR_NO_MERGE) {
 			*req = q->last_merge;
 			return ret;
diff --git a/block/keyslot-manager.c b/block/keyslot-manager.c
new file mode 100644
index 0000000..1436426
--- /dev/null
+++ b/block/keyslot-manager.c
@@ -0,0 +1,559 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+/**
+ * DOC: The Keyslot Manager
+ *
+ * Many devices with inline encryption support have a limited number of "slots"
+ * into which encryption contexts may be programmed, and requests can be tagged
+ * with a slot number to specify the key to use for en/decryption.
+ *
+ * As the number of slots is limited, and programming keys is expensive on
+ * many inline encryption devices, we don't want to program the same key into
+ * multiple slots - if multiple requests are using the same key, we want to
+ * program just one slot with that key and use that slot for all requests.
+ *
+ * The keyslot manager manages these keyslots appropriately, and also acts as
+ * an abstraction between the inline encryption hardware and the upper layers.
+ *
+ * Lower layer devices will set up a keyslot manager in their request queue
+ * and tell it how to perform device-specific operations like programming and
+ * evicting keys from keyslots.
+ *
+ * Upper layers will call keyslot_manager_get_slot_for_key() to program a
+ * key into some slot in the inline encryption hardware.
+ */
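+
+/*
+ * A minimal setup sketch (hypothetical driver "foo"; all names below are
+ * assumptions for illustration, not part of this patch):
+ *
+ *	static const struct keyslot_mgmt_ll_ops foo_ksm_ops = {
+ *		.keyslot_program = foo_keyslot_program,
+ *		.keyslot_evict	 = foo_keyslot_evict,
+ *	};
+ *
+ *	q->ksm = keyslot_manager_create(FOO_NUM_KEYSLOTS, &foo_ksm_ops,
+ *					foo_modes_supported, foo_dev);
+ *
+ * Upper layers then translate an encryption context into a slot with
+ * keyslot_manager_get_slot_for_key(q->ksm, key).
+ */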
+#include <crypto/algapi.h>
+#include <linux/keyslot-manager.h>
+#include <linux/atomic.h>
+#include <linux/mutex.h>
+#include <linux/wait.h>
+#include <linux/blkdev.h>
+
+struct keyslot {
+	atomic_t slot_refs;
+	struct list_head idle_slot_node;
+	struct hlist_node hash_node;
+	struct blk_crypto_key key;
+};
+
+struct keyslot_manager {
+	unsigned int num_slots;
+	struct keyslot_mgmt_ll_ops ksm_ll_ops;
+	unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX];
+	void *ll_priv_data;
+
+	/* Protects programming and evicting keys from the device */
+	struct rw_semaphore lock;
+
+	/* List of idle slots, with least recently used slot at front */
+	wait_queue_head_t idle_slots_wait_queue;
+	struct list_head idle_slots;
+	spinlock_t idle_slots_lock;
+
+	/*
+	 * Hash table which maps key hashes to keyslots, so that we can find a
+	 * key's keyslot in O(1) time rather than O(num_slots).  Protected by
+	 * 'lock'.  A cryptographic hash function is used so that timing attacks
+	 * can't leak information about the raw keys.
+	 */
+	struct hlist_head *slot_hashtable;
+	unsigned int slot_hashtable_size;
+
+	/* Per-keyslot data */
+	struct keyslot slots[];
+};
+
+static inline bool keyslot_manager_is_passthrough(struct keyslot_manager *ksm)
+{
+	return ksm->num_slots == 0;
+}
+
+/**
+ * keyslot_manager_create() - Create a keyslot manager
+ * @num_slots: The number of key slots to manage.
+ * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops for the device that this keyslot
+ *		manager will use to perform operations like programming and
+ *		evicting keys.
+ * @crypto_mode_supported:	Array of size BLK_ENCRYPTION_MODE_MAX of
+ *				bitmasks that represent whether a crypto mode
+ *				and data unit size are supported. The i'th bit
+ *				of crypto_mode_supported[crypto_mode] is set iff
+ *				a data unit size of (1 << i) is supported. We
+ *				only support data unit sizes that are powers of
+ *				2.
+ * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
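+ *
+ * For example (illustrative values only), a device supporting AES-256-XTS
+ * with 512- and 4096-byte data unit sizes would pass an array where
+ *
+ *	crypto_mode_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 512 | 4096;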
+ *
+ * Allocate memory for and initialize a keyslot manager. Called by e.g.
+ * storage drivers to set up a keyslot manager in their request_queue.
+ *
+ * Context: May sleep
+ * Return: Pointer to constructed keyslot manager or NULL on error.
+ */
+struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+	const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
+	const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+	void *ll_priv_data)
+{
+	struct keyslot_manager *ksm;
+	unsigned int slot;
+	unsigned int i;
+
+	if (num_slots == 0)
+		return NULL;
+
+	/* Check that all ops are specified */
+	if (ksm_ll_ops->keyslot_program == NULL ||
+	    ksm_ll_ops->keyslot_evict == NULL)
+		return NULL;
+
+	ksm = kvzalloc(struct_size(ksm, slots, num_slots), GFP_KERNEL);
+	if (!ksm)
+		return NULL;
+
+	ksm->num_slots = num_slots;
+	ksm->ksm_ll_ops = *ksm_ll_ops;
+	memcpy(ksm->crypto_mode_supported, crypto_mode_supported,
+	       sizeof(ksm->crypto_mode_supported));
+	ksm->ll_priv_data = ll_priv_data;
+
+	init_rwsem(&ksm->lock);
+
+	init_waitqueue_head(&ksm->idle_slots_wait_queue);
+	INIT_LIST_HEAD(&ksm->idle_slots);
+
+	for (slot = 0; slot < num_slots; slot++) {
+		list_add_tail(&ksm->slots[slot].idle_slot_node,
+			      &ksm->idle_slots);
+	}
+
+	spin_lock_init(&ksm->idle_slots_lock);
+
+	ksm->slot_hashtable_size = roundup_pow_of_two(num_slots);
+	ksm->slot_hashtable = kvmalloc_array(ksm->slot_hashtable_size,
+					     sizeof(ksm->slot_hashtable[0]),
+					     GFP_KERNEL);
+	if (!ksm->slot_hashtable)
+		goto err_free_ksm;
+	for (i = 0; i < ksm->slot_hashtable_size; i++)
+		INIT_HLIST_HEAD(&ksm->slot_hashtable[i]);
+
+	return ksm;
+
+err_free_ksm:
+	keyslot_manager_destroy(ksm);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_create);
+
+static inline struct hlist_head *
+hash_bucket_for_key(struct keyslot_manager *ksm,
+		    const struct blk_crypto_key *key)
+{
+	return &ksm->slot_hashtable[key->hash & (ksm->slot_hashtable_size - 1)];
+}
+
+static void remove_slot_from_lru_list(struct keyslot_manager *ksm, int slot)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ksm->idle_slots_lock, flags);
+	list_del(&ksm->slots[slot].idle_slot_node);
+	spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+}
+
+static int find_keyslot(struct keyslot_manager *ksm,
+			const struct blk_crypto_key *key)
+{
+	const struct hlist_head *head = hash_bucket_for_key(ksm, key);
+	const struct keyslot *slotp;
+
+	hlist_for_each_entry(slotp, head, hash_node) {
+		if (slotp->key.hash == key->hash &&
+		    slotp->key.crypto_mode == key->crypto_mode &&
+		    slotp->key.size == key->size &&
+		    slotp->key.data_unit_size == key->data_unit_size &&
+		    !crypto_memneq(slotp->key.raw, key->raw, key->size))
+			return slotp - ksm->slots;
+	}
+	return -ENOKEY;
+}
+
+static int find_and_grab_keyslot(struct keyslot_manager *ksm,
+				 const struct blk_crypto_key *key)
+{
+	int slot;
+
+	slot = find_keyslot(ksm, key);
+	if (slot < 0)
+		return slot;
+	if (atomic_inc_return(&ksm->slots[slot].slot_refs) == 1) {
+		/* Took first reference to this slot; remove it from LRU list */
+		remove_slot_from_lru_list(ksm, slot);
+	}
+	return slot;
+}
+
+/**
+ * keyslot_manager_get_slot_for_key() - Program a key into a keyslot.
+ * @ksm: The keyslot manager to program the key into.
+ * @key: Pointer to the key object to program, including the raw key, crypto
+ *	 mode, and data unit size.
+ *
+ * Get a keyslot that's been programmed with the specified key.  If one already
+ * exists, return it with incremented refcount.  Otherwise, wait for a keyslot
+ * to become idle and program it.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: The keyslot on success, else a -errno value.
+ */
+int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				     const struct blk_crypto_key *key)
+{
+	int slot;
+	int err;
+	struct keyslot *idle_slot;
+
+	if (keyslot_manager_is_passthrough(ksm))
+		return 0;
+
+	down_read(&ksm->lock);
+	slot = find_and_grab_keyslot(ksm, key);
+	up_read(&ksm->lock);
+	if (slot != -ENOKEY)
+		return slot;
+
+	for (;;) {
+		down_write(&ksm->lock);
+		slot = find_and_grab_keyslot(ksm, key);
+		if (slot != -ENOKEY) {
+			up_write(&ksm->lock);
+			return slot;
+		}
+
+		/*
+		 * If we're here, no slot is already programmed with the key,
+		 * so we need to program it into an idle slot.
+		 */
+		if (!list_empty(&ksm->idle_slots))
+			break;
+
+		up_write(&ksm->lock);
+		wait_event(ksm->idle_slots_wait_queue,
+			   !list_empty(&ksm->idle_slots));
+	}
+
+	idle_slot = list_first_entry(&ksm->idle_slots, struct keyslot,
+				     idle_slot_node);
+	slot = idle_slot - ksm->slots;
+
+	err = ksm->ksm_ll_ops.keyslot_program(ksm, key, slot);
+	if (err) {
+		wake_up(&ksm->idle_slots_wait_queue);
+		up_write(&ksm->lock);
+		return err;
+	}
+
+	/* Move this slot to the hash list for the new key. */
+	if (idle_slot->key.crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
+		hlist_del(&idle_slot->hash_node);
+	hlist_add_head(&idle_slot->hash_node, hash_bucket_for_key(ksm, key));
+
+	atomic_set(&idle_slot->slot_refs, 1);
+	idle_slot->key = *key;
+
+	remove_slot_from_lru_list(ksm, slot);
+
+	up_write(&ksm->lock);
+	return slot;
+}
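+
+/*
+ * Illustrative caller (hypothetical, not part of this patch): an upper layer
+ * tagging a request with a keyslot would do something like
+ *
+ *	slot = keyslot_manager_get_slot_for_key(q->ksm, key);
+ *	if (slot < 0)
+ *		return slot;
+ *	... submit the request tagged with 'slot' ...
+ *	keyslot_manager_put_slot(q->ksm, slot);
+ */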
+
+/**
+ * keyslot_manager_get_slot() - Increment the refcount on the specified slot.
+ * @ksm: The keyslot manager that we want to modify.
+ * @slot: The slot to increment the refcount of.
+ *
+ * This function assumes that there is already an active reference to that slot
+ * and simply increments the refcount. This is useful when cloning a bio that
+ * already has a reference to a keyslot, and we want the cloned bio to also have
+ * its own reference.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	if (keyslot_manager_is_passthrough(ksm))
+		return;
+
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	WARN_ON(atomic_inc_return(&ksm->slots[slot].slot_refs) < 2);
+}
+
+/**
+ * keyslot_manager_put_slot() - Release a reference to a slot
+ * @ksm: The keyslot manager to release the reference from.
+ * @slot: The slot to release the reference from.
+ *
+ * Context: Any context.
+ */
+void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot)
+{
+	unsigned long flags;
+
+	if (keyslot_manager_is_passthrough(ksm))
+		return;
+
+	if (WARN_ON(slot >= ksm->num_slots))
+		return;
+
+	if (atomic_dec_and_lock_irqsave(&ksm->slots[slot].slot_refs,
+					&ksm->idle_slots_lock, flags)) {
+		list_add_tail(&ksm->slots[slot].idle_slot_node,
+			      &ksm->idle_slots);
+		spin_unlock_irqrestore(&ksm->idle_slots_lock, flags);
+		wake_up(&ksm->idle_slots_wait_queue);
+	}
+}
+
+/**
+ * keyslot_manager_crypto_mode_supported() - Find out if a crypto_mode/data
+ *					     unit size combination is supported
+ *					     by a ksm.
+ * @ksm: The keyslot manager to check
+ * @crypto_mode: The crypto mode to check for.
+ * @data_unit_size: The data_unit_size for the mode.
+ *
+ * Checks the crypto_mode_supported bitmasks that the ksm was created with to
+ * see whether the specified combination is supported.
+ *
+ * Context: Process context.
+ * Return: Whether or not this ksm supports the specified crypto_mode/
+ *	   data_unit_size combination.
+ */
+bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
+					   enum blk_crypto_mode_num crypto_mode,
+					   unsigned int data_unit_size)
+{
+	if (!ksm)
+		return false;
+	if (WARN_ON(crypto_mode >= BLK_ENCRYPTION_MODE_MAX))
+		return false;
+	if (WARN_ON(!is_power_of_2(data_unit_size)))
+		return false;
+	return ksm->crypto_mode_supported[crypto_mode] & data_unit_size;
+}
+
+/**
+ * keyslot_manager_evict_key() - Evict a key from the lower layer device.
+ * @ksm: The keyslot manager to evict from
+ * @key: The key to evict
+ *
+ * Find the keyslot that the specified key was programmed into, and evict that
+ * slot from the lower layer device if that slot is not currently in use.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ * Return: 0 on success, -EBUSY if the key is still in use, or another
+ *	   -errno value on other error.
+ */
+int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+			      const struct blk_crypto_key *key)
+{
+	int slot;
+	int err;
+	struct keyslot *slotp;
+
+	if (keyslot_manager_is_passthrough(ksm)) {
+		if (ksm->ksm_ll_ops.keyslot_evict) {
+			down_write(&ksm->lock);
+			err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, -1);
+			up_write(&ksm->lock);
+			return err;
+		}
+		return 0;
+	}
+
+	down_write(&ksm->lock);
+	slot = find_keyslot(ksm, key);
+	if (slot < 0) {
+		err = slot;
+		goto out_unlock;
+	}
+	slotp = &ksm->slots[slot];
+
+	if (atomic_read(&slotp->slot_refs) != 0) {
+		err = -EBUSY;
+		goto out_unlock;
+	}
+	err = ksm->ksm_ll_ops.keyslot_evict(ksm, key, slot);
+	if (err)
+		goto out_unlock;
+
+	hlist_del(&slotp->hash_node);
+	memzero_explicit(&slotp->key, sizeof(slotp->key));
+	err = 0;
+out_unlock:
+	up_write(&ksm->lock);
+	return err;
+}
+
+/**
+ * keyslot_manager_reprogram_all_keys() - Re-program all keyslots.
+ * @ksm: The keyslot manager
+ *
+ * Re-program all keyslots that are supposed to have a key programmed.  This is
+ * intended only for use by drivers for hardware that loses its keys on reset.
+ *
+ * Context: Process context. Takes and releases ksm->lock.
+ */
+void keyslot_manager_reprogram_all_keys(struct keyslot_manager *ksm)
+{
+	unsigned int slot;
+
+	if (WARN_ON(keyslot_manager_is_passthrough(ksm)))
+		return;
+
+	down_write(&ksm->lock);
+	for (slot = 0; slot < ksm->num_slots; slot++) {
+		const struct keyslot *slotp = &ksm->slots[slot];
+		int err;
+
+		if (slotp->key.crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+			continue;
+
+		err = ksm->ksm_ll_ops.keyslot_program(ksm, &slotp->key, slot);
+		WARN_ON(err);
+	}
+	up_write(&ksm->lock);
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_reprogram_all_keys);
+
+/**
+ * keyslot_manager_private() - return the private data stored with ksm
+ * @ksm: The keyslot manager
+ *
+ * Returns the private data passed to the ksm when it was created.
+ */
+void *keyslot_manager_private(struct keyslot_manager *ksm)
+{
+	return ksm->ll_priv_data;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_private);
+
+void keyslot_manager_destroy(struct keyslot_manager *ksm)
+{
+	if (ksm) {
+		kvfree(ksm->slot_hashtable);
+		memzero_explicit(ksm, struct_size(ksm, slots, ksm->num_slots));
+		kvfree(ksm);
+	}
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_destroy);
+
+/**
+ * keyslot_manager_create_passthrough() - Create a passthrough keyslot manager
+ * @ksm_ll_ops: The struct keyslot_mgmt_ll_ops
+ * @crypto_mode_supported: Bitmasks for supported encryption modes
+ * @ll_priv_data: Private data passed as is to the functions in ksm_ll_ops.
+ *
+ * Allocate memory for and initialize a passthrough keyslot manager.
+ * Called by e.g. storage drivers to set up a keyslot manager in their
+ * request_queue, when the storage driver wants to manage its keys by itself.
+ * This is useful for inline encryption hardware that doesn't have a small
+ * fixed number of keyslots, and for layered devices.
+ *
+ * See keyslot_manager_create() for more details about the parameters.
+ *
+ * Context: May sleep
+ * Return: Pointer to constructed keyslot manager or NULL on error.
+ */
+struct keyslot_manager *keyslot_manager_create_passthrough(
+	const struct keyslot_mgmt_ll_ops *ksm_ll_ops,
+	const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+	void *ll_priv_data)
+{
+	struct keyslot_manager *ksm;
+
+	ksm = kzalloc(sizeof(*ksm), GFP_KERNEL);
+	if (!ksm)
+		return NULL;
+
+	ksm->ksm_ll_ops = *ksm_ll_ops;
+	memcpy(ksm->crypto_mode_supported, crypto_mode_supported,
+	       sizeof(ksm->crypto_mode_supported));
+	ksm->ll_priv_data = ll_priv_data;
+
+	init_rwsem(&ksm->lock);
+
+	return ksm;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_create_passthrough);
+
+/**
+ * keyslot_manager_intersect_modes() - restrict supported modes by child device
+ * @parent: The keyslot manager for parent device
+ * @child: The keyslot manager for child device, or NULL
+ *
+ * Clear any crypto mode support bits in @parent that aren't set in @child.
+ * If @child is NULL, then all parent bits are cleared.
+ *
+ * Only use this when setting up the keyslot manager for a layered device,
+ * before it's been exposed.
+ */
+void keyslot_manager_intersect_modes(struct keyslot_manager *parent,
+				     const struct keyslot_manager *child)
+{
+	if (child) {
+		unsigned int i;
+
+		for (i = 0; i < ARRAY_SIZE(child->crypto_mode_supported); i++) {
+			parent->crypto_mode_supported[i] &=
+				child->crypto_mode_supported[i];
+		}
+	} else {
+		memset(parent->crypto_mode_supported, 0,
+		       sizeof(parent->crypto_mode_supported));
+	}
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_intersect_modes);
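+
+/*
+ * For instance (hypothetical stacked-device setup; names assumed for
+ * illustration): a device-mapper target could restrict the passthrough KSM it
+ * exposes to the modes every underlying device supports:
+ *
+ *	keyslot_manager_intersect_modes(dm_ksm,
+ *					bdev_get_queue(child_bdev)->ksm);
+ */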
+
+/**
+ * keyslot_manager_derive_raw_secret() - Derive software secret from wrapped key
+ * @ksm: The keyslot manager
+ * @wrapped_key: The wrapped key
+ * @wrapped_key_size: Size of the wrapped key in bytes
+ * @secret: (output) the software secret
+ * @secret_size: the number of secret bytes to derive
+ *
+ * Given a hardware-wrapped key, ask the hardware to derive a secret which
+ * software can use for cryptographic tasks other than inline encryption.  The
+ * derived secret is guaranteed to be cryptographically isolated from the key
+ * with which any inline encryption with this wrapped key would actually be
+ * done.  I.e., both will be derived from the unwrapped key.
+ *
+ * Return: 0 on success, -EOPNOTSUPP if hardware-wrapped keys are unsupported,
+ *	   or another -errno code.
+ */
+int keyslot_manager_derive_raw_secret(struct keyslot_manager *ksm,
+				      const u8 *wrapped_key,
+				      unsigned int wrapped_key_size,
+				      u8 *secret, unsigned int secret_size)
+{
+	int err;
+
+	down_write(&ksm->lock);
+	if (ksm->ksm_ll_ops.derive_raw_secret) {
+		err = ksm->ksm_ll_ops.derive_raw_secret(ksm, wrapped_key,
+							wrapped_key_size,
+							secret, secret_size);
+	} else {
+		err = -EOPNOTSUPP;
+	}
+	up_write(&ksm->lock);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(keyslot_manager_derive_raw_secret);
diff --git a/drivers/bluetooth/bluetooth-power.c b/drivers/bluetooth/bluetooth-power.c
index 25cebdc..54b09bf 100644
--- a/drivers/bluetooth/bluetooth-power.c
+++ b/drivers/bluetooth/bluetooth-power.c
@@ -42,6 +42,7 @@
 	{	.compatible = "qca,qca6174" },
 	{	.compatible = "qca,wcn3990" },
 	{	.compatible = "qca,qca6390" },
+	{	.compatible = "qca,wcn6750" },
 	{}
 };
 
@@ -272,10 +273,14 @@
 			return rc;
 		}
 		msleep(50);
-		BT_PWR_ERR("BTON:Turn Bt Off bt-reset-gpio(%d) value(%d)\n",
-			bt_reset_gpio, gpio_get_value(bt_reset_gpio));
-		BT_PWR_ERR("BTON:Turn Bt Off bt-sw-ctrl-gpio(%d) value(%d)\n",
-			bt_sw_ctrl_gpio,  gpio_get_value(bt_sw_ctrl_gpio));
+		BT_PWR_INFO("BTON:Turn Bt Off bt-reset-gpio(%d) value(%d)\n",
+				bt_reset_gpio, gpio_get_value(bt_reset_gpio));
+		if (bt_sw_ctrl_gpio >= 0) {
+			BT_PWR_INFO("BTON:Turn Bt Off");
+			BT_PWR_INFO("bt-sw-ctrl-gpio(%d) value(%d)",
+					bt_sw_ctrl_gpio,
+					gpio_get_value(bt_sw_ctrl_gpio));
+		}
 
 		rc = gpio_direction_output(bt_reset_gpio, 1);
 		if (rc) {
@@ -306,22 +311,30 @@
 					BT_PWR_ERR("Prob: Set Debug-Gpio\n");
 			}
 		}
-		BT_PWR_ERR("BTON:Turn Bt On bt-reset-gpio(%d) value(%d)\n",
-			bt_reset_gpio, gpio_get_value(bt_reset_gpio));
-		BT_PWR_ERR("BTON:Turn Bt On bt-sw-ctrl-gpio(%d) value(%d)\n",
-			bt_sw_ctrl_gpio,  gpio_get_value(bt_sw_ctrl_gpio));
+		BT_PWR_INFO("BTON:Turn Bt On bt-reset-gpio(%d) value(%d)\n",
+				bt_reset_gpio, gpio_get_value(bt_reset_gpio));
+		if (bt_sw_ctrl_gpio >= 0) {
+			BT_PWR_INFO("BTON:Turn Bt On");
+			BT_PWR_INFO("bt-sw-ctrl-gpio(%d) value(%d)",
+					bt_sw_ctrl_gpio,
+					gpio_get_value(bt_sw_ctrl_gpio));
+		}
 	} else {
 		gpio_set_value(bt_reset_gpio, 0);
 		if  (bt_debug_gpio  >=  0)
 			gpio_set_value(bt_debug_gpio,  0);
 		msleep(100);
-		BT_PWR_ERR("BT-OFF:bt-reset-gpio(%d) value(%d)\n",
-			bt_reset_gpio, gpio_get_value(bt_reset_gpio));
-		BT_PWR_ERR("BT-OFF:bt-sw-ctrl-gpio(%d) value(%d)\n",
-			bt_sw_ctrl_gpio,  gpio_get_value(bt_sw_ctrl_gpio));
+		BT_PWR_INFO("BT-OFF:bt-reset-gpio(%d) value(%d)\n",
+				bt_reset_gpio, gpio_get_value(bt_reset_gpio));
+
+		if (bt_sw_ctrl_gpio >= 0) {
+			BT_PWR_INFO("BT-OFF:bt-sw-ctrl-gpio(%d) value(%d)",
+					bt_sw_ctrl_gpio,
+					gpio_get_value(bt_sw_ctrl_gpio));
+		}
 	}
 
-	BT_PWR_ERR("bt_gpio= %d on: %d is successful", bt_reset_gpio, on);
+	BT_PWR_INFO("bt_gpio= %d on: %d is successful", bt_reset_gpio, on);
 	return rc;
 }
 
@@ -893,7 +906,7 @@
 		break;
 	case BT_CMD_CHIPSET_VERS:
 		chipset_version = (int)arg;
-		BT_PWR_ERR("BT_CMD_CHIP_VERS soc_version:%x", chipset_version);
+		BT_PWR_ERR("unified Current SOC Version : %x", chipset_version);
 		if (chipset_version) {
 			soc_id = chipset_version;
 			if (soc_id == QCA_HSP_SOC_ID_0100 ||
diff --git a/drivers/bus/mhi/controllers/mhi_arch_qcom.c b/drivers/bus/mhi/controllers/mhi_arch_qcom.c
index f9f1731..ce4a33b 100644
--- a/drivers/bus/mhi/controllers/mhi_arch_qcom.c
+++ b/drivers/bus/mhi/controllers/mhi_arch_qcom.c
@@ -587,12 +587,17 @@
 	if (cur_link_info->target_link_speed != PCI_EXP_LNKSTA_CLS_2_5GB) {
 		link_info.target_link_speed = PCI_EXP_LNKSTA_CLS_2_5GB;
 		link_info.target_link_width = cur_link_info->target_link_width;
-		ret = mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, &link_info);
+
+		ret = msm_pcie_set_link_bandwidth(pci_dev,
+						  link_info.target_link_speed,
+						  link_info.target_link_width);
 		if (ret) {
 			MHI_CNTRL_ERR("Failed to switch Gen1 speed\n");
 			return -EBUSY;
 		}
 
+		/* no DDR votes when doing a drv suspend */
+		mhi_arch_set_bus_request(mhi_cntrl, 0);
 		bw_switched = true;
 	}
 
@@ -601,9 +606,7 @@
 				  pci_dev, NULL, mhi_cntrl->wake_set ?
 				  MSM_PCIE_CONFIG_NO_L1SS_TO : 0);
 
-	/*
-	 * we failed to suspend and scaled down pcie bw.. need to scale up again
-	 */
+	/* failed to suspend and scaled down pcie bw, need to scale up again */
 	if (ret && bw_switched) {
 		mhi_arch_pcie_scale_bw(mhi_cntrl, pci_dev, cur_link_info);
 		return ret;
@@ -723,10 +726,15 @@
 	case MHI_FAST_LINK_OFF:
 		ret = msm_pcie_pm_control(MSM_PCIE_RESUME, mhi_cntrl->bus,
 					  pci_dev, NULL, 0);
-		if (ret ||
-		    cur_info->target_link_speed == PCI_EXP_LNKSTA_CLS_2_5GB)
+		if (ret)
 			break;
 
+		if (cur_info->target_link_speed == PCI_EXP_LNKSTA_CLS_2_5GB) {
+			mhi_arch_set_bus_request(mhi_cntrl,
+						 cur_info->target_link_speed);
+			break;
+		}
+
 		/*
 		 * BW request from device isn't for gen 1 link speed, we can
 		 * only print an error here.
diff --git a/drivers/bus/mhi/core/mhi_boot.c b/drivers/bus/mhi/core/mhi_boot.c
index 21d7202..e720028 100644
--- a/drivers/bus/mhi/core/mhi_boot.c
+++ b/drivers/bus/mhi/core/mhi_boot.c
@@ -565,8 +565,21 @@
 
 	ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev);
 	if (ret) {
-		MHI_CNTRL_ERR("Error loading firmware, ret:%d\n", ret);
-		return;
+		if (!mhi_cntrl->fw_image_fallback) {
+			MHI_ERR("Error loading fw, ret:%d\n", ret);
+			return;
+		}
+
+		/* retry with the fallback fw image */
+		ret = request_firmware(&firmware, mhi_cntrl->fw_image_fallback,
+				mhi_cntrl->dev);
+		if (ret) {
+			MHI_ERR("Error loading fw_fb, ret:%d\n", ret);
+			return;
+		}
+
+		mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+				     MHI_CB_FW_FALLBACK_IMG);
 	}
 
 	size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
index 9c8be59..32d6285 100644
--- a/drivers/bus/mhi/core/mhi_init.c
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -1225,7 +1225,7 @@
 			mhi_event->process_event = mhi_process_ctrl_ev_ring;
 			break;
 		case MHI_ER_TSYNC_ELEMENT_TYPE:
-			mhi_event->process_event = mhi_process_tsync_event_ring;
+			mhi_event->process_event = mhi_process_tsync_ev_ring;
 			break;
 		case MHI_ER_BW_SCALE_ELEMENT_TYPE:
 			mhi_event->process_event = mhi_process_bw_scale_ev_ring;
diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
index c885d63..c2b99e8 100644
--- a/drivers/bus/mhi/core/mhi_internal.h
+++ b/drivers/bus/mhi/core/mhi_internal.h
@@ -717,8 +717,6 @@
 struct tsync_node {
 	struct list_head node;
 	u32 sequence;
-	u32 int_sequence;
-	u64 local_time;
 	u64 remote_time;
 	struct mhi_device *mhi_dev;
 	void (*cb_func)(struct mhi_device *mhi_dev, u32 sequence,
@@ -728,7 +726,9 @@
 struct mhi_timesync {
 	void __iomem *time_reg;
 	u32 int_sequence;
+	u64 local_time;
 	bool db_support;
+	bool db_response_pending;
 	spinlock_t lock; /* list protection */
 	struct list_head head;
 };
@@ -787,8 +787,8 @@
 				struct mhi_event *mhi_event, u32 event_quota);
 int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			     struct mhi_event *mhi_event, u32 event_quota);
-int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
-				 struct mhi_event *mhi_event, u32 event_quota);
+int mhi_process_tsync_ev_ring(struct mhi_controller *mhi_cntrl,
+			      struct mhi_event *mhi_event, u32 event_quota);
 int mhi_process_bw_scale_ev_ring(struct mhi_controller *mhi_cntrl,
 				 struct mhi_event *mhi_event, u32 event_quota);
 int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
index 974c62f..e259aba 100644
--- a/drivers/bus/mhi/core/mhi_main.c
+++ b/drivers/bus/mhi/core/mhi_main.c
@@ -1070,7 +1070,9 @@
 
 	result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
 		-EOVERFLOW : 0;
-	result.bytes_xferd = xfer_len;
+
+	/* truncate to buf len if xfer_len is larger */
+	result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
 	result.buf_addr = buf_info->cb_buf;
 	result.dir = mhi_chan->dir;
 
@@ -1320,7 +1322,7 @@
 		chan = MHI_TRE_GET_EV_CHID(local_rp);
 		if (chan >= mhi_cntrl->max_chan) {
 			MHI_ERR("invalid channel id %u\n", chan);
-			continue;
+			goto next_er_element;
 		}
 		mhi_chan = &mhi_cntrl->mhi_chan[chan];
 
@@ -1332,6 +1334,7 @@
 			event_quota--;
 		}
 
+next_er_element:
 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
 		local_rp = ev_ring->rp;
 		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
@@ -1347,82 +1350,89 @@
 	return count;
 }
 
-int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
-				 struct mhi_event *mhi_event,
-				 u32 event_quota)
+int mhi_process_tsync_ev_ring(struct mhi_controller *mhi_cntrl,
+			      struct mhi_event *mhi_event,
+			      u32 event_quota)
 {
-	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_tre *dev_rp;
 	struct mhi_ring *ev_ring = &mhi_event->ring;
 	struct mhi_event_ctxt *er_ctxt =
 		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
 	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
-	int count = 0;
-	u32 int_sequence, unit;
+	u32 sequence;
 	u64 remote_time;
+	int ret = 0;
 
-	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state))) {
-		MHI_LOG("No EV access, PM_STATE:%s\n",
-			to_mhi_pm_state_str(mhi_cntrl->pm_state));
-		return -EIO;
-	}
-
+	spin_lock_bh(&mhi_event->lock);
 	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
-	local_rp = ev_ring->rp;
-
-	while (dev_rp != local_rp) {
-		enum MHI_PKT_TYPE type = MHI_TRE_GET_EV_TYPE(local_rp);
-		struct tsync_node *tsync_node;
-
-		MHI_VERB("Processing Event:0x%llx 0x%08x 0x%08x\n",
-			local_rp->ptr, local_rp->dword[0], local_rp->dword[1]);
-
-		MHI_ASSERT(type != MHI_PKT_TYPE_TSYNC_EVENT, "!TSYNC event");
-
-		int_sequence = MHI_TRE_GET_EV_TSYNC_SEQ(local_rp);
-		unit = MHI_TRE_GET_EV_TSYNC_UNIT(local_rp);
-		remote_time = MHI_TRE_GET_EV_TIME(local_rp);
-
-		do {
-			spin_lock(&mhi_tsync->lock);
-			tsync_node = list_first_entry_or_null(&mhi_tsync->head,
-						      struct tsync_node, node);
-			if (!tsync_node) {
-				spin_unlock(&mhi_tsync->lock);
-				break;
-			}
-
-			list_del(&tsync_node->node);
-			spin_unlock(&mhi_tsync->lock);
-
-			/*
-			 * device may not able to process all time sync commands
-			 * host issue and only process last command it receive
-			 */
-			if (tsync_node->int_sequence == int_sequence) {
-				tsync_node->cb_func(tsync_node->mhi_dev,
-						    tsync_node->sequence,
-						    tsync_node->local_time,
-						    remote_time);
-				kfree(tsync_node);
-			} else {
-				kfree(tsync_node);
-			}
-		} while (true);
-
-		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
-		local_rp = ev_ring->rp;
-		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
-		count++;
+	if (ev_ring->rp == dev_rp) {
+		spin_unlock_bh(&mhi_event->lock);
+		goto exit_tsync_process;
 	}
 
+	/* if rp points to base, we need to wrap it around */
+	if (dev_rp == ev_ring->base)
+		dev_rp = ev_ring->base + ev_ring->len;
+	dev_rp--;
+
+	/* fast forward to currently processed element and recycle er */
+	ev_ring->rp = dev_rp;
+	ev_ring->wp = dev_rp - 1;
+	if (ev_ring->wp < ev_ring->base)
+		ev_ring->wp = ev_ring->base + ev_ring->len - ev_ring->el_size;
+	mhi_recycle_fwd_ev_ring_element(mhi_cntrl, ev_ring);
+
+	MHI_ASSERT(MHI_TRE_GET_EV_TYPE(dev_rp) != MHI_PKT_TYPE_TSYNC_EVENT,
+		   "!TSYNC event");
+
+	sequence = MHI_TRE_GET_EV_TSYNC_SEQ(dev_rp);
+	remote_time = MHI_TRE_GET_EV_TIME(dev_rp);
+
+	MHI_VERB("Received TSYNC event with seq:0x%llx time:0x%llx\n",
+		 sequence, remote_time);
+
 	read_lock_bh(&mhi_cntrl->pm_lock);
 	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
 		mhi_ring_er_db(mhi_event);
 	read_unlock_bh(&mhi_cntrl->pm_lock);
+	spin_unlock_bh(&mhi_event->lock);
 
+	mutex_lock(&mhi_cntrl->tsync_mutex);
+
+	if (unlikely(mhi_tsync->int_sequence != sequence)) {
+		MHI_ASSERT(1, "Unexpected response:0x%x Expected:0x%x\n",
+			   sequence, mhi_tsync->int_sequence);
+		mutex_unlock(&mhi_cntrl->tsync_mutex);
+		goto exit_tsync_process;
+	}
+
+	do {
+		struct tsync_node *tsync_node;
+
+		spin_lock(&mhi_tsync->lock);
+		tsync_node = list_first_entry_or_null(&mhi_tsync->head,
+					struct tsync_node, node);
+		if (!tsync_node) {
+			spin_unlock(&mhi_tsync->lock);
+			break;
+		}
+
+		list_del(&tsync_node->node);
+		spin_unlock(&mhi_tsync->lock);
+
+		tsync_node->cb_func(tsync_node->mhi_dev,
+				    tsync_node->sequence,
+				    mhi_tsync->local_time, remote_time);
+		kfree(tsync_node);
+	} while (true);
+
+	mhi_tsync->db_response_pending = false;
+	mutex_unlock(&mhi_cntrl->tsync_mutex);
+
+exit_tsync_process:
 	MHI_VERB("exit er_index:%u\n", mhi_event->er_index);
 
-	return count;
+	return ret;
 }
 
 int mhi_process_bw_scale_ev_ring(struct mhi_controller *mhi_cntrl,
@@ -2575,7 +2585,7 @@
 	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
 	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
 	struct tsync_node *tsync_node;
-	int ret;
+	int ret = 0;
 
 	/* not all devices support all time features */
 	mutex_lock(&mhi_cntrl->tsync_mutex);
@@ -2599,6 +2609,10 @@
 	}
 	read_unlock_bh(&mhi_cntrl->pm_lock);
 
+	MHI_LOG("Enter with pm_state:%s MHI_STATE:%s\n",
+		 to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		 TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
 	/*
 	 * technically we can use GFP_KERNEL, but wants to avoid
 	 * # of times scheduling out
@@ -2609,15 +2623,17 @@
 		goto error_no_mem;
 	}
 
+	tsync_node->sequence = sequence;
+	tsync_node->cb_func = cb_func;
+	tsync_node->mhi_dev = mhi_dev;
+
+	if (mhi_tsync->db_response_pending)
+		goto skip_tsync_db;
+
 	mhi_tsync->int_sequence++;
 	if (mhi_tsync->int_sequence == 0xFFFFFFFF)
 		mhi_tsync->int_sequence = 0;
 
-	tsync_node->sequence = sequence;
-	tsync_node->int_sequence = mhi_tsync->int_sequence;
-	tsync_node->cb_func = cb_func;
-	tsync_node->mhi_dev = mhi_dev;
-
 	/* disable link level low power modes */
 	ret = mhi_cntrl->lpm_disable(mhi_cntrl, mhi_cntrl->priv_data);
 	if (ret) {
@@ -2626,10 +2642,6 @@
 		goto error_invalid_state;
 	}
 
-	spin_lock(&mhi_tsync->lock);
-	list_add_tail(&tsync_node->node, &mhi_tsync->head);
-	spin_unlock(&mhi_tsync->lock);
-
 	/*
 	 * time critical code, delay between these two steps should be
 	 * deterministic as possible.
@@ -2637,9 +2649,9 @@
 	preempt_disable();
 	local_irq_disable();
 
-	tsync_node->local_time =
+	mhi_tsync->local_time =
 		mhi_cntrl->time_get(mhi_cntrl, mhi_cntrl->priv_data);
-	writel_relaxed_no_log(tsync_node->int_sequence, mhi_cntrl->tsync_db);
+	writel_relaxed_no_log(mhi_tsync->int_sequence, mhi_cntrl->tsync_db);
 	/* write must go thru immediately */
 	wmb();
 
@@ -2648,6 +2660,15 @@
 
 	mhi_cntrl->lpm_enable(mhi_cntrl, mhi_cntrl->priv_data);
 
+	MHI_VERB("time DB request with seq:0x%llx\n", mhi_tsync->int_sequence);
+
+	mhi_tsync->db_response_pending = true;
+
+skip_tsync_db:
+	spin_lock(&mhi_tsync->lock);
+	list_add_tail(&tsync_node->node, &mhi_tsync->head);
+	spin_unlock(&mhi_tsync->lock);
+
 	ret = 0;
 
 error_invalid_state:
diff --git a/drivers/bus/mhi/core/mhi_pm.c b/drivers/bus/mhi/core/mhi_pm.c
index 703dcba..a4e63bc 100644
--- a/drivers/bus/mhi/core/mhi_pm.c
+++ b/drivers/bus/mhi/core/mhi_pm.c
@@ -824,6 +824,9 @@
 		 TO_MHI_STATE_STR(mhi_cntrl->dev_state),
 		 TO_MHI_EXEC_STR(mhi_cntrl->ee));
 
+	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+		return;
+
 	/* check special purpose event rings and process events */
 	list_for_each_entry(mhi_event, &mhi_cntrl->sp_ev_rings, node)
 		mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
diff --git a/drivers/bus/mhi/devices/mhi_uci.c b/drivers/bus/mhi/devices/mhi_uci.c
index d16ba5c..2a7fbfa 100644
--- a/drivers/bus/mhi/devices/mhi_uci.c
+++ b/drivers/bus/mhi/devices/mhi_uci.c
@@ -408,21 +408,29 @@
 	}
 
 	uci_buf = uci_chan->cur_buf;
-	spin_unlock_bh(&uci_chan->lock);
 
 	/* Copy the buffer to user space */
 	to_copy = min_t(size_t, count, uci_chan->rx_size);
 	ptr = uci_buf->data + (uci_buf->len - uci_chan->rx_size);
+	spin_unlock_bh(&uci_chan->lock);
+
 	ret = copy_to_user(buf, ptr, to_copy);
 	if (ret)
 		return ret;
 
+	spin_lock_bh(&uci_chan->lock);
+	/* Did another thread queue the buffer back while the lock was dropped? */
+	if (to_copy && !uci_chan->rx_size) {
+		MSG_VERB("Bailout as buffer already queued (%lu %lu)\n",
+			 to_copy, uci_chan->rx_size);
+		goto read_error;
+	}
+
 	MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
 	uci_chan->rx_size -= to_copy;
 
 	/* we finished with this buffer, queue it back to hardware */
 	if (!uci_chan->rx_size) {
-		spin_lock_bh(&uci_chan->lock);
 		uci_chan->cur_buf = NULL;
 
 		if (uci_dev->enabled)
@@ -437,9 +445,8 @@
 			kfree(uci_buf->data);
 			goto read_error;
 		}
-
-		spin_unlock_bh(&uci_chan->lock);
 	}
+	spin_unlock_bh(&uci_chan->lock);
 
 	MSG_VERB("Returning %lu bytes\n", to_copy);
 
diff --git a/drivers/char/adsprpc.c b/drivers/char/adsprpc.c
index 55714d0..c6b8653 100644
--- a/drivers/char/adsprpc.c
+++ b/drivers/char/adsprpc.c
@@ -384,6 +384,7 @@
 	uint64_t jobid[NUM_CHANNELS];
 	struct wakeup_source *wake_source;
 	struct qos_cores silvercores;
+	uint32_t max_size_limit;
 };
 
 struct fastrpc_mmap {
@@ -1075,6 +1076,14 @@
 			goto bail;
 		}
 
+		VERIFY(err, map->size >= len && map->size < me->max_size_limit);
+		if (err) {
+			err = -EFAULT;
+			pr_err("adsprpc: %s: invalid map size 0x%zx len 0x%zx\n",
+				__func__, map->size, len);
+			goto bail;
+		}
+
 		vmid = fl->apps->channel[fl->cid].vmid;
 		if (!sess->smmu.enabled && !vmid) {
 			VERIFY(err, map->phys >= me->range.addr &&
@@ -1116,12 +1125,17 @@
 			int remote, struct fastrpc_buf **obuf)
 {
 	int err = 0, vmid;
+	struct fastrpc_apps *me = &gfa;
 	struct fastrpc_buf *buf = NULL, *fr = NULL;
 	struct hlist_node *n;
 
-	VERIFY(err, size > 0);
-	if (err)
+	VERIFY(err, size > 0 && size < me->max_size_limit);
+	if (err) {
+		err = -EFAULT;
+		pr_err("adsprpc: %s: invalid allocation size 0x%zx\n",
+			__func__, size);
 		goto bail;
+	}
 
 	if (!remote) {
 		/* find the smallest buffer that fits in the cache */
@@ -4001,19 +4015,26 @@
 
 	fl->tgid = current->tgid;
 	snprintf(strpid, PID_SIZE, "%d", current->pid);
-	buf_size = strlen(current->comm) + strlen("_") + strlen(strpid) + 1;
-	fl->debug_buf = kzalloc(buf_size, GFP_KERNEL);
-	if (!fl->debug_buf) {
-		err = -ENOMEM;
-		return err;
-	}
-	snprintf(fl->debug_buf, UL_SIZE, "%.10s%s%d",
+	if (debugfs_root) {
+		buf_size = strlen(current->comm) + strlen("_")
+			+ strlen(strpid) + 1;
+		fl->debug_buf = kzalloc(buf_size, GFP_KERNEL);
+		if (!fl->debug_buf) {
+			err = -ENOMEM;
+			return err;
+		}
+		snprintf(fl->debug_buf, UL_SIZE, "%.10s%s%d",
 			current->comm, "_", current->pid);
-	fl->debugfs_file = debugfs_create_file(fl->debug_buf, 0644,
-					debugfs_root, fl, &debugfs_fops);
-	if (!fl->debugfs_file)
-		pr_warn("Error: %s: %s: failed to create debugfs file %s\n",
+		fl->debugfs_file = debugfs_create_file(fl->debug_buf, 0644,
+			debugfs_root, fl, &debugfs_fops);
+		if (IS_ERR_OR_NULL(fl->debugfs_file)) {
+			pr_warn("Error: %s: %s: failed to create debugfs file %s\n",
 				current->comm, __func__, fl->debug_buf);
+			fl->debugfs_file = NULL;
+			kfree(fl->debug_buf);
+			fl->debug_buf = NULL;
+		}
+	}
 	return err;
 }
 
@@ -4611,9 +4632,11 @@
 	struct fastrpc_channel_ctx *chan;
 	struct fastrpc_session_ctx *sess;
 	struct of_phandle_args iommuspec;
+	struct fastrpc_apps *me = &gfa;
 	const char *name;
 	int err = 0, cid = -1, i = 0;
 	u32 sharedcb_count = 0, j = 0;
+	uint32_t dma_addr_pool[2] = {0, 0};
 
 	VERIFY(err, NULL != (name = of_get_property(dev->of_node,
 					 "label", NULL)));
@@ -4660,6 +4683,11 @@
 	dma_set_max_seg_size(sess->smmu.dev, DMA_BIT_MASK(32));
 	dma_set_seg_boundary(sess->smmu.dev, (unsigned long)DMA_BIT_MASK(64));
 
+	of_property_read_u32_array(dev->of_node, "qcom,iommu-dma-addr-pool",
+			dma_addr_pool, 2);
+	me->max_size_limit = (dma_addr_pool[1] == 0 ? 0x78000000 :
+			dma_addr_pool[1]);
+
 	if (of_get_property(dev->of_node, "shared-cb", NULL) != NULL) {
 		err = of_property_read_u32(dev->of_node, "shared-cb",
 				&sharedcb_count);
@@ -4679,8 +4707,15 @@
 	}
 
 	chan->sesscount++;
-	debugfs_global_file = debugfs_create_file("global", 0644, debugfs_root,
-							NULL, &debugfs_fops);
+	if (debugfs_root) {
+		debugfs_global_file = debugfs_create_file("global", 0644,
+			debugfs_root, NULL, &debugfs_fops);
+		if (IS_ERR_OR_NULL(debugfs_global_file)) {
+			pr_warn("Error: %s: %s: failed to create debugfs global file\n",
+				current->comm, __func__);
+			debugfs_global_file = NULL;
+		}
+	}
 bail:
 	return err;
 }
@@ -4799,7 +4834,6 @@
 		init_qos_cores_list(dev, "qcom,qos-cores",
 							&me->silvercores);
 
-
 		of_property_read_u32(dev->of_node, "qcom,rpc-latency-us",
 			&me->latency);
 		if (of_get_property(dev->of_node,
@@ -4990,6 +5024,12 @@
 	int err = 0, i;
 
 	debugfs_root = debugfs_create_dir("adsprpc", NULL);
+	if (IS_ERR_OR_NULL(debugfs_root)) {
+		pr_warn("Error: %s: %s: failed to create debugfs root dir\n",
+			current->comm, __func__);
+		debugfs_remove_recursive(debugfs_root);
+		debugfs_root = NULL;
+	}
 	memset(me, 0, sizeof(*me));
 	fastrpc_init(me);
 	me->dev = NULL;
@@ -4998,7 +5038,7 @@
 	if (err)
 		goto register_bail;
 	VERIFY(err, 0 == alloc_chrdev_region(&me->dev_no, 0, NUM_CHANNELS,
-					DEVICE_NAME));
+		DEVICE_NAME));
 	if (err)
 		goto alloc_chrdev_bail;
 	cdev_init(&me->cdev, &fops);
diff --git a/drivers/char/diag/diag_dci.c b/drivers/char/diag/diag_dci.c
index e2c3344..4f38dba 100644
--- a/drivers/char/diag/diag_dci.c
+++ b/drivers/char/diag/diag_dci.c
@@ -1668,10 +1668,13 @@
 {
 	uint8_t retries = 0, max_retries = 50;
 	unsigned char *buf = NULL;
+	unsigned long flags;
 
 	do {
+		spin_lock_irqsave(&driver->dci_mempool_lock, flags);
 		buf = diagmem_alloc(driver, DIAG_MDM_BUF_SIZE,
 				    dci_ops_tbl[token].mempool);
+		spin_unlock_irqrestore(&driver->dci_mempool_lock, flags);
 		if (!buf) {
 			usleep_range(5000, 5100);
 			retries++;
@@ -1689,13 +1692,16 @@
 
 int diag_dci_write_done_bridge(int index, unsigned char *buf, int len)
 {
+	unsigned long flags;
 	int token = BRIDGE_TO_TOKEN(index);
 
 	if (!VALID_DCI_TOKEN(token)) {
 		pr_err("diag: Invalid DCI token %d in %s\n", token, __func__);
 		return -EINVAL;
 	}
+	spin_lock_irqsave(&driver->dci_mempool_lock, flags);
 	diagmem_free(driver, buf, dci_ops_tbl[token].mempool);
+	spin_unlock_irqrestore(&driver->dci_mempool_lock, flags);
 	return 0;
 }
 #endif
@@ -1709,6 +1715,7 @@
 	int dci_header_size = sizeof(struct diag_dci_header_t);
 	int ret = DIAG_DCI_NO_ERROR;
 	uint32_t write_len = 0;
+	unsigned long flags;
 
 	if (!data)
 		return -EIO;
@@ -1742,7 +1749,9 @@
 	if (ret) {
 		pr_err("diag: error writing dci pkt to remote proc, token: %d, err: %d\n",
 			token, ret);
+		spin_lock_irqsave(&driver->dci_mempool_lock, flags);
 		diagmem_free(driver, buf, dci_ops_tbl[token].mempool);
+		spin_unlock_irqrestore(&driver->dci_mempool_lock, flags);
 	} else {
 		ret = DIAG_DCI_NO_ERROR;
 	}
@@ -1766,6 +1775,7 @@
 	struct diag_ctrl_dci_handshake_pkt ctrl_pkt;
 	unsigned char *buf = NULL;
 	struct diag_dci_header_t dci_header;
+	unsigned long flags;
 
 	if (!VALID_DCI_TOKEN(token)) {
 		pr_err("diag: In %s, invalid DCI token %d\n", __func__, token);
@@ -1805,7 +1815,9 @@
 	if (err) {
 		pr_err("diag: error writing ack packet to remote proc, token: %d, err: %d\n",
 		       token, err);
+		spin_lock_irqsave(&driver->dci_mempool_lock, flags);
 		diagmem_free(driver, buf, dci_ops_tbl[token].mempool);
+		spin_unlock_irqrestore(&driver->dci_mempool_lock, flags);
 		return err;
 	}
 
@@ -2469,6 +2481,7 @@
 	int i, ret = DIAG_DCI_NO_ERROR, err = DIAG_DCI_NO_ERROR;
 	unsigned char *event_mask_ptr = NULL;
 	uint32_t write_len = 0;
+	unsigned long flags;
 
 	mutex_lock(&dci_event_mask_mutex);
 	event_mask_ptr = dci_ops_tbl[token].event_mask_composite;
@@ -2514,7 +2527,9 @@
 	if (err) {
 		pr_err("diag: error writing event mask to remote proc, token: %d, err: %d\n",
 		       token, err);
+		spin_lock_irqsave(&driver->dci_mempool_lock, flags);
 		diagmem_free(driver, buf, dci_ops_tbl[token].mempool);
+		spin_unlock_irqrestore(&driver->dci_mempool_lock, flags);
 		ret = err;
 	} else {
 		ret = DIAG_DCI_NO_ERROR;
@@ -2671,6 +2686,7 @@
 	int i, ret = DIAG_DCI_NO_ERROR, err = DIAG_DCI_NO_ERROR;
 	int updated;
 	uint32_t write_len = 0;
+	unsigned long flags;
 
 	mutex_lock(&dci_log_mask_mutex);
 	log_mask_ptr = dci_ops_tbl[token].log_mask_composite;
@@ -2710,7 +2726,10 @@
 		if (err) {
 			pr_err("diag: error writing log mask to remote processor, equip_id: %d, token: %d, err: %d\n",
 			       i, token, err);
+			spin_lock_irqsave(&driver->dci_mempool_lock, flags);
 			diagmem_free(driver, buf, dci_ops_tbl[token].mempool);
+			spin_unlock_irqrestore(&driver->dci_mempool_lock,
+				flags);
 			updated = 0;
 		}
 		if (updated)
@@ -2850,6 +2869,7 @@
 	mutex_init(&dci_log_mask_mutex);
 	mutex_init(&dci_event_mask_mutex);
 	spin_lock_init(&ws_lock);
+	spin_lock_init(&driver->dci_mempool_lock);
 
 	ret = diag_dci_init_ops_tbl();
 	if (ret)
@@ -2996,6 +3016,8 @@
 	int i, err = 0;
 	struct diag_dci_client_tbl *new_entry = NULL;
 	struct diag_dci_buf_peripheral_t *proc_buf = NULL;
+	struct pid *pid_struct = NULL;
+	struct task_struct *task_s = NULL;
 
 	if (!reg_entry)
 		return DIAG_DCI_NO_REG;
@@ -3011,14 +3033,25 @@
 	if (driver->num_dci_client >= MAX_DCI_CLIENTS)
 		return DIAG_DCI_NO_REG;
 
-	new_entry = kzalloc(sizeof(struct diag_dci_client_tbl), GFP_KERNEL);
-	if (!new_entry)
+	pid_struct = find_get_pid(current->tgid);
+	if (!pid_struct)
 		return DIAG_DCI_NO_REG;
+	task_s = get_pid_task(pid_struct, PIDTYPE_PID);
+	if (!task_s) {
+		put_pid(pid_struct);
+		return DIAG_DCI_NO_REG;
+	}
+	new_entry = kzalloc(sizeof(struct diag_dci_client_tbl), GFP_KERNEL);
+	if (!new_entry) {
+		put_pid(pid_struct);
+		put_task_struct(task_s);
+		return DIAG_DCI_NO_REG;
+	}
+
+	get_task_struct(task_s);
 
 	mutex_lock(&driver->dci_mutex);
-
-	get_task_struct(current);
-	new_entry->client = current;
+	new_entry->client = task_s;
 	new_entry->tgid = current->tgid;
 	new_entry->client_info.notification_list =
 				reg_entry->notification_list;
@@ -3108,7 +3141,8 @@
 		diag_update_proc_vote(DIAG_PROC_DCI, VOTE_UP, reg_entry->token);
 	queue_work(driver->diag_real_time_wq, &driver->diag_real_time_work);
 	mutex_unlock(&driver->dci_mutex);
-
+	put_pid(pid_struct);
+	put_task_struct(task_s);
 	return reg_entry->client_id;
 
 fail_alloc:
@@ -3145,8 +3179,10 @@
 		kfree(new_entry);
 		new_entry = NULL;
 	}
-	put_task_struct(current);
 	mutex_unlock(&driver->dci_mutex);
+	put_task_struct(task_s);
+	put_task_struct(task_s);
+	put_pid(pid_struct);
 	return DIAG_DCI_NO_REG;
 }
 
diff --git a/drivers/char/diag/diag_masks.c b/drivers/char/diag/diag_masks.c
index 019e203..d40f2f2 100644
--- a/drivers/char/diag/diag_masks.c
+++ b/drivers/char/diag/diag_masks.c
@@ -809,7 +809,7 @@
 		write_len += sizeof(rsp_ms);
 		if (rsp_ms.id_valid) {
 			sub_index = diag_check_subid_mask_index(rsp_ms.sub_id,
-				pid);
+				0);
 			ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
 				sub_index);
 			if (!ms_ptr)
@@ -1004,11 +1004,17 @@
 		req_sub = (struct diag_msg_build_mask_sub_t *)src_buf;
 		rsp_sub = *req_sub;
 		rsp_sub.status = MSG_STATUS_FAIL;
-		sub_index = diag_check_subid_mask_index(req_sub->sub_id, pid);
-		ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr, sub_index);
-		if (!ms_ptr)
-			goto err;
-		mask = (struct diag_msg_mask_t *)ms_ptr->sub_ptr;
+		if (req_sub->id_valid) {
+			sub_index = diag_check_subid_mask_index(req_sub->sub_id,
+				0);
+			ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
+				sub_index);
+			if (!ms_ptr)
+				goto err;
+			mask = (struct diag_msg_mask_t *)ms_ptr->sub_ptr;
+		} else {
+			mask = (struct diag_msg_mask_t *)mask_info->ptr;
+		}
 		ssid_range.ssid_first = req_sub->ssid_first;
 		ssid_range.ssid_last = req_sub->ssid_last;
 		header_len = sizeof(rsp_sub);
@@ -1103,7 +1109,7 @@
 		header_len = sizeof(struct diag_msg_build_mask_sub_t);
 		if (req_sub->id_valid) {
 			sub_index = diag_check_subid_mask_index(req_sub->sub_id,
-				pid);
+				0);
 			ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
 				sub_index);
 			if (!ms_ptr)
@@ -1304,7 +1310,7 @@
 		header_len = sizeof(struct diag_msg_config_rsp_sub_t);
 		if (req_sub->id_valid) {
 			sub_index = diag_check_subid_mask_index(req_sub->sub_id,
-				pid);
+				0);
 			ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
 				sub_index);
 			if (!ms_ptr)
@@ -1454,7 +1460,7 @@
 	if (!cmd_ver || !req->id_valid)
 		memcpy(dest_buf + write_len, event_mask.ptr, mask_size);
 	else {
-		sub_index = diag_check_subid_mask_index(req->sub_id, pid);
+		sub_index = diag_check_subid_mask_index(req->sub_id, 0);
 		ms_ptr = diag_get_ms_ptr_index(event_mask.ms_ptr, sub_index);
 		if (!ms_ptr || !ms_ptr->sub_ptr)
 			return 0;
@@ -1516,7 +1522,7 @@
 		goto err;
 	}
 	if (cmd_ver && req_sub->id_valid) {
-		sub_index = diag_check_subid_mask_index(req_sub->sub_id, pid);
+		sub_index = diag_check_subid_mask_index(req_sub->sub_id, 0);
 		if (sub_index < 0) {
 			ret = sub_index;
 			goto err;
@@ -1631,7 +1637,7 @@
 		preset = req->preset_id;
 	}
 	if (cmd_ver && req->id_valid) {
-		sub_index = diag_check_subid_mask_index(req->sub_id, pid);
+		sub_index = diag_check_subid_mask_index(req->sub_id, 0);
 		if (sub_index < 0) {
 			ret = sub_index;
 			goto err;
@@ -1751,7 +1757,7 @@
 		req_sub = (struct diag_log_config_rsp_sub_t *)src_buf;
 		if (req_sub->id_valid) {
 			sub_index = diag_check_subid_mask_index(req_sub->sub_id,
-				pid);
+				0);
 			ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
 				sub_index);
 			if (!ms_ptr) {
@@ -1875,7 +1881,7 @@
 		req = (struct diag_log_config_req_sub_t *)src_buf;
 		if (req->id_valid) {
 			sub_index = diag_check_subid_mask_index(req->sub_id,
-				pid);
+				0);
 			ms_ptr = diag_get_ms_ptr_index(log_mask.ms_ptr,
 				sub_index);
 			if (!ms_ptr)
@@ -1963,7 +1969,7 @@
 		read_len += sizeof(struct diag_log_config_rsp_sub_t);
 		if (req_sub->id_valid) {
 			sub_index = diag_check_subid_mask_index(req_sub->sub_id,
-				pid);
+				0);
 			ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
 				sub_index);
 			if (!ms_ptr) {
@@ -2170,7 +2176,7 @@
 		req = (struct diag_log_config_rsp_sub_t *)src_buf;
 		if (req->id_valid) {
 			sub_index = diag_check_subid_mask_index(req->sub_id,
-				pid);
+				0);
 			ms_ptr = diag_get_ms_ptr_index(mask_info->ms_ptr,
 				sub_index);
 			if (!ms_ptr) {
@@ -3425,7 +3431,9 @@
 			subid = *(uint32_t *)(buf +
 				sizeof(struct diag_pkt_header_t) +
 				2*sizeof(uint8_t));
+			mutex_lock(&driver->md_session_lock);
 			subid_index = diag_check_subid_mask_index(subid, pid);
+			mutex_unlock(&driver->md_session_lock);
 		}
 		if (subid_valid && (subid_index < 0))
 			return 0;
@@ -3608,8 +3616,8 @@
 
 	diag_subid_info[i] = subid;
 
-	mutex_lock(&driver->md_session_lock);
-	info = diag_md_session_get_pid(pid);
+	if (pid)
+		info = diag_md_session_get_pid(pid);
 
 	err = diag_multisim_msg_mask_init(i, info);
 	if (err)
@@ -3621,10 +3629,8 @@
 	if (err)
 		goto fail;
 
-	mutex_unlock(&driver->md_session_lock);
 	return i;
 fail:
-	mutex_unlock(&driver->md_session_lock);
 	pr_err("diag: Could not initialize diag mask for subid: %d buffers\n",
 		subid);
 	return -ENOMEM;
diff --git a/drivers/char/diag/diagchar.h b/drivers/char/diag/diagchar.h
index 0516429..880cbeb 100644
--- a/drivers/char/diag/diagchar.h
+++ b/drivers/char/diag/diagchar.h
@@ -797,6 +797,7 @@
 	struct mutex diag_id_mutex;
 	struct mutex diagid_v2_mutex;
 	struct mutex cmd_reg_mutex;
+	spinlock_t dci_mempool_lock;
 	uint32_t cmd_reg_count;
 	struct mutex diagfwd_channel_mutex[NUM_PERIPHERALS];
 	/* Sizes that reflect memory pool sizes */
diff --git a/drivers/char/diag/diagfwd.h b/drivers/char/diag/diagfwd.h
index fd79491..8960b72 100644
--- a/drivers/char/diag/diagfwd.h
+++ b/drivers/char/diag/diagfwd.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2008-2019, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2008-2020, The Linux Foundation. All rights reserved.
  */
 
 #ifndef DIAGFWD_H
@@ -44,4 +44,6 @@
 int diag_process_stm_cmd(unsigned char *buf, unsigned char *dest_buf);
 void diag_md_hdlc_reset_timer_func(struct timer_list *tlist);
 void diag_update_md_clients(unsigned int type);
+void diag_process_stm_mask(uint8_t cmd, uint8_t data_mask,
+	int data_type);
 #endif
diff --git a/drivers/char/diag/diagfwd_cntl.c b/drivers/char/diag/diagfwd_cntl.c
index 5acb25f..c41839f 100644
--- a/drivers/char/diag/diagfwd_cntl.c
+++ b/drivers/char/diag/diagfwd_cntl.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2011-2019, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2020, The Linux Foundation. All rights reserved.
  */
 
 #include <linux/slab.h>
@@ -1894,12 +1894,18 @@
 			pr_err("diag: Unable to send PASSTHRU ctrl packet to peripheral %d, err: %d\n",
 				i, err);
 	}
+	if ((diagid_mask & DIAG_ID_APPS) &&
+		(hw_accel_type == DIAG_HW_ACCEL_TYPE_STM)) {
+		diag_process_stm_mask(req_params->operation,
+			DIAG_STM_APPS, APPS_DATA);
+	}
 	return 0;
 }
 
 int diagfwd_cntl_init(void)
 {
 	uint8_t peripheral = 0;
+	uint32_t diagid_mask = 0;
 
 	driver->polling_reg_flag = 0;
 	driver->log_on_demand_support = 1;
@@ -1920,6 +1926,9 @@
 	if (!driver->cntl_wq)
 		return -ENOMEM;
 
+	diagid_mask = (BITMASK_DIAGID_FMASK | BITMASK_HW_ACCEL_STM_V1);
+	process_diagid_v2_feature_mask(DIAG_ID_APPS, diagid_mask);
+
 	return 0;
 }
 
diff --git a/drivers/char/diag/diagfwd_cntl.h b/drivers/char/diag/diagfwd_cntl.h
index b714d5e..b78f7e3 100644
--- a/drivers/char/diag/diagfwd_cntl.h
+++ b/drivers/char/diag/diagfwd_cntl.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2011-2019, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2020, The Linux Foundation. All rights reserved.
  */
 
 #ifndef DIAGFWD_CNTL_H
@@ -91,6 +91,10 @@
 #define MAX_DIAGID_STR_LEN	30
 #define MIN_DIAGID_STR_LEN	5
 
+#define BITMASK_DIAGID_FMASK		0x0001
+#define BITMASK_HW_ACCEL_STM_V1		0x0002
+#define BITMASK_HW_ACCEL_ATB_V1		0x0004
+
 struct diag_ctrl_pkt_header_t {
 	uint32_t pkt_id;
 	uint32_t len;
diff --git a/drivers/char/hw_random/msm_rng.c b/drivers/char/hw_random/msm_rng.c
index 4479b1d..541fa71 100644
--- a/drivers/char/hw_random/msm_rng.c
+++ b/drivers/char/hw_random/msm_rng.c
@@ -285,6 +285,10 @@
 					"qcom,msm-rng-iface-clk")) {
 				msm_rng_dev->prng_clk = clk_get(&pdev->dev,
 							"iface_clk");
+			} else if (of_property_read_bool(pdev->dev.of_node,
+					"qcom,msm-rng-hwkm-clk")) {
+				msm_rng_dev->prng_clk = clk_get(&pdev->dev,
+							 "km_clk_src");
 			} else {
 				msm_rng_dev->prng_clk = clk_get(&pdev->dev,
 							 "core_clk");
diff --git a/drivers/clk/qcom/debugcc-scuba.c b/drivers/clk/qcom/debugcc-scuba.c
index 7830fd3..9d4820b 100644
--- a/drivers/clk/qcom/debugcc-scuba.c
+++ b/drivers/clk/qcom/debugcc-scuba.c
@@ -111,7 +111,6 @@
 	"disp_cc_debug_mux",
 	"gcc_ahb2phy_csi_clk",
 	"gcc_ahb2phy_usb_clk",
-	"gcc_apc_vs_clk",
 	"gcc_bimc_gpu_axi_clk",
 	"gcc_boot_rom_ahb_clk",
 	"gcc_cam_throttle_nrt_clk",
@@ -156,8 +155,6 @@
 	"gcc_gpu_memnoc_gfx_clk",
 	"gcc_gpu_snoc_dvm_gfx_clk",
 	"gcc_gpu_throttle_core_clk",
-	"gcc_gpu_throttle_xo_clk",
-	"gcc_mss_vs_clk",
 	"gcc_pdm2_clk",
 	"gcc_pdm_ahb_clk",
 	"gcc_pdm_xo4_clk",
@@ -190,9 +187,6 @@
 	"gcc_usb3_prim_phy_com_aux_clk",
 	"gcc_usb3_prim_phy_pipe_clk",
 	"gcc_vcodec0_axi_clk",
-	"gcc_vdda_vs_clk",
-	"gcc_vddcx_vs_clk",
-	"gcc_vddmx_vs_clk",
 	"gcc_venus_ahb_clk",
 	"gcc_venus_ctl_axi_clk",
 	"gcc_video_ahb_clk",
@@ -201,9 +195,6 @@
 	"gcc_video_vcodec0_sys_clk",
 	"gcc_video_venus_ctl_clk",
 	"gcc_video_xo_clk",
-	"gcc_vs_ctrl_ahb_clk",
-	"gcc_vs_ctrl_clk",
-	"gcc_wcss_vs_clk",
 	"gpu_cc_debug_mux",
 	"mc_cc_debug_mux",
 	"measure_only_cnoc_clk",
@@ -222,7 +213,6 @@
 	0x41,		/* disp_cc_debug_mux */
 	0x62,		/* gcc_ahb2phy_csi_clk */
 	0x63,		/* gcc_ahb2phy_usb_clk */
-	0xBF,		/* gcc_apc_vs_clk */
 	0x8D,		/* gcc_bimc_gpu_axi_clk */
 	0x75,		/* gcc_boot_rom_ahb_clk */
 	0x4B,		/* gcc_cam_throttle_nrt_clk */
@@ -267,8 +257,6 @@
 	0xE4,		/* gcc_gpu_memnoc_gfx_clk */
 	0xE6,		/* gcc_gpu_snoc_dvm_gfx_clk */
 	0xEB,		/* gcc_gpu_throttle_core_clk */
-	0xEA,		/* gcc_gpu_throttle_xo_clk */
-	0xBE,		/* gcc_mss_vs_clk */
 	0x72,		/* gcc_pdm2_clk */
 	0x70,		/* gcc_pdm_ahb_clk */
 	0x71,		/* gcc_pdm_xo4_clk */
@@ -301,9 +289,6 @@
 	0x5E,		/* gcc_usb3_prim_phy_com_aux_clk */
 	0x5F,		/* gcc_usb3_prim_phy_pipe_clk */
 	0x12C,		/* gcc_vcodec0_axi_clk */
-	0xBB,		/* gcc_vdda_vs_clk */
-	0xB9,		/* gcc_vddcx_vs_clk */
-	0xBA,		/* gcc_vddmx_vs_clk */
 	0x12D,		/* gcc_venus_ahb_clk */
 	0x12B,		/* gcc_venus_ctl_axi_clk */
 	0x35,		/* gcc_video_ahb_clk */
@@ -312,9 +297,6 @@
 	0x129,		/* gcc_video_vcodec0_sys_clk */
 	0x127,		/* gcc_video_venus_ctl_clk */
 	0x3D,		/* gcc_video_xo_clk */
-	0xBD,		/* gcc_vs_ctrl_ahb_clk */
-	0xBC,		/* gcc_vs_ctrl_clk */
-	0xC0,		/* gcc_wcss_vs_clk */
 	0xE3,		/* gpu_cc_debug_mux */
 	0x9B,		/* mc_cc_debug_mux */
 	0x19,		/* measure_only_cnoc_clk */
@@ -351,9 +333,7 @@
 static const char *const gpu_cc_debug_mux_parent_names[] = {
 	"gpu_cc_ahb_clk",
 	"gpu_cc_crc_ahb_clk",
-	"gpu_cc_cx_apb_clk",
 	"gpu_cc_cx_gfx3d_clk",
-	"gpu_cc_cx_gfx3d_slv_clk",
 	"gpu_cc_cx_gmu_clk",
 	"gpu_cc_cx_snoc_dvm_clk",
 	"gpu_cc_cxo_aon_clk",
@@ -366,9 +346,7 @@
 static int gpu_cc_debug_mux_sels[] = {
 	0x10,		/* gpu_cc_ahb_clk */
 	0x11,		/* gpu_cc_crc_ahb_clk */
-	0x14,		/* gpu_cc_cx_apb_clk */
 	0x1A,		/* gpu_cc_cx_gfx3d_clk */
-	0x1B,		/* gpu_cc_cx_gfx3d_slv_clk */
 	0x18,		/* gpu_cc_cx_gmu_clk */
 	0x15,		/* gpu_cc_cx_snoc_dvm_clk */
 	0xA,		/* gpu_cc_cxo_aon_clk */
diff --git a/drivers/clk/qcom/gcc-scuba.c b/drivers/clk/qcom/gcc-scuba.c
index 6100d2e..b8bf0d6 100644
--- a/drivers/clk/qcom/gcc-scuba.c
+++ b/drivers/clk/qcom/gcc-scuba.c
@@ -2465,19 +2465,6 @@
 	},
 };
 
-static struct clk_branch gcc_gpu_throttle_xo_clk = {
-	.halt_reg = 0x36044,
-	.halt_check = BRANCH_HALT,
-	.clkr = {
-		.enable_reg = 0x36044,
-		.enable_mask = BIT(0),
-		.hw.init = &(struct clk_init_data){
-			.name = "gcc_gpu_throttle_xo_clk",
-			.ops = &clk_branch2_ops,
-		},
-	},
-};
-
 static struct clk_branch gcc_pdm2_clk = {
 	.halt_reg = 0x2000c,
 	.halt_check = BRANCH_HALT,
@@ -3192,7 +3179,6 @@
 	[GCC_GPU_MEMNOC_GFX_CLK] = &gcc_gpu_memnoc_gfx_clk.clkr,
 	[GCC_GPU_SNOC_DVM_GFX_CLK] = &gcc_gpu_snoc_dvm_gfx_clk.clkr,
 	[GCC_GPU_THROTTLE_CORE_CLK] = &gcc_gpu_throttle_core_clk.clkr,
-	[GCC_GPU_THROTTLE_XO_CLK] = &gcc_gpu_throttle_xo_clk.clkr,
 	[GCC_PDM2_CLK] = &gcc_pdm2_clk.clkr,
 	[GCC_PDM2_CLK_SRC] = &gcc_pdm2_clk_src.clkr,
 	[GCC_PDM_AHB_CLK] = &gcc_pdm_ahb_clk.clkr,
diff --git a/drivers/clk/qcom/gpucc-scuba.c b/drivers/clk/qcom/gpucc-scuba.c
index 1ffb4d3..87e11f8 100644
--- a/drivers/clk/qcom/gpucc-scuba.c
+++ b/drivers/clk/qcom/gpucc-scuba.c
@@ -110,9 +110,9 @@
 			.num_rate_max = VDD_NUM,
 			.rate_max = (unsigned long[VDD_NUM]) {
 				[VDD_MIN] = 1200000000,
-				[VDD_LOWER] = 2400000000,
-				[VDD_LOW] = 3000000000,
-				[VDD_NOMINAL] = 3300000000},
+				[VDD_LOWER] = 2400000000UL,
+				[VDD_LOW] = 3000000000UL,
+				[VDD_NOMINAL] = 3300000000UL},
 		},
 	},
 };
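
The UL suffixes added above matter on 32-bit builds: an unsuffixed decimal constant such as 3000000000 does not fit in a 32-bit int or long, so C types it as long long, while the rate_max array elements are unsigned long. The suffix gives each constant the matching type up front and avoids the implicit conversion warning. A one-line illustration (assumption: a 32-bit target where long is 32 bits):

	unsigned long rate = 3000000000UL;	/* UL: typed unsigned long, no long long intermediate */
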
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index c65f2a8..20b245b 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -804,8 +804,4 @@
 
 source "drivers/crypto/hisilicon/Kconfig"
 
-if ARCH_QCOM
-source drivers/crypto/msm/Kconfig
-endif
-
 endif # CRYPTO_HW
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index e2ca339..c23396f 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -21,7 +21,6 @@
 obj-$(CONFIG_CRYPTO_DEV_MXC_SCC) += mxc-scc.o
 obj-$(CONFIG_CRYPTO_DEV_NIAGARA2) += n2_crypto.o
 n2_crypto-y := n2_core.o n2_asm.o
-obj-$(CONFIG_CRYPTO_DEV_QCOM_ICE) += msm/
 obj-$(CONFIG_CRYPTO_DEV_NX) += nx/
 obj-$(CONFIG_CRYPTO_DEV_OMAP) += omap-crypto.o
 obj-$(CONFIG_CRYPTO_DEV_OMAP_AES) += omap-aes-driver.o
diff --git a/drivers/crypto/msm/Kconfig b/drivers/crypto/msm/Kconfig
deleted file mode 100644
index cd4e519..0000000
--- a/drivers/crypto/msm/Kconfig
+++ /dev/null
@@ -1,10 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-config CRYPTO_DEV_QCOM_ICE
-	tristate "Inline Crypto Module"
-	default n
-	depends on BLK_DEV_DM
-	help
-	  This driver supports Inline Crypto Engine for QTI chipsets, MSM8994
-	  and later, to accelerate crypto operations for storage needs.
-	  To compile this driver as a module, choose M here: the
-	  module will be called ice.
diff --git a/drivers/crypto/msm/Makefile b/drivers/crypto/msm/Makefile
index 48a92b6..ba6763c 100644
--- a/drivers/crypto/msm/Makefile
+++ b/drivers/crypto/msm/Makefile
@@ -4,4 +4,3 @@
 obj-$(CONFIG_CRYPTO_DEV_QCEDEV) += qcedev_smmu.o
 obj-$(CONFIG_CRYPTO_DEV_QCRYPTO) += qcrypto.o
 obj-$(CONFIG_CRYPTO_DEV_OTA_CRYPTO) += ota_crypto.o
-obj-$(CONFIG_CRYPTO_DEV_QCOM_ICE) += ice.o
diff --git a/drivers/crypto/msm/ice.c b/drivers/crypto/msm/ice.c
deleted file mode 100644
index 097e871..0000000
--- a/drivers/crypto/msm/ice.c
+++ /dev/null
@@ -1,1784 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * QTI Inline Crypto Engine (ICE) driver
- *
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/io.h>
-#include <linux/interrupt.h>
-#include <linux/delay.h>
-#include <linux/of.h>
-#include <linux/device-mapper.h>
-#include <linux/clk.h>
-#include <linux/regulator/consumer.h>
-#include <linux/msm-bus.h>
-#include <crypto/ice.h>
-#include <soc/qcom/scm.h>
-#include <soc/qcom/qseecomi.h>
-#include "iceregs.h"
-#include <linux/pfk.h>
-#include <linux/atomic.h>
-#include <linux/wait.h>
-
-#define TZ_SYSCALL_CREATE_SMC_ID(o, s, f) \
-	((uint32_t)((((o & 0x3f) << 24) | (s & 0xff) << 8) | (f & 0xff)))
-
-#define TZ_OWNER_QSEE_OS                 50
-#define TZ_SVC_KEYSTORE                  5     /* Keystore management */
-
-#define TZ_OS_KS_RESTORE_KEY_ID \
-	TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_QSEE_OS, TZ_SVC_KEYSTORE, 0x06)
-
-#define TZ_SYSCALL_CREATE_PARAM_ID_0 0
-
-#define TZ_OS_KS_RESTORE_KEY_ID_PARAM_ID \
-	TZ_SYSCALL_CREATE_PARAM_ID_0
-
-#define TZ_OS_KS_RESTORE_KEY_CONFIG_ID \
-	TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_QSEE_OS, TZ_SVC_KEYSTORE, 0x06)
-
-#define TZ_OS_KS_RESTORE_KEY_CONFIG_ID_PARAM_ID \
-	TZ_SYSCALL_CREATE_PARAM_ID_1(TZ_SYSCALL_PARAM_TYPE_VAL)
-
-
-#define ICE_REV(x, y) (((x) & ICE_CORE_##y##_REV_MASK) >> ICE_CORE_##y##_REV)
-#define QCOM_UFS_ICE_DEV	"iceufs"
-#define QCOM_UFS_CARD_ICE_DEV	"iceufscard"
-#define QCOM_SDCC_ICE_DEV	"icesdcc"
-#define QCOM_ICE_MAX_BIST_CHECK_COUNT 100
-
-#define QCOM_ICE_ENCRYPT	0x1
-#define QCOM_ICE_DECRYPT	0x2
-#define QCOM_SECT_LEN_IN_BYTE	512
-#define QCOM_UD_FOOTER_SIZE	0x4000
-#define QCOM_UD_FOOTER_SECS	(QCOM_UD_FOOTER_SIZE / QCOM_SECT_LEN_IN_BYTE)
-
-#define ICE_CRYPTO_CXT_FDE 1
-#define ICE_CRYPTO_CXT_FBE 2
-#define ICE_INSTANCE_TYPE_LENGTH 12
-
-static int ice_fde_flag;
-
-struct ice_clk_info {
-	struct list_head list;
-	struct clk *clk;
-	const char *name;
-	u32 max_freq;
-	u32 min_freq;
-	u32 curr_freq;
-	bool enabled;
-};
-
-static LIST_HEAD(ice_devices);
-
-static int qti_ice_setting_config(struct request *req,
-		struct ice_device *ice_dev,
-		struct ice_crypto_setting *crypto_data,
-		struct ice_data_setting *setting, uint32_t cxt)
-{
-	if (ice_dev->is_ice_disable_fuse_blown) {
-		pr_err("%s ICE disabled fuse is blown\n", __func__);
-		return -EPERM;
-	}
-
-	if (!setting)
-		return -EINVAL;
-
-	if ((short)(crypto_data->key_index) >= 0) {
-		memcpy(&setting->crypto_data, crypto_data,
-				sizeof(setting->crypto_data));
-
-		if (rq_data_dir(req) == WRITE) {
-			if ((cxt == ICE_CRYPTO_CXT_FBE) ||
-				((cxt == ICE_CRYPTO_CXT_FDE) &&
-					(ice_fde_flag & QCOM_ICE_ENCRYPT)))
-				setting->encr_bypass = false;
-		} else if (rq_data_dir(req) == READ) {
-			if ((cxt == ICE_CRYPTO_CXT_FBE) ||
-				((cxt == ICE_CRYPTO_CXT_FDE) &&
-					(ice_fde_flag & QCOM_ICE_DECRYPT)))
-				setting->decr_bypass = false;
-		} else {
-			/* Should I say BUG_ON */
-			setting->encr_bypass = true;
-			setting->decr_bypass = true;
-		}
-	}
-
-	return 0;
-}
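
qti_ice_setting_config() only ever clears a bypass flag; the caller is expected to start from all-bypass defaults (qcom_ice_config_start() below does exactly that). A sketch of how the resulting flags gate keyslot programming in a storage driver (program_write_keyslot() is illustrative, not a real API):

	struct ice_data_setting setting = {
		.encr_bypass = true,
		.decr_bypass = true,
	};

	if (!qti_ice_setting_config(req, ice_dev, crypto_data, &setting,
				    ICE_CRYPTO_CXT_FDE) &&
	    rq_data_dir(req) == WRITE && !setting.encr_bypass)
		program_write_keyslot(&setting.crypto_data);	/* hypothetical */
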
-
-void qcom_ice_set_fde_flag(int flag)
-{
-	ice_fde_flag = flag;
-	pr_debug("%s read_write setting %d\n", __func__, ice_fde_flag);
-}
-EXPORT_SYMBOL(qcom_ice_set_fde_flag);
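
qcom_ice_set_fde_flag() is how a full-disk-encryption consumer toggles FDE processing per direction; the flag is a bitmask built from the QCOM_ICE_ENCRYPT/QCOM_ICE_DECRYPT defines above. Illustrative usage:

	/* FDE fully active: encrypt writes and decrypt reads */
	qcom_ice_set_fde_flag(QCOM_ICE_ENCRYPT | QCOM_ICE_DECRYPT);

	/* FDE off: leave both directions bypassed */
	qcom_ice_set_fde_flag(0);
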
-
-static int qcom_ice_enable_clocks(struct ice_device *, bool);
-
-#ifdef CONFIG_MSM_BUS_SCALING
-
-static int qcom_ice_set_bus_vote(struct ice_device *ice_dev, int vote)
-{
-	int err = 0;
-
-	if (vote != ice_dev->bus_vote.curr_vote) {
-		err = msm_bus_scale_client_update_request(
-				ice_dev->bus_vote.client_handle, vote);
-		if (err) {
-			dev_err(ice_dev->pdev,
-				"%s:failed:client_handle=0x%x, vote=%d, err=%d\n",
-				__func__, ice_dev->bus_vote.client_handle,
-				vote, err);
-			goto out;
-		}
-		ice_dev->bus_vote.curr_vote = vote;
-	}
-out:
-	return err;
-}
-
-static int qcom_ice_get_bus_vote(struct ice_device *ice_dev,
-		const char *speed_mode)
-{
-	struct device *dev = ice_dev->pdev;
-	struct device_node *np = dev->of_node;
-	int err;
-	const char *key = "qcom,bus-vector-names";
-
-	if (!speed_mode) {
-		err = -EINVAL;
-		goto out;
-	}
-
-	if (ice_dev->bus_vote.is_max_bw_needed && !!strcmp(speed_mode, "MIN"))
-		err = of_property_match_string(np, key, "MAX");
-	else
-		err = of_property_match_string(np, key, speed_mode);
-out:
-	if (err < 0)
-		dev_err(dev, "%s: Invalid %s mode %d\n",
-				__func__, speed_mode, err);
-	return err;
-}
-
-static int qcom_ice_bus_register(struct ice_device *ice_dev)
-{
-	int err = 0;
-	struct msm_bus_scale_pdata *bus_pdata;
-	struct device *dev = ice_dev->pdev;
-	struct platform_device *pdev = to_platform_device(dev);
-	struct device_node *np = dev->of_node;
-
-	bus_pdata = msm_bus_cl_get_pdata(pdev);
-	if (!bus_pdata) {
-		dev_err(dev, "%s: failed to get bus vectors\n", __func__);
-		err = -ENODATA;
-		goto out;
-	}
-
-	err = of_property_count_strings(np, "qcom,bus-vector-names");
-	if (err < 0 || err != bus_pdata->num_usecases) {
-		dev_err(dev, "%s: Error = %d with qcom,bus-vector-names\n",
-				__func__, err);
-		goto out;
-	}
-	err = 0;
-
-	ice_dev->bus_vote.client_handle =
-			msm_bus_scale_register_client(bus_pdata);
-	if (!ice_dev->bus_vote.client_handle) {
-		dev_err(dev, "%s: msm_bus_scale_register_client failed\n",
-				__func__);
-		err = -EFAULT;
-		goto out;
-	}
-
-	/* cache the vote index for minimum and maximum bandwidth */
-	ice_dev->bus_vote.min_bw_vote = qcom_ice_get_bus_vote(ice_dev, "MIN");
-	ice_dev->bus_vote.max_bw_vote = qcom_ice_get_bus_vote(ice_dev, "MAX");
-out:
-	return err;
-}
-
-#else
-
-static int qcom_ice_set_bus_vote(struct ice_device *ice_dev, int vote)
-{
-	return 0;
-}
-
-static int qcom_ice_get_bus_vote(struct ice_device *ice_dev,
-		const char *speed_mode)
-{
-	return 0;
-}
-
-static int qcom_ice_bus_register(struct ice_device *ice_dev)
-{
-	return 0;
-}
-#endif /* CONFIG_MSM_BUS_SCALING */
-
-static int qcom_ice_get_vreg(struct ice_device *ice_dev)
-{
-	int ret = 0;
-
-	if (!ice_dev->is_regulator_available)
-		return 0;
-
-	if (ice_dev->reg)
-		return 0;
-
-	ice_dev->reg = devm_regulator_get(ice_dev->pdev, "vdd-hba");
-	if (IS_ERR(ice_dev->reg)) {
-		ret = PTR_ERR(ice_dev->reg);
-		dev_err(ice_dev->pdev, "%s: %s get failed, err=%d\n",
-			__func__, "vdd-hba-supply", ret);
-	}
-	return ret;
-}
-
-static void qcom_ice_config_proc_ignore(struct ice_device *ice_dev)
-{
-	u32 regval;
-
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2 &&
-	    ICE_REV(ice_dev->ice_hw_version, MINOR) == 0 &&
-	    ICE_REV(ice_dev->ice_hw_version, STEP) == 0) {
-		regval = qcom_ice_readl(ice_dev,
-				QCOM_ICE_REGS_ADVANCED_CONTROL);
-		regval |= 0x800;
-		qcom_ice_writel(ice_dev, regval,
-				QCOM_ICE_REGS_ADVANCED_CONTROL);
-		/* Ensure register is updated */
-		mb();
-	}
-}
-
-static void qcom_ice_low_power_mode_enable(struct ice_device *ice_dev)
-{
-	u32 regval;
-
-	regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ADVANCED_CONTROL);
-	/*
-	 * Enable low power mode sequence
-	 * [0]-0, [1]-0, [2]-0, [3]-E, [4]-0, [5]-0, [6]-0, [7]-0
-	 */
-	regval |= 0x7000;
-	qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_ADVANCED_CONTROL);
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-}
-
-static void qcom_ice_enable_test_bus_config(struct ice_device *ice_dev)
-{
-	/*
-	 * Configure & enable ICE_TEST_BUS_REG to reflect ICE intr lines
-	 * MAIN_TEST_BUS_SELECTOR = 0 (ICE_CONFIG)
-	 * TEST_BUS_REG_EN = 1 (ENABLE)
-	 */
-	u32 regval;
-
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
-		return;
-
-	regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_TEST_BUS_CONTROL);
-	regval &= 0x0FFFFFFF;
-	/* TBD: replace 0x2 with define in iceregs.h */
-	regval |= 0x2;
-	qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_TEST_BUS_CONTROL);
-
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-}
-
-static void qcom_ice_optimization_enable(struct ice_device *ice_dev)
-{
-	u32 regval;
-
-	regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ADVANCED_CONTROL);
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
-		regval |= 0xD807100;
-	else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1)
-		regval |= 0x3F007100;
-
-	/* ICE Optimizations Enable Sequence */
-	udelay(5);
-	/* [0]-0, [1]-0, [2]-8, [3]-E, [4]-0, [5]-0, [6]-F, [7]-A */
-	qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_ADVANCED_CONTROL);
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-
-	/* ICE HPG requires sleep before writing */
-	udelay(5);
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1) {
-		regval = 0;
-		regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ENDIAN_SWAP);
-		regval |= 0xF;
-		qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_ENDIAN_SWAP);
-		/*
-		 * Ensure previous instructions were completed before issuing
-		 * next ICE commands
-		 */
-		mb();
-	}
-}
-
-static int qcom_ice_wait_bist_status(struct ice_device *ice_dev)
-{
-	int count;
-	u32 reg;
-
-	/* Poll until all BIST bits are reset */
-	for (count = 0; count < QCOM_ICE_MAX_BIST_CHECK_COUNT; count++) {
-		reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BIST_STATUS);
-		if (!(reg & ICE_BIST_STATUS_MASK))
-			break;
-		udelay(50);
-	}
-
-	if (reg & ICE_BIST_STATUS_MASK)
-		return -ETIMEDOUT;
-
-	return 0;
-}
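
The polling loop above is the open-coded form of what <linux/iopoll.h> provides; an equivalent sketch using the helper (assuming the same 50us step and an atomic context) could look like this:

	#include <linux/iopoll.h>

	static int qcom_ice_wait_bist_status_poll(struct ice_device *ice_dev)
	{
		u32 reg;

		/* poll BIST_STATUS until the BIST bits clear, ~5ms budget */
		return readl_poll_timeout_atomic(
				ice_dev->mmio + QCOM_ICE_REGS_BIST_STATUS, reg,
				!(reg & ICE_BIST_STATUS_MASK), 50,
				50 * QCOM_ICE_MAX_BIST_CHECK_COUNT);
	}
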
-
-static int qcom_ice_enable(struct ice_device *ice_dev)
-{
-	unsigned int reg;
-	int ret = 0;
-
-	if ((ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) ||
-		((ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2) &&
-		 (ICE_REV(ice_dev->ice_hw_version, MINOR) >= 1)))
-		ret = qcom_ice_wait_bist_status(ice_dev);
-	if (ret) {
-		dev_err(ice_dev->pdev, "BIST status error (%d)\n", ret);
-		return ret;
-	}
-
-	/* Starting with ICE v3, enabling is done at the storage controller (UFS/SDCC) */
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 3)
-		return 0;
-
-	/*
-	 * To enable ICE, perform the following:
-	 * 1. Set IGNORE_CONTROLLER_RESET to USE in ICE_RESET register
-	 * 2. Disable GLOBAL_BYPASS bit in ICE_CONTROL register
-	 */
-	reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_RESET);
-
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
-		reg &= 0x0;
-	else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1)
-		reg &= ~0x100;
-
-	qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_RESET);
-
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-
-	reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_CONTROL);
-
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) >= 2)
-		reg &= 0xFFFE;
-	else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1)
-		reg &= ~0x7;
-	qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_CONTROL);
-
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-
-	if ((ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) ||
-		((ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2) &&
-		 (ICE_REV(ice_dev->ice_hw_version, MINOR) >= 1))) {
-		reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BYPASS_STATUS);
-		if ((reg & 0x80000000) != 0x0) {
-			pr_err("%s: Bypass failed for ice = %pK\n",
-				__func__, (void *)ice_dev);
-			WARN_ON(1);
-		}
-	}
-	return 0;
-}
-
-static int qcom_ice_verify_ice(struct ice_device *ice_dev)
-{
-	unsigned int rev;
-	unsigned int maj_rev, min_rev, step_rev;
-
-	rev = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_VERSION);
-	maj_rev = (rev & ICE_CORE_MAJOR_REV_MASK) >> ICE_CORE_MAJOR_REV;
-	min_rev = (rev & ICE_CORE_MINOR_REV_MASK) >> ICE_CORE_MINOR_REV;
-	step_rev = (rev & ICE_CORE_STEP_REV_MASK) >> ICE_CORE_STEP_REV;
-
-	if (maj_rev > ICE_CORE_CURRENT_MAJOR_VERSION) {
-		pr_err("%s: Unknown QC ICE device at %lu, rev %d.%d.%d\n",
-			__func__, (unsigned long)ice_dev->mmio,
-			maj_rev, min_rev, step_rev);
-		return -ENODEV;
-	}
-	ice_dev->ice_hw_version = rev;
-
-	dev_info(ice_dev->pdev, "QC ICE %d.%d.%d device found @0x%pK\n",
-					maj_rev, min_rev, step_rev,
-					ice_dev->mmio);
-
-	return 0;
-}
-
-static void qcom_ice_enable_intr(struct ice_device *ice_dev)
-{
-	unsigned int reg;
-
-	reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
-	reg &= ~QCOM_ICE_NON_SEC_IRQ_MASK;
-	qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-}
-
-static void qcom_ice_disable_intr(struct ice_device *ice_dev)
-{
-	unsigned int reg;
-
-	reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
-	reg |= QCOM_ICE_NON_SEC_IRQ_MASK;
-	qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-}
-
-static irqreturn_t qcom_ice_isr(int isr, void *data)
-{
-	irqreturn_t retval = IRQ_NONE;
-	u32 status;
-	struct ice_device *ice_dev = data;
-
-	status = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_STTS);
-	if (status) {
-		ice_dev->error_cb(ice_dev->host_controller_data, status);
-
-		/* Interrupt has been handled. Clear the IRQ */
-		qcom_ice_writel(ice_dev, status, QCOM_ICE_REGS_NON_SEC_IRQ_CLR);
-		/* Ensure instruction is completed */
-		mb();
-		retval = IRQ_HANDLED;
-	}
-	return retval;
-}
-
-static void qcom_ice_parse_ice_instance_type(struct platform_device *pdev,
-		struct ice_device *ice_dev)
-{
-	int ret = -1;
-	struct device *dev = &pdev->dev;
-	struct device_node *np = dev->of_node;
-	const char *type;
-
-	ret = of_property_read_string_index(np, "qcom,instance-type", 0, &type);
-	if (ret) {
-		pr_err("%s: Could not get ICE instance type\n", __func__);
-		goto out;
-	}
-	strlcpy(ice_dev->ice_instance_type, type, QCOM_ICE_TYPE_NAME_LEN);
-out:
-	return;
-}
-
-static int qcom_ice_parse_clock_info(struct platform_device *pdev,
-		struct ice_device *ice_dev)
-{
-	int ret = -1, cnt, i, len;
-	struct device *dev = &pdev->dev;
-	struct device_node *np = dev->of_node;
-	char *name;
-	struct ice_clk_info *clki;
-	u32 *clkfreq = NULL;
-
-	if (!np)
-		goto out;
-
-	cnt = of_property_count_strings(np, "clock-names");
-	if (cnt <= 0) {
-		dev_info(dev, "%s: Unable to find clocks, assuming enabled\n",
-				__func__);
-		ret = cnt;
-		goto out;
-	}
-
-	if (!of_get_property(np, "qcom,op-freq-hz", &len)) {
-		dev_info(dev, "qcom,op-freq-hz property not specified\n");
-		goto out;
-	}
-
-	len = len/sizeof(*clkfreq);
-	if (len != cnt)
-		goto out;
-
-	clkfreq = devm_kzalloc(dev, len * sizeof(*clkfreq), GFP_KERNEL);
-	if (!clkfreq) {
-		ret = -ENOMEM;
-		goto out;
-	}
-	ret = of_property_read_u32_array(np, "qcom,op-freq-hz", clkfreq, len);
-
-	INIT_LIST_HEAD(&ice_dev->clk_list_head);
-
-	for (i = 0; i < cnt; i++) {
-		ret = of_property_read_string_index(np,
-				"clock-names", i, (const char **)&name);
-		if (ret)
-			goto out;
-
-		clki = devm_kzalloc(dev, sizeof(*clki), GFP_KERNEL);
-		if (!clki) {
-			ret = -ENOMEM;
-			goto out;
-		}
-		clki->max_freq = clkfreq[i];
-		clki->name = kstrdup(name, GFP_KERNEL);
-		list_add_tail(&clki->list, &ice_dev->clk_list_head);
-	}
-out:
-	if (clkfreq)
-		devm_kfree(dev, (void *)clkfreq);
-	return ret;
-}
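
qcom_ice_parse_clock_info() expects the "clock-names" and "qcom,op-freq-hz" lists in the device node to have equal length, pairing each clock with its operating frequency. A node feeding this parser could carry something like the following (hypothetical clock name and value):

	/* hypothetical DT fragment consumed by the parser above:
	 *	clock-names = "ice_core_clk";
	 *	qcom,op-freq-hz = <300000000>;
	 */
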
-
-static int qcom_ice_get_device_tree_data(struct platform_device *pdev,
-		struct ice_device *ice_dev)
-{
-	struct device *dev = &pdev->dev;
-	int rc = -1;
-	int irq;
-
-	ice_dev->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!ice_dev->res) {
-		pr_err("%s: No memory available for IORESOURCE\n", __func__);
-		return -ENOMEM;
-	}
-
-	ice_dev->mmio = devm_ioremap_resource(dev, ice_dev->res);
-	if (IS_ERR(ice_dev->mmio)) {
-		rc = PTR_ERR(ice_dev->mmio);
-		pr_err("%s: Error = %d mapping ICE io memory\n", __func__, rc);
-		goto out;
-	}
-
-	if (!of_parse_phandle(pdev->dev.of_node, "vdd-hba-supply", 0)) {
-		pr_err("%s: No vdd-hba-supply regulator, assuming not needed\n",
-								 __func__);
-		ice_dev->is_regulator_available = false;
-	} else {
-		ice_dev->is_regulator_available = true;
-	}
-	ice_dev->is_ice_clk_available = of_property_read_bool(
-						(&pdev->dev)->of_node,
-						"qcom,enable-ice-clk");
-
-	if (ice_dev->is_ice_clk_available) {
-		rc = qcom_ice_parse_clock_info(pdev, ice_dev);
-		if (rc) {
-			pr_err("%s: qcom_ice_parse_clock_info failed (%d)\n",
-				__func__, rc);
-			goto err_dev;
-		}
-	}
-
-	/* ICE interrupts are only relevant for v2.x */
-	irq = platform_get_irq(pdev, 0);
-	if (irq >= 0) {
-		rc = devm_request_irq(dev, irq, qcom_ice_isr, 0, dev_name(dev),
-				ice_dev);
-		if (rc) {
-			pr_err("%s: devm_request_irq irq=%d failed (%d)\n",
-				__func__, irq, rc);
-			goto err_dev;
-		}
-		ice_dev->irq = irq;
-		pr_info("ICE IRQ = %d\n", ice_dev->irq);
-	} else {
-		dev_dbg(dev, "IRQ resource not available\n");
-	}
-
-	qcom_ice_parse_ice_instance_type(pdev, ice_dev);
-
-	return 0;
-err_dev:
-	if (rc && ice_dev->mmio)
-		devm_iounmap(dev, ice_dev->mmio);
-out:
-	return rc;
-}
-
-/*
- * An ICE HW instance can exist in UFS or eMMC based storage HW.
- * Userspace does not know what kind of ICE it is dealing with.
- * Though userspace can find which storage device it is booting
- * from, not all storage types have supported ICE from the
- * beginning. So an ICE device is created for userspace to ping
- * to check whether ICE exists for that kind of storage.
- */
-static const struct file_operations qcom_ice_fops = {
-	.owner = THIS_MODULE,
-};
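
Given the character devices registered below (iceufs, iceufscard, icesdcc), the userspace ping amounts to checking whether the node exists. A hypothetical userspace probe:

	/* userspace sketch: does this platform expose ICE for UFS? */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		if (access("/dev/iceufs", F_OK) == 0)
			printf("UFS ICE present\n");
		return 0;
	}
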
-
-static int register_ice_device(struct ice_device *ice_dev)
-{
-	int rc = 0;
-	unsigned int baseminor = 0;
-	unsigned int count = 1;
-	struct device *class_dev;
-	char ice_type[ICE_INSTANCE_TYPE_LENGTH];
-
-	if (!strcmp(ice_dev->ice_instance_type, "sdcc"))
-		strlcpy(ice_type, QCOM_SDCC_ICE_DEV, sizeof(ice_type));
-	else if (!strcmp(ice_dev->ice_instance_type, "ufscard"))
-		strlcpy(ice_type, QCOM_UFS_CARD_ICE_DEV, sizeof(ice_type));
-	else
-		strlcpy(ice_type, QCOM_UFS_ICE_DEV, sizeof(ice_type));
-
-	rc = alloc_chrdev_region(&ice_dev->device_no, baseminor, count,
-			ice_type);
-	if (rc < 0) {
-		pr_err("alloc_chrdev_region failed %d for %s\n", rc,
-			ice_type);
-		return rc;
-	}
-	ice_dev->driver_class = class_create(THIS_MODULE, ice_type);
-	if (IS_ERR(ice_dev->driver_class)) {
-		rc = PTR_ERR(ice_dev->driver_class);
-		pr_err("class_create failed %d for %s\n", rc, ice_type);
-		goto exit_unreg_chrdev_region;
-	}
-	class_dev = device_create(ice_dev->driver_class, NULL,
-					ice_dev->device_no, NULL, ice_type);
-
-	if (IS_ERR(class_dev)) {
-		rc = PTR_ERR(class_dev);
-		pr_err("device_create failed %d for %s\n", rc, ice_type);
-		goto exit_destroy_class;
-	}
-
-	cdev_init(&ice_dev->cdev, &qcom_ice_fops);
-	ice_dev->cdev.owner = THIS_MODULE;
-
-	rc = cdev_add(&ice_dev->cdev, MKDEV(MAJOR(ice_dev->device_no), 0), 1);
-	if (rc < 0) {
-		pr_err("cdev_add failed %d for %s\n", rc, ice_type);
-		goto exit_destroy_device;
-	}
-	return  0;
-
-exit_destroy_device:
-	device_destroy(ice_dev->driver_class, ice_dev->device_no);
-
-exit_destroy_class:
-	class_destroy(ice_dev->driver_class);
-
-exit_unreg_chrdev_region:
-	unregister_chrdev_region(ice_dev->device_no, 1);
-	return rc;
-}
-
-static int qcom_ice_probe(struct platform_device *pdev)
-{
-	struct ice_device *ice_dev;
-	int rc = 0;
-
-	if (!pdev) {
-		pr_err("%s: Invalid platform_device passed\n",
-			__func__);
-		return -EINVAL;
-	}
-
-	ice_dev = kzalloc(sizeof(struct ice_device), GFP_KERNEL);
-
-	if (!ice_dev) {
-		rc = -ENOMEM;
-		pr_err("%s: Error %d allocating memory for ICE device:\n",
-			__func__, rc);
-		goto out;
-	}
-
-	ice_dev->pdev = &pdev->dev;
-	if (!ice_dev->pdev) {
-		rc = -EINVAL;
-		pr_err("%s: Invalid device passed in platform_device\n",
-								__func__);
-		goto err_ice_dev;
-	}
-
-	if (pdev->dev.of_node)
-		rc = qcom_ice_get_device_tree_data(pdev, ice_dev);
-	else {
-		rc = -EINVAL;
-		pr_err("%s: ICE device node not found\n", __func__);
-	}
-
-	if (rc)
-		goto err_ice_dev;
-
-	pr_debug("%s: Registering ICE device\n", __func__);
-	rc = register_ice_device(ice_dev);
-	if (rc) {
-		pr_err("create character device failed.\n");
-		goto err_ice_dev;
-	}
-
-	/*
-	 * If ICE were enabled here, it would be a waste of power.
-	 * We enable ICE when the first request for a crypto
-	 * operation arrives.
-	 */
-	ice_dev->is_ice_enabled = false;
-
-	rc = pfk_initialize_key_table(ice_dev);
-	if (rc) {
-		pr_err("Failed to initialize key table\n");
-		goto err_ice_dev;
-	}
-
-	platform_set_drvdata(pdev, ice_dev);
-	list_add_tail(&ice_dev->list, &ice_devices);
-
-	goto out;
-
-err_ice_dev:
-	kfree(ice_dev);
-out:
-	return rc;
-}
-
-static int qcom_ice_remove(struct platform_device *pdev)
-{
-	struct ice_device *ice_dev;
-
-	ice_dev = (struct ice_device *)platform_get_drvdata(pdev);
-
-	if (!ice_dev)
-		return 0;
-
-	pfk_remove(ice_dev);
-	qcom_ice_disable_intr(ice_dev);
-
-	device_init_wakeup(&pdev->dev, false);
-	if (ice_dev->mmio)
-		iounmap(ice_dev->mmio);
-
-	list_del_init(&ice_dev->list);
-	kfree(ice_dev);
-
-	return 0;
-}
-
-static int  qcom_ice_suspend(struct platform_device *pdev)
-{
-	struct ice_device *ice_dev;
-	int ret = 0;
-
-	ice_dev = (struct ice_device *)platform_get_drvdata(pdev);
-
-	if (!ice_dev)
-		return -EINVAL;
-	if (atomic_read(&ice_dev->is_ice_busy) != 0) {
-		ret = wait_event_interruptible_timeout(
-			ice_dev->block_suspend_ice_queue,
-			atomic_read(&ice_dev->is_ice_busy) == 0,
-			msecs_to_jiffies(1000));
-
-		if (!ret) {
-			pr_err("%s: Suspend ICE during an ongoing operation\n",
-				__func__);
-			atomic_set(&ice_dev->is_ice_suspended, 0);
-			return -ETIME;
-		}
-	}
-
-	atomic_set(&ice_dev->is_ice_suspended, 1);
-	return 0;
-}
-
-static int qcom_ice_restore_config(void)
-{
-	struct scm_desc desc = {0};
-	int ret;
-
-	/*
-	 * TZ checks the KEYS_RAM_RESET_COMPLETED status bit before processing
-	 * the restore config command. This prevents two calls from HLOS to
-	 * TZ: one to check the KEYS_RAM_RESET_COMPLETED status bit and a
-	 * second to restore the config.
-	 */
-
-	desc.arginfo = TZ_OS_KS_RESTORE_KEY_ID_PARAM_ID;
-
-	ret = scm_call2(TZ_OS_KS_RESTORE_KEY_ID, &desc);
-
-	if (ret)
-		pr_err("%s: Error: 0x%x\n", __func__, ret);
-
-	return ret;
-}
-
-static int qcom_ice_init_clocks(struct ice_device *ice)
-{
-	int ret = -EINVAL;
-	struct ice_clk_info *clki = NULL;
-	struct device *dev = ice->pdev;
-	struct list_head *head = &ice->clk_list_head;
-
-	if (!head || list_empty(head)) {
-		dev_err(dev, "%s:ICE Clock list null/empty\n", __func__);
-		goto out;
-	}
-
-	list_for_each_entry(clki, head, list) {
-		if (!clki->name)
-			continue;
-
-		clki->clk = devm_clk_get(dev, clki->name);
-		if (IS_ERR(clki->clk)) {
-			ret = PTR_ERR(clki->clk);
-			dev_err(dev, "%s: %s clk get failed, %d\n",
-					__func__, clki->name, ret);
-			goto out;
-		}
-
-		/* Not all clocks would have a rate to be set */
-		ret = 0;
-		if (clki->max_freq) {
-			ret = clk_set_rate(clki->clk, clki->max_freq);
-			if (ret) {
-				dev_err(dev,
-				"%s: %s clk set rate(%dHz) failed, %d\n",
-						__func__, clki->name,
-				clki->max_freq, ret);
-				goto out;
-			}
-			clki->curr_freq = clki->max_freq;
-			dev_dbg(dev, "%s: clk: %s, rate: %lu\n", __func__,
-				clki->name, clk_get_rate(clki->clk));
-		}
-	}
-out:
-	return ret;
-}
-
-static int qcom_ice_enable_clocks(struct ice_device *ice, bool enable)
-{
-	int ret = 0;
-	struct ice_clk_info *clki = NULL;
-	struct device *dev = ice->pdev;
-	struct list_head *head = &ice->clk_list_head;
-
-	if (!head || list_empty(head)) {
-		dev_err(dev, "%s:ICE Clock list null/empty\n", __func__);
-		ret = -EINVAL;
-		goto out;
-	}
-
-	if (!ice->is_ice_clk_available) {
-		dev_err(dev, "%s:ICE Clock not available\n", __func__);
-		ret = -EINVAL;
-		goto out;
-	}
-
-	list_for_each_entry(clki, head, list) {
-		if (!clki->name)
-			continue;
-
-		if (enable)
-			ret = clk_prepare_enable(clki->clk);
-		else
-			clk_disable_unprepare(clki->clk);
-
-		if (ret) {
-			dev_err(dev, "Unable to %s ICE core clk\n",
-				enable ? "enable" : "disable");
-			goto out;
-		}
-	}
-out:
-	return ret;
-}
-
-static int qcom_ice_secure_ice_init(struct ice_device *ice_dev)
-{
-	/* We need to enable the source for ICE secure interrupts */
-	int ret = 0;
-	u32 regval;
-
-	regval = scm_io_read((unsigned long)ice_dev->res +
-			QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK);
-
-	regval &= ~QCOM_ICE_SEC_IRQ_MASK;
-	ret = scm_io_write((unsigned long)ice_dev->res +
-			QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK, regval);
-
-	/*
-	 * Ensure previous instructions were completed before issuing next
-	 * ICE initialization/optimization instruction
-	 */
-	mb();
-
-	if (!ret)
-		pr_err("%s: failed(0x%x) to init secure ICE config\n",
-								__func__, ret);
-	return ret;
-}
-
-static int qcom_ice_update_sec_cfg(struct ice_device *ice_dev)
-{
-	int ret = 0, scm_ret = 0;
-
-	/* scm command buffer structure */
-	struct qcom_scm_cmd_buf {
-		unsigned int device_id;
-		unsigned int spare;
-	} cbuf = {0};
-
-	/*
-	 * Ideally, we should check the ICE version to decide whether to
-	 * proceed or not. Since the version won't be available when this
-	 * function is called, we depend upon is_ice_clk_available to decide.
-	 */
-	if (ice_dev->is_ice_clk_available)
-		goto out;
-
-	/*
-	 * Store dev_id in ice_device structure so that emmc/ufs cases can be
-	 * handled properly
-	 */
-	#define RESTORE_SEC_CFG_CMD	0x2
-	#define ICE_TZ_DEV_ID	20
-
-	cbuf.device_id = ICE_TZ_DEV_ID;
-	ret = scm_restore_sec_cfg(cbuf.device_id, cbuf.spare, &scm_ret);
-	if (ret || scm_ret) {
-		pr_err("%s: failed, ret %d scm_ret %d\n",
-						__func__, ret, scm_ret);
-		if (!ret)
-			ret = scm_ret;
-	}
-out:
-
-	return ret;
-}
-
-static int qcom_ice_finish_init(struct ice_device *ice_dev)
-{
-	unsigned int reg;
-	int err = 0;
-
-	if (!ice_dev) {
-		pr_err("%s: Null data received\n", __func__);
-		err = -ENODEV;
-		goto out;
-	}
-
-	if (ice_dev->is_ice_clk_available) {
-		err = qcom_ice_init_clocks(ice_dev);
-		if (err)
-			goto out;
-
-		err = qcom_ice_bus_register(ice_dev);
-		if (err)
-			goto out;
-	}
-
-	/*
-	 * It is possible that the ICE device is not yet probed when the host
-	 * is probed. This would cause the host probe to be deferred. When the
-	 * host probe is deferred, it can cause power collapse for the host,
-	 * and that can wipe the configurations of host & ICE. It is prudent
-	 * to restore the config.
-	 */
-	err = qcom_ice_update_sec_cfg(ice_dev);
-	if (err)
-		goto out;
-
-	err = qcom_ice_verify_ice(ice_dev);
-	if (err)
-		goto out;
-
-	/* If ICE_DISABLE_FUSE is blown, return immediately.
-	 * Currently, FORCE HW keys are also disabled, since
-	 * there is no use case for them in either FDE
-	 * or PFE.
-	 */
-	reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_FUSE_SETTING);
-	reg &= (ICE_FUSE_SETTING_MASK |
-		ICE_FORCE_HW_KEY0_SETTING_MASK |
-		ICE_FORCE_HW_KEY1_SETTING_MASK);
-
-	if (reg) {
-		ice_dev->is_ice_disable_fuse_blown = true;
-		pr_err("%s: Error: ICE_ERROR_HW_DISABLE_FUSE_BLOWN\n",
-								__func__);
-		err = -EPERM;
-		goto out;
-	}
-
-	/* TZ side of ICE driver would handle secure init of ICE HW from v2 */
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1 &&
-		!qcom_ice_secure_ice_init(ice_dev)) {
-		pr_err("%s: Error: ICE_ERROR_ICE_TZ_INIT_FAILED\n", __func__);
-		err = -EFAULT;
-		goto out;
-	}
-	init_waitqueue_head(&ice_dev->block_suspend_ice_queue);
-	qcom_ice_low_power_mode_enable(ice_dev);
-	qcom_ice_optimization_enable(ice_dev);
-	qcom_ice_config_proc_ignore(ice_dev);
-	qcom_ice_enable_test_bus_config(ice_dev);
-	qcom_ice_enable(ice_dev);
-	ice_dev->is_ice_enabled = true;
-	qcom_ice_enable_intr(ice_dev);
-	atomic_set(&ice_dev->is_ice_suspended, 0);
-	atomic_set(&ice_dev->is_ice_busy, 0);
-out:
-	return err;
-}
-
-static int qcom_ice_init(struct platform_device *pdev,
-			void *host_controller_data,
-			ice_error_cb error_cb)
-{
-	/*
-	 * A completion event for the host controller is triggered upon
-	 * initialization completion. Initialization puts ICE into Global
-	 * Bypass mode; when a request for data transfer is received, ICE
-	 * is enabled for that particular request.
-	 */
-	struct ice_device *ice_dev;
-
-	ice_dev = platform_get_drvdata(pdev);
-	if (!ice_dev) {
-		pr_err("%s: invalid device\n", __func__);
-		return -EINVAL;
-	}
-
-	ice_dev->error_cb = error_cb;
-	ice_dev->host_controller_data = host_controller_data;
-
-	return qcom_ice_finish_init(ice_dev);
-}
-
-static int qcom_ice_finish_power_collapse(struct ice_device *ice_dev)
-{
-	int err = 0;
-
-	if (ice_dev->is_ice_disable_fuse_blown) {
-		err = -EPERM;
-		goto out;
-	}
-
-	if (ice_dev->is_ice_enabled) {
-		/*
-		 * ICE resets into global bypass mode with optimization and
-		 * low power mode disabled. Hence we need to redo those sequences.
-		 */
-		qcom_ice_low_power_mode_enable(ice_dev);
-
-		qcom_ice_enable_test_bus_config(ice_dev);
-
-		qcom_ice_optimization_enable(ice_dev);
-		qcom_ice_enable(ice_dev);
-
-		if (ICE_REV(ice_dev->ice_hw_version, MAJOR) == 1) {
-			/*
-			 * When ICE resets, it wipes all of keys from LUTs
-			 * ICE driver should call TZ to restore keys
-			 */
-			if (qcom_ice_restore_config()) {
-				err = -EFAULT;
-				goto out;
-			}
-
-		/*
-		 * ICE loses its key configuration when UFS is reset;
-		 * restore it.
-		 */
-		} else if (ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) {
-			/*
-			 * for PFE case, clear the cached ICE key table,
-			 * this will force keys to be reconfigured
-			 * per each next transaction
-			 */
-			pfk_clear_on_reset(ice_dev);
-		}
-	}
-
-	ice_dev->ice_reset_complete_time = ktime_get();
-out:
-	return err;
-}
-
-static int qcom_ice_resume(struct platform_device *pdev)
-{
-	/*
-	 * ICE is power collapsed when the storage controller is power
-	 * collapsed. The ICE resume function is responsible for:
-	 * - the ICE HW enabling sequence
-	 * - key restoration
-	 * A completion event should be triggered upon resume completion;
-	 * the storage driver will be fully operational only after
-	 * receiving this event.
-	 */
-	struct ice_device *ice_dev;
-
-	ice_dev = platform_get_drvdata(pdev);
-
-	if (!ice_dev)
-		return -EINVAL;
-
-	if (ice_dev->is_ice_clk_available) {
-		/*
-		 * Storage is calling this function after power collapse which
-		 * would put ICE into GLOBAL_BYPASS mode. Make sure to enable
-		 * ICE
-		 */
-		qcom_ice_enable(ice_dev);
-	}
-	atomic_set(&ice_dev->is_ice_suspended, 0);
-	return 0;
-}
-
-static void qcom_ice_dump_test_bus(struct ice_device *ice_dev)
-{
-	u32 reg = 0x1;
-	u32 val;
-	u8 bus_selector;
-	u8 stream_selector;
-
-	pr_err("ICE TEST BUS DUMP:\n");
-
-	for (bus_selector = 0; bus_selector <= 0xF; bus_selector++) {
-		reg = 0x1;	/* enable test bus */
-		reg |= bus_selector << 28;
-		if (bus_selector == 0xD)
-			continue;
-		qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_TEST_BUS_CONTROL);
-		/*
-		 * make sure test bus selector is written before reading
-		 * the test bus register
-		 */
-		mb();
-		val = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_TEST_BUS_REG);
-		pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
-			reg, val);
-	}
-
-	pr_err("ICE TEST BUS DUMP (ICE_STREAM1_DATAPATH_TEST_BUS):\n");
-	for (stream_selector = 0; stream_selector <= 0xF; stream_selector++) {
-		reg = 0xD0000001;	/* enable stream test bus */
-		reg |= stream_selector << 16;
-		qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_TEST_BUS_CONTROL);
-		/*
-		 * make sure test bus selector is written before reading
-		 * the test bus register
-		 */
-		mb();
-		val = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_TEST_BUS_REG);
-		pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
-			reg, val);
-	}
-}
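
The selector encoding above packs the main test-bus selector into bits 31:28 and the stream selector into bits 23:16, with bit 0 as the enable. For example:

	/* stream_selector = 3:
	 * 0xD0000001 | (3 << 16) == 0xD0030001
	 */
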
-
-static void qcom_ice_debug(struct platform_device *pdev)
-{
-	struct ice_device *ice_dev;
-
-	if (!pdev) {
-		pr_err("%s: Invalid params passed\n", __func__);
-		goto out;
-	}
-
-	ice_dev = platform_get_drvdata(pdev);
-
-	if (!ice_dev) {
-		pr_err("%s: No ICE device available\n", __func__);
-		goto out;
-	}
-
-	if (!ice_dev->is_ice_enabled) {
-		pr_err("%s: ICE device is not enabled\n", __func__);
-		goto out;
-	}
-
-	pr_err("%s: =========== REGISTER DUMP (%pK)===========\n",
-			ice_dev->ice_instance_type, ice_dev);
-
-	pr_err("%s: ICE Control: 0x%08x | ICE Reset: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_CONTROL),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_RESET));
-
-	pr_err("%s: ICE Version: 0x%08x | ICE FUSE:  0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_VERSION),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_FUSE_SETTING));
-
-	pr_err("%s: ICE Param1: 0x%08x | ICE Param2:  0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_1),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_2));
-
-	pr_err("%s: ICE Param3: 0x%08x | ICE Param4:  0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_3),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_4));
-
-	pr_err("%s: ICE Param5: 0x%08x | ICE IRQ STTS:  0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_PARAMETERS_5),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_STTS));
-
-	pr_err("%s: ICE IRQ MASK: 0x%08x | ICE IRQ CLR:  0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_CLR));
-
-	if (ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) {
-		pr_err("%s: ICE INVALID CCFG ERR STTS: 0x%08x\n",
-			ice_dev->ice_instance_type,
-			qcom_ice_readl(ice_dev,
-				QCOM_ICE_INVALID_CCFG_ERR_STTS));
-	}
-
-	if ((ICE_REV(ice_dev->ice_hw_version, MAJOR) > 2) ||
-		((ICE_REV(ice_dev->ice_hw_version, MAJOR) == 2) &&
-		 (ICE_REV(ice_dev->ice_hw_version, MINOR) >= 1))) {
-		pr_err("%s: ICE BIST Sts: 0x%08x | ICE Bypass Sts:  0x%08x\n",
-			ice_dev->ice_instance_type,
-			qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BIST_STATUS),
-			qcom_ice_readl(ice_dev, QCOM_ICE_REGS_BYPASS_STATUS));
-	}
-
-	pr_err("%s: ICE ADV CTRL: 0x%08x | ICE ENDIAN SWAP:  0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ADVANCED_CONTROL),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ENDIAN_SWAP));
-
-	pr_err("%s: ICE_STM1_ERR_SYND1: 0x%08x | ICE_STM1_ERR_SYND2: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME1),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME2));
-
-	pr_err("%s: ICE_STM2_ERR_SYND1: 0x%08x | ICE_STM2_ERR_SYND2: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME1),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME2));
-
-	pr_err("%s: ICE_STM1_COUNTER1: 0x%08x | ICE_STM1_COUNTER2: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS1),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS2));
-
-	pr_err("%s: ICE_STM1_COUNTER3: 0x%08x | ICE_STM1_COUNTER4: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS3),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS4));
-
-	pr_err("%s: ICE_STM2_COUNTER1: 0x%08x | ICE_STM2_COUNTER2: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS1),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS2));
-
-	pr_err("%s: ICE_STM2_COUNTER3: 0x%08x | ICE_STM2_COUNTER4: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS3),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS4));
-
-	pr_err("%s: ICE_STM1_CTR5_MSB: 0x%08x | ICE_STM1_CTR5_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS5_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS5_LSB));
-
-	pr_err("%s: ICE_STM1_CTR6_MSB: 0x%08x | ICE_STM1_CTR6_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS6_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS6_LSB));
-
-	pr_err("%s: ICE_STM1_CTR7_MSB: 0x%08x | ICE_STM1_CTR7_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS7_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS7_LSB));
-
-	pr_err("%s: ICE_STM1_CTR8_MSB: 0x%08x | ICE_STM1_CTR8_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS8_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS8_LSB));
-
-	pr_err("%s: ICE_STM1_CTR9_MSB: 0x%08x | ICE_STM1_CTR9_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS9_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM1_COUNTERS9_LSB));
-
-	pr_err("%s: ICE_STM2_CTR5_MSB: 0x%08x | ICE_STM2_CTR5_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS5_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS5_LSB));
-
-	pr_err("%s: ICE_STM2_CTR6_MSB: 0x%08x | ICE_STM2_CTR6_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS6_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS6_LSB));
-
-	pr_err("%s: ICE_STM2_CTR7_MSB: 0x%08x | ICE_STM2_CTR7_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS7_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS7_LSB));
-
-	pr_err("%s: ICE_STM2_CTR8_MSB: 0x%08x | ICE_STM2_CTR8_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS8_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS8_LSB));
-
-	pr_err("%s: ICE_STM2_CTR9_MSB: 0x%08x | ICE_STM2_CTR9_LSB: 0x%08x\n",
-		ice_dev->ice_instance_type,
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS9_MSB),
-		qcom_ice_readl(ice_dev, QCOM_ICE_REGS_STREAM2_COUNTERS9_LSB));
-
-	qcom_ice_dump_test_bus(ice_dev);
-	pr_err("%s: ICE reset start time: %llu ICE reset done time: %llu\n",
-			ice_dev->ice_instance_type,
-		(unsigned long long)ice_dev->ice_reset_start_time,
-		(unsigned long long)ice_dev->ice_reset_complete_time);
-
-	if (ktime_to_us(ktime_sub(ice_dev->ice_reset_complete_time,
-				  ice_dev->ice_reset_start_time)) > 0)
-		pr_err("%s: Time taken for reset: %lu\n",
-			ice_dev->ice_instance_type,
-			(unsigned long)ktime_to_us(ktime_sub(
-					ice_dev->ice_reset_complete_time,
-					ice_dev->ice_reset_start_time)));
-out:
-	return;
-}
-
-static int qcom_ice_reset(struct  platform_device *pdev)
-{
-	struct ice_device *ice_dev;
-
-	ice_dev = platform_get_drvdata(pdev);
-	if (!ice_dev) {
-		pr_err("%s: INVALID ice_dev\n", __func__);
-		return -EINVAL;
-	}
-
-	ice_dev->ice_reset_start_time = ktime_get();
-
-	return qcom_ice_finish_power_collapse(ice_dev);
-}
-
-static int qcom_ice_config_start(struct platform_device *pdev,
-		struct request *req,
-		struct ice_data_setting *setting, bool async)
-{
-	struct ice_crypto_setting pfk_crypto_data = {0};
-	struct ice_crypto_setting ice_data = {0};
-	int ret = 0;
-	bool is_pfe = false;
-	unsigned long sec_end = 0;
-	sector_t data_size;
-	struct ice_device *ice_dev;
-
-	if (!pdev || !req) {
-		pr_err("%s: Invalid params passed\n", __func__);
-		return -EINVAL;
-	}
-	ice_dev = platform_get_drvdata(pdev);
-	if (!ice_dev) {
-		pr_debug("%s no ICE device\n", __func__);
-		/* make the caller finish peacefully */
-		return 0;
-	}
-
-	/*
-	 * It is not an error to have a request with no bio.
-	 * Such requests must bypass ICE, so first set bypass and then
-	 * return if no bio is available in the request.
-	 */
-	if (setting) {
-		setting->encr_bypass = true;
-		setting->decr_bypass = true;
-	}
-
-	if (!req->bio) {
-		/* It is not an error to have a request with no bio */
-		return 0;
-	}
-
-	if (atomic_read(&ice_dev->is_ice_suspended) == 1)
-		return -EINVAL;
-
-	if (async)
-		atomic_set(&ice_dev->is_ice_busy, 1);
-
-	ret = pfk_load_key_start(req->bio, ice_dev, &pfk_crypto_data,
-			&is_pfe, async);
-
-	if (async) {
-		atomic_set(&ice_dev->is_ice_busy, 0);
-		wake_up_interruptible(&ice_dev->block_suspend_ice_queue);
-	}
-
-	if (is_pfe) {
-		if (ret) {
-			if (ret != -EBUSY && ret != -EAGAIN)
-				pr_err("%s error %d while configuring ice key for PFE\n",
-						__func__, ret);
-			return ret;
-		}
-
-		return qti_ice_setting_config(req, ice_dev,
-				&pfk_crypto_data, setting, ICE_CRYPTO_CXT_FBE);
-	}
-
-	if (ice_fde_flag && req->part && req->part->info
-				&& req->part->info->volname[0]) {
-		if (!strcmp(req->part->info->volname, "userdata")) {
-			sec_end = req->part->start_sect + req->part->nr_sects -
-					QCOM_UD_FOOTER_SECS;
-			if ((req->__sector >= req->part->start_sect) &&
-				(req->__sector < sec_end)) {
-				/*
-				 * Ugly hack to address a non-block-size
-				 * aligned userdata end address on eMMC based
-				 * devices. On eMMC, the sector and block
-				 * sizes are not the same (e.g. 512B vs 4K),
-				 * so the partition may not be a multiple of
-				 * the block size. On UFS based devices the
-				 * sector size and block size are the same.
-				 * Hence ensure that the data is within the
-				 * userdata partition using a sector based
-				 * calculation.
-				 */
-				data_size = req->__data_len /
-						QCOM_SECT_LEN_IN_BYTE;
-
-				if ((req->__sector + data_size) > sec_end)
-					return 0;
-				else
-					return qti_ice_setting_config(req,
-						ice_dev, &ice_data, setting,
-						ICE_CRYPTO_CXT_FDE);
-			}
-		}
-	}
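
For concreteness, QCOM_UD_FOOTER_SECS is 0x4000 / 512 = 32 sectors, so FDE configuration is applied only when the whole request lies below the footer. A worked example with illustrative numbers:

	/* userdata: start_sect = 1000000, nr_sects = 500000
	 * sec_end   = 1000000 + 500000 - 32 = 1499968
	 * a 64KiB request at sector 1499900 spans 65536 / 512 = 128 sectors;
	 * 1499900 + 128 = 1500028 > 1499968, so it straddles the footer and
	 * bypasses ICE (the function returns 0 with bypass still set)
	 */
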
-
-	/*
-	 * It is not an error. If the target is not req-crypt based, all
-	 * requests from the storage driver come here to check whether any
-	 * ICE setting is required.
-	 */
-	return 0;
-}
-EXPORT_SYMBOL(qcom_ice_config_start);
-
-static int qcom_ice_config_end(struct platform_device *pdev,
-		struct request *req)
-{
-	int ret = 0;
-	bool is_pfe = false;
-	struct ice_device *ice_dev;
-
-	if (!req || !pdev) {
-		pr_err("%s: Invalid params passed\n", __func__);
-		return -EINVAL;
-	}
-
-	if (!req->bio) {
-		/* It is not an error to have a request with no bio */
-		return 0;
-	}
-
-	ice_dev = platform_get_drvdata(pdev);
-	if (!ice_dev) {
-		pr_debug("%s no ICE device\n", __func__);
-		/* make the caller finish peacefully */
-		return 0;
-	}
-
-	ret = pfk_load_key_end(req->bio, ice_dev, &is_pfe);
-	if (is_pfe) {
-		if (ret != 0)
-			pr_err("%s error %d while end configuring ice key for PFE\n",
-								__func__, ret);
-		return ret;
-	}
-
-
-	return 0;
-}
-EXPORT_SYMBOL(qcom_ice_config_end);
-
-
-static int qcom_ice_status(struct platform_device *pdev)
-{
-	struct ice_device *ice_dev;
-	unsigned int test_bus_reg_status;
-
-	if (!pdev) {
-		pr_err("%s: Invalid params passed\n", __func__);
-		return -EINVAL;
-	}
-
-	ice_dev = platform_get_drvdata(pdev);
-
-	if (!ice_dev)
-		return -ENODEV;
-
-	if (!ice_dev->is_ice_enabled)
-		return -ENODEV;
-
-	test_bus_reg_status = qcom_ice_readl(ice_dev,
-					QCOM_ICE_REGS_TEST_BUS_REG);
-
-	return !!(test_bus_reg_status & QCOM_ICE_TEST_BUS_REG_NON_SECURE_INTR);
-
-}
-
-struct qcom_ice_variant_ops qcom_ice_ops = {
-	.name             = "qcom",
-	.init             = qcom_ice_init,
-	.reset            = qcom_ice_reset,
-	.resume           = qcom_ice_resume,
-	.suspend          = qcom_ice_suspend,
-	.config_start     = qcom_ice_config_start,
-	.config_end       = qcom_ice_config_end,
-	.status           = qcom_ice_status,
-	.debug            = qcom_ice_debug,
-};
-
-struct platform_device *qcom_ice_get_pdevice(struct device_node *node)
-{
-	struct platform_device *ice_pdev = NULL;
-	struct ice_device *ice_dev = NULL;
-
-	if (!node) {
-		pr_err("%s: invalid node %pK\n", __func__, node);
-		goto out;
-	}
-
-	if (!of_device_is_available(node)) {
-		pr_err("%s: device unavailable\n", __func__);
-		goto out;
-	}
-
-	if (list_empty(&ice_devices)) {
-		pr_err("%s: invalid device list\n", __func__);
-		ice_pdev = ERR_PTR(-EPROBE_DEFER);
-		goto out;
-	}
-
-	list_for_each_entry(ice_dev, &ice_devices, list) {
-		if (ice_dev->pdev->of_node == node) {
-			pr_info("%s: found ice device %pK\n", __func__,
-			ice_dev);
-			ice_pdev = to_platform_device(ice_dev->pdev);
-			break;
-		}
-	}
-
-	if (ice_pdev)
-		pr_info("%s: matching platform device %pK\n", __func__,
-			ice_pdev);
-out:
-	return ice_pdev;
-}
-
-static struct ice_device *get_ice_device_from_storage_type
-					(const char *storage_type)
-{
-	struct ice_device *ice_dev = NULL;
-
-	if (list_empty(&ice_devices)) {
-		pr_err("%s: invalid device list\n", __func__);
-		ice_dev = ERR_PTR(-EPROBE_DEFER);
-		goto out;
-	}
-
-	list_for_each_entry(ice_dev, &ice_devices, list) {
-		if (!strcmp(ice_dev->ice_instance_type, storage_type)) {
-			pr_debug("%s: ice device %pK\n", __func__, ice_dev);
-			return ice_dev;
-		}
-	}
-out:
-	return NULL;
-}
-
-int enable_ice_setup(struct ice_device *ice_dev)
-{
-	int ret = -1, vote;
-
-	/* Setup Regulator */
-	if (ice_dev->is_regulator_available) {
-		if (qcom_ice_get_vreg(ice_dev)) {
-			pr_err("%s: Could not get regulator\n", __func__);
-			goto out;
-		}
-		ret = regulator_enable(ice_dev->reg);
-		if (ret) {
-			pr_err("%s:%pK: Could not enable regulator\n",
-					__func__, ice_dev);
-			goto out;
-		}
-	}
-
-	/* Setup Clocks */
-	if (qcom_ice_enable_clocks(ice_dev, true)) {
-		pr_err("%s:%pK:%s Could not enable clocks\n", __func__,
-				ice_dev, ice_dev->ice_instance_type);
-		goto out_reg;
-	}
-
-	/* Setup Bus Vote */
-	vote = qcom_ice_get_bus_vote(ice_dev, "MAX");
-	if (vote < 0)
-		goto out_clocks;
-
-	ret = qcom_ice_set_bus_vote(ice_dev, vote);
-	if (ret) {
-		pr_err("%s:%pK: failed %d\n", __func__, ice_dev, ret);
-		goto out_clocks;
-	}
-
-	return ret;
-
-out_clocks:
-	qcom_ice_enable_clocks(ice_dev, false);
-out_reg:
-	if (ice_dev->is_regulator_available) {
-		if (qcom_ice_get_vreg(ice_dev)) {
-			pr_err("%s: Could not get regulator\n", __func__);
-			goto out;
-		}
-		ret = regulator_disable(ice_dev->reg);
-		if (ret) {
-			pr_err("%s:%pK: Could not disable regulator\n",
-					__func__, ice_dev);
-			goto out;
-		}
-	}
-out:
-	return ret;
-}
-
-int disable_ice_setup(struct ice_device *ice_dev)
-{
-	int ret = -1, vote;
-
-	/* Drop bus vote */
-	vote = qcom_ice_get_bus_vote(ice_dev, "MIN");
-	if (vote < 0) {
-		pr_err("%s:%pK: Unable to get bus vote\n", __func__, ice_dev);
-		goto out_disable_clocks;
-	}
-
-	ret = qcom_ice_set_bus_vote(ice_dev, vote);
-	if (ret)
-		pr_err("%s:%pK: failed %d\n", __func__, ice_dev, ret);
-
-out_disable_clocks:
-
-	/* Disable clocks */
-	if (qcom_ice_enable_clocks(ice_dev, false))
-		pr_err("%s:%pK:%s Could not disable clocks\n", __func__,
-				ice_dev, ice_dev->ice_instance_type);
-
-	/* Disable regulator */
-	if (ice_dev->is_regulator_available) {
-		if (qcom_ice_get_vreg(ice_dev)) {
-			pr_err("%s: Could not get regulator\n", __func__);
-			goto out;
-		}
-		ret = regulator_disable(ice_dev->reg);
-		if (ret) {
-			pr_err("%s:%pK: Could not disable regulator\n",
-					__func__, ice_dev);
-			goto out;
-		}
-	}
-out:
-	return ret;
-}
-
-int qcom_ice_setup_ice_hw(const char *storage_type, int enable)
-{
-	int ret = -1;
-	struct ice_device *ice_dev = NULL;
-
-	ice_dev = get_ice_device_from_storage_type(storage_type);
-	if (ice_dev == ERR_PTR(-EPROBE_DEFER))
-		return -EPROBE_DEFER;
-
-	if (!ice_dev || !(ice_dev->is_ice_enabled))
-		return ret;
-
-	if (enable)
-		return enable_ice_setup(ice_dev);
-	else
-		return disable_ice_setup(ice_dev);
-}
-
-struct list_head *get_ice_dev_list(void)
-{
-	return &ice_devices;
-}
-
-struct qcom_ice_variant_ops *qcom_ice_get_variant_ops(struct device_node *node)
-{
-	return &qcom_ice_ops;
-}
-EXPORT_SYMBOL(qcom_ice_get_variant_ops);
-
-/* The following table matches the device with the driver from the DTS file */
-static const struct of_device_id qcom_ice_match[] = {
-	{ .compatible = "qcom,ice" },
-	{},
-};
-MODULE_DEVICE_TABLE(of, qcom_ice_match);
-
-static struct platform_driver qcom_ice_driver = {
-	.probe          = qcom_ice_probe,
-	.remove         = qcom_ice_remove,
-	.driver         = {
-		.name   = "qcom_ice",
-		.of_match_table = qcom_ice_match,
-	},
-};
-module_platform_driver(qcom_ice_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("QTI Inline Crypto Engine driver");
diff --git a/drivers/crypto/msm/iceregs.h b/drivers/crypto/msm/iceregs.h
deleted file mode 100644
index c3b5718..0000000
--- a/drivers/crypto/msm/iceregs.h
+++ /dev/null
@@ -1,151 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
-#define _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
-
-/* Register bits for ICE version */
-#define ICE_CORE_CURRENT_MAJOR_VERSION 0x03
-
-#define ICE_CORE_STEP_REV_MASK		0xFFFF
-#define ICE_CORE_STEP_REV		0 /* bit 15-0 */
-#define ICE_CORE_MAJOR_REV_MASK		0xFF000000
-#define ICE_CORE_MAJOR_REV		24 /* bit 31-24 */
-#define ICE_CORE_MINOR_REV_MASK		0xFF0000
-#define ICE_CORE_MINOR_REV		16 /* bit 23-16 */
-
-#define ICE_BIST_STATUS_MASK		(0xF0000000)	/* bits 28-31 */
-
-#define ICE_FUSE_SETTING_MASK			0x1
-#define ICE_FORCE_HW_KEY0_SETTING_MASK		0x2
-#define ICE_FORCE_HW_KEY1_SETTING_MASK		0x4
-
-/* QCOM ICE Registers from SWI */
-#define QCOM_ICE_REGS_CONTROL			0x0000
-#define QCOM_ICE_REGS_RESET			0x0004
-#define QCOM_ICE_REGS_VERSION			0x0008
-#define QCOM_ICE_REGS_FUSE_SETTING		0x0010
-#define QCOM_ICE_REGS_PARAMETERS_1		0x0014
-#define QCOM_ICE_REGS_PARAMETERS_2		0x0018
-#define QCOM_ICE_REGS_PARAMETERS_3		0x001C
-#define QCOM_ICE_REGS_PARAMETERS_4		0x0020
-#define QCOM_ICE_REGS_PARAMETERS_5		0x0024
-
-
-/* QCOM ICE v3.X only */
-#define QCOM_ICE_GENERAL_ERR_STTS		0x0040
-#define QCOM_ICE_INVALID_CCFG_ERR_STTS		0x0030
-#define QCOM_ICE_GENERAL_ERR_MASK		0x0044
-
-
-/* QCOM ICE v2.X only */
-#define QCOM_ICE_REGS_NON_SEC_IRQ_STTS		0x0040
-#define QCOM_ICE_REGS_NON_SEC_IRQ_MASK		0x0044
-
-
-#define QCOM_ICE_REGS_NON_SEC_IRQ_CLR		0x0048
-#define QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME1	0x0050
-#define QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME2	0x0054
-#define QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME1	0x0058
-#define QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME2	0x005C
-#define QCOM_ICE_REGS_STREAM1_BIST_ERROR_VEC	0x0060
-#define QCOM_ICE_REGS_STREAM2_BIST_ERROR_VEC	0x0064
-#define QCOM_ICE_REGS_STREAM1_BIST_FINISH_VEC	0x0068
-#define QCOM_ICE_REGS_STREAM2_BIST_FINISH_VEC	0x006C
-#define QCOM_ICE_REGS_BIST_STATUS		0x0070
-#define QCOM_ICE_REGS_BYPASS_STATUS		0x0074
-#define QCOM_ICE_REGS_ADVANCED_CONTROL		0x1000
-#define QCOM_ICE_REGS_ENDIAN_SWAP		0x1004
-#define QCOM_ICE_REGS_TEST_BUS_CONTROL		0x1010
-#define QCOM_ICE_REGS_TEST_BUS_REG		0x1014
-#define QCOM_ICE_REGS_STREAM1_COUNTERS1		0x1100
-#define QCOM_ICE_REGS_STREAM1_COUNTERS2		0x1104
-#define QCOM_ICE_REGS_STREAM1_COUNTERS3		0x1108
-#define QCOM_ICE_REGS_STREAM1_COUNTERS4		0x110C
-#define QCOM_ICE_REGS_STREAM1_COUNTERS5_MSB	0x1110
-#define QCOM_ICE_REGS_STREAM1_COUNTERS5_LSB	0x1114
-#define QCOM_ICE_REGS_STREAM1_COUNTERS6_MSB	0x1118
-#define QCOM_ICE_REGS_STREAM1_COUNTERS6_LSB	0x111C
-#define QCOM_ICE_REGS_STREAM1_COUNTERS7_MSB	0x1120
-#define QCOM_ICE_REGS_STREAM1_COUNTERS7_LSB	0x1124
-#define QCOM_ICE_REGS_STREAM1_COUNTERS8_MSB	0x1128
-#define QCOM_ICE_REGS_STREAM1_COUNTERS8_LSB	0x112C
-#define QCOM_ICE_REGS_STREAM1_COUNTERS9_MSB	0x1130
-#define QCOM_ICE_REGS_STREAM1_COUNTERS9_LSB	0x1134
-#define QCOM_ICE_REGS_STREAM2_COUNTERS1		0x1200
-#define QCOM_ICE_REGS_STREAM2_COUNTERS2		0x1204
-#define QCOM_ICE_REGS_STREAM2_COUNTERS3		0x1208
-#define QCOM_ICE_REGS_STREAM2_COUNTERS4		0x120C
-#define QCOM_ICE_REGS_STREAM2_COUNTERS5_MSB	0x1210
-#define QCOM_ICE_REGS_STREAM2_COUNTERS5_LSB	0x1214
-#define QCOM_ICE_REGS_STREAM2_COUNTERS6_MSB	0x1218
-#define QCOM_ICE_REGS_STREAM2_COUNTERS6_LSB	0x121C
-#define QCOM_ICE_REGS_STREAM2_COUNTERS7_MSB	0x1220
-#define QCOM_ICE_REGS_STREAM2_COUNTERS7_LSB	0x1224
-#define QCOM_ICE_REGS_STREAM2_COUNTERS8_MSB	0x1228
-#define QCOM_ICE_REGS_STREAM2_COUNTERS8_LSB	0x122C
-#define QCOM_ICE_REGS_STREAM2_COUNTERS9_MSB	0x1230
-#define QCOM_ICE_REGS_STREAM2_COUNTERS9_LSB	0x1234
-
-#define QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE		(1L << 0)
-#define QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE		(1L << 1)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_LBO		(1L << 2)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_LBO		(1L << 3)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_DUN		(1L << 4)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_DUN		(1L << 5)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_DUS		(1L << 6)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_DUS		(1L << 7)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_DBO		(1L << 8)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_DBO		(1L << 9)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_ENC_SEL		(1L << 10)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_ENC_SEL		(1L << 11)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_CONF_IDX		(1L << 12)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_CONF_IDX		(1L << 13)
-#define QCOM_ICE_STREAM1_NOT_EXPECTED_NEW_TRNS		(1L << 14)
-#define QCOM_ICE_STREAM2_NOT_EXPECTED_NEW_TRNS		(1L << 15)
-
-#define QCOM_ICE_NON_SEC_IRQ_MASK				\
-			(QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE |\
-			 QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE |\
-			 QCOM_ICE_STREAM1_NOT_EXPECTED_LBO |\
-			 QCOM_ICE_STREAM2_NOT_EXPECTED_LBO |\
-			 QCOM_ICE_STREAM1_NOT_EXPECTED_DUN |\
-			 QCOM_ICE_STREAM2_NOT_EXPECTED_DUN |\
-			 QCOM_ICE_STREAM2_NOT_EXPECTED_DUS |\
-			 QCOM_ICE_STREAM1_NOT_EXPECTED_DBO |\
-			 QCOM_ICE_STREAM2_NOT_EXPECTED_DBO |\
-			 QCOM_ICE_STREAM1_NOT_EXPECTED_ENC_SEL |\
-			 QCOM_ICE_STREAM2_NOT_EXPECTED_ENC_SEL |\
-			 QCOM_ICE_STREAM1_NOT_EXPECTED_CONF_IDX |\
-			 QCOM_ICE_STREAM1_NOT_EXPECTED_NEW_TRNS |\
-			 QCOM_ICE_STREAM2_NOT_EXPECTED_NEW_TRNS)
-
-/* QCOM ICE registers from secure side */
-#define QCOM_ICE_TEST_BUS_REG_SECURE_INTR            (1L << 28)
-#define QCOM_ICE_TEST_BUS_REG_NON_SECURE_INTR        (1L << 2)
-
-#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_STTS	0x2050
-#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK	0x2054
-#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_CLR	0x2058
-
-#define QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED		(1L << 0)
-#define QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED		(1L << 1)
-#define QCOM_ICE_QCOMC_DBG_OPEN_EVENT			(1L << 30)
-#define QCOM_ICE_KEYS_RAM_RESET_COMPLETED		(1L << 31)
-
-#define QCOM_ICE_SEC_IRQ_MASK					  \
-			(QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED |\
-			 QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED |\
-			 QCOM_ICE_QCOMC_DBG_OPEN_EVENT |	  \
-			 QCOM_ICE_KEYS_RAM_RESET_COMPLETED)
-
-
-#define qcom_ice_writel(ice, val, reg)	\
-	writel_relaxed((val), (ice)->mmio + (reg))
-#define qcom_ice_readl(ice, reg)	\
-	readl_relaxed((ice)->mmio + (reg))
-
-
-#endif /* _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_ */
diff --git a/drivers/crypto/msm/qcedev.c b/drivers/crypto/msm/qcedev.c
index dd7cbbb..812ba67 100644
--- a/drivers/crypto/msm/qcedev.c
+++ b/drivers/crypto/msm/qcedev.c
@@ -2331,11 +2331,8 @@
 
 static int qcedev_init(void)
 {
-	int rc;
+	_qcedev_debug_init();
 
-	rc = _qcedev_debug_init();
-	if (rc)
-		return rc;
 	return platform_driver_register(&qcedev_plat_driver);
 }
 
diff --git a/drivers/crypto/msm/qcrypto.c b/drivers/crypto/msm/qcrypto.c
index d00c6f5..6a8e0d2 100644
--- a/drivers/crypto/msm/qcrypto.c
+++ b/drivers/crypto/msm/qcrypto.c
@@ -2,7 +2,7 @@
 /*
  * QTI Crypto driver
  *
- * Copyright (c) 2010-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2010-2020, The Linux Foundation. All rights reserved.
  */
 
 #include <linux/module.h>
@@ -5546,12 +5546,9 @@
 
 static int __init _qcrypto_init(void)
 {
-	int rc;
 	struct crypto_priv *pcp = &qcrypto_dev;
 
-	rc = _qcrypto_debug_init();
-	if (rc)
-		return rc;
+	_qcrypto_debug_init();
 	INIT_LIST_HEAD(&pcp->alg_list);
 	INIT_LIST_HEAD(&pcp->engine_list);
 	init_llist_head(&pcp->ordered_resp_list);
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 75b024c..7221983 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -41,6 +41,7 @@
 #include <linux/list_sort.h>
 #include <linux/hashtable.h>
 #include <linux/mount.h>
+#include <linux/dcache.h>
 
 #include <uapi/linux/dma-buf.h>
 #include <uapi/linux/magic.h>
@@ -69,18 +70,34 @@
 
 static struct dma_buf_list db_list;
 
+static void dmabuf_dent_put(struct dma_buf *dmabuf)
+{
+	if (atomic_dec_and_test(&dmabuf->dent_count)) {
+		kfree(dmabuf->name);
+		kfree(dmabuf);
+	}
+}
+
+
 static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
 {
 	struct dma_buf *dmabuf;
 	char name[DMA_BUF_NAME_LEN];
 	size_t ret = 0;
 
+	spin_lock(&dentry->d_lock);
 	dmabuf = dentry->d_fsdata;
+	if (!dmabuf || !atomic_add_unless(&dmabuf->dent_count, 1, 0)) {
+		spin_unlock(&dentry->d_lock);
+		goto out;
+	}
+	spin_unlock(&dentry->d_lock);
 	mutex_lock(&dmabuf->lock);
 	if (dmabuf->name)
 		ret = strlcpy(name, dmabuf->name, DMA_BUF_NAME_LEN);
 	mutex_unlock(&dmabuf->lock);
-
+	dmabuf_dent_put(dmabuf);
+out:
 	return dynamic_dname(dentry, buffer, buflen, "/%s:%s",
 			     dentry->d_name.name, ret > 0 ? name : "");
 }
@@ -107,6 +124,7 @@
 static int dma_buf_release(struct inode *inode, struct file *file)
 {
 	struct dma_buf *dmabuf;
+	struct dentry *dentry = file->f_path.dentry;
 	int dtor_ret = 0;
 
 	if (!is_dma_buf_file(file))
@@ -114,6 +132,9 @@
 
 	dmabuf = file->private_data;
 
+	spin_lock(&dentry->d_lock);
+	dentry->d_fsdata = NULL;
+	spin_unlock(&dentry->d_lock);
 	BUG_ON(dmabuf->vmapping_counter);
 
 	/*
@@ -145,8 +166,7 @@
 		reservation_object_fini(dmabuf->resv);
 
 	module_put(dmabuf->owner);
-	kfree(dmabuf->buf_name);
-	kfree(dmabuf);
+	dmabuf_dent_put(dmabuf);
 	return 0;
 }
 
@@ -603,7 +623,9 @@
 	dmabuf->cb_excl.poll = dmabuf->cb_shared.poll = &dmabuf->poll;
 	dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0;
 	dmabuf->buf_name = bufname;
+	dmabuf->name = bufname;
 	dmabuf->ktime = ktime_get();
+	atomic_set(&dmabuf->dent_count, 1);
 
 	if (!resv) {
 		resv = (struct reservation_object *)&dmabuf[1];
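The dma-buf change above closes a use-after-free race: dmabuffs_dname() could dereference dentry->d_fsdata while dma_buf_release() was freeing the buffer. The fix guards the pointer with d_lock and adds a dent_count reference that keeps the dma_buf (and its name) alive until the last user drops it. A minimal standalone sketch of the same get-if-alive / put-and-free pattern, written in userspace C11 for illustration (struct obj and its helpers are made-up names, not from the patch):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;	/* 1 while the owner still holds it */
	char *name;
};

/* Take a reference only if the object is still alive (count > 0),
 * mirroring atomic_add_unless(&cnt, 1, 0) in the kernel version. */
static bool obj_get_live(struct obj *o)
{
	int cur = atomic_load(&o->refcount);

	while (cur > 0) {
		if (atomic_compare_exchange_weak(&o->refcount, &cur, cur + 1))
			return true;
	}
	return false;	/* already being torn down */
}

/* Last reference frees the object and everything it owns,
 * mirroring atomic_dec_and_test() + kfree(). */
static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub(&o->refcount, 1) == 1) {
		free(o->name);
		free(o);
	}
}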
diff --git a/drivers/gpu/drm/bridge/lt9611uxc.c b/drivers/gpu/drm/bridge/lt9611uxc.c
index 8c27c26..e37e770 100644
--- a/drivers/gpu/drm/bridge/lt9611uxc.c
+++ b/drivers/gpu/drm/bridge/lt9611uxc.c
@@ -125,6 +125,7 @@
 	u8 i2c_wbuf[WRITE_BUF_MAX_SIZE];
 	u8 i2c_rbuf[READ_BUF_MAX_SIZE];
 	bool hdmi_mode;
+	bool hpd_support;
 	enum lt9611_fw_upgrade_status fw_status;
 };
 
@@ -1302,7 +1303,7 @@
 	struct lt9611 *pdata = connector_to_lt9611(connector);
 
 	pdata->status = connector_status_disconnected;
-	if (force) {
+	if (force && pdata->hpd_support) {
 		lt9611_write_byte(pdata, 0xFF, 0x80);
 		lt9611_write_byte(pdata, 0xEE, 0x01);
 		lt9611_write_byte(pdata, 0xFF, 0xB0);
@@ -1668,6 +1669,7 @@
 {
 	struct lt9611 *pdata;
 	int ret = 0;
+	u8 chip_version = 0;
 
 	if (!client || !client->dev.of_node) {
 		pr_err("invalid input\n");
@@ -1730,8 +1732,12 @@
 		goto err_i2c_prog;
 	}
 
-	if (lt9611_get_version(pdata)) {
+	chip_version = lt9611_get_version(pdata);
+	pdata->hpd_support = false;
+	if (chip_version) {
 		pr_info("LT9611 works, no need to upgrade FW\n");
+		if (chip_version >= 0x40)
+			pdata->hpd_support = true;
 	} else {
 		ret = request_firmware_nowait(THIS_MODULE, true,
 			"lt9611_fw.bin", &client->dev, GFP_KERNEL, pdata,
diff --git a/drivers/gpu/msm/adreno-gpulist.h b/drivers/gpu/msm/adreno-gpulist.h
index af16e73..e0b945e 100644
--- a/drivers/gpu/msm/adreno-gpulist.h
+++ b/drivers/gpu/msm/adreno-gpulist.h
@@ -919,7 +919,7 @@
 	},
 	.prim_fifo_threshold = 0x0018000,
 	.gmu_major = 1,
-	.gmu_minor = 9,
+	.gmu_minor = 10,
 	.sqefw_name = "a630_sqe.fw",
 	.gmufw_name = "a619_gmu.bin",
 	.zap_name = "a615_zap",
@@ -1460,7 +1460,7 @@
 	.base = {
 		DEFINE_ADRENO_REV(ADRENO_REV_A702, 7, 0, 2, ANY_ID),
 		.features = ADRENO_64BIT | ADRENO_CONTENT_PROTECTION |
-			ADRENO_APRIV,
+			ADRENO_APRIV | ADRENO_PREEMPTION,
 		.gpudev = &adreno_a6xx_gpudev,
 		.gmem_size = SZ_128K,
 		.busy_mask = 0xfffffffe,
diff --git a/drivers/gpu/msm/adreno.h b/drivers/gpu/msm/adreno.h
index ab2f94b..982f8ba 100644
--- a/drivers/gpu/msm/adreno.h
+++ b/drivers/gpu/msm/adreno.h
@@ -1739,8 +1739,9 @@
 	if (!ret) {
 		ret = gmu_core_dev_oob_set(device, oob_perfcntr);
 		if (ret) {
+			gmu_core_snapshot(device);
 			adreno_set_gpu_fault(ADRENO_DEVICE(device),
-				ADRENO_GMU_FAULT);
+				ADRENO_GMU_FAULT_SKIP_SNAPSHOT);
 			adreno_dispatcher_schedule(device);
 			kgsl_active_count_put(device);
 		}
diff --git a/drivers/gpu/msm/adreno_a6xx_gmu.c b/drivers/gpu/msm/adreno_a6xx_gmu.c
index 1848c83..cb688bc 100644
--- a/drivers/gpu/msm/adreno_a6xx_gmu.c
+++ b/drivers/gpu/msm/adreno_a6xx_gmu.c
@@ -1644,6 +1644,8 @@
 {
 	unsigned int val;
 
+	dev_err(device->dev, "GMU snapshot started at 0x%llx ticks\n",
+			a6xx_gmu_read_ao_counter(device));
 	a6xx_gmu_snapshot_versions(device, snapshot);
 
 	a6xx_gmu_snapshot_memories(device, snapshot);
diff --git a/drivers/gpu/msm/adreno_a6xx_preempt.c b/drivers/gpu/msm/adreno_a6xx_preempt.c
index 77bf757..07bc874 100644
--- a/drivers/gpu/msm/adreno_a6xx_preempt.c
+++ b/drivers/gpu/msm/adreno_a6xx_preempt.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2017-2020, The Linux Foundation. All rights reserved.
  */
 
 #include "adreno.h"
@@ -388,10 +388,15 @@
 
 	return;
 err:
-
-	/* If fenced write fails, set the fault and trigger recovery */
+	/* If fenced write fails, take inline snapshot and trigger recovery */
+	if (!in_interrupt()) {
+		gmu_core_snapshot(device);
+		adreno_set_gpu_fault(adreno_dev,
+			ADRENO_GMU_FAULT_SKIP_SNAPSHOT);
+	} else {
+		adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
+	}
 	adreno_set_preempt_state(adreno_dev, ADRENO_PREEMPT_NONE);
-	adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
 	adreno_dispatcher_schedule(device);
 	/* Clear the keep alive */
 	if (gmu_core_isenabled(device))
diff --git a/drivers/gpu/msm/adreno_a6xx_snapshot.c b/drivers/gpu/msm/adreno_a6xx_snapshot.c
index ef4c8f2..2b22042 100644
--- a/drivers/gpu/msm/adreno_a6xx_snapshot.c
+++ b/drivers/gpu/msm/adreno_a6xx_snapshot.c
@@ -404,6 +404,7 @@
 	A6XX_DBGBUS_RAS          = 0xc,
 	A6XX_DBGBUS_VSC          = 0xd,
 	A6XX_DBGBUS_COM          = 0xe,
+	A6XX_DBGBUS_COM_1        = 0xf,
 	A6XX_DBGBUS_LRZ          = 0x10,
 	A6XX_DBGBUS_A2D          = 0x11,
 	A6XX_DBGBUS_CCUFCHE      = 0x12,
@@ -515,6 +516,11 @@
 	{ A6XX_DBGBUS_SPTP_5, 0x100, },
 };
 
+static const struct adreno_debugbus_block a702_dbgc_debugbus_blocks[] = {
+	{ A6XX_DBGBUS_COM_1, 0x100, },
+	{ A6XX_DBGBUS_SPTP_0, 0x100, },
+};
+
 #define A6XX_NUM_SHADER_BANKS 3
 #define A6XX_SHADER_STATETYPE_SHIFT 8
 
@@ -1528,6 +1534,15 @@
 		}
 	}
 
+	if (adreno_is_a702(adreno_dev)) {
+		for (i = 0; i < ARRAY_SIZE(a702_dbgc_debugbus_blocks); i++) {
+			kgsl_snapshot_add_section(device,
+				KGSL_SNAPSHOT_SECTION_DEBUGBUS,
+				snapshot, a6xx_snapshot_dbgc_debugbus_block,
+				(void *) &a702_dbgc_debugbus_blocks[i]);
+		}
+	}
+
 	/*
 	 * GBIF has same debugbus as of other GPU blocks hence fall back to
 	 * default path if GPU uses GBIF.
diff --git a/drivers/gpu/msm/adreno_ringbuffer.c b/drivers/gpu/msm/adreno_ringbuffer.c
index 9cb9ec4..1a65634 100644
--- a/drivers/gpu/msm/adreno_ringbuffer.c
+++ b/drivers/gpu/msm/adreno_ringbuffer.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2002,2007-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2002,2007-2020, The Linux Foundation. All rights reserved.
  */
 
 #include <linux/sched/clock.h>
@@ -74,6 +74,7 @@
 static void adreno_ringbuffer_wptr(struct adreno_device *adreno_dev,
 		struct adreno_ringbuffer *rb)
 {
+	struct kgsl_device *device = KGSL_DEVICE(adreno_dev);
 	unsigned long flags;
 	int ret = 0;
 
@@ -85,7 +86,7 @@
 			 * Let the pwrscale policy know that new commands have
 			 * been submitted.
 			 */
-			kgsl_pwrscale_busy(KGSL_DEVICE(adreno_dev));
+			kgsl_pwrscale_busy(device);
 
 			/*
 			 * Ensure the write posted after a possible
@@ -110,9 +111,14 @@
 	spin_unlock_irqrestore(&rb->preempt_lock, flags);
 
 	if (ret) {
-		/* If WPTR update fails, set the fault and trigger recovery */
-		adreno_set_gpu_fault(adreno_dev, ADRENO_GMU_FAULT);
-		adreno_dispatcher_schedule(KGSL_DEVICE(adreno_dev));
+		/*
+		 * If WPTR update fails, take inline snapshot and trigger
+		 * recovery.
+		 */
+		gmu_core_snapshot(device);
+		adreno_set_gpu_fault(adreno_dev,
+			ADRENO_GMU_FAULT_SKIP_SNAPSHOT);
+		adreno_dispatcher_schedule(device);
 	}
 }
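This hunk and the a6xx preemption hunk above share one recovery recipe: when a fenced GMU write fails, take the GMU snapshot inline while sleeping is still allowed, then raise the fault with ADRENO_GMU_FAULT_SKIP_SNAPSHOT so the dispatcher does not snapshot a second time; only the interrupt-context path defers the snapshot by raising plain ADRENO_GMU_FAULT. A standalone sketch of that control flow (the stub names are illustrative, not driver API):

#include <stdbool.h>
#include <stdio.h>

enum fault_kind { FAULT_TAKE_SNAPSHOT, FAULT_SKIP_SNAPSHOT };

/* Stubs standing in for gmu_core_snapshot(), adreno_set_gpu_fault() and
 * adreno_dispatcher_schedule() in the real driver. */
static void gmu_snapshot(void)           { puts("snapshot GMU state"); }
static void set_fault(enum fault_kind k) { printf("fault kind %d\n", k); }
static void schedule_recovery(void)      { puts("schedule dispatcher"); }

static void handle_fenced_write_failure(bool can_sleep)
{
	if (can_sleep) {
		/* Snapshot inline while the failing state is still fresh,
		 * then tell recovery not to snapshot a second time. */
		gmu_snapshot();
		set_fault(FAULT_SKIP_SNAPSHOT);
	} else {
		/* Atomic (interrupt) context: defer to the dispatcher. */
		set_fault(FAULT_TAKE_SNAPSHOT);
	}
	schedule_recovery();
}

int main(void)
{
	handle_fenced_write_failure(true);   /* process context */
	handle_fenced_write_failure(false);  /* interrupt context */
	return 0;
}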
 
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.c b/drivers/gpu/msm/kgsl_pwrctrl.c
index 606df36..ca22326 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.c
+++ b/drivers/gpu/msm/kgsl_pwrctrl.c
@@ -3284,6 +3284,41 @@
 EXPORT_SYMBOL(kgsl_pwr_limits_set_freq);
 
 /**
+ * kgsl_pwr_limits_set_gpu_fmax() - Set the requested frequency limit
+ * for the client. If the requested frequency is larger than the
+ * supported fmax, the limit is implicit and the function returns success.
+ * @limit_ptr: Client handle
+ * @freq: Client requested frequency
+ *
+ * Set the new limit for the client and adjust the clocks
+ */
+int kgsl_pwr_limits_set_gpu_fmax(void *limit_ptr, unsigned int freq)
+{
+	struct kgsl_pwrctrl *pwr;
+	struct kgsl_pwr_limit *limit = limit_ptr;
+	int level;
+
+	if (IS_ERR_OR_NULL(limit))
+		return -EINVAL;
+
+	pwr = &limit->device->pwrctrl;
+
+	/*
+	 * When requested frequency is greater than fmax,
+	 * requested limit is implicit, return success here.
+	 */
+	if (freq >= pwr->pwrlevels[0].gpu_freq)
+		return 0;
+
+	level = _get_nearest_pwrlevel(pwr, freq);
+	if (level < 0)
+		return -EINVAL;
+	_update_limits(limit, KGSL_PWR_SET_LIMIT, level);
+	return 0;
+}
+EXPORT_SYMBOL(kgsl_pwr_limits_set_gpu_fmax);
+
+/**
  * kgsl_pwr_limits_set_default() - Set the default thermal limit for the client
  * @limit_ptr: Client handle
  *
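A plausible call sequence for the new kgsl_pwr_limits_set_gpu_fmax() export, from a thermal client's point of view. This is a sketch only: kgsl_pwr_limits_add() and KGSL_DEVICE_3D0 are assumed from the existing kgsl limits API and are not shown in this diff.

/* Hypothetical client code; error paths abbreviated. */
static void thermal_cap_gpu(void)
{
	void *limit = kgsl_pwr_limits_add(KGSL_DEVICE_3D0);

	if (IS_ERR_OR_NULL(limit))
		return;

	/* Clamps to the nearest power level for a 900 MHz cap. */
	kgsl_pwr_limits_set_gpu_fmax(limit, 900000000);

	/* A request at or above fmax is an implicit limit: the call
	 * returns 0 without touching the power levels. */
	kgsl_pwr_limits_set_gpu_fmax(limit, 2000000000);
}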
diff --git a/drivers/hwtracing/coresight/coresight-byte-cntr.c b/drivers/hwtracing/coresight/coresight-byte-cntr.c
index 48ca137..6353106 100644
--- a/drivers/hwtracing/coresight/coresight-byte-cntr.c
+++ b/drivers/hwtracing/coresight/coresight-byte-cntr.c
@@ -340,8 +340,6 @@
 			goto out;
 		}
 
-		init_completion(&usb_req->write_done);
-
 		actual = tmc_etr_buf_get_data(etr_buf, drvdata->offset,
 					req_size, &usb_req->buf);
 		usb_req->length = actual;
@@ -425,7 +423,6 @@
 						sizeof(*usb_req), GFP_KERNEL);
 			if (!usb_req)
 				return;
-			init_completion(&usb_req->write_done);
 			usb_req->sg = devm_kzalloc(tmcdrvdata->dev,
 					sizeof(*(usb_req->sg)) * req_sg_num,
 					GFP_KERNEL);
@@ -522,7 +519,7 @@
 
 	switch (event) {
 	case USB_QDSS_CONNECT:
-		usb_qdss_alloc_req(ch, USB_BUF_NUM, 0);
+		usb_qdss_alloc_req(ch, USB_BUF_NUM);
 		usb_bypass_start(drvdata);
 		queue_work(drvdata->usb_wq, &(drvdata->read_work));
 		break;
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
index b57f388..1ac9bfd 100644
--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
+++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
@@ -90,7 +90,7 @@
 
 static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
 {
-	coresight_disclaim_device(drvdata);
+	coresight_disclaim_device(drvdata->base);
 	__tmc_etb_disable_hw(drvdata);
 }
 
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
index e48a981..7284e28 100644
--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
+++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
@@ -1354,6 +1354,14 @@
 	int ret = 0;
 
 	mutex_lock(&drvdata->mem_lock);
+	if (drvdata->out_mode != TMC_ETR_OUT_MODE_USB
+			|| drvdata->mode == CS_MODE_DISABLED) {
+		dev_err(&drvdata->csdev->dev,
+		"%s: ETR is not USB mode, or ETR is disabled.\n", __func__);
+		mutex_unlock(&drvdata->mem_lock);
+		return;
+	}
+
 	if (event == USB_QDSS_CONNECT) {
 		ret = tmc_etr_fill_usb_bam_data(drvdata);
 		if (ret)
@@ -1976,7 +1984,8 @@
 	return -EINVAL;
 }
 
-static int _tmc_disable_etr_sink(struct coresight_device *csdev)
+static int _tmc_disable_etr_sink(struct coresight_device *csdev,
+			bool mode_switch)
 {
 	unsigned long flags;
 	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
@@ -1988,7 +1997,7 @@
 		return -EBUSY;
 	}
 
-	if (atomic_dec_return(csdev->refcnt)) {
+	if (atomic_dec_return(csdev->refcnt) && !mode_switch) {
 		spin_unlock_irqrestore(&drvdata->spinlock, flags);
 		return -EBUSY;
 	}
@@ -2062,7 +2071,7 @@
 	int ret;
 
 	mutex_lock(&drvdata->mem_lock);
-	ret = _tmc_disable_etr_sink(csdev);
+	ret = _tmc_disable_etr_sink(csdev, false);
 	mutex_unlock(&drvdata->mem_lock);
 	return ret;
 }
@@ -2093,7 +2102,7 @@
 	}
 
 	coresight_disable_all_source_link();
-	_tmc_disable_etr_sink(drvdata->csdev);
+	_tmc_disable_etr_sink(drvdata->csdev, true);
 	old_mode = drvdata->out_mode;
 	drvdata->out_mode = new_mode;
 	if (tmc_enable_etr_sink_sysfs(drvdata->csdev)) {
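The new mode_switch flag changes the disable semantics: a normal disable backs off with -EBUSY while other users still hold references, but an out-mode switch must tear the sink down unconditionally, since the hardware has to be reprogrammed before anyone can keep using it. A standalone C11 sketch of that refcount-with-override idiom (names illustrative):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int refcnt;

/* Returns true when the sink hardware is actually shut down. */
static bool sink_disable(bool mode_switch)
{
	/* atomic_fetch_sub() returns the old value, so old - 1 is what the
	 * kernel's atomic_dec_return() would report. */
	if (atomic_fetch_sub(&refcnt, 1) - 1 != 0 && !mode_switch)
		return false;	/* other users remain: normal disable backs off */

	/* A mode switch reaches this point regardless of the count. */
	puts("hardware disabled");
	return true;
}

int main(void)
{
	atomic_store(&refcnt, 3);
	printf("plain disable: %d\n", sink_disable(false));	/* 0: busy */
	printf("mode switch:   %d\n", sink_disable(true));	/* 1: forced */
	return 0;
}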
diff --git a/drivers/input/touchscreen/focaltech_touch/focaltech_core.c b/drivers/input/touchscreen/focaltech_touch/focaltech_core.c
index cf1a406..273d474 100644
--- a/drivers/input/touchscreen/focaltech_touch/focaltech_core.c
+++ b/drivers/input/touchscreen/focaltech_touch/focaltech_core.c
@@ -1271,11 +1271,11 @@
 	}
 
 	blank = evdata->data;
-	FTS_INFO("FB event:%lu,blank:%d", event, *blank);
+	FTS_DEBUG("FB event:%lu,blank:%d", event, *blank);
 	switch (*blank) {
 	case DRM_PANEL_BLANK_UNBLANK:
 		if (event == DRM_PANEL_EARLY_EVENT_BLANK) {
-			FTS_INFO("resume: event = %lu, not care\n", event);
+			FTS_DEBUG("resume: event = %lu, not care\n", event);
 		} else if (event == DRM_PANEL_EVENT_BLANK) {
 			queue_work(fts_data->ts_workqueue, &fts_data->resume_work);
 		}
@@ -1286,12 +1286,12 @@
 			cancel_work_sync(&fts_data->resume_work);
 			fts_ts_suspend(ts_data->dev);
 		} else if (event == DRM_PANEL_EVENT_BLANK) {
-			FTS_INFO("suspend: event = %lu, not care\n", event);
+			FTS_DEBUG("suspend: event = %lu, not care\n", event);
 		}
 		break;
 
 	default:
-		FTS_INFO("FB BLANK(%d) do not need process\n", *blank);
+		FTS_DEBUG("FB BLANK(%d) do not need process\n", *blank);
 		break;
 	}
 
diff --git a/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c b/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c
index 51c5622..05d93a5 100644
--- a/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c
+++ b/drivers/input/touchscreen/focaltech_touch/focaltech_flash.c
@@ -1862,7 +1862,7 @@
 	upg->lic = upg->fw;
 	upg->lic_length = upg->fw_length;
 
-	FTS_INFO("upgrade fw file len:%d", upg->fw_length);
+	FTS_DEBUG("upgrade fw file len:%d", upg->fw_length);
 	if ((upg->fw_length < FTS_MIN_LEN)
 		|| (upg->fw_length > FTS_MAX_LEN_FILE)) {
 		FTS_ERROR("fw file len(%d) fail", upg->fw_length);
@@ -1898,7 +1898,7 @@
 	return ;
 #endif
 
-	FTS_INFO("fw upgrade work function");
+	FTS_DEBUG("fw upgrade work function");
 	if (!upg || !upg->ts_data) {
 		FTS_ERROR("upg/ts_data is null");
 		return ;
diff --git a/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c b/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c
index c14b0c8..7aaf12d 100644
--- a/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c
+++ b/drivers/input/touchscreen/nt36xxx/nt36xxx_mp_ctrlram.c
@@ -1054,7 +1054,7 @@
 static int32_t nvt_selftest_open(struct inode *inode, struct file *file)
 {
 	struct device_node *np = ts->client->dev.of_node;
-	unsigned char mpcriteria[32] = {0};	//novatek-mp-criteria-default
+	unsigned char mpcriteria[64] = {0};	//novatek-mp-criteria-default
 
 	TestResult_Short = 0;
 	TestResult_Open = 0;
@@ -1093,7 +1093,8 @@
 		 * Ex. nvt_pid = 500A
 		 *     mpcriteria = "novatek-mp-criteria-500A"
 		 */
-		snprintf(mpcriteria, PAGE_SIZE, "novatek-mp-criteria-%04X", ts->nvt_pid);
+		snprintf(mpcriteria, sizeof(mpcriteria),
+				"novatek-mp-criteria-%04X", ts->nvt_pid);
 
 		if (nvt_mp_parse_dt(np, mpcriteria)) {
 			mutex_unlock(&ts->lock);
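The nt36xxx change fixes a classic snprintf bound bug: the length limit must be the destination buffer's size, not an unrelated constant like PAGE_SIZE, otherwise a long formatted result can write past the stack array; the buffer is also grown to 64 bytes so the longest "novatek-mp-criteria-%04X" name fits. A minimal standalone illustration of the safe pattern:

#include <stdio.h>

int main(void)
{
	char mpcriteria[64] = {0};

	/* Bound the write by the real buffer size via sizeof(), never by
	 * an unrelated constant such as PAGE_SIZE. */
	snprintf(mpcriteria, sizeof(mpcriteria),
		 "novatek-mp-criteria-%04X", 0x500A);
	puts(mpcriteria);	/* prints "novatek-mp-criteria-500A" */
	return 0;
}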
diff --git a/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c b/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c
index 610bff5..f095624 100644
--- a/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c
+++ b/drivers/input/touchscreen/synaptics_tcm/synaptics_tcm_core.c
@@ -3732,11 +3732,6 @@
 	return 0;
 }
 
-static void syna_tcm_shutdown(struct platform_device *pdev)
-{
-	syna_tcm_remove(pdev);
-}
-
 #ifdef CONFIG_PM
 static const struct dev_pm_ops syna_tcm_dev_pm_ops = {
 #if !defined(CONFIG_DRM) && !defined(CONFIG_FB)
@@ -3756,7 +3751,6 @@
 	},
 	.probe = syna_tcm_probe,
 	.remove = syna_tcm_remove,
-	.shutdown = syna_tcm_shutdown,
 };
 
 static int __init syna_tcm_module_init(void)
diff --git a/drivers/iommu/io-pgtable-fast.c b/drivers/iommu/io-pgtable-fast.c
index 31596c0..2908b56 100644
--- a/drivers/iommu/io-pgtable-fast.c
+++ b/drivers/iommu/io-pgtable-fast.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2017, 2020, The Linux Foundation. All rights reserved.
  */
 
 #define pr_fmt(fmt)	"io-pgtable-fast: " fmt
@@ -739,7 +739,7 @@
 	}
 
 	/* sweep up TLB proving PTEs */
-	av8l_fast_clear_stale_ptes(pmds, base, base, max, false);
+	av8l_fast_clear_stale_ptes(ops, base, base, max, false);
 
 	/* map the entire 4GB VA space with 8K map calls */
 	for (iova = base; iova < max; iova += SZ_8K) {
@@ -760,7 +760,7 @@
 	}
 
 	/* sweep up TLB proving PTEs */
-	av8l_fast_clear_stale_ptes(pmds, base, base, max, false);
+	av8l_fast_clear_stale_ptes(ops, base, base, max, false);
 
 	/* map the entire 4GB VA space with 16K map calls */
 	for (iova = base; iova < max; iova += SZ_16K) {
@@ -781,7 +781,7 @@
 	}
 
 	/* sweep up TLB proving PTEs */
-	av8l_fast_clear_stale_ptes(pmds, base, base, max, false);
+	av8l_fast_clear_stale_ptes(ops, base, base, max, false);
 
 	/* map the entire 4GB VA space with 64K map calls */
 	for (iova = base; iova < max; iova += SZ_64K) {
diff --git a/drivers/leds/leds-qti-flash.c b/drivers/leds/leds-qti-flash.c
index fdb903d..0e02e78 100644
--- a/drivers/leds/leds-qti-flash.c
+++ b/drivers/leds/leds-qti-flash.c
@@ -15,6 +15,7 @@
 #include <linux/of_gpio.h>
 #include <linux/of_irq.h>
 #include <linux/platform_device.h>
+#include <linux/power_supply.h>
 #include <linux/regmap.h>
 
 #include "leds.h"
@@ -57,6 +58,10 @@
 #define  FLASH_LED_HW_SW_STROBE_SEL		BIT(2)
 #define  FLASH_LED_STROBE_SEL_SHIFT		2
 
+#define FLASH_LED_IBATT_OCP_THRESH_DEFAULT_UA		2500000
+#define FLASH_LED_RPARA_DEFAULT_UOHM			80000
+#define VPH_DROOP_THRESH_VAL_UV			3400000
+
 #define FLASH_EN_LED_CTRL		0x4E
 #define  FLASH_LED_ENABLE(id)			BIT(id)
 #define  FLASH_LED_DISABLE		0
@@ -84,6 +89,11 @@
 	HW_STROBE,
 };
 
+enum pmic_type {
+	PM8350C,
+	PM2250,
+};
+
 /* Configurations for each individual flash or torch device */
 struct flash_node_data {
 	struct qti_flash_led		*led;
@@ -111,6 +121,11 @@
 	bool				symmetry_en;
 };
 
+struct pmic_data {
+	u8	max_channels;
+	int	pmic_type;
+};
+
 /**
  * struct qti_flash_led: Main Flash LED data structure
  * @pdev		: Pointer for platform device
@@ -125,7 +140,6 @@
  * @all_ramp_down_done_irq		: IRQ number for all ramp down interrupt
  * @led_fault_irq		: IRQ number for LED fault interrupt
  * @base		: Base address of the flash LED module
- * @max_channels	: Maximum number of channels supported by flash module
  * @ref_count		: Reference count used to enable/disable flash LED
  */
 struct qti_flash_led {
@@ -133,6 +147,10 @@
 	struct regmap		*regmap;
 	struct flash_node_data		*fnode;
 	struct flash_switch_data		*snode;
+	struct power_supply		*usb_psy;
+	struct power_supply		*main_psy;
+	struct power_supply		*bms_psy;
+	struct pmic_data		*data;
 	spinlock_t		lock;
 	u32			num_fnodes;
 	u32			num_snodes;
@@ -140,9 +158,9 @@
 	int			all_ramp_up_done_irq;
 	int			all_ramp_down_done_irq;
 	int			led_fault_irq;
+	int			ibatt_ocp_threshold_ua;
 	int			max_current;
 	u16			base;
-	u8		max_channels;
 	u8		ref_count;
 	u8		subtype;
 };
@@ -232,6 +250,62 @@
 	return rc;
 }
 
+static int is_main_psy_available(struct qti_flash_led *led)
+{
+	if (!led->main_psy) {
+		led->main_psy = power_supply_get_by_name("main");
+		if (!led->main_psy) {
+			pr_err_ratelimited("Couldn't get main_psy\n");
+			return -ENODEV;
+		}
+	}
+
+	return 0;
+}
+
+static int qti_flash_poll_vreg_ok(struct qti_flash_led *led)
+{
+	int rc, i;
+	union power_supply_propval pval = {0, };
+
+	if (led->data->pmic_type != PM2250)
+		return 0;
+
+	rc = is_main_psy_available(led);
+	if (rc < 0)
+		return rc;
+
+	for (i = 0; i < 60; i++) {
+		/* wait for the flash vreg_ok to be set */
+		usleep_range(5000, 5500);
+
+		rc = power_supply_get_property(led->main_psy,
+					POWER_SUPPLY_PROP_FLASH_TRIGGER, &pval);
+		if (rc < 0) {
+			pr_err("main psy doesn't support reading prop %d rc = %d\n",
+				POWER_SUPPLY_PROP_FLASH_TRIGGER, rc);
+			return rc;
+		}
+
+		if (pval.intval > 0) {
+			pr_debug("Flash trigger set\n");
+			break;
+		}
+
+		if (pval.intval < 0) {
+			pr_err("Error during flash trigger %d\n", pval.intval);
+			return pval.intval;
+		}
+	}
+
+	if (!pval.intval) {
+		pr_err("Failed to enable the module\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
 static int qti_flash_led_module_control(struct qti_flash_led *led,
 				bool enable)
 {
@@ -247,6 +321,15 @@
 				return rc;
 		}
 
+		val = FLASH_MODULE_DISABLE;
+		rc = qti_flash_poll_vreg_ok(led);
+		if (rc < 0) {
+			/* Disable the module */
+			rc = qti_flash_led_write(led, FLASH_ENABLE_CONTROL,
+						&val, 1);
+			return rc;
+		}
+
 		led->ref_count++;
 	} else {
 		if (led->ref_count)
@@ -590,14 +673,189 @@
 		snode->enabled = state;
 }
 
+static int is_usb_psy_available(struct qti_flash_led *led)
+{
+	if (!led->usb_psy) {
+		led->usb_psy = power_supply_get_by_name("usb");
+		if (!led->usb_psy) {
+			pr_err_ratelimited("Couldn't get usb_psy\n");
+			return -ENODEV;
+		}
+	}
+
+	return 0;
+}
+
+static int get_property_from_fg(struct qti_flash_led *led,
+		enum power_supply_property prop, int *val)
+{
+	int rc;
+	union power_supply_propval pval = {0, };
+
+	if (!led->bms_psy)
+		led->bms_psy = power_supply_get_by_name("bms");
+
+	if (!led->bms_psy) {
+		pr_err("no bms psy found\n");
+		return -EINVAL;
+	}
+
+	rc = power_supply_get_property(led->bms_psy, prop, &pval);
+	if (rc) {
+		pr_err("bms psy doesn't support reading prop %d rc = %d\n",
+			prop, rc);
+		return rc;
+	}
+
+	*val = pval.intval;
+
+	return rc;
+}
+
+#define UCONV				1000000LL
+#define MCONV			1000LL
+#define CHGBST_EFFICIENCY		800LL
+#define CHGBST_FLASH_VDIP_MARGIN	10000
+#define VIN_FLASH_UV			5000000
+#define VIN_FLASH_RANGE_1		4250000
+#define VIN_FLASH_RANGE_2		4500000
+#define VIN_FLASH_RANGE_3		4750000
+#define OCV_RANGE_1			3800000
+#define OCV_RANGE_2			4100000
+#define OCV_RANGE_3			4350000
+#define BHARGER_FLASH_LED_MAX_TOTAL_CURRENT_MA		1000
+static int qti_flash_led_calc_bharger_max_current(struct qti_flash_led *led,
+						    int *max_current)
+{
+	union power_supply_propval pval = {0, };
+	int ocv_uv, ibat_now, flash_led_max_total_curr_ma, rc;
+	int rbatt_uohm = 0, usb_present, otg_enable;
+	int64_t ibat_flash_ua, avail_flash_ua, avail_flash_power_fw;
+	int64_t ibat_safe_ua, vin_flash_uv, vph_flash_uv, vph_flash_vdip;
+
+	if (led->data->pmic_type != PM2250)
+		return 0;
+
+	rc = is_usb_psy_available(led);
+	if (rc < 0)
+		return rc;
+
+	rc = power_supply_get_property(led->usb_psy, POWER_SUPPLY_PROP_SCOPE,
+					&pval);
+	if (rc < 0) {
+		pr_err("usb psy does not support usb present, rc=%d\n", rc);
+		return rc;
+	}
+	otg_enable = pval.intval;
+
+	/* RESISTANCE = esr_uohm + rslow_uohm */
+	rc = get_property_from_fg(led, POWER_SUPPLY_PROP_RESISTANCE,
+			&rbatt_uohm);
+	if (rc < 0) {
+		pr_err("bms psy does not support resistance, rc=%d\n", rc);
+		return rc;
+	}
+
+	/* If no battery is connected, return max possible flash current */
+	if (!rbatt_uohm) {
+		*max_current = BHARGER_FLASH_LED_MAX_TOTAL_CURRENT_MA;
+		return 0;
+	}
+
+	rc = get_property_from_fg(led, POWER_SUPPLY_PROP_VOLTAGE_OCV, &ocv_uv);
+	if (rc < 0) {
+		pr_err("bms psy does not support OCV, rc=%d\n", rc);
+		return rc;
+	}
+
+	rc = get_property_from_fg(led, POWER_SUPPLY_PROP_CURRENT_NOW,
+			&ibat_now);
+	if (rc < 0) {
+		pr_err("bms psy does not support current, rc=%d\n", rc);
+		return rc;
+	}
+
+	rc = power_supply_get_property(led->usb_psy, POWER_SUPPLY_PROP_PRESENT,
+							&pval);
+	if (rc < 0) {
+		pr_err("usb psy does not support usb present, rc=%d\n", rc);
+		return rc;
+	}
+	usb_present = pval.intval;
+
+	rbatt_uohm += FLASH_LED_RPARA_DEFAULT_UOHM;
+
+	vph_flash_vdip = VPH_DROOP_THRESH_VAL_UV + CHGBST_FLASH_VDIP_MARGIN;
+
+	/*
+	 * Calculate the maximum current that can be pulled out of the battery
+	 * before the battery voltage dips below a safe threshold.
+	 */
+	ibat_safe_ua = div_s64((ocv_uv - vph_flash_vdip) * UCONV,
+				rbatt_uohm);
+
+	if (ibat_safe_ua <= led->ibatt_ocp_threshold_ua) {
+		/*
+		 * If the calculated current is below the OCP threshold, then
+		 * use it as the possible flash current.
+		 */
+		ibat_flash_ua = ibat_safe_ua - ibat_now;
+		vph_flash_uv = vph_flash_vdip;
+	} else {
+		/*
+		 * If the calculated current is above the OCP threshold, use
+		 * the OCP threshold instead; any higher current would trip
+		 * the battery OCP.
+		 */
+		ibat_flash_ua = led->ibatt_ocp_threshold_ua - ibat_now;
+		vph_flash_uv = ocv_uv - div64_s64((int64_t)rbatt_uohm
+				* led->ibatt_ocp_threshold_ua, UCONV);
+	}
+
+	/* when USB is present or OTG is enabled, VIN_FLASH is always at 5V */
+	if (usb_present || (otg_enable == POWER_SUPPLY_SCOPE_SYSTEM))
+		vin_flash_uv = VIN_FLASH_UV;
+	else if (ocv_uv <= OCV_RANGE_1)
+		vin_flash_uv = VIN_FLASH_RANGE_1;
+	else if (ocv_uv  > OCV_RANGE_1 && ocv_uv <= OCV_RANGE_2)
+		vin_flash_uv = VIN_FLASH_RANGE_2;
+	else if (ocv_uv > OCV_RANGE_2 && ocv_uv <= OCV_RANGE_3)
+		vin_flash_uv = VIN_FLASH_RANGE_3;
+
+	/* Calculate the available power for the flash module. */
+	avail_flash_power_fw = CHGBST_EFFICIENCY * vph_flash_uv * ibat_flash_ua;
+	/*
+	 * Calculate the available amount of current the flash module can draw
+	 * before collapsing the battery (available power / flash input voltage).
+	 */
+	avail_flash_ua = div64_s64(avail_flash_power_fw, vin_flash_uv * MCONV);
+
+	flash_led_max_total_curr_ma = BHARGER_FLASH_LED_MAX_TOTAL_CURRENT_MA;
+	*max_current = min(flash_led_max_total_curr_ma,
+			(int)(div64_s64(avail_flash_ua, MCONV)));
+
+	pr_debug("avail_iflash=%lld, ocv=%d, ibat=%d, rbatt=%d,max_current=%lld, usb_present=%d, otg_enable = %d\n",
+		avail_flash_ua, ocv_uv, ibat_now, rbatt_uohm,
+		(*max_current * MCONV), usb_present, otg_enable);
+
+	return 0;
+}
+
 static ssize_t qti_flash_led_max_current_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
 	struct flash_switch_data *snode;
 	struct led_classdev *led_cdev = dev_get_drvdata(dev);
+	int rc = 0;
 
 	snode = container_of(led_cdev, struct flash_switch_data, cdev);
 
+	rc = qti_flash_led_calc_bharger_max_current(snode->led,
+						&snode->led->max_current);
+	if (rc < 0)
+		pr_err("Failed to query max avail current, rc=%d\n", rc);
+
 	return scnprintf(buf, PAGE_SIZE, "%d\n", snode->led->max_current);
 }
 
@@ -606,6 +864,7 @@
 {
 	struct led_classdev *led_cdev;
 	struct flash_switch_data *snode;
+	int rc = 0;
 
 	if (!trig) {
 		pr_err("Invalid led_trigger\n");
@@ -625,7 +884,20 @@
 			pr_err("Invalid max_current pointer\n");
 			return -EINVAL;
 		}
-		*max_current = snode->led->max_current;
+
+		if (snode->led->data->pmic_type == PM2250) {
+			rc = qti_flash_led_calc_bharger_max_current(snode->led,
+					max_current);
+			if (rc < 0) {
+				pr_err("Failed to query max avail current, rc=%d\n",
+					rc);
+				*max_current = snode->led->max_current;
+				return rc;
+			}
+		} else {
+			*max_current = snode->led->max_current;
+		}
+
 		return 0;
 	}
 
@@ -867,7 +1139,7 @@
 		pr_err("Failed to read led mask rc=%d\n", rc);
 		return rc;
 	}
-	if ((snode->led_mask > ((1 << led->max_channels) - 1))) {
+	if ((snode->led_mask > ((1 << led->data->max_channels) - 1))) {
 		pr_err("Error, Invalid value for led-mask mask=0x%x\n",
 			snode->led_mask);
 		return -EINVAL;
@@ -1065,13 +1337,12 @@
 		return rc;
 	}
 	led->base = val;
-
 	led->hw_strobe_gpio = devm_kcalloc(&led->pdev->dev,
-			led->max_channels, sizeof(u32), GFP_KERNEL);
+			led->data->max_channels, sizeof(u32), GFP_KERNEL);
 	if (!led->hw_strobe_gpio)
 		return -ENOMEM;
 
-	for (i = 0; i < led->max_channels; i++) {
+	for (i = 0; i < led->data->max_channels; i++) {
 
 		led->hw_strobe_gpio[i] = -EINVAL;
 
@@ -1183,6 +1454,15 @@
 		}
 	}
 
+	led->ibatt_ocp_threshold_ua = FLASH_LED_IBATT_OCP_THRESH_DEFAULT_UA;
+	rc = of_property_read_u32(node, "qcom,ibatt-ocp-threshold-ua", &val);
+	if (!rc) {
+		led->ibatt_ocp_threshold_ua = val;
+	} else if (rc != -EINVAL) {
+		pr_err("Unable to parse ibatt_ocp threshold, rc=%d\n", rc);
+		return rc;
+	}
+
 	return 0;
 
 unreg_led:
@@ -1208,9 +1488,9 @@
 		return -EINVAL;
 	}
 
-	led->max_channels = (u8)of_device_get_match_data(&pdev->dev);
-	if (!led->max_channels) {
-		pr_err("Failed to get max supported led channels\n");
+	led->data = (struct pmic_data *)of_device_get_match_data(&pdev->dev);
+	if (!led->data) {
+		pr_err("Failed to get max match_data\n");
 		return -EINVAL;
 	}
 
@@ -1271,9 +1551,29 @@
 	return 0;
 }
 
+static const struct pmic_data data[] = {
+	[PM8350C] = {
+		.max_channels = 4,
+		.pmic_type = PM8350C,
+	},
+
+	[PM2250] = {
+		.max_channels = 1,
+		.pmic_type = PM2250,
+	},
+};
+
 const static struct of_device_id qti_flash_led_match_table[] = {
-	{ .compatible = "qcom,pm8350c-flash-led", .data = (void *)4, },
-	{ .compatible = "qcom,pm2250-flash-led", .data = (void *)1, },
+	{
+		.compatible = "qcom,pm8350c-flash-led",
+		.data = &data[PM8350C],
+	},
+
+	{
+		.compatible = "qcom,pm2250-flash-led",
+		.data = &data[PM2250],
+	},
+
 	{ },
 };
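To make the battery-headroom arithmetic in qti_flash_led_calc_bharger_max_current() concrete, here is a worked standalone example with assumed numbers (4.0 V OCV, 200 mOhm battery resistance plus the 80 mOhm default parasitic, no USB/OTG, 200 mA existing system draw). It follows the driver's safe-current branch, where the calculated current stays below the 2.5 A default OCP threshold:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const int64_t UCONV = 1000000LL, MCONV = 1000LL;
	const int64_t efficiency = 800LL;  /* CHGBST_EFFICIENCY: 800/1000 = 80% */
	int64_t ocv_uv = 4000000;
	int64_t rbatt_uohm = 200000 + 80000;   /* battery + default parasitic */
	int64_t vdip_uv = 3400000 + 10000;     /* droop threshold + margin */
	int64_t ibat_now_ua = 200000;          /* current system draw */
	int64_t vin_flash_uv = 4500000;        /* OCV in the (3.8 V, 4.1 V] range */

	/* Max battery current before the rail dips below the threshold:
	 * (4.0 V - 3.41 V) / 280 mOhm ~= 2.107 A, under the 2.5 A OCP. */
	int64_t ibat_safe_ua = (ocv_uv - vdip_uv) * UCONV / rbatt_uohm;
	int64_t ibat_flash_ua = ibat_safe_ua - ibat_now_ua;

	/* Derated battery power divided by the boost input voltage. */
	int64_t avail_flash_ua =
		efficiency * vdip_uv * ibat_flash_ua / (vin_flash_uv * MCONV);

	int64_t max_ma = avail_flash_ua / MCONV;    /* ~1156 mA here */
	if (max_ma > 1000)	/* BHARGER_FLASH_LED_MAX_TOTAL_CURRENT_MA */
		max_ma = 1000;

	printf("ibat_safe=%lld uA, max flash current=%lld mA\n",
	       (long long)ibat_safe_ua, (long long)max_ma);
	return 0;
}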
 
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 850e669..3b0bdde 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -295,21 +295,22 @@
 	  If unsure, say N.
 
 config DM_DEFAULT_KEY
-	tristate "Default-key crypt target support"
+	tristate "Default-key target support"
 	depends on BLK_DEV_DM
-	depends on PFK
-	---help---
-	  This (currently Android-specific) device-mapper target allows you to
-	  create a device that assigns a default encryption key to bios that
-	  don't already have one.  This can sit between inline cryptographic
-	  acceleration hardware and filesystems that use it.  This ensures a
-	  default key is used when the filesystem doesn't explicitly specify a
-	  key, such as for filesystem metadata, leaving no sectors unencrypted.
+	depends on BLK_INLINE_ENCRYPTION
+	help
+	  This device-mapper target allows you to create a device that
+	  assigns a default encryption key to bios that aren't for the
+	  contents of an encrypted file.
 
-	  To compile this code as a module, choose M here: the module will be
-	  called dm-default-key.
+	  This ensures that all blocks on-disk will be encrypted with
+	  some key, without the performance hit of file contents being
+	  encrypted twice when fscrypt (File-Based Encryption) is used.
 
-	  If unsure, say N.
+	  It is only appropriate to use dm-default-key when key
+	  configuration is tightly controlled, like it is in Android,
+	  such that all fscrypt keys are at least as hard to compromise
+	  as the default key.
 
 config DM_SNAPSHOT
        tristate "Snapshot target"
diff --git a/drivers/md/dm-bow.c b/drivers/md/dm-bow.c
index 9323c7c..ee0e2b6 100644
--- a/drivers/md/dm-bow.c
+++ b/drivers/md/dm-bow.c
@@ -725,6 +725,7 @@
 	rb_insert_color(&br->node, &bc->ranges);
 
 	ti->discards_supported = true;
+	ti->may_passthrough_inline_crypto = true;
 
 	return 0;
 
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 73b321b..62f7004 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -125,8 +125,7 @@
  * and encrypts / decrypts at the same time.
  */
 enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
-	     DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD,
-	     DM_CRYPT_ENCRYPT_OVERRIDE };
+	     DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
 
 enum cipher_flags {
 	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cihper */
@@ -2665,8 +2664,6 @@
 			cc->sector_shift = __ffs(cc->sector_size) - SECTOR_SHIFT;
 		} else if (!strcasecmp(opt_string, "iv_large_sectors"))
 			set_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
-		else if (!strcasecmp(opt_string, "allow_encrypt_override"))
-			set_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags);
 		else {
 			ti->error = "Invalid feature arguments";
 			return -EINVAL;
@@ -2872,15 +2869,12 @@
 	struct crypt_config *cc = ti->private;
 
 	/*
-	 * If bio is REQ_PREFLUSH, REQ_NOENCRYPT, or REQ_OP_DISCARD,
-	 * just bypass crypt queues.
+	 * If bio is REQ_PREFLUSH or REQ_OP_DISCARD, just bypass crypt queues.
 	 * - for REQ_PREFLUSH device-mapper core ensures that no IO is in-flight
 	 * - for REQ_OP_DISCARD caller must use flush if IO ordering matters
 	 */
-	if (unlikely(bio->bi_opf & REQ_PREFLUSH) ||
-	    (unlikely(bio->bi_opf & REQ_NOENCRYPT) &&
-	     test_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags)) ||
-	    bio_op(bio) == REQ_OP_DISCARD) {
+	if (unlikely(bio->bi_opf & REQ_PREFLUSH ||
+	    bio_op(bio) == REQ_OP_DISCARD)) {
 		bio_set_dev(bio, cc->dev->bdev);
 		if (bio_sectors(bio))
 			bio->bi_iter.bi_sector = cc->start +
@@ -2967,8 +2961,6 @@
 		num_feature_args += test_bit(DM_CRYPT_NO_OFFLOAD, &cc->flags);
 		num_feature_args += cc->sector_size != (1 << SECTOR_SHIFT);
 		num_feature_args += test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
-		num_feature_args += test_bit(DM_CRYPT_ENCRYPT_OVERRIDE,
-							&cc->flags);
 		if (cc->on_disk_tag_size)
 			num_feature_args++;
 		if (num_feature_args) {
@@ -2985,8 +2977,6 @@
 				DMEMIT(" sector_size:%d", cc->sector_size);
 			if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
 				DMEMIT(" iv_large_sectors");
-			if (test_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags))
-				DMEMIT(" allow_encrypt_override");
 		}
 
 		break;
diff --git a/drivers/md/dm-default-key.c b/drivers/md/dm-default-key.c
index 8812dea..ea29ebc 100644
--- a/drivers/md/dm-default-key.c
+++ b/drivers/md/dm-default-key.c
@@ -1,50 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2017 Google, Inc.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
  */
 
+#include <linux/blk-crypto.h>
 #include <linux/device-mapper.h>
 #include <linux/module.h>
-#include <linux/pfk.h>
 
-#define DM_MSG_PREFIX "default-key"
+#define DM_MSG_PREFIX		"default-key"
 
+#define DM_DEFAULT_KEY_MAX_WRAPPED_KEY_SIZE 128
+
+static const struct dm_default_key_cipher {
+	const char *name;
+	enum blk_crypto_mode_num mode_num;
+	int key_size;
+} dm_default_key_ciphers[] = {
+	{
+		.name = "aes-xts-plain64",
+		.mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS,
+		.key_size = 64,
+	}, {
+		.name = "xchacha12,aes-adiantum-plain64",
+		.mode_num = BLK_ENCRYPTION_MODE_ADIANTUM,
+		.key_size = 32,
+	},
+};
+
+/**
+ * struct default_key_c - private data of a default-key target
+ * @dev: the underlying device
+ * @start: starting sector of the range of @dev which this target actually maps.
+ *	   For this purpose a "sector" is 512 bytes.
+ * @cipher_string: the name of the encryption algorithm being used
+ * @iv_offset: starting offset for IVs.  IVs are generated as if the target were
+ *	       preceded by @iv_offset 512-byte sectors.
+ * @sector_size: crypto sector size in bytes (usually 4096)
+ * @sector_bits: log2(sector_size)
+ * @key: the encryption key to use
+ * @is_hw_wrapped: true if @key is a hardware-wrapped key
+ */
 struct default_key_c {
 	struct dm_dev *dev;
 	sector_t start;
-	struct blk_encryption_key key;
+	const char *cipher_string;
+	u64 iv_offset;
+	unsigned int sector_size;
+	unsigned int sector_bits;
+	struct blk_crypto_key key;
+	bool is_hw_wrapped;
 };
 
+static const struct dm_default_key_cipher *
+lookup_cipher(const char *cipher_string)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dm_default_key_ciphers); i++) {
+		if (strcmp(cipher_string, dm_default_key_ciphers[i].name) == 0)
+			return &dm_default_key_ciphers[i];
+	}
+	return NULL;
+}
+
 static void default_key_dtr(struct dm_target *ti)
 {
 	struct default_key_c *dkc = ti->private;
+	int err;
 
-	if (dkc->dev)
+	if (dkc->dev) {
+		err = blk_crypto_evict_key(dkc->dev->bdev->bd_queue, &dkc->key);
+		if (err && err != -ENOKEY)
+			DMWARN("Failed to evict crypto key: %d", err);
 		dm_put_device(ti, dkc->dev);
+	}
+	kzfree(dkc->cipher_string);
 	kzfree(dkc);
 }
 
+static int default_key_ctr_optional(struct dm_target *ti,
+				    unsigned int argc, char **argv)
+{
+	struct default_key_c *dkc = ti->private;
+	struct dm_arg_set as;
+	static const struct dm_arg _args[] = {
+		{0, 4, "Invalid number of feature args"},
+	};
+	unsigned int opt_params;
+	const char *opt_string;
+	bool iv_large_sectors = false;
+	char dummy;
+	int err;
+
+	as.argc = argc;
+	as.argv = argv;
+
+	err = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
+	if (err)
+		return err;
+
+	while (opt_params--) {
+		opt_string = dm_shift_arg(&as);
+		if (!opt_string) {
+			ti->error = "Not enough feature arguments";
+			return -EINVAL;
+		}
+		if (!strcmp(opt_string, "allow_discards")) {
+			ti->num_discard_bios = 1;
+		} else if (sscanf(opt_string, "sector_size:%u%c",
+				  &dkc->sector_size, &dummy) == 1) {
+			if (dkc->sector_size < SECTOR_SIZE ||
+			    dkc->sector_size > 4096 ||
+			    !is_power_of_2(dkc->sector_size)) {
+				ti->error = "Invalid sector_size";
+				return -EINVAL;
+			}
+		} else if (!strcmp(opt_string, "iv_large_sectors")) {
+			iv_large_sectors = true;
+		} else if (!strcmp(opt_string, "wrappedkey_v0")) {
+			dkc->is_hw_wrapped = true;
+		} else {
+			ti->error = "Invalid feature arguments";
+			return -EINVAL;
+		}
+	}
+
+	/* dm-default-key doesn't implement iv_large_sectors=false. */
+	if (dkc->sector_size != SECTOR_SIZE && !iv_large_sectors) {
+		ti->error = "iv_large_sectors must be specified";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void default_key_adjust_sector_size_and_iv(char **argv,
+						  struct dm_target *ti,
+						  struct default_key_c **dkc,
+						  u8 *raw, u32 size,
+						  bool is_legacy)
+{
+	struct dm_dev *dev;
+	int i;
+	union {
+		u8 bytes[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
+		u32 words[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE / sizeof(u32)];
+	} key_new;
+
+	dev = (*dkc)->dev;
+
+	if (is_legacy) {
+		memcpy(key_new.bytes, raw, size);
+
+		for (i = 0; i < ARRAY_SIZE(key_new.words); i++)
+			__cpu_to_be32s(&key_new.words[i]);
+
+		memcpy(raw, key_new.bytes, size);
+
+		if (ti->len & (((*dkc)->sector_size >> SECTOR_SHIFT) - 1))
+			(*dkc)->sector_size = SECTOR_SIZE;
+
+		if (dev->bdev->bd_part)
+			(*dkc)->iv_offset += dev->bdev->bd_part->start_sect;
+	}
+}
+
 /*
- * Construct a default-key mapping: <mode> <key> <dev_path> <start>
+ * Construct a default-key mapping:
+ * <cipher> <key> <iv_offset> <dev_path> <start>
+ *
+ * This syntax matches dm-crypt's, but lots of unneeded functionality has been
+ * removed.  Also, dm-default-key requires that the "iv_large_sectors" option be
+ * given whenever a non-default sector size is used.
  */
 static int default_key_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 {
 	struct default_key_c *dkc;
-	size_t key_size;
-	unsigned long long tmp;
+	const struct dm_default_key_cipher *cipher;
+	u8 raw_key[DM_DEFAULT_KEY_MAX_WRAPPED_KEY_SIZE];
+	unsigned int raw_key_size;
+	unsigned long long tmpll;
 	char dummy;
 	int err;
+	char *_argv[10];
+	bool is_legacy = false;
 
-	if (argc != 4) {
-		ti->error = "Invalid argument count";
+	if (argc >= 4 && !strcmp(argv[0], "AES-256-XTS")) {
+		argc = 0;
+		_argv[argc++] = "aes-xts-plain64";
+		_argv[argc++] = argv[1];
+		_argv[argc++] = "0";
+		_argv[argc++] = argv[2];
+		_argv[argc++] = argv[3];
+		_argv[argc++] = "3";
+		_argv[argc++] = "allow_discards";
+		_argv[argc++] = "sector_size:4096";
+		_argv[argc++] = "iv_large_sectors";
+		_argv[argc] = NULL;
+		argv = _argv;
+		is_legacy = true;
+	}
+
+	if (argc < 5) {
+		ti->error = "Not enough arguments";
 		return -EINVAL;
 	}
 
@@ -55,86 +212,147 @@
 	}
 	ti->private = dkc;
 
-	if (strcmp(argv[0], "AES-256-XTS") != 0) {
-		ti->error = "Unsupported encryption mode";
+	/* <cipher> */
+	dkc->cipher_string = kstrdup(argv[0], GFP_KERNEL);
+	if (!dkc->cipher_string) {
+		ti->error = "Out of memory";
+		err = -ENOMEM;
+		goto bad;
+	}
+	cipher = lookup_cipher(dkc->cipher_string);
+	if (!cipher) {
+		ti->error = "Unsupported cipher";
 		err = -EINVAL;
 		goto bad;
 	}
 
-	key_size = strlen(argv[1]);
-	if (key_size != 2 * BLK_ENCRYPTION_KEY_SIZE_AES_256_XTS) {
-		ti->error = "Unsupported key size";
+	/* <key> */
+	raw_key_size = strlen(argv[1]);
+	if (raw_key_size > 2 * DM_DEFAULT_KEY_MAX_WRAPPED_KEY_SIZE ||
+	    raw_key_size % 2) {
+		ti->error = "Invalid keysize";
 		err = -EINVAL;
 		goto bad;
 	}
-	key_size /= 2;
-
-	if (hex2bin(dkc->key.raw, argv[1], key_size) != 0) {
+	raw_key_size /= 2;
+	if (hex2bin(raw_key, argv[1], raw_key_size) != 0) {
 		ti->error = "Malformed key string";
 		err = -EINVAL;
 		goto bad;
 	}
 
-	err = dm_get_device(ti, argv[2], dm_table_get_mode(ti->table),
+	/* <iv_offset> */
+	if (sscanf(argv[2], "%llu%c", &dkc->iv_offset, &dummy) != 1) {
+		ti->error = "Invalid iv_offset sector";
+		err = -EINVAL;
+		goto bad;
+	}
+
+	/* <dev_path> */
+	err = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
 			    &dkc->dev);
 	if (err) {
 		ti->error = "Device lookup failed";
 		goto bad;
 	}
 
-	if (sscanf(argv[3], "%llu%c", &tmp, &dummy) != 1) {
+	/* <start> */
+	if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
+	    tmpll != (sector_t)tmpll) {
 		ti->error = "Invalid start sector";
 		err = -EINVAL;
 		goto bad;
 	}
-	dkc->start = tmp;
+	dkc->start = tmpll;
 
-	if (!blk_queue_inlinecrypt(bdev_get_queue(dkc->dev->bdev))) {
-		ti->error = "Device does not support inline encryption";
+	/* optional arguments */
+	dkc->sector_size = SECTOR_SIZE;
+	if (argc > 5) {
+		err = default_key_ctr_optional(ti, argc - 5, &argv[5]);
+		if (err)
+			goto bad;
+	}
+
+	default_key_adjust_sector_size_and_iv(argv, ti, &dkc, raw_key,
+					      raw_key_size, is_legacy);
+
+	dkc->sector_bits = ilog2(dkc->sector_size);
+	if (ti->len & ((dkc->sector_size >> SECTOR_SHIFT) - 1)) {
+		ti->error = "Device size is not a multiple of sector_size";
 		err = -EINVAL;
 		goto bad;
 	}
 
-	/* Pass flush requests through to the underlying device. */
+	err = blk_crypto_init_key(&dkc->key, raw_key, cipher->key_size,
+				  dkc->is_hw_wrapped, cipher->mode_num,
+				  dkc->sector_size);
+	if (err) {
+		ti->error = "Error initializing blk-crypto key";
+		goto bad;
+	}
+
+	err = blk_crypto_start_using_mode(cipher->mode_num, dkc->sector_size,
+					  dkc->dev->bdev->bd_queue);
+	if (err) {
+		ti->error = "Error starting to use blk-crypto";
+		goto bad;
+	}
+
 	ti->num_flush_bios = 1;
 
-	/*
-	 * We pass discard requests through to the underlying device, although
-	 * the discarded blocks will be zeroed, which leaks information about
-	 * unused blocks.  It's also impossible for dm-default-key to know not
-	 * to decrypt discarded blocks, so they will not be read back as zeroes
-	 * and we must set discard_zeroes_data_unsupported.
-	 */
-	ti->num_discard_bios = 1;
+	ti->may_passthrough_inline_crypto = true;
 
-	/*
-	 * It's unclear whether WRITE_SAME would work with inline encryption; it
-	 * would depend on whether the hardware duplicates the data before or
-	 * after encryption.  But since the internal storage in some  devices
-	 * (MSM8998-based) doesn't claim to support WRITE_SAME anyway, we don't
-	 * currently have a way to test it.  Leave it disabled it for now.
-	 */
-	/*ti->num_write_same_bios = 1;*/
-
-	return 0;
+	err = 0;
+	goto out;
 
 bad:
 	default_key_dtr(ti);
+out:
+	memzero_explicit(raw_key, sizeof(raw_key));
 	return err;
 }
 
 static int default_key_map(struct dm_target *ti, struct bio *bio)
 {
 	const struct default_key_c *dkc = ti->private;
+	sector_t sector_in_target;
+	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { 0 };
 
 	bio_set_dev(bio, dkc->dev->bdev);
-	if (bio_sectors(bio)) {
-		bio->bi_iter.bi_sector = dkc->start +
-			dm_target_offset(ti, bio->bi_iter.bi_sector);
-	}
 
-	if (!bio->bi_crypt_key && !bio->bi_crypt_skip)
-		bio->bi_crypt_key = &dkc->key;
+	/*
+	 * If the bio is a device-level request which doesn't target a specific
+	 * sector, there's nothing more to do.
+	 */
+	if (bio_sectors(bio) == 0)
+		return DM_MAPIO_REMAPPED;
+
+	/* Map the bio's sector to the underlying device. (512-byte sectors) */
+	sector_in_target = dm_target_offset(ti, bio->bi_iter.bi_sector);
+	bio->bi_iter.bi_sector = dkc->start + sector_in_target;
+
+	/*
+	 * If the bio should skip dm-default-key (i.e. if it's for an encrypted
+	 * file's contents), or if it doesn't have any data (e.g. if it's a
+	 * DISCARD request), there's nothing more to do.
+	 */
+	if (bio_should_skip_dm_default_key(bio) || !bio_has_data(bio))
+		return DM_MAPIO_REMAPPED;
+
+	/*
+	 * Else, dm-default-key needs to set this bio's encryption context.
+	 * It must not already have one.
+	 */
+	if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
+		return DM_MAPIO_KILL;
+
+	/* Calculate the DUN and enforce data-unit (crypto sector) alignment. */
+	dun[0] = dkc->iv_offset + sector_in_target; /* 512-byte sectors */
+	if (dun[0] & ((dkc->sector_size >> SECTOR_SHIFT) - 1))
+		return DM_MAPIO_KILL;
+	dun[0] >>= dkc->sector_bits - SECTOR_SHIFT; /* crypto sectors */
+
+	bio_crypt_set_ctx(bio, &dkc->key, dun, GFP_NOIO);
 
 	return DM_MAPIO_REMAPPED;
 }
@@ -145,6 +363,7 @@
 {
 	const struct default_key_c *dkc = ti->private;
 	unsigned int sz = 0;
+	int num_feature_args = 0;
 
 	switch (type) {
 	case STATUSTYPE_INFO:
@@ -152,16 +371,26 @@
 		break;
 
 	case STATUSTYPE_TABLE:
+		/* Omit the key for now. */
+		DMEMIT("%s - %llu %s %llu", dkc->cipher_string, dkc->iv_offset,
+		       dkc->dev->name, (unsigned long long)dkc->start);
 
-		/* encryption mode */
-		DMEMIT("AES-256-XTS");
-
-		/* reserved for key; dm-crypt shows it, but we don't for now */
-		DMEMIT(" -");
-
-		/* name of underlying device, and the start sector in it */
-		DMEMIT(" %s %llu", dkc->dev->name,
-		       (unsigned long long)dkc->start);
+		num_feature_args += !!ti->num_discard_bios;
+		if (dkc->sector_size != SECTOR_SIZE)
+			num_feature_args += 2;
+		if (dkc->is_hw_wrapped)
+			num_feature_args += 1;
+		if (num_feature_args != 0) {
+			DMEMIT(" %d", num_feature_args);
+			if (ti->num_discard_bios)
+				DMEMIT(" allow_discards");
+			if (dkc->sector_size != SECTOR_SIZE) {
+				DMEMIT(" sector_size:%u", dkc->sector_size);
+				DMEMIT(" iv_large_sectors");
+			}
+			if (dkc->is_hw_wrapped)
+				DMEMIT(" wrappedkey_v0");
+		}
 		break;
 	}
 }
@@ -169,15 +398,13 @@
 static int default_key_prepare_ioctl(struct dm_target *ti,
 				     struct block_device **bdev)
 {
-	struct default_key_c *dkc = ti->private;
-	struct dm_dev *dev = dkc->dev;
+	const struct default_key_c *dkc = ti->private;
+	const struct dm_dev *dev = dkc->dev;
 
 	*bdev = dev->bdev;
 
-	/*
-	 * Only pass ioctls through if the device sizes match exactly.
-	 */
-	if (dkc->start ||
+	/* Only pass ioctls through if the device sizes match exactly. */
+	if (dkc->start != 0 ||
 	    ti->len != i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT)
 		return 1;
 	return 0;
@@ -187,21 +414,35 @@
 				       iterate_devices_callout_fn fn,
 				       void *data)
 {
-	struct default_key_c *dkc = ti->private;
+	const struct default_key_c *dkc = ti->private;
 
 	return fn(ti, dkc->dev, dkc->start, ti->len, data);
 }
 
+static void default_key_io_hints(struct dm_target *ti,
+				 struct queue_limits *limits)
+{
+	const struct default_key_c *dkc = ti->private;
+	const unsigned int sector_size = dkc->sector_size;
+
+	limits->logical_block_size =
+		max_t(unsigned short, limits->logical_block_size, sector_size);
+	limits->physical_block_size =
+		max_t(unsigned int, limits->physical_block_size, sector_size);
+	limits->io_min = max_t(unsigned int, limits->io_min, sector_size);
+}
+
 static struct target_type default_key_target = {
-	.name   = "default-key",
-	.version = {1, 0, 0},
-	.module = THIS_MODULE,
-	.ctr    = default_key_ctr,
-	.dtr    = default_key_dtr,
-	.map    = default_key_map,
-	.status = default_key_status,
-	.prepare_ioctl = default_key_prepare_ioctl,
-	.iterate_devices = default_key_iterate_devices,
+	.name			= "default-key",
+	.version		= {2, 1, 0},
+	.module			= THIS_MODULE,
+	.ctr			= default_key_ctr,
+	.dtr			= default_key_dtr,
+	.map			= default_key_map,
+	.status			= default_key_status,
+	.prepare_ioctl		= default_key_prepare_ioctl,
+	.iterate_devices	= default_key_iterate_devices,
+	.io_hints		= default_key_io_hints,
 };
 
 static int __init dm_default_key_init(void)
@@ -221,4 +462,4 @@
 MODULE_AUTHOR("Paul Crowley <paulcrowley@google.com>");
 MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
 MODULE_DESCRIPTION(DM_NAME " target for encrypting filesystem metadata");
-MODULE_LICENSE("GPL v2");
+MODULE_LICENSE("GPL");
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index f0b088a..6cc231f 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -62,6 +62,7 @@
 	ti->num_secure_erase_bios = 1;
 	ti->num_write_same_bios = 1;
 	ti->num_write_zeroes_bios = 1;
+	ti->may_passthrough_inline_crypto = true;
 	ti->private = lc;
 	return 0;
 
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 96343c7..cc2fbb0 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -22,6 +22,8 @@
 #include <linux/blk-mq.h>
 #include <linux/mount.h>
 #include <linux/dax.h>
+#include <linux/bio.h>
+#include <linux/keyslot-manager.h>
 
 #define DM_MSG_PREFIX "table"
 
@@ -1638,6 +1640,54 @@
 	}
 }
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+static int device_intersect_crypto_modes(struct dm_target *ti,
+					 struct dm_dev *dev, sector_t start,
+					 sector_t len, void *data)
+{
+	struct keyslot_manager *parent = data;
+	struct keyslot_manager *child = bdev_get_queue(dev->bdev)->ksm;
+
+	keyslot_manager_intersect_modes(parent, child);
+	return 0;
+}
+
+/*
+ * Update the inline crypto modes supported by 'q->ksm' to be the intersection
+ * of the modes supported by all targets in the table.
+ *
+ * For any mode to be supported at all, all targets must have explicitly
+ * declared that they can pass through inline crypto support.  For a particular
+ * mode to be supported, all underlying devices must also support it.
+ *
+ * Assume that 'q->ksm' initially declares all modes to be supported.
+ */
+static void dm_calculate_supported_crypto_modes(struct dm_table *t,
+						struct request_queue *q)
+{
+	struct dm_target *ti;
+	unsigned int i;
+
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+
+		if (!ti->may_passthrough_inline_crypto) {
+			keyslot_manager_intersect_modes(q->ksm, NULL);
+			return;
+		}
+		if (!ti->type->iterate_devices)
+			continue;
+		ti->type->iterate_devices(ti, device_intersect_crypto_modes,
+					  q->ksm);
+	}
+}
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+static inline void dm_calculate_supported_crypto_modes(struct dm_table *t,
+						       struct request_queue *q)
+{
+}
+#endif /* !CONFIG_BLK_INLINE_ENCRYPTION */
+
 static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
 				sector_t start, sector_t len, void *data)
 {
@@ -1730,16 +1780,6 @@
 	return q && !test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
 }
 
-static int queue_supports_inline_encryption(struct dm_target *ti,
-					    struct dm_dev *dev,
-					    sector_t start, sector_t len,
-					    void *data)
-{
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return q && blk_queue_inlinecrypt(q);
-}
-
 static bool dm_table_all_devices_attribute(struct dm_table *t,
 					   iterate_devices_callout_fn func)
 {
@@ -1971,13 +2011,10 @@
 	else
 		blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);
 
-	if (dm_table_all_devices_attribute(t, queue_supports_inline_encryption))
-		queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, q);
-	else
-		queue_flag_clear_unlocked(QUEUE_FLAG_INLINECRYPT, q);
-
 	dm_table_verify_integrity(t);
 
+	dm_calculate_supported_crypto_modes(t, q);
+
 	/*
 	 * Some devices don't use blk_integrity but still want stable pages
 	 * because they do their own checksumming.
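The mode-intersection pass above can be pictured as a bitwise AND: the DM queue's passthrough keyslot manager starts out claiming every crypto mode, each underlying device ANDs away what it cannot do, and a target that cannot pass crypto through at all (the NULL child case) clears everything. A standalone sketch of that reduction:

#include <stdint.h>
#include <stdio.h>

#define NUM_MODES 4	/* one capability word per crypto mode */

static void intersect(uint32_t parent[NUM_MODES],
		      const uint32_t child[NUM_MODES])
{
	int i;

	for (i = 0; i < NUM_MODES; i++)
		parent[i] &= child ? child[i] : 0;  /* NULL clears all modes */
}

int main(void)
{
	/* Seeded like the 0xFF-filled mode_masks[] in dm_init_inline_encryption(). */
	uint32_t table[NUM_MODES] = { ~0u, ~0u, ~0u, ~0u };
	uint32_t dev_a[NUM_MODES] = { 0xFF, 0x00, 0xFF, 0x00 };
	uint32_t dev_b[NUM_MODES] = { 0xFF, 0xFF, 0x00, 0x00 };
	int i;

	intersect(table, dev_a);
	intersect(table, dev_b);
	for (i = 0; i < NUM_MODES; i++)	/* only mode 0 survives */
		printf("mode %d: %#x\n", i, table[i]);
	return 0;
}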
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c9860e3..5df0480 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -25,6 +25,8 @@
 #include <linux/wait.h>
 #include <linux/pr.h>
 #include <linux/refcount.h>
+#include <linux/blk-crypto.h>
+#include <linux/keyslot-manager.h>
 
 #define DM_MSG_PREFIX "core"
 
@@ -1315,9 +1317,10 @@
 
 	__bio_clone_fast(clone, bio);
 
+	bio_crypt_clone(clone, bio, GFP_NOIO);
+
 	if (unlikely(bio_integrity(bio) != NULL)) {
 		int r;
-
 		if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
 			     !dm_target_passes_integrity(tio->ti->type))) {
 			DMWARN("%s: the target %s doesn't support integrity data.",
@@ -1822,6 +1825,8 @@
 	md->queue->backing_dev_info->congested_fn = dm_any_congested;
 }
 
+static void dm_destroy_inline_encryption(struct request_queue *q);
+
 static void cleanup_mapped_device(struct mapped_device *md)
 {
 	if (md->wq)
@@ -1845,8 +1850,10 @@
 		put_disk(md->disk);
 	}
 
-	if (md->queue)
+	if (md->queue) {
+		dm_destroy_inline_encryption(md->queue);
 		blk_cleanup_queue(md->queue);
+	}
 
 	cleanup_srcu_struct(&md->io_barrier);
 
@@ -2214,6 +2221,160 @@
 }
 EXPORT_SYMBOL_GPL(dm_get_queue_limits);
 
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+struct dm_keyslot_evict_args {
+	const struct blk_crypto_key *key;
+	int err;
+};
+
+static int dm_keyslot_evict_callback(struct dm_target *ti, struct dm_dev *dev,
+				     sector_t start, sector_t len, void *data)
+{
+	struct dm_keyslot_evict_args *args = data;
+	int err;
+
+	err = blk_crypto_evict_key(dev->bdev->bd_queue, args->key);
+	if (!args->err)
+		args->err = err;
+	/* Always try to evict the key from all devices. */
+	return 0;
+}
+
+/*
+ * When an inline encryption key is evicted from a device-mapper device, evict
+ * it from all the underlying devices.
+ */
+static int dm_keyslot_evict(struct keyslot_manager *ksm,
+			    const struct blk_crypto_key *key, unsigned int slot)
+{
+	struct mapped_device *md = keyslot_manager_private(ksm);
+	struct dm_keyslot_evict_args args = { key };
+	struct dm_table *t;
+	int srcu_idx;
+	int i;
+	struct dm_target *ti;
+
+	t = dm_get_live_table(md, &srcu_idx);
+	if (!t)
+		return 0;
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+		if (!ti->type->iterate_devices)
+			continue;
+		ti->type->iterate_devices(ti, dm_keyslot_evict_callback, &args);
+	}
+	dm_put_live_table(md, srcu_idx);
+	return args.err;
+}
+
+struct dm_derive_raw_secret_args {
+	const u8 *wrapped_key;
+	unsigned int wrapped_key_size;
+	u8 *secret;
+	unsigned int secret_size;
+	int err;
+};
+
+static int dm_derive_raw_secret_callback(struct dm_target *ti,
+					 struct dm_dev *dev, sector_t start,
+					 sector_t len, void *data)
+{
+	struct dm_derive_raw_secret_args *args = data;
+	struct request_queue *q = dev->bdev->bd_queue;
+
+	if (!args->err)
+		return 0;
+
+	if (!q->ksm) {
+		args->err = -EOPNOTSUPP;
+		return 0;
+	}
+
+	args->err = keyslot_manager_derive_raw_secret(q->ksm, args->wrapped_key,
+						args->wrapped_key_size,
+						args->secret,
+						args->secret_size);
+	/* Try another device in case this fails. */
+	return 0;
+}
+
+/*
+ * Retrieve the raw_secret from the underlying device. Given that only
+ * one raw_secret can exist for a particular wrapped key, retrieve it
+ * only from the first device that supports derive_raw_secret().
+ */
+static int dm_derive_raw_secret(struct keyslot_manager *ksm,
+				const u8 *wrapped_key,
+				unsigned int wrapped_key_size,
+				u8 *secret, unsigned int secret_size)
+{
+	struct mapped_device *md = keyslot_manager_private(ksm);
+	struct dm_derive_raw_secret_args args = {
+		.wrapped_key = wrapped_key,
+		.wrapped_key_size = wrapped_key_size,
+		.secret = secret,
+		.secret_size = secret_size,
+		.err = -EOPNOTSUPP,
+	};
+	struct dm_table *t;
+	int srcu_idx;
+	int i;
+	struct dm_target *ti;
+
+	t = dm_get_live_table(md, &srcu_idx);
+	if (!t)
+		return -EOPNOTSUPP;
+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
+		ti = dm_table_get_target(t, i);
+		if (!ti->type->iterate_devices)
+			continue;
+		ti->type->iterate_devices(ti, dm_derive_raw_secret_callback,
+					  &args);
+		if (!args.err)
+			break;
+	}
+	dm_put_live_table(md, srcu_idx);
+	return args.err;
+}
+
+static struct keyslot_mgmt_ll_ops dm_ksm_ll_ops = {
+	.keyslot_evict = dm_keyslot_evict,
+	.derive_raw_secret = dm_derive_raw_secret,
+};
+
+static int dm_init_inline_encryption(struct mapped_device *md)
+{
+	unsigned int mode_masks[BLK_ENCRYPTION_MODE_MAX];
+
+	/*
+	 * Start out with all crypto mode support bits set.  Any unsupported
+	 * bits will be cleared later when calculating the device restrictions.
+	 */
+	memset(mode_masks, 0xFF, sizeof(mode_masks));
+
+	md->queue->ksm = keyslot_manager_create_passthrough(&dm_ksm_ll_ops,
+							    mode_masks, md);
+	if (!md->queue->ksm)
+		return -ENOMEM;
+	return 0;
+}
+
+static void dm_destroy_inline_encryption(struct request_queue *q)
+{
+	keyslot_manager_destroy(q->ksm);
+	q->ksm = NULL;
+}
+#else /* !CONFIG_BLK_INLINE_ENCRYPTION */
+static inline int dm_init_inline_encryption(struct mapped_device *md)
+{
+	return 0;
+}
+
+static inline void dm_destroy_inline_encryption(struct request_queue *q)
+{
+}
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
 /*
  * Setup the DM device's queue based on md's type
  */
@@ -2258,6 +2419,13 @@
 		DMERR("Cannot calculate initial queue limits");
 		return r;
 	}
+
+	r = dm_init_inline_encryption(md);
+	if (r) {
+		DMERR("Cannot initialize inline encryption");
+		return r;
+	}
+
 	dm_table_set_restrictions(t, md->queue, &limits);
 	blk_register_queue(md->disk);
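Because the dm device now owns a passthrough keyslot manager, upper layers can resolve wrapped keys through it without knowing about the underlying stack. A hypothetical caller, using only the keyslot_manager call already shown in dm_derive_raw_secret_callback() above:

    /* Illustrative only: dm forwards this to the first underlying
     * device whose KSM implements derive_raw_secret(). */
    static int example_get_secret(struct request_queue *dm_q,
    				  const u8 *wrapped_key, unsigned int wk_size,
    				  u8 *secret, unsigned int secret_size)
    {
    	if (!dm_q->ksm)
    		return -EOPNOTSUPP;
    	return keyslot_manager_derive_raw_secret(dm_q->ksm, wrapped_key,
    						 wk_size, secret,
    						 secret_size);
    }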
 
diff --git a/drivers/media/platform/msm/cvp/msm_cvp.c b/drivers/media/platform/msm/cvp/msm_cvp.c
index 36a1375..e87f392c 100644
--- a/drivers/media/platform/msm/cvp/msm_cvp.c
+++ b/drivers/media/platform/msm/cvp/msm_cvp.c
@@ -1329,6 +1329,7 @@
 	struct cvp_kmd_hfi_packet *in_pkt;
 	unsigned int signal, offset, buf_num, in_offset, in_buf_num;
 	struct msm_cvp_inst *s;
+	unsigned int max_buf_num;
 	struct msm_cvp_fence_thread_data *fence_thread_data;
 
 	dprintk(CVP_DBG, "%s: Enter inst = %#x", __func__, inst);
@@ -1374,6 +1375,16 @@
 		buf_num = in_buf_num;
 	}
 
+	max_buf_num = sizeof(struct cvp_kmd_hfi_packet)
+						/ sizeof(struct cvp_buf_type);
+
+	if (buf_num > max_buf_num)
+		return -EINVAL;
+
+	if ((offset + buf_num * sizeof(struct cvp_buf_type)) >
+					sizeof(struct cvp_kmd_hfi_packet))
+		return -EINVAL;
+
 	rc = msm_cvp_map_buf(inst, in_pkt, offset, buf_num);
 	if (rc)
 		goto free_and_exit;
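The added bounds check multiplies and adds unsigned values before comparing, so with a hostile buf_num the intermediate could in principle wrap (the preceding max_buf_num test prevents this in practice). A hedged sketch of an overflow-proof variant, assuming linux/overflow.h is available in this tree; the helper name is hypothetical:

    #include <linux/overflow.h>

    /* Equivalent to the two checks above, but immune to unsigned wrap. */
    static int cvp_validate_buf_range(unsigned int offset,
    				      unsigned int buf_num)
    {
    	unsigned int bytes, end;

    	if (check_mul_overflow(buf_num,
    			       (unsigned int)sizeof(struct cvp_buf_type),
    			       &bytes) ||
    	    check_add_overflow(offset, bytes, &end) ||
    	    end > sizeof(struct cvp_kmd_hfi_packet))
    		return -EINVAL;
    	return 0;
    }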
diff --git a/drivers/media/platform/msm/cvp/msm_cvp_debug.c b/drivers/media/platform/msm/cvp/msm_cvp_debug.c
index 9d61d2ac..2beee35 100644
--- a/drivers/media/platform/msm/cvp/msm_cvp_debug.c
+++ b/drivers/media/platform/msm/cvp/msm_cvp_debug.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
  */
 
 #define CREATE_TRACE_POINTS
@@ -277,7 +277,8 @@
 
 	snprintf(debugfs_name, MAX_DEBUGFS_NAME, "core%d", core->id);
 	dir = debugfs_create_dir(debugfs_name, parent);
-	if (!dir) {
+	if (IS_ERR_OR_NULL(dir)) {
+		dir = NULL;
 		dprintk(CVP_ERR, "Failed to create debugfs for msm_cvp\n");
 		goto failed_create_dir;
 	}
@@ -423,7 +424,8 @@
 	idata->inst = inst;
 
 	dir = debugfs_create_dir(debugfs_name, parent);
-	if (!dir) {
+	if (IS_ERR_OR_NULL(dir)) {
+		dir = NULL;
 		dprintk(CVP_ERR, "Failed to create debugfs for msm_cvp\n");
 		goto failed_create_dir;
 	}
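IS_ERR_OR_NULL() is used here because debugfs_create_dir() historically returned NULL on failure, while newer kernels return an ERR_PTR (for example -ENODEV when debugfs is compiled out). The pattern in isolation, as a sketch:

    static struct dentry *example_debugfs_dir(struct dentry *parent)
    {
    	struct dentry *dir = debugfs_create_dir("example", parent);

    	/* Normalize both failure conventions to NULL and carry on
    	 * without debugfs rather than failing the probe. */
    	if (IS_ERR_OR_NULL(dir))
    		return NULL;
    	return dir;
    }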
diff --git a/drivers/media/platform/msm/cvp/msm_cvp_dsp.c b/drivers/media/platform/msm/cvp/msm_cvp_dsp.c
index 1eab89a..2e80670 100644
--- a/drivers/media/platform/msm/cvp/msm_cvp_dsp.c
+++ b/drivers/media/platform/msm/cvp/msm_cvp_dsp.c
@@ -1,12 +1,13 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
  */
 #include <linux/module.h>
 #include <linux/rpmsg.h>
 #include <linux/of_platform.h>
 #include <linux/of_fdt.h>
 #include <soc/qcom/secure_buffer.h>
+#include "cvp_core_hfi.h"
 #include "msm_cvp_dsp.h"
 
 #define VMID_CDSP_Q6 (30)
@@ -90,25 +91,81 @@
 	return err;
 }
 
+static void __reset_queue_hdr_defaults(struct cvp_hfi_queue_header *q_hdr)
+{
+	q_hdr->qhdr_status = 0x1;
+	q_hdr->qhdr_type = CVP_IFACEQ_DFLT_QHDR;
+	q_hdr->qhdr_q_size = CVP_IFACEQ_QUEUE_SIZE / 4;
+	q_hdr->qhdr_pkt_size = 0;
+	q_hdr->qhdr_rx_wm = 0x1;
+	q_hdr->qhdr_tx_wm = 0x1;
+	q_hdr->qhdr_rx_req = 0x1;
+	q_hdr->qhdr_tx_req = 0x0;
+	q_hdr->qhdr_rx_irq_status = 0x0;
+	q_hdr->qhdr_tx_irq_status = 0x0;
+	q_hdr->qhdr_read_idx = 0x0;
+	q_hdr->qhdr_write_idx = 0x0;
+}
+
 void msm_cvp_cdsp_ssr_handler(struct work_struct *work)
 {
 	struct cvp_dsp_apps *me;
 	uint64_t msg_ptr;
 	uint32_t msg_ptr_len;
 	int err;
+	u32 i;
+	struct iris_hfi_device *dev;
+	struct cvp_hfi_queue_table_header *q_tbl_hdr;
+	struct cvp_hfi_queue_header *q_hdr;
+	struct cvp_iface_q_info *iface_q;
 
+	dprintk(CVP_WARN, "%s: Entering CDSP-SSR handler\n", __func__);
 	me = container_of(work, struct cvp_dsp_apps, ssr_work);
 	if (!me) {
 		dprintk(CVP_ERR, "%s: Invalid params\n", __func__);
 		return;
 	}
 
+	dev = me->device;
+	for (i = 0; i < CVP_IFACEQ_NUMQ; i++) {
+		iface_q = &dev->dsp_iface_queues[i];
+		iface_q->q_hdr = CVP_IFACEQ_GET_QHDR_START_ADDR(
+			dev->dsp_iface_q_table.align_virtual_addr, i);
+		__reset_queue_hdr_defaults(iface_q->q_hdr);
+	}
+
+	q_tbl_hdr = (struct cvp_hfi_queue_table_header *)
+			dev->dsp_iface_q_table.align_virtual_addr;
+	q_tbl_hdr->qtbl_version = 0;
+	q_tbl_hdr->device_addr = (void *)dev;
+	strlcpy(q_tbl_hdr->name, "msm_v4l2_cvp", sizeof(q_tbl_hdr->name));
+	q_tbl_hdr->qtbl_size = CVP_IFACEQ_TABLE_SIZE;
+	q_tbl_hdr->qtbl_qhdr0_offset =
+				sizeof(struct cvp_hfi_queue_table_header);
+	q_tbl_hdr->qtbl_qhdr_size = sizeof(struct cvp_hfi_queue_header);
+	q_tbl_hdr->qtbl_num_q = CVP_IFACEQ_NUMQ;
+	q_tbl_hdr->qtbl_num_active_q = CVP_IFACEQ_NUMQ;
+
+	iface_q = &dev->dsp_iface_queues[CVP_IFACEQ_CMDQ_IDX];
+	q_hdr = iface_q->q_hdr;
+	q_hdr->qhdr_type |= HFI_Q_ID_HOST_TO_CTRL_CMD_Q;
+
+	iface_q = &dev->dsp_iface_queues[CVP_IFACEQ_MSGQ_IDX];
+	q_hdr = iface_q->q_hdr;
+	q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_MSG_Q;
+
+	iface_q = &dev->dsp_iface_queues[CVP_IFACEQ_DBGQ_IDX];
+	q_hdr = iface_q->q_hdr;
+	q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_DEBUG_Q;
+	q_hdr->qhdr_rx_req = 0;
+
 	msg_ptr = cmd_msg.msg_ptr;
 	msg_ptr_len =  cmd_msg.msg_ptr_len;
 
+	dprintk(CVP_WARN, "%s: HFI queue cmd after CDSP-SSR\n", __func__);
 	err = cvp_dsp_send_cmd_hfi_queue((phys_addr_t *)msg_ptr,
 					msg_ptr_len,
-					(void *)NULL);
+					(struct iris_hfi_device *)(me->device));
 	if (err) {
 		dprintk(CVP_ERR,
 			"%s: Failed to send HFI Queue address. err=%d\n",
diff --git a/drivers/media/platform/msm/npu/npu_dev.c b/drivers/media/platform/msm/npu/npu_dev.c
index f4af1c4..bcfd230 100644
--- a/drivers/media/platform/msm/npu/npu_dev.c
+++ b/drivers/media/platform/msm/npu/npu_dev.c
@@ -2626,9 +2626,7 @@
 		goto error_res_init;
 	}
 
-	rc = npu_debugfs_init(npu_dev);
-	if (rc)
-		goto error_driver_init;
+	npu_debugfs_init(npu_dev);
 
 	rc = npu_host_init(npu_dev);
 	if (rc) {
diff --git a/drivers/media/platform/msm/npu/npu_mgr.c b/drivers/media/platform/msm/npu/npu_mgr.c
index 876dc23..d497a7f 100644
--- a/drivers/media/platform/msm/npu/npu_mgr.c
+++ b/drivers/media/platform/msm/npu/npu_mgr.c
@@ -49,7 +49,7 @@
 	int64_t id);
 static int network_get(struct npu_network *network);
 static int network_put(struct npu_network *network);
-static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg);
+static int app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg);
 static void log_msg_proc(struct npu_device *npu_dev, uint32_t *msg);
 static void host_session_msg_hdlr(struct npu_device *npu_dev);
 static void host_session_log_hdlr(struct npu_device *npu_dev);
@@ -1630,7 +1630,7 @@
 	return ret;
 }
 
-static void app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
+static int app_msg_proc(struct npu_host_ctx *host_ctx, uint32_t *msg)
 {
 	uint32_t msg_id;
 	struct npu_network *network = NULL;
@@ -1638,6 +1638,7 @@
 	struct npu_device *npu_dev = host_ctx->npu_dev;
 	struct npu_network_cmd *network_cmd = NULL;
 	struct npu_misc_cmd *misc_cmd = NULL;
+	int need_ctx_switch = 0;
 
 	msg_id = msg[1];
 	switch (msg_id) {
@@ -1682,7 +1683,7 @@
 				NPU_ERR("queue npu event failed\n");
 		}
 		network_put(network);
-
+		need_ctx_switch = 1;
 		break;
 	}
 	case NPU_IPC_MSG_EXECUTE_V2_DONE:
@@ -1742,6 +1743,7 @@
 			complete(&network_cmd->cmd_done);
 		}
 		network_put(network);
+		need_ctx_switch = 1;
 		break;
 	}
 	case NPU_IPC_MSG_LOAD_DONE:
@@ -1780,6 +1782,7 @@
 
 		complete(&network_cmd->cmd_done);
 		network_put(network);
+		need_ctx_switch = 1;
 		break;
 	}
 	case NPU_IPC_MSG_UNLOAD_DONE:
@@ -1812,6 +1815,7 @@
 
 		complete(&network_cmd->cmd_done);
 		network_put(network);
+		need_ctx_switch = 1;
 		break;
 	}
 	case NPU_IPC_MSG_LOOPBACK_DONE:
@@ -1832,6 +1836,7 @@
 
 		misc_cmd->ret_status = lb_rsp_pkt->header.status;
 		complete_all(&misc_cmd->cmd_done);
+		need_ctx_switch = 1;
 		break;
 	}
 	case NPU_IPC_MSG_SET_PROPERTY_DONE:
@@ -1855,6 +1860,7 @@
 
 		misc_cmd->ret_status = prop_rsp_pkt->header.status;
 		complete(&misc_cmd->cmd_done);
+		need_ctx_switch = 1;
 		break;
 	}
 	case NPU_IPC_MSG_GET_PROPERTY_DONE:
@@ -1893,6 +1899,7 @@
 		}
 
 		complete_all(&misc_cmd->cmd_done);
+		need_ctx_switch = 1;
 		break;
 	}
 	case NPU_IPC_MSG_GENERAL_NOTIFY:
@@ -1923,12 +1930,15 @@
 			msg_id);
 		break;
 	}
+
+	return need_ctx_switch;
 }
 
 static void host_session_msg_hdlr(struct npu_device *npu_dev)
 {
 	struct npu_host_ctx *host_ctx = &npu_dev->host_ctx;
 
+retry:
 	mutex_lock(&host_ctx->lock);
 	if (host_ctx->fw_state != FW_ENABLED) {
 		NPU_WARN("handle npu session msg when FW is disabled\n");
@@ -1938,7 +1948,15 @@
 	while (npu_host_ipc_read_msg(npu_dev, IPC_QUEUE_APPS_RSP,
 		host_ctx->ipc_msg_buf) == 0) {
 		NPU_DBG("received from msg queue\n");
-		app_msg_proc(host_ctx, host_ctx->ipc_msg_buf);
+		if (app_msg_proc(host_ctx, host_ctx->ipc_msg_buf)) {
+			/*
+			 * Force a context switch so the user
+			 * process has a chance to run.
+			 */
+			mutex_unlock(&host_ctx->lock);
+			usleep_range(500, 501);
+			goto retry;
+		}
 	}
 
 skip_read_msg:
@@ -2859,6 +2877,8 @@
 		exec_ioctl->stats_buf_size = 0;
 	}
 
+
+	NPU_DBG("Execute done %x\n", ret);
 free_exec_cmd:
 	npu_dequeue_network_cmd(network, exec_cmd);
 	npu_free_network_cmd(host_ctx, exec_cmd);
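The retry label together with app_msg_proc()'s new return value implements a drain-with-yield loop: the handler holds the lock while draining the response queue, but briefly drops it after any message that completed a waiter. A condensed, illustrative restatement of the code above:

    static void example_drain(struct npu_device *npu_dev)
    {
    	struct npu_host_ctx *host_ctx = &npu_dev->host_ctx;

    retry:
    	mutex_lock(&host_ctx->lock);
    	while (npu_host_ipc_read_msg(npu_dev, IPC_QUEUE_APPS_RSP,
    				     host_ctx->ipc_msg_buf) == 0) {
    		if (app_msg_proc(host_ctx, host_ctx->ipc_msg_buf)) {
    			/* A waiter was completed; yield so it can run. */
    			mutex_unlock(&host_ctx->lock);
    			usleep_range(500, 501);
    			goto retry;
    		}
    	}
    	mutex_unlock(&host_ctx->lock);
    }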
diff --git a/drivers/media/radio/rtc6226/radio-rtc6226-common.c b/drivers/media/radio/rtc6226/radio-rtc6226-common.c
index dfc8096..2e50c4f 100644
--- a/drivers/media/radio/rtc6226/radio-rtc6226-common.c
+++ b/drivers/media/radio/rtc6226/radio-rtc6226-common.c
@@ -385,8 +385,30 @@
 			next_freq_khz, radio->registers[RSSI] & RSSI_RSSI);
 
 		if (radio->registers[STATUS] & STATUS_SF) {
-			FMDERR("%s band limit reached. Seek one more.\n",
+			FMDERR("%s Seek one more time if lower freq is valid\n",
 					__func__);
+			retval = rtc6226_set_seek(radio, SRCH_UP, WRAP_ENABLE);
+			if (retval < 0) {
+				FMDERR("%s seek fail %d\n", __func__, retval);
+				goto seek_tune_fail;
+			}
+			if (!wait_for_completion_timeout(&radio->completion,
+					msecs_to_jiffies(WAIT_TIMEOUT_MSEC))) {
+				FMDERR("timeout: didn't receive STC for seek\n");
+			} else {
+				FMDERR("%s: received STC for seek\n", __func__);
+				retval = rtc6226_get_freq(radio,
+						&next_freq_khz);
+				if (retval < 0) {
+					FMDERR("%s getFreq failed\n", __func__);
+					goto seek_tune_fail;
+				}
+				if ((radio->recv_conf.band_low_limit *
+						TUNE_STEP_SIZE) ==
+							next_freq_khz)
+					rtc6226_q_event(radio,
+						RTC6226_EVT_TUNE_SUCC);
+			}
 			break;
 		}
 		if (radio->g_search_mode == SCAN)
@@ -438,13 +460,12 @@
 		if (!wait_for_completion_timeout(&radio->completion,
 			msecs_to_jiffies(WAIT_TIMEOUT_MSEC)))
 			FMDERR("%s: didn't receive STD for tune\n", __func__);
-		else {
+		else
 			FMDERR("%s: received STD for tune\n", __func__);
-			rtc6226_q_event(radio, RTC6226_EVT_TUNE_SUCC);
-		}
 	}
 seek_cancelled:
 	rtc6226_q_event(radio, RTC6226_EVT_SEEK_COMPLETE);
+	rtc6226_q_event(radio, RTC6226_EVT_TUNE_SUCC);
 	radio->seek_tune_status = NO_SEEK_TUNE_PENDING;
 	FMDERR("%s seek cancelled %d\n", __func__, retval);
 	return;
diff --git a/drivers/media/radio/rtc6226/radio-rtc6226-i2c.c b/drivers/media/radio/rtc6226/radio-rtc6226-i2c.c
index f4e62c1..4ea5011 100644
--- a/drivers/media/radio/rtc6226/radio-rtc6226-i2c.c
+++ b/drivers/media/radio/rtc6226/radio-rtc6226-i2c.c
@@ -523,13 +523,9 @@
 int rtc6226_fops_open(struct file *file)
 {
 	struct rtc6226_device *radio = video_drvdata(file);
-	int retval = v4l2_fh_open(file);
+	int retval;
 
 	FMDBG("%s enter user num = %d\n", __func__, radio->users);
-	if (retval) {
-		FMDERR("%s fail to open v4l2\n", __func__);
-		return retval;
-	}
 	if (atomic_inc_return(&radio->users) != 1) {
 		FMDERR("Device already in use. Try again later\n");
 		atomic_dec(&radio->users);
@@ -560,7 +556,6 @@
 	rtc6226_fm_power_cfg(radio, TURNING_OFF);
 open_err_setup:
 	atomic_dec(&radio->users);
-	v4l2_fh_release(file);
 	return retval;
 }
 
@@ -573,18 +568,16 @@
 	int retval = 0;
 
 	FMDBG("%s : Exit\n", __func__);
-	if (v4l2_fh_is_singular_file(file)) {
-		if (radio->mode != FM_OFF) {
-			rtc6226_power_down(radio);
-			radio->mode = FM_OFF;
-		}
+	if (radio->mode != FM_OFF) {
+		rtc6226_power_down(radio);
+		radio->mode = FM_OFF;
 	}
 	rtc6226_disable_irq(radio);
 	atomic_dec(&radio->users);
 	retval = rtc6226_fm_power_cfg(radio, TURNING_OFF);
 	if (retval < 0)
 		FMDERR("%s: failed to apply voltage\n", __func__);
-	return v4l2_fh_release(file);
+	return retval;
 }
 
 static int rtc6226_parse_dt(struct device *dev,
diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
index dddf0f5..fcda5c4 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
@@ -836,6 +836,8 @@
 	case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER:return "H264 Number of HC Layers";
 	case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER_QP:
 								return "H264 Set QP Value for HC Layers";
+	case V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET:
+					return "H264 Chroma QP Index Offset";
 	case V4L2_CID_MPEG_VIDEO_MPEG4_I_FRAME_QP:		return "MPEG4 I-Frame QP Value";
 	case V4L2_CID_MPEG_VIDEO_MPEG4_P_FRAME_QP:		return "MPEG4 P-Frame QP Value";
 	case V4L2_CID_MPEG_VIDEO_MPEG4_B_FRAME_QP:		return "MPEG4 B-Frame QP Value";
diff --git a/drivers/mfd/qcom-i2c-pmic.c b/drivers/mfd/qcom-i2c-pmic.c
index d0ba0c5..8c9c249 100644
--- a/drivers/mfd/qcom-i2c-pmic.c
+++ b/drivers/mfd/qcom-i2c-pmic.c
@@ -552,7 +552,7 @@
 			goto exit;
 		}
 
-		usleep_range(10000, 11000);
+		usleep_range(5000, 5500);
 
 		rc = regmap_write(chip->regmap,
 				chip->periph[0].addr | INT_TEST_VAL_OFFSET,
@@ -570,7 +570,7 @@
 			goto exit;
 		}
 
-		usleep_range(10000, 11000);
+		usleep_range(5000, 5500);
 	}
 exit:
 	regmap_write(chip->regmap, chip->periph[0].addr | INT_TEST_OFFSET, 0);
diff --git a/drivers/misc/qseecom.c b/drivers/misc/qseecom.c
index 1ca6820..80ce22b 100644
--- a/drivers/misc/qseecom.c
+++ b/drivers/misc/qseecom.c
@@ -9291,6 +9291,15 @@
 		goto exit_del_cdev;
 	}
 
+	if (!qseecom.dev->dma_parms) {
+		qseecom.dev->dma_parms =
+			kzalloc(sizeof(*qseecom.dev->dma_parms), GFP_KERNEL);
+		if (!qseecom.dev->dma_parms) {
+			rc = -ENOMEM;
+			goto exit_del_cdev;
+		}
+	}
+	dma_set_max_seg_size(qseecom.dev, DMA_BIT_MASK(32));
 	return 0;
 
 exit_del_cdev:
@@ -9307,6 +9316,8 @@
 
 static void qseecom_deinit_dev(void)
 {
+	kfree(qseecom.dev->dma_parms);
+	qseecom.dev->dma_parms = NULL;
 	cdev_del(&qseecom.cdev);
 	device_destroy(qseecom.driver_class, qseecom.qseecom_device_no);
 	class_destroy(qseecom.driver_class);
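The dma_parms allocation follows the standard pattern for devices doing scatter-gather DMA: dma_set_max_seg_size() only takes effect once dev->dma_parms exists. The pattern in isolation, as a sketch (function name is illustrative):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/slab.h>

    static int example_enable_sg_dma(struct device *dev)
    {
    	if (!dev->dma_parms) {
    		dev->dma_parms = kzalloc(sizeof(*dev->dma_parms),
    					 GFP_KERNEL);
    		if (!dev->dma_parms)
    			return -ENOMEM;
    	}
    	/* Allow segments up to 4 GiB - 1, as the qseecom change does. */
    	dma_set_max_seg_size(dev, DMA_BIT_MASK(32));
    	return 0;
    }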
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index d88832a..a3c4862 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1574,7 +1574,6 @@
 	int err = 0;
 
 	mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
-	mqrq->brq.mrq.req = req;
 
 	mmc_deferred_scaling(mq->card->host);
 	mmc_cqe_clk_scaling_start_busy(mq, mq->card->host, true);
@@ -2209,7 +2208,6 @@
 	mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
 
 	mqrq->brq.mrq.done = mmc_blk_mq_req_done;
-	mqrq->brq.mrq.req = req;
 
 	mmc_pre_req(host, &mqrq->brq.mrq);
 
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 304842e..f483926 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -114,7 +114,6 @@
 				mmc_cqe_recovery_notifier(mrq);
 			return BLK_EH_RESET_TIMER;
 		}
-		/* No timeout (XXX: huh? comment doesn't make much sense) */
 
 		pr_info("%s: %s: Timeout even before req reaching LDD, completing the req. Active reqs: %d Req: %p Tag: %d\n",
 				mmc_hostname(host), __func__,
@@ -122,7 +121,7 @@
 		mmc_log_string(host,
 				"Timeout even before req reaching LDD,completing the req. Active reqs: %d Req: %p Tag: %d\n",
 				mmc_cqe_qcnt(mq), req, req->tag);
-		blk_mq_complete_request(req);
+		/* The request has gone already */
 		return BLK_EH_DONE;
 	default:
 		/* Timeout is handled by mmc core */
@@ -385,8 +384,6 @@
 	blk_queue_max_hw_sectors(mq->queue,
 		min(host->max_blk_count, host->max_req_size / 512));
 	blk_queue_max_segments(mq->queue, host->max_segs);
-	if (host->inlinecrypt_support)
-		queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, mq->queue);
 
 	if (host->ops->init)
 		host->ops->init(host);
@@ -404,6 +401,9 @@
 	mutex_init(&mq->complete_lock);
 
 	init_waitqueue_head(&mq->wait);
+
+	if (host->cqe_ops->cqe_crypto_update_queue)
+		host->cqe_ops->cqe_crypto_update_queue(host, mq->queue);
 }
 
 static int mmc_mq_init_queue(struct mmc_queue *mq, int q_depth,
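The new call site hands the freshly built request queue to the host's CQE layer so it can attach a keyslot manager. A sketch of the provider side (the real hook for cqhci, cqhci_crypto_update_queue(), appears in cqhci.c later in this series; the ops table below is illustrative):

    static const struct mmc_cqe_ops example_cqe_ops = {
    	/* ... the mandatory cqe callbacks are omitted here ... */
    	.cqe_crypto_update_queue = cqhci_crypto_update_queue,
    };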
diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index 25a0a48..e9dcda1 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -151,17 +151,6 @@
 	help
 	  This selects the Atmel SDMMC driver
 
-config MMC_SDHCI_MSM_ICE
-	bool "Qualcomm Technologies, Inc Inline Crypto Engine for SDHCI core"
-	depends on MMC_SDHCI_MSM && CRYPTO_DEV_QCOM_ICE
-	help
-	  This selects the QTI specific additions to support Inline Crypto
-	  Engine (ICE). ICE accelerates the crypto operations and maintains
-	  the high SDHCI performance.
-
-	  Select this if you have ICE supported for SDHCI on QTI chipset.
-	  If unsure, say N.
-
 config MMC_SDHCI_OF_ESDHC
 	tristate "SDHCI OF support for the Freescale eSDHC controller"
 	depends on MMC_SDHCI_PLTFM
@@ -957,3 +946,20 @@
 	  If you have a controller with this interface, say Y or M here.
 
 	  If unsure, say N.
+
+config MMC_CQHCI_CRYPTO
+	bool "CQHCI Crypto Engine Support"
+	depends on MMC_CQHCI && BLK_INLINE_ENCRYPTION
+	help
+	  Enable Crypto Engine Support in CQHCI.
+	  Enabling this makes it possible for the kernel to use the crypto
+	  capabilities of the CQHCI device (if present) to perform crypto
+	  operations on data being transferred to/from the device.
+
+config MMC_CQHCI_CRYPTO_QTI
+	bool "Vendor specific CQHCI Crypto Engine Support"
+	depends on MMC_CQHCI_CRYPTO
+	help
+	  Enable vendor-specific Crypto Engine Support in CQHCI.
+	  Enabling this allows the kernel to use the CQHCI crypto operations
+	  defined and implemented by QTI.
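For reference, a hypothetical .config fragment that enables this stack end to end, dependencies included:

    CONFIG_BLK_INLINE_ENCRYPTION=y
    CONFIG_MMC_CQHCI=y
    CONFIG_MMC_CQHCI_CRYPTO=y
    CONFIG_MMC_CQHCI_CRYPTO_QTI=y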
diff --git a/drivers/mmc/host/Makefile b/drivers/mmc/host/Makefile
index 72af5d8..d550cb2 100644
--- a/drivers/mmc/host/Makefile
+++ b/drivers/mmc/host/Makefile
@@ -87,12 +87,13 @@
 obj-$(CONFIG_MMC_SDHCI_BCM_KONA)	+= sdhci-bcm-kona.o
 obj-$(CONFIG_MMC_SDHCI_IPROC)		+= sdhci-iproc.o
 obj-$(CONFIG_MMC_SDHCI_MSM)		+= sdhci-msm.o
-obj-$(CONFIG_MMC_SDHCI_MSM_ICE)		+= sdhci-msm-ice.o
 obj-$(CONFIG_MMC_SDHCI_ST)		+= sdhci-st.o
 obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32)	+= sdhci-pic32.o
 obj-$(CONFIG_MMC_SDHCI_BRCMSTB)		+= sdhci-brcmstb.o
 obj-$(CONFIG_MMC_SDHCI_OMAP)		+= sdhci-omap.o
 obj-$(CONFIG_MMC_CQHCI)			+= cqhci.o
+obj-$(CONFIG_MMC_CQHCI_CRYPTO)		+= cqhci-crypto.o
+obj-$(CONFIG_MMC_CQHCI_CRYPTO_QTI)	+= cqhci-crypto-qti.o
 
 ifeq ($(CONFIG_CB710_DEBUG),y)
 	CFLAGS-cb710-mmc	+= -DDEBUG
diff --git a/drivers/mmc/host/cqhci-crypto-qti.c b/drivers/mmc/host/cqhci-crypto-qti.c
new file mode 100644
index 0000000..7be5335
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto-qti.c
@@ -0,0 +1,300 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include "sdhci.h"
+#include "sdhci-pltfm.h"
+#include "sdhci-msm.h"
+#include "cqhci-crypto-qti.h"
+#include <linux/crypto-qti-common.h>
+
+#define RAW_SECRET_SIZE 32
+#define MINIMUM_DUN_SIZE 512
+#define MAXIMUM_DUN_SIZE 65536
+
+static struct cqhci_host_crypto_variant_ops cqhci_crypto_qti_variant_ops = {
+	.host_init_crypto = cqhci_crypto_qti_init_crypto,
+	.enable = cqhci_crypto_qti_enable,
+	.disable = cqhci_crypto_qti_disable,
+	.resume = cqhci_crypto_qti_resume,
+	.debug = cqhci_crypto_qti_debug,
+};
+
+static bool ice_cap_idx_valid(struct cqhci_host *host,
+					unsigned int cap_idx)
+{
+	return cap_idx < host->crypto_capabilities.num_crypto_cap;
+}
+
+static uint8_t get_data_unit_size_mask(unsigned int data_unit_size)
+{
+	if (data_unit_size < MINIMUM_DUN_SIZE ||
+		data_unit_size > MAXIMUM_DUN_SIZE ||
+	    !is_power_of_2(data_unit_size))
+		return 0;
+
+	return data_unit_size / MINIMUM_DUN_SIZE;
+}
+
+void cqhci_crypto_qti_enable(struct cqhci_host *host)
+{
+	int err = 0;
+
+	if (!cqhci_host_is_crypto_supported(host))
+		return;
+
+	host->caps |= CQHCI_CAP_CRYPTO_SUPPORT;
+
+	err = crypto_qti_enable(host->crypto_vops->priv);
+	if (err) {
+		pr_err("%s: Error enabling crypto, err %d\n",
+				__func__, err);
+		cqhci_crypto_qti_disable(host);
+	}
+}
+
+void cqhci_crypto_qti_disable(struct cqhci_host *host)
+{
+	cqhci_crypto_disable_spec(host);
+	crypto_qti_disable(host->crypto_vops->priv);
+}
+
+static int cqhci_crypto_qti_keyslot_program(struct keyslot_manager *ksm,
+					    const struct blk_crypto_key *key,
+					    unsigned int slot)
+{
+	struct cqhci_host *host = keyslot_manager_private(ksm);
+	int err = 0;
+	u8 data_unit_mask;
+	int crypto_alg_id;
+
+	crypto_alg_id = cqhci_crypto_cap_find(host, key->crypto_mode,
+					       key->data_unit_size);
+
+	if (!cqhci_is_crypto_enabled(host) ||
+	    !cqhci_keyslot_valid(host, slot) ||
+	    !ice_cap_idx_valid(host, crypto_alg_id)) {
+		return -EINVAL;
+	}
+
+	data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+	if (!(data_unit_mask &
+	      host->crypto_cap_array[crypto_alg_id].sdus_mask)) {
+		return -EINVAL;
+	}
+
+	err = crypto_qti_keyslot_program(host->crypto_vops->priv, key,
+					 slot, data_unit_mask, crypto_alg_id);
+	if (err)
+		pr_err("%s: failed with error %d\n", __func__, err);
+
+	return err;
+}
+
+static int cqhci_crypto_qti_keyslot_evict(struct keyslot_manager *ksm,
+					  const struct blk_crypto_key *key,
+					  unsigned int slot)
+{
+	int err = 0;
+	struct cqhci_host *host = keyslot_manager_private(ksm);
+
+	if (!cqhci_is_crypto_enabled(host) ||
+	    !cqhci_keyslot_valid(host, slot))
+		return -EINVAL;
+
+	err = crypto_qti_keyslot_evict(host->crypto_vops->priv, slot);
+	if (err)
+		pr_err("%s: failed with error %d\n", __func__, err);
+
+	return err;
+}
+
+static int cqhci_crypto_qti_derive_raw_secret(struct keyslot_manager *ksm,
+		const u8 *wrapped_key, unsigned int wrapped_key_size,
+		u8 *secret, unsigned int secret_size)
+{
+	if (wrapped_key_size <= RAW_SECRET_SIZE) {
+		pr_err("%s: Invalid wrapped_key_size: %u\n", __func__,
+			wrapped_key_size);
+		return -EINVAL;
+	}
+	if (secret_size != RAW_SECRET_SIZE) {
+		pr_err("%s: Invalid secret size: %u\n", __func__, secret_size);
+		return -EINVAL;
+	}
+	memcpy(secret, wrapped_key, secret_size);
+	return 0;
+}
+
+static const struct keyslot_mgmt_ll_ops cqhci_crypto_qti_ksm_ops = {
+	.keyslot_program	= cqhci_crypto_qti_keyslot_program,
+	.keyslot_evict		= cqhci_crypto_qti_keyslot_evict,
+	.derive_raw_secret	= cqhci_crypto_qti_derive_raw_secret
+};
+
+enum blk_crypto_mode_num cqhci_blk_crypto_qti_mode_num_for_alg_dusize(
+	enum cqhci_crypto_alg cqhci_crypto_alg,
+	enum cqhci_crypto_key_size key_size)
+{
+	/*
+	 * AES-256-XTS is currently the only mode that both eMMC and
+	 * blk-crypto support.
+	 */
+	if (cqhci_crypto_alg == CQHCI_CRYPTO_ALG_AES_XTS &&
+		key_size == CQHCI_CRYPTO_KEY_SIZE_256)
+		return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+	return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+int cqhci_host_init_crypto_qti_spec(struct cqhci_host *host,
+				    const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+	int cap_idx = 0;
+	int err = 0;
+	unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+	enum blk_crypto_mode_num blk_mode_num;
+
+	/* Default to disabling crypto */
+	host->caps &= ~CQHCI_CAP_CRYPTO_SUPPORT;
+
+	if (!(cqhci_readl(host, CQHCI_CAP) & CQHCI_CAP_CS)) {
+		pr_debug("%s no crypto capability\n", __func__);
+		err = -ENODEV;
+		goto out;
+	}
+
+	/*
+	 * Crypto capabilities should never be 0, because config_array_ptr
+	 * is always > 04h. A 0 value therefore indicates that crypto init
+	 * failed and crypto can't be enabled.
+	 */
+	host->crypto_capabilities.reg_val = cqhci_readl(host, CQHCI_CCAP);
+	host->crypto_cfg_register =
+		(u32)host->crypto_capabilities.config_array_ptr * 0x100;
+	host->crypto_cap_array =
+		devm_kcalloc(mmc_dev(host->mmc),
+				host->crypto_capabilities.num_crypto_cap,
+				sizeof(host->crypto_cap_array[0]), GFP_KERNEL);
+	if (!host->crypto_cap_array) {
+		err = -ENOMEM;
+		pr_err("%s failed to allocate memory\n", __func__);
+		goto out;
+	}
+
+	memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+
+	/*
+	 * Store all the capabilities now so that we don't need to repeatedly
+	 * access the device each time we want to know its capabilities
+	 */
+	for (cap_idx = 0; cap_idx < host->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		host->crypto_cap_array[cap_idx].reg_val =
+			cpu_to_le32(cqhci_readl(host,
+						 CQHCI_CRYPTOCAP +
+						 cap_idx * sizeof(__le32)));
+		blk_mode_num = cqhci_blk_crypto_qti_mode_num_for_alg_dusize(
+				host->crypto_cap_array[cap_idx].algorithm_id,
+				host->crypto_cap_array[cap_idx].key_size);
+		if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+			continue;
+		crypto_modes_supported[blk_mode_num] |=
+				host->crypto_cap_array[cap_idx].sdus_mask * 512;
+	}
+
+	host->ksm = keyslot_manager_create(cqhci_num_keyslots(host), ksm_ops,
+					crypto_modes_supported, host);
+
+	if (!host->ksm) {
+		err = -ENOMEM;
+		goto out;
+	}
+	/*
+	 * If the host controller supports cryptographic operations, it
+	 * uses a 128-bit task descriptor; the upper 64 bits of the task
+	 * descriptor are used to pass crypto-specific information.
+	 */
+	host->caps |= CQHCI_TASK_DESC_SZ_128;
+
+	return 0;
+
+out:
+	/* Indicate that init failed by setting crypto_capabilities to 0 */
+	host->crypto_capabilities.reg_val = 0;
+	return err;
+}
+
+int cqhci_crypto_qti_init_crypto(struct cqhci_host *host,
+				const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+	int err = 0;
+	struct sdhci_host *sdhci = mmc_priv(host->mmc);
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(sdhci);
+	struct sdhci_msm_host *msm_host = pltfm_host->priv;
+	struct resource *cqhci_ice_memres = NULL;
+
+	cqhci_ice_memres = platform_get_resource_byname(msm_host->pdev,
+							IORESOURCE_MEM,
+							"cqhci_ice");
+	if (!cqhci_ice_memres) {
+		pr_debug("%s ICE not supported\n", __func__);
+		host->icemmio = NULL;
+		return 0;	/* no ICE resource: nothing to set up */
+	}
+
+	host->icemmio = devm_ioremap(&msm_host->pdev->dev,
+				     cqhci_ice_memres->start,
+				     resource_size(cqhci_ice_memres));
+	if (!host->icemmio) {
+		pr_err("%s failed to remap ice regs\n", __func__);
+		return -ENOMEM;
+	}
+
+	err = cqhci_host_init_crypto_qti_spec(host, &cqhci_crypto_qti_ksm_ops);
+	if (err) {
+		pr_err("%s: Error initializing crypto capabilities, err %d\n",
+					__func__, err);
+		return err;
+	}
+
+	err = crypto_qti_init_crypto(&msm_host->pdev->dev,
+			host->icemmio, (void **)&host->crypto_vops->priv);
+	if (err) {
+		pr_err("%s: Error initializing crypto, err %d\n",
+					__func__, err);
+	}
+	return err;
+}
+
+int cqhci_crypto_qti_debug(struct cqhci_host *host)
+{
+	return crypto_qti_debug(host->crypto_vops->priv);
+}
+
+void cqhci_crypto_qti_set_vops(struct cqhci_host *host)
+{
+	return cqhci_crypto_set_vops(host, &cqhci_crypto_qti_variant_ops);
+}
+
+int cqhci_crypto_qti_resume(struct cqhci_host *host)
+{
+	return crypto_qti_resume(host->crypto_vops->priv);
+}
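As a worked example of get_data_unit_size_mask() defined near the top of this file: the mask encodes the data unit size as a power-of-two multiple of 512 bytes, and keyslot programming is rejected unless it overlaps the capability's sdus_mask. Illustrative arithmetic only:

    static void example_dun_masks(void)
    {
    	/* 512 -> 0x01 (bit 0), 4096 -> 0x08 (bit 3), 65536 -> 0x80;
    	 * 3000 -> 0 because it is not a power of two. */
    	u8 mask = get_data_unit_size_mask(4096);	/* == 0x08 */

    	(void)mask;	/* illustrative only */
    }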
diff --git a/drivers/mmc/host/cqhci-crypto-qti.h b/drivers/mmc/host/cqhci-crypto-qti.h
new file mode 100644
index 0000000..2788e96b
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto-qti.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CQHCI_CRYPTO_QTI_H
+#define _CQHCI_CRYPTO_QTI_H
+
+#include "cqhci-crypto.h"
+
+void cqhci_crypto_qti_enable(struct cqhci_host *host);
+
+void cqhci_crypto_qti_disable(struct cqhci_host *host);
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+int cqhci_crypto_qti_init_crypto(struct cqhci_host *host,
+				 const struct keyslot_mgmt_ll_ops *ksm_ops);
+#endif
+
+int cqhci_crypto_qti_debug(struct cqhci_host *host);
+
+void cqhci_crypto_qti_set_vops(struct cqhci_host *host);
+
+int cqhci_crypto_qti_resume(struct cqhci_host *host);
+
+#endif /* _CQHCI_CRYPTO_QTI_H */
diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c
new file mode 100644
index 0000000..5b06a6b
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto.c
@@ -0,0 +1,528 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020 Google LLC
+ *
+ * Copyright (c) 2020 The Linux Foundation. All rights reserved.
+ *
+ * drivers/mmc/host/cqhci-crypto.c - Qualcomm Technologies, Inc.
+ *
+ * Original source is taken from:
+ * https://android.googlesource.com/kernel/common/+/4bac1109a10c55d49c0aa4f7ebdc4bc53cc368e8
+ * The driver caters to crypto engine support for UFS controllers.
+ * The crypto engine programming sequence, HW functionality and register
+ * offsets are almost the same in UFS and eMMC controllers.
+ */
+
+#include <crypto/algapi.h>
+#include "cqhci-crypto.h"
+#include "../core/queue.h"
+
+static bool cqhci_cap_idx_valid(struct cqhci_host *host, unsigned int cap_idx)
+{
+	return cap_idx < host->crypto_capabilities.num_crypto_cap;
+}
+
+static u8 get_data_unit_size_mask(unsigned int data_unit_size)
+{
+	if (data_unit_size < 512 || data_unit_size > 65536 ||
+	    !is_power_of_2(data_unit_size))
+		return 0;
+
+	return data_unit_size / 512;
+}
+
+static size_t get_keysize_bytes(enum cqhci_crypto_key_size size)
+{
+	switch (size) {
+	case CQHCI_CRYPTO_KEY_SIZE_128:
+		return 16;
+	case CQHCI_CRYPTO_KEY_SIZE_192:
+		return 24;
+	case CQHCI_CRYPTO_KEY_SIZE_256:
+		return 32;
+	case CQHCI_CRYPTO_KEY_SIZE_512:
+		return 64;
+	default:
+		return 0;
+	}
+}
+
+int cqhci_crypto_cap_find(void *host_p, enum blk_crypto_mode_num crypto_mode,
+			  unsigned int data_unit_size)
+{
+	struct cqhci_host *host = host_p;
+	enum cqhci_crypto_alg cqhci_alg;
+	u8 data_unit_mask;
+	int cap_idx;
+	enum cqhci_crypto_key_size cqhci_key_size;
+	union cqhci_crypto_cap_entry *ccap_array = host->crypto_cap_array;
+
+	if (!cqhci_host_is_crypto_supported(host))
+		return -EINVAL;
+
+	switch (crypto_mode) {
+	case BLK_ENCRYPTION_MODE_AES_256_XTS:
+		cqhci_alg = CQHCI_CRYPTO_ALG_AES_XTS;
+		cqhci_key_size = CQHCI_CRYPTO_KEY_SIZE_256;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	for (cap_idx = 0; cap_idx < host->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		if (ccap_array[cap_idx].algorithm_id == cqhci_alg &&
+		    (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
+		    ccap_array[cap_idx].key_size == cqhci_key_size)
+			return cap_idx;
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL(cqhci_crypto_cap_find);
+
+/**
+ * cqhci_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
+ *
+ *	Writes the key with the appropriate format - for AES_XTS,
+ *	the first half of the key is copied as is, the second half is
+ *	copied with an offset halfway into the cfg->crypto_key array.
+ *	For the other supported crypto algs, the key is just copied.
+ *
+ * @cfg: The crypto config to write to
+ * @key: The key to write
+ * @cap: The crypto capability (which specifies the crypto alg and key size)
+ *
+ * Returns 0 on success, or -EINVAL
+ */
+static int cqhci_crypto_cfg_entry_write_key(union cqhci_crypto_cfg_entry *cfg,
+					     const u8 *key,
+					     union cqhci_crypto_cap_entry cap)
+{
+	size_t key_size_bytes = get_keysize_bytes(cap.key_size);
+
+	if (key_size_bytes == 0)
+		return -EINVAL;
+
+	switch (cap.algorithm_id) {
+	case CQHCI_CRYPTO_ALG_AES_XTS:
+		key_size_bytes *= 2;
+		if (key_size_bytes > CQHCI_CRYPTO_KEY_MAX_SIZE)
+			return -EINVAL;
+
+		memcpy(cfg->crypto_key, key, key_size_bytes/2);
+		memcpy(cfg->crypto_key + CQHCI_CRYPTO_KEY_MAX_SIZE/2,
+		       key + key_size_bytes/2, key_size_bytes/2);
+		return 0;
+	case CQHCI_CRYPTO_ALG_BITLOCKER_AES_CBC:
+		/* fall through */
+	case CQHCI_CRYPTO_ALG_AES_ECB:
+		/* fall through */
+	case CQHCI_CRYPTO_ALG_ESSIV_AES_CBC:
+		memcpy(cfg->crypto_key, key, key_size_bytes);
+		return 0;
+	}
+
+	return -EINVAL;
+}
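To make the key layout concrete, a worked example for AES-256-XTS, assuming CQHCI_CRYPTO_KEY_MAX_SIZE is 64 bytes (the constant's value is not shown in this patch):

    /*
     * key_size_bytes starts at 32 and is doubled to 64, so a raw key
     * "K1 || K2" is split as:
     *
     *   cfg->crypto_key[0..31]  = K1   (first memcpy)
     *   cfg->crypto_key[32..63] = K2   (second memcpy)
     *
     * K2 always starts at the fixed midpoint of the array, whatever the
     * key size, because the offset is CQHCI_CRYPTO_KEY_MAX_SIZE/2.
     */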
+
+static void cqhci_program_key(struct cqhci_host *host,
+			const union cqhci_crypto_cfg_entry *cfg,
+			int slot)
+{
+	int i;
+	u32 slot_offset = host->crypto_cfg_register + slot * sizeof(*cfg);
+
+	if (host->crypto_vops && host->crypto_vops->program_key)
+		host->crypto_vops->program_key(host, cfg, slot);
+
+	/* Clear the dword 16 */
+	cqhci_writel(host, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	/* Ensure that CFGE is cleared before programming the key */
+	wmb();
+	for (i = 0; i < 16; i++) {
+		cqhci_writel(host, le32_to_cpu(cfg->reg_val[i]),
+			      slot_offset + i * sizeof(cfg->reg_val[0]));
+		/* Spec says each dword in key must be written sequentially */
+		wmb();
+	}
+	/* Write dword 17 */
+	cqhci_writel(host, le32_to_cpu(cfg->reg_val[17]),
+		      slot_offset + 17 * sizeof(cfg->reg_val[0]));
+	/* Dword 16 must be written last */
+	wmb();
+	/* Write dword 16 */
+	cqhci_writel(host, le32_to_cpu(cfg->reg_val[16]),
+		      slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	/* Ensure that dword 16 is written */
+	wmb();
+}
+
+static void cqhci_crypto_clear_keyslot(struct cqhci_host *host, int slot)
+{
+	union cqhci_crypto_cfg_entry cfg = { {0} };
+
+	cqhci_program_key(host, &cfg, slot);
+}
+
+static void cqhci_crypto_clear_all_keyslots(struct cqhci_host *host)
+{
+	int slot;
+
+	for (slot = 0; slot < cqhci_num_keyslots(host); slot++)
+		cqhci_crypto_clear_keyslot(host, slot);
+}
+
+static int cqhci_crypto_keyslot_program(struct keyslot_manager *ksm,
+					const struct blk_crypto_key *key,
+					unsigned int slot)
+{
+	struct cqhci_host *host = keyslot_manager_private(ksm);
+	int err = 0;
+	u8 data_unit_mask;
+	union cqhci_crypto_cfg_entry cfg;
+	int cap_idx;
+
+	cap_idx = cqhci_crypto_cap_find(host, key->crypto_mode,
+					key->data_unit_size);
+
+	if (!cqhci_is_crypto_enabled(host) ||
+	    !cqhci_keyslot_valid(host, slot) ||
+	    !cqhci_cap_idx_valid(host, cap_idx))
+		return -EINVAL;
+
+	data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+	if (!(data_unit_mask & host->crypto_cap_array[cap_idx].sdus_mask))
+		return -EINVAL;
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.data_unit_size = data_unit_mask;
+	cfg.crypto_cap_idx = cap_idx;
+	cfg.config_enable |= CQHCI_CRYPTO_CONFIGURATION_ENABLE;
+
+	err = cqhci_crypto_cfg_entry_write_key(&cfg, key->raw,
+					host->crypto_cap_array[cap_idx]);
+	if (err)
+		return err;
+
+	cqhci_program_key(host, &cfg, slot);
+
+	memzero_explicit(&cfg, sizeof(cfg));
+
+	return 0;
+}
+
+static int cqhci_crypto_keyslot_evict(struct keyslot_manager *ksm,
+				      const struct blk_crypto_key *key,
+				      unsigned int slot)
+{
+	struct cqhci_host *host = keyslot_manager_private(ksm);
+
+	if (!cqhci_is_crypto_enabled(host) ||
+	    !cqhci_keyslot_valid(host, slot))
+		return -EINVAL;
+
+	/*
+	 * Clear the crypto cfg on the device. Clearing CFGE
+	 * might not be sufficient, so just clear the entire cfg.
+	 */
+	cqhci_crypto_clear_keyslot(host, slot);
+
+	return 0;
+}
+
+/* Functions implementing eMMC v5.2 specification behaviour */
+void cqhci_crypto_enable_spec(struct cqhci_host *host)
+{
+	if (!cqhci_host_is_crypto_supported(host))
+		return;
+
+	host->caps |= CQHCI_CAP_CRYPTO_SUPPORT;
+}
+EXPORT_SYMBOL(cqhci_crypto_enable_spec);
+
+void cqhci_crypto_disable_spec(struct cqhci_host *host)
+{
+	host->caps &= ~CQHCI_CAP_CRYPTO_SUPPORT;
+}
+EXPORT_SYMBOL(cqhci_crypto_disable_spec);
+
+static const struct keyslot_mgmt_ll_ops cqhci_ksm_ops = {
+	.keyslot_program	= cqhci_crypto_keyslot_program,
+	.keyslot_evict		= cqhci_crypto_keyslot_evict,
+};
+
+enum blk_crypto_mode_num cqhci_crypto_blk_crypto_mode_num_for_alg_dusize(
+	enum cqhci_crypto_alg cqhci_crypto_alg,
+	enum cqhci_crypto_key_size key_size)
+{
+	/*
+	 * AES-256-XTS is currently the only mode that both eMMC and
+	 * blk-crypto support.
+	 */
+	if (cqhci_crypto_alg == CQHCI_CRYPTO_ALG_AES_XTS &&
+		key_size == CQHCI_CRYPTO_KEY_SIZE_256)
+		return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+	return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+/**
+ * cqhci_host_init_crypto - Read crypto capabilities, init crypto fields in host
+ * @host: Per adapter instance
+ *
+ * Returns 0 on success. Returns -ENODEV if such capabilities don't exist, and
+ * -ENOMEM upon OOM.
+ */
+int cqhci_host_init_crypto_spec(struct cqhci_host *host,
+				const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+	int cap_idx = 0;
+	int err = 0;
+	unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+	enum blk_crypto_mode_num blk_mode_num;
+
+	/* Default to disabling crypto */
+	host->caps &= ~CQHCI_CAP_CRYPTO_SUPPORT;
+
+	if (!(cqhci_readl(host, CQHCI_CAP) & CQHCI_CAP_CS)) {
+		pr_err("%s no crypto capability\n", __func__);
+		err = -ENODEV;
+		goto out;
+	}
+
+	/*
+	 * Crypto capabilities should never be 0, because config_array_ptr
+	 * is always > 04h. A 0 value therefore indicates that crypto init
+	 * failed and crypto can't be enabled.
+	 */
+	host->crypto_capabilities.reg_val = cqhci_readl(host, CQHCI_CCAP);
+	host->crypto_cfg_register =
+		(u32)host->crypto_capabilities.config_array_ptr * 0x100;
+	host->crypto_cap_array =
+		devm_kcalloc(mmc_dev(host->mmc),
+				host->crypto_capabilities.num_crypto_cap,
+				sizeof(host->crypto_cap_array[0]), GFP_KERNEL);
+	if (!host->crypto_cap_array) {
+		err = -ENOMEM;
+		pr_err("%s failed to allocate crypto cap array\n", __func__);
+		goto out;
+	}
+
+	memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+
+	/*
+	 * Store all the capabilities now so that we don't need to repeatedly
+	 * access the device each time we want to know its capabilities
+	 */
+	for (cap_idx = 0; cap_idx < host->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		host->crypto_cap_array[cap_idx].reg_val =
+			cpu_to_le32(cqhci_readl(host,
+						 CQHCI_CRYPTOCAP +
+						 cap_idx * sizeof(__le32)));
+		blk_mode_num = cqhci_crypto_blk_crypto_mode_num_for_alg_dusize(
+				host->crypto_cap_array[cap_idx].algorithm_id,
+				host->crypto_cap_array[cap_idx].key_size);
+		if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+			continue;
+		crypto_modes_supported[blk_mode_num] |=
+				host->crypto_cap_array[cap_idx].sdus_mask * 512;
+	}
+
+	cqhci_crypto_clear_all_keyslots(host);
+
+	host->ksm = keyslot_manager_create(cqhci_num_keyslots(host), ksm_ops,
+					crypto_modes_supported, host);
+
+	if (!host->ksm) {
+		err = -ENOMEM;
+		goto out_free_caps;
+	}
+	/*
+	 * If the host controller supports cryptographic operations, it
+	 * uses a 128-bit task descriptor; the upper 64 bits of the task
+	 * descriptor are used to pass crypto-specific information.
+	 */
+	host->caps |= CQHCI_TASK_DESC_SZ_128;
+
+	return 0;
+out_free_caps:
+	devm_kfree(mmc_dev(host->mmc), host->crypto_cap_array);
+out:
+	/* Indicate that init failed by setting crypto_capabilities to 0 */
+	host->crypto_capabilities.reg_val = 0;
+	return err;
+}
+EXPORT_SYMBOL(cqhci_host_init_crypto_spec);
+
+void cqhci_crypto_setup_rq_keyslot_manager_spec(struct cqhci_host *host,
+				struct request_queue *q)
+{
+	if (!cqhci_host_is_crypto_supported(host) || !q)
+		return;
+
+	q->ksm = host->ksm;
+}
+EXPORT_SYMBOL(cqhci_crypto_setup_rq_keyslot_manager_spec);
+
+void cqhci_crypto_destroy_rq_keyslot_manager_spec(struct cqhci_host *host,
+					      struct request_queue *q)
+{
+	keyslot_manager_destroy(host->ksm);
+}
+EXPORT_SYMBOL(cqhci_crypto_destroy_rq_keyslot_manager_spec);
+
+int cqhci_prepare_crypto_desc_spec(struct cqhci_host *host,
+				struct mmc_request *mrq,
+				u64 *ice_ctx)
+{
+	struct bio_crypt_ctx *bc;
+	struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+						  brq.mrq);
+	struct request *req = mmc_queue_req_to_req(mqrq);
+
+	if (!req->bio ||
+	    !bio_crypt_should_process(req)) {
+		*ice_ctx = 0;
+		return 0;
+	}
+	if (WARN_ON(!cqhci_is_crypto_enabled(host))) {
+		/*
+		 * Upper layer asked us to do inline encryption
+		 * but that isn't enabled, so we fail this request.
+		 */
+		return -EINVAL;
+	}
+
+	bc = req->bio->bi_crypt_context;
+
+	if (!cqhci_keyslot_valid(host, bc->bc_keyslot))
+		return -EINVAL;
+
+	if (ice_ctx) {
+		*ice_ctx = DATA_UNIT_NUM(bc->bc_dun[0]) |
+			   CRYPTO_CONFIG_INDEX(bc->bc_keyslot) |
+			   CRYPTO_ENABLE(true);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(cqhci_prepare_crypto_desc_spec);
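The 64-bit ice_ctx assembled above has the following shape (the exact bit positions live in the DATA_UNIT_NUM, CRYPTO_CONFIG_INDEX and CRYPTO_ENABLE macros, which are outside this hunk):

    /*
     * ice_ctx = CRYPTO_ENABLE(true)             enable bit
     *         | CRYPTO_CONFIG_INDEX(bc_keyslot) keyslot / config index
     *         | DATA_UNIT_NUM(bc_dun[0])        starting data unit number
     *
     * cqhci_prep_crypto_desc() then writes it into the upper 64 bits of
     * the 128-bit task descriptor enabled by CQHCI_TASK_DESC_SZ_128.
     */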
+
+/* Crypto Variant Ops Support */
+
+void cqhci_crypto_enable(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->enable)
+		return host->crypto_vops->enable(host);
+
+	return cqhci_crypto_enable_spec(host);
+}
+
+void cqhci_crypto_disable(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->disable)
+		return host->crypto_vops->disable(host);
+
+	return cqhci_crypto_disable_spec(host);
+}
+
+int cqhci_host_init_crypto(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->host_init_crypto)
+		return host->crypto_vops->host_init_crypto(host,
+							   &cqhci_ksm_ops);
+
+	return cqhci_host_init_crypto_spec(host, &cqhci_ksm_ops);
+}
+
+void cqhci_crypto_setup_rq_keyslot_manager(struct cqhci_host *host,
+					    struct request_queue *q)
+{
+	if (host->crypto_vops && host->crypto_vops->setup_rq_keyslot_manager)
+		return host->crypto_vops->setup_rq_keyslot_manager(host, q);
+
+	return cqhci_crypto_setup_rq_keyslot_manager_spec(host, q);
+}
+
+void cqhci_crypto_destroy_rq_keyslot_manager(struct cqhci_host *host,
+					      struct request_queue *q)
+{
+	if (host->crypto_vops && host->crypto_vops->destroy_rq_keyslot_manager)
+		return host->crypto_vops->destroy_rq_keyslot_manager(host, q);
+
+	return cqhci_crypto_destroy_rq_keyslot_manager_spec(host, q);
+}
+
+int cqhci_crypto_get_ctx(struct cqhci_host *host,
+			       struct mmc_request *mrq,
+			       u64 *ice_ctx)
+{
+	if (host->crypto_vops && host->crypto_vops->prepare_crypto_desc)
+		return host->crypto_vops->prepare_crypto_desc(host, mrq,
+								ice_ctx);
+
+	return cqhci_prepare_crypto_desc_spec(host, mrq, ice_ctx);
+}
+
+int cqhci_complete_crypto_desc(struct cqhci_host *host,
+				struct mmc_request *mrq,
+				u64 *ice_ctx)
+{
+	if (host->crypto_vops && host->crypto_vops->complete_crypto_desc)
+		return host->crypto_vops->complete_crypto_desc(host, mrq,
+								ice_ctx);
+
+	return 0;
+}
+
+void cqhci_crypto_debug(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->debug)
+		host->crypto_vops->debug(host);
+}
+
+void cqhci_crypto_set_vops(struct cqhci_host *host,
+			struct cqhci_host_crypto_variant_ops *crypto_vops)
+{
+	host->crypto_vops = crypto_vops;
+}
+
+int cqhci_crypto_suspend(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->suspend)
+		return host->crypto_vops->suspend(host);
+
+	return 0;
+}
+
+int cqhci_crypto_resume(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->resume)
+		return host->crypto_vops->resume(host);
+
+	return 0;
+}
+
+int cqhci_crypto_reset(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->reset)
+		return host->crypto_vops->reset(host);
+
+	return 0;
+}
+
+int cqhci_crypto_recovery_finish(struct cqhci_host *host)
+{
+	if (host->crypto_vops && host->crypto_vops->recovery_finish)
+		return host->crypto_vops->recovery_finish(host);
+
+	/* Reset/Recovery might clear all keys, so reprogram all the keys. */
+	keyslot_manager_reprogram_all_keys(host->ksm);
+
+	return 0;
+}
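The dispatch wrappers above implement a simple template-method pattern: a vendor supplies only the ops it wants to override, and every NULL op falls back to the *_spec default. A hypothetical hook-up (all names below are invented for illustration):

    static int my_crypto_resume(struct cqhci_host *host)
    {
    	return 0;	/* hypothetical vendor resume work */
    }

    static struct cqhci_host_crypto_variant_ops my_crypto_vops = {
    	.resume = my_crypto_resume,
    };

    static void my_host_setup(struct cqhci_host *host)
    {
    	cqhci_crypto_set_vops(host, &my_crypto_vops);
    }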
diff --git a/drivers/mmc/host/cqhci-crypto.h b/drivers/mmc/host/cqhci-crypto.h
new file mode 100644
index 0000000..fefad90
--- /dev/null
+++ b/drivers/mmc/host/cqhci-crypto.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ *
+ * Copyright (c) 2020 The Linux Foundation. All rights reserved.
+ *
+ */
+
+#ifndef _CQHCI_CRYPTO_H
+#define _CQHCI_CRYPTO_H
+
+#ifdef CONFIG_MMC_CQHCI_CRYPTO
+#include <linux/mmc/host.h>
+#include "cqhci.h"
+
+static inline int cqhci_num_keyslots(struct cqhci_host *host)
+{
+	return host->crypto_capabilities.config_count + 1;
+}
+
+static inline bool cqhci_keyslot_valid(struct cqhci_host *host,
+				       unsigned int slot)
+{
+	/*
+	 * The actual number of configurations supported is (CFGC+1), so slot
+	 * numbers range from 0 to config_count inclusive.
+	 */
+	return slot < cqhci_num_keyslots(host);
+}
+
+static inline bool cqhci_host_is_crypto_supported(struct cqhci_host *host)
+{
+	return host->crypto_capabilities.reg_val != 0;
+}
+
+static inline bool cqhci_is_crypto_enabled(struct cqhci_host *host)
+{
+	return host->caps & CQHCI_CAP_CRYPTO_SUPPORT;
+}
+
+/* Functions implementing eMMC v5.2 specification behaviour */
+int cqhci_prepare_crypto_desc_spec(struct cqhci_host *host,
+				    struct mmc_request *mrq,
+				    u64 *ice_ctx);
+
+void cqhci_crypto_enable_spec(struct cqhci_host *host);
+
+void cqhci_crypto_disable_spec(struct cqhci_host *host);
+
+int cqhci_host_init_crypto_spec(struct cqhci_host *host,
+				const struct keyslot_mgmt_ll_ops *ksm_ops);
+
+void cqhci_crypto_setup_rq_keyslot_manager_spec(struct cqhci_host *host,
+						 struct request_queue *q);
+
+void cqhci_crypto_destroy_rq_keyslot_manager_spec(struct cqhci_host *host,
+						   struct request_queue *q);
+
+void cqhci_crypto_set_vops(struct cqhci_host *host,
+			    struct cqhci_host_crypto_variant_ops *crypto_vops);
+
+/* Crypto Variant Ops Support */
+
+void cqhci_crypto_enable(struct cqhci_host *host);
+
+void cqhci_crypto_disable(struct cqhci_host *host);
+
+int cqhci_host_init_crypto(struct cqhci_host *host);
+
+void cqhci_crypto_setup_rq_keyslot_manager(struct cqhci_host *host,
+					    struct request_queue *q);
+
+void cqhci_crypto_destroy_rq_keyslot_manager(struct cqhci_host *host,
+					      struct request_queue *q);
+
+int cqhci_crypto_get_ctx(struct cqhci_host *host,
+			       struct mmc_request *mrq,
+			       u64 *ice_ctx);
+
+int cqhci_complete_crypto_desc(struct cqhci_host *host,
+				struct mmc_request *mrq,
+				u64 *ice_ctx);
+
+void cqhci_crypto_debug(struct cqhci_host *host);
+
+int cqhci_crypto_suspend(struct cqhci_host *host);
+
+int cqhci_crypto_resume(struct cqhci_host *host);
+
+int cqhci_crypto_reset(struct cqhci_host *host);
+
+int cqhci_crypto_recovery_finish(struct cqhci_host *host);
+
+int cqhci_crypto_cap_find(void *host_p, enum blk_crypto_mode_num crypto_mode,
+			  unsigned int data_unit_size);
+
+#else /* !CONFIG_MMC_CQHCI_CRYPTO */
+
+static inline bool cqhci_keyslot_valid(struct cqhci_host *host,
+					unsigned int slot)
+{
+	return false;
+}
+
+static inline bool cqhci_host_is_crypto_supported(struct cqhci_host *host)
+{
+	return false;
+}
+
+static inline bool cqhci_is_crypto_enabled(struct cqhci_host *host)
+{
+	return false;
+}
+
+static inline void cqhci_crypto_enable(struct cqhci_host *host) { }
+
+static inline int cqhci_crypto_cap_find(void *host_p,
+					enum blk_crypto_mode_num crypto_mode,
+					unsigned int data_unit_size)
+{
+	return 0;
+}
+
+static inline void cqhci_crypto_disable(struct cqhci_host *host) { }
+
+static inline int cqhci_host_init_crypto(struct cqhci_host *host)
+{
+	return 0;
+}
+
+static inline void cqhci_crypto_setup_rq_keyslot_manager(
+					struct cqhci_host *host,
+					struct request_queue *q) { }
+
+static inline void
+cqhci_crypto_destroy_rq_keyslot_manager(struct cqhci_host *host,
+					 struct request_queue *q) { }
+
+static inline int cqhci_crypto_get_ctx(struct cqhci_host *host,
+				       struct mmc_request *mrq,
+				       u64 *ice_ctx)
+{
+	*ice_ctx = 0;
+	return 0;
+}
+
+static inline int cqhci_complete_crypto_desc(struct cqhci_host *host,
+					     struct mmc_request *mrq,
+					     u64 *ice_ctx)
+{
+	return 0;
+}
+
+static inline void cqhci_crypto_debug(struct cqhci_host *host) { }
+
+static inline void cqhci_crypto_set_vops(struct cqhci_host *host,
+			struct cqhci_host_crypto_variant_ops *crypto_vops) { }
+
+static inline int cqhci_crypto_suspend(struct cqhci_host *host)
+{
+	return 0;
+}
+
+static inline int cqhci_crypto_resume(struct cqhci_host *host)
+{
+	return 0;
+}
+
+static inline int cqhci_crypto_reset(struct cqhci_host *host)
+{
+	return 0;
+}
+
+static inline int cqhci_crypto_recovery_finish(struct cqhci_host *host)
+{
+	return 0;
+}
+
+#endif /* CONFIG_MMC_CQHCI_CRYPTO */
+#endif /* _CQHCI_CRYPTO_H */
diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
index b759421..55bfcfd 100644
--- a/drivers/mmc/host/cqhci.c
+++ b/drivers/mmc/host/cqhci.c
@@ -17,7 +17,10 @@
 #include <linux/mmc/host.h>
 #include <linux/mmc/card.h>
 
+#include "../core/queue.h"
 #include "cqhci.h"
+#include "cqhci-crypto.h"
+
 #include "sdhci-msm.h"
 
 #define DCMD_SLOT 31
@@ -154,6 +157,8 @@
 	CQHCI_DUMP("Vendor cfg 0x%08x\n",
 		   cqhci_readl(cq_host, CQHCI_VENDOR_CFG + offset));
 
+	cqhci_crypto_debug(cq_host);
+
 	if (cq_host->ops->dumpregs)
 		cq_host->ops->dumpregs(mmc);
 	else
@@ -257,7 +262,6 @@
 {
 	struct mmc_host *mmc = cq_host->mmc;
 	u32 cqcfg;
-	u32 cqcap = 0;
 
 	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
 
@@ -275,16 +279,10 @@
 	if (cq_host->caps & CQHCI_TASK_DESC_SZ_128)
 		cqcfg |= CQHCI_TASK_DESC_SZ;
 
-	cqcap = cqhci_readl(cq_host, CQHCI_CAP);
-	if (cqcap & CQHCI_CAP_CS) {
-		/*
-		 * In case host controller supports cryptographic operations
-		 * then, enable crypro support.
-		 */
-		cq_host->caps |= CQHCI_CAP_CRYPTO_SUPPORT;
+	if (cqhci_host_is_crypto_supported(cq_host)) {
+		cqhci_crypto_enable(cq_host);
 		cqcfg |= CQHCI_ICE_ENABLE;
-		/*
-		 * For SDHC v5.0 onwards, ICE 3.0 specific registers are added
+		/*
+		 * For SDHC v5.0 onwards, ICE 3.0 specific registers are added
 		 * in CQ register space, due to which few CQ registers are
 		 * shifted. Set offset_changed boolean to use updated address.
 		 */
@@ -326,6 +324,9 @@
 {
 	u32 cqcfg;
 
+	if (cqhci_host_is_crypto_supported(cq_host))
+		cqhci_crypto_disable(cq_host);
+
 	cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
 	cqcfg &= ~CQHCI_ENABLE;
 	cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
@@ -333,6 +334,7 @@
 	cq_host->mmc->cqe_on = false;
 
 	cq_host->activated = false;
+
 	mmc_log_string(cq_host->mmc, "CQ disabled\n");
 }
 
@@ -340,6 +342,8 @@
 {
 	struct cqhci_host *cq_host = mmc->cqe_private;
 
+	cqhci_crypto_suspend(cq_host);
+
 	if (cq_host->enabled)
 		__cqhci_disable(cq_host);
 
@@ -584,16 +588,23 @@
 {
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = pltfm_host->priv;
+	struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+						  brq.mrq);
+	struct request *req = mmc_queue_req_to_req(mqrq);
 
 	sdhci_msm_pm_qos_cpu_vote(host,
-		msm_host->pdata->pm_qos_data.cmdq_latency, mrq->req->cpu);
+		msm_host->pdata->pm_qos_data.cmdq_latency, req->cpu);
 }
 
 static void cqhci_pm_qos_unvote(struct sdhci_host *host,
 						struct mmc_request *mrq)
 {
+	struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
+						  brq.mrq);
+	struct request *req = mmc_queue_req_to_req(mqrq);
+
 	/* use async as we're inside an atomic context (soft-irq) */
-	sdhci_msm_pm_qos_cpu_unvote(host, mrq->req->cpu, true);
+	sdhci_msm_pm_qos_cpu_unvote(host, req->cpu, true);
 }
 
 static void cqhci_post_req(struct mmc_host *host, struct mmc_request *mrq)
@@ -618,7 +629,7 @@
 }
 
 static inline
-void cqe_prep_crypto_desc(struct cqhci_host *cq_host, u64 *task_desc,
+void cqhci_prep_crypto_desc(struct cqhci_host *cq_host, u64 *task_desc,
 			u64 ice_ctx)
 {
 	u64 *ice_desc = NULL;
@@ -629,8 +640,8 @@
 		 * ice context is present in the upper 64bits of task descriptor
 		 * ice_conext_base_address = task_desc + 8-bytes
 		 */
-		ice_desc = (__le64 __force *)((u8 *)task_desc +
-					CQHCI_TASK_DESC_TASK_PARAMS_SIZE);
+		ice_desc = (u64 *)((u8 *)task_desc +
+					CQHCI_TASK_DESC_ICE_PARAM_OFFSET);
 		memset(ice_desc, 0, CQHCI_TASK_DESC_ICE_PARAMS_SIZE);
 
 		/*
@@ -675,25 +686,23 @@
 	}
 
 	if (mrq->data) {
-		if (cq_host->ops->crypto_cfg) {
-			err = cq_host->ops->crypto_cfg(mmc, mrq, tag, &ice_ctx);
-			if (err) {
-				mmc->err_stats[MMC_ERR_ICE_CFG]++;
-				pr_err("%s: failed to configure crypto: err %d tag %d\n",
-						mmc_hostname(mmc), err, tag);
-				goto out;
-			}
+		err = cqhci_crypto_get_ctx(cq_host, mrq, &ice_ctx);
+		if (err) {
+			mmc->err_stats[MMC_ERR_ICE_CFG]++;
+			pr_err("%s: failed to retrieve crypto ctx for tag %d\n",
+				mmc_hostname(mmc), tag);
+			goto out;
 		}
 		task_desc = (__le64 __force *)get_desc(cq_host, tag);
 		cqhci_prep_task_desc(mrq, &data, 1);
 		*task_desc = cpu_to_le64(data);
-		cqe_prep_crypto_desc(cq_host, task_desc, ice_ctx);
+		cqhci_prep_crypto_desc(cq_host, task_desc, ice_ctx);
 
 		err = cqhci_prep_tran_desc(mrq, cq_host, tag);
 		if (err) {
 			pr_err("%s: cqhci: failed to setup tx desc: %d\n",
 			       mmc_hostname(mmc), err);
-			goto end_crypto;
+			goto out;
 		}
 		/* PM QoS */
 		sdhci_msm_pm_qos_irq_vote(host);
@@ -735,23 +744,26 @@
 	if (err)
 		cqhci_post_req(mmc, mrq);
 
-	goto out;
-
-end_crypto:
-	if (cq_host->ops->crypto_cfg_end && mrq->data) {
-		err = cq_host->ops->crypto_cfg_end(mmc, mrq);
-		if (err)
-			pr_err("%s: failed to end ice config: err %d tag %d\n",
-					mmc_hostname(mmc), err, tag);
-	}
-	if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
-			cq_host->ops->crypto_cfg_reset && mrq->data)
-		cq_host->ops->crypto_cfg_reset(mmc, tag);
-
+	if (mrq->data)
+		cqhci_complete_crypto_desc(cq_host, mrq, NULL);
 out:
 	return err;
 }
 
+static void cqhci_crypto_update_queue(struct mmc_host *mmc,
+					struct request_queue *queue)
+{
+	struct cqhci_host *cq_host = mmc->cqe_private;
+
+	if (cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) {
+		if (queue)
+			cqhci_crypto_setup_rq_keyslot_manager(cq_host, queue);
+		else
+			pr_err("%s can not register keyslot manager\n",
+				mmc_hostname(mmc));
+	}
+}
+
 static void cqhci_recovery_needed(struct mmc_host *mmc, struct mmc_request *mrq,
 				  bool notify)
 {
@@ -851,7 +863,7 @@
 	struct cqhci_slot *slot = &cq_host->slot[tag];
 	struct mmc_request *mrq = slot->mrq;
 	struct mmc_data *data;
-	int err = 0, offset = 0;
+	int offset = 0;
 
 	if (cq_host->offset_changed)
 		offset = CQE_V5_VENDOR_CFG;
@@ -873,13 +885,8 @@
 
 	data = mrq->data;
 	if (data) {
-		if (cq_host->ops->crypto_cfg_end) {
-			err = cq_host->ops->crypto_cfg_end(mmc, mrq);
-			if (err) {
-				pr_err("%s: failed to end ice config: err %d tag %d\n",
-						mmc_hostname(mmc), err, tag);
-			}
-		}
+		cqhci_complete_crypto_desc(cq_host, mrq, NULL);
+
 		if (data->error)
 			data->bytes_xfered = 0;
 		else
@@ -891,9 +898,6 @@
 				CQHCI_VENDOR_CFG + offset);
 	}
 
-	if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
-			cq_host->ops->crypto_cfg_reset)
-		cq_host->ops->crypto_cfg_reset(mmc, tag);
 	mmc_cqe_request_done(mmc, mrq);
 }
 
@@ -1090,6 +1094,8 @@
 
 	pr_debug("%s: cqhci: %s\n", mmc_hostname(mmc), __func__);
 
+	cqhci_crypto_reset(cq_host);
+
 	WARN_ON(!cq_host->recovery_halt);
 
 	cqhci_halt(mmc, CQHCI_START_HALT_TIMEOUT);
@@ -1210,6 +1216,8 @@
 
 	cqhci_set_irqs(cq_host, CQHCI_IS_MASK);
 
+	cqhci_crypto_recovery_finish(cq_host);
+
 	pr_debug("%s: cqhci: recovery done\n", mmc_hostname(mmc));
 	mmc_log_string(mmc, "recovery done\n");
 }
@@ -1224,6 +1232,7 @@
 	.cqe_timeout = cqhci_timeout,
 	.cqe_recovery_start = cqhci_recovery_start,
 	.cqe_recovery_finish = cqhci_recovery_finish,
+	.cqe_crypto_update_queue = cqhci_crypto_update_queue,
 };
 
 struct cqhci_host *cqhci_pltfm_init(struct platform_device *pdev)
@@ -1287,14 +1296,6 @@
 		mmc->cqe_qdepth -= 1;
 
 	cqcap = cqhci_readl(cq_host, CQHCI_CAP);
-	if (cqcap & CQHCI_CAP_CS) {
-		/*
-		 * In case host controller supports cryptographic operations
-		 * then, it uses 128bit task descriptor. Upper 64 bits of task
-		 * descriptor would be used to pass crypto specific information.
-		 */
-		cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
-	}
 
 	cq_host->slot = devm_kcalloc(mmc_dev(mmc), cq_host->num_slots,
 				     sizeof(*cq_host->slot), GFP_KERNEL);
@@ -1305,6 +1306,13 @@
 
 	spin_lock_init(&cq_host->lock);
 
+	err = cqhci_host_init_crypto(cq_host);
+	if (err) {
+		pr_err("%s: CQHCI version %u.%02u Crypto init failed err %d\n",
+		       mmc_hostname(mmc), cqhci_ver_major(cq_host),
+		       cqhci_ver_minor(cq_host), err);
+	}
+
 	init_completion(&cq_host->halt_comp);
 	init_waitqueue_head(&cq_host->wait_queue);
 
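The new cqe_crypto_update_queue op is how the keyslot manager set up by
cqhci_host_init_crypto() reaches the block layer. A minimal sketch of the
calling side, assuming the corresponding hook exists in struct mmc_cqe_ops
(function and call-site names here are illustrative, not part of this patch):

	/*
	 * Hypothetical caller: once the card's request queue exists, pass
	 * it down so cqhci_crypto_update_queue() can register the keyslot
	 * manager on it (or log an error when no queue is available).
	 */
	static void mmc_blk_crypto_update_queue(struct mmc_host *host,
						struct request_queue *q)
	{
		if (host->cqe_ops && host->cqe_ops->cqe_crypto_update_queue)
			host->cqe_ops->cqe_crypto_update_queue(host, q);
	}
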
diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h
index 024c81b..8c54e97 100644
--- a/drivers/mmc/host/cqhci.h
+++ b/drivers/mmc/host/cqhci.h
@@ -21,6 +21,7 @@
 #include <linux/wait.h>
 #include <linux/irqreturn.h>
 #include <asm/io.h>
+#include <linux/keyslot-manager.h>
 
 /* registers */
 /* version */
@@ -32,6 +33,9 @@
 /* capabilities */
 #define CQHCI_CAP			0x04
 #define CQHCI_CAP_CS			(1 << 28)
+#define CQHCI_CCAP			0x100
+#define CQHCI_CRYPTOCAP			0x104
+
 /* configuration */
 #define CQHCI_CFG			0x08
 #define CQHCI_DCMD			0x00001000
@@ -164,17 +168,107 @@
 #define CQHCI_DAT_LENGTH(x)		(((x) & 0xFFFF) << 16)
 #define CQHCI_DAT_ADDR_LO(x)		(((x) & 0xFFFFFFFF) << 32)
 #define CQHCI_DAT_ADDR_HI(x)		(((x) & 0xFFFFFFFF) << 0)
+#define DATA_UNIT_NUM(x)		(((u64)(x) & 0xFFFFFFFF) << 0)
+#define CRYPTO_CONFIG_INDEX(x)		(((u64)(x) & 0xFF) << 32)
+#define CRYPTO_ENABLE(x)		(((u64)(x) & 0x1) << 47)
 
-#define CQHCI_TASK_DESC_TASK_PARAMS_SIZE	8
-#define CQHCI_TASK_DESC_ICE_PARAMS_SIZE	8
+/* ICE context is present in the upper 64 bits of task descriptor */
+#define CQHCI_TASK_DESC_ICE_PARAM_OFFSET	8
+/* ICE descriptor size */
+#define CQHCI_TASK_DESC_ICE_PARAMS_SIZE		8
 
 struct cqhci_host_ops;
 struct mmc_host;
 struct cqhci_slot;
+struct cqhci_host;
+
+/* CCAP - Crypto Capability 100h */
+union cqhci_crypto_capabilities {
+	__le32 reg_val;
+	struct {
+		u8 num_crypto_cap;
+		u8 config_count;
+		u8 reserved;
+		u8 config_array_ptr;
+	};
+};
+
+enum cqhci_crypto_key_size {
+	CQHCI_CRYPTO_KEY_SIZE_INVALID	= 0x0,
+	CQHCI_CRYPTO_KEY_SIZE_128	= 0x1,
+	CQHCI_CRYPTO_KEY_SIZE_192	= 0x2,
+	CQHCI_CRYPTO_KEY_SIZE_256	= 0x3,
+	CQHCI_CRYPTO_KEY_SIZE_512	= 0x4,
+};
+
+enum cqhci_crypto_alg {
+	CQHCI_CRYPTO_ALG_AES_XTS		= 0x0,
+	CQHCI_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
+	CQHCI_CRYPTO_ALG_AES_ECB		= 0x2,
+	CQHCI_CRYPTO_ALG_ESSIV_AES_CBC		= 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union cqhci_crypto_cap_entry {
+	__le32 reg_val;
+	struct {
+		u8 algorithm_id;
+		u8 sdus_mask; /* Supported data unit size mask */
+		u8 key_size;
+		u8 reserved;
+	};
+};
+
+#define CQHCI_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define CQHCI_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union cqhci_crypto_cfg_entry {
+	__le32 reg_val[32];
+	struct {
+		u8 crypto_key[CQHCI_CRYPTO_KEY_MAX_SIZE];
+		u8 data_unit_size;
+		u8 crypto_cap_idx;
+		u8 reserved_1;
+		u8 config_enable;
+		u8 reserved_multi_host;
+		u8 reserved_2;
+		u8 vsb[2];
+		u8 reserved_3[56];
+	};
+};
+
+struct cqhci_host_crypto_variant_ops {
+	void (*setup_rq_keyslot_manager)(struct cqhci_host *host,
+					 struct request_queue *q);
+	void (*destroy_rq_keyslot_manager)(struct cqhci_host *host,
+					   struct request_queue *q);
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	int (*host_init_crypto)(struct cqhci_host *host,
+				const struct keyslot_mgmt_ll_ops *ksm_ops);
+#endif
+	void (*enable)(struct cqhci_host *host);
+	void (*disable)(struct cqhci_host *host);
+	int (*suspend)(struct cqhci_host *host);
+	int (*resume)(struct cqhci_host *host);
+	int (*debug)(struct cqhci_host *host);
+	int (*prepare_crypto_desc)(struct cqhci_host *host,
+				   struct mmc_request *mrq,
+				   u64 *ice_ctx);
+	int (*complete_crypto_desc)(struct cqhci_host *host,
+				    struct mmc_request *mrq,
+				    u64 *ice_ctx);
+	int (*reset)(struct cqhci_host *host);
+	int (*recovery_finish)(struct cqhci_host *host);
+	int (*program_key)(struct cqhci_host *host,
+			   const union cqhci_crypto_cfg_entry *cfg,
+			   int slot);
+	void *priv;
+};
 
 struct cqhci_host {
 	const struct cqhci_host_ops *ops;
 	void __iomem *mmio;
+	void __iomem *icemmio;
 	struct mmc_host *mmc;
 
 	spinlock_t lock;
@@ -227,6 +321,16 @@
 	struct completion halt_comp;
 	wait_queue_head_t wait_queue;
 	struct cqhci_slot *slot;
+	const struct cqhci_host_crypto_variant_ops *crypto_vops;
+
+#ifdef CONFIG_MMC_CQHCI_CRYPTO
+	union cqhci_crypto_capabilities crypto_capabilities;
+	union cqhci_crypto_cap_entry *crypto_cap_array;
+	u32 crypto_cfg_register;
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct keyslot_manager *ksm;
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+#endif /* CONFIG_MMC_CQHCI_CRYPTO */
 };
 
 struct cqhci_host_ops {
@@ -235,10 +339,6 @@
 	u32 (*read_l)(struct cqhci_host *host, int reg);
 	void (*enable)(struct mmc_host *mmc);
 	void (*disable)(struct mmc_host *mmc, bool recovery);
-	int (*crypto_cfg)(struct mmc_host *mmc, struct mmc_request *mrq,
-				u32 slot, u64 *ice_ctx);
-	int (*crypto_cfg_end)(struct mmc_host *mmc, struct mmc_request *mrq);
-	void (*crypto_cfg_reset)(struct mmc_host *mmc, unsigned int slot);
 };
 
 static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
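
DATA_UNIT_NUM(), CRYPTO_CONFIG_INDEX() and CRYPTO_ENABLE() together form the
ICE context that cqhci_prep_crypto_desc() stores in the upper 64 bits of a
128-bit task descriptor. A minimal sketch of the packing (helper name
illustrative):

	/*
	 * Bits 0-31: data unit number (DUN), bits 32-39: crypto config
	 * index (the keyslot), bit 47: crypto enable.
	 */
	static u64 build_ice_ctx(u64 dun, u8 cci, bool enable)
	{
		return DATA_UNIT_NUM(dun) |
		       CRYPTO_CONFIG_INDEX(cci) |
		       CRYPTO_ENABLE(enable);
	}
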
diff --git a/drivers/mmc/host/sdhci-msm-ice.c b/drivers/mmc/host/sdhci-msm-ice.c
deleted file mode 100644
index 3bbb5b3..0000000
--- a/drivers/mmc/host/sdhci-msm-ice.c
+++ /dev/null
@@ -1,581 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015, 2017-2019, The Linux Foundation. All rights reserved.
- */
-
-#include "sdhci-msm-ice.h"
-
-static void sdhci_msm_ice_error_cb(void *host_ctrl, u32 error)
-{
-	struct sdhci_msm_host *msm_host = (struct sdhci_msm_host *)host_ctrl;
-
-	dev_err(&msm_host->pdev->dev, "%s: Error in ice operation 0x%x\n",
-		__func__, error);
-
-	if (msm_host->ice.state == SDHCI_MSM_ICE_STATE_ACTIVE)
-		msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
-}
-
-static struct platform_device *sdhci_msm_ice_get_pdevice(struct device *dev)
-{
-	struct device_node *node;
-	struct platform_device *ice_pdev = NULL;
-
-	node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
-	if (!node) {
-		dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
-			__func__);
-		goto out;
-	}
-	ice_pdev = qcom_ice_get_pdevice(node);
-out:
-	return ice_pdev;
-}
-
-static
-struct qcom_ice_variant_ops *sdhci_msm_ice_get_vops(struct device *dev)
-{
-	struct qcom_ice_variant_ops *ice_vops = NULL;
-	struct device_node *node;
-
-	node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
-	if (!node) {
-		dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
-			__func__);
-		goto out;
-	}
-	ice_vops = qcom_ice_get_variant_ops(node);
-	of_node_put(node);
-out:
-	return ice_vops;
-}
-
-static
-void sdhci_msm_enable_ice_hci(struct sdhci_host *host, bool enable)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	u32 config = 0;
-	u32 ice_cap = 0;
-
-	/*
-	 * Enable the cryptographic support inside SDHC.
-	 * This is a global config which needs to be enabled
-	 * all the time.
-	 * Only when it is enabled, the ICE_HCI capability
-	 * will get reflected in CQCAP register.
-	 */
-	config = readl_relaxed(host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
-
-	if (enable)
-		config &= ~DISABLE_CRYPTO;
-	else
-		config |= DISABLE_CRYPTO;
-	writel_relaxed(config, host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
-
-	/*
-	 * CQCAP register is in different register space from above
-	 * ice global enable register. So a mb() is required to ensure
-	 * above write gets completed before reading the CQCAP register.
-	 */
-	mb();
-
-	/*
-	 * Check if ICE HCI capability support is present
-	 * If present, enable it.
-	 */
-	ice_cap = readl_relaxed(msm_host->cryptoio + ICE_CQ_CAPABILITIES);
-	if (ice_cap & ICE_HCI_SUPPORT) {
-		config = readl_relaxed(msm_host->cryptoio + ICE_CQ_CONFIG);
-
-		if (enable)
-			config |= CRYPTO_GENERAL_ENABLE;
-		else
-			config &= ~CRYPTO_GENERAL_ENABLE;
-		writel_relaxed(config, msm_host->cryptoio + ICE_CQ_CONFIG);
-	}
-}
-
-int sdhci_msm_ice_get_dev(struct sdhci_host *host)
-{
-	struct device *sdhc_dev;
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
-	if (!msm_host || !msm_host->pdev) {
-		pr_err("%s: invalid msm_host %p or msm_host->pdev\n",
-			__func__, msm_host);
-		return -EINVAL;
-	}
-
-	sdhc_dev = &msm_host->pdev->dev;
-	msm_host->ice.vops  = sdhci_msm_ice_get_vops(sdhc_dev);
-	msm_host->ice.pdev = sdhci_msm_ice_get_pdevice(sdhc_dev);
-
-	if (msm_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
-		dev_err(sdhc_dev, "%s: ICE device not probed yet\n",
-			__func__);
-		msm_host->ice.pdev = NULL;
-		msm_host->ice.vops = NULL;
-		return -EPROBE_DEFER;
-	}
-
-	if (!msm_host->ice.pdev) {
-		dev_dbg(sdhc_dev, "%s: invalid platform device\n", __func__);
-		msm_host->ice.vops = NULL;
-		return -ENODEV;
-	}
-	if (!msm_host->ice.vops) {
-		dev_dbg(sdhc_dev, "%s: invalid ice vops\n", __func__);
-		msm_host->ice.pdev = NULL;
-		return -ENODEV;
-	}
-	msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
-	return 0;
-}
-
-static
-int sdhci_msm_ice_pltfm_init(struct sdhci_msm_host *msm_host)
-{
-	struct resource *ice_memres = NULL;
-	struct platform_device *pdev = msm_host->pdev;
-	int err = 0;
-
-	if (!msm_host->ice_hci_support)
-		goto out;
-	/*
-	 * ICE HCI registers are present in cmdq register space.
-	 * So map the cmdq mem for accessing ICE HCI registers.
-	 */
-	ice_memres = platform_get_resource_byname(pdev,
-						IORESOURCE_MEM, "cqhci_mem");
-	if (!ice_memres) {
-		dev_err(&pdev->dev, "Failed to get iomem resource for ice\n");
-		err = -EINVAL;
-		goto out;
-	}
-	msm_host->cryptoio = devm_ioremap(&pdev->dev,
-					ice_memres->start,
-					resource_size(ice_memres));
-	if (!msm_host->cryptoio) {
-		dev_err(&pdev->dev, "Failed to remap registers\n");
-		err = -ENOMEM;
-	}
-out:
-	return err;
-}
-
-int sdhci_msm_ice_init(struct sdhci_host *host)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int err = 0;
-
-	if (msm_host->ice.vops->init) {
-		err = sdhci_msm_ice_pltfm_init(msm_host);
-		if (err)
-			goto out;
-
-		if (msm_host->ice_hci_support)
-			sdhci_msm_enable_ice_hci(host, true);
-
-		err = msm_host->ice.vops->init(msm_host->ice.pdev,
-					msm_host,
-					sdhci_msm_ice_error_cb);
-		if (err) {
-			pr_err("%s: ice init err %d\n",
-				mmc_hostname(host->mmc), err);
-			sdhci_msm_ice_print_regs(host);
-			if (msm_host->ice_hci_support)
-				sdhci_msm_enable_ice_hci(host, false);
-			goto out;
-		}
-		msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
-	}
-
-out:
-	return err;
-}
-
-void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
-{
-	writel_relaxed(SDHCI_MSM_ICE_ENABLE_BYPASS,
-		host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
-}
-
-static
-int sdhci_msm_ice_get_cfg(struct sdhci_msm_host *msm_host, struct request *req,
-			unsigned int *bypass, short *key_index)
-{
-	int err = 0;
-	struct ice_data_setting ice_set;
-
-	memset(&ice_set, 0, sizeof(struct ice_data_setting));
-	if (msm_host->ice.vops->config_start) {
-		err = msm_host->ice.vops->config_start(
-						msm_host->ice.pdev,
-						req, &ice_set, false);
-		if (err) {
-			pr_err("%s: ice config failed %d\n",
-					mmc_hostname(msm_host->mmc), err);
-			return err;
-		}
-	}
-	/* if writing data command */
-	if (rq_data_dir(req) == WRITE)
-		*bypass = ice_set.encr_bypass ?
-				SDHCI_MSM_ICE_ENABLE_BYPASS :
-				SDHCI_MSM_ICE_DISABLE_BYPASS;
-	/* if reading data command */
-	else if (rq_data_dir(req) == READ)
-		*bypass = ice_set.decr_bypass ?
-				SDHCI_MSM_ICE_ENABLE_BYPASS :
-				SDHCI_MSM_ICE_DISABLE_BYPASS;
-	*key_index = ice_set.crypto_data.key_index;
-	return err;
-}
-
-static
-void sdhci_msm_ice_update_cfg(struct sdhci_host *host, u64 lba, u32 slot,
-		unsigned int bypass, short key_index, u32 cdu_sz)
-{
-	unsigned int ctrl_info_val = 0;
-
-	/* Configure ICE index */
-	ctrl_info_val =
-		(key_index &
-		 MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX)
-		 << OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX;
-
-	/* Configure data unit size of transfer request */
-	ctrl_info_val |=
-		(cdu_sz &
-		 MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU)
-		 << OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU;
-
-	/* Configure ICE bypass mode */
-	ctrl_info_val |=
-		(bypass & MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS)
-		 << OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS;
-
-	writel_relaxed((lba & 0xFFFFFFFF),
-		host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n + 16 * slot);
-	writel_relaxed(((lba >> 32) & 0xFFFFFFFF),
-		host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n + 16 * slot);
-	writel_relaxed(ctrl_info_val,
-		host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
-	/* Ensure ICE registers are configured before issuing SDHCI request */
-	mb();
-}
-
-static inline
-void sdhci_msm_ice_hci_update_cqe_cfg(u64 dun, unsigned int bypass,
-				short key_index, u64 *ice_ctx)
-{
-	/*
-	 * The naming convention got changed between ICE2.0 and ICE3.0
-	 * register fields. Below are the equivalent names for
-	 * ICE3.0 Vs ICE2.0:
-	 *   Data Unit Number(DUN) == Logical Base address(LBA)
-	 *   Crypto Configuration index (CCI) == Key Index
-	 *   Crypto Enable (CE) == !BYPASS
-	 */
-	if (ice_ctx)
-		*ice_ctx = DATA_UNIT_NUM(dun) |
-			CRYPTO_CONFIG_INDEX(key_index) |
-			CRYPTO_ENABLE(!bypass);
-}
-
-static
-void sdhci_msm_ice_hci_update_noncq_cfg(struct sdhci_host *host,
-		u64 dun, unsigned int bypass, short key_index)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	unsigned int crypto_params = 0;
-	/*
-	 * The naming convention got changed between ICE2.0 and ICE3.0
-	 * register fields. Below are the equivalent names for
-	 * ICE3.0 Vs ICE2.0:
-	 *   Data Unit Number(DUN) == Logical Base address(LBA)
-	 *   Crypto Configuration index (CCI) == Key Index
-	 *   Crypto Enable (CE) == !BYPASS
-	 */
-	/* Configure ICE bypass mode */
-	crypto_params |=
-		((!bypass) & MASK_SDHCI_MSM_ICE_HCI_PARAM_CE)
-			<< OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE;
-	/* Configure Crypto Configure Index (CCI) */
-	crypto_params |= (key_index &
-			 MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI)
-			 << OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI;
-
-	writel_relaxed((crypto_params & 0xFFFFFFFF),
-		msm_host->cryptoio + ICE_NONCQ_CRYPTO_PARAMS);
-
-	/* Update DUN */
-	writel_relaxed((dun & 0xFFFFFFFF),
-		msm_host->cryptoio + ICE_NONCQ_CRYPTO_DUN);
-	/* Ensure ICE registers are configured before issuing SDHCI request */
-	mb();
-}
-
-int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
-			u32 slot)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int err = 0;
-	short key_index = 0;
-	u64 dun = 0;
-	unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
-	u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
-	struct request *req;
-
-	if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
-		pr_err("%s: ice is in invalid state %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-
-	WARN_ON(!mrq);
-	if (!mrq)
-		return -EINVAL;
-	req = mrq->req;
-	if (req && req->bio) {
-#ifdef CONFIG_PFK
-		if (bio_dun(req->bio)) {
-			dun = bio_dun(req->bio);
-			cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
-		} else {
-			dun = req->__sector;
-		}
-#else
-		dun = req->__sector;
-#endif
-		err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
-		if (err)
-			return err;
-		pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
-				mmc_hostname(host->mmc),
-				(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
-				slot, bypass, key_index);
-	}
-
-	if (msm_host->ice_hci_support) {
-		/* For ICE HCI / ICE3.0 */
-		sdhci_msm_ice_hci_update_noncq_cfg(host, dun, bypass,
-						key_index);
-	} else {
-		/* For ICE versions earlier than ICE3.0 */
-		sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
-					cdu_sz);
-	}
-	return 0;
-}
-
-int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
-			struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int err = 0;
-	short key_index = 0;
-	u64 dun = 0;
-	unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
-	struct request *req;
-	u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
-
-	if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
-		pr_err("%s: ice is in invalid state %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-
-	WARN_ON(!mrq);
-	if (!mrq)
-		return -EINVAL;
-	req = mrq->req;
-	if (req && req->bio) {
-#ifdef CONFIG_PFK
-		if (bio_dun(req->bio)) {
-			dun = bio_dun(req->bio);
-			cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
-		} else {
-			dun = req->__sector;
-		}
-#else
-		dun = req->__sector;
-#endif
-		err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
-		if (err)
-			return err;
-		pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
-				mmc_hostname(host->mmc),
-				(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
-				slot, bypass, key_index);
-	}
-
-	if (msm_host->ice_hci_support) {
-		/* For ICE HCI / ICE3.0 */
-		sdhci_msm_ice_hci_update_cqe_cfg(dun, bypass, key_index,
-						ice_ctx);
-	} else {
-		/* For ICE versions earlier than ICE3.0 */
-		sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
-					cdu_sz);
-	}
-
-	return 0;
-}
-
-int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int err = 0;
-	struct request *req;
-
-	if (!host->is_crypto_en)
-		return 0;
-
-	if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
-		pr_err("%s: ice is in invalid state %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-
-	req = mrq->req;
-	if (req) {
-		if (msm_host->ice.vops->config_end) {
-			err = msm_host->ice.vops->config_end(
-					msm_host->ice.pdev, req);
-			if (err) {
-				pr_err("%s: ice config end failed %d\n",
-						mmc_hostname(host->mmc), err);
-				return err;
-			}
-		}
-	}
-
-	return 0;
-}
-
-int sdhci_msm_ice_reset(struct sdhci_host *host)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int err = 0;
-
-	if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
-		pr_err("%s: ice is in invalid state before reset %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-
-	if (msm_host->ice.vops->reset) {
-		err = msm_host->ice.vops->reset(msm_host->ice.pdev);
-		if (err) {
-			pr_err("%s: ice reset failed %d\n",
-					mmc_hostname(host->mmc), err);
-			sdhci_msm_ice_print_regs(host);
-			return err;
-		}
-	}
-
-	/* If ICE HCI support is present then re-enable it */
-	if (msm_host->ice_hci_support)
-		sdhci_msm_enable_ice_hci(host, true);
-
-	if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
-		pr_err("%s: ice is in invalid state after reset %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-	return 0;
-}
-
-int sdhci_msm_ice_resume(struct sdhci_host *host)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int err = 0;
-
-	if (msm_host->ice.state !=
-			SDHCI_MSM_ICE_STATE_SUSPENDED) {
-		pr_err("%s: ice is in invalid state before resume %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-
-	if (msm_host->ice.vops->resume) {
-		err = msm_host->ice.vops->resume(msm_host->ice.pdev);
-		if (err) {
-			pr_err("%s: ice resume failed %d\n",
-					mmc_hostname(host->mmc), err);
-			return err;
-		}
-	}
-
-	msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
-	return 0;
-}
-
-int sdhci_msm_ice_suspend(struct sdhci_host *host)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int err = 0;
-
-	if (msm_host->ice.state !=
-			SDHCI_MSM_ICE_STATE_ACTIVE) {
-		pr_err("%s: ice is in invalid state before resume %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-
-	if (msm_host->ice.vops->suspend) {
-		err = msm_host->ice.vops->suspend(msm_host->ice.pdev);
-		if (err) {
-			pr_err("%s: ice suspend failed %d\n",
-					mmc_hostname(host->mmc), err);
-			return -EINVAL;
-		}
-	}
-	msm_host->ice.state = SDHCI_MSM_ICE_STATE_SUSPENDED;
-	return 0;
-}
-
-int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int stat = -EINVAL;
-
-	if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
-		pr_err("%s: ice is in invalid state %d\n",
-			mmc_hostname(host->mmc), msm_host->ice.state);
-		return -EINVAL;
-	}
-
-	if (msm_host->ice.vops->status) {
-		*ice_status = 0;
-		stat = msm_host->ice.vops->status(msm_host->ice.pdev);
-		if (stat < 0) {
-			pr_err("%s: ice get sts failed %d\n",
-					mmc_hostname(host->mmc), stat);
-			return -EINVAL;
-		}
-		*ice_status = stat;
-	}
-	return 0;
-}
-
-void sdhci_msm_ice_print_regs(struct sdhci_host *host)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
-	if (msm_host->ice.vops->debug)
-		msm_host->ice.vops->debug(msm_host->ice.pdev);
-}
diff --git a/drivers/mmc/host/sdhci-msm-ice.h b/drivers/mmc/host/sdhci-msm-ice.h
deleted file mode 100644
index c0df636..0000000
--- a/drivers/mmc/host/sdhci-msm-ice.h
+++ /dev/null
@@ -1,164 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015, 2017, 2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef __SDHCI_MSM_ICE_H__
-#define __SDHCI_MSM_ICE_H__
-
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/blkdev.h>
-#include <crypto/ice.h>
-
-#include "sdhci-msm.h"
-
-#define SDHC_MSM_CRYPTO_LABEL "sdhc-msm-crypto"
-/* Timeout waiting for ICE initialization, which requires TZ access */
-#define SDHCI_MSM_ICE_COMPLETION_TIMEOUT_MS	500
-
-/*
- * SDHCI host controller ICE registers. There are n [0..31]
- * of each of these registers
- */
-#define NUM_SDHCI_MSM_ICE_CTRL_INFO_n_REGS	32
-
-#define CORE_VENDOR_SPEC_ICE_CTRL		0x300
-#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n	0x304
-#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n	0x308
-#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n	0x30C
-
-/* ICE3.0 registers which got added in cmdq reg space */
-#define ICE_CQ_CAPABILITIES	0x04
-#define ICE_HCI_SUPPORT		(1 << 28)
-#define ICE_CQ_CONFIG		0x08
-#define CRYPTO_GENERAL_ENABLE	(1 << 1)
-#define ICE_NONCQ_CRYPTO_PARAMS	0x70
-#define ICE_NONCQ_CRYPTO_DUN	0x74
-
-/* ICE3.0 registers which got added in hc reg space */
-#define HC_VENDOR_SPECIFIC_FUNC4	0x260
-#define DISABLE_CRYPTO			(1 << 15)
-#define HC_VENDOR_SPECIFIC_ICE_CTRL	0x800
-#define ICE_SW_RST_EN			(1 << 0)
-
-/* SDHCI MSM ICE CTRL Info register offset */
-enum {
-	OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS     = 0,
-	OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX  = 1,
-	OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU        = 6,
-	OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI	  = 0,
-	OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE	  = 8,
-};
-
-/* SDHCI MSM ICE CTRL Info register masks */
-enum {
-	MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS     = 0x1,
-	MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX  = 0x1F,
-	MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU        = 0x7,
-	MASK_SDHCI_MSM_ICE_HCI_PARAM_CE		= 0x1,
-	MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI	= 0xff
-};
-
-/* SDHCI MSM ICE encryption/decryption bypass state */
-enum {
-	SDHCI_MSM_ICE_DISABLE_BYPASS  = 0,
-	SDHCI_MSM_ICE_ENABLE_BYPASS = 1,
-};
-
-/* SDHCI MSM ICE Crypto Data Unit of target DUN of Transfer Request */
-enum {
-	SDHCI_MSM_ICE_TR_DATA_UNIT_512_B          = 0,
-	SDHCI_MSM_ICE_TR_DATA_UNIT_1_KB           = 1,
-	SDHCI_MSM_ICE_TR_DATA_UNIT_2_KB           = 2,
-	SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB           = 3,
-	SDHCI_MSM_ICE_TR_DATA_UNIT_8_KB           = 4,
-	SDHCI_MSM_ICE_TR_DATA_UNIT_16_KB          = 5,
-	SDHCI_MSM_ICE_TR_DATA_UNIT_32_KB          = 6,
-	SDHCI_MSM_ICE_TR_DATA_UNIT_64_KB          = 7,
-};
-
-/* SDHCI MSM ICE internal state */
-enum {
-	SDHCI_MSM_ICE_STATE_DISABLED   = 0,
-	SDHCI_MSM_ICE_STATE_ACTIVE     = 1,
-	SDHCI_MSM_ICE_STATE_SUSPENDED  = 2,
-};
-
-/* crypto context fields in cmdq data command task descriptor */
-#define DATA_UNIT_NUM(x)	(((u64)(x) & 0xFFFFFFFF) << 0)
-#define CRYPTO_CONFIG_INDEX(x)	(((u64)(x) & 0xFF) << 32)
-#define CRYPTO_ENABLE(x)	(((u64)(x) & 0x1) << 47)
-
-#ifdef CONFIG_MMC_SDHCI_MSM_ICE
-int sdhci_msm_ice_get_dev(struct sdhci_host *host);
-int sdhci_msm_ice_init(struct sdhci_host *host);
-void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot);
-int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
-			u32 slot);
-int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
-			struct mmc_request *mrq, u32 slot, u64 *ice_ctx);
-int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq);
-int sdhci_msm_ice_reset(struct sdhci_host *host);
-int sdhci_msm_ice_resume(struct sdhci_host *host);
-int sdhci_msm_ice_suspend(struct sdhci_host *host);
-int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status);
-void sdhci_msm_ice_print_regs(struct sdhci_host *host);
-#else
-inline int sdhci_msm_ice_get_dev(struct sdhci_host *host)
-{
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
-	if (msm_host) {
-		msm_host->ice.pdev = NULL;
-		msm_host->ice.vops = NULL;
-	}
-	return -ENODEV;
-}
-inline int sdhci_msm_ice_init(struct sdhci_host *host)
-{
-	return 0;
-}
-
-inline void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
-{
-}
-
-inline int sdhci_msm_ice_cfg(struct sdhci_host *host,
-		struct mmc_request *mrq, u32 slot)
-{
-	return 0;
-}
-inline int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
-		struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
-{
-	return 0;
-}
-inline int sdhci_msm_ice_cfg_end(struct sdhci_host *host,
-			struct mmc_request *mrq)
-{
-	return 0;
-}
-inline int sdhci_msm_ice_reset(struct sdhci_host *host)
-{
-	return 0;
-}
-inline int sdhci_msm_ice_resume(struct sdhci_host *host)
-{
-	return 0;
-}
-inline int sdhci_msm_ice_suspend(struct sdhci_host *host)
-{
-	return 0;
-}
-inline int sdhci_msm_ice_get_status(struct sdhci_host *host,
-				   int *ice_status)
-{
-	return 0;
-}
-inline void sdhci_msm_ice_print_regs(struct sdhci_host *host)
-{
-}
-#endif /* CONFIG_MMC_SDHCI_MSM_ICE */
-#endif /* __SDHCI_MSM_ICE_H__ */
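
For context on what this deletion retires: the ICE2.0 path programmed a
per-slot CTRL_INFO word (written to CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n) from
the key index, crypto data-unit size and bypass flag. The same packing,
pulled out as a standalone sketch using the masks and offsets from the
deleted header:

	static u32 pack_ctrl_info(short key_index, u32 cdu_sz,
				  unsigned int bypass)
	{
		u32 v;

		/* Key index in bits 1-5 */
		v = (key_index & MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX)
			<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX;
		/* Crypto data-unit size in bits 6-8 */
		v |= (cdu_sz & MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU)
			<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU;
		/* Bypass flag in bit 0 */
		v |= (bypass & MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS)
			<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS;
		return v;
	}
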
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index 9a44c5c..9d097b8 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -34,9 +34,9 @@
 #include <linux/clk/qcom.h>
 
 #include "sdhci-msm.h"
-#include "sdhci-msm-ice.h"
 #include "sdhci-pltfm.h"
 #include "cqhci.h"
+#include "cqhci-crypto-qti.h"
 
 #define QOS_REMOVE_DELAY_MS	10
 #define CORE_POWER		0x0
@@ -163,6 +163,8 @@
 
 #define INVALID_TUNING_PHASE	-1
 #define sdhci_is_valid_gpio_wakeup_int(_h) ((_h)->pdata->sdiowakeup_irq >= 0)
+#define sdhci_is_valid_gpio_testbus_trigger_int(_h) \
+	((_h)->pdata->testbus_trigger_irq >= 0)
 
 #define NUM_TUNING_PHASES		16
 #define MAX_DRV_TYPES_SUPPORTED_HS200	4
@@ -1210,7 +1212,115 @@
 			drv_type);
 }
 
+#define MAX_TESTBUS 127
 #define IPCAT_MINOR_MASK(val) ((val & 0x0fff0000) >> 0x10)
+#define TB_CONF_MASK 0x7f
+#define TB_TRIG_CONF 0xff80ffff
+#define TB_WRITE_STATUS BIT(8)
+
+/*
+ * This function is used to program the mask and match
+ * pattern, whether it comes from the cmdline or from sysfs.
+ */
+void sdhci_msm_mm_dbg_configure(struct sdhci_host *host, u32 mask,
+			u32 match, u32 bit_shift, u32 testbus)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct sdhci_msm_host *msm_host = pltfm_host->priv;
+	struct platform_device *pdev = msm_host->pdev;
+	u32 val;
+	u32 enable_dbg_feature = 0;
+	int ret = 0;
+
+	if (testbus > MAX_TESTBUS) {
+		dev_err(&pdev->dev, "%s: testbus should be less than 128.\n",
+						__func__);
+		return;
+	}
+
+	/* Enable debug mode */
+	writel_relaxed(ENABLE_DBG,
+			host->ioaddr + SDCC_TESTBUS_CONFIG);
+	writel_relaxed(DUMMY,
+			host->ioaddr + SDCC_DEBUG_EN_DIS_REG);
+	writel_relaxed((readl_relaxed(host->ioaddr +
+			SDCC_TESTBUS_CONFIG) | TESTBUS_EN),
+			host->ioaddr + SDCC_TESTBUS_CONFIG);
+
+	/* Enable particular feature */
+	enable_dbg_feature |= MM_TRIGGER_DISABLE;
+	writel_relaxed((readl_relaxed(host->ioaddr +
+			SDCC_DEBUG_FEATURE_CFG_REG) | enable_dbg_feature),
+			host->ioaddr + SDCC_DEBUG_FEATURE_CFG_REG);
+
+	/* Configure Mask & Match pattern*/
+	writel_relaxed((mask << bit_shift),
+			host->ioaddr + SDCC_DEBUG_MASK_PATTERN_REG);
+	writel_relaxed((match << bit_shift),
+			host->ioaddr + SDCC_DEBUG_MATCH_PATTERN_REG);
+
+	/* Configure test bus for above mm */
+	writel_relaxed((testbus & TB_CONF_MASK), host->ioaddr +
+			SDCC_DEBUG_MM_TB_CFG_REG);
+	/* Initiate conf shifting */
+	writel_relaxed(BIT(8),
+			host->ioaddr + SDCC_DEBUG_MM_TB_CFG_REG);
+
+	/* Wait for test bus to be configured */
+	ret = readl_poll_timeout(host->ioaddr + SDCC_DEBUG_MM_TB_CFG_REG,
+			val, !(val & TB_WRITE_STATUS), 50, 1000);
+	if (ret == -ETIMEDOUT)
+		pr_err("%s: Unable to set mask & match\n",
+				mmc_hostname(host->mmc));
+
+	/* Direct test bus to GPIO */
+	writel_relaxed(((readl_relaxed(host->ioaddr +
+				SDCC_TESTBUS_CONFIG) & TB_TRIG_CONF)
+				| (testbus << 16)), host->ioaddr +
+				SDCC_TESTBUS_CONFIG);
+
+	/* Read back to ensure write went through */
+	readl_relaxed(host->ioaddr + SDCC_DEBUG_FEATURE_CFG_REG);
+}
+
+static ssize_t store_mask_and_match(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct sdhci_host *host = dev_get_drvdata(dev);
+	unsigned long value;
+	char *token;
+	int i = 0;
+	u32 mask = 0, match = 0, bit_shift = 0, testbus = 0;
+
+	char *temp = (char *)buf;
+
+	if (!host)
+		return -EINVAL;
+
+	while ((token = strsep(&temp, " "))) {
+		if (kstrtoul(token, 0, &value))
+			continue;
+		if (i == 0)
+			mask = value;
+		else if (i == 1)
+			match = value;
+		else if (i == 2)
+			bit_shift = value;
+		else if (i == 3) {
+			testbus = value;
+			break;
+		}
+		i++;
+	}
+
+	pr_info("%s: M&M parameter passed are: %d %d %d %d\n",
+		mmc_hostname(host->mmc), mask, match, bit_shift, testbus);
+	pm_runtime_get_sync(dev);
+	sdhci_msm_mm_dbg_configure(host, mask, match, bit_shift, testbus);
+	pm_runtime_put_sync(dev);
+
+	pr_debug("%s: M&M debug enabled.\n", mmc_hostname(host->mmc));
+	return count;
+}
 
 /* Enter sdcc debug mode */
 void sdhci_msm_enter_dbg_mode(struct sdhci_host *host)
@@ -2168,26 +2278,20 @@
 		}
 	}
 
-	if (msm_host->ice.pdev) {
-		if (sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
-				&ice_clk_table, &ice_clk_table_len, 0)) {
-			dev_err(dev, "failed parsing supported ice clock rates\n");
-			goto out;
-		}
-		if (!ice_clk_table || !ice_clk_table_len) {
-			dev_err(dev, "Invalid clock table\n");
-			goto out;
-		}
-		if (ice_clk_table_len != 2) {
-			dev_err(dev, "Need max and min frequencies in the table\n");
-			goto out;
-		}
-		pdata->sup_ice_clk_table = ice_clk_table;
-		pdata->sup_ice_clk_cnt = ice_clk_table_len;
-		pdata->ice_clk_max = pdata->sup_ice_clk_table[0];
-		pdata->ice_clk_min = pdata->sup_ice_clk_table[1];
-		dev_dbg(dev, "supported ICE clock rates (Hz): max: %u min: %u\n",
+	if (!sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
+			&ice_clk_table, &ice_clk_table_len, 0)) {
+		if (ice_clk_table && ice_clk_table_len) {
+			if (ice_clk_table_len != 2) {
+				dev_err(dev, "Need max and min frequencies\n");
+				goto out;
+			}
+			pdata->sup_ice_clk_table = ice_clk_table;
+			pdata->sup_ice_clk_cnt = ice_clk_table_len;
+			pdata->ice_clk_max = pdata->sup_ice_clk_table[0];
+			pdata->ice_clk_min = pdata->sup_ice_clk_table[1];
+			dev_dbg(dev, "ICE clock rates (Hz): max: %u min: %u\n",
 				pdata->ice_clk_max, pdata->ice_clk_min);
+		}
 	}
 
 	if (sdhci_msm_dt_get_array(dev, "qcom,devfreq,freq-table",
@@ -2409,64 +2513,6 @@
 	sdhci_cqe_disable(mmc, recovery);
 }
 
-int sdhci_msm_cqe_crypto_cfg(struct mmc_host *mmc,
-			struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
-{
-	int err = 0;
-	struct sdhci_host *host = mmc_priv(mmc);
-
-	if (!host->is_crypto_en)
-		return 0;
-
-	if (host->mmc->inlinecrypt_reset_needed &&
-			host->ops->crypto_engine_reset) {
-		err = host->ops->crypto_engine_reset(host);
-		if (err) {
-			pr_err("%s: crypto reset failed\n",
-					mmc_hostname(host->mmc));
-			goto out;
-		}
-		host->mmc->inlinecrypt_reset_needed = false;
-	}
-
-	err = sdhci_msm_ice_cqe_cfg(host, mrq, slot, ice_ctx);
-	if (err) {
-		pr_err("%s: failed to configure crypto\n",
-					mmc_hostname(host->mmc));
-		goto out;
-	}
-out:
-	return err;
-}
-
-void sdhci_msm_cqe_crypto_cfg_reset(struct mmc_host *mmc, unsigned int slot)
-{
-	struct sdhci_host *host = mmc_priv(mmc);
-
-	if (!host->is_crypto_en)
-		return;
-
-	return sdhci_msm_ice_cfg_reset(host, slot);
-}
-
-int sdhci_msm_cqe_crypto_cfg_end(struct mmc_host *mmc,
-			struct mmc_request *mrq)
-{
-	int err = 0;
-	struct sdhci_host *host = mmc_priv(mmc);
-
-	if (!host->is_crypto_en)
-		return 0;
-
-	err = sdhci_msm_ice_cfg_end(host, mrq);
-	if (err) {
-		pr_err("%s: failed to configure crypto\n",
-				mmc_hostname(host->mmc));
-		return err;
-	}
-	return 0;
-}
-
 void sdhci_msm_cqe_sdhci_dumpregs(struct mmc_host *mmc)
 {
 	struct sdhci_host *host = mmc_priv(mmc);
@@ -2477,9 +2523,6 @@
 static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
 	.enable		= sdhci_msm_cqe_enable,
 	.disable	= sdhci_msm_cqe_disable,
-	.crypto_cfg	= sdhci_msm_cqe_crypto_cfg,
-	.crypto_cfg_reset	= sdhci_msm_cqe_crypto_cfg_reset,
-	.crypto_cfg_end		= sdhci_msm_cqe_crypto_cfg_end,
 	.dumpregs		= sdhci_msm_cqe_sdhci_dumpregs,
 };
 
@@ -2509,6 +2552,13 @@
 	msm_host->cq_host = cq_host;
 
 	dma64 = host->flags & SDHCI_USE_64_BIT_DMA;
+	/*
+	 * Set the vendor specific ops needed for ICE.
+	 * The default implementation is used if the ops are not set.
+	 */
+#ifdef CONFIG_MMC_CQHCI_CRYPTO_QTI
+	cqhci_crypto_qti_set_vops(cq_host);
+#endif
 
 	ret = cqhci_init(cq_host, host->mmc, dma64);
 	if (ret) {
@@ -2921,6 +2971,16 @@
 	msm_host->is_sdiowakeup_enabled = enable;
 }
 
+static irqreturn_t sdhci_msm_testbus_trigger_irq(int irq, void *data)
+{
+	struct sdhci_host *host = (struct sdhci_host *)data;
+
+	pr_info("%s: match happened against mask\n",
+				mmc_hostname(host->mmc));
+
+	return IRQ_HANDLED;
+}
+
 static irqreturn_t sdhci_msm_sdiowakeup_irq(int irq, void *data)
 {
 	struct sdhci_host *host = (struct sdhci_host *)data;
@@ -4179,7 +4239,6 @@
 	int i, index = 0;
 	u32 test_bus_val = 0;
 	u32 debug_reg[MAX_TEST_BUS] = {0};
-	u32 sts = 0;
 
 	sdhci_msm_cache_debug_data(host);
 	pr_info("----------- VENDOR REGISTER DUMP -----------\n");
@@ -4260,29 +4319,10 @@
 		pr_info(" Test bus[%d to %d]: 0x%08x 0x%08x 0x%08x 0x%08x\n",
 				i, i + 3, debug_reg[i], debug_reg[i+1],
 				debug_reg[i+2], debug_reg[i+3]);
-
-	if (host->is_crypto_en) {
-		sdhci_msm_ice_get_status(host, &sts);
-		pr_info("%s: ICE status %x\n", mmc_hostname(host->mmc), sts);
-		sdhci_msm_ice_print_regs(host);
-	}
 }
 
 static void sdhci_msm_reset(struct sdhci_host *host, u8 mask)
 {
-	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-
-	/* Set ICE core to be reset in sync with SDHC core */
-	if (msm_host->ice.pdev) {
-		if (msm_host->ice_hci_support)
-			writel_relaxed(1, host->ioaddr +
-						HC_VENDOR_SPECIFIC_ICE_CTRL);
-		else
-			writel_relaxed(1,
-				host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL);
-	}
-
 	sdhci_reset(host, mask);
 	if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL))
 		cqhci_suspend(host->mmc);
@@ -4974,9 +5014,6 @@
 }
 
 static struct sdhci_ops sdhci_msm_ops = {
-	.crypto_engine_cfg = sdhci_msm_ice_cfg,
-	.crypto_engine_cfg_end = sdhci_msm_ice_cfg_end,
-	.crypto_engine_reset = sdhci_msm_ice_reset,
 	.set_uhs_signaling = sdhci_msm_set_uhs_signaling,
 	.check_power_status = sdhci_msm_check_power_status,
 	.platform_execute_tuning = sdhci_msm_execute_tuning,
@@ -5108,7 +5145,6 @@
 
 	if ((major == 1) && (minor >= 0x6b)) {
 		host->cdr_support = true;
-		msm_host->ice_hci_support = true;
 	}
 
 	/* 7FF projects with 7nm DLL */
@@ -5144,84 +5180,23 @@
 {
 	int ret = 0;
 
-	if (msm_host->ice.pdev) {
-		/* Setup SDC ICE clock */
-		msm_host->ice_clk = devm_clk_get(&pdev->dev, "ice_core_clk");
-		if (!IS_ERR(msm_host->ice_clk)) {
-			/* ICE core has only one clock frequency for now */
-			ret = clk_set_rate(msm_host->ice_clk,
-					msm_host->pdata->ice_clk_max);
-			if (ret) {
-				dev_err(&pdev->dev, "ICE_CLK rate set failed (%d) for %u\n",
-					ret,
-					msm_host->pdata->ice_clk_max);
-				return ret;
-			}
-			ret = clk_prepare_enable(msm_host->ice_clk);
-			if (ret)
-				return ret;
-			ret = clk_set_flags(msm_host->ice_clk,
-					CLKFLAG_RETAIN_MEM);
-			if (ret)
-				dev_err(&pdev->dev, "ICE_CLK set RETAIN_MEM failed: %d\n",
-					ret);
-
-			msm_host->ice_clk_rate =
-				msm_host->pdata->ice_clk_max;
-		}
-	}
-
-	return ret;
-}
-
-static int sdhci_msm_initialize_ice(struct sdhci_msm_host *msm_host,
-						struct platform_device *pdev,
-						struct sdhci_host *host)
-{
-	int ret = 0;
-
-	if (msm_host->ice.pdev) {
-		ret = sdhci_msm_ice_init(host);
+	/* Setup SDC ICE clock */
+	msm_host->ice_clk = devm_clk_get(&pdev->dev, "ice_core_clk");
+	if (!IS_ERR(msm_host->ice_clk)) {
+		/* ICE core has only one clock frequency for now */
+		ret = clk_set_rate(msm_host->ice_clk,
+				msm_host->pdata->ice_clk_max);
 		if (ret) {
-			dev_err(&pdev->dev, "%s: SDHCi ICE init failed (%d)\n",
-					mmc_hostname(host->mmc), ret);
-			return -EINVAL;
+			dev_err(&pdev->dev, "ICE_CLK rate set failed (%d) for %u\n",
+				ret,
+				msm_host->pdata->ice_clk_max);
+			return ret;
 		}
-		host->is_crypto_en = true;
-		msm_host->mmc->inlinecrypt_support = true;
-		/* Packed commands cannot be encrypted/decrypted using ICE */
-		msm_host->mmc->caps2 &= ~(MMC_CAP2_PACKED_WR |
-				MMC_CAP2_PACKED_WR_CONTROL);
-	}
-
-	return 0;
-}
-
-static int sdhci_msm_get_ice_device_vops(struct sdhci_host *host,
-					struct platform_device *pdev)
-{
-	int ret = 0;
-
-	ret = sdhci_msm_ice_get_dev(host);
-	if (ret == -EPROBE_DEFER) {
-		/*
-		 * SDHCI driver might be probed before ICE driver does.
-		 * In that case we would like to return EPROBE_DEFER code
-		 * in order to delay its probing.
-		 */
-		dev_err(&pdev->dev, "%s: required ICE device not probed yet err = %d\n",
-			__func__, ret);
-	} else if (ret == -ENODEV) {
-		/*
-		 * ICE device is not enabled in DTS file. No need for further
-		 * initialization of ICE driver.
-		 */
-		dev_warn(&pdev->dev, "%s: ICE device is not enabled\n",
-			__func__);
-		ret = 0;
-	} else if (ret) {
-		dev_err(&pdev->dev, "%s: sdhci_msm_ice_get_dev failed %d\n",
-			__func__, ret);
+		ret = clk_prepare_enable(msm_host->ice_clk);
+		if (ret)
+			return ret;
+		msm_host->ice_clk_rate =
+			msm_host->pdata->ice_clk_max;
 	}
 
 	return ret;
@@ -5311,11 +5286,6 @@
 	msm_host->mmc = host->mmc;
 	msm_host->pdev = pdev;
 
-	/* get the ice device vops if present */
-	ret = sdhci_msm_get_ice_device_vops(host, pdev);
-	if (ret)
-		goto out_host_free;
-
 	/* Extract platform data */
 	if (pdev->dev.of_node) {
 		ret = of_alias_get_id(pdev->dev.of_node, "sdhc");
@@ -5653,11 +5623,6 @@
 	if (msm_host->pdata->nonhotplug)
 		msm_host->mmc->caps2 |= MMC_CAP2_NONHOTPLUG;
 
-	/* Initialize ICE if present */
-	ret = sdhci_msm_initialize_ice(msm_host, pdev, host);
-	if (ret == -EINVAL)
-		goto vreg_deinit;
-
 	init_completion(&msm_host->pwr_irq_completion);
 
 	if (gpio_is_valid(msm_host->pdata->status_gpio)) {
@@ -5719,6 +5684,22 @@
 		}
 	}
 
+	msm_host->pdata->testbus_trigger_irq = platform_get_irq_byname(pdev,
+							  "tb_trig_irq");
+	if (sdhci_is_valid_gpio_testbus_trigger_int(msm_host)) {
+		dev_info(&pdev->dev, "%s: testbus_trigger_irq = %d\n", __func__,
+				msm_host->pdata->testbus_trigger_irq);
+		ret = request_irq(msm_host->pdata->testbus_trigger_irq,
+				  sdhci_msm_testbus_trigger_irq,
+				  IRQF_SHARED | IRQF_TRIGGER_RISING,
+				  "sdhci-msm tb_trig", host);
+		if (ret) {
+			dev_err(&pdev->dev, "%s: request tb_trig IRQ %d: failed: %d\n",
+				__func__, msm_host->pdata->testbus_trigger_irq,
+				ret);
+		}
+	}
+
 	if (of_device_is_compatible(node, "qcom,sdhci-msm-cqe")) {
 		dev_dbg(&pdev->dev, "node with qcom,sdhci-msm-cqe\n");
 		ret = sdhci_msm_cqe_add_host(host, pdev);
@@ -5777,6 +5758,20 @@
 		device_remove_file(&pdev->dev, &msm_host->auto_cmd21_attr);
 	}
 
+	if (IPCAT_MINOR_MASK(readl_relaxed(host->ioaddr +
+				SDCC_IP_CATALOG)) >= 2) {
+		msm_host->mask_and_match.store = store_mask_and_match;
+		sysfs_attr_init(&msm_host->mask_and_match.attr);
+		msm_host->mask_and_match.attr.name = "mask_and_match";
+		msm_host->mask_and_match.attr.mode = 0644;
+		ret = device_create_file(&pdev->dev,
+					&msm_host->mask_and_match);
+		if (ret) {
+			pr_err("%s: %s: failed creating M&M attr: %d\n",
+					mmc_hostname(host->mmc), __func__, ret);
+		}
+	}
+
 	if (sdhci_msm_is_bootdevice(&pdev->dev))
 		mmc_flush_detect_work(host->mmc);
 	/* Successful initialization */
@@ -5939,7 +5934,6 @@
 	struct sdhci_host *host = dev_get_drvdata(dev);
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = pltfm_host->priv;
-	int ret;
 	ktime_t start = ktime_get();
 
 	if (host->mmc->card && mmc_card_sdio(host->mmc->card))
@@ -5950,12 +5944,6 @@
 defer_disable_host_irq:
 	disable_irq(msm_host->pwr_irq);
 
-	if (host->is_crypto_en) {
-		ret = sdhci_msm_ice_suspend(host);
-		if (ret < 0)
-			pr_err("%s: failed to suspend crypto engine %d\n",
-					mmc_hostname(host->mmc), ret);
-	}
 	sdhci_msm_disable_controller_clock(host);
 	trace_sdhci_msm_runtime_suspend(mmc_hostname(host->mmc), 0,
 			ktime_to_us(ktime_sub(ktime_get(), start)));
@@ -5974,20 +5962,11 @@
 	if (ret) {
 		pr_err("%s: Failed to enable reqd clocks\n",
 				mmc_hostname(host->mmc));
-		goto skip_ice_resume;
 	}
 
 	if (host->mmc->ios.timing == MMC_TIMING_MMC_HS400)
 		sdhci_msm_toggle_fifo_write_clk(host);
 
-	if (host->is_crypto_en) {
-		ret = sdhci_msm_ice_resume(host);
-		if (ret)
-			pr_err("%s: failed to resume crypto engine %d\n",
-					mmc_hostname(host->mmc), ret);
-	}
-skip_ice_resume:
-
 	if (host->mmc->card && mmc_card_sdio(host->mmc->card))
 		goto defer_enable_host_irq;
 
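The new mask_and_match attribute takes four space-separated values -- mask,
match, bit_shift and testbus -- and hands them to
sdhci_msm_mm_dbg_configure(). A sketch of driving the same helper directly
(the values below are illustrative only):

	/*
	 * Watch test bus 10 for pattern 0x25 under mask 0x7f with no extra
	 * bit shift; writing "0x7f 0x25 0 10" to the mask_and_match sysfs
	 * file performs the equivalent configuration.
	 */
	static void sdhci_msm_mm_dbg_example(struct sdhci_host *host)
	{
		sdhci_msm_mm_dbg_configure(host, 0x7f, 0x25, 0, 10);
	}
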
diff --git a/drivers/mmc/host/sdhci-msm.h b/drivers/mmc/host/sdhci-msm.h
index fe20609..026faae 100644
--- a/drivers/mmc/host/sdhci-msm.h
+++ b/drivers/mmc/host/sdhci-msm.h
@@ -207,6 +207,7 @@
 	u32 *sup_clk_table;
 	unsigned char sup_clk_cnt;
 	int sdiowakeup_irq;
+	int testbus_trigger_irq;
 	struct sdhci_msm_pm_qos_data pm_qos_data;
 	u32 *bus_clk_table;
 	unsigned char bus_clk_cnt;
@@ -266,17 +267,9 @@
 	struct sdhci_host copy_host;
 };
 
-struct sdhci_msm_ice_data {
-	struct qcom_ice_variant_ops *vops;
-	struct platform_device *pdev;
-	int state;
-};
-
 struct sdhci_msm_host {
 	struct platform_device	*pdev;
 	void __iomem *core_mem;    /* MSM SDCC mapped address */
-	void __iomem *cryptoio;    /* ICE HCI mapped address */
-	bool ice_hci_support;
 	int	pwr_irq;	/* power irq */
 	struct clk	 *clk;     /* main SD/MMC bus clock */
 	struct clk	 *pclk;    /* SDHC peripheral bus clock */
@@ -296,6 +289,7 @@
 	struct completion pwr_irq_completion;
 	struct sdhci_msm_bus_vote msm_bus_vote;
 	struct device_attribute	polling;
+	struct device_attribute mask_and_match;
 	u32 clk_rate; /* Keeps track of current clock rate that is set */
 	bool tuning_done;
 	bool calibration_done;
@@ -327,7 +321,6 @@
 	int soc_min_rev;
 	struct workqueue_struct *pm_qos_wq;
 	struct sdhci_msm_dll_hsr *dll_hsr;
-	struct sdhci_msm_ice_data ice;
 	u32 ice_clk_rate;
 	bool debug_mode_enabled;
 	bool reg_store;
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 505fcf4..5e8263f 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -1922,50 +1922,6 @@
 		return MMC_SEND_TUNING_BLOCK;
 }
 
-static int sdhci_crypto_cfg(struct sdhci_host *host, struct mmc_request *mrq,
-		u32 slot)
-{
-	int err = 0;
-
-	if (host->mmc->inlinecrypt_reset_needed &&
-			host->ops->crypto_engine_reset) {
-		err = host->ops->crypto_engine_reset(host);
-		if (err) {
-			pr_err("%s: crypto reset failed\n",
-					mmc_hostname(host->mmc));
-			goto out;
-		}
-		host->mmc->inlinecrypt_reset_needed = false;
-	}
-
-	if (host->ops->crypto_engine_cfg) {
-		err = host->ops->crypto_engine_cfg(host, mrq, slot);
-		if (err) {
-			pr_err("%s: failed to configure crypto\n",
-					mmc_hostname(host->mmc));
-			goto out;
-		}
-	}
-out:
-	return err;
-}
-
-static int sdhci_crypto_cfg_end(struct sdhci_host *host,
-				struct mmc_request *mrq)
-{
-	int err = 0;
-
-	if (host->ops->crypto_engine_cfg_end) {
-		err = host->ops->crypto_engine_cfg_end(host, mrq);
-		if (err) {
-			pr_err("%s: failed to configure crypto\n",
-					mmc_hostname(host->mmc));
-			return err;
-		}
-	}
-	return 0;
-}
-
 static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
 {
 	struct sdhci_host *host;
@@ -2032,13 +1988,6 @@
 					sdhci_get_tuning_cmd(host));
 		}
 
-		if (host->is_crypto_en) {
-			spin_unlock_irqrestore(&host->lock, flags);
-			if (sdhci_crypto_cfg(host, mrq, 0))
-				goto end_req;
-			spin_lock_irqsave(&host->lock, flags);
-		}
-
 		if (mrq->sbc && !(host->flags & SDHCI_AUTO_CMD23))
 			sdhci_send_command(host, mrq->sbc);
 		else
@@ -2048,13 +1997,6 @@
 	mmiowb();
 	spin_unlock_irqrestore(&host->lock, flags);
 	return;
-end_req:
-	mrq->cmd->error = -EIO;
-	if (mrq->data)
-		mrq->data->error = -EIO;
-	host->mrq = NULL;
-	sdhci_dumpregs(host);
-	mmc_request_done(host->mmc, mrq);
 }
 
 void sdhci_set_bus_width(struct sdhci_host *host, int width)
@@ -3121,7 +3063,6 @@
 	mmiowb();
 	spin_unlock_irqrestore(&host->lock, flags);
 
-	sdhci_crypto_cfg_end(host, mrq);
 	mmc_request_done(host->mmc, mrq);
 
 	return false;
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 84a98bc..ab25f13 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -671,7 +671,6 @@
 	enum sdhci_power_policy power_policy;
 
 	bool sdio_irq_async_status;
-	bool is_crypto_en;
 
 	u32 auto_cmd_err_sts;
 	struct ratelimit_state dbg_dump_rs;
@@ -712,11 +711,6 @@
 	unsigned int    (*get_ro)(struct sdhci_host *host);
 	void		(*reset)(struct sdhci_host *host, u8 mask);
 	int	(*platform_execute_tuning)(struct sdhci_host *host, u32 opcode);
-	int	(*crypto_engine_cfg)(struct sdhci_host *host,
-				struct mmc_request *mrq, u32 slot);
-	int	(*crypto_engine_cfg_end)(struct sdhci_host *host,
-					struct mmc_request *mrq);
-	int	(*crypto_engine_reset)(struct sdhci_host *host);
 	void	(*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs);
 	void	(*hw_reset)(struct sdhci_host *host);
 	void    (*adma_workaround)(struct sdhci_host *host, u32 intmask);
diff --git a/drivers/net/wireless/cnss2/debug.c b/drivers/net/wireless/cnss2/debug.c
index 21ceda6..9002866 100644
--- a/drivers/net/wireless/cnss2/debug.c
+++ b/drivers/net/wireless/cnss2/debug.c
@@ -879,12 +879,6 @@
 
 	return 0;
 }
-#else
-static int cnss_create_debug_only_node(struct cnss_plat_data *plat_priv)
-{
-	return 0;
-}
-#endif
 
 int cnss_debugfs_create(struct cnss_plat_data *plat_priv)
 {
@@ -910,6 +904,12 @@
 out:
 	return ret;
 }
+#else
+int cnss_debugfs_create(struct cnss_plat_data *plat_priv)
+{
+	return 0;
+}
+#endif
 
 void cnss_debugfs_destroy(struct cnss_plat_data *plat_priv)
 {
diff --git a/drivers/net/wireless/cnss2/main.c b/drivers/net/wireless/cnss2/main.c
index 9a348db..0324c64 100644
--- a/drivers/net/wireless/cnss2/main.c
+++ b/drivers/net/wireless/cnss2/main.c
@@ -665,6 +665,8 @@
 		return -ENODEV;
 	}
 
+	mutex_lock(&plat_priv->driver_ops_lock);
+
 	cnss_pr_dbg("Doing idle restart\n");
 
 	reinit_completion(&plat_priv->power_up_complete);
@@ -703,9 +705,11 @@
 		goto out;
 	}
 
+	mutex_unlock(&plat_priv->driver_ops_lock);
 	return 0;
 
 out:
+	mutex_unlock(&plat_priv->driver_ops_lock);
 	return ret;
 }
 EXPORT_SYMBOL(cnss_idle_restart);
@@ -1979,6 +1983,7 @@
 		set_bit(CNSS_IN_REBOOT, &plat_priv->driver_state);
 		del_timer(&plat_priv->fw_boot_timer);
 		complete_all(&plat_priv->power_up_complete);
+		complete_all(&plat_priv->cal_complete);
 	}
 
 	cnss_pr_dbg("Received shutdown notification\n");
@@ -2120,6 +2125,7 @@
 	set_bit(CNSS_IN_REBOOT, &plat_priv->driver_state);
 	del_timer(&plat_priv->fw_boot_timer);
 	complete_all(&plat_priv->power_up_complete);
+	complete_all(&plat_priv->cal_complete);
 	cnss_pr_dbg("Reboot is in progress with action %d\n", action);
 
 	return NOTIFY_DONE;
@@ -2152,6 +2158,7 @@
 	init_completion(&plat_priv->rddm_complete);
 	init_completion(&plat_priv->recovery_complete);
 	mutex_init(&plat_priv->dev_lock);
+	mutex_init(&plat_priv->driver_ops_lock);
 
 	return 0;
 }
@@ -2301,9 +2308,7 @@
 	if (ret)
 		goto deinit_event_work;
 
-	ret = cnss_debugfs_create(plat_priv);
-	if (ret)
-		goto deinit_qmi;
+	cnss_debugfs_create(plat_priv);
 
 	ret = cnss_misc_init(plat_priv);
 	if (ret)
@@ -2322,7 +2327,6 @@
 
 destroy_debugfs:
 	cnss_debugfs_destroy(plat_priv);
-deinit_qmi:
 	cnss_qmi_deinit(plat_priv);
 deinit_event_work:
 	cnss_event_work_deinit(plat_priv);
diff --git a/drivers/net/wireless/cnss2/main.h b/drivers/net/wireless/cnss2/main.h
index ea5385a..9d0c51a 100644
--- a/drivers/net/wireless/cnss2/main.h
+++ b/drivers/net/wireless/cnss2/main.h
@@ -23,6 +23,8 @@
 #define RECOVERY_TIMEOUT		60000
 #define WLAN_WD_TIMEOUT_MS		60000
 #define TIME_CLOCK_FREQ_HZ		19200000
+#define CNSS_RAMDUMP_MAGIC		0x574C414E
+#define CNSS_RAMDUMP_VERSION		0
 
 #define CNSS_EVENT_SYNC   BIT(0)
 #define CNSS_EVENT_UNINTERRUPTIBLE BIT(1)
@@ -167,6 +169,21 @@
 	CNSS_FW_IMAGE,
 	CNSS_FW_RDDM,
 	CNSS_FW_REMOTE_HEAP,
+	CNSS_FW_DUMP_TYPE_MAX,
+};
+
+struct cnss_dump_entry {
+	u32 type;
+	u32 entry_start;
+	u32 entry_num;
+};
+
+struct cnss_dump_meta_info {
+	u32 magic;
+	u32 version;
+	u32 chipset;
+	u32 total_entries;
+	struct cnss_dump_entry entry[CNSS_FW_DUMP_TYPE_MAX];
 };
 
 enum cnss_driver_event_type {
@@ -344,6 +361,7 @@
 	struct completion power_up_complete;
 	struct completion cal_complete;
 	struct mutex dev_lock; /* mutex for register access through debugfs */
+	struct mutex driver_ops_lock; /* mutex for external driver ops */
 	u32 device_freq_hz;
 	u32 diag_reg_read_addr;
 	u32 diag_reg_read_mem_type;
diff --git a/drivers/net/wireless/cnss2/pci.c b/drivers/net/wireless/cnss2/pci.c
index 898da90..f2f8560 100644
--- a/drivers/net/wireless/cnss2/pci.c
+++ b/drivers/net/wireless/cnss2/pci.c
@@ -1881,20 +1881,33 @@
 	struct cnss_dump_data *dump_data = &info_v2->dump_data;
 	struct cnss_dump_seg *dump_seg = info_v2->dump_data_vaddr;
 	struct ramdump_segment *ramdump_segs, *s;
+	struct cnss_dump_meta_info meta_info = {0};
 	int i, ret = 0;
 
 	if (!info_v2->dump_data_valid ||
 	    dump_data->nentries == 0)
 		return 0;
 
-	ramdump_segs = kcalloc(dump_data->nentries,
+	ramdump_segs = kcalloc(dump_data->nentries + 1,
 			       sizeof(*ramdump_segs),
 			       GFP_KERNEL);
 	if (!ramdump_segs)
 		return -ENOMEM;
 
-	s = ramdump_segs;
+	s = ramdump_segs + 1;
 	for (i = 0; i < dump_data->nentries; i++) {
+		if (dump_seg->type >= CNSS_FW_DUMP_TYPE_MAX) {
+			cnss_pr_err("Unsupported dump type: %d",
+				    dump_seg->type);
+			continue;
+		}
+
+		if (meta_info.entry[dump_seg->type].entry_start == 0) {
+			meta_info.entry[dump_seg->type].type = dump_seg->type;
+			meta_info.entry[dump_seg->type].entry_start = i + 1;
+		}
+		meta_info.entry[dump_seg->type].entry_num++;
+
 		s->address = dump_seg->address;
 		s->v_address = dump_seg->v_address;
 		s->size = dump_seg->size;
@@ -1902,8 +1915,16 @@
 		dump_seg++;
 	}
 
+	meta_info.magic = CNSS_RAMDUMP_MAGIC;
+	meta_info.version = CNSS_RAMDUMP_VERSION;
+	meta_info.chipset = pci_priv->device_id;
+	meta_info.total_entries = CNSS_FW_DUMP_TYPE_MAX;
+
+	ramdump_segs->v_address = &meta_info;
+	ramdump_segs->size = sizeof(meta_info);
+
 	ret = do_elf_ramdump(info_v2->ramdump_dev, ramdump_segs,
-			     dump_data->nentries);
+			     dump_data->nentries + 1);
 	kfree(ramdump_segs);
 
 	cnss_pci_clear_dump_info(pci_priv);
@@ -2055,6 +2076,12 @@
 		return -EEXIST;
 	}
 
+	if (!driver_ops->id_table || !pci_dev_present(driver_ops->id_table)) {
+		cnss_pr_err("PCIe device id is %x, not supported by loading driver\n",
+			    pci_priv->device_id);
+		return -ENODEV;
+	}
+
 	if (!test_bit(CNSS_COLD_BOOT_CAL, &plat_priv->driver_state))
 		goto register_driver;
 
@@ -2065,7 +2092,8 @@
 					  msecs_to_jiffies(timeout) << 2);
 	if (!ret) {
 		cnss_pr_err("Timeout waiting for calibration to complete\n");
-		CNSS_ASSERT(0);
+		if (!test_bit(CNSS_IN_REBOOT, &plat_priv->driver_state))
+			CNSS_ASSERT(0);
 
 		cal_info = kzalloc(sizeof(*cal_info), GFP_KERNEL);
 		if (!cal_info)
@@ -2077,17 +2105,17 @@
 				       0, cal_info);
 	}
 
+	if (test_bit(CNSS_IN_REBOOT, &plat_priv->driver_state)) {
+		cnss_pr_dbg("Reboot or shutdown is in progress, ignore register driver\n");
+		return -EINVAL;
+	}
+
 register_driver:
 	reinit_completion(&plat_priv->power_up_complete);
 	ret = cnss_driver_event_post(plat_priv,
 				     CNSS_DRIVER_EVENT_REGISTER_DRIVER,
-				     CNSS_EVENT_SYNC_UNINTERRUPTIBLE,
+				     CNSS_EVENT_SYNC_UNKILLABLE,
 				     driver_ops);
-	if (ret == -EINTR) {
-		cnss_pr_dbg("Register driver work is killed\n");
-		del_timer(&plat_priv->fw_boot_timer);
-		pci_priv->driver_ops = NULL;
-	}
 
 	return ret;
 }
@@ -2104,9 +2132,9 @@
 		return;
 	}
 
-	if (plat_priv->device_id == QCA6174_DEVICE_ID ||
-	    !(test_bit(CNSS_DRIVER_IDLE_RESTART, &plat_priv->driver_state) ||
-	      test_bit(CNSS_DRIVER_LOADING, &plat_priv->driver_state)))
+	mutex_lock(&plat_priv->driver_ops_lock);
+
+	if (plat_priv->device_id == QCA6174_DEVICE_ID)
 		goto skip_wait_power_up;
 
 	timeout = cnss_get_qmi_timeout(plat_priv);
@@ -2134,7 +2162,9 @@
 skip_wait_recovery:
 	cnss_driver_event_post(plat_priv,
 			       CNSS_DRIVER_EVENT_UNREGISTER_DRIVER,
-			       CNSS_EVENT_SYNC_UNINTERRUPTIBLE, NULL);
+			       CNSS_EVENT_SYNC_UNKILLABLE, NULL);
+
+	mutex_unlock(&plat_priv->driver_ops_lock);
 }
 EXPORT_SYMBOL(cnss_wlan_unregister_driver);
 
@@ -3355,7 +3385,14 @@
 
 	info->va = pci_priv->bar;
 	info->pa = pci_resource_start(pci_priv->pci_dev, PCI_BAR_NUM);
-
+	info->chip_id = plat_priv->chip_info.chip_id;
+	info->chip_family = plat_priv->chip_info.chip_family;
+	info->board_id = plat_priv->board_info.board_id;
+	info->soc_id = plat_priv->soc_info.soc_id;
+	info->fw_version = plat_priv->fw_version_info.fw_version;
+	strlcpy(info->fw_build_timestamp,
+		plat_priv->fw_version_info.fw_build_timestamp,
+		sizeof(info->fw_build_timestamp));
 	memcpy(&info->device_version, &plat_priv->device_version,
 	       sizeof(info->device_version));
 
diff --git a/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c b/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c
index 807c75a..4519473 100644
--- a/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c
+++ b/drivers/platform/msm/ipa/ipa_clients/ipa_uc_offload.c
@@ -1,10 +1,11 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2019, 2020, The Linux Foundation. All rights reserved.
  */
 
 #include <linux/ipa_uc_offload.h>
 #include <linux/msm_ipa.h>
+#include <linux/if_vlan.h>
 #include "../ipa_common_i.h"
 #include "../ipa_v3/ipa_pm.h"
 
@@ -160,7 +161,7 @@
 	struct ipa_ioc_rx_intf_prop rx_prop[2];
 	int ret = 0;
 	u32 len;
-
+	bool is_vlan_mode;
 
 	IPA_UC_OFFLOAD_DBG("register interface for netdev %s\n",
 					 inp->netdev_name);
@@ -182,6 +183,41 @@
 		goto fail_alloc;
 	}
 
+	ret = ipa_is_vlan_mode(IPA_VLAN_IF_ETH, &is_vlan_mode);
+	if (ret) {
+		IPA_UC_OFFLOAD_ERR("get vlan mode failed\n");
+		goto fail;
+	}
+
+	if (is_vlan_mode) {
+		if ((inp->hdr_info[0].hdr_type != IPA_HDR_L2_802_1Q) ||
+			(inp->hdr_info[1].hdr_type != IPA_HDR_L2_802_1Q)) {
+			IPA_UC_OFFLOAD_ERR(
+				"hdr_type mismatch in vlan mode\n");
+			WARN_ON_RATELIMIT_IPA(1);
+			ret = -EFAULT;
+			goto fail;
+		}
+		IPA_UC_OFFLOAD_DBG("vlan HEADER type compatible\n");
+
+		if ((inp->hdr_info[0].hdr_len <
+			(ETH_HLEN + VLAN_HLEN)) ||
+			(inp->hdr_info[1].hdr_len <
+			(ETH_HLEN + VLAN_HLEN))) {
+			IPA_UC_OFFLOAD_ERR(
+				"hdr_len shorter than vlan len (%u) (%u)\n"
+				, inp->hdr_info[0].hdr_len
+				, inp->hdr_info[1].hdr_len);
+			WARN_ON_RATELIMIT_IPA(1);
+			ret = -EFAULT;
+			goto fail;
+		}
+
+		IPA_UC_OFFLOAD_DBG("vlan HEADER len compatible (%u) (%u)\n",
+			inp->hdr_info[0].hdr_len,
+			inp->hdr_info[1].hdr_len);
+	}
+
 	if (ipa_commit_partial_hdr(hdr, ntn_ctx->netdev_name, inp->hdr_info)) {
 		IPA_UC_OFFLOAD_ERR("fail to commit partial headers\n");
 		ret = -EFAULT;
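The length check added above enforces that, in VLAN mode, each partial header is large enough to hold an 802.1Q L2 header: ETH_HLEN (14 bytes) plus VLAN_HLEN (4 bytes), i.e. at least 18 bytes. The same test in isolation (a sketch, assuming the standard <linux/if_vlan.h> constants):

/* An 802.1Q header is the Ethernet header plus the 4-byte VLAN tag. */
static bool hdr_len_fits_vlan(u32 hdr_len)
{
	return hdr_len >= ETH_HLEN + VLAN_HLEN;	/* 14 + 4 = 18 */
}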
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c b/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c
index e897ae2..25cf988 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_dp.c
@@ -87,6 +87,8 @@
 
 #define IPA_QMAP_ID_BYTE 0
 
+#define IPA_TX_MAX_DESC (50)
+
 static struct sk_buff *ipa3_get_skb_ipa_rx(unsigned int len, gfp_t flags);
 static void ipa3_replenish_wlan_rx_cache(struct ipa3_sys_context *sys);
 static void ipa3_replenish_rx_cache(struct ipa3_sys_context *sys);
@@ -195,6 +197,15 @@
 	ipa3_wq_write_done_common(sys, tx_pkt);
 }
 
+static void ipa3_tasklet_schd_work(struct work_struct *work)
+{
+	struct ipa3_sys_context *sys;
+
+	sys = container_of(work, struct ipa3_sys_context, tasklet_work);
+	if (atomic_read(&sys->xmit_eot_cnt))
+		tasklet_schedule(&sys->tasklet);
+}
+
 /**
  * ipa_write_done() - this function will be (eventually) called when a Tx
  * operation is complete
@@ -212,6 +223,7 @@
 	struct ipa3_sys_context *sys;
 	struct ipa3_tx_pkt_wrapper *this_pkt;
 	bool xmit_done = false;
+	unsigned int max_tx_pkt = 0;
 
 	sys = (struct ipa3_sys_context *)data;
 	spin_lock_bh(&sys->spinlock);
@@ -223,11 +235,22 @@
 			spin_unlock_bh(&sys->spinlock);
 			ipa3_wq_write_done_common(sys, this_pkt);
 			spin_lock_bh(&sys->spinlock);
+			max_tx_pkt++;
 			if (xmit_done)
 				break;
 		}
+		/* If the tasklet processes TX packets continuously, other
+		 * softirqs cannot run on that core, which can lead to a
+		 * watchdog bark. To avoid this, exit the tasklet after the
+		 * per-run limit and defer the rest to the workqueue below.
+		 */
+		if (max_tx_pkt >= IPA_TX_MAX_DESC)
+			break;
 	}
 	spin_unlock_bh(&sys->spinlock);
+
+	if (max_tx_pkt >= IPA_TX_MAX_DESC)
+		queue_work(sys->tasklet_wq, &sys->tasklet_work);
 }
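The pattern added above is a generic bounded drain: the tasklet handles at most IPA_TX_MAX_DESC completions per run, then defers to process context so other softirqs on the core can make progress. Reduced to its essentials (a sketch with hypothetical names):

#define DRAIN_BUDGET	50

struct drain_ctx {
	atomic_t pending;
	struct tasklet_struct tasklet;
	struct work_struct work;
	struct workqueue_struct *wq;
};

static void drain_work_fn(struct work_struct *w)
{
	struct drain_ctx *c = container_of(w, struct drain_ctx, work);

	if (atomic_read(&c->pending))
		tasklet_schedule(&c->tasklet);	/* resume draining */
}

static void drain_tasklet_fn(unsigned long data)
{
	struct drain_ctx *c = (struct drain_ctx *)data;
	unsigned int done = 0;

	while (atomic_read(&c->pending) && done < DRAIN_BUDGET) {
		atomic_dec(&c->pending);	/* process one completion */
		done++;
	}
	if (done >= DRAIN_BUDGET)
		queue_work(c->wq, &c->work);	/* yield the CPU */
}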
 
 
@@ -1029,6 +1052,16 @@
 			goto fail_wq2;
 		}
 
+		snprintf(buff, IPA_RESOURCE_NAME_MAX, "ipataskletwq%d",
+				sys_in->client);
+		ep->sys->tasklet_wq = alloc_workqueue(buff,
+				WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_SYSFS, 1);
+		if (!ep->sys->tasklet_wq) {
+			IPAERR("failed to create rep wq for client %d\n",
+					sys_in->client);
+			result = -EFAULT;
+			goto fail_wq3;
+		}
 		INIT_LIST_HEAD(&ep->sys->head_desc_list);
 		INIT_LIST_HEAD(&ep->sys->rcycl_list);
 		spin_lock_init(&ep->sys->spinlock);
@@ -1077,6 +1110,8 @@
 	atomic_set(&ep->sys->xmit_eot_cnt, 0);
 	tasklet_init(&ep->sys->tasklet, ipa3_tasklet_write_done,
 			(unsigned long) ep->sys);
+	INIT_WORK(&ep->sys->tasklet_work,
+		ipa3_tasklet_schd_work);
 	ep->skip_ep_cfg = sys_in->skip_ep_cfg;
 	if (ipa3_assign_policy(sys_in, ep->sys)) {
 		IPAERR("failed to sys ctx for client %d\n", sys_in->client);
@@ -1262,6 +1297,8 @@
 fail_gen2:
 	ipa_pm_deregister(ep->sys->pm_hdl);
 fail_pm:
+	destroy_workqueue(ep->sys->tasklet_wq);
+fail_wq3:
 	destroy_workqueue(ep->sys->repl_wq);
 fail_wq2:
 	destroy_workqueue(ep->sys->wq);
@@ -1391,6 +1428,8 @@
 	}
 	if (ep->sys->repl_wq)
 		flush_workqueue(ep->sys->repl_wq);
+	if (ep->sys->tasklet_wq)
+		flush_workqueue(ep->sys->tasklet_wq);
 	if (IPA_CLIENT_IS_CONS(ep->client))
 		ipa3_cleanup_rx(ep->sys);
 
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c b/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
index d8b7bf1..8071128 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_flt.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
  */
 
 #include "ipa_i.h"
@@ -1693,17 +1693,23 @@
 	}
 
 	mutex_lock(&ipa3_ctx->lock);
+
 	for (i = 0; i < hdls->num_rules; i++) {
 		/* if hashing not supported, all tables are non-hash tables*/
 		if (ipa3_ctx->ipa_fltrt_not_hashable)
 			hdls->rules[i].rule.hashable = false;
+
 		__ipa_convert_flt_mdfy_in(hdls->rules[i], &rule);
-		if (__ipa_mdfy_flt_rule(&rule, hdls->ip)) {
-			IPAERR_RL("failed to mdfy flt rule %i\n", i);
+
+		result = __ipa_mdfy_flt_rule(&rule, hdls->ip);
+
+		__ipa_convert_flt_mdfy_out(rule, &hdls->rules[i]);
+
+		if (result) {
+			IPAERR_RL("failed to mdfy flt rule %d\n", i);
 			hdls->rules[i].status = IPA_FLT_STATUS_OF_MDFY_FAILED;
 		} else {
 			hdls->rules[i].status = 0;
-			__ipa_convert_flt_mdfy_out(rule, &hdls->rules[i]);
 		}
 	}
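The reordering above guarantees that __ipa_convert_flt_mdfy_out() runs even when the modify fails, so the caller's rule array always reflects the converted output rather than stale input. The general shape (a sketch with hypothetical helper names):

/* Always copy results back; report failure via the status field. */
result = do_modify(&scratch);
copy_out(&scratch, &user_rule);
user_rule.status = result ? IPA_FLT_STATUS_OF_MDFY_FAILED : 0;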
 
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
index bb6f7a0..a270f44 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
@@ -1031,6 +1031,7 @@
 	struct list_head pending_pkts[GSI_VEID_MAX];
 	atomic_t xmit_eot_cnt;
 	struct tasklet_struct tasklet;
+	struct work_struct tasklet_work;
 
 	/* ordering is important - mutable fields go above */
 	struct ipa3_ep_context *ep;
@@ -1044,6 +1045,7 @@
 	u32 pm_hdl;
 	unsigned int napi_sch_cnt;
 	unsigned int napi_comp_cnt;
+	struct workqueue_struct *tasklet_wq;
 	/* ordering is important - other immutable fields go below */
 };
 
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c b/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
index 461d77c..b41b177 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_nat.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2020, The Linux Foundation. All rights reserved.
  */
 
 #include <linux/device.h>
@@ -1108,7 +1108,7 @@
 	IPADBG("return\n");
 }
 
-static void ipa3_nat_create_modify_pdn_cmd(
+static int ipa3_nat_create_modify_pdn_cmd(
 	struct ipahal_imm_cmd_dma_shared_mem *mem_cmd, bool zero_mem)
 {
 	size_t pdn_entry_size, mem_size;
@@ -1118,6 +1118,10 @@
 	ipahal_nat_entry_size(IPAHAL_NAT_IPV4_PDN, &pdn_entry_size);
 	mem_size = pdn_entry_size * IPA_MAX_PDN_NUM;
 
+	/* Check that the PDN memory base pointer exists before providing
+	 * its physical base address.
+	 */
+	if (!ipa3_ctx->nat_mem.pdn_mem.base)
+		return -EFAULT;
+
 	if (zero_mem && ipa3_ctx->nat_mem.pdn_mem.base)
 		memset(ipa3_ctx->nat_mem.pdn_mem.base, 0, mem_size);
 
@@ -1131,6 +1135,7 @@
 		IPA_MEM_PART(pdn_config_ofst);
 
 	IPADBG("return\n");
+	return 0;
 }
 
 static int ipa3_nat_send_init_cmd(struct ipahal_imm_cmd_ip_v4_nat_init *cmd,
@@ -1202,7 +1207,12 @@
 		}
 
 		/* Copy the PDN config table to SRAM */
-		ipa3_nat_create_modify_pdn_cmd(&mem_cmd, zero_pdn_table);
+		result = ipa3_nat_create_modify_pdn_cmd(&mem_cmd,
+							zero_pdn_table);
+		if (result) {
+			IPAERR(" Fail to create modify pdn command\n");
+			goto destroy_imm_cmd;
+		}
 		cmd_pyld[num_cmd] = ipahal_construct_imm_cmd(
 			IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
 		if (!cmd_pyld[num_cmd]) {
@@ -1694,7 +1704,12 @@
 	/*
 	 * Copy the PDN config table to SRAM
 	 */
-	ipa3_nat_create_modify_pdn_cmd(&mem_cmd, false);
+	result = ipa3_nat_create_modify_pdn_cmd(&mem_cmd, false);
+	if (result) {
+		IPAERR(" Fail to create modify pdn command\n");
+		goto bail;
+	}
 
 	cmd_pyld = ipahal_construct_imm_cmd(
 		IPA_IMM_CMD_DMA_SHARED_MEM, &mem_cmd, false);
@@ -2095,6 +2110,8 @@
 				mld_ptr->index_table_expansion_addr = NULL;
 			}
 
+			dev->is_hw_init           = false;
+			dev->is_mapped            = false;
 			memset(nm_ptr->mem_loc, 0, sizeof(nm_ptr->mem_loc));
 		}
 	}
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c
index 0bef801..965e48a 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipahal/ipahal.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2016-2020, The Linux Foundation. All rights reserved.
  */
 
 #include <linux/debugfs.h>
@@ -1422,12 +1422,16 @@
 			(base + offset);
 
 		ctx->hdr_add.tlv.type = IPA_PROC_CTX_TLV_TYPE_HDR_ADD;
-		ctx->hdr_add.tlv.length = 1;
+		ctx->hdr_add.tlv.length = 2;
 		ctx->hdr_add.tlv.value = hdr_len;
-		ctx->hdr_add.hdr_addr = is_hdr_proc_ctx ? phys_base :
+		hdr_addr = is_hdr_proc_ctx ? phys_base :
 			hdr_base_addr + offset_entry->offset;
 		IPAHAL_DBG("header address 0x%x\n",
 			ctx->hdr_add.hdr_addr);
+		IPAHAL_CP_PROC_CTX_HEADER_UPDATE(ctx->hdr_add.hdr_addr,
+			ctx->hdr_add.hdr_addr_hi, hdr_addr);
+		if (!is_64)
+			ctx->hdr_add.hdr_addr_hi = 0;
 
 		ctx->hdr_add_ex.tlv.type = IPA_PROC_CTX_TLV_TYPE_PROC_CMD;
 		ctx->hdr_add_ex.tlv.length = 1;
diff --git a/drivers/power/supply/qcom/fg-alg.c b/drivers/power/supply/qcom/fg-alg.c
index ead4839..c17c61e 100644
--- a/drivers/power/supply/qcom/fg-alg.c
+++ b/drivers/power/supply/qcom/fg-alg.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.
  */
 
 #define pr_fmt(fmt)	"ALG: %s: " fmt, __func__
@@ -714,7 +714,7 @@
 	pr_debug("Aborting cap_learning\n");
 	cl->active = false;
 	cl->init_cap_uah = 0;
-	mutex_lock(&cl->lock);
+	mutex_unlock(&cl->lock);
 }
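This one-character fix matters more than it looks: the abort path runs with cl->lock presumably taken earlier in the function, so a second mutex_lock() here would deadlock the caller against itself; the exit must release the lock instead. The intended pairing (sketch):

mutex_lock(&cl->lock);
/* ... tear down capacity-learning state under the lock ... */
cl->active = false;
cl->init_cap_uah = 0;
mutex_unlock(&cl->lock);	/* previously a second mutex_lock() */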
 
 /**
diff --git a/drivers/power/supply/qcom/qpnp-fg-gen4.c b/drivers/power/supply/qcom/qpnp-fg-gen4.c
index dac6aaf..9c179be 100644
--- a/drivers/power/supply/qcom/qpnp-fg-gen4.c
+++ b/drivers/power/supply/qcom/qpnp-fg-gen4.c
@@ -6373,12 +6373,7 @@
 	/* Keep MEM_ATTN_IRQ disabled until we require it */
 	vote(chip->mem_attn_irq_en_votable, MEM_ATTN_IRQ_VOTER, false, 0);
 
-	rc = fg_debugfs_create(fg);
-	if (rc < 0) {
-		dev_err(fg->dev, "Error in creating debugfs entries, rc:%d\n",
-			rc);
-		goto exit;
-	}
+	fg_debugfs_create(fg);
 
 	rc = sysfs_create_groups(&fg->dev->kobj, fg_groups);
 	if (rc < 0) {
diff --git a/drivers/power/supply/qcom/qpnp-smblite.c b/drivers/power/supply/qcom/qpnp-smblite.c
index aed333b..5f8c714 100644
--- a/drivers/power/supply/qcom/qpnp-smblite.c
+++ b/drivers/power/supply/qcom/qpnp-smblite.c
@@ -433,6 +433,9 @@
 	case POWER_SUPPLY_PROP_SCOPE:
 		rc = smblite_lib_get_prop_scope(chg, val);
 		break;
+	case POWER_SUPPLY_PROP_FLASH_TRIGGER:
+		rc = schgm_flashlite_get_vreg_ok(chg, &val->intval);
+		break;
 	default:
 		pr_err("get prop %d is not supported in usb\n", psp);
 		rc = -EINVAL;
@@ -526,6 +529,7 @@
 	POWER_SUPPLY_PROP_INPUT_VOLTAGE_SETTLED,
 	POWER_SUPPLY_PROP_FCC_DELTA,
 	POWER_SUPPLY_PROP_CURRENT_MAX,
+	POWER_SUPPLY_PROP_FLASH_TRIGGER,
 };
 
 static int smblite_usb_main_get_prop(struct power_supply *psy,
@@ -960,6 +964,13 @@
 		return rc;
 	}
 
+	rc = smblite_lib_write(chg, TYPE_C_INTERRUPT_EN_CFG_2_REG, 0);
+	if (rc < 0) {
+		dev_err(chg->dev,
+			"Couldn't configure Type-C interrupts rc=%d\n", rc);
+		return rc;
+	}
+
 	rc = smblite_lib_masked_write(chg, TYPE_C_MODE_CFG_REG,
 					EN_SNK_ONLY_BIT, 0);
 	if (rc < 0) {
@@ -969,17 +980,6 @@
 		return rc;
 	}
 
-	/* Enable detection of unoriented debug accessory in source mode */
-	rc = smblite_lib_masked_write(chg, DEBUG_ACCESS_SRC_CFG_REG,
-				 EN_UNORIENTED_DEBUG_ACCESS_SRC_BIT,
-				 EN_UNORIENTED_DEBUG_ACCESS_SRC_BIT);
-	if (rc < 0) {
-		dev_err(chg->dev,
-			"Couldn't configure TYPE_C_DEBUG_ACCESS_SRC_CFG_REG rc=%d\n",
-				rc);
-		return rc;
-	}
-
 	rc = smblite_lib_masked_write(chg, TYPE_C_EXIT_STATE_CFG_REG,
 				SEL_SRC_UPPER_REF_BIT, SEL_SRC_UPPER_REF_BIT);
 	if (rc < 0)
diff --git a/drivers/power/supply/qcom/schgm-flashlite.c b/drivers/power/supply/qcom/schgm-flashlite.c
index 78f3ef3..37ecb83 100644
--- a/drivers/power/supply/qcom/schgm-flashlite.c
+++ b/drivers/power/supply/qcom/schgm-flashlite.c
@@ -85,13 +85,6 @@
 		if (IS_BETWEEN(0, 100, val))
 			chg->flash_disable_soc = (val * 255) / 100;
 	}
-
-	chg->headroom_mode = -EINVAL;
-	rc = of_property_read_u32(node, "qcom,headroom-mode", &val);
-	if (!rc) {
-		if (IS_BETWEEN(FIXED_MODE, ADAPTIVE_MODE, val))
-			chg->headroom_mode = val;
-	}
 }
 
 bool is_flashlite_active(struct smb_charger *chg)
@@ -151,13 +144,6 @@
 	int rc;
 	u8 reg;
 
-	/*
-	 * If torch is configured in default BOOST mode, skip any update in the
-	 * mode configuration.
-	 */
-	if (chg->headroom_mode == FIXED_MODE)
-		return;
-
 	if ((mode != TORCH_BOOST_MODE) && (mode != TORCH_BUCK_MODE))
 		return;
 
@@ -209,31 +195,6 @@
 		}
 	}
 
-	if (chg->headroom_mode != -EINVAL) {
-		/*
-		 * configure headroom management policy for
-		 * flash and torch mode.
-		 */
-		reg = (chg->headroom_mode == FIXED_MODE)
-					? FORCE_FLASH_BOOST_5V_BIT : 0;
-		rc = smblite_lib_write(chg, SCHGM_FORCE_BOOST_CONTROL, reg);
-		if (rc < 0) {
-			pr_err("Couldn't write force boost control reg rc=%d\n",
-					rc);
-			return rc;
-		}
-
-		reg = (chg->headroom_mode == FIXED_MODE)
-					? TORCH_PRIORITY_CONTROL_BIT : 0;
-		rc = smblite_lib_write(chg,
-					SCHGM_TORCH_PRIORITY_CONTROL_REG, reg);
-		if (rc < 0) {
-			pr_err("Couldn't force 5V boost in torch mode rc=%d\n",
-					rc);
-			return rc;
-		}
-	}
-
 	if ((chg->flash_derating_soc != -EINVAL)
 				|| (chg->flash_disable_soc != -EINVAL)) {
 		/* Check if SOC based derating/disable is enabled */
diff --git a/drivers/power/supply/qcom/schgm-flashlite.h b/drivers/power/supply/qcom/schgm-flashlite.h
index d43e545..a994a42 100644
--- a/drivers/power/supply/qcom/schgm-flashlite.h
+++ b/drivers/power/supply/qcom/schgm-flashlite.h
@@ -21,9 +21,6 @@
 
 #define SCHGM_FLASH_STATUS_5_REG		(SCHGM_FLASH_BASE + 0x0B)
 
-#define SCHGM_FORCE_BOOST_CONTROL		(SCHGM_FLASH_BASE + 0x41)
-#define FORCE_FLASH_BOOST_5V_BIT		BIT(0)
-
 #define SCHGM_FLASH_S2_LATCH_RESET_CMD_REG	(SCHGM_FLASH_BASE + 0x44)
 #define FLASH_S2_LATCH_RESET_BIT		BIT(0)
 
diff --git a/drivers/power/supply/qcom/smb1398-charger.c b/drivers/power/supply/qcom/smb1398-charger.c
index 3d6c3ae..2c2e339 100644
--- a/drivers/power/supply/qcom/smb1398-charger.c
+++ b/drivers/power/supply/qcom/smb1398-charger.c
@@ -1624,7 +1624,7 @@
 	 * valid due to the battery discharging later, remove
 	 * vote from CUTOFF_SOC_VOTER.
 	 */
-	if (is_cutoff_soc_reached(chip))
+	if (!is_cutoff_soc_reached(chip))
 		vote(chip->div2_cp_disable_votable, CUTOFF_SOC_VOTER, false, 0);
 
 	rc = power_supply_get_property(chip->usb_psy,
diff --git a/drivers/power/supply/qcom/smblite-lib.c b/drivers/power/supply/qcom/smblite-lib.c
index 365e37e..5c127f0 100644
--- a/drivers/power/supply/qcom/smblite-lib.c
+++ b/drivers/power/supply/qcom/smblite-lib.c
@@ -16,10 +16,9 @@
 #include <linux/ktime.h>
 #include "smblite-lib.h"
 #include "smblite-reg.h"
-#include "schgm-flash.h"
 #include "step-chg-jeita.h"
 #include "storm-watch.h"
-#include "schgm-flash.h"
+#include "schgm-flashlite.h"
 
 #define smblite_lib_err(chg, fmt, ...)		\
 	pr_err("%s: %s: " fmt, chg->name,	\
@@ -433,7 +432,7 @@
 	vote(chg->pl_enable_votable_indirect, USBIN_I_VOTER, false, 0);
 	vote(chg->pl_enable_votable_indirect, USBIN_V_VOTER, false, 0);
 	vote(chg->usb_icl_votable, SW_ICL_MAX_VOTER, true,
-			is_flash_active(chg) ? USBIN_500UA : USBIN_100UA);
+			is_flashlite_active(chg) ? USBIN_500UA : USBIN_100UA);
 
 	/* Remove SW thermal regulation votes */
 	vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER, false, 0);
@@ -522,7 +521,7 @@
 	/* suspend if 25mA or less is requested */
 	bool suspend = (icl_ua <= USBIN_25UA);
 
-	schgm_flash_torch_priority(chg, suspend ? TORCH_BOOST_MODE :
+	schgm_flashlite_torch_priority(chg, suspend ? TORCH_BOOST_MODE :
 							TORCH_BUCK_MODE);
 	/* Do not configure ICL from SW for DAM */
 	if (smblite_lib_get_prop_typec_mode(chg) ==
@@ -2017,7 +2016,7 @@
 
 unsuspend_input:
 		/* Force torch in boost mode to ensure it works with low ICL */
-		schgm_flash_torch_priority(chg, TORCH_BOOST_MODE);
+		schgm_flashlite_torch_priority(chg, TORCH_BOOST_MODE);
 
 		if (chg->aicl_max_reached) {
 			smblite_lib_dbg(chg, PR_MISC,
@@ -2210,7 +2209,7 @@
 						USB_PSY_VOTER)) {
 			/* if flash is active force 500mA */
 			vote(chg->usb_icl_votable, USB_PSY_VOTER, true,
-					is_flash_active(chg) ?
+					is_flashlite_active(chg) ?
 					USBIN_500UA : USBIN_100UA);
 		}
 		vote(chg->usb_icl_votable, SW_ICL_MAX_VOTER, false, 0);
@@ -2516,7 +2515,7 @@
 
 	/* reset input current limit voters */
 	vote(chg->usb_icl_votable, SW_ICL_MAX_VOTER, true,
-			is_flash_active(chg) ? USBIN_500UA : USBIN_100UA);
+			is_flashlite_active(chg) ? USBIN_500UA : USBIN_100UA);
 	vote(chg->usb_icl_votable, USB_PSY_VOTER, false, 0);
 
 	/* reset parallel voters */
@@ -2893,23 +2892,23 @@
 	}
 
 	if (stat & DIE_TEMP_UB_BIT) {
-		icl_ua = get_effective_result(chg->usb_icl_votable)
-				- THERM_REGULATION_STEP_UA;
-
-		/* Decrement ICL by one step */
-		vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER,
-				true, icl_ua - THERM_REGULATION_STEP_UA);
-
 		/* Check if we reached minimum ICL limit */
 		if (icl_ua < USBIN_500UA + THERM_REGULATION_STEP_UA)
 			goto exit;
 
+		/* Decrement ICL by one step */
+		icl_ua -= THERM_REGULATION_STEP_UA;
+		vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER,
+				true, icl_ua);
+
 		goto reschedule;
 	}
 
-	if (stat & DIE_TEMP_LB_BIT) {
+	/* check if DIE_TEMP is below LB */
+	if (!(stat & DIE_TEMP_MASK)) {
+		icl_ua += THERM_REGULATION_STEP_UA;
 		vote(chg->usb_icl_votable, SW_THERM_REGULATION_VOTER,
-				true, icl_ua + THERM_REGULATION_STEP_UA);
+				true, icl_ua);
 
 		/*
 		 * Check if we need further increments:
diff --git a/drivers/power/supply/qcom/smblite-lib.h b/drivers/power/supply/qcom/smblite-lib.h
index 54e8301..0d6e8c3 100644
--- a/drivers/power/supply/qcom/smblite-lib.h
+++ b/drivers/power/supply/qcom/smblite-lib.h
@@ -144,12 +144,9 @@
 	VREG_OK_IRQ,
 	ILIM_S2_IRQ,
 	ILIM_S1_IRQ,
-	VOUT_DOWN_IRQ,
-	VOUT_UP_IRQ,
 	FLASH_STATE_CHANGE_IRQ,
 	TORCH_REQ_IRQ,
 	FLASH_EN_IRQ,
-	SDAM_STS_IRQ,
 	/* END */
 	SMB_IRQ_MAX,
 };
diff --git a/drivers/power/supply/qcom/smblite-reg.h b/drivers/power/supply/qcom/smblite-reg.h
index 199922e..361d0c2f 100644
--- a/drivers/power/supply/qcom/smblite-reg.h
+++ b/drivers/power/supply/qcom/smblite-reg.h
@@ -266,6 +266,7 @@
 #define THERMREG_DISABLED_BIT			BIT(0)
 
 #define DIE_TEMP_STATUS_REG			(MISC_BASE + 0x09)
+#define DIE_TEMP_MASK				GENMASK(3, 0)
 #define DIE_TEMP_SHDN_BIT			BIT(3)
 #define DIE_TEMP_RST_BIT			BIT(2)
 #define DIE_TEMP_UB_BIT				BIT(1)
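With GENMASK(3, 0) covering the SHDN/RST/UB/LB status bits, a fully cleared field means the die temperature sits below every threshold; that is what the new !(stat & DIE_TEMP_MASK) test in smblite-lib.c relies on when deciding to step the ICL back up. Decoded in isolation (sketch):

/* stat is the raw DIE_TEMP_STATUS_REG value. */
static bool die_temp_below_all_thresholds(u8 stat)
{
	return !(stat & DIE_TEMP_MASK);	/* no status bit set */
}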
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f86a6be..54e74ca 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2261,8 +2261,6 @@
 	if (!shost->use_clustering)
 		q->limits.cluster = 0;
 
-	if (shost->inlinecrypt_support)
-		queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, q);
 	/*
 	 * Set a reasonable default alignment:  The larger of 32-byte (dword),
 	 * which is a common minimum for HBAs, and the minimum DMA alignment,
diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index 9dd9167..f1aae8b 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -101,18 +101,6 @@
 	  Select this if you have UFS controller on QCOM chipset.
 	  If unsure, say N.
 
-config SCSI_UFS_QCOM_ICE
-	bool "QCOM specific hooks to Inline Crypto Engine for UFS driver"
-	depends on SCSI_UFS_QCOM && CRYPTO_DEV_QCOM_ICE
-	help
-	  This selects the QCOM specific additions to support Inline Crypto
-	  Engine (ICE).
-	  ICE accelerates the crypto operations and maintains the high UFS
-	  performance.
-
-	  Select this if you have ICE supported for UFS on QCOM chipset.
-	  If unsure, say N.
-
 config SCSI_UFS_TEST
 	tristate "Universal Flash Storage host controller driver unit-tests"
 	depends on SCSI_UFSHCD && IOSCHED_TEST
@@ -143,3 +131,20 @@
 
 	  Select this if you have UFS controller on Hisilicon chipset.
 	  If unsure, say N.
+
+config SCSI_UFS_CRYPTO
+	bool "UFS Crypto Engine Support"
+	depends on SCSI_UFSHCD && BLK_INLINE_ENCRYPTION
+	help
+	  Enable Crypto Engine Support in UFS.
+	  Enabling this makes it possible for the kernel to use the crypto
+	  capabilities of the UFS device (if present) to perform crypto
+	  operations on data being transferred to/from the device.
+
+config SCSI_UFS_CRYPTO_QTI
+	tristate "Vendor specific UFS Crypto Engine Support"
+	depends on SCSI_UFS_CRYPTO
+	help
+	  Enable Vendor Crypto Engine Support in UFS.
+	  Enabling this allows the kernel to use UFS crypto operations defined
+	  and implemented by QTI.
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index 7084ae4..e7294e6 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -11,3 +11,5 @@
 obj-$(CONFIG_SCSI_UFS_TEST) += ufs_test.o
 obj-$(CONFIG_DEBUG_FS) += ufs-debugfs.o ufs-qcom-debugfs.o
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
+ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO_QTI) += ufshcd-crypto-qti.o
diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
index c2cee73..71655be 100644
--- a/drivers/scsi/ufs/ufs-hisi.c
+++ b/drivers/scsi/ufs/ufs-hisi.c
@@ -540,6 +540,14 @@
 	if (!host)
 		return -ENOMEM;
 
+	/*
+	 * Inline crypto is currently broken with ufs-hisi because the keyslots
+	 * overlap with the vendor-specific SYS CTRL registers -- and even if
+	 * software uses only non-overlapping keyslots, the kernel crashes when
+	 * programming a key or a UFS error occurs on the first encrypted I/O.
+	 */
+	hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
+
 	host->hba = hba;
 	ufshcd_set_variant(hba, host);
 
diff --git a/drivers/scsi/ufs/ufs-qcom-ice.c b/drivers/scsi/ufs/ufs-qcom-ice.c
deleted file mode 100644
index 48fd18c..0000000
--- a/drivers/scsi/ufs/ufs-qcom-ice.c
+++ /dev/null
@@ -1,782 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- */
-
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/blkdev.h>
-#include <linux/spinlock.h>
-#include <crypto/ice.h>
-
-#include "ufshcd.h"
-#include "ufs-qcom-ice.h"
-#include "ufs-qcom-debugfs.h"
-
-#define UFS_QCOM_CRYPTO_LABEL "ufs-qcom-crypto"
-/* Timeout waiting for ICE initialization, that requires TZ access */
-#define UFS_QCOM_ICE_COMPLETION_TIMEOUT_MS 500
-
-#define UFS_QCOM_ICE_DEFAULT_DBG_PRINT_EN	0
-
-static struct workqueue_struct *ice_workqueue;
-
-static void ufs_qcom_ice_dump_regs(struct ufs_qcom_host *qcom_host, int offset,
-					int len, char *prefix)
-{
-	print_hex_dump(KERN_ERR, prefix,
-			len > 4 ? DUMP_PREFIX_OFFSET : DUMP_PREFIX_NONE,
-			16, 4, qcom_host->hba->mmio_base + offset, len * 4,
-			false);
-}
-
-void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host)
-{
-	int i;
-
-	if (!(qcom_host->dbg_print_en & UFS_QCOM_DBG_PRINT_ICE_REGS_EN))
-		return;
-
-	ufs_qcom_ice_dump_regs(qcom_host, REG_UFS_QCOM_ICE_CFG, 1,
-			"REG_UFS_QCOM_ICE_CFG ");
-	for (i = 0; i < NUM_QCOM_ICE_CTRL_INFO_n_REGS; i++) {
-		pr_err("REG_UFS_QCOM_ICE_CTRL_INFO_1_%d = 0x%08X\n", i,
-			ufshcd_readl(qcom_host->hba,
-				(REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 8 * i)));
-
-		pr_err("REG_UFS_QCOM_ICE_CTRL_INFO_2_%d = 0x%08X\n", i,
-			ufshcd_readl(qcom_host->hba,
-				(REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 8 * i)));
-	}
-
-	if (qcom_host->ice.pdev && qcom_host->ice.vops &&
-	    qcom_host->ice.vops->debug)
-		qcom_host->ice.vops->debug(qcom_host->ice.pdev);
-}
-
-static void ufs_qcom_ice_error_cb(void *host_ctrl, u32 error)
-{
-	struct ufs_qcom_host *qcom_host = (struct ufs_qcom_host *)host_ctrl;
-
-	dev_err(qcom_host->hba->dev, "%s: Error in ice operation 0x%x\n",
-		__func__, error);
-
-	if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_ACTIVE)
-		qcom_host->ice.state = UFS_QCOM_ICE_STATE_DISABLED;
-}
-
-static struct platform_device *ufs_qcom_ice_get_pdevice(struct device *ufs_dev)
-{
-	struct device_node *node;
-	struct platform_device *ice_pdev = NULL;
-
-	node = of_parse_phandle(ufs_dev->of_node, UFS_QCOM_CRYPTO_LABEL, 0);
-
-	if (!node) {
-		dev_err(ufs_dev, "%s: ufs-qcom-crypto property not specified\n",
-			__func__);
-		goto out;
-	}
-
-	ice_pdev = qcom_ice_get_pdevice(node);
-out:
-	return ice_pdev;
-}
-
-static
-struct qcom_ice_variant_ops *ufs_qcom_ice_get_vops(struct device *ufs_dev)
-{
-	struct qcom_ice_variant_ops *ice_vops = NULL;
-	struct device_node *node;
-
-	node = of_parse_phandle(ufs_dev->of_node, UFS_QCOM_CRYPTO_LABEL, 0);
-
-	if (!node) {
-		dev_err(ufs_dev, "%s: ufs-qcom-crypto property not specified\n",
-			__func__);
-		goto out;
-	}
-
-	ice_vops = qcom_ice_get_variant_ops(node);
-
-	if (!ice_vops)
-		dev_err(ufs_dev, "%s: invalid ice_vops\n", __func__);
-
-	of_node_put(node);
-out:
-	return ice_vops;
-}
-
-/**
- * ufs_qcom_ice_get_dev() - sets pointers to ICE data structs in UFS QCom host
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *
- * Sets ICE platform device pointer and ICE vops structure
- * corresponding to the current UFS device.
- *
- * Return: -EINVAL in-case of invalid input parameters:
- *  qcom_host, qcom_host->hba or qcom_host->hba->dev
- *         -ENODEV in-case ICE device is not required
- *         -EPROBE_DEFER in-case ICE is required and hasn't been probed yet
- *         0 otherwise
- */
-int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host)
-{
-	struct device *ufs_dev;
-	int err = 0;
-
-	if (!qcom_host || !qcom_host->hba || !qcom_host->hba->dev) {
-		pr_err("%s: invalid qcom_host %p or qcom_host->hba or qcom_host->hba->dev\n",
-			__func__, qcom_host);
-		err = -EINVAL;
-		goto out;
-	}
-
-	ufs_dev = qcom_host->hba->dev;
-
-	qcom_host->ice.vops  = ufs_qcom_ice_get_vops(ufs_dev);
-	qcom_host->ice.pdev = ufs_qcom_ice_get_pdevice(ufs_dev);
-
-	if (qcom_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
-		dev_err(ufs_dev, "%s: ICE device not probed yet\n",
-			__func__);
-		qcom_host->ice.pdev = NULL;
-		qcom_host->ice.vops = NULL;
-		err = -EPROBE_DEFER;
-		goto out;
-	}
-
-	if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
-		dev_err(ufs_dev, "%s: invalid platform device %p or vops %p\n",
-			__func__, qcom_host->ice.pdev, qcom_host->ice.vops);
-		qcom_host->ice.pdev = NULL;
-		qcom_host->ice.vops = NULL;
-		err = -ENODEV;
-		goto out;
-	}
-
-	qcom_host->ice.state = UFS_QCOM_ICE_STATE_DISABLED;
-
-out:
-	return err;
-}
-
-static void ufs_qcom_ice_cfg_work(struct work_struct *work)
-{
-	unsigned long flags;
-	struct ufs_qcom_host *qcom_host =
-		container_of(work, struct ufs_qcom_host, ice_cfg_work);
-
-	if (!qcom_host->ice.vops->config_start)
-		return;
-
-	spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
-	if (!qcom_host->req_pending ||
-			ufshcd_is_shutdown_ongoing(qcom_host->hba)) {
-		qcom_host->work_pending = false;
-		spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
-		return;
-	}
-	spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
-
-	/*
-	 * config_start is called again as previous attempt returned -EAGAIN,
-	 * this call shall now take care of the necessary key setup.
-	 */
-	qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
-		qcom_host->req_pending, NULL, false);
-
-	spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
-	qcom_host->req_pending = NULL;
-	qcom_host->work_pending = false;
-	spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
-}
-
-/**
- * ufs_qcom_ice_init() - initializes the ICE-UFS interface and ICE device
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *		qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- *		be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- *         0 otherwise
- */
-int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host)
-{
-	struct device *ufs_dev = qcom_host->hba->dev;
-	int err;
-
-	err = qcom_host->ice.vops->init(qcom_host->ice.pdev,
-				qcom_host,
-				ufs_qcom_ice_error_cb);
-	if (err) {
-		dev_err(ufs_dev, "%s: ice init failed. err = %d\n",
-			__func__, err);
-		goto out;
-	} else {
-		qcom_host->ice.state = UFS_QCOM_ICE_STATE_ACTIVE;
-	}
-
-	qcom_host->dbg_print_en |= UFS_QCOM_ICE_DEFAULT_DBG_PRINT_EN;
-	if (!ice_workqueue) {
-		ice_workqueue = alloc_workqueue("ice-set-key",
-			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE, 0);
-		if (!ice_workqueue) {
-			dev_err(ufs_dev, "%s: workqueue allocation failed.\n",
-			__func__);
-			err = -ENOMEM;
-			goto out;
-		}
-	}
-	if (ice_workqueue) {
-		if (!qcom_host->is_ice_cfg_work_set) {
-			INIT_WORK(&qcom_host->ice_cfg_work,
-					ufs_qcom_ice_cfg_work);
-			qcom_host->is_ice_cfg_work_set = true;
-		}
-	}
-
-out:
-	return err;
-}
-
-static inline bool ufs_qcom_is_data_cmd(char cmd_op, bool is_write)
-{
-	if (is_write) {
-		if (cmd_op == WRITE_6 || cmd_op == WRITE_10 ||
-		    cmd_op == WRITE_16)
-			return true;
-	} else {
-		if (cmd_op == READ_6 || cmd_op == READ_10 ||
-		    cmd_op == READ_16)
-			return true;
-	}
-
-	return false;
-}
-
-int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
-		struct scsi_cmnd *cmd, u8 *cc_index, bool *enable)
-{
-	struct ice_data_setting ice_set;
-	char cmd_op = cmd->cmnd[0];
-	int err;
-	unsigned long flags;
-
-	if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
-		dev_dbg(qcom_host->hba->dev, "%s: ice device is not enabled\n",
-			__func__);
-		return 0;
-	}
-
-	if (qcom_host->ice.vops->config_start) {
-		memset(&ice_set, 0, sizeof(ice_set));
-
-		spin_lock_irqsave(
-			&qcom_host->ice_work_lock, flags);
-
-		err = qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
-			cmd->request, &ice_set, true);
-		if (err) {
-			/*
-			 * config_start() returns -EAGAIN when a key slot is
-			 * available but still not configured. As configuration
-			 * requires a non-atomic context, this means we should
-			 * call the function again from the worker thread to do
-			 * the configuration. For this request the error will
-			 * propagate so it will be re-queued.
-			 */
-			if (err == -EAGAIN) {
-				if (!ice_workqueue) {
-					spin_unlock_irqrestore(
-					&qcom_host->ice_work_lock,
-					flags);
-
-					dev_err(qcom_host->hba->dev,
-						"%s: error %d workqueue NULL\n",
-						__func__, err);
-					return -EINVAL;
-				}
-
-				dev_dbg(qcom_host->hba->dev,
-					"%s: scheduling task for ice setup\n",
-					__func__);
-
-				if (!qcom_host->work_pending) {
-					qcom_host->req_pending = cmd->request;
-
-					if (!queue_work(ice_workqueue,
-						&qcom_host->ice_cfg_work)) {
-						qcom_host->req_pending = NULL;
-
-						spin_unlock_irqrestore(
-						&qcom_host->ice_work_lock,
-						flags);
-
-						return err;
-					}
-					qcom_host->work_pending = true;
-				}
-			} else {
-				if (err != -EBUSY)
-					dev_err(qcom_host->hba->dev,
-						"%s: error in ice_vops->config %d\n",
-						__func__, err);
-			}
-
-			spin_unlock_irqrestore(&qcom_host->ice_work_lock,
-				flags);
-
-			return err;
-		}
-
-		spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
-
-		if (ufs_qcom_is_data_cmd(cmd_op, true))
-			*enable = !ice_set.encr_bypass;
-		else if (ufs_qcom_is_data_cmd(cmd_op, false))
-			*enable = !ice_set.decr_bypass;
-
-		if (ice_set.crypto_data.key_index >= 0)
-			*cc_index = (u8)ice_set.crypto_data.key_index;
-	}
-	return 0;
-}
-
-/**
- * ufs_qcom_ice_cfg_start() - starts configuring UFS's ICE registers
- *							  for an ICE transaction
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *		qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- *		be valid pointers.
- * @cmd:	Pointer to a valid scsi command. cmd->request should also be
- *              a valid pointer.
- *
- * Return: -EINVAL in-case of an error
- *         0 otherwise
- */
-int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
-		struct scsi_cmnd *cmd)
-{
-	struct device *dev = qcom_host->hba->dev;
-	int err = 0;
-	struct ice_data_setting ice_set;
-	unsigned int slot = 0;
-	sector_t lba = 0;
-	unsigned int ctrl_info_val = 0;
-	unsigned int bypass = 0;
-	struct request *req;
-	char cmd_op;
-	unsigned long flags;
-
-	if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
-		dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
-		goto out;
-	}
-
-	if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
-		dev_err(dev, "%s: ice state (%d) is not active\n",
-			__func__, qcom_host->ice.state);
-		return -EINVAL;
-	}
-
-	if (qcom_host->hw_ver.major >= 0x3) {
-		/*
-		 * ICE 3.0 crypto sequences were changed,
-		 * CTRL_INFO register no longer exists
-		 * and doesn't need to be configured.
-		 * The configuration is done via utrd.
-		 */
-		return 0;
-	}
-
-	req = cmd->request;
-	if (req->bio)
-		lba = (req->bio->bi_iter.bi_sector) >>
-			UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
-
-	slot = req->tag;
-	if (slot < 0 || slot > qcom_host->hba->nutrs) {
-		dev_err(dev, "%s: slot (%d) is out of boundaries (0...%d)\n",
-			__func__, slot, qcom_host->hba->nutrs);
-		return -EINVAL;
-	}
-
-
-	memset(&ice_set, 0, sizeof(ice_set));
-	if (qcom_host->ice.vops->config_start) {
-
-		spin_lock_irqsave(
-			&qcom_host->ice_work_lock, flags);
-
-		err = qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
-							req, &ice_set, true);
-		if (err) {
-			/*
-			 * config_start() returns -EAGAIN when a key slot is
-			 * available but still not configured. As configuration
-			 * requires a non-atomic context, this means we should
-			 * call the function again from the worker thread to do
-			 * the configuration. For this request the error will
-			 * propagate so it will be re-queued.
-			 */
-			if (err == -EAGAIN) {
-				if (!ice_workqueue) {
-					spin_unlock_irqrestore(
-					&qcom_host->ice_work_lock,
-					flags);
-
-					dev_err(qcom_host->hba->dev,
-						"%s: error %d workqueue NULL\n",
-						__func__, err);
-					return -EINVAL;
-				}
-
-				dev_dbg(qcom_host->hba->dev,
-					"%s: scheduling task for ice setup\n",
-					__func__);
-
-				if (!qcom_host->work_pending) {
-
-					qcom_host->req_pending = cmd->request;
-					if (!queue_work(ice_workqueue,
-						&qcom_host->ice_cfg_work)) {
-						qcom_host->req_pending = NULL;
-
-						spin_unlock_irqrestore(
-						&qcom_host->ice_work_lock,
-						flags);
-
-						return err;
-					}
-					qcom_host->work_pending = true;
-				}
-
-			} else {
-				if (err != -EBUSY)
-					dev_err(qcom_host->hba->dev,
-						"%s: error in ice_vops->config %d\n",
-						__func__, err);
-			}
-
-			spin_unlock_irqrestore(
-				&qcom_host->ice_work_lock, flags);
-
-			return err;
-		}
-
-		spin_unlock_irqrestore(
-			&qcom_host->ice_work_lock, flags);
-	}
-
-	cmd_op = cmd->cmnd[0];
-
-#define UFS_QCOM_DIR_WRITE	true
-#define UFS_QCOM_DIR_READ	false
-	/* if non data command, bypass shall be enabled */
-	if (!ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_WRITE) &&
-	    !ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_READ))
-		bypass = UFS_QCOM_ICE_ENABLE_BYPASS;
-	/* if writing data command */
-	else if (ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_WRITE))
-		bypass = ice_set.encr_bypass ? UFS_QCOM_ICE_ENABLE_BYPASS :
-						UFS_QCOM_ICE_DISABLE_BYPASS;
-	/* if reading data command */
-	else if (ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_READ))
-		bypass = ice_set.decr_bypass ? UFS_QCOM_ICE_ENABLE_BYPASS :
-						UFS_QCOM_ICE_DISABLE_BYPASS;
-
-
-	/* Configure ICE index */
-	ctrl_info_val =
-		(ice_set.crypto_data.key_index &
-		 MASK_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX)
-		 << OFFSET_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX;
-
-	/* Configure data unit size of transfer request */
-	ctrl_info_val |=
-		UFS_QCOM_ICE_TR_DATA_UNIT_4_KB
-		 << OFFSET_UFS_QCOM_ICE_CTRL_INFO_CDU;
-
-	/* Configure ICE bypass mode */
-	ctrl_info_val |=
-		(bypass & MASK_UFS_QCOM_ICE_CTRL_INFO_BYPASS)
-		 << OFFSET_UFS_QCOM_ICE_CTRL_INFO_BYPASS;
-
-	if (qcom_host->hw_ver.major == 0x1) {
-		ufshcd_writel(qcom_host->hba, lba,
-			     (REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 8 * slot));
-
-		ufshcd_writel(qcom_host->hba, ctrl_info_val,
-			     (REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 8 * slot));
-	}
-	if (qcom_host->hw_ver.major == 0x2) {
-		ufshcd_writel(qcom_host->hba, (lba & 0xFFFFFFFF),
-			     (REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 16 * slot));
-
-		ufshcd_writel(qcom_host->hba, ((lba >> 32) & 0xFFFFFFFF),
-			     (REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 16 * slot));
-
-		ufshcd_writel(qcom_host->hba, ctrl_info_val,
-			     (REG_UFS_QCOM_ICE_CTRL_INFO_3_n + 16 * slot));
-	}
-
-	/*
-	 * Ensure UFS-ICE registers are being configured
-	 * before next operation, otherwise UFS Host Controller might
-	 * set get errors
-	 */
-	mb();
-out:
-	return err;
-}
-
-/**
- * ufs_qcom_ice_cfg_end() - finishes configuring UFS's ICE registers
- *							for an ICE transaction
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *				qcom_host, qcom_host->hba and
- *				qcom_host->hba->dev should all
- *				be valid pointers.
- * @cmd:	Pointer to a valid scsi command. cmd->request should also be
- *              a valid pointer.
- *
- * Return: -EINVAL in-case of an error
- *         0 otherwise
- */
-int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host, struct request *req)
-{
-	int err = 0;
-	struct device *dev = qcom_host->hba->dev;
-
-	if (qcom_host->ice.vops->config_end) {
-		err = qcom_host->ice.vops->config_end(qcom_host->ice.pdev, req);
-		if (err) {
-			dev_err(dev, "%s: error in ice_vops->config_end %d\n",
-				__func__, err);
-			return err;
-		}
-	}
-
-	return 0;
-}
-
-/**
- * ufs_qcom_ice_reset() - resets UFS-ICE interface and ICE device
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *		qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- *		be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- *         0 otherwise
- */
-int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host)
-{
-	struct device *dev = qcom_host->hba->dev;
-	int err = 0;
-
-	if (!qcom_host->ice.pdev) {
-		dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
-		goto out;
-	}
-
-	if (!qcom_host->ice.vops) {
-		dev_err(dev, "%s: invalid ice_vops\n", __func__);
-		return -EINVAL;
-	}
-
-	if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE)
-		goto out;
-
-	if (qcom_host->ice.vops->reset) {
-		err = qcom_host->ice.vops->reset(qcom_host->ice.pdev);
-		if (err) {
-			dev_err(dev, "%s: ice_vops->reset failed. err %d\n",
-				__func__, err);
-			goto out;
-		}
-	}
-
-	if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
-		dev_err(qcom_host->hba->dev,
-			"%s: error. ice.state (%d) is not in active state\n",
-			__func__, qcom_host->ice.state);
-		err = -EINVAL;
-	}
-
-out:
-	return err;
-}
-
-/**
- * ufs_qcom_ice_resume() - resumes UFS-ICE interface and ICE device from power
- * collapse
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *		qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- *		be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- *         0 otherwise
- */
-int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host)
-{
-	struct device *dev = qcom_host->hba->dev;
-	int err = 0;
-
-	if (!qcom_host->ice.pdev) {
-		dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
-		goto out;
-	}
-
-	if (qcom_host->ice.state !=
-			UFS_QCOM_ICE_STATE_SUSPENDED) {
-		goto out;
-	}
-
-	if (!qcom_host->ice.vops) {
-		dev_err(dev, "%s: invalid ice_vops\n", __func__);
-		return -EINVAL;
-	}
-
-	if (qcom_host->ice.vops->resume) {
-		err = qcom_host->ice.vops->resume(qcom_host->ice.pdev);
-		if (err) {
-			dev_err(dev, "%s: ice_vops->resume failed. err %d\n",
-				__func__, err);
-			return err;
-		}
-	}
-	qcom_host->ice.state = UFS_QCOM_ICE_STATE_ACTIVE;
-out:
-	return err;
-}
-
-/**
- * ufs_qcom_is_ice_busy() - lets the caller of the function know if
- * there is any ongoing operation in ICE in workqueue context.
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *		qcom_host should be a valid pointer.
- *
- * Return:	1 if ICE is busy, 0 if it is free.
- *		-EINVAL in case of error.
- */
-int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host)
-{
-	if (!qcom_host) {
-		pr_err("%s: invalid qcom_host\n", __func__);
-		return -EINVAL;
-	}
-
-	if (qcom_host->req_pending)
-		return 1;
-	else
-		return 0;
-}
-
-/**
- * ufs_qcom_ice_suspend() - suspends UFS-ICE interface and ICE device
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *		qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- *		be valid pointers.
- *
- * Return: -EINVAL in-case of an error
- *         0 otherwise
- */
-int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host)
-{
-	struct device *dev = qcom_host->hba->dev;
-	int err = 0;
-
-	if (!qcom_host->ice.pdev) {
-		dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
-		goto out;
-	}
-
-	if (qcom_host->ice.vops->suspend) {
-		err = qcom_host->ice.vops->suspend(qcom_host->ice.pdev);
-		if (err) {
-			dev_err(qcom_host->hba->dev,
-				"%s: ice_vops->suspend failed. err %d\n",
-				__func__, err);
-			return -EINVAL;
-		}
-	}
-
-	if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_ACTIVE) {
-		qcom_host->ice.state = UFS_QCOM_ICE_STATE_SUSPENDED;
-	} else if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_DISABLED) {
-		dev_err(qcom_host->hba->dev,
-				"%s: ice state is invalid: disabled\n",
-				__func__);
-		err = -EINVAL;
-	}
-
-out:
-	return err;
-}
-
-/**
- * ufs_qcom_ice_get_status() - returns the status of an ICE transaction
- * @qcom_host:	Pointer to a UFS QCom internal host structure.
- *		qcom_host, qcom_host->hba and qcom_host->hba->dev should all
- *		be valid pointers.
- * @ice_status:	Pointer to a valid output parameter.
- *		< 0 in case of ICE transaction failure.
- *		0 otherwise.
- *
- * Return: -EINVAL in-case of an error
- *         0 otherwise
- */
-int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host, int *ice_status)
-{
-	struct device *dev = NULL;
-	int err = 0;
-	int stat = -EINVAL;
-
-	*ice_status = 0;
-
-	dev = qcom_host->hba->dev;
-	if (!dev) {
-		err = -EINVAL;
-		goto out;
-	}
-
-	if (!qcom_host->ice.pdev) {
-		dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
-		goto out;
-	}
-
-	if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
-		err = -EINVAL;
-		goto out;
-	}
-
-	if (!qcom_host->ice.vops) {
-		dev_err(dev, "%s: invalid ice_vops\n", __func__);
-		return -EINVAL;
-	}
-
-	if (qcom_host->ice.vops->status) {
-		stat = qcom_host->ice.vops->status(qcom_host->ice.pdev);
-		if (stat < 0) {
-			dev_err(dev, "%s: ice_vops->status failed. stat %d\n",
-				__func__, stat);
-			err = -EINVAL;
-			goto out;
-		}
-
-		*ice_status = stat;
-	}
-
-out:
-	return err;
-}
diff --git a/drivers/scsi/ufs/ufs-qcom-ice.h b/drivers/scsi/ufs/ufs-qcom-ice.h
deleted file mode 100644
index 2b42459..0000000
--- a/drivers/scsi/ufs/ufs-qcom-ice.h
+++ /dev/null
@@ -1,137 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- */
-
-#ifndef _UFS_QCOM_ICE_H_
-#define _UFS_QCOM_ICE_H_
-
-#include <scsi/scsi_cmnd.h>
-
-#include "ufs-qcom.h"
-
-/*
- * UFS host controller ICE registers. There are n [0..31]
- * of each of these registers
- */
-enum {
-	REG_UFS_QCOM_ICE_CFG		         = 0x2200,
-	REG_UFS_QCOM_ICE_CTRL_INFO_1_n           = 0x2204,
-	REG_UFS_QCOM_ICE_CTRL_INFO_2_n           = 0x2208,
-	REG_UFS_QCOM_ICE_CTRL_INFO_3_n           = 0x220C,
-};
-#define NUM_QCOM_ICE_CTRL_INFO_n_REGS		32
-
-/* UFS QCOM ICE CTRL Info register offset */
-enum {
-	OFFSET_UFS_QCOM_ICE_CTRL_INFO_BYPASS     = 0,
-	OFFSET_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX  = 0x1,
-	OFFSET_UFS_QCOM_ICE_CTRL_INFO_CDU        = 0x6,
-};
-
-/* UFS QCOM ICE CTRL Info register masks */
-enum {
-	MASK_UFS_QCOM_ICE_CTRL_INFO_BYPASS     = 0x1,
-	MASK_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX  = 0x1F,
-	MASK_UFS_QCOM_ICE_CTRL_INFO_CDU        = 0x8,
-};
-
-/* UFS QCOM ICE encryption/decryption bypass state */
-enum {
-	UFS_QCOM_ICE_DISABLE_BYPASS  = 0,
-	UFS_QCOM_ICE_ENABLE_BYPASS = 1,
-};
-
-/* UFS QCOM ICE Crypto Data Unit of target DUN of Transfer Request */
-enum {
-	UFS_QCOM_ICE_TR_DATA_UNIT_512_B          = 0,
-	UFS_QCOM_ICE_TR_DATA_UNIT_1_KB           = 1,
-	UFS_QCOM_ICE_TR_DATA_UNIT_2_KB           = 2,
-	UFS_QCOM_ICE_TR_DATA_UNIT_4_KB           = 3,
-	UFS_QCOM_ICE_TR_DATA_UNIT_8_KB           = 4,
-	UFS_QCOM_ICE_TR_DATA_UNIT_16_KB          = 5,
-	UFS_QCOM_ICE_TR_DATA_UNIT_32_KB          = 6,
-};
-
-/* UFS QCOM ICE internal state */
-enum {
-	UFS_QCOM_ICE_STATE_DISABLED   = 0,
-	UFS_QCOM_ICE_STATE_ACTIVE     = 1,
-	UFS_QCOM_ICE_STATE_SUSPENDED  = 2,
-};
-
-#ifdef CONFIG_SCSI_UFS_QCOM_ICE
-int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
-			   struct scsi_cmnd *cmd, u8 *cc_index, bool *enable);
-int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
-		struct scsi_cmnd *cmd);
-int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host,
-		struct request *req);
-int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host, int *ice_status);
-void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host);
-int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host);
-#else
-inline int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host)
-{
-	if (qcom_host) {
-		qcom_host->ice.pdev = NULL;
-		qcom_host->ice.vops = NULL;
-	}
-	return -ENODEV;
-}
-inline int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host)
-{
-	return 0;
-}
-inline int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
-					struct scsi_cmnd *cmd)
-{
-	return 0;
-}
-inline int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host,
-					struct request *req)
-{
-	return 0;
-}
-inline int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host)
-{
-	return 0;
-}
-inline int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host)
-{
-	return 0;
-}
-inline int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host)
-{
-	return 0;
-}
-inline int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host,
-				   int *ice_status)
-{
-	return 0;
-}
-inline void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host)
-{
-}
-static inline int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host)
-{
-	return 0;
-}
-#endif /* CONFIG_SCSI_UFS_QCOM_ICE */
-
-#endif /* UFS_QCOM_ICE_H_ */
diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
index 63918b0..982968f 100644
--- a/drivers/scsi/ufs/ufs-qcom.c
+++ b/drivers/scsi/ufs/ufs-qcom.c
@@ -28,9 +28,9 @@
 #include "unipro.h"
 #include "ufs-qcom.h"
 #include "ufshci.h"
-#include "ufs-qcom-ice.h"
 #include "ufs-qcom-debugfs.h"
 #include "ufs_quirks.h"
+#include "ufshcd-crypto-qti.h"
 
 #define MAX_PROP_SIZE		   32
 #define VDDP_REF_CLK_MIN_UV        1200000
@@ -408,15 +408,6 @@
 		 * is initialized.
 		 */
 		err = ufs_qcom_enable_lane_clks(host);
-		if (!err && host->ice.pdev) {
-			err = ufs_qcom_ice_init(host);
-			if (err) {
-				dev_err(hba->dev, "%s: ICE init failed (%d)\n",
-					__func__, err);
-				err = -EINVAL;
-			}
-		}
-
 		break;
 	case POST_CHANGE:
 		/* check if UFS PHY moved from DISABLED to HIBERN8 */
@@ -847,11 +838,11 @@
 		if (host->vddp_ref_clk && ufs_qcom_is_link_off(hba))
 			ret = ufs_qcom_disable_vreg(hba->dev,
 					host->vddp_ref_clk);
+
 		if (host->vccq_parent && !hba->auto_bkops_enabled)
 			ufs_qcom_config_vreg(hba->dev,
 					host->vccq_parent, false);
 
-		ufs_qcom_ice_suspend(host);
 		if (ufs_qcom_is_link_off(hba)) {
 			/* Assert PHY soft reset */
 			ufs_qcom_assert_reset(hba);
@@ -891,13 +882,6 @@
 	if (err)
 		goto out;
 
-	err = ufs_qcom_ice_resume(host);
-	if (err) {
-		dev_err(hba->dev, "%s: ufs_qcom_ice_resume failed, err = %d\n",
-			__func__, err);
-		goto out;
-	}
-
 	hba->is_sys_suspended = false;
 
 out:
@@ -937,104 +921,6 @@
 	return ret;
 }
 
-#ifdef CONFIG_SCSI_UFS_QCOM_ICE
-static int ufs_qcom_crypto_req_setup(struct ufs_hba *hba,
-	struct ufshcd_lrb *lrbp, u8 *cc_index, bool *enable, u64 *dun)
-{
-	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-	struct request *req;
-	int ret;
-
-	if (lrbp->cmd && lrbp->cmd->request)
-		req = lrbp->cmd->request;
-	else
-		return 0;
-
-	/* Use request LBA or given dun as the DUN value */
-	if (req->bio) {
-#ifdef CONFIG_PFK
-		if (bio_dun(req->bio)) {
-			/* dun @bio can be split, so we have to adjust offset */
-			*dun = bio_dun(req->bio);
-		} else {
-			*dun = req->bio->bi_iter.bi_sector;
-			*dun >>= UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
-		}
-#else
-		*dun = req->bio->bi_iter.bi_sector;
-		*dun >>= UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
-#endif
-	}
-	ret = ufs_qcom_ice_req_setup(host, lrbp->cmd, cc_index, enable);
-
-	return ret;
-}
-
-static
-int ufs_qcom_crytpo_engine_cfg_start(struct ufs_hba *hba, unsigned int task_tag)
-{
-	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-	struct ufshcd_lrb *lrbp = &hba->lrb[task_tag];
-	int err = 0;
-
-	if (!host->ice.pdev ||
-	    !lrbp->cmd ||
-		(lrbp->command_type != UTP_CMD_TYPE_SCSI &&
-		 lrbp->command_type != UTP_CMD_TYPE_UFS_STORAGE))
-		goto out;
-
-	err = ufs_qcom_ice_cfg_start(host, lrbp->cmd);
-out:
-	return err;
-}
-
-static
-int ufs_qcom_crytpo_engine_cfg_end(struct ufs_hba *hba,
-		struct ufshcd_lrb *lrbp, struct request *req)
-{
-	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-	int err = 0;
-
-	if (!host->ice.pdev || (lrbp->command_type != UTP_CMD_TYPE_SCSI &&
-		lrbp->command_type != UTP_CMD_TYPE_UFS_STORAGE))
-		goto out;
-
-	err = ufs_qcom_ice_cfg_end(host, req);
-out:
-	return err;
-}
-
-static
-int ufs_qcom_crytpo_engine_reset(struct ufs_hba *hba)
-{
-	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-	int err = 0;
-
-	if (!host->ice.pdev)
-		goto out;
-
-	err = ufs_qcom_ice_reset(host);
-out:
-	return err;
-}
-
-static int ufs_qcom_crypto_engine_get_status(struct ufs_hba *hba, u32 *status)
-{
-	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-
-	if (!status)
-		return -EINVAL;
-
-	return ufs_qcom_ice_get_status(host, status);
-}
-#else /* !CONFIG_SCSI_UFS_QCOM_ICE */
-#define ufs_qcom_crypto_req_setup		NULL
-#define ufs_qcom_crytpo_engine_cfg_start	NULL
-#define ufs_qcom_crytpo_engine_cfg_end		NULL
-#define ufs_qcom_crytpo_engine_reset		NULL
-#define ufs_qcom_crypto_engine_get_status	NULL
-#endif /* CONFIG_SCSI_UFS_QCOM_ICE */
-
 struct ufs_qcom_dev_params {
 	u32 pwm_rx_gear;	/* pwm rx gear to work in */
 	u32 pwm_tx_gear;	/* pwm tx gear to work in */
@@ -1574,6 +1460,12 @@
 
 	if (host->disable_lpm)
 		hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8;
+	/*
+	 * Inline crypto is currently broken with ufs-qcom at least because the
+	 * device tree doesn't include the crypto registers.  There are likely
+	 * to be other issues that will need to be addressed too.
+	 */
+	//hba->quirks |= UFSHCD_QUIRK_BROKEN_CRYPTO;
 }
 
 static void ufs_qcom_set_caps(struct ufs_hba *hba)
@@ -1642,14 +1534,7 @@
 		if (ufshcd_is_hs_mode(&hba->pwr_info))
 			ufs_qcom_dev_ref_clk_ctrl(host, true);
 
-		err = ufs_qcom_ice_resume(host);
-		if (err)
-			goto out;
 	} else if (!on && (status == PRE_CHANGE)) {
-		err = ufs_qcom_ice_suspend(host);
-		if (err)
-			goto out;
-
 		/*
 		 * If auto hibern8 is enabled then the link will already
 		 * be in hibern8 state and the ref clock can be gated.
@@ -2227,36 +2112,9 @@
 
 	/* Make a two way bind between the qcom host and the hba */
 	host->hba = hba;
-	spin_lock_init(&host->ice_work_lock);
 
 	ufshcd_set_variant(hba, host);
 
-	err = ufs_qcom_ice_get_dev(host);
-	if (err == -EPROBE_DEFER) {
-		/*
-		 * UFS driver might be probed before ICE driver does.
-		 * In that case we would like to return EPROBE_DEFER code
-		 * in order to delay its probing.
-		 */
-		dev_err(dev, "%s: required ICE device not probed yet err = %d\n",
-			__func__, err);
-		goto out_variant_clear;
-
-	} else if (err == -ENODEV) {
-		/*
-		 * ICE device is not enabled in DTS file. No need for further
-		 * initialization of ICE driver.
-		 */
-		dev_warn(dev, "%s: ICE device is not enabled\n",
-			__func__);
-	} else if (err) {
-		dev_err(dev, "%s: ufs_qcom_ice_get_dev failed %d\n",
-			__func__, err);
-		goto out_variant_clear;
-	} else {
-		hba->host->inlinecrypt_support = 1;
-	}
-
 	host->generic_phy = devm_phy_get(dev, "ufsphy");
 
 	if (host->generic_phy == ERR_PTR(-EPROBE_DEFER)) {
@@ -2281,6 +2139,12 @@
 	/* restore the secure configuration */
 	ufs_qcom_update_sec_cfg(hba, true);
 
+	/*
+	 * Set the vendor specific ops needed for ICE.
+	 * Default implementation if the ops are not set.
+	 */
+	ufshcd_crypto_qti_set_vops(hba);
+
 	err = ufs_qcom_bus_register(host);
 	if (err)
 		goto out_variant_clear;
@@ -2832,7 +2696,6 @@
 	usleep_range(1000, 1100);
 	ufs_qcom_phy_dbg_register_dump(phy);
 	usleep_range(1000, 1100);
-	ufs_qcom_ice_print_regs(host);
 }
 
 static u32 ufs_qcom_get_user_cap_mode(struct ufs_hba *hba)
@@ -2869,14 +2732,6 @@
 	.get_user_cap_mode	= ufs_qcom_get_user_cap_mode,
 };
 
-static struct ufs_hba_crypto_variant_ops ufs_hba_crypto_variant_ops = {
-	.crypto_req_setup	= ufs_qcom_crypto_req_setup,
-	.crypto_engine_cfg_start	= ufs_qcom_crytpo_engine_cfg_start,
-	.crypto_engine_cfg_end	= ufs_qcom_crytpo_engine_cfg_end,
-	.crypto_engine_reset	  = ufs_qcom_crytpo_engine_reset,
-	.crypto_engine_get_status = ufs_qcom_crypto_engine_get_status,
-};
-
 static struct ufs_hba_pm_qos_variant_ops ufs_hba_pm_qos_variant_ops = {
 	.req_start	= ufs_qcom_pm_qos_req_start,
 	.req_end	= ufs_qcom_pm_qos_req_end,
@@ -2885,7 +2740,6 @@
 static struct ufs_hba_variant ufs_hba_qcom_variant = {
 	.name		= "qcom",
 	.vops		= &ufs_hba_qcom_vops,
-	.crypto_vops	= &ufs_hba_crypto_variant_ops,
 	.pm_qos_vops	= &ufs_hba_pm_qos_variant_ops,
 };
 
diff --git a/drivers/scsi/ufs/ufs-qcom.h b/drivers/scsi/ufs/ufs-qcom.h
index 9197742..6538637 100644
--- a/drivers/scsi/ufs/ufs-qcom.h
+++ b/drivers/scsi/ufs/ufs-qcom.h
@@ -238,26 +238,6 @@
 	u8 select_minor;
 };
 
-/**
- * struct ufs_qcom_ice_data - ICE related information
- * @vops:	pointer to variant operations of ICE
- * @async_done:	completion for supporting ICE's driver asynchronous nature
- * @pdev:	pointer to the proper ICE platform device
- * @state:      UFS-ICE interface's internal state (see
- *       ufs-qcom-ice.h for possible internal states)
- * @quirks:     UFS-ICE interface related quirks
- * @crypto_engine_err: crypto engine errors
- */
-struct ufs_qcom_ice_data {
-	struct qcom_ice_variant_ops *vops;
-	struct platform_device *pdev;
-	int state;
-
-	u16 quirks;
-
-	bool crypto_engine_err;
-};
-
 #ifdef CONFIG_DEBUG_FS
 struct qcom_debugfs_files {
 	struct dentry *debugfs_root;
@@ -366,7 +346,6 @@
 	bool disable_lpm;
 	bool is_lane_clks_enabled;
 	bool sec_cfg_updated;
-	struct ufs_qcom_ice_data ice;
 
 	void __iomem *dev_ref_clk_ctrl_mmio;
 	bool is_dev_ref_clk_enabled;
@@ -381,9 +360,6 @@
 	u32 dbg_print_en;
 	struct ufs_qcom_testbus testbus;
 
-	spinlock_t ice_work_lock;
-	struct work_struct ice_cfg_work;
-	bool is_ice_cfg_work_set;
 	struct request *req_pending;
 	struct ufs_vreg *vddp_ref_clk;
 	struct ufs_vreg *vccq_parent;
diff --git a/drivers/scsi/ufs/ufshcd-crypto-qti.c b/drivers/scsi/ufs/ufshcd-crypto-qti.c
new file mode 100644
index 0000000..f3351d0
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto-qti.c
@@ -0,0 +1,302 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <linux/platform_device.h>
+#include <linux/crypto-qti-common.h>
+
+#include "ufshcd-crypto-qti.h"
+
+#define MINIMUM_DUN_SIZE 512
+#define MAXIMUM_DUN_SIZE 65536
+
+#define NUM_KEYSLOTS(hba) (hba->crypto_capabilities.config_count + 1)
+
+static struct ufs_hba_crypto_variant_ops ufshcd_crypto_qti_variant_ops = {
+	.hba_init_crypto = ufshcd_crypto_qti_init_crypto,
+	.enable = ufshcd_crypto_qti_enable,
+	.disable = ufshcd_crypto_qti_disable,
+	.resume = ufshcd_crypto_qti_resume,
+	.debug = ufshcd_crypto_qti_debug,
+};
+
+static uint8_t get_data_unit_size_mask(unsigned int data_unit_size)
+{
+	if (data_unit_size < MINIMUM_DUN_SIZE ||
+	    data_unit_size > MAXIMUM_DUN_SIZE ||
+	    !is_power_of_2(data_unit_size))
+		return 0;
+
+	return data_unit_size / MINIMUM_DUN_SIZE;
+}
+
+static bool ice_cap_idx_valid(struct ufs_hba *hba,
+			      unsigned int cap_idx)
+{
+	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
+}
+
+void ufshcd_crypto_qti_enable(struct ufs_hba *hba)
+{
+	int err = 0;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return;
+
+	err = crypto_qti_enable(hba->crypto_vops->priv);
+	if (err) {
+		pr_err("%s: Error enabling crypto, err %d\n",
+				__func__, err);
+		ufshcd_crypto_qti_disable(hba);
+		return;
+	}
+
+	ufshcd_crypto_enable_spec(hba);
+}
+
+void ufshcd_crypto_qti_disable(struct ufs_hba *hba)
+{
+	ufshcd_crypto_disable_spec(hba);
+	crypto_qti_disable(hba->crypto_vops->priv);
+}
+
+static int ufshcd_crypto_qti_keyslot_program(struct keyslot_manager *ksm,
+					     const struct blk_crypto_key *key,
+					     unsigned int slot)
+{
+	struct ufs_hba *hba = keyslot_manager_private(ksm);
+	int err = 0;
+	u8 data_unit_mask;
+	int crypto_alg_id;
+
+	crypto_alg_id = ufshcd_crypto_cap_find(hba, key->crypto_mode,
+					       key->data_unit_size);
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot) ||
+	    !ice_cap_idx_valid(hba, crypto_alg_id))
+		return -EINVAL;
+
+	data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+	if (!(data_unit_mask &
+	      hba->crypto_cap_array[crypto_alg_id].sdus_mask))
+		return -EINVAL;
+
+	pm_runtime_get_sync(hba->dev);
+	err = ufshcd_hold(hba, false);
+	if (err) {
+		pr_err("%s: failed to enable clocks, err %d\n", __func__, err);
+		pm_runtime_put_sync(hba->dev);
+		return err;
+	}
+
+	err = crypto_qti_keyslot_program(hba->crypto_vops->priv, key, slot,
+					data_unit_mask, crypto_alg_id);
+	if (err) {
+		pr_err("%s: failed with error %d\n", __func__, err);
+		ufshcd_release(hba, false);
+		pm_runtime_put_sync(hba->dev);
+		return err;
+	}
+
+	ufshcd_release(hba, false);
+	pm_runtime_put_sync(hba->dev);
+
+	return 0;
+}
+
+static int ufshcd_crypto_qti_keyslot_evict(struct keyslot_manager *ksm,
+					   const struct blk_crypto_key *key,
+					   unsigned int slot)
+{
+	int err = 0;
+	struct ufs_hba *hba = keyslot_manager_private(ksm);
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot))
+		return -EINVAL;
+
+	pm_runtime_get_sync(hba->dev);
+	err = ufshcd_hold(hba, false);
+	if (err) {
+		pr_err("%s: failed to enable clocks, err %d\n", __func__, err);
+		pm_runtime_put_sync(hba->dev);
+		return err;
+	}
+
+	err = crypto_qti_keyslot_evict(hba->crypto_vops->priv, slot);
+	if (err) {
+		pr_err("%s: failed with error %d\n",
+			__func__, err);
+		ufshcd_release(hba, false);
+		pm_runtime_put_sync(hba->dev);
+		return err;
+	}
+
+	ufshcd_release(hba, false);
+	pm_runtime_put_sync(hba->dev);
+
+	return err;
+}
+
+static int ufshcd_crypto_qti_derive_raw_secret(struct keyslot_manager *ksm,
+					       const u8 *wrapped_key,
+					       unsigned int wrapped_key_size,
+					       u8 *secret,
+					       unsigned int secret_size)
+{
+	return crypto_qti_derive_raw_secret(wrapped_key, wrapped_key_size,
+			secret, secret_size);
+}
+
+static const struct keyslot_mgmt_ll_ops ufshcd_crypto_qti_ksm_ops = {
+	.keyslot_program	= ufshcd_crypto_qti_keyslot_program,
+	.keyslot_evict		= ufshcd_crypto_qti_keyslot_evict,
+	.derive_raw_secret	= ufshcd_crypto_qti_derive_raw_secret,
+};
+
+static enum blk_crypto_mode_num ufshcd_blk_crypto_qti_mode_num_for_alg_dusize(
+					enum ufs_crypto_alg ufs_crypto_alg,
+					enum ufs_crypto_key_size key_size)
+{
+	/*
+	 * This is currently the only mode that UFS and blk-crypto both support.
+	 */
+	if (ufs_crypto_alg == UFS_CRYPTO_ALG_AES_XTS &&
+		key_size == UFS_CRYPTO_KEY_SIZE_256)
+		return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+	return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+static int ufshcd_hba_init_crypto_qti_spec(struct ufs_hba *hba,
+				    const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+	int cap_idx = 0;
+	int err = 0;
+	unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+	enum blk_crypto_mode_num blk_mode_num;
+
+	/* Default to disabling crypto */
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+	if (!(hba->capabilities & MASK_CRYPTO_SUPPORT)) {
+		err = -ENODEV;
+		goto out;
+	}
+
+	/*
+	 * The crypto capabilities register can never read as 0, because
+	 * config_array_ptr is always >= 04h. A value of 0 is therefore used
+	 * to indicate that crypto init failed and crypto can't be enabled.
+	 */
+	hba->crypto_capabilities.reg_val =
+			  cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
+	hba->crypto_cfg_register =
+		 (u32)hba->crypto_capabilities.config_array_ptr * 0x100;
+	hba->crypto_cap_array =
+		 devm_kcalloc(hba->dev,
+				hba->crypto_capabilities.num_crypto_cap,
+				sizeof(hba->crypto_cap_array[0]),
+				GFP_KERNEL);
+	if (!hba->crypto_cap_array) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+	/*
+	 * Store all the capabilities now so that we don't need to repeatedly
+	 * access the device each time we want to know its capabilities
+	 */
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		hba->crypto_cap_array[cap_idx].reg_val =
+				cpu_to_le32(ufshcd_readl(hba,
+						REG_UFS_CRYPTOCAP +
+						cap_idx * sizeof(__le32)));
+		blk_mode_num = ufshcd_blk_crypto_qti_mode_num_for_alg_dusize(
+				hba->crypto_cap_array[cap_idx].algorithm_id,
+				hba->crypto_cap_array[cap_idx].key_size);
+		if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+			continue;
+		crypto_modes_supported[blk_mode_num] |=
+			hba->crypto_cap_array[cap_idx].sdus_mask * 512;
+	}
+
+	hba->ksm = keyslot_manager_create(ufshcd_num_keyslots(hba), ksm_ops,
+					crypto_modes_supported, hba);
+
+	if (!hba->ksm) {
+		err = -ENOMEM;
+		goto out;
+	}
+	pr_debug("%s: keyslot manager created\n", __func__);
+
+	return 0;
+
+out:
+	/* Indicate that init failed by setting crypto_capabilities to 0 */
+	hba->crypto_capabilities.reg_val = 0;
+	return err;
+}
+
+int ufshcd_crypto_qti_init_crypto(struct ufs_hba *hba,
+				  const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+	int err = 0;
+	struct platform_device *pdev = to_platform_device(hba->dev);
+	void __iomem *mmio_base;
+	struct resource *mem_res;
+
+	mem_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+								"ufs_ice");
+	mmio_base = devm_ioremap_resource(hba->dev, mem_res);
+	if (IS_ERR(mmio_base)) {
+		pr_err("%s: Unable to get ufs_crypto mmio base\n", __func__);
+		return PTR_ERR(mmio_base);
+	}
+
+	err = ufshcd_hba_init_crypto_qti_spec(hba, &ufshcd_crypto_qti_ksm_ops);
+	if (err) {
+		pr_err("%s: Error initiating crypto capabilities, err %d\n",
+					__func__, err);
+		return err;
+	}
+
+	err = crypto_qti_init_crypto(hba->dev,
+			mmio_base, (void **)&hba->crypto_vops->priv);
+	if (err) {
+		pr_err("%s: Error initiating crypto, err %d\n",
+					__func__, err);
+	}
+	return err;
+}
+
+int ufshcd_crypto_qti_debug(struct ufs_hba *hba)
+{
+	return crypto_qti_debug(hba->crypto_vops->priv);
+}
+
+void ufshcd_crypto_qti_set_vops(struct ufs_hba *hba)
+{
+	ufshcd_crypto_set_vops(hba, &ufshcd_crypto_qti_variant_ops);
+}
+
+int ufshcd_crypto_qti_resume(struct ufs_hba *hba,
+			     enum ufs_pm_op pm_op)
+{
+	return crypto_qti_resume(hba->crypto_vops->priv);
+}
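
The data-unit-size mask computed by get_data_unit_size_mask() above is a single-bit encoding in 512-byte units: 512 maps to 0x1, 4096 to 0x8, 65536 to 0x80, and anything out of range or not a power of two maps to 0. A minimal standalone sketch of the same arithmetic, with is_power_of_2() open-coded since it normally comes from the kernel's log2.h:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool is_power_of_2(unsigned int n)
{
	return n && !(n & (n - 1));
}

/* Mirrors get_data_unit_size_mask(): the size is encoded as a single
 * bit in units of 512 bytes and later checked against sdus_mask.
 */
static uint8_t dun_size_mask(unsigned int size)
{
	if (size < 512 || size > 65536 || !is_power_of_2(size))
		return 0;
	return size / 512;
}

int main(void)
{
	unsigned int sizes[] = { 512, 4096, 65536, 1000 };
	int i;

	for (i = 0; i < 4; i++)
		printf("%u -> 0x%x\n", sizes[i], dun_size_mask(sizes[i]));
	return 0;
}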
diff --git a/drivers/scsi/ufs/ufshcd-crypto-qti.h b/drivers/scsi/ufs/ufshcd-crypto-qti.h
new file mode 100644
index 0000000..5c1b2ae
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto-qti.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _UFSHCD_CRYPTO_QTI_H
+#define _UFSHCD_CRYPTO_QTI_H
+
+#include "ufshcd.h"
+#include "ufshcd-crypto.h"
+
+void ufshcd_crypto_qti_enable(struct ufs_hba *hba);
+
+void ufshcd_crypto_qti_disable(struct ufs_hba *hba);
+
+int ufshcd_crypto_qti_init_crypto(struct ufs_hba *hba,
+	const struct keyslot_mgmt_ll_ops *ksm_ops);
+
+void ufshcd_crypto_qti_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					    struct request_queue *q);
+
+void ufshcd_crypto_qti_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+			struct request_queue *q);
+
+int ufshcd_crypto_qti_prepare_lrbp_crypto(struct ufs_hba *hba,
+			struct scsi_cmnd *cmd, struct ufshcd_lrb *lrbp);
+
+int ufshcd_crypto_qti_complete_lrbp_crypto(struct ufs_hba *hba,
+				struct scsi_cmnd *cmd, struct ufshcd_lrb *lrbp);
+
+int ufshcd_crypto_qti_debug(struct ufs_hba *hba);
+
+int ufshcd_crypto_qti_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+int ufshcd_crypto_qti_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO_QTI
+void ufshcd_crypto_qti_set_vops(struct ufs_hba *hba);
+#else
+static inline void ufshcd_crypto_qti_set_vops(struct ufs_hba *hba)
+{}
+#endif /* CONFIG_SCSI_UFS_CRYPTO_QTI */
+#endif /* _UFSHCD_CRYPTO_QTI_H */
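The header closes with the usual compile-out stub: when CONFIG_SCSI_UFS_CRYPTO_QTI is off, ufshcd_crypto_qti_set_vops() becomes a static inline no-op, so callers need no #ifdef guards. A standalone sketch of the pattern, with FEATURE_CRYPTO standing in for the config option:

#include <stdio.h>

/* When the feature is compiled out, callers still compile and link
 * against a static inline no-op instead of the real implementation.
 */
#ifdef FEATURE_CRYPTO
static void set_vops(void) { printf("real set_vops\n"); }
#else
static inline void set_vops(void) { /* compiled out: no-op */ }
#endif

int main(void)
{
	set_vops();	/* always safe to call, with or without the feature */
	printf("probe continues either way\n");
	return 0;
}
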
diff --git a/drivers/scsi/ufs/ufshcd-crypto.c b/drivers/scsi/ufs/ufshcd-crypto.c
new file mode 100644
index 0000000..a72b1ca
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.c
@@ -0,0 +1,499 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#include <linux/keyslot-manager.h>
+#include "ufshcd.h"
+#include "ufshcd-crypto.h"
+
+static bool ufshcd_cap_idx_valid(struct ufs_hba *hba, unsigned int cap_idx)
+{
+	return cap_idx < hba->crypto_capabilities.num_crypto_cap;
+}
+
+static u8 get_data_unit_size_mask(unsigned int data_unit_size)
+{
+	if (data_unit_size < 512 || data_unit_size > 65536 ||
+	    !is_power_of_2(data_unit_size))
+		return 0;
+
+	return data_unit_size / 512;
+}
+
+static size_t get_keysize_bytes(enum ufs_crypto_key_size size)
+{
+	switch (size) {
+	case UFS_CRYPTO_KEY_SIZE_128:
+		return 16;
+	case UFS_CRYPTO_KEY_SIZE_192:
+		return 24;
+	case UFS_CRYPTO_KEY_SIZE_256:
+		return 32;
+	case UFS_CRYPTO_KEY_SIZE_512:
+		return 64;
+	default:
+		return 0;
+	}
+}
+
+int ufshcd_crypto_cap_find(struct ufs_hba *hba,
+			   enum blk_crypto_mode_num crypto_mode,
+			   unsigned int data_unit_size)
+{
+	enum ufs_crypto_alg ufs_alg;
+	u8 data_unit_mask;
+	int cap_idx;
+	enum ufs_crypto_key_size ufs_key_size;
+	union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
+
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return -EINVAL;
+
+	switch (crypto_mode) {
+	case BLK_ENCRYPTION_MODE_AES_256_XTS:
+		ufs_alg = UFS_CRYPTO_ALG_AES_XTS;
+		ufs_key_size = UFS_CRYPTO_KEY_SIZE_256;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	data_unit_mask = get_data_unit_size_mask(data_unit_size);
+
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		if (ccap_array[cap_idx].algorithm_id == ufs_alg &&
+		    (ccap_array[cap_idx].sdus_mask & data_unit_mask) &&
+		    ccap_array[cap_idx].key_size == ufs_key_size)
+			return cap_idx;
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL(ufshcd_crypto_cap_find);
+
+/**
+ * ufshcd_crypto_cfg_entry_write_key - Write a key into a crypto_cfg_entry
+ *
+ *	Writes the key with the appropriate format - for AES_XTS,
+ *	the first half of the key is copied as is, the second half is
+ *	copied with an offset halfway into the cfg->crypto_key array.
+ *	For the other supported crypto algs, the key is just copied.
+ *
+ * @cfg: The crypto config to write to
+ * @key: The key to write
+ * @cap: The crypto capability (which specifies the crypto alg and key size)
+ *
+ * Returns 0 on success, or -EINVAL
+ */
+static int ufshcd_crypto_cfg_entry_write_key(union ufs_crypto_cfg_entry *cfg,
+					     const u8 *key,
+					     union ufs_crypto_cap_entry cap)
+{
+	size_t key_size_bytes = get_keysize_bytes(cap.key_size);
+
+	if (key_size_bytes == 0)
+		return -EINVAL;
+
+	switch (cap.algorithm_id) {
+	case UFS_CRYPTO_ALG_AES_XTS:
+		key_size_bytes *= 2;
+		if (key_size_bytes > UFS_CRYPTO_KEY_MAX_SIZE)
+			return -EINVAL;
+
+		memcpy(cfg->crypto_key, key, key_size_bytes/2);
+		memcpy(cfg->crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
+		       key + key_size_bytes/2, key_size_bytes/2);
+		return 0;
+	case UFS_CRYPTO_ALG_BITLOCKER_AES_CBC:
+		/* fall through */
+	case UFS_CRYPTO_ALG_AES_ECB:
+		/* fall through */
+	case UFS_CRYPTO_ALG_ESSIV_AES_CBC:
+		memcpy(cfg->crypto_key, key, key_size_bytes);
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static int ufshcd_program_key(struct ufs_hba *hba,
+			      const union ufs_crypto_cfg_entry *cfg, int slot)
+{
+	int i;
+	u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
+	int err;
+
+	pm_runtime_get_sync(hba->dev);
+	ufshcd_hold(hba, false);
+
+	if (hba->var->vops->program_key) {
+		err = hba->var->vops->program_key(hba, cfg, slot);
+		goto out;
+	}
+
+	/* Clear the dword 16 */
+	ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	/* Ensure that CFGE is cleared before programming the key */
+	wmb();
+	for (i = 0; i < 16; i++) {
+		ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]),
+			      slot_offset + i * sizeof(cfg->reg_val[0]));
+		/* Spec says each dword in key must be written sequentially */
+		wmb();
+	}
+	/* Write dword 17 */
+	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]),
+		      slot_offset + 17 * sizeof(cfg->reg_val[0]));
+	/* Dword 16 must be written last */
+	wmb();
+	/* Write dword 16 */
+	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]),
+		      slot_offset + 16 * sizeof(cfg->reg_val[0]));
+	wmb();
+	err = 0;
+out:
+	ufshcd_release(hba, false);
+	pm_runtime_put_sync(hba->dev);
+	return err;
+}
+
+static void ufshcd_clear_keyslot(struct ufs_hba *hba, int slot)
+{
+	union ufs_crypto_cfg_entry cfg = { {0} };
+	int err;
+
+	err = ufshcd_program_key(hba, &cfg, slot);
+	WARN_ON_ONCE(err);
+}
+
+/* Clear all keyslots at driver init time */
+static void ufshcd_clear_all_keyslots(struct ufs_hba *hba)
+{
+	int slot;
+
+	for (slot = 0; slot < ufshcd_num_keyslots(hba); slot++)
+		ufshcd_clear_keyslot(hba, slot);
+}
+
+static int ufshcd_crypto_keyslot_program(struct keyslot_manager *ksm,
+					 const struct blk_crypto_key *key,
+					 unsigned int slot)
+{
+	struct ufs_hba *hba = keyslot_manager_private(ksm);
+	int err = 0;
+	u8 data_unit_mask;
+	union ufs_crypto_cfg_entry cfg;
+	int cap_idx;
+
+	cap_idx = ufshcd_crypto_cap_find(hba, key->crypto_mode,
+					 key->data_unit_size);
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot) ||
+	    !ufshcd_cap_idx_valid(hba, cap_idx))
+		return -EINVAL;
+
+	data_unit_mask = get_data_unit_size_mask(key->data_unit_size);
+
+	if (!(data_unit_mask & hba->crypto_cap_array[cap_idx].sdus_mask))
+		return -EINVAL;
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.data_unit_size = data_unit_mask;
+	cfg.crypto_cap_idx = cap_idx;
+	cfg.config_enable |= UFS_CRYPTO_CONFIGURATION_ENABLE;
+
+	err = ufshcd_crypto_cfg_entry_write_key(&cfg, key->raw,
+						hba->crypto_cap_array[cap_idx]);
+	if (err)
+		return err;
+
+	err = ufshcd_program_key(hba, &cfg, slot);
+
+	memzero_explicit(&cfg, sizeof(cfg));
+
+	return err;
+}
+
+static int ufshcd_crypto_keyslot_evict(struct keyslot_manager *ksm,
+				       const struct blk_crypto_key *key,
+				       unsigned int slot)
+{
+	struct ufs_hba *hba = keyslot_manager_private(ksm);
+
+	if (!ufshcd_is_crypto_enabled(hba) ||
+	    !ufshcd_keyslot_valid(hba, slot))
+		return -EINVAL;
+
+	/*
+	 * Clear the crypto cfg on the device. Clearing CFGE
+	 * might not be sufficient, so just clear the entire cfg.
+	 */
+	ufshcd_clear_keyslot(hba, slot);
+
+	return 0;
+}
+
+/* Functions implementing UFSHCI v2.1 specification behaviour */
+void ufshcd_crypto_enable_spec(struct ufs_hba *hba)
+{
+	if (!ufshcd_hba_is_crypto_supported(hba))
+		return;
+
+	hba->caps |= UFSHCD_CAP_CRYPTO;
+
+	/* Reset might clear all keys, so reprogram all the keys. */
+	keyslot_manager_reprogram_all_keys(hba->ksm);
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_enable_spec);
+
+void ufshcd_crypto_disable_spec(struct ufs_hba *hba)
+{
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_disable_spec);
+
+static const struct keyslot_mgmt_ll_ops ufshcd_ksm_ops = {
+	.keyslot_program	= ufshcd_crypto_keyslot_program,
+	.keyslot_evict		= ufshcd_crypto_keyslot_evict,
+};
+
+enum blk_crypto_mode_num ufshcd_blk_crypto_mode_num_for_alg_dusize(
+					enum ufs_crypto_alg ufs_crypto_alg,
+					enum ufs_crypto_key_size key_size)
+{
+	/*
+	 * This is currently the only mode that UFS and blk-crypto both support.
+	 */
+	if (ufs_crypto_alg == UFS_CRYPTO_ALG_AES_XTS &&
+		key_size == UFS_CRYPTO_KEY_SIZE_256)
+		return BLK_ENCRYPTION_MODE_AES_256_XTS;
+
+	return BLK_ENCRYPTION_MODE_INVALID;
+}
+
+/**
+ * ufshcd_hba_init_crypto_spec - Read crypto capabilities, init crypto fields in hba
+ * @hba: Per adapter instance
+ * @ksm_ops: keyslot manager operations to register for this host
+ *
+ * Return: 0 if crypto was initialized or is not supported, else a -errno value.
+ */
+int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
+				const struct keyslot_mgmt_ll_ops *ksm_ops)
+{
+	int cap_idx = 0;
+	int err = 0;
+	unsigned int crypto_modes_supported[BLK_ENCRYPTION_MODE_MAX];
+	enum blk_crypto_mode_num blk_mode_num;
+
+	/* Default to disabling crypto */
+	hba->caps &= ~UFSHCD_CAP_CRYPTO;
+
+	/* Return 0 if crypto support isn't present */
+	if (!(hba->capabilities & MASK_CRYPTO_SUPPORT) ||
+	    (hba->quirks & UFSHCD_QUIRK_BROKEN_CRYPTO))
+		goto out;
+
+	/*
+	 * The crypto capabilities register can never read as 0, because
+	 * config_array_ptr is always >= 04h. A value of 0 is therefore used
+	 * to indicate that crypto init failed and crypto can't be enabled.
+	 */
+	hba->crypto_capabilities.reg_val =
+			cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
+	hba->crypto_cfg_register =
+		(u32)hba->crypto_capabilities.config_array_ptr * 0x100;
+	hba->crypto_cap_array =
+		devm_kcalloc(hba->dev,
+			     hba->crypto_capabilities.num_crypto_cap,
+			     sizeof(hba->crypto_cap_array[0]),
+			     GFP_KERNEL);
+	if (!hba->crypto_cap_array) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	memset(crypto_modes_supported, 0, sizeof(crypto_modes_supported));
+	/*
+	 * Store all the capabilities now so that we don't need to repeatedly
+	 * access the device each time we want to know its capabilities
+	 */
+	for (cap_idx = 0; cap_idx < hba->crypto_capabilities.num_crypto_cap;
+	     cap_idx++) {
+		hba->crypto_cap_array[cap_idx].reg_val =
+			cpu_to_le32(ufshcd_readl(hba,
+						 REG_UFS_CRYPTOCAP +
+						 cap_idx * sizeof(__le32)));
+		blk_mode_num = ufshcd_blk_crypto_mode_num_for_alg_dusize(
+				hba->crypto_cap_array[cap_idx].algorithm_id,
+				hba->crypto_cap_array[cap_idx].key_size);
+		if (blk_mode_num == BLK_ENCRYPTION_MODE_INVALID)
+			continue;
+		crypto_modes_supported[blk_mode_num] |=
+			hba->crypto_cap_array[cap_idx].sdus_mask * 512;
+	}
+
+	ufshcd_clear_all_keyslots(hba);
+
+	hba->ksm = keyslot_manager_create(ufshcd_num_keyslots(hba), ksm_ops,
+					  crypto_modes_supported, hba);
+
+	if (!hba->ksm) {
+		err = -ENOMEM;
+		goto out_free_caps;
+	}
+
+	return 0;
+
+out_free_caps:
+	devm_kfree(hba->dev, hba->crypto_cap_array);
+out:
+	/* Indicate that init failed by setting crypto_capabilities to 0 */
+	hba->crypto_capabilities.reg_val = 0;
+	return err;
+}
+EXPORT_SYMBOL_GPL(ufshcd_hba_init_crypto_spec);
+
+void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
+						 struct request_queue *q)
+{
+	if (!ufshcd_hba_is_crypto_supported(hba) || !q)
+		return;
+
+	q->ksm = hba->ksm;
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_setup_rq_keyslot_manager_spec);
+
+void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
+						   struct request_queue *q)
+{
+	keyslot_manager_destroy(hba->ksm);
+}
+EXPORT_SYMBOL_GPL(ufshcd_crypto_destroy_rq_keyslot_manager_spec);
+
+int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
+				    struct scsi_cmnd *cmd,
+				    struct ufshcd_lrb *lrbp)
+{
+	struct bio_crypt_ctx *bc;
+
+	if (!bio_crypt_should_process(cmd->request)) {
+		lrbp->crypto_enable = false;
+		return 0;
+	}
+	bc = cmd->request->bio->bi_crypt_context;
+
+	if (WARN_ON(!ufshcd_is_crypto_enabled(hba))) {
+		/*
+		 * Upper layer asked us to do inline encryption
+		 * but that isn't enabled, so we fail this request.
+		 */
+		return -EINVAL;
+	}
+	if (!ufshcd_keyslot_valid(hba, bc->bc_keyslot))
+		return -EINVAL;
+
+	lrbp->crypto_enable = true;
+	lrbp->crypto_key_slot = bc->bc_keyslot;
+	lrbp->data_unit_num = bc->bc_dun[0];
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ufshcd_prepare_lrbp_crypto_spec);
+
+/* Crypto Variant Ops Support */
+
+void ufshcd_crypto_enable(struct ufs_hba *hba)
+{
+	if (hba->crypto_vops && hba->crypto_vops->enable)
+		return hba->crypto_vops->enable(hba);
+
+	return ufshcd_crypto_enable_spec(hba);
+}
+
+void ufshcd_crypto_disable(struct ufs_hba *hba)
+{
+	if (hba->crypto_vops && hba->crypto_vops->disable)
+		return hba->crypto_vops->disable(hba);
+
+	return ufshcd_crypto_disable_spec(hba);
+}
+
+int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	if (hba->crypto_vops && hba->crypto_vops->hba_init_crypto)
+		return hba->crypto_vops->hba_init_crypto(hba,
+							 &ufshcd_ksm_ops);
+
+	return ufshcd_hba_init_crypto_spec(hba, &ufshcd_ksm_ops);
+}
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					    struct request_queue *q)
+{
+	if (hba->crypto_vops && hba->crypto_vops->setup_rq_keyslot_manager)
+		return hba->crypto_vops->setup_rq_keyslot_manager(hba, q);
+
+	return ufshcd_crypto_setup_rq_keyslot_manager_spec(hba, q);
+}
+
+void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+					      struct request_queue *q)
+{
+	if (hba->crypto_vops && hba->crypto_vops->destroy_rq_keyslot_manager)
+		return hba->crypto_vops->destroy_rq_keyslot_manager(hba, q);
+
+	return ufshcd_crypto_destroy_rq_keyslot_manager_spec(hba, q);
+}
+
+int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+			       struct scsi_cmnd *cmd,
+			       struct ufshcd_lrb *lrbp)
+{
+	if (hba->crypto_vops && hba->crypto_vops->prepare_lrbp_crypto)
+		return hba->crypto_vops->prepare_lrbp_crypto(hba, cmd, lrbp);
+
+	return ufshcd_prepare_lrbp_crypto_spec(hba, cmd, lrbp);
+}
+
+int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
+				struct scsi_cmnd *cmd,
+				struct ufshcd_lrb *lrbp)
+{
+	if (hba->crypto_vops && hba->crypto_vops->complete_lrbp_crypto)
+		return hba->crypto_vops->complete_lrbp_crypto(hba, cmd, lrbp);
+
+	return 0;
+}
+
+void ufshcd_crypto_debug(struct ufs_hba *hba)
+{
+	if (hba->crypto_vops && hba->crypto_vops->debug)
+		hba->crypto_vops->debug(hba);
+}
+
+int ufshcd_crypto_suspend(struct ufs_hba *hba,
+			  enum ufs_pm_op pm_op)
+{
+	if (hba->crypto_vops && hba->crypto_vops->suspend)
+		return hba->crypto_vops->suspend(hba, pm_op);
+
+	return 0;
+}
+
+int ufshcd_crypto_resume(struct ufs_hba *hba,
+			 enum ufs_pm_op pm_op)
+{
+	if (hba->crypto_vops && hba->crypto_vops->resume)
+		return hba->crypto_vops->resume(hba, pm_op);
+
+	return 0;
+}
+
+void ufshcd_crypto_set_vops(struct ufs_hba *hba,
+			    struct ufs_hba_crypto_variant_ops *crypto_vops)
+{
+	hba->crypto_vops = crypto_vops;
+}
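
For XTS, ufshcd_crypto_cfg_entry_write_key() above places the tweak half of the key at the midpoint of the 64-byte crypto_key field rather than directly after the data half, so keys shorter than 512 bits leave a zero gap. A standalone sketch with a 32-byte (AES-128-XTS) raw key; the split matches the function above, the demo harness is illustrative:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define KEY_MAX 64	/* mirrors UFS_CRYPTO_KEY_MAX_SIZE */

/* First half of the raw key at offset 0, second half at KEY_MAX/2,
 * as done for UFS_CRYPTO_ALG_AES_XTS above.
 */
static void write_xts_key(uint8_t cfg_key[KEY_MAX], const uint8_t *raw,
			  size_t raw_len)
{
	memcpy(cfg_key, raw, raw_len / 2);
	memcpy(cfg_key + KEY_MAX / 2, raw + raw_len / 2, raw_len / 2);
}

int main(void)
{
	uint8_t raw[32], cfg[KEY_MAX] = { 0 };
	int i;

	for (i = 0; i < 32; i++)
		raw[i] = (uint8_t)(i + 1);
	write_xts_key(cfg, raw, sizeof(raw));
	/* bytes 16..31 stay zero; the tweak half starts at offset 32 */
	printf("cfg[15]=%u cfg[16]=%u cfg[32]=%u\n",
	       cfg[15], cfg[16], cfg[32]);
	return 0;
}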
diff --git a/drivers/scsi/ufs/ufshcd-crypto.h b/drivers/scsi/ufs/ufshcd-crypto.h
new file mode 100644
index 0000000..95f37c9
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-crypto.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef _UFSHCD_CRYPTO_H
+#define _UFSHCD_CRYPTO_H
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+#include <linux/keyslot-manager.h>
+#include "ufshcd.h"
+#include "ufshci.h"
+
+static inline int ufshcd_num_keyslots(struct ufs_hba *hba)
+{
+	return hba->crypto_capabilities.config_count + 1;
+}
+
+static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba, unsigned int slot)
+{
+	/*
+	 * The actual number of configurations supported is (CFGC+1), so slot
+	 * numbers range from 0 to config_count inclusive.
+	 */
+	return slot < ufshcd_num_keyslots(hba);
+}
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return hba->crypto_capabilities.reg_val != 0;
+}
+
+static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+	return hba->caps & UFSHCD_CAP_CRYPTO;
+}
+
+/* Functions implementing UFSHCI v2.1 specification behaviour */
+int ufshcd_crypto_cap_find(struct ufs_hba *hba,
+			   enum blk_crypto_mode_num crypto_mode,
+			   unsigned int data_unit_size);
+
+int ufshcd_prepare_lrbp_crypto_spec(struct ufs_hba *hba,
+				    struct scsi_cmnd *cmd,
+				    struct ufshcd_lrb *lrbp);
+
+void ufshcd_crypto_enable_spec(struct ufs_hba *hba);
+
+void ufshcd_crypto_disable_spec(struct ufs_hba *hba);
+
+struct keyslot_mgmt_ll_ops;
+int ufshcd_hba_init_crypto_spec(struct ufs_hba *hba,
+				const struct keyslot_mgmt_ll_ops *ksm_ops);
+
+void ufshcd_crypto_setup_rq_keyslot_manager_spec(struct ufs_hba *hba,
+						 struct request_queue *q);
+
+void ufshcd_crypto_destroy_rq_keyslot_manager_spec(struct ufs_hba *hba,
+						   struct request_queue *q);
+
+static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
+{
+	return lrbp->crypto_enable;
+}
+
+/* Crypto Variant Ops Support */
+void ufshcd_crypto_enable(struct ufs_hba *hba);
+
+void ufshcd_crypto_disable(struct ufs_hba *hba);
+
+int ufshcd_hba_init_crypto(struct ufs_hba *hba);
+
+void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+					    struct request_queue *q);
+
+void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+					      struct request_queue *q);
+
+int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+			       struct scsi_cmnd *cmd,
+			       struct ufshcd_lrb *lrbp);
+
+int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
+				struct scsi_cmnd *cmd,
+				struct ufshcd_lrb *lrbp);
+
+void ufshcd_crypto_debug(struct ufs_hba *hba);
+
+int ufshcd_crypto_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+int ufshcd_crypto_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+
+void ufshcd_crypto_set_vops(struct ufs_hba *hba,
+			    struct ufs_hba_crypto_variant_ops *crypto_vops);
+
+#else /* CONFIG_SCSI_UFS_CRYPTO */
+
+static inline bool ufshcd_keyslot_valid(struct ufs_hba *hba,
+					unsigned int slot)
+{
+	return false;
+}
+
+static inline bool ufshcd_hba_is_crypto_supported(struct ufs_hba *hba)
+{
+	return false;
+}
+
+static inline bool ufshcd_is_crypto_enabled(struct ufs_hba *hba)
+{
+	return false;
+}
+
+static inline void ufshcd_crypto_enable(struct ufs_hba *hba) { }
+
+static inline void ufshcd_crypto_disable(struct ufs_hba *hba) { }
+
+static inline int ufshcd_hba_init_crypto(struct ufs_hba *hba)
+{
+	return 0;
+}
+
+static inline void ufshcd_crypto_setup_rq_keyslot_manager(struct ufs_hba *hba,
+						struct request_queue *q) { }
+
+static inline void ufshcd_crypto_destroy_rq_keyslot_manager(struct ufs_hba *hba,
+						struct request_queue *q) { }
+
+static inline int ufshcd_prepare_lrbp_crypto(struct ufs_hba *hba,
+					     struct scsi_cmnd *cmd,
+					     struct ufshcd_lrb *lrbp)
+{
+	return 0;
+}
+
+static inline bool ufshcd_lrbp_crypto_enabled(struct ufshcd_lrb *lrbp)
+{
+	return false;
+}
+
+static inline int ufshcd_complete_lrbp_crypto(struct ufs_hba *hba,
+					      struct scsi_cmnd *cmd,
+					      struct ufshcd_lrb *lrbp)
+{
+	return 0;
+}
+
+static inline void ufshcd_crypto_debug(struct ufs_hba *hba) { }
+
+static inline int ufshcd_crypto_suspend(struct ufs_hba *hba,
+					enum ufs_pm_op pm_op)
+{
+	return 0;
+}
+
+static inline int ufshcd_crypto_resume(struct ufs_hba *hba,
+					enum ufs_pm_op pm_op)
+{
+	return 0;
+}
+
+static inline void ufshcd_crypto_set_vops(struct ufs_hba *hba,
+			struct ufs_hba_crypto_variant_ops *crypto_vops) { }
+
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+
+#endif /* _UFSHCD_CRYPTO_H */
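ufshcd_is_crypto_enabled() reduces to a single caps bit that the spec enable/disable helpers set and clear, so the per-request hot path only has to test a flag. A trivial standalone sketch (the bit value here is illustrative, not the kernel's):

#include <stdio.h>

#define CAP_CRYPTO (1u << 8)	/* illustrative stand-in for UFSHCD_CAP_CRYPTO */

static unsigned int caps;

static int is_crypto_enabled(void) { return !!(caps & CAP_CRYPTO); }

int main(void)
{
	caps |= CAP_CRYPTO;		/* as in ufshcd_crypto_enable_spec() */
	printf("enabled=%d\n", is_crypto_enabled());
	caps &= ~CAP_CRYPTO;		/* as in ufshcd_crypto_disable_spec() */
	printf("enabled=%d\n", is_crypto_enabled());
	return 0;
}
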
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 7365651..b8a11a1 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -204,6 +204,7 @@
 		break;
 	}
 }
+#include "ufshcd-crypto.h"
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/ufs.h>
@@ -424,7 +425,7 @@
 	/* UFS cards deviations table */
 	UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
 		UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM),
-	UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
+	UFS_FIX(UFS_ANY_VENDOR, UFS_ANY_MODEL,
 		UFS_DEVICE_NO_FASTAUTO),
 	UFS_FIX(UFS_VENDOR_SAMSUNG, UFS_ANY_MODEL,
 		UFS_DEVICE_QUIRK_HOST_PA_TACTIVATE),
@@ -905,6 +906,8 @@
 static void ufshcd_print_host_regs(struct ufs_hba *hba)
 {
 	__ufshcd_print_host_regs(hba, false);
+
+	ufshcd_crypto_debug(hba);
 }
 
 static
@@ -1412,8 +1415,11 @@
 {
 	u32 val = CONTROLLER_ENABLE;
 
-	if (ufshcd_is_crypto_supported(hba))
+	if (ufshcd_hba_is_crypto_supported(hba)) {
+		ufshcd_crypto_enable(hba);
 		val |= CRYPTO_GENERAL_ENABLE;
+	}
+
 	ufshcd_writel(hba, val, REG_CONTROLLER_ENABLE);
 }
 
@@ -3366,41 +3372,6 @@
 	ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE);
 }
 
-static int ufshcd_prepare_crypto_utrd(struct ufs_hba *hba,
-		struct ufshcd_lrb *lrbp)
-{
-	struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr;
-	u8 cc_index = 0;
-	bool enable = false;
-	u64 dun = 0;
-	int ret;
-
-	/*
-	 * Call vendor specific code to get crypto info for this request:
-	 * enable, crypto config. index, DUN.
-	 * If bypass is set, don't bother setting the other fields.
-	 */
-	ret = ufshcd_vops_crypto_req_setup(hba, lrbp, &cc_index, &enable, &dun);
-	if (ret) {
-		if (ret != -EAGAIN) {
-			dev_err(hba->dev,
-				"%s: failed to setup crypto request (%d)\n",
-				__func__, ret);
-		}
-
-		return ret;
-	}
-
-	if (!enable)
-		goto out;
-
-	req_desc->header.dword_0 |= cc_index | UTRD_CRYPTO_ENABLE;
-	req_desc->header.dword_1 = (u32)(dun & 0xFFFFFFFF);
-	req_desc->header.dword_3 = (u32)((dun >> 32) & 0xFFFFFFFF);
-out:
-	return 0;
-}
-
 /**
  * ufshcd_prepare_req_desc_hdr() - Fills the requests header
  * descriptor according to request
@@ -3434,9 +3405,23 @@
 		dword_0 |= UTP_REQ_DESC_INT_CMD;
 
 	/* Transfer request descriptor header fields */
+	if (ufshcd_lrbp_crypto_enabled(lrbp)) {
+#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+		dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
+		dword_0 |= lrbp->crypto_key_slot;
+		req_desc->header.dword_1 =
+			cpu_to_le32(lower_32_bits(lrbp->data_unit_num));
+		req_desc->header.dword_3 =
+			cpu_to_le32(upper_32_bits(lrbp->data_unit_num));
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
+	} else {
+		/* dword_1 and dword_3 are reserved, hence they are set to 0 */
+		req_desc->header.dword_1 = 0;
+		req_desc->header.dword_3 = 0;
+	}
+
 	req_desc->header.dword_0 = cpu_to_le32(dword_0);
-	/* dword_1 is reserved, hence it is set to 0 */
-	req_desc->header.dword_1 = 0;
+
 	/*
 	 * assigning invalid value for command status. Controller
 	 * updates OCS on command completion, with the command
@@ -3444,14 +3429,9 @@
 	 */
 	req_desc->header.dword_2 =
 		cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
-	/* dword_3 is reserved, hence it is set to 0 */
-	req_desc->header.dword_3 = 0;
 
 	req_desc->prd_table_length = 0;
 
-	if (ufshcd_is_crypto_supported(hba))
-		return ufshcd_prepare_crypto_utrd(hba, lrbp);
-
 	return 0;
 }
 
@@ -3807,6 +3787,13 @@
 	lrbp->task_tag = tag;
 	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
 	lrbp->intr_cmd = !ufshcd_is_intr_aggr_allowed(hba) ? true : false;
+
+	err = ufshcd_prepare_lrbp_crypto(hba, cmd, lrbp);
+	if (err) {
+		lrbp->cmd = NULL;
+		clear_bit_unlock(tag, &hba->lrb_in_use);
+		goto out;
+	}
 	lrbp->req_abort_skip = false;
 
 	err = ufshcd_comp_scsi_upiu(hba, lrbp);
@@ -3832,21 +3819,6 @@
 		goto out;
 	}
 
-	err = ufshcd_vops_crypto_engine_cfg_start(hba, tag);
-	if (err) {
-		if (err != -EAGAIN)
-			dev_err(hba->dev,
-				"%s: failed to configure crypto engine %d\n",
-				__func__, err);
-
-		scsi_dma_unmap(lrbp->cmd);
-		lrbp->cmd = NULL;
-		clear_bit_unlock(tag, &hba->lrb_in_use);
-		ufshcd_release_all(hba);
-		ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
-
-		goto out;
-	}
 
 	/* Make sure descriptors are ready before ringing the doorbell */
 	wmb();
@@ -3863,7 +3835,6 @@
 		clear_bit_unlock(tag, &hba->lrb_in_use);
 		ufshcd_release_all(hba);
 		ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
-		ufshcd_vops_crypto_engine_cfg_end(hba, lrbp, cmd->request);
 		dev_err(hba->dev, "%s: failed sending command, %d\n",
 							__func__, err);
 		err = DID_ERROR;
@@ -3887,6 +3858,9 @@
 	lrbp->task_tag = tag;
 	lrbp->lun = 0; /* device management cmd is not specific to any LUN */
 	lrbp->intr_cmd = true; /* No interrupt aggregation */
+#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+	lrbp->crypto_enable = false; /* No crypto operations */
+#endif
 	hba->dev_cmd.type = cmd_type;
 
 	return ufshcd_comp_devman_upiu(hba, lrbp);
@@ -5788,6 +5762,8 @@
 {
 	int err;
 
+	ufshcd_crypto_disable(hba);
+
 	ufshcd_writel(hba, CONTROLLER_DISABLE,  REG_CONTROLLER_ENABLE);
 	err = ufshcd_wait_for_register(hba, REG_CONTROLLER_ENABLE,
 					CONTROLLER_ENABLE, CONTROLLER_DISABLE,
@@ -6224,6 +6200,8 @@
 	sdev->autosuspend_delay = UFSHCD_AUTO_SUSPEND_DELAY_MS;
 	sdev->use_rpm_auto = 1;
 
+	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
+
 	return 0;
 }
 
@@ -6234,6 +6212,7 @@
 static void ufshcd_slave_destroy(struct scsi_device *sdev)
 {
 	struct ufs_hba *hba;
+	struct request_queue *q = sdev->request_queue;
 
 	hba = shost_priv(sdev->host);
 	/* Drop the reference as it won't be needed anymore */
@@ -6244,6 +6223,8 @@
 		hba->sdev_ufs_device = NULL;
 		spin_unlock_irqrestore(hba->host->host_lock, flags);
 	}
+
+	ufshcd_crypto_destroy_rq_keyslot_manager(hba, q);
 }
 
 /**
@@ -6517,6 +6498,7 @@
 			cmd->result = result;
 			lrbp->compl_time_stamp = ktime_get();
 			update_req_stats(hba, lrbp);
+			ufshcd_complete_lrbp_crypto(hba, cmd, lrbp);
 			/* Mark completed command as NULL in LRB */
 			lrbp->cmd = NULL;
 			hba->ufs_stats.clk_rel.ctx = XFR_REQ_COMPL;
@@ -6530,8 +6512,6 @@
 				 */
 				ufshcd_vops_pm_qos_req_end(hba, cmd->request,
 					false);
-				ufshcd_vops_crypto_engine_cfg_end(hba,
-					lrbp, cmd->request);
 			}
 
 			clear_bit_unlock(index, &hba->lrb_in_use);
@@ -6599,8 +6579,6 @@
 				 */
 				ufshcd_vops_pm_qos_req_end(hba, cmd->request,
 					true);
-				ufshcd_vops_crypto_engine_cfg_end(hba,
-						lrbp, cmd->request);
 			}
 			clear_bit_unlock(index, &hba->lrb_in_use);
 			/* Do not touch lrbp after scsi done */
@@ -7668,8 +7646,6 @@
 	ufsdbg_error_inject_dispatcher(hba,
 		ERR_INJECT_INTR, intr_status, &intr_status);
 
-	ufshcd_vops_crypto_engine_get_status(hba, &hba->ce_error);
-
 	hba->errors = UFSHCD_ERROR_MASK & intr_status;
 	if (hba->errors || hba->ce_error)
 		retval |= ufshcd_check_errors(hba);
@@ -8147,15 +8123,6 @@
 		goto out;
 	}
 
-	if (!err) {
-		err = ufshcd_vops_crypto_engine_reset(hba);
-		if (err) {
-			dev_err(hba->dev,
-				"%s: failed to reset crypto engine %d\n",
-				__func__, err);
-			goto out;
-		}
-	}
 
 out:
 	if (err)
@@ -10353,6 +10320,10 @@
 		req_link_state = UIC_LINK_OFF_STATE;
 	}
 
+	ret = ufshcd_crypto_suspend(hba, pm_op);
+	if (ret)
+		goto out;
+
 	/*
 	 * If we can't transition into any of the low power modes
 	 * just gate the clocks.
@@ -10481,6 +10452,7 @@
 	hba->hibern8_on_idle.is_suspended = false;
 	hba->clk_gating.is_suspended = false;
 	ufshcd_release_all(hba);
+	ufshcd_crypto_resume(hba, pm_op);
 out:
 	hba->pm_op_in_progress = 0;
 
@@ -10504,9 +10476,11 @@
 {
 	int ret;
 	enum uic_link_state old_link_state;
+	enum ufs_dev_pwr_mode old_pwr_mode;
 
 	hba->pm_op_in_progress = 1;
 	old_link_state = hba->uic_link_state;
+	old_pwr_mode = hba->curr_dev_pwr_mode;
 
 	ufshcd_hba_vreg_set_hpm(hba);
 	/* Make sure clocks are enabled before accessing controller */
@@ -10574,6 +10548,11 @@
 				goto set_old_link_state;
 		}
 	}
+
+	ret = ufshcd_crypto_resume(hba, pm_op);
+	if (ret)
+		goto set_old_dev_pwr_mode;
+
 	if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
 		ufshcd_enable_auto_bkops(hba);
 	else
@@ -10596,6 +10575,9 @@
 	ufshcd_release_all(hba);
 	goto out;
 
+set_old_dev_pwr_mode:
+	if (old_pwr_mode != hba->curr_dev_pwr_mode)
+		ufshcd_set_dev_pwr_mode(hba, old_pwr_mode);
 set_old_link_state:
 	ufshcd_link_state_transition(hba, old_link_state, 0);
 	if (ufshcd_is_link_hibern8(hba) &&
@@ -11112,6 +11094,12 @@
 
 	if (hba->force_g4)
 		hba->phy_init_g4 = true;
+	/* Init crypto */
+	err = ufshcd_hba_init_crypto(hba);
+	if (err) {
+		dev_err(hba->dev, "crypto setup failed\n");
+		goto out_remove_scsi_host;
+	}
 
 	/* Host controller enable */
 	err = ufshcd_hba_enable(hba);
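
ufshcd_prepare_req_desc_hdr() now splits the 64-bit data unit number across dword_1 (low half) and dword_3 (high half) of the transfer request descriptor header. A standalone sketch mirroring the kernel's lower_32_bits()/upper_32_bits() helpers:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t lower_32_bits(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32_bits(uint64_t v) { return (uint32_t)(v >> 32); }

int main(void)
{
	uint64_t dun = 0x0000000123456789ULL;	/* example data unit number */

	/* DUN low goes in dword_1, DUN high in dword_3, as in the
	 * UTRD header setup above.
	 */
	printf("dword_1=0x%08" PRIx32 " dword_3=0x%08" PRIx32 "\n",
	       lower_32_bits(dun), upper_32_bits(dun));
	return 0;
}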
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 887b976..8d88011 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -197,6 +197,9 @@
  * @intr_cmd: Interrupt command (doesn't participate in interrupt aggregation)
  * @issue_time_stamp: time stamp for debug purposes
  * @compl_time_stamp: time stamp for statistics
+ * @crypto_enable: whether or not the request needs inline crypto operations
+ * @crypto_key_slot: the key slot to use for inline crypto
+ * @data_unit_num: the data unit number for the first block for inline crypto
  * @req_abort_skip: skip request abort task flag
  */
 struct ufshcd_lrb {
@@ -221,6 +224,11 @@
 	bool intr_cmd;
 	ktime_t issue_time_stamp;
 	ktime_t compl_time_stamp;
+#if IS_ENABLED(CONFIG_SCSI_UFS_CRYPTO)
+	bool crypto_enable;
+	u8 crypto_key_slot;
+	u64 data_unit_num;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
 
 	bool req_abort_skip;
 };
@@ -302,6 +310,8 @@
 	struct ufs_pa_layer_attr info;
 };
 
+union ufs_crypto_cfg_entry;
+
 /**
  * struct ufs_hba_variant_ops - variant specific callbacks
  * @init: called when the driver is initialized
@@ -332,6 +342,7 @@
  *			 scale down
  * @set_bus_vote: called to vote for the required bus bandwidth
  * @phy_initialization: used to initialize phys
+ * @program_key: program an inline encryption key into a keyslot
  */
 struct ufs_hba_variant_ops {
 	int	(*init)(struct ufs_hba *);
@@ -368,31 +379,8 @@
 	void	(*add_debugfs)(struct ufs_hba *hba, struct dentry *root);
 	void	(*remove_debugfs)(struct ufs_hba *hba);
 #endif
-};
-
-/**
- * struct ufs_hba_crypto_variant_ops - variant specific crypto callbacks
- * @crypto_req_setup:	retreieve the necessary cryptographic arguments to setup
-			a requests's transfer descriptor.
- * @crypto_engine_cfg_start: start configuring cryptographic engine
- *							 according to tag
- *							 parameter
- * @crypto_engine_cfg_end: end configuring cryptographic engine
- *						   according to tag parameter
- * @crypto_engine_reset: perform reset to the cryptographic engine
- * @crypto_engine_get_status: get errors status of the cryptographic engine
- */
-struct ufs_hba_crypto_variant_ops {
-	int	(*crypto_req_setup)(struct ufs_hba *hba,
-				    struct ufshcd_lrb *lrbp, u8 *cc_index,
-				    bool *enable, u64 *dun);
-	int	(*crypto_engine_cfg_start)(struct ufs_hba *hba,
-					   unsigned int task_tag);
-	int	(*crypto_engine_cfg_end)(struct ufs_hba *hba,
-					 struct ufshcd_lrb *lrbp,
-					 struct request *req);
-	int	(*crypto_engine_reset)(struct ufs_hba *hba);
-	int	(*crypto_engine_get_status)(struct ufs_hba *hba, u32 *status);
+	int	(*program_key)(struct ufs_hba *hba,
+			       const union ufs_crypto_cfg_entry *cfg, int slot);
 };
 
 /**
@@ -412,10 +400,31 @@
 	struct device				*dev;
 	const char				*name;
 	struct ufs_hba_variant_ops		*vops;
-	struct ufs_hba_crypto_variant_ops	*crypto_vops;
 	struct ufs_hba_pm_qos_variant_ops	*pm_qos_vops;
 };
 
+struct keyslot_mgmt_ll_ops;
+struct ufs_hba_crypto_variant_ops {
+	void (*setup_rq_keyslot_manager)(struct ufs_hba *hba,
+					 struct request_queue *q);
+	void (*destroy_rq_keyslot_manager)(struct ufs_hba *hba,
+					   struct request_queue *q);
+	int (*hba_init_crypto)(struct ufs_hba *hba,
+			       const struct keyslot_mgmt_ll_ops *ksm_ops);
+	void (*enable)(struct ufs_hba *hba);
+	void (*disable)(struct ufs_hba *hba);
+	int (*suspend)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+	int (*resume)(struct ufs_hba *hba, enum ufs_pm_op pm_op);
+	int (*debug)(struct ufs_hba *hba);
+	int (*prepare_lrbp_crypto)(struct ufs_hba *hba,
+				   struct scsi_cmnd *cmd,
+				   struct ufshcd_lrb *lrbp);
+	int (*complete_lrbp_crypto)(struct ufs_hba *hba,
+				    struct scsi_cmnd *cmd,
+				    struct ufshcd_lrb *lrbp);
+	void *priv;
+};
+
 /* clock gating state  */
 enum clk_gating_state {
 	CLKS_OFF,
@@ -784,6 +793,10 @@
  * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
  *  device is known or not.
  * @scsi_block_reqs_cnt: reference counting for scsi block requests
+ * @crypto_capabilities: Content of crypto capabilities register (0x100)
+ * @crypto_cap_array: Array of crypto capabilities
+ * @crypto_cfg_register: Start of the crypto cfg array
+ * @ksm: the keyslot manager tied to this hba
  */
 struct ufs_hba {
 	void __iomem *mmio_base;
@@ -831,6 +844,7 @@
 	u32 ufs_version;
 	struct ufs_hba_variant *var;
 	void *priv;
+	const struct ufs_hba_crypto_variant_ops *crypto_vops;
 	unsigned int irq;
 	bool is_irq_enabled;
 	bool crash_on_err;
@@ -935,6 +949,11 @@
 	#define UFSHCD_QUIRK_DME_PEER_GET_FAST_MODE		0x20000
 
 	#define UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8		0x40000
+	/*
+	 * This quirk needs to be enabled if the host controller advertises
+	 * inline encryption support but it doesn't work correctly.
+	 */
+	#define UFSHCD_QUIRK_BROKEN_CRYPTO			0x800
 
 	unsigned int quirks;	/* Deviations from standard UFSHCI spec. */
 
@@ -1048,6 +1067,11 @@
 	 * in hibern8 then enable this cap.
 	 */
 #define UFSHCD_CAP_POWER_COLLAPSE_DURING_HIBERN8 (1 << 7)
+	/*
+	 * This capability allows the host controller driver to use the
+	 * inline crypto engine, if it is present
+	 */
+#define UFSHCD_CAP_CRYPTO (1 << 7)
 
 	struct devfreq *devfreq;
 	struct ufs_clk_scaling clk_scaling;
@@ -1075,6 +1099,14 @@
 	bool phy_init_g4;
 	bool force_g4;
 	bool wb_enabled;
+
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+	/* crypto */
+	union ufs_crypto_capabilities crypto_capabilities;
+	union ufs_crypto_cap_entry *crypto_cap_array;
+	u32 crypto_cfg_register;
+	struct keyslot_manager *ksm;
+#endif /* CONFIG_SCSI_UFS_CRYPTO */
 };
 
 static inline void ufshcd_mark_shutdown_ongoing(struct ufs_hba *hba)
@@ -1539,55 +1571,6 @@
 }
 #endif
 
-static inline int ufshcd_vops_crypto_req_setup(struct ufs_hba *hba,
-	struct ufshcd_lrb *lrbp, u8 *cc_index, bool *enable, u64 *dun)
-{
-	if (hba->var && hba->var->crypto_vops &&
-		hba->var->crypto_vops->crypto_req_setup)
-		return hba->var->crypto_vops->crypto_req_setup(hba, lrbp,
-			cc_index, enable, dun);
-	return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_cfg_start(struct ufs_hba *hba,
-						unsigned int task_tag)
-{
-	if (hba->var && hba->var->crypto_vops &&
-	    hba->var->crypto_vops->crypto_engine_cfg_start)
-		return hba->var->crypto_vops->crypto_engine_cfg_start
-				(hba, task_tag);
-	return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_cfg_end(struct ufs_hba *hba,
-						struct ufshcd_lrb *lrbp,
-						struct request *req)
-{
-	if (hba->var && hba->var->crypto_vops &&
-	    hba->var->crypto_vops->crypto_engine_cfg_end)
-		return hba->var->crypto_vops->crypto_engine_cfg_end
-				(hba, lrbp, req);
-	return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_reset(struct ufs_hba *hba)
-{
-	if (hba->var && hba->var->crypto_vops &&
-	    hba->var->crypto_vops->crypto_engine_reset)
-		return hba->var->crypto_vops->crypto_engine_reset(hba);
-	return 0;
-}
-
-static inline int ufshcd_vops_crypto_engine_get_status(struct ufs_hba *hba,
-		u32 *status)
-{
-	if (hba->var && hba->var->crypto_vops &&
-	    hba->var->crypto_vops->crypto_engine_get_status)
-		return hba->var->crypto_vops->crypto_engine_get_status(hba,
-			status);
-	return 0;
-}
-
 static inline void ufshcd_vops_pm_qos_req_start(struct ufs_hba *hba,
 		struct request *req)
 {
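
The new ufs_hba_crypto_variant_ops table ends with a priv pointer: the vendor layer stows its own state there at init time (crypto_qti_init_crypto() stores its ice_entry) and receives it back in every hook. A minimal standalone sketch of that handle pattern, with illustrative names:

#include <stdio.h>

struct vops {
	int (*debug)(void *priv);
	void *priv;	/* vendor-owned state, opaque to the core */
};

struct ice_state { int hw_version; };

static int ice_debug(void *priv)
{
	struct ice_state *ice = priv;

	printf("ICE hw version 0x%x\n", ice->hw_version);
	return 0;
}

int main(void)
{
	struct ice_state ice = { .hw_version = 0x30200 };
	struct vops v = { .debug = ice_debug, .priv = &ice };

	v.debug(v.priv);	/* core passes the handle back to the vendor */
	return 0;
}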
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index 8a20eb7..3bf11f7 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -363,6 +363,61 @@
 	INTERRUPT_MASK_ALL_VER_21	= 0x71FFF,
 };
 
+/* CCAP - Crypto Capability 100h */
+union ufs_crypto_capabilities {
+	__le32 reg_val;
+	struct {
+		u8 num_crypto_cap;
+		u8 config_count;
+		u8 reserved;
+		u8 config_array_ptr;
+	};
+};
+
+enum ufs_crypto_key_size {
+	UFS_CRYPTO_KEY_SIZE_INVALID	= 0x0,
+	UFS_CRYPTO_KEY_SIZE_128		= 0x1,
+	UFS_CRYPTO_KEY_SIZE_192		= 0x2,
+	UFS_CRYPTO_KEY_SIZE_256		= 0x3,
+	UFS_CRYPTO_KEY_SIZE_512		= 0x4,
+};
+
+enum ufs_crypto_alg {
+	UFS_CRYPTO_ALG_AES_XTS			= 0x0,
+	UFS_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
+	UFS_CRYPTO_ALG_AES_ECB			= 0x2,
+	UFS_CRYPTO_ALG_ESSIV_AES_CBC		= 0x3,
+};
+
+/* x-CRYPTOCAP - Crypto Capability X */
+union ufs_crypto_cap_entry {
+	__le32 reg_val;
+	struct {
+		u8 algorithm_id;
+		u8 sdus_mask; /* Supported data unit size mask */
+		u8 key_size;
+		u8 reserved;
+	};
+};
+
+#define UFS_CRYPTO_CONFIGURATION_ENABLE (1 << 7)
+#define UFS_CRYPTO_KEY_MAX_SIZE 64
+/* x-CRYPTOCFG - Crypto Configuration X */
+union ufs_crypto_cfg_entry {
+	__le32 reg_val[32];
+	struct {
+		u8 crypto_key[UFS_CRYPTO_KEY_MAX_SIZE];
+		u8 data_unit_size;
+		u8 crypto_cap_idx;
+		u8 reserved_1;
+		u8 config_enable;
+		u8 reserved_multi_host;
+		u8 reserved_2;
+		u8 vsb[2];
+		u8 reserved_3[56];
+	};
+};
+
 /*
  * Request Descriptor Definitions
  */
@@ -384,6 +439,7 @@
 	UTP_NATIVE_UFS_COMMAND		= 0x10000000,
 	UTP_DEVICE_MANAGEMENT_FUNCTION	= 0x20000000,
 	UTP_REQ_DESC_INT_CMD		= 0x01000000,
+	UTP_REQ_DESC_CRYPTO_ENABLE_CMD	= 0x00800000,
 };
 
 /* UTP Transfer Request Data Direction (DD) */
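
The register unions above give each 32-bit crypto register a per-byte field view. A standalone sketch of unpacking a sample CCAP value the way ufshcd_hba_init_crypto_spec() does; it assumes a little-endian host, as the kernel code does after the le32 conversion:

#include <stdint.h>
#include <stdio.h>

/* Mirrors union ufs_crypto_capabilities: one register view overlaid
 * with per-byte fields.
 */
union crypto_caps {
	uint32_t reg_val;
	struct {
		uint8_t num_crypto_cap;
		uint8_t config_count;
		uint8_t reserved;
		uint8_t config_array_ptr;
	};
};

int main(void)
{
	union crypto_caps caps = { .reg_val = 0x04001F02 };	/* sample CCAP */

	printf("caps=%d keyslots=%d cfg_base=0x%x\n",
	       caps.num_crypto_cap,
	       caps.config_count + 1,		/* CFGC+1 usable keyslots */
	       caps.config_array_ptr * 0x100);	/* as in crypto_cfg_register */
	return 0;
}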
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index ab38c20..ee74d75 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -844,6 +844,24 @@
 	  bit in tcsr register if it is going to cross its own threshold.
 	  If all clients are going to cross their thresholds then Cx ipeak
 	  hw module will raise an interrupt to cDSP block to throttle cDSP fmax.
+
+config QTI_CRYPTO_COMMON
+	tristate "Enable common crypto functionality used for FBE"
+	depends on BLK_INLINE_ENCRYPTION
+	help
+	 Say 'Y' to enable the common crypto implementation to be used by
+	 different storage layers such as UFS and EMMC for file-based hardware
+	 encryption. This library implements an API to program and evict
+	 keys using Trustzone or the Hardware Key Manager.
+
+config QTI_CRYPTO_TZ
+	tristate "Enable Trustzone to be used for FBE"
+	depends on QTI_CRYPTO_COMMON
+	help
+	 Say 'Y' to route crypto requests to Trustzone when performing
+	 hardware-based file encryption. Keys are then programmed and
+	 managed through SCM calls into TZ, where the ICE driver
+	 configures them.
 endmenu
 
 config QCOM_HYP_CORE_CTL
@@ -855,6 +873,15 @@
 	  An offline CPU is considered as a reserved CPU since this OS can't use
 	  it.
 
+config QTI_HW_KEY_MANAGER
+	tristate "Enable QTI Hardware Key Manager for storage encryption"
+	default n
+	help
+	 Say 'Y' to enable the hardware key manager driver used to operate
+	 and access the key manager hardware block. It interfaces with the
+	 HWKM hardware to perform key operations from the kernel, which are
+	 used for storage encryption.
+
 source "drivers/soc/qcom/icnss2/Kconfig"
 
 config ICNSS
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 530043d..2783498 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -94,9 +94,14 @@
 obj-$(CONFIG_MSM_RPM_SMD)   +=  rpm-smd-debug.o
 endif
 obj-$(CONFIG_QCOM_CDSP_RM) += cdsprm.o
-obj-$(CONFIG_ICNSS) += icnss.o
-obj-$(CONFIG_ICNSS_QMI) += icnss_qmi.o wlan_firmware_service_v01.o
+obj-$(CONFIG_ICNSS) += msm_icnss.o
+msm_icnss-y := icnss.o
+msm_icnss-$(CONFIG_ICNSS_QMI) += icnss_qmi.o wlan_firmware_service_v01.o
 obj-$(CONFIG_RMNET_CTL) += rmnet_ctl/
 obj-$(CONFIG_QCOM_CX_IPEAK) += cx_ipeak.o
 obj-$(CONFIG_QTI_L2_REUSE) += l2_reuse.o
 obj-$(CONFIG_ICNSS2) += icnss2/
+obj-$(CONFIG_QTI_CRYPTO_COMMON) += crypto-qti-common.o
+obj-$(CONFIG_QTI_CRYPTO_TZ) += crypto-qti-tz.o
+obj-$(CONFIG_QTI_HW_KEY_MANAGER) += hwkm_qti.o
+hwkm_qti-y += hwkm.o
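
crypto-qti-common.c, added next, derives the ICE major/minor/step revision from the version register by mask and shift. A standalone sketch of that unpacking; the mask and shift values here are illustrative, the real ICE_CORE_*_REV constants live in crypto-qti-ice-regs.h:

#include <stdint.h>
#include <stdio.h>

/* Illustrative field layout: major in bits 31:24, minor in 23:16,
 * step in 15:0. The actual layout is defined by the ICE register spec.
 */
#define MAJOR_MASK  0xFF000000u
#define MAJOR_SHIFT 24
#define MINOR_MASK  0x00FF0000u
#define MINOR_SHIFT 16
#define STEP_MASK   0x0000FFFFu
#define STEP_SHIFT  0

int main(void)
{
	uint32_t version = 0x03020001;	/* e.g. ICE 3.2.1 */

	printf("rev %u.%u.%u\n",
	       (version & MAJOR_MASK) >> MAJOR_SHIFT,
	       (version & MINOR_MASK) >> MINOR_SHIFT,
	       (version & STEP_MASK) >> STEP_SHIFT);
	return 0;
}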
diff --git a/drivers/soc/qcom/crypto-qti-common.c b/drivers/soc/qcom/crypto-qti-common.c
new file mode 100644
index 0000000..97df33a
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-common.c
@@ -0,0 +1,467 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/crypto-qti-common.h>
+#include "crypto-qti-ice-regs.h"
+#include "crypto-qti-platform.h"
+
+static int ice_check_fuse_setting(struct crypto_vops_qti_entry *ice_entry)
+{
+	uint32_t regval;
+	uint32_t major, minor;
+
+	major = (ice_entry->ice_hw_version & ICE_CORE_MAJOR_REV_MASK) >>
+			ICE_CORE_MAJOR_REV;
+	minor = (ice_entry->ice_hw_version & ICE_CORE_MINOR_REV_MASK) >>
+			ICE_CORE_MINOR_REV;
+
+	/* The fuse setting check is not applicable on ICE 3.2 onwards */
+	if ((major == 0x03) && (minor >= 0x02))
+		return 0;
+	regval = ice_readl(ice_entry, ICE_REGS_FUSE_SETTING);
+	regval &= (ICE_FUSE_SETTING_MASK |
+		ICE_FORCE_HW_KEY0_SETTING_MASK |
+		ICE_FORCE_HW_KEY1_SETTING_MASK);
+
+	if (regval) {
+		pr_err("%s: error: ICE_ERROR_HW_DISABLE_FUSE_BLOWN\n",
+				__func__);
+		return -EPERM;
+	}
+	return 0;
+}
+
+static int ice_check_version(struct crypto_vops_qti_entry *ice_entry)
+{
+	uint32_t version, major, minor, step;
+
+	version = ice_readl(ice_entry, ICE_REGS_VERSION);
+	major = (version & ICE_CORE_MAJOR_REV_MASK) >> ICE_CORE_MAJOR_REV;
+	minor = (version & ICE_CORE_MINOR_REV_MASK) >> ICE_CORE_MINOR_REV;
+	step = (version & ICE_CORE_STEP_REV_MASK) >> ICE_CORE_STEP_REV;
+
+	if (major < ICE_CORE_CURRENT_MAJOR_VERSION) {
+		pr_err("%s: Unknown ICE device at %lu, rev %d.%d.%d\n",
+			__func__, (unsigned long)ice_entry->icemmio_base,
+				major, minor, step);
+		return -ENODEV;
+	}
+
+	ice_entry->ice_hw_version = version;
+
+	return 0;
+}
+
+int crypto_qti_init_crypto(struct device *dev, void __iomem *mmio_base,
+			   void **priv_data)
+{
+	int err = 0;
+	struct crypto_vops_qti_entry *ice_entry;
+
+	ice_entry = devm_kzalloc(dev,
+		sizeof(struct crypto_vops_qti_entry),
+		GFP_KERNEL);
+	if (!ice_entry)
+		return -ENOMEM;
+
+	ice_entry->icemmio_base = mmio_base;
+	ice_entry->flags = 0;
+
+	err = ice_check_version(ice_entry);
+	if (err) {
+		pr_err("%s: check version failed, err %d\n", __func__, err);
+		return err;
+	}
+
+	err = ice_check_fuse_setting(ice_entry);
+	if (err)
+		return err;
+
+	*priv_data = (void *)ice_entry;
+
+	return err;
+}
+
+static void ice_low_power_and_optimization_enable(
+		struct crypto_vops_qti_entry *ice_entry)
+{
+	uint32_t regval;
+
+	regval = ice_readl(ice_entry, ICE_REGS_ADVANCED_CONTROL);
+	/* Enable low power mode sequence
+	 * [0]-0,[1]-0,[2]-0,[3]-7,[4]-0,[5]-0,[6]-0,[7]-0,
+	 * Enable CONFIG_CLK_GATING, STREAM2_CLK_GATING and STREAM1_CLK_GATING
+	 */
+	regval |= 0x7000;
+	/* Optimization enable sequence */
+	regval |= 0xD807100;
+	ice_writel(ice_entry, regval, ICE_REGS_ADVANCED_CONTROL);
+	/*
+	 * Memory barrier - to ensure write completion before next transaction
+	 */
+	wmb();
+}
+
+static int ice_wait_bist_status(struct crypto_vops_qti_entry *ice_entry)
+{
+	int count;
+	uint32_t regval;
+
+	for (count = 0; count < QTI_ICE_MAX_BIST_CHECK_COUNT; count++) {
+		regval = ice_readl(ice_entry, ICE_REGS_BIST_STATUS);
+		if (!(regval & ICE_BIST_STATUS_MASK))
+			break;
+		udelay(50);
+	}
+
+	if (regval & ICE_BIST_STATUS_MASK) {
+		pr_err("%s: wait bist status failed, reg 0x%x\n",
+				__func__, regval);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static void ice_enable_intr(struct crypto_vops_qti_entry *ice_entry)
+{
+	uint32_t regval;
+
+	regval = ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_MASK);
+	regval &= ~ICE_REGS_NON_SEC_IRQ_MASK;
+	ice_writel(ice_entry, regval, ICE_REGS_NON_SEC_IRQ_MASK);
+	/*
+	 * Memory barrier - to ensure write completion before next transaction
+	 */
+	wmb();
+}
+
+static void ice_disable_intr(struct crypto_vops_qti_entry *ice_entry)
+{
+	uint32_t regval;
+
+	regval = ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_MASK);
+	regval |= ICE_REGS_NON_SEC_IRQ_MASK;
+	ice_writel(ice_entry, regval, ICE_REGS_NON_SEC_IRQ_MASK);
+	/*
+	 * Memory barrier - to ensure write completion before next transaction
+	 */
+	wmb();
+}
+
+int crypto_qti_enable(void *priv_data)
+{
+	int err = 0;
+	struct crypto_vops_qti_entry *ice_entry;
+
+	ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+	if (!ice_entry) {
+		pr_err("%s: vops ice data is invalid\n", __func__);
+		return -EINVAL;
+	}
+
+	ice_low_power_and_optimization_enable(ice_entry);
+	err = ice_wait_bist_status(ice_entry);
+	if (err)
+		return err;
+	ice_enable_intr(ice_entry);
+
+	return err;
+}
+
+void crypto_qti_disable(void *priv_data)
+{
+	struct crypto_vops_qti_entry *ice_entry;
+
+	ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+	if (!ice_entry) {
+		pr_err("%s: vops ice data is invalid\n", __func__);
+		return;
+	}
+
+	crypto_qti_disable_platform(ice_entry);
+	ice_disable_intr(ice_entry);
+}
+
+int crypto_qti_resume(void *priv_data)
+{
+	int err = 0;
+	struct crypto_vops_qti_entry *ice_entry;
+
+	ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+	if (!ice_entry) {
+		pr_err("%s: vops ice data is invalid\n", __func__);
+		return -EINVAL;
+	}
+
+	err = ice_wait_bist_status(ice_entry);
+
+	return err;
+}
+
+static void ice_dump_test_bus(struct crypto_vops_qti_entry *ice_entry)
+{
+	uint32_t regval = 0x1;
+	uint32_t val;
+	uint8_t bus_selector;
+	uint8_t stream_selector;
+
+	pr_err("ICE TEST BUS DUMP:\n");
+
+	for (bus_selector = 0; bus_selector <= 0xF;  bus_selector++) {
+		regval = 0x1;	/* enable test bus */
+		regval |= bus_selector << 28;
+		if (bus_selector == 0xD)
+			continue;
+		ice_writel(ice_entry, regval, ICE_REGS_TEST_BUS_CONTROL);
+		/*
+		 * make sure test bus selector is written before reading
+		 * the test bus register
+		 */
+		wmb();
+		val = ice_readl(ice_entry, ICE_REGS_TEST_BUS_REG);
+		pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
+			regval, val);
+	}
+
+	pr_err("ICE TEST BUS DUMP (ICE_STREAM1_DATAPATH_TEST_BUS):\n");
+	for (stream_selector = 0; stream_selector <= 0xF; stream_selector++) {
+		regval = 0xD0000001;	/* enable stream test bus */
+		regval |= stream_selector << 16;
+		ice_writel(ice_entry, regval, ICE_REGS_TEST_BUS_CONTROL);
+		/*
+		 * make sure test bus selector is written before reading
+		 * the test bus register
+		 */
+		wmb();
+		val = ice_readl(ice_entry, ICE_REGS_TEST_BUS_REG);
+		pr_err("ICE_TEST_BUS_CONTROL: 0x%08x | ICE_TEST_BUS_REG: 0x%08x\n",
+			regval, val);
+	}
+}
+
+int crypto_qti_debug(void *priv_data)
+{
+	struct crypto_vops_qti_entry *ice_entry;
+
+	ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+	if (!ice_entry) {
+		pr_err("%s: vops ice data is invalid\n", __func__);
+		return -EINVAL;
+	}
+
+	pr_err("%s: ICE Control: 0x%08x | ICE Reset: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_CONTROL),
+		ice_readl(ice_entry, ICE_REGS_RESET));
+
+	pr_err("%s: ICE Version: 0x%08x | ICE FUSE:	0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_VERSION),
+		ice_readl(ice_entry, ICE_REGS_FUSE_SETTING));
+
+	pr_err("%s: ICE Param1: 0x%08x | ICE Param2:  0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_PARAMETERS_1),
+		ice_readl(ice_entry, ICE_REGS_PARAMETERS_2));
+
+	pr_err("%s: ICE Param3: 0x%08x | ICE Param4:  0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_PARAMETERS_3),
+		ice_readl(ice_entry, ICE_REGS_PARAMETERS_4));
+
+	pr_err("%s: ICE Param5: 0x%08x | ICE IRQ STTS:  0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_PARAMETERS_5),
+		ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_STTS));
+
+	pr_err("%s: ICE IRQ MASK: 0x%08x | ICE IRQ CLR:	0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_MASK),
+		ice_readl(ice_entry, ICE_REGS_NON_SEC_IRQ_CLR));
+
+	pr_err("%s: ICE INVALID CCFG ERR STTS: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_INVALID_CCFG_ERR_STTS));
+
+	pr_err("%s: ICE BIST Sts: 0x%08x | ICE Bypass Sts:  0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_BIST_STATUS),
+		ice_readl(ice_entry, ICE_REGS_BYPASS_STATUS));
+
+	pr_err("%s: ICE ADV CTRL: 0x%08x | ICE ENDIAN SWAP:	0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_ADVANCED_CONTROL),
+		ice_readl(ice_entry, ICE_REGS_ENDIAN_SWAP));
+
+	pr_err("%s: ICE_STM1_ERR_SYND1: 0x%08x | ICE_STM1_ERR_SYND2: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_ERROR_SYNDROME1),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_ERROR_SYNDROME2));
+
+	pr_err("%s: ICE_STM2_ERR_SYND1: 0x%08x | ICE_STM2_ERR_SYND2: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_ERROR_SYNDROME1),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_ERROR_SYNDROME2));
+
+	pr_err("%s: ICE_STM1_COUNTER1: 0x%08x | ICE_STM1_COUNTER2: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS1),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS2));
+
+	pr_err("%s: ICE_STM1_COUNTER3: 0x%08x | ICE_STM1_COUNTER4: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS3),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS4));
+
+	pr_err("%s: ICE_STM2_COUNTER1: 0x%08x | ICE_STM2_COUNTER2: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS1),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS2));
+
+	pr_err("%s: ICE_STM2_COUNTER3: 0x%08x | ICE_STM2_COUNTER4: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS3),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS4));
+
+	pr_err("%s: ICE_STM1_CTR5_MSB: 0x%08x | ICE_STM1_CTR5_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS5_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS5_LSB));
+
+	pr_err("%s: ICE_STM1_CTR6_MSB: 0x%08x | ICE_STM1_CTR6_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS6_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS6_LSB));
+
+	pr_err("%s: ICE_STM1_CTR7_MSB: 0x%08x | ICE_STM1_CTR7_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS7_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS7_LSB));
+
+	pr_err("%s: ICE_STM1_CTR8_MSB: 0x%08x | ICE_STM1_CTR8_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS8_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS8_LSB));
+
+	pr_err("%s: ICE_STM1_CTR9_MSB: 0x%08x | ICE_STM1_CTR9_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS9_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM1_COUNTERS9_LSB));
+
+	pr_err("%s: ICE_STM2_CTR5_MSB: 0x%08x | ICE_STM2_CTR5_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS5_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS5_LSB));
+
+	pr_err("%s: ICE_STM2_CTR6_MSB: 0x%08x | ICE_STM2_CTR6_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS6_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS6_LSB));
+
+	pr_err("%s: ICE_STM2_CTR7_MSB: 0x%08x | ICE_STM2_CTR7_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS7_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS7_LSB));
+
+	pr_err("%s: ICE_STM2_CTR8_MSB: 0x%08x | ICE_STM2_CTR8_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS8_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS8_LSB));
+
+	pr_err("%s: ICE_STM2_CTR9_MSB: 0x%08x | ICE_STM2_CTR9_LSB: 0x%08x\n",
+		ice_entry->ice_dev_type,
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS9_MSB),
+		ice_readl(ice_entry, ICE_REGS_STREAM2_COUNTERS9_LSB));
+
+	ice_dump_test_bus(ice_entry);
+
+	return 0;
+}
+
+int crypto_qti_keyslot_program(void *priv_data,
+			       const struct blk_crypto_key *key,
+			       unsigned int slot,
+			       u8 data_unit_mask, int capid)
+{
+	int err = 0;
+	struct crypto_vops_qti_entry *ice_entry;
+
+	ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+	if (!ice_entry) {
+		pr_err("%s: vops ice data is invalid\n", __func__);
+		return -EINVAL;
+	}
+
+	err = crypto_qti_program_key(ice_entry, key, slot,
+				data_unit_mask, capid);
+	if (err) {
+		int ret;
+
+		pr_err("%s: program key failed with error %d\n", __func__, err);
+		/* Best-effort slot cleanup; preserve the original error */
+		ret = crypto_qti_invalidate_key(ice_entry, slot);
+		if (ret)
+			pr_err("%s: invalidate key failed with error %d\n",
+				__func__, ret);
+	}
+
+	return err;
+}
+
+int crypto_qti_keyslot_evict(void *priv_data, unsigned int slot)
+{
+	int err = 0;
+	struct crypto_vops_qti_entry *ice_entry;
+
+	ice_entry = (struct crypto_vops_qti_entry *) priv_data;
+	if (!ice_entry) {
+		pr_err("%s: vops ice data is invalid\n", __func__);
+		return -EINVAL;
+	}
+
+	err = crypto_qti_invalidate_key(ice_entry, slot);
+	if (err) {
+		pr_err("%s: invalidate key failed with error %d\n",
+			__func__, err);
+		return err;
+	}
+
+	return err;
+}
+
+int crypto_qti_derive_raw_secret(const u8 *wrapped_key,
+				 unsigned int wrapped_key_size, u8 *secret,
+				 unsigned int secret_size)
+{
+	int err = 0;
+
+	if (wrapped_key_size <= RAW_SECRET_SIZE) {
+		pr_err("%s: Invalid wrapped_key_size: %u\n",
+				__func__, wrapped_key_size);
+		err = -EINVAL;
+		return err;
+	}
+	if (secret_size != RAW_SECRET_SIZE) {
+		pr_err("%s: Invalid secret size: %u\n", __func__, secret_size);
+		err = -EINVAL;
+		return err;
+	}
+
+	memcpy(secret, wrapped_key, secret_size);
+
+	return err;
+}
diff --git a/drivers/soc/qcom/crypto-qti-ice-regs.h b/drivers/soc/qcom/crypto-qti-ice-regs.h
new file mode 100644
index 0000000..38e5c35
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-ice-regs.h
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CRYPTO_INLINE_CRYPTO_ENGINE_REGS_H_
+#define _CRYPTO_INLINE_CRYPTO_ENGINE_REGS_H_
+
+#include <linux/io.h>
+
+/* Register bits for ICE version */
+#define ICE_CORE_CURRENT_MAJOR_VERSION 0x03
+
+#define ICE_CORE_STEP_REV_MASK		0xFFFF
+#define ICE_CORE_STEP_REV		0 /* bit 15-0 */
+#define ICE_CORE_MAJOR_REV_MASK		0xFF000000
+#define ICE_CORE_MAJOR_REV		24 /* bit 31-24 */
+#define ICE_CORE_MINOR_REV_MASK		0xFF0000
+#define ICE_CORE_MINOR_REV		16 /* bit 23-16 */
+
+#define ICE_BIST_STATUS_MASK		(0xF0000000)	/* bits 28-31 */
+
+#define ICE_FUSE_SETTING_MASK			0x1
+#define ICE_FORCE_HW_KEY0_SETTING_MASK		0x2
+#define ICE_FORCE_HW_KEY1_SETTING_MASK		0x4
+
+/* QTI ICE Registers from SWI */
+#define ICE_REGS_CONTROL			0x0000
+#define ICE_REGS_RESET				0x0004
+#define ICE_REGS_VERSION			0x0008
+#define ICE_REGS_FUSE_SETTING			0x0010
+#define ICE_REGS_PARAMETERS_1			0x0014
+#define ICE_REGS_PARAMETERS_2			0x0018
+#define ICE_REGS_PARAMETERS_3			0x001C
+#define ICE_REGS_PARAMETERS_4			0x0020
+#define ICE_REGS_PARAMETERS_5			0x0024
+
+
+/* QTI ICE v3.X only */
+#define ICE_GENERAL_ERR_STTS			0x0040
+#define ICE_INVALID_CCFG_ERR_STTS		0x0030
+#define ICE_GENERAL_ERR_MASK			0x0044
+
+
+/* QTI ICE v2.X only */
+#define ICE_REGS_NON_SEC_IRQ_STTS		0x0040
+#define ICE_REGS_NON_SEC_IRQ_MASK		0x0044
+
+
+#define ICE_REGS_NON_SEC_IRQ_CLR		0x0048
+#define ICE_REGS_STREAM1_ERROR_SYNDROME1	0x0050
+#define ICE_REGS_STREAM1_ERROR_SYNDROME2	0x0054
+#define ICE_REGS_STREAM2_ERROR_SYNDROME1	0x0058
+#define ICE_REGS_STREAM2_ERROR_SYNDROME2	0x005C
+#define ICE_REGS_STREAM1_BIST_ERROR_VEC		0x0060
+#define ICE_REGS_STREAM2_BIST_ERROR_VEC		0x0064
+#define ICE_REGS_STREAM1_BIST_FINISH_VEC	0x0068
+#define ICE_REGS_STREAM2_BIST_FINISH_VEC	0x006C
+#define ICE_REGS_BIST_STATUS			0x0070
+#define ICE_REGS_BYPASS_STATUS			0x0074
+#define ICE_REGS_ADVANCED_CONTROL		0x1000
+#define ICE_REGS_ENDIAN_SWAP			0x1004
+#define ICE_REGS_TEST_BUS_CONTROL		0x1010
+#define ICE_REGS_TEST_BUS_REG			0x1014
+#define ICE_REGS_STREAM1_COUNTERS1		0x1100
+#define ICE_REGS_STREAM1_COUNTERS2		0x1104
+#define ICE_REGS_STREAM1_COUNTERS3		0x1108
+#define ICE_REGS_STREAM1_COUNTERS4		0x110C
+#define ICE_REGS_STREAM1_COUNTERS5_MSB		0x1110
+#define ICE_REGS_STREAM1_COUNTERS5_LSB		0x1114
+#define ICE_REGS_STREAM1_COUNTERS6_MSB		0x1118
+#define ICE_REGS_STREAM1_COUNTERS6_LSB		0x111C
+#define ICE_REGS_STREAM1_COUNTERS7_MSB		0x1120
+#define ICE_REGS_STREAM1_COUNTERS7_LSB		0x1124
+#define ICE_REGS_STREAM1_COUNTERS8_MSB		0x1128
+#define ICE_REGS_STREAM1_COUNTERS8_LSB		0x112C
+#define ICE_REGS_STREAM1_COUNTERS9_MSB		0x1130
+#define ICE_REGS_STREAM1_COUNTERS9_LSB		0x1134
+#define ICE_REGS_STREAM2_COUNTERS1		0x1200
+#define ICE_REGS_STREAM2_COUNTERS2		0x1204
+#define ICE_REGS_STREAM2_COUNTERS3		0x1208
+#define ICE_REGS_STREAM2_COUNTERS4		0x120C
+#define ICE_REGS_STREAM2_COUNTERS5_MSB		0x1210
+#define ICE_REGS_STREAM2_COUNTERS5_LSB		0x1214
+#define ICE_REGS_STREAM2_COUNTERS6_MSB		0x1218
+#define ICE_REGS_STREAM2_COUNTERS6_LSB		0x121C
+#define ICE_REGS_STREAM2_COUNTERS7_MSB		0x1220
+#define ICE_REGS_STREAM2_COUNTERS7_LSB		0x1224
+#define ICE_REGS_STREAM2_COUNTERS8_MSB		0x1228
+#define ICE_REGS_STREAM2_COUNTERS8_LSB		0x122C
+#define ICE_REGS_STREAM2_COUNTERS9_MSB		0x1230
+#define ICE_REGS_STREAM2_COUNTERS9_LSB		0x1234
+
+#define ICE_STREAM1_PREMATURE_LBA_CHANGE	(1L << 0)
+#define ICE_STREAM2_PREMATURE_LBA_CHANGE	(1L << 1)
+#define ICE_STREAM1_NOT_EXPECTED_LBO		(1L << 2)
+#define ICE_STREAM2_NOT_EXPECTED_LBO		(1L << 3)
+#define ICE_STREAM1_NOT_EXPECTED_DUN		(1L << 4)
+#define ICE_STREAM2_NOT_EXPECTED_DUN		(1L << 5)
+#define ICE_STREAM1_NOT_EXPECTED_DUS		(1L << 6)
+#define ICE_STREAM2_NOT_EXPECTED_DUS		(1L << 7)
+#define ICE_STREAM1_NOT_EXPECTED_DBO		(1L << 8)
+#define ICE_STREAM2_NOT_EXPECTED_DBO		(1L << 9)
+#define ICE_STREAM1_NOT_EXPECTED_ENC_SEL	(1L << 10)
+#define ICE_STREAM2_NOT_EXPECTED_ENC_SEL	(1L << 11)
+#define ICE_STREAM1_NOT_EXPECTED_CONF_IDX	(1L << 12)
+#define ICE_STREAM2_NOT_EXPECTED_CONF_IDX	(1L << 13)
+#define ICE_STREAM1_NOT_EXPECTED_NEW_TRNS	(1L << 14)
+#define ICE_STREAM2_NOT_EXPECTED_NEW_TRNS	(1L << 15)
+
+#define ICE_NON_SEC_IRQ_MASK				\
+			(ICE_STREAM1_PREMATURE_LBA_CHANGE |\
+			 ICE_STREAM2_PREMATURE_LBA_CHANGE |\
+			 ICE_STREAM1_NOT_EXPECTED_LBO |\
+			 ICE_STREAM2_NOT_EXPECTED_LBO |\
+			 ICE_STREAM1_NOT_EXPECTED_DUN |\
+			 ICE_STREAM2_NOT_EXPECTED_DUN |\
+			 ICE_STREAM1_NOT_EXPECTED_DUS |\
+			 ICE_STREAM2_NOT_EXPECTED_DUS |\
+			 ICE_STREAM1_NOT_EXPECTED_DBO |\
+			 ICE_STREAM2_NOT_EXPECTED_DBO |\
+			 ICE_STREAM1_NOT_EXPECTED_ENC_SEL |\
+			 ICE_STREAM2_NOT_EXPECTED_ENC_SEL |\
+			 ICE_STREAM1_NOT_EXPECTED_CONF_IDX |\
+			 ICE_STREAM2_NOT_EXPECTED_CONF_IDX |\
+			 ICE_STREAM1_NOT_EXPECTED_NEW_TRNS |\
+			 ICE_STREAM2_NOT_EXPECTED_NEW_TRNS)
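+
+/*
+ * A bit set in ICE_REGS_NON_SEC_IRQ_MASK masks (disables) the corresponding
+ * interrupt, so ice_enable_intr() clears ICE_NON_SEC_IRQ_MASK in that
+ * register and ice_disable_intr() sets it.
+ */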
+
+/* QTI ICE registers from secure side */
+#define ICE_TEST_BUS_REG_SECURE_INTR            (1L << 28)
+#define ICE_TEST_BUS_REG_NON_SECURE_INTR        (1L << 2)
+
+#define ICE_LUT_KEYS_CRYPTOCFG_R_16		0x4040
+#define ICE_LUT_KEYS_CRYPTOCFG_R_17		0x4044
+#define ICE_LUT_KEYS_CRYPTOCFG_OFFSET		0x80
+
+
+#define ICE_LUT_KEYS_ICE_SEC_IRQ_STTS		0x6200
+#define ICE_LUT_KEYS_ICE_SEC_IRQ_MASK		0x6204
+#define ICE_LUT_KEYS_ICE_SEC_IRQ_CLR		0x6208
+
+#define ICE_STREAM1_PARTIALLY_SET_KEY_USED	(1L << 0)
+#define ICE_STREAM2_PARTIALLY_SET_KEY_USED	(1L << 1)
+#define ICE_QTIC_DBG_OPEN_EVENT			(1L << 30)
+#define ICE_KEYS_RAM_RESET_COMPLETED		(1L << 31)
+
+#define ICE_SEC_IRQ_MASK					  \
+			(ICE_STREAM1_PARTIALLY_SET_KEY_USED |\
+			 ICE_STREAM2_PARTIALLY_SET_KEY_USED |\
+			 ICE_QTIC_DBG_OPEN_EVENT |	  \
+			 ICE_KEYS_RAM_RESET_COMPLETED)
+
+#define ice_writel(ice_entry, val, reg)	\
+	writel_relaxed((val), (ice_entry)->icemmio_base + (reg))
+#define ice_readl(ice_entry, reg)	\
+	readl_relaxed((ice_entry)->icemmio_base + (reg))
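+
+/*
+ * Illustrative usage (a sketch, not part of the SWI): reading the ICE
+ * version register and extracting the major revision with the masks above:
+ *
+ *	u32 ver = ice_readl(ice_entry, ICE_REGS_VERSION);
+ *	u32 major = (ver & ICE_CORE_MAJOR_REV_MASK) >> ICE_CORE_MAJOR_REV;
+ */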
+
+#endif /* _CRYPTO_INLINE_CRYPTO_ENGINE_REGS_H_ */
diff --git a/drivers/soc/qcom/crypto-qti-platform.h b/drivers/soc/qcom/crypto-qti-platform.h
new file mode 100644
index 0000000..be00e50
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-platform.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CRYPTO_QTI_PLATFORM_H
+#define _CRYPTO_QTI_PLATFORM_H
+
+#include <linux/bio-crypt-ctx.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/device.h>
+
+#if IS_ENABLED(CONFIG_QTI_CRYPTO_TZ)
+int crypto_qti_program_key(struct crypto_vops_qti_entry *ice_entry,
+			   const struct blk_crypto_key *key, unsigned int slot,
+			   unsigned int data_unit_mask, int capid);
+int crypto_qti_invalidate_key(struct crypto_vops_qti_entry *ice_entry,
+			      unsigned int slot);
+#else
+static inline int crypto_qti_program_key(
+				struct crypto_vops_qti_entry *ice_entry,
+				const struct blk_crypto_key *key,
+				unsigned int slot, unsigned int data_unit_mask,
+				int capid)
+{
+	return 0;
+}
+static inline int crypto_qti_invalidate_key(
+		struct crypto_vops_qti_entry *ice_entry, unsigned int slot)
+{
+	return 0;
+}
+#endif /* CONFIG_QTI_CRYPTO_TZ */
+
+static inline void crypto_qti_disable_platform(
+				struct crypto_vops_qti_entry *ice_entry)
+{}
+
+#endif /* _CRYPTO_QTI_PLATFORM_H */
diff --git a/drivers/soc/qcom/crypto-qti-tz.c b/drivers/soc/qcom/crypto-qti-tz.c
new file mode 100644
index 0000000..b4fef6b
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-tz.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020, Linux Foundation. All rights reserved.
+ */
+
+#include <linux/module.h>
+#include <asm/cacheflush.h>
+#include <soc/qcom/scm.h>
+#include <soc/qcom/qtee_shmbridge.h>
+#include <linux/crypto-qti-common.h>
+#include "crypto-qti-platform.h"
+#include "crypto-qti-tz.h"
+
+unsigned int storage_type = SDCC_CE;
+
+int crypto_qti_program_key(struct crypto_vops_qti_entry *ice_entry,
+			   const struct blk_crypto_key *key,
+			   unsigned int slot, unsigned int data_unit_mask,
+			   int capid)
+{
+	int err = 0;
+	uint32_t smc_id = 0;
+	char *tzbuf = NULL;
+	struct qtee_shm shm;
+	struct scm_desc desc = {0};
+	int i;
+	union {
+		u8 bytes[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
+		u32 words[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE / sizeof(u32)];
+	} key_new;
+
+	err = qtee_shmbridge_allocate_shm(key->size, &shm);
+	if (err)
+		return -ENOMEM;
+
+	tzbuf = shm.vaddr;
+
+	memcpy(key_new.bytes, key->raw, key->size);
+	if (!key->is_hw_wrapped) {
+		for (i = 0; i < ARRAY_SIZE(key_new.words); i++)
+			__cpu_to_be32s(&key_new.words[i]);
+	}
+
+	memcpy(tzbuf, key_new.bytes, key->size);
+	dmac_flush_range(tzbuf, tzbuf + key->size);
+
+	smc_id = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID;
+	desc.arginfo = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID;
+	desc.args[0] = slot;
+	desc.args[1] = shm.paddr;
+	desc.args[2] = shm.size;
+	desc.args[3] = ICE_CIPHER_MODE_XTS_256;
+	desc.args[4] = data_unit_mask;
+	desc.args[5] = storage_type;
+
+
+	err = scm_call2_noretry(smc_id, &desc);
+	if (err)
+		pr_err("%s:SCM call Error: 0x%x slot %d\n",
+				__func__, err, slot);
+
+	qtee_shmbridge_free_shm(&shm);
+
+	return err;
+}
+
+int crypto_qti_invalidate_key(
+		struct crypto_vops_qti_entry *ice_entry, unsigned int slot)
+{
+	int err = 0;
+	uint32_t smc_id = 0;
+	struct scm_desc desc = {0};
+
+	smc_id = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID;
+
+	desc.arginfo = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID;
+	desc.args[0] = slot;
+	desc.args[1] = storage_type;
+
+	err = scm_call2_noretry(smc_id, &desc);
+	if (err)
+		pr_err("%s:SCM call Error: 0x%x\n", __func__, err);
+	return err;
+}
+
+static int crypto_qti_storage_type(unsigned int *s_type)
+{
+	char boot[20] = {'\0'};
+	char *match = (char *)strnstr(saved_command_line,
+				"androidboot.bootdevice=",
+				strlen(saved_command_line));
+
+	if (match) {
+		strlcpy(boot, match + strlen("androidboot.bootdevice="),
+			sizeof(boot));
+		if (strnstr(boot, "ufs", strlen(boot)))
+			*s_type = UFS_CE;
+
+		return 0;
+	}
+	return -EINVAL;
+}
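+
+/*
+ * For example (hypothetical command line): with
+ * "androidboot.bootdevice=1d84000.ufshc" present, the "ufs" substring is
+ * found and storage_type becomes UFS_CE; otherwise the SDCC_CE default
+ * is kept.
+ */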
+
+static int __init crypto_qti_init(void)
+{
+	return crypto_qti_storage_type(&storage_type);
+}
+
+module_init(crypto_qti_init);
diff --git a/drivers/soc/qcom/crypto-qti-tz.h b/drivers/soc/qcom/crypto-qti-tz.h
new file mode 100644
index 0000000..bf7ac00
--- /dev/null
+++ b/drivers/soc/qcom/crypto-qti-tz.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <soc/qcom/qseecomi.h>
+
+#ifndef _CRYPTO_QTI_TZ_H
+#define _CRYPTO_QTI_TZ_H
+
+#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE 0x5
+#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE 0x6
+
+#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID \
+	TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, TZ_SVC_ES, \
+	TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE)
+
+#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID \
+	TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, \
+	TZ_SVC_ES, TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE)
+
+#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID \
+	TZ_SYSCALL_CREATE_PARAM_ID_2( \
+	TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL)
+
+#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID \
+	TZ_SYSCALL_CREATE_PARAM_ID_6( \
+	TZ_SYSCALL_PARAM_TYPE_VAL, \
+	TZ_SYSCALL_PARAM_TYPE_BUF_RW, TZ_SYSCALL_PARAM_TYPE_VAL, \
+	TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL, \
+	TZ_SYSCALL_PARAM_TYPE_VAL)
+
+enum {
+	ICE_CIPHER_MODE_XTS_128 = 0,
+	ICE_CIPHER_MODE_CBC_128 = 1,
+	ICE_CIPHER_MODE_XTS_256 = 3,
+	ICE_CIPHER_MODE_CBC_256 = 4
+};
+
+#define UFS_CE 10
+#define SDCC_CE 20
+#define UFS_CARD_CE 30
+
+#endif /* _CRYPTO_QTI_TZ_H */
diff --git a/drivers/soc/qcom/dcc_v2.c b/drivers/soc/qcom/dcc_v2.c
index a5e2ec0..ada4be8 100644
--- a/drivers/soc/qcom/dcc_v2.c
+++ b/drivers/soc/qcom/dcc_v2.c
@@ -719,6 +719,7 @@
 	int ret = 0;
 	int list;
 	uint32_t ram_cfg_base;
+	uint32_t hw_info;
 
 	mutex_lock(&drvdata->mutex);
 
@@ -754,6 +755,10 @@
 				drvdata->ram_offset/4, DCC_FD_BASE(list));
 		dcc_writel(drvdata, 0xFFF, DCC_LL_TIMEOUT(list));
 
+		hw_info = dcc_readl(drvdata, DCC_HW_INFO);
+		if (hw_info & 0x80)
+			dcc_writel(drvdata, 0x3F, DCC_TRANS_TIMEOUT(list));
+
 		/* 4. Clears interrupt status register */
 		dcc_writel(drvdata, 0, DCC_LL_INT_ENABLE(list));
 		dcc_writel(drvdata, (BIT(0) | BIT(1) | BIT(2)),
diff --git a/drivers/soc/qcom/eud.c b/drivers/soc/qcom/eud.c
index 0ee43a8..864bd65 100644
--- a/drivers/soc/qcom/eud.c
+++ b/drivers/soc/qcom/eud.c
@@ -92,6 +92,14 @@
 static bool eud_ready;
 static struct platform_device *eud_private;
 
+static int check_eud_mode_mgr2(struct eud_chip *chip)
+{
+	u32 val;
+
+	val = scm_io_read(chip->eud_mode_mgr2_phys_base);
+	return val & BIT(0);
+}
+
 static void enable_eud(struct platform_device *pdev)
 {
 	struct eud_chip *priv = platform_get_drvdata(pdev);
@@ -105,7 +113,7 @@
 			priv->eud_reg_base + EUD_REG_INT1_EN_MASK);
 
 	/* Enable secure eud if supported */
-	if (priv->secure_eud_en) {
+	if (priv->secure_eud_en && !check_eud_mode_mgr2(priv)) {
 		ret = scm_io_write(priv->eud_mode_mgr2_phys_base +
 				   EUD_REG_EUD_EN2, EUD_ENABLE_CMD);
 		if (ret)
@@ -564,6 +572,9 @@
 		}
 
 		chip->eud_mode_mgr2_phys_base = res->start;
+
+		if (check_eud_mode_mgr2(chip))
+			enable = 1;
 	}
 
 	chip->need_phy_clk_vote = of_property_read_bool(pdev->dev.of_node,
diff --git a/drivers/soc/qcom/hwkm.c b/drivers/soc/qcom/hwkm.c
new file mode 100644
index 0000000..af19d18
--- /dev/null
+++ b/drivers/soc/qcom/hwkm.c
@@ -0,0 +1,1214 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * QTI hardware key manager driver.
+ *
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mod_devicetable.h>
+#include <linux/device.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/dma-mapping.h>
+#include <linux/io.h>
+#include <linux/platform_device.h>
+#include <linux/spinlock.h>
+#include <linux/delay.h>
+#include <linux/crypto.h>
+#include <linux/bitops.h>
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+#include <linux/iommu.h>
+
+#include <linux/hwkm.h>
+#include "hwkmregs.h"
+#include "hwkm_serialize.h"
+
+#define BYTES_TO_WORDS(bytes) (((bytes) + 3) / 4)
+
+#define WRITE_TO_KDF_PACKET(cmd_ptr, src, len)	\
+	do {					\
+		memcpy(cmd_ptr, src, len);	\
+		cmd_ptr += len;			\
+	} while (0)
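+
+/*
+ * Illustrative only (mirrors qti_handle_system_kdf() below): appending the
+ * operation info and key policy to a KDF command packet with the helper
+ * above:
+ *
+ *	u8 *cmd_ptr = (u8 *)cmd;
+ *
+ *	WRITE_TO_KDF_PACKET(cmd_ptr, &operation, OPERATION_INFO_LENGTH);
+ *	WRITE_TO_KDF_PACKET(cmd_ptr, &policy, KEY_POLICY_LENGTH);
+ */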
+
+#define ASYNC_CMD_HANDLING false
+
+// Maximum number of times to poll
+#define MAX_RETRIES 20000
+
+#define WAIT_UNTIL(cond)					\
+do {								\
+	int retries;						\
+								\
+	for (retries = 0; !(cond) && (retries < MAX_RETRIES);	\
+			retries++)				\
+		;						\
+} while (0)
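+
+/*
+ * Illustrative only: busy-poll until the response FIFO has data, giving up
+ * after MAX_RETRIES polls:
+ *
+ *	WAIT_UNTIL(qti_hwkm_get_reg_data(dev,
+ *		QTI_HWKM_ICE_RG_BANK0_BANKN_STATUS,
+ *		RSP_FIFO_AVAILABLE_DATA, RSP_FIFO_AVAILABLE_DATA_MASK,
+ *		ICEMEM_SLAVE) > 0);
+ */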
+
+#define ICEMEM_SLAVE_TPKEY_VAL	0x192
+#define ICEMEM_SLAVE_TPKEY_SLOT	0x92
+#define KM_MASTER_TPKEY_SLOT	10
+
+struct hwkm_clk_info {
+	struct list_head list;
+	struct clk *clk;
+	const char *name;
+	u32 max_freq;
+	u32 min_freq;
+	u32 curr_freq;
+	bool enabled;
+};
+
+struct hwkm_device {
+	struct device *dev;
+	void __iomem *km_base;
+	void __iomem *ice_base;
+	struct resource *km_res;
+	struct resource *ice_res;
+	struct list_head clk_list_head;
+	bool is_hwkm_clk_available;
+	bool is_hwkm_enabled;
+};
+
+static struct hwkm_device *km_device;
+
+#define qti_hwkm_readl(hwkm, reg, dest)				\
+	(((dest) == KM_MASTER) ?				\
+	(readl_relaxed((hwkm)->km_base + (reg))) :		\
+	(readl_relaxed((hwkm)->ice_base + (reg))))
+#define qti_hwkm_writel(hwkm, val, reg, dest)			\
+	(((dest) == KM_MASTER) ?				\
+	(writel_relaxed((val), (hwkm)->km_base + (reg))) :	\
+	(writel_relaxed((val), (hwkm)->ice_base + (reg))))
+#define qti_hwkm_setb(hwkm, reg, nr, dest) {			\
+	u32 val = qti_hwkm_readl(hwkm, reg, dest);		\
+	val |= (0x1 << nr);					\
+	qti_hwkm_writel(hwkm, val, reg, dest);			\
+}
+#define qti_hwkm_clearb(hwkm, reg, nr, dest) {			\
+	u32 val = qti_hwkm_readl(hwkm, reg, dest);		\
+	val &= ~(0x1 << nr);					\
+	qti_hwkm_writel(hwkm, val, reg, dest);			\
+}
+
+static inline bool qti_hwkm_testb(struct hwkm_device *hwkm, u32 reg, u8 nr,
+				  enum hwkm_destination dest)
+{
+	u32 val = qti_hwkm_readl(hwkm, reg, dest);
+
+	return (val >> nr) & 0x1;
+}
+
+static inline unsigned int qti_hwkm_get_reg_data(struct hwkm_device *dev,
+						 u32 reg, u32 offset, u32 mask,
+						 enum hwkm_destination dest)
+{
+	u32 val = 0;
+
+	val = qti_hwkm_readl(dev, reg, dest);
+	return ((val & mask) >> offset);
+}
+
+/**
+ * @brief Send a command packet to the HWKM Master instance as described
+ *        in section 3.2.5.1 of Key Manager HPG
+ *        - Clear CMD FIFO
+ *        - Clear Error Status Register
+ *        - Write CMD_ENABLE = 1
+ *        - for word in cmd_packet:
+ *          - poll until CMD_FIFO_AVAILABLE_SPACE > 0.
+ *            Timeout error after MAX_RETRIES polls.
+ *          - write word to CMD register
+ *        - for word in rsp_packet:
+ *          - poll until RSP_FIFO_AVAILABLE_DATA > 0.
+ *            Timeout error after MAX_RETRIES polls.
+ *          - read word from RSP register
+ *        - Verify CMD_DONE == 1
+ *        - Clear CMD_DONE
+ *
+ * @return HWKM_SUCCESS if successful. HWKM error code otherwise.
+ */
+
+static int qti_hwkm_master_transaction(struct hwkm_device *dev,
+				       const uint32_t *cmd_packet,
+				       size_t cmd_words,
+				       uint32_t *rsp_packet,
+				       size_t rsp_words)
+{
+	int i = 0;
+	int err = 0;
+
+	// Clear CMD FIFO
+	qti_hwkm_setb(dev, QTI_HWKM_MASTER_RG_BANK2_BANKN_CTL,
+			CMD_FIFO_CLEAR_BIT, KM_MASTER);
+	/* Write memory barrier */
+	wmb();
+	qti_hwkm_clearb(dev, QTI_HWKM_MASTER_RG_BANK2_BANKN_CTL,
+			CMD_FIFO_CLEAR_BIT, KM_MASTER);
+	/* Write memory barrier */
+	wmb();
+
+	// Clear previous CMD errors
+	qti_hwkm_writel(dev, 0x0, QTI_HWKM_MASTER_RG_BANK2_BANKN_ESR,
+			KM_MASTER);
+	/* Write memory barrier */
+	wmb();
+
+	// Enable command
+	qti_hwkm_setb(dev, QTI_HWKM_MASTER_RG_BANK2_BANKN_CTL, CMD_ENABLE_BIT,
+			KM_MASTER);
+	/* Write memory barrier */
+	wmb();
+
+	if (qti_hwkm_testb(dev, QTI_HWKM_MASTER_RG_BANK2_BANKN_CTL,
+			CMD_FIFO_CLEAR_BIT, KM_MASTER)) {
+		pr_err("%s: CMD_FIFO_CLEAR_BIT still set\n", __func__);
+		err = -1;
+		return err;
+	}
+
+	for (i = 0; i < cmd_words; i++) {
+		WAIT_UNTIL(qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_MASTER_RG_BANK2_BANKN_STATUS,
+			CMD_FIFO_AVAILABLE_SPACE, CMD_FIFO_AVAILABLE_SPACE_MASK,
+			KM_MASTER) > 0);
+		if (qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_MASTER_RG_BANK2_BANKN_STATUS,
+			CMD_FIFO_AVAILABLE_SPACE, CMD_FIFO_AVAILABLE_SPACE_MASK,
+			KM_MASTER) == 0) {
+			pr_err("%s: cmd fifo space not available\n", __func__);
+			err = -1;
+			return err;
+		}
+		qti_hwkm_writel(dev, cmd_packet[i],
+				QTI_HWKM_MASTER_RG_BANK2_CMD_0, KM_MASTER);
+		/* Write memory barrier */
+		wmb();
+	}
+
+	for (i = 0; i < rsp_words; i++) {
+		WAIT_UNTIL(qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_MASTER_RG_BANK2_BANKN_STATUS,
+			RSP_FIFO_AVAILABLE_DATA, RSP_FIFO_AVAILABLE_DATA_MASK,
+			KM_MASTER) > 0);
+		if (qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_MASTER_RG_BANK2_BANKN_STATUS,
+			RSP_FIFO_AVAILABLE_DATA, RSP_FIFO_AVAILABLE_DATA_MASK,
+			KM_MASTER) == 0) {
+			pr_err("%s: rsp fifo data not available\n", __func__);
+			err = -1;
+			return err;
+		}
+		rsp_packet[i] = qti_hwkm_readl(dev,
+				QTI_HWKM_MASTER_RG_BANK2_RSP_0, KM_MASTER);
+	}
+
+	if (!qti_hwkm_testb(dev, QTI_HWKM_MASTER_RG_BANK2_BANKN_IRQ_STATUS,
+			CMD_DONE_BIT, KM_MASTER)) {
+		pr_err("%s: CMD_DONE_BIT not set\n", __func__);
+		err = -1;
+		return err;
+	}
+
+	// Clear CMD_DONE status bit
+	qti_hwkm_setb(dev, QTI_HWKM_MASTER_RG_BANK2_BANKN_IRQ_STATUS,
+			CMD_DONE_BIT, KM_MASTER);
+	/* Write memory barrier */
+	wmb();
+
+	return err;
+}
+
+/**
+ * @brief Send a command packet to the HWKM ICE slave instance as described in
+ *        section 3.2.5.1 of Key Manager HPG
+ *        - Clear CMD FIFO
+ *        - Clear Error Status Register
+ *        - Write CMD_ENABLE = 1
+ *        - for word in cmd_packet:
+ *          - poll until CMD_FIFO_AVAILABLE_SPACE > 0.
+ *            Timeout error after MAX_RETRIES polls.
+ *          - write word to CMD register
+ *        - for word in rsp_packet:
+ *          - poll until RSP_FIFO_AVAILABLE_DATA > 0.
+ *            Timeout error after MAX_RETRIES polls.
+ *          - read word from RSP register
+ *        - Verify CMD_DONE == 1
+ *        - Clear CMD_DONE
+ *
+ * @return HWKM_SUCCESS if successful. HWKM error code otherwise.
+ */
+
+static int qti_hwkm_ice_transaction(struct hwkm_device *dev,
+				    const uint32_t *cmd_packet,
+				    size_t cmd_words,
+				    uint32_t *rsp_packet,
+				    size_t rsp_words)
+{
+	int i = 0;
+	int err = 0;
+
+	// Clear CMD FIFO
+	qti_hwkm_setb(dev, QTI_HWKM_ICE_RG_BANK0_BANKN_CTL,
+			CMD_FIFO_CLEAR_BIT, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+	qti_hwkm_clearb(dev, QTI_HWKM_ICE_RG_BANK0_BANKN_CTL,
+			CMD_FIFO_CLEAR_BIT, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+
+	// Clear previous CMD errors
+	qti_hwkm_writel(dev, 0x0, QTI_HWKM_ICE_RG_BANK0_BANKN_ESR,
+			ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+
+	// Enable command
+	qti_hwkm_setb(dev, QTI_HWKM_ICE_RG_BANK0_BANKN_CTL, CMD_ENABLE_BIT,
+			ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+
+	if (qti_hwkm_testb(dev, QTI_HWKM_ICE_RG_BANK0_BANKN_CTL,
+			CMD_FIFO_CLEAR_BIT, ICEMEM_SLAVE)) {
+		pr_err("%s: CMD_FIFO_CLEAR_BIT still set\n", __func__);
+		err = -1;
+		return err;
+	}
+
+	for (i = 0; i < cmd_words; i++) {
+		WAIT_UNTIL(qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_ICE_RG_BANK0_BANKN_STATUS,
+			CMD_FIFO_AVAILABLE_SPACE, CMD_FIFO_AVAILABLE_SPACE_MASK,
+			ICEMEM_SLAVE) > 0);
+		if (qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_ICE_RG_BANK0_BANKN_STATUS,
+			CMD_FIFO_AVAILABLE_SPACE, CMD_FIFO_AVAILABLE_SPACE_MASK,
+			ICEMEM_SLAVE) == 0) {
+			pr_err("%s: cmd fifo space not available\n", __func__);
+			err = -1;
+			return err;
+		}
+		qti_hwkm_writel(dev, cmd_packet[i],
+				QTI_HWKM_ICE_RG_BANK0_CMD_0, ICEMEM_SLAVE);
+		/* Write memory barrier */
+		wmb();
+	}
+
+	for (i = 0; i < rsp_words; i++) {
+		WAIT_UNTIL(qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_ICE_RG_BANK0_BANKN_STATUS,
+			RSP_FIFO_AVAILABLE_DATA, RSP_FIFO_AVAILABLE_DATA_MASK,
+			ICEMEM_SLAVE) > 0);
+		if (qti_hwkm_get_reg_data(dev,
+			QTI_HWKM_ICE_RG_BANK0_BANKN_STATUS,
+			RSP_FIFO_AVAILABLE_DATA, RSP_FIFO_AVAILABLE_DATA_MASK,
+			ICEMEM_SLAVE) == 0) {
+			pr_err("%s: rsp fifo data not available\n", __func__);
+			err = -1;
+			return err;
+		}
+		rsp_packet[i] = qti_hwkm_readl(dev,
+				QTI_HWKM_ICE_RG_BANK0_RSP_0, ICEMEM_SLAVE);
+	}
+
+	if (!qti_hwkm_testb(dev, QTI_HWKM_ICE_RG_BANK0_BANKN_IRQ_STATUS,
+			CMD_DONE_BIT, ICEMEM_SLAVE)) {
+		pr_err("%s: CMD_DONE_BIT not set\n", __func__);
+		err = -1;
+		return err;
+	}
+
+	// Clear CMD_DONE status bit
+	qti_hwkm_setb(dev, QTI_HWKM_ICE_RG_BANK0_BANKN_IRQ_STATUS,
+			CMD_DONE_BIT, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+
+	return err;
+}
+
+/*
+ * @brief Send a command packet to the selected KM instance and read
+ *        the response
+ *
+ * @param dest            [in]  Destination KM instance
+ * @param cmd_packet      [in]  pointer to start of command packet
+ * @param cmd_words       [in]  words in the command packet
+ * @param rsp_packet      [out] pointer to start of response packet
+ * @param rsp_words       [in]  words in the response buffer
+ *
+ * @return HWKM_SUCCESS if successful. HWKM error code otherwise.
+ */
+
+static int qti_hwkm_run_transaction(enum hwkm_destination dest,
+				    const uint32_t *cmd_packet,
+				    size_t cmd_words,
+				    uint32_t *rsp_packet,
+				    size_t rsp_words)
+{
+	int status = 0;
+
+	if (cmd_packet == NULL || rsp_packet == NULL) {
+		status = -1;
+		return status;
+	}
+
+	switch (dest) {
+	case KM_MASTER:
+		status = qti_hwkm_master_transaction(km_device,
+					cmd_packet, cmd_words,
+					rsp_packet, rsp_words);
+		break;
+	case ICEMEM_SLAVE:
+		status = qti_hwkm_ice_transaction(km_device,
+					cmd_packet, cmd_words,
+					rsp_packet, rsp_words);
+		break;
+	default:
+		status = -2;
+		break;
+	}
+
+	return status;
+}
+
+static void serialize_policy(struct hwkm_serialized_policy *out,
+			     const struct hwkm_key_policy *policy)
+{
+	memset(out, 0, sizeof(struct hwkm_serialized_policy));
+	out->wrap_with_tpkey = policy->wrap_with_tpk_allowed;
+	out->hw_destination = policy->hw_destination;
+	out->security_level = policy->security_lvl;
+	out->swap_export_allowed = policy->swap_export_allowed;
+	out->wrap_export_allowed = policy->wrap_export_allowed;
+	out->key_type = policy->key_type;
+	out->kdf_depth = policy->kdf_depth;
+	out->encrypt_allowed = policy->enc_allowed;
+	out->decrypt_allowed = policy->dec_allowed;
+	out->alg_allowed = policy->alg_allowed;
+	out->key_management_by_tz_secure_allowed = policy->km_by_tz_allowed;
+	out->key_management_by_nonsecure_allowed = policy->km_by_nsec_allowed;
+	out->key_management_by_modem_allowed = policy->km_by_modem_allowed;
+	out->key_management_by_spu_allowed = policy->km_by_spu_allowed;
+}
+
+static void serialize_kdf_bsve(struct hwkm_kdf_bsve *out,
+			       const struct hwkm_bsve *bsve, u8 mks)
+{
+	memset(out, 0, sizeof(struct hwkm_kdf_bsve));
+	out->mks = mks;
+	out->key_policy_version_en = bsve->km_key_policy_ver_en;
+	out->apps_secure_en = bsve->km_apps_secure_en;
+	out->msa_secure_en = bsve->km_msa_secure_en;
+	out->lcm_fuse_row_en = bsve->km_lcm_fuse_en;
+	out->boot_stage_otp_en = bsve->km_boot_stage_otp_en;
+	out->swc_en = bsve->km_swc_en;
+	out->fuse_region_sha_digest_en = bsve->km_fuse_region_sha_digest_en;
+	out->child_key_policy_en = bsve->km_child_key_policy_en;
+	out->mks_en = bsve->km_mks_en;
+}
+
+static void deserialize_policy(struct hwkm_key_policy *out,
+			       const struct hwkm_serialized_policy *policy)
+{
+	memset(out, 0, sizeof(struct hwkm_key_policy));
+	out->wrap_with_tpk_allowed = policy->wrap_with_tpkey;
+	out->hw_destination = policy->hw_destination;
+	out->security_lvl = policy->security_level;
+	out->swap_export_allowed = policy->swap_export_allowed;
+	out->wrap_export_allowed = policy->wrap_export_allowed;
+	out->key_type = policy->key_type;
+	out->kdf_depth = policy->kdf_depth;
+	out->enc_allowed = policy->encrypt_allowed;
+	out->dec_allowed = policy->decrypt_allowed;
+	out->alg_allowed = policy->alg_allowed;
+	out->km_by_tz_allowed = policy->key_management_by_tz_secure_allowed;
+	out->km_by_nsec_allowed = policy->key_management_by_nonsecure_allowed;
+	out->km_by_modem_allowed = policy->key_management_by_modem_allowed;
+	out->km_by_spu_allowed = policy->key_management_by_spu_allowed;
+}
+
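+/*
+ * Reverse @key in place with XOR swaps; the HW expects (and returns)
+ * plaintext key values in reverse byte order, see the NOTE before
+ * qti_handle_keyslot_rdwr() below.
+ */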
+static void reverse_key(u8 *key, size_t keylen)
+{
+	size_t left = 0;
+	size_t right = 0;
+
+	for (left = 0, right = keylen - 1; left < right; left++, right--) {
+		key[left] ^= key[right];
+		key[right] ^= key[left];
+		key[left] ^= key[right];
+	}
+}
+
+/*
+ * Command packet format (word indices):
+ * CMD[0]    = Operation info (OP, IRQ_EN, DKS, LEN)
+ * CMD[1:17] = Wrapped Key Blob
+ * CMD[18]   = CRC (disabled)
+ *
+ * Response packet format (word indices):
+ * RSP[0]    = Operation info (OP, IRQ_EN, LEN)
+ * RSP[1]    = Error status
+ */
+
+static int qti_handle_key_unwrap_import(const struct hwkm_cmd *cmd_in,
+					struct hwkm_rsp *rsp_in)
+{
+	int status = 0;
+	u32 cmd[UNWRAP_IMPORT_CMD_WORDS] = {0};
+	u32 rsp[UNWRAP_IMPORT_RSP_WORDS] = {0};
+	struct hwkm_operation_info operation = {
+		.op = KEY_UNWRAP_IMPORT,
+		.irq_en = ASYNC_CMD_HANDLING,
+		.slot1_desc = cmd_in->unwrap.dks,
+		.slot2_desc = cmd_in->unwrap.kwk,
+		.len = UNWRAP_IMPORT_CMD_WORDS
+	};
+
+	pr_debug("%s: KEY_UNWRAP_IMPORT start\n", __func__);
+
+	memcpy(cmd, &operation, OPERATION_INFO_LENGTH);
+	memcpy(cmd + COMMAND_WRAPPED_KEY_IDX, cmd_in->unwrap.wkb,
+			cmd_in->unwrap.sz);
+
+	status = qti_hwkm_run_transaction(ICEMEM_SLAVE, cmd,
+			UNWRAP_IMPORT_CMD_WORDS, rsp, UNWRAP_IMPORT_RSP_WORDS);
+	if (status) {
+		pr_err("%s: Error running transaction %d\n", __func__, status);
+		return status;
+	}
+
+	rsp_in->status = rsp[RESPONSE_ERR_IDX];
+	if (rsp_in->status) {
+		pr_err("%s: KEY_UNWRAP_IMPORT error status 0x%x\n", __func__,
+								rsp_in->status);
+		return rsp_in->status;
+	}
+
+	return status;
+}
+
+/*
+ * Command packet format (word indices):
+ * CMD[0] = Operation info (OP, IRQ_EN, DKS, DK, LEN)
+ * CMD[1] = CRC (disabled)
+ *
+ * Response packet format (word indices):
+ * RSP[0] = Operation info (OP, IRQ_EN, LEN)
+ * RSP[1] = Error status
+ */
+
+static int qti_handle_keyslot_clear(const struct hwkm_cmd *cmd_in,
+				    struct hwkm_rsp *rsp_in)
+{
+	int status = 0;
+	u32 cmd[KEYSLOT_CLEAR_CMD_WORDS] = {0};
+	u32 rsp[KEYSLOT_CLEAR_RSP_WORDS] = {0};
+	struct hwkm_operation_info operation = {
+		.op = KEY_SLOT_CLEAR,
+		.irq_en = ASYNC_CMD_HANDLING,
+		.slot1_desc = cmd_in->clear.dks,
+		.op_flag = cmd_in->clear.is_double_key,
+		.len = KEYSLOT_CLEAR_CMD_WORDS
+	};
+
+	pr_debug("%s: KEY_SLOT_CLEAR start\n", __func__);
+
+	memcpy(cmd, &operation, OPERATION_INFO_LENGTH);
+
+	status = qti_hwkm_run_transaction(ICEMEM_SLAVE, cmd,
+				KEYSLOT_CLEAR_CMD_WORDS, rsp,
+				KEYSLOT_CLEAR_RSP_WORDS);
+	if (status) {
+		pr_err("%s: Error running transaction %d\n", __func__, status);
+		return status;
+	}
+
+	rsp_in->status = rsp[RESPONSE_ERR_IDX];
+	if (rsp_in->status) {
+		pr_err("%s: KEYSLOT_CLEAR error status 0x%x\n",
+				__func__, rsp_in->status);
+		return rsp_in->status;
+	}
+
+	return status;
+}
+
+/*
+ * NOTE: The command packet can vary in length. If BE = 0, the last 2 indices
+ * for the BSVE are skipped. Similarly, if Software Context Length (SCL) < 16,
+ * only SCL words are written to the packet. The CRC word is after the last
+ * word of the SWC. The LEN field of this command does not include the SCL
+ * (unlike other commands where the LEN field is the length of the entire
+ * packet). The HW will expect SCL + LEN words to be sent.
+ *
+ * Command packet format (word indices):
+ * CMD[0]    = Operation info (OP, IRQ_EN, DKS, KDK, BE, SCL, LEN)
+ * CMD[1:2]  = Policy
+ * CMD[3]    = BSVE[0] if BE = 1, 0 if BE = 0
+ * CMD[4:5]  = BSVE[1:2] if BE = 1, skipped if BE = 0
+ * CMD[6:21] = Software Context, only writing the number of words in SCL
+ * CMD[22]   = CRC
+ *
+ * Response packet format (word indices):
+ * RSP[0]    = Operation info (OP, IRQ_EN, LEN)
+ * RSP[1]    = Error status
+ */
+
+static int qti_handle_system_kdf(const struct hwkm_cmd *cmd_in,
+				 struct hwkm_rsp *rsp_in)
+{
+	int status = 0;
+	u32 cmd[SYSTEM_KDF_CMD_MAX_WORDS] = {0};
+	u32 rsp[SYSTEM_KDF_RSP_WORDS] = {0};
+	u8 *cmd_ptr = (u8 *) cmd;
+	struct hwkm_serialized_policy policy;
+	struct hwkm_operation_info operation = {
+		.op = SYSTEM_KDF,
+		.irq_en = ASYNC_CMD_HANDLING,
+		.slot1_desc = cmd_in->kdf.dks,
+		.slot2_desc = cmd_in->kdf.kdk,
+		.op_flag = cmd_in->kdf.bsve.enabled,
+		.context_len = BYTES_TO_WORDS(cmd_in->kdf.sz),
+		.len = SYSTEM_KDF_CMD_MIN_WORDS +
+			(cmd_in->kdf.bsve.enabled ? BSVE_WORDS : 1)
+	};
+
+	pr_debug("%s: SYSTEM_KDF start\n", __func__);
+
+	serialize_policy(&policy, &cmd_in->kdf.policy);
+
+	WRITE_TO_KDF_PACKET(cmd_ptr, &operation, OPERATION_INFO_LENGTH);
+	WRITE_TO_KDF_PACKET(cmd_ptr, &policy, KEY_POLICY_LENGTH);
+
+	if (cmd_in->kdf.bsve.enabled) {
+		struct hwkm_kdf_bsve bsve;
+
+		serialize_kdf_bsve(&bsve, &cmd_in->kdf.bsve, cmd_in->kdf.mks);
+		WRITE_TO_KDF_PACKET(cmd_ptr, &bsve, MAX_BSVE_LENGTH);
+	} else {
+		// Skip the remaining 3 bytes of the current word
+		cmd_ptr += 3 * (sizeof(u8));
+	}
+
+	WRITE_TO_KDF_PACKET(cmd_ptr, cmd_in->kdf.ctx, cmd_in->kdf.sz);
+
+	status = qti_hwkm_run_transaction(ICEMEM_SLAVE, cmd,
+				operation.len + operation.context_len,
+				rsp, SYSTEM_KDF_RSP_WORDS);
+	if (status) {
+		pr_err("%s: Error running transaction %d\n", __func__, status);
+		return status;
+	}
+
+	rsp_in->status = rsp[RESPONSE_ERR_IDX];
+	if (rsp_in->status) {
+		pr_err("%s: SYSTEM_KDF error status 0x%x\n", __func__,
+					rsp_in->status);
+		return rsp_in->status;
+	}
+
+	return status;
+}
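+
+/*
+ * Worked length example (values follow from the definitions above): with the
+ * BSVE enabled and a 16-byte software context, LEN = SYSTEM_KDF_CMD_MIN_WORDS
+ * + BSVE_WORDS = 4 + 3 = 7 words, context_len = BYTES_TO_WORDS(16) = 4, and
+ * the hardware receives LEN + SCL = 11 words in total.
+ */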
+
+/*
+ * Command packet format (word indices):
+ * CMD[0] = Operation info (OP, IRQ_EN, SKS, LEN)
+ * CMD[1] = CRC (disabled)
+ *
+ * Response packet format (word indices):
+ * RSP[0] = Operation info (OP, IRQ_EN, LEN)
+ * RSP[1] = Error status
+ */
+
+static int qti_handle_set_tpkey(const struct hwkm_cmd *cmd_in,
+				struct hwkm_rsp *rsp_in)
+{
+	int status = 0;
+	u32 cmd[SET_TPKEY_CMD_WORDS] = {0};
+	u32 rsp[SET_TPKEY_RSP_WORDS] = {0};
+	struct hwkm_operation_info operation = {
+		.op = SET_TPKEY,
+		.irq_en = ASYNC_CMD_HANDLING,
+		.slot1_desc = cmd_in->set_tpkey.sks,
+		.len = SET_TPKEY_CMD_WORDS
+	};
+
+	pr_debug("%s: SET_TPKEY start\n", __func__);
+
+	memcpy(cmd, &operation, OPERATION_INFO_LENGTH);
+
+	status = qti_hwkm_run_transaction(KM_MASTER, cmd,
+			SET_TPKEY_CMD_WORDS, rsp, SET_TPKEY_RSP_WORDS);
+	if (status) {
+		pr_err("%s: Error running transaction %d\n", __func__, status);
+		return status;
+	}
+
+	rsp_in->status = rsp[RESPONSE_ERR_IDX];
+	if (rsp_in->status) {
+		pr_err("%s: SET_TPKEY error status 0x%x\n", __func__,
+					rsp_in->status);
+		return rsp_in->status;
+	}
+
+	return status;
+}
+
+/**
+ * NOTE: To anyone maintaining or porting this code wondering why the key
+ * is reversed in the command packet: the plaintext key value is expected by
+ * the HW in reverse byte order.
+ *       See section 1.8.2.2 of the HWKM CPAS for more details
+ *       Mapping of key to CE key read order:
+ *       Key[255:224] -> CRYPTO0_CRYPTO_ENCR_KEY0
+ *       Key[223:192] -> CRYPTO0_CRYPTO_ENCR_KEY1
+ *       ...
+ *       Key[63:32]   -> CRYPTO0_CRYPTO_ENCR_KEY6
+ *       Key[31:0]    -> CRYPTO0_CRYPTO_ENCR_KEY7
+ *       In this notation Key[31:0] is the least significant word of the key
+ *       If the key length is less than 256 bits, the key is filled in from
+ *       higher index to lower
+ *       For example, for a 128 bit key, Key[255:128] would have the key,
+ *       Key[127:0] would be all 0
+ *       This means that CMD[3:6] is all 0, CMD[7:10] has the key value.
+ *
+ * Command packet format (word indices):
+ * CMD[0]    = Operation info (OP, IRQ_EN, DKS/SKS, WE, LEN)
+ * CMD[1:2]  = Policy (0 if we == 0)
+ * CMD[3:10] = Write key value (0 if we == 0)
+ * CMD[11]   = CRC (disabled)
+ *
+ * Response packet format (word indices):
+ * RSP[0]    = Operation info (OP, IRQ_EN, LEN)
+ * RSP[1]    = Error status
+ * RSP[2:3]  = Policy (0 if we == 1)
+ * RSP[4:11] = Read key value (0 if we == 1)
+ */
+
+static int qti_handle_keyslot_rdwr(const struct hwkm_cmd *cmd_in,
+				   struct hwkm_rsp *rsp_in)
+{
+	int status = 0;
+	u32 cmd[KEYSLOT_RDWR_CMD_WORDS] = {0};
+	u32 rsp[KEYSLOT_RDWR_RSP_WORDS] = {0};
+	struct hwkm_serialized_policy policy;
+	struct hwkm_operation_info operation = {
+		.op = KEY_SLOT_RDWR,
+		.irq_en = ASYNC_CMD_HANDLING,
+		.slot1_desc = cmd_in->rdwr.slot,
+		.op_flag = cmd_in->rdwr.is_write,
+		.len = KEYSLOT_RDWR_CMD_WORDS
+	};
+
+	pr_debug("%s: KEY_SLOT_RDWR start\n", __func__);
+	memcpy(cmd, &operation, OPERATION_INFO_LENGTH);
+
+	if (cmd_in->rdwr.is_write) {
+		serialize_policy(&policy, &cmd_in->rdwr.policy);
+		memcpy(cmd + COMMAND_KEY_POLICY_IDX, &policy,
+				KEY_POLICY_LENGTH);
+		memcpy(cmd + COMMAND_KEY_VALUE_IDX, cmd_in->rdwr.key,
+				cmd_in->rdwr.sz);
+		// Need to reverse the key because the HW expects it in reverse
+		// byte order
+		reverse_key((u8 *) (cmd + COMMAND_KEY_VALUE_IDX),
+				HWKM_MAX_KEY_SIZE);
+	}
+
+	status = qti_hwkm_run_transaction(ICEMEM_SLAVE, cmd,
+			KEYSLOT_RDWR_CMD_WORDS, rsp, KEYSLOT_RDWR_RSP_WORDS);
+	if (status) {
+		pr_err("%s: Error running transaction %d\n", __func__, status);
+		return status;
+	}
+
+	rsp_in->status = rsp[RESPONSE_ERR_IDX];
+	if (rsp_in->status) {
+		pr_err("%s: KEY_SLOT_RDWR error status 0x%x\n",
+				__func__, rsp_in->status);
+		return rsp_in->status;
+	}
+
+	/* rsp_in->status was checked above, so this is a successful read */
+	if (!cmd_in->rdwr.is_write) {
+		memcpy(&policy, rsp + RESPONSE_KEY_POLICY_IDX,
+						KEY_POLICY_LENGTH);
+		memcpy(rsp_in->rdwr.key,
+			rsp + RESPONSE_KEY_VALUE_IDX, RESPONSE_KEY_LENGTH);
+		// Need to reverse the key because the HW returns it in
+		// reverse byte order
+		reverse_key(rsp_in->rdwr.key, HWKM_MAX_KEY_SIZE);
+		deserialize_policy(&rsp_in->rdwr.policy, &policy);
+	}
+
+	// Clear cmd and rsp buffers, since they may contain plaintext keys
+	memset(cmd, 0, sizeof(cmd));
+	memset(rsp, 0, sizeof(rsp));
+
+	return status;
+}
+
+static int qti_hwkm_parse_clock_info(struct platform_device *pdev,
+				     struct hwkm_device *hwkm_dev)
+{
+	int ret = -1, cnt, i, len;
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	char *name;
+	struct hwkm_clk_info *clki;
+	u32 *clkfreq = NULL;
+
+	if (!np)
+		goto out;
+
+	cnt = of_property_count_strings(np, "clock-names");
+	if (cnt <= 0) {
+		dev_info(dev, "%s: Unable to find clocks, assuming enabled\n",
+				__func__);
+		ret = cnt;
+		goto out;
+	}
+
+	if (!of_get_property(np, "qcom,op-freq-hz", &len)) {
+		dev_info(dev, "qcom,op-freq-hz property not specified\n");
+		goto out;
+	}
+
+	len = len/sizeof(*clkfreq);
+	if (len != cnt)
+		goto out;
+
+	clkfreq = devm_kzalloc(dev, len * sizeof(*clkfreq), GFP_KERNEL);
+	if (!clkfreq) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	ret = of_property_read_u32_array(np, "qcom,op-freq-hz", clkfreq, len);
+
+	INIT_LIST_HEAD(&hwkm_dev->clk_list_head);
+
+	for (i = 0; i < cnt; i++) {
+		ret = of_property_read_string_index(np,
+			"clock-names", i, (const char **)&name);
+		if (ret)
+			goto out;
+
+		clki = devm_kzalloc(dev, sizeof(*clki), GFP_KERNEL);
+		if (!clki) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		clki->max_freq = clkfreq[i];
+		clki->name = kstrdup(name, GFP_KERNEL);
+		list_add_tail(&clki->list, &hwkm_dev->clk_list_head);
+	}
+out:
+	return ret;
+}
+
+static int qti_hwkm_init_clocks(struct hwkm_device *hwkm_dev)
+{
+	int ret = -EINVAL;
+	struct hwkm_clk_info *clki = NULL;
+	struct device *dev = hwkm_dev->dev;
+	struct list_head *head = &hwkm_dev->clk_list_head;
+
+	if (!hwkm_dev->is_hwkm_clk_available)
+		return 0;
+
+	if (!head || list_empty(head)) {
+		dev_err(dev, "%s: HWKM clock list null/empty\n", __func__);
+		goto out;
+	}
+
+	list_for_each_entry(clki, head, list) {
+		if (!clki->name)
+			continue;
+
+		clki->clk = devm_clk_get(dev, clki->name);
+		if (IS_ERR(clki->clk)) {
+			ret = PTR_ERR(clki->clk);
+			dev_err(dev, "%s: %s clk get failed, %d\n",
+					__func__, clki->name, ret);
+			goto out;
+		}
+
+		ret = 0;
+		if (clki->max_freq) {
+			ret = clk_set_rate(clki->clk, clki->max_freq);
+			if (ret) {
+				dev_err(dev,
+				"%s: %s clk set rate(%dHz) failed, %d\n",
+				__func__, clki->name, clki->max_freq, ret);
+				goto out;
+			}
+			clki->curr_freq = clki->max_freq;
+			dev_dbg(dev, "%s: clk: %s, rate: %lu\n", __func__,
+				clki->name, clk_get_rate(clki->clk));
+		}
+	}
+out:
+	return ret;
+}
+
+static int qti_hwkm_enable_disable_clocks(struct hwkm_device *hwkm_dev,
+					  bool enable)
+{
+	int ret = 0;
+	struct hwkm_clk_info *clki = NULL;
+	struct device *dev = hwkm_dev->dev;
+	struct list_head *head = &hwkm_dev->clk_list_head;
+
+	if (!head || list_empty(head)) {
+		dev_err(dev, "%s: HWKM clock list null/empty\n", __func__);
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (!hwkm_dev->is_hwkm_clk_available) {
+		dev_err(dev, "%s: HWKM clock not available\n", __func__);
+		ret = -EINVAL;
+		goto out;
+	}
+
+	list_for_each_entry(clki, head, list) {
+		if (!clki->name)
+			continue;
+
+		if (enable)
+			ret = clk_prepare_enable(clki->clk);
+		else
+			clk_disable_unprepare(clki->clk);
+
+		if (ret) {
+			dev_err(dev, "Unable to %s HWKM clock\n",
+				enable ? "enable" : "disable");
+			goto out;
+		}
+	}
+out:
+	return ret;
+}
+
+int qti_hwkm_clocks(bool on)
+{
+	int ret = 0;
+
+	ret = qti_hwkm_enable_disable_clocks(km_device, on);
+	if (ret) {
+		pr_err("%s:%pK Could not enable/disable clocks\n",
+				__func__, km_device);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(qti_hwkm_clocks);
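+
+/*
+ * A sketch of expected use (an assumption, not mandated by this file):
+ * callers bracket HWKM register access with qti_hwkm_clocks(true) and
+ * qti_hwkm_clocks(false).
+ */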
+
+static int qti_hwkm_get_device_tree_data(struct platform_device *pdev,
+					 struct hwkm_device *hwkm_dev)
+{
+	struct device *dev = &pdev->dev;
+	int ret = 0;
+
+	hwkm_dev->km_res = platform_get_resource_byname(pdev,
+				IORESOURCE_MEM, "km_master");
+	hwkm_dev->ice_res = platform_get_resource_byname(pdev,
+				IORESOURCE_MEM, "ice_slave");
+	if (!hwkm_dev->km_res || !hwkm_dev->ice_res) {
+		pr_err("%s: No memory available for IORESOURCE\n", __func__);
+		return -ENOMEM;
+	}
+
+	hwkm_dev->km_base = devm_ioremap_resource(dev, hwkm_dev->km_res);
+	hwkm_dev->ice_base = devm_ioremap_resource(dev, hwkm_dev->ice_res);
+
+	if (IS_ERR(hwkm_dev->km_base) || IS_ERR(hwkm_dev->ice_base)) {
+		ret = PTR_ERR(hwkm_dev->km_base);
+		pr_err("%s: Error = %d mapping HWKM memory\n", __func__, ret);
+		goto out;
+	}
+
+	hwkm_dev->is_hwkm_clk_available = of_property_read_bool(
+				dev->of_node, "qcom,enable-hwkm-clk");
+
+	if (hwkm_dev->is_hwkm_clk_available) {
+		ret = qti_hwkm_parse_clock_info(pdev, hwkm_dev);
+		if (ret) {
+			pr_err("%s: qti_hwkm_parse_clock_info failed (%d)\n",
+				__func__, ret);
+			goto out;
+		}
+	}
+
+out:
+	return ret;
+}
+
+int qti_hwkm_handle_cmd(struct hwkm_cmd *cmd, struct hwkm_rsp *rsp)
+{
+	switch (cmd->op) {
+	case SYSTEM_KDF:
+		return qti_handle_system_kdf(cmd, rsp);
+	case KEY_UNWRAP_IMPORT:
+		return qti_handle_key_unwrap_import(cmd, rsp);
+	case KEY_SLOT_CLEAR:
+		return qti_handle_keyslot_clear(cmd, rsp);
+	case KEY_SLOT_RDWR:
+		return qti_handle_keyslot_rdwr(cmd, rsp);
+	case SET_TPKEY:
+		return qti_handle_set_tpkey(cmd, rsp);
+	case NIST_KEYGEN:
+	case KEY_WRAP_EXPORT:
+	case QFPROM_KEY_RDWR: // command used during HW initialization only
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(qti_hwkm_handle_cmd);
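+
+/*
+ * Minimal usage sketch (slot numbers and context are hypothetical):
+ * deriving a key into slot 0 from the KDK in slot 1 with a 16-byte
+ * software context:
+ *
+ *	struct hwkm_cmd cmd = { .op = SYSTEM_KDF };
+ *	struct hwkm_rsp rsp;
+ *
+ *	cmd.kdf.dks = 0;
+ *	cmd.kdf.kdk = 1;
+ *	cmd.kdf.sz = 16;
+ *	memcpy(cmd.kdf.ctx, context, cmd.kdf.sz);
+ *	cmd.kdf.policy = policy;	// hwkm_key_policy filled by caller
+ *	err = qti_hwkm_handle_cmd(&cmd, &rsp);
+ */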
+
+static void qti_hwkm_configure_slot_access(struct hwkm_device *dev)
+{
+	qti_hwkm_writel(dev, 0xffffffff,
+		QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_0, ICEMEM_SLAVE);
+	qti_hwkm_writel(dev, 0xffffffff,
+		QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_1, ICEMEM_SLAVE);
+	qti_hwkm_writel(dev, 0xffffffff,
+		QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_2, ICEMEM_SLAVE);
+	qti_hwkm_writel(dev, 0xffffffff,
+		QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_3, ICEMEM_SLAVE);
+	qti_hwkm_writel(dev, 0xffffffff,
+		QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_4, ICEMEM_SLAVE);
+}
+
+static int qti_hwkm_check_bist_status(struct hwkm_device *hwkm_dev)
+{
+	if (!qti_hwkm_testb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_KM_STATUS,
+		BIST_DONE, ICEMEM_SLAVE)) {
+		pr_err("%s: Error with BIST_DONE\n", __func__);
+		return -EINVAL;
+	}
+
+	if (!qti_hwkm_testb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_KM_STATUS,
+		CRYPTO_LIB_BIST_DONE, ICEMEM_SLAVE)) {
+		pr_err("%s: Error with CRYPTO_LIB_BIST_DONE\n", __func__);
+		return -EINVAL;
+	}
+
+	if (!qti_hwkm_testb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_KM_STATUS,
+		BOOT_CMD_LIST1_DONE, ICEMEM_SLAVE)) {
+		pr_err("%s: Error with BOOT_CMD_LIST1_DONE\n", __func__);
+		return -EINVAL;
+	}
+
+	if (!qti_hwkm_testb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_KM_STATUS,
+		BOOT_CMD_LIST0_DONE, ICEMEM_SLAVE)) {
+		pr_err("%s: Error with BOOT_CMD_LIST0_DONE\n", __func__);
+		return -EINVAL;
+	}
+
+	if (!qti_hwkm_testb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_KM_STATUS,
+		KT_CLEAR_DONE, ICEMEM_SLAVE)) {
+		pr_err("%s: KT_CLEAR_DONE\n", __func__);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int qti_hwkm_ice_init_sequence(struct hwkm_device *hwkm_dev)
+{
+	int ret = 0;
+
+	// Put ICE in standard mode
+	qti_hwkm_writel(hwkm_dev, 0x7, QTI_HWKM_ICE_RG_TZ_KM_CTL, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+
+	ret = qti_hwkm_check_bist_status(hwkm_dev);
+	if (ret) {
+		pr_err("%s: Error in BIST initialization %d\n", __func__, ret);
+		return ret;
+	}
+
+	// Disable CRC checks
+	qti_hwkm_clearb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_KM_CTL, CRC_CHECK_EN,
+			ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+
+	// Configure key slots to be accessed by HLOS
+	qti_hwkm_configure_slot_access(hwkm_dev);
+	/* Write memory barrier */
+	wmb();
+
+	// Clear RSP_FIFO_FULL bit
+	qti_hwkm_setb(hwkm_dev, QTI_HWKM_ICE_RG_BANK0_BANKN_IRQ_STATUS,
+			RSP_FIFO_FULL, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+
+	return ret;
+}
+
+static void qti_hwkm_enable_slave_receive_mode(struct hwkm_device *hwkm_dev)
+{
+	qti_hwkm_clearb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_TPKEY_RECEIVE_CTL,
+			TPKEY_EN, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+	qti_hwkm_writel(hwkm_dev, ICEMEM_SLAVE_TPKEY_VAL,
+			QTI_HWKM_ICE_RG_TZ_TPKEY_RECEIVE_CTL, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+}
+
+static void qti_hwkm_disable_slave_receive_mode(struct hwkm_device *hwkm_dev)
+{
+	qti_hwkm_clearb(hwkm_dev, QTI_HWKM_ICE_RG_TZ_TPKEY_RECEIVE_CTL,
+			TPKEY_EN, ICEMEM_SLAVE);
+	/* Write memory barrier */
+	wmb();
+}
+
+static void qti_hwkm_check_tpkey_status(struct hwkm_device *hwkm_dev)
+{
+	int val = 0;
+
+	val = qti_hwkm_readl(hwkm_dev, QTI_HWKM_ICE_RG_TZ_TPKEY_RECEIVE_STATUS,
+			ICEMEM_SLAVE);
+
+	pr_debug("%s: Tpkey receive status 0x%x\n", __func__, val);
+}
+
+static int qti_hwkm_set_tpkey(void)
+{
+	int ret = 0;
+	struct hwkm_cmd cmd;
+	struct hwkm_rsp rsp;
+
+	cmd.op = SET_TPKEY;
+	cmd.set_tpkey.sks = KM_MASTER_TPKEY_SLOT;
+
+	qti_hwkm_enable_slave_receive_mode(km_device);
+	ret = qti_hwkm_handle_cmd(&cmd, &rsp);
+	if (ret) {
+		pr_err("%s: Error running command %d\n", __func__, ret);
+		return ret;
+	}
+
+	qti_hwkm_check_tpkey_status(km_device);
+	qti_hwkm_disable_slave_receive_mode(km_device);
+
+	return 0;
+}
+
+int qti_hwkm_init(void)
+{
+	int ret = 0;
+
+	ret = qti_hwkm_ice_init_sequence(km_device);
+	if (ret) {
+		pr_err("%s: Error in ICE init sequence %d\n", __func__, ret);
+		return ret;
+	}
+
+	ret = qti_hwkm_set_tpkey();
+	if (ret) {
+		pr_err("%s: Error setting ICE to receive %d\n", __func__, ret);
+		return ret;
+	}
+	/* Write memory barrier */
+	wmb();
+	return ret;
+}
+EXPORT_SYMBOL(qti_hwkm_init);
+
+static int qti_hwkm_probe(struct platform_device *pdev)
+{
+	struct hwkm_device *hwkm_dev;
+	int ret = 0;
+
+	pr_debug("%s %d: HWKM probe start\n", __func__, __LINE__);
+	if (!pdev) {
+		pr_err("%s: Invalid platform_device passed\n", __func__);
+		return -EINVAL;
+	}
+
+	hwkm_dev = kzalloc(sizeof(struct hwkm_device), GFP_KERNEL);
+	if (!hwkm_dev) {
+		ret = -ENOMEM;
+		pr_err("%s: Error %d allocating memory for HWKM device\n",
+			__func__, ret);
+		goto err_hwkm_dev;
+	}
+
+	hwkm_dev->dev = &pdev->dev;
+	if (!hwkm_dev->dev) {
+		ret = -EINVAL;
+		pr_err("%s: Invalid device passed in platform_device\n",
+			__func__);
+		goto err_hwkm_dev;
+	}
+
+	if (pdev->dev.of_node)
+		ret = qti_hwkm_get_device_tree_data(pdev, hwkm_dev);
+	else {
+		ret = -EINVAL;
+		pr_err("%s: HWKM device node not found\n", __func__);
+	}
+	if (ret)
+		goto err_hwkm_dev;
+
+	ret = qti_hwkm_init_clocks(hwkm_dev);
+	if (ret) {
+		pr_err("%s: Error initializing clocks %d\n", __func__, ret);
+		goto err_hwkm_dev;
+	}
+
+	hwkm_dev->is_hwkm_enabled = true;
+	km_device = hwkm_dev;
+	platform_set_drvdata(pdev, hwkm_dev);
+
+	pr_debug("%s %d: HWKM probe ends\n", __func__, __LINE__);
+	return ret;
+
+err_hwkm_dev:
+	km_device = NULL;
+	kfree(hwkm_dev);
+	return ret;
+}
+
+
+static int qti_hwkm_remove(struct platform_device *pdev)
+{
+	kfree(km_device);
+	km_device = NULL;
+	return 0;
+}
+
+static const struct of_device_id qti_hwkm_match[] = {
+	{ .compatible = "qcom,hwkm"},
+	{},
+};
+MODULE_DEVICE_TABLE(of, qti_hwkm_match);
+
+static struct platform_driver qti_hwkm_driver = {
+	.probe		= qti_hwkm_probe,
+	.remove		= qti_hwkm_remove,
+	.driver		= {
+		.name	= "qti_hwkm",
+		.of_match_table	= qti_hwkm_match,
+	},
+};
+module_platform_driver(qti_hwkm_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("QTI Hardware Key Manager driver");
diff --git a/drivers/soc/qcom/hwkm_serialize.h b/drivers/soc/qcom/hwkm_serialize.h
new file mode 100644
index 0000000..2d73ff5
--- /dev/null
+++ b/drivers/soc/qcom/hwkm_serialize.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __HWKM_SERIALIZE_H_
+#define __HWKM_SERIALIZE_H_
+
+#include <stdbool.h>
+#include <stddef.h>
+
+#include <linux/hwkm.h>
+
+/* Command lengths (words) */
+#define NIST_KEYGEN_CMD_WORDS 4
+#define SYSTEM_KDF_CMD_MIN_WORDS 4
+#define SYSTEM_KDF_CMD_MAX_WORDS 29
+#define KEYSLOT_CLEAR_CMD_WORDS 2
+#define UNWRAP_IMPORT_CMD_WORDS 19
+#define WRAP_EXPORT_CMD_WORDS 5
+#define SET_TPKEY_CMD_WORDS 2
+#define KEYSLOT_RDWR_CMD_WORDS 12
+#define QFPROM_RDWR_CMD_WORDS 2
+
+/* Response lengths (words) */
+#define NIST_KEYGEN_RSP_WORDS 2
+#define SYSTEM_KDF_RSP_WORDS 2
+#define KEYSLOT_CLEAR_RSP_WORDS 2
+#define UNWRAP_IMPORT_RSP_WORDS 2
+#define WRAP_EXPORT_RSP_WORDS 19
+#define SET_TPKEY_RSP_WORDS 2
+#define KEYSLOT_RDWR_RSP_WORDS 12
+#define QFPROM_RDWR_RSP_WORDS 2
+
+/* Field lengths (words) */
+#define OPERATION_INFO_WORDS 1
+#define KEY_POLICY_WORDS 2
+#define BSVE_WORDS 3
+#define MAX_SWC_WORDS 16
+#define RESPONSE_KEY_WORDS 8
+#define KEY_BLOB_WORDS 17
+
+/* Field lengths (bytes) */
+#define OPERATION_INFO_LENGTH (OPERATION_INFO_WORDS * sizeof(uint32_t))
+#define KEY_POLICY_LENGTH (KEY_POLICY_WORDS * sizeof(uint32_t))
+#define MAX_BSVE_LENGTH (BSVE_WORDS * sizeof(uint32_t))
+#define MAX_SWC_LENGTH (MAX_SWC_WORDS * sizeof(uint32_t))
+#define RESPONSE_KEY_LENGTH (RESPONSE_KEY_WORDS * sizeof(uint32_t))
+#define KEY_BLOB_LENGTH (KEY_BLOB_WORDS * sizeof(uint32_t))
+
+/* Command indices */
+#define COMMAND_KEY_POLICY_IDX 1
+#define COMMAND_KEY_VALUE_IDX 3
+#define COMMAND_WRAPPED_KEY_IDX 1
+#define COMMAND_KEY_WRAP_BSVE_IDX 1
+
+/* Response indices */
+#define RESPONSE_ERR_IDX 1
+#define RESPONSE_KEY_POLICY_IDX 2
+#define RESPONSE_KEY_VALUE_IDX 4
+#define RESPONSE_WRAPPED_KEY_IDX 2
+
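+/*
+ * Note: these structs mirror the HWKM command/response word layout using C
+ * bit-fields, whose ordering is compiler- and endianness-dependent; the
+ * layouts below assume the little-endian ordering this driver is built for.
+ */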
+struct hwkm_serialized_policy {
+	unsigned dbg_qfprom_key_rd_iv_sel:1;		// [0]
+	unsigned reserved0:1;				// [1]
+	unsigned wrap_with_tpkey:1;			// [2]
+	unsigned hw_destination:4;			// [3:6]
+	unsigned reserved1:1;				// [7]
+	unsigned propagate_sec_level_to_child_keys:1;	// [8]
+	unsigned security_level:2;			// [9:10]
+	unsigned swap_export_allowed:1;			// [11]
+	unsigned wrap_export_allowed:1;			// [12]
+	unsigned key_type:3;				// [13:15]
+	unsigned kdf_depth:8;				// [16:23]
+	unsigned decrypt_allowed:1;			// [24]
+	unsigned encrypt_allowed:1;			// [25]
+	unsigned alg_allowed:6;				// [26:31]
+	unsigned key_management_by_tz_secure_allowed:1;	// [32]
+	unsigned key_management_by_nonsecure_allowed:1;	// [33]
+	unsigned key_management_by_modem_allowed:1;	// [34]
+	unsigned key_management_by_spu_allowed:1;	// [35]
+	unsigned reserved2:28;				// [36:63]
+} __packed;
+
+struct hwkm_kdf_bsve {
+	unsigned mks:8;				// [0:7]
+	unsigned key_policy_version_en:1;	// [8]
+	unsigned apps_secure_en:1;		// [9]
+	unsigned msa_secure_en:1;		// [10]
+	unsigned lcm_fuse_row_en:1;		// [11]
+	unsigned boot_stage_otp_en:1;		// [12]
+	unsigned swc_en:1;			// [13]
+	u64 fuse_region_sha_digest_en:64;	// [14:77]
+	unsigned child_key_policy_en:1;		// [78]
+	unsigned mks_en:1;			// [79]
+	unsigned reserved:16;			// [80:95]
+} __packed;
+
+struct hwkm_wrapping_bsve {
+	unsigned key_policy_version_en:1;      // [0]
+	unsigned apps_secure_en:1;             // [1]
+	unsigned msa_secure_en:1;              // [2]
+	unsigned lcm_fuse_row_en:1;            // [3]
+	unsigned boot_stage_otp_en:1;          // [4]
+	unsigned swc_en:1;                     // [5]
+	u64 fuse_region_sha_digest_en:64; // [6:69]
+	unsigned child_key_policy_en:1;        // [70]
+	unsigned mks_en:1;                     // [71]
+	unsigned reserved:24;                  // [72:95]
+} __packed;
+
+struct hwkm_operation_info {
+	unsigned op:4;		// [0:3]
+	unsigned irq_en:1;	// [4]
+	unsigned slot1_desc:8;	// [5:12]
+	unsigned slot2_desc:8;	// [13:20]
+	unsigned op_flag:1;	// [21]
+	unsigned context_len:5;	// [22:26]
+	unsigned len:5;		// [27:31]
+} __packed;
+
+#endif /* __HWKM_SERIALIZE_H_ */
diff --git a/drivers/soc/qcom/hwkmregs.h b/drivers/soc/qcom/hwkmregs.h
new file mode 100644
index 0000000..552e489
--- /dev/null
+++ b/drivers/soc/qcom/hwkmregs.h
@@ -0,0 +1,261 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _QTI_HARDWARE_KEY_MANAGER_REGS_H_
+#define _QTI_HARDWARE_KEY_MANAGER_REGS_H_
+
+#define HWKM_VERSION_STEP_REV_MASK		0xFFFF
+#define HWKM_VERSION_STEP_REV			0 /* bit 15-0 */
+#define HWKM_VERSION_MAJOR_REV_MASK		0xFF000000
+#define HWKM_VERSION_MAJOR_REV			24 /* bit 31-24 */
+#define HWKM_VERSION_MINOR_REV_MASK		0xFF0000
+#define HWKM_VERSION_MINOR_REV			16 /* bit 23-16 */
+
+/* QTI HWKM master registers from SWI */
+/* QTI HWKM master shared registers */
+#define QTI_HWKM_MASTER_RG_IPCAT_VERSION		0x0000
+#define QTI_HWKM_MASTER_RG_KEY_POLICY_VERSION		0x0004
+#define QTI_HWKM_MASTER_RG_SHARED_STATUS		0x0008
+#define QTI_HWKM_MASTER_RG_KEYTABLE_SIZE		0x000C
+
+/* QTI HWKM master register bank 2 */
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_CTL		0x4000
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_STATUS		0x4004
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_IRQ_STATUS	0x4008
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_IRQ_MASK		0x400C
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_ESR		0x4010
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_ESR_IRQ_MASK	0x4014
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_ESYNR		0x4018
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_0			0x401C
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_1			0x4020
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_2			0x4024
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_3			0x4028
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_4			0x402C
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_5			0x4030
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_6			0x4034
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_7			0x4038
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_8			0x403C
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_9			0x4040
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_10			0x4044
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_11			0x4048
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_12			0x404C
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_13			0x4050
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_14			0x4054
+#define QTI_HWKM_MASTER_RG_BANK2_CMD_15			0x4058
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_0			0x405C
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_1			0x4060
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_2			0x4064
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_3			0x4068
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_4			0x406C
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_5			0x4070
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_6			0x4074
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_7			0x4078
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_8			0x407C
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_9			0x4080
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_10			0x4084
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_11			0x4088
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_12			0x408C
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_13			0x4090
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_14			0x4094
+#define QTI_HWKM_MASTER_RG_BANK2_RSP_15			0x4098
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_IRQ_ROUTING	0x409C
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_BBAC_0		0x40A0
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_BBAC_1		0x40A4
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_BBAC_2		0x40A8
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_BBAC_3		0x40AC
+#define QTI_HWKM_MASTER_RG_BANK2_BANKN_BBAC_4		0x40B0
+
+/* QTI HWKM master register bank 3 */
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_CTL		0x5000
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_STATUS		0x5004
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_IRQ_STATUS	0x5008
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_IRQ_MASK		0x500C
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_ESR		0x5010
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_ESR_IRQ_MASK	0x5014
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_ESYNR		0x5018
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_0			0x501C
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_1			0x5020
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_2			0x5024
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_3			0x5028
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_4			0x502C
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_5			0x5030
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_6			0x5034
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_7			0x5038
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_8			0x503C
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_9			0x5040
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_10			0x5044
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_11			0x5048
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_12			0x504C
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_13			0x5050
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_14			0x5054
+#define QTI_HWKM_MASTER_RG_BANK3_CMD_15			0x5058
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_0			0x505C
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_1			0x5060
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_2			0x5064
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_3			0x5068
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_4			0x506C
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_5			0x5070
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_6			0x5074
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_7			0x5078
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_8			0x507C
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_9			0x5080
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_10			0x5084
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_11			0x5088
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_12			0x508C
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_13			0x5090
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_14			0x5094
+#define QTI_HWKM_MASTER_RG_BANK3_RSP_15			0x5098
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_IRQ_ROUTING	0x509C
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_BBAC_0		0x50A0
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_BBAC_1		0x50A4
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_BBAC_2		0x50A8
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_BBAC_3		0x50AC
+#define QTI_HWKM_MASTER_RG_BANK3_BANKN_BBAC_4		0x50B0
+
+/* QTI HWKM access control registers for Bank 2 */
+#define QTI_HWKM_MASTER_RG_BANK2_AC_BANKN_BBAC_0	0x8000
+#define QTI_HWKM_MASTER_RG_BANK2_AC_BANKN_BBAC_1	0x8004
+#define QTI_HWKM_MASTER_RG_BANK2_AC_BANKN_BBAC_2	0x8008
+#define QTI_HWKM_MASTER_RG_BANK2_AC_BANKN_BBAC_3	0x800C
+#define QTI_HWKM_MASTER_RG_BANK2_AC_BANKN_BBAC_4	0x8010
+
+/* QTI HWKM access control registers for Bank 3 */
+#define QTI_HWKM_MASTER_RG_BANK3_AC_BANKN_BBAC_0	0x9000
+#define QTI_HWKM_MASTER_RG_BANK3_AC_BANKN_BBAC_1	0x9004
+#define QTI_HWKM_MASTER_RG_BANK3_AC_BANKN_BBAC_2	0x9008
+#define QTI_HWKM_MASTER_RG_BANK3_AC_BANKN_BBAC_3	0x900C
+#define QTI_HWKM_MASTER_RG_BANK3_AC_BANKN_BBAC_4	0x9010
+
+/* QTI HWKM ICE slave config and status registers */
+#define QTI_HWKM_ICE_RG_TZ_KM_CTL			0x1000
+#define QTI_HWKM_ICE_RG_TZ_KM_STATUS			0x1004
+#define QTI_HWKM_ICE_RG_TZ_KM_STATUS_IRQ_MASK		0x1008
+#define QTI_HWKM_ICE_RG_TZ_KM_BOOT_STAGE_OTP		0x100C
+#define QTI_HWKM_ICE_RG_TZ_KM_DEBUG_CTL			0x1010
+#define QTI_HWKM_ICE_RG_TZ_KM_DEBUG_WRITE		0x1014
+#define QTI_HWKM_ICE_RG_TZ_KM_DEBUG_READ		0x1018
+#define QTI_HWKM_ICE_RG_TZ_TPKEY_RECEIVE_CTL		0x101C
+#define QTI_HWKM_ICE_RG_TZ_TPKEY_RECEIVE_STATUS		0x1020
+#define QTI_HWKM_ICE_RG_TZ_KM_COMMON_IRQ_ROUTING	0x1024
+
+/* QTI HWKM ICE slave registers from SWI */
+/* QTI HWKM ICE slave shared registers */
+#define QTI_HWKM_ICE_RG_IPCAT_VERSION			0x0000
+#define QTI_HWKM_ICE_RG_KEY_POLICY_VERSION		0x0004
+#define QTI_HWKM_ICE_RG_SHARED_STATUS			0x0008
+#define QTI_HWKM_ICE_RG_KEYTABLE_SIZE			0x000C
+
+/* QTI HWKM ICE slave register bank 0 */
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_CTL			0x2000
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_STATUS		0x2004
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_IRQ_STATUS		0x2008
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_IRQ_MASK		0x200C
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_ESR			0x2010
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_ESR_IRQ_MASK	0x2014
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_ESYNR		0x2018
+#define QTI_HWKM_ICE_RG_BANK0_CMD_0			0x201C
+#define QTI_HWKM_ICE_RG_BANK0_CMD_1			0x2020
+#define QTI_HWKM_ICE_RG_BANK0_CMD_2			0x2024
+#define QTI_HWKM_ICE_RG_BANK0_CMD_3			0x2028
+#define QTI_HWKM_ICE_RG_BANK0_CMD_4			0x202C
+#define QTI_HWKM_ICE_RG_BANK0_CMD_5			0x2030
+#define QTI_HWKM_ICE_RG_BANK0_CMD_6			0x2034
+#define QTI_HWKM_ICE_RG_BANK0_CMD_7			0x2038
+#define QTI_HWKM_ICE_RG_BANK0_CMD_8			0x203C
+#define QTI_HWKM_ICE_RG_BANK0_CMD_9			0x2040
+#define QTI_HWKM_ICE_RG_BANK0_CMD_10			0x2044
+#define QTI_HWKM_ICE_RG_BANK0_CMD_11			0x2048
+#define QTI_HWKM_ICE_RG_BANK0_CMD_12			0x204C
+#define QTI_HWKM_ICE_RG_BANK0_CMD_13			0x2050
+#define QTI_HWKM_ICE_RG_BANK0_CMD_14			0x2054
+#define QTI_HWKM_ICE_RG_BANK0_CMD_15			0x2058
+#define QTI_HWKM_ICE_RG_BANK0_RSP_0			0x205C
+#define QTI_HWKM_ICE_RG_BANK0_RSP_1			0x2060
+#define QTI_HWKM_ICE_RG_BANK0_RSP_2			0x2064
+#define QTI_HWKM_ICE_RG_BANK0_RSP_3			0x2068
+#define QTI_HWKM_ICE_RG_BANK0_RSP_4			0x206C
+#define QTI_HWKM_ICE_RG_BANK0_RSP_5			0x2070
+#define QTI_HWKM_ICE_RG_BANK0_RSP_6			0x2074
+#define QTI_HWKM_ICE_RG_BANK0_RSP_7			0x2078
+#define QTI_HWKM_ICE_RG_BANK0_RSP_8			0x207C
+#define QTI_HWKM_ICE_RG_BANK0_RSP_9			0x2080
+#define QTI_HWKM_ICE_RG_BANK0_RSP_10			0x2084
+#define QTI_HWKM_ICE_RG_BANK0_RSP_11			0x2088
+#define QTI_HWKM_ICE_RG_BANK0_RSP_12			0x208C
+#define QTI_HWKM_ICE_RG_BANK0_RSP_13			0x2090
+#define QTI_HWKM_ICE_RG_BANK0_RSP_14			0x2094
+#define QTI_HWKM_ICE_RG_BANK0_RSP_15			0x2098
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_IRQ_ROUTING		0x209C
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_BBAC_0		0x20A0
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_BBAC_1		0x20A4
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_BBAC_2		0x20A8
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_BBAC_3		0x20AC
+#define QTI_HWKM_ICE_RG_BANK0_BANKN_BBAC_4		0x20B0
+
+/* QTI HWKM access control registers for Bank 0 */
+#define QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_0		0x5000
+#define QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_1		0x5004
+#define QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_2		0x5008
+#define QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_3		0x500C
+#define QTI_HWKM_ICE_RG_BANK0_AC_BANKN_BBAC_4		0x5010
+
+/* QTI HWKM ICE slave config reg vals */
+
+/* HWKM_ICEMEM_SLAVE_ICE_KM_RG_TZ_KM_CTL */
+#define CRC_CHECK_EN				0
+#define KEYTABLE_HW_WR_ACCESS_EN		1
+#define KEYTABLE_HW_RD_ACCESS_EN		2
+#define BOOT_INIT0_DISABLE			3
+#define BOOT_INIT1_DISABLE			4
+#define ICE_LEGACY_MODE_EN_OTP			5
+
+/* HWKM_ICEMEM_SLAVE_ICE_KM_RG_TZ_KM_STATUS */
+#define KT_CLEAR_DONE				0
+#define BOOT_CMD_LIST0_DONE			1
+#define BOOT_CMD_LIST1_DONE			2
+#define KEYTABLE_KEY_POLICY			3
+#define KEYTABLE_INTEGRITY_ERROR		4
+#define KEYTABLE_KEY_SLOT_ERROR			5
+#define KEYTABLE_KEY_SLOT_NOT_EVEN_ERROR	6
+#define KEYTABLE_KEY_SLOT_OUT_OF_RANGE		7
+#define KEYTABLE_KEY_SIZE_ERROR			8
+#define KEYTABLE_OPERATION_ERROR		9
+#define LAST_ACTIVITY_BANK			10
+#define CRYPTO_LIB_BIST_ERROR			13
+#define CRYPTO_LIB_BIST_DONE			14
+#define BIST_ERROR				15
+#define BIST_DONE				16
+#define LAST_ACTIVITY_BANK_MASK			0x1c00
+
+/* HWKM_ICEMEM_SLAVE_ICE_KM_RG_TZ_TPKEY_RECEIVE_CTL */
+#define TPKEY_EN				8
+
+/* QTI HWKM Bank status & control reg vals */
+
+/* HWKM_MASTER_CFG_KM_BANKN_CTL */
+#define CMD_ENABLE_BIT				0
+#define CMD_FIFO_CLEAR_BIT			1
+
+/* HWKM_MASTER_CFG_KM_BANKN_STATUS */
+#define CURRENT_CMD_REMAINING_LENGTH		0
+#define MOST_RECENT_OPCODE			5
+#define RSP_FIFO_AVAILABLE_DATA			9
+#define CMD_FIFO_AVAILABLE_SPACE		14
+#define ICE_LEGACY_MODE_BIT			19
+#define CMD_FIFO_AVAILABLE_SPACE_MASK		0x7c000
+#define RSP_FIFO_AVAILABLE_DATA_MASK		0x3e00
+#define MOST_RECENT_OPCODE_MASK			0x1e0
+#define CURRENT_CMD_REMAINING_LENGTH_MASK	0x1f
+
+/* HWKM_MASTER_CFG_KM_BANKN_IRQ_STATUS */
+#define ARB_GRAN_WINNER				0
+#define CMD_DONE_BIT				1
+#define RSP_FIFO_NOT_EMPTY			2
+#define RSP_FIFO_FULL				3
+#define RSP_FIFO_UNDERFLOW			4
+#define CMD_FIFO_UNDERFLOW			5
+
+#endif /* _QTI_HARDWARE_KEY_MANAGER_REGS_H_ */
diff --git a/drivers/soc/qcom/icnss2/main.c b/drivers/soc/qcom/icnss2/main.c
index 27c0484..b869497 100644
--- a/drivers/soc/qcom/icnss2/main.c
+++ b/drivers/soc/qcom/icnss2/main.c
@@ -1613,6 +1613,32 @@
 	return 0;
 }
 
+int icnss_qmi_send(struct device *dev, int type, void *cmd,
+		  int cmd_len, void *cb_ctx,
+		  int (*cb)(void *ctx, void *event, int event_len))
+{
+	struct icnss_priv *priv = icnss_get_plat_priv();
+	int ret;
+
+	if (!priv)
+		return -ENODEV;
+
+	if (!test_bit(ICNSS_WLFW_CONNECTED, &priv->state))
+		return -EINVAL;
+
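+	/*
+	 * Register the caller's callback before sending; it is invoked from
+	 * the respond-get-info indication handler and unregistered again if
+	 * the send fails.
+	 */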
+	priv->get_info_cb = cb;
+	priv->get_info_cb_ctx = cb_ctx;
+
+	ret = icnss_wlfw_get_info_send_sync(priv, type, cmd, cmd_len);
+	if (ret) {
+		priv->get_info_cb = NULL;
+		priv->get_info_cb_ctx = NULL;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(icnss_qmi_send);
+
 int __icnss_register_driver(struct icnss_driver_ops *ops,
 			    struct module *owner, const char *mod_name)
 {
@@ -2763,6 +2789,14 @@
 	    !test_bit(ICNSS_DRIVER_PROBED, &priv->state))
 		goto out;
 
+	if (priv->device_id == WCN6750_DEVICE_ID) {
+		ret = wlfw_exit_power_save_send_msg(priv);
+		if (ret) {
+			priv->stats.pm_resume_err++;
+			return ret;
+		}
+	}
+
 	ret = priv->ops->pm_resume(dev);
 
 out:
diff --git a/drivers/soc/qcom/icnss2/main.h b/drivers/soc/qcom/icnss2/main.h
index f90ef2c..cd5d6dd 100644
--- a/drivers/soc/qcom/icnss2/main.h
+++ b/drivers/soc/qcom/icnss2/main.h
@@ -207,6 +207,9 @@
 	uint32_t device_info_req;
 	uint32_t device_info_resp;
 	uint32_t device_info_err;
+	u32 exit_power_save_req;
+	u32 exit_power_save_resp;
+	u32 exit_power_save_err;
 };
 
 #define WLFW_MAX_TIMESTAMP_LEN 32
@@ -337,6 +340,8 @@
 	atomic_t is_shutdown;
 	u32 qdss_mem_seg_len;
 	struct icnss_fw_mem qdss_mem[QMI_WLFW_MAX_NUM_MEM_SEG];
+	void *get_info_cb_ctx;
+	int (*get_info_cb)(void *ctx, void *event, int event_len);
 };
 
 struct icnss_reg_info {
diff --git a/drivers/soc/qcom/icnss2/qmi.c b/drivers/soc/qcom/icnss2/qmi.c
index 9631a8b..3a96131 100644
--- a/drivers/soc/qcom/icnss2/qmi.c
+++ b/drivers/soc/qcom/icnss2/qmi.c
@@ -340,6 +340,79 @@
 	return ret;
 }
 
+int wlfw_exit_power_save_send_msg(struct icnss_priv *priv)
+{
+	int ret;
+	struct wlfw_exit_power_save_req_msg_v01 *req;
+	struct wlfw_exit_power_save_resp_msg_v01 *resp;
+	struct qmi_txn txn;
+
+	if (!priv)
+		return -ENODEV;
+
+	if (test_bit(ICNSS_FW_DOWN, &priv->state))
+		return -EINVAL;
+
+	icnss_pr_dbg("Sending exit power save, state: 0x%lx\n",
+		     priv->state);
+
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+	if (!resp) {
+		kfree(req);
+		return -ENOMEM;
+	}
+
+	priv->stats.exit_power_save_req++;
+
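+	/*
+	 * Synchronous QMI exchange: initialize a transaction with the
+	 * expected response decoder, send the request, then wait (up to
+	 * qmi_timeout) for the firmware's response.
+	 */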
+	ret = qmi_txn_init(&priv->qmi, &txn,
+			   wlfw_exit_power_save_resp_msg_v01_ei, resp);
+	if (ret < 0) {
+		icnss_qmi_fatal_err("Fail to init txn for exit power save%d\n",
+				    ret);
+		goto out;
+	}
+
+	ret = qmi_send_request(&priv->qmi, NULL, &txn,
+			       QMI_WLFW_EXIT_POWER_SAVE_REQ_V01,
+			       WLFW_EXIT_POWER_SAVE_REQ_MSG_V01_MAX_MSG_LEN,
+			       wlfw_exit_power_save_req_msg_v01_ei, req);
+	if (ret < 0) {
+		qmi_txn_cancel(&txn);
+		icnss_qmi_fatal_err("Fail to send exit power save req %d\n",
+				    ret);
+		goto out;
+	}
+
+	ret = qmi_txn_wait(&txn, priv->ctrl_params.qmi_timeout);
+	if (ret < 0) {
+		icnss_qmi_fatal_err("Exit power save wait failed with ret %d\n",
+				    ret);
+		goto out;
+	} else if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+		icnss_qmi_fatal_err(
+		    "QMI exit power save request rejected,result:%d error:%d\n",
+				    resp->resp.result, resp->resp.error);
+		ret = -resp->resp.result;
+		goto out;
+	}
+
+	priv->stats.exit_power_save_resp++;
+
+	kfree(resp);
+	kfree(req);
+	return 0;
+
+out:
+	kfree(resp);
+	kfree(req);
+	priv->stats.exit_power_save_err++;
+	return ret;
+}
+
 int wlfw_ind_register_send_sync_msg(struct icnss_priv *priv)
 {
 	int ret;
@@ -389,6 +462,8 @@
 		req->qdss_trace_save_enable = 1;
 		req->qdss_trace_free_enable_valid = 1;
 		req->qdss_trace_free_enable = 1;
+		req->respond_get_info_enable_valid = 1;
+		req->respond_get_info_enable = 1;
 	}
 
 	priv->stats.ind_register_req++;
@@ -1643,6 +1718,32 @@
 	icnss_driver_event_post(priv, ICNSS_DRIVER_EVENT_QDSS_TRACE_FREE,
 				0, NULL);
 }
+
+static void icnss_wlfw_respond_get_info_ind_cb(struct qmi_handle *qmi,
+					      struct sockaddr_qrtr *sq,
+					      struct qmi_txn *txn,
+					      const void *data)
+{
+	struct icnss_priv *priv = container_of(qmi, struct icnss_priv, qmi);
+	const struct wlfw_respond_get_info_ind_msg_v01 *ind_msg = data;
+
+	icnss_pr_vdbg("Received QMI WLFW respond get info indication\n");
+
+	if (!txn) {
+		icnss_pr_err("Spurious indication\n");
+		return;
+	}
+
+	icnss_pr_vdbg("Extract message with event length: %d, type: %d, is last: %d, seq no: %d\n",
+		     ind_msg->data_len, ind_msg->type,
+		     ind_msg->is_last, ind_msg->seq_no);
+
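+	/* Hand the payload to the callback registered via icnss_qmi_send() */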
+	if (priv->get_info_cb_ctx && priv->get_info_cb)
+		priv->get_info_cb(priv->get_info_cb_ctx,
+				       (void *)ind_msg->data,
+				       ind_msg->data_len);
+}
+
 static struct qmi_msg_handler wlfw_msg_handlers[] = {
 	{
 		.type = QMI_INDICATION,
@@ -1711,6 +1812,14 @@
 		sizeof(struct wlfw_qdss_trace_free_ind_msg_v01),
 		.fn = wlfw_qdss_trace_free_ind_cb
 	},
+	{
+		.type = QMI_INDICATION,
+		.msg_id = QMI_WLFW_RESPOND_GET_INFO_IND_V01,
+		.ei = wlfw_respond_get_info_ind_msg_v01_ei,
+		.decoded_size =
+		sizeof(struct wlfw_respond_get_info_ind_msg_v01),
+		.fn = icnss_wlfw_respond_get_info_ind_cb
+	},
 	{}
 };
 
@@ -2072,3 +2181,77 @@
 	kfree(resp);
 	return ret;
 }
+
+int icnss_wlfw_get_info_send_sync(struct icnss_priv *plat_priv, int type,
+				 void *cmd, int cmd_len)
+{
+	struct wlfw_get_info_req_msg_v01 *req;
+	struct wlfw_get_info_resp_msg_v01 *resp;
+	struct qmi_txn txn;
+	int ret = 0;
+
+	icnss_pr_dbg("Sending get info message, type: %d, cmd length: %d, state: 0x%lx\n",
+		     type, cmd_len, plat_priv->state);
+
+	if (cmd_len > QMI_WLFW_MAX_DATA_SIZE_V01)
+		return -EINVAL;
+
+	if (test_bit(ICNSS_FW_DOWN, &plat_priv->state))
+		return -EINVAL;
+
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+	if (!resp) {
+		kfree(req);
+		return -ENOMEM;
+	}
+
+	req->type = type;
+	req->data_len = cmd_len;
+	memcpy(req->data, cmd, req->data_len);
+
+	ret = qmi_txn_init(&plat_priv->qmi, &txn,
+			   wlfw_get_info_resp_msg_v01_ei, resp);
+	if (ret < 0) {
+		icnss_pr_err("Failed to initialize txn for get info request, err: %d\n",
+			    ret);
+		goto out;
+	}
+
+	ret = qmi_send_request(&plat_priv->qmi, NULL, &txn,
+			       QMI_WLFW_GET_INFO_REQ_V01,
+			       WLFW_GET_INFO_REQ_MSG_V01_MAX_MSG_LEN,
+			       wlfw_get_info_req_msg_v01_ei, req);
+	if (ret < 0) {
+		qmi_txn_cancel(&txn);
+		icnss_pr_err("Failed to send get info request, err: %d\n",
+			    ret);
+		goto out;
+	}
+
+	ret = qmi_txn_wait(&txn, plat_priv->ctrl_params.qmi_timeout);
+	if (ret < 0) {
+		icnss_pr_err("Failed to wait for response of get info request, err: %d\n",
+			    ret);
+		goto out;
+	}
+
+	if (resp->resp.result != QMI_RESULT_SUCCESS_V01) {
+		icnss_pr_err("Get info request failed, result: %d, err: %d\n",
+			    resp->resp.result, resp->resp.error);
+		ret = -resp->resp.result;
+		goto out;
+	}
+
+	kfree(req);
+	kfree(resp);
+	return 0;
+
+out:
+	kfree(req);
+	kfree(resp);
+	return ret;
+}
diff --git a/drivers/soc/qcom/icnss2/qmi.h b/drivers/soc/qcom/icnss2/qmi.h
index 9a053d9..c4c42ce 100644
--- a/drivers/soc/qcom/icnss2/qmi.h
+++ b/drivers/soc/qcom/icnss2/qmi.h
@@ -128,6 +128,17 @@
 {
 	return 0;
 }
+
+static inline int wlfw_exit_power_save_send_msg(struct icnss_priv *priv)
+{
+	return 0;
+}
+
+static inline int icnss_wlfw_get_info_send_sync(struct icnss_priv *priv,
+						int type, void *cmd, int cmd_len)
+{
+	return 0;
+}
 #else
 int wlfw_ind_register_send_sync_msg(struct icnss_priv *priv);
 int icnss_connect_to_fw_server(struct icnss_priv *priv, void *data);
@@ -163,6 +174,9 @@
 				 enum wlfw_driver_mode_enum_v01 mode);
 int icnss_wlfw_bdf_dnld_send_sync(struct icnss_priv *priv, u32 bdf_type);
 int wlfw_qdss_trace_mem_info_send_sync(struct icnss_priv *priv);
+int wlfw_exit_power_save_send_msg(struct icnss_priv *priv);
+int icnss_wlfw_get_info_send_sync(struct icnss_priv *priv, int type,
+				  void *cmd, int cmd_len);
 #endif
 
 #endif /* __ICNSS_QMI_H__*/
diff --git a/drivers/soc/qcom/l2_reuse.c b/drivers/soc/qcom/l2_reuse.c
index e20b49a..bd96d38 100644
--- a/drivers/soc/qcom/l2_reuse.c
+++ b/drivers/soc/qcom/l2_reuse.c
@@ -13,7 +13,7 @@
 
 #define L2_REUSE_SMC_ID 0x00200090C
 
-static bool l2_reuse_enable = true;
+static bool l2_reuse_enable;
 static struct kobject *l2_reuse_kobj;
 
 static ssize_t sysfs_show(struct kobject *kobj,
@@ -38,12 +38,12 @@
 	return count;
 }
 
-struct kobj_attribute l2_reuse_attr = __ATTR(l2_reuse_enable, 0660,
+struct kobj_attribute l2_reuse_attr = __ATTR(extended_cache_enable, 0660,
 		sysfs_show, sysfs_store);
 
 static int __init l2_reuse_driver_init(void)
 {
-	l2_reuse_kobj = kobject_create_and_add("l2_reuse_enable", power_kobj);
+	l2_reuse_kobj = kobject_create_and_add("l2_reuse", power_kobj);
 
 	if (!l2_reuse_kobj) {
 		pr_info("kobj creation for l2_reuse failed\n");
diff --git a/drivers/soc/qcom/qdss_bridge.c b/drivers/soc/qcom/qdss_bridge.c
index 40a6d5c..92af4dcb6 100644
--- a/drivers/soc/qcom/qdss_bridge.c
+++ b/drivers/soc/qcom/qdss_bridge.c
@@ -108,7 +108,6 @@
 
 		buf = kzalloc(drvdata->mtu, GFP_KERNEL);
 		usb_req = kzalloc(sizeof(*usb_req), GFP_KERNEL);
-		init_completion(&usb_req->write_done);
 
 		entry->buf = buf;
 		entry->usb_req = usb_req;
@@ -450,12 +449,16 @@
 {
 	struct qdss_bridge_drvdata *drvdata = priv;
 
-	if (!drvdata)
+	if (!drvdata || drvdata->mode != MHI_TRANSFER_TYPE_USB
+			|| drvdata->opened == DISABLE) {
+		pr_err_ratelimited("%s: called in invalid state\n",
+				__func__);
 		return;
+	}
 
 	switch (event) {
 	case USB_QDSS_CONNECT:
-		usb_qdss_alloc_req(ch, drvdata->nr_trbs, 0);
+		usb_qdss_alloc_req(ch, drvdata->nr_trbs);
 		mhi_queue_read(drvdata);
 		break;
 
diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
index 8734243..9d24d3c 100644
--- a/drivers/soc/qcom/socinfo.c
+++ b/drivers/soc/qcom/socinfo.c
@@ -295,6 +295,9 @@
 	[305] = {MSM_CPU_8996, "MSM8996pro"},
 	[312] = {MSM_CPU_8996, "APQ8096pro"},
 
+	/* SDM660 ID */
+	[317] = {MSM_CPU_SDM660, "SDM660"},
+
 	/* sm8150 ID */
 	[339] = {MSM_CPU_SM8150, "SM8150"},
 
@@ -1188,6 +1191,10 @@
 		dummy_socinfo.id = 310;
 		strlcpy(dummy_socinfo.build_id, "msm8996-auto - ",
 		sizeof(dummy_socinfo.build_id));
+	} else if (early_machine_is_sdm660()) {
+		dummy_socinfo.id = 317;
+		strlcpy(dummy_socinfo.build_id, "sdm660 - ",
+		sizeof(dummy_socinfo.build_id));
 	} else if (early_machine_is_sm8150()) {
 		dummy_socinfo.id = 339;
 		strlcpy(dummy_socinfo.build_id, "sm8150 - ",
diff --git a/drivers/tty/serial/msm_geni_serial.c b/drivers/tty/serial/msm_geni_serial.c
index f739273..462ab9a 100644
--- a/drivers/tty/serial/msm_geni_serial.c
+++ b/drivers/tty/serial/msm_geni_serial.c
@@ -10,6 +10,7 @@
 #include <linux/console.h>
 #include <linux/io.h>
 #include <linux/ipc_logging.h>
+#include <linux/irq.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
@@ -800,8 +801,8 @@
 		 * Failure IPC logs are not added as this API is
 		 * used by early console and it doesn't have log handle.
 		 */
-		geni_write_reg(S_GENI_CMD_CANCEL, uport->membase,
-						SE_GENI_S_CMD_CTRL_REG);
+		geni_write_reg(M_GENI_CMD_CANCEL, uport->membase,
+						SE_GENI_M_CMD_CTRL_REG);
 		done = msm_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
 						M_CMD_CANCEL_EN, true);
 		if (!done) {
@@ -2020,6 +2021,7 @@
 	/* Stop the console before stopping the current tx */
 	if (uart_console(uport)) {
 		console_stop(uport->cons);
+		disable_irq(uport->irq);
 	} else {
 		msm_geni_serial_power_on(uport);
 		wait_for_transfers_inflight(uport);
@@ -2178,6 +2180,16 @@
 	 */
 	mb();
 
+	/*
+	 * The console usecase requires the IRQ to be enabled after the early
+	 * console switch from probe, to handle RX data. Hence enable the IRQ
+	 * in startup() and disable it in shutdown() for the console case.
+	 * For the BT HSUART usecase, the IRQ is enabled from runtime_resume()
+	 * and disabled in runtime_suspend() to avoid spurious interrupts
+	 * after suspend.
+	 */
+	if (uart_console(uport))
+		enable_irq(uport->irq);
+
 	if (msm_port->wakeup_irq > 0) {
 		ret = request_irq(msm_port->wakeup_irq, msm_geni_wakeup_isr,
 				IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
@@ -3152,6 +3164,7 @@
 
 	dev_port->name = devm_kasprintf(uport->dev, GFP_KERNEL,
 					"msm_serial_geni%d", uport->line);
+	irq_set_status_flags(uport->irq, IRQ_NOAUTOEN);
 	ret = devm_request_irq(uport->dev, uport->irq, msm_geni_serial_isr,
 				IRQF_TRIGGER_HIGH, dev_port->name, uport);
 	if (ret) {
@@ -3159,12 +3172,6 @@
 							__func__, ret);
 		goto exit_geni_serial_probe;
 	}
-	/*
-	 * Console usecase requires irq to be in enable state to handle RX data.
-	 * disable irq only for HSUART case from here.
-	 */
-	if (!is_console)
-		disable_irq(dev_port->uport.irq);
 
 	uport->private_data = (void *)drv;
 	platform_set_drvdata(pdev, dev_port);
@@ -3188,11 +3195,6 @@
 
 	dev_info(&pdev->dev, "Serial port%d added.FifoSize %d is_console%d\n",
 				line, uport->fifosize, is_console);
-	/*
-	 * We are using this spinlock before the serial layer initialises it.
-	 * Hence, we are initializing it.
-	 */
-	spin_lock_init(&uport->lock);
 
 	device_create_file(uport->dev, &dev_attr_loopback);
 	device_create_file(uport->dev, &dev_attr_xfer_mode);
diff --git a/drivers/usb/dwc3/dwc3-msm.c b/drivers/usb/dwc3/dwc3-msm.c
index d872a5f..fff99dd 100644
--- a/drivers/usb/dwc3/dwc3-msm.c
+++ b/drivers/usb/dwc3/dwc3-msm.c
@@ -311,7 +311,9 @@
 	bool			in_device_mode;
 	enum usb_device_speed	max_rh_port_speed;
 	unsigned int		tx_fifo_size;
+	bool			check_eud_state;
 	bool			vbus_active;
+	bool			eud_active;
 	bool			suspend;
 	bool			use_pdc_interrupts;
 	enum dwc3_id_state	id_state;
@@ -691,6 +693,7 @@
 	memset(trb, 0, sizeof(*trb));
 
 	req->trb = trb;
+	req->num_trbs++;
 	trb->bph = DBM_TRB_BIT | DBM_TRB_DMA | DBM_TRB_EP_NUM(dep->number);
 	trb->size = DWC3_TRB_SIZE_LENGTH(req->request.length);
 	trb->ctrl = DWC3_TRBCTL_NORMAL | DWC3_TRB_CTRL_HWO |
@@ -2171,6 +2174,9 @@
 		break;
 	case DWC3_CONTROLLER_NOTIFY_CLEAR_DB:
 		dev_dbg(mdwc->dev, "DWC3_CONTROLLER_NOTIFY_CLEAR_DB\n");
+		if (!mdwc->gsi_ev_buff)
+			break;
+
 		dwc3_msm_write_reg_field(mdwc->base,
 			GSI_GENERAL_CFG_REG(mdwc->gsi_reg),
 			BLOCK_GSI_WR_GO_MASK, true);
@@ -2855,8 +2861,13 @@
 	}
 
 	if (mdwc->vbus_active && !mdwc->in_restart) {
-		dev_dbg(mdwc->dev, "XCVR: BSV set\n");
-		set_bit(B_SESS_VLD, &mdwc->inputs);
+		if (mdwc->hs_phy->flags & EUD_SPOOF_DISCONNECT) {
+			dev_dbg(mdwc->dev, "XCVR:EUD: BSV clear\n");
+			clear_bit(B_SESS_VLD, &mdwc->inputs);
+		} else {
+			dev_dbg(mdwc->dev, "XCVR: BSV set\n");
+			set_bit(B_SESS_VLD, &mdwc->inputs);
+		}
 	} else {
 		dev_dbg(mdwc->dev, "XCVR: BSV clear\n");
 		clear_bit(B_SESS_VLD, &mdwc->inputs);
@@ -2870,6 +2881,39 @@
 		clear_bit(B_SUSPEND, &mdwc->inputs);
 	}
 
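+	/*
+	 * EUD (Embedded USB Debugger) spoofed connect/disconnect: mirror the
+	 * EUD state onto the B_SESS_VLD input so the state machine sees a
+	 * (dis)connect and re-runs.
+	 */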
+	if (mdwc->check_eud_state) {
+		mdwc->hs_phy->flags &=
+			~(EUD_SPOOF_CONNECT | EUD_SPOOF_DISCONNECT);
+		dev_dbg(mdwc->dev, "eud: state:%d active:%d hs_phy_flags:0x%x\n",
+			mdwc->check_eud_state, mdwc->eud_active,
+			mdwc->hs_phy->flags);
+		if (mdwc->eud_active) {
+			mdwc->hs_phy->flags |= EUD_SPOOF_CONNECT;
+			dev_dbg(mdwc->dev, "EUD: XCVR: BSV set\n");
+			set_bit(B_SESS_VLD, &mdwc->inputs);
+		} else {
+			mdwc->hs_phy->flags |= EUD_SPOOF_DISCONNECT;
+			dev_dbg(mdwc->dev, "EUD: XCVR: BSV clear\n");
+			clear_bit(B_SESS_VLD, &mdwc->inputs);
+		}
+
+		mdwc->check_eud_state = false;
+	}
+
+	dev_dbg(mdwc->dev, "eud: state:%d active:%d hs_phy_flags:0x%x\n",
+		mdwc->check_eud_state, mdwc->eud_active, mdwc->hs_phy->flags);
+
+	/* handle case of USB cable disconnect after USB spoof disconnect */
+	if (!mdwc->vbus_active &&
+			(mdwc->hs_phy->flags & EUD_SPOOF_DISCONNECT)) {
+		mdwc->hs_phy->flags &= ~EUD_SPOOF_DISCONNECT;
+		mdwc->hs_phy->flags |= PHY_SUS_OVERRIDE;
+		usb_phy_set_suspend(mdwc->hs_phy, 1);
+		mdwc->hs_phy->flags &= ~PHY_SUS_OVERRIDE;
+		return;
+	}
+
 	queue_delayed_work(mdwc->sm_usb_wq, &mdwc->sm_work, 0);
 }
 
@@ -3235,6 +3279,8 @@
 	struct extcon_dev *edev = ptr;
 	struct extcon_nb *enb = container_of(nb, struct extcon_nb, vbus_nb);
 	struct dwc3_msm *mdwc = enb->mdwc;
+	char *eud_str;
+	const char *edev_name;
 
 	if (!edev || !mdwc)
 		return NOTIFY_DONE;
@@ -3242,15 +3288,22 @@
 	dwc = platform_get_drvdata(mdwc->dwc3);
 
 	dbg_event(0xFF, "extcon idx", enb->idx);
-
-	if (mdwc->vbus_active == event)
-		return NOTIFY_DONE;
-
-	mdwc->ext_idx = enb->idx;
-
 	dev_dbg(mdwc->dev, "vbus:%ld event received\n", event);
+	edev_name = extcon_get_edev_name(edev);
+	dbg_log_string("edev:%s\n", edev_name);
 
-	mdwc->vbus_active = event;
+	/* detect USB spoof disconnect/connect notification with EUD device */
+	eud_str = strnstr(edev_name, "eud", strlen(edev_name));
+	if (eud_str) {
+		if (mdwc->eud_active == event)
+			return NOTIFY_DONE;
+		mdwc->eud_active = event;
+		mdwc->check_eud_state = true;
+	} else {
+		if (mdwc->vbus_active == event)
+			return NOTIFY_DONE;
+		mdwc->vbus_active = event;
+	}
 
 	if (get_psy_type(mdwc) == POWER_SUPPLY_TYPE_USB_CDP &&
 			mdwc->vbus_active) {
diff --git a/drivers/usb/gadget/function/f_qdss.c b/drivers/usb/gadget/function/f_qdss.c
index 2bb20476..e5c179b 100644
--- a/drivers/usb/gadget/function/f_qdss.c
+++ b/drivers/usb/gadget/function/f_qdss.c
@@ -219,7 +219,8 @@
 	struct usb_request *req)
 {
 	struct f_qdss *qdss = ep->driver_data;
-	struct qdss_request *d_req = req->context;
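+	/* req->context now carries the struct qdss_req wrapper */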
+	struct qdss_req *qreq = req->context;
+	struct qdss_request *d_req = qreq->qdss_req;
 	struct usb_ep *in;
 	struct list_head *list_pool;
 	enum qdss_state state;
@@ -239,9 +240,9 @@
 
 	spin_lock_irqsave(&qdss->lock, flags);
 	if (!qdss->debug_inface_enabled)
-		list_del(&req->list);
-	list_add_tail(&req->list, list_pool);
-	complete(&d_req->write_done);
+		list_del(&qreq->list);
+	list_add_tail(&qreq->list, list_pool);
+	complete(&qreq->write_done);
 	if (req->length != 0) {
 		d_req->actual = req->actual;
 		d_req->status = req->status;
@@ -252,32 +253,11 @@
 		qdss->ch.notify(qdss->ch.priv, state, d_req, NULL);
 }
 
-static void qdss_ctrl_read_complete(struct usb_ep *ep,
-	struct usb_request *req)
-{
-	struct f_qdss *qdss = ep->driver_data;
-	struct qdss_request *d_req = req->context;
-	unsigned long flags;
-
-	qdss_log("%s\n", __func__);
-
-	d_req->actual = req->actual;
-	d_req->status = req->status;
-
-	spin_lock_irqsave(&qdss->lock, flags);
-	list_add_tail(&req->list, &qdss->ctrl_read_pool);
-	spin_unlock_irqrestore(&qdss->lock, flags);
-
-	if (qdss->ch.notify)
-		qdss->ch.notify(qdss->ch.priv, USB_QDSS_CTRL_READ_DONE, d_req,
-			NULL);
-}
-
 void usb_qdss_free_req(struct usb_qdss_ch *ch)
 {
 	struct f_qdss *qdss;
-	struct usb_request *req;
 	struct list_head *act, *tmp;
+	struct qdss_req *qreq;
 
 	qdss = ch->priv_usb;
 	if (!qdss) {
@@ -288,33 +268,31 @@
 	qdss_log("%s: channel name = %s\n", __func__, qdss->ch.name);
 
 	list_for_each_safe(act, tmp, &qdss->data_write_pool) {
-		req = list_entry(act, struct usb_request, list);
-		list_del(&req->list);
-		usb_ep_free_request(qdss->port.data, req);
+		qreq = list_entry(act, struct qdss_req, list);
+		list_del(&qreq->list);
+		usb_ep_free_request(qdss->port.data, qreq->usb_req);
+		kfree(qreq);
 	}
 
 	list_for_each_safe(act, tmp, &qdss->ctrl_write_pool) {
-		req = list_entry(act, struct usb_request, list);
-		list_del(&req->list);
-		usb_ep_free_request(qdss->port.ctrl_in, req);
-	}
+		qreq = list_entry(act, struct qdss_req, list);
+		list_del(&qreq->list);
+		usb_ep_free_request(qdss->port.ctrl_in, qreq->usb_req);
+		kfree(qreq);
 
-	list_for_each_safe(act, tmp, &qdss->ctrl_read_pool) {
-		req = list_entry(act, struct usb_request, list);
-		list_del(&req->list);
-		usb_ep_free_request(qdss->port.ctrl_out, req);
 	}
 }
 EXPORT_SYMBOL(usb_qdss_free_req);
 
-int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int no_write_buf,
-	int no_read_buf)
+int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int no_write_buf)
 {
 	struct f_qdss *qdss = ch->priv_usb;
 	struct usb_request *req;
 	struct usb_ep *in;
 	struct list_head *list_pool;
 	int i;
+	struct qdss_req *qreq;
 
 	qdss_log("%s\n", __func__);
 
@@ -323,10 +301,8 @@
 		return -ENODEV;
 	}
 
-	if ((qdss->debug_inface_enabled &&
-		(no_write_buf <= 0 || no_read_buf <= 0)) ||
-		(!qdss->debug_inface_enabled &&
-		(no_write_buf <= 0 || no_read_buf))) {
+	if (no_write_buf <= 0) {
 		pr_err("%s: missing params\n", __func__);
 		return -ENODEV;
 	}
@@ -340,23 +316,17 @@
 	}
 
 	for (i = 0; i < no_write_buf; i++) {
+		qreq = kzalloc(sizeof(struct qdss_req), GFP_KERNEL);
+		if (!qreq)
+			goto fail;
 		req = usb_ep_alloc_request(in, GFP_ATOMIC);
 		if (!req) {
 			pr_err("%s: ctrl_in allocation err\n", __func__);
 			goto fail;
 		}
+		qreq->usb_req = req;
+		req->context = qreq;
 		req->complete = qdss_write_complete;
-		list_add_tail(&req->list, list_pool);
-	}
-
-	for (i = 0; i < no_read_buf; i++) {
-		req = usb_ep_alloc_request(qdss->port.ctrl_out, GFP_ATOMIC);
-		if (!req) {
-			pr_err("%s: ctrl_out allocation err\n", __func__);
-			goto fail;
-		}
-		req->complete = qdss_ctrl_read_complete;
-		list_add_tail(&req->list, &qdss->ctrl_read_pool);
+		list_add_tail(&qreq->list, list_pool);
+		init_completion(&qreq->write_done);
 	}
 
 	return 0;
@@ -810,7 +780,6 @@
 	spin_unlock_irqrestore(&qdss_lock, flags);
 
 	spin_lock_init(&qdss->lock);
-	INIT_LIST_HEAD(&qdss->ctrl_read_pool);
 	INIT_LIST_HEAD(&qdss->ctrl_write_pool);
 	INIT_LIST_HEAD(&qdss->data_write_pool);
 	INIT_LIST_HEAD(&qdss->queued_data_pool);
@@ -820,56 +789,12 @@
 	return qdss;
 }
 
-int usb_qdss_ctrl_read(struct usb_qdss_ch *ch, struct qdss_request *d_req)
-{
-	struct f_qdss *qdss = ch->priv_usb;
-	unsigned long flags;
-	struct usb_request *req = NULL;
-
-	qdss_log("%s\n", __func__);
-
-	if (!qdss)
-		return -ENODEV;
-
-	spin_lock_irqsave(&qdss->lock, flags);
-
-	if (qdss->usb_connected == 0) {
-		spin_unlock_irqrestore(&qdss->lock, flags);
-		return -EIO;
-	}
-
-	if (list_empty(&qdss->ctrl_read_pool)) {
-		spin_unlock_irqrestore(&qdss->lock, flags);
-		pr_err("error: %s list is empty\n", __func__);
-		return -EAGAIN;
-	}
-
-	req = list_first_entry(&qdss->ctrl_read_pool, struct usb_request, list);
-	list_del(&req->list);
-	spin_unlock_irqrestore(&qdss->lock, flags);
-
-	req->buf = d_req->buf;
-	req->length = d_req->length;
-	req->context = d_req;
-
-	if (usb_ep_queue(qdss->port.ctrl_out, req, GFP_ATOMIC)) {
-		/* If error add the link to linked list again*/
-		spin_lock_irqsave(&qdss->lock, flags);
-		list_add_tail(&req->list, &qdss->ctrl_read_pool);
-		spin_unlock_irqrestore(&qdss->lock, flags);
-		pr_err("qdss usb_ep_queue failed\n");
-		return -EIO;
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL(usb_qdss_ctrl_read);
-
 int usb_qdss_ctrl_write(struct usb_qdss_ch *ch, struct qdss_request *d_req)
 {
 	struct f_qdss *qdss = ch->priv_usb;
 	unsigned long flags;
 	struct usb_request *req = NULL;
+	struct qdss_req *qreq;
 
 	qdss_log("%s\n", __func__);
 
@@ -889,17 +814,18 @@
 		return -EAGAIN;
 	}
 
-	req = list_first_entry(&qdss->ctrl_write_pool, struct usb_request,
+	qreq = list_first_entry(&qdss->ctrl_write_pool, struct qdss_req,
 		list);
-	list_del(&req->list);
+	list_del(&qreq->list);
 	spin_unlock_irqrestore(&qdss->lock, flags);
 
+	qreq->qdss_req = d_req;
+	req = qreq->usb_req;
 	req->buf = d_req->buf;
 	req->length = d_req->length;
-	req->context = d_req;
 	if (usb_ep_queue(qdss->port.ctrl_in, req, GFP_ATOMIC)) {
 		spin_lock_irqsave(&qdss->lock, flags);
-		list_add_tail(&req->list, &qdss->ctrl_write_pool);
+		list_add_tail(&qreq->list, &qdss->ctrl_write_pool);
 		spin_unlock_irqrestore(&qdss->lock, flags);
 		pr_err("%s usb_ep_queue failed\n", __func__);
 		return -EIO;
@@ -914,6 +840,7 @@
 	struct f_qdss *qdss = ch->priv_usb;
 	unsigned long flags;
 	struct usb_request *req = NULL;
+	struct qdss_req *qreq;
 
 	qdss_log("usb_qdss_data_write\n");
 
@@ -933,23 +860,24 @@
 		return -EAGAIN;
 	}
 
-	req = list_first_entry(&qdss->data_write_pool, struct usb_request,
+	qreq = list_first_entry(&qdss->data_write_pool, struct qdss_req,
 		list);
-	list_move_tail(&req->list, &qdss->queued_data_pool);
+	list_move_tail(&qreq->list, &qdss->queued_data_pool);
 	spin_unlock_irqrestore(&qdss->lock, flags);
 
+	qreq->qdss_req = d_req;
+	req = qreq->usb_req;
 	req->buf = d_req->buf;
 	req->length = d_req->length;
-	req->context = d_req;
 	req->sg = d_req->sg;
 	req->num_sgs = d_req->num_sgs;
 	req->num_mapped_sgs = d_req->num_mapped_sgs;
-	reinit_completion(&d_req->write_done);
+	reinit_completion(&qreq->write_done);
 	if (usb_ep_queue(qdss->port.data, req, GFP_ATOMIC)) {
 		spin_lock_irqsave(&qdss->lock, flags);
 		/* Remove from queued pool and add back to data pool */
-		list_move_tail(&req->list, &qdss->data_write_pool);
-		complete(&d_req->write_done);
+		list_move_tail(&qreq->list, &qdss->data_write_pool);
+		complete(&qreq->write_done);
 		spin_unlock_irqrestore(&qdss->lock, flags);
 		pr_err("qdss usb_ep_queue failed\n");
 		return -EIO;
@@ -1013,8 +941,7 @@
 	struct usb_gadget *gadget;
 	unsigned long flags;
 	int status;
-	struct usb_request *req;
-	struct qdss_request *d_req;
+	struct qdss_req *qreq;
 
 	qdss_log("%s\n", __func__);
 
@@ -1023,17 +950,17 @@
 		goto close;
 	qdss->qdss_close = true;
 	while (!list_empty(&qdss->queued_data_pool)) {
-		req = list_first_entry(&qdss->queued_data_pool,
-				struct usb_request, list);
-		d_req = req->context;
+		qreq = list_first_entry(&qdss->queued_data_pool,
+				struct qdss_req, list);
 		spin_unlock_irqrestore(&qdss_lock, flags);
-		usb_ep_dequeue(qdss->port.data, req);
-		wait_for_completion(&d_req->write_done);
+		usb_ep_dequeue(qdss->port.data, qreq->usb_req);
+		wait_for_completion(&qreq->write_done);
 		spin_lock_irqsave(&qdss_lock, flags);
 	}
 	usb_qdss_free_req(ch);
 close:
 	ch->priv_usb = NULL;
+	ch->notify = NULL;
 	if (!qdss || !qdss->usb_connected ||
 			!strcmp(qdss->ch.name, USB_QDSS_CH_MDM)) {
 		ch->app_conn = 0;
diff --git a/drivers/usb/gadget/function/f_qdss.h b/drivers/usb/gadget/function/f_qdss.h
index 28fee89..50b2f2d 100644
--- a/drivers/usb/gadget/function/f_qdss.h
+++ b/drivers/usb/gadget/function/f_qdss.h
@@ -50,7 +50,6 @@
 	bool debug_inface_enabled;
 	struct usb_request *endless_req;
 	struct usb_qdss_ch ch;
-	struct list_head ctrl_read_pool;
 	struct list_head ctrl_write_pool;
 
 	/* for mdm channel SW path */
diff --git a/drivers/usb/phy/phy-msm-qusb-v2.c b/drivers/usb/phy/phy-msm-qusb-v2.c
index dcc1fed..0171a2a 100644
--- a/drivers/usb/phy/phy-msm-qusb-v2.c
+++ b/drivers/usb/phy/phy-msm-qusb-v2.c
@@ -650,11 +650,15 @@
 	u32 linestate = 0, intr_mask = 0;
 
 	if (qphy->suspended == suspend) {
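+		/*
+		 * PHY_SUS_OVERRIDE is set when a cable disconnect follows an
+		 * EUD spoof disconnect; honor it here so the PHY is still
+		 * put into low power mode (same handling as phy-msm-qusb.c).
+		 */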
+		if (qphy->phy.flags & PHY_SUS_OVERRIDE)
+			goto suspend;
+
 		dev_dbg(phy->dev, "%s: USB PHY is already suspended\n",
-			__func__);
+								__func__);
 		return 0;
 	}
 
+suspend:
 	if (suspend) {
 		/* Bus suspend case */
 		if (qphy->cable_connected) {
@@ -699,11 +703,16 @@
 			qusb_phy_enable_clocks(qphy, false);
 		} else { /* Cable disconnect case */
 			/* Disable all interrupts */
-			writel_relaxed(0x00,
-				qphy->base + qphy->phy_reg[INTR_CTRL]);
-			qusb_phy_reset(qphy);
-			qusb_phy_enable_clocks(qphy, false);
-			qusb_phy_disable_power(qphy);
+			dev_dbg(phy->dev, "%s: phy->flags:0x%x\n",
+				__func__, qphy->phy.flags);
+			if (!(qphy->phy.flags & EUD_SPOOF_DISCONNECT)) {
+				dev_dbg(phy->dev, "turning off clocks/ldo\n");
+				writel_relaxed(0x00,
+					qphy->base + qphy->phy_reg[INTR_CTRL]);
+				qusb_phy_reset(qphy);
+				qusb_phy_enable_clocks(qphy, false);
+				qusb_phy_disable_power(qphy);
+			}
 		}
 		qphy->suspended = true;
 	} else {
diff --git a/drivers/usb/phy/phy-msm-qusb.c b/drivers/usb/phy/phy-msm-qusb.c
index 5216bc7..8e73f4e 100644
--- a/drivers/usb/phy/phy-msm-qusb.c
+++ b/drivers/usb/phy/phy-msm-qusb.c
@@ -640,11 +640,22 @@
 	u32 linestate = 0, intr_mask = 0;
 
 	if (qphy->suspended == suspend) {
+		/*
+		 * PHY_SUS_OVERRIDE is set when there is a cable
+		 * disconnect and the previous suspend call was because
+		 * of EUD spoof disconnect. Override this check and
+		 * ensure that the PHY is properly put in low power
+		 * mode.
+		 */
+		if (qphy->phy.flags & PHY_SUS_OVERRIDE)
+			goto suspend;
+
 		dev_dbg(phy->dev, "%s: USB PHY is already suspended\n",
 			__func__);
 		return 0;
 	}
 
+suspend:
 	if (suspend) {
 		/* Bus suspend case */
 		if (qphy->cable_connected) {
@@ -697,8 +708,7 @@
 			writel_relaxed(0x00,
 				qphy->base + QUSB2PHY_PORT_INTR_CTRL);
 
-			if (!qphy->eud_enable_reg ||
-					!readl_relaxed(qphy->eud_enable_reg)) {
+			if (!(qphy->phy.flags & EUD_SPOOF_DISCONNECT)) {
 				/* Disable PHY */
 				writel_relaxed(POWER_DOWN |
 					readl_relaxed(qphy->base +
@@ -710,10 +720,11 @@
 				if (qphy->tcsr_clamp_dig_n)
 					writel_relaxed(0x0,
 						qphy->tcsr_clamp_dig_n);
+
+				qusb_phy_enable_clocks(qphy, false);
+				qusb_phy_enable_power(qphy, false);
 			}
 
-			qusb_phy_enable_clocks(qphy, false);
-			qusb_phy_enable_power(qphy, false);
 			mutex_unlock(&qphy->phy_lock);
 
 			/*
@@ -1695,6 +1706,14 @@
 
 	qphy->suspended = true;
 
+	/*
+	 * EUD may be enabled in the boot loader. To keep the EUD session
+	 * alive across kernel boot until the USB PHY driver initializes
+	 * based on cable status, keep the LDOs on here.
+	 */
+	if (qphy->eud_enable_reg && readl_relaxed(qphy->eud_enable_reg))
+		qusb_phy_enable_power(qphy, true);
+
 	if (of_property_read_bool(dev->of_node, "extcon")) {
 		qphy->id_state = true;
 		qphy->vbus_active = false;
diff --git a/fs/buffer.c b/fs/buffer.c
index e08639c..5583977 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -46,6 +46,7 @@
 #include <linux/pagevec.h>
 #include <linux/sched/mm.h>
 #include <trace/events/block.h>
+#include <linux/fscrypt.h>
 
 static int fsync_buffers_list(spinlock_t *lock, struct list_head *list);
 static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
@@ -3104,6 +3105,8 @@
 	 */
 	bio = bio_alloc(GFP_NOIO, 1);
 
+	fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
+
 	if (wbc) {
 		wbc_init_bio(wbc, bio);
 		wbc_account_io(wbc, bh->b_page, bh->b_size);
diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
index 4f7235e..0701bb9 100644
--- a/fs/crypto/Kconfig
+++ b/fs/crypto/Kconfig
@@ -6,6 +6,8 @@
 	select CRYPTO_ECB
 	select CRYPTO_XTS
 	select CRYPTO_CTS
+	select CRYPTO_SHA512
+	select CRYPTO_HMAC
 	select KEYS
 	help
 	  Enable encryption of files and directories.  This
@@ -13,3 +15,9 @@
 	  efficient since it avoids caching the encrypted and
 	  decrypted pages in the page cache.  Currently Ext4,
 	  F2FS and UBIFS make use of this feature.
+
+config FS_ENCRYPTION_INLINE_CRYPT
+	bool "Enable fscrypt to use inline crypto"
+	depends on FS_ENCRYPTION && BLK_INLINE_ENCRYPTION
+	help
+	  Enable fscrypt to use inline encryption hardware if available.
diff --git a/fs/crypto/Makefile b/fs/crypto/Makefile
index b0ca0e6..1a6b077 100644
--- a/fs/crypto/Makefile
+++ b/fs/crypto/Makefile
@@ -1,8 +1,13 @@
 obj-$(CONFIG_FS_ENCRYPTION)	+= fscrypto.o
 
-ccflags-y += -Ifs/ext4
-ccflags-y += -Ifs/f2fs
+fscrypto-y := crypto.o \
+	      fname.o \
+	      hkdf.o \
+	      hooks.o \
+	      keyring.o \
+	      keysetup.o \
+	      keysetup_v1.o \
+	      policy.o
 
-fscrypto-y := crypto.o fname.o hooks.o keyinfo.o policy.o
 fscrypto-$(CONFIG_BLOCK) += bio.o
-fscrypto-$(CONFIG_PFK) += fscrypt_ice.o
+fscrypto-$(CONFIG_FS_ENCRYPTION_INLINE_CRYPT) += inline_crypt.o
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index b871f7d..6927578 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -26,81 +26,59 @@
 #include <linux/namei.h>
 #include "fscrypt_private.h"
 
-static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
+void fscrypt_decrypt_bio(struct bio *bio)
 {
 	struct bio_vec *bv;
 	int i;
 
 	bio_for_each_segment_all(bv, bio, i) {
 		struct page *page = bv->bv_page;
-		if (fscrypt_using_hardware_encryption(page->mapping->host)) {
-			SetPageUptodate(page);
-		} else {
-			int ret = fscrypt_decrypt_pagecache_blocks(page,
-								   bv->bv_len,
-								   bv->bv_offset);
-			if (ret)
-				SetPageError(page);
-			else if (done)
-				SetPageUptodate(page);
-		}
-		if (done)
-			unlock_page(page);
+		int ret = fscrypt_decrypt_pagecache_blocks(page,
+							   bv->bv_len,
+							   bv->bv_offset);
+		if (ret)
+			SetPageError(page);
 	}
 }
-
-void fscrypt_decrypt_bio(struct bio *bio)
-{
-	__fscrypt_decrypt_bio(bio, false);
-}
 EXPORT_SYMBOL(fscrypt_decrypt_bio);
 
-static void completion_pages(struct work_struct *work)
-{
-	struct fscrypt_ctx *ctx = container_of(work, struct fscrypt_ctx, work);
-	struct bio *bio = ctx->bio;
-
-	__fscrypt_decrypt_bio(bio, true);
-	fscrypt_release_ctx(ctx);
-	bio_put(bio);
-}
-
-void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx, struct bio *bio)
-{
-	INIT_WORK(&ctx->work, completion_pages);
-	ctx->bio = bio;
-	fscrypt_enqueue_decrypt_work(&ctx->work);
-}
-EXPORT_SYMBOL(fscrypt_enqueue_decrypt_bio);
-
 int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 				sector_t pblk, unsigned int len)
 {
 	const unsigned int blockbits = inode->i_blkbits;
 	const unsigned int blocksize = 1 << blockbits;
+	const bool inlinecrypt = fscrypt_inode_uses_inline_crypto(inode);
 	struct page *ciphertext_page;
 	struct bio *bio;
 	int ret, err = 0;
 
-	ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT);
-	if (!ciphertext_page)
-		return -ENOMEM;
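+	/*
+	 * With inline encryption the hardware encrypts the data in flight,
+	 * so the shared zero page can be submitted as-is; otherwise the
+	 * zeroes must first be encrypted into a bounce page.
+	 */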
+	if (inlinecrypt) {
+		ciphertext_page = ZERO_PAGE(0);
+	} else {
+		ciphertext_page = fscrypt_alloc_bounce_page(GFP_NOWAIT);
+		if (!ciphertext_page)
+			return -ENOMEM;
+	}
 
 	while (len--) {
-		err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
-					  ZERO_PAGE(0), ciphertext_page,
-					  blocksize, 0, GFP_NOFS);
-		if (err)
-			goto errout;
+		if (!inlinecrypt) {
+			err = fscrypt_crypt_block(inode, FS_ENCRYPT, lblk,
+						  ZERO_PAGE(0), ciphertext_page,
+						  blocksize, 0, GFP_NOFS);
+			if (err)
+				goto errout;
+		}
 
 		bio = bio_alloc(GFP_NOWAIT, 1);
 		if (!bio) {
 			err = -ENOMEM;
 			goto errout;
 		}
+		fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOIO);
+
 		bio_set_dev(bio, inode->i_sb->s_bdev);
 		bio->bi_iter.bi_sector = pblk << (blockbits - 9);
-		bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_NOENCRYPT);
+		bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
 		ret = bio_add_page(bio, ciphertext_page, blocksize, 0);
 		if (WARN_ON(ret != blocksize)) {
 			/* should never happen! */
@@ -119,7 +97,8 @@
 	}
 	err = 0;
 errout:
-	fscrypt_free_bounce_page(ciphertext_page);
+	if (!inlinecrypt)
+		fscrypt_free_bounce_page(ciphertext_page);
 	return err;
 }
 EXPORT_SYMBOL(fscrypt_zeroout_range);
diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index dcf630d..05ba4ff 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -26,29 +26,20 @@
 #include <linux/ratelimit.h>
 #include <linux/dcache.h>
 #include <linux/namei.h>
-#include <crypto/aes.h>
 #include <crypto/skcipher.h>
 #include "fscrypt_private.h"
 
 static unsigned int num_prealloc_crypto_pages = 32;
-static unsigned int num_prealloc_crypto_ctxs = 128;
 
 module_param(num_prealloc_crypto_pages, uint, 0444);
 MODULE_PARM_DESC(num_prealloc_crypto_pages,
 		"Number of crypto pages to preallocate");
-module_param(num_prealloc_crypto_ctxs, uint, 0444);
-MODULE_PARM_DESC(num_prealloc_crypto_ctxs,
-		"Number of crypto contexts to preallocate");
 
 static mempool_t *fscrypt_bounce_page_pool = NULL;
 
-static LIST_HEAD(fscrypt_free_ctxs);
-static DEFINE_SPINLOCK(fscrypt_ctx_lock);
-
 static struct workqueue_struct *fscrypt_read_workqueue;
 static DEFINE_MUTEX(fscrypt_init_mutex);
 
-static struct kmem_cache *fscrypt_ctx_cachep;
 struct kmem_cache *fscrypt_info_cachep;
 
 void fscrypt_enqueue_decrypt_work(struct work_struct *work)
@@ -57,62 +48,6 @@
 }
 EXPORT_SYMBOL(fscrypt_enqueue_decrypt_work);
 
-/**
- * fscrypt_release_ctx() - Release a decryption context
- * @ctx: The decryption context to release.
- *
- * If the decryption context was allocated from the pre-allocated pool, return
- * it to that pool.  Else, free it.
- */
-void fscrypt_release_ctx(struct fscrypt_ctx *ctx)
-{
-	unsigned long flags;
-
-	if (ctx->flags & FS_CTX_REQUIRES_FREE_ENCRYPT_FL) {
-		kmem_cache_free(fscrypt_ctx_cachep, ctx);
-	} else {
-		spin_lock_irqsave(&fscrypt_ctx_lock, flags);
-		list_add(&ctx->free_list, &fscrypt_free_ctxs);
-		spin_unlock_irqrestore(&fscrypt_ctx_lock, flags);
-	}
-}
-EXPORT_SYMBOL(fscrypt_release_ctx);
-
-/**
- * fscrypt_get_ctx() - Get a decryption context
- * @gfp_flags:   The gfp flag for memory allocation
- *
- * Allocate and initialize a decryption context.
- *
- * Return: A new decryption context on success; an ERR_PTR() otherwise.
- */
-struct fscrypt_ctx *fscrypt_get_ctx(gfp_t gfp_flags)
-{
-	struct fscrypt_ctx *ctx;
-	unsigned long flags;
-
-	/*
-	 * First try getting a ctx from the free list so that we don't have to
-	 * call into the slab allocator.
-	 */
-	spin_lock_irqsave(&fscrypt_ctx_lock, flags);
-	ctx = list_first_entry_or_null(&fscrypt_free_ctxs,
-					struct fscrypt_ctx, free_list);
-	if (ctx)
-		list_del(&ctx->free_list);
-	spin_unlock_irqrestore(&fscrypt_ctx_lock, flags);
-	if (!ctx) {
-		ctx = kmem_cache_zalloc(fscrypt_ctx_cachep, gfp_flags);
-		if (!ctx)
-			return ERR_PTR(-ENOMEM);
-		ctx->flags |= FS_CTX_REQUIRES_FREE_ENCRYPT_FL;
-	} else {
-		ctx->flags &= ~FS_CTX_REQUIRES_FREE_ENCRYPT_FL;
-	}
-	return ctx;
-}
-EXPORT_SYMBOL(fscrypt_get_ctx);
-
 struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags)
 {
 	return mempool_alloc(fscrypt_bounce_page_pool, gfp_flags);
@@ -137,14 +72,24 @@
 void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
 			 const struct fscrypt_info *ci)
 {
+	u8 flags = fscrypt_policy_flags(&ci->ci_policy);
+	bool inlinecrypt = false;
+
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+	inlinecrypt = ci->ci_inlinecrypt;
+#endif
 	memset(iv, 0, ci->ci_mode->ivsize);
-	iv->lblk_num = cpu_to_le64(lblk_num);
 
-	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY)
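+	/*
+	 * For IV_INO_LBLK_64 (and the inline-crypto "private" mode), the IV
+	 * is the inode number in the upper 32 bits and the block number in
+	 * the lower 32 bits; each inode gets its own DUN space.
+	 */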
+	if (flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 ||
+		((fscrypt_policy_contents_mode(&ci->ci_policy) ==
+		  FSCRYPT_MODE_PRIVATE) && inlinecrypt)) {
+		WARN_ON_ONCE((u32)lblk_num != lblk_num);
+		lblk_num |= (u64)ci->ci_inode->i_ino << 32;
+	} else if (flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
 		memcpy(iv->nonce, ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE);
-
-	if (ci->ci_essiv_tfm != NULL)
-		crypto_cipher_encrypt_one(ci->ci_essiv_tfm, iv->raw, iv->raw);
+	}
+	iv->lblk_num = cpu_to_le64(lblk_num);
 }
 
 /* Encrypt or decrypt a single filesystem block of file contents */
@@ -158,7 +103,7 @@
 	DECLARE_CRYPTO_WAIT(wait);
 	struct scatterlist dst, src;
 	struct fscrypt_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_ctfm;
+	struct crypto_skcipher *tfm = ci->ci_key.tfm;
 	int res = 0;
 
 	if (WARN_ON_ONCE(len <= 0))
@@ -187,10 +132,8 @@
 		res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
 	skcipher_request_free(req);
 	if (res) {
-		fscrypt_err(inode->i_sb,
-			    "%scryption failed for inode %lu, block %llu: %d",
-			    (rw == FS_DECRYPT ? "de" : "en"),
-			    inode->i_ino, lblk_num, res);
+		fscrypt_err(inode, "%scryption failed for block %llu: %d",
+			    (rw == FS_DECRYPT ? "De" : "En"), lblk_num, res);
 		return res;
 	}
 	return 0;
@@ -397,17 +340,6 @@
 	.d_revalidate = fscrypt_d_revalidate,
 };
 
-static void fscrypt_destroy(void)
-{
-	struct fscrypt_ctx *pos, *n;
-
-	list_for_each_entry_safe(pos, n, &fscrypt_free_ctxs, free_list)
-		kmem_cache_free(fscrypt_ctx_cachep, pos);
-	INIT_LIST_HEAD(&fscrypt_free_ctxs);
-	mempool_destroy(fscrypt_bounce_page_pool);
-	fscrypt_bounce_page_pool = NULL;
-}
-
 /**
  * fscrypt_initialize() - allocate major buffers for fs encryption.
  * @cop_flags:  fscrypt operations flags
@@ -415,11 +347,11 @@
  * We only call this when we start accessing encrypted files, since it
  * results in memory getting allocated that wouldn't otherwise be used.
  *
- * Return: Zero on success, non-zero otherwise.
+ * Return: 0 on success; -errno on failure
  */
 int fscrypt_initialize(unsigned int cop_flags)
 {
-	int i, res = -ENOMEM;
+	int err = 0;
 
 	/* No need to allocate a bounce page pool if this FS won't use it. */
 	if (cop_flags & FS_CFLG_OWN_PAGES)
@@ -427,32 +359,21 @@
 
 	mutex_lock(&fscrypt_init_mutex);
 	if (fscrypt_bounce_page_pool)
-		goto already_initialized;
+		goto out_unlock;
 
-	for (i = 0; i < num_prealloc_crypto_ctxs; i++) {
-		struct fscrypt_ctx *ctx;
-
-		ctx = kmem_cache_zalloc(fscrypt_ctx_cachep, GFP_NOFS);
-		if (!ctx)
-			goto fail;
-		list_add(&ctx->free_list, &fscrypt_free_ctxs);
-	}
-
+	err = -ENOMEM;
 	fscrypt_bounce_page_pool =
 		mempool_create_page_pool(num_prealloc_crypto_pages, 0);
 	if (!fscrypt_bounce_page_pool)
-		goto fail;
+		goto out_unlock;
 
-already_initialized:
+	err = 0;
+out_unlock:
 	mutex_unlock(&fscrypt_init_mutex);
-	return 0;
-fail:
-	fscrypt_destroy();
-	mutex_unlock(&fscrypt_init_mutex);
-	return res;
+	return err;
 }
 
-void fscrypt_msg(struct super_block *sb, const char *level,
+void fscrypt_msg(const struct inode *inode, const char *level,
 		 const char *fmt, ...)
 {
 	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
@@ -466,8 +387,9 @@
 	va_start(args, fmt);
 	vaf.fmt = fmt;
 	vaf.va = &args;
-	if (sb)
-		printk("%sfscrypt (%s): %pV\n", level, sb->s_id, &vaf);
+	if (inode)
+		printk("%sfscrypt (%s, inode %lu): %pV\n",
+		       level, inode->i_sb->s_id, inode->i_ino, &vaf);
 	else
 		printk("%sfscrypt: %pV\n", level, &vaf);
 	va_end(args);
@@ -478,6 +400,8 @@
  */
 static int __init fscrypt_init(void)
 {
+	int err = -ENOMEM;
+
 	/*
 	 * Use an unbound workqueue to allow bios to be decrypted in parallel
 	 * even when they happen to complete on the same CPU.  This sacrifices
@@ -492,39 +416,21 @@
 	if (!fscrypt_read_workqueue)
 		goto fail;
 
-	fscrypt_ctx_cachep = KMEM_CACHE(fscrypt_ctx, SLAB_RECLAIM_ACCOUNT);
-	if (!fscrypt_ctx_cachep)
-		goto fail_free_queue;
-
 	fscrypt_info_cachep = KMEM_CACHE(fscrypt_info, SLAB_RECLAIM_ACCOUNT);
 	if (!fscrypt_info_cachep)
-		goto fail_free_ctx;
+		goto fail_free_queue;
+
+	err = fscrypt_init_keyring();
+	if (err)
+		goto fail_free_info;
 
 	return 0;
 
-fail_free_ctx:
-	kmem_cache_destroy(fscrypt_ctx_cachep);
+fail_free_info:
+	kmem_cache_destroy(fscrypt_info_cachep);
 fail_free_queue:
 	destroy_workqueue(fscrypt_read_workqueue);
 fail:
-	return -ENOMEM;
+	return err;
 }
-module_init(fscrypt_init)
-
-/**
- * fscrypt_exit() - Shutdown the fs encryption system
- */
-static void __exit fscrypt_exit(void)
-{
-	fscrypt_destroy();
-
-	if (fscrypt_read_workqueue)
-		destroy_workqueue(fscrypt_read_workqueue);
-	kmem_cache_destroy(fscrypt_ctx_cachep);
-	kmem_cache_destroy(fscrypt_info_cachep);
-
-	fscrypt_essiv_cleanup();
-}
-module_exit(fscrypt_exit);
-
-MODULE_LICENSE("GPL");
+late_initcall(fscrypt_init)
diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
index 00d150f..3aafdda 100644
--- a/fs/crypto/fname.c
+++ b/fs/crypto/fname.c
@@ -40,7 +40,7 @@
 	struct skcipher_request *req = NULL;
 	DECLARE_CRYPTO_WAIT(wait);
 	struct fscrypt_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_ctfm;
+	struct crypto_skcipher *tfm = ci->ci_key.tfm;
 	union fscrypt_iv iv;
 	struct scatterlist sg;
 	int res;
@@ -71,9 +71,7 @@
 	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
 	skcipher_request_free(req);
 	if (res < 0) {
-		fscrypt_err(inode->i_sb,
-			    "Filename encryption failed for inode %lu: %d",
-			    inode->i_ino, res);
+		fscrypt_err(inode, "Filename encryption failed: %d", res);
 		return res;
 	}
 
@@ -95,7 +93,7 @@
 	DECLARE_CRYPTO_WAIT(wait);
 	struct scatterlist src_sg, dst_sg;
 	struct fscrypt_info *ci = inode->i_crypt_info;
-	struct crypto_skcipher *tfm = ci->ci_ctfm;
+	struct crypto_skcipher *tfm = ci->ci_key.tfm;
 	union fscrypt_iv iv;
 	int res;
 
@@ -117,9 +115,7 @@
 	res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
 	skcipher_request_free(req);
 	if (res < 0) {
-		fscrypt_err(inode->i_sb,
-			    "Filename decryption failed for inode %lu: %d",
-			    inode->i_ino, res);
+		fscrypt_err(inode, "Filename decryption failed: %d", res);
 		return res;
 	}
 
@@ -127,44 +123,45 @@
 	return 0;
 }
 
-static const char *lookup_table =
+static const char lookup_table[65] =
 	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";
 
 #define BASE64_CHARS(nbytes)	DIV_ROUND_UP((nbytes) * 4, 3)
 
 /**
- * digest_encode() -
+ * base64_encode() - base64-encode some bytes
  *
- * Encodes the input digest using characters from the set [a-zA-Z0-9_+].
+ * Encodes the input string using characters from the set [A-Za-z0-9+,].
  * The encoded string is roughly 4/3 times the size of the input string.
+ *
+ * Return: length of the encoded string
  */
-static int digest_encode(const char *src, int len, char *dst)
+static int base64_encode(const u8 *src, int len, char *dst)
 {
-	int i = 0, bits = 0, ac = 0;
+	int i, bits = 0, ac = 0;
 	char *cp = dst;
 
-	while (i < len) {
-		ac += (((unsigned char) src[i]) << bits);
+	for (i = 0; i < len; i++) {
+		ac += src[i] << bits;
 		bits += 8;
 		do {
 			*cp++ = lookup_table[ac & 0x3f];
 			ac >>= 6;
 			bits -= 6;
 		} while (bits >= 6);
-		i++;
 	}
 	if (bits)
 		*cp++ = lookup_table[ac & 0x3f];
 	return cp - dst;
 }
 
-static int digest_decode(const char *src, int len, char *dst)
+static int base64_decode(const char *src, int len, u8 *dst)
 {
-	int i = 0, bits = 0, ac = 0;
+	int i, bits = 0, ac = 0;
 	const char *p;
-	char *cp = dst;
+	u8 *cp = dst;
 
-	while (i < len) {
+	for (i = 0; i < len; i++) {
 		p = strchr(lookup_table, src[i]);
 		if (p == NULL || src[i] == 0)
 			return -2;
@@ -175,7 +172,6 @@
 			ac >>= 8;
 			bits -= 8;
 		}
-		i++;
 	}
 	if (ac)
 		return -1;
@@ -185,8 +181,9 @@
 bool fscrypt_fname_encrypted_size(const struct inode *inode, u32 orig_len,
 				  u32 max_len, u32 *encrypted_len_ret)
 {
-	int padding = 4 << (inode->i_crypt_info->ci_flags &
-			    FS_POLICY_FLAGS_PAD_MASK);
+	const struct fscrypt_info *ci = inode->i_crypt_info;
+	int padding = 4 << (fscrypt_policy_flags(&ci->ci_policy) &
+			    FSCRYPT_POLICY_FLAGS_PAD_MASK);
 	u32 encrypted_len;
 
 	if (orig_len > max_len)
@@ -272,7 +269,7 @@
 		return fname_decrypt(inode, iname, oname);
 
 	if (iname->len <= FSCRYPT_FNAME_MAX_UNDIGESTED_SIZE) {
-		oname->len = digest_encode(iname->name, iname->len,
+		oname->len = base64_encode(iname->name, iname->len,
 					   oname->name);
 		return 0;
 	}
@@ -287,7 +284,7 @@
 	       FSCRYPT_FNAME_DIGEST(iname->name, iname->len),
 	       FSCRYPT_FNAME_DIGEST_SIZE);
 	oname->name[0] = '_';
-	oname->len = 1 + digest_encode((const char *)&digested_name,
+	oname->len = 1 + base64_encode((const u8 *)&digested_name,
 				       sizeof(digested_name), oname->name + 1);
 	return 0;
 }
@@ -380,8 +377,8 @@
 	if (fname->crypto_buf.name == NULL)
 		return -ENOMEM;
 
-	ret = digest_decode(iname->name + digested, iname->len - digested,
-				fname->crypto_buf.name);
+	ret = base64_decode(iname->name + digested, iname->len - digested,
+			    fname->crypto_buf.name);
 	if (ret < 0) {
 		ret = -ENOENT;
 		goto errout;
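For a concrete sense of the base64 variant used above: bits are consumed
least-significant-first, each 6-bit group indexes lookup_table, and
BASE64_CHARS(nbytes) = DIV_ROUND_UP(nbytes * 4, 3) bounds the output, so
3 input bytes map to exactly 4 characters.  A minimal userspace sketch of
the same transform — it mirrors base64_encode() above and is not kernel
code:

	#include <stdio.h>

	static const char lookup_table[65] =
		"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";

	/* Same LSB-first 6-bit grouping as base64_encode() above. */
	static int b64_encode(const unsigned char *src, int len, char *dst)
	{
		int i, bits = 0, ac = 0;
		char *cp = dst;

		for (i = 0; i < len; i++) {
			ac += src[i] << bits;
			bits += 8;
			do {
				*cp++ = lookup_table[ac & 0x3f];
				ac >>= 6;
				bits -= 6;
			} while (bits >= 6);
		}
		if (bits)
			*cp++ = lookup_table[ac & 0x3f];
		return cp - dst;
	}

	int main(void)
	{
		char out[8];
		int n = b64_encode((const unsigned char *)"abc", 3, out);

		/* 3 bytes -> 4 chars, per BASE64_CHARS(3) == 4.  The LSB-first
		 * grouping means the output differs from RFC 4648 base64. */
		printf("%.*s (%d chars)\n", n, out, n);
		return 0;
	}

Note that 'ac' never overflows an int: at most 5 leftover bits plus 8 new
bits are pending at any point in the loop.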
diff --git a/fs/crypto/fscrypt_ice.c b/fs/crypto/fscrypt_ice.c
deleted file mode 100644
index 6c88233..0000000
--- a/fs/crypto/fscrypt_ice.c
+++ /dev/null
@@ -1,153 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
- */
-
-#include "fscrypt_ice.h"
-
-int fscrypt_using_hardware_encryption(const struct inode *inode)
-{
-	struct fscrypt_info *ci = inode->i_crypt_info;
-
-	return S_ISREG(inode->i_mode) && ci &&
-		ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE;
-}
-EXPORT_SYMBOL(fscrypt_using_hardware_encryption);
-
-/*
- * Retrieves encryption key from the inode
- */
-char *fscrypt_get_ice_encryption_key(const struct inode *inode)
-{
-	struct fscrypt_info *ci = NULL;
-
-	if (!inode)
-		return NULL;
-
-	ci = inode->i_crypt_info;
-	if (!ci)
-		return NULL;
-
-	return &(ci->ci_raw_key[0]);
-}
-
-/*
- * Retrieves encryption salt from the inode
- */
-char *fscrypt_get_ice_encryption_salt(const struct inode *inode)
-{
-	struct fscrypt_info *ci = NULL;
-
-	if (!inode)
-		return NULL;
-
-	ci = inode->i_crypt_info;
-	if (!ci)
-		return NULL;
-
-	return &(ci->ci_raw_key[fscrypt_get_ice_encryption_key_size(inode)]);
-}
-
-/*
- * returns true if the cipher mode in inode is AES XTS
- */
-int fscrypt_is_aes_xts_cipher(const struct inode *inode)
-{
-	struct fscrypt_info *ci = inode->i_crypt_info;
-
-	if (!ci)
-		return 0;
-
-	return (ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE);
-}
-
-/*
- * returns true if encryption info in both inodes is equal
- */
-bool fscrypt_is_ice_encryption_info_equal(const struct inode *inode1,
-					const struct inode *inode2)
-{
-	char *key1 = NULL;
-	char *key2 = NULL;
-	char *salt1 = NULL;
-	char *salt2 = NULL;
-
-	if (!inode1 || !inode2)
-		return false;
-
-	if (inode1 == inode2)
-		return true;
-
-	/*
-	 * both do not belong to ice, so we don't care, they are equal
-	 * for us
-	 */
-	if (!fscrypt_should_be_processed_by_ice(inode1) &&
-			!fscrypt_should_be_processed_by_ice(inode2))
-		return true;
-
-	/* one belongs to ice, the other does not -> not equal */
-	if (fscrypt_should_be_processed_by_ice(inode1) ^
-			fscrypt_should_be_processed_by_ice(inode2))
-		return false;
-
-	key1 = fscrypt_get_ice_encryption_key(inode1);
-	key2 = fscrypt_get_ice_encryption_key(inode2);
-	salt1 = fscrypt_get_ice_encryption_salt(inode1);
-	salt2 = fscrypt_get_ice_encryption_salt(inode2);
-
-	/* key and salt should not be null by this point */
-	if (!key1 || !key2 || !salt1 || !salt2 ||
-		(fscrypt_get_ice_encryption_key_size(inode1) !=
-		 fscrypt_get_ice_encryption_key_size(inode2)) ||
-		(fscrypt_get_ice_encryption_salt_size(inode1) !=
-		 fscrypt_get_ice_encryption_salt_size(inode2)))
-		return false;
-
-	if ((memcmp(key1, key2,
-			fscrypt_get_ice_encryption_key_size(inode1)) == 0) &&
-		(memcmp(salt1, salt2,
-			fscrypt_get_ice_encryption_salt_size(inode1)) == 0))
-		return true;
-
-	return false;
-}
-
-void fscrypt_set_ice_dun(const struct inode *inode, struct bio *bio, u64 dun)
-{
-	if (fscrypt_should_be_processed_by_ice(inode))
-		bio->bi_iter.bi_dun = dun;
-}
-EXPORT_SYMBOL(fscrypt_set_ice_dun);
-
-void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip)
-{
-#ifdef CONFIG_DM_DEFAULT_KEY
-	bio->bi_crypt_skip = bi_crypt_skip;
-#endif
-}
-EXPORT_SYMBOL(fscrypt_set_ice_skip);
-
-/*
- * This function will be used for filesystem when deciding to merge bios.
- * Basic assumption is, if inline_encryption is set, single bio has to
- * guarantee consecutive LBAs as well as ino|pg->index.
- */
-bool fscrypt_mergeable_bio(struct bio *bio, u64 dun, bool bio_encrypted,
-						int bi_crypt_skip)
-{
-	if (!bio)
-		return true;
-
-#ifdef CONFIG_DM_DEFAULT_KEY
-	if (bi_crypt_skip != bio->bi_crypt_skip)
-		return false;
-#endif
-	/* if both of them are not encrypted, no further check is needed */
-	if (!bio_dun(bio) && !bio_encrypted)
-		return true;
-
-	/* ICE allows only consecutive iv_key stream. */
-	return bio_end_dun(bio) == dun;
-}
-EXPORT_SYMBOL(fscrypt_mergeable_bio);
diff --git a/fs/crypto/fscrypt_ice.h b/fs/crypto/fscrypt_ice.h
deleted file mode 100644
index 84de010..0000000
--- a/fs/crypto/fscrypt_ice.h
+++ /dev/null
@@ -1,99 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _FSCRYPT_ICE_H
-#define _FSCRYPT_ICE_H
-
-#include <linux/blkdev.h>
-#include "fscrypt_private.h"
-
-#if IS_ENABLED(CONFIG_FS_ENCRYPTION)
-static inline bool fscrypt_should_be_processed_by_ice(const struct inode *inode)
-{
-	if (!inode->i_sb->s_cop)
-		return false;
-	if (!inode->i_sb->s_cop->is_encrypted((struct inode *)inode))
-		return false;
-
-	return fscrypt_using_hardware_encryption(inode);
-}
-
-static inline int fscrypt_is_ice_capable(const struct super_block *sb)
-{
-	return blk_queue_inlinecrypt(bdev_get_queue(sb->s_bdev));
-}
-
-int fscrypt_is_aes_xts_cipher(const struct inode *inode);
-
-char *fscrypt_get_ice_encryption_key(const struct inode *inode);
-char *fscrypt_get_ice_encryption_salt(const struct inode *inode);
-
-bool fscrypt_is_ice_encryption_info_equal(const struct inode *inode1,
-					const struct inode *inode2);
-
-static inline size_t fscrypt_get_ice_encryption_key_size(
-					const struct inode *inode)
-{
-	return FS_AES_256_XTS_KEY_SIZE / 2;
-}
-
-static inline size_t fscrypt_get_ice_encryption_salt_size(
-					const struct inode *inode)
-{
-	return FS_AES_256_XTS_KEY_SIZE / 2;
-}
-#else
-static inline bool fscrypt_should_be_processed_by_ice(const struct inode *inode)
-{
-	return false;
-}
-
-static inline int fscrypt_is_ice_capable(const struct super_block *sb)
-{
-	return false;
-}
-
-static inline char *fscrypt_get_ice_encryption_key(const struct inode *inode)
-{
-	return NULL;
-}
-
-static inline char *fscrypt_get_ice_encryption_salt(const struct inode *inode)
-{
-	return NULL;
-}
-
-static inline size_t fscrypt_get_ice_encryption_key_size(
-					const struct inode *inode)
-{
-	return 0;
-}
-
-static inline size_t fscrypt_get_ice_encryption_salt_size(
-					const struct inode *inode)
-{
-	return 0;
-}
-
-static inline int fscrypt_is_xts_cipher(const struct inode *inode)
-{
-	return 0;
-}
-
-static inline bool fscrypt_is_ice_encryption_info_equal(
-					const struct inode *inode1,
-					const struct inode *inode2)
-{
-	return false;
-}
-
-static inline int fscrypt_is_aes_xts_cipher(const struct inode *inode)
-{
-	return 0;
-}
-
-#endif
-
-#endif	/* _FSCRYPT_ICE_H */
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 70e34437..af6300c 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -4,9 +4,8 @@
  *
  * Copyright (C) 2015, Google, Inc.
  *
- * This contains encryption key functions.
- *
- * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
+ * Originally written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar.
+ * Heavily modified since then.
  */
 
 #ifndef _FSCRYPT_PRIVATE_H
@@ -14,41 +13,136 @@
 
 #include <linux/fscrypt.h>
 #include <crypto/hash.h>
-#include <linux/pfk.h>
+#include <linux/bio-crypt-ctx.h>
 
-/* Encryption parameters */
-
-#define FS_AES_128_ECB_KEY_SIZE		16
-#define FS_AES_128_CBC_KEY_SIZE		16
-#define FS_AES_128_CTS_KEY_SIZE		16
-#define FS_AES_256_GCM_KEY_SIZE		32
-#define FS_AES_256_CBC_KEY_SIZE		32
-#define FS_AES_256_CTS_KEY_SIZE		32
-#define FS_AES_256_XTS_KEY_SIZE		64
+#define CONST_STRLEN(str)	(sizeof(str) - 1)
 
 #define FS_KEY_DERIVATION_NONCE_SIZE	16
 
-/**
- * Encryption context for inode
- *
- * Protector format:
- *  1 byte: Protector format (1 = this version)
- *  1 byte: File contents encryption mode
- *  1 byte: File names encryption mode
- *  1 byte: Flags
- *  8 bytes: Master Key descriptor
- *  16 bytes: Encryption Key derivation nonce
- */
-struct fscrypt_context {
-	u8 format;
+#define FSCRYPT_MIN_KEY_SIZE		16
+#define FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE	128
+
+#define FSCRYPT_CONTEXT_V1	1
+#define FSCRYPT_CONTEXT_V2	2
+
+struct fscrypt_context_v1 {
+	u8 version; /* FSCRYPT_CONTEXT_V1 */
 	u8 contents_encryption_mode;
 	u8 filenames_encryption_mode;
 	u8 flags;
-	u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+	u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
 	u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
-} __packed;
+};
 
-#define FS_ENCRYPTION_CONTEXT_FORMAT_V1		1
+struct fscrypt_context_v2 {
+	u8 version; /* FSCRYPT_CONTEXT_V2 */
+	u8 contents_encryption_mode;
+	u8 filenames_encryption_mode;
+	u8 flags;
+	u8 __reserved[4];
+	u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+	u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
+};
+
+/**
+ * fscrypt_context - the encryption context of an inode
+ *
+ * This is the on-disk equivalent of an fscrypt_policy, stored alongside each
+ * encrypted file, usually in a hidden extended attribute.  It contains the
+ * fields from the fscrypt_policy, in order to identify the encryption algorithm
+ * and key with which the file is encrypted.  It also contains a nonce that was
+ * randomly generated by fscrypt itself; this is used as KDF input or as a tweak
+ * to cause different files to be encrypted differently.
+ */
+union fscrypt_context {
+	u8 version;
+	struct fscrypt_context_v1 v1;
+	struct fscrypt_context_v2 v2;
+};
+
+/*
+ * Return the size expected for the given fscrypt_context based on its version
+ * number, or 0 if the context version is unrecognized.
+ */
+static inline int fscrypt_context_size(const union fscrypt_context *ctx)
+{
+	switch (ctx->version) {
+	case FSCRYPT_CONTEXT_V1:
+		BUILD_BUG_ON(sizeof(ctx->v1) != 28);
+		return sizeof(ctx->v1);
+	case FSCRYPT_CONTEXT_V2:
+		BUILD_BUG_ON(sizeof(ctx->v2) != 40);
+		return sizeof(ctx->v2);
+	}
+	return 0;
+}
+
+#undef fscrypt_policy
+union fscrypt_policy {
+	u8 version;
+	struct fscrypt_policy_v1 v1;
+	struct fscrypt_policy_v2 v2;
+};
+
+/*
+ * Return the size expected for the given fscrypt_policy based on its version
+ * number, or 0 if the policy version is unrecognized.
+ */
+static inline int fscrypt_policy_size(const union fscrypt_policy *policy)
+{
+	switch (policy->version) {
+	case FSCRYPT_POLICY_V1:
+		return sizeof(policy->v1);
+	case FSCRYPT_POLICY_V2:
+		return sizeof(policy->v2);
+	}
+	return 0;
+}
+
+/* Return the contents encryption mode of a valid encryption policy */
+static inline u8
+fscrypt_policy_contents_mode(const union fscrypt_policy *policy)
+{
+	switch (policy->version) {
+	case FSCRYPT_POLICY_V1:
+		return policy->v1.contents_encryption_mode;
+	case FSCRYPT_POLICY_V2:
+		return policy->v2.contents_encryption_mode;
+	}
+	BUG();
+}
+
+/* Return the filenames encryption mode of a valid encryption policy */
+static inline u8
+fscrypt_policy_fnames_mode(const union fscrypt_policy *policy)
+{
+	switch (policy->version) {
+	case FSCRYPT_POLICY_V1:
+		return policy->v1.filenames_encryption_mode;
+	case FSCRYPT_POLICY_V2:
+		return policy->v2.filenames_encryption_mode;
+	}
+	BUG();
+}
+
+/* Return the flags (FSCRYPT_POLICY_FLAG*) of a valid encryption policy */
+static inline u8
+fscrypt_policy_flags(const union fscrypt_policy *policy)
+{
+	switch (policy->version) {
+	case FSCRYPT_POLICY_V1:
+		return policy->v1.flags;
+	case FSCRYPT_POLICY_V2:
+		return policy->v2.flags;
+	}
+	BUG();
+}
+
+static inline bool
+fscrypt_is_direct_key_policy(const union fscrypt_policy *policy)
+{
+	return fscrypt_policy_flags(policy) & FSCRYPT_POLICY_FLAG_DIRECT_KEY;
+}
 
 /**
  * For encrypted symlinks, the ciphertext length is stored at the beginning
@@ -59,6 +153,20 @@
 	char encrypted_path[1];
 } __packed;
 
+/**
+ * struct fscrypt_prepared_key - a key prepared for actual encryption/decryption
+ * @tfm: crypto API transform object
+ * @blk_key: key for blk-crypto
+ *
+ * Normally only one of the fields will be non-NULL.
+ */
+struct fscrypt_prepared_key {
+	struct crypto_skcipher *tfm;
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+	struct fscrypt_blk_crypto_key *blk_key;
+#endif
+};
+
 /*
  * fscrypt_info - the "encryption key" for an inode
  *
@@ -68,36 +176,53 @@
  */
 struct fscrypt_info {
 
-	/* The actual crypto transform used for encryption and decryption */
-	struct crypto_skcipher *ci_ctfm;
+	/* The key in a form prepared for actual encryption/decryption */
+	struct fscrypt_prepared_key	ci_key;
 
+	/* True if the key should be freed when this fscrypt_info is freed */
+	bool ci_owns_key;
+
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
 	/*
-	 * Cipher for ESSIV IV generation.  Only set for CBC contents
-	 * encryption, otherwise is NULL.
+	 * True if this inode will use inline encryption (blk-crypto) instead of
+	 * the traditional filesystem-layer encryption.
 	 */
-	struct crypto_cipher *ci_essiv_tfm;
+	bool ci_inlinecrypt;
+#endif
 
 	/*
-	 * Encryption mode used for this inode.  It corresponds to either
-	 * ci_data_mode or ci_filename_mode, depending on the inode type.
+	 * Encryption mode used for this inode.  It corresponds to either the
+	 * contents or filenames encryption mode, depending on the inode type.
 	 */
 	struct fscrypt_mode *ci_mode;
 
+	/* Back-pointer to the inode */
+	struct inode *ci_inode;
+
 	/*
-	 * If non-NULL, then this inode uses a master key directly rather than a
-	 * derived key, and ci_ctfm will equal ci_master_key->mk_ctfm.
-	 * Otherwise, this inode uses a derived key.
+	 * The master key with which this inode was unlocked (decrypted).  This
+	 * will be NULL if the master key was found in a process-subscribed
+	 * keyring rather than in the filesystem-level keyring.
 	 */
-	struct fscrypt_master_key *ci_master_key;
+	struct key *ci_master_key;
 
-	/* fields from the fscrypt_context */
+	/*
+	 * Link in list of inodes that were unlocked with the master key.
+	 * Only used when ->ci_master_key is set.
+	 */
+	struct list_head ci_master_key_link;
 
-	u8 ci_data_mode;
-	u8 ci_filename_mode;
-	u8 ci_flags;
-	u8 ci_master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
+	/*
+	 * If non-NULL, then encryption is done using the master key directly
+	 * and ci_key will equal ci_direct_key->dk_key.
+	 */
+	struct fscrypt_direct_key *ci_direct_key;
+
+	/* The encryption policy used by this inode */
+	union fscrypt_policy ci_policy;
+
+	/* This inode's nonce, copied from the fscrypt_context */
 	u8 ci_nonce[FS_KEY_DERIVATION_NONCE_SIZE];
-	u8 ci_raw_key[FS_MAX_KEY_SIZE];
 };
 
 typedef enum {
@@ -105,25 +230,23 @@
 	FS_ENCRYPT,
 } fscrypt_direction_t;
 
-#define FS_CTX_REQUIRES_FREE_ENCRYPT_FL		0x00000001
-
 static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
 					   u32 filenames_mode)
 {
-	if (contents_mode == FS_ENCRYPTION_MODE_AES_128_CBC &&
-	    filenames_mode == FS_ENCRYPTION_MODE_AES_128_CTS)
+	if (contents_mode == FSCRYPT_MODE_AES_128_CBC &&
+	    filenames_mode == FSCRYPT_MODE_AES_128_CTS)
 		return true;
 
-	if (contents_mode == FS_ENCRYPTION_MODE_AES_256_XTS &&
-	    filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
+	if (contents_mode == FSCRYPT_MODE_AES_256_XTS &&
+	    filenames_mode == FSCRYPT_MODE_AES_256_CTS)
 		return true;
 
-	if (contents_mode == FS_ENCRYPTION_MODE_ADIANTUM &&
-	    filenames_mode == FS_ENCRYPTION_MODE_ADIANTUM)
+	if (contents_mode == FSCRYPT_MODE_ADIANTUM &&
+	    filenames_mode == FSCRYPT_MODE_ADIANTUM)
 		return true;
 
-	if (contents_mode == FS_ENCRYPTION_MODE_PRIVATE &&
-	    filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
+	if (contents_mode == FSCRYPT_MODE_PRIVATE &&
+	    filenames_mode == FSCRYPT_MODE_AES_256_CTS)
 		return true;
 
 	return false;
@@ -141,12 +264,12 @@
 extern const struct dentry_operations fscrypt_d_ops;
 
 extern void __printf(3, 4) __cold
-fscrypt_msg(struct super_block *sb, const char *level, const char *fmt, ...);
+fscrypt_msg(const struct inode *inode, const char *level, const char *fmt, ...);
 
-#define fscrypt_warn(sb, fmt, ...)		\
-	fscrypt_msg(sb, KERN_WARNING, fmt, ##__VA_ARGS__)
-#define fscrypt_err(sb, fmt, ...)		\
-	fscrypt_msg(sb, KERN_ERR, fmt, ##__VA_ARGS__)
+#define fscrypt_warn(inode, fmt, ...)		\
+	fscrypt_msg((inode), KERN_WARNING, fmt, ##__VA_ARGS__)
+#define fscrypt_err(inode, fmt, ...)		\
+	fscrypt_msg((inode), KERN_ERR, fmt, ##__VA_ARGS__)
 
 #define FSCRYPT_MAX_IV_SIZE	32
 
@@ -159,6 +282,7 @@
 		u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE];
 	};
 	u8 raw[FSCRYPT_MAX_IV_SIZE];
+	__le64 dun[FSCRYPT_MAX_IV_SIZE / sizeof(__le64)];
 };
 
 void fscrypt_generate_iv(union fscrypt_iv *iv, u64 lblk_num,
@@ -171,7 +295,273 @@
 					 u32 orig_len, u32 max_len,
 					 u32 *encrypted_len_ret);
 
-/* keyinfo.c */
+/* hkdf.c */
+
+struct fscrypt_hkdf {
+	struct crypto_shash *hmac_tfm;
+};
+
+extern int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
+			     unsigned int master_key_size);
+
+/*
+ * The list of contexts in which fscrypt uses HKDF.  These values are used as
+ * the first byte of the HKDF application-specific info string to guarantee that
+ * info strings are never repeated between contexts.  This ensures that all HKDF
+ * outputs are unique and cryptographically isolated, i.e. knowledge of one
+ * output doesn't reveal another.
+ */
+#define HKDF_CONTEXT_KEY_IDENTIFIER	1
+#define HKDF_CONTEXT_PER_FILE_KEY	2
+#define HKDF_CONTEXT_DIRECT_KEY		3
+#define HKDF_CONTEXT_IV_INO_LBLK_64_KEY	4
+
+extern int fscrypt_hkdf_expand(struct fscrypt_hkdf *hkdf, u8 context,
+			       const u8 *info, unsigned int infolen,
+			       u8 *okm, unsigned int okmlen);
+
+extern void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf);
+
+/* inline_crypt.c */
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+extern void fscrypt_select_encryption_impl(struct fscrypt_info *ci);
+
+static inline bool
+fscrypt_using_inline_encryption(const struct fscrypt_info *ci)
+{
+	return ci->ci_inlinecrypt;
+}
+
+extern int fscrypt_prepare_inline_crypt_key(
+					struct fscrypt_prepared_key *prep_key,
+					const u8 *raw_key,
+					unsigned int raw_key_size,
+					bool is_hw_wrapped,
+					const struct fscrypt_info *ci);
+
+extern void fscrypt_destroy_inline_crypt_key(
+					struct fscrypt_prepared_key *prep_key);
+
+extern int fscrypt_derive_raw_secret(struct super_block *sb,
+				     const u8 *wrapped_key,
+				     unsigned int wrapped_key_size,
+				     u8 *raw_secret,
+				     unsigned int raw_secret_size);
+
+/*
+ * Check whether the crypto transform or blk-crypto key has been allocated in
+ * @prep_key, depending on which encryption implementation the file will use.
+ */
+static inline bool
+fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
+			const struct fscrypt_info *ci)
+{
+	/*
+	 * The READ_ONCE() here pairs with the smp_store_release() in
+	 * fscrypt_prepare_key().  (This only matters for the per-mode keys,
+	 * which are shared by multiple inodes.)
+	 */
+	if (fscrypt_using_inline_encryption(ci))
+		return READ_ONCE(prep_key->blk_key) != NULL;
+	return READ_ONCE(prep_key->tfm) != NULL;
+}
+
+#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
+static inline void fscrypt_select_encryption_impl(struct fscrypt_info *ci)
+{
+}
+
+static inline bool fscrypt_using_inline_encryption(
+					const struct fscrypt_info *ci)
+{
+	return false;
+}
+
+static inline int
+fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
+				 const u8 *raw_key, unsigned int raw_key_size,
+				 bool is_hw_wrapped,
+				 const struct fscrypt_info *ci)
+{
+	WARN_ON(1);
+	return -EOPNOTSUPP;
+}
+
+static inline void
+fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key)
+{
+}
+
+static inline int fscrypt_derive_raw_secret(struct super_block *sb,
+					    const u8 *wrapped_key,
+					    unsigned int wrapped_key_size,
+					    u8 *raw_secret,
+					    unsigned int raw_secret_size)
+{
+	fscrypt_warn(NULL,
+		     "kernel built without support for hardware-wrapped keys");
+	return -EOPNOTSUPP;
+}
+
+static inline bool
+fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
+			const struct fscrypt_info *ci)
+{
+	return READ_ONCE(prep_key->tfm) != NULL;
+}
+#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
+/* keyring.c */
+
+/*
+ * fscrypt_master_key_secret - secret key material of an in-use master key
+ */
+struct fscrypt_master_key_secret {
+
+	/*
+	 * For v2 policy keys: HKDF context keyed by this master key.
+	 * For v1 policy keys: not set (hkdf.hmac_tfm == NULL).
+	 */
+	struct fscrypt_hkdf	hkdf;
+
+	/* Size of the raw key in bytes.  Set even if ->raw isn't set. */
+	u32			size;
+
+	/* True if the key in ->raw is a hardware-wrapped key. */
+	bool			is_hw_wrapped;
+
+	/*
+	 * For v1 policy keys: the raw key.  Wiped for v2 policy keys, unless
+	 * ->is_hw_wrapped is true, in which case this contains the wrapped key
+	 * rather than the key with which 'hkdf' was keyed.
+	 */
+	u8			raw[FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE];
+
+} __randomize_layout;
+
+/*
+ * fscrypt_master_key - an in-use master key
+ *
+ * This represents a master encryption key which has been added to the
+ * filesystem and can be used to "unlock" the encrypted files which were
+ * encrypted with it.
+ */
+struct fscrypt_master_key {
+
+	/*
+	 * The secret key material.  After FS_IOC_REMOVE_ENCRYPTION_KEY is
+	 * executed, this is wiped and no new inodes can be unlocked with this
+	 * key; however, there may still be inodes in ->mk_decrypted_inodes
+	 * which could not be evicted.  As long as some inodes still remain,
+	 * FS_IOC_REMOVE_ENCRYPTION_KEY can be retried, or
+	 * FS_IOC_ADD_ENCRYPTION_KEY can add the secret again.
+	 *
+	 * Locking: protected by key->sem (outer) and mk_secret_sem (inner).
+	 * The reason for two locks is that key->sem also protects modifying
+	 * mk_users, which ranks it above the semaphore for the keyring key
+	 * type, which is in turn above page faults (via keyring_read).  But
+	 * sometimes filesystems call fscrypt_get_encryption_info() from within
+	 * a transaction, which ranks it below page faults.  So we need a
+	 * separate lock which protects mk_secret but not also mk_users.
+	 */
+	struct fscrypt_master_key_secret	mk_secret;
+	struct rw_semaphore			mk_secret_sem;
+
+	/*
+	 * For v1 policy keys: an arbitrary key descriptor which was assigned by
+	 * userspace (->descriptor).
+	 *
+	 * For v2 policy keys: a cryptographic hash of this key (->identifier).
+	 */
+	struct fscrypt_key_specifier		mk_spec;
+
+	/*
+	 * Keyring which contains a key of type 'key_type_fscrypt_user' for each
+	 * user who has added this key.  Normally each key will be added by just
+	 * one user, but it's possible that multiple users share a key, and in
+	 * that case we need to keep track of those users so that one user can't
+	 * remove the key before the others want it removed too.
+	 *
+	 * This is NULL for v1 policy keys; those can only be added by root.
+	 *
+	 * Locking: in addition to this keyring's own semaphore, this is
+	 * protected by the master key's key->sem, so we can do atomic
+	 * search+insert.  It can also be searched without taking any locks, but
+	 * in that case the returned key may have already been removed.
+	 */
+	struct key		*mk_users;
+
+	/*
+	 * Length of ->mk_decrypted_inodes, plus one if mk_secret is present.
+	 * Once this goes to 0, the master key is removed from ->s_master_keys.
+	 * The 'struct fscrypt_master_key' will continue to live as long as the
+	 * 'struct key' whose payload it is, but we won't let this reference
+	 * count rise again.
+	 */
+	refcount_t		mk_refcount;
+
+	/*
+	 * List of inodes that were unlocked using this key.  This allows the
+	 * inodes to be evicted efficiently if the key is removed.
+	 */
+	struct list_head	mk_decrypted_inodes;
+	spinlock_t		mk_decrypted_inodes_lock;
+
+	/* Per-mode keys for DIRECT_KEY policies, allocated on-demand */
+	struct fscrypt_prepared_key mk_direct_keys[__FSCRYPT_MODE_MAX + 1];
+
+	/* Per-mode keys for IV_INO_LBLK_64 policies, allocated on-demand */
+	struct fscrypt_prepared_key mk_iv_ino_lblk_64_keys[__FSCRYPT_MODE_MAX + 1];
+
+} __randomize_layout;
+
+static inline bool
+is_master_key_secret_present(const struct fscrypt_master_key_secret *secret)
+{
+	/*
+	 * The READ_ONCE() is only necessary for fscrypt_drop_inode() and
+	 * fscrypt_key_describe().  These run in atomic context, so they can't
+	 * take ->mk_secret_sem, and thus 'secret' can change concurrently,
+	 * which would be a data race.  But they only need to know whether the
+	 * secret *was* present at the time of check, so READ_ONCE() suffices.
+	 */
+	return READ_ONCE(secret->size) != 0;
+}
+
+static inline const char *master_key_spec_type(
+				const struct fscrypt_key_specifier *spec)
+{
+	switch (spec->type) {
+	case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
+		return "descriptor";
+	case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
+		return "identifier";
+	}
+	return "[unknown]";
+}
+
+static inline int master_key_spec_len(const struct fscrypt_key_specifier *spec)
+{
+	switch (spec->type) {
+	case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
+		return FSCRYPT_KEY_DESCRIPTOR_SIZE;
+	case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
+		return FSCRYPT_KEY_IDENTIFIER_SIZE;
+	}
+	return 0;
+}
+
+extern struct key *
+fscrypt_find_master_key(struct super_block *sb,
+			const struct fscrypt_key_specifier *mk_spec);
+
+extern int fscrypt_verify_key_added(struct super_block *sb,
+				    const u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE]);
+
+extern int __init fscrypt_init_keyring(void);
+
+/* keysetup.c */
 
 struct fscrypt_mode {
 	const char *friendly_name;
@@ -179,10 +569,44 @@
 	int keysize;
 	int ivsize;
 	bool logged_impl_name;
-	bool needs_essiv;
-	bool inline_encryption;
+	enum blk_crypto_mode_num blk_crypto_mode;
 };
 
-extern void __exit fscrypt_essiv_cleanup(void);
+extern struct fscrypt_mode fscrypt_modes[];
+
+static inline bool
+fscrypt_mode_supports_direct_key(const struct fscrypt_mode *mode)
+{
+	return mode->ivsize >= offsetofend(union fscrypt_iv, nonce);
+}
+
+extern int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
+			       const u8 *raw_key, unsigned int raw_key_size,
+			       bool is_hw_wrapped,
+			       const struct fscrypt_info *ci);
+
+extern void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key);
+
+extern int fscrypt_set_derived_key(struct fscrypt_info *ci,
+				   const u8 *derived_key);
+
+/* keysetup_v1.c */
+
+extern void fscrypt_put_direct_key(struct fscrypt_direct_key *dk);
+
+extern int fscrypt_setup_v1_file_key(struct fscrypt_info *ci,
+				     const u8 *raw_master_key);
+
+extern int fscrypt_setup_v1_file_key_via_subscribed_keyrings(
+					struct fscrypt_info *ci);
+
+/* policy.c */
+
+extern bool fscrypt_policies_equal(const union fscrypt_policy *policy1,
+				   const union fscrypt_policy *policy2);
+extern bool fscrypt_supported_policy(const union fscrypt_policy *policy_u,
+				     const struct inode *inode);
+extern int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
+				       const union fscrypt_context *ctx_u,
+				       int ctx_size);
 
 #endif /* _FSCRYPT_PRIVATE_H */
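Since fscrypt_context_size() returns 0 for unknown versions, a caller that
has read a raw context blob can validate it in a single comparison.  A
hedged sketch of that calling pattern (fscrypt_context_valid() is a
hypothetical helper, not part of this patch; 'ret' stands for the byte
count returned by the filesystem's ->get_context()):

	/* Hypothetical validation helper built on fscrypt_context_size()
	 * above; not part of this patch. */
	static bool fscrypt_context_valid(const union fscrypt_context *ctx,
					  int ret)
	{
		if (ret < 1)	/* need at least the version byte */
			return false;
		/* fscrypt_context_size() is 0 for unrecognized versions, so
		 * this rejects bad versions as well as truncated or oversized
		 * contexts. */
		return ret == fscrypt_context_size(ctx);
	}

The same shape works for union fscrypt_policy with fscrypt_policy_size().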
diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
new file mode 100644
index 0000000..2c02600
--- /dev/null
+++ b/fs/crypto/hkdf.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Implementation of HKDF ("HMAC-based Extract-and-Expand Key Derivation
+ * Function"), aka RFC 5869.  See also the original paper (Krawczyk 2010):
+ * "Cryptographic Extraction and Key Derivation: The HKDF Scheme".
+ *
+ * This is used to derive keys from the fscrypt master keys.
+ *
+ * Copyright 2019 Google LLC
+ */
+
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+
+#include "fscrypt_private.h"
+
+/*
+ * HKDF supports any unkeyed cryptographic hash algorithm, but fscrypt uses
+ * SHA-512 because it is reasonably secure and efficient; and since it produces
+ * a 64-byte digest, deriving an AES-256-XTS key preserves all 64 bytes of
+ * entropy from the master key and requires only one iteration of HKDF-Expand.
+ */
+#define HKDF_HMAC_ALG		"hmac(sha512)"
+#define HKDF_HASHLEN		SHA512_DIGEST_SIZE
+
+/*
+ * HKDF consists of two steps:
+ *
+ * 1. HKDF-Extract: extract a pseudorandom key of length HKDF_HASHLEN bytes from
+ *    the input keying material and optional salt.
+ * 2. HKDF-Expand: expand the pseudorandom key into output keying material of
+ *    any length, parameterized by an application-specific info string.
+ *
+ * HKDF-Extract can be skipped if the input is already a pseudorandom key of
+ * length HKDF_HASHLEN bytes.  However, cipher modes other than AES-256-XTS take
+ * shorter keys, and we don't want to force users of those modes to provide
+ * unnecessarily long master keys.  Thus fscrypt still does HKDF-Extract.  No
+ * salt is used, since fscrypt master keys should already be pseudorandom and
+ * there's no way to persist a random salt per master key from kernel mode.
+ */
+
+/* HKDF-Extract (RFC 5869 section 2.2), unsalted */
+static int hkdf_extract(struct crypto_shash *hmac_tfm, const u8 *ikm,
+			unsigned int ikmlen, u8 prk[HKDF_HASHLEN])
+{
+	static const u8 default_salt[HKDF_HASHLEN];
+	SHASH_DESC_ON_STACK(desc, hmac_tfm);
+	int err;
+
+	err = crypto_shash_setkey(hmac_tfm, default_salt, HKDF_HASHLEN);
+	if (err)
+		return err;
+
+	desc->tfm = hmac_tfm;
+	desc->flags = 0;
+	err = crypto_shash_digest(desc, ikm, ikmlen, prk);
+	shash_desc_zero(desc);
+	return err;
+}
+
+/*
+ * Compute HKDF-Extract using the given master key as the input keying material,
+ * and prepare an HMAC transform object keyed by the resulting pseudorandom key.
+ *
+ * Afterwards, the keyed HMAC transform object can be used for HKDF-Expand many
+ * times without having to recompute HKDF-Extract each time.
+ */
+int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
+		      unsigned int master_key_size)
+{
+	struct crypto_shash *hmac_tfm;
+	u8 prk[HKDF_HASHLEN];
+	int err;
+
+	hmac_tfm = crypto_alloc_shash(HKDF_HMAC_ALG, 0, 0);
+	if (IS_ERR(hmac_tfm)) {
+		fscrypt_err(NULL, "Error allocating " HKDF_HMAC_ALG ": %ld",
+			    PTR_ERR(hmac_tfm));
+		return PTR_ERR(hmac_tfm);
+	}
+
+	if (WARN_ON(crypto_shash_digestsize(hmac_tfm) != sizeof(prk))) {
+		err = -EINVAL;
+		goto err_free_tfm;
+	}
+
+	err = hkdf_extract(hmac_tfm, master_key, master_key_size, prk);
+	if (err)
+		goto err_free_tfm;
+
+	err = crypto_shash_setkey(hmac_tfm, prk, sizeof(prk));
+	if (err)
+		goto err_free_tfm;
+
+	hkdf->hmac_tfm = hmac_tfm;
+	goto out;
+
+err_free_tfm:
+	crypto_free_shash(hmac_tfm);
+out:
+	memzero_explicit(prk, sizeof(prk));
+	return err;
+}
+
+/*
+ * HKDF-Expand (RFC 5869 section 2.3).  This expands the pseudorandom key, which
+ * was already keyed into 'hkdf->hmac_tfm' by fscrypt_init_hkdf(), into 'okmlen'
+ * bytes of output keying material parameterized by the application-specific
+ * 'info' of length 'infolen' bytes, prefixed by "fscrypt\0" and the 'context'
+ * byte.  This is thread-safe and may be called by multiple threads in parallel.
+ *
+ * ('context' isn't part of the HKDF specification; it's just a prefix fscrypt
+ * adds to its application-specific info strings to guarantee that it doesn't
+ * accidentally repeat an info string when using HKDF for different purposes.)
+ */
+int fscrypt_hkdf_expand(struct fscrypt_hkdf *hkdf, u8 context,
+			const u8 *info, unsigned int infolen,
+			u8 *okm, unsigned int okmlen)
+{
+	SHASH_DESC_ON_STACK(desc, hkdf->hmac_tfm);
+	u8 prefix[9];
+	unsigned int i;
+	int err;
+	const u8 *prev = NULL;
+	u8 counter = 1;
+	u8 tmp[HKDF_HASHLEN];
+
+	if (WARN_ON(okmlen > 255 * HKDF_HASHLEN))
+		return -EINVAL;
+
+	desc->tfm = hkdf->hmac_tfm;
+	desc->flags = 0;
+
+	memcpy(prefix, "fscrypt\0", 8);
+	prefix[8] = context;
+
+	for (i = 0; i < okmlen; i += HKDF_HASHLEN) {
+
+		err = crypto_shash_init(desc);
+		if (err)
+			goto out;
+
+		if (prev) {
+			err = crypto_shash_update(desc, prev, HKDF_HASHLEN);
+			if (err)
+				goto out;
+		}
+
+		err = crypto_shash_update(desc, prefix, sizeof(prefix));
+		if (err)
+			goto out;
+
+		err = crypto_shash_update(desc, info, infolen);
+		if (err)
+			goto out;
+
+		BUILD_BUG_ON(sizeof(counter) != 1);
+		if (okmlen - i < HKDF_HASHLEN) {
+			err = crypto_shash_finup(desc, &counter, 1, tmp);
+			if (err)
+				goto out;
+			memcpy(&okm[i], tmp, okmlen - i);
+			memzero_explicit(tmp, sizeof(tmp));
+		} else {
+			err = crypto_shash_finup(desc, &counter, 1, &okm[i]);
+			if (err)
+				goto out;
+		}
+		counter++;
+		prev = &okm[i];
+	}
+	err = 0;
+out:
+	if (unlikely(err))
+		memzero_explicit(okm, okmlen); /* so caller doesn't need to */
+	shash_desc_zero(desc);
+	return err;
+}
+
+void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf)
+{
+	crypto_free_shash(hkdf->hmac_tfm);
+}
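The expand loop above implements the RFC 5869 recurrence
T(i) = HMAC(PRK, T(i-1) || "fscrypt\0" || context || info || i) for
i = 1..ceil(okmlen/64), concatenating the T(i) blocks into okm.  A hedged
sketch of how the rest of fscrypt is expected to call this pair — the field
names come from fscrypt_private.h earlier in this patch, and error handling
is abbreviated:

	#include "fscrypt_private.h"

	/* Sketch: derive a per-file key from an already-initialized
	 * master-key HKDF context, using the inode's nonce as the info
	 * string. */
	static int derive_per_file_key(struct fscrypt_master_key_secret *secret,
				       const struct fscrypt_info *ci,
				       u8 *derived_key, unsigned int keysize)
	{
		/* The context byte keeps this output cryptographically
		 * disjoint from e.g. HKDF_CONTEXT_KEY_IDENTIFIER outputs. */
		return fscrypt_hkdf_expand(&secret->hkdf,
					   HKDF_CONTEXT_PER_FILE_KEY,
					   ci->ci_nonce,
					   FS_KEY_DERIVATION_NONCE_SIZE,
					   derived_key, keysize);
	}

secret->hkdf must have been set up once via fscrypt_init_hkdf(); after
that, fscrypt_hkdf_expand() may safely be called from multiple threads in
parallel.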
diff --git a/fs/crypto/hooks.c b/fs/crypto/hooks.c
index 34c2d03..30b1ca6 100644
--- a/fs/crypto/hooks.c
+++ b/fs/crypto/hooks.c
@@ -38,9 +38,9 @@
 	dir = dget_parent(file_dentry(filp));
 	if (IS_ENCRYPTED(d_inode(dir)) &&
 	    !fscrypt_has_permitted_context(d_inode(dir), inode)) {
-		fscrypt_warn(inode->i_sb,
-			     "inconsistent encryption contexts: %lu/%lu",
-			     d_inode(dir)->i_ino, inode->i_ino);
+		fscrypt_warn(inode,
+			     "Inconsistent encryption context (parent directory: %lu)",
+			     d_inode(dir)->i_ino);
 		err = -EPERM;
 	}
 	dput(dir);
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
new file mode 100644
index 0000000..00da0ef
--- /dev/null
+++ b/fs/crypto/inline_crypt.c
@@ -0,0 +1,353 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Inline encryption support for fscrypt
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * With "inline encryption", the block layer handles the decryption/encryption
+ * as part of the bio, instead of the filesystem doing the crypto itself via
+ * crypto API.  See Documentation/block/inline-encryption.rst.  fscrypt still
+ * provides the key and IV to use.
+ */
+
+#include <linux/blk-crypto.h>
+#include <linux/blkdev.h>
+#include <linux/buffer_head.h>
+#include <linux/keyslot-manager.h>
+
+#include "fscrypt_private.h"
+
+struct fscrypt_blk_crypto_key {
+	struct blk_crypto_key base;
+	int num_devs;
+	struct request_queue *devs[];
+};
+
+/* Enable inline encryption for this file if supported. */
+void fscrypt_select_encryption_impl(struct fscrypt_info *ci)
+{
+	const struct inode *inode = ci->ci_inode;
+	struct super_block *sb = inode->i_sb;
+
+	/* The file must need contents encryption, not filenames encryption */
+	if (!S_ISREG(inode->i_mode))
+		return;
+
+	/* blk-crypto must implement the needed encryption algorithm */
+	if (ci->ci_mode->blk_crypto_mode == BLK_ENCRYPTION_MODE_INVALID)
+		return;
+
+	/* The filesystem must be mounted with -o inlinecrypt */
+	if (!sb->s_cop->inline_crypt_enabled ||
+	    !sb->s_cop->inline_crypt_enabled(sb))
+		return;
+
+	ci->ci_inlinecrypt = true;
+}
+
+int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
+				     const u8 *raw_key,
+				     unsigned int raw_key_size,
+				     bool is_hw_wrapped,
+				     const struct fscrypt_info *ci)
+{
+	const struct inode *inode = ci->ci_inode;
+	struct super_block *sb = inode->i_sb;
+	enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode;
+	int num_devs = 1;
+	int queue_refs = 0;
+	struct fscrypt_blk_crypto_key *blk_key;
+	int err;
+	int i;
+
+	if (sb->s_cop->get_num_devices)
+		num_devs = sb->s_cop->get_num_devices(sb);
+	if (WARN_ON(num_devs < 1))
+		return -EINVAL;
+
+	blk_key = kzalloc(struct_size(blk_key, devs, num_devs), GFP_NOFS);
+	if (!blk_key)
+		return -ENOMEM;
+
+	blk_key->num_devs = num_devs;
+	if (num_devs == 1)
+		blk_key->devs[0] = bdev_get_queue(sb->s_bdev);
+	else
+		sb->s_cop->get_devices(sb, blk_key->devs);
+
+	BUILD_BUG_ON(FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE >
+		     BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE);
+
+	err = blk_crypto_init_key(&blk_key->base, raw_key, raw_key_size,
+				  is_hw_wrapped, crypto_mode, sb->s_blocksize);
+	if (err) {
+		fscrypt_err(inode, "error %d initializing blk-crypto key", err);
+		goto fail;
+	}
+
+	/*
+	 * We have to start using blk-crypto on all the filesystem's devices.
+	 * We also have to save all the request_queues for later so that the
+	 * key can be evicted from them.  This is needed because some keys
+	 * aren't destroyed until after the filesystem has been unmounted
+	 * (namely, the per-mode keys in struct fscrypt_master_key).
+	 */
+	for (i = 0; i < num_devs; i++) {
+		if (!blk_get_queue(blk_key->devs[i])) {
+			fscrypt_err(inode, "couldn't get request_queue");
+			err = -EAGAIN;
+			goto fail;
+		}
+		queue_refs++;
+
+		err = blk_crypto_start_using_mode(crypto_mode, sb->s_blocksize,
+						  blk_key->devs[i]);
+		if (err) {
+			fscrypt_err(inode,
+				    "error %d starting to use blk-crypto", err);
+			goto fail;
+		}
+	}
+	/*
+	 * Pairs with READ_ONCE() in fscrypt_is_key_prepared().  (Only matters
+	 * for the per-mode keys, which are shared by multiple inodes.)
+	 */
+	smp_store_release(&prep_key->blk_key, blk_key);
+	return 0;
+
+fail:
+	for (i = 0; i < queue_refs; i++)
+		blk_put_queue(blk_key->devs[i]);
+	kzfree(blk_key);
+	return err;
+}
+
+void fscrypt_destroy_inline_crypt_key(struct fscrypt_prepared_key *prep_key)
+{
+	struct fscrypt_blk_crypto_key *blk_key = prep_key->blk_key;
+	int i;
+
+	if (blk_key) {
+		for (i = 0; i < blk_key->num_devs; i++) {
+			blk_crypto_evict_key(blk_key->devs[i], &blk_key->base);
+			blk_put_queue(blk_key->devs[i]);
+		}
+		kzfree(blk_key);
+	}
+}
+
+int fscrypt_derive_raw_secret(struct super_block *sb,
+			      const u8 *wrapped_key,
+			      unsigned int wrapped_key_size,
+			      u8 *raw_secret, unsigned int raw_secret_size)
+{
+	struct request_queue *q;
+
+	q = sb->s_bdev->bd_queue;
+	if (!q->ksm)
+		return -EOPNOTSUPP;
+
+	return keyslot_manager_derive_raw_secret(q->ksm,
+						 wrapped_key, wrapped_key_size,
+						 raw_secret, raw_secret_size);
+}
+
+/**
+ * fscrypt_inode_uses_inline_crypto - test whether an inode uses inline
+ *				      encryption
+ * @inode: an inode
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ *	   encryption should be done in the block layer via blk-crypto rather
+ *	   than in the filesystem layer.
+ */
+bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+{
+	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+		inode->i_crypt_info->ci_inlinecrypt;
+}
+EXPORT_SYMBOL_GPL(fscrypt_inode_uses_inline_crypto);
+
+/**
+ * fscrypt_inode_uses_fs_layer_crypto - test whether an inode uses fs-layer
+ *					encryption
+ * @inode: an inode
+ *
+ * Return: true if the inode requires file contents encryption and if the
+ *	   encryption should be done in the filesystem layer rather than in the
+ *	   block layer via blk-crypto.
+ */
+bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
+{
+	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) &&
+		!inode->i_crypt_info->ci_inlinecrypt;
+}
+EXPORT_SYMBOL_GPL(fscrypt_inode_uses_fs_layer_crypto);
+
+static void fscrypt_generate_dun(const struct fscrypt_info *ci, u64 lblk_num,
+				 u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
+{
+	union fscrypt_iv iv;
+	int i;
+
+	fscrypt_generate_iv(&iv, lblk_num, ci);
+
+	BUILD_BUG_ON(FSCRYPT_MAX_IV_SIZE > BLK_CRYPTO_MAX_IV_SIZE);
+	memset(dun, 0, BLK_CRYPTO_MAX_IV_SIZE);
+	for (i = 0; i < ci->ci_mode->ivsize / sizeof(dun[0]); i++)
+		dun[i] = le64_to_cpu(iv.dun[i]);
+}
+
+/**
+ * fscrypt_set_bio_crypt_ctx - prepare a file contents bio for inline encryption
+ * @bio: a bio which will eventually be submitted to the file
+ * @inode: the file's inode
+ * @first_lblk: the first file logical block number in the I/O
+ * @gfp_mask: memory allocation flags - these must be a waiting mask so that
+ *					bio_crypt_set_ctx can't fail.
+ *
+ * If the contents of the file should be encrypted (or decrypted) with inline
+ * encryption, then assign the appropriate encryption context to the bio.
+ *
+ * Normally the bio should be newly allocated (i.e. no pages added yet), as
+ * otherwise fscrypt_mergeable_bio() won't work as intended.
+ *
+ * The encryption context will be freed automatically when the bio is freed.
+ *
+ * This function also handles setting bi_skip_dm_default_key when needed.
+ */
+void fscrypt_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
+			       u64 first_lblk, gfp_t gfp_mask)
+{
+	const struct fscrypt_info *ci = inode->i_crypt_info;
+	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+	if (fscrypt_inode_should_skip_dm_default_key(inode))
+		bio_set_skip_dm_default_key(bio);
+
+	if (!fscrypt_inode_uses_inline_crypto(inode))
+		return;
+
+	fscrypt_generate_dun(ci, first_lblk, dun);
+	bio_crypt_set_ctx(bio, &ci->ci_key.blk_key->base, dun, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx);
+
+/* Extract the inode and logical block number from a buffer_head. */
+static bool bh_get_inode_and_lblk_num(const struct buffer_head *bh,
+				      const struct inode **inode_ret,
+				      u64 *lblk_num_ret)
+{
+	struct page *page = bh->b_page;
+	const struct address_space *mapping;
+	const struct inode *inode;
+
+	/*
+	 * The ext4 journal (jbd2) can submit a buffer_head it directly created
+	 * for a non-pagecache page.  fscrypt doesn't care about these.
+	 */
+	mapping = page_mapping(page);
+	if (!mapping)
+		return false;
+	inode = mapping->host;
+
+	*inode_ret = inode;
+	*lblk_num_ret = ((u64)page->index << (PAGE_SHIFT - inode->i_blkbits)) +
+			(bh_offset(bh) >> inode->i_blkbits);
+	return true;
+}
+
+/**
+ * fscrypt_set_bio_crypt_ctx_bh - prepare a file contents bio for inline
+ *				  encryption
+ * @bio: a bio which will eventually be submitted to the file
+ * @first_bh: the first buffer_head for which I/O will be submitted
+ * @gfp_mask: memory allocation flags
+ *
+ * Same as fscrypt_set_bio_crypt_ctx(), except this takes a buffer_head instead
+ * of an inode and block number directly.
+ */
+void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
+				 const struct buffer_head *first_bh,
+				 gfp_t gfp_mask)
+{
+	const struct inode *inode;
+	u64 first_lblk;
+
+	if (bh_get_inode_and_lblk_num(first_bh, &inode, &first_lblk))
+		fscrypt_set_bio_crypt_ctx(bio, inode, first_lblk, gfp_mask);
+}
+EXPORT_SYMBOL_GPL(fscrypt_set_bio_crypt_ctx_bh);
+
+/**
+ * fscrypt_mergeable_bio - test whether data can be added to a bio
+ * @bio: the bio being built up
+ * @inode: the inode for the next part of the I/O
+ * @next_lblk: the next file logical block number in the I/O
+ *
+ * When building a bio which may contain data that should undergo inline
+ * encryption (or decryption) via fscrypt, filesystems should call this function
+ * to ensure that the resulting bio contains only logically contiguous data.
+ * This will return false if the next part of the I/O cannot be merged with the
+ * bio because either the encryption key would be different or the encryption
+ * data unit numbers would be discontiguous.
+ *
+ * fscrypt_set_bio_crypt_ctx() must have already been called on the bio.
+ *
+ * This function also returns false if the next part of the I/O would need to
+ * have a different value for the bi_skip_dm_default_key flag.
+ *
+ * Return: true iff the I/O is mergeable
+ */
+bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+			   u64 next_lblk)
+{
+	const struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+	u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+	if (!!bc != fscrypt_inode_uses_inline_crypto(inode))
+		return false;
+	if (bio_should_skip_dm_default_key(bio) !=
+	    fscrypt_inode_should_skip_dm_default_key(inode))
+		return false;
+	if (!bc)
+		return true;
+
+	/*
+	 * Comparing the key pointers is good enough, as all I/O for each key
+	 * uses the same pointer.  I.e., there's currently no need to support
+	 * merging requests where the keys are the same but the pointers differ.
+	 */
+	if (bc->bc_key != &inode->i_crypt_info->ci_key.blk_key->base)
+		return false;
+
+	fscrypt_generate_dun(inode->i_crypt_info, next_lblk, next_dun);
+	return bio_crypt_dun_is_contiguous(bc, bio->bi_iter.bi_size, next_dun);
+}
+EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio);
+
+/**
+ * fscrypt_mergeable_bio_bh - test whether data can be added to a bio
+ * @bio: the bio being built up
+ * @next_bh: the next buffer_head for which I/O will be submitted
+ *
+ * Same as fscrypt_mergeable_bio(), except this takes a buffer_head instead of
+ * an inode and block number directly.
+ *
+ * Return: true iff the I/O is mergeable
+ */
+bool fscrypt_mergeable_bio_bh(struct bio *bio,
+			      const struct buffer_head *next_bh)
+{
+	const struct inode *inode;
+	u64 next_lblk;
+
+	if (!bh_get_inode_and_lblk_num(next_bh, &inode, &next_lblk))
+		return !bio->bi_crypt_context &&
+		       !bio_should_skip_dm_default_key(bio);
+
+	return fscrypt_mergeable_bio(bio, inode, next_lblk);
+}
+EXPORT_SYMBOL_GPL(fscrypt_mergeable_bio_bh);
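The two *_mergeable_* helpers only make sense inside a filesystem's
bio-building loop, so here is a hedged sketch of the intended usage.
fs_write_pages() is a hypothetical placeholder, not a real API, and the
target-device, sector, and op setup that a real caller needs is elided:

	/* Sketch: build bios whose keys and DUNs stay contiguous for an
	 * encrypted file.  Hypothetical caller; not part of this patch. */
	static void fs_write_pages(struct inode *inode, struct page **pages,
				   const u64 *lblks, int nr)
	{
		struct bio *bio = NULL;
		int i;

		for (i = 0; i < nr; i++) {
			if (bio && !fscrypt_mergeable_bio(bio, inode,
							  lblks[i])) {
				/* key or DUN would be discontiguous: flush */
				submit_bio(bio);
				bio = NULL;
			}
			if (!bio) {
				/* capacity for the remaining pages; a real
				 * caller would cap at BIO_MAX_PAGES and
				 * handle bio_add_page() failure */
				bio = bio_alloc(GFP_NOFS, nr - i);
				fscrypt_set_bio_crypt_ctx(bio, inode,
							  lblks[i], GFP_NOFS);
			}
			bio_add_page(bio, pages[i], PAGE_SIZE, 0);
		}
		if (bio)
			submit_bio(bio);
	}

With fs-layer crypto (fscrypt_inode_uses_fs_layer_crypto()),
fscrypt_set_bio_crypt_ctx() does nothing beyond the dm-default-key flag and
fscrypt_mergeable_bio() returns true, so the same loop serves both
implementations unchanged.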
diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
deleted file mode 100644
index 123598c..0000000
--- a/fs/crypto/keyinfo.c
+++ /dev/null
@@ -1,650 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * key management facility for FS encryption support.
- *
- * Copyright (C) 2015, Google, Inc.
- *
- * This contains encryption key functions.
- *
- * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
- */
-
-#include <keys/user-type.h>
-#include <linux/hashtable.h>
-#include <linux/scatterlist.h>
-#include <crypto/aes.h>
-#include <crypto/algapi.h>
-#include <crypto/sha.h>
-#include <crypto/skcipher.h>
-#include "fscrypt_private.h"
-#include "fscrypt_ice.h"
-
-static struct crypto_shash *essiv_hash_tfm;
-
-/* Table of keys referenced by FS_POLICY_FLAG_DIRECT_KEY policies */
-static DEFINE_HASHTABLE(fscrypt_master_keys, 6); /* 6 bits = 64 buckets */
-static DEFINE_SPINLOCK(fscrypt_master_keys_lock);
-
-/*
- * Key derivation function.  This generates the derived key by encrypting the
- * master key with AES-128-ECB using the inode's nonce as the AES key.
- *
- * The master key must be at least as long as the derived key.  If the master
- * key is longer, then only the first 'derived_keysize' bytes are used.
- */
-static int derive_key_aes(const u8 *master_key,
-			  const struct fscrypt_context *ctx,
-			  u8 *derived_key, unsigned int derived_keysize)
-{
-	int res = 0;
-	struct skcipher_request *req = NULL;
-	DECLARE_CRYPTO_WAIT(wait);
-	struct scatterlist src_sg, dst_sg;
-	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
-
-	if (IS_ERR(tfm)) {
-		res = PTR_ERR(tfm);
-		tfm = NULL;
-		goto out;
-	}
-	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
-	req = skcipher_request_alloc(tfm, GFP_NOFS);
-	if (!req) {
-		res = -ENOMEM;
-		goto out;
-	}
-	skcipher_request_set_callback(req,
-			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-			crypto_req_done, &wait);
-	res = crypto_skcipher_setkey(tfm, ctx->nonce, sizeof(ctx->nonce));
-	if (res < 0)
-		goto out;
-
-	sg_init_one(&src_sg, master_key, derived_keysize);
-	sg_init_one(&dst_sg, derived_key, derived_keysize);
-	skcipher_request_set_crypt(req, &src_sg, &dst_sg, derived_keysize,
-				   NULL);
-	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
-out:
-	skcipher_request_free(req);
-	crypto_free_skcipher(tfm);
-	return res;
-}
-
-/*
- * Search the current task's subscribed keyrings for a "logon" key with
- * description prefix:descriptor, and if found acquire a read lock on it and
- * return a pointer to its validated payload in *payload_ret.
- */
-static struct key *
-find_and_lock_process_key(const char *prefix,
-			  const u8 descriptor[FS_KEY_DESCRIPTOR_SIZE],
-			  unsigned int min_keysize,
-			  const struct fscrypt_key **payload_ret)
-{
-	char *description;
-	struct key *key;
-	const struct user_key_payload *ukp;
-	const struct fscrypt_key *payload;
-
-	description = kasprintf(GFP_NOFS, "%s%*phN", prefix,
-				FS_KEY_DESCRIPTOR_SIZE, descriptor);
-	if (!description)
-		return ERR_PTR(-ENOMEM);
-
-	key = request_key(&key_type_logon, description, NULL);
-	kfree(description);
-	if (IS_ERR(key))
-		return key;
-
-	down_read(&key->sem);
-	ukp = user_key_payload_locked(key);
-
-	if (!ukp) /* was the key revoked before we acquired its semaphore? */
-		goto invalid;
-
-	payload = (const struct fscrypt_key *)ukp->data;
-
-	if (ukp->datalen != sizeof(struct fscrypt_key) ||
-	    payload->size < 1 || payload->size > FS_MAX_KEY_SIZE) {
-		fscrypt_warn(NULL,
-			     "key with description '%s' has invalid payload",
-			     key->description);
-		goto invalid;
-	}
-
-	if (payload->size < min_keysize) {
-		fscrypt_warn(NULL,
-			     "key with description '%s' is too short (got %u bytes, need %u+ bytes)",
-			     key->description, payload->size, min_keysize);
-		goto invalid;
-	}
-
-	*payload_ret = payload;
-	return key;
-
-invalid:
-	up_read(&key->sem);
-	key_put(key);
-	return ERR_PTR(-ENOKEY);
-}
-
-static struct fscrypt_mode available_modes[] = {
-	[FS_ENCRYPTION_MODE_AES_256_XTS] = {
-		.friendly_name = "AES-256-XTS",
-		.cipher_str = "xts(aes)",
-		.keysize = 64,
-		.ivsize = 16,
-	},
-	[FS_ENCRYPTION_MODE_AES_256_CTS] = {
-		.friendly_name = "AES-256-CTS-CBC",
-		.cipher_str = "cts(cbc(aes))",
-		.keysize = 32,
-		.ivsize = 16,
-	},
-	[FS_ENCRYPTION_MODE_AES_128_CBC] = {
-		.friendly_name = "AES-128-CBC",
-		.cipher_str = "cbc(aes)",
-		.keysize = 16,
-		.ivsize = 16,
-		.needs_essiv = true,
-	},
-	[FS_ENCRYPTION_MODE_AES_128_CTS] = {
-		.friendly_name = "AES-128-CTS-CBC",
-		.cipher_str = "cts(cbc(aes))",
-		.keysize = 16,
-		.ivsize = 16,
-	},
-	[FS_ENCRYPTION_MODE_ADIANTUM] = {
-		.friendly_name = "Adiantum",
-		.cipher_str = "adiantum(xchacha12,aes)",
-		.keysize = 32,
-		.ivsize = 32,
-	},
-	[FS_ENCRYPTION_MODE_PRIVATE] = {
-		.friendly_name = "ice",
-		.cipher_str = "xts(aes)",
-		.keysize = 64,
-		.ivsize = 16,
-		.inline_encryption = true,
-	},
-};
-
-static struct fscrypt_mode *
-select_encryption_mode(const struct fscrypt_info *ci, const struct inode *inode)
-{
-	struct fscrypt_mode *mode = NULL;
-
-	if (!fscrypt_valid_enc_modes(ci->ci_data_mode, ci->ci_filename_mode)) {
-		fscrypt_warn(inode->i_sb,
-			     "inode %lu uses unsupported encryption modes (contents mode %d, filenames mode %d)",
-			     inode->i_ino, ci->ci_data_mode,
-			     ci->ci_filename_mode);
-		return ERR_PTR(-EINVAL);
-	}
-
-	if (S_ISREG(inode->i_mode)) {
-		mode = &available_modes[ci->ci_data_mode];
-		if (IS_ERR(mode)) {
-			fscrypt_warn(inode->i_sb, "Invalid mode");
-			return ERR_PTR(-EINVAL);
-		}
-		if (mode->inline_encryption &&
-				!fscrypt_is_ice_capable(inode->i_sb)) {
-			fscrypt_warn(inode->i_sb, "ICE support not available");
-			return ERR_PTR(-EINVAL);
-		}
-		return mode;
-	}
-
-	if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
-		return &available_modes[ci->ci_filename_mode];
-
-	WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n",
-		  inode->i_ino, (inode->i_mode & S_IFMT));
-	return ERR_PTR(-EINVAL);
-}
-
-/* Find the master key, then derive the inode's actual encryption key */
-static int find_and_derive_key(const struct inode *inode,
-			       const struct fscrypt_context *ctx,
-			       u8 *derived_key, const struct fscrypt_mode *mode)
-{
-	struct key *key;
-	const struct fscrypt_key *payload;
-	int err;
-
-	key = find_and_lock_process_key(FS_KEY_DESC_PREFIX,
-					ctx->master_key_descriptor,
-					mode->keysize, &payload);
-	if (key == ERR_PTR(-ENOKEY) && inode->i_sb->s_cop->key_prefix) {
-		key = find_and_lock_process_key(inode->i_sb->s_cop->key_prefix,
-						ctx->master_key_descriptor,
-						mode->keysize, &payload);
-	}
-	if (IS_ERR(key))
-		return PTR_ERR(key);
-
-	if (ctx->flags & FS_POLICY_FLAG_DIRECT_KEY) {
-		if (mode->ivsize < offsetofend(union fscrypt_iv, nonce)) {
-			fscrypt_warn(inode->i_sb,
-				     "direct key mode not allowed with %s",
-				     mode->friendly_name);
-			err = -EINVAL;
-		} else if (ctx->contents_encryption_mode !=
-			   ctx->filenames_encryption_mode) {
-			fscrypt_warn(inode->i_sb,
-				     "direct key mode not allowed with different contents and filenames modes");
-			err = -EINVAL;
-		} else {
-			memcpy(derived_key, payload->raw, mode->keysize);
-			err = 0;
-		}
-	} else if (mode->inline_encryption) {
-		memcpy(derived_key, payload->raw, mode->keysize);
-		err = 0;
-	} else {
-		err = derive_key_aes(payload->raw, ctx, derived_key,
-				     mode->keysize);
-	}
-	up_read(&key->sem);
-	key_put(key);
-	return err;
-}
-
-/* Allocate and key a symmetric cipher object for the given encryption mode */
-static struct crypto_skcipher *
-allocate_skcipher_for_mode(struct fscrypt_mode *mode, const u8 *raw_key,
-			   const struct inode *inode)
-{
-	struct crypto_skcipher *tfm;
-	int err;
-
-	tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
-	if (IS_ERR(tfm)) {
-		fscrypt_warn(inode->i_sb,
-			     "error allocating '%s' transform for inode %lu: %ld",
-			     mode->cipher_str, inode->i_ino, PTR_ERR(tfm));
-		return tfm;
-	}
-	if (unlikely(!mode->logged_impl_name)) {
-		/*
-		 * fscrypt performance can vary greatly depending on which
-		 * crypto algorithm implementation is used.  Help people debug
-		 * performance problems by logging the ->cra_driver_name the
-		 * first time a mode is used.  Note that multiple threads can
-		 * race here, but it doesn't really matter.
-		 */
-		mode->logged_impl_name = true;
-		pr_info("fscrypt: %s using implementation \"%s\"\n",
-			mode->friendly_name,
-			crypto_skcipher_alg(tfm)->base.cra_driver_name);
-	}
-	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
-	err = crypto_skcipher_setkey(tfm, raw_key, mode->keysize);
-	if (err)
-		goto err_free_tfm;
-
-	return tfm;
-
-err_free_tfm:
-	crypto_free_skcipher(tfm);
-	return ERR_PTR(err);
-}
-
-/* Master key referenced by FS_POLICY_FLAG_DIRECT_KEY policy */
-struct fscrypt_master_key {
-	struct hlist_node mk_node;
-	refcount_t mk_refcount;
-	const struct fscrypt_mode *mk_mode;
-	struct crypto_skcipher *mk_ctfm;
-	u8 mk_descriptor[FS_KEY_DESCRIPTOR_SIZE];
-	u8 mk_raw[FS_MAX_KEY_SIZE];
-};
-
-static void free_master_key(struct fscrypt_master_key *mk)
-{
-	if (mk) {
-		crypto_free_skcipher(mk->mk_ctfm);
-		kzfree(mk);
-	}
-}
-
-static void put_master_key(struct fscrypt_master_key *mk)
-{
-	if (!refcount_dec_and_lock(&mk->mk_refcount, &fscrypt_master_keys_lock))
-		return;
-	hash_del(&mk->mk_node);
-	spin_unlock(&fscrypt_master_keys_lock);
-
-	free_master_key(mk);
-}
-
-/*
- * Find/insert the given master key into the fscrypt_master_keys table.  If
- * found, it is returned with elevated refcount, and 'to_insert' is freed if
- * non-NULL.  If not found, 'to_insert' is inserted and returned if it's
- * non-NULL; otherwise NULL is returned.
- */
-static struct fscrypt_master_key *
-find_or_insert_master_key(struct fscrypt_master_key *to_insert,
-			  const u8 *raw_key, const struct fscrypt_mode *mode,
-			  const struct fscrypt_info *ci)
-{
-	unsigned long hash_key;
-	struct fscrypt_master_key *mk;
-
-	/*
-	 * Careful: to avoid potentially leaking secret key bytes via timing
-	 * information, we must key the hash table by descriptor rather than by
-	 * raw key, and use crypto_memneq() when comparing raw keys.
-	 */
-
-	BUILD_BUG_ON(sizeof(hash_key) > FS_KEY_DESCRIPTOR_SIZE);
-	memcpy(&hash_key, ci->ci_master_key_descriptor, sizeof(hash_key));
-
-	spin_lock(&fscrypt_master_keys_lock);
-	hash_for_each_possible(fscrypt_master_keys, mk, mk_node, hash_key) {
-		if (memcmp(ci->ci_master_key_descriptor, mk->mk_descriptor,
-			   FS_KEY_DESCRIPTOR_SIZE) != 0)
-			continue;
-		if (mode != mk->mk_mode)
-			continue;
-		if (crypto_memneq(raw_key, mk->mk_raw, mode->keysize))
-			continue;
-		/* using existing tfm with same (descriptor, mode, raw_key) */
-		refcount_inc(&mk->mk_refcount);
-		spin_unlock(&fscrypt_master_keys_lock);
-		free_master_key(to_insert);
-		return mk;
-	}
-	if (to_insert)
-		hash_add(fscrypt_master_keys, &to_insert->mk_node, hash_key);
-	spin_unlock(&fscrypt_master_keys_lock);
-	return to_insert;
-}
-
-/* Prepare to encrypt directly using the master key in the given mode */
-static struct fscrypt_master_key *
-fscrypt_get_master_key(const struct fscrypt_info *ci, struct fscrypt_mode *mode,
-		       const u8 *raw_key, const struct inode *inode)
-{
-	struct fscrypt_master_key *mk;
-	int err;
-
-	/* Is there already a tfm for this key? */
-	mk = find_or_insert_master_key(NULL, raw_key, mode, ci);
-	if (mk)
-		return mk;
-
-	/* Nope, allocate one. */
-	mk = kzalloc(sizeof(*mk), GFP_NOFS);
-	if (!mk)
-		return ERR_PTR(-ENOMEM);
-	refcount_set(&mk->mk_refcount, 1);
-	mk->mk_mode = mode;
-	mk->mk_ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
-	if (IS_ERR(mk->mk_ctfm)) {
-		err = PTR_ERR(mk->mk_ctfm);
-		mk->mk_ctfm = NULL;
-		goto err_free_mk;
-	}
-	memcpy(mk->mk_descriptor, ci->ci_master_key_descriptor,
-	       FS_KEY_DESCRIPTOR_SIZE);
-	memcpy(mk->mk_raw, raw_key, mode->keysize);
-
-	return find_or_insert_master_key(mk, raw_key, mode, ci);
-
-err_free_mk:
-	free_master_key(mk);
-	return ERR_PTR(err);
-}
-
-static int derive_essiv_salt(const u8 *key, int keysize, u8 *salt)
-{
-	struct crypto_shash *tfm = READ_ONCE(essiv_hash_tfm);
-
-	/* init hash transform on demand */
-	if (unlikely(!tfm)) {
-		struct crypto_shash *prev_tfm;
-
-		tfm = crypto_alloc_shash("sha256", 0, 0);
-		if (IS_ERR(tfm)) {
-			fscrypt_warn(NULL,
-				     "error allocating SHA-256 transform: %ld",
-				     PTR_ERR(tfm));
-			return PTR_ERR(tfm);
-		}
-		prev_tfm = cmpxchg(&essiv_hash_tfm, NULL, tfm);
-		if (prev_tfm) {
-			crypto_free_shash(tfm);
-			tfm = prev_tfm;
-		}
-	}
-
-	{
-		SHASH_DESC_ON_STACK(desc, tfm);
-		desc->tfm = tfm;
-		desc->flags = 0;
-
-		return crypto_shash_digest(desc, key, keysize, salt);
-	}
-}
-
-static int init_essiv_generator(struct fscrypt_info *ci, const u8 *raw_key,
-				int keysize)
-{
-	int err;
-	struct crypto_cipher *essiv_tfm;
-	u8 salt[SHA256_DIGEST_SIZE];
-
-	essiv_tfm = crypto_alloc_cipher("aes", 0, 0);
-	if (IS_ERR(essiv_tfm))
-		return PTR_ERR(essiv_tfm);
-
-	ci->ci_essiv_tfm = essiv_tfm;
-
-	err = derive_essiv_salt(raw_key, keysize, salt);
-	if (err)
-		goto out;
-
-	/*
-	 * Using SHA256 to derive the salt/key will result in AES-256 being
-	 * used for IV generation. File contents encryption will still use the
-	 * configured keysize (AES-128) nevertheless.
-	 */
-	err = crypto_cipher_setkey(essiv_tfm, salt, sizeof(salt));
-	if (err)
-		goto out;
-
-out:
-	memzero_explicit(salt, sizeof(salt));
-	return err;
-}
-
-void __exit fscrypt_essiv_cleanup(void)
-{
-	crypto_free_shash(essiv_hash_tfm);
-}
-
-/*
- * Given the encryption mode and key (normally the derived key, but for
- * FS_POLICY_FLAG_DIRECT_KEY mode it's the master key), set up the inode's
- * symmetric cipher transform object(s).
- */
-static int setup_crypto_transform(struct fscrypt_info *ci,
-				  struct fscrypt_mode *mode,
-				  const u8 *raw_key, const struct inode *inode)
-{
-	struct fscrypt_master_key *mk;
-	struct crypto_skcipher *ctfm;
-	int err;
-
-	if (ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY) {
-		mk = fscrypt_get_master_key(ci, mode, raw_key, inode);
-		if (IS_ERR(mk))
-			return PTR_ERR(mk);
-		ctfm = mk->mk_ctfm;
-	} else {
-		mk = NULL;
-		ctfm = allocate_skcipher_for_mode(mode, raw_key, inode);
-		if (IS_ERR(ctfm))
-			return PTR_ERR(ctfm);
-	}
-	ci->ci_master_key = mk;
-	ci->ci_ctfm = ctfm;
-
-	if (mode->needs_essiv) {
-		/* ESSIV implies 16-byte IVs which implies !DIRECT_KEY */
-		WARN_ON(mode->ivsize != AES_BLOCK_SIZE);
-		WARN_ON(ci->ci_flags & FS_POLICY_FLAG_DIRECT_KEY);
-
-		err = init_essiv_generator(ci, raw_key, mode->keysize);
-		if (err) {
-			fscrypt_warn(inode->i_sb,
-				     "error initializing ESSIV generator for inode %lu: %d",
-				     inode->i_ino, err);
-			return err;
-		}
-	}
-	return 0;
-}
-
-static void put_crypt_info(struct fscrypt_info *ci)
-{
-	if (!ci)
-		return;
-
-	if (ci->ci_master_key) {
-		put_master_key(ci->ci_master_key);
-	} else {
-		if (ci->ci_ctfm)
-			crypto_free_skcipher(ci->ci_ctfm);
-		if (ci->ci_essiv_tfm)
-			crypto_free_cipher(ci->ci_essiv_tfm);
-	}
-	memset(ci->ci_raw_key, 0, sizeof(ci->ci_raw_key));
-	kmem_cache_free(fscrypt_info_cachep, ci);
-}
-
-static int fscrypt_data_encryption_mode(struct inode *inode)
-{
-	return fscrypt_should_be_processed_by_ice(inode) ?
-	FS_ENCRYPTION_MODE_PRIVATE : FS_ENCRYPTION_MODE_AES_256_XTS;
-}
-
-int fscrypt_get_encryption_info(struct inode *inode)
-{
-	struct fscrypt_info *crypt_info;
-	struct fscrypt_context ctx;
-	struct fscrypt_mode *mode;
-	u8 *raw_key = NULL;
-	int res;
-
-	if (fscrypt_has_encryption_key(inode))
-		return 0;
-
-	res = fscrypt_initialize(inode->i_sb->s_cop->flags);
-	if (res)
-		return res;
-
-	res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
-	if (res < 0) {
-		if (!fscrypt_dummy_context_enabled(inode) ||
-		    IS_ENCRYPTED(inode))
-			return res;
-		/* Fake up a context for an unencrypted directory */
-		memset(&ctx, 0, sizeof(ctx));
-		ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
-		ctx.contents_encryption_mode =
-			fscrypt_data_encryption_mode(inode);
-		ctx.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
-		memset(ctx.master_key_descriptor, 0x42, FS_KEY_DESCRIPTOR_SIZE);
-	} else if (res != sizeof(ctx)) {
-		return -EINVAL;
-	}
-
-	if (ctx.format != FS_ENCRYPTION_CONTEXT_FORMAT_V1)
-		return -EINVAL;
-
-	if (ctx.flags & ~FS_POLICY_FLAGS_VALID)
-		return -EINVAL;
-
-	crypt_info = kmem_cache_zalloc(fscrypt_info_cachep, GFP_NOFS);
-	if (!crypt_info)
-		return -ENOMEM;
-
-	crypt_info->ci_flags = ctx.flags;
-	crypt_info->ci_data_mode = ctx.contents_encryption_mode;
-	crypt_info->ci_filename_mode = ctx.filenames_encryption_mode;
-	memcpy(crypt_info->ci_master_key_descriptor, ctx.master_key_descriptor,
-	       FS_KEY_DESCRIPTOR_SIZE);
-	memcpy(crypt_info->ci_nonce, ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
-
-	mode = select_encryption_mode(crypt_info, inode);
-	if (IS_ERR(mode)) {
-		res = PTR_ERR(mode);
-		goto out;
-	}
-	WARN_ON(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
-	crypt_info->ci_mode = mode;
-
-	/*
-	 * This cannot be a stack buffer because it may be passed to the
-	 * scatterlist crypto API as part of key derivation.
-	 */
-	res = -ENOMEM;
-	raw_key = kmalloc(mode->keysize, GFP_NOFS);
-	if (!raw_key)
-		goto out;
-
-	res = find_and_derive_key(inode, &ctx, raw_key, mode);
-	if (res)
-		goto out;
-
-	if (!mode->inline_encryption) {
-		res = setup_crypto_transform(crypt_info, mode, raw_key, inode);
-		if (res)
-			goto out;
-	} else {
-		memcpy(crypt_info->ci_raw_key, raw_key, mode->keysize);
-	}
-
-	if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL)
-		crypt_info = NULL;
-out:
-	if (res == -ENOKEY)
-		res = 0;
-	put_crypt_info(crypt_info);
-	kzfree(raw_key);
-	return res;
-}
-EXPORT_SYMBOL(fscrypt_get_encryption_info);
-
-/**
- * fscrypt_put_encryption_info - free most of an inode's fscrypt data
- *
- * Free the inode's fscrypt_info.  Filesystems must call this when the inode is
- * being evicted.  An RCU grace period need not have elapsed yet.
- */
-void fscrypt_put_encryption_info(struct inode *inode)
-{
-	put_crypt_info(inode->i_crypt_info);
-	inode->i_crypt_info = NULL;
-}
-EXPORT_SYMBOL(fscrypt_put_encryption_info);
-
-/**
- * fscrypt_free_inode - free an inode's fscrypt data requiring RCU delay
- *
- * Free the inode's cached decrypted symlink target, if any.  Filesystems must
- * call this after an RCU grace period, just before they free the inode.
- */
-void fscrypt_free_inode(struct inode *inode)
-{
-	if (IS_ENCRYPTED(inode) && S_ISLNK(inode->i_mode)) {
-		kfree(inode->i_link);
-		inode->i_link = NULL;
-	}
-}
-EXPORT_SYMBOL(fscrypt_free_inode);
diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c
new file mode 100644
index 0000000..d524b43
--- /dev/null
+++ b/fs/crypto/keyring.c
@@ -0,0 +1,1157 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Filesystem-level keyring for fscrypt
+ *
+ * Copyright 2019 Google LLC
+ */
+
+/*
+ * This file implements management of fscrypt master keys in the
+ * filesystem-level keyring, including the ioctls:
+ *
+ * - FS_IOC_ADD_ENCRYPTION_KEY
+ * - FS_IOC_REMOVE_ENCRYPTION_KEY
+ * - FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS
+ * - FS_IOC_GET_ENCRYPTION_KEY_STATUS
+ *
+ * See the "User API" section of Documentation/filesystems/fscrypt.rst for more
+ * information about these ioctls.
+ */
+
+#include <crypto/skcipher.h>
+#include <linux/key-type.h>
+#include <linux/seq_file.h>
+
+#include "fscrypt_private.h"
+
+static void wipe_master_key_secret(struct fscrypt_master_key_secret *secret)
+{
+	fscrypt_destroy_hkdf(&secret->hkdf);
+	memzero_explicit(secret, sizeof(*secret));
+}
+
+static void move_master_key_secret(struct fscrypt_master_key_secret *dst,
+				   struct fscrypt_master_key_secret *src)
+{
+	memcpy(dst, src, sizeof(*dst));
+	memzero_explicit(src, sizeof(*src));
+}
+
+static void free_master_key(struct fscrypt_master_key *mk)
+{
+	size_t i;
+
+	wipe_master_key_secret(&mk->mk_secret);
+
+	for (i = 0; i <= __FSCRYPT_MODE_MAX; i++) {
+		fscrypt_destroy_prepared_key(&mk->mk_direct_keys[i]);
+		fscrypt_destroy_prepared_key(&mk->mk_iv_ino_lblk_64_keys[i]);
+	}
+
+	key_put(mk->mk_users);
+	kzfree(mk);
+}
+
+static inline bool valid_key_spec(const struct fscrypt_key_specifier *spec)
+{
+	if (spec->__reserved)
+		return false;
+	return master_key_spec_len(spec) != 0;
+}
+
+static int fscrypt_key_instantiate(struct key *key,
+				   struct key_preparsed_payload *prep)
+{
+	key->payload.data[0] = (struct fscrypt_master_key *)prep->data;
+	return 0;
+}
+
+static void fscrypt_key_destroy(struct key *key)
+{
+	free_master_key(key->payload.data[0]);
+}
+
+static void fscrypt_key_describe(const struct key *key, struct seq_file *m)
+{
+	seq_puts(m, key->description);
+
+	if (key_is_positive(key)) {
+		const struct fscrypt_master_key *mk = key->payload.data[0];
+
+		if (!is_master_key_secret_present(&mk->mk_secret))
+			seq_puts(m, ": secret removed");
+	}
+}
+
+/*
+ * Type of key in ->s_master_keys.  Each key of this type represents a master
+ * key which has been added to the filesystem.  Its payload is a
+ * 'struct fscrypt_master_key'.  The "." prefix in the key type name prevents
+ * users from adding keys of this type via the keyrings syscalls rather than via
+ * the intended method of FS_IOC_ADD_ENCRYPTION_KEY.
+ */
+static struct key_type key_type_fscrypt = {
+	.name			= "._fscrypt",
+	.instantiate		= fscrypt_key_instantiate,
+	.destroy		= fscrypt_key_destroy,
+	.describe		= fscrypt_key_describe,
+};
+
+static int fscrypt_user_key_instantiate(struct key *key,
+					struct key_preparsed_payload *prep)
+{
+	/*
+	 * We just charge FSCRYPT_MAX_KEY_SIZE bytes to the user's key quota for
+	 * each key, regardless of the exact key size.  The amount of memory
+	 * actually used is greater than the size of the raw key anyway.
+	 */
+	return key_payload_reserve(key, FSCRYPT_MAX_KEY_SIZE);
+}
+
+static void fscrypt_user_key_describe(const struct key *key, struct seq_file *m)
+{
+	seq_puts(m, key->description);
+}
+
+/*
+ * Type of key in ->mk_users.  Each key of this type represents a particular
+ * user who has added a particular master key.
+ *
+ * Note that the name of this key type really should be something like
+ * ".fscrypt-user" instead of simply ".fscrypt".  But the shorter name is chosen
+ * mainly for simplicity of presentation in /proc/keys when read by a non-root
+ * user.  And it is expected to be rare that a key is actually added by multiple
+ * users, since users should keep their encryption keys confidential.
+ */
+static struct key_type key_type_fscrypt_user = {
+	.name			= ".fscrypt",
+	.instantiate		= fscrypt_user_key_instantiate,
+	.describe		= fscrypt_user_key_describe,
+};
+
+/* Search ->s_master_keys or ->mk_users */
+static struct key *search_fscrypt_keyring(struct key *keyring,
+					  struct key_type *type,
+					  const char *description)
+{
+	/*
+	 * We need to mark the keyring reference as "possessed" so that we
+	 * acquire permission to search it, via the KEY_POS_SEARCH permission.
+	 */
+	key_ref_t keyref = make_key_ref(keyring, true /* possessed */);
+
+	keyref = keyring_search(keyref, type, description);
+	if (IS_ERR(keyref)) {
+		if (PTR_ERR(keyref) == -EAGAIN || /* not found */
+		    PTR_ERR(keyref) == -EKEYREVOKED) /* recently invalidated */
+			keyref = ERR_PTR(-ENOKEY);
+		return ERR_CAST(keyref);
+	}
+	return key_ref_to_ptr(keyref);
+}
+
+#define FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE	\
+	(CONST_STRLEN("fscrypt-") + FIELD_SIZEOF(struct super_block, s_id))
+
+#define FSCRYPT_MK_DESCRIPTION_SIZE	(2 * FSCRYPT_KEY_IDENTIFIER_SIZE + 1)
+
+#define FSCRYPT_MK_USERS_DESCRIPTION_SIZE	\
+	(CONST_STRLEN("fscrypt-") + 2 * FSCRYPT_KEY_IDENTIFIER_SIZE + \
+	 CONST_STRLEN("-users") + 1)
+
+#define FSCRYPT_MK_USER_DESCRIPTION_SIZE	\
+	(2 * FSCRYPT_KEY_IDENTIFIER_SIZE + CONST_STRLEN(".uid.") + 10 + 1)
+
+static void format_fs_keyring_description(
+			char description[FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE],
+			const struct super_block *sb)
+{
+	sprintf(description, "fscrypt-%s", sb->s_id);
+}
+
+static void format_mk_description(
+			char description[FSCRYPT_MK_DESCRIPTION_SIZE],
+			const struct fscrypt_key_specifier *mk_spec)
+{
+	sprintf(description, "%*phN",
+		master_key_spec_len(mk_spec), (u8 *)&mk_spec->u);
+}
+
+static void format_mk_users_keyring_description(
+			char description[FSCRYPT_MK_USERS_DESCRIPTION_SIZE],
+			const u8 mk_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+{
+	sprintf(description, "fscrypt-%*phN-users",
+		FSCRYPT_KEY_IDENTIFIER_SIZE, mk_identifier);
+}
+
+static void format_mk_user_description(
+			char description[FSCRYPT_MK_USER_DESCRIPTION_SIZE],
+			const u8 mk_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+{
+	sprintf(description, "%*phN.uid.%u", FSCRYPT_KEY_IDENTIFIER_SIZE,
+		mk_identifier, __kuid_val(current_fsuid()));
+}
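For reference, a hypothetical userspace sketch of the same formatting (the kernel's "%*phN" specifier is emulated with a loop; the identifier bytes and uid below are made up):

    #include <stdio.h>

    int main(void)
    {
            /* Made-up 16-byte key identifier and fsuid, for illustration. */
            const unsigned char id[16] = {
                    0xde, 0xad, 0xbe, 0xef, 0x00, 0x11, 0x22, 0x33,
                    0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb,
            };
            unsigned int uid = 1000;
            char desc[2 * 16 + sizeof(".uid.") + 10]; /* hex + suffix + NUL */
            int n = 0, i;

            for (i = 0; i < 16; i++)
                    n += sprintf(&desc[n], "%02x", id[i]);
            sprintf(&desc[n], ".uid.%u", uid);

            /* Prints: deadbeef00112233445566778899aabb.uid.1000 */
            printf("%s\n", desc);
            return 0;
    }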
+
+/* Create ->s_master_keys if needed.  Synchronized by fscrypt_add_key_mutex. */
+static int allocate_filesystem_keyring(struct super_block *sb)
+{
+	char description[FSCRYPT_FS_KEYRING_DESCRIPTION_SIZE];
+	struct key *keyring;
+
+	if (sb->s_master_keys)
+		return 0;
+
+	format_fs_keyring_description(description, sb);
+	keyring = keyring_alloc(description, GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
+				current_cred(), KEY_POS_SEARCH |
+				  KEY_USR_SEARCH | KEY_USR_READ | KEY_USR_VIEW,
+				KEY_ALLOC_NOT_IN_QUOTA, NULL, NULL);
+	if (IS_ERR(keyring))
+		return PTR_ERR(keyring);
+
+	/* Pairs with READ_ONCE() in fscrypt_find_master_key() */
+	smp_store_release(&sb->s_master_keys, keyring);
+	return 0;
+}
+
+void fscrypt_sb_free(struct super_block *sb)
+{
+	key_put(sb->s_master_keys);
+	sb->s_master_keys = NULL;
+}
+
+/*
+ * Find the specified master key in ->s_master_keys.
+ * Returns ERR_PTR(-ENOKEY) if not found.
+ */
+struct key *fscrypt_find_master_key(struct super_block *sb,
+				    const struct fscrypt_key_specifier *mk_spec)
+{
+	struct key *keyring;
+	char description[FSCRYPT_MK_DESCRIPTION_SIZE];
+
+	/* pairs with smp_store_release() in allocate_filesystem_keyring() */
+	keyring = READ_ONCE(sb->s_master_keys);
+	if (keyring == NULL)
+		return ERR_PTR(-ENOKEY); /* No keyring yet, so no keys yet. */
+
+	format_mk_description(description, mk_spec);
+	return search_fscrypt_keyring(keyring, &key_type_fscrypt, description);
+}
+
+static int allocate_master_key_users_keyring(struct fscrypt_master_key *mk)
+{
+	char description[FSCRYPT_MK_USERS_DESCRIPTION_SIZE];
+	struct key *keyring;
+
+	format_mk_users_keyring_description(description,
+					    mk->mk_spec.u.identifier);
+	keyring = keyring_alloc(description, GLOBAL_ROOT_UID, GLOBAL_ROOT_GID,
+				current_cred(), KEY_POS_SEARCH |
+				  KEY_USR_SEARCH | KEY_USR_READ | KEY_USR_VIEW,
+				KEY_ALLOC_NOT_IN_QUOTA, NULL, NULL);
+	if (IS_ERR(keyring))
+		return PTR_ERR(keyring);
+
+	mk->mk_users = keyring;
+	return 0;
+}
+
+/*
+ * Find the current user's "key" in the master key's ->mk_users.
+ * Returns ERR_PTR(-ENOKEY) if not found.
+ */
+static struct key *find_master_key_user(struct fscrypt_master_key *mk)
+{
+	char description[FSCRYPT_MK_USER_DESCRIPTION_SIZE];
+
+	format_mk_user_description(description, mk->mk_spec.u.identifier);
+	return search_fscrypt_keyring(mk->mk_users, &key_type_fscrypt_user,
+				      description);
+}
+
+/*
+ * Give the current user a "key" in ->mk_users.  This charges the user's quota
+ * and marks the master key as added by the current user, so that it cannot be
+ * removed by another user with the key.  Either the master key's key->sem must
+ * be held for write, or the master key must still be undergoing initialization.
+ */
+static int add_master_key_user(struct fscrypt_master_key *mk)
+{
+	char description[FSCRYPT_MK_USER_DESCRIPTION_SIZE];
+	struct key *mk_user;
+	int err;
+
+	format_mk_user_description(description, mk->mk_spec.u.identifier);
+	mk_user = key_alloc(&key_type_fscrypt_user, description,
+			    current_fsuid(), current_gid(), current_cred(),
+			    KEY_POS_SEARCH | KEY_USR_VIEW, 0, NULL);
+	if (IS_ERR(mk_user))
+		return PTR_ERR(mk_user);
+
+	err = key_instantiate_and_link(mk_user, NULL, 0, mk->mk_users, NULL);
+	key_put(mk_user);
+	return err;
+}
+
+/*
+ * Remove the current user's "key" from ->mk_users.
+ * The master key's key->sem must be held for write.
+ *
+ * Returns 0 if removed, -ENOKEY if not found, or another -errno code.
+ */
+static int remove_master_key_user(struct fscrypt_master_key *mk)
+{
+	struct key *mk_user;
+	int err;
+
+	mk_user = find_master_key_user(mk);
+	if (IS_ERR(mk_user))
+		return PTR_ERR(mk_user);
+	err = key_unlink(mk->mk_users, mk_user);
+	key_put(mk_user);
+	return err;
+}
+
+/*
+ * Allocate a new fscrypt_master_key which contains the given secret, set it as
+ * the payload of a new 'struct key' of type fscrypt, and link the 'struct key'
+ * into the given keyring.  Synchronized by fscrypt_add_key_mutex.
+ */
+static int add_new_master_key(struct fscrypt_master_key_secret *secret,
+			      const struct fscrypt_key_specifier *mk_spec,
+			      struct key *keyring)
+{
+	struct fscrypt_master_key *mk;
+	char description[FSCRYPT_MK_DESCRIPTION_SIZE];
+	struct key *key;
+	int err;
+
+	mk = kzalloc(sizeof(*mk), GFP_KERNEL);
+	if (!mk)
+		return -ENOMEM;
+
+	mk->mk_spec = *mk_spec;
+
+	move_master_key_secret(&mk->mk_secret, secret);
+	init_rwsem(&mk->mk_secret_sem);
+
+	refcount_set(&mk->mk_refcount, 1); /* secret is present */
+	INIT_LIST_HEAD(&mk->mk_decrypted_inodes);
+	spin_lock_init(&mk->mk_decrypted_inodes_lock);
+
+	if (mk_spec->type == FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER) {
+		err = allocate_master_key_users_keyring(mk);
+		if (err)
+			goto out_free_mk;
+		err = add_master_key_user(mk);
+		if (err)
+			goto out_free_mk;
+	}
+
+	/*
+	 * Note that we don't charge this key to anyone's quota, since when
+	 * ->mk_users is in use those keys are charged instead, and otherwise
+	 * (when ->mk_users isn't in use) only root can add these keys.
+	 */
+	format_mk_description(description, mk_spec);
+	key = key_alloc(&key_type_fscrypt, description,
+			GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(),
+			KEY_POS_SEARCH | KEY_USR_SEARCH | KEY_USR_VIEW,
+			KEY_ALLOC_NOT_IN_QUOTA, NULL);
+	if (IS_ERR(key)) {
+		err = PTR_ERR(key);
+		goto out_free_mk;
+	}
+	err = key_instantiate_and_link(key, mk, sizeof(*mk), keyring, NULL);
+	key_put(key);
+	if (err)
+		goto out_free_mk;
+
+	return 0;
+
+out_free_mk:
+	free_master_key(mk);
+	return err;
+}
+
+#define KEY_DEAD	1
+
+static int add_existing_master_key(struct fscrypt_master_key *mk,
+				   struct fscrypt_master_key_secret *secret)
+{
+	struct key *mk_user;
+	bool rekey;
+	int err;
+
+	/*
+	 * If the current user is already in ->mk_users, then there's nothing to
+	 * do.  (Not applicable for v1 policy keys, which have NULL ->mk_users.)
+	 */
+	if (mk->mk_users) {
+		mk_user = find_master_key_user(mk);
+		if (mk_user != ERR_PTR(-ENOKEY)) {
+			if (IS_ERR(mk_user))
+				return PTR_ERR(mk_user);
+			key_put(mk_user);
+			return 0;
+		}
+	}
+
+	/* If we'll be re-adding ->mk_secret, try to take the reference. */
+	rekey = !is_master_key_secret_present(&mk->mk_secret);
+	if (rekey && !refcount_inc_not_zero(&mk->mk_refcount))
+		return KEY_DEAD;
+
+	/* Add the current user to ->mk_users, if applicable. */
+	if (mk->mk_users) {
+		err = add_master_key_user(mk);
+		if (err) {
+			if (rekey && refcount_dec_and_test(&mk->mk_refcount))
+				return KEY_DEAD;
+			return err;
+		}
+	}
+
+	/* Re-add the secret if needed. */
+	if (rekey) {
+		down_write(&mk->mk_secret_sem);
+		move_master_key_secret(&mk->mk_secret, secret);
+		up_write(&mk->mk_secret_sem);
+	}
+	return 0;
+}
+
+static int add_master_key(struct super_block *sb,
+			  struct fscrypt_master_key_secret *secret,
+			  const struct fscrypt_key_specifier *mk_spec)
+{
+	static DEFINE_MUTEX(fscrypt_add_key_mutex);
+	struct key *key;
+	int err;
+
+	mutex_lock(&fscrypt_add_key_mutex); /* serialize find + link */
+retry:
+	key = fscrypt_find_master_key(sb, mk_spec);
+	if (IS_ERR(key)) {
+		err = PTR_ERR(key);
+		if (err != -ENOKEY)
+			goto out_unlock;
+		/* Didn't find the key in ->s_master_keys.  Add it. */
+		err = allocate_filesystem_keyring(sb);
+		if (err)
+			goto out_unlock;
+		err = add_new_master_key(secret, mk_spec, sb->s_master_keys);
+	} else {
+		/*
+		 * Found the key in ->s_master_keys.  Re-add the secret if
+		 * needed, and add the user to ->mk_users if needed.
+		 */
+		down_write(&key->sem);
+		err = add_existing_master_key(key->payload.data[0], secret);
+		up_write(&key->sem);
+		if (err == KEY_DEAD) {
+			/* Key being removed or needs to be removed */
+			key_invalidate(key);
+			key_put(key);
+			goto retry;
+		}
+		key_put(key);
+	}
+out_unlock:
+	mutex_unlock(&fscrypt_add_key_mutex);
+	return err;
+}
+
+static int fscrypt_provisioning_key_preparse(struct key_preparsed_payload *prep)
+{
+	const struct fscrypt_provisioning_key_payload *payload = prep->data;
+
+	BUILD_BUG_ON(FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE < FSCRYPT_MAX_KEY_SIZE);
+
+	if (prep->datalen < sizeof(*payload) + FSCRYPT_MIN_KEY_SIZE ||
+	    prep->datalen > sizeof(*payload) + FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE)
+		return -EINVAL;
+
+	if (payload->type != FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR &&
+	    payload->type != FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER)
+		return -EINVAL;
+
+	if (payload->__reserved)
+		return -EINVAL;
+
+	prep->payload.data[0] = kmemdup(payload, prep->datalen, GFP_KERNEL);
+	if (!prep->payload.data[0])
+		return -ENOMEM;
+
+	prep->quotalen = prep->datalen;
+	return 0;
+}
+
+static void fscrypt_provisioning_key_free_preparse(
+					struct key_preparsed_payload *prep)
+{
+	kzfree(prep->payload.data[0]);
+}
+
+static void fscrypt_provisioning_key_describe(const struct key *key,
+					      struct seq_file *m)
+{
+	seq_puts(m, key->description);
+	if (key_is_positive(key)) {
+		const struct fscrypt_provisioning_key_payload *payload =
+			key->payload.data[0];
+
+		seq_printf(m, ": %u [%u]", key->datalen, payload->type);
+	}
+}
+
+static void fscrypt_provisioning_key_destroy(struct key *key)
+{
+	kzfree(key->payload.data[0]);
+}
+
+static struct key_type key_type_fscrypt_provisioning = {
+	.name			= "fscrypt-provisioning",
+	.preparse		= fscrypt_provisioning_key_preparse,
+	.free_preparse		= fscrypt_provisioning_key_free_preparse,
+	.instantiate		= generic_key_instantiate,
+	.describe		= fscrypt_provisioning_key_describe,
+	.destroy		= fscrypt_provisioning_key_destroy,
+};
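As a usage sketch (not part of this patch), a provisioning key of this type could be loaded from userspace with add_key(2). The payload layout mirrors struct fscrypt_provisioning_key_payload above; the descriptor, key bytes, and the type value (FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER is 2 in the UAPI header) are placeholders/assumptions (link with -lkeyutils):

    #include <keyutils.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Mirrors the kernel payload: __u32 type; __u32 __reserved; __u8 raw[]; */
    struct prov_payload {
            uint32_t type;          /* e.g. FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER */
            uint32_t reserved;
            uint8_t raw[64];        /* placeholder master key bytes */
    };

    int main(void)
    {
            struct prov_payload p = { .type = 2 /* IDENTIFIER, per the UAPI */ };
            key_serial_t id;

            memset(p.raw, 0xab, sizeof(p.raw)); /* never use a constant key! */
            id = add_key("fscrypt-provisioning", "example-desc",
                         &p, sizeof(p), KEY_SPEC_SESSION_KEYRING);
            if (id < 0) {
                    perror("add_key");
                    return 1;
            }
            printf("provisioning key id: %d\n", id);
            return 0;
    }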
+
+/*
+ * Retrieve the raw key from the Linux keyring key specified by 'key_id', and
+ * store it into 'secret'.
+ *
+ * The key must be of type "fscrypt-provisioning" and must have the field
+ * fscrypt_provisioning_key_payload::type set to 'type', indicating that it's
+ * only usable with fscrypt with the particular KDF version identified by
+ * 'type'.  We don't use the "logon" key type because there's no way to
+ * completely restrict the use of such keys; they can be used by any kernel API
+ * that accepts "logon" keys and doesn't require a specific service prefix.
+ *
+ * The ability to specify the key via Linux keyring key is intended for cases
+ * where userspace needs to re-add keys after the filesystem is unmounted and
+ * re-mounted.  Most users should just provide the raw key directly instead.
+ */
+static int get_keyring_key(u32 key_id, u32 type,
+			   struct fscrypt_master_key_secret *secret)
+{
+	key_ref_t ref;
+	struct key *key;
+	const struct fscrypt_provisioning_key_payload *payload;
+	int err;
+
+	ref = lookup_user_key(key_id, 0, KEY_NEED_SEARCH);
+	if (IS_ERR(ref))
+		return PTR_ERR(ref);
+	key = key_ref_to_ptr(ref);
+
+	if (key->type != &key_type_fscrypt_provisioning)
+		goto bad_key;
+	payload = key->payload.data[0];
+
+	/* Don't allow fscrypt v1 keys to be used as v2 keys and vice versa. */
+	if (payload->type != type)
+		goto bad_key;
+
+	secret->size = key->datalen - sizeof(*payload);
+	memcpy(secret->raw, payload->raw, secret->size);
+	err = 0;
+	goto out_put;
+
+bad_key:
+	err = -EKEYREJECTED;
+out_put:
+	key_ref_put(ref);
+	return err;
+}
+
+/* Size of software "secret" derived from hardware-wrapped key */
+#define RAW_SECRET_SIZE 32
+
+/*
+ * Add a master encryption key to the filesystem, causing all files which were
+ * encrypted with it to appear "unlocked" (decrypted) when accessed.
+ *
+ * When adding a key for use by v1 encryption policies, this ioctl is
+ * privileged, and userspace must provide the 'key_descriptor'.
+ *
+ * When adding a key for use by v2+ encryption policies, this ioctl is
+ * unprivileged.  This is needed, in general, to allow non-root users to use
+ * encryption without encountering the visibility problems of process-subscribed
+ * keyrings and the inability to properly remove keys.  This works by having
+ * each key identified by its cryptographically secure hash --- the
+ * 'key_identifier'.  The cryptographic hash ensures that a malicious user
+ * cannot add the wrong key for a given identifier.  Furthermore, each added key
+ * is charged to the appropriate user's quota for the keyrings service, which
+ * prevents a malicious user from adding too many keys.  Finally, we forbid a
+ * user from removing a key while other users have added it too, which prevents
+ * a user who knows another user's key from causing a denial-of-service by
+ * removing it at an inopportune time.  (We tolerate that a user who knows a key
+ * can prevent other users from removing it.)
+ *
+ * For more details, see the "FS_IOC_ADD_ENCRYPTION_KEY" section of
+ * Documentation/filesystems/fscrypt.rst.
+ */
+int fscrypt_ioctl_add_key(struct file *filp, void __user *_uarg)
+{
+	struct super_block *sb = file_inode(filp)->i_sb;
+	struct fscrypt_add_key_arg __user *uarg = _uarg;
+	struct fscrypt_add_key_arg arg;
+	struct fscrypt_master_key_secret secret;
+	u8 _kdf_key[RAW_SECRET_SIZE];
+	u8 *kdf_key;
+	unsigned int kdf_key_size;
+	int err;
+
+	if (copy_from_user(&arg, uarg, sizeof(arg)))
+		return -EFAULT;
+
+	if (!valid_key_spec(&arg.key_spec))
+		return -EINVAL;
+
+	if (memchr_inv(arg.__reserved, 0, sizeof(arg.__reserved)))
+		return -EINVAL;
+
+	memset(&secret, 0, sizeof(secret));
+	if (arg.key_id) {
+		if (arg.raw_size != 0)
+			return -EINVAL;
+		err = get_keyring_key(arg.key_id, arg.key_spec.type, &secret);
+		if (err)
+			goto out_wipe_secret;
+		err = -EINVAL;
+		if (!(arg.__flags & __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) &&
+		    secret.size > FSCRYPT_MAX_KEY_SIZE)
+			goto out_wipe_secret;
+	} else {
+		if (arg.raw_size < FSCRYPT_MIN_KEY_SIZE ||
+		    arg.raw_size >
+		    ((arg.__flags & __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) ?
+		    FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE : FSCRYPT_MAX_KEY_SIZE))
+			return -EINVAL;
+		secret.size = arg.raw_size;
+		err = -EFAULT;
+		if (copy_from_user(secret.raw, uarg->raw, secret.size))
+			goto out_wipe_secret;
+	}
+
+	switch (arg.key_spec.type) {
+	case FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR:
+		/*
+		 * Only root can add keys that are identified by an arbitrary
+		 * descriptor rather than by a cryptographic hash --- since
+		 * otherwise a malicious user could add the wrong key.
+		 */
+		err = -EACCES;
+		if (!capable(CAP_SYS_ADMIN))
+			goto out_wipe_secret;
+
+		err = -EINVAL;
+		if (arg.__flags & ~__FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED)
+			goto out_wipe_secret;
+		break;
+	case FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER:
+		err = -EINVAL;
+		if (arg.__flags & ~__FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED)
+			goto out_wipe_secret;
+		if (arg.__flags & __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED) {
+			kdf_key = _kdf_key;
+			kdf_key_size = RAW_SECRET_SIZE;
+			err = fscrypt_derive_raw_secret(sb, secret.raw,
+							secret.size,
+							kdf_key, kdf_key_size);
+			if (err)
+				goto out_wipe_secret;
+			secret.is_hw_wrapped = true;
+		} else {
+			kdf_key = secret.raw;
+			kdf_key_size = secret.size;
+		}
+		err = fscrypt_init_hkdf(&secret.hkdf, kdf_key, kdf_key_size);
+		/*
+		 * Now that the HKDF context is initialized, the raw HKDF
+		 * key is no longer needed.
+		 */
+		memzero_explicit(kdf_key, kdf_key_size);
+		if (err)
+			goto out_wipe_secret;
+
+		/* Calculate the key identifier and return it to userspace. */
+		err = fscrypt_hkdf_expand(&secret.hkdf,
+					  HKDF_CONTEXT_KEY_IDENTIFIER,
+					  NULL, 0, arg.key_spec.u.identifier,
+					  FSCRYPT_KEY_IDENTIFIER_SIZE);
+		if (err)
+			goto out_wipe_secret;
+		err = -EFAULT;
+		if (copy_to_user(uarg->key_spec.u.identifier,
+				 arg.key_spec.u.identifier,
+				 FSCRYPT_KEY_IDENTIFIER_SIZE))
+			goto out_wipe_secret;
+		break;
+	default:
+		WARN_ON(1);
+		err = -EINVAL;
+		goto out_wipe_secret;
+	}
+
+	err = add_master_key(sb, &secret, &arg.key_spec);
+out_wipe_secret:
+	wipe_master_key_secret(&secret);
+	return err;
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_add_key);
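For context, a minimal userspace sketch of the v2 add-key flow above, assuming the UAPI definitions from <linux/fscrypt.h>; the mount point and the constant key bytes are placeholders:

    #include <fcntl.h>
    #include <linux/fscrypt.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>

    int main(void)
    {
            const size_t raw_size = 64;     /* placeholder key length */
            struct fscrypt_add_key_arg *arg;
            int fd, i;

            fd = open("/mnt", O_RDONLY);    /* any file on the filesystem */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* The raw key is a flexible array member, so allocate for it. */
            arg = calloc(1, sizeof(*arg) + raw_size);
            if (!arg)
                    return 1;
            arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
            arg->raw_size = raw_size;
            memset(arg->raw, 0xcd, raw_size); /* never use a constant key! */

            if (ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, arg) != 0) {
                    perror("FS_IOC_ADD_ENCRYPTION_KEY");
                    return 1;
            }
            /* The kernel computed the key identifier and copied it back: */
            for (i = 0; i < FSCRYPT_KEY_IDENTIFIER_SIZE; i++)
                    printf("%02x", arg->key_spec.u.identifier[i]);
            printf("\n");
            return 0;
    }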
+
+/*
+ * Verify that the current user has added a master key with the given identifier
+ * (returns -ENOKEY if not).  This is needed to prevent a user from encrypting
+ * their files using some other user's key which they don't actually know.
+ * Cryptographically this isn't much of a problem, but the semantics of this
+ * would be a bit weird, so it's best to just forbid it.
+ *
+ * The system administrator (CAP_FOWNER) can override this, which should be
+ * enough for any use cases where encryption policies are being set using keys
+ * that were chosen ahead of time but aren't available at the moment.
+ *
+ * Note that the key may already have been removed by the time this returns, but
+ * that's okay; we just care whether the key was there at some point.
+ *
+ * Return: 0 if the key is added, -ENOKEY if it isn't, or another -errno code
+ */
+int fscrypt_verify_key_added(struct super_block *sb,
+			     const u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
+{
+	struct fscrypt_key_specifier mk_spec;
+	struct key *key, *mk_user;
+	struct fscrypt_master_key *mk;
+	int err;
+
+	mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+	memcpy(mk_spec.u.identifier, identifier, FSCRYPT_KEY_IDENTIFIER_SIZE);
+
+	key = fscrypt_find_master_key(sb, &mk_spec);
+	if (IS_ERR(key)) {
+		err = PTR_ERR(key);
+		goto out;
+	}
+	mk = key->payload.data[0];
+	mk_user = find_master_key_user(mk);
+	if (IS_ERR(mk_user)) {
+		err = PTR_ERR(mk_user);
+	} else {
+		key_put(mk_user);
+		err = 0;
+	}
+	key_put(key);
+out:
+	if (err == -ENOKEY && capable(CAP_FOWNER))
+		err = 0;
+	return err;
+}
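This check runs when a v2 encryption policy is set on a directory. For context, a hypothetical sketch of that userspace step (struct and constant names per the fscrypt UAPI; the identifier would come from FS_IOC_ADD_ENCRYPTION_KEY):

    #include <linux/fscrypt.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static int set_v2_policy(int dirfd, const unsigned char identifier[16])
    {
            struct fscrypt_policy_v2 policy;

            memset(&policy, 0, sizeof(policy));
            policy.version = FSCRYPT_POLICY_V2;
            policy.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
            policy.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
            policy.flags = FSCRYPT_POLICY_FLAGS_PAD_32;
            memcpy(policy.master_key_identifier, identifier, 16);

            return ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &policy);
    }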
+
+/*
+ * Try to evict the inode's dentries from the dentry cache.  If the inode is a
+ * directory, then it can have at most one dentry; however, that dentry may be
+ * pinned by child dentries, so first try to evict the children too.
+ */
+static void shrink_dcache_inode(struct inode *inode)
+{
+	struct dentry *dentry;
+
+	if (S_ISDIR(inode->i_mode)) {
+		dentry = d_find_any_alias(inode);
+		if (dentry) {
+			shrink_dcache_parent(dentry);
+			dput(dentry);
+		}
+	}
+	d_prune_aliases(inode);
+}
+
+static void evict_dentries_for_decrypted_inodes(struct fscrypt_master_key *mk)
+{
+	struct fscrypt_info *ci;
+	struct inode *inode;
+	struct inode *toput_inode = NULL;
+
+	spin_lock(&mk->mk_decrypted_inodes_lock);
+
+	list_for_each_entry(ci, &mk->mk_decrypted_inodes, ci_master_key_link) {
+		inode = ci->ci_inode;
+		spin_lock(&inode->i_lock);
+		if (inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) {
+			spin_unlock(&inode->i_lock);
+			continue;
+		}
+		__iget(inode);
+		spin_unlock(&inode->i_lock);
+		spin_unlock(&mk->mk_decrypted_inodes_lock);
+
+		shrink_dcache_inode(inode);
+		iput(toput_inode);
+		toput_inode = inode;
+
+		spin_lock(&mk->mk_decrypted_inodes_lock);
+	}
+
+	spin_unlock(&mk->mk_decrypted_inodes_lock);
+	iput(toput_inode);
+}
+
+static int check_for_busy_inodes(struct super_block *sb,
+				 struct fscrypt_master_key *mk)
+{
+	struct list_head *pos;
+	size_t busy_count = 0;
+	unsigned long ino;
+	struct dentry *dentry;
+	char _path[256];
+	char *path = NULL;
+
+	spin_lock(&mk->mk_decrypted_inodes_lock);
+
+	list_for_each(pos, &mk->mk_decrypted_inodes)
+		busy_count++;
+
+	if (busy_count == 0) {
+		spin_unlock(&mk->mk_decrypted_inodes_lock);
+		return 0;
+	}
+
+	{
+		/* select an example file to show for debugging purposes */
+		struct inode *inode =
+			list_first_entry(&mk->mk_decrypted_inodes,
+					 struct fscrypt_info,
+					 ci_master_key_link)->ci_inode;
+		ino = inode->i_ino;
+		dentry = d_find_alias(inode);
+	}
+	spin_unlock(&mk->mk_decrypted_inodes_lock);
+
+	if (dentry) {
+		path = dentry_path(dentry, _path, sizeof(_path));
+		dput(dentry);
+	}
+	if (IS_ERR_OR_NULL(path))
+		path = "(unknown)";
+
+	fscrypt_warn(NULL,
+		     "%s: %zu inode(s) still busy after removing key with %s %*phN, including ino %lu (%s)",
+		     sb->s_id, busy_count, master_key_spec_type(&mk->mk_spec),
+		     master_key_spec_len(&mk->mk_spec), (u8 *)&mk->mk_spec.u,
+		     ino, path);
+	return -EBUSY;
+}
+
+static BLOCKING_NOTIFIER_HEAD(fscrypt_key_removal_notifiers);
+
+/*
+ * Register a function to be executed when the FS_IOC_REMOVE_ENCRYPTION_KEY
+ * ioctl has removed a key and is about to try evicting inodes.
+ */
+int fscrypt_register_key_removal_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_register(&fscrypt_key_removal_notifiers,
+						nb);
+}
+EXPORT_SYMBOL_GPL(fscrypt_register_key_removal_notifier);
+
+int fscrypt_unregister_key_removal_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_unregister(&fscrypt_key_removal_notifiers,
+						  nb);
+}
+EXPORT_SYMBOL_GPL(fscrypt_unregister_key_removal_notifier);
+
+static int try_to_lock_encrypted_files(struct super_block *sb,
+				       struct fscrypt_master_key *mk)
+{
+	int err1;
+	int err2;
+
+	blocking_notifier_call_chain(&fscrypt_key_removal_notifiers, 0, NULL);
+
+	/*
+	 * An inode can't be evicted while it is dirty or has dirty pages.
+	 * Thus, we first have to clean the inodes in ->mk_decrypted_inodes.
+	 *
+	 * Just do it the easy way: call sync_filesystem().  It's overkill, but
+	 * it works, and it's more important to minimize the amount of caches we
+	 * drop than the amount of data we sync.  Also, unprivileged users can
+	 * already call sync_filesystem() via sys_syncfs() or sys_sync().
+	 */
+	down_read(&sb->s_umount);
+	err1 = sync_filesystem(sb);
+	up_read(&sb->s_umount);
+	/* If a sync error occurs, still try to evict as much as possible. */
+
+	/*
+	 * Inodes are pinned by their dentries, so we have to evict their
+	 * dentries.  shrink_dcache_sb() would suffice, but would be overkill
+	 * and inappropriate for use by unprivileged users.  So instead go
+	 * through the inodes' alias lists and try to evict each dentry.
+	 */
+	evict_dentries_for_decrypted_inodes(mk);
+
+	/*
+	 * evict_dentries_for_decrypted_inodes() already iput() each inode in
+	 * the list; any inodes for which that dropped the last reference will
+	 * have been evicted due to fscrypt_drop_inode() detecting the key
+	 * removal and telling the VFS to evict the inode.  So to finish, we
+	 * just need to check whether any inodes couldn't be evicted.
+	 */
+	err2 = check_for_busy_inodes(sb, mk);
+
+	return err1 ?: err2;
+}
+
+/*
+ * Try to remove an fscrypt master encryption key.
+ *
+ * FS_IOC_REMOVE_ENCRYPTION_KEY (all_users=false) removes the current user's
+ * claim to the key, then removes the key itself if no other users have claims.
+ * FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS (all_users=true) always removes the
+ * key itself.
+ *
+ * To "remove the key itself", first we wipe the actual master key secret, so
+ * that no more inodes can be unlocked with it.  Then we try to evict all cached
+ * inodes that had been unlocked with the key.
+ *
+ * If all inodes were evicted, then we unlink the fscrypt_master_key from the
+ * keyring.  Otherwise it remains in the keyring in the "incompletely removed"
+ * state (without the actual secret key) where it tracks the list of remaining
+ * inodes.  Userspace can execute the ioctl again later to retry eviction, or
+ * alternatively can re-add the secret key again.
+ *
+ * For more details, see the "Removing keys" section of
+ * Documentation/filesystems/fscrypt.rst.
+ */
+static int do_remove_key(struct file *filp, void __user *_uarg, bool all_users)
+{
+	struct super_block *sb = file_inode(filp)->i_sb;
+	struct fscrypt_remove_key_arg __user *uarg = _uarg;
+	struct fscrypt_remove_key_arg arg;
+	struct key *key;
+	struct fscrypt_master_key *mk;
+	u32 status_flags = 0;
+	int err;
+	bool dead;
+
+	if (copy_from_user(&arg, uarg, sizeof(arg)))
+		return -EFAULT;
+
+	if (!valid_key_spec(&arg.key_spec))
+		return -EINVAL;
+
+	if (memchr_inv(arg.__reserved, 0, sizeof(arg.__reserved)))
+		return -EINVAL;
+
+	/*
+	 * Only root can add and remove keys that are identified by an arbitrary
+	 * descriptor rather than by a cryptographic hash.
+	 */
+	if (arg.key_spec.type == FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR &&
+	    !capable(CAP_SYS_ADMIN))
+		return -EACCES;
+
+	/* Find the key being removed. */
+	key = fscrypt_find_master_key(sb, &arg.key_spec);
+	if (IS_ERR(key))
+		return PTR_ERR(key);
+	mk = key->payload.data[0];
+
+	down_write(&key->sem);
+
+	/* If relevant, remove current user's (or all users) claim to the key */
+	if (mk->mk_users && mk->mk_users->keys.nr_leaves_on_tree != 0) {
+		if (all_users)
+			err = keyring_clear(mk->mk_users);
+		else
+			err = remove_master_key_user(mk);
+		if (err) {
+			up_write(&key->sem);
+			goto out_put_key;
+		}
+		if (mk->mk_users->keys.nr_leaves_on_tree != 0) {
+			/*
+			 * Other users have still added the key too.  We removed
+			 * the current user's claim to the key, but we still
+			 * can't remove the key itself.
+			 */
+			status_flags |=
+				FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS;
+			err = 0;
+			up_write(&key->sem);
+			goto out_put_key;
+		}
+	}
+
+	/* No user claims remaining.  Go ahead and wipe the secret. */
+	dead = false;
+	if (is_master_key_secret_present(&mk->mk_secret)) {
+		down_write(&mk->mk_secret_sem);
+		wipe_master_key_secret(&mk->mk_secret);
+		dead = refcount_dec_and_test(&mk->mk_refcount);
+		up_write(&mk->mk_secret_sem);
+	}
+	up_write(&key->sem);
+	if (dead) {
+		/*
+		 * No inodes reference the key, and we wiped the secret, so the
+		 * key object is free to be removed from the keyring.
+		 */
+		key_invalidate(key);
+		err = 0;
+	} else {
+		/* Some inodes still reference this key; try to evict them. */
+		err = try_to_lock_encrypted_files(sb, mk);
+		if (err == -EBUSY) {
+			status_flags |=
+				FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY;
+			err = 0;
+		}
+	}
+	/*
+	 * We return 0 if we successfully did something: removed a claim to the
+	 * key, wiped the secret, or tried locking the files again.  Users need
+	 * to check the informational status flags if they care whether the key
+	 * has been fully removed including all files locked.
+	 */
+out_put_key:
+	key_put(key);
+	if (err == 0)
+		err = put_user(status_flags, &uarg->removal_status_flags);
+	return err;
+}
+
+int fscrypt_ioctl_remove_key(struct file *filp, void __user *uarg)
+{
+	return do_remove_key(filp, uarg, false);
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_remove_key);
+
+int fscrypt_ioctl_remove_key_all_users(struct file *filp, void __user *uarg)
+{
+	if (!capable(CAP_SYS_ADMIN))
+		return -EACCES;
+	return do_remove_key(filp, uarg, true);
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_remove_key_all_users);
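A hedged userspace sketch of the removal flow, showing how the informational status flags mentioned above are consumed (UAPI names assumed from <linux/fscrypt.h>; the identifier comes from the add-key ioctl):

    #include <linux/fscrypt.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static int remove_key(int fd, const unsigned char identifier[16])
    {
            struct fscrypt_remove_key_arg arg;

            memset(&arg, 0, sizeof(arg));
            arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
            memcpy(arg.key_spec.u.identifier, identifier, 16);

            if (ioctl(fd, FS_IOC_REMOVE_ENCRYPTION_KEY, &arg) != 0)
                    return -1;

            /* 0 means "did something"; the flags say how far removal got. */
            if (arg.removal_status_flags &
                FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS)
                    fprintf(stderr, "removed this user's claim only\n");
            if (arg.removal_status_flags &
                FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY)
                    fprintf(stderr, "secret wiped, but files still busy\n");
            return 0;
    }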
+
+/*
+ * Retrieve the status of an fscrypt master encryption key.
+ *
+ * We set ->status to indicate whether the key is absent, present, or
+ * incompletely removed.  "Incompletely removed" means that the master key
+ * secret has been removed, but some files which had been unlocked with it are
+ * still in use.  This field allows applications to easily determine the state
+ * of an encrypted directory without using a hack such as trying to open a
+ * regular file in it (which can confuse the "incompletely removed" state with
+ * absent or present).
+ *
+ * In addition, for v2 policy keys we allow applications to determine, via
+ * ->status_flags and ->user_count, whether the key has been added by the
+ * current user, by other users, or by both.  Most applications should not need
+ * this, since ordinarily only one user should know a given key.  However, if a
+ * secret key is shared by multiple users, applications may wish to add an
+ * already-present key to prevent other users from removing it.  This ioctl can
+ * be used to check whether that really is the case before the work is done to
+ * add the key --- which might e.g. require prompting the user for a passphrase.
+ *
+ * For more details, see the "FS_IOC_GET_ENCRYPTION_KEY_STATUS" section of
+ * Documentation/filesystems/fscrypt.rst.
+ */
+int fscrypt_ioctl_get_key_status(struct file *filp, void __user *uarg)
+{
+	struct super_block *sb = file_inode(filp)->i_sb;
+	struct fscrypt_get_key_status_arg arg;
+	struct key *key;
+	struct fscrypt_master_key *mk;
+	int err;
+
+	if (copy_from_user(&arg, uarg, sizeof(arg)))
+		return -EFAULT;
+
+	if (!valid_key_spec(&arg.key_spec))
+		return -EINVAL;
+
+	if (memchr_inv(arg.__reserved, 0, sizeof(arg.__reserved)))
+		return -EINVAL;
+
+	arg.status_flags = 0;
+	arg.user_count = 0;
+	memset(arg.__out_reserved, 0, sizeof(arg.__out_reserved));
+
+	key = fscrypt_find_master_key(sb, &arg.key_spec);
+	if (IS_ERR(key)) {
+		if (key != ERR_PTR(-ENOKEY))
+			return PTR_ERR(key);
+		arg.status = FSCRYPT_KEY_STATUS_ABSENT;
+		err = 0;
+		goto out;
+	}
+	mk = key->payload.data[0];
+	down_read(&key->sem);
+
+	if (!is_master_key_secret_present(&mk->mk_secret)) {
+		arg.status = FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED;
+		err = 0;
+		goto out_release_key;
+	}
+
+	arg.status = FSCRYPT_KEY_STATUS_PRESENT;
+	if (mk->mk_users) {
+		struct key *mk_user;
+
+		arg.user_count = mk->mk_users->keys.nr_leaves_on_tree;
+		mk_user = find_master_key_user(mk);
+		if (!IS_ERR(mk_user)) {
+			arg.status_flags |=
+				FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF;
+			key_put(mk_user);
+		} else if (mk_user != ERR_PTR(-ENOKEY)) {
+			err = PTR_ERR(mk_user);
+			goto out_release_key;
+		}
+	}
+	err = 0;
+out_release_key:
+	up_read(&key->sem);
+	key_put(key);
+out:
+	if (!err && copy_to_user(uarg, &arg, sizeof(arg)))
+		err = -EFAULT;
+	return err;
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_get_key_status);
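And a matching sketch for querying key status, again assuming the UAPI definitions; useful e.g. to distinguish "incompletely removed" from "absent" as described above:

    #include <linux/fscrypt.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static const char *status_name(unsigned int status)
    {
            switch (status) {
            case FSCRYPT_KEY_STATUS_ABSENT:
                    return "absent";
            case FSCRYPT_KEY_STATUS_PRESENT:
                    return "present";
            case FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED:
                    return "incompletely removed";
            }
            return "unknown";
    }

    static int print_key_status(int fd, const unsigned char identifier[16])
    {
            struct fscrypt_get_key_status_arg arg;

            memset(&arg, 0, sizeof(arg));
            arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
            memcpy(arg.key_spec.u.identifier, identifier, 16);

            if (ioctl(fd, FS_IOC_GET_ENCRYPTION_KEY_STATUS, &arg) != 0)
                    return -1;

            printf("status: %s, added by %u user(s)%s\n",
                   status_name(arg.status), arg.user_count,
                   (arg.status_flags & FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF)
                   ? ", including this user" : "");
            return 0;
    }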
+
+int __init fscrypt_init_keyring(void)
+{
+	int err;
+
+	err = register_key_type(&key_type_fscrypt);
+	if (err)
+		return err;
+
+	err = register_key_type(&key_type_fscrypt_user);
+	if (err)
+		goto err_unregister_fscrypt;
+
+	err = register_key_type(&key_type_fscrypt_provisioning);
+	if (err)
+		goto err_unregister_fscrypt_user;
+
+	return 0;
+
+err_unregister_fscrypt_user:
+	unregister_key_type(&key_type_fscrypt_user);
+err_unregister_fscrypt:
+	unregister_key_type(&key_type_fscrypt);
+	return err;
+}
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
new file mode 100644
index 0000000..5414e27
--- /dev/null
+++ b/fs/crypto/keysetup.c
@@ -0,0 +1,605 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Key setup facility for FS encryption support.
+ *
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * Originally written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar.
+ * Heavily modified since then.
+ */
+
+#include <crypto/skcipher.h>
+#include <linux/key.h>
+
+#include "fscrypt_private.h"
+
+struct fscrypt_mode fscrypt_modes[] = {
+	[FSCRYPT_MODE_AES_256_XTS] = {
+		.friendly_name = "AES-256-XTS",
+		.cipher_str = "xts(aes)",
+		.keysize = 64,
+		.ivsize = 16,
+		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
+	},
+	[FSCRYPT_MODE_AES_256_CTS] = {
+		.friendly_name = "AES-256-CTS-CBC",
+		.cipher_str = "cts(cbc(aes))",
+		.keysize = 32,
+		.ivsize = 16,
+	},
+	[FSCRYPT_MODE_AES_128_CBC] = {
+		.friendly_name = "AES-128-CBC-ESSIV",
+		.cipher_str = "essiv(cbc(aes),sha256)",
+		.keysize = 16,
+		.ivsize = 16,
+		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+	},
+	[FSCRYPT_MODE_AES_128_CTS] = {
+		.friendly_name = "AES-128-CTS-CBC",
+		.cipher_str = "cts(cbc(aes))",
+		.keysize = 16,
+		.ivsize = 16,
+	},
+	[FSCRYPT_MODE_ADIANTUM] = {
+		.friendly_name = "Adiantum",
+		.cipher_str = "adiantum(xchacha12,aes)",
+		.keysize = 32,
+		.ivsize = 32,
+		.blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM,
+	},
+	[FSCRYPT_MODE_PRIVATE] = {
+		.friendly_name = "ice",
+		.cipher_str = "xts(aes)",
+		.keysize = 64,
+		.ivsize = 16,
+		.blk_crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
+	},
+};
+
+static struct fscrypt_mode *
+select_encryption_mode(const union fscrypt_policy *policy,
+		       const struct inode *inode)
+{
+	if (S_ISREG(inode->i_mode))
+		return &fscrypt_modes[fscrypt_policy_contents_mode(policy)];
+
+	if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
+		return &fscrypt_modes[fscrypt_policy_fnames_mode(policy)];
+
+	WARN_ONCE(1, "fscrypt: filesystem tried to load encryption info for inode %lu, which is not encryptable (file type %d)\n",
+		  inode->i_ino, (inode->i_mode & S_IFMT));
+	return ERR_PTR(-EINVAL);
+}
+
+/* Create a symmetric cipher object for the given encryption mode and key */
+static struct crypto_skcipher *
+fscrypt_allocate_skcipher(struct fscrypt_mode *mode, const u8 *raw_key,
+			  const struct inode *inode)
+{
+	struct crypto_skcipher *tfm;
+	int err;
+
+	tfm = crypto_alloc_skcipher(mode->cipher_str, 0, 0);
+	if (IS_ERR(tfm)) {
+		if (PTR_ERR(tfm) == -ENOENT) {
+			fscrypt_warn(inode,
+				     "Missing crypto API support for %s (API name: \"%s\")",
+				     mode->friendly_name, mode->cipher_str);
+			return ERR_PTR(-ENOPKG);
+		}
+		fscrypt_err(inode, "Error allocating '%s' transform: %ld",
+			    mode->cipher_str, PTR_ERR(tfm));
+		return tfm;
+	}
+	if (unlikely(!mode->logged_impl_name)) {
+		/*
+		 * fscrypt performance can vary greatly depending on which
+		 * crypto algorithm implementation is used.  Help people debug
+		 * performance problems by logging the ->cra_driver_name the
+		 * first time a mode is used.  Note that multiple threads can
+		 * race here, but it doesn't really matter.
+		 */
+		mode->logged_impl_name = true;
+		pr_info("fscrypt: %s using implementation \"%s\"\n",
+			mode->friendly_name,
+			crypto_skcipher_alg(tfm)->base.cra_driver_name);
+	}
+	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+	err = crypto_skcipher_setkey(tfm, raw_key, mode->keysize);
+	if (err)
+		goto err_free_tfm;
+
+	return tfm;
+
+err_free_tfm:
+	crypto_free_skcipher(tfm);
+	return ERR_PTR(err);
+}
+
+/*
+ * Prepare the crypto transform object or blk-crypto key in @prep_key, given the
+ * raw key, encryption mode, and flag indicating which encryption implementation
+ * (fs-layer or blk-crypto) will be used.
+ */
+int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
+			const u8 *raw_key, unsigned int raw_key_size,
+			bool is_hw_wrapped, const struct fscrypt_info *ci)
+{
+	struct crypto_skcipher *tfm;
+
+	if (fscrypt_using_inline_encryption(ci))
+		return fscrypt_prepare_inline_crypt_key(prep_key,
+				raw_key, raw_key_size, is_hw_wrapped, ci);
+
+	if (WARN_ON(is_hw_wrapped || raw_key_size != ci->ci_mode->keysize))
+		return -EINVAL;
+
+	tfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key, ci->ci_inode);
+	if (IS_ERR(tfm))
+		return PTR_ERR(tfm);
+	/*
+	 * Pairs with READ_ONCE() in fscrypt_is_key_prepared().  (Only matters
+	 * for the per-mode keys, which are shared by multiple inodes.)
+	 */
+	smp_store_release(&prep_key->tfm, tfm);
+	return 0;
+}
+
+/* Destroy a crypto transform object and/or blk-crypto key. */
+void fscrypt_destroy_prepared_key(struct fscrypt_prepared_key *prep_key)
+{
+	crypto_free_skcipher(prep_key->tfm);
+	fscrypt_destroy_inline_crypt_key(prep_key);
+}
+
+/* Given the per-file key, set up the file's crypto transform object */
+int fscrypt_set_derived_key(struct fscrypt_info *ci, const u8 *derived_key)
+{
+	ci->ci_owns_key = true;
+	return fscrypt_prepare_key(&ci->ci_key, derived_key,
+				   ci->ci_mode->keysize, false, ci);
+}
+
+static int setup_per_mode_key(struct fscrypt_info *ci,
+			      struct fscrypt_master_key *mk,
+			      struct fscrypt_prepared_key *keys,
+			      u8 hkdf_context, bool include_fs_uuid)
+{
+	static DEFINE_MUTEX(mode_key_setup_mutex);
+	const struct inode *inode = ci->ci_inode;
+	const struct super_block *sb = inode->i_sb;
+	struct fscrypt_mode *mode = ci->ci_mode;
+	const u8 mode_num = mode - fscrypt_modes;
+	struct fscrypt_prepared_key *prep_key;
+	u8 mode_key[FSCRYPT_MAX_KEY_SIZE];
+	u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)];
+	unsigned int hkdf_infolen = 0;
+	int err;
+
+	if (WARN_ON(mode_num > __FSCRYPT_MODE_MAX))
+		return -EINVAL;
+
+	prep_key = &keys[mode_num];
+	if (fscrypt_is_key_prepared(prep_key, ci)) {
+		ci->ci_key = *prep_key;
+		return 0;
+	}
+
+	mutex_lock(&mode_key_setup_mutex);
+
+	if (fscrypt_is_key_prepared(prep_key, ci))
+		goto done_unlock;
+
+	if (mk->mk_secret.is_hw_wrapped && S_ISREG(inode->i_mode)) {
+		int i;
+
+		if (!fscrypt_using_inline_encryption(ci)) {
+			fscrypt_warn(ci->ci_inode,
+				     "Hardware-wrapped keys require inline encryption (-o inlinecrypt)");
+			err = -EINVAL;
+			goto out_unlock;
+		}
+		for (i = 0; i <= __FSCRYPT_MODE_MAX; i++) {
+			if (fscrypt_is_key_prepared(&keys[i], ci)) {
+				fscrypt_warn(ci->ci_inode,
+					     "Each hardware-wrapped key can only be used with one encryption mode");
+				err = -EINVAL;
+				goto out_unlock;
+			}
+		}
+		err = fscrypt_prepare_key(prep_key, mk->mk_secret.raw,
+					  mk->mk_secret.size, true, ci);
+		if (err)
+			goto out_unlock;
+	} else {
+		BUILD_BUG_ON(sizeof(mode_num) != 1);
+		BUILD_BUG_ON(sizeof(sb->s_uuid) != 16);
+		BUILD_BUG_ON(sizeof(hkdf_info) != 17);
+		hkdf_info[hkdf_infolen++] = mode_num;
+		if (include_fs_uuid) {
+			memcpy(&hkdf_info[hkdf_infolen], &sb->s_uuid,
+				   sizeof(sb->s_uuid));
+			hkdf_infolen += sizeof(sb->s_uuid);
+		}
+		err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
+					  hkdf_context, hkdf_info, hkdf_infolen,
+					  mode_key, mode->keysize);
+		if (err)
+			goto out_unlock;
+		err = fscrypt_prepare_key(prep_key, mode_key, mode->keysize,
+					  false /*is_hw_wrapped*/, ci);
+		memzero_explicit(mode_key, mode->keysize);
+		if (err)
+			goto out_unlock;
+	}
+done_unlock:
+	ci->ci_key = *prep_key;
+	err = 0;
+out_unlock:
+	mutex_unlock(&mode_key_setup_mutex);
+	return err;
+}
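To make the per-mode derivation concrete: fscrypt's HKDF-SHA512 helper prefixes the caller's info with "fscrypt\0" and the context byte, so the non-wrapped branch above amounts to (a sketch, not verbatim kernel code):

    PRK      = HKDF-Extract(salt = "", IKM = master key secret)   [done at add-key time]
    info     = "fscrypt\0" || context || mode_num || [16-byte fs UUID]
    mode key = HKDF-Expand(PRK, info, L = mode->keysize)

where the filesystem UUID is appended only when include_fs_uuid is set (the IV_INO_LBLK_64 case), giving distinct keys per mode and, in that case, per filesystem.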
+
+static int fscrypt_setup_v2_file_key(struct fscrypt_info *ci,
+				     struct fscrypt_master_key *mk)
+{
+	u8 derived_key[FSCRYPT_MAX_KEY_SIZE];
+	int err;
+
+	if (mk->mk_secret.is_hw_wrapped &&
+	    !(ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64)) {
+		fscrypt_warn(ci->ci_inode,
+			     "Hardware-wrapped keys are only supported with IV_INO_LBLK_64 policies");
+		return -EINVAL;
+	}
+
+	if (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
+		/*
+		 * DIRECT_KEY: instead of deriving per-file keys, the per-file
+		 * nonce will be included in all the IVs.  But unlike v1
+		 * policies, for v2 policies in this case we don't encrypt with
+		 * the master key directly but rather derive a per-mode key.
+		 * This ensures that the master key is consistently used only
+		 * for HKDF, avoiding key reuse issues.
+		 */
+		if (!fscrypt_mode_supports_direct_key(ci->ci_mode)) {
+			fscrypt_warn(ci->ci_inode,
+				     "Direct key flag not allowed with %s",
+				     ci->ci_mode->friendly_name);
+			return -EINVAL;
+		}
+		return setup_per_mode_key(ci, mk, mk->mk_direct_keys,
+					  HKDF_CONTEXT_DIRECT_KEY, false);
+	} else if (ci->ci_policy.v2.flags &
+		   FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) {
+		/*
+		 * IV_INO_LBLK_64: encryption keys are derived from (master_key,
+		 * mode_num, filesystem_uuid), and inode number is included in
+		 * the IVs.  This format is optimized for use with inline
+		 * encryption hardware compliant with the UFS or eMMC standards.
+		 */
+		return setup_per_mode_key(ci, mk, mk->mk_iv_ino_lblk_64_keys,
+					  HKDF_CONTEXT_IV_INO_LBLK_64_KEY,
+					  true);
+	}
+
+	err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
+				  HKDF_CONTEXT_PER_FILE_KEY,
+				  ci->ci_nonce, FS_KEY_DERIVATION_NONCE_SIZE,
+				  derived_key, ci->ci_mode->keysize);
+	if (err)
+		return err;
+
+	err = fscrypt_set_derived_key(ci, derived_key);
+	memzero_explicit(derived_key, ci->ci_mode->keysize);
+	return err;
+}
+
+/*
+ * Find the master key, then set up the inode's actual encryption key.
+ *
+ * If the master key is found in the filesystem-level keyring, then the
+ * corresponding 'struct key' is returned in *master_key_ret with
+ * ->mk_secret_sem read-locked.  This is needed to ensure that only one task
+ * links the fscrypt_info into ->mk_decrypted_inodes (as multiple tasks may race
+ * to create an fscrypt_info for the same inode), and to synchronize the master
+ * key being removed with a new inode starting to use it.
+ */
+static int setup_file_encryption_key(struct fscrypt_info *ci,
+				     struct key **master_key_ret)
+{
+	struct key *key;
+	struct fscrypt_master_key *mk = NULL;
+	struct fscrypt_key_specifier mk_spec;
+	int err;
+
+	fscrypt_select_encryption_impl(ci);
+
+	switch (ci->ci_policy.version) {
+	case FSCRYPT_POLICY_V1:
+		mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR;
+		memcpy(mk_spec.u.descriptor,
+		       ci->ci_policy.v1.master_key_descriptor,
+		       FSCRYPT_KEY_DESCRIPTOR_SIZE);
+		break;
+	case FSCRYPT_POLICY_V2:
+		mk_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
+		memcpy(mk_spec.u.identifier,
+		       ci->ci_policy.v2.master_key_identifier,
+		       FSCRYPT_KEY_IDENTIFIER_SIZE);
+		break;
+	default:
+		WARN_ON(1);
+		return -EINVAL;
+	}
+
+	key = fscrypt_find_master_key(ci->ci_inode->i_sb, &mk_spec);
+	if (IS_ERR(key)) {
+		if (key != ERR_PTR(-ENOKEY) ||
+		    ci->ci_policy.version != FSCRYPT_POLICY_V1)
+			return PTR_ERR(key);
+
+		/*
+		 * As a legacy fallback for v1 policies, search for the key in
+		 * the current task's subscribed keyrings too.  Don't move this
+		 * to before the search of ->s_master_keys, since users
+		 * shouldn't be able to override filesystem-level keys.
+		 */
+		return fscrypt_setup_v1_file_key_via_subscribed_keyrings(ci);
+	}
+
+	mk = key->payload.data[0];
+	down_read(&mk->mk_secret_sem);
+
+	/* Has the secret been removed (via FS_IOC_REMOVE_ENCRYPTION_KEY)? */
+	if (!is_master_key_secret_present(&mk->mk_secret)) {
+		err = -ENOKEY;
+		goto out_release_key;
+	}
+
+	/*
+	 * Require that the master key be at least as long as the derived key.
+	 * Otherwise, the derived key cannot possibly contain as much entropy as
+	 * that required by the encryption mode it will be used for.  For v1
+	 * policies it's also required for the KDF to work at all.
+	 */
+	if (mk->mk_secret.size < ci->ci_mode->keysize) {
+		fscrypt_warn(NULL,
+			     "key with %s %*phN is too short (got %u bytes, need %u+ bytes)",
+			     master_key_spec_type(&mk_spec),
+			     master_key_spec_len(&mk_spec), (u8 *)&mk_spec.u,
+			     mk->mk_secret.size, ci->ci_mode->keysize);
+		err = -ENOKEY;
+		goto out_release_key;
+	}
+
+	switch (ci->ci_policy.version) {
+	case FSCRYPT_POLICY_V1:
+		err = fscrypt_setup_v1_file_key(ci, mk->mk_secret.raw);
+		break;
+	case FSCRYPT_POLICY_V2:
+		err = fscrypt_setup_v2_file_key(ci, mk);
+		break;
+	default:
+		WARN_ON(1);
+		err = -EINVAL;
+		break;
+	}
+	if (err)
+		goto out_release_key;
+
+	*master_key_ret = key;
+	return 0;
+
+out_release_key:
+	up_read(&mk->mk_secret_sem);
+	key_put(key);
+	return err;
+}
+
+static void put_crypt_info(struct fscrypt_info *ci)
+{
+	struct key *key;
+
+	if (!ci)
+		return;
+
+	if (ci->ci_direct_key)
+		fscrypt_put_direct_key(ci->ci_direct_key);
+	else if (ci->ci_owns_key)
+		fscrypt_destroy_prepared_key(&ci->ci_key);
+
+	key = ci->ci_master_key;
+	if (key) {
+		struct fscrypt_master_key *mk = key->payload.data[0];
+
+		/*
+		 * Remove this inode from the list of inodes that were unlocked
+		 * with the master key.
+		 *
+		 * In addition, if we're removing the last inode from a key that
+		 * already had its secret removed, invalidate the key so that it
+		 * gets removed from ->s_master_keys.
+		 */
+		spin_lock(&mk->mk_decrypted_inodes_lock);
+		list_del(&ci->ci_master_key_link);
+		spin_unlock(&mk->mk_decrypted_inodes_lock);
+		if (refcount_dec_and_test(&mk->mk_refcount))
+			key_invalidate(key);
+		key_put(key);
+	}
+	memzero_explicit(ci, sizeof(*ci));
+	kmem_cache_free(fscrypt_info_cachep, ci);
+}
+
+int fscrypt_get_encryption_info(struct inode *inode)
+{
+	struct fscrypt_info *crypt_info;
+	union fscrypt_context ctx;
+	struct fscrypt_mode *mode;
+	struct key *master_key = NULL;
+	int res;
+
+	if (fscrypt_has_encryption_key(inode))
+		return 0;
+
+	res = fscrypt_initialize(inode->i_sb->s_cop->flags);
+	if (res)
+		return res;
+
+	res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+	if (res < 0) {
+		if (!fscrypt_dummy_context_enabled(inode) ||
+		    IS_ENCRYPTED(inode)) {
+			fscrypt_warn(inode,
+				     "Error %d getting encryption context",
+				     res);
+			return res;
+		}
+		/* Fake up a context for an unencrypted directory */
+		memset(&ctx, 0, sizeof(ctx));
+		ctx.version = FSCRYPT_CONTEXT_V1;
+		ctx.v1.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
+		ctx.v1.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
+		memset(ctx.v1.master_key_descriptor, 0x42,
+		       FSCRYPT_KEY_DESCRIPTOR_SIZE);
+		res = sizeof(ctx.v1);
+	}
+
+	crypt_info = kmem_cache_zalloc(fscrypt_info_cachep, GFP_NOFS);
+	if (!crypt_info)
+		return -ENOMEM;
+
+	crypt_info->ci_inode = inode;
+
+	res = fscrypt_policy_from_context(&crypt_info->ci_policy, &ctx, res);
+	if (res) {
+		fscrypt_warn(inode,
+			     "Unrecognized or corrupt encryption context");
+		goto out;
+	}
+
+	switch (ctx.version) {
+	case FSCRYPT_CONTEXT_V1:
+		memcpy(crypt_info->ci_nonce, ctx.v1.nonce,
+		       FS_KEY_DERIVATION_NONCE_SIZE);
+		break;
+	case FSCRYPT_CONTEXT_V2:
+		memcpy(crypt_info->ci_nonce, ctx.v2.nonce,
+		       FS_KEY_DERIVATION_NONCE_SIZE);
+		break;
+	default:
+		WARN_ON(1);
+		res = -EINVAL;
+		goto out;
+	}
+
+	if (!fscrypt_supported_policy(&crypt_info->ci_policy, inode)) {
+		res = -EINVAL;
+		goto out;
+	}
+
+	mode = select_encryption_mode(&crypt_info->ci_policy, inode);
+	if (IS_ERR(mode)) {
+		res = PTR_ERR(mode);
+		goto out;
+	}
+	WARN_ON(mode->ivsize > FSCRYPT_MAX_IV_SIZE);
+	crypt_info->ci_mode = mode;
+
+	res = setup_file_encryption_key(crypt_info, &master_key);
+	if (res)
+		goto out;
+
+	if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL) {
+		if (master_key) {
+			struct fscrypt_master_key *mk =
+				master_key->payload.data[0];
+
+			refcount_inc(&mk->mk_refcount);
+			crypt_info->ci_master_key = key_get(master_key);
+			spin_lock(&mk->mk_decrypted_inodes_lock);
+			list_add(&crypt_info->ci_master_key_link,
+				 &mk->mk_decrypted_inodes);
+			spin_unlock(&mk->mk_decrypted_inodes_lock);
+		}
+		crypt_info = NULL;
+	}
+	res = 0;
+out:
+	if (master_key) {
+		struct fscrypt_master_key *mk = master_key->payload.data[0];
+
+		up_read(&mk->mk_secret_sem);
+		key_put(master_key);
+	}
+	if (res == -ENOKEY)
+		res = 0;
+	put_crypt_info(crypt_info);
+	return res;
+}
+EXPORT_SYMBOL(fscrypt_get_encryption_info);
+
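+/*
+ * Illustrative call site (the surrounding function is hypothetical): a
+ * filesystem typically sets up the key before any encrypted I/O or filename
+ * operation, e.g.:
+ *
+ *	static int myfs_file_open(struct inode *inode, struct file *filp)
+ *	{
+ *		int err = fscrypt_get_encryption_info(inode);
+ *
+ *		if (err)
+ *			return err;
+ *		if (!fscrypt_has_encryption_key(inode))
+ *			return -ENOKEY;
+ *		return generic_file_open(inode, filp);
+ *	}
+ *
+ * Note that a 0 return does not guarantee the key is available (e.g. -ENOKEY
+ * is masked above), hence the separate fscrypt_has_encryption_key() check.
+ */
+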
+/**
+ * fscrypt_put_encryption_info - free most of an inode's fscrypt data
+ *
+ * Free the inode's fscrypt_info.  Filesystems must call this when the inode is
+ * being evicted.  An RCU grace period need not have elapsed yet.
+ */
+void fscrypt_put_encryption_info(struct inode *inode)
+{
+	put_crypt_info(inode->i_crypt_info);
+	inode->i_crypt_info = NULL;
+}
+EXPORT_SYMBOL(fscrypt_put_encryption_info);
+
+/**
+ * fscrypt_free_inode - free an inode's fscrypt data requiring RCU delay
+ *
+ * Free the inode's cached decrypted symlink target, if any.  Filesystems must
+ * call this after an RCU grace period, just before they free the inode.
+ */
+void fscrypt_free_inode(struct inode *inode)
+{
+	if (IS_ENCRYPTED(inode) && S_ISLNK(inode->i_mode)) {
+		kfree(inode->i_link);
+		inode->i_link = NULL;
+	}
+}
+EXPORT_SYMBOL(fscrypt_free_inode);
+
+/**
+ * fscrypt_drop_inode - check whether the inode's master key has been removed
+ *
+ * Filesystems supporting fscrypt must call this from their ->drop_inode()
+ * method so that encrypted inodes are evicted as soon as they're no longer in
+ * use and their master key has been removed.
+ *
+ * Return: 1 if fscrypt wants the inode to be evicted now, otherwise 0
+ */
+int fscrypt_drop_inode(struct inode *inode)
+{
+	const struct fscrypt_info *ci = READ_ONCE(inode->i_crypt_info);
+	const struct fscrypt_master_key *mk;
+
+	/*
+	 * If ci is NULL, then the inode doesn't have an encryption key set up
+	 * so it's irrelevant.  If ci_master_key is NULL, then the master key
+	 * was provided via the legacy mechanism of the process-subscribed
+	 * keyrings, so we don't know whether it's been removed or not.
+	 */
+	if (!ci || !ci->ci_master_key)
+		return 0;
+	mk = ci->ci_master_key->payload.data[0];
+
+	/*
+	 * Note: since we aren't holding ->mk_secret_sem, the result here can
+	 * immediately become outdated.  But there's no correctness problem with
+	 * unnecessarily evicting.  Nor is there a correctness problem with not
+	 * evicting while iput() is racing with the key being removed, since
+	 * then the thread removing the key will either evict the inode itself
+	 * or will correctly detect that it wasn't evicted due to the race.
+	 */
+	return !is_master_key_secret_present(&mk->mk_secret);
+}
+EXPORT_SYMBOL_GPL(fscrypt_drop_inode);
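+
+/*
+ * Illustrative wiring (function name hypothetical; the ext4 hunk later in
+ * this patch does the equivalent): a filesystem's ->drop_inode() combines the
+ * generic decision with fscrypt's:
+ *
+ *	static int myfs_drop_inode(struct inode *inode)
+ *	{
+ *		return generic_drop_inode(inode) || fscrypt_drop_inode(inode);
+ *	}
+ */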
diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
new file mode 100644
index 0000000..1f0c19d
--- /dev/null
+++ b/fs/crypto/keysetup_v1.c
@@ -0,0 +1,357 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Key setup for v1 encryption policies
+ *
+ * Copyright 2015, 2019 Google LLC
+ */
+
+/*
+ * This file implements compatibility functions for the original encryption
+ * policy version ("v1"), including:
+ *
+ * - Deriving per-file keys using the AES-128-ECB based KDF
+ *   (rather than the new method of using HKDF-SHA512)
+ *
+ * - Retrieving fscrypt master keys from process-subscribed keyrings
+ *   (rather than the new method of using a filesystem-level keyring)
+ *
+ * - Handling policies with the DIRECT_KEY flag set using a master key table
+ *   (rather than the new method of implementing DIRECT_KEY with per-mode keys
+ *    managed alongside the master keys in the filesystem-level keyring)
+ */
+
+#include <crypto/algapi.h>
+#include <crypto/skcipher.h>
+#include <keys/user-type.h>
+#include <linux/hashtable.h>
+#include <linux/scatterlist.h>
+#include <linux/bio-crypt-ctx.h>
+
+#include "fscrypt_private.h"
+
+/* Table of keys referenced by DIRECT_KEY policies */
+static DEFINE_HASHTABLE(fscrypt_direct_keys, 6); /* 6 bits = 64 buckets */
+static DEFINE_SPINLOCK(fscrypt_direct_keys_lock);
+
+/*
+ * v1 key derivation function.  This generates the derived key by encrypting the
+ * master key with AES-128-ECB using the nonce as the AES key.  This provides a
+ * unique derived key with sufficient entropy for each inode.  However, it's
+ * nonstandard, non-extensible, doesn't evenly distribute the entropy from the
+ * master key, and is trivially reversible: an attacker who compromises a
+ * derived key can "decrypt" it to get back to the master key, then derive any
+ * other key.  For all new code, use HKDF instead.
+ *
+ * The master key must be at least as long as the derived key.  If the master
+ * key is longer, then only the first 'derived_keysize' bytes are used.
+ */
+static int derive_key_aes(const u8 *master_key,
+			  const u8 nonce[FS_KEY_DERIVATION_NONCE_SIZE],
+			  u8 *derived_key, unsigned int derived_keysize)
+{
+	int res = 0;
+	struct skcipher_request *req = NULL;
+	DECLARE_CRYPTO_WAIT(wait);
+	struct scatterlist src_sg, dst_sg;
+	struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
+
+	if (IS_ERR(tfm)) {
+		res = PTR_ERR(tfm);
+		tfm = NULL;
+		goto out;
+	}
+	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+	req = skcipher_request_alloc(tfm, GFP_NOFS);
+	if (!req) {
+		res = -ENOMEM;
+		goto out;
+	}
+	skcipher_request_set_callback(req,
+			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+			crypto_req_done, &wait);
+	res = crypto_skcipher_setkey(tfm, nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+	if (res < 0)
+		goto out;
+
+	sg_init_one(&src_sg, master_key, derived_keysize);
+	sg_init_one(&dst_sg, derived_key, derived_keysize);
+	skcipher_request_set_crypt(req, &src_sg, &dst_sg, derived_keysize,
+				   NULL);
+	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
+out:
+	skcipher_request_free(req);
+	crypto_free_skcipher(tfm);
+	return res;
+}
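+
+/*
+ * In equation form, what derive_key_aes() computes is simply
+ *
+ *	derived_key = AES-128-ECB-Encrypt(key = nonce,
+ *					  data = master_key[0..derived_keysize))
+ *
+ * with each 16-byte block of the master key encrypted independently under the
+ * 16-byte nonce.  Since the nonce is stored in the clear in the encryption
+ * xattr, anyone holding it and a derived key can run the decryption direction
+ * to recover those master key bytes, which is the reversibility problem noted
+ * above.
+ */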
+
+/*
+ * Search the current task's subscribed keyrings for a "logon" key with
+ * description prefix:descriptor, and if found acquire a read lock on it and
+ * return a pointer to its validated payload in *payload_ret.
+ */
+static struct key *
+find_and_lock_process_key(const char *prefix,
+			  const u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE],
+			  unsigned int min_keysize,
+			  const struct fscrypt_key **payload_ret)
+{
+	char *description;
+	struct key *key;
+	const struct user_key_payload *ukp;
+	const struct fscrypt_key *payload;
+
+	description = kasprintf(GFP_NOFS, "%s%*phN", prefix,
+				FSCRYPT_KEY_DESCRIPTOR_SIZE, descriptor);
+	if (!description)
+		return ERR_PTR(-ENOMEM);
+
+	key = request_key(&key_type_logon, description, NULL);
+	kfree(description);
+	if (IS_ERR(key))
+		return key;
+
+	down_read(&key->sem);
+	ukp = user_key_payload_locked(key);
+
+	if (!ukp) /* was the key revoked before we acquired its semaphore? */
+		goto invalid;
+
+	payload = (const struct fscrypt_key *)ukp->data;
+
+	if (ukp->datalen != sizeof(struct fscrypt_key) ||
+	    payload->size < 1 || payload->size > FSCRYPT_MAX_KEY_SIZE) {
+		fscrypt_warn(NULL,
+			     "key with description '%s' has invalid payload",
+			     key->description);
+		goto invalid;
+	}
+
+	if (payload->size < min_keysize) {
+		fscrypt_warn(NULL,
+			     "key with description '%s' is too short (got %u bytes, need %u+ bytes)",
+			     key->description, payload->size, min_keysize);
+		goto invalid;
+	}
+
+	*payload_ret = payload;
+	return key;
+
+invalid:
+	up_read(&key->sem);
+	key_put(key);
+	return ERR_PTR(-ENOKEY);
+}
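+
+/*
+ * For reference, the description searched for is the prefix followed by the
+ * 8-byte descriptor in hex; with the generic FSCRYPT_KEY_DESC_PREFIX this
+ * looks like:
+ *
+ *	fscrypt:0123456789abcdef
+ *
+ * Userspace creates such a key with the "logon" key type, e.g. the
+ * (illustrative) command:
+ *
+ *	keyctl add logon fscrypt:0123456789abcdef <payload> @s
+ *
+ * where <payload> is a serialized struct fscrypt_key as validated above.
+ */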
+
+/* Master key referenced by DIRECT_KEY policy */
+struct fscrypt_direct_key {
+	struct hlist_node		dk_node;
+	refcount_t			dk_refcount;
+	const struct fscrypt_mode	*dk_mode;
+	struct fscrypt_prepared_key	dk_key;
+	u8				dk_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+	u8				dk_raw[FSCRYPT_MAX_KEY_SIZE];
+};
+
+static void free_direct_key(struct fscrypt_direct_key *dk)
+{
+	if (dk) {
+		fscrypt_destroy_prepared_key(&dk->dk_key);
+		kzfree(dk);
+	}
+}
+
+void fscrypt_put_direct_key(struct fscrypt_direct_key *dk)
+{
+	if (!refcount_dec_and_lock(&dk->dk_refcount, &fscrypt_direct_keys_lock))
+		return;
+	hash_del(&dk->dk_node);
+	spin_unlock(&fscrypt_direct_keys_lock);
+
+	free_direct_key(dk);
+}
+
+/*
+ * Find/insert the given key into the fscrypt_direct_keys table.  If found, it
+ * is returned with elevated refcount, and 'to_insert' is freed if non-NULL.  If
+ * not found, 'to_insert' is inserted and returned if it's non-NULL; otherwise
+ * NULL is returned.
+ */
+static struct fscrypt_direct_key *
+find_or_insert_direct_key(struct fscrypt_direct_key *to_insert,
+			  const u8 *raw_key, const struct fscrypt_info *ci)
+{
+	unsigned long hash_key;
+	struct fscrypt_direct_key *dk;
+
+	/*
+	 * Careful: to avoid potentially leaking secret key bytes via timing
+	 * information, we must key the hash table by descriptor rather than by
+	 * raw key, and use crypto_memneq() when comparing raw keys.
+	 */
+
+	BUILD_BUG_ON(sizeof(hash_key) > FSCRYPT_KEY_DESCRIPTOR_SIZE);
+	memcpy(&hash_key, ci->ci_policy.v1.master_key_descriptor,
+	       sizeof(hash_key));
+
+	spin_lock(&fscrypt_direct_keys_lock);
+	hash_for_each_possible(fscrypt_direct_keys, dk, dk_node, hash_key) {
+		if (memcmp(ci->ci_policy.v1.master_key_descriptor,
+			   dk->dk_descriptor, FSCRYPT_KEY_DESCRIPTOR_SIZE) != 0)
+			continue;
+		if (ci->ci_mode != dk->dk_mode)
+			continue;
+		if (!fscrypt_is_key_prepared(&dk->dk_key, ci))
+			continue;
+		if (crypto_memneq(raw_key, dk->dk_raw, ci->ci_mode->keysize))
+			continue;
+		/* using existing tfm with same (descriptor, mode, raw_key) */
+		refcount_inc(&dk->dk_refcount);
+		spin_unlock(&fscrypt_direct_keys_lock);
+		free_direct_key(to_insert);
+		return dk;
+	}
+	if (to_insert)
+		hash_add(fscrypt_direct_keys, &to_insert->dk_node, hash_key);
+	spin_unlock(&fscrypt_direct_keys_lock);
+	return to_insert;
+}
+
+/* Prepare to encrypt directly using the master key in the given mode */
+static struct fscrypt_direct_key *
+fscrypt_get_direct_key(const struct fscrypt_info *ci, const u8 *raw_key)
+{
+	struct fscrypt_direct_key *dk;
+	int err;
+
+	/* Is there already a tfm for this key? */
+	dk = find_or_insert_direct_key(NULL, raw_key, ci);
+	if (dk)
+		return dk;
+
+	/* Nope, allocate one. */
+	dk = kzalloc(sizeof(*dk), GFP_NOFS);
+	if (!dk)
+		return ERR_PTR(-ENOMEM);
+	refcount_set(&dk->dk_refcount, 1);
+	dk->dk_mode = ci->ci_mode;
+	err = fscrypt_prepare_key(&dk->dk_key, raw_key, ci->ci_mode->keysize,
+				  false /*is_hw_wrapped*/, ci);
+	if (err)
+		goto err_free_dk;
+	memcpy(dk->dk_descriptor, ci->ci_policy.v1.master_key_descriptor,
+	       FSCRYPT_KEY_DESCRIPTOR_SIZE);
+	memcpy(dk->dk_raw, raw_key, ci->ci_mode->keysize);
+
+	return find_or_insert_direct_key(dk, raw_key, ci);
+
+err_free_dk:
+	free_direct_key(dk);
+	return ERR_PTR(err);
+}
+
+/* v1 policy, DIRECT_KEY: use the master key directly */
+static int setup_v1_file_key_direct(struct fscrypt_info *ci,
+				    const u8 *raw_master_key)
+{
+	const struct fscrypt_mode *mode = ci->ci_mode;
+	struct fscrypt_direct_key *dk;
+
+	if (!fscrypt_mode_supports_direct_key(mode)) {
+		fscrypt_warn(ci->ci_inode,
+			     "Direct key mode not allowed with %s",
+			     mode->friendly_name);
+		return -EINVAL;
+	}
+
+	if (ci->ci_policy.v1.contents_encryption_mode !=
+	    ci->ci_policy.v1.filenames_encryption_mode) {
+		fscrypt_warn(ci->ci_inode,
+			     "Direct key mode not allowed with different contents and filenames modes");
+		return -EINVAL;
+	}
+
+	dk = fscrypt_get_direct_key(ci, raw_master_key);
+	if (IS_ERR(dk))
+		return PTR_ERR(dk);
+	ci->ci_direct_key = dk;
+	ci->ci_key = dk->dk_key;
+	return 0;
+}
+
+/* v1 policy, !DIRECT_KEY: derive the file's encryption key */
+static int setup_v1_file_key_derived(struct fscrypt_info *ci,
+				     const u8 *raw_master_key)
+{
+	u8 *derived_key;
+	int err;
+	int i;
+	union {
+		u8 bytes[FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE];
+		u32 words[FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE / sizeof(u32)];
+	} key_new;
+
+	/* Support the legacy ICE-based contents encryption mode */
+	if ((fscrypt_policy_contents_mode(&ci->ci_policy) ==
+					  FSCRYPT_MODE_PRIVATE) &&
+					  fscrypt_using_inline_encryption(ci)) {
+		memcpy(key_new.bytes, raw_master_key, ci->ci_mode->keysize);
+
+		for (i = 0; i < ARRAY_SIZE(key_new.words); i++)
+			__cpu_to_be32s(&key_new.words[i]);
+
+		err = fscrypt_prepare_inline_crypt_key(&ci->ci_key,
+						       key_new.bytes,
+						       ci->ci_mode->keysize,
+						       false,
+						       ci);
+		return err;
+	}
+	/*
+	 * This cannot be a stack buffer because it will be passed to the
+	 * scatterlist crypto API during derive_key_aes().
+	 */
+	derived_key = kmalloc(ci->ci_mode->keysize, GFP_NOFS);
+	if (!derived_key)
+		return -ENOMEM;
+
+	err = derive_key_aes(raw_master_key, ci->ci_nonce,
+			     derived_key, ci->ci_mode->keysize);
+	if (err)
+		goto out;
+
+	err = fscrypt_set_derived_key(ci, derived_key);
+out:
+	kzfree(derived_key);
+	return err;
+}
+
+int fscrypt_setup_v1_file_key(struct fscrypt_info *ci, const u8 *raw_master_key)
+{
+	if (ci->ci_policy.v1.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY)
+		return setup_v1_file_key_direct(ci, raw_master_key);
+	else
+		return setup_v1_file_key_derived(ci, raw_master_key);
+}
+
+int fscrypt_setup_v1_file_key_via_subscribed_keyrings(struct fscrypt_info *ci)
+{
+	struct key *key;
+	const struct fscrypt_key *payload;
+	int err;
+
+	key = find_and_lock_process_key(FSCRYPT_KEY_DESC_PREFIX,
+					ci->ci_policy.v1.master_key_descriptor,
+					ci->ci_mode->keysize, &payload);
+	if (key == ERR_PTR(-ENOKEY) && ci->ci_inode->i_sb->s_cop->key_prefix) {
+		key = find_and_lock_process_key(ci->ci_inode->i_sb->s_cop->key_prefix,
+						ci->ci_policy.v1.master_key_descriptor,
+						ci->ci_mode->keysize, &payload);
+	}
+	if (IS_ERR(key))
+		return PTR_ERR(key);
+
+	err = fscrypt_setup_v1_file_key(ci, payload->raw);
+	up_read(&key->sem);
+	key_put(key);
+	return err;
+}
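+
+/*
+ * For reference, the payload parsed above is the v1 UAPI struct fscrypt_key:
+ *
+ *	struct fscrypt_key {
+ *		__u32 mode;	(historical; ignored by this code)
+ *		__u8 raw[FSCRYPT_MAX_KEY_SIZE];
+ *		__u32 size;	(1..FSCRYPT_MAX_KEY_SIZE, and at least the
+ *				 mode's keysize, as checked in
+ *				 find_and_lock_process_key())
+ *	};
+ */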
diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
index 4941fe8..96f5280 100644
--- a/fs/crypto/policy.c
+++ b/fs/crypto/policy.c
@@ -5,8 +5,9 @@
  * Copyright (C) 2015, Google, Inc.
  * Copyright (C) 2015, Motorola Mobility.
  *
- * Written by Michael Halcrow, 2015.
+ * Originally written by Michael Halcrow, 2015.
  * Modified by Jaegeuk Kim, 2015.
+ * Modified by Eric Biggers, 2019 for v2 policy support.
  */
 
 #include <linux/random.h>
@@ -14,70 +15,342 @@
 #include <linux/mount.h>
 #include "fscrypt_private.h"
 
-/*
- * check whether an encryption policy is consistent with an encryption context
+/**
+ * fscrypt_policies_equal - check whether two encryption policies are the same
+ *
+ * Return: %true if equal, else %false
  */
-static bool is_encryption_context_consistent_with_policy(
-				const struct fscrypt_context *ctx,
-				const struct fscrypt_policy *policy)
+bool fscrypt_policies_equal(const union fscrypt_policy *policy1,
+			    const union fscrypt_policy *policy2)
 {
-	return memcmp(ctx->master_key_descriptor, policy->master_key_descriptor,
-		      FS_KEY_DESCRIPTOR_SIZE) == 0 &&
-		(ctx->flags == policy->flags) &&
-		(ctx->contents_encryption_mode ==
-		 policy->contents_encryption_mode) &&
-		(ctx->filenames_encryption_mode ==
-		 policy->filenames_encryption_mode);
+	if (policy1->version != policy2->version)
+		return false;
+
+	return !memcmp(policy1, policy2, fscrypt_policy_size(policy1));
 }
 
-static int create_encryption_context_from_policy(struct inode *inode,
-				const struct fscrypt_policy *policy)
+static bool supported_iv_ino_lblk_64_policy(
+					const struct fscrypt_policy_v2 *policy,
+					const struct inode *inode)
 {
-	struct fscrypt_context ctx;
+	struct super_block *sb = inode->i_sb;
+	int ino_bits = 64, lblk_bits = 64;
 
-	ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
-	memcpy(ctx.master_key_descriptor, policy->master_key_descriptor,
-					FS_KEY_DESCRIPTOR_SIZE);
+	if (policy->flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
+		fscrypt_warn(inode,
+			     "The DIRECT_KEY and IV_INO_LBLK_64 flags are mutually exclusive");
+		return false;
+	}
+	/*
+	 * It's unsafe to include inode numbers in the IVs if the filesystem can
+	 * potentially renumber inodes, e.g. via filesystem shrinking.
+	 */
+	if (!sb->s_cop->has_stable_inodes ||
+	    !sb->s_cop->has_stable_inodes(sb)) {
+		fscrypt_warn(inode,
+			     "Can't use IV_INO_LBLK_64 policy on filesystem '%s' because it doesn't have stable inode numbers",
+			     sb->s_id);
+		return false;
+	}
+	if (sb->s_cop->get_ino_and_lblk_bits)
+		sb->s_cop->get_ino_and_lblk_bits(sb, &ino_bits, &lblk_bits);
+	if (ino_bits > 32 || lblk_bits > 32) {
+		fscrypt_warn(inode,
+			     "Can't use IV_INO_LBLK_64 policy on filesystem '%s' because it doesn't use 32-bit inode and block numbers",
+			     sb->s_id);
+		return false;
+	}
+	return true;
+}
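+
+/*
+ * Why the 32-bit limits: with IV_INO_LBLK_64 the 64-bit IV (and hence the
+ * inline crypto DUN) is packed roughly as
+ *
+ *	iv = ((u64)inode->i_ino << 32) | lblk_num;
+ *
+ * (see fscrypt_generate_iv()), so both the inode number and the logical block
+ * number must fit in 32 bits for the packing to be lossless.
+ */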
 
-	if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
-				     policy->filenames_encryption_mode))
+/**
+ * fscrypt_supported_policy - check whether an encryption policy is supported
+ *
+ * Given an encryption policy, check whether all its encryption modes and other
+ * settings are supported by this kernel.  (But we don't currently don't check
+ * for crypto API support here, so attempting to use an algorithm not configured
+ * into the crypto API will still fail later.)
+ *
+ * Return: %true if supported, else %false
+ */
+bool fscrypt_supported_policy(const union fscrypt_policy *policy_u,
+			      const struct inode *inode)
+{
+	switch (policy_u->version) {
+	case FSCRYPT_POLICY_V1: {
+		const struct fscrypt_policy_v1 *policy = &policy_u->v1;
+
+		if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
+					     policy->filenames_encryption_mode)) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption modes (contents %d, filenames %d)",
+				     policy->contents_encryption_mode,
+				     policy->filenames_encryption_mode);
+			return false;
+		}
+
+		if (policy->flags & ~(FSCRYPT_POLICY_FLAGS_PAD_MASK |
+				      FSCRYPT_POLICY_FLAG_DIRECT_KEY)) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption flags (0x%02x)",
+				     policy->flags);
+			return false;
+		}
+
+		return true;
+	}
+	case FSCRYPT_POLICY_V2: {
+		const struct fscrypt_policy_v2 *policy = &policy_u->v2;
+
+		if (!fscrypt_valid_enc_modes(policy->contents_encryption_mode,
+					     policy->filenames_encryption_mode)) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption modes (contents %d, filenames %d)",
+				     policy->contents_encryption_mode,
+				     policy->filenames_encryption_mode);
+			return false;
+		}
+
+		if (policy->flags & ~FSCRYPT_POLICY_FLAGS_VALID) {
+			fscrypt_warn(inode,
+				     "Unsupported encryption flags (0x%02x)",
+				     policy->flags);
+			return false;
+		}
+
+		if ((policy->flags & FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64) &&
+		    !supported_iv_ino_lblk_64_policy(policy, inode))
+			return false;
+
+		if (memchr_inv(policy->__reserved, 0,
+			       sizeof(policy->__reserved))) {
+			fscrypt_warn(inode,
+				     "Reserved bits set in encryption policy");
+			return false;
+		}
+
+		return true;
+	}
+	}
+	return false;
+}
+
+/**
+ * fscrypt_new_context_from_policy - create a new fscrypt_context from a policy
+ *
+ * Create an fscrypt_context for an inode that is being assigned the given
+ * encryption policy.  A new nonce is randomly generated.
+ *
+ * Return: the size of the new context in bytes.
+ */
+static int fscrypt_new_context_from_policy(union fscrypt_context *ctx_u,
+					   const union fscrypt_policy *policy_u)
+{
+	memset(ctx_u, 0, sizeof(*ctx_u));
+
+	switch (policy_u->version) {
+	case FSCRYPT_POLICY_V1: {
+		const struct fscrypt_policy_v1 *policy = &policy_u->v1;
+		struct fscrypt_context_v1 *ctx = &ctx_u->v1;
+
+		ctx->version = FSCRYPT_CONTEXT_V1;
+		ctx->contents_encryption_mode =
+			policy->contents_encryption_mode;
+		ctx->filenames_encryption_mode =
+			policy->filenames_encryption_mode;
+		ctx->flags = policy->flags;
+		memcpy(ctx->master_key_descriptor,
+		       policy->master_key_descriptor,
+		       sizeof(ctx->master_key_descriptor));
+		get_random_bytes(ctx->nonce, sizeof(ctx->nonce));
+		return sizeof(*ctx);
+	}
+	case FSCRYPT_POLICY_V2: {
+		const struct fscrypt_policy_v2 *policy = &policy_u->v2;
+		struct fscrypt_context_v2 *ctx = &ctx_u->v2;
+
+		ctx->version = FSCRYPT_CONTEXT_V2;
+		ctx->contents_encryption_mode =
+			policy->contents_encryption_mode;
+		ctx->filenames_encryption_mode =
+			policy->filenames_encryption_mode;
+		ctx->flags = policy->flags;
+		memcpy(ctx->master_key_identifier,
+		       policy->master_key_identifier,
+		       sizeof(ctx->master_key_identifier));
+		get_random_bytes(ctx->nonce, sizeof(ctx->nonce));
+		return sizeof(*ctx);
+	}
+	}
+	BUG();
+}
+
+/**
+ * fscrypt_policy_from_context - convert an fscrypt_context to an fscrypt_policy
+ *
+ * Given an fscrypt_context, build the corresponding fscrypt_policy.
+ *
+ * Return: 0 on success, or -EINVAL if the fscrypt_context has an unrecognized
+ * version number or size.
+ *
+ * This does *not* validate the settings within the policy itself, e.g. the
+ * modes, flags, and reserved bits.  Use fscrypt_supported_policy() for that.
+ */
+int fscrypt_policy_from_context(union fscrypt_policy *policy_u,
+				const union fscrypt_context *ctx_u,
+				int ctx_size)
+{
+	memset(policy_u, 0, sizeof(*policy_u));
+
+	if (ctx_size <= 0 || ctx_size != fscrypt_context_size(ctx_u))
 		return -EINVAL;
 
-	if (policy->flags & ~FS_POLICY_FLAGS_VALID)
+	switch (ctx_u->version) {
+	case FSCRYPT_CONTEXT_V1: {
+		const struct fscrypt_context_v1 *ctx = &ctx_u->v1;
+		struct fscrypt_policy_v1 *policy = &policy_u->v1;
+
+		policy->version = FSCRYPT_POLICY_V1;
+		policy->contents_encryption_mode =
+			ctx->contents_encryption_mode;
+		policy->filenames_encryption_mode =
+			ctx->filenames_encryption_mode;
+		policy->flags = ctx->flags;
+		memcpy(policy->master_key_descriptor,
+		       ctx->master_key_descriptor,
+		       sizeof(policy->master_key_descriptor));
+		return 0;
+	}
+	case FSCRYPT_CONTEXT_V2: {
+		const struct fscrypt_context_v2 *ctx = &ctx_u->v2;
+		struct fscrypt_policy_v2 *policy = &policy_u->v2;
+
+		policy->version = FSCRYPT_POLICY_V2;
+		policy->contents_encryption_mode =
+			ctx->contents_encryption_mode;
+		policy->filenames_encryption_mode =
+			ctx->filenames_encryption_mode;
+		policy->flags = ctx->flags;
+		memcpy(policy->__reserved, ctx->__reserved,
+		       sizeof(policy->__reserved));
+		memcpy(policy->master_key_identifier,
+		       ctx->master_key_identifier,
+		       sizeof(policy->master_key_identifier));
+		return 0;
+	}
+	}
+	/* unreachable */
+	return -EINVAL;
+}
+
+/* Retrieve an inode's encryption policy */
+static int fscrypt_get_policy(struct inode *inode, union fscrypt_policy *policy)
+{
+	const struct fscrypt_info *ci;
+	union fscrypt_context ctx;
+	int ret;
+
+	ci = READ_ONCE(inode->i_crypt_info);
+	if (ci) {
+		/* key available, use the cached policy */
+		*policy = ci->ci_policy;
+		return 0;
+	}
+
+	if (!IS_ENCRYPTED(inode))
+		return -ENODATA;
+
+	ret = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+	if (ret < 0)
+		return (ret == -ERANGE) ? -EINVAL : ret;
+
+	return fscrypt_policy_from_context(policy, &ctx, ret);
+}
+
+static int set_encryption_policy(struct inode *inode,
+				 const union fscrypt_policy *policy)
+{
+	union fscrypt_context ctx;
+	int ctxsize;
+	int err;
+
+	if (!fscrypt_supported_policy(policy, inode))
 		return -EINVAL;
 
-	ctx.contents_encryption_mode = policy->contents_encryption_mode;
-	ctx.filenames_encryption_mode = policy->filenames_encryption_mode;
-	ctx.flags = policy->flags;
-	BUILD_BUG_ON(sizeof(ctx.nonce) != FS_KEY_DERIVATION_NONCE_SIZE);
-	get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+	switch (policy->version) {
+	case FSCRYPT_POLICY_V1:
+		/*
+		 * The original encryption policy version provided no way of
+		 * verifying that the correct master key was supplied, which was
+		 * insecure in scenarios where multiple users have access to the
+		 * same encrypted files (even just read-only access).  The new
+		 * encryption policy version fixes this and also implies use of
+		 * an improved key derivation function and allows non-root users
+		 * to securely remove keys.  So as long as compatibility with
+		 * old kernels isn't required, it is recommended to use the new
+		 * policy version for all new encrypted directories.
+		 */
+		pr_warn_once("%s (pid %d) is setting deprecated v1 encryption policy; recommend upgrading to v2.\n",
+			     current->comm, current->pid);
+		break;
+	case FSCRYPT_POLICY_V2:
+		err = fscrypt_verify_key_added(inode->i_sb,
+					       policy->v2.master_key_identifier);
+		if (err)
+			return err;
+		break;
+	default:
+		WARN_ON(1);
+		return -EINVAL;
+	}
 
-	return inode->i_sb->s_cop->set_context(inode, &ctx, sizeof(ctx), NULL);
+	ctxsize = fscrypt_new_context_from_policy(&ctx, policy);
+
+	return inode->i_sb->s_cop->set_context(inode, &ctx, ctxsize, NULL);
 }
 
 int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
 {
-	struct fscrypt_policy policy;
+	union fscrypt_policy policy;
+	union fscrypt_policy existing_policy;
 	struct inode *inode = file_inode(filp);
+	u8 version;
+	int size;
 	int ret;
-	struct fscrypt_context ctx;
 
-	if (copy_from_user(&policy, arg, sizeof(policy)))
+	if (get_user(policy.version, (const u8 __user *)arg))
 		return -EFAULT;
 
+	size = fscrypt_policy_size(&policy);
+	if (size <= 0)
+		return -EINVAL;
+
+	/*
+	 * We should just copy the remaining 'size - 1' bytes here, but a
+	 * bizarre bug in gcc 7 and earlier (fixed by gcc r255731) causes gcc to
+	 * think that size can be 0 here (despite the check above!) *and* that
+	 * it's a compile-time constant.  Thus it would think copy_from_user()
+	 * is passed compile-time constant ULONG_MAX, causing the compile-time
+	 * buffer overflow check to fail, breaking the build. This only occurred
+	 * when building an i386 kernel with -Os and branch profiling enabled.
+	 *
+	 * Work around it by just copying the first byte again...
+	 */
+	version = policy.version;
+	if (copy_from_user(&policy, arg, size))
+		return -EFAULT;
+	policy.version = version;
+
 	if (!inode_owner_or_capable(inode))
 		return -EACCES;
 
-	if (policy.version != 0)
-		return -EINVAL;
-
 	ret = mnt_want_write_file(filp);
 	if (ret)
 		return ret;
 
 	inode_lock(inode);
 
-	ret = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+	ret = fscrypt_get_policy(inode, &existing_policy);
 	if (ret == -ENODATA) {
 		if (!S_ISDIR(inode->i_mode))
 			ret = -ENOTDIR;
@@ -86,14 +359,10 @@
 		else if (!inode->i_sb->s_cop->empty_dir(inode))
 			ret = -ENOTEMPTY;
 		else
-			ret = create_encryption_context_from_policy(inode,
-								    &policy);
-	} else if (ret == sizeof(ctx) &&
-		   is_encryption_context_consistent_with_policy(&ctx,
-								&policy)) {
-		/* The file already uses the same encryption policy. */
-		ret = 0;
-	} else if (ret >= 0 || ret == -ERANGE) {
+			ret = set_encryption_policy(inode, &policy);
+	} else if (ret == -EINVAL ||
+		   (ret == 0 && !fscrypt_policies_equal(&policy,
+							&existing_policy))) {
 		/* The file already uses a different encryption policy. */
 		ret = -EEXIST;
 	}
@@ -105,37 +374,57 @@
 }
 EXPORT_SYMBOL(fscrypt_ioctl_set_policy);
 
+/* Original ioctl version; can only get the original policy version */
 int fscrypt_ioctl_get_policy(struct file *filp, void __user *arg)
 {
-	struct inode *inode = file_inode(filp);
-	struct fscrypt_context ctx;
-	struct fscrypt_policy policy;
-	int res;
+	union fscrypt_policy policy;
+	int err;
 
-	if (!IS_ENCRYPTED(inode))
-		return -ENODATA;
+	err = fscrypt_get_policy(file_inode(filp), &policy);
+	if (err)
+		return err;
 
-	res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
-	if (res < 0 && res != -ERANGE)
-		return res;
-	if (res != sizeof(ctx))
-		return -EINVAL;
-	if (ctx.format != FS_ENCRYPTION_CONTEXT_FORMAT_V1)
+	if (policy.version != FSCRYPT_POLICY_V1)
 		return -EINVAL;
 
-	policy.version = 0;
-	policy.contents_encryption_mode = ctx.contents_encryption_mode;
-	policy.filenames_encryption_mode = ctx.filenames_encryption_mode;
-	policy.flags = ctx.flags;
-	memcpy(policy.master_key_descriptor, ctx.master_key_descriptor,
-				FS_KEY_DESCRIPTOR_SIZE);
-
-	if (copy_to_user(arg, &policy, sizeof(policy)))
+	if (copy_to_user(arg, &policy, sizeof(policy.v1)))
 		return -EFAULT;
 	return 0;
 }
 EXPORT_SYMBOL(fscrypt_ioctl_get_policy);
 
+/* Extended ioctl version; can get policies of any version */
+int fscrypt_ioctl_get_policy_ex(struct file *filp, void __user *uarg)
+{
+	struct fscrypt_get_policy_ex_arg arg;
+	union fscrypt_policy *policy = (union fscrypt_policy *)&arg.policy;
+	size_t policy_size;
+	int err;
+
+	/* arg is policy_size, then policy */
+	BUILD_BUG_ON(offsetof(typeof(arg), policy_size) != 0);
+	BUILD_BUG_ON(offsetofend(typeof(arg), policy_size) !=
+		     offsetof(typeof(arg), policy));
+	BUILD_BUG_ON(sizeof(arg.policy) != sizeof(*policy));
+
+	err = fscrypt_get_policy(file_inode(filp), policy);
+	if (err)
+		return err;
+	policy_size = fscrypt_policy_size(policy);
+
+	if (copy_from_user(&arg, uarg, sizeof(arg.policy_size)))
+		return -EFAULT;
+
+	if (policy_size > arg.policy_size)
+		return -EOVERFLOW;
+	arg.policy_size = policy_size;
+
+	if (copy_to_user(uarg, &arg, sizeof(arg.policy_size) + policy_size))
+		return -EFAULT;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(fscrypt_ioctl_get_policy_ex);
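+
+/*
+ * Illustrative userspace usage (UAPI struct from <linux/fscrypt.h>, with the
+ * layout asserted by the BUILD_BUG_ONs above):
+ *
+ *	struct fscrypt_get_policy_ex_arg arg;
+ *
+ *	arg.policy_size = sizeof(arg.policy);
+ *	if (ioctl(fd, FS_IOC_GET_ENCRYPTION_POLICY_EX, &arg) == 0 &&
+ *	    arg.policy.version == FSCRYPT_POLICY_V2)
+ *		use(arg.policy.v2.master_key_identifier);
+ *
+ * Passing a policy_size smaller than the actual policy fails with EOVERFLOW,
+ * per the check above.
+ */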
+
 /**
  * fscrypt_has_permitted_context() - is a file's encryption policy permitted
  *				     within its directory?
@@ -157,10 +446,8 @@
  */
 int fscrypt_has_permitted_context(struct inode *parent, struct inode *child)
 {
-	const struct fscrypt_operations *cops = parent->i_sb->s_cop;
-	const struct fscrypt_info *parent_ci, *child_ci;
-	struct fscrypt_context parent_ctx, child_ctx;
-	int res;
+	union fscrypt_policy parent_policy, child_policy;
+	int err;
 
 	/* No restrictions on file types which are never encrypted */
 	if (!S_ISREG(child->i_mode) && !S_ISDIR(child->i_mode) &&
@@ -190,41 +477,22 @@
 	 * In any case, if an unexpected error occurs, fall back to "forbidden".
 	 */
 
-	res = fscrypt_get_encryption_info(parent);
-	if (res)
+	err = fscrypt_get_encryption_info(parent);
+	if (err)
 		return 0;
-	res = fscrypt_get_encryption_info(child);
-	if (res)
-		return 0;
-	parent_ci = READ_ONCE(parent->i_crypt_info);
-	child_ci = READ_ONCE(child->i_crypt_info);
-
-	if (parent_ci && child_ci) {
-		return memcmp(parent_ci->ci_master_key_descriptor,
-			      child_ci->ci_master_key_descriptor,
-			      FS_KEY_DESCRIPTOR_SIZE) == 0 &&
-			(parent_ci->ci_data_mode == child_ci->ci_data_mode) &&
-			(parent_ci->ci_filename_mode ==
-			 child_ci->ci_filename_mode) &&
-			(parent_ci->ci_flags == child_ci->ci_flags);
-	}
-
-	res = cops->get_context(parent, &parent_ctx, sizeof(parent_ctx));
-	if (res != sizeof(parent_ctx))
+	err = fscrypt_get_encryption_info(child);
+	if (err)
 		return 0;
 
-	res = cops->get_context(child, &child_ctx, sizeof(child_ctx));
-	if (res != sizeof(child_ctx))
+	err = fscrypt_get_policy(parent, &parent_policy);
+	if (err)
 		return 0;
 
-	return memcmp(parent_ctx.master_key_descriptor,
-		      child_ctx.master_key_descriptor,
-		      FS_KEY_DESCRIPTOR_SIZE) == 0 &&
-		(parent_ctx.contents_encryption_mode ==
-		 child_ctx.contents_encryption_mode) &&
-		(parent_ctx.filenames_encryption_mode ==
-		 child_ctx.filenames_encryption_mode) &&
-		(parent_ctx.flags == child_ctx.flags);
+	err = fscrypt_get_policy(child, &child_policy);
+	if (err)
+		return 0;
+
+	return fscrypt_policies_equal(&parent_policy, &child_policy);
 }
 EXPORT_SYMBOL(fscrypt_has_permitted_context);
 
@@ -240,7 +508,8 @@
 int fscrypt_inherit_context(struct inode *parent, struct inode *child,
 						void *fs_data, bool preload)
 {
-	struct fscrypt_context ctx;
+	union fscrypt_context ctx;
+	int ctxsize;
 	struct fscrypt_info *ci;
 	int res;
 
@@ -252,16 +521,10 @@
 	if (ci == NULL)
 		return -ENOKEY;
 
-	ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
-	ctx.contents_encryption_mode = ci->ci_data_mode;
-	ctx.filenames_encryption_mode = ci->ci_filename_mode;
-	ctx.flags = ci->ci_flags;
-	memcpy(ctx.master_key_descriptor, ci->ci_master_key_descriptor,
-	       FS_KEY_DESCRIPTOR_SIZE);
-	get_random_bytes(ctx.nonce, FS_KEY_DERIVATION_NONCE_SIZE);
+	ctxsize = fscrypt_new_context_from_policy(&ctx, &ci->ci_policy);
+
 	BUILD_BUG_ON(sizeof(ctx) != FSCRYPT_SET_CONTEXT_MAX_SIZE);
-	res = parent->i_sb->s_cop->set_context(child, &ctx,
-						sizeof(ctx), fs_data);
+	res = parent->i_sb->s_cop->set_context(child, &ctx, ctxsize, fs_data);
 	if (res)
 		return res;
 	return preload ? fscrypt_get_encryption_info(child): 0;
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 5362449..0cfb4d6 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -23,6 +23,7 @@
 #include <linux/module.h>
 #include <linux/types.h>
 #include <linux/fs.h>
+#include <linux/fscrypt.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/highmem.h>
@@ -37,7 +38,6 @@
 #include <linux/uio.h>
 #include <linux/atomic.h>
 #include <linux/prefetch.h>
-#include <linux/fscrypt.h>
 
 /*
  * How many user pages to map in one call to get_user_pages().  This determines
@@ -431,6 +431,7 @@
 	      sector_t first_sector, int nr_vecs)
 {
 	struct bio *bio;
+	struct inode *inode = dio->inode;
 
 	/*
 	 * bio_alloc() is guaranteed to return a bio when allowed to sleep and
@@ -438,6 +439,9 @@
 	 */
 	bio = bio_alloc(GFP_KERNEL, nr_vecs);
 
+	fscrypt_set_bio_crypt_ctx(bio, inode,
+				  sdio->cur_page_fs_offset >> inode->i_blkbits,
+				  GFP_KERNEL);
 	bio_set_dev(bio, bdev);
 	bio->bi_iter.bi_sector = first_sector;
 	bio_set_op_attrs(bio, dio->op, dio->op_flags);
@@ -452,23 +456,6 @@
 	sdio->logical_offset_in_bio = sdio->cur_page_fs_offset;
 }
 
-#ifdef CONFIG_PFK
-static bool is_inode_filesystem_type(const struct inode *inode,
-					const char *fs_type)
-{
-	if (!inode || !fs_type)
-		return false;
-
-	if (!inode->i_sb)
-		return false;
-
-	if (!inode->i_sb->s_type)
-		return false;
-
-	return (strcmp(inode->i_sb->s_type->name, fs_type) == 0);
-}
-#endif
-
 /*
  * In the AIO read case we speculatively dirty the pages before starting IO.
  * During IO completion, any of these pages which happen to have been written
@@ -491,17 +478,7 @@
 		bio_set_pages_dirty(bio);
 
 	dio->bio_disk = bio->bi_disk;
-#ifdef CONFIG_PFK
-	bio->bi_dio_inode = dio->inode;
 
-/* iv sector for security/pfe/pfk_fscrypt.c and f2fs in fs/f2fs/f2fs.h */
-#define PG_DUN_NEW(i, p)                                            \
-	(((((u64)(i)->i_ino) & 0xffffffff) << 32) | ((p) & 0xffffffff))
-
-	if (is_inode_filesystem_type(dio->inode, "f2fs"))
-		fscrypt_set_ice_dun(dio->inode, bio, PG_DUN_NEW(dio->inode,
-			(sdio->logical_offset_in_bio >> PAGE_SHIFT)));
-#endif
 	if (sdio->submit_io) {
 		sdio->submit_io(bio, dio->inode, sdio->logical_offset_in_bio);
 		dio->bio_cookie = BLK_QC_T_NONE;
@@ -513,18 +490,6 @@
 	sdio->logical_offset_in_bio = 0;
 }
 
-struct inode *dio_bio_get_inode(struct bio *bio)
-{
-	struct inode *inode = NULL;
-
-	if (bio == NULL)
-		return NULL;
-#ifdef CONFIG_PFK
-	inode = bio->bi_dio_inode;
-#endif
-	return inode;
-}
-
 /*
  * Release any resources in case of a failure
  */
diff --git a/fs/ext4/Kconfig b/fs/ext4/Kconfig
index 3ed1939..037358b 100644
--- a/fs/ext4/Kconfig
+++ b/fs/ext4/Kconfig
@@ -106,16 +106,10 @@
 	  files
 
 config EXT4_FS_ENCRYPTION
-	bool "Ext4 FS Encryption"
-	default n
+	bool
+	default y
 	depends on EXT4_ENCRYPTION
 
-config EXT4_FS_ICE_ENCRYPTION
-	bool "Ext4 Encryption with ICE support"
-	default n
-	depends on EXT4_FS_ENCRYPTION
-	depends on PFK
-
 config EXT4_DEBUG
 	bool "EXT4 debugging support"
 	depends on EXT4_FS
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 734dc63..56f9de2 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -224,10 +224,7 @@
 	ssize_t			size;		/* size of the extent */
 } ext4_io_end_t;
 
-#define EXT4_IO_ENCRYPTED	1
-
 struct ext4_io_submit {
-	unsigned int		io_flags;
 	struct writeback_control *io_wbc;
 	struct bio		*io_bio;
 	ext4_io_end_t		*io_end;
@@ -1143,6 +1140,7 @@
 #define EXT4_MOUNT_JOURNAL_CHECKSUM	0x800000 /* Journal checksums */
 #define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT	0x1000000 /* Journal Async Commit */
 #define EXT4_MOUNT_WARN_ON_ERROR	0x2000000 /* Trigger WARN_ON on error */
+#define EXT4_MOUNT_INLINECRYPT		0x4000000 /* Inline encryption support */
 #define EXT4_MOUNT_DELALLOC		0x8000000 /* Delalloc support */
 #define EXT4_MOUNT_DATA_ERR_ABORT	0x10000000 /* Abort on file data write */
 #define EXT4_MOUNT_BLOCK_VALIDITY	0x20000000 /* Block validity checking */
@@ -1669,6 +1667,7 @@
 #define EXT4_FEATURE_COMPAT_RESIZE_INODE	0x0010
 #define EXT4_FEATURE_COMPAT_DIR_INDEX		0x0020
 #define EXT4_FEATURE_COMPAT_SPARSE_SUPER2	0x0200
+#define EXT4_FEATURE_COMPAT_STABLE_INODES	0x0800
 
 #define EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER	0x0001
 #define EXT4_FEATURE_RO_COMPAT_LARGE_FILE	0x0002
@@ -1770,6 +1769,7 @@
 EXT4_FEATURE_COMPAT_FUNCS(resize_inode,		RESIZE_INODE)
 EXT4_FEATURE_COMPAT_FUNCS(dir_index,		DIR_INDEX)
 EXT4_FEATURE_COMPAT_FUNCS(sparse_super2,	SPARSE_SUPER2)
+EXT4_FEATURE_COMPAT_FUNCS(stable_inodes,	STABLE_INODES)
 
 EXT4_FEATURE_RO_COMPAT_FUNCS(sparse_super,	SPARSE_SUPER)
 EXT4_FEATURE_RO_COMPAT_FUNCS(large_file,	LARGE_FILE)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 52cbf51..e8d1c11 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1235,12 +1235,9 @@
 		if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
 		    !buffer_unwritten(bh) &&
 		    (block_start < from || block_end > to)) {
-			decrypt = IS_ENCRYPTED(inode) &&
-				S_ISREG(inode->i_mode) &&
-				!fscrypt_using_hardware_encryption(inode);
-			ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0),
-				    1, &bh);
+			ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 			*wait_bh++ = bh;
+			decrypt = fscrypt_inode_uses_fs_layer_crypto(inode);
 		}
 	}
 	/*
@@ -3806,14 +3803,10 @@
 		get_block_func = ext4_dio_get_block_unwritten_async;
 		dio_flags = DIO_LOCKING;
 	}
-#if defined(CONFIG_FS_ENCRYPTION)
-	WARN_ON(IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)
-		&& !fscrypt_using_hardware_encryption(inode));
-#endif
-	ret = __blockdev_direct_IO(iocb, inode,
-				   inode->i_sb->s_bdev, iter,
-				   get_block_func,
-				   ext4_end_io_dio, NULL, dio_flags);
+
+	ret = __blockdev_direct_IO(iocb, inode, inode->i_sb->s_bdev, iter,
+				   get_block_func, ext4_end_io_dio, NULL,
+				   dio_flags);
 
 	if (ret > 0 && !overwrite && ext4_test_inode_state(inode,
 						EXT4_STATE_DIO_UNWRITTEN)) {
@@ -3926,11 +3919,12 @@
 	ssize_t ret;
 	int rw = iov_iter_rw(iter);
 
-#ifdef CONFIG_FS_ENCRYPTION
-	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)
-		&& !fscrypt_using_hardware_encryption(inode))
-		return 0;
-#endif
+	if (IS_ENABLED(CONFIG_FS_ENCRYPTION) && IS_ENCRYPTED(inode)) {
+		if (!fscrypt_inode_uses_inline_crypto(inode) ||
+		    !IS_ALIGNED(iocb->ki_pos | iov_iter_alignment(iter),
+				i_blocksize(inode)))
+			return 0;
+	}
 	if (fsverity_active(inode))
 		return 0;
 
@@ -4097,7 +4091,6 @@
 	struct inode *inode = mapping->host;
 	struct buffer_head *bh;
 	struct page *page;
-	bool decrypt;
 	int err = 0;
 
 	page = find_or_create_page(mapping, from >> PAGE_SHIFT,
@@ -4140,14 +4133,12 @@
 
 	if (!buffer_uptodate(bh)) {
 		err = -EIO;
-		decrypt = S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode) &&
-		    !fscrypt_using_hardware_encryption(inode);
-		ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0), 1, &bh);
+		ll_rw_block(REQ_OP_READ, 0, 1, &bh);
 		wait_on_buffer(bh);
 		/* Uhhuh. Read error. Complain and punt. */
 		if (!buffer_uptodate(bh))
 			goto unlock;
-		if (decrypt) {
+		if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
 			/* We expect the key to be set. */
 			BUG_ON(!fscrypt_has_encryption_key(inode));
 			BUG_ON(blocksize != PAGE_SIZE);
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index 1385541..96f8329 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -1131,8 +1131,35 @@
 #endif
 	}
 	case EXT4_IOC_GET_ENCRYPTION_POLICY:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
 		return fscrypt_ioctl_get_policy(filp, (void __user *)arg);
 
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_get_policy_ex(filp, (void __user *)arg);
+
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_add_key(filp, (void __user *)arg);
+
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_remove_key(filp, (void __user *)arg);
+
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_remove_key_all_users(filp,
+							  (void __user *)arg);
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+		if (!ext4_has_feature_encrypt(sb))
+			return -EOPNOTSUPP;
+		return fscrypt_ioctl_get_key_status(filp, (void __user *)arg);
+
 	case EXT4_IOC_FSGETXATTR:
 	{
 		struct fsxattr fa;
@@ -1265,6 +1292,11 @@
 	case EXT4_IOC_SET_ENCRYPTION_POLICY:
 	case EXT4_IOC_GET_ENCRYPTION_PWSALT:
 	case EXT4_IOC_GET_ENCRYPTION_POLICY:
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
 	case EXT4_IOC_SHUTDOWN:
 	case FS_IOC_GETFSMAP:
 	case FS_IOC_ENABLE_VERITY:
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 1539ab5..8518086 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -344,8 +344,6 @@
 		int io_op_flags = io->io_wbc->sync_mode == WB_SYNC_ALL ?
 				  REQ_SYNC : 0;
 		io->io_bio->bi_write_hint = io->io_end->inode->i_write_hint;
-		if (io->io_flags & EXT4_IO_ENCRYPTED)
-			io_op_flags |= REQ_NOENCRYPT;
 		bio_set_op_attrs(io->io_bio, REQ_OP_WRITE, io_op_flags);
 		submit_bio(io->io_bio);
 	}
@@ -355,7 +353,6 @@
 void ext4_io_submit_init(struct ext4_io_submit *io,
 			 struct writeback_control *wbc)
 {
-	io->io_flags = 0;
 	io->io_wbc = wbc;
 	io->io_bio = NULL;
 	io->io_end = NULL;
@@ -369,6 +366,7 @@
 	bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
 	if (!bio)
 		return -ENOMEM;
+	fscrypt_set_bio_crypt_ctx_bh(bio, bh, GFP_NOIO);
 	wbc_init_bio(io->io_wbc, bio);
 	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
 	bio_set_dev(bio, bh->b_bdev);
@@ -386,7 +384,8 @@
 {
 	int ret;
 
-	if (io->io_bio && bh->b_blocknr != io->io_next_block) {
+	if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
+			   !fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
 submit_and_retry:
 		ext4_io_submit(io);
 	}
@@ -472,12 +471,11 @@
 
 	bh = head = page_buffers(page);
 
-	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode) && nr_to_submit) {
+	if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) {
 		gfp_t gfp_flags = GFP_NOFS;
 
 	retry_encrypt:
-		if (!fscrypt_using_hardware_encryption(inode))
-			bounce_page = fscrypt_encrypt_pagecache_blocks(page,
+		bounce_page = fscrypt_encrypt_pagecache_blocks(page,
 					PAGE_SIZE,0, gfp_flags);
 		if (IS_ERR(bounce_page)) {
 			ret = PTR_ERR(bounce_page);
@@ -498,8 +496,6 @@
 	do {
 		if (!buffer_async_write(bh))
 			continue;
-		if (bounce_page)
-			io->io_flags |= EXT4_IO_ENCRYPTED;
 		ret = io_submit_add_bh(io, inode, bounce_page ?: page, bh);
 		if (ret) {
 			/*
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index 72f59b2..eb9c630 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -198,7 +198,7 @@
 	unsigned int post_read_steps = 0;
 	struct bio_post_read_ctx *ctx = NULL;
 
-	if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode))
+	if (fscrypt_inode_uses_fs_layer_crypto(inode))
 		post_read_steps |= 1 << STEP_DECRYPT;
 
 	if (ext4_need_verity(inode, first_idx))
@@ -259,6 +259,7 @@
 	const unsigned blkbits = inode->i_blkbits;
 	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
 	const unsigned blocksize = 1 << blkbits;
+	sector_t next_block;
 	sector_t block_in_file;
 	sector_t last_block;
 	sector_t last_block_in_file;
@@ -290,7 +291,8 @@
 		if (page_has_buffers(page))
 			goto confused;
 
-		block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits);
+		block_in_file = next_block =
+			(sector_t)page->index << (PAGE_SHIFT - blkbits);
 		last_block = block_in_file + nr_pages * blocks_per_page;
 		last_block_in_file = (ext4_readpage_limit(inode) +
 				      blocksize - 1) >> blkbits;
@@ -390,19 +392,21 @@
 		 * This page will go to BIO.  Do we need to send this
 		 * BIO off first?
 		 */
-		if (bio && (last_block_in_bio != blocks[0] - 1)) {
+		if (bio && (last_block_in_bio != blocks[0] - 1 ||
+			    !fscrypt_mergeable_bio(bio, inode, next_block))) {
 		submit_and_realloc:
 			ext4_submit_bio_read(bio);
 			bio = NULL;
 		}
 		if (bio == NULL) {
 			struct bio_post_read_ctx *ctx;
-			unsigned int flags = 0;
 
 			bio = bio_alloc(GFP_KERNEL,
 				min_t(int, nr_pages, BIO_MAX_PAGES));
 			if (!bio)
 				goto set_error_page;
+			fscrypt_set_bio_crypt_ctx(bio, inode, next_block,
+						  GFP_KERNEL);
 			ctx = get_bio_post_read_ctx(inode, bio, page->index);
 			if (IS_ERR(ctx)) {
 				bio_put(bio);
@@ -413,10 +417,8 @@
 			bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
 			bio->bi_end_io = mpage_end_io;
 			bio->bi_private = ctx;
-			if (is_readahead)
-				flags = flags | REQ_RAHEAD;
-			flags = flags | (ctx ? REQ_NOENCRYPT : 0);
-			bio_set_op_attrs(bio, REQ_OP_READ, flags);
+			bio_set_op_attrs(bio, REQ_OP_READ,
+						is_readahead ? REQ_RAHEAD : 0);
 		}
 
 		length = first_hole << blkbits;
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 4ff9461..efcb091 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -71,7 +71,6 @@
 static void ext4_clear_journal_err(struct super_block *sb,
 				   struct ext4_super_block *es);
 static int ext4_sync_fs(struct super_block *sb, int wait);
-static void ext4_umount_end(struct super_block *sb, int flags);
 static int ext4_remount(struct super_block *sb, int *flags, char *data);
 static int ext4_statfs(struct dentry *dentry, struct kstatfs *buf);
 static int ext4_unfreeze(struct super_block *sb);
@@ -1108,6 +1107,9 @@
 {
 	int drop = generic_drop_inode(inode);
 
+	if (!drop)
+		drop = fscrypt_drop_inode(inode);
+
 	trace_ext4_drop_inode(inode, drop);
 	return drop;
 }
@@ -1347,9 +1349,21 @@
 	return DUMMY_ENCRYPTION_ENABLED(EXT4_SB(inode->i_sb));
 }
 
-static inline bool ext4_is_encrypted(struct inode *inode)
+static bool ext4_has_stable_inodes(struct super_block *sb)
 {
-	return IS_ENCRYPTED(inode);
+	return ext4_has_feature_stable_inodes(sb);
+}
+
+static void ext4_get_ino_and_lblk_bits(struct super_block *sb,
+				       int *ino_bits_ret, int *lblk_bits_ret)
+{
+	*ino_bits_ret = 8 * sizeof(EXT4_SB(sb)->s_es->s_inodes_count);
+	*lblk_bits_ret = 8 * sizeof(ext4_lblk_t);
+}
+
+static bool ext4_inline_crypt_enabled(struct super_block *sb)
+{
+	return test_opt(sb, INLINECRYPT);
 }
 
 static const struct fscrypt_operations ext4_cryptops = {
@@ -1359,7 +1373,9 @@
 	.dummy_context		= ext4_dummy_context,
 	.empty_dir		= ext4_empty_dir,
 	.max_namelen		= EXT4_NAME_LEN,
-	.is_encrypted       = ext4_is_encrypted,
+	.has_stable_inodes	= ext4_has_stable_inodes,
+	.get_ino_and_lblk_bits	= ext4_get_ino_and_lblk_bits,
+	.inline_crypt_enabled	= ext4_inline_crypt_enabled,
 };
 #endif
 
@@ -1427,7 +1443,6 @@
 	.freeze_fs	= ext4_freeze,
 	.unfreeze_fs	= ext4_unfreeze,
 	.statfs		= ext4_statfs,
-	.umount_end	= ext4_umount_end,
 	.remount_fs	= ext4_remount,
 	.show_options	= ext4_show_options,
 #ifdef CONFIG_QUOTA
@@ -1455,6 +1470,7 @@
 	Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit,
 	Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
 	Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption,
+	Opt_inlinecrypt,
 	Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
 	Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota,
 	Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err,
@@ -1551,6 +1567,7 @@
 	{Opt_noinit_itable, "noinit_itable"},
 	{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
 	{Opt_test_dummy_encryption, "test_dummy_encryption"},
+	{Opt_inlinecrypt, "inlinecrypt"},
 	{Opt_nombcache, "nombcache"},
 	{Opt_nombcache, "no_mbcache"},	/* for backward compatibility */
 	{Opt_removed, "check=none"},	/* mount option from ext2/3 */
@@ -1762,6 +1779,11 @@
 	{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
 	{Opt_max_dir_size_kb, 0, MOPT_GTE0},
 	{Opt_test_dummy_encryption, 0, MOPT_GTE0},
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+	{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_SET},
+#else
+	{Opt_inlinecrypt, EXT4_MOUNT_INLINECRYPT, MOPT_NOSUPPORT},
+#endif
 	{Opt_nombcache, EXT4_MOUNT_NO_MBCACHE, MOPT_SET},
 	{Opt_err, 0, 0}
 };
@@ -5266,25 +5288,6 @@
 #endif
 };
 
-static void ext4_umount_end(struct super_block *sb, int flags)
-{
-	/*
-	 * this is called at the end of umount(2). If there is an unclosed
-	 * namespace, ext4 won't do put_super() which triggers fsck in the
-	 * next boot.
-	 */
-	if ((flags & MNT_FORCE) || atomic_read(&sb->s_active) > 1) {
-		ext4_msg(sb, KERN_ERR,
-			"errors=remount-ro for active namespaces on umount %x",
-						flags);
-		clear_opt(sb, ERRORS_PANIC);
-		set_opt(sb, ERRORS_RO);
-		/* to write the latest s_kbytes_written */
-		if (!(sb->s_flags & MS_RDONLY))
-			ext4_commit_super(sb, 1);
-	}
-}
-
 static int ext4_remount(struct super_block *sb, int *flags, char *data)
 {
 	struct ext4_super_block *es;
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 670da21..3a02d79 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -1216,21 +1216,19 @@
 		goto retry_flush_quotas;
 	}
 
-retry_flush_nodes:
 	down_write(&sbi->node_write);
 
 	if (get_pages(sbi, F2FS_DIRTY_NODES)) {
 		up_write(&sbi->node_write);
+		up_write(&sbi->node_change);
+		f2fs_unlock_all(sbi);
 		atomic_inc(&sbi->wb_sync_req[NODE]);
 		err = f2fs_sync_node_pages(sbi, &wbc, false, FS_CP_NODE_IO);
 		atomic_dec(&sbi->wb_sync_req[NODE]);
-		if (err) {
-			up_write(&sbi->node_change);
-			f2fs_unlock_all(sbi);
+		if (err)
 			goto out;
-		}
 		cond_resched();
-		goto retry_flush_nodes;
+		goto retry_flush_quotas;
 	}
 
 	/*
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index d2c0075..8ebefd7 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -317,6 +317,37 @@
 	return bio;
 }
 
+static void f2fs_set_bio_crypt_ctx(struct bio *bio, const struct inode *inode,
+				  pgoff_t first_idx,
+				  const struct f2fs_io_info *fio,
+				  gfp_t gfp_mask)
+{
+	/*
+	 * The f2fs garbage collector sets ->encrypted_page when it wants to
+	 * read/write raw data without encryption.
+	 */
+	if (!fio || !fio->encrypted_page)
+		fscrypt_set_bio_crypt_ctx(bio, inode, first_idx, gfp_mask);
+	else if (fscrypt_inode_should_skip_dm_default_key(inode))
+		bio_set_skip_dm_default_key(bio);
+}
+
+static bool f2fs_crypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+				     pgoff_t next_idx,
+				     const struct f2fs_io_info *fio)
+{
+	/*
+	 * The f2fs garbage collector sets ->encrypted_page when it wants to
+	 * read/write raw data without encryption.
+	 */
+	if (fio && fio->encrypted_page)
+		return !bio_has_crypt_ctx(bio) &&
+			(bio_should_skip_dm_default_key(bio) ==
+			 fscrypt_inode_should_skip_dm_default_key(inode));
+
+	return fscrypt_mergeable_bio(bio, inode, next_idx);
+}
+
 static inline void __submit_bio(struct f2fs_sb_info *sbi,
 				struct bio *bio, enum page_type type)
 {
@@ -514,7 +545,6 @@
 	struct bio *bio;
 	struct page *page = fio->encrypted_page ?
 			fio->encrypted_page : fio->page;
-	struct inode *inode = fio->page->mapping->host;
 
 	if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
 			fio->is_por ? META_POR : (__is_meta_io(fio) ?
@@ -527,15 +557,17 @@
 	/* Allocate a new bio */
 	bio = __bio_alloc(fio, 1);
 
-	if (f2fs_may_encrypt_bio(inode, fio))
-		fscrypt_set_ice_dun(inode, bio, PG_DUN(inode, fio->page));
-	fscrypt_set_ice_skip(bio, fio->encrypted_page ? 1 : 0);
+	f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
+			       fio->page->index, fio, GFP_NOIO);
 
 	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
 		bio_put(bio);
 		return -EFAULT;
 	}
-	fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
+
+	if (fio->io_wbc && !is_read_io(fio->op))
+		wbc_account_io(fio->io_wbc, page, PAGE_SIZE);
+
 	bio_set_op_attrs(bio, fio->op, fio->op_flags);
 
 	inc_page_count(fio->sbi, is_read_io(fio->op) ?
@@ -710,10 +742,6 @@
 	struct bio *bio = *fio->bio;
 	struct page *page = fio->encrypted_page ?
 			fio->encrypted_page : fio->page;
-	struct inode *inode;
-	bool bio_encrypted;
-	int bi_crypt_skip;
-	u64 dun;
 
 	if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
 			__is_meta_io(fio) ? META_GENERIC : DATA_GENERIC))
@@ -722,29 +750,20 @@
 	trace_f2fs_submit_page_bio(page, fio);
 	f2fs_trace_ios(fio, 0);
 
-	inode = fio->page->mapping->host;
-	dun = PG_DUN(inode, fio->page);
-	bi_crypt_skip = fio->encrypted_page ? 1 : 0;
-	bio_encrypted = f2fs_may_encrypt_bio(inode, fio);
-	fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
-
-	if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block,
-						fio->new_blkaddr))
+	if (bio && (!page_is_mergeable(fio->sbi, bio, *fio->last_block,
+						fio->new_blkaddr) ||
+		    !f2fs_crypt_mergeable_bio(bio, fio->page->mapping->host,
+					      fio->page->index, fio)))
 		f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
 
-	/* ICE support */
-	if (bio && !fscrypt_mergeable_bio(bio, dun,
-				bio_encrypted, bi_crypt_skip))
-		f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
 alloc_new:
 	if (!bio) {
 		bio = __bio_alloc(fio, BIO_MAX_PAGES);
+		f2fs_set_bio_crypt_ctx(bio, fio->page->mapping->host,
+				       fio->page->index, fio,
+				       GFP_NOIO);
 		bio_set_op_attrs(bio, fio->op, fio->op_flags);
 
-		if (bio_encrypted)
-			fscrypt_set_ice_dun(inode, bio, dun);
-		fscrypt_set_ice_skip(bio, bi_crypt_skip);
-
 		add_bio_entry(fio->sbi, bio, page, fio->temp);
 	} else {
 		if (add_ipu_page(fio->sbi, &bio, page))
@@ -768,10 +787,6 @@
 	enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
 	struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
 	struct page *bio_page;
-	struct inode *inode;
-	bool bio_encrypted;
-	int bi_crypt_skip;
-	u64 dun;
 
 	f2fs_bug_on(sbi, is_read_io(fio->op));
 
@@ -792,25 +807,18 @@
 	verify_fio_blkaddr(fio);
 
 	bio_page = fio->encrypted_page ? fio->encrypted_page : fio->page;
-	inode = fio->page->mapping->host;
-	dun = PG_DUN(inode, fio->page);
-	bi_crypt_skip = fio->encrypted_page ? 1 : 0;
-	bio_encrypted = f2fs_may_encrypt_bio(inode, fio);
-	fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
 
 	/* set submitted = true as a return value */
 	fio->submitted = true;
 
 	inc_page_count(sbi, WB_DATA_TYPE(bio_page));
 
-	if (io->bio && !io_is_mergeable(sbi, io->bio, io, fio,
-			io->last_block_in_bio, fio->new_blkaddr))
+	if (io->bio &&
+	    (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio,
+			      fio->new_blkaddr) ||
+	     !f2fs_crypt_mergeable_bio(io->bio, fio->page->mapping->host,
+				       fio->page->index, fio)))
 		__submit_merged_bio(io);
-
-	/* ICE support */
-	if (!fscrypt_mergeable_bio(io->bio, dun, bio_encrypted, bi_crypt_skip))
-		__submit_merged_bio(io);
-
 alloc_new:
 	if (io->bio == NULL) {
 		if (F2FS_IO_ALIGNED(sbi) &&
@@ -821,11 +829,9 @@
 			goto skip;
 		}
 		io->bio = __bio_alloc(fio, BIO_MAX_PAGES);
-
-		if (bio_encrypted)
-			fscrypt_set_ice_dun(inode, io->bio, dun);
-		fscrypt_set_ice_skip(io->bio, bi_crypt_skip);
-
+		f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host,
+				       fio->page->index, fio,
+				       GFP_NOIO);
 		io->fio = *fio;
 	}
 
@@ -869,13 +875,14 @@
 	bio = f2fs_bio_alloc(sbi, min_t(int, nr_pages, BIO_MAX_PAGES), false);
 	if (!bio)
 		return ERR_PTR(-ENOMEM);
+
+	f2fs_set_bio_crypt_ctx(bio, inode, first_idx, NULL, GFP_NOFS);
+
 	f2fs_target_device(sbi, blkaddr, bio);
 	bio->bi_end_io = f2fs_read_end_io;
-	op_flag |= IS_ENCRYPTED(inode) ? REQ_NOENCRYPT : 0;
 	bio_set_op_attrs(bio, REQ_OP_READ, op_flag);
 
-	if (f2fs_encrypted_file(inode) &&
-		!fscrypt_using_hardware_encryption(inode))
+	if (fscrypt_inode_uses_fs_layer_crypto(inode))
 		post_read_steps |= 1 << STEP_DECRYPT;
 
 	if (f2fs_need_verity(inode, first_idx))
@@ -906,9 +913,6 @@
 	if (IS_ERR(bio))
 		return PTR_ERR(bio);
 
-	if (f2fs_may_encrypt_bio(inode, NULL))
-		fscrypt_set_ice_dun(inode, bio, PG_DUN(inode, page));
-
 	/* wait for GCed page writeback via META_MAPPING */
 	f2fs_wait_on_block_writeback(inode, blkaddr);
 
@@ -1375,7 +1379,6 @@
 		if (map->m_next_extent)
 			*map->m_next_extent = pgofs + map->m_len;
 
-		/* for hardware encryption, but to avoid potential issue in future */
 		if (flag == F2FS_GET_BLOCK_DIO)
 			f2fs_wait_on_block_writeback_range(inode,
 						map->m_pblk, map->m_len);
@@ -1540,7 +1543,6 @@
 
 sync_out:
 
-	/* for hardware encryption, but to avoid potential issue in future */
 	if (flag == F2FS_GET_BLOCK_DIO && map->m_flags & F2FS_MAP_MAPPED)
 		f2fs_wait_on_block_writeback_range(inode,
 						map->m_pblk, map->m_len);
@@ -1851,8 +1853,6 @@
 	sector_t last_block;
 	sector_t last_block_in_file;
 	sector_t block_nr;
-	bool bio_encrypted;
-	u64 dun;
 	int ret = 0;
 
 	block_in_file = (sector_t)page_index(page);
@@ -1917,20 +1917,14 @@
 	 * This page will go to BIO.  Do we need to send this
 	 * BIO off first?
 	 */
-	if (bio && !page_is_mergeable(F2FS_I_SB(inode), bio,
-				*last_block_in_bio, block_nr)) {
+	if (bio && (!page_is_mergeable(F2FS_I_SB(inode), bio,
+				       *last_block_in_bio, block_nr) ||
+		    !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
 submit_and_realloc:
 		__f2fs_submit_read_bio(F2FS_I_SB(inode), bio, DATA);
 		bio = NULL;
 	}
 
-	dun = PG_DUN(inode, page);
-	bio_encrypted = f2fs_may_encrypt_bio(inode, NULL);
-	if (!fscrypt_mergeable_bio(bio, dun, bio_encrypted, 0)) {
-		__submit_bio(F2FS_I_SB(inode), bio, DATA);
-		bio = NULL;
-	}
-
 	if (bio == NULL) {
 		bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
 				is_readahead ? REQ_RAHEAD : 0, page->index);
@@ -1939,8 +1933,6 @@
 			bio = NULL;
 			goto out;
 		}
-		if (bio_encrypted)
-			fscrypt_set_ice_dun(inode, bio, dun);
 	}
 
 	/*
@@ -2014,6 +2006,7 @@
 			zero_user_segment(page, 0, PAGE_SIZE);
 			unlock_page(page);
 		}
+
 next_page:
 		if (pages)
 			put_page(page);
@@ -2068,10 +2061,11 @@
 	/* wait for GCed page writeback via META_MAPPING */
 	f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
 
-retry_encrypt:
-	if (fscrypt_using_hardware_encryption(inode))
+	if (fscrypt_inode_uses_inline_crypto(inode))
 		return 0;
 
+retry_encrypt:
 	fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page,
 							       PAGE_SIZE, 0,
 							       gfp_flags);
@@ -2245,7 +2239,7 @@
 			f2fs_unlock_op(fio->sbi);
 		err = f2fs_inplace_write_data(fio);
 		if (err) {
-			if (f2fs_encrypted_file(inode))
+			if (fscrypt_inode_uses_fs_layer_crypto(inode))
 				fscrypt_finalize_bounce_page(&fio->encrypted_page);
 			if (PageWriteback(page))
 				end_page_writeback(page);
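Across data.c, bio merging now asks both page_is_mergeable() and the new f2fs_crypt_mergeable_bio() before reusing a bio. A hedged userspace distillation of the crypto half of that rule for a regular encrypted file; the structure and names are mine, and the real check also compares dm-default-key skip state:

#include <stdbool.h>
#include <stdio.h>

struct ctx { bool has_ctx; unsigned long next_dun; };

/*
 * A page may join an existing bio only if both sides agree on the crypto
 * context: no context at all for GC raw I/O, or a context whose DUN
 * continues contiguously for regular encrypted I/O.
 */
static bool crypt_mergeable(const struct ctx *bio, bool raw_io,
			    unsigned long page_dun)
{
	if (raw_io)			/* GC moving already-encrypted blocks */
		return !bio->has_ctx;
	return bio->has_ctx && bio->next_dun == page_dun;
}

int main(void)
{
	struct ctx bio = { .has_ctx = true, .next_dun = 100 };

	printf("%d\n", crypt_mergeable(&bio, false, 100));	/* 1 */
	printf("%d\n", crypt_mergeable(&bio, true, 100));	/* 0 */
	return 0;
}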
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 0bb27ec..2297267 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -137,6 +137,9 @@
 	int alloc_mode;			/* segment allocation policy */
 	int fsync_mode;			/* fsync policy */
 	bool test_dummy_encryption;	/* test dummy encryption */
+#ifdef CONFIG_FS_ENCRYPTION
+	bool inlinecrypt;		/* inline encryption enabled */
+#endif
 	block_t unusable_cap;		/* Amount of space allowed to be
 					 * unusable when disabling checkpoint
 					 */
@@ -3603,9 +3606,7 @@
  */
 static inline bool f2fs_post_read_required(struct inode *inode)
 {
-	return (f2fs_encrypted_file(inode)
-			&& !fscrypt_using_hardware_encryption(inode))
-	|| fsverity_active(inode);
+	return f2fs_encrypted_file(inode) || fsverity_active(inode);
 }
 
 #define F2FS_FEATURE_FUNCS(name, flagname) \
@@ -3734,7 +3735,13 @@
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	int rw = iov_iter_rw(iter);
 
-	if (f2fs_post_read_required(inode))
+	if (IS_ENABLED(CONFIG_FS_ENCRYPTION) && f2fs_encrypted_file(inode)) {
+		if (!fscrypt_inode_uses_inline_crypto(inode) ||
+		    !IS_ALIGNED(iocb->ki_pos | iov_iter_alignment(iter),
+				F2FS_BLKSIZE))
+			return true;
+	}
+	if (fsverity_active(inode))
 		return true;
 	if (f2fs_is_multi_device(sbi))
 		return true;
@@ -3757,16 +3764,6 @@
 	return false;
 }
 
-static inline bool f2fs_may_encrypt_bio(struct inode *inode,
-		struct f2fs_io_info *fio)
-{
-	if (fio && (fio->type != DATA || fio->encrypted_page))
-		return false;
-
-	return (f2fs_encrypted_file(inode) &&
-			fscrypt_using_hardware_encryption(inode));
-}
-
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 extern void f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned int rate,
 							unsigned int type);
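The f2fs_force_buffered_io() change above ORs iocb->ki_pos with iov_iter_alignment() so a single IS_ALIGNED() test covers both values: the OR is block-aligned iff every operand is, because any misaligned low bit in either operand survives into the OR. A small runnable illustration, with BLKSIZE standing in for F2FS_BLKSIZE:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLKSIZE 4096	/* stand-in for F2FS_BLKSIZE */

/* One mask test covers both values, mirroring IS_ALIGNED(a | b, BLKSIZE). */
static bool both_aligned(uint64_t pos, uint64_t iov_align)
{
	return ((pos | iov_align) & (BLKSIZE - 1)) == 0;
}

int main(void)
{
	printf("%d\n", both_aligned(8192, 4096));	/* 1: both aligned */
	printf("%d\n", both_aligned(8192, 512));	/* 0: iovec misaligned */
	return 0;
}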
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index c4ac231..ec22279 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -2267,6 +2267,49 @@
 	return err;
 }
 
+static int f2fs_ioc_get_encryption_policy_ex(struct file *filp,
+					     unsigned long arg)
+{
+	if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+		return -EOPNOTSUPP;
+
+	return fscrypt_ioctl_get_policy_ex(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_add_encryption_key(struct file *filp, unsigned long arg)
+{
+	if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+		return -EOPNOTSUPP;
+
+	return fscrypt_ioctl_add_key(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_remove_encryption_key(struct file *filp, unsigned long arg)
+{
+	if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+		return -EOPNOTSUPP;
+
+	return fscrypt_ioctl_remove_key(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_remove_encryption_key_all_users(struct file *filp,
+						    unsigned long arg)
+{
+	if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+		return -EOPNOTSUPP;
+
+	return fscrypt_ioctl_remove_key_all_users(filp, (void __user *)arg);
+}
+
+static int f2fs_ioc_get_encryption_key_status(struct file *filp,
+					      unsigned long arg)
+{
+	if (!f2fs_sb_has_encrypt(F2FS_I_SB(file_inode(filp))))
+		return -EOPNOTSUPP;
+
+	return fscrypt_ioctl_get_key_status(filp, (void __user *)arg);
+}
+
 static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);
@@ -3265,6 +3308,16 @@
 		return f2fs_ioc_get_encryption_policy(filp, arg);
 	case F2FS_IOC_GET_ENCRYPTION_PWSALT:
 		return f2fs_ioc_get_encryption_pwsalt(filp, arg);
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+		return f2fs_ioc_get_encryption_policy_ex(filp, arg);
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+		return f2fs_ioc_add_encryption_key(filp, arg);
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+		return f2fs_ioc_remove_encryption_key(filp, arg);
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+		return f2fs_ioc_remove_encryption_key_all_users(filp, arg);
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+		return f2fs_ioc_get_encryption_key_status(filp, arg);
 	case F2FS_IOC_GARBAGE_COLLECT:
 		return f2fs_ioc_gc(filp, arg);
 	case F2FS_IOC_GARBAGE_COLLECT_RANGE:
@@ -3396,6 +3449,11 @@
 	case F2FS_IOC_SET_ENCRYPTION_POLICY:
 	case F2FS_IOC_GET_ENCRYPTION_PWSALT:
 	case F2FS_IOC_GET_ENCRYPTION_POLICY:
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
 	case F2FS_IOC_GARBAGE_COLLECT:
 	case F2FS_IOC_GARBAGE_COLLECT_RANGE:
 	case F2FS_IOC_WRITE_CHECKPOINT:
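With the FS_IOC_* key-management ioctls wired up above, userspace can manage v2 fscrypt master keys directly on an f2fs mount. A hedged sketch of adding a raw master key; the mount point, the unfilled key bytes, and the minimal error handling are illustrative only, and struct fscrypt_add_key_arg comes from the <linux/fscrypt.h> UAPI header exported later in this patch:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

int main(void)
{
	struct fscrypt_add_key_arg *arg;
	const char *mnt = "/mnt";	/* assumption: an f2fs mount point */
	int fd, i;

	/* The raw key bytes follow the fixed-size header. */
	arg = calloc(1, sizeof(*arg) + 64);
	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	arg->raw_size = 64;
	/* A real program would fill arg->raw[] with high-entropy key bytes. */

	fd = open(mnt, O_RDONLY);
	if (fd < 0 || ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, arg) != 0) {
		perror("FS_IOC_ADD_ENCRYPTION_KEY");
		return 1;
	}
	/* The kernel computes and returns the key identifier. */
	for (i = 0; i < FSCRYPT_KEY_IDENTIFIER_SIZE; i++)
		printf("%02x", arg->key_spec.u.identifier[i]);
	printf("\n");
	free(arg);
	return 0;
}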
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index fa32ce92..110f380 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -1100,7 +1100,6 @@
 	} else if (discard_type == DPOLICY_FSTRIM) {
 		dpolicy->io_aware = false;
 	} else if (discard_type == DPOLICY_UMOUNT) {
-		dpolicy->max_requests = UINT_MAX;
 		dpolicy->io_aware = false;
 		/* we need to issue all to keep CP_TRIMMED_FLAG */
 		dpolicy->granularity = 1;
@@ -1461,6 +1460,8 @@
 
 	return issued;
 }
+
+static unsigned int __wait_all_discard_cmd(struct f2fs_sb_info *sbi,
+					struct discard_policy *dpolicy);
 
 static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
 					struct discard_policy *dpolicy)
@@ -1469,12 +1470,14 @@
 	struct list_head *pend_list;
 	struct discard_cmd *dc, *tmp;
 	struct blk_plug plug;
-	int i, issued = 0;
+	int i, issued;
 	bool io_interrupted = false;
 
 	if (dpolicy->timeout != 0)
 		f2fs_update_time(sbi, dpolicy->timeout);
 
+retry:
+	issued = 0;
 	for (i = MAX_PLIST_NUM - 1; i >= 0; i--) {
 		if (dpolicy->timeout != 0 &&
 				f2fs_time_over(sbi, dpolicy->timeout))
@@ -1521,6 +1524,11 @@
 			break;
 	}
 
+	if (dpolicy->type == DPOLICY_UMOUNT && issued) {
+		__wait_all_discard_cmd(sbi, dpolicy);
+		goto retry;
+	}
+
 	if (!issued && io_interrupted)
 		issued = -1;
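In segment.c, the UINT_MAX request cap for DPOLICY_UMOUNT is dropped in favor of an explicit drain loop: issue a pass of pending discards, wait for them, and retry until a pass issues nothing, so CP_TRIMMED_FLAG is set knowing no discard was left behind. A runnable sketch of that shape; the pending counter and helpers are stand-ins:

#include <stdio.h>

static int pending = 3;

/* Stand-in for issuing every queued discard; returns how many it issued. */
static unsigned int issue_pending(void)
{
	unsigned int n = pending;

	pending = 0;
	return n;
}

static void wait_for_completions(void)
{
	/* in the real driver, completions may queue further work */
}

int main(void)
{
	unsigned int issued;

	do {
		issued = issue_pending();
		if (issued)
			wait_for_completions();
		printf("pass issued %u\n", issued);
	} while (issued);	/* stop only after an empty pass */
	return 0;
}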
 
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 9c2e10d..9be6d2c 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -137,6 +137,7 @@
 	Opt_alloc,
 	Opt_fsync,
 	Opt_test_dummy_encryption,
+	Opt_inlinecrypt,
 	Opt_checkpoint_disable,
 	Opt_checkpoint_disable_cap,
 	Opt_checkpoint_disable_cap_perc,
@@ -199,6 +200,7 @@
 	{Opt_alloc, "alloc_mode=%s"},
 	{Opt_fsync, "fsync_mode=%s"},
 	{Opt_test_dummy_encryption, "test_dummy_encryption"},
+	{Opt_inlinecrypt, "inlinecrypt"},
 	{Opt_checkpoint_disable, "checkpoint=disable"},
 	{Opt_checkpoint_disable_cap, "checkpoint=disable:%u"},
 	{Opt_checkpoint_disable_cap_perc, "checkpoint=disable:%u%%"},
@@ -785,6 +787,13 @@
 			f2fs_info(sbi, "Test dummy encryption mount option ignored");
 #endif
 			break;
+		case Opt_inlinecrypt:
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+			F2FS_OPTION(sbi).inlinecrypt = true;
+#else
+			f2fs_info(sbi, "inline encryption not supported");
+#endif
+			break;
 		case Opt_checkpoint_disable_cap_perc:
 			if (args->from && match_int(args, &arg))
 				return -EINVAL;
@@ -965,6 +974,8 @@
 		return 0;
 	}
 	ret = generic_drop_inode(inode);
+	if (!ret)
+		ret = fscrypt_drop_inode(inode);
 	trace_f2fs_drop_inode(inode, ret);
 	return ret;
 }
@@ -1064,27 +1075,6 @@
 	kvfree(sbi->devs);
 }
 
-static void f2fs_umount_end(struct super_block *sb, int flags)
-{
-	/*
-	 * this is called at the end of umount(2). If there is an unclosed
-	 * namespace, f2fs won't do put_super() which triggers fsck in the
-	 * next boot.
-	 */
-	if ((flags & MNT_FORCE) || atomic_read(&sb->s_active) > 1) {
-		/* to write the latest kbytes_written */
-		if (!(sb->s_flags & MS_RDONLY)) {
-			struct f2fs_sb_info *sbi = F2FS_SB(sb);
-			struct cp_control cpc = {
-				.reason = CP_UMOUNT,
-			};
-			mutex_lock(&sbi->gc_mutex);
-			f2fs_write_checkpoint(F2FS_SB(sb), &cpc);
-			mutex_unlock(&sbi->gc_mutex);
-		}
-	}
-}
-
 static void f2fs_put_super(struct super_block *sb)
 {
 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
@@ -1473,6 +1463,8 @@
 #ifdef CONFIG_FS_ENCRYPTION
 	if (F2FS_OPTION(sbi).test_dummy_encryption)
 		seq_puts(seq, ",test_dummy_encryption");
+	if (F2FS_OPTION(sbi).inlinecrypt)
+		seq_puts(seq, ",inlinecrypt");
 #endif
 
 	if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_DEFAULT)
@@ -1501,6 +1493,9 @@
 	F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT;
 	F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX;
 	F2FS_OPTION(sbi).test_dummy_encryption = false;
+#ifdef CONFIG_FS_ENCRYPTION
+	F2FS_OPTION(sbi).inlinecrypt = false;
+#endif
 	F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
 	F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
 
@@ -2303,7 +2298,6 @@
 #endif
 	.evict_inode	= f2fs_evict_inode,
 	.put_super	= f2fs_put_super,
-	.umount_end	= f2fs_umount_end,
 	.sync_fs	= f2fs_sync_fs,
 	.freeze_fs	= f2fs_freeze,
 	.unfreeze_fs	= f2fs_unfreeze,
@@ -2344,19 +2338,54 @@
 	return DUMMY_ENCRYPTION_ENABLED(F2FS_I_SB(inode));
 }
 
-static inline bool f2fs_is_encrypted(struct inode *inode)
+static bool f2fs_has_stable_inodes(struct super_block *sb)
 {
-	return f2fs_encrypted_file(inode);
+	return true;
+}
+
+static void f2fs_get_ino_and_lblk_bits(struct super_block *sb,
+				       int *ino_bits_ret, int *lblk_bits_ret)
+{
+	*ino_bits_ret = 8 * sizeof(nid_t);
+	*lblk_bits_ret = 8 * sizeof(block_t);
+}
+
+static bool f2fs_inline_crypt_enabled(struct super_block *sb)
+{
+	return F2FS_OPTION(F2FS_SB(sb)).inlinecrypt;
+}
+
+static int f2fs_get_num_devices(struct super_block *sb)
+{
+	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+
+	if (f2fs_is_multi_device(sbi))
+		return sbi->s_ndevs;
+	return 1;
+}
+
+static void f2fs_get_devices(struct super_block *sb,
+			     struct request_queue **devs)
+{
+	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+	int i;
+
+	for (i = 0; i < sbi->s_ndevs; i++)
+		devs[i] = bdev_get_queue(FDEV(i).bdev);
 }
 
 static const struct fscrypt_operations f2fs_cryptops = {
-	.key_prefix	= "f2fs:",
-	.get_context	= f2fs_get_context,
-	.set_context	= f2fs_set_context,
-	.dummy_context	= f2fs_dummy_context,
-	.empty_dir	= f2fs_empty_dir,
-	.max_namelen	= F2FS_NAME_LEN,
-	.is_encrypted	= f2fs_is_encrypted,
+	.key_prefix		= "f2fs:",
+	.get_context		= f2fs_get_context,
+	.set_context		= f2fs_set_context,
+	.dummy_context		= f2fs_dummy_context,
+	.empty_dir		= f2fs_empty_dir,
+	.max_namelen		= F2FS_NAME_LEN,
+	.has_stable_inodes	= f2fs_has_stable_inodes,
+	.get_ino_and_lblk_bits	= f2fs_get_ino_and_lblk_bits,
+	.inline_crypt_enabled	= f2fs_inline_crypt_enabled,
+	.get_num_devices	= f2fs_get_num_devices,
+	.get_devices		= f2fs_get_devices,
 };
 #endif
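The new fscrypt_operations hooks above report f2fs's inode- and block-number widths (32 bits each, since nid_t and block_t are u32) and enumerate the underlying request queues. A runnable illustration of the kind of packing check such bit widths enable; the 64-bit-DUN packing rule here is an assumption for illustration, not necessarily fscrypt's exact layout:

#include <stdint.h>
#include <stdio.h>

/* An (ino, lblk) pair fits one 64-bit DUN iff the widths sum to <= 64. */
static int fits_in_64bit_dun(int ino_bits, int lblk_bits)
{
	return ino_bits + lblk_bits <= 64;
}

int main(void)
{
	int ino_bits = 8 * sizeof(uint32_t);	/* f2fs nid_t */
	int lblk_bits = 8 * sizeof(uint32_t);	/* f2fs block_t */

	printf("packable: %d\n", fits_in_64bit_dun(ino_bits, lblk_bits));
	return 0;
}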
 
diff --git a/fs/iomap.c b/fs/iomap.c
index 03edf62..5c77dbc 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -14,6 +14,7 @@
 #include <linux/module.h>
 #include <linux/compiler.h>
 #include <linux/fs.h>
+#include <linux/fscrypt.h>
 #include <linux/iomap.h>
 #include <linux/uaccess.h>
 #include <linux/gfp.h>
@@ -1580,10 +1581,13 @@
 iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos,
 		unsigned len)
 {
+	struct inode *inode = file_inode(dio->iocb->ki_filp);
 	struct page *page = ZERO_PAGE(0);
 	struct bio *bio;
 
 	bio = bio_alloc(GFP_KERNEL, 1);
+	fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+				  GFP_KERNEL);
 	bio_set_dev(bio, iomap->bdev);
 	bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
 	bio->bi_private = dio;
@@ -1664,6 +1668,8 @@
 		}
 
 		bio = bio_alloc(GFP_KERNEL, nr_pages);
+		fscrypt_set_bio_crypt_ctx(bio, inode, pos >> inode->i_blkbits,
+					  GFP_KERNEL);
 		bio_set_dev(bio, iomap->bdev);
 		bio->bi_iter.bi_sector = iomap_sector(iomap, pos);
 		bio->bi_write_hint = dio->iocb->ki_hint;
diff --git a/fs/namei.c b/fs/namei.c
index af523d9..c99cb21 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -3010,11 +3010,6 @@
 	if (error)
 		return error;
 	error = dir->i_op->create(dir, dentry, mode, want_excl);
-	if (error)
-		return error;
-	error = security_inode_post_create(dir, dentry, mode);
-	if (error)
-		return error;
 	if (!error)
 		fsnotify_create(dir, dentry);
 	return error;
@@ -3839,11 +3834,6 @@
 		return error;
 
 	error = dir->i_op->mknod(dir, dentry, mode, dev);
-	if (error)
-		return error;
-	error = security_inode_post_create(dir, dentry, mode);
-	if (error)
-		return error;
 	if (!error)
 		fsnotify_create(dir, dentry);
 	return error;
diff --git a/fs/namespace.c b/fs/namespace.c
index 7899153..3a93384 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -21,7 +21,6 @@
 #include <linux/fs_struct.h>	/* get_fs_root et.al. */
 #include <linux/fsnotify.h>	/* fsnotify_vfsmount_delete */
 #include <linux/uaccess.h>
-#include <linux/file.h>
 #include <linux/proc_ns.h>
 #include <linux/magic.h>
 #include <linux/bootmem.h>
@@ -1135,12 +1134,6 @@
 }
 static DECLARE_DELAYED_WORK(delayed_mntput_work, delayed_mntput);
 
-void flush_delayed_mntput_wait(void)
-{
-	delayed_mntput(NULL);
-	flush_delayed_work(&delayed_mntput_work);
-}
-
 static void mntput_no_expire(struct mount *mnt)
 {
 	rcu_read_lock();
@@ -1657,7 +1650,6 @@
 	struct mount *mnt;
 	int retval;
 	int lookup_flags = 0;
-	bool user_request = !(current->flags & PF_KTHREAD);
 
 	if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
 		return -EINVAL;
@@ -1683,36 +1675,12 @@
 	if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))
 		goto dput_and_out;
 
-	/* flush delayed_fput to put mnt_count */
-	if (user_request)
-		flush_delayed_fput_wait();
-
 	retval = do_umount(mnt, flags);
 dput_and_out:
 	/* we mustn't call path_put() as that would clear mnt_expiry_mark */
 	dput(path.dentry);
-	if (user_request && (!retval || (flags & MNT_FORCE))) {
-		/* filesystem needs to handle unclosed namespaces */
-		if (mnt->mnt.mnt_sb->s_op->umount_end)
-			mnt->mnt.mnt_sb->s_op->umount_end(mnt->mnt.mnt_sb,
-					flags);
-	}
 	mntput_no_expire(mnt);
 
-	if (!user_request)
-		goto out;
-
-	if (!retval) {
-		/*
-		 * If the last delayed_fput() is called during do_umount()
-		 * and makes mnt_count zero, we need to guarantee to register
-		 * delayed_mntput by waiting for delayed_fput work again.
-		 */
-		flush_delayed_fput_wait();
-
-		/* flush delayed_mntput_work to put sb->s_active */
-		flush_delayed_mntput_wait();
-	}
 out:
 	return retval;
 }
diff --git a/fs/sdcardfs/main.c b/fs/sdcardfs/main.c
index 4c7b7fa..cb668f7 100644
--- a/fs/sdcardfs/main.c
+++ b/fs/sdcardfs/main.c
@@ -19,6 +19,7 @@
  */
 
 #include "sdcardfs.h"
+#include <linux/fscrypt.h>
 #include <linux/module.h>
 #include <linux/types.h>
 #include <linux/parser.h>
@@ -375,6 +376,9 @@
 	list_add(&sb_info->list, &sdcardfs_super_list);
 	mutex_unlock(&sdcardfs_super_list_lock);
 
+	sb_info->fscrypt_nb.notifier_call = sdcardfs_on_fscrypt_key_removed;
+	fscrypt_register_key_removal_notifier(&sb_info->fscrypt_nb);
+
 	if (!silent)
 		pr_info("sdcardfs: mounted on top of %s type %s\n",
 				dev_name, lower_sb->s_type->name);
@@ -445,6 +449,9 @@
 
 	if (sb->s_magic == SDCARDFS_SUPER_MAGIC && sb->s_fs_info) {
 		sbi = SDCARDFS_SB(sb);
+
+		fscrypt_unregister_key_removal_notifier(&sbi->fscrypt_nb);
+
 		mutex_lock(&sdcardfs_super_list_lock);
 		list_del(&sbi->list);
 		mutex_unlock(&sdcardfs_super_list_lock);
diff --git a/fs/sdcardfs/sdcardfs.h b/fs/sdcardfs/sdcardfs.h
index 9ccf62c..401445e 100644
--- a/fs/sdcardfs/sdcardfs.h
+++ b/fs/sdcardfs/sdcardfs.h
@@ -151,6 +151,8 @@
 				 struct inode *lower_inode, userid_t id);
 extern int sdcardfs_interpose(struct dentry *dentry, struct super_block *sb,
 			    struct path *lower_path, userid_t id);
+extern int sdcardfs_on_fscrypt_key_removed(struct notifier_block *nb,
+					   unsigned long action, void *data);
 
 /* file private data */
 struct sdcardfs_file_info {
@@ -224,6 +226,7 @@
 	struct path obbpath;
 	void *pkgl_id;
 	struct list_head list;
+	struct notifier_block fscrypt_nb;
 };
 
 /*
diff --git a/fs/sdcardfs/super.c b/fs/sdcardfs/super.c
index 1240ef2..b2ba09a 100644
--- a/fs/sdcardfs/super.c
+++ b/fs/sdcardfs/super.c
@@ -319,6 +319,23 @@
 	return 0;
 };
 
+int sdcardfs_on_fscrypt_key_removed(struct notifier_block *nb,
+				    unsigned long action, void *data)
+{
+	struct sdcardfs_sb_info *sbi = container_of(nb, struct sdcardfs_sb_info,
+						    fscrypt_nb);
+
+	/*
+	 * Evict any unused sdcardfs dentries (and hence any unused sdcardfs
+	 * inodes, since sdcardfs doesn't cache unpinned inodes by themselves)
+	 * so that the lower filesystem's encrypted inodes can be evicted.
+	 * This is needed to make the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl
+	 * properly "lock" the files underneath the sdcardfs mount.
+	 */
+	shrink_dcache_sb(sbi->sb);
+	return NOTIFY_OK;
+}
+
 const struct super_operations sdcardfs_sops = {
 	.put_super	= sdcardfs_put_super,
 	.statfs		= sdcardfs_statfs,
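sdcardfs_on_fscrypt_key_removed() above recovers its superblock info from the embedded notifier_block via container_of(). A runnable userspace restatement of that pattern; the struct layouts are stand-ins for the kernel ones:

#include <stddef.h>
#include <stdio.h>

/* Userspace restatement of the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct notifier_block {
	int (*notifier_call)(struct notifier_block *nb);
};

struct sdcardfs_sb_info {	/* stand-in: only the relevant fields */
	int id;
	struct notifier_block fscrypt_nb;
};

static int on_key_removed(struct notifier_block *nb)
{
	/* Step from the embedded member back to the enclosing struct. */
	struct sdcardfs_sb_info *sbi =
		container_of(nb, struct sdcardfs_sb_info, fscrypt_nb);

	printf("key removed for sb %d\n", sbi->id);
	return 0;
}

int main(void)
{
	struct sdcardfs_sb_info sbi = { .id = 7 };

	sbi.fscrypt_nb.notifier_call = on_key_removed;
	return sbi.fscrypt_nb.notifier_call(&sbi.fscrypt_nb);
}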
diff --git a/fs/super.c b/fs/super.c
index b02e086..7fa6fe5 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -32,6 +32,7 @@
 #include <linux/backing-dev.h>
 #include <linux/rculist_bl.h>
 #include <linux/cleancache.h>
+#include <linux/fscrypt.h>
 #include <linux/fsnotify.h>
 #include <linux/lockdep.h>
 #include <linux/user_namespace.h>
@@ -288,6 +289,7 @@
 		WARN_ON(s->s_inode_lru.node);
 		WARN_ON(!list_empty(&s->s_mounts));
 		security_sb_free(s);
+		fscrypt_sb_free(s);
 		put_user_ns(s->s_user_ns);
 		kfree(s->s_subtype);
 		call_rcu(&s->rcu, destroy_super_rcu);
diff --git a/fs/ubifs/ioctl.c b/fs/ubifs/ioctl.c
index 0f9c362..71c3440 100644
--- a/fs/ubifs/ioctl.c
+++ b/fs/ubifs/ioctl.c
@@ -205,6 +205,21 @@
 #endif
 	}
 
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+		return fscrypt_ioctl_get_policy_ex(file, (void __user *)arg);
+
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+		return fscrypt_ioctl_add_key(file, (void __user *)arg);
+
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+		return fscrypt_ioctl_remove_key(file, (void __user *)arg);
+
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+		return fscrypt_ioctl_remove_key_all_users(file,
+							  (void __user *)arg);
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
+		return fscrypt_ioctl_get_key_status(file, (void __user *)arg);
+
 	default:
 		return -ENOTTY;
 	}
@@ -222,6 +237,11 @@
 		break;
 	case FS_IOC_SET_ENCRYPTION_POLICY:
 	case FS_IOC_GET_ENCRYPTION_POLICY:
+	case FS_IOC_GET_ENCRYPTION_POLICY_EX:
+	case FS_IOC_ADD_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY:
+	case FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS:
+	case FS_IOC_GET_ENCRYPTION_KEY_STATUS:
 		break;
 	default:
 		return -ENOIOCTLCMD;
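ubifs gains the same five key-management ioctls. A hedged userspace sketch querying key status through this path; the 16-byte identifier is assumed to come from an earlier FS_IOC_ADD_ENCRYPTION_KEY call, and the struct layout is taken from the <linux/fscrypt.h> UAPI:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

int main(int argc, char **argv)
{
	struct fscrypt_get_key_status_arg arg = { 0 };
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY);	/* any file on the filesystem */
	if (fd < 0)
		return 1;
	arg.key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	/* arg.key_spec.u.identifier would be filled from a real key ID. */
	if (ioctl(fd, FS_IOC_GET_ENCRYPTION_KEY_STATUS, &arg) != 0) {
		perror("FS_IOC_GET_ENCRYPTION_KEY_STATUS");
		return 1;
	}
	/* status: 1 = absent, 2 = present, 3 = incompletely removed */
	printf("status=%u users=%u\n", arg.status, arg.user_count);
	return 0;
}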
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index ebb9e84..e276b54 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -336,6 +336,16 @@
 	return err;
 }
 
+static int ubifs_drop_inode(struct inode *inode)
+{
+	int drop = generic_drop_inode(inode);
+
+	if (!drop)
+		drop = fscrypt_drop_inode(inode);
+
+	return drop;
+}
+
 static void ubifs_evict_inode(struct inode *inode)
 {
 	int err;
@@ -1925,6 +1935,7 @@
 	.destroy_inode = ubifs_destroy_inode,
 	.put_super     = ubifs_put_super,
 	.write_inode   = ubifs_write_inode,
+	.drop_inode    = ubifs_drop_inode,
 	.evict_inode   = ubifs_evict_inode,
 	.statfs        = ubifs_statfs,
 	.dirty_inode   = ubifs_dirty_inode,
diff --git a/gen_headers_arm.bp b/gen_headers_arm.bp
index a8f40a7..65319b0 100644
--- a/gen_headers_arm.bp
+++ b/gen_headers_arm.bp
@@ -638,6 +638,7 @@
     "linux/xilinx-v4l2-controls.h",
     "linux/zorro.h",
     "linux/zorro_ids.h",
+    "linux/fscrypt.h",
     "media/msm_cvp_private.h",
     "media/msm_cvp_utils.h",
     "media/msm_media_info.h",
@@ -970,6 +971,9 @@
     "media/cam_req_mgr.h",
     "media/cam_sensor.h",
     "media/cam_sync.h",
+    "media/cam_tfe.h",
+    "media/cam_ope.h",
+    "media/cam_isp_tfe.h",
 ]
 
 genrule {
diff --git a/gen_headers_arm64.bp b/gen_headers_arm64.bp
index 0b9d2ba..3e1627b 100644
--- a/gen_headers_arm64.bp
+++ b/gen_headers_arm64.bp
@@ -632,6 +632,7 @@
     "linux/xilinx-v4l2-controls.h",
     "linux/zorro.h",
     "linux/zorro_ids.h",
+    "linux/fscrypt.h",
     "media/msm_cvp_private.h",
     "media/msm_cvp_utils.h",
     "media/msm_media_info.h",
@@ -964,6 +965,9 @@
     "media/cam_req_mgr.h",
     "media/cam_sensor.h",
     "media/cam_sync.h",
+    "media/cam_tfe.h",
+    "media/cam_ope.h",
+    "media/cam_isp_tfe.h",
 ]
 
 genrule {
diff --git a/include/dt-bindings/msm/msm-bus-ids.h b/include/dt-bindings/msm/msm-bus-ids.h
index 835fb0c..4aff0c9 100644
--- a/include/dt-bindings/msm/msm-bus-ids.h
+++ b/include/dt-bindings/msm/msm-bus-ids.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2020, The Linux Foundation. All rights reserved.
  */
 
 #ifndef __MSM_BUS_IDS_H
@@ -699,6 +699,8 @@
 #define	MSM_BUS_SLAVE_ANOC_SNOC 834
 #define	MSM_BUS_SLAVE_GPU_CDSP_BIMC 835
 #define	MSM_BUS_SLAVE_AHB2PHY_2 836
+#define	MSM_BUS_SLAVE_HWKM 837
+#define	MSM_BUS_SLAVE_PKA_WRAPPER 838
 
 #define	MSM_BUS_SLAVE_EBI_CH0_DISPLAY 20512
 #define	MSM_BUS_SLAVE_LLCC_DISPLAY 20513
@@ -1175,4 +1177,6 @@
 #define	ICBID_SLAVE_MAPSS 277
 #define	ICBID_SLAVE_MDSP_MPU_CFG 278
 #define	ICBID_SLAVE_CAMERA_RT_THROTTLE_CFG 279
+#define	ICBID_SLAVE_HWKM 280
+#define	ICBID_SLAVE_PKA_WRAPPER 281
 #endif
diff --git a/include/dt-bindings/msm/msm-camera.h b/include/dt-bindings/msm/msm-camera.h
index 07817a7..84c8e4c 100644
--- a/include/dt-bindings/msm/msm-camera.h
+++ b/include/dt-bindings/msm/msm-camera.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved.
  */
 
 #ifndef __MSM_CAMERA_H
@@ -62,10 +62,14 @@
 #define CAM_CPAS_TRAFFIC_MERGE_SUM 0
 #define CAM_CPAS_TRAFFIC_MERGE_SUM_INTERLEAVE 1
 
+#define CAM_CPAS_FEATURE_TYPE_DISABLE        0
+#define CAM_CPAS_FEATURE_TYPE_ENABLE         1
 
-/* Feature support bit positions in feature fuse register*/
-#define CAM_CPAS_QCFA_BINNING_ENABLE 0
-#define CAM_CPAS_SECURE_CAMERA_ENABLE 1
-#define CAM_CPAS_FUSE_FEATURE_MAX 2
+/* Fuse feature support IDs */
+#define CAM_CPAS_QCFA_BINNING_ENABLE        0
+#define CAM_CPAS_SECURE_CAMERA_ENABLE       1
+#define CAM_CPAS_ISP_FUSE_ID                2
+#define CAM_CPAS_ISP_PIX_FUSE_ID            3
+#define CAM_CPAS_FUSE_FEATURE_MAX           4
 
 #endif
diff --git a/include/linux/bio-crypt-ctx.h b/include/linux/bio-crypt-ctx.h
new file mode 100644
index 0000000..d10c5ad
--- /dev/null
+++ b/include/linux/bio-crypt-ctx.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+#ifndef __LINUX_BIO_CRYPT_CTX_H
+#define __LINUX_BIO_CRYPT_CTX_H
+
+#include <linux/string.h>
+
+enum blk_crypto_mode_num {
+	BLK_ENCRYPTION_MODE_INVALID,
+	BLK_ENCRYPTION_MODE_AES_256_XTS,
+	BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
+	BLK_ENCRYPTION_MODE_ADIANTUM,
+	BLK_ENCRYPTION_MODE_MAX,
+};
+
+#ifdef CONFIG_BLOCK
+#include <linux/blk_types.h>
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+#define BLK_CRYPTO_MAX_KEY_SIZE		64
+#define BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE		128
+
+/**
+ * struct blk_crypto_key - an inline encryption key
+ * @crypto_mode: encryption algorithm this key is for
+ * @data_unit_size: the data unit size for all encryption/decryptions with this
+ *	key.  This is the size in bytes of each individual plaintext and
+ *	ciphertext.  This is always a power of 2.  It might be e.g. the
+ *	filesystem block size or the disk sector size.
+ * @data_unit_size_bits: log2 of data_unit_size
+ * @size: size of this key in bytes (determined by @crypto_mode)
+ * @hash: hash of this key, for keyslot manager use only
+ * @is_hw_wrapped: @raw points to a wrapped key to be used by an inline
+ *	encryption hardware that accepts wrapped keys.
+ * @raw: the raw bytes of this key.  Only the first @size bytes are used.
+ *
+ * A blk_crypto_key is immutable once created, and many bios can reference it at
+ * the same time.  It must not be freed until all bios using it have completed.
+ */
+struct blk_crypto_key {
+	enum blk_crypto_mode_num crypto_mode;
+	unsigned int data_unit_size;
+	unsigned int data_unit_size_bits;
+	unsigned int size;
+	unsigned int hash;
+	bool is_hw_wrapped;
+	u8 raw[BLK_CRYPTO_MAX_WRAPPED_KEY_SIZE];
+};
+
+#define BLK_CRYPTO_MAX_IV_SIZE		32
+#define BLK_CRYPTO_DUN_ARRAY_SIZE	(BLK_CRYPTO_MAX_IV_SIZE/sizeof(u64))
+
+/**
+ * struct bio_crypt_ctx - an inline encryption context
+ * @bc_key: the key, algorithm, and data unit size to use
+ * @bc_keyslot: the keyslot that has been assigned for this key in @bc_ksm,
+ *		or -1 if no keyslot has been assigned yet.
+ * @bc_dun: the data unit number (starting IV) to use
+ * @bc_ksm: the keyslot manager into which the key has been programmed with
+ *	    @bc_keyslot, or NULL if this key hasn't yet been programmed.
+ *
+ * A bio_crypt_ctx specifies that the contents of the bio will be encrypted (for
+ * write requests) or decrypted (for read requests) inline by the storage device
+ * or controller, or by the crypto API fallback.
+ */
+struct bio_crypt_ctx {
+	const struct blk_crypto_key	*bc_key;
+	int				bc_keyslot;
+
+	/* Data unit number */
+	u64				bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
+
+	/*
+	 * The keyslot manager where the key has been programmed
+	 * with keyslot.
+	 */
+	struct keyslot_manager		*bc_ksm;
+};
+
+int bio_crypt_ctx_init(void);
+
+struct bio_crypt_ctx *bio_crypt_alloc_ctx(gfp_t gfp_mask);
+
+void bio_crypt_free_ctx(struct bio *bio);
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+	return bio->bi_crypt_context;
+}
+
+void bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask);
+
+static inline void bio_crypt_set_ctx(struct bio *bio,
+				     const struct blk_crypto_key *key,
+				     u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+				     gfp_t gfp_mask)
+{
+	struct bio_crypt_ctx *bc = bio_crypt_alloc_ctx(gfp_mask);
+
+	bc->bc_key = key;
+	memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
+	bc->bc_ksm = NULL;
+	bc->bc_keyslot = -1;
+
+	bio->bi_crypt_context = bc;
+}
+
+void bio_crypt_ctx_release_keyslot(struct bio_crypt_ctx *bc);
+
+int bio_crypt_ctx_acquire_keyslot(struct bio_crypt_ctx *bc,
+				  struct keyslot_manager *ksm);
+
+struct request;
+bool bio_crypt_should_process(struct request *rq);
+
+static inline bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
+					       unsigned int bytes,
+					u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
+{
+	int i = 0;
+	unsigned int inc = bytes >> bc->bc_key->data_unit_size_bits;
+
+	while (i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
+		if (bc->bc_dun[i] + inc != next_dun[i])
+			return false;
+		inc = ((bc->bc_dun[i] + inc) < inc);
+		i++;
+	}
+
+	return true;
+}
+
+static inline void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
+					   unsigned int inc)
+{
+	int i = 0;
+
+	while (inc && i < BLK_CRYPTO_DUN_ARRAY_SIZE) {
+		dun[i] += inc;
+		inc = (dun[i] < inc);
+		i++;
+	}
+}
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
+{
+	struct bio_crypt_ctx *bc = bio->bi_crypt_context;
+
+	if (!bc)
+		return;
+
+	bio_crypt_dun_increment(bc->bc_dun,
+				bytes >> bc->bc_key->data_unit_size_bits);
+}
+
+bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2);
+
+bool bio_crypt_ctx_mergeable(struct bio *b_1, unsigned int b1_bytes,
+			     struct bio *b_2);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+static inline int bio_crypt_ctx_init(void)
+{
+	return 0;
+}
+
+static inline bool bio_has_crypt_ctx(struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_crypt_clone(struct bio *dst, struct bio *src,
+				   gfp_t gfp_mask) { }
+
+static inline void bio_crypt_free_ctx(struct bio *bio) { }
+
+static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes) { }
+
+static inline bool bio_crypt_ctx_compatible(struct bio *b_1, struct bio *b_2)
+{
+	return true;
+}
+
+static inline bool bio_crypt_ctx_mergeable(struct bio *b_1,
+					   unsigned int b1_bytes,
+					   struct bio *b_2)
+{
+	return true;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#if IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
+static inline void bio_set_skip_dm_default_key(struct bio *bio)
+{
+	bio->bi_skip_dm_default_key = true;
+}
+
+static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
+{
+	return bio->bi_skip_dm_default_key;
+}
+
+static inline void bio_clone_skip_dm_default_key(struct bio *dst,
+						 const struct bio *src)
+{
+	dst->bi_skip_dm_default_key = src->bi_skip_dm_default_key;
+}
+#else /* CONFIG_DM_DEFAULT_KEY */
+static inline void bio_set_skip_dm_default_key(struct bio *bio)
+{
+}
+
+static inline bool bio_should_skip_dm_default_key(const struct bio *bio)
+{
+	return false;
+}
+
+static inline void bio_clone_skip_dm_default_key(struct bio *dst,
+						 const struct bio *src)
+{
+}
+#endif /* !CONFIG_DM_DEFAULT_KEY */
+
+#endif /* CONFIG_BLOCK */
+
+#endif /* __LINUX_BIO_CRYPT_CTX_H */
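The DUN helpers above treat the 32-byte IV as little-endian u64 limbs: bio_crypt_dun_increment() propagates carries limb by limb (the comparison after each add detects unsigned wraparound), and bio_crypt_dun_is_contiguous() applies the same carry rule to check that each limb of the next DUN equals the current one advanced by the byte count in data units. A runnable userspace mirror of the increment:

#include <stdint.h>
#include <stdio.h>

#define DUN_ARRAY_SIZE 4	/* 32-byte IV as four u64 limbs */

/* Mirror of bio_crypt_dun_increment(): limb addition with carry. */
static void dun_increment(uint64_t dun[DUN_ARRAY_SIZE], unsigned int inc)
{
	int i;

	for (i = 0; inc && i < DUN_ARRAY_SIZE; i++) {
		dun[i] += inc;
		inc = (dun[i] < inc);	/* carry out iff the add wrapped */
	}
}

int main(void)
{
	uint64_t dun[DUN_ARRAY_SIZE] = { UINT64_MAX, 0, 0, 0 };

	dun_increment(dun, 1);	/* wraps limb 0, carries into limb 1 */
	printf("%llu %llu\n",
	       (unsigned long long)dun[0], (unsigned long long)dun[1]);
	/* prints: 0 1 */
	return 0;
}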
diff --git a/include/linux/bio.h b/include/linux/bio.h
index efa15cf..b7efb85 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -22,6 +22,7 @@
 #include <linux/mempool.h>
 #include <linux/ioprio.h>
 #include <linux/bug.h>
+#include <linux/bio-crypt-ctx.h>
 
 #ifdef CONFIG_BLOCK
 
@@ -73,9 +74,6 @@
 
 #define bio_sectors(bio)	bvec_iter_sectors((bio)->bi_iter)
 #define bio_end_sector(bio)	bvec_iter_end_sector((bio)->bi_iter)
-#define bio_dun(bio)		((bio)->bi_iter.bi_dun)
-#define bio_duns(bio)		(bio_sectors(bio) >> 3) /* 4KB unit */
-#define bio_end_dun(bio)	(bio_dun(bio) + bio_duns(bio))
 
 /*
  * Return the data direction, READ or WRITE.
@@ -173,11 +171,6 @@
 {
 	iter->bi_sector += bytes >> 9;
 
-#ifdef CONFIG_PFK
-	if (iter->bi_dun)
-		iter->bi_dun += bytes >> 12;
-#endif
-
 	if (bio_no_advance_iter(bio)) {
 		iter->bi_size -= bytes;
 		iter->bi_done += bytes;
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
new file mode 100644
index 0000000..2d871a7
--- /dev/null
+++ b/include/linux/blk-crypto.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_BLK_CRYPTO_H
+#define __LINUX_BLK_CRYPTO_H
+
+#include <linux/bio.h>
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+int blk_crypto_submit_bio(struct bio **bio_ptr);
+
+bool blk_crypto_endio(struct bio *bio);
+
+int blk_crypto_init_key(struct blk_crypto_key *blk_key,
+			const u8 *raw_key, unsigned int raw_key_size,
+			bool is_hw_wrapped,
+			enum blk_crypto_mode_num crypto_mode,
+			unsigned int data_unit_size);
+
+int blk_crypto_evict_key(struct request_queue *q,
+			 const struct blk_crypto_key *key);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+static inline int blk_crypto_submit_bio(struct bio **bio_ptr)
+{
+	return 0;
+}
+
+static inline bool blk_crypto_endio(struct bio *bio)
+{
+	return true;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
+
+int blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
+				unsigned int data_unit_size,
+				struct request_queue *q);
+
+int blk_crypto_fallback_init(void);
+
+#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+static inline int
+blk_crypto_start_using_mode(enum blk_crypto_mode_num mode_num,
+			    unsigned int data_unit_size,
+			    struct request_queue *q)
+{
+	return 0;
+}
+
+static inline int blk_crypto_fallback_init(void)
+{
+	return 0;
+}
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
+
+#endif /* __LINUX_BLK_CRYPTO_H */
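Putting the blk-crypto API together: a hedged kernel-style sketch (not part of this patch) of how a filesystem might initialize a key and attach an inline-encryption context before submitting a bio. The static key object, the elided key lifetime management, and the function name are illustrative shortcuts:

#include <linux/bio.h>
#include <linux/blk-crypto.h>

static int example_submit_encrypted(struct bio *bio, const u8 *raw_key,
				    unsigned int key_size, u64 first_dun)
{
	static struct blk_crypto_key key;	/* illustrative only */
	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = { first_dun };
	int err;

	/* Standard (not HW-wrapped) key, AES-256-XTS, 4K data units. */
	err = blk_crypto_init_key(&key, raw_key, key_size,
				  false /* !is_hw_wrapped */,
				  BLK_ENCRYPTION_MODE_AES_256_XTS,
				  4096 /* data_unit_size */);
	if (err)
		return err;

	/* The block layer (or the fallback) en/decrypts from this DUN on. */
	bio_crypt_set_ctx(bio, &key, dun, GFP_NOIO);
	submit_bio(bio);
	return 0;
}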
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index f2040ae..d93633d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -18,6 +18,7 @@
 struct io_context;
 struct cgroup_subsys_state;
 typedef void (bio_end_io_t) (struct bio *);
+struct bio_crypt_ctx;
 
 /*
  * Block error status values.  See block/blk-core:blk_errors for the details.
@@ -182,18 +183,19 @@
 	struct blkcg_gq		*bi_blkg;
 	struct bio_issue	bi_issue;
 #endif
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	struct bio_crypt_ctx	*bi_crypt_context;
+#if IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
+	bool			bi_skip_dm_default_key;
+#endif
+#endif
+
 	union {
 #if defined(CONFIG_BLK_DEV_INTEGRITY)
 		struct bio_integrity_payload *bi_integrity; /* data integrity */
 #endif
 	};
-#ifdef CONFIG_PFK
-	/* Encryption key to use (NULL if none) */
-	const struct blk_encryption_key	*bi_crypt_key;
-#endif
-#ifdef CONFIG_DM_DEFAULT_KEY
-	int bi_crypt_skip;
-#endif
 
 	unsigned short		bi_vcnt;	/* how many bio_vec's */
 
@@ -208,9 +210,7 @@
 	struct bio_vec		*bi_io_vec;	/* the actual vec list */
 
 	struct bio_set		*bi_pool;
-#ifdef CONFIG_PFK
-	struct inode		*bi_dio_inode;
-#endif
+
 	/*
 	 * We can inline a number of vecs at the end of the bio, to avoid
 	 * double allocations for a small number of bio_vecs. This member
@@ -340,11 +340,6 @@
 	/* for driver use */
 	__REQ_DRV,
 	__REQ_SWAP,		/* swapping request. */
-	/* Android specific flags */
-	__REQ_NOENCRYPT,	/*
-				 * ok to not encrypt (already encrypted at fs
-				 * level)
-				 */
 	__REQ_NR_BITS,		/* stops here */
 };
 
@@ -363,10 +358,11 @@
 #define REQ_RAHEAD		(1ULL << __REQ_RAHEAD)
 #define REQ_BACKGROUND		(1ULL << __REQ_BACKGROUND)
 #define REQ_NOWAIT		(1ULL << __REQ_NOWAIT)
+
 #define REQ_NOUNMAP		(1ULL << __REQ_NOUNMAP)
+
 #define REQ_DRV			(1ULL << __REQ_DRV)
 #define REQ_SWAP		(1ULL << __REQ_SWAP)
-#define REQ_NOENCRYPT		(1ULL << __REQ_NOENCRYPT)
 
 #define REQ_FAILFAST_MASK \
 	(REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3a8a390..d01246c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -43,6 +43,7 @@
 struct rq_qos;
 struct blk_queue_stats;
 struct blk_stat_callback;
+struct keyslot_manager;
 
 #define BLKDEV_MIN_RQ	4
 #define BLKDEV_MAX_RQ	128	/* Default maximum */
@@ -161,7 +162,6 @@
 	unsigned int __data_len;	/* total data len */
 	int tag;
 	sector_t __sector;		/* sector cursor */
-	u64 __dun;			/* dun for UFS */
 
 	struct bio *bio;
 	struct bio *biotail;
@@ -575,6 +575,10 @@
 	 * queue_lock internally, e.g. scsi_request_fn().
 	 */
 	unsigned int		request_fn_active;
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+	/* Inline crypto capabilities */
+	struct keyslot_manager *ksm;
+#endif
 
 	unsigned int		rq_timeout;
 	int			poll_nsec;
@@ -705,7 +709,6 @@
 #define QUEUE_FLAG_REGISTERED  26	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED    28	/* queue has been quiesced */
-#define QUEUE_FLAG_INLINECRYPT 29	/* inline encryption support */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP)	|	\
@@ -738,8 +741,6 @@
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
 #define blk_queue_scsi_passthrough(q)	\
 	test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
-#define blk_queue_inlinecrypt(q) \
-	test_bit(QUEUE_FLAG_INLINECRYPT, &(q)->queue_flags)
 
 #define blk_noretry_request(rq) \
 	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
@@ -886,24 +887,6 @@
 	return q->nr_requests;
 }
 
-static inline void queue_flag_set_unlocked(unsigned int flag,
-					   struct request_queue *q)
-{
-	if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
-	    kref_read(&q->kobj.kref))
-		lockdep_assert_held(q->queue_lock);
-	__set_bit(flag, &q->queue_flags);
-}
-
-static inline void queue_flag_clear_unlocked(unsigned int flag,
-		                         struct request_queue *q)
-{
-	    if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
-				        kref_read(&q->kobj.kref))
-			        lockdep_assert_held(q->queue_lock);
-		    __clear_bit(flag, &q->queue_flags);
-}
-
 /*
  * q->prep_rq_fn return values
  */
@@ -1069,11 +1052,6 @@
 	return rq->__sector;
 }
 
-static inline sector_t blk_rq_dun(const struct request *rq)
-{
-	return rq->__dun;
-}
-
 static inline unsigned int blk_rq_bytes(const struct request *rq)
 {
 	return rq->__data_len;
diff --git a/include/linux/bluetooth-power.h b/include/linux/bluetooth-power.h
index 8bcba91..c0984b3 100644
--- a/include/linux/bluetooth-power.h
+++ b/include/linux/bluetooth-power.h
@@ -93,7 +93,7 @@
 #define BT_CMD_SLIM_TEST            0xbfac
 #define BT_CMD_PWR_CTRL             0xbfad
 #define BT_CMD_CHIPSET_VERS         0xbfae
-#define BT_CMD_GETVAL_RESET_GPIO    0xbfaf
+#define BT_CMD_GETVAL_RESET_GPIO    0xbfb5
 #define BT_CMD_GETVAL_SW_CTRL_GPIO  0xbfb0
 #define BT_CMD_GETVAL_VDD_AON_LDO   0xbfb1
 #define BT_CMD_GETVAL_VDD_DIG_LDO   0xbfb2
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 543bb5f..fe7a22d 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -44,7 +44,6 @@
 
 	unsigned int            bi_bvec_done;	/* number of bytes completed in
 						   current bvec */
-	u64			bi_dun;		/* DUN setting for bio */
 };
 
 /*
diff --git a/include/linux/crypto-qti-common.h b/include/linux/crypto-qti-common.h
new file mode 100644
index 0000000..ef72618
--- /dev/null
+++ b/include/linux/crypto-qti-common.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef _CRYPTO_QTI_COMMON_H
+#define _CRYPTO_QTI_COMMON_H
+
+#include <linux/bio-crypt-ctx.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+
+#define RAW_SECRET_SIZE 32
+#define QTI_ICE_MAX_BIST_CHECK_COUNT 100
+#define QTI_ICE_TYPE_NAME_LEN 8
+
+struct crypto_vops_qti_entry {
+	void __iomem *icemmio_base;
+	uint32_t ice_hw_version;
+	uint8_t ice_dev_type[QTI_ICE_TYPE_NAME_LEN];
+	uint32_t flags;
+};
+
+#if IS_ENABLED(CONFIG_QTI_CRYPTO_COMMON)
+// crypto-qti-common.c
+int crypto_qti_init_crypto(struct device *dev, void __iomem *mmio_base,
+			   void **priv_data);
+int crypto_qti_enable(void *priv_data);
+void crypto_qti_disable(void *priv_data);
+int crypto_qti_resume(void *priv_data);
+int crypto_qti_debug(void *priv_data);
+int crypto_qti_keyslot_program(void *priv_data,
+			       const struct blk_crypto_key *key,
+			       unsigned int slot, u8 data_unit_mask,
+			       int capid);
+int crypto_qti_keyslot_evict(void *priv_data, unsigned int slot);
+int crypto_qti_derive_raw_secret(const u8 *wrapped_key,
+				 unsigned int wrapped_key_size, u8 *secret,
+				 unsigned int secret_size);
+
+#else
+static inline int crypto_qti_init_crypto(struct device *dev,
+					 void __iomem *mmio_base,
+					 void **priv_data)
+{
+	return 0;
+}
+static inline int crypto_qti_enable(void *priv_data)
+{
+	return 0;
+}
+static inline void crypto_qti_disable(void *priv_data)
+{
+}
+static inline int crypto_qti_resume(void *priv_data)
+{
+	return 0;
+}
+static inline int crypto_qti_debug(void *priv_data)
+{
+	return 0;
+}
+static inline int crypto_qti_keyslot_program(void *priv_data,
+					     const struct blk_crypto_key *key,
+					     unsigned int slot,
+					     u8 data_unit_mask,
+					     int capid)
+{
+	return 0;
+}
+static inline int crypto_qti_keyslot_evict(void *priv_data, unsigned int slot)
+{
+	return 0;
+}
+static inline int crypto_qti_derive_raw_secret(const u8 *wrapped_key,
+					       unsigned int wrapped_key_size,
+					       u8 *secret,
+					       unsigned int secret_size)
+{
+	return 0;
+}
+
+#endif /* CONFIG_QTI_CRYPTO_COMMON */
+
+#endif /* _CRYPTO_QTI_COMMON_H */
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 2f3d54e..b35970f 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -315,6 +315,12 @@
 	 * on max_io_len boundary.
 	 */
 	bool split_discard_bios:1;
+
+	/*
+	 * Set if inline crypto capabilities from this target's underlying
+	 * device(s) can be exposed via the device-mapper device.
+	 */
+	bool may_passthrough_inline_crypto:1;
 };
 
 /* Each target can link one of these into the table */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index f6e7438..851a46c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -445,6 +445,7 @@
 	struct list_head refs;
 	dma_buf_destructor dtor;
 	void *dtor_data;
+	atomic_t dent_count;
 };
 
 /**
diff --git a/include/linux/fs.h b/include/linux/fs.h
index a33dc31..66963a1 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1396,6 +1396,7 @@
 	const struct xattr_handler **s_xattr;
 #ifdef CONFIG_FS_ENCRYPTION
 	const struct fscrypt_operations	*s_cop;
+	struct key		*s_master_keys; /* master crypto keys in use */
 #endif
 #ifdef CONFIG_FS_VERITY
 	const struct fsverity_operations *s_vop;
@@ -1896,7 +1897,6 @@
 	void *(*clone_mnt_data) (void *);
 	void (*copy_mnt_data) (void *, void *);
 	void (*umount_begin) (struct super_block *);
-	void (*umount_end)(struct super_block *sb, int flags);
 
 	int (*show_options)(struct seq_file *, struct dentry *);
 	int (*show_options2)(struct vfsmount *,struct seq_file *, struct dentry *);
@@ -3122,8 +3122,6 @@
 		wake_up_bit(&inode->i_state, __I_DIO_WAKEUP);
 }
 
-struct inode *dio_bio_get_inode(struct bio *bio);
-
 extern void inode_set_flags(struct inode *inode, unsigned int flags,
 			    unsigned int mask);
 
diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index 53193af..a298256 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -16,15 +16,10 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
+#include <uapi/linux/fscrypt.h>
 
 #define FS_CRYPTO_BLOCK_SIZE		16
 
-struct fscrypt_ctx;
-
-/* iv sector for security/pfe/pfk_fscrypt.c and f2fs */
-#define PG_DUN(i, p)                                            \
-	(((((u64)(i)->i_ino) & 0xffffffff) << 32) | ((p)->index & 0xffffffff))
-
 struct fscrypt_info;
 
 struct fscrypt_str {
@@ -47,7 +42,7 @@
 #define fname_len(p)		((p)->disk_name.len)
 
 /* Maximum value for the third parameter of fscrypt_operations.set_context(). */
-#define FSCRYPT_SET_CONTEXT_MAX_SIZE	28
+#define FSCRYPT_SET_CONTEXT_MAX_SIZE	40
 
 #ifdef CONFIG_FS_ENCRYPTION
 /*
@@ -66,19 +61,13 @@
 	bool (*dummy_context)(struct inode *);
 	bool (*empty_dir)(struct inode *);
 	unsigned int max_namelen;
-	bool (*is_encrypted)(struct inode *inode);
-};
-
-/* Decryption work */
-struct fscrypt_ctx {
-	union {
-		struct {
-			struct bio *bio;
-			struct work_struct work;
-		};
-		struct list_head free_list;	/* Free list */
-	};
-	u8 flags;				/* Flags */
+	bool (*has_stable_inodes)(struct super_block *sb);
+	void (*get_ino_and_lblk_bits)(struct super_block *sb,
+				      int *ino_bits_ret, int *lblk_bits_ret);
+	bool (*inline_crypt_enabled)(struct super_block *sb);
+	int (*get_num_devices)(struct super_block *sb);
+	void (*get_devices)(struct super_block *sb,
+			    struct request_queue **devs);
 };
 
 static inline bool fscrypt_has_encryption_key(const struct inode *inode)
@@ -107,8 +96,6 @@
 
 /* crypto.c */
 extern void fscrypt_enqueue_decrypt_work(struct work_struct *);
-extern struct fscrypt_ctx *fscrypt_get_ctx(gfp_t);
-extern void fscrypt_release_ctx(struct fscrypt_ctx *);
 
 extern struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
 						     unsigned int len,
@@ -140,13 +127,25 @@
 /* policy.c */
 extern int fscrypt_ioctl_set_policy(struct file *, const void __user *);
 extern int fscrypt_ioctl_get_policy(struct file *, void __user *);
+extern int fscrypt_ioctl_get_policy_ex(struct file *, void __user *);
 extern int fscrypt_has_permitted_context(struct inode *, struct inode *);
 extern int fscrypt_inherit_context(struct inode *, struct inode *,
 					void *, bool);
-/* keyinfo.c */
+/* keyring.c */
+extern void fscrypt_sb_free(struct super_block *sb);
+extern int fscrypt_ioctl_add_key(struct file *filp, void __user *arg);
+extern int fscrypt_ioctl_remove_key(struct file *filp, void __user *arg);
+extern int fscrypt_ioctl_remove_key_all_users(struct file *filp,
+					      void __user *arg);
+extern int fscrypt_ioctl_get_key_status(struct file *filp, void __user *arg);
+extern int fscrypt_register_key_removal_notifier(struct notifier_block *nb);
+extern int fscrypt_unregister_key_removal_notifier(struct notifier_block *nb);
+
+/* keysetup.c */
 extern int fscrypt_get_encryption_info(struct inode *);
 extern void fscrypt_put_encryption_info(struct inode *);
 extern void fscrypt_free_inode(struct inode *);
+extern int fscrypt_drop_inode(struct inode *inode);
 
 /* fname.c */
 extern int fscrypt_setup_filename(struct inode *, const struct qstr *,
@@ -239,8 +238,6 @@
 
 /* bio.c */
 extern void fscrypt_decrypt_bio(struct bio *);
-extern void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
-					struct bio *bio);
 extern int fscrypt_zeroout_range(const struct inode *, pgoff_t, sector_t,
 				 unsigned int);
 
@@ -285,16 +282,6 @@
 {
 }
 
-static inline struct fscrypt_ctx *fscrypt_get_ctx(gfp_t gfp_flags)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void fscrypt_release_ctx(struct fscrypt_ctx *ctx)
-{
-	return;
-}
-
 static inline struct page *fscrypt_encrypt_pagecache_blocks(struct page *page,
 							    unsigned int len,
 							    unsigned int offs,
@@ -354,6 +341,12 @@
 	return -EOPNOTSUPP;
 }
 
+static inline int fscrypt_ioctl_get_policy_ex(struct file *filp,
+					      void __user *arg)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline int fscrypt_has_permitted_context(struct inode *parent,
 						struct inode *child)
 {
@@ -367,7 +360,46 @@
 	return -EOPNOTSUPP;
 }
 
-/* keyinfo.c */
+/* keyring.c */
+static inline void fscrypt_sb_free(struct super_block *sb)
+{
+}
+
+static inline int fscrypt_ioctl_add_key(struct file *filp, void __user *arg)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_ioctl_remove_key(struct file *filp, void __user *arg)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_ioctl_remove_key_all_users(struct file *filp,
+						     void __user *arg)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_ioctl_get_key_status(struct file *filp,
+					       void __user *arg)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int fscrypt_register_key_removal_notifier(
+						struct notifier_block *nb)
+{
+	return 0;
+}
+
+static inline int fscrypt_unregister_key_removal_notifier(
+						struct notifier_block *nb)
+{
+	return 0;
+}
+
+/* keysetup.c */
 static inline int fscrypt_get_encryption_info(struct inode *inode)
 {
 	return -EOPNOTSUPP;
@@ -382,6 +414,11 @@
 {
 }
 
+static inline int fscrypt_drop_inode(struct inode *inode)
+{
+	return 0;
+}
+
  /* fname.c */
 static inline int fscrypt_setup_filename(struct inode *dir,
 					 const struct qstr *iname,
@@ -436,11 +473,6 @@
 {
 }
 
-static inline void fscrypt_enqueue_decrypt_bio(struct fscrypt_ctx *ctx,
-					       struct bio *bio)
-{
-}
-
 static inline int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
 					sector_t pblk, unsigned int len)
 {
@@ -504,6 +536,74 @@
 }
 #endif	/* !CONFIG_FS_ENCRYPTION */
 
+/* inline_crypt.c */
+#ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
+extern bool fscrypt_inode_uses_inline_crypto(const struct inode *inode);
+
+extern bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode);
+
+extern void fscrypt_set_bio_crypt_ctx(struct bio *bio,
+				      const struct inode *inode,
+				      u64 first_lblk, gfp_t gfp_mask);
+
+extern void fscrypt_set_bio_crypt_ctx_bh(struct bio *bio,
+					 const struct buffer_head *first_bh,
+					 gfp_t gfp_mask);
+
+extern bool fscrypt_mergeable_bio(struct bio *bio, const struct inode *inode,
+				  u64 next_lblk);
+
+extern bool fscrypt_mergeable_bio_bh(struct bio *bio,
+				     const struct buffer_head *next_bh);
+
+#else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+static inline bool fscrypt_inode_uses_inline_crypto(const struct inode *inode)
+{
+	return false;
+}
+
+static inline bool fscrypt_inode_uses_fs_layer_crypto(const struct inode *inode)
+{
+	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
+}
+
+static inline void fscrypt_set_bio_crypt_ctx(struct bio *bio,
+					     const struct inode *inode,
+					     u64 first_lblk, gfp_t gfp_mask) { }
+
+static inline void fscrypt_set_bio_crypt_ctx_bh(
+					 struct bio *bio,
+					 const struct buffer_head *first_bh,
+					 gfp_t gfp_mask) { }
+
+static inline bool fscrypt_mergeable_bio(struct bio *bio,
+					 const struct inode *inode,
+					 u64 next_lblk)
+{
+	return true;
+}
+
+static inline bool fscrypt_mergeable_bio_bh(struct bio *bio,
+					    const struct buffer_head *next_bh)
+{
+	return true;
+}
+#endif /* !CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
+
+#if IS_ENABLED(CONFIG_FS_ENCRYPTION) && IS_ENABLED(CONFIG_DM_DEFAULT_KEY)
+static inline bool
+fscrypt_inode_should_skip_dm_default_key(const struct inode *inode)
+{
+	return IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode);
+}
+#else
+static inline bool
+fscrypt_inode_should_skip_dm_default_key(const struct inode *inode)
+{
+	return false;
+}
+#endif
+
 /**
  * fscrypt_require_key - require an inode's encryption key
  * @inode: the inode we need the key for
@@ -712,33 +812,6 @@
 	return 0;
 }
 
-/* fscrypt_ice.c */
-#ifdef CONFIG_PFK
-extern int fscrypt_using_hardware_encryption(const struct inode *inode);
-extern void fscrypt_set_ice_dun(const struct inode *inode,
-				struct bio *bio, u64 dun);
-extern void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip);
-extern bool fscrypt_mergeable_bio(struct bio *bio, u64 dun, bool bio_encrypted,
-				int bi_crypt_skip);
-#else
-static inline int fscrypt_using_hardware_encryption(const struct inode *inode)
-{
-		return 0;
-}
-
-static inline void fscrypt_set_ice_dun(const struct inode *inode,
-				struct bio *bio, u64 dun){}
-
-static inline void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip)
-{}
-
-static inline bool fscrypt_mergeable_bio(struct bio *bio,
-				u64 dun, bool bio_encrypted, int bi_crypt_skip)
-{
-		return true;
-}
-#endif
-
 /* If *pagep is a bounce page, free it and set *pagep to the pagecache page */
 static inline void fscrypt_finalize_bounce_page(struct page **pagep)
 {
@@ -749,5 +822,4 @@
 		fscrypt_free_bounce_page(page);
 	}
 }
-
 #endif	/* _LINUX_FSCRYPT_H */
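
A rough sketch of how a filesystem consumes the inline-crypt helpers declared
above when building a bio for an encrypted file (the bio allocation and
submission scaffolding here is illustrative, not part of the fscrypt API):

	struct bio *bio = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);

	/* Attach the inode's encryption context; this is a no-op when
	 * the inode does not use inline crypto. */
	fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);

	/* Before adding another block, check that it can share the
	 * bio's encryption context (same key, contiguous DUNs): */
	if (!fscrypt_mergeable_bio(bio, inode, next_lblk)) {
		submit_bio(bio);
		/* ...then start a fresh bio for next_lblk */
	}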
diff --git a/include/linux/hwkm.h b/include/linux/hwkm.h
new file mode 100644
index 0000000..4af06b9
--- /dev/null
+++ b/include/linux/hwkm.h
@@ -0,0 +1,306 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __HWKM_H_
+#define __HWKM_H_
+
+#include <linux/types.h>
+
+/* Maximum number of bytes in a key used in a KEY_SLOT_RDWR operation */
+#define HWKM_MAX_KEY_SIZE 32
+/* Maximum number of bytes in a SW ctx used in a SYSTEM_KDF operation */
+#define HWKM_MAX_CTX_SIZE 64
+/* Maximum number of bytes in a WKB used in a key wrap or unwrap operation */
+#define HWKM_MAX_BLOB_SIZE 68
+
+/* Opcodes to be set in the op field of a command */
+enum hwkm_op {
+	/* Opcode to generate a random key */
+	NIST_KEYGEN = 0,
+	/* Opcode to derive a key */
+	SYSTEM_KDF,
+	/* Used only by HW */
+	QFPROM_KEY_RDWR,
+	/* Opcode to wrap a key and export the wrapped key */
+	KEY_WRAP_EXPORT,
+	/*
+	 * Opcode to import a wrapped key and unwrap it in the
+	 * specified key slot
+	 */
+	KEY_UNWRAP_IMPORT,
+	/* Opcode to clear a slot */
+	KEY_SLOT_CLEAR,
+	/* Opcode to read or write a key from/to a slot */
+	KEY_SLOT_RDWR,
+	/*
+	 * Opcode to broadcast a TPKEY to all slaves configured
+	 * to receive a TPKEY.
+	 */
+	SET_TPKEY,
+
+	HWKM_MAX_OP,
+	HWKM_UNDEF_OP = 0xFF
+};
+
+/*
+ * Algorithm values which can be used in the alg_allowed field of the
+ * key policy.
+ */
+enum hwkm_alg {
+	AES128_ECB = 0,
+	AES256_ECB = 1,
+	DES_ECB = 2,
+	TDES_ECB = 3,
+	AES128_CBC = 4,
+	AES256_CBC = 5,
+	DES_CBC = 6,
+	TDES_CBC = 7,
+	AES128_CCM_TC = 8,
+	AES128_CCM_NTC = 9,
+	AES256_CCM_TC = 10,
+	AES256_CCM_NTC = 11,
+	AES256_SIV = 12,
+	AES128_CTR = 13,
+	AES256_CTR = 14,
+	AES128_XTS = 15,
+	AES256_XTS = 16,
+	SHA1_HMAC = 17,
+	SHA256_HMAC = 18,
+	AES128_CMAC = 19,
+	AES256_CMAC = 20,
+	SHA384_HMAC = 21,
+	SHA512_HMAC = 22,
+	AES128_GCM = 23,
+	AES256_GCM = 24,
+	KASUMI = 25,
+	SNOW3G = 26,
+	ZUC = 27,
+	PRINCE = 28,
+	SIPHASH = 29,
+	QARMA64 = 30,
+	QARMA128 = 31,
+
+	HWKM_ALG_MAX,
+
+	HWKM_UNDEF_ALG = 0xFF
+};
+
+/* Key type values which can be used in the key_type field of the key policy */
+enum hwkm_type {
+	KEY_DERIVATION_KEY = 0,
+	KEY_WRAPPING_KEY = 1,
+	KEY_SWAPPING_KEY = 2,
+	TRANSPORT_KEY = 3,
+	GENERIC_KEY = 4,
+
+	HWKM_TYPE_MAX,
+
+	HWKM_UNDEF_KEY_TYPE = 0xFF
+};
+
+/* Destinations which a context can use */
+enum hwkm_destination {
+	KM_MASTER = 0,
+	GPCE_SLAVE = 1,
+	MCE_SLAVE = 2,
+	PIMEM_SLAVE = 3,
+	ICE0_SLAVE = 4,
+	ICE1_SLAVE = 5,
+	ICE2_SLAVE = 6,
+	ICE3_SLAVE = 7,
+	DP0_HDCP_SLAVE = 8,
+	DP1_HDCP_SLAVE = 9,
+	ICEMEM_SLAVE = 10,
+
+	HWKM_DESTINATION_MAX,
+
+	HWKM_UNDEF_DESTINATION = 0xFF
+};
+
+/*
+ * Key security levels which can be set in the security_lvl field of
+ * key policy.
+ */
+enum hwkm_security_level {
+	/* Can be read by SW in plaintext using KEY_SLOT_RDWR cmd. */
+	SW_KEY = 0,
+	/* Usable by SW, but not readable in plaintext. */
+	MANAGED_KEY = 1,
+	/* Not usable by SW. */
+	HW_KEY = 2,
+
+	HWKM_SECURITY_LEVEL_MAX,
+
+	HWKM_UNDEF_SECURITY_LEVEL = 0xFF
+};
+
+struct hwkm_key_policy {
+	bool km_by_spu_allowed;
+	bool km_by_modem_allowed;
+	bool km_by_nsec_allowed;
+	bool km_by_tz_allowed;
+
+	enum hwkm_alg alg_allowed;
+
+	bool enc_allowed;
+	bool dec_allowed;
+
+	enum hwkm_type key_type;
+	u8 kdf_depth;
+
+	bool wrap_export_allowed;
+	bool swap_export_allowed;
+
+	enum hwkm_security_level security_lvl;
+
+	enum hwkm_destination hw_destination;
+
+	bool wrap_with_tpk_allowed;
+};
+
+struct hwkm_bsve {
+	bool enabled;
+	bool km_key_policy_ver_en;
+	bool km_apps_secure_en;
+	bool km_msa_secure_en;
+	bool km_lcm_fuse_en;
+	bool km_boot_stage_otp_en;
+	bool km_swc_en;
+	bool km_child_key_policy_en;
+	bool km_mks_en;
+	u64 km_fuse_region_sha_digest_en;
+};
+
+struct hwkm_keygen_cmd {
+	u8 dks;					/* Destination Key Slot */
+	struct hwkm_key_policy policy;		/* Key policy */
+};
+
+struct hwkm_rdwr_cmd {
+	uint8_t slot;			/* Key Slot */
+	bool is_write;			/* Write or read op */
+	struct hwkm_key_policy policy;	/* Key policy for write */
+	uint8_t key[HWKM_MAX_KEY_SIZE];	/* Key for write */
+	size_t sz;			/* Length of key in bytes */
+};
+
+struct hwkm_kdf_cmd {
+	uint8_t dks;			/* Destination Key Slot */
+	uint8_t kdk;			/* Key Derivation Key Slot */
+	uint8_t mks;			/* Mixing key slot (bsve controlled) */
+	struct hwkm_key_policy policy;	/* Key policy. */
+	struct hwkm_bsve bsve;		/* Binding state vector */
+	uint8_t ctx[HWKM_MAX_CTX_SIZE];	/* Context */
+	size_t sz;			/* Length of context in bytes */
+};
+
+struct hwkm_set_tpkey_cmd {
+	uint8_t sks;			/* The slot to use as the TPKEY */
+};
+
+struct hwkm_unwrap_cmd {
+	uint8_t dks;			/* Destination Key Slot */
+	uint8_t kwk;			/* Key Wrapping Key Slot */
+	uint8_t wkb[HWKM_MAX_BLOB_SIZE];/* Wrapped Key Blob */
+	uint8_t sz;			/* Length of WKB in bytes */
+};
+
+struct hwkm_wrap_cmd {
+	uint8_t sks;			/* Source Key Slot (key to wrap) */
+	uint8_t kwk;			/* Key Wrapping Key Slot */
+	struct hwkm_bsve bsve;		/* Binding state vector */
+};
+
+struct hwkm_clear_cmd {
+	uint8_t dks;			/* Destination key slot */
+	bool is_double_key;		/* Whether this is a double key */
+};
+
+struct hwkm_cmd {
+	enum hwkm_op op;		/* Operation */
+	/* Opcode-specific parameters */
+	union {
+		struct hwkm_keygen_cmd keygen;
+		struct hwkm_rdwr_cmd rdwr;
+		struct hwkm_kdf_cmd kdf;
+		struct hwkm_set_tpkey_cmd set_tpkey;
+		struct hwkm_unwrap_cmd unwrap;
+		struct hwkm_wrap_cmd wrap;
+		struct hwkm_clear_cmd clear;
+	};
+};
+
+struct hwkm_rdwr_rsp {
+	struct hwkm_key_policy policy;	/* Key policy for read */
+	uint8_t key[HWKM_MAX_KEY_SIZE];	/* Only available for read op */
+	size_t sz;			/* Length of the key (bytes) */
+};
+
+struct hwkm_wrap_rsp {
+	uint8_t wkb[HWKM_MAX_BLOB_SIZE];	/* Wrapped Key Blob */
+	size_t sz;				/* key blob len (bytes) */
+};
+
+struct hwkm_rsp {
+	u32 status;
+	/* Opcode-specific outputs */
+	union {
+		struct hwkm_rdwr_rsp rdwr;
+		struct hwkm_wrap_rsp wrap;
+	};
+};
+
+enum hwkm_master_key_slots {
+	/* L1 KDKs. Not usable by SW. Used by HW to derive L2 KDKs. */
+	NKDK_L1 = 0,
+	PKDK_L1 = 1,
+	SKDK_L1 = 2,
+	UKDK_L1 = 3,
+
+	/*
+	 * L2 KDKs, used to derive keys by SW.
+	 * Cannot be used for crypto, only key derivation
+	 */
+	TZ_NKDK_L2 = 4,
+	TZ_PKDK_L2 = 5,
+	TZ_SKDK_L2 = 6,
+	MODEM_PKDK_L2 = 7,
+	MODEM_SKDK_L2 = 8,
+	TZ_UKDK_L2 = 9,
+
+	/* Slots reserved for TPKEY */
+	TPKEY_EVEN_SLOT = 10,
+	TPKEY_KEY_ODD_SLOT = 11,
+
+	/* First key slot available for general purpose use cases */
+	MASTER_GENERIC_SLOTS_START,
+
+	UNDEF_SLOT = 0xFF
+};
+
+#if IS_ENABLED(CONFIG_QTI_HW_KEY_MANAGER)
+int qti_hwkm_handle_cmd(struct hwkm_cmd *cmd, struct hwkm_rsp *rsp);
+int qti_hwkm_clocks(bool on);
+int qti_hwkm_init(void);
+#else
+static inline int qti_hwkm_handle_cmd(struct hwkm_cmd *cmd,
+				       struct hwkm_rsp *rsp)
+{
+	return -EOPNOTSUPP;
+}
+static inline int qti_hwkm_clocks(bool on)
+{
+	return -EOPNOTSUPP;
+}
+static inline int qti_hwkm_init(void)
+{
+	return -EOPNOTSUPP;
+}
+#endif /* CONFIG_QTI_HW_KEY_MANAGER */
+#endif /* __HWKM_H_ */
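
A minimal sketch of driving the hwkm command interface above to derive a key
into a destination slot (error handling is trimmed and cmd.kdf.policy is left
zeroed for brevity; a real caller must fill in a valid key policy and binding
state vector):

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/hwkm.h>

static int derive_key_into_slot(u8 dest_slot, u8 kdk_slot,
				const u8 *ctx, size_t ctx_len)
{
	struct hwkm_cmd cmd = {};
	struct hwkm_rsp rsp = {};
	int ret;

	if (ctx_len > HWKM_MAX_CTX_SIZE)
		return -EINVAL;

	ret = qti_hwkm_clocks(true);		/* clock the KM block */
	if (ret)
		return ret;

	cmd.op = SYSTEM_KDF;
	cmd.kdf.dks = dest_slot;
	cmd.kdf.kdk = kdk_slot;
	memcpy(cmd.kdf.ctx, ctx, ctx_len);
	cmd.kdf.sz = ctx_len;

	ret = qti_hwkm_handle_cmd(&cmd, &rsp);
	qti_hwkm_clocks(false);
	if (ret)
		return ret;
	return rsp.status ? -EIO : 0;
}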
diff --git a/include/linux/key.h b/include/linux/key.h
index e58ee10f6..86cbff8 100644
--- a/include/linux/key.h
+++ b/include/linux/key.h
@@ -303,6 +303,9 @@
 				      key_perm_t perm,
 				      unsigned long flags);
 
+extern key_ref_t lookup_user_key(key_serial_t id, unsigned long flags,
+				 key_perm_t perm);
+
 extern int key_update(key_ref_t key,
 		      const void *payload,
 		      size_t plen);
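
The newly-exported lookup_user_key() resolves a userspace key serial to a key
reference. A hedged usage sketch (the zero lookup flags and KEY_NEED_SEARCH
permission are illustrative choices, not mandated by this patch):

	key_ref_t kref;
	struct key *key;

	kref = lookup_user_key(key_id, 0 /* lookup flags */, KEY_NEED_SEARCH);
	if (IS_ERR(kref))
		return PTR_ERR(kref);
	key = key_ref_to_ptr(kref);
	/* ... read key->payload under the key's semaphore ... */
	key_put(key);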
diff --git a/include/linux/keyslot-manager.h b/include/linux/keyslot-manager.h
new file mode 100644
index 0000000..6d32a03
--- /dev/null
+++ b/include/linux/keyslot-manager.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2019 Google LLC
+ */
+
+#ifndef __LINUX_KEYSLOT_MANAGER_H
+#define __LINUX_KEYSLOT_MANAGER_H
+
+#include <linux/bio.h>
+
+#ifdef CONFIG_BLK_INLINE_ENCRYPTION
+
+struct keyslot_manager;
+
+/**
+ * struct keyslot_mgmt_ll_ops - functions to manage keyslots in hardware
+ * @keyslot_program:	Program the specified key into the specified slot in the
+ *			inline encryption hardware.
+ * @keyslot_evict:	Evict key from the specified keyslot in the hardware.
+ *			The key is provided so that e.g. dm layers can evict
+ *			keys from the devices that they map over.
+ *			Returns 0 on success, -errno otherwise.
+ * @derive_raw_secret:	(Optional) Derive a software secret from a
+ *			hardware-wrapped key.  Returns 0 on success, -EOPNOTSUPP
+ *			if unsupported on the hardware, or another -errno code.
+ *
+ * This structure should be provided by storage device drivers when they set up
+ * a keyslot manager - this structure holds the function pointers that the
+ * keyslot manager will use to manipulate keyslots in the hardware.
+ */
+struct keyslot_mgmt_ll_ops {
+	int (*keyslot_program)(struct keyslot_manager *ksm,
+			       const struct blk_crypto_key *key,
+			       unsigned int slot);
+	int (*keyslot_evict)(struct keyslot_manager *ksm,
+			     const struct blk_crypto_key *key,
+			     unsigned int slot);
+	int (*derive_raw_secret)(struct keyslot_manager *ksm,
+				 const u8 *wrapped_key,
+				 unsigned int wrapped_key_size,
+				 u8 *secret, unsigned int secret_size);
+};
+
+struct keyslot_manager *keyslot_manager_create(unsigned int num_slots,
+	const struct keyslot_mgmt_ll_ops *ksm_ops,
+	const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+	void *ll_priv_data);
+
+int keyslot_manager_get_slot_for_key(struct keyslot_manager *ksm,
+				     const struct blk_crypto_key *key);
+
+void keyslot_manager_get_slot(struct keyslot_manager *ksm, unsigned int slot);
+
+void keyslot_manager_put_slot(struct keyslot_manager *ksm, unsigned int slot);
+
+bool keyslot_manager_crypto_mode_supported(struct keyslot_manager *ksm,
+					   enum blk_crypto_mode_num crypto_mode,
+					   unsigned int data_unit_size);
+
+int keyslot_manager_evict_key(struct keyslot_manager *ksm,
+			      const struct blk_crypto_key *key);
+
+void keyslot_manager_reprogram_all_keys(struct keyslot_manager *ksm);
+
+void *keyslot_manager_private(struct keyslot_manager *ksm);
+
+void keyslot_manager_destroy(struct keyslot_manager *ksm);
+
+struct keyslot_manager *keyslot_manager_create_passthrough(
+	const struct keyslot_mgmt_ll_ops *ksm_ops,
+	const unsigned int crypto_mode_supported[BLK_ENCRYPTION_MODE_MAX],
+	void *ll_priv_data);
+
+void keyslot_manager_intersect_modes(struct keyslot_manager *parent,
+				     const struct keyslot_manager *child);
+
+int keyslot_manager_derive_raw_secret(struct keyslot_manager *ksm,
+				      const u8 *wrapped_key,
+				      unsigned int wrapped_key_size,
+				      u8 *secret, unsigned int secret_size);
+
+#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+
+#endif /* __LINUX_KEYSLOT_MANAGER_H */
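
A sketch of the driver half of this interface (all myhw_* names are
illustrative; each crypto_mode_supported[] entry is a bitmask of the data unit
sizes supported for that mode):

static int myhw_keyslot_program(struct keyslot_manager *ksm,
				const struct blk_crypto_key *key,
				unsigned int slot)
{
	struct myhw *hw = keyslot_manager_private(ksm);

	/* Write the raw key bytes into the slot's registers. */
	return myhw_write_slot_regs(hw, slot, key);
}

static int myhw_keyslot_evict(struct keyslot_manager *ksm,
			      const struct blk_crypto_key *key,
			      unsigned int slot)
{
	return myhw_clear_slot_regs(keyslot_manager_private(ksm), slot);
}

static const struct keyslot_mgmt_ll_ops myhw_ksm_ops = {
	.keyslot_program	= myhw_keyslot_program,
	.keyslot_evict		= myhw_keyslot_evict,
};

/* At probe time: advertise AES-256-XTS with 4K data units. */
unsigned int modes[BLK_ENCRYPTION_MODE_MAX] = {
	[BLK_ENCRYPTION_MODE_AES_256_XTS] = 4096,
};
struct keyslot_manager *ksm =
	keyslot_manager_create(hw->num_slots, &myhw_ksm_ops, modes, hw);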
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 8f29eb0..0605f86 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -1516,8 +1516,6 @@
 					size_t *len);
 	int (*inode_create)(struct inode *dir, struct dentry *dentry,
 				umode_t mode);
-	int (*inode_post_create)(struct inode *dir, struct dentry *dentry,
-				umode_t mode);
 	int (*inode_link)(struct dentry *old_dentry, struct inode *dir,
 				struct dentry *new_dentry);
 	int (*inode_unlink)(struct inode *dir, struct dentry *dentry);
@@ -1840,7 +1838,6 @@
 	struct hlist_head inode_free_security;
 	struct hlist_head inode_init_security;
 	struct hlist_head inode_create;
-	struct hlist_head inode_post_create;
 	struct hlist_head inode_link;
 	struct hlist_head inode_unlink;
 	struct hlist_head inode_symlink;
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 7ce6a52..547beaf 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -38,6 +38,7 @@
 	MHI_CB_EE_MISSION_MODE,
 	MHI_CB_SYS_ERROR,
 	MHI_CB_FATAL_ERROR,
+	MHI_CB_FW_FALLBACK_IMG,
 };
 
 /**
@@ -282,6 +283,7 @@
 
 	/* fw images */
 	const char *fw_image;
+	const char *fw_image_fallback;
 	const char *edl_image;
 
 	/* mhi host manages downloading entire fbc images */
diff --git a/include/linux/mmc/core.h b/include/linux/mmc/core.h
index 5e5256d..5754c8d 100644
--- a/include/linux/mmc/core.h
+++ b/include/linux/mmc/core.h
@@ -164,7 +164,6 @@
 	 */
 	void			(*recovery_notifier)(struct mmc_request *);
 	struct mmc_host		*host;
-	struct request *req;
 
 	/* Allow other commands during this ongoing data transfer or busy wait */
 	bool			cap_cmd_during_tfr;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index cd5410d..6ab6bfb 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -260,6 +260,13 @@
 	 * will have zero data bytes transferred.
 	 */
 	void	(*cqe_recovery_finish)(struct mmc_host *host);
+	/*
+	 * Update the request queue with keyslot manager details. This keyslot
+	 * manager will be used by block crypto to configure the crypto engine
+	 * for data encryption.
+	 */
+	void	(*cqe_crypto_update_queue)(struct mmc_host *host,
+					struct request_queue *queue);
 };
 
 struct mmc_async_req {
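
A hedged sketch of a host driver implementing the new hook (the myhost private
data and the queue attach step are assumptions about this tree's block layer,
not part of the documented API):

static void myhost_cqe_crypto_update_queue(struct mmc_host *host,
					   struct request_queue *q)
{
	struct myhost *h = mmc_priv(host);

	/* Hand the host's keyslot manager to the block layer so
	 * blk-crypto can program keys for requests on this queue. */
	if (h->ksm)
		q->ksm = h->ksm;
}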
diff --git a/include/linux/msm_kgsl.h b/include/linux/msm_kgsl.h
index a527a0a..c84db40 100644
--- a/include/linux/msm_kgsl.h
+++ b/include/linux/msm_kgsl.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018, 2020 The Linux Foundation. All rights reserved.
  */
 #ifndef _MSM_KGSL_H
 #define _MSM_KGSL_H
@@ -11,6 +11,7 @@
 void *kgsl_pwr_limits_add(u32 id);
 void kgsl_pwr_limits_del(void *limit);
 int kgsl_pwr_limits_set_freq(void *limit, unsigned int freq);
+int kgsl_pwr_limits_set_gpu_fmax(void *limit, unsigned int freq);
 void kgsl_pwr_limits_set_default(void *limit);
 unsigned int kgsl_pwr_limits_get_freq(u32 id);
 
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 358b70f..436ad99 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -537,6 +537,8 @@
 	return wait_on_page_bit_killable(compound_head(page), PG_locked);
 }
 
+extern void put_and_wait_on_page_locked(struct page *page);
+
 /* 
  * Wait for a page to complete writeback
  */
diff --git a/include/linux/security.h b/include/linux/security.h
index fee252f..9fa7661 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -31,7 +31,6 @@
 #include <linux/string.h>
 #include <linux/mm.h>
 #include <linux/fs.h>
-#include <linux/bio.h>
 
 struct linux_binprm;
 struct cred;
@@ -284,8 +283,6 @@
 				     const struct qstr *qstr, const char **name,
 				     void **value, size_t *len);
 int security_inode_create(struct inode *dir, struct dentry *dentry, umode_t mode);
-int security_inode_post_create(struct inode *dir, struct dentry *dentry,
-					umode_t mode);
 int security_inode_link(struct dentry *old_dentry, struct inode *dir,
 			 struct dentry *new_dentry);
 int security_inode_unlink(struct inode *dir, struct dentry *dentry);
@@ -674,13 +671,6 @@
 	return 0;
 }
 
-static inline int security_inode_post_create(struct inode *dir,
-					 struct dentry *dentry,
-					 umode_t mode)
-{
-	return 0;
-}
-
 static inline int security_inode_link(struct dentry *old_dentry,
 				       struct inode *dir,
 				       struct dentry *new_dentry)
diff --git a/include/linux/usb/phy.h b/include/linux/usb/phy.h
index 2a44134..b863726c 100644
--- a/include/linux/usb/phy.h
+++ b/include/linux/usb/phy.h
@@ -26,6 +26,9 @@
 #define PHY_HSFS_MODE		BIT(8)
 #define PHY_LS_MODE		BIT(9)
 #define PHY_USB_DP_CONCURRENT_MODE	BIT(10)
+#define EUD_SPOOF_DISCONNECT	BIT(11)
+#define EUD_SPOOF_CONNECT	BIT(12)
+#define PHY_SUS_OVERRIDE	BIT(13)
 
 enum usb_phy_interface {
 	USBPHY_INTERFACE_MODE_UNKNOWN,
diff --git a/include/linux/usb/usb_qdss.h b/include/linux/usb/usb_qdss.h
index ae5cfa3..645d6f6 100644
--- a/include/linux/usb/usb_qdss.h
+++ b/include/linux/usb/usb_qdss.h
@@ -20,7 +20,6 @@
 	struct scatterlist *sg;
 	unsigned int num_sgs;
 	unsigned int num_mapped_sgs;
-	struct completion write_done;
 };
 
 struct usb_qdss_ch {
@@ -41,17 +40,22 @@
 	USB_QDSS_CTRL_WRITE_DONE,
 };
 
+struct qdss_req {
+	struct usb_request *usb_req;
+	struct completion write_done;
+	struct qdss_request *qdss_req;
+	struct list_head list;
+};
+
 #if IS_ENABLED(CONFIG_USB_F_QDSS)
 struct usb_qdss_ch *usb_qdss_open(const char *name, void *priv,
 	void (*notify)(void *priv, unsigned int event,
 		struct qdss_request *d_req, struct usb_qdss_ch *ch));
 void usb_qdss_close(struct usb_qdss_ch *ch);
-int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int n_write, int n_read);
+int usb_qdss_alloc_req(struct usb_qdss_ch *ch, int n_write);
 void usb_qdss_free_req(struct usb_qdss_ch *ch);
-int usb_qdss_read(struct usb_qdss_ch *ch, struct qdss_request *d_req);
 int usb_qdss_write(struct usb_qdss_ch *ch, struct qdss_request *d_req);
 int usb_qdss_ctrl_write(struct usb_qdss_ch *ch, struct qdss_request *d_req);
-int usb_qdss_ctrl_read(struct usb_qdss_ch *ch, struct qdss_request *d_req);
 #else
 static inline struct usb_qdss_ch *usb_qdss_open(const char *name, void *priv,
 		void (*n)(void *, unsigned int event,
@@ -60,11 +64,6 @@
 	return ERR_PTR(-ENODEV);
 }
 
-static inline int usb_qdss_read(struct usb_qdss_ch *c, struct qdss_request *d)
-{
-	return -ENODEV;
-}
-
 static inline int usb_qdss_write(struct usb_qdss_ch *c, struct qdss_request *d)
 {
 	return -ENODEV;
@@ -76,11 +75,6 @@
 	return -ENODEV;
 }
 
-static inline int usb_qdss_ctrl_read(struct usb_qdss_ch *c,
-		struct qdss_request *d)
-{
-	return -ENODEV;
-}
-static inline int usb_qdss_alloc_req(struct usb_qdss_ch *c, int n_wr, int n_rd)
+static inline int usb_qdss_alloc_req(struct usb_qdss_ch *c, int n_wr)
 {
 	return -ENODEV;
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 20c5d80..1417526 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -651,9 +651,6 @@
 	/* The controller does not support WRITE SAME */
 	unsigned no_write_same:1;
 
-	/* Inline encryption support? */
-	unsigned inlinecrypt_support:1;
-
 	unsigned use_blk_mq:1;
 	unsigned use_cmd_list:1;
 
diff --git a/include/soc/qcom/icnss2.h b/include/soc/qcom/icnss2.h
index fca498f..64128de 100644
--- a/include/soc/qcom/icnss2.h
+++ b/include/soc/qcom/icnss2.h
@@ -164,4 +164,7 @@
 extern int icnss_get_msi_irq(struct device *dev, unsigned int vector);
 extern void icnss_get_msi_address(struct device *dev, u32 *msi_addr_low,
 			   u32 *msi_addr_high);
+extern int icnss_qmi_send(struct device *dev, int type, void *cmd,
+			  int cmd_len, void *cb_ctx,
+			  int (*cb)(void *ctx, void *event, int event_len));
 #endif /* _ICNSS_WLAN_H_ */
diff --git a/include/soc/qcom/socinfo.h b/include/soc/qcom/socinfo.h
index a808e7d..baa41cd 100644
--- a/include/soc/qcom/socinfo.h
+++ b/include/soc/qcom/socinfo.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2009-2019, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2009-2020, The Linux Foundation. All rights reserved.
  */
 
 #ifndef _ARCH_ARM_MACH_MSM_SOCINFO_H_
@@ -74,6 +74,8 @@
 	of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,sdxprairie")
 #define early_machine_is_sdmmagpie()	\
 	of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,sdmmagpie")
+#define early_machine_is_sdm660()	\
+	of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,sdm660")
 #else
 #define of_board_is_sim()		0
 #define of_board_is_rumi()		0
@@ -105,6 +107,7 @@
 #define early_machine_is_qcs405()	0
 #define early_machine_is_sdxprairie()	0
 #define early_machine_is_sdmmagpie()	0
+#define early_machine_is_sdm660()	0
 #endif
 
 #define PLATFORM_SUBTYPE_MDM	1
@@ -125,6 +128,7 @@
 	MSM_CPU_8916,
 	MSM_CPU_8084,
 	MSM_CPU_8996,
+	MSM_CPU_SDM660,
 	MSM_CPU_SM8150,
 	MSM_CPU_SA8150,
 	MSM_CPU_KONA,
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 6eb8e9f..463117c 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -13,6 +13,9 @@
 #include <linux/limits.h>
 #include <linux/ioctl.h>
 #include <linux/types.h>
+#ifndef __KERNEL__
+#include <linux/fscrypt.h>
+#endif
 
 /*
  * It's silly to have NR_OPEN bigger than NR_FILE, but you can change
@@ -259,58 +262,6 @@
 #define FS_IOC_SETFSLABEL		_IOW(0x94, 50, char[FSLABEL_MAX])
 
 /*
- * File system encryption support
- */
-/* Policy provided via an ioctl on the topmost directory */
-#define FS_KEY_DESCRIPTOR_SIZE	8
-
-#define FS_POLICY_FLAGS_PAD_4		0x00
-#define FS_POLICY_FLAGS_PAD_8		0x01
-#define FS_POLICY_FLAGS_PAD_16		0x02
-#define FS_POLICY_FLAGS_PAD_32		0x03
-#define FS_POLICY_FLAGS_PAD_MASK	0x03
-#define FS_POLICY_FLAG_DIRECT_KEY	0x04	/* use master key directly */
-#define FS_POLICY_FLAGS_VALID		0x07
-
-/* Encryption algorithms */
-#define FS_ENCRYPTION_MODE_INVALID		0
-#define FS_ENCRYPTION_MODE_AES_256_XTS		1
-#define FS_ENCRYPTION_MODE_AES_256_GCM		2
-#define FS_ENCRYPTION_MODE_AES_256_CBC		3
-#define FS_ENCRYPTION_MODE_AES_256_CTS		4
-#define FS_ENCRYPTION_MODE_AES_128_CBC		5
-#define FS_ENCRYPTION_MODE_AES_128_CTS		6
-#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7 /* Removed, do not use. */
-#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8 /* Removed, do not use. */
-#define FS_ENCRYPTION_MODE_ADIANTUM		9
-#define FS_ENCRYPTION_MODE_PRIVATE		127
-
-struct fscrypt_policy {
-	__u8 version;
-	__u8 contents_encryption_mode;
-	__u8 filenames_encryption_mode;
-	__u8 flags;
-	__u8 master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
-};
-
-#define FS_IOC_SET_ENCRYPTION_POLICY	_IOR('f', 19, struct fscrypt_policy)
-#define FS_IOC_GET_ENCRYPTION_PWSALT	_IOW('f', 20, __u8[16])
-#define FS_IOC_GET_ENCRYPTION_POLICY	_IOW('f', 21, struct fscrypt_policy)
-
-/* Parameters for passing an encryption key into the kernel keyring */
-#define FS_KEY_DESC_PREFIX		"fscrypt:"
-#define FS_KEY_DESC_PREFIX_SIZE		8
-
-/* Structure that userspace passes to the kernel keyring */
-#define FS_MAX_KEY_SIZE			64
-
-struct fscrypt_key {
-	__u32 mode;
-	__u8 raw[FS_MAX_KEY_SIZE];
-	__u32 size;
-};
-
-/*
  * Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS)
  *
  * Note: for historical reasons, these flags were originally used and
diff --git a/include/uapi/linux/fscrypt.h b/include/uapi/linux/fscrypt.h
new file mode 100644
index 0000000..12ac8cc
--- /dev/null
+++ b/include/uapi/linux/fscrypt.h
@@ -0,0 +1,197 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * fscrypt user API
+ *
+ * These ioctls can be used on filesystems that support fscrypt.  See the
+ * "User API" section of Documentation/filesystems/fscrypt.rst.
+ */
+#ifndef _UAPI_LINUX_FSCRYPT_H
+#define _UAPI_LINUX_FSCRYPT_H
+
+#include <linux/types.h>
+
+/* Encryption policy flags */
+#define FSCRYPT_POLICY_FLAGS_PAD_4		0x00
+#define FSCRYPT_POLICY_FLAGS_PAD_8		0x01
+#define FSCRYPT_POLICY_FLAGS_PAD_16		0x02
+#define FSCRYPT_POLICY_FLAGS_PAD_32		0x03
+#define FSCRYPT_POLICY_FLAGS_PAD_MASK		0x03
+#define FSCRYPT_POLICY_FLAG_DIRECT_KEY		0x04
+#define FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64	0x08
+#define FSCRYPT_POLICY_FLAGS_VALID		0x0F
+
+/* Encryption algorithms */
+#define FSCRYPT_MODE_AES_256_XTS		1
+#define FSCRYPT_MODE_AES_256_CTS		4
+#define FSCRYPT_MODE_AES_128_CBC		5
+#define FSCRYPT_MODE_AES_128_CTS		6
+#define FSCRYPT_MODE_ADIANTUM			9
+#define FSCRYPT_MODE_PRIVATE			127
+#define __FSCRYPT_MODE_MAX			127
+/*
+ * Legacy policy version; ad-hoc KDF and no key verification.
+ * For new encrypted directories, use fscrypt_policy_v2 instead.
+ *
+ * Careful: the .version field for this is actually 0, not 1.
+ */
+#define FSCRYPT_POLICY_V1		0
+#define FSCRYPT_KEY_DESCRIPTOR_SIZE	8
+struct fscrypt_policy_v1 {
+	__u8 version;
+	__u8 contents_encryption_mode;
+	__u8 filenames_encryption_mode;
+	__u8 flags;
+	__u8 master_key_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+};
+#define fscrypt_policy	fscrypt_policy_v1
+
+/*
+ * Process-subscribed "logon" key description prefix and payload format.
+ * Deprecated; prefer FS_IOC_ADD_ENCRYPTION_KEY instead.
+ */
+#define FSCRYPT_KEY_DESC_PREFIX		"fscrypt:"
+#define FSCRYPT_KEY_DESC_PREFIX_SIZE	8
+#define FSCRYPT_MAX_KEY_SIZE		64
+struct fscrypt_key {
+	__u32 mode;
+	__u8 raw[FSCRYPT_MAX_KEY_SIZE];
+	__u32 size;
+};
+
+/*
+ * New policy version with HKDF and key verification (recommended).
+ */
+#define FSCRYPT_POLICY_V2		2
+#define FSCRYPT_KEY_IDENTIFIER_SIZE	16
+struct fscrypt_policy_v2 {
+	__u8 version;
+	__u8 contents_encryption_mode;
+	__u8 filenames_encryption_mode;
+	__u8 flags;
+	__u8 __reserved[4];
+	__u8 master_key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+};
+
+/* Struct passed to FS_IOC_GET_ENCRYPTION_POLICY_EX */
+struct fscrypt_get_policy_ex_arg {
+	__u64 policy_size; /* input/output */
+	union {
+		__u8 version;
+		struct fscrypt_policy_v1 v1;
+		struct fscrypt_policy_v2 v2;
+	} policy; /* output */
+};
+
+/*
+ * v1 policy keys are specified by an arbitrary 8-byte key "descriptor",
+ * matching fscrypt_policy_v1::master_key_descriptor.
+ */
+#define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR	1
+
+/*
+ * v2 policy keys are specified by a 16-byte key "identifier" which the kernel
+ * calculates as a cryptographic hash of the key itself,
+ * matching fscrypt_policy_v2::master_key_identifier.
+ */
+#define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER	2
+
+/*
+ * Specifies a key, either for v1 or v2 policies.  This doesn't contain the
+ * actual key itself; this is just the "name" of the key.
+ */
+struct fscrypt_key_specifier {
+	__u32 type;	/* one of FSCRYPT_KEY_SPEC_TYPE_* */
+	__u32 __reserved;
+	union {
+		__u8 __reserved[32]; /* reserve some extra space */
+		__u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
+		__u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
+	} u;
+};
+
+/*
+ * Payload of Linux keyring key of type "fscrypt-provisioning", referenced by
+ * fscrypt_add_key_arg::key_id as an alternative to fscrypt_add_key_arg::raw.
+ */
+struct fscrypt_provisioning_key_payload {
+	__u32 type;
+	__u32 __reserved;
+	__u8 raw[];
+};
+
+/* Struct passed to FS_IOC_ADD_ENCRYPTION_KEY */
+struct fscrypt_add_key_arg {
+	struct fscrypt_key_specifier key_spec;
+	__u32 raw_size;
+	__u32 key_id;
+	__u32 __reserved[7];
+	/* N.B.: "temporary" flag, not reserved upstream */
+#define __FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED		0x00000001
+	__u32 __flags;
+	__u8 raw[];
+};
+
+/* Struct passed to FS_IOC_REMOVE_ENCRYPTION_KEY */
+struct fscrypt_remove_key_arg {
+	struct fscrypt_key_specifier key_spec;
+#define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_FILES_BUSY	0x00000001
+#define FSCRYPT_KEY_REMOVAL_STATUS_FLAG_OTHER_USERS	0x00000002
+	__u32 removal_status_flags;	/* output */
+	__u32 __reserved[5];
+};
+
+/* Struct passed to FS_IOC_GET_ENCRYPTION_KEY_STATUS */
+struct fscrypt_get_key_status_arg {
+	/* input */
+	struct fscrypt_key_specifier key_spec;
+	__u32 __reserved[6];
+
+	/* output */
+#define FSCRYPT_KEY_STATUS_ABSENT		1
+#define FSCRYPT_KEY_STATUS_PRESENT		2
+#define FSCRYPT_KEY_STATUS_INCOMPLETELY_REMOVED	3
+	__u32 status;
+#define FSCRYPT_KEY_STATUS_FLAG_ADDED_BY_SELF   0x00000001
+	__u32 status_flags;
+	__u32 user_count;
+	__u32 __out_reserved[13];
+};
+
+#define FS_IOC_SET_ENCRYPTION_POLICY		_IOR('f', 19, struct fscrypt_policy)
+#define FS_IOC_GET_ENCRYPTION_PWSALT		_IOW('f', 20, __u8[16])
+#define FS_IOC_GET_ENCRYPTION_POLICY		_IOW('f', 21, struct fscrypt_policy)
+#define FS_IOC_GET_ENCRYPTION_POLICY_EX		_IOWR('f', 22, __u8[9]) /* size + version */
+#define FS_IOC_ADD_ENCRYPTION_KEY		_IOWR('f', 23, struct fscrypt_add_key_arg)
+#define FS_IOC_REMOVE_ENCRYPTION_KEY		_IOWR('f', 24, struct fscrypt_remove_key_arg)
+#define FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS	_IOWR('f', 25, struct fscrypt_remove_key_arg)
+#define FS_IOC_GET_ENCRYPTION_KEY_STATUS	_IOWR('f', 26, struct fscrypt_get_key_status_arg)
+
+/**********************************************************************/
+
+/* old names; don't add anything new here! */
+#ifndef __KERNEL__
+#define FS_KEY_DESCRIPTOR_SIZE		FSCRYPT_KEY_DESCRIPTOR_SIZE
+#define FS_POLICY_FLAGS_PAD_4		FSCRYPT_POLICY_FLAGS_PAD_4
+#define FS_POLICY_FLAGS_PAD_8		FSCRYPT_POLICY_FLAGS_PAD_8
+#define FS_POLICY_FLAGS_PAD_16		FSCRYPT_POLICY_FLAGS_PAD_16
+#define FS_POLICY_FLAGS_PAD_32		FSCRYPT_POLICY_FLAGS_PAD_32
+#define FS_POLICY_FLAGS_PAD_MASK	FSCRYPT_POLICY_FLAGS_PAD_MASK
+#define FS_POLICY_FLAG_DIRECT_KEY	FSCRYPT_POLICY_FLAG_DIRECT_KEY
+#define FS_POLICY_FLAGS_VALID		FSCRYPT_POLICY_FLAGS_VALID
+#define FS_ENCRYPTION_MODE_INVALID	0	/* never used */
+#define FS_ENCRYPTION_MODE_AES_256_XTS	FSCRYPT_MODE_AES_256_XTS
+#define FS_ENCRYPTION_MODE_AES_256_GCM	2	/* never used */
+#define FS_ENCRYPTION_MODE_AES_256_CBC	3	/* never used */
+#define FS_ENCRYPTION_MODE_AES_256_CTS	FSCRYPT_MODE_AES_256_CTS
+#define FS_ENCRYPTION_MODE_AES_128_CBC	FSCRYPT_MODE_AES_128_CBC
+#define FS_ENCRYPTION_MODE_AES_128_CTS	FSCRYPT_MODE_AES_128_CTS
+#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7	/* removed */
+#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8	/* removed */
+#define FS_ENCRYPTION_MODE_ADIANTUM	FSCRYPT_MODE_ADIANTUM
+#define FS_ENCRYPTION_MODE_PRIVATE	FSCRYPT_MODE_PRIVATE
+#define FS_KEY_DESC_PREFIX		FSCRYPT_KEY_DESC_PREFIX
+#define FS_KEY_DESC_PREFIX_SIZE		FSCRYPT_KEY_DESC_PREFIX_SIZE
+#define FS_MAX_KEY_SIZE			FSCRYPT_MAX_KEY_SIZE
+#endif /* !__KERNEL__ */
+
+#endif /* _UAPI_LINUX_FSCRYPT_H */
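
For reference, a userspace program could add a v2 policy key with the new
ioctl roughly like this (a hedged sketch with error handling trimmed; fd is an
open descriptor on the target filesystem):

#include <linux/fscrypt.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

int add_v2_key(int fd, const unsigned char *raw, __u32 raw_size)
{
	struct fscrypt_add_key_arg *arg;
	int ret;

	arg = calloc(1, sizeof(*arg) + raw_size);
	if (!arg)
		return -1;
	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	arg->raw_size = raw_size;
	memcpy(arg->raw, raw, raw_size);

	ret = ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
	/* On success the kernel writes the computed key identifier
	 * into arg->key_spec.u.identifier. */
	free(arg);
	return ret;
}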
diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
index 0d16050..7369a22 100644
--- a/include/uapi/linux/v4l2-controls.h
+++ b/include/uapi/linux/v4l2-controls.h
@@ -543,6 +543,7 @@
 };
 #define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER	(V4L2_CID_MPEG_BASE+381)
 #define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER_QP	(V4L2_CID_MPEG_BASE+382)
+#define V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET	(V4L2_CID_MPEG_BASE+384)
 #define V4L2_CID_MPEG_VIDEO_MPEG4_I_FRAME_QP	(V4L2_CID_MPEG_BASE+400)
 #define V4L2_CID_MPEG_VIDEO_MPEG4_P_FRAME_QP	(V4L2_CID_MPEG_BASE+401)
 #define V4L2_CID_MPEG_VIDEO_MPEG4_B_FRAME_QP	(V4L2_CID_MPEG_BASE+402)
@@ -1002,8 +1003,8 @@
 	V4L2_CID_MPEG_VIDC_VIDEO_ROI_TYPE_2BYTE = 2,
 };
 
-#define V4L2_CID_MPEG_VIDC_VENC_CHROMA_QP_OFFSET \
-	(V4L2_CID_MPEG_MSM_VIDC_BASE + 132)
+#define V4L2_CID_MPEG_VIDC_VIDEO_LOWLATENCY_HINT \
+	(V4L2_CID_MPEG_MSM_VIDC_BASE + 133)
 
 /*  Camera class control IDs */
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c8ccfbe..f832fab 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3938,17 +3938,6 @@
 	bool strict_max;
 };
 
-static inline bool prefer_spread_on_idle(int cpu)
-{
-	if (likely(!sysctl_sched_prefer_spread))
-		return false;
-
-	if (is_min_capacity_cpu(cpu))
-		return sysctl_sched_prefer_spread >= 1;
-
-	return sysctl_sched_prefer_spread > 1;
-}
-
 static inline void adjust_cpus_for_packing(struct task_struct *p,
 			int *target_cpu, int *best_idle_cpu,
 			int shallowest_idle_cstate,
@@ -10482,8 +10471,7 @@
 
 	env.prefer_spread = (prefer_spread_on_idle(this_cpu) &&
 				!((sd->flags & SD_ASYM_CPUCAPACITY) &&
-				 !cpumask_test_cpu(this_cpu,
-						 &asym_cap_sibling_cpus)));
+				 !is_asym_cap_cpu(this_cpu)));
 
 	cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);
 
@@ -11690,7 +11678,7 @@
 
 		if (prefer_spread && !force_lb &&
 			(sd->flags & SD_ASYM_CPUCAPACITY) &&
-			!(cpumask_test_cpu(this_cpu, &asym_cap_sibling_cpus)))
+			!is_asym_cap_cpu(this_cpu))
 			avg_idle = this_rq->avg_idle;
 
 		if (avg_idle < curr_cost + sd->max_newidle_lb_cost) {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 6a15c73..f800274 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2687,6 +2687,11 @@
 #define RESTRAINED_BOOST_DISABLE -3
 #define MAX_NUM_BOOST_TYPE (RESTRAINED_BOOST+1)
 
+static inline bool is_asym_cap_cpu(int cpu)
+{
+	return cpumask_test_cpu(cpu, &asym_cap_sibling_cpus);
+}
+
 static inline int asym_cap_siblings(int cpu1, int cpu2)
 {
 	return (cpumask_test_cpu(cpu1, &asym_cap_sibling_cpus) &&
@@ -2996,6 +3001,8 @@
 	return NULL;
 }
 
+static inline bool is_asym_cap_cpu(int cpu) { return false; }
+
 static inline int asym_cap_siblings(int cpu1, int cpu2) { return 0; }
 
 static inline bool asym_cap_sibling_group_has_capacity(int dst_cpu, int margin)
diff --git a/kernel/sched/walt.h b/kernel/sched/walt.h
index 4089158..c4e3bc5 100644
--- a/kernel/sched/walt.h
+++ b/kernel/sched/walt.h
@@ -445,8 +445,24 @@
 	}						\
 })
 
+static inline bool prefer_spread_on_idle(int cpu)
+{
+	if (likely(!sysctl_sched_prefer_spread))
+		return false;
+
+	if (is_min_capacity_cpu(cpu))
+		return sysctl_sched_prefer_spread >= 1;
+
+	return sysctl_sched_prefer_spread > 1;
+}
+
 #else /* CONFIG_SCHED_WALT */
 
+static inline bool prefer_spread_on_idle(int cpu)
+{
+	return false;
+}
+
 static inline void walt_sched_init_rq(struct rq *rq) { }
 
 static inline void walt_rotate_work_init(void) { }
diff --git a/kernel_headers.py b/kernel_headers.py
index 70ebe1a..4c2096e 100644
--- a/kernel_headers.py
+++ b/kernel_headers.py
@@ -778,7 +778,7 @@
     print('error: gen_headers blueprints file is out of date, suggested fix:')
     print('#######Please add or remove the above mentioned headers from %s' % (old_gen_headers_bp))
     print('then re-run the build')
-    #return 1
+    return 1
 
   error_count = 0
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 494a638..1542132 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1029,7 +1029,14 @@
 	if (wait_page->bit_nr != key->bit_nr)
 		return 0;
 
-	/* Stop walking if it's locked */
+	/*
+	 * Stop walking if it's locked.
+	 * Is this safe if put_and_wait_on_page_locked() is in use?
+	 * Yes: the waker must hold a reference to this page, and if PG_locked
+	 * has now already been set by another task, that task must also hold
+	 * a reference to the *same usage* of this page; so there is no need
+	 * to walk on to wake even the put_and_wait_on_page_locked() callers.
+	 */
 	if (test_bit(key->bit_nr, &key->page->flags))
 		return -1;
 
@@ -1097,25 +1104,44 @@
 	wake_up_page_bit(page, bit);
 }
 
+/*
+ * A choice of three behaviors for wait_on_page_bit_common():
+ */
+enum behavior {
+	EXCLUSIVE,	/* Hold ref to page and take the bit when woken, like
+			 * __lock_page() waiting on then setting PG_locked.
+			 */
+	SHARED,		/* Hold ref to page and check the bit when woken, like
+			 * wait_on_page_writeback() waiting on PG_writeback.
+			 */
+	DROP,		/* Drop ref to page before wait, no check when woken,
+			 * like put_and_wait_on_page_locked() on PG_locked.
+			 */
+};
+
 static inline int wait_on_page_bit_common(wait_queue_head_t *q,
-		struct page *page, int bit_nr, int state, bool lock)
+	struct page *page, int bit_nr, int state, enum behavior behavior)
 {
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
+	bool bit_is_set;
 	bool thrashing = false;
+	bool delayacct = false;
 	unsigned long pflags;
 	int ret = 0;
 
 	if (bit_nr == PG_locked &&
 	    !PageUptodate(page) && PageWorkingset(page)) {
-		if (!PageSwapBacked(page))
+		if (!PageSwapBacked(page)) {
 			delayacct_thrashing_start();
+			delayacct = true;
+		}
 		psi_memstall_enter(&pflags);
 		thrashing = true;
 	}
 
 	init_wait(wait);
-	wait->flags = lock ? WQ_FLAG_EXCLUSIVE : 0;
+	wait->flags = behavior == EXCLUSIVE ? WQ_FLAG_EXCLUSIVE : 0;
 	wait->func = wake_page_function;
 	wait_page.page = page;
 	wait_page.bit_nr = bit_nr;
@@ -1132,14 +1158,17 @@
 
 		spin_unlock_irq(&q->lock);
 
-		if (likely(test_bit(bit_nr, &page->flags))) {
-			io_schedule();
-		}
+		bit_is_set = test_bit(bit_nr, &page->flags);
+		if (behavior == DROP)
+			put_page(page);
 
-		if (lock) {
+		if (likely(bit_is_set))
+			io_schedule();
+
+		if (behavior == EXCLUSIVE) {
 			if (!test_and_set_bit_lock(bit_nr, &page->flags))
 				break;
-		} else {
+		} else if (behavior == SHARED) {
 			if (!test_bit(bit_nr, &page->flags))
 				break;
 		}
@@ -1148,12 +1177,23 @@
 			ret = -EINTR;
 			break;
 		}
+
+		if (behavior == DROP) {
+			/*
+			 * We can no longer safely access page->flags:
+			 * even if CONFIG_MEMORY_HOTREMOVE is not enabled,
+			 * there is a risk of waiting forever on a page reused
+			 * for something that keeps it locked indefinitely.
+			 * But best check for -EINTR above before breaking.
+			 */
+			break;
+		}
 	}
 
 	finish_wait(q, wait);
 
 	if (thrashing) {
-		if (!PageSwapBacked(page))
+		if (delayacct)
 			delayacct_thrashing_end();
 		psi_memstall_leave(&pflags);
 	}
@@ -1172,18 +1212,37 @@
 void wait_on_page_bit(struct page *page, int bit_nr)
 {
 	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, false);
+	wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
 }
 EXPORT_SYMBOL(wait_on_page_bit);
 
 int wait_on_page_bit_killable(struct page *page, int bit_nr)
 {
 	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, false);
+	return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED);
 }
 EXPORT_SYMBOL(wait_on_page_bit_killable);
 
 /**
+ * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked
+ * @page: The page to wait for.
+ *
+ * The caller should hold a reference on @page.  They expect the page to
+ * become unlocked relatively soon, but do not wish to hold up migration
+ * (for example) by holding the reference while waiting for the page to
+ * come unlocked.  After this function returns, the caller should not
+ * dereference @page.
+ */
+void put_and_wait_on_page_locked(struct page *page)
+{
+	wait_queue_head_t *q;
+
+	page = compound_head(page);
+	q = page_waitqueue(page);
+	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, DROP);
+}
+
+/**
  * add_page_wait_queue - Add an arbitrary waiter to a page's wait queue
  * @page: Page defining the wait queue of interest
  * @waiter: Waiter to add to the queue
@@ -1312,7 +1371,8 @@
 {
 	struct page *page = compound_head(__page);
 	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, true);
+	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE,
+				EXCLUSIVE);
 }
 EXPORT_SYMBOL(__lock_page);
 
@@ -1320,7 +1380,8 @@
 {
 	struct page *page = compound_head(__page);
 	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE, true);
+	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE,
+					EXCLUSIVE);
 }
 EXPORT_SYMBOL_GPL(__lock_page_killable);
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2f1a179..479a070 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1536,8 +1536,7 @@
 		if (!get_page_unless_zero(page))
 			goto out_unlock;
 		spin_unlock(vmf->ptl);
-		wait_on_page_locked(page);
-		put_page(page);
+		put_and_wait_on_page_locked(page);
 		goto out;
 	}
 
@@ -1573,8 +1572,7 @@
 		if (!get_page_unless_zero(page))
 			goto out_unlock;
 		spin_unlock(vmf->ptl);
-		wait_on_page_locked(page);
-		put_page(page);
+		put_and_wait_on_page_locked(page);
 		goto out;
 	}
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 57de458..7c148b7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -325,16 +325,13 @@
 
 	/*
 	 * Once radix-tree replacement of page migration started, page_count
-	 * *must* be zero. And, we don't want to call wait_on_page_locked()
-	 * against a page without get_page().
-	 * So, we use get_page_unless_zero(), here. Even failed, page fault
-	 * will occur again.
+	 * is zero; but we must not call put_and_wait_on_page_locked() without
+	 * a ref. Use get_page_unless_zero(), and just fault again if it fails.
 	 */
 	if (!get_page_unless_zero(page))
 		goto out;
 	pte_unmap_unlock(ptep, ptl);
-	wait_on_page_locked(page);
-	put_page(page);
+	put_and_wait_on_page_locked(page);
 	return;
 out:
 	pte_unmap_unlock(ptep, ptl);
@@ -368,8 +365,7 @@
 	if (!get_page_unless_zero(page))
 		goto unlock;
 	spin_unlock(ptl);
-	wait_on_page_locked(page);
-	put_page(page);
+	put_and_wait_on_page_locked(page);
 	return;
 unlock:
 	spin_unlock(ptl);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 043a170..7de87d2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7493,7 +7493,7 @@
 
 #ifdef CONFIG_HAVE_MEMBLOCK
 		memblock_dbg("memblock_free: [%#016llx-%#016llx] %pS\n",
-			__pa(start), __pa(end), (void *)_RET_IP_);
+			(u64)__pa(start), (u64)__pa(end), (void *)_RET_IP_);
 #endif
 
 	return pages;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c0a6622..98d6a33 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1475,14 +1475,8 @@
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true))
 			goto keep_locked;
-		/*
-		 * At this point, we have no other references and there is
-		 * no way to pick any more up (removed from LRU, removed
-		 * from pagecache). Can use non-atomic bitops now (and
-		 * we obviously don't have to worry about waking up a process
-		 * waiting on the page lock, because there are no references.
-		 */
-		__ClearPageLocked(page);
+
+		unlock_page(page);
 free_it:
 		nr_reclaimed++;
 
diff --git a/security/Kconfig b/security/Kconfig
index 3d68322..e483bbc 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -6,10 +6,6 @@
 
 source security/keys/Kconfig
 
-if ARCH_QCOM
-source security/pfe/Kconfig
-endif
-
 config SECURITY_DMESG_RESTRICT
 	bool "Restrict unprivileged access to the kernel syslog"
 	default n
diff --git a/security/Makefile b/security/Makefile
index 47bffaa..4d2d378 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -10,7 +10,6 @@
 subdir-$(CONFIG_SECURITY_APPARMOR)	+= apparmor
 subdir-$(CONFIG_SECURITY_YAMA)		+= yama
 subdir-$(CONFIG_SECURITY_LOADPIN)	+= loadpin
-subdir-$(CONFIG_ARCH_QCOM)		+= pfe
 
 # always enable default capabilities
 obj-y					+= commoncap.o
@@ -27,7 +26,6 @@
 obj-$(CONFIG_SECURITY_YAMA)		+= yama/
 obj-$(CONFIG_SECURITY_LOADPIN)		+= loadpin/
 obj-$(CONFIG_CGROUP_DEVICE)		+= device_cgroup.o
-obj-$(CONFIG_ARCH_QCOM)			+= pfe/
 
 # Object integrity file lists
 subdir-$(CONFIG_INTEGRITY)		+= integrity
diff --git a/security/pfe/Kconfig b/security/pfe/Kconfig
deleted file mode 100644
index 47c8a03..0000000
--- a/security/pfe/Kconfig
+++ /dev/null
@@ -1,42 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-menu "Qualcomm Technologies, Inc Per File Encryption security device drivers"
-	depends on ARCH_QCOM
-
-config PFT
-	bool "Per-File-Tagger driver"
-	depends on SECURITY
-	default n
-	help
-		This driver is used for tagging enterprise files.
-		It is part of the Per-File-Encryption (PFE) feature.
-		The driver is tagging files when created by
-		registered application.
-		Tagged files are encrypted using the dm-req-crypt driver.
-
-config PFK
-	bool "Per-File-Key driver"
-	depends on SECURITY
-	depends on SECURITY_SELINUX
-	default n
-	help
-		This driver is used for storing eCryptfs information
-		in file node.
-		This is part of eCryptfs hardware enhanced solution
-		provided by Qualcomm Technologies, Inc.
-		Information is used when file is encrypted later using
-		ICE or dm crypto engine
-
-config PFK_WRAPPED_KEY_SUPPORTED
-	bool "Per-File-Key driver with wrapped key support"
-	depends on SECURITY
-	depends on SECURITY_SELINUX
-	depends on QSEECOM
-	depends on PFK
-	default n
-	help
-		Adds wrapped key support in PFK driver. Instead of setting
-		the key directly in ICE, it unwraps the key and sets the key
-		in ICE.
-		It ensures the key is protected within a secure environment
-		and only the wrapped key is present in the kernel.
-endmenu
diff --git a/security/pfe/Makefile b/security/pfe/Makefile
deleted file mode 100644
index 5758772..0000000
--- a/security/pfe/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-ccflags-y += -Isecurity/selinux -Isecurity/selinux/include
-ccflags-y += -Ifs/crypto
-ccflags-y += -Idrivers/misc
-
-obj-$(CONFIG_PFT) += pft.o
-obj-$(CONFIG_PFK) += pfk.o pfk_kc.o pfk_ice.o pfk_ext4.o pfk_f2fs.o
diff --git a/security/pfe/pfk.c b/security/pfe/pfk.c
deleted file mode 100644
index a46c39d..0000000
--- a/security/pfe/pfk.c
+++ /dev/null
@@ -1,554 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * Per-File-Key (PFK).
- *
- * This driver is responsible for overall management of various
- * Per File Encryption variants that work on top of or as part of different
- * file systems.
- *
- * The driver has the following purpose :
- * 1) Define priorities between PFE's if more than one is enabled
- * 2) Extract key information from inode
- * 3) Load and manage various keys in ICE HW engine
- * 4) It should be invoked from various layers in FS/BLOCK/STORAGE DRIVER
- *    that need to take decision on HW encryption management of the data
- *    Some examples:
- *	BLOCK LAYER: when it takes decision on whether 2 chunks can be united
- *	to one encryption / decryption request sent to the HW
- *
- *	UFS DRIVER: when it need to configure ICE HW with a particular key slot
- *	to be used for encryption / decryption
- *
- * PFE variants can differ on particular way of storing the cryptographic info
- * inside inode, actions to be taken upon file operations, etc., but the common
- * properties are described above
- *
- */
-
-#define pr_fmt(fmt)	"pfk [%s]: " fmt, __func__
-
-#include <linux/module.h>
-#include <linux/fs.h>
-#include <linux/errno.h>
-#include <linux/printk.h>
-#include <linux/bio.h>
-#include <linux/security.h>
-#include <crypto/algapi.h>
-#include <crypto/ice.h>
-
-#include <linux/pfk.h>
-
-#include "pfk_kc.h"
-#include "objsec.h"
-#include "pfk_ice.h"
-#include "pfk_ext4.h"
-#include "pfk_f2fs.h"
-#include "pfk_internal.h"
-
-static bool pfk_ready;
-
-
-/* might be replaced by a table when more than one cipher is supported */
-#define PFK_SUPPORTED_KEY_SIZE 32
-#define PFK_SUPPORTED_SALT_SIZE 32
-
-/* Various PFE types and function tables to support each one of them */
-enum pfe_type {EXT4_CRYPT_PFE, F2FS_CRYPT_PFE, INVALID_PFE};
-
-typedef int (*pfk_parse_inode_type)(const struct bio *bio,
-	const struct inode *inode,
-	struct pfk_key_info *key_info,
-	enum ice_cryto_algo_mode *algo,
-	bool *is_pfe);
-
-typedef bool (*pfk_allow_merge_bio_type)(const struct bio *bio1,
-	const struct bio *bio2, const struct inode *inode1,
-	const struct inode *inode2);
-
-static const pfk_parse_inode_type pfk_parse_inode_ftable[] = {
-	&pfk_ext4_parse_inode, /* EXT4_CRYPT_PFE */
-	&pfk_f2fs_parse_inode, /* F2FS_CRYPT_PFE */
-};
-
-static const pfk_allow_merge_bio_type pfk_allow_merge_bio_ftable[] = {
-	&pfk_ext4_allow_merge_bio, /* EXT4_CRYPT_PFE */
-	&pfk_f2fs_allow_merge_bio, /* F2FS_CRYPT_PFE */
-};
-
-static void __exit pfk_exit(void)
-{
-	pfk_ready = false;
-	pfk_ext4_deinit();
-	pfk_f2fs_deinit();
-	pfk_kc_deinit();
-}
-
-static int __init pfk_init(void)
-{
-	int ret = 0;
-
-	ret = pfk_ext4_init();
-	if (ret != 0)
-		goto fail;
-
-	ret = pfk_f2fs_init();
-	if (ret != 0)
-		goto fail;
-
-	pfk_ready = true;
-	pr_debug("Driver initialized successfully\n");
-
-	return 0;
-
-fail:
-	pr_err("Failed to init driver\n");
-	return -ENODEV;
-}
-
-/*
- * If more than one type is supported simultaneously, this function will also
- * set the priority between them
- */
-static enum pfe_type pfk_get_pfe_type(const struct inode *inode)
-{
-	if (!inode)
-		return INVALID_PFE;
-
-	if (pfk_is_ext4_type(inode))
-		return EXT4_CRYPT_PFE;
-
-	if (pfk_is_f2fs_type(inode))
-		return F2FS_CRYPT_PFE;
-
-	return INVALID_PFE;
-}
-
-/**
- * inode_to_filename() - get the filename from inode pointer.
- * @inode: inode pointer
- *
- * it is used for debug prints.
- *
- * Return: filename string or "unknown".
- */
-char *inode_to_filename(const struct inode *inode)
-{
-	struct dentry *dentry = NULL;
-	char *filename = NULL;
-
-	if (!inode)
-		return "NULL";
-
-	if (hlist_empty(&inode->i_dentry))
-		return "unknown";
-
-	dentry = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias);
-	filename = dentry->d_iname;
-
-	return filename;
-}
-
-/**
- * pfk_is_ready() - driver is initialized and ready.
- *
- * Return: true if the driver is ready.
- */
-static inline bool pfk_is_ready(void)
-{
-	return pfk_ready;
-}
-
-/**
- * pfk_bio_get_inode() - get the inode from a bio.
- * @bio: Pointer to BIO structure.
- *
- * Walk the bio struct links to get the inode.
- * Please note, that in general bio may consist of several pages from
- * several files, but in our case we always assume that all pages come
- * from the same file, since our logic ensures it. That is why we only
- * walk through the first page to look for inode.
- *
- * Return: pointer to the inode struct if successful, or NULL otherwise.
- *
- */
-static struct inode *pfk_bio_get_inode(const struct bio *bio)
-{
-	if (!bio)
-		return NULL;
-	if (!bio_has_data((struct bio *)bio))
-		return NULL;
-	if (!bio->bi_io_vec)
-		return NULL;
-	if (!bio->bi_io_vec->bv_page)
-		return NULL;
-
-	if (PageAnon(bio->bi_io_vec->bv_page)) {
-		struct inode *inode;
-
-		/* Using direct-io (O_DIRECT) without page cache */
-		inode = dio_bio_get_inode((struct bio *)bio);
-		pr_debug("inode on direct-io, inode = 0x%pK.\n", inode);
-
-		return inode;
-	}
-
-	if (!page_mapping(bio->bi_io_vec->bv_page))
-		return NULL;
-
-	return page_mapping(bio->bi_io_vec->bv_page)->host;
-}
-
-/**
- * pfk_key_size_to_key_type() - translate key size to key size enum
- * @key_size: key size in bytes
- * @key_size_type: pointer to store the output enum (can be null)
- *
- * return 0 in case of success, error otherwise (i.e not supported key size)
- */
-int pfk_key_size_to_key_type(size_t key_size,
-	enum ice_crpto_key_size *key_size_type)
-{
-	/*
-	 *  currently only 32 bit key size is supported
-	 *  in the future, table with supported key sizes might
-	 *  be introduced
-	 */
-
-	if (key_size != PFK_SUPPORTED_KEY_SIZE) {
-		pr_err("not supported key size %zu\n", key_size);
-		return -EINVAL;
-	}
-
-	if (key_size_type)
-		*key_size_type = ICE_CRYPTO_KEY_SIZE_256;
-
-	return 0;
-}
-
-/*
- * Retrieves filesystem type from inode's superblock
- */
-bool pfe_is_inode_filesystem_type(const struct inode *inode,
-	const char *fs_type)
-{
-	if (!inode || !fs_type)
-		return false;
-
-	if (!inode->i_sb)
-		return false;
-
-	if (!inode->i_sb->s_type)
-		return false;
-
-	return (strcmp(inode->i_sb->s_type->name, fs_type) == 0);
-}
-
-/**
- * pfk_get_key_for_bio() - get the encryption key to be used for a bio
- *
- * @bio: pointer to the BIO
- * @key_info: pointer to the key information which will be filled in
- * @algo_mode: optional pointer to the algorithm identifier which will be set
- * @is_pfe: will be set to false if the BIO should be left unencrypted
- *
- * Return: 0 if a key is being used, otherwise a -errno value
- */
-static int pfk_get_key_for_bio(const struct bio *bio,
-		struct pfk_key_info *key_info,
-		enum ice_cryto_algo_mode *algo_mode,
-		bool *is_pfe, unsigned int *data_unit)
-{
-	const struct inode *inode;
-	enum pfe_type which_pfe;
-	const struct blk_encryption_key *key = NULL;
-	char *s_type = NULL;
-
-	inode = pfk_bio_get_inode(bio);
-	which_pfe = pfk_get_pfe_type(inode);
-	s_type = (char *)pfk_kc_get_storage_type();
-
-	/*
-	 * Update dun based on storage type.
-	 * 512 byte dun - For ext4 emmc
-	 * 4K dun - For ext4 ufs, f2fs ufs and f2fs emmc
-	 */
-
-	if (data_unit) {
-		if (!bio_dun(bio) && !memcmp(s_type, "sdcc", strlen("sdcc")))
-			*data_unit = 1 << ICE_CRYPTO_DATA_UNIT_512_B;
-		else
-			*data_unit = 1 << ICE_CRYPTO_DATA_UNIT_4_KB;
-	}
-
-	if (which_pfe != INVALID_PFE) {
-		/* Encrypted file; override ->bi_crypt_key */
-		pr_debug("parsing inode %lu with PFE type %d\n",
-			 inode->i_ino, which_pfe);
-		return (*(pfk_parse_inode_ftable[which_pfe]))
-				(bio, inode, key_info, algo_mode, is_pfe);
-	}
-
-	/*
-	 * bio is not for an encrypted file.  Use ->bi_crypt_key if it was set.
-	 * Otherwise, don't encrypt/decrypt the bio.
-	 */
-#ifdef CONFIG_DM_DEFAULT_KEY
-	key = bio->bi_crypt_key;
-#endif
-	if (!key) {
-		*is_pfe = false;
-		return -EINVAL;
-	}
-
-	/* Note: the "salt" is really just the second half of the XTS key. */
-	BUILD_BUG_ON(sizeof(key->raw) !=
-		     PFK_SUPPORTED_KEY_SIZE + PFK_SUPPORTED_SALT_SIZE);
-	key_info->key = &key->raw[0];
-	key_info->key_size = PFK_SUPPORTED_KEY_SIZE;
-	key_info->salt = &key->raw[PFK_SUPPORTED_KEY_SIZE];
-	key_info->salt_size = PFK_SUPPORTED_SALT_SIZE;
-	if (algo_mode)
-		*algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS;
-	return 0;
-}
-
-/**
- * pfk_load_key_start() - loads PFE encryption key to the ICE
- *			  Can also be invoked from non
- *			  PFE context, in this case it
- *			  is not relevant and is_pfe
- *			  flag is set to false
- *
- * @bio: Pointer to the BIO structure
- * @ice_setting: Pointer to ice setting structure that will be filled with
- * ice configuration values, including the index to which the key was loaded
- *  @is_pfe: will be false if inode is not relevant to PFE, in such a case
- * it should be treated as non PFE by the block layer
- *
- * Returns the index where the key is stored in encryption hw and additional
- * information that will be used later for configuration of the encryption hw.
- *
- * Must be followed by pfk_load_key_end when key is no longer used by ice
- *
- */
-int pfk_load_key_start(const struct bio *bio, struct ice_device *ice_dev,
-		struct ice_crypto_setting *ice_setting, bool *is_pfe,
-		bool async)
-{
-	int ret = 0;
-	struct pfk_key_info key_info = {NULL, NULL, 0, 0};
-	enum ice_cryto_algo_mode algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS;
-	enum ice_crpto_key_size key_size_type = 0;
-	unsigned int data_unit = 1 << ICE_CRYPTO_DATA_UNIT_512_B;
-	u32 key_index = 0;
-
-	if (!is_pfe) {
-		pr_err("is_pfe is NULL\n");
-		return -EINVAL;
-	}
-
-	/*
-	 * only a few errors below can indicate that
-	 * this function was not invoked within PFE context,
-	 * otherwise we will consider it PFE
-	 */
-	*is_pfe = true;
-
-	if (!pfk_is_ready())
-		return -ENODEV;
-
-	if (!ice_setting) {
-		pr_err("ice setting is NULL\n");
-		return -EINVAL;
-	}
-
-	ret = pfk_get_key_for_bio(bio, &key_info, &algo_mode, is_pfe,
-					&data_unit);
-
-	if (ret != 0)
-		return ret;
-
-	ret = pfk_key_size_to_key_type(key_info.key_size, &key_size_type);
-	if (ret != 0)
-		return ret;
-
-	ret = pfk_kc_load_key_start(key_info.key, key_info.key_size,
-			key_info.salt, key_info.salt_size, &key_index, async,
-			data_unit, ice_dev);
-	if (ret) {
-		if (ret != -EBUSY && ret != -EAGAIN)
-			pr_err("start: could not load key into pfk key cache, error %d\n",
-					ret);
-
-		return ret;
-	}
-
-	ice_setting->key_size = key_size_type;
-	ice_setting->algo_mode = algo_mode;
-	/* hardcoded for now */
-	ice_setting->key_mode = ICE_CRYPTO_USE_LUT_SW_KEY;
-	ice_setting->key_index = key_index;
-
-	pr_debug("loaded key for file %s key_index %d\n",
-		inode_to_filename(pfk_bio_get_inode(bio)), key_index);
-
-	return 0;
-}
-
-/**
- * pfk_load_key_end() - marks the PFE key as no longer used by ICE
- *			Can also be invoked from non
- *			PFE context, in this case it is not
- *			relevant and is_pfe flag is
- *			set to false
- *
- * @bio: Pointer to the BIO structure
- * @is_pfe: Pointer to is_pfe flag, which will be true if function was invoked
- *			from PFE context
- */
-int pfk_load_key_end(const struct bio *bio, struct ice_device *ice_dev,
-			bool *is_pfe)
-{
-	int ret = 0;
-	struct pfk_key_info key_info = {NULL, NULL, 0, 0};
-
-	if (!is_pfe) {
-		pr_err("is_pfe is NULL\n");
-		return -EINVAL;
-	}
-
-	/* only a few errors below can indicate that
-	 * this function was not invoked within PFE context,
-	 * otherwise we will consider it PFE
-	 */
-	*is_pfe = true;
-
-	if (!pfk_is_ready())
-		return -ENODEV;
-
-	ret = pfk_get_key_for_bio(bio, &key_info, NULL, is_pfe, NULL);
-	if (ret != 0)
-		return ret;
-
-	pfk_kc_load_key_end(key_info.key, key_info.key_size,
-		key_info.salt, key_info.salt_size, ice_dev);
-
-	pr_debug("finished using key for file %s\n",
-		inode_to_filename(pfk_bio_get_inode(bio)));
-
-	return 0;
-}
-
-/**
- * pfk_allow_merge_bio() - Check if 2 BIOs can be merged.
- * @bio1:	Pointer to first BIO structure.
- * @bio2:	Pointer to second BIO structure.
- *
- * Prevent merging of BIOs from encrypted and non-encrypted
- * files, or files encrypted with different key.
- * Also prevent non encrypted and encrypted data from the same file
- * to be merged (ecryptfs header if stored inside file should be non
- * encrypted)
- * This API is called by the file system block layer.
- *
- * Return: true if the BIOs allowed to be merged, false
- * otherwise.
- */
-bool pfk_allow_merge_bio(const struct bio *bio1, const struct bio *bio2)
-{
-	const struct blk_encryption_key *key1 = NULL;
-	const struct blk_encryption_key *key2 = NULL;
-	const struct inode *inode1;
-	const struct inode *inode2;
-	enum pfe_type which_pfe1;
-	enum pfe_type which_pfe2;
-
-	if (!pfk_is_ready())
-		return false;
-
-	if (!bio1 || !bio2)
-		return false;
-
-	if (bio1 == bio2)
-		return true;
-
-#ifdef CONFIG_DM_DEFAULT_KEY
-	key1 = bio1->bi_crypt_key;
-	key2 = bio2->bi_crypt_key;
-#endif
-
-	inode1 = pfk_bio_get_inode(bio1);
-	inode2 = pfk_bio_get_inode(bio2);
-
-	which_pfe1 = pfk_get_pfe_type(inode1);
-	which_pfe2 = pfk_get_pfe_type(inode2);
-
-	/*
-	 * If one bio is for an encrypted file and the other is for a different
-	 * type of encrypted file or for blocks that are not part of an
-	 * encrypted file, do not merge.
-	 */
-	if (which_pfe1 != which_pfe2)
-		return false;
-
-	if (which_pfe1 != INVALID_PFE) {
-		/* Both bios are for the same type of encrypted file. */
-		return (*(pfk_allow_merge_bio_ftable[which_pfe1]))(bio1, bio2,
-				inode1, inode2);
-	}
-
-	/*
-	 * Neither bio is for an encrypted file.  Merge only if the default keys
-	 * are the same (or both are NULL).
-	 */
-	return key1 == key2 ||
-		(key1 && key2 &&
-		 !crypto_memneq(key1->raw, key2->raw, sizeof(key1->raw)));
-}
-
-int pfk_fbe_clear_key(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size)
-{
-	int ret = -EINVAL;
-
-	if (!key || !salt)
-		return ret;
-
-	ret = pfk_kc_remove_key_with_salt(key, key_size, salt, salt_size);
-	if (ret)
-		pr_err("Clear key error: ret value %d\n", ret);
-	return ret;
-}
-
-/**
- * Flush key table on storage core reset. During core reset key configuration
- * is lost in ICE. We need to flush the cache, so that the keys will be
- * reconfigured for every subsequent transaction.
- */
-void pfk_clear_on_reset(struct ice_device *ice_dev)
-{
-	if (!pfk_is_ready())
-		return;
-
-	pfk_kc_clear_on_reset(ice_dev);
-}
-
-int pfk_remove(struct ice_device *ice_dev)
-{
-	return pfk_kc_clear(ice_dev);
-}
-
-int pfk_initialize_key_table(struct ice_device *ice_dev)
-{
-	return pfk_kc_initialize_key_table(ice_dev);
-}
-
-module_init(pfk_init);
-module_exit(pfk_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("Per-File-Key driver");
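
For context, the entry points removed above were consumed by the storage
driver as a start/end pair around each bio. A minimal sketch of that
contract, assuming a hypothetical helper (example_program_bio() and the
request-programming step are illustrative, not part of the removed code):

/*
 * Sketch: bracket a bio's I/O with the removed PFK API. Only the pfk_*
 * calls below existed in the deleted code; everything else is assumed.
 */
static int example_program_bio(const struct bio *bio,
			       struct ice_device *ice_dev)
{
	struct ice_crypto_setting setting = {0};
	bool is_pfe = false;
	int ret;

	ret = pfk_load_key_start(bio, ice_dev, &setting, &is_pfe, false);
	if (!is_pfe)
		return 0;	/* bio is not per-file encrypted */
	if (ret)
		return ret;	/* -EBUSY/-EAGAIN: caller reschedules */

	/* ... program setting.key_index into the request, issue the I/O ... */

	return pfk_load_key_end(bio, ice_dev, &is_pfe);
}
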
diff --git a/security/pfe/pfk_ext4.c b/security/pfe/pfk_ext4.c
deleted file mode 100644
index 0ccd46b..0000000
--- a/security/pfe/pfk_ext4.c
+++ /dev/null
@@ -1,177 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * Per-File-Key (PFK) - EXT4
- *
- * This driver is used for working with the EXT4 crypt extension.
- *
- * The key information is stored in the inode by EXT4 when a file is first
- * opened and will later be accessed by the block device driver to actually
- * load the key into the encryption hw.
- *
- * PFK exposes APIs for loading and removing keys from the encryption hw,
- * and also an API to determine whether two adjacent blocks can be aggregated
- * by the block layer in one request to the encryption hw.
- *
- */
-
-#define pr_fmt(fmt)	"pfk_ext4 [%s]: " fmt, __func__
-
-#include <linux/module.h>
-#include <linux/fs.h>
-#include <linux/errno.h>
-#include <linux/printk.h>
-
-#include "fscrypt_ice.h"
-#include "pfk_ext4.h"
-//#include "ext4_ice.h"
-
-static bool pfk_ext4_ready;
-
-/*
- * pfk_ext4_deinit() - Deinit function, should be invoked by upper PFK layer
- */
-void pfk_ext4_deinit(void)
-{
-	pfk_ext4_ready = false;
-}
-
-/*
- * pfk_ext4_init() - Init function, should be invoked by upper PFK layer
- */
-int __init pfk_ext4_init(void)
-{
-	pfk_ext4_ready = true;
-	pr_info("PFK EXT4 inited successfully\n");
-
-	return 0;
-}
-
-/**
- * pfk_ext4_is_ready() - driver is initialized and ready.
- *
- * Return: true if the driver is ready.
- */
-static inline bool pfk_ext4_is_ready(void)
-{
-	return pfk_ext4_ready;
-}
-
-/**
- * pfk_is_ext4_type() - return true if inode belongs to ICE EXT4 PFE
- * @inode: inode pointer
- */
-bool pfk_is_ext4_type(const struct inode *inode)
-{
-	if (!pfe_is_inode_filesystem_type(inode, "ext4"))
-		return false;
-
-	return fscrypt_should_be_processed_by_ice(inode);
-}
-
-/**
- * pfk_ext4_parse_cipher() - parse cipher from inode to enum
- * @inode: inode
- * @algo: pointer to store the output enum (can be null)
- *
- * return 0 in case of success, error otherwise (i.e., unsupported cipher)
- */
-static int pfk_ext4_parse_cipher(const struct inode *inode,
-	enum ice_cryto_algo_mode *algo)
-{
-	/*
-	 * currently only AES XTS algo is supported
-	 * in the future, table with supported ciphers might
-	 * be introduced
-	 */
-
-	if (!inode)
-		return -EINVAL;
-
-	if (!fscrypt_is_aes_xts_cipher(inode)) {
-		pr_err("ext4 algorithm is not supported by pfk\n");
-		return -EINVAL;
-	}
-
-	if (algo)
-		*algo = ICE_CRYPTO_ALGO_MODE_AES_XTS;
-
-	return 0;
-}
-
-int pfk_ext4_parse_inode(const struct bio *bio,
-	const struct inode *inode,
-	struct pfk_key_info *key_info,
-	enum ice_cryto_algo_mode *algo,
-	bool *is_pfe)
-{
-	int ret = 0;
-
-	if (!is_pfe)
-		return -EINVAL;
-
-	/*
-	 * only a few errors below can indicate that
-	 * this function was not invoked within PFE context,
-	 * otherwise we will consider it PFE
-	 */
-	*is_pfe = true;
-
-	if (!pfk_ext4_is_ready())
-		return -ENODEV;
-
-	if (!inode)
-		return -EINVAL;
-
-	if (!key_info)
-		return -EINVAL;
-
-	key_info->key = fscrypt_get_ice_encryption_key(inode);
-	if (!key_info->key) {
-		pr_err("could not parse key from ext4\n");
-		return -EINVAL;
-	}
-
-	key_info->key_size = fscrypt_get_ice_encryption_key_size(inode);
-	if (!key_info->key_size) {
-		pr_err("could not parse key size from ext4\n");
-		return -EINVAL;
-	}
-
-	key_info->salt = fscrypt_get_ice_encryption_salt(inode);
-	if (!key_info->salt) {
-		pr_err("could not parse salt from ext4\n");
-		return -EINVAL;
-	}
-
-	key_info->salt_size = fscrypt_get_ice_encryption_salt_size(inode);
-	if (!key_info->salt_size) {
-		pr_err("could not parse salt size from ext4\n");
-		return -EINVAL;
-	}
-
-	ret = pfk_ext4_parse_cipher(inode, algo);
-	if (ret != 0) {
-		pr_err("not supported cipher\n");
-		return ret;
-	}
-
-	return 0;
-}
-
-bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
-	const struct bio *bio2, const struct inode *inode1,
-	const struct inode *inode2)
-{
-	/* if there is no ext4 pfk, don't disallow merging blocks */
-	if (!pfk_ext4_is_ready())
-		return true;
-
-	if (!inode1 || !inode2)
-		return false;
-
-	return fscrypt_is_ice_encryption_info_equal(inode1, inode2);
-}
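
The parse path above only reads fscrypt state, so the returned key and salt
pointers remain owned by fscrypt. A minimal caller sketch, assuming the
bio's inode has already been resolved (example_get_ext4_key() is
illustrative, not part of the removed code):

static int example_get_ext4_key(const struct bio *bio,
				const struct inode *inode)
{
	struct pfk_key_info info = {NULL, NULL, 0, 0};
	enum ice_cryto_algo_mode algo;
	bool is_pfe = false;
	int ret;

	ret = pfk_ext4_parse_inode(bio, inode, &info, &algo, &is_pfe);
	if (ret)
		return ret;

	/* info.key/info.salt now point at fscrypt-owned buffers */
	return 0;
}
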
diff --git a/security/pfe/pfk_ext4.h b/security/pfe/pfk_ext4.h
deleted file mode 100644
index bca23f3..0000000
--- a/security/pfe/pfk_ext4.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _PFK_EXT4_H_
-#define _PFK_EXT4_H_
-
-#include <linux/types.h>
-#include <linux/fs.h>
-#include <crypto/ice.h>
-#include "pfk_internal.h"
-
-bool pfk_is_ext4_type(const struct inode *inode);
-
-int pfk_ext4_parse_inode(const struct bio *bio,
-	const struct inode *inode,
-	struct pfk_key_info *key_info,
-	enum ice_cryto_algo_mode *algo,
-	bool *is_pfe);
-
-bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
-	const struct bio *bio2, const struct inode *inode1,
-	const struct inode *inode2);
-
-int __init pfk_ext4_init(void);
-
-void pfk_ext4_deinit(void);
-
-#endif /* _PFK_EXT4_H_ */
diff --git a/security/pfe/pfk_f2fs.c b/security/pfe/pfk_f2fs.c
deleted file mode 100644
index 5ea79ace..0000000
--- a/security/pfe/pfk_f2fs.c
+++ /dev/null
@@ -1,188 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * Per-File-Key (PFK) - f2fs
- *
- * This driver is used for working with the F2FS crypt extension.
- *
- * The key information is stored in the inode by F2FS when a file is first
- * opened and will later be accessed by the block device driver to actually
- * load the key into the encryption hw.
- *
- * PFK exposes APIs for loading and removing keys from the encryption hw,
- * and also an API to determine whether two adjacent blocks can be aggregated
- * by the block layer in one request to the encryption hw.
- *
- */
-
-#define pr_fmt(fmt)	"pfk_f2fs [%s]: " fmt, __func__
-
-#include <linux/module.h>
-#include <linux/fs.h>
-#include <linux/errno.h>
-#include <linux/printk.h>
-
-#include "fscrypt_ice.h"
-#include "pfk_f2fs.h"
-
-static bool pfk_f2fs_ready;
-
-/*
- * pfk_f2fs_deinit() - Deinit function, should be invoked by upper PFK layer
- */
-void pfk_f2fs_deinit(void)
-{
-	pfk_f2fs_ready = false;
-}
-
-/*
- * pfk_f2fs_init() - Init function, should be invoked by upper PFK layer
- */
-int __init pfk_f2fs_init(void)
-{
-	pfk_f2fs_ready = true;
-	pr_info("PFK F2FS inited successfully\n");
-
-	return 0;
-}
-
-/**
- * pfk_f2fs_is_ready() - driver is initialized and ready.
- *
- * Return: true if the driver is ready.
- */
-static inline bool pfk_f2fs_is_ready(void)
-{
-	return pfk_f2fs_ready;
-}
-
-/**
- * pfk_is_f2fs_type() - return true if inode belongs to ICE F2FS PFE
- * @inode: inode pointer
- */
-bool pfk_is_f2fs_type(const struct inode *inode)
-{
-	if (!pfe_is_inode_filesystem_type(inode, "f2fs"))
-		return false;
-
-	return fscrypt_should_be_processed_by_ice(inode);
-}
-
-/**
- * pfk_f2fs_parse_cipher() - parse cipher from inode to enum
- * @inode: inode
- * @algo: pointer to store the output enum (can be null)
- *
- * return 0 in case of success, error otherwise (i.e., unsupported cipher)
- */
-static int pfk_f2fs_parse_cipher(const struct inode *inode,
-		enum ice_cryto_algo_mode *algo)
-{
-	/*
-	 * currently only AES XTS algo is supported
-	 * in the future, table with supported ciphers might
-	 * be introduced
-	 */
-	if (!inode)
-		return -EINVAL;
-
-	if (!fscrypt_is_aes_xts_cipher(inode)) {
-		pr_err("f2fs algorithm is not supported by pfk\n");
-		return -EINVAL;
-	}
-
-	if (algo)
-		*algo = ICE_CRYPTO_ALGO_MODE_AES_XTS;
-
-	return 0;
-}
-
-int pfk_f2fs_parse_inode(const struct bio *bio,
-		const struct inode *inode,
-		struct pfk_key_info *key_info,
-		enum ice_cryto_algo_mode *algo,
-		bool *is_pfe)
-{
-	int ret = 0;
-
-	if (!is_pfe)
-		return -EINVAL;
-
-	/*
-	 * only a few errors below can indicate that
-	 * this function was not invoked within PFE context,
-	 * otherwise we will consider it PFE
-	 */
-	*is_pfe = true;
-
-	if (!pfk_f2fs_is_ready())
-		return -ENODEV;
-
-	if (!inode)
-		return -EINVAL;
-
-	if (!key_info)
-		return -EINVAL;
-
-	key_info->key = fscrypt_get_ice_encryption_key(inode);
-	if (!key_info->key) {
-		pr_err("could not parse key from f2fs\n");
-		return -EINVAL;
-	}
-
-	key_info->key_size = fscrypt_get_ice_encryption_key_size(inode);
-	if (!key_info->key_size) {
-		pr_err("could not parse key size from f2fs\n");
-		return -EINVAL;
-	}
-
-	key_info->salt = fscrypt_get_ice_encryption_salt(inode);
-	if (!key_info->salt) {
-		pr_err("could not parse salt from f2fs\n");
-		return -EINVAL;
-	}
-
-	key_info->salt_size = fscrypt_get_ice_encryption_salt_size(inode);
-	if (!key_info->salt_size) {
-		pr_err("could not parse salt size from f2fs\n");
-		return -EINVAL;
-	}
-
-	ret = pfk_f2fs_parse_cipher(inode, algo);
-	if (ret != 0) {
-		pr_err("not supported cipher\n");
-		return ret;
-	}
-
-	return 0;
-}
-
-bool pfk_f2fs_allow_merge_bio(const struct bio *bio1,
-		const struct bio *bio2, const struct inode *inode1,
-		const struct inode *inode2)
-{
-	bool mergeable;
-
-	/* if there is no f2fs pfk, don't disallow merging blocks */
-	if (!pfk_f2fs_is_ready())
-		return true;
-
-	if (!inode1 || !inode2)
-		return false;
-
-	mergeable = fscrypt_is_ice_encryption_info_equal(inode1, inode2);
-	if (!mergeable)
-		return false;
-
-	/* ICE allows only a consecutive IV (DUN) stream. */
-	if (!bio_dun(bio1) && !bio_dun(bio2))
-		return true;
-	else if (!bio_dun(bio1) || !bio_dun(bio2))
-		return false;
-
-	return bio_end_dun(bio1) == bio_dun(bio2);
-}
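
The tail of pfk_f2fs_allow_merge_bio() above encodes the rule that ICE can
only generate a consecutive IV stream: two DUN-carrying bios may merge only
when the second starts exactly where the first ends (e.g. a bio covering
DUNs 100-103 can merge with one starting at DUN 104, but not 105). A
minimal restatement, assuming bio_end_dun() returns the first DUN past the
bio:

/* Illustration only: the DUN-contiguity rule used above. */
static bool example_dun_contiguous(const struct bio *bio1,
				   const struct bio *bio2)
{
	/* bio1 covers [bio_dun(bio1), bio_end_dun(bio1)) */
	return bio_end_dun(bio1) == bio_dun(bio2);
}
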
diff --git a/security/pfe/pfk_f2fs.h b/security/pfe/pfk_f2fs.h
deleted file mode 100644
index 3c6f7ec..0000000
--- a/security/pfe/pfk_f2fs.h
+++ /dev/null
@@ -1,30 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _PFK_F2FS_H_
-#define _PFK_F2FS_H_
-
-#include <linux/types.h>
-#include <linux/fs.h>
-#include <crypto/ice.h>
-#include "pfk_internal.h"
-
-bool pfk_is_f2fs_type(const struct inode *inode);
-
-int pfk_f2fs_parse_inode(const struct bio *bio,
-		const struct inode *inode,
-		struct pfk_key_info *key_info,
-		enum ice_cryto_algo_mode *algo,
-		bool *is_pfe);
-
-bool pfk_f2fs_allow_merge_bio(const struct bio *bio1,
-	const struct bio *bio2, const struct inode *inode1,
-	const struct inode *inode2);
-
-int __init pfk_f2fs_init(void);
-
-void pfk_f2fs_deinit(void);
-
-#endif /* _PFK_F2FS_H_ */
diff --git a/security/pfe/pfk_ice.c b/security/pfe/pfk_ice.c
deleted file mode 100644
index 8d3928f9..0000000
--- a/security/pfe/pfk_ice.c
+++ /dev/null
@@ -1,205 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/errno.h>
-#include <linux/io.h>
-#include <linux/interrupt.h>
-#include <linux/delay.h>
-#include <linux/async.h>
-#include <linux/mm.h>
-#include <linux/of.h>
-#include <linux/device-mapper.h>
-#include <soc/qcom/scm.h>
-#include <soc/qcom/qseecomi.h>
-#include <soc/qcom/qtee_shmbridge.h>
-#include <crypto/ice.h>
-#include "pfk_ice.h"
-
-/**********************************/
-/** global definitions		 **/
-/**********************************/
-
-#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE 0x5
-#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE 0x6
-
-/* index 0 and 1 is reserved for FDE */
-#define MIN_ICE_KEY_INDEX 2
-
-#define MAX_ICE_KEY_INDEX 31
-
-#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID \
-	TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, TZ_SVC_ES, \
-	TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE)
-
-#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID \
-		TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, \
-			TZ_SVC_ES, TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE)
-
-#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID \
-	TZ_SYSCALL_CREATE_PARAM_ID_2( \
-	TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL)
-
-#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID \
-	TZ_SYSCALL_CREATE_PARAM_ID_6( \
-	TZ_SYSCALL_PARAM_TYPE_VAL, \
-	TZ_SYSCALL_PARAM_TYPE_BUF_RW, TZ_SYSCALL_PARAM_TYPE_VAL, \
-	TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL, \
-	TZ_SYSCALL_PARAM_TYPE_VAL)
-
-#define CONTEXT_SIZE 0x1000
-
-#define ICE_BUFFER_SIZE 64
-
-#define PFK_UFS "ufs"
-#define PFK_SDCC "sdcc"
-#define PFK_UFS_CARD "ufscard"
-
-#define UFS_CE 10
-#define SDCC_CE 20
-#define UFS_CARD_CE 30
-
-enum {
-	ICE_CIPHER_MODE_XTS_128 = 0,
-	ICE_CIPHER_MODE_CBC_128 = 1,
-	ICE_CIPHER_MODE_XTS_256 = 3,
-	ICE_CIPHER_MODE_CBC_256 = 4
-};
-
-static int set_key(uint32_t index, const uint8_t *key, const uint8_t *salt,
-		unsigned int data_unit, struct ice_device *ice_dev)
-{
-	struct scm_desc desc = {0};
-	int ret = 0;
-	uint32_t smc_id = 0;
-	char *tzbuf = NULL;
-	uint32_t key_size = ICE_BUFFER_SIZE / 2;
-	struct qtee_shm shm;
-
-	ret = qtee_shmbridge_allocate_shm(ICE_BUFFER_SIZE, &shm);
-	if (ret)
-		return -ENOMEM;
-
-	tzbuf = shm.vaddr;
-
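-	/* TZ buffer layout: a 32-byte key half followed by a 32-byte salt half */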
-	memcpy(tzbuf, key, key_size);
-	memcpy(tzbuf+key_size, salt, key_size);
-	dmac_flush_range(tzbuf, tzbuf + ICE_BUFFER_SIZE);
-
-	smc_id = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID;
-
-	desc.arginfo = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID;
-	desc.args[0] = index;
-	desc.args[1] = shm.paddr;
-	desc.args[2] = shm.size;
-	desc.args[3] = ICE_CIPHER_MODE_XTS_256;
-	desc.args[4] = data_unit;
-
-	if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS_CARD))
-		desc.args[5] = UFS_CARD_CE;
-	else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_SDCC))
-		desc.args[5] = SDCC_CE;
-	else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS))
-		desc.args[5] = UFS_CE;
-
-	ret = scm_call2_noretry(smc_id, &desc);
-	if (ret)
-		pr_err("%s:SCM call Error: 0x%x\n", __func__, ret);
-
-	qtee_shmbridge_free_shm(&shm);
-	return ret;
-}
-
-static int clear_key(uint32_t index, struct ice_device *ice_dev)
-{
-	struct scm_desc desc = {0};
-	int ret = 0;
-	uint32_t smc_id = 0;
-
-	smc_id = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID;
-
-	desc.arginfo = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID;
-	desc.args[0] = index;
-
-	if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS_CARD))
-		desc.args[1] = UFS_CARD_CE;
-	else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_SDCC))
-		desc.args[1] = SDCC_CE;
-	else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS))
-		desc.args[1] = UFS_CE;
-
-	ret = scm_call2_noretry(smc_id, &desc);
-	if (ret)
-		pr_err("%s:SCM call Error: 0x%x\n", __func__, ret);
-	return ret;
-}
-
-int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
-			struct ice_device *ice_dev, unsigned int data_unit)
-{
-	int ret = 0, ret1 = 0;
-
-	if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
-		pr_err("%s Invalid index %d\n", __func__, index);
-		return -EINVAL;
-	}
-	if (!key || !salt) {
-		pr_err("%s Invalid key/salt\n", __func__);
-		return -EINVAL;
-	}
-
-	ret = enable_ice_setup(ice_dev);
-	if (ret) {
-		pr_err("%s: could not enable clocks: %d\n", __func__, ret);
-		goto out;
-	}
-
-	ret = set_key(index, key, salt, data_unit, ice_dev);
-	if (ret) {
-		pr_err("%s: Set Key Error: %d\n", __func__, ret);
-		if (ret == -EBUSY) {
-			if (disable_ice_setup(ice_dev))
-				pr_err("%s: clock disable failed\n", __func__);
-			goto out;
-		}
-		/* Try to invalidate the key to keep ICE in proper state */
-		ret1 = clear_key(index, ice_dev);
-		if (ret1)
-			pr_err("%s: Invalidate key error: %d\n", __func__, ret1);
-	}
-
-	ret1 = disable_ice_setup(ice_dev);
-	if (ret1)
-		pr_err("%s: Error %d disabling clocks\n", __func__, ret1);
-
-out:
-	return ret;
-}
-
-int qti_pfk_ice_invalidate_key(uint32_t index, struct ice_device *ice_dev)
-{
-	int ret = 0;
-
-	if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
-		pr_err("%s Invalid index %d\n", __func__, index);
-		return -EINVAL;
-	}
-
-	ret = enable_ice_setup(ice_dev);
-	if (ret) {
-		pr_err("%s: could not enable clocks: 0x%x\n", __func__, ret);
-		return ret;
-	}
-
-	ret = clear_key(index, ice_dev);
-	if (ret)
-		pr_err("%s: Invalidate key error: %d\n", __func__, ret);
-
-	if (disable_ice_setup(ice_dev))
-		pr_err("%s: could not disable clocks\n", __func__);
-
-	return ret;
-}
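
The two entry points above form the per-keyslot program/invalidate pair
driven by the key cache. A minimal sketch of that pairing, with a made-up
keyslot index and zeroed key material (illustration only; real callers pass
key material obtained from fscrypt):

static int example_cycle_keyslot(struct ice_device *ice_dev)
{
	uint8_t key[32] = {0};	/* 256-bit XTS key half */
	uint8_t salt[32] = {0};	/* 256-bit XTS tweak half */
	unsigned int data_unit = 1 << ICE_CRYPTO_DATA_UNIT_512_B;
	int ret;

	/* keyslots 0 and 1 are reserved for FDE, so 2 is the first usable */
	ret = qti_pfk_ice_set_key(2, key, salt, ice_dev, data_unit);
	if (ret)
		return ret;

	return qti_pfk_ice_invalidate_key(2, ice_dev);
}
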
diff --git a/security/pfe/pfk_ice.h b/security/pfe/pfk_ice.h
deleted file mode 100644
index 527fb61..0000000
--- a/security/pfe/pfk_ice.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef PFK_ICE_H_
-#define PFK_ICE_H_
-
-/*
- * PFK ICE
- *
- * ICE keys configuration through scm calls.
- *
- */
-
-#include <linux/types.h>
-#include <crypto/ice.h>
-
-int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
-			struct ice_device *ice_dev, unsigned int data_unit);
-int qti_pfk_ice_invalidate_key(uint32_t index, struct ice_device *ice_dev);
-
-#endif /* PFK_ICE_H_ */
diff --git a/security/pfe/pfk_internal.h b/security/pfe/pfk_internal.h
deleted file mode 100644
index 7a800d3..0000000
--- a/security/pfe/pfk_internal.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef _PFK_INTERNAL_H_
-#define _PFK_INTERNAL_H_
-
-#include <linux/types.h>
-#include <crypto/ice.h>
-
-struct pfk_key_info {
-	const unsigned char *key;
-	const unsigned char *salt;
-	size_t key_size;
-	size_t salt_size;
-};
-
-int pfk_key_size_to_key_type(size_t key_size,
-	enum ice_crpto_key_size *key_size_type);
-
-bool pfe_is_inode_filesystem_type(const struct inode *inode,
-	const char *fs_type);
-
-char *inode_to_filename(const struct inode *inode);
-
-#endif /* _PFK_INTERNAL_H_ */
diff --git a/security/pfe/pfk_kc.c b/security/pfe/pfk_kc.c
deleted file mode 100644
index 5a0a557..0000000
--- a/security/pfe/pfk_kc.c
+++ /dev/null
@@ -1,870 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-/*
- * PFK Key Cache
- *
- * Key Cache used internally in PFK.
- * The purpose of the cache is to save access time to QSEE when loading keys.
- * Currently the cache is the same size as the total number of keys that can
- * be loaded to ICE. Since this number is relatively small, the algorithms for
- * cache eviction are simple, linear and based on last usage timestamp, i.e
- * the node that will be evicted is the one with the oldest timestamp.
- * Empty entries always have the oldest timestamp.
- */
-
-#include <linux/module.h>
-#include <linux/mutex.h>
-#include <linux/spinlock.h>
-#include <linux/errno.h>
-#include <linux/string.h>
-#include <linux/jiffies.h>
-#include <linux/slab.h>
-#include <linux/printk.h>
-#include <linux/sched/signal.h>
-
-#include "pfk_kc.h"
-#include "pfk_ice.h"
-
-
-/** the first available index in ice engine */
-#define PFK_KC_STARTING_INDEX 2
-
-/** currently the only supported key and salt sizes */
-#define PFK_KC_KEY_SIZE 32
-#define PFK_KC_SALT_SIZE 32
-
-/** Table size */
-#define PFK_KC_TABLE_SIZE ((32) - (PFK_KC_STARTING_INDEX))
-
-/** The maximum key and salt size */
-#define PFK_MAX_KEY_SIZE PFK_KC_KEY_SIZE
-#define PFK_MAX_SALT_SIZE PFK_KC_SALT_SIZE
-#define PFK_UFS "ufs"
-#define PFK_UFS_CARD "ufscard"
-
-static DEFINE_SPINLOCK(kc_lock);
-static unsigned long flags;
-static bool kc_ready;
-static char *s_type = "sdcc";
-
-/**
- * enum pfk_kc_entry_state - state of the entry inside kc table
- *
- * @FREE:		   entry is free
- * @ACTIVE_ICE_PRELOAD:    entry is actively used by ICE engine
- *			   and cannot be used by others. SCM call
- *			   to load key to ICE is pending to be performed
- * @ACTIVE_ICE_LOADED:     entry is actively used by ICE engine and
- *			   cannot be used by others. SCM call to load the
- *			   key to ICE was successfully executed and key is
- *			   now loaded
- * @INACTIVE_INVALIDATING: entry is being invalidated during file close
- *			   and cannot be used by others until invalidation
- *			   is complete
- * @INACTIVE:		   entry's key is already loaded, but is not
- *			   currently being used. It can be re-used for
- *			   optimization and to avoid SCM call cost, or
- *			   it can be taken by another key if there are
- *			   no FREE entries
- * @SCM_ERROR:		   an error occurred while the SCM call was performed
- *			   to load the key to ICE
- */
-enum pfk_kc_entry_state {
-	FREE,
-	ACTIVE_ICE_PRELOAD,
-	ACTIVE_ICE_LOADED,
-	INACTIVE_INVALIDATING,
-	INACTIVE,
-	SCM_ERROR
-};
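-
-/*
- * Typical entry lifecycle:
- * FREE -> ACTIVE_ICE_PRELOAD -> ACTIVE_ICE_LOADED -> INACTIVE,
- * then either reused (back to ACTIVE_ICE_LOADED) or
- * INACTIVE_INVALIDATING -> FREE on key removal.
- */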
-
-struct kc_entry {
-	unsigned char key[PFK_MAX_KEY_SIZE];
-	size_t key_size;
-
-	unsigned char salt[PFK_MAX_SALT_SIZE];
-	size_t salt_size;
-
-	u64 time_stamp;
-	u32 key_index;
-
-	struct task_struct *thread_pending;
-
-	enum pfk_kc_entry_state state;
-
-	/* ref count for the number of requests in the HW queue for this key */
-	int loaded_ref_cnt;
-	int scm_error;
-};
-
-/**
- * kc_is_ready() - driver is initialized and ready.
- *
- * Return: true if the key cache is ready.
- */
-static inline bool kc_is_ready(void)
-{
-	return kc_ready;
-}
-
-static inline void kc_spin_lock(void)
-{
-	spin_lock_irqsave(&kc_lock, flags);
-}
-
-static inline void kc_spin_unlock(void)
-{
-	spin_unlock_irqrestore(&kc_lock, flags);
-}
-
-/**
- * pfk_kc_get_storage_type() - return the hardware storage type.
- *
- * Return: storage type queried during bootup.
- */
-const char *pfk_kc_get_storage_type(void)
-{
-	return s_type;
-}
-
-/**
- * kc_entry_is_available() - checks whether the entry is available
- *
- * Return true if it is, false otherwise or if the entry is invalid
- * Should be invoked under spinlock
- */
-static bool kc_entry_is_available(const struct kc_entry *entry)
-{
-	if (!entry)
-		return false;
-
-	return (entry->state == FREE || entry->state == INACTIVE);
-}
-
-/**
- * kc_entry_wait_till_available() - waits till entry is available
- *
- * Returns 0 in case of success or -ERESTARTSYS if the wait was interrupted
- * by a signal
- *
- * Should be invoked under spinlock
- */
-static int kc_entry_wait_till_available(struct kc_entry *entry)
-{
-	int res = 0;
-
-	while (!kc_entry_is_available(entry)) {
-		set_current_state(TASK_INTERRUPTIBLE);
-		if (signal_pending(current)) {
-			res = -ERESTARTSYS;
-			break;
-		}
-		/* assuming only one thread can try to invalidate
-		 * the same entry
-		 */
-		entry->thread_pending = current;
-		kc_spin_unlock();
-		schedule();
-		kc_spin_lock();
-	}
-	set_current_state(TASK_RUNNING);
-
-	return res;
-}
-
-/**
- * kc_entry_start_invalidating() - moves entry to state
- *			           INACTIVE_INVALIDATING
- *				   If entry is in use, waits till
- *				   it gets available
- * @entry: pointer to entry
- *
- * Return 0 in case of success, otherwise error
- * Should be invoked under spinlock
- */
-static int kc_entry_start_invalidating(struct kc_entry *entry)
-{
-	int res;
-
-	res = kc_entry_wait_till_available(entry);
-	if (res)
-		return res;
-
-	entry->state = INACTIVE_INVALIDATING;
-
-	return 0;
-}
-
-/**
- * kc_entry_finish_invalidating() - moves entry to state FREE
- *				    wakes up all the tasks waiting
- *				    on it
- *
- * @entry: pointer to entry
- *
- * Return 0 in case of success, otherwise error
- * Should be invoked under spinlock
- */
-static void kc_entry_finish_invalidating(struct kc_entry *entry)
-{
-	if (!entry)
-		return;
-
-	if (entry->state != INACTIVE_INVALIDATING)
-		return;
-
-	entry->state = FREE;
-}
-
-/**
- * kc_min_entry() - compare two entries to find one with minimal time
- * @a: ptr to the first entry. If NULL the other entry will be returned
- * @b: pointer to the second entry
- *
- * Return the entry whose timestamp is minimal, or b if a is NULL
- */
-static inline struct kc_entry *kc_min_entry(struct kc_entry *a,
-		struct kc_entry *b)
-{
-	if (!a)
-		return b;
-
-	if (time_before64(b->time_stamp, a->time_stamp))
-		return b;
-
-	return a;
-}
-
-/**
- * kc_entry_at_index() - return entry at specific index
- * @index: index of entry to be accessed
- *
- * Return entry
- * Should be invoked under spinlock
- */
-static struct kc_entry *kc_entry_at_index(int index,
-		struct ice_device *ice_dev)
-{
-	return (struct kc_entry *)(ice_dev->key_table) + index;
-}
-
-/**
- * kc_find_key_at_index() - find kc entry starting at specific index
- * @key: key to look for
- * @key_size: the key size
- * @salt: salt to look for
- * @salt_size: the salt size
- * @starting_index: index to start the search from; if an entry is found,
- * updated with the index of that entry
- *
- * Return entry or NULL in case of error
- * Should be invoked under spinlock
- */
-static struct kc_entry *kc_find_key_at_index(const unsigned char *key,
-	size_t key_size, const unsigned char *salt, size_t salt_size,
-	struct ice_device *ice_dev, int *starting_index)
-{
-	struct kc_entry *entry = NULL;
-	int i = 0;
-
-	for (i = *starting_index; i < PFK_KC_TABLE_SIZE; i++) {
-		entry = kc_entry_at_index(i, ice_dev);
-
-		if (salt != NULL) {
-			if (entry->salt_size != salt_size)
-				continue;
-
-			if (memcmp(entry->salt, salt, salt_size) != 0)
-				continue;
-		}
-
-		if (entry->key_size != key_size)
-			continue;
-
-		if (memcmp(entry->key, key, key_size) == 0) {
-			*starting_index = i;
-			return entry;
-		}
-	}
-
-	return NULL;
-}
-
-/**
- * kc_find_key() - find kc entry
- * @key: key to look for
- * @key_size: the key size
- * @salt: salt to look for
- * @salt_size: the salt size
- *
- * Return entry or NULL in case of error
- * Should be invoked under spinlock
- */
-static struct kc_entry *kc_find_key(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size,
-		struct ice_device *ice_dev)
-{
-	int index = 0;
-
-	return kc_find_key_at_index(key, key_size, salt, salt_size,
-				ice_dev, &index);
-}
-
-/**
- * kc_find_oldest_entry_non_locked() - finds the entry with minimal timestamp
- * that is not locked
- *
- * Returns entry with minimal timestamp. Empty entries have timestamp
- * of 0, therefore they are returned first.
- * If all the entries are locked, will return NULL
- * Should be invoked under spin lock
- */
-static struct kc_entry *kc_find_oldest_entry_non_locked(
-		struct ice_device *ice_dev)
-{
-	struct kc_entry *curr_min_entry = NULL;
-	struct kc_entry *entry = NULL;
-	int i = 0;
-
-	for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
-		entry = kc_entry_at_index(i, ice_dev);
-
-		if (entry->state == FREE)
-			return entry;
-
-		if (entry->state == INACTIVE)
-			curr_min_entry = kc_min_entry(curr_min_entry, entry);
-	}
-
-	return curr_min_entry;
-}
-
-/**
- * kc_update_timestamp() - updates timestamp of entry to current
- *
- * @entry: entry to update
- *
- */
-static void kc_update_timestamp(struct kc_entry *entry)
-{
-	if (!entry)
-		return;
-
-	entry->time_stamp = get_jiffies_64();
-}
-
-/**
- * kc_clear_entry() - clear the key from entry and mark entry not in use
- *
- * @entry: pointer to entry
- *
- * Should be invoked under spinlock
- */
-static void kc_clear_entry(struct kc_entry *entry)
-{
-	if (!entry)
-		return;
-
-	memset(entry->key, 0, entry->key_size);
-	memset(entry->salt, 0, entry->salt_size);
-
-	entry->key_size = 0;
-	entry->salt_size = 0;
-
-	entry->time_stamp = 0;
-	entry->scm_error = 0;
-
-	entry->state = FREE;
-
-	entry->loaded_ref_cnt = 0;
-	entry->thread_pending = NULL;
-}
-
-/**
- * kc_update_entry() - replaces the key in given entry and
- *			loads the new key to ICE
- *
- * @entry: entry to replace key in
- * @key: key
- * @key_size: key_size
- * @salt: salt
- * @salt_size: salt_size
- * @data_unit: dun size
- *
- * The previous key is securely released and wiped, the new one is loaded
- * to ICE.
- * Should be invoked under spinlock
- * Caller to validate that key/salt_size matches the size in struct kc_entry
- */
-static int kc_update_entry(struct kc_entry *entry, const unsigned char *key,
-	size_t key_size, const unsigned char *salt, size_t salt_size,
-	unsigned int data_unit, struct ice_device *ice_dev)
-{
-	int ret;
-
-	kc_clear_entry(entry);
-
-	memcpy(entry->key, key, key_size);
-	entry->key_size = key_size;
-
-	memcpy(entry->salt, salt, salt_size);
-	entry->salt_size = salt_size;
-
-	/* Mark entry as no longer free before releasing the lock */
-	entry->state = ACTIVE_ICE_PRELOAD;
-	kc_spin_unlock();
-
-	ret = qti_pfk_ice_set_key(entry->key_index, entry->key,
-			entry->salt, ice_dev, data_unit);
-
-	kc_spin_lock();
-	return ret;
-}
-
-/**
- * pfk_kc_init() - init function
- *
- * Return 0 in case of success, error otherwise
- */
-static int pfk_kc_init(struct ice_device *ice_dev)
-{
-	int i = 0;
-	struct kc_entry *entry = NULL;
-
-	kc_spin_lock();
-	for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
-		entry = kc_entry_at_index(i, ice_dev);
-		entry->key_index = PFK_KC_STARTING_INDEX + i;
-	}
-	kc_ready = true;
-	kc_spin_unlock();
-
-	return 0;
-}
-
-/**
- * pfk_kc_deinit() - deinit function
- *
- * Return 0 in case of success, error otherwise
- */
-int pfk_kc_deinit(void)
-{
-	kc_ready = false;
-
-	return 0;
-}
-
-/**
- * pfk_kc_load_key_start() - retrieve the key from cache or add it if
- * it's not there and return the ICE hw key index in @key_index.
- * @key: pointer to the key
- * @key_size: the size of the key
- * @salt: pointer to the salt
- * @salt_size: the size of the salt
- * @key_index: the pointer to key_index where the output will be stored
- * @async: whether scm calls are allowed in the caller context
- *
- * If the key is present in the cache, then the key_index will be retrieved
- * from the cache.
- * If it is not present, the oldest entry from the kc table will be evicted,
- * and the key will be loaded to ICE via QSEE at the index of the evicted
- * entry, then stored in the cache.
- * The entry that is going to be used is marked as in use; it will be marked
- * as not in use when ICE finishes with it and pfk_kc_load_key_end is invoked.
- * As QSEE calls can only be done from a non-atomic context, @async set to
- * 'false' specifies that it is ok to make the calls in the current context.
- * Otherwise, when @async is set, the caller should retry the call from a
- * different context, and -EAGAIN will be returned.
- *
- * Return 0 in case of success, error otherwise
- */
-int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size, u32 *key_index,
-		bool async, unsigned int data_unit, struct ice_device *ice_dev)
-{
-	int ret = 0;
-	struct kc_entry *entry = NULL;
-	bool entry_exists = false;
-
-	if (!kc_is_ready())
-		return -ENODEV;
-
-	if (!key || !salt || !key_index) {
-		pr_err("%s key/salt/key_index NULL\n", __func__);
-		return -EINVAL;
-	}
-
-	if (key_size != PFK_KC_KEY_SIZE) {
-		pr_err("unsupported key size %zu\n", key_size);
-		return -EINVAL;
-	}
-
-	if (salt_size != PFK_KC_SALT_SIZE) {
-		pr_err("unsupported salt size %zu\n", salt_size);
-		return -EINVAL;
-	}
-
-	kc_spin_lock();
-
-	entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
-	if (!entry) {
-		if (async) {
-			pr_debug("%s task will populate entry\n", __func__);
-			kc_spin_unlock();
-			return -EAGAIN;
-		}
-
-		entry = kc_find_oldest_entry_non_locked(ice_dev);
-		if (!entry) {
-			/* could not find a single non locked entry,
-			 * return EBUSY to upper layers so that the
-			 * request will be rescheduled
-			 */
-			kc_spin_unlock();
-			return -EBUSY;
-		}
-	} else {
-		entry_exists = true;
-	}
-
-	pr_debug("entry with index %d is in state %d\n",
-		entry->key_index, entry->state);
-
-	switch (entry->state) {
-	case (INACTIVE):
-		if (entry_exists) {
-			kc_update_timestamp(entry);
-			entry->state = ACTIVE_ICE_LOADED;
-
-			if (!strcmp(ice_dev->ice_instance_type,
-				(char *)PFK_UFS) ||
-					!strcmp(ice_dev->ice_instance_type,
-						(char *)PFK_UFS_CARD)) {
-				if (async)
-					entry->loaded_ref_cnt++;
-			} else {
-				entry->loaded_ref_cnt++;
-			}
-			break;
-		}
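-		/* fall through - an evicted INACTIVE slot is reprogrammed below */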
-	case (FREE):
-		ret = kc_update_entry(entry, key, key_size, salt, salt_size,
-					data_unit, ice_dev);
-		if (ret) {
-			entry->state = SCM_ERROR;
-			entry->scm_error = ret;
-			pr_err("%s: key load error (%d)\n", __func__, ret);
-		} else {
-			kc_update_timestamp(entry);
-			entry->state = ACTIVE_ICE_LOADED;
-
-			/*
-			 * In case of UFS only increase ref cnt for async calls,
-			 * sync calls from within work thread do not pass
-			 * requests further to HW
-			 */
-			if (!strcmp(ice_dev->ice_instance_type,
-				(char *)PFK_UFS) ||
-					!strcmp(ice_dev->ice_instance_type,
-						(char *)PFK_UFS_CARD)) {
-				if (async)
-					entry->loaded_ref_cnt++;
-			} else {
-				entry->loaded_ref_cnt++;
-			}
-		}
-		break;
-	case (ACTIVE_ICE_PRELOAD):
-	case (INACTIVE_INVALIDATING):
-		ret = -EAGAIN;
-		break;
-	case (ACTIVE_ICE_LOADED):
-		kc_update_timestamp(entry);
-
-		if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS) ||
-			!strcmp(ice_dev->ice_instance_type,
-				(char *)PFK_UFS_CARD)) {
-			if (async)
-				entry->loaded_ref_cnt++;
-		} else {
-			entry->loaded_ref_cnt++;
-		}
-		break;
-	case(SCM_ERROR):
-		ret = entry->scm_error;
-		kc_clear_entry(entry);
-		entry->state = FREE;
-		break;
-	default:
-		pr_err("invalid state %d for entry with key index %d\n",
-			entry->state, entry->key_index);
-		ret = -EINVAL;
-	}
-
-	*key_index = entry->key_index;
-	kc_spin_unlock();
-
-	return ret;
-}
-
-/**
- * pfk_kc_load_key_end() - finish the process of key loading that was started
- *			   by pfk_kc_load_key_start, by marking the entry as
- *			   not being in use
- * @key: pointer to the key
- * @key_size: the size of the key
- * @salt: pointer to the salt
- * @salt_size: the size of the salt
- *
- */
-void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size,
-		struct ice_device *ice_dev)
-{
-	struct kc_entry *entry = NULL;
-	struct task_struct *tmp_pending = NULL;
-	int ref_cnt = 0;
-
-	if (!kc_is_ready())
-		return;
-
-	if (!key || !salt)
-		return;
-
-	if (key_size != PFK_KC_KEY_SIZE)
-		return;
-
-	if (salt_size != PFK_KC_SALT_SIZE)
-		return;
-
-	kc_spin_lock();
-
-	entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
-	if (!entry) {
-		kc_spin_unlock();
-		pr_err("internal error, there should be an entry to unlock\n");
-
-		return;
-	}
-	ref_cnt = --entry->loaded_ref_cnt;
-
-	if (ref_cnt < 0)
-		pr_err("internal error, ref count should never be negative\n");
-
-	if (!ref_cnt) {
-		entry->state = INACTIVE;
-		/*
-		 * wake-up invalidation if it's waiting
-		 * for the entry to be released
-		 */
-		if (entry->thread_pending) {
-			tmp_pending = entry->thread_pending;
-			entry->thread_pending = NULL;
-
-			kc_spin_unlock();
-			wake_up_process(tmp_pending);
-			return;
-		}
-	}
-
-	kc_spin_unlock();
-}
-
-/**
- * pfk_kc_remove_key_with_salt() - remove the key and salt from cache
- * and from ICE engine.
- * @key: pointer to the key
- * @key_size: the size of the key
- * @salt: pointer to the salt
- * @salt_size: the size of the salt
- *
- * Return 0 in case of success, error otherwise (also in the case of a
- * non-existing key)
- */
-int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size)
-{
-	struct kc_entry *entry = NULL;
-	struct list_head *ice_dev_list = NULL;
-	struct ice_device *ice_dev;
-	int res = 0;
-
-	if (!kc_is_ready())
-		return -ENODEV;
-
-	if (!key)
-		return -EINVAL;
-
-	if (!salt)
-		return -EINVAL;
-
-	if (key_size != PFK_KC_KEY_SIZE)
-		return -EINVAL;
-
-	if (salt_size != PFK_KC_SALT_SIZE)
-		return -EINVAL;
-
-	kc_spin_lock();
-
-	ice_dev_list = get_ice_dev_list();
-	if (!ice_dev_list) {
-		pr_err("%s: Did not find ICE device head\n", __func__);
-		kc_spin_unlock();
-		return -ENODEV;
-	}
-	list_for_each_entry(ice_dev, ice_dev_list, list) {
-		entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
-		if (entry) {
-			pr_debug("%s: Found entry for ice_dev number %d\n",
-					__func__, ice_dev->device_no);
-
-			break;
-		}
-		pr_debug("%s: Can't find entry for ice_dev number %d\n",
-					__func__, ice_dev->device_no);
-	}
-
-	if (!entry) {
-		pr_debug("%s: Cannot find entry for any ice device\n",
-				__func__);
-		kc_spin_unlock();
-		return -EINVAL;
-	}
-
-	res = kc_entry_start_invalidating(entry);
-	if (res != 0) {
-		kc_spin_unlock();
-		return res;
-	}
-	kc_clear_entry(entry);
-
-	kc_spin_unlock();
-
-	qti_pfk_ice_invalidate_key(entry->key_index, ice_dev);
-
-	kc_spin_lock();
-	kc_entry_finish_invalidating(entry);
-	kc_spin_unlock();
-
-	return 0;
-}
-
-/**
- * pfk_kc_clear() - clear the table and remove all keys from ICE
- *
- * Return 0 on success, error otherwise
- *
- */
-int pfk_kc_clear(struct ice_device *ice_dev)
-{
-	struct kc_entry *entry = NULL;
-	int i = 0;
-	int res = 0;
-
-	if (!kc_is_ready())
-		return -ENODEV;
-
-	kc_spin_lock();
-	for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
-		entry = kc_entry_at_index(i, ice_dev);
-		res = kc_entry_start_invalidating(entry);
-		if (res != 0) {
-			kc_spin_unlock();
-			goto out;
-		}
-		kc_clear_entry(entry);
-	}
-	kc_spin_unlock();
-
-	for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
-		qti_pfk_ice_invalidate_key(
-			kc_entry_at_index(i, ice_dev)->key_index, ice_dev);
-
-	/* fall through */
-	res = 0;
-out:
-	kc_spin_lock();
-	for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
-		kc_entry_finish_invalidating(kc_entry_at_index(i, ice_dev));
-	kc_spin_unlock();
-
-	return res;
-}
-
-/**
- * pfk_kc_clear_on_reset() - clear the table on storage core reset
- * The assumption is that at this point we don't have any pending transactions.
- * There is also no need to clear keys from ICE, since the key configuration
- * was already lost during the reset.
- *
- */
-void pfk_kc_clear_on_reset(struct ice_device *ice_dev)
-{
-	struct kc_entry *entry = NULL;
-	int i = 0;
-
-	if (!kc_is_ready())
-		return;
-
-	kc_spin_lock();
-	for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
-		entry = kc_entry_at_index(i, ice_dev);
-		kc_clear_entry(entry);
-	}
-	kc_spin_unlock();
-}
-
-static int pfk_kc_find_storage_type(char **device)
-{
-	char boot[20] = {'\0'};
-	char *match = (char *)strnstr(saved_command_line,
-				"androidboot.bootdevice=",
-				strlen(saved_command_line));
-	if (match) {
-		memcpy(boot, (match + strlen("androidboot.bootdevice=")),
-			sizeof(boot) - 1);
-		if (strnstr(boot, PFK_UFS, strlen(boot)))
-			*device = PFK_UFS;
-
-		return 0;
-	}
-	return -EINVAL;
-}
-
-int pfk_kc_initialize_key_table(struct ice_device *ice_dev)
-{
-	int res = 0;
-	struct kc_entry *kc_table;
-
-	kc_table = kzalloc(PFK_KC_TABLE_SIZE*sizeof(struct kc_entry),
-			GFP_KERNEL);
-	if (!kc_table) {
-		res = -ENOMEM;
-		pr_err("%s: Error %d allocating memory for key table\n",
-			__func__, res);
-		return res;
-	}
-	ice_dev->key_table = kc_table;
-	pfk_kc_init(ice_dev);
-
-	return res;
-}
-
-static int __init pfk_kc_pre_init(void)
-{
-	return pfk_kc_find_storage_type(&s_type);
-}
-
-static void __exit pfk_kc_exit(void)
-{
-	s_type = NULL;
-}
-
-module_init(pfk_kc_pre_init);
-module_exit(pfk_kc_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("Per-File-Key-KC driver");
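
pfk_kc_load_key_start() above reduces to a lookup-or-evict policy over a
fixed-size table. A condensed sketch of the slot-selection step, reusing
the static helpers from the deleted file (illustration only; the real
function also performs the per-state refcounting shown above, and a NULL
result maps to -EBUSY, or to -EAGAIN on an async miss):

static struct kc_entry *example_pick_slot(const unsigned char *key,
					  size_t key_size,
					  const unsigned char *salt,
					  size_t salt_size,
					  struct ice_device *ice_dev)
{
	struct kc_entry *entry;

	/* 1. cache hit: reuse the keyslot already programmed into ICE */
	entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
	if (entry)
		return entry;

	/* 2. miss: take a FREE slot, else evict the LRU INACTIVE one */
	return kc_find_oldest_entry_non_locked(ice_dev);
}
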
diff --git a/security/pfe/pfk_kc.h b/security/pfe/pfk_kc.h
deleted file mode 100644
index cc89827..0000000
--- a/security/pfe/pfk_kc.h
+++ /dev/null
@@ -1,29 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
- */
-
-#ifndef PFK_KC_H_
-#define PFK_KC_H_
-
-#include <linux/types.h>
-#include <crypto/ice.h>
-
-
-int pfk_kc_deinit(void);
-int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size, u32 *key_index,
-		bool async, unsigned int data_unit, struct ice_device *ice_dev);
-void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size,
-		struct ice_device *ice_dev);
-int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
-		const unsigned char *salt, size_t salt_size);
-int pfk_kc_clear(struct ice_device *ice_dev);
-void pfk_kc_clear_on_reset(struct ice_device *ice_dev);
-int pfk_kc_initialize_key_table(struct ice_device *ice_dev);
-const char *pfk_kc_get_storage_type(void);
-extern char *saved_command_line;
-
-
-#endif /* PFK_KC_H_ */
diff --git a/security/security.c b/security/security.c
index 1e8151d..1baf585 100644
--- a/security/security.c
+++ b/security/security.c
@@ -623,14 +623,6 @@
 }
 EXPORT_SYMBOL_GPL(security_inode_create);
 
-int security_inode_post_create(struct inode *dir, struct dentry *dentry,
-				umode_t mode)
-{
-	if (unlikely(IS_PRIVATE(dir)))
-		return 0;
-	return call_int_hook(inode_post_create, 0, dir, dentry, mode);
-}
-
 int security_inode_link(struct dentry *old_dentry, struct inode *dir,
 			 struct dentry *new_dentry)
 {
diff --git a/security/selinux/include/objsec.h b/security/selinux/include/objsec.h
index 17901da..25b69dc 100644
--- a/security/selinux/include/objsec.h
+++ b/security/selinux/include/objsec.h
@@ -26,7 +26,8 @@
 #include <linux/in.h>
 #include <linux/spinlock.h>
 #include <net/net_namespace.h>
-#include "security.h"
+#include "flask.h"
+#include "avc.h"
 
 struct task_security_struct {
 	u32 osid;		/* SID prior to last execve */
@@ -63,8 +64,6 @@
 	u32 sid;		/* SID of this object */
 	u16 sclass;		/* security class of this object */
 	unsigned char initialized;	/* initialization flag */
-	u32 tag;		/* Per-File-Encryption tag */
-	void *pfk_data;		/* Per-File-Key data from ecryptfs */
 	spinlock_t lock;
 };
 
diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
index cf931d8..f068ee1 100644
--- a/security/selinux/include/security.h
+++ b/security/selinux/include/security.h
@@ -15,6 +15,7 @@
 #include <linux/types.h>
 #include <linux/refcount.h>
 #include <linux/workqueue.h>
+#include "flask.h"
 
 #define SECSID_NULL			0x00000000 /* unspecified SID */
 #define SECSID_WILD			0xffffffff /* wildcard SID */