KVM: MMU: use __xchg_spte() more intelligently
An atomic spte update is not always needed; call __xchg_spte() only in
the cases that actually require it.

Note: if the old mapping's accessed bit is already set, no atomic
operation is needed, since the accessed bit cannot be lost.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e4b862e..0dcc95e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -682,9 +682,14 @@
static void set_spte_track_bits(u64 *sptep, u64 new_spte)
{
pfn_t pfn;
- u64 old_spte;
+ u64 old_spte = *sptep;

- old_spte = __xchg_spte(sptep, new_spte);
+ if (!shadow_accessed_mask || !is_shadow_present_pte(old_spte) ||
+ old_spte & shadow_accessed_mask) {
+ __set_spte(sptep, new_spte);
+ } else
+ old_spte = __xchg_spte(sptep, new_spte);
+
if (!is_rmap_spte(old_spte))
return;
pfn = spte_to_pfn(old_spte);
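
The reasoning behind the fast path can be illustrated outside the kernel.
Below is a minimal user-space C11 sketch of the same idea; the names
(update_spte, set_spte, xchg_spte, PRESENT_BIT, ACCESSED_BIT) are
hypothetical stand-ins for the KVM helpers, and the !shadow_accessed_mask
case is omitted for brevity. It is only meant to show why a plain store is
safe when the accessed bit is already set or the old entry is not present,
not to mirror the kernel implementation.

/*
 * User-space sketch (not kernel code): a plain store is enough when no
 * hardware-set bit can be lost; otherwise an atomic exchange is used so
 * that an accessed bit set concurrently is still observed in the
 * returned old value.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PRESENT_BIT   (1ull << 0)   /* stand-in for is_shadow_present_pte() */
#define ACCESSED_BIT  (1ull << 5)   /* stand-in for shadow_accessed_mask */

/* Plain store, analogous to __set_spte(): does not report the old value. */
static void set_spte(_Atomic uint64_t *sptep, uint64_t new_spte)
{
	atomic_store_explicit(sptep, new_spte, memory_order_relaxed);
}

/* Atomic exchange, analogous to __xchg_spte(): returns the old value. */
static uint64_t xchg_spte(_Atomic uint64_t *sptep, uint64_t new_spte)
{
	return atomic_exchange(sptep, new_spte);
}

static uint64_t update_spte(_Atomic uint64_t *sptep, uint64_t new_spte)
{
	uint64_t old_spte = atomic_load(sptep);

	/*
	 * The non-atomic path is safe when the old entry is not present or
	 * its accessed bit is already set: nothing the CPU could set
	 * between the load above and the store below would be lost.
	 */
	if (!(old_spte & PRESENT_BIT) || (old_spte & ACCESSED_BIT)) {
		set_spte(sptep, new_spte);
		return old_spte;
	}

	/* Otherwise exchange atomically so the old bits are captured. */
	return xchg_spte(sptep, new_spte);
}

int main(void)
{
	_Atomic uint64_t spte = PRESENT_BIT | ACCESSED_BIT;
	uint64_t old = update_spte(&spte, 0);

	printf("old spte: %#llx\n", (unsigned long long)old);
	return 0;
}

The exchange path returns the old value atomically, so an accessed bit set
by the CPU after the initial load is still seen; the store path skips that
cost when no such bit can appear or when its presence is already recorded.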