
Commit 3ff519f2 authored by Wei Yang, committed by Paolo Bonzini

KVM: x86: adjust kvm_mmu_page member to save 8 bytes



On a 64-bit machine, a struct is naturally aligned to 8 bytes. Since the
kvm_mmu_page members *unsync* and *role* are no larger than 4 bytes each, we
can rearrange their order to compact the struct.
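
For context, a minimal user-space sketch of the padding rule at work
(hypothetical field names, not kernel code): two sub-8-byte members scattered
between 8-byte members each cost padding, while grouping them into one 8-byte
slot avoids it.

    #include <stdint.h>
    #include <stdio.h>

    /* Two 4-byte members scattered between 8-byte members each cost
     * 4 bytes of padding: 8 + 4 + (4) + 8 + 4 + (4) = 32 bytes. */
    struct scattered { uint64_t a; uint32_t b; uint64_t c; uint32_t d; };

    /* Grouped into one 8-byte slot, no padding is needed: 24 bytes. */
    struct grouped   { uint32_t b; uint32_t d; uint64_t a; uint64_t c; };

    int main(void)
    {
    	printf("%zu %zu\n", sizeof(struct scattered),
    	       sizeof(struct grouped));
    	return 0;
    }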

As the comment shows, *role* and *gfn* are used to key the shadow page in the
hash table. To keep that comment valid, this patch moves *unsync* up and
swaps the positions of *role* and *gfn*.

/proc/slabinfo shows that kvm_mmu_page is 8 bytes smaller, with one more
object per slab, after applying this patch (first row before, second after):

    # name            <active_objs> <num_objs> <objsize> <objperslab>
    kvm_mmu_page_header      0           0       168         24

    kvm_mmu_page_header      0           0       160         25
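
The saving can be reproduced in user space with simplified stand-ins for the
kernel types (an approximation, not the real definitions; the real struct has
more members, which is why slabinfo reports 168/160 rather than the sizes
printed below):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-ins for the kernel types; these are assumptions
     * for the demo, not the real kernel definitions. */
    typedef uint64_t gfn_t;
    union page_role { uint32_t word; };
    struct list_head { void *next, *prev; };
    struct hlist_node { void *next, *pprev; };

    /* Old layout: role (4 bytes) is followed by an 8-byte pointer, so
     * the compiler pads it by 4; unsync (1 byte) is followed by an int,
     * costing 3 more bytes; the total is rounded up to 8. */
    struct layout_before {
    	struct list_head link;
    	struct hlist_node hash_link;
    	gfn_t gfn;
    	union page_role role;
    	uint64_t *spt;
    	gfn_t *gfns;
    	bool unsync;
    	int root_count;
    	unsigned int unsync_children;
    };

    /* New layout: unsync and role share one 8-byte slot, and root_count
     * packs against unsync_children, so no slot is wasted. */
    struct layout_after {
    	struct list_head link;
    	struct hlist_node hash_link;
    	bool unsync;
    	union page_role role;
    	gfn_t gfn;
    	uint64_t *spt;
    	gfn_t *gfns;
    	int root_count;
    	unsigned int unsync_children;
    };

    int main(void)
    {
    	/* On x86_64 this prints 80 and 72: the same 8-byte delta
    	 * that slabinfo reports for the full struct (168 -> 160). */
    	printf("before: %zu\n", sizeof(struct layout_before));
    	printf("after:  %zu\n", sizeof(struct layout_after));
    	return 0;
    }

On a real build, pahole shows the resulting holes directly.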

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent bd18bffc
@@ -281,18 +281,18 @@ struct kvm_rmap_head {
 struct kvm_mmu_page {
 	struct list_head link;
 	struct hlist_node hash_link;
+	bool unsync;
 
 	/*
 	 * The following two entries are used to key the shadow page in the
 	 * hash table.
 	 */
-	gfn_t gfn;
 	union kvm_mmu_page_role role;
+	gfn_t gfn;
 
 	u64 *spt;
 	/* hold the gfn of each spte inside spt */
 	gfn_t *gfns;
-	bool unsync;
 	int root_count;          /* Currently serving as active root */
 	unsigned int unsync_children;
 	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */