Commit bbf4aef8 authored and committed by Marcelo Tosatti

Merge tag 'kvm-s390-next-20150318' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into queue

KVM: s390: Features and fixes for 4.1 (kvm/next)

1. Fixes
2. Implement access register mode in KVM
3. Provide a userspace post handler for the STSI instruction
4. Provide an interface for compliant memory accesses
5. Provide an interface for getting/setting the guest storage key
6. Fixup for the vector facility patches: do not announce the
   vector facility in the guest for old QEMUs.

Patches 1-5 were initially posted as an RFC in

http://www.spinics.net/lists/kvm/msg114720.html

Compared to the RFC, there are some small review changes:
- added some ACKs
- moved the AR mode patches first
- got rid of the unnecessary AR_INVAL define
- fixed typos and language

6. Two new patches
The two new patches fix up the vector support patches that were
introduced in the last pull request, for QEMU versions that don't
know about vector support and guests that do. (We announce the
facility bit, but don't enable the facility, so vector-aware guests
will crash on vector instructions.)
parents 0a4e6be9 18280d8b

Documentation/virtual/kvm/api.txt  +132 −0

@@ -2716,6 +2716,110 @@ The fields in each entry are defined as follows:
  eax, ebx, ecx, edx: the values returned by the cpuid instruction for
        this function/index combination

4.89 KVM_S390_MEM_OP

Capability: KVM_CAP_S390_MEM_OP
Architectures: s390
Type: vcpu ioctl
Parameters: struct kvm_s390_mem_op (in)
Returns: = 0 on success,
         < 0 on generic error (e.g. -EFAULT or -ENOMEM),
         > 0 if an exception occurred while walking the page tables

Read or write data from/to the logical (virtual) memory of a VCPU.

Parameters are specified via the following structure:

struct kvm_s390_mem_op {
	__u64 gaddr;		/* the guest address */
	__u64 flags;		/* flags */
	__u32 size;		/* amount of bytes */
	__u32 op;		/* type of operation */
	__u64 buf;		/* buffer in userspace */
	__u8 ar;		/* the access register number */
	__u8 reserved[31];	/* should be set to 0 */
};

The type of operation is specified in the "op" field. It is either
KVM_S390_MEMOP_LOGICAL_READ for reading from logical memory space or
KVM_S390_MEMOP_LOGICAL_WRITE for writing to logical memory space. The
KVM_S390_MEMOP_F_CHECK_ONLY flag can be set in the "flags" field to check
whether the corresponding memory access would create an access exception
(without touching the data in the memory at the destination). In case an
access exception occurred while walking the MMU tables of the guest, the
ioctl returns a positive error number to indicate the type of exception.
This exception is also raised directly at the corresponding VCPU if the
flag KVM_S390_MEMOP_F_INJECT_EXCEPTION is set in the "flags" field.

The start address of the memory region has to be specified in the "gaddr"
field, and the length of the region in the "size" field. "buf" is the buffer
supplied by the userspace application where the read data should be written
to for KVM_S390_MEMOP_LOGICAL_READ, or where the data that should be written
is stored for a KVM_S390_MEMOP_LOGICAL_WRITE. "buf" is unused and can be NULL
when KVM_S390_MEMOP_F_CHECK_ONLY is specified. "ar" designates the access
register number to be used.

The "reserved" field is meant for future extensions. It is not used by
KVM with the currently defined set of flags.
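
As an illustration (not part of the interface definition), a minimal
userspace sketch of a logical read could look like this; the open vcpu file
descriptor "vcpu_fd" and the surrounding error handling are assumptions:

	struct kvm_s390_mem_op op = {};	/* zero-init also clears "reserved" */
	__u8 data[256];
	int rc;

	op.gaddr = 0x1000;			/* guest logical address */
	op.size = sizeof(data);			/* amount of bytes */
	op.op = KVM_S390_MEMOP_LOGICAL_READ;
	op.buf = (__u64)(unsigned long)data;	/* buffer in userspace */
	op.ar = 0;				/* access register 0 */

	rc = ioctl(vcpu_fd, KVM_S390_MEM_OP, &op);
	/* rc < 0: generic error; rc > 0: type of the access exception
	 * that occurred while walking the guest page tables */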

4.90 KVM_S390_GET_SKEYS

Capability: KVM_CAP_S390_SKEYS
Architectures: s390
Type: vm ioctl
Parameters: struct kvm_s390_skeys
Returns: 0 on success, KVM_S390_GET_KEYS_NONE if guest is not using storage
         keys, negative value on error

This ioctl is used to get guest storage key values on the s390
architecture. The ioctl takes parameters via the kvm_s390_skeys struct.

struct kvm_s390_skeys {
	__u64 start_gfn;
	__u64 count;
	__u64 skeydata_addr;
	__u32 flags;
	__u32 reserved[9];
};

The start_gfn field is the number of the first guest frame whose storage keys
you want to get.

The count field is the number of consecutive frames (starting from start_gfn)
whose storage keys to get. The count field must be at least 1 and the maximum
allowed value is defined as KVM_S390_SKEYS_ALLOC_MAX. Values outside this range
will cause the ioctl to return -EINVAL.

The skeydata_addr field is the address to a buffer large enough to hold count
bytes. This buffer will be filled with storage key data by the ioctl.
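
As a hedged illustration, a call from userspace might be set up as follows
(the vm file descriptor "vm_fd" is an assumption; the buffer must hold one
byte per requested frame):

	struct kvm_s390_skeys args = {};
	__u8 keys[512];			/* one storage key byte per frame */
	int rc;

	args.start_gfn = 0;		/* first guest frame number */
	args.count = sizeof(keys);	/* 1 <= count <= KVM_S390_SKEYS_ALLOC_MAX */
	args.skeydata_addr = (__u64)(unsigned long)keys;

	rc = ioctl(vm_fd, KVM_S390_GET_SKEYS, &args);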

4.91 KVM_S390_SET_SKEYS

Capability: KVM_CAP_S390_SKEYS
Architectures: s390
Type: vm ioctl
Parameters: struct kvm_s390_skeys
Returns: 0 on success, negative value on error

This ioctl is used to set guest storage key values on the s390
architecture. The ioctl takes parameters via the kvm_s390_skeys struct.
See section on KVM_S390_GET_SKEYS for struct definition.

The start_gfn field is the number of the first guest frame whose storage keys
you want to set.

The count field is the number of consecutive frames (starting from start_gfn)
whose storage keys to set. The count field must be at least 1 and the maximum
allowed value is defined as KVM_S390_SKEYS_ALLOC_MAX. Values outside this range
will cause the ioctl to return -EINVAL.

The skeydata_addr field is the address to a buffer containing count bytes of
storage keys. Each byte in the buffer will be set as the storage key for a
single frame starting at start_gfn for count frames.

Note: If any architecturally invalid key value is found in the given data then
the ioctl will return -EINVAL.
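
A corresponding hedged sketch for the set direction (again assuming "vm_fd";
the buffer must already contain architecturally valid key values):

	struct kvm_s390_skeys args = {};
	__u8 keys[512];			/* filled with the desired key values */

	args.start_gfn = 0;
	args.count = sizeof(keys);	/* same limits as for KVM_S390_GET_SKEYS */
	args.skeydata_addr = (__u64)(unsigned long)keys;

	if (ioctl(vm_fd, KVM_S390_SET_SKEYS, &args) < 0)
		;			/* -EINVAL on bad count or key values */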

5. The kvm_run structure
------------------------

@@ -3258,3 +3362,31 @@ Returns: 0 on success, negative value on error
Allows use of the vector registers introduced with z13 processor, and
provides for the synchronization between host and user space.  Will
return -EINVAL if the machine does not support vectors.

7.4 KVM_CAP_S390_USER_STSI

Architectures: s390
Parameters: none

This capability allows post-handlers for the STSI instruction. After
initial handling in the kernel, KVM exits to user space with
KVM_EXIT_S390_STSI to allow user space to insert further data.

Before exiting to userspace, KVM handlers should fill in the s390_stsi field
of vcpu->run:
struct {
	__u64 addr;
	__u8 ar;
	__u8 reserved;
	__u8 fc;
	__u8 sel1;
	__u16 sel2;
} s390_stsi;

@addr - guest address of STSI SYSIB
@fc   - function code
@sel1 - selector 1
@sel2 - selector 2
@ar   - access register number

KVM handlers should exit to userspace with rc = -EREMOTE.
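
A hedged sketch of the matching userspace side (the mmap'ed kvm_run area
"run", the vcpu fd and the helper that merges additional STSI data into the
SYSIB are assumptions):

	if (ioctl(vcpu_fd, KVM_RUN, 0) == 0 &&
	    run->exit_reason == KVM_EXIT_S390_STSI) {
		/* fc/sel1/sel2 identify the STSI function and selectors,
		 * addr is the guest address of the SYSIB to be completed */
		merge_stsi_data(run->s390_stsi.addr, run->s390_stsi.ar,
				run->s390_stsi.fc, run->s390_stsi.sel1,
				run->s390_stsi.sel2);	/* user-defined helper */
	}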

arch/s390/include/asm/kvm_host.h  +1 −1

@@ -562,9 +562,9 @@ struct kvm_arch{
	int css_support;
	int use_irqchip;
	int use_cmma;
-	int use_vectors;
	int user_cpu_state_ctrl;
	int user_sigp;
+	int user_stsi;
	struct s390_io_adapter *adapters[MAX_S390_IO_ADAPTERS];
	wait_queue_head_t ipte_wq;
	int ipte_lock_count;

arch/s390/kvm/diag.c  +2 −2

@@ -77,7 +77,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)

	if (vcpu->run->s.regs.gprs[rx] & 7)
		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
-	rc = read_guest(vcpu, vcpu->run->s.regs.gprs[rx], &parm, sizeof(parm));
+	rc = read_guest(vcpu, vcpu->run->s.regs.gprs[rx], rx, &parm, sizeof(parm));
	if (rc)
		return kvm_s390_inject_prog_cond(vcpu, rc);
	if (parm.parm_version != 2 || parm.parm_len < 5 || parm.code != 0x258)
@@ -230,7 +230,7 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *vcpu)

int kvm_s390_handle_diag(struct kvm_vcpu *vcpu)
{
-	int code = kvm_s390_get_base_disp_rs(vcpu) & 0xffff;
+	int code = kvm_s390_get_base_disp_rs(vcpu, NULL) & 0xffff;

	if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);

arch/s390/kvm/gaccess.c  +242 −52

@@ -10,6 +10,7 @@
#include <asm/pgtable.h>
#include "kvm-s390.h"
#include "gaccess.h"
+#include <asm/switch_to.h>

union asce {
	unsigned long val;
@@ -207,6 +208,54 @@ union raddress {
	unsigned long pfra : 52; /* Page-Frame Real Address */
};

union alet {
	u32 val;
	struct {
		u32 reserved : 7;
		u32 p        : 1;
		u32 alesn    : 8;
		u32 alen     : 16;
	};
};

union ald {
	u32 val;
	struct {
		u32     : 1;
		u32 alo : 24;
		u32 all : 7;
	};
};

struct ale {
	unsigned long i      : 1; /* ALEN-Invalid Bit */
	unsigned long        : 5;
	unsigned long fo     : 1; /* Fetch-Only Bit */
	unsigned long p      : 1; /* Private Bit */
	unsigned long alesn  : 8; /* Access-List-Entry Sequence Number */
	unsigned long aleax  : 16; /* Access-List-Entry Authorization Index */
	unsigned long        : 32;
	unsigned long        : 1;
	unsigned long asteo  : 25; /* ASN-Second-Table-Entry Origin */
	unsigned long        : 6;
	unsigned long astesn : 32; /* ASTE Sequence Number */
} __packed;

struct aste {
	unsigned long i      : 1; /* ASX-Invalid Bit */
	unsigned long ato    : 29; /* Authority-Table Origin */
	unsigned long        : 1;
	unsigned long b      : 1; /* Base-Space Bit */
	unsigned long ax     : 16; /* Authorization Index */
	unsigned long atl    : 12; /* Authority-Table Length */
	unsigned long        : 2;
	unsigned long ca     : 1; /* Controlled-ASN Bit */
	unsigned long ra     : 1; /* Reusable-ASN Bit */
	unsigned long asce   : 64; /* Address-Space-Control Element */
	unsigned long ald    : 32;
	unsigned long astesn : 32;
	/* .. more fields there */
} __packed;

int ipte_lock_held(struct kvm_vcpu *vcpu)
{
@@ -307,15 +356,157 @@ void ipte_unlock(struct kvm_vcpu *vcpu)
		ipte_unlock_simple(vcpu);
}

-static unsigned long get_vcpu_asce(struct kvm_vcpu *vcpu)
static int ar_translation(struct kvm_vcpu *vcpu, union asce *asce, ar_t ar,
			  int write)
{
	union alet alet;
	struct ale ale;
	struct aste aste;
	unsigned long ald_addr, authority_table_addr;
	union ald ald;
	int eax, rc;
	u8 authority_table;

	if (ar >= NUM_ACRS)
		return -EINVAL;

	save_access_regs(vcpu->run->s.regs.acrs);
	alet.val = vcpu->run->s.regs.acrs[ar];

	if (ar == 0 || alet.val == 0) {
		asce->val = vcpu->arch.sie_block->gcr[1];
		return 0;
	} else if (alet.val == 1) {
		asce->val = vcpu->arch.sie_block->gcr[7];
		return 0;
	}

	if (alet.reserved)
		return PGM_ALET_SPECIFICATION;

	if (alet.p)
		ald_addr = vcpu->arch.sie_block->gcr[5];
	else
		ald_addr = vcpu->arch.sie_block->gcr[2];
	ald_addr &= 0x7fffffc0;

	rc = read_guest_real(vcpu, ald_addr + 16, &ald.val, sizeof(union ald));
	if (rc)
		return rc;

	if (alet.alen / 8 > ald.all)
		return PGM_ALEN_TRANSLATION;

	if (0x7fffffff - ald.alo * 128 < alet.alen * 16)
		return PGM_ADDRESSING;

	rc = read_guest_real(vcpu, ald.alo * 128 + alet.alen * 16, &ale,
			     sizeof(struct ale));
	if (rc)
		return rc;

	if (ale.i == 1)
		return PGM_ALEN_TRANSLATION;
	if (ale.alesn != alet.alesn)
		return PGM_ALE_SEQUENCE;

	rc = read_guest_real(vcpu, ale.asteo * 64, &aste, sizeof(struct aste));
	if (rc)
		return rc;

	if (aste.i)
		return PGM_ASTE_VALIDITY;
	if (aste.astesn != ale.astesn)
		return PGM_ASTE_SEQUENCE;

	if (ale.p == 1) {
		eax = (vcpu->arch.sie_block->gcr[8] >> 16) & 0xffff;
		if (ale.aleax != eax) {
			if (eax / 16 > aste.atl)
				return PGM_EXTENDED_AUTHORITY;

			authority_table_addr = aste.ato * 4 + eax / 4;

			rc = read_guest_real(vcpu, authority_table_addr,
					     &authority_table,
					     sizeof(u8));
			if (rc)
				return rc;

			if ((authority_table & (0x40 >> ((eax & 3) * 2))) == 0)
				return PGM_EXTENDED_AUTHORITY;
		}
	}

	if (ale.fo == 1 && write)
		return PGM_PROTECTION;

	asce->val = aste.asce;
	return 0;
}

struct trans_exc_code_bits {
	unsigned long addr : 52; /* Translation-exception Address */
	unsigned long fsi  : 2;  /* Access Exception Fetch/Store Indication */
	unsigned long	   : 6;
	unsigned long b60  : 1;
	unsigned long b61  : 1;
	unsigned long as   : 2;  /* ASCE Identifier */
};

enum {
	FSI_UNKNOWN = 0, /* Unknown whether fetch or store */
	FSI_STORE   = 1, /* Exception was due to store operation */
	FSI_FETCH   = 2  /* Exception was due to fetch operation */
};

static int get_vcpu_asce(struct kvm_vcpu *vcpu, union asce *asce,
			 ar_t ar, int write)
{
	int rc;
	psw_t *psw = &vcpu->arch.sie_block->gpsw;
	struct kvm_s390_pgm_info *pgm = &vcpu->arch.pgm;
	struct trans_exc_code_bits *tec_bits;

	memset(pgm, 0, sizeof(*pgm));
	tec_bits = (struct trans_exc_code_bits *)&pgm->trans_exc_code;
	tec_bits->fsi = write ? FSI_STORE : FSI_FETCH;
	tec_bits->as = psw_bits(*psw).as;

	if (!psw_bits(*psw).t) {
		asce->val = 0;
		asce->r = 1;
		return 0;
	}

	switch (psw_bits(vcpu->arch.sie_block->gpsw).as) {
	case PSW_AS_PRIMARY:
-		return vcpu->arch.sie_block->gcr[1];
+		asce->val = vcpu->arch.sie_block->gcr[1];
+		return 0;
	case PSW_AS_SECONDARY:
-		return vcpu->arch.sie_block->gcr[7];
+		asce->val = vcpu->arch.sie_block->gcr[7];
+		return 0;
	case PSW_AS_HOME:
-		return vcpu->arch.sie_block->gcr[13];
+		asce->val = vcpu->arch.sie_block->gcr[13];
+		return 0;
	case PSW_AS_ACCREG:
		rc = ar_translation(vcpu, asce, ar, write);
		switch (rc) {
		case PGM_ALEN_TRANSLATION:
		case PGM_ALE_SEQUENCE:
		case PGM_ASTE_VALIDITY:
		case PGM_ASTE_SEQUENCE:
		case PGM_EXTENDED_AUTHORITY:
			vcpu->arch.pgm.exc_access_id = ar;
			break;
		case PGM_PROTECTION:
			tec_bits->b60 = 1;
			tec_bits->b61 = 1;
			break;
		}
		if (rc > 0)
			pgm->code = rc;
		return rc;
	}
	return 0;
}
@@ -330,6 +521,7 @@ static int deref_table(struct kvm *kvm, unsigned long gpa, unsigned long *val)
 * @vcpu: virtual cpu
 * @gva: guest virtual address
 * @gpa: points to where guest physical (absolute) address should be stored
+ * @asce: effective asce
 * @write: indicates if access is a write access
 *
 * Translate a guest virtual address into a guest absolute address by means
@@ -345,7 +537,8 @@ static int deref_table(struct kvm *kvm, unsigned long gpa, unsigned long *val)
 *	      by the architecture
 */
static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
-				     unsigned long *gpa, int write)
+				     unsigned long *gpa, const union asce asce,
+				     int write)
{
	union vaddress vaddr = {.addr = gva};
	union raddress raddr = {.addr = gva};
@@ -354,12 +547,10 @@ static unsigned long guest_translate(struct kvm_vcpu *vcpu, unsigned long gva,
	union ctlreg0 ctlreg0;
	unsigned long ptr;
	int edat1, edat2;
-	union asce asce;

	ctlreg0.val = vcpu->arch.sie_block->gcr[0];
	edat1 = ctlreg0.edat && test_kvm_facility(vcpu->kvm, 8);
	edat2 = edat1 && test_kvm_facility(vcpu->kvm, 78);
-	asce.val = get_vcpu_asce(vcpu);
	if (asce.r)
		goto real_address;
	ptr = asce.origin * 4096;
@@ -506,48 +697,30 @@ static inline int is_low_address(unsigned long ga)
	return (ga & ~0x11fful) == 0;
}

-static int low_address_protection_enabled(struct kvm_vcpu *vcpu)
+static int low_address_protection_enabled(struct kvm_vcpu *vcpu,
+					  const union asce asce)
{
	union ctlreg0 ctlreg0 = {.val = vcpu->arch.sie_block->gcr[0]};
	psw_t *psw = &vcpu->arch.sie_block->gpsw;
-	union asce asce;

	if (!ctlreg0.lap)
		return 0;
-	asce.val = get_vcpu_asce(vcpu);
	if (psw_bits(*psw).t && asce.p)
		return 0;
	return 1;
}

-struct trans_exc_code_bits {
-	unsigned long addr : 52; /* Translation-exception Address */
-	unsigned long fsi  : 2;  /* Access Exception Fetch/Store Indication */
-	unsigned long	   : 7;
-	unsigned long b61  : 1;
-	unsigned long as   : 2;  /* ASCE Identifier */
-};
-
-enum {
-	FSI_UNKNOWN = 0, /* Unknown wether fetch or store */
-	FSI_STORE   = 1, /* Exception was due to store operation */
-	FSI_FETCH   = 2  /* Exception was due to fetch operation */
-};
-
static int guest_page_range(struct kvm_vcpu *vcpu, unsigned long ga,
			    unsigned long *pages, unsigned long nr_pages,
-			    int write)
+			    const union asce asce, int write)
{
	struct kvm_s390_pgm_info *pgm = &vcpu->arch.pgm;
	psw_t *psw = &vcpu->arch.sie_block->gpsw;
	struct trans_exc_code_bits *tec_bits;
	int lap_enabled, rc;

-	memset(pgm, 0, sizeof(*pgm));
	tec_bits = (struct trans_exc_code_bits *)&pgm->trans_exc_code;
-	tec_bits->fsi = write ? FSI_STORE : FSI_FETCH;
-	tec_bits->as = psw_bits(*psw).as;
-	lap_enabled = low_address_protection_enabled(vcpu);
+	lap_enabled = low_address_protection_enabled(vcpu, asce);
	while (nr_pages) {
		ga = kvm_s390_logical_to_effective(vcpu, ga);
		tec_bits->addr = ga >> PAGE_SHIFT;
@@ -557,7 +730,7 @@ static int guest_page_range(struct kvm_vcpu *vcpu, unsigned long ga,
		}
		ga &= PAGE_MASK;
		if (psw_bits(*psw).t) {
-			rc = guest_translate(vcpu, ga, pages, write);
+			rc = guest_translate(vcpu, ga, pages, asce, write);
			if (rc < 0)
				return rc;
			if (rc == PGM_PROTECTION)
@@ -578,7 +751,7 @@ static int guest_page_range(struct kvm_vcpu *vcpu, unsigned long ga,
	return 0;
}

-int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, void *data,
+int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, ar_t ar, void *data,
		 unsigned long len, int write)
{
	psw_t *psw = &vcpu->arch.sie_block->gpsw;
@@ -591,20 +764,19 @@ int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, void *data,

	if (!len)
		return 0;
-	/* Access register mode is not supported yet. */
-	if (psw_bits(*psw).t && psw_bits(*psw).as == PSW_AS_ACCREG)
-		return -EOPNOTSUPP;
+	rc = get_vcpu_asce(vcpu, &asce, ar, write);
+	if (rc)
+		return rc;
	nr_pages = (((ga & ~PAGE_MASK) + len - 1) >> PAGE_SHIFT) + 1;
	pages = pages_array;
	if (nr_pages > ARRAY_SIZE(pages_array))
		pages = vmalloc(nr_pages * sizeof(unsigned long));
	if (!pages)
		return -ENOMEM;
-	asce.val = get_vcpu_asce(vcpu);
	need_ipte_lock = psw_bits(*psw).t && !asce.r;
	if (need_ipte_lock)
		ipte_lock(vcpu);
-	rc = guest_page_range(vcpu, ga, pages, nr_pages, write);
+	rc = guest_page_range(vcpu, ga, pages, nr_pages, asce, write);
	for (idx = 0; idx < nr_pages && !rc; idx++) {
		gpa = *(pages + idx) + (ga & ~PAGE_MASK);
		_len = min(PAGE_SIZE - (gpa & ~PAGE_MASK), len);
@@ -652,7 +824,7 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 * Note: The IPTE lock is not taken during this function, so the caller
 * has to take care of this.
 */
-int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva,
+int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva, ar_t ar,
			    unsigned long *gpa, int write)
{
	struct kvm_s390_pgm_info *pgm = &vcpu->arch.pgm;
@@ -661,26 +833,21 @@ int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva,
	union asce asce;
	int rc;

-	/* Access register mode is not supported yet. */
-	if (psw_bits(*psw).t && psw_bits(*psw).as == PSW_AS_ACCREG)
-		return -EOPNOTSUPP;
-
	gva = kvm_s390_logical_to_effective(vcpu, gva);
-	memset(pgm, 0, sizeof(*pgm));
	tec = (struct trans_exc_code_bits *)&pgm->trans_exc_code;
-	tec->as = psw_bits(*psw).as;
-	tec->fsi = write ? FSI_STORE : FSI_FETCH;
+	rc = get_vcpu_asce(vcpu, &asce, ar, write);
	tec->addr = gva >> PAGE_SHIFT;
-	if (is_low_address(gva) && low_address_protection_enabled(vcpu)) {
+	if (rc)
+		return rc;
+	if (is_low_address(gva) && low_address_protection_enabled(vcpu, asce)) {
		if (write) {
			rc = pgm->code = PGM_PROTECTION;
			return rc;
		}
	}

-	asce.val = get_vcpu_asce(vcpu);
	if (psw_bits(*psw).t && !asce.r) {	/* Use DAT? */
-		rc = guest_translate(vcpu, gva, gpa, write);
+		rc = guest_translate(vcpu, gva, gpa, asce, write);
		if (rc > 0) {
			if (rc == PGM_PROTECTION)
				tec->b61 = 1;
@@ -697,28 +864,51 @@ int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva,
}

/**
- * kvm_s390_check_low_addr_protection - check for low-address protection
- * @ga: Guest address
+ * check_gva_range - test a range of guest virtual addresses for accessibility
+ */
+int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, ar_t ar,
+		    unsigned long length, int is_write)
+{
+	unsigned long gpa;
+	unsigned long currlen;
+	int rc = 0;
+
+	ipte_lock(vcpu);
+	while (length > 0 && !rc) {
+		currlen = min(length, PAGE_SIZE - (gva % PAGE_SIZE));
+		rc = guest_translate_address(vcpu, gva, ar, &gpa, is_write);
+		gva += currlen;
+		length -= currlen;
+	}
+	ipte_unlock(vcpu);
+
+	return rc;
+}
+
+/**
+ * kvm_s390_check_low_addr_prot_real - check for low-address protection
+ * @gra: Guest real address
 *
 * Checks whether an address is subject to low-address protection and set
 * up vcpu->arch.pgm accordingly if necessary.
 *
 * Return: 0 if no protection exception, or PGM_PROTECTION if protected.
 */
-int kvm_s390_check_low_addr_protection(struct kvm_vcpu *vcpu, unsigned long ga)
+int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra)
{
	struct kvm_s390_pgm_info *pgm = &vcpu->arch.pgm;
	psw_t *psw = &vcpu->arch.sie_block->gpsw;
	struct trans_exc_code_bits *tec_bits;
+	union ctlreg0 ctlreg0 = {.val = vcpu->arch.sie_block->gcr[0]};

-	if (!is_low_address(ga) || !low_address_protection_enabled(vcpu))
+	if (!ctlreg0.lap || !is_low_address(gra))
		return 0;

	memset(pgm, 0, sizeof(*pgm));
	tec_bits = (struct trans_exc_code_bits *)&pgm->trans_exc_code;
	tec_bits->fsi = FSI_STORE;
	tec_bits->as = psw_bits(*psw).as;
-	tec_bits->addr = ga >> PAGE_SHIFT;
+	tec_bits->addr = gra >> PAGE_SHIFT;
	pgm->code = PGM_PROTECTION;

	return pgm->code;

arch/s390/kvm/gaccess.h  +12 −9

@@ -156,9 +156,11 @@ int read_guest_lc(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
}

int guest_translate_address(struct kvm_vcpu *vcpu, unsigned long gva,
-			    unsigned long *gpa, int write);
+			    ar_t ar, unsigned long *gpa, int write);
+int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, ar_t ar,
+		    unsigned long length, int is_write);

-int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, void *data,
+int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, ar_t ar, void *data,
		 unsigned long len, int write);

int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
@@ -168,6 +170,7 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 * write_guest - copy data from kernel space to guest space
 * @vcpu: virtual cpu
 * @ga: guest address
+ * @ar: access register
 * @data: source address in kernel space
 * @len: number of bytes to copy
 *
@@ -176,8 +179,7 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 * If DAT is off data will be copied to guest real or absolute memory.
 * If DAT is on data will be copied to the address space as specified by
 * the address space bits of the PSW:
- * Primary, secondory or home space (access register mode is currently not
- * implemented).
+ * Primary, secondary, home space or access register mode.
 * The addressing mode of the PSW is also inspected, so that address wrap
 * around is taken into account for 24-, 31- and 64-bit addressing mode,
 * if the to be copied data crosses page boundaries in guest address space.
@@ -210,16 +212,17 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 *	 if data has been changed in guest space in case of an exception.
 */
static inline __must_check
-int write_guest(struct kvm_vcpu *vcpu, unsigned long ga, void *data,
+int write_guest(struct kvm_vcpu *vcpu, unsigned long ga, ar_t ar, void *data,
		unsigned long len)
{
-	return access_guest(vcpu, ga, data, len, 1);
+	return access_guest(vcpu, ga, ar, data, len, 1);
}

/**
 * read_guest - copy data from guest space to kernel space
 * @vcpu: virtual cpu
 * @ga: guest address
+ * @ar: access register
 * @data: destination address in kernel space
 * @len: number of bytes to copy
 *
@@ -229,10 +232,10 @@ int write_guest(struct kvm_vcpu *vcpu, unsigned long ga, void *data,
 * data will be copied from guest space to kernel space.
 */
static inline __must_check
-int read_guest(struct kvm_vcpu *vcpu, unsigned long ga, void *data,
+int read_guest(struct kvm_vcpu *vcpu, unsigned long ga, ar_t ar, void *data,
	       unsigned long len)
{
-	return access_guest(vcpu, ga, data, len, 0);
+	return access_guest(vcpu, ga, ar, data, len, 0);
}

/**
@@ -330,6 +333,6 @@ int read_guest_real(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
void ipte_lock(struct kvm_vcpu *vcpu);
void ipte_unlock(struct kvm_vcpu *vcpu);
int ipte_lock_held(struct kvm_vcpu *vcpu);
-int kvm_s390_check_low_addr_protection(struct kvm_vcpu *vcpu, unsigned long ga);
+int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra);

#endif /* __KVM_S390_GACCESS_H */