Commit Graph

15 Commits

Author SHA1 Message Date
Ravikiran G Thirumalai
05e12e1c4c x86: fix 27-rc crash on vsmp due to paravirt during module load
27-rc fails to boot up if configured to use modules.

Turns out vsmp_patch was marked __init, and since vsmp_patch is the
pvops 'patch' routine for vSMP, a call to vsmp_patch ends up executing
a code page filled with 0xcc bytes (POISON_FREE_INITMEM -- int3).

vsmp_patch has been marked __init ever since the pvops conversion;
however, apply_paravirt can be called during module load, causing calls
into a freed memory location.

Since apply_paravirt can only be called during init/module load, mark
vsmp_patch "__init_or_module".
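
The change boils down to swapping the section annotation; a minimal
sketch (the parameter list is abbreviated, not the exact kernel
signature):

    /* Was:  static unsigned __init vsmp_patch(...)
     * Now:  __init_or_module keeps the code resident when CONFIG_MODULES
     *       is set, so apply_paravirt() can still reach it at
     *       module-load time instead of jumping into poisoned memory. */
    static unsigned __init_or_module vsmp_patch(u8 type, void *ibuf,
                                                unsigned long addr,
                                                unsigned len)
    {
            /* ... patch the pvops call site ... */
            return len;
    }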

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-23 10:31:26 +02:00
Thomas Gleixner
eef8f871d8 x86: vsmp_64 add missing includes
sparse mutters:
arch/x86/kernel/vsmp_64.c:126:5: warning: symbol 'is_vsmp_box' was not declared. Should it be static?
arch/x86/kernel/vsmp_64.c:145:13: warning: symbol 'vsmp_init' was not declared. Should it be static?

Include the appropriate headers.
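
A sketch of the shape of the fix (the header and flag names below are
illustrative, not necessarily the exact ones used):

    #include <asm/setup.h>      /* illustrative: header declaring is_vsmp_box() */

    static int vsmp_detected;   /* illustrative flag name */

    /* With the declaring header included, the prototype is in scope and
     * sparse no longer suggests making the symbol static. */
    int is_vsmp_box(void)
    {
            return vsmp_detected;
    }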

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-05-25 08:58:24 +02:00
Alexander van Heukelum
8008abbd87 x86: fix warning in "x86: clean up vSMP detection"
The function detect_vsmp_box is a void function in the PCI case.
Change the !PCI stub to void too.
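
Roughly, assuming the stub sits behind CONFIG_PCI in vsmp_64.c:

    #ifdef CONFIG_PCI
    void detect_vsmp_box(void)
    {
            /* real detection via PCI config space */
    }
    #else
    void detect_vsmp_box(void)
    {
            /* stub: previously returned an int, now void to match */
    }
    #endif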

Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-04-29 13:45:24 +02:00
Ravikiran G Thirumalai
e5699a8231 x86: clean up vSMP detection
vSMP detection: access PCI config space early in boot to detect whether
the system is a vSMPowered box, and cache the result in a flag, so that
is_vsmp_box() simply returns the cached flag.
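
A rough sketch of the resulting shape (the device address, ID constant
and flag name are illustrative, not the real ScaleMP values):

    static int vsmp_box;                    /* cached detection result */

    void __init detect_vsmp_box(void)
    {
            /* One early read of PCI config space, before the PCI core is up. */
            if (read_pci_config(0, 0x1f, 0, PCI_VENDOR_ID) == VSMP_CTL_ID)
                    vsmp_box = 1;           /* VSMP_CTL_ID is hypothetical */
    }

    int is_vsmp_box(void)
    {
            return vsmp_box;                /* just return the cached flag */
    }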

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:41:29 +02:00
Ingo Molnar
6542fe80e6 x86: vsmp fix x86 vsmp fix is vsmp box cleanup
code got a bit smaller:

arch/x86/kernel/vsmp_64.o:

   text	   data	    bss	    dec	    hex	filename
    205	      4	      0	    209	     d1	vsmp_64.o.before
    181	      4	      0	    185	     b9	vsmp_64.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:41:08 +02:00
Ravikiran G Thirumalai
9f6d8552a9 x86: vSMP: use pvops only if platform has the capability to support it
Rearrange set_vsmp_pv_ops() so that pv_ops are set only if the platform
has the capability to support paravirtualized irq ops.
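
In other words, the pv_ops assignment is now gated on a capability
check; a sketch, with the probe function name and the exact ops fields
treated as illustrative for this kernel version:

    static void __init set_vsmp_pv_ops(void)
    {
            /* Probe the vSMP control area first; vsmp_cap_pv_irq() is an
             * illustrative name for that capability check. */
            if (!vsmp_cap_pv_irq())
                    return;

            /* Only a capable platform gets the paravirtualized irq ops. */
            pv_irq_ops.irq_disable = vsmp_irq_disable;
            pv_irq_ops.irq_enable  = vsmp_irq_enable;
            pv_irq_ops.save_fl     = vsmp_save_fl;
            pv_irq_ops.restore_fl  = vsmp_restore_fl;
            pv_init_ops.patch      = vsmp_patch;
    }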

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:41:08 +02:00
Ravikiran G Thirumalai
aa7d8e25ec x86: fix build breakage when PCI is defined and PARAVIRT is not
- Fix the build breakage when PARAVIRT is defined
  but PCI is not.
  This fixes the problem reported at:
	http://marc.info/?l=linux-kernel&m=120525966600698&w=2
- Make is_vsmp_box() available even when PARAVIRT is not defined.
  This is needed to determine whether TSCs are reliable as a time source
  even when PARAVIRT is not defined.
- Split vsmp_init() to use is_vsmp_box() and set_vsmp_pv_ops();
  set_vsmp_pv_ops() will do nothing if PCI is not enabled in the config
  (see the sketch after this list).
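
A sketch of the resulting structure, with the !PCI variant compiled down
to a no-op (the exact #ifdef layout in vsmp_64.c may differ):

    #ifdef CONFIG_PCI
    static void __init set_vsmp_pv_ops(void)
    {
            /* probe the chipset and install the paravirt irq ops */
    }
    #else
    static void __init set_vsmp_pv_ops(void)
    {
            /* no PCI: nothing to do */
    }
    #endif

    void __init vsmp_init(void)
    {
            if (!is_vsmp_box())     /* available even without CONFIG_PARAVIRT */
                    return;

            set_vsmp_pv_ops();
    }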

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:41:08 +02:00
Ravikiran G Thirumalai
3250c91ada x86: vSMP: Fix is_vsmp_box()
is_vsmp_box() currently does not work on vSMPowered systems, as PCI
config space is not read correctly -- this patch fixes it.

Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:41:08 +02:00
Yinghai Lu
f8fffa4583 x86: apic_is_clustered_box for vsmp
A quad-core, 8-socket system will have APIC ID lifting: the APIC ID
range can be [4, 0x23]. apic_is_clustered_box() then counts three
clusters, which is larger than 2, so the system is treated as a
clustered box,

and we get:

   Marking TSC unstable due to TSCs unsynchronized

even if the CPUs have X86_FEATURE_CONSTANT_TSC set.

The quick fix checks whether the CPU is from AMD, but vSMP still needs
that check...

This patch makes sure the vSMP case is not bypassed.
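
For the numbers above: with 16 APIC IDs per cluster, IDs in [4, 0x23]
land in clusters 0, 1 and 2, so the "more than 2 clusters" test fires.
A small stand-alone model of that counting (the real kernel logic lives
in apic_is_clustered_box()):

    #include <stdio.h>

    int main(void)
    {
            unsigned long clusters_seen = 0;
            int nclusters = 0;

            /* APIC IDs lifted to [4, 0x23]; id >> 4 selects the cluster */
            for (int id = 0x04; id <= 0x23; id++)
                    clusters_seen |= 1UL << (id >> 4);

            for (int c = 0; c < 64; c++)
                    if (clusters_seen & (1UL << c))
                            nclusters++;

            /* prints 3 -- more than 2 clusters, so the box is treated as
             * clustered and the TSC would be marked unstable */
            printf("%d clusters\n", nclusters);
            return 0;
    }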

Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-04-17 17:40:50 +02:00
Glauber Costa
bc7c314d70 x86, vsmp: use the paravirt helpers
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ravikiran Thirumalai <kiran@scalemp.com>
Acked-by: Shai Fultheim <shai@scalemp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-04-17 17:40:47 +02:00
Glauber Costa
96597fd2be x86: introduce vsmp paravirt helpers
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ravikiran Thirumalai <kiran@scalemp.com>
Acked-by: Shai Fultheim <shai@scalemp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-04-17 17:40:47 +02:00
Glauber Costa
2785c8d052 x86: call vsmp_init explicitly
It becomes too early for ioremap, so we use early_ioremap instead.
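
The pattern being switched to, roughly (the mapped address and size are
illustrative):

    void __init vsmp_init(void)
    {
            void *ctl;

            /* ioremap() needs the full VM setup; vsmp_init() now runs before
             * that, so map the control area with early_ioremap() instead. */
            ctl = early_ioremap(VSMP_CTL_BASE, 8);  /* VSMP_CTL_BASE is hypothetical */
            if (!ctl)
                    return;

            /* ... read/write the vSMP control registers through ctl ... */

            early_iounmap(ctl, 8);
    }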

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ravikiran Thirumalai <kiran@scalemp.com>
Acked-by: Shai Fultheim <shai@scalemp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-04-17 17:40:47 +02:00
Glauber Costa
a2beab31b1 x86: make vsmp_init void, instead of static int
Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: Ravikiran Thirumalai <kiran@scalemp.com>
Acked-by: Shai Fultheim <shai@scalemp.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-04-17 17:40:47 +02:00
Thomas Gleixner
ed4aed98da x86: clean up arch/x86/kernel/vsmp_64.c
Whitespace and coding style cleanup.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-01-30 13:30:24 +01:00
Thomas Gleixner
250c22777f x86_64: move kernel
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-10-11 11:17:24 +02:00