path: root/arch/ia64/kernel/mca_asm.S
Commit history, newest first. Each entry shows the commit message, author, date, and diffstat (files changed, -deleted/+added lines).
* percpu: make percpu symbols in ia64 unique (Tejun Heo, 2009-10-29; 1 file, -1/+1)
    This patch updates percpu-related symbols in ia64 so that percpu symbols
    are unique and don't clash with local symbols. This serves two purposes:
    it decreases the possibility of global percpu symbol collisions, and it
    allows the per_cpu__ prefix to be dropped from percpu symbols.

    * arch/ia64/kernel/setup.c: s/cpu_info/ia64_cpu_info/

    Partly based on Rusty Russell's "alloc_percpu: rename percpu vars which
    cause name clashes" patch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Fenghua Yu <fenghua.yu@intel.com>
    Cc: linux-ia64@vger.kernel.org
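    The rename itself is a one-liner; as a minimal sketch (assuming the
    standard DEFINE_PER_CPU/per_cpu accessors, not the actual diff), the
    declaration in setup.c goes from the collision-prone generic name to the
    prefixed one:

        #include <linux/percpu.h>
        #include <asm/processor.h>

        /* before: DEFINE_PER_CPU(struct cpuinfo_ia64, cpu_info); */
        DEFINE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);    /* after */

        /* callers are updated in step, e.g. (illustrative helper only) */
        static struct cpuinfo_ia64 *cpu_info_of(int cpu)
        {
            return &per_cpu(ia64_cpu_info, cpu);
        }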
* [IA64] kexec: Make INIT safe while transitioning to the kdump/kexec kernel (Hidetoshi Seto, 2009-09-14; 1 file, -0/+20)
    Summary:
        Asserting INIT at the beginning of the kdump/kexec kernel results in
        unexpected behavior, because the INIT handler of the previous kernel
        is invoked on the new kernel.

    Description:
        In a panic situation we can receive an INIT during the kernel
        transition, i.e. between the beginning of the panic and the
        bootstrap of the kdump kernel. Since registers are reinitialized on
        leaving the current kernel, the current kernel's virtual-mode
        monarch/slave handlers can no longer be called safely. (In fact the
        system hangs, as far as I could confirm.)

    How to reproduce:
        Start kdump:
            # echo c > /proc/sysrq-trigger
        Then assert INIT while the kdump kernel is booting, before the new
        INIT handler for the kdump kernel is registered.

    Expected (desirable) result:
        The kdump kernel boots without any problem and the crashdump is
        retrieved.

    Actual result:
        The INIT handler of the previous kernel is invoked on the kdump
        kernel => panic, hang, etc. (unexpected).

    Proposed fix:
        We could unregister these INIT handlers from SAL before jumping into
        the new kernel, but then INIT would fall back to its default
        behavior, resulting in a warm boot by SAL (according to the SAL
        specification), and we could not retrieve the crashdump. Therefore
        this patch introduces a NOP INIT handler and registers it with SAL
        before leaving the current kernel, so that kdump starts safely:
        incoming INITs are prevented from entering virtual mode and from
        causing a warm boot.

        A kexec that is not for kdump has the same problem with INIT during
        the kernel transition. This patch handles that case differently,
        because for kexec unregistering the handlers is preferable to
        registering a NOP handler: "no handlers registered" is the usual
        state at kernel entry.

    Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
    Cc: Vivek Goyal <vgoyal@redhat.com>
    Cc: Haren Myneni <hbabu@us.ibm.com>
    Cc: kexec@lists.infradead.org
    Acked-by: Fenghua Yu <fenghua.yu@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
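    A hedged sketch of the registration step this describes (the stub symbol
    name ia64_os_init_on_kdump, the zero handler lengths, and the wrapper
    function are assumptions based on the text above; only
    ia64_sal_set_vectors() and SAL_VECTOR_OS_INIT come from the existing SAL
    interface):

        #include <asm/sal.h>
        #include <asm/page.h>
        #include <asm/intrinsics.h>

        /* physical-mode NOP INIT stub in mca_asm.S (name assumed) */
        extern char ia64_os_init_on_kdump[];

        /* Point both the monarch and the slave INIT entry points at the NOP
         * stub before jumping into the kdump kernel, so a late INIT cannot
         * re-enter the old kernel's virtual-mode handlers. */
        static void kdump_register_nop_init_handler(void)
        {
            unsigned long stub = __pa(ia64_os_init_on_kdump);
            unsigned long gp   = ia64_tpa(ia64_getreg(_IA64_REG_GP));

            /* handler length 0 is an assumption for a stub that simply
             * returns to SAL */
            ia64_sal_set_vectors(SAL_VECTOR_OS_INIT, stub, gp, 0, stub, gp, 0);
        }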
* [IA64] kdump: Mask MCA/INIT on frozen cpus (Hidetoshi Seto, 2009-09-14; 1 file, -0/+27)
    Summary:
        An INIT asserted on the kdump kernel invokes the INIT handler not
        only on a cpu running the kdump kernel, but also on the BSP of the
        panicked kernel, because the (badly) frozen BSP can be thawed by
        INIT.

    Description:
        kdump_cpu_freeze() is called on all cpus except the one that
        initiates the panic and/or kdump, to stop/offline them (on ia64 this
        means passing control of the cpu to SAL, or putting it into a spin
        loop). Note that CPU0 (the BSP) always goes to the spin loop, so if
        the panic happened on an AP there are at least two cpus (the AP and
        the BSP) that do not go back to SAL.

        On the spinning cpus, interrupts are disabled (rsm psr.i), but INIT
        can still be delivered, because psr.mc, which would mask it, is only
        set if kdump_cpu_freeze() was called from MCA/INIT context.

        Therefore, assume that a panic happened on an AP, kdump was invoked,
        the new INIT handlers for the kdump kernel were registered, and then
        an INIT is asserted. From the viewpoint of SAL there are two online
        cpus, so the INIT is delivered to both of them. Not only does the AP
        (the cpu executing kdump) enter the newly registered INIT handler,
        the BSP (the cpu still spinning in the panicked kernel) enters the
        same handler as well. Of course the BSP's registers still hold the
        old values (for the panicked kernel), so running the handler with
        the wrong setup leads to completely unexpected results. I believe
        this is not desirable behavior.

    How to reproduce:
        Start kdump on one of the APs (e.g. cpu 1):
            # taskset 0x2 echo c > /proc/sysrq-trigger
        Then assert INIT after the kdump kernel has booted and the new INIT
        handler for the kdump kernel is registered.

    Expected result:
        An INIT handler is invoked only on the AP.

    Actual result:
        An INIT handler is invoked on the AP and on the BSP.

    Sample of results:
        I got the following console log by asserting INIT after the
        "root:/>" prompt appeared. It seems that two monarchs appeared for
        one INIT, and one of them panicked at the end. It also seems that
        the panicked one believed there were 4 online cpus and that no one
        rendezvoused:

            :
            [ 0 %]dropping to initramfs shell
            exiting this shell will reboot your system
            root:/> Entered OS INIT handler. PSP=fff301a0 cpu=0 monarch=0
            ia64_init_handler: Promoting cpu 0 to monarch.
            Delaying for 5 seconds...
            All OS INIT slaves have reached rendezvous
            Processes interrupted by INIT - 0 (cpu 0 task 0xa000000100af0000)
            :
            <<snip>>
            :
            Entered OS INIT handler. PSP=fff301a0 cpu=0 monarch=1
            Delaying for 5 seconds...
            mlogbuf_finish: printing switched to urgent mode, MCA/INIT might be dodgy or fail.
            OS INIT slave did not rendezvous on cpu 1 2 3
            INIT swapper 0[0]: bugcheck! 0 [1]
            :
            <<snip>>
            :
            Kernel panic - not syncing: Attempted to kill the idle task!

    Proposed fix:
        To avoid this problem, this patch inserts ia64_set_psr_mc() to mask
        INIT on cpus that are about to be frozen. The masking has no effect
        if kdump_cpu_freeze() is called from the INIT handler when
        kdump_on_init == 1, because psr.mc is already set to 1 before
        entering OS_INIT. I confirmed that the weird log above disappears
        after applying this patch.

    Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
    Cc: Vivek Goyal <vgoyal@redhat.com>
    Cc: Haren Myneni <hbabu@us.ibm.com>
    Cc: kexec@lists.infradead.org
    Acked-by: Fenghua Yu <fenghua.yu@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
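    The shape of the fix, as a C sketch reconstructed from the description
    above (not copied from the patch; the body of kdump_cpu_freeze() here is
    an assumption, only ia64_set_psr_mc() is named by the commit):

        #include <linux/irqflags.h>
        #include <asm/processor.h>
        #include <asm/unwind.h>

        static void kdump_cpu_freeze(struct unw_frame_info *info, void *arg)
        {
            local_irq_disable();    /* rsm psr.i: masks ordinary interrupts... */
            ia64_set_psr_mc();      /* ...but only psr.mc masks MCA/INIT delivery */

            /* park this cpu: APs hand themselves back to SAL, the BSP spins
             * here, in either case with MCA/INIT now masked */
            for (;;)
                cpu_relax();
        }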
* [IA64] Add API for allocating Dynamic TR resource. (Xiantao Zhang, 2008-04-03; 1 file, -0/+5)
    Dynamic TR resources should be managed in a uniform way. Add two
    interfaces for the kernel:

        ia64_itr_entry: allocate a (pair of) TR(s) for the caller.
        ia64_ptr_entry: purge a (pair of) TR(s) for the caller.

    Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
    Signed-off-by: Anthony Xu <anthony.xu@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
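    A hedged usage sketch of the new interface (the prototypes follow the
    names above; the parameter list, the 0x3 "instruction + data" target
    mask, and the negative-error convention are assumptions, not quotes from
    the patch):

        #include <linux/types.h>

        /* allocate a (pair of) dynamic TR(s) covering va; returns a slot
         * number, or a negative value when no dynamic TR is available */
        extern int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);
        /* purge the (pair of) TR(s) previously allocated in that slot */
        extern void ia64_ptr_entry(u64 target_mask, int slot);

        static int pin_then_release(u64 va, u64 pte, u64 log_size)
        {
            int slot = ia64_itr_entry(0x3, va, pte, log_size); /* 0x3: itr + dtr */

            if (slot < 0)
                return slot;

            /* ...use the pinned translation... */

            ia64_ptr_entry(0x3, slot);  /* give the pair back */
            return 0;
        }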
* [IA64] mca style cleanup (Hidetoshi Seto, 2008-02-04; 1 file, -21/+25)
    Unified changelog, 80-column rule, and address-form fixes.

    Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Support multiple CPUs going through OS_MCA (Russ Anderson, 2007-07-11; 1 file, -12/+0)
    Linux does not gracefully deal with multiple processors going through
    OS_MCA as part of the same MCA event. The first cpu into OS_MCA grabs
    the ia64_mca_serialize lock. Subsequent cpus wait for that lock,
    preventing them from reporting in as rendezvoused. The first cpu waits
    5 seconds and then complains that not all cpus have rendezvoused. It
    then handles its MCA, frees up all the rendezvoused cpus, and releases
    the ia64_mca_serialize lock. One of the subsequent cpus going through
    OS_MCA then gets the ia64_mca_serialize lock, waits another 5 seconds,
    and then complains that none of the other cpus have rendezvoused.

    This patch allows multiple CPUs to go through OS_MCA gracefully. The
    first CPU into ia64_mca_handler() grabs a mca_count lock. Subsequent
    CPUs into ia64_mca_handler() are added to the list of cpus that need to
    go through OS_MCA (a bit set in mca_cpu) and report in as rendezvoused,
    but spin waiting for their turn. The first CPU sees everyone rendezvous,
    handles its MCA, wakes up one of the other CPUs waiting to process its
    MCA (by clearing one mca_cpu bit), and then waits for the other cpus to
    complete their MCA handling. The next CPU handles its MCA and the
    process repeats until all the CPUs have handled their MCAs. When the
    last CPU has handled its MCA, it sets monarch_cpu to -1, releasing all
    the CPUs.

    In testing, this works more reliably and faster.

    Thanks to Keith Owens for suggesting numerous improvements to this code.

    Signed-off-by: Russ Anderson <rja@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
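    To make the turn-taking concrete, here is a small self-contained
    userspace model of the scheme (pthreads standing in for cpus, an atomic
    word standing in for the mca_cpu bitmask). It illustrates the idea only;
    names and details are simplified, and it is not the kernel code:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NCPUS 4

        static atomic_ulong mca_cpu;          /* bit i set: cpu i still waits its turn */
        static atomic_int   monarch_cpu = -1; /* first cpu in; -1 once all are done */
        static atomic_int   entered;

        static void handle_one_mca(int cpu)
        {
            printf("cpu %d handling its MCA\n", cpu);
        }

        static void *os_mca(void *arg)
        {
            int cpu = (int)(intptr_t)arg;

            atomic_fetch_or(&mca_cpu, 1UL << cpu);          /* report in */

            if (atomic_fetch_add(&entered, 1) == 0) {
                atomic_store(&monarch_cpu, cpu);            /* first cpu in is the monarch */
                while (atomic_load(&entered) < NCPUS)       /* wait for full rendezvous */
                    ;
                atomic_fetch_and(&mca_cpu, ~(1UL << cpu));  /* and goes first */
            } else {
                /* slaves count as rendezvoused; spin until the previous cpu
                 * in line clears our bit */
                while (atomic_load(&mca_cpu) & (1UL << cpu))
                    ;
            }

            handle_one_mca(cpu);

            /* hand the baton to exactly one waiter, or release everybody
             * (__builtin_ctzl is a gcc/clang builtin: lowest set bit) */
            unsigned long waiting = atomic_load(&mca_cpu);
            if (waiting)
                atomic_fetch_and(&mca_cpu, ~(1UL << __builtin_ctzl(waiting)));
            else
                atomic_store(&monarch_cpu, -1);
            return NULL;
        }

        int main(void)
        {
            pthread_t t[NCPUS];

            for (long i = 0; i < NCPUS; i++)
                pthread_create(&t[i], NULL, os_mca, (void *)i);
            for (int i = 0; i < NCPUS; i++)
                pthread_join(t[i], NULL);
            return 0;
        }

    Built with "cc -pthread", each "cpu" prints exactly once and one at a
    time, mirroring how the real handler lets exactly one processor run its
    MCA handling at any moment.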
* [IA64] relax per-cpu TLB requirement to DTC (Chen, Kenneth W, 2007-02-06; 1 file, -24/+0)
    Instead of pinning the per-cpu TLB entry into a DTR, use a DTC. This
    frees up one TLB entry for applications, or even for the kernel if the
    access pattern to the per-cpu data area has high temporal locality.
    Since the per-cpu area is mapped at the top of region 7, we just need to
    add a special case in alt_dtlb_miss. The physical address of the per-cpu
    data is already conveniently stored in IA64_KR(PER_CPU_DATA). Latency
    for alt_dtlb_miss is not affected, as we can hide all of it: the
    alt_dtlb_miss handler was measured at 23 cycles of latency both before
    and after the patch.

    The performance effect is massive for applications that put a lot of TLB
    pressure on the CPU. Workloads like database online transaction
    processing, or applications using terabytes of memory, benefit the most.
    Measurement with an industry-standard database benchmark showed an
    upward of 1.6% gain, while smaller workloads like cpu and java also show
    a small improvement.

    Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
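    As a rough model of the special case added to alt_dtlb_miss (a C
    rendering of what the assembler fast path has to decide; the constants
    below are illustrative, only the "top of region 7" placement and the use
    of the stashed physical address come from the text above):

        #include <stdbool.h>
        #include <stdint.h>

        #define REGION(va)    ((va) >> 61)                 /* top 3 bits pick the region */
        #define PERCPU_SIZE   ((uint64_t)1 << 16)          /* assumed 64KB per-cpu page */
        #define PERCPU_START  ((uint64_t)0 - PERCPU_SIZE)  /* last page of region 7 */

        /* A miss in this range can be satisfied straight from the physical
         * address kept in IA64_KR(PER_CPU_DATA), installed as a DTC entry
         * instead of occupying a pinned DTR. */
        static bool fault_is_per_cpu(uint64_t va)
        {
            return REGION(va) == 7 && va >= PERCPU_START;
        }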
* [IA64] ar.fpsr not set on MCA/INIT kernel entry (Russ Anderson, 2006-09-26; 1 file, -0/+4)
    When entering the kernel due to an MCA or INIT, ar.fpsr (ar40) was not
    getting set to the kernel default value (it remained at the user value).
    The effect depends on the user setting of ar.fpsr; in the test case, the
    effect was addresses printing with strange hex values. Setting ar.fpsr
    in ia64_set_kernel_registers sets it for both the MCA and INIT paths.
    The user value of ar.fpsr is correctly saved (in ia64_state_save) and
    restored (in ia64_state_restore).

    Below is an example of output with very strange hex values. Anyone know
    the value of hex 'g'? :-)

        Processes interrupted by INIT - 0 (cpu 14 task 0xdfffg55g7a4c6gA)

    Signed-off-by: Russ Anderson (rja@sgi.com)
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Make gp value point to Region 5 in mca handler (Zou Nan hai, 2006-09-26; 1 file, -5/+0)
    The MCA dispatch code takes the physical address of the gp passed from
    SAL, then calls DATA_PA_TO_VA on it twice before calling into C code:
    the first time in ia64_set_kernel_registers, the second time in
    VIRTUAL_MODE_ENTER. The gp is thereby changed to a virtual address in
    region 7, because DATA_PA_TO_VA is implemented with a dep instruction.
    However, when notify blocks are called from the MCA handler code, the gp
    value is switched to region 5 again, because notify blocks are invoked
    through callback function pointers.

    The patch sets the gp register to the kernel gp in region 5 at the entry
    of MCA dispatch.

    Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
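    The region arithmetic behind this is easy to see with a worked example
    (values below are made up, and the "dep fills the top address bits"
    reading of DATA_PA_TO_VA is inferred from the text above):

        #include <stdint.h>
        #include <stdio.h>

        /* If DATA_PA_TO_VA deposits 0x7 into address bits 63:61, it always
         * yields a region 7 (0xe...) virtual address from a physical one. */
        static uint64_t pa_to_region7(uint64_t pa)
        {
            return pa | 0xe000000000000000ULL;
        }

        int main(void)
        {
            uint64_t gp_pa = 0x0000000004c00000ULL;   /* hypothetical physical gp */
            uint64_t gp_va = pa_to_region7(gp_pa);

            /* prints 0xe000000004c00000, i.e. region 7 */
            printf("gp via DATA_PA_TO_VA: 0x%016llx (region %llu)\n",
                   (unsigned long long)gp_va, (unsigned long long)(gp_va >> 61));

            /* kernel code and notify-block function descriptors carry a
             * region 5 (0xa...) gp, hence the explicit gp reload at MCA
             * dispatch entry */
            return 0;
        }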
* Remove obsolete #include <linux/config.h> (Jörn Engel, 2006-06-30; 1 file, -1/+0)
    Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [IA64] Sanitize assembler code for ia64_sal_os_state (Keith Owens, 2006-06-21; 1 file, -11/+17)
    struct ia64_sal_os_state has three semi-independent sections. The code
    in mca_asm.S assumes that these three sections are contiguous, which
    makes it very awkward to add new data to this structure. Remove the
    assumption that the sections are contiguous. Define a macro to shorten
    references to offsets in ia64_sal_os_state.

    This patch does not change the way that the code behaves. It just makes
    it easier to update the code in future and to add fields to
    ia64_sal_os_state when debugging the MCA/INIT handlers.

    Signed-off-by: Keith Owens <kaos@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Failure to resume after INIT in user space (Keith Owens, 2006-04-07; 1 file, -5/+5)
    The OS INIT handler is loading incorrect values into cr.ifa on exit.
    This shows up as a hang when resuming after an INIT that is delivered
    while a cpu is in user space. Correct the value loaded into cr.ifa.

    Signed-off-by: Keith Owens <kaos@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Set the correct default OS status in the MCA handler (Keith Owens, 2006-01-24; 1 file, -1/+1)
    sos->os_status is set to a default value of IA64_MCA_COLD_BOOT for an
    MCA, but is then incorrectly overwritten with IA64_MCA_SAME_CONTEXT (0).
    This makes SAL think that all MCAs have been recovered.

    Signed-off-by: Keith Owens <kaos@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Fix conversion of pal_min_state physical address (Francois Wellenrieter, 2006-01-13; 1 file, -1/+1)
    On return from the INIT handler we must convert the address of the
    minstate area from a kernel virtual uncached address (0xC...) to
    physical uncached (0x8...). A typo (or thinko?) in the code converted to
    physical cached.

    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] Wire in the MCA/INIT handler stacks (Keith Owens, 2005-09-22; 1 file, -11/+85)
    Wire the MCA/INIT handler stacks into DTR[2] and track them in
    IA64_KR(CURRENT_STACK). This gives the MCA/INIT handler stacks the same
    TLB status as normal kernel stacks. Reload the old CURRENT_STACK data on
    return from OS to SAL.

    Signed-off-by: Keith Owens <kaos@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [PATCH] MCA/INIT: use per cpu stacks (Keith Owens, 2005-09-11; 1 file, -640/+718)
    The bulk of the change. Use per cpu MCA/INIT stacks. Change the SAL to
    OS state (sos) to be per process. Do all the assembler work on the
    MCA/INIT stacks, leaving the original stack alone. Pass per cpu state
    data to the C handlers for MCA and INIT, which also means changing the
    mca_drv interfaces slightly. Lots of verification on whether the
    original stack is usable before converting it to a sleeping process.

    Signed-off-by: Keith Owens <kaos@sgi.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* [IA64] cpu hotplug: return offlined cpus to SAL (Ashok Raj, 2005-04-22; 1 file, -35/+53)
    This patch is required to support cpu removal on IPF systems. Existing
    code just fakes the offline by keeping the cpu running the idle thread
    and polling for the bit to re-appear in cpu_state to get out of the idle
    loop.

    For cpu offline to work correctly, we need to pass control of this CPU
    back to SAL so it can continue in the boot-rendez mode. This lets SAL
    avoid picking this cpu as the monarch processor for global MCA events,
    and in addition SAL does not wait for this cpu to check in for global
    MCA events either.

    The handoff is implemented as documented in the SAL specification,
    section 3.2.5.1 "OS_BOOT_RENDEZ to SAL return State".

    Signed-off-by: Ashok Raj <ashok.raj@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
* Linux-2.6.12-rc2 (Linus Torvalds, 2005-04-16; 1 file, -0/+928)
    Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early git
    days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!