(*) Security (currently only AFS kaserver and KerberosIV tickets).
- (*) File reading.
+ (*) File reading and writing.
(*) Automounting.
-It does not yet support the following AFS features:
-
- (*) Write support.
+ (*) Local caching (via fscache).
- (*) Local caching.
+It does not yet support the following AFS features:
(*) pioctl() system call.
the masks in the following files:
/sys/module/af_rxrpc/parameters/debug
- /sys/module/afs/parameters/debug
+ /sys/module/kafs/parameters/debug
=====
When inserting the driver modules the root cell must be specified along with a
list of volume location server IP addresses:
- insmod af_rxrpc.o
- insmod rxkad.o
- insmod kafs.o rootcell=cambridge.redhat.com:172.16.18.73:172.16.18.91
+ modprobe af_rxrpc
+ modprobe rxkad
+ modprobe kafs rootcell=cambridge.redhat.com:172.16.18.73:172.16.18.91
The first module is the AF_RXRPC network protocol driver. This provides the
RxRPC remote operation protocol and may also be accessed from userspace. See:
Once the module has been loaded, more modules can be added by the following
procedure:
- echo add grand.central.org 18.7.14.88:128.2.191.224 >/proc/fs/afs/cells
+ echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 >/proc/fs/afs/cells
Where the parameters to the "add" command are the name of a cell and a list of
volume location servers within that cell, with the latter separated by colons.
specify connection to only volumes of those types.
The name of the cell is optional, and if not given during a mount, then the
-named volume will be looked up in the cell specified during insmod.
+named volume will be looked up in the cell specified during modprobe.
Additional cells can be added through /proc (see later section).
The filesystem maintains an internal database of all the cells it knows and the
IP addresses of the volume location servers for those cells. The cell to which
-the system belongs is added to the database when insmod is performed by the
+the system belongs is added to the database when modprobe is performed by the
"rootcell=" argument or, if compiled in, using a "kafs.rootcell=" argument on
the kernel command line.
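For example, with kafs built into the kernel, the same root cell as in the
modprobe example above could be given on the boot command line (the cell name
and server addresses here are just the sample values used earlier in this
document):

	kafs.rootcell=cambridge.redhat.com:172.16.18.73:172.16.18.91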
Further cells can be added by commands similar to the following:
echo add CELLNAME VLADDR[:VLADDR][:VLADDR]... >/proc/fs/afs/cells
- echo add grand.central.org 18.7.14.88:128.2.191.224 >/proc/fs/afs/cells
+ echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 >/proc/fs/afs/cells
No other cell database operations are available at this time.
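The same interface can also be driven from a program rather than the shell.
The following minimal C sketch (an illustration only, not part of the
documentation above) writes an equivalent "add" command to
/proc/fs/afs/cells, reusing the sample cell and addresses from the echo
example; run as root it has the same effect as that echo command:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Equivalent to:
	 *   echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 \
	 *        >/proc/fs/afs/cells
	 */
	FILE *cells = fopen("/proc/fs/afs/cells", "w");

	if (!cells) {
		perror("/proc/fs/afs/cells");
		return EXIT_FAILURE;
	}

	fprintf(cells,
		"add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87\n");

	if (fclose(cells) != 0) {
		perror("add cell");
		return EXIT_FAILURE;
	}

	return EXIT_SUCCESS;
}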
mount -t afs \%root.afs. /afs
mount -t afs \%cambridge.redhat.com:root.cell. /afs/cambridge.redhat.com/
-echo add grand.central.org 18.7.14.88:128.2.191.224 > /proc/fs/afs/cells
+echo add grand.central.org 18.9.48.14:128.2.203.61:130.237.48.87 > /proc/fs/afs/cells
mount -t afs "#grand.central.org:root.cell." /afs/grand.central.org/
mount -t afs "#grand.central.org:root.archive." /afs/grand.central.org/archive
mount -t afs "#grand.central.org:root.contrib." /afs/grand.central.org/contrib
3.1 /proc/<pid>/oom_adj - Adjust the oom-killer score
------------------------------------------------------
-This file can be used to adjust the score used to select which processes should
-be killed in an out-of-memory situation. The oom_adj value is a characteristic
-of the task's mm, so all threads that share an mm with pid will have the same
-oom_adj value. A high value will increase the likelihood of this process being
-killed by the oom-killer. Valid values are in the range -16 to +15 as
-explained below and a special value of -17, which disables oom-killing
-altogether for threads sharing pid's mm.
+This file can be used to adjust the score used to select which processes
+should be killed in an out-of-memory situation. Giving it a high score will
+increase the likelihood of this process being killed by the oom-killer. Valid
+values are in the range -16 to +15, plus the special value -17, which disables
+oom-killing altogether for this process.
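As an illustration only (not part of the documentation text above), the
adjustment can also be made programmatically. A minimal sketch, assuming the
caller is permitted to write the target's oom_adj file, that opts the current
process out of oom-killing:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Disable the oom-killer for this process by writing the special
	 * value -17 to /proc/self/oom_adj.  Lowering the value may be
	 * restricted to privileged callers; this is only a sketch.
	 */
	FILE *oom = fopen("/proc/self/oom_adj", "w");

	if (!oom) {
		perror("/proc/self/oom_adj");
		return EXIT_FAILURE;
	}

	fprintf(oom, "%d\n", -17);

	if (fclose(oom) != 0) {
		perror("oom_adj");
		return EXIT_FAILURE;
	}

	return EXIT_SUCCESS;
}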
The process to be killed in an out-of-memory situation is selected among all others
based on its badness score. This value equals the original memory size of the process
are the prime candidates to be killed. Having only one 'hungry' child will make
the parent less preferable than the child.
-/proc/<pid>/oom_adj cannot be changed for kthreads since they are immune from
-oom-killing already.
-
/proc/<pid>/oom_score shows the process's current badness score.
The following heuristics are then applied:
0 -> Unknown EM2800 video grabber (em2800) [eb1a:2800]
- 1 -> Unknown EM2750/28xx video grabber (em2820/em2840) [eb1a:2820,eb1a:2821,eb1a:2860,eb1a:2861,eb1a:2870,eb1a:2881,eb1a:2883]
+ 1 -> Unknown EM2750/28xx video grabber (em2820/em2840) [eb1a:2710,eb1a:2820,eb1a:2821,eb1a:2860,eb1a:2861,eb1a:2870,eb1a:2881,eb1a:2883]
2 -> Terratec Cinergy 250 USB (em2820/em2840) [0ccd:0036]
3 -> Pinnacle PCTV USB 2 (em2820/em2840) [2304:0208]
4 -> Hauppauge WinTV USB 2 (em2820/em2840) [2040:4200,2040:4201]
152 -> Asus Tiger Rev:1.00 [1043:4857]
153 -> Kworld Plus TV Analog Lite PCI [17de:7128]
154 -> Avermedia AVerTV GO 007 FM Plus [1461:f31d]
-155 -> Hauppauge WinTV-HVR1120 ATSC/QAM-Hybrid [0070:6706,0070:6708]
-156 -> Hauppauge WinTV-HVR1110r3 DVB-T/Hybrid [0070:6707,0070:6709,0070:670a]
+155 -> Hauppauge WinTV-HVR1150 ATSC/QAM-Hybrid [0070:6706,0070:6708]
+156 -> Hauppauge WinTV-HVR1120 DVB-T/Hybrid [0070:6707,0070:6709,0070:670a]
157 -> Avermedia AVerTV Studio 507UA [1461:a11b]
158 -> AVerMedia Cardbus TV/Radio (E501R) [1461:b7e9]
159 -> Beholder BeholdTV 505 RDS [0000:505B]
ATLX ETHERNET DRIVERS
M: Jay Cliburn <jcliburn@gmail.com>
-M: Chris Snook <csnook@redhat.com>
+M: Chris Snook <chris.snook@gmail.com>
M: Jie Yang <jie.yang@atheros.com>
L: atl1-devel@lists.sourceforge.net
W: http://sourceforge.net/projects/atl1
S: Maintained
F: drivers/media/video/gspca/pac207.c
+GSPCA SN9C20X SUBDRIVER
+M: Brian Johnson <brijohn@gmail.com>
+L: linux-media@vger.kernel.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
+S: Maintained
+F: drivers/media/video/gspca/sn9c20x.c
+
GSPCA T613 SUBDRIVER
M: Leandro Costantino <lcostantino@gmail.com>
L: linux-media@vger.kernel.org
MULTIMEDIA CARD (MMC), SECURE DIGITAL (SD) AND SDIO SUBSYSTEM
S: Orphan
+L: linux-mmc@vger.kernel.org
F: drivers/mmc/
F: include/linux/mmc/
S: Maintained
F: net/
F: include/net/
+F: include/linux/in.h
+F: include/linux/net.h
+F: include/linux/netdevice.h
NETWORKING [IPv4/IPv6]
M: "David S. Miller" <davem@davemloft.net>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6.git
S: Odd Fixes
F: drivers/net/
+F: include/linux/if_*
+F: include/linux/*device.h
NETXEN (1/10) GbE SUPPORT
M: Dhananjay Phadke <dhananjay@netxen.com>
T: git git://git.open-osd.org/open-osd.git
S: Maintained
F: drivers/scsi/osd/
-F: drivers/include/scsi/osd_*
+F: include/scsi/osd_*
F: fs/exofs/
P54 WIRELESS DRIVER
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 31
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
NAME = Man-Eating Seals of Antiquity
# *DOCUMENTATION*
to the person responsible for the code relevant to what you were doing.
If it occurs repeatably try and describe how to recreate it. That is
worth even more than the oops itself. The list of maintainers and
-mailing lists is in the MAINTAINERS file in this directory.
+mailing lists is in the MAINTAINERS file in this directory. If you
+know the file name that causes the problem you can use the following
+command in this directory to find some of the maintainers of that file:
+ perl scripts/get_maintainer.pl -f <filename>
If it is a security bug, please copy the Security Contact listed
in the MAINTAINERS file. They can help coordinate bugfix and disclosure.
CONFIG_ATA=y
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_SATA_PMP=y
-# CONFIG_SATA_AHCI is not set
+CONFIG_SATA_AHCI=y
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y
# CONFIG_SATA_SVW is not set
#
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE="init=/sbin/preinit ubi.mtd=rootfs root=ubi0:rootfs rootfstype=ubifs rootflags=bulk_read,no_chk_data_crc rw console=ttyMTD,log console=tty0"
+CONFIG_CMDLINE="init=/sbin/preinit ubi.mtd=rootfs root=ubi0:rootfs rootfstype=ubifs rootflags=bulk_read,no_chk_data_crc rw console=ttyMTD,log console=tty0 console=ttyS2,115200n8"
# CONFIG_XIP_KERNEL is not set
# CONFIG_KEXEC is not set
# CONFIG_USB_GPIO_VBUS is not set
# CONFIG_ISP1301_OMAP is not set
CONFIG_TWL4030_USB=y
-CONFIG_MMC=m
+CONFIG_MMC=y
# CONFIG_MMC_DEBUG is not set
# CONFIG_MMC_UNSAFE_RESUME is not set
# on-CPU RTC drivers
#
# CONFIG_DMADEVICES is not set
-# CONFIG_REGULATOR is not set
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_TWL4030=y
# CONFIG_UIO is not set
# CONFIG_STAGING is not set
struct membank {
unsigned long start;
unsigned long size;
- int node;
+ unsigned short node;
+ unsigned short highmem;
};
struct meminfo {
#include <mach/hardware.h>
-#define IO_SPACE_LIMIT 0xffff0000
+#define IO_SPACE_LIMIT 0x0000ffff
extern int (*ixp4xx_pci_read)(u32 addr, u32 cmd, u32* data);
extern int ixp4xx_pci_write(u32 addr, u32 cmd, u32 data);
}
+static int __init ts219_pci_init(void)
+{
+ if (machine_is_ts219())
+ kirkwood_pcie_init();
+
+ return 0;
+}
+subsys_initcall(ts219_pci_init);
+
MACHINE_START(TS219, "QNAP TS-119/TS-219")
/* Maintainer: Martin Michlmayr <tbm@cyrius.com> */
.phys_io = KIRKWOOD_REGS_PHYS_BASE,
static int devboard_sdhc2_get_ro(struct device *dev)
{
- return gpio_get_value(SDHC2_WP);
+ return !gpio_get_value(SDHC2_WP);
}
static int devboard_sdhc2_init(struct device *dev, irq_handler_t detect_irq,
static int marxbot_sdhc2_get_ro(struct device *dev)
{
- return gpio_get_value(SDHC2_WP);
+ return !gpio_get_value(SDHC2_WP);
}
static int marxbot_sdhc2_init(struct device *dev, irq_handler_t detect_irq,
static int moboard_sdhc1_get_ro(struct device *dev)
{
- return gpio_get_value(SDHC1_WP);
+ return !gpio_get_value(SDHC1_WP);
}
static int moboard_sdhc1_init(struct device *dev, irq_handler_t detect_irq,
#include "devices.h"
static unsigned int pcm037_eet_pins[] = {
- /* SPI #1 */
- MX31_PIN_CSPI1_MISO__MISO,
- MX31_PIN_CSPI1_MOSI__MOSI,
- MX31_PIN_CSPI1_SCLK__SCLK,
- MX31_PIN_CSPI1_SPI_RDY__SPI_RDY,
- MX31_PIN_CSPI1_SS0__SS0,
- MX31_PIN_CSPI1_SS1__SS1,
- MX31_PIN_CSPI1_SS2__SS2,
-
/* Reserve and hardwire GPIO 57 high - S6E63D6 chipselect */
IOMUX_MODE(MX31_PIN_KEY_COL7, IOMUX_CONFIG_GPIO),
/* GPIO keys */
static void __init omap_2430sdp_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
omap_init_irq();
omap_gpio_init();
}
static void __init omap_3430sdp_init_irq(void)
{
- omap2_init_common_hw(hyb18m512160af6_sdrc_params);
+ omap2_init_common_hw(hyb18m512160af6_sdrc_params, NULL);
omap_init_irq();
omap_gpio_init();
}
static void __init omap_4430sdp_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
#ifdef CONFIG_OMAP_32K_TIMER
omap2_gp_clockevent_set_gptimer(1);
#endif
static void __init omap_apollon_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
omap_init_irq();
omap_gpio_init();
apollon_init_smc91x();
static void __init omap_generic_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
omap_init_irq();
}
static void __init omap_h4_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
omap_init_irq();
omap_gpio_init();
h4_init_flash();
static void __init omap_ldp_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
omap_init_irq();
omap_gpio_init();
ldp_init_smsc911x();
static void __init omap3_beagle_init_irq(void)
{
- omap2_init_common_hw(mt46h32m32lf6_sdrc_params);
+ omap2_init_common_hw(mt46h32m32lf6_sdrc_params,
+ mt46h32m32lf6_sdrc_params);
omap_init_irq();
#ifdef CONFIG_OMAP_32K_TIMER
omap2_gp_clockevent_set_gptimer(12);
usb_musb_init();
omap3beagle_flash_init();
+
+ /* Ensure SDRC pins are mux'd for self-refresh */
+ omap_cfg_reg(H16_34XX_SDRC_CKE0);
+ omap_cfg_reg(H17_34XX_SDRC_CKE1);
}
static void __init omap3_beagle_map_io(void)
static void __init omap3_evm_init_irq(void)
{
- omap2_init_common_hw(mt46h32m32lf6_sdrc_params);
+ omap2_init_common_hw(mt46h32m32lf6_sdrc_params, NULL);
omap_init_irq();
omap_gpio_init();
omap3evm_init_smc911x();
#include <mach/mcspi.h>
#include <mach/usb.h>
#include <mach/keypad.h>
+#include <mach/mux.h>
#include "sdram-micron-mt46h32m32lf-6.h"
#include "mmc-twl4030.h"
static void __init omap3pandora_init_irq(void)
{
- omap2_init_common_hw(mt46h32m32lf6_sdrc_params);
+ omap2_init_common_hw(mt46h32m32lf6_sdrc_params,
+ mt46h32m32lf6_sdrc_params);
omap_init_irq();
omap_gpio_init();
}
omap3pandora_ads7846_init();
pandora_keys_gpio_init();
usb_musb_init();
+
+ /* Ensure SDRC pins are mux'd for self-refresh */
+ omap_cfg_reg(H16_34XX_SDRC_CKE0);
+ omap_cfg_reg(H17_34XX_SDRC_CKE1);
}
static void __init omap3pandora_map_io(void)
#include <mach/gpmc.h>
#include <mach/hardware.h>
#include <mach/nand.h>
+#include <mach/mux.h>
#include <mach/usb.h>
#include "sdram-micron-mt46h32m32lf-6.h"
#define OVERO_GPIO_BT_XGATE 15
#define OVERO_GPIO_W2W_NRESET 16
+#define OVERO_GPIO_PENDOWN 114
#define OVERO_GPIO_BT_NRESET 164
#define OVERO_GPIO_USBH_CPEN 168
#define OVERO_GPIO_USBH_NRESET 183
.name = "smsc911x",
.id = -1,
.num_resources = ARRAY_SIZE(overo_smsc911x_resources),
- .resource = &overo_smsc911x_resources,
+ .resource = overo_smsc911x_resources,
.dev = {
.platform_data = &overo_smsc911x_config,
},
static void __init overo_init_irq(void)
{
- omap2_init_common_hw(mt46h32m32lf6_sdrc_params);
+ omap2_init_common_hw(mt46h32m32lf6_sdrc_params,
+ mt46h32m32lf6_sdrc_params);
omap_init_irq();
omap_gpio_init();
}
overo_ads7846_init();
overo_init_smsc911x();
+ /* Ensure SDRC pins are mux'd for self-refresh */
+ omap_cfg_reg(H16_34XX_SDRC_CKE0);
+ omap_cfg_reg(H17_34XX_SDRC_CKE1);
+
if ((gpio_request(OVERO_GPIO_W2W_NRESET,
"OVERO_GPIO_W2W_NRESET") == 0) &&
(gpio_direction_output(OVERO_GPIO_W2W_NRESET, 1) == 0)) {
.setup = rx51_twlgpio_setup,
};
+static struct twl4030_usb_data rx51_usb_data = {
+ .usb_mode = T2_USB_MODE_ULPI,
+};
+
static struct twl4030_platform_data rx51_twldata = {
.irq_base = TWL4030_IRQ_BASE,
.irq_end = TWL4030_IRQ_END,
.gpio = &rx51_gpio_data,
.keypad = &rx51_kp_data,
.madc = &rx51_madc_data,
+ .usb = &rx51_usb_data,
.vaux1 = &rx51_vaux1,
.vaux2 = &rx51_vaux2,
static void __init rx51_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
omap_init_irq();
omap_gpio_init();
}
omap_serial_init();
usb_musb_init();
rx51_peripherals_init();
+
+ /* Ensure SDRC pins are mux'd for self-refresh */
+ omap_cfg_reg(H16_34XX_SDRC_CKE0);
+ omap_cfg_reg(H17_34XX_SDRC_CKE1);
}
static void __init rx51_map_io(void)
static void __init omap_zoom2_init_irq(void)
{
- omap2_init_common_hw(NULL);
+ omap2_init_common_hw(NULL, NULL);
omap_init_irq();
omap_gpio_init();
}
#include <mach/clock.h>
#include <mach/clockdomain.h>
#include <mach/cpu.h>
+#include <mach/prcm.h>
#include <asm/div64.h>
#include <mach/sdrc.h>
#include "cm-regbits-24xx.h"
#include "cm-regbits-34xx.h"
-#define MAX_CLOCK_ENABLE_WAIT 100000
-
/* DPLL rate rounding: minimum DPLL multiplier, divider values */
#define DPLL_MIN_MULTIPLIER 1
#define DPLL_MIN_DIVIDER 1
}
/**
- * omap2_wait_clock_ready - wait for clock to enable
- * @reg: physical address of clock IDLEST register
- * @mask: value to mask against to determine if the clock is active
- * @name: name of the clock (for printk)
+ * omap2_clk_dflt_find_companion - find companion clock to @clk
+ * @clk: struct clk * to find the companion clock of
+ * @other_reg: void __iomem ** to return the companion clock CM_*CLKEN va in
+ * @other_bit: u8 * to return the companion clock bit shift in
+ *
+ * Note: We don't need special code here for INVERT_ENABLE for the
+ * time being since INVERT_ENABLE only applies to clocks enabled by
+ * CM_CLKEN_PLL
*
- * Returns 1 if the clock enabled in time, or 0 if it failed to enable
- * in roughly MAX_CLOCK_ENABLE_WAIT microseconds.
+ * Convert CM_ICLKEN* <-> CM_FCLKEN*. This conversion assumes it's
+ * just a matter of XORing the bits.
+ *
+ * Some clocks don't have companion clocks. For example, modules with
+ * only an interface clock (such as MAILBOXES) don't have a companion
+ * clock. Right now, this code relies on the hardware exporting a bit
+ * in the correct companion register that indicates that the
+ * nonexistent 'companion clock' is active. Future patches will
+ * associate this type of code with per-module data structures to
+ * avoid this issue, and remove the casts. No return value.
*/
-int omap2_wait_clock_ready(void __iomem *reg, u32 mask, const char *name)
+void omap2_clk_dflt_find_companion(struct clk *clk, void __iomem **other_reg,
+ u8 *other_bit)
{
- int i = 0;
- int ena = 0;
+ u32 r;
/*
- * 24xx uses 0 to indicate not ready, and 1 to indicate ready.
- * 34xx reverses this, just to keep us on our toes
+ * Convert CM_ICLKEN* <-> CM_FCLKEN*. This conversion assumes
+ * it's just a matter of XORing the bits.
*/
- if (cpu_mask & (RATE_IN_242X | RATE_IN_243X))
- ena = mask;
- else if (cpu_mask & RATE_IN_343X)
- ena = 0;
-
- /* Wait for lock */
- while (((__raw_readl(reg) & mask) != ena) &&
- (i++ < MAX_CLOCK_ENABLE_WAIT)) {
- udelay(1);
- }
-
- if (i <= MAX_CLOCK_ENABLE_WAIT)
- pr_debug("Clock %s stable after %d loops\n", name, i);
- else
- printk(KERN_ERR "Clock %s didn't enable in %d tries\n",
- name, MAX_CLOCK_ENABLE_WAIT);
-
-
- return (i < MAX_CLOCK_ENABLE_WAIT) ? 1 : 0;
-};
+ r = ((__force u32)clk->enable_reg ^ (CM_FCLKEN ^ CM_ICLKEN));
+ *other_reg = (__force void __iomem *)r;
+ *other_bit = clk->enable_bit;
+}
-/*
- * Note: We don't need special code here for INVERT_ENABLE
- * for the time being since INVERT_ENABLE only applies to clocks enabled by
- * CM_CLKEN_PLL
+/**
+ * omap2_clk_dflt_find_idlest - find CM_IDLEST reg va, bit shift for @clk
+ * @clk: struct clk * to find IDLEST info for
+ * @idlest_reg: void __iomem ** to return the CM_IDLEST va in
+ * @idlest_bit: u8 * to return the CM_IDLEST bit shift in
+ *
+ * Return the CM_IDLEST register address and bit shift corresponding
+ * to the module that "owns" this clock. This default code assumes
+ * that the CM_IDLEST bit shift is the CM_*CLKEN bit shift, and that
+ * the IDLEST register address ID corresponds to the CM_*CLKEN
+ * register address ID (e.g., that CM_FCLKEN2 corresponds to
+ * CM_IDLEST2). This is not true for all modules. No return value.
*/
-static void omap2_clk_wait_ready(struct clk *clk)
+void omap2_clk_dflt_find_idlest(struct clk *clk, void __iomem **idlest_reg,
+ u8 *idlest_bit)
{
- void __iomem *reg, *other_reg, *st_reg;
- u32 bit;
+ u32 r;
- /*
- * REVISIT: This code is pretty ugly. It would be nice to generalize
- * it and pull it into struct clk itself somehow.
- */
- reg = clk->enable_reg;
+ r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20);
+ *idlest_reg = (__force void __iomem *)r;
+ *idlest_bit = clk->enable_bit;
+}
- /*
- * Convert CM_ICLKEN* <-> CM_FCLKEN*. This conversion assumes
- * it's just a matter of XORing the bits.
- */
- other_reg = (void __iomem *)((u32)reg ^ (CM_FCLKEN ^ CM_ICLKEN));
+/**
+ * omap2_module_wait_ready - wait for an OMAP module to leave IDLE
+ * @clk: struct clk * belonging to the module
+ *
+ * If the necessary clocks for the OMAP hardware IP block that
+ * corresponds to clock @clk are enabled, then wait for the module to
+ * indicate readiness (i.e., to leave IDLE). This code does not
+ * belong in the clock code and will be moved in the medium term to
+ * module-dependent code. No return value.
+ */
+static void omap2_module_wait_ready(struct clk *clk)
+{
+ void __iomem *companion_reg, *idlest_reg;
+ u8 other_bit, idlest_bit;
+
+ /* Not all modules have multiple clocks that their IDLEST depends on */
+ if (clk->ops->find_companion) {
+ clk->ops->find_companion(clk, &companion_reg, &other_bit);
+ if (!(__raw_readl(companion_reg) & (1 << other_bit)))
+ return;
+ }
- /* Check if both functional and interface clocks
- * are running. */
- bit = 1 << clk->enable_bit;
- if (!(__raw_readl(other_reg) & bit))
- return;
- st_reg = (void __iomem *)(((u32)other_reg & ~0xf0) | 0x20); /* CM_IDLEST* */
+ clk->ops->find_idlest(clk, &idlest_reg, &idlest_bit);
- omap2_wait_clock_ready(st_reg, bit, clk->name);
+ omap2_cm_wait_idlest(idlest_reg, (1 << idlest_bit), clk->name);
}
-static int omap2_dflt_clk_enable(struct clk *clk)
+int omap2_dflt_clk_enable(struct clk *clk)
{
u32 v;
if (unlikely(clk->enable_reg == NULL)) {
- printk(KERN_ERR "clock.c: Enable for %s without enable code\n",
+ pr_err("clock.c: Enable for %s without enable code\n",
clk->name);
return 0; /* REVISIT: -EINVAL */
}
__raw_writel(v, clk->enable_reg);
v = __raw_readl(clk->enable_reg); /* OCP barrier */
- return 0;
-}
+ if (clk->ops->find_idlest)
+ omap2_module_wait_ready(clk);
-static int omap2_dflt_clk_enable_wait(struct clk *clk)
-{
- int ret;
-
- if (!clk->enable_reg) {
- printk(KERN_ERR "clock.c: Enable for %s without enable code\n",
- clk->name);
- return 0; /* REVISIT: -EINVAL */
- }
-
- ret = omap2_dflt_clk_enable(clk);
- if (ret == 0)
- omap2_clk_wait_ready(clk);
- return ret;
+ return 0;
}
-static void omap2_dflt_clk_disable(struct clk *clk)
+void omap2_dflt_clk_disable(struct clk *clk)
{
u32 v;
}
const struct clkops clkops_omap2_dflt_wait = {
- .enable = omap2_dflt_clk_enable_wait,
+ .enable = omap2_dflt_clk_enable,
.disable = omap2_dflt_clk_disable,
+ .find_companion = omap2_clk_dflt_find_companion,
+ .find_idlest = omap2_clk_dflt_find_idlest,
};
const struct clkops clkops_omap2_dflt = {
u32 omap2_get_dpll_rate(struct clk *clk);
int omap2_wait_clock_ready(void __iomem *reg, u32 cval, const char *name);
void omap2_clk_prepare_for_reboot(void);
+int omap2_dflt_clk_enable(struct clk *clk);
+void omap2_dflt_clk_disable(struct clk *clk);
+void omap2_clk_dflt_find_companion(struct clk *clk, void __iomem **other_reg,
+ u8 *other_bit);
+void omap2_clk_dflt_find_idlest(struct clk *clk, void __iomem **idlest_reg,
+ u8 *idlest_bit);
extern const struct clkops clkops_omap2_dflt_wait;
extern const struct clkops clkops_omap2_dflt;
#include <mach/clock.h>
#include <mach/sram.h>
+#include <mach/prcm.h>
#include <asm/div64.h>
#include <asm/clkdev.h>
static const struct clkops clkops_oscck;
static const struct clkops clkops_fixed;
+static void omap2430_clk_i2chs_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit);
+
+/* 2430 I2CHS has non-standard IDLEST register */
+static const struct clkops clkops_omap2430_i2chs_wait = {
+ .enable = omap2_dflt_clk_enable,
+ .disable = omap2_dflt_clk_disable,
+ .find_idlest = omap2430_clk_i2chs_find_idlest,
+ .find_companion = omap2_clk_dflt_find_companion,
+};
+
#include "clock24xx.h"
struct omap_clk {
* Omap24xx specific clock functions
*-------------------------------------------------------------------------*/
+/**
+ * omap2430_clk_i2chs_find_idlest - return CM_IDLEST info for 2430 I2CHS
+ * @clk: struct clk * being enabled
+ * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into
+ * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into
+ *
+ * OMAP2430 I2CHS CM_IDLEST bits are in CM_IDLEST1_CORE, but the
+ * CM_*CLKEN bits are in CM_{I,F}CLKEN2_CORE. This custom function
+ * passes back the correct CM_IDLEST register address for I2CHS
+ * modules. No return value.
+ */
+static void omap2430_clk_i2chs_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit)
+{
+ *idlest_reg = OMAP_CM_REGADDR(CORE_MOD, CM_IDLEST);
+ *idlest_bit = clk->enable_bit;
+}
+
+
/**
* omap2xxx_clk_get_core_rate - return the CORE_CLK rate
* @clk: pointer to the combined dpll_ck + core_ck (currently "dpll_ck")
else if (clk == &apll54_ck)
cval = OMAP24XX_ST_54M_APLL;
- omap2_wait_clock_ready(OMAP_CM_REGADDR(PLL_MOD, CM_IDLEST), cval,
- clk->name);
+ omap2_cm_wait_idlest(OMAP_CM_REGADDR(PLL_MOD, CM_IDLEST), cval,
+ clk->name);
/*
* REVISIT: Should we return an error code if omap2_wait_clock_ready()
static struct clk i2chs2_fck = {
.name = "i2c_fck",
- .ops = &clkops_omap2_dflt_wait,
+ .ops = &clkops_omap2430_i2chs_wait,
.id = 2,
.parent = &func_96m_ck,
.clkdm_name = "core_l4_clkdm",
static struct clk i2chs1_fck = {
.name = "i2c_fck",
- .ops = &clkops_omap2_dflt_wait,
+ .ops = &clkops_omap2430_i2chs_wait,
.id = 1,
.parent = &func_96m_ck,
.clkdm_name = "core_l4_clkdm",
* OMAP3-specific clock framework functions
*
* Copyright (C) 2007-2008 Texas Instruments, Inc.
- * Copyright (C) 2007-2008 Nokia Corporation
+ * Copyright (C) 2007-2009 Nokia Corporation
*
* Written by Paul Walmsley
* Testing and integration fixes by Jouni Högander
static const struct clkops clkops_noncore_dpll_ops;
+static void omap3430es2_clk_ssi_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit);
+static void omap3430es2_clk_hsotgusb_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit);
+static void omap3430es2_clk_dss_usbhost_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit);
+
+static const struct clkops clkops_omap3430es2_ssi_wait = {
+ .enable = omap2_dflt_clk_enable,
+ .disable = omap2_dflt_clk_disable,
+ .find_idlest = omap3430es2_clk_ssi_find_idlest,
+ .find_companion = omap2_clk_dflt_find_companion,
+};
+
+static const struct clkops clkops_omap3430es2_hsotgusb_wait = {
+ .enable = omap2_dflt_clk_enable,
+ .disable = omap2_dflt_clk_disable,
+ .find_idlest = omap3430es2_clk_hsotgusb_find_idlest,
+ .find_companion = omap2_clk_dflt_find_companion,
+};
+
+static const struct clkops clkops_omap3430es2_dss_usbhost_wait = {
+ .enable = omap2_dflt_clk_enable,
+ .disable = omap2_dflt_clk_disable,
+ .find_idlest = omap3430es2_clk_dss_usbhost_find_idlest,
+ .find_companion = omap2_clk_dflt_find_companion,
+};
+
#include "clock34xx.h"
struct omap_clk {
CLK(NULL, "fshostusb_fck", &fshostusb_fck, CK_3430ES1),
CLK(NULL, "core_12m_fck", &core_12m_fck, CK_343X),
CLK("omap_hdq.0", "fck", &hdq_fck, CK_343X),
- CLK(NULL, "ssi_ssr_fck", &ssi_ssr_fck, CK_343X),
- CLK(NULL, "ssi_sst_fck", &ssi_sst_fck, CK_343X),
+ CLK(NULL, "ssi_ssr_fck", &ssi_ssr_fck_3430es1, CK_3430ES1),
+ CLK(NULL, "ssi_ssr_fck", &ssi_ssr_fck_3430es2, CK_3430ES2),
+ CLK(NULL, "ssi_sst_fck", &ssi_sst_fck_3430es1, CK_3430ES1),
+ CLK(NULL, "ssi_sst_fck", &ssi_sst_fck_3430es2, CK_3430ES2),
CLK(NULL, "core_l3_ick", &core_l3_ick, CK_343X),
- CLK("musb_hdrc", "ick", &hsotgusb_ick, CK_343X),
+ CLK("musb_hdrc", "ick", &hsotgusb_ick_3430es1, CK_3430ES1),
+ CLK("musb_hdrc", "ick", &hsotgusb_ick_3430es2, CK_3430ES2),
CLK(NULL, "sdrc_ick", &sdrc_ick, CK_343X),
CLK(NULL, "gpmc_fck", &gpmc_fck, CK_343X),
CLK(NULL, "security_l3_ick", &security_l3_ick, CK_343X),
CLK(NULL, "mailboxes_ick", &mailboxes_ick, CK_343X),
CLK(NULL, "omapctrl_ick", &omapctrl_ick, CK_343X),
CLK(NULL, "ssi_l4_ick", &ssi_l4_ick, CK_343X),
- CLK(NULL, "ssi_ick", &ssi_ick, CK_343X),
+ CLK(NULL, "ssi_ick", &ssi_ick_3430es1, CK_3430ES1),
+ CLK(NULL, "ssi_ick", &ssi_ick_3430es2, CK_3430ES2),
CLK(NULL, "usb_l4_ick", &usb_l4_ick, CK_3430ES1),
CLK(NULL, "security_l4_ick2", &security_l4_ick2, CK_343X),
CLK(NULL, "aes1_ick", &aes1_ick, CK_343X),
CLK("omap_rng", "ick", &rng_ick, CK_343X),
CLK(NULL, "sha11_ick", &sha11_ick, CK_343X),
CLK(NULL, "des1_ick", &des1_ick, CK_343X),
- CLK("omapfb", "dss1_fck", &dss1_alwon_fck, CK_343X),
+ CLK("omapfb", "dss1_fck", &dss1_alwon_fck_3430es1, CK_3430ES1),
+ CLK("omapfb", "dss1_fck", &dss1_alwon_fck_3430es2, CK_3430ES2),
CLK("omapfb", "tv_fck", &dss_tv_fck, CK_343X),
CLK("omapfb", "video_fck", &dss_96m_fck, CK_343X),
CLK("omapfb", "dss2_fck", &dss2_alwon_fck, CK_343X),
- CLK("omapfb", "ick", &dss_ick, CK_343X),
+ CLK("omapfb", "ick", &dss_ick_3430es1, CK_3430ES1),
+ CLK("omapfb", "ick", &dss_ick_3430es2, CK_3430ES2),
CLK(NULL, "cam_mclk", &cam_mclk, CK_343X),
CLK(NULL, "cam_ick", &cam_ick, CK_343X),
CLK(NULL, "csi2_96m_fck", &csi2_96m_fck, CK_343X),
*/
#define SDRC_MPURATE_LOOPS 96
+/**
+ * omap3430es2_clk_ssi_find_idlest - return CM_IDLEST info for SSI
+ * @clk: struct clk * being enabled
+ * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into
+ * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into
+ *
+ * The OMAP3430ES2 SSI target CM_IDLEST bit is at a different shift
+ * from the CM_{I,F}CLKEN bit. Pass back the correct info via
+ * @idlest_reg and @idlest_bit. No return value.
+ */
+static void omap3430es2_clk_ssi_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit)
+{
+ u32 r;
+
+ r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20);
+ *idlest_reg = (__force void __iomem *)r;
+ *idlest_bit = OMAP3430ES2_ST_SSI_IDLE_SHIFT;
+}
+
+/**
+ * omap3430es2_clk_dss_usbhost_find_idlest - CM_IDLEST info for DSS, USBHOST
+ * @clk: struct clk * being enabled
+ * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into
+ * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into
+ *
+ * Some OMAP modules on OMAP3 ES2+ chips have both initiator and
+ * target IDLEST bits. For our purposes, we are concerned with the
+ * target IDLEST bits, which exist at a different bit position than
+ * the *CLKEN bit position for these modules (DSS and USBHOST) (The
+ * default find_idlest code assumes that they are at the same
+ * position.) No return value.
+ */
+static void omap3430es2_clk_dss_usbhost_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit)
+{
+ u32 r;
+
+ r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20);
+ *idlest_reg = (__force void __iomem *)r;
+ /* USBHOST_IDLE has same shift */
+ *idlest_bit = OMAP3430ES2_ST_DSS_IDLE_SHIFT;
+}
+
+/**
+ * omap3430es2_clk_hsotgusb_find_idlest - return CM_IDLEST info for HSOTGUSB
+ * @clk: struct clk * being enabled
+ * @idlest_reg: void __iomem ** to store CM_IDLEST reg address into
+ * @idlest_bit: pointer to a u8 to store the CM_IDLEST bit shift into
+ *
+ * The OMAP3430ES2 HSOTGUSB target CM_IDLEST bit is at a different
+ * shift from the CM_{I,F}CLKEN bit. Pass back the correct info via
+ * @idlest_reg and @idlest_bit. No return value.
+ */
+static void omap3430es2_clk_hsotgusb_find_idlest(struct clk *clk,
+ void __iomem **idlest_reg,
+ u8 *idlest_bit)
+{
+ u32 r;
+
+ r = (((__force u32)clk->enable_reg & ~0xf0) | 0x20);
+ *idlest_reg = (__force void __iomem *)r;
+ *idlest_bit = OMAP3430ES2_ST_HSOTGUSB_IDLE_SHIFT;
+}
+
/**
* omap3_dpll_recalc - recalculate DPLL rate
* @clk: DPLL struct clk
u32 unlock_dll = 0;
u32 c;
unsigned long validrate, sdrcrate, mpurate;
- struct omap_sdrc_params *sp;
+ struct omap_sdrc_params *sdrc_cs0;
+ struct omap_sdrc_params *sdrc_cs1;
+ int ret;
if (!clk || !rate)
return -EINVAL;
else
sdrcrate >>= ((clk->rate / rate) >> 1);
- sp = omap2_sdrc_get_params(sdrcrate);
- if (!sp)
+ ret = omap2_sdrc_get_params(sdrcrate, &sdrc_cs0, &sdrc_cs1);
+ if (ret)
return -EINVAL;
if (sdrcrate < MIN_SDRC_DLL_LOCK_FREQ) {
pr_debug("clock: changing CORE DPLL rate from %lu to %lu\n", clk->rate,
validrate);
- pr_debug("clock: SDRC timing params used: %08x %08x %08x\n",
- sp->rfr_ctrl, sp->actim_ctrla, sp->actim_ctrlb);
-
- omap3_configure_core_dpll(sp->rfr_ctrl, sp->actim_ctrla,
- sp->actim_ctrlb, new_div, unlock_dll, c,
- sp->mr, rate > clk->rate);
+ pr_debug("clock: SDRC CS0 timing params used:"
+ " RFR %08x CTRLA %08x CTRLB %08x MR %08x\n",
+ sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla,
+ sdrc_cs0->actim_ctrlb, sdrc_cs0->mr);
+ if (sdrc_cs1)
+ pr_debug("clock: SDRC CS1 timing params used: "
+ " RFR %08x CTRLA %08x CTRLB %08x MR %08x\n",
+ sdrc_cs1->rfr_ctrl, sdrc_cs1->actim_ctrla,
+ sdrc_cs1->actim_ctrlb, sdrc_cs1->mr);
+
+ if (sdrc_cs1)
+ omap3_configure_core_dpll(
+ new_div, unlock_dll, c, rate > clk->rate,
+ sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla,
+ sdrc_cs0->actim_ctrlb, sdrc_cs0->mr,
+ sdrc_cs1->rfr_ctrl, sdrc_cs1->actim_ctrla,
+ sdrc_cs1->actim_ctrlb, sdrc_cs1->mr);
+ else
+ omap3_configure_core_dpll(
+ new_div, unlock_dll, c, rate > clk->rate,
+ sdrc_cs0->rfr_ctrl, sdrc_cs0->actim_ctrla,
+ sdrc_cs0->actim_ctrlb, sdrc_cs0->mr,
+ 0, 0, 0, 0);
return 0;
}
{ .parent = NULL }
};
-static struct clk ssi_ssr_fck = {
+static struct clk ssi_ssr_fck_3430es1 = {
.name = "ssi_ssr_fck",
.ops = &clkops_omap2_dflt,
.init = &omap2_init_clksel_parent,
.recalc = &omap2_clksel_recalc,
};
-static struct clk ssi_sst_fck = {
+static struct clk ssi_ssr_fck_3430es2 = {
+ .name = "ssi_ssr_fck",
+ .ops = &clkops_omap3430es2_ssi_wait,
+ .init = &omap2_init_clksel_parent,
+ .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_FCLKEN1),
+ .enable_bit = OMAP3430_EN_SSI_SHIFT,
+ .clksel_reg = OMAP_CM_REGADDR(CORE_MOD, CM_CLKSEL),
+ .clksel_mask = OMAP3430_CLKSEL_SSI_MASK,
+ .clksel = ssi_ssr_clksel,
+ .clkdm_name = "core_l4_clkdm",
+ .recalc = &omap2_clksel_recalc,
+};
+
+static struct clk ssi_sst_fck_3430es1 = {
.name = "ssi_sst_fck",
.ops = &clkops_null,
- .parent = &ssi_ssr_fck,
+ .parent = &ssi_ssr_fck_3430es1,
+ .fixed_div = 2,
+ .recalc = &omap2_fixed_divisor_recalc,
+};
+
+static struct clk ssi_sst_fck_3430es2 = {
+ .name = "ssi_sst_fck",
+ .ops = &clkops_null,
+ .parent = &ssi_ssr_fck_3430es2,
.fixed_div = 2,
.recalc = &omap2_fixed_divisor_recalc,
};
.recalc = &followparent_recalc,
};
-static struct clk hsotgusb_ick = {
+static struct clk hsotgusb_ick_3430es1 = {
.name = "hsotgusb_ick",
- .ops = &clkops_omap2_dflt_wait,
+ .ops = &clkops_omap2_dflt,
+ .parent = &core_l3_ick,
+ .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_ICLKEN1),
+ .enable_bit = OMAP3430_EN_HSOTGUSB_SHIFT,
+ .clkdm_name = "core_l3_clkdm",
+ .recalc = &followparent_recalc,
+};
+
+static struct clk hsotgusb_ick_3430es2 = {
+ .name = "hsotgusb_ick",
+ .ops = &clkops_omap3430es2_hsotgusb_wait,
.parent = &core_l3_ick,
.enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_ICLKEN1),
.enable_bit = OMAP3430_EN_HSOTGUSB_SHIFT,
.recalc = &followparent_recalc,
};
-static struct clk ssi_ick = {
+static struct clk ssi_ick_3430es1 = {
.name = "ssi_ick",
.ops = &clkops_omap2_dflt,
.parent = &ssi_l4_ick,
.recalc = &followparent_recalc,
};
+static struct clk ssi_ick_3430es2 = {
+ .name = "ssi_ick",
+ .ops = &clkops_omap3430es2_ssi_wait,
+ .parent = &ssi_l4_ick,
+ .enable_reg = OMAP_CM_REGADDR(CORE_MOD, CM_ICLKEN1),
+ .enable_bit = OMAP3430_EN_SSI_SHIFT,
+ .clkdm_name = "core_l4_clkdm",
+ .recalc = &followparent_recalc,
+};
+
/* REVISIT: Technically the TRM claims that this is CORE_CLK based,
* but l4_ick makes more sense to me */
};
/* DSS */
-static struct clk dss1_alwon_fck = {
+static struct clk dss1_alwon_fck_3430es1 = {
.name = "dss1_alwon_fck",
.ops = &clkops_omap2_dflt,
.parent = &dpll4_m4x2_ck,
.recalc = &followparent_recalc,
};
+static struct clk dss1_alwon_fck_3430es2 = {
+ .name = "dss1_alwon_fck",
+ .ops = &clkops_omap3430es2_dss_usbhost_wait,
+ .parent = &dpll4_m4x2_ck,
+ .enable_reg = OMAP_CM_REGADDR(OMAP3430_DSS_MOD, CM_FCLKEN),
+ .enable_bit = OMAP3430_EN_DSS1_SHIFT,
+ .clkdm_name = "dss_clkdm",
+ .recalc = &followparent_recalc,
+};
+
static struct clk dss_tv_fck = {
.name = "dss_tv_fck",
.ops = &clkops_omap2_dflt,
.recalc = &followparent_recalc,
};
-static struct clk dss_ick = {
+static struct clk dss_ick_3430es1 = {
/* Handles both L3 and L4 clocks */
.name = "dss_ick",
.ops = &clkops_omap2_dflt,
.recalc = &followparent_recalc,
};
+static struct clk dss_ick_3430es2 = {
+ /* Handles both L3 and L4 clocks */
+ .name = "dss_ick",
+ .ops = &clkops_omap3430es2_dss_usbhost_wait,
+ .parent = &l4_ick,
+ .init = &omap2_init_clk_clkdm,
+ .enable_reg = OMAP_CM_REGADDR(OMAP3430_DSS_MOD, CM_ICLKEN),
+ .enable_bit = OMAP3430_CM_ICLKEN_DSS_EN_DSS_SHIFT,
+ .clkdm_name = "dss_clkdm",
+ .recalc = &followparent_recalc,
+};
+
/* CAM */
static struct clk cam_mclk = {
static struct clk usbhost_120m_fck = {
.name = "usbhost_120m_fck",
- .ops = &clkops_omap2_dflt_wait,
+ .ops = &clkops_omap2_dflt,
.parent = &dpll5_m2_ck,
.init = &omap2_init_clk_clkdm,
.enable_reg = OMAP_CM_REGADDR(OMAP3430ES2_USBHOST_MOD, CM_FCLKEN),
static struct clk usbhost_48m_fck = {
.name = "usbhost_48m_fck",
- .ops = &clkops_omap2_dflt_wait,
+ .ops = &clkops_omap3430es2_dss_usbhost_wait,
.parent = &omap_48m_fck,
.init = &omap2_init_clk_clkdm,
.enable_reg = OMAP_CM_REGADDR(OMAP3430ES2_USBHOST_MOD, CM_FCLKEN),
static struct clk usbhost_ick = {
/* Handles both L3 and L4 clocks */
.name = "usbhost_ick",
- .ops = &clkops_omap2_dflt_wait,
+ .ops = &clkops_omap3430es2_dss_usbhost_wait,
.parent = &l4_ick,
.init = &omap2_init_clk_clkdm,
.enable_reg = OMAP_CM_REGADDR(OMAP3430ES2_USBHOST_MOD, CM_ICLKEN),
* These registers appear once per CM module.
*/
-#define OMAP3430_CM_REVISION OMAP_CM_REGADDR(OCP_MOD, 0x0000)
-#define OMAP3430_CM_SYSCONFIG OMAP_CM_REGADDR(OCP_MOD, 0x0010)
-#define OMAP3430_CM_POLCTRL OMAP_CM_REGADDR(OCP_MOD, 0x009c)
+#define OMAP3430_CM_REVISION OMAP34XX_CM_REGADDR(OCP_MOD, 0x0000)
+#define OMAP3430_CM_SYSCONFIG OMAP34XX_CM_REGADDR(OCP_MOD, 0x0010)
+#define OMAP3430_CM_POLCTRL OMAP34XX_CM_REGADDR(OCP_MOD, 0x009c)
#define OMAP3_CM_CLKOUT_CTRL_OFFSET 0x0070
#define OMAP3430_CM_CLKOUT_CTRL OMAP_CM_REGADDR(OMAP3430_CCR_MOD, 0x0070)
return v;
}
-void __init omap2_init_common_hw(struct omap_sdrc_params *sp)
+void __init omap2_init_common_hw(struct omap_sdrc_params *sdrc_cs0,
+ struct omap_sdrc_params *sdrc_cs1)
{
omap2_mux_init();
#ifndef CONFIG_ARCH_OMAP4 /* FIXME: Remove this once the clkdev is ready */
pwrdm_init(powerdomains_omap);
clkdm_init(clockdomains_omap, clkdm_pwrdm_autodeps);
omap2_clk_init();
- omap2_sdrc_init(sp);
+ omap2_sdrc_init(sdrc_cs0, sdrc_cs1);
_omap2_init_reprogram_sdrc();
#endif
gpmc_init();
if (i != 0)
break;
ret = PTR_ERR(reg);
+ hsmmc[i].vcc = NULL;
goto err;
}
hsmmc[i].vcc = reg;
static void twl_mmc_cleanup(struct device *dev)
{
struct omap_mmc_platform_data *mmc = dev->platform_data;
+ int i;
gpio_free(mmc->slots[0].switch_pin);
+ for (i = 0; i < ARRAY_SIZE(hsmmc); i++) {
+ regulator_put(hsmmc[i].vcc);
+ regulator_put(hsmmc[i].vcc_aux);
+ }
}
#ifdef CONFIG_PM
OMAP34XX_MUX_MODE4 | OMAP34XX_PIN_OUTPUT)
MUX_CFG_34XX("J25_34XX_GPIO170", 0x1c6,
OMAP34XX_MUX_MODE4 | OMAP34XX_PIN_INPUT)
+
+/* OMAP3 SDRC CKE signals to SDR/DDR ram chips */
+MUX_CFG_34XX("H16_34XX_SDRC_CKE0", 0x262,
+ OMAP34XX_MUX_MODE0 | OMAP34XX_PIN_OUTPUT)
+MUX_CFG_34XX("H17_34XX_SDRC_CKE1", 0x264,
+ OMAP34XX_MUX_MODE0 | OMAP34XX_PIN_OUTPUT)
};
#define OMAP34XX_PINS_SZ ARRAY_SIZE(omap34xx_pins)
#ifndef __ARCH_ARM_MACH_OMAP2_PM_H
#define __ARCH_ARM_MACH_OMAP2_PM_H
-extern int omap2_pm_init(void);
-extern int omap3_pm_init(void);
-
#ifdef CONFIG_PM_DEBUG
extern void omap2_pm_dump(int mode, int resume, unsigned int us);
extern int omap2_pm_debug;
WKUP_MOD, PM_WKEN);
}
-int __init omap2_pm_init(void)
+static int __init omap2_pm_init(void)
{
u32 l;
struct power_state {
struct powerdomain *pwrdm;
u32 next_state;
+#ifdef CONFIG_SUSPEND
u32 saved_state;
+#endif
struct list_head node;
};
local_irq_enable();
}
+#ifdef CONFIG_SUSPEND
+static suspend_state_t suspend_state;
+
static int omap3_pm_prepare(void)
{
disable_hlt();
restore:
/* Restore next_pwrsts */
list_for_each_entry(pwrst, &pwrst_list, node) {
- set_pwrdm_state(pwrst->pwrdm, pwrst->saved_state);
state = pwrdm_read_prev_pwrst(pwrst->pwrdm);
if (state > pwrst->next_state) {
printk(KERN_INFO "Powerdomain (%s) didn't enter "
pwrst->pwrdm->name, pwrst->next_state);
ret = -1;
}
+ set_pwrdm_state(pwrst->pwrdm, pwrst->saved_state);
}
if (ret)
printk(KERN_ERR "Could not enter target state in pm_suspend\n");
return ret;
}
-static int omap3_pm_enter(suspend_state_t state)
+static int omap3_pm_enter(suspend_state_t unused)
{
int ret = 0;
- switch (state) {
+ switch (suspend_state) {
case PM_SUSPEND_STANDBY:
case PM_SUSPEND_MEM:
ret = omap3_pm_suspend();
enable_hlt();
}
+/* Hooks to enable / disable UART interrupts during suspend */
+static int omap3_pm_begin(suspend_state_t state)
+{
+ suspend_state = state;
+ omap_uart_enable_irqs(0);
+ return 0;
+}
+
+static void omap3_pm_end(void)
+{
+ suspend_state = PM_SUSPEND_ON;
+ omap_uart_enable_irqs(1);
+ return;
+}
+
static struct platform_suspend_ops omap_pm_ops = {
+ .begin = omap3_pm_begin,
+ .end = omap3_pm_end,
.prepare = omap3_pm_prepare,
.enter = omap3_pm_enter,
.finish = omap3_pm_finish,
.valid = suspend_valid_only_mem,
};
+#endif /* CONFIG_SUSPEND */
/**
/* Clear any pending PRCM interrupts */
prm_write_mod_reg(0, OCP_MOD, OMAP3_PRM_IRQSTATUS_MPU_OFFSET);
+ /* Don't attach IVA interrupts */
+ prm_write_mod_reg(0, WKUP_MOD, OMAP3430_PM_IVAGRPSEL);
+ prm_write_mod_reg(0, CORE_MOD, OMAP3430_PM_IVAGRPSEL1);
+ prm_write_mod_reg(0, CORE_MOD, OMAP3430ES2_PM_IVAGRPSEL3);
+ prm_write_mod_reg(0, OMAP3430_PER_MOD, OMAP3430_PM_IVAGRPSEL);
+
+ /* Clear any pending 'reset' flags */
+ prm_write_mod_reg(0xffffffff, MPU_MOD, RM_RSTST);
+ prm_write_mod_reg(0xffffffff, CORE_MOD, RM_RSTST);
+ prm_write_mod_reg(0xffffffff, OMAP3430_PER_MOD, RM_RSTST);
+ prm_write_mod_reg(0xffffffff, OMAP3430_EMU_MOD, RM_RSTST);
+ prm_write_mod_reg(0xffffffff, OMAP3430_NEON_MOD, RM_RSTST);
+ prm_write_mod_reg(0xffffffff, OMAP3430_DSS_MOD, RM_RSTST);
+ prm_write_mod_reg(0xffffffff, OMAP3430ES2_USBHOST_MOD, RM_RSTST);
+
+ /* Clear any pending PRCM interrupts */
+ prm_write_mod_reg(0, OCP_MOD, OMAP3_PRM_IRQSTATUS_MPU_OFFSET);
+
omap3_iva_idle();
omap3_d2d_idle();
}
return 0;
}
-int __init omap3_pm_init(void)
+static int __init omap3_pm_init(void)
{
struct power_state *pwrst, *tmp;
int ret;
_omap_sram_idle = omap_sram_push(omap34xx_cpu_suspend,
omap34xx_cpu_suspend_sz);
+#ifdef CONFIG_SUSPEND
suspend_set_ops(&omap_pm_ops);
+#endif /* CONFIG_SUSPEND */
pm_idle = omap3_pm_idle;
#include <linux/init.h>
#include <linux/clk.h>
#include <linux/io.h>
+#include <linux/delay.h>
#include <mach/common.h>
#include <mach/prcm.h>
static void __iomem *prm_base;
static void __iomem *cm_base;
+#define MAX_MODULE_ENABLE_WAIT 100000
+
u32 omap_prcm_get_reset_sources(void)
{
/* XXX This presumably needs modification for 34XX */
}
EXPORT_SYMBOL(cm_rmw_mod_reg_bits);
+/**
+ * omap2_cm_wait_idlest - wait for IDLEST bit to indicate module readiness
+ * @reg: physical address of module IDLEST register
+ * @mask: value to mask against to determine if the module is active
+ * @name: name of the clock (for printk)
+ *
+ * Returns 1 if the module indicated readiness in time, or 0 if it
+ * failed to enable in roughly MAX_MODULE_ENABLE_WAIT microseconds.
+ */
+int omap2_cm_wait_idlest(void __iomem *reg, u32 mask, const char *name)
+{
+ int i = 0;
+ int ena = 0;
+
+ /*
+ * 24xx uses 0 to indicate not ready, and 1 to indicate ready.
+ * 34xx reverses this, just to keep us on our toes
+ */
+ if (cpu_is_omap24xx())
+ ena = mask;
+ else if (cpu_is_omap34xx())
+ ena = 0;
+ else
+ BUG();
+
+ /* Wait for lock */
+ while (((__raw_readl(reg) & mask) != ena) &&
+ (i++ < MAX_MODULE_ENABLE_WAIT))
+ udelay(1);
+
+ if (i < MAX_MODULE_ENABLE_WAIT)
+ pr_debug("cm: Module associated with clock %s ready after %d "
+ "loops\n", name, i);
+ else
+ pr_err("cm: Module associated with clock %s didn't enable in "
+ "%d tries\n", name, MAX_MODULE_ENABLE_WAIT);
+
+ return (i < MAX_MODULE_ENABLE_WAIT) ? 1 : 0;
+}
+
void __init omap2_set_globals_prcm(struct omap_globals *omap2_globals)
{
prm_base = omap2_globals->prm;
#include <mach/sdrc.h>
#include "sdrc.h"
-static struct omap_sdrc_params *sdrc_init_params;
+static struct omap_sdrc_params *sdrc_init_params_cs0, *sdrc_init_params_cs1;
void __iomem *omap2_sdrc_base;
void __iomem *omap2_sms_base;
/**
* omap2_sdrc_get_params - return SDRC register values for a given clock rate
* @r: SDRC clock rate (in Hz)
+ * @sdrc_cs0: struct omap_sdrc_params ** to return the chip select 0 RAM timings in
+ * @sdrc_cs1: struct omap_sdrc_params ** to return the chip select 1 RAM timings in
*
* Return pre-calculated values for the SDRC_ACTIM_CTRLA,
- * SDRC_ACTIM_CTRLB, SDRC_RFR_CTRL, and SDRC_MR registers, for a given
- * SDRC clock rate 'r'. These parameters control various timing
- * delays in the SDRAM controller that are expressed in terms of the
- * number of SDRC clock cycles to wait; hence the clock rate
- * dependency. Note that sdrc_init_params must be sorted rate
- * descending. Also assumes that both chip-selects use the same
- * timing parameters. Returns a struct omap_sdrc_params * upon
- * success, or NULL upon failure.
+ * SDRC_ACTIM_CTRLB, SDRC_RFR_CTRL and SDRC_MR registers in the sdrc_cs[01]
+ * structs, for a given SDRC clock rate 'r'.
+ * These parameters control various timing delays in the SDRAM controller
+ * that are expressed in terms of the number of SDRC clock cycles to
+ * wait; hence the clock rate dependency.
+ *
+ * Supports separate timing parameters for each of the two chip selects.
+ *
+ * Note 1: the sdrc_init_params_cs[01] arrays must be sorted rate descending.
+ * Note 2: if sdrc_init_params_cs1 is not NULL, it must be of the same size
+ * as sdrc_init_params_cs0.
+ *
+ * Fills in the struct omap_sdrc_params * for each chip select.
+ * Returns 0 upon success or -1 upon failure.
*/
-struct omap_sdrc_params *omap2_sdrc_get_params(unsigned long r)
+int omap2_sdrc_get_params(unsigned long r,
+ struct omap_sdrc_params **sdrc_cs0,
+ struct omap_sdrc_params **sdrc_cs1)
{
- struct omap_sdrc_params *sp;
+ struct omap_sdrc_params *sp0, *sp1;
- if (!sdrc_init_params)
- return NULL;
+ if (!sdrc_init_params_cs0)
+ return -1;
- sp = sdrc_init_params;
+ sp0 = sdrc_init_params_cs0;
+ sp1 = sdrc_init_params_cs1;
- while (sp->rate && sp->rate != r)
- sp++;
+ while (sp0->rate && sp0->rate != r) {
+ sp0++;
+ if (sdrc_init_params_cs1)
+ sp1++;
+ }
- if (!sp->rate)
- return NULL;
+ if (!sp0->rate)
+ return -1;
- return sp;
+ *sdrc_cs0 = sp0;
+ *sdrc_cs1 = sp1;
+ return 0;
}
/**
* omap2_sdrc_init - initialize SMS, SDRC devices on boot
- * @sp: pointer to a null-terminated list of struct omap_sdrc_params
+ * @sdrc_cs0: pointer to a null-terminated list of struct omap_sdrc_params for CS0
+ * @sdrc_cs1: pointer to a null-terminated list of struct omap_sdrc_params for CS1, or NULL
*
* Turn on smart idle modes for SDRAM scheduler and controller.
* Program a known-good configuration for the SDRC to deal with buggy
* bootloaders.
*/
-void __init omap2_sdrc_init(struct omap_sdrc_params *sp)
+void __init omap2_sdrc_init(struct omap_sdrc_params *sdrc_cs0,
+ struct omap_sdrc_params *sdrc_cs1)
{
u32 l;
l |= (0x2 << 3);
sdrc_write_reg(l, SDRC_SYSCONFIG);
- sdrc_init_params = sp;
+ sdrc_init_params_cs0 = sdrc_cs0;
+ sdrc_init_params_cs1 = sdrc_cs1;
/* XXX Enable SRFRONIDLEREQ here also? */
+ /*
+ * PWDENA should not be set due to 34xx erratum 1.150 - PWDENA
+ * can cause random memory corruption
+ */
l = (1 << SDRC_POWER_EXTCLKDIS_SHIFT) |
- (1 << SDRC_POWER_PWDENA_SHIFT) |
(1 << SDRC_POWER_PAGEPOLICY_SHIFT);
sdrc_write_reg(l, SDRC_POWER);
}
struct plat_serial8250_port *p;
struct list_head node;
+ struct platform_device pdev;
#if defined(CONFIG_ARCH_OMAP3) && defined(CONFIG_PM)
int context_valid;
#endif
};
-static struct omap_uart_state omap_uart[OMAP_MAX_NR_PORTS];
static LIST_HEAD(uart_list);
-static struct plat_serial8250_port serial_platform_data[] = {
+static struct plat_serial8250_port serial_platform_data0[] = {
{
.membase = IO_ADDRESS(OMAP_UART1_BASE),
.mapbase = OMAP_UART1_BASE,
.regshift = 2,
.uartclk = OMAP24XX_BASE_BAUD * 16,
}, {
+ .flags = 0
+ }
+};
+
+static struct plat_serial8250_port serial_platform_data1[] = {
+ {
.membase = IO_ADDRESS(OMAP_UART2_BASE),
.mapbase = OMAP_UART2_BASE,
.irq = 73,
.regshift = 2,
.uartclk = OMAP24XX_BASE_BAUD * 16,
}, {
+ .flags = 0
+ }
+};
+
+static struct plat_serial8250_port serial_platform_data2[] = {
+ {
.membase = IO_ADDRESS(OMAP_UART3_BASE),
.mapbase = OMAP_UART3_BASE,
.irq = 74,
clk_disable(uart->fck);
}
+static void omap_uart_enable_wakeup(struct omap_uart_state *uart)
+{
+ /* Set wake-enable bit */
+ if (uart->wk_en && uart->wk_mask) {
+ u32 v = __raw_readl(uart->wk_en);
+ v |= uart->wk_mask;
+ __raw_writel(v, uart->wk_en);
+ }
+
+ /* Ensure IOPAD wake-enables are set */
+ if (cpu_is_omap34xx() && uart->padconf) {
+ u16 v = omap_ctrl_readw(uart->padconf);
+ v |= OMAP3_PADCONF_WAKEUPENABLE0;
+ omap_ctrl_writew(v, uart->padconf);
+ }
+}
+
+static void omap_uart_disable_wakeup(struct omap_uart_state *uart)
+{
+ /* Clear wake-enable bit */
+ if (uart->wk_en && uart->wk_mask) {
+ u32 v = __raw_readl(uart->wk_en);
+ v &= ~uart->wk_mask;
+ __raw_writel(v, uart->wk_en);
+ }
+
+ /* Ensure IOPAD wake-enables are cleared */
+ if (cpu_is_omap34xx() && uart->padconf) {
+ u16 v = omap_ctrl_readw(uart->padconf);
+ v &= ~OMAP3_PADCONF_WAKEUPENABLE0;
+ omap_ctrl_writew(v, uart->padconf);
+ }
+}
+
static void omap_uart_smart_idle_enable(struct omap_uart_state *uart,
int enable)
{
static void omap_uart_allow_sleep(struct omap_uart_state *uart)
{
+ if (device_may_wakeup(&uart->pdev.dev))
+ omap_uart_enable_wakeup(uart);
+ else
+ omap_uart_disable_wakeup(uart);
+
if (!uart->clocked)
return;
/* Check for normal UART wakeup */
if (__raw_readl(uart->wk_st) & uart->wk_mask)
omap_uart_block_sleep(uart);
-
return;
}
}
return IRQ_NONE;
}
-static u32 sleep_timeout = DEFAULT_TIMEOUT;
-
static void omap_uart_idle_init(struct omap_uart_state *uart)
{
- u32 v;
struct plat_serial8250_port *p = uart->p;
int ret;
uart->can_sleep = 0;
- uart->timeout = sleep_timeout;
+ uart->timeout = DEFAULT_TIMEOUT;
setup_timer(&uart->timer, omap_uart_idle_timer,
(unsigned long) uart);
mod_timer(&uart->timer, jiffies + uart->timeout);
uart->padconf = 0;
}
- /* Set wake-enable bit */
- if (uart->wk_en && uart->wk_mask) {
- v = __raw_readl(uart->wk_en);
- v |= uart->wk_mask;
- __raw_writel(v, uart->wk_en);
- }
-
- /* Ensure IOPAD wake-enables are set */
- if (cpu_is_omap34xx() && uart->padconf) {
- u16 v;
-
- v = omap_ctrl_readw(uart->padconf);
- v |= OMAP3_PADCONF_WAKEUPENABLE0;
- omap_ctrl_writew(v, uart->padconf);
- }
-
p->flags |= UPF_SHARE_IRQ;
ret = request_irq(p->irq, omap_uart_interrupt, IRQF_SHARED,
"serial idle", (void *)uart);
WARN_ON(ret);
}
-static ssize_t sleep_timeout_show(struct kobject *kobj,
- struct kobj_attribute *attr,
+void omap_uart_enable_irqs(int enable)
+{
+ int ret;
+ struct omap_uart_state *uart;
+
+ list_for_each_entry(uart, &uart_list, node) {
+ if (enable)
+ ret = request_irq(uart->p->irq, omap_uart_interrupt,
+ IRQF_SHARED, "serial idle", (void *)uart);
+ else
+ free_irq(uart->p->irq, (void *)uart);
+ }
+}
+
+static ssize_t sleep_timeout_show(struct device *dev,
+ struct device_attribute *attr,
char *buf)
{
- return sprintf(buf, "%u\n", sleep_timeout / HZ);
+ struct platform_device *pdev = container_of(dev,
+ struct platform_device, dev);
+ struct omap_uart_state *uart = container_of(pdev,
+ struct omap_uart_state, pdev);
+
+ return sprintf(buf, "%u\n", uart->timeout / HZ);
}
-static ssize_t sleep_timeout_store(struct kobject *kobj,
- struct kobj_attribute *attr,
+static ssize_t sleep_timeout_store(struct device *dev,
+ struct device_attribute *attr,
const char *buf, size_t n)
{
- struct omap_uart_state *uart;
+ struct platform_device *pdev = container_of(dev,
+ struct platform_device, dev);
+ struct omap_uart_state *uart = container_of(pdev,
+ struct omap_uart_state, pdev);
unsigned int value;
if (sscanf(buf, "%u", &value) != 1) {
printk(KERN_ERR "sleep_timeout_store: Invalid value\n");
return -EINVAL;
}
- sleep_timeout = value * HZ;
- list_for_each_entry(uart, &uart_list, node) {
- uart->timeout = sleep_timeout;
- if (uart->timeout)
- mod_timer(&uart->timer, jiffies + uart->timeout);
- else
- /* A zero value means disable timeout feature */
- omap_uart_block_sleep(uart);
- }
+
+ uart->timeout = value * HZ;
+ if (uart->timeout)
+ mod_timer(&uart->timer, jiffies + uart->timeout);
+ else
+ /* A zero value means disable timeout feature */
+ omap_uart_block_sleep(uart);
+
return n;
}
-static struct kobj_attribute sleep_timeout_attr =
- __ATTR(sleep_timeout, 0644, sleep_timeout_show, sleep_timeout_store);
-
+DEVICE_ATTR(sleep_timeout, 0644, sleep_timeout_show, sleep_timeout_store);
+#define DEV_CREATE_FILE(dev, attr) WARN_ON(device_create_file(dev, attr))
#else
static inline void omap_uart_idle_init(struct omap_uart_state *uart) {}
+#define DEV_CREATE_FILE(dev, attr)
#endif /* CONFIG_PM */
-static struct platform_device serial_device = {
- .name = "serial8250",
- .id = PLAT8250_DEV_PLATFORM,
- .dev = {
- .platform_data = serial_platform_data,
+static struct omap_uart_state omap_uart[OMAP_MAX_NR_PORTS] = {
+ {
+ .pdev = {
+ .name = "serial8250",
+ .id = PLAT8250_DEV_PLATFORM,
+ .dev = {
+ .platform_data = serial_platform_data0,
+ },
+ },
+ }, {
+ .pdev = {
+ .name = "serial8250",
+ .id = PLAT8250_DEV_PLATFORM1,
+ .dev = {
+ .platform_data = serial_platform_data1,
+ },
+ },
+ }, {
+ .pdev = {
+ .name = "serial8250",
+ .id = PLAT8250_DEV_PLATFORM2,
+ .dev = {
+ .platform_data = serial_platform_data2,
+ },
+ },
},
};
void __init omap_serial_init(void)
{
- int i, err;
+ int i;
const struct omap_uart_config *info;
char name[16];
if (info == NULL)
return;
- if (cpu_is_omap44xx()) {
- for (i = 0; i < OMAP_MAX_NR_PORTS; i++)
- serial_platform_data[i].irq += 32;
- }
for (i = 0; i < OMAP_MAX_NR_PORTS; i++) {
- struct plat_serial8250_port *p = serial_platform_data + i;
struct omap_uart_state *uart = &omap_uart[i];
+ struct platform_device *pdev = &uart->pdev;
+ struct device *dev = &pdev->dev;
+ struct plat_serial8250_port *p = dev->platform_data;
if (!(info->enabled_uarts & (1 << i))) {
p->membase = NULL;
uart->num = i;
p->private_data = uart;
uart->p = p;
- list_add(&uart->node, &uart_list);
+ list_add_tail(&uart->node, &uart_list);
+
+ if (cpu_is_omap44xx())
+ p->irq += 32;
omap_uart_enable_clocks(uart);
omap_uart_reset(uart);
omap_uart_idle_init(uart);
- }
-
- err = platform_device_register(&serial_device);
-
-#ifdef CONFIG_PM
- if (!err)
- err = sysfs_create_file(&serial_device.dev.kobj,
- &sleep_timeout_attr.attr);
-#endif
+ if (WARN_ON(platform_device_register(pdev)))
+ continue;
+ if ((cpu_is_omap34xx() && uart->padconf) ||
+ (uart->wk_en && uart->wk_mask)) {
+ device_init_wakeup(dev, true);
+ DEV_CREATE_FILE(dev, &dev_attr_sleep_timeout);
+ }
+ }
}
-
.text
-/* r4 parameters */
+/* r1 parameters */
#define SDRC_NO_UNLOCK_DLL 0x0
#define SDRC_UNLOCK_DLL 0x1
/* SDRC_POWER bit settings */
#define SRFRONIDLEREQ_MASK 0x40
-#define PWDENA_MASK 0x4
/* CM_IDLEST1_CORE bit settings */
#define ST_SDRC_MASK 0x2
/*
* omap3_sram_configure_core_dpll - change DPLL3 M2 divider
- * r0 = new SDRC_RFR_CTRL register contents
- * r1 = new SDRC_ACTIM_CTRLA register contents
- * r2 = new SDRC_ACTIM_CTRLB register contents
- * r3 = new M2 divider setting (only 1 and 2 supported right now)
- * r4 = unlock SDRC DLL? (1 = yes, 0 = no). Only unlock DLL for
+ *
+ * Params passed in registers:
+ * r0 = new M2 divider setting (only 1 and 2 supported right now)
+ * r1 = unlock SDRC DLL? (1 = yes, 0 = no). Only unlock DLL for
* SDRC rates < 83MHz
- * r5 = number of MPU cycles to wait for SDRC to stabilize after
+ * r2 = number of MPU cycles to wait for SDRC to stabilize after
* reprogramming the SDRC when switching to a slower MPU speed
- * r6 = new SDRC_MR_0 register value
- * r7 = increasing SDRC rate? (1 = yes, 0 = no)
+ * r3 = increasing SDRC rate? (1 = yes, 0 = no)
+ *
+ * Params passed via the stack. The needed params will be copied into SRAM
+ * before use by the code in SRAM (SDRAM is not accessible during SDRC
+ * reconfiguration):
+ * new SDRC_RFR_CTRL_0 register contents
+ * new SDRC_ACTIM_CTRL_A_0 register contents
+ * new SDRC_ACTIM_CTRL_B_0 register contents
+ * new SDRC_MR_0 register value
+ * new SDRC_RFR_CTRL_1 register contents
+ * new SDRC_ACTIM_CTRL_A_1 register contents
+ * new SDRC_ACTIM_CTRL_B_1 register contents
+ * new SDRC_MR_1 register value
*
+ * If the param SDRC_RFR_CTRL_1 is 0, the parameters
+ * are not programmed into the SDRC CS1 registers
*/
ENTRY(omap3_sram_configure_core_dpll)
stmfd sp!, {r1-r12, lr} @ store regs to stack
- ldr r4, [sp, #52] @ pull extra args off the stack
- ldr r5, [sp, #56] @ load extra args from the stack
- ldr r6, [sp, #60] @ load extra args from the stack
- ldr r7, [sp, #64] @ load extra args from the stack
+
+ @ pull the extra args off the stack
+ @ and store them in SRAM
+ ldr r4, [sp, #52]
+ str r4, omap_sdrc_rfr_ctrl_0_val
+ ldr r4, [sp, #56]
+ str r4, omap_sdrc_actim_ctrl_a_0_val
+ ldr r4, [sp, #60]
+ str r4, omap_sdrc_actim_ctrl_b_0_val
+ ldr r4, [sp, #64]
+ str r4, omap_sdrc_mr_0_val
+ ldr r4, [sp, #68]
+ str r4, omap_sdrc_rfr_ctrl_1_val
+ cmp r4, #0 @ if SDRC_RFR_CTRL_1 is 0,
+ beq skip_cs1_params @ do not use cs1 params
+ ldr r4, [sp, #72]
+ str r4, omap_sdrc_actim_ctrl_a_1_val
+ ldr r4, [sp, #76]
+ str r4, omap_sdrc_actim_ctrl_b_1_val
+ ldr r4, [sp, #80]
+ str r4, omap_sdrc_mr_1_val
+skip_cs1_params:
dsb @ flush buffered writes to interconnect
- cmp r7, #1 @ if increasing SDRC clk rate,
+
+ cmp r3, #1 @ if increasing SDRC clk rate,
bleq configure_sdrc @ program the SDRC regs early (for RFR)
- cmp r4, #SDRC_UNLOCK_DLL @ set the intended DLL state
+ cmp r1, #SDRC_UNLOCK_DLL @ set the intended DLL state
bleq unlock_dll
blne lock_dll
bl sdram_in_selfrefresh @ put SDRAM in self refresh, idle SDRC
bl configure_core_dpll @ change the DPLL3 M2 divider
+ mov r12, r2
+ bl wait_clk_stable @ wait for SDRC to stabilize
bl enable_sdrc @ take SDRC out of idle
- cmp r4, #SDRC_UNLOCK_DLL @ wait for DLL status to change
+ cmp r1, #SDRC_UNLOCK_DLL @ wait for DLL status to change
bleq wait_dll_unlock
blne wait_dll_lock
- cmp r7, #1 @ if increasing SDRC clk rate,
+ cmp r3, #1 @ if increasing SDRC clk rate,
beq return_to_sdram @ return to SDRAM code, otherwise,
bl configure_sdrc @ reprogram SDRC regs now
- mov r12, r5
- bl wait_clk_stable @ wait for SDRC to stabilize
return_to_sdram:
isb @ prevent speculative exec past here
mov r0, #0 @ return value
unlock_dll:
ldr r11, omap3_sdrc_dlla_ctrl
ldr r12, [r11]
- and r12, r12, #FIXEDDELAY_MASK
+ bic r12, r12, #FIXEDDELAY_MASK
orr r12, r12, #FIXEDDELAY_DEFAULT
orr r12, r12, #DLLIDLE_MASK
str r12, [r11] @ (no OCP barrier needed)
ldr r12, [r11] @ read the contents of SDRC_POWER
mov r9, r12 @ keep a copy of SDRC_POWER bits
orr r12, r12, #SRFRONIDLEREQ_MASK @ enable self refresh on idle
- bic r12, r12, #PWDENA_MASK @ clear PWDENA
str r12, [r11] @ write back to SDRC_POWER register
ldr r12, [r11] @ posted-write barrier for SDRC
idle_sdrc:
ldr r12, [r11]
ldr r10, core_m2_mask_val @ modify m2 for core dpll
and r12, r12, r10
- orr r12, r12, r3, lsl #CORE_DPLL_CLKOUT_DIV_SHIFT
+ orr r12, r12, r0, lsl #CORE_DPLL_CLKOUT_DIV_SHIFT
str r12, [r11]
ldr r12, [r11] @ posted-write barrier for CM
bx lr
bne wait_dll_unlock
bx lr
configure_sdrc:
- ldr r11, omap3_sdrc_rfr_ctrl
- str r0, [r11]
- ldr r11, omap3_sdrc_actim_ctrla
- str r1, [r11]
- ldr r11, omap3_sdrc_actim_ctrlb
- str r2, [r11]
+ ldr r12, omap_sdrc_rfr_ctrl_0_val @ fetch value from SRAM
+ ldr r11, omap3_sdrc_rfr_ctrl_0 @ fetch addr from SRAM
+ str r12, [r11] @ store
+ ldr r12, omap_sdrc_actim_ctrl_a_0_val
+ ldr r11, omap3_sdrc_actim_ctrl_a_0
+ str r12, [r11]
+ ldr r12, omap_sdrc_actim_ctrl_b_0_val
+ ldr r11, omap3_sdrc_actim_ctrl_b_0
+ str r12, [r11]
+ ldr r12, omap_sdrc_mr_0_val
ldr r11, omap3_sdrc_mr_0
- str r6, [r11]
- ldr r6, [r11] @ posted-write barrier for SDRC
+ str r12, [r11]
+ ldr r12, omap_sdrc_rfr_ctrl_1_val
+ cmp r12, #0 @ if SDRC_RFR_CTRL_1 is 0,
+ beq skip_cs1_prog @ do not program cs1 params
+ ldr r11, omap3_sdrc_rfr_ctrl_1
+ str r12, [r11]
+ ldr r12, omap_sdrc_actim_ctrl_a_1_val
+ ldr r11, omap3_sdrc_actim_ctrl_a_1
+ str r12, [r11]
+ ldr r12, omap_sdrc_actim_ctrl_b_1_val
+ ldr r11, omap3_sdrc_actim_ctrl_b_1
+ str r12, [r11]
+ ldr r12, omap_sdrc_mr_1_val
+ ldr r11, omap3_sdrc_mr_1
+ str r12, [r11]
+skip_cs1_prog:
+ ldr r12, [r11] @ posted-write barrier for SDRC
bx lr
omap3_sdrc_power:
.word OMAP34XX_CM_REGADDR(CORE_MOD, CM_IDLEST)
omap3_cm_iclken1_core:
.word OMAP34XX_CM_REGADDR(CORE_MOD, CM_ICLKEN1)
-omap3_sdrc_rfr_ctrl:
+
+omap3_sdrc_rfr_ctrl_0:
.word OMAP34XX_SDRC_REGADDR(SDRC_RFR_CTRL_0)
-omap3_sdrc_actim_ctrla:
+omap3_sdrc_rfr_ctrl_1:
+ .word OMAP34XX_SDRC_REGADDR(SDRC_RFR_CTRL_1)
+omap3_sdrc_actim_ctrl_a_0:
.word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_A_0)
-omap3_sdrc_actim_ctrlb:
+omap3_sdrc_actim_ctrl_a_1:
+ .word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_A_1)
+omap3_sdrc_actim_ctrl_b_0:
.word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_B_0)
+omap3_sdrc_actim_ctrl_b_1:
+ .word OMAP34XX_SDRC_REGADDR(SDRC_ACTIM_CTRL_B_1)
omap3_sdrc_mr_0:
.word OMAP34XX_SDRC_REGADDR(SDRC_MR_0)
+omap3_sdrc_mr_1:
+ .word OMAP34XX_SDRC_REGADDR(SDRC_MR_1)
+omap_sdrc_rfr_ctrl_0_val:
+ .word 0xDEADBEEF
+omap_sdrc_rfr_ctrl_1_val:
+ .word 0xDEADBEEF
+omap_sdrc_actim_ctrl_a_0_val:
+ .word 0xDEADBEEF
+omap_sdrc_actim_ctrl_a_1_val:
+ .word 0xDEADBEEF
+omap_sdrc_actim_ctrl_b_0_val:
+ .word 0xDEADBEEF
+omap_sdrc_actim_ctrl_b_1_val:
+ .word 0xDEADBEEF
+omap_sdrc_mr_0_val:
+ .word 0xDEADBEEF
+omap_sdrc_mr_1_val:
+ .word 0xDEADBEEF
+
omap3_sdrc_dlla_status:
.word OMAP34XX_SDRC_REGADDR(SDRC_DLLA_STATUS)
omap3_sdrc_dlla_ctrl:
ENTRY(omap3_sram_configure_core_dpll_sz)
.word . - omap3_sram_configure_core_dpll
+
}
};
-static void u300_init_check_chip(void)
+static void __init u300_init_check_chip(void)
{
u16 val;
printk("%d pages swap cached\n", cached);
}
+static void __init find_node_limits(int node, struct meminfo *mi,
+ unsigned long *min, unsigned long *max_low, unsigned long *max_high)
+{
+ int i;
+
+ *min = -1UL;
+ *max_low = *max_high = 0;
+
+ for_each_nodebank(i, mi, node) {
+ struct membank *bank = &mi->bank[i];
+ unsigned long start, end;
+
+ start = bank_pfn_start(bank);
+ end = bank_pfn_end(bank);
+
+ if (*min > start)
+ *min = start;
+ if (*max_high < end)
+ *max_high = end;
+ if (bank->highmem)
+ continue;
+ if (*max_low < end)
+ *max_low = end;
+ }
+}
+
/*
* FIXME: We really want to avoid allocating the bootmap bitmap
* over the top of the initrd. Hopefully, this is located towards
#endif
}
-static unsigned long __init bootmem_init_node(int node, struct meminfo *mi)
+static void __init bootmem_init_node(int node, struct meminfo *mi,
+ unsigned long start_pfn, unsigned long end_pfn)
{
- unsigned long start_pfn, end_pfn, boot_pfn;
+ unsigned long boot_pfn;
unsigned int boot_pages;
pg_data_t *pgdat;
int i;
- start_pfn = -1UL;
- end_pfn = 0;
-
/*
- * Calculate the pfn range, and map the memory banks for this node.
+ * Map the memory banks for this node.
*/
for_each_nodebank(i, mi, node) {
struct membank *bank = &mi->bank[i];
- unsigned long start, end;
- start = bank_pfn_start(bank);
- end = bank_pfn_end(bank);
-
- if (start_pfn > start)
- start_pfn = start;
- if (end_pfn < end)
- end_pfn = end;
-
- map_memory_bank(bank);
+ if (!bank->highmem)
+ map_memory_bank(bank);
}
- /*
- * If there is no memory in this node, ignore it.
- */
- if (end_pfn == 0)
- return end_pfn;
-
/*
* Allocate the bootmem bitmap page.
*/
for_each_nodebank(i, mi, node) {
struct membank *bank = &mi->bank[i];
- free_bootmem_node(pgdat, bank_phys_start(bank), bank_phys_size(bank));
+ if (!bank->highmem)
+ free_bootmem_node(pgdat, bank_phys_start(bank), bank_phys_size(bank));
memory_present(node, bank_pfn_start(bank), bank_pfn_end(bank));
}
*/
reserve_bootmem_node(pgdat, boot_pfn << PAGE_SHIFT,
boot_pages << PAGE_SHIFT, BOOTMEM_DEFAULT);
-
- return end_pfn;
}
static void __init bootmem_reserve_initrd(int node)
static void __init bootmem_free_node(int node, struct meminfo *mi)
{
unsigned long zone_size[MAX_NR_ZONES], zhole_size[MAX_NR_ZONES];
- unsigned long start_pfn, end_pfn;
- pg_data_t *pgdat = NODE_DATA(node);
+ unsigned long min, max_low, max_high;
int i;
- start_pfn = pgdat->bdata->node_min_pfn;
- end_pfn = pgdat->bdata->node_low_pfn;
+ find_node_limits(node, mi, &min, &max_low, &max_high);
/*
* initialise the zones within this node.
*/
memset(zone_size, 0, sizeof(zone_size));
- memset(zhole_size, 0, sizeof(zhole_size));
/*
* The size of this node has already been determined. If we need
* to do anything fancy with the allocation of this memory to the
* zones, now is the time to do it.
*/
- zone_size[0] = end_pfn - start_pfn;
+ zone_size[0] = max_low - min;
+#ifdef CONFIG_HIGHMEM
+ zone_size[ZONE_HIGHMEM] = max_high - max_low;
+#endif
/*
* For each bank in this node, calculate the size of the holes.
* holes = node_size - sum(bank_sizes_in_node)
*/
- zhole_size[0] = zone_size[0];
- for_each_nodebank(i, mi, node)
- zhole_size[0] -= bank_pfn_size(&mi->bank[i]);
+ memcpy(zhole_size, zone_size, sizeof(zhole_size));
+ for_each_nodebank(i, mi, node) {
+ int idx = 0;
+#ifdef CONFIG_HIGHMEM
+ if (mi->bank[i].highmem)
+ idx = ZONE_HIGHMEM;
+#endif
+ zhole_size[idx] -= bank_pfn_size(&mi->bank[i]);
+ }
/*
* Adjust the sizes according to any special requirements for
*/
arch_adjust_zones(node, zone_size, zhole_size);
- free_area_init_node(node, zone_size, start_pfn, zhole_size);
+ free_area_init_node(node, zone_size, min, zhole_size);
}
void __init bootmem_init(void)
{
struct meminfo *mi = &meminfo;
- unsigned long memend_pfn = 0;
+ unsigned long min, max_low, max_high;
int node, initrd_node;
/*
*/
initrd_node = check_initrd(mi);
+ max_low = max_high = 0;
+
/*
* Run through each node initialising the bootmem allocator.
*/
for_each_node(node) {
- unsigned long end_pfn = bootmem_init_node(node, mi);
+ unsigned long node_low, node_high;
+
+ find_node_limits(node, mi, &min, &node_low, &node_high);
+
+ if (node_low > max_low)
+ max_low = node_low;
+ if (node_high > max_high)
+ max_high = node_high;
+
+ /*
+ * If there is no memory in this node, ignore it.
+ * (We can't have nodes which have no lowmem)
+ */
+ if (node_low == 0)
+ continue;
+
+ bootmem_init_node(node, mi, min, node_low);
/*
* Reserve any special node zero regions.
*/
if (node == initrd_node)
bootmem_reserve_initrd(node);
-
- /*
- * Remember the highest memory PFN.
- */
- if (end_pfn > memend_pfn)
- memend_pfn = end_pfn;
}
/*
for_each_node(node)
bootmem_free_node(node, mi);
- high_memory = __va((memend_pfn << PAGE_SHIFT) - 1) + 1;
+ high_memory = __va((max_low << PAGE_SHIFT) - 1) + 1;
/*
* This doesn't seem to be used by the Linux memory manager any
* Note: max_low_pfn and max_pfn reflect the number of _pages_ in
* the system, not the maximum PFN.
*/
- max_pfn = max_low_pfn = memend_pfn - PHYS_PFN_OFFSET;
+ max_low_pfn = max_low - PHYS_PFN_OFFSET;
+ max_pfn = max_high - PHYS_PFN_OFFSET;
}
static inline int free_area(unsigned long pfn, unsigned long end, char *s)
static void __init sanity_check_meminfo(void)
{
- int i, j;
+ int i, j, highmem = 0;
for (i = 0, j = 0; i < meminfo.nr_banks; i++) {
struct membank *bank = &meminfo.bank[j];
*bank = meminfo.bank[i];
#ifdef CONFIG_HIGHMEM
+ if (__va(bank->start) > VMALLOC_MIN ||
+ __va(bank->start) < (void *)PAGE_OFFSET)
+ highmem = 1;
+
+ bank->highmem = highmem;
+
/*
* Split those memory banks which are partially overlapping
* the vmalloc area greatly simplifying things later.
i++;
bank[1].size -= VMALLOC_MIN - __va(bank->start);
bank[1].start = __pa(VMALLOC_MIN - 1) + 1;
+ bank[1].highmem = highmem = 1;
j++;
}
bank->size = VMALLOC_MIN - __va(bank->start);
	/* Ensure desired rate is within allowed range. Some governors
* (ondemand) will just pass target_freq=0 to get the minimum. */
- if (target_freq < policy->cpuinfo.min_freq)
- target_freq = policy->cpuinfo.min_freq;
- if (target_freq > policy->cpuinfo.max_freq)
- target_freq = policy->cpuinfo.max_freq;
+ if (target_freq < policy->min)
+ target_freq = policy->min;
+ if (target_freq > policy->max)
+ target_freq = policy->max;
freqs.old = omap_getspeed(0);
freqs.new = clk_round_rate(mpu_clk, target_freq * 1000) / 1000;
cur_lch = next_lch;
} while (next_lch != -1);
- } else if (cpu_class_is_omap2()) {
+ } else if (cpu_is_omap242x() ||
+ (cpu_is_omap243x() && omap_type() <= OMAP2430_REV_ES1_0)) {
+
/* Errata: Need to write lch even if not using chaining */
dma_write(lch, CLNK_CTRL(lch));
}
__raw_writel(l, reg);
}
-static int __omap_get_gpio_datain(int gpio)
+static int _get_gpio_datain(struct gpio_bank *bank, int gpio)
{
- struct gpio_bank *bank;
void __iomem *reg;
if (check_gpio(gpio) < 0)
return -EINVAL;
- bank = get_gpio_bank(gpio);
reg = bank->base;
switch (bank->method) {
#ifdef CONFIG_ARCH_OMAP1
& (1 << get_gpio_index(gpio))) != 0;
}
+static int _get_gpio_dataout(struct gpio_bank *bank, int gpio)
+{
+ void __iomem *reg;
+
+ if (check_gpio(gpio) < 0)
+ return -EINVAL;
+ reg = bank->base;
+
+ switch (bank->method) {
+#ifdef CONFIG_ARCH_OMAP1
+ case METHOD_MPUIO:
+ reg += OMAP_MPUIO_OUTPUT;
+ break;
+#endif
+#ifdef CONFIG_ARCH_OMAP15XX
+ case METHOD_GPIO_1510:
+ reg += OMAP1510_GPIO_DATA_OUTPUT;
+ break;
+#endif
+#ifdef CONFIG_ARCH_OMAP16XX
+ case METHOD_GPIO_1610:
+ reg += OMAP1610_GPIO_DATAOUT;
+ break;
+#endif
+#ifdef CONFIG_ARCH_OMAP730
+ case METHOD_GPIO_730:
+ reg += OMAP730_GPIO_DATA_OUTPUT;
+ break;
+#endif
+#ifdef CONFIG_ARCH_OMAP850
+ case METHOD_GPIO_850:
+ reg += OMAP850_GPIO_DATA_OUTPUT;
+ break;
+#endif
+#if defined(CONFIG_ARCH_OMAP24XX) || defined(CONFIG_ARCH_OMAP34XX) || \
+ defined(CONFIG_ARCH_OMAP4)
+ case METHOD_GPIO_24XX:
+ reg += OMAP24XX_GPIO_DATAOUT;
+ break;
+#endif
+ default:
+ return -EINVAL;
+ }
+
+ return (__raw_readl(reg) & (1 << get_gpio_index(gpio))) != 0;
+}
+
#define MOD_REG_BIT(reg, bit_mask, set) \
do { \
int l = __raw_readl(base + reg); \
struct gpio_bank *bank = get_irq_chip_data(irq);
_set_gpio_irqenable(bank, gpio, 0);
+ _set_gpio_triggering(bank, get_gpio_index(gpio), IRQ_TYPE_NONE);
}
static void gpio_unmask_irq(unsigned int irq)
unsigned int gpio = irq - IH_GPIO_BASE;
struct gpio_bank *bank = get_irq_chip_data(irq);
unsigned int irq_mask = 1 << get_gpio_index(gpio);
+ struct irq_desc *desc = irq_to_desc(irq);
+ u32 trigger = desc->status & IRQ_TYPE_SENSE_MASK;
+
+ if (trigger)
+ _set_gpio_triggering(bank, get_gpio_index(gpio), trigger);
/* For level-triggered GPIOs, the clearing must be done after
* the HW source is cleared, thus after the handler has run */
return 0;
}
+static int gpio_is_input(struct gpio_bank *bank, int mask)
+{
+ void __iomem *reg = bank->base;
+
+ switch (bank->method) {
+ case METHOD_MPUIO:
+ reg += OMAP_MPUIO_IO_CNTL;
+ break;
+ case METHOD_GPIO_1510:
+ reg += OMAP1510_GPIO_DIR_CONTROL;
+ break;
+ case METHOD_GPIO_1610:
+ reg += OMAP1610_GPIO_DIRECTION;
+ break;
+ case METHOD_GPIO_730:
+ reg += OMAP730_GPIO_DIR_CONTROL;
+ break;
+ case METHOD_GPIO_850:
+ reg += OMAP850_GPIO_DIR_CONTROL;
+ break;
+ case METHOD_GPIO_24XX:
+ reg += OMAP24XX_GPIO_OE;
+ break;
+ }
+ return __raw_readl(reg) & mask;
+}
+
static int gpio_get(struct gpio_chip *chip, unsigned offset)
{
- return __omap_get_gpio_datain(chip->base + offset);
+ struct gpio_bank *bank;
+ void __iomem *reg;
+ int gpio;
+ u32 mask;
+
+ gpio = chip->base + offset;
+ bank = get_gpio_bank(gpio);
+ reg = bank->base;
+ mask = 1 << get_gpio_index(gpio);
+
+ if (gpio_is_input(bank, mask))
+ return _get_gpio_datain(bank, gpio);
+ else
+ return _get_gpio_dataout(bank, gpio);
}
static int gpio_output(struct gpio_chip *chip, unsigned offset, int value)
#include <linux/debugfs.h>
#include <linux/seq_file.h>
-static int gpio_is_input(struct gpio_bank *bank, int mask)
-{
- void __iomem *reg = bank->base;
-
- switch (bank->method) {
- case METHOD_MPUIO:
- reg += OMAP_MPUIO_IO_CNTL;
- break;
- case METHOD_GPIO_1510:
- reg += OMAP1510_GPIO_DIR_CONTROL;
- break;
- case METHOD_GPIO_1610:
- reg += OMAP1610_GPIO_DIRECTION;
- break;
- case METHOD_GPIO_730:
- reg += OMAP730_GPIO_DIR_CONTROL;
- break;
- case METHOD_GPIO_850:
- reg += OMAP850_GPIO_DIR_CONTROL;
- break;
- case METHOD_GPIO_24XX:
- reg += OMAP24XX_GPIO_OE;
- break;
- }
- return __raw_readl(reg) & mask;
-}
-
-
static int dbg_gpio_show(struct seq_file *s, void *unused)
{
unsigned i, j, gpio;
struct clkops {
int (*enable)(struct clk *);
void (*disable)(struct clk *);
+ void (*find_idlest)(struct clk *, void __iomem **, u8 *);
+ void (*find_companion)(struct clk *, void __iomem **, u8 *);
};
#if defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) || \
#define cpu_class_is_omap2() (cpu_is_omap24xx() || cpu_is_omap34xx() || \
cpu_is_omap44xx())
-#if defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) || \
- defined(CONFIG_ARCH_OMAP4)
-
/* Various silicon revisions for omap2 */
#define OMAP242X_CLASS 0x24200024
#define OMAP2420_REV_ES1_0 0x24200024
int omap_chip_is(struct omap_chip_id oci);
void omap2_check_revision(void);
-
-#endif /* defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3) */
extern void omap1_init_common_hw(void);
extern void omap2_map_common_io(void);
-extern void omap2_init_common_hw(struct omap_sdrc_params *sp);
+extern void omap2_init_common_hw(struct omap_sdrc_params *sdrc_cs0,
+ struct omap_sdrc_params *sdrc_cs1);
#define __arch_ioremap(p,s,t) omap_ioremap(p,s,t)
#define __arch_iounmap(v) omap_iounmap(v)
AE5_34XX_GPIO143,
H19_34XX_GPIO164_OUT,
J25_34XX_GPIO170,
+
+ /* OMAP3 SDRC CKE signals to SDR/DDR ram chips */
+ H16_34XX_SDRC_CKE0,
+ H17_34XX_SDRC_CKE1,
};
struct omap_mux_cfg {
u32 omap_prcm_get_reset_sources(void);
void omap_prcm_arch_reset(char mode);
+int omap2_cm_wait_idlest(void __iomem *reg, u32 mask, const char *name);
#endif
#define SDRC_ACTIM_CTRL_A_0 0x09c
#define SDRC_ACTIM_CTRL_B_0 0x0a0
#define SDRC_RFR_CTRL_0 0x0a4
+#define SDRC_MR_1 0x0B4
+#define SDRC_ACTIM_CTRL_A_1 0x0C4
+#define SDRC_ACTIM_CTRL_B_1 0x0C8
+#define SDRC_RFR_CTRL_1 0x0D4
/*
* These values represent the number of memory clock cycles between
u32 mr;
};
-void __init omap2_sdrc_init(struct omap_sdrc_params *sp);
-struct omap_sdrc_params *omap2_sdrc_get_params(unsigned long r);
+void __init omap2_sdrc_init(struct omap_sdrc_params *sdrc_cs0,
+ struct omap_sdrc_params *sdrc_cs1);
+int omap2_sdrc_get_params(unsigned long r,
+ struct omap_sdrc_params **sdrc_cs0,
+ struct omap_sdrc_params **sdrc_cs1);
#ifdef CONFIG_ARCH_OMAP2
extern void omap_uart_prepare_suspend(void);
extern void omap_uart_prepare_idle(int num);
extern void omap_uart_resume_idle(int num);
+extern void omap_uart_enable_irqs(int enable);
#endif
#endif
u32 mem_type);
extern u32 omap2_set_prcm(u32 dpll_ctrl_val, u32 sdrc_rfr_val, int bypass);
-extern u32 omap3_configure_core_dpll(u32 sdrc_rfr_ctrl,
- u32 sdrc_actim_ctrla,
- u32 sdrc_actim_ctrlb, u32 m2,
- u32 unlock_dll, u32 f, u32 sdrc_mr,
- u32 inc);
+extern u32 omap3_configure_core_dpll(
+ u32 m2, u32 unlock_dll, u32 f, u32 inc,
+ u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0,
+ u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0,
+ u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1,
+ u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1);
/* Do not use these */
extern void omap1_sram_reprogram_clock(u32 ckctl, u32 dpllctl);
u32 mem_type);
extern unsigned long omap243x_sram_reprogram_sdrc_sz;
-
-extern u32 omap3_sram_configure_core_dpll(u32 sdrc_rfr_ctrl,
- u32 sdrc_actim_ctrla,
- u32 sdrc_actim_ctrlb, u32 m2,
- u32 unlock_dll, u32 f, u32 sdrc_mr,
- u32 inc);
+extern u32 omap3_sram_configure_core_dpll(
+ u32 m2, u32 unlock_dll, u32 f, u32 inc,
+ u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0,
+ u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0,
+ u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1,
+ u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1);
extern unsigned long omap3_sram_configure_core_dpll_sz;
#endif
#define OMAP2_SRAM_VA 0xe3000000
#define OMAP2_SRAM_PUB_VA (OMAP2_SRAM_VA + 0x800)
#define OMAP3_SRAM_PA 0x40200000
-#define OMAP3_SRAM_VA 0xd7000000
+#define OMAP3_SRAM_VA 0xe3000000
#define OMAP3_SRAM_PUB_PA 0x40208000
-#define OMAP3_SRAM_PUB_VA 0xd7008000
+#define OMAP3_SRAM_PUB_VA (OMAP3_SRAM_VA + 0x8000)
#define OMAP4_SRAM_PA 0x40200000 /*0x402f0000*/
#define OMAP4_SRAM_VA 0xd7000000 /*0xd70f0000*/
#ifdef CONFIG_ARCH_OMAP3
-static u32 (*_omap3_sram_configure_core_dpll)(u32 sdrc_rfr_ctrl,
- u32 sdrc_actim_ctrla,
- u32 sdrc_actim_ctrlb,
- u32 m2, u32 unlock_dll,
- u32 f, u32 sdrc_mr, u32 inc);
-u32 omap3_configure_core_dpll(u32 sdrc_rfr_ctrl, u32 sdrc_actim_ctrla,
- u32 sdrc_actim_ctrlb, u32 m2, u32 unlock_dll,
- u32 f, u32 sdrc_mr, u32 inc)
+static u32 (*_omap3_sram_configure_core_dpll)(
+ u32 m2, u32 unlock_dll, u32 f, u32 inc,
+ u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0,
+ u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0,
+ u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1,
+ u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1);
+
+u32 omap3_configure_core_dpll(u32 m2, u32 unlock_dll, u32 f, u32 inc,
+ u32 sdrc_rfr_ctrl_0, u32 sdrc_actim_ctrl_a_0,
+ u32 sdrc_actim_ctrl_b_0, u32 sdrc_mr_0,
+ u32 sdrc_rfr_ctrl_1, u32 sdrc_actim_ctrl_a_1,
+ u32 sdrc_actim_ctrl_b_1, u32 sdrc_mr_1)
{
BUG_ON(!_omap3_sram_configure_core_dpll);
- return _omap3_sram_configure_core_dpll(sdrc_rfr_ctrl,
- sdrc_actim_ctrla,
- sdrc_actim_ctrlb, m2,
- unlock_dll, f, sdrc_mr, inc);
+ return _omap3_sram_configure_core_dpll(
+ m2, unlock_dll, f, inc,
+ sdrc_rfr_ctrl_0, sdrc_actim_ctrl_a_0,
+ sdrc_actim_ctrl_b_0, sdrc_mr_0,
+ sdrc_rfr_ctrl_1, sdrc_actim_ctrl_a_1,
+ sdrc_actim_ctrl_b_1, sdrc_mr_1);
}
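
For reference, a minimal caller sketch for the reworked interface (not part of
the patch): it assumes omap2_sdrc_get_params() returns 0 on success and may
hand back a NULL CS1 pointer, and that struct omap_sdrc_params exposes
rfr_ctrl, actim_ctrla, actim_ctrlb and mr fields; the function and variable
names below are purely illustrative. Passing 0 as the CS1 RFR_CTRL argument
makes the SRAM code skip CS1 programming, as described in the assembly
comment earlier.

	static int example_set_core_dpll_rate(unsigned long target_rate,
					      u32 new_m2_div, u32 unlock_dll,
					      u32 wait_cycles, u32 increasing_rate)
	{
		struct omap_sdrc_params *cs0, *cs1;

		/* look up the SDRC register sets for this rate (assumed: 0 = found) */
		if (omap2_sdrc_get_params(target_rate, &cs0, &cs1))
			return -EINVAL;

		/* four scalar args first, then the CS0 block, then the CS1 block */
		omap3_configure_core_dpll(new_m2_div, unlock_dll, wait_cycles,
					  increasing_rate,
					  cs0->rfr_ctrl, cs0->actim_ctrla,
					  cs0->actim_ctrlb, cs0->mr,
					  cs1 ? cs1->rfr_ctrl : 0,
					  cs1 ? cs1->actim_ctrla : 0,
					  cs1 ? cs1->actim_ctrlb : 0,
					  cs1 ? cs1->mr : 0);
		return 0;
	}
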
/* REVISIT: Should this be same as omap34xx_sram_init() after off-idle? */
#ifndef __PLAT_GPIO_H
#define __PLAT_GPIO_H
+#include <linux/init.h>
+
/*
* GENERIC_GPIO primitives.
*/
/* calculate the MISCCR setting for the clock */
- if (parent == &clk_xtal)
+ if (parent == &clk_mpll)
source = S3C2410_MISCCR_CLK0_MPLL;
else if (parent == &clk_upll)
source = S3C2410_MISCCR_CLK0_UPLL;
.debounce_max = 20,
.debounce_rep = 4,
.debounce_tol = 5,
+
+ .keep_vref_on = true,
+ .settle_delay_usecs = 500,
+ .penirq_recheck_delay_usecs = 100,
};
static struct spi_board_info __initdata spi1_board_info[] = {
brne 1f
/* At this point, "from" is word-aligned */
-2: sub r10, 4
- mov r9, r12
+2: mov r9, r12
+5: sub r10, 4
brlt 4f
3: ld.w r8, r11++
/* Handle unaligned "from" pointer */
1: sub r10, 4
+ movlt r9, r12
brlt 4b
add r10, r9
lsl r9, 2
st.b r12++, r8
ld.ub r8, r11++
st.b r12++, r8
- rjmp 2b
+ mov r8, r12
+ add pc, pc, r9
+ sub r8, 1
+ nop
+ sub r8, 1
+ nop
+ sub r8, 1
+ nop
+ mov r9, r8
+ rjmp 5b
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.30-rc6
-# Fri May 22 10:02:33 2009
+# Linux kernel version: 2.6.31-rc6
+# Tue Aug 18 11:00:02 2009
#
CONFIG_MICROBLAZE=y
# CONFIG_SWAP is not set
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y
CONFIG_GENERIC_GPIO=y
+CONFIG_GENERIC_CSUM=y
+# CONFIG_PCI is not set
+CONFIG_NO_DMA=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
+CONFIG_CONSTRUCTORS=y
#
# General setup
CONFIG_RD_GZIP=y
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
-CONFIG_INITRAMFS_COMPRESSION_NONE=y
-# CONFIG_INITRAMFS_COMPRESSION_GZIP is not set
+# CONFIG_INITRAMFS_COMPRESSION_NONE is not set
+CONFIG_INITRAMFS_COMPRESSION_GZIP=y
# CONFIG_INITRAMFS_COMPRESSION_BZIP2 is not set
# CONFIG_INITRAMFS_COMPRESSION_LZMA is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
-# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_HOTPLUG is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_EVENTFD=y
# CONFIG_SHMEM is not set
CONFIG_AIO=y
+
+#
+# Performance Counters
+#
CONFIG_VM_EVENT_COUNTERS=y
+# CONFIG_STRIP_ASM_SYMS is not set
CONFIG_COMPAT_BRK=y
CONFIG_SLAB=y
# CONFIG_SLUB is not set
# CONFIG_SLOB is not set
# CONFIG_PROFILING is not set
# CONFIG_MARKERS is not set
+
+#
+# GCOV-based kernel profiling
+#
# CONFIG_SLOW_WORK is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_BLOCK=y
-# CONFIG_LBD is not set
+CONFIG_LBDAF=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_BLK_DEV_INTEGRITY is not set
# CONFIG_PHYS_ADDR_T_64BIT is not set
CONFIG_ZONE_DMA_FLAG=0
CONFIG_VIRT_TO_BUS=y
-CONFIG_UNEVICTABLE_LRU=y
CONFIG_HAVE_MLOCK=y
CONFIG_HAVE_MLOCKED_PAGE_BIT=y
+CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
#
# Executable file formats
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_PHONET is not set
+# CONFIG_IEEE802154 is not set
# CONFIG_NET_SCHED is not set
# CONFIG_DCB is not set
# CONFIG_ATA is not set
# CONFIG_MD is not set
CONFIG_NETDEVICES=y
-CONFIG_COMPAT_NET_DEV_OPS=y
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
# CONFIG_MACVLAN is not set
# CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set
# CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set
# CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set
-# CONFIG_B44 is not set
+# CONFIG_KS8842 is not set
CONFIG_NETDEV_1000=y
CONFIG_NETDEV_10000=y
# CONFIG_TCG_TPM is not set
# CONFIG_I2C is not set
# CONFIG_SPI is not set
+
+#
+# PPS support
+#
+# CONFIG_PPS is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_THERMAL is not set
# CONFIG_THERMAL_HWMON is not set
# CONFIG_WATCHDOG is not set
-CONFIG_SSB_POSSIBLE=y
-
-#
-# Sonics Silicon Backplane
-#
-# CONFIG_SSB is not set
#
# Multifunction device drivers
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_REGULATOR is not set
-
-#
-# Multimedia devices
-#
-
-#
-# Multimedia core support
-#
-# CONFIG_VIDEO_DEV is not set
-# CONFIG_DVB_CORE is not set
-# CONFIG_VIDEO_MEDIA is not set
-
-#
-# Multimedia drivers
-#
-# CONFIG_DAB is not set
+# CONFIG_MEDIA_SUPPORT is not set
#
# Graphics support
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_RTC_CLASS is not set
-# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
+
+#
+# TI VLYNQ
+#
# CONFIG_STAGING is not set
#
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_FS_POSIX_ACL is not set
-CONFIG_FILE_LOCKING=y
# CONFIG_XFS_FS is not set
+# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
+CONFIG_FILE_LOCKING=y
+CONFIG_FSNOTIFY=y
# CONFIG_DNOTIFY is not set
# CONFIG_INOTIFY is not set
+CONFIG_INOTIFY_USER=y
# CONFIG_QUOTA is not set
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
# CONFIG_SYSCTL_SYSCALL_CHECK is not set
# CONFIG_PAGE_POISONING is not set
# CONFIG_SAMPLES is not set
+# CONFIG_KMEMCHECK is not set
CONFIG_EARLY_PRINTK=y
CONFIG_HEART_BEAT=y
CONFIG_DEBUG_BOOTMEM=y
CONFIG_DECOMPRESS_GZIP=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
-CONFIG_HAS_DMA=y
CONFIG_HAVE_LMB=y
CONFIG_NLATTR=y
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.30-rc5
-# Mon May 11 09:01:02 2009
+# Linux kernel version: 2.6.31-rc6
+# Tue Aug 18 10:35:30 2009
#
CONFIG_MICROBLAZE=y
# CONFIG_SWAP is not set
# CONFIG_GENERIC_TIME_VSYSCALL is not set
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y
+CONFIG_GENERIC_GPIO=y
+CONFIG_GENERIC_CSUM=y
# CONFIG_PCI is not set
-# CONFIG_NO_DMA is not set
+CONFIG_NO_DMA=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
+CONFIG_CONSTRUCTORS=y
#
# General setup
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
-# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_HOTPLUG is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_AIO=y
+
+#
+# Performance Counters
+#
CONFIG_VM_EVENT_COUNTERS=y
+# CONFIG_STRIP_ASM_SYMS is not set
CONFIG_COMPAT_BRK=y
CONFIG_SLAB=y
# CONFIG_SLUB is not set
# CONFIG_SLOB is not set
# CONFIG_PROFILING is not set
# CONFIG_MARKERS is not set
+
+#
+# GCOV-based kernel profiling
+#
+# CONFIG_GCOV_KERNEL is not set
# CONFIG_SLOW_WORK is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_BLOCK=y
-# CONFIG_LBD is not set
+CONFIG_LBDAF=y
# CONFIG_BLK_DEV_BSG is not set
# CONFIG_BLK_DEV_INTEGRITY is not set
CONFIG_CMDLINE="console=ttyUL0,115200"
# CONFIG_CMDLINE_FORCE is not set
CONFIG_OF=y
-CONFIG_OF_DEVICE=y
CONFIG_PROC_DEVICETREE=y
+
+#
+# Advanced setup
+#
+
+#
+# Default settings for advanced configuration options are used
+#
+CONFIG_KERNEL_START=0x90000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_FLATMEM_MANUAL=y
# CONFIG_DISCONTIGMEM_MANUAL is not set
# CONFIG_PHYS_ADDR_T_64BIT is not set
CONFIG_ZONE_DMA_FLAG=0
CONFIG_VIRT_TO_BUS=y
-CONFIG_UNEVICTABLE_LRU=y
+CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_NOMMU_INITIAL_TRIM_EXCESS=1
#
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
# CONFIG_PHONET is not set
+# CONFIG_IEEE802154 is not set
# CONFIG_NET_SCHED is not set
# CONFIG_DCB is not set
CONFIG_WIRELESS_OLD_REGULATORY=y
# CONFIG_WIRELESS_EXT is not set
# CONFIG_LIB80211 is not set
-# CONFIG_MAC80211 is not set
+
+#
+# CFG80211 needs to be enabled for MAC80211
+#
+CONFIG_MAC80211_DEFAULT_PS_VALUE=0
# CONFIG_WIMAX is not set
# CONFIG_RFKILL is not set
# CONFIG_NET_9P is not set
# UBI - Unsorted block images
#
# CONFIG_MTD_UBI is not set
+CONFIG_OF_DEVICE=y
# CONFIG_PARPORT is not set
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_COW_COMMON is not set
# CONFIG_BLK_DEV_XIP is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
+# CONFIG_XILINX_SYSACE is not set
CONFIG_MISC_DEVICES=y
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_C2PORT is not set
# CONFIG_ATA is not set
# CONFIG_MD is not set
CONFIG_NETDEVICES=y
-CONFIG_COMPAT_NET_DEV_OPS=y
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
# CONFIG_MACVLAN is not set
# CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set
# CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set
# CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set
-# CONFIG_B44 is not set
+# CONFIG_KS8842 is not set
CONFIG_NETDEV_1000=y
CONFIG_NETDEV_10000=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_RTC is not set
# CONFIG_GEN_RTC is not set
+# CONFIG_XILINX_HWICAP is not set
# CONFIG_R3964 is not set
# CONFIG_RAW_DRIVER is not set
# CONFIG_TCG_TPM is not set
# CONFIG_I2C is not set
# CONFIG_SPI is not set
+
+#
+# PPS support
+#
+# CONFIG_PPS is not set
+CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
+# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_POWER_SUPPLY is not set
# CONFIG_HWMON is not set
# CONFIG_THERMAL is not set
# CONFIG_THERMAL_HWMON is not set
# CONFIG_WATCHDOG is not set
-CONFIG_SSB_POSSIBLE=y
-
-#
-# Sonics Silicon Backplane
-#
-# CONFIG_SSB is not set
#
# Multifunction device drivers
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_REGULATOR is not set
-
-#
-# Multimedia devices
-#
-
-#
-# Multimedia core support
-#
-# CONFIG_VIDEO_DEV is not set
-# CONFIG_DVB_CORE is not set
-# CONFIG_VIDEO_MEDIA is not set
-
-#
-# Multimedia drivers
-#
-CONFIG_DAB=y
+# CONFIG_MEDIA_SUPPORT is not set
#
# Graphics support
# CONFIG_DISPLAY_SUPPORT is not set
# CONFIG_SOUND is not set
CONFIG_USB_SUPPORT=y
-# CONFIG_USB_ARCH_HAS_HCD is not set
+CONFIG_USB_ARCH_HAS_HCD=y
# CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set
+# CONFIG_USB is not set
# CONFIG_USB_OTG_WHITELIST is not set
# CONFIG_USB_OTG_BLACKLIST_HUB is not set
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_RTC_CLASS is not set
-# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
+
+#
+# TI VLYNQ
+#
# CONFIG_STAGING is not set
#
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_FS_POSIX_ACL=y
-CONFIG_FILE_LOCKING=y
# CONFIG_XFS_FS is not set
+# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
+CONFIG_FILE_LOCKING=y
+CONFIG_FSNOTIFY=y
# CONFIG_DNOTIFY is not set
# CONFIG_INOTIFY is not set
+CONFIG_INOTIFY_USER=y
# CONFIG_QUOTA is not set
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
-CONFIG_HAS_DMA=y
CONFIG_HAVE_LMB=y
CONFIG_NLATTR=y
/* should be defined in each interrupt controller driver */
extern unsigned int get_irq(struct pt_regs *regs);
-#define ack_bad_irq ack_bad_irq
-void ack_bad_irq(unsigned int irq);
#include <asm-generic/hardirq.h>
#endif /* _ASM_MICROBLAZE_HARDIRQ_H */
#include <linux/irq.h>
#include <asm/page.h>
#include <linux/io.h>
+#include <linux/bug.h>
#include <asm/prom.h>
#include <asm/irq.h>
if (intc)
break;
}
+ BUG_ON(!intc);
intc_baseaddr = *(int *) of_get_property(intc, "reg", NULL);
intc_baseaddr = (unsigned long) ioremap(intc_baseaddr, PAGE_SIZE);
}
EXPORT_SYMBOL_GPL(irq_of_parse_and_map);
-/*
- * 'what should we do if we get a hw irq event on an illegal vector'.
- * each architecture has to answer this themselves.
- */
-void ack_bad_irq(unsigned int irq)
-{
- printk(KERN_WARNING "unexpected IRQ trap at vector %02x\n", irq);
-}
-
static u32 concurrent_irq;
void do_IRQ(struct pt_regs *regs)
.long sys_fchmodat
.long sys_faccessat
.long sys_ni_syscall /* pselect6 */
- .long sys_ni_syscall /* sys_ppoll */
+ .long sys_ppoll
.long sys_unshare /* 310 */
.long sys_set_robust_list
.long sys_get_robust_list
#include <linux/clocksource.h>
#include <linux/clockchips.h>
#include <linux/io.h>
+#include <linux/bug.h>
#include <asm/cpuinfo.h>
#include <asm/setup.h>
#include <asm/prom.h>
if (timer)
break;
}
+ BUG_ON(!timer);
timer_baseaddr = *(int *) of_get_property(timer, "reg", NULL);
timer_baseaddr = (unsigned long) ioremap(timer_baseaddr, PAGE_SIZE);
* (in case the address isn't page-aligned).
*/
#ifndef CONFIG_MMU
- map_size = init_bootmem_node(NODE_DATA(0), PFN_UP(TOPHYS((u32)_end)),
+ map_size = init_bootmem_node(NODE_DATA(0), PFN_UP(TOPHYS((u32)klimit)),
min_low_pfn, max_low_pfn);
#else
map_size = init_bootmem_node(&contig_page_data,
- PFN_UP(TOPHYS((u32)_end)), min_low_pfn, max_low_pfn);
+ PFN_UP(TOPHYS((u32)klimit)), min_low_pfn, max_low_pfn);
#endif
- lmb_reserve(PFN_UP(TOPHYS((u32)_end)) << PAGE_SHIFT, map_size);
+ lmb_reserve(PFN_UP(TOPHYS((u32)klimit)) << PAGE_SHIFT, map_size);
/* free bootmem is whole main memory */
free_bootmem(memory_start, memory_size);
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#define PAGE_MASK (~((1 << PAGE_SHIFT) - 1))
+#ifdef CONFIG_HUGETLB_PAGE
#define HPAGE_SHIFT (PAGE_SHIFT + PAGE_SHIFT - 3)
#define HPAGE_SIZE ((1UL) << HPAGE_SHIFT)
#define HPAGE_MASK (~(HPAGE_SIZE - 1))
#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
+#endif /* CONFIG_HUGETLB_PAGE */
#ifndef __ASSEMBLY__
__setup("condev=", condev_setup);
+static void __init set_preferred_console(void)
+{
+ if (MACHINE_IS_KVM) {
+ add_preferred_console("hvc", 0, NULL);
+ s390_virtio_console_init();
+ return;
+ }
+
+ if (CONSOLE_IS_3215 || CONSOLE_IS_SCLP)
+ add_preferred_console("ttyS", 0, NULL);
+ if (CONSOLE_IS_3270)
+ add_preferred_console("tty3270", 0, NULL);
+}
+
static int __init conmode_setup(char *str)
{
#if defined(CONFIG_SCLP_CONSOLE) || defined(CONFIG_SCLP_VT220_CONSOLE)
if (strncmp(str, "3270", 5) == 0)
SET_CONSOLE_3270;
#endif
+ set_preferred_console();
return 1;
}
void __init
setup_arch(char **cmdline_p)
{
- /* set up preferred console */
- add_preferred_console("ttyS", 0, NULL);
-
/*
* print what head.S has found out about the machine
*/
if (MACHINE_IS_VM)
pr_info("Linux is running as a z/VM "
"guest operating system in 64-bit mode\n");
- else if (MACHINE_IS_KVM) {
+ else if (MACHINE_IS_KVM)
pr_info("Linux is running under KVM in 64-bit mode\n");
- add_preferred_console("hvc", 0, NULL);
- s390_virtio_console_init();
- } else
+ else
pr_info("Linux is running natively in 64-bit mode\n");
#endif /* CONFIG_64BIT */
/* Setup default console */
conmode_default();
+ set_preferred_console();
/* Setup zfcpdump support */
setup_zfcpdump(console_devno);
},
};
-/* KEYSC */
+/* KEYSC in SoC (Needs SW33-2 set to ON) */
static struct sh_keysc_info keysc_info = {
.mode = SH_KEYSC_MODE_1,
.scan_timing = 10,
static struct resource keysc_resources[] = {
[0] = {
- .start = 0x1a204000,
- .end = 0x1a20400f,
+ .name = "KEYSC",
+ .start = 0x044b0000,
+ .end = 0x044b000f,
.flags = IORESOURCE_MEM,
},
[1] = {
- .start = IRQ0_KEY,
+ .start = 79,
.flags = IORESOURCE_IRQ,
},
};
tst #SUSP_SH_SF, r0
bt skip_set_sf
+#ifdef CONFIG_CPU_SUBTYPE_SH7724
+ /* DBSC: put memory in self-refresh mode */
- /* SDRAM: disable power down and put in self-refresh mode */
+ mov.l dben_reg, r4
+ mov.l dben_data0, r1
+ mov.l r1, @r4
+
+ mov.l dbrfpdn0_reg, r4
+ mov.l dbrfpdn0_data0, r1
+ mov.l r1, @r4
+
+ mov.l dbcmdcnt_reg, r4
+ mov.l dbcmdcnt_data0, r1
+ mov.l r1, @r4
+
+ mov.l dbcmdcnt_reg, r4
+ mov.l dbcmdcnt_data1, r1
+ mov.l r1, @r4
+
+ mov.l dbrfpdn0_reg, r4
+ mov.l dbrfpdn0_data1, r1
+ mov.l r1, @r4
+#else
+ /* SBSC: disable power down and put in self-refresh mode */
mov.l 1f, r4
mov.l 2f, r1
mov.l @r4, r2
mov.l 3f, r3
and r3, r2
mov.l r2, @r4
+#endif
skip_set_sf:
tst #SUSP_SH_SLEEP, r0
tst #SUSP_SH_SF, r0
bt skip_restore_sf
- /* SDRAM: set auto-refresh mode */
+#ifdef CONFIG_CPU_SUBTYPE_SH7724
+ /* DBSC: put memory in auto-refresh mode */
+
+ mov.l dbrfpdn0_reg, r4
+ mov.l dbrfpdn0_data0, r1
+ mov.l r1, @r4
+
+ /* sleep 140 ns */
+ nop
+ nop
+ nop
+ nop
+
+ mov.l dbcmdcnt_reg, r4
+ mov.l dbcmdcnt_data0, r1
+ mov.l r1, @r4
+
+ mov.l dbcmdcnt_reg, r4
+ mov.l dbcmdcnt_data1, r1
+ mov.l r1, @r4
+
+ mov.l dben_reg, r4
+ mov.l dben_data1, r1
+ mov.l r1, @r4
+
+ mov.l dbrfpdn0_reg, r4
+ mov.l dbrfpdn0_data2, r1
+ mov.l r1, @r4
+#else
+ /* SBSC: set auto-refresh mode */
mov.l 1f, r4
mov.l @r4, r2
mov.l 4f, r3
add r4, r3
or r2, r3
mov.l r3, @r1
+#endif
skip_restore_sf:
rts
nop
.balign 4
+#ifdef CONFIG_CPU_SUBTYPE_SH7724
+dben_reg: .long 0xfd000010 /* DBEN */
+dben_data0: .long 0
+dben_data1: .long 1
+dbrfpdn0_reg: .long 0xfd000040 /* DBRFPDN0 */
+dbrfpdn0_data0: .long 0
+dbrfpdn0_data1: .long 1
+dbrfpdn0_data2: .long 0x00010000
+dbcmdcnt_reg: .long 0xfd000014 /* DBCMDCNT */
+dbcmdcnt_data0: .long 2
+dbcmdcnt_data1: .long 4
+#else
1: .long 0xfe400008 /* SDCR0 */
2: .long 0x00000400
3: .long 0xffff7fff
4: .long 0xfffffbff
+#endif
5: .long 0xa4150020 /* STBCR */
6: .long 0xfe40001c /* RTCOR */
7: .long 0xfe400018 /* RTCNT */
dyn_size = pcpur_size - static_size - PERCPU_MODULE_RESERVE;
- ptrs_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpur_ptrs[0]));
+ ptrs_size = PFN_ALIGN(nr_cpu_ids * sizeof(pcpur_ptrs[0]));
pcpur_ptrs = alloc_bootmem(ptrs_size);
for_each_possible_cpu(cpu) {
/* allocate address and map */
vm.flags = VM_ALLOC;
- vm.size = num_possible_cpus() * PCPU_CHUNK_SIZE;
+ vm.size = nr_cpu_ids * PCPU_CHUNK_SIZE;
vm_area_register_early(&vm, PCPU_CHUNK_SIZE);
for_each_possible_cpu(cpu) {
 * see table 4.2.3.0.1 in broadcast_assist spec.
*/
struct bau_msg_header {
- unsigned int dest_subnodeid:6; /* must be zero */
+ unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */
/* bits 5:0 */
unsigned int base_dest_nodeid:15; /* nasid>>1 (pnode) of */
/* bits 20:6 */ /* first bit in node_map */
unsigned long mask = cpumask_bits(cpumask)[0];
unsigned long flags;
+ if (WARN_ONCE(!mask, "empty IPI mask"))
+ return;
+
local_irq_save(flags);
WARN_ON(mask & ~cpumask_bits(cpu_online_mask)[0]);
__default_send_IPI_dest_field(mask, vector, apic->dest_logical);
return node_id.s.node_id;
}
-static int uv_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
+static int __init uv_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{
if (!strcmp(oem_id, "SGI")) {
if (!strcmp(oem_table_id, "UVL"))
apic_write(APIC_SELF_IPI, vector);
}
-struct apic apic_x2apic_uv_x = {
+struct apic __refdata apic_x2apic_uv_x = {
.name = "UV large system",
.probe = NULL,
}
/* Add per CPU specific workarounds here */
-static void mce_cpu_quirks(struct cpuinfo_x86 *c)
+static int mce_cpu_quirks(struct cpuinfo_x86 *c)
{
+ if (c->x86_vendor == X86_VENDOR_UNKNOWN) {
+ pr_info("MCE: unknown CPU type - not enabling MCE support.\n");
+ return -EOPNOTSUPP;
+ }
+
/* This should be disabled by the BIOS, but isn't always */
if (c->x86_vendor == X86_VENDOR_AMD) {
if (c->x86 == 15 && banks > 4) {
if ((c->x86 > 6 || (c->x86 == 6 && c->x86_model >= 0xe)) &&
monarch_timeout < 0)
monarch_timeout = USEC_PER_SEC;
+
+ /*
+ * There are also broken BIOSes on some Pentium M and
+ * earlier systems:
+ */
+ if (c->x86 == 6 && c->x86_model <= 13 && mce_bootlog < 0)
+ mce_bootlog = 0;
}
if (monarch_timeout < 0)
monarch_timeout = 0;
if (mce_bootlog != 0)
mce_panic_timeout = 30;
+
+ return 0;
}
static void __cpuinit mce_ancient_init(struct cpuinfo_x86 *c)
if (!mce_available(c))
return;
- if (mce_cap_init() < 0) {
+ if (mce_cap_init() < 0 || mce_cpu_quirks(c) < 0) {
mce_disabled = 1;
return;
}
- mce_cpu_quirks(c);
machine_check_vector = do_machine_check;
cpu, __get_cpu_var(thermal_throttle_count));
add_taint(TAINT_MACHINE_CHECK);
- } else if (was_throttled) {
+ return 1;
+ }
+ if (was_throttled) {
printk(KERN_INFO "CPU%d: Temperature/speed normal\n", cpu);
+ return 1;
}
- return 1;
+ return 0;
}
#ifdef CONFIG_SYSFS
if (!chosen) {
size_t vm_size = VMALLOC_END - VMALLOC_START;
- size_t tot_size = num_possible_cpus() * PMD_SIZE;
+ size_t tot_size = nr_cpu_ids * PMD_SIZE;
/* on non-NUMA, embedding is better */
if (!pcpu_need_numa())
dyn_size = pcpul_size - static_size - PERCPU_FIRST_CHUNK_RESERVE;
/* allocate pointer array and alloc large pages */
- map_size = PFN_ALIGN(num_possible_cpus() * sizeof(pcpul_map[0]));
+ map_size = PFN_ALIGN(nr_cpu_ids * sizeof(pcpul_map[0]));
pcpul_map = alloc_bootmem(map_size);
for_each_possible_cpu(cpu) {
/* allocate address and map */
pcpul_vm.flags = VM_ALLOC;
- pcpul_vm.size = num_possible_cpus() * PMD_SIZE;
+ pcpul_vm.size = nr_cpu_ids * PMD_SIZE;
vm_area_register_early(&pcpul_vm, PMD_SIZE);
for_each_possible_cpu(cpu) {
PMD_SIZE, pcpul_vm.addr, NULL);
/* sort pcpul_map array for pcpu_lpage_remapped() */
- for (i = 0; i < num_possible_cpus() - 1; i++)
- for (j = i + 1; j < num_possible_cpus(); j++)
+ for (i = 0; i < nr_cpu_ids - 1; i++)
+ for (j = i + 1; j < nr_cpu_ids; j++)
if (pcpul_map[i].ptr > pcpul_map[j].ptr) {
struct pcpul_ent tmp = pcpul_map[i];
pcpul_map[i] = pcpul_map[j];
{
void *pmd_addr = (void *)((unsigned long)kaddr & PMD_MASK);
unsigned long offset = (unsigned long)kaddr & ~PMD_MASK;
- int left = 0, right = num_possible_cpus() - 1;
+ int left = 0, right = nr_cpu_ids - 1;
int pos;
/* pcpul in use at all? */
pcpu4k_nr_static_pages = PFN_UP(static_size);
/* unaligned allocations can't be freed, round up to page size */
- pages_size = PFN_ALIGN(pcpu4k_nr_static_pages * num_possible_cpus()
+ pages_size = PFN_ALIGN(pcpu4k_nr_static_pages * nr_cpu_ids
* sizeof(pcpu4k_pages[0]));
pcpu4k_pages = alloc_bootmem(pages_size);
* note that base_dest_nodeid is actually a nasid.
*/
ad2->header.base_dest_nodeid = uv_partition_base_pnode << 1;
+ ad2->header.dest_subnodeid = 0x10; /* the LB */
ad2->header.command = UV_NET_ENDPOINT_INTD;
ad2->header.int_both = 1;
/*
f->flush_mm = mm;
f->flush_va = va;
- cpumask_andnot(to_cpumask(f->flush_cpumask),
- cpumask, cpumask_of(smp_processor_id()));
-
- /*
- * We have to send the IPI only to
- * CPUs affected.
- */
- apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
- INVALIDATE_TLB_VECTOR_START + sender);
+ if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
+ /*
+ * We have to send the IPI only to
+ * CPUs affected.
+ */
+ apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
+ INVALIDATE_TLB_VECTOR_START + sender);
- while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
- cpu_relax();
+ while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
+ cpu_relax();
+ }
f->flush_mm = NULL;
f->flush_va = 0;
* be obtained while the delayed work queue halt ensures that no more
* data is fed to the ldisc.
*
- * In order to wait for any existing references to complete see
- * tty_ldisc_wait_idle.
+ * You need to do a 'flush_scheduled_work()' (outside the ldisc_mutex)
+ * in order to make sure any currently executing ldisc work is also
+ * flushed.
*/
static int tty_ldisc_halt(struct tty_struct *tty)
* N_TTY.
*/
if (tty->driver->flags & TTY_DRIVER_RESET_TERMIOS) {
+ /* Make sure the old ldisc is quiescent */
+ tty_ldisc_halt(tty);
+ flush_scheduled_work();
+
/* Avoid racing set_ldisc or tty_ldisc_release */
mutex_lock(&tty->ldisc_mutex);
if (tty->ldisc) { /* Not yet closed */
/* Switch back to N_TTY */
- tty_ldisc_halt(tty);
tty_ldisc_reinit(tty);
/* At this point we have a closed ldisc and we want to
reopen it. We could defer this to the next open but
struct platform_device *pdev;
unsigned long flags;
+ unsigned long flags_suspend;
unsigned long match_value;
unsigned long next_match_value;
unsigned long max_match_value;
return -EBUSY; /* cannot unregister clockevent and clocksource */
}
+static int sh_cmt_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct sh_cmt_priv *p = platform_get_drvdata(pdev);
+
+ /* save flag state and stop CMT channel */
+ p->flags_suspend = p->flags;
+ sh_cmt_stop(p, p->flags);
+ return 0;
+}
+
+static int sh_cmt_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct sh_cmt_priv *p = platform_get_drvdata(pdev);
+
+ /* start CMT channel from saved state */
+ sh_cmt_start(p, p->flags_suspend);
+ return 0;
+}
+
+static struct dev_pm_ops sh_cmt_dev_pm_ops = {
+ .suspend = sh_cmt_suspend,
+ .resume = sh_cmt_resume,
+};
+
static struct platform_driver sh_cmt_device_driver = {
.probe = sh_cmt_probe,
.remove = __devexit_p(sh_cmt_remove),
.driver = {
.name = "sh_cmt",
+ .pm = &sh_cmt_dev_pm_ops,
}
};
}
EXPORT_SYMBOL(drm_mode_object_find);
-/**
- * drm_crtc_from_fb - find the CRTC structure associated with an fb
- * @dev: DRM device
- * @fb: framebuffer in question
- *
- * LOCKING:
- * Caller must hold mode_config lock.
- *
- * Find CRTC in the mode_config structure that matches @fb.
- *
- * RETURNS:
- * Pointer to the CRTC or NULL if it wasn't found.
- */
-struct drm_crtc *drm_crtc_from_fb(struct drm_device *dev,
- struct drm_framebuffer *fb)
-{
- struct drm_crtc *crtc;
-
- list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
- if (crtc->fb == fb)
- return crtc;
- }
- return NULL;
-}
-
/**
* drm_framebuffer_init - initialize a framebuffer
* @dev: DRM device
{
struct drm_device *dev = fb->dev;
struct drm_crtc *crtc;
+ struct drm_mode_set set;
+ int ret;
/* remove from any CRTC */
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
- if (crtc->fb == fb)
- crtc->fb = NULL;
+ if (crtc->fb == fb) {
+ /* should turn off the crtc */
+ memset(&set, 0, sizeof(struct drm_mode_set));
+ set.crtc = crtc;
+ set.fb = NULL;
+ ret = crtc->funcs->set_config(&set);
+ if (ret)
+ DRM_ERROR("failed to reset crtc %p when fb was deleted\n", crtc);
+ }
}
drm_mode_object_put(dev, &fb->base);
set.mode = mode;
set.connectors = connector_set;
set.num_connectors = crtc_req->count_connectors;
- set.fb =fb;
+ set.fb = fb;
ret = crtc->funcs->set_config(&set);
out:
struct detailed_non_pixel *data = &timing->data.other_data;
struct drm_display_mode *newmode;
- /* EDID up to and including 1.2 may put monitor info here */
- if (edid->version == 1 && edid->revision < 3)
- continue;
-
- /* Detailed mode timing */
- if (timing->pixel_clock) {
+ /* X server check is version 1.1 or higher */
+ if (edid->version == 1 && edid->revision >= 1 &&
+ !timing->pixel_clock) {
+ /* Other timing or info */
+ switch (data->type) {
+ case EDID_DETAIL_MONITOR_SERIAL:
+ break;
+ case EDID_DETAIL_MONITOR_STRING:
+ break;
+ case EDID_DETAIL_MONITOR_RANGE:
+ /* Get monitor range data */
+ break;
+ case EDID_DETAIL_MONITOR_NAME:
+ break;
+ case EDID_DETAIL_MONITOR_CPDATA:
+ break;
+ case EDID_DETAIL_STD_MODES:
+ /* Five modes per detailed section */
+				for (j = 0; j < 5; j++) {
+ struct std_timing *std;
+ struct drm_display_mode *newmode;
+
+ std = &data->data.timings[j];
+ newmode = drm_mode_std(dev, std);
+ if (newmode) {
+ drm_mode_probed_add(connector, newmode);
+ modes++;
+ }
+ }
+ break;
+ default:
+ break;
+ }
+ } else {
newmode = drm_mode_detailed(dev, edid, timing, quirks);
if (!newmode)
continue;
drm_mode_probed_add(connector, newmode);
modes++;
- continue;
- }
-
- /* Other timing or info */
- switch (data->type) {
- case EDID_DETAIL_MONITOR_SERIAL:
- break;
- case EDID_DETAIL_MONITOR_STRING:
- break;
- case EDID_DETAIL_MONITOR_RANGE:
- /* Get monitor range data */
- break;
- case EDID_DETAIL_MONITOR_NAME:
- break;
- case EDID_DETAIL_MONITOR_CPDATA:
- break;
- case EDID_DETAIL_STD_MODES:
- /* Five modes per detailed section */
- for (j = 0; j < 5; i++) {
- struct std_timing *std;
- struct drm_display_mode *newmode;
-
- std = &data->data.timings[j];
- newmode = drm_mode_std(dev, std);
- if (newmode) {
- drm_mode_probed_add(connector, newmode);
- modes++;
- }
- }
- break;
- default:
- break;
}
}
#define to_drm_minor(d) container_of(d, struct drm_minor, kdev)
#define to_drm_connector(d) container_of(d, struct drm_connector, kdev)
+static struct device_type drm_sysfs_device_minor = {
+ .name = "drm_minor"
+};
+
/**
- * drm_sysfs_suspend - DRM class suspend hook
+ * drm_class_suspend - DRM class suspend hook
* @dev: Linux device to suspend
* @state: power state to enter
*
* Just figures out what the actual struct drm_device associated with
* @dev is and calls its suspend hook, if present.
*/
-static int drm_sysfs_suspend(struct device *dev, pm_message_t state)
+static int drm_class_suspend(struct device *dev, pm_message_t state)
{
- struct drm_minor *drm_minor = to_drm_minor(dev);
- struct drm_device *drm_dev = drm_minor->dev;
-
- if (drm_minor->type == DRM_MINOR_LEGACY &&
- !drm_core_check_feature(drm_dev, DRIVER_MODESET) &&
- drm_dev->driver->suspend)
- return drm_dev->driver->suspend(drm_dev, state);
-
+ if (dev->type == &drm_sysfs_device_minor) {
+ struct drm_minor *drm_minor = to_drm_minor(dev);
+ struct drm_device *drm_dev = drm_minor->dev;
+
+ if (drm_minor->type == DRM_MINOR_LEGACY &&
+ !drm_core_check_feature(drm_dev, DRIVER_MODESET) &&
+ drm_dev->driver->suspend)
+ return drm_dev->driver->suspend(drm_dev, state);
+ }
return 0;
}
/**
- * drm_sysfs_resume - DRM class resume hook
+ * drm_class_resume - DRM class resume hook
* @dev: Linux device to resume
*
* Just figures out what the actual struct drm_device associated with
* @dev is and calls its resume hook, if present.
*/
-static int drm_sysfs_resume(struct device *dev)
+static int drm_class_resume(struct device *dev)
{
- struct drm_minor *drm_minor = to_drm_minor(dev);
- struct drm_device *drm_dev = drm_minor->dev;
-
- if (drm_minor->type == DRM_MINOR_LEGACY &&
- !drm_core_check_feature(drm_dev, DRIVER_MODESET) &&
- drm_dev->driver->resume)
- return drm_dev->driver->resume(drm_dev);
-
+ if (dev->type == &drm_sysfs_device_minor) {
+ struct drm_minor *drm_minor = to_drm_minor(dev);
+ struct drm_device *drm_dev = drm_minor->dev;
+
+ if (drm_minor->type == DRM_MINOR_LEGACY &&
+ !drm_core_check_feature(drm_dev, DRIVER_MODESET) &&
+ drm_dev->driver->resume)
+ return drm_dev->driver->resume(drm_dev);
+ }
return 0;
}
goto err_out;
}
- class->suspend = drm_sysfs_suspend;
- class->resume = drm_sysfs_resume;
+ class->suspend = drm_class_suspend;
+ class->resume = drm_class_resume;
err = class_create_file(class, &class_attr_version);
if (err)
minor->kdev.class = drm_class;
minor->kdev.release = drm_sysfs_device_release;
minor->kdev.devt = minor->device;
+ minor->kdev.type = &drm_sysfs_device_minor;
if (minor->type == DRM_MINOR_CONTROL)
minor_str = "controlD%d";
else if (minor->type == DRM_MINOR_RENDER)
}
+/*
+ * Interrupts
+ */
+int r100_irq_set(struct radeon_device *rdev)
+{
+ uint32_t tmp = 0;
+
+ if (rdev->irq.sw_int) {
+ tmp |= RADEON_SW_INT_ENABLE;
+ }
+ if (rdev->irq.crtc_vblank_int[0]) {
+ tmp |= RADEON_CRTC_VBLANK_MASK;
+ }
+ if (rdev->irq.crtc_vblank_int[1]) {
+ tmp |= RADEON_CRTC2_VBLANK_MASK;
+ }
+ WREG32(RADEON_GEN_INT_CNTL, tmp);
+ return 0;
+}
+
+static inline uint32_t r100_irq_ack(struct radeon_device *rdev)
+{
+ uint32_t irqs = RREG32(RADEON_GEN_INT_STATUS);
+ uint32_t irq_mask = RADEON_SW_INT_TEST | RADEON_CRTC_VBLANK_STAT |
+ RADEON_CRTC2_VBLANK_STAT;
+
+ if (irqs) {
+ WREG32(RADEON_GEN_INT_STATUS, irqs);
+ }
+ return irqs & irq_mask;
+}
+
+int r100_irq_process(struct radeon_device *rdev)
+{
+ uint32_t status;
+
+ status = r100_irq_ack(rdev);
+ if (!status) {
+ return IRQ_NONE;
+ }
+ while (status) {
+ /* SW interrupt */
+ if (status & RADEON_SW_INT_TEST) {
+ radeon_fence_process(rdev);
+ }
+ /* Vertical blank interrupts */
+ if (status & RADEON_CRTC_VBLANK_STAT) {
+ drm_handle_vblank(rdev->ddev, 0);
+ }
+ if (status & RADEON_CRTC2_VBLANK_STAT) {
+ drm_handle_vblank(rdev->ddev, 1);
+ }
+ status = r100_irq_ack(rdev);
+ }
+ return IRQ_HANDLED;
+}
+
+u32 r100_get_vblank_counter(struct radeon_device *rdev, int crtc)
+{
+ if (crtc == 0)
+ return RREG32(RADEON_CRTC_CRNT_FRAME);
+ else
+ return RREG32(RADEON_CRTC2_CRNT_FRAME);
+}
+
+
/*
* Fence emission
*/
tmp |= tile_flags;
ib[idx] = tmp;
break;
+ case RADEON_RB3D_ZPASS_ADDR:
+ r = r100_cs_packet_next_reloc(p, &reloc);
+ if (r) {
+ DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
+ idx, reg);
+ r100_cs_dump_packet(p, pkt);
+ return r;
+ }
+ ib[idx] = ib_chunk->kdata[idx] + ((u32)reloc->lobj.gpu_offset);
+ break;
default:
		/* FIXME: we don't want to allow any other packets */
break;
r100_pll_errata_after_data(rdev);
}
-uint32_t r100_mm_rreg(struct radeon_device *rdev, uint32_t reg)
-{
- if (reg < 0x10000)
- return readl(((void __iomem *)rdev->rmmio) + reg);
- else {
- writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX);
- return readl(((void __iomem *)rdev->rmmio) + RADEON_MM_DATA);
- }
-}
-
-void r100_mm_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
-{
- if (reg < 0x10000)
- writel(v, ((void __iomem *)rdev->rmmio) + reg);
- else {
- writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX);
- writel(v, ((void __iomem *)rdev->rmmio) + RADEON_MM_DATA);
- }
-}
-
int r100_init(struct radeon_device *rdev)
{
return 0;
WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp | RADEON_PCIE_TX_GART_INVALIDATE_TLB);
(void)RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
- mb();
}
+ mb();
}
int rv370_pcie_gart_enable(struct radeon_device *rdev)
/* rv350,rv370,rv380 */
rdev->num_gb_pipes = 1;
}
+ rdev->num_z_pipes = 1;
gb_tile_config = (R300_ENABLE_TILING | R300_TILE_SIZE_16);
switch (rdev->num_gb_pipes) {
case 2:
printk(KERN_WARNING "Failed to wait MC idle while "
"programming pipes. Bad things might happen.\n");
}
- DRM_INFO("radeon: %d pipes initialized.\n", rdev->num_gb_pipes);
+ DRM_INFO("radeon: %d quad pipes, %d Z pipes initialized.\n",
+ rdev->num_gb_pipes, rdev->num_z_pipes);
}
int r300_ga_reset(struct radeon_device *rdev)
}
-/*
- * Indirect registers accessor
- */
-uint32_t rv370_pcie_rreg(struct radeon_device *rdev, uint32_t reg)
-{
- uint32_t r;
-
- WREG8(RADEON_PCIE_INDEX, ((reg) & 0xff));
- (void)RREG32(RADEON_PCIE_INDEX);
- r = RREG32(RADEON_PCIE_DATA);
- return r;
-}
-
-void rv370_pcie_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
-{
- WREG8(RADEON_PCIE_INDEX, ((reg) & 0xff));
- (void)RREG32(RADEON_PCIE_INDEX);
- WREG32(RADEON_PCIE_DATA, (v));
- (void)RREG32(RADEON_PCIE_DATA);
-}
-
/*
* PCIE Lanes
*/
tmp = (ib_chunk->kdata[idx] >> 22) & 0xF;
track->textures[i].txdepth = tmp;
break;
+ case R300_ZB_ZPASS_ADDR:
+ r = r100_cs_packet_next_reloc(p, &reloc);
+ if (r) {
+ DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
+ idx, reg);
+ r100_cs_dump_packet(p, pkt);
+ return r;
+ }
+ ib[idx] = ib_chunk->kdata[idx] + ((u32)reloc->lobj.gpu_offset);
+ break;
+ case 0x4be8:
+ /* valid register only on RV530 */
+ if (p->rdev->family == CHIP_RV530)
+ break;
+ /* fallthrough do not move */
default:
printk(KERN_ERR "Forbidden register 0x%04X in cs at %d\n",
reg, idx);
printk(KERN_WARNING "Failed to wait GUI idle while "
"programming pipes. Bad things might happen.\n");
}
- DRM_INFO("radeon: %d pipes initialized.\n", rdev->num_gb_pipes);
+
+ if (rdev->family == CHIP_RV530) {
+ tmp = RREG32(RV530_GB_PIPE_SELECT2);
+ if ((tmp & 3) == 3)
+ rdev->num_z_pipes = 2;
+ else
+ rdev->num_z_pipes = 1;
+ } else
+ rdev->num_z_pipes = 1;
+
+ DRM_INFO("radeon: %d quad pipes, %d z pipes initialized.\n",
+ rdev->num_gb_pipes, rdev->num_z_pipes);
}
void r420_gpu_init(struct radeon_device *rdev)
#define AVIVO_D1CRTC_BLANK_CONTROL 0x6084
#define AVIVO_D1CRTC_INTERLACE_CONTROL 0x6088
#define AVIVO_D1CRTC_INTERLACE_STATUS 0x608c
+#define AVIVO_D1CRTC_FRAME_COUNT 0x60a4
#define AVIVO_D1CRTC_STEREO_CONTROL 0x60c4
/* master controls */
# define AVIVO_DC_LB_DISP1_END_ADR_SHIFT 4
# define AVIVO_DC_LB_DISP1_END_ADR_MASK 0x7ff
-#define R500_DxMODE_INT_MASK 0x6540
-#define R500_D1MODE_INT_MASK (1<<0)
-#define R500_D2MODE_INT_MASK (1<<8)
-
#define AVIVO_D1MODE_DATA_FORMAT 0x6528
# define AVIVO_D1MODE_INTERLEAVE_EN (1 << 0)
#define AVIVO_D1MODE_DESKTOP_HEIGHT 0x652C
+#define AVIVO_D1MODE_VBLANK_STATUS 0x6534
+# define AVIVO_VBLANK_ACK (1 << 4)
#define AVIVO_D1MODE_VLINE_START_END 0x6538
+#define AVIVO_DxMODE_INT_MASK 0x6540
+# define AVIVO_D1MODE_INT_MASK (1 << 0)
+# define AVIVO_D2MODE_INT_MASK (1 << 8)
#define AVIVO_D1MODE_VIEWPORT_START 0x6580
#define AVIVO_D1MODE_VIEWPORT_SIZE 0x6584
#define AVIVO_D1MODE_EXT_OVERSCAN_LEFT_RIGHT 0x6588
#define AVIVO_D2CRTC_BLANK_CONTROL 0x6884
#define AVIVO_D2CRTC_INTERLACE_CONTROL 0x6888
#define AVIVO_D2CRTC_INTERLACE_STATUS 0x688c
+#define AVIVO_D2CRTC_FRAME_COUNT 0x68a4
#define AVIVO_D2CRTC_STEREO_CONTROL 0x68c4
#define AVIVO_D2GRPH_ENABLE 0x6900
#define AVIVO_D2CUR_SIZE 0x6c10
#define AVIVO_D2CUR_POSITION 0x6c14
+#define AVIVO_D2MODE_VBLANK_STATUS 0x6d34
#define AVIVO_D2MODE_VLINE_START_END 0x6d38
#define AVIVO_D2MODE_VIEWPORT_START 0x6d80
#define AVIVO_D2MODE_VIEWPORT_SIZE 0x6d84
# define AVIVO_I2C_EN (1 << 0)
# define AVIVO_I2C_RESET (1 << 8)
+#define AVIVO_DISP_INTERRUPT_STATUS 0x7edc
+# define AVIVO_D1_VBLANK_INTERRUPT (1 << 4)
+# define AVIVO_D2_VBLANK_INTERRUPT (1 << 5)
+
#endif
*/
/* workaround for RV530 */
if (rdev->family == CHIP_RV530) {
- WREG32(0x4124, 1);
WREG32(0x4128, 0xFF);
}
r420_pipes_init(rdev);
uint64_t *gpu_addr);
void radeon_object_unpin(struct radeon_object *robj);
int radeon_object_wait(struct radeon_object *robj);
+int radeon_object_busy_domain(struct radeon_object *robj, uint32_t *cur_placement);
int radeon_object_evict_vram(struct radeon_device *rdev);
int radeon_object_mmap(struct radeon_object *robj, uint64_t *offset);
void radeon_object_force_delete(struct radeon_device *rdev);
void (*ring_start)(struct radeon_device *rdev);
int (*irq_set)(struct radeon_device *rdev);
int (*irq_process)(struct radeon_device *rdev);
+ u32 (*get_vblank_counter)(struct radeon_device *rdev, int crtc);
void (*fence_ring_emit)(struct radeon_device *rdev, struct radeon_fence *fence);
int (*cs_parse)(struct radeon_cs_parser *p);
int (*copy_blit)(struct radeon_device *rdev,
int usec_timeout;
enum radeon_pll_errata pll_errata;
int num_gb_pipes;
+ int num_z_pipes;
int disp_priority;
/* BIOS */
uint8_t *bios;
resource_size_t rmmio_base;
resource_size_t rmmio_size;
void *rmmio;
- radeon_rreg_t mm_rreg;
- radeon_wreg_t mm_wreg;
radeon_rreg_t mc_rreg;
radeon_wreg_t mc_wreg;
radeon_rreg_t pll_rreg;
radeon_wreg_t pll_wreg;
- radeon_rreg_t pcie_rreg;
- radeon_wreg_t pcie_wreg;
+ uint32_t pcie_reg_mask;
radeon_rreg_t pciep_rreg;
radeon_wreg_t pciep_wreg;
struct radeon_clock clock;
void radeon_device_fini(struct radeon_device *rdev);
int radeon_gpu_wait_for_idle(struct radeon_device *rdev);
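+/*
+ * MMIO register helpers: offsets below 64KiB map straight onto the BAR;
+ * anything above that window goes through RADEON_MM_INDEX/RADEON_MM_DATA.
+ */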
+static inline uint32_t r100_mm_rreg(struct radeon_device *rdev, uint32_t reg)
+{
+ if (reg < 0x10000)
+ return readl(((void __iomem *)rdev->rmmio) + reg);
+ else {
+ writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX);
+ return readl(((void __iomem *)rdev->rmmio) + RADEON_MM_DATA);
+ }
+}
+
+static inline void r100_mm_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
+{
+ if (reg < 0x10000)
+ writel(v, ((void __iomem *)rdev->rmmio) + reg);
+ else {
+ writel(reg, ((void __iomem *)rdev->rmmio) + RADEON_MM_INDEX);
+ writel(v, ((void __iomem *)rdev->rmmio) + RADEON_MM_DATA);
+ }
+}
+
/*
* Registers read & write functions.
*/
#define RREG8(reg) readb(((void __iomem *)rdev->rmmio) + (reg))
#define WREG8(reg, v) writeb(v, ((void __iomem *)rdev->rmmio) + (reg))
-#define RREG32(reg) rdev->mm_rreg(rdev, (reg))
-#define WREG32(reg, v) rdev->mm_wreg(rdev, (reg), (v))
+#define RREG32(reg) r100_mm_rreg(rdev, (reg))
+#define WREG32(reg, v) r100_mm_wreg(rdev, (reg), (v))
#define REG_SET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK)
#define REG_GET(FIELD, v) (((v) << FIELD##_SHIFT) & FIELD##_MASK)
#define RREG32_PLL(reg) rdev->pll_rreg(rdev, (reg))
#define WREG32_PLL(reg, v) rdev->pll_wreg(rdev, (reg), (v))
#define RREG32_MC(reg) rdev->mc_rreg(rdev, (reg))
#define WREG32_MC(reg, v) rdev->mc_wreg(rdev, (reg), (v))
-#define RREG32_PCIE(reg) rdev->pcie_rreg(rdev, (reg))
-#define WREG32_PCIE(reg, v) rdev->pcie_wreg(rdev, (reg), (v))
+#define RREG32_PCIE(reg) rv370_pcie_rreg(rdev, (reg))
+#define WREG32_PCIE(reg, v) rv370_pcie_wreg(rdev, (reg), (v))
#define WREG32_P(reg, val, mask) \
do { \
uint32_t tmp_ = RREG32(reg); \
WREG32_PLL(reg, tmp_); \
} while (0)
+/*
+ * Indirect register accessors (PCIE index/data ports)
+ */
+static inline uint32_t rv370_pcie_rreg(struct radeon_device *rdev, uint32_t reg)
+{
+ uint32_t r;
+
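+	/* select the register via the index port, then read it back through the data port */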
+ WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask));
+ r = RREG32(RADEON_PCIE_DATA);
+ return r;
+}
+
+static inline void rv370_pcie_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
+{
+ WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask));
+ WREG32(RADEON_PCIE_DATA, (v));
+}
+
void r100_pll_errata_after_index(struct radeon_device *rdev);
#define radeon_ring_start(rdev) (rdev)->asic->ring_start((rdev))
#define radeon_irq_set(rdev) (rdev)->asic->irq_set((rdev))
#define radeon_irq_process(rdev) (rdev)->asic->irq_process((rdev))
+#define radeon_get_vblank_counter(rdev, crtc) (rdev)->asic->get_vblank_counter((rdev), (crtc))
#define radeon_fence_ring_emit(rdev, fence) (rdev)->asic->fence_ring_emit((rdev), (fence))
#define radeon_copy_blit(rdev, s, d, np, f) (rdev)->asic->copy_blit((rdev), (s), (d), (np), (f))
#define radeon_copy_dma(rdev, s, d, np, f) (rdev)->asic->copy_dma((rdev), (s), (d), (np), (f))
int r100_gpu_reset(struct radeon_device *rdev);
int r100_mc_init(struct radeon_device *rdev);
void r100_mc_fini(struct radeon_device *rdev);
+u32 r100_get_vblank_counter(struct radeon_device *rdev, int crtc);
int r100_wb_init(struct radeon_device *rdev);
void r100_wb_fini(struct radeon_device *rdev);
int r100_gart_enable(struct radeon_device *rdev);
.ring_start = &r100_ring_start,
.irq_set = &r100_irq_set,
.irq_process = &r100_irq_process,
+ .get_vblank_counter = &r100_get_vblank_counter,
.fence_ring_emit = &r100_fence_ring_emit,
.cs_parse = &r100_cs_parse,
.copy_blit = &r100_copy_blit,
.ring_start = &r300_ring_start,
.irq_set = &r100_irq_set,
.irq_process = &r100_irq_process,
+ .get_vblank_counter = &r100_get_vblank_counter,
.fence_ring_emit = &r300_fence_ring_emit,
.cs_parse = &r300_cs_parse,
.copy_blit = &r100_copy_blit,
.ring_start = &r300_ring_start,
.irq_set = &r100_irq_set,
.irq_process = &r100_irq_process,
+ .get_vblank_counter = &r100_get_vblank_counter,
.fence_ring_emit = &r300_fence_ring_emit,
.cs_parse = &r300_cs_parse,
.copy_blit = &r100_copy_blit,
.ring_start = &r300_ring_start,
.irq_set = &r100_irq_set,
.irq_process = &r100_irq_process,
+ .get_vblank_counter = &r100_get_vblank_counter,
.fence_ring_emit = &r300_fence_ring_emit,
.cs_parse = &r300_cs_parse,
.copy_blit = &r100_copy_blit,
int rs600_mc_init(struct radeon_device *rdev);
void rs600_mc_fini(struct radeon_device *rdev);
int rs600_irq_set(struct radeon_device *rdev);
+int rs600_irq_process(struct radeon_device *rdev);
+u32 rs600_get_vblank_counter(struct radeon_device *rdev, int crtc);
int rs600_gart_enable(struct radeon_device *rdev);
void rs600_gart_disable(struct radeon_device *rdev);
void rs600_gart_tlb_flush(struct radeon_device *rdev);
.cp_disable = &r100_cp_disable,
.ring_start = &r300_ring_start,
.irq_set = &rs600_irq_set,
- .irq_process = &r100_irq_process,
+ .irq_process = &rs600_irq_process,
+ .get_vblank_counter = &rs600_get_vblank_counter,
.fence_ring_emit = &r300_fence_ring_emit,
.cs_parse = &r300_cs_parse,
.copy_blit = &r100_copy_blit,
/*
* rs690,rs740
*/
+int rs690_init(struct radeon_device *rdev);
void rs690_errata(struct radeon_device *rdev);
void rs690_vram_info(struct radeon_device *rdev);
int rs690_mc_init(struct radeon_device *rdev);
void rs690_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
void rs690_bandwidth_update(struct radeon_device *rdev);
static struct radeon_asic rs690_asic = {
- .init = &r300_init,
+ .init = &rs690_init,
.errata = &rs690_errata,
.vram_info = &rs690_vram_info,
.gpu_reset = &r300_gpu_reset,
.cp_disable = &r100_cp_disable,
.ring_start = &r300_ring_start,
.irq_set = &rs600_irq_set,
- .irq_process = &r100_irq_process,
+ .irq_process = &rs600_irq_process,
+ .get_vblank_counter = &rs600_get_vblank_counter,
.fence_ring_emit = &r300_fence_ring_emit,
.cs_parse = &r300_cs_parse,
.copy_blit = &r100_copy_blit,
.cp_fini = &r100_cp_fini,
.cp_disable = &r100_cp_disable,
.ring_start = &rv515_ring_start,
- .irq_set = &r100_irq_set,
- .irq_process = &r100_irq_process,
+ .irq_set = &rs600_irq_set,
+ .irq_process = &rs600_irq_process,
+ .get_vblank_counter = &rs600_get_vblank_counter,
.fence_ring_emit = &r300_fence_ring_emit,
.cs_parse = &r300_cs_parse,
.copy_blit = &r100_copy_blit,
.cp_fini = &r100_cp_fini,
.cp_disable = &r100_cp_disable,
.ring_start = &rv515_ring_start,
- .irq_set = &r100_irq_set,
- .irq_process = &r100_irq_process,
+ .irq_set = &rs600_irq_set,
+ .irq_process = &rs600_irq_process,
+ .get_vblank_counter = &rs600_get_vblank_counter,
.fence_ring_emit = &r300_fence_ring_emit,
.cs_parse = &r300_cs_parse,
.copy_blit = &r100_copy_blit,
0x00780000, /* rs480 */
};
-static struct radeon_encoder_tv_dac
- *radeon_legacy_get_tv_dac_info_from_table(struct radeon_device *rdev)
+static void radeon_legacy_get_tv_dac_info_from_table(struct radeon_device *rdev,
+ struct radeon_encoder_tv_dac *tv_dac)
{
- struct radeon_encoder_tv_dac *tv_dac = NULL;
-
- tv_dac = kzalloc(sizeof(struct radeon_encoder_tv_dac), GFP_KERNEL);
-
- if (!tv_dac)
- return NULL;
-
tv_dac->ps2_tvdac_adj = default_tvdac_adj[rdev->family];
if ((rdev->flags & RADEON_IS_MOBILITY) && (rdev->family == CHIP_RV250))
tv_dac->ps2_tvdac_adj = 0x00880000;
tv_dac->pal_tvdac_adj = tv_dac->ps2_tvdac_adj;
tv_dac->ntsc_tvdac_adj = tv_dac->ps2_tvdac_adj;
-
- return tv_dac;
+ return;
}
struct radeon_encoder_tv_dac *radeon_combios_get_tv_dac_info(struct
uint16_t dac_info;
uint8_t rev, bg, dac;
struct radeon_encoder_tv_dac *tv_dac = NULL;
+ int found = 0;
+
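+	/* allocate up front; the BIOS table parsing below fills this in and sets 'found' */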
+ tv_dac = kzalloc(sizeof(struct radeon_encoder_tv_dac), GFP_KERNEL);
+ if (!tv_dac)
+ return NULL;
if (rdev->bios == NULL)
- return radeon_legacy_get_tv_dac_info_from_table(rdev);
+ goto out;
/* first check TV table */
dac_info = combios_get_table_offset(dev, COMBIOS_TV_INFO_TABLE);
if (dac_info) {
- tv_dac =
- kzalloc(sizeof(struct radeon_encoder_tv_dac), GFP_KERNEL);
-
- if (!tv_dac)
- return NULL;
-
rev = RBIOS8(dac_info + 0x3);
if (rev > 4) {
bg = RBIOS8(dac_info + 0xc) & 0xf;
bg = RBIOS8(dac_info + 0x10) & 0xf;
dac = RBIOS8(dac_info + 0x11) & 0xf;
tv_dac->ntsc_tvdac_adj = (bg << 16) | (dac << 20);
+ found = 1;
} else if (rev > 1) {
bg = RBIOS8(dac_info + 0xc) & 0xf;
dac = (RBIOS8(dac_info + 0xc) >> 4) & 0xf;
bg = RBIOS8(dac_info + 0xe) & 0xf;
dac = (RBIOS8(dac_info + 0xe) >> 4) & 0xf;
tv_dac->ntsc_tvdac_adj = (bg << 16) | (dac << 20);
+ found = 1;
}
-
tv_dac->tv_std = radeon_combios_get_tv_info(encoder);
-
- } else {
+ }
+ if (!found) {
/* then check CRT table */
dac_info =
combios_get_table_offset(dev, COMBIOS_CRT_INFO_TABLE);
if (dac_info) {
- tv_dac =
- kzalloc(sizeof(struct radeon_encoder_tv_dac),
- GFP_KERNEL);
-
- if (!tv_dac)
- return NULL;
-
rev = RBIOS8(dac_info) & 0x3;
if (rev < 2) {
bg = RBIOS8(da