rk: revert 20f3d0b+v3.0.66 to v3.0

黄涛 2013-11-08 21:34:05 +08:00
parent 8216724bd9
commit ef88c53f60
2671 changed files with 17668 additions and 210063 deletions


@ -218,16 +218,16 @@ The development process
Linux kernel development process currently consists of a few different
main kernel "branches" and lots of different subsystem-specific kernel
branches. These different branches are:
- main 3.x kernel tree
- 3.x.y -stable kernel tree
- 3.x -git kernel patches
- main 2.6.x kernel tree
- 2.6.x.y -stable kernel tree
- 2.6.x -git kernel patches
- subsystem specific kernel trees and patches
- the 3.x -next kernel tree for integration tests
- the 2.6.x -next kernel tree for integration tests
3.x kernel tree
2.6.x kernel tree
-----------------
3.x kernels are maintained by Linus Torvalds, and can be found on
kernel.org in the pub/linux/kernel/v3.x/ directory. Its development
2.6.x kernels are maintained by Linus Torvalds, and can be found on
kernel.org in the pub/linux/kernel/v2.6/ directory. Its development
process is as follows:
- As soon as a new kernel is released, a two-week window opens;
during this period maintainers can submit big diffs to
@ -262,21 +262,21 @@ mailing list about kernel releases:
released according to perceived bug status, not according to a
preconceived timeline."
3.x.y -stable kernel tree
2.6.x.y -stable kernel tree
---------------------------
Kernels with 3-part versions are -stable kernels. They contain
Kernels with 4-part versions are -stable kernels. They contain
relatively small and critical fixes for security problems or significant
regressions discovered in a given 3.x kernel.
regressions discovered in a given 2.6.x kernel.
This is the recommended branch for users who want the most recent stable
kernel and are not interested in helping test development/experimental
versions.
If no 3.x.y kernel is available, then the highest numbered 3.x
If no 2.6.x.y kernel is available, then the highest numbered 2.6.x
kernel is the current stable kernel.
3.x.y are maintained by the "stable" team <stable@vger.kernel.org>, and
are released as needs dictate. The normal release period is approximately
2.6.x.y are maintained by the "stable" team <stable@kernel.org>, and are
released as needs dictate. The normal release period is approximately
two weeks, but it can be longer if there are no pressing problems. A
security-related problem, instead, can cause a release to happen almost
instantly.
@ -285,7 +285,7 @@ The file Documentation/stable_kernel_rules.txt in the kernel tree
documents what kinds of changes are acceptable for the -stable tree, and
how the release process works.
3.x -git patches
2.6.x -git patches
------------------
These are daily snapshots of Linus' kernel tree which are managed in a
git repository (hence the name.) These patches are usually released
@ -317,13 +317,13 @@ revisions to it, and maintainers can mark patches as under review,
accepted, or rejected. Most of these patchwork sites are listed at
http://patchwork.kernel.org/.
3.x -next kernel tree for integration tests
2.6.x -next kernel tree for integration tests
---------------------------------------------
Before updates from subsystem trees are merged into the mainline 3.x
Before updates from subsystem trees are merged into the mainline 2.6.x
tree, they need to be integration-tested. For this purpose, a special
testing repository exists into which virtually all subsystem trees are
pulled on an almost daily basis:
http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git
http://git.kernel.org/?p=linux/kernel/git/sfr/linux-next.git
http://linux.f-seidel.de/linux-next/pmwiki/
This way, the -next kernel gives a summary outlook onto what will be


@ -1,121 +0,0 @@
=============
A N D R O I D
=============
Copyright (C) 2009 Google, Inc.
Written by Mike Chan <mike@android.com>
CONTENTS:
---------
1. Android
1.1 Required enabled config options
1.2 Required disabled config options
1.3 Recommended enabled config options
2. Contact
1. Android
==========
Android (www.android.com) is an open source operating system for mobile devices.
This document describes configurations needed to run the Android framework on
top of the Linux kernel.
To see a working defconfig look at msm_defconfig or goldfish_defconfig
which can be found at http://android.git.kernel.org in kernel/common.git
and kernel/msm.git
1.1 Required enabled config options
-----------------------------------
After building a standard defconfig, ensure that these options are enabled in
your .config or defconfig if they are not already. Based off the msm_defconfig.
You should keep the rest of the default options enabled in the defconfig
unless you know what you are doing.
ANDROID_PARANOID_NETWORK
ASHMEM
CONFIG_FB_MODE_HELPERS
CONFIG_FONT_8x16
CONFIG_FONT_8x8
CONFIG_YAFFS_SHORT_NAMES_IN_RAM
DAB
EARLYSUSPEND
FB
FB_CFB_COPYAREA
FB_CFB_FILLRECT
FB_CFB_IMAGEBLIT
FB_DEFERRED_IO
FB_TILEBLITTING
HIGH_RES_TIMERS
INOTIFY
INOTIFY_USER
INPUT_EVDEV
INPUT_GPIO
INPUT_MISC
LEDS_CLASS
LEDS_GPIO
LOCK_KERNEL
LOGGER
LOW_MEMORY_KILLER
MISC_DEVICES
NEW_LEDS
NO_HZ
POWER_SUPPLY
PREEMPT
RAMFS
RTC_CLASS
RTC_LIB
SWITCH
SWITCH_GPIO
TMPFS
UID_STAT
UID16
USB_FUNCTION
USB_FUNCTION_ADB
USER_WAKELOCK
VIDEO_OUTPUT_CONTROL
WAKELOCK
YAFFS_AUTO_YAFFS2
YAFFS_FS
YAFFS_YAFFS1
YAFFS_YAFFS2
1.2 Required disabled config options
------------------------------------
CONFIG_YAFFS_DISABLE_LAZY_LOAD
DNOTIFY
1.3 Recommended enabled config options
------------------------------
ANDROID_PMEM
ANDROID_RAM_CONSOLE
ANDROID_RAM_CONSOLE_ERROR_CORRECTION
SCHEDSTATS
DEBUG_PREEMPT
DEBUG_MUTEXES
DEBUG_SPINLOCK_SLEEP
DEBUG_INFO
FRAME_POINTER
CPU_FREQ
CPU_FREQ_TABLE
CPU_FREQ_DEFAULT_GOV_ONDEMAND
CPU_FREQ_GOV_ONDEMAND
CRC_CCITT
EMBEDDED
INPUT_TOUCHSCREEN
I2C
I2C_BOARDINFO
LOG_BUF_SHIFT=17
SERIAL_CORE
SERIAL_CORE_CONSOLE
2. Contact
==========
website: http://android.git.kernel.org
mailing-lists: android-kernel@googlegroups.com


@ -593,15 +593,6 @@ there are not tasks in the cgroup. If pre_destroy() returns error code,
rmdir() will fail with it. From this behavior, pre_destroy() can be
called multiple times against a cgroup.
int allow_attach(struct cgroup *cgrp, struct task_struct *task)
(cgroup_mutex held by caller)
Called prior to moving a task into a cgroup; if the subsystem
returns an error, this will abort the attach operation. Used
to extend the permission checks - if all subsystems in a cgroup
return 0, the attach will be allowed to proceed, even if the
default permission check (root or same user) fails.
int can_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
struct task_struct *task)
(cgroup_mutex held by caller)


@ -39,13 +39,6 @@ system: Time spent by tasks of the cgroup in kernel mode.
user and system are in USER_HZ unit.
cpuacct.cpufreq file gives CPU time (in nanoseconds) spent at each CPU
frequency. Platform hooks must be implemented in order to properly track
time at each CPU frequency.
cpuacct.power file gives CPU power consumed (in milliWatt seconds). Platform
must provide and implement power callback functions.
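For illustration, a minimal user-space sketch that dumps the per-frequency file
described above. Note that cpuacct.cpufreq is an Android extension, and the
mount point used here is only an assumption:

/* Hedged sketch: dump cpuacct.cpufreq for the root cgroup.  The mount
 * point is an assumption; adjust it to wherever the cpuacct controller
 * is actually mounted on your system. */
#include <stdio.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/sys/fs/cgroup/cpuacct/cpuacct.cpufreq", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);	/* raw contents; exact format depends on the platform hooks */
	fclose(f);
	return 0;
}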
cpuacct controller uses percpu_counter interface to collect user and
system times. This has two side effects:


@ -28,7 +28,6 @@ Contents:
2.3 Userspace
2.4 Ondemand
2.5 Conservative
2.6 Interactive
3. The Governor Interface in the CPUfreq Core
@ -194,80 +193,6 @@ governor but for the opposite direction. For example when set to its
default value of '20', it means that the CPU usage needs to be below
20% between samples for the frequency to be decreased.
2.6 Interactive
---------------
The CPUfreq governor "interactive" is designed for latency-sensitive,
interactive workloads. This governor sets the CPU speed depending on
usage, similar to "ondemand" and "conservative" governors, but with a
different set of configurable behaviors.
The tuneable values for this governor are:
target_loads: CPU load values used to adjust speed to influence the
current CPU load toward that value. In general, the lower the target
load, the more often the governor will raise CPU speeds to bring load
below the target. The format is a single target load, optionally
followed by pairs of CPU speeds and CPU loads to target at or above
those speeds. Colons can be used between the speeds and associated
target loads for readability. For example:
85 1000000:90 1700000:99
targets CPU load 85% below speed 1GHz, 90% at or above 1GHz, until
1.7GHz and above, at which load 99% is targeted. If speeds are
specified these must appear in ascending order. Higher target load
values are typically specified for higher speeds, that is, target load
values also usually appear in an ascending order. The default is
target load 90% for all speeds.
min_sample_time: The minimum amount of time to spend at the current
frequency before ramping down. Default is 80000 uS.
hispeed_freq: An intermediate "hi speed" at which to initially ramp
when CPU load hits the value specified in go_hispeed_load. If load
stays high for the amount of time specified in above_hispeed_delay,
then speed may be bumped higher. Default is the maximum speed
allowed by the policy at governor initialization time.
go_hispeed_load: The CPU load at which to ramp to hispeed_freq.
Default is 99%.
above_hispeed_delay: When speed is at or above hispeed_freq, wait for
this long before raising speed in response to continued high load.
Default is 20000 uS.
timer_rate: Sample rate for reevaluating CPU load when the CPU is not
idle. A deferrable timer is used, such that the CPU will not be woken
from idle to service this timer until something else needs to run.
(The maximum time to allow deferring this timer when not running at
minimum speed is configurable via timer_slack.) Default is 20000 uS.
timer_slack: Maximum additional time to defer handling the governor
sampling timer beyond timer_rate when running at speeds above the
minimum. For platforms that consume additional power at idle when
CPUs are running at speeds greater than minimum, this places an upper
bound on how long the timer will be deferred prior to re-evaluating
load and dropping speed. For example, if timer_rate is 20000uS and
timer_slack is 10000uS then timers will be deferred for up to 30msec
when not at lowest speed. A value of -1 means defer timers
indefinitely at all speeds. Default is 80000 uS.
boost: If non-zero, immediately boost speed of all CPUs to at least
hispeed_freq until zero is written to this attribute. If zero, allow
CPU speeds to drop below hispeed_freq according to load as usual.
Default is zero.
boostpulse: On each write, immediately boost speed of all CPUs to
hispeed_freq for at least the period of time specified by
boostpulse_duration, after which speeds are allowed to drop below
hispeed_freq according to load as usual.
boostpulse_duration: Length of time to hold CPU speed at hispeed_freq
on a write to boostpulse, before allowing speed to drop according to
load as usual. Default is 80000 uS.
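For illustration, a small user-space sketch that triggers one boost pulse by
writing to the boostpulse tunable described above. The sysfs path is an
assumption (the governor's global tunables directory) and writing to it
normally requires root:

/* Hedged sketch: request one boost pulse from user space.
 * The sysfs path below is assumed, not taken from this document. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/devices/system/cpu/cpufreq/interactive/boostpulse",
		      O_WRONLY);

	if (fd < 0)
		return 1;
	/* The value written does not matter; the write itself is the pulse. */
	if (write(fd, "1", 1) != 1) {
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}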
3. The Governor Interface in the CPUfreq Core
=============================================


@ -271,10 +271,10 @@ copies should go to:
the linux-kernel list.
- If you are fixing a bug, think about whether the fix should go into the
next stable update. If so, stable@vger.kernel.org should get a copy of
the patch. Also add a "Cc: stable@vger.kernel.org" to the tags within
the patch itself; that will cause the stable team to get a notification
when your fix goes into the mainline.
next stable update. If so, stable@kernel.org should get a copy of the
patch. Also add a "Cc: stable@kernel.org" to the tags within the patch
itself; that will cause the stable team to get a notification when your
fix goes into the mainline.
When selecting recipients for a patch, it is good to have an idea of who
you think will eventually accept the patch and get it merged. While it


@ -114,7 +114,7 @@ sub tda10045 {
sub tda10046 {
my $sourcefile = "TT_PCI_2.19h_28_11_2006.zip";
my $url = "http://technotrend.com.ua/download/software/219/$sourcefile";
my $url = "http://www.tt-download.com/download/updates/219/$sourcefile";
my $hash = "6a7e1e2f2644b162ff0502367553c72d";
my $outfile = "dvb-fe-tda10046.fw";
my $tmpdir = tempdir(DIR => "/tmp", CLEANUP => 1);


@ -6,6 +6,14 @@ be removed from this file.
---------------------------
What: x86 floppy disable_hlt
When: 2012
Why: ancient workaround of dubious utility clutters the
code used by everybody else.
Who: Len Brown <len.brown@intel.com>
---------------------------
What: CONFIG_APM_CPU_IDLE, and its ability to call APM BIOS in idle
When: 2012
Why: This optional sub-feature of APM is of dubious reliability,


@ -1,169 +0,0 @@
UHID - User-space I/O driver support for HID subsystem
========================================================
The HID subsystem needs two kinds of drivers. In this document we call them:
1. The "HID I/O Driver" is the driver that performs raw data I/O to the
low-level device. Internally, they register an hid_ll_driver structure with
the HID core. They perform device setup, read raw data from the device and
push it into the HID subsystem and they provide a callback so the HID
subsystem can send data to the device.
2. The "HID Device Driver" is the driver that parses HID reports and reacts on
them. There are generic drivers like "generic-usb" and "generic-bluetooth"
which adhere to the HID specification and provide the standardized features.
But there may be special drivers and quirks for each non-standard device out
there. Internally, they use the hid_driver structure.
Historically, the USB stack was the first subsystem to provide an HID I/O
Driver. However, other standards like Bluetooth have adopted the HID specs and
may provide HID I/O Drivers, too. The UHID driver allows HID I/O Drivers to be
implemented in user-space, feeding the data into the kernel HID-subsystem.
This allows user-space to operate on the same level as USB-HID, Bluetooth-HID
and similar. It does not provide a way to write HID Device Drivers, though. Use
hidraw for this purpose.
There is an example user-space application in ./samples/uhid/uhid-example.c
The UHID API
------------
UHID is accessed through a character misc-device. The minor-number is allocated
dynamically so you need to rely on udev (or similar) to create the device node.
This is /dev/uhid by default.
If a new device is detected by your HID I/O Driver and you want to register this
device with the HID subsystem, then you need to open /dev/uhid once for each
device you want to register. All further communication is done by read()'ing or
write()'ing "struct uhid_event" objects. Non-blocking operations are supported
by setting O_NONBLOCK.
struct uhid_event {
__u32 type;
union {
struct uhid_create_req create;
struct uhid_data_req data;
...
} u;
};
The "type" field contains the ID of the event. Depending on the ID different
payloads are sent. You must not split a single event across multiple read()'s or
multiple write()'s. A single event must always be sent as a whole. Furthermore,
only a single event can be sent per read() or write(). Pending data is ignored.
If you want to handle multiple events in a single syscall, then use vectored
I/O with readv()/writev().
The first thing you should do is send an UHID_CREATE event. This will
register the device. UHID will respond with an UHID_START event. You can now
start sending data to and reading data from UHID. However, unless UHID sends the
UHID_OPEN event, the internally attached HID Device Driver has no user attached.
That is, you might put your device asleep unless you receive the UHID_OPEN
event. If you receive the UHID_OPEN event, you should start I/O. If the last
user closes the HID device, you will receive an UHID_CLOSE event. This may be
followed by an UHID_OPEN event again and so on. There is no need to perform
reference-counting in user-space. That is, you will never receive multiple
UHID_OPEN events without an UHID_CLOSE event. The HID subsystem performs
ref-counting for you.
You may decide to ignore UHID_OPEN/UHID_CLOSE, though. I/O is allowed even
though the device may have no users.
If you want to send data to the HID subsystem, you send an HID_INPUT event with
your raw data payload. If the kernel wants to send data to the device, you will
read an UHID_OUTPUT or UHID_OUTPUT_EV event.
If your device disconnects, you should send an UHID_DESTROY event. This will
unregister the device. You can now send UHID_CREATE again to register a new
device.
If you close() the fd, the device is automatically unregistered and destroyed
internally.
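To make the flow above concrete, here is a trimmed sketch loosely modelled on
samples/uhid/uhid-example.c. The two descriptor bytes and the vendor/product
IDs are placeholders, not a usable HID report descriptor, and error handling
is minimal:

/* Hedged sketch: register a device with UHID_CREATE, then tear it down. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <linux/input.h>
#include <linux/uhid.h>

int main(void)
{
	static unsigned char rdesc[] = { 0x05, 0x01 };	/* placeholder bytes */
	struct uhid_event ev;
	int fd = open("/dev/uhid", O_RDWR | O_CLOEXEC);

	if (fd < 0)
		return 1;

	memset(&ev, 0, sizeof(ev));
	ev.type = UHID_CREATE;
	strcpy((char *)ev.u.create.name, "uhid-example-device");
	ev.u.create.rd_data = rdesc;
	ev.u.create.rd_size = sizeof(rdesc);
	ev.u.create.bus = BUS_USB;
	ev.u.create.vendor = 0x1234;	/* placeholder IDs */
	ev.u.create.product = 0x5678;
	if (write(fd, &ev, sizeof(ev)) != sizeof(ev))
		return 1;

	/* ... read() UHID_START/UHID_OPEN here and exchange I/O as described ... */

	memset(&ev, 0, sizeof(ev));
	ev.type = UHID_DESTROY;
	write(fd, &ev, sizeof(ev));
	close(fd);
	return 0;
}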
write()
-------
write() allows you to modify the state of the device and feed input data into
the kernel. The following types are supported: UHID_CREATE, UHID_DESTROY and
UHID_INPUT. The kernel will parse the event immediately and if the event ID is
not supported, it will return -EOPNOTSUPP. If the payload is invalid, then
-EINVAL is returned, otherwise, the amount of data that was read is returned and
the request was handled successfully.
UHID_CREATE:
This creates the internal HID device. No I/O is possible until you send this
event to the kernel. The payload is of type struct uhid_create_req and
contains information about your device. You can start I/O now.
UHID_DESTROY:
This destroys the internal HID device. No further I/O will be accepted. There
may still be pending messages that you can receive with read() but no further
UHID_INPUT events can be sent to the kernel.
You can create a new device by sending UHID_CREATE again. There is no need to
reopen the character device.
UHID_INPUT:
You must send UHID_CREATE before sending input to the kernel! This event
contains a data-payload. This is the raw data that you read from your device.
The kernel will parse the HID reports and react on it.
UHID_FEATURE_ANSWER:
If you receive a UHID_FEATURE request you must answer with this request. You
must copy the "id" field from the request into the answer. Set the "err" field
to 0 if no error occurred or to EIO if an I/O error occurred.
If "err" is 0 then you should fill the buffer of the answer with the results
of the feature request and set "size" correspondingly.
read()
------
read() will return a queued output report. These output reports can be of type
UHID_START, UHID_STOP, UHID_OPEN, UHID_CLOSE, UHID_OUTPUT or UHID_OUTPUT_EV. No
reaction is required to any of them but you should handle them according to your
needs. Only UHID_OUTPUT and UHID_OUTPUT_EV have payloads.
UHID_START:
This is sent when the HID device is started. Consider this as an answer to
UHID_CREATE. This is always the first event that is sent.
UHID_STOP:
This is sent when the HID device is stopped. Consider this as an answer to
UHID_DESTROY.
If the kernel HID device driver closes the device manually (that is, you
didn't send UHID_DESTROY) then you should consider this device closed and send
an UHID_DESTROY event. You may want to reregister your device, though. This is
always the last message that is sent to you unless you reopen the device with
UHID_CREATE.
UHID_OPEN:
This is sent when the HID device is opened. That is, the data that the HID
device provides is read by some other process. You may ignore this event but
it is useful for power-management. As long as you haven't received this event
there is actually no other process that reads your data so there is no need to
send UHID_INPUT events to the kernel.
UHID_CLOSE:
This is sent when there are no more processes which read the HID data. It is
the counterpart of UHID_OPEN and you may as well ignore this event.
UHID_OUTPUT:
This is sent if the HID device driver wants to send raw data to the I/O
device. You should read the payload and forward it to the device. The payload
is of type "struct uhid_data_req".
This may be received even though you haven't received UHID_OPEN, yet.
UHID_OUTPUT_EV:
Same as UHID_OUTPUT but this contains a "struct input_event" as payload. This
is called for force-feedback, LED or similar events which are received through
an input device by the HID subsystem. You should convert this into raw reports
and send them to your device similar to events of type UHID_OUTPUT.
UHID_FEATURE:
This event is sent if the kernel driver wants to perform a feature request as
described in the HID specs. The report-type and report-number are available in
the payload.
The kernel serializes feature requests so there will never be two in parallel.
However, if you fail to respond with a UHID_FEATURE_ANSWER in a time-span of 5
seconds, then the requests will be dropped and a new one might be sent.
Therefore, the payload also contains an "id" field that identifies every
request.
Document by:
David Herrmann <dh.herrmann@googlemail.com>


@ -7,29 +7,21 @@ Supported chips:
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.analog.com/static/imported-files/data_sheets/ADT7408.pdf
* Atmel AT30TS00
Prefix: 'at30ts00'
* IDT TSE2002B3, TS3000B3
Prefix: 'tse2002b3', 'ts3000b3'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.atmel.com/Images/doc8585.pdf
* IDT TSE2002B3, TSE2002GB2, TS3000B3, TS3000GB2
Prefix: 'tse2002', 'ts3000'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.idt.com/sites/default/files/documents/IDT_TSE2002B3C_DST_20100512_120303152056.pdf
http://www.idt.com/sites/default/files/documents/IDT_TSE2002GB2A1_DST_20111107_120303145914.pdf
http://www.idt.com/sites/default/files/documents/IDT_TS3000B3A_DST_20101129_120303152013.pdf
http://www.idt.com/sites/default/files/documents/IDT_TS3000GB2A1_DST_20111104_120303151012.pdf
http://www.idt.com/products/getdoc.cfm?docid=18715691
http://www.idt.com/products/getdoc.cfm?docid=18715692
* Maxim MAX6604
Prefix: 'max6604'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://datasheets.maxim-ic.com/en/ds/MAX6604.pdf
* Microchip MCP9804, MCP9805, MCP98242, MCP98243, MCP9843
Prefixes: 'mcp9804', 'mcp9805', 'mcp98242', 'mcp98243', 'mcp9843'
* Microchip MCP9805, MCP98242, MCP98243, MCP9843
Prefixes: 'mcp9805', 'mcp98242', 'mcp98243', 'mcp9843'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://ww1.microchip.com/downloads/en/DeviceDoc/22203C.pdf
http://ww1.microchip.com/downloads/en/DeviceDoc/21977b.pdf
http://ww1.microchip.com/downloads/en/DeviceDoc/21996a.pdf
http://ww1.microchip.com/downloads/en/DeviceDoc/22153c.pdf
@ -56,12 +48,6 @@ Supported chips:
Datasheets:
http://www.st.com/stonline/products/literature/ds/13447/stts424.pdf
http://www.st.com/stonline/products/literature/ds/13448/stts424e02.pdf
* ST Microelectronics STTS2002, STTS3000
Prefix: 'stts2002', 'stts3000'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATASHEET/CD00225278.pdf
http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATA_BRIEF/CD00270920.pdf
* JEDEC JC 42.4 compliant temperature sensor chips
Prefix: 'jc42'
Addresses scanned: I2C 0x18 - 0x1f


@ -39,20 +39,23 @@ independent, drivers.
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.
Should be called from a process context (might sleep).
Can be called from an atomic context (this function will not sleep) but
not from within interrupt context.
struct hwspinlock *hwspin_lock_request_specific(unsigned int id);
- assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.
Should be called from a process context (might sleep).
Can be called from an atomic context (this function will not sleep) but
not from within interrupt context.
int hwspin_lock_free(struct hwspinlock *hwlock);
- free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).
Should be called from a process context (might sleep).
Can be called from an atomic context (this function will not sleep) but
not from within interrupt context.
int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);
- lock a previously-assigned hwspinlock with a timeout limit (specified in
@ -229,14 +232,15 @@ int hwspinlock_example2(void)
int hwspin_lock_register(struct hwspinlock *hwlock);
- to be called from the underlying platform-specific implementation, in
order to register a new hwspinlock instance. Should be called from
a process context (this function might sleep).
Returns 0 on success, or appropriate error code on failure.
order to register a new hwspinlock instance. Can be called from an atomic
context (this function will not sleep) but not from within interrupt
context. Returns 0 on success, or appropriate error code on failure.
struct hwspinlock *hwspin_lock_unregister(unsigned int id);
- to be called from the underlying vendor-specific implementation, in order
to unregister an existing (and unused) hwspinlock instance.
Should be called from a process context (this function might sleep).
Can be called from an atomic context (will not sleep) but not from
within interrupt context.
Returns the address of hwspinlock on success, or NULL on error (e.g.
if the hwspinlock is still in use).
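Putting the calls above together, a minimal kernel-side sketch; the lock id
and timeout are arbitrary and assume board code reserved id 0 for this
purpose:

#include <linux/hwspinlock.h>

static int example_hwspinlock_user(void)
{
	struct hwspinlock *hwlock;
	int ret;

	/* Assumes id 0 was reserved for this use by board code. */
	hwlock = hwspin_lock_request_specific(0);
	if (!hwlock)
		return -EBUSY;

	/* Try to take the lock for up to 100 msecs. */
	ret = hwspin_lock_timeout(hwlock, 100);
	if (!ret) {
		/* ... access the shared resource ... */
		hwspin_unlock(hwlock);
	}

	hwspin_lock_free(hwlock);
	return ret;
}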


@ -1764,11 +1764,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
noresidual [PPC] Don't use residual data on PReP machines.
nordrand [X86] Disable the direct use of the RDRAND
instruction even if it is supported by the
processor. RDRAND is still available to user
space applications.
noresume [SWSUSP] Disables resume and restores original swap
space.


@ -539,14 +539,12 @@ static int if_getconfig(char *ifname)
metric = 0;
} else
metric = ifr.ifr_metric;
printf("The result of SIOCGIFMETRIC is %d\n", metric);
strcpy(ifr.ifr_name, ifname);
if (ioctl(skfd, SIOCGIFMTU, &ifr) < 0)
mtu = 0;
else
mtu = ifr.ifr_mtu;
printf("The result of SIOCGIFMTU is %d\n", mtu);
strcpy(ifr.ifr_name, ifname);
if (ioctl(skfd, SIOCGIFDSTADDR, &ifr) < 0) {


@ -147,7 +147,7 @@ tcp_adv_win_scale - INTEGER
(if tcp_adv_win_scale > 0) or bytes-bytes/2^(-tcp_adv_win_scale),
if it is <= 0.
Possible values are [-31, 31], inclusive.
Default: 1
Default: 2
tcp_allowed_congestion_control - STRING
Show/set the congestion control choices available to non-privileged
@ -407,7 +407,7 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max
net.core.rmem_max. Calling setsockopt() with SO_RCVBUF disables
automatic tuning of that socket's receive buffer size, in which
case this value is ignored.
Default: between 87380B and 6MB, depending on RAM size.
Default: between 87380B and 4MB, depending on RAM size.
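For illustration, a user-space sketch of the SO_RCVBUF behaviour noted above;
the requested size is arbitrary (the kernel doubles it to account for
bookkeeping overhead):

/* Hedged sketch: pin a socket's receive buffer, which disables the
 * automatic tuning governed by tcp_rmem for this socket. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int rcvbuf = 1 << 20;	/* request ~1 MiB */

	if (fd < 0)
		return 1;
	if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
		perror("setsockopt");
	close(fd);
	return 0;
}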
tcp_sack - BOOLEAN
Enable select acknowledgments (SACKS).
@ -534,11 +534,6 @@ tcp_thin_dupack - BOOLEAN
Documentation/networking/tcp-thin.txt
Default: 0
tcp_challenge_ack_limit - INTEGER
Limits number of Challenge ACK sent per second, as recommended
in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks)
Default: 100
UDP variables:
udp_mem - vector of 3 INTEGERs: min, pressure, max


@ -469,7 +469,6 @@ pm_runtime_autosuspend()
pm_runtime_resume()
pm_runtime_get_sync()
pm_runtime_put_sync_suspend()
pm_runtime_put_sync_autosuspend()
5. Run-time PM Initialization, Device Probing and Removal
@ -709,16 +708,6 @@ will behave normally, not taking the autosuspend delay into account.
Similarly, if the power.use_autosuspend field isn't set then the autosuspend
helper functions will behave just like the non-autosuspend counterparts.
Under some circumstances a driver or subsystem may want to prevent a device
from autosuspending immediately, even though the usage counter is zero and the
autosuspend delay time has expired. If the ->runtime_suspend() callback
returns -EAGAIN or -EBUSY, and if the next autosuspend delay expiration time is
in the future (as it normally would be if the callback invoked
pm_runtime_mark_last_busy()), the PM core will automatically reschedule the
autosuspend. The ->runtime_suspend() callback can't do this rescheduling
itself because no suspend requests of any kind are accepted while the device is
suspending (i.e., while the callback is running).
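As a sketch of that pattern; the driver, its pending-I/O flag and the
power-down step are hypothetical, while pm_runtime_mark_last_busy() is the
real helper named above:

#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Hypothetical driver state; a real driver would check hardware or queues. */
static bool foo_io_pending;

static int foo_runtime_suspend(struct device *dev)
{
	if (foo_io_pending) {
		/*
		 * Refuse to suspend for now.  Refreshing last_busy puts the
		 * next autosuspend expiration in the future, so the PM core
		 * will reschedule the autosuspend by itself.
		 */
		pm_runtime_mark_last_busy(dev);
		return -EBUSY;
	}

	/* ... put the hardware into a low-power state here ... */
	return 0;
}
/* This callback would be wired up through the driver's dev_pm_ops. */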
The implementation is well suited for asynchronous use in interrupt contexts.
However such use inevitably involves races, because the PM core can't
synchronize ->runtime_suspend() callbacks with the arrival of I/O requests.


@ -1,4 +1,4 @@
Everything you ever wanted to know about Linux -stable releases.
Everything you ever wanted to know about Linux 2.6 -stable releases.
Rules on what kind of patches are accepted, and which ones are not, into the
"-stable" tree:
@ -12,12 +12,6 @@ Rules on what kind of patches are accepted, and which ones are not, into the
marked CONFIG_BROKEN), an oops, a hang, data corruption, a real
security issue, or some "oh, that's not good" issue. In short, something
critical.
- Serious issues as reported by a user of a distribution kernel may also
be considered if they fix a notable performance or interactivity issue.
As these fixes are not as obvious and have a higher risk of a subtle
regression they should only be submitted by a distribution kernel
maintainer and include an addendum linking to a bugzilla entry if it
exists and additional information on the user-visible impact.
- New device IDs and quirks are also accepted.
- No "theoretical race condition" issues, unless an explanation of how the
race can be exploited is also provided.
@ -30,10 +24,10 @@ Rules on what kind of patches are accepted, and which ones are not, into the
Procedure for submitting patches to the -stable tree:
- Send the patch, after verifying that it follows the above rules, to
stable@vger.kernel.org. You must note the upstream commit ID in the
changelog of your submission.
stable@kernel.org. You must note the upstream commit ID in the changelog
of your submission.
- To have the patch automatically included in the stable tree, add the tag
Cc: stable@vger.kernel.org
Cc: stable@kernel.org
in the sign-off area. Once the patch is merged it will be applied to
the stable tree without anything else needing to be done by the author
or subsystem maintainer.
@ -41,10 +35,10 @@ Procedure for submitting patches to the -stable tree:
cherry-picked than this can be specified in the following format in
the sign-off area:
Cc: <stable@vger.kernel.org> # 3.3.x: a1f84a3: sched: Check for idle
Cc: <stable@vger.kernel.org> # 3.3.x: 1b9508f: sched: Rate-limit newidle
Cc: <stable@vger.kernel.org> # 3.3.x: fd21073: sched: Fix affinity logic
Cc: <stable@vger.kernel.org> # 3.3.x
Cc: <stable@kernel.org> # .32.x: a1f84a3: sched: Check for idle
Cc: <stable@kernel.org> # .32.x: 1b9508f: sched: Rate-limit newidle
Cc: <stable@kernel.org> # .32.x: fd21073: sched: Fix affinity logic
Cc: <stable@kernel.org> # .32.x
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The tag sequence has the meaning of:
@ -78,15 +72,6 @@ Review cycle:
security kernel team, and not go through the normal review cycle.
Contact the kernel security team for more details on this procedure.
Trees:
- The queues of patches, for both completed versions and in progress
versions can be found at:
http://git.kernel.org/?p=linux/kernel/git/stable/stable-queue.git
- The finalized and tagged releases of all stable kernels can be found
in separate branches per version at:
http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git
Review committee:


@ -379,10 +379,10 @@ EVENT_PROCESS:
# To closer match vmstat scanning statistics, only count isolate_both
# and isolate_inactive as scanning. isolate_active is rotation
# isolate_inactive == 1
# isolate_active == 2
# isolate_both == 3
if ($isolate_mode != 2) {
# isolate_inactive == 0
# isolate_active == 1
# isolate_both == 2
if ($isolate_mode != 1) {
$perprocesspid{$process_pid}->{HIGH_NR_SCANNED} += $nr_scanned;
}
$perprocesspid{$process_pid}->{HIGH_NR_CONTIG_DIRTY} += $nr_contig_dirty;


@ -47,11 +47,10 @@ This allows to filter away annoying devices that talk continuously.
2. Find which bus connects to the desired device
Run "cat /sys/kernel/debug/usb/devices", and find the T-line which corresponds
to the device. Usually you do it by looking for the vendor string. If you have
many similar devices, unplug one and compare the two
/sys/kernel/debug/usb/devices outputs. The T-line will have a bus number.
Example:
Run "cat /proc/bus/usb/devices", and find the T-line which corresponds to
the device. Usually you do it by looking for the vendor string. If you have
many similar devices, unplug one and compare two /proc/bus/usb/devices outputs.
The T-line will have a bus number. Example:
T: Bus=03 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=12 MxCh= 0
D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1
@ -59,10 +58,7 @@ P: Vendor=0557 ProdID=2004 Rev= 1.00
S: Manufacturer=ATEN
S: Product=UC100KM V2.00
"Bus=03" means it's bus 3. Alternatively, you can look at the output from
"lsusb" and get the bus number from the appropriate line. Example:
Bus 003 Device 002: ID 0557:2004 ATEN UC100KM V2.00
Bus=03 means it's bus 3.
3. Start 'cat'


@ -2008,9 +2008,6 @@ int main(int argc, char *argv[])
/* We use a simple helper to copy the arguments separated by spaces. */
concat((char *)(boot + 1), argv+optind+2);
/* Set kernel alignment to 16M (CONFIG_PHYSICAL_ALIGN) */
boot->hdr.kernel_alignment = 0x1000000;
/* Boot protocol version: 2.07 supports the fields for lguest. */
boot->hdr.version = 0x207;


@ -1221,7 +1221,7 @@ F: Documentation/aoe/
F: drivers/block/aoe/
ATHEROS ATH GENERIC UTILITIES
M: "Luis R. Rodriguez" <mcgrof@qca.qualcomm.com>
M: "Luis R. Rodriguez" <lrodriguez@atheros.com>
L: linux-wireless@vger.kernel.org
S: Supported
F: drivers/net/wireless/ath/*
@ -1229,7 +1229,7 @@ F: drivers/net/wireless/ath/*
ATHEROS ATH5K WIRELESS DRIVER
M: Jiri Slaby <jirislaby@gmail.com>
M: Nick Kossifidis <mickflemm@gmail.com>
M: "Luis R. Rodriguez" <mcgrof@qca.qualcomm.com>
M: "Luis R. Rodriguez" <lrodriguez@atheros.com>
M: Bob Copeland <me@bobcopeland.com>
L: linux-wireless@vger.kernel.org
L: ath5k-devel@lists.ath5k.org
@ -1238,10 +1238,10 @@ S: Maintained
F: drivers/net/wireless/ath/ath5k/
ATHEROS ATH9K WIRELESS DRIVER
M: "Luis R. Rodriguez" <mcgrof@qca.qualcomm.com>
M: Jouni Malinen <jouni@qca.qualcomm.com>
M: Vasanthakumar Thiagarajan <vthiagar@qca.qualcomm.com>
M: Senthil Balasubramanian <senthilb@qca.qualcomm.com>
M: "Luis R. Rodriguez" <lrodriguez@atheros.com>
M: Jouni Malinen <jmalinen@atheros.com>
M: Vasanthakumar Thiagarajan <vasanth@atheros.com>
M: Senthil Balasubramanian <senthilkumar@atheros.com>
L: linux-wireless@vger.kernel.org
L: ath9k-devel@lists.ath9k.org
W: http://wireless.kernel.org/en/users/Drivers/ath9k
@ -1269,7 +1269,7 @@ F: drivers/input/misc/ati_remote2.c
ATLX ETHERNET DRIVERS
M: Jay Cliburn <jcliburn@gmail.com>
M: Chris Snook <chris.snook@gmail.com>
M: Jie Yang <yangjie@qca.qualcomm.com>
M: Jie Yang <jie.yang@atheros.com>
L: netdev@vger.kernel.org
W: http://sourceforge.net/projects/atl1
W: http://atl1.sourceforge.net
@ -2491,7 +2491,7 @@ S: Maintained
F: drivers/net/eexpress.*
ETHERNET BRIDGE
M: Stephen Hemminger <stephen@networkplumber.org>
M: Stephen Hemminger <shemminger@linux-foundation.org>
L: bridge@lists.linux-foundation.org
L: netdev@vger.kernel.org
W: http://www.linuxfoundation.org/en/Net:Bridge
@ -4327,7 +4327,7 @@ S: Supported
F: drivers/infiniband/hw/nes/
NETEM NETWORK EMULATOR
M: Stephen Hemminger <stephen@networkplumber.org>
M: Stephen Hemminger <shemminger@linux-foundation.org>
L: netem@lists.linux-foundation.org
S: Maintained
F: net/sched/sch_netem.c
@ -5247,7 +5247,7 @@ F: Documentation/blockdev/ramdisk.txt
F: drivers/block/brd.c
RANDOM NUMBER DRIVER
M: Theodore Ts'o" <tytso@mit.edu>
M: Matt Mackall <mpm@selenic.com>
S: Maintained
F: drivers/char/random.c
@ -5779,7 +5779,7 @@ S: Maintained
F: drivers/usb/misc/sisusbvga/
SKGE, SKY2 10/100/1000 GIGABIT ETHERNET DRIVERS
M: Stephen Hemminger <stephen@networkplumber.org>
M: Stephen Hemminger <shemminger@linux-foundation.org>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/skge.*
@ -6039,7 +6039,7 @@ F: arch/alpha/kernel/srm_env.c
STABLE BRANCH
M: Greg Kroah-Hartman <greg@kroah.com>
L: stable@vger.kernel.org
L: stable@kernel.org
S: Maintained
STAGING SUBSYSTEM
@ -6358,13 +6358,6 @@ S: Maintained
F: Documentation/filesystems/ufs.txt
F: fs/ufs/
UHID USERSPACE HID IO DRIVER:
M: David Herrmann <dh.herrmann@googlemail.com>
L: linux-input@vger.kernel.org
S: Maintained
F: drivers/hid/uhid.c
F: include/linux/uhid.h
ULTRA-WIDEBAND (UWB) SUBSYSTEM:
L: linux-usb@vger.kernel.org
S: Orphan


@ -1,6 +1,6 @@
VERSION = 3
PATCHLEVEL = 0
SUBLEVEL = 66
SUBLEVEL = 0
EXTRAVERSION =
NAME = Sneaky Weasel


@ -14,8 +14,8 @@
*/
#define ATOMIC_INIT(i) { (i) }
#define ATOMIC64_INIT(i) { (i) }
#define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
#define ATOMIC64_INIT(i) ( (atomic64_t) { (i) } )
#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic64_read(v) (*(volatile long *)&(v)->counter)


@ -108,7 +108,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
" lda $31,3b-2b(%0)\n"
" .previous\n"
: "+r"(ret), "=&r"(prev), "=&r"(cmp)
: "r"(uaddr), "r"((long)(int)oldval), "r"(newval)
: "r"(uaddr), "r"((long)oldval), "r"(newval)
: "memory");
*uval = prev;


@ -69,11 +69,9 @@
#define SO_RXQ_OVFL 40
#ifdef __KERNEL__
/* O_NONBLOCK clashes with the bits used for socket types. Therefore we
* have to define SOCK_NONBLOCK to a different value here.
*/
#define SOCK_NONBLOCK 0x40000000
#endif /* __KERNEL__ */
#endif /* _ASM_SOCKET_H */


@ -91,7 +91,7 @@ DEFINE_PER_CPU(u8, irq_work_pending);
#define test_irq_work_pending() __get_cpu_var(irq_work_pending)
#define clear_irq_work_pending() __get_cpu_var(irq_work_pending) = 0
void arch_irq_work_raise(void)
void set_irq_work_pending(void)
{
set_irq_work_pending_flag();
}


@ -1179,7 +1179,7 @@ config ARM_ERRATA_743622
depends on CPU_V7
help
This option enables the workaround for the 743622 Cortex-A9
(r2p*) erratum. Under very rare conditions, a faulty
(r2p0..r2p2) erratum. Under very rare conditions, a faulty
optimisation in the Cortex-A9 Store Buffer may lead to data
corruption. This workaround sets a specific bit in the diagnostic
register of the Cortex-A9 which disables the Store Buffer
@ -1234,42 +1234,6 @@ config ARM_ERRATA_754327
This workaround defines cpu_relax() as smp_mb(), preventing correctly
written polling loops from denying visibility of updates to memory.
config ARM_ERRATA_764369
bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
depends on CPU_V7 && SMP
help
This option enables the workaround for erratum 764369
affecting Cortex-A9 MPCore with two or more processors (all
current revisions). Under certain timing circumstances, a data
cache line maintenance operation by MVA targeting an Inner
Shareable memory region may fail to proceed up to either the
Point of Coherency or to the Point of Unification of the
system. This workaround adds a DSB instruction before the
relevant cache maintenance functions and sets a specific bit
in the diagnostic control register of the SCU.
config PL310_ERRATA_769419
bool "PL310 errata: no automatic Store Buffer drain"
depends on CACHE_L2X0
help
On revisions of the PL310 prior to r3p2, the Store Buffer does
not automatically drain. This can cause normal, non-cacheable
writes to be retained when the memory system is idle, leading
to suboptimal I/O performance for drivers using coherent DMA.
This option adds a write barrier to the cpu_idle loop so that,
on systems with an outer cache, the store buffer is drained
explicitly.
config ARM_ERRATA_775420
bool "ARM errata: A data cache maintenance operation which aborts, might lead to deadlock"
depends on CPU_V7
help
This option enables the workaround for the 775420 Cortex-A9 (r2p2,
r2p6,r2p8,r2p10,r3p0) erratum. In case a data cache maintenance
operation aborts with MMU exception, it might cause the processor
to deadlock. This workaround puts DSB before executing ISB if
an abort may occur on cache maintenance.
endmenu
source "arch/arm/common/Kconfig"
@ -1707,15 +1671,6 @@ config DEPRECATED_PARAM_STRUCT
This was deprecated in 2001 and announced to live on for 5 years.
Some old boot loaders still use this way.
config ARM_FLUSH_CONSOLE_ON_RESTART
bool "Force flush the console on restart"
help
If the console is locked while the system is rebooted, the messages
in the temporary logbuffer would not have propagated to all the
console drivers. This option forces the console lock to be
released if it failed to be acquired, which will cause all the
pending messages to be flushed.
endmenu
menu "Boot options"
@ -1894,7 +1849,6 @@ source "drivers/cpufreq/Kconfig"
config CPU_FREQ_IMX
tristate "CPUfreq driver for i.MX CPUs"
depends on ARCH_MXC && CPU_FREQ
select CPU_FREQ_TABLE
help
This enables the CPUfreq driver for i.MX CPUs.


@ -539,7 +539,6 @@ __armv7_mmu_cache_on:
mcrne p15, 0, r0, c8, c7, 0 @ flush I,D TLBs
#endif
mrc p15, 0, r0, c1, c0, 0 @ read control reg
bic r0, r0, #1 << 28 @ clear SCTLR.TRE
orr r0, r0, #0x5000 @ I-cache enable, RR cache replacement
orr r0, r0, #0x003c @ write buffer
#ifdef CONFIG_MMU
@ -657,8 +656,6 @@ proc_types:
@ b __arm6_mmu_cache_off
@ b __armv3_mmu_cache_flush
#if !defined(CONFIG_CPU_V7)
/* This collides with some V7 IDs, preventing correct detection */
.word 0x00000000 @ old ARM ID
.word 0x0000f000
mov pc, lr
@ -667,7 +664,6 @@ proc_types:
THUMB( nop )
mov pc, lr
THUMB( nop )
#endif
.word 0x41007000 @ ARM7/710
.word 0xfff8fe00


@ -39,53 +39,3 @@ config SHARP_PARAM
config SHARP_SCOOP
bool
config FIQ_GLUE
bool
select FIQ
config FIQ_DEBUGGER
bool "FIQ Mode Serial Debugger"
select FIQ
select FIQ_GLUE
default n
help
The FIQ serial debugger can accept commands even when the
kernel is unresponsive due to being stuck with interrupts
disabled.
config FIQ_DEBUGGER_NO_SLEEP
bool "Keep serial debugger active"
depends on FIQ_DEBUGGER
default n
help
Enables the serial debugger at boot. Passing
fiq_debugger.no_sleep on the kernel commandline will
override this config option.
config FIQ_DEBUGGER_WAKEUP_IRQ_ALWAYS_ON
bool "Don't disable wakeup IRQ when debugger is active"
depends on FIQ_DEBUGGER
default n
help
Don't disable the wakeup irq when enabling the uart clock. This will
cause extra interrupts, but it makes the serial debugger usable on
some MSM radio builds that ignore the uart clock request in power
collapse.
config FIQ_DEBUGGER_CONSOLE
bool "Console on FIQ Serial Debugger port"
depends on FIQ_DEBUGGER
default n
help
Enables a console so that printk messages are displayed on
the debugger serial port as they occur.
config FIQ_DEBUGGER_CONSOLE_DEFAULT_ENABLE
bool "Put the FIQ debugger into console mode by default"
depends on FIQ_DEBUGGER_CONSOLE
default n
help
If enabled, this puts the fiq debugger into console mode by default.
Otherwise, the fiq debugger will start out in debug mode.


@ -17,5 +17,3 @@ obj-$(CONFIG_ARCH_IXP2000) += uengine.o
obj-$(CONFIG_ARCH_IXP23XX) += uengine.o
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o
obj-$(CONFIG_FIQ_GLUE) += fiq_glue.o fiq_glue_setup.o
obj-$(CONFIG_FIQ_DEBUGGER) += fiq_debugger.o

File diff suppressed because it is too large.


@ -1,94 +0,0 @@
/*
* arch/arm/common/fiq_debugger_ringbuf.c
*
* simple lockless ringbuffer
*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/slab.h>
struct fiq_debugger_ringbuf {
int len;
int head;
int tail;
u8 buf[];
};
static inline struct fiq_debugger_ringbuf *fiq_debugger_ringbuf_alloc(int len)
{
struct fiq_debugger_ringbuf *rbuf;
rbuf = kzalloc(sizeof(*rbuf) + len, GFP_KERNEL);
if (rbuf == NULL)
return NULL;
rbuf->len = len;
rbuf->head = 0;
rbuf->tail = 0;
smp_mb();
return rbuf;
}
static inline void fiq_debugger_ringbuf_free(struct fiq_debugger_ringbuf *rbuf)
{
kfree(rbuf);
}
static inline int fiq_debugger_ringbuf_level(struct fiq_debugger_ringbuf *rbuf)
{
int level = rbuf->head - rbuf->tail;
if (level < 0)
level = rbuf->len + level;
return level;
}
static inline int fiq_debugger_ringbuf_room(struct fiq_debugger_ringbuf *rbuf)
{
return rbuf->len - fiq_debugger_ringbuf_level(rbuf) - 1;
}
static inline u8
fiq_debugger_ringbuf_peek(struct fiq_debugger_ringbuf *rbuf, int i)
{
return rbuf->buf[(rbuf->tail + i) % rbuf->len];
}
static inline int
fiq_debugger_ringbuf_consume(struct fiq_debugger_ringbuf *rbuf, int count)
{
count = min(count, fiq_debugger_ringbuf_level(rbuf));
rbuf->tail = (rbuf->tail + count) % rbuf->len;
smp_mb();
return count;
}
static inline int
fiq_debugger_ringbuf_push(struct fiq_debugger_ringbuf *rbuf, u8 datum)
{
if (fiq_debugger_ringbuf_room(rbuf) == 0)
return 0;
rbuf->buf[rbuf->head] = datum;
smp_mb();
rbuf->head = (rbuf->head + 1) % rbuf->len;
smp_mb();
return 1;
}
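A hypothetical usage sketch for the helpers above; since they are static
inline in this file, it would have to live in the same translation unit, and
the buffer size and bytes are arbitrary:

static void fiq_debugger_ringbuf_demo(void)
{
	struct fiq_debugger_ringbuf *rbuf = fiq_debugger_ringbuf_alloc(128);
	int i, level;

	if (!rbuf)
		return;

	fiq_debugger_ringbuf_push(rbuf, 'h');
	fiq_debugger_ringbuf_push(rbuf, 'i');

	level = fiq_debugger_ringbuf_level(rbuf);
	for (i = 0; i < level; i++)
		pr_info("byte %d: %c\n", i, fiq_debugger_ringbuf_peek(rbuf, i));

	/* Drop everything that was just inspected. */
	fiq_debugger_ringbuf_consume(rbuf, level);
	fiq_debugger_ringbuf_free(rbuf);
}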


@ -1,111 +0,0 @@
/*
* Copyright (C) 2008 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
.text
.global fiq_glue_end
/* fiq stack: r0-r15,cpsr,spsr of interrupted mode */
ENTRY(fiq_glue)
/* store pc, cpsr from previous mode */
mrs r12, spsr
sub r11, lr, #4
subs r10, #1
bne nested_fiq
stmfd sp!, {r11-r12, lr}
/* store r8-r14 from previous mode */
sub sp, sp, #(7 * 4)
stmia sp, {r8-r14}^
nop
/* store r0-r7 from previous mode */
stmfd sp!, {r0-r7}
/* setup func(data,regs) arguments */
mov r0, r9
mov r1, sp
mov r3, r8
mov r7, sp
/* Get sp and lr from non-user modes */
and r4, r12, #MODE_MASK
cmp r4, #USR_MODE
beq fiq_from_usr_mode
mov r7, sp
orr r4, r4, #(PSR_I_BIT | PSR_F_BIT)
msr cpsr_c, r4
str sp, [r7, #(4 * 13)]
str lr, [r7, #(4 * 14)]
mrs r5, spsr
str r5, [r7, #(4 * 17)]
cmp r4, #(SVC_MODE | PSR_I_BIT | PSR_F_BIT)
/* use fiq stack if we reenter this mode */
subne sp, r7, #(4 * 3)
fiq_from_usr_mode:
msr cpsr_c, #(SVC_MODE | PSR_I_BIT | PSR_F_BIT)
mov r2, sp
sub sp, r7, #12
stmfd sp!, {r2, ip, lr}
/* call func(data,regs) */
blx r3
ldmfd sp, {r2, ip, lr}
mov sp, r2
/* restore/discard saved state */
cmp r4, #USR_MODE
beq fiq_from_usr_mode_exit
msr cpsr_c, r4
ldr sp, [r7, #(4 * 13)]
ldr lr, [r7, #(4 * 14)]
msr spsr_cxsf, r5
fiq_from_usr_mode_exit:
msr cpsr_c, #(FIQ_MODE | PSR_I_BIT | PSR_F_BIT)
ldmfd sp!, {r0-r7}
add sp, sp, #(7 * 4)
ldmfd sp!, {r11-r12, lr}
exit_fiq:
msr spsr_cxsf, r12
add r10, #1
movs pc, r11
nested_fiq:
orr r12, r12, #(PSR_F_BIT)
b exit_fiq
fiq_glue_end:
ENTRY(fiq_glue_setup) /* func, data, sp */
mrs r3, cpsr
msr cpsr_c, #(FIQ_MODE | PSR_I_BIT | PSR_F_BIT)
movs r8, r0
mov r9, r1
mov sp, r2
moveq r10, #0
movne r10, #1
msr cpsr_c, r3
bx lr


@ -1,100 +0,0 @@
/*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <asm/fiq.h>
#include <asm/fiq_glue.h>
extern unsigned char fiq_glue, fiq_glue_end;
extern void fiq_glue_setup(void *func, void *data, void *sp);
static struct fiq_handler fiq_debbuger_fiq_handler = {
.name = "fiq_glue",
};
DEFINE_PER_CPU(void *, fiq_stack);
static struct fiq_glue_handler *current_handler;
static DEFINE_MUTEX(fiq_glue_lock);
static void fiq_glue_setup_helper(void *info)
{
struct fiq_glue_handler *handler = info;
fiq_glue_setup(handler->fiq, handler,
__get_cpu_var(fiq_stack) + THREAD_START_SP);
}
int fiq_glue_register_handler(struct fiq_glue_handler *handler)
{
int ret;
int cpu;
if (!handler || !handler->fiq)
return -EINVAL;
mutex_lock(&fiq_glue_lock);
if (fiq_stack) {
ret = -EBUSY;
goto err_busy;
}
for_each_possible_cpu(cpu) {
void *stack;
stack = (void *)__get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
if (WARN_ON(!stack)) {
ret = -ENOMEM;
goto err_alloc_fiq_stack;
}
per_cpu(fiq_stack, cpu) = stack;
}
ret = claim_fiq(&fiq_debbuger_fiq_handler);
if (WARN_ON(ret))
goto err_claim_fiq;
current_handler = handler;
on_each_cpu(fiq_glue_setup_helper, handler, true);
set_fiq_handler(&fiq_glue, &fiq_glue_end - &fiq_glue);
mutex_unlock(&fiq_glue_lock);
return 0;
err_claim_fiq:
err_alloc_fiq_stack:
for_each_possible_cpu(cpu) {
__free_pages(per_cpu(fiq_stack, cpu), THREAD_SIZE_ORDER);
per_cpu(fiq_stack, cpu) = NULL;
}
err_busy:
mutex_unlock(&fiq_glue_lock);
return ret;
}
/**
* fiq_glue_resume - Restore fiqs after suspend or low power idle states
*
* This must be called before calling local_fiq_enable after returning from a
* power state where the fiq mode registers were lost. If a driver provided
* a resume hook when it registered the handler it will be called.
*/
void fiq_glue_resume(void)
{
if (!current_handler)
return;
fiq_glue_setup(current_handler->fiq, current_handler,
__get_cpu_var(fiq_stack) + THREAD_START_SP);
if (current_handler->resume)
current_handler->resume(current_handler);
}
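A hypothetical caller of fiq_glue_register_handler() might look like the
following; the handler body is a stub, and a real handler must be FIQ-safe
(no sleeping, no ordinary locks):

#include <linux/init.h>
#include <asm/fiq_glue.h>

static void example_fiq(struct fiq_glue_handler *h, void *regs, void *svc_sp)
{
	/* Runs in FIQ context: keep it short and self-contained. */
}

static struct fiq_glue_handler example_handler = {
	.fiq = example_fiq,
};

static int __init example_fiq_init(void)
{
	return fiq_glue_register_handler(&example_handler);
}
late_initcall(example_fiq_init);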


@ -287,7 +287,7 @@ CONFIG_USB=y
# CONFIG_USB_DEVICE_CLASS is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_GADGET=y
CONFIG_USB_PXA27X=y
CONFIG_USB_GADGET_PXA27X=y
CONFIG_USB_ETH=m
# CONFIG_USB_ETH_RNDIS is not set
CONFIG_MMC=y


@ -263,7 +263,7 @@ CONFIG_USB=y
# CONFIG_USB_DEVICE_CLASS is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_GADGET=y
CONFIG_USB_PXA27X=y
CONFIG_USB_GADGET_PXA27X=y
CONFIG_USB_ETH=m
# CONFIG_USB_ETH_RNDIS is not set
CONFIG_MMC=y


@ -132,7 +132,7 @@ CONFIG_USB_MON=m
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_GADGET=y
CONFIG_USB_GADGET_VBUS_DRAW=500
CONFIG_USB_PXA27X=y
CONFIG_USB_GADGET_PXA27X=y
CONFIG_USB_ETH=m
# CONFIG_USB_ETH_RNDIS is not set
CONFIG_USB_GADGETFS=m


@ -29,6 +29,7 @@ CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_PREEMPT_VOLUNTARY=y
CONFIG_AEABI=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_AUTO_ZRELADDR=y
CONFIG_FPE_NWFPE=y
CONFIG_NET=y


@ -140,7 +140,7 @@ CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_MCT_U232=m
CONFIG_USB_GADGET=m
CONFIG_USB_PXA27X=y
CONFIG_USB_GADGET_PXA27X=y
CONFIG_USB_ETH=m
CONFIG_USB_GADGETFS=m
CONFIG_USB_FILE_STORAGE=m


@ -137,11 +137,6 @@
disable_irq
.endm
.macro save_and_disable_irqs_notrace, oldcpsr
mrs \oldcpsr, cpsr
disable_irq_notrace
.endm
/*
* Restore interrupt state previously stored in a register. We don't
* guarantee that this will preserve the flags.


@ -215,9 +215,7 @@ static inline void vivt_flush_cache_mm(struct mm_struct *mm)
static inline void
vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
struct mm_struct *mm = vma->vm_mm;
if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm)))
__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
vma->vm_flags);
}
@ -225,9 +223,7 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
static inline void
vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
{
struct mm_struct *mm = vma->vm_mm;
if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
unsigned long addr = user_addr & PAGE_MASK;
__cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags);
}
@ -253,7 +249,7 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr
* Harvard caches are synchronised for the user space address range.
* This is used for the ARM private sys_cacheflush system call.
*/
#define flush_cache_user_range(start,end) \
#define flush_cache_user_range(vma,start,end) \
__cpuc_coherent_user_range((start) & PAGE_MASK, PAGE_ALIGN(end))
/*


@ -1,64 +0,0 @@
/*
* arch/arm/include/asm/fiq_debugger.h
*
* Copyright (C) 2010 Google, Inc.
* Author: Colin Cross <ccross@android.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ARCH_ARM_MACH_TEGRA_FIQ_DEBUGGER_H_
#define _ARCH_ARM_MACH_TEGRA_FIQ_DEBUGGER_H_
#include <linux/serial_core.h>
#define FIQ_DEBUGGER_NO_CHAR NO_POLL_CHAR
#define FIQ_DEBUGGER_BREAK 0x00ff0100
#define FIQ_DEBUGGER_FIQ_IRQ_NAME "fiq"
#define FIQ_DEBUGGER_SIGNAL_IRQ_NAME "signal"
#define FIQ_DEBUGGER_WAKEUP_IRQ_NAME "wakeup"
/**
* struct fiq_debugger_pdata - fiq debugger platform data
* @uart_resume: used to restore uart state right before enabling
* the fiq.
* @uart_enable: Do the work necessary to communicate with the uart
* hw (enable clocks, etc.). This must be ref-counted.
* @uart_disable: Do the work necessary to disable the uart hw
* (disable clocks, etc.). This must be ref-counted.
* @uart_dev_suspend: called during PM suspend, generally not needed
* for real fiq mode debugger.
* @uart_dev_resume: called during PM resume, generally not needed
* for real fiq mode debugger.
*/
struct fiq_debugger_pdata {
int (*uart_init)(struct platform_device *pdev);
void (*uart_free)(struct platform_device *pdev);
int (*uart_resume)(struct platform_device *pdev);
int (*uart_getc)(struct platform_device *pdev);
void (*uart_putc)(struct platform_device *pdev, unsigned int c);
void (*uart_flush)(struct platform_device *pdev);
void (*uart_enable)(struct platform_device *pdev);
void (*uart_disable)(struct platform_device *pdev);
int (*uart_dev_suspend)(struct platform_device *pdev);
int (*uart_dev_resume)(struct platform_device *pdev);
void (*fiq_enable)(struct platform_device *pdev, unsigned int fiq,
bool enable);
void (*fiq_ack)(struct platform_device *pdev, unsigned int fiq);
void (*force_irq)(struct platform_device *pdev, unsigned int irq);
void (*force_irq_ack)(struct platform_device *pdev, unsigned int irq);
};
#endif


@ -1,30 +0,0 @@
/*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __ASM_FIQ_GLUE_H
#define __ASM_FIQ_GLUE_H
struct fiq_glue_handler {
void (*fiq)(struct fiq_glue_handler *h, void *regs, void *svc_sp);
void (*resume)(struct fiq_glue_handler *h);
};
int fiq_glue_register_handler(struct fiq_glue_handler *handler);
#ifdef CONFIG_FIQ_GLUE
void fiq_glue_resume(void);
#else
static inline void fiq_glue_resume(void) {}
#endif
#endif


@ -25,17 +25,17 @@
#ifdef CONFIG_SMP
#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
smp_mb(); \
__asm__ __volatile__( \
"1: ldrex %1, [%3]\n" \
"1: ldrex %1, [%2]\n" \
" " insn "\n" \
"2: strex %2, %0, [%3]\n" \
" teq %2, #0\n" \
"2: strex %1, %0, [%2]\n" \
" teq %1, #0\n" \
" bne 1b\n" \
" mov %0, #0\n" \
__futex_atomic_ex_table("%5") \
: "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
__futex_atomic_ex_table("%4") \
: "=&r" (ret), "=&r" (oldval) \
: "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
: "cc", "memory")
@ -73,14 +73,14 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
#include <linux/preempt.h>
#include <asm/domain.h>
#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
__asm__ __volatile__( \
"1: " T(ldr) " %1, [%3]\n" \
"1: " T(ldr) " %1, [%2]\n" \
" " insn "\n" \
"2: " T(str) " %0, [%3]\n" \
"2: " T(str) " %0, [%2]\n" \
" mov %0, #0\n" \
__futex_atomic_ex_table("%5") \
: "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
__futex_atomic_ex_table("%4") \
: "=&r" (ret), "=&r" (oldval) \
: "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
: "cc", "memory")
@@ -117,7 +117,7 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
int oldval = 0, ret, tmp;
int oldval = 0, ret;
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
@@ -129,19 +129,19 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
switch (op) {
case FUTEX_OP_SET:
__futex_atomic_op("mov %0, %4", ret, oldval, tmp, uaddr, oparg);
__futex_atomic_op("mov %0, %3", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_ADD:
__futex_atomic_op("add %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
__futex_atomic_op("add %0, %1, %3", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_OR:
__futex_atomic_op("orr %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
__futex_atomic_op("orr %0, %1, %3", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_ANDN:
__futex_atomic_op("and %0, %1, %4", ret, oldval, tmp, uaddr, ~oparg);
__futex_atomic_op("and %0, %1, %3", ret, oldval, uaddr, ~oparg);
break;
case FUTEX_OP_XOR:
__futex_atomic_op("eor %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
__futex_atomic_op("eor %0, %1, %3", ret, oldval, uaddr, oparg);
break;
default:
ret = -ENOSYS;
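For reference, the shifts at the top of this function implement the generic futex operand encoding (operation in bits 31-28, comparison in bits 27-24, oparg in bits 23-12, cmparg in bits 11-0; the op extraction itself is outside this hunk, so take the exact layout as an assumption here). A worked example: encoded_op = 0x10001000 decodes to cmp = (0x10001000 >> 24) & 15 = 0, oparg = (0x10001000 << 8) >> 20 = 1 and cmparg = (0x10001000 << 20) >> 20 = 0, which selects the FUTEX_OP_ADD case above with an addend of 1; the shift-left-then-arithmetic-shift-right pairs also sign-extend negative arguments.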

View file

@@ -5,7 +5,7 @@
#include <linux/threads.h>
#include <asm/irq.h>
#define NR_IPI 6
#define NR_IPI 5
typedef struct {
unsigned int __softirq_pending;

View file

@@ -57,7 +57,6 @@
#define L2X0_STNDBY_MODE_EN (1 << 0)
/* Registers shifts and masks */
#define L2X0_CACHE_ID_REV_MASK (0x3f)
#define L2X0_CACHE_ID_PART_MASK (0xf << 6)
#define L2X0_CACHE_ID_PART_L210 (1 << 6)
#define L2X0_CACHE_ID_PART_L310 (3 << 6)
@@ -65,7 +64,7 @@
#define L2X0_AUX_CTRL_MASK 0xc0000fff
#define L2X0_AUX_CTRL_ASSOCIATIVITY_SHIFT 16
#define L2X0_AUX_CTRL_WAY_SIZE_SHIFT 17
#define L2X0_AUX_CTRL_WAY_SIZE_MASK (0x7 << 17)
#define L2X0_AUX_CTRL_WAY_SIZE_MASK (0x3 << 17)
#define L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT 22
#define L2X0_AUX_CTRL_NS_LOCKDOWN_SHIFT 26
#define L2X0_AUX_CTRL_NS_INT_CTRL_SHIFT 27
@@ -73,8 +72,6 @@
#define L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT 29
#define L2X0_AUX_CTRL_EARLY_BRESP_SHIFT 30
#define REV_PL310_R2P0 4
#ifndef __ASSEMBLY__
extern void __init l2x0_init(void __iomem *base, __u32 aux_val, __u32 aux_mask);
#endif

View file

@@ -17,17 +17,15 @@
#define TRACER_ACCESSED_BIT 0
#define TRACER_RUNNING_BIT 1
#define TRACER_CYCLE_ACC_BIT 2
#define TRACER_TRACE_DATA_BIT 3
#define TRACER_ACCESSED BIT(TRACER_ACCESSED_BIT)
#define TRACER_RUNNING BIT(TRACER_RUNNING_BIT)
#define TRACER_CYCLE_ACC BIT(TRACER_CYCLE_ACC_BIT)
#define TRACER_TRACE_DATA BIT(TRACER_TRACE_DATA_BIT)
#define TRACER_TIMEOUT 10000
#define etm_writel(t, id, v, x) \
(__raw_writel((v), (t)->etm_regs[(id)] + (x)))
#define etm_readl(t, id, x) (__raw_readl((t)->etm_regs[(id)] + (x)))
#define etm_writel(t, v, x) \
(__raw_writel((v), (t)->etm_regs + (x)))
#define etm_readl(t, x) (__raw_readl((t)->etm_regs + (x)))
/* CoreSight Management Registers */
#define CSMR_LOCKACCESS 0xfb0
@@ -115,19 +113,11 @@
#define ETMR_TRACEENCTRL 0x24
#define ETMTE_INCLEXCL BIT(24)
#define ETMR_TRACEENEVT 0x20
#define ETMR_VIEWDATAEVT 0x30
#define ETMR_VIEWDATACTRL1 0x34
#define ETMR_VIEWDATACTRL2 0x38
#define ETMR_VIEWDATACTRL3 0x3c
#define ETMVDC3_EXCLONLY BIT(16)
#define ETMCTRL_OPTS (ETMCTRL_DO_CPRT | \
ETMCTRL_DATA_DO_ADDR | \
ETMCTRL_BRANCH_OUTPUT | \
ETMCTRL_DO_CONTEXTID)
#define ETMR_TRACEIDR 0x200
/* ETM management registers, "ETM Architecture", 3.5.24 */
#define ETMMR_OSLAR 0x300
#define ETMMR_OSLSR 0x304
@@ -150,16 +140,14 @@
#define ETBFF_TRIGIN BIT(8)
#define ETBFF_TRIGEVT BIT(9)
#define ETBFF_TRIGFL BIT(10)
#define ETBFF_STOPFL BIT(12)
#define etb_writel(t, v, x) \
(__raw_writel((v), (t)->etb_regs + (x)))
#define etb_readl(t, x) (__raw_readl((t)->etb_regs + (x)))
#define etm_lock(t, id) \
do { etm_writel((t), (id), 0, CSMR_LOCKACCESS); } while (0)
#define etm_unlock(t, id) \
do { etm_writel((t), (id), UNLOCK_MAGIC, CSMR_LOCKACCESS); } while (0)
#define etm_lock(t) do { etm_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etm_unlock(t) \
do { etm_writel((t), UNLOCK_MAGIC, CSMR_LOCKACCESS); } while (0)
#define etb_lock(t) do { etb_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etb_unlock(t) \

View file

@@ -18,9 +18,8 @@
#define HWCAP_THUMBEE 2048
#define HWCAP_NEON 4096
#define HWCAP_VFPv3 8192
#define HWCAP_VFPv3D16 (1 << 14) /* also set for VFPv4-D16 */
#define HWCAP_VFPv3D16 16384
#define HWCAP_TLS 32768
#define HWCAP_VFPD32 (1 << 19) /* set if VFP has 32 regs (not 16) */
#if defined(__KERNEL__) && !defined(__ASSEMBLY__)
/*

View file

@@ -25,9 +25,6 @@ extern void migrate_irqs(void);
extern void asm_do_IRQ(unsigned int, struct pt_regs *);
void init_IRQ(void);
void arch_trigger_all_cpu_backtrace(void);
#define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace
#endif
#endif

View file

@@ -1,28 +0,0 @@
/*
* arch/arm/include/asm/mach/mmc.h
*/
#ifndef ASMARM_MACH_MMC_H
#define ASMARM_MACH_MMC_H
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/mmc/sdio_func.h>
struct embedded_sdio_data {
struct sdio_cis cis;
struct sdio_cccr cccr;
struct sdio_embedded_func *funcs;
int num_funcs;
};
struct mmc_platform_data {
unsigned int ocr_mask; /* available voltages */
int built_in; /* built-in device flag */
int card_present; /* card detect state */
u32 (*translate_vdd)(struct device *, unsigned int);
unsigned int (*status)(struct device *);
struct embedded_sdio_data *embedded_sdio;
int (*register_status_notify)(void (*callback)(int card_present, void *dev_id), void *dev_id);
};
#endif

View file

@@ -7,10 +7,121 @@
*/
#ifndef _ASM_MUTEX_H
#define _ASM_MUTEX_H
#if __LINUX_ARM_ARCH__ < 6
/* On pre-ARMv6 hardware the swp based implementation is the most efficient. */
# include <asm-generic/mutex-xchg.h>
#else
/*
* On pre-ARMv6 hardware this results in a swp-based implementation,
* which is the most efficient. For ARMv6+, we emit a pair of exclusive
* accesses instead.
* Attempting to lock a mutex on ARMv6+ can be done with a bastardized
* atomic decrement (it is not a reliable atomic decrement but it satisfies
* the defined semantics for our purpose, while being smaller and faster
* than a real atomic decrement or atomic swap. The idea is to attempt
* decrementing the lock value only once. If once decremented it isn't zero,
* or if its store-back fails due to a dispute on the exclusive store, we
* simply bail out immediately through the slow path where the lock will be
* reattempted until it succeeds.
*/
#include <asm-generic/mutex-xchg.h>
static inline void
__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
int __ex_flag, __res;
__asm__ (
"ldrex %0, [%2] \n\t"
"sub %0, %0, #1 \n\t"
"strex %1, %0, [%2] "
: "=&r" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "cc","memory" );
__res |= __ex_flag;
if (unlikely(__res != 0))
fail_fn(count);
}
static inline int
__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
{
int __ex_flag, __res;
__asm__ (
"ldrex %0, [%2] \n\t"
"sub %0, %0, #1 \n\t"
"strex %1, %0, [%2] "
: "=&r" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "cc","memory" );
__res |= __ex_flag;
if (unlikely(__res != 0))
__res = fail_fn(count);
return __res;
}
/*
* Same trick is used for the unlock fast path. However the original value,
* rather than the result, is used to test for success in order to have
* better generated assembly.
*/
static inline void
__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
int __ex_flag, __res, __orig;
__asm__ (
"ldrex %0, [%3] \n\t"
"add %1, %0, #1 \n\t"
"strex %2, %1, [%3] "
: "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "cc","memory" );
__orig |= __ex_flag;
if (unlikely(__orig != 0))
fail_fn(count);
}
/*
* If the unlock was done on a contended lock, or if the unlock simply fails
* then the mutex remains locked.
*/
#define __mutex_slowpath_needs_to_unlock() 1
/*
* For __mutex_fastpath_trylock we use another construct which could be
* described as a "single value cmpxchg".
*
* This provides the needed trylock semantics like cmpxchg would, but it is
* lighter and less generic than a true cmpxchg implementation.
*/
static inline int
__mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
{
int __ex_flag, __res, __orig;
__asm__ (
"1: ldrex %0, [%3] \n\t"
"subs %1, %0, #1 \n\t"
"strexeq %2, %1, [%3] \n\t"
"movlt %0, #0 \n\t"
"cmpeq %2, #0 \n\t"
"bgt 1b "
: "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag)
: "r" (&count->counter)
: "cc", "memory" );
return __orig;
}
#endif
#endif
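The comments in this block describe the fastpath contract rather informally, so here is a minimal sketch of the same contract written with the generic atomic helpers (illustrative only, not code from this tree; the two helper names below are made up for the example). The real ARMv6+ routines above differ in one respect: a failed strex also counts as a non-zero result, so they may drop into the slow path spuriously, which the mutex slow path tolerates.

#include <linux/atomic.h>

/*
 * Sketch of __mutex_fastpath_lock(): one atomic decrement; anything but
 * a clean 1 -> 0 transition goes to the slow path via fail_fn().
 */
static inline void mutex_fastpath_lock_sketch(atomic_t *count,
					      void (*fail_fn)(atomic_t *))
{
	if (atomic_dec_return(count) != 0)
		fail_fn(count);
}

/*
 * Sketch of __mutex_fastpath_unlock(): increment, and if the counter was
 * negative (waiters queued) let the slow path wake them up.
 */
static inline void mutex_fastpath_unlock_sketch(atomic_t *count,
						void (*fail_fn)(atomic_t *))
{
	if (atomic_inc_return(count) <= 0)
		fail_fn(count);
}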

View file

@@ -360,18 +360,6 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
#define set_pte_ext(ptep,pte,ext) cpu_set_pte_ext(ptep,pte,ext)
#define pte_clear(mm,addr,ptep) set_pte_ext(ptep, __pte(0), 0)
#define pte_none(pte) (!pte_val(pte))
#define pte_present(pte) (pte_val(pte) & L_PTE_PRESENT)
#define pte_write(pte) (!(pte_val(pte) & L_PTE_RDONLY))
#define pte_dirty(pte) (pte_val(pte) & L_PTE_DIRTY)
#define pte_young(pte) (pte_val(pte) & L_PTE_YOUNG)
#define pte_exec(pte) (!(pte_val(pte) & L_PTE_XN))
#define pte_special(pte) (0)
#define pte_present_user(pte) \
((pte_val(pte) & (L_PTE_PRESENT | L_PTE_USER)) == \
(L_PTE_PRESENT | L_PTE_USER))
#if __LINUX_ARM_ARCH__ < 6
static inline void __sync_icache_dcache(pte_t pteval)
{
@@ -383,16 +371,26 @@ extern void __sync_icache_dcache(pte_t pteval);
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pteval)
{
unsigned long ext = 0;
if (addr < TASK_SIZE && pte_present_user(pteval)) {
if (addr >= TASK_SIZE)
set_pte_ext(ptep, pteval, 0);
else {
__sync_icache_dcache(pteval);
ext |= PTE_EXT_NG;
set_pte_ext(ptep, pteval, PTE_EXT_NG);
}
set_pte_ext(ptep, pteval, ext);
}
#define pte_none(pte) (!pte_val(pte))
#define pte_present(pte) (pte_val(pte) & L_PTE_PRESENT)
#define pte_write(pte) (!(pte_val(pte) & L_PTE_RDONLY))
#define pte_dirty(pte) (pte_val(pte) & L_PTE_DIRTY)
#define pte_young(pte) (pte_val(pte) & L_PTE_YOUNG)
#define pte_exec(pte) (!(pte_val(pte) & L_PTE_XN))
#define pte_special(pte) (0)
#define pte_present_user(pte) \
((pte_val(pte) & (L_PTE_PRESENT | L_PTE_USER)) == \
(L_PTE_PRESENT | L_PTE_USER))
#define PTE_BIT_FUNC(fn,op) \
static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; }
@@ -418,13 +416,13 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
*
* 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
* 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
* <--------------- offset ----------------------> < type -> 0 0 0
* <--------------- offset --------------------> <- type --> 0 0 0
*
* This gives us up to 31 swap files and 64GB per swap file. Note that
* This gives us up to 63 swap files and 32GB per swap file. Note that
* the offset field is always non-zero.
*/
#define __SWP_TYPE_SHIFT 3
#define __SWP_TYPE_BITS 5
#define __SWP_TYPE_BITS 6
#define __SWP_TYPE_MASK ((1 << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT (__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
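As a quick sanity check of the figures quoted in the comment above (illustrative only; this assumes 4 KiB pages and 32-bit swap entries), each extra type bit halves the offset field and therefore the addressable size of a single swap file. A small user-space calculation reproduces both sets of numbers, 31 files / 64GB with 5 type bits and 63 files / 32GB with 6:

#include <stdio.h>

int main(void)
{
	const unsigned int type_shift = 3;	/* three low zero bits in the entry */
	const unsigned int page_shift = 12;	/* 4 KiB pages (assumption) */
	const unsigned int type_bits[] = { 5, 6 };

	for (int i = 0; i < 2; i++) {
		unsigned int offset_bits = 32 - type_bits[i] - type_shift;
		unsigned long long per_file = (1ULL << offset_bits) << page_shift;

		printf("%u type bits: %u swap files, %llu GB per swap file\n",
		       type_bits[i], (1u << type_bits[i]) - 1, per_file >> 30);
	}
	return 0;
}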

View file

@@ -93,6 +93,4 @@ extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
*/
extern void show_local_irqs(struct seq_file *, int);
extern void smp_send_all_cpu_backtrace(void);
#endif /* ifndef __ASM_ARM_SMP_H */

View file

@@ -7,8 +7,6 @@
.macro set_tls_v6k, tp, tmp1, tmp2
mcr p15, 0, \tp, c13, c0, 3 @ set TLS register
mov \tmp1, #0
mcr p15, 0, \tmp1, c13, c0, 2 @ clear user r/w TLS register
.endm
.macro set_tls_v6, tp, tmp1, tmp2
@@ -17,8 +15,6 @@
mov \tmp2, #0xffff0fff
tst \tmp1, #HWCAP_TLS @ hardware TLS available?
mcrne p15, 0, \tp, c13, c0, 3 @ yes, set TLS register
movne \tmp1, #0
mcrne p15, 0, \tmp1, c13, c0, 2 @ clear user r/w TLS register
streq \tp, [\tmp2, #-15] @ set TLS value at 0xffff0ff0
.endm

View file

@@ -27,9 +27,9 @@
#if __LINUX_ARM_ARCH__ <= 6
ldr \tmp, =elf_hwcap @ may not have MVFR regs
ldr \tmp, [\tmp, #0]
tst \tmp, #HWCAP_VFPD32
ldcnel p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
addeq \base, \base, #32*4 @ step over unused register space
tst \tmp, #HWCAP_VFPv3D16
ldceq p11, cr0, [\base],#32*4 @ FLDMIAD \base!, {d16-d31}
addne \base, \base, #32*4 @ step over unused register space
#else
VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field
@@ -51,9 +51,9 @@
#if __LINUX_ARM_ARCH__ <= 6
ldr \tmp, =elf_hwcap @ may not have MVFR regs
ldr \tmp, [\tmp, #0]
tst \tmp, #HWCAP_VFPD32
stcnel p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
addeq \base, \base, #32*4 @ step over unused register space
tst \tmp, #HWCAP_VFPv3D16
stceq p11, cr0, [\base],#32*4 @ FSTMIAD \base!, {d16-d31}
addne \base, \base, #32*4 @ step over unused register space
#else
VFPFMRX \tmp, MVFR0 @ Media and VFP Feature Register 0
and \tmp, \tmp, #MVFR0_A_SIMD_MASK @ A_SIMD field

View file

@@ -496,7 +496,7 @@ __und_usr:
blo __und_usr_unknown
3: ldrht r0, [r4]
add r2, r2, #2 @ r2 is PC + 2, make it PC + 4
orr r0, r0, r5, lsl #16
orr r0, r0, r5, lsl #16
#else
b __und_usr_unknown
#endif

View file

@@ -15,7 +15,6 @@
#include <linux/init.h>
#include <linux/types.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/sysrq.h>
#include <linux/device.h>
#include <linux/clk.h>
@@ -37,36 +36,26 @@ MODULE_AUTHOR("Alexander Shishkin");
struct tracectx {
unsigned int etb_bufsz;
void __iomem *etb_regs;
void __iomem **etm_regs;
int etm_regs_count;
void __iomem *etm_regs;
unsigned long flags;
int ncmppairs;
int etm_portsz;
u32 etb_fc;
unsigned long range_start;
unsigned long range_end;
unsigned long data_range_start;
unsigned long data_range_end;
bool dump_initial_etb;
struct device *dev;
struct clk *emu_clk;
struct mutex mutex;
};
static struct tracectx tracer = {
.range_start = (unsigned long)_stext,
.range_end = (unsigned long)_etext,
};
static struct tracectx tracer;
static inline bool trace_isrunning(struct tracectx *t)
{
return !!(t->flags & TRACER_RUNNING);
}
static int etm_setup_address_range(struct tracectx *t, int id, int n,
static int etm_setup_address_range(struct tracectx *t, int n,
unsigned long start, unsigned long end, int exclude, int data)
{
u32 flags = ETMAAT_ARM | ETMAAT_IGNCONTEXTID | ETMAAT_IGNSECURITY |
u32 flags = ETMAAT_ARM | ETMAAT_IGNCONTEXTID | ETMAAT_NSONLY | \
ETMAAT_NOVALCMP;
if (n < 1 || n > t->ncmppairs)
@@ -82,155 +71,95 @@ static int etm_setup_address_range(struct tracectx *t, int id, int n,
flags |= ETMAAT_IEXEC;
/* first comparator for the range */
etm_writel(t, id, flags, ETMR_COMP_ACC_TYPE(n * 2));
etm_writel(t, id, start, ETMR_COMP_VAL(n * 2));
etm_writel(t, flags, ETMR_COMP_ACC_TYPE(n * 2));
etm_writel(t, start, ETMR_COMP_VAL(n * 2));
/* second comparator is right next to it */
etm_writel(t, id, flags, ETMR_COMP_ACC_TYPE(n * 2 + 1));
etm_writel(t, id, end, ETMR_COMP_VAL(n * 2 + 1));
etm_writel(t, flags, ETMR_COMP_ACC_TYPE(n * 2 + 1));
etm_writel(t, end, ETMR_COMP_VAL(n * 2 + 1));
if (data) {
flags = exclude ? ETMVDC3_EXCLONLY : 0;
if (exclude)
n += 8;
etm_writel(t, id, flags | BIT(n), ETMR_VIEWDATACTRL3);
} else {
flags = exclude ? ETMTE_INCLEXCL : 0;
etm_writel(t, id, flags | (1 << n), ETMR_TRACEENCTRL);
}
flags = exclude ? ETMTE_INCLEXCL : 0;
etm_writel(t, flags | (1 << n), ETMR_TRACEENCTRL);
return 0;
}
static int trace_start_etm(struct tracectx *t, int id)
{
u32 v;
unsigned long timeout = TRACER_TIMEOUT;
v = ETMCTRL_OPTS | ETMCTRL_PROGRAM | ETMCTRL_PORTSIZE(t->etm_portsz);
if (t->flags & TRACER_CYCLE_ACC)
v |= ETMCTRL_CYCLEACCURATE;
if (t->flags & TRACER_TRACE_DATA)
v |= ETMCTRL_DATA_DO_ADDR;
etm_unlock(t, id);
etm_writel(t, id, v, ETMR_CTRL);
while (!(etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t, id);
return -EFAULT;
}
if (t->range_start || t->range_end)
etm_setup_address_range(t, id, 1,
t->range_start, t->range_end, 0, 0);
else
etm_writel(t, id, ETMTE_INCLEXCL, ETMR_TRACEENCTRL);
etm_writel(t, id, 0, ETMR_TRACEENCTRL2);
etm_writel(t, id, 0, ETMR_TRACESSCTRL);
etm_writel(t, id, 0x6f, ETMR_TRACEENEVT);
etm_writel(t, id, 0, ETMR_VIEWDATACTRL1);
etm_writel(t, id, 0, ETMR_VIEWDATACTRL2);
if (t->data_range_start || t->data_range_end)
etm_setup_address_range(t, id, 2, t->data_range_start,
t->data_range_end, 0, 1);
else
etm_writel(t, id, ETMVDC3_EXCLONLY, ETMR_VIEWDATACTRL3);
etm_writel(t, id, 0x6f, ETMR_VIEWDATAEVT);
v &= ~ETMCTRL_PROGRAM;
v |= ETMCTRL_PORTSEL;
etm_writel(t, id, v, ETMR_CTRL);
timeout = TRACER_TIMEOUT;
while (etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to deassert timed out\n");
etm_lock(t, id);
return -EFAULT;
}
etm_lock(t, id);
return 0;
}
static int trace_start(struct tracectx *t)
{
int ret;
int id;
u32 etb_fc = t->etb_fc;
u32 v;
unsigned long timeout = TRACER_TIMEOUT;
etb_unlock(t);
t->dump_initial_etb = false;
etb_writel(t, 0, ETBR_WRITEADDR);
etb_writel(t, etb_fc, ETBR_FORMATTERCTRL);
etb_writel(t, 0, ETBR_FORMATTERCTRL);
etb_writel(t, 1, ETBR_CTRL);
etb_lock(t);
/* configure etm(s) */
for (id = 0; id < t->etm_regs_count; id++) {
ret = trace_start_etm(t, id);
if (ret)
return ret;
/* configure etm */
v = ETMCTRL_OPTS | ETMCTRL_PROGRAM | ETMCTRL_PORTSIZE(t->etm_portsz);
if (t->flags & TRACER_CYCLE_ACC)
v |= ETMCTRL_CYCLEACCURATE;
etm_unlock(t);
etm_writel(t, v, ETMR_CTRL);
while (!(etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t);
return -EFAULT;
}
etm_setup_address_range(t, 1, (unsigned long)_stext,
(unsigned long)_etext, 0, 0);
etm_writel(t, 0, ETMR_TRACEENCTRL2);
etm_writel(t, 0, ETMR_TRACESSCTRL);
etm_writel(t, 0x6f, ETMR_TRACEENEVT);
v &= ~ETMCTRL_PROGRAM;
v |= ETMCTRL_PORTSEL;
etm_writel(t, v, ETMR_CTRL);
timeout = TRACER_TIMEOUT;
while (etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to deassert timed out\n");
etm_lock(t);
return -EFAULT;
}
etm_lock(t);
t->flags |= TRACER_RUNNING;
return 0;
}
static int trace_stop_etm(struct tracectx *t, int id)
static int trace_stop(struct tracectx *t)
{
unsigned long timeout = TRACER_TIMEOUT;
etm_unlock(t, id);
etm_unlock(t);
etm_writel(t, id, 0x441, ETMR_CTRL);
while (!(etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
etm_writel(t, 0x440, ETMR_CTRL);
while (!(etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t, id);
etm_lock(t);
return -EFAULT;
}
etm_lock(t, id);
return 0;
}
static int trace_stop(struct tracectx *t)
{
int id;
int ret;
unsigned long timeout = TRACER_TIMEOUT;
u32 etb_fc = t->etb_fc;
for (id = 0; id < t->etm_regs_count; id++) {
ret = trace_stop_etm(t, id);
if (ret)
return ret;
}
etm_lock(t);
etb_unlock(t);
if (etb_fc) {
etb_fc |= ETBFF_STOPFL;
etb_writel(t, t->etb_fc, ETBR_FORMATTERCTRL);
}
etb_writel(t, etb_fc | ETBFF_MANUAL_FLUSH, ETBR_FORMATTERCTRL);
etb_writel(t, ETBFF_MANUAL_FLUSH, ETBR_FORMATTERCTRL);
timeout = TRACER_TIMEOUT;
while (etb_readl(t, ETBR_FORMATTERCTRL) &
@@ -255,15 +184,24 @@ static int trace_stop(struct tracectx *t)
static int etb_getdatalen(struct tracectx *t)
{
u32 v;
int wp;
int rp, wp;
v = etb_readl(t, ETBR_STATUS);
if (v & 1)
return t->etb_bufsz;
rp = etb_readl(t, ETBR_READADDR);
wp = etb_readl(t, ETBR_WRITEADDR);
return wp;
if (rp > wp) {
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_WRITEADDR);
return 0;
}
return wp - rp;
}
/* sysrq+v will always stop the running trace and leave it at that */
@@ -296,18 +234,21 @@ static void etm_dump(void)
printk("%08x", cpu_to_be32(etb_readl(t, ETBR_READMEM)));
printk(KERN_INFO "\n--- ETB buffer end ---\n");
/* deassert the overflow bit */
etb_writel(t, 1, ETBR_CTRL);
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0, ETBR_TRIGGERCOUNT);
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_WRITEADDR);
etb_lock(t);
}
static void sysrq_etm_dump(int key)
{
if (!mutex_trylock(&tracer.mutex)) {
printk(KERN_INFO "Tracing hardware busy\n");
return;
}
dev_dbg(tracer.dev, "Dumping ETB buffer\n");
etm_dump();
mutex_unlock(&tracer.mutex);
}
static struct sysrq_key_op sysrq_etm_op = {
@@ -334,10 +275,6 @@ static ssize_t etb_read(struct file *file, char __user *data,
struct tracectx *t = file->private_data;
u32 first = 0;
u32 *buf;
int wpos;
int skip;
long wlength;
loff_t pos = *ppos;
mutex_lock(&t->mutex);
@@ -349,39 +286,31 @@ static ssize_t etb_read(struct file *file, char __user *data,
etb_unlock(t);
total = etb_getdatalen(t);
if (total == 0 && t->dump_initial_etb)
total = t->etb_bufsz;
if (total == t->etb_bufsz)
first = etb_readl(t, ETBR_WRITEADDR);
if (pos > total * 4) {
skip = 0;
wpos = total;
} else {
skip = (int)pos % 4;
wpos = (int)pos / 4;
}
total -= wpos;
first = (first + wpos) % t->etb_bufsz;
etb_writel(t, first, ETBR_READADDR);
wlength = min(total, DIV_ROUND_UP(skip + (int)len, 4));
length = min(total * 4 - skip, (int)len);
buf = vmalloc(wlength * 4);
length = min(total * 4, (int)len);
buf = vmalloc(length);
dev_dbg(t->dev, "ETB read %ld bytes to %lld from %ld words at %d\n",
length, pos, wlength, first);
dev_dbg(t->dev, "ETB buffer length: %d\n", total + wpos);
dev_dbg(t->dev, "ETB buffer length: %d\n", total);
dev_dbg(t->dev, "ETB status reg: %x\n", etb_readl(t, ETBR_STATUS));
for (i = 0; i < wlength; i++)
for (i = 0; i < length / 4; i++)
buf[i] = etb_readl(t, ETBR_READMEM);
/* the only way to deassert overflow bit in ETB status is this */
etb_writel(t, 1, ETBR_CTRL);
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0, ETBR_WRITEADDR);
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_TRIGGERCOUNT);
etb_lock(t);
length -= copy_to_user(data, (u8 *)buf + skip, length);
length -= copy_to_user(data, buf, length);
vfree(buf);
*ppos = pos + length;
out:
mutex_unlock(&t->mutex);
@@ -418,17 +347,28 @@ static int __devinit etb_probe(struct amba_device *dev, const struct amba_id *id
if (ret)
goto out;
mutex_lock(&t->mutex);
t->etb_regs = ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etb_regs) {
ret = -ENOMEM;
goto out_release;
}
t->dev = &dev->dev;
t->dump_initial_etb = true;
amba_set_drvdata(dev, t);
etb_miscdev.parent = &dev->dev;
ret = misc_register(&etb_miscdev);
if (ret)
goto out_unmap;
t->emu_clk = clk_get(&dev->dev, "emu_src_ck");
if (IS_ERR(t->emu_clk)) {
dev_dbg(&dev->dev, "Failed to obtain emu_src_ck.\n");
return -EFAULT;
}
clk_enable(t->emu_clk);
etb_unlock(t);
t->etb_bufsz = etb_readl(t, ETBR_DEPTH);
dev_dbg(&dev->dev, "Size: %x\n", t->etb_bufsz);
@@ -437,20 +377,6 @@ static int __devinit etb_probe(struct amba_device *dev, const struct amba_id *id
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0x1000, ETBR_FORMATTERCTRL);
etb_lock(t);
mutex_unlock(&t->mutex);
etb_miscdev.parent = &dev->dev;
ret = misc_register(&etb_miscdev);
if (ret)
goto out_unmap;
/* Get optional clock. Currently used to select clock source on omap3 */
t->emu_clk = clk_get(&dev->dev, "emu_src_ck");
if (IS_ERR(t->emu_clk))
dev_dbg(&dev->dev, "Failed to obtain emu_src_ck.\n");
else
clk_enable(t->emu_clk);
dev_dbg(&dev->dev, "ETB AMBA driver initialized.\n");
@@ -458,13 +384,10 @@ out:
return ret;
out_unmap:
mutex_lock(&t->mutex);
amba_set_drvdata(dev, NULL);
iounmap(t->etb_regs);
t->etb_regs = NULL;
out_release:
mutex_unlock(&t->mutex);
amba_release_regions(dev);
return ret;
@@ -479,10 +402,8 @@ static int etb_remove(struct amba_device *dev)
iounmap(t->etb_regs);
t->etb_regs = NULL;
if (!IS_ERR(t->emu_clk)) {
clk_disable(t->emu_clk);
clk_put(t->emu_clk);
}
clk_disable(t->emu_clk);
clk_put(t->emu_clk);
amba_release_regions(dev);
@@ -526,10 +447,7 @@ static ssize_t trace_running_store(struct kobject *kobj,
return -EINVAL;
mutex_lock(&tracer.mutex);
if (!tracer.etb_regs)
ret = -ENODEV;
else
ret = value ? trace_start(&tracer) : trace_stop(&tracer);
ret = value ? trace_start(&tracer) : trace_stop(&tracer);
mutex_unlock(&tracer.mutex);
return ret ? : n;
@@ -544,50 +462,36 @@ static ssize_t trace_info_show(struct kobject *kobj,
{
u32 etb_wa, etb_ra, etb_st, etb_fc, etm_ctrl, etm_st;
int datalen;
int id;
int ret;
mutex_lock(&tracer.mutex);
if (tracer.etb_regs) {
etb_unlock(&tracer);
datalen = etb_getdatalen(&tracer);
etb_wa = etb_readl(&tracer, ETBR_WRITEADDR);
etb_ra = etb_readl(&tracer, ETBR_READADDR);
etb_st = etb_readl(&tracer, ETBR_STATUS);
etb_fc = etb_readl(&tracer, ETBR_FORMATTERCTRL);
etb_lock(&tracer);
} else {
etb_wa = etb_ra = etb_st = etb_fc = ~0;
datalen = -1;
}
etb_unlock(&tracer);
datalen = etb_getdatalen(&tracer);
etb_wa = etb_readl(&tracer, ETBR_WRITEADDR);
etb_ra = etb_readl(&tracer, ETBR_READADDR);
etb_st = etb_readl(&tracer, ETBR_STATUS);
etb_fc = etb_readl(&tracer, ETBR_FORMATTERCTRL);
etb_lock(&tracer);
ret = sprintf(buf, "Trace buffer len: %d\nComparator pairs: %d\n"
etm_unlock(&tracer);
etm_ctrl = etm_readl(&tracer, ETMR_CTRL);
etm_st = etm_readl(&tracer, ETMR_STATUS);
etm_lock(&tracer);
return sprintf(buf, "Trace buffer len: %d\nComparator pairs: %d\n"
"ETBR_WRITEADDR:\t%08x\n"
"ETBR_READADDR:\t%08x\n"
"ETBR_STATUS:\t%08x\n"
"ETBR_FORMATTERCTRL:\t%08x\n",
"ETBR_FORMATTERCTRL:\t%08x\n"
"ETMR_CTRL:\t%08x\n"
"ETMR_STATUS:\t%08x\n",
datalen,
tracer.ncmppairs,
etb_wa,
etb_ra,
etb_st,
etb_fc
);
for (id = 0; id < tracer.etm_regs_count; id++) {
etm_unlock(&tracer, id);
etm_ctrl = etm_readl(&tracer, id, ETMR_CTRL);
etm_st = etm_readl(&tracer, id, ETMR_STATUS);
etm_lock(&tracer, id);
ret += sprintf(buf + ret, "ETMR_CTRL:\t%08x\n"
"ETMR_STATUS:\t%08x\n",
etb_fc,
etm_ctrl,
etm_st
);
}
mutex_unlock(&tracer.mutex);
return ret;
}
static struct kobj_attribute trace_info_attr =
@@ -626,121 +530,42 @@ static ssize_t trace_mode_store(struct kobject *kobj,
static struct kobj_attribute trace_mode_attr =
__ATTR(trace_mode, 0644, trace_mode_show, trace_mode_store);
static ssize_t trace_range_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%08lx %08lx\n",
tracer.range_start, tracer.range_end);
}
static ssize_t trace_range_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned long range_start, range_end;
if (sscanf(buf, "%lx %lx", &range_start, &range_end) != 2)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.range_start = range_start;
tracer.range_end = range_end;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_range_attr =
__ATTR(trace_range, 0644, trace_range_show, trace_range_store);
static ssize_t trace_data_range_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
unsigned long range_start;
u64 range_end;
mutex_lock(&tracer.mutex);
range_start = tracer.data_range_start;
range_end = tracer.data_range_end;
if (!range_end && (tracer.flags & TRACER_TRACE_DATA))
range_end = 0x100000000ULL;
mutex_unlock(&tracer.mutex);
return sprintf(buf, "%08lx %08llx\n", range_start, range_end);
}
static ssize_t trace_data_range_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned long range_start;
u64 range_end;
if (sscanf(buf, "%lx %llx", &range_start, &range_end) != 2)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.data_range_start = range_start;
tracer.data_range_end = (unsigned long)range_end;
if (range_end)
tracer.flags |= TRACER_TRACE_DATA;
else
tracer.flags &= ~TRACER_TRACE_DATA;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_data_range_attr =
__ATTR(trace_data_range, 0644,
trace_data_range_show, trace_data_range_store);
static int __devinit etm_probe(struct amba_device *dev, const struct amba_id *id)
{
struct tracectx *t = &tracer;
int ret = 0;
void __iomem **new_regs;
int new_count;
mutex_lock(&t->mutex);
new_count = t->etm_regs_count + 1;
new_regs = krealloc(t->etm_regs,
sizeof(t->etm_regs[0]) * new_count, GFP_KERNEL);
if (!new_regs) {
dev_dbg(&dev->dev, "Failed to allocate ETM register array\n");
ret = -ENOMEM;
if (t->etm_regs) {
dev_dbg(&dev->dev, "ETM already initialized\n");
ret = -EBUSY;
goto out;
}
t->etm_regs = new_regs;
ret = amba_request_regions(dev, NULL);
if (ret)
goto out;
t->etm_regs[t->etm_regs_count] =
ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etm_regs[t->etm_regs_count]) {
t->etm_regs = ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etm_regs) {
ret = -ENOMEM;
goto out_release;
}
amba_set_drvdata(dev, t->etm_regs[t->etm_regs_count]);
amba_set_drvdata(dev, t);
t->flags = TRACER_CYCLE_ACC | TRACER_TRACE_DATA;
mutex_init(&t->mutex);
t->dev = &dev->dev;
t->flags = TRACER_CYCLE_ACC;
t->etm_portsz = 1;
etm_unlock(t, t->etm_regs_count);
(void)etm_readl(t, t->etm_regs_count, ETMMR_PDSR);
etm_unlock(t);
(void)etm_readl(t, ETMMR_PDSR);
/* dummy first read */
(void)etm_readl(&tracer, t->etm_regs_count, ETMMR_OSSRR);
(void)etm_readl(&tracer, ETMMR_OSSRR);
t->ncmppairs = etm_readl(t, t->etm_regs_count, ETMR_CONFCODE) & 0xf;
etm_writel(t, t->etm_regs_count, 0x441, ETMR_CTRL);
etm_writel(t, t->etm_regs_count, new_count, ETMR_TRACEIDR);
etm_lock(t, t->etm_regs_count);
t->ncmppairs = etm_readl(t, ETMR_CONFCODE) & 0xf;
etm_writel(t, 0x440, ETMR_CTRL);
etm_lock(t);
ret = sysfs_create_file(&dev->dev.kobj,
&trace_running_attr.attr);
@@ -756,67 +581,35 @@ static int __devinit etm_probe(struct amba_device *dev, const struct amba_id *id
if (ret)
dev_dbg(&dev->dev, "Failed to create trace_mode in sysfs\n");
ret = sysfs_create_file(&dev->dev.kobj, &trace_range_attr.attr);
if (ret)
dev_dbg(&dev->dev, "Failed to create trace_range in sysfs\n");
ret = sysfs_create_file(&dev->dev.kobj, &trace_data_range_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_data_range in sysfs\n");
dev_dbg(&dev->dev, "ETM AMBA driver initialized.\n");
/* Enable formatter if there are multiple trace sources */
if (new_count > 1)
t->etb_fc = ETBFF_ENFCONT | ETBFF_ENFTC;
t->etm_regs_count = new_count;
dev_dbg(t->dev, "ETM AMBA driver initialized.\n");
out:
mutex_unlock(&t->mutex);
return ret;
out_unmap:
amba_set_drvdata(dev, NULL);
iounmap(t->etm_regs[t->etm_regs_count]);
iounmap(t->etm_regs);
out_release:
amba_release_regions(dev);
mutex_unlock(&t->mutex);
return ret;
}
static int etm_remove(struct amba_device *dev)
{
int i;
struct tracectx *t = &tracer;
void __iomem *etm_regs = amba_get_drvdata(dev);
struct tracectx *t = amba_get_drvdata(dev);
amba_set_drvdata(dev, NULL);
iounmap(t->etm_regs);
t->etm_regs = NULL;
amba_release_regions(dev);
sysfs_remove_file(&dev->dev.kobj, &trace_running_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_info_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_mode_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_range_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_data_range_attr.attr);
amba_set_drvdata(dev, NULL);
mutex_lock(&t->mutex);
for (i = 0; i < t->etm_regs_count; i++)
if (t->etm_regs[i] == etm_regs)
break;
for (; i < t->etm_regs_count - 1; i++)
t->etm_regs[i] = t->etm_regs[i + 1];
t->etm_regs_count--;
if (!t->etm_regs_count) {
kfree(t->etm_regs);
t->etm_regs = NULL;
}
mutex_unlock(&t->mutex);
iounmap(etm_regs);
amba_release_regions(dev);
return 0;
}
@@ -826,10 +619,6 @@ static struct amba_id etm_ids[] = {
.id = 0x0003b921,
.mask = 0x0007ffff,
},
{
.id = 0x0003b950,
.mask = 0x0007ffff,
},
{ 0, 0 },
};
@@ -847,8 +636,6 @@ static int __init etm_init(void)
{
int retval;
mutex_init(&tracer.mutex);
retval = amba_driver_register(&etb_driver);
if (retval) {
printk(KERN_ERR "Failed to register etb\n");

View file

@@ -348,7 +348,7 @@ __secondary_data:
* r13 = *virtual* address to jump to upon completion
*/
__enable_mmu:
#if defined(CONFIG_ALIGNMENT_TRAP) && __LINUX_ARM_ARCH__ < 6
#ifdef CONFIG_ALIGNMENT_TRAP
orr r0, r0, #CR_A
#else
bic r0, r0, #CR_A

View file

@@ -9,8 +9,6 @@
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/cpu.h>
#include <linux/sysdev.h>
#include <linux/syscore_ops.h>
@@ -103,25 +101,6 @@ static struct syscore_ops leds_syscore_ops = {
.resume = leds_resume,
};
static int leds_idle_notifier(struct notifier_block *nb, unsigned long val,
void *data)
{
switch (val) {
case IDLE_START:
leds_event(led_idle_start);
break;
case IDLE_END:
leds_event(led_idle_end);
break;
}
return 0;
}
static struct notifier_block leds_idle_nb = {
.notifier_call = leds_idle_notifier,
};
static int __init leds_init(void)
{
int ret;
@@ -130,12 +109,8 @@ static int __init leds_init(void)
ret = sysdev_register(&leds_device);
if (ret == 0)
ret = sysdev_create_file(&leds_device, &attr_event);
if (ret == 0) {
if (ret == 0)
register_syscore_ops(&leds_syscore_ops);
idle_notifier_register(&leds_idle_nb);
}
return ret;
}

View file

@@ -264,8 +264,8 @@ static const unsigned armv7_a9_perf_map[PERF_COUNT_HW_MAX] = {
[PERF_COUNT_HW_CPU_CYCLES] = ARMV7_PERFCTR_CPU_CYCLES,
[PERF_COUNT_HW_INSTRUCTIONS] =
ARMV7_PERFCTR_INST_OUT_OF_RENAME_STAGE,
[PERF_COUNT_HW_CACHE_REFERENCES] = ARMV7_PERFCTR_DCACHE_ACCESS,
[PERF_COUNT_HW_CACHE_MISSES] = ARMV7_PERFCTR_DCACHE_REFILL,
[PERF_COUNT_HW_CACHE_REFERENCES] = ARMV7_PERFCTR_COHERENT_LINE_HIT,
[PERF_COUNT_HW_CACHE_MISSES] = ARMV7_PERFCTR_COHERENT_LINE_MISS,
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV7_PERFCTR_PC_WRITE,
[PERF_COUNT_HW_BRANCH_MISSES] = ARMV7_PERFCTR_PC_BRANCH_MIS_PRED,
[PERF_COUNT_HW_BUS_CYCLES] = ARMV7_PERFCTR_CLOCK_CYCLES,

View file

@@ -30,9 +30,9 @@
#include <linux/uaccess.h>
#include <linux/random.h>
#include <linux/hw_breakpoint.h>
#include <linux/console.h>
#include <asm/cacheflush.h>
#include <asm/leds.h>
#include <asm/processor.h>
#include <asm/system.h>
#include <asm/thread_notify.h>
@@ -62,18 +62,6 @@ static volatile int hlt_counter;
#include <mach/system.h>
#ifdef CONFIG_SMP
void arch_trigger_all_cpu_backtrace(void)
{
smp_send_all_cpu_backtrace();
}
#else
void arch_trigger_all_cpu_backtrace(void)
{
dump_stack();
}
#endif
void disable_hlt(void)
{
hlt_counter++;
@@ -103,37 +91,8 @@ static int __init hlt_setup(char *__unused)
__setup("nohlt", nohlt_setup);
__setup("hlt", hlt_setup);
#ifdef CONFIG_ARM_FLUSH_CONSOLE_ON_RESTART
void arm_machine_flush_console(void)
{
printk("\n");
pr_emerg("Restarting %s\n", linux_banner);
if (console_trylock()) {
console_unlock();
return;
}
mdelay(50);
local_irq_disable();
if (!console_trylock())
pr_emerg("arm_restart: Console was locked! Busting\n");
else
pr_emerg("arm_restart: Console was locked!\n");
console_unlock();
}
#else
void arm_machine_flush_console(void)
{
}
#endif
void arm_machine_restart(char mode, const char *cmd)
{
/* Flush the console to make sure all the relevant messages make it
* out to the console drivers */
arm_machine_flush_console();
/* Disable interrupts first */
local_irq_disable();
local_fiq_disable();
@@ -223,8 +182,8 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */
while (1) {
idle_notifier_call_chain(IDLE_START);
tick_nohz_stop_sched_tick(1);
leds_event(led_idle_start);
while (!need_resched()) {
#ifdef CONFIG_HOTPLUG_CPU
if (cpu_is_offline(smp_processor_id()))
@@ -232,9 +191,6 @@ void cpu_idle(void)
#endif
local_irq_disable();
#ifdef CONFIG_PL310_ERRATA_769419
wmb();
#endif
if (hlt_counter) {
local_irq_enable();
cpu_relax();
@@ -251,8 +207,8 @@ void cpu_idle(void)
local_irq_enable();
}
}
leds_event(led_idle_end);
tick_nohz_restart_sched_tick();
idle_notifier_call_chain(IDLE_END);
preempt_enable_no_resched();
schedule();
preempt_disable();
@@ -272,15 +228,6 @@ __setup("reboot=", reboot_setup);
void machine_shutdown(void)
{
#ifdef CONFIG_SMP
/*
* Disable preemption so we're guaranteed to
* run to power off or reboot and prevent
* the possibility of switching to another
* thread that might wind up blocking on
* one of the stopped CPUs.
*/
preempt_disable();
smp_send_stop();
#endif
}
@@ -304,77 +251,6 @@ void machine_restart(char *cmd)
arm_pm_restart(reboot_mode, cmd);
}
/*
* dump a block of kernel memory from around the given address
*/
static void show_data(unsigned long addr, int nbytes, const char *name)
{
int i, j;
int nlines;
u32 *p;
/*
* don't attempt to dump non-kernel addresses or
* values that are probably just small negative numbers
*/
if (addr < PAGE_OFFSET || addr > -256UL)
return;
printk("\n%s: %#lx:\n", name, addr);
/*
* round address down to a 32 bit boundary
* and always dump a multiple of 32 bytes
*/
p = (u32 *)(addr & ~(sizeof(u32) - 1));
nbytes += (addr & (sizeof(u32) - 1));
nlines = (nbytes + 31) / 32;
for (i = 0; i < nlines; i++) {
/*
* just display low 16 bits of address to keep
* each line of the dump < 80 characters
*/
printk("%04lx ", (unsigned long)p & 0xffff);
for (j = 0; j < 8; j++) {
u32 data;
if (probe_kernel_address(p, data)) {
printk(" ********");
} else {
printk(" %08x", data);
}
++p;
}
printk("\n");
}
}
static void show_extra_register_data(struct pt_regs *regs, int nbytes)
{
mm_segment_t fs;
fs = get_fs();
set_fs(KERNEL_DS);
show_data(regs->ARM_pc - nbytes, nbytes * 2, "PC");
show_data(regs->ARM_lr - nbytes, nbytes * 2, "LR");
show_data(regs->ARM_sp - nbytes, nbytes * 2, "SP");
show_data(regs->ARM_ip - nbytes, nbytes * 2, "IP");
show_data(regs->ARM_fp - nbytes, nbytes * 2, "FP");
show_data(regs->ARM_r0 - nbytes, nbytes * 2, "R0");
show_data(regs->ARM_r1 - nbytes, nbytes * 2, "R1");
show_data(regs->ARM_r2 - nbytes, nbytes * 2, "R2");
show_data(regs->ARM_r3 - nbytes, nbytes * 2, "R3");
show_data(regs->ARM_r4 - nbytes, nbytes * 2, "R4");
show_data(regs->ARM_r5 - nbytes, nbytes * 2, "R5");
show_data(regs->ARM_r6 - nbytes, nbytes * 2, "R6");
show_data(regs->ARM_r7 - nbytes, nbytes * 2, "R7");
show_data(regs->ARM_r8 - nbytes, nbytes * 2, "R8");
show_data(regs->ARM_r9 - nbytes, nbytes * 2, "R9");
show_data(regs->ARM_r10 - nbytes, nbytes * 2, "R10");
set_fs(fs);
}
void __show_regs(struct pt_regs *regs)
{
unsigned long flags;
@@ -434,8 +310,6 @@ void __show_regs(struct pt_regs *regs)
printk("Control: %08x%s\n", ctrl, buf);
}
#endif
show_extra_register_data(regs, 128);
}
void show_regs(struct pt_regs * regs)

View file

@@ -719,13 +719,10 @@ static int vfp_set(struct task_struct *target,
{
int ret;
struct thread_info *thread = task_thread_info(target);
struct vfp_hard_struct new_vfp;
struct vfp_hard_struct new_vfp = thread->vfpstate.hard;
const size_t user_fpregs_offset = offsetof(struct user_vfp, fpregs);
const size_t user_fpscr_offset = offsetof(struct user_vfp, fpscr);
vfp_sync_hwstate(thread);
new_vfp = thread->vfpstate.hard;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&new_vfp.fpregs,
user_fpregs_offset,
@@ -746,8 +743,9 @@ static int vfp_set(struct task_struct *target,
if (ret)
return ret;
vfp_flush_hwstate(thread);
vfp_sync_hwstate(thread);
thread->vfpstate.hard = new_vfp;
vfp_flush_hwstate(thread);
return 0;
}

View file

@@ -227,8 +227,6 @@ static int restore_vfp_context(struct vfp_sigframe __user *frame)
if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE)
return -EINVAL;
vfp_flush_hwstate(thread);
/*
* Copy the floating point registers. There can be unused
* registers see asm/hwcap.h for details.
@@ -253,6 +251,9 @@ static int restore_vfp_context(struct vfp_sigframe __user *frame)
__get_user_error(h->fpinst, &frame->ufp_exc.fpinst, err);
__get_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err);
if (!err)
vfp_flush_hwstate(thread);
return err ? -EFAULT : 0;
}

View file

@@ -53,7 +53,6 @@ enum ipi_msg_type {
IPI_CALL_FUNC,
IPI_CALL_FUNC_SINGLE,
IPI_CPU_STOP,
IPI_CPU_BACKTRACE,
};
int __cpuinit __cpu_up(unsigned int cpu)
@@ -278,26 +277,20 @@ static void __cpuinit smp_store_cpu_info(unsigned int cpuid)
asmlinkage void __cpuinit secondary_start_kernel(void)
{
struct mm_struct *mm = &init_mm;
unsigned int cpu;
unsigned int cpu = smp_processor_id();
/*
* The identity mapping is uncached (strongly ordered), so
* switch away from it before attempting any exclusive accesses.
*/
cpu_switch_mm(mm->pgd, mm);
enter_lazy_tlb(mm, current);
local_flush_tlb_all();
printk("CPU%u: Booted secondary processor\n", cpu);
/*
* All kernel threads share the same mm context; grab a
* reference and switch to it.
*/
cpu = smp_processor_id();
atomic_inc(&mm->mm_count);
current->active_mm = mm;
cpumask_set_cpu(cpu, mm_cpumask(mm));
printk("CPU%u: Booted secondary processor\n", cpu);
cpu_switch_mm(mm->pgd, mm);
enter_lazy_tlb(mm, current);
local_flush_tlb_all();
cpu_init();
preempt_disable();
@@ -308,7 +301,17 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
*/
platform_secondary_init(cpu);
/*
* Enable local interrupts.
*/
notify_cpu_starting(cpu);
local_irq_enable();
local_fiq_enable();
/*
* Setup the percpu timer for this CPU.
*/
percpu_timer_setup();
calibrate_delay();
@@ -320,22 +323,9 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
* before we continue.
*/
set_cpu_online(cpu, true);
/*
* Setup the percpu timer for this CPU.
*/
percpu_timer_setup();
while (!cpu_active(cpu))
cpu_relax();
/*
* cpu_active bit is set, so it's safe to enable interrupts
* now.
*/
local_irq_enable();
local_fiq_enable();
/*
* OK, it's off to the idle thread for us
*/
@@ -415,7 +405,6 @@ static const char *ipi_types[NR_IPI] = {
S(IPI_CALL_FUNC, "Function call interrupts"),
S(IPI_CALL_FUNC_SINGLE, "Single function call interrupts"),
S(IPI_CPU_STOP, "CPU stop interrupts"),
S(IPI_CPU_BACKTRACE, "CPU backtrace"),
};
void show_ipi_list(struct seq_file *p, int prec)
@@ -456,7 +445,9 @@ static DEFINE_PER_CPU(struct clock_event_device, percpu_clockevent);
static void ipi_timer(void)
{
struct clock_event_device *evt = &__get_cpu_var(percpu_clockevent);
irq_enter();
evt->event_handler(evt);
irq_exit();
}
#ifdef CONFIG_LOCAL_TIMERS
@@ -467,9 +458,7 @@ asmlinkage void __exception_irq_entry do_local_timer(struct pt_regs *regs)
if (local_timer_ack()) {
__inc_irq_stat(cpu, local_timer_irqs);
irq_enter();
ipi_timer();
irq_exit();
}
set_irq_regs(old_regs);
@@ -566,58 +555,6 @@ static void ipi_cpu_stop(unsigned int cpu)
cpu_relax();
}
static cpumask_t backtrace_mask;
static DEFINE_RAW_SPINLOCK(backtrace_lock);
/* "in progress" flag of arch_trigger_all_cpu_backtrace */
static unsigned long backtrace_flag;
void smp_send_all_cpu_backtrace(void)
{
unsigned int this_cpu = smp_processor_id();
int i;
if (test_and_set_bit(0, &backtrace_flag))
/*
* If there is already a trigger_all_cpu_backtrace() in progress
* (backtrace_flag == 1), don't output double cpu dump infos.
*/
return;
cpumask_copy(&backtrace_mask, cpu_online_mask);
cpu_clear(this_cpu, backtrace_mask);
pr_info("Backtrace for cpu %d (current):\n", this_cpu);
dump_stack();
pr_info("\nsending IPI to all other CPUs:\n");
smp_cross_call(&backtrace_mask, IPI_CPU_BACKTRACE);
/* Wait for up to 10 seconds for all other CPUs to do the backtrace */
for (i = 0; i < 10 * 1000; i++) {
if (cpumask_empty(&backtrace_mask))
break;
mdelay(1);
}
clear_bit(0, &backtrace_flag);
smp_mb__after_clear_bit();
}
/*
* ipi_cpu_backtrace - handle IPI from smp_send_all_cpu_backtrace()
*/
static void ipi_cpu_backtrace(unsigned int cpu, struct pt_regs *regs)
{
if (cpu_isset(cpu, backtrace_mask)) {
raw_spin_lock(&backtrace_lock);
pr_warning("IPI backtrace for cpu %d\n", cpu);
show_regs(regs);
raw_spin_unlock(&backtrace_lock);
cpu_clear(cpu, backtrace_mask);
}
}
/*
* Main handler for inter-processor interrupts
*/
@@ -631,9 +568,7 @@ asmlinkage void __exception_irq_entry do_IPI(int ipinr, struct pt_regs *regs)
switch (ipinr) {
case IPI_TIMER:
irq_enter();
ipi_timer();
irq_exit();
break;
case IPI_RESCHEDULE:
@@ -641,25 +576,15 @@ asmlinkage void __exception_irq_entry do_IPI(int ipinr, struct pt_regs *regs)
break;
case IPI_CALL_FUNC:
irq_enter();
generic_smp_call_function_interrupt();
irq_exit();
break;
case IPI_CALL_FUNC_SINGLE:
irq_enter();
generic_smp_call_function_single_interrupt();
irq_exit();
break;
case IPI_CPU_STOP:
irq_enter();
ipi_cpu_stop(cpu);
irq_exit();
break;
case IPI_CPU_BACKTRACE:
ipi_cpu_backtrace(cpu, regs);
break;
default:

View file

@@ -13,7 +13,6 @@
#include <asm/smp_scu.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#define SCU_CTRL 0x00
#define SCU_CONFIG 0x04
@@ -37,15 +36,6 @@ void __init scu_enable(void __iomem *scu_base)
{
u32 scu_ctrl;
#ifdef CONFIG_ARM_ERRATA_764369
/* Cortex-A9 only */
if ((read_cpuid(CPUID_ID) & 0xff0ffff0) == 0x410fc090) {
scu_ctrl = __raw_readl(scu_base + 0x30);
if (!(scu_ctrl & 1))
__raw_writel(scu_ctrl | 0x1, scu_base + 0x30);
}
#endif
scu_ctrl = __raw_readl(scu_base + SCU_CTRL);
/* already enabled? */
if (scu_ctrl & 1)
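The CPUID test in the ARM_ERRATA_764369 block above masks the ID register with 0xff0ffff0, which discards the variant field (bits 23-20) and the revision field (bits 3-0) before comparing against 0x410fc090, i.e. implementer 0x41 (ARM) and primary part number 0xC09; in other words it matches any revision of a Cortex-A9, as the comment says.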

View file

@@ -108,12 +108,10 @@ static void set_segfault(struct pt_regs *regs, unsigned long addr)
{
siginfo_t info;
down_read(&current->mm->mmap_sem);
if (find_vma(current->mm, addr) == NULL)
info.si_code = SEGV_MAPERR;
else
info.si_code = SEGV_ACCERR;
up_read(&current->mm->mmap_sem);
info.si_signo = SIGSEGV;
info.si_errno = 0;

View file

@@ -115,7 +115,7 @@ int kernel_execve(const char *filename,
"Ir" (THREAD_START_SP - sizeof(regs)),
"r" (&regs),
"Ir" (sizeof(regs))
: "r0", "r1", "r2", "r3", "r8", "r9", "ip", "lr", "memory");
: "r0", "r1", "r2", "r3", "ip", "lr", "memory");
out:
return ret;

View file

@@ -451,9 +451,7 @@ do_cache_op(unsigned long start, unsigned long end, int flags)
if (end > vma->vm_end)
end = vma->vm_end;
up_read(&mm->mmap_sem);
flush_cache_user_range(start, end);
return;
flush_cache_user_range(vma, start, end);
}
up_read(&mm->mmap_sem);
}

View file

@@ -454,7 +454,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91rm9200_twi_device = {
.name = "i2c-gpio",
.id = 0,
.id = -1,
.dev.platform_data = &pdata,
};

View file

@@ -237,9 +237,9 @@ static struct clk_lookup periph_clocks_lookups[] = {
CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.0", &tc0_clk),
CLKDEV_CON_DEV_ID("t1_clk", "atmel_tcb.0", &tc1_clk),
CLKDEV_CON_DEV_ID("t2_clk", "atmel_tcb.0", &tc2_clk),
CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.1", &tc3_clk),
CLKDEV_CON_DEV_ID("t1_clk", "atmel_tcb.1", &tc4_clk),
CLKDEV_CON_DEV_ID("t2_clk", "atmel_tcb.1", &tc5_clk),
CLKDEV_CON_DEV_ID("t3_clk", "atmel_tcb.1", &tc3_clk),
CLKDEV_CON_DEV_ID("t4_clk", "atmel_tcb.1", &tc4_clk),
CLKDEV_CON_DEV_ID("t5_clk", "atmel_tcb.1", &tc5_clk),
CLKDEV_CON_DEV_ID("pclk", "ssc.0", &ssc_clk),
};

View file

@@ -459,7 +459,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9260_twi_device = {
.name = "i2c-gpio",
.id = 0,
.id = -1,
.dev.platform_data = &pdata,
};

View file

@@ -276,7 +276,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9261_twi_device = {
.name = "i2c-gpio",
.id = 0,
.id = -1,
.dev.platform_data = &pdata,
};

View file

@@ -534,7 +534,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9263_twi_device = {
.name = "i2c-gpio",
.id = 0,
.id = -1,
.dev.platform_data = &pdata,
};

View file

@@ -319,7 +319,7 @@ static struct i2c_gpio_platform_data pdata = {
static struct platform_device at91sam9rl_twi_device = {
.name = "i2c-gpio",
.id = 0,
.id = -1,
.dev.platform_data = &pdata,
};

View file

@@ -115,32 +115,6 @@ static struct spi_board_info da850evm_spi_info[] = {
},
};
#ifdef CONFIG_MTD
static void da850_evm_m25p80_notify_add(struct mtd_info *mtd)
{
char *mac_addr = davinci_soc_info.emac_pdata->mac_addr;
size_t retlen;
if (!strcmp(mtd->name, "MAC-Address")) {
mtd->read(mtd, 0, ETH_ALEN, &retlen, mac_addr);
if (retlen == ETH_ALEN)
pr_info("Read MAC addr from SPI Flash: %pM\n",
mac_addr);
}
}
static struct mtd_notifier da850evm_spi_notifier = {
.add = da850_evm_m25p80_notify_add,
};
static void da850_evm_setup_mac_addr(void)
{
register_mtd_user(&da850evm_spi_notifier);
}
#else
static void da850_evm_setup_mac_addr(void) { }
#endif
static struct mtd_partition da850_evm_norflash_partition[] = {
{
.name = "bootloaders + env",
@@ -748,7 +722,7 @@ static struct snd_platform_data da850_evm_snd_data = {
.num_serializer = ARRAY_SIZE(da850_iis_serializer_direction),
.tdm_slots = 2,
.serial_dir = da850_iis_serializer_direction,
.asp_chan_q = EVENTQ_0,
.asp_chan_q = EVENTQ_1,
.version = MCASP_VERSION_2,
.txnumevt = 1,
.rxnumevt = 1,
@@ -1263,8 +1237,6 @@ static __init void da850_evm_init(void)
if (ret)
pr_warning("da850_evm_init: spi 1 registration failed: %d\n",
ret);
da850_evm_setup_mac_addr();
}
#ifdef CONFIG_SERIAL_8250_CONSOLE

View file

@@ -563,7 +563,7 @@ static int setup_vpif_input_channel_mode(int mux_mode)
int val;
u32 value;
if (!vpif_vidclkctl_reg || !cpld_client)
if (!vpif_vsclkdis_reg || !cpld_client)
return -ENXIO;
val = i2c_smbus_read_byte(cpld_client);
@@ -571,7 +571,7 @@ static int setup_vpif_input_channel_mode(int mux_mode)
return val;
spin_lock_irqsave(&vpif_reg_lock, flags);
value = __raw_readl(vpif_vidclkctl_reg);
value = __raw_readl(vpif_vsclkdis_reg);
if (mux_mode) {
val &= VPIF_INPUT_TWO_CHANNEL;
value |= VIDCH1CLK;
@@ -579,7 +579,7 @@ static int setup_vpif_input_channel_mode(int mux_mode)
val |= VPIF_INPUT_ONE_CHANNEL;
value &= ~VIDCH1CLK;
}
__raw_writel(value, vpif_vidclkctl_reg);
__raw_writel(value, vpif_vsclkdis_reg);
spin_unlock_irqrestore(&vpif_reg_lock, flags);
err = i2c_smbus_write_byte(cpld_client, val);

View file

@@ -217,11 +217,7 @@ ddr2clk_stop_done:
ENDPROC(davinci_ddr_psc_config)
CACHE_FLUSH:
#ifdef CONFIG_CPU_V6
.word v6_flush_kern_cache_all
#else
.word arm926_flush_kern_cache_all
#endif
.word arm926_flush_kern_cache_all
ENTRY(davinci_cpu_suspend_sz)
.word . - davinci_cpu_suspend

View file

@@ -31,7 +31,6 @@
#include <asm/mach/arch.h>
#include <linux/irq.h>
#include <plat/time.h>
#include <plat/ehci-orion.h>
#include <plat/common.h>
#include "common.h"
@@ -75,7 +74,7 @@ void __init dove_map_io(void)
void __init dove_ehci0_init(void)
{
orion_ehci_init(&dove_mbus_dram_info,
DOVE_USB0_PHYS_BASE, IRQ_DOVE_USB0, EHCI_PHY_NA);
DOVE_USB0_PHYS_BASE, IRQ_DOVE_USB0);
}
/*****************************************************************************
@@ -161,7 +160,7 @@ void __init dove_spi0_init(void)
void __init dove_spi1_init(void)
{
orion_spi_1_init(DOVE_SPI1_PHYS_BASE, get_tclk());
orion_spi_init(DOVE_SPI1_PHYS_BASE, get_tclk());
}
/*****************************************************************************

View file

@@ -45,7 +45,7 @@ static inline int pmu_to_irq(int pin)
static inline int irq_to_pmu(int irq)
{
if (IRQ_DOVE_PMU_START <= irq && irq < NR_IRQS)
if (IRQ_DOVE_PMU_START < irq && irq < NR_IRQS)
return irq - IRQ_DOVE_PMU_START;
return -EINVAL;

View file

@@ -61,20 +61,8 @@ static void pmu_irq_ack(struct irq_data *d)
int pin = irq_to_pmu(d->irq);
u32 u;
/*
* The PMU mask register is not RW0C: it is RW. This means that
* the bits take whatever value is written to them; if you write
* a '1', you will set the interrupt.
*
* Unfortunately this means there is NO race free way to clear
* these interrupts.
*
* So, let's structure the code so that the window is as small as
* possible.
*/
u = ~(1 << (pin & 31));
u &= readl_relaxed(PMU_INTERRUPT_CAUSE);
writel_relaxed(u, PMU_INTERRUPT_CAUSE);
writel(u, PMU_INTERRUPT_CAUSE);
}
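To make the read-modify-write described in the comment concrete: for, say, pin 5, ~(1 << 5) is 0xffffffdf, so AND-ing it with the freshly read cause register forces bit 5 to zero while preserving every other pending bit exactly as it was read, and that value is written back. The race window the comment warns about is the gap between that read and the write-back, during which a newly raised interrupt bit would be wiped by the write.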
static struct irq_chip pmu_irq_chip = {

View file

@@ -32,7 +32,7 @@
* Memory-mapped I/O on MX21ADS base board
*/
#define MX21ADS_MMIO_BASE_ADDR 0xf5000000
#define MX21ADS_MMIO_SIZE 0xc00000
#define MX21ADS_MMIO_SIZE SZ_16M
#define MX21ADS_REG_ADDR(offset) (void __force __iomem *) \
(MX21ADS_MMIO_BASE_ADDR + (offset))

View file

@@ -337,15 +337,15 @@ static unsigned long timer_reload;
static void integrator_clocksource_init(u32 khz)
{
void __iomem *base = (void __iomem *)TIMER2_VA_BASE;
u32 ctrl = TIMER_CTRL_ENABLE | TIMER_CTRL_PERIODIC;
u32 ctrl = TIMER_CTRL_ENABLE;
if (khz >= 1500) {
khz /= 16;
ctrl |= TIMER_CTRL_DIV16;
ctrl = TIMER_CTRL_DIV16;
}
writel(0xffff, base + TIMER_LOAD);
writel(ctrl, base + TIMER_CTRL);
writel(0xffff, base + TIMER_LOAD);
clocksource_mmio_init(base + TIMER_VALUE, "timer2",
khz * 1000, 200, 16, clocksource_mmio_readl_down);

View file

@@ -28,7 +28,6 @@
#include <plat/cache-feroceon-l2.h>
#include <plat/mvsdio.h>
#include <plat/orion_nand.h>
#include <plat/ehci-orion.h>
#include <plat/common.h>
#include <plat/time.h>
#include "common.h"
@@ -75,7 +74,7 @@ void __init kirkwood_ehci_init(void)
{
kirkwood_clk_ctrl |= CGC_USB0;
orion_ehci_init(&kirkwood_mbus_dram_info,
USB_PHYS_BASE, IRQ_KIRKWOOD_USB, EHCI_PHY_NA);
USB_PHYS_BASE, IRQ_KIRKWOOD_USB);
}

View file

@@ -31,313 +31,313 @@
#define MPP_F6282_MASK MPP( 0, 0x0, 0, 0, 0, 0, 0, 0, 1 )
#define MPP0_GPIO MPP( 0, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP0_NF_IO2 MPP( 0, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP0_SPI_SCn MPP( 0, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP0_NF_IO2 MPP( 0, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP0_SPI_SCn MPP( 0, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP1_GPO MPP( 1, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP1_NF_IO3 MPP( 1, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP1_SPI_MOSI MPP( 1, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP1_NF_IO3 MPP( 1, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP1_SPI_MOSI MPP( 1, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP2_GPO MPP( 2, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP2_NF_IO4 MPP( 2, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP2_SPI_SCK MPP( 2, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP2_NF_IO4 MPP( 2, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP2_SPI_SCK MPP( 2, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP3_GPO MPP( 3, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP3_NF_IO5 MPP( 3, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP3_SPI_MISO MPP( 3, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP3_NF_IO5 MPP( 3, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP3_SPI_MISO MPP( 3, 0x2, 1, 0, 1, 1, 1, 1, 1 )
#define MPP4_GPIO MPP( 4, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP4_NF_IO6 MPP( 4, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP4_UART0_RXD MPP( 4, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP4_SATA1_ACTn MPP( 4, 0x5, 0, 0, 0, 0, 1, 1, 1 )
#define MPP4_NF_IO6 MPP( 4, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP4_UART0_RXD MPP( 4, 0x2, 1, 0, 1, 1, 1, 1, 1 )
#define MPP4_SATA1_ACTn MPP( 4, 0x5, 0, 1, 0, 0, 1, 1, 1 )
#define MPP4_LCD_VGA_HSYNC MPP( 4, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP4_PTP_CLK MPP( 4, 0xd, 0, 0, 1, 1, 1, 1, 0 )
#define MPP4_PTP_CLK MPP( 4, 0xd, 1, 0, 1, 1, 1, 1, 0 )
#define MPP5_GPO MPP( 5, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP5_NF_IO7 MPP( 5, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP5_UART0_TXD MPP( 5, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP5_PTP_TRIG_GEN MPP( 5, 0x4, 0, 0, 1, 1, 1, 1, 0 )
#define MPP5_SATA0_ACTn MPP( 5, 0x5, 0, 0, 0, 1, 1, 1, 1 )
#define MPP5_NF_IO7 MPP( 5, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP5_UART0_TXD MPP( 5, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP5_PTP_TRIG_GEN MPP( 5, 0x4, 0, 1, 1, 1, 1, 1, 0 )
#define MPP5_SATA0_ACTn MPP( 5, 0x5, 0, 1, 0, 1, 1, 1, 1 )
#define MPP5_LCD_VGA_VSYNC MPP( 5, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP6_SYSRST_OUTn MPP( 6, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP6_SPI_MOSI MPP( 6, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP6_PTP_TRIG_GEN MPP( 6, 0x3, 0, 0, 1, 1, 1, 1, 0 )
#define MPP6_SYSRST_OUTn MPP( 6, 0x1, 0, 1, 1, 1, 1, 1, 1 )
#define MPP6_SPI_MOSI MPP( 6, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP6_PTP_TRIG_GEN MPP( 6, 0x3, 0, 1, 1, 1, 1, 1, 0 )
#define MPP7_GPO MPP( 7, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP7_PEX_RST_OUTn MPP( 7, 0x1, 0, 0, 1, 1, 1, 1, 0 )
#define MPP7_SPI_SCn MPP( 7, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP7_PTP_TRIG_GEN MPP( 7, 0x3, 0, 0, 1, 1, 1, 1, 0 )
#define MPP7_LCD_PWM MPP( 7, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP7_PEX_RST_OUTn MPP( 7, 0x1, 0, 1, 1, 1, 1, 1, 0 )
#define MPP7_SPI_SCn MPP( 7, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP7_PTP_TRIG_GEN MPP( 7, 0x3, 0, 1, 1, 1, 1, 1, 0 )
#define MPP7_LCD_PWM MPP( 7, 0xb, 0, 1, 0, 0, 0, 0, 1 )
#define MPP8_GPIO MPP( 8, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP8_TW0_SDA MPP( 8, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP8_UART0_RTS MPP( 8, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP8_UART1_RTS MPP( 8, 0x3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP8_MII0_RXERR MPP( 8, 0x4, 0, 0, 0, 1, 1, 1, 1 )
#define MPP8_SATA1_PRESENTn MPP( 8, 0x5, 0, 0, 0, 0, 1, 1, 1 )
#define MPP8_PTP_CLK MPP( 8, 0xc, 0, 0, 1, 1, 1, 1, 0 )
#define MPP8_MII0_COL MPP( 8, 0xd, 0, 0, 1, 1, 1, 1, 1 )
#define MPP8_TW0_SDA MPP( 8, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP8_UART0_RTS MPP( 8, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP8_UART1_RTS MPP( 8, 0x3, 0, 1, 1, 1, 1, 1, 1 )
#define MPP8_MII0_RXERR MPP( 8, 0x4, 1, 0, 0, 1, 1, 1, 1 )
#define MPP8_SATA1_PRESENTn MPP( 8, 0x5, 0, 1, 0, 0, 1, 1, 1 )
#define MPP8_PTP_CLK MPP( 8, 0xc, 1, 0, 1, 1, 1, 1, 0 )
#define MPP8_MII0_COL MPP( 8, 0xd, 1, 0, 1, 1, 1, 1, 1 )
#define MPP9_GPIO MPP( 9, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP9_TW0_SCK MPP( 9, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP9_UART0_CTS MPP( 9, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP9_UART1_CTS MPP( 9, 0x3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP9_SATA0_PRESENTn MPP( 9, 0x5, 0, 0, 0, 1, 1, 1, 1 )
#define MPP9_PTP_EVENT_REQ MPP( 9, 0xc, 0, 0, 1, 1, 1, 1, 0 )
#define MPP9_MII0_CRS MPP( 9, 0xd, 0, 0, 1, 1, 1, 1, 1 )
#define MPP9_TW0_SCK MPP( 9, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP9_UART0_CTS MPP( 9, 0x2, 1, 0, 1, 1, 1, 1, 1 )
#define MPP9_UART1_CTS MPP( 9, 0x3, 1, 0, 1, 1, 1, 1, 1 )
#define MPP9_SATA0_PRESENTn MPP( 9, 0x5, 0, 1, 0, 1, 1, 1, 1 )
#define MPP9_PTP_EVENT_REQ MPP( 9, 0xc, 1, 0, 1, 1, 1, 1, 0 )
#define MPP9_MII0_CRS MPP( 9, 0xd, 1, 0, 1, 1, 1, 1, 1 )
#define MPP10_GPO MPP( 10, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP10_SPI_SCK MPP( 10, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP10_UART0_TXD MPP( 10, 0X3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP10_SATA1_ACTn MPP( 10, 0x5, 0, 0, 0, 0, 1, 1, 1 )
#define MPP10_PTP_TRIG_GEN MPP( 10, 0xc, 0, 0, 1, 1, 1, 1, 0 )
#define MPP10_SPI_SCK MPP( 10, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP10_UART0_TXD MPP( 10, 0X3, 0, 1, 1, 1, 1, 1, 1 )
#define MPP10_SATA1_ACTn MPP( 10, 0x5, 0, 1, 0, 0, 1, 1, 1 )
#define MPP10_PTP_TRIG_GEN MPP( 10, 0xc, 0, 1, 1, 1, 1, 1, 0 )
#define MPP11_GPIO MPP( 11, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP11_SPI_MISO MPP( 11, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP11_UART0_RXD MPP( 11, 0x3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP11_PTP_EVENT_REQ MPP( 11, 0x4, 0, 0, 1, 1, 1, 1, 0 )
#define MPP11_PTP_TRIG_GEN MPP( 11, 0xc, 0, 0, 1, 1, 1, 1, 0 )
#define MPP11_PTP_CLK MPP( 11, 0xd, 0, 0, 1, 1, 1, 1, 0 )
#define MPP11_SATA0_ACTn MPP( 11, 0x5, 0, 0, 0, 1, 1, 1, 1 )
#define MPP11_SPI_MISO MPP( 11, 0x2, 1, 0, 1, 1, 1, 1, 1 )
#define MPP11_UART0_RXD MPP( 11, 0x3, 1, 0, 1, 1, 1, 1, 1 )
#define MPP11_PTP_EVENT_REQ MPP( 11, 0x4, 1, 0, 1, 1, 1, 1, 0 )
#define MPP11_PTP_TRIG_GEN MPP( 11, 0xc, 0, 1, 1, 1, 1, 1, 0 )
#define MPP11_PTP_CLK MPP( 11, 0xd, 1, 0, 1, 1, 1, 1, 0 )
#define MPP11_SATA0_ACTn MPP( 11, 0x5, 0, 1, 0, 1, 1, 1, 1 )
#define MPP12_GPO MPP( 12, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP12_SD_CLK MPP( 12, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP12_AU_SPDIF0 MPP( 12, 0xa, 0, 0, 0, 0, 0, 0, 1 )
#define MPP12_SPI_MOSI MPP( 12, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP12_TW1_SDA MPP( 12, 0xd, 0, 0, 0, 0, 0, 0, 1 )
#define MPP12_SD_CLK MPP( 12, 0x1, 0, 1, 1, 1, 1, 1, 1 )
#define MPP12_AU_SPDIF0 MPP( 12, 0xa, 0, 1, 0, 0, 0, 0, 1 )
#define MPP12_SPI_MOSI MPP( 12, 0xb, 0, 1, 0, 0, 0, 0, 1 )
#define MPP12_TW1_SDA MPP( 12, 0xd, 1, 0, 0, 0, 0, 0, 1 )
#define MPP13_GPIO MPP( 13, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP13_SD_CMD MPP( 13, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP13_UART1_TXD MPP( 13, 0x3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP13_AU_SPDIFRMCLK MPP( 13, 0xa, 0, 0, 0, 0, 0, 0, 1 )
#define MPP13_LCDPWM MPP( 13, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP13_SD_CMD MPP( 13, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP13_UART1_TXD MPP( 13, 0x3, 0, 1, 1, 1, 1, 1, 1 )
#define MPP13_AU_SPDIFRMCLK MPP( 13, 0xa, 0, 1, 0, 0, 0, 0, 1 )
#define MPP13_LCDPWM MPP( 13, 0xb, 0, 1, 0, 0, 0, 0, 1 )
#define MPP14_GPIO MPP( 14, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP14_SD_D0 MPP( 14, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP14_UART1_RXD MPP( 14, 0x3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP14_SATA1_PRESENTn MPP( 14, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP14_AU_SPDIFI MPP( 14, 0xa, 0, 0, 0, 0, 0, 0, 1 )
#define MPP14_AU_I2SDI MPP( 14, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP14_MII0_COL MPP( 14, 0xd, 0, 0, 1, 1, 1, 1, 1 )
#define MPP14_SD_D0 MPP( 14, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP14_UART1_RXD MPP( 14, 0x3, 1, 0, 1, 1, 1, 1, 1 )
#define MPP14_SATA1_PRESENTn MPP( 14, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP14_AU_SPDIFI MPP( 14, 0xa, 1, 0, 0, 0, 0, 0, 1 )
#define MPP14_AU_I2SDI MPP( 14, 0xb, 1, 0, 0, 0, 0, 0, 1 )
#define MPP14_MII0_COL MPP( 14, 0xd, 1, 0, 1, 1, 1, 1, 1 )
#define MPP15_GPIO MPP( 15, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP15_SD_D1 MPP( 15, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP15_UART0_RTS MPP( 15, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP15_UART1_TXD MPP( 15, 0x3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP15_SATA0_ACTn MPP( 15, 0x4, 0, 0, 0, 1, 1, 1, 1 )
#define MPP15_SPI_CSn MPP( 15, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP15_SD_D1 MPP( 15, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP15_UART0_RTS MPP( 15, 0x2, 0, 1, 1, 1, 1, 1, 1 )
#define MPP15_UART1_TXD MPP( 15, 0x3, 0, 1, 1, 1, 1, 1, 1 )
#define MPP15_SATA0_ACTn MPP( 15, 0x4, 0, 1, 0, 1, 1, 1, 1 )
#define MPP15_SPI_CSn MPP( 15, 0xb, 0, 1, 0, 0, 0, 0, 1 )
#define MPP16_GPIO MPP( 16, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP16_SD_D2 MPP( 16, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP16_UART0_CTS MPP( 16, 0x2, 0, 0, 1, 1, 1, 1, 1 )
#define MPP16_UART1_RXD MPP( 16, 0x3, 0, 0, 1, 1, 1, 1, 1 )
#define MPP16_SATA1_ACTn MPP( 16, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP16_LCD_EXT_REF_CLK MPP( 16, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP16_MII0_CRS MPP( 16, 0xd, 0, 0, 1, 1, 1, 1, 1 )
#define MPP16_SD_D2 MPP( 16, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP16_UART0_CTS MPP( 16, 0x2, 1, 0, 1, 1, 1, 1, 1 )
#define MPP16_UART1_RXD MPP( 16, 0x3, 1, 0, 1, 1, 1, 1, 1 )
#define MPP16_SATA1_ACTn MPP( 16, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP16_LCD_EXT_REF_CLK MPP( 16, 0xb, 1, 0, 0, 0, 0, 0, 1 )
#define MPP16_MII0_CRS MPP( 16, 0xd, 1, 0, 1, 1, 1, 1, 1 )
#define MPP17_GPIO MPP( 17, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP17_SD_D3 MPP( 17, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP17_SATA0_PRESENTn MPP( 17, 0x4, 0, 0, 0, 1, 1, 1, 1 )
#define MPP17_SATA1_ACTn MPP( 17, 0xa, 0, 0, 0, 0, 0, 0, 1 )
#define MPP17_TW1_SCK MPP( 17, 0xd, 0, 0, 0, 0, 0, 0, 1 )
#define MPP17_SD_D3 MPP( 17, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP17_SATA0_PRESENTn MPP( 17, 0x4, 0, 1, 0, 1, 1, 1, 1 )
#define MPP17_SATA1_ACTn MPP( 17, 0xa, 0, 1, 0, 0, 0, 0, 1 )
#define MPP17_TW1_SCK MPP( 17, 0xd, 1, 1, 0, 0, 0, 0, 1 )
#define MPP18_GPO MPP( 18, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP18_NF_IO0 MPP( 18, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP18_PEX0_CLKREQ MPP( 18, 0x2, 0, 0, 0, 0, 0, 0, 1 )
#define MPP18_NF_IO0 MPP( 18, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP18_PEX0_CLKREQ MPP( 18, 0x2, 0, 1, 0, 0, 0, 0, 1 )
#define MPP19_GPO MPP( 19, 0x0, 0, 1, 1, 1, 1, 1, 1 )
#define MPP19_NF_IO1 MPP( 19, 0x1, 0, 0, 1, 1, 1, 1, 1 )
#define MPP19_NF_IO1 MPP( 19, 0x1, 1, 1, 1, 1, 1, 1, 1 )
#define MPP20_GPIO MPP( 20, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP20_TSMP0 MPP( 20, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP20_TDM_CH0_TX_QL MPP( 20, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP20_TSMP0 MPP( 20, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP20_TDM_CH0_TX_QL MPP( 20, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP20_GE1_TXD0 MPP( 20, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP20_AU_SPDIFI MPP( 20, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP20_SATA1_ACTn MPP( 20, 0x5, 0, 0, 0, 0, 1, 1, 1 )
#define MPP20_AU_SPDIFI MPP( 20, 0x4, 1, 0, 0, 0, 1, 1, 1 )
#define MPP20_SATA1_ACTn MPP( 20, 0x5, 0, 1, 0, 0, 1, 1, 1 )
#define MPP20_LCD_D0 MPP( 20, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP21_GPIO MPP( 21, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP21_TSMP1 MPP( 21, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP21_TDM_CH0_RX_QL MPP( 21, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP21_TSMP1 MPP( 21, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP21_TDM_CH0_RX_QL MPP( 21, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP21_GE1_TXD1 MPP( 21, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP21_AU_SPDIFO MPP( 21, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP21_SATA0_ACTn MPP( 21, 0x5, 0, 0, 0, 1, 1, 1, 1 )
#define MPP21_AU_SPDIFO MPP( 21, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP21_SATA0_ACTn MPP( 21, 0x5, 0, 1, 0, 1, 1, 1, 1 )
#define MPP21_LCD_D1 MPP( 21, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP22_GPIO MPP( 22, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP22_TSMP2 MPP( 22, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP22_TDM_CH2_TX_QL MPP( 22, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP22_TSMP2 MPP( 22, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP22_TDM_CH2_TX_QL MPP( 22, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP22_GE1_TXD2 MPP( 22, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP22_AU_SPDIFRMKCLK MPP( 22, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP22_SATA1_PRESENTn MPP( 22, 0x5, 0, 0, 0, 0, 1, 1, 1 )
#define MPP22_AU_SPDIFRMKCLK MPP( 22, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP22_SATA1_PRESENTn MPP( 22, 0x5, 0, 1, 0, 0, 1, 1, 1 )
#define MPP22_LCD_D2 MPP( 22, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP23_GPIO MPP( 23, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP23_TSMP3 MPP( 23, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP23_TDM_CH2_RX_QL MPP( 23, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP23_TSMP3 MPP( 23, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP23_TDM_CH2_RX_QL MPP( 23, 0x2, 1, 0, 0, 0, 1, 1, 1 )
#define MPP23_GE1_TXD3 MPP( 23, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP23_AU_I2SBCLK MPP( 23, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP23_SATA0_PRESENTn MPP( 23, 0x5, 0, 0, 0, 1, 1, 1, 1 )
#define MPP23_AU_I2SBCLK MPP( 23, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP23_SATA0_PRESENTn MPP( 23, 0x5, 0, 1, 0, 1, 1, 1, 1 )
#define MPP23_LCD_D3 MPP( 23, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP24_GPIO MPP( 24, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP24_TSMP4 MPP( 24, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP24_TDM_SPI_CS0 MPP( 24, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP24_TSMP4 MPP( 24, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP24_TDM_SPI_CS0 MPP( 24, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP24_GE1_RXD0 MPP( 24, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP24_AU_I2SDO MPP( 24, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP24_AU_I2SDO MPP( 24, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP24_LCD_D4 MPP( 24, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP25_GPIO MPP( 25, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP25_TSMP5 MPP( 25, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP25_TDM_SPI_SCK MPP( 25, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP25_TSMP5 MPP( 25, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP25_TDM_SPI_SCK MPP( 25, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP25_GE1_RXD1 MPP( 25, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP25_AU_I2SLRCLK MPP( 25, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP25_AU_I2SLRCLK MPP( 25, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP25_LCD_D5 MPP( 25, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP26_GPIO MPP( 26, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP26_TSMP6 MPP( 26, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP26_TDM_SPI_MISO MPP( 26, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP26_TSMP6 MPP( 26, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP26_TDM_SPI_MISO MPP( 26, 0x2, 1, 0, 0, 0, 1, 1, 1 )
#define MPP26_GE1_RXD2 MPP( 26, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP26_AU_I2SMCLK MPP( 26, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP26_AU_I2SMCLK MPP( 26, 0x4, 0, 1, 0, 0, 1, 1, 1 )
#define MPP26_LCD_D6 MPP( 26, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP27_GPIO MPP( 27, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP27_TSMP7 MPP( 27, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP27_TDM_SPI_MOSI MPP( 27, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP27_TSMP7 MPP( 27, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP27_TDM_SPI_MOSI MPP( 27, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP27_GE1_RXD3 MPP( 27, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP27_AU_I2SDI MPP( 27, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP27_AU_I2SDI MPP( 27, 0x4, 1, 0, 0, 0, 1, 1, 1 )
#define MPP27_LCD_D7 MPP( 27, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP28_GPIO MPP( 28, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP28_TSMP8 MPP( 28, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP28_TSMP8 MPP( 28, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP28_TDM_CODEC_INTn MPP( 28, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP28_GE1_COL MPP( 28, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP28_AU_EXTCLK MPP( 28, 0x4, 0, 0, 0, 0, 1, 1, 1 )
#define MPP28_AU_EXTCLK MPP( 28, 0x4, 1, 0, 0, 0, 1, 1, 1 )
#define MPP28_LCD_D8 MPP( 28, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP29_GPIO MPP( 29, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP29_TSMP9 MPP( 29, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP29_TSMP9 MPP( 29, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP29_TDM_CODEC_RSTn MPP( 29, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP29_GE1_TCLK MPP( 29, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP29_LCD_D9 MPP( 29, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP30_GPIO MPP( 30, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP30_TSMP10 MPP( 30, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP30_TDM_PCLK MPP( 30, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP30_TSMP10 MPP( 30, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP30_TDM_PCLK MPP( 30, 0x2, 1, 1, 0, 0, 1, 1, 1 )
#define MPP30_GE1_RXCTL MPP( 30, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP30_LCD_D10 MPP( 30, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP31_GPIO MPP( 31, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP31_TSMP11 MPP( 31, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP31_TDM_FS MPP( 31, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP31_TSMP11 MPP( 31, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP31_TDM_FS MPP( 31, 0x2, 1, 1, 0, 0, 1, 1, 1 )
#define MPP31_GE1_RXCLK MPP( 31, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP31_LCD_D11 MPP( 31, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP32_GPIO MPP( 32, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP32_TSMP12 MPP( 32, 0x1, 0, 0, 0, 0, 1, 1, 1 )
#define MPP32_TDM_DRX MPP( 32, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP32_TSMP12 MPP( 32, 0x1, 1, 1, 0, 0, 1, 1, 1 )
#define MPP32_TDM_DRX MPP( 32, 0x2, 1, 0, 0, 0, 1, 1, 1 )
#define MPP32_GE1_TCLKOUT MPP( 32, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP32_LCD_D12 MPP( 32, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP33_GPO MPP( 33, 0x0, 0, 1, 0, 1, 1, 1, 1 )
#define MPP33_TDM_DTX MPP( 33, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP33_TDM_DTX MPP( 33, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP33_GE1_TXCTL MPP( 33, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP33_LCD_D13 MPP( 33, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP34_GPIO MPP( 34, 0x0, 1, 1, 0, 1, 1, 1, 1 )
#define MPP34_TDM_SPI_CS1 MPP( 34, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP34_TDM_SPI_CS1 MPP( 34, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP34_GE1_TXEN MPP( 34, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP34_SATA1_ACTn MPP( 34, 0x5, 0, 0, 0, 0, 0, 1, 1 )
#define MPP34_SATA1_ACTn MPP( 34, 0x5, 0, 1, 0, 0, 0, 1, 1 )
#define MPP34_LCD_D14 MPP( 34, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP35_GPIO MPP( 35, 0x0, 1, 1, 1, 1, 1, 1, 1 )
#define MPP35_TDM_CH0_TX_QL MPP( 35, 0x2, 0, 0, 0, 0, 1, 1, 1 )
#define MPP35_TDM_CH0_TX_QL MPP( 35, 0x2, 0, 1, 0, 0, 1, 1, 1 )
#define MPP35_GE1_RXERR MPP( 35, 0x3, 0, 0, 0, 1, 1, 1, 1 )
#define MPP35_SATA0_ACTn MPP( 35, 0x5, 0, 0, 0, 1, 1, 1, 1 )
#define MPP35_SATA0_ACTn MPP( 35, 0x5, 0, 1, 0, 1, 1, 1, 1 )
#define MPP35_LCD_D15 MPP( 22, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP35_MII0_RXERR MPP( 35, 0xc, 0, 0, 1, 1, 1, 1, 1 )
#define MPP35_MII0_RXERR MPP( 35, 0xc, 1, 0, 1, 1, 1, 1, 1 )
#define MPP36_GPIO MPP( 36, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP36_TSMP0 MPP( 36, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP36_TDM_SPI_CS1 MPP( 36, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP36_AU_SPDIFI MPP( 36, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP36_TW1_SDA MPP( 36, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP36_TSMP0 MPP( 36, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP36_TDM_SPI_CS1 MPP( 36, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP36_AU_SPDIFI MPP( 36, 0x4, 1, 0, 1, 0, 0, 1, 1 )
#define MPP36_TW1_SDA MPP( 36, 0xb, 1, 1, 0, 0, 0, 0, 1 )
#define MPP37_GPIO MPP( 37, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP37_TSMP1 MPP( 37, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP37_TDM_CH2_TX_QL MPP( 37, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP37_AU_SPDIFO MPP( 37, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP37_TW1_SCK MPP( 37, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP37_TSMP1 MPP( 37, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP37_TDM_CH2_TX_QL MPP( 37, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP37_AU_SPDIFO MPP( 37, 0x4, 0, 1, 1, 0, 0, 1, 1 )
#define MPP37_TW1_SCK MPP( 37, 0xb, 1, 1, 0, 0, 0, 0, 1 )
#define MPP38_GPIO MPP( 38, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP38_TSMP2 MPP( 38, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP38_TDM_CH2_RX_QL MPP( 38, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP38_AU_SPDIFRMLCLK MPP( 38, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP38_TSMP2 MPP( 38, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP38_TDM_CH2_RX_QL MPP( 38, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP38_AU_SPDIFRMLCLK MPP( 38, 0x4, 0, 1, 1, 0, 0, 1, 1 )
#define MPP38_LCD_D18 MPP( 38, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP39_GPIO MPP( 39, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP39_TSMP3 MPP( 39, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP39_TDM_SPI_CS0 MPP( 39, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP39_AU_I2SBCLK MPP( 39, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP39_TSMP3 MPP( 39, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP39_TDM_SPI_CS0 MPP( 39, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP39_AU_I2SBCLK MPP( 39, 0x4, 0, 1, 1, 0, 0, 1, 1 )
#define MPP39_LCD_D19 MPP( 39, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP40_GPIO MPP( 40, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP40_TSMP4 MPP( 40, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP40_TDM_SPI_SCK MPP( 40, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP40_AU_I2SDO MPP( 40, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP40_TSMP4 MPP( 40, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP40_TDM_SPI_SCK MPP( 40, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP40_AU_I2SDO MPP( 40, 0x4, 0, 1, 1, 0, 0, 1, 1 )
#define MPP40_LCD_D20 MPP( 40, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP41_GPIO MPP( 41, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP41_TSMP5 MPP( 41, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP41_TDM_SPI_MISO MPP( 41, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP41_AU_I2SLRCLK MPP( 41, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP41_TSMP5 MPP( 41, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP41_TDM_SPI_MISO MPP( 41, 0x2, 1, 0, 0, 0, 0, 1, 1 )
#define MPP41_AU_I2SLRCLK MPP( 41, 0x4, 0, 1, 1, 0, 0, 1, 1 )
#define MPP41_LCD_D21 MPP( 41, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP42_GPIO MPP( 42, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP42_TSMP6 MPP( 42, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP42_TDM_SPI_MOSI MPP( 42, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP42_AU_I2SMCLK MPP( 42, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP42_TSMP6 MPP( 42, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP42_TDM_SPI_MOSI MPP( 42, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP42_AU_I2SMCLK MPP( 42, 0x4, 0, 1, 1, 0, 0, 1, 1 )
#define MPP42_LCD_D22 MPP( 42, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP43_GPIO MPP( 43, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP43_TSMP7 MPP( 43, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP43_TSMP7 MPP( 43, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP43_TDM_CODEC_INTn MPP( 43, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP43_AU_I2SDI MPP( 43, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP43_AU_I2SDI MPP( 43, 0x4, 1, 0, 1, 0, 0, 1, 1 )
#define MPP43_LCD_D23 MPP( 22, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP44_GPIO MPP( 44, 0x0, 1, 1, 1, 0, 0, 1, 1 )
#define MPP44_TSMP8 MPP( 44, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP44_TSMP8 MPP( 44, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP44_TDM_CODEC_RSTn MPP( 44, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP44_AU_EXTCLK MPP( 44, 0x4, 0, 0, 1, 0, 0, 1, 1 )
#define MPP44_AU_EXTCLK MPP( 44, 0x4, 1, 0, 1, 0, 0, 1, 1 )
#define MPP44_LCD_CLK MPP( 44, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP45_GPIO MPP( 45, 0x0, 1, 1, 0, 0, 0, 1, 1 )
#define MPP45_TSMP9 MPP( 45, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP45_TDM_PCLK MPP( 45, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP45_TSMP9 MPP( 45, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP45_TDM_PCLK MPP( 45, 0x2, 1, 1, 0, 0, 0, 1, 1 )
#define MPP245_LCD_E MPP( 45, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP46_GPIO MPP( 46, 0x0, 1, 1, 0, 0, 0, 1, 1 )
#define MPP46_TSMP10 MPP( 46, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP46_TDM_FS MPP( 46, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP46_TSMP10 MPP( 46, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP46_TDM_FS MPP( 46, 0x2, 1, 1, 0, 0, 0, 1, 1 )
#define MPP46_LCD_HSYNC MPP( 46, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP47_GPIO MPP( 47, 0x0, 1, 1, 0, 0, 0, 1, 1 )
#define MPP47_TSMP11 MPP( 47, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP47_TDM_DRX MPP( 47, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP47_TSMP11 MPP( 47, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP47_TDM_DRX MPP( 47, 0x2, 1, 0, 0, 0, 0, 1, 1 )
#define MPP47_LCD_VSYNC MPP( 47, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP48_GPIO MPP( 48, 0x0, 1, 1, 0, 0, 0, 1, 1 )
#define MPP48_TSMP12 MPP( 48, 0x1, 0, 0, 0, 0, 0, 1, 1 )
#define MPP48_TDM_DTX MPP( 48, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP48_TSMP12 MPP( 48, 0x1, 1, 1, 0, 0, 0, 1, 1 )
#define MPP48_TDM_DTX MPP( 48, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP48_LCD_D16 MPP( 22, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP49_GPIO MPP( 49, 0x0, 1, 1, 0, 0, 0, 1, 0 )
#define MPP49_GPO MPP( 49, 0x0, 0, 1, 0, 0, 0, 0, 1 )
#define MPP49_TSMP9 MPP( 49, 0x1, 0, 0, 0, 0, 0, 1, 0 )
#define MPP49_TDM_CH0_RX_QL MPP( 49, 0x2, 0, 0, 0, 0, 0, 1, 1 )
#define MPP49_PTP_CLK MPP( 49, 0x5, 0, 0, 0, 0, 0, 1, 0 )
#define MPP49_PEX0_CLKREQ MPP( 49, 0xa, 0, 0, 0, 0, 0, 0, 1 )
#define MPP49_TSMP9 MPP( 49, 0x1, 1, 1, 0, 0, 0, 1, 0 )
#define MPP49_TDM_CH0_RX_QL MPP( 49, 0x2, 0, 1, 0, 0, 0, 1, 1 )
#define MPP49_PTP_CLK MPP( 49, 0x5, 1, 0, 0, 0, 0, 1, 0 )
#define MPP49_PEX0_CLKREQ MPP( 49, 0xa, 0, 1, 0, 0, 0, 0, 1 )
#define MPP49_LCD_D17 MPP( 49, 0xb, 0, 0, 0, 0, 0, 0, 1 )
#define MPP_MAX 49

@@ -61,7 +61,7 @@
*/
#define IRQ_LPC32XX_JTAG_COMM_TX LPC32XX_SIC1_IRQ(1)
#define IRQ_LPC32XX_JTAG_COMM_RX LPC32XX_SIC1_IRQ(2)
#define IRQ_LPC32XX_GPI_28 LPC32XX_SIC1_IRQ(4)
#define IRQ_LPC32XX_GPI_11 LPC32XX_SIC1_IRQ(4)
#define IRQ_LPC32XX_TS_P LPC32XX_SIC1_IRQ(6)
#define IRQ_LPC32XX_TS_IRQ LPC32XX_SIC1_IRQ(7)
#define IRQ_LPC32XX_TS_AUX LPC32XX_SIC1_IRQ(8)

@@ -118,10 +118,6 @@ static const struct lpc32xx_event_info lpc32xx_events[NR_IRQS] = {
.event_group = &lpc32xx_event_pin_regs,
.mask = LPC32XX_CLKPWR_EXTSRC_GPI_06_BIT,
},
[IRQ_LPC32XX_GPI_28] = {
.event_group = &lpc32xx_event_pin_regs,
.mask = LPC32XX_CLKPWR_EXTSRC_GPI_28_BIT,
},
[IRQ_LPC32XX_GPIO_00] = {
.event_group = &lpc32xx_event_int_regs,
.mask = LPC32XX_CLKPWR_INTSRC_GPIO_00_BIT,
@@ -309,18 +305,9 @@ static int lpc32xx_irq_wake(struct irq_data *d, unsigned int state)
if (state)
eventreg |= lpc32xx_events[d->irq].mask;
else {
else
eventreg &= ~lpc32xx_events[d->irq].mask;
/*
* When disabling the wakeup, clear the latched
* event
*/
__raw_writel(lpc32xx_events[d->irq].mask,
lpc32xx_events[d->irq].
event_group->rawstat_reg);
}
__raw_writel(eventreg,
lpc32xx_events[d->irq].event_group->enab_reg);
@@ -393,15 +380,13 @@ void __init lpc32xx_init_irq(void)
/* Setup SIC1 */
__raw_writel(0, LPC32XX_INTC_MASK(LPC32XX_SIC1_BASE));
__raw_writel(SIC1_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC1_BASE));
__raw_writel(SIC1_ATR_DEFAULT,
LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC1_BASE));
__raw_writel(MIC_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC1_BASE));
__raw_writel(MIC_ATR_DEFAULT, LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC1_BASE));
/* Setup SIC2 */
__raw_writel(0, LPC32XX_INTC_MASK(LPC32XX_SIC2_BASE));
__raw_writel(SIC2_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC2_BASE));
__raw_writel(SIC2_ATR_DEFAULT,
LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC2_BASE));
__raw_writel(MIC_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC2_BASE));
__raw_writel(MIC_ATR_DEFAULT, LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC2_BASE));
/* Configure supported IRQ's */
for (i = 0; i < NR_IRQS; i++) {

@@ -88,7 +88,6 @@ struct uartinit {
char *uart_ck_name;
u32 ck_mode_mask;
void __iomem *pdiv_clk_reg;
resource_size_t mapbase;
};
static struct uartinit uartinit_data[] __initdata = {
@@ -98,7 +97,6 @@ static struct uartinit uartinit_data[] __initdata = {
.ck_mode_mask =
LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 5),
.pdiv_clk_reg = LPC32XX_CLKPWR_UART5_CLK_CTRL,
.mapbase = LPC32XX_UART5_BASE,
},
#endif
#ifdef CONFIG_ARCH_LPC32XX_UART3_SELECT
@@ -107,7 +105,6 @@ static struct uartinit uartinit_data[] __initdata = {
.ck_mode_mask =
LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 3),
.pdiv_clk_reg = LPC32XX_CLKPWR_UART3_CLK_CTRL,
.mapbase = LPC32XX_UART3_BASE,
},
#endif
#ifdef CONFIG_ARCH_LPC32XX_UART4_SELECT
@@ -116,7 +113,6 @@ static struct uartinit uartinit_data[] __initdata = {
.ck_mode_mask =
LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 4),
.pdiv_clk_reg = LPC32XX_CLKPWR_UART4_CLK_CTRL,
.mapbase = LPC32XX_UART4_BASE,
},
#endif
#ifdef CONFIG_ARCH_LPC32XX_UART6_SELECT
@@ -125,7 +121,6 @@ static struct uartinit uartinit_data[] __initdata = {
.ck_mode_mask =
LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 6),
.pdiv_clk_reg = LPC32XX_CLKPWR_UART6_CLK_CTRL,
.mapbase = LPC32XX_UART6_BASE,
},
#endif
};
@@ -170,24 +165,11 @@ void __init lpc32xx_serial_init(void)
/* pre-UART clock divider set to 1 */
__raw_writel(0x0101, uartinit_data[i].pdiv_clk_reg);
/*
* Force a flush of the RX FIFOs to work around a
* HW bug
*/
puart = uartinit_data[i].mapbase;
__raw_writel(0xC1, LPC32XX_UART_IIR_FCR(puart));
__raw_writel(0x00, LPC32XX_UART_DLL_FIFO(puart));
j = LPC32XX_SUART_FIFO_SIZE;
while (j--)
tmp = __raw_readl(
LPC32XX_UART_DLL_FIFO(puart));
__raw_writel(0, LPC32XX_UART_IIR_FCR(puart));
}
/* This needs to be done after all UART clocks are setup */
__raw_writel(clkmodes, LPC32XX_UARTCTL_CLKMODE);
for (i = 0; i < ARRAY_SIZE(uartinit_data); i++) {
for (i = 0; i < ARRAY_SIZE(uartinit_data) - 1; i++) {
/* Force a flush of the RX FIFOs to work around a HW bug */
puart = serial_std_platform_data[i].mapbase;
__raw_writel(0xC1, LPC32XX_UART_IIR_FCR(puart));

@@ -20,7 +20,6 @@
#include <mach/mv78xx0.h>
#include <mach/bridge-regs.h>
#include <plat/cache-feroceon-l2.h>
#include <plat/ehci-orion.h>
#include <plat/orion_nand.h>
#include <plat/time.h>
#include <plat/common.h>
@@ -171,7 +170,7 @@ void __init mv78xx0_map_io(void)
void __init mv78xx0_ehci0_init(void)
{
orion_ehci_init(&mv78xx0_mbus_dram_info,
USB0_PHYS_BASE, IRQ_MV78XX0_USB_0, EHCI_PHY_NA);
USB0_PHYS_BASE, IRQ_MV78XX0_USB_0);
}

@@ -24,296 +24,296 @@
#define MPP_78100_A0_MASK MPP(0, 0x0, 0, 0, 1)
#define MPP0_GPIO MPP(0, 0x0, 1, 1, 1)
#define MPP0_GE0_COL MPP(0, 0x1, 0, 0, 1)
#define MPP0_GE1_TXCLK MPP(0, 0x2, 0, 0, 1)
#define MPP0_GE0_COL MPP(0, 0x1, 1, 0, 1)
#define MPP0_GE1_TXCLK MPP(0, 0x2, 0, 1, 1)
#define MPP0_UNUSED MPP(0, 0x3, 0, 0, 1)
#define MPP1_GPIO MPP(1, 0x0, 1, 1, 1)
#define MPP1_GE0_RXERR MPP(1, 0x1, 0, 0, 1)
#define MPP1_GE1_TXCTL MPP(1, 0x2, 0, 0, 1)
#define MPP1_GE0_RXERR MPP(1, 0x1, 1, 0, 1)
#define MPP1_GE1_TXCTL MPP(1, 0x2, 0, 1, 1)
#define MPP1_UNUSED MPP(1, 0x3, 0, 0, 1)
#define MPP2_GPIO MPP(2, 0x0, 1, 1, 1)
#define MPP2_GE0_CRS MPP(2, 0x1, 0, 0, 1)
#define MPP2_GE1_RXCTL MPP(2, 0x2, 0, 0, 1)
#define MPP2_GE0_CRS MPP(2, 0x1, 1, 0, 1)
#define MPP2_GE1_RXCTL MPP(2, 0x2, 1, 0, 1)
#define MPP2_UNUSED MPP(2, 0x3, 0, 0, 1)
#define MPP3_GPIO MPP(3, 0x0, 1, 1, 1)
#define MPP3_GE0_TXERR MPP(3, 0x1, 0, 0, 1)
#define MPP3_GE1_RXCLK MPP(3, 0x2, 0, 0, 1)
#define MPP3_GE0_TXERR MPP(3, 0x1, 0, 1, 1)
#define MPP3_GE1_RXCLK MPP(3, 0x2, 1, 0, 1)
#define MPP3_UNUSED MPP(3, 0x3, 0, 0, 1)
#define MPP4_GPIO MPP(4, 0x0, 1, 1, 1)
#define MPP4_GE0_TXD4 MPP(4, 0x1, 0, 0, 1)
#define MPP4_GE1_TXD0 MPP(4, 0x2, 0, 0, 1)
#define MPP4_GE0_TXD4 MPP(4, 0x1, 0, 1, 1)
#define MPP4_GE1_TXD0 MPP(4, 0x2, 0, 1, 1)
#define MPP4_UNUSED MPP(4, 0x3, 0, 0, 1)
#define MPP5_GPIO MPP(5, 0x0, 1, 1, 1)
#define MPP5_GE0_TXD5 MPP(5, 0x1, 0, 0, 1)
#define MPP5_GE1_TXD1 MPP(5, 0x2, 0, 0, 1)
#define MPP5_GE0_TXD5 MPP(5, 0x1, 0, 1, 1)
#define MPP5_GE1_TXD1 MPP(5, 0x2, 0, 1, 1)
#define MPP5_UNUSED MPP(5, 0x3, 0, 0, 1)
#define MPP6_GPIO MPP(6, 0x0, 1, 1, 1)
#define MPP6_GE0_TXD6 MPP(6, 0x1, 0, 0, 1)
#define MPP6_GE1_TXD2 MPP(6, 0x2, 0, 0, 1)
#define MPP6_GE0_TXD6 MPP(6, 0x1, 0, 1, 1)
#define MPP6_GE1_TXD2 MPP(6, 0x2, 0, 1, 1)
#define MPP6_UNUSED MPP(6, 0x3, 0, 0, 1)
#define MPP7_GPIO MPP(7, 0x0, 1, 1, 1)
#define MPP7_GE0_TXD7 MPP(7, 0x1, 0, 0, 1)
#define MPP7_GE1_TXD3 MPP(7, 0x2, 0, 0, 1)
#define MPP7_GE0_TXD7 MPP(7, 0x1, 0, 1, 1)
#define MPP7_GE1_TXD3 MPP(7, 0x2, 0, 1, 1)
#define MPP7_UNUSED MPP(7, 0x3, 0, 0, 1)
#define MPP8_GPIO MPP(8, 0x0, 1, 1, 1)
#define MPP8_GE0_RXD4 MPP(8, 0x1, 0, 0, 1)
#define MPP8_GE1_RXD0 MPP(8, 0x2, 0, 0, 1)
#define MPP8_GE0_RXD4 MPP(8, 0x1, 1, 0, 1)
#define MPP8_GE1_RXD0 MPP(8, 0x2, 1, 0, 1)
#define MPP8_UNUSED MPP(8, 0x3, 0, 0, 1)
#define MPP9_GPIO MPP(9, 0x0, 1, 1, 1)
#define MPP9_GE0_RXD5 MPP(9, 0x1, 0, 0, 1)
#define MPP9_GE1_RXD1 MPP(9, 0x2, 0, 0, 1)
#define MPP9_GE0_RXD5 MPP(9, 0x1, 1, 0, 1)
#define MPP9_GE1_RXD1 MPP(9, 0x2, 1, 0, 1)
#define MPP9_UNUSED MPP(9, 0x3, 0, 0, 1)
#define MPP10_GPIO MPP(10, 0x0, 1, 1, 1)
#define MPP10_GE0_RXD6 MPP(10, 0x1, 0, 0, 1)
#define MPP10_GE1_RXD2 MPP(10, 0x2, 0, 0, 1)
#define MPP10_GE0_RXD6 MPP(10, 0x1, 1, 0, 1)
#define MPP10_GE1_RXD2 MPP(10, 0x2, 1, 0, 1)
#define MPP10_UNUSED MPP(10, 0x3, 0, 0, 1)
#define MPP11_GPIO MPP(11, 0x0, 1, 1, 1)
#define MPP11_GE0_RXD7 MPP(11, 0x1, 0, 0, 1)
#define MPP11_GE1_RXD3 MPP(11, 0x2, 0, 0, 1)
#define MPP11_GE0_RXD7 MPP(11, 0x1, 1, 0, 1)
#define MPP11_GE1_RXD3 MPP(11, 0x2, 1, 0, 1)
#define MPP11_UNUSED MPP(11, 0x3, 0, 0, 1)
#define MPP12_GPIO MPP(12, 0x0, 1, 1, 1)
#define MPP12_M_BB MPP(12, 0x3, 0, 0, 1)
#define MPP12_UA0_CTSn MPP(12, 0x4, 0, 0, 1)
#define MPP12_NAND_FLASH_REn0 MPP(12, 0x5, 0, 0, 1)
#define MPP12_TDM0_SCSn MPP(12, 0X6, 0, 0, 1)
#define MPP12_M_BB MPP(12, 0x3, 1, 0, 1)
#define MPP12_UA0_CTSn MPP(12, 0x4, 1, 0, 1)
#define MPP12_NAND_FLASH_REn0 MPP(12, 0x5, 0, 1, 1)
#define MPP12_TDM0_SCSn MPP(12, 0X6, 0, 1, 1)
#define MPP12_UNUSED MPP(12, 0x1, 0, 0, 1)
#define MPP13_GPIO MPP(13, 0x0, 1, 1, 1)
#define MPP13_SYSRST_OUTn MPP(13, 0x3, 0, 0, 1)
#define MPP13_UA0_RTSn MPP(13, 0x4, 0, 0, 1)
#define MPP13_NAN_FLASH_WEn0 MPP(13, 0x5, 0, 0, 1)
#define MPP13_TDM_SCLK MPP(13, 0x6, 0, 0, 1)
#define MPP13_SYSRST_OUTn MPP(13, 0x3, 0, 1, 1)
#define MPP13_UA0_RTSn MPP(13, 0x4, 0, 1, 1)
#define MPP13_NAN_FLASH_WEn0 MPP(13, 0x5, 0, 1, 1)
#define MPP13_TDM_SCLK MPP(13, 0x6, 0, 1, 1)
#define MPP13_UNUSED MPP(13, 0x1, 0, 0, 1)
#define MPP14_GPIO MPP(14, 0x0, 1, 1, 1)
#define MPP14_SATA1_ACTn MPP(14, 0x3, 0, 0, 1)
#define MPP14_UA1_CTSn MPP(14, 0x4, 0, 0, 1)
#define MPP14_NAND_FLASH_REn1 MPP(14, 0x5, 0, 0, 1)
#define MPP14_TDM_SMOSI MPP(14, 0x6, 0, 0, 1)
#define MPP14_SATA1_ACTn MPP(14, 0x3, 0, 1, 1)
#define MPP14_UA1_CTSn MPP(14, 0x4, 1, 0, 1)
#define MPP14_NAND_FLASH_REn1 MPP(14, 0x5, 0, 1, 1)
#define MPP14_TDM_SMOSI MPP(14, 0x6, 0, 1, 1)
#define MPP14_UNUSED MPP(14, 0x1, 0, 0, 1)
#define MPP15_GPIO MPP(15, 0x0, 1, 1, 1)
#define MPP15_SATA0_ACTn MPP(15, 0x3, 0, 0, 1)
#define MPP15_UA1_RTSn MPP(15, 0x4, 0, 0, 1)
#define MPP15_NAND_FLASH_WEn1 MPP(15, 0x5, 0, 0, 1)
#define MPP15_TDM_SMISO MPP(15, 0x6, 0, 0, 1)
#define MPP15_SATA0_ACTn MPP(15, 0x3, 0, 1, 1)
#define MPP15_UA1_RTSn MPP(15, 0x4, 0, 1, 1)
#define MPP15_NAND_FLASH_WEn1 MPP(15, 0x5, 0, 1, 1)
#define MPP15_TDM_SMISO MPP(15, 0x6, 1, 0, 1)
#define MPP15_UNUSED MPP(15, 0x1, 0, 0, 1)
#define MPP16_GPIO MPP(16, 0x0, 1, 1, 1)
#define MPP16_SATA1_PRESENTn MPP(16, 0x3, 0, 0, 1)
#define MPP16_UA2_TXD MPP(16, 0x4, 0, 0, 1)
#define MPP16_NAND_FLASH_REn3 MPP(16, 0x5, 0, 0, 1)
#define MPP16_TDM_INTn MPP(16, 0x6, 0, 0, 1)
#define MPP16_SATA1_PRESENTn MPP(16, 0x3, 0, 1, 1)
#define MPP16_UA2_TXD MPP(16, 0x4, 0, 1, 1)
#define MPP16_NAND_FLASH_REn3 MPP(16, 0x5, 0, 1, 1)
#define MPP16_TDM_INTn MPP(16, 0x6, 1, 0, 1)
#define MPP16_UNUSED MPP(16, 0x1, 0, 0, 1)
#define MPP17_GPIO MPP(17, 0x0, 1, 1, 1)
#define MPP17_SATA0_PRESENTn MPP(17, 0x3, 0, 0, 1)
#define MPP17_UA2_RXD MPP(17, 0x4, 0, 0, 1)
#define MPP17_NAND_FLASH_WEn3 MPP(17, 0x5, 0, 0, 1)
#define MPP17_TDM_RSTn MPP(17, 0x6, 0, 0, 1)
#define MPP17_SATA0_PRESENTn MPP(17, 0x3, 0, 1, 1)
#define MPP17_UA2_RXD MPP(17, 0x4, 1, 0, 1)
#define MPP17_NAND_FLASH_WEn3 MPP(17, 0x5, 0, 1, 1)
#define MPP17_TDM_RSTn MPP(17, 0x6, 0, 1, 1)
#define MPP17_UNUSED MPP(17, 0x1, 0, 0, 1)
#define MPP18_GPIO MPP(18, 0x0, 1, 1, 1)
#define MPP18_UA0_CTSn MPP(18, 0x4, 0, 0, 1)
#define MPP18_BOOT_FLASH_REn MPP(18, 0x5, 0, 0, 1)
#define MPP18_UA0_CTSn MPP(18, 0x4, 1, 0, 1)
#define MPP18_BOOT_FLASH_REn MPP(18, 0x5, 0, 1, 1)
#define MPP18_UNUSED MPP(18, 0x1, 0, 0, 1)
#define MPP19_GPIO MPP(19, 0x0, 1, 1, 1)
#define MPP19_UA0_CTSn MPP(19, 0x4, 0, 0, 1)
#define MPP19_BOOT_FLASH_WEn MPP(19, 0x5, 0, 0, 1)
#define MPP19_UA0_CTSn MPP(19, 0x4, 0, 1, 1)
#define MPP19_BOOT_FLASH_WEn MPP(19, 0x5, 0, 1, 1)
#define MPP19_UNUSED MPP(19, 0x1, 0, 0, 1)
#define MPP20_GPIO MPP(20, 0x0, 1, 1, 1)
#define MPP20_UA1_CTSs MPP(20, 0x4, 0, 0, 1)
#define MPP20_TDM_PCLK MPP(20, 0x6, 0, 0, 0)
#define MPP20_UA1_CTSs MPP(20, 0x4, 1, 0, 1)
#define MPP20_TDM_PCLK MPP(20, 0x6, 1, 1, 0)
#define MPP20_UNUSED MPP(20, 0x1, 0, 0, 1)
#define MPP21_GPIO MPP(21, 0x0, 1, 1, 1)
#define MPP21_UA1_CTSs MPP(21, 0x4, 0, 0, 1)
#define MPP21_TDM_FSYNC MPP(21, 0x6, 0, 0, 0)
#define MPP21_UA1_CTSs MPP(21, 0x4, 0, 1, 1)
#define MPP21_TDM_FSYNC MPP(21, 0x6, 1, 1, 0)
#define MPP21_UNUSED MPP(21, 0x1, 0, 0, 1)
#define MPP22_GPIO MPP(22, 0x0, 1, 1, 1)
#define MPP22_UA3_TDX MPP(22, 0x4, 0, 0, 1)
#define MPP22_NAND_FLASH_REn2 MPP(22, 0x5, 0, 0, 1)
#define MPP22_TDM_DRX MPP(22, 0x6, 0, 0, 1)
#define MPP22_UA3_TDX MPP(22, 0x4, 0, 1, 1)
#define MPP22_NAND_FLASH_REn2 MPP(22, 0x5, 0, 1, 1)
#define MPP22_TDM_DRX MPP(22, 0x6, 1, 0, 1)
#define MPP22_UNUSED MPP(22, 0x1, 0, 0, 1)
#define MPP23_GPIO MPP(23, 0x0, 1, 1, 1)
#define MPP23_UA3_RDX MPP(23, 0x4, 0, 0, 1)
#define MPP23_NAND_FLASH_WEn2 MPP(23, 0x5, 0, 0, 1)
#define MPP23_TDM_DTX MPP(23, 0x6, 0, 0, 1)
#define MPP23_UA3_RDX MPP(23, 0x4, 1, 0, 1)
#define MPP23_NAND_FLASH_WEn2 MPP(23, 0x5, 0, 1, 1)
#define MPP23_TDM_DTX MPP(23, 0x6, 0, 1, 1)
#define MPP23_UNUSED MPP(23, 0x1, 0, 0, 1)
#define MPP24_GPIO MPP(24, 0x0, 1, 1, 1)
#define MPP24_UA2_TXD MPP(24, 0x4, 0, 0, 1)
#define MPP24_TDM_INTn MPP(24, 0x6, 0, 0, 1)
#define MPP24_UA2_TXD MPP(24, 0x4, 0, 1, 1)
#define MPP24_TDM_INTn MPP(24, 0x6, 1, 0, 1)
#define MPP24_UNUSED MPP(24, 0x1, 0, 0, 1)
#define MPP25_GPIO MPP(25, 0x0, 1, 1, 1)
#define MPP25_UA2_RXD MPP(25, 0x4, 0, 0, 1)
#define MPP25_TDM_RSTn MPP(25, 0x6, 0, 0, 1)
#define MPP25_UA2_RXD MPP(25, 0x4, 1, 0, 1)
#define MPP25_TDM_RSTn MPP(25, 0x6, 0, 1, 1)
#define MPP25_UNUSED MPP(25, 0x1, 0, 0, 1)
#define MPP26_GPIO MPP(26, 0x0, 1, 1, 1)
#define MPP26_UA2_CTSn MPP(26, 0x4, 0, 0, 1)
#define MPP26_TDM_PCLK MPP(26, 0x6, 0, 0, 1)
#define MPP26_UA2_CTSn MPP(26, 0x4, 1, 0, 1)
#define MPP26_TDM_PCLK MPP(26, 0x6, 1, 1, 1)
#define MPP26_UNUSED MPP(26, 0x1, 0, 0, 1)
#define MPP27_GPIO MPP(27, 0x0, 1, 1, 1)
#define MPP27_UA2_RTSn MPP(27, 0x4, 0, 0, 1)
#define MPP27_TDM_FSYNC MPP(27, 0x6, 0, 0, 1)
#define MPP27_UA2_RTSn MPP(27, 0x4, 0, 1, 1)
#define MPP27_TDM_FSYNC MPP(27, 0x6, 1, 1, 1)
#define MPP27_UNUSED MPP(27, 0x1, 0, 0, 1)
#define MPP28_GPIO MPP(28, 0x0, 1, 1, 1)
#define MPP28_UA3_TXD MPP(28, 0x4, 0, 0, 1)
#define MPP28_TDM_DRX MPP(28, 0x6, 0, 0, 1)
#define MPP28_UA3_TXD MPP(28, 0x4, 0, 1, 1)
#define MPP28_TDM_DRX MPP(28, 0x6, 1, 0, 1)
#define MPP28_UNUSED MPP(28, 0x1, 0, 0, 1)
#define MPP29_GPIO MPP(29, 0x0, 1, 1, 1)
#define MPP29_UA3_RXD MPP(29, 0x4, 0, 0, 1)
#define MPP29_SYSRST_OUTn MPP(29, 0x5, 0, 0, 1)
#define MPP29_TDM_DTX MPP(29, 0x6, 0, 0, 1)
#define MPP29_UA3_RXD MPP(29, 0x4, 1, 0, 1)
#define MPP29_SYSRST_OUTn MPP(29, 0x5, 0, 1, 1)
#define MPP29_TDM_DTX MPP(29, 0x6, 0, 1, 1)
#define MPP29_UNUSED MPP(29, 0x1, 0, 0, 1)
#define MPP30_GPIO MPP(30, 0x0, 1, 1, 1)
#define MPP30_UA3_CTSn MPP(30, 0x4, 0, 0, 1)
#define MPP30_UA3_CTSn MPP(30, 0x4, 1, 0, 1)
#define MPP30_UNUSED MPP(30, 0x1, 0, 0, 1)
#define MPP31_GPIO MPP(31, 0x0, 1, 1, 1)
#define MPP31_UA3_RTSn MPP(31, 0x4, 0, 0, 1)
#define MPP31_TDM1_SCSn MPP(31, 0x6, 0, 0, 1)
#define MPP31_UA3_RTSn MPP(31, 0x4, 0, 1, 1)
#define MPP31_TDM1_SCSn MPP(31, 0x6, 0, 1, 1)
#define MPP31_UNUSED MPP(31, 0x1, 0, 0, 1)
#define MPP32_GPIO MPP(32, 0x1, 1, 1, 1)
#define MPP32_UA3_TDX MPP(32, 0x4, 0, 0, 1)
#define MPP32_SYSRST_OUTn MPP(32, 0x5, 0, 0, 1)
#define MPP32_TDM0_RXQ MPP(32, 0x6, 0, 0, 1)
#define MPP32_UA3_TDX MPP(32, 0x4, 0, 1, 1)
#define MPP32_SYSRST_OUTn MPP(32, 0x5, 0, 1, 1)
#define MPP32_TDM0_RXQ MPP(32, 0x6, 0, 1, 1)
#define MPP32_UNUSED MPP(32, 0x3, 0, 0, 1)
#define MPP33_GPIO MPP(33, 0x1, 1, 1, 1)
#define MPP33_UA3_RDX MPP(33, 0x4, 0, 0, 1)
#define MPP33_TDM0_TXQ MPP(33, 0x6, 0, 0, 1)
#define MPP33_UA3_RDX MPP(33, 0x4, 1, 0, 1)
#define MPP33_TDM0_TXQ MPP(33, 0x6, 0, 1, 1)
#define MPP33_UNUSED MPP(33, 0x3, 0, 0, 1)
#define MPP34_GPIO MPP(34, 0x1, 1, 1, 1)
#define MPP34_UA2_TDX MPP(34, 0x4, 0, 0, 1)
#define MPP34_TDM1_RXQ MPP(34, 0x6, 0, 0, 1)
#define MPP34_UA2_TDX MPP(34, 0x4, 0, 1, 1)
#define MPP34_TDM1_RXQ MPP(34, 0x6, 0, 1, 1)
#define MPP34_UNUSED MPP(34, 0x3, 0, 0, 1)
#define MPP35_GPIO MPP(35, 0x1, 1, 1, 1)
#define MPP35_UA2_RDX MPP(35, 0x4, 0, 0, 1)
#define MPP35_TDM1_TXQ MPP(35, 0x6, 0, 0, 1)
#define MPP35_UA2_RDX MPP(35, 0x4, 1, 0, 1)
#define MPP35_TDM1_TXQ MPP(35, 0x6, 0, 1, 1)
#define MPP35_UNUSED MPP(35, 0x3, 0, 0, 1)
#define MPP36_GPIO MPP(36, 0x1, 1, 1, 1)
#define MPP36_UA0_CTSn MPP(36, 0x2, 0, 0, 1)
#define MPP36_UA2_TDX MPP(36, 0x4, 0, 0, 1)
#define MPP36_TDM0_SCSn MPP(36, 0x6, 0, 0, 1)
#define MPP36_UA0_CTSn MPP(36, 0x2, 1, 0, 1)
#define MPP36_UA2_TDX MPP(36, 0x4, 0, 1, 1)
#define MPP36_TDM0_SCSn MPP(36, 0x6, 0, 1, 1)
#define MPP36_UNUSED MPP(36, 0x3, 0, 0, 1)
#define MPP37_GPIO MPP(37, 0x1, 1, 1, 1)
#define MPP37_UA0_RTSn MPP(37, 0x2, 0, 0, 1)
#define MPP37_UA2_RXD MPP(37, 0x4, 0, 0, 1)
#define MPP37_SYSRST_OUTn MPP(37, 0x5, 0, 0, 1)
#define MPP37_TDM_SCLK MPP(37, 0x6, 0, 0, 1)
#define MPP37_UA0_RTSn MPP(37, 0x2, 0, 1, 1)
#define MPP37_UA2_RXD MPP(37, 0x4, 1, 0, 1)
#define MPP37_SYSRST_OUTn MPP(37, 0x5, 0, 1, 1)
#define MPP37_TDM_SCLK MPP(37, 0x6, 0, 1, 1)
#define MPP37_UNUSED MPP(37, 0x3, 0, 0, 1)
#define MPP38_GPIO MPP(38, 0x1, 1, 1, 1)
#define MPP38_UA1_CTSn MPP(38, 0x2, 0, 0, 1)
#define MPP38_UA3_TXD MPP(38, 0x4, 0, 0, 1)
#define MPP38_SYSRST_OUTn MPP(38, 0x5, 0, 0, 1)
#define MPP38_TDM_SMOSI MPP(38, 0x6, 0, 0, 1)
#define MPP38_UA1_CTSn MPP(38, 0x2, 1, 0, 1)
#define MPP38_UA3_TXD MPP(38, 0x4, 0, 1, 1)
#define MPP38_SYSRST_OUTn MPP(38, 0x5, 0, 1, 1)
#define MPP38_TDM_SMOSI MPP(38, 0x6, 0, 1, 1)
#define MPP38_UNUSED MPP(38, 0x3, 0, 0, 1)
#define MPP39_GPIO MPP(39, 0x1, 1, 1, 1)
#define MPP39_UA1_RTSn MPP(39, 0x2, 0, 0, 1)
#define MPP39_UA3_RXD MPP(39, 0x4, 0, 0, 1)
#define MPP39_SYSRST_OUTn MPP(39, 0x5, 0, 0, 1)
#define MPP39_TDM_SMISO MPP(39, 0x6, 0, 0, 1)
#define MPP39_UA1_RTSn MPP(39, 0x2, 0, 1, 1)
#define MPP39_UA3_RXD MPP(39, 0x4, 1, 0, 1)
#define MPP39_SYSRST_OUTn MPP(39, 0x5, 0, 1, 1)
#define MPP39_TDM_SMISO MPP(39, 0x6, 1, 0, 1)
#define MPP39_UNUSED MPP(39, 0x3, 0, 0, 1)
#define MPP40_GPIO MPP(40, 0x1, 1, 1, 1)
#define MPP40_TDM_INTn MPP(40, 0x6, 0, 0, 1)
#define MPP40_TDM_INTn MPP(40, 0x6, 1, 0, 1)
#define MPP40_UNUSED MPP(40, 0x0, 0, 0, 1)
#define MPP41_GPIO MPP(41, 0x1, 1, 1, 1)
#define MPP41_TDM_RSTn MPP(41, 0x6, 0, 0, 1)
#define MPP41_TDM_RSTn MPP(41, 0x6, 0, 1, 1)
#define MPP41_UNUSED MPP(41, 0x0, 0, 0, 1)
#define MPP42_GPIO MPP(42, 0x1, 1, 1, 1)
#define MPP42_TDM_PCLK MPP(42, 0x6, 0, 0, 1)
#define MPP42_TDM_PCLK MPP(42, 0x6, 1, 1, 1)
#define MPP42_UNUSED MPP(42, 0x0, 0, 0, 1)
#define MPP43_GPIO MPP(43, 0x1, 1, 1, 1)
#define MPP43_TDM_FSYNC MPP(43, 0x6, 0, 0, 1)
#define MPP43_TDM_FSYNC MPP(43, 0x6, 1, 1, 1)
#define MPP43_UNUSED MPP(43, 0x0, 0, 0, 1)
#define MPP44_GPIO MPP(44, 0x1, 1, 1, 1)
#define MPP44_TDM_DRX MPP(44, 0x6, 0, 0, 1)
#define MPP44_TDM_DRX MPP(44, 0x6, 1, 0, 1)
#define MPP44_UNUSED MPP(44, 0x0, 0, 0, 1)
#define MPP45_GPIO MPP(45, 0x1, 1, 1, 1)
#define MPP45_SATA0_ACTn MPP(45, 0x3, 0, 0, 1)
#define MPP45_TDM_DRX MPP(45, 0x6, 0, 0, 1)
#define MPP45_SATA0_ACTn MPP(45, 0x3, 0, 1, 1)
#define MPP45_TDM_DRX MPP(45, 0x6, 0, 1, 1)
#define MPP45_UNUSED MPP(45, 0x0, 0, 0, 1)
#define MPP46_GPIO MPP(46, 0x1, 1, 1, 1)
#define MPP46_TDM_SCSn MPP(46, 0x6, 0, 0, 1)
#define MPP46_TDM_SCSn MPP(46, 0x6, 0, 1, 1)
#define MPP46_UNUSED MPP(46, 0x0, 0, 0, 1)
@@ -323,14 +323,14 @@
#define MPP48_GPIO MPP(48, 0x1, 1, 1, 1)
#define MPP48_SATA1_ACTn MPP(48, 0x3, 0, 0, 1)
#define MPP48_SATA1_ACTn MPP(48, 0x3, 0, 1, 1)
#define MPP48_UNUSED MPP(48, 0x2, 0, 0, 1)
#define MPP49_GPIO MPP(49, 0x1, 1, 1, 1)
#define MPP49_SATA0_ACTn MPP(49, 0x3, 0, 0, 1)
#define MPP49_M_BB MPP(49, 0x4, 0, 0, 1)
#define MPP49_SATA0_ACTn MPP(49, 0x3, 0, 1, 1)
#define MPP49_M_BB MPP(49, 0x4, 1, 0, 1)
#define MPP49_UNUSED MPP(49, 0x2, 0, 0, 1)

@@ -404,7 +404,7 @@ static int name##_set_rate(struct clk *clk, unsigned long rate) \
reg = __raw_readl(CLKCTRL_BASE_ADDR + HW_CLKCTRL_##dr); \
reg &= ~BM_CLKCTRL_##dr##_DIV; \
reg |= div << BP_CLKCTRL_##dr##_DIV; \
if (reg & (1 << clk->enable_shift)) { \
if (reg | (1 << clk->enable_shift)) { \
pr_err("%s: clock is gated\n", __func__); \
return -EINVAL; \
} \

@@ -30,7 +30,6 @@
*/
#define cpu_is_mx23() ( \
machine_is_mx23evk() || \
machine_is_stmp378x() || \
0)
#define cpu_is_mx28() ( \
machine_is_mx28evk() || \

@@ -326,7 +326,6 @@ config MACH_OMAP4_PANDA
config OMAP3_EMU
bool "OMAP3 debugging peripherals"
depends on ARCH_OMAP3
select ARM_AMBA
select OC_ETM
help
Say Y here to enable debugging hardware of omap3

@@ -49,9 +49,8 @@
#define ETH_KS8851_QUART 138
#define OMAP4_SFH7741_SENSOR_OUTPUT_GPIO 184
#define OMAP4_SFH7741_ENABLE_GPIO 188
#define HDMI_GPIO_CT_CP_HPD 60 /* HPD mode enable/disable */
#define HDMI_GPIO_HPD 60 /* Hot plug pin for HDMI */
#define HDMI_GPIO_LS_OE 41 /* Level shifter for HDMI */
#define HDMI_GPIO_HPD 63 /* Hotplug detect */
static const int sdp4430_keymap[] = {
KEY(0, 0, KEY_E),
@@ -579,8 +578,12 @@ static void __init omap_sfh7741prox_init(void)
static void sdp4430_hdmi_mux_init(void)
{
/* PAD0_HDMI_HPD_PAD1_HDMI_CEC */
omap_mux_init_signal("hdmi_hpd",
OMAP_PIN_INPUT_PULLUP);
omap_mux_init_signal("hdmi_cec",
OMAP_PIN_INPUT_PULLUP);
/* PAD0_HDMI_DDC_SCL_PAD1_HDMI_DDC_SDA */
omap_mux_init_signal("hdmi_ddc_scl",
OMAP_PIN_INPUT_PULLUP);
omap_mux_init_signal("hdmi_ddc_sda",
@@ -588,9 +591,8 @@ static void sdp4430_hdmi_mux_init(void)
}
static struct gpio sdp4430_hdmi_gpios[] = {
{ HDMI_GPIO_CT_CP_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ct_cp_hpd" },
{ HDMI_GPIO_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_hpd" },
{ HDMI_GPIO_LS_OE, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ls_oe" },
{ HDMI_GPIO_HPD, GPIOF_DIR_IN, "hdmi_gpio_hpd" },
};
static int sdp4430_panel_enable_hdmi(struct omap_dss_device *dssdev)
@@ -607,21 +609,26 @@ static int sdp4430_panel_enable_hdmi(struct omap_dss_device *dssdev)
static void sdp4430_panel_disable_hdmi(struct omap_dss_device *dssdev)
{
gpio_free_array(sdp4430_hdmi_gpios, ARRAY_SIZE(sdp4430_hdmi_gpios));
gpio_free(HDMI_GPIO_LS_OE);
gpio_free(HDMI_GPIO_HPD);
}
static struct omap_dss_hdmi_data sdp4430_hdmi_data = {
.hpd_gpio = HDMI_GPIO_HPD,
};
static struct omap_dss_device sdp4430_hdmi_device = {
.name = "hdmi",
.driver_name = "hdmi_panel",
.type = OMAP_DISPLAY_TYPE_HDMI,
.clocks = {
.dispc = {
.dispc_fclk_src = OMAP_DSS_CLK_SRC_FCK,
},
.hdmi = {
.regn = 15,
.regm2 = 1,
},
},
.platform_enable = sdp4430_panel_enable_hdmi,
.platform_disable = sdp4430_panel_disable_hdmi,
.channel = OMAP_DSS_CHANNEL_DIGIT,
.data = &sdp4430_hdmi_data,
};
static struct omap_dss_device *sdp4430_dss_devices[] = {
@@ -638,10 +645,6 @@ void omap_4430sdp_display_init(void)
{
sdp4430_hdmi_mux_init();
omap_display_init(&sdp4430_dss_data);
omap_mux_init_gpio(HDMI_GPIO_LS_OE, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_CT_CP_HPD, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_HPD, OMAP_PIN_INPUT_PULLDOWN);
}
#ifdef CONFIG_OMAP_MUX

@@ -52,9 +52,8 @@
#define GPIO_HUB_NRESET 62
#define GPIO_WIFI_PMENA 43
#define GPIO_WIFI_IRQ 53
#define HDMI_GPIO_CT_CP_HPD 60 /* HPD mode enable/disable */
#define HDMI_GPIO_HPD 60 /* Hot plug pin for HDMI */
#define HDMI_GPIO_LS_OE 41 /* Level shifter for HDMI */
#define HDMI_GPIO_HPD 63 /* Hotplug detect */
/* wl127x BT, FM, GPS connectivity chip */
static int wl1271_gpios[] = {46, -1, -1};
@@ -615,8 +614,12 @@ int __init omap4_panda_dvi_init(void)
static void omap4_panda_hdmi_mux_init(void)
{
/* PAD0_HDMI_HPD_PAD1_HDMI_CEC */
omap_mux_init_signal("hdmi_hpd",
OMAP_PIN_INPUT_PULLUP);
omap_mux_init_signal("hdmi_cec",
OMAP_PIN_INPUT_PULLUP);
/* PAD0_HDMI_DDC_SCL_PAD1_HDMI_DDC_SDA */
omap_mux_init_signal("hdmi_ddc_scl",
OMAP_PIN_INPUT_PULLUP);
omap_mux_init_signal("hdmi_ddc_sda",
@@ -624,9 +627,8 @@ static void omap4_panda_hdmi_mux_init(void)
}
static struct gpio panda_hdmi_gpios[] = {
{ HDMI_GPIO_CT_CP_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ct_cp_hpd" },
{ HDMI_GPIO_HPD, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_hpd" },
{ HDMI_GPIO_LS_OE, GPIOF_OUT_INIT_HIGH, "hdmi_gpio_ls_oe" },
{ HDMI_GPIO_HPD, GPIOF_DIR_IN, "hdmi_gpio_hpd" },
};
static int omap4_panda_panel_enable_hdmi(struct omap_dss_device *dssdev)
@@ -643,13 +645,10 @@ static int omap4_panda_panel_enable_hdmi(struct omap_dss_device *dssdev)
static void omap4_panda_panel_disable_hdmi(struct omap_dss_device *dssdev)
{
gpio_free_array(panda_hdmi_gpios, ARRAY_SIZE(panda_hdmi_gpios));
gpio_free(HDMI_GPIO_LS_OE);
gpio_free(HDMI_GPIO_HPD);
}
static struct omap_dss_hdmi_data omap4_panda_hdmi_data = {
.hpd_gpio = HDMI_GPIO_HPD,
};
static struct omap_dss_device omap4_panda_hdmi_device = {
.name = "hdmi",
.driver_name = "hdmi_panel",
@@ -657,7 +656,6 @@ static struct omap_dss_device omap4_panda_hdmi_device = {
.platform_enable = omap4_panda_panel_enable_hdmi,
.platform_disable = omap4_panda_panel_disable_hdmi,
.channel = OMAP_DSS_CHANNEL_DIGIT,
.data = &omap4_panda_hdmi_data,
};
static struct omap_dss_device *omap4_panda_dss_devices[] = {
@@ -681,10 +679,6 @@ void omap4_panda_display_init(void)
omap4_panda_hdmi_mux_init();
omap_display_init(&omap4_panda_dss_data);
omap_mux_init_gpio(HDMI_GPIO_LS_OE, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_CT_CP_HPD, OMAP_PIN_OUTPUT);
omap_mux_init_gpio(HDMI_GPIO_HPD, OMAP_PIN_INPUT_PULLDOWN);
}
static void __init omap4_panda_init(void)

@@ -133,7 +133,7 @@ static struct platform_device rx51_charger_device = {
static void __init rx51_charger_init(void)
{
WARN_ON(gpio_request_one(RX51_USB_TRANSCEIVER_RST_GPIO,
GPIOF_OUT_INIT_HIGH, "isp1704_reset"));
GPIOF_OUT_INIT_LOW, "isp1704_reset"));
platform_device_register(&rx51_charger_device);
}

@@ -528,13 +528,7 @@ int gpmc_cs_configure(int cs, int cmd, int wval)
case GPMC_CONFIG_DEV_SIZE:
regval = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);
/* clear 2 target bits */
regval &= ~GPMC_CONFIG1_DEVICESIZE(3);
/* set the proper value */
regval |= GPMC_CONFIG1_DEVICESIZE(wval);
gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, regval);
break;

@@ -53,7 +53,7 @@ int __init omap_init_opp_table(struct omap_opp_def *opp_def,
omap_table_init = 1;
/* Lets now register with OPP library */
for (i = 0; i < opp_def_size; i++, opp_def++) {
for (i = 0; i < opp_def_size; i++) {
struct omap_hwmod *oh;
struct device *dev;
@@ -86,6 +86,7 @@ int __init omap_init_opp_table(struct omap_opp_def *opp_def,
__func__, opp_def->freq,
opp_def->hwmod_name, i, r);
}
opp_def++;
}
return 0;

@@ -137,7 +137,7 @@ static irqreturn_t sr_interrupt(int irq, void *data)
sr_write_reg(sr_info, ERRCONFIG_V1, status);
} else if (sr_info->ip_type == SR_TYPE_V2) {
/* Read the status bits */
status = sr_read_reg(sr_info, IRQSTATUS);
sr_read_reg(sr_info, IRQSTATUS);
/* Clear them by writing back */
sr_write_reg(sr_info, IRQSTATUS, status);

@@ -29,7 +29,6 @@
#include <mach/hardware.h>
#include <mach/orion5x.h>
#include <plat/orion_nand.h>
#include <plat/ehci-orion.h>
#include <plat/time.h>
#include <plat/common.h>
#include "common.h"
@@ -73,8 +72,7 @@ void __init orion5x_map_io(void)
void __init orion5x_ehci0_init(void)
{
orion_ehci_init(&orion5x_mbus_dram_info,
ORION5X_USB0_PHYS_BASE, IRQ_ORION5X_USB0_CTRL,
EHCI_PHY_ORION);
ORION5X_USB0_PHYS_BASE, IRQ_ORION5X_USB0_CTRL);
}

@@ -65,8 +65,8 @@
#define MPP8_GIGE MPP(8, 0x1, 0, 0, 1, 1, 1)
#define MPP9_UNUSED MPP(9, 0x0, 0, 0, 1, 1, 1)
#define MPP9_GPIO MPP(9, 0x0, 1, 1, 1, 1, 1)
#define MPP9_GIGE MPP(9, 0x1, 0, 0, 1, 1, 1)
#define MPP9_GPIO MPP(9, 0x0, 0, 0, 1, 1, 1)
#define MPP9_GIGE MPP(9, 0x1, 1, 1, 1, 1, 1)
#define MPP10_UNUSED MPP(10, 0x0, 0, 0, 1, 1, 1)
#define MPP10_GPIO MPP(10, 0x0, 1, 1, 1, 1, 1)

@@ -307,7 +307,7 @@ static inline void balloon3_mmc_init(void) {}
/******************************************************************************
* USB Gadget
******************************************************************************/
#if defined(CONFIG_USB_PXA27X)||defined(CONFIG_USB_PXA27X_MODULE)
#if defined(CONFIG_USB_GADGET_PXA27X)||defined(CONFIG_USB_GADGET_PXA27X_MODULE)
static void balloon3_udc_command(int cmd)
{
if (cmd == PXA2XX_UDC_CMD_CONNECT)

Some files were not shown because too many files have changed in this diff Show more