Avoid undefined behavior for arm64 stemming from 1u << 32 in
loops with upper bound kNumberOfXRegisters.
Create iterators for enumerating bits in an integer either
from high to low or from low to high and use them for
<arch>Context::FillCalleeSaves() on all architectures.
Refactor runtime/utils.{h,cc} by moving all bit-fiddling
functions to runtime/base/bit_utils.{h,cc} (together with
the new bit iterators) and all time-related functions to
runtime/base/time_utils.{h,cc}. Improve test coverage and
fix some corner cases for the bit-fiddling functions.
Bug: 13925192
(cherry picked from commit 80afd02024d20e60b197d3adfbb43cc303cf29e0)
Change-Id: I905257a21de90b5860ebe1e39563758f721eab82
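A minimal sketch of the low-to-high bit iteration idea, assuming a GCC/Clang-style __builtin_ctz; this is an illustration, not the BitUtils API the change adds. Because the loop walks the mask itself, it never needs to evaluate 1u << 32.

```cpp
// Minimal sketch of a low-to-high bit iterator (illustration only, not the
// real BitUtils API). Iterating over the set bits of the mask avoids ever
// computing 1u << 32, which is undefined behavior for a 32-bit operand.
#include <cstdint>
#include <cstdio>

class LowToHighBits {
 public:
  explicit LowToHighBits(uint32_t mask) : mask_(mask) {}

  class Iterator {
   public:
    explicit Iterator(uint32_t mask) : mask_(mask) {}
    bool operator!=(const Iterator& other) const { return mask_ != other.mask_; }
    int operator*() const { return __builtin_ctz(mask_); }        // index of lowest set bit
    Iterator& operator++() { mask_ &= mask_ - 1u; return *this; }  // clear lowest set bit
   private:
    uint32_t mask_;
  };

  Iterator begin() const { return Iterator(mask_); }
  Iterator end() const { return Iterator(0u); }

 private:
  const uint32_t mask_;
};

int main() {
  const uint32_t core_spill_mask = 0x400380f0u;  // hypothetical callee-save mask
  for (int reg : LowToHighBits(core_spill_mask)) {
    std::printf("restore register %d\n", reg);
  }
  return 0;
}
```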
This enables the standard object header to be used with the
Baker-style read barrier.
Bug: 19355854
Bug: 12687968
Change-Id: Ie552b6e1dfe30e96cb1d0895bd0dff25f9d7d015
This prepares for the CC collector to use the standard object header
model by storing the read barrier state in the lock word.
Bug: 19355854
Bug: 12687968
Change-Id: Ia7585662dd2cebf0479a3e74f734afe5059fb70f
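A hedged sketch of the idea: a couple of lock word bits record a read barrier state that a Baker-style barrier can consult on reference loads. The bit positions, state values, and the Mark() hook below are assumptions for illustration, not ART's actual LockWord or ReadBarrier API.

```cpp
// Illustrative only: bit positions, state values, and Mark() are assumptions.
#include <cstdint>

constexpr uint32_t kReadBarrierStateShift = 28;   // assumed: just below the 2 lock state bits
constexpr uint32_t kReadBarrierStateMask = 0x1u;  // assumed: a single gray/white bit
constexpr uint32_t kRbStateGray = 1u;

struct ObjectSketch {
  uint32_t monitor_;  // the lock word

  uint32_t ReadBarrierState() const {
    return (monitor_ >> kReadBarrierStateShift) & kReadBarrierStateMask;
  }
};

// Placeholder collector hook; a real collector would mark or forward the object.
ObjectSketch* Mark(ObjectSketch* obj) { return obj; }

// Baker-style check on a reference load: gray objects are handed to the
// collector before the mutator may use them.
inline ObjectSketch* ReadBarrierOnLoad(ObjectSketch* ref) {
  if (ref != nullptr && ref->ReadBarrierState() == kRbStateGray) {
    ref = Mark(ref);
  }
  return ref;
}
```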
Fix associated errors about unused parameters and implicit sign conversions.
For sign conversions this was largely in the area of enums, so add ostream
operators for the affected enums and fix tools/generate-operator-out.py.
Tidy arena allocation code and arena-allocated data types rather than fixing
new and delete operators.
Remove dead code.
Change-Id: I5b433e722d2f75baacfacae4d32aef4a828bfe1b
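A hedged sketch of the streaming-operator pattern referred to above, in the style tools/generate-operator-out.py produces; the enum below is hypothetical, not one of the enums the change actually touched.

```cpp
// Illustrative only: a hypothetical enum plus the switch-based operator<<
// that lets its values be streamed directly when logging.
#include <ostream>

enum ExampleLockState {
  kUnlocked,
  kThinLocked,
  kFatLocked,
};

inline std::ostream& operator<<(std::ostream& os, const ExampleLockState& rhs) {
  switch (rhs) {
    case kUnlocked:   os << "kUnlocked";   break;
    case kThinLocked: os << "kThinLocked"; break;
    case kFatLocked:  os << "kFatLocked";  break;
    default: os << "ExampleLockState[" << static_cast<int>(rhs) << "]"; break;
  }
  return os;
}
```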
Change-Id: I1fc2d77f78f49741c1316ccc76b02357158dfdbe
The lock max count should use 14 bits, since the 2 highest bits are reserved for the lock state.
Change-Id: I9d562f7bca9c0853231800a706a8523204e8aa9d
Signed-off-by: Dmitry Petrochenko <dmitry.petrochenko@intel.com>
Signed-off-by: Serguei Katkov <serguei.i.katkov@intel.com>
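A sketch of the 32-bit lock word split the commit implies: 2 high bits of lock state and a 14-bit recursive lock count, with the assumption here that the remaining 16 low bits hold the owner's thin lock thread id. Names and exact shifts are illustrative, not ART's LockWord.

```cpp
// Illustrative thin-lock word encoding (assumed field order, not ART's exact
// LockWord): 2 state bits, a 14-bit recursive lock count, a 16-bit owner id.
#include <cstdint>

constexpr uint32_t kLockStateShift = 30;              // 2 highest bits: lock state
constexpr uint32_t kLockStateMask = 0x3u;
constexpr uint32_t kLockCountShift = 16;              // 14 bits: recursive lock count
constexpr uint32_t kLockCountMask = (1u << 14) - 1u;
constexpr uint32_t kMaxLockCount = kLockCountMask;    // 16383
constexpr uint32_t kLockOwnerMask = (1u << 16) - 1u;  // 16 bits: owner thread id (assumed)

constexpr uint32_t MakeThinLockWord(uint32_t state, uint32_t count, uint32_t owner) {
  return ((state & kLockStateMask) << kLockStateShift) |
         ((count & kLockCountMask) << kLockCountShift) |
         (owner & kLockOwnerMask);
}

constexpr uint32_t ThinLockCount(uint32_t lock_word) {
  return (lock_word >> kLockCountShift) & kLockCountMask;
}

static_assert(ThinLockCount(MakeThinLockWord(0u, kMaxLockCount, 42u)) == 16383u,
              "the recursion count must fit in 14 bits");
```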
Modify mirror objects so that references between them use an ObjectReference
value type rather than an Object*, so that functionality to compress larger
references can be captured in the ObjectReference implementation.
ObjectReferences are 32-bit and all other aspects of object layout remain as
they are currently.
Expand fields in objects holding pointers so they can hold 64-bit pointers. It is
expected that the size of these will come down by improving where we hold compiler
meta-data.
Stub out the x86_64 architecture-specific runtime implementation.
Modify OutputStream so that reads and writes are of unsigned quantities.
Make the use of portable or quick code more explicit.
Templatize AtomicInteger to support more than just int32_t as a type.
Add missing annotalysis information on the mutator lock and fix related issues.
Refactor array copy implementations so they are shared between System and other
uses elsewhere in the runtime.
Fix numerous 64-bit build issues.
Change-Id: I1a5694c251a42c9eff71084dfdd4b51fff716822
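A simplified sketch of the ObjectReference idea (not ART's exact template): a 32-bit value type stands in for raw Object* fields, so any future compression of heap references is confined to one class.

```cpp
// Simplified sketch, not the real ObjectReference: the reference is stored as
// 32 bits regardless of pointer size, so object layout is unchanged and a
// compressed encoding would only have to change these two accessors.
#include <cstdint>

template <typename MirrorType>
class ObjectReference {
 public:
  MirrorType* AsMirrorPtr() const {
    // Currently just the pointer value, assuming the heap sits in the low 4GB.
    return reinterpret_cast<MirrorType*>(static_cast<uintptr_t>(reference_));
  }

  void Assign(MirrorType* other) {
    reference_ = static_cast<uint32_t>(reinterpret_cast<uintptr_t>(other));
  }

 private:
  uint32_t reference_;  // always 32 bits
};
```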
The compacting collector is currently similar to semispace. It works by
copying objects back and forth between two bump pointer spaces. There
are types of objects which are "non-movable" due to current runtime
limitations. These are Classes, Methods, and Fields.
Bump pointer spaces are a new type of continuous alloc space which have
no lock in the allocation code path. Allocating from these spaces uses
atomic operations to increase an index. Traversing the objects in the bump
pointer space relies on Object::SizeOf matching the allocated size exactly.
Runtime changes:
JNI::GetArrayElements returns copies of objects if you attempt to get the
backing data of a movable array. For GetArrayElementsCritical, we return
direct backing storage for any type of array, but temporarily disable
the GC until the critical region is completed.
Added a new runtime call, VisitObjects, which is used in place of the
old pattern of flushing the allocation stack and walking the bitmaps.
Changed the image writer to be compaction safe and to use the object monitor
word for forwarding addresses.
Added a number of SIRTs to ClassLinker, MethodLinker, etc.
TODO: Enable switching allocators, compacting on background, etc.
Bug: 8981901
Change-Id: I3c886fd322a6eef2b99388d19a765042ec26ab99
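A minimal sketch of the lock-free allocation path described above, assuming a compare-and-swap bump of the end position and 8-byte alignment; the names are illustrative, not ART's BumpPointerSpace.

```cpp
// Minimal sketch: the end position is bumped with an atomic compare-and-swap,
// so the hot allocation path takes no lock.
#include <atomic>
#include <cstddef>
#include <cstdint>

class BumpPointerSpaceSketch {
 public:
  BumpPointerSpaceSketch(void* begin, size_t capacity)
      : end_(reinterpret_cast<uintptr_t>(begin)),
        limit_(reinterpret_cast<uintptr_t>(begin) + capacity) {}

  // Returns nullptr when the space is exhausted.
  void* Alloc(size_t num_bytes) {
    const size_t aligned = (num_bytes + 7u) & ~static_cast<size_t>(7u);  // 8-byte align
    uintptr_t old_end = end_.load(std::memory_order_relaxed);
    uintptr_t new_end;
    do {
      new_end = old_end + aligned;
      if (new_end > limit_) {
        return nullptr;
      }
    } while (!end_.compare_exchange_weak(old_end, new_end, std::memory_order_relaxed));
    // Space traversal later relies on the object's reported size matching
    // 'aligned' exactly, as the commit notes for Object::SizeOf.
    return reinterpret_cast<void*>(old_end);
  }

 private:
  std::atomic<uintptr_t> end_;  // current bump position
  const uintptr_t limit_;       // begin + capacity
};
```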
Before, we computed identity hash codes whenever we inflated a monitor.
This caused issues since it meant that we would have all of these
hash codes in the image, causing locks to excessively inflate during
application run time.
This change makes it so that we lazily compute hash codes. When a
thin lock gets inflated, we assign it a hash code of 0. This value
signifies no hash code. When we try to get the identity hash code of
an object with an inflated monitor, it gets computed if it is 0.
Change-Id: Iae6acd1960515a36e74644e5b1323ff336731806
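A sketch of the lazy scheme with illustrative names (not ART's Monitor or Object API): 0 is reserved to mean "no hash code yet", and a hash is generated only when it is first requested.

```cpp
// Illustrative sketch of lazy identity hash computation.
#include <cstdint>
#include <random>

class InflatedMonitorSketch {
 public:
  // Inflating a thin lock no longer computes a hash up front.
  InflatedMonitorSketch() : hash_code_(0u) {}

  // Real code would hold the monitor lock here; omitted to keep the sketch small.
  uint32_t GetIdentityHashCode() {
    if (hash_code_ == 0u) {
      static std::mt19937 rng(12345u);        // stand-in pseudo-random source
      uint32_t hash;
      do {
        hash = static_cast<uint32_t>(rng());
      } while (hash == 0u);                   // keep 0 free as the "not computed" marker
      hash_code_ = hash;
    }
    return hash_code_;
  }

 private:
  uint32_t hash_code_;  // 0 == identity hash not computed yet
};
```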
The object identity hash is now stored in the monitor word after
being computed. Hashes are computed by a pseudo-random number
generator.
When we write the image, we eagerly compute object hashes to
prevent pages from getting dirtied.
Bug: 8981901
Change-Id: Ic8edacbacb0afc7055fd740a52444929f88ed564
Bug 6961405.
Don't inflate monitors for Notify and NotifyAll.
Tidy the lock word, handle the recursive lock case alongside the unlocked case, and
move assembly out of line (except for ARM quick). Also handle null in the
out-of-line assembly, as the test is quick and the enter/exit code is already a
safepoint.
To gain ownership of a monitor on behalf of another thread, monitor contenders
must not hold the monitor_lock_, so they wait on a condition variable.
Reduce the size of the per-mutex contention log.
Be consistent in calling thin lock thread ids just thread ids.
Fix potential thread death races caused by the use of FindThreadByThreadId;
make it an invariant that returned threads are either self or suspended.
Code size reduction on ARM boot.oat: 0.2%.
Old Nexus 7 speedup 0.25%, new Nexus 7 speedup 1.4%, Nexus 10 speedup 2.24%,
Nexus 4 speedup 2.09% on DeltaBlue.
Change-Id: Id52558b914f160d9c8578fdd7fc8199a9598576a
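A hedged sketch of the contention scheme described above (not ART's Monitor code): a contender is not holding monitor_lock_ while it is blocked, because the condition variable releases the mutex for the duration of the wait and re-checks ownership on wake-up.

```cpp
// Illustrative monitor contention using a condition variable.
#include <condition_variable>
#include <cstdint>
#include <mutex>

class MonitorContentionSketch {
 public:
  void Lock(uint32_t thread_id) {
    std::unique_lock<std::mutex> guard(monitor_lock_);
    // wait() drops monitor_lock_ while blocked and re-acquires it before
    // re-evaluating the ownership predicate.
    monitor_contenders_.wait(guard, [this] { return owner_thread_id_ == 0u; });
    owner_thread_id_ = thread_id;
  }

  void Unlock(uint32_t thread_id) {
    std::lock_guard<std::mutex> guard(monitor_lock_);
    if (owner_thread_id_ == thread_id) {
      owner_thread_id_ = 0u;
      monitor_contenders_.notify_one();  // wake a waiting contender
    }
  }

 private:
  std::mutex monitor_lock_;
  std::condition_variable monitor_contenders_;
  uint32_t owner_thread_id_ = 0u;  // 0 == unowned; otherwise the owner's thread id
};
```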