author     andybons <andybons@chromium.org>      2015-08-24 14:37:09 -0700
committer  Commit bot <commit-bot@chromium.org>  2015-08-24 21:39:36 +0000
commit     3322f7611ba1444e553b2cce4de3a1a32ad46e72 (patch)
tree       dfb6bbea413da0581b8d085b184a5e6ceea5af3e /docs
parent     5d58c9eb2baa203be1b84ac88cde82c59d72f143 (diff)
Per https://groups.google.com/a/chromium.org/forum/#!topic/chromium-dev/irLAQ8f8uGk

Initial migration of wiki content over to src/docs

There will be a follow-up CL to ensure docs are following chromium’s style
guide, links are fixed, etc. The file auditing was becoming too much for a
single change and per Nico’s suggestion, it seems to be better to do:

+ Bulk import with initial prune.
+ Follow-up CLs to clean up the documentation.

So that each CL has its own purpose.

BUG=none

Review URL: https://codereview.chromium.org/1309473002

Cr-Commit-Position: refs/heads/master@{#345186}
Diffstat (limited to 'docs')
-rw-r--r--  docs/OWNERS | 3
-rw-r--r--  docs/android_test_instructions.md | 262
-rw-r--r--  docs/angle_in_chromium.md | 71
-rw-r--r--  docs/bitmap_pipeline.md | 61
-rw-r--r--  docs/browser_view_resizer.md | 135
-rw-r--r--  docs/ccache_mac.md | 99
-rw-r--r--  docs/chrome_settings.md | 107
-rw-r--r--  docs/chromium_browser_vs_google_chrome.md | 18
-rw-r--r--  docs/chromoting_android_hacking.md | 124
-rw-r--r--  docs/chromoting_build_instructions.md | 82
-rw-r--r--  docs/clang.md | 99
-rw-r--r--  docs/clang_format.md | 32
-rw-r--r--  docs/clang_static_analyzer.md | 56
-rw-r--r--  docs/clang_tool_refactoring.md | 52
-rw-r--r--  docs/closure_compilation.md | 249
-rw-r--r--  docs/cocoa_tips_and_tricks.md | 69
-rw-r--r--  docs/code_coverage.md | 28
-rw-r--r--  docs/common_build_tasks.md | 130
-rw-r--r--  docs/cr_user_manual.md | 148
-rw-r--r--  docs/cygwin_dll_remapping_failure.md | 55
-rw-r--r--  docs/documentation_best_practices.md | 115
-rw-r--r--  docs/emacs.md | 311
-rw-r--r--  docs/erc_irc.md | 92
-rw-r--r--  docs/git_cookbook.md | 194
-rw-r--r--  docs/git_tips.md | 86
-rw-r--r--  docs/gn_check.md | 85
-rw-r--r--  docs/graphical_debugging_aid_chromium_views.md | 51
-rw-r--r--  docs/gtk_vs_views_gtk.md | 27
-rw-r--r--  docs/how_to_extend_layout_test_framework.md | 125
-rw-r--r--  docs/include_what_you_use.md | 82
-rw-r--r--  docs/installation_at_vmware.md | 21
-rw-r--r--  docs/installazione_su_vmware.md | 20
-rw-r--r--  docs/ipc_fuzzer.md | 52
-rw-r--r--  docs/kiosk_mode.md | 85
-rw-r--r--  docs/layout_tests_linux.md | 90
-rw-r--r--  docs/linux64_bit_issues.md | 67
-rw-r--r--  docs/linux_build_instructions.md | 164
-rw-r--r--  docs/linux_build_instructions_prerequisites.md | 116
-rw-r--r--  docs/linux_building_debug_gtk.md | 110
-rw-r--r--  docs/linux_cert_management.md | 64
-rw-r--r--  docs/linux_chromium_arm.md | 123
-rw-r--r--  docs/linux_chromium_packages.md | 36
-rw-r--r--  docs/linux_crash_dumping.md | 66
-rw-r--r--  docs/linux_debugging.md | 385
-rw-r--r--  docs/linux_debugging_gtk.md | 51
-rw-r--r--  docs/linux_debugging_ssl.md | 124
-rw-r--r--  docs/linux_dev_build_as_default_browser.md | 20
-rw-r--r--  docs/linux_development.md | 33
-rw-r--r--  docs/linux_eclipse_dev.md | 252
-rw-r--r--  docs/linux_faster_builds.md | 109
-rw-r--r--  docs/linux_graphics_pipeline.md | 5
-rw-r--r--  docs/linux_gtk_theme_integration.md | 92
-rw-r--r--  docs/linux_hw_video_decode.md | 57
-rw-r--r--  docs/linux_minidump_to_core.md | 103
-rw-r--r--  docs/linux_open_suse_build_instructions.md | 77
-rw-r--r--  docs/linux_password_storage.md | 23
-rw-r--r--  docs/linux_pid_namespace_support.md | 42
-rw-r--r--  docs/linux_plugins.md | 27
-rw-r--r--  docs/linux_printing.md | 74
-rw-r--r--  docs/linux_profiling.md | 156
-rw-r--r--  docs/linux_proxy_config.md | 11
-rw-r--r--  docs/linux_sandbox_ipc.md | 30
-rw-r--r--  docs/linux_sandboxing.md | 97
-rw-r--r--  docs/linux_suid_sandbox.md | 63
-rw-r--r--  docs/linux_suid_sandbox_development.md | 61
-rw-r--r--  docs/linux_zygote.md | 15
-rw-r--r--  docs/mac_build_instructions.md | 143
-rw-r--r--  docs/mandriva_msttcorefonts.md | 30
-rw-r--r--  docs/mojo_in_chromium.md | 719
-rw-r--r--  docs/ninja_build.md | 79
-rw-r--r--  docs/piranha_plant.md | 54
-rw-r--r--  docs/profiling_content_shell_on_android.md | 149
-rw-r--r--  docs/proxy_auto_config.md | 27
-rw-r--r--  docs/retrieving_code_analysis_warnings.md | 40
-rw-r--r--  docs/script_preprocessor.md | 64
-rw-r--r--  docs/seccomp_sandbox_crash_dumping.md | 24
-rw-r--r--  docs/shift_based_development.md | 152
-rw-r--r--  docs/spelling_panel_planning_doc.md | 30
-rw-r--r--  docs/system_hardening_features.md | 107
-rw-r--r--  docs/test_descriptions.md | 58
-rw-r--r--  docs/theme_creation_guide.md | 353
-rw-r--r--  docs/tpm_quick_ref.md | 32
-rw-r--r--  docs/updating_clang.md | 11
-rw-r--r--  docs/updating_clang_format_binaries.md | 94
-rw-r--r--  docs/use_find_bugs_for_android.md | 32
-rw-r--r--  docs/useful_urls.md | 47
-rw-r--r--  docs/user_handle_mapping.md | 106
-rw-r--r--  docs/using_a_linux_chroot.md | 62
-rw-r--r--  docs/using_build_runner.md | 65
-rw-r--r--  docs/vanilla_msysgit_workflow.md | 68
-rw-r--r--  docs/windows_incremental_linking.md | 5
-rw-r--r--  docs/windows_precompiled_headers.md | 50
-rw-r--r--  docs/windows_split_dll.md | 35
-rw-r--r--  docs/working_remotely_with_android.md | 92
-rw-r--r--  docs/writing_clang_plugins.md | 109
95 files changed, 8806 insertions, 0 deletions
diff --git a/docs/OWNERS b/docs/OWNERS
new file mode 100644
index 0000000..4256a6b
--- /dev/null
+++ b/docs/OWNERS
@@ -0,0 +1,3 @@
+jparent@chromium.org
+nodir@chromium.org
+andybons@chromium.org
diff --git a/docs/android_test_instructions.md b/docs/android_test_instructions.md
new file mode 100644
index 0000000..47959e5
--- /dev/null
+++ b/docs/android_test_instructions.md
@@ -0,0 +1,262 @@
+# Android Test Instructions
+
+## Device Setup
+
+Tests are runnable on physical devices or emulators. See the instructions
+below for setting up either a physical device or an emulator.
+
+[TOC]
+
+## Physical Device Setup
+
+### ADB Debugging
+
+In order to allow ADB to connect to the device, you must enable USB
+debugging:
+ * Before Android 4.1 (Jelly Bean):
+   * Go to "System Settings"
+   * Go to "Developer options"
+   * Check "USB debugging".
+   * Un-check "Verify apps over USB".
+ * On Jelly Bean, developer options are hidden by default. To unhide them:
+   * Go to "About phone"
+   * Tap 10 times on "Build number"
+   * The "Developer options" menu will now be available.
+   * Check "USB debugging".
+   * Un-check "Verify apps over USB".
+
+### Screen
+
+You MUST ensure that the screen stays on while testing:
+`adb shell svc power stayon usb`
+Or do this manually on the device: Settings -> Developer options -> Stay Awake.
+
+If this option is greyed out, stay awake is probably disabled by policy. In that
+case, get another device or log in with a normal, unmanaged account (because the
+tests will break in exciting ways if stay awake is off).
+
+### Enable Asserts!
+
+`adb shell setprop debug.assert 1`
+
+### Disable Verify Apps
+
+You may see a dialog like
+[this one](http://www.samsungmobileusa.com/simulators/ATT_GalaxyMega/mobile/screens/06-02_12.jpg),
+which states, _Google may regularly check installed apps for potentially harmful
+behavior._ This can interfere with the test runner. To disable this dialog, run:
+`adb shell settings put global package_verifier_enable 0`
+
+## Emulator Setup
+
+### Option 1:
+
+Use an emulator (i.e. Android Virtual Device, AVD): Enabling Intel's
+Virtualization support provides the fastest, most reliable emulator
+configuration available (i.e. an x86 emulator with GPU acceleration and KVM
+support).
+
+1. Enable Intel Virtualization support in the BIOS.
+
+2. Set up your environment:
+
+ ```shell
+ . build/android/envsetup.sh
+ ```
+
+3. Install emulator deps:
+
+ ```shell
+ build/android/install_emulator_deps.py --api-level=19
+ ```
+
+ This script will download the Android SDK and place it in a directory called
+ android\_tools in the same parent directory as your chromium checkout. It
+ will also download the system-images for the emulators (i.e. arm and x86).
+ Note that this is a different SDK download than the Android SDK in the
+ chromium source checkout (i.e. src/third\_party/android\_emulator\_sdk).
+
+4. Run the avd.py script. To start up _num_ emulators, use `-n`. For non-x86
+ ABIs, use `--abi`.
+
+ ```shell
+ build/android/avd.py --api-level=19
+ ```
+
+ This script will attempt to use GPU emulation, so you must be running the
+ emulators in an environment with hardware rendering available. See
+ `avd.py --help` for more details.
+
+### Option 2:
+
+Alternatively, you can create and run your own emulator using the tools
+provided by the Android SDK. When doing so, be sure to enable GPU emulation in
+hardware settings, since Chromium requires it to render.
+
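+A rough sketch of what that looks like with the SDK tools bundled in the
+checkout (tool names and flags vary between SDK releases, and the AVD name is
+just an example):
+
+```shell
+# Create or edit AVDs with the SDK's AVD manager UI.
+third_party/android_tools/sdk/tools/android avd
+
+# Launch an AVD with GPU emulation enabled.
+third_party/android_tools/sdk/tools/emulator -avd my_avd -gpu on
+```
+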
+## Building Tests
+
+It may not be immediately obvious where your test code gets compiled to, so here
+are some general rules:
+
+* If your test code lives under /content, it will probably be built as part of
+ the content\_shell\_test\_apk.
+* If your test code lives under /chrome (or higher), it will probably be built
+ as part of the chrome\_shell\_test\_apk.
+* (Please fill in more details here if you know them.)
+
+ NB: We used to call the chrome\_shell\_test\_apk the
+ chromium\_shell\_test\_apk. There may still be references to this kicking
+ around, but wherever you see chromium\_shell\_test you should replace it with
+ chrome\_shell\_test.
+
+Once you know what to build, just build it as you normally would build
+anything else, e.g.: `ninja -C out/Release chrome_shell_test_apk`
+
+## Running Tests
+
+All functional tests are run using `build/android/test_runner.py`.
+Tests are sharded across all attached devices. In order to run tests, call:
+`build/android/test_runner.py <test_type> [options]`
+For a list of valid test types, see `test_runner.py --help`. For
+help on a specific test type, run `test_runner.py <test_type> --help`.
+
+The commands used by the buildbots are printed in the logs. Look at
+http://build.chromium.org/ to duplicate the same test command as a particular
+builder.
+
+If you build in an output directory other than "out", you may have to tell
+test\_runner.py where to find it. For example, if you build your Android code
+in out\_android, run `export CHROMIUM_OUT_DIR=out_android` before invoking
+test\_runner.py.
+
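+For example, a minimal sketch combining this with the gtest suite shown later
+on this page (the `out_android` directory name is just an example):
+
+```shell
+export CHROMIUM_OUT_DIR=out_android
+build/android/test_runner.py gtest -s content_unittests --release
+```
+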
+## INSTALL\_FAILED\_CONTAINER\_ERROR or INSTALL\_FAILED\_INSUFFICIENT\_STORAGE
+
+If you see this error when test\_runner.py is attempting to deploy the test
+binaries to the AVD emulator, you may need to resize your userdata partition
+with the following commands:
+
+```shell
+# Resize userdata partition to be 1G.
+resize2fs android_emulator_sdk/sdk/system-images/android-19/x86/userdata.img 1G
+
+# Set filesystem parameter to continue on errors; Android doesn't like some
+# things e2fsprogs does.
+tune2fs -e continue android_emulator_sdk/sdk/system-images/android-19/x86/userdata.img
+```
+
+## Symbolizing Crashes
+
+Crash stacks are logged and can be viewed using adb logcat. To symbolize the
+traces, pipe the output through
+`third_party/android_platform/development/scripts/stack`. If you build in an
+output directory other than "out", pass
+`--chrome-symbols-dir=out_directory/{Debug,Release}/lib` to the script as well.
+
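+For example, one way to symbolize a crash you just reproduced (a sketch
+assuming a Release build in the default "out" directory):
+
+```shell
+# Dump the current log and pipe it through the symbolizing script.
+adb logcat -d | third_party/android_platform/development/scripts/stack
+```
+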
+## Gtests
+
+```shell
+# Build a test suite
+ninja -C out/Release content_unittests_apk
+
+# Run a test suite
+build/android/test_runner.py gtest -s content_unittests --release -vvv
+
+# Run a subset of tests
+build/android/test_runner.py gtest -s content_unittests --release -vvv \
+--gtest-filter ByteStreamTest.*
+```
+
+## Instrumentation Tests
+
+In order to run instrumentation tests, you must leave your device screen ON and
+UNLOCKED. Otherwise, the test will timeout trying to launch an intent.
+Optionally you can disable screen lock under Settings -> Security -> Screen Lock
+-> None.
+
+Next, you need to build the app, build your tests, install the application APK,
+and then run your tests (which will install the test APK automatically).
+
+Examples:
+
+ContentShell tests:
+
+```shell
+# Build the code under test
+ninja -C out/Release content_shell_apk
+
+# Build the tests themselves
+ninja -C out/Release content_shell_test_apk
+
+# Install the code under test
+build/android/adb_install_apk.py out/Release/apks/ContentShell.apk
+
+# Run the test (will automagically install the test APK)
+build/android/test_runner.py instrumentation --test-apk=ContentShellTest \
+--isolate-file-path content/content_shell_test_apk.isolate --release -vv
+```
+
+ChromeShell tests:
+
+```shell
+# Build the code under test
+ninja -C out/Release chrome_shell_apk
+
+# Build the tests themselves
+ninja -C out/Release chrome_shell_test_apk
+
+# Install the code under test
+build/android/adb_install_apk.py out/Release/apks/ChromeShell.apk
+
+# Run the test (will automagically install the test APK)
+build/android/test_runner.py instrumentation --test-apk=ChromeShellTest \
+--isolate-file-path chrome/chrome_shell_test_apk.isolate --release -vv
+```
+
+AndroidWebView tests:
+
+```shell
+ninja -C out/Release android_webview_apk
+ninja -C out/Release android_webview_test_apk
+build/android/adb_install_apk.py out/Release/apks/AndroidWebView.apk
+build/android/test_runner.py instrumentation --test-apk=AndroidWebViewTest \
+--test_data webview:android_webview/test/data/device_files --release -vvv
+```
+
+Use adb\_install\_apk.py to install the app under test, then run the test
+command. In order to run a subset of tests, use -f to filter based on test
+class/method or -A/-E to filter using annotations.
+
+Filtering examples:
+
+```shell
+# Run a test suite
+build/android/test_runner.py instrumentation --test-apk=ContentShellTest
+
+# Run a specific test class
+build/android/test_runner.py instrumentation --test-apk=ContentShellTest -f \
+AddressDetectionTest
+
+# Run a specific test method
+build/android/test_runner.py instrumentation --test-apk=ContentShellTest -f \
+AddressDetectionTest#testAddressLimits
+
+# Run a subset of tests by size (Smoke, SmallTest, MediumTest, LargeTest,
+# EnormousTest)
+build/android/test_runner.py instrumentation --test-apk=ContentShellTest -A \
+Smoke
+
+# Run a subset of tests by annotation, such as filtering by Feature
+build/android/test_runner.py instrumentation --test-apk=ContentShellTest -A \
+Feature=Navigation
+```
+
+You might want to add wildcard stars `*` around each filter so it is treated
+as a pattern, e.g. `*AddressDetectionTest*`.
+
+## Running Blink Layout Tests
+
+See
+https://sites.google.com/a/chromium.org/dev/developers/testing/webkit-layout-tests
+
+## Running GPU tests
+
+(e.g. the "Android Debug (Nexus 7)" bot on the chromium.gpu waterfall)
+
+See http://www.chromium.org/developers/testing/gpu-testing for details. Use
+--browser=android-content-shell. Examine the stdio from the test invocation on
+the bots to see arguments to pass to src/content/test/gpu/run\_gpu\_test.py.
diff --git a/docs/angle_in_chromium.md b/docs/angle_in_chromium.md
new file mode 100644
index 0000000..a955ff0
--- /dev/null
+++ b/docs/angle_in_chromium.md
@@ -0,0 +1,71 @@
+# Hacking on ANGLE in Chromium
+
+In DEPS, comment out the part that looks like this.
+
+```
+# "src/third_party/angle":
+# Var("chromium_git") + "/angle/angle.git@" + Var("angle_revision"),
+```
+
+Delete or rename third\_party/angle.
+
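+One way to do that (the backup name here is just an example):
+
+```shell
+mv third_party/angle third_party/angle.bak
+```
+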
+(Optional) sync and make sure the third\_party/angle directory doesn't come
+back. It shouldn’t because it is no longer referenced from DEPS.
+
+```shell
+gclient sync -r CURRENT_REVISION
+```
+
+Clone the ANGLE git repository.
+
+```
+> git clone https://chromium.googlesource.com/angle/angle third_party/angle
+> gclient runhooks
+```
+
+To check that ANGLE builds (assuming you ran hooks with GYP\_GENERATORS=ninja)
+without building all of Chromium:
+
+```shell
+ninja -C out\Release libEGL.dll
+```
+
+Change files then commit locally.
+
+Upload to gerrit for review. You will need to have installed the git hook as
+described in the "Getting started with Gerrit for ANGLE" section of the
+ContributingCode doc before committing them locally.
+
+```shell
+git cl upload
+```
+
+As with subversion and rietveld: visit the upload link for the review site,
+check the diff and the commit message then add reviewer(s) and publish.
+
+Land your changes to the upstream repository from the gerrit web interface.
+
+If there are upstream changes, you may need to rebase your patches and reupload
+them.
+
+```shell
+git pull
+git cl upload
+```
+
+# Rolling ANGLE into Chrome
+
+To roll DEPS, make sure the ANGLE entry is not commented out and update the
+hash associated with "angle\_revision". (Your hash will be different from the
+one below.)
+
+```
+ "angle_revision": "0ee126c670edae8dd1822980047450a9a530c032",
+```
+
+Then sync.
+
+```shell
+gclient sync
+```
+
+Your changes should now be in third\_party/angle.
diff --git a/docs/bitmap_pipeline.md b/docs/bitmap_pipeline.md
new file mode 100644
index 0000000..c12623c
--- /dev/null
+++ b/docs/bitmap_pipeline.md
@@ -0,0 +1,61 @@
+# Bitmap Pipeline
+
+This page details how bitmaps are moved from the renderer to the screen.
+
+The renderer can request two different operations from the browser:
+* PaintRect: a bitmap to be painted at a given location on the screen
+* Scroll: a horizontal or vertical scroll of the screen, and a bitmap to be painted
+
+Across all three platforms, shared memory is used to transport the bitmap from
+the renderer to the browser. On Windows, a shared section is used. On Linux,
+it's SysV shared memory and on the Mac we use POSIX shared memory.
+
+Windows and Linux create shared memory in the renderer process. On Mac, since
+the renderer is sandboxed, it cannot create shared memory segments and uses a
+synchronous IPC to the browser to create them (ViewHostMsg\_AllocTransportDIB).
+These shared memory segments are called TransportDIBs (device independent
+bitmaps) in the code.
+
+Transport DIBs are allocated on demand by the render\_process and cached
+therein, in a two entry cache. The IPC messages to the browser contain a
+TransportDIB::Id which names a transport DIB. In the case of Mac, since the
+browser created them in the first place, it keeps a map of all allocated
+transport DIBs in the RenderProcessHost. The ids on the wire are then the inode
+numbers of the shared memory segments.
+
+On Windows, the Id is the HANDLE value from the renderer process. On Linux the
+id is the SysV key. Thus, on both Windows and Linux, the id is sufficient to map
+the transport DIB, while on Mac it is not. This is why, on Mac, the browser
+keeps handles to all the possible transport DIBs.
+
+Each RenderProcessHost keeps a small cache of recently used transport DIBs. This
+means that, when many paint operations are performed in succession, the same
+shared memory should be reused (as long as it's large enough). Also, this shared
+memory should remain mapped in both the renderer and browser process, reducing
+the amount of VM churn.
+
+The transport DIB caches in both the renderer and browser are flushed after some
+period of inactivity, currently five seconds.
+
+### Backing stores
+
+Backing stores are browser side copies of the current RenderView bitmap. The
+renderer sends paints to the browser to update small portions of the backing
+store but, for performance reasons, when we want to repaint the whole thing
+(i.e. because we switched tabs) we don't want to go to the renderer to redraw it
+all.
+
+On Windows and Mac, the backing store is kept in heap memory in the browser. On
+Windows, we take advantage of the fact that we can use Win32 calls to scroll
+both the window and the backing store. This is faster than scrolling ourselves
+and redrawing everything to the window.
+
+On Mac, the backing store is a Skia bitmap and we do the scrolling ourselves.
+
+On Linux, the backing store is kept on the X server. It's a large X pixmap and
+we handle exposes by directing the X server to copy from this pixmap. This means
+that we can repaint the window without sending any bitmaps to the X server. It
+also means that we can perform optimised scrolling by directing the X server to
+scroll the window and pixmap for us.
+
+Having backing stores on the X server is a major win in the case of remote X.
diff --git a/docs/browser_view_resizer.md b/docs/browser_view_resizer.md
new file mode 100644
index 0000000..b30d20b
--- /dev/null
+++ b/docs/browser_view_resizer.md
@@ -0,0 +1,135 @@
+# Browser View Resizer
+
+This design addresses bug
+[458](http://code.google.com/p/chromium/issues/detail?id=458), which
+identifies that it is hard to hit the thin window frame corner to resize the
+window. It would be better to have a resize hit area (called a widget from now
+on) in the corner, as we currently have for edit boxes, for example.
+
+[TOC]
+
+## Background
+
+This is specific to the Windows OS. On the Mac, Cocoa automatically adds a
+resize widget (Not sure about Linux, we should double check). On Windows, those
+resize widgets are at the extreme right of a status bar. For example, if you
+remove the status bar from a Windows Explorer window, you lose the resize
+widget. But since Chrome never ever has a status bar and simply takes over the
+bottom of the window for specific tasks (like the download shelf for example),
+we need to find a creative way of giving access to a resize widget.
+
+The bottom corners where we would like to add the resize widget are currently
+controlled by the browser view, which can have either the tab contents view or
+other dynamic views (like the download shelf view) displayed in this area.
+
+## Requirements
+
+Since there is no status bar to simply fix a resize widget to, we must
+dynamically create a widget that can be laid either on the tab contents view or
+on other views that might temporarily take over the bottom part of the browser
+view.
+
+When no dynamic view is taking over the bottom of the browser view, the resize
+widget can sit in the bottom right corner of the tab contents view, over the tab
+contents view.
+
+![Resize Corner](http://lh6.ggpht.com/_2OD0ww7UZAs/SUAaNi6TWYI/AAAAAAAAGmI/89jCYQ1Cxsw/ResizeCorner-2.png)
+
+The resize widget must have the same width and height as
+the scroll bars so that it can fit in the corner currently left empty when both
+scroll bars are visible. If only one scroll bar is visible (either the
+horizontal or the vertical one), that scroll bar must still leave room for the
+resize widget to fit there (as it currently leave room for the empty corner when
+both scroll bars are visible), yet, only when the resize widget is laid on top
+of the tab contents view, not when a dynamic shelf is added at the bottom of the
+browser view.
+
+![Resize Corner](http://lh6.ggpht.com/_2OD0ww7UZAs/SUAaNjqr_iI/AAAAAAAAGmA/56hzjdnkVRI/ResizeCorner-1.png)
+![Resize Corner](http://lh3.ggpht.com/_2OD0ww7UZAs/SUAaN_wDEUI/AAAAAAAAGmQ/7B4CTZTXOmk/ResizeCorner-3.png)
+![Resize Corner](http://lh6.ggpht.com/_2OD0ww7UZAs/SUAaN7yme9I/AAAAAAAAGmY/EaniiAbwi-Q/ResizeCorner-4.png)
+
+If another view (e.g., again, the download shelf) is added at the bottom of the
+browser view, below the tab contents view, and covers the bottom corners, then
+the resize widget must be laid on top of this other child view. Of course, all
+child views that can potentially be added at the bottom of the browser view,
+must be designed in a way that leaves enough room in the bottom corners for the
+resize widget.
+
+![Resize Corner](http://lh3.ggpht.com/_2OD0ww7UZAs/SUAaN17TIrI/AAAAAAAAGmg/6bljNQ_vZkI/ResizeCorner-5.png)
+![Resize Corner](http://lh4.ggpht.com/_2OD0ww7UZAs/SUAaWINHA6I/AAAAAAAAGmo/-VG5FGC8Xds/ResizeCorner-6.png)
+![Resize Corner](http://lh6.ggpht.com/_2OD0ww7UZAs/SUAaWDUpo0I/AAAAAAAAGmw/8USPzoMpgu0/ResizeCorner-7.png)
+
+Since the bottom corners might have different colors, based on the state and
+content of the browser view, the resize widget must have a transparent
+background.
+
+The resize widget is not animated itself. It might move with the animation of
+the view it is laid on top of (e.g., when the download shelf is being animated
+in), but we won't attempt to animate the resize widget itself (or fix it in the
+bottom right corner of the browser view while the other views get animated).
+
+## Design
+
+Unfortunately, we must deal with the two different cases (with or without a
+dynamic bottom view) in two different and distinct ways.
+
+### Over a Dynamic View
+
+For the cases where there is a dynamic view at the bottom of the browser view, a
+new view class (named `BrowserResizerView`) inheriting from
+[views::View](http://src.chromium.org/svn/trunk/src/chrome/views/view.h) is used
+to display the resize widget. It is set as a child of the dynamic view laid at
+the bottom of the browser view. The Browser view takes care of properly setting
+the bounds of the resize widget view, based on the language direction.
+
+Also, it is easier and more efficient to let the browser view handle the mouse
+interactions to resize the browser. We can let Windows take care of properly
+resizing the view by returning the HTBOTTOMLEFT or HTBOTTOMRIGHT flags from the
+NCClientHitTest windows message handler when they occur over the resize widget.
+The browser view also takes care of changing the mouse cursor to the appropriate
+resizing arrows when the mouse hovers over the resize widget area.
+
+### Without a Dynamic View
+
+To make sure that the scroll bars (handled by `WebKit`) are not drawn on top of
+the resizer widget (or vice versa), we need to properly implement the callback
+specifying the rectangle covered by the resizer. This callback is implemented
+on the `RenderWidget` class, which can delegate to a derived class via a new
+virtual method that returns an empty rect on the base class. Via a series of
+delegate interface calls, we eventually get back to the browser view, which can
+return the size and position of the resize widget, but only if it is laid out on
+top of the tabs view; it returns an empty rect when there is a dynamic view.
+
+To handle the drawing of the resize widget over the render widget, we need to
+add code to the Windows specific version of the render widget host view which
+receives the bitmap rendered by WebKit so it can layer the transparent bitmap
+used for the resize widget. That same render widget host view must also handle
+the mouse interaction and use the same trick as the browser view to let Windows
+take care of resizing the whole frame. It must also take care of changing the
+mouse cursor to the appropriate resizing arrows when the mouse hovers over the
+resize widget area.
+
+## Implementation
+
+You can find the changes made to make this work in patch
+[16488](http://codereview.chromium.org/16488).
+
+## Alternatives Considered
+
+We could have tried to reuse the code that currently takes care of resizing the
+edit boxes within WebKit, but this code is wired to the overflow style of HTML
+elements and would have been hard to rewire in an elegant way to be used in a
+higher level object like the browser view. Unless we missed something.
+
+We might also decide to go with the easier solution of only showing the resize
+corner within the tab contents view. In that case, it would still be recommended
+that the resize widget would not appear when dynamic views are taking over the
+bottom portion of the browser view, since it would look weird to have a resize
+corner widget that is not in the real... corner... of the browser view ;-)
+
+We may decide that we don't want to see the resize widget bitmap hide some
+pixels from the tab contents (or dynamic view) yet we would still have the
+resizing functionality via the mouse interaction and also get visual feedback
+with the mouse cursor changes while we hover over the resize widget area.
+
+We may do more research to find a way to solve this problem in a single place as
+opposed to the current dual solution, but none has been found so far.
diff --git a/docs/ccache_mac.md b/docs/ccache_mac.md
new file mode 100644
index 0000000..1c22529
--- /dev/null
+++ b/docs/ccache_mac.md
@@ -0,0 +1,99 @@
+# Using CCache on Mac
+
+[ccache](http://ccache.samba.org/) is a compiler cache. It speeds up
+recompilation of C/C++ code by caching previous compilations and detecting when
+the same compilation is being done again. This often results in a significant
+speedup in common compilations, especially when switching between branches. This
+page is about using ccache on Mac with clang and the NinjaBuild system.
+
+[TOC]
+
+## Installation
+
+In order to use [ccache](http://ccache.samba.org) with
+[clang](http://code.google.com/p/chromium/wiki/Clang), you need to use the
+current [git HEAD](http://ccache.samba.org/repo.html), since the most recent
+version (3.1.9) doesn't contain the
+[patch needed](https://github.com/jrosdahl/ccache/pull/4) for using
+[the chromium style plugin](clang.md#Using_plugins).
+
+To install ccache with [homebrew](http://mxcl.github.com/homebrew/), use the
+following command:
+
+```shell
+brew install --HEAD ccache
+```
+
+You can also download and install yourself (with GNU automake, autoconf and
+libtool installed):
+
+```shell
+git clone git://git.samba.org/ccache.git
+cd ccache
+./autogen.sh
+./configure && make && make install
+```
+
+Make sure ccache can be found in your `$PATH`.
+
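+A quick way to verify that (a sketch assuming a POSIX shell):
+
+```shell
+which ccache
+ccache --version
+```
+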
+You can also just use the current released version of ccache (3.1.8 or 3.1.9)
+and disable the chromium style plugin with `clang_use_chrome_plugins=0` in your
+`GYP_DEFINES`.
+
+## Use with GYP
+
+We have to set two environment variables (`CC` and `CXX`) before calling
+`gclient runhooks` or `build/gyp_chromium`, assuming you are currently in the
+`chromium/src` directory:
+
+```shell
+export CC="ccache clang -Qunused-arguments"
+export CXX="ccache clang++ -Qunused-arguments"
+```
+
+Then run:
+
+```shell
+GYP_GENERATORS="ninja" ./build/gyp_chromium
+```
+
+or
+
+```shell
+GYP_GENERATORS="ninja" gclient runhooks
+```
+
+(Instead of relying on the clang/clang++ in your `$PATH` for building chromium,
+you can also use absolute paths here.)
+
+## Use with GN
+
+You just need to set the use\_ccache variable. Do so as follows:
+
+```shell
+gn gen out-gn --args='use_ccache=true'
+```
+
+## Build
+
+In the build phase, the following environment variables must be set (assuming
+you are in `chromium/src`):
+
+```shell
+export CCACHE_CPP2=yes
+export CCACHE_SLOPPINESS=time_macros
+export PATH=`pwd`/third_party/llvm-build/Release+Asserts/bin:$PATH
+```
+
+Then you can just run ninja as normal:
+
+```shell
+ninja -C out/Release chrome
+```
+
+## Optional Steps
+
+* Configure ccache to use a different cache size with `ccache -M <max size>`.
+  You can see a list of configuration options by calling ccache alone.
+* The default ccache directory is `~/.ccache`. You might want to symlink it to
+  another directory (for example, when using FileVault for your home
+  directory).
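+
+As a concrete sketch of both points (the cache size and target path are only
+examples):
+
+```shell
+# Cap the cache at 20 GB.
+ccache -M 20G
+
+# Keep the cache outside a FileVault-protected home directory.
+mv ~/.ccache /Volumes/Data/ccache
+ln -s /Volumes/Data/ccache ~/.ccache
+```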
diff --git a/docs/chrome_settings.md b/docs/chrome_settings.md
new file mode 100644
index 0000000..fbd44f4
--- /dev/null
+++ b/docs/chrome_settings.md
@@ -0,0 +1,107 @@
+# What is chrome://settings?
+
+Chrome (version 10 and above) uses WebUI settings by default for all platforms.
+Access it via the wrench menu ("Preferences" on Mac and Linux; "Options" on
+Windows and ChromeOS), or by typing chrome://settings into the address bar.
+
+One advantage of chrome://settings over platform-native dialogs is that it is
+shared by all platforms; therefore, it is easier to add new options UI and to
+keep all platforms in sync.
+
+Note that at the time of this writing, DOMUI is being renamed to WebUI. The two
+terms will be used interchangeably herein.
+
+## Moving parts
+
+### String resources
+
+Strings live in `chrome/app/generated_resources.grd`. There are several rules
+governing the format of strings:
+
+* the **casing of button text** depends on the platform. If your string will
+ be displayed on a button, you need to add it twice, in sentence case and
+ title case. Follow examples inside `<if expr="pp_ifdef('use_titlecase')">`
+ blocks. Do this even if your string is a single word because it may not be a
+ single word in another locale.
+* strings that are associated with form controls (buttons, checkboxes,
+ dropdowns, etc.) should NOT have **trailing punctuation**. Standalone
+ strings, such as sectional descriptive text, should have trailing
+ punctuation.
+* strings may be different between Google Chrome and chromium. If they differ
+ only in **product name**, put them in `generated_resources.grd` and use
+ product name placeholders; if they differ more substantially, use
+ `chromium_strings.grd` and `google_chrome_strings.grd`.
+
+### WebUI Handlers
+
+The C++ WebUI handler classes for chrome://settings live in
+`chrome/browser/ui/webui/options/`. These objects both handle messages from and
+dispatch messages to the page. Each handler is a logical grouping of related
+functionality. Hence `ContentSettingsHandler` is both the delegate and
+controller of the content settings subpage.
+
+A handler sends a message to the page via `dom_ui_->CallJavascriptFunction()`
+and registers for messages from the page via
+`dom_ui_->RegisterMessageCallback()`.
+
+### HTML/CSS/JS
+
+The HTML, CSS, and JS resources live in `chrome/browser/resources/options`. They
+are compiled into one resource at compile time by grit, via `<link>`, `<src>`
+and `<include>` tags in `options.html`. There is no need to add new files to any
+`.gyp` file.
+
+Some things that are good to know:
+
+* JS objects are bound to HTML nodes via `decorate()`.
+* Javascript calls into C++ via `chrome.send`.
+* Some automatic preference handling is provided in `preferences.js` and
+  `pref_ui.js`.
+* Strings defined in a WebUI handler are available via
+  `localStrings.getString()` and friends.
+
+## Example: adding a simple pref
+
+Suppose you want to add a pref controlled by a **checkbox**.
+
+### 1. Getting UI approval
+
+Ask one of the UI leads where your option belongs. First point of contact should
+be Alex Ainslie <ainslie at chromium>.
+
+### 2. Adding Strings
+
+Add the string to `chrome/app/generated_resources.grd` near related strings. No
+trailing punctuation; sentence case.
+
+### 3. Changing WebUI Handler
+
+For simple prefs, the UI is kept in sync with the value automagically (see
+`CoreOptionsHandler`). Find the handler most closely relevant to your pref. The
+only action we need to take here is to make the page aware of the new string.
+Locate the method in the handler called `GetLocalizedStrings` and look at its
+body for examples of `SetString` usage. The first parameter is the javascript
+name for the string.
+
+### 4. HTML/CSS/JS
+
+An example of the form a checkbox should take in html:
+
+```html
+<label class="checkbox">
+ <input id="clear-cookies-on-exit"
+ pref="profile.clear_site_data_on_exit" type="checkbox">
+ <span i18n-content="clearCookiesOnExit"></span>
+</label>
+```
+
+Of note:
+
+* the `checkbox` class allows us to style all checkboxes the same via CSS
+* the class and ID are in dash-form
+* the i18n-content value is in camelCase
+* the pref attribute matches that which is defined in
+ `chrome/common/pref_names.cc` and allows the prefs framework to
+ automatically keep it in sync with the value in C++
+* the `i18n-content` value matches the string we defined in the WebUI handler.
+ The `textContent` of the `span` will automatically be set to the associated
+ text.
+
+In this example, no additional JS or CSS are needed.
diff --git a/docs/chromium_browser_vs_google_chrome.md b/docs/chromium_browser_vs_google_chrome.md
new file mode 100644
index 0000000..69804cb
--- /dev/null
+++ b/docs/chromium_browser_vs_google_chrome.md
@@ -0,0 +1,18 @@
+Chromium on Linux has two general flavors: You can either get [Google Chrome](http://www.google.com/chrome?platform=linux) or chromium-browser (see LinuxChromiumPackages). This page tries to describe the differences between the two.
+
+In short, Google Chrome is the Chromium open source project built, packaged, and distributed by Google. This table lists what Google adds to the Google Chrome builds **on Linux**.
+
+| | **Google Chrome** | **Chromium** | **Extra notes** |
+|:--|:------------------|:-------------|:----------------|
+| Logo | Colorful | Blue |
+| [Crash reporting](LinuxCrashDumping.md) | Yes, if you turn it on | None | Please include symbolized backtraces in bug reports if you don't have crash reporting |
+| User metrics | Yes, if you turn it on | No |
+| Video and audio tags | AAC, H.264, MP3, Opus, Theora, Vorbis, VP8, VP9, and WAV | Opus, Theora, Vorbis, VP8, VP9, and WAV by default | May vary by distro |
+| Adobe Flash | Sandboxed PPAPI (non-free) plugin included in release | Supports NPAPI (unsandboxed) plugins, including the one from Adobe in Chrome 34 and below |
+| Code | Tested by Chrome developers | May be modified by distributions | Please include distribution information if you report bugs |
+| Sandbox | Always on | Depending on the distribution (navigate to about:sandbox to confirm) | |
+| Package | Single deb/rpm | Depending on the distribution | |
+| Profile | Kept in `~/.config/google-chrome` | Kept in `~/.config/chromium` |
+| Cache | Kept in `~/.cache/google-chrome` | Kept in `~/.cache/chromium` |
+| Quality Assurance | New releases are tested before sending to users | Depending on the distribution | Distributions are encouraged to track stable channel releases: see http://googlechromereleases.blogspot.com/ , http://omahaproxy.appspot.com/ and http://gsdview.appspot.com/chromium-browser-official/ |
+| Google API keys | Added by Google | Depending on the distribution | See http://www.chromium.org/developers/how-tos/api-keys | \ No newline at end of file
diff --git a/docs/chromoting_android_hacking.md b/docs/chromoting_android_hacking.md
new file mode 100644
index 0000000..c76be28
--- /dev/null
+++ b/docs/chromoting_android_hacking.md
@@ -0,0 +1,124 @@
+# Introduction
+
+This guide, which is meant to accompany the compilation guide at ChromotingBuildInstructions, explains the process of viewing the logs and debugging the CRD Android client. I'll assume you've already built the APK as described in the aforementioned guide, that you're in the `src/` directory, and that your binary is at `out/Release/apks/Chromoting.apk`. Additionally, you should have installed the app on at least one (still) connected device.
+
+# Viewing logging output
+
+In order to access LogCat and view the app's logging output, we need to launch the Android Device Monitor. Run `third_party/android_tools/sdk/tools/monitor` and select the desired device under `Devices`. Using the app as normal will display log messages to the `LogCat` pane.
+
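+If you prefer a terminal to the Device Monitor UI, plain `adb logcat` works as
+well (the grep filter here is just an example):
+
+```shell
+adb logcat | grep -i chromoting
+```
+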
+# Attaching debuggers for Java code
+
+## Eclipse
+ 1. Go to http://developer.android.com/sdk/index.html and click "Download the SDK ADT Bundle for Linux"
+ 1. Configure eclipse
+ 1. Select General > Workspace from the tree on the left.
+ 1. Uncheck Refresh workspace on startup (if present)
+ 1. Uncheck Refresh using native hooks or polling (if present)
+ 1. Disable build before launching
+ 1. Select Run/Debug > Launching
+ 1. Uncheck Build (if required) before launching
+ 1. Create a project
+ 1. Select File > New > Project... from the main menu.
+ 1. Choose Java/Java Project
+ 1. Eclipse should have generated .project and perhaps a .classpath file in your <project root>/src/ directory.
+ 1. Replace/Add the .classpath file with the content below. Remember that the path field should be the location of the chromium source relative to the current directory of your project.
+```
+<?xml version="1.0" encoding="UTF-8"?>
+<classpath>
+<classpathentry kind="src" path="net/test/android/javatests/src"/>
+<classpathentry kind="src" path="net/android/java/src"/>
+<classpathentry kind="src" path="net/android/javatests/src"/>
+<classpathentry kind="src" path="base/test/android/java/src"/>
+<classpathentry kind="src" path="base/test/android/javatests/src"/>
+<classpathentry kind="src" path="base/android/jni_generator/java/src"/>
+<classpathentry kind="src" path="base/android/java/src"/>
+<classpathentry kind="src" path="base/android/javatests/src"/>
+<classpathentry kind="src" path="components/cronet/android/java/src"/>
+<classpathentry kind="src" path="components/cronet/android/sample/src"/>
+<classpathentry kind="src" path="components/cronet/android/sample/javatests/src"/>
+<classpathentry kind="src" path="components/autofill/core/browser/android/java/src"/>
+<classpathentry kind="src" path="components/web_contents_delegate_android/android/java/src"/>
+<classpathentry kind="src" path="components/dom_distiller/android/java/src"/>
+<classpathentry kind="src" path="components/navigation_interception/android/java/src"/>
+<classpathentry kind="src" path="ui/android/java/src"/>
+<classpathentry kind="src" path="media/base/android/java/src"/>
+<classpathentry kind="src" path="chrome/test/android/unit_tests_apk/src"/>
+<classpathentry kind="src" path="chrome/test/android/javatests/src"/>
+<classpathentry kind="src" path="chrome/test/chromedriver/test/webview_shell/java/src"/>
+<classpathentry kind="src" path="chrome/common/extensions/docs/examples/extensions/irc/servlet/src"/>
+<classpathentry kind="src" path="chrome/android/java/src"/>
+<classpathentry kind="src" path="chrome/android/uiautomator_tests/src"/>
+<classpathentry kind="src" path="chrome/android/shell/java/src"/>
+<classpathentry kind="src" path="chrome/android/shell/javatests/src"/>
+<classpathentry kind="src" path="chrome/android/javatests/src"/>
+<classpathentry kind="src" path="sync/test/android/javatests/src"/>
+<classpathentry kind="src" path="sync/android/java/src"/>
+<classpathentry kind="src" path="sync/android/javatests/src"/>
+<classpathentry kind="src" path="mojo/public/java/src"/>
+<classpathentry kind="src" path="mojo/android/system/src"/>
+<classpathentry kind="src" path="mojo/android/javatests/src"/>
+<classpathentry kind="src" path="mojo/shell/android/apk/src"/>
+<classpathentry kind="src" path="mojo/services/native_viewport/android/src"/>
+<classpathentry kind="src" path="testing/android/java/src"/>
+<classpathentry kind="src" path="printing/android/java/src"/>
+<classpathentry kind="src" path="tools/binary_size/java/src"/>
+<classpathentry kind="src" path="tools/android/memconsumer/java/src"/>
+<classpathentry kind="src" path="tools/android/findbugs_plugin/test/java/src"/>
+<classpathentry kind="src" path="tools/android/findbugs_plugin/src"/>
+<classpathentry kind="src" path="remoting/android/java/src"/>
+<classpathentry kind="src" path="remoting/android/apk/src"/>
+<classpathentry kind="src" path="remoting/android/javatests/src"/>
+<classpathentry kind="src" path="third_party/WebKit/Source/devtools/scripts/jsdoc-validator/src"/>
+<classpathentry kind="src" path="third_party/WebKit/Source/devtools/scripts/compiler-runner/src"/>
+<classpathentry kind="src" path="third_party/webrtc/voice_engine/test/android/android_test/src"/>
+<classpathentry kind="src" path="third_party/webrtc/modules/video_capture/android/java/src"/>
+<classpathentry kind="src" path="third_party/webrtc/modules/video_render/android/java/src"/>
+<classpathentry kind="src" path="third_party/webrtc/modules/audio_device/test/android/audio_device_android_test/src"/>
+<classpathentry kind="src" path="third_party/webrtc/modules/audio_device/android/java/src"/>
+<classpathentry kind="src" path="third_party/webrtc/examples/android/media_demo/src"/>
+<classpathentry kind="src" path="third_party/webrtc/examples/android/opensl_loopback/src"/>
+<classpathentry kind="src" path="third_party/webrtc/video_engine/test/auto_test/android/src"/>
+<classpathentry kind="src" path="third_party/libjingle/source/talk/app/webrtc/java/src"/>
+<classpathentry kind="src" path="third_party/libjingle/source/talk/app/webrtc/javatests/src"/>
+<classpathentry kind="src" path="third_party/libjingle/source/talk/examples/android/src"/>
+<classpathentry kind="src" path="android_webview/java/src"/>
+<classpathentry kind="src" path="android_webview/java/generated_src"/>
+<classpathentry kind="src" path="android_webview/test/shell/src"/>
+<classpathentry kind="src" path="android_webview/unittestjava/src"/>
+<classpathentry kind="src" path="android_webview/javatests/src"/>
+<classpathentry kind="src" path="content/public/test/android/javatests/src"/>
+<classpathentry kind="src" path="content/public/android/java/src"/>
+<classpathentry kind="src" path="content/public/android/javatests/src"/>
+<classpathentry kind="src" path="content/shell/android/browsertests_apk/src"/>
+<classpathentry kind="src" path="content/shell/android/java/src"/>
+<classpathentry kind="src" path="content/shell/android/shell_apk/src"/>
+<classpathentry kind="src" path="content/shell/android/javatests/src"/>
+<classpathentry kind="src" path="content/shell/android/linker_test_apk/src"/>
+<classpathentry kind="lib" path="third_party/android_tools/sdk/platforms/android-19/data/layoutlib.jar"/>
+<classpathentry kind="lib" path="third_party/android_tools/sdk/platforms/android-19/android.jar"/>
+<classpathentry kind="output" path="out/bin"/>
+</classpath>
+```
+ 1. Obtain the debug port
+ 1. Go to Window > Open Perspective > DDMS
+ 1. In order for the app org.chromium.chromoting to show up, you must build Debug instead of Retail
+ 1. Note down the port number, should be 8600 or 8700
+ 1. Setup a debug configuration
+ 1. Go to Window > Open Perspective > Debug
+ 1. Run > Debug > Configurations
+ 1. Select "Remote Java Application" and click "New"
+ 1. Put Host: localhost and Port: <the port from DDMS>
+ 1. Hit Debug
+ 1. Configure source path
+ 1. Right click on the Chromoting [Application](Remoting.md) and select Edit source Lookup Path
+ 1. Click "Add" and select File System Directory
+ 1. Select the location of your chromium checkout, e.g. <project root>/src/remoting/android
+ 1. Debugging
+ 1. To add a breakpoint, simply open the source file and hit Ctrl+Shift+B to toggle the breakpoint. Happy hacking.
+## Command line debugger
+
+With the Android Device Monitor open, look under `Devices`, expand the entry for the device on which you want to debug, and select the entry for `org.chromium.chromoting` (it must already be running). This forwards the JVM debugging connection to your local port 8700. In your shell, do `$ jdb -attach localhost:8700`.
+
+# Attaching GDB to debug native code
+
+The Chromium build system provides a convenience wrapper script that can be used to easily launch GDB. Run `$ build/android/adb_gdb --package-name=org.chromium.chromoting --activity=.Chromoting --start`. Note that if you have multiple devices connected, you must export `ANDROID_SERIAL` to select one; set it to the serial number of the desired device as output by `$ adb devices`. \ No newline at end of file
diff --git a/docs/chromoting_build_instructions.md b/docs/chromoting_build_instructions.md
new file mode 100644
index 0000000..9821123
--- /dev/null
+++ b/docs/chromoting_build_instructions.md
@@ -0,0 +1,82 @@
+# Introduction
+
+Chromoting, also known as [Chrome Remote Desktop](https://support.google.com/chrome/answer/1649523), allows one to remotely control a distant machine, all from within the Chromium browser. Its source code is located in the `remoting/` folder in the Chromium codebase. For the sake of brevity, we'll assume that you already have a pre-built copy of Chromium (or Chrome) installed on your development computer.
+
+# Obtain API keys
+
+Before you can compile the code, you must obtain an API key to allow it to access the federated Chromoting API.
+
+ 1. Join the chromium-dev list, which can be found at https://groups.google.com/a/chromium.org/forum/#!forum/chromium-dev. (This step is required in order to gain access to the Chromoting API.)
+ 1. Visit the Google APIs console at https://code.google.com/apis/console.
+ 1. Use the `API Project` dropdown to create a new project with a name of your choice.
+ 1. On the `Services` page, click the button to activate the `Chrome Remote Desktop API`, and accept the license agreements that appear.
+ 1. On the `API Access` page, select the option to create an `OAuth2 client ID`, and supply a product name of your choice.
+ 1. Choose the `Web application` type, then click more options.
+ 1. Replace the contents of `Authorized Redirect URLs` with: `https://chromoting-oauth.talkgadget.google.com/talkgadget/oauth/chrome-remote-desktop/dev`
+ 1. Clear the `Authorized JavaScript Origins` field and click the button to finish creating the client ID.
+ 1. Create a file `~/.gyp/include.gypi` according to the following format:
+
+```
+{
+ 'variables': {
+ 'google_api_key': '<API key, listed under Simple API Access in the console>',
+ 'google_default_client_id': '<Client ID, listed under Client ID for web applications>',
+ 'google_default_client_secret': '<Client secret, listed under Client ID for web applications>',
+ },
+}
+```
+
+(NB: Replace the text in angled braces according to the instructions between them.)
+
+# Obtain Chromium code
+
+If you've already checked out a copy of the browser's codebase, you can skip this section, although you'll still need to run `gclient runhooks` to ensure you build using the API keys you just generated.
+
+ 1. Install the build dependencies, which are listed at http://code.google.com/p/chromium/wiki/LinuxBuildInstructionsPrerequisites.
+ 1. Install the depot\_tools utilities, a process that is documented at http://dev.chromium.org/developers/how-tos/install-depot-tools.
+ 1. Download the Chromium source code by running: `$ fetch chromium --nosvn=True`
+
+# Build and install the Linux host service
+
+If you want to remote into a (Debian-based) GNU/Linux host, follow these steps to compile and install the host service on that system. As of the time of writing, you must compile from source because no official binary package is being distributed.
+
+ 1. Start in the `src/` directory that contains your checkout of the Chromium code.
+ 1. Build the Chromoting host binaries: `$ ninja -C out/Release remoting_me2me_host remoting_start_host remoting_native_messaging_host remoting_native_messaging_manifests`
+ 1. When the build finishes, move into the installer directory: `$ cd remoting/host/installer/`
+ 1. Generate a DEB package for your system's package manager: `$ linux/build-deb.sh`
+ 1. Install the package on your system: `$ sudo dpkg -i *.deb`
+ 1. The next time you use the Chromoting extension from your browser, it should detect the presence of the host service and offer you the option to `Enable remote connections`.
+ 1. If the Web app doesn't properly detect the host process, you may need to create a symlink to help the plugin find the native messaging host: `$ sudo ln -s /etc/opt/chrome /etc/chromium`
+
+(NB: If you compile the host service from source and expect to configure it using the browser extension, you must also compile the latter from source. Otherwise, the package signing keys will not match and the Web app's OAuth2 token will not be valid within the host process.)
+
+# Build and install the browser extension
+
+The Web app is the Chromoting system's main user interface, and allows you to connect to existing hosts as well as set up the host process on the machine you're currently sitting at. Once built, it must be installed into your browser as an extension.
+
+ 1. Start in the `src/` directory that contains your checkout of the Chromium code.
+ 1. Build the browser extension: `$ GOOGLE_API_KEY=<API key> GOOGLE_CLIENT_ID_REMOTING=<client ID> GOOGLE_CLIENT_SECRET_REMOTING=<client secret> ninja -C out/Release remoting_webapp` (Be sure to replace the substitutions denoted by angled braces.)
+ 1. Install the extension into your Chromium (or Chrome) browser:
+ 1. Visit the settings page [chrome://extensions].
+ 1. If it is unchecked, tick the `Developer mode` box.
+ 1. Click `Load unpacked extension...`, then navigate to `out/Release/remoting/remoting.webapp/` within your code checkout.
+ 1. Confirm the installation, open a new tab, and click the new app's Chromoting icon.
+ 1. Complete the account authorization step, signing into your Google account if you weren't already.
+
+# Build and install the Android client
+
+If you want to use your Android device to connect to your Chromoting hosts, follow these steps to install the client app on it. Note that this is in the very early stages of development. At the time of writing, you must compile from source because no official version is being distributed.
+
+ 1. Follow all the instructions under the `Getting the code` and `Install prerequisites` sections of: http://code.google.com/p/chromium/wiki/AndroidBuildInstructions
+ 1. Move into the `src/` directory that contains your checkout of the Chromium code.
+ 1. Build the Android app: `$ ninja -C out/Release remoting_apk`
+ 1. Connect your device and set up USB debugging:
+ 1. Plug your device in via USB.
+ 1. Open the Settings app and look for the `Developer options` choice.
+ 1. If there is no such entry, open `About phone`, tap `Build number` 7 times, and look again.
+ 1. Under `Developer options`, toggle the main switch to `ON` and enable `USB debugging`.
+ 1. On your machine and still in the `src/` directory, run: `$ build/android/adb_install_apk.py --apk=out/Release/apks/Chromoting.apk`
+ 1. If your Android device prompts you to accept the host's key, do so.
+ 1. The app should now be listed as Chromoting in your app drawer.
+
+See the ChromotingAndroidHacking guide for instructions on viewing the Android app's log and attaching a debugger. \ No newline at end of file
diff --git a/docs/clang.md b/docs/clang.md
new file mode 100644
index 0000000..4840fca
--- /dev/null
+++ b/docs/clang.md
@@ -0,0 +1,99 @@
+[Clang](http://clang.llvm.org/) is a compiler with many desirable features (outlined on their website).
+
+Chrome can be built with Clang. It is now the default compiler on Mac and Linux for building Chrome, and it is currently useful for its warning and error messages on Android and Windows.
+
+See [the open bugs](http://code.google.com/p/chromium/issues/list?q=label:clang).
+
+## Build instructions
+
+Get clang (happens automatically during `gclient runhooks` on Mac and Linux):
+```
+tools/clang/scripts/update.sh
+```
+
+(Only needs to be run once per checkout, and clang will be automatically updated by `gclient runhooks`.)
+
+### Reverting to gcc on linux
+
+We don't have bots that test this, but building with gcc4.8+ should still work on Linux. If your system gcc is new enough, use this to build with gcc if you don't want to build with clang:
+
+```
+GYP_DEFINES=clang=0 build/gyp_chromium
+```
+
+### Ninja
+
+Regenerate the build files (`clang=1` is on by default on Mac and Linux):
+
+If you use gyp:
+```
+GYP_DEFINES=clang=1 build/gyp_chromium
+```
+
+If you use gn, run `gn args` and add `is_clang = true` to your args.gn file.
+
+Build:
+```
+ninja -C out/Debug chrome
+```
+
+## Mailing List
+http://groups.google.com/a/chromium.org/group/clang/topics
+
+## Using plugins
+
+The [chromium style plugin](http://dev.chromium.org/developers/coding-style/chromium-style-checker-errors) is used by default when clang is used.
+
+If you're working on the plugin, you can build it locally like so:
+
+ 1. Run `./tools/clang/scripts/update.sh --force-local-build --without-android` to build the plugin.
+ 1. Build with clang like described above.
+
+To run [other plugins](WritingClangPlugins.md), add these to your `GYP_DEFINES`:
+
+ * `clang_load`: Absolute path to a dynamic library containing your plugin
+ * `clang_add_plugin`: tells clang to run a specific PluginASTAction
+
+So for example, you could use the plugin in this directory with:
+
+ * `GYP_DEFINES='clang=1 clang_load=/path/to/libFindBadConstructs.so clang_add_plugin=find-bad-constructs' gclient runhooks`
+
+## Using the clang static analyzer
+
+See ClangStaticAnalyzer.
+
+## Windows
+
+**Experimental!**
+
+clang can be used as compiler on Windows. Clang uses Visual Studio's linker and SDK, so you still need to have Visual Studio installed.
+
+Things should compile, and all tests should pass. You can check these bots for how things are currently looking: http://build.chromium.org/p/chromium.fyi/console?category=win%20clang
+
+```
+python tools\clang\scripts\update.py
+set GYP_DEFINES=clang=1
+python build\gyp_chromium
+# or, if you use gn, run `gn args` and add `is_clang = true` to your args.gn
+ninja -C out\Debug chrome
+```
+
+Current brokenness:
+
+ * Goma doesn't work.
+ * Debug info is very limited.
+ * To get colored diagnostics, you need to be running [ansicon](https://github.com/adoxa/ansicon/releases).
+
+## Using a custom clang binary
+
+If you want to try building Chromium with your own clang binary that you've already built, set `make_clang_dir` to the directory containing `bin/clang` (i.e. the directory you ran cmake in, or your `Release+Asserts` folder if you use the configure/make build). You also need to disable chromium's clang plugin by setting `clang_use_chrome_plugins=0`, as it likely won't load in your custom clang binary.
+
+Here's an example that also disables debug info and enables the component build (both not strictly necessary, but they will speed up your build):
+
+```
+GYP_DEFINES="clang=1 fastbuild=1 component=shared_library clang_use_chrome_plugins=0 make_clang_dir=$HOME/src/llvm-build" build/gyp_chromium
+```
+
+You can then run `head out/Release/build.ninja` and check that the first two lines set `cc` and `cxx` to your clang binary. If things look good, run `ninja -C out/Release` to build.
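+
+For example (illustrative only; the exact layout of build.ninja and your paths will differ):
+
+```
+head -2 out/Release/build.ninja
+# Expect something along the lines of:
+#   cc = /path/to/llvm-build/bin/clang
+#   cxx = /path/to/llvm-build/bin/clang++
+```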
+
+If your clang revision is very different from the one currently used in chromium – check tools/clang/scripts/update.sh to find chromium's clang revision – you might have to tweak warning flags. Or you could set `werror=` in the line above to disable warnings as errors (but this only works on Linux). \ No newline at end of file
diff --git a/docs/clang_format.md b/docs/clang_format.md
new file mode 100644
index 0000000..5883a13
--- /dev/null
+++ b/docs/clang_format.md
@@ -0,0 +1,32 @@
+## Using clang-format on Chromium C++ Code
+
+### Easiest usage, from the command line
+
+To automatically format a pending patch according to <a href='http://www.chromium.org/developers/coding-style'>Chromium style</a>, from the command line, simply run:
+```
+git cl format
+```
+This should work on all platforms _(yes, even Windows)_ without any set up or configuration: the tool comes with your checkout. Like other `git-cl` commands, this operates on a diff relative to the upstream branch. Only the lines that you've already touched in your patch will be reformatted. You can commit your changes to your git branch and then run `git cl format`, after which `git diff` will show you what clang-format changed. Alternatively, you can run `git cl format` with your changes uncommitted, and then commit your now-formatted code.
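+
+A typical commit-then-format round trip might look like this (a sketch; commit messages and branch setup are up to you):
+
+```
+git commit -a -m "My feature"
+git cl format
+git diff                          # see what clang-format changed
+git commit -a --amend --no-edit   # fold the formatting into your commit
+```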
+
+### Editor integrations
+
+Many developers find it useful to integrate the clang-format tool with their editor of choice. As a convenience, the scripts for this are also available in your checkout of Chrome under <a href='https://code.google.com/p/chromium/codesearch#chromium/src/buildtools/clang_format/script/'><code>src/buildtools/clang_format/script/</code></a>.
+
+If you use an editor integration, you should try to make sure that you're using the version of clang-format that comes with your checkout. That way, you'll automatically get updates and be running a tool that formats consistently with other developers. The binary lives under `src/buildtools`, but it's also in your path indirectly via a `depot_tools` launcher script: <a href='https://code.google.com/p/chromium/codesearch#chromium/tools/depot_tools/clang-format'><code>clang-format</code></a> (<a href='https://code.google.com/p/chromium/codesearch#chromium/tools/depot_tools/clang-format.bat'><code>clang-format.bat</code></a> on Windows). Assuming that `depot_tools` is in your editor's `PATH` and the editor command runs from a working directory inside the Chromium checkout, the editor scripts (which anticipate clang-format on the path) should work.
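+
+To confirm which binary your editor integration will actually pick up, you can ask your shell (assuming `depot_tools` is on your `PATH` as described above):
+
+```
+which clang-format     # should resolve to the depot_tools launcher
+clang-format --version
+```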
+
+For further guidance on editor integration, see these specific pages:
+ * <a href='http://www.chromium.org/developers/sublime-text#TOC-Format-selection-or-area-around-cursor-using-clang-format'>Sublime Text</a>
+ * <a href='http://clang.llvm.org/docs/ClangFormat.html'>LLVM's guidelines for vim, emacs, and bbedit</a>
+ * For vim, `:so tools/vim/clang-format.vim` and then hit cmd-shift-i (Mac) or ctrl-shift-i (elsewhere) to indent the current line or current selection.
+
+### Are robots taking over my freedom to choose where newlines go?
+
+No. For the project as a whole, using clang-format is just one optional way to format your code. While it will produce style-guide conformant code, other formattings would also satisfy the style guide, and all are okay.
+
+Having said that, many clang-format converts have found that relying on a tool saves both them and their reviewers time. The saved time can then be used to discover functional defects in their patch, to address style/readability concerns whose resolution can't be automated, or to do something else that matters.
+
+In directories where most contributors have already adopted clang-format, and code is already consistent with what clang-format would produce, some teams intend to experiment with standardizing on clang-format. Where such local standards apply, they will be enforced by a PRESUBMIT.py check.
+
+### Reporting problems
+
+If clang-format is broken, or produces badly formatted code, please file a <a href='https://code.google.com/p/chromium/issues/entry?comment=clang-format%20produced%20code%20that%20(choose%20all%20that%20apply):%20%0A-%20Doesn%27t%20match%20Chromium%20style%0A-%20Doesn%27t%20match%20blink%20style%0A-%20Riles%20my%20finely%20honed%20stylistic%20dander%0A-%20No%20sane%20human%20would%20ever%20choose%0A%0AHere%27s%20the%20code%20before%20formatting:%0A%0A%0AHere%27s%20the%20code%20after%20formatting:%0A%0A%0AHere%27s%20how%20it%20ought%20to%20look:%0A%0A%0ACode%20review%20link%20for%20full%20files/context:&summary=clang-format%20quality%20problem&cc=thakis@chromium.org,%20nick@chromium.org&labels=Type-Bug,Build-Tools,OS-?'>Chromium bug using this link</a>. Assign it to thakis@chromium.org or nick@chromium.org who will route it upstream. \ No newline at end of file
diff --git a/docs/clang_static_analyzer.md b/docs/clang_static_analyzer.md
new file mode 100644
index 0000000..84179f8
--- /dev/null
+++ b/docs/clang_static_analyzer.md
@@ -0,0 +1,56 @@
+See the [official clang static analyzer page](http://clang-analyzer.llvm.org/) for background.
+
+We don't run this regularly (because the analyzer's [support for C++ isn't great yet](http://clang-analyzer.llvm.org/dev_cxx.html)), so everything on this page is likely broken. The last time I checked, the analyzer reported mostly uninteresting things. This assumes you're [building chromium with clang](clang.md).
+
+You need an llvm checkout to get `scan-build` and `scan-view`; the easiest way to get that is to run
+```
+tools/clang/scripts/update.sh --force-local-build --without-android
+```
+
+## With make
+
+To build base, if you use the make build:
+
+```
+builddir_name=out_analyze \
+PATH=$PWD/third_party/llvm-build/Release+Asserts/bin:$PATH \
+third_party/llvm/tools/clang/tools/scan-build/scan-build \
+ --keep-going --use-cc clang --use-c++ clang++ \
+ make -j8 base
+```
+
+(`builddir_name` is set to force a clobber build.)
+
+Once that's done, run `third_party/llvm/tools/clang/tools/scan-view/scan-view` to see the results; pass in the path that `scan-build` outputs.
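+
+For example, if scan-build wrote its report under a directory in /tmp (the exact directory name is printed at the end of the scan-build run and will differ):
+
+```
+third_party/llvm/tools/clang/tools/scan-view/scan-view /tmp/scan-build-2015-08-24-123456-1
+```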
+
+## With ninja
+
+scan-build does its stuff by mucking with $CC/$CXX, which ninja ignores. gyp does look at $CC/$CXX however, so you need to first run gyp\_chromium under scan-build:
+```
+time GYP_GENERATORS=ninja \
+GYP_DEFINES='component=shared_library clang_use_chrome_plugins=0 mac_strip_release=0 dcheck_always_on=1' \
+third_party/llvm/tools/clang/tools/scan-build/scan-build \
+ --use-analyzer $PWD/third_party/llvm-build/Release+Asserts/bin/clang \
+ build/gyp_chromium -Goutput_dir=out_analyze
+```
+You then need to run the build under scan-build too, to get a HTML report:
+```
+time third_party/llvm/tools/clang/tools/scan-build/scan-build \
+ --use-analyzer $PWD/third_party/llvm-build/Release+Asserts/bin/clang \
+ ninja -C out_analyze/Release/ base
+```
+Then run `scan-view` as described above.
+
+## Known False Positives
+
+ * http://llvm.org/bugs/show_bug.cgi?id=11425
+
+## Stuff found by the static analyzer
+
+ * http://code.google.com/p/skia/issues/detail?id=399
+ * http://code.google.com/p/skia/issues/detail?id=400
+ * http://codereview.chromium.org/8308008/
+ * http://codereview.chromium.org/8313008/
+ * http://codereview.chromium.org/8308009/
+ * http://codereview.chromium.org/10031018/
+ * https://codereview.chromium.org/12390058/ \ No newline at end of file
diff --git a/docs/clang_tool_refactoring.md b/docs/clang_tool_refactoring.md
new file mode 100644
index 0000000..2037810
--- /dev/null
+++ b/docs/clang_tool_refactoring.md
@@ -0,0 +1,52 @@
+# Caveats
+ * The current workflow requires git.
+ * This doesn't work on Windows... yet. I'm hoping to have a proof-of-concept working on Windows as well ~~in a month~~ several centuries from now.
+
+# Prerequisites
+Everything needed should be in a default Chromium checkout using gclient. third\_party/llvm-build/Release+Asserts/bin should be in your `$PATH`.
+
+# Writing the Tool
+An example clang tool is being implemented in https://codereview.chromium.org/12746010/. Other useful resources might be the [basic tutorial for Clang's AST matchers](http://clang.llvm.org/docs/LibASTMatchersTutorial.html) or the [AST matcher reference](http://clang.llvm.org/docs/LibASTMatchersReference.html).
+
+Build your tool by running the following command (requires cmake version 2.8.10 or later):
+```
+tools/clang/scripts/update.sh --force-local-build --without-android --with-chrome-tools <tools>
+```
+`<tools>` is a semicolon delimited list of subdirectories in `tools/clang` to build. The resulting binary will end up in `third_party/llvm-build/Release+Asserts/bin`. For example, to build the Chrome plugin and the empty\_string tool, run the following:
+```
+tools/clang/scripts/update.sh --force-local-build --without-android --with-chrome-tools "plugins;empty_string"
+```
+
+When writing AST matchers, the following can be helpful to see what clang thinks the AST is:
+```
+clang++ -cc1 -ast-dump foo.cc
+```
+
+# Running the tool
+First, you'll need to generate the compilation database with the following command:
+```
+cd $HOME/src/chrome/src
+ninja -C out/Debug -t compdb cc cxx objc objcxx > out/Debug/compile_commands.json
+```
+
+This will dump the command lines used to build the C/C++ modules in all of Chromium into the resulting file. Then run the following command to run your tool across all Chromium code:
+```
+# Make sure all chromium targets are built to avoid missing generated dependencies
+ninja -C out/Debug
+tools/clang/scripts/run_tool.py <toolname> <path/to/directory/with/compile_commands.json> <path 1> <path 2> ...
+```
+
+`<path 1>`, `<path 2>`, etc are optional arguments you use to filter the files that will be rewritten. For example, if you only want to run the `empty_string` tool on files in `chrome/browser/extensions` and `sync`, you'd do something like:
+```
+tools/clang/scripts/run_tool.py empty_string out/Debug chrome/browser/extensions sync
+```
+
+# Limitations
+Since the compile database is generated by ninja, files that aren't compiled on that platform won't be processed. As a result, if you want to apply a change across all Chromium platforms, you'll have to run the tool once on each platform.
+
+# Testing
+`test_tool.py` is the test harness for running tests. To use it, simply run:
+```
+test_tool.py <tool name>
+```
+Note that the name of the built tool and the subdirectory it lives in under `tools/clang` must match. The test harness finds all files that match the pattern `*-original.cc` in your tool's tests subdirectory, runs the tool across those files, and compares the output to the expected result stored in the corresponding `*-expected.cc`.
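+
+As a concrete sketch (the file names are illustrative, and this assumes the harness lives in `tools/clang/scripts` next to `run_tool.py`), a test for the empty_string tool would be laid out and run like this:
+
+```
+# Each *-original.cc input has a matching *-expected.cc golden file:
+#   tools/clang/empty_string/tests/some-case-original.cc
+#   tools/clang/empty_string/tests/some-case-expected.cc
+
+# Run the harness for that tool from the src root:
+tools/clang/scripts/test_tool.py empty_string
+```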
diff --git a/docs/closure_compilation.md b/docs/closure_compilation.md
new file mode 100644
index 0000000..2f78ae0
--- /dev/null
+++ b/docs/closure_compilation.md
@@ -0,0 +1,249 @@
+# I just need to fix the compile!
+
+To locally run closure compiler like the bots, do this:
+
+```
+cd $CHROMIUM_SRC
+# sudo apt-get install openjdk-7-jre # may be required
+GYP_GENERATORS=ninja tools/gyp/gyp --depth . third_party/closure_compiler/compiled_resources.gyp
+ninja -C out/Default
+```
+
+# Background
+
+In C++ and Java, compiling the code gives you _some_ level of protection against misusing variables based on their type information. JavaScript is loosely typed and therefore doesn't offer this safety. This makes writing JavaScript more error prone as it's _one more thing_ to mess up.
+
+Because having this safety is handy, Chrome now has a way to optionally typecheck your JavaScript and produce compiled output with [Closure Compiler](https://developers.google.com/closure/compiler/).
+
+See also: [the design doc](https://docs.google.com/a/chromium.org/document/d/1Ee9ggmp6U-lM-w9WmxN5cSLkK9B5YAq14939Woo-JY0/edit).
+
+# Assumptions
+
+A working Chrome checkout. See here: http://www.chromium.org/developers/how-tos/get-the-code
+
+# Typechecking Your JavaScript
+
+So you'd like to compile your JavaScript!
+
+Maybe you're working on a page that looks like this:
+
+```
+<script src="other_file.js"></script>
+<script src="my_project/my_file.js"></script>
+```
+
+Where `other_file.js` contains:
+
+```
+var wit = 100;
+
+// ... later on, sneakily ...
+
+wit += ' IQ'; // '100 IQ'
+```
+
+and `src/my_project/my_file.js` contains:
+
+```
+/** @type {number} */ var mensa = wit + 50;
+alert(mensa); // '100 IQ50' instead of 150
+```
+
+In order to check that our code acts as we'd expect, we can create a
+
+```
+my_project/compiled_resources.gyp
+```
+
+with the contents:
+
+```
+# Copyright 2015 The Chromium Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+{
+ 'targets': [
+ {
+ 'target_name': 'my_file', # file name without ".js"
+
+ 'variables': { # Only use if necessary (no need to specify empty lists).
+ 'depends': [
+ 'other_file.js', # or 'other_project/compiled_resources.gyp:target',
+ ],
+ 'externs': [
+ '<(CLOSURE_DIR)/externs/any_needed_externs.js' # e.g. chrome.send(), chrome.app.window, etc.
+ ],
+ },
+
+ 'includes': ['../third_party/closure_compiler/compile_js.gypi'],
+ },
+ ],
+}
+```
+
+You should get results like:
+
+```
+(ERROR) Error in: my_project/my_file.js
+## /my/home/chromium/src/my_project/my_file.js:1: ERROR - initializing variable
+## found : string
+## required: number
+## /** @type {number} */ var mensa = wit + 50;
+## ^
+```
+
+Yay! We can easily find our unexpected type errors and write less error-prone code!
+
+# Continuous Checking
+
+To compile your code on every commit, add a line to [third\_party/closure\_compiler/compiled\_resources.gyp](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/closure_compiler/compiled_resources.gyp&sq=package:chromium&type=cs) like this:
+
+```
+{
+ 'targets': [
+ {
+ 'target_name': 'compile_all_resources',
+ 'dependencies': [
+ # ... other projects ...
+      '../my_project/compiled_resources.gyp:*',
+ ],
+ }
+ ]
+}
+```
+
+and the [Closure compiler bot](http://build.chromium.org/p/chromium.fyi/builders/Closure%20Compilation%20Linux) will [re-]compile your code whenever relevant .js files change.
+
+# Using Compiled JavaScript
+
+Compiled JavaScript is output in src/out/<Debug|Release>/gen/closure/my\_project/my\_file.js along with a source map for use in debugging. In order to use the compiled JavaScript, we can create a
+
+```
+my_project/my_project_resources.gyp
+```
+
+with the contents:
+
+```
+# Copyright 2015 The Chromium Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+
+{
+ 'targets': [
+ {
+ # GN version: //my_project/resources
+ 'target_name': 'my_project_resources',
+ 'type': 'none',
+ 'variables': {
+ 'grit_out_dir': '<(SHARED_INTERMEDIATE_DIR)/my_project',
+ 'my_file_gen_js': '<(SHARED_INTERMEDIATE_DIR)/closure/my_project/my_file.js',
+ },
+ 'actions': [
+ {
+ # GN version: //my_project/resources:my_project_resources
+ 'action_name': 'generate_my_project_resources',
+ 'variables': {
+ 'grit_grd_file': 'resources/my_project_resources.grd',
+ 'grit_additional_defines': [
+ '-E', 'my_file_gen_js=<(my_file_gen_js)',
+ ],
+ },
+ 'includes': [ '../build/grit_action.gypi' ],
+ },
+ ],
+ 'includes': [ '../build/grit_target.gypi' ],
+ },
+ ],
+}
+```
+
+The variables can also be defined in an existing .gyp file if appropriate. The variables can then be used to create a
+
+```
+my_project/my_project_resources.grd
+```
+
+with the contents:
+
+```
+<?xml version="1.0" encoding="utf-8"?>
+<grit-part>
+ <include name="IDR_MY_FILE_GEN_JS" file="${my_file_gen_js}" use_base_dir="false" type="BINDATA" />
+</grit-part>
+```
+
+In your C++, the resource can be retrieved like this:
+```
+base::string16 my_script =
+ base::UTF8ToUTF16(
+ ResourceBundle::GetSharedInstance()
+ .GetRawDataResource(IDR_MY_FILE_GEN_JS)
+ .as_string());
+```
+
+# Debugging Compiled JavaScript
+
+Along with the compiled JavaScript, a source map is created: src/out/<Debug|Release>/gen/closure/my\_project/my\_file.js.map
+
+Chrome DevTools has built in support for working with source maps: [https://developer.chrome.com/devtools/docs/javascript-debugging#source-maps](https://developer.chrome.com/devtools/docs/javascript-debugging#source-maps)
+
+In order to use the source map, you must first manually edit the path to the 'sources' in the .js.map file that was generated. For example, if the source map looks like this:
+```
+{
+"version":3,
+"file":"/tmp/gen/test_script.js",
+"lineCount":1,
+"mappings":"A,aAAA,IAAIA,OAASA,QAAQ,EAAG,CACtBC,KAAA,CAAM,OAAN,CADsB;",
+"sources":["/tmp/tmp70_QUi"],
+"names":["fooBar","alert"]
+}
+```
+
+sources should be changed to:
+```
+...
+"sources":["/tmp/test_script.js"],
+...
+```
+
+In your browser, the source map can be loaded through the Chrome DevTools context menu that appears when you right click in the compiled JavaScript source body. A dialog will pop up prompting you for the path to the source map file. Once the source map is loaded, the uncompiled version of the JavaScript will appear in the Sources panel on the left. You can set break points in the uncompiled version to help debug; behind the scenes Chrome will still be running the compiled version of the JavaScript.
+
+# Additional Arguments
+
+compile\_js.gypi accepts an optional script\_args variable, which passes additional arguments to compile.py, as well as an optional closure\_args variable, which passes additional arguments to the closure compiler. You may also override the disabled\_closure\_args for more strict compilation.
+
+For example, if you would like to specify multiple sources, strict compilation, and an output wrapper, you would create a
+
+```
+my_project/compiled_resources.gyp
+```
+
+with contents similar to this:
+```
+# Copyright 2015 The Chromium Authors. All rights reserved.
+# Use of this source code is governed by a BSD-style license that can be
+# found in the LICENSE file.
+{
+ 'targets' :[
+ {
+ 'target_name': 'my_file',
+ 'variables': {
+ 'source_files': [
+ 'my_file.js',
+ 'my_file2.js',
+ ],
+ 'script_args': ['--no-single-file'], # required to process multiple files at once
+ 'closure_args': [
+ 'output_wrapper=\'(function(){%output%})();\'',
+ 'jscomp_error=reportUnknownTypes', # the following three provide more strict compilation
+ 'jscomp_error=duplicate',
+ 'jscomp_error=misplacedTypeAnnotation',
+ ],
+ 'disabled_closure_args': [], # remove the disabled closure args for more strict compilation
+ },
+ 'includes': ['../third_party/closure_compiler/compile_js.gypi'],
+ },
+ ],
+}
+``` \ No newline at end of file
diff --git a/docs/cocoa_tips_and_tricks.md b/docs/cocoa_tips_and_tricks.md
new file mode 100644
index 0000000..d13ba68
--- /dev/null
+++ b/docs/cocoa_tips_and_tricks.md
@@ -0,0 +1,69 @@
+# Introduction
+
+A collection of idioms that we use when writing the Cocoa views and controllers for Chromium.
+
+## NSWindowController Initialization
+
+To make sure that |window| and |delegate| are wired up correctly in your xib, it's useful to add this to your window controller:
+
+```
+- (void)awakeFromNib {
+ DCHECK([self window]);
+ DCHECK_EQ(self, [[self window] delegate]);
+}
+```
+
+## NSWindowController Cleanup
+
+"You want the window controller to release itself it |-windowDidClose:|, because else it could die while its views are still around. if it (auto)releases itself in the callback, the window and its views are already gone and they won't send messages to the released controller."
+- Nico Weber (thakis@)
+
+See [Window Closing Behavior, ADC Reference](http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/Documents/Concepts/WindowClosingBehav.html#//apple_ref/doc/uid/20000027) for the full story.
+
+What this means in practice is:
+
+```
+@interface MyWindowController : NSWindowController<NSWindowDelegate> {
+ IBOutlet NSButton* closeButton_;
+}
+- (IBAction)closeButton:(id)sender;
+@end
+
+@implementation MyWindowController
+- (id)init {
+  if ((self = [super initWithWindowNibName:@"MyWindow"])) {
+ }
+ return self;
+}
+
+- (void)awakeFromNib {
+ // Check that we set the window and its delegate in the XIB.
+ DCHECK([self window]);
+ DCHECK_EQ(self, [[self window] delegate]);
+}
+
+// NSWindowDelegate notification.
+- (void)windowWillClose:(NSNotification*)notif {
+ [self autorelease];
+}
+
+// Action for a button that lets the user close the window.
+- (IBAction)closeButton:(id)sender {
+ // We clean ourselves up after the window has closed.
+ [self close];
+}
+@end
+```
+
+## Unit Tests
+
+There are four Chromium-specific GTest macros for writing ObjC++ test cases. These macros are EXPECT\_NSEQ, EXPECT\_NSNE, and ASSERT variants by the same names. These test `-[id<NSObject> isEqual:]` and will print the object's `-description` in GTest-style if the assertion fails. These macros are defined in //testing/gtest\_mac.h. Just include that file and you can start using them.
+
+This allows you to write this:
+```
+ EXPECT_NSEQ(@"foo", aString);
+```
+Instead of this:
+```
+ EXPECT_TRUE([aString isEqualToString:@"foo"]);
+``` \ No newline at end of file
diff --git a/docs/code_coverage.md b/docs/code_coverage.md
new file mode 100644
index 0000000..3cfd3c0
--- /dev/null
+++ b/docs/code_coverage.md
@@ -0,0 +1,28 @@
+# Categories of coverage
+
+ * <strong><font color='green'>executed</font></strong> - this line of code was hit during execution
+ * <strong><font color='orange'>instrumented</font></strong> - this line of code was part of the compilation unit, but not executed
+ * <strong><font color='red'>missing</font></strong> - in a source file, but not compiled.
+ * ignored - not an executable line, or a line we don't care about
+
+Coverage is calculated as `exe / (inst + miss)`. In general, lines that are in `miss` should be ignored, but our exclusion rules are not good enough.
+
+# Buildbots
+
+Buildbots are currently on the [experimental waterfall](http://build.chromium.org/buildbot/waterfall.fyi/waterfall). The coverage figures they calculate come from running some subset of the chromium testing suite.
+
+ * [Linux](http://build.chromium.org/buildbot/waterfall.fyi/builders/Linux%20Coverage%20(dbg)) - uses `gcov`
+ * [Windows](http://build.chromium.org/buildbot/waterfall.fyi/builders/Win%20Coverage%20%28dbg%29)
+ * [Mac](http://build.chromium.org/buildbot/waterfall.fyi/builders/Mac%20Coverage%20%28dbg%29)
+
+Also,
+ * [Coverage dashboard](http://build.chromium.org/buildbot/coverage/)
+ * [Example coverage summary](http://build.chromium.org/buildbot/coverage/linux-debug/49936/) - the coverage is calculated at directory and file level, and the directory structure is navigable via the **Subdirectories** table.
+
+# Calculating coverage locally
+
+TODO
+
+# Advanced Tips
+
+Sometimes a line of code should never be reached (e.g., `NOTREACHED()`). These can be marked in the source with `// COV_NF_LINE`. Note that this syntax is exact. \ No newline at end of file
diff --git a/docs/common_build_tasks.md b/docs/common_build_tasks.md
new file mode 100644
index 0000000..43ac2d8
--- /dev/null
+++ b/docs/common_build_tasks.md
@@ -0,0 +1,130 @@
+The Chromium build system is a complicated beast of a system, and it is not very well documented beyond the basics of getting the source and building the Chromium product. This page has more advanced information about the build system.
+
+If you're new to Chromium development, read the [getting started guides](http://dev.chromium.org/developers/how-tos/get-the-code).
+
+
+
+# Faster Builds
+
+## Components Build
+
+A non-standard build configuration is to use dynamic linking instead of static linking for the various modules in the Chromium codebase. This results in significantly faster link times, but is a divergence from what is shipped and primarily tested. To enable the [component build](http://www.chromium.org/developers/how-tos/component-build):
+
+```
+$ GYP_DEFINES="component=shared_library" gclient runhooks
+```
+
+or
+
+```
+C:\...\src>set GYP_DEFINES=component=shared_library
+C:\...\src>gclient runhooks
+```
+
+
+## Windows: Debug Builds Link Faster
+
+On Windows, if using the component build, building in debug mode will generally link faster. This is because in debug mode the linker works incrementally, while in release mode a full link is performed each time.
+
+## Mac: Disable Release Mode Stripping
+
+On Mac, if building in release mode, one of the final build steps will be to strip the build products and create dSYM files. This process can slow down your incremental builds, but it can be disabled with the following define:
+
+```
+$ GYP_DEFINES="mac_strip_release=0" gclient runhooks
+```
+
+## Mac: DCHECKs in Release Mode
+
+DCHECKs are only designed to be run in debug builds. But building in release mode on Mac is significantly faster. You can have your cake and eat it too by building release mode with DCHECKs enabled using the following define:
+
+```
+$ GYP_DEFINES="dcheck_always_on=1" gclient runhooks
+```
+
+## Linux
+
+Linux has its own page on [making the build faster](https://code.google.com/p/chromium/wiki/LinuxFasterBuilds).
+
+# Configuring the Build
+
+## Environment Variables
+
+There are various environment variables that can be passed to the metabuild system GYP when generating project files. This is a summary of them:
+
+| Variable | Purpose |
+|:---------|:--------|
+| GYP\_DEFINES | A set of key=value pairs separated by space that will set default values of variables used in .gyp and .gypi files |
+| GYP\_GENERATORS | The specific generator that creates build-system specific files |
+| GYP\_GENERATOR\_FLAGS | Flags that are passed down to the tool that generates the build-system specific files |
+| GYP\_GENERATOR\_OUTPUT | The directory that the top-level build output directory is relative to |
+
+Note also that GYP uses CPPFLAGS, CFLAGS, and CXXFLAGS when generating ninja files (the values at build time = ninja run time are _not_ used); see [gyp/generator/ninja.py](https://code.google.com/p/chromium/codesearch#chromium/src/tools/gyp/pylib/gyp/generator/ninja.py&q=cxxflags).
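+
+For example, to bake an extra compiler flag into the generated ninja files, export it when you run the hooks (a sketch; the specific flag is only an illustration):
+
+```
+$ CFLAGS="-Wno-deprecated-declarations" CXXFLAGS="-Wno-deprecated-declarations" gclient runhooks
+```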
+
+## Variable Files
+
+If you want to keep a set of variables established, there are a couple of magic files that GYP reads:
+
+### chromium.gyp\_env
+
+Next to your top-level `/src/` directory, create a file called `chromium.gyp_env`. This holds a JSON dictionary, with the keys being any of the above environment variables. For the full list of supported keys, see [/src/build/gyp\_helper.py](https://code.google.com/p/chromium/codesearch#chromium/src/build/gyp_helper.py&sq=package:chromium&type=cs&q=file:gyp_helper.py%5Cb).
+
+```
+{
+ 'variables': {
+ 'mac_strip_release': 0,
+ },
+ 'GYP_DEFINES':
+ 'clang=1 '
+ 'component=shared_library '
+ 'dcheck_always_on=1 '
+}
+```
+
+
+### include.gypi
+
+Or globally in your home directory, create a file `~/.gyp/include.gypi`.
+
+### supplement.gypi
+
+The build system will also include any files named `/src/*/supplement.gypi`, which should be in the same format as include.gypi above.
+
+## Change the Build System
+
+Most platforms support multiple build systems (Windows: different Visual Studio versions and ninja, Mac: Xcode and ninja, etc.). A sensible default is selected, but it can be overridden:
+
+```
+$ GYP_GENERATORS=ninja gclient runhooks
+```
+
+[Ninja](https://code.google.com/p/chromium/wiki/NinjaBuild) is generally the fastest way to build anything on any platform.
+
+## Change Build Output Directory
+
+If you need to change a compile-time flag and do not want to touch your current build output, you can re-run GYP and place output into a new directory, like so, assuming you are using ninja:
+
+```
+$ GYP_GENERATOR_FLAGS="output_dir=out_other_ninja" gclient runhooks
+$ ninja -C out_other_ninja/Release chrome
+```
+
+Alternatively, you can do the following, which should work with all GYP generators, but the out directory is nested as `out_other/out/`.
+
+```
+$ GYP_GENERATOR_OUTPUT="out_other" gclient runhooks
+$ ninja -C out_other/out/Release chrome
+```
+
+**Note:** If you wish to run the WebKit layout tests, make sure you specify the new directory using `--build-directory=out_other_ninja` (don't include the `Release` part).
+
+## Building Google Chrome
+
+To build Chrome, you need to be a Google employee and have access to the [src-internal](http://go.ext.google.com/src-internal) repository. Once your checkout is set up, you can run gclient like so:
+
+```
+$ GYP_DEFINES="branding=Chrome buildtype=Official" gclient runhooks
+```
+
+Then building the `chrome` target will produce the official build. This tip can be used in conjunction with changing the output directory, since changing these defines will rebuild the world.
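+
+Combining this with a separate output directory (see above) might look like the following sketch:
+
+```
+$ GYP_DEFINES="branding=Chrome buildtype=Official" GYP_GENERATOR_FLAGS="output_dir=out_official" gclient runhooks
+$ ninja -C out_official/Release chrome
+```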
+
+Also note that some GYP\_DEFINES flags are incompatible with the official build. If you get an error when you try to build, try removing all your flags and start with just the above ones. \ No newline at end of file
diff --git a/docs/cr_user_manual.md b/docs/cr_user_manual.md
new file mode 100644
index 0000000..11d0438
--- /dev/null
+++ b/docs/cr_user_manual.md
@@ -0,0 +1,148 @@
+# What is it?
+
+Cr is the new unified interface to the myriad tools we use while working within a chromium checkout.
+Its main additional feature is that it allows you to build many configurations and run targets within a single checkout (by not relying on a directory called 'out').
+This is especially important when you want to cross-compile (for instance, building android from linux or building arm from intel), but it extends to any build variation.
+
+## A quick example
+
+The following is all you need to prepare an output directory, and then build and run the release build of chrome for the host platform:
+```
+ cr init
+ cr run
+```
+
+# How do I get it?
+
+You already have it; it lives in `src/tools/cr`.
+
+You can run the cr.py file by hand, but if you are using bash it is much easier to source the bash helpers.
+This will add a cr function to your bash shell that runs with pyc generation turned off, and also installs the bash tab completion handler (which is very primitive at the moment; it only completes the command, not the options).
+It also adds a function you can use in your prompt to tell you your selected build (`_cr_ps1`), and a helper to return you to the root of your active tree (`crcd`).
+I recommend you add the following lines to the end of your ~/.bashrc (with the path corrected for your checkout):
+```
+ CR_CLIENT_PATH="/path/to/chromium"
+ source ${CR_CLIENT_PATH}/src/tools/cr/cr-bash-helpers.sh
+```
+At that point the cr command is available to you.
+
+# How do I use it?
+
+It should be mostly self-documenting
+```
+ cr --help
+```
+will list all the commands installed
+```
+ cr --help command
+```
+will give you more detailed help for a specific command.
+
+> _**A note to existing android developers:**_
+> Do not source envsetup! ever!
+> If you use cr in a shell that has had envsetup sourced, miscellaneous things will be broken. The cr tool does all the work that envsetup used to do in a way that does not pollute your current shell.
+> If you really need a shell that has had the environment modified like envsetup used to do, see the cr shell command, also probably file a bug for a missing cr feature!
+
+# The commands
+
+Below are some common workflows, but first there is a quick overview of the commands currently in the system.
+
+## Output directory commands
+
+init
+> Create and configure an output directory. Also runs select to make it the default.
+select
+> Select an output directory. This makes it the default output for all commands, so you can omit the --out option if you want to.
+prepare
+> Prepares an output directory. This runs any preparation steps needed for an output directory to be viable, which at the moment means run gyp.
+
+## Build commands
+
+build
+> Build a target.
+install
+> Install a binary. Does a build first unless `--builder=skip`.
+> This does nothing on linux, and installs the apk onto the device for android builds.
+run
+> Invoke a target. Does an install first, unless `--installer=skip`.
+debug
+> Debug a target. Does a run first, unless `--runner=skip`. Uses the debugger specified by `--debugger`.
+
+## Other commands
+
+sync
+> Sync the source tree. Uses gclient sync to do the real work.
+shell
+> Run an external command in a cr environment.
+> This is an escape hatch, if passed a command it runs it in the correct environment for the current output directory, otherwise it starts a sub shell with that environment. This allows you to run any commands that don't have shims, or are too specialized to get one. This is especially important on android where the environment is heavily modified.
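+
+For example, to poke around by hand in the environment cr sets up for your current output directory (a sketch based on the description above):
+
+```
+ cr shell    # starts a sub shell with the right environment for the selected output directory
+ # ... run ninja, adb, or whatever you need by hand ...
+ exit        # back to your normal shell
+```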
+
+# Preparing to build
+
+The first thing you need to do is prepare an output directory to build into.
+You do this with:
+```
+ cr init
+```
+By default on Linux this will prepare a Linux x86 release build output directory, called out\_linux/Release. If you want an Android debug one, you can use:
+```
+ cr init --out=out_android/Debug
+```
+The output directory can be called anything you like, but if you pick a non-standard name cr might not be able to infer the platform, in which case you need to specify it.
+The second part **must** be either Release or Debug.
+All options can be shortened to the shortest unambiguous prefix, so the short command line to prepare an Android debug output directory in out is:
+```
+ cr init --o=out/Debug --p=android
+```
+It is totally safe to do this in an existing output directory, and is an easy way to migrate an existing output directory to use in cr if you don't want to start from scratch.
+
+Most commands in cr take a --out parameter to tell them which output directory you want to operate on, but it will default to the last value passed to init or select.
+This enables you to omit it from most commands.
+
+Both init and select do additional work to prepare the output directory, which includes things like running gyp. You can do that work on its own with the prepare command if you want; you need to do this when changing between branches where you have modified the build files.
+
+If you want to set custom GYP defines for your build you can do this by adding the -s GYP\_DEFINES argument, for example:
+```
+ cr init --o=out/Debug -s GYP_DEFINES=component=shared_library
+```
+
+# Running chrome
+
+If you just want to do a basic build and run, then you do
+```
+ cr run
+```
+which will build, install if necessary, and run chrome, with some default args to open on https://www.google.com/.
+The same command line will work for any supported platform and mode.
+If you want to just run it again, you can turn off the build and install steps,
+```
+ cr run --installer=skip
+```
+Note that turning off install automatically turns off build (which you could otherwise do with `--builder=skip`), as there is no point building if you are not going to install.
+
+
+# Debugging chrome
+
+To start chrome under a debugger you use
+```
+ cr debug
+```
+which will build, install, and run chrome, and attach a debugger to it. This works on any supported platform, and if multiple debuggers are available, you can select which one you want with `--debugger=my_debugger`
+
+# Help, it went wrong!
+
+There are a few things to do, and you should probably do all of them.
+Run your commands with dry-run and/or verbose turned on to see what the tool is really doing, for instance
+```
+ cr --d -vvvv init
+```
+The number of v's matters: it's the verbosity level. You can also just specify the value with -v=4 if you'd rather.
+
+[Report a bug](https://code.google.com/p/chromium/issues/entry?comment=%3CDont%20forget%20to%20attach%20the%20command%20lines%20used%20with%20-v=4%20if%20possible%3E&pri=2&labels=OS-Android,tool-cr,Build-Tools,Type-Bug&owner=iancottrell@chromium.org&status=Assigned), even if it is just something that confused or annoyed rather than broke, we want this tool to be very low friction for developers.
+
+# Known issues
+
+You can see the full list of issues with [this](https://code.google.com/p/chromium/issues/list?can=2&q=label%3Atool-cr) query, but here are the high level issues:
+
+ * **Only supports gtest** : You run tests using the run command, which tries to infer from the target whether it is a runnable binary or a test. The inference could be improved, and it needs to handle the other test types as well.
+ * **No support for windows or mac** : allowed for in the design, but we need people with expertise on those platforms to help out.
+ * **Bash completion** : The hooks for it are there, but at the moment it only ever completes the command, not any of the arguments. \ No newline at end of file
diff --git a/docs/cygwin_dll_remapping_failure.md b/docs/cygwin_dll_remapping_failure.md
new file mode 100644
index 0000000..3b56273
--- /dev/null
+++ b/docs/cygwin_dll_remapping_failure.md
@@ -0,0 +1,55 @@
+# Handling repeated failures of rebaseall to allow cygwin remaps
+
+# Introduction
+
+Sometimes DLLs over which cygwin has no control get mapped into cygwin
+processes at locations that cygwin has chosen for its libraries.
+This has been seen primarily with anti-virus DLLs. When this occurs,
+cygwin must be instructed during the rebase to avoid the area of
+memory where that DLL is mapped.
+
+# Background
+
+Some background for this is available on http://www.dont-panic.cc/capi/2007/10/29/git-svn-fails-with-fatal-error-unable-to-remap/
+
+Because of unix fork semantics (presumably), cygwin libraries must be
+mapped in the same location in both parent and child of a fork. All
+cygwin libraries have hints in them as to where they should be mapped
+in a process's address space; if those hints are followed, each
+library will be mapped in the same location in both address spaces.
+However, Windows is perfectly happy mapping a DLL anywhere in the
+address space; the hint is not considered controlling. The remapping
+error occurs when a cygwin process starts and one of its libraries
+cannot be mapped to the location specified by its hint.
+
+/usr/bin/rebaseall changes the DLL hints for all of the cygwin
+libraries so that there are no inter-library conflicts; it does this
+by choosing a contiguous but not overlapping library layout starting
+at a base address and working down. This process makes sure there are
+no intra-cygwin conflicts, but cannot deal with conflicts with
+external DLLs that are in cygwin process address spaces
+(e.g. anti-virus DLLs).
+
+To handle this case, you need to figure out what the problematic
+non-cygwin library is, where it is in the address space, and do the
+rebaseall so that no cygwin hints map libraries to that location.
+
+# Details
+
+ * Download the ListDLLs executable from sysinternals
+   (http://technet.microsoft.com/en-us/sysinternals/bb896656.aspx).
+ * Run it as administrator while some cygwin commands are running.
+ * Scan the output for the cygwin process (identifiable by the command) and for DLLs in that process that do not look like cygwin DLLs (like an AV). Note the location of those libraries (there will usually only be the one). Pick an address space location lower than its starting address.
+ * Quit all cygwin processes.
+ * Run a windows command shell as administrator.
+ * cd into \cygwin\bin.
+ * Run `ash /usr/bin/rebaseall -b <base address>` (see the example after this list; this command can also take a `-v` flag if you want to see the DLL layout).
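+
+For example, if the problematic DLL was mapped at 0x64000000, the final step might look like this (the addresses here are purely illustrative; pick one appropriate for your system):
+
+```
+cd \cygwin\bin
+ash /usr/bin/rebaseall -v -b 0x63000000
+```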
+
+
+That should fix the problem.
+
+# Failed rebaseall
+
+If you pick a base address that is too low, you may end up with a broken cygwin install. You can reinstall it by running cygwin's setup.exe again, and on the package selection page, clicking the "All" entry to Reinstall. You may have to do this twice, as you may get errors on the first reinstall pass. \ No newline at end of file
diff --git a/docs/documentation_best_practices.md b/docs/documentation_best_practices.md
new file mode 100644
index 0000000..ecace94
--- /dev/null
+++ b/docs/documentation_best_practices.md
@@ -0,0 +1,115 @@
+# Documentation Best Practices
+
+"Say what you mean, simply and directly." - [Brian Kernighan]
+(http://en.wikipedia.org/wiki/The_Elements_of_Programming_Style)
+
+[TOC]
+
+## Minimum viable documentation
+
+A small set of fresh and accurate docs is better than a large
+assembly of "documentation" in various states of disrepair.
+
+Write short and useful documents. Cut out everything unnecessary, while also
+making a habit of continually massaging and improving every doc to suit your
+changing needs. **Docs work best when they are alive but frequently trimmed,
+like a bonsai tree**.
+
+## Update docs with code
+
+**Change your documentation in the same CL as the code change**. This keeps your
+docs fresh, and is also a good place to explain to your reviewer what you're
+doing.
+
+## Delete dead documentation
+
+Dead docs are bad. They misinform, they slow down, they incite despair in
+new community members and laziness in existing ones. They set a precedent
+for leaving behind messes in a code base. If your home is clean, most
+guests will be clean without being asked.
+
+Just like any big cleaning project, **it's easy to be overwhelmed**. If your
+docs are in bad shape:
+
+* Take it slow, doc health is a gradual accumulation.
+* First delete what you're certain is wrong, ignore what's unclear.
+* Get the whole community involved. Devote time to quickly scan every doc and
+ make a simple decision: Keep or delete?
+* Default to delete or leave behind if migrating. Stragglers can always be
+ recovered.
+* Iterate.
+
+## Prefer the good over the perfect
+
+Documentation is an art. There is no perfect document, there are only proven
+methods and prudent guidelines. See
+go/g3doc-style#good.
+
+## Documentation is the story of your code
+
+Writing excellent code doesn't end when your code compiles or even if your
+test coverage reaches 100%. It's easy to write something a computer understands,
+it's much harder to write something both a human and a computer understand. Your
+mission as a Code Health-conscious engineer is to **write for humans first,
+computers second.** Documentation is an important part of this skill.
+
+There's a spectrum of engineering documentation that ranges from terse comments
+to detailed prose:
+
+1. **Inline comments**: The primary purpose of inline comments is to provide
+ information that the code itself cannot contain, such as why the code is
+ there.
+
+2. **Method and class comments**:
+
+ * **Method API documentation**: The header / Javadoc / docstring
+ comments that say what methods do and how to use them. This
+ documentation is **the contract of how your code must behave**. The
+ intended audience is future programmers who will use and modify your
+ code.
+
+ It is often reasonable to say that any behavior documented here should
+ have a test verifying it. This documentation details what arguments the
+ method takes, what it returns, any "gotchas" or restrictions, and what
+ exceptions it can throw or errors it can return. It does not usually
+ explain why code behaves a particular way unless that's relevant to a
+ developer's understanding of how to use the method. "Why" explanations
+ are for inline comments. Think in practical terms when writing method
+ documentation: "This is a hammer. You use it to pound nails."
+
+ * **Class / Module API documentation**: The header / Javadoc / docstring
+ comments for a class or a whole file. This documentation gives a brief
+ overview of what the class / file does and often gives a few short
+ examples of how you might use the class / file.
+
+      Examples are particularly relevant when there are several distinct ways to
+ use the class (some advanced, some simple). Always list the simplest
+ use case first.
+
+3. **README.md**: A good README.md orients the new user to the directory and
+ points to more detailed explanation and user guides:
+ * What is this directory intended to hold?
+ * Which files should the developer look at first? Are some files an API?
+    * Who maintains this directory, and where can I learn more?
+
+4. **Design docs, PRDs**: A good design doc or PRD discusses the proposed
+   implementation at length for the purpose of collecting feedback on that
+ design. However, once the code is implemented, design docs should serve as
+ archives of these decisions, not as half-correct docs (they are often
+ misused). See
+ [Implementation state](#implementation_state_determines_document_location)
+   [Implementation state](#implementation-state-determines-document-repository)
+
+## Implementation state determines document repository
+
+**If the doc is about implemented code, put it in README.md**. If it's
+pre-implementation discussion, including Design docs, PRDs, and presentations,
+keep it in shared Drive folders.
+
+## Duplication is evil
+
+Do not write your own guide to a common technology or process. Link to it
+instead. If the guide doesn't exist or it's badly out of date, submit your
+updates to the appropriate docs/ directory or create a package-level
+README.md. **Take ownership and don't be shy**: Other teams will usually welcome
+your contributions.
diff --git a/docs/emacs.md b/docs/emacs.md
new file mode 100644
index 0000000..39aacd0
--- /dev/null
+++ b/docs/emacs.md
@@ -0,0 +1,311 @@
+## Debugging
+
+[linux_debugging.md](linux_debugging.md) has some emacs-specific debugging tips.
+
+
+## Blink Style (WebKit)
+
+Chrome and Blink/WebKit style differ. You can use [directory-local variables](http://www.gnu.org/software/emacs/manual/html_node/emacs/Directory-Variables.html) to make the tab key do the right thing. E.g., in `third_party/WebKit`, add a `.dir-locals.el` that contains
+
+```
+((nil . ((indent-tabs-mode . nil)
+ (c-basic-offset . 4)
+ (fill-column . 120))))
+```
+
+This turns off tabs, sets your indent to four spaces, and makes `M-q` wrap at 120 columns (WebKit doesn't define a wrap column, but there's a soft limit somewhere in that area for comments. Some reviewers do enforce the no wrap limit, which Emacs can deal with gracefully; see below.)
+
+Be sure to `echo .dir-locals.el >> .git/info/exclude` so `git clean` doesn't delete your file.
+
+It can be useful to set up a WebKit specific indent style. It's not too much different so it's easy to base off of the core Google style. Somewhere after you've loaded google.el (most likely in your .emacs file), add:
+
+```
+(c-add-style "WebKit" '("Google"
+ (c-basic-offset . 4)
+ (c-offsets-alist . ((innamespace . 0)
+ (access-label . -)
+ (case-label . 0)
+ (member-init-intro . +)
+ (topmost-intro . 0)
+ (arglist-cont-nonempty . +)))))
+```
+
+then you can add
+
+```
+ (c-mode . ((c-file-style . "WebKit")))
+ (c++-mode . ((c-file-style . "WebKit"))))
+```
+
+to the end of the .dir-locals.el file you created above. Note that this style may not yet be complete, but it covers the most common differences.
+
+Now that you have a WebKit-specific style being applied, and assuming you have font locking and its default jit locking turned on, you can also get Emacs 23 to wrap long lines more intelligently by adding the following to your .emacs file:
+
+```
+;; For dealing with WebKit long lines and word wrapping.
+(defun c-mode-adaptive-indent (beg end)
+  "Set the wrap-prefix for the region between BEG and END with adaptive filling."
+ (goto-char beg)
+ (while
+ (let ((lbp (line-beginning-position))
+ (lep (line-end-position)))
+ (put-text-property lbp lep 'wrap-prefix (concat (fill-context-prefix lbp lep) (make-string c-basic-offset ? )))
+ (search-forward "\n" end t))))
+
+(define-minor-mode c-adaptive-wrap-mode
+ "Wrap the buffer text with adaptive filling for c-mode."
+ :lighter ""
+ (save-excursion
+ (save-restriction
+ (widen)
+ (let ((buffer-undo-list t)
+ (inhibit-read-only t)
+ (mod (buffer-modified-p)))
+ (if c-adaptive-wrap-mode
+ (jit-lock-register 'c-mode-adaptive-indent)
+ (jit-lock-unregister 'c-mode-adaptive-indent)
+ (remove-text-properties (point-min) (point-max) '(wrap-prefix pref)))
+ (restore-buffer-modified-p mod)))))
+
+(defun c-adaptive-wrap-mode-for-webkit ()
+ "Turn on visual line mode and adaptive wrapping for WebKit source files."
+ (if (or (string-equal "webkit" c-indentation-style)
+ (string-equal "WebKit" c-indentation-style))
+ (progn
+ (visual-line-mode t)
+ (c-adaptive-wrap-mode t))))
+
+(add-hook 'c-mode-common-hook 'c-adaptive-wrap-mode-for-webkit)
+(add-hook 'hack-local-variables-hook 'c-adaptive-wrap-mode-for-webkit)
+```
+
+This turns on visual wrap mode for files using the WebKit c style, and sets up a hook to dynamically set the indent on the wrapped lines. It's not quite as intelligent as it could be (e.g., what would the wrap be if there really were a newline there?), but it's very fast. It makes dealing with long code lines anywhere much more tolerable (not just in WebKit).
+
+## Syntax-error Highlighting
+NinjaBuild users get in-line highlighting of syntax errors using `flymake.el` on each buffer-save:
+```
+(load-file "src/tools/emacs/flymake-chromium.el")
+```
+
+## [ycmd](https://github.com/Valloric/ycmd) (YouCompleteMe) + flycheck
+
+[emacs-ycmd](https://github.com/abingham/emacs-ycmd) in combination with flycheck provides:
+ * advanced code completion
+ * syntax checking
+ * navigation to declarations and definitions (using `ycmd-goto`)
+based on on-the-fly processing using clang. A quick demo video showing code completion and flycheck highlighting a missing semicolon syntax error:
+
+[Demo video](http://www.youtube.com/watch?v=a0zMbm4jACk)
+
+#### Requirements:
+ * Your build system is set up for building with clang or wrapper+clang
+
+#### Setup
+
+ 1. Clone https://github.com/Valloric/ycmd into a directory (e.g. `~/dev/ycmd`), update its external git repositories, and run build.sh.
+ 1. Test `ycmd` by running
+> `~/dev/ycmd$ python ycmd/__main__.py`
+
+> You should see `KeyError: 'hmac_secret'`
+ 1. Install the following packages to emacs, for example from melpa:
+ * `ycmd`
+ * `company-ycmd`
+ * `flycheck-ycmd`
+> [More info on configuring emacs-ycmd](https://github.com/abingham/emacs-ycmd#quickstart)
+ 1. Assuming your checkout of Chromium is in `~/dev/blink`, i.e. this is the directory in which you find the `src` folder, create a symbolic link as follows:
+> `cd ~/dev/blink; ln -s src/tools/vim/chromium.ycm_extra_conf.py .ycm_extra_conf.py`
+ 1. Add something like the following to your `init.el`
+```
+;; ycmd
+
+;;; Googlers can replace a lot of this with (require 'google-ycmd).
+
+(require 'ycmd)
+(require 'company-ycmd)
+(require 'flycheck-ycmd)
+
+(company-ycmd-setup)
+(flycheck-ycmd-setup)
+
+;; Show completions after 0.15 seconds
+(setq company-idle-delay 0.15)
+
+;; Activate for editing C++ files
+(add-hook 'c++-mode-hook 'ycmd-mode)
+(add-hook 'c++-mode-hook 'company-mode)
+(add-hook 'c++-mode-hook 'flycheck-mode)
+
+;; Replace the directory information with where you downloaded ycmd to
+(set-variable 'ycmd-server-command (list "python" (substitute-in-file-name "$HOME/dev/ycmd/ycmd/__main__.py")))
+
+;; Edit according to where you have your Chromium/Blink checkout
+(add-to-list 'ycmd-extra-conf-whitelist (substitute-in-file-name "$HOME/dev/blink/.ycm_extra_conf.py"))
+
+;; Show flycheck errors in idle-mode as well
+(setq ycmd-parse-conditions '(save new-line mode-enabled idle-change))
+
+;; Makes emacs-ycmd less verbose
+(setq url-show-status nil)
+```
+
+#### Troubleshooting
+
+ * If no completions show up or emacs reports errors, you can check the `*ycmd-server*` buffer for errors. See the next bullet point for how to handle "OS Error: No such file or directory"
+ * Launching emacs from an OS menu might result in a different environment so that `ycmd` does not find ninja. In that case, you can use a package like [exec-path from shell](https://github.com/purcell/exec-path-from-shell) and add the following to your `init.el`:
+```
+(require 'exec-path-from-shell)
+(when (memq window-system '(mac ns x))
+ (exec-path-from-shell-initialize))
+```
+
+
+## ff-get-other-file
+
+There's a builtin function called `ff-get-other-file` which will get the "other file" based on file extension. I have this bound to C-o in c-mode (`(local-set-key "\C-o" 'ff-get-other-file)`). While "other file" is per-mode defined, in c-like languages it means jumping between the header and the source file. So I switch back and forth between the header and the source with C-o. If we had separate include/ and src/ directories, this would be a pain to setup, but this might just work out of the box for you. See the documentation for the variable `cc-other-file-alist` for more information.
+
+One drawback of ff-get-other-file is that it will always switch to a matching buffer, even if the other file is in a different directory, so if you have A.cc, A.h, A.cc(2) then ff-get-other-file will switch to A.h from A.cc(2) rather than load A.h(2) from the appropriate directory. If you prefer something (C specific) that always finds the corresponding file on disk, try this:
+```
+(defun cc-other-file()
+ "Toggles source/header file"
+ (interactive)
+ (let ((buf (current-buffer))
+ (name (file-name-sans-extension (buffer-file-name)))
+ (other-extens
+ (cadr (assoc (concat "\\."
+ (file-name-extension (buffer-file-name))
+ "\\'")
+ cc-other-file-alist))))
+ (dolist (e other-extens)
+ (if (let ((f (concat name e)))
+ (and (file-exists-p f) (find-file f)))
+ (return)))
+ )
+ )
+```
+_Note: if you know an easy way to change the ff-get-other-file behavior, please replace this hack with that solution! - stevenjb@chromium.org_
+
+## Use Google's C++ style!
+
+We have an emacs module, [google-c-style.el](http://google-styleguide.googlecode.com/svn/trunk/google-c-style.el), which adds c-mode formatting.
+Then, to format regions with clang-format, add to your .emacs:
+```
+ (load "/<path/to/chromium>/src/buildtools/clang_format/script/clang-format.el")
+ (add-hook 'c-mode-common-hook (function (lambda () (local-set-key (kbd "TAB") 'clang-format-region))))
+```
+Now, you can use the `<Tab>` key to format the current line (even a long line) or region.
+
+### Highlight long lines
+
+One nice way to highlight long lines and other style issues:
+```
+(require 'whitespace)
+(setq whitespace-style '(face indentation trailing empty lines-tail))
+(setq whitespace-line-column nil)
+(set-face-attribute 'whitespace-line nil
+ :background "purple"
+ :foreground "white"
+ :weight 'bold)
+(global-whitespace-mode 1)
+```
+
+
+Note: You might need to grab the latest version of [whitespace.el](http://www.emacswiki.org/emacs-en/download/whitespace.el).
+
+## gyp
+
+### `gyp` style
+There is a gyp mode that provides basic indentation and font-lock (syntax highlighting) support. The mode derives from python.el (bundled with newer emacsen).
+
+You can find it in tools/gyp/tools/emacs or at http://code.google.com/p/gyp/source/browse/trunk/tools/emacs/
+
+See the README file there for installation instructions.
+
+**Important**: the mode is only tested with `python.el` (bundled with newer emacsen), not with `python-mode.el` (outdated and less maintained these days).
+
+### deep nesting
+
+A couple of helpers that show a summary of where you are; the first by tracing the indentation hierarchy upwards, the second by only showing `#if`s and `#else`s that are relevant to the current line:
+
+```el
+
+(defun ami-summarize-indentation-at-point ()
+  "Echo a summary of how one gets from the left-most column to
+POINT in terms of indentation changes."
+  (interactive)
+  (save-excursion
+    (let ((cur-indent most-positive-fixnum)
+          (trace '()))
+      (while (not (bobp))
+        (let ((current-line (buffer-substring (line-beginning-position)
+                                              (line-end-position))))
+          (when (and (not (string-match "^\\s-*$" current-line))
+                     (< (current-indentation) cur-indent))
+            (setq cur-indent (current-indentation))
+            (setq trace (cons current-line trace))
+            (if (or (string-match "^\\s-*}" current-line)
+                    (string-match "^\\s-*else " current-line)
+                    (string-match "^\\s-*elif " current-line))
+                (setq cur-indent (1+ cur-indent)))))
+        (forward-line -1))
+      (message "%s" (mapconcat 'identity trace "\n")))))
+
+(require 'cl)
+(defun ami-summarize-preprocessor-branches-at-point ()
+  "Summarize the C preprocessor branches needed to get to point."
+  (interactive)
+  (flet ((current-line-text ()
+           (buffer-substring (line-beginning-position) (line-end-position))))
+    (save-excursion
+      (let ((eol (or (end-of-line) (point)))
+            deactivate-mark directives-stack)
+        (goto-char (point-min))
+        (while (re-search-forward "^#\\(if\\|else\\|endif\\)" eol t)
+          (if (or (string-prefix-p "#if" (match-string 0))
+                  (string-prefix-p "#else" (match-string 0)))
+              (push (current-line-text) directives-stack)
+            (if (string-prefix-p "#endif" (match-string 0))
+                (while (string-prefix-p "#else" (pop directives-stack)) t))))
+        (message "%s" (mapconcat 'identity (reverse directives-stack) "\n"))))))
+```
+
+## find-things-fast
+
+erg wrote a suite of tools that do common operations from the root of your repository, called [Find Things Fast](https://github.com/eglaysher/find-things-fast). It contains ido completion over `git ls-files` (or the svn find equivalent) and `grepsource` that only git greps files with extensions we care about (or the equivalent the `find | xargs grep` statement in non-git repos.)
+
+## vc-mode and find-file performance
+
+When you first open a file under git control, vc mode kicks in and does a high level stat of your git repo. For huge repos, especially WebKit and Chromium, this makes opening a file take literally seconds. This snippet disables VC git for chrome directories:
+
+```
+
+; Turn off VC git for chrome
+(when (locate-library "vc")
+  (defadvice vc-registered (around nochrome-vc-registered (file))
+    (message (format "nochrome-vc-registered %s" file))
+    (if (string-match ".*chrome/src.*" file)
+        (progn
+          (message (format "Skipping VC mode for %s" file))
+          (setq ad-return-value nil))
+      ad-do-it))
+  (ad-activate 'vc-registered))
+```
+
+## git tools
+We're collecting Chrome-specific tools under `tools/emacs`. See the files there for details.
+
+ * `trybot.el`: import Windows trybot output into a `compilation-mode` buffer.
+
+## ERC for IRC
+
+See ErcIrc.
+
+## TODO
+
+ * Figure out how to make `M-x compile` default to `cd /path/to/chrome/root; make -r chrome`. \ No newline at end of file
diff --git a/docs/erc_irc.md b/docs/erc_irc.md
new file mode 100644
index 0000000..380fec4
--- /dev/null
+++ b/docs/erc_irc.md
@@ -0,0 +1,92 @@
+It's very simple to get started with ERC; just do the following:
+ 1. Optional: Sign up at freenode.net to claim your nickname.
+ 1. M-x
+ 1. erc (and accept default for the first couple of items)
+ 1. /join #chromium
+
+You may notice the following problems:
+ * It's hard to notice when you're mentioned.
+ * ERC does not have built-in accidental paste prevention, so you might accidentally paste multiple lines of text into the IRC channel.
+
+You can modify the following and add it to your .emacs file to fix both of the above. Note that you'll need to install and configure sendxmpp for the mention hack, which also requires you to create an account for your "robot" on e.g. jabber.org:
+
+```
+(require 'erc)
+
+;; Notify me when someone mentions my nick or aliases on IRC.
+(erc-match-mode 1)
+(add-to-list 'erc-keywords "\\bjoi\\b")
+;; Only when I'm sheriff.
+;;(add-to-list 'erc-keywords "\\bsheriff\\b")
+;;(add-to-list 'erc-keywords "\\bsheriffs\\b")
+(defun erc-global-notify (matched-type nick msg)
+  (interactive)
+  (when (or (eq matched-type 'current-nick) (eq matched-type 'keyword))
+    (shell-command
+     (concat "echo \"Mentioned by " (car (split-string nick "!")) ": " msg
+             "\" | sendxmpp joi@google.com"))))
+(add-hook 'erc-text-matched-hook 'erc-global-notify)
+
+(defvar erc-last-input-time 0
+ "Time of last call to `erc-send-current-line' (as returned by `float-time'),
+ or 0 if that function has never been called.
+ Used to detect accidental pastes (i.e., large amounts of text
+ accidentally entered into the ERC buffer.)")
+
+(defcustom erc-accidental-paste-threshold-seconds 1
+ "Time in seconds that must pass between invocations of
+ `erc-send-current-line' in order for that function to consider
+ the new line to be intentional."
+ :group 'erc
+ :type '(choice number (other :tag "disabled" nil)))
+
+(defun erc-send-current-line-with-paste-protection ()
+ "Parse current line and send it to IRC, with accidental paste protection."
+ (interactive)
+ (let ((now (float-time)))
+ (if (or (not erc-accidental-paste-threshold-seconds)
+ (< erc-accidental-paste-threshold-seconds (- now erc-last-input-time)))
+ (save-restriction
+ (widen)
+ (if (< (point) (erc-beg-of-input-line))
+ (erc-error "Point is not in the input area")
+ (let ((inhibit-read-only t)
+ (str (erc-user-input))
+ (old-buf (current-buffer)))
+ (if (and (not (erc-server-buffer-live-p))
+ (not (erc-command-no-process-p str)))
+ (erc-error "ERC: No process running")
+ (erc-set-active-buffer (current-buffer))
+
+ ;; Kill the input and the prompt
+ (delete-region (erc-beg-of-input-line)
+ (erc-end-of-input-line))
+
+ (unwind-protect
+ (erc-send-input str)
+ ;; Fix the buffer if the command didn't kill it
+ (when (buffer-live-p old-buf)
+ (with-current-buffer old-buf
+ (save-restriction
+ (widen)
+ (goto-char (point-max))
+ (when (processp erc-server-process)
+ (set-marker (process-mark erc-server-process) (point)))
+ (set-marker erc-insert-marker (point))
+ (let ((buffer-modified (buffer-modified-p)))
+ (erc-display-prompt)
+ (set-buffer-modified-p buffer-modified))))))
+
+ ;; Only when last hook has been run...
+ (run-hook-with-args 'erc-send-completed-hook str))))
+ (setq erc-last-input-time now))
+ (switch-to-buffer "*ERC Accidental Paste Overflow*")
+ (lwarn 'erc :warning "You seem to have accidentally pasted some text!"))))
+
+(add-hook 'erc-mode-hook
+ '(lambda ()
+ (define-key erc-mode-map "\C-m" 'erc-send-current-line-with-paste-protection)
+ ))
+```
+
+Note: The paste protection code is modified from a paste by user 'yashh' at http://paste.lisp.org/display/78068 (Google cache [here](http://webcache.googleusercontent.com/search?q=cache:p_S9ZKlWZPoJ:paste.lisp.org/display/78068+paste+78068&cd=1&hl=en&ct=clnk&gl=ca&source=www.google.ca)). \ No newline at end of file
diff --git a/docs/git_cookbook.md b/docs/git_cookbook.md
new file mode 100644
index 0000000..4080f5b
--- /dev/null
+++ b/docs/git_cookbook.md
@@ -0,0 +1,194 @@
+A collection of git recipes to do common git tasks.
+
+See also UsingGit and GitTips.
+
+
+
+
+## Introduction
+
+This is designed to be a cookbook for common command sequences/tasks relating to git, git-cl, and how they work with chromium development. It might be a little light on explanations.
+
+If you are new to git, or do not have much experience with a distributed version control system, you should also check out [The Git Community Book](http://book.git-scm.com/) for an overview of basic git concepts and general git usage. Knowing what git means by branches, commits, reverts, and resets (as opposed to what SVN means by them) will help make the following much more understandable.
+
+## Excluding file(s) from git-cl, while preserving them for later use
+
+Since git-cl assumes that the diff between your current branch and its tracking branch (defaulting to the svn trunk if there is no tracking branch) is what should be used for the CL, the goal is to remove the unwanted files from the current branch and preserve them in another branch or somewhere similar.
+
+### Method #1: Reset your current branch, and selectively commit files.
+
+ 1. `git log` # see the list of your commits. Find the hash of the last commit before your changes.
+ 1. `git reset --soft abcdef` # where abcdef is the hash found in the step above.
+ 1. `git commit <files_for_this_cl> -m "files to upload"` # commit the files you want included in the CL here.
+ 1. `git checkout -b new_branch_name origin/trunk` # Create a new branch for the files that you want to exclude.
+ 1. `git commit -a -m "preserved files"` # Commit the rest of the files.
+
+### Method #2: Create a new branch, reset, then commit files to preserve
+This method creates a new branch from your current one to preserve your changes. The commits on the new branch are undone, and then only the files you want to preserve are recommitted.
+
+ 1. `git checkout -b new_branch_name` # This preserves your old files.
+ 1. `git log` # see the list of your commits. Find the hash of the last commit before your changes.
+ 1. `git reset --soft abcdef` # where abcdef is the hash found in the step above.
+ 1. `git commit <files_to_preserve> -m "preserved files"` # commit the found files into the new\_branch\_name.
+
+Then revert your files however you'd like in your old branch. The files listed in step 4 will be saved in new\_branch\_name.
+
+### Method #3: Cherry pick changes into review branches
+If you are systematic in creating separate local commits for independent changes, you can make a number of different changes in the same client and then cherry-pick each one into a separate review branch.
+
+ 1. Make and commit a set of independent changes.
+ 1. `git log` # see the hashes for each of your commits.
+ 1. Repeat the following checkout, cherry-pick, upload steps for each change 1..N:
+ 1. `git checkout -b review-changeN origin` # create a new review branch tracking origin
+ 1. `git cherry-pick <hash of change N>`
+ 1. `git cl upload`
+
+If a change needs updating due to review comments, you can go back to your main working branch, update the commit, and re-cherry-pick it into the review branch.
+
+ 1. `git checkout <working branch>`
+ 1. Make changes.
+ 1. If the commit you want to update is the most recent one:
+ 1. `git commit --amend <files>`
+ 1. If not:
+ 1. `git commit <files>`
+ 1. `git rebase -i origin` # use interactive rebase to squash the new commit into the old one.
+ 1. `git log` # observe new hash for the change
+ 1. `git checkout review-changeN`
+ 1. `git reset --hard` # remove the previous version of the change
+ 1. `git cherry-pick <new hash of change N>`
+ 1. `git cl upload`
+
+## Sharing code between multiple machines
+Assume a Windows computer named vista and a Linux one named penguin.
+Prerequisite: both machines have git clones of the main git tree.
+```
+vista$ git remote add linux ssh://penguin/path/to/git/repo
+vista$ git fetch linux
+vista$ git branch -a # should show "linux/branchname"
+vista$ git checkout -b foobar linux/foobar
+vista$ hack hack hack; git commit -a
+vista$ git push linux # push branch back to linux
+penguin$ git reset --hard # update with new stuff in branch
+```
+
+Note that, by default, `gclient sync` will update all remotes. If your other machine (i.e., `penguin` in the above example) is not always available, `gclient sync` will timeout and fail trying to reach it. To fix this, you may exclude your machine from being fetched by default:
+
+```
+vista$ git config --bool remote.linux.skipDefaultUpdate true
+```
+
+## Reverting and undoing reverts
+Two commands to be familiar with:
+ * `git cherry-pick X` -- patch in the change made in revision X (where X is a hash, or HEAD~2, or whatever)
+ * `git revert X` -- patch in the **inverse** of the change made
+
+With that in hand, say you learned that the commit `abcdef` you just made was bad.
+
+Revert it locally:
+```
+$ git checkout origin # start with trunk
+$ git show abcdef # grab the svn revision that abcdef was
+$ git revert abcdef
+# an editor will pop up; be sure to replace the unhelpful git hash
+# in the commit message with the svn revision number
+```
+
+Commit the revert:
+```
+# note that since "git svn dcommit" commits each local change separately, be
+# extra sure that your commit log looks exactly like what you want the tree's commit
+# log to look like before you do this.
+$ git log # double check that the commit log is *exactly* what you want
+$ git svn dcommit # commit to svn, bypassing all precommit checks and prompts
+```
+
+Roll it forward again locally:
+```
+$ git checkout mybranch # go back to your old branch again, and
+$ git reset --hard origin # reset the branch to origin, which now has your revert.
+
+$ git cherry-pick abcdef # re-apply your bad change
+$ git show # grab the rietveld issue number out of the old commit
+$ git cl issue 12345 # restore the rietveld issue that was cleared on commit
+```
+
+And now you can continue hacking where you left off, and since you're reusing the Rietveld issue you don't have to rewrite the commit message. (You may want to manually reopen the issue on the Rietveld site -- `git cl status` will give you the URL.)
+
+## Retrieving, or diffing against an old file revision
+Git works in terms of commits, not files. Thus, working with the history of a single file requires modified version of the show and diff commands.
+```
+$ git log path/to/file # Find the commit you want in the file's commit log.
+$ git show 123abc:path/to/file # This prints out the file contents at commit 123abc.
+$ git diff 123abc -- path/to/file # Diff the current version of path/to/file
+ # against the version at commit 123abc.
+```
+
+When invoking `git show` or `git diff`, the `path/to/file` is **not relative to the current directory**. It must be the full path from the directory where the .git directory lives. This is different from invoking `git log`, which understands relative paths.
+
+## Checking out pristine branch from git-svn
+In the backend, git-svn keeps a remote tracking branch that points to the commit tree representing the svn repository. The name of this branch is configured during `git svn init`. The git-svn remote branch is often named `origin/trunk` for Chromium and `origin/master` for WebKit.
+
+If you want to checkout a "fresh" branch, you can base it directly off the remote branch for svn.
+
+```
+$ git checkout -b fresh origin/trunk # Replace with origin/master for webkit.
+```
+
+To find out what your git-svn remote branch name is, you can examine your `.git/config` file and look for the `svn-remote` entry. It will look something like this:
+
+```
+[svn-remote "svn"]
+ url = svn://svn.chromium.org/chrome
+ fetch = trunk/src:refs/remotes/origin/trunk
+```
+
+The last line (`fetch = trunk/src:refs/remotes/origin/trunk`) says to map `trunk/src` on svn to `refs/remotes/origin/trunk` in the local git checkout. This means the name of the svn remote branch is `origin/trunk`. You can use this branch name for all sorts of actions (diff, log, show, etc.)
+
+## Making your `git svn {fetch,rebase}` go fast
+If you are pulling changes from the git repository in Chromium (or WebKit), but your `git svn` commands still seem to pull each change individually from svn, your repository is probably set up incorrectly. Make sure the entries in your `.git/config` look something like this:
+
+```
+[remote "origin"]
+ url = https://chromium.googlesource.com/chromium/src.git
+ fetch = +refs/heads/*:refs/remotes/origin/*
+[svn-remote "svn"]
+ url = svn://svn.chromium.org/chrome
+ fetch = trunk/src:refs/remotes/origin/trunk
+```
+
+Here, `git svn fetch` will update the hash in refs/remotes/origin/trunk as per the `fetch =` line under `svn-remote`. Similarly, `git fetch` will update the **same** ref under `refs/remotes/origin`.
+
+With this setup, `git fetch` will use the faster git protocol to pull changes down into `origin/trunk`. This effectively updates the high-water mark for `git-svn`. Later invocations of `git svn {find-rev, fetch, rebase}` will be able to skip pulling those revisions down from the svn server. Instead, git-svn will just run a regex over the commit log in `origin/trunk` and parse all the `git-svn-id` lines to rebuild the mapping. Example:
+
+```
+commit 016d28b8c4959a3d28d2fbfb4b86c0361aad74ef
+Author: mpcomplete@chromium.org <mpcomplete@chromium.org@0039d316-1c4b-4281-b951-d872f2087c98>
+Date: Mon Jul 19 19:09:41 2010 +0000
+
+ Revert r42636. That hack is no longer needed now that we removed the compact
+ location bar view.
+
+ BUG=38992
+
+ Review URL: http://codereview.chromium.org/3036004
+
+ git-svn-id: svn://svn.chromium.org/chrome/trunk/src@52935 0039d316-1c4b-4281-b951-d872f2087c98
+```
+
+This commit will be parsed to map svn revision r52935 (on Google Code) to commit 016d28b8c4959a3d28d2fbfb4b86c0361aad74ef. The parsing will generate a lot of lines that look like `rXXXX = 01234ABCD`. It should generally take a minute or so when doing an incremental update.
+
+For this to work, two things must be true:
+
+ * The svn url in the `svn-remote` clause must exactly match the url in the git-svn-id pulled from the server.
+ * The fetch from origin must write into the exact same branch that is specified in the fetch line of `svn-remote`.
+
+If either of these are not true, then `git svn fetch` and friends will talk to svn directly, and be very slow.
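+
+A rough way to sanity-check both conditions (this sketch assumes the `svn-remote` is named `svn`, as in the example above):
+
+```
+# Compare the configured svn url against the git-svn-id lines in the mirrored branch:
+git config svn-remote.svn.url
+git log -1 origin/trunk | grep git-svn-id
+
+# Confirm that git fetch writes into the branch named in the fetch line:
+git config svn-remote.svn.fetch
+git branch -r | grep origin/trunk
+```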
+
+## Reusing a Git mirror
+
+If you have a nearby copy of a Git repo, you can quickly bootstrap your copy from that one then adjust it to point it at the real upstream one.
+
+ 1. Clone a nearby copy of the code you want: `git clone coworker-machine:/path/to/repo`
+ 1. Change the URL your copy fetches from to point at the real git repo: `git remote set-url origin http://src.chromium.org/git/chromium.git`
+ 1. Update your copy: `git fetch`
+ 1. Delete any extra branches that you picked up in the initial clone: `git remote prune origin` \ No newline at end of file
diff --git a/docs/git_tips.md b/docs/git_tips.md
new file mode 100644
index 0000000..36a7ec4
--- /dev/null
+++ b/docs/git_tips.md
@@ -0,0 +1,86 @@
+When UsingGit, there are a few tips that are particularly useful when working on the Chromium codebase, especially due to its size.
+
+See also GitCookbook.
+
+Remember the basic git convention:
+> `git` _`COMMAND`_ `[`_`FLAGS`_`]` `[`_`ARGUMENTS`_`]`
+Various git commands have an underlying executable with a hyphenated name, such as `git-grep`, but these can also be called via the `git` wrapper script as `git grep` (and `man` should work either way too).
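+
+For example (the string searched for is just an illustration):
+
+```
+git grep -n "OnMessageReceived"   # runs the underlying git-grep executable
+man git-grep                      # manual page for the command
+git help grep                     # same manual page, via the wrapper
+```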
+
+## Git references
+
+The following resources can provide background on how Git works:
+
+ * [Git-SVN Crash Course](http://git-scm.com/course/svn.html) -- this crash course is useful for Subversion users switching to Git.
+ * [Think Like (a) Git](http://think-like-a-git.net/) -- does a great job of explaining the main purpose of Git's operations.
+ * [Git User's Manual](http://schacon.github.com/git/user-manual.html) -- a great resource to learn more about how to use Git properly.
+ * [A Visual Git Reference](http://marklodato.github.com/visual-git-guide/index-en.html) -- a resource that explains various Git operations for visual reasons.
+ * [Git Cheat Sheet](http://cheat.errtheblog.com/s/git) -- now that you understand Git, here's a cheat sheet to quickly remind you of all the commands you need.
+
+## Committing changes
+For a simple workflow (always commit all changed files, don't keep local revisions), the following script handles committing; you may wish to call it `gci` (git commit) or similar.
+
+Amending a single revision is generally easier for various reasons, notably for rebasing and for checking that CLs have been committed. However, if you don't use local revisions (a local branch with multiple revisions), you should make sure to upload revisions periodically to code review if you ever need to go to an old version of a CL.
+```
+#!/bin/bash
+# Commit all, amending if not initial commit.
+if git status | grep -q "# Your branch is ahead of 'master' by 1 commit."
+then
+ git commit --all --amend
+else
+ git commit --all # initial, not amendment
+fi
+```
+
+## Listing and changing branches
+```
+git branch # list branches
+git checkout - # change to last branch
+```
+To quickly list the 5 most recent branches, add the following to `.gitconfig` in the `[alias]` section:
+```
+last5 = "!git for-each-ref --sort=committerdate refs/heads/ --format='%(committerdate:short) %(refname:short)' | tail -5 | cut -c 12-"
+```
+
+A nicely color-coded list, sorted in descending order by date, can be made by the following bash function:
+```
+git-list-branches-by-date() {
+ local current_branch=$(git rev-parse --symbolic-full-name --abbrev-ref HEAD)
+ local normal_text=$(echo -ne '\E[0m')
+ local yellow_text=$(echo -ne '\E[0;33m')
+ local yellow_bg=$(echo -ne '\E[7;33m')
+ git for-each-ref --sort=-committerdate \
+ --format=$' %(refname:short) \t%(committerdate:short)\t%(authorname)\t%(objectname:short)' refs/heads \
+ | column -t -s $'\t' -n \
+ | sed -E "s:^ (${current_branch}) :* ${yellow_bg}\1${normal_text} :" \
+ | sed -E "s:^ ([^ ]+): ${yellow_text}\1${normal_text}:"
+}
+```
+
+## Searching
+Use `git-grep` instead of `grep` and `git-ls-files` instead of `find`, as these search only files in the index or _tracked_ files in the work tree, rather than all files in the work tree.
+
+Note that `git-ls-files` is rather simpler than `find`, so you'll often need to use `xargs` instead of `-exec` if you want to process matching files.
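+
+For example (the class name searched for is just an illustration):
+
+```
+# Search only tracked files:
+git grep -n "WebContentsObserver"
+
+# git ls-files has no -exec, so pipe the file list through xargs:
+git ls-files '*.h' | xargs grep -l "WebContentsObserver"
+```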
+
+## Global changes
+To make global changes across the source tree, it's often easiest to use `sed` with `git-ls-files`, using `-i` for in-place changing (this is generally safe, as we don't use symlinks much, but there are a few places that do). Remember that you don't need to use `xargs`, since sed can take multiple input files. E.g., to strip trailing whitespace from C++ and header files:
+```
+ sed -i -E 's/\s+$//' $(git ls-files '*.cpp' '*.h')
+```
+
+You may also find `git-grep` useful for limiting the scope of your changes, using `-l` for listing files.
+```
+ sed -i -E '...' $(git grep -lw Foo '*.cpp' '*.h')
+```
+
+Remember that you can restrict sed actions to matching (or non-matching) lines. For example, to skip lines with a line comment, use the following:
+```
+ '\,//, ! s/foo/bar/g'
+```
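+
+Putting it together, a complete invocation of the pattern above might look like this (the foo/bar substitution is only a placeholder):
+
+```
+ sed -i -E '\,//, ! s/foo/bar/g' $(git ls-files '*.cpp' '*.h')
+```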
+## Diffs
+```
+ git diff --shortstat
+```
+Displays summary statistics, such as:
+```
+ 2104 files changed, 9309 insertions(+), 9309 deletions(-)
+``` \ No newline at end of file
diff --git a/docs/gn_check.md b/docs/gn_check.md
new file mode 100644
index 0000000..c10e226
--- /dev/null
+++ b/docs/gn_check.md
@@ -0,0 +1,85 @@
+GN has several different ways to check dependencies. Many of them are checked by the `gn check` command. Running checks involves opening and scanning all source files, so this isn't done every time the build is updated. To run a check on an existing build:
+```
+gn check out/mybuild
+```
+
+To run the check as part of the "gen" command to update the build (this is what the bots do):
+```
+gn gen out/mybuild --check
+```
+
+# Concepts
+
+## Visibility
+
+Targets can control which other targets may depend on them by specifying `visibility`. Visibility is always checked when running any GN command (not just `gn check`).
+
+By default, targets are "public", meaning any target can depend on them. If you supply a list, visibility will be limited to those targets (possibly including wildcards):
+
+```
+visibility = [
+ ":*", # All targets in this file.
+ "//content/*", # All targets in content and any subdirectory thereof.
+ "//tools:doom_melon", # This specific target.
+]
+```
+
+See `gn help visibility` for more details and examples.
+
+## Public header files
+
+Targets can control which headers may be included by dependent targets so as to define a public API. If your target specifies only `sources`, then all headers listed there are public and can be included by all dependents.
+
+If your target defines a `public` variable, only the files listed in that list will be public. Files in `sources` but not `public` (they can be in both or only one) may not be included by dependent targets.
+
+```
+source_set("foo") {
+ public = [
+ "foo.h",
+ "foo_config.h",
+ ]
+ sources = [
+ "foo.cc",
+ "foo.h",
+ "bar.cc",
+ "bar.h",
+ ]
+}
+```
+
+## Public dependencies
+
+In order to include files from another target, that target must be listed in your target's dependencies. By default, transitively depending on a target doesn't give your files this privilege.
+
+If a target exposes a dependency as part of its public API, then it can list that dependency as a `public_deps`:
+```
+source_set("foo") {
+ sources = [ ... ]
+ public_deps = [
+ "//base",
+ ]
+ deps = [
+ "//tools/doom_melon",
+ ]
+}
+```
+Targets that depend on `foo` can include files from `base` but not from `doom_melon`. To include public headers from `doom_melon`, a target would need to depend directly on it.
+
+Public dependencies work transitively, so listing a target as a public dependency also exposes that target's public dependencies. Along with the ability to include headers, public dependencies forward the `public_configs` which allow settings like defines and include directories to apply to dependents.
+
+# Putting it all together
+
+In order to include a header from target Y in a file that is part of target X:
+
+ * X must be in Y's `visibility` list (or Y must have no `visibility` defined).
+ * The header must be in Y's `public` headers (or Y must have no `public` variable defined).
+ * X must depend directly on Y, or there must be a path from X to Y following only public dependencies.
+
+### What gets checked
+
+Chrome currently doesn't come close to passing `gn check` across the whole build. You can check specific targets or subtrees for issues:
+```
+ gn check out/mybuild //base
+
+ gn check out/mybuild "//mojo/*"
+``` \ No newline at end of file
diff --git a/docs/graphical_debugging_aid_chromium_views.md b/docs/graphical_debugging_aid_chromium_views.md
new file mode 100644
index 0000000..ba59372
--- /dev/null
+++ b/docs/graphical_debugging_aid_chromium_views.md
@@ -0,0 +1,51 @@
+# Introduction
+
+A simple debugging tool exists to help visualize the views tree during debugging. It consists of 4 components:
+
+ 1. The function `View::PrintViewGraph()` (already in the file **view.cc** if you've sync'd recently),
+ 1. a gdb script file **viewg.gdb** (see below),
+ 1. the graphViz package (http://www.graphviz.org/ - downloadable for Linux, Windows and Mac), and
+ 1. an SVG viewer (_e.g._ Chrome).
+
+# Details
+
+To use the tool,
+
+ 1. Make sure you have 'dot' installed (part of graphViz),
+ 1. define `TOUCH_DEBUG` and compile chrome with Views enabled,
+ 1. run gdb on your build and
+ 1. source **viewg.gdb** (this can be done automatically in **.gdbinit**),
+ 1. stop at any breakpoint inside class `View` (or any derived class), and
+ 1. type `viewg` at the gdb prompt.
+
+This will cause the current view, and any descendants, to be described in a graph which is stored as **~/state.svg** (Windows users may need to modify the script slightly to run under Cygwin). If **state.svg** is kept open in a browser window and refreshed each time `viewg` is run, then it provides a graphical representation of the state of the views hierarchy that is always up to date.
+
+It is easy to modify the gdb script to generate PDF in case viewing with evince (or other PDF viewer) is preferred.
+
+If you don't use gdb, you may be able to adapt the script to work with your favorite debugger. The gdb script invokes
+```
+ this->PrintViewGraph(true)
+```
+on the current object, returning `std::string`, whose contents must then be saved to a file in order to be processed by dot.
+
+# viewg.gdb
+
+```
+define viewg
+ if $argc != 0
+ echo Usage: viewg
+ else
+ set pagination off
+ set print elements 0
+ set logging off
+ set logging file ~/state.dot
+ set logging overwrite on
+ set logging redirect on
+ set logging on
+ printf "%s\n", this->PrintViewGraph(true).c_str()
+ set logging off
+ shell dot -Tsvg -o ~/state.svg ~/state.dot
+ set pagination on
+ end
+end
+``` \ No newline at end of file
diff --git a/docs/gtk_vs_views_gtk.md b/docs/gtk_vs_views_gtk.md
new file mode 100644
index 0000000..df72341
--- /dev/null
+++ b/docs/gtk_vs_views_gtk.md
@@ -0,0 +1,27 @@
+# Benefits of ViewsGtk
+
+ * Better code sharing. For example, don't have to duplicate tab layout or bookmark bar layout code.
+ * Tab Strip
+ * Drawing
+ * All the animationy bits
+ * Subtle click selection behavior (curved corners)
+ * Drag behavior, including dropping of files onto the URL bar
+ * Closing behavior
+ * Bookmarks bar
+ * drag & drop behavior, including menus
+ * chevron?
+ * Easier for folks to work on both platforms without knowing much about the underlying toolkits.
+ * Don't have to implement UI features twice.
+
+
+# Benefits of Gtk
+ * Dialogs
+ * Native feel layout
+ * Font size changes (e.g., changing the system font size will apply to our dialogs)
+ * Better RTL (e.g., http://crbug.com/2822 http://crbug.com/5729 http://crbug.com/6082 http://crbug.com/6103 http://crbug.com/6125 http://crbug.com/8686 http://crbug.com/8649 )
+ * Being able to obey the user's system theme
+ * Accessibility for buttons and dialogs (but not for tabstrip and bookmarks)
+ * A better chance at good remote X performance?
+ * We still would currently need Pango / Cairo for text layout, so it will be more efficient to just draw that during the Gtk pipeline instead of with Skia.
+ * Gtk widgets will automatically "feel and behave" like Linux. The behavior of our own Views system does not necessarily feel right on Linux.
+ * People working on Windows features don't need to worry about breaking the Linux build. \ No newline at end of file
diff --git a/docs/how_to_extend_layout_test_framework.md b/docs/how_to_extend_layout_test_framework.md
new file mode 100644
index 0000000..3618d66
--- /dev/null
+++ b/docs/how_to_extend_layout_test_framework.md
@@ -0,0 +1,125 @@
+# Extending the Layout Framework
+
+# Introduction
+The Layout Test Framework that Blink uses is a multi-platform regression testing tool with a large number of helpers for testing different kinds of regressions, such as pixel diffs, text diffs, etc. The framework is mainly used by Blink, but it was made to be extensible so that other projects can use it to test different parts of Chrome (such as Print Preview). This is a guide for people who want to extend the framework to test whatever they want.
+
+# Background
+Before you can start actually extending the framework, you should be familiar with how to use it. This wiki page is basically all you need to learn how to use it:
+http://www.chromium.org/developers/testing/webkit-layout-tests
+
+# How to Extend the Framework
+There are two parts to actually extending the framework to test a piece of software.
+The first part is extending certain files in:
+src/third\_party/Webkit/Tools/Scripts/webkitpy/layout\_tests/
+The code in webkitpy/layout\_tests is the layout test framework itself.
+
+The second part is creating a driver (program) that actually communicates with the layout test framework. This part is significantly more tricky and depends on what exactly is being tested.
+
+## Part 1:
+This part isn’t too difficult. There are basically two classes that need to be extended (ideally, just inherited from):
+
+ * **Driver**, located in layout\_tests/port/driver.py. Each instance of this class will actually run an instance of the program that produces the test data (the program in Part 2).
+ * **Port**, located in layout\_tests/port/base.py. This class is responsible for creating drivers with the correct settings, giving access to certain OS functionality to access expected files, etc.
+
+### Extending Driver:
+As mentioned, Driver launches the program from Part 2. That program communicates with the driver class to receive instructions and send back data. All of the work for the driver gets done in Driver.run\_test; everything else is a helper or initialization function.
+run\_test() steps:
+ 1. On the very first call of this function, it will actually start the test program. On every subsequent call, it first checks whether the process needs to be restarted and, if so, creates a new instance of the test program.
+ 1. It will then create a command to send to the program.
+ * This command generally consists of an html file path for the test program to navigate to.
+ * After creating it, the command is sent.
+ 1. After the command has been sent, it will then wait for data from the program.
+ * It will actually wait for 2 blocks of data.
+ * The first block is text or audio data. This part is required (the program will always send something, even an empty string).
+ * The second block is optional and consists of image data and an image hash (MD5); this block is used for pixel tests.
+ 1. After it has received all the data, it will proceed to check if the program has timed out or crashed, and if so fail this instance of the test (it can be retried later if need be).
+
+Luckily, run\_test() most likely doesn’t need to be overridden unless extra blocks of data need to be sent to/read from the test program. However, you do need to know how it works because it will influence which functions you need to override. Here are the ones you’re probably going to need to override:
+
+ * cmd\_line
+ * This function creates the set of command line arguments used to run the test program, so it will almost certainly need to be overridden. Driver uses subprocess.popen to create the process, which takes the name of the test program and any options it might need.
+ * The first item in the list of arguments should be the path to the test program, obtained via self._port._path\_to\_driver(), which returns an absolute path to the test program.
+ * This is the bare minimum you need to get the driver to launch the test program; if you have extra options to append, just append them to the list.
+ * start
+ * If your program has any special startup needs, this is the place to put it.
+
+That’s mostly it. The Driver class has almost all the functionality you could want, so there isn’t much to override here. If extra data needs to be read or sent, extra data members should be added to ContentBlock.
+
+### Extending Port:
+This class is responsible for providing functionality such as where to look for tests, where to store test results, what driver to run, what timeout to use, what kind of files can be run, etc. It provides a lot of functionality, but it isn’t really sufficient on its own because it doesn’t account for platform-specific problems; therefore Port itself shouldn’t be extended. Instead, LinuxPort, WinPort, and MacPort (and maybe the android port class) should be extended, as they provide platform-specific overrides/extensions that implement most of the important functionality. While there are many functions in Port, overriding one function will affect most of the other ones to get the desired behavior. For example, if layout\_tests\_dir() is overridden, not only will the code look for tests in that directory, but it will also find the correct TestExpectations file, the platform-specific expected files, etc.
+
+Here are some of the functions that most likely need to be overridden.
+ * driver\_class
+ * This should be overridden to allow the testing program to actually run. By default the code will run content\_shell, which might or might not be what you want.
+ * It should be overridden to return the driver extension class created earlier. This function doesn’t return an instance of the driver, just the class itself.
+ * driver\_name
+ * This should return the name of the test program. By default it returns ‘content\_shell’, but you want to have it return the program you want to run, such as chrome or browser\_tests.
+ * layout\_tests\_dir
+ * This tells the port where to look for all the tests and everything associated with them, such as resource files.
+ * By default it returns absolute path to the webkit tests.
+ * If you are planning on running something in the chromium src/ directory, there are helper functions to allow you to return a path relative to the base of the chromium src directory.
+
+The rest of the functions can definitely be overridden for your project’s specific needs, however these are the bare minimum needed to get it running. There are also functions you can override to make certain actions that aren’t on by default always take place. For example, the layout test framework always checks for system dependencies unless you pass in a switch. If you want them disabled for your project, just override check\_sys\_deps to always return OK. This way you don’t need to pass in so many switches.
+
+As said earlier, you should override LinuxPort, MacPort, and/or WinPort. You should create a class that implements the platform independent overrides (such as driver\_class) and then create a separate class for each platform specific port of your program that inherits from the class with the independent overrides and the platform port you want. For example, you might want to have a different timeout for your project, but on Windows the timeout needs to be vastly different than the others. In this case you can just create a default override that every class uses except your Windows port. In that port you can just override the function again to provide the specific timeout you need. This way you don’t need to maintain the same function on each platform if they all do the same thing.
+
+For Driver and Port that’s basically it unless you need to make many odd modifications. Lots of functionality is already there so you shouldn’t really need to do much.
+
+## Part 2:
+This is the part where you create the program that your driver class launches. This part is very application-dependent, so this will not be a guide on how to implement certain features, just what should be implemented, the order in which events should occur, and some guidelines about what to do and not do. For a good example of how to implement your test program, look at MockDRT in mock\_drt.py in the same directory as base.py and driver.py. It goes through all the steps described below and is very clear and concise. It is written in Python, but your driver can be anything that can be run by subprocess.popen and has stdout, stdin, and stderr.
+
+### Goals
+Your goal for this part of the project is to create a program (or extend a program) to interface with the layout test framework. The layout test framework will communicate with this program to tell it what to do and it will accept data from this program to perform the regression testing or create new base line files.
+
+### Structure
+This is how your code should be laid out.
+ 1. Initialization
+ * The creation of any directories or the launching of any programs should be done here and should be done once.
+ * After the program is initialized, “#READY\n” should be sent to progress the run\_test() in the driver.
+ 1. Infinite Loop (!)
+ * After initialization, your program needs to actually wait for input, then process that input to carry out the test. In the context of layout testing, the content\_shell needs to wait for an html file to navigate to, render it, then convert that rendering to a PNG. It does this constantly, until a signal/message is sent to indicate that no more tests should be processed
+ * Details:
+ * The first thing you need is your test file path and any other additional information about the test that is required (this is sent during the write() step in run\_test() in driver.py). This information will be passed through stdin and is just one large string, with each part of the command separated by apostrophes (ex: “/path’foo” means /path is the path to the test file and foo is some setting that your program might need).
+ * After that, your program should act on this input; how it does this is up to your program, but in content\_shell this would be the part where it navigates to the test file, then renders it. After the program acts on the input, it needs to send some text to the driver code to indicate that it has acted on the input. This text will indicate something that you want to test. For example, if you want to make sure your program always prints “foo”, you should send it to the driver. If the program ever prints “bar” (or anything else), that would indicate a failure and the test will fail.
+ * Then you need to send any image data in the same manner as you did for step ii.
+ * Cleanup everything related to processing the input from step i, then go back to step i.
+ * This is where the ‘infinite’ loop part comes in: your program should constantly accept input from the driver until the driver indicates that there are no more tests to run. The driver does this by closing stdin, which will cause std::cin to go into a bad state. However, you can also modify the driver to send a special string such as ‘QUIT’ to exit the while loop.
+
+That’s basically what the skeleton of your program should be.
+
+### Details:
+This is information about how to do some specific things, such as sending data to the layout test framework.
+ * Content Blocks
+ * The layout test framework accepts output from your program in blocks of data through stdout. Therefore, printing to stdout is really sending data to the layout test framework.
+ * Structure of block
+ * “Header: Data\n”
+ * Header indicates what type of data will be sent through. A list of valid headers is listed in Driver.py.
+ * Data is the data that you actually want to send. For pixel tests, you want to send the actual PNG data here.
+ * The newline is needed to indicate the end of a header.
+ * End of a content block
+ * To indicate the end of a content block and cause the driver to progress, you need to write “#EOF\n” to stdout (mandatory) and to stderr for certain types of content, such as image data.
+ * Multiple headers per block
+ * Some blocks require different sets of data. For PNGs, not only is the PNG needed, but so is a hash of the bitmap used to create the PNG.
+ * In this case this is how your output should look.
+ * “Content-type: image/png\n”
+ * “ActualHash: hashData\n”
+ * “Content-Length: lengthOfPng\n”
+ * “pngdata”
+ * This part doesn’t need a header specifying that you are sending PNG data; just send it.
+ * “#EOF\n” on both stdout and stderr
+ * To see the structure of the data required, look at the read\_block functions in Driver.py
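+
+Putting the loop from the Structure section and the block format above together, here is a very rough, hedged sketch of a driver program in shell (illustrative only -- real drivers such as mock\_drt.py are written in Python, and the exact header names should be double-checked against Driver.py):
+
+```
+#!/bin/sh
+printf '#READY\n'                      # tell run_test() that initialization is done
+
+while IFS= read -r command; do         # one apostrophe-delimited command per test
+  test_path=${command%%\'*}            # first field: path to the test file
+
+  # ... act on the input here (load/render "$test_path") ...
+
+  printf 'Content-Type: text/plain\n'  # first (required) block: text output
+  printf 'fake output for %s\n' "$test_path"
+  printf '#EOF\n'                      # end of the content block on stdout (mandatory)
+  # For image blocks, an extra '#EOF' also goes to stderr (see Driver.py).
+done                                   # loop ends when the driver closes stdin
+```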
+
+
+
+
+
+
diff --git a/docs/include_what_you_use.md b/docs/include_what_you_use.md
new file mode 100644
index 0000000..2446c61
--- /dev/null
+++ b/docs/include_what_you_use.md
@@ -0,0 +1,82 @@
+# Introduction
+
+**WARNING:** This is all very alpha. Proceed at your own risk. The Mac instructions are very out of date -- IWYU currently isn't generally usable, so we stopped looking at it for chromium.
+
+See [include what you use page](http://code.google.com/p/include-what-you-use/) for background about what it is and why it is important.
+
+This page describes running IWYU for Chromium.
+
+# Linux/Blink
+
+## Running IWYU
+
+These instructions have a slightly awkward workflow. Ideally we should use something like `CXX=include-what-you-use GYP_DEFINES="clang=1" gclient runhooks; ninja -C out/Debug webkit -k 10000` if someone can get it working.
+
+ * Install include-what-you-use (see [here](https://code.google.com/p/include-what-you-use/wiki/InstructionsForUsers)). Make sure to use --enable-optimized=YES when building; otherwise IWYU will be very slow.
+ * Get the compilation commands from ninja (using g++), and derive include-what-you-use invocations from it
+```
+$ cd /path/to/chromium/src
+$ ninja -C out/Debug content_shell -v > ninjalog.txt
+$ sed '/obj\/third_party\/WebKit\/Source/!d; s/^\[[0-9\/]*\] //; /^g++/!d; s/^g++/include-what-you-use -Wno-c++11-extensions/; s/-fno-ident//' ninjalog.txt > commands.txt
+```
+ * Run the IWYU commands. We do this in parallel for speed. Merge the output and remove any complaints that the compiler has.
+```
+$ cd out/Debug
+$ for i in {1..32}; do (sed -ne "$i~32p" ../../commands.txt | xargs -n 1 -L 1 -d '\n' bash -c > iwyu_$i.txt 2>&1) & done
+$ cat iwyu_{1..32}.txt | sed '/In file included from/d;/\(note\|warning\|error\):/{:a;N;/should add/!b a;s/.*\n//}' > iwyu.txt
+$ rm iwyu_{1..32}.txt
+```
+ * The output in iwyu.txt has all the suggested changes
+
+# Mac
+
+## Setup
+
+ 1. Checkout and build IWYU (This will also check out and build clang. See [Clang page](http://code.google.com/p/chromium/wiki/Clang) for details.)
+```
+$ cd /path/to/src/
+$ tools/clang/scripts/update_iwyu.sh
+```
+ 1. Ensure "Continue building after errors" is enabled in the Xcode Preferences UI.
+
+## Chromium
+
+ 1. Build Chromium. Be sure to substitute in the correct absolute path for `/path/to/src/`.
+```
+$ GYP_DEFINES='clang=1' gclient runhooks
+$ cd chrome
+$ xcodebuild -configuration Release -target chrome OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/path/to/src/third_party/llvm-build/Release+Asserts/bin/clang++
+```
+ 1. Run IWYU. Be sure to substitute in the correct absolute path for `/path/to/src/`.
+```
+$ xcodebuild -configuration Release -target chrome OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/path/to/src/third_party/llvm-build/Release+Asserts/bin/include-what-you-use
+```
+
+## WebKit
+
+ 1. Build TestShell. Be sure to substitute in the correct absolute path for `/path/to/src/`.
+```
+$ GYP_DEFINES='clang=1' gclient runhooks
+$ cd webkit
+$ xcodebuild -configuration Release -target test_shell OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/path/to/src/third_party/llvm-build/Release+Asserts/bin/clang++
+```
+ 1. Run IWYU. Be sure to substitute in the correct absolute path for `/path/to/src/`.
+```
+$ xcodebuild -configuration Release -target test_shell OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/work/chromium/src/third_party/llvm-build/Release+Asserts/bin/include-what-you-use
+```
+
+# Bragging
+
+You can run `tools/include_tracer.py` to get header file sizes before and after running iwyu. You can then include stats like "This reduces the size of foo.h from 2MB to 80kB" in your CL descriptions.
+
+# Known Issues
+
+We are a long way off from being able to accept the results of IWYU for Chromium/Blink. However, even in its current state it can be a useful tool for finding forward declaration opportunities and unused includes.
+
+Using IWYU with Blink has several issues:
+ * Lack of understanding on Blink style, e.g. config.h, wtf/MathExtras.h, wtf/Forward.h, wtf/Threading.h
+ * "using" declarations (most of WTF) makes IWYU not suggest forward declarations
+ * Functions defined inline in a different location to the declaration are dropped, e.g. Document::renderView in RenderView.h and Node::renderStyle in NodeRenderStyle.h
+ * typedefs can cause unwanted dependencies, e.g. typedef int ExceptionCode in Document.h
+ * .cpp files don't always correspond directly to .h files, e.g. Foo.h can be implemented in e.g. chromium/FooChromium.cpp
+ * g++/clang/iwyu seems fine with using forward declarations for PassRefPtr types in some circumstances, which MSVC doesn't. \ No newline at end of file
diff --git a/docs/installation_at_vmware.md b/docs/installation_at_vmware.md
new file mode 100644
index 0000000..b66a714
--- /dev/null
+++ b/docs/installation_at_vmware.md
@@ -0,0 +1,21 @@
+# How to install Chromium OS on VMware
+
+# Download
+
+ 1. [Download VMware Player](http://www.vmware.com/products/player/)
+ 1. [Create a gdgt.com account and download the Chrome OS image](http://gdgt.com/google/chrome-os/download/)
+
+# Mounting
+
+ 1. Create a new virtual machine.
+ 1. Do not select any operating system (Other/Other, etc.).
+ 1. Delete the newly created virtual hard disk (let's say you named the machine Chrome).
+ 1. Move the downloaded hard disk image into the same folder as the other VMware files for this virtual machine.
+ 1. Rename the downloaded hard disk image to the virtual machine's original disk name (in my example it was Chrome).
+ 1. Boot the Chrome virtual machine (the NAT network configuration is recommended).
+
+# Google results
+
+ * http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/
+ * http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/
+ * http://blogs.zdnet.com/gadgetreviews/?p=9583 \ No newline at end of file
diff --git a/docs/installazione_su_vmware.md b/docs/installazione_su_vmware.md
new file mode 100644
index 0000000..14bd157
--- /dev/null
+++ b/docs/installazione_su_vmware.md
@@ -0,0 +1,20 @@
+# How to install Chromium OS on VMware
+
+# Download
+
+ 1. [Download VMware Player](http://www.vmware.com/products/player/)
+ 1. [Create a gdgt.com account and download the Chrome OS image](http://gdgt.com/google/chrome-os/download/)
+
+# Mounting the image
+
+ 1. Create a new virtual machine.
+ 1. Do not select any operating system (Other/Other, etc.).
+ 1. Delete the newly created virtual hard disk (if you named the virtual machine "Chrome", delete the file "Chrome.vmdk").
+ 1. Move the downloaded hard disk image into the same folder as the other VMware files for this virtual machine.
+ 1. Rename the downloaded hard disk image with the name of the virtual machine you created (in my example it is Chrome).
+ 1. Start the virtual machine (the NAT network configuration is recommended).
+
+# Google results
+
+ * http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/
+ * http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/
+ * http://blogs.zdnet.com/gadgetreviews/?p=9583
diff --git a/docs/ipc_fuzzer.md b/docs/ipc_fuzzer.md
new file mode 100644
index 0000000..17a80c6
--- /dev/null
+++ b/docs/ipc_fuzzer.md
@@ -0,0 +1,52 @@
+# Introduction
+
+A chromium IPC fuzzer is under development by aedla and tsepez. The fuzzer lives under `src/tools/ipc_fuzzer/` and is running on ClusterFuzz. A previous version of the fuzzer was a simple bitflipper, which caught around 10 bugs. A new version is doing smarter mutations and generational fuzzing. To do so, each `ParamTraits<Type>` needs a corresponding `FuzzTraits<Type>`. Feel free to contribute.
+
+
+---
+
+# Working with the fuzzer
+
+## Build instructions
+ * add `enable_ipc_fuzzer=1` to `GYP_DEFINES`
+ * build `ipc_fuzzer_all` target
+ * component builds are currently broken, sorry
+ * Debug builds are broken; only Release mode works.
+
+## Replaying ipcdumps
+ * `tools/ipc_fuzzer/scripts/play_testcase.py path/to/testcase.ipcdump`
+ * more help: `tools/ipc_fuzzer/scripts/play_testcase.py -h`
+
+## Listing messages in ipcdump
+ * `out/`_Build_`/ipc_message_util --dump path/to/testcase.ipcdump`
+
+## Updating fuzzers in ClusterFuzz
+ * `tools/ipc_fuzzer/scripts/cf_package_builder.py`
+ * upload `ipc_fuzzer_mut.zip` and `ipc_fuzzer_gen.zip` under build directory to ClusterFuzz
+
+## Contributing FuzzTraits
+ * add them to tools/ipc\_fuzzer/fuzzer/fuzzer.cc
+ * thanks!
+
+
+---
+
+# Components
+
+## ipcdump logger
+ * add `enable_ipc_fuzzer=1` to `GYP_DEFINES`
+ * build `chrome` and `ipc_message_dump` targets
+ * run chrome with `--no-sandbox --ipc-dump-directory=/path/to/ipcdump/directory`
+ * ipcdumps will be created in this directory for each renderer using the format _pid_.ipcdump
+
+## ipcdump replay
+Lives under `ipc_fuzzer/replay`. The renderer is replaced with `ipc_fuzzer_replay` using `--renderer-cmd-prefix`. This is done automatically with the `ipc_fuzzer/play_testcase.py` convenience script.
+
+## ipcdump mutator / generator
+Lives under `ipc_fuzzer/fuzzer`. This is the code that runs on ClusterFuzz. It uses `FuzzTraits<Type>` to mutate ipcdumps or generate them out of thin air.
+
+
+---
+
+# Problems, questions, suggestions
+Send them to mbarbella@chromium.org. \ No newline at end of file
diff --git a/docs/kiosk_mode.md b/docs/kiosk_mode.md
new file mode 100644
index 0000000..55bc39c
--- /dev/null
+++ b/docs/kiosk_mode.md
@@ -0,0 +1,85 @@
+## Introduction
+
+If you have a real world kiosk application that you want to run on Google Chrome, then below are the steps to take to simulate kiosk mode.
+
+
+## Steps to Simulate Kiosk Mode
+
+### Step 1
+
+Compile the following Java code:
+
+```
+import java.awt.*;
+import java.applet.*;
+import java.security.*;
+import java.awt.event.*;
+
+public class FullScreen extends Applet
+{
+ public void fullScreen()
+ {
+ AccessController.doPrivileged
+ (
+ new PrivilegedAction()
+ {
+ public Object run()
+ {
+ try
+ {
+ Robot robot = new Robot();
+ robot.keyPress(KeyEvent.VK_F11);
+ }
+ catch (AWTException e)
+ {
+ e.printStackTrace();
+ }
+ return null;
+ }
+ }
+ );
+ }
+}
+```
+
+### Step 2
+
+Include it in an applet on your kiosk application's home page:
+
+```
+<applet name="appletFullScreen" code="FullScreen.class" width="1" height="1"></applet>
+```
+
+### Step 3
+
+Add the following to the kiosk computer's java.policy file:
+
+```
+grant codeBase "http://yourservername/*"
+{
+ permission java.security.AllPermission;
+};
+```
+
+### Step 4
+
+Include the following JavaScript and assign the doLoad function to the onload event:
+
+```
+var _appletFullScreen;
+
+function doLoad()
+{
+ _appletFullScreen = document.applets[0];
+ doFullScreen();
+}
+
+function doFullScreen()
+{
+ if (_appletFullScreen && _appletFullScreen.fullScreen)
+ {
+// Add an if statement to check whether document.body.clientHeight is not indicative of full screen mode
+ _appletFullScreen.fullScreen();
+ }
+}
+``` \ No newline at end of file
diff --git a/docs/layout_tests_linux.md b/docs/layout_tests_linux.md
new file mode 100644
index 0000000..c5d0be0
--- /dev/null
+++ b/docs/layout_tests_linux.md
@@ -0,0 +1,90 @@
+# Running layout tests on Linux
+
+ 1. Build `blink_tests` (see LinuxBuildInstructions)
+ 1. Checkout the layout tests
+ * If you have an entry in your .gclient file that includes "LayoutTests", you may need to comment it out and sync.
+ * You can run a subset of the tests by passing in a path relative to `src/third_party/WebKit/LayoutTests/`. For example, `run_layout_tests.py fast` will only run the tests under `src/third_party/WebKit/LayoutTests/fast/`.
+ 1. When the tests finish, any unexpected results should be displayed.
+
+See [Running WebKit Layout Tests](http://dev.chromium.org/developers/testing/webkit-layout-tests) for full documentation about set up and available options.
+
+## Pixel Tests
+
+The pixel test results were generated on Ubuntu 10.04 (Lucid). If you're running a newer version of Ubuntu, you will get some pixel test failures due to changes in freetype or fonts. In this case, you can create a Lucid 64 chroot using `build/install-chroot.sh` to compile and run tests.
+
+## Fonts
+
+Make sure you have all the necessary fonts installed.
+```
+sudo apt-get install apache2 wdiff php5-cgi ttf-indic-fonts \
+ msttcorefonts ttf-dejavu-core ttf-kochi-gothic ttf-kochi-mincho \
+ ttf-thai-tlwg
+```
+
+You can also just run `build/install-build-deps.sh` again.
+
+## Plugins
+
+If `fast/dom/object-plugin-hides-properties.html` and `plugins/embed-attributes-style.html` are failing, try uninstalling `totem-mozilla` from your system:
+```
+sudo apt-get remove totem-mozilla
+```
+
+## Running layout tests under valgrind on Linux
+
+As above, but use `tools/valgrind/chrome_tests.sh -t webkit` instead. e.g.
+```
+sh tools/valgrind/chrome_tests.sh -t webkit LayoutTests/fast/
+```
+This defaults to using --debug. Read the script for more details.
+
+If you're trying to reproduce a run from the valgrind buildbot, look for the --run\_chunk=XX:YY
+line in the bot's log. You can rerun exactly as the bot did with the commands
+```
+cd ~/chromium/src
+echo XX > valgrind_layout_chunk.txt
+sh tools/valgrind/chrome_tests.sh -t layout -n YY
+```
+That will run the XXth chunk of YY layout tests.
+
+## Configuration tips
+ * Use an optimized content\_shell when rebaselining or running a lot of tests. ([bug 8475](http://code.google.com/p/chromium/issues/detail?id=8475) is about how the debug output differs from the optimized output.) `ninja -C out/Release content_shell`
+ * Make sure you have wdiff installed: `sudo apt-get install wdiff` to get prettier diff output
+ * Some pixel tests may fail due to processor-specific rounding errors. Build using a chroot jail with Lucid 64-bit user space to be sure that your system matches the checked in baselines. You can use `build/install-chroot.sh` to set up a Lucid 64 chroot. Learn more about [UsingALinuxChroot](UsingALinuxChroot.md).
+## Getting a layout test into a debugger
+
+There are two ways:
+ 1. Run content\_shell directly rather than using run\_layout\_tests.py (see the example after this list). You will need to pass some options:
+ * `--no-timeout` to give you plenty of time to debug
+ * the fully qualified path of the layout test (rather than relative to `WebKit/LayoutTests`).
+ 1. Or, run as normal but with the `--additional-drt-flag=--renderer-startup-dialog --additional-drt-flag=--no-timeout --time-out-ms=86400000` flags. The first one makes content\_shell bring up a dialog before running, which then would let you attach to the process via `gdb -p PID_OF_DUMPRENDERTREE`. The others help avoid the test shell and DumpRenderTree timeouts during the debug session.
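+
+A rough sketch of the first option, run under gdb (the output directory and test name are placeholders):
+
+```
+gdb --args out/Debug/content_shell --no-timeout \
+    /path/to/src/third_party/WebKit/LayoutTests/fast/dom/some_test.html
+```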
+
+## Using an embedded X server
+
+If you try to use your computer while the tests are running, you may get annoyed as windows are opened and closed automatically. To get around this, you can create a separate X server for running the tests.
+
+ 1. Install Xephyr (`sudo apt-get install xserver-xephyr`)
+ 1. Start Xephyr as display 4: `Xephyr :4 -screen 1024x768x24`
+ 1. Run the layout tests in the Xephyr: `DISPLAY=:4 run_layout_tests.py`
+
+Xephyr supports debugging repainting. See the [Xephyr README](http://cgit.freedesktop.org/xorg/xserver/tree/hw/kdrive/ephyr/README) for details. In brief:
+ 1. `XEPHYR_PAUSE=$((500*1000)) Xephyr ...etc... # 500 ms repaint flash`
+ 1. `kill -USR1 $(pidof Xephyr)`
+
+If you don't want to see anything at all, you can use Xvfb (should already be installed).
+ 1. Start Xvfb as display 4: `Xvfb :4 -screen 0 1024x768x24`
+ 1. Run the layout tests in the Xvfb: `DISPLAY=:4 run_layout_tests.py`
+
+## Tiling Window managers
+
+The layout tests want to run with the window at a particular size down to the pixel level. This means if your window manager resizes the window it'll cause test failures. This is another good reason to use an embedded X server.
+
+### xmonad
+In your `.xmonad/xmonad.hs`, change your config to include a manageHook along these lines:
+```
+test_shell_manage = className =? "Test_shell" --> doFloat
+main = xmonad $
+ defaultConfig
+ { manageHook = test_shell_manage <+> manageHook defaultConfig
+ ...
+``` \ No newline at end of file
diff --git a/docs/linux64_bit_issues.md b/docs/linux64_bit_issues.md
new file mode 100644
index 0000000..98efb2d
--- /dev/null
+++ b/docs/linux64_bit_issues.md
@@ -0,0 +1,67 @@
+**Note: This page is (somewhat) obsolete.** Chrome has a native 64-bit build now. Normal users should be running 64-bit Chrome on 64-bit systems. However, it's possible developers might be using a 64-bit system to build 32-bit Chrome, and will want to test the build on the same system. In that case, some of these tips may still be useful (though there might not be much more work done to address problems with this configuration).
+
+Many 64-bit Linux distros allow you to run 32-bit apps but have many libraries misconfigured. The distros may be fixed at some point, but in the meantime we have workarounds.
+
+## IME path wrong
+Symptom: IME doesn't work. `Gtk: /usr/lib/gtk-2.0/2.10.0/immodules/im-uim.so: wrong ELF class: ELFCLASS64`
+
+Chromium bug: [9643](http://code.google.com/p/chromium/issues/detail?id=9643)
+
+Affected systems:
+| **Distro** | **upstream bug** |
+|:-----------|:-----------------|
+| Ubuntu Hardy, Jaunty | [190227](https://bugs.launchpad.net/ubuntu/+source/ia32-libs/+bug/190227) |
+
+Workaround: If your xinput configuration uses SCIM, im-scim.so is searched for in the lib32 directory, but the ia32 package does not include im-scim.so on Ubuntu (and perhaps other distributions). Ubuntu Hardy, however, has a 32-bit im-xim.so, so invoking Chrome as follows enables SCIM in Chrome:
+
+```
+GTK_IM_MODULE=xim XMODIFIERS="@im=SCIM" chrome
+```
+
+
+## GTK filesystem module path wrong
+Symptom: File picker doesn't work. `Gtk: /usr/lib/gtk-2.0/2.10.0/filesystems/libgio.so: wrong ELF class: ELFCLASS64`
+
+Chromium bug: [12151](http://code.google.com/p/chromium/issues/detail?id=12151)
+
+Affected systems:
+| **Distro** | **upstream bug** |
+|:-----------|:-----------------|
+| Ubuntu Hardy, Jaunty, Koala alpha 2 | [190227](https://bugs.launchpad.net/ubuntu/+source/ia32-libs/+bug/190227) |
+
+Workaround: ??
+
+## GIO module path wrong
+Symptom: `/usr/lib/gio/modules/libgioremote-volume-monitor.so: wrong ELF class: ELFCLASS64`
+
+Chromium bug: [12193](http://code.google.com/p/chromium/issues/detail?id=12193)
+
+Affected systems:
+| **Distro** | **upstream bug** |
+|:-----------|:-----------------|
+| Ubuntu Hardy, Jaunty, Koala alpha 2 | [190227](https://bugs.launchpad.net/ubuntu/+source/ia32-libs/+bug/190227) |
+
+Workaround: ??
+
+## Can't install on 64 bit Ubuntu 9.10 Live CD
+Symptom: "Error: Dependency is not satisfiable: ia32-libs-gtk"
+
+Chromium bug: n/a
+
+Affected systems:
+| **Distro** | **upstream bug** |
+|:-----------|:-----------------|
+| Ubuntu Koala alpha 2 | |
+
+Workaround: Enable the Universe repository. (It's enabled by default
+when you actually install Ubuntu; only the live CD has it disabled.)
+
+## gconv path wrong
+Symptom: Paste doesn't work. `Gdk: Error converting selection from STRING: Conversion from character set 'ISO-8859-1' to 'UTF-8' is not supported`
+
+Chromium bug: [12312](http://code.google.com/p/chromium/issues/detail?id=12312)
+
+Affected systems:
+| **Distro** | **upstream bug** |
+|:-----------|:-----------------|
+| Arch | ?? |
+
+Workaround: Set `GCONV_PATH` to appropriate `/path/to/lib32/usr/lib/gconv` . \ No newline at end of file
diff --git a/docs/linux_build_instructions.md b/docs/linux_build_instructions.md
new file mode 100644
index 0000000..3383b45
--- /dev/null
+++ b/docs/linux_build_instructions.md
@@ -0,0 +1,164 @@
+# Build instructions for Linux
+
+
+
+## Overview
+Due to its complexity, Chromium uses a set of custom tools to check out and build. Here's an overview of the steps you'll run:
+ 1. **gclient**. A checkout involves pulling nearly 100 different SVN repositories of code. This process is managed with a tool called `gclient`.
+ 1. **gyp**. The cross-platform build configuration system is called `gyp`, and on Linux it generates ninja build files. Running `gyp` is analogous to the `./configure` step seen in most other software.
+ 1. **ninja**. The actual build itself uses `ninja`. A prebuilt binary is in depot\_tools and should already be in your path if you followed the steps to check out Chromium.
+ 1. We don't provide any sort of "install" step.
+ 1. You may want to [use a chroot](http://code.google.com/p/chromium/wiki/UsingALinuxChroot) to isolate yourself from versioning or packaging conflicts (or to run the layout tests).
+
+## Getting a checkout
+ * [Prerequisites](LinuxBuildInstructionsPrerequisites.md): what you need before you build
+ * [Get the Code](http://dev.chromium.org/developers/how-tos/get-the-code): check out the source code.
+
+**Note**. If you are working on Chromium OS and already have sources in `chromiumos/chromium`, you **must** run `chrome_set_ver --runhooks` to set the correct dependencies. This step is otherwise performed by `gclient` as part of your checkout.
+
+## First Time Build Bootstrap
+ * Make sure your dependencies are up to date by running the `install-build-deps.sh` script:
+```
+.../chromium/src$ build/install-build-deps.sh
+```
+
+ * Before you build, you should also [install API keys](https://sites.google.com/a/chromium.org/dev/developers/how-tos/api-keys).
+
+## `gyp` (configuring)
+After `gclient sync` finishes, it will run `gyp` automatically to generate the ninja build files. For standard chromium builds, this automatic step is sufficient and you can start [compiling](https://code.google.com/p/chromium/wiki/LinuxBuildInstructions#Compilation).
+
+To manually configure `gyp`, run `gclient runhooks` or run `gyp` directly via `build/gyp_chromium`. See [Configuring the Build](https://code.google.com/p/chromium/wiki/CommonBuildTasks#Configuring_the_Build) for detailed `gyp` options.
+
+[GypUserDocumentation](https://code.google.com/p/gyp/wiki/GypUserDocumentation) gives background on `gyp`, but is not necessary if you are just building Chromium.
+
+### Configuring `gyp`
+See [Configuring the Build](https://code.google.com/p/chromium/wiki/CommonBuildTasks#Configuring_the_Build) for details; most often you'll be changing the `GYP_DEFINES` options, which is discussed here.
+
+`gyp` supports a minimal amount of build configuration via the `-D` flag.
+```
+build/gyp_chromium -Dflag1=value1 -Dflag2=value2
+```
+You can store these in the `GYP_DEFINES` environment variable, separating flags with spaces, as in:
+```
+ export GYP_DEFINES="flag1=value1 flag2=value2"
+```
+After changing your `GYP_DEFINES` you need to rerun `gyp`, either implicitly via `gclient sync` (which also syncs) or `gclient runhooks` or explicitly via `build/gyp_chromium`.
+
+Note that quotes are not necessary for a single flag, but are useful for clarity; `GYP_DEFINES=flag1=value1` is syntactically valid but can be confusing compared to `GYP_DEFINES="flag1=value1"`.
+
+If you have various flags for various purposes, you may find it more legible to break them up across several lines, taking care to include spaces, like this:
+```
+ export GYP_DEFINES="flag1=value1"\
+ " flag2=value2"
+```
+or like this (allowing comments):
+```
+ export GYP_DEFINES="flag1=value1" # comment
+ GYP_DEFINES+=" flag2=value2" # another comment
+```
+
+### Sample configurations
+ * **gcc warnings**. By default, the build fails if there are any compiler warnings. If warnings are blocking your build and you just want to get things done, you can specify `-Dwerror=` to turn them off:
+```
+# one-off
+build/gyp_chromium -Dwerror=
+# via variable
+export GYP_DEFINES="werror="
+build/gyp_chromium
+```
+
+ * **ChromeOS**. `-Dchromeos=1` builds the ChromeOS version of Chrome. This is **not** all of ChromeOS (see [the ChromiumOS](http://www.chromium.org/chromium-os) page for full build instructions); this is just the slightly tweaked version of the browser that runs on that system. It's not designed to be run outside of ChromeOS and some features won't work, but compiling on your Linux desktop can be useful for certain types of development and testing.
+```
+# one-off
+build/gyp_chromium -Dchromeos=1
+# via variable
+export GYP_DEFINES="chromeos=1"
+build/gyp_chromium
+```
+
+
+## Compilation
+The weird "`src/`" directory is an artifact of `gclient`. Start with:
+```
+$ cd src
+```
+
+### Build just chrome
+```
+$ ninja -C out/Debug chrome
+```
+
+### Faster builds
+See LinuxFasterBuilds
+
+### Build every test
+```
+$ ninja -C out/Debug
+```
+The above builds all libraries and tests in all components. **It will take hours.**
+
+Specify other target names to restrict the build to just what you're
+interested in. To build just the simplest unit test:
+```
+$ ninja -C out/Debug base_unittests
+```
+
+### Clang builds
+
+Information about building with Clang can be found [here](http://code.google.com/p/chromium/wiki/Clang).
+
+### Output
+
+Executables are written in `src/out/Debug/` for Debug builds, and `src/out/Release/` for Release builds.
+
+### Release mode
+
+Pass `-C out/Release` to the ninja invocation:
+```
+$ ninja -C out/Release chrome
+```
+
+### Seeing the commands
+
+If you want to see the actual commands that ninja is invoking, add `-v` to the ninja invocation.
+```
+$ ninja -v -C out/Debug chrome
+```
+This is useful if, for example, you are debugging gyp changes, or otherwise need to see what ninja is actually doing.
+
+### Clean builds
+All built files are put into the `out/` directory, so to start over with a clean build, just
+```
+rm -rf out
+```
+and run `gclient runhooks` or `build/gyp_chromium` again to recreate the ninja build files (which are also stored in `out/`). Or you can run `ninja -C out/Debug -t clean`.
+
+### Linker Crashes
+If, during the final link stage:
+```
+ LINK(target) out/Debug/chrome
+```
+
+you get an error like:
+```
+collect2: ld terminated with signal 6 Aborted terminate called after throwing an instance of 'std::bad_alloc'
+
+collect2: ld terminated with signal 11 [Segmentation fault], core dumped
+```
+you are probably running out of memory when linking. Try one of:
+ 1. Use the `gold` linker
+ 1. Build on a 64-bit computer
+ 1. Build in Release mode (debugging symbols require a lot of memory)
+ 1. Build as shared libraries (note: this build is for developers only, and may have broken functionality); see the sketch after this list
+Most of these are described on the LinuxFasterBuilds page.
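+
+As a sketch of the shared-library option (the "component build", assuming the standard `component` gyp define):
+```
+export GYP_DEFINES="$GYP_DEFINES component=shared_library"
+build/gyp_chromium
+ninja -C out/Debug chrome
+```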
+
+## Advanced Features
+
+ * Building frequently? See LinuxFasterBuilds.
+ * Cross-compiling for ARM? See LinuxChromiumArm.
+ * Want to use Eclipse as your IDE? See LinuxEclipseDev.
+ * Built version as Default Browser? See LinuxDevBuildAsDefaultBrowser.
+
+## Next Steps
+If you want to contribute to the effort toward a Chromium-based browser for Linux, please check out the [Linux Development page](LinuxDevelopment.md) for more information. \ No newline at end of file
diff --git a/docs/linux_build_instructions_prerequisites.md b/docs/linux_build_instructions_prerequisites.md
new file mode 100644
index 0000000..fa179a0
--- /dev/null
+++ b/docs/linux_build_instructions_prerequisites.md
@@ -0,0 +1,116 @@
+This page describes system requirements for building Chromium on Linux.
+
+
+
+# System Requirements
+
+## Linux distribution
+You should be able to build Chromium on any reasonably modern Linux distribution, but there are a lot of distributions and we sometimes break things on one or another. Internally, our development platform has been a variant of Ubuntu 14.04 (Trusty Tahr); we expect you will have the most luck on this platform, although directions for other popular platforms are included below.
+
+## Disk space
+It takes about 10GB or so of disk space to check out and build the source tree. This number grows over time.
+
+## Memory space
+Linking Chromium and its tests can require about 8 GB of swap. If you get an out-of-memory error during the final link, you will need to add swap space with `swapon`. It's recommended to have at least 4 GB of memory available for building a statically linked debug build. Dynamic linking and/or building a release build lowers memory requirements. People with less than 8 GB of memory may want to skip building the tests, since they are quite large.
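+
+For example, a rough sketch of adding an 8 GB swap file (size and path are illustrative):
+```
+sudo fallocate -l 8G /swapfile
+sudo chmod 600 /swapfile
+sudo mkswap /swapfile
+sudo swapon /swapfile
+```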
+
+## 64-bit Systems
+Chromium can be compiled as either a 32-bit or 64-bit application. Chromium requires several system libraries to compile and run. While it is possible to compile and run a 32-bit Chromium on 64-bit Linux, many distributions are missing the necessary 32-bit libraries, which results in build or run-time errors.
+
+## Depot tools
+Before setting up the environment, make sure you install the [depot tools](http://dev.chromium.org/developers/how-tos/depottools) first.
+
+# Software Requirements
+
+## Ubuntu Setup
+Run [build/install-build-deps.sh](https://chromium.googlesource.com/chromium/chromium/+/trunk/build/install-build-deps.sh). The script only supports the current releases listed on https://wiki.ubuntu.com/Releases.
+
+Building on Linux requires software not usually installed with the distributions.
+The script attempts to automate installing the required software. It is used to set up the canonical builders, and as such is the most up-to-date reference for the required prerequisites.
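+
+For example, from the top of your checkout:
+```
+$ cd /path/to/chromium/src
+$ ./build/install-build-deps.sh
+```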
+
+## Other distributions
+Note: Other distributions are not officially supported for building and the instructions below might be outdated.
+
+### Debian Setup
+
+Follow the Ubuntu instructions above.
+
+If you want to install the build-deps manually, note that the original packages are for Ubuntu. Here are the Debian equivalents:
+ * libexpat-dev -> libexpat1-dev
+ * freetype-dev -> libfreetype6-dev
+ * libbzip2-dev -> libbz2-dev
+ * libcupsys2-dev -> libcups2-dev
+
+Additionally, if you're building Chromium components for Android, you'll need to install the package: lib32z1
+
+### openSUSE Setup
+
+For openSUSE 11.0 and later, see [Linux openSUSE Build Instructions](LinuxOpenSuseBuildInstructions.md).
+
+### Fedora Setup
+
+Recent systems:
+```
+su -c 'yum install subversion pkgconfig python perl gcc-c++ bison \
+flex gperf nss-devel nspr-devel gtk2-devel glib2-devel freetype-devel \
+atk-devel pango-devel cairo-devel fontconfig-devel GConf2-devel \
+dbus-devel alsa-lib-devel libX11-devel expat-devel bzip2-devel \
+dbus-glib-devel elfutils-libelf-devel libjpeg-devel \
+mesa-libGLU-devel libXScrnSaver-devel \
+libgnome-keyring-devel cups-devel libXtst-devel libXt-devel pam-devel'
+```
+
+The msttcorefonts packages can be obtained by following the instructions present here: http://www.fedorafaq.org/#installfonts
+
+For the optional packages:
+ * php-cgi is provided by the php-cli package
+ * wdiff doesn't exist in Fedora repositories, a possible alternative would be dwdiff
+ * sun-java6-fonts doesn't exist in Fedora repositories, needs investigating
+
+```
+su -c 'yum install httpd mod_ssl php php-cli wdiff'
+```
+
+### Arch Linux Setup
+Most of these packages are probably already installed since they're often used, and the `--needed` parameter ensures that packages that are already up to date are not reinstalled.
+```
+$ sudo pacman -S --needed python perl gcc gcc-libs bison flex gperf pkgconfig nss \
+ alsa-lib gconf glib2 gtk2 nspr ttf-ms-fonts freetype2 cairo dbus \
+ libgnome-keyring
+```
+
+For the optional packages on Arch Linux:
+ * php-cgi is provided with pacman
+ * wdiff is not in the main repository but dwdiff is. You can get wdiff in AUR/yaourt
+ * sun-java6-fonts do not seem to be in main repository or AUR.
+
+For a successful build, add `'remove_webcore_debug_symbols': 1,` to the variables object in `include.gypi`. Tested on 64-bit Arch Linux.
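+
+A sketch of what that looks like (assuming your `include.gypi` follows the usual gyp layout):
+```
+{
+  'variables': {
+    'remove_webcore_debug_symbols': 1,
+  },
+}
+```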
+
+TODO: Figure out how to make it build with the WebCore debug symbols. `make V=1` can be useful for solving the problem.
+
+
+### Mandriva setup
+
+```
+urpmi lib64fontconfig-devel lib64alsa2-devel lib64dbus-1-devel lib64GConf2-devel \
+lib64freetype6-devel lib64atk1.0-devel lib64gtk+2.0_0-devel lib64pango1.0-devel \
+lib64cairo-devel lib64nss-devel lib64nspr-devel g++ python perl bison flex subversion \
+gperf
+```
+
+Note 1: msttcorefonts are not available; you will need to build your own (see the instructions at http://code.google.com/p/chromium/wiki/MandrivaMsttcorefonts, it is not hard to do) or use drakfont to import the fonts from a Windows installation.
+
+Note 2: these packages are for 64-bit systems; to download the 32-bit packages, substitute lib64 with lib.
+
+Note 3: some of these packages might not be explicitly necessary as they come in as dependencies, but there is no harm in including them.
+
+Note 4: to build on 64-bit systems, set `GYP_DEFINES="target_arch=x64"` and rerun hooks, as mentioned in the general notes for building on 64-bit:
+
+```
+export GYP_DEFINES="target_arch=x64"
+gclient runhooks --force
+```
+
+### Gentoo setup
+```
+emerge www-client/chromium
+``` \ No newline at end of file
diff --git a/docs/linux_building_debug_gtk.md b/docs/linux_building_debug_gtk.md
new file mode 100644
index 0000000..75cdf93
--- /dev/null
+++ b/docs/linux_building_debug_gtk.md
@@ -0,0 +1,110 @@
+# Introduction
+
+Sometimes installing the debug packages for gtk and glib isn't quite enough.
+(For instance, if the artifacts from -O2 are driving you bonkers in gdb, you
+might want to rebuild with -O0.)
+Here's how to build from source and use your local version without installing it.
+
+## 32-bit systems
+
+On Ubuntu, to download and build glib and gtk suitable for debugging:
+
+1. If you don't have a gpg key yet, generate one with gpg --gen-key.
+
+2. Create a file `~/.devscripts` containing `DEBSIGN_KEYID=yourkey`, e.g.
+`DEBSIGN_KEYID=CC91A262`.
+(See http://www.debian.org/doc/maint-guide/ch-build.en.html.)
+
+3. If you're on a 32 bit system, do:
+```
+#!/bin/sh
+set -x
+set -e
+# Workaround for "E: Build-dependencies for glib2.0 could not be satisfied"
+# See also https://bugs.launchpad.net/ubuntu/+source/apt/+bug/245068
+sudo apt-get install libgamin-dev
+sudo apt-get build-dep glib2.0 gtk+2.0
+rm -rf ~/mylibs
+mkdir ~/mylibs
+cd ~/mylibs
+apt-get source glib2.0 gtk+2.0
+cd glib2.0*
+DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
+cd ../gtk+2.0*
+DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
+```
+This should take about an hour. If it gets stuck waiting for a zombie,
+you may have to kill its closest parent (the makefile uses subshells,
+and bash seems to get confused). When I did this, it continued successfully.
+
+At the very end, it will prompt you for the passphrase for your gpg key.
+
+Then, to run an app with those libraries, do e.g.
+```
+export LD_LIBRARY_PATH=$HOME/mylibs/gtk+2.0-2.16.1/debian/install/shared/usr/lib:$HOME/mylibs/gtk+2.0-2.20.1/debian/install/shared/usr/lib
+```
+
+gdb ignores that variable, so in the debugger, you would have to do something like
+```
+set solib-search-path $HOME/mylibs/gtk+2.0-2.16.1/debian/install/shared/usr/lib:$HOME/mylibs/gtk+2.0-2.20.1/debian/install/shared/usr/lib
+```
+
+See also http://sources.redhat.com/gdb/current/onlinedocs/gdb_17.html
+
+## 64-bit systems
+
+If you're on a 64-bit system, you can do the above on a 32-bit
+system and copy the result. Or try one of the following:
+
+### Building your own GTK
+
+```
+apt-get source glib-2.0 gtk+-2.0
+
+export CFLAGS='-m32 -g'
+export LDFLAGS=-L/usr/lib32
+export LD_LIBRARY_PATH=/work/32/lib
+export PKG_CONFIG_PATH=/work/32/lib/pkgconfig
+
+# glib
+setarch i386 ./configure --prefix=/work/32 --enable-debug=yes
+
+# gtk
+setarch i386 ./configure --prefix=/work/32 --enable-debug=yes --without-libtiff
+```
+
+
+### ia32-libs
+_Note: Evan tried this and didn't get any debug libs at the end._
+
+Or you could try this instead:
+```
+#!/bin/sh
+set -x
+set -e
+sudo apt-get build-dep ia32-libs
+rm -rf ~/mylibs
+mkdir ~/mylibs
+cd ~/mylibs
+apt-get source ia32-libs
+cd ia32-libs*
+DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
+```
+
+By default, this just grabs and unpacks prebuilt libraries; see
+ia32-libs-2.7ubuntu6/fetch-and-build which documents a BUILD
+variable which would force actual building.
+This would take way longer, since it builds dozens of libraries.
+I haven't tried it yet.
+
+#### Possible Issues
+
+debuild may fail with
+```
+gpg: [stdin]: clearsign failed: secret key not available
+debsign: gpg error occurred! Aborting....
+```
+if you forget to create ~/.devscripts with the right contents.
+
+The build may fail with a "FAIL: abicheck.sh" if gold is your system
+linker. Use ld instead. \ No newline at end of file
diff --git a/docs/linux_cert_management.md b/docs/linux_cert_management.md
new file mode 100644
index 0000000..7faf6ba
--- /dev/null
+++ b/docs/linux_cert_management.md
@@ -0,0 +1,64 @@
+**NOTE:** SSL client authentication with personal certificates does not work completely in Linux, see [issue 16830](http://code.google.com/p/chromium/issues/detail?id=16830) and [issue 25241](http://code.google.com/p/chromium/issues/detail?id=25241).
+
+# Introduction
+
+The easy way to manage certificates is to navigate to chrome://settings/search#ssl. Then click on the "Manage Certificates" button. This will load a built-in interface for managing certificates.
+
+On Linux, Chromium uses the [NSS Shared DB](https://wiki.mozilla.org/NSS_Shared_DB_And_LINUX). If the built-in manager does not work for you then you can configure certificates with the [NSS command line tools](http://www.mozilla.org/projects/security/pki/nss/tools/).
+
+# Details
+
+## Get the tools
+ * Debian/Ubuntu: `sudo apt-get install libnss3-tools`
+ * Fedora: `su -c "yum install nss-tools"`
+ * Gentoo: `su -c "echo 'dev-libs/nss utils' >> /etc/portage/package.use && emerge dev-libs/nss"` (You need to launch all commands below with the `nss` prefix, e.g., `nsscertutil`.)
+ * Opensuse: `sudo zypper install mozilla-nss-tools`
+
+
+## List all certificates
+
+`certutil -d sql:$HOME/.pki/nssdb -L`
+
+### Ubuntu Jaunty error
+The above (and most other commands) gives:
+
+`certutil: function failed: security library: invalid arguments.`
+
+Package version 3.12.3.1-0ubuntu0.9.04.2
+
+## List details of a certificate
+
+`certutil -d sql:$HOME/.pki/nssdb -L -n <certificate nickname>`
+
+## Add a certificate
+
+`certutil -d sql:$HOME/.pki/nssdb -A -t <TRUSTARGS> -n <certificate nickname> -i <certificate filename>`
+
+The TRUSTARGS are three strings of zero or more alphabetic
+characters, separated by commas. They define how the certificate should be trusted for SSL, email, and object signing, and are explained in the [certutil docs](http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html#1034193) or [Meena's blog post on trust flags](https://blogs.oracle.com/meena/entry/notes_about_trust_flags).
+
+For example, to trust a root CA certificate for issuing SSL server certificates, use
+
+`certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n <certificate nickname> -i <certificate filename>`
+
+To import an intermediate CA certificate, use
+
+`certutil -d sql:$HOME/.pki/nssdb -A -t ",," -n <certificate nickname> -i <certificate filename>`
+
+Note: to trust a self-signed server certificate, we should use
+
+`certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n <certificate nickname> -i <certificate filename>`
+
+This should work now, because [NSS bug 531160](https://bugzilla.mozilla.org/show_bug.cgi?id=531160) is claimed to be fixed in a related bug report. If it doesn't work, then to work around the NSS bug, you have to trust it as a CA using the "C,," trust flags.
+
+### Add a personal certificate and private key for SSL client authentication
+
+Use the command:
+
+`pk12util -d sql:$HOME/.pki/nssdb -i PKCS12_file.p12`
+
+to import a personal certificate and private key stored in a PKCS #12 file. The TRUSTARGS of the personal certificate will be set to "u,u,u".
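+
+To confirm the import, list the database again and check that the new certificate shows up with "u,u,u" trust flags:
+
+`certutil -d sql:$HOME/.pki/nssdb -L`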
+
+## Delete a certificate
+
+`certutil -d sql:$HOME/.pki/nssdb -D -n <certificate nickname>` \ No newline at end of file
diff --git a/docs/linux_chromium_arm.md b/docs/linux_chromium_arm.md
new file mode 100644
index 0000000..927be6e
--- /dev/null
+++ b/docs/linux_chromium_arm.md
@@ -0,0 +1,123 @@
+Note this currently contains various recipes:
+
+
+
+---
+
+
+# Recipe1: Building for an ARM CrOS device
+This recipe uses `ninja` (instead of `make`) so its startup time is much lower (sub-1s, instead of tens of seconds), is integrated with goma (for google-internal users) for very high parallelism, and uses `sshfs` instead of `scp` to significantly speed up the compile-run cycle. It has moved to https://sites.google.com/a/chromium.org/dev/developers/how-tos/-quickly-building-for-cros-arm-x64 (mostly b/c of the ease of attaching files to sites).
+
+
+
+---
+
+
+# Recipe2: Explicit Cross compiling
+
+Due to the lack of ARM hardware with the grunt to build Chromium natively, cross compiling is currently the recommended method of building for ARM.
+
+These instructions are designed for Ubuntu Precise.
+
+### Installing the toolchain
+
+The install-build-deps script can be used to install all the compiler
+and library dependencies directly from Ubuntu:
+
+```
+$ ./build/install-build-deps.sh --arm
+```
+
+### Installing the rootfs
+
+A prebuilt rootfs image is kept up-to-date on Cloud Storage. It will
+be installed automatically by `gclient runhooks` if you have `target_arch=arm` in your `GYP_DEFINES`.
+
+To install the sysroot manually you can run:
+```
+$ ./chrome/installer/linux/sysroot_scripts/install-debian.wheezy.sysroot.py --arch=arm
+```
+
+### Building
+
+To build for ARM, using the clang binary in the chrome tree, use the following settings:
+
+```
+export GYP_CROSSCOMPILE=1
+export GYP_DEFINES="target_arch=arm"
+```
+
+These variables need to be set at gyp time (when you run `gyp_chromium`),
+but are not needed at build time (when you run make/ninja).
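+
+Putting it together, a typical cross-build sketch is:
+```
+export GYP_CROSSCOMPILE=1
+export GYP_DEFINES="target_arch=arm"
+build/gyp_chromium
+ninja -C out/Debug chrome
+```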
+
+## Testing
+
+### Automated Build and Testing
+
+Chromium's testing infrastructure for ARM/Linux is (to say the least)
+in its infancy. There are currently two builders set up, one on the
+FYI waterfall and one on the trybot waterfall:
+
+http://build.chromium.org/p/chromium.fyi/builders/Linux%20ARM%20Cross-Compile
+http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_arm
+
+
+These builders cross compile on x86-64 and then trigger testing
+on real ARM hardware bots:
+
+http://build.chromium.org/p/chromium.fyi/builders/Linux%20ARM%20Tests%20%28Panda%29/
+http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_arm_tester
+
+Unfortunately, even though the builders are usually green, the testers
+are not yet well maintained or monitored.
+
+There are also a compile-only trybot and an FYI bot:
+
+http://build.chromium.org/p/chromium.fyi/builders/Linux%20ARM
+http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_arm_compile
+
+### Testing with QEMU
+
+If you don't have a real ARM machine, you can test with QEMU. For instance, there are some prebuilt QEMU Debian images here: http://people.debian.org/~aurel32/qemu/. Another option is to use the rootfs generated by rootstock, as mentioned above.
+
+Here's a minimal xorg.conf if needed:
+
+```
+Section "InputDevice"
+ Identifier "Generic Keyboard"
+ Driver "kbd"
+ Option "XkbRules" "xorg"
+ Option "XkbModel" "pc105"
+ Option "XkbLayout" "us"
+EndSection
+
+Section "InputDevice"
+ Identifier "Configured Mouse"
+ Driver "mouse"
+EndSection
+
+Section "Device"
+ Identifier "Configured Video Device"
+ Driver "fbdev"
+ Option "UseFBDev" "true"
+EndSection
+
+Section "Monitor"
+ Identifier "Configured Monitor"
+EndSection
+
+Section "Screen"
+ Identifier "Default Screen"
+ Monitor "Configured Monitor"
+ Device "Configured Video Device"
+ DefaultDepth 8
+ SubSection "Display"
+ Depth 8
+ Modes "1024x768" "800x600" "640x480"
+ EndSubSection
+EndSection
+```
+
+### Notes
+ * Building for Thumb reduces the stripped release binary by around 9 MB, equating to ~33% of the binary size. To enable Thumb, set `'arm_thumb': 1`.
+ * TCmalloc does not have an ARM port, so it is disabled. \ No newline at end of file
diff --git a/docs/linux_chromium_packages.md b/docs/linux_chromium_packages.md
new file mode 100644
index 0000000..c91b051
--- /dev/null
+++ b/docs/linux_chromium_packages.md
@@ -0,0 +1,36 @@
+Some Linux distributions package up Chromium for easy installation. Please note that Chromium is not identical to Google Chrome -- see ChromiumBrowserVsGoogleChrome -- and that distributions may (and actually do) make their own modifications.
+
+| **Distro** | **Contact** | **URL for packages** | **URL for distro-specific patches** |
+|:-----------|:------------|:---------------------|:------------------------------------|
+| Ubuntu | Chad Miller `chad.miller@canonical.com` | https://launchpad.net/ubuntu/+source/chromium-browser | https://code.launchpad.net/ubuntu/+source/chromium-browser |
+| Debian | [see package page](http://packages.debian.org/sid/chromium) | in standard repo | [debian patch tracker](http://patch-tracker.debian.org/package/chromium-browser/) |
+| openSUSE | Raymond Wooninck `tittiatcoke@gmail.com` | http://software.opensuse.org/search?baseproject=ALL&p=1&q=chromium | ?? |
+| Arch | Evangelos Foutras `evangelos@foutrelis.com` | http://www.archlinux.org/packages/extra/x86_64/chromium/ | [link](http://projects.archlinux.org/svntogit/packages.git/tree/trunk?h=packages/chromium) |
+| Gentoo | [project page](http://www.gentoo.org/proj/en/desktop/chromium/index.xml) | Available in portage, [www-client/chromium](http://packages.gentoo.org/package/www-client/chromium) | http://sources.gentoo.org/viewcvs.py/gentoo-x86/www-client/chromium/files/ |
+| ALT Linux | Andrey Cherepanov (Андрей Черепанов) `cas@altlinux.org` | http://packages.altlinux.org/en/Sisyphus/srpms/chromium | http://git.altlinux.org/gears/c/chromium.git?a=tree |
+| Mageia | Dexter Morgan `dmorgan@mageia.org` | http://svnweb.mageia.org/packages/cauldron/chromium-browser-stable/current/SPECS/ | http://svnweb.mageia.org/packages/cauldron/chromium-browser-stable/current/SOURCES/ |
+| NixOS | aszlig `"^[0-9]+$"@regexmail.net` | http://hydra.nixos.org/search?query=pkgs.chromium | https://github.com/NixOS/nixpkgs/tree/master/pkgs/applications/networking/browsers/chromium |
+
+## Unofficial packages
+Packages in this section are not part of the distro's official repositories.
+
+| **Distro** | **Contact** | **URL for packages** | **URL for distro-specific patches** |
+|:-----------|:------------|:---------------------|:------------------------------------|
+| Fedora | Tom Callaway `tcallawa@redhat.com` | http://repos.fedorapeople.org/repos/spot/chromium/ | ?? |
+| Slackware | Eric Hameleers `alien@slackware.com` | http://www.slackware.com/~alien/slackbuilds/chromium/ | http://www.slackware.com/~alien/slackbuilds/chromium/ |
+
+## Other Unixes
+| **System** | **Contact** | **URL for packages** | **URL for patches** |
+|:-----------|:------------|:---------------------|:--------------------|
+| FreeBSD | http://lists.freebsd.org/mailman/listinfo/freebsd-chromium | http://wiki.freebsd.org/Chromium | http://trillian.chruetertee.ch/chromium |
+| OpenBSD | Robert Nagy `robert@openbsd.org` | http://openports.se/www/chromium | http://www.openbsd.org/cgi-bin/cvsweb/ports/www/chromium/patches/ |
+
+
+## Updating the list
+
+Are you packaging Chromium for a Linux distro? Is the information above out of date? Please contact `thestig@chromium.org` with updates.
+
+Before emailing, please note:
+ * This is not a support email address
+ * If you ask about a Linux distro that is not listed above, the answer will be "I don't know"
+ * Linux distros supported by Google Chrome are listed here: https://support.google.com/chrome/answer/95411 \ No newline at end of file
diff --git a/docs/linux_crash_dumping.md b/docs/linux_crash_dumping.md
new file mode 100644
index 0000000..1639cf2
--- /dev/null
+++ b/docs/linux_crash_dumping.md
@@ -0,0 +1,66 @@
+Official builds of Chrome support crash dumping and reporting using the Google crash servers. This is a guide to how this works.
+
+## Breakpad
+
+Breakpad is an open source library which we use for crash reporting across all three platforms (Linux, Mac and Windows). For Linux, a substantial amount of work was required to support cross-process dumping. At the time of writing this code is currently forked from the upstream breakpad repo. While this situation remains, the forked code lives in <tt>breakpad/linux</tt>. The upstream repo is mirrored in <tt>breakpad/src</tt>.
+
+The code currently supports i386 only. Getting x86-64 to work should only be a minor amount of work.
+
+### Minidumps
+
+Breakpad deals in a file format called 'minidumps'. This is a Microsoft format and thus is defined by in-memory structures which are dumped, raw, to disk. The main header file for this file format is <tt>breakpad/src/google_breakpad/common/minidump_format.h</tt>.
+
+At the top level, the minidump file format is a list of key-value pairs. Many of the keys are defined by the minidump format and contain cross-platform representations of stacks, threads etc. For Linux we also define a number of custom keys containing <tt>/proc/cpuinfo</tt>, <tt>lsb-release</tt> etc. These are defined in <tt>breakpad/linux/minidump_format_linux.h</tt>.
+
+### Catching exceptions
+
+Exceptional conditions (such as invalid memory references, floating point exceptions, etc) are signaled by synchronous signals to the thread which caused them. Synchronous signals are always run on the thread which triggered them as opposed to asynchronous signals which can be handled by any thread in a thread-group which hasn't masked that signal.
+
+All the signals that we wish to catch are synchronous except SIGABRT, and we can always arrange to send SIGABRT to a specific thread. Thus, we find the crashing thread by looking at the current thread in the signal handler.
+
+The signal handlers run on a pre-allocated stack in case the crash was triggered by a stack overflow.
+
+Once we have started handling the signal, we have to assume that the address space is compromised. In order not to fall prey to this and crash (again) in the crash handler, we observe some rules:
+ 1. We don't enter the dynamic linker. This, observably, can trigger crashes in the crash handler. Unfortunately, entering the dynamic linker is very easy and can be triggered by calling a function from a shared library whose resolution hasn't been cached yet. Since we can't know which functions have been cached we avoid calling any of these functions, with one exception: <tt>memcpy</tt>. Since the compiler can emit calls to <tt>memcpy</tt> we can't really avoid it.
+ 1. We don't allocate memory via malloc as the heap may be corrupt. Instead we use a custom allocator (in <tt>breakpad/linux/memory.h</tt>) which gets clean pages directly from the kernel.
+
+In order to avoid calling into libc we have a couple of header files which wrap the system calls (<tt>linux_syscall_support.h</tt>) and reimplement a tiny subset of libc (<tt>linux_libc_support.h</tt>).
+
+### Self dumping
+
+The simple case occurs when the browser process crashes. Here we catch the signal and <tt>clone</tt> a new process to perform the dumping. We have to use a new process because a process cannot ptrace itself.
+
+The dumping process then ptrace attaches to all the threads in the crashed process and writes out a minidump to <tt>/tmp</tt>. This is generic breakpad code.
+
+Then we reach the Chrome specific parts in <tt>chrome/app/breakpad_linux.cc</tt>. Here we construct another temporary file and write a MIME wrapping of the crash dump ready for uploading. We then fork off <tt>wget</tt> to upload the file. Based on Debian popcorn, <tt>wget</tt> is very commonly installed (much more so than <tt>libcurl</tt>) and <tt>wget</tt> handles the HTTPS gubbins for us.
+
+### Renderer dumping
+
+In the case of a crash in the renderer, we don't want the renderer handling the crash dumping itself. In the future we will sandbox the renderer and allowing it the authority to crash dump itself is too much.
+
+Thus, we split the crash dumping in two parts: the gathering of information which is done in process and the external dumping which is done out of process. In the case above, the latter half was done in a <tt>clone</tt>d child. In this case, the browser process handles it.
+
+When renderers are forked off, they have a UNIX DGRAM socket in file descriptor 4. The signal handler then calls into Chrome specific code (<tt>chrome/renderer/render_crash_handler_linux.cc</tt>) when it would otherwise <tt>clone</tt>. The Chrome specific code sends a datagram to the socket which contains:
+ * Information which is only available to the signal handler (such as the <tt>ucontext</tt> structure).
+ * A file descriptor to a pipe which it then blocks on reading from.
+ * A <tt>CREDENTIALS</tt> structure giving its PID.
+
+The kernel enforces that the renderer isn't lying in the <tt>CREDENTIALS</tt> structure so it can't ask the browser to crash dump another process.
+
+The browser then performs the ptrace and minidump writing which would otherwise be performed in the <tt>clone</tt>d process, and does the MIME wrapping and uploading as normal.
+
+Once the browser has finished getting information from the crashed renderer via ptrace, it writes a byte to the file descriptor which was passed from the renderer. The renderer then wakes up (because it was blocking on reading from the other end) and rethrows the signal to itself. It then appears to crash 'normally' and other parts of the browser notice the abnormal termination and display the sad tab.
+
+## How to test Breakpad support in Chromium
+
+ * Build Chromium with the gyp option `-Dlinux_breakpad=1`.
+```
+./build/gyp_chromium -Dlinux_breakpad=1
+ninja -C out/Debug chrome
+```
+ * Run the browser with the environment variable [CHROME\_HEADLESS=1](http://code.google.com/p/chromium/issues/detail?id=19663). This enables crash dumping but prevents crash dumps from being uploaded and deleted.
+```
+env CHROME_HEADLESS=1 ./out/Debug/chrome-wrapper
+```
+ * Visit the special URL `about:crash` to trigger a crash in the renderer process.
+ * A crash dump file should appear in the directory `~/.config/chromium/Crash Reports`. \ No newline at end of file
diff --git a/docs/linux_debugging.md b/docs/linux_debugging.md
new file mode 100644
index 0000000..73a9b5a
--- /dev/null
+++ b/docs/linux_debugging.md
@@ -0,0 +1,385 @@
+# Tips for debugging on Linux
+
+This page is for Chromium-specific debugging tips; learning how to run gdb is out of scope.
+
+
+
+## Symbolized stack trace
+
+The sandbox can interfere with the internal symbolizer. Use `--no-sandbox` (but only temporarily) or an external symbolizer (see `tools/valgrind/asan/asan_symbolize.py`).
+
+Generally, do not use `--no-sandbox` on waterfall bots; sandbox testing is needed. Talk to security@chromium.org.
+
+## GDB
+**GDB-7.7 is required in order to debug Chrome on Linux.**
+
+Any prior version will fail to resolve symbols or segfault.
+
+### Basic browser process debugging
+
+```
+gdb -tui -ex=r --args out/Debug/chrome --disable-seccomp-sandbox http://google.com
+```
+
+### Allowing attaching to foreign processes
+On distributions that use the [Yama LSM](https://www.kernel.org/doc/Documentation/security/Yama.txt) (that includes Ubuntu and Chrome OS), process A can attach to process B only if A is an ancestor of B.
+
+You will probably want to disable this feature by using
+```
+echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
+```
+
+If you don't you'll get an error message such as "Could not attach to process".
+
+Note that you'll also probably want to use --no-sandbox, as explained below.
+
+### Multiprocess Tricks
+#### Getting renderer subprocesses into gdb
+Since Chromium itself spawns the renderers, it can be tricky to grab a particular renderer with gdb. This command does the trick:
+```
+chrome --no-sandbox --renderer-cmd-prefix='xterm -title renderer -e gdb --args'
+```
+The "--no-sandbox" flag is needed because otherwise the seccomp sandbox will kill the renderer process on startup, or the setuid sandbox will prevent xterm's execution. The "xterm" is necessary or gdb will run in the current terminal, which can get particularly confusing since it's running in the background, and if you're also running the main process in gdb, won't work at all (the two instances will fight over the terminal). To auto-start the renderers in the debugger, send the "run" command to the debugger:
+```
+chrome --no-sandbox --renderer-cmd-prefix='xterm -title renderer -e gdb -ex run --args'
+```
+If you're using Emacs and `M-x gdb`, you can do
+```
+chrome "--renderer-cmd-prefix=gdb --args"
+```
+
+Note: using the `--renderer-cmd-prefix` option bypasses the zygote launcher, so the renderers won't be sandboxed. It is generally not an issue, except when you are trying to debug interactions with the sandbox. If that's what you are doing, you will need to attach your debugger to a running renderer process (see below).
+
+You may also want to pass `--disable-hang-monitor` to suppress the hang monitor, which is rather annoying.
+
+You can also use "--renderer-startup-dialog" and attach to the process in order to debug the renderer code. Go to http://www.chromium.org/blink/getting-started-with-blink-debugging for more information on how this can be done.
+
+#### Choosing which renderers to debug
+If you are starting multiple renderers then the above means that multiple gdb's start and fight over the console. Instead, you can set the prefix to point to this shell script:
+
+```
+#!/bin/sh
+
+echo "**** Child $$ starting: y to debug"
+read input
+if [ "$input" = "y" ] ; then
+ gdb --args $*
+else
+ $*
+fi
+```
+
+#### Selective breakpoints
+When debugging both the browser and renderer process, you might want to have separate set of breakpoints to hit. You can use gdb's command files to accomplish this by putting breakpoints in separate files and instructing gdb to load them.
+
+```
+gdb -x ~/debug/browser --args chrome --no-sandbox --disable-hang-monitor --renderer-cmd-prefix='xterm -title renderer -e gdb -x ~/debug/renderer --args '
+```
+
+Also, instead of running gdb, you can use the script above, which lets you select which renderer process to debug. Note: you might need to use the full path to the script and avoid $HOME or ~/.
+
+#### Connecting to a running renderer
+
+Usually `ps aux | grep chrome` will not give very helpful output. Try `pstree -p | grep chrome` to get something like
+
+```
+ | |-bash(21969)---chrome(672)-+-chrome(694)
+ | | |-chrome(695)---chrome(696)-+-{chrome}(697)
+ | | | \-{chrome}(709)
+ | | |-{chrome}(675)
+ | | |-{chrome}(678)
+ | | |-{chrome}(679)
+ | | |-{chrome}(680)
+ | | |-{chrome}(681)
+ | | |-{chrome}(682)
+ | | |-{chrome}(684)
+ | | |-{chrome}(685)
+ | | |-{chrome}(705)
+ | | \-{chrome}(717)
+```
+
+Most of those are threads. In this case the browser process would be 672 and the (sole) renderer process is 696. You can use `gdb -p 696` to attach. Alternatively, you might find out the process ID from Chrome's built-in Task Manager (under the Tools menu). Right-click on the Task Manager, and enable "Process ID" in the list of columns.
+
+Note: by default, sandboxed processes can't be attached by a debugger. To be able to do so, you will need to pass the `--allow-sandbox-debugging` option.
+
+If the problem only occurs with the seccomp sandbox enabled (and the previous tricks don't help), you could try enabling core-dumps (see the **Core files** section). That would allow you to get a backtrace and see some local variables, though you won't be able to step through the running program.
+
+Note: If you're interested in debugging LinuxSandboxIPC process, you can attach to 694 in the above diagram. The LinuxSandboxIPC process has the same command line flag as the browser process so that it's easy to identify it if you run `pstree -pa`.
+
+#### Getting GPU subprocesses into gdb
+Use `--gpu-launcher` flag instead of `--renderer-cmd-prefix` in the instructions for renderer above.
+
+#### Getting browser_tests-launched browsers into gdb
+Use the environment variable `BROWSER_WRAPPER` instead of the `--renderer-cmd-prefix` switch in the instructions above.
+
+Example:
+```
+BROWSER_WRAPPER='xterm -title renderer -e gdb --eval-command=run --eval-command=quit --args' out/Debug/browser_tests --gtest_filter=Print
+```
+
+#### Plugin Processes
+Same strategies as renderers above, but the flag is called `--plugin-launcher`:
+```
+chrome --plugin-launcher='xterm -e gdb --args'
+```
+
+_Note: For now, this does not currently apply to PPAPI plugins because they currently run in the renderer process._
+
+#### Single-Process mode
+Depending on whether it's relevant to the problem, it's often easier to just run in "single process" mode where the renderer threads are in-process. Then you can just run gdb on the main process.
+```
+gdb --args chrome --single-process
+```
+
+Currently, the `--disable-gpu` flag is also required, as there are known crashes that occur under TextureImageTransportSurface without it. The crash described in http://crbug.com/361689 can also sometimes occur, but you can continue past it without harm.
+
+Note that for technical reasons plugins cannot be in-process, so `--single-process` only puts the renderers in the browser process. The flag is still useful for debugging plugins (since it's only two processes instead of three) but you'll still need to use `--plugin-launcher` or another approach.
+
+### Printing Chromium types
+gdb 7 lets us use Python to write pretty-printers for Chromium types. The directory `tools/gdb/` contains Python gdb scripts useful for Chromium code. There are similar scripts [in WebKit](http://trac.webkit.org/wiki/GDB) (in fact, the Chromium script relies on using it with the WebKit one).
+
+To include these pretty-printers with your gdb, put the following into `~/.gdbinit`:
+```
+python
+import sys
+sys.path.insert(0, "<path/to/chromium/src>/third_party/WebKit/Tools/gdb/")
+import webkit
+sys.path.insert(0, "<path/to/chromium/src>/tools/gdb/")
+import gdb_chrome
+```
+
+Pretty printers for std types shouldn't be necessary in gdb 7, but they're provided here in case you're using an older gdb. Put the following into `~/.gdbinit`:
+```
+# Print a C++ string.
+define ps
+ print $arg0.c_str()
+end
+
+# Print a C++ wstring or wchar_t*.
+define pws
+ printf "\""
+ set $c = (wchar_t*)$arg0
+ while ( *$c )
+ if ( *$c > 0x7f )
+ printf "[%x]", *$c
+ else
+ printf "%c", *$c
+ end
+ set $c++
+ end
+ printf "\"\n"
+end
+```
+
+[More STL GDB macros](http://www.yolinux.com/TUTORIALS/src/dbinit_stl_views-1.01.txt)
+
+### Graphical Debugging Aid for Chromium Views
+
+The following link describes a tool that can be used on Linux, Windows and Mac under GDB.
+
+http://code.google.com/p/chromium/wiki/GraphicalDebuggingAidChromiumViews
+
+### Faster startup
+
+Use the gdb-add-index script (e.g. `build/gdb-add-index out/Debug/browser_tests`).
+
+This only makes sense if you run the binary multiple times, or perhaps if you use the component build, since most .so files won't require reindexing on a rebuild.
+
+See https://groups.google.com/a/chromium.org/forum/#!searchin/chromium-dev/gdb-add-index/chromium-dev/ELRuj1BDCL4/5Ki4LGx41CcJ for more info.
+
+Alternatively, specify:
+```
+linux_use_debug_fission=0
+```
+
+in GYP\_DEFINES. This improves load time of gdb significantly at the cost of link time.
+
+## Core files
+`ulimit -c unlimited` should cause all Chrome processes (run from that shell) to dump cores, with the possible exception of some sandboxed processes.
+
+Some sandboxed subprocesses might not dump cores unless you pass the `--allow-sandbox-debugging` flag.
+
+If the problem is a freeze rather than a crash, you may be able to trigger a core-dump by sending SIGABRT to the relevant process:
+```
+kill -6 [process id]
+```
+
+## Breakpad minidump files
+
+See LinuxMinidumpToCore
+
+## Running Tests
+Many of our tests bring up windows on screen. This can be annoying (they steal your focus) and hard to debug (they receive extra events as you mouse over them). Instead, use `Xvfb` or `Xephyr` to run a nested X session to debug them, as outlined on LayoutTestsLinux.
+
+### Browser tests
+By default, `browser_tests` forks a new browser for each test. To debug the browser side of a single test, use a command like
+```
+gdb --args out/Debug/browser_tests --single_process --gtest_filter=MyTestName
+```
+**note the underscore in single\_process** -- this makes the test harness and browser process share the outermost process.
+
+
+To debug a renderer process in this case, use the tips above about renderers.
+
+### Layout tests
+See LayoutTestsLinux for some tips. In particular, note that it's possible to debug a layout test via `ssh`ing to a Linux box; you don't need anything on screen if you use `Xvfb`.
+
+### UI tests
+UI tests are run in forked browsers. Unlike browser tests, you cannot do any single process tricks here to debug the browser. See below about `BROWSER_WRAPPER`.
+
+To pass flags to the browser, use a command line like `--extra-chrome-flags="--foo --bar"`.
+
+### Timeouts
+UI tests have a confusing array of timeouts in place. (Pawel is working on reducing the number of timeouts.) To disable them while you debug, set the timeout flags to a large value (an example invocation follows this list):
+ * `--test-timeout=100000000`
+ * `--ui-test-action-timeout=100000000`
+ * `--ui-test-terminate-timeout=100000000`
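+
+For example (`interactive_ui_tests` is just a placeholder for whichever UI test binary you are running):
+```
+out/Debug/interactive_ui_tests --gtest_filter=MyTest.* \
+    --test-timeout=100000000 \
+    --ui-test-action-timeout=100000000 \
+    --ui-test-terminate-timeout=100000000
+```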
+
+### To replicate Window Manager setup on the bots
+Chromium try bots and main waterfall bots run tests under an Xvfb and openbox combination. Xvfb is an X11 server that redirects the graphical output to memory, and openbox is a simple window manager that runs on top of Xvfb. The behavior of openbox is markedly different when it comes to focus management and other window tasks, so a test that runs fine locally may fail or be flaky on the try bots. To run the tests on a local machine as on a bot, follow these steps:
+
+Make sure you have openbox:
+```
+apt-get install openbox
+```
+Start Xvfb and openbox on a particular display:
+```
+Xvfb :6.0 -screen 0 1280x1024x24 & DISPLAY=:6.0 openbox &
+```
+Run your tests with graphics output redirected to that display:
+```
+DISPLAY=:6.0 out/Debug/browser_tests --gtest_filter="MyBrowserTest.MyActivateWindowTest"
+```
+You can look at a snapshot of the output by:
+```
+xwd -display :6.0 -root | xwud
+```
+
+Alternatively, you can use testing/xvfb.py to set up your environment for you:
+```
+testing/xvfb.py out/Debug out/Debug/browser_tests --gtest_filter="MyBrowserTest.MyActivateWindowTest"
+```
+
+### BROWSER\_WRAPPER
+You can also get the browser under a debugger by setting the `BROWSER_WRAPPER` environment variable. (You can use this for `browser_tests` too, but see above for discussion of a simpler way.)
+
+```
+BROWSER_WRAPPER='xterm -e gdb --args' out/Debug/browser_tests
+```
+
+### Replicating Trybot Slowness
+
+Trybots are pretty stressed, and can sometimes expose timing issues you can't normally reproduce locally.
+
+You can simulate this by shutting down all but one of the CPUs (http://www.cyberciti.biz/faq/debian-rhel-centos-redhat-suse-hotplug-cpu/) and running a CPU loading tool (e.g., http://www.devin.com/lookbusy/). Now run your test. It will run slowly, but any flakiness found by the trybot should replicate locally now - and often nearly 100% of the time.
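+
+A rough sketch of that setup on a four-core machine (CPU numbers and the load level are illustrative):
+```
+# Take all but CPU 0 offline.
+for cpu in 1 2 3; do echo 0 | sudo tee /sys/devices/system/cpu/cpu$cpu/online; done
+# Keep the remaining CPU busy.
+lookbusy --cpu-util 90 &
+# Run the flaky test.
+out/Debug/browser_tests --gtest_filter=MyFlakyTest.*
+```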
+
+## Logging
+### Seeing all LOG(foo) messages
+Default log level hides `LOG(INFO)`. Run with `--log-level=0` and `--enable-logging=stderr` flags.
+
+Newer versions of chromium with VLOG may need --v=1 too. For more VLOG tips, see the chromium-dev thread: http://groups.google.com/a/chromium.org/group/chromium-dev/browse_thread/thread/dcd0cd7752b35de6?pli=1
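+
+Putting those flags together:
+```
+out/Debug/chrome --enable-logging=stderr --log-level=0 --v=1
+```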
+
+### Seeing IPC debug messages
+Run with `CHROME_IPC_LOGGING=1`, e.g.
+```
+CHROME_IPC_LOGGING=1 out/Debug/chrome
+```
+or within gdb:
+```
+set environment CHROME_IPC_LOGGING 1
+```
+
+If some messages show as unknown, check if the list of IPC message headers in `chrome/common/logging_chrome.cc` is up-to-date. In case this file reference goes out of date, try looking for usage of macros like `IPC_MESSAGE_LOG_ENABLED` or `IPC_MESSAGE_MACROS_LOG_ENABLED`.
+
+## Using valgrind
+
+To run valgrind on the browser and renderer processes, with our suppression file and flags:
+```
+$ cd $CHROMIUM_ROOT/src
+$ tools/valgrind/valgrind.sh out/Debug/chrome
+```
+
+You can use valgrind on chrome and/or on the renderers, e.g.
+`valgrind --smc-check=all out/Debug/chrome`
+or by passing valgrind as the argument to `--renderer-cmd-prefix`.
+
+Beware that there are several valgrind "false positives" e.g. pickle, sqlite and some instances in webkit that are ignorable. On systems with prelink and address space randomization (e.g. Fedora), you may also see valgrind errors in libstdc++ on startup and in gnome-breakpad.
+
+Valgrind doesn't seem to play nicely with tcmalloc. To disable tcmalloc, rerun gyp with `-Duse_allocator=none`:
+```
+$ cd $CHROMIUM_ROOT/src
+$ build/gyp_chromium -Duse_allocator=none
+```
+and rebuild.
+
+## Profiling
+See https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit and http://code.google.com/p/chromium/wiki/LinuxProfiling
+
+## i18n
+We obey your system locale. Try something like:
+```
+LANG=ja_JP.UTF-8 out/Debug/chrome
+```
+If this doesn't work, make sure that the `LANGUAGE`, `LC_ALL` and `LC_MESSAGES` environment variables aren't set -- they have higher priority than LANG in the order listed. Alternatively, just do this:
+
+```
+LANGUAGE=fr out/Debug/chrome
+```
+
+Note that because we use GTK, some locale data comes from the system -- for example, file save boxes and whether the current language is considered RTL. Without all the language data available, Chrome will use a mixture of your system language and the language you run Chrome in.
+
+Here's how to install the Arabic (ar) and Hebrew (he) language packs:
+```
+sudo apt-get install language-pack-ar language-pack-he language-pack-gnome-ar language-pack-gnome-he
+```
+Note that the `--lang` flag does **not** work properly for this.
+
+On non-Debian systems, you need the `gtk20.mo` files. (Please update these docs with the appropriate instructions if you know what they are.)
+
+## Breakpad
+See the last section of LinuxCrashDumping; you need to set a gyp variable and an environment variable for the crash dump tests to work.
+
+## Drag and Drop
+If you break in a debugger during a drag, Chrome will have grabbed your mouse and keyboard so you won't be able to interact with the debugger! To work around this, run via `Xephyr`. Instructions for how to use `Xephyr` are on the LayoutTestsLinux page.
+
+## Tracking Down Bugs
+
+### Isolating Regressions
+Old builds are archived here: http://build.chromium.org/buildbot/snapshots/chromium-rel-linux/
+
+`tools/bisect-builds.py` in the tree automates bisecting through the archived builds. Despite a computer science education, I am still amazed how quickly binary search will find its target.
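+
+A typical invocation looks something like this (the archive name and revision numbers are placeholders; check `--help` for the current flags):
+```
+python tools/bisect-builds.py -a linux64 -g GOOD_REVISION -b BAD_REVISION
+```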
+
+### Screen recording for bug reports
+`sudo apt-get install gtk-recordmydesktop`
+
+## Version-specific issues
+
+### Google Chrome
+Google Chrome binaries don't include symbols. Googlers can read where to get symbols from [the Google-internal wiki](http://wiki/Main/ChromeOfficialBuildLinux#The_Build_Archive).
+
+### Ubuntu Chromium
+Since we don't build the Ubuntu packages (Ubuntu does) we can't get useful backtraces from them. Direct users to https://wiki.ubuntu.com/Chromium/Debugging .
+
+### Fedora's Chromium
+Like Ubuntu, but direct users to https://fedoraproject.org/wiki/TomCallaway/Chromium_Debug .
+
+### Xlib
+If you're trying to track down X errors like:
+```
+The program 'chrome' received an X Window System error.
+This probably reflects a bug in the program.
+The error was 'BadDrawable (invalid Pixmap or Window parameter)'.
+```
+Some strategies are:
+ * pass `--sync` on the command line to make all X calls synchronous
+ * run chrome via [xtrace](http://xtrace.alioth.debian.org/)
+ * turn on IPC debugging (see above section)
+
+### Window Managers
+To test on various window managers, you can use a nested X server like `Xephyr`. Instructions for how to use `Xephyr` are on the LayoutTestsLinux page.
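+For example, a minimal nested session might look like this (a sketch; metacity is just an example window manager):
+```
+Xephyr :1 -screen 1024x768 &
+DISPLAY=:1 metacity &
+DISPLAY=:1 out/Debug/chrome
+```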
+
+If you need to test something with hardware accelerated compositing (e.g., compiz), you can use `Xgl` (`sudo apt-get install xserver-xgl`). E.g.:
+```
+Xgl :1 -ac -accel glx:pbuffer -accel xv:pbuffer -screen 1024x768
+```
+## Mozilla Tips
+https://developer.mozilla.org/en/Debugging_Mozilla_on_Linux_FAQ \ No newline at end of file
diff --git a/docs/linux_debugging_gtk.md b/docs/linux_debugging_gtk.md
new file mode 100644
index 0000000..93a1afb
--- /dev/null
+++ b/docs/linux_debugging_gtk.md
@@ -0,0 +1,51 @@
+## Making warnings fatal
+
+See [Running GLib Applications](http://developer.gnome.org/glib/stable/glib-running.html) for notes on how to make GTK warnings fatal.
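+In practice this usually boils down to setting `G_DEBUG` before launching, e.g. (a sketch; see the GLib page above for the full set of flags):
+```
+G_DEBUG=fatal-warnings gdb out/Debug/chrome
+```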
+
+## Using GTK Debug packages
+
+```
+sudo apt-get install libgtk2.0-0-dbg
+```
+Make sure that you're building a binary that matches your architecture (e.g. 64-bit on a 64-bit machine), and there you go.
+
+### Source
+You'll likely want to get the source for gtk too so that you can step through it. You can tell gdb that you've downloaded the source to your system's GTK by doing:
+
+```
+$ cd /my/dir
+$ apt-get source libgtk2.0-0
+$ gdb ...
+(gdb) set substitute-path /build/buildd /my/dir
+```
+
+NOTE: I tried debugging pango in a similar manner, but for some reason gdb didn't pick up the symbols from the -dbg package. I ended up building from source and setting my LD\_LIBRARY\_PATH.
+
+See LinuxBuildingDebugGtk for more on how to build your own debug version of GTK.
+
+## Parasite
+http://chipx86.github.com/gtkparasite/ is great. Go check out the site for more about it.
+
+Install it with
+```
+sudo apt-get install gtkparasite
+```
+
+And then run Chrome with
+```
+GTK_MODULES=gtkparasite ./out/Debug/chrome
+```
+
+### ghardy
+If you're within the Google network on ghardy, which is too old to include gtkparasite, you can do:
+```
+scp bunny.sfo:/usr/lib/gtk-2.0/modules/libgtkparasite.so /tmp
+sudo cp /tmp/libgtkparasite.so /usr/lib/gtk-2.0/modules/libgtkparasite.so
+```
+
+## GDK\_DEBUG
+
+```
+14:43 < xan> mrobinson: there's a way to run GTK+ without grabs fwiw, useful for gdb sessions
+14:44 < xan> GDK_DEBUG=nograbs
+``` \ No newline at end of file
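+
+In other words, something like this (a sketch) runs Chrome without GDK grabs so a gdb session stays usable:
+```
+GDK_DEBUG=nograbs out/Debug/chrome
+```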
diff --git a/docs/linux_debugging_ssl.md b/docs/linux_debugging_ssl.md
new file mode 100644
index 0000000..1f8f656
--- /dev/null
+++ b/docs/linux_debugging_ssl.md
@@ -0,0 +1,124 @@
+# Introduction
+
+To help anyone looking at the SSL code, here are a few tips I've found handy.
+
+# Building your own NSS
+
+In order to use a debugger with the NSS library, it helps to build NSS yourself. Here's how I did it:
+
+First, read
+http://www.mozilla.org/projects/security/pki/nss/nss-3.11.4/nss-3.11.4-build.html
+and/or
+https://developer.mozilla.org/En/NSS_reference/Building_and_installing_NSS/Build_instructions
+
+Then, to build the most recent source tarball:
+```
+ cd $HOME
+ wget ftp://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_12_RTM/src/nss-3.12-with-nspr-4.7.tar.gz
+ tar -xzvf nss-3.12-with-nspr-4.7.tar.gz
+ cd nss-3.12/
+ cd mozilla/security/nss/
+ make nss_build_all
+```
+
+Sadly, the latest release, 3.12.2, isn't available as a tarball, so you have to build it from cvs:
+```
+ cd $HOME
+ mkdir nss-3.12.2
+ cd nss-3.12.2
+ export CVSROOT=:pserver:anonymous@cvs-mirror.mozilla.org:/cvsroot
+ cvs login
+ cvs co -r NSPR_4_7_RTM NSPR
+ cvs co -r NSS_3_12_2_RTM NSS
+ cd mozilla/security/nss/
+ make nss_build_all
+```
+
+# Linking against your own NSS
+
+Sadly, I don't know of a nice way to do this; I always do
+```
+hammer --verbose net > log 2>&1
+```
+then grab the line that links my app and put it into a shell script link.sh,
+and edit it to include the line
+```
+DIR=$HOME/nss-3.12.2/mozilla/dist/Linux2.6_x86_glibc_PTH_DBG.OBJ/lib
+```
+and insert a -L$DIR right before the -lnss3.
+
+Note that hammer often builds the app in one, deeply buried, place, then copies it into Hammer
+for ease of use. You'll probably want to make your link.sh do the same thing.
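+For illustration, the edited link.sh ends up looking roughly like this (a sketch; the elided parts are whatever the verbose hammer log printed for your app):
+```
+#!/bin/sh
+# Sketch only: replace the g++ line below with the real link line copied from
+# the hammer log, with -L$DIR inserted just before -lnss3.
+DIR=$HOME/nss-3.12.2/mozilla/dist/Linux2.6_x86_glibc_PTH_DBG.OBJ/lib
+g++ -o Hammer/foo ... -L$DIR -lnss3 ...
+```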
+
+Then, after a source code change, do the usual "hammer net" followed by "sh link.sh".
+
+Then, to run the resulting app, use a script like the one in the next section.
+
+# Running against your own NSS
+Create a script named 'run.sh' like this:
+```
+#!/bin/sh
+set -x
+DIR=$HOME/nss-3.12.2/mozilla/dist/Linux2.6_x86_glibc_PTH_DBG.OBJ/lib
+export LD_LIBRARY_PATH=$DIR
+"$@"
+```
+
+Then run your app with
+```
+sh run.sh Hammer/foo
+```
+
+Or, to debug it, do
+```
+sh run.sh gdb Hammer/foo
+```
+
+# Logging
+
+There are several flavors of logging you can turn on.
+
+ * SSLClientSocketNSS can log its state transitions and function calls using base/logging.cc. To enable this, edit net/base/ssl\_client\_socket\_nss.cc and change #if 1 to #if 0. See base/logging.cc for where the output goes (on Linux, it's usually stderr).
+
+ * HttpNetworkTransaction and friends can log their state transitions using base/trace\_event.cc. To enable this, arrange for your app to call base::TraceLog::StartTracing(). The output goes to a file named trace...pid.log in the same directory as the executable (e.g. Hammer/trace\_15323.log).
+
+ * NSS itself can log some events. To enable this, set the environment variables SSLDEBUGFILE=foo.log SSLTRACE=99 SSLDEBUG=99 before running your app (see the example below).
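+
+For example, a sketch of turning on the NSS logging for a test binary run as described above:
+```
+SSLDEBUGFILE=foo.log SSLTRACE=99 SSLDEBUG=99 sh run.sh Hammer/foo
+```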
+
+# Network Traces
+
+http://wiki.wireshark.org/SSL describes how to decode SSL traffic.
+Chromium SSL unit tests that use src/net/base/ssl\_test\_util.cc to
+set up their servers always use port 9443 with src/net/data/ssl/certificates/ok\_cert.pem,
+and port 9666 with src/net/data/ssl/certificates/expired\_cert.pem.
+This makes it easy to configure Wireshark to decode the traffic: do
+Edit / Preferences / Protocols / SSL, and in the "RSA Keys List" box, enter
+```
+127.0.0.1,9443,http,<path to ok_cert.pem>;127.0.0.1,9666,http,<path to expired_cert.pem>
+```
+e.g.
+```
+127.0.0.1,9443,http,/home/dank/chromium/src/net/data/ssl/certificates/ok_cert.pem;127.0.0.1,9666,http,/home/dank/chromium/src/net/data/ssl/certificates/expired_cert.pem
+```
+Then capture all tcp traffic on interface lo, and run your test.
+
+# Valgrinding NSS
+
+Read https://developer.mozilla.org/en/NSS_Memory_allocation and do
+```
+export NSS_DISABLE_ARENA_FREE_LIST=1
+```
+before valgrinding if you want to find where a block was originally
+allocated.
+
+If you get unsymbolized entries in NSS backtraces, try setting:
+```
+export NSS_DISABLE_UNLOAD=1
+```
+
+(Note that if you use the Chromium valgrind scripts like tools/valgrind/chrome\_tests.sh or tools/valgrind/valgrind.sh these will both be set automatically.)
+
+# Support forums
+
+If you have nonconfidential questions about NSS, check the newsgroup
+> http://groups.google.com/group/mozilla.dev.tech.crypto
+The NSS maintainer monitors that group and gives good answers. \ No newline at end of file
diff --git a/docs/linux_dev_build_as_default_browser.md b/docs/linux_dev_build_as_default_browser.md
new file mode 100644
index 0000000..41226d9
--- /dev/null
+++ b/docs/linux_dev_build_as_default_browser.md
@@ -0,0 +1,20 @@
+Copy a stable version's .desktop file and modify it to point to your dev build:
+ * `cp /usr/share/applications/google-chrome.desktop ~/.local/share/applications/chromium-mybuild-c-release.desktop`
+ * `vim ~/.local/share/applications/chromium-mybuild-c-release.desktop`
+ * Change first Exec line in desktop entry: (change path to your dev setup)
+ * `Exec=/usr/local/google/home/scheib/c/src/out/Release/chrome %U`
+
+Set the default:
+ * `xdg-settings set default-web-browser chromium-mybuild-c-release.desktop`
+
+Launch, telling Chrome which config you're using:
+ * `CHROME_DESKTOP=chromium-mybuild-c-release.desktop out/Release/chrome`
+ * Verify Chrome thinks it is default in `about:settings` page.
+ * Press the button to make default if not.
+
+Restore the normal default:
+ * `xdg-settings set default-web-browser google-chrome.desktop`
+
+
+A single line to change the default, run, and restore:
+ * `xdg-settings set default-web-browser chromium-mybuild-c-release.desktop && CHROME_DESKTOP=chromium-mybuild-c-release.desktop out/Release/chrome; xdg-settings set default-web-browser google-chrome.desktop && echo Restored default browser.` \ No newline at end of file
diff --git a/docs/linux_development.md b/docs/linux_development.md
new file mode 100644
index 0000000..8697999
--- /dev/null
+++ b/docs/linux_development.md
@@ -0,0 +1,33 @@
+# Linux Development
+
+**Please join us on IRC for the most up-to-date development discussion: `irc.freenode.net`, `#chromium`**
+
+## Checkout and Build
+See the LinuxBuildInstructions.
+
+## What Needs Work
+
+Look at the Chromium bug tracker for open Linux issues:
+http://code.google.com/p/chromium/issues/list?can=2&q=os%3Alinux
+
+Issues marked "Available" are ready for someone to claim. To claim an issue, add a comment and then a project member will mark it "Assigned". If none of the "Available" issues seem appropriate, you may be able to help an already claimed ("Assigned" or "Started") issue, but you'll probably want to coordinate with the claimants, to avoid unnecessary duplication of effort.
+
+Issues marked with HelpWanted are a good place to start.
+
+### Random TODOs
+
+We've also marked bits that remain to be done for porting with `TODO(port)` in the code. If you grep for that you'll likely find plenty of small tasks to get started on.
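+For example, from the src directory (plain grep works too):
+```
+git grep -n 'TODO(port)'
+```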
+
+### New Bugs
+
+If you think you have discovered a new Linux bug, start by [searching for similar issues](http://code.google.com/p/chromium/issues/list?can=1&q=Linux). When you search, make sure you choose the "All Issues" option, since your bug might have already been fixed, but the default search only looks for open issues. If you can't find a related bug, please create a [New Issue](http://code.google.com/p/chromium/issues/entry). Use the linux defect template.
+
+## Contributing code
+See [ContributingCode](http://dev.chromium.org/developers/contributing-code).
+
+## Debugging
+See LinuxDebugging.
+
+## Documents
+
+LinuxGraphicsPipeline \ No newline at end of file
diff --git a/docs/linux_eclipse_dev.md b/docs/linux_eclipse_dev.md
new file mode 100644
index 0000000..9805c7b
--- /dev/null
+++ b/docs/linux_eclipse_dev.md
@@ -0,0 +1,252 @@
+# Introduction
+
+Eclipse can be used on Linux (and probably Windows and Mac) as an IDE for developing Chromium. It's unpolished, but here's what works:
+
+ * Editing code works well (especially if you're used to it or Visual Studio).
+ * Navigating around the code works well. There are multiple ways to do this (F3, control-click, outlines).
+ * Building works fairly well and it does a decent job of parsing errors so that you can click and jump to the problem spot.
+ * Debugging is hit & miss. You can set breakpoints and view variables. STL containers give it (and gdb) a bit of trouble. Also, the debugger can get into a bad state occasionally and eclipse will need to be restarted.
+ * Refactoring seems to work in some instances, but be afraid of refactors that touch a lot of files.
+
+# Setup
+
+## Get & Configure Eclipse
+
+Eclipse 4.3 (Kepler) is known to work with Chromium for Linux.
+ * Download the distribution appropriate for your OS. For example, for Linux 64-bit/Java 64-bit, use the Linux 64 bit package from http://www.eclipse.org/downloads/ (Eclipse Packages Tab -> Linux 64 bit (link in bottom right)).
+ * Tip: The packaged version of eclipse in distros may not work correctly with the latest CDT plugin (installed below). Best to get them all from the same source.
+ * Googlers: The version installed on Goobuntu works fine. The UI will be much more responsive if you do not install the google3 plug-ins. Just uncheck all the boxes at first launch.
+ * Unpack the distribution and edit the eclipse/eclipse.ini to increase the heap available to java. For instance:
+ * Change -Xms40m to -Xms1024m (minimum heap) and -Xmx256m to -Xmx3072m (maximum heap).
+ * Googlers: Edit ~/.eclipse/init.sh to add this:
+```
+export ECLIPSE_MEM_START="1024M"
+export ECLIPSE_MEM_MAX="3072M"
+```
+The large heap size prevents out of memory errors if you include many Chrome subprojects that Eclipse is maintaining code indices for.
+ * Turn off Hyperlink detection in the Eclipse preferences. (Window -> Preferences, search for "Hyperlinking", and uncheck "Enable on demand hyperlink style navigation").
+
+Pressing the control key (for keyboard shortcuts such as copy/paste) can trigger the hyperlink detector. This occurs on the UI thread and can result in the reading of jar files on the Eclipse classpath, which can tie up the editor due to the size of the classpath in Chromium.
+
+## A short word about paths
+
+Before you start setting up your work space - here are a few hints:
+ * Don't put your checkout on a remote file system (e.g. NFS filer). It's too slow both for building and for Eclipse.
+ * Make sure there is no file system link in your source path because Ninja will resolve it for a faster build and Eclipse / GDB will get confused. (Note: This means that the source will possibly not reside in your user directory since it would require a link from filer to your local repository.)
+ * You may want to start Eclipse from the source root. To do this you can add an icon to your task bar as a launcher. It should point to a shell script which will set the current path to your source base, and then start Eclipse. The result would probably look like this:
+```
+~/.bashrc
+cd /usr/local/google/chromium/src
+export PATH=/home/skuhne/depot_tools:/usr/local/google/goma/goma:/opt/eclipse:/usr/local/symlinks:/usr/local/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+/opt/eclipse/eclipse -vm /usr/bin/java
+```
+
+(Note: Things work fine for me without launching Eclipse from a special directory. jamescook@chromium.org 2012-06-1)
+
+## Run Eclipse & Set your workspace
+
+Run eclipse/eclipse in a way that your regular build environment (export CC, CXX, etc...) will be visible to the eclipse process.
+
+Set the Workspace to be a directory on a local disk (e.g. /work/workspaces/chrome). Placing it on an NFS share is not recommended -- it's too slow and Eclipse will block on access. Don't put the workspace in the same directory as your checkout.
+
+## Install the C Development Tools ("CDT")
+
+ 1. From the Help menu, select Install New Software...
+ 1. Select the URL for the CDT, http://download.eclipse.org/tools/cdt/releases/kepler
+ 1. If it's not there you can click Add... and add it.
+ 1. Googlers: We have a local mirror, but be sure you run prodaccess before trying to use it.
+ 1. Select & install the Main and Optional features.
+ 1. Restart Eclipse
+ 1. Go to Window > Open Perspective > Other... > C/C++ to switch to the C++ perspective (window layout).
+ 1. Right-click on the "Java" perspective in the top-right corner and select "Close" to remove it.
+
+## Create your project(s)
+
+First, turn off automatic workspace refresh and automatic building, as Eclipse tries to do these too often and gets confused:
+
+ 1. Open Window > Preferences
+ 1. Search for "workspace"
+ 1. Turn off "Build automatically"
+ 1. Turn off "Refresh using native hooks or polling"
+ 1. Click "Apply"
+
+Create a single Eclipse project for everything:
+
+ 1. From the File menu, select New > Project...
+ 1. Select C/C++ Project > Makefile Project with Existing Code
+ 1. Name the project the exact name of the directory: "src"
+ 1. Provide a path to the code, like /work/chromium/src
+ 1. Select toolchain: Linux GCC
+ 1. Click Finish.
+
+Chromium has a huge amount of code, enough that Eclipse can take a very long time to perform operations like "go to definition" and "open resource". You need to set it up to operate on a subset of the code.
+
+In the Project Explorer on the left side:
+
+ 1. Right-click on "src" and select "Properties..."
+ 1. Open Resource > Resource Filters
+ 1. Click "Add..."
+ 1. Add the following filter:
+ * Include only
+ * Files, all children (recursive)
+ * Name matches `.*\.(c|cc|cpp|h|mm|inl|idl|js|json|css|html|gyp|gypi|grd|grdp|gn)` regular expression
+ 1. Add another filter:
+ * Exclude all
+ * Folders
+ * Name matches `out_.*|\.git|\.svn|LayoutTests` regular expression
+ * If you aren't working on WebKit, adding `|WebKit` will remove more files
+ 1. Click "OK"
+
+Don't exclude the primary "out" directory, as it contains generated header files for things like string resources and Eclipse will miss a lot of symbols if you do.
+
+Eclipse will refresh the workspace and start indexing your code. It won't find most header files, however. Give it more help finding them:
+
+ 1. Open Window > Preferences
+ 1. Search for "Indexer"
+ 1. Turn on "Allow heuristic resolution of includes"
+ 1. Select "Use active build configuration"
+ 1. Set Cache limits > Index database > Limit relative... to 20%
+ 1. Set Cache limits > Index database > Absolute limit to 256 MB
+ 1. Click "OK"
+
+Now the indexer will find many more include files, regardless of which approach you take below.
+
+### Optional: Manual header paths and symbols
+You can manually tell Eclipse where to find header files, which will allow it to create the source code index before you do a real build.
+
+ 1. Right-click on "src" and select "Properties..."
+ * Open C++ General > Paths and Symbols > Includes
+ * Click "GNU C++"
+ * Click "Add..."
+ * Add /path/to/chromium/src
+ * Check "Add to all configurations" and "Add to all languages"
+ 1. Repeat the above for:
+ * /path/to/chromium/src/testing/gtest/include
+
+You may also find it helpful to define some symbols.
+
+ 1. Add OS\_LINUX:
+ * Select the "Symbols" tab
+ * Click "GNU C++"
+ * Click "Add..."
+ * Add name OS\_LINUX with value 1
+ * Click "Add to all configurations" and "Add to all languages"
+ 1. Repeat for ENABLE\_EXTENSIONS 1
+ 1. Repeat for HAS\_OUT\_OF\_PROC\_TEST\_RUNNER 1
+ 1. Click "OK".
+ 1. Eclipse will ask if you want to rebuild the index. Click "Yes".
+
+Let the C++ indexer run. It will take a while (10s of minutes).
+
+## Optional: Building inside Eclipse
+This allows Eclipse to automatically discover include directories and symbols. If you use gold or ninja (both recommended) you'll need to tell Eclipse about your path.
+
+ 1. echo $PATH from a shell and copy it to the clipboard
+ 1. Open Window > Preferences > C/C++ > Build > Environment
+ 1. Select "Replace native environment with specified one" (since gold and ninja must be at the start of your path)
+ 1. Click "Add..."
+ 1. For name, enter `PATH`
+ 1. For value, paste in your path with the ninja and gold directories.
+ 1. Click "OK"
+
+To create a Make target:
+
+ 1. From the Window menu, select Show View > Make Target
+ 1. In the Make Target view, right-click on the project and select New...
+ 1. name the target (e.g. base\_unittests)
+ 1. Uncheck "Use builder settings" under Build Command and type whatever build command you would use to build this target (e.g. "ninja -C out/Debug base\_unittests").
+ 1. Return to the project properties page and, under C/C++ Build, change the Build Location/Build Directory to be /path/to/chromium/src
+ 1. In theory ${workspace\_loc} should work, but it doesn't for me.
+ 1. If you put your workspace in /path/to/chromium, then ${workspace\_loc:/src} will work too.
+ 1. Now in the Make Targets view, select the target and click the hammer icon (Build Make Target).
+
+You should see the build proceeding in the Console View and errors will be parsed and appear in the Problems View. (Note that sometimes multi-line compiler errors only show up partially in the Problems view and you'll want to look at the full error in the Console).
+
+(Eclipse 3.8 has a bug where the console scrolls too slowly if you're doing a fast build, e.g. with goma. To work around, go to Window > Preferences and search for "console". Under C/C++ console, set "Limit console output" to 2147483647, the maximum value.)
+
+## Optional: Multiple build targets
+If you want to build multiple different targets in Eclipse (chrome, unit\_tests, etc.):
+
+ 1. Window > Show Toolbar (if you had it off)
+ 1. Turn on special toolbar menu item (hammer) or menu bar item (Project > Build configurations > Set Active > ...)
+ 1. Window > Customize Perspective... > "Command Groups Availability"
+ 1. Check "Build configuration"
+ 1. Add more Build targets
+ 1. Project > Properties > C/C++ Build > Manage Configurations
+ 1. Select "New..."
+ 1. Duplicate from current and give it a name like "Unit tests".
+ 1. Under "Behavior" > Build, change the target to e.g. "unit\_tests".
+
+You can also drag the toolbar to the bottom of your window to save vertical space.
+
+## Optional: Debugging
+
+ 1. From the toolbar at the top, click the arrow next to the debug icon and select Debug Configurations...
+ 1. Select C/C++ Application and click the New Launch Configuration icon. This will create a new run/debug configuration under the C/C++ Application header.
+ 1. Name it something useful (e.g. base\_unittests).
+ 1. Under the Main Tab, enter the path to the executable (e.g. .../out/Debug/base\_unittests)
+ 1. Select the Debugger Tab, select Debugger: gdb and unclick "Stop on startup in (main)" unless you want this.
+ 1. Set a breakpoint somewhere in your code and click the debug icon to start debugging.
+
+## Optional: Accurate symbol information
+
+If set up properly, Eclipse can do a great job of semantic navigation of C++ code (showing type hierarchies, finding all references to a particular method even when other classes have methods of the same name, etc.). But doing this well requires that Eclipse knows the correct include paths and pre-processor definitions. After fighting with a number of approaches, I've found the one below to work best for me.
+
+ 1. From a shell in your src directory, run GYP\_GENERATORS=ninja,eclipse build/gyp\_chromium
+ 1. This generates <project root>/out/Debug/eclipse-cdt-settings.xml which is used below.
+ 1. This creates a single list of include directories and preprocessor definitions to be used for all source files, and so is a little inaccurate. Here are some tips for compensating for the limitations:
+ 1. Use `-R <target>` to restrict the output to considering only certain targets (avoiding unnecessary includes that are likely to cause trouble). Eg. for a blink project, use `-R blink`.
+ 1. If you care about blink, move 'third\_party/WebKit/Source' to the top of the list to better resolve ambiguous include paths (eg. 'config.h').
+ 1. Import paths and symbols
+ 1. Right click on the project and select Properties > C/C++ General > Paths and Symbols
+ 1. Click Restore Defaults to clear any old settings
+ 1. Click Import Settings... > Browse... and select <project root>/out/Debug/eclipse-cdt-settings.xml
+ 1. Click the Finish button. The entire preferences dialog should go away.
+ 1. Right click on the project and select Index > Rebuild
+
+## Alternative: Per-file accurate include/pre-processor information
+
+Instead of generating a fixed list of include paths and pre-processor definitions for a project (above), it is also possible to have Eclipse determine the correct setting on a file-by-file basis using a built output parser. I (rbyers) used this successfully for a long time, but it doesn't seem much better in practice than the simpler (and less bug-prone) approach above.
+
+ 1. Install the latest version of Eclipse IDE for C/C++ developers ([Juno SR1](http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/junosr1) at the time of this writing)
+ 1. Setup build to generate a build log that includes the g++ command lines for the files you want to index:
+ 1. Project Properties -> C/C++ Build
+ 1. Uncheck "Use default build command"
+ 1. Enter your build command, eg: ninja -v
+ 1. Note that for better performance, you can use a command that doesn't actually build, but just prints the commands that would be run. For ninja/make this means adding -n. This only prints the compile commands for changed files (so be sure to move your existing out directory out of the way temporarily to force a full "build"). ninja also supports "-t commands", which will print all build commands for the specified target and runs even faster as it doesn't have to check file timestamps.
+ 1. Build directory: your build path including out/Debug
+ 1. Note that for the relative paths to be parsed correctly you can't use ninja's ` '-C <dir>' ` to change directories as you might from the command line.
+ 1. Build: potentially change "all" to the target you want to analyze, eg. "chrome"
+ 1. Deselect 'clean'
+ 1. If you're using Ninja, you need to teach eclipse to ignore the prefix it adds to each line of build output (eg. [10/1234]):
+ 1. Project properties -> C/C++ General -> Preprocessor includes
+ 1. Providers -> CDT GCC Build Output Parser -> Compiler command pattern
+ 1. ` (\[.*\] )?((gcc)|([gc]\+\+)|(clang(\+\+)?)) `
+ 1. Note that there appears to be a bug with "Share setting entries between projects" - it will keep resetting to off. I suggest using per-project settings and using the "folder" as the container to keep discovered entries ("file" may work as well).
+ 1. Eclipse / GTK has bugs where lots of output to the build console can slow down the UI dramatically and cause it to hang (basically it spends all its time trying to position the cursor correctly in the build console window). To avoid this, close the console window and disable automatically opening it on build:
+ 1. Preferences->C/C++->Build->Console -> Uncheck "Open console when building"
+ 1. note you can still see the build log in ` <workspace>/.metadata/.plugins/org.eclipse.cdt.ui `
+ 1. Now build the project (select project, click on hammer). If all went well:
+ 1. Right click on a cpp file -> properties -> C/C++ general -> Preprocessor includes -> GNU C++ -> CDT GCC Build output Parser
+ 1. You will be able to expand and see all the include paths and pre-processor definitions used for this file
+ 1. Rebuild index (right-click on project, index, rebuild). If all went well:
+ 1. Open a CPP file and look at problems windows
+ 1. Should be no (or very few) errors
+ 1. Should be able to hit F3 on most symbols and jump to their definition
+ 1. CDT has some issues with complex C++ syntax like templates (eg. PassOwnPtr functions)
+ 1. See [this page](http://wiki.eclipse.org/CDT/User/FAQ#Why_does_Open_Declaration_.28F3.29_not_work.3F_.28also_applies_to_other_functions_using_the_indexer.29) for more information.
+
+## Optional: static code and style guide analysis using cpplint.py
+
+ 1. From the menu bar at the top, click Project -> Properties and go to C/C++ Build.
+ 1. On the right side of the pop-up window, click "Manage Configurations...", then New, give it a name such as "Lint current file", close the small window, and then select it in the Configuration drop-down list.
+ 1. Under the Builder settings tab, uncheck "Use default build command" and enter the full path to your depot\_tools/cpplint.py as the build command.
+ 1. Under the Behaviour tab, unselect Clean, select Build (incremental build), and in Make build target add `--verbose=0 ${selected_resource_loc} `
+ 1. Go back to the left side of the current window, go to C/C++ Build -> Settings, click on the Error Parsers tab, make sure CDT GNU C/C++ Error Parser and CDT pushd/popd CWD Locator are set, then click Apply and OK.
+ 1. Select a file and click on the hammer icon drop down triangle next to it, and make sure the build configuration is selected "Lint current file", then click on the hammer.
+ 1. Note: If you get the cpplint.py help output, make sure you have selected a file, by clicking inside the editor window or on its tab header, and make sure the editor is not maximized inside Eclipse, i.e. you should see more subwindows around.
+
+## Additional tips
+ 1. Mozilla's [Eclipse CDT guide](https://developer.mozilla.org/en-US/docs/Eclipse_CDT) is helpful:
+ 1. For improved performance, I use medium-granularity projects (eg. one for WebKit/Source) instead of putting all of 'src/' in one project.
+ 1. For working in Blink (which uses WebKit code style), feel free to use [this](https://drive.google.com/file/d/0B2LVVIKSxUVYM3R6U0tUa1dmY0U/view?usp=sharing) code-style formatter XML profile \ No newline at end of file
diff --git a/docs/linux_faster_builds.md b/docs/linux_faster_builds.md
new file mode 100644
index 0000000..1dee237
--- /dev/null
+++ b/docs/linux_faster_builds.md
@@ -0,0 +1,109 @@
+# Tips for improving build speed on Linux
+
+This list is sorted such that the largest speedup is first; see LinuxBuildInstructions for context and [Faster Builds](https://code.google.com/p/chromium/wiki/CommonBuildTasks#Faster_Builds) for non-Linux-specific techniques.
+
+
+
+## Use goma
+
+If you work at Google, you can use goma for distributed builds; this is similar to [distcc](http://en.wikipedia.org/wiki/Distcc). See [go/ma](http://go/ma) for documentation.
+
+Even without goma, you can do distributed builds with distcc (if you have access to other machines), or a parallel build locally if you have multiple cores.
+
+Whether using goma, distcc, or parallel building, you can specify the number of build processes with `-jX` where `X` is the number of processes to start.
+
+## Use Icecc
+
+[Icecc](https://github.com/icecc/icecream) is a distributed compiler with a central scheduler to share build load. Many external contributors (e.g. Intel, Opera, Samsung) currently use it.
+
+When you use Icecc, you need to set some gyp variables.
+
+**linux\_use\_bundled\_binutils=0**
+
+The -B option is not supported. [relevant commit](https://github.com/icecc/icecream/commit/b2ce5b9cc4bd1900f55c3684214e409fa81e7a92)
+
+**linux\_use\_debug\_fission=0**
+
+[debug fission](http://gcc.gnu.org/wiki/DebugFission) is not supported. [bug](https://github.com/icecc/icecream/issues/86)
+
+**clang=0**
+
+Icecc doesn't support clang yet.
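+
+For example, a hypothetical GYP invocation with those variables set might look like:
+```
+export GYP_DEFINES="linux_use_bundled_binutils=0 linux_use_debug_fission=0 clang=0"
+./build/gyp_chromium
+```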
+
+## Build only specific targets
+
+If you specify just the target(s) you want built, the build will only walk that portion of the dependency graph:
+```
+$ cd $CHROMIUM_ROOT/src
+$ ninja -C out/Debug base_unittests
+```
+
+## Linking
+### Dynamically link
+
+We normally statically link everything into one final executable, which produces enormous (nearly 1 GB in debug mode) files. If you dynamically link, you save a lot of link time in exchange for a bit of startup time, which is a fine trade-off, especially when you're in an edit/compile/test cycle.
+
+Run gyp with the `-Dcomponent=shared_library` flag to put it in this configuration. (Or set those flags via the `GYP_DEFINES` environment variable.)
+
+e.g.
+
+```
+$ build/gyp_chromium -D component=shared_library
+$ ninja -C out/Debug chrome
+```
+
+See the [component build page](http://www.chromium.org/developers/how-tos/component-build) for more information.
+
+### Linking using gold
+
+The experimental "gold" linker is much faster than the standard BFD linker.
+
+On some systems (including Debian experimental, Ubuntu Karmic and beyond), there exists a `binutils-gold` package. Do not install this version! Having gold as the default linker is known to break kernel / kernel module builds.
+
+The Chrome tree now includes a binary of gold compiled for x64 Linux. It is used by default on those systems.
+
+On other systems, to safely install gold, make sure the final binary is named `ld` and then set `CC/CXX` appropriately, e.g. `export CC="gcc -B/usr/local/gold/bin"` and similarly for `CXX`. Alternatively, you can add `/usr/local/gold/bin` to your `PATH` in front of `/usr/bin`.
+
+## WebKit
+### Build WebKit without debug symbols
+
+WebKit is about half our weight in terms of debug symbols. (Lots of templates!) If you're working on UI bits where you don't care to trace into WebKit you can cut down the size and slowness of debug builds significantly by building WebKit without debug symbols.
+
+Set the gyp variable `remove_webcore_debug_symbols=1`, either via the `GYP_DEFINES` environment variable, the `-D` flag to gyp, or by adding the following to `~/.gyp/include.gypi`:
+```
+{
+ 'variables': {
+ 'remove_webcore_debug_symbols': 1,
+ },
+}
+```
+
+## Tune ccache for multiple working directories
+
+(Ignore this if you use goma.)
+
+Increase your ccache hit rate by setting `CCACHE_BASEDIR` to a parent directory that the working directories all have in common (e.g., `/home/yourusername/development`). Consider using `CCACHE_SLOPPINESS=include_file_mtime` (since if you are using multiple working directories, header times in svn sync'ed portions of your trees will be different - see [the ccache troubleshooting section](http://ccache.samba.org/manual.html#_troubleshooting) for additional information). If you use symbolic links from your home directory to get to the local physical disk directory where you keep those working development directories, consider putting
+```
+alias cd="cd -P"
+```
+in your .bashrc so that `$PWD` or `cwd` always refers to a physical, not logical directory (and make sure `CCACHE_BASEDIR` also refers to a physical parent).
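+
+Putting that together, a hypothetical setup (using the example paths from above) might be:
+```
+# In ~/.bashrc (adjust the parent directory to your own layout):
+export CCACHE_BASEDIR=/home/yourusername/development
+export CCACHE_SLOPPINESS=include_file_mtime
+alias cd="cd -P"
+```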
+
+If you tune ccache correctly, a second working directory that uses a branch tracking trunk and is up-to-date with trunk and was gclient sync'ed at about the same time should build chrome in about 1/3 the time, and the cache misses as reported by `ccache -s` should barely increase.
+
+This is especially useful if you use `git-new-workdir` and keep multiple local working directories going at once.
+
+## Using tmpfs
+
+You can use tmpfs for the build output to reduce the amount of disk writes required. I.e. mount tmpfs to the output directory where the build output goes:
+
+As root:
+ * `mount -t tmpfs -o size=20G,nr_inodes=40k,mode=1777 tmpfs /path/to/out`
+
+**Caveat:** You need to have enough RAM + swap to back the tmpfs. For a full debug build, you will need about 20 GB. Less for just building the chrome target or for a release build.
+
+Quick and dirty benchmark numbers on an HP Z600 (Intel Core i7, 16 cores hyperthreaded, 12 GB RAM):
+
+| Configuration | Build time |
+|:--------------|:-----------|
+| With tmpfs | 12m:20s |
+| Without tmpfs | 15m:40s | \ No newline at end of file
diff --git a/docs/linux_graphics_pipeline.md b/docs/linux_graphics_pipeline.md
new file mode 100644
index 0000000..268718f
--- /dev/null
+++ b/docs/linux_graphics_pipeline.md
@@ -0,0 +1,5 @@
+Note, this deals with **test\_shell** only. See [BitmapPipeline](BitmapPipeline.md) for the picture in the browser.
+
+![http://chromium.googlecode.com/svn/trunk/images/linux-gfx.png](http://chromium.googlecode.com/svn/trunk/images/linux-gfx.png)
+
+([SVG source](http://chromium.googlecode.com/svn/trunk/images/linux-gfx.svg)) \ No newline at end of file
diff --git a/docs/linux_gtk_theme_integration.md b/docs/linux_gtk_theme_integration.md
new file mode 100644
index 0000000..90eda85
--- /dev/null
+++ b/docs/linux_gtk_theme_integration.md
@@ -0,0 +1,92 @@
+# Introduction
+
+The GTK+ port of Chromium has a mode where we try to match the user's GTK theme (which can be enabled under Wrench -> Options -> Personal Stuff -> Set to GTK+ theme). The heuristics often don't pick good colors due to a lack of information in the GTK themes.
+
+Starting in Chrome 9, we're providing a new way for theme authors to control our GTK+ theming mode. I am not sure of the earliest build these showed up in, but I know 9.0.597 works.
+
+# Describing the previous heuristics
+
+The frame heuristics were simple. Query the `bg[SELECTED]` and `bg[INSENSITIVE]` colors on the `MetaFrames` class and darken them slightly. This usually worked OK until the rise of themes that try to make a unified titlebar/menubar look. At roughly that time, it seems that people stopped specifying color information for the `MetaFrames` class and this has led to the very orange chrome frame on Maverick.
+
+`MetaFrames` is (was?) a class that was used to communicate frame color data to the window manager around the Hardy days. (It's still defined in most of [XFCE's themes](http://packages.ubuntu.com/maverick/gtk2-engines-xfce)). In chrome's implementation, `MetaFrames` derives from `GtkWindow`.
+
+If you are happy with the defaults that chrome has picked, no action is necessary on the part of the theme author.
+
+# Introducing `ChromeGtkFrame`
+
+For cases where you want control of the colors chrome uses, Chrome gives you a number of style properties for injecting colors and other information about how to draw the frame. For example, here's the proposed modifications to Ubuntu's Ambiance:
+
+```
+style "chrome-gtk-frame"
+{
+ ChromeGtkFrame::frame-color = @fg_color
+ ChromeGtkFrame::inactive-frame-color = lighter(@fg_color)
+
+ ChromeGtkFrame::frame-gradient-size = 16
+ ChromeGtkFrame::frame-gradient-color = "#5c5b56"
+
+ ChromeGtkFrame::scrollbar-trough-color = @bg_color
+ ChromeGtkFrame::scrollbar-slider-prelight-color = "#F8F6F2"
+ ChromeGtkFrame::scrollbar-slider-normal-color = "#E7E0D3"
+}
+
+class "ChromeGtkFrame" style "chrome-gtk-frame"
+```
+
+## Frame color properties
+
+These are the frame's main solid color.
+
+| **Property** | **Type** | **Description** | **If unspecified** |
+|:-------------|:---------|:----------------|:-------------------|
+| `frame-color` | `GdkColor` | The main color of active chrome windows. | Darkens `MetaFrame::bg[SELECTED]` |
+| `inactive-frame-color` | `GdkColor` | The main color of inactive chrome windows. | Darkens `MetaFrame::bg[INSENSITIVE]` |
+| `incognito-frame-color` | `GdkColor` | The main color of active incognito windows. | Tints `frame-color` by the default incognito tint |
+| `incognito-inactive-frame-color` | `GdkColor` | The main color of inactive incognito windows. | Tints `inactive-frame-color` by the default incognito tint |
+
+## Frame gradient properties
+
+Chrome's frame (along with many normal window manager themes) have a slight gradient at the top, before filling the rest of the frame background image with a solid color. For example, the top `frame-gradient-size` pixels would be a gradient starting from `frame-gradient-color` at the top to `frame-color` at the bottom, with the rest of the frame being filled with `frame-color`.
+
+| **Property** | **Type** | **Description** | **If unspecified** |
+|:-------------|:---------|:----------------|:-------------------|
+| `frame-gradient-size` | Integers 0 through 128 | How large the gradient should be. Set to zero to disable drawing a gradient | Defaults to 16 pixels tall |
+| `frame-gradient-color` | `GdkColor` | Top color of the gradient | Lightens `frame-color` |
+| `inactive-frame-gradient-color` | `GdkColor` | Top color of the inactive gradient | Lightens `inactive-frame-color` |
+| `incognito-frame-gradient-color` | `GdkColor` | Top color of the incognito gradient | Lightens `incognito-frame-color` |
+| `incognito-inactive-frame-gradient-color` | `GdkColor` | Top color of the incognito inactive gradient. | Lightens `incognito-inactive-frame-color` |
+
+## Scrollbar control
+
+Because widget rendering is done in a separate, sandboxed process that doesn't have access to the X server or the filesystem, there's no current way to do GTK+ widget rendering. We instead pass WebKit a few colors and let it draw a default scrollbar. We have a very [complex fallback](http://git.chromium.org/gitweb/?p=chromium.git;a=blob;f=chrome/browser/gtk/gtk_theme_provider.cc;h=a57ab6b182b915192c84177f1a574914c44e2e71;hb=3f873177e192f5c6b66ae591b8b7205d8a707918#l424) where we render the widget and then average colors if this information isn't provided.
+
+| **Property** | **Type** | **Description** |
+|:-------------|:---------|:----------------|
+| `scrollbar-slider-prelight-color` | `GdkColor` | Color of the slider on mouse hover. |
+| `scrollbar-slider-normal-color` | `GdkColor` | Color of the slider otherwise |
+| `scrollbar-trough-color` | `GdkColor` | Color of the scrollbar trough |
+
+# Anticipated Q&A
+
+## Will you patch themes upstream?
+
+I am at the very least hoping we can get Radiance and Ambiance patches since we make very poor frame decisions on those themes, and hopefully a few others.
+
+## How about control over the min/max/close buttons?
+
+I actually tried this locally. There's a sort of uncanny valley effect going on; as the frame looks more native, it's more obvious that it isn't behaving like a native frame. (Also my implementation added a startup time hit.)
+
+## Why use style properties instead of (i.e.) `bg[STATE]`?
+
+There's no way to distinguish between colors set on different classes. Using style properties allows us to be backwards compatible and maintain the heuristics since not everyone is going to modify their themes for chromium (and the heuristics do a reasonable job).
+
+## Why now?
+
+ * I (erg@) was putting off major changes to the window frame stuff in anticipation of finally being able to use GTK+'s theme rendering for the window border with client side decorations, but client side decorations either isn't happening or isn't happening anytime soon, so there's no justification for pushing this task off into the future.
+ * Chrome looks pretty bad under Ambiance on Maverick.
+
+## Any details about `MetaFrames` and `ChromeGtkFrame` relationship and history?
+
+`MetaFrames` is a class that was used in metacity to communicate color information to the window manager. During the Hardy Heron days, we slurped up the data and used it as a key part of our heuristics. At least on my Lucid Lynx machine, none of the GNOME GTK+ themes have `MetaFrames` styling. (As mentioned above, several of the XFCE themes do, though.)
+
+Internally to chrome, our `ChromeGtkFrame` class inherits from `MetaFrames` (again, which inherits from `GtkWindow`) so any old themes that style the `MetaFrames` class are backwards compatible. \ No newline at end of file
diff --git a/docs/linux_hw_video_decode.md b/docs/linux_hw_video_decode.md
new file mode 100644
index 0000000..a3b0805
--- /dev/null
+++ b/docs/linux_hw_video_decode.md
@@ -0,0 +1,57 @@
+# Enabling hardware `<video>` decode codepaths on Linux
+
+Hardware acceleration of video decode on linux is [unsupported](http://crbug.com/137247) in Chrome for user-facing builds. During development (targeting other platforms) it can be useful to be able to trigger the code-paths used on HW-accelerated platforms (such as CrOS and win7) in a linux-based development environment. Here's one way to do so, with details based on a gprecise setup.
+
+ * Install pre-requisites: On Ubuntu Precise, at least, this includes:
+```
+sudo apt-get install libtool libvdpau1 libvdpau-dev
+```
+
+ * Install and configure [libva](http://cgit.freedesktop.org/libva/)
+```
+DEST=${HOME}/apps/libva
+cd /tmp
+git clone git://anongit.freedesktop.org/libva
+cd libva
+git reset --hard libva-1.2.1
+./autogen.sh && ./configure --prefix=${DEST}
+make -j32 && make install
+```
+ * Install and configure the [VDPAU](http://cgit.freedesktop.org/vaapi/vdpau-driver) VAAPI driver
+```
+DEST=${HOME}/apps/libva
+cd /tmp
+git clone git://anongit.freedesktop.org/vaapi/vdpau-driver
+cd vdpau-driver
+export PKG_CONFIG_PATH=${DEST}/lib/pkgconfig/:$PKG_CONFIG_PATH
+export LIBVA_DRIVERS_PATH=${DEST}/lib/dri
+export LIBVA_X11_DEPS_CFLAGS=-I${DEST}/include
+export LIBVA_X11_DEPS_LIBS=-L${DEST}/lib
+export LIBVA_DEPS_CFLAGS=-I${DEST}/include
+export LIBVA_DEPS_LIBS=-L${DEST}/lib
+make distclean
+unset CC CXX
+./autogen.sh && ./configure --prefix=${DEST} --enable-debug
+find . -name Makefile | xargs sed -i "sI/usr/lib/xorg/modules/driversI${DEST}/lib/driIg"
+sed -i -e 's/_(\(VAEncH264VUIBufferType\|VAEncH264SEIBufferType\));//' src/vdpau_dump.c
+make -j32 && rm -f ${DEST}/lib/dri/{nvidia_drv_video.so,s3g_drv_video.so} && make install
+```
+ * Add to `$GYP_DEFINES` (a combined example follows this list):
+ * `chromeos=1` to link in `VaapiVideoDecodeAccelerator`
+ * `proprietary_codecs=1 ffmpeg_branding=Chrome` to allow Chrome to play h.264 content, which is the only codec VAVDA knows about today.
+ * Re-run gyp (`./build/gyp_chromium` or `gclient runhooks`)
+ * Rebuild chrome
+ * Run chrome with `LD_LIBRARY_PATH=${HOME}/apps/libva/lib` in the environment, and with the --no-sandbox command line flag.
+ * If things don't work, a Debug build (to include D\*LOG's) with `--vmodule=*content/common/gpu/media/*=10,gpu_video*=1` might be enlightening.
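+
+Putting the GYP and runtime steps above together, a sketch of a full session (paths assume the `${DEST}` used earlier on this page, and the usual ninja build):
+```
+export GYP_DEFINES="chromeos=1 proprietary_codecs=1 ffmpeg_branding=Chrome"
+./build/gyp_chromium
+ninja -C out/Debug chrome
+LD_LIBRARY_PATH=${HOME}/apps/libva/lib out/Debug/chrome --no-sandbox
+```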
+
+# NOTE THIS IS AN UNSUPPORTED CONFIGURATION AND LIKELY TO BE BROKEN AT ANY POINT IN TIME
+
+This page is purely here to help developers targeting supported HW `<video>` decode platforms be more effective. Do not expect help if this setup fails to work. \ No newline at end of file
diff --git a/docs/linux_minidump_to_core.md b/docs/linux_minidump_to_core.md
new file mode 100644
index 0000000..67baf41
--- /dev/null
+++ b/docs/linux_minidump_to_core.md
@@ -0,0 +1,103 @@
+# Introduction
+
+On Linux, Chromium can use Breakpad to generate minidump files for crashes. It is possible to convert the minidump files to core files, and examine the core file in gdb, cgdb, or Qtcreator. In the examples below cgdb is assumed but any gdb based debugger can be used.
+
+# Details
+
+## Creating the core file
+
+Use `minidump-2-core` to convert the minidump file to a core file. On Linux, one can build the minidump-2-core target in a Chromium checkout, or alternatively, build it in a Google Breakpad checkout.
+
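+If you're in a Chromium checkout, building the converter is typically just another ninja target (a sketch; the target name here is assumed to match the tool name, and the output directory is whatever you build into):
+```
+ninja -C out/Release minidump-2-core
+```
+
+Then run it on the dump:
+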
+```
+
+$ minidump-2-core foo.dmp > foo.core
+
+```
+
+## Retrieving Chrome binaries
+
+If the minidump is from a public build, then Googlers can find Google Chrome Linux binaries and debugging symbols via https://goto.google.com/chromesymbols. Otherwise, use the locally built chrome files. Google Chrome uses the _debug link_ method to specify the debugging file. Either way, be sure to put chrome and chrome.debug (the stripped debug information) in the same directory as the core file so that the debuggers can find them.
+
+## Loading the core file into gdb/cgdb
+
+The recommended syntax for loading a core file into gdb/cgdb is as follows, specifying both the executable and the core file:
+
+```
+
+$ cgdb chrome foo.core
+
+```
+
+If the executable is not available then the core file can be loaded on its own but debugging options will be limited:
+
+```
+
+$ cgdb -c foo.core
+
+```
+
+## Loading the core file into Qtcreator
+
+Qtcreator is a full GUI wrapper for gdb and it can also load Chrome's core files. From Qtcreator select the Debug menu, Start Debugging, Load Core File... and then enter the paths to the core file and executable. Qtcreator has windows to display the call stack, locals, registers, etc. For more information on debugging with Qtcreator see [Getting Started Debugging on Linux.](https://www.youtube.com/watch?v=xTmAknUbpB0)
+
+## Source debugging
+
+If you have a Chromium repo that is synchronized to exactly (or even approximately) when the Chrome build was created then you can tell gdb/cgdb/Qtcreator to load source code. Since all source paths in Chrome are relative to the out/Release directory you just need to add that directory to your debugger search path, by adding a line similar to this to ~/.gdbinit:
+
+```
+
+(gdb) directory /usr/local/chromium/src/out/Release/
+
+```
+
+## Notes
+
+ * Since the core file is created from a minidump, it is incomplete and the debugger may not know values for variables in memory. Minidump files contain thread stacks so local variables and function parameters should be available, subject to the limitations of optimized builds.
+ * For gdb's `add-symbol-file` command to work, the file must have debugging symbols.
+ * In case of separate debug files, [the gdb manual](https://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html) explains how gdb looks for them.
+ * If the stack trace involves system libraries, the Advanced module loading steps shown below need to be repeated for each library.
+
+## Advanced module loading
+
+If gdb doesn't find shared objects that are needed you can force it to load them. In gdb, the `add-symbol-file` command takes a filename and an address. To figure out the address, look near the end of `foo.dmp`, which contains a copy of `/proc/pid/maps` from the process that crashed.
+
+One quick way to do this is with `grep`. For instance, if the executable is `/path/to/chrome`, one can simply run:
+
+```
+
+$ grep -a /path/to/chrome$ foo.dmp
+
+7fe749a90000-7fe74d28f000 r-xp 00000000 08:07 289158 /path/to/chrome
+7fe74d290000-7fe74d4b7000 r--p 037ff000 08:07 289158 /path/to/chrome
+7fe74d4b7000-7fe74d4e0000 rw-p 03a26000 08:07 289158 /path/to/chrome
+
+
+```
+
+In this case, `7fe749a90000` is the base address for `/path/to/chrome`, but gdb takes the start address of the file's text section. To calculate this, one will need a copy of `/path/to/chrome`, and run:
+
+```
+
+$ objdump -x /path/to/chrome | grep '\.text' | head -n 1 | tr -s ' ' | cut -d' ' -f 7
+
+005282c0
+
+```
+
+Now add the two addresses: `7fe749a90000 + 005282c0 = 7fe749fb82c0` and in gdb, run:
+
+```
+
+(gdb) add-symbol-file /path/to/chrome 0x7fe749fb82c0
+
+```
+
+Then use gdb as normal.
+
+## Other resources
+
+For more discussion on this process see [Debugging a Minidump](http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/crash-reporting/debugging-a-minidump). This page discusses the same process in the context of ChromeOS and many of the concepts and techniques overlap. \ No newline at end of file
diff --git a/docs/linux_open_suse_build_instructions.md b/docs/linux_open_suse_build_instructions.md
new file mode 100644
index 0000000..c94d388
--- /dev/null
+++ b/docs/linux_open_suse_build_instructions.md
@@ -0,0 +1,77 @@
+This page includes some instructions for building Chromium on openSUSE 11.1 and 11.0.
+Before reading this page, you should first read the [Linux Build Instructions](LinuxBuildInstructions.md).
+
+If you are on 64-bit openSUSE, you will also want to read [Linux Build 64-bit on openSUSE](http://code.google.com/p/chromium/wiki/LinuxBuild64Bit#Manual_Setup_on_openSUSE).
+
+## How to Install Dependencies:
+
+Use zypper command to install dependencies:
+
+(openSUSE 11.1 and higher)
+```
+sudo zypper in subversion pkg-config python perl \
+ bison flex gperf mozilla-nss-devel glib2-devel gtk-devel \
+ wdiff lighttpd gcc gcc-c++ gconf2-devel mozilla-nspr \
+ mozilla-nspr-devel php5-fastcgi alsa-devel libexpat-devel \
+ libjpeg-devel libbz2-devel
+```
+
+
+For 11.0, use libnspr4-0d and libnspr4-dev instead of mozilla-nspr and mozilla-nspr-devel, and use php5-cgi instead of php5-fastcgi. You also need gtk2-devel.
+
+(openSUSE 11.0)
+```
+sudo zypper in subversion pkg-config python perl \
+ bison flex gperf mozilla-nss-devel glib2-devel gtk-devel \
+ libnspr4-0d libnspr4-dev wdiff lighttpd gcc gcc-c++ libexpat-devel php5-cgi gconf2-devel \
+ alsa-devel gtk2-devel jpeg-devel
+```
+
+
+The Ubuntu package sun-java6-fonts contains a subset of the fonts that Java uses. Since this package requires Java as a prerequisite anyway, we can do the same thing by just installing the equivalent openSUSE Sun Java package:
+```
+sudo zypper in java-1_6_0-sun
+```
+
+WebKit currently hard-codes paths to the Microsoft fonts. To install these using zypper:
+```
+sudo zypper in fetchmsttfonts pullin-msttf-fonts
+```
+
+To make the fonts installed above work, as the paths are hardcoded for Ubuntu, create symlinks to the appropriate locations:
+```
+sudo mkdir -p /usr/share/fonts/truetype/msttcorefonts
+sudo ln -s /usr/share/fonts/truetype/arial.ttf /usr/share/fonts/truetype/msttcorefonts/Arial.ttf
+sudo ln -s /usr/share/fonts/truetype/arialbd.ttf /usr/share/fonts/truetype/msttcorefonts/Arial_Bold.ttf
+sudo ln -s /usr/share/fonts/truetype/arialbi.ttf /usr/share/fonts/truetype/msttcorefonts/Arial_Bold_Italic.ttf
+sudo ln -s /usr/share/fonts/truetype/ariali.ttf /usr/share/fonts/truetype/msttcorefonts/Arial_Italic.ttf
+sudo ln -s /usr/share/fonts/truetype/comic.ttf /usr/share/fonts/truetype/msttcorefonts/Comic_Sans_MS.ttf
+sudo ln -s /usr/share/fonts/truetype/comicbd.ttf /usr/share/fonts/truetype/msttcorefonts/Comic_Sans_MS_Bold.ttf
+sudo ln -s /usr/share/fonts/truetype/cour.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New.ttf
+sudo ln -s /usr/share/fonts/truetype/courbd.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New_Bold.ttf
+sudo ln -s /usr/share/fonts/truetype/courbi.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New_Bold_Italic.ttf
+sudo ln -s /usr/share/fonts/truetype/couri.ttf /usr/share/fonts/truetype/msttcorefonts/Courier_New_Italic.ttf
+sudo ln -s /usr/share/fonts/truetype/impact.ttf /usr/share/fonts/truetype/msttcorefonts/Impact.ttf
+sudo ln -s /usr/share/fonts/truetype/times.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
+sudo ln -s /usr/share/fonts/truetype/timesbd.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman_Bold.ttf
+sudo ln -s /usr/share/fonts/truetype/timesbi.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman_Bold_Italic.ttf
+sudo ln -s /usr/share/fonts/truetype/timesi.ttf /usr/share/fonts/truetype/msttcorefonts/Times_New_Roman_Italic.ttf
+sudo ln -s /usr/share/fonts/truetype/verdana.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana.ttf
+sudo ln -s /usr/share/fonts/truetype/verdanab.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Bold.ttf
+sudo ln -s /usr/share/fonts/truetype/verdanai.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Italic.ttf
+sudo ln -s /usr/share/fonts/truetype/verdanaz.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Bold_Italic.ttf
+```
+And then for the Java fonts:
+```
+sudo mkdir -p /usr/share/fonts/truetype/ttf-lucida
+sudo find /usr/lib*/jvm/java-1.6.*-sun-*/jre/lib -iname '*.ttf' -print -exec ln -s {} /usr/share/fonts/truetype/ttf-lucida \;
+```
+
+## Building the software
+Please refer to the [Linux Build Instructions](LinuxBuildInstructions.md).
+
+
+---
+
+
+Please leave comments and update this page if you use different steps. \ No newline at end of file
diff --git a/docs/linux_password_storage.md b/docs/linux_password_storage.md
new file mode 100644
index 0000000..8e63d37
--- /dev/null
+++ b/docs/linux_password_storage.md
@@ -0,0 +1,23 @@
+# Introduction
+
+On Linux, Chromium can store passwords in three ways:
+ * GNOME Keyring
+ * KWallet 4
+ * plain text
+
+Chromium chooses which store to use automatically, based on your desktop environment.
+
+Passwords stored in GNOME Keyring or KWallet are encrypted on disk, and access to them is controlled by dedicated daemon software. Passwords stored in plain text are not encrypted. Because of this, when either GNOME Keyring or KWallet is in use, any unencrypted passwords that have been stored previously are automatically moved into the encrypted store.
+
+Support for using GNOME Keyring and KWallet was added in version 6, but using these (when available) was not made the default mode until version 12.
+
+# Details
+
+Although Chromium chooses which store to use automatically, the store to use can also be specified with a command line argument:
+ * `--password-store=gnome` (to use GNOME Keyring)
+ * `--password-store=kwallet` (to use KWallet)
+ * `--password-store=basic` (to use the plain text store)
+
+Note that Chromium will fall back to `basic` if a requested or autodetected store is not available.
+
+In versions 6-11, the store to use was not detected automatically, but detection could be requested with an additional argument:
+ * `--password-store=detect` \ No newline at end of file
diff --git a/docs/linux_pid_namespace_support.md b/docs/linux_pid_namespace_support.md
new file mode 100644
index 0000000..defebf6
--- /dev/null
+++ b/docs/linux_pid_namespace_support.md
@@ -0,0 +1,42 @@
+The [LinuxSUIDSandbox](LinuxSUIDSandbox.md) currently relies on support for the CLONE\_NEWPID flag in Linux's [clone() system call](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html). You can check whether your system supports PID namespaces with the code below, which must be run as root:
+
+```
+#define _GNU_SOURCE
+#include <unistd.h>
+#include <sched.h>
+#include <stdio.h>
+#include <sys/wait.h>
+
+#if !defined(CLONE_NEWPID)
+#define CLONE_NEWPID 0x20000000
+#endif
+
+int worker(void* arg) {
+ const pid_t pid = getpid();
+ if (pid == 1) {
+ printf("PID namespaces are working\n");
+ } else {
+ printf("PID namespaces ARE NOT working. Child pid: %d\n", pid);
+ }
+
+ return 0;
+}
+
+int main() {
+ if (getuid()) {
+ fprintf(stderr, "Must be run as root.\n");
+ return 1;
+ }
+
+ char stack[8192];
+  // SIGCHLD is passed along with CLONE_NEWPID so that waitpid() below can
+  // reap the child.
+  const pid_t child = clone(worker, stack + sizeof(stack),
+                            CLONE_NEWPID | SIGCHLD, NULL);
+  if (child == -1) {
+    perror("clone");
+    fprintf(stderr, "Clone failed. PID namespaces ARE NOT supported\n");
+    return 1;
+  }
+
+ waitpid(child, NULL, 0);
+
+ return 0;
+}
+``` \ No newline at end of file
diff --git a/docs/linux_plugins.md b/docs/linux_plugins.md
new file mode 100644
index 0000000..33bdb55
--- /dev/null
+++ b/docs/linux_plugins.md
@@ -0,0 +1,27 @@
+### Background reading materials
+#### Plugins in general
+ * [Gecko Plugin API reference](https://developer.mozilla.org/en/Gecko_Plugin_API_Reference) -- most important to read
+ * [Mozilla plugins site](http://www.mozilla.org/projects/plugins/)
+ * [XEmbed extension](https://developer.mozilla.org/en/XEmbed_Extension_for_Mozilla_Plugins) -- newer X11-specific plugin API
+ * [NPAPI plugin guide](http://gplflash.sourceforge.net/gplflash2_blog/npapi.html) from GPLFlash project
+
+#### Chromium-specific
+ * [Chromium's plugin architecture](http://dev.chromium.org/developers/design-documents/plugin-architecture) -- may be out of date but will be worth reading
+
+### Code to reference
+ * [Mozilla plugin code](http://mxr.mozilla.org/firefox/source/modules/plugin/base/src/) -- useful reference
+ * [nspluginwrapper](http://gwenole.beauchesne.info//en/projects/nspluginwrapper) -- does out-of-process plugins itself
+
+### Terminology
+ * _Internal plugin_: "a plugin that's implemented in the chrome dll, i.e. there's no external dll that services that mime type. For Linux you'll just have to worry about the default plugin, which is what shows a puzzle icon for content that you don't have a plugin for. We use that to allow the user to download and install the missing plugin."
+
+### Flash
+ * [Adobe Flash player dev center](http://www.adobe.com/devnet/flashplayer/)
+ * [penguin.swf](http://blogs.adobe.com/penguin.swf/) -- blog about Flash on Linux
+ * [tips and tricks](http://macromedia.mplug.org/) -- user-created page, with some documentation of special flags in `/etc/adobe/mms.cfg`
+ * [official Adobe bug tracker](https://bugs.adobe.com/flashplayer/)
+
+### Useful Tools
+ * `xwininfo -tree` -- lets you inspect the window hierarchy of a window and get the layout of child windows.
+ * "[DiamondX](http://multimedia.cx/diamondx/) is a simple NPAPI plugin built to run on Unix platforms and exercise the XEmbed browser extension."
+ * To build a 32-bit binary: `./configure CFLAGS='-m32' LDFLAGS='-L/usr/lib32 -m32'` \ No newline at end of file
diff --git a/docs/linux_printing.md b/docs/linux_printing.md
new file mode 100644
index 0000000..e99e8135
--- /dev/null
+++ b/docs/linux_printing.md
@@ -0,0 +1,74 @@
+# Introduction
+The common approach to printing on Linux is to use the Gtk+ and Cairo libraries. The [Gtk+ documentation](http://library.gnome.org/devel/gtk/stable/Printing.html) describes both high-level and low-level printing APIs. In an application program, the easiest way to print is to use [GtkPrintOperation](http://library.gnome.org/devel/gtk/stable/gtk-High-level-Printing-API.html) to get the Cairo context in the `draw-page` callback and render **each** page's contents on that context; the rest is straightforward.
+
+However, in Chromium's multi-process architecture, we want all rendering to be done in the renderer process and all I/O to be done in the browser process. The problem is that we cannot pass the Cairo context obtained in the browser process to the renderer via IPC and/or shared memory and later get it back after the rendering is done. Hence, we have to find something which we can _pickle_ and pass between processes.
+
+# Possible Solutions
+ 1. **Bitmap**: This seems easy because we already pass bitmaps around for displaying web pages. It is also pretty easy to _dump_ such a bitmap onto the printing context. However, the bitmap takes a lot of memory even when you print a blank page. You might wonder why this is a critical problem, since we already use bitmaps for displaying. The critical part is that the screen DPI is around 72~110, at least lower than 150, while the DPI for printing is usually above 150, and maybe 1200 or 2400 for high-end printers. A 72-DPI bitmap looks terrible on paper and reminds us of the days of dot-matrix printers. A 72-DPI bitmap will take ~7MB of memory (assuming US letter paper), hence ~500MB of memory per page if we wanted to print with a 600-DPI laser printer. Besides, even if we wanted to do so, WebKit does not seem to take the DPI factor into account when rendering the web page.
+ 1. **Rendering records** (scripts): We could record every operation that is going to be performed on the canvas and later pass all these records to the browser side for playback. We could define our own format (or script), or we could use CairoScript. Unfortunately, CairoScript is still only in the development snapshot (it is under development and detailed, useful documentation is hard to find), not in the stable release. Even if it were in a stable release, we still could not assume that the user has the latest version of the Cairo library installed. If we created our own format/script, we would have to replay these records in the browser process, which is effectively another kind of rendering. Also, one thing we would need to take care of: when we have to composite multiple semi-transparent bitmaps, we also have to embed these bitmaps along with other skia objects into our records. This implies that we have to be able to _pickle_ `SkBitmap`, `SkPaint`, and other related skia objects. This sucks.
+ 1. **Metafile approach 1**: We can use Cairo to create the PDF (or PS) file in the renderer (one file per page) and pass it to the browser process. We can then use [libpoppler](http://poppler.freedesktop.org/) to render the page content on the printing context (it is pretty easy to use and only needs a few lines of code). This sounds better, but it also means that we have to add [libpoppler](http://poppler.freedesktop.org/) as a dependency. Moreover, we still have to do _rendering_ in the browser process, which we should really avoid if possible.
+ 1. **Metafile approach 2**: Again, we use Cairo to create the PDF (or PS) file in the renderer, but this time we generate a single PDF/PS file for all pages. Unlike the other approaches mentioned earlier, all rendering tasks are done in the renderer, including the transformation for setting up the page and the rendering of the header and footer. Since we do not want to do rendering in the browser process, this means that we cannot use the [GtkPrintOperation](http://library.gnome.org/devel/gtk/stable/gtk-High-level-Printing-API.html) approach, which does require rendering. Instead, we can use [GtkPrintUnixDialog](http://library.gnome.org/devel/gtk/stable/GtkPrintUnixDialog.html) to get printing parameters and generate the [GtkPrintJob](http://library.gnome.org/devel/gtk/stable/GtkPrintJob.html) accordingly. Then, we use `gtk_print_job_set_source_file()` and `gtk_print_job_send()` to send our PDF/PS file directly to the printing system (CUPS). One downside is that `gtk_print_job_set_source_file()` only takes a file on disk, so we have to create a temporary file before we use it. This file might be pretty large if the user is printing a long web page.
+
+# Our Choice
+We are currently using Metafile approach 2, since we would really like to avoid any rendering in the browser process if possible. We are using two-pass rendering in `PdfPsMetafile` right now, because in the first pass we need to get the shrink factor from WebKit so that we can scale and center the page in the second pass. If we can later get the shrink factor from Preview, we might be able to use single-pass rendering. However, two-pass rendering might still have some advantages. For example, we could do Preview and the first pass at the same time if we used the bitmap object in the `VectorPlatformDevice` as the Preview. (We are not sure yet whether this works, since previewing is also a complicated issue.) Once we have the page settings (margins, scaling, etc.), we can easily apply the first-pass results to generate our final output by copying Cairo surfaces.
+
+(Please NOTE the approach used here might be changed in the future.)
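+
+For illustration only, here is a rough sketch of how approach 2 can hand a finished PDF to the printing system via the GTK+ C API. This is **not** actual Chromium code: the function names, the hard-coded file path, and the minimal error handling are ours.
+
+```
+#include <gtk/gtk.h>
+#include <gtk/gtkunixprint.h>
+
+/* Called by GTK+ once the job has been handed off to the print system. */
+static void OnJobComplete(GtkPrintJob* job, gpointer user_data,
+                          const GError* error) {
+  if (error)
+    g_printerr("printing failed: %s\n", error->message);
+  gtk_main_quit();
+}
+
+/* Asks the user for a printer, then sends an already rendered PDF to it.
+ * Returns TRUE if a job was submitted. */
+static gboolean PrintPdf(const char* pdf_path) {
+  GtkWidget* dialog = gtk_print_unix_dialog_new("Print", NULL);
+  if (gtk_dialog_run(GTK_DIALOG(dialog)) != GTK_RESPONSE_OK) {
+    gtk_widget_destroy(dialog);
+    return FALSE;
+  }
+
+  GtkPrinter* printer =
+      gtk_print_unix_dialog_get_selected_printer(GTK_PRINT_UNIX_DIALOG(dialog));
+  GtkPrintSettings* settings =
+      gtk_print_unix_dialog_get_settings(GTK_PRINT_UNIX_DIALOG(dialog));
+  GtkPageSetup* page_setup =
+      gtk_print_unix_dialog_get_page_setup(GTK_PRINT_UNIX_DIALOG(dialog));
+
+  GtkPrintJob* job =
+      gtk_print_job_new("chromium page", printer, settings, page_setup);
+  GError* error = NULL;
+  gboolean sent = FALSE;
+  /* The PDF must already exist on disk; this is the temporary file
+   * mentioned above. */
+  if (gtk_print_job_set_source_file(job, pdf_path, &error)) {
+    gtk_print_job_send(job, OnJobComplete, NULL, NULL);
+    sent = TRUE;
+  } else {
+    g_printerr("could not read %s: %s\n", pdf_path, error->message);
+    g_error_free(error);
+  }
+
+  g_object_unref(settings);
+  gtk_widget_destroy(dialog);
+  return sent;
+}
+
+int main(int argc, char** argv) {
+  gtk_init(&argc, &argv);
+  if (PrintPdf("/tmp/page.pdf"))
+    gtk_main();  /* wait for OnJobComplete */
+  return 0;
+}
+```
+
+(Such a program would typically be built against the `gtk+-unix-print-2.0` pkg-config package or its GTK+ 3 equivalent.)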
+
+# Current Status
+We can now generate a PDF file for the web page and save it under the user's default download directory. The functionality is still very basic. Please see Ideal Goal, Known Issues, and Bugs below.
+
+
+---
+
+
+# Ideal Goal
+Design a better printing flow for Linux:
+> Ideally, when we print a web page, we should get a _snapshot_ of that page. In the current architecture, we cannot stop JavaScript from running while we print the page. This is not good, since the script might close the page we are printing. Things could be worse if plug-ins are involved. When we print, the renderer sends a sync message to the browser, so the renderer must wait for the browser. We may potentially have deadlocks when the plug-in talks to the renderer. Please see [here](http://dev.chromium.org/developers/design-documents/printing) for further detail. This might be avoided if we could copy the entire DOM tree before we print. It seems that WebKit does not support this directly right now. Until we can entirely solve this issue, we might need to reduce the time and the likelihood that we block the renderer. For example, unlike the Windows version, we always have at least one printer (print to file). Hence, we can put "Page Setup" in the browser menu so that we don't need to ask the user each time before we print (you can see this in Firefox and many other Linux applications).
+> Another issue is that we might need different mechanisms for different platforms. Obviously, the way we do printing on Windows and on Linux is quite different. The printing flow on Linux might be something like this:
+ * Print on a low-resolution bitmap canvas to generate Previews. (Believe it or not! It's actually much more difficult/tedious than it sounds.)
+ * We use the preview to do Page Setup: Paper size, margins, page range, and maybe also the header and the footer.
+ * Generate the PDF file.
+ * Save the resulting PDF as a temporary file.
+ * Use GTK+ APIs in the browser to ask the user which printer to use, then directly send the temporary file to CUPS.
+> These steps look simple, but we actually need to consider and design more details before we can make it happen. (For example, do we have to support all options shown in the GTK+ printing dialog?)
+
+# Known Issues
+ 1. For some reason, if we send the resulting PDF files directly to CUPS, we often get nothing without any error message. The CUPS I was using is version 1.3.11. This might be a bug in Cairo 1.6.0, and/or a bug in the PDF filter (pdftopdf? pdftops?) in CUPS. Actually, if we use Firefox and print to file, we will sometimes have this problem, too. Nevertheless, the resulting PDF can be viewed in all PDF viewers without any error. However, you won't see the embedded font information in some PDF viewers, such as evince. If the printer supports PDF natively and has the HTTP interface, we can get the printout by sending the PDF file via printer's HTTP interface. [Issue# 21599](http://code.google.com/p/chromium/issues/detail?id=21599)
+ 1. WebKit does not pass original text information to skia. Hence, we only have glyphs in the resulting PDF. This implies that we cannot do text selection in the resulting PDF. [Issue# 21602](http://code.google.com/p/chromium/issues/detail?id=21602)
+ 1. The vector canvas used for printing in skia still has a bitmap within it [Issue# 21604](http://code.google.com/p/chromium/issues/detail?id=21604). This wastes lots of memory and serves no purpose. Maybe we can use this bitmap for previewing, or use it as a thumbnail. Of course, another possibility might be implementing PDF-generating capabilities in skia.
+ 1. To let Cairo use correct font information, we use FreeType to load the font again in `PdfPsMetafile`. This again wastes lots of memory when printing. It would be nice if we could find a way to share font information with/from skia. [Issue# 21608](http://code.google.com/p/chromium/issues/detail?id=21608)
+ 1. Since we ask the browser to open a temporary file for us, this might potentially be a security hole for DoS attacks. We should find a way to limit the size of temporary files and the frequency of their creation. (Do we have this now?) [Issue# 21610](http://code.google.com/p/chromium/issues/detail?id=21610)
+ 1. In Cairo 1.6.0, the library opens a temporary file when creating a PostScript surface. Hence, our only choice is the PDF surface, which does not require any temporary file in the renderer.
+ 1. In Cairo 1.6.0, we cannot output multiple glyphs at the same time (we have to do it one by one). Newer versions do support multiple-glyph output; we can use that in the future.
+ 1. I did not have enough time to write good unit tests for classes related to printing on Linux. We definitely need those unit tests in the future. [Issue# 21611](http://code.google.com/p/chromium/issues/detail?id=21611)
+ 1. I did not have enough time to compare our results with other competitors. Anyway, in the future, we should always compare quality, correctness, size, and maybe also resources and time in printing.
+ 1. We do not support all APIs in `SkCanvas` yet ([Issue# 21612](http://code.google.com/p/chromium/issues/detail?id=21612)). By the way, when we need to do alpha composition in the canvas, the result generated by Cairo is not perfect (buggy). For example, the resulting color might be wrong, and sometimes we will have round-off errors in image layout. You can try to print `third_party/WebKit/LayoutTests/svg/W3C-SVG-1.1/filters-blend-01-b.svg` and compare the result with your screen. If you print it out with a printer using CMYK, you might get incorrect colors.
+ 1. We should find a way to do layout tests for printing. [Issue# 21613](http://code.google.com/p/chromium/issues/detail?id=21613) For example, this [page](http://code.google.com/p/chromium/issues/detail?id=8551&colspec=ID%20Stars%20Pri%20Area%20Type%20Status%20Summary%20Modified%20Owner%20Mstone) does not look quite right when you print it.
+
+# Bugs
+ 1. There are still many bugs in the vector canvas. I did not implement path effects, so dashed lines print as solid lines. [Issue# 21614](http://code.google.com/p/chromium/issues/detail?id=21614)
+ 1. When you print the "new-tab-page", the rounded boxes look strange. [Issue# 21616](http://code.google.com/p/chromium/issues/detail?id=21616)
+ 1. The button is shown as a black rectangle. We should print it with a bitmap. Of course, we have to get the correct button according to the user's theme. [Issue# 21617](http://code.google.com/p/chromium/issues/detail?id=21617)
+ 1. The font cache in `PdfPsMetafile` might not be thread-safe. [Issue# 21618](http://code.google.com/p/chromium/issues/detail?id=21618)
+ 1. The file descriptor map used in the browser might not be thread safe. (However, it is just used at this moment as a quick ugly hack. We should be able to get rid of it when we implement all other printing classes.)
+ 1. Since we save the resulting PDF file in the renderer, this might not be a good thing and might freeze the renderer for a while. We should find a way to get around it. By the way, maybe we should also show the printing progress? [Issue#21619](http://code.google.com/p/chromium/issues/detail?id=21619)
+
+
+---
+
+
+# Reference
+[Issue# 9847](http://code.google.com/p/chromium/issues/detail?id=9847)
+> It is blocked on [Issue# 19223](http://code.google.com/p/chromium/issues/detail?id=19223)
+| Revision# | Code review# |
+|:----------|:-------------|
+| `r22522` | `160347` |
+| `r23032` | `164025` |
+| `r24243` | `174042` |
+| `r24376` | `173368` |
+| `r24474` | `174468` |
+| `r24533` | `173516` |
+| `r25615` | `172115` |
+| `r25974` | `196071` |
+| `r26308` | `203062` |
+| `r26400` | `200138` | \ No newline at end of file
diff --git a/docs/linux_profiling.md b/docs/linux_profiling.md
new file mode 100644
index 0000000..5f47277
--- /dev/null
+++ b/docs/linux_profiling.md
@@ -0,0 +1,156 @@
+# Linux Profiling
+
+How to profile chromium on Linux.
+
+See [Profiling Chromium and WebKit](https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit) for alternative discussion.
+
+## CPU Profiling
+
+gprof: reported not to work (taking an hour to load on our large binary).
+
+oprofile: Dean uses it, says it's good. (As of 9/16/9 oprofile only supports timers on the new Z600 boxes, which doesn't give good granularity for profiling startup).
+
+TODO(willchan): Talk more about oprofile, gprof, etc.
+
+Also see https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit
+
+### perf
+
+`perf` is the successor to `oprofile`. It's maintained in the kernel tree, it's available on Ubuntu in the package `linux-tools`.
+
+To capture data, you use `perf record`. Some examples:
+```
+$ perf record -f -g out/Release/chrome # captures the full execution of the program
+$ perf record -f -g -p 1234 # captures a particular pid, you can start at the right time, and stop with ctrl-C
+$ perf record -f -g -a # captures the whole system
+```
+
+Some versions of the perf command can be confused by process renames. Affected versions will be unable to resolve Chromium's symbols if it was started through perf, as in the first example above. It should work correctly if you attach to an existing Chromium process as shown in the second example. (This is known to be broken as late as 3.2.5 and fixed as early as 3.11.rc3.g36f571. The actual affected range is likely much smaller. You can download and build your own perf from source.)
+
+The last one is useful on limited systems with few cores and low memory bandwidth, where the CPU cycles are shared between several processes (e.g. chrome browser, renderer, plugin, X, pulseaudio, etc.)
+
+To look at the data, you use:
+```
+$ perf report
+```
+
+This will use the previously captured data (`perf.data`).
+
+### google-perftools
+
+google-perftools code is enabled when the `use_allocator` variable in gyp is set to `tcmalloc` (currently the default). That will build the tcmalloc library, including the cpu profiling and heap profiling code into Chromium. In order to get stacktraces in release builds on 64 bit, you will need to build with some extra flags enabled by setting `profiling=1` in gyp.
+
+If the stack traces in your profiles are incomplete, this may be due to missing frame pointers in some of the libraries. A workaround is to use the `linux_keep_shadow_stacks=1` gyp option. This will keep a shadow stack using the -finstrument-functions option of gcc and consult the stack when unwinding.
+
+In order to enable cpu profiling, run Chromium with the environment variable CPUPROFILE set to a filename. For example:
+
+```
+$ CPUPROFILE=/tmp/cpuprofile out/Release/chrome
+```
+
+After the program exits successfully, the cpu profile will be available at the filename specified in the CPUPROFILE environment variable. You can then analyze it using the pprof script (distributed with google-perftools, installed by default on Googler Linux workstations). For example:
+
+```
+$ pprof --gv out/Release/chrome /tmp/cpuprofile
+```
+
+This will generate a visual representation of the cpu profile as a postscript file and load it up using `gv`. For more powerful commands, please refer to the pprof help output and the google-perftools documentation.
+
+Note that due to the current design of google-perftools' profiling tools, it is only possible to profile the browser process. You can also profile and pass the --single-process flag for a rough idea of what the render process looks like, but keep in mind that you'll be seeing a mixed browser/renderer codepath that is not used in production.
+
+For further information, please refer to http://google-perftools.googlecode.com/svn/trunk/doc/cpuprofile.html.
+
+## Heap Profiling
+
+### google-perftools
+
+#### Turning on heap profiles
+Follow the instructions for enabling profiling as described above in the google-perftools section under CPU Profiling.
+
+To turn on the heap profiler on a Chromium build with tcmalloc, use the HEAPPROFILE environment variable to specify a filename for the heap profile. For example:
+
+```
+$ HEAPPROFILE=/tmp/heapprofile out/Release/chrome
+```
+
+After the program exits successfully, the heap profile will be available at the filename specified in the `HEAPPROFILE` environment variable.
+
+Some tests fork short-living processes which have a small memory footprint. To catch those, use the `HEAP_PROFILE_ALLOCATION_INTERVAL` environment variable.
+
+#### Dumping a profile of a running process
+
+To programmatically generate a heap profile before exit, use code like:
+```
+#include "third_party/tcmalloc/chromium/src/google/heap-profiler.h"
+...
+HeapProfilerDump("foobar"); // "foobar" will be included in the message printed to the console
+```
+For example, you might hook that up to some action in the UI.
+
+Or you can use gdb to attach at any point:
+
+ 1. Attach gdb to the process: `$ gdb -p 12345`
+ 1. Cause it to dump a profile: `(gdb) p HeapProfilerDump("foobar")`
+ 1. The filename will be printed on the console you started Chrome from; e.g. "`Dumping heap profile to heap.0001.heap (foobar)`"
+
+
+#### Analyzing dumps
+
+You can then analyze dumps using the `pprof` script (distributed with google-perftools, installed by default on Googler Linux workstations; on Ubuntu it is called `google-pprof`). For example:
+
+```
+$ pprof --gv out/Release/chrome /tmp/heapprofile
+```
+
+This will generate a visual representation of the heap profile as a postscript file and load it up using `gv`. For more powerful commands, please refer to the pprof help output and the google-perftools documentation.
+
+(pprof is slow. Googlers can try the not-open-source cpprof; Evan wrote an open source alternative [available on github](https://github.com/martine/hp).)
+
+#### Sandbox
+
+Sandboxed renderer subprocesses will fail to write out heap profiling dumps. To work around this, turn off the sandbox (via `export CHROME_DEVEL_SANDBOX=`).
+
+#### Troubleshooting
+
+ * "Hooked allocator frame not found": build with `-Dcomponent=static_library`. tcmalloc gets confused when the allocator routines are in a different `.so` than the rest of the code.
+
+#### More reading
+
+For further information, please refer to http://google-perftools.googlecode.com/svn/trunk/doc/heapprofile.html.
+
+### Massif
+[Massif](http://valgrind.org/docs/manual/mc-manual.html) is a [Valgrind](http://www.chromium.org/developers/how-tos/using-valgrind)-based heap profiler.
+It is much slower than the heap profiler from google-perftools, but it may have some advantages. (In particular, it handles multi-process executables well.)
+
+First, you will need to build massif from the valgrind-variant project yourself; it's [easy](http://code.google.com/p/valgrind-variant/wiki/HowTo).
+
+Then, make sure your chromium is built using the [valgrind instructions](http://www.chromium.org/developers/how-tos/using-valgrind).
+Now, you can run massif like this:
+
+```
+% path-to-valgrind-variant/valgrind/inst/bin/valgrind --fullpath-after=/chromium/src/ \
+ --trace-children-skip=*npviewer*,/bin/uname,/bin/sh,/usr/bin/which,/bin/ps,/bin/grep,/usr/bin/linux32 --trace-children=yes --tool=massif \
+ out/Release/chrome --noerrdialogs --disable-hang-monitor --other-chrome-flags
+```
+
+The result will be stored in massif.out.PID files, which you can post-process with [ms\_print](http://valgrind.org/docs/manual/mc-manual.html).
+
+TODO(kcc) Sometimes when closing a tab the main process kills the tab process before massif completes writing its log file. We need a flag that tells the main process to wait longer.
+
+## Paint profiling
+
+You can use Xephyr to profile how chrome repaints the screen. Xephyr is a virtual X server like Xnest with debugging options which draws red rectangles to where applications are drawing before drawing the actual information.
+
+```
+$ export XEPHYR_PAUSE=10000
+$ Xephyr :1 -ac -screen 800x600 &
+$ DISPLAY=:1 out/Debug/chrome
+```
+
+When ready to start debugging issue the following command, which will tell Xephyr to start drawing red rectangles:
+
+```
+$ kill -USR1 `pidof Xephyr`
+```
+
+For further information, please refer to http://cgit.freedesktop.org/xorg/xserver/tree/hw/kdrive/ephyr/README. \ No newline at end of file
diff --git a/docs/linux_proxy_config.md b/docs/linux_proxy_config.md
new file mode 100644
index 0000000..fed3d80
--- /dev/null
+++ b/docs/linux_proxy_config.md
@@ -0,0 +1,11 @@
+# Introduction
+
+Chromium on Linux has several possible sources of proxy info: GNOME/KDE settings, command-line flags, and environment variables.
+
+# Details
+
+## GNOME and KDE
+When Chromium detects that it is running in GNOME or KDE, it will automatically use the appropriate standard proxy settings. You can configure these proxy settings from the options dialog (the "Change proxy settings" button in the "Under the Hood" tab), which will launch the GNOME or KDE proxy settings applications, or by launching those applications directly.
+
+## Flags and environment variables
+For other desktop environments, Chromium's proxy settings can be configured using command-line flags or environment variables. These are documented on the man page (`man google-chrome` or `man chromium-browser`). \ No newline at end of file
diff --git a/docs/linux_sandbox_ipc.md b/docs/linux_sandbox_ipc.md
new file mode 100644
index 0000000..a5caaaf
--- /dev/null
+++ b/docs/linux_sandbox_ipc.md
@@ -0,0 +1,30 @@
+The Sandbox IPC system is separate from the 'main' IPC system. The sandbox IPC is a lower level system which deals with cases where we need to route requests from the bottom of the call stack up into the browser.
+
+The motivating example is Skia, which uses fontconfig to load fonts. In a chrooted renderer we cannot access the user's fontcache, nor the font files themselves. However, font loading happens when we have called through WebKit, through Skia and into the SkFontHost. At this point, we cannot loop back around to use the main IPC system.
+
+Thus we define a small IPC system which doesn't depend on anything but <tt>base</tt> and which can make synchronous requests to the browser process.
+
+The zygote (LinuxZygote) starts with a UNIX DGRAM socket installed in a well known file descriptor slot (currently 4). Requests can be written to this socket which are then processed on a special "sandbox IPC" process. Requests have a magic <tt>int</tt> at the beginning giving the type of the request.
+
+All renderers share the same socket, so replies are delivered via a reply channel which is passed as part of the request. So the flow looks like:
+ 1. The renderer creates a UNIX DGRAM socketpair.
+ 1. The renderer writes a request to file descriptor 4 with an SCM\_RIGHTS control message containing one end of the fresh socket pair.
+ 1. The renderer blocks reading from the other end of the fresh socketpair.
+ 1. A special "sandbox IPC" process receives the request, processes it and writes the reply to the end of the socketpair contained in the request.
+ 1. The renderer wakes up and continues.
+
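+The renderer-side flow above boils down to a <tt>socketpair()</tt>, a <tt>sendmsg()</tt> carrying an SCM\_RIGHTS control message, and a blocking <tt>read()</tt>. The following is a hypothetical, self-contained sketch, not the actual Chromium implementation (the function name and the fixed-size reply handling are ours):
+
+```
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/uio.h>
+#include <unistd.h>
+
+static const int kSandboxIPCChannel = 4;  /* the well-known descriptor slot */
+
+/* Sends |request| to the sandbox IPC process and blocks for the reply. */
+ssize_t SendSandboxRequest(const void* request, size_t length,
+                           void* reply, size_t reply_max) {
+  int fds[2];
+  if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0)
+    return -1;
+
+  struct iovec iov = { (void*)request, length };
+  char control[CMSG_SPACE(sizeof(int))];
+  memset(control, 0, sizeof(control));
+
+  struct msghdr msg;
+  memset(&msg, 0, sizeof(msg));
+  msg.msg_iov = &iov;
+  msg.msg_iovlen = 1;
+  msg.msg_control = control;
+  msg.msg_controllen = sizeof(control);
+
+  /* Attach one end of the fresh socketpair as the reply channel. */
+  struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
+  cmsg->cmsg_level = SOL_SOCKET;
+  cmsg->cmsg_type = SCM_RIGHTS;
+  cmsg->cmsg_len = CMSG_LEN(sizeof(int));
+  memcpy(CMSG_DATA(cmsg), &fds[1], sizeof(int));
+
+  ssize_t reply_len = -1;
+  if (sendmsg(kSandboxIPCChannel, &msg, 0) >= 0) {
+    close(fds[1]);                               /* the other side has its copy */
+    reply_len = read(fds[0], reply, reply_max);  /* block for the reply */
+  } else {
+    close(fds[1]);
+  }
+  close(fds[0]);
+  return reply_len;
+}
+```
+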
+The browser side of the processing occurs in <tt>chrome/browser/renderer_host/render_sandbox_host_linux.cc</tt>. The renderer ends could occur anywhere, but the browser side has to know about all the possible requests so that should be a good starting point.
+
+Here is a (possibly incomplete) list of endpoints in the renderer:
+
+### fontconfig
+
+As mentioned above, the motivating example of this is dealing with fontconfig from a chrooted renderer. We implement our own Skia FontHost, outside of the Skia tree, in <tt>skia/ext/SkFontHost_fontconfig*</tt>.
+
+There are two methods used: one for performing a match against the fontconfig data and one for returning a file descriptor to a font file resulting from one of those matches. The only wrinkle is that fontconfig is a single-threaded library and it's already used in the browser by GTK itself.
+
+Thus, we have a couple of options:
+ 1. Handle the requests on the UI thread in the browser.
+ 1. Handle the requests in a separate address space.
+
+The original implementation did the former (handle on UI thread). This turned out to be a terrible idea, performance wise, so we now handle the requests on a dedicated process. \ No newline at end of file
diff --git a/docs/linux_sandboxing.md b/docs/linux_sandboxing.md
new file mode 100644
index 0000000..00ba8dd
--- /dev/null
+++ b/docs/linux_sandboxing.md
@@ -0,0 +1,97 @@
+Chromium uses a multiprocess model, which allows us to give different privileges and restrictions to different parts of the browser. For instance, we want renderers to run with a limited set of privileges since they process untrusted input and are likely to be compromised. Renderers use an IPC mechanism to request access to resources from a more privileged process (the browser process).
+You can find more about this general design [here](http://dev.chromium.org/developers/design-documents/sandbox).
+
+We use different sandboxing techniques on Linux and Chrome OS, in combination, to achieve a good level of sandboxing. You can see which sandboxes are currently engaged by looking at chrome://sandbox (renderer processes) and chrome://gpu (gpu process).
+
+We use a two-layer approach:
+
+ * Layer-1 (also called the "semantics" layer) prevents access to most resources from a process where it's engaged. The setuid sandbox is used for this.
+ * Layer-2 (also called "attack surface reduction" layer) restricts access from a process to the attack surface of the kernel. Seccomp-BPF is used for this.
+
+You can disable all sandboxing (for testing) with --no-sandbox.
+
+## Layered approach
+
+One notable difficulty with seccomp-bpf is that filtering at the system call interface provides semantics that are difficult to reason about. One crucial aspect is that if a process A runs under seccomp-bpf, we need to guarantee that it cannot affect the integrity of a process B running under a different seccomp-bpf policy (which would be a sandbox escape). Besides the obvious system calls such as ptrace() or process\_vm\_writev(), there are multiple subtle issues, such as using open() on /proc entries.
+
+Our layer-1 guarantees the integrity of processes running under different seccomp-bpf policies. In addition, it allows restricting access to the network, something that is difficult to perform at the layer-2.
+
+## Sandbox types summary
+
+| **Name** | **Layer and process** | **Linux flavors where available** | **State** |
+|:---------|:----------------------|:----------------------------------|:----------|
+| [Setuid sandbox](#The_setuid_sandbox.md) | Layer-1 in Zygote processes (renderers, PPAPI, [NaCl](http://www.chromium.org/nativeclient), some utility processes) | Linux distributions and Chrome OS | Enabled by default (old kernels) and maintained |
+| [User namespaces sandbox](#User_namespaces_sandbox.md) | Modern alternative to the setuid sandbox. Layer-1 in Zygote processes (renderers, PPAPI, [NaCl](http://www.chromium.org/nativeclient), some utility processes) | Linux distributions and Chrome OS (kernel >= 3.8) | Enabled by default (modern kernels) and actively developed |
+| [Seccomp-BPF](#The_seccomp-bpf_sandbox.md) | Layer-2 in some Zygote processes (renderers, PPAPI, [NaCl](http://www.chromium.org/nativeclient)), Layer-1 + Layer-2 in GPU process | Linux kernel >= 3.5, Chrome OS and Ubuntu | Enabled by default and actively developed |
+| [Seccomp-legacy](#The_seccomp_sandbox.md) | Layer-2 in renderers | All | [Deprecated](https://src.chromium.org/viewvc/chrome?revision=197301&view=revision) |
+| [SELinux](#SELinux.md) | Layer-1 in Zygote processes (renderers, PPAPI) | SELinux distributions | [Deprecated](https://src.chromium.org/viewvc/chrome?revision=200838&view=revision) |
+| Apparmor | Outer layer-1 in Zygote processes (renderers, PPAPI) | Not used | Deprecated |
+
+## The setuid sandbox
+
+Also called SUID sandbox, our main layer-1 sandbox.
+
+A SUID binary that will create a new network and PID namespace, as well as chroot() the process to an empty directory on request.
+
+To disable it, use --disable-setuid-sandbox. (Do not remove the binary or unset CHROME\_DEVEL\_SANDBOX, it is not supported).
+
+_Main page: [LinuxSUIDSandbox](LinuxSUIDSandbox.md)_
+
+## User namespaces sandbox
+
+The namespace sandbox [aims to replace the setuid sandbox](https://code.google.com/p/chromium/issues/detail?id=312380). It has the advantage of not requiring a setuid binary. It's based on (unprivileged)
+[user namespaces](https://lwn.net/Articles/531114/) in the Linux kernel. It generally requires a kernel >= 3.10, although it may work with 3.8 if certain patches are backported.
+
+Starting with M-43, if the kernel supports it, unprivileged namespaces are used instead of the setuid sandbox. Starting with M-44, certain processes run [in their own PID namespace](https://code.google.com/p/chromium/issues/detail?id=460972), which isolates them better.
+
+## The <tt>seccomp-bpf</tt> sandbox
+
+Also called <tt>seccomp-filters</tt> sandbox.
+
+Our main layer-2 sandbox, designed to shelter the kernel from malicious code executing in userland.
+
+Also used as layer-1 in the GPU process. A [BPF](http://www.tcpdump.org/papers/bpf-usenix93.pdf) compiler will compile a process-specific program
+to filter system calls and send it to the kernel. The kernel will interpret this program for each system call and allow or disallow the call.
+
+To help with sandboxing of existing code, the kernel can also synchronously raise a SIGSYS signal. This allows user-land to perform actions such as "log and return errno", emulate the system call or broker-out the system call (perform a remote system call via IPC). Implementing this requires a low-level async-signal safe IPC facility.
+
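+To make the mechanism more concrete, here is a minimal, stand-alone sketch of installing a raw seccomp-bpf program. This is not Chromium's sandbox code (which wraps all of this in a policy compiler); it simply makes one system call return an error while allowing everything else:
+
+```
+#include <errno.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <sys/prctl.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+
+static int InstallFilter(void) {
+  struct sock_filter filter[] = {
+    /* Load the system call number from the seccomp_data argument.
+     * (A real policy would also validate seccomp_data.arch first.) */
+    BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
+    /* Example of "return errno": make getpid() fail with ENOSYS. */
+    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getpid, 0, 1),
+    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | ENOSYS),
+    /* Allow every other system call. */
+    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
+  };
+  struct sock_fprog prog = {
+    (unsigned short)(sizeof(filter) / sizeof(filter[0])),
+    filter,
+  };
+  /* Needed so an unprivileged process may install a filter. */
+  if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
+    return -1;
+  return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
+}
+
+int main(void) {
+  if (InstallFilter()) {
+    perror("seccomp");
+    return 1;
+  }
+  errno = 0;
+  long pid = syscall(SYS_getpid);  /* now intercepted by the filter */
+  printf("getpid() returned %ld, errno = %d\n", pid, errno);
+  return 0;
+}
+```
+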
+Seccomp-bpf is supported since Linux 3.5, but is also back-ported on Ubuntu 12.04 and is always available on Chrome OS. See [this page](http://outflux.net/teach-seccomp/) for more information.
+
+See [this blog post](http://blog.chromium.org/2012/11/a-safer-playground-for-your-linux-and.html) announcing Chrome support. Or [this one](http://blog.cr0.org/2012/09/introducing-chromes-next-generation.html) for a more technical overview.
+
+This sandbox can be disabled with --disable-seccomp-filter-sandbox.
+
+## The <tt>seccomp</tt> sandbox
+
+Also called <tt>seccomp-legacy</tt>. An obsolete layer-1 sandbox, then available as an optional layer-2 sandbox.
+
+Deprecated by seccomp-bpf and removed from the Chromium code base. It still exists as a separate project [here](https://code.google.com/p/seccompsandbox/).
+
+See:
+ * http://www.imperialviolet.org/2009/08/26/seccomp.html
+ * http://lwn.net/Articles/346902/
+ * https://code.google.com/p/seccompsandbox/
+
+## SELinux
+
+[Deprecated](https://src.chromium.org/viewvc/chrome?revision=200838&view=revision). Was designed to be used instead of the SUID sandbox.
+
+Old information for archival purposes:
+
+One can build Chromium with <tt>selinux=1</tt> and the Zygote (which starts the renderers and PPAPI processes) will do a
+dynamic transition. audit2allow will quickly build a usable module.
+
+Available since [r26257](http://src.chromium.org/viewvc/chrome?view=rev&revision=26257),
+more information in [this blog post](http://www.imperialviolet.org/2009/07/14/selinux.html) (grep for
+'dynamic' since dynamic transitions are a little obscure in SELinux)
+
+## Developing and debugging with sandboxing
+
+Sandboxing can make developing harder, see:
+ * [this page](https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment) for the setuid sandbox
+ * [this page](http://www.chromium.org/for-testers/bug-reporting-guidelines/hanging-tabs) for triggering crashes
+ * [this page for debugging tricks](https://code.google.com/p/chromium/wiki/LinuxDebugging#Getting_renderer_subprocesses_into_gdb)
+
+## See also
+ * [LinuxSandboxIPC](LinuxSandboxIPC.md)
+ * [How Chromium's Linux sandbox affects Native Client](https://code.google.com/p/nativeclient/wiki/LinuxOuterSandbox) \ No newline at end of file
diff --git a/docs/linux_suid_sandbox.md b/docs/linux_suid_sandbox.md
new file mode 100644
index 0000000..84e5acd
--- /dev/null
+++ b/docs/linux_suid_sandbox.md
@@ -0,0 +1,63 @@
+With [r20110](http://src.chromium.org/viewvc/chrome?view=rev&revision=20110), Chromium on Linux can now sandbox its renderers using a SUID helper binary. This is one of [our layer-1 sandboxing solutions](LinuxSandboxing.md).
+
+## SUID helper executable
+
+The SUID helper binary is called 'chrome\_sandbox' and you must build it separately from the main 'chrome' target. To use this sandbox, you have to specify its path in the `linux_sandbox_path` GYP variable. When spawning the zygote process (LinuxZygote), if the SUID sandbox is enabled, Chromium will check for the sandbox binary at the location specified by `linux_sandbox_path`. For Google Chrome, this is set to <tt>/opt/google/chrome/chrome-sandbox</tt>, and early versions had this value hard-coded in <tt>chrome/browser/zygote_host_linux.cc</tt>.
+
+
+In order for the sandbox to be used, the following conditions must be met:
+ * The sandbox binary must be executable by the Chromium process.
+ * It must be SUID and executable by other.
+
+If these conditions are met then the sandbox binary is used to launch the zygote process. Once the zygote has started, it asks a helper process to chroot it to a temp directory.
+
+## CLONE\_NEWPID method
+
+The sandbox does three things to restrict the authority of a sandboxed process. The SUID helper is responsible for the first two:
+ * The SUID helper chroots the process. This takes away access to the filesystem namespace.
+ * The SUID helper puts the process in a PID namespace using the CLONE\_NEWPID option to [clone()](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html). This stops the sandboxed process from being able to ptrace() or kill() unsandboxed processes.
+
+In addition:
+ * The LinuxZygote startup code sets the process to be _undumpable_ using [prctl()](http://www.kernel.org/doc/man-pages/online/pages/man2/prctl.2.html). This stops sandboxed processes from being able to ptrace() each other. More specifically, it stops the sandboxed process from being ptrace()'d by any other process. This can be switched off with the `--allow-sandbox-debugging` option.
+
+Limitations:
+ * Not all kernel versions support CLONE\_NEWPID. If the SUID helper is run on a kernel that does not support CLONE\_NEWPID, it will ignore the problem without a warning, but the protection offered by the sandbox will be substantially reduced. See LinuxPidNamespaceSupport for how to test whether your system supports PID namespaces.
+ * This does not restrict network access.
+ * This does not prevent processes within a given sandbox from sending each other signals or killing each other.
+ * Setting a process to be undumpable is not irreversible. A sandboxed process can make itself dumpable again, opening itself up to being taken over by another process (either unsandboxed or within the same sandbox).
+ * Breakpad (the crash reporting tool) makes use of this. If a process crashes, Breakpad makes it dumpable in order to use ptrace() to halt threads and capture the process's state at the time of the crash. This opens a small window of vulnerability.
+
+## setuid() method
+
+_This is an alternative to the CLONE\_NEWPID method; it is not currently implemented in the Chromium codebase._
+
+Instead of using CLONE\_NEWPID, the SUID helper can use setuid() to put the process into a currently-unused UID, which is allocated out of a range of UIDs. In order to ensure that the UID has not been allocated for another sandbox, the SUID helper uses [getrlimit()](http://www.kernel.org/doc/man-pages/online/pages/man2/getrlimit.2.html) to set RLIMIT\_NPROC temporarily to a soft limit of 1. (Note that the docs specify that [setuid()](http://www.kernel.org/doc/man-pages/online/pages/man2/setuid.2.html) returns EAGAIN if RLIMIT\_NPROC is exceeded.) We can reset RLIMIT\_NPROC afterwards in order to allow the sandboxed process to fork child processes.
+
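+A hypothetical sketch of the RLIMIT\_NPROC trick described above (this is not the helper's actual source; the function name is ours):
+
+```
+#include <errno.h>
+#include <sys/resource.h>
+#include <sys/time.h>
+#include <unistd.h>
+
+/* Tries to become |uid| while proving that no other process already runs
+ * under it. Returns 0 on success, -1 on failure (including "UID in use"). */
+int SwitchToUnusedUid(uid_t uid) {
+  struct rlimit old_limit, tmp;
+  if (getrlimit(RLIMIT_NPROC, &old_limit))
+    return -1;
+  tmp = old_limit;
+  tmp.rlim_cur = 1;  /* at most one process for the target UID */
+  if (setrlimit(RLIMIT_NPROC, &tmp))
+    return -1;
+
+  /* With a soft limit of 1, setuid() fails with EAGAIN if any other
+   * process is already running as |uid|. */
+  if (setuid(uid)) {
+    int saved_errno = errno;
+    setrlimit(RLIMIT_NPROC, &old_limit);
+    errno = saved_errno;
+    return -1;
+  }
+
+  /* Restore the original soft limit so the sandboxed process can fork. */
+  setrlimit(RLIMIT_NPROC, &old_limit);
+  return 0;
+}
+```
+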
+As before, the SUID helper chroots the process.
+
+As before, LinuxZygote can set itself to be undumpable to stop processes in the sandbox from being able to ptrace() each other.
+
+Limitations:
+ * It is not possible for an unsandboxed process to ptrace() a sandboxed process because they run under different UIDs. This makes debugging harder. There is no equivalent of the `--allow-sandbox-debugging` other than turning the sandbox off with `--no-sandbox`.
+ * The SUID helper can check that a UID is unused before it uses it (hence this is safe if the SUID helper is installed into multiple chroots), but it cannot prevent other root processes from putting processes into this UID after the sandbox has been started. This means we should make the UID range configurable, or distributions should reserve a UID range.
+
+## CLONE\_NEWNET method
+
+The SUID helper uses [CLONE\_NEWNET](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html) to restrict network access.
+
+## Future work
+
+We are splitting the SUID sandbox into a separate project which will support both the CLONE\_NEWPID and setuid() methods: http://code.google.com/p/setuid-sandbox/
+
+Having the SUID helper as a separate project should make it easier for distributions to review and package.
+
+## Possible extensions
+
+## History
+
+Older versions of the sandbox helper process will <i>only</i> run <tt>/opt/google/chrome/chrome</tt>. This string is hard coded (<tt>sandbox/linux/suid/sandbox.cc</tt>). If your package is going to place the Chromium binary somewhere else you need to modify this string.
+
+## See also
+ * [LinuxSUIDSandboxDevelopment](LinuxSUIDSandboxDevelopment.md)
+ * LinuxSandboxing
+ * General information on Chromium sandboxing: http://dev.chromium.org/developers/design-documents/sandbox \ No newline at end of file
diff --git a/docs/linux_suid_sandbox_development.md b/docs/linux_suid_sandbox_development.md
new file mode 100644
index 0000000..950460d
--- /dev/null
+++ b/docs/linux_suid_sandbox_development.md
@@ -0,0 +1,61 @@
+(For context see [LinuxSUIDSandbox](http://code.google.com/p/chromium/wiki/LinuxSUIDSandbox))
+
+We need a SUID helper binary to turn on the sandbox on Linux.
+
+In most cases, you can run **build/update-linux-sandbox.sh** and it'll install the proper sandbox for you in /usr/local/sbin and tell you to update your .bashrc if needed.
+
+### Installation instructions for developers
+
+ * If you have no setuid sandbox at all, you will see a message such as:
+```
+Running without the SUID sandbox!
+```
+ * If your setuid binary is out of date, you will get messages such as:
+```
+The setuid sandbox provides API version X, but you need Y
+```
+```
+You are using a wrong version of the setuid binary!
+```
+
+Run the script mentioned above, or do something such as:
+
+ * Build chrome\_sandbox whenever you build chrome ("ninja -C xxx chrome chrome\_sandbox" instead of "ninja -C xxx chrome")
+ * After building, run something similar to (or use the provided update-linux-sandbox.sh):
+```
+sudo cp out/Debug/chrome_sandbox /usr/local/sbin/chrome-devel-sandbox #needed if you build on NFS!
+sudo chown root:root /usr/local/sbin/chrome-devel-sandbox
+sudo chmod 4755 /usr/local/sbin/chrome-devel-sandbox
+```
+
+ * Put this line in your ~/.bashrc (or .zshenv etc):
+```
+export CHROME_DEVEL_SANDBOX=/usr/local/sbin/chrome-devel-sandbox
+```
+
+### Try bots and waterfall
+
+If you're installing a new bot, always install the setuid sandbox (the instructions are different than for developers, contact the Chrome troopers). If something does need to run without the setuid sandbox, use the --disable-setuid-sandbox command line flag.
+
+The SUID sandbox must be enabled on the try bots and the waterfall. If you don't use it locally, things might appear to work for you, but break on the bots.
+
+(Note: as a temporary, stop gap measure, setting CHROME\_DEVEL\_SANDBOX to an empty string is equivalent to --disable-setuid-sandbox)
+
+### Disabling the sandbox
+
+If you are certain that you don't want the setuid sandbox, use --disable-setuid-sandbox. There should be very few cases like this.
+So if you're not absolutely sure, run with the setuid sandbox.
+
+### Installation instructions for "[Raw builds of Chromium](https://commondatastorage.googleapis.com/chromium-browser-continuous/index.html)"
+
+If you're using a "raw" build of Chromium, do the following:
+```
+sudo chown root:root chrome_sandbox && sudo chmod 4755 chrome_sandbox && export CHROME_DEVEL_SANDBOX="$PWD/chrome_sandbox"
+./chrome
+```
+
+You can also make such an installation more permanent by following the [steps above](#Installation_instructions_for_developers.md) and installing chrome\_sandbox to a more permanent location.
+
+### System-wide installations of Chromium
+
+The CHROME\_DEVEL\_SANDBOX variable is intended for developers and won't work for a system-wide installation of Chromium. Package maintainers should make sure the setuid binary is installed and defined in GYP as linux\_sandbox\_path. \ No newline at end of file
diff --git a/docs/linux_zygote.md b/docs/linux_zygote.md
new file mode 100644
index 0000000..5c84a79
--- /dev/null
+++ b/docs/linux_zygote.md
@@ -0,0 +1,15 @@
+A zygote process is one that listens for spawn requests from a master process and forks itself in response. Generally they are used because forking a process after some expensive setup has been performed can save time and share extra memory pages.
+
+On Linux, for Chromium, this is not the point, and measurements suggest that the time and memory savings are minimal or negative.
+
+We use it because it's the only reasonable way to keep a reference to a binary and a set of shared libraries that can be exec'ed. In the model used on Windows and Mac, renderers are exec'ed as needed from the chrome binary. However, if the chrome binary, or any of its shared libraries are updated while Chrome is running, we'll end up exec'ing the wrong version. A version _x_ browser might be talking to a version _y_ renderer. Our IPC system does not support this (and does not want to!).
+
+So we would like to keep a reference to a binary and its shared libraries and exec from these. However, unless we are going to write our own <tt>ld.so</tt>, there's no way to do this.
+
+Instead, we exec the prototypical renderer at the beginning of the browser execution. When we need more renderers, we signal this prototypical process (the zygote) to fork itself. The zygote is always the correct version and, by exec'ing one, we make sure the renderers have a different address space randomisation than the browser.
+
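+As a toy illustration of the general idea only (this is not Chromium's <tt>ZygoteMain</tt>, which speaks a real IPC protocol and reports the child's PID back to the browser):
+
+```
+#include <stdio.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+/* In the real browser this would be the renderer's main function, already
+ * mapped in from the original (correct-version) binary. */
+static void RendererMain(void) {
+  printf("renderer %d forked from zygote\n", (int)getpid());
+  _exit(0);
+}
+
+/* Reads "spawn" requests from the browser and forks one child per request. */
+void ZygoteLoop(int control_fd) {
+  char request;
+  while (read(control_fd, &request, 1) == 1) {
+    pid_t child = fork();
+    if (child == 0)
+      RendererMain();               /* child becomes the new renderer */
+    printf("zygote forked renderer %d\n", (int)child);  /* parent */
+  }
+}
+```
+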
+The zygote process is triggered by the <tt>--type=zygote</tt> command line flag, which causes <tt>ZygoteMain</tt> (in <tt>chrome/browser/zygote_main_linux.cc</tt>) to be run. The zygote is launched from <tt>chrome/browser/zygote_host_linux.cc</tt>.
+
+Signaling the zygote for a new renderer happens in <tt>chrome/browser/child_process_launcher.cc</tt>.
+
+You can use the <tt>--zygote-cmd-prefix</tt> flag to debug the zygote process. If you use <tt>--renderer-cmd-prefix</tt> then the zygote will be bypassed and renderers will be exec'ed afresh every time. \ No newline at end of file
diff --git a/docs/mac_build_instructions.md b/docs/mac_build_instructions.md
new file mode 100644
index 0000000..cb2a7b8
--- /dev/null
+++ b/docs/mac_build_instructions.md
@@ -0,0 +1,143 @@
+# Prerequisites
+
+ * A Mac running 10.8+.
+ * [Xcode](http://developer.apple.com/tools/xcode/), 5+
+ * Install [gclient](http://dev.chromium.org/developers/how-tos/install-depot-tools), part of the [depot\_tools](http://dev.chromium.org/developers/how-tos/depottools) package ([download](http://dev.chromium.org/developers/how-tos/install-depot-tools)). gclient is a wrapper around svn that we use to manage our working copies.
+ * Install [git](http://code.google.com/p/git-osx-installer/) on OSX 10.8. The system git shipping with OS X 10.9 / Xcode 5 works well too.
+ * (optional -- required if you don't have some commands such as svn natively) Install Xcode's "Command Line Tools" via Xcode menu -> Preferences -> Downloads
+
+# Getting the code
+
+[Check out the source code](http://dev.chromium.org/developers/how-tos/get-the-code) using Git. If you're new to the project, you can skip all the information about git-svn, since you will not be committing directly to the repository.
+
+Before checking out, go to the [waterfall](http://build.chromium.org/buildbot/waterfall/) and check that the source tree is open (to avoid pulling a broken tree).
+
+The path to the build directory should not contain spaces (e.g. not "~/Mac OS X/chromium"), as this will cause the build to fail. This includes your drive name; the default name "Macintosh HD 2" for a second drive contains spaces.
+
+# Building
+
+Chromium on OS X can only be built using the [Ninja](NinjaBuild.md) tool and the [Clang](Clang.md) compiler. See both of those pages for further details on how to tune the build.
+
+Before you build, you may want to [install API keys](https://sites.google.com/a/chromium.org/dev/developers/how-tos/api-keys) so that Chrome-integrated Google services work. This step is optional if you aren't testing those features.
+
+## Raising system-wide and per-user process limits
+
+If you see errors like the following:
+
+```
+clang: error: unable to execute command: posix_spawn failed: Resource temporarily unavailable
+clang: error: clang frontend command failed due to signal (use -v to see invocation)
+```
+
+you may be running into too-low limits on the number of concurrent processes allowed on the machine. Check:
+
+```
+sysctl kern.maxproc
+sysctl kern.maxprocperuid
+```
+
+You can increase them with e.g.:
+
+```
+sudo sysctl -w kern.maxproc=2500
+sudo sysctl -w kern.maxprocperuid=2500
+```
+
+But normally this shouldn't be necessary if you're building on 10.7 or higher. If you see this, check if some rogue program spawned hundreds of processes and kill them first.
+
+# Faster builds
+
+Full rebuilds are about the same speed in Debug and Release, but linking is a lot faster in Release builds.
+
+Run
+```
+GYP_DEFINES=fastbuild=1 build/gyp_chromium
+```
+to disable debug symbols altogether; this makes both full rebuilds and linking faster (at the cost of not getting symbolized backtraces in gdb).
+
+You might also want to [install ccache](CCacheMac.md) to speed up the build.
+
+# Running
+
+All build output is located in the `out` directory (in the example above, `~/chromium/src/out`). You can find the applications at `{Debug|Release}/ContentShell.app` and `{Debug|Release}/Chromium.app`, depending on the selected configuration.
+
+# Unit Tests
+
+We have several unit test targets that build, and tests that run and pass. A small subset of these is:
+
+ * `unit_tests` from `chrome/chrome.gyp`
+ * `base_unittests` from `base/base.gyp`
+ * `net_unittests` from `net/net.gyp`
+ * `url_unittests` from `url/url.gyp`
+
+When these tests are built, you will find them in the `out/{Debug|Release}` directory. You can run them from the command line:
+```
+~/chromium/src/out/Release/unit_tests
+```
+
+# Coding
+
+According to the [Chromium style guide](http://dev.chromium.org/developers/coding-style) code is [not allowed to have whitespace on the ends of lines](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Horizontal_Whitespace). If you edit in Xcode, know that it loves adding whitespace to the ends of lines which can make editing in Xcode more painful than it should be. The [GTM Xcode Plugin](http://code.google.com/p/google-toolbox-for-mac/downloads/list) adds a preference panel to Xcode that allows you to strip whitespace off of the ends of lines on save. Documentation on how to install it is [here](http://code.google.com/p/google-toolbox-for-mac/wiki/GTMXcodePlugin).
+
+# Debugging
+
+Good debugging tips can be found [here](http://dev.chromium.org/developers/how-tos/debugging-on-os-x). If you would like to debug in a graphical environment, rather than using `lldb` at the command line, that is possible without building in Xcode. See [Debugging in Xcode](http://www.chromium.org/developers/debugging-on-os-x/building-with-ninja-debugging-with-xcode) for information on how.
+
+# Contributing
+
+Once you’re comfortable with building Chromium, check out [Contributing Code](http://dev.chromium.org/developers/contributing-code) for information about writing code for Chromium and contributing it.
+
+# Using Xcode-Ninja Hybrid
+
+While using Xcode is unsupported, GYP supports a hybrid approach of using ninja for building, but Xcode for editing and driving compilation. Xcode can still be slow, but it runs fairly well even **with indexing enabled**.
+
+With hybrid builds, compilation is still handled by ninja, and can be run by the command line (e.g. ninja -C out/Debug chrome) or by choosing the chrome target in the hybrid workspace and choosing build.
+
+To use Xcode-Ninja Hybrid, set `GYP_GENERATORS=ninja,xcode-ninja`.
+
+Due to the way Xcode parses ninja output paths, it's also necessary to change the main gyp location to anything two directories deep. Otherwise Xcode build output will not be clickable. Adding `xcode_ninja_main_gyp=src/build/ninja/all.ninja.gyp` to your GYP\_GENERATOR\_FLAGS will fix this.
+
+After generating the project files with gclient runhooks, open `src/build/ninja/all.ninja.xcworkspace`.
+
+You may run into a problem where http://YES is opened as a new tab every time you launch Chrome. To fix this, open the scheme editor for the Run scheme, choose the Options tab, and uncheck "Allow debugging when using document Versions Browser". When this option is checked, Xcode adds --NSDocumentRevisionsDebugMode YES to the launch arguments, and the YES gets interpreted as a URL to open.
+
+If you want to limit the number of targets visible, which is known to improve Xcode performance, add `xcode_ninja_executable_target_pattern=%target%` where %target% is a regular expression matching executable targets you'd like to include.
+
+To include non-executable targets, use `xcode_ninja_target_pattern=All_iOS`.
+
+If you have problems building, join us in `#chromium` on `irc.freenode.net` and ask there. As mentioned above, be sure that the [waterfall](http://build.chromium.org/buildbot/waterfall/) is green and the tree is open before checking out. This will increase your chances of success.
+
+There is also a dedicated [Xcode tips](Xcode4Tips.md) page that you may want to read.
+
+
+# Using Emacs as EDITOR for "git commit"
+
+Using the [Cocoa version of Emacs](http://emacsformacosx.com/) as the EDITOR environment variable on Mac OS will cause "git commit" to open the message in a window underneath all the others. To fix this, create a shell script somewhere (call it `$HOME/bin/EmacsEditor` in this example) containing the following:
+
+```
+#!/bin/sh
+
+# All of these hacks are needed to get "git commit" to launch a new
+# instance of Emacs on top of everything else, properly pointing to
+# the COMMIT_EDITMSG.
+
+realpath() {
+ [[ $1 = /* ]] && echo "$1" || echo "$PWD/${1#./}"
+}
+
+i=0
+full_paths=()
+for arg in "$@"
+do
+  full_paths[$i]=$(realpath "$arg")
+ ((++i))
+done
+
+open -nWa /Applications/Emacs.app/Contents/MacOS/Emacs --args --no-desktop "${full_paths[@]}"
+```
+
+and in your .bashrc or similar,
+
+```
+export EDITOR=$HOME/bin/EmacsEditor
+``` \ No newline at end of file
diff --git a/docs/mandriva_msttcorefonts.md b/docs/mandriva_msttcorefonts.md
new file mode 100644
index 0000000..e1ef0d1
--- /dev/null
+++ b/docs/mandriva_msttcorefonts.md
@@ -0,0 +1,30 @@
+# Introduction
+
+The msttcorefonts are needed to build Chrome but are not available for Mandriva. Building your own is not hard, though, and only takes about 2 minutes to set up and complete.
+
+
+# Details
+
+```
+urpmi rpm-build cabextract
+```
+
+Download this script, make it executable, and run it: http://wiki.mandriva.com/en/uploads/3/3a/Rpmsetup.sh. It will create a directory ~/rpm and some hidden files in your home directory.
+
+Open the file ~/.rpmmacros and comment out the following lines by putting a # in front of them, e.g. like this (because most likely you won't have a GPG key set up, and creating the package will fail if you leave these lines in):
+
+```
+#%_signature gpg
+#%_gpg_name Mandrivalinux
+#%_gpg_path ~/.gnupg
+```
+
+Download the following file and save it to `~/rpm/SPECS`: http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec
+
+```
+cd ~/rpm/SPECS
+
+rpmbuild -bb msttcorefonts-2.0-1.spec
+```
+
+The RPM will be built and placed in `~/rpm/RPMS/noarch`, ready to install. \ No newline at end of file
diff --git a/docs/mojo_in_chromium.md b/docs/mojo_in_chromium.md
new file mode 100644
index 0000000..d2a7d19
--- /dev/null
+++ b/docs/mojo_in_chromium.md
@@ -0,0 +1,719 @@
+**THIS DOCUMENT IS A WORK IN PROGRESS.** As long as this notice exists, you should probably ignore everything below it.
+
+
+
+# Introduction
+
+This document is intended to serve as a Mojo primer for Chromium developers. No prior knowledge of Mojo is assumed, but you should have a decent grasp of C++ and be familiar with Chromium's multi-process architecture as well as common concepts used throughout Chromium such as smart pointers, message loops, callback binding, and so on.
+
+## Should I Bother Reading This?
+
+If you're planning to build a Chromium feature that needs IPC and you aren't already using Mojo, you probably want to read this. **Legacy IPC** -- _i.e._, `foo_messages.h` files, message filters, and the suite of `IPC_MESSAGE_*` macros -- **is on the verge of deprecation.**
+
+## Why Mojo?
+
+Mojo provides IPC primitives for pushing messages and data around between transferrable endpoints which may or may not cross process boundaries; it simplifies threading with regard to IPC; it standardizes message serialization in a way that's resilient to versioning issues; and it can be used with relative ease and consistency across a number of languages including C++, Java, and `JavaScript` -- all languages which comprise a significant share of Chromium code.
+
+The messaging protocol doesn't strictly need to be used for IPC though, and there are some higher-level reasons for this adoption and for the specific approach to integration outlined in this document.
+
+### Code Health
+
+At the moment we have fairly weak separation between components, with DEPS being the strongest line of defense against increasing complexity.
+
+A component Foo might hold a reference to some bit of component Bar's internal state, or it might expect Bar to initialize said internal state in some particular order. These sorts of problems are reasonably well-mitigated by the code review process, but they can (and do) still slip through the cracks, and they have a noticeable cumulative effect on complexity as the code base continues to grow.
+
+We think we can make a lasting positive impact on code health by establishing more concrete boundaries between components, and this is something a library like Mojo gives us an opportunity to do.
+
+### Modularity
+
+In addition to code health -- which alone could be addressed in any number of ways that don't involve Mojo -- this approach opens doors to build and distribute parts of Chrome separately from the main binary.
+
+While we're not currently taking advantage of this capability, doing so remains a long-term goal due to prohibitive binary size constraints in emerging mobile markets. Many open questions around the feasibility of this goal should be answered by the experimental Mandoline project as it unfolds, but the Chromium project can be technically prepared for such a transition in the meantime.
+
+### Mandoline
+
+The Mandoline project is producing a potential replacement for `src/content`. Because Mandoline components are Mojo apps, and Chromium is now capable of loading Mojo apps (something we'll discuss later), Mojo apps can be shared between both projects with minimal effort. Developing your feature as or within a Mojo application can mean you're contributing to both Chromium and Mandoline.
+
+# Mojo Overview
+
+This section provides a general overview of Mojo and some of its API features. You can probably skip straight to [Your First Mojo Application](#Your_First_Mojo_Application.md) if you just want to get to some practical sample code.
+
+The Mojo Embedder Development Kit (EDK) provides a suite of low-level IPC primitives: **message pipes**, **data pipes**, and **shared buffers**. We'll focus primarily on message pipes and the C++ bindings API in this document.
+
+_TODO: Java and JS bindings APIs should also be covered here._
+
+## Message Pipes
+
+A message pipe is a lightweight primitive for reliable, bidirectional, queued transfer of relatively small packets of data. Every pipe endpoint is identified by a **handle** -- a unique process-wide integer identifying the endpoint to the EDK.
+
+A single message across a pipe consists of a binary payload and an array of zero or more handles to be transferred. A pipe's endpoints may live in the same process or in two different processes.
+
+Pipes are easy to create. The `mojo::MessagePipe` type (see [//third\_party/mojo/src/mojo/public/cpp/system/message\_pipe.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/system/message_pipe.h)) provides a nice class wrapper with each endpoint represented as a scoped handle type (see members `handle0` and `handle1` and the definition of `mojo::ScopedMessagePipeHandle`). In the same header you can find `WriteMessageRaw` and `ReadMessageRaw` definitions. These are in theory all one needs to begin pushing things from one endpoint to the other.
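+
+For illustration, here is a minimal sketch of creating a pipe and taking ownership of its two endpoints; it only shows the types involved, and the raw read/write calls are omitted:
+
+```
+ // Creating the pipe allocates both endpoints.
+ mojo::MessagePipe pipe;
+
+ // Each endpoint is a scoped handle; letting it go out of scope closes that
+ // end of the pipe and signals the peer.
+ mojo::ScopedMessagePipeHandle endpoint_a = pipe.handle0.Pass();
+ mojo::ScopedMessagePipeHandle endpoint_b = pipe.handle1.Pass();
+```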
+
+While it's worth being aware of `mojo::MessagePipe` and the associated raw I/O functions, you will rarely if ever have a use for them. Instead you'll typically use bindings code generated from mojom interface definitions, along with the public bindings API which mostly hides the underlying pipes.
+
+## Mojom Bindings
+
+Mojom is the IDL for Mojo interfaces. When given a mojom file, the bindings generator outputs a collection of bindings libraries for each supported language. Mojom syntax is fairly straightforward (TODO: Link to a mojom language spec?). Consider the example mojom file below:
+
+```
+// frobinator.mojom
+module frob;
+interface Frobinator {
+ Frobinate();
+};
+```
+
+This can be used to generate bindings for a very simple `Frobinator` interface. Bindings are generated at build time and will match the location of the mojom source file itself, mapped into the generated output directory for your Chromium build. In this case one can expect to find files named `frobinator.mojom.js`, `frobinator.mojom.cc`, `frobinator.mojom.h`, _etc._
+
+The C++ header (`frobinator.mojom.h`) generated from this mojom will define a pure virtual class interface named `frob::Frobinator` with a pure virtual method of signature `void Frobinate()`. Any class which implements this interface is effectively a `Frobinator` service.
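+
+As a rough sketch, the interface portion of the generated header looks something like the following (heavily simplified -- the real generated file also contains proxy and stub classes, callback typedefs, a `Name_` constant, and other machinery):
+
+```
+// Simplified sketch of frobinator.mojom.h; not the literal generated code.
+namespace frob {
+
+class Frobinator {
+ public:
+  virtual ~Frobinator() {}
+  virtual void Frobinate() = 0;
+};
+
+}  // namespace frob
+```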
+
+## C++ Bindings API
+
+Before we see an example implementation and usage of the Frobinator, there are a handful of interesting bits in the public C++ bindings API you should be familiar with. These complement generated bindings code and generally obviate any need to use a `mojo::MessagePipe` directly.
+
+In all of the cases below, `T` is the type of a generated bindings class interface, such as the `frob::Frobinator` discussed above.
+
+### `mojo::InterfacePtr<T>`
+
+Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/interface\_ptr.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/interface_ptr.h).
+
+`mojo::InterfacePtr<T>` is a typed proxy for a service of type `T`, which can be bound to a message pipe endpoint. This class implements every interface method on `T` by serializing a message (encoding the method call and its arguments) and writing it to the pipe (if bound.) This is the standard way for C++ code to talk to any Mojo service.
+
+For illustrative purposes only, we can create a message pipe and bind an `InterfacePtr` to one end as follows:
+
+```
+ mojo::MessagePipe pipe;
+ mojo::InterfacePtr<frob::Frobinator> frobinator;
+ frobinator.Bind(
+ mojo::InterfacePtrInfo<frob::Frobinator>(pipe.handle0.Pass(), 0u));
+```
+
+You could then call `frobinator->Frobinate()` and read the encoded `Frobinate` message from the other side of the pipe (`handle1`.) You most likely don't want to do this though, because as you'll soon see there's a nicer way to establish service pipes.
+
+### `mojo::InterfaceRequest<T>`
+
+Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/interface\_request.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h).
+
+`mojo::InterfaceRequest<T>` is a typed container for a message pipe endpoint that should _eventually_ be bound to a service implementation. An `InterfaceRequest` doesn't actually _do_ anything; it's just a way of holding onto an endpoint without losing interface type information.
+
+A common usage pattern is to create a pipe, bind one end to an `InterfacePtr<T>`, and pass the other end off to someone else (say, over some other message pipe) who is expected to eventually bind it to a concrete service implementation. `InterfaceRequest<T>` is here for that purpose and is, as we'll see later, a first-class concept in Mojom interface definitions.
+
+As with `InterfacePtr<T>`, we can manually bind an `InterfaceRequest<T>` to a pipe endpoint:
+
+```
+ mojo::MessagePipe pipe;
+
+ mojo::InterfacePtr<frob::Frobinator> frobinator;
+ frobinator.Bind(
+ mojo::InterfacePtrInfo<frob::Frobinator>(pipe.handle0.Pass(), 0u));
+
+ mojo::InterfaceRequest<frob::Frobinator> frobinator_request;
+ frobinator_request.Bind(pipe.handle1.Pass());
+```
+
+At this point we could start making calls to `frobinator->Frobinate()` as before, but they'll just sit in queue waiting for the request side to be bound. Note that the basic logic in the snippet above is such a common pattern that there's a convenient API function which does it for us.
+
+### `mojo::GetProxy<T>`
+
+Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/interface\_request.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h).
+
+`mojo::GetProxy<T>` is the function you will most commonly use to create a new message pipe. Its signature is as follows:
+
+```
+template <typename T>
+mojo::InterfaceRequest<T> GetProxy(mojo::InterfacePtr<T>* ptr);
+```
+
+This function creates a new message pipe, binds one end to the given `InterfacePtr` argument, and binds the other end to a new `InterfaceRequest` which it then returns. Equivalent to the sample code just above is the following snippet:
+
+```
+ mojo::InterfacePtr<frob::Frobinator> frobinator;
+ mojo::InterfaceRequest<frob::Frobinator> frobinator_request =
+ mojo::GetProxy(&frobinator);
+```
+
+### `mojo::Binding<T>`
+
+Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/binding.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/binding.h).
+
+Binds one end of a message pipe to an implementation of service `T`. A message sent from the other end of the pipe will be read and, if successfully decoded as a `T` message, will invoke the corresponding call on the bound `T` implementation. A `Binding<T>` must be constructed over an instance of `T` (which itself usually owns said `Binding` object), and its bound pipe is usually taken from a passed `InterfaceRequest<T>`.
+
+A common usage pattern looks something like this:
+
+```
+#include "components/frob/public/interfaces/frobinator.mojom.h"
+#include "third_party/mojo/src/mojo/public/cpp/bindings/binding.h"
+#include "third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h"
+
+class FrobinatorImpl : public frob::Frobinator {
+ public:
+ FrobinatorImpl(mojo::InterfaceRequest<frob::Frobinator> request)
+ : binding_(this, request.Pass()) {}
+ ~FrobinatorImpl() override {}
+
+ private:
+ // frob::Frobinator:
+ void Frobinate() override { /* ... */ }
+
+ mojo::Binding<frob::Frobinator> binding_;
+};
+```
+
+And then we could write some code to test this:
+
+```
+ // Fun fact: The bindings generator emits a type alias like this for every
+ // interface type. frob::FrobinatorPtr is an InterfacePtr<frob::Frobinator>.
+ frob::FrobinatorPtr frobinator;
+ scoped_ptr<FrobinatorImpl> impl(
+ new FrobinatorImpl(mojo::GetProxy(&frobinator)));
+ frobinator->Frobinate();
+```
+
+This will _eventually_ call `FrobinatorImpl::Frobinate()`. "Eventually," because the sequence of events when `frobinator->Frobinate()` is called is roughly as follows:
+
+ 1. A new message buffer is allocated and filled with an encoded 'Frobinate' message.
+ 1. The EDK is asked to write this message to the pipe endpoint owned by the `FrobinatorPtr`.
+ 1. If the call didn't happen on the Mojo IPC thread for this process, EDK hops to the Mojo IPC thread.
+ 1. The EDK writes the message to the pipe. In this case the pipe endpoints live in the same process, so this is essentially a glorified `memcpy`. If they lived in different processes this would be the point at which the data moved across a real IPC channel.
+ 1. The EDK on the other end of the pipe is awoken on the Mojo IPC thread and alerted to the message arrival.
+ 1. The EDK reads the message.
+ 1. If the bound receiver doesn't live on the Mojo IPC thread, the EDK hops to the receiver's thread.
+ 1. The message is passed on to the receiver. In this case the receiver is generated bindings code, via `Binding<T>`. This code decodes and validates the `Frobinate` message.
+ 1. `FrobinatorImpl::Frobinate()` is called on the bound implementation.
+
+So as you can see, the call to `Frobinate()` may result in up to two thread hops and one process hop before the service implementation is invoked.
+
+### `mojo::StrongBinding<T>`
+
+Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/strong\_binding.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/strong_binding.h).
+
+`mojo::StrongBinding<T>` is just like `mojo::Binding<T>` with the exception that a `StrongBinding` takes ownership of the bound `T` instance. The instance is destroyed whenever the bound message pipe is closed. This is convenient in cases where you want a service implementation to live as long as the pipe it's servicing, but like all features with clever lifetime semantics, it should be used with caution.
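+
+A minimal sketch of the usage pattern, assuming the `frob::Frobinator` interface from above and an `InterfaceRequest` handed to us by someone else:
+
+```
+#include "third_party/mojo/src/mojo/public/cpp/bindings/strong_binding.h"
+
+class SelfOwnedFrobinator : public frob::Frobinator {
+ public:
+  explicit SelfOwnedFrobinator(mojo::InterfaceRequest<frob::Frobinator> request)
+      : binding_(this, request.Pass()) {}
+  ~SelfOwnedFrobinator() override {}
+
+ private:
+  // frob::Frobinator:
+  void Frobinate() override { /* ... */ }
+
+  // When the remote end closes the pipe, the binding deletes |this|.
+  mojo::StrongBinding<frob::Frobinator> binding_;
+};
+
+// Typical creation site: nobody retains a pointer; the object cleans itself up.
+// new SelfOwnedFrobinator(request.Pass());
+```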
+
+## The Mojo Shell
+
+Both Chromium and Mandoline run a central **shell** component which is used to coordinate communication among all Mojo applications (see the next section for an overview of Mojo applications.)
+
+Every application receives a proxy to this shell upon initialization, and it is exclusively through this proxy that an application can request connections to other applications. The `mojo::Shell` interface provided by this proxy is defined as follows:
+
+```
+module mojo;
+interface Shell {
+ ConnectToApplication(URLRequest application_url,
+ ServiceProvider&? services,
+ ServiceProvider? exposed_services);
+ QuitApplication();
+};
+```
+
+and as for the `mojo::ServiceProvider` interface:
+
+```
+module mojo;
+interface ServiceProvider {
+ ConnectToService(string interface_name, handle<message_pipe> pipe);
+};
+```
+
+Definitions for these interfaces can be found in [//mojo/application/public/interfaces](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/application/public/interfaces/). Also note that `mojo::URLRequest` is a Mojo struct defined in [//mojo/services/network/public/interfaces/url\_loader.mojom](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/services/network/public/interfaces/url_loader.mojom).
+
+Note that there's some new syntax in the mojom for `ConnectToApplication` above. The '?' signifies a nullable value and the '&' signifies an interface request rather than an interface proxy.
+
+The argument `ServiceProvider&? services` indicates that the caller should pass an `InterfaceRequest<ServiceProvider>` as the second argument, but that it need not be bound to a pipe (i.e., it can be "null" in which case it's ignored.)
+
+The argument `ServiceProvider? exposed_services` indicates that the caller should pass an `InterfacePtr<ServiceProvider>` as the third argument, but that it may also be null.
+
+`ConnectToApplication` asks the shell to establish a connection between the caller and some other app the shell might know about. In the event that a connection can be established -- which may involve the shell starting a new instance of the target app -- the given `services` request (if not null) will be bound to a service provider in the target app. The target app may in turn use the passed `exposed_services` proxy (if not null) to request services from the connecting app.
+
+## Mojo Applications
+
+All code which runs in a Mojo environment, apart from the shell itself (see above), belongs to one Mojo **application** or another (but see the note at the end of this section). The term "application" in this context is a common source of confusion, but it's really a simple concept. In essence an application is anything which implements the following Mojom interface:
+
+```
+ module mojo;
+ interface Application {
+ Initialize(Shell shell, string url);
+ AcceptConnection(string requestor_url,
+ ServiceProvider&? services,
+ ServiceProvider? exposed_services,
+ string resolved_url);
+ OnQuitRequested() => (bool can_quit);
+ };
+```
+
+Of course, in Chromium and Mandoline environments this interface is obscured from application code and applications should generally just implement `mojo::ApplicationDelegate` (defined in [//mojo/application/public/cpp/application\_delegate.h](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/application/public/cpp/application_delegate.h).) We'll see a concrete example of this in the next section, [Your First Mojo Application](#Your_First_Mojo_Application.md).
+
+The takeaway here is that an application can be anything. It's not necessarily a new process (though at the moment, it's at least a new thread). Applications can connect to each other, and these connections are the mechanism through which separate components expose services to each other.
+
+**NOTE:** This is not true in Chromium today, but it should be eventually. For some components (like render frames, or arbitrary browser process code) we provide APIs which allow non-Mojo-app-code to masquerade as a Mojo app and therefore connect to real Mojo apps through the shell.
+
+## Other IPC Primitives
+
+Finally, it's worth making brief mention of the other types of IPC primitives Mojo provides apart from message pipes. A **data pipe** is a unidirectional channel for pushing around raw data in bulk, and a **shared buffer** is (unsurprisingly) a shared memory primitive. Both of these objects use the same type of transferable handle as message pipe endpoints, and can therefore be transferred across message pipes, potentially to other processes.
+
+# Your First Mojo Application
+
+In this section, we're going to build a simple Mojo application that can be run in isolation using Mandoline's `mojo_runner` binary. After that we'll add a service to the app and set up a test suite to connect and test that service.
+
+## Hello, world!
+
+So, you're building a new Mojo app and it has to live somewhere. For the foreseeable future we'll likely be treating `//components` as a sort of top-level home for new Mojo apps in the Chromium tree. Any component application you build should probably go there. Let's create some basic files to kick things off. You may want to start a new local Git branch to isolate any changes you make while working through this.
+
+First create a new `//components/hello` directory. Inside this directory we're going to add the following files:
+
+**components/hello/main.cc**
+```
+#include "base/logging.h"
+#include "third_party/mojo/src/mojo/public/c/system/main.h"
+
+MojoResult MojoMain(MojoHandle shell_handle) {
+ LOG(ERROR) << "Hello, world!";
+ return MOJO_RESULT_OK;
+};
+```
+
+
+**components/hello/BUILD.gn**
+```
+import("//mojo/public/mojo_application.gni")
+
+mojo_native_application("hello") {
+ sources = [
+ "main.cc",
+ ]
+ deps = [
+ "//base",
+ "//mojo/environment:chromium",
+ ]
+}
+```
+
+For the sake of this example you'll also want to add your component as a dependency somewhere in your local checkout to ensure its build files are generated. The easiest thing to do there is probably to add a dependency on `"//components/hello"` in the `"gn_all"` target of the top-level `//BUILD.gn`.
+
+Assuming you have a GN output directory at `out_gn/Debug`, you can build the Mojo runner along with your shiny new app:
+
+```
+ ninja -C out_gn/Debug mojo_runner components/hello
+```
+
+In addition to the `mojo_runner` executable, this will produce a new binary at `out_gn/Debug/hello/hello.mojo`. This binary is essentially a shared library which exports your `MojoMain` function.
+
+`mojo_runner` takes an application URL as its only argument and runs the corresponding application. In its current state it resolves `mojo`-scheme URLs such that `"mojo:foo"` maps to the file `"foo/foo.mojo"` relative to the `mojo_runner` path (_i.e._ your output directory.) This means you can run your new app with the following command:
+
+```
+ out_gn/Debug/mojo_runner mojo:hello
+```
+
+You should see our little `"Hello, world!"` error log followed by a hanging application. You can `^C` to kill it.
+
+## Exposing Services
+
+An app that prints `"Hello, world!"` isn't terribly interesting. At a bare minimum your app should implement `mojo::ApplicationDelegate` and expose at least one service to connecting applications.
+
+Let's update `main.cc` with the following contents:
+
+**components/hello/main.cc**
+```
+#include "components/hello/hello_app.h"
+#include "mojo/application/public/cpp/application_runner.h"
+#include "third_party/mojo/src/mojo/public/c/system/main.h"
+
+MojoResult MojoMain(MojoHandle shell_handle) {
+ mojo::ApplicationRunner runner(new hello::HelloApp);
+ return runner.Run(shell_handle);
+};
+```
+
+This is a pretty typical looking `MojoMain`. Most of the time this is all you want -- a `mojo::ApplicationRunner` constructed over a `mojo::ApplicationDelegate` instance, `Run()` with the pipe handle received from the shell. We'll add some new files to the app as well:
+
+**components/hello/public/interfaces/greeter.mojom**
+```
+module hello;
+interface Greeter {
+ Greet(string name) => (string greeting);
+};
+```
+
+Note the new arrow syntax on the `Greet` method. This indicates that the caller expects a response from the service.
+
+**components/hello/public/interfaces/BUILD.gn**
+```
+import("//third_party/mojo/src/mojo/public/tools/bindings/mojom.gni")
+
+mojom("interfaces") {
+ sources = [
+ "greeter.mojom",
+ ]
+}
+```
+
+**components/hello/hello\_app.h**
+```
+#ifndef COMPONENTS_HELLO_HELLO_APP_H_
+#define COMPONENTS_HELLO_HELLO_APP_H_
+
+#include "base/macros.h"
+#include "components/hello/public/interfaces/greeter.mojom.h"
+#include "mojo/application/public/cpp/application_delegate.h"
+#include "mojo/application/public/cpp/interface_factory.h"
+
+namespace hello {
+
+class HelloApp : public mojo::ApplicationDelegate,
+ public mojo::InterfaceFactory<Greeter> {
+ public:
+ HelloApp();
+ ~HelloApp() override;
+
+ private:
+ // mojo::ApplicationDelegate:
+ bool ConfigureIncomingConnection(
+ mojo::ApplicationConnection* connection) override;
+
+ // mojo::InterfaceFactory<Greeter>:
+ void Create(mojo::ApplicationConnection* connection,
+ mojo::InterfaceRequest<Greeter> request) override;
+
+ DISALLOW_COPY_AND_ASSIGN(HelloApp);
+};
+
+} // namespace hello
+
+#endif // COMPONENTS_HELLO_HELLO_APP_H_
+```
+
+
+**components/hello/hello\_app.cc**
+```
+#include "base/macros.h"
+#include "components/hello/hello_app.h"
+#include "mojo/application/public/cpp/application_connection.h"
+#include "third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h"
+#include "third_party/mojo/src/mojo/public/cpp/bindings/strong_binding.h"
+
+namespace hello {
+
+namespace {
+
+class GreeterImpl : public Greeter {
+ public:
+ GreeterImpl(mojo::InterfaceRequest<Greeter> request)
+ : binding_(this, request.Pass()) {
+ }
+
+ ~GreeterImpl() override {}
+
+ private:
+ // Greeter:
+ void Greet(const mojo::String& name, const GreetCallback& callback) override {
+ callback.Run("Hello, " + std::string(name) + "!");
+ }
+
+ mojo::StrongBinding<Greeter> binding_;
+
+ DISALLOW_COPY_AND_ASSIGN(GreeterImpl);
+};
+
+} // namespace
+
+HelloApp::HelloApp() {
+}
+
+HelloApp::~HelloApp() {
+}
+
+bool HelloApp::ConfigureIncomingConnection(
+ mojo::ApplicationConnection* connection) {
+ connection->AddService<Greeter>(this);
+ return true;
+}
+
+void HelloApp::Create(
+ mojo::ApplicationConnection* connection,
+ mojo::InterfaceRequest<Greeter> request) {
+ new GreeterImpl(request.Pass());
+}
+
+} // namespace hello
+```
+
+And finally we need to update our app's `BUILD.gn` to add some new sources and dependencies:
+
+**components/hello/BUILD.gn**
+```
+import("//mojo/public/mojo_application.gni")
+
+source_set("lib") {
+ sources = [
+ "hello_app.cc",
+ "hello_app.h",
+ ]
+ deps = [
+ "//base",
+ "//components/hello/public/interfaces",
+ "//mojo/application/public/cpp",
+ "//mojo/environment:chromium",
+ ]
+}
+
+mojo_native_application("hello") {
+ sources = [
+ "main.cc",
+ ]
+ deps = [ ":lib" ]
+}
+```
+
+Note that we build the bulk of our application sources as a static library separate from the `MojoMain` definition. Following this convention is particularly useful for Chromium integration, as we'll see later.
+
+There's a lot going on here and it would be useful to familiarize yourself with the definitions of `mojo::ApplicationDelegate`, `mojo::ApplicationConnection`, and `mojo::InterfaceFactory<T>`. The TL;DR though is that if someone connects to this app and requests a service named `"hello::Greeter"`, the app will create a new `GreeterImpl` and bind it to that request pipe. From there the connecting app can call `Greeter` interface methods and they'll be routed to that `GreeterImpl` instance.
+
+Although this appears to be a more interesting application, we need some way to actually connect and test the behavior of our new service. Let's write an app test!
+
+## App Tests
+
+App tests run inside a test application, giving test code access to a shell which can connect to one or more applications-under-test.
+
+First let's introduce some test code:
+
+**components/hello/hello\_apptest.cc**
+```
+#include "base/bind.h"
+#include "base/callback.h"
+#include "base/logging.h"
+#include "base/macros.h"
+#include "base/run_loop.h"
+#include "components/hello/public/interfaces/greeter.mojom.h"
+#include "mojo/application/public/cpp/application_impl.h"
+#include "mojo/application/public/cpp/application_test_base.h"
+
+namespace hello {
+namespace {
+
+class HelloAppTest : public mojo::test::ApplicationTestBase {
+ public:
+ HelloAppTest() {}
+ ~HelloAppTest() override {}
+
+ void SetUp() override {
+ ApplicationTestBase::SetUp();
+ mojo::URLRequestPtr app_url = mojo::URLRequest::New();
+ app_url->url = "mojo:hello";
+ application_impl()->ConnectToService(app_url.Pass(), &greeter_);
+ }
+
+ Greeter* greeter() { return greeter_.get(); }
+
+ private:
+ GreeterPtr greeter_;
+
+ DISALLOW_COPY_AND_ASSIGN(HelloAppTest);
+};
+
+void ExpectGreeting(const mojo::String& expected_greeting,
+ const base::Closure& continuation,
+ const mojo::String& actual_greeting) {
+ EXPECT_EQ(expected_greeting, actual_greeting);
+ continuation.Run();
+};
+
+TEST_F(HelloAppTest, GreetWorld) {
+ base::RunLoop loop;
+ greeter()->Greet("world", base::Bind(&ExpectGreeting, "Hello, world!",
+ loop.QuitClosure()));
+ loop.Run();
+}
+
+} // namespace
+} // namespace hello
+```
+
+We also need to add a new rule to `//components/hello/BUILD.gn`:
+
+```
+ mojo_native_application("apptests") {
+ output_name = "hello_apptests"
+ testonly = true
+ sources = [
+ "hello_apptest.cc",
+ ]
+ deps = [
+ "//base",
+ "//mojo/application/public/cpp:test_support",
+ ]
+ public_deps = [
+ "//components/hello/public/interfaces",
+ ]
+ data_deps = [ ":hello" ]
+ }
+```
+
+Note that the `//components/hello:apptests` target does **not** have a binary dependency on either `HelloApp` or `GreeterImpl` implementations; instead it depends only on the component's public interface definitions.
+
+The `data_deps` entry ensures that `hello.mojo` is up-to-date when `apptests` is built. This is desirable because the test connects to `"mojo:hello"` which will in turn load `hello.mojo` from disk.
+
+You can now build the test suite:
+
+```
+ ninja -C out_gn/Debug components/hello:apptests
+```
+
+and run it:
+
+```
+ out_gn/Debug/mojo_runner mojo:hello_apptests
+```
+
+You should see one test (`HelloAppTest.GreetWorld`) passing.
+
+One particularly interesting bit of code in this test is in the `SetUp` method:
+
+```
+ mojo::URLRequestPtr app_url = mojo::URLRequest::New();
+ app_url->url = "mojo:hello";
+ application_impl()->ConnectToService(app_url.Pass(), &greeter_);
+```
+
+`ConnectToService` is a convenience method provided by `mojo::ApplicationImpl`, and it's essentially a shortcut for calling out to the shell's `ConnectToApplication` method with the given application URL (in this case `"mojo:hello"`) and then connecting to a specific service provided by that app via its `ServiceProvider`'s `ConnectToService` method.
+
+Note that generated interface bindings include a constant string to identify each interface by name; so for example the generated `hello::Greeter` type defines a static C string:
+
+```
+ const char hello::Greeter::Name_[] = "hello::Greeter";
+```
+
+This is exploited by the definition of `mojo::ApplicationConnection::ConnectToService<T>`, which uses `T::Name_` as the name of the service to connect to. The type `T` in this context is inferred from the `InterfacePtr<T>*` argument. You can inspect the definition of `ConnectToService` in [//mojo/application/public/cpp/application\_connection.h](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/application/public/cpp/application_connection.h) for additional clarity.
+
+We could have instead written this code as:
+
+```
+ mojo::URLRequestPtr app_url = mojo::URLRequest::New();
+ app_url->url = "mojo::hello";
+
+ mojo::ServiceProviderPtr services;
+ application_impl()->shell()->ConnectToApplication(
+ app_url.Pass(), mojo::GetProxy(&services),
+ // We pass a null provider since we aren't exposing any of our own
+ // services to the target app.
+ mojo::ServiceProviderPtr());
+
+ mojo::InterfaceRequest<hello::Greeter> greeter_request =
+ mojo::GetProxy(&greeter_);
+ services->ConnectToService(hello::Greeter::Name_,
+ greeter_request.PassMessagePipe());
+```
+
+The net result is the same, but the three-line version seems much nicer.
+
+# Chromium Integration
+
+Up until now we've been using `mojo_runner` to load and run `.mojo` binaries dynamically. While this model is used by Mandoline and may eventually be used in Chromium as well, Chromium is at the moment confined to running statically linked application code. This means we need some way to register applications with the browser's Mojo shell.
+
+It also means that, rather than using the binary output of a `mojo_native_application` target, some part of Chromium must link against the app's static library target (_e.g._, `"//components/hello:lib"`) and register a URL handler to teach the shell how to launch an instance of the app.
+
+When registering an app URL in Chromium it probably makes sense to use the same mojo-scheme URL used for the app in Mandoline. For example the media renderer app is referenced by the `"mojo:media"` URL in both Mandoline and Chromium. In Mandoline this resolves to a dynamically-loaded `.mojo` binary on disk, but in Chromium it resolves to a static application loader linked into Chromium. The net result is the same in both cases: other apps can use the shell to connect to `"mojo:media"` and use its services.
+
+This section explores different ways to register and connect to `"mojo:hello"` in Chromium.
+
+## In-Process Applications
+
+Applications can be set up to run within the browser process via `ContentBrowserClient::RegisterInProcessMojoApplications`. This method populates a mapping from URL to `base::Callback<scoped_ptr<mojo::ApplicationDelegate>()>` (_i.e._, a factory function which creates a new `mojo::ApplicationDelegate` instance), so registering a new app means adding an entry to this map.
+
+Let's modify `ChromeContentBrowserClient::RegisterInProcessMojoApplications` (in `//chrome/browser/chrome_content_browser_client.cc`) by adding the following code:
+
+```
+ apps->insert(std::make_pair(GURL("mojo:hello"),
+ base::Bind(&HelloApp::CreateApp)));
+```
+
+You'll also want to add the following convenience method to your `HelloApp` definition in `//components/hello/hello_app.h`:
+
+```
+ static scoped_ptr<mojo::ApplicationDelegate> CreateApp() {
+   return scoped_ptr<mojo::ApplicationDelegate>(new HelloApp);
+ }
+```
+
+This introduces a dependency from `//chrome/browser` on to `//components/hello:lib`, which you can add to the `"browser"` target's deps in `//chrome/browser/BUILD.gn`. You'll of course also need to include `"components/hello/hello_app.h"` in `chrome_content_browser_client.cc`.
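+
+Schematically, the `BUILD.gn` change is just one more entry in the browser target's `deps` list (the surrounding entries are elided here):
+
+```
+# In //chrome/browser/BUILD.gn, inside the "browser" target:
+deps = [
+  # ... existing deps ...
+  "//components/hello:lib",
+]
+```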
+
+That's it! Now if an app comes to the shell asking to connect to `"mojo:hello"` and the hello app is already running, it'll get connected to our `HelloApp` and have access to the `Greeter` service. If the hello app wasn't already running, it will first be launched on a new thread.
+
+## Connecting From the Browser
+
+We've already seen how apps can connect to each other using their own private shell proxy, but the vast majority of Chromium code doesn't yet belong to a Mojo application. So how do we use an app's services from arbitrary browser code? We use `content::MojoAppConnection`, like this:
+
+```
+ #include "base/bind.h"
+ #include "base/logging.h"
+ #include "components/hello/public/interfaces/greeter.mojom.h"
+ #include "content/public/browser/mojo_app_connection.h"
+
+ void LogGreeting(const mojo::String& greeting) {
+ LOG(INFO) << greeting;
+ }
+
+ void GreetTheWorld() {
+ scoped_ptr<content::MojoAppConnection> connection =
+ content::MojoAppConnection::Create("mojo:hello",
+ content::kBrowserMojoAppUrl);
+ hello::GreeterPtr greeter;
+ connection->ConnectToService(&greeter);
+ greeter->Greet("world", base::Bind(&LogGreeting));
+ }
+```
+
+A `content::MojoAppConnection`, while not thread-safe, may be created and safely used on any single browser thread.
+
+You could add the above code to a new browsertest to convince yourself that it works. In fact you might want to take a peek at `MojoShellTest.TestBrowserConnection` (in [//content/browser/mojo\_shell\_browsertest.cc](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/mojo_shell_browsertest.cc)) which registers and tests an in-process Mojo app.
+
+Finally, note that `MojoAppConnection::Create` takes two URLs. The first is the target app URL, and the second is the source URL. Since we're not really a Mojo app, but we are still trusted browser code, the shell will gladly use this URL as the `requestor_url` when establishing an incoming connection to the target app. This allows browser code to masquerade as a Mojo app at the given URL. `content::kBrowserMojoAppUrl` (which is presently `"system:content_browser"`) is a reasonable default choice when a more specific app identity isn't required.
+
+## Out-of-Process Applications
+
+If an app URL isn't registered for in-process loading, the shell assumes it must be an out-of-process application. If the shell doesn't already have a known instance of the app running, a new utility process is launched and the application request is passed on to it. Then, if the app URL is registered in the utility process, the app will be loaded there.
+
+Similar to in-process registration, a URL mapping needs to be registered in `ContentUtilityClient::RegisterMojoApplications`.
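+
+A hedged sketch of what that registration might look like -- the parameter type shown here is an assumption, but the pattern mirrors the in-process registration above:
+
+```
+// Hypothetical sketch of the utility-side registration; the exact map type
+// passed to RegisterMojoApplications is an assumption.
+void ChromeContentUtilityClient::RegisterMojoApplications(
+    StaticMojoApplicationMap* apps) {
+  apps->insert(std::make_pair(GURL("mojo:hello"),
+                              base::Bind(&HelloApp::CreateApp)));
+}
+```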
+
+Once again you can take a peek at //content/browser/mojo\_shell\_browsertest.cc for an end-to-end example of testing an out-of-process Mojo app from browser code. Note that `content_browsertests` runs on `content_shell`, which uses `ShellContentUtilityClient` as defined in [//content/shell/utility/shell\_content\_utility\_client.cc](https://code.google.com/p/chromium/codesearch#chromium/src/content/shell/utility/shell_content_utility_client.cc). This code registers a common OOP test app.
+
+## Unsandboxed Out-of-Process Applications
+
+By default new utility processes run in a sandbox. If you want your Mojo app to run out-of-process and unsandboxed (which you **probably do not**), you can register its URL via `ContentBrowserClient::RegisterUnsandboxedOutOfProcessMojoApplications`.
+
+## Connecting From `RenderFrame`
+
+We can also connect to Mojo apps from a `RenderFrame`. This is made possible by `RenderFrame`'s `GetServiceRegistry()` interface. The `ServiceRegistry` can be used to acquire a shell proxy and in turn connect to an app like so:
+
+```
+void GreetWorld(content::RenderFrame* frame) {
+ mojo::ShellPtr shell;
+ frame->GetServiceRegistry()->ConnectToRemoteService(
+ mojo::GetProxy(&shell));
+
+ mojo::URLRequestPtr request = mojo::URLRequest::New();
+ request->url = "mojo:hello";
+
+ mojo::ServiceProviderPtr hello_services;
+ shell->ConnectToApplication(
+ request.Pass(), mojo::GetProxy(&hello_services), nullptr);
+
+ hello::GreeterPtr greeter;
+ hello_services->ConnectToService(
+ hello::Greeter::Name_, mojo::GetProxy(&greeter).PassMessagePipe());
+}
+```
+
+It's important to note that connections made through the frame's shell proxy will appear to come from the frame's `SiteInstance` URL. For example, if the frame has loaded `https://example.com/`, `HelloApp`'s incoming `mojo::ApplicationConnection` in this case will have a remote application URL of `"https://example.com/"`. This allows apps to expose their services to web frames on a per-origin basis if needed.
+
+## Connecting From Java
+
+TODO
+
+## Connecting From `JavaScript`
+
+This is still a work in progress and might not really take shape until the Blink+Chromium merge. In the meantime there are some end-to-end WebUI examples in [//content/browser/webui/web\_ui\_mojo\_browsertest.cc](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/webui/web_ui_mojo_browsertest.cc). In particular, `WebUIMojoTest.ConnectToApplication` connects from a WebUI frame to a test app running in a new utility process.
+
+# FAQ
+
+Nothing here yet! \ No newline at end of file
diff --git a/docs/ninja_build.md b/docs/ninja_build.md
new file mode 100644
index 0000000..addf61d
--- /dev/null
+++ b/docs/ninja_build.md
@@ -0,0 +1,79 @@
+
+
+Ninja is a build system written with the specific goal of improving the edit-compile cycle time. It is used by default everywhere except when building for iOS.
+
+Ninja behaves very similarly to Make -- the major feature is that it starts building files nearly instantly. (It has a number of minor user interface improvements over Make as well.)
+
+Read more about Ninja at [the Ninja home page](http://martine.github.com/ninja/).
+
+## Using it
+
+### Configure your system to use Ninja
+
+#### Install
+
+Ninja is included in depot\_tools as well as gyp, so there's nothing to install.
+
+## Build instructions
+
+To build Chrome:
+```
+cd /path/to/chrome/src
+ninja -C out/Debug chrome
+```
+
+Specify `out/Release` for a release build. I recommend setting up an alias so that you don't need to type out that build directory path.
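+
+For example, something like this in your `~/.bashrc` (the alias name is just a suggestion):
+
+```
+alias nj='ninja -C out/Debug'
+# Then build with: nj chrome
+```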
+
+If you want to build all targets, use `ninja -C out/Debug all`. It's faster to build only the target you're working on, like 'chrome' or 'unit\_tests'.
+
+## Android
+
+Identical to Linux; just make sure `OS=android` is in your `GYP_DEFINES`. You want to build one of the `_apk` targets, e.g. `content_shell_apk`.
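+
+For example, assuming your checkout is already set up for Android (see the Android build instructions):
+
+```
+export GYP_DEFINES="$GYP_DEFINES OS=android"
+build/gyp_chromium
+ninja -C out/Release content_shell_apk
+```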
+
+## Windows
+
+Similar to Linux. It uses MSVS's `cl.exe`, `link.exe`, etc. so you still need to have VS installed. To use it, open `cmd.exe`, go to your chrome checkout, and run:
+```
+set GYP_DEFINES=component=shared_library
+python build\gyp_chromium
+ninja -C out\Debug chrome.exe
+```
+
+`component=shared_library` is optional but recommended for faster links.
+
+You can also set `GYP_GENERATORS=ninja,msvs-ninja` to get both VS projects generated if you want to use VS just to browse/edit (but then gyp takes twice as long to run).
+
+If you're using Express or the Windows SDK by itself (rather than using a Visual Studio install), you'll need to run from a vcvarsall command prompt.
+
+### Debugging
+
+Miss VS for debugging?
+```
+devenv.com /debugexe chrome.exe --my-great-args "go here" --single-process etc
+```
+
+Miss Xcode for debugging? Read http://dev.chromium.org/developers/debugging-on-os-x/building-with-ninja-debugging-with-xcode
+
+### Without Visual Studio
+
+That is, building with just the WinDDK. This is documented in the [regular build instructions](http://dev.chromium.org/developers/how-tos/build-instructions-windows#TOC-Setting-up-the-environment-for-building-with-Visual-C-2010-Express-or-Windows-7.1-SDK).
+
+## Tweaks
+
+### Building through errors
+Pass a flag like `-k3` to make Ninja build until it hits three errors instead of stopping at the first.
+
+### Parallelism
+Pass a flag like `-j8` to use 8 parallel processes, or `-j1` to compile just one at a time (helpful if you're getting weird compiler errors). By default Ninja tries to use all your processors.
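+
+For example, to build with eight parallel jobs and keep going until three jobs have failed:
+
+```
+ninja -C out/Debug -j8 -k3 chrome
+```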
+
+### More options
+There are more options. Run `ninja --help` to see them all.
+
+### Custom build configs
+
+You can write a specific build config to a specific output directory via the `-G` flags to gyp. Here's an example from jamesr:
+`build/gyp_chromium -Gconfig=Release -Goutput_dir=out_profiling -Dprofiling=1 -Dlinux_fpic=0`
+
+## Bugs
+
+If you encounter any problems, please file a bug at http://crbug.com/new with label `ninja` and cc `thakis@` or `scottmg@`. Assume that it is a bug in Ninja before you bother anyone about e.g. link problems. \ No newline at end of file
diff --git a/docs/piranha_plant.md b/docs/piranha_plant.md
new file mode 100644
index 0000000..4b6db33
--- /dev/null
+++ b/docs/piranha_plant.md
@@ -0,0 +1,54 @@
+# Introduction
+
+Piranha Plant is the name of a project, started in November 2013, that aims to deliver the future architecture of MediaStreams in Chromium.
+
+Project members are listed in the [group for the project](https://groups.google.com/a/chromium.org/forum/#!members/piranha-plant).
+
+The Piranha Plant is a monster plant that has appeared in many of the Super Mario games. In the original Super Mario Bros., it hid in the green pipes, which makes it an apt name for the project, as we are fighting "monsters in the plumbing."
+
+![http://files.hypervisor.fr/img/super_mario_piranha_plant.png](http://files.hypervisor.fr/img/super_mario_piranha_plant.png)
+
+# Background
+
+When the MediaStream spec initially came to be, it was tightly coupled with PeerConnection. The infrastructure for both of these was initially implemented primarily in libjingle, and then used by Chromium. For this reason, the MediaStream implementation in Chromium is still somewhat coupled with the PeerConnection implementation, it still uses some libjingle interfaces on the Chromium side, and progress is sometimes more difficult as changes need to land in libjingle before changes can be made in Chromium.
+
+Since the early days, the MediaStream spec has evolved so that PeerConnection is just one destination for a MediaStream, multiple teams are or will be consuming the MediaStream infrastructure, and we have a clearer vision of what the architecture should look like now that the spec is relatively stable.
+
+# Goals
+ 1. Document the idealized future design for MediaStreams in Chromium (MS) as well as the current state.
+ 1. Create and execute on a plan to incrementally implement the future design.
+ 1. Improve quality, maintainability and readability/understandability of the MS code.
+ 1. Make life easier for Chromium developers using MS.
+ 1. Balance concerns and priorities of the different teams that are or will be using MS in Chromium.
+ 1. Do the above without hurting our ability to produce the WebRTC.org deliverables, and without hurting interoperability between Chromium and other software built on the WebRTC.org deliverables.
+
+# Deliverables
+
+ 1. Project code name: Piranha Plant.
+ 1. A [design document](http://www.chromium.org/developers/design-documents/idealized-mediastream-design) for the idealized future design (work in progress).
+ 1. A document laying out a plan for incremental steps to achieve as much of the idealized design as is pragmatic. See below for current draft.
+ 1. A [master bug](http://crbug.com/323223) to collect all existing and currently planned work items:
+ 1. Sub-bugs of the master bug, for all currently known and planned work.
+ 1. A document describing changed and improved team policies to help us keep improving code quality (e.g. naming, improved directory structure, OWNERS files). Not started.
+
+# Task List
+Here are some upcoming tasks we need to work on to progress towards the idealized design. Those currently being worked on have emails at the front:
+ * General
+ * More restrictive OWNERS
+ * DEPS files to limit dependencies on libjingle
+ * Rename MediaStream{Manager, Dispatcher, DispatcherHandler} to CaptureDevice{...} since it is a bit confusing to use the MediaStream name here.
+ * Rename MediaStreamDependencyFactory to PeerConnectionDependencyFactory.
+ * Split up MediaStreamImpl.
+ * Change the RTCPeerConnectionHandler to only create the PeerConnection and related stuff when necessary.
+ * Audio
+ * (xians) Add a Content API where, given an audio WebMediaStreamTrack, you can register as a sink for that track.
+ * Move RendererMedia, the current local audio track sink interface, to //media and change as necessary.
+ * Put a Chrome-side adapter on the libjingle audio track interface.
+ * Move the APM from libjingle to Chrome, putting it behind an experimental flag to start with.
+ * Do format change notifications on the capture thread.
+ * Switch to a push model for received PeerConnection audio.
+ * Video
+ * (perkj) Add a Chrome-side interface representing a sink for a video track.
+ * (perkj) Add a Content API where, given a video WebMediaStreamTrack, you can register as a sink for that track.
+ * Add a Chrome-side adapter for libjingle’s video track interface, which may also need to change.
+ * Implement a Chrome-side VideoSource and constraints handling (currently in libjingle). \ No newline at end of file
diff --git a/docs/profiling_content_shell_on_android.md b/docs/profiling_content_shell_on_android.md
new file mode 100644
index 0000000..0d4b903
--- /dev/null
+++ b/docs/profiling_content_shell_on_android.md
@@ -0,0 +1,149 @@
+# Introduction
+
+Below are the instructions for setting up profiling for Content Shell on Android. This will let you generate profiles for Content Shell. It requires Linux, building a userdebug Android build, and wiping the device.
+
+## Prepare your device.
+
+You need an Android 4.2+ device (Galaxy Nexus, Nexus 4, 7, 10, etc.) which you don’t mind erasing all data, rooting, and installing a userdebug build on.
+
+## Get and build content\_shell\_apk for Android
+(These instructions have been carefully distilled from http://code.google.com/p/chromium/wiki/AndroidBuildInstructions)
+
+ 1. Get the code! You’ll want a second checkout as this will be android-specific. You know the drill: http://dev.chromium.org/developers/how-tos/get-the-code
+ 1. Append this to your .gclient file: `target_os = ['android']`
+ 1. Create `chromium.gyp_env` next to your .gclient file: `echo "{ 'GYP_DEFINES': 'OS=android', }" > chromium.gyp_env`
+ 1. (Note: All these scripts assume you’re using "bash" (default) as your shell.)
+ 1. Sync and runhooks (be careful not to run hooks on the first sync):
+```
+gclient sync --nohooks
+. build/android/envsetup.sh
+gclient runhooks
+```
+ 1. No need to install any API Keys.
+ 1. Install Oracle’s Java: http://goo.gl/uPRSq. Grab the appropriate x64 .bin file, `chmod +x`, and then execute to extract. You then move that extracted tree into /usr/lib/jvm/, rename it java-6-sun and set:
+```
+export JAVA_HOME=/usr/lib/jvm/java-6-sun
+export ANDROID_JAVA_HOME=/usr/lib/jvm/java-6-sun
+```
+ 1. Type ‘`java -version`’ and make sure it says java version "1.6.0\_35” without any mention of openjdk before proceeding.
+ 1. `sudo build/install-build-deps-android.sh`
+ 1. Time to build!
+```
+ninja -C out/Release content_shell_apk
+```
+
+## Setup the physical device
+
+> Plug in your device. Make sure you can talk to your device, try "`adb shell ls`"
+
+## Root your device and install a userdebug build
+
+ 1. This may require building your own version of Android: http://source.android.com/source/building-devices.html
+ 1. A build that works is: manta / android-4.2.2\_r1 or master / full\_manta-userdebug.
+
+## Root your device
+ 1. Run `adb root`. Every time you connect your device you’ll want to run this.
+ 1. If adb is not available, make sure to run “`. build/android/envsetup.sh`”
+> If you get the error “error: device offline”, you may need to become a developer on your device before Linux will see it. On Jellybean 4.2.1 and above this requires going to “about phone” or “about tablet” and clicking the build number 7 times: http://androidmuscle.com/how-to-enable-usb-debugging-developer-options-on-nexus-4-and-android-4-2-devices/
+
+## Run a Telemetry perf profiler
+
+You can run any Telemetry benchmark with `--profiler=perf` (see the example command below the list), and it will:
+
+ 1. Download "perf" and "perfhost".
+ 1. Install perf on your device.
+ 1. Run the test.
+ 1. Set up symlinks to work with the `--symfs` parameter.
+
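+For example, an invocation might look something like this (the browser type and benchmark name are illustrative; use whichever benchmark you care about):
+
+```
+tools/perf/run_benchmark --browser=android-content-shell \
+    --profiler=perf smoothness.top_25
+```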
+
+You can also run "manual" tests with Telemetry, more information here:
+http://www.chromium.org/developers/telemetry/profiling#TOC-Manual-Profiling---Android
+
+The following steps describe building "perf", which is no longer necessary if you use Telemetry.
+
+
+## Install /system/bin/perf on your device (not needed for Telemetry)
+
+```
+# From inside the android source tree (not inside Chromium)
+mmm external/linux-tools-perf/
+adb remount # (allows you to write to the system image)
+adb sync
+adb shell perf top # check that perf can get samples (don’t expect symbols)
+```
+
+## Enable profiling
+> Rebuild content\_shell\_apk with profiling enabled
+```
+export GYP_DEFINES="$GYP_DEFINES profiling=1"
+build/gyp_chromium
+ninja -C out/Release content_shell_apk
+```
+## Install ContentShell
+> Install with the following:
+```
+build/android/adb_install_apk.py --apk out/Release/apks/ContentShell.apk --apk_package org.chromium.content_shell
+```
+
+## Run ContentShell
+> Run with the following:
+```
+./build/android/adb_run_content_shell
+```
+> If content\_shell “stopped unexpectedly” use “`adb logcat`” to debug. If you see ResourceExtractor exceptions, a clean build is your solution. crbug.com/164220
+
+## Setup a “symbols” directory with symbols from your build (not needed for Telemetry)
+ 1. Figure out exactly what path content\_shell\_apk (or chrome, etc) installs to.
+ * On the device, navigate ContentShell to about:crash
+```
+adb logcat | grep libcontent_shell_content_view.so
+```
+> > You should find a path that’s something like /data/app-lib/org.chromium.content\_shell-1/libcontent\_shell\_content\_view.so
+ 1. Make a symbols directory
+```
+mkdir symbols  # This guide assumes you put this next to src/.
+```
+ 1. Make a symlink from your symbols directory to your un-stripped content\_shell.
+```
+mkdir -p symbols/data/app-lib/org.chromium.content_shell-1  # Use whatever path in app-lib you got above.
+ln -s `pwd`/src/out/Release/lib/libcontent_shell_content_view.so `pwd`/symbols/data/app-lib/org.chromium.content_shell-1
+```
+
+## Install perfhost\_linux locally (not needed for Telemetry)
+
+
+> Note: modern versions of perf may also be able to process the perf.data files from the device.
+ 1. perfhost\_linux can be built from: https://android.googlesource.com/platform/external/linux-tools-perf/.
+ 1. Place perfhost\_linux next to symbols, src, etc.
+```
+chmod a+x perfhost_linux
+```
+
+## Actually record a profile on the device!
+> Run the following:
+```
+adb shell ps | grep content  # Look for the pid of the sandboxed_process.
+adb shell perf record -g -p 12345 sleep 5
+adb pull /data/perf.data
+```
+
+## Create the report
+ 1. Run the following:
+```
+./perfhost_linux report -g -i perf.data --symfs symbols/
+```
+ 1. If you don’t see chromium/webkit symbols, make sure that you built/pushed Release, and that the symlink you created to the .so is valid!
+ 1. If you have symbols, but your callstacks are nonsense, make sure you ran build/gyp\_chromium after setting profiling=1, and rebuilt.
+
+## Add symbols for the kernel
+ 1. By default, /proc/kallsyms returns 0 for all symbols; to fix this, set `/proc/sys/kernel/kptr_restrict` to 0:
+```
+# Quote the command so the redirection happens on the device, not the host.
+adb shell "echo 0 > /proc/sys/kernel/kptr_restrict"
+```
+ 1. See http://lwn.net/Articles/420403/ for explanation of what this does.
+```
+adb pull /proc/kallsyms symbols/kallsyms
+```
+ 1. Now add --kallsyms to your perfhost\_linux command:
+```
+./perfhost_linux report -g -i perf.data --symfs symbols/ --kallsyms=symbols/kallsyms
+``` \ No newline at end of file
diff --git a/docs/proxy_auto_config.md b/docs/proxy_auto_config.md
new file mode 100644
index 0000000..c5cb991
--- /dev/null
+++ b/docs/proxy_auto_config.md
@@ -0,0 +1,27 @@
+# Introduction
+Most systems support manually configuring a proxy for web access, but this is cumbersome and kind of technical, so Chrome also supports [WPAD](http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol) for proxy configuration (enabled if "automatically detect proxy settings" is enabled on Windows).
+
+# Problem
+Currently, WPAD is pretty slow when we're starting up Chrome - we have to query the local network for WPAD servers using DNS (and maybe NetBIOS), and we wait all the way until the resolver timeout before we try sending any HTTP requests if there's no WPAD server. This is a really crappy user experience, since the browser's basically unusable for a couple of seconds after startup if autoconfig is turned on and there's no WPAD server.
+
+# Solution
+There are a couple of simplifying assumptions we make:
+
+ * If there is a WPAD server, it is on the same network as us, and hence likely to respond to lookups far more quickly than a random internet DNS server would.
+ * If we get a lookup success for WPAD, there's overwhelmingly likely to be a live WPAD server. The WPAD script could also be large (!?) whereas the DNS response is necessarily small.
+
+Therefore our proposed solution is that when we're trying to do WPAD resolution, we fail very fast if the WPAD server doesn't immediately respond to a lookup (like, 100ms or less). If there's no WPAD server, we'll time the lookup out in 100ms and get ourselves out of the critical path much faster. We won't time out lookups for explicitly-configured WPAD servers (i.e., custom PAC script URLs) in this fashion; those will still use the normal DNS timeout.
+
+**This could have bad effects on networks with slow DNS or WPAD servers**, so we should be careful to allow users to turn this off, and we should keep statistics as to how often lookups succeed after the timeout.
+
+So here's what our WPAD lookup policy looks like **currently** in practice (assuming WPAD is enabled throughout):
+
+ * If there's no WPAD server on the network, we try to do a lookup for WPAD, time out after two seconds, and disable WPAD. Until this time, no requests can proceed.
+ * If there's a WPAD server and our lookup for it answers in under two seconds, we use that WPAD server (fetch and execute its script) and proceed with requests.
+ * If there's a WPAD server and our lookup for it answers after two seconds, we time out and do not use it (ever) until a network change triggers a WPAD reconfiguration.
+
+Here's what the **proposed** lookup policy looks like in practice:
+
+ * If there's no WPAD server on the network, we try to do a lookup for WPAD, time out after 100ms, and disable WPAD.
+ * If there's a WPAD server and our lookup for it answers in under 100ms or it's explicitly configured (via a custom PAC URL), we use that WPAD server.
+ * If there's a WPAD server and our lookup for it answers after 100ms, we time out and do not use it until a network change. \ No newline at end of file
diff --git a/docs/retrieving_code_analysis_warnings.md b/docs/retrieving_code_analysis_warnings.md
new file mode 100644
index 0000000..34c36f1
--- /dev/null
+++ b/docs/retrieving_code_analysis_warnings.md
@@ -0,0 +1,40 @@
+# Introduction
+
+Several times a day the Chromium code base is built with Microsoft VC++'s /analyze compile option. This does static code analysis which has found numerous bugs (see https://code.google.com/p/chromium/issues/detail?id=427616). While it is possible to visit the /analyze builder page and look at the raw results (http://build.chromium.org/p/chromium.fyi/builders/Chromium%20Windows%20Analyze) this works very poorly.
+
+As of this writing there are 2,702 unique warnings. Some of these are in header files and fire multiple times so there are a total of 11,202 warning lines. Most of these have been examined and found to be false positives. Therefore, in order to sanely examine the /analyze warnings it is necessary to summarize the warnings, and find what is new.
+
+There are scripts to do this.
+
+# Details
+
+The necessary scripts, which currently run on Windows only, are checked in to tools\win\new\_analyze\_warnings. Typical usage is like this:
+
+```
+> set ANALYZE_REPO=d:\src\analyze_chromium
+> retrieve_latest_warnings.bat
+```
+
+The batch file uses the associated Python scripts to retrieve the latest results from the web page, create a summary file, and, if previous results were found, create a file of new warnings. Typical results look like this:
+
+```
+analyze0067_full.txt
+analyze0067_summary.txt
+analyze0067_new.txt
+```
+
+If ANALYZE\_REPO is set then the batch file goes to %ANALYZE\_REPO%\src, does a git pull, then does a checkout of the revision that corresponds to the latest warnings, and then does a gclient sync. The warnings can then be easily correlated to the specific source that triggered them.
+
+# Understanding the results
+
+The new.txt file lists new warnings and fixed warnings. Usually it can accurately identify them, but sometimes all it can say is that the number of instances of a particular warning has changed, which is usually not of interest. If you look at new warnings every day or two then the number of new warnings is usually low enough to be quite manageable.
+
+The summary.txt file groups warnings by type, and then sorts the groups by frequency. Low-frequency warnings are more likely to be real bugs, so focus on those. However, all of the low-frequency warnings have been investigated, so at this time they are unlikely to be real bugs.
+
+The majority of new warnings are variable shadowing warnings. Until -Wshadow is enabled for gcc/clang builds these warnings will continue to appear, and unless they are actually buggy or are particularly confusing it is usually not worth fixing them. One exception would be if you are planning to enable -Wshadow, in which case using the list of relevant shadowing warnings would be ideal.
+
+Some of the warnings say that out-of-range memory accesses will occur, which is pretty scary. For instance "warning C6201: Index '-1' is out of valid index range '0' to '4'". In most cases these are false positives so use your own judgment when deciding whether to fix them.
+
+The full.txt file contains the raw output and should usually be ignored.
+
+If you have any questions then post to the chromium dev mailing list. \ No newline at end of file
diff --git a/docs/script_preprocessor.md b/docs/script_preprocessor.md
new file mode 100644
index 0000000..fc6b9f4
--- /dev/null
+++ b/docs/script_preprocessor.md
@@ -0,0 +1,64 @@
+# Using the Chrome Devtools JavaScript preprocessing feature
+
+The Chrome Devtools JavaScript preprocessor intercepts JavaScript just before it enters V8, the Chrome JS system, allowing the JS to be transcoded before compilation. In combination with page injected JavaScript, the preprocessor allows a complete synthetic runtime to be constructed in JavaScript. Combined with other functions in the `chrome.devtools` extension API, the preprocessor allows new more sophisticated JavaScript-related developer tools to be created.
+
+## API
+
+To use the script preprocessor, write a [chrome devtools extension](http://developer.chrome.com/extensions/devtools.inspectedWindow.html#method-reload) that reloads the Web page with the preprocessor installed:
+```
+chrome.devtools.inspectedWindow.reload({
+ ignoreCache: true,
+ injectedScript: runThisFirst,
+ preprocessorScript: preprocessor
+});
+```
+where `preprocessorScript` is source code (string) for a JavaScript function taking three string arguments: the source to preprocess, the URL of the source, and a function name if the source is a DOM event handler. The `preprocessorScript` function should return a string to be compiled by Chrome in place of the input source. In the case that the source is a DOM event handler, the returned source must compile to a single JS function.
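+
+For a concrete illustration (not part of any shipped sample; the function body is purely hypothetical), a trivial preprocessor that tags each script and leaves DOM event handlers untouched could be written like this and passed as the `preprocessor` string used in the `reload()` call above:
+
+```
+function preprocess(source, url, functionName) {
+  // For DOM event handlers the returned source must still compile to a
+  // single function, so this simple example leaves them unchanged.
+  if (functionName)
+    return source;
+  // Prepend a marker comment; a real tool (e.g. a transcoder such as
+  // Traceur) would rewrite `source` here instead.
+  return '/* preprocessed from ' + url + ' */\n' + source;
+}
+
+// The API takes source text, so pass the function's source as a string.
+var preprocessor = preprocess.toString();
+```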
+
+The [Chrome Preprocessor Example](http://developer.chrome.com/extensions/samples.html) illustrates the API call in a simple chrome devtools extension. Download and unpack the .zip file, use `chrome://extensions` in Developer Mode and load the unpacked extension. Then open or reopen devtools. The Preprocessor panel has a **reload** button that triggers a simple preprocessor.
+
+The preprocessor runs in an isolated world similar to the environment of Chrome content scripts. A `window` object is available but it shares no properties with the Web page `window` object. DOM calls in the preprocessor environment will operate on the Web page, but developers should be cautious about operating on the DOM in the preprocessor. We do not test such operations though we expect the result to resemble calls from the outer function of `<script>` tags.
+
+In some applications the developer may coordinate runtime initialization using the `injectedScript` property in the object passed to the `reload()` call. This is also JavaScript source code; it is compiled into the page ahead of any Web page scripts and thus before any JavaScript is preprocessed.
+
+The preprocessor is compiled once just before the first JavaScript appears. It remains active until the page is reloaded or otherwise navigated. Navigating the Web page back and then forward will result in no preprocessing. Closing devtools will leave the preprocessor in place.
+
+## Use Cases
+
+The script preprocessor supports transcoding input source to JavaScript. Use cases include:
+ * Adding write barriers for Querypoint debugging.
+ * Supporting feature-specific debugging of next-generation EcmaScript using e.g. Traceur.
+ * Integration of development tools like coverage analysis.
+ * Analysis of call sequences for performance tuning.
+
+Several JavaScript compilers support transcoding, including [Traceur](https://github.com/google/traceur-compiler#readme) and [Esprima](http://esprima.org/).
+
+## Implementation
+
+The implementation relies on the Devtools front-end hosting an extension supplying the preprocessor script; the front end communicates with the browser backend over e.g. web sockets.
+
+The devtools extension function call issues a postMessage() event from the devtools extension iframe to the devtools main frame. The event is handled in ExtensionServer.js which forwards it over the [devtools remote debug protocol](https://developers.google.com/chrome-developer-tools/docs/protocol/1.0/page#command-reload). (See [Bug 229971](https://code.google.com/p/chromium/issues/detail?id=229971) for this part of the implementation and its status).
+
+When the preprocessor script arrives in the back end, `InspectorPageAgent::reload` stores the preprocessor script in `m_pendingScriptPreprocessor`. After the browser begins the reload operation, it calls `PageDebuggerAgent::didClearWindowObjectInWorld` which moves the processor source into the `scriptDebugServer()`.
+
+Next the browser prepares the page environment and calls `PageDebuggerAgent::didClearWindowObjectInWorld`. This function clears the preprocessor object pointer and if it is not recreated during the page load, no scripts will be preprocessed. At this point we only store the preprocessor source, delaying the compilation of the preprocessor until just before its first use. This helps ensure that the JS environment we use is fully initialized.
+
+Source to be preprocessed comes from three different places:
+ 1. Web page `<script>` tags,
+ 1. DOM event-listener attributes, e.g. `onload`,
+ 1. JS `eval()` or `new Function()` calls.
+
+When the browser encounters either a `<script>` tag (`ScriptController::executeScriptInMainWorld`) or an element attribute script (`V8LazyEventListener::prepareListenerObject`) we call a corresponding function in InspectorInstrumentation. This function has a fast inlined return path in the case that the debugger is not attached.
+
+If the debugger is attached, InspectorInstrumentation will call the matching function in PageDebuggerAgent (see core/inspector/InspectorInstrumentation.idl). It checks to see if the preprocessor is installed. If not, it returns.
+
+The preprocessor source is stored in PageScriptDebugServer.
+If the preprocessor is installed, we check to see if it is compiled. If not, we create a new `ScriptPreprocessor` object. The constructor uses `ScriptController::executeScriptInIsolatedWorld` to compile the preprocessor in a new isolated world associated with the Web page's main world. If the compilation and outer script execution succeed and if the result is a JavaScript function, we store the resulting function as a `ScopedPersistent<v8::Function>` member of the preprocessor.
+
+If `PageScriptDebugServer::preprocess()` has a value for the preprocessor function, it applies the function to the web page source using `V8ScriptRunner::callAsFunction()`. This calls the compiled JS function in the ScriptPreprocessor's isolated world and retrieves the resulting string.
+
+When the preprocessed JavaScript source runs it may call `eval()` or `new Function()`. These calls cause the V8 runtime to compile source. Immediately before compiling, V8 issues a beforeCompile event which triggers `ScriptDebugServer::handleV8DebugEvent()`. This code is only called if the debugger is active. In the handler we call `ScriptDebugServer::preprocessEval()` to examine the ScriptCompilationTypeInfo, a marker set by V8, to see if we are compiling dynamic code. Only dynamic code is preprocessed in this function and only if we are not executing the preprocessor itself.
+
+During the browser operation, API generation code, debugger console initialization code, injected page script code, debugger information extraction code, and regular web page code enter this function. There is currently no way to distinguish internal or system code from the web page code. However the internal code is all static. By limiting our preprocessing to dynamic code in the beforeCompile handler, we know we are only operating on Web page code. The static Web page code is preprocessed as described above.
+
+## Limitations
+
+We currently do not support preprocessing of WebWorker source code. \ No newline at end of file
diff --git a/docs/seccomp_sandbox_crash_dumping.md b/docs/seccomp_sandbox_crash_dumping.md
new file mode 100644
index 0000000..1397664
--- /dev/null
+++ b/docs/seccomp_sandbox_crash_dumping.md
@@ -0,0 +1,24 @@
+## Introduction
+Currently, Breakpad relies on facilities that are disallowed inside the Linux seccomp sandbox. Specifically, it sets a signal handler to catch faults (currently disallowed), forks a new process, and uses ptrace() (also disallowed) to read the memory of the faulted process.
+
+## Options
+There are three ways we could do crash dumping of seccomp-sandboxed processes:
+ * Find a way to permit signal handling safely inside the sandbox (see below).
+ * Allow the kernel's core dumper to kick in and write a core file.
+ * This seems risky because this code tends not to be well-tested.
+ * This will not work if the process is chrooted, so it would not work if the seccomp sandbox is stacked with the SUID sandbox.
+ * Have an unsandboxed helper process which ptrace()s the sandboxed process to catch faults.
+
+## Signal handling in the seccomp sandbox
+In case a trusted thread faults with a SIGSEGV, we must make sure that an untrusted thread cannot register a signal handler that will run in the context of the trusted thread.
+
+Here are some mechanisms that could make this safe:
+ * sigaltstack() is per-thread. If we opt not to set a signal stack for trusted threads, and set %esp/%rsp to an invalid address, trusted threads will die safely if they fault.
+ * This means the trusted thread cannot set a signal stack on behalf of the untrusted thread once the latter has switched to seccomp mode. The signal stack would have to be set up when the thread is created and not subsequently changed.
+ * clone() has a CLONE\_SIGHAND flag. By omitting this flag, trusted and untrusted threads can have different sets of signal handlers. This means we can opt not to set signal handlers for trusted threads.
+ * Again, per-thread signal handler sets would mean the trusted thread cannot change signal handlers on behalf of untrusted threads.
+ * sigprocmask()/pthread\_sigmask(): These can be used to block signal handling in trusted threads.
+
+## See also
+ * LinuxCrashDumping
+ * [Issue 37728](http://code.google.com/p/chromium/issues/detail?id=37728) \ No newline at end of file
diff --git a/docs/shift_based_development.md b/docs/shift_based_development.md
new file mode 100644
index 0000000..3d8f645
--- /dev/null
+++ b/docs/shift_based_development.md
@@ -0,0 +1,152 @@
+# Introduction
+
+Kai Wang (kaiwang@) and I (Jói Sigurðsson, joi@) experimented with something we called “shift-based development” in the first half of Q1 2013 as a way to work closely together on a componentization that was hard to do in parallel.
+
+I work from Iceland, which is 7 or 8 hours ahead of Kai’s location (Mountain View, California), depending on the time of year (daylight savings time is not observed in Iceland). Kai and I were working on componentizing the Prefs subsystem of Chromium, and it was obvious early on that if we tried to develop in parallel, we would step on each others’ toes very regularly and be forced to do often difficult merges (due to e.g. moving files, renaming things, and so on). The idea came up, since my normal work day ends right around the time Kai’s starts, why not do the development in serial instead. Although I often work an extra hour or two later in the evening, that’s normally for responding to code review requests and email and not so much for coding, so this way we’d be completely free of getting in each others’ way.
+
+The way we implemented this was we set up a bare git repository, and at the end of the day, we would push whatever we were working on to this repository, and let the other know the status of things. This could be a single working branch, or (more often) a pipeline of branches, plus a branch representing the SVN revision we were based off of. To make this work, we were both using the unmanaged git workflow.
+
+The idea was, the next “shift” would pull down the pipeline of branches and continue where the last left off, doing development on the last change in the pipeline if it was incomplete, landing whatever changes could be landed, or starting a new change in the pipeline if all of them were ready for review or ready to land.
+
+One limitation we ran into was that only the owner of an issue in Rietveld could upload a new patch set. To work around this limitation, I added a feature to Rietveld where you can add a COLLABORATOR=xyz@chromium.org line to your change description, which will allow that person to also upload patches and edit the change description (see [Rietveld patch](https://code.google.com/p/rietveld/source/detail?r=a37a6b2495b43e5fdd38292602d933714b7e8ddd)).
+
+In my opinion this was moderately successful. We were probably less productive than we would have been if each of us had been working on completely unrelated things, but certainly more productive than if we had tried to work together on componentizing Prefs in parallel.
+
+With more practice, I think this way of working together could be quite successful. It was also challenging and fun and could be a worthwhile thing to try for folks separated by close to a full working day or more. See details below if you'd like to try.
+
+# Details
+
+The following instructions assume Linux is being used, but should be easily adaptable to other OSes. If you find mistakes in the instructions, please feel free to correct and clarify this Wiki page.
+
+## Setup
+
+On one of our Linux boxes, we set up a bare git repository using [these instructions](http://git-scm.com/book/en/Git-on-the-Server-Setting-Up-the-Server). I'm sure you could also use an existing git service such as github.
+
+Let's assume the IP address of the Linux box hosting the bare
+repository is `12.34.56.78`, the Linux user that provides access to the
+bare repository is named `gitshift`, and the repository is located at
+`/home/gitshift/git/gitshift.git`.
+
+Each developer participating in the shift-based development provides
+their RSA public key (e.g. `~/.ssh/id_rsa.pub`) and we add it to
+`/home/gitshift/.ssh/authorized_keys`; that way as long as you are using
+`ssh-agent` and have run `ssh-add`, you won't need to type in a password
+every time you issue a git command that affects the repository.
+
+Issue these commands to add a remote for this repository to your local
+git repo, and to mirror its initial state. You can call it whatever
+you like, shiftrepo is just an example.
+
+```
+$ git remote add shiftrepo gitshift@12.34.56.78:/home/gitshift/git/gitshift.git
+$ git fetch shiftrepo
+```
+
+You should now be able to do a `git push` of some dummy branch; try
+it out, e.g.
+
+```
+$ git checkout -b shifttest master
+$ echo boo > boo.txt
+$ git add boo.txt && git commit -m .
+$ git push shiftrepo
+```
+
+The shared repository is just a place where you share your branches;
+it is not a place where you do actual work. The actual work should be
+done in your separate local repositories, and you still use e.g. git
+pull (which in our git svn repositories behind the scenes does a `git svn fetch`, etc.).
+
+Shift-based collaboration won't work well (at least not with a
+pipeline of branches) unless you are using an "unmanaged" git checkout
+(search for "unmanaged" on
+[this page](https://code.google.com/p/chromium/wiki/UsingNewGit)).
+
+## Example Working Rules
+
+For the branches we collaborated on, we set up some working
+rules. This may be a good starting set and it worked well enough for us,
+but others could adapt these rules:
+
+a) We had a naming convention for pipelines of branches. E.g. you
+might have branches named shiftrepo/p0-movemore and shiftrepo/p1-sync,
+where p1-sync's upstream branch is p0-movemore, and p0-movemore's
+upstream branch is an SVN revision, generally a version that was LKGR
+at some point. You can find the git commit hash of this revision by
+running
+
+```
+git log -1 --grep=src@ | head -1 | cut -d " " -f2
+```
+
+and the SVN revision number by running
+
+```
+git log -1 --grep=src@ | grep git-svn-id | cut -d@ -f2 | cut -d " " -f1
+```
+
+We followed a naming convention of one or two alphabetic characters
+followed by the sequence number of the branch, followed by a dash and
+a descriptive name for what's going on in that particular branch. The
+one or two alphabetic characters indicated the rough over-arching
+topic (e.g. p for Prefs), and the stuff after the dash can be more
+descriptive.
+
+b) When pushing a pipeline of branches to shiftrepo (where branch A
+depends on branch B and so forth) we made sure to first git pull in
+each dependent branch in sequence, so that each branch in shiftrepo is
+building straight on top of the previous branch.
+
+c) At the end of our shift, we did `git push shiftrepo branchname`
+for each branch.
+
+d) At the start of our shift, we did `git fetch shiftrepo` and then
+for each branch we were collaborating on we did `git checkout branchname && git merge shiftrepo/branchname`. Note that the first command checks
+out the local branch, and the second merges the shiftrepo/ branch into
+it. This does not make the shiftrepo/ branch a parent of the local
+branch.
+
+e) Also at the start of each shift, we updated the local upstream
+branches for each branch to match the upstream relationships that the
+person ending their shift had on their end. One case is when the oldest
+branch in the pipeline has been merged to a new LKGR; in that case we did this:
+
+```
+git branch --set-upstream oldestBranchName `git log -1 --grep=src@ oldestBranchName | head -1 | cut -d " " -f2`
+```
+
+and the other case is if new branches were created during the last shift, e.g. p4-foo was added, in which case we did the following for each new branch:
+
+```
+git branch --set-upstream p4-foo p3-bar
+```
+
+f) For managing old branches, we removed the oldest branch in a
+pipeline when several conditions were met:
+
+> i) The old branch has been checked in.
+
+> ii) The SVN revision of the check-in of the old branch is equal to
+> or older than LKGR, i.e. that change in SVN is included when you
+> sync to LKGR.
+
+> iii) That LKGR has been merged into the old branch, and we've done
+> `git pull` in the next branch after it.
+
+g) We only used the CQ to commit stuff. The fear was (and this hasn't
+really been validated as true or false) that there might be some
+gotchas if we used `git cl dcommit` instead.
+
+h) At the end of our shift, we communicated by email/IM/Hangout to let
+the other know the status of the work, next steps remaining for any
+currently-open branches, and to discuss what might make sense for the
+next branches to work on.
+
+## Random Commands
+
+To push a local branch to shiftrepo: `git push shiftrepo localbranchname`
+
+To push all "matching" branches (i.e. push the latest copy of
+any local branch that has previously been pushed to shiftrepo): `git push shiftrepo`
+
+To delete a branch from shiftrepo, it's weird: `git push shiftrepo :branchname` \ No newline at end of file
diff --git a/docs/spelling_panel_planning_doc.md b/docs/spelling_panel_planning_doc.md
new file mode 100644
index 0000000..2b0be5d
--- /dev/null
+++ b/docs/spelling_panel_planning_doc.md
@@ -0,0 +1,30 @@
+# High Level
+
+There will be a context menu from which the display of the spelling panel can be toggled. This will tie into some methods in webkit that will make sure that the selection is in the right place and in sync with the panel. By catching the messages that the spelling panel sends we can also do the right things when the user asks to correct words, ignore words, move to the next misspelled word and learn words. Additionally, the language of the spellchecker can also be changed through the spelling panel.
+
+# Details
+
+## Toggling the Spelling Panel
+
+Design document for the addition of the spelling panel to Chromium
+
+ * Add a new define, `IDC_SPELLING_PANEL_TOGGLE`, to chrome\_dll\_resource.h.
+ * Add code to `RenderViewContextMenu::AppendEditableItems` to check the state of the spelling panel and make the right decision. Note that this has to touch the function `SpellCheckerPlatform::SpellingPanelVisible()` which should only be called from the main thread (which this is, so it's ok).
+ * Showing the spelling panel works as follows: `RenderViewContextMenu::ExecuteCommand` will need another case to handle the added define. It calls `source_tab_contents_->render_view_host()->ToggleSpellPanel()`, which in turn does `Send(new ViewMsg_ToggleSpellPanel(routing_id(),bool))`. The bool should be the current state of the spelling panel, which is cached on the webkit side to avoid an extra IPC call that would be difficult to execute, due to the limitations with `SpellCheckerPlatform::SpellingPanelVisible()`. This message is caught in `RenderView`, which caches the visibility and then calls `ToggleSpellPanel` on the focused frame. This call ends up at `WebFrameImpl`, which forwards the call to the webkit method `Editor::showSpellingGuessPanel`. From here, webkit does a few things and then calls `advanceToNextMisspelling`, which calls `client()->updateSpellingUIWithMisspelledWord(misspelledWord);` which will eventually end up back in the browser due to an IPC call. We can update the panel using `[[NSSpellChecker sharedSpellChecker] updateSpellingPanelWithMisspelledWord:nextMisspelledWord]`. After that, `client()->showSpellingUI(true)` is called. This puts us back in the webkit glue code in editor\_client\_impl.cc. From here, we grab the current `WebViewDelegate` and call `ShowSpellingUI`. This call ends up in `RenderView`, since it implements `WebViewDelegate`, which finally does `Send(new ViewHostMsg_ShowSpellingPanel(routing_id_, show))`. This is caught in `resource_message_filter.cc`, which forwards the call to `SpellCheckerPlatform::ShowSpellingPanel` where we finally display the panel. Hiding the spelling panel is a similar process (i.e. we go through webkit and eventually receive a message back telling us to hide the spelling panel).
+
+## Spellchecking Words
+ * `advanceToNextMisspelling` in webkit ensures that the currently misspelled word is selected. When the user clicks on the change button in the spelling correction panel, the panel sends a message up the responder chain. We catch this message (`changeSpelling`) in `render_widget_host_view_mac.mm` and interrogate the spelling panel (the sender) to find out what the new word is. We then send an IPC message to the renderer telling it to replace the selected word and advance to the next misspelling, the machinery for which already exists in webkit. The spelling panel will also send the `checkSpelling` message, although this is not very well documented. Anytime we receive this message, we should advance to the next misspelled word, which we can do using `advanceToNextMisspelling`.
+
+## The Find Next Button
+ * When the Find Next button is clicked, the spelling panel sends just the `checkSpelling` message, so little additional work is needed to enable Find Next.
+
+## Learning Words
+ * When the Learn Button is clicked, the spelling panel handles telling OS X to learn the word, but we still need to move to the next word and provide it with the next misspelling. Again, this is just a matter of catching the `checkSpelling` message.
+
+## Ignoring Words
+ * In order to support ignoring words, we need to have unique document tags for every RenderView; this could be done at a finer-grained level, but this is how mainline webkit does it. Whenever a spellcheck request is generated in webkit, it asks for a document tag, which we can obtain from the browser through an IPC call (the actual tags are generated by `[NSSpellChecker uniqueSpellDocumentTag]`). This tag is stored in the RenderView and all spellchecking requests from webkit now bundle the tag. On platforms other than OS X, it is eventually ignored.
+ * When the user clicks the Ignore button in the panel, we receive an `ignoreSpelling` message. We send this to `SpellCheckerPlatform::IgnoreWord`, where we can make the calls to the NSSpellChecker to ignore the word. However, there is a problem: we don't know the tag of the document that we are in. To solve this, we cache the document tag whenever we spellcheck a word and use that document tag.
+ * When a RenderView is closed, it sends an IPC message to the Browser telling it that the document with its tag has been closed. This lets us forget those words that we no longer need to ignore.
+
+## Unresolved Issues
+ * The spelling panel displays `Multilingual` as an option for the spelling language, which is not currently supported by chromium.
diff --git a/docs/system_hardening_features.md b/docs/system_hardening_features.md
new file mode 100644
index 0000000..1bde408
--- /dev/null
+++ b/docs/system_hardening_features.md
@@ -0,0 +1,107 @@
+# Introduction
+
+This is a list of current and planned Chrome OS security features. Each feature is listed together with its rationale and status. This should serve as a checklist and status update on Chrome OS security.
+
+
+
+# Details
+
+## General Linux features
+
+| **Feature** | **Status** | **Rationale** | **Tests** | **Bug** | **More thoughts or work needed?** |
+|:------------|:-----------|:--------------|:----------|:--------|:----------------------------------|
+| No Open Ports | implemented | Reduce attack surface of listening services. | [security\_NetworkListeners](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/security_NetworkListeners) | | Runtime test has to whitelist test-system-only "noise" like sshd. See Issue 22412 (on Google Code) and [ensure\_\*](http://git.chromium.org/gitweb/?p=chromiumos/platform/vboot_reference.git;a=tree;f=scripts/image_signing) for offsetting tests ensuring these aren't on Release builds. |
+| Password Hashing | When there is no TPM, scrypt is used. | Frustrate brute force attempts at recovering passwords. |
+| SYN cookies | needs functional test | In unlikely event of SYN flood, act sanely. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) |
+| Filesystem Capabilities | runtime use only | allow root privilege segmentation | [security\_Minijail0](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/security_Minijail0) |
+| Firewall | needs functional test | Block unexpected network listeners to frustrate remote access. | | Issue 23089 (on Google Code) |
+| PR\_SET\_SECCOMP | needs functional test | Available for extremely restricted sandboxing. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) | Issue 23090 (on Google Code) |
+| AppArmor | not used |
+| SELinux | not used |
+| SMACK | not used |
+| Encrypted LVM | not used |
+| eCryptFS | implemented | Keep per-user data private. | [login\_Cryptohome\*](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests) |
+| glibc Stack Protector | needs functional test | Block string-buffer-on-stack-overflow attacks from rewriting saved IP. | | Issue 23101 (on Google Code) | -fstack-protector-strong is used for almost all packages |
+| glibc Heap Protector | needs functional test | Block heap unlink/double-free/etc corruption attacks. | | Issue 23101 (on Google Code) |
+|glibc Pointer Obfuscation | needs functional test | Frustrate heap corruption attacks using saved libc func ptrs. | | Issue 23101 (on Google Code) | includes FILE pointer mangling |
+| Stack ASLR | needs functional test | Frustrate stack memory attacks that need known locations. | | |
+| Libs/mmap ASLR | needs functional test | Frustrate return-to-library and ROP attacks. | | |
+| Exec ASLR | needs functional test | Needs PIE, used to frustrate ROP attacks. | | |
+| brk ASLR | needs functional test | Frustrate brk-memory attacks that need known locations. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) | |
+| VDSO ASLR | needs functional test | Frustrate return-to-VDSO attacks. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) | |
+| Built PIE | needs functional test | Take advantage of exec ASLR. | [platform\_ToolchainOptions](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/platform_ToolchainOptions) | |
+| Built _FORTIFY\_SOURCE_ | needs functional test | Catch overflows and other detectable security problems. | | |
+| Built RELRO | needs functional test | Reduce available locations to gain execution control. | [platform\_ToolchainOptions](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/platform_ToolchainOptions) | |
+| Built BIND\_NOW | needs functional test | With RELRO, really reduce available locations. | [platform\_ToolchainOptions](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/platform_ToolchainOptions) | |
+| Non-exec memory | needs functional test | Block execution of malicious data regions. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) |
+| /proc/PID/maps protection | needs functional test | Block access to ASLR locations of other processes. |
+| Symlink restrictions | implemented | Block /tmp race attacks. | [security\_SymlinkRestrictions.py](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=blob;f=client/site_tests/security_SymlinkRestrictions/security_SymlinkRestrictions.py) | Issue 22137 (on Google Code) |
+| Hardlink restrictions | implemented | Block hardlink attacks. | [security\_HardlinkRestrictions.py](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=blob;f=client/site_tests/security_HardlinkRestrictions/security_HardlinkRestrictions.py) | Issue 22137 (on Google Code) |
+| ptrace scoping | implemented | Block access to in-process credentials. | [security\_ptraceRestrictions.py](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=blob;f=client/site_tests/security_ptraceRestrictions/security_ptraceRestrictions.py) | Issue 22137 (on Google Code) |
+| 0-address protection | needs functional test | Block kernel NULL-deref attacks. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) |
+| /dev/mem protection | needs functional test | Block kernel root kits and privacy loss. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) | Issue 21553 (on Google Code) | crash\_reporter uses ramoops via /dev/mem |
+| /dev/kmem protection | needs functional test | Block kernel root kits and privacy loss. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) |
+| disable kernel module loading | how about module signing instead? | Block kernel root kits and privacy loss. |
+| read-only kernel data sections | needs functional test | Block malicious manipulation of kernel data structures. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) |
+| kernel stack protector | needs functional test | Catch character buffer overflow attacks. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) |
+| kernel module RO/NX | needs functional test | Block malicious manipulation of kernel data structures. | [kernel\_ConfigVerify](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/kernel_ConfigVerify) |
+| kernel address display restriction | needs config and functional test | Frustrate kernel exploits that need memory locations. | | | Was disabled by default in 3.x kernels. |
+| disable debug interfaces for non-root users | needs config and functional test | Frustrate kernel exploits that depend on debugfs | | Issue 23758 (on Google Code) |
+| disable ACPI custom\_method | needs config and functional test | Frustrate kernel exploits that depend on root access to physical memory | | Issue 23759 (on Google Code) |
+| unreadable kernel files | needs config and functional test | Frustrate automated kernel exploits that depend on access to various kernel resources | | Issue 23761 (on Google Code) |
+| blacklist rare network modules | needs functional test | Reduce attack surface of available kernel interfaces. |
+| syscall filtering | needs functional testing | Reduce attack surface of available kernel interfaces. | | Issue 23150 (on Google Code) |
+| vsyscall ASLR | medium priority | Reduce ROP target surface. |
+| Limited use of suid binaries | implemented | Potentially dangerous, so minimize use. | [security\_SuidBinaries](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/security_SuidBinaries) |
+
+## Chrome OS specific features
+
+ * We use `minijail` for sandboxing:
+ * [Design doc](http://www.chromium.org/chromium-os/chromiumos-design-docs/system-hardening#Detailed_Design_73859539098644_6227793370126997)
+ * Issue 380 (on Google Code)
+ * Current sandboxing status:
+
+| | | | | **Exposure** | | | | | **Privileges** | | **Sandbox** |
+|:-|:-|:-|:-|:-------------|:-|:-|:-|:-|:---------------|:-|:------------|
+| **Service/daemon** | **Overall status** | **Usage** | **Comments** | **Network traffic** | **User input** | **DBus** | **Hardware (udev)** | **FS (config files, etc.)** | **Runs as** | **Privileges needed?** | **uid** | **gid** | **Namespaces** | **Caps** | **seccomp\_filters** |
+| udevd | Low pri | Listens to udev events via netfilter socket | | No | No | No | Yes | No | root | Probably | No | No | No | No | No |
+| session-manager | <font color='yellow'>P2</font>| | Launched from /sbin/session\_manager\_setup.sh | No | No | Yes | No | No | root | Probably | No | No | No | No | No |
+| rsyslogd | Low pri | Logging | | No | No | No | No | Yes | root | Probably | No | | No | No | No |
+| dbus-daemon | Low pri | IPC | Listens on Unix domain socket | Unix domain socket | | Yes | | | messagebus | Yes | Yes | Yes | No | No | No |
+| powerm | <font color='yellow'>P2</font>| Suspend to RAM and system shutdown. Handles input events for hall effect sensor (lid) and power button. | | No | No | Yes | Yes | Yes | root | Probably | No | No | No | No | No |
+| wpa\_supplicant | Low pri | WPA auth | | Yes | Via flimflam | Yes | No | Yes, exposes management API through FS | wpa | Yes | Yes | Yes | No | Yes | No |
+| shill | <font color='red'>P0</font>| Connection manager | | Yes | Yes | Yes | Yes | Yes | root | Probably | No | No | No | No | No |
+| X | <font color='orange'>P1</font>| | | No (-nolisten tcp) | Yes | No | GPU | Yes | root | x86: no, ARM: yes | No | No | No | No | No |
+| htpdate | Low pri | Setting date and time | | Yes | No | No | No | No | ntp | Yes | Yes | Yes | No | No | No |
+| cashewd | Low pri | Network usage tracking | | No | No | Yes | No | No | cashew | Yes | Yes | Yes | No | No | No |
+| chapsd | Low pri | PKCS#11 implementation | | No | No | Yes | No | No | chaps | Yes | Yes | Yes | No | No | No |
+| cryptohomed | <font color='orange'>P1</font>| Encrypted user storage | | No | Yes | Yes | No | No | root | Probably | No | No | No | No | No |
+| powerd | Low pri | Idle or video activity detection. Dimming the backlight or turning off the screen, adjusting backlight intensity. Monitors plug state (on ac or on battery) and battery state-of-charge. | | No | Yes | Yes | Yes | Yes | powerd | Probably | Yes | No | No | No | No |
+| modem-manager | <font color='orange'>P1</font>| Manages 3G modems | | Indirectly | Yes | Yes | Yes | No | root | Probably not | No | No | No | No | No |
+| gavd | <font color='yellow'>P2</font>| Audio/video events and routing | | No | Yes | Yes | Yes | No | gavd | Yes | Yes | Yes | No | No | No |
+| dhcpcd | Low pri | DHCP client | | Yes | Indirectly | No | No | No | dhcp | Yes | Yes | Yes | No | Yes | No |
+| metrics\_daemon | <font color='yellow'>P2</font>| Metrics collection and uploading | | Yes, but shouldn't listen | No | Yes | No | No | root | Probably not | No | No | No | No | No |
+| cros-disks/disks | <font color='orange'>P1</font>| Removable media handling | | No | Yes | Yes | Yes | No | root | Launches minijail | No | No | No | No | No |
+| avfsd | Low pri | Compressed file handling | Launched from cros-disks, uses minijail | Not in Chrome OS | Yes | No | No | Yes | avfs | Yes | Yes | | No | Yes | Yes |
+| update\_engine | <font color='red'>P0</font>| System updates | | Yes | No | Yes | No | No | root | Probably | No | No | No | No | No |
+| cromo | Low pri | Supports Gobi 3G modems | | Indirectly | Yes | Yes | Yes | Probably | cromo | Yes | Yes | Yes | No | No | No |
+| bluetoothd | Low pri | | | Yes | Yes | Yes | Yes | Yes | bluetooth | Yes | Yes | Yes | No | Yes | No |
+| unclutter | Low pri | Hides cursor while typing | | | Yes | | | | chronos | Yes | Yes (via sudo) | No | No | No | No |
+| cras | <font color='yellow'>P2</font>| Audio server | | No | Yes | Yes | Yes | No | cras | Yes | Yes | Yes | No | No | No |
+| tcsd | <font color='yellow'>P2</font>| Portal to the TPM device driver | | No | Yes | Yes | Yes | Yes | tss | Yes | Yes | Yes | No | No | No |
+| keyboard\_touchpad\_helper | <font color='orange'>P1</font>| Disables touchpad when typing | | | Yes | | | | root | Probably not | No | No | No | No | No |
+| logger | Low pri | Redirects stderr for several daemons to syslog | | Indirectly | Indirectly | No | No | No | syslog | Yes | Yes | Yes | No | No | No |
+| login | <font color='yellow'>P2</font>| Helps organize Upstart events | | No | Indirectly | Yes | No | Yes | root | Probably | No | No | No | No | No |
+| wimax-manager | <font color='orange'>P1</font>| | Includes third-party library | Yes | Indirectly | Yes | Yes | Yes | root | Probably not | No | No | No | No | No |
+| mtpd | <font color='yellow'>P2</font>| Manages MTP devices | Includes third-party library | No | Yes | Yes | Yes | No | mtp | Yes | Yes | Yes | No | Not needed | Yes |
+| **Service/daemon** | **Overall status** | **Usage** | **Comments** | **Network traffic** | **User input** | **DBus** | **Hardware (udev)** | **FS (config files, etc.)** | **Runs as** | **Privileges needed?** | **uid** | **gid** | **Namespaces** | **Caps** | **seccomp\_filters** |
+| | | | | **Exposure** | | | | | **Privileges** | | **Sandbox** |
+
+Enforced by [security\_SandboxedServices](http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git;a=tree;f=client/site_tests/security_SandboxedServices)
+
+# References
+
+ * https://wiki.ubuntu.com/Security/Features
+ * http://wiki.debian.org/Hardening
+ * http://www.gentoo.org/proj/en/hardened/hardened-toolchain.xml
+ * http://www.awe.com/mark/blog/20101130.html \ No newline at end of file
diff --git a/docs/test_descriptions.md b/docs/test_descriptions.md
new file mode 100644
index 0000000..c949fbb
--- /dev/null
+++ b/docs/test_descriptions.md
@@ -0,0 +1,58 @@
+See [Testing and infrastructure](https://sites.google.com/a/chromium.org/dev/developers/testing) for more information.
+
+|accessibility\_unittests||
+|:-----------------------|:|
+|angle\_unittests ||
+|app\_list\_unittests ||
+|ash\_unittests ||
+|aura\_unittests ||
+|base\_i18n\_perftests ||
+|base\_perftests |Performance tests for base module.|
+|base\_unittests |Tests the base module.|
+|blink\_heap\_unittests ||
+|blink\_platform\_unittests||
+|breakpad\_unittests ||
+|[browser\_tests](https://sites.google.com/a/chromium.org/dev/developers/testing/browser-tests)|Tests the browser UI. Cannot inject user input or depend on focus/activation behavior because it can be run in parallel processes and/or with a locked screen, headless, etc. For tests sensitive to that, use interactive\_ui\_tests. For example, when tests need to navigate to chrome://hang (see chrome/browser/ui/webui/ntp/new\_tab\_ui\_uitest.cc)|
+|cacheinvalidation\_unittests||
+|chromedriver\_unittests ||
+|content\_browsertests |Similar to browser\_tests, but with a minimal shell contained entirely within content/. This test, as well as the entire content module, has no dependencies on chrome/.|
+|content\_gl\_tests ||
+|content\_perftests ||
+|content\_unittests ||
+|courgette\_unittests ||
+|crypto\_unittests ||
+|curvecp\_unittests ||
+|device\_unittests |Tests for the device (Bluetooth, HID, USB, etc.) APIs.|
+|ffmpeg\_tests ||
+|ffmpeg\_unittests ||
+|gfx\_unittests ||
+|gpu\_tests ||
+|interactive\_ui\_tests |Like browser\_tests, but these tests do things like changing window focus, so that the machine running the test can't be used while the test is running. May include browsertests (derived from InProcessBrowserTest) to run in-process in case when the test is sensitive to focus transitions or injects user input/mouse events.|
+|ipc\_tests |Tests the IPC subsystem for communication between browser, renderer, and plugin processes.|
+|jingle\_unittests ||
+|media\_unittests ||
+|memory\_test ||
+|net\_perftests |Performance tests for the disk cache and cookie storage.|
+|net\_unittests |Unit tests for the network stack.|
+|[page\_cycler\_tests](https://sites.google.com/a/chromium.org/dev/developers/testing/page-cyclers)||
+|performance\_ui\_tests ||
+|plugin\_tests |Tests the plugin subsystem.|
+|ppapi\_unittests |Tests to verify Chromium recovery after hanging or crashing of renderers.|
+|printing\_unittests ||
+|reliability\_tests ||
+|safe\_browsing\_tests ||
+|sql\_unittests ||
+|startup\_tests |Test startup performance of Chromium.|
+|sync\_integration\_tests||
+|sync\_unit\_tests ||
+|tab\_switching\_test |Test tab switching functionality.|
+|telemetry\_unittests |Tests for the core functionality of the Telemetry performance testing framework. Not performance-sensitive.|
+|telemetry\_perf\_unittests|Smoke tests to catch errors running performance tests before they run on the chromium.perf waterfall. Not performance-sensitive.|
+|test\_shell\_tests |A collection of tests within the Test Shell.|
+|[test\_installer](https://sites.google.com/a/chromium.org/dev/developers/testing/windows-installer-tests)|Tests Chrome's installer for Windows|
+|ui\_base\_unittests |Unit tests for //ui/base.|
+|unit\_tests |The kitchen sink for unit tests. These tests cover several modules within Chromium.|
+|url\_unittests ||
+|views\_unittests ||
+|wav\_ola\_test ||
+|webkit\_unit\_tests || \ No newline at end of file
diff --git a/docs/theme_creation_guide.md b/docs/theme_creation_guide.md
new file mode 100644
index 0000000..4238c60
--- /dev/null
+++ b/docs/theme_creation_guide.md
@@ -0,0 +1,353 @@
+
+# Theme Creation Guide
+
+The Google Chrome Extensions Help Docs provide some info on how to create a theme as an extension, but for a pure designer, the details of the `*`.cc file can be overwhelming and confusing. Also, having clear documentation enables a new designer to start designing right away (and makes life easier!).
+
+Experimenting with the creation of a theme and the possible UI elements that could be themed helped create this help document (a work in progress). It would be helpful if **people contributed** to this document in any way possible; that would make it a good Theme Creation Guide!
+
+---
+
+**So how do you create a theme for Google Chrome?**
+
+**Things you'll need to create a theme**
+ 1. A basic text editor, preferably one that shows line numbers, because when packaging a theme Chrome might point out errors in the control file, manifest.json. Notepad++, a free and very useful editor, is recommended.
+ 1. An image editor - preferably an advanced editor that allows you to create good content. Simple editors can do the job of creating themes, but rather sloppily. Photoshop is recommended; alternatively, you may use free editors like GIMP and Paint.NET ([click here](http://sixrevisions.com/graphics-design/10-excellent-open-source-and-free-alternatives-to-photoshop/)).
+ 1. If you are using Photoshop, you can download [this Chrome window design](http://www.chromium.org/user-experience/visual-design/chrome_0.2_psd.zip) which is broken down in layers, and makes it easy to visualize what the theme should look like.
+ 1. Some creative ideas about what the theme is going to look like - the colors, patterns and design.
+ 1. Package your theme and publish it in one of the following ways -
+ 1. [Upload](https://chrome.google.com/webstore/developer/dashboard) the theme to the [Chrome Web Store](https://chrome.google.com/webstore/)
+ 1. Use Chrome to package it by yourself. More information can be found [here](http://code.google.com/chrome/extensions/hosting.html)
+ 1. Package it by yourself. More information can be found [here](http://code.google.com/chrome/extensions/packaging.html).
+
+_Now that you have the needed tools, let's get started._
+
+First, create a folder with the name of the theme; inside it, create a folder for the images (usually named _images_, but the name is your choice).
+
+_Then you need to create two things:_
+The first is the set of images (PNG images) needed for the theme, which go in the _images_ folder (the next section lists the images that can be created for a theme). The second is a file named "manifest.json", which must sit directly inside the theme folder (here is an example file [manifest.json](http://src.chromium.org/viewvc/chrome/trunk/src/chrome/test/data/extensions/theme/manifest.json?revision=72690&view=markup); open it with a basic text editor to see the contents, and remember that all notation in this file is in **lower case**).
+
+Then we package the theme and test it.
+
+There are a number of things that can be themed in Chrome.
+
+(See the [Description of Elements](#Description_of_Elements.md) section for a detailed explanation.)
+
+### Image Elements
+_Image elements are defined under the "images" section in the manifest.json file._
+
+|Number|Description|manifest.json Notation |Recommended Size (W x H)|
+|:-----|:----------|:----------------------|:-----------------------|
+|1 |The frame of the chrome browser/the area that is behind the tabs.|["theme\_frame"](#theme_frame.md)|∞ x 80 |
+|1. 1 |The same area as above, only that this represents the inactive state.|["theme\_frame\_inactive"](#theme_frame_inactive.md)| |
+|1. 2 |The same area under the incognito mode, when the window is active|["theme\_frame\_incognito"](#theme_frame_incognito.md)| |
+|1. 3 |The same area but in the incognito mode, when the window is inactive.|["theme\_frame\_incognito\_inactive"](#theme_frame_incognito_inactive.md)| |
+|2 |This represents both the current tab and the toolbar together|["theme\_toolbar"](#theme_toolbar.md)|∞ x 120 |
+|3 |This is the area that covers the tabs that are not active|["theme\_tab\_background"](#theme_tab_background.md)|∞ x 65 |
+|3. 1 |The same thing as above, but used for the incognito mode|["theme\_tab\_background\_incognito"](#theme_tab_background_incognito.md)| |
+|4 |(Not yet confirmed) The tab background for something!|["theme\_tab\_background\_v"](#theme_tab_background_v.md)| |
+|5 |This is the theme's inner background-the large white space is covered by this|["theme\_ntp\_background"](#theme_ntp_background.md)|Recommended Minimum Size for images 800 x 600 |
+|6 |This is the image that appears at the top left of the frame|["theme\_frame\_overlay"](#theme_frame_overlay.md)|1100 x 40 |
+|6. 1 |Same as above but displayed when window is inactive|["theme\_frame\_overlay\_inactive"](#theme_frame_overlay_inactive.md)| |
+|7 |This is the area that covers the toolbar buttons|["theme\_button\_background"](#theme_button_background.md)|30 x 30 |
+|8 |This is the image that will be displayed in the 'theme created by' section|["theme\_ntp\_attribution"](#theme_ntp_attribution.md)| |
+|9 |The background for the window control buttons (close, maximize, etc.,)|["theme\_window\_control\_background"](#theme_window_control_background.md)| |
+
+
+### Color Elements
+_Color elements are defined under the "colors" section in the manifest.json file._
+
+Colors are entered as RGB values; some elements can also take an opacity value.
+e.g. `"ntp_section" : [15, 15, 15, 0.6]`
+
+
+|Number | Description |manifest.json Notation|
+|:------|:------------|:---------------------|
+|10 |The color of the frame, that covers the smaller outer frame|["frame"](#Frame.md) |
+|10. 1 |The color of the same element, but in inactive mode|["frame\_inactive"](#Frame_inactive.md)|
+|10. 2 |The color of the same element, but in incognito mode|["frame\_incognito"](#Frame_incognito.md)|
+|10. 3 |The color of the same element, but in incognito, inactive mode|["frame\_incognito\_inactive"](#Frame_incognito_inactive.md)|
+|10. 4 |The color of the toolbar background (visible by pressing Ctrl+B)|["toolbar"](#toolbar.md)|
+|11 |The color of text, in the title of current tab|["tab\_text"](#tab_text.md)|
+|12 |The color of text, in the title of all inactive tabs|["tab\_background\_text"](#tab_background_text.md)|
+|13 |The color of the bookmark element's text|["bookmark\_text"](#bookmark_text.md)|
+|14 |The theme's inner background color|["ntp\_background"](#ntp_background.md)|
+|14. 1 |The color of all the text that comes in the inner background area|["ntp\_text"](#ntp_text.md)|
+|14. 2 |The color of the links that appear in the background area|["ntp\_link"](#ntp_link.md)|
+|14. 3 |The color of the underline of all links in the background area|["ntp\_link\_underline"](#ntp_link_underline.md)|
+|14. 4 |The color of the section frames when mouse over|["ntp\_header"](#ntp_header.md)|
+|14. 5 |The color of Recently closed tabs area's bg and frame of quick links|["ntp\_section"](#ntp_section.md)|
+|14. 6 |The color of text in the section|["ntp\_section\_text"](#ntp_section_text.md)|
+|14. 7 |The color of the links that appear in the section area|["ntp\_section\_link"](#ntp_section_link.md)|
+|14. 8 |The color of underline of links in the section area|["ntp\_section\_link\_underline"](#ntp_section_link_underline.md)|
+|15 |Unconfirmed yet-The color of the window control buttons (close, maximize, etc.)|["control\_background"](#control_background.md)|
+|16 |The background color of all the toolbar buttons|["button\_background"](#button_background.md)|
+
+### Tint Elements
+Tint elements change the hue, saturation and lightness of images.
+
+_Tint elements come under the "tints" section in the manifest.json file._
+
+|Number|Description|manifest.json Notation|
+|:-----|:----------|:---------------------|
+|17 |The color tint that can be applied to various buttons in chrome|["buttons"](#buttons.md)|
+|18 |The color tint that can be applied to the frame of chrome|["frame"](#frame.md) |
+|18. 1 |The color tint that is applied when the chrome window is inactive|["frame\_inactive"](#frame_inactive.md)|
+|18. 2 |The color tint to the frame-in incognito mode|["frame\_incognito"](#frame_incognito.md)|
+|18. 3 |Same as above, but when the window is inactive (and in incognito mode)|["frame\_incognito\_inactive"](#frame_incognito_inactive.md)|
+|19 |The color tint of the inactive tabs in incognito mode|["background\_tab"](#background_tab.md)|
+
+### UI Property Elements
+_Property elements come under the "properties" section in the manifest.json file._
+
+|Number|Description|manifest.json Notation|
+|:-----|:----------|:---------------------|
+|20 |The property that tells the alignment of the inner background image|["ntp\_background\_alignment"](#ntp_background_alignment.md)|
+|21 |This property specifies if the above background should be repeated|["ntp\_background\_repeat"](#ntp_background_repeat.md)|
+|22 |This lets you select the type of Google Chrome header you want|["ntp\_logo\_alternate"](#ntp_logo_alternate.md)|
+
+_Phew! lots of things to theme! Actually not!_
+
+These are the elements that Google Chrome allows a user to theme, but it is up to you to decide which elements to edit. Elements you don't change can be left alone (they will keep their default value/image). Remember that each element goes into its own section in the manifest file - color elements should be listed under "colors", image elements under "images", and so on.
+
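+As a quick reference, a minimal manifest.json pulling together a few elements from each section might look roughly like the sketch below. The file names and values are placeholders rather than required ones, and recent versions of Chrome also expect the manifest version entry shown here:
+
+```
+{
+  "manifest_version": 2,
+  "version": "1.0",
+  "name": "My Example Theme",
+  "theme": {
+    "images": {
+      "theme_frame": "images/theme_frame.png",
+      "theme_toolbar": "images/theme_toolbar.png",
+      "theme_tab_background": "images/theme_tab_background.png",
+      "theme_ntp_background": "images/theme_ntp_background.png"
+    },
+    "colors": {
+      "ntp_background": [20, 20, 20],
+      "ntp_text": [240, 240, 240],
+      "ntp_section": [15, 15, 15, 0.6]
+    },
+    "tints": {
+      "buttons": [0.33, 0.5, 0.47]
+    },
+    "properties": {
+      "ntp_background_alignment": "bottom",
+      "ntp_background_repeat": "no-repeat",
+      "ntp_logo_alternate": 1
+    }
+  }
+}
+```
+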
+_Let's go through the elements one by one._
+
+## Description of Elements
+
+### Basic Theme Elements
+_These elements are the starting point, by using only them, you can quickly create a basic theme._
+
+ * #### theme\_frame
+
+This is an image that represents the area behind the tabs. There are no strict dimensions for this image; the area of the frame not covered by it is covered by the color element [frame](#Frame.md). It is helpful to know that this image repeats along the x-axis by default, so if you create a small square image it will automatically be tiled along the x-axis - which means you can create patterns using small images.
+
+Remember this image doesn't repeat along the y-axis, so make sure it is tall enough to cover the toolbar area - anything over 80px in height is good, usually with alpha transparency fading in at the bottom so that the image blends with the "frame" color. (You can also create a larger frame image that extends over and covers the frame borders.)
+
+Otherwise you might see a small gap at the extreme top left of the frame when the window is restored (not maximized), caused by the image being the wrong size.
+
+Alternatively, you can create an image with a very long width - long enough that the repetition is never seen - which lets you create one continuous design for the frame. However, this might slow down the loading of the theme, since high-resolution screens require an even wider image (or else you'll see the image repeat). Note that if you don't include this image, the default blue Chrome frame is displayed; the color element ["frame"](#Frame.md) doesn't override this.
+
+ * #### theme\_toolbar
+
+This is an image that covers the area of the current tab and the toolbar below it.
+
+Make sure this image is over 119px in height, because the find bar (which appears when you press Ctrl+F) shares the toolbar image; the width is up to you. As with theme\_frame, this image also tiles along the x-axis, so you can either create a pattern or make a very wide image for the toolbar. Remember that the toolbar contains some buttons, and when the bookmarks bar is visible (Cmd+B or Ctrl+B) it too occupies space in the toolbar.
+
+So don't make the design too crowded, or the toolbar will not be visually appealing. Usually a square, tiling image is preferred for the toolbar, either a gradient or just a plain color.
+
+ * #### theme\_tab\_background
+
+This image represents the tabs - all the inactive tabs.
+
+Usually a less saturated version of theme\_toolbar is used for this. You may design something else, but make sure the design lets the user distinguish the inactive tabs from the active one!
+This image also tiles along the x-axis by default; the height can be around 65px, and the width is up to you.
+
+ * #### theme\_ntp\_background
+This is the image displayed in the large white space of the new tab page (the default page containing various quick access elements - see the help image); it can use alpha transparency. Note that the notation ntp stands for new tab page, so every element whose notation contains ntp corresponds to some element inside the new tab page.
+
+There are two ways to create the inner background for the browser: use a large image without repetition/tiling, or use a small image that repeats along the x-axis and/or y-axis (see [ntp\_background\_repeat](#ntp_background_repeat.md)).
+
+You can also select the alignment of this image; by default it is center aligned, but you may align it any way you want (see [ntp\_background\_alignment](#ntp_background_alignment.md)).
+
+### Advanced Theme Elements
+_Use these to create a more advanced theme._
+ * #### theme\_frame\_inactive
+
+This image represents the area behind the tabs when the Chrome window is out of focus/inactive.
+
+Everything that applies to [theme\_frame](#theme_frame.md) applies here too. To avoid making the theme heavy, you can usually use the [frame\_inactive](#frame_inactive.md) tint to indicate that the window is inactive - it is more efficient than creating a whole new image. But it is up to the designer to decide whether the inactive state gets a separate image or just a color tint.
+
+ * #### theme\_frame\_incognito
+
+This is similar to [theme\_frame](#theme_frame.md), but this image represents the frame of a window in incognito mode. You may redesign the image specially for incognito mode, or ignore this element so that whatever you made for [theme\_frame](#theme_frame.md) is tinted (see [frame\_incognito](#frame_incognito.md)) and used in incognito mode (by default it gets a dark tint in incognito mode).
+
+ * #### theme\_frame\_incognito\_inactive
+
+This image is similar to theme\_frame\_inactive, but it is used for the inactive frame of a window in incognito mode (see [frame\_incognito\_inactive](#frame_incognito_inactive.md)).
+
+ * #### theme\_tab\_background\_incognito
+
+This image represents the inactive tabs in incognito mode. Alternatively you can use the [background\_tab](#background_tab.md) tint to affect the inactive tabs in incognito mode, but there is a slight catch that some may want to avoid: even if you tint the inactive tabs of the incognito window, they are transparent by default, so they show the area behind them, i.e. the frame. If you want to avoid this, include this image.
+
+ * #### theme\_tab\_background\_v
+
+So far the role of this image is a mystery that someone needs to unlock!
+
+ * #### theme\_frame\_overlay
+
+This image is displayed at the top left corner of the frame, over the [theme\_frame](#theme_frame.md) image. It does not repeat by default, so it can be used when you don't want the frame area design to repeat. As with theme\_frame, anything over 80px in height is good, usually with a graded alpha transparency at the bottom so that the image blends with the "frame" color.
+
+ * #### theme\_frame\_overlay\_inactive
+
+This is similar to [theme\_frame\_overlay](#theme_frame_overlay.md), but it is displayed when the browser window is inactive. If you do not include this image, the theme\_frame\_overlay image is darkly tinted and used by default to denote the inactive frame.
+
+ * #### theme\_button\_background
+
+This image specifies the background for the various buttons (stop, refresh, back, forward, etc.) in the toolbar. It is optional; if you do not include it, the color element [button\_background](#button_background.md) provides the buttons' background color.
+
+Whatever image you provide, the browser skips two pixels at the top and left of the image and maps a square 25px region to each button as its background, with the button's icon/symbol (stop, refresh, back, forward, etc.) displayed at the center.
+
+ * #### theme\_ntp\_attribution
+This image is displayed at the bottom right corner of the new tab page. Chrome automatically adds a heading "Theme created by" and below it displays whatever image you provide as theme\_ntp\_attribution.
+
+A good practice is to create a small PNG, just big enough for an author name (and contact details if needed), with an alpha-transparent background. A larger, more colorful image will attract attention but also makes the theme heavier (the file size of the theme grows with a bigger PNG) - it's your choice anyway.
+
+ * #### theme\_window\_control\_background
+
+This image specifies the background for the window control buttons (minimize, maximize, close and new tab). It is not necessary unless you really need to change the control buttons' background. If the image is included, the browser skips 1px at the top and left of the image and maps a 16px-high region to each button; the width varies per button.
+
+If this image is not included, the control buttons assume the background color specified in the color element button\_background.
+
+<a href='Hidden comment: NOTE: The following four section headings Frame, Frame_inactive, Frame_incognito, Frame_incognito_inactive - all these contain a capitalised F intentionally so that internal page navigation is possible'></a>
+
+ * #### Frame
+
+This color element specifies the color of the frame area of the browser (the area behind the tabs plus the border). It fills the area that is not covered by the [theme\_frame](#theme_frame.md) image.
+The format in the manifest.json file is: `"frame" : [R,G,B]`
+
+ * #### Frame\_inactive
+
+This color element specifies the color of the frame area of the browser when the window is inactive/out of focus (the area behind the tabs plus the border). It fills the area that is not covered by the [theme\_frame](#theme_frame.md) image.
+The format in the manifest.json file is: `"frame_inactive" : [R,G,B]`
+
+ * #### Frame\_incognito
+
+This is a color element similar to ["frame"](#Frame.md) ,but under the incognito mode.
+
+ * #### Frame\_incognito\_inactive
+
+This is a color element similar to ["frame\_inactive"](#Frame_inactive.md) ,but under the incognito mode.
+
+ * #### toolbar
+
+This color element specifies the background color of the bookmarks bar, which is visible only in the new tab page when you press Ctrl+B or Cmd+B. It has a 1px border whose color is defined by [ntp\_header](#ntp_header.md). This element can also contain an opacity value that affects the transparency of the bar. Note that opacity values are floats ranging from 0 to 1, with 0 being fully transparent and 1 fully opaque.
+
+The format to specify this element in the manifest.json file is : `"toolbar" : [R,G,B,opacity]`
+
+Eg. `"toolbar" : [25, 154, 154, 0.5]`
+
+Note that this element also specifies the background color of the floating status bar (at the bottom of the page). Using opacity values here makes the status bar transparent, but the text inside it keeps an opaque background of the same color, so only the area without text is transparent.
+
+ * #### tab\_text
+
+This color element specifies the color of the title text of the current tab (the tab title of the current tab).
+
+ * #### tab\_background\_text
+
+This is a color element that specifies the color of the title text of all the inactive tabs/out of focus tabs.
+
+ * #### bookmark\_text
+This color element specifies the color of the text of bookmarks in the toolbar and of the text in the download bar that appears at the bottom.
+Note: during a download, the text color indicating the number of MB downloaded is not configurable.
+
+ * #### ntp\_background
+
+This color element specifies the background color of the new tab page (it covers any area not mapped by [theme\_ntp\_background](#theme_ntp_background.md)). If alpha transparency is used in the image element theme\_ntp\_background, make sure ntp\_background is chosen so that it matches that image.
+
+ * #### ntp\_text
+
+This color element specifies the color of all the text that appears in the new tab page (tips, quick access labels, etc.).
+
+ * #### ntp\_link
+
+This color element specifies the color of all the links that may appear in the new tab page (currently the links under list view and the links in the tips at the bottom of the new tab page take their color from this).
+
+ * #### ntp\_link\_underline
+
+This color element specifies the color of the underline of all links in the new tab page (the underline color of the ntp\_link element).
+
+ * #### ntp\_header
+
+This color element specifies the color of the frame around the quick link buttons when you hover the mouse over them. It also specifies the 1px border color of the [toolbar](#toolbar.md) element and the ntp\_section element, and the color of the three small buttons in the new tab page: thumbnail view, list view and change page layout.
+
+ * #### ntp\_section
+
+This color element specifies the border color of the quick link buttons (see the help image) and the background color of the recently closed bar that appears above the tips area. Like the [toolbar](#toolbar.md) element, it can also contain an opacity value.
+
+ * #### ntp\_section\_text
+
+This color element specifies the color of all the text that appears in the section area (currently only the text "Recently closed" derives its color from this).
+
+ * #### ntp\_section\_link
+
+This is a color element that specifies the color of all the links that appear in the section area. Currently all the links in the "Recently closed" bar take their color from this.
+
+ * #### ntp\_section\_link\_underline
+
+This is a color element that specifies the color of the underlines of all the links that appear in the section area (it underlines the ntp\_section\_link element).
+
+ * #### control\_background
+
+This should specify the color of the window control buttons (minimize, maximize and close), but I couldn't confirm that. It seems that the following element overrides it.
+
+ * #### button\_background
+
+This is a color element that specifies the background color of all the buttons in the toolbar area (back, forward, bookmark, etc.). Like the [toolbar](#toolbar.md) element, it can also contain an opacity value, which affects the opacity of the window control buttons (minimize, maximize, close).
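+
+_Putting the color elements together, a "colors" section might look like this (illustrative RGB values only):_
+
+```
+"colors": {
+  "frame": [55, 100, 150],
+  "toolbar": [25, 154, 154, 0.8],
+  "tab_text": [255, 255, 255],
+  "tab_background_text": [200, 200, 200],
+  "bookmark_text": [255, 255, 255],
+  "ntp_background": [20, 20, 20],
+  "ntp_text": [240, 240, 240],
+  "ntp_link": [120, 180, 255]
+}
+```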
+
+The following are tint elements. The tint element [buttons](#buttons.md) is the most common one, but you may include the others too. Before moving on to them, you should know how tints work. Tint elements assign color tints to certain elements of the browser area. A tint is a set of floating-point values ranging from 0 to 1, e.g. `"buttons" : [0.3,0.5,0.5]` (since the values range from 0 to 1, even 0.125 or 0.65 are valid values).
+
+ * The first value is the hue; 0 and 1 both mean red, since the hue wraps around the color wheel.
+
+ * The next is the saturation value, which sets the vibrancy of the color; 0 means completely desaturated and 1 means fully saturated.
+
+ * The last value is the lightness/brightness; 0 means least bright and 1 means most bright.
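+
+_For example, a complete "tints" section might look like this (illustrative values only):_
+
+```
+"tints": {
+  "buttons": [0.3, 0.5, 0.5],
+  "frame": [0.6, 0.4, 0.4],
+  "frame_inactive": [0.6, 0.3, 0.3],
+  "background_tab": [0.6, 0.2, 0.2]
+}
+```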
+
+ * #### buttons
+
+This is a tint element, that is used to specify a color tint for icons inside all the buttons in the toolbar (back, forward, refresh, etc.).
+
+ * #### frame
+
+This is a tint element, that is used to specify a color tint for the frame area. Whatever image you've created for the frame area will be tinted with the color that you specify here.
+
+ * #### frame\_inactive
+
+This is a tint element, similar to the tint element frame, but the tint is applied when the window is inactive/out of focus.
+
+ * #### frame\_incognito
+
+This is a tint element, that is used to specify a color tint for the frame area in incognito mode. Whatever image you've created for the frame area will be tinted with the color that you specify here.
+
+ * #### frame\_incognito\_inactive
+
+This is a tint element, that is used to specify a color tint for the frame area in incognito mode, but when the window is inactive/out of focus.
+
+ * #### background\_tab
+
+This is a tint element that specifies the color tint of the inactive tabs in incognito mode.
+
+ * #### ntp\_background\_alignment
+
+This is a property element, used to control the alignment of the image element [theme\_ntp\_background](#theme_ntp_background.md). The value for this element is entered as follows:
+`"ntp_background_alignment" : "VALUE"`
+
+In place of VALUE, you can enter "top", "bottom", "left" or "right". You can also use combinations like "left top", "right bottom", etc. The difference is that "left" alone aligns the background image to the left center of the new tab page, while "left top" aligns it to the top left corner.
+Eg, `"ntp_background_alignment" : "left bottom"`
+_(Note that the default alignment of the background image is center)._
+
+ * #### ntp\_background\_repeat
+
+This is a property element, used to control the repetition of the image element [theme\_ntp\_background](#theme_ntp_background.md). It is specified as:
+
+`"ntp_background_repeat" : "VALUE"`
+
+In the place of VALUE, you can enter either "repeat","no-repeat","repeat-x" or "repeat-y" .Depending upon the image you've created as the background you can choose to repeat the image along x-axis or y-axis or turn repeat off, since repeat is on by default!.
+
+ * #### ntp\_logo\_alternate
+
+This is a property element that specifies which Google Chrome header logo you want for your theme. It is specified as follows:
+
+`"ntp_logo_alternate" : VALUE`
+
+Note that this element's value must not be entered in double quotes! In place of VALUE you can enter 0 or 1. Choosing 0 gives you a colorful Google Chrome header logo inside the new tab page; choosing 1 gives you an all-white one.
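+
+_Putting the property elements together, a "properties" section might look like this (illustrative values only):_
+
+```
+"properties": {
+  "ntp_background_alignment": "left bottom",
+  "ntp_background_repeat": "no-repeat",
+  "ntp_logo_alternate": 1
+}
+```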
+
+### Packaging
+
+That ends the description of the various theme elements. Once you have the images you need, and after creating the manifest.json file, you are ready to test your theme. In the latest beta version, you have the option to package the theme as an extension. To do this, follow these steps (to learn more about packaging, visit [this link](http://code.google.com/chrome/extensions/packaging.html)):
+ 1. Open the Chrome browser (it has to be the latest beta version).
+ 1. Open the options menu (click the wrench in the toolbar).
+ 1. Choose the Tools submenu, then Extensions.
+ 1. In the page that appears, click on the "Pack extension" button.
+ 1. You'll be asked to browse and locate the extension root directory - remember the folder we created with the theme name? That folder is the root directory.
+ 1. In the dialog box that comes up, click OK.
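+
+_If you prefer the command line, the same packaging step can be done with Chrome's `--pack-extension` flag (a sketch with a hypothetical path to your theme's root directory; availability may depend on your Chrome version):_
+
+```
+chrome --pack-extension=/path/to/my_theme_root
+```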
+
+Now the theme has been packaged into an extension (if there were errors in the manifest.json file, you'll be notified before the extension is created, and the extension will not be packaged until the errors are fixed). Now open the extension file in Chrome (it's located next to the root folder); you'll be asked if you want to continue - click continue and you'll see your theme. Once satisfied with the theme, create a zip file of the root directory and submit it to the [extensions gallery](https://chrome.google.com/extensions). \ No newline at end of file
diff --git a/docs/tpm_quick_ref.md b/docs/tpm_quick_ref.md
new file mode 100644
index 0000000..4aa0bbf
--- /dev/null
+++ b/docs/tpm_quick_ref.md
@@ -0,0 +1,32 @@
+# Introduction
+
+This page is meant to help keep track of [TPM](Glossary.md) use across the system. It may not be up-to-date at any given point, but it's a wiki so you know what to do.
+
+# Details
+
+ * TPM ownership management:
+> > http://git.chromium.org/gitweb/?p=chromiumos/platform/cryptohome.git;a=blob;f=README.tpm
+
+ * TPM\_Clear is done (as in vboot\_reference), but in the firmware code itself, on switching between dev and verified modes and in recovery. (TODO: link code)
+
+ * TPM owner password clearing (triggered at sign-in by chrome):
+> > http://git.chromium.org/gitweb/?p=chromium/chromium.git;a=blob;f=chrome/browser/chromeos/login/login_utils.cc;h=9c4564e074c650bd91c27243c589d603740793bb;hb=HEAD#l861
+
+ * PCR extend (no active use elsewhere):
+> > http://git.chromium.org/gitweb/?p=chromiumos/platform/vboot_reference.git;a=blob;f=firmware/lib/tpm_bootmode.c
+
+ * NVRAM use for OS rollback attack protection:
+> > http://git.chromium.org/gitweb/?p=chromiumos/platform/vboot_reference.git;a=blob;f=firmware/lib/rollback_index.c
+
+ * Tamper evident storage:
+> > http://git.chromium.org/gitweb/?p=chromiumos/platform/cryptohome.git;a=blob;f=README.lockbox
+
+ * Tamper-evident storage for avoiding runtime device management mode changes:
+> > http://git.chromium.org/gitweb/?p=chromium/chromium.git;a=blob;f=chrome/browser/chromeos/login/enrollment/enterprise_enrollment_screen.cc
+
+ * User key/passphrase and cached data protection:
+> > http://git.chromium.org/gitweb/?p=chromiumos/platform/cryptohome.git;a=blob;f=README.homedirs
+
+ * A TPM in a Chrome device has an EK certificate that is signed by an intermediate certificate authority that is dedicated to the specific TPMs allocated for use in Chrome devices. OS-level self-validation of the platform TPM should be viable with this or chaining any other trust expectations.
+
+ * The TPM is used for per-user certificate storage (NSS+PKCS#11) via opencryptoki, soon to be replaced by chaps; update the links here when chaps stabilizes. (Each user's PKCS#11 key store is kept in their home directory to ensure it is tied to the local user account.) This functionality includes VPN and 802.1x-related keypairs. \ No newline at end of file
diff --git a/docs/updating_clang.md b/docs/updating_clang.md
new file mode 100644
index 0000000..1721dce
--- /dev/null
+++ b/docs/updating_clang.md
@@ -0,0 +1,11 @@
+# Updating clang
+
+ 1. Sync your Chromium tree to the latest revision to pick up any plugin changes and test the new compiler against ToT
+ 1. Update clang revision in tools/clang/scripts/update.sh, upload CL to rietveld
+ 1. Run tools/clang/scripts/package.py to create a tgz of the binary (mac and linux)
+ 1. Do a local clobber build with that clang (mac and linux). Check that everything builds fine and no new warnings appear. (Optional if the revision picked in 1 was vetted by other means already.)
+ 1. Upload the binaries using gsutil (see the sketch after this list); they will appear at http://commondatastorage.googleapis.com/chromium-browser-clang/index.html
+ 1. Run goma package update script to push these packages to goma, send email
+ 1. `git cl try -m tryserver.chromium.mac -b mac_chromium_rel_ng -b mac_chromium_asan_rel_ng -b mac_chromium_gn_dbg -b ios_rel_device_ninja && git cl try -m tryserver.chromium.linux -b linux_chromium_gn_dbg -b linux_chromium_chromeos_dbg_ng -b linux_chromium_asan_rel_ng -b linux_chromium_chromeos_asan_rel_ng -b android_clang_dbg_recipe -b linux_chromium_trusty32_rel -b linux_chromium_rel_ng && git cl try -m tryserver.blink -b linux_blink_rel`
+ 1. Commit roll CL from the first step
+ 1. The bots will now pull the prebuilt binary, and goma will have a matching binary, too. \ No newline at end of file
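+
+A hypothetical sketch of the gsutil upload in step 5, assuming the bucket implied by the index URL above and an illustrative file name and directory layout (the exact layout and ACL flags may differ):
+
+```
+gsutil cp -a public-read clang-<rev>.tgz gs://chromium-browser-clang/Mac/clang-<rev>.tgz
+```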
diff --git a/docs/updating_clang_format_binaries.md b/docs/updating_clang_format_binaries.md
new file mode 100644
index 0000000..8ba523e
--- /dev/null
+++ b/docs/updating_clang_format_binaries.md
@@ -0,0 +1,94 @@
+Instructions on how to update the [clang-format binaries](ClangFormat.md) that come with a checkout of Chromium.
+
+<h2>Prerequisites</h2>
+
+You'll need a Windows machine, a Linux machine, and a Mac; all capable of building clang-format. You'll also need permissions to upload to the appropriate google storage bucket. Chromium infrastructure team members have this, and others can be granted the permission based on need. Talk to ncarter or hinoka about getting access.
+
+<h2>Pick a head svn revision</h2>
+
+Consult http://llvm.org/svn/llvm-project/ for the current head revision. This will be the CLANG\_REV you'll use later to check out each platform to a consistent state.
+
+<h2>Build a release-mode clang-format on each platform</h2>
+
+Follow the official instructions here: http://clang.llvm.org/get_started.html.
+
+Windows step-by-step:
+```
+[double check you have the tools you need]
+where cmake.exe # You need to install this.
+where svn.exe # Maybe fix with: set PATH=%PATH%;D:\src\depot_tools\svn_bin
+"c:\Program Files (x86)\Microsoft Visual Studio 12.0\vc\vcvarsall.bat" amd64_x86
+
+
+# You must change this value (see above)
+set CLANG_REV=198831
+
+[from a clean directory, check out and build]
+rmdir /S /Q llvm
+rmdir /S /Q llvm-build
+mkdir llvm
+mkdir llvm-build
+svn co http://llvm.org/svn/llvm-project/llvm/trunk@%CLANG_REV% llvm
+cd llvm\tools
+svn co http://llvm.org/svn/llvm-project/cfe/trunk@%CLANG_REV% clang
+cd ..\..\llvm-build
+set CC=cl
+set CXX=cl
+cmake -G Ninja ..\llvm -DCMAKE_BUILD_TYPE=Release -DLLVM_USE_CRT_RELEASE=MT -DLLVM_ENABLE_ASSERTIONS=NO -DLLVM_ENABLE_THREADS=NO -DPYTHON_EXECUTABLE=d:\src\depot_tools\python276_bin\python.exe
+ninja clang-format
+bin\clang-format.exe --version
+```
+
+Mac & Linux step-by-step:
+```
+# Check out.
+export CLANG_REV=198831 # You must change this value (see above)
+rm -rf llvm
+rm -rf llvm-build
+mkdir llvm
+mkdir llvm-build
+svn co http://llvm.org/svn/llvm-project/llvm/trunk@$CLANG_REV llvm
+cd llvm/tools
+svn co http://llvm.org/svn/llvm-project/cfe/trunk@$CLANG_REV clang
+cd ../../llvm-build
+
+# Option 1: with cmake
+ MACOSX_DEPLOYMENT_TARGET=10.9 cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=NO -DLLVM_ENABLE_THREADS=NO ../llvm/
+ time caffeinate ninja clang-format
+ strip bin/clang-format
+
+ # (On Linux, to build with clang, which produces smaller binaries, add this to your cmake invocation.
+ # On Mac, the system compiler is already clang so it's not needed there.)
+ -DCMAKE_C_COMPILER=$PWD/../chrome/src/third_party/llvm-build/Release+Asserts/bin/clang -DCMAKE_CXX_COMPILER=$PWD/../chrome/src/third_party/llvm-build/Release+Asserts/bin/clang++
+```
+Platform specific notes:
+ * Windows: Visual Studio 2013 only.
+ * Linux: so far (as of January 2014) we've just included a 64-bit binary. It's important to disable threading, else clang-format will depend on libatomic.so.1 which doesn't exist on Precise.
+ * Mac: Remember to set `MACOSX_DEPLOYMENT_TARGET` when building! If you get configure warnings, you may need to install XCode 5 and avoid a goma environment.
+
+<h2>Upload each binary to google storage</h2>
+
+Copy the binaries into your chromium checkout (under `src/buildtools/(win|linux64|mac)/clang-format(.exe?)`).
+For each binary, you'll need to run upload\_to\_google\_storage.py according to the instructions in [README.txt](https://code.google.com/p/chromium/codesearch#chromium/src/buildtools/clang_format/README.txt). This will upload the binary into a publicly accessible google storage bucket and update the `.sha1` file in your Chrome checkout. You'll check the `.sha1` file (but NOT the clang-format binary) into source control. In order to be able to upload, you'll need write permission to the bucket -- see the prerequisites.
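+
+For reference, a typical invocation looks roughly like this (a sketch; the bucket name is an assumption, so follow README.txt for the authoritative command):
+
+```
+cd src/buildtools
+upload_to_google_storage.py -b chromium-clang-format linux64/clang-format
+```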
+
+<h2>Copy the helper scripts and update README.chromium</h2>
+
+There are some auxiliary scripts that ought to be kept updated in lockstep with the clang-format binary. These get copied into third\_party/clang\_format/scripts in your Chromium checkout.
+
+The `README.chromium` file ought to be updated with version and date info.
+
+<h2>Upload a CL according to the following template</h2>
+
+```
+Update clang-format binaries and scripts for all platforms.
+
+I followed these instructions:
+https://code.google.com/p/chromium/wiki/UpdatingClangFormatBinaries
+
+The binaries were built at clang revision ####### on ####DATETIME####.
+
+BUG=
+```
+
+The change should <b>always</b> include new `.sha1` files for each platform (we want to keep these in lockstep) and should <b>never</b> include the `clang-format` binaries directly. The change should <b>always</b> update `README.chromium`.
+
+clang-format binaries should weigh in at 1.5MB or less. Watch out for size regressions. \ No newline at end of file
diff --git a/docs/use_find_bugs_for_android.md b/docs/use_find_bugs_for_android.md
new file mode 100644
index 0000000..862c147
--- /dev/null
+++ b/docs/use_find_bugs_for_android.md
@@ -0,0 +1,32 @@
+# Introduction
+
+[FindBugs](http://findbugs.sourceforge.net) is an open source static analysis tool from the University of Maryland that looks for potential bugs in Java class files. We have some scripts to run it over the Java code at build time.
+
+# How To Run
+
+For gyp builds, add `run_findbugs=1` to your `GYP_DEFINES`.
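+
+For example (a sketch, assuming a typical setup where `GYP_DEFINES` is exported in your shell):
+
+```
+export GYP_DEFINES="$GYP_DEFINES run_findbugs=1"
+build/gyp_chromium
+```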
+
+For gn builds, add `run_findbugs=true` to the args you pass to `gn gen`:
+
+```
+gn gen --args='target_os="android" run_findbugs=true'
+```
+
+Note that running findbugs will add time to your build. The amount of additional time depends on the number of targets findbugs runs on, though it will usually be between 1 and 10 minutes.
+
+Some of the warnings are false positives. In general, they should be suppressed using [@SuppressFBWarnings](https://code.google.com/p/chromium/codesearch#chromium/src/base/android/java/src/org/chromium/base/annotations/SuppressFBWarnings.java). In the rare event that a warning should be suppressed across the entire code base, it should be added to the [exclusion file](https://code.google.com/p/chromium/codesearch#chromium/src/build/android/findbugs_filter/findbugs_exclude.xml) instead. If you modify this file:
+
+ * Include a comment that says what you're suppressing and why.
+ * The existing suppressions should give you an idea of the syntax. See also the FindBugs documentation. Note that the documentation doesn't seem totally accurate (there's probably some version skew between the online docs and the version of FindBugs we're using) so you may have to experiment a little.
+
+# Chromium's [FindBugs](http://findbugs.sourceforge.net) plugin
+
+We have a [FindBugs plugin](https://code.google.com/p/chromium/codesearch#chromium/src/tools/android/findbugs_plugin/) to enforce Chromium-specific Java rules. It currently detects:
+ * Synchronized method
+ * Synchronized this
+
+# [FindBugs](http://findbugs.sourceforge.net) on the Bots
+
+[FindBugs](http://findbugs.sourceforge.net) is configured to run on:
+ * [android\_clang\_dbg\_recipe](http://build.chromium.org/p/tryserver.chromium.linux/builders/android_clang_dbg_recipe) on the commit queue
+ * [Android Clang Builder (dbg)](http://build.chromium.org/p/chromium.linux/builders/Android%20Clang%20Builder%20(dbg)) on the main waterfall \ No newline at end of file
diff --git a/docs/useful_urls.md b/docs/useful_urls.md
new file mode 100644
index 0000000..2629039
--- /dev/null
+++ b/docs/useful_urls.md
@@ -0,0 +1,47 @@
+# Introduction
+
+Chromium has a lot of different pages for a lot of different things. This page aims to be a repository of links that people may find useful.
+
+## Build Status
+
+| http://build.chromium.org/p/chromium/console | Main buildbot waterfall |
+|:---------------------------------------------|:------------------------|
+| http://chromium-status.appspot.com/lkgr | Last Known Good Revision. Trybots pull this revision from trunk. |
+| http://chromium-status.appspot.com/revisions | List of the last 100 potential LKGRs |
+| http://build.chromium.org/p/chromium/lkgr-status/ | Status dashboard for LKGR |
+| http://build.chromium.org/p/tryserver.chromium/waterfall?committer=developer@chromium.org | Trybot runs, by developer |
+| http://chromium-status.appspot.com/status_viewer | Tree uptime stats |
+| http://chromium-cq-status.appspot.com | Commit queue status |
+| http://codereview.chromium.org/search?closed=3&commit=2&limit=50 | Pending commit queue jobs |
+| http://chromium-build-logs.appspot.com/ | Search for historical test failures by test name |
+| http://chromium-build-logs.appspot.com/list | Filterable list of most recent build logs |
+
+## For Sheriffs
+| http://build.chromium.org/p/chromium.chromiumos/waterfall?show_events=true&reload=120&failures_only=true | List of failing bots for a waterfall (chromium.chromiumos as an example) |
+|:---------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------|
+| http://build.chromium.org/p/chromium.linux/waterfall?show_events=true&reload=120&builder=Linux%20Builder%20x64&builder=Linux%20Builder%20(dbg) | Monitor one or multiple bots (Linux Builder x64 and Linux Builder (dbg) on chromium.linux as an example) |
+| http://build.chromium.org/p/chromium.win/waterfall/help | Customize the waterfall view for a waterfall (using chromium.win as an example) |
+| http://chromium-sheriffing.appspot.com | Alternate waterfall view that helps with test failure triage |
+| http://test-results.appspot.com/dashboards/flakiness_dashboard.html | Lists historical test results for the bots |
+
+## Release Information
+| https://omahaproxy.appspot.com/viewer | Current release versions of Chrome on all channels |
+|:--------------------------------------|:---------------------------------------------------|
+| https://omahaproxy.appspot.com/ | Looks up the revision of a build/release version |
+
+## Source Information
+| http://cs.chromium.org/ | Code Search |
+|:------------------------|:------------|
+| http://cs.chromium.org/SEARCH_TERM | Code Search for a specific SEARCH\_TERM |
+| http://src.chromium.org/viewvc/chrome/ | ViewVC History Viewer |
+| http://git.chromium.org/gitweb/?p=chromium.git;a=summary | Gitweb History Viewer |
+| https://chromium.googlesource.com/chromium/src/+log/b6cfa6a..9a2e0a8?pretty=fuller | Git changes in revision range (also works for build numbers) |
+| http://build.chromium.org/f/chromium/perf/dashboard/ui/changelog.html?url=/trunk/src&mode=html&range=SUCCESS_REV:FAILURE_REV | SVN changes in revision range |
+| http://build.chromium.org/f/chromium/perf/dashboard/ui/changelog_blink.html?url=/trunk&mode=html&range=SUCCESS_REV:FAILURE_REV | Blink changes in revision range |
+
+## Communication
+| http://groups.google.com/a/chromium.org/group/chromium-dev/topics | Chromium Developers List |
+|:------------------------------------------------------------------|:-------------------------|
+| http://groups.google.com/a/chromium.org/group/chromium-discuss/topics | Chromium Users List |
+| http://code.google.com/p/chromium/source/list | Wiki History (SVN-based) |
+| http://code.google.com/p/chromium/wiki/UserHandleMapping | Chromium User Mapping | \ No newline at end of file
diff --git a/docs/user_handle_mapping.md b/docs/user_handle_mapping.md
new file mode 100644
index 0000000..2cd6c27
--- /dev/null
+++ b/docs/user_handle_mapping.md
@@ -0,0 +1,106 @@
+For Chromium contributors that have different nicks on other domains.
+
+| **@chromium.org** | **IRC nick(s)** | **@google.com** |
+|:------------------|:----------------|:----------------|
+| aa | aboodman | aa |
+| abw | abw-cr | abw |
+| acleung | acleung | acleung |
+| adamk | aklein | adamk |
+| ajwong | awong | ajwong |
+| aluebs | aluebs | aluebs |
+| amanda | awalker | awalker |
+| amit | joshia | joshia |
+| amstan | amstan | amstan |
+| ananta | iyengar | iyengar |
+| anantha | anantha | ananthak |
+| avi | motownavi | avi |
+| ben | beng | beng |
+| benhansen | benhansen | benhansen |
+| benhenry | benry | benhenry |
+| bnutter | | bnutter |
+| bxx | bxs | bxx |
+| ccameron | ccameron | ccameron |
+| csharp | csharp1 | csharp |
+| cthomp | cthomp | cthomp |
+| darin, fishd | fishd | darin |
+| dbeam | danbeam | dbeam |
+| dmurph | dmurph | dmurph |
+| eaugusti | eriq | eaugustine |
+| erg | eglaysher | erg |
+| eroman | eroman | ericroman |
+| esprehn | esprehn | esprehn |
+| evan | evmar | evanm |
+| feng | feng | fqian |
+| fmeawad | fmeawad-cr | fmeawad |
+| gab | gabc | gab |
+| hans | hwennborg | hwennborg |
+| haven | periodic | haven |
+| ian | ifette | ifette |
+| inferno | inferno | aarya |
+| iannucci | iannucci | iannucci |
+| jaikk | | jaikk |
+| jansson | jansson | jansson |
+| janx | janx | janx |
+| jam | jam2 | jabdelmalek |
+| jchaffraix | jchaffraix | jchaffraix |
+| jeremy | jeremymos | playmobil |
+| jln | julien` | jln |
+| jochen | jochen`__` | eisinger |
+| johnnyg | johnny\_g | johnnyg |
+| joi | joisig | joi |
+| jonross | jonrossca | jonross |
+| jshin | jshin | jungshik |
+| jww | jww`__` | jww |
+| jyasskin | jyasskin | jyasskin |
+| karen | kareng | kareng |
+| keescook | kees | keescook |
+| koz | | jameskozianski |
+| kuchhal | kuchhal | rahulk |
+| levin | dave\_levin | levin |
+| lfg | lfg`_` | lfg |
+| luken | luken\_chromium | luken |
+| mark | markmentovai | mmentovai |
+| mattm | mattm\_c, mattm\_g | mattm |
+| mbarbella | mbarbella | mbarbella |
+| mmeade | mmeade | mmeade |
+| mednik | mednik | mednik |
+| mgaba | mgaba | mgaba |
+| mlinck | dullb0yj4ck | mlinck |
+| msw | msw`_` | msw |
+| nick | nickcarter | ncarter |
+| oleg | | olege |
+| ortuno | gortuno | ortuno |
+| pam | pamg | pamg |
+| paulirish | paul\_irish | paulirish |
+| patrick | pjohnson | pjohnson |
+| peter | beverloo | beverloo |
+| phajdan.jr | phajdan-jr | phajdan |
+| rch | RyanHamilton | rch |
+| rdevlin.cronin | rdcronin | rdcronin |
+| reillyg | reillyeon | reillyg |
+| rlp | rpetterson | rlp |
+| robliao | robliao | robliao |
+| rsleevi | sleevi, rsleevi | sleevi |
+| sarah | | sarahgordon |
+| satish | satish`_` | satish |
+| scheglov | | scheglov |
+| scottbyer | sbyer | scottbyer |
+| shans | | shanestephens |
+| shrike | shrike | shrike |
+| smut | Sana | smut |
+| svaldez | dvorak42, svaldez | svaldez |
+| tansell | mithro@mithis.com | tansell |
+| thestig | leiz | thestig |
+| tim | timsteele | timsteele |
+| tony | tony^work | tc |
+| tonyg | tonyg-cr | tonyg |
+| tyoshino | tyoshino | tyoshino |
+| vabr | vabr | vabr |
+| vadimt | vadimt | vadimt |
+| viettrungluu | trungl | vtl |
+| wad | redpig | drewry |
+| wez | real\_wez | wez |
+| wjmaclean | seumas, wjmaclean | wjmaclean, wjm, seumas |
+| yoz | yaws | yoz |
+| zmo | zhenyao | zmo |
+| zty | zty | zty | \ No newline at end of file
diff --git a/docs/using_a_linux_chroot.md b/docs/using_a_linux_chroot.md
new file mode 100644
index 0000000..e95eabd
--- /dev/null
+++ b/docs/using_a_linux_chroot.md
@@ -0,0 +1,62 @@
+# Using a chroot
+
+If you want to run layout tests and you're not running Lucid, you'll get errors due to version differences in libfreetype. To work around this, you can use a chroot.
+
+# Basic Instructions
+
+ * Run `build/install-chroot.sh`. On the prompts, choose to install a 64-bit Lucid chroot and activate all your secondary mount points.
+ * sudo edit `/etc/schroot/mount-lucid64bit` and uncomment `/run` and `/run/shm`. Verify that your mount points are correct and uncommented: for example, if you have a second hard drive at `/src`, you should have an entry like `/src /src none rw,bind 0 0`.
+ * Enter your chroot as root with `sudo schroot -c lucid64`. Run `build/install-build-deps.sh`, then exit the rooted chroot.
+ * Delete your out/ directory if you had a previous non-chrooted build.
+ * To enter your chroot as normal user, run `schroot -c lucid64`.
+ * Now run `build/gyp_chromium`, then compile and run DumpRenderTree within the chroot.
+
+
+# Tips and Tricks
+
+## NFS home directories
+The chroot install will be installed by default in /home/$USER/chroot. If your home directory is inaccessible by root (typically because it is mounted on NFS), then move this directory onto your local disk and change the corresponding entry in `/etc/schroot/mount-lucid64bit`.
+
+## Goma builds
+If you get mysterious compile errors (glibconfig.h or dbus header error), make sure that goma is running in the chroot, or don't use goma for builds inside the chroot.
+
+## Different color prompt
+
+I use the following code in my .zshrc file to change the color of my prompt in the chroot.
+```
+# load colors
+autoload colors zsh/terminfo
+if [[ "$terminfo[colors]" -ge 8 ]]; then
+ colors
+fi
+for color in RED GREEN YELLOW BLUE MAGENTA CYAN WHITE; do
+ eval PR_$color='%{$terminfo[bold]$fg[${(L)color}]%}'
+ eval PR_LIGHT_$color='%{$fg[${(L)color}]%}'
+done
+PR_NO_COLOR="%{$terminfo[sgr0]%}"
+
+# set variable identifying the chroot you work in (used in the prompt below)
+if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
+ debian_chroot=$(cat /etc/debian_chroot)
+fi
+
+if [ "xlucid64" = "x$debian_chroot" ]; then
+ PS1="%n@$PR_GREEN% lucid64$PR_NO_COLOR %~ %#"
+else
+ PS1="%n@$PR_RED%m$PR_NO_COLOR %~ %#"
+fi
+```
+
+## Running X apps
+
+I also have `DISPLAY=:0` in my `$debian_chroot` section so I can run test\_shell or layout tests without manually setting my display every time. Your display number may vary (`echo $DISPLAY` outside the chroot to see what your display number is).
+
+You can also use `Xvfb` if you only want to [run tests headless](http://code.google.com/p/chromium/wiki/LayoutTestsLinux#Using_an_embedded_X_server).
+
+## Having layout test results open in a browser
+
+After running layout tests, you should get a new browser tab or window that opens results.html. If you get an error "Failed to open [file:///path/to/results.html](file:///path/to/results.html)", check the following conditions.
+
+ 1. Make sure `DISPLAY` is set. See the [Running X apps](https://code.google.com/p/chromium/wiki/UsingALinuxChroot#Running_X_apps) section above.
+ 1. Install `xdg-utils`, which includes `xdg-open`, a utility for finding the right application to open a file or URL with.
+ 1. Install [Chrome](https://www.google.com/intl/en/chrome/browser/). \ No newline at end of file
diff --git a/docs/using_build_runner.md b/docs/using_build_runner.md
new file mode 100644
index 0000000..fecb20b
--- /dev/null
+++ b/docs/using_build_runner.md
@@ -0,0 +1,65 @@
+Instructions on how to use the buildrunner to execute builds.
+
+# Introduction
+
+The buildrunner is a script which extracts buildsteps from builders and runs them locally on the slave. It is being developed to simplify development on and reduce the complexity of the Chromium build infrastructure. When provided a master name (with `master.cfg` inside) and a builder, it will either execute steps sequentially or output information about them.
+
+`runbuild.py` is the main script, while `runit.py` is a convenience script that sets up `PYTHONPATH` for you. Note that you can use `runit.py` to conveniently run other scripts in the `build/` directory.
+
+# Master/Builder Selection
+
+`scripts/tools/runit.py scripts/slave/runbuild.py --list-masters`
+
+will list all masters in the search path. Select a mastername (alternatively, use --master-dir to use a specific directory).
+
+Next, we need to pick a builder or slave hostname to build. The slave hostname is only used to locate a suitable builder, so it need not be the actual hostname of the slave you're on.
+
+To list all the builders in a master, run:
+
+`scripts/tools/runit.py scripts/slave/runbuild.py mastername --list-builders`
+
+Example, if you're in `/home/user/chromium/build/scripts/slave/`:
+
+`scripts/tools/runit.py scripts/slave/runbuild.py chromium --list-builders`
+
+will show you which builders are available under the `chromium` master.
+
+# Step Inspection and Execution
+
+You can check out the list of steps without actually running them like so:
+
+`scripts/tools/runit.py scripts/slave/runbuild.py chromium build56-m1 --list-steps`
+
+(Note that some exotic steps, such as gclient steps, won't show up in buildrunner.) You can show the exact commands of most steps with --show-commands:
+
+`scripts/tools/runit.py scripts/slave/runbuild.py chromium build56-m1 --show-commands`
+
+Finally, you can run the build with:
+
+`scripts/tools/runit.py scripts/slave/runbuild.py mastername buildername/slavehost`
+
+Example, if you're in `/home/user/chromium/build/scripts/slave/`:
+
+`scripts/tools/runit.py scripts/slave/runbuild.py chromium build56-m1`
+
+or
+
+`scripts/tools/runit.py scripts/slave/runbuild.py chromium 'Linux x64'`
+
+`--stepfilter` and `--stepreject` can be used to filter which steps to execute based on a regex (you can see the step names with `--list-steps`). See `--help` for more info.
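+
+For example, to run only the compile-related steps (a sketch; the exact filter regex depends on your builder's step names):
+
+`scripts/tools/runit.py scripts/slave/runbuild.py chromium build56-m1 --stepfilter='compile.*'`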
+
+# Properties
+
+Build properties and factory properties can be specified using `--build-properties` and `--factory-properties`, respectively. Since build properties contain a master and builder directive, any master or builder options on the CLI are ignored. Properties can be inspected with either or both of --output-build-properties or --output-factory-properties.
+
+# Monitoring
+
+You can specify a log destination (including '`-`' for stdout) with `--logfile`. Passing `--annotate` enables annotator output.
+
+# Using Within a Buildstep
+
+An annotated buildrunner can be invoked via chromium\_commands.AddBuildStep(). Set the master, builder, and any stepfilter/reject options in factory\_properties. For example usage, see f\_linux\_runnertest in master.chromium.fyi/master.cfg and check\_deps2git\_runner in chromium\_factory.py.
+
+# More Information
+
+Running with `--help` provides more detailed usage and options. If you have any questions or issues please contact xusydoc@chromium.org. \ No newline at end of file
diff --git a/docs/vanilla_msysgit_workflow.md b/docs/vanilla_msysgit_workflow.md
new file mode 100644
index 0000000..6986397
--- /dev/null
+++ b/docs/vanilla_msysgit_workflow.md
@@ -0,0 +1,68 @@
+# Introduction
+
+This describes how you can use msysgit on Windows to work on the Chromium git repository, without setting up Cygwin or hacking the `git cl`, `git try` and other scripts to work under a regular Windows shell.
+
+The basic setup is to set up a regular git checkout on a Linux (or Mac) box, and use this exclusively to create your branches and run tools such as `git cl`, and have your Windows box treat this git repository as its upstream.
+
+The advantage is, you get a pretty clean setup on your Windows box that is unlikely to break when the various custom git tools like `git cl` change. The setup is also advantageous if you regularly build code on Windows and then want to test it on Linux, since all you need to test on your Linux box is a `git push` from Windows followed by building and testing under Linux.
+
+The disadvantage is that it adds an extra layer between the Chromium git repo and your Windows checkout. In my experience (joi@chromium.org) this does not actually slow you down much, if at all.
+
+The most frequently used alternative to this workflow on Windows seems to be using Cygwin and creating a checkout directly according to the instructions at UsingGit. The advantage of that approach is that you avoid the extra layer; the disadvantage seems to be mostly speed and having to run a Cygwin shell rather than a normal Windows cmd.
+
+Please note that the instructions below are mostly from memory so they may be slightly incorrect and steps may be missing. Please feel free to update the page with corrections and additions based on your experience.
+
+# Details
+
+Create your checkouts:
+ 1. Create a git checkout on your Linux box, with read/write abilities, as per UsingGit. The rest of these instructions assume it is located at /home/username/chrome
+ 1. Install msysgit on your Windows box.
+ 1. Install Pageant on your Windows box (search for Pageant on the UsingGit page for details). This is not necessary, but if you don't do it you will be prompted for your SSH password every time you perform a git operation on your Windows box that needs to communicate with your Linux box.
+ 1. On Windows, you want to do something like: `git clone ssh://username@yourmachine.com/home/username/chrome`
+
+Starting a new topic branch:
+ 1. Linux: `git branch mytopic` (or you may want to use e.g. the LKGR script from UsingGit).
+ 1. Windows: `git fetch` then `git checkout mytopic`
+
+Normal workflow on Windows:
+ 1. ...edit/add some files...
+ 1. `git commit -a -m "my awesome change"`
+ 1. ...edit more...
+ 1. `git commit -a -m "follow-up awesomeness"`
+ 1. `git push`
+
+Normal workflow on Linux:
+ * (after `git push` from windows): `git cl upload && git try`
+ * (after LGTM and successful try): `git cl commit` (but note the `tot-mytopic` trick in the pipelining section below)
+
+Avoiding excessive file changes (to limit amount of Visual Studio rebuilds when switching between branches):
+ * Base all your different topic branches off of the same base branch; I generally create a new LKGR branch once every 2-3 working days and then `git merge` it to all of my topic branches.
+ * To track which base branch topic branches are based off, you can use a naming convention; I use e.g. lk0426 for an LKGR branch created April 26th, then use e.g. lk0426-topic1, lk0426-topic2 for the topic branches that have all changes merged from lk0426. I (joi@chromium.org) also have a script to update the base branch for topic branches and rename them - let me know if interested.
+ * Now that all your branch names are prefixed with the base revision (whether you use my naming convention or not), you can know beforehand, when you switch between branches on Windows, whether to expect a major rebuild or a minor rebuild. If you are able to remember which of your topic branches have .gyp changes and which don't (or I guess you could use `git diff` to figure this out), then you will also have a good idea of whether you need to run `gclient runhooks` when you switch branches. Another nice thing is that you should never have to run `gclient sync` when you switch between branches with the same base revision, unless some of your branches have changes to DEPS files.
+
+Pipelining:
+ 1. Linux:
+ 1. `git checkout lk0426-mytopic`
+ 1. `git checkout -b lk0426-mytopic-nextstep`
+ 1. Windows:
+ 1. `git fetch && git checkout lk0426-mytopic-nextstep`
+ 1. ...work as usual...
+ 1. `git push`
+ 1. Later, on Linux:
+ 1. `make_new_lkgr_branch lk0428`
+ 1. `git merge lk0428 lk0426-mytopic`
+ 1. `git branch -m lk0426-mytopic lk0428-mytopic` (to rename)
+ 1. `git merge lk0428-mytopic lk0426-mytopic-nextstep`
+ 1. `git branch -m lk0426-mytopic-nextstep lk0428-mytopic-nextstep` (to rename)
+ 1. Later, when you want to commit one of the earlier changes in the pipeline (all on Linux). The reason you may want to create the separate tip-of-tree branch is that if the trybots show your change failing on tip-of-tree and you need to do significant additional work, this avoids having to roll back the tip-of-tree merge:
+ 1. `git checkout lk0428-mytopic`
+ 1. `git checkout -b tot-mytopic`
+ 1. `git fetch && git merge remotes/origin/trunk`
+
+Janitorial work on Windows:
+ * When you rename branches on the Linux side, the Windows repo will not know automatically; so if you already had a branch `lk0426-mytopic` open on Windows and then `git fetch`, you will still have `lk0426-mytopic` even if that was renamed on the Linux side to `lk0428-mytopic`.
+ * Dealing with this is straight-forward; you just `git checkout lk0428-mytopic` to switch to the renamed (and likely updated) branch. Then `git branch -d lk0426-mytopic` to get rid of the tracking branch for the older name. Then, occasionally, `git remote prune origin` to prune remote tracking branches (you don't normally see these listed unless you do `git branch -a`).
+
+Gotchas:
+ * You should normally create your branches on Linux only, so that the Windows repo gets tracking branches for them. Any branches you create in the Windows repo would be local to that repository, and so will be non-trivial to push to Linux.
+ * `git push` from Windows will fail if your Linux repo is checked out to the same branch. It is easy to switch back manually, but I also have a script I call `safepush` that switches the Linux-side branch for you before pushing; let me (joi@chromium.org) know if interested. \ No newline at end of file
diff --git a/docs/windows_incremental_linking.md b/docs/windows_incremental_linking.md
new file mode 100644
index 0000000..8d0149a
--- /dev/null
+++ b/docs/windows_incremental_linking.md
@@ -0,0 +1,5 @@
+Include in your `GYP_DEFINES`: `incremental_chrome_dll=1`. This turns on the equivalent of Use Library Dependency Inputs for the large components in the build.
+
+And if you want faster builds, it would be best to include `component=shared_library` too, unless you need a fully static link for some reason.
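+
+For example, a typical setup might look like this (a sketch of a Windows command prompt session; merge the defines into whatever `GYP_DEFINES` you already use):
+
+```
+set GYP_DEFINES=incremental_chrome_dll=1 component=shared_library
+gclient runhooks
+ninja -C out\Debug chrome
+```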
+
+Note that `incremental_chrome_dll=1` will probably not work on Visual Studio 2008 builds. It may not work on Visual Studio 2010 builds either (pamg couldn't get it to work as of Nov 2012, encountering numerous link errors). You may have to use [ninja](http://code.google.com/p/chromium/wiki/NinjaBuild), which has incremental linking on by default. \ No newline at end of file
diff --git a/docs/windows_precompiled_headers.md b/docs/windows_precompiled_headers.md
new file mode 100644
index 0000000..050f222
--- /dev/null
+++ b/docs/windows_precompiled_headers.md
@@ -0,0 +1,50 @@
+# Introduction
+
+Using precompiled headers on Windows can speed builds up by around 25%.
+
+Precompiled headers are used by default when GYP generates project files for Visual Studio 2010.
+
+When using Visual Studio 2008, use of precompiled headers is off by default (see discussion below). To turn on precompiled headers in your client when using MSVS 2008, make sure your `~\.gyp\include.gypi` file looks something like this, then run `gclient runhooks` to update the solution files generated by GYP:
+
+```
+{
+ 'variables': {
+ 'chromium_win_pch': 1,
+ }
+}
+```
+
+Since [r174228](http://src.chromium.org/viewvc/chrome?view=rev&revision=174228), precompiled headers are used by default for non-`Official` builds.
+
+# Discussion
+
+MSVS 2008 has some limitations in how well it handles precompiled headers. We've run into two issues:
+ 1. Using precompiled headers can push our official builders over the edge of the world, into the dangerous Kingdom of Oom (out of memory).
+ 1. When compilation flags are changed, instead of doing the right thing and rebuilding the precompiled headers and their dependents, MSVS prints a warning instead, saying the precompiled header file was built with different flags than the current file.
+
+Because of the above, we disabled use of precompiled headers by default, and required the `chromium_win_pch` flag discussed above to be set.
+
+We may be able to turn use of precompiled headers back on for Debug builds by default, by adding a workaround to MSVS's limitations to GYP, i.e. if it detects a change in compile flags it could blow away MSVS's output directory.
+
+# Troubleshooting
+
+Both of these apply to Visual Studio 2008 only.
+
+
+---
+
+
+**Problem**: You didn't rebuild recently, and you want to build an individual source file (Ctrl+F7). MSVS complains that the precompiled header is missing.
+
+**Solution**: You could do a full build of the target your source file is in. If you'd like to avoid that, find the precompiled header generator file, located within a filter somewhere like `../../build/precompile.cc` in your project, individually build that file, then individually build the source file you intended to build. The `precompile.cc` file is the generator for the precompiled header file.
+
+
+---
+
+
+**Problem**: MSVS prints out a warning like this (that we treat as an error): `warning C4651: '/D_FOOBAR' specified for precompiled header but not for current compile`
+
+**Solution**: This means compilation flags have changed from when the precompiled header file was generated. The issue is that MSVS does not handle this correctly. As compilation flags may affect the precompiled header file, it should be rebuilt along with its dependents. The workaround for now is to do a full rebuild, or (if you want to try to be minimal) a rebuild of all projects previously built that use precompiled headers.
+
+
+---
diff --git a/docs/windows_split_dll.md b/docs/windows_split_dll.md
new file mode 100644
index 0000000..3b85cbb
--- /dev/null
+++ b/docs/windows_split_dll.md
@@ -0,0 +1,35 @@
+# Introduction
+
+A build mode where chrome.dll is split into two separate DLLs. This was undertaken as one possible workaround for toolchain limitations on Windows.
+
+
+# Details
+
+## How
+
+Normally, you probably don't need to worry about doing this build. If for some reason you need to build it locally:
+
+ * From a _Visual Studio Command Prompt_ running as **Administrator** run `python tools\win\split_link\install_split_link.py`.
+ * Set `GYP_DEFINES=chrome_split_dll=1`. In particular, don't have `component=shared_library`. Other things, like `buildtype` or `fastbuild` are fine.
+ * `gclient runhooks`
+ * `ninja -C out\Release chrome`
+
+`chrome_split_dll` currently applies only to chrome.dll (and not test binaries).
+
+## What
+
+This is intended to be a temporary measure until either the toolchain is improved or the code can be physically separated into two DLLs (based on a browser/child split).
+
+The link replacement forcibly splits chrome.dll into two halves based on a description in `build\split_link_partition.py`. Code is primarily split along browser/renderer lines. Roughly, Blink and its direct dependencies are in the "chrome1.dll", and the rest of the browser code remains in "chrome.dll".
+
+Splitting the code this way allows keeping maximum optimization on the Blink portion of the code, which is important for performance.
+
+There is a compile time define set when building in this mode `CHROME_SPLIT_DLL`, however it should be used very sparingly-to-not-at-all.
+
+## Details
+
+This forcible split is implemented by putting .lib files in either one DLL or the other, and causing unresolved externals that result during linking to be forcibly exported from the other DLL. This works relatively cleanly for function import/export, however it cannot work for data export.
+
+There are relatively few instances where data exports are required across the DLL boundary. The waterfall builder http://build.chromium.org/p/chromium/waterfall?show=Win%20Split will detect when new data exports are added, and these will need to be repaired. For constants, the data can be duplicated to both DLLs, but for writeable data, a wrapping set/get function will need to be added.
+
+Some more details can be found on the initial commit of the split\_link script http://src.chromium.org/viewvc/chrome?revision=200049&view=revision and the associated bugs: http://crbug.com/237249 http://crbug.com/237267 \ No newline at end of file
diff --git a/docs/working_remotely_with_android.md b/docs/working_remotely_with_android.md
new file mode 100644
index 0000000..3def5f7
--- /dev/null
+++ b/docs/working_remotely_with_android.md
@@ -0,0 +1,92 @@
+# Introduction
+
+When you call `$SRC/build/android/run_tests.py` or `$SRC/build/android/run_instrumentation_tests.py` it assumes an android device is attached to the local host.
+
+If you want to work remotely from your laptop with an android device attached to it, while keeping an ssh connection to a remote desktop machine where you have your build environment setup, you will have to use one of the two alternatives listed below.
+
+
+# Option 1: SSHFS - Mounting the out/Debug directory
+
+### On your remote host machine
+(_you can open a regular SSH session to your host_)
+```
+# build it
+desktop$ cd $SRC;
+desktop$ . build/android/envsetup.sh
+desktop$ build/gyp_chromium -DOS=android
+desktop$ ninja -C out/Debug
+```
+(see also AndroidBuildInstructions).
+
+
+### On your laptop
+(_you must have an Android device attached to it_)
+```
+# Install sshfs
+laptop$ sudo apt-get install sshfs
+
+# Mount the chrome source from your remote host machine into your local laptop.
+laptop$ mkdir ~/chrome_sshfs
+laptop$ sshfs your.host.machine:/usr/local/code/chrome/src ./chrome_sshfs
+
+# Set up the environment.
+laptop$ cd chrome_sshfs
+laptop$ . build/android/envsetup.sh
+laptop$ adb devices
+laptop$ adb root
+
+# Install APK (which was previously built in the host machine).
+laptop$ python build/android/adb_install_apk.py --apk ContentShell.apk --apk_package org.chromium.content_shell
+
+# Run tests.
+laptop$ python build/android/run_instrumentation_tests.py -I --test-apk ContentShellTest -vvv
+```
+
+**This assumes you have the exact same Linux version on your host machine and on your laptop.**
+
+If you have different versions, say Ubuntu Lucid on your laptop and the newer Ubuntu Precise on your host machine, some binaries compiled on the host will not work on your laptop.
+In that case you will have to recompile these binaries on your laptop:
+```
+# May need to install dependencies on your laptop.
+laptop$ sudo ./build/install-build-deps-android.sh
+
+# Rebuild the needed binaries on your laptop.
+laptop$ build/gyp_chromium -DOS=android
+laptop$ ninja -C out/Debug md5sum host_forwarder
+```
+
+
+# Option 2: SSH Tunneling
+
+## Option 2a: Use a script
+
+Copy src/tools/android/adb\_remote\_setup.sh to your laptop, then run it. adb\_remote\_setup.sh updates itself, so you only need to copy it once.
+
+```
+laptop$ curl "http://src.chromium.org/svn/trunk/src/tools/android/adb_remote_setup.sh" > adb_remote_setup.sh
+laptop$ chmod +x adb_remote_setup.sh
+laptop$ ./adb_remote_setup.sh <desktop_hostname> <path_to_adb_on_desktop>
+```
+
+## Option 2b: Manual tunneling
+
+You have to make sure that ports 5037, 10000, and 10201 are not being used on either your laptop or your desktop.
+Try a command like `netstat -nap | grep 10000` to check whether a given port is in use.
+
+Kill the PIDs that are using those ports.
+
+### On your host machine
+```
+desktop$ killall adb
+desktop$ killall host_forwarder
+```
+
+### On your laptop
+```
+laptop$ ssh -C -R 5037:localhost:5037 -R 10000:localhost:10000 -R 10201:localhost:10201 <desktop_host_name>
+```
+
+### On your host machine
+```
+desktop$ python build/android/run_instrumentation_tests.py -I --test-apk ContentShellTest -vvv
+```
\ No newline at end of file
diff --git a/docs/writing_clang_plugins.md b/docs/writing_clang_plugins.md
new file mode 100644
index 0000000..bddee96
--- /dev/null
+++ b/docs/writing_clang_plugins.md
@@ -0,0 +1,109 @@
+# Don't write a clang plugin
+
+Make sure you really want to write a clang plugin.
+
+ * The clang plugin API is not stable. If you write a plugin, _you_ are responsible for making sure it's updated when we update clang.
+ * If you're adding a generally useful warning, it should be added to upstream clang, not to a plugin.
+ * You should not use a clang plugin to do things that can be done in a PRESUBMIT check (e.g. checking that the headers in a file are sorted).
+
+Valid reasons for writing a plugin include:
+
+ * You want to add a chromium-specific error message.
+ * You want to write an automatic code rewriter.
+
+In both cases, please inform [clang@chromium.org](http://groups.google.com/a/chromium.org/group/clang/topics) of your plans before you pursue them.
+
+# Having said that
+
+clang currently has minimal documentation on its plugin interface; it's mostly doxygen annotations in the source. This document is an attempt to be half map of the header files, half tutorial.
+
+# Building your plugin
+
+## Just copy the clang build system
+
+I suggest you make a new dir in **llvm/tools/clang/examples/** and copy the Makefile from `PrintFunctionNames` there. This way, you'll just leverage the existing clang build system. You can then build your plugin with
+
+```
+make -C llvm/tools/clang/examples/myplugin
+```
+
+See ["Using plugins"](Clang.md) on how to use your plugin while building chromium with clang.
+
+## Use the interface in `tools/clang/plugins/ChromeClassTester.h`
+
+Here's a canned interface that filters code, only passing class definitions in non-blacklisted headers. The existing users of `ChromeClassTester` are good examples to study to see what you can do.
+
+## Or if you're doing something really different, just copy `PrintFunctionNames.cpp`
+
+`PrintFunctionNames.cpp` is a plugin in the clang distribution. It is the Hello World of plugins. As the most basic skeleton, it's a good starting point. Change all the identifiers that start with `PrintFunction` to your desired name. Take note of the final line:
+
+```
+static FrontendPluginRegistry::Add<PrintFunctionNamesAction>
+X("print-fns", "print function names");
+```
+
+This registers your PluginASTAction with a string plugin name that can be invoked on the command line. Note that everything else is in an anonymous namespace; no other symbols are exported.
+
+Your `PluginASTAction` subclass exists just to build your ASTConsumer, which receives declarations, sort of like a SAX parser.
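+
+A rough sketch of that glue follows (hypothetical names, not actual Chromium code; note that the exact `CreateASTConsumer` signature has changed across clang versions, so check `FrontendAction.h` in the release you build against):
+
+```
+#include <memory>
+#include <string>
+#include <vector>
+
+#include "clang/AST/ASTConsumer.h"
+#include "clang/AST/Decl.h"
+#include "clang/Frontend/CompilerInstance.h"
+#include "clang/Frontend/FrontendPluginRegistry.h"
+#include "llvm/ADT/STLExtras.h"
+
+namespace {
+
+// The consumer does the real work; see the next section. An empty one is
+// enough to make the skeleton compile.
+class TagConsumer : public clang::ASTConsumer {
+ public:
+  void HandleTagDeclDefinition(clang::TagDecl* tag) override {}
+};
+
+class ExampleAction : public clang::PluginASTAction {
+ protected:
+  // Recent clang versions return std::unique_ptr here; older ones return a
+  // raw ASTConsumer*.
+  std::unique_ptr<clang::ASTConsumer> CreateASTConsumer(
+      clang::CompilerInstance& instance, llvm::StringRef file) override {
+    return llvm::make_unique<TagConsumer>();
+  }
+
+  bool ParseArgs(const clang::CompilerInstance& instance,
+                 const std::vector<std::string>& args) override {
+    return true;  // This example accepts no plugin arguments.
+  }
+};
+
+}  // namespace
+
+static clang::FrontendPluginRegistry::Add<ExampleAction>
+    X("example-plugin", "inspect tag definitions");
+```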
+
+## Your ASTConsumer
+
+There is doxygen documentation on when each `ASTConsumer::Handle` method is called in **llvm/tools/clang/include/clang/AST/ASTConsumer.h**. For this tutorial, I'll assume you only want to look at type definitions (struct, class, enum definitions), so we'll start with:
+
+```
+class TagConsumer : public ASTConsumer {
+ public:
+  virtual void HandleTagDeclDefinition(TagDecl *D) {
+    // Called once for each struct/class/enum/union definition seen.
+  }
+};
+```
+
+The data type passed in is a `Decl`, the root of a giant class hierarchy spanning the following files:
+
+ * `llvm/tools/clang/include/clang/AST/DeclBase.h`: declares the `Decl` class, along with some utility classes you won't use.
+ * `llvm/tools/clang/include/clang/AST/Decl.h`: declares subclasses of `Decl`, for example, `FunctionDecl` (a function declaration), `TagDecl` (the base class for struct/class/enum/etc), `TypedefDecl`, etc.
+ * `llvm/tools/clang/include/clang/AST/DeclCXX.h`: C++ specific types. You'll find most Decl subclasses dealing with templates here, along with things like `UsingDirectiveDecl`, `CXXConstructorDecl`, etc.
+
+The interface on these classes is massive; we'll only cover some of the basics about source location and error reporting.
+
+## Emitting Errors
+
+Lots of location information is stored in the `Decl` tree. Most `Decl` subclasses have multiple methods that return a `SourceLocation`, but let's use `TagDecl::getInnerLocStart()` as an example. (`SourceLocation` is defined in `llvm/tools/clang/include/clang/Basic/SourceLocation.h`, for reference.)
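+
+For instance, a quick way to see where a definition starts (a sketch; `dumpLocation` is just an illustrative helper, not part of clang):
+
+```
+#include "clang/AST/Decl.h"
+#include "clang/Basic/SourceManager.h"
+#include "clang/Frontend/CompilerInstance.h"
+#include "llvm/Support/raw_ostream.h"
+
+void dumpLocation(clang::CompilerInstance& instance, clang::TagDecl* tag) {
+  clang::SourceLocation loc = tag->getInnerLocStart();
+  // printToString() renders the location as "file:line:column".
+  llvm::errs() << "definition starts at "
+               << loc.printToString(instance.getSourceManager()) << "\n";
+}
+```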
+
+Errors are emitted to the user through the `CompilerInstance`. You will probably want to pass the `CompilerInstance` object that is handed to your `PluginASTAction::CreateASTConsumer` on to your ASTConsumer subclass for reporting. You interact with the user through the `DiagnosticsEngine` object (called `Diagnostic` in older clang versions). You could report errors to the user like this:
+
+```
+void emitWarning(CompilerInstance& instance, SourceLocation loc, const char* error) {
+  FullSourceLoc full(loc, instance.getSourceManager());
+  // Custom diagnostic IDs are created through the DiagnosticsEngine.
+  unsigned id = instance.getDiagnostics().getCustomDiagID(
+      DiagnosticsEngine::Warning, error);
+  // The DiagnosticBuilder returned by Report() emits the diagnostic when it
+  // goes out of scope.
+  instance.getDiagnostics().Report(full, id);
+}
+```
+
+(The above is the simplest error reporting. See **llvm/tools/clang/include/clang/Basic/Diagnostic.h** for all the things you can do, like `FixItHint`s if you want to get fancy!)
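+
+Wiring that helper into the consumer from earlier might look like this (a sketch; the warning text is made up, and `instance_` is stashed by the constructor as suggested above):
+
+```
+class TagConsumer : public clang::ASTConsumer {
+ public:
+  explicit TagConsumer(clang::CompilerInstance& instance)
+      : instance_(instance) {}
+
+  void HandleTagDeclDefinition(clang::TagDecl* tag) override {
+    // Flag every tag definition, just to show the plumbing.
+    emitWarning(instance_, tag->getInnerLocStart(),
+                "example diagnostic from a chromium plugin");
+  }
+
+ private:
+  clang::CompilerInstance& instance_;
+};
+```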
+
+## Downcast early, Downcast often
+
+The clang library will give you the most general types possible. For example, `TagDecl` has a comparatively minimal interface. The library is designed so you will be downcasting all the time, and you won't use the standard `dynamic_cast<>()` builtin to do it. Instead, you'll use llvm/clang's home-grown RTTI system:
+
+```
+ virtual void HandleTagDeclDefinition(TagDecl* tag) {
+ if (CXXRecordDecl* record = dyn_cast<CXXRecordDecl>(tag)) {
+ // Do stuff with |record|.
+ }
+ }
+```
+
+## A (not at all exhaustive) list of things you can do with `(CXX)RecordDecl`
+
+ * Iterate across all constructors (`CXXRecordDecl::ctor_begin()`, `CXXRecordDecl::ctor_end()`)
+ * `CXXRecordDecl::isPOD()`: is this a Plain Old Datatype (a type that has no construction or destruction semantics)?
+ * Check certain properties of the class: `CXXRecordDecl::isAbstract()`, `CXXRecordDecl::hasTrivialConstructor()`, `CXXRecordDecl::hasTrivialDestructor()`, etc.
+ * Iterate across all fields/member variables (`RecordDecl::field_begin()`, `RecordDecl::field_end()`)
+ * Iterate across all of the base classes of a record type (`CXXRecordDecl::bases_begin()`, `CXXRecordDecl::bases_end()`)
+ * Get the simple string name with `NamedDecl::getNameAsString()`. (This method is deprecated, but the replacement assert()s on error conditions.) (If you had `struct One {}`, this method would return "One".)
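+
+A sketch pulling a few of these together (`InspectRecord` is an illustrative helper, not clang API; assume `record` came from the `dyn_cast<>` above):
+
+```
+void InspectRecord(clang::CXXRecordDecl* record) {
+  if (record->isPOD())
+    return;  // Plain old data: no construction/destruction semantics to check.
+
+  // Walk the member variables.
+  for (clang::RecordDecl::field_iterator it = record->field_begin();
+       it != record->field_end(); ++it) {
+    clang::FieldDecl* field = *it;
+    llvm::errs() << "field: " << field->getNameAsString() << "\n";
+  }
+
+  // Walk the direct base classes.
+  for (clang::CXXRecordDecl::base_class_iterator it = record->bases_begin();
+       it != record->bases_end(); ++it) {
+    llvm::errs() << "base: " << it->getType().getAsString() << "\n";
+  }
+}
+```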
+
+## Modifying existing plugins
+
+If you want to add additional checks to the existing plugins, be sure to put the new diagnostic behind a flag (there are several examples of this in the plugins already). The reason is that the plugin is bundled with clang, so the new check would get deployed with the next clang roll. If your check fires, that clang roll would then be blocked on cleaning up the whole codebase for your check; and even if the check doesn't fire at the moment, the codebase might regress before the next clang roll happens. If your new check is behind a flag, the clang roll can happen first, you can then add the flag to enable your check, and finally turn the check on everywhere once you know the codebase is clean.
\ No newline at end of file