authorandybons <andybons@chromium.org>2015-08-30 19:27:44 -0700
committerCommit bot <commit-bot@chromium.org>2015-08-31 02:28:22 +0000
commitad92aa35752c2a1f3a0faad364f9b2d1cef83b91 (patch)
tree443feee4987dc2390dbceedd3a292a276700b5c7 /docs
parent22afb31800284923e9f84af6373f68ad6b241f4b (diff)
downloadchromium_src-ad92aa35752c2a1f3a0faad364f9b2d1cef83b91.zip
chromium_src-ad92aa35752c2a1f3a0faad364f9b2d1cef83b91.tar.gz
chromium_src-ad92aa35752c2a1f3a0faad364f9b2d1cef83b91.tar.bz2
[Docs] Another round of stylistic fixes.
TBR=nodir
BUG=524256
Review URL: https://codereview.chromium.org/1324603002
Cr-Commit-Position: refs/heads/master@{#346335}
Diffstat (limited to 'docs')
-rw-r--r-- docs/OWNERS | 4
-rw-r--r-- docs/gtk_vs_views_gtk.md | 59
-rw-r--r-- docs/how_to_extend_layout_test_framework.md | 326
-rw-r--r-- docs/include_what_you_use.md | 82
-rw-r--r-- docs/installation_at_vmware.md | 35
-rw-r--r-- docs/installazione_su_vmware.md | 35
-rw-r--r-- docs/ipc_fuzzer.md | 81
-rw-r--r-- docs/kiosk_mode.md | 31
-rw-r--r-- docs/layout_tests_linux.md | 128
-rw-r--r-- docs/linux64_bit_issues.md | 67
-rw-r--r-- docs/linux_build_instructions.md | 235
-rw-r--r-- docs/linux_build_instructions_prerequisites.md | 170
-rw-r--r-- docs/linux_building_debug_gtk.md | 91
-rw-r--r-- docs/linux_cert_management.md | 92
-rw-r--r-- docs/linux_chromium_arm.md | 72
-rw-r--r-- docs/linux_chromium_packages.md | 24
-rw-r--r-- docs/linux_crash_dumping.md | 147
-rw-r--r-- docs/linux_debugging.md | 464
-rw-r--r-- docs/linux_debugging_gtk.md | 61
-rw-r--r-- docs/linux_debugging_ssl.md | 158
-rw-r--r-- docs/linux_dev_build_as_default_browser.md | 38
-rw-r--r-- docs/linux_development.md | 29
-rw-r--r-- docs/linux_eclipse_dev.md | 582
-rw-r--r-- docs/linux_faster_builds.md | 136
-rw-r--r-- docs/linux_graphics_pipeline.md | 9
-rw-r--r-- docs/linux_gtk_theme_integration.md | 97
-rw-r--r-- docs/linux_hw_video_decode.md | 124
-rw-r--r-- docs/linux_minidump_to_core.md | 127
-rw-r--r-- docs/linux_open_suse_build_instructions.md | 67
-rw-r--r-- docs/linux_password_storage.md | 43
-rw-r--r-- docs/linux_pid_namespace_support.md | 12
-rw-r--r-- docs/linux_plugins.md | 78
-rw-r--r-- docs/linux_printing.md | 74
-rw-r--r-- docs/linux_profiling.md | 197
-rw-r--r-- docs/linux_proxy_config.md | 19
-rw-r--r-- docs/linux_sandbox_ipc.md | 69
-rw-r--r-- docs/linux_sandboxing.md | 135
-rw-r--r-- docs/linux_suid_sandbox.md | 135
-rw-r--r-- docs/linux_suid_sandbox_development.md | 101
-rw-r--r-- docs/linux_zygote.md | 37
-rw-r--r-- docs/mac_build_instructions.md | 197
-rw-r--r-- docs/mandriva_msttcorefonts.md | 39
-rw-r--r-- docs/mojo_in_chromium.md | 692
-rw-r--r-- docs/ninja_build.md | 81
-rw-r--r-- docs/piranha_plant.md | 135
-rw-r--r-- docs/profiling_content_shell_on_android.md | 286
-rw-r--r-- docs/proxy_auto_config.md | 69
-rw-r--r-- docs/retrieving_code_analysis_warnings.md | 72
-rw-r--r-- docs/script_preprocessor.md | 151
-rw-r--r-- docs/seccomp_sandbox_crash_dumping.md | 52
-rw-r--r-- docs/spelling_panel_planning_doc.md | 30
51 files changed, 3915 insertions, 2360 deletions
diff --git a/docs/OWNERS b/docs/OWNERS
index 4256a6b..72e8ffc 100644
--- a/docs/OWNERS
+++ b/docs/OWNERS
@@ -1,3 +1 @@
-jparent@chromium.org
-nodir@chromium.org
-andybons@chromium.org
+*
diff --git a/docs/gtk_vs_views_gtk.md b/docs/gtk_vs_views_gtk.md
index df72341..4af77e1 100644
--- a/docs/gtk_vs_views_gtk.md
+++ b/docs/gtk_vs_views_gtk.md
@@ -1,27 +1,38 @@
-# Benefits of ViewsGtk
+# Gtk vs ViewsGtk
- * Better code sharing. For example, don't have to duplicate tab layout or bookmark bar layout code.
- * Tab Strip
- * Drawing
- * All the animationy bits
- * Subtle click selection behavior (curved corners)
- * Drag behavior, including dropping of files onto the URL bar
- * Closing behavior
- * Bookmarks bar
- * drag & drop behavior, including menus
- * chevron?
- * Easier for folks to work on both platforms without knowing much about the underlying toolkits.
- * Don't have to implement ui features twice.
+## Benefits of ViewsGtk
+* Better code sharing. For example, don't have to duplicate tab layout or
+ bookmark bar layout code.
+ * Tab Strip
+ * Drawing
+ * All the animationy bits
+ * Subtle click selection behavior (curved corners)
+ * Drag behavior, including dropping of files onto the URL bar
+ * Closing behavior
+ * Bookmarks bar
+ * drag & drop behavior, including menus
+ * chevron?
+* Easier for folks to work on both platforms without knowing much about the
+ underlying toolkits.
+* Don't have to implement UI features twice.
-# Benefits of Gtk
- * Dialogs
- * Native feel layout
- * Font size changes (e.g., changing the system font size will apply to our dialogs)
- * Better RTL (e.g., http://crbug.com/2822 http://crbug.com/5729 http://crbug.com/6082 http://crbug.com/6103 http://crbug.com/6125 http://crbug.com/8686 http://crbug.com/8649 )
- * Being able to obey the user's system theme
- * Accessibility for buttons and dialogs (but not for tabstrip and bookmarks)
- * A better change at good remote X performance?
- * We still would currently need Pango / Cairo for text layout, so it will be more efficient to just draw that during the Gtk pipeline instead of with Skia.
- * Gtk widgets will automatically "feel and behave" like Linux. The behavior of our own Views system does not necessarily feel right on Linux.
- * People working on Windows features don't need to worry about breaking the Linux build. \ No newline at end of file
+## Benefits of Gtk
+
+* Dialogs
+ * Native feel layout
+ * Font size changes (e.g., changing the system font size will apply to our
+ dialogs)
+ * Better RTL (e.g., http://crbug.com/2822 http://crbug.com/5729
+ http://crbug.com/6082 http://crbug.com/6103 http://crbug.com/6125
+ http://crbug.com/8686 http://crbug.com/8649)
+* Being able to obey the user's system theme
+* Accessibility for buttons and dialogs (but not for tabstrip and bookmarks)
+* A better chance at good remote X performance?
+* We still would currently need Pango / Cairo for text layout, so it will be
+ more efficient to just draw that during the Gtk pipeline instead of with
+ Skia.
+* Gtk widgets will automatically "feel and behave" like Linux. The behavior of
+ our own Views system does not necessarily feel right on Linux.
+* People working on Windows features don't need to worry about breaking the
+ Linux build.
diff --git a/docs/how_to_extend_layout_test_framework.md b/docs/how_to_extend_layout_test_framework.md
index 3618d66..de61e68 100644
--- a/docs/how_to_extend_layout_test_framework.md
+++ b/docs/how_to_extend_layout_test_framework.md
@@ -1,125 +1,265 @@
-# Extending the Layout Framework
+# How to Extend the Layout Test Framework
-# Introduction
-The Layout Test Framework that Blink uses is a regression testing tool that is multi-platform and it has a large amount of tools that help test varying types of regression, such as pixel diffs, text diffs, etc. The framework is mainly used by Blink, however it was made to be extensible so that other projects can use it test different parts of chrome (such as Print Preview). This is a guide to help people who want to actually the framework to test whatever they want.
+The Layout Test Framework that Blink uses is a multi-platform regression
+testing tool with a large set of tools that help test varying types of
+regression, such as pixel diffs, text diffs, etc. The framework is mainly
+used by Blink; however, it was made to be extensible so that other projects
+can use it to test different parts of Chrome (such as Print Preview). This is
+a guide for people who want to actually use the framework to test whatever
+they want.
-# Background
-Before you can start actually extending the framework, you should be familiar with how to use it. This wiki is basically all you need to learn how to use it
+[TOC]
+
+## Background
+
+Before you can start actually extending the framework, you should be familiar
+with how to use it. This wiki is basically all you need to learn how to use it:
http://www.chromium.org/developers/testing/webkit-layout-tests
-# How to Extend the Framework
+## How to Extend the Framework
+
There are two parts to actually extending the framework to test a piece of software.
The first part is extending certain files in:
-src/third\_party/Webkit/Tools/Scripts/webkitpy/layout\_tests/
-The code in webkitpy/layout\_tests is the layout test framework itself
+[/third_party/Webkit/Tools/Scripts/webkitpy/layout_tests/](/third_party/Webkit/Tools/Scripts/webkitpy/layout_tests/)
+The code in `webkitpy/layout_tests` is the layout test framework itself.
+
+The second part is creating a driver (program) to actually communicate with
+the layout test framework. This part is significantly more tricky and
+dependent on what exactly is being tested.
+
+### Part 1
+
+This part isn’t too difficult. There are basically two classes that need to be
+extended (ideally, just inherited from). These classes are:
+
+ Driver
+
+Located in `layout_tests/port/driver.py`. Each instance of this is the class
+that will actually run an instance of the program that produces the test data
+(program in Part 2).
+
+ Port
+
+Located in `layout_tests/port/base.py`. This class is responsible for creating
+drivers with the correct settings, giving access to certain OS functionality to
+access expected files, etc.
-The second part is creating a driver (program) to actually communicate the layout test framework. This part is significantly more tricky and dependant on what exactly exactly is being tested.
+#### Extending Driver
-## Part 1:
-This part isn’t too difficult. There are basically two classes that need to be extended (ideally, just inherited from). These classes are:
-Driver
-Located in layout\_tests/port/driver.py
-Each instance of this is the class that will actually an instance of the program that produces the test data (program in Part 2).
-Port
-Located in layout\_tests/port/base.py
-This class is responsible creating drivers with the correct settings, giving access to certain OS functionality to access expected files, etc.
+As said, Driver launches the program from Part 2. Said program will communicate
+with the driver class to receive instructions and send back data. All of the
+work for the driver gets done in `Driver.run_test`. Everything else is a helper
+initialization function.
+`run_test()` steps:
+1. On the very first call of this function, it will actually run the test
+ program. On every subsequent call to this function, at the beginning it will
+ verify that the process doesn’t need to be restarted, and if it does, it
+ will create a new instance of the test program.
+1. It will then create a command to send to the program
+ * This command generally consists of an html file path for the test
+ program to navigate to.
+ * After creating it, the command is sent
+1. After the command has been sent, it will then wait for data from the
+ program.
+    * It will actually wait for 2 blocks of data.
+        * The first part is text or audio data. This part is required (the
+          program will always send something, even an empty string).
+        * The second block is optional and consists of image data and an
+          image hash (md5); this block of data is used for pixel tests.
+1. After it has received all the data, it will proceed to check if the program
+ has timed out or crashed, and if so fail this instance of the test (it can
+ be retried later if need be).
+Luckily, `run_test()` most likely doesn’t need to be overridden unless extra
+blocks of data need to be sent to/read from the test program. However, you do
+need to know how it works because it will influence what functions you need to
+override. Here are the ones you’re probably going to need to override:
+ cmd_line
-### Extending Driver:
-As said, Driver launches the program from Part 2. Said program will communicate with the driver class to receive instructions and send back data. All of the work for driver gets done in Driver.run\_test. Everything else is a helper or initialization function.
-run\_test() steps:
- 1. On the very first call of this function, it will actually run the test program. On every subsequent call to this function, at the beginning it will verify that the process doesn’t need to be restarted, and if it does, it will create a new instance of the test program.
- 1. It will then create a command to send the program
- * This command generally consists of an html file path for the test program to navigate to.
- * After creating it, the command is sent
- 1. After the command has been sent, it will then wait for data from the program.
- * It will actually wait for 2 blocks of data.
- * The first part being text or audio data. This part is required (the program will always send something, even an empty string)
- * The second block is optional and is image data and an image hash (md5) this block of data is used for pixel tests
- 1. After it has received all the data, it will proceed to check if the program has timed out or crashed, and if so fail this instance of the test (it can be retried later if need be).
+This function creates a set of command line arguments to run the test program,
+so the function will almost certainly need to be overridden.
-Luckily, run\_test() most likely doesn’t need to be overridden unless extra blocks of data need to be sent to/read from the test program. However, you do need to know how it works because it will influence what functions you need to override. Here are the ones you’re probably going to need to override
-cmd\_line
-This function creates a set of command line arguments to run the test program, so the function will almost certainly need to be overridden.
-It creates the command line to run the program. Driver uses subprocess.popen to create the process, which takes the name of the test program and any options it might need.
-The first item in the list of arguments should be the path to test program using this function:
-self._port._path\_to\_driver()
-This is an absolute path to the test program.
-This is the bare minimum you need to get the driver to launch the test program, however if you have options you need to append, just append them to the list.
-start
-If your program has any special startup needs, then this will be the place to put it.
+It creates the command line to run the program. `Driver` uses `subprocess.Popen`
+to create the process, which takes the name of the test program and any options
+it might need.
-That’s mostly it. The Driver class has almost all the functionality you could want, so there isn’t much to override here. If extra data needs to be read or sent, extra data members should be added to ContentBlock.
+The first item in the list of arguments should be the path to the test
+program, obtained using this function:
-### Extending Port:
-This class is responsible for providing functionality such as where to look for tests, where to store test results, what driver to run, what timeout to use, what kind of files can be run, etc. It provides a lot of functionality, however it isn’t really sufficient because it doesn’t account of platform specific problems, therefore port itself shouldn’t be extend. Instead LinuxPort, WinPort, and MacPort (and maybe the android port class) should be extended as they provide platform specific overrides/extensions that implement most of the important functionality. While there are many functions in Port, overriding one function will affect most of the other ones to get the desired behavior. For example, if layout\_tests\_dir() is overriden, not only will the code look for tests in that directory, but it will find the correct TestExpectations file, the platform specific expected files, etc.
+ self._port._path_to_driver()
+
+This is an absolute path to the test program. This is the bare minimum you need
+to get the driver to launch the test program; however, if you have options you
+need to append, just append them to the list.
+
+ start
+
+If your program has any special startup needs, then this will be the place to
+put it.
+
+That’s mostly it. The Driver class has almost all the functionality you could
+want, so there isn’t much to override here. If extra data needs to be read or
+sent, extra data members should be added to `ContentBlock`.
+
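+A minimal sketch of what such a subclass might look like is below. The class
+name and the extra flag are hypothetical, and the exact method signatures
+should be double-checked against the current `driver.py`:
+
+```python
+from webkitpy.layout_tests.port import driver
+
+
+class MyDriver(driver.Driver):
+    def cmd_line(self, pixel_tests, per_test_args):
+        # The path to the test program must come first.
+        cmd = [self._port._path_to_driver()]
+        # Append whatever extra options your program needs (hypothetical).
+        cmd.append('--my-extra-flag')
+        cmd.extend(per_test_args)
+        return cmd
+
+    def start(self, pixel_tests, per_test_args, deadline):
+        # Any special startup work goes here before deferring to the base.
+        super(MyDriver, self).start(pixel_tests, per_test_args, deadline)
+```
+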
+#### Extending Port
+
+This class is responsible for providing functionality such as where to look for
+tests, where to store test results, what driver to run, what timeout to use,
+what kind of files can be run, etc. It provides a lot of functionality;
+however, it isn’t really sufficient because it doesn’t account for
+platform-specific problems, so Port itself shouldn’t be extended. Instead,
+LinuxPort, WinPort, and MacPort (and maybe the Android port class) should be
+extended, as they provide platform-specific overrides/extensions that
+implement most of the important functionality. While there are many functions
+in Port, overriding one function will affect most of the other ones to get the
+desired behavior. For example, if `layout_tests_dir()` is overridden, not only
+will the code look for tests in that directory, but it will also find the
+correct TestExpectations file, the platform-specific expected files, etc.
Here are some of the functions that most likely need to be overridden.
- * driver\_class
- * This should be overridden to allow the testing program to actually run. By default the code will run content\_shell, which might or might not be what you want.
- * It should be overridden to return the driver extension class created earlier.This function doesn’t return an instance on the driver, just the class itself.
- * driver\_name
- * This should return the name of the program test p. By default it returns ‘content\_shell’, but you want to have it return the program you want to run, such as chrome or browser\_tests.
- * layout\_tests\_dir
- * This tells the port where to look for all the and everything associated with them such as resources files.
- * By default it returns absolute path to the webkit tests.
- * If you are planning on running something in the chromium src/ directory, there are helper functions to allow you to return a path relative to the base of the chromium src directory.
+* `driver_class`
+ * This should be overridden to allow the testing program to actually run.
+      By default the code will run `content_shell`, which might or might not be
+ what you want.
+    * It should be overridden to return the driver extension class created
+      earlier. This function doesn’t return an instance of the driver, just
+      the class itself.
+* `driver_name`
+    * This should return the name of the test program. By default it returns
+      ‘content_shell’, but you want to have it return the program you want to
+      run, such as `chrome` or `browser_tests`.
+* `layout_tests_dir`
+    * This tells the port where to look for all the tests and everything
+      associated with them, such as resource files.
+    * By default it returns the absolute path to the webkit tests.
+ * If you are planning on running something in the chromium src/ directory,
+ there are helper functions to allow you to return a path relative to the
+ base of the chromium src directory.
-The rest of the functions can definitely be overridden for your projects specific needs, however these are the bare minimum needed to get it running. There are also functions you can override to make certain actions that aren’t on by default always take place. For example, the layout test framework always checks for system dependencies unless you pass in a switch. If you want them disabled for your project, just override check\_sys\_deps to always return OK. This way you don’t need to pass in so many switches.
+The rest of the functions can definitely be overridden for your project’s
+specific needs; however, these are the bare minimum needed to get it running.
+There are also functions you can override to make certain actions that aren’t on
+by default always take place. For example, the layout test framework always
+checks for system dependencies unless you pass in a switch. If you want them
+disabled for your project, just override `check_sys_deps` to always return OK.
+This way you don’t need to pass in so many switches.
-As said earlier, you should override LinuxPort, MacPort, and/or WinPort. You should create a class that implements the platform independent overrides (such as driver\_class) and then create a separate class for each platform specific port of your program that inherits from the class with the independent overrides and the platform port you want. For example, you might want to have a different timeout for your project, but on Windows the timeout needs to be vastly different than the others. In this case you can just create a default override that every class uses except your Windows port. In that port you can just override the function again to provide the specific timeout you need. This way you don’t need to maintain the same function on each platform if they all do the same thing.
+As said earlier, you should override LinuxPort, MacPort, and/or WinPort. You
+should create a class that implements the platform-independent overrides
+(such as `driver_class`) and then create a separate class for each
+platform-specific port of your program that inherits from the class with the
+independent overrides and the platform port you want. For example, you might
+want a different timeout for your project, but on Windows the timeout needs
+to be vastly different from the others. In this case you can just create a
+default override
+that every class uses except your Windows port. In that port you can just
+override the function again to provide the specific timeout you need. This way
+you don’t need to maintain the same function on each platform if they all do the
+same thing.
-For Driver and Port that’s basically it unless you need to make many odd modifications. Lots of functionality is already there so you shouldn’t really need to do much.
+For `Driver` and `Port` that’s basically it unless you need to make many odd
+modifications. Lots of functionality is already there so you shouldn’t really
+need to do much.
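+
+To make that layering concrete, here is a rough sketch. All of the `My*` names
+are illustrative, and the base classes/methods should be checked against the
+current `webkitpy` sources:
+
+```python
+from webkitpy.layout_tests.port import linux, win
+
+
+class MyPortMixin(object):
+    """Platform-independent overrides shared by all of my platform ports."""
+
+    def driver_class(self):
+        return MyDriver  # the Driver subclass sketched earlier
+
+    def driver_name(self):
+        return 'my_test_program'
+
+
+class MyLinuxPort(MyPortMixin, linux.LinuxPort):
+    pass  # inherits the default timeout
+
+
+class MyWinPort(MyPortMixin, win.WinPort):
+    def default_timeout_ms(self):
+        # Only Windows needs the vastly different timeout.
+        return 60 * 1000
+```
+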
-## Part 2:
-This is the part where you create the program that your driver class launches. This part is very application dependent, so it will not be a guide on how implement certain features, just what should be implemented and the order in which events should occur and some guidelines about what to do/not do. For a good example of how to implement your test program, look at MockDRT in mock\_drt.pyin the same directory as base.py and driver.py. It goes through all the steps described below and is very clear and concise. It is written in python, but your driver can be anything that can be run by subprocess.popen and has stdout, stdin, stderr.
+### Part 2
-### Goals
-Your goal for this part of the project is to create a program (or extend a program) to interface with the layout test framework. The layout test framework will communicate with this program to tell it what to do and it will accept data from this program to perform the regression testing or create new base line files.
+This is the part where you create the program that your driver class launches.
+This part is very application-dependent, so it will not be a guide on how to
+implement certain features, just what should be implemented, the order in
+which events should occur, and some guidelines about what to do/not do. For a
+good example of how to implement your test program, look at MockDRT in
+`mock_drt.py`, in the same directory as `base.py` and `driver.py`. It goes
+through all the steps described below and is very clear and concise. It is
+written in Python, but your driver can be anything that can be run by
+`subprocess.Popen` and has stdout, stdin, and stderr.
-### Structure
-This is how your code should be laid out.
- 1. Initialization
- * The creation of any directories or the launching of any programs should be done here and should be done once.
- * After the program is initialized, “#READY\n” should be sent to progress the run\_test() in the driver.
- 1. Infinite Loop (!)
- * After initialization, your program needs to actually wait for input, then process that input to carry out the test. In the context of layout testing, the content\_shell needs to wait for an html file to navigate to, render it, then convert that rendering to a PNG. It does this constantly, until a signal/message is sent to indicate that no more tests should be processed
- * Details:
- * The first thing you need is your test file path and any other additional information about the test that is required (this is sent during the write() step in run\_tests() is driver.py. This information will be passed through stdin and is just one large string, with each part of the command being split with apostrophes (ex: “/path’foo” is path to the test file, then foo is some setting that your program might need).
- * After that, your program should act on this input, how it does this is dependent on your program, however in content\_shell, this would be the part where it navigates to the test file, then renders it. After the program acts on the input, it needs to send some text to the driver code to indicate that it has acted on the input. This text will indicate something that you want to test. For example, if you want to make sure you program always prints “foo” you should send it to the driver. If the program every prints “bar” (or anything else), that would indicate a failure and the test will fail.
- * Then you need to send any image data in the same manner as you did for step ii.
- * Cleanup everything related to processing the input from step i, then go back to step i.
- * This is where the ‘infinite’ loop part comes in, your program should constantly accept input from the driver until the driver indicates that there are no more tests to run. The driver does this by closing stdin, which will cause std::cin to go into a bad state. However, you can also modify the driver to send a special string such as ‘QUIT’ to exit the while loop.
+#### Goals
-That’s basically what the skeleton of your program should be.
+Your goal for this part of the project is to create a program (or extend a
+program) to interface with the layout test framework. The layout test framework
+will communicate with this program to tell it what to do and it will accept data
+from this program to perform the regression testing or create new baseline
+files.
-### Details:
-This is information about how to do some specific things, such as sending data to the layout test framework.
- * Content Blocks
- * The layout test framework accepts output from your program in blocks of data through stdout. Therefore, printing to stdout is really sending data to the layout test framework.
- * Structure of block
- * “Header: Data\n”
- * Header indicates what type of data will be sent through. A list of valid headers is listed in Driver.py.
- * Data is the data that you actually want to send. For pixel tests, you want to send the actual PNG data here.
- * The newline is needed to indicate the end of a header.
- * End of a content block
- * To indicate the end of a a content block and cause the driver to progress, you need to write “#EOF\n” to stdout (mandatory) and to stderr for certain types of content, such as image data.
- * Multiple headers per block
- * Some blocks require different sets of data. For PNGs, not only is the PNG needed, but so is a hash of the bitmap used to create the PNG.
- * In this case this is how your output should look.
- * “Content-type: image/png\n”
- * “ActualHash: hashData\n”
- * “Content-Length: lengthOfPng\n”
- * “pngdata”
- * This part doesn’t need a header specifying that you are sending png data, just send it
- * “#EOF\n” on both stdout and stderr
- * To see the structure of the data required, look at the read\_block functions in Driver.py
+#### Structure
+This is how your code should be laid out.
+1. Initialization
+ * The creation of any directories or the launching of any programs should
+ be done here and should be done once.
+ * After the program is initialized, “#READY\n” should be sent to progress
+ the `run_test()` in the driver.
+1. Infinite Loop (!)
+ * After initialization, your program needs to actually wait for input,
+ then process that input to carry out the test. In the context of layout
+ testing, the `content_shell` needs to wait for an html file to navigate
+ to, render it, then convert that rendering to a PNG. It does this
+ constantly, until a signal/message is sent to indicate that no more
+      tests should be processed.
+ * Details:
+        * The first thing you need is your test file path and any other
+          additional information about the test that is required (this is
+          sent during the `write()` step in `run_tests()` in `driver.py`).
+          This information will be passed through stdin and is just one large
+          string, with each part of the command being split with apostrophes
+          (ex: in “/path’foo”, /path is the path to the test file and foo is
+          some setting that your program might need).
+        * After that, your program should act on this input; how it does this
+          is dependent on your program. In `content_shell`, this would be the
+          part where it navigates to the test file, then renders it. After
+          the program acts on the input, it needs to send some text to the
+          driver code to indicate that it has acted on the input. This text
+          will indicate something that you want to test. For example, if you
+          want to make sure your program always prints “foo”, you should send
+          it to the driver. If the program ever prints “bar” (or anything
+          else), that would indicate a failure and the test will fail.
+        * Then you need to send any image data in the same manner as you did
+          for the previous step.
+        * Clean up everything related to processing the input from the first
+          step, then go back to it.
+            * This is where the ‘infinite’ loop part comes in: your program
+              should constantly accept input from the driver until the driver
+              indicates that there are no more tests to run. The driver does
+              this by closing stdin, which will cause std::cin to go into a
+              bad state. However, you can also modify the driver to send a
+              special string such as ‘QUIT’ to exit the while loop.
+That’s basically what the skeleton of your program should be.
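+
+As a concrete illustration, a bare-bones version of that skeleton might look
+like the following sketch (the command parsing and the text block are
+simplified; see `mock_drt.py` for a complete, real example):
+
+```python
+import sys
+
+
+def main():
+    # 1. Initialization: create directories, launch helpers, etc., then
+    # tell the driver we are ready.
+    sys.stdout.write('#READY\n')
+    sys.stdout.flush()
+
+    # 2. "Infinite" loop: runs until the driver closes stdin.
+    for line in sys.stdin:
+        # The test path comes first; optional settings may follow after "'".
+        test_path = line.strip().split("'")[0]
+
+        # Act on the input (navigate, render, ...), then send the required
+        # text block, terminated by #EOF on stdout and stderr.
+        sys.stdout.write('Content-Type: text/plain\n')
+        sys.stdout.write('ran %s\n' % test_path)
+        sys.stdout.write('#EOF\n')
+        sys.stderr.write('#EOF\n')
+        sys.stdout.flush()
+        sys.stderr.flush()
+
+
+if __name__ == '__main__':
+    main()
+```
+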
+#### Details
+This is information about how to do some specific things, such as sending data
+to the layout test framework.
+* Content Blocks
+ * The layout test framework accepts output from your program in blocks of
+ data through stdout. Therefore, printing to stdout is really sending
+ data to the layout test framework.
+ * Structure of block
+ * “Header: Data\n”
+            * Header indicates what type of data will be sent through. A list
+              of valid headers is listed in `driver.py`.
+ * Data is the data that you actually want to send. For pixel
+ tests, you want to send the actual PNG data here.
+ * The newline is needed to indicate the end of a header.
+ * End of a content block
+            * To indicate the end of a content block and cause the driver to
+              progress, you need to write “#EOF\n” to stdout (mandatory) and
+              to stderr for certain types of content, such as image data.
+ * Multiple headers per block
+ * Some blocks require different sets of data. For PNGs, not only
+ is the PNG needed, but so is a hash of the bitmap used to create
+ the PNG.
+ * In this case this is how your output should look.
+ * “Content-type: image/png\n”
+ * “ActualHash: hashData\n”
+ * “Content-Length: lengthOfPng\n”
+ * “pngdata”
+                * This part doesn’t need a header specifying that you are
+                  sending PNG data; just send it.
+ * “#EOF\n” on both stdout and stderr
+ * To see the structure of the data required, look at the
+              `read_block` functions in `driver.py`.
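+
+For example, a pixel-test block with the multiple headers described above
+could be emitted like this (a sketch; `png_data` and `hash_data` stand in for
+values your program computes):
+
+```python
+import sys
+
+
+def write_image_block(png_data, hash_data):
+    # Multiple headers, the raw PNG bytes, then #EOF on both streams.
+    sys.stdout.write('Content-type: image/png\n')
+    sys.stdout.write('ActualHash: %s\n' % hash_data)
+    sys.stdout.write('Content-Length: %d\n' % len(png_data))
+    sys.stdout.write(png_data)
+    sys.stdout.write('#EOF\n')
+    sys.stderr.write('#EOF\n')
+    sys.stdout.flush()
+    sys.stderr.flush()
+```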
diff --git a/docs/include_what_you_use.md b/docs/include_what_you_use.md
deleted file mode 100644
index 2446c61..0000000
--- a/docs/include_what_you_use.md
+++ /dev/null
@@ -1,82 +0,0 @@
-# Introduction
-
-**WARNING:** This is all very alpha. Proceed at your own risk. The Mac instructions are very out of date -- IWYU currently isn't generally usable, so we stopped looking at it for chromium.
-
-See [include what you use page](http://code.google.com/p/include-what-you-use/) for background about what it is and why it is important.
-
-This page describes running IWYU for Chromium.
-
-# Linux/Blink
-
-## Running IWYU
-
-These instructions have a slightly awkward workflow. Ideally we should use something like `CXX=include-what-you-use GYP_DEFINES="clang=1" gclient runhooks; ninja -C out/Debug webkit -k 10000` if someone can get it working.
-
- * Install include-what-you-use (see [here](https://code.google.com/p/include-what-you-use/wiki/InstructionsForUsers)). Make sure to use --enable-optimized=YES when building otherwise IWYU will be very slow.
- * Get the compilation commands from ninja (using g++), and derive include-what-you-use invocations from it
-```
-$ cd /path/to/chromium/src
-$ ninja -C out/Debug content_shell -v > ninjalog.txt
-$ sed '/obj\/third_party\/WebKit\/Source/!d; s/^\[[0-9\/]*\] //; /^g++/!d; s/^g++/include-what-you-use -Wno-c++11-extensions/; s/-fno-ident//' ninjalog.txt > commands.txt
-```
- * Run the IWYU commands. We do this in parallel for speed. Merge the output and remove any complaints that the compiler has.
-```
-$ cd out/Debug
-$ for i in {1..32}; do (sed -ne "$i~32p" ../../commands.txt | xargs -n 1 -L 1 -d '\n' bash -c > iwyu_$i.txt 2>&1) & done
-$ cat iwyu_{1..32}.txt | sed '/In file included from/d;/\(note\|warning\|error\):/{:a;N;/should add/!b a;s/.*\n//}' > iwyu.txt
-$ rm iwyu_{1..32}.txt
-```
- * The output in iwyu.txt has all the suggested changes
-
-# Mac
-
-## Setup
-
- 1. Checkout and build IWYU (This will also check out and build clang. See [Clang page](http://code.google.com/p/chromium/wiki/Clang) for details.)
-```
-$ cd /path/to/src/
-$ tools/clang/scripts/update_iwyu.sh
-```
- 1. Ensure "Continue building after errors" is enabled in the Xcode Preferences UI.
-
-## Chromium
-
- 1. Build Chromium. Be sure to substitute in the correct absolute path for `/path/to/src/`.
-```
-$ GYP_DEFINES='clang=1' gclient runhooks
-$ cd chrome
-$ xcodebuild -configuration Release -target chrome OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/path/to/src/third_party/llvm-build/Release+Asserts/bin/clang++
-```
- 1. Run IWYU. Be sure to substitute in the correct absolute path for `/path/to/src/`.
-```
-$ xcodebuild -configuration Release -target chrome OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/path/to/src/third_party/llvm-build/Release+Asserts/bin/include-what-you-use
-```
-
-## WebKit
-
- 1. Build TestShell. Be sure to substitute in the correct absolute path for `/path/to/src/`.
-```
-$ GYP_DEFINES='clang=1' gclient runhooks
-$ cd webkit
-$ xcodebuild -configuration Release -target test_shell OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/path/to/src/third_party/llvm-build/Release+Asserts/bin/clang++
-```
- 1. Run IWYU. Be sure to substitute in the correct absolute path for `/path/to/src/`.
-```
-$ xcodebuild -configuration Release -target test_shell OBJROOT=/path/to/src/clang/obj DSTROOT=/path/to/src/clang SYMROOT=/path/to/src/clang CC=/work/chromium/src/third_party/llvm-build/Release+Asserts/bin/include-what-you-use
-```
-
-# Bragging
-
-You can run `tools/include_tracer.py` to get header file sizes before and after running iwyu. You can then include stats like "This reduces the size of foo.h from 2MB to 80kB" in your CL descriptions.
-
-# Known Issues
-
-We are a long way off from being able to accept the results of IWYU for Chromium/Blink. However, even in its current state it can be a useful tool for finding forward declaration opportunities and unused includes.
-
-Using IWYU with Blink has several issues:
- * Lack of understanding on Blink style, e.g. config.h, wtf/MathExtras.h, wtf/Forward.h, wtf/Threading.h
- * "using" declarations (most of WTF) makes IWYU not suggest forward declarations
- * Functions defined inline in a different location to the declaration are dropped, e.g. Document::renderView in RenderView.h and Node::renderStyle in NodeRenderStyle.h
- * typedefs can cause unwanted dependencies, e.g. typedef int ExceptionCode in Document.h
- * .cpp files don't always correspond directly to .h files, e.g. Foo.h can be implemented in e.g. chromium/FooChromium.cpp
- * g++/clang/iwyu seems fine with using forward declarations for PassRefPtr types in some circumstances, which MSVC doesn't. \ No newline at end of file
diff --git a/docs/installation_at_vmware.md b/docs/installation_at_vmware.md
index b66a714..6feae63 100644
--- a/docs/installation_at_vmware.md
+++ b/docs/installation_at_vmware.md
@@ -1,21 +1,24 @@
-#How to install Chromium OS on VMWare
+# How to install Chromium OS on VMWare
-# Download
+## Download
-1.[Download VMware player](http://www.vmware.com/products/player/)
-<br>2.<a href='http://gdgt.com/google/chrome-os/download/'>Create gtgt.com account and download Chrome image</a>
+1. [Download VMware player](http://www.vmware.com/products/player/)
+2. [Create a gdgt.com account and download the Chrome image](http://gdgt.com/google/chrome-os/download/)
-<h1>Mounting</h1>
+## Mounting
-1. Create a New Virtual Machine<br>
-<br>2. Do not selecting any OP-sys (other-other-etc)<br>
-<br>3. Delete your newly created hardisc (lets say you named it as Chrome)<br>
-<br>4. Move downloaded harddisc to the same folder as othere VMware files for this Virtual Machine<br>
-<br>5. Rename your downloaded hardisc to newly created Virtual Machine original name (in my example it was Chrome)<br>
-<br>6. Boot the Chrome Virtual Machine (recommended to use NAT network configuration)<br>
-<br>
-<h1>Google results</h1>
+1. Create a New Virtual Machine
+1. Do not select any OS (other-other-etc)
+1. Delete your newly created hard disk (let's say you named it Chrome)
+1. Move the downloaded hard disk to the same folder as the other VMware files
+   for this Virtual Machine
+1. Rename your downloaded hard disk to the newly created Virtual Machine's
+   original name (in my example it was Chrome)
+1. Boot the Chrome Virtual Machine (using the NAT network configuration is
+   recommended)
-> <a href='http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/'>http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/</a>
-<br>> <a href='http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/'>http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/</a>
-<br>> <a href='http://blogs.zdnet.com/gadgetreviews/?p=9583'>http://blogs.zdnet.com/gadgetreviews/?p=9583</a> \ No newline at end of file
+## Google results
+
+* http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/
+* http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/
+* http://blogs.zdnet.com/gadgetreviews/?p=9583
diff --git a/docs/installazione_su_vmware.md b/docs/installazione_su_vmware.md
index 14bd157..92f6ff0 100644
--- a/docs/installazione_su_vmware.md
+++ b/docs/installazione_su_vmware.md
@@ -1,20 +1,25 @@
-#How to install Chromium OS on VMWare
+# How to install Chromium OS on VMWare
-# Download
+## Download
-1.[Download VMware player](http://www.vmware.com/products/player/)
-<br>2.<a href='http://gdgt.com/google/chrome-os/download/'>Create a gdgt.com account and download the Chrome image</a>
+1. [Download VMware player](http://www.vmware.com/products/player/)
+2. [Create a gdgt.com account and download the Chrome image](http://gdgt.com/google/chrome-os/download/)
-<h1>Mounting the Image</h1>
+## Mounting the Image
-1. Create a new Virtual Machine<br>
-<br>2. Do not select any operating system (other-other-etc)<br>
-<br>3. Delete the virtual hard disk you just created (if you named the virtual machine "Chrome", delete the file "Chrome.vmdk")<br>
-<br>4. Move the downloaded Hard Disk image into the same folder as the other VMware files for this virtual machine<br>
-<br>5. Rename the downloaded Hard Disk image with the name of the Virtual Machine you created (in my example it is Chrome)<br>
-<br>6. Boot the Virtual Machine (using the NAT network configuration is recommended)<br>
-<h1>Google results</h1>
+1. Create a new Virtual Machine
+1. Do not select any operating system (other-other-etc)
+1. Delete the virtual hard disk you just created (if you named the virtual
+   machine "Chrome", delete the file "Chrome.vmdk")
+1. Move the downloaded Hard Disk image into the same folder as the other
+   VMware files for this virtual machine
+1. Rename the downloaded Hard Disk image with the name of the Virtual Machine
+   you created (in my example it is Chrome)
+1. Boot the Virtual Machine (using the NAT network configuration is
+   recommended)
-> <a href='http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/'>http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/</a>
-<br>> <a href='http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/'>http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/</a>
-<br>> <a href='http://blogs.zdnet.com/gadgetreviews/?p=9583'>http://blogs.zdnet.com/gadgetreviews/?p=9583</a>
+## Google results
+
+* http://discuss.gdgt.com/google/chrome-os/general/download-chrome-os-vmware-image/
+* http://www.engadget.com/2009/11/20/google-chrome-os-available-as-free-vmware-download/
+* http://blogs.zdnet.com/gadgetreviews/?p=9583
diff --git a/docs/ipc_fuzzer.md b/docs/ipc_fuzzer.md
index 17a80c6..0ab9ce9 100644
--- a/docs/ipc_fuzzer.md
+++ b/docs/ipc_fuzzer.md
@@ -1,52 +1,65 @@
-# Introduction
+# IPC Fuzzer
-A chromium IPC fuzzer is under development by aedla and tsepez. The fuzzer lives under `src/tools/ipc_fuzzer/` and is running on ClusterFuzz. A previous version of the fuzzer was a simple bitflipper, which caught around 10 bugs. A new version is doing smarter mutations and generational fuzzing. To do so, each `ParamTraits<Type>` needs a corresponding `FuzzTraits<Type>`. Feel free to contribute.
+A chromium IPC fuzzer is under development by aedla and tsepez. The fuzzer lives
+under `src/tools/ipc_fuzzer/` and is running on ClusterFuzz. A previous version
+of the fuzzer was a simple bitflipper, which caught around 10 bugs. A new
+version is doing smarter mutations and generational fuzzing. To do so, each
+`ParamTraits<Type>` needs a corresponding `FuzzTraits<Type>`. Feel free to
+contribute.
+[TOC]
----
+## Working with the fuzzer
-# Working with the fuzzer
+### Build instructions
-## Build instructions
- * add `enable_ipc_fuzzer=1` to `GYP_DEFINES`
- * build `ipc_fuzzer_all` target
- * component builds are currently broken, sorry
- * Debug builds are broken; only Release mode works.
+* add `enable_ipc_fuzzer=1` to `GYP_DEFINES`
+* build `ipc_fuzzer_all` target
+* component builds are currently broken, sorry
+* Debug builds are broken; only Release mode works.
-## Replaying ipcdumps
- * `tools/ipc_fuzzer/scripts/play_testcase.py path/to/testcase.ipcdump`
- * more help: `tools/ipc_fuzzer/scripts/play_testcase.py -h`
+### Replaying ipcdumps
-## Listing messages in ipcdump
- * `out/`_Build_`/ipc_message_util --dump path/to/testcase.ipcdump`
+* `tools/ipc_fuzzer/scripts/play_testcase.py path/to/testcase.ipcdump`
+* more help: `tools/ipc_fuzzer/scripts/play_testcase.py -h`
-## Updating fuzzers in ClusterFuzz
- * `tools/ipc_fuzzer/scripts/cf_package_builder.py`
- * upload `ipc_fuzzer_mut.zip` and `ipc_fuzzer_gen.zip` under build directory to ClusterFuzz
+### Listing messages in ipcdump
-## Contributing FuzzTraits
- * add them to tools/ipc\_fuzzer/fuzzer/fuzzer.cc
- * thanks!
+* `out/<Build>/ipc_message_util --dump path/to/testcase.ipcdump`
+### Updating fuzzers in ClusterFuzz
----
+* `tools/ipc_fuzzer/scripts/cf_package_builder.py`
+* upload `ipc_fuzzer_mut.zip` and `ipc_fuzzer_gen.zip` under the build
+  directory to ClusterFuzz
-# Components
+### Contributing FuzzTraits
-## ipcdump logger
- * add `enable_ipc_fuzzer=1` to `GYP_DEFINES`
- * build `chrome` and `ipc_message_dump` targets
- * run chrome with `--no-sandbox --ipc-dump-directory=/path/to/ipcdump/directory`
- * ipcdumps will be created in this directory for each renderer using the format _pid_.ipcdump
+* add them to `tools/ipc_fuzzer/fuzzer/fuzzer.cc`
+* thanks!
-## ipcdump replay
-Lives under `ipc_fuzzer/replay`. The renderer is replaced with `ipc_fuzzer_replay` using `--renderer-cmd-prefix`. This is done automatically with the `ipc_fuzzer/play_testcase.py` convenience script.
+## Components
-## ipcdump mutator / generator
-Lives under `ipc_fuzzer/fuzzer`. This is the code that runs on ClusterFuzz. It uses `FuzzTraits<Type>` to mutate ipcdumps or generate them out of thin air.
+### ipcdump logger
+* add `enable_ipc_fuzzer=1` to `GYP_DEFINES`
+* build `chrome` and `ipc_message_dump` targets
+* run chrome with
+ `--no-sandbox --ipc-dump-directory=/path/to/ipcdump/directory`
+* ipcdumps will be created in this directory for each renderer using the
+ format `_pid_.ipcdump`
----
+### ipcdump replay
-# Problems, questions, suggestions
-Send them to mbarbella@chromium.org. \ No newline at end of file
+Lives under `ipc_fuzzer/replay`. The renderer is replaced with
+`ipc_fuzzer_replay` using `--renderer-cmd-prefix`. This is done automatically
+with the `ipc_fuzzer/play_testcase.py` convenience script.
+
+### ipcdump mutator / generator
+
+Lives under `ipc_fuzzer/fuzzer`. This is the code that runs on ClusterFuzz. It
+uses `FuzzTraits<Type>` to mutate ipcdumps or generate them out of thin air.
+
+## Problems, questions, suggestions
+
+Send them to mbarbella@chromium.org.
diff --git a/docs/kiosk_mode.md b/docs/kiosk_mode.md
index 55bc39c..3c17898 100644
--- a/docs/kiosk_mode.md
+++ b/docs/kiosk_mode.md
@@ -1,7 +1,7 @@
-## Introduction
-
-If you have a real world kiosk application that you want to run on Google Chrome, then below are the steps to take to simulate kiosk mode.
+# Kiosk Mode
+If you have a real-world kiosk application that you want to run on Google
+Chrome, then below are the steps to take to simulate kiosk mode.
## Steps to Simulate Kiosk Mode
@@ -9,7 +9,7 @@ If you have a real world kiosk application that you want to run on Google Chrome
Compile the following Java code:
-```
+```java
import java.awt.*;
import java.applet.*;
import java.security.*;
@@ -21,9 +21,9 @@ public class FullScreen extends Applet
{
AccessController.doPrivileged
(
- new PrivilegedAction()
+ new PrivilegedAction()
{
- public Object run()
+ public Object run()
{
try
{
@@ -46,8 +46,11 @@ public class FullScreen extends Applet
Include it in an applet on your kiosk application's home page:
-```
-<applet name="appletFullScreen" code="FullScreen.class" width="1" height="1"></applet>
+```html
+<applet name="appletFullScreen"
+ code="FullScreen.class"
+ width="1"
+ height="1"></applet>
```
### Step 3
@@ -56,16 +59,17 @@ Add the following to the kiosk computer's java.policy file:
```
grant codeBase "http://yourservername/*"
-{
+{
permission java.security.AllPermission;
};
```
### Step 4
-Include the following JavaScript and assign the doLoad function to the onload event:
+Include the following JavaScript and assign the `doLoad` function to the
+`onload` event:
-```
+```javascript
var _appletFullScreen;
function doLoad()
@@ -78,8 +82,9 @@ function doFullScreen()
{
if (_appletFullScreen && _appletFullScreen.fullScreen)
{
-// Add an if statement to check whether document.body.clientHeight is not indicative of full screen mode
+ // Add an if statement to check whether document.body.clientHeight is not
+ // indicative of full screen mode
_appletFullScreen.fullScreen();
}
}
-``` \ No newline at end of file
+```
diff --git a/docs/layout_tests_linux.md b/docs/layout_tests_linux.md
index c5d0be0..154e334 100644
--- a/docs/layout_tests_linux.md
+++ b/docs/layout_tests_linux.md
@@ -1,21 +1,31 @@
# Running layout tests on Linux
- 1. Build `blink_tests` (see LinuxBuildInstructions)
- 1. Checkout the layout tests
- * If you have an entry in your .gclient file that includes "LayoutTests", you may need to comment it out and sync.
- * You can run a subset of the tests by passing in a path relative to `src/third_party/WebKit/LayoutTests/`. For example, `run_layout_tests.py fast` will only run the tests under `src/third_party/WebKit/LayoutTests/fast/`.
- 1. When the tests finish, any unexpected results should be displayed.
-
-See [Running WebKit Layout Tests](http://dev.chromium.org/developers/testing/webkit-layout-tests) for full documentation about set up and available options.
+1. Build `blink_tests` (see LinuxBuildInstructions)
+1. Check out the layout tests
+ * If you have an entry in your `.gclient` file that includes
+ "LayoutTests", you may need to comment it out and sync.
+ * You can run a subset of the tests by passing in a path relative to
+ `src/third_party/WebKit/LayoutTests/`. For example,
+ `run_layout_tests.py fast` will only run the tests under
+ `src/third_party/WebKit/LayoutTests/fast/`.
+1. When the tests finish, any unexpected results should be displayed.
+
+See
+[Running WebKit Layout Tests](http://dev.chromium.org/developers/testing/webkit-layout-tests)
+for full documentation about set up and available options.
## Pixel Tests
-The pixel test results were generated on Ubuntu 10.4 (Lucid). If you're running a newer version of Ubuntu, you will get some pixel test failures due to changes in freetype or fonts. In this case, you can create a Lucid 64 chroot using `build/install-chroot.sh` to compile and run tests.
+The pixel test results were generated on Ubuntu 10.04 (Lucid). If you're running
+a newer version of Ubuntu, you will get some pixel test failures due to changes
+in freetype or fonts. In this case, you can create a Lucid 64 chroot using
+`build/install-chroot.sh` to compile and run tests.
## Fonts
Make sure you have all the necessary fonts installed.
-```
+
+```shell
sudo apt-get install apache2 wdiff php5-cgi ttf-indic-fonts \
msttcorefonts ttf-dejavu-core ttf-kochi-gothic ttf-kochi-mincho \
ttf-thai-tlwg
@@ -25,66 +35,104 @@ You can also just run `build/install-build-deps.sh` again.
## Plugins
-If `fast/dom/object-plugin-hides-properties.html` and `plugins/embed-attributes-style.html` are failing, try uninstalling `totem-mozilla` from your system:
-```
-sudo apt-get remove totem-mozilla
-```
+If `fast/dom/object-plugin-hides-properties.html` and
+`plugins/embed-attributes-style.html` are failing, try uninstalling
+`totem-mozilla` from your system:
+
+ sudo apt-get remove totem-mozilla
+
## Running layout tests under valgrind on Linux
As above, but use `tools/valgrind/chrome_tests.sh -t webkit` instead. e.g.
-```
-sh tools/valgrind/chrome_tests.sh -t webkit LayoutTests/fast/
-```
+
+ sh tools/valgrind/chrome_tests.sh -t webkit LayoutTests/fast/
+
This defaults to using --debug. Read the script for more details.
-If you're trying to reproduce a run from the valgrind buildbot, look for the --run\_chunk=XX:YY
-line in the bot's log. You can rerun exactly as the bot did with the commands
-```
+If you're trying to reproduce a run from the valgrind buildbot, look for the
+`--run_chunk=XX:YY` line in the bot's log. You can rerun exactly as the bot did
+with the commands:
+
+```shell
cd ~/chromium/src
-echo XX > valgrind_layout_chunk.txt
+echo XX > valgrind_layout_chunk.txt
sh tools/valgrind/chrome_tests.sh -t layout -n YY
```
+
That will run the XXth chunk of YY layout tests.
## Configuration tips
- * Use an optimized content\_shell when rebaselining or running a lot of tests. ([bug 8475](http://code.google.com/p/chromium/issues/detail?id=8475) is about how the debug output differs from the optimized output.) `ninja -C out/Release content_shell`
- * Make sure you have wdiff installed: `sudo apt-get install wdiff` to get prettier diff output
- * Some pixel tests may fail due to processor-specific rounding errors. Build using a chroot jail with Lucid 64-bit user space to be sure that your system matches the checked in baselines. You can use `build/install-chroot.sh` to set up a Lucid 64 chroot. Learn more about [UsingALinuxChroot](UsingALinuxChroot.md).
+
+* Use an optimized `content_shell` when rebaselining or running a lot of
+ tests. ([bug 8475](https://crbug.com/8475) is about how the debug output
+ differs from the optimized output.)
+
+ `ninja -C out/Release content_shell`
+
+* Make sure you have wdiff installed: `sudo apt-get install wdiff` to get
+ prettier diff output.
+* Some pixel tests may fail due to processor-specific rounding errors. Build
+ using a chroot jail with Lucid 64-bit user space to be sure that your system
+ matches the checked in baselines. You can use `build/install-chroot.sh` to
+  matches the checked-in baselines. You can use `build/install-chroot.sh` to
+ [using a linux chroot](using_a_linux_chroot.md).
+
## Getting a layout test into a debugger
There are two ways:
- 1. Run content\_shell directly rather than using run\_layout\_tests.py. You will need to pass some options:
- * `--no-timeout` to give you plenty of time to debug
- * the fully qualified path of the layout test (rather than relative to `WebKit/LayoutTests`).
- 1. Or, run as normal but with the `--additional-drt-flag=--renderer-startup-dialog --additional-drt-flag=--no-timeout --time-out-ms=86400000` flags. The first one makes content\_shell bring up a dialog before running, which then would let you attach to the process via `gdb -p PID_OF_DUMPRENDERTREE`. The others help avoid the test shell and DumpRenderTree timeouts during the debug session.
+
+1. Run `content_shell` directly rather than using `run_layout_tests.py`. You
+ will need to pass some options:
+ * `--no-timeout` to give you plenty of time to debug
+ * the fully qualified path of the layout test (rather than relative to
+ `WebKit/LayoutTests`).
+1. Or, run as normal but with the
+ `--additional-drt-flag=--renderer-startup-dialog
+ --additional-drt-flag=--no-timeout --time-out-ms=86400000` flags. The first
+   one makes `content_shell` bring up a dialog before running, which then would
+ let you attach to the process via `gdb -p PID_OF_DUMPRENDERTREE`. The others
+ help avoid the test shell and DumpRenderTree timeouts during the debug
+ session.
## Using an embedded X server
-If you try to use your computer while the tests are running, you may get annoyed as windows are opened and closed automatically. To get around this, you can create a separate X server for running the tests.
+If you try to use your computer while the tests are running, you may get annoyed
+as windows are opened and closed automatically. To get around this, you can
+create a separate X server for running the tests.
+
+1. Install Xephyr (`sudo apt-get install xserver-xephyr`)
+1. Start Xephyr as display 4: `Xephyr :4 -screen 1024x768x24`
+1. Run the layout tests in the Xephyr: `DISPLAY=:4 run_layout_tests.py`
+
+Xephyr supports debugging repainting. See the
+[Xephyr README](http://cgit.freedesktop.org/xorg/xserver/tree/hw/kdrive/ephyr/README)
+for details. In brief:
- 1. Install Xephyr (`sudo apt-get install xserver-xephyr`)
- 1. Start Xephyr as display 4: `Xephyr :4 -screen 1024x768x24`
- 1. Run the layout tests in the Xephyr: `DISPLAY=:4 run_layout_tests.py`
+1. `XEPHYR_PAUSE=$((500*1000)) Xephyr ...etc... # 500 ms repaint flash`
+1. `kill -USR1 $(pidof Xephyr)`
-Xephyr supports debugging repainting. See the [Xephyr README](http://cgit.freedesktop.org/xorg/xserver/tree/hw/kdrive/ephyr/README) for details. In brief:
- 1. `XEPHYR_PAUSE=$((500*1000)) Xephyr ...etc... # 500 ms repaint flash`
- 1. `kill -USR1 $(pidof Xephyr)`
+If you don't want to see anything at all, you can use Xvfb (should already be
+installed).
-If you don't want to see anything at all, you can use Xvfb (should already be installed).
- 1. Start Xvfb as display 4: `Xvfb :4 -screen 0 1024x768x24`
- 1. Run the layout tests in the Xvfb: `DISPLAY=:4 run_layout_tests.py`
+1. Start Xvfb as display 4: `Xvfb :4 -screen 0 1024x768x24`
+1. Run the layout tests in the Xvfb: `DISPLAY=:4 run_layout_tests.py`
## Tiling Window managers
-The layout tests want to run with the window at a particular size down to the pixel level. This means if your window manager resizes the window it'll cause test failures. This is another good reason to use an embedded X server.
+The layout tests want to run with the window at a particular size down to the
+pixel level. This means if your window manager resizes the window it'll cause
+test failures. This is another good reason to use an embedded X server.
### xmonad
-In your `.xmonad/xmonad.hs`, change your config to include a manageHook along these lines:
+
+In your `.xmonad/xmonad.hs`, change your config to include a manageHook along
+these lines:
+
```
test_shell_manage = className =? "Test_shell" --> doFloat
main = xmonad $
defaultConfig
{ manageHook = test_shell_manage <+> manageHook defaultConfig
...
-``` \ No newline at end of file
+```
diff --git a/docs/linux64_bit_issues.md b/docs/linux64_bit_issues.md
deleted file mode 100644
index 98efb2d..0000000
--- a/docs/linux64_bit_issues.md
+++ /dev/null
@@ -1,67 +0,0 @@
-**Note: This page is (somewhat) obsolete.** Chrome has a native 64-bit build now. Normal users should be running 64-bit Chrome on 64-bit systems. However, it's possible developers might be using a 64-bit system to build 32-bit Chrome, and will want to test the build on the same system. In that case, some of these tips may still be useful (though there might not be much more work done to address problems with this configuration).
-
-Many 64-bit Linux distros allow you to run 32-bit apps but have many libraries misconfigured. The distros may be fixed at some point, but in the meantime we have workarounds.
-
-## IME path wrong
-Symptom: IME doesn't work. `Gtk: /usr/lib/gtk-2.0/2.10.0/immodules/im-uim.so: wrong ELF class: ELFCLASS64`
-
-Chromium bug: [9643](http://code.google.com/p/chromium/issues/detail?id=9643)
-
-Affected systems:
-| **Distro** | **upstream bug** |
-|:-----------|:-----------------|
-| Ubuntu Hardy, Jaunty | [190227](https://bugs.launchpad.net/ubuntu/+source/ia32-libs/+bug/190227) |
-
-Workaround: If your xinput setting is to use SCIM, im-scim.so is searched for in lib32 directory, but ia32 package does not have im-scim.so in Ubuntu and perhaps other distributions. Ubuntu Hardy, however, has 32-bit im-xim.so. Therefore, invoking Chrome as following enables SCIM in Chrome.
-
-> $ GTK\_IM\_MODULE=xim XMODIFIERS="@im=SCIM" chrome
-
-
-## GTK filesystem module path wrong
-Symptom: File picker doesn't work. `Gtk: /usr/lib/gtk-2.0/2.10.0/filesystems/libgio.so: wrong ELF class: ELFCLASS64`
-
-Chromium bug: [12151](http://code.google.com/p/chromium/issues/detail?id=12151)
-
-Affected systems:
-| **Distro** | **upstream bug** |
-|:-----------|:-----------------|
-| Ubuntu Hardy, Jaunty, Koala alpha 2 | [190227](https://bugs.launchpad.net/ubuntu/+source/ia32-libs/+bug/190227) |
-
-Workaround: ??
-
-## GIO module path wrong
-Symptom: `/usr/lib/gio/modules/libgioremote-volume-monitor.so: wrong ELF class: ELFCLASS64`
-
-Chromium bug: [12193](http://code.google.com/p/chromium/issues/detail?id=12193)
-
-Affected systems:
-| **Distro** | **upstream bug** |
-|:-----------|:-----------------|
-| Ubuntu Hardy, Jaunty, Koala alpha 2 | [190227](https://bugs.launchpad.net/ubuntu/+source/ia32-libs/+bug/190227) |
-
-Workaround: ??
-
-## Can't install on 64 bit Ubuntu 9.10 Live CD
-Symptom: "Error: Dependency is not satisfiable: ia32-libs-gtk"
-
-Chromium bug: n/a
-
-Affected systems:
-| **Distro** | **upstream bug** |
-|:-----------|:-----------------|
-| Ubuntu Koala alpha 2 | |
-
-Workaround: Enable the Universe repository. (It's enabled by default
-when you actually install Ubuntu; only the live CD has it disabled.)
-
-## gconv path wrong
-Symptom: Paste doesn't work. `Gdk: Error converting selection from STRING: Conversion from character set 'ISO-8859-1' to 'UTF-8' is not supported`
-
-Chromium bug: [12312](http://code.google.com/p/chromium/issues/detail?id=12312)
-
-Affected systems:
-| **Distro** | **upstream bug** |
-|:-----------|:-----------------|
-| Arch | ?? |
-
-Workaround: Set `GCONV_PATH` to appropriate `/path/to/lib32/usr/lib/gconv` . \ No newline at end of file
diff --git a/docs/linux_build_instructions.md b/docs/linux_build_instructions.md
index 3383b45..6b026eb 100644
--- a/docs/linux_build_instructions.md
+++ b/docs/linux_build_instructions.md
@@ -1,66 +1,98 @@
-#summary Build instructions for Linux
-#labels Linux,build
-
+# Build instructions for Linux
+[TOC]
## Overview
-Due its complexity, Chromium uses a set of custom tools to check out and build. Here's an overview of the steps you'll run:
- 1. **gclient**. A checkout involves pulling nearly 100 different SVN repositories of code. This process is managed with a tool called `gclient`.
- 1. **gyp**. The cross-platform build configuration system is called `gyp`, and on Linux it generates ninja build files. Running `gyp` is analogous to the `./configure` step seen in most other software.
- 1. **ninja**. The actual build itself uses `ninja`. A prebuilt binary is in depot\_tools and should already be in your path if you followed the steps to check out Chromium.
- 1. We don't provide any sort of "install" step.
- 1. You may want to [use a chroot](http://code.google.com/p/chromium/wiki/UsingALinuxChroot) to isolate yourself from versioning or packaging conflicts (or to run the layout tests).
+
+Due to its complexity, Chromium uses a set of custom tools to check out and
+build. Here's an overview of the steps you'll run:
+
+1. **gclient**. A checkout involves pulling nearly 100 different SVN
+ repositories of code. This process is managed with a tool called `gclient`.
+1. **gyp**. The cross-platform build configuration system is called `gyp`, and
+ on Linux it generates ninja build files. Running `gyp` is analogous to the
+ `./configure` step seen in most other software.
+1. **ninja**. The actual build itself uses `ninja`. A prebuilt binary is in
+ `depot_tools` and should already be in your path if you followed the steps
+ to check out Chromium.
+1. We don't provide any sort of "install" step.
+1. You may want to [use a chroot](using_a_linux_chroot.md) to isolate yourself
+ from versioning or packaging conflicts (or to run the layout tests).
## Getting a checkout
- * [Prerequisites](LinuxBuildInstructionsPrerequisites.md): what you need before you build
- * [Get the Code](http://dev.chromium.org/developers/how-tos/get-the-code): check out the source code.
-**Note**. If you are working on Chromium OS and already have sources in `chromiumos/chromium`, you **must** run `chrome_set_ver --runhooks` to set the correct dependencies. This step is otherwise performed by `gclient` as part of your checkout.
+* [Prerequisites](linux_build_instructions_prerequisites.md): what you need
+ before you build.
+* [Get the Code](http://dev.chromium.org/developers/how-tos/get-the-code):
+ check out the source code.
+
+**Note**. If you are working on Chromium OS and already have sources in
+`chromiumos/chromium`, you **must** run `chrome_set_ver --runhooks` to set the
+correct dependencies. This step is otherwise performed by `gclient` as part of
+your checkout.
## First Time Build Bootstrap
- * Make sure your dependencies are up to date by running the `install-build-deps.sh` script:
-```
-.../chromium/src$ build/install-build-deps.sh
-```
- * Before you build, you should also [install API keys](https://sites.google.com/a/chromium.org/dev/developers/how-tos/api-keys).
+* Make sure your dependencies are up to date by running the
+ `install-build-deps.sh` script:
+
+ .../chromium/src$ build/install-build-deps.sh
+
+* Before you build, you should also
+ [install API keys](https://sites.google.com/a/chromium.org/dev/developers/how-tos/api-keys).
## `gyp` (configuring)
-After `gclient sync` finishes, it will run `gyp` automatically to generate the ninja build files. For standard chromium builds, this automatic step is sufficient and you can start [compiling](https://code.google.com/p/chromium/wiki/LinuxBuildInstructions#Compilation).
-To manually configure `gyp`, run `gclient runhooks` or run `gyp` directly via `build/gyp_chromium`. See [Configuring the Build](https://code.google.com/p/chromium/wiki/CommonBuildTasks#Configuring_the_Build) for detailed `gyp` options.
+After `gclient sync` finishes, it will run `gyp` automatically to generate the
+ninja build files. For standard chromium builds, this automatic step is
+sufficient and you can start [compiling](linux_build_instructions.md).
+
+To manually configure `gyp`, run `gclient runhooks` or run `gyp` directly via
+`build/gyp_chromium`. See [Configuring the Build](common_build_tasks.md) for
+detailed `gyp` options.
[GypUserDocumentation](https://code.google.com/p/gyp/wiki/GypUserDocumentation) gives background on `gyp`, but is not necessary if you are just building Chromium.
### Configuring `gyp`
-See [Configuring the Build](https://code.google.com/p/chromium/wiki/CommonBuildTasks#Configuring_the_Build) for details; most often you'll be changing the `GYP_DEFINES` options, which is discussed here.
+
+See [Configuring the Build](common_build_tasks.md) for details; most often
+you'll be changing the `GYP_DEFINES` options, which is discussed here.
`gyp` supports a minimal amount of build configuration via the `-D` flag.
-```
-build/gyp_chromium -Dflag1=value1 -Dflag2=value2
-```
-You can store these in the `GYP_DEFINES` environment variable, separating flags with spaces, as in:
-```
- export GYP_DEFINES="flag1=value1 flag2=value2"
-```
-After changing your `GYP_DEFINES` you need to rerun `gyp`, either implicitly via `gclient sync` (which also syncs) or `gclient runhooks` or explicitly via `build/gyp_chromium`.
-Note that quotes are not necessary for a single flag, but are useful for clarity; `GYP_DEFINES=flag1=value1` is syntactically valid but can be confusing compared to `GYP_DEFINES="flag1=value1"`.
+ build/gyp_chromium -Dflag1=value1 -Dflag2=value2
+
+You can store these in the `GYP_DEFINES` environment variable, separating flags
+with spaces, as in:
+
+ export GYP_DEFINES="flag1=value1 flag2=value2"
+
+After changing your `GYP_DEFINES` you need to rerun `gyp`, either implicitly
+via `gclient sync` (which also syncs your source tree) or `gclient runhooks`,
+or explicitly via `build/gyp_chromium`.
+
+Note that quotes are not necessary for a single flag, but are useful for
+clarity; `GYP_DEFINES=flag1=value1` is syntactically valid but can be confusing
+compared to `GYP_DEFINES="flag1=value1"`.
+
+If you have various flags for various purposes, you may find it more legible to
+break them up across several lines, taking care to include spaces, like this:
+
+ export GYP_DEFINES="flag1=value1"\
+ " flag2=value2"
-If you have various flags for various purposes, you may find it more legible to break them up across several lines, taking care to include spaces, such as like this:
-```
- export GYP_DEFINES="flag1=value1"\
- " flag2=value2"
-```
or like this (allowing comments):
-```
- export GYP_DEFINES="flag1=value1" # comment
- GYP_DEFINES+=" flag2=value2" # another comment
-```
+
+ export GYP_DEFINES="flag1=value1" # comment
+ GYP_DEFINES+=" flag2=value2" # another comment
+
### Sample configurations
- * **gcc warnings**. By default we fail to build if there are any compiler warnings. If you're getting warnings, can't build because of that, but just want to get things done, you can specify `-Dwerror=` to turn that off:
-```
+
+* **gcc warnings**. By default we fail to build if there are any compiler
+ warnings. If you're getting warnings, can't build because of that, but just
+ want to get things done, you can specify `-Dwerror=` to turn that off:
+
+```shell
# one-off
build/gyp_chromium -Dwerror=
# via variable
@@ -68,8 +100,15 @@ export GYP_DEFINES="werror="
build/gyp_chromium
```
- * **ChromeOS**. `-Dchromeos=1` builds the ChromeOS version of Chrome. This is **not** all of ChromeOS (see [the ChromiumOS](http://www.chromium.org/chromium-os) page for full build instructions), this is just the slightly tweaked version of the browser that runs on that system. Its not designed to be run outside of ChromeOS and some features won't work, but compiling on your Linux desktop can be useful for certain types of development and testing.
-```
+* **ChromeOS**. `-Dchromeos=1` builds the ChromeOS version of Chrome. This is
+  **not** all of ChromeOS (see
+  [the ChromiumOS](http://www.chromium.org/chromium-os) page for full build
+  instructions); this is just the slightly tweaked version of the browser that
+  runs on that system. It's not designed to be run outside of ChromeOS and
+  some features won't work, but compiling on your Linux desktop can be useful
+  for certain types of development and testing.
+
+```shell
# one-off
build/gyp_chromium -Dchromeos=1
# via variable
@@ -77,88 +116,108 @@ export GYP_DEFINES="chromeos=1"
build/gyp_chromium
```
-
## Compilation
+
The weird "`src/`" directory is an artifact of `gclient`. Start with:
-```
-$ cd src
-```
+
+ $ cd src
### Build just chrome
-```
-$ ninja -C out/Debug chrome
-```
+
+ $ ninja -C out/Debug chrome
+
### Faster builds
-See LinuxFasterBuilds
+
+See [Linux Faster Builds](linux_faster_builds.md)
### Build every test
-```
-$ ninja -C out/Debug
-```
-The above builds all libraries and tests in all components. **It will take hours.**
+
+ $ ninja -C out/Debug
+
+The above builds all libraries and tests in all components. **It will take
+hours.**
Specify other target names to restrict the build to just what you're
interested in. To build just the simplest unit test:
-```
-$ ninja -C out/Debug base_unittests
-```
+
+ $ ninja -C out/Debug base_unittests
### Clang builds
-Information about building with Clang can be found [here](http://code.google.com/p/chromium/wiki/Clang).
+Information about building with Clang can be found [here](clang.md).
### Output
-Executables are written in `src/out/Debug/` for Debug builds, and `src/out/Release/` for Release builds.
+Executables are written in `src/out/Debug/` for Debug builds, and
+`src/out/Release/` for Release builds.
### Release mode
Pass `-C out/Release` to the ninja invocation:
-```
-$ ninja -C out/Release chrome
-```
+
+ $ ninja -C out/Release chrome
+
### Seeing the commands
-If you want to see the actual commands that ninja is invoking, add `-v` to the ninja invocation.
-```
-$ ninja -v -C out/Debug chrome
-```
-This is useful if, for example, you are debugging gyp changes, or otherwise need to see what ninja is actually doing.
+If you want to see the actual commands that ninja is invoking, add `-v` to the
+ninja invocation.
+
+ $ ninja -v -C out/Debug chrome
+
+This is useful if, for example, you are debugging gyp changes, or otherwise need
+to see what ninja is actually doing.
### Clean builds
-All built files are put into the `out/` directory, so to start over with a clean build, just
-```
-rm -rf out
-```
-and run `gclient runhooks` or `build\gyp_chromium` again to recreate the ninja build files (which are also stored in `out/`). Or you can run `ninja -C out/Debug -t clean`.
+
+All built files are put into the `out/` directory, so to start over with a clean
+build, just
+
+ rm -rf out
+
+and run `gclient runhooks` or `build/gyp_chromium` again to recreate the ninja
+build files (which are also stored in `out/`). Or you can run
+`ninja -C out/Debug -t clean`.
### Linker Crashes
+
If, during the final link stage:
-```
- LINK(target) out/Debug/chrome
-```
+
+ LINK(target) out/Debug/chrome
+
You get an error like:
+
```
-collect2: ld terminated with signal 6 Aborted terminate called after throwing an instance of 'std::bad_alloc'
+collect2: ld terminated with signal 6 Aborted terminate called after throwing an
+instance of 'std::bad_alloc'
-collect2: ld terminated with signal 11 [Segmentation fault], core dumped
+collect2: ld terminated with signal 11 [Segmentation fault], core dumped
```
-you are probably running out of memory when linking. Try one of:
- 1. Use the `gold` linker
- 1. Build on a 64-bit computer
- 1. Build in Release mode (debugging symbols require a lot of memory)
- 1. Build as shared libraries (note: this build is for developers only, and may have broken functionality)
-Most of these are described on the LinuxFasterBuilds page.
+
+you are probably running out of memory when linking. Try one of:
+
+1. Use the `gold` linker
+1. Build on a 64-bit computer
+1. Build in Release mode (debugging symbols require a lot of memory)
+1. Build as shared libraries (note: this build is for developers only, and may
+ have broken functionality)
+
+Most of these are described on the [LinuxFasterBuilds](linux_faster_builds.md)
+page.
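+
+For example, a minimal sketch combining two of the remedies above (Release mode
+plus the developer-only shared-library build, assuming the usual
+`component=shared_library` gyp define for the latter):
+
+```shell
+export GYP_DEFINES="component=shared_library"
+build/gyp_chromium
+ninja -C out/Release chrome
+```
+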
## Advanced Features
- * Building frequently? See LinuxFasterBuilds.
- * Cross-compiling for ARM? See LinuxChromiumArm.
- * Want to use Eclipse as your IDE? See LinuxEclipseDev.
- * Built version as Default Browser? See LinuxDevBuildAsDefaultBrowser.
+* Building frequently? See [LinuxFasterBuilds](linux_faster_builds.md).
+* Cross-compiling for ARM? See [LinuxChromiumArm](linux_chromium_arm.md).
+* Want to use Eclipse as your IDE? See
+ [LinuxEclipseDev](linux_eclipse_dev.md).
+* Built version as Default Browser? See
+ [LinuxDevBuildAsDefaultBrowser](linux_dev_build_as_default_browser.md).
## Next Steps
-If you want to contribute to the effort toward a Chromium-based browser for Linux, please check out the [Linux Development page](LinuxDevelopment.md) for more information. \ No newline at end of file
+
+If you want to contribute to the effort toward a Chromium-based browser for
+Linux, please check out the [Linux Development page](linux_development.md) for
+more information.
diff --git a/docs/linux_build_instructions_prerequisites.md b/docs/linux_build_instructions_prerequisites.md
index fa179a0..e0cc4a6 100644
--- a/docs/linux_build_instructions_prerequisites.md
+++ b/docs/linux_build_instructions_prerequisites.md
@@ -1,55 +1,90 @@
+# Linux Build Instructions — Prerequisites
+
This page describes system requirements for building Chromium on Linux.
+[TOC]
+
+## System Requirements
+
+### Linux distribution
+
+You should be able to build Chromium on any reasonably modern Linux
+distribution, but there are a lot of distributions and we sometimes break things
+on one or another. Internally, our development platform has been a variant of
+Ubuntu 14.04 (Trusty Tahr); we expect you will have the most luck on this
+platform, although directions for other popular platforms are included below.
+
+### Disk space
+
+It takes about 10GB or so of disk space to check out and build the source tree.
+This number grows over time.
+### Memory space
-# System Requirements
+It can take about 8GB of swap space to link chromium and its tests. If you get
+an out-of-memory error during the final link, you will need to add swap space
+with `swapon`. It's recommended to have at least 4GB of memory available for
+building a statically linked debug build. Dynamic linking and/or building a
+release build lowers memory requirements. People with less than 8GB of memory
+may want to skip building the tests, since they are quite large.
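+
+A minimal sketch of adding a swap file (the path and size are illustrative;
+adjust for your system):
+
+```shell
+sudo dd if=/dev/zero of=/swapfile bs=1M count=8192  # 8GB
+sudo mkswap /swapfile
+sudo swapon /swapfile
+```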
-## Linux distribution
-You should be able to build Chromium on any reasonably modern Linux distribution, but there are a lot of distributions and we sometimes break things on one or another. Internally, our development platform has been a variant of Ubuntu 14.04 (Trusty Tahr); we expect you will have the most luck on this platform, although directions for other popular platforms are included below.
+### 64-bit Systems
-## Disk space
-It takes about 10GB or so of disk space to check out and build the source tree. This number grows over time.
+Chromium can be compiled as either a 32-bit or 64-bit application. Chromium
+requires several system libraries to compile and run. While it is possible to
+compile and run a 32-bit Chromium on 64-bit Linux, many distributions are
+missing the necessary 32-bit libraries, which will result in build or run-time
+errors.
-## Memory space
-It takes about 8GB of swap file to link chromium and its tests. If you get an out-of-memory error during the final link, you will need to add swap space with swapon. It's recommended to have at least 4GB of memory available for building a statically linked debug build. Dynamic linking and/or building a release build lowers memory requirements. People with less than 8GB of memory may want to not build tests since they are quite large.
+### Depot tools
-## 64-bit Systems
-Chromium can be compiled as either a 32-bit or 64-bit application. Chromium requires several system libraries to compile and run. While it is possible to compile and run a 32-bit Chromium on 64-bit Linux, many distributions are missing the necessary 32-bit libraries, and will result in build or run-time errors.
+Before setting up the environment, make sure you install the
+[depot tools](http://dev.chromium.org/developers/how-tos/depottools) first.
-## Depot tools
-Before setting up the environment, make sure you install the [depot tools](http://dev.chromium.org/developers/how-tos/depottools) first.
+## Software Requirements
-# Software Requirements
+### Ubuntu Setup
-## Ubuntu Setup
-Run [build/install-build-deps.sh](https://chromium.googlesource.com/chromium/chromium/+/trunk/build/install-build-deps.sh) The script only supports current releases as listed on https://wiki.ubuntu.com/Releases.
+Run [build/install-build-deps.sh](build/install-build-deps.sh). The script only
+supports current releases as listed on https://wiki.ubuntu.com/Releases.
-Building on Linux requires software not usually installed with the distributions.
-The script attempts to automate installing the required software. This script is used to set up the canonical builders, and as such is the most up to date reference for the required prerequisites.
+Building on Linux requires software not usually installed with the
+distributions.
-## Other distributions
-Note: Other distributions are not officially supported for building and the instructions below might be outdated.
+The script attempts to automate installing the required software. This script is
+used to set up the canonical builders, and as such is the most up to date
+reference for the required prerequisites.
-### Debian Setup
+### Other distributions
+
+Note: Other distributions are not officially supported for building and the
+instructions below might be outdated.
+
+#### Debian Setup
Follow the Ubuntu instructions above.
-If you want to install the build-deps manually, note that the original packages are for Ubuntu. Here are the Debian equivalents:
- * libexpat-dev -> libexpat1-dev
- * freetype-dev -> libfreetype6-dev
- * libbzip2-dev -> libbz2-dev
- * libcupsys2-dev -> libcups2-dev
+If you want to install the build-deps manually, note that the original packages
+are for Ubuntu. Here are the Debian equivalents:
+
+* libexpat-dev -> libexpat1-dev
+* freetype-dev -> libfreetype6-dev
+* libbzip2-dev -> libbz2-dev
+* libcupsys2-dev -> libcups2-dev
-Additionally, if you're building Chromium components for Android, you'll need to install the package: lib32z1
+Additionally, if you're building Chromium components for Android, you'll need
+to install the `lib32z1` package.
-### openSUSE Setup
+#### openSUSE Setup
-For openSUSE 11.0 and later, see [Linux openSUSE Build Instructions](LinuxOpenSuseBuildInstructions.md).
+For openSUSE 11.0 and later, see
+[Linux openSUSE Build Instructions](linux_open_suse_build_instructions.md).
-### Fedora Setup
+#### Fedora Setup
Recent systems:
-```
+
+```shell
su -c 'yum install subversion pkgconfig python perl gcc-c++ bison \
flex gperf nss-devel nspr-devel gtk2-devel glib2-devel freetype-devel \
atk-devel pango-devel cairo-devel fontconfig-devel GConf2-devel \
@@ -59,58 +94,71 @@ mesa-libGLU-devel libXScrnSaver-devel \
libgnome-keyring-devel cups-devel libXtst-devel libXt-devel pam-devel'
```
-The msttcorefonts packages can be obtained by following the instructions present here: http://www.fedorafaq.org/#installfonts
+The msttcorefonts packages can be obtained by following the instructions present
+here: http://www.fedorafaq.org/#installfonts
For the optional packages:
- * php-cgi is provided by the php-cli package
- * wdiff doesn't exist in Fedora repositories, a possible alternative would be dwdiff
- * sun-java6-fonts doesn't exist in Fedora repositories, needs investigating
+* php-cgi is provided by the php-cli package
+* wdiff doesn't exist in Fedora repositories, a possible alternative would be
+ dwdiff
+* sun-java6-fonts doesn't exist in Fedora repositories, needs investigating
-```
-su -c 'yum install httpd mod_ssl php php-cli wdiff'
-```
+ su -c 'yum install httpd mod_ssl php php-cli wdiff'
-### Arch Linux Setup
-Most of these packages are probably already installed since they're often used, and the parameter --needed ensures that packages up to date are not reinstalled.
-```
-$ sudo pacman -S --needed python perl gcc gcc-libs bison flex gperf pkgconfig nss \
- alsa-lib gconf glib2 gtk2 nspr ttf-ms-fonts freetype2 cairo dbus \
+
+#### Arch Linux Setup
+
+Most of these packages are probably already installed since they're often used,
+and the `--needed` parameter ensures that packages that are already up to date
+are not reinstalled.
+
+```shell
+$ sudo pacman -S --needed python perl gcc gcc-libs bison flex gperf pkgconfig \
+ nss alsa-lib gconf glib2 gtk2 nspr ttf-ms-fonts freetype2 cairo dbus \
libgnome-keyring
```
For the optional packages on Arch Linux:
- * php-cgi is provided with pacman
- * wdiff is not in the main repository but dwdiff is. You can get wdiff in AUR/yaourt
- * sun-java6-fonts do not seem to be in main repository or AUR.
-For a successful build, add `'remove_webcore_debug_symbols': 1,` to the variables-object in include.gypi. Tested on 64-bit Arch Linux.
+* php-cgi is provided with pacman.
+* wdiff is not in the main repository but dwdiff is. You can get wdiff in
+  AUR/yaourt.
+* sun-java6-fonts do not seem to be in the main repository or AUR.
-TODO: Figure out how to make it build with the WebCore debug symbols. `make V=1` can be useful for solving the problem.
+For a successful build, add `'remove_webcore_debug_symbols': 1,` to the
+`variables` object in `include.gypi`. Tested on 64-bit Arch Linux.
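+
+A sketch of what that might look like, assuming the conventional
+`~/.gyp/include.gypi` location:
+
+```
+{
+  'variables': {
+    'remove_webcore_debug_symbols': 1,
+  },
+}
+```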
+TODO: Figure out how to make it build with the WebCore debug symbols. `make V=1`
+can be useful for solving the problem.
-### Mandriva setup
+#### Mandriva setup
-```
-urpmi lib64fontconfig-devel lib64alsa2-devel lib64dbus-1-devel lib64GConf2-devel \
-lib64freetype6-devel lib64atk1.0-devel lib64gtk+2.0_0-devel lib64pango1.0-devel \
-lib64cairo-devel lib64nss-devel lib64nspr-devel g++ python perl bison flex subversion \
-gperf
+```shell
+urpmi lib64fontconfig-devel lib64alsa2-devel lib64dbus-1-devel \
+lib64GConf2-devel lib64freetype6-devel lib64atk1.0-devel lib64gtk+2.0_0-devel \
+lib64pango1.0-devel lib64cairo-devel lib64nss-devel lib64nspr-devel g++ python \
+perl bison flex subversion gperf
```
-Note 1: msttcorefonts are not available, you will need to build your own (see instructions, not hard to do, see http://code.google.com/p/chromium/wiki/MandrivaMsttcorefonts ) or use drakfont to import the fonts from a windows installation
+Note 1: msttcorefonts are not available; you will need to build your own (not
+hard to do; see [mandriva_msttcorefonts.md](mandriva_msttcorefonts.md)) or use
+drakfont to import the fonts from a Windows installation.
-Note 2: these packages are for 64 bit, to download the 32 bit packages, substitute lib64 with lib
+Note 2: these packages are for 64-bit systems; to download the 32-bit packages,
+substitute lib64 with lib.
-Note 3: some of these packages might not be explicitly necessary as they come as dependencies, there is no harm in including them however.
+Note 3: some of these packages might not be explicitly necessary, as they come
+in as dependencies; there is no harm in including them, however.
-Note 4: to build on 64 bit systems use, instead of GYP\_DEFINES=target\_arch=x64 , as mentioned in the general notes for building on 64 bit:
+Note 4: to build on 64-bit systems, use the following (rather than a one-off
+`GYP_DEFINES=target_arch=x64`), as mentioned in the general notes for building
+on 64 bit:
-```
+```shell
export GYP_DEFINES="target_arch=x64"
gclient runhooks --force
```
-### Gentoo setup
-```
-emerge www-client/chromium
-``` \ No newline at end of file
+#### Gentoo setup
+
+ emerge www-client/chromium
diff --git a/docs/linux_building_debug_gtk.md b/docs/linux_building_debug_gtk.md
index 75cdf93..dea311e 100644
--- a/docs/linux_building_debug_gtk.md
+++ b/docs/linux_building_debug_gtk.md
@@ -1,64 +1,65 @@
-# Introduction
+# Linux — Building and Debugging GTK
Sometimes installing the debug packages for gtk and glib isn't quite enough.
(For instance, if the artifacts from -O2 are driving you bonkers in gdb, you
might want to rebuild with -O0.)
-Here's how to build from source and use your local version without installing it.
+Here's how to build from source and use your local version without installing
+it.
+
+[TOC]
## 32-bit systems
On Ubuntu, to download and build glib and gtk suitable for debugging:
-1. If you don't have a gpg key yet, generate one with gpg --gen-key.
-
-2. Create file ~/.devscripts containing DEBSIGN\_KEYID=yourkey, e.g.
-DEBSIGN\_KEYID=CC91A262
-(See http://www.debian.org/doc/maint-guide/ch-build.en.html
-
-3. If you're on a 32 bit system, do:
-```
-#!/bin/sh
-set -x
-set -e
-# Workaround for "E: Build-dependencies for glib2.0 could not be satisfied"
-# See also https://bugs.launchpad.net/ubuntu/+source/apt/+bug/245068
-sudo apt-get install libgamin-dev
-sudo apt-get build-dep glib2.0 gtk+2.0
-rm -rf ~/mylibs
-mkdir ~/mylibs
-cd ~/mylibs
-apt-get source glib2.0 gtk+2.0
-cd glib2.0*
-DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
-cd ../gtk+2.0*
-DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
-```
-This should take about an hour. If it gets stuck waiting for a zombie,
+1. If you don't have a gpg key yet, generate one with `gpg --gen-key`.
+2. Create file `~/.devscripts` containing `DEBSIGN_KEYID=yourkey`, e.g.
+ `DEBSIGN_KEYID=CC91A262` (See
+ http://www.debian.org/doc/maint-guide/ch-build.en.html)
+3. If you're on a 32 bit system, do:
+
+ ```shell
+ #!/bin/sh
+ set -x
+ set -e
+ # Workaround for "E: Build-dependencies for glib2.0 could not be satisfied"
+ # See also https://bugs.launchpad.net/ubuntu/+source/apt/+bug/245068
+ sudo apt-get install libgamin-dev
+ sudo apt-get build-dep glib2.0 gtk+2.0
+ rm -rf ~/mylibs
+ mkdir ~/mylibs
+ cd ~/mylibs
+ apt-get source glib2.0 gtk+2.0
+ cd glib2.0*
+ DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
+ cd ../gtk+2.0*
+ DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
+ ```
+
+This should take about an hour. If it gets stuck waiting for a zombie,
you may have to kill its closest parent (the makefile uses subshells,
-and bash seems to get confused). When I did this, it continued successfully.
+and bash seems to get confused). When I did this, it continued successfully.
At the very end, it will prompt you for the passphrase for your gpg key.
Then, to run an app with those libraries, do e.g.
-```
-export LD_LIBRARY_PATH=$HOME/mylibs/gtk+2.0-2.16.1/debian/install/shared/usr/lib:$HOME/mylibs/gtk+2.0-2.20.1/debian/install/shared/usr/lib
-```
+
+ export LD_LIBRARY_PATH=$HOME/mylibs/gtk+2.0-2.16.1/debian/install/shared/usr/lib:$HOME/mylibs/gtk+2.0-2.20.1/debian/install/shared/usr/lib
gdb ignores that variable, so in the debugger, you would have to do something like
-```
-set solib-search-path $HOME/mylibs/gtk+2.0-2.16.1/debian/install/shared/usr/lib:$HOME/mylibs/gtk+2.0-2.20.1/debian/install/shared/usr/lib
-```
+
+ set solib-search-path $HOME/mylibs/gtk+2.0-2.16.1/debian/install/shared/usr/lib:$HOME/mylibs/gtk+2.0-2.20.1/debian/install/shared/usr/lib
See also http://sources.redhat.com/gdb/current/onlinedocs/gdb_17.html
## 64-bit systems
-If you're on a 64 bit systems, you can do the above on a 32
+If you're on a 64 bit system, you can do the above on a 32
bit system, and copy the result. Or try one of the following:
### Building your own GTK
-```
+```shell
apt-get source glib-2.0 gtk+-2.0
export CFLAGS='-m32 -g'
@@ -73,11 +74,12 @@ setarch i386 ./configure --prefix=/work/32 --enable-debug=yes
setarch i386 ./configure --prefix=/work/32 --enable-debug=yes --without-libtiff
```
-
### ia32-libs
+
_Note: Evan tried this and didn't get any debug libs at the end._
Or you could try this instead:
+
```
#!/bin/sh
set -x
@@ -92,19 +94,20 @@ DEB_BUILD_OPTIONS="nostrip noopt debug" debuild
```
By default, this just grabs and unpacks prebuilt libraries; see
-ia32-libs-2.7ubuntu6/fetch-and-build which documents a BUILD
-variable which would force actual building.
-This would take way longer, since it builds dozens of libraries.
-I haven't tried it yet.
+ia32-libs-2.7ubuntu6/fetch-and-build, which documents a BUILD variable that
+would force actual building. This would take way longer, since it builds dozens
+of libraries. I haven't tried it yet.
#### Possible Issues
debuild may fail with
+
```
gpg: [stdin]: clearsign failed: secret key not available
debsign: gpg error occurred! Aborting....
```
-if you forget to create ~/.devscripts with the right contents.
-The build may fail with a "FAIL: abicheck.sh" if gold is your system
-linker. Use ld instead. \ No newline at end of file
+if you forget to create `~/.devscripts` with the right contents.
+
+The build may fail with a `FAIL: abicheck.sh` if gold is your system linker. Use
+ld instead.
diff --git a/docs/linux_cert_management.md b/docs/linux_cert_management.md
index 7faf6ba..7c75acf 100644
--- a/docs/linux_cert_management.md
+++ b/docs/linux_cert_management.md
@@ -1,64 +1,96 @@
-**NOTE:** SSL client authentication with personal certificates does not work completely in Linux, see [issue 16830](http://code.google.com/p/chromium/issues/detail?id=16830) and [issue 25241](http://code.google.com/p/chromium/issues/detail?id=25241).
+# Linux Cert Management
-# Introduction
+**NOTE:** SSL client authentication with personal certificates does not work
+completely in Linux, see [issue 16830](https://crbug.com/16830) and
+[issue 25241](https://crbug.com/25241).
-The easy way to manage certificates is navigate to chrome://settings/search#ssl. Then click on the "Manage Certificates" button. This will load a built-in interface for managing certificates.
+The easy way to manage certificates is to navigate to
+chrome://settings/search#ssl. Then click on the "Manage Certificates" button.
+This will load a built-in interface for managing certificates.
-On Linux, Chromium uses the [NSS Shared DB](https://wiki.mozilla.org/NSS_Shared_DB_And_LINUX). If the built-in manager does not work for you then you can configure certificates with the [NSS command line tools](http://www.mozilla.org/projects/security/pki/nss/tools/).
+On Linux, Chromium uses the
+[NSS Shared DB](https://wiki.mozilla.org/NSS_Shared_DB_And_LINUX). If the
+built-in manager does not work for you then you can configure certificates with
+the
+[NSS command line tools](http://www.mozilla.org/projects/security/pki/nss/tools/).
-# Details
+## Details
-## Get the tools
- * Debian/Ubuntu: `sudo apt-get install libnss3-tools`
- * Fedora: `su -c "yum install nss-tools"`
- * Gentoo: `su -c "echo 'dev-libs/nss utils' >> /etc/portage/package.use && emerge dev-libs/nss"` (You need to launch all commands below with the `nss` prefix, e.g., `nsscertutil`.)
- * Opensuse: `sudo zypper install mozilla-nss-tools`
+### Get the tools
+* Debian/Ubuntu: `sudo apt-get install libnss3-tools`
+* Fedora: `su -c "yum install nss-tools"`
+* Gentoo: `su -c "echo 'dev-libs/nss utils' >> /etc/portage/package.use &&
+ emerge dev-libs/nss"` (You need to launch all commands below with the `nss`
+ prefix, e.g., `nsscertutil`.)
+* Opensuse: `sudo zypper install mozilla-nss-tools`
-## List all certificates
+### List all certificates
-`certutil -d sql:$HOME/.pki/nssdb -L`
+ certutil -d sql:$HOME/.pki/nssdb -L
+
+#### Ubuntu Jaunty error
-### Ubuntu Jaunty error
The command above (and most other commands) gives:
-`certutil: function failed: security library: invalid arguments.`
+ certutil: function failed: security library: invalid arguments.
This was observed with package version 3.12.3.1-0ubuntu0.9.04.2.
-## List details of a certificate
+### List details of a certificate
-`certutil -d sql:$HOME/.pki/nssdb -L -n <certificate nickname>`
+ certutil -d sql:$HOME/.pki/nssdb -L -n <certificate nickname>
-## Add a certificate
+### Add a certificate
-`certutil -d sql:$HOME/.pki/nssdb -A -t <TRUSTARGS> -n <certificate nickname> -i <certificate filename>`
+```shell
+certutil -d sql:$HOME/.pki/nssdb -A -t <TRUSTARGS> -n <certificate nickname> \
+-i <certificate filename>
+```
-The TRUSTARGS are three strings of zero or more alphabetic
-characters, separated by commas. They define how the certificate should be trusted for SSL, email, and object signing, and are explained in the [certutil docs](http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html#1034193) or [Meena's blog post on trust flags](https://blogs.oracle.com/meena/entry/notes_about_trust_flags).
+The TRUSTARGS are three strings of zero or more alphabetic characters, separated
+by commas. They define how the certificate should be trusted for SSL, email, and
+object signing, and are explained in the
+[certutil docs](http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html#1034193)
+or
+[Meena's blog post on trust flags](https://blogs.oracle.com/meena/entry/notes_about_trust_flags).
-For example, to trust a root CA certificate for issuing SSL server certificates, use
+For example, to trust a root CA certificate for issuing SSL server certificates,
+use
-`certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n <certificate nickname> -i <certificate filename>`
+```shell
+certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n <certificate nickname> \
+-i <certificate filename>
+```
To import an intermediate CA certificate, use
-`certutil -d sql:$HOME/.pki/nssdb -A -t ",," -n <certificate nickname> -i <certificate filename>`
+```shell
+certutil -d sql:$HOME/.pki/nssdb -A -t ",," -n <certificate nickname> \
+-i <certificate filename>
+```
Note: to trust a self-signed server certificate, we should use
-`certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n <certificate nickname> -i <certificate filename>`
+```shell
+certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n <certificate nickname> \
+-i <certificate filename>
+```
-This should work now, because [NSS bug 531160](https://bugzilla.mozilla.org/show_bug.cgi?id=531160) is claimed to be fixed in a related bug report. If it doesn't work, then to work around the NSS bug, you have to trust it as a CA using the "C,," trust flags.
+This should work now, because
+[NSS bug 531160](https://bugzilla.mozilla.org/show_bug.cgi?id=531160) is claimed
+to be fixed in a related bug report. If it doesn't work, then to work around
+the NSS bug, you have to trust it as a CA using the "C,," trust flags.
-### Add a personal certificate and private key for SSL client authentication
+#### Add a personal certificate and private key for SSL client authentication
Use the command:
-`pk12util -d sql:$HOME/.pki/nssdb -i PKCS12_file.p12`
+ pk12util -d sql:$HOME/.pki/nssdb -i PKCS12_file.p12
-to import a personal certificate and private key stored in a PKCS #12 file. The TRUSTARGS of the personal certificate will be set to "u,u,u".
+to import a personal certificate and private key stored in a PKCS #12 file. The
+TRUSTARGS of the personal certificate will be set to "u,u,u".
-## Delete a certificate
+### Delete a certificate
-`certutil -d sql:$HOME/.pki/nssdb -D -n <certificate nickname>` \ No newline at end of file
+ certutil -d sql:$HOME/.pki/nssdb -D -n <certificate nickname>
diff --git a/docs/linux_chromium_arm.md b/docs/linux_chromium_arm.md
index 927be6e..2e01d77 100644
--- a/docs/linux_chromium_arm.md
+++ b/docs/linux_chromium_arm.md
@@ -1,21 +1,21 @@
-Note this currently contains various recipes:
+# Linux Chromium Arm Recipes
+[TOC]
+## Recipe 1: Building for an ARM CrOS device
----
+This recipe uses `ninja` (instead of `make`) so its startup time is much lower
+(sub-1s, instead of tens of seconds), is integrated with goma (for
+google-internal users) for very high parallelism, and uses `sshfs` instead of
+`scp` to significantly speed up the compile-run cycle. It has moved to
+https://sites.google.com/a/chromium.org/dev/developers/how-tos/-quickly-building-for-cros-arm-x64
+(mostly because of the ease of attaching files to sites).
-# Recipe1: Building for an ARM CrOS device
-This recipe uses `ninja` (instead of `make`) so its startup time is much lower (sub-1s, instead of tens of seconds), is integrated with goma (for google-internal users) for very high parallelism, and uses `sshfs` instead of `scp` to significantly speed up the compile-run cycle. It has moved to https://sites.google.com/a/chromium.org/dev/developers/how-tos/-quickly-building-for-cros-arm-x64 (mostly b/c of the ease of attaching files to sites).
+## Recipe 2: Explicit cross compiling
-
-
----
-
-
-# Recipe2: Explicit Cross compiling
-
-Due to the lack of ARM hardware with the grunt to build Chromium native, cross compiling is currently the recommended method of building for ARM.
+Due to the lack of ARM hardware with the grunt to build Chromium natively,
+cross compiling is currently the recommended method of building for ARM.
These instructions are designed to run on Ubuntu Precise.
@@ -24,30 +24,28 @@ These instruction are designed to run on Ubuntu Precise.
The install-build-deps script can be used to install all the compiler
and library dependencies directly from Ubuntu:
-```
-$ ./build/install-build-deps.sh --arm
-```
+ $ ./build/install-build-deps.sh --arm
### Installing the rootfs
-A prebuilt rootfs image is kept up-to-date on Cloud Storage. It will
-automatically be installed by gclient runhooks installed if you have 'target\_arch=arm' in your GYP\_DEFINES.
+A prebuilt rootfs image is kept up-to-date on Cloud Storage. It will
+automatically be installed by `gclient runhooks` if you have
+`target_arch=arm` in your `GYP_DEFINES`.
To install the sysroot manually you can run:
-```
-$ ./chrome/installer/linux/sysroot_scripts/install-debian.wheezy.sysroot.py --arch=arm
-```
+
+ ./chrome/installer/linux/sysroot_scripts/install-debian.wheezy.sysroot.py \
+ --arch=arm
### Building
-To build for ARM, using the clang binary in the chrome tree, use the following settings:
+To build for ARM, using the clang binary in the chrome tree, use the following
+settings:
-```
-export GYP_CROSSCOMPILE=1
-export GYP_DEFINES="target_arch=arm"
-```
+ export GYP_CROSSCOMPILE=1
+ export GYP_DEFINES="target_arch=arm"
-There variables need to be set at gyp-time (when you run gyp\_chromium),
+These variables need to be set at gyp-time (when you run `gyp_chromium`),
but are not needed at build-time (when you run make/ninja).
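+
+A sketch of the full configure-and-build cycle (the `chrome` target and
+`out/Release` output directory follow the generic Linux build instructions):
+
+```shell
+export GYP_CROSSCOMPILE=1
+export GYP_DEFINES="target_arch=arm"
+build/gyp_chromium
+ninja -C out/Release chrome
+```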
## Testing
@@ -55,21 +53,20 @@ but are not needed at build-time (when you run make/ninja).
### Automated Build and Testing
Chromium's testing infrastructure for ARM/Linux is (to say the least)
-in its infancy. There are currently two builders setup, one on the
+in its infancy. There are currently two builders set up, one on the
FYI waterfall and one on the trybot waterfall:
http://build.chromium.org/p/chromium.fyi/builders/Linux%20ARM%20Cross-Compile
http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_arm
-
-These builders cross compile on x86-64 and then trigger testing
-on real ARM hard bots:
+These builders cross compile on x86-64 and then trigger testing on real ARM
+hardware bots:
http://build.chromium.org/p/chromium.fyi/builders/Linux%20ARM%20Tests%20%28Panda%29/
http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_arm_tester
-Unfortunately, even those the builders are usually green, the testers
-are not yet well maintained or monitored.
+Unfortunately, even though the builders are usually green, the testers are not
+yet well maintained or monitored.
There is a compile-only trybot and an FYI bot as well:
@@ -78,7 +75,10 @@ http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_arm_compile
### Testing with QEMU
-If you don't have a real ARM machine, you can test with QEMU. For instance, there are some prebuilt QEMU Debian images here: http://people.debian.org/~aurel32/qemu/. Another option is to use the rootfs generated by rootstock, as mentioned above.
+If you don't have a real ARM machine, you can test with QEMU. For instance,
+there are some prebuilt QEMU Debian images here:
+http://people.debian.org/~aurel32/qemu/. Another option is to use the rootfs
+generated by rootstock, as mentioned above.
Here's a minimal xorg.conf if needed:
@@ -119,5 +119,7 @@ EndSection
```
### Notes
- * To building for thumb reduces the stripped release binary by around 9MB, equating to ~33% of the binary size. To enable thumb, set 'arm\_thumb': 1
- * TCmalloc does not have an ARM port, so it is disabled. \ No newline at end of file
+
+* Building for thumb reduces the stripped release binary by around 9MB,
+  equating to ~33% of the binary size. To enable thumb, set `'arm_thumb': 1`.
+* TCmalloc does not have an ARM port, so it is disabled.
diff --git a/docs/linux_chromium_packages.md b/docs/linux_chromium_packages.md
index c91b051..8df6873 100644
--- a/docs/linux_chromium_packages.md
+++ b/docs/linux_chromium_packages.md
@@ -1,4 +1,11 @@
-Some Linux distributions package up Chromium for easy installation. Please note that Chromium is not identical to Google Chrome -- see ChromiumBrowserVsGoogleChrome -- and that distributions may (and actually do) make their own modifications.
+# Linux Chromium Packages
+
+Some Linux distributions package up Chromium for easy installation. Please note
+that Chromium is not identical to Google Chrome -- see
+[chromium_browser_vs_google_chrome.md](chromium_browser_vs_google_chrome.md) --
+and that distributions may (and actually do) make their own modifications.
+
+TODO: Move away from tables.
| **Distro** | **Contact** | **URL for packages** | **URL for distro-specific patches** |
|:-----------|:------------|:---------------------|:------------------------------------|
@@ -12,6 +19,7 @@ Some Linux distributions package up Chromium for easy installation. Please note
| NixOS | aszlig `"^[0-9]+$"@regexmail.net` | http://hydra.nixos.org/search?query=pkgs.chromium | https://github.com/NixOS/nixpkgs/tree/master/pkgs/applications/networking/browsers/chromium |
## Unofficial packages
+
Packages in this section are not part of the distro's official repositories.
| **Distro** | **Contact** | **URL for packages** | **URL for distro-specific patches** |
@@ -20,17 +28,21 @@ Packages in this section are not part of the distro's official repositories.
| Slackware | Eric Hameleers `alien@slackware.com` | http://www.slackware.com/~alien/slackbuilds/chromium/ | http://www.slackware.com/~alien/slackbuilds/chromium/ |
## Other Unixes
+
| **System** | **Contact** | **URL for packages** | **URL for patches** |
|:-----------|:------------|:---------------------|:--------------------|
| FreeBSD | http://lists.freebsd.org/mailman/listinfo/freebsd-chromium | http://wiki.freebsd.org/Chromium | http://trillian.chruetertee.ch/chromium |
| OpenBSD | Robert Nagy `robert@openbsd.org` | http://openports.se/www/chromium | http://www.openbsd.org/cgi-bin/cvsweb/ports/www/chromium/patches/ |
-
## Updating the list
-Are you packaging Chromium for a Linux distro? Is the information above out of date? Please contact `thestig@chromium.org` with updates.
+Are you packaging Chromium for a Linux distro? Is the information above out of
+date? Please contact `thestig@chromium.org` with updates.
Before emailing, please note:
- * This is not a support email address
- * If you ask about a Linux distro that is not listed above, the answer will be "I don't know"
- * Linux distros supported by Google Chrome are listed here: https://support.google.com/chrome/answer/95411 \ No newline at end of file
+
+* This is not a support email address
+* If you ask about a Linux distro that is not listed above, the answer will be
+ "I don't know"
+* Linux distros supported by Google Chrome are listed here:
+ https://support.google.com/chrome/answer/95411
diff --git a/docs/linux_crash_dumping.md b/docs/linux_crash_dumping.md
index 1639cf2..05859c4 100644
--- a/docs/linux_crash_dumping.md
+++ b/docs/linux_crash_dumping.md
@@ -1,66 +1,135 @@
-Official builds of Chrome support crash dumping and reporting using the Google crash servers. This is a guide to how this works.
+# Linux Crash Dumping
+
+Official builds of Chrome support crash dumping and reporting using the Google
+crash servers. This is a guide to how this works.
+
+[TOC]
## Breakpad
-Breakpad is an open source library which we use for crash reporting across all three platforms (Linux, Mac and Windows). For Linux, a substantial amount of work was required to support cross-process dumping. At the time of writing this code is currently forked from the upstream breakpad repo. While this situation remains, the forked code lives in <tt>breakpad/linux</tt>. The upstream repo is mirrored in <tt>breakpad/src</tt>.
+Breakpad is an open source library which we use for crash reporting across all
+three platforms (Linux, Mac and Windows). For Linux, a substantial amount of
+work was required to support cross-process dumping. At the time of writing,
+this code is forked from the upstream breakpad repo. While this situation
+remains, the forked code lives in `breakpad/linux`. The upstream repo is
+mirrored in `breakpad/src`.
-The code currently supports i386 only. Getting x86-64 to work should only be a minor amount of work.
+The code currently supports i386 only. Getting x86-64 to work should only be a
+minor amount of work.
### Minidumps
-Breakpad deals in a file format called 'minidumps'. This is a Microsoft format and thus is defined by in-memory structures which are dumped, raw, to disk. The main header file for this file format is <tt>breakpad/src/google_breakpad/common/minidump_format.h</tt>.
+Breakpad deals in a file format called 'minidumps'. This is a Microsoft format
+and thus is defined by in-memory structures which are dumped, raw, to disk. The
+main header file for this file format is
+`breakpad/src/google_breakpad/common/minidump_format.h`.
-At the top level, the minidump file format is a list of key-value pairs. Many of the keys are defined by the minidump format and contain cross-platform representations of stacks, threads etc. For Linux we also define a number of custom keys containing <tt>/proc/cpuinfo</tt>, <tt>lsb-release</tt> etc. These are defined in <tt>breakpad/linux/minidump_format_linux.h</tt>.
+At the top level, the minidump file format is a list of key-value pairs. Many of
+the keys are defined by the minidump format and contain cross-platform
+representations of stacks, threads etc. For Linux we also define a number of
+custom keys containing `/proc/cpuinfo`, `lsb-release` etc. These are defined in
+`breakpad/linux/minidump_format_linux.h`.
### Catching exceptions
-Exceptional conditions (such as invalid memory references, floating point exceptions, etc) are signaled by synchronous signals to the thread which caused them. Synchronous signals are always run on the thread which triggered them as opposed to asynchronous signals which can be handled by any thread in a thread-group which hasn't masked that signal.
-
-All the signals that we wish to catch are synchronous except SIGABRT, and we can always arrange to send SIGABRT to a specific thread. Thus, we find the crashing thread by looking at the current thread in the signal handler.
-
-The signal handlers run on a pre-allocated stack in case the crash was triggered by a stack overflow.
-
-Once we have started handling the signal, we have to assume that the address space is compromised. In order not to fall prey to this and crash (again) in the crash handler, we observe some rules:
- 1. We don't enter the dynamic linker. This, observably, can trigger crashes in the crash handler. Unfortunately, entering the dynamic linker is very easy and can be triggered by calling a function from a shared library who's resolution hasn't been cached yet. Since we can't know which functions have been cached we avoid calling any of these functions with one exception: <tt>memcpy</tt>. Since the compiler can emit calls to <tt>memcpy</tt> we can't really avoid it.
- 1. We don't allocate memory via malloc as the heap may be corrupt. Instead we use a custom allocator (in <tt>breadpad/linux/memory.h</tt>) which gets clean pages directly from the kernel.
-
-In order to avoid calling into libc we have a couple of header files which wrap the system calls (<tt>linux_syscall_support.h</tt>) and reimplement a tiny subset of libc (<tt>linux_libc_support.h</tt>).
+Exceptional conditions (such as invalid memory references, floating point
+exceptions, etc) are signaled by synchronous signals to the thread which caused
+them. Synchronous signals are always run on the thread which triggered them as
+opposed to asynchronous signals which can be handled by any thread in a
+thread-group which hasn't masked that signal.
+
+All the signals that we wish to catch are synchronous except SIGABRT, and we can
+always arrange to send SIGABRT to a specific thread. Thus, we find the crashing
+thread by looking at the current thread in the signal handler.
+
+The signal handlers run on a pre-allocated stack in case the crash was triggered
+by a stack overflow.
+
+Once we have started handling the signal, we have to assume that the address
+space is compromised. In order not to fall prey to this and crash (again) in the
+crash handler, we observe some rules:
+
+1. We don't enter the dynamic linker. This, observably, can trigger crashes in
+   the crash handler. Unfortunately, entering the dynamic linker is very easy
+   and can be triggered by calling a function from a shared library whose
+   resolution hasn't been cached yet. Since we can't know which functions have
+   been cached we avoid calling any of these functions with one exception:
+   `memcpy`. Since the compiler can emit calls to `memcpy` we can't really
+   avoid it.
+1. We don't allocate memory via malloc as the heap may be corrupt. Instead we
+   use a custom allocator (in `breakpad/linux/memory.h`) which gets clean pages
+   directly from the kernel.
+
+In order to avoid calling into libc we have a couple of header files which wrap
+the system calls (`linux_syscall_support.h`) and reimplement a tiny subset of
+libc (`linux_libc_support.h`).
### Self dumping
-The simple case occurs when the browser process crashes. Here we catch the signal and <tt>clone</tt> a new process to perform the dumping. We have to use a new process because a process cannot ptrace itself.
+The simple case occurs when the browser process crashes. Here we catch the
+signal and `clone` a new process to perform the dumping. We have to use a new
+process because a process cannot ptrace itself.
-The dumping process then ptrace attaches to all the threads in the crashed process and writes out a minidump to <tt>/tmp</tt>. This is generic breakpad code.
+The dumping process then ptrace attaches to all the threads in the crashed
+process and writes out a minidump to `/tmp`. This is generic breakpad code.
-Then we reach the Chrome specific parts in <tt>chrome/app/breakpad_linux.cc</tt>. Here we construct another temporary file and write a MIME wrapping of the crash dump ready for uploading. We then fork off <tt>wget</tt> to upload the file. Based on Debian popcorn, <tt>wget</tt> is very commonly installed (much more so than <tt>libcurl</tt>) and <tt>wget</tt> handles the HTTPS gubbins for us.
+Then we reach the Chrome specific parts in `chrome/app/breakpad_linux.cc`. Here
+we construct another temporary file and write a MIME wrapping of the crash dump
+ready for uploading. We then fork off `wget` to upload the file. Based on Debian
+popcon data, `wget` is very commonly installed (much more so than `libcurl`) and
+`wget` handles the HTTPS gubbins for us.
### Renderer dumping
-In the case of a crash in the renderer, we don't want the renderer handling the crash dumping itself. In the future we will sandbox the renderer and allowing it the authority to crash dump itself is too much.
+In the case of a crash in the renderer, we don't want the renderer handling the
+crash dumping itself. In the future we will sandbox the renderer, and allowing
+it the authority to crash dump itself is too much.
+
+Thus, we split the crash dumping in two parts: the gathering of information
+which is done in process and the external dumping which is done out of process.
+In the case above, the latter half was done in a `clone`d child. In this case,
+the browser process handles it.
-Thus, we split the crash dumping in two parts: the gathering of information which is done in process and the external dumping which is done out of process. In the case above, the latter half was done in a <tt>clone</tt>d child. In this case, the browser process handles it.
+When renderers are forked off, they have a `UNIX DGRAM` socket in file
+descriptor 4. The signal handler then calls into Chrome specific code
+(`chrome/renderer/render_crash_handler_linux.cc`) when it would otherwise
+`clone`. The Chrome specific code sends a datagram to the socket which contains:
-When renderers are forked off, they have a UNIX DGRAM socket in file descriptor 4. The signal handler then calls into Chrome specific code (<tt>chrome/renderer/render_crash_handler_linux.cc</tt>) when it would otherwise <tt>clone</tt>. The Chrome specific code sends a datagram to the socket which contains:
- * Information which is only available to the signal handler (such as the <tt>ucontext</tt> structure).
- * A file descriptor to a pipe which it then blocks on reading from.
- * A <tt>CREDENTIALS</tt> structure giving its PID.
+* Information which is only available to the signal handler (such as the
+ `ucontext` structure).
+* A file descriptor to a pipe which it then blocks on reading from.
+* A `CREDENTIALS` structure giving its PID.
-The kernel enforces that the renderer isn't lying in the <tt>CREDENTIALS</tt> structure so it can't ask the browser to crash dump another process.
+The kernel enforces that the renderer isn't lying in the `CREDENTIALS` structure
+so it can't ask the browser to crash dump another process.
-The browser then performs the ptrace and minidump writing which would otherwise be performed in the <tt>clone</tt>d process and does the MIME wrapping the uploading as normal.
+The browser then performs the ptrace and minidump writing which would otherwise
+be performed in the `clone`d process and does the MIME wrapping and uploading
+as normal.
-Once the browser has finished getting information from the crashed renderer via ptrace, it writes a byte to the file descriptor which was passed from the renderer. The renderer than wakes up (because it was blocking on reading from the other end) and rethrows the signal to itself. It then appears to crash 'normally' and other parts of the browser notice the abnormal termination and display the sad tab.
+Once the browser has finished getting information from the crashed renderer via
+ptrace, it writes a byte to the file descriptor which was passed from the
+renderer. The renderer then wakes up (because it was blocking on reading from
+the other end) and rethrows the signal to itself. It then appears to crash
+'normally' and other parts of the browser notice the abnormal termination and
+display the sad tab.
## How to test Breakpad support in Chromium
- * Build Chromium with the gyp option `-Dlinux_breakpad=1`.
-```
-./build/gyp_chromium -Dlinux_breakpad=1
-ninja -C out/Debug chrome
-```
- * Run the browser with the environment variable [CHROME\_HEADLESS=1](http://code.google.com/p/chromium/issues/detail?id=19663). This enables crash dumping but prevents crash dumps from being uploaded and deleted.
-```
-env CHROME_HEADLESS=1 ./out/Debug/chrome-wrapper
-```
- * Visit the special URL `about:crash` to trigger a crash in the renderer process.
- * A crash dump file should appear in the directory `~/.config/chromium/Crash Reports`. \ No newline at end of file
+* Build Chromium with the gyp option `-Dlinux_breakpad=1`.
+
+ ```shell
+ ./build/gyp_chromium -Dlinux_breakpad=1
+ ninja -C out/Debug chrome
+ ```
+* Run the browser with the environment variable
+ [CHROME_HEADLESS=1](https://crbug.com/19663). This enables crash dumping but
+ prevents crash dumps from being uploaded and deleted.
+
+ ```shell
+ env CHROME_HEADLESS=1 ./out/Debug/chrome-wrapper
+ ```
+* Visit the special URL `about:crash` to trigger a crash in the renderer
+ process.
+* A crash dump file should appear in the directory
+ `~/.config/chromium/Crash Reports`.
diff --git a/docs/linux_debugging.md b/docs/linux_debugging.md
index 73a9b5a..2594672 100644
--- a/docs/linux_debugging.md
+++ b/docs/linux_debugging.md
@@ -1,64 +1,91 @@
-#summary tips for debugging on Linux
-#labels Linux
-
-This page is for Chromium-specific debugging tips; learning how to run gdb is out of scope.
+# Tips for debugging on Linux
+This page is for Chromium-specific debugging tips; learning how to run gdb is
+out of scope.
+[TOC]
## Symbolized stack trace
-The sandbox can interfere with the internal symbolizer. Use --no-sandbox (but keep this temporary) or an external symbolizer (see tools/valgrind/asan/asan\_symbolize.py).
+The sandbox can interfere with the internal symbolizer. Use `--no-sandbox` (but
+keep this temporary) or an external symbolizer (see
+`tools/valgrind/asan/asan_symbolize.py`).
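+
+For example, a sketch of symbolizing a saved log with the external script
+(assuming the script reads the unsymbolized trace from stdin; adjust paths to
+your checkout):
+
+    tools/valgrind/asan/asan_symbolize.py < crash.log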
-Generally, do not use --no-sandbox on waterfall bots, sandbox testing is needed. Talk to security@chromium.org.
+Generally, do not use `--no-sandbox` on waterfall bots, sandbox testing is
+needed. Talk to security@chromium.org.
## GDB
+
**GDB-7.7 is required in order to debug Chrome on Linux.**
Any prior version will fail to resolve symbols or segfault.
### Basic browser process debugging
-```
-gdb -tui -ex=r --args out/Debug/chrome --disable-seccomp-sandbox http://google.com
-```
+ gdb -tui -ex=r --args out/Debug/chrome --disable-seccomp-sandbox \
+ http://google.com
### Allowing attaching to foreign processes
-On distributions that use the [Yama LSM](https://www.kernel.org/doc/Documentation/security/Yama.txt) (that includes Ubuntu and Chrome OS), process A can attach to process B only if A is an ancestor of B.
+
+On distributions that use the
+[Yama LSM](https://www.kernel.org/doc/Documentation/security/Yama.txt) (that
+includes Ubuntu and Chrome OS), process A can attach to process B only if A is
+an ancestor of B.
You will probably want to disable this feature by using
-```
-echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
-```
+
+ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
If you don't you'll get an error message such as "Could not attach to process".
-Note that you'll also probably want to use --no-sandbox, as explained below.
+Note that you'll also probably want to use `--no-sandbox`, as explained below.
### Multiprocess Tricks
+
#### Getting renderer subprocesses into gdb
-Since Chromium itself spawns the renderers, it can be tricky to grab a particular with gdb. This command does the trick:
+
+Since Chromium itself spawns the renderers, it can be tricky to grab a
+particular one with gdb. This command does the trick:
+
```
chrome --no-sandbox --renderer-cmd-prefix='xterm -title renderer -e gdb --args'
```
-The "--no-sandbox" flag is needed because otherwise the seccomp sandbox will kill the renderer process on startup, or the setuid sandbox will prevent xterm's execution. The "xterm" is necessary or gdb will run in the current terminal, which can get particularly confusing since it's running in the background, and if you're also running the main process in gdb, won't work at all (the two instances will fight over the terminal). To auto-start the renderers in the debugger, send the "run" command to the debugger:
-```
-chrome --no-sandbox --renderer-cmd-prefix='xterm -title renderer -e gdb -ex run --args'
-```
+
+The `--no-sandbox` flag is needed because otherwise the seccomp sandbox will
+kill the renderer process on startup, or the setuid sandbox will prevent xterm's
+execution. The "xterm" is necessary or gdb will run in the current terminal,
+which can get particularly confusing since it's running in the background, and
+if you're also running the main process in gdb, won't work at all (the two
+instances will fight over the terminal). To auto-start the renderers in the
+debugger, send the "run" command to the debugger:
+
+    chrome --no-sandbox \
+      --renderer-cmd-prefix='xterm -title renderer -e gdb -ex run --args'
+
If you're using Emacs and `M-x gdb`, you can do
-```
-chrome "--renderer-cmd-prefix=gdb --args"
-```
-Note: using the `--renderer-cmd-prefix` option bypasses the zygote launcher, so the renderers won't be sandboxed. It is generally not an issue, except when you are trying to debug interactions with the sandbox. If that's what you are doing, you will need to attach your debugger to a running renderer process (see below).
+ chrome "--renderer-cmd-prefix=gdb --args"
-You may also want to pass `--disable-hang-monitor` to suppress the hang monitor, which is rather annoying.
+Note: using the `--renderer-cmd-prefix` option bypasses the zygote launcher, so
+the renderers won't be sandboxed. It is generally not an issue, except when you
+are trying to debug interactions with the sandbox. If that's what you are doing,
+you will need to attach your debugger to a running renderer process (see below).
-You can also use "--renderer-startup-dialog" and attach to the process in order to debug the renderer code. Go to http://www.chromium.org/blink/getting-started-with-blink-debugging for more information on how this can be done.
+You may also want to pass `--disable-hang-monitor` to suppress the hang monitor,
+which is rather annoying.
+
+You can also use `--renderer-startup-dialog` and attach to the process in order
+to debug the renderer code. Go to
+http://www.chromium.org/blink/getting-started-with-blink-debugging for more
+information on how this can be done.
#### Choosing which renderers to debug
-If you are starting multiple renderers then the above means that multiple gdb's start and fight over the console. Instead, you can set the prefix to point to this shell script:
-```
+If you are starting multiple renderers then the above means that multiple gdb's
+start and fight over the console. Instead, you can set the prefix to point to
+this shell script:
+
+```sh
#!/bin/sh
echo "**** Child $$ starting: y to debug"
@@ -71,17 +98,26 @@ fi
```
#### Selective breakpoints
-When debugging both the browser and renderer process, you might want to have separate set of breakpoints to hit. You can use gdb's command files to accomplish this by putting breakpoints in separate files and instructing gdb to load them.
+
+When debugging both the browser and renderer process, you might want to have
+separate set of breakpoints to hit. You can use gdb's command files to
+accomplish this by putting breakpoints in separate files and instructing gdb to
+load them.
```
-gdb -x ~/debug/browser --args chrome --no-sandbox --disable-hang-monitor --renderer-cmd-prefix='xterm -title renderer -e gdb -x ~/debug/renderer --args '
+gdb -x ~/debug/browser --args chrome --no-sandbox --disable-hang-monitor \
+  --renderer-cmd-prefix='xterm -title renderer -e gdb -x ~/debug/renderer --args'
```
-Also, instead of running gdb, you can use the script above, which let's you select which renderer process to debug. Note: you might need to use the full path to the script and avoid $HOME or ~/.
+Also, instead of running gdb, you can use the script above, which lets you
+select which renderer process to debug. Note: you might need to use the full
+path to the script and avoid `$HOME` or `~/`.
#### Connecting to a running renderer
-Usually `ps aux | grep chrome` will not give very helpful output. Try `pstree -p | grep chrome` to get something like
+Usually `ps aux | grep chrome` will not give very helpful output. Try
+`pstree -p | grep chrome` to get something like
```
| |-bash(21969)---chrome(672)-+-chrome(694)
@@ -99,46 +135,80 @@ Usually `ps aux | grep chrome` will not give very helpful output. Try `pstree -p
| | \-{chrome}(717)
```
-Most of those are threads. In this case the browser process would be 672 and the (sole) renderer process is 696. You can use `gdb -p 696` to attach. Alternatively, you might find out the process ID from Chrome's built-in Task Manager (under the Tools menu). Right-click on the Task Manager, and enable "Process ID" in the list of columns.
+Most of those are threads. In this case the browser process would be 672 and the
+(sole) renderer process is 696. You can use `gdb -p 696` to attach.
+Alternatively, you might find out the process ID from Chrome's built-in Task
+Manager (under the Tools menu). Right-click on the Task Manager, and enable
+"Process ID" in the list of columns.
-Note: by default, sandboxed processes can't be attached by a debugger. To be able to do so, you will need to pass the `--allow-sandbox-debugging` option.
+Note: by default, sandboxed processes can't be attached by a debugger. To be
+able to do so, you will need to pass the `--allow-sandbox-debugging` option.
-If the problem only occurs with the seccomp sandbox enabled (and the previous tricks don't help), you could try enabling core-dumps (see the **Core files** section). That would allow you to get a backtrace and see some local variables, though you won't be able to step through the running program.
+If the problem only occurs with the seccomp sandbox enabled (and the previous
+tricks don't help), you could try enabling core-dumps (see the **Core files**
+section). That would allow you to get a backtrace and see some local variables,
+though you won't be able to step through the running program.
-Note: If you're interested in debugging LinuxSandboxIPC process, you can attach to 694 in the above diagram. The LinuxSandboxIPC process has the same command line flag as the browser process so that it's easy to identify it if you run `pstree -pa`.
+Note: If you're interested in debugging the LinuxSandboxIPC process, you can
+attach to 694 in the above diagram. The LinuxSandboxIPC process has the same
+command line flag as the browser process so that it's easy to identify it if
+you run `pstree -pa`.
#### Getting GPU subprocesses into gdb
-Use `--gpu-launcher` flag instead of `--renderer-cmd-prefix` in the instructions for renderer above.
-#### Getting browser\_tests launched browsers into gdb
-Use environment variable `BROWSER_WRAPPER` instead of `--renderer-cmd-prefix` switch in the instructions above.
+Use the `--gpu-launcher` flag instead of `--renderer-cmd-prefix` in the
+instructions for renderers above.
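+
+For example (a sketch mirroring the renderer command above):
+
+    chrome --gpu-launcher='xterm -title gpu -e gdb --args'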
+
+#### Getting `browser_tests` launched browsers into gdb
+
+Use the `BROWSER_WRAPPER` environment variable instead of the
+`--renderer-cmd-prefix` switch in the instructions above.
Example:
-$ BROWSER\_WRAPPER='xterm -title renderer -e gdb --eval-command=run --eval-command=quit --args' out/Debug/browser\_tests --gtest\_filter=Print
+
+```shell
+BROWSER_WRAPPER='xterm -title renderer -e gdb --eval-command=run \
+ --eval-command=quit --args' out/Debug/browser_tests --gtest_filter=Print
+```
#### Plugin Processes
+
Same strategies as renderers above, but the flag is called `--plugin-launcher`:
-```
-chrome --plugin-launcher='xterm -e gdb --args'
-```
-_Note: For now, this does not currently apply to PPAPI plugins because they currently run in the renderer process._
+ chrome --plugin-launcher='xterm -e gdb --args'
+
+_Note: This does not currently apply to PPAPI plugins because they run in the
+renderer process._
#### Single-Process mode
-Depending on whether it's relevant to the problem, it's often easier to just run in "single process" mode where the renderer threads are in-process. Then you can just run gdb on the main process.
-```
-gdb --args chrome --single-process
-```
-Currently, the --disable-gpu flag is also required, as there are known crashes that occur under TextureImageTransportSurface without it. The crash described in http://crbug.com/361689 can also sometimes occur, but that crash can be continued from without harm.
+Depending on whether it's relevant to the problem, it's often easier to just run
+in "single process" mode where the renderer threads are in-process. Then you can
+just run gdb on the main process.
+
+ gdb --args chrome --single-process
-Note that for technical reasons plugins cannot be in-process, so `--single-process` only puts the renderers in the browser process. The flag is still useful for debugging plugins (since it's only two processes instead of three) but you'll still need to use `--plugin-launcher` or another approach.
+Currently, the `--disable-gpu` flag is also required, as there are known crashes
+that occur under TextureImageTransportSurface without it. The crash described in
+http://crbug.com/361689 can also sometimes occur, but you can continue past it
+without harm.
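+
+So in practice:
+
+    gdb --args chrome --single-process --disable-gpu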
+
+Note that for technical reasons plugins cannot be in-process, so
+`--single-process` only puts the renderers in the browser process. The flag is
+still useful for debugging plugins (since it's only two processes instead of
+three) but you'll still need to use `--plugin-launcher` or another approach.
### Printing Chromium types
-gdb 7 lets us use Python to write pretty-printers for Chromium types. The directory `tools/gdb/` contains a Python gdb scripts useful for Chromium code. There are similar scripts [in WebKit](http://trac.webkit.org/wiki/GDB) (in fact, the Chromium script relies on using it with the WebKit one).
-To include these pretty-printers with your gdb, put the following into `~/.gdbinit`:
-```
+gdb 7 lets us use Python to write pretty-printers for Chromium types. The
+directory `tools/gdb/` contains Python gdb scripts useful for Chromium code.
+There are similar scripts [in WebKit](http://trac.webkit.org/wiki/GDB) (in fact,
+the Chromium script relies on using it with the WebKit one).
+
+To include these pretty-printers with your gdb, put the following into
+`~/.gdbinit`:
+
+```python
python
import sys
sys.path.insert(0, "<path/to/chromium/src>/third_party/WebKit/Tools/gdb/")
@@ -147,7 +217,10 @@ sys.path.insert(0, "<path/to/chromium/src>/tools/gdb/")
import gdb_chrome
```
-Pretty printers for std types shouldn't be necessary in gdb 7, but they're provided here in case you're using an older gdb. Put the following into `~/.gdbinit`:
+Pretty printers for std types shouldn't be necessary in gdb 7, but they're
+provided here in case you're using an older gdb. Put the following into
+`~/.gdbinit`:
+
```
# Print a C++ string.
define ps
@@ -176,210 +249,297 @@ end
The following link describes a tool that can be used on Linux, Windows and Mac under GDB.
-http://code.google.com/p/chromium/wiki/GraphicalDebuggingAidChromiumViews
+[graphical_debugging_aid_chromium_views](graphical_debugging_aid_chromium_views.md)
### Faster startup
-Use the gdb-add-index script (e.g. build/gdb-add-index out/Debug/browser\_tests)
+Use the `gdb-add-index` script (e.g.
+`build/gdb-add-index out/Debug/browser_tests`)
-Only makes sense if you run the binary multiple times or maybe if you use the component build since most .so files won't require reindexing on a rebuild.
+This only makes sense if you run the binary multiple times, or if you use the
+component build, since most .so files won't require reindexing on a rebuild.
-See https://groups.google.com/a/chromium.org/forum/#!searchin/chromium-dev/gdb-add-index/chromium-dev/ELRuj1BDCL4/5Ki4LGx41CcJ for more info.
+See
+https://groups.google.com/a/chromium.org/forum/#!searchin/chromium-dev/gdb-add-index/chromium-dev/ELRuj1BDCL4/5Ki4LGx41CcJ
+for more info.
Alternatively, specify:
-```
-linux_use_debug_fission=0
-```
-in GYP\_DEFINES. This improves load time of gdb significantly at the cost of link time.
+ linux_use_debug_fission=0
+
+in `GYP_DEFINES`. This improves load time of gdb significantly at the cost of
+link time.
## Core files
-`ulimit -c unlimited` should cause all Chrome processes (run from that shell) to dump cores, with the possible exception of some sandboxed processes.
-Some sandboxed subprocesses might not dump cores unless you pass the `--allow-sandbox-debugging` flag.
+`ulimit -c unlimited` should cause all Chrome processes (run from that shell) to
+dump cores, with the possible exception of some sandboxed processes.
-If the problem is a freeze rather than a crash, you may be able to trigger a core-dump by sending SIGABRT to the relevant process:
-```
-kill -6 [process id]
-```
+Some sandboxed subprocesses might not dump cores unless you pass the
+`--allow-sandbox-debugging` flag.
+
+If the problem is a freeze rather than a crash, you may be able to trigger a
+core-dump by sending SIGABRT to the relevant process:
+
+ kill -6 [process id]
## Breakpad minidump files
-See LinuxMinidumpToCore
+See [linux_minidump_to_core.md](linux_minidump_to_core.md)
## Running Tests
-Many of our tests bring up windows on screen. This can be annoying (they steal your focus) and hard to debug (they receive extra events as you mouse over them). Instead, use `Xvfb` or `Xephyr` to run a nested X session to debug them, as outlined on LayoutTestsLinux.
+
+Many of our tests bring up windows on screen. This can be annoying (they steal
+your focus) and hard to debug (they receive extra events as you mouse over them).
+Instead, use `Xvfb` or `Xephyr` to run a nested X session to debug them, as
+outlined on [layout_tests_linux.md](layout_tests_linux.md).
### Browser tests
-By default the browser\_tests forks a new browser for each test. To debug the browser side of a single test, use a command like
+
+By default the `browser_tests` forks a new browser for each test. To debug the
+browser side of a single test, use a command like
+
```
gdb --args out/Debug/browser_tests --single_process --gtest_filter=MyTestName
```
-**note the underscore in single\_process** -- this makes the test harness and browser process share the outermost process.
+
+**note the underscore in `single_process`** -- this makes the test harness and
+browser process share the outermost process.
To debug a renderer process in this case, use the tips above about renderers.
### Layout tests
-See LayoutTestsLinux for some tips. In particular, note that it's possible to debug a layout test via `ssh`ing to a Linux box; you don't need anything on screen if you use `Xvfb`.
+
+See [layout_tests_linux.md](layout_tests_linux.md) for some tips. In particular,
+note that it's possible to debug a layout test via `ssh`ing to a Linux box; you
+don't need anything on screen if you use `Xvfb`.
### UI tests
-UI tests are run in forked browsers. Unlike browser tests, you cannot do any single process tricks here to debug the browser. See below about `BROWSER_WRAPPER`.
-To pass flags to the browser, use a command line like `--extra-chrome-flags="--foo --bar"`.
+UI tests are run in forked browsers. Unlike browser tests, you cannot do any
+single process tricks here to debug the browser. See below about
+`BROWSER_WRAPPER`.
+
+To pass flags to the browser, use a command line like
+`--extra-chrome-flags="--foo --bar"`.
### Timeouts
-UI tests have a confusing array of timeouts in place. (Pawel is working on reducing the number of timeouts.) To disable them while you debug, set the timeout flags to a large value:
- * `--test-timeout=100000000`
- * `--ui-test-action-timeout=100000000`
- * `--ui-test-terminate-timeout=100000000`
+
+UI tests have a confusing array of timeouts in place. (Pawel is working on
+reducing the number of timeouts.) To disable them while you debug, set the
+timeout flags to a large value:
+
+* `--test-timeout=100000000`
+* `--ui-test-action-timeout=100000000`
+* `--ui-test-terminate-timeout=100000000`
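+
+For example (`ui_tests` is an illustrative binary name; substitute your own
+test target):
+
+    out/Debug/ui_tests --test-timeout=100000000 \
+      --ui-test-action-timeout=100000000 \
+      --ui-test-terminate-timeout=100000000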
### To replicate Window Manager setup on the bots
-Chromium try bots and main waterfall's bots run tests under Xvfb&openbox combination. Xvfb is an X11 server that redirects the graphical output to the memeory, and openbox is a simple window manager that is running on top of Xvfb. The behavior of openbox is markedly different when it comes to focus management and other window tasks, so test that runs fine locally may fail or be flaky on try bots. To run the tests on a local machine as on a bot, follow these steps:
+
+Chromium try bots and main waterfall's bots run tests under an Xvfb and openbox
+combination. Xvfb is an X11 server that redirects the graphical output to
+memory, and openbox is a simple window manager that runs on top of Xvfb. The
+behavior of openbox is markedly different when it comes to focus management and
+other window tasks, so a test that runs fine locally may fail or be flaky on
+try bots. To run the tests on a local machine as on a bot, follow these steps:
Make sure you have openbox:
-```
-apt-get install openbox
-```
+
+ apt-get install openbox
+
Start Xvfb and openbox on a particular display:
-```
-Xvfb :6.0 -screen 0 1280x1024x24 & DISPLAY=:6.0 openbox &
-```
+
+ Xvfb :6.0 -screen 0 1280x1024x24 & DISPLAY=:6.0 openbox &
+
Run your tests with graphics output redirected to that display:
-```
-DISPLAY=:6.0 out/Debug/browser_tests --gtest_filter="MyBrowserTest.MyActivateWindowTest"
-```
+
+ DISPLAY=:6.0 out/Debug/browser_tests --gtest_filter="MyBrowserTest.MyActivateWindowTest"
+
You can look at a snapshot of the output by:
-```
-xwd -display :6.0 -root | xwud
-```
+
+ xwd -display :6.0 -root | xwud
Alternatively, you can use testing/xvfb.py to set up your environment for you:
-```
-testing/xvfb.py out/Debug out/Debug/browser_tests --gtest_filter="MyBrowserTest.MyActivateWindowTest"
-```
-### BROWSER\_WRAPPER
-You can also get the browser under a debugger by setting the `BROWSER_WRAPPER` environment variable. (You can use this for `browser_tests` too, but see above for discussion of a simpler way.)
+ testing/xvfb.py out/Debug out/Debug/browser_tests \
+ --gtest_filter="MyBrowserTest.MyActivateWindowTest"
-```
-BROWSER_WRAPPER='xterm -e gdb --args' out/Debug/browser_tests
-```
+### `BROWSER_WRAPPER`
+
+You can also get the browser under a debugger by setting the `BROWSER_WRAPPER`
+environment variable. (You can use this for `browser_tests` too, but see above
+for discussion of a simpler way.)
+
+ BROWSER_WRAPPER='xterm -e gdb --args' out/Debug/browser_tests
### Replicating Trybot Slowness
-Trybots are pretty stressed, and can sometimes expose timing issues you can't normally reproduce locally.
+Trybots are pretty stressed, and can sometimes expose timing issues you can't
+normally reproduce locally.
-You can simulate this by shutting down all but one of the CPUs (http://www.cyberciti.biz/faq/debian-rhel-centos-redhat-suse-hotplug-cpu/) and running a CPU loading tool (e.g., http://www.devin.com/lookbusy/). Now run your test. It will run slowly, but any flakiness found by the trybot should replicate locally now - and often nearly 100% of the time.
+You can simulate this by shutting down all but one of the CPUs
+(http://www.cyberciti.biz/faq/debian-rhel-centos-redhat-suse-hotplug-cpu/) and
+running a CPU loading tool (e.g., http://www.devin.com/lookbusy/). Now run your
+test. It will run slowly, but any flakiness found by the trybot should replicate
+locally now - and often nearly 100% of the time.
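+
+A sketch of that setup (assumes a 4-CPU machine and that `lookbusy` from the
+link above is installed; see its documentation for options):
+
+```shell
+# Take CPUs 1-3 offline, leaving only CPU 0.
+for i in 1 2 3; do
+  echo 0 | sudo tee /sys/devices/system/cpu/cpu$i/online
+done
+# Keep the remaining CPU busy in the background (target ~90% utilization).
+lookbusy -c 90 &
+# Now run the flaky test as usual.
+```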
## Logging
+
### Seeing all LOG(foo) messages
-Default log level hides `LOG(INFO)`. Run with `--log-level=0` and `--enable-logging=stderr` flags.
-Newer versions of chromium with VLOG may need --v=1 too. For more VLOG tips, see the chromium-dev thread: http://groups.google.com/a/chromium.org/group/chromium-dev/browse_thread/thread/dcd0cd7752b35de6?pli=1
+Default log level hides `LOG(INFO)`. Run with `--log-level=0` and
+`--enable-logging=stderr` flags.
+
+Newer versions of Chromium with VLOG may need `--v=1` too. For more VLOG tips,
+see the chromium-dev thread:
+http://groups.google.com/a/chromium.org/group/chromium-dev/browse_thread/thread/dcd0cd7752b35de6?pli=1
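+
+For example:
+
+    out/Debug/chrome --enable-logging=stderr --log-level=0 --v=1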
### Seeing IPC debug messages
-Run with CHROME\_IPC\_LOGGING=1 eg.
-```
-CHROME_IPC_LOGGING=1 out/Debug/chrome
-```
+
+Run with `CHROME_IPC_LOGGING=1`, e.g.
+
+ CHROME_IPC_LOGGING=1 out/Debug/chrome
+
or within gdb:
-```
-set environment CHROME_IPC_LOGGING 1
-```
-If some messages show as unknown, check if the list of IPC message headers in chrome/common/logging\_chrome.cc is up-to-date. In case this file reference goes out of date, try looking for usage of macros like IPC\_MESSAGE\_LOG\_ENABLED or IPC\_MESSAGE\_MACROS\_LOG\_ENABLED.
+ set environment CHROME_IPC_LOGGING 1
+
+If some messages show as unknown, check if the list of IPC message headers in
+`chrome/common/logging_chrome.cc` is up-to-date. In case this file reference
+goes out of date, try looking for usage of macros like `IPC_MESSAGE_LOG_ENABLED`
+or `IPC_MESSAGE_MACROS_LOG_ENABLED`.
## Using valgrind
-To run valgrind on the browser and renderer processes, with our suppression file and flags:
-```
-$ cd $CHROMIUM_ROOT/src
-$ tools/valgrind/valgrind.sh out/Debug/chrome
-```
+To run valgrind on the browser and renderer processes, with our suppression file
+and flags:
+
+ $ cd $CHROMIUM_ROOT/src
+ $ tools/valgrind/valgrind.sh out/Debug/chrome
You can use valgrind on chrome and/or on the renderers e.g
`valgrind --smc-check=all ../sconsbuild/Debug/chrome`
or by passing valgrind as the argument to `--render-cmd-prefix`.
-Beware that there are several valgrind "false positives" e.g. pickle, sqlite and some instances in webkit that are ignorable. On systems with prelink and address space randomization (e.g. Fedora), you may also see valgrind errors in libstdc++ on startup and in gnome-breakpad.
+Beware that there are several valgrind "false positives" e.g. pickle, sqlite and
+some instances in webkit that are ignorable. On systems with prelink and address
+space randomization (e.g. Fedora), you may also see valgrind errors in libstdc++
+on startup and in gnome-breakpad.
Valgrind doesn't seem to play nice with tcmalloc. To disable tcmalloc run GYP
-```
-$ cd $CHROMIUM_ROOT/src
-$ build/gyp_chromium -Duse_allocator=none
-```
+
+ $ cd $CHROMIUM_ROOT/src
+ $ build/gyp_chromium -Duse_allocator=none
+
and rebuild.
## Profiling
-See https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit and http://code.google.com/p/chromium/wiki/LinuxProfiling
+
+See
+https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit
+and
+http://code.google.com/p/chromium/wiki/LinuxProfiling
## i18n
-We obey your system locale. Try something like:
-```
-LANG=ja_JP.UTF-8 out/Debug/chrome
-```
-If this doesn't work, make sure that the LANGUAGE, LC\_ALL and LC\_MESSAGE environment variables aren't set -- they have higher priority than LANG in the order listed. Alternatively, just do this:
-```
-LANGUAGE=fr out/Debug/chrome
-```
+We obey your system locale. Try something like:
+
+ LANG=ja_JP.UTF-8 out/Debug/chrome
+
+If this doesn't work, make sure that the `LANGUAGE`, `LC_ALL` and `LC_MESSAGES`
+environment variables aren't set -- they have higher priority than `LANG` in the
+order listed. Alternatively, just do this:
-Note that because we use GTK, some locale data comes from the system -- for example, file save boxes and whether the current language is considered RTL. Without all the language data available, Chrome will use a mixture of your system language and the language you run Chrome in.
+ LANGUAGE=fr out/Debug/chrome
+
+Note that because we use GTK, some locale data comes from the system -- for
+example, file save boxes and whether the current language is considered RTL.
+Without all the language data available, Chrome will use a mixture of your
+system language and the language you run Chrome in.
Here's how to install the Arabic (ar) and Hebrew (he) language packs:
-```
-sudo apt-get install language-pack-ar language-pack-he language-pack-gnome-ar language-pack-gnome-he
-```
+
+ sudo apt-get install language-pack-ar language-pack-he \
+ language-pack-gnome-ar language-pack-gnome-he
+
Note that the `--lang` flag does **not** work properly for this.
-On non-Debian systems, you need the `gtk20.mo` files. (Please update these docs with the appropriate instructions if you know what they are.)
+On non-Debian systems, you need the `gtk20.mo` files. (Please update these docs
+with the appropriate instructions if you know what they are.)
## Breakpad
-See the last section of LinuxCrashDumping; you need to set a gyp variable and an environment variable for the crash dump tests to work.
+
+See the last section of [linux_crash_dumping.md](linux_crash_dumping.md); you
+need to set a gyp variable and an environment variable for the crash dump tests
+to work.
## Drag and Drop
-If you break in a debugger during a drag, Chrome will have grabbed your mouse and keyboard so you won't be able to interact with the debugger! To work around this, run via `Xephyr`. Instructions for how to use `Xephyr` are on the LayoutTestsLinux page.
+
+If you break in a debugger during a drag, Chrome will have grabbed your mouse
+and keyboard so you won't be able to interact with the debugger! To work around
+this, run via `Xephyr`. Instructions for how to use `Xephyr` are on the
+[layout_tests_linux.md](layout_tests_linux.md) page.
## Tracking Down Bugs
### Isolating Regressions
-Old builds are archived here: http://build.chromium.org/buildbot/snapshots/chromium-rel-linux/
-`tools/bisect-builds.py` in the tree automates bisecting through the archived builds. Despite a computer science education, I am still amazed how quickly binary search will find its target.
+Old builds are archived here:
+http://build.chromium.org/buildbot/snapshots/chromium-rel-linux/
+
+`tools/bisect-builds.py` in the tree automates bisecting through the archived
+builds. Despite a computer science education, I am still amazed how quickly
+binary search will find its target.
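+
+A typical invocation looks something like this (a sketch; flag names and
+revision numbers are illustrative and may differ between versions of the
+script):
+
+    tools/bisect-builds.py -a linux64 -g 330000 -b 340000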
### Screen recording for bug reports
-`sudo apt-get install gtk-recordmydesktop`
+
+ sudo apt-get install gtk-recordmydesktop
## Version-specific issues
### Google Chrome
-Google Chrome binaries don't include symbols. Googlers can read where to get symbols from [the Google-internal wiki](http://wiki/Main/ChromeOfficialBuildLinux#The_Build_Archive).
+
+Google Chrome binaries don't include symbols. Googlers can read where to get
+symbols from
+[the Google-internal wiki](http://wiki/Main/ChromeOfficialBuildLinux#The_Build_Archive).
### Ubuntu Chromium
-Since we don't build the Ubuntu packages (Ubuntu does) we can't get useful backtraces from them. Direct users to https://wiki.ubuntu.com/Chromium/Debugging .
+
+Since we don't build the Ubuntu packages (Ubuntu does) we can't get useful
+backtraces from them. Direct users to https://wiki.ubuntu.com/Chromium/Debugging
### Fedora's Chromium
-Like Ubuntu, but direct users to https://fedoraproject.org/wiki/TomCallaway/Chromium_Debug .
+
+Like Ubuntu, but direct users to
+https://fedoraproject.org/wiki/TomCallaway/Chromium_Debug
### Xlib
+
If you're trying to track down X errors like:
+
```
The program 'chrome' received an X Window System error.
This probably reflects a bug in the program.
The error was 'BadDrawable (invalid Pixmap or Window parameter)'.
```
+
Some strategies are:
- * pass `--sync` on the command line to make all X calls synchronous
- * run chrome via [xtrace](http://xtrace.alioth.debian.org/)
- * turn on IPC debugging (see above section)
+
+* pass `--sync` on the command line to make all X calls synchronous
+* run chrome via [xtrace](http://xtrace.alioth.debian.org/)
+* turn on IPC debugging (see above section)
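+
+For example (a sketch; see the xtrace documentation for exact usage):
+
+    out/Debug/chrome --sync
+    xtrace out/Debug/chrome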
### Window Managers
-To test on various window managers, you can use a nested X server like `Xephyr`. Instructions for how to use `Xephyr` are on the LayoutTestsLinux page.
-If you need to test something with hardware accelerated compositing (e.g., compiz), you can use `Xgl` (`sudo apt-get install xserver-xgl`). E.g.:
-```
-Xgl :1 -ac -accel glx:pbuffer -accel xv:pbuffer -screen 1024x768
-```
+To test on various window managers, you can use a nested X server like `Xephyr`.
+Instructions for how to use `Xephyr` are on the
+[layout_tests_linux.md](layout_tests_linux.md) page.
+
+If you need to test something with hardware accelerated compositing
+(e.g., compiz), you can use `Xgl` (`sudo apt-get install xserver-xgl`). E.g.:
+
+ Xgl :1 -ac -accel glx:pbuffer -accel xv:pbuffer -screen 1024x768
+
## Mozilla Tips
-https://developer.mozilla.org/en/Debugging_Mozilla_on_Linux_FAQ \ No newline at end of file
+
+https://developer.mozilla.org/en/Debugging_Mozilla_on_Linux_FAQ
diff --git a/docs/linux_debugging_gtk.md b/docs/linux_debugging_gtk.md
index 93a1afb..7106742 100644
--- a/docs/linux_debugging_gtk.md
+++ b/docs/linux_debugging_gtk.md
@@ -1,51 +1,60 @@
+# Linux Debugging GTK
+
## Making warnings fatal
-See [Running GLib Applications](http://developer.gnome.org/glib/stable/glib-running.html) for notes on how to make GTK warnings fatal.
+See
+[Running GLib Applications](http://developer.gnome.org/glib/stable/glib-running.html)
+for notes on how to make GTK warnings fatal.
## Using GTK Debug packages
-```
-sudo apt-get install libgtk2.0-0-dbg
-```
-Make sure that you're building a binary that matches your architecture (e.g. 64-bit on a 64-bit machine), and there you go.
+ sudo apt-get install libgtk2.0-0-dbg
+
+Make sure that you're building a binary that matches your architecture (e.g.
+64-bit on a 64-bit machine), and there you go.
### Source
-You'll likely want to get the source for gtk too so that you can step through it. You can tell gdb that you've downloaded the source to your system's GTK by doing:
-```
+You'll likely want to get the source for gtk too so that you can step through
+it. You can tell gdb that you've downloaded the source to your system's GTK by
+doing:
+
+```shell
$ cd /my/dir
$ apt-get source libgtk2.0-0
$ gdb ...
(gdb) set substitute-path /build/buildd /my/dir
```
-NOTE: I tried debugging pango in a similar manner, but for some reason gdb didn't pick up the symbols from the symbols from the -dbg package. I ended up building from source and setting my LD\_LIBRARY\_PATH.
+NOTE: I tried debugging pango in a similar manner, but for some reason gdb
+didn't pick up the symbols from the `-dbg` package. I ended up building from
+source and setting my `LD_LIBRARY_PATH`.
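+
+For example (the install path is illustrative; point it at your own pango
+build):
+
+    export LD_LIBRARY_PATH=$HOME/pango/install/lib:$LD_LIBRARY_PATH
+    out/Debug/chrome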
-See LinuxBuildingDebugGtk for more on how to build your own debug version of GTK.
+See [linux_building_debug_gtk.md](linux_building_debug_gtk.md) for more on how
+to build your own debug version of GTK.
## Parasite
-http://chipx86.github.com/gtkparasite/ is great. Go check out the site for more about it.
+
+http://chipx86.github.com/gtkparasite/ is great. Go check out the site for more
+about it.
Install it with
-```
-sudo apt-get install gtkparasite
-```
+
+ sudo apt-get install gtkparasite
And then run Chrome with
-```
-GTK_MODULES=gtkparasite ./out/Debug/chrome
-```
+
+ GTK_MODULES=gtkparasite ./out/Debug/chrome
### ghardy
-If you're within the Google network on ghardy, which is too old to include gtkparasite, you can do:
-```
-scp bunny.sfo:/usr/lib/gtk-2.0/modules/libgtkparasite.so /tmp
-sudo cp /tmp/libgtkparasite.so /usr/lib/gtk-2.0/modules/libgtkparasite.so
-```
-## GDK\_DEBUG
+If you're within the Google network on ghardy, which is too old to include
+gtkparasite, you can do:
-```
-14:43 < xan> mrobinson: there's a way to run GTK+ without grabs fwiw, useful for gdb sessions
-14:44 < xan> GDK_DEBUG=nograbs
-``` \ No newline at end of file
+ scp bunny.sfo:/usr/lib/gtk-2.0/modules/libgtkparasite.so /tmp
+ sudo cp /tmp/libgtkparasite.so /usr/lib/gtk-2.0/modules/libgtkparasite.so
+
+## `GDK_DEBUG`
+
+Use `GDK_DEBUG=nograbs` to run GTK+ without grabs. This is useful for gdb
+sessions.
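+
+For example:
+
+    GDK_DEBUG=nograbs out/Debug/chrome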
diff --git a/docs/linux_debugging_ssl.md b/docs/linux_debugging_ssl.md
index 1f8f656..aa446e6 100644
--- a/docs/linux_debugging_ssl.md
+++ b/docs/linux_debugging_ssl.md
@@ -1,10 +1,13 @@
-# Introduction
+# Debugging SSL on Linux
To help anyone looking at the SSL code, here are a few tips I've found handy.
-# Building your own NSS
+[TOC]
-In order to use a debugger with the NSS library, it helps to build NSS yourself. Here's how I did it:
+## Building your own NSS
+
+In order to use a debugger with the NSS library, it helps to build NSS yourself.
+Here's how I did it:
First, read
http://www.mozilla.org/projects/security/pki/nss/nss-3.11.4/nss-3.11.4-build.html
@@ -12,51 +15,58 @@ and/or
https://developer.mozilla.org/En/NSS_reference/Building_and_installing_NSS/Build_instructions
Then, to build the most recent source tarball:
-```
- cd $HOME
- wget ftp://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_12_RTM/src/nss-3.12-with-nspr-4.7.tar.gz
- tar -xzvf nss-3.12-with-nspr-4.7.tar.gz
- cd nss-3.12/
- cd mozilla/security/nss/
- make nss_build_all
-```
-Sadly, the latest release, 3.12.2, isn't available as a tarball, so you have to build it from cvs:
+```shell
+cd $HOME
+wget ftp://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_12_RTM/src/nss-3.12-with-nspr-4.7.tar.gz
+tar -xzvf nss-3.12-with-nspr-4.7.tar.gz
+cd nss-3.12/
+cd mozilla/security/nss/
+make nss_build_all
```
- cd $HOME
- mkdir nss-3.12.2
- cd nss-3.12.2
- export CVSROOT=:pserver:anonymous@cvs-mirror.mozilla.org:/cvsroot
- cvs login
- cvs co -r NSPR_4_7_RTM NSPR
- cvs co -r NSS_3_12_2_RTM NSS
- cd mozilla/security/nss/
- make nss_build_all
+
+Sadly, the latest release, 3.12.2, isn't available as a tarball, so you have to
+build it from cvs:
+
+```shell
+cd $HOME
+mkdir nss-3.12.2
+cd nss-3.12.2
+export CVSROOT=:pserver:anonymous@cvs-mirror.mozilla.org:/cvsroot
+cvs login
+cvs co -r NSPR_4_7_RTM NSPR
+cvs co -r NSS_3_12_2_RTM NSS
+cd mozilla/security/nss/
+make nss_build_all
```
-# Linking against your own NSS
+## Linking against your own NSS
Sadly, I don't know of a nice way to do this; I always do
-```
-hammer --verbose net > log 2>&1
-```
+
+ hammer --verbose net > log 2>&1
+
then grab the line that links my app and put it into a shell script link.sh,
and edit it to include the line
-```
-DIR=$HOME/nss-3.12.2/mozilla/dist/Linux2.6_x86_glibc_PTH_DBG.OBJ/lib
-```
-and insert a -L$DIR right before the -lnss3.
-Note that hammer often builds the app in one, deeply buried, place, then copies it into Hammer
-for ease of use. You'll probably want to make your link.sh do the same thing.
+ DIR=$HOME/nss-3.12.2/mozilla/dist/Linux2.6_x86_glibc_PTH_DBG.OBJ/lib
+
+and insert a `-L$DIR` right before the `-lnss3`.
-Then, after a source code change, do the usual "hammer net" followed by "sh link.sh".
+Note that hammer often builds the app in one, deeply buried, place, then copies
+it into Hammer for ease of use. You'll probably want to make your `link.sh` do
+the same thing.
+
+Then, after a source code change, do the usual `hammer net` followed by
+`sh link.sh`.
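+
+The edited `link.sh` might end up looking roughly like this sketch (the actual
+link line comes from your captured build log; the object files and libraries
+shown here are placeholders):
+
+```shell
+#!/bin/sh
+DIR=$HOME/nss-3.12.2/mozilla/dist/Linux2.6_x86_glibc_PTH_DBG.OBJ/lib
+g++ -o Hammer/foo foo.o bar.o -L$DIR -lnss3 -lnspr4
+```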
Then, to run the resulting app, use a script like
-# Running against your own NSS
-Create a script named 'run.sh' like this:
-```
+## Running against your own NSS
+
+Create a script named `run.sh` like this:
+
+```sh
#!/bin/sh
set -x
DIR=$HOME/nss-3.12.2/mozilla/dist/Linux2.6_x86_glibc_PTH_DBG.OBJ/lib
@@ -65,60 +75,68 @@ export LD_LIBRARY_PATH=$DIR
```
Then run your app with
-```
-sh run.sh Hammer/foo
-```
+
+ sh run.sh Hammer/foo
Or, to debug it, do
-```
-sh run.sh gdb Hammer/foo
-```
-# Logging
+ sh run.sh gdb Hammer/foo
+
+## Logging
There are several flavors of logging you can turn on.
- * SSLClientSocketNSS can log its state transitions and function calls using base/logging.cc. To enable this, edit net/base/ssl\_client\_socket\_nss.cc and change #if 1 to #if 0. See base/logging.cc for where the output goes (on Linux, it's usually stderr).
+* `SSLClientSocketNSS` can log its state transitions and function calls using
+ `base/logging.cc`. To enable this, edit `net/base/ssl_client_socket_nss.cc`
+ and change `#if 1` to `#if 0`. See `base/logging.cc` for where the output
+ goes (on Linux, it's usually stderr).
- * HttpNetworkTransaction and friends can log its state transitions using base/trace\_event.cc. To enable this, arrange for your app to call base::TraceLog::StartTracing(). The output goes to a file named trace...pid.log in the same directory as the executable (e.g. Hammer/trace\_15323.log).
+* `HttpNetworkTransaction` and friends can log their state transitions using
+ `base/trace_event.cc`. To enable this, arrange for your app to call
+ `base::TraceLog::StartTracing()`. The output goes to a file named
+ `trace...pid.log` in the same directory as the executable (e.g.
+ `Hammer/trace_15323.log`).
- * NSS itself can log some events. To enable this, set the envirnment variables SSLDEBUGFILE=foo.log SSLTRACE=99 SSLDEBUG=99 before running your app.
+* NSS itself can log some events. To enable this, set the environment
+  variables `SSLDEBUGFILE=foo.log SSLTRACE=99 SSLDEBUG=99` before running
+  your app, as shown in the example after this list.
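+
+For example, combined with the `run.sh` wrapper above:
+
+    SSLDEBUGFILE=foo.log SSLTRACE=99 SSLDEBUG=99 sh run.sh Hammer/foo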
-# Network Traces
+## Network Traces
+
+http://wiki.wireshark.org/SSL describes how to decode SSL traffic. Chromium SSL
+unit tests that use `net/base/ssl_test_util.cc` to set up their servers always
+use port 9443 with `net/data/ssl/certificates/ok_cert.pem`, and port 9666 with
+`net/data/ssl/certificates/expired_cert.pem`. This makes it easy to configure
+Wireshark to decode the traffic: do
-http://wiki.wireshark.org/SSL describes how to decode SSL traffic.
-Chromium SSL unit tests that use src/net/base/ssl\_test\_util.cc to
-set up thir servers always use port 9443 with src/net/data/ssl/certificates/ok\_cert.pem,
-and port 9666 with src/net/data/ssl/certificates/expired\_cert.pem
-This makes it easy to configure Wireshark to decode the traffic: do
Edit / Preferences / Protocols / SSL, and in the "RSA Keys List" box, enter
-```
-127.0.0.1,9443,http,<path to ok_cert.pem>;127.0.0.1,9666,http,<path to expired_cert.pem>
-```
+
+ 127.0.0.1,9443,http,<path to ok_cert.pem>;127.0.0.1,9666,http,<path to expired_cert.pem>
+
e.g.
-```
-127.0.0.1,9443,http,/home/dank/chromium/src/net/data/ssl/certificates/ok_cert.pem;127.0.0.1,9666,http,/home/dank/chromium/src/net/data/ssl/certificates/expired_cert.pem
-```
+
+ 127.0.0.1,9443,http,/home/dank/chromium/src/net/data/ssl/certificates/ok_cert.pem;127.0.0.1,9666,http,/home/dank/chromium/src/net/data/ssl/certificates/expired_cert.pem
+
Then capture all tcp traffic on interface lo, and run your test.
-# Valgrinding NSS
+## Valgrinding NSS
Read https://developer.mozilla.org/en/NSS_Memory_allocation and do
-```
-export NSS_DISABLE_ARENA_FREE_LIST=1
-```
-before valgrinding if you want to find where a block was originally
-allocated.
+
+ export NSS_DISABLE_ARENA_FREE_LIST=1
+
+before valgrinding if you want to find where a block was originally allocated.
If you get unsymbolized entries in NSS backtraces, try setting:
-```
-export NSS_DISABLE_UNLOAD=1
-```
-(Note that if you use the Chromium valgrind scripts like tools/valgrind/chrome\_tests.sh or tools/valgrind/valgrind.sh these will both be set automatically.)
+ export NSS_DISABLE_UNLOAD=1
+
+(Note that if you use the Chromium valgrind scripts like
+`tools/valgrind/chrome_tests.sh` or `tools/valgrind/valgrind.sh` these will both
+be set automatically.)
-# Support forums
+## Support forums
If you have nonconfidential questions about NSS, check the newsgroup
-> http://groups.google.com/group/mozilla.dev.tech.crypto
-The NSS maintainer monitors that group and gives good answers. \ No newline at end of file
+http://groups.google.com/group/mozilla.dev.tech.crypto
+
+The NSS maintainer monitors that group and gives good answers.
diff --git a/docs/linux_dev_build_as_default_browser.md b/docs/linux_dev_build_as_default_browser.md
index 41226d9..acf2988 100644
--- a/docs/linux_dev_build_as_default_browser.md
+++ b/docs/linux_dev_build_as_default_browser.md
@@ -1,20 +1,34 @@
-Copy a stable version's .desktop file and modify it to point to your dev build:
- * `cp /usr/share/applications/google-chrome.desktop ~/.local/share/applications/chromium-mybuild-c-release.desktop`
- * `vim ~/.local/share/applications/chromium-mybuild-c-release.desktop`
- * Change first Exec line in desktop entry: (change path to your dev setup)
- * `Exec=/usr/local/google/home/scheib/c/src/out/Release/chrome %U`
+# Linux Dev Build As Default Browser
+
+Copy a stable version's `.desktop` file and modify it to point to your dev
+build:
+
+```
+cp /usr/share/applications/google-chrome.desktop \
+ ~/.local/share/applications/chromium-mybuild-c-release.desktop
+vim ~/.local/share/applications/chromium-mybuild-c-release.desktop
+
+# Change first Exec line in desktop entry: (change path to your dev setup)
+Exec=/usr/local/google/home/scheib/c/src/out/Release/chrome %U
+```
Set the default:
- * `xdg-settings set default-web-browser chromium-mybuild-c-release.desktop`
+
+ xdg-settings set default-web-browser chromium-mybuild-c-release.desktop
Launch, telling Chrome which config you're using:
- * `CHROME_DESKTOP=chromium-mybuild-c-release.desktop out/Release/chrome`
- * Verify Chrome thinks it is default in `about:settings` page.
- * Press the button to make default if not.
+
+* `CHROME_DESKTOP=chromium-mybuild-c-release.desktop out/Release/chrome`
+* Verify Chrome thinks it is default in `about:settings` page.
+ * Press the button to make default if not.
Restore the normal default:
- * `xdg-settings set default-web-browser google-chrome.desktop`
+ xdg-settings set default-web-browser google-chrome.desktop
+
+Change the default, run, and restore:
-A single line to change the default, run, and restore:
- * `xdg-settings set default-web-browser chromium-mybuild-c-release.desktop && CHROME_DESKTOP=chromium-mybuild-c-release.desktop out/Release/chrome; xdg-settings set default-web-browser google-chrome.desktop && echo Restored default browser.` \ No newline at end of file
+ xdg-settings set default-web-browser chromium-mybuild-c-release.desktop && \
+ CHROME_DESKTOP=chromium-mybuild-c-release.desktop out/Release/chrome
+ xdg-settings set default-web-browser google-chrome.desktop && \
+ echo Restored default browser.
diff --git a/docs/linux_development.md b/docs/linux_development.md
index 8697999..1333a51 100644
--- a/docs/linux_development.md
+++ b/docs/linux_development.md
@@ -1,33 +1,42 @@
# Linux Development
-**Please join us on IRC for the most up-to-date development discussion: `irc.freenode.net`, `#chromium`**
+**Please join us on IRC for the most up-to-date development discussion:
+`irc.freenode.net`, `#chromium`**
## Checkout and Build
-See the LinuxBuildInstructions.
+
+See the [Linux build instructions](linux_build_instructions.md).
## What Needs Work
Look at the Chromium bug tracker for open Linux issues:
http://code.google.com/p/chromium/issues/list?can=2&q=os%3Alinux
-Issues marked "Available" are ready for someone to claim. To claim an issue, add a comment and then a project member will mark it "Assigned". If none of the "Available" issues seem appropriate, you may be able to help an already claimed ("Assigned" or "Started") issue, but you'll probably want to coordinate with the claimants, to avoid unnecessary duplication of effort.
+Issues marked "Available" are ready for someone to claim. To claim an issue, add
+a comment and then a project member will mark it "Assigned". If none of the
+"Available" issues seem appropriate, you may be able to help an already claimed
+("Assigned" or "Started") issue, but you'll probably want to coordinate with the
+claimants, to avoid unnecessary duplication of effort.
Issues marked with HelpWanted are a good place to start.
-### Random TODOs
-
-We've also marked bits that remain to be done for porting with `TODO(port)` in the code. If you grep for that you'll likely find plenty of small tasks to get started on.
-
### New Bugs
-If you think you have discovered a new Linux bug, start by [searching for similar issues](http://code.google.com/p/chromium/issues/list?can=1&q=Linux). When you search, make sure you choose the "All Issues" option, since your bug might have already been fixed, but the default search only looks for open issues. If you can't find a related bug, please create a [New Issue](http://code.google.com/p/chromium/issues/entry). Use the linux defect template.
+If you think you have discovered a new Linux bug, start by
+[searching for similar issues](http://code.google.com/p/chromium/issues/list?can=1&q=Linux).
+When you search, make sure you choose the "All Issues" option, since your bug
+might have already been fixed, but the default search only looks for open
+issues. If you can't find a related bug, please create a
+[New Issue](https://crbug.com/new). Use the linux defect template.
## Contributing code
+
See [ContributingCode](http://dev.chromium.org/developers/contributing-code).
## Debugging
-See LinuxDebugging.
+
+See [linux_debugging.md](linux_debugging.md).
## Documents
-LinuxGraphicsPipeline \ No newline at end of file
+[linux_graphics_pipeline.md](linux_graphics_pipeline.md)
diff --git a/docs/linux_eclipse_dev.md b/docs/linux_eclipse_dev.md
index 9805c7b..89ec5f0 100644
--- a/docs/linux_eclipse_dev.md
+++ b/docs/linux_eclipse_dev.md
@@ -1,252 +1,398 @@
-# Introduction
+# Linux Eclipse Dev
-Eclipse can be used on Linux (and probably Windows and Mac) as an IDE for developing Chromium. It's unpolished, but here's what works:
+Eclipse can be used on Linux (and probably Windows and Mac) as an IDE for
+developing Chromium. It's unpolished, but here's what works:
- * Editing code works well (especially if you're used to it or Visual Studio).
- * Navigating around the code works well. There are multiple ways to do this (F3, control-click, outlines).
- * Building works fairly well and it does a decent job of parsing errors so that you can click and jump to the problem spot.
- * Debugging is hit & miss. You can set breakpoints and view variables. STL containers give it (and gdb) a bit of trouble. Also, the debugger can get into a bad state occasionally and eclipse will need to be restarted.
- * Refactoring seems to work in some instances, but be afraid of refactors that touch a lot of files.
+* Editing code works well (especially if you're used to it or Visual Studio).
+* Navigating around the code works well. There are multiple ways to do this
+ (F3, control-click, outlines).
+* Building works fairly well and it does a decent job of parsing errors so
+ that you can click and jump to the problem spot.
+* Debugging is hit & miss. You can set breakpoints and view variables. STL
+ containers give it (and gdb) a bit of trouble. Also, the debugger can get
+ into a bad state occasionally and eclipse will need to be restarted.
+* Refactoring seems to work in some instances, but be afraid of refactors that
+ touch a lot of files.
-# Setup
+[TOC]
-## Get & Configure Eclipse
+## Setup
-Eclipse 4.3 (Kepler) is known to work with Chromium for Linux.
- * Download the distribution appropriate for your OS. For example, for Linux 64-bit/Java 64-bit, use the Linux 64 bit package from http://www.eclipse.org/downloads/ (Eclipse Packages Tab -> Linux 64 bit (link in bottom right)).
- * Tip: The packaged version of eclipse in distros may not work correctly with the latest CDT plugin (installed below). Best to get them all from the same source.
- * Googlers: The version installed on Goobuntu works fine. The UI will be much more responsive if you do not install the google3 plug-ins. Just uncheck all the boxes at first launch.
- * Unpack the distribution and edit the eclipse/eclipse.ini to increase the heap available to java. For instance:
- * Change -Xms40m to -Xms1024m (minimum heap) and -Xmx256m to -Xmx3072m (maximum heap).
- * Googlers: Edit ~/.eclipse/init.sh to add this:
-```
-export ECLIPSE_MEM_START="1024M"
-export ECLIPSE_MEM_MAX="3072M"
-```
-The large heap size prevents out of memory errors if you include many Chrome subprojects that Eclipse is maintaining code indices for.
- * Turn off Hyperlink detection in the Eclipse preferences. (Window -> Preferences, search for "Hyperlinking, and uncheck "Enable on demand hyperlink style navigation").
-
-Pressing the control key on (for keyboard shortcuts such as copy/paste) can trigger the hyperlink detector. This occurs on the UI thread and can result in the reading of jar files on the Eclipse classpath, which can tie up the editor due to the size of the classpath in Chromium.
-
-## A short word about paths
-
-Before you start setting up your work space - here are a few hints:
- * Don't put your checkout on a remote file system (e.g. NFS filer). It's too slow both for building and for Eclipse.
- * Make sure there is no file system link in your source path because Ninja will resolve it for a faster build and Eclipse / GDB will get confused. (Note: This means that the source will possibly not reside in your user directory since it would require a link from filer to your local repository.)
- * You may want to start Eclipse from the source root. To do this you can add an icon to your task bar as launcher. It should point to a shell script which will set the current path to your source base, and then start Eclipse. The result would probably look like this:
-```
-~/.bashrc
-cd /usr/local/google/chromium/src
-export PATH=/home/skuhne/depot_tools:/usr/local/google/goma/goma:/opt/eclipse:/usr/local/symlinks:/usr/local/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
-/opt/eclipse/eclipse -vm /usr/bin/java
-```
-
-(Note: Things work fine for me without launching Eclipse from a special directory. jamescook@chromium.org 2012-06-1)
-
-## Run Eclipse & Set your workspace
-
-Run eclipse/eclipse in a way that your regular build environment (export CC, CXX, etc...) will be visible to the eclipse process.
+### Get & Configure Eclipse
-Set the Workspace to be a directory on a local disk (e.g. /work/workspaces/chrome). Placing it on an NFS share is not recommended -- it's too slow and Eclipse will block on access. Don't put the workspace in the same directory as your checkout.
-
-## Install the C Development Tools ("CDT")
-
- 1. From the Help menu, select Install New Software...
- 1. Select the URL for the CDT, http://download.eclipse.org/tools/cdt/releases/kepler
- 1. If it's not there you can click Add... and add it.
- 1. Googlers: We have a local mirror, but be sure you run prodaccess before trying to use it.
- 1. Select & install the Main and Optional features.
- 1. Restart Eclipse
- 1. Go to Window > Open Perspective > Other... > C/C++ to switch to the C++ perspective (window layout).
- 1. Right-click on the "Java" perspective in the top-right corner and select "Close" to remove it.
+Eclipse 4.3 (Kepler) is known to work with Chromium for Linux.
-## Create your project(s)
+* Download the distribution appropriate for your OS. For example, for Linux
+ 64-bit/Java 64-bit, use the Linux 64 bit package from
+ http://www.eclipse.org/downloads/ (Eclipse Packages Tab -> Linux 64 bit
+ (link in bottom right)).
+ * Tip: The packaged version of eclipse in distros may not work correctly
+ with the latest CDT plugin (installed below). Best to get them all from
+ the same source.
+ * Googlers: The version installed on Goobuntu works fine. The UI will be
+ much more responsive if you do not install the google3 plug-ins. Just
+ uncheck all the boxes at first launch.
+* Unpack the distribution and edit the eclipse/eclipse.ini to increase the
+ heap available to java. For instance:
+ * Change `-Xms40m` to `-Xms1024m` (minimum heap) and `-Xmx256m` to
+ `-Xmx3072m` (maximum heap).
+ * Googlers: Edit `~/.eclipse/init.sh` to add this:
+
+ export ECLIPSE_MEM_START="1024M"
+ export ECLIPSE_MEM_MAX="3072M"
+
+The large heap size prevents out of memory errors if you include many Chrome
+subprojects that Eclipse is maintaining code indices for.
+
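+For reference, the relevant `eclipse.ini` lines would then read (values from
+above; tune them to your machine's RAM):
+
+    -Xms1024m
+    -Xmx3072m
+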
+* Turn off Hyperlink detection in the Eclipse preferences. (Window ->
+  Preferences, search for "Hyperlinking", and uncheck "Enable on demand
+  hyperlink style navigation".)
+
+Pressing the control key (for keyboard shortcuts such as copy/paste) can
+trigger the hyperlink detector. This occurs on the UI thread and can result in
+the reading of jar files on the Eclipse classpath, which can tie up the editor
+due to the size of the classpath in Chromium.
+
+### A short word about paths
-First, turn off automatic workspace refresh and automatic building, as Eclipse tries to do these too often and gets confused:
+Before you start setting up your workspace, here are a few hints:
- 1. Open Window > Preferences
- 1. Search for "workspace"
- 1. Turn off "Build automatically"
- 1. Turn off "Refresh using native hooks or polling"
- 1. Click "Apply"
+* Don't put your checkout on a remote file system (e.g. NFS filer). It's too
+ slow both for building and for Eclipse.
+* Make sure there is no file system link in your source path because Ninja
+ will resolve it for a faster build and Eclipse / GDB will get confused.
+ (Note: This means that the source will possibly not reside in your user
+ directory since it would require a link from filer to your local
+ repository.)
+* You may want to start Eclipse from the source root. To do this you can add
+ an icon to your task bar as launcher. It should point to a shell script
+ which will set the current path to your source base, and then start Eclipse.
+ The result would probably look like this:
+
+ ```shell
+  . ~/.bashrc  # source your regular shell setup first
+ cd /usr/local/google/chromium/src
+ export PATH=/home/skuhne/depot_tools:/usr/local/google/goma/goma:/opt/eclipse:/usr/local/symlinks:/usr/local/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ /opt/eclipse/eclipse -vm /usr/bin/java
+ ```
+
+(Note: Things work fine for me without launching Eclipse from a special
+directory. jamescook@chromium.org 2012-06-1)
+
+### Run Eclipse & Set your workspace
+
+Run eclipse/eclipse in a way that your regular build environment (export CC,
+CXX, etc.) will be visible to the Eclipse process.
+
+Set the Workspace to be a directory on a local disk (e.g.
+`/work/workspaces/chrome`). Placing it on an NFS share is not recommended --
+it's too slow and Eclipse will block on access. Don't put the workspace in the
+same directory as your checkout.
+
+### Install the C Development Tools ("CDT")
+
+1. From the Help menu, select Install New Software...
+ 1. Select the URL for the CDT,
+ http://download.eclipse.org/tools/cdt/releases/kepler
+ 1. If it's not there you can click Add... and add it.
+ 1. Googlers: We have a local mirror, but be sure you run prodaccess before
+ trying to use it.
+1. Select & install the Main and Optional features.
+1. Restart Eclipse
+1. Go to Window > Open Perspective > Other... > C/C++ to switch to the C++
+ perspective (window layout).
+1. Right-click on the "Java" perspective in the top-right corner and select
+ "Close" to remove it.
+
+### Create your project(s)
+
+First, turn off automatic workspace refresh and automatic building, as Eclipse
+tries to do these too often and gets confused:
+
+1. Open Window > Preferences
+1. Search for "workspace"
+1. Turn off "Build automatically"
+1. Turn off "Refresh using native hooks or polling"
+1. Click "Apply"
Create a single Eclipse project for everything:
- 1. From the File menu, select New > Project...
- 1. Select C/C++ Project > Makefile Project with Existing Code
- 1. Name the project the exact name of the directory: "src"
- 1. Provide a path to the code, like /work/chromium/src
- 1. Select toolchain: Linux GCC
- 1. Click Finish.
+1. From the File menu, select New > Project...
+1. Select C/C++ Project > Makefile Project with Existing Code
+1. Name the project the exact name of the directory: "src"
+1. Provide a path to the code, like /work/chromium/src
+1. Select toolchain: Linux GCC
+1. Click Finish.
-Chromium has a huge amount of code, enough that Eclipse can take a very long time to perform operations like "go to definition" and "open resource". You need to set it up to operate on a subset of the code.
+Chromium has a huge amount of code, enough that Eclipse can take a very long
+time to perform operations like "go to definition" and "open resource". You need
+to set it up to operate on a subset of the code.
In the Project Explorer on the left side:
- 1. Right-click on "src" and select "Properties..."
- 1. Open Resource > Resource Filters
- 1. Click "Add..."
- 1. Add the following filter:
- * Include only
- * Files, all children (recursive)
- * Name matches `.*\.(c|cc|cpp|h|mm|inl|idl|js|json|css|html|gyp|gypi|grd|grdp|gn)` regular expression
- 1. Add another filter:
- * Exclude all
- * Folders
- * Name matches `out_.*|\.git|\.svn|LayoutTests` regular expression
- * If you aren't working on WebKit, adding `|WebKit` will remove more files
- 1. Click "OK"
-
-Don't exclude the primary "out" directory, as it contains generated header files for things like string resources and Eclipse will miss a lot of symbols if you do.
-
-Eclipse will refresh the workspace and start indexing your code. It won't find most header files, however. Give it more help finding them:
-
- 1. Open Window > Preferences
- 1. Search for "Indexer"
- 1. Turn on "Allow heuristic resolution of includes"
- 1. Select "Use active build configuration"
- 1. Set Cache limits > Index database > Limit relative... to 20%
- 1. Set Cache limits > Index database > Absolute limit to 256 MB
- 1. Click "OK"
-
-Now the indexer will find many more include files, regardless of which approach you take below.
-
-### Optional: Manual header paths and symbols
-You can manually tell Eclipse where to find header files, which will allow it to create the source code index before you do a real build.
-
- 1. Right-click on "src" and select "Properties..."
+1. Right-click on "src" and select "Properties..."
+1. Open Resource > Resource Filters
+1. Click "Add..."
+1. Add the following filter:
+ * Include only
+ * Files, all children (recursive)
+ * Name matches
+ `.*\.(c|cc|cpp|h|mm|inl|idl|js|json|css|html|gyp|gypi|grd|grdp|gn)`
+ regular expression
+1. Add another filter:
+ * Exclude all
+ * Folders
+ * Name matches `out_.*|\.git|\.svn|LayoutTests` regular expression
+ * If you aren't working on WebKit, adding `|WebKit` will remove more
+ files
+1. Click "OK"
+
+Don't exclude the primary "out" directory, as it contains generated header files
+for things like string resources and Eclipse will miss a lot of symbols if you
+do.
+
+Eclipse will refresh the workspace and start indexing your code. It won't find
+most header files, however. Give it more help finding them:
+
+1. Open Window > Preferences
+1. Search for "Indexer"
+1. Turn on "Allow heuristic resolution of includes"
+1. Select "Use active build configuration"
+1. Set Cache limits > Index database > Limit relative... to 20%
+1. Set Cache limits > Index database > Absolute limit to 256 MB
+1. Click "OK"
+
+Now the indexer will find many more include files, regardless of which approach
+you take below.
+
+#### Optional: Manual header paths and symbols
+
+You can manually tell Eclipse where to find header files, which will allow it to
+create the source code index before you do a real build.
+
+1. Right-click on "src" and select "Properties..."
* Open C++ General > Paths and Symbols > Includes
* Click "GNU C++"
* Click "Add..."
- * Add /path/to/chromium/src
+ * Add `/path/to/chromium/src`
* Check "Add to all configurations" and "Add to all languages"
- 1. Repeat the above for:
- * /path/to/chromium/src/testing/gtest/include
+1. Repeat the above for:
+ * `/path/to/chromium/src/testing/gtest/include`
You may also find it helpful to define some symbols.
- 1. Add OS\_LINUX:
+1. Add `OS_LINUX`:
* Select the "Symbols" tab
* Click "GNU C++"
* Click "Add..."
- * Add name OS\_LINUX with value 1
+ * Add name `OS_LINUX` with value 1
* Click "Add to all configurations" and "Add to all languages"
- 1. Repeat for ENABLE\_EXTENSIONS 1
- 1. Repeat for HAS\_OUT\_OF\_PROC\_TEST\_RUNNER 1
- 1. Click "OK".
- 1. Eclipse will ask if you want to rebuild the index. Click "Yes".
+1. Repeat for `ENABLE_EXTENSIONS 1`
+1. Repeat for `HAS_OUT_OF_PROC_TEST_RUNNER 1`
+1. Click "OK".
+1. Eclipse will ask if you want to rebuild the index. Click "Yes".
Let the C++ indexer run. It will take a while (10s of minutes).
-## Optional: Building inside Eclipse
-This allows Eclipse to automatically discover include directories and symbols. If you use gold or ninja (both recommended) you'll need to tell Eclipse about your path.
+### Optional: Building inside Eclipse
+
+This allows Eclipse to automatically discover include directories and symbols.
+If you use gold or ninja (both recommended) you'll need to tell Eclipse about
+your path.
- 1. echo $PATH from a shell and copy it to the clipboard
- 1. Open Window > Preferences > C/C++ > Build > Environment
- 1. Select "Replace native environment with specified one" (since gold and ninja must be at the start of your path)
- 1. Click "Add..."
- 1. For name, enter `PATH`
- 1. For value, paste in your path with the ninja and gold directories.
- 1. Click "OK"
+1. Run `echo $PATH` from a shell and copy it to the clipboard
+1. Open Window > Preferences > C/C++ > Build > Environment
+1. Select "Replace native environment with specified one" (since gold and ninja
+ must be at the start of your path)
+1. Click "Add..."
+1. For name, enter `PATH`
+1. For value, paste in your path with the ninja and gold directories.
+1. Click "OK"
To create a Make target:
- 1. From the Window menu, select Show View > Make Target
- 1. In the Make Target view, right-click on the project and select New...
- 1. name the target (e.g. base\_unittests)
- 1. Unclick the Build Command: Use builder Settings and type whatever build command you would use to build this target (e.g. "ninja -C out/Debug base\_unittests").
- 1. Return to the project properties page a under the C/C++ Build, change the Build Location/Build Directory to be /path/to/chromium/src
- 1. In theory ${workspace\_loc} should work, but it doesn't for me.
- 1. If you put your workspace in /path/to/chromium, then ${workspace\_loc:/src} will work too.
- 1. Now in the Make Targets view, select the target and click the hammer icon (Build Make Target).
-
-You should see the build proceeding in the Console View and errors will be parsed and appear in the Problems View. (Note that sometimes multi-line compiler errors only show up partially in the Problems view and you'll want to look at the full error in the Console).
-
-(Eclipse 3.8 has a bug where the console scrolls too slowly if you're doing a fast build, e.g. with goma. To work around, go to Window > Preferences and search for "console". Under C/C++ console, set "Limit console output" to 2147483647, the maximum value.)
-
-## Optional: Multiple build targets
-If you want to build multiple different targets in Eclipse (chrome, unit\_tests, etc.):
-
- 1. Window > Show Toolbar (if you had it off)
- 1. Turn on special toolbar menu item (hammer) or menu bar item (Project > Build configurations > Set Active > ...)
- 1. Window > Customize Perspective... > "Command Groups Availability"
- 1. Check "Build configuration"
- 1. Add more Build targets
- 1. Project > Properties > C/C++ Build > Manage Configurations
- 1. Select "New..."
- 1. Duplicate from current and give it a name like "Unit tests".
- 1. Change under “Behavior” > Build > the target to e.g. “unit\_tests”.
-
-You can also drag the toolbar to the bottom of your window to save vertical space.
-
-## Optional: Debugging
-
- 1. From the toolbar at the top, click the arrow next to the debug icon and select Debug Configurations...
- 1. Select C/C++ Application and click the New Launch Configuration icon. This will create a new run/debug configuration under the C/C++ Application header.
- 1. Name it something useful (e.g. base\_unittests).
- 1. Under the Main Tab, enter the path to the executable (e.g. .../out/Debug/base\_unittests)
- 1. Select the Debugger Tab, select Debugger: gdb and unclick "Stop on startup in (main)" unless you want this.
- 1. Set a breakpoint somewhere in your code and click the debug icon to start debugging.
-
-## Optional: Accurate symbol information
-
-If setup properly, Eclipse can do a great job of semantic navigation of C++ code (showing type hierarchies, finding all references to a particular method even when other classes have methods of the same name, etc.). But doing this well requires the Eclipse knows correct include paths and pre-processor definitions. After fighting with with a number of approaches, I've found the below to work best for me.
-
- 1. From a shell in your src directory, run GYP\_GENERATORS=ninja,eclipse build/gyp\_chromium
- 1. This generates <project root>/out/Debug/eclipse-cdt-settings.xml which is used below.
- 1. This creates a single list of include directories and preprocessor definitions to be used for all source files, and so is a little inaccurate. Here are some tips for compensating for the limitations:
- 1. Use `-R <target>` to restrict the output to considering only certain targets (avoiding unnecessary includes that are likely to cause trouble). Eg. for a blink project, use `-R blink`.
- 1. If you care about blink, move 'third\_party/Webkit/Source' to the top of the list to better resolve ambiguous include paths (eg. 'config.h').
- 1. Import paths and symbols
- 1. Right click on the project and select Properties > C/C++ General > Paths and Symbols
- 1. Click Restore Defaults to clear any old settings
- 1. Click Import Settings... > Browse... and select <project root>/out/Debug/eclipse-cdt-settings.xml
- 1. Click the Finish button. The entire preferences dialog should go away.
- 1. Right click on the project and select Index > Rebuild
-
-## Alternative: Per-file accurate include/pre-processor information
-
-Instead of generating a fixed list of include paths and pre-processor definitions for a project (above), it is also possible to have Eclipse determine the correct setting on a file-by-file basis using a built output parser. I (rbyers) used this successfully for a long time, but it doesn't seem much better in practice than the simpler (and less bug-prone) approach above.
-
- 1. Install the latest version of Eclipse IDE for C/C++ developers ([Juno SR1](http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/junosr1) at the time of this writing)
- 1. Setup build to generate a build log that includes the g++ command lines for the files you want to index:
- 1. Project Properties -> C/C++ Build
- 1. Uncheck "Use default build command"
- 1. Enter your build command, eg: ninja -v
- 1. Note that for better performance, you can use a command that doesn't actually builds, just prints the commands that would be run. For ninja/make this means adding -n. This only prints the compile commands for changed files (so be sure to move your existing out directory out of the way temporarily to force a full "build"). ninja also supports "-t commands" which will print all build commands for the specified target and runs even faster as it doesn't have to check file timestamps.
- 1. Build directory: your build path including out/Debug
- 1. Note that for the relative paths to be parsed correctly you can't use ninja's ` '-C <dir>' ` to change directories as you might from the command line.
- 1. Build: potentially change "all" to the target you want to analyze, eg. "chrome"
- 1. Deselect 'clean'
- 1. If you're using Ninja, you need to teach eclipse to ignore the prefix it adds (eg. [10/1234] to each line in build output):
- 1. Project properties -> C/C++ General -> Preprocessor includes
- 1. Providers -> CDT GCC Build Output Parser -> Compiler command pattern
- 1. ` (\[.*\] )?((gcc)|([gc]\+\+)|(clang(\+\+)?)) `
- 1. Note that there appears to be a bug with "Share setting entries between projects" - it will keep resetting to off. I suggest using per-project settings and using the "folder" as the container to keep discovered entries ("file" may work as well).
- 1. Eclipse / GTK has bugs where lots of output to the build console can slow down the UI dramatically and cause it to hang (basically spends all it's time trying to position the cursor correctly in the build console window). To avoid this, close the console window and disable automatically opening it on build:
- 1. Preferences->C/C++->Build->Console -> Uncheck "Open console when building"
- 1. note you can still see the build log in ` <workspace>/.metadata/.plugins/org.eclipse.cdt.ui `
- 1. Now build the project (select project, click on hammer). If all went well:
- 1. Right click on a cpp file -> properties -> C/C++ general -> Preprocessor includes -> GNU C++ -> CDT GCC Build output Parser
- 1. You will be able to expand and see all the include paths and pre-processor definitions used for this file
- 1. Rebuild index (right-click on project, index, rebuild). If all went well:
- 1. Open a CPP file and look at problems windows
- 1. Should be no (or very few) errors
- 1. Should be able to hit F3 on most symbols and jump to their definitioin
- 1. CDT has some issues with complex C++ syntax like templates (eg. PassOwnPtr functions)
- 1. See [this page](http://wiki.eclipse.org/CDT/User/FAQ#Why_does_Open_Declaration_.28F3.29_not_work.3F_.28also_applies_to_other_functions_using_the_indexer.29) for more information.
-
-## Optional: static code and style guide analysis using cpplint.py
-
- 1. From the toolbar at the top, click the Project -> Properties and go to C/C++Build.
- 1. Click on the right side of the pop up windows, "Manage Configurations...", then on New, and give it a name, f.i. "Lint current file", and close the small window, then select it in the Configuration drop down list.
- 1. Under Builder settings tab, unclick "Use default build command" and type as build command the full path to your depot\_tools/cpplint.py
- 1. Under behaviour tab, unselect Clean, select Build(incremental build) and in Make build target, add `--verbose=0 ${selected_resource_loc} `
- 1. Go back to the left side of the current window, and to C/C++Build -> Settings, and click on error parsers tab, make sure CDT GNU C/C++ Error Parser, CDT pushd/popd CWD Locator are set, then click Apply and OK.
- 1. Select a file and click on the hammer icon drop down triangle next to it, and make sure the build configuration is selected "Lint current file", then click on the hammer.
- 1. Note: If you get the cpplint.py help output, make sure you have selected a file, by clicking inside the editor window or on its tab header, and make sure the editor is not maximized inside Eclipse, i.e. you should see more subwindows around.
-
-## Additional tips
- 1. Mozilla's [Eclipse CDT guide](https://developer.mozilla.org/en-US/docs/Eclipse_CDT) is helpful:
- 1. For improved performance, I use medium-granularity projects (eg. one for WebKit/Source) instead of putting all of 'src/' in one project.
- 1. For working in Blink (which uses WebKit code style), feel free to use [this](https://drive.google.com/file/d/0B2LVVIKSxUVYM3R6U0tUa1dmY0U/view?usp=sharing) code-style formatter XML profile \ No newline at end of file
+1. From the Window menu, select Show View > Make Target
+1. In the Make Target view, right-click on the project and select New...
+1. Name the target (e.g. `base_unittests`)
+1. Under Build Command, uncheck "Use builder settings" and type whatever build
+   command you would use to build this target (e.g.
+   `ninja -C out/Debug base_unittests`).
+1. Return to the project properties page and, under C/C++ Build, change the
+   Build Location/Build Directory to be `/path/to/chromium/src`
+ 1. In theory `${workspace_loc}` should work, but it doesn't for me.
+ 1. If you put your workspace in `/path/to/chromium`, then
+ `${workspace_loc:/src}` will work too.
+1. Now in the Make Targets view, select the target and click the hammer icon
+ (Build Make Target).
+
+You should see the build proceeding in the Console View and errors will be
+parsed and appear in the Problems View. (Note that sometimes multi-line compiler
+errors only show up partially in the Problems view and you'll want to look at
+the full error in the Console).
+
+(Eclipse 3.8 has a bug where the console scrolls too slowly if you're doing a
+fast build, e.g. with goma. To work around, go to Window > Preferences and
+search for "console". Under C/C++ console, set "Limit console output" to
+2147483647, the maximum value.)
+
+### Optional: Multiple build targets
+
+If you want to build multiple different targets in Eclipse (`chrome`,
+`unit_tests`, etc.):
+
+1. Window > Show Toolbar (if you had it off)
+1. Turn on special toolbar menu item (hammer) or menu bar item (Project > Build
+ configurations > Set Active > ...)
+ 1. Window > Customize Perspective... > "Command Groups Availability"
+ 1. Check "Build configuration"
+1. Add more Build targets
+ 1. Project > Properties > C/C++ Build > Manage Configurations
+ 1. Select "New..."
+ 1. Duplicate from current and give it a name like "Unit tests".
+    1. Change the target under "Behavior" > Build to e.g. `unit_tests`.
+
+You can also drag the toolbar to the bottom of your window to save vertical
+space.
+
+### Optional: Debugging
+
+1. From the toolbar at the top, click the arrow next to the debug icon and
+ select Debug Configurations...
+1. Select C/C++ Application and click the New Launch Configuration icon. This
+   will create a new run/debug configuration under the C/C++ Application
+   header.
+1. Name it something useful (e.g. `base_unittests`).
+1. Under the Main Tab, enter the path to the executable (e.g.
+ `.../out/Debug/base_unittests`)
+1. Select the Debugger Tab, select Debugger: gdb and unclick "Stop on startup
+ in (main)" unless you want this.
+1. Set a breakpoint somewhere in your code and click the debug icon to start
+ debugging.
+
+### Optional: Accurate symbol information
+
+If set up properly, Eclipse can do a great job of semantic navigation of C++
+code (showing type hierarchies, finding all references to a particular method
+even when other classes have methods of the same name, etc.). But doing this
+well requires that Eclipse knows the correct include paths and pre-processor
+definitions. After fighting with a number of approaches, I've found the below
+to work best for me.
+
+1. From a shell in your src directory, run
+ `GYP_GENERATORS=ninja,eclipse build/gyp_chromium`
+    1. This generates `<project root>/out/Debug/eclipse-cdt-settings.xml`,
+        which is used below.
+ 1. This creates a single list of include directories and preprocessor
+ definitions to be used for all source files, and so is a little
+ inaccurate. Here are some tips for compensating for the limitations:
+    1. Use `-R <target>` to restrict the output to considering only certain
+        targets (avoiding unnecessary includes that are likely to cause
+        trouble), e.g. for a Blink project, use `-R blink` (see the sketch
+        after this list).
+    1. If you care about Blink, move `third_party/WebKit/Source` to the
+        top of the list to better resolve ambiguous include paths (e.g.
+        `config.h`).
+1. Import paths and symbols
+ 1. Right click on the project and select Properties > C/C++ General > Paths
+ and Symbols
+ 1. Click Restore Defaults to clear any old settings
+ 1. Click Import Settings... > Browse... and select
+ `<project root>/out/Debug/eclipse-cdt-settings.xml`
+ 1. Click the Finish button. The entire preferences dialog should go away.
+ 1. Right click on the project and select Index > Rebuild
+
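+As a concrete sketch of the generation step above, a Blink-focused run using
+the `-R` flag might look like:
+
+    GYP_GENERATORS=ninja,eclipse build/gyp_chromium -R blink
+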
+### Alternative: Per-file accurate include/pre-processor information
+
+Instead of generating a fixed list of include paths and pre-processor
+definitions for a project (above), it is also possible to have Eclipse determine
+the correct setting on a file-by-file basis using a build output parser. I
+(rbyers) used this successfully for a long time, but it doesn't seem much better
+in practice than the simpler (and less bug-prone) approach above.
+
+1. Install the latest version of Eclipse IDE for C/C++ developers
+ ([Juno SR1](http://www.eclipse.org/downloads/packages/eclipse-ide-cc-developers/junosr1)
+ at the time of this writing)
+1. Set up the build to generate a build log that includes the g++ command
+   lines for the files you want to index:
+ 1. Project Properties -> C/C++ Build
+ 1. Uncheck "Use default build command"
+ 1. Enter your build command, eg: `ninja -v`
+        1. Note that for better performance, you can use a command that
+            doesn't actually build, but just prints the commands that would be
+            run. For ninja/make this means adding `-n`. This only prints the
+            compile commands for changed files (so be sure to move your
+            existing out directory out of the way temporarily to force a
+            full "build"). ninja also supports `-t commands`, which prints
+            all build commands for the specified target and runs even
+            faster as it doesn't have to check file timestamps.
+ 1. Build directory: your build path including out/Debug
+ 1. Note that for the relative paths to be parsed correctly you
+ can't use ninja's `-C <dir>` to change directories as you might
+ from the command line.
+ 1. Build: potentially change `all` to the target you want to analyze,
+ eg. `chrome`
+ 1. Deselect 'clean'
+    1. If you're using Ninja, you need to teach Eclipse to ignore the prefix
+        it adds (e.g. `[10/1234]`) to each line in the build output:
+ 1. Project properties -> C/C++ General -> Preprocessor includes
+ 1. Providers -> CDT GCC Build Output Parser -> Compiler command pattern
+ 1. `(\[.*\] )?((gcc)|([gc]\+\+)|(clang(\+\+)?))`
+ 1. Note that there appears to be a bug with "Share setting entries
+ between projects" - it will keep resetting to off. I suggest using
+ per-project settings and using the "folder" as the container to keep
+ discovered entries ("file" may work as well).
+    1. Eclipse / GTK has bugs where lots of output to the build console can
+        slow down the UI dramatically and cause it to hang (basically it
+        spends all its time trying to position the cursor correctly in the
+        build console window). To avoid this, close the console window and
+        disable automatically opening it on build:
+        1. Preferences->C/C++->Build->Console -> Uncheck "Open console when
+            building"
+        1. Note: you can still see the build log in
+            `<workspace>/.metadata/.plugins/org.eclipse.cdt.ui`
+1. Now build the project (select project, click on hammer). If all went well:
+ 1. Right click on a cpp file -> properties -> C/C++ general -> Preprocessor
+ includes -> GNU C++ -> CDT GCC Build output Parser
+ 1. You will be able to expand and see all the include paths and
+ pre-processor definitions used for this file
+1. Rebuild index (right-click on project, index, rebuild). If all went well:
+ 1. Open a CPP file and look at problems windows
+ 1. Should be no (or very few) errors
+    1. Should be able to hit F3 on most symbols and jump to their definition
+    1. CDT has some issues with complex C++ syntax like templates (e.g.
+        `PassOwnPtr` functions)
+ 1. See
+ [this page](http://wiki.eclipse.org/CDT/User/FAQ#Why_does_Open_Declaration_.28F3.29_not_work.3F_.28also_applies_to_other_functions_using_the_indexer.29)
+ for more information.
+
+### Optional: static code and style guide analysis using cpplint.py
+
+1. From the toolbar at the top, click Project -> Properties and go to
+   C/C++ Build.
+    1. On the right side of the pop-up window, click "Manage
+       Configurations...", then New, and give it a name, e.g. "Lint current
+       file"; close the small window, then select it in the Configuration
+       drop-down list.
+    1. Under the Builder Settings tab, uncheck "Use default build command" and
+       enter as the build command the full path to your
+       `depot_tools/cpplint.py`
+    1. Under the Behaviour tab, deselect Clean, select Build (Incremental
+       build) and in Make build target, add
+       `--verbose=0 ${selected_resource_loc}` (the effective command line is
+       sketched after this list).
+ 1. Go back to the left side of the current window, and to C/C++Build ->
+ Settings, and click on error parsers tab, make sure CDT GNU C/C++ Error
+ Parser, CDT pushd/popd CWD Locator are set, then click Apply and OK.
+1. Select a file and click on the drop-down triangle next to the hammer icon,
+   make sure the build configuration "Lint current file" is selected, then
+   click on the hammer.
+1. Note: If you get the `cpplint.py` help output, make sure you have selected a
+ file, by clicking inside the editor window or on its tab header, and make
+ sure the editor is not maximized inside Eclipse, i.e. you should see more
+ subwindows around.
+
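+For reference, the "Lint current file" target effectively runs something like
+the following for the selected file (the file path here is illustrative):
+
+    /path/to/depot_tools/cpplint.py --verbose=0 chrome/browser/ui/foo.cc
+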
+### Additional tips
+
+1. Mozilla's
+ [Eclipse CDT guide](https://developer.mozilla.org/en-US/docs/Eclipse_CDT)
+   is helpful.
+1. For improved performance, I use medium-granularity projects (e.g. one for
+   WebKit/Source) instead of putting all of `src/` in one project.
+1. For working in Blink (which uses WebKit code style), feel free to use
+ [this](https://drive.google.com/file/d/0B2LVVIKSxUVYM3R6U0tUa1dmY0U/view?usp=sharing)
+ code-style formatter XML profile
diff --git a/docs/linux_faster_builds.md b/docs/linux_faster_builds.md
index 1dee237..1dd601b 100644
--- a/docs/linux_faster_builds.md
+++ b/docs/linux_faster_builds.md
@@ -1,76 +1,103 @@
-#summary tips for improving build speed on Linux
-#labels Linux,build
-
-This list is sorted such that the largest speedup is first; see LinuxBuildInstructions for context and [Faster Builds](https://code.google.com/p/chromium/wiki/CommonBuildTasks#Faster_Builds) for non-Linux-specific techniques.
+# Tips for improving build speed on Linux
+This list is sorted such that the largest speedup is first; see
+[Linux build instructions](linux_build_instructions.md) for context and
+[Faster Builds](common_build_tasks.md) for non-Linux-specific techniques.
+[TOC]
## Use goma
-If you work at Google, you can use goma for distributed builds; this is similar to [distcc](http://en.wikipedia.org/wiki/Distcc). See [go/ma](http://go/ma) for documentation.
+If you work at Google, you can use goma for distributed builds; this is similar
+to [distcc](http://en.wikipedia.org/wiki/Distcc). See [go/ma](http://go/ma) for
+documentation.
-Even without goma, you can do distributed builds with distcc (if you have access to other machines), or a parallel build locally if have multiple cores.
+Even without goma, you can do distributed builds with distcc (if you have access
+to other machines), or a parallel build locally if you have multiple cores.
-Whether using goma, distcc, or parallel building, you can specify the number of build processes with `-jX` where `X` is the number of processes to start.
+Whether using goma, distcc, or parallel building, you can specify the number of
+build processes with `-jX` where `X` is the number of processes to start.
## Use Icecc
-[Icecc](https://github.com/icecc/icecream) is the distributed compiler with a central scheduler to share build load. Currently, many external contributors use it. e.g. Intel, Opera, Samsung.
+[Icecc](https://github.com/icecc/icecream) is a distributed compiler with a
+central scheduler to share build load. Currently, many external contributors
+use it, e.g. Intel, Opera, and Samsung.
When you use Icecc, you need to set some gyp variables.
-**linux\_use\_bundled\_binutils=0**
+    linux_use_bundled_binutils=0
--B option is not supported. [relevant commit](https://github.com/icecc/icecream/commit/b2ce5b9cc4bd1900f55c3684214e409fa81e7a92)
+`-B` option is not supported.
+[relevant commit](https://github.com/icecc/icecream/commit/b2ce5b9cc4bd1900f55c3684214e409fa81e7a92)
-**linux\_use\_debug\_fission=0**
+ linux_use_debug_fission=0
-[debug fission](http://gcc.gnu.org/wiki/DebugFission) is not supported. [bug](https://github.com/icecc/icecream/issues/86)
+[debug fission](http://gcc.gnu.org/wiki/DebugFission) is not supported.
+[bug](https://github.com/icecc/icecream/issues/86)
-**clang=0**
+ clang=0
Icecc doesn't support clang yet.
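+
+For example, you could set all three at once via `GYP_DEFINES` (a sketch;
+adjust to your setup):
+
+    export GYP_DEFINES="linux_use_bundled_binutils=0 linux_use_debug_fission=0 clang=0"
+    ./build/gyp_chromium
+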
## Build only specific targets
-If you specify just the target(s) you want built, the build will only walk that portion of the dependency graph:
-```
-$ cd $CHROMIUM_ROOT/src
-$ ninja -C out/Debug base_unittests
-```
+If you specify just the target(s) you want built, the build will only walk that
+portion of the dependency graph:
+
+ cd $CHROMIUM_ROOT/src
+ ninja -C out/Debug base_unittests
## Linking
+
### Dynamically link
-We normally statically link everything into one final executable, which produces enormous (nearly 1gb in debug mode) files. If you dynamically link, you save a lot of time linking for a bit of time during startup, which is fine especially when you're in an edit/compile/test cycle.
+We normally statically link everything into one final executable, which produces
+enormous (nearly 1 GB in debug mode) files. If you dynamically link, you save a
+lot of time linking for a bit of time during startup, which is fine especially
+when you're in an edit/compile/test cycle.
-Run gyp with the `-Dcomponent=shared_library` flag to put it in this configuration. (Or set those flags via the `GYP_DEFINES` environment variable.)
+Run gyp with the `-Dcomponent=shared_library` flag to put it in this
+configuration. (Or set those flags via the `GYP_DEFINES` environment variable.)
e.g.
-```
-$ build/gyp_chromium -D component=shared_library
-$ ninja -C out/Debug chrome
-```
+ build/gyp_chromium -D component=shared_library
+ ninja -C out/Debug chrome
-See the [component build page](http://www.chromium.org/developers/how-tos/component-build) for more information.
+See the
+[component build page](http://www.chromium.org/developers/how-tos/component-build)
+for more information.
### Linking using gold
The experimental "gold" linker is much faster than the standard BFD linker.
-On some systems (including Debian experimental, Ubuntu Karmic and beyond), there exists a `binutils-gold` package. Do not install this version! Having gold as the default linker is known to break kernel / kernel module builds.
+On some systems (including Debian experimental, Ubuntu Karmic and beyond), there
+exists a `binutils-gold` package. Do not install this version! Having gold as
+the default linker is known to break kernel / kernel module builds.
-The Chrome tree now includes a binary of gold compiled for x64 Linux. It is used by default on those systems.
+The Chrome tree now includes a binary of gold compiled for x64 Linux. It is used
+by default on those systems.
-On other systems, to safely install gold, make sure the final binary is named `ld` and then set `CC/CXX` appropriately, e.g. `export CC="gcc -B/usr/local/gold/bin"` and similarly for `CXX`. Alternatively, you can add `/usr/local/gold/bin` to your `PATH` in front of `/usr/bin`.
+On other systems, to safely install gold, make sure the final binary is named
+`ld` and then set `CC/CXX` appropriately, e.g.
+`export CC="gcc -B/usr/local/gold/bin"` and similarly for `CXX`. Alternatively,
+you can add `/usr/local/gold/bin` to your `PATH` in front of `/usr/bin`.
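+
+For example (assuming gold is installed with its binary at
+`/usr/local/gold/bin/ld`):
+
+    export CC="gcc -B/usr/local/gold/bin"
+    export CXX="g++ -B/usr/local/gold/bin"
+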
## WebKit
+
### Build WebKit without debug symbols
-WebKit is about half our weight in terms of debug symbols. (Lots of templates!) If you're working on UI bits where you don't care to trace into WebKit you can cut down the size and slowness of debug builds significantly by building WebKit without debug symbols.
+WebKit is about half our weight in terms of debug symbols. (Lots of templates!)
+If you're working on UI bits where you don't care to trace into WebKit you can
+cut down the size and slowness of debug builds significantly by building WebKit
+without debug symbols.
+
+Set the gyp variable `remove_webcore_debug_symbols=1`, either via the
+`GYP_DEFINES` environment variable, the `-D` flag to gyp, or by adding the
+following to `~/.gyp/include.gypi`:
-Set the gyp variable `remove_webcore_debug_symbols=1`, either via the `GYP_DEFINES` environment variable, the `-D` flag to gyp, or by adding the following to `~/.gyp/include.gypi`:
```
{
'variables': {
@@ -83,27 +110,48 @@ Set the gyp variable `remove_webcore_debug_symbols=1`, either via the `GYP_DEFIN
(Ignore this if you use goma.)
-Increase your ccache hit rate by setting `CCACHE_BASEDIR` to a parent directory that the working directories all have in common (e.g., `/home/yourusername/development`). Consider using `CCACHE_SLOPPINESS=include_file_mtime` (since if you are using multiple working directories, header times in svn sync'ed portions of your trees will be different - see [the ccache troubleshooting section](http://ccache.samba.org/manual.html#_troubleshooting) for additional information). If you use symbolic links from your home directory to get to the local physical disk directory where you keep those working development directories, consider putting
-```
-alias cd="cd -P"
-```
-in your .bashrc so that `$PWD` or `cwd` always refers to a physical, not logical directory (and make sure `CCACHE_BASEDIR` also refers to a physical parent).
+Increase your ccache hit rate by setting `CCACHE_BASEDIR` to a parent directory
+that the working directories all have in common (e.g.,
+`/home/yourusername/development`). Consider using
+`CCACHE_SLOPPINESS=include_file_mtime` (since if you are using multiple working
+directories, header times in svn sync'ed portions of your trees will be
+different - see
+[the ccache troubleshooting section](http://ccache.samba.org/manual.html#_troubleshooting)
+for additional information). If you use symbolic links from your home directory
+to get to the local physical disk directory where you keep those working
+development directories, consider putting
-If you tune ccache correctly, a second working directory that uses a branch tracking trunk and is up-to-date with trunk and was gclient sync'ed at about the same time should build chrome in about 1/3 the time, and the cache misses as reported by `ccache -s` should barely increase.
+ alias cd="cd -P"
-This is especially useful if you use `git-new-workdir` and keep multiple local working directories going at once.
+in your `.bashrc` so that `$PWD` or `cwd` always refers to a physical, not
+logical directory (and make sure `CCACHE_BASEDIR` also refers to a physical
+parent).
+
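+A minimal `.bashrc` sketch combining these settings (the paths are
+illustrative):
+
+    export CCACHE_BASEDIR="$HOME/development"    # common physical parent of your checkouts
+    export CCACHE_SLOPPINESS=include_file_mtime
+    alias cd="cd -P"                             # keep $PWD physical
+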
+If you tune ccache correctly, a second working directory that uses a branch
+tracking trunk and is up-to-date with trunk and was gclient sync'ed at about the
+same time should build chrome in about 1/3 the time, and the cache misses as
+reported by `ccache -s` should barely increase.
+
+This is especially useful if you use `git-new-workdir` and keep multiple local
+working directories going at once.
## Using tmpfs
-You can use tmpfs for the build output to reduce the amount of disk writes required. I.e. mount tmpfs to the output directory where the build output goes:
+You can use tmpfs for the build output to reduce the amount of disk writes
+required, i.e. mount tmpfs to the output directory where the build output goes:
As root:
- * `mount -t tmpfs -o size=20G,nr_inodes=40k,mode=1777 tmpfs /path/to/out`
-**Caveat:** You need to have enough RAM + swap to back the tmpfs. For a full debug build, you will need about 20 GB. Less for just building the chrome target or for a release build.
+ mount -t tmpfs -o size=20G,nr_inodes=40k,mode=1777 tmpfs /path/to/out
+
+**Caveat:** You need to have enough RAM + swap to back the tmpfs. For a full
+debug build, you will need about 20 GB. Less for just building the chrome target
+or for a release build.
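+
+If you want the mount to persist across reboots, an equivalent `/etc/fstab`
+entry (same RAM + swap caveat applies) would look roughly like:
+
+    tmpfs /path/to/out tmpfs size=20G,nr_inodes=40k,mode=1777 0 0
+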
-Quick and dirty benchmark numbers on a HP Z600 (Intel core i7, 16 cores hyperthreaded, 12 GB RAM)
+Quick and dirty benchmark numbers on an HP Z600 (Intel Core i7, 16 cores
+hyperthreaded, 12 GB RAM):
-| With tmpfs: | 12m:20s |
-|:------------|:--------|
-| Without tmpsfs: | 15m:40s | \ No newline at end of file
+* With tmpfs:
+ * 12m:20s
+* Without tmpfs:
+ * 15m:40s
diff --git a/docs/linux_graphics_pipeline.md b/docs/linux_graphics_pipeline.md
index 268718f..5e35a40 100644
--- a/docs/linux_graphics_pipeline.md
+++ b/docs/linux_graphics_pipeline.md
@@ -1,5 +1,10 @@
-Note, this deals with **test\_shell** only. See [BitmapPipeline](BitmapPipeline.md) for the picture in the browser.
+# Linux Graphics Pipeline
+
+Note, this deals with `test_shell` only. See
+[BitmapPipeline](BitmapPipeline.md) for the picture in the browser.
![http://chromium.googlecode.com/svn/trunk/images/linux-gfx.png](http://chromium.googlecode.com/svn/trunk/images/linux-gfx.png)
-(SVG source ![http://chromium.googlecode.com/svn/trunk/images/linux-gfx.svg](http://chromium.googlecode.com/svn/trunk/images/linux-gfx.svg)) \ No newline at end of file
+TODO: Does this render correctly?
+
+(SVG source: [linux-gfx.svg](http://chromium.googlecode.com/svn/trunk/images/linux-gfx.svg))
diff --git a/docs/linux_gtk_theme_integration.md b/docs/linux_gtk_theme_integration.md
index 90eda85..e1fdc55 100644
--- a/docs/linux_gtk_theme_integration.md
+++ b/docs/linux_gtk_theme_integration.md
@@ -1,20 +1,37 @@
-# Introduction
+# Linux GTK Theme Integration
-The GTK+ port of Chromium has a mode where we try to match the user's GTK theme (which can be enabled under Wrench -> Options -> Personal Stuff -> Set to GTK+ theme). The heuristics often don't pick good colors due to a lack of information in the GTK themes.
+The GTK+ port of Chromium has a mode where we try to match the user's GTK theme
+(which can be enabled under Wrench -> Options -> Personal Stuff -> Set to GTK+
+theme). The heuristics often don't pick good colors due to a lack of information
+in the GTK themes.
-Starting in Chrome 9, we're providing a new way for theme authors to control our GTK+ theming mode. I am not sure of the earliest build these showed up in, but I know 9.0.597 works.
+Starting in Chrome 9, we're providing a new way for theme authors to control our
+GTK+ theming mode. I am not sure of the earliest build these showed up in, but I
+know 9.0.597 works.
-# Describing the previous heuristics
+## Describing the previous heuristics
-The frame heuristics were simple. Query the `bg[SELECTED]` and `bg[INSENSITIVE]` colors on the `MetaFrames` class and darken them slightly. This usually worked OK until the rise of themes that try to make a unified titlebar/menubar look. At roughly that time, it seems that people stopped specifying color information for the `MetaFrames` class and this has lead to the very orange chrome frame on Maverick.
+The frame heuristics were simple. Query the `bg[SELECTED]` and `bg[INSENSITIVE]`
+colors on the `MetaFrames` class and darken them slightly. This usually worked
+OK until the rise of themes that try to make a unified titlebar/menubar look. At
+roughly that time, it seems that people stopped specifying color information for
+the `MetaFrames` class and this has led to the very orange chrome frame on
+Maverick.
-`MetaFrames` is (was?) a class that was used to communicate frame color data to the window manager around the Hardy days. (It's still defined in most of [XFCE's themes](http://packages.ubuntu.com/maverick/gtk2-engines-xfce)). In chrome's implementation, `MetaFrames` derives from `GtkWindow`.
+`MetaFrames` is (was?) a class that was used to communicate frame color data to
+the window manager around the Hardy days. (It's still defined in most of
+[XFCE's themes](http://packages.ubuntu.com/maverick/gtk2-engines-xfce)). In
+chrome's implementation, `MetaFrames` derives from `GtkWindow`.
-If you are happy with the defaults that chrome has picked, no action is necessary on the part of the theme author.
+If you are happy with the defaults that chrome has picked, no action is
+necessary on the part of the theme author.
-# Introducing `ChromeGtkFrame`
+## Introducing `ChromeGtkFrame`
-For cases where you want control of the colors chrome uses, Chrome gives you a number of style properties for injecting colors and other information about how to draw the frame. For example, here's the proposed modifications to Ubuntu's Ambiance:
+For cases where you want control of the colors chrome uses, Chrome gives you a
+number of style properties for injecting colors and other information about how
+to draw the frame. For example, here's the proposed modifications to Ubuntu's
+Ambiance:
```
style "chrome-gtk-frame"
@@ -33,7 +50,7 @@ style "chrome-gtk-frame"
class "ChromeGtkFrame" style "chrome-gtk-frame"
```
-## Frame color properties
+### Frame color properties
These are the frame's main solid color.
@@ -44,9 +61,13 @@ These are the frame's main solid color.
| `incognito-frame-color` | `GdkColor` | The main color of active incognito windows. | Tints `frame-color` by the default incognito tint |
| `incognito-inactive-frame-color` | `GdkColor` | The main color of inactive incognito windows. | Tints `inactive-frame-color` by the default incognito tint |
-## Frame gradient properties
+### Frame gradient properties
-Chrome's frame (along with many normal window manager themes) have a slight gradient at the top, before filling the rest of the frame background image with a solid color. For example, the top `frame-gradient-size` pixels would be a gradient starting from `frame-gradient-color` at the top to `frame-color` at the bottom, with the rest of the frame being filled with `frame-color`.
+Chrome's frame (along with many normal window manager themes) has a slight
+gradient at the top, before filling the rest of the frame background image with
+a solid color. For example, the top `frame-gradient-size` pixels would be a
+gradient starting from `frame-gradient-color` at the top to `frame-color` at the
+bottom, with the rest of the frame being filled with `frame-color`.
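+
+As an illustration (the colors here are invented, not from any shipping
+theme), a gtkrc fragment setting the gradient properties might look like:
+
+```
+style "chrome-gtk-frame"
+{
+  ChromeGtkFrame::frame-gradient-size = 16
+  ChromeGtkFrame::frame-gradient-color = "#5c3566"
+}
+class "ChromeGtkFrame" style "chrome-gtk-frame"
+```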
| **Property** | **Type** | **Description** | **If unspecified** |
|:-------------|:---------|:----------------|:-------------------|
@@ -56,9 +77,15 @@ Chrome's frame (along with many normal window manager themes) have a slight grad
| `incognito-frame-gradient-color` | `GdkColor` | Top color of the incognito gradient | Lightens `incognito-frame-color` |
| `incognito-inactive-frame-gradient-color` | `GdkColor` | Top color of the incognito inactive gradient. | Lightens `incognito-inactive-frame-color` |
-## Scrollbar control
+### Scrollbar control
-Because widget rendering is done in a separate, sandboxed process that doesn't have access to the X server or the filesystem, there's no current way to do GTK+ widget rendering. We instead pass WebKit a few colors and let it draw a default scrollbar. We have a very [complex fallback](http://git.chromium.org/gitweb/?p=chromium.git;a=blob;f=chrome/browser/gtk/gtk_theme_provider.cc;h=a57ab6b182b915192c84177f1a574914c44e2e71;hb=3f873177e192f5c6b66ae591b8b7205d8a707918#l424) where we render the widget and then average colors if this information isn't provided.
+Because widget rendering is done in a separate, sandboxed process that doesn't
+have access to the X server or the filesystem, there's no current way to do
+GTK+ widget rendering. We instead pass WebKit a few colors and let it draw a
+default scrollbar. We have a very
+[complex fallback](http://git.chromium.org/gitweb/?p=chromium.git;a=blob;f=chrome/browser/gtk/gtk_theme_provider.cc;h=a57ab6b182b915192c84177f1a574914c44e2e71;hb=3f873177e192f5c6b66ae591b8b7205d8a707918#l424)
+where we render the widget and then average colors if this information isn't
+provided.
| **Property** | **Type** | **Description** |
|:-------------|:---------|:----------------|
@@ -66,27 +93,43 @@ Because widget rendering is done in a separate, sandboxed process that doesn't h
| `scrollbar-slider-normal-color` | `GdkColor` | Color of the slider otherwise |
| `scrollbar-trough-color` | `GdkColor` | Color of the scrollbar trough |
-# Anticipated Q&A
+## Anticipated Q&A
-## Will you patch themes upstream?
+### Will you patch themes upstream?
-I am at the very least hoping we can get Radiance and Ambiance patches since we make very poor frame decisions on those themes, and hopefully a few others.
+I am at the very least hoping we can get Radiance and Ambiance patches since we
+make very poor frame decisions on those themes, and hopefully a few others.
-## How about control over the min/max/close buttons?
+### How about control over the min/max/close buttons?
-I actually tried this locally. There's a sort of uncanny valley effect going on; as the frame looks more native, it's more obvious that it isn't behaving like a native frame. (Also my implementation added a startup time hit.)
+I actually tried this locally. There's a sort of uncanny valley effect going on;
+as the frame looks more native, it's more obvious that it isn't behaving like a
+native frame. (Also my implementation added a startup time hit.)
-## Why use style properties instead of (i.e.) `bg[STATE]`?
+### Why use style properties instead of (e.g.) `bg[STATE]`?
-There's no way to distinguish between colors set on different classes. Using style properties allows us to be backwards compatible and maintain the heuristics since not everyone is going to modify their themes for chromium (and the heuristics do a reasonable job).
+There's no way to distinguish between colors set on different classes. Using
+style properties allows us to be backwards compatible and maintain the
+heuristics since not everyone is going to modify their themes for chromium (and
+the heuristics do a reasonable job).
-## Why now?
+### Why now?
- * I (erg@) was putting off major changes to the window frame stuff in anticipation of finally being able to use GTK+'s theme rendering for the window border with client side decorations, but client side decorations either isn't happening or isn't happening anytime soon, so there's no justification for pushing this task off into the future.
- * Chrome looks pretty bad under Ambiance on Maverick.
+* I (erg@) was putting off major changes to the window frame stuff in
+ anticipation of finally being able to use GTK+'s theme rendering for the
+ window border with client side decorations, but client side decorations
+ either isn't happening or isn't happening anytime soon, so there's no
+ justification for pushing this task off into the future.
+* Chrome looks pretty bad under Ambiance on Maverick.
-## Any details about `MetaFrames` and `ChromeGtkFrame` relationship and history?
+### Details about `MetaFrames` and `ChromeGtkFrame` relationship and history?
-`MetaFrames` is a class that was used in metacity to communicate color information to the window manager. During the Hardy Heron days, we slurped up the data and used it as a key part of our heuristics. At least on my Lucid Lynx machine, none of the GNOME GTK+ themes have `MetaFrames` styling. (As mentioned above, several of the XFCE themes do, though.)
+`MetaFrames` is a class that was used in metacity to communicate color
+information to the window manager. During the Hardy Heron days, we slurped up
+the data and used it as a key part of our heuristics. At least on my Lucid Lynx
+machine, none of the GNOME GTK+ themes have `MetaFrames` styling. (As mentioned
+above, several of the XFCE themes do, though.)
-Internally to chrome, our `ChromeGtkFrame` class inherits from `MetaFrames` (again, which inherits from `GtkWindow`) so any old themes that style the `MetaFrames` class are backwards compatible. \ No newline at end of file
+Internally to chrome, our `ChromeGtkFrame` class inherits from `MetaFrames`
+(again, which inherits from `GtkWindow`) so any old themes that style the
+`MetaFrames` class are backwards compatible.
diff --git a/docs/linux_hw_video_decode.md b/docs/linux_hw_video_decode.md
index a3b0805..7450ca5 100644
--- a/docs/linux_hw_video_decode.md
+++ b/docs/linux_hw_video_decode.md
@@ -1,57 +1,67 @@
-# Enabling hardware
-
-&lt;video&gt;
-
- decode codepaths on linux
-
-Hardware acceleration of video decode on linux is [unsupported](http://crbug.com/137247) in Chrome for user-facing builds. During development (targeting other platforms) it can be useful to be able to trigger the code-paths used on HW-accelerated platforms (such as CrOS and win7) in a linux-based development environment. Here's one way to do so, with details based on a gprecise setup.
-
- * Install pre-requisites: On Ubuntu Precise, at least, this includes:
-```
-sudo apt-get install libtool libvdpau1 libvdpau-dev
-```
-
- * Install and configure [libva](http://cgit.freedesktop.org/libva/)
-```
-DEST=${HOME}/apps/libva
-cd /tmp
-git clone git://anongit.freedesktop.org/libva
-cd libva
-git reset --hard libva-1.2.1
-./autogen.sh && ./configure --prefix=${DEST}
-make -j32 && make install
-```
- * Install and configure the [VDPAU](http://cgit.freedesktop.org/vaapi/vdpau-driver) VAAPI driver
-```
-DEST=${HOME}/apps/libva
-cd /tmp
-git clone git://anongit.freedesktop.org/vaapi/vdpau-driver
-cd vdpau-driver
-export PKG_CONFIG_PATH=${DEST}/lib/pkgconfig/:$PKG_CONFIG_PATH
-export LIBVA_DRIVERS_PATH=${DEST}/lib/dri
-export LIBVA_X11_DEPS_CFLAGS=-I${DEST}/include
-export LIBVA_X11_DEPS_LIBS=-L${DEST}/lib
-export LIBVA_DEPS_CFLAGS=-I${DEST}/include
-export LIBVA_DEPS_LIBS=-L${DEST}/lib
-make distclean
-unset CC CXX
-./autogen.sh && ./configure --prefix=${DEST} --enable-debug
-find . -name Makefile |xargs sed -i 'sI/usr/lib/xorg/modules/driversI${DEST}/lib/driIg'
-sed -i -e 's/_(\(VAEncH264VUIBufferType\|VAEncH264SEIBufferType\));//' src/vdpau_dump.c
-make -j32 && rm -f ${DEST}/lib/dri/{nvidia_drv_video.so,s3g_drv_video.so} && make install
-```
- * Add to `$GYP_DEFINES`:
- * `chromeos=1` to link in `VaapiVideoDecodeAccelerator`
- * `proprietary_codecs=1 ffmpeg_branding=Chrome` to allow Chrome to play h.264 content, which is the only codec VAVDA knows about today.
- * Re-run gyp (`./build/gyp_chromium` or `gclient runhooks`)
- * Rebuild chrome
- * Run chrome with `LD_LIBRARY_PATH=${HOME}/apps/libva/lib` in the environment, and with the --no-sandbox command line flag.
- * If things don't work, a Debug build (to include D\*LOG's) with `--vmodule=*content/common/gpu/media/*=10,gpu_video*=1` might be enlightening.
-
-# NOTE THIS IS AN UNSUPPORTED CONFIGURATION AND LIKELY TO BE BROKEN AT ANY POINT IN TIME
-
-This page is purely here to help developers targeting supported HW
-
-&lt;video&gt;
-
- decode platforms be more effective. Do not expect help if this setup fails to work. \ No newline at end of file
+# Enabling hardware `<video>` decode codepaths on linux
+
+Hardware acceleration of video decode on Linux is
+[unsupported](https://crbug.com/137247) in Chrome for user-facing builds. During
+development (targeting other platforms) it can be useful to be able to trigger
+the code-paths used on HW-accelerated platforms (such as CrOS and win7) in a
+Linux-based development environment. Here's one way to do so, with details based
+on a gprecise setup.
+
+* Install prerequisites: On Ubuntu Precise, at least, this includes:
+
+ ```shell
+ sudo apt-get install libtool libvdpau1 libvdpau-dev
+ ```
+
+* Install and configure [libva](http://cgit.freedesktop.org/libva/)
+
+ ```shell
+ DEST=${HOME}/apps/libva
+ cd /tmp
+ git clone git://anongit.freedesktop.org/libva
+ cd libva
+ git reset --hard libva-1.2.1
+ ./autogen.sh && ./configure --prefix=${DEST}
+ make -j32 && make install
+ ```
+
+* Install and configure the
+ [VDPAU](http://cgit.freedesktop.org/vaapi/vdpau-driver) VAAPI driver
+
+ ```shell
+ DEST=${HOME}/apps/libva
+ cd /tmp
+ git clone git://anongit.freedesktop.org/vaapi/vdpau-driver
+ cd vdpau-driver
+ export PKG_CONFIG_PATH=${DEST}/lib/pkgconfig/:$PKG_CONFIG_PATH
+ export LIBVA_DRIVERS_PATH=${DEST}/lib/dri
+ export LIBVA_X11_DEPS_CFLAGS=-I${DEST}/include
+ export LIBVA_X11_DEPS_LIBS=-L${DEST}/lib
+ export LIBVA_DEPS_CFLAGS=-I${DEST}/include
+ export LIBVA_DEPS_LIBS=-L${DEST}/lib
+ make distclean
+ unset CC CXX
+ ./autogen.sh && ./configure --prefix=${DEST} --enable-debug
+    find . -name Makefile | xargs sed -i "sI/usr/lib/xorg/modules/driversI${DEST}/lib/driIg"
+ sed -i -e 's/_(\(VAEncH264VUIBufferType\|VAEncH264SEIBufferType\));//' src/vdpau_dump.c
+ make -j32 && rm -f ${DEST}/lib/dri/{nvidia_drv_video.so,s3g_drv_video.so} && make install
+ ```
+
+* Add to `$GYP_DEFINES`:
+ * `chromeos=1` to link in `VaapiVideoDecodeAccelerator`
+ * `proprietary_codecs=1 ffmpeg_branding=Chrome` to allow Chrome to play
+      H.264 content, which is the only codec VAVDA knows about today.
+* Re-run gyp (`./build/gyp_chromium` or `gclient runhooks`)
+* Rebuild chrome
+* Run chrome with `LD_LIBRARY_PATH=${HOME}/apps/libva/lib` in the environment,
+  and with the `--no-sandbox` command line flag (see the consolidated sketch
+  after this list).
+* If things don't work, a Debug build (to include D\*LOG's) with
+ `--vmodule=*content/common/gpu/media/*=10,gpu_video*=1` might be
+ enlightening.
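+
+Putting the last few bullets together, a minimal end-to-end sketch (assuming a
+ninja-based Release build; adjust paths as needed):
+
+```shell
+export GYP_DEFINES="$GYP_DEFINES chromeos=1 proprietary_codecs=1 ffmpeg_branding=Chrome"
+./build/gyp_chromium
+ninja -C out/Release chrome
+LD_LIBRARY_PATH=${HOME}/apps/libva/lib out/Release/chrome --no-sandbox
+```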
+
+**NOTE THIS IS AN UNSUPPORTED CONFIGURATION AND LIKELY TO BE BROKEN AT ANY
+POINT IN TIME**
+
+This page is purely here to help developers targeting supported HW `<video>`
+decode platforms be more effective. Do not expect help if this setup fails to
+work.
diff --git a/docs/linux_minidump_to_core.md b/docs/linux_minidump_to_core.md
index 67baf41..f4b54ec 100644
--- a/docs/linux_minidump_to_core.md
+++ b/docs/linux_minidump_to_core.md
@@ -1,103 +1,108 @@
-# Introduction
+# Linux Minidump to Core
-On Linux, Chromium can use Breakpad to generate minidump files for crashes. It is possible to convert the minidump files to core files, and examine the core file in gdb, cgdb, or Qtcreator. In the examples below cgdb is assumed but any gdb based debugger can be used.
+On Linux, Chromium can use Breakpad to generate minidump files for crashes. It
+is possible to convert the minidump files to core files, and examine the core
+file in gdb, cgdb, or Qtcreator. In the examples below, cgdb is assumed, but
+any gdb-based debugger can be used.
-# Details
+[TOC]
## Creating the core file
-Use `minidump-2-core` to convert the minidump file to a core file. On Linux, one can build the minidump-2-core target in a Chromium checkout, or alternatively, build it in a Google Breakpad checkout.
+Use `minidump-2-core` to convert the minidump file to a core file. On Linux,
+one can build the `minidump-2-core` target in a Chromium checkout, or
+alternatively, build it in a Google Breakpad checkout.
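+
+A sketch of building the target in a Chromium checkout, assuming a ninja-based
+build:
+
+    $ ninja -C out/Release minidump-2-core
+
+Then convert:
+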
-```
-
-$ minidump-2-core foo.dmp > foo.core
-
-```
+ $ minidump-2-core foo.dmp > foo.core
## Retrieving Chrome binaries
-If the minidump is from
-a public build then Googlers can find Google Chrome Linux binaries and debugging symbols via https://goto.google.com/chromesymbols. Otherwise, use the locally built chrome files.
-Google Chrome uses the _debug link_ method to specify the debugging file.
-Either way be sure to put chrome and chrome.debug
-(the stripped debug information) in the same directory as the core file so that the debuggers can find them.
+If the minidump is from a public build then Googlers can find Google Chrome
+Linux binaries and debugging symbols via https://goto.google.com/chromesymbols.
+Otherwise, use the locally built chrome files. Google Chrome uses the
+_debug link_ method to specify the debugging file. Either way, be sure to put
+`chrome` and `chrome.debug` (the stripped debug information) in the same
+directory as the core file so that the debuggers can find them.
## Loading the core file into gdb/cgdb
-The recommended syntax for loading a core file into gdb/cgdb is as follows, specifying both the executable and the core file:
-
-```
-
-$ cgdb chrome foo.core
+The recommended syntax for loading a core file into gdb/cgdb is as follows,
+specifying both the executable and the core file:
-```
+ $ cgdb chrome foo.core
-If the executable is not available then the core file can be loaded on its own but debugging options will be limited:
+If the executable is not available, the core file can be loaded on its own
+but debugging options will be limited:
-```
-
-$ cgdb -c foo.core
-
-```
+ $ cgdb -c foo.core
## Loading the core file into Qtcreator
-Qtcreator is a full GUI wrapper for gdb and it can also load Chrome's core files. From Qtcreator select the Debug menu, Start Debugging, Load Core File... and then enter the paths to the core file and executable. Qtcreator has windows to display the call stack, locals, registers, etc. For more information on debugging with Qtcreator see [Getting Started Debugging on Linux.](https://www.youtube.com/watch?v=xTmAknUbpB0)
+Qtcreator is a full GUI wrapper for gdb and it can also load Chrome's core
+files. From Qtcreator select the Debug menu, Start Debugging, Load Core File...
+and then enter the paths to the core file and executable. Qtcreator has windows
+to display the call stack, locals, registers, etc. For more information on
+debugging with Qtcreator see
+[Getting Started Debugging on Linux.](https://www.youtube.com/watch?v=xTmAknUbpB0)
## Source debugging
-If you have a Chromium repo that is synchronized to exactly (or even approximately) when the Chrome build was created then you can tell gdb/cgdb/Qtcreator to load source code. Since all source paths in Chrome are relative to the out/Release directory you just need to add that directory to your debugger search path, by adding a line similar to this to ~/.gdbinit:
-
-```
-
-(gdb) directory /usr/local/chromium/src/out/Release/
+If you have a Chromium repo that is synchronized to exactly (or even
+approximately) when the Chrome build was created, then you can tell
+`gdb/cgdb/Qtcreator` to load source code. Since all source paths in Chrome are
+relative to the out/Release directory you just need to add that directory to
+your debugger search path, by adding a line similar to this to `~/.gdbinit`:
-```
+ (gdb) directory /usr/local/chromium/src/out/Release/
## Notes
- * Since the core file is created from a minidump, it is incomplete and the debugger may not know values for variables in memory. Minidump files contain thread stacks so local variables and function parameters should be available, subject to the limitations of optimized builds.
- * For gdb's `add-symbol-file` command to work, the file must have debugging symbols.
- * In case of separate debug files, [the gdb manual](https://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html) explains how gdb looks for them.
- * If the stack trace involve system libraries, the Advanced module loading steps shown below need to be repeated for each library.
+* Since the core file is created from a minidump, it is incomplete and the
+ debugger may not know values for variables in memory. Minidump files contain
+ thread stacks so local variables and function parameters should be
+ available, subject to the limitations of optimized builds.
+* For gdb's `add-symbol-file` command to work, the file must have debugging
+ symbols.
+ * In case of separate debug files,
+ [the gdb manual](https://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html)
+ explains how gdb looks for them.
+* If the stack trace involves system libraries, the Advanced module loading
+ steps shown below need to be repeated for each library.
## Advanced module loading
-If gdb doesn't find shared objects that are needed you can force it to load them. In gdb, the `add-symbol-file` command takes a filename and an address. To figure out the address, look near the end of `foo.dmp`, which contains a copy of `/proc/pid/maps` from the process that crashed.
+If gdb doesn't find shared objects that are needed you can force it to load
+them. In gdb, the `add-symbol-file` command takes a filename and an address. To
+figure out the address, look near the end of `foo.dmp`, which contains a copy of
+`/proc/pid/maps` from the process that crashed.
-One quick way to do this is with `grep`. For instance, if the executable is `/path/to/chrome`, one can simply run:
+One quick way to do this is with `grep`. For instance, if the executable is
+`/path/to/chrome`, one can simply run:
-```
+ $ grep -a /path/to/chrome$ foo.dmp
-$ grep -a /path/to/chrome$ foo.dmp
+ 7fe749a90000-7fe74d28f000 r-xp 00000000 08:07 289158 /path/to/chrome
+ 7fe74d290000-7fe74d4b7000 r--p 037ff000 08:07 289158 /path/to/chrome
+ 7fe74d4b7000-7fe74d4e0000 rw-p 03a26000 08:07 289158 /path/to/chrome
-7fe749a90000-7fe74d28f000 r-xp 00000000 08:07 289158 /path/to/chrome
-7fe74d290000-7fe74d4b7000 r--p 037ff000 08:07 289158 /path/to/chrome
-7fe74d4b7000-7fe74d4e0000 rw-p 03a26000 08:07 289158 /path/to/chrome
+In this case, `7fe749a90000` is the base address for `/path/to/chrome`, but gdb
+takes the start address of the file's text section. To calculate this, one will
+need a copy of `/path/to/chrome`, and run:
+ $ objdump -x /path/to/chrome | grep '\.text' | head -n 1 | tr -s ' ' | \
+ cut -d' ' -f 7
-```
-
-In this case, `7fe749a90000` is the base address for `/path/to/chrome`, but gdb takes the start address of the file's text section. To calculate this, one will need a copy of `/path/to/chrome`, and run:
-
-```
-
-$ objdump -x /path/to/chrome | grep '\.text' | head -n 1 | tr -s ' ' | cut -d' ' -f 7
-
-005282c0
-
-```
+ 005282c0
Now add the two addresses: `7fe749a90000 + 005282c0 = 7fe749fb82c0` and in gdb, run:
-```
-
-(gdb) add-symbol-file /path/to/chrome 0x7fe749fb82c0
-
-```
+ (gdb) add-symbol-file /path/to/chrome 0x7fe749fb82c0
Then use gdb as normal.
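+
+If you'd rather not do the hex addition by hand, shell arithmetic can compute
+the load address (a convenience sketch):
+
+    $ printf '%x\n' $((0x7fe749a90000 + 0x005282c0))
+    7fe749fb82c0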
## Other resources
-For more discussion on this process see [Debugging a Minidump](http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/crash-reporting/debugging-a-minidump). This page discusses the same process in the context of ChromeOS and many of the concepts and techniques overlap. \ No newline at end of file
+For more discussion on this process see
+[Debugging a Minidump](http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/crash-reporting/debugging-a-minidump).
+This page discusses the same process in the context of ChromeOS and many of the
+concepts and techniques overlap.
diff --git a/docs/linux_open_suse_build_instructions.md b/docs/linux_open_suse_build_instructions.md
index c94d388..5b471d1 100644
--- a/docs/linux_open_suse_build_instructions.md
+++ b/docs/linux_open_suse_build_instructions.md
@@ -1,45 +1,46 @@
-This page includes some instruction to build Chromium on openSUSE 11.1 and 11.0.
-Before reading this page you need to learn the [Linux Build Instructions](LinuxBuildInstructions.md).
+# Linux openSUSE Build Instructions
-If you are on 64-bit openSUSE, you will also want to read [Linux Build 64-bit on openSUSE](http://code.google.com/p/chromium/wiki/LinuxBuild64Bit#Manual_Setup_on_openSUSE).
+This page includes instructions for building Chromium on openSUSE 11.1 and
+11.0. Before reading this page, you should be familiar with the
+[Linux Build Instructions](linux_build_instructions.md).
## How to Install Dependencies:
Use the zypper command to install dependencies:
(openSUSE 11.1 and higher)
-```
-sudo zypper in subversion pkg-config python perl \
- bison flex gperf mozilla-nss-devel glib2-devel gtk-devel \
- wdiff lighttpd gcc gcc-c++ gconf2-devel mozilla-nspr \
- mozilla-nspr-devel php5-fastcgi alsa-devel libexpat-devel \
- libjpeg-devel libbz2-devel
-```
+ sudo zypper in subversion pkg-config python perl \
+ bison flex gperf mozilla-nss-devel glib2-devel gtk-devel \
+ wdiff lighttpd gcc gcc-c++ gconf2-devel mozilla-nspr \
+ mozilla-nspr-devel php5-fastcgi alsa-devel libexpat-devel \
+ libjpeg-devel libbz2-devel
-For 11.0, use libnspr4-0d and libnspr4-dev instead of mozilla-nspr and mozilla-nspr-devel, and use php5-cgi instead of php5-fastcgi. And need gtk2-devel.
+For 11.0, use `libnspr4-0d` and `libnspr4-dev` instead of `mozilla-nspr` and
+`mozilla-nspr-devel`, and use `php5-cgi` instead of `php5-fastcgi`. You will
+also need `gtk2-devel`.
(openSUSE 11.0)
-```
-sudo zypper in subversion pkg-config python perl \
- bison flex gperf mozilla-nss-devel glib2-devel gtk-devel \
- libnspr4-0d libnspr4-dev wdiff lighttpd gcc gcc-c++ libexpat-devel php5-cgi gconf2-devel \
- alsa-devel gtk2-devel jpeg-devel
-```
+ sudo zypper in subversion pkg-config python perl \
+ bison flex gperf mozilla-nss-devel glib2-devel gtk-devel \
+ libnspr4-0d libnspr4-dev wdiff lighttpd gcc gcc-c++ libexpat-devel \
+ php5-cgi gconf2-devel alsa-devel gtk2-devel jpeg-devel
-The Ubuntu package sun-java6-fonts contains a subset of Java of the fonts used. Since this package requires Java as a prerequisite anyway, we can do the same thing by just installing the equivalent OpenSUSE Sun Java package:
-```
-sudo zypper in java-1_6_0-sun
-```
+The Ubuntu package sun-java6-fonts contains a subset of the fonts used by Java.
+Since this package requires Java as a prerequisite anyway, we can do the same
+thing by just installing the equivalent openSUSE Sun Java package:
+
+ sudo zypper in java-1_6_0-sun
WebKit is currently hard-coded to use the Microsoft fonts. To install these using zypper:
-```
-sudo zypper in fetchmsttfonts pullin-msttf-fonts
-```
-To make the fonts installed above work, as the paths are hardcoded for Ubuntu, create symlinks to the appropriate locations:
-```
+ sudo zypper in fetchmsttfonts pullin-msttf-fonts
+
+The font paths are hardcoded for Ubuntu, so to make the fonts installed above
+work, create symlinks to the appropriate locations:
+
+```shell
sudo mkdir -p /usr/share/fonts/truetype/msttcorefonts
sudo ln -s /usr/share/fonts/truetype/arial.ttf /usr/share/fonts/truetype/msttcorefonts/Arial.ttf
sudo ln -s /usr/share/fonts/truetype/arialbd.ttf /usr/share/fonts/truetype/msttcorefonts/Arial_Bold.ttf
@@ -61,17 +62,17 @@ sudo ln -s /usr/share/fonts/truetype/verdanab.ttf /usr/share/fonts/truetype/mstt
sudo ln -s /usr/share/fonts/truetype/verdanai.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Italic.ttf
sudo ln -s /usr/share/fonts/truetype/verdanaz.ttf /usr/share/fonts/truetype/msttcorefonts/Verdana_Bold_Italic.ttf
```
+
And then for the Java fonts:
-```
+
+```shell
sudo mkdir -p /usr/share/fonts/truetype/ttf-lucida
-sudo find /usr/lib*/jvm/java-1.6.*-sun-*/jre/lib -iname '*.ttf' -print -exec ln -s {} /usr/share/fonts/truetype/ttf-lucida \;
+sudo find /usr/lib*/jvm/java-1.6.*-sun-*/jre/lib -iname '*.ttf' -print \
+ -exec ln -s {} /usr/share/fonts/truetype/ttf-lucida \;
```
## Building the software
-Please refer to the [Linux Build Instructions](LinuxBuildInstructions.md).
-
-
----
+Please refer to the [Linux Build Instructions](linux_build_instructions.md).
-Please, give comments and update this page if you use different steps. \ No newline at end of file
+Please update this page if you use different steps.
diff --git a/docs/linux_password_storage.md b/docs/linux_password_storage.md
index 8e63d37..0418e64 100644
--- a/docs/linux_password_storage.md
+++ b/docs/linux_password_storage.md
@@ -1,23 +1,36 @@
-# Introduction
+# Linux Password Storage
On Linux, Chromium can store passwords in three ways:
- * GNOME Keyring
- * KWallet 4
- * plain text
-Chromium chooses which store to use automatically, based on your desktop environment.
-Passwords stored in GNOME Keyring or KWallet are encrypted on disk, and access to them is controlled by dedicated daemon software. Passwords stored in plain text are not encrypted. Because of this, when either GNOME Keyring or KWallet is in use, any unencrypted passwords that have been stored previously are automatically moved into the encrypted store.
+* GNOME Keyring
+* KWallet 4
+* plain text
-Support for using GNOME Keyring and KWallet was added in version 6, but using these (when available) was not made the default mode until version 12.
+Chromium chooses which store to use automatically, based on your desktop
+environment.
-# Details
+Passwords stored in GNOME Keyring or KWallet are encrypted on disk, and access
+to them is controlled by dedicated daemon software. Passwords stored in plain
+text are not encrypted. Because of this, when either GNOME Keyring or KWallet is
+in use, any unencrypted passwords that have been stored previously are
+automatically moved into the encrypted store.
-Although Chromium chooses which store to use automatically, the store to use can also be specified with a command line argument:
- * `--password-store=gnome` (to use GNOME Keyring)
- * `--password-store=kwallet` (to use KWallet)
- * `--password-store=basic` (to use the plain text store)
+Support for using GNOME Keyring and KWallet was added in version 6, but using
+these (when available) was not made the default mode until version 12.
-Note that Chromium will fall back to `basic` if a requested or autodetected store is not available.
+## Details
-In versions 6-11, the store to use was not detected automatically, but detection could be requested with an additional argument:
- * `--password-store=detect` \ No newline at end of file
+Although Chromium chooses which store to use automatically, the store to use can
+also be specified with a command line argument:
+
+* `--password-store=gnome` (to use GNOME Keyring)
+* `--password-store=kwallet` (to use KWallet)
+* `--password-store=basic` (to use the plain text store)
+
+Note that Chromium will fall back to `basic` if a requested or autodetected
+store is not available.
+
+In versions 6-11, the store to use was not detected automatically, but detection
+could be requested with an additional argument:
+
+* `--password-store=detect`
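+
+For example, to force the plain-text store for a debugging session (a sketch;
+substitute your binary name):
+
+    chromium-browser --password-store=basic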
diff --git a/docs/linux_pid_namespace_support.md b/docs/linux_pid_namespace_support.md
index defebf6..81ce80f 100644
--- a/docs/linux_pid_namespace_support.md
+++ b/docs/linux_pid_namespace_support.md
@@ -1,6 +1,12 @@
-The [LinuxSUIDSandbox](LinuxSUIDSandbox.md) currently relies on support for the CLONE\_NEWPID flag in Linux's [clone() system call](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html). You can check whether your system supports PID namespaces with the code below, which must be run as root:
+# Linux PID Namespace Support
-```
+The [LinuxSUIDSandbox](linux_suid_sandbox.md) currently relies on support for
+the `CLONE_NEWPID` flag in Linux's
+[clone() system call](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html).
+You can check whether your system supports PID namespaces with the code below,
+which must be run as root:
+
+```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sched.h>
@@ -39,4 +45,4 @@ int main() {
return 0;
}
-``` \ No newline at end of file
+```
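+
+To compile and run the check, a sketch (the file name is arbitrary):
+
+```shell
+gcc -o check-newpid check-newpid.c
+sudo ./check-newpid
+```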
diff --git a/docs/linux_plugins.md b/docs/linux_plugins.md
index 33bdb55..24eefd9 100644
--- a/docs/linux_plugins.md
+++ b/docs/linux_plugins.md
@@ -1,27 +1,51 @@
-### Background reading materials
-#### Plugins in general
- * [Gecko Plugin API reference](https://developer.mozilla.org/en/Gecko_Plugin_API_Reference) -- most important to read
- * [Mozilla plugins site](http://www.mozilla.org/projects/plugins/)
- * [XEmbed extension](https://developer.mozilla.org/en/XEmbed_Extension_for_Mozilla_Plugins) -- newer X11-specific plugin API
- * [NPAPI plugin guide](http://gplflash.sourceforge.net/gplflash2_blog/npapi.html) from GPLFlash project
-
-#### Chromium-specific
- * [Chromium's plugin architecture](http://dev.chromium.org/developers/design-documents/plugin-architecture) -- may be out of date but will be worth reading
-
-### Code to reference
- * [Mozilla plugin code](http://mxr.mozilla.org/firefox/source/modules/plugin/base/src/) -- useful reference
- * [nspluginwrapper](http://gwenole.beauchesne.info//en/projects/nspluginwrapper) -- does out-of-process plugins itself
-
-### Terminology
- * _Internal plugin_: "a plugin that's implemented in the chrome dll, i.e. there's no external dll that services that mime type. For Linux you'll just have to worry about the default plugin, which is what shows a puzzle icon for content that you don't have a plugin for. We use that to allow the user to download and install the missing plugin."
-
-### Flash
- * [Adobe Flash player dev center](http://www.adobe.com/devnet/flashplayer/)
- * [penguin.swf](http://blogs.adobe.com/penguin.swf/) -- blog about Flash on Linux
- * [tips and tricks](http://macromedia.mplug.org/) -- user-created page, with some documentation of special flags in `/etc/adobe/mms.cfg`
- * [official Adobe bug tracker](https://bugs.adobe.com/flashplayer/)
-
-### Useful Tools
- * `xwininfo -tree` -- lets you inspect the window hierarchy of a window and get the layout of child windows.
- * "[DiamondX](http://multimedia.cx/diamondx/) is a simple NPAPI plugin built to run on Unix platforms and exercise the XEmbed browser extension."
- * To build a 32-bit binary: `./configure CFLAGS='-m32' LDFLAGS='-L/usr/lib32 -m32'` \ No newline at end of file
+# Linux Plugins
+
+## Background reading materials
+
+### Plugins in general
+
+* [Gecko Plugin API reference](https://developer.mozilla.org/en/Gecko_Plugin_API_Reference)
+ -- most important to read
+* [Mozilla plugins site](http://www.mozilla.org/projects/plugins/)
+* [XEmbed extension](https://developer.mozilla.org/en/XEmbed_Extension_for_Mozilla_Plugins)
+ -- newer X11-specific plugin API
+* [NPAPI plugin guide](http://gplflash.sourceforge.net/gplflash2_blog/npapi.html)
+ from GPLFlash project
+
+### Chromium-specific
+
+* [Chromium's plugin architecture](http://dev.chromium.org/developers/design-documents/plugin-architecture)
+ -- may be out of date but will be worth reading
+
+## Code to reference
+
+* [Mozilla plugin code](http://mxr.mozilla.org/firefox/source/modules/plugin/base/src/)
+ -- useful reference
+* [nspluginwrapper](http://gwenole.beauchesne.info//en/projects/nspluginwrapper)
+ -- does out-of-process plugins itself
+
+## Terminology
+
+* _Internal plugin_: "a plugin that's implemented in the chrome dll, i.e.
+ there's no external dll that services that mime type. For Linux you'll just
+ have to worry about the default plugin, which is what shows a puzzle icon
+ for content that you don't have a plugin for. We use that to allow the user
+ to download and install the missing plugin."
+
+## Flash
+
+* [Adobe Flash player dev center](http://www.adobe.com/devnet/flashplayer/)
+* [penguin.swf](http://blogs.adobe.com/penguin.swf/) -- blog about Flash on
+ Linux
+* [tips and tricks](http://macromedia.mplug.org/) -- user-created page, with
+ some documentation of special flags in `/etc/adobe/mms.cfg`
+* [official Adobe bug tracker](https://bugs.adobe.com/flashplayer/)
+
+## Useful Tools
+
+* `xwininfo -tree` -- lets you inspect the window hierarchy of a window and
+ get the layout of child windows.
+* "[DiamondX](http://multimedia.cx/diamondx/) is a simple NPAPI plugin built
+ to run on Unix platforms and exercise the XEmbed browser extension."
+ * To build a 32-bit binary:
+ `./configure CFLAGS='-m32' LDFLAGS='-L/usr/lib32 -m32'`
diff --git a/docs/linux_printing.md b/docs/linux_printing.md
deleted file mode 100644
index e99e8135..0000000
--- a/docs/linux_printing.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Introduction
-The common approach used in printing on Linux is to use Gtk+ and Cairo libraries. The [Gtk+ documentation](http://library.gnome.org/devel/gtk/stable/Printing.html) describes both high-level and low-level APIs for us to do printing. In an application program, the easiest way to do printing is to use [GtkPrintOperation](http://library.gnome.org/devel/gtk/stable/gtk-High-level-Printing-API.html) to get the Cairo context in `draw-page`'s callback, and render **each** page's contents on this specific context, the rest is easy and trivial.
-
-However, in Chromium's multi-process architecture, we hope that all rendering should be done in the renderer process, and I/O should be done in the browser process. The problem is that we are unable to pass the Cairo context we obtained in the browser process to the renderer via IPC and/or shared memory, and later get it back after the rendering is done. Hence, we have to find something which we can _pickle_ and pass between processes.
-
-# Possible Solutions
- 1. **Bitmap**: This seems easy because we have already passed bitmaps for displaying web pages. It is also pretty easy to _dump_ this bitmap on the printing context. However, the bitmap takes lots of memory space even when you print a blank page. You might wonder why it can be a critical problem, since we've used bitmaps for displaying. The critical part is that the screen DPI is around 72~110, at least lower than 150. However, the DPI for printing is usually above 150, and maybe 1200 or 2400 for high-end printers. The 72-DPI bitmap actually looks terrible on the paper and reminds us the old time when we used dot-matrix printers. A 72-DPI bitmap will take ~7MB memory (assume we are using US letter paper), hence, it will take ~500MB memory per page when we would like to print with a 600-DPI laser printer. By the way, even we would like to do so, we still have the problem that WebKit seems not to take the DPI factor into account when rendering the web page.
- 1. **Rendering records** (scripts): We might be able to record every operation which is going to be performed on canvas, and later pass all these records to the browser side to playback. We can define our own format (or script), or we can use CairoScript. Unfortunately, CairoScript is still in the snapshot (it is under development and we can hardly find its detailed and useful documentation), not in the stable release. Even it is in a stable release, we still cannot assume that the user will install the latest version of Cairo library. If we would like to create our own format/script and use it, we have to replay these records in the browser process, which seems to be another kind of rendering action (it is actually). By the way, one thing we need to take care of would be, for example, when we have to composite multiple semi-transparent bitmaps, we also have to embed these bitmaps along with other skia objects into our records. This implies that we have to be able to _pickle_ `SkBitmap`, `SkPaint`, and other related skia objects. This sucks.
- 1. **Metafile approach 1**: We can use Cairo to create the PDF (or PS) file in the renderer (one file per page) and pass it to the browser process. We can then use [libpoppler](http://poppler.freedesktop.org/) to render the page content on the printing context (it is pretty easy to use and we just need to add few lines to use it). This sounds better, but this also means that we have to bring [libpoppler](http://poppler.freedesktop.org/) into our dependency. Moreover, we still have to do _rendering_ in the browser process, which we should really avoid if possible.
- 1. **Metafile approach 2**: Again, we use Cairo to create the PDF (or PS) file in the renderer. However, this time we have to generate the PDF/PS file for all pages. Unlike other approaches we mentioned earlier, all rendering tasks are done in the renderer, including transformation for setting up the page, and rendering of the header and footer. Since we do not want to do rendering in the browser process, this means that we cannot use the [GtkPrintOperation](http://library.gnome.org/devel/gtk/stable/gtk-High-level-Printing-API.html) approach, which does require rendering. Instead, we can use [GtkPrintUnixDialog](http://library.gnome.org/devel/gtk/stable/GtkPrintUnixDialog.html) to get printing parameters and generate the [GtkPrintJob](http://library.gnome.org/devel/gtk/stable/GtkPrintJob.html) accordingly. Then, we use `gtk_print_job_set_source_file()` and `gtk_print_job_send ()` to send our PDF/PS file directly to the printing system (CUPS). One bad part is that `gtk_print_job_set_source_file()` only takes a file on the disk, so we have to create a temporary file on the disk before we use it. This file might be pretty large if the user is printing a long web page.
-
-# Our Choice
-We currently are using Metafile approach 2, since we really like to avoid any rendering in the browser process if possible. By the way, we are using a two-pass rendering in `PdfPsmetafile` right now. Because in the first pass, we need to get the shrink factor from WebKit so that we can scale and center the page in the second pass. If we later we can have the shrink factor from Preview, we might be able to use single-pass rendering. However, using two-pass rendering might still have some advantages. For example, we can actually do Preview and the first pass at the same time if we use the bitmap object in the `VectorPlatformDevice` as Preview. (Not very sure if it works or not now, since Previewing is also a complicated issue.) Once we have the page settings (margins, scaling, etc), we can easily apply the first-pass results to generate our final output by copying Cairo surfaces.
-
-(Please NOTE the approach used here might be changed in the future.)
-
-# Current Status
-We now can generate a PDF file for the web page and save it under user's default download directory. The function is still very basic. Please see ideal Goal, Known Issues, and Bugs below.
-
-
----
-
-
-# Ideal Goal
-Design a better printing flow for Linux:
-> Ideally, when we print the web page, we should get the _snapshot_ of that page. In the current architecture, we cannot halt JavaScript from running when we print the web page. This is not good since the script might close the page we are printing. Things could be worse if plug-ins are involved. When we print, the renderer sends a sync message to the browser, so the renderer must wait for the browser. We potentially may have deadlocks when the plug-in talks to the renderer. Please see [here](http://dev.chromium.org/developers/design-documents/printing) for further detail. This might be avoided if we could copy the entire DOM tree before we print. It seems that WebKit does not support this directly right now. Before we can entirely solve this issue, we might need to reduce the time and chance we block the renderer. For example, unlike the windows version, we always have at least one printer (print to file). Hence, we can put "Page Setup" in the browser menu, so that we don't need to ask the user each time before we print (You can see this in Firefox and many other Linux applications).
-> Another issue is that we might need different mechanisms for different platforms. Obviously, the ways how we do printing on Windows and on Linux are quite different. The printing flow on Linux might be something like this one:
- * Print on a low-resolution bitmap canvas to generate Previews. (Believe it or not! It's actually much more difficult/tedious than it sounds.)
- * We use the preview to do Page Setup: Paper size, margins, page range, and maybe also the header and the footer.
- * Generate the PDF file.
- * Save the resulting PDF as a temporary file.
- * Use GTK+ APIs in the browser to ask the user which printer to use, then directly send the temporary file to CUPS.
-> These steps look simple, but we actually need to consider and design more details before we can make it happen. (For example, do we have to support all options shown in the GTK+ printing dialog?)
-
-# Known Issues
- 1. For some reason, if we send the resulting PDF files directly to CUPS, we often get nothing without any error message. The CUPS I was using is version 1.3.11. This might be a bug in Cairo 1.6.0, and/or a bug in the PDF filter (pdftopdf? pdftops?) in CUPS. Actually, if we use Firefox and print to file, we will sometimes have this problem, too. Nevertheless, the resulting PDF can be viewed in all PDF viewers without any error. However, you won't see the embedded font information in some PDF viewers, such as evince. If the printer supports PDF natively and has the HTTP interface, we can get the printout by sending the PDF file via printer's HTTP interface. [Issue# 21599](http://code.google.com/p/chromium/issues/detail?id=21599)
- 1. WebKit does not pass original text information to skia. Hence, we only have glyphs in the resulting PDF. This implies that we cannot do text selection in the resulting PDF. [Issue# 21602](http://code.google.com/p/chromium/issues/detail?id=21602)
- 1. The vector canvas used for printing in skia still has a bitmap within it [Issue# 21604](http://code.google.com/p/chromium/issues/detail?id=21604). This wastes lots of memory and does nothing. Maybe we can use this bitmap to do previewing, or use it as a thumbnail. of course, another possibility might be implementing PDF generating capabilities in skia.
- 1. To let Cairo use correct font information, we use FreeType to load the font again in `PdfPsMetafile`. This again wastes lots of memory when printing. It would be nice if we can find a way to share font information with/from skia. [Issue# 21608](http://code.google.com/p/chromium/issues/detail?id=21608)
- 1. Since we ask the browser open a temporary file for us. This might potentially be a security hole for DoS attack. We should find a way to limit the size of temporary files and the frequency of creation. (Do we have this now?) [Issue# 21610](http://code.google.com/p/chromium/issues/detail?id=21610)
- 1. In Cairo 1.6.0, the library opens a temporary file when creating a PostScript surface. Hence, our only choice is the PDF surface, which does not require any temporary file in the renderer.
- 1. In Cairo 1.6.0, we cannot output multiple glyphs at the same time. (we have to do it one by one). Newer version does support multiple glyphs output. We can use it in the future.
- 1. I did not have enough time to write good unit tests for classes related to printing on Linux. We definitely need those unit tests in the future. [Issue# 21611](http://code.google.com/p/chromium/issues/detail?id=21611)
- 1. I did not have enough time to compare our results with other competitors. Anyway, in the future, we should always compare quality, correctness, size, and maybe also resources and time in printing.
- 1. We do not supports all APIs in `SkCanvas` now ([Issue# 21612](http://code.google.com/p/chromium/issues/detail?id=21612)). By the way, when we need to do alpha composition in canvas, the result generated by Cairo is not perfect(buggy). For example, the resulting color might be wrong, and sometimes we will have round-off error in images' layout. You can try to print `third_party/WebKit/LayoutTests/svg/W3C-SVG-1.1/filters-blend-01-b.svg` and compare the result with your screen. If you print it out with a printer using CYMK, you might have incorrect colors.
- 1. We should find a way to do layout tests for printing. [Issue# 21613](http://code.google.com/p/chromium/issues/detail?id=21613) For example, it looks not quite right when you print this [page](http://code.google.com/p/chromium/issues/detail?id=8551&colspec=ID%20Stars%20Pri%20Area%20Type%20Status%20Summary%20Modified%20Owner%20Mstone).
-
-# Bugs
- 1. There are still many bugs in vector canvas. I did not implement path effect, so it prints dashed lines as solid lines. [Issue# 21614](http://code.google.com/p/chromium/issues/detail?id=21614)
- 1. When you print the "new-tab-page", the rounded boxes look strange. [Issue# 21616](http://code.google.com/p/chromium/issues/detail?id=21616)
- 1. The button is shown as a black rectangle. We should print it with a bitmap. Of course, we have to get the correct button according to the user's theme. [Issue# 21617](http://code.google.com/p/chromium/issues/detail?id=21617)
- 1. The font cache in `PdfPsMetafile` might not be thread-safe. [Issue# 21618](http://code.google.com/p/chromium/issues/detail?id=21618)
- 1. The file descriptor map used in the browser might not be thread safe. (However, it is just used at this moment as a quick ugly hack. We should be able to get rid of it when we implement all other printing classes.)
- 1. Since we save the resulting PDF file in the renderer, this might not be a good thing and might freeze the renderer for a while. We should find a way to get around it. By the way, maybe we should also show the printing progress? [Issue#21619](http://code.google.com/p/chromium/issues/detail?id=21619)
-
-
----
-
-
-# Reference
-[Issue# 9847](http://code.google.com/p/chromium/issues/detail?id=9847)
-> It is blocked on [Issue# 19223](http://code.google.com/p/chromium/issues/detail?id=19223)
-| Revision# | Code review# |
-|:----------|:-------------|
-| `r22522` | `160347` |
-| `r23032` | `164025` |
-| `r24243` | `174042` |
-| `r24376` | `173368` |
-| `r24474` | `174468` |
-| `r24533` | `173516` |
-| `r25615` | `172115` |
-| `r25974` | `196071` |
-| `r26308` | `203062` |
-| `r26400` | `200138` | \ No newline at end of file
diff --git a/docs/linux_profiling.md b/docs/linux_profiling.md
index 5f47277..cb01060 100644
--- a/docs/linux_profiling.md
+++ b/docs/linux_profiling.md
@@ -2,155 +2,210 @@
How to profile Chromium on Linux.
-See [Profiling Chromium and WebKit](https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit) for alternative discussion.
+See
+[Profiling Chromium and WebKit](https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit)
+for alternative discussion.
## CPU Profiling
gprof: reported not to work (taking an hour to load on our large binary).
-oprofile: Dean uses it, says it's good. (As of 9/16/9 oprofile only supports timers on the new Z600 boxes, which doesn't give good granularity for profiling startup).
+oprofile: Dean uses it, says it's good. (As of 9/16/09, oprofile only supports
+timers on the new Z600 boxes, which doesn't give good granularity for profiling
+startup.)
TODO(willchan): Talk more about oprofile, gprof, etc.
-Also see https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit
+Also see
+https://sites.google.com/a/chromium.org/dev/developers/profiling-chromium-and-webkit
### perf
-`perf` is the successor to `oprofile`. It's maintained in the kernel tree, it's available on Ubuntu in the package `linux-tools`.
+`perf` is the successor to `oprofile`. It's maintained in the kernel tree and
+is available on Ubuntu in the package `linux-tools`.
To capture data, you use `perf record`. Some examples:
-```
-$ perf record -f -g out/Release/chrome # captures the full execution of the program
-$ perf record -f -g -p 1234 # captures a particular pid, you can start at the right time, and stop with ctrl-C
-$ perf record -f -g -a # captures the whole system
+
+```shell
+# captures the full execution of the program
+perf record -f -g out/Release/chrome
+# captures a particular pid, you can start at the right time, and stop with
+# ctrl-C
+perf record -f -g -p 1234
+perf record -f -g -a # captures the whole system
```
-Some versions of the perf command can be confused by process renames. Affected versions will be unable to resolve Chromium's symbols if it was started through perf, as in the first example above. It should work correctly if you attach to an existing Chromium process as shown in the second example. (This is known to be broken as late as 3.2.5 and fixed as early as 3.11.rc3.g36f571. The actual affected range is likely much smaller. You can download and build your own perf from source.)
+Some versions of the perf command can be confused by process renames. Affected
+versions will be unable to resolve Chromium's symbols if it was started through
+perf, as in the first example above. It should work correctly if you attach to
+an existing Chromium process as shown in the second example. (This is known to
+be broken as late as 3.2.5 and fixed as early as 3.11.rc3.g36f571. The actual
+affected range is likely much smaller. You can download and build your own perf
+from source.)
-The last one is useful on limited systems with few cores and low memory bandwidth, where the CPU cycles are shared between several processes (e.g. chrome browser, renderer, plugin, X, pulseaudio, etc.)
+The last one is useful on limited systems with few cores and low memory
+bandwidth, where the CPU cycles are shared between several processes (e.g.
+chrome browser, renderer, plugin, X, pulseaudio, etc.)
To look at the data, you use:
-```
-$ perf report
-```
+
+ perf report
This will use the previously captured data (`perf.data`).
### google-perftools
-google-perftools code is enabled when the `use_allocator` variable in gyp is set to `tcmalloc` (currently the default). That will build the tcmalloc library, including the cpu profiling and heap profiling code into Chromium. In order to get stacktraces in release builds on 64 bit, you will need to build with some extra flags enabled by setting `profiling=1` in gyp.
+google-perftools code is enabled when the `use_allocator` variable in gyp is set
+to `tcmalloc` (currently the default). That will build the tcmalloc library,
+including the CPU profiling and heap profiling code, into Chromium. In order
+to get stack traces in release builds on 64-bit, you will need to build with
+extra flags enabled by setting `profiling=1` in gyp.
-If the stack traces in your profiles are incomplete, this may be due to missing frame pointers in some of the libraries. A workaround is to use the `linux_keep_shadow_stacks=1` gyp option. This will keep a shadow stack using the -finstrument-functions option of gcc and consult the stack when unwinding.
+If the stack traces in your profiles are incomplete, this may be due to missing
+frame pointers in some of the libraries. A workaround is to use the
+`linux_keep_shadow_stacks=1` gyp option. This will keep a shadow stack using the
+`-finstrument-functions` option of gcc and consult the stack when unwinding.
-In order to enable cpu profiling, run Chromium with the environment variable CPUPROFILE set to a filename. For example:
+In order to enable CPU profiling, run Chromium with the environment variable
+`CPUPROFILE` set to a filename. For example:
-```
-$ CPUPROFILE=/tmp/cpuprofile out/Release/chrome
-```
+ CPUPROFILE=/tmp/cpuprofile out/Release/chrome
-After the program exits successfully, the cpu profile will be available at the filename specified in the CPUPROFILE environment variable. You can then analyze it using the pprof script (distributed with google-perftools, installed by default on Googler Linux workstations). For example:
+After the program exits successfully, the CPU profile will be available at the
+filename specified in the `CPUPROFILE` environment variable. You can then
+analyze it using the `pprof` script (distributed with google-perftools,
+installed by default on Googler Linux workstations). For example:
-```
-$ pprof --gv out/Release/chrome /tmp/cpuprofile
-```
+ pprof --gv out/Release/chrome /tmp/cpuprofile
-This will generate a visual representation of the cpu profile as a postscript file and load it up using `gv`. For more powerful commands, please refer to the pprof help output and the google-perftools documentation.
+This will generate a visual representation of the CPU profile as a PostScript
+file and load it up using `gv`. For more powerful commands, please refer to the
+pprof help output and the google-perftools documentation.
-Note that due to the current design of google-perftools' profiling tools, it is only possible to profile the browser process. You can also profile and pass the --single-process flag for a rough idea of what the render process looks like, but keep in mind that you'll be seeing a mixed browser/renderer codepath that is not used in production.
+Note that due to the current design of google-perftools' profiling tools, it is
+only possible to profile the browser process. You can also profile and pass the
+`--single-process` flag for a rough idea of what the render process looks like,
+but keep in mind that you'll be seeing a mixed browser/renderer codepath that is
+not used in production.
-For further information, please refer to http://google-perftools.googlecode.com/svn/trunk/doc/cpuprofile.html.
+For further information, please refer to
+http://google-perftools.googlecode.com/svn/trunk/doc/cpuprofile.html.
## Heap Profiling
### google-perftools
#### Turning on heap profiles
-Follow the instructions for enabling profiling as described above in the google-perftools section under Cpu Profiling.
-To turn on the heap profiler on a Chromium build with tcmalloc, use the HEAPPROFILE environment variable to specify a filename for the heap profile. For example:
+Follow the instructions for enabling profiling as described above in the
+google-perftools section under CPU Profiling.
-```
-$ HEAPPROFILE=/tmp/heapprofile out/Release/chrome
-```
+To turn on the heap profiler on a Chromium build with tcmalloc, use the
+`HEAPPROFILE` environment variable to specify a filename for the heap profile.
+For example:
+
+ HEAPPROFILE=/tmp/heapprofile out/Release/chrome
-After the program exits successfully, the heap profile will be available at the filename specified in the `HEAPPROFILE` environment variable.
+After the program exits successfully, the heap profile will be available at the
+filename specified in the `HEAPPROFILE` environment variable.
-Some tests fork short-living processes which have a small memory footprint. To catch those, use the `HEAP_PROFILE_ALLOCATION_INTERVAL` environment variable.
+Some tests fork short-lived processes which have a small memory footprint. To
+catch those, use the `HEAP_PROFILE_ALLOCATION_INTERVAL` environment variable.
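+
+For example, a sketch that dumps a profile after every ~100 MB of cumulative
+allocation (the interval value is in bytes):
+
+    HEAP_PROFILE_ALLOCATION_INTERVAL=104857600 HEAPPROFILE=/tmp/heapprofile \
+        out/Release/chrome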
#### Dumping a profile of a running process
To programmatically generate a heap profile before exit, use code like:
-```
-#include "third_party/tcmalloc/chromium/src/google/heap-profiler.h"
-...
-HeapProfilerDump("foobar"); // "foobar" will be included in the message printed to the console
-```
+
+ #include "third_party/tcmalloc/chromium/src/google/heap-profiler.h"
+
+ // "foobar" will be included in the message printed to the console
+ HeapProfilerDump("foobar");
+
For example, you might hook that up to some action in the UI.
Or you can use gdb to attach at any point:
- 1. Attach gdb to the process: `$ gdb -p 12345`
- 1. Cause it to dump a profile: `(gdb) p HeapProfilerDump("foobar")`
- 1. The filename will be printed on the console you started Chrome from; e.g. "`Dumping heap profile to heap.0001.heap (foobar)`"
-
+1. Attach gdb to the process: `$ gdb -p 12345`
+1. Cause it to dump a profile: `(gdb) p HeapProfilerDump("foobar")`
+1. The filename will be printed on the console you started Chrome from; e.g.
+ "`Dumping heap profile to heap.0001.heap (foobar)`"
#### Analyzing dumps
-You can then analyze dumps using the `pprof` script (distributed with google-perftools, installed by default on Googler Linux workstations; on Ubuntu it is called `google-pprof`). For example:
+You can then analyze dumps using the `pprof` script (distributed with
+google-perftools, installed by default on Googler Linux workstations; on Ubuntu
+it is called `google-pprof`). For example:
-```
-$ pprof --gv out/Release/chrome /tmp/heapprofile
-```
+ pprof --gv out/Release/chrome /tmp/heapprofile
-This will generate a visual representation of the heap profile as a postscript file and load it up using `gv`. For more powerful commands, please refer to the pprof help output and the google-perftools documentation.
+This will generate a visual representation of the heap profile as a PostScript
+file and load it up using `gv`. For more powerful commands, please refer to the
+pprof help output and the google-perftools documentation.
-(pprof is slow. Googlers can try the not-open-source cpprof; Evan wrote an open source alternative [available on github](https://github.com/martine/hp).)
+(pprof is slow. Googlers can try the not-open-source cpprof; Evan wrote an open
+source alternative [available on github](https://github.com/martine/hp).)
#### Sandbox
-Sandboxed renderer subprocesses will fail to write out heap profiling dumps. To work around this, turn off the sandbox (via `export CHROME_DEVEL_SANDBOX=`).
+Sandboxed renderer subprocesses will fail to write out heap profiling dumps. To
+work around this, turn off the sandbox (via `export CHROME_DEVEL_SANDBOX=`).
#### Troubleshooting
- * "Hooked allocator frame not found": build with `-Dcomponent=static_library`. tcmalloc gets confused when the allocator routines are in a different `.so` than the rest of the code.
+* "Hooked allocator frame not found": build with `-Dcomponent=static_library`.
+ `tcmalloc` gets confused when the allocator routines are in a different
+ `.so` than the rest of the code.
#### More reading
-For further information, please refer to http://google-perftools.googlecode.com/svn/trunk/doc/heapprofile.html.
+For further information, please refer to
+http://google-perftools.googlecode.com/svn/trunk/doc/heapprofile.html.
### Massif
-[Massif](http://valgrind.org/docs/manual/mc-manual.html) is a [Valgrind](http://www.chromium.org/developers/how-tos/using-valgrind)-based heap profiler.
-It is much slower than the heap profiler from google-perftools, but it may have some advantages. (In particular, it handles the multi-process executables well).
-First, you will need to build massif from valgrind-variant project yourself, it's [easy](http://code.google.com/p/valgrind-variant/wiki/HowTo).
+[Massif](http://valgrind.org/docs/manual/mc-manual.html) is a
+[Valgrind](http://www.chromium.org/developers/how-tos/using-valgrind)-based heap
+profiler. It is much slower than the heap profiler from google-perftools, but it
+may have some advantages. (In particular, it handles multi-process
+executables well.)
+
+First, you will need to build massif from the valgrind-variant project
+yourself; it's [easy](http://code.google.com/p/valgrind-variant/wiki/HowTo).
-Then, make sure your chromium is built using the [valgrind instructions](http://www.chromium.org/developers/how-tos/using-valgrind).
+Then, make sure your Chromium is built using the
+[valgrind instructions](http://www.chromium.org/developers/how-tos/using-valgrind).
Now, you can run massif like this:
```
-% path-to-valgrind-variant/valgrind/inst/bin/valgrind --fullpath-after=/chromium/src/ \
- --trace-children-skip=*npviewer*,/bin/uname,/bin/sh,/usr/bin/which,/bin/ps,/bin/grep,/usr/bin/linux32 --trace-children=yes --tool=massif \
- out/Release/chrome --noerrdialogs --disable-hang-monitor --other-chrome-flags
+path-to-valgrind-variant/valgrind/inst/bin/valgrind \
+ --fullpath-after=/chromium/src/ \
+ --trace-children-skip=*npviewer*,/bin/uname,/bin/sh,/usr/bin/which,/bin/ps,/bin/grep,/usr/bin/linux32 \
+ --trace-children=yes \
+ --tool=massif \
+ out/Release/chrome --noerrdialogs --disable-hang-monitor --other-chrome-flags
```
-The result will be stored in massif.out.PID files, which you can post-process with [ms\_print](http://valgrind.org/docs/manual/mc-manual.html).
+The result will be stored in `massif.out.<pid>` files, which you can
+post-process with [ms_print](http://valgrind.org/docs/manual/mc-manual.html).
-TODO(kcc) sometimes when closing a tab the main process kills the tab process before massif completes writing it's log file. Need a flag that tells the main process to wait longer.
+TODO(kcc) sometimes when closing a tab the main process kills the tab process
+before massif completes writing its log file. Need a flag that tells the main
+process to wait longer.
## Paint profiling
-You can use Xephyr to profile how chrome repaints the screen. Xephyr is a virtual X server like Xnest with debugging options which draws red rectangles to where applications are drawing before drawing the actual information.
+You can use Xephyr to profile how chrome repaints the screen. Xephyr is a
+virtual X server like Xnest with debugging options; it draws red rectangles
+where applications are about to draw, before drawing the actual content.
-```
-$ export XEPHYR_PAUSE=10000
-$ Xephyr :1 -ac -screen 800x600 &
-$ DISPLAY=:1 out/Debug/chrome
-```
+ export XEPHYR_PAUSE=10000
+ Xephyr :1 -ac -screen 800x600 &
+ DISPLAY=:1 out/Debug/chrome
-When ready to start debugging issue the following command, which will tell Xephyr to start drawing red rectangles:
+When ready to start debugging, issue the following command, which will tell
+Xephyr to start drawing red rectangles:
-```
-$ kill -USR1 `pidof Xephyr`
-```
+ kill -USR1 `pidof Xephyr`
-For further information, please refer to http://cgit.freedesktop.org/xorg/xserver/tree/hw/kdrive/ephyr/README. \ No newline at end of file
+For further information, please refer to
+http://cgit.freedesktop.org/xorg/xserver/tree/hw/kdrive/ephyr/README.
diff --git a/docs/linux_proxy_config.md b/docs/linux_proxy_config.md
index fed3d80..5b8d01d 100644
--- a/docs/linux_proxy_config.md
+++ b/docs/linux_proxy_config.md
@@ -1,11 +1,18 @@
-# Introduction
+# Linux Proxy Config
-Chromium on Linux has several possible sources of proxy info: GNOME/KDE settings, command-line flags, and environment variables.
-
-# Details
+Chromium on Linux has several possible sources of proxy info: GNOME/KDE
+settings, command-line flags, and environment variables.
## GNOME and KDE
-When Chromium detects that it is running in GNOME or KDE, it will automatically use the appropriate standard proxy settings. You can configure these proxy settings from the options dialog (the "Change proxy settings" button in the "Under the Hood" tab), which will launch the GNOME or KDE proxy settings applications, or by launching those applications directly.
+
+When Chromium detects that it is running in GNOME or KDE, it will automatically
+use the appropriate standard proxy settings. You can configure these proxy
+settings from the options dialog (the "Change proxy settings" button in the
+"Under the Hood" tab), which will launch the GNOME or KDE proxy settings
+applications, or by launching those applications directly.
## Flags and environment variables
-For other desktop environments, Chromium's proxy settings can be configured using command-line flags or environment variables. These are documented on the man page (`man google-chrome` or `man chromium-browser`). \ No newline at end of file
+
+For other desktop environments, Chromium's proxy settings can be configured
+using command-line flags or environment variables. These are documented on the
+man page (`man google-chrome` or `man chromium-browser`).
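+
+For example, to point a single session at a local SOCKS5 proxy (a sketch; see
+the man page for the authoritative flag list):
+
+    google-chrome --proxy-server="socks5://localhost:1080"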
diff --git a/docs/linux_sandbox_ipc.md b/docs/linux_sandbox_ipc.md
index a5caaaf..5ff70906 100644
--- a/docs/linux_sandbox_ipc.md
+++ b/docs/linux_sandbox_ipc.md
@@ -1,30 +1,57 @@
-The Sandbox IPC system is separate from the 'main' IPC system. The sandbox IPC is a lower level system which deals with cases where we need to route requests from the bottom of the call stack up into the browser.
-
-The motivating example is Skia, which uses fontconfig to load fonts. In a chrooted renderer we cannot access the user's fontcache, nor the font files themselves. However, font loading happens when we have called through WebKit, through Skia and into the SkFontHost. At this point, we cannot loop back around to use the main IPC system.
-
-Thus we define a small IPC system which doesn't depend on anything but <tt>base</tt> and which can make synchronous requests to the browser process.
-
-The zygote (LinuxZygote) starts with a UNIX DGRAM socket installed in a well known file descriptor slot (currently 4). Requests can be written to this socket which are then processed on a special "sandbox IPC" process. Requests have a magic <tt>int</tt> at the beginning giving the type of the request.
-
-All renderers share the same socket, so replies are delivered via a reply channel which is passed as part of the request. So the flow looks like:
- 1. The renderer creates a UNIX DGRAM socketpair.
- 1. The renderer writes a request to file descriptor 4 with an SCM\_RIGHTS control message containing one end of the fresh socket pair.
- 1. The renderer blocks reading from the other end of the fresh socketpair.
- 1. A special "sandbox IPC" process receives the request, processes it and writes the reply to the end of the socketpair contained in the request.
- 1. The renderer wakes up and continues.
-
-The browser side of the processing occurs in <tt>chrome/browser/renderer_host/render_sandbox_host_linux.cc</tt>. The renderer ends could occur anywhere, but the browser side has to know about all the possible requests so that should be a good starting point.
+# Linux Sandbox IPC
+
+The Sandbox IPC system is separate from the 'main' IPC system. The sandbox IPC
+is a lower level system which deals with cases where we need to route requests
+from the bottom of the call stack up into the browser.
+
+The motivating example is Skia, which uses fontconfig to load fonts. In a
+chrooted renderer we cannot access the user's fontcache, nor the font files
+themselves. However, font loading happens when we have called through WebKit,
+through Skia and into the SkFontHost. At this point, we cannot loop back around
+to use the main IPC system.
+
+Thus we define a small IPC system which doesn't depend on anything but `base`
+and which can make synchronous requests to the browser process.
+
+The [zygote](linux_zygote.md) starts with a `UNIX DGRAM` socket installed in a
+well-known file descriptor slot (currently 4). Requests written to this socket
+are processed by a special "sandbox IPC" process. Requests have a magic `int`
+at the beginning giving the type of the request.
+
+All renderers share the same socket, so replies are delivered via a reply
+channel which is passed as part of the request. So the flow looks like:
+
+1. The renderer creates a `UNIX DGRAM` socketpair.
+1. The renderer writes a request to file descriptor 4 with an `SCM_RIGHTS`
+ control message containing one end of the fresh socket pair.
+1. The renderer blocks reading from the other end of the fresh socketpair.
+1. A special "sandbox IPC" process receives the request, processes it and
+ writes the reply to the end of the socketpair contained in the request.
+1. The renderer wakes up and continues.
+
+The browser side of the processing occurs in
+`chrome/browser/renderer_host/render_sandbox_host_linux.cc`. The renderer ends
+could be anywhere, but the browser side has to know about all the possible
+requests, so that is a good starting point.
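+
+To make the flow concrete, here is a minimal C sketch of the renderer side.
+The request layout (a leading method `int`) follows the description above; the
+names and error handling are illustrative only:
+
+```c
+#include <string.h>
+#include <sys/socket.h>
+#include <sys/uio.h>
+#include <unistd.h>
+
+#define kSandboxIPCFd 4  /* the well-known file descriptor slot */
+
+/* On success, *reply_fd is the descriptor to block on for the reply. */
+static int SendSandboxRequest(int method, int* reply_fd) {
+  int fds[2];
+  if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0)
+    return -1;
+
+  struct iovec iov = { .iov_base = &method, .iov_len = sizeof(method) };
+  char control[CMSG_SPACE(sizeof(int))];
+  struct msghdr msg;
+  memset(&msg, 0, sizeof(msg));
+  msg.msg_iov = &iov;  /* the magic int comes first */
+  msg.msg_iovlen = 1;
+  msg.msg_control = control;
+  msg.msg_controllen = sizeof(control);
+
+  struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
+  cmsg->cmsg_level = SOL_SOCKET;
+  cmsg->cmsg_type = SCM_RIGHTS;  /* pass one end of the fresh pair */
+  cmsg->cmsg_len = CMSG_LEN(sizeof(int));
+  memcpy(CMSG_DATA(cmsg), &fds[1], sizeof(int));
+
+  if (sendmsg(kSandboxIPCFd, &msg, 0) < 0) {
+    close(fds[0]);
+    close(fds[1]);
+    return -1;
+  }
+  close(fds[1]);       /* the sandbox IPC process owns the reply end now */
+  *reply_fd = fds[0];  /* a read() here blocks until the reply arrives */
+  return 0;
+}
+```
+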
Here is a (possibly incomplete) list of endpoints in the renderer:
### fontconfig
-As mentioned above, the motivating example of this is dealing with fontconfig from a chrooted renderer. We implement our own Skia FontHost, outside of the Skia tree, in <tt>skia/ext/SkFontHost_fontconfig**</tt>.**
+As mentioned above, the motivating example of this is dealing with fontconfig
+from a chrooted renderer. We implement our own Skia FontHost, outside of the
+Skia tree, in `skia/ext/SkFontHost_fontconfig`.
-There are two methods used. One for performing a match against the fontconfig data and one to return a file descriptor to a font file resulting from one of those matches. The only wrinkle is that fontconfig is a single-threaded library and it's already used in the browser by GTK itself.
+There are two methods used: one performs a match against the fontconfig data,
+and one returns a file descriptor to a font file resulting from one of
+those matches. The only wrinkle is that fontconfig is a single-threaded library
+and it's already used in the browser by GTK itself.
Thus, we have a couple of options:
- 1. Handle the requests on the UI thread in the browser.
- 1. Handle the requests in a separate address space.
-The original implementation did the former (handle on UI thread). This turned out to be a terrible idea, performance wise, so we now handle the requests on a dedicated process. \ No newline at end of file
+1. Handle the requests on the UI thread in the browser.
+1. Handle the requests in a separate address space.
+
+The original implementation did the former (handle on UI thread). This turned
+out to be a terrible idea, performance wise, so we now handle the requests on a
+dedicated process.
diff --git a/docs/linux_sandboxing.md b/docs/linux_sandboxing.md
index 00ba8dd..fb7cc73b 100644
--- a/docs/linux_sandboxing.md
+++ b/docs/linux_sandboxing.md
@@ -1,20 +1,41 @@
-Chromium uses a multiprocess model, which allows to give different privileges and restrictions to different parts of the browser. For instance, we want renderers to run with a limited set of privileges since they process untrusted input and are likely to be compromised. Renderers will use an IPC mechanism to request access to resource from a more privileged (browser process).
-You can find more about this general design [here](http://dev.chromium.org/developers/design-documents/sandbox).
+# Linux Sandboxing
-We use different sandboxing techniques on Linux and Chrome OS, in combination, to achieve a good level of sandboxing. You can see which sandboxes are currently engaged by looking at chrome://sandbox (renderer processes) and chrome://gpu (gpu process).
+Chromium uses a multiprocess model, which allows giving different privileges
+and restrictions to different parts of the browser. For instance, we want
+renderers to run with a limited set of privileges since they process untrusted
+input and are likely to be compromised. Renderers will use an IPC mechanism to
+request access to resources from the more privileged browser process.
+You can find more about this general design
+[here](http://dev.chromium.org/developers/design-documents/sandbox).
+
+We use different sandboxing techniques on Linux and Chrome OS, in combination,
+to achieve a good level of sandboxing. You can see which sandboxes are currently
+engaged by looking at chrome://sandbox (renderer processes) and chrome://gpu
+(gpu process).
We use a two-layer approach:
- * Layer-1 (also called the "semantics" layer) prevents access to most resources from a process where it's engaged. The setuid sandbox is used for this.
- * Layer-2 (also called "attack surface reduction" layer) restricts access from a process to the attack surface of the kernel. Seccomp-BPF is used for this.
+* Layer-1 (also called the "semantics" layer) prevents access to most
+ resources from a process where it's engaged. The setuid sandbox is used for
+ this.
+* Layer-2 (also called "attack surface reduction" layer) restricts access from
+ a process to the attack surface of the kernel. Seccomp-BPF is used for this.
-You can disable all sandboxing (for testing) with --no-sandbox.
+You can disable all sandboxing (for testing) with `--no-sandbox`.
## Layered approach
-One notable difficulty with seccomp-bpf is that filtering at the system call interface provides difficult to understand semantics. One crucial aspect is that if a process A runs under seccomp-bpf, we need to guarantee that it cannot affect the integrity of process B running under a different seccomp-bpf policy (which would be a sandbox escape). Besides the obvious system calls such as ptrace() or process\_vm\_writev(), there are multiple subtle issues, such as using open() on /proc entries.
+One notable difficulty with `seccomp-bpf` is that filtering at the system call
+interface yields semantics that are difficult to understand. One crucial aspect
+is that if a process A runs under `seccomp-bpf`, we need to guarantee that it
+cannot affect the integrity of process B running under a different
+`seccomp-bpf` policy (which would be a sandbox escape). Besides the obvious
+system calls such as `ptrace()` or `process_vm_writev()`, there are multiple
+subtle issues, such as using `open()` on `/proc` entries.
-Our layer-1 guarantees the integrity of processes running under different seccomp-bpf policies. In addition, it allows restricting access to the network, something that is difficult to perform at the layer-2.
+Our layer-1 guarantees the integrity of processes running under different
+`seccomp-bpf` policies. In addition, it allows restricting access to the
+network, something that is difficult to perform at the layer-2.
## Sandbox types summary
@@ -31,67 +52,101 @@ Our layer-1 guarantees the integrity of processes running under different seccom
Also called SUID sandbox, our main layer-1 sandbox.
-A SUID binary that will create a new network and PID namespace, as well as chroot() the process to an empty directory on request.
+A SUID binary that will create a new network and PID namespace, as well as
+`chroot()` the process to an empty directory on request.
-To disable it, use --disable-setuid-sandbox. (Do not remove the binary or unset CHROME\_DEVEL\_SANDBOX, it is not supported).
+To disable it, use `--disable-setuid-sandbox`. (Do not remove the binary or
+unset `CHROME_DEVEL_SANDBOX`; that is not supported.)
-_Main page: [LinuxSUIDSandbox](LinuxSUIDSandbox.md)_
+Main page: [LinuxSUIDSandbox](linux_suid_sandbox.md)
## User namespaces sandbox
-The namespace sandbox [aims to replace the setuid sandbox](https://code.google.com/p/chromium/issues/detail?id=312380). It has the advantage of not requiring a setuid binary. It's based on (unprivileged)
-[user namespaces](https://lwn.net/Articles/531114/) in the Linux kernel. It generally requires a kernel >= 3.10, although it may work with 3.8 if certain patches are backported.
+The namespace sandbox
+[aims to replace the setuid sandbox](https://crbug.com/312380). It has the
+advantage of not requiring a setuid binary. It's based on (unprivileged)
+[user namespaces](https://lwn.net/Articles/531114/) in the Linux kernel. It
+generally requires a kernel >= 3.10, although it may work with 3.8 if certain
+patches are backported.
-Starting with M-43, if the kernel supports it, unprivileged namespaces are used instead of the setuid sandbox. Starting with M-44, certain processes run [in their own PID namespace](https://code.google.com/p/chromium/issues/detail?id=460972), which isolates them better.
+Starting with M-43, if the kernel supports it, unprivileged namespaces are used
+instead of the setuid sandbox. Starting with M-44, certain processes run
+[in their own PID namespace](https://crbug.com/460972), which isolates them
+better.
-## The <tt>seccomp-bpf</tt> sandbox
+## The `seccomp-bpf` sandbox
-Also called <tt>seccomp-filters</tt> sandbox.
+Also called `seccomp-filters` sandbox.
-Our main layer-2 sandbox, designed to shelter the kernel from malicious code executing in userland.
+Our main layer-2 sandbox, designed to shelter the kernel from malicious code
+executing in userland.
-Also used as layer-1 in the GPU process. A [BPF](http://www.tcpdump.org/papers/bpf-usenix93.pdf) compiler will compile a process-specific program
-to filter system calls and send it to the kernel. The kernel will interpret this program for each system call and allow or disallow the call.
+Also used as layer-1 in the GPU process. A
+[BPF](http://www.tcpdump.org/papers/bpf-usenix93.pdf) compiler will compile a
+process-specific program to filter system calls and send it to the kernel. The
+kernel will interpret this program for each system call and allow or disallow
+the call.
-To help with sandboxing of existing code, the kernel can also synchronously raise a SIGSYS signal. This allows user-land to perform actions such as "log and return errno", emulate the system call or broker-out the system call (perform a remote system call via IPC). Implementing this requires a low-level async-signal safe IPC facility.
+To help with sandboxing of existing code, the kernel can also synchronously
+raise a `SIGSYS` signal. This allows user-land to perform actions such as "log
+and return errno", emulate the system call or broker-out the system call
+(perform a remote system call via IPC). Implementing this requires a low-level
+async-signal safe IPC facility.
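+
+For a flavor of what a policy looks like, here is a minimal standalone
+`seccomp-bpf` filter that allows everything except `uname(2)`, which raises
+`SIGSYS` instead. This is only a sketch; Chromium's real policies are far more
+involved (a production filter must also check `seccomp_data::arch`):
+
+```c
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <stddef.h>
+#include <sys/prctl.h>
+#include <sys/syscall.h>
+
+static int InstallFilter(void) {
+  struct sock_filter filter[] = {
+    /* Load the system call number. */
+    BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
+    /* uname() -> trap (deliver SIGSYS); everything else -> allow. */
+    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_uname, 0, 1),
+    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
+    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
+  };
+  struct sock_fprog prog = {
+    .len = sizeof(filter) / sizeof(filter[0]),
+    .filter = filter,
+  };
+  /* Required so an unprivileged process may install a filter. */
+  if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
+    return -1;
+  return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
+}
+```
+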
-Seccomp-bpf is supported since Linux 3.5, but is also back-ported on Ubuntu 12.04 and is always available on Chrome OS. See [this page](http://outflux.net/teach-seccomp/) for more information.
+`seccomp-bpf` has been supported since Linux 3.5, but was also back-ported to
+Ubuntu 12.04 and is always available on Chrome OS. See
+[this page](http://outflux.net/teach-seccomp/) for more information.
-See [this blog post](http://blog.chromium.org/2012/11/a-safer-playground-for-your-linux-and.html) announcing Chrome support. Or [this one](http://blog.cr0.org/2012/09/introducing-chromes-next-generation.html) for a more technical overview.
+See
+[this blog post](http://blog.chromium.org/2012/11/a-safer-playground-for-your-linux-and.html)
+announcing Chrome support. Or
+[this one](http://blog.cr0.org/2012/09/introducing-chromes-next-generation.html)
+for a more technical overview.
-This sandbox can be disabled with --disable-seccomp-filter-sandbox.
+This sandbox can be disabled with `--disable-seccomp-filter-sandbox`.
-## The <tt>seccomp</tt> sandbox
+## The `seccomp` sandbox
-Also called <tt>seccomp-legacy</tt>. An obsolete layer-1 sandbox, then available as an optional layer-2 sandbox.
+Also called `seccomp-legacy`. An obsolete layer-1 sandbox, later available as
+an optional layer-2 sandbox.
-Deprecated by seccomp-bpf and removed from the Chromium code base. It still exists as a separate project [here](https://code.google.com/p/seccompsandbox/).
+Deprecated by seccomp-bpf and removed from the Chromium code base. It still
+exists as a separate project [here](https://code.google.com/p/seccompsandbox/).
See:
- * http://www.imperialviolet.org/2009/08/26/seccomp.html
- * http://lwn.net/Articles/346902/
- * https://code.google.com/p/seccompsandbox/
+
+* http://www.imperialviolet.org/2009/08/26/seccomp.html
+* http://lwn.net/Articles/346902/
+* https://code.google.com/p/seccompsandbox/
## SELinux
-[Deprecated](https://src.chromium.org/viewvc/chrome?revision=200838&view=revision). Was designed to be used instead of the SUID sandbox.
+[Deprecated](https://src.chromium.org/viewvc/chrome?revision=200838&view=revision).
+Was designed to be used instead of the SUID sandbox.
Old information for archival purposes:
-One can build Chromium with <tt>selinux=1</tt> and the Zygote (which starts the renderers and PPAPI processes) will do a
-dynamic transition. audit2allow will quickly build a usable module.
+One can build Chromium with `selinux=1` and the Zygote (which starts the
+renderers and PPAPI processes) will do a dynamic transition. `audit2allow` will
+quickly build a usable module.
-Available since [r26257](http://src.chromium.org/viewvc/chrome?view=rev&revision=26257),
-more information in [this blog post](http://www.imperialviolet.org/2009/07/14/selinux.html) (grep for
-'dynamic' since dynamic transitions are a little obscure in SELinux)
+Available since
+[r26257](http://src.chromium.org/viewvc/chrome?view=rev&revision=26257),
+more information in
+[this blog post](http://www.imperialviolet.org/2009/07/14/selinux.html) (grep
+for 'dynamic' since dynamic transitions are a little obscure in SELinux)
## Developing and debugging with sandboxing
Sandboxing can make development harder, see:
- * [this page](https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment) for the setuid sandbox
- * [this page](http://www.chromium.org/for-testers/bug-reporting-guidelines/hanging-tabs) for triggering crashes
- * [this page for debugging tricks](https://code.google.com/p/chromium/wiki/LinuxDebugging#Getting_renderer_subprocesses_into_gdb)
+
+* [this page](linux_suid_sandbox_development.md) for the `setuid` sandbox
+* [this page](http://www.chromium.org/for-testers/bug-reporting-guidelines/hanging-tabs)
+ for triggering crashes
+* [this page for debugging tricks](linux_debugging.md)
## See also
- * [LinuxSandboxIPC](LinuxSandboxIPC.md)
- * [How Chromium's Linux sandbox affects Native Client](https://code.google.com/p/nativeclient/wiki/LinuxOuterSandbox) \ No newline at end of file
+
+* [LinuxSandboxIPC](linux_sandbox_ipc.md)
+* [How Chromium's Linux sandbox affects Native Client](https://code.google.com/p/nativeclient/wiki/LinuxOuterSandbox)
diff --git a/docs/linux_suid_sandbox.md b/docs/linux_suid_sandbox.md
index 84e5acd..5845662 100644
--- a/docs/linux_suid_sandbox.md
+++ b/docs/linux_suid_sandbox.md
@@ -1,63 +1,130 @@
-With [r20110](http://src.chromium.org/viewvc/chrome?view=rev&revision=20110), Chromium on Linux can now sandbox its renderers using a SUID helper binary. This is one of [our layer-1 sandboxing solutions](LinuxSandboxing.md).
+# Linux `SUID` Sandbox
-## SUID helper executable
+With [r20110](https://crrev.com/20110), Chromium on Linux can now sandbox its
+renderers using a `SUID` helper binary. This is one of
+[our layer-1 sandboxing solutions](linux_sandboxing.md).
-The SUID helper binary is called 'chrome\_sandbox' and you must build it separately from the main 'chrome' target. To use this sandbox, you have to specify its path in the `linux_sandbox_path` GYP variable. When spawning the zygote process (LinuxZygote), if the suid sandbox is enabled, Chromium will check for the sandbox binary at the location specified by `linux_sandbox_path`. For Google Chrome, this is set to <tt>/opt/google/chrome/chrome-sandbox</tt>, and early version had this value hard coded in <tt>chrome/browser/zygote_host_linux.cc</tt>.
+## `SUID` helper executable
+The `SUID` helper binary is called `chrome_sandbox` and you must build it
+separately from the main `chrome` target. To use this sandbox, you have to
+specify its path in the `linux_sandbox_path` GYP variable. When spawning the
+[zygote process](linux_zygote.md), if the `SUID` sandbox is enabled, Chromium
+will check for the sandbox binary at the location specified by
+`linux_sandbox_path`. For Google Chrome, this is set to
+`/opt/google/chrome/chrome-sandbox`, and early versions had this value hard
+coded in `chrome/browser/zygote_host_linux.cc`.
-In order for the sandbox to be used, the following conditions must be met:
- * The sandbox binary must be executable by the Chromium process.
- * It must be SUID and executable by other.
-If these conditions are met then the sandbox binary is used to launch the zygote process. Once the zygote has started, it asks a helper process to chroot it to a temp directory.
+In order for the sandbox to be used, the following conditions must be met:
-## CLONE\_NEWPID method
+* The sandbox binary must be executable by the Chromium process.
+* It must be `SUID` and executable by others.
-The sandbox does three things to restrict the authority of a sandboxed process. The SUID helper is responsible for the first two:
- * The SUID helper chroots the process. This takes away access to the filesystem namespace.
- * The SUID helper puts the process in a PID namespace using the CLONE\_NEWPID option to [clone()](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html). This stops the sandboxed process from being able to ptrace() or kill() unsandboxed processes.
+If these conditions are met then the sandbox binary is used to launch the zygote
+process. Once the zygote has started, it asks a helper process to chroot it to a
+temp directory.
-In addition:
- * The LinuxZygote startup code sets the process to be _undumpable_ using [prctl()](http://www.kernel.org/doc/man-pages/online/pages/man2/prctl.2.html). This stops sandboxed processes from being able to ptrace() each other. More specifically, it stops the sandboxed process from being ptrace()'d by any other process. This can be switched off with the `--allow-sandbox-debugging` option.
+## `CLONE_NEWPID` method
-Limitations:
- * Not all kernel versions support CLONE\_NEWPID. If the SUID helper is run on a kernel that does not support CLONE\_NEWPID, it will ignore the problem without a warning, but the protection offered by the sandbox will be substantially reduced. See LinuxPidNamespaceSupport for how to test whether your system supports PID namespaces.
- * This does not restrict network access.
- * This does not prevent processes within a given sandbox from sending each other signals or killing each other.
- * Setting a process to be undumpable is not irreversible. A sandboxed process can make itself dumpable again, opening itself up to being taken over by another process (either unsandboxed or within the same sandbox).
- * Breakpad (the crash reporting tool) makes use of this. If a process crashes, Breakpad makes it dumpable in order to use ptrace() to halt threads and capture the process's state at the time of the crash. This opens a small window of vulnerability.
+The sandbox does three things to restrict the authority of a sandboxed process.
+The `SUID` helper is responsible for the first two:
-## setuid() method
+* The `SUID` helper chroots the process. This takes away access to the
+ filesystem namespace.
+* The `SUID` helper puts the process in a PID namespace using the
+ `CLONE_NEWPID` option to
+ [clone()](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html).
+ This stops the sandboxed process from being able to `ptrace()` or `kill()`
+ unsandboxed processes.
-_This is an alternative to the CLONE\_NEWPID method; it is not currently implemented in the Chromium codebase._
+In addition:
-Instead of using CLONE\_NEWPID, the SUID helper can use setuid() to put the process into a currently-unused UID, which is allocated out of a range of UIDs. In order to ensure that the UID has not been allocated for another sandbox, the SUID helper uses [getrlimit()](http://www.kernel.org/doc/man-pages/online/pages/man2/getrlimit.2.html) to set RLIMIT\_NPROC temporarily to a soft limit of 1. (Note that the docs specify that [setuid()](http://www.kernel.org/doc/man-pages/online/pages/man2/setuid.2.html) returns EAGAIN if RLIMIT\_NPROC is exceeded.) We can reset RLIMIT\_NPROC afterwards in order to allow the sandboxed process to fork child processes.
+* The [Linux Zygote](linux_zygote.md) startup code sets the process to be
+ _undumpable_ using
+ [prctl()](http://www.kernel.org/doc/man-pages/online/pages/man2/prctl.2.html).
+ This stops sandboxed processes from being able to `ptrace()` each other.
+ More specifically, it stops the sandboxed process from being `ptrace()`'d by
+ any other process. This can be switched off with the
+ `--allow-sandbox-debugging` option.
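+
+A sketch of that one call (illustrative; in reality this happens in the
+zygote's startup code):
+
+```c
+#include <sys/prctl.h>
+
+static void SetUndumpable(void) {
+  /* Stops any other process, sandboxed or not, from ptrace()ing us.
+   * Reversible: a process can set PR_SET_DUMPABLE back to 1, which is
+   * the Breakpad caveat noted in the limitations below. */
+  prctl(PR_SET_DUMPABLE, 0);
+}
+```
+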
-As before, the SUID helper chroots the process.
+Limitations:
-As before, LinuxZygote can set itself to be undumpable to stop processes in the sandbox from being able to ptrace() each other.
+* Not all kernel versions support `CLONE_NEWPID`. If the `SUID` helper is run
+ on a kernel that does not support `CLONE_NEWPID`, it will ignore the problem
+ without a warning, but the protection offered by the sandbox will be
+  substantially reduced. See
+  [LinuxPidNamespaceSupport](linux_pid_namespace_support.md) for how to test
+  whether your system supports PID namespaces.
+* This does not restrict network access.
+* This does not prevent processes within a given sandbox from sending each
+ other signals or killing each other.
+* Setting a process to be undumpable is not irreversible. A sandboxed process
+ can make itself dumpable again, opening itself up to being taken over by
+ another process (either unsandboxed or within the same sandbox).
+ * Breakpad (the crash reporting tool) makes use of this. If a process
+ crashes, Breakpad makes it dumpable in order to use ptrace() to halt
+ threads and capture the process's state at the time of the crash. This
+ opens a small window of vulnerability.
+
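+To make the helper's job concrete, here is a minimal C sketch of the
+chroot-plus-`CLONE_NEWPID`/`CLONE_NEWNET` sequence described at the top of this
+section. The path and function names are illustrative only:
+
+```c
+#define _GNU_SOURCE
+#include <sched.h>
+#include <signal.h>
+#include <unistd.h>
+
+static int SandboxedChild(void* arg) {
+  (void)arg;
+  /* In the new PID namespace this process is PID 1 and cannot see, let
+   * alone ptrace() or kill(), processes outside of it. */
+  if (chroot("/path/to/empty/dir") || chdir("/"))  /* illustrative path */
+    return 1;
+  /* ... continue as the zygote, with no filesystem or network access. */
+  return 0;
+}
+
+static char stack[1024 * 1024];
+
+static int SpawnSandboxed(void) {
+  /* Creating the namespaces requires privilege, which is why the helper
+   * must be SUID root. */
+  return clone(SandboxedChild, stack + sizeof(stack),
+               CLONE_NEWPID | CLONE_NEWNET | SIGCHLD, NULL);
+}
+```
+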
+## `setuid()` method
+
+_This is an alternative to the `CLONE_NEWPID` method; it is not currently
+implemented in the Chromium codebase._
+
+Instead of using `CLONE_NEWPID`, the `SUID` helper can use `setuid()` to put the
+process into a currently-unused UID, which is allocated out of a range of UIDs.
+In order to ensure that the `UID` has not been allocated for another sandbox,
+the `SUID` helper uses
+[getrlimit()](http://www.kernel.org/doc/man-pages/online/pages/man2/getrlimit.2.html)
+to set `RLIMIT_NPROC` temporarily to a soft limit of 1. (Note that the docs
+specify that [setuid()](http://www.kernel.org/doc/man-pages/online/pages/man2/setuid.2.html)
+returns `EAGAIN` if `RLIMIT_NPROC` is exceeded.) We can reset `RLIMIT_NPROC`
+afterwards in order to allow the sandboxed process to fork child processes.
+
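+A sketch of that allocation trick, assuming a reserved, hypothetical UID range
+(remember that this method is not implemented in Chromium):
+
+```c
+#include <errno.h>
+#include <sys/resource.h>
+#include <unistd.h>
+
+/* Returns 0 on success, 1 if `candidate` is already in use, -1 on error. */
+static int EnterUnusedUid(uid_t candidate) {
+  struct rlimit old, tmp;
+  if (getrlimit(RLIMIT_NPROC, &old))
+    return -1;
+  tmp = old;
+  tmp.rlim_cur = 1;  /* temporary soft limit of 1, as described above */
+  if (setrlimit(RLIMIT_NPROC, &tmp))
+    return -1;
+  /* With the soft limit at 1, setuid() fails with EAGAIN if the target
+   * UID already owns any process -- success proves the UID was unused. */
+  if (setuid(candidate)) {
+    setrlimit(RLIMIT_NPROC, &old);
+    return errno == EAGAIN ? 1 : -1;
+  }
+  /* Restore the old soft limit so the sandboxed process can fork. */
+  return setrlimit(RLIMIT_NPROC, &old);
+}
+```
+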
+As before, the `SUID` helper chroots the process.
+
+As before, the [Linux Zygote](linux_zygote.md) can set itself to be undumpable
+to stop processes in the sandbox from being able to `ptrace()` each other.
Limitations:
- * It is not possible for an unsandboxed process to ptrace() a sandboxed process because they run under different UIDs. This makes debugging harder. There is no equivalent of the `--allow-sandbox-debugging` other than turning the sandbox off with `--no-sandbox`.
- * The SUID helper can check that a UID is unused before it uses it (hence this is safe if the SUID helper is installed into multiple chroots), but it cannot prevent other root processes from putting processes into this UID after the sandbox has been started. This means we should make the UID range configurable, or distributions should reserve a UID range.
-## CLONE\_NEWNET method
+* It is not possible for an unsandboxed process to `ptrace()` a sandboxed
+ process because they run under different UIDs. This makes debugging harder.
+ There is no equivalent of the `--allow-sandbox-debugging` other than turning
+ the sandbox off with `--no-sandbox`.
+* The `SUID` helper can check that a `UID` is unused before it uses it (hence
+ this is safe if the `SUID` helper is installed into multiple chroots), but
+ it cannot prevent other root processes from putting processes into this
+ `UID` after the sandbox has been started. This means we should make the
+ `UID` range configurable, or distributions should reserve a `UID` range.
-The SUID helper uses [CLONE\_NEWNET](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html) to restrict network access.
+## `CLONE_NEWNET` method
+
+The `SUID` helper uses
+[CLONE_NEWNET](http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html)
+to restrict network access.
## Future work
-We are splitting the SUID sandbox into a separate project which will support both the CLONE\_NEWNS and setuid() methods: http://code.google.com/p/setuid-sandbox/
+We are splitting the `SUID` sandbox into a separate project which will support
+both the `CLONE_NEWNS` and `setuid()` methods:
+http://code.google.com/p/setuid-sandbox/
-Having the SUID helper as a separate project should make it easier for distributions to review and package.
+Having the `SUID` helper as a separate project should make it easier for
+distributions to review and package.
## Possible extensions
## History
-Older versions of the sandbox helper process will <i>only</i> run <tt>/opt/google/chrome/chrome</tt>. This string is hard coded (<tt>sandbox/linux/suid/sandbox.cc</tt>). If your package is going to place the Chromium binary somewhere else you need to modify this string.
+Older versions of the sandbox helper process will _only_ run
+`/opt/google/chrome/chrome`. This string is hard coded
+(`sandbox/linux/suid/sandbox.cc`). If your package is going to place the
+Chromium binary somewhere else you need to modify this string.
## See also
- * [LinuxSUIDSandboxDevelopment](LinuxSUIDSandboxDevelopment.md)
- * LinuxSandboxing
- * General information on Chromium sandboxing: http://dev.chromium.org/developers/design-documents/sandbox \ No newline at end of file
+
+* [LinuxSUIDSandboxDevelopment](linux_suid_sandbox_development.md)
+* [LinuxSandboxing](linux_sandboxing.md)
+* General information on Chromium sandboxing:
+ http://dev.chromium.org/developers/design-documents/sandbox
diff --git a/docs/linux_suid_sandbox_development.md b/docs/linux_suid_sandbox_development.md
index 950460d..4563a31 100644
--- a/docs/linux_suid_sandbox_development.md
+++ b/docs/linux_suid_sandbox_development.md
@@ -1,61 +1,82 @@
-(For context see [LinuxSUIDSandbox](http://code.google.com/p/chromium/wiki/LinuxSUIDSandbox))
+# Linux SUID Sandbox Development
+
+For context, see [LinuxSUIDSandbox](linux_suid_sandbox.md).
We need a SUID helper binary to turn on the sandbox on Linux.
-In most cases, you can run **build/update-linux-sandbox.sh** and it'll install the proper sandbox for you in /usr/local/sbin and tell you to update your .bashrc if needed.
+In most cases, you can run `build/update-linux-sandbox.sh` and it'll install
+the proper sandbox for you in `/usr/local/sbin` and tell you to update your
+`.bashrc` if needed.
+
+## Installation instructions for developers
+
+* If you have no setuid sandbox at all, you will see a message such as:
+
+ ```
+ Running without the SUID sandbox!
+ ```
-### Installation instructions for developers
+* If your setuid binary is out of date, you will get messages such as:
- * If you have no setuid sandbox at all, you will see a message such as:
-```
-Running without the SUID sandbox!
-```
- * If your setuid binary is out of date, you will get messages such as:
-```
-The setuid sandbox provides API version X, but you need Y
-```
-```
-You are using a wrong version of the setuid binary!
-```
+ ```
+ The setuid sandbox provides API version X, but you need Y
+ You are using a wrong version of the setuid binary!
+ ```
Run the script mentioned above, or do something such as:
- * Build chrome\_sandbox whenever you build chrome ("ninja -C xxx chrome chrome\_sandbox" instead of "ninja -C xxx chrome")
- * After building, run something similar to (or use the provided update-linux-sandbox.sh):
-```
-sudo cp out/Debug/chrome_sandbox /usr/local/sbin/chrome-devel-sandbox #needed if you build on NFS!
-sudo chown root:root /usr/local/sbin/chrome-devel-sandbox
-sudo chmod 4755 /usr/local/sbin/chrome-devel-sandbox
-```
+* Build `chrome_sandbox` whenever you build chrome
+ (`ninja -C xxx chrome chrome_sandbox` instead of `ninja -C xxx chrome`)
+* After building, run something similar to (or use the provided
+ `update-linux-sandbox.sh`):
- * Put this line in your ~/.bashrc (or .zshenv etc):
-```
-export CHROME_DEVEL_SANDBOX=/usr/local/sbin/chrome-devel-sandbox
-```
+ ```shell
+ # needed if you build on NFS!
+ sudo cp out/Debug/chrome_sandbox /usr/local/sbin/chrome-devel-sandbox
+ sudo chown root:root /usr/local/sbin/chrome-devel-sandbox
+ sudo chmod 4755 /usr/local/sbin/chrome-devel-sandbox
+ ```
-### Try bots and waterfall
+* Put this line in your `~/.bashrc` (or `.zshenv` etc):
-If you're installing a new bot, always install the setuid sandbox (the instructions are different than for developers, contact the Chrome troopers). If something does need to run without the setuid sandbox, use the --disable-setuid-sandbox command line flag.
+ ```
+ export CHROME_DEVEL_SANDBOX=/usr/local/sbin/chrome-devel-sandbox
+ ```
-The SUID sandbox must be enabled on the try bots and the waterfall. If you don't use it locally, things might appear to work for you, but break on the bots.
+## Try bots and waterfall
-(Note: as a temporary, stop gap measure, setting CHROME\_DEVEL\_SANDBOX to an empty string is equivalent to --disable-setuid-sandbox)
+If you're installing a new bot, always install the setuid sandbox (the
+instructions are different from those for developers; contact the Chrome
+troopers). If something does need to run without the setuid sandbox, use the
+`--disable-setuid-sandbox` command line flag.
-### Disabling the sandbox
+The `SUID` sandbox must be enabled on the try bots and the waterfall. If you
+don't use it locally, things might appear to work for you, but break on the
+bots.
-If you are certain that you don't want the setuid sandbox, use --disable-setuid-sandbox. There should be very few cases like this.
-So if you're not absolutely sure, run with the setuid sandbox.
+(Note: as a temporary, stop-gap measure, setting `CHROME_DEVEL_SANDBOX` to an
+empty string is equivalent to `--disable-setuid-sandbox`.)
-### Installation instructions for "[Raw builds of Chromium](https://commondatastorage.googleapis.com/chromium-browser-continuous/index.html)"
+## Disabling the sandbox
+
+If you are certain that you don't want the setuid sandbox, use
+`--disable-setuid-sandbox`. There should be very few cases like this. So if
+you're not absolutely sure, run with the setuid sandbox.
+
+## Installation instructions for "[Raw builds of Chromium](https://commondatastorage.googleapis.com/chromium-browser-continuous/index.html)"
If you're using a "raw" build of Chromium, do the following:
-```
-sudo chown root:root chrome_sandbox && sudo chmod 4755 chrome_sandbox && export CHROME_DEVEL_SANDBOX="$PWD/chrome_sandbox"
-./chrome
-```
-You can also make such an installation more permanent by following the [steps above](#Installation_instructions_for_developers.md) and installing chrome\_sandbox to a more permanent location.
+ sudo chown root:root chrome_sandbox && sudo chmod 4755 chrome_sandbox && \
+ export CHROME_DEVEL_SANDBOX="$PWD/chrome_sandbox"
+ ./chrome
+
+You can also make such an installation more permanent by following the
+[steps above](#Installation-instructions-for-developers) and installing
+`chrome_sandbox` to a more permanent location.
-### System-wide installations of Chromium
+## System-wide installations of Chromium
-The CHROME\_DEVEL\_SANDBOX variable is intended for developers and won't work for a system-wide installation of Chromium. Package maintainers should make sure the setuid binary is installed and defined in GYP as linux\_sandbox\_path. \ No newline at end of file
+The `CHROME_DEVEL_SANDBOX` variable is intended for developers and won't work
+for a system-wide installation of Chromium. Package maintainers should make sure
+the `setuid` binary is installed and defined in GYP as `linux_sandbox_path`.
diff --git a/docs/linux_zygote.md b/docs/linux_zygote.md
index 5c84a79..5504115 100644
--- a/docs/linux_zygote.md
+++ b/docs/linux_zygote.md
@@ -1,15 +1,36 @@
-A zygote process is one that listens for spawn requests from a master process and forks itself in response. Generally they are used because forking a process after some expensive setup has been performed can save time and share extra memory pages.
+# Linux Zygote
+
+A zygote process is one that listens for spawn requests from a master process
+and forks itself in response. Generally they are used because forking a process
+after some expensive setup has been performed can save time and share extra
+memory pages.
-On Linux, for Chromium, this is not the point, and measurements suggest that the time and memory savings are minimal or negative.
+On Linux, for Chromium, neither of these benefits is the motivation, and
+measurements suggest that the time and memory savings are minimal or negative.
-We use it because it's the only reasonable way to keep a reference to a binary and a set of shared libraries that can be exec'ed. In the model used on Windows and Mac, renderers are exec'ed as needed from the chrome binary. However, if the chrome binary, or any of its shared libraries are updated while Chrome is running, we'll end up exec'ing the wrong version. A version _x_ browser might be talking to a version _y_ renderer. Our IPC system does not support this (and does not want to!).
+We use it because it's the only reasonable way to keep a reference to a binary
+and a set of shared libraries that can be exec'ed. In the model used on Windows
+and Mac, renderers are exec'ed as needed from the chrome binary. However, if the
+chrome binary, or any of its shared libraries are updated while Chrome is
+running, we'll end up exec'ing the wrong version. A version _x_ browser might be
+talking to a version _y_ renderer. Our IPC system does not support this (and
+does not want to!).
-So we would like to keep a reference to a binary and its shared libraries and exec from these. However, unless we are going to write our own <tt>ld.so</tt>, there's no way to do this.
+So we would like to keep a reference to a binary and its shared libraries and
+exec from these. However, unless we are going to write our own `ld.so`, there's
+no way to do this.
-Instead, we exec the prototypical renderer at the beginning of the browser execution. When we need more renderers, we signal this prototypical process (the zygote) to fork itself. The zygote is always the correct version and, by exec'ing one, we make sure the renderers have a different address space randomisation than the browser.
+Instead, we exec the prototypical renderer at the beginning of the browser
+execution. When we need more renderers, we signal this prototypical process (the
+zygote) to fork itself. The zygote is always the correct version and, by
+exec'ing one, we make sure the renderers have a different address space
+randomisation than the browser.
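+
+Conceptually the zygote's main loop is tiny. A sketch (the real `ZygoteMain`,
+referenced below, does much more: sandbox setup, a real request protocol,
+reporting the child PID):
+
+```c
+#include <sys/types.h>
+#include <unistd.h>
+
+static void ZygoteLoop(int control_fd, int (*renderer_main)(void)) {
+  char request;
+  /* Block until the browser asks for a new renderer. */
+  while (read(control_fd, &request, 1) == 1) {
+    pid_t pid = fork();
+    if (pid == 0) {
+      /* Child: becomes a renderer with no exec() -- it already has the
+       * correct binary and shared libraries mapped. */
+      _exit(renderer_main());
+    }
+    /* Parent: would report `pid` back to the browser, then keep
+     * listening for the next request. */
+  }
+}
+```
+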
-The zygote process is triggered by the <tt>--type=zygote</tt> command line flag, which causes <tt>ZygoteMain</tt> (in <tt>chrome/browser/zygote_main_linux.cc</tt>) to be run. The zygote is launched from <tt>chrome/browser/zygote_host_linux.cc</tt>.
+The zygote process is triggered by the `--type=zygote` command line flag, which
+causes `ZygoteMain` (in `chrome/browser/zygote_main_linux.cc`) to be run. The
+zygote is launched from `chrome/browser/zygote_host_linux.cc`.
-Signaling the zygote for a new renderer happens in <tt>chrome/browser/child_process_launcher.cc</tt>.
+Signaling the zygote for a new renderer happens in
+`chrome/browser/child_process_launcher.cc`.
-You can use the <tt>--zygote-cmd-prefix</tt> flag to debug the zygote process. If you use <tt>--renderer-cmd-prefix</tt> then the zygote will be bypassed and renderers will be exec'ed afresh every time. \ No newline at end of file
+You can use the `--zygote-cmd-prefix` flag to debug the zygote process. If you
+use `--renderer-cmd-prefix` then the zygote will be bypassed and renderers will
+be exec'ed afresh every time.
diff --git a/docs/mac_build_instructions.md b/docs/mac_build_instructions.md
index cb2a7b8..30c25f9 100644
--- a/docs/mac_build_instructions.md
+++ b/docs/mac_build_instructions.md
@@ -1,26 +1,49 @@
-# Prerequisites
+# Mac Build Instructions
- * A Mac running 10.8+.
- * http://developer.apple.com/tools/xcode/XCode, 5+
- * Install [gclient](http://dev.chromium.org/developers/how-tos/install-depot-tools), part of the [depot\_tools](http://dev.chromium.org/developers/how-tos/depottools) package ([download](http://dev.chromium.org/developers/how-tos/install-depot-tools)). gclient is a wrapper around svn that we use to manage our working copies.
- * Install [git](http://code.google.com/p/git-osx-installer/) on OSX 10.8. The system git shipping with OS X 10.9 / Xcode 5 works well too.
- * (optional -- required if you don't have some commands such as svn natively) Install Xcode's "Command Line Tools" via Xcode menu -> Preferences -> Downloads
+[TOC]
-# Getting the code
+## Prerequisites
-[Check out the source code](http://dev.chromium.org/developers/how-tos/get-the-code) using Git. If you're new to the project, you can skip all the information about git-svn, since you will not be committing directly to the repository.
+* A Mac running 10.8+.
+* Xcode 5+: http://developer.apple.com/tools/xcode/XCode
+* Install
+ [gclient](http://dev.chromium.org/developers/how-tos/install-depot-tools),
+ part of the
+ [depot_tools](http://dev.chromium.org/developers/how-tos/depottools) package
+ ([download](http://dev.chromium.org/developers/how-tos/install-depot-tools)).
+ gclient is a wrapper around svn that we use to manage our working copies.
+* Install [git](http://code.google.com/p/git-osx-installer/) on OSX 10.8. The
+ system git shipping with OS X 10.9 / Xcode 5 works well too.
+* (optional -- required if you don't have some commands such as svn natively)
+ Install Xcode's "Command Line Tools" via Xcode menu -> Preferences ->
+ Downloads
-Before checking out, go to the [waterfall](http://build.chromium.org/buildbot/waterfall/) and check that the source tree is open (to avoid pulling a broken tree).
+## Getting the code
-The path to the build directory should not contain spaces (e.g. not "~/Mac OS X/chromium"), as this will cause the build to fail. This includes your drive name, the default "Macintosh HD2" for a second drive has a space.
+[Check out the source code](http://dev.chromium.org/developers/how-tos/get-the-code)
+using Git. If you're new to the project, you can skip all the information about
+git-svn, since you will not be committing directly to the repository.
-# Building
+Before checking out, go to the
+[waterfall](http://build.chromium.org/buildbot/waterfall/) and check that the
+source tree is open (to avoid pulling a broken tree).
-Chromium on OS X can only be built using the [Ninja](NinjaBuild.md) tool and the [Clang](Clang.md) compiler. See both of those pages for further details on how to tune the build.
+The path to the build directory should not contain spaces (e.g. not
+`~/Mac OS X/chromium`), as this will cause the build to fail. This includes your
+drive name; the default "Macintosh HD2" for a second drive contains a space.
-Before you build, you may want to [install API keys](https://sites.google.com/a/chromium.org/dev/developers/how-tos/api-keys) so that Chrome-integrated Google services work. This step is optional if you aren't testing those features.
+## Building
-## Raising system-wide and per-user process limits
+Chromium on OS X can only be built using the [Ninja](ninja_build.md) tool and
+the [Clang](clang.md) compiler. See both of those pages for further details on
+how to tune the build.
+
+Before you build, you may want to
+[install API keys](https://sites.google.com/a/chromium.org/dev/developers/how-tos/api-keys)
+so that Chrome-integrated Google services work. This step is optional if you
+aren't testing those features.
+
+### Raising system-wide and per-user process limits
If you see errors like the following:
@@ -29,90 +52,131 @@ clang: error: unable to execute command: posix_spawn failed: Resource temporaril
clang: error: clang frontend command failed due to signal (use -v to see invocation)
```
-you may be running into too-low limits on the number of concurrent processes allowed on the machine. Check:
+you may be running into too-low limits on the number of concurrent processes
+allowed on the machine. Check:
-```
-sysctl kern.maxproc
-sysctl kern.maxprocperuid
-```
+ sysctl kern.maxproc
+ sysctl kern.maxprocperuid
You can increase them with e.g.:
-```
-sudo sysctl -w kern.maxproc=2500
-sudo sysctl -w kern.maxprocperuid=2500
-```
+ sudo sysctl -w kern.maxproc=2500
+ sudo sysctl -w kern.maxprocperuid=2500
-But normally this shouldn't be necessary if you're building on 10.7 or higher. If you see this, check if some rogue program spawned hundreds of processes and kill them first.
+But normally this shouldn't be necessary if you're building on 10.7 or higher.
+If you see this, check if some rogue program spawned hundreds of processes and
+kill them first.
-# Faster builds
+## Faster builds
-Full rebuilds are about the same speed in Debug and Release, but linking is a lot faster in Release builds.
+Full rebuilds are about the same speed in Debug and Release, but linking is a
+lot faster in Release builds.
Run
-```
-GYP_DEFINES=fastbuild=1 build/gyp_chromium
-```
-to disable debug symbols altogether, this makes both full rebuilds and linking faster (at the cost of not getting symbolized backtraces in gdb).
-You might also want to [install ccache](CCacheMac.md) to speed up the build.
+ GYP_DEFINES=fastbuild=1 build/gyp_chromium
-# Running
+to disable debug symbols altogether; this makes both full rebuilds and linking
+faster (at the cost of not getting symbolized backtraces in gdb).
-All build output is located in the `out` directory (in the example above, `~/chromium/src/out`). You can find the applications at `{Debug|Release}/ContentShell.app` and `{Debug|Release}/Chromium.app`, depending on the selected configuration.
+You might also want to [install ccache](ccache_mac.md) to speed up the build.
-# Unit Tests
+## Running
-We have several unit test targets that build, and tests that run and pass. A small subset of these is:
+All build output is located in the `out` directory (in the example above,
+`~/chromium/src/out`). You can find the applications at
+`{Debug|Release}/ContentShell.app` and `{Debug|Release}/Chromium.app`, depending
+on the selected configuration.
- * `unit_tests` from `chrome/chrome.gyp`
- * `base_unittests` from `base/base.gyp`
- * `net_unittests` from `net/net.gyp`
- * `url_unittests` from `url/url.gyp`
+## Unit Tests
-When these tests are built, you will find them in the `out/{Debug|Release}` directory. You can run them from the command line:
-```
-~/chromium/src/out/Release/unit_tests
-```
+We have several unit test targets that build, and tests that run and pass. A
+small subset of these is:
-# Coding
+* `unit_tests` from `chrome/chrome.gyp`
+* `base_unittests` from `base/base.gyp`
+* `net_unittests` from `net/net.gyp`
+* `url_unittests` from `url/url.gyp`
-According to the [Chromium style guide](http://dev.chromium.org/developers/coding-style) code is [not allowed to have whitespace on the ends of lines](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Horizontal_Whitespace). If you edit in Xcode, know that it loves adding whitespace to the ends of lines which can make editing in Xcode more painful than it should be. The [GTM Xcode Plugin](http://code.google.com/p/google-toolbox-for-mac/downloads/list) adds a preference panel to Xcode that allows you to strip whitespace off of the ends of lines on save. Documentation on how to install it is [here](http://code.google.com/p/google-toolbox-for-mac/wiki/GTMXcodePlugin).
+When these tests are built, you will find them in the `out/{Debug|Release}`
+directory. You can run them from the command line:
-# Debugging
+ ~/chromium/src/out/Release/unit_tests
-Good debugging tips can be found [here](http://dev.chromium.org/developers/how-tos/debugging-on-os-x). If you would like to debug in a graphical environment, rather than using `lldb` at the command line, that is possible without building in Xcode. See [Debugging in Xcode](http://www.chromium.org/developers/debugging-on-os-x/building-with-ninja-debugging-with-xcode) for information on how.
-# Contributing
+## Coding
-Once you’re comfortable with building Chromium, check out [Contributing Code](http://dev.chromium.org/developers/contributing-code) for information about writing code for Chromium and contributing it.
+According to the
+[Chromium style guide](http://dev.chromium.org/developers/coding-style) code is
+[not allowed to have whitespace on the ends of lines](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Horizontal_Whitespace).
+If you edit in Xcode, know that it loves adding whitespace to the ends of lines
+which can make editing in Xcode more painful than it should be. The
+[GTM Xcode Plugin](http://code.google.com/p/google-toolbox-for-mac/downloads/list)
+adds a preference panel to Xcode that allows you to strip whitespace off of the
+ends of lines on save. Documentation on how to install it is
+[here](http://code.google.com/p/google-toolbox-for-mac/wiki/GTMXcodePlugin).
-# Using Xcode-Ninja Hybrid
+## Debugging
-While using Xcode is unsupported, GYP supports a hybrid approach of using ninja for building, but Xcode for editing and driving compliation. Xcode can still be slow, but it runs fairly well even **with indexing enabled**.
+Good debugging tips can be found
+[here](http://dev.chromium.org/developers/how-tos/debugging-on-os-x). If you
+would like to debug in a graphical environment, rather than using `lldb` at the
+command line, that is possible without building in Xcode. See
+[Debugging in Xcode](http://www.chromium.org/developers/debugging-on-os-x/building-with-ninja-debugging-with-xcode)
+for information on how.
-With hybrid builds, compilation is still handled by ninja, and can be run by the command line (e.g. ninja -C out/Debug chrome) or by choosing the chrome target in the hybrid workspace and choosing build.
+## Contributing
-To use Xcode-Ninja Hybrid, set `GYP_GENERATORS=ninja,xcode-ninja`.
+Once you’re comfortable with building Chromium, check out
+[Contributing Code](http://dev.chromium.org/developers/contributing-code) for
+information about writing code for Chromium and contributing it.
-Due to the way Xcode parses ninja output paths, it's also necessary to change the main gyp location to anything two directories deep. Otherwise Xcode build output will not be clickable. Adding `xcode_ninja_main_gyp=src/build/ninja/all.ninja.gyp` to your GYP\_GENERATOR\_FLAGS will fix this.
+## Using Xcode-Ninja Hybrid
-After generating the project files with gclient runhooks, open `src/build/ninja/all.ninja.xcworkspace`.
+While using Xcode is unsupported, GYP supports a hybrid approach of using ninja
+for building, but Xcode for editing and driving compilation. Xcode can still be
+slow, but it runs fairly well even **with indexing enabled**.
-You may run into a problem where http://YES is opened as a new tab every time you launch Chrome. To fix this, open the scheme editor for the Run scheme, choose the Options tab, and uncheck "Allow debugging when using document Versions Browser". When this option is checked, Xcode adds --NSDocumentRevisionsDebugMode YES to the launch arguments, and the YES gets interpreted as a URL to open.
+With hybrid builds, compilation is still handled by ninja, and can be run from
+the command line (e.g. `ninja -C out/Debug chrome`) or by choosing the chrome
+target in the hybrid workspace and choosing build.
-If you want to limit the number of targets visible, which is known to improve Xcode performance, add `xcode_ninja_executable_target_pattern=%target%` where %target% is a regular expression matching executable targets you'd like to include.
+To use Xcode-Ninja Hybrid, set `GYP_GENERATORS=ninja,xcode-ninja`.
-To include non-executable targets, use `xcode_ninja_target_pattern=All_iOS`.
+Due to the way Xcode parses ninja output paths, it's also necessary to change
+the main gyp location to anything two directories deep. Otherwise Xcode build
+output will not be clickable. Adding
+`xcode_ninja_main_gyp=src/build/ninja/all.ninja.gyp` to your
+`GYP_GENERATOR_FLAGS` will fix this.
-If you have problems building, join us in `#chromium` on `irc.freenode.net` and ask there. As mentioned above, be sure that the [waterfall](http://build.chromium.org/buildbot/waterfall/) is green and the tree is open before checking out. This will increase your chances of success.
+After generating the project files with `gclient runhooks`, open
+`src/build/ninja/all.ninja.xcworkspace`.
-There is also a dedicated [Xcode tips](Xcode4Tips.md) page that you may want to read.
+You may run into a problem where http://YES is opened as a new tab every time
+you launch Chrome. To fix this, open the scheme editor for the Run scheme,
+choose the Options tab, and uncheck "Allow debugging when using document
+Versions Browser". When this option is checked, Xcode adds
+`--NSDocumentRevisionsDebugMode YES` to the launch arguments, and the `YES` gets
+interpreted as a URL to open.
+If you want to limit the number of targets visible, which is known to improve
+Xcode performance, add `xcode_ninja_executable_target_pattern=%target%` where
+`%target%` is a regular expression matching executable targets you'd like to
+include.
-# Using Emacs as EDITOR for "git commit"
+To include non-executable targets, use `xcode_ninja_target_pattern=All_iOS`.
+
+If you have problems building, join us in `#chromium` on `irc.freenode.net` and
+ask there. As mentioned above, be sure that the
+[waterfall](http://build.chromium.org/buildbot/waterfall/) is green and the tree
+is open before checking out. This will increase your chances of success.
-Using the [Cocoa version of Emacs](http://emacsformacosx.com/) as the EDITOR environment variable on Mac OS will cause "git commit" to open the message in a window underneath all the others. To fix this, create a shell script somewhere (call it $HOME/bin/[EmacsEditor](EmacsEditor.md) in this example) containing the following:
+## Using Emacs as `EDITOR` for `git commit`
+
+Using the [Cocoa version of Emacs](http://emacsformacosx.com/) as the `EDITOR`
+environment variable on Mac OS will cause `git commit` to open the message in a
+window underneath all the others. To fix this, create a shell script somewhere
+(call it `$HOME/bin/EmacsEditor` in this example) containing the following:
```
#!/bin/sh
@@ -133,11 +197,10 @@ do
((++i))
done
-open -nWa /Applications/Emacs.app/Contents/MacOS/Emacs --args --no-desktop "${full_paths[@]}"
+open -nWa /Applications/Emacs.app/Contents/MacOS/Emacs --args --no-desktop \
+ "${full_paths[@]}"
```
-and in your .bashrc or similar,
+and in your `.bashrc` or similar,
-```
-export EDITOR=$HOME/bin/EmacsEditor
-``` \ No newline at end of file
+ export EDITOR=$HOME/bin/EmacsEditor
diff --git a/docs/mandriva_msttcorefonts.md b/docs/mandriva_msttcorefonts.md
index e1ef0d1..fad2633 100644
--- a/docs/mandriva_msttcorefonts.md
+++ b/docs/mandriva_msttcorefonts.md
@@ -1,30 +1,31 @@
-# Introduction
+# `msttcorefonts` on Mandriva
-The msttcorefonts are needed to build Chrome but are not available for Mandriva. Building your own is not hard though and only takes about 2 minutes to set up and complete
+The `msttcorefonts` are needed to build Chrome but are not available for
+Mandriva. Building your own is not hard though and only takes about 2 minutes
+to set up and complete:
+ urpmi rpm-build cabextract
-# Details
+Download this script, make it executable and run it:
+http://wiki.mandriva.com/en/uploads/3/3a/Rpmsetup.sh
-```
-urpmi rpm-build cabextract
-```
+It will create a directory `~/rpm` and some hidden files in your home directory.
-download this script, make it executable and run it http://wiki.mandriva.com/en/uploads/3/3a/Rpmsetup.sh it will create a directory ~/rpm and some hidden files in your home directory
+Open the file `~/.rpmmacros` and comment out the following lines by putting a
+`#` in front of them, e.g. like this (because most likely you won't have a gpg
+key set up and creating the package will fail if you leave these lines):
-open the file ~/.rpmmacros and comment out the following lines by putting a # in front of them, eg like this (because most likely you won't have a gpg key set up and creating the package will fail if you leave these lines):
+    #%_signature gpg
-#%_signature gpg_
+    #%_gpg_name Mandrivalinux
-#%_gpg\_name Mandrivalinux_
+    #%_gpg_path ~/.gnupg
-#%_gpg\_path ~/.gnupg_
+Download the following file and save it to `~/rpm/SPECS`:
+http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec
-download the following file http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec and save it to ~/rpm/SPECS
+ cd ~/rpm/SPECS
+ rpmbuild -bb msttcorefonts-2.0-1.spec
-```
-cd ~/rpm/SPECS
-
-rpmbuild -bb msttcorefonts-2.0-1.spec
-```
-
-the rpm will be build and be put in ~/rpm/RPMS/noarch ready to install \ No newline at end of file
+The rpm will be built and put in `~/rpm/RPMS/noarch`, ready to install.
diff --git a/docs/mojo_in_chromium.md b/docs/mojo_in_chromium.md
index d2a7d19..b9919e2 100644
--- a/docs/mojo_in_chromium.md
+++ b/docs/mojo_in_chromium.md
@@ -1,60 +1,116 @@
-**THIS DOCUIMENT IS A WORK IN PROGRESS.** As long as this notice exists, you should probably ignore everything below it.
+# Mojo in Chromium
+**THIS DOCUMENT IS A WORK IN PROGRESS.** As long as this notice exists, you
+should probably ignore everything below it.
+This document is intended to serve as a Mojo primer for Chromium developers. No
+prior knowledge of Mojo is assumed, but you should have a decent grasp of C++
+and be familiar with Chromium's multi-process architecture as well as common
+concepts used throughout Chromium such as smart pointers, message loops,
+callback binding, and so on.
-# Introduction
-
-This document is intended to serve as a Mojo primer for Chromium developers. No prior knowledge of Mojo is assumed, but you should have a decent grasp of C++ and be familiar with Chromium's multi-process architecture as well as common concepts used throughout Chromium such as smart pointers, message loops, callback binding, and so on.
+[TOC]
## Should I Bother Reading This?
-If you're planning to build a Chromium feature that needs IPC and you aren't already using Mojo, you probably want to read this. **Legacy IPC** -- _i.e._, `foo_messages.h` files, message filters, and the suite of `IPC_MESSAGE_*` macros -- **is on the verge of deprecation.**
+If you're planning to build a Chromium feature that needs IPC and you aren't
+already using Mojo, you probably want to read this. **Legacy IPC** -- _i.e._,
+`foo_messages.h` files, message filters, and the suite of `IPC_MESSAGE_*` macros
+-- **is on the verge of deprecation.**
## Why Mojo?
-Mojo provides IPC primitives for pushing messages and data around between transferrable endpoints which may or may not cross process boundaries; it simplifies threading with regard to IPC; it standardizes message serialization in a way that's resilient to versioning issues; and it can be used with relative ease and consistency across a number of languages including C++, Java, and `JavaScript` -- all languages which comprise a significant share of Chromium code.
+Mojo provides IPC primitives for pushing messages and data around between
+transferable endpoints which may or may not cross process boundaries; it
+simplifies threading with regard to IPC; it standardizes message serialization
+in a way that's resilient to versioning issues; and it can be used with relative
+ease and consistency across a number of languages including C++, Java, and
+`JavaScript` -- all languages which comprise a significant share of Chromium
+code.
-The messaging protocol doesn't strictly need to be used for IPC though, and there are some higher-level reasons for this adoption and for the specific approach to integration outlined in this document.
+The messaging protocol doesn't strictly need to be used for IPC though, and
+there are some higher-level reasons for this adoption and for the specific
+approach to integration outlined in this document.
### Code Health
-At the moment we have fairly weak separation between components, with DEPS being the strongest line of defense against increasing complexity.
+At the moment we have fairly weak separation between components, with DEPS being
+the strongest line of defense against increasing complexity.
-A component Foo might hold a reference to some bit of component Bar's internal state, or it might expect Bar to initialize said internal state in some particular order. These sorts of problems are reasonably well-mitigated by the code review process, but they can (and do) still slip through the cracks, and they have a noticeable cumulative effect on complexity as the code base continues to grow.
+A component Foo might hold a reference to some bit of component Bar's internal
+state, or it might expect Bar to initialize said internal state in some
+particular order. These sorts of problems are reasonably well-mitigated by the
+code review process, but they can (and do) still slip through the cracks, and
+they have a noticeable cumulative effect on complexity as the code base
+continues to grow.
-We think we can make a lasting positive impact on code health by establishing more concrete boundaries between components, and this is something a library like Mojo gives us an opportunity to do.
+We think we can make a lasting positive impact on code health by establishing
+more concrete boundaries between components, and this is something a library
+like Mojo gives us an opportunity to do.
### Modularity
-In addition to code health -- which alone could be addressed in any number of ways that don't involve Mojo -- this approach opens doors to build and distribute parts of Chrome separately from the main binary.
+In addition to code health -- which alone could be addressed in any number of
+ways that don't involve Mojo -- this approach opens doors to build and
+distribute parts of Chrome separately from the main binary.
-While we're not currently taking advantage of this capability, doing so remains a long-term goal due to prohibitive binary size constraints in emerging mobile markets. Many open questions around the feasibility of this goal should be answered by the experimental Mandoline project as it unfolds, but the Chromium project can be technically prepared for such a transition in the meantime.
+While we're not currently taking advantage of this capability, doing so remains
+a long-term goal due to prohibitive binary size constraints in emerging mobile
+markets. Many open questions around the feasibility of this goal should be
+answered by the experimental Mandoline project as it unfolds, but the Chromium
+project can be technically prepared for such a transition in the meantime.
### Mandoline
-The Mandoline project is producing a potential replacement for `src/content`. Because Mandoline components are Mojo apps, and Chromium is now capable of loading Mojo apps (somethings we'll discuss later), Mojo apps can be shared between both projects with minimal effort. Developing your feature as or within a Mojo application can mean you're contributing to both Chromium and Mandoline.
+The Mandoline project is producing a potential replacement for `src/content`.
+Because Mandoline components are Mojo apps, and Chromium is now capable of
+loading Mojo apps (something we'll discuss later), Mojo apps can be shared
+between both projects with minimal effort. Developing your feature as or within
+a Mojo application can mean you're contributing to both Chromium and Mandoline.
-# Mojo Overview
+## Mojo Overview
-This section provides a general overview of Mojo and some of its API features. You can probably skip straight to [Your First Mojo Application](#Your_First_Mojo_Application.md) if you just want to get to some practical sample code.
+This section provides a general overview of Mojo and some of its API features.
+You can probably skip straight to
+[Your First Mojo Application](#Your-First-Mojo-Application) if you just want to
+get to some practical sample code.
-The Mojo Embedder Development Kit (EDK) provides a suite of low-level IPC primitives: **message pipes**, **data pipes**, and **shared buffers**. We'll focus primarily on message pipes and the C++ bindings API in this document.
+The Mojo Embedder Development Kit (EDK) provides a suite of low-level IPC
+primitives: **message pipes**, **data pipes**, and **shared buffers**. We'll
+focus primarily on message pipes and the C++ bindings API in this document.
_TODO: Java and JS bindings APIs should also be covered here._
-## Message Pipes
+### Message Pipes
-A message pipe is a lightweight primitive for reliable, bidirectional, queued transfer of relatively small packets of data. Every pipe endpoint is identified by a **handle** -- a unique process-wide integer identifying the endpoint to the EDK.
+A message pipe is a lightweight primitive for reliable, bidirectional, queued
+transfer of relatively small packets of data. Every pipe endpoint is identified
+by a **handle** -- a unique process-wide integer identifying the endpoint to the
+EDK.
-A single message across a pipe consists of a binary payload and an array of zero or more handles to be transferred. A pipe's endpoints may live in the same process or in two different processes.
+A single message across a pipe consists of a binary payload and an array of zero
+or more handles to be transferred. A pipe's endpoints may live in the same
+process or in two different processes.
-Pipes are easy to create. The `mojo::MessagePipe` type (see [//third\_party/mojo/src/mojo/public/cpp/system/message\_pipe.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/system/message_pipe.h)) provides a nice class wrapper with each endpoint represented as a scoped handle type (see members `handle0` and `handle1` and the definition of `mojo::ScopedMessagePipeHandle`). In the same header you can find `WriteMessageRaw` and `ReadMessageRaw` definitions. These are in theory all one needs to begin pushing things from one endpoint to the other.
+Pipes are easy to create. The `mojo::MessagePipe` type (see
+`/third_party/mojo/src/mojo/public/cpp/system/message_pipe.h`) provides a nice
+class wrapper with each endpoint represented as a scoped handle type (see
+members `handle0` and `handle1` and the definition of
+`mojo::ScopedMessagePipeHandle`). In the same header you can find
+`WriteMessageRaw` and `ReadMessageRaw` definitions. These are in theory all one
+needs to begin pushing things from one endpoint to the other.
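+
+For a taste of the raw API, here's a minimal sketch of pushing a payload from
+one endpoint to the other. The authoritative signatures live in
+`message_pipe.h`, so treat this as illustrative:
+
+```
+mojo::MessagePipe pipe;
+
+// Write a small payload (and no handles) into one endpoint...
+const char kHello[] = "hello";
+mojo::WriteMessageRaw(pipe.handle0.get(), kHello, sizeof(kHello), nullptr, 0,
+                      MOJO_WRITE_MESSAGE_FLAG_NONE);
+
+// ...and read it back out of the other endpoint.
+char buffer[64];
+uint32_t num_bytes = sizeof(buffer);
+mojo::ReadMessageRaw(pipe.handle1.get(), buffer, &num_bytes, nullptr, nullptr,
+                     MOJO_READ_MESSAGE_FLAG_NONE);
+```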
-While it's worth being aware of `mojo::MessagePipe` and the associated raw I/O functions, you will rarely if ever have a use for them. Instead you'll typically use bindings code generated from mojom interface definitions, along with the public bindings API which mostly hides the underlying pipes.
+While it's worth being aware of `mojo::MessagePipe` and the associated raw I/O
+functions, you will rarely if ever have a use for them. Instead you'll typically
+use bindings code generated from mojom interface definitions, along with the
+public bindings API which mostly hides the underlying pipes.
-## Mojom Bindings
+### Mojom Bindings
-Mojom is the IDL for Mojo interfaces. When given a mojom file, the bindings generator outputs a collection of bindings libraries for each supported language. Mojom syntax is fairly straightforward (TODO: Link to a mojom language spec?). Consider the example mojom file below:
+Mojom is the IDL for Mojo interfaces. When given a mojom file, the bindings
+generator outputs a collection of bindings libraries for each supported
+language. Mojom syntax is fairly straightforward (TODO: Link to a mojom language
+spec?). Consider the example mojom file below:
```
// frobinator.mojom
@@ -64,23 +120,39 @@ interface Frobinator {
};
```
-This can be used to generate bindings for a very simple `Frobinator` interface. Bindings are generated at build time and will match the location of the mojom source file itself, mapped into the generated output directory for your Chromium build. In this case one can expect to find files named `frobinator.mojom.js`, `frobinator.mojom.cc`, `frobinator.mojom.h`, _etc._
+This can be used to generate bindings for a very simple `Frobinator` interface.
+Bindings are generated at build time and will match the location of the mojom
+source file itself, mapped into the generated output directory for your Chromium
+build. In this case one can expect to find files named `frobinator.mojom.js`,
+`frobinator.mojom.cc`, `frobinator.mojom.h`, _etc._
-The C++ header (`frobinator.mojom.h`) generated from this mojom will define a pure virtual class interface named `frob::Frobinator` with a pure virtual method of signature `void Frobinate()`. Any class which implements this interface is effectively a `Frobinator` service.
+The C++ header (`frobinator.mojom.h`) generated from this mojom will define a
+pure virtual class interface named `frob::Frobinator` with a pure virtual method
+of signature `void Frobinate()`. Any class which implements this interface is
+effectively a `Frobinator` service.
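+
+Roughly speaking -- eliding the serialization and validation machinery the
+generator also emits -- the interface portion of that header looks like this
+sketch:
+
+```
+namespace frob {
+
+class Frobinator {
+ public:
+  virtual ~Frobinator() {}
+  virtual void Frobinate() = 0;
+};
+
+}  // namespace frob
+```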
-## C++ Bindings API
+### C++ Bindings API
-Before we see an example implementation and usage of the Frobinator, there are a handful of interesting bits in the public C++ bindings API you should be familiar with. These complement generated bindings code and generally obviate any need to use a `mojo::MessagePipe` directly.
+Before we see an example implementation and usage of the Frobinator, there are a
+handful of interesting bits in the public C++ bindings API you should be
+familiar with. These complement generated bindings code and generally obviate
+any need to use a `mojo::MessagePipe` directly.
-In all of the cases below, `T` is the type of a generated bindings class interface, such as the `frob::Frobinator` discussed above.
+In all of the cases below, `T` is the type of a generated bindings class
+interface, such as the `frob::Frobinator` discussed above.
-### `mojo::InterfacePtr<T>`
+#### `mojo::InterfacePtr<T>`
-Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/interface\_ptr.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/interface_ptr.h).
+Defined in `/third_party/mojo/src/mojo/public/cpp/bindings/interface_ptr.h`.
-`mojo::InterfacePtr<T>` is a typed proxy for a service of type `T`, which can be bound to a message pipe endpoint. This class implements every interface method on `T` by serializing a message (encoding the method call and its arguments) and writing it to the pipe (if bound.) This is the standard way for C++ code to talk to any Mojo service.
+`mojo::InterfacePtr<T>` is a typed proxy for a service of type `T`, which can be
+bound to a message pipe endpoint. This class implements every interface method
+on `T` by serializing a message (encoding the method call and its arguments) and
+writing it to the pipe (if bound.) This is the standard way for C++ code to talk
+to any Mojo service.
-For illustrative purposes only, we can create a message pipe and bind an `InterfacePtr` to one end as follows:
+For illustrative purposes only, we can create a message pipe and bind an
+`InterfacePtr` to one end as follows:
```
mojo::MessagePipe pipe;
@@ -89,43 +161,62 @@ For illustrative purposes only, we can create a message pipe and bind an `Interf
mojo::InterfacePtrInfo<frob::Frobinator>(pipe.handle0.Pass(), 0u));
```
-You could then call `frobinator->Frobinate()` and read the encoded `Frobinate` message from the other side of the pipe (`handle1`.) You most likely don't want to do this though, because as you'll soon see there's a nicer way to establish service pipes.
+You could then call `frobinator->Frobinate()` and read the encoded `Frobinate`
+message from the other side of the pipe (`handle1`.) You most likely don't want
+to do this though, because as you'll soon see there's a nicer way to establish
+service pipes.
-### `mojo::InterfaceRequest<T>`
+#### `mojo::InterfaceRequest<T>`
-Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/interface\_request.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h).
+Defined in `/third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h`.
-`mojo::InterfaceRequest<T>` is a typed container for a message pipe endpoint that should _eventually_ be bound to a service implementation. An `InterfaceRequest` doesn't actually _do_ anything, it's just a way of holding onto an endpoint without losing interface type information.
+`mojo::InterfaceRequest<T>` is a typed container for a message pipe endpoint
+that should _eventually_ be bound to a service implementation. An
+`InterfaceRequest` doesn't actually _do_ anything, it's just a way of holding
+onto an endpoint without losing interface type information.
-A common usage pattern is to create a pipe, bind one end to an `InterfacePtr<T>`, and pass the other end off to someone else (say, over some other message pipe) who is expected to eventually bind it to a concrete service implementation. `InterfaceRequest<T>` is here for that purpose and is, as we'll see later, a first-class concept in Mojom interface definitions.
+A common usage pattern is to create a pipe, bind one end to an
+`InterfacePtr<T>`, and pass the other end off to someone else (say, over some
+other message pipe) who is expected to eventually bind it to a concrete service
+implementation. `InterfaceRequest<T>` is here for that purpose and is, as we'll
+see later, a first-class concept in Mojom interface definitions.
-As with `InterfacePtr<T>`, we can manually bind an `InterfaceRequest<T>` to a pipe endpoint:
+As with `InterfacePtr<T>`, we can manually bind an `InterfaceRequest<T>` to a
+pipe endpoint:
```
- mojo::MessagePipe pipe;
+mojo::MessagePipe pipe;
- mojo::InterfacePtr<frob::Frobinator> frobinator;
- frobinator.Bind(
- mojo::InterfacePtrInfo<frob::Frobinator>(pipe.handle0.Pass(), 0u));
+mojo::InterfacePtr<frob::Frobinator> frobinator;
+frobinator.Bind(
+ mojo::InterfacePtrInfo<frob::Frobinator>(pipe.handle0.Pass(), 0u));
- mojo::InterfaceRequest<frob::Frobinator> frobinator_request;
- frobinator_request.Bind(pipe.handle1.Pass());
+mojo::InterfaceRequest<frob::Frobinator> frobinator_request;
+frobinator_request.Bind(pipe.handle1.Pass());
```
-At this point we could start making calls to `frobinator->Frobinate()` as before, but they'll just sit in queue waiting for the request side to be bound. Note that the basic logic in the snippet above is such a common pattern that there's a convenient API function which does it for us.
+At this point we could start making calls to `frobinator->Frobinate()` as
+before, but they'll just sit in queue waiting for the request side to be bound.
+Note that the basic logic in the snippet above is such a common pattern that
+there's a convenient API function which does it for us.
-### `mojo::GetProxy<T>`
+#### `mojo::GetProxy<T>`
-Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/interface\_request.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h).
+Defined in `/third_party/mojo/src/mojo/public/cpp/bindings/interface_request.h`.
-`mojo::GetProxy<T>` is the function you will most commonly use to create a new message pipe. Its signature is as follows:
+`mojo::GetProxy<T>` is the function you will most commonly use to create a new
+message pipe. Its signature is as follows:
```
template <typename T>
mojo::InterfaceRequest<T> GetProxy(mojo::InterfacePtr<T>* ptr);
```
-This function creates a new message pipe, binds one end to the given `InterfacePtr` argument, and binds the other end to a new `InterfaceRequest` which it then returns. Equivalent to the sample code just above is the following snippet:
+This function creates a new message pipe, binds one end to the given
+`InterfacePtr` argument, and binds the other end to a new `InterfaceRequest`
+which it then returns. Equivalent to the sample code just above is the following
+snippet:
```
mojo::InterfacePtr<frob::Frobinator> frobinator;
@@ -133,11 +224,16 @@ This function creates a new message pipe, binds one end to the given `InterfaceP
mojo::GetProxy(&frobinator);
```
-### `mojo::Binding<T>`
+#### `mojo::Binding<T>`
-Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/binding.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/binding.h).
+Defined in `/third_party/mojo/src/mojo/public/cpp/bindings/binding.h`.
-Binds one end of a message pipe to an implementation of service `T`. A message sent from the other end of the pipe will be read and, if successfully decoded as a `T` message, will invoke the corresponding call on the bound `T` implementation. A `Binding<T>` must be constructed over an instance of `T` (which itself usually owns said `Binding` object), and its bound pipe is usually taken from a passed `InterfaceRequest<T>`.
+Binds one end of a message pipe to an implementation of service `T`. A message
+sent from the other end of the pipe will be read and, if successfully decoded as
+a `T` message, will invoke the corresponding call on the bound `T`
+implementation. A `Binding<T>` must be constructed over an instance of `T`
+(which itself usually owns said `Binding` object), and its bound pipe is usually
+taken from a passed `InterfaceRequest<T>`.
A common usage pattern looks something like this:
@@ -163,39 +259,62 @@ class FrobinatorImpl : public frob::Frobinator {
And then we could write some code to test this:
```
- // Fun fact: The bindings generator emits a type alias like this for every
- // interface type. frob::FrobinatorPtr is an InterfacePtr<frob::Frobinator>.
- frob::FrobinatorPtr frobinator;
- scoped_ptr<FrobinatorImpl> impl(
- new FrobinatorImpl(mojo::GetProxy(&frobinator)));
- frobinator->Frobinate();
-```
-
-This will _eventually_ call `FrobinatorImpl::Frobinate()`. "Eventually," because the sequence of events when `frobinator->Frobinate()` is called is roughly as follows:
-
- 1. A new message buffer is allocated and filled with an encoded 'Frobinate' message.
- 1. The EDK is asked to write this message to the pipe endpoint owned by the `FrobinatorPtr`.
- 1. If the call didn't happen on the Mojo IPC thread for this process, EDK hops to the Mojo IPC thread.
- 1. The EDK writes the message to the pipe. In this case the pipe endpoints live in the same process, so this essentially a glorified `memcpy`. If they lived in different processes this would be the point at which the data moved across a real IPC channel.
- 1. The EDK on the other end of the pipe is awoken on the Mojo IPC thread and alerted to the message arrival.
- 1. The EDK reads the message.
- 1. If the bound receiver doesn't live on the Mojo IPC thread, the EDK hops to the receiver's thread.
- 1. The message is passed on to the receiver. In this case the receiver is generated bindings code, via `Binding<T>`. This code decodes and validates the `Frobinate` message.
- 1. `FrobinatorImpl::Frobinate()` is called on the bound implementation.
-
-So as you can see, the call to `Frobinate()` may result in up to two thread hops and one process hop before the service implementation is invoked.
-
-### `mojo::StrongBinding<T>`
-
-Defined in [//third\_party/mojo/src/mojo/public/cpp/bindings/strong\_binding.h](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/mojo/src/mojo/public/cpp/bindings/strong_binding.h).
-
-`mojo::StrongBinding<T>` is just like `mojo::Binding<T>` with the exception that a `StrongBinding` takes ownership of the bound `T` instance. The instance is destroyed whenever the bound message pipe is closed. This is convenient in cases where you want a service implementation to live as long as the pipe it's servicing, but like all features with clever lifetime semantics, it should be used with caution.
+// Fun fact: The bindings generator emits a type alias like this for every
+// interface type. frob::FrobinatorPtr is an InterfacePtr<frob::Frobinator>.
+frob::FrobinatorPtr frobinator;
+scoped_ptr<FrobinatorImpl> impl(
+ new FrobinatorImpl(mojo::GetProxy(&frobinator)));
+frobinator->Frobinate();
+```
+
+This will _eventually_ call `FrobinatorImpl::Frobinate()`. "Eventually," because
+the sequence of events when `frobinator->Frobinate()` is called is roughly as
+follows:
+
+1. A new message buffer is allocated and filled with an encoded 'Frobinate'
+ message.
+1. The EDK is asked to write this message to the pipe endpoint owned by the
+ `FrobinatorPtr`.
+1. If the call didn't happen on the Mojo IPC thread for this process, EDK hops
+ to the Mojo IPC thread.
+1. The EDK writes the message to the pipe. In this case the pipe endpoints
+   live in the same process, so this is essentially a glorified `memcpy`. If
+   they lived in different processes this would be the point at which the data
+   moved across a real IPC channel.
+1. The EDK on the other end of the pipe is awoken on the Mojo IPC thread and
+ alerted to the message arrival.
+1. The EDK reads the message.
+1. If the bound receiver doesn't live on the Mojo IPC thread, the EDK hops to
+ the receiver's thread.
+1. The message is passed on to the receiver. In this case the receiver is
+ generated bindings code, via `Binding<T>`. This code decodes and validates
+ the `Frobinate` message.
+1. `FrobinatorImpl::Frobinate()` is called on the bound implementation.
+
+So as you can see, the call to `Frobinate()` may result in up to two thread hops
+and one process hop before the service implementation is invoked.
+
+#### `mojo::StrongBinding<T>`
+
+Defined in `/third_party/mojo/src/mojo/public/cpp/bindings/strong_binding.h`.
+
+`mojo::StrongBinding<T>` is just like `mojo::Binding<T>` with the exception that
+a `StrongBinding` takes ownership of the bound `T` instance. The instance is
+destroyed whenever the bound message pipe is closed. This is convenient in cases
+where you want a service implementation to live as long as the pipe it's
+servicing, but like all features with clever lifetime semantics, it should be
+used with caution.
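+
+A sketch of the idiom, reworking the earlier `FrobinatorImpl` to own its
+binding (names here are illustrative):
+
+```
+class SelfOwnedFrobinatorImpl : public frob::Frobinator {
+ public:
+  explicit SelfOwnedFrobinatorImpl(
+      mojo::InterfaceRequest<frob::Frobinator> request)
+      : binding_(this, request.Pass()) {}
+  ~SelfOwnedFrobinatorImpl() override {}
+
+  void Frobinate() override { /* do the frobbing */ }
+
+ private:
+  // StrongBinding deletes |this| when the bound pipe is closed.
+  mojo::StrongBinding<frob::Frobinator> binding_;
+};
+
+frob::FrobinatorPtr frobinator;
+// Note: no scoped_ptr here -- the instance manages its own lifetime.
+new SelfOwnedFrobinatorImpl(mojo::GetProxy(&frobinator));
+```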
-## The Mojo Shell
+### The Mojo Shell
-Both Chromium and Mandoline run a central **shell** component which is used to coordinate communication among all Mojo applications (see the next section for an overview of Mojo applications.)
+Both Chromium and Mandoline run a central **shell** component which is used to
+coordinate communication among all Mojo applications (see the next section for
+an overview of Mojo applications.)
-Every application receives a proxy to this shell upon initialization, and it is exclusively through this proxy that an application can request connections to other applications. The `mojo::Shell` interface provided by this proxy is defined as follows:
+Every application receives a proxy to this shell upon initialization, and it is
+exclusively through this proxy that an application can request connections to
+other applications. The `mojo::Shell` interface provided by this proxy is
+defined as follows:
```
module mojo;
@@ -216,51 +335,94 @@ interface ServiceProvider {
};
```
-Definitions for these interfaces can be found in [//mojo/application/public/interfaces](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/application/public/interfaces/). Also note that `mojo::URLRequest` is a Mojo struct defined in [//mojo/services/network/public/interfaces/url\_loader.mojom](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/services/network/public/interfaces/url_loader.mojom).
+Definitions for these interfaces can be found in
+`/mojo/application/public/interfaces`. Also note that `mojo::URLRequest` is a
+Mojo struct defined in
+`/mojo/services/network/public/interfaces/url_loader.mojom`.
-Note that there's some new syntax in the mojom for `ConnectToApplication` above. The '?' signifies a nullable value and the '&' signifies an interface request rather than an interface proxy.
+Note that there's some new syntax in the mojom for `ConnectToApplication` above.
+The '?' signifies a nullable value and the '&' signifies an interface request
+rather than an interface proxy.
-The argument `ServiceProvider&? services` indicates that the caller should pass an `InterfaceRequest<ServiceProvider>` as the second argument, but that it need not be bound to a pipe (i.e., it can be "null" in which case it's ignored.)
+The argument `ServiceProvider&? services` indicates that the caller should pass
+an `InterfaceRequest<ServiceProvider>` as the second argument, but that it need
+not be bound to a pipe (i.e., it can be "null" in which case it's ignored.)
-The argument `ServiceProvider? exposed_services` indicates that the caller should pass an `InterfacePtr<ServiceProvider>` as the third argument, but that it may also be null.
+The argument `ServiceProvider? exposed_services` indicates that the caller
+should pass an `InterfacePtr<ServiceProvider>` as the third argument, but that
+it may also be null.
-`ConnectToApplication` asks the shell to establish a connection between the caller and some other app the shell might know about. In the event that a connection can be established -- which may involve the shell starting a new instance of the target app -- the given `services` request (if not null) will be bound to a service provider in the target app. The target app may in turn use the passed `exposed_services` proxy (if not null) to request services from the connecting app.
+`ConnectToApplication` asks the shell to establish a connection between the
+caller and some other app the shell might know about. In the event that a
+connection can be established -- which may involve the shell starting a new
+instance of the target app -- the given `services` request (if not null) will be
+bound to a service provider in the target app. The target app may in turn use
+the passed `exposed_services` proxy (if not null) to request services from the
+connecting app.
-## Mojo Applications
+### Mojo Applications
-All code which runs in a Mojo environment, apart from the shell itself (see above), belongs to one Mojo **application** or another**`**`**. The term "application" in this context is a common source of confusion, but it's really a simple concept. In essence an application is anything which implements the following Mojom interface:
+All code which runs in a Mojo environment, apart from the shell itself (see
+above), belongs to one Mojo **application** or another (see the note at the
+end of this section). The term "application" in this context is a common
+source of confusion, but it's really a simple concept. In essence an
+application is anything which implements the following Mojom interface:
```
- module mojo;
- interface Application {
- Initialize(Shell shell, string url);
- AcceptConnection(string requestor_url,
- ServiceProvider&? services,
- ServiceProvider? exposed_services,
- string resolved_url);
- OnQuitRequested() => (bool can_quit);
- };
+module mojo;
+interface Application {
+ Initialize(Shell shell, string url);
+ AcceptConnection(string requestor_url,
+ ServiceProvider&? services,
+ ServiceProvider? exposed_services,
+ string resolved_url);
+ OnQuitRequested() => (bool can_quit);
+};
```
-Of course, in Chromium and Mandoline environments this interface is obscured from application code and applications should generally just implement `mojo::ApplicationDelegate` (defined in [//mojo/application/public/cpp/application\_delegate.h](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/application/public/cpp/application_delegate.h).) We'll see a concrete example of this in the next section, [Your First Mojo Application](#Your_First_Mojo_Application.md).
+Of course, in Chromium and Mandoline environments this interface is obscured
+from application code and applications should generally just implement
+`mojo::ApplicationDelegate` (defined in
+`/mojo/application/public/cpp/application_delegate.h`.) We'll see a concrete
+example of this in the next section,
+[Your First Mojo Application](#Your-First-Mojo-Application).
-The takeaway here is that an application can be anything. It's not necessarily a new process (though at the moment, it's at least a new thread). Applications can connect to each other, and these connections are the mechanism through which separate components expose services to each other.
+The takeaway here is that an application can be anything. It's not necessarily a
+new process (though at the moment, it's at least a new thread). Applications can
+connect to each other, and these connections are the mechanism through which
+separate components expose services to each other.
-**`**`**NOTE: This is not true in Chromium today, but it should be eventually. For some components (like render frames, or arbitrary browser process code) we provide APIs which allow non-Mojo-app-code to masquerade as a Mojo app and therefore connect to real Mojo apps through the shell.
+**NOTE:** This is not true in Chromium today, but it should be eventually. For
+some components (like render frames, or arbitrary browser process code) we
+provide APIs which allow non-Mojo-app code to masquerade as a Mojo app and
+therefore connect to real Mojo apps through the shell.
-## Other IPC Primitives
+### Other IPC Primitives
-Finally, it's worth making brief mention of the other types of IPC primitives Mojo provides apart from message pipes. A **data pipe** is a unidirectional channel for pushing around raw data in bulk, and a **shared buffer** is (unsurprisingly) a shared memory primitive. Both of these objects use the same type of transferable handle as message pipe endpoints, and can therefore be transferred across message pipes, potentially to other processes.
+Finally, it's worth making brief mention of the other types of IPC primitives
+Mojo provides apart from message pipes. A **data pipe** is a unidirectional
+channel for pushing around raw data in bulk, and a **shared buffer** is
+(unsurprisingly) a shared memory primitive. Both of these objects use the same
+type of transferable handle as message pipe endpoints, and can therefore be
+transferred across message pipes, potentially to other processes.
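+
+Creating and using a data pipe looks broadly like the message pipe case. A
+minimal sketch -- see `data_pipe.h` for the authoritative signatures:
+
+```
+mojo::DataPipe data_pipe;
+
+// Push some raw bytes in through the producer end...
+const char kData[] = "bulk bytes";
+uint32_t num_bytes = sizeof(kData);
+mojo::WriteDataRaw(data_pipe.producer_handle.get(), kData, &num_bytes,
+                   MOJO_WRITE_DATA_FLAG_NONE);
+
+// ...and drain them from the consumer end, potentially in another process.
+char buffer[64];
+uint32_t bytes_read = sizeof(buffer);
+mojo::ReadDataRaw(data_pipe.consumer_handle.get(), buffer, &bytes_read,
+                  MOJO_READ_DATA_FLAG_NONE);
+```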
-# Your First Mojo Application
+## Your First Mojo Application
-In this section, we're going to build a simple Mojo application that can be run in isolation using Mandoline's `mojo_runner` binary. After that we'll add a service to the app and set up a test suite to connect and test that service.
+In this section, we're going to build a simple Mojo application that can be run
+in isolation using Mandoline's `mojo_runner` binary. After that we'll add a
+service to the app and set up a test suite to connect and test that service.
-## Hello, world!
+### Hello, world!
-So, you're building a new Mojo app and it has to live somewhere. For the foreseeable future we'll likely be treating `//components` as a sort of top-level home for new Mojo apps in the Chromium tree. Any component application you build should probably go there. Let's create some basic files to kick things off. You may want to start a new local Git branch to isolate any changes you make while working through this.
+So, you're building a new Mojo app and it has to live somewhere. For the
+foreseeable future we'll likely be treating `//components` as a sort of
+top-level home for new Mojo apps in the Chromium tree. Any component application
+you build should probably go there. Let's create some basic files to kick things
+off. You may want to start a new local Git branch to isolate any changes you
+make while working through this.
-First create a new `//components/hello` directory. Inside this directory we're going to add the following files:
+First create a new `//components/hello` directory. Inside this directory we're
+going to add the following files:
**components/hello/main.cc**
```
@@ -273,7 +435,6 @@ MojoResult MojoMain(MojoHandle shell_handle) {
};
```
-
**components/hello/BUILD.gn**
```
import("//mojo/public/mojo_application.gni")
@@ -289,27 +450,36 @@ mojo_native_application("hello") {
}
```
-For the sake of this example you'll also want to add your component as a dependency somewhere in your local checkout to ensure its build files are generated. The easiest thing to do there is probably to add a dependency on `"//components/hello"` in the `"gn_all"` target of the top-level `//BUILD.gn`.
+For the sake of this example you'll also want to add your component as a
+dependency somewhere in your local checkout to ensure its build files are
+generated. The easiest thing to do there is probably to add a dependency on
+`"//components/hello"` in the `"gn_all"` target of the top-level `//BUILD.gn`.
-Assuming you have a GN output directory at `out_gn/Debug`, you can build the Mojo runner along with your shiny new app:
+Assuming you have a GN output directory at `out_gn/Debug`, you can build the
+Mojo runner along with your shiny new app:
-```
-ninja -C out_gn/Debug mojo_runner components/hello
-```
+    ninja -C out_gn/Debug mojo_runner components/hello
-In addition to the `mojo_runner` executable, this will produce a new binary at `out_gn/Debug/hello/hello.mojo`. This binary is essentially a shared library which exports your `MojoMain` function.
+In addition to the `mojo_runner` executable, this will produce a new binary at
+`out_gn/Debug/hello/hello.mojo`. This binary is essentially a shared library
+which exports your `MojoMain` function.
-`mojo_runner` takes an application URL as its only argument and runs the corresponding application. In its current state it resolves `mojo`-scheme URLs such that `"mojo:foo"` maps to the file `"foo/foo.mojo"` relative to the `mojo_runner` path (_i.e._ your output directory.) This means you can run your new app with the following command:
+`mojo_runner` takes an application URL as its only argument and runs the
+corresponding application. In its current state it resolves `mojo`-scheme URLs
+such that `"mojo:foo"` maps to the file `"foo/foo.mojo"` relative to the
+`mojo_runner` path (_i.e._ your output directory.) This means you can run your
+new app with the following command:
-```
-out_gn/Debug/mojo_runner mojo:hello
-```
+    out_gn/Debug/mojo_runner mojo:hello
-You should see our little `"Hello, world!"` error log followed by a hanging application. You can `^C` to kill it.
+You should see our little `"Hello, world!"` error log followed by a hanging
+application. You can `^C` to kill it.
-## Exposing Services
+### Exposing Services
-An app that prints `"Hello, world!"` isn't terribly interesting. At a bare minimum your app should implement `mojo::ApplicationDelegate` and expose at least one service to connecting applications.
+An app that prints `"Hello, world!"` isn't terribly interesting. At a bare
+minimum your app should implement `mojo::ApplicationDelegate` and expose at
+least one service to connecting applications.
Let's update `main.cc` with the following contents:
@@ -325,7 +495,10 @@ MojoResult MojoMain(MojoHandle shell_handle) {
};
```
-This is a pretty typical looking `MojoMain`. Most of the time this is all you want -- a `mojo::ApplicationRunner` constructed over a `mojo::ApplicationDelegate` instance, `Run()` with the pipe handle received from the shell. We'll add some new files to the app as well:
+This is a pretty typical-looking `MojoMain`. Most of the time this is all you
+want: a `mojo::ApplicationRunner` constructed over a
+`mojo::ApplicationDelegate` instance, with `Run()` called on it using the pipe
+handle received from the shell. We'll add some new files to the app as well:
**components/hello/public/interfaces/greeter.mojom**
```
@@ -335,7 +508,8 @@ interface Greeter {
};
```
-Note the new arrow syntax on the `Greet` method. This indicates that the caller expects a response from the service.
+Note the new arrow syntax on the `Greet` method. This indicates that the caller
+expects a response from the service.
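+
+On the C++ side the response surfaces as a callback argument. The generated
+method looks roughly like this (modulo generated typedefs):
+
+```
+// Sketch of the generated signature; the callback delivers the response.
+virtual void Greet(const mojo::String& name,
+                   const mojo::Callback<void(mojo::String)>& callback) = 0;
+```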
**components/hello/public/interfaces/BUILD.gn**
```
@@ -348,7 +522,7 @@ mojom("interfaces") {
}
```
-**components/hello/hello\_app.h**
+**components/hello/hello_app.h**
```
#ifndef COMPONENTS_HELLO_HELLO_APP_H_
#define COMPONENTS_HELLO_HELLO_APP_H_
@@ -384,7 +558,7 @@ class HelloApp : public mojo::ApplicationDelegate,
```
-**components/hello/hello\_app.cc**
+**components/hello/hello_app.cc**
```
#include "base/macros.h"
#include "components/hello/hello_app.h"
@@ -438,7 +612,8 @@ void HelloApp::Create(
} // namespace hello
```
-And finally we need to update our app's `BUILD.gn` to add some new sources and dependencies:
+And finally we need to update our app's `BUILD.gn` to add some new sources and
+dependencies:
**components/hello/BUILD.gn**
```
@@ -465,19 +640,30 @@ mojo_native_application("hello") {
}
```
-Note that we build the bulk of our application sources as a static library separate from the `MojoMain` definition. Following this convention is particularly useful for Chromium integration, as we'll see later.
+Note that we build the bulk of our application sources as a static library
+separate from the `MojoMain` definition. Following this convention is
+particularly useful for Chromium integration, as we'll see later.
-There's a lot going on here and it would be useful to familiarize yourself with the definitions of `mojo::ApplicationDelegate`, `mojo::ApplicationConnection`, and `mojo::InterfaceFactory<T>`. The TL;DR though is that if someone connects to this app and requests a service named `"hello::Greeter"`, the app will create a new `GreeterImpl` and bind it to that request pipe. From there the connecting app can call `Greeter` interface methods and they'll be routed to that `GreeterImpl` instance.
+There's a lot going on here and it would be useful to familiarize yourself with
+the definitions of `mojo::ApplicationDelegate`, `mojo::ApplicationConnection`,
+and `mojo::InterfaceFactory<T>`. The TL;DR though is that if someone connects to
+this app and requests a service named `"hello::Greeter"`, the app will create a
+new `GreeterImpl` and bind it to that request pipe. From there the connecting
+app can call `Greeter` interface methods and they'll be routed to that
+`GreeterImpl` instance.
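+
+Concretely, the `InterfaceFactory<Greeter>` hook boils down to something like
+the sketch below (the real code lives in `hello_app.cc` above):
+
+```
+// The shell routes incoming "hello::Greeter" requests here.
+void HelloApp::Create(mojo::ApplicationConnection* connection,
+                      mojo::InterfaceRequest<Greeter> request) {
+  // GreeterImpl owns a StrongBinding, so it deletes itself when the pipe
+  // closes.
+  new GreeterImpl(request.Pass());
+}
+```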
-Although this appears to be a more interesting application, we need some way to actually connect and test the behavior of our new service. Let's write an app test!
+Although this appears to be a more interesting application, we need some way to
+actually connect and test the behavior of our new service. Let's write an app
+test!
-## App Tests
+### App Tests
-App tests run inside a test application, giving test code access to a shell which can connect to one or more applications-under-test.
+App tests run inside a test application, giving test code access to a shell
+which can connect to one or more applications-under-test.
First let's introduce some test code:
-**components/hello/hello\_apptest.cc**
+**components/hello/hello_apptest.cc**
```
#include "base/bind.h"
#include "base/callback.h"
@@ -532,158 +718,217 @@ TEST_F(HelloAppTest, GreetWorld) {
We also need to add a new rule to `//components/hello/BUILD.gn`:
```
- mojo_native_application("apptests") {
- output_name = "hello_apptests"
- testonly = true
- sources = [
- "hello_apptest.cc",
- ]
- deps = [
- "//base",
- "//mojo/application/public/cpp:test_support",
- ]
- public_deps = [
- "//components/hello/public/interfaces",
- ]
- data_deps = [ ":hello" ]
- }
+mojo_native_application("apptests") {
+ output_name = "hello_apptests"
+ testonly = true
+ sources = [
+ "hello_apptest.cc",
+ ]
+ deps = [
+ "//base",
+ "//mojo/application/public/cpp:test_support",
+ ]
+ public_deps = [
+ "//components/hello/public/interfaces",
+ ]
+ data_deps = [ ":hello" ]
+}
```
-Note that the `//components/hello:apptests` target does **not** have a binary dependency on either `HelloApp` or `GreeterImpl` implementations; instead it depends only on the component's public interface definitions.
+Note that the `//components/hello:apptests` target does **not** have a binary
+dependency on either `HelloApp` or `GreeterImpl` implementations; instead it
+depends only on the component's public interface definitions.
-The `data_deps` entry ensures that `hello.mojo` is up-to-date when `apptests` is built. This is desirable because the test connects to `"mojo:hello"` which will in turn load `hello.mojo` from disk.
+The `data_deps` entry ensures that `hello.mojo` is up-to-date when `apptests` is
+built. This is desirable because the test connects to `"mojo:hello"` which will
+in turn load `hello.mojo` from disk.
You can now build the test suite:
-```
- ninja -C out_gn/Debug components/hello:apptests
-```
+ ninja -C out_gn/Debug components/hello:apptests
and run it:
-```
- out_gn/Debug/mojo_runner mojo:hello_apptests
-```
+ out_gn/Debug/mojo_runner mojo:hello_apptests
You should see one test (`HelloAppTest.GreetWorld`) passing.
One particularly interesting bit of code in this test is in the `SetUp` method:
```
mojo::URLRequestPtr app_url = mojo::URLRequest::New();
app_url->url = "mojo:hello";
application_impl()->ConnectToService(app_url.Pass(), &greeter_);
```
-`ConnectToService` is a convenience method provided by `mojo::ApplicationImpl`, and it's essentially a shortcut for calling out to the shell's `ConnectToApplication` method with the given application URL (in this case `"mojo:hello"`) and then connecting to a specific service provided by that app via its `ServiceProvider`'s `ConnectToService` method.
+`ConnectToService` is a convenience method provided by `mojo::ApplicationImpl`,
+and it's essentially a shortcut for calling out to the shell's
+`ConnectToApplication` method with the given application URL (in this case
+`"mojo:hello"`) and then connecting to a specific service provided by that app
+via its `ServiceProvider`'s `ConnectToService` method.
-Note that generated interface bindings include a constant string to identify each interface by name; so for example the generated `hello::Greeter` type defines a static C string:
+Note that generated interface bindings include a constant string to identify
+each interface by name; so for example the generated `hello::Greeter` type
+defines a static C string:
```
const char hello::Greeter::Name_[] = "hello::Greeter";
```
-This is exploited by the definition of `mojo::ApplicationConnection::ConnectToService<T>`, which uses `T::Name_` as the name of the service to connect to. The type `T` in this context is inferred from the `InterfacePtr<T>*` argument. You can inspect the definition of `ConnectToService` in [//mojo/application/public/cpp/application\_connection.h](https://code.google.com/p/chromium/codesearch#chromium/src/mojo/application/public/cpp/application_connection.h) for additional clarity.
+This is exploited by the definition of
+`mojo::ApplicationConnection::ConnectToService<T>`, which uses `T::Name_` as the
+name of the service to connect to. The type `T` in this context is inferred from
+the `InterfacePtr<T>*` argument. You can inspect the definition of
+`ConnectToService` in `/mojo/application/public/cpp/application_connection.h`
+for additional clarity.
We could have instead written this code as:
```
- mojo::URLRequestPtr app_url = mojo::URLRequest::New();
- app_url->url = "mojo::hello";
+mojo::URLRequestPtr app_url = mojo::URLRequest::New();
+app_url->url = "mojo::hello";
- mojo::ServiceProviderPtr services;
- application_impl()->shell()->ConnectToApplication(
- app_url.Pass(), mojo::GetProxy(&services),
- // We pass a null provider since we aren't exposing any of our own
- // services to the target app.
- mojo::ServiceProviderPtr());
+mojo::ServiceProviderPtr services;
+application_impl()->shell()->ConnectToApplication(
+ app_url.Pass(), mojo::GetProxy(&services),
+ // We pass a null provider since we aren't exposing any of our own
+ // services to the target app.
+ mojo::ServiceProviderPtr());
- mojo::InterfaceRequest<hello::Greeter> greeter_request =
- mojo::GetProxy(&greeter_);
- services->ConnectToService(hello::Greeter::Name_,
- greeter_request.PassMessagePipe());
+mojo::InterfaceRequest<hello::Greeter> greeter_request =
+ mojo::GetProxy(&greeter_);
+services->ConnectToService(hello::Greeter::Name_,
+ greeter_request.PassMessagePipe());
```
-The net result is the same, but 3-line version seems much nicer.
+The net result is the same, but the 3-line version seems much nicer.
-# Chromium Integration
+## Chromium Integration
-Up until now we've been using `mojo_runner` to load and run `.mojo` binaries dynamically. While this model is used by Mandoline and may eventually be used in Chromium as well, Chromium is at the moment confined to running statically linked application code. This means we need some way to register applications with the browser's Mojo shell.
+Up until now we've been using `mojo_runner` to load and run `.mojo` binaries
+dynamically. While this model is used by Mandoline and may eventually be used in
+Chromium as well, Chromium is at the moment confined to running statically
+linked application code. This means we need some way to register applications
+with the browser's Mojo shell.
-It also means that, rather than using the binary output of a `mojo_native_application` target, some part of Chromium must link against the app's static library target (_e.g._, `"//components/hello:lib"`) and register a URL handler to teach the shell how to launch an instance of the app.
+It also means that, rather than using the binary output of a
+`mojo_native_application` target, some part of Chromium must link against the
+app's static library target (_e.g._, `"//components/hello:lib"`) and register a
+URL handler to teach the shell how to launch an instance of the app.
-When registering an app URL in Chromium it probably makes sense to use the same mojo-scheme URL used for the app in Mandoline. For example the media renderer app is referenced by the `"mojo:media"` URL in both Mandoline and Chromium. In Mandoline this resolves to a dynamically-loaded `.mojo` binary on disk, but in Chromium it resolves to a static application loader linked into Chromium. The net result is the same in both cases: other apps can use the shell to connect to `"mojo:media"` and use its services.
+When registering an app URL in Chromium it probably makes sense to use the same
+mojo-scheme URL used for the app in Mandoline. For example the media renderer
+app is referenced by the `"mojo:media"` URL in both Mandoline and Chromium. In
+Mandoline this resolves to a dynamically-loaded `.mojo` binary on disk, but in
+Chromium it resolves to a static application loader linked into Chromium. The
+net result is the same in both cases: other apps can use the shell to connect to
+`"mojo:media"` and use its services.
-This section explores different ways to register and connect to `"mojo:hello"` in Chromium.
+This section explores different ways to register and connect to `"mojo:hello"`
+in Chromium.
-## In-Process Applications
+### In-Process Applications
-Applications can be set up to run within the browser process via `ContentBrowserClient::RegisterInProcessMojoApplications`. This method populates a mapping from URL to `base::Callback<scoped_ptr<mojo::ApplicationDelegate>()>` (_i.e._, a factory function which creates a new `mojo::ApplicationDelegate` instance), so registering a new app means adding an entry to this map.
+Applications can be set up to run within the browser process via
+`ContentBrowserClient::RegisterInProcessMojoApplications`. This method populates
+a mapping from URL to `base::Callback<scoped_ptr<mojo::ApplicationDelegate>()>`
+(_i.e._, a factory function which creates a new `mojo::ApplicationDelegate`
+instance), so registering a new app means adding an entry to this map.
-Let's modify `ChromeContentBrowserClient::RegisterInProcessMojoApplications` (in `//chrome/browser/chrome_content_browser_client.cc`) by adding the following code:
+Let's modify `ChromeContentBrowserClient::RegisterInProcessMojoApplications`
+(in `//chrome/browser/chrome_content_browser_client.cc`) by adding the following
+code:
```
apps->insert(std::make_pair(GURL("mojo:hello"),
base::Bind(&HelloApp::CreateApp)));
```
-you'll also want to add the following convenience method to your `HelloApp` definition in `//components/hello/hello_app.h`:
+You'll also want to add the following convenience method to your `HelloApp`
+definition in `//components/hello/hello_app.h`:
```
-static scoped_ptr<mojo::ApplicationDelegate> HelloApp::CreateApp() {
+static scoped_ptr<mojo::ApplicationDelegate> CreateApp() {
  return scoped_ptr<mojo::ApplicationDelegate>(new HelloApp);
}
```
-This introduces a dependency from `//chrome/browser` on to `//components/hello:lib`, which you can add to the `"browser"` target's deps in `//chrome/browser/BUILD.gn`. You'll of course also need to include `"components/hello/hello_app.h"` in `chrome_content_browser_client.cc`.
+This introduces a dependency from `//chrome/browser` on to
+`//components/hello:lib`, which you can add to the `"browser"` target's deps in
+`//chrome/browser/BUILD.gn`. You'll of course also need to include
+`"components/hello/hello_app.h"` in `chrome_content_browser_client.cc`.
-That's it! Now if an app comes to the shell asking to connect to `"mojo:hello"` and app is already running, it'll get connected to our `HelloApp` and have access to the `Greeter` service. If the app wasn't already running, it will first be launched on a new thread.
+That's it! Now if an app comes to the shell asking to connect to `"mojo:hello"`
+and the app is already running, it'll get connected to our `HelloApp` and have
+access to the `Greeter` service. If the app wasn't already running, it will
+first be launched on a new thread.
-## Connecting From the Browser
+### Connecting From the Browser
-We've already seen how apps can connect to each other using their own private shell proxy, but the vast majority of Chromium code doesn't yet belong to a Mojo application. So how do we use an app's services from arbitrary browser code? We use `content::MojoAppConnection`, like this:
+We've already seen how apps can connect to each other using their own private
+shell proxy, but the vast majority of Chromium code doesn't yet belong to a Mojo
+application. So how do we use an app's services from arbitrary browser code? We
+use `content::MojoAppConnection`, like this:
```
- #include "base/bind.h"
- #include "base/logging.h"
- #include "components/hello/public/interfaces/greeter.mojom.h"
- #include "content/public/browser/mojo_app_connection.h"
+#include "base/bind.h"
+#include "base/logging.h"
+#include "components/hello/public/interfaces/greeter.mojom.h"
+#include "content/public/browser/mojo_app_connection.h"
- void LogGreeting(const mojo::String& greeting) {
- LOG(INFO) << greeting;
- }
+void LogGreeting(const mojo::String& greeting) {
+ LOG(INFO) << greeting;
+}
- void GreetTheWorld() {
- scoped_ptr<content::MojoAppConnection> connection =
- content::MojoAppConnection::Create("mojo:hello",
- content::kBrowserMojoAppUrl);
- hello::GreeterPtr greeter;
- connection->ConnectToService(&greeter);
- greeter->Greet("world", base::Bind(&LogGreeting));
- }
+void GreetTheWorld() {
+ scoped_ptr<content::MojoAppConnection> connection =
+ content::MojoAppConnection::Create("mojo:hello",
+ content::kBrowserMojoAppUrl);
+ hello::GreeterPtr greeter;
+ connection->ConnectToService(&greeter);
+ greeter->Greet("world", base::Bind(&LogGreeting));
+}
```
-A `content::MojoAppConnection`, while not thread-safe, may be created and safely used on any single browser thread.
+A `content::MojoAppConnection`, while not thread-safe, may be created and safely
+used on any single browser thread.
-You could add the above code to a new browsertest to convince yourself that it works. In fact you might want to take a peek at `MojoShellTest.TestBrowserConnection` (in [//content/browser/mojo\_shell\_browsertest.cc](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/mojo_shell_browsertest.cc)) which registers and tests an in-process Mojo app.
+You could add the above code to a new browsertest to convince yourself that it
+works. In fact you might want to take a peek at
+`MojoShellTest.TestBrowserConnection` (in
+`/content/browser/mojo_shell_browsertest.cc`) which registers and tests an
+in-process Mojo app.
-Finally, note that `MojoAppConnection::Create` takes two URLs. The first is the target app URL, and the second is the source URL. Since we're not really a Mojo app, but we are still trusted browser code, the shell will gladly use this URL as the `requestor_url` when establishing an incoming connection to the target app. This allows browser code to masquerade as a Mojo app at the given URL. `content::kBrowserMojoAppUrl` (which is presently `"system:content_browser"`) is a reasonable default choice when a more specific app identity isn't required.
+Finally, note that `MojoAppConnection::Create` takes two URLs. The first is the
+target app URL, and the second is the source URL. Since we're not really a Mojo
+app, but we are still trusted browser code, the shell will gladly use this URL
+as the `requestor_url` when establishing an incoming connection to the target
+app. This allows browser code to masquerade as a Mojo app at the given URL.
+`content::kBrowserMojoAppUrl` (which is presently `"system:content_browser"`) is
+a reasonable default choice when a more specific app identity isn't required.
-## Out-of-Process Applications
+### Out-of-Process Applications
-If an app URL isn't registered for in-process loading, the shell assumes it must be an out-of-process application. If the shell doesn't already have a known instance of the app running, a new utility process is launched and the application request is passed onto it. Then if the app URL is registered in the utility process, the app will be loaded there.
+If an app URL isn't registered for in-process loading, the shell assumes it must
+be an out-of-process application. If the shell doesn't already have a known
+instance of the app running, a new utility process is launched and the
+application request is passed on to it. Then, if the app URL is registered in
+the utility process, the app will be loaded there.
-Similar to in-process registration, a URL mapping needs to be registered in `ContentUtilityClient::RegisterMojoApplications`.
+Similar to in-process registration, a URL mapping needs to be registered in
+`ContentUtilityClient::RegisterMojoApplications`.
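+
+A sketch of that registration -- `MyContentUtilityClient` stands in for your
+embedder's `ContentUtilityClient` subclass, and the URL-to-factory map type is
+assumed to mirror the browser-side hook:
+
+```
+// Assumes the same map of URL to ApplicationDelegate factory used by
+// ContentBrowserClient::RegisterInProcessMojoApplications.
+void MyContentUtilityClient::RegisterMojoApplications(
+    StaticMojoApplicationMap* apps) {
+  apps->insert(std::make_pair(GURL("mojo:hello"),
+                              base::Bind(&HelloApp::CreateApp)));
+}
+```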
-Once again you can take a peek at //content/browser/mojo\_shell\_browsertest.cc for an end-to-end example of testing an out-of-process Mojo app from browser code. Note that `content_browsertests` runs on `content_shell`, which uses `ShellContentUtilityClient` as defined [//content/shell/utility/shell\_content\_utility\_client.cc](https://code.google.com/p/chromium/codesearch#chromium/src/content/shell/utility/shell_content_utility_client.cc). This code registers a common OOP test app.
+Once again you can take a peek at `/content/browser/mojo_shell_browsertest.cc`
+for an end-to-end example of testing an out-of-process Mojo app from browser
+code. Note that `content_browsertests` runs on `content_shell`, which uses
+`ShellContentUtilityClient` as defined in
+`/content/shell/utility/shell_content_utility_client.cc`. This code registers a
+common OOP test app.
-## Unsandboxed Out-of-Process Applications
+### Unsandboxed Out-of-Process Applications
-By default new utility processes run in a sandbox. If you want your Mojo app to run out-of-process and unsandboxed (which you **probably do not**), you can register its URL via `ContentBrowserClient::RegisterUnsandboxedOutOfProcessMojoApplications`.
+By default new utility processes run in a sandbox. If you want your Mojo app to
+run out-of-process and unsandboxed (which you **probably do not**), you can
+register its URL via
+`ContentBrowserClient::RegisterUnsandboxedOutOfProcessMojoApplications`.
-## Connecting From `RenderFrame`
+### Connecting From `RenderFrame`
-We can also connect to Mojo apps from a `RenderFrame`. This is made possible by `RenderFrame`'s `GetServiceRegistry()` interface. The `ServiceRegistry` can be used to acquire a shell proxy and in turn connect to an app like so:
+We can also connect to Mojo apps from a `RenderFrame`. This is made possible by
+`RenderFrame`'s `GetServiceRegistry()` interface. The `ServiceRegistry` can be
+used to acquire a shell proxy and in turn connect to an app like so:
```
void GreetWorld(content::RenderFrame* frame) {
@@ -704,16 +949,25 @@ void GreetWorld(content::RenderFrame* frame) {
}
```
-It's important to note that connections made through the frame's shell proxy will appear to come from the frame's `SiteInstance` URL. For example, if the frame has loaded `https://example.com/`, `HelloApp`'s incoming `mojo::ApplicationConnection` in this case will have a remote application URL of `"https://example.com/"`. This allows apps to expose their services to web frames on a per-origin basis if needed.
+It's important to note that connections made through the frame's shell proxy
+will appear to come from the frame's `SiteInstance` URL. For example, if the
+frame has loaded `https://example.com/`, `HelloApp`'s incoming
+`mojo::ApplicationConnection` in this case will have a remote application URL of
+`"https://example.com/"`. This allows apps to expose their services to web
+frames on a per-origin basis if needed.
-## Connecting From Java
+### Connecting From Java
TODO
-## Connecting From `JavaScript`
+### Connecting From JavaScript
-This is still a work in progress and might not really take shape until the Blink+Chromium merge. In the meantime there are some end-to-end WebUI examples in [//content/browser/webui/web\_ui\_mojo\_browsertest.cc](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/webui/web_ui_mojo_browsertest.cc). In particular, `WebUIMojoTest.ConnectToApplication` connects from a WebUI frame to a test app running in a new utility process.
+This is still a work in progress and might not really take shape until the
+Blink+Chromium merge. In the meantime there are some end-to-end WebUI examples
+in `/content/browser/webui/web_ui_mojo_browsertest.cc`. In particular,
+`WebUIMojoTest.ConnectToApplication` connects from a WebUI frame to a test app
+running in a new utility process.
-# FAQ
+## FAQ
-Nothing here yet! \ No newline at end of file
+Nothing here yet!
diff --git a/docs/ninja_build.md b/docs/ninja_build.md
index addf61d..0a32853 100644
--- a/docs/ninja_build.md
+++ b/docs/ninja_build.md
@@ -1,10 +1,15 @@
+# Ninja Build
+Ninja is a build system written with the specific goal of improving the
+edit-compile cycle time. It is used by default everywhere except when building
+for iOS.
-Ninja is a build system written with the specific goal of improving the edit-compile cycle time. It is used by default everywhere except when building for iOS.
+Ninja behaves very similarly to Make -- the major feature is that it starts
+building files nearly instantly. (It also makes a number of minor user
+interface improvements.)
-Ninja behaves very similar to Make -- the major feature is that it starts building files nearly instantly. (It has a number of minor user interface improvements to make as well.)
-
-Read more about Ninja at [the Ninja home page](http://martine.github.com/ninja/).
+Read more about Ninja at
+[the Ninja home page](http://martine.github.com/ninja/).
## Using it
@@ -12,68 +17,88 @@ Read more about Ninja at [the Ninja home page](http://martine.github.com/ninja/)
#### Install
-Ninja is included in depot\_tools as well as gyp, so there's nothing to install.
+Ninja is included in `depot_tools` as well as `gyp`, so there's nothing to
+install.
## Build instructions
To build Chrome:
-```
-cd /path/to/chrome/src
-ninja -C out/Debug chrome
-```
-Specify `out/Release` for a release build. I recommend setting up an alias so that you don't need to type out that build directory path.
+ cd /path/to/chrome/src
+ ninja -C out/Debug chrome
-If you want to build all targets, use `ninja -C out/Debug all`. It's faster to build only the target you're working on, like 'chrome' or 'unit\_tests'.
+Specify `out/Release` for a release build. I recommend setting up an alias so
+that you don't need to type out that build directory path.
+
+If you want to build all targets, use `ninja -C out/Debug all`. It's faster to
+build only the target you're working on, like `chrome` or `unit_tests`.
## Android
-Identical to Linux, just make sure `OS=android` is in your `GYP_DEFINES`. You want to build one of the _apk targets, e.g. `content_shell_apk`._
+Identical to Linux; just make sure `OS=android` is in your `GYP_DEFINES`. You
+want to build one of the apk targets, e.g. `content_shell_apk`.
## Windows
-Similar to Linux. It uses MSVS's `cl.exe`, `link.exe`, etc. so you still need to have VS installed. To use it, open `cmd.exe`, go to your chrome checkout, and run:
-```
-set GYP_DEFINES=component=shared_library
-python build\gyp_chromium
-ninja -C out\Debug chrome.exe
-```
+Similar to Linux. It uses MSVS's `cl.exe`, `link.exe`, etc. so you still need to
+have VS installed. To use it, open `cmd.exe`, go to your chrome checkout, and
+run:
-`component=shared_library` optional but recommended for faster links.
+ set GYP_DEFINES=component=shared_library
+ python build\gyp_chromium
+ ninja -C out\Debug chrome.exe
-You can also set `GYP_GENERATORS=ninja,msvs-ninja` to get both VS projects generated if you want to use VS just to browse/edit (but then gyp takes twice as long to run).
+`component=shared_library` is optional but recommended for faster links.
-If you're using Express or the Windows SDK by itself (rather than using a Visual Studio install), you'll need to run from a vcvarsall command prompt.
+You can also set `GYP_GENERATORS=ninja,msvs-ninja` to get both VS projects
+generated if you want to use VS just to browse/edit (but then gyp takes twice as
+long to run).
+
+If you're using Express or the Windows SDK by itself (rather than using a Visual
+Studio install), you'll need to run from a vcvarsall command prompt.
### Debugging
Miss VS for debugging?
+
```
devenv.com /debugexe chrome.exe --my-great-args "go here" --single-process etc
```
-Miss Xcode for debugging? Read http://dev.chromium.org/developers/debugging-on-os-x/building-with-ninja-debugging-with-xcode
+Miss Xcode for debugging? Read
+http://dev.chromium.org/developers/debugging-on-os-x/building-with-ninja-debugging-with-xcode
### Without Visual Studio
-That is, building with just the WinDDK. This is documented in the [regular build instructions](http://dev.chromium.org/developers/how-tos/build-instructions-windows#TOC-Setting-up-the-environment-for-building-with-Visual-C-2010-Express-or-Windows-7.1-SDK).
+That is, building with just the WinDDK. This is documented in the
+[regular build instructions](http://dev.chromium.org/developers/how-tos/build-instructions-windows#TOC-Setting-up-the-environment-for-building-with-Visual-C-2010-Express-or-Windows-7.1-SDK).
## Tweaks
### Building through errors
-Pass a flag like `-k3` to make Ninja build until it hits three errors instead of stopping at the first.
+
+Pass a flag like `-k3` to make Ninja build until it hits three errors instead of
+stopping at the first.
### Parallelism
-Pass a flag like `-j8` to use 8 parallel processes, or `-j1` to compile just one at a time (helpful if you're getting weird compiler errors). By default Ninja tries to use all your processors.
+
+Pass a flag like `-j8` to use 8 parallel processes, or `-j1` to compile just one
+at a time (helpful if you're getting weird compiler errors). By default Ninja
+tries to use all your processors.
### More options
+
There are more options. Run `ninja --help` to see them all.
### Custom build configs
-You can write a specific build config to a specific output directory via the `-G` flags to gyp. Here's an example from jamesr:
-`build/gyp_chromium -Gconfig=Release -Goutput_dir=out_profiling -Dprofiling=1 -Dlinux_fpic=0`
+You can write a specific build config to a specific output directory via the
+`-G` flags to gyp. Here's an example from jamesr:
+
+    build/gyp_chromium -Gconfig=Release -Goutput_dir=out_profiling \
+        -Dprofiling=1 -Dlinux_fpic=0
## Bugs
-If you encounter any problems, please file a bug at http://crbug.com/new with label `ninja` and cc `thakis@` or `scottmg@`. Assume that it is a bug in Ninja before you bother anyone about e.g. link problems. \ No newline at end of file
+If you encounter any problems, please file a bug at http://crbug.com/new with
+label `ninja` and cc `thakis@` or `scottmg@`. Assume that it is a bug in Ninja
+before you bother anyone about e.g. link problems.
diff --git a/docs/piranha_plant.md b/docs/piranha_plant.md
index 4b6db33..692b32b 100644
--- a/docs/piranha_plant.md
+++ b/docs/piranha_plant.md
@@ -1,54 +1,93 @@
-# Introduction
+# Piranha Plant
-Piranha Plant is the name of a project, started in November 2013, that aims to deliver the future architecture of MediaStreams in Chromium.
+Piranha Plant is the name of a project, started in November 2013, that aims to
+deliver the future architecture of MediaStreams in Chromium.
-Project members are listed in the [group for the project](https://groups.google.com/a/chromium.org/forum/#!members/piranha-plant).
+Project members are listed in the
+[group for the project](https://groups.google.com/a/chromium.org/forum/#!members/piranha-plant).
-The Piranha Plant is a monster plant that has appeared in many of the Super Mario games. In the original Super Mario Bros, it hid in the green pipes and is thus an apt name for the project as we are fighting "monsters in the plumbing."
+The Piranha Plant is a monster plant that has appeared in many of the Super
+Mario games. In the original Super Mario Bros, it hid in the green pipes, which
+makes it an apt name for the project, as we are fighting "monsters in the
+plumbing."
![http://files.hypervisor.fr/img/super_mario_piranha_plant.png](http://files.hypervisor.fr/img/super_mario_piranha_plant.png)
-# Background
-
-When the MediaStream spec initially came to be, it was tightly coupled with PeerConnection. The infrastructure for both of these was initially implemented primarily in libjingle, and then used by Chromium. For this reason, the MediaStream implementation in Chromium is still somewhat coupled with the PeerConnection implementation, it still uses some libjingle interfaces on the Chromium side, and progress is sometimes more difficult as changes need to land in libjingle before changes can be made in Chromium.
-
-Since the early days, the MediaStream spec has evolved so that PeerConnection is just one destination for a MediaStream, multiple teams are or will be consuming the MediaStream infrastructure, and we have a clearer vision of what the architecture should look like now that the spec is relatively stable.
-
-# Goals
- 1. Document the idealized future design for MediaStreams in Chromium (MS) as well as the current state.
- 1. Create and execute on a plan to incrementally implement the future design.
- 1. Improve quality, maintainability and readability/understandability of the MS code.
- 1. Make life easier for Chromium developers using MS.
- 1. Balance concerns and priorities of the different teams that are or will be using MS in Chromium.
- 1. Do the above without hurting our ability to produce the WebRTC.org deliverables, and without hurting interoperability between Chromium and other software built on the WebRTC.org deliverables.
-
-# Deliverables
-
- 1. Project code name: Piranha Plant.
- 1. A [design document](http://www.chromium.org/developers/design-documents/idealized-mediastream-design) for the idealized future design (work in progress).
- 1. A document laying out a plan for incremental steps to achieve as much of the idealized design as is pragmatic. See below for current draft.
- 1. A [master bug](http://crbug.com/323223) to collect all existing and currently planned work items:
- 1. Sub-bugs of the master bug, for all currently known and planned work.
- 1. A document describing changed and improved team policies to help us keep improving code quality (e.g. naming, improved directory structure, OWNERS files). Not started.
-
-# Task List
-Here are some upcoming tasks we need to work on to progress towards the idealized design. Those currently being worked on have emails at the front:
- * General
- * More restrictive OWNERS
- * DEPS files to limit dependencies on libjingle
- * Rename MediaStream{Manager, Dispatcher, DispatcherHandler} to CaptureDevice{...} since it is a bit confusing to use the MediaStream name here.
- * Rename MediaStreamDependencyFactory to PeerConnectionDependencyFactory.
- * Split up MediaStreamImpl.
- * Change the RTCPeerConnectionHandler to only create the PeerConnection and related stuff when necessary.
- * Audio
- * [xians](xians.md) Add a Content API where given an audio WebMediaStreamTrack, you can register as a sink for that track.
- * Move RendererMedia, the current local audio track sink interface, to //media and change as necessary.
- * Put a Chrome-side adapter on the libjingle audio track interface.
- * Move the APM from libjingle to Chrome, putting it behind an experimental flag to start with.
- * Do format change notifications on the capture thread.
- * Switch to a push model for received PeerConnection audio.
- * Video
- * [perkj](perkj.md) Add a Chrome-side interface representing a sink for a video track.
- * [perkj](perkj.md) Add a Content API where given a video WebMediaStreamTrack, you can register as a sink for that track.
- * Add a Chrome-side adapter for libjingle’s video track interface, which may also need to change.
- * Implement a Chrome-side VideoSource and constraints handling (currently in libjingle). \ No newline at end of file
+[TOC]
+
+## Background
+
+When the MediaStream spec initially came to be, it was tightly coupled with
+PeerConnection. The infrastructure for both of these was initially implemented
+primarily in libjingle, and then used by Chromium. For this reason, the
+MediaStream implementation in Chromium is still somewhat coupled with the
+PeerConnection implementation: it still uses some libjingle interfaces on the
+Chromium side, and progress is sometimes more difficult because changes need to
+land in libjingle before changes can be made in Chromium.
+
+Since the early days, the MediaStream spec has evolved so that PeerConnection is
+just one destination for a MediaStream, multiple teams are or will be consuming
+the MediaStream infrastructure, and we have a clearer vision of what the
+architecture should look like now that the spec is relatively stable.
+
+## Goals
+
+1. Document the idealized future design for MediaStreams in Chromium (MS) as
+ well as the current state.
+1. Create and execute on a plan to incrementally implement the future design.
+1. Improve quality, maintainability and readability/understandability of the MS
+ code.
+1. Make life easier for Chromium developers using MS.
+1. Balance concerns and priorities of the different teams that are or will be
+ using MS in Chromium.
+1. Do the above without hurting our ability to produce the WebRTC.org
+ deliverables, and without hurting interoperability between Chromium and
+ other software built on the WebRTC.org deliverables.
+
+## Deliverables
+
+1. Project code name: Piranha Plant.
+1. A [design document](http://www.chromium.org/developers/design-documents/idealized-mediastream-design)
+ for the idealized future design (work in progress).
+1. A document laying out a plan for incremental steps to achieve as much of the
+ idealized design as is pragmatic. See below for current draft.
+1. A [master bug](https://crbug.com/323223) to collect all existing and
+ currently planned work items:
+1. Sub-bugs of the master bug, for all currently known and planned work.
+1. A document describing changed and improved team policies to help us keep
+ improving code quality (e.g. naming, improved directory structure, OWNERS
+ files). Not started.
+
+## Task List
+
+Here are some upcoming tasks we need to work on to progress towards the
+idealized design. Those currently being worked on have emails at the front:
+
+* General
+ * More restrictive OWNERS
+ * DEPS files to limit dependencies on libjingle
+ * Rename MediaStream{Manager, Dispatcher, DispatcherHandler} to
+ CaptureDevice{...} since it is a bit confusing to use the MediaStream
+ name here.
+ * Rename MediaStreamDependencyFactory to PeerConnectionDependencyFactory.
+ * Split up MediaStreamImpl.
+ * Change the RTCPeerConnectionHandler to only create the PeerConnection
+ and related stuff when necessary.
+* Audio
+ * [xians] Add a Content API where given an audio WebMediaStreamTrack, you
+ can register as a sink for that track.
+ * Move RendererMedia, the current local audio track sink interface, to
+ //media and change as necessary.
+ * Put a Chrome-side adapter on the libjingle audio track interface.
+ * Move the APM from libjingle to Chrome, putting it behind an experimental
+ flag to start with.
+ * Do format change notifications on the capture thread.
+ * Switch to a push model for received PeerConnection audio.
+* Video
+ * [perkj] Add a Chrome-side interface representing a sink for a video
+ track.
+ * [perkj] Add a Content API where given a video WebMediaStreamTrack, you
+ can register as a sink for that track.
+ * Add a Chrome-side adapter for libjingle’s video track interface, which
+ may also need to change.
+ * Implement a Chrome-side VideoSource and constraints handling (currently
+ in libjingle).
diff --git a/docs/profiling_content_shell_on_android.md b/docs/profiling_content_shell_on_android.md
index 0d4b903..cf33d06 100644
--- a/docs/profiling_content_shell_on_android.md
+++ b/docs/profiling_content_shell_on_android.md
@@ -1,149 +1,203 @@
-# Introduction
+# Profiling Content Shell on Android
-Below are the instructions for setting up profiling for Content Shell on Android. This will let you generate profiles for ContentShell. This will require linux, building an userdebug Android build, and wiping the device.
+Below are the instructions for setting up profiling for Content Shell on
+Android. This will let you generate profiles for ContentShell. This will require
+Linux, building a userdebug Android build, and wiping the device.
+
+[TOC]
## Prepare your device.
-You need an Android 4.2+ device (Galaxy Nexus, Nexus 4, 7, 10, etc.) which you don’t mind erasing all data, rooting, and installing a userdebug build on.
-
-## Get and build content\_shell\_apk for Android
-(These instructions have been carefully distilled from http://code.google.com/p/chromium/wiki/AndroidBuildInstructions)
-
- 1. Get the code! You’ll want a second checkout as this will be android-specific. You know the drill: http://dev.chromium.org/developers/how-tos/get-the-code
- 1. Append this to your .gclient file: `target_os = ['android']`
- 1. Create `chromium.gyp_env` next to your .gclient file: `echo "{ 'GYP_DEFINES': 'OS=android', }" > chromium.gyp_env`
- 1. (Note: All these scripts assume you’re using "bash" (default) as your shell.)
- 1. Sync and runhooks (be careful not to run hooks on the first sync):
-```
-gclient sync --nohooks
-. build/android/envsetup.sh
-gclient runhooks
-```
- 1. No need to install any API Keys.
- 1. Install Oracle’s Java: http://goo.gl/uPRSq. Grab the appropriate x64 .bin file, `chmod +x`, and then execute to extract. You then move that extracted tree into /usr/lib/jvm/, rename it java-6-sun and set:
-```
-export JAVA_HOME=/usr/lib/jvm/java-6-sun
-export ANDROID_JAVA_HOME=/usr/lib/jvm/java-6-sun
-```
- 1. Type ‘`java -version`’ and make sure it says java version "1.6.0\_35” without any mention of openjdk before proceeding.
- 1. `sudo build/install-build-deps-android.sh`
- 1. Time to build!
-```
-ninja -C out/Release content_shell_apk
-```
+You need an Android 4.2+ device (Galaxy Nexus, Nexus 4, 7, 10, etc.) which you
+don’t mind erasing all data, rooting, and installing a userdebug build on.
+
+## Get and build `content_shell_apk` for Android
+
+(These instructions have been carefully distilled from the
+[Android Build Instructions](android_build_instructions.md).)
+
+1. Get the code! You’ll want a second checkout as this will be
+ android-specific. You know the drill:
+ http://dev.chromium.org/developers/how-tos/get-the-code
+1. Append this to your `.gclient` file: `target_os = ['android']`
+1. Create `chromium.gyp_env` next to your `.gclient` file:
+ `echo "{ 'GYP_DEFINES': 'OS=android', }" > chromium.gyp_env`
+1. (Note: All these scripts assume you’re using "bash" (default) as your
+ shell.)
+1. Sync and runhooks (be careful not to run hooks on the first sync):
+
+ ```
+ gclient sync --nohooks
+ . build/android/envsetup.sh
+ gclient runhooks
+ ```
+
+1. No need to install any API Keys.
+1. Install Oracle’s Java: http://goo.gl/uPRSq. Grab the appropriate x64 .bin
+ file, `chmod +x`, and then execute to extract. You then move that extracted
+ tree into /usr/lib/jvm/, rename it java-6-sun and set:
+
+ ```
+ export JAVA_HOME=/usr/lib/jvm/java-6-sun
+ export ANDROID_JAVA_HOME=/usr/lib/jvm/java-6-sun
+ ```
+
+1. Type `java -version` and make sure it says java version `1.6.0_35` without
+   any mention of openjdk before proceeding.
+1. `sudo build/install-build-deps-android.sh`
+1. Time to build!
+
+ ```
+ ninja -C out/Release content_shell_apk
+ ```
## Setup the physical device
-> Plug in your device. Make sure you can talk to your device, try "`adb shell ls`"
+Plug in your device. Make sure you can talk to your device; try `adb shell ls`.
## Root your device and install a userdebug build
- 1. This may require building your own version of Android: http://source.android.com/source/building-devices.html
- 1. A build that works is: manta / android-4.2.2\_r1 or master / full\_manta-userdebug.
+1. This may require building your own version of Android:
+ http://source.android.com/source/building-devices.html
+1. A build that works is: `manta / android-4.2.2_r1` or
+ `master / full_manta-userdebug`.
## Root your device
- 1. Run `adb root`. Every time you connect your device you’ll want to run this.
- 1. If adb is not available, make sure to run “`. build/android/envsetup.sh`”
-> If you get the error “error: device offline”, you may need to become a developer on your device before Linux will see it. On Jellybean 4.2.1 and above this requires going to “about phone” or “about tablet” and clicking the build number 7 times: http://androidmuscle.com/how-to-enable-usb-debugging-developer-options-on-nexus-4-and-android-4-2-devices/
+
+1. Run `adb root`. Every time you connect your device you’ll want to run this.
+1. If adb is not available, make sure to run `. build/android/envsetup.sh`
+
+If you get the error `error: device offline`, you may need to become a developer
+on your device before Linux will see it. On Jellybean 4.2.1 and above this
+requires going to “about phone” or “about tablet” and clicking the build number
+7 times:
+http://androidmuscle.com/how-to-enable-usb-debugging-developer-options-on-nexus-4-and-android-4-2-devices/
## Run a Telemetry perf profiler
-You can run any Telemetry benchmark with --profiler=perf, and it will:
-1) Download "perf" and "perfhost"
-2) Install on your device
-3) Run the test
-4) Setup symlinks to work with the --symfs parameter
+You can run any Telemetry benchmark with `--profiler=perf`, and it will:
+1. Download `perf` and `perfhost`
+1. Install on your device
+1. Run the test
+1. Setup symlinks to work with the `--symfs` parameter
You can also run "manual" tests with Telemetry, more information here:
http://www.chromium.org/developers/telemetry/profiling#TOC-Manual-Profiling---Android
-The following steps describe building "perf", which is no longer necessary if you use Telemetry.
+The following steps describe building `perf`, which is no longer necessary if
+you use Telemetry.
+## Install `/system/bin/perf` on your device (not needed for Telemetry)
-## Install /system/bin/perf on your device (not needed for Telemetry)
-
-```
-# From inside the android source tree (not inside Chromium)
-mmm external/linux-tools-perf/
-adb remount # (allows you to write to the system image)
-adb sync
-adb shell perf top # check that perf can get samples (don’t expect symbols)
-```
+ # From inside the android source tree (not inside Chromium)
+ mmm external/linux-tools-perf/
+ adb remount # (allows you to write to the system image)
+ adb sync
+ adb shell perf top # check that perf can get samples (don’t expect symbols)
## Enable profiling
-> Rebuild content\_shell\_apk with profiling enabled
-```
-export GYP_DEFINES="$GYP_DEFINES profiling=1"
-build/gyp_chromium
-ninja -C out/Release content_shell_apk
-```
+
+Rebuild `content_shell_apk` with profiling enabled:
+
+ export GYP_DEFINES="$GYP_DEFINES profiling=1"
+ build/gyp_chromium
+ ninja -C out/Release content_shell_apk
+
## Install ContentShell
-> Install with the following:
-```
-build/android/adb_install_apk.py --apk out/Release/apks/ContentShell.apk --apk_package org.chromium.content_shell
-```
+
+Install with the following:
+
+ build/android/adb_install_apk.py \
+ --apk out/Release/apks/ContentShell.apk \
+ --apk_package org.chromium.content_shell
## Run ContentShell
-> Run with the following:
-```
-./build/android/adb_run_content_shell
-```
-> If content\_shell “stopped unexpectedly” use “`adb logcat`” to debug. If you see ResourceExtractor exceptions, a clean build is your solution. crbug.com/164220
-
-## Setup a “symbols” directory with symbols from your build (not needed for Telemetry)
- 1. Figure out exactly what path content\_shell\_apk (or chrome, etc) installs to.
- * On the device, navigate ContentShell to about:crash
-```
-adb logcat | grep libcontent_shell_content_view.so
-```
-> > You should find a path that’s something like /data/app-lib/org.chromium.content\_shell-1/libcontent\_shell\_content\_view.so
- 1. Make a symbols directory
-```
-mkdir symbols (this guide assumes you put this next to src/)
-```
- 1. Make a symlink from your symbols directory to your un-stripped content\_shell.
-```
-mkdir -p symbols/data/app-lib/org.chromium.content_shell-1 (or whatever path in app-lib you got above)
-ln -s `pwd`/src/out/Release/lib/libcontent_shell_content_view.so `pwd`/symbols/data/app-lib/org.chromium.content_shell-1
-```
-
-## Install perfhost\_linux locally (not needed for Telemetry)
-
-
-> Note: modern versions of perf may also be able to process the perf.data files from the device.
- 1. perfhost\_linux can be built from: https://android.googlesource.com/platform/external/linux-tools-perf/.
- 1. Place perfhost\_linux next to symbols, src, etc.
-```
-chmod a+x perfhost_linux
-```
+
+Run with the following:
+
+ ./build/android/adb_run_content_shell
+
+If `content_shell` “stopped unexpectedly,” use `adb logcat` to debug. If you see
+ResourceExtractor exceptions, a clean build is your solution. See
+https://crbug.com/164220.
+
+## Setup a `symbols` directory with symbols from your build (not needed for Telemetry)
+
+1. Figure out exactly what path `content_shell_apk` (or chrome, etc) installs
+   to. On the device, navigate ContentShell to about:crash, then run:
+
+        adb logcat | grep libcontent_shell_content_view.so
+
+   You should find a path that’s something like
+   `/data/app-lib/org.chromium.content_shell-1/libcontent_shell_content_view.so`
+
+1. Make a symbols directory
+    ```
+    mkdir symbols  # this guide assumes you put this next to src/
+    ```
+1. Make a symlink from your symbols directory to your un-stripped
+ `content_shell`.
+
+ ```
+ # Use whatever path in app-lib you got above
+ mkdir -p symbols/data/app-lib/org.chromium.content_shell-1
+ ln -s `pwd`/src/out/Release/lib/libcontent_shell_content_view.so \
+ `pwd`/symbols/data/app-lib/org.chromium.content_shell-1
+ ```
+
+## Install `perfhost_linux` locally (not needed for Telemetry)
+
+Note: modern versions of perf may also be able to process the perf.data files
+from the device.
+
+1. `perfhost_linux` can be built from:
+ https://android.googlesource.com/platform/external/linux-tools-perf/.
+1. Place `perfhost_linux` next to symbols, src, etc.
+
+ chmod a+x perfhost_linux
## Actually record a profile on the device!
-> Run the following:
-```
-adb shell ps | grep content (look for the pid of the sandboxed_process)
-adb shell perf record -g -p 12345 sleep 5
-adb pull /data/perf.data
-```
+
+Run the following:
+
+    adb shell ps | grep content  # look for the pid of the sandboxed_process
+ adb shell perf record -g -p 12345 sleep 5
+ adb pull /data/perf.data
+
## Create the report
- 1. Run the following:
-```
-./perfhost_linux report -g -i perf.data --symfs symbols/
-```
- 1. If you don’t see chromium/webkit symbols, make sure that you built/pushed Release, and that the symlink you created to the .so is valid!
- 1. If you have symbols, but your callstacks are nonsense, make sure you ran build/gyp\_chromium after setting profiling=1, and rebuilt.
+
+1. Run the following:
+
+ ```
+ ./perfhost_linux report -g -i perf.data --symfs symbols/
+ ```
+
+1. If you don’t see chromium/webkit symbols, make sure that you built/pushed
+ Release, and that the symlink you created to the .so is valid!
+1. If you have symbols, but your callstacks are nonsense, make sure you ran
+ `build/gyp_chromium` after setting `profiling=1`, and rebuilt.
## Add symbols for the kernel
- 1. By default, /proc/kallsyms returns 0 for all symbols, to fix this, set “/proc/sys/kernel/kptr\_restrict” to 0:
-```
-adb shell echo “0” > /proc/sys/kernel/kptr_restrict
-```
- 1. See http://lwn.net/Articles/420403/ for explanation of what this does.
-```
-adb pull /proc/kallsyms symbols/kallsyms
-```
- 1. Now add --kallsyms to your perfhost\_linux command:
-```
-./perfhost_linux report -g -i perf.data --symfs symbols/ --kallsyms=symbols/kallsyms
-``` \ No newline at end of file
+
+1. By default, `/proc/kallsyms` returns 0 for all symbols; to fix this, set
+   `/proc/sys/kernel/kptr_restrict` to `0`:
+
+    ```
+    adb shell "echo 0 > /proc/sys/kernel/kptr_restrict"
+    ```
+
+1. See http://lwn.net/Articles/420403/ for an explanation of what this does.
+
+ ```
+ adb pull /proc/kallsyms symbols/kallsyms
+ ```
+
+1. Now add `--kallsyms` to your `perfhost_linux` command:
+ ```
+ ./perfhost_linux report -g -i perf.data --symfs symbols/ \
+ --kallsyms=symbols/kallsyms
+ ```
diff --git a/docs/proxy_auto_config.md b/docs/proxy_auto_config.md
index c5cb991..656d911 100644
--- a/docs/proxy_auto_config.md
+++ b/docs/proxy_auto_config.md
@@ -1,27 +1,60 @@
-# Introduction
-Most systems support manually configuring a proxy for web access, but this is cumbersome and kind of techical, so Chrome also supports [WPAD](http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol) for proxy configuration (enabled if "automatically detect proxy settings" is enabled on Windows).
+# Proxy Auto Config Using WPAD
-# Problem
-Currently, WPAD is pretty slow when we're starting up Chrome - we have to query the local network for WPAD servers using DNS (and maybe NetBIOS), and we wait all the way until the resolver timeout before we try sending any HTTP requests if there's no WPAD server. This is a really crappy user experience, since the browser's basically unuseable for a couple of seconds after startup if autoconfig is turned on and there's no WPAD server.
+Most systems support manually configuring a proxy for web access, but this is
+cumbersome and kind of technical, so Chrome also supports
+[WPAD](http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol) for proxy
+configuration (enabled if "automatically detect proxy settings" is enabled on
+Windows).
-# Solution
-There's a couple of simplifying assumptions we make:
-
- * If there is a WPAD server, it is on the same network as us, and hence likely to respond to lookups far more quickly than a random internet DNS server would.
- * If we get a lookup success for WPAD, there's overwhelmingly likely to be a live WPAD server. The WPAD script could also be large (!?) whereas the DNS response is necessarily small.
+## Problem
-Therefore our proposed solution is that when we're trying to do WPAD resolution, we fail very fast if the WPAD server doesn't immediately respond to a lookup (like, 100ms or less). If there's no WPAD server, we'll time the lookup out in 100ms and get ourselves out of the critical path much faster. We won't time out lookups for explicitly-configured WPAD servers (i.e., custom PAC script URLs) in this fashion; those will still use the normal DNS timeout.
+Currently, WPAD is pretty slow when we're starting up Chrome - we have to query
+the local network for WPAD servers using DNS (and maybe NetBIOS), and we wait
+all the way until the resolver timeout before we try sending any HTTP requests
+if there's no WPAD server. This is a really crappy user experience, since the
+browser's basically unusable for a couple of seconds after startup if
+autoconfig is turned on and there's no WPAD server.
-**This could have bad effects on networks with slow DNS or WPAD servers**, so we should be careful to allow users to turn this off, and we should keep statistics as to how often lookups succeed after the timeout.
+## Solution
-So here's what our WPAD lookup policy looks like **currently** in practice (assuming WPAD is enabled throughout):
+There are a couple of simplifying assumptions we make:
- * If there's no WPAD server on the network, we try to do a lookup for WPAD, time out after two seconds, and disable WPAD. Until this time, no requests can proceed.
- * If there's a WPAD server and our lookup for it answers in under two seconds, we use that WPAD server (fetch and execute its script) and proceed with requests.
- * If there's a WPAD server and our lookup for it answers after two seconds, we time out and do not use it (ever) until a network change triggers a WPAD reconfiguration.
+* If there is a WPAD server, it is on the same network as us, and hence likely
+ to respond to lookups far more quickly than a random internet DNS server
+ would.
+* If we get a lookup success for WPAD, there's overwhelmingly likely to be a
+ live WPAD server. The WPAD script could also be large (!?) whereas the DNS
+ response is necessarily small.
+
+Therefore our proposed solution is that when we're trying to do WPAD resolution,
+we fail very fast if the WPAD server doesn't immediately respond to a lookup
+(like, 100ms or less). If there's no WPAD server, we'll time the lookup out in
+100ms and get ourselves out of the critical path much faster. We won't time out
+lookups for explicitly-configured WPAD servers (i.e., custom PAC script URLs) in
+this fashion; those will still use the normal DNS timeout.
+
+**This could have bad effects on networks with slow DNS or WPAD servers**, so we
+should be careful to allow users to turn this off, and we should keep statistics
+as to how often lookups succeed after the timeout.
+
+So here's what our WPAD lookup policy looks like **currently** in practice
+(assuming WPAD is enabled throughout):
+
+* If there's no WPAD server on the network, we try to do a lookup for WPAD,
+ time out after two seconds, and disable WPAD. Until this time, no requests
+ can proceed.
+* If there's a WPAD server and our lookup for it answers in under two seconds,
+ we use that WPAD server (fetch and execute its script) and proceed with
+ requests.
+* If there's a WPAD server and our lookup for it answers after two seconds, we
+ time out and do not use it (ever) until a network change triggers a WPAD
+ reconfiguration.
Here's what the **proposed** lookup policy looks like in practice:
- * If there's no WPAD server on the network, we try to do a lookup for WPAD, time out after 100ms, and disable WPAD.
- * If there's a WPAD server and our lookup for it answers in under 100ms or it's explicitly configured (via a custom PAC URL), we use that WPAD server.
- * If there's a WPAD server and our lookup for it answers after 100ms, we time out and do not use it until a network change. \ No newline at end of file
+* If there's no WPAD server on the network, we try to do a lookup for WPAD,
+ time out after 100ms, and disable WPAD.
+* If there's a WPAD server and our lookup for it answers in under 100ms or
+ it's explicitly configured (via a custom PAC URL), we use that WPAD server.
+* If there's a WPAD server and our lookup for it answers after 100ms, we time
+ out and do not use it until a network change.
diff --git a/docs/retrieving_code_analysis_warnings.md b/docs/retrieving_code_analysis_warnings.md
index 34c36f1..a59e696 100644
--- a/docs/retrieving_code_analysis_warnings.md
+++ b/docs/retrieving_code_analysis_warnings.md
@@ -1,40 +1,66 @@
-# Introduction
+# Retrieving Code Analysis Warnings
-Several times a day the Chromium code base is built with Microsoft VC++'s /analyze compile option. This does static code analysis which has found numerous bugs (see https://code.google.com/p/chromium/issues/detail?id=427616). While it is possible to visit the /analyze builder page and look at the raw results (http://build.chromium.org/p/chromium.fyi/builders/Chromium%20Windows%20Analyze) this works very poorly.
+Several times a day the Chromium code base is built with Microsoft VC++'s
+`/analyze` compile option. This does static code analysis which has found
+numerous bugs (see https://crbug.com/427616). While it is possible to visit the
+`/analyze` builder page and look at the raw results
+(http://build.chromium.org/p/chromium.fyi/builders/Chromium%20Windows%20Analyze)
+this works very poorly.
-As of this writing there are 2,702 unique warnings. Some of these are in header files and fire multiple times so there are a total of 11,202 warning lines. Most of these have been examined and found to be false positives. Therefore, in order to sanely examine the /analyze warnings it is necessary to summarize the warnings, and find what is new.
+As of this writing there are 2,702 unique warnings. Some of these are in header
+files and fire multiple times so there are a total of 11,202 warning lines. Most
+of these have been examined and found to be false positives. Therefore, in order
+to sanely examine the `/analyze` warnings, it is necessary to summarize the
+warnings, and find what is new.
There are scripts to do this.
-# Details
+## Details
-The necessary scripts, which currently run on Windows only, are checked in to tools\win\new\_analyze\_warnings. Typical usage is like this:
+The necessary scripts, which currently run on Windows only, are checked in to
+`tools\win\new_analyze_warnings`. Typical usage is like this:
-```
-> set ANALYZE_REPO=d:\src\analyze_chromium
-> retrieve_latest_warnings.bat
-```
+ > set ANALYZE_REPO=d:\src\analyze_chromium
+ > retrieve_latest_warnings.bat
-The batch file using the associated Python scripts to retrieve the latest results from the web page, create a summary file, and if previous results were found create a new warnings file. Typical results look like this:
+The batch file uses the associated Python scripts to retrieve the latest
+results from the web page, create a summary file, and, if previous results were
+found, create a new warnings file. Typical results look like this:
-```
-analyze0067_full.txt
-analyze0067_summary.txt
-analyze0067_new.txt
-```
+ analyze0067_full.txt
+ analyze0067_summary.txt
+ analyze0067_new.txt
-If ANALYZE\_REPO is set then the batch file goes to %ANALYZE\_REPO%\src, does a git pull, then does a checkout of the revision that corresponds to the latest warnings, and then does a gclient sync. The warnings can then be easily correlated to the specific source that triggered them.
+If `ANALYZE_REPO` is set then the batch file goes to `%ANALYZE_REPO%\src`, does
+a git pull, then does a checkout of the revision that corresponds to the latest
+warnings, and then does a gclient sync. The warnings can then be easily
+correlated to the specific source that triggered them.
-# Understanding the results
+## Understanding the results
-The new.txt file lists new warnings, and fixed warnings. Usually it can accurately identify them but sometimes all it can say is that the number of instances of a particularly warning has changed, which is usually not of interest. If you look at new warnings every day or two then the number of new warnings is usually low enough to be quite manageable.
+The `new.txt` file lists new warnings, and fixed warnings. Usually it can
+accurately identify them but sometimes all it can say is that the number of
+instances of a particular warning has changed, which is usually not of
+interest. If you look at new warnings every day or two then the number of new
+warnings is usually low enough to be quite manageable.
-The summary.txt file groups warnings by type, and then sorts the groups by frequency. Low frequency warnings are more likely to be real bugs, so focus on those. However, all of the low-frequency have been investigated so at this time they are unlikely to be real bugs.
+The `summary.txt` file groups warnings by type, and then sorts the groups by
+frequency. Low frequency warnings are more likely to be real bugs, so focus on
+those. However, all of the low-frequency warnings have been investigated, so at
+this time they are unlikely to be real bugs.
-The majority of new warnings are variable shadowing warnings. Until -Wshadow is enabled for gcc/clang builds these warnings will continue to appear, and unless they are actually buggy or are particularly confusing it is usually not worth fixing them. One exception would be if you are planning to enable -Wshadow in which case using the list or relevant shadowing warnings would be ideal.
+The majority of new warnings are variable shadowing warnings. Until `-Wshadow`
+is enabled for gcc/clang builds these warnings will continue to appear, and
+unless they are actually buggy or are particularly confusing it is usually not
+worth fixing them. One exception would be if you are planning to enable
+`-Wshadow`, in which case using the list of relevant shadowing warnings would be
+ideal.
-Some of the warnings say that out-of-range memory accesses will occur, which is pretty scary. For instance "warning C6201: Index '-1' is out of valid index range '0' to '4'". In most cases these are false positives so use your own judgment when deciding whether to fix them.
+Some of the warnings say that out-of-range memory accesses will occur, which is
+pretty scary. For instance "warning C6201: Index '-1' is out of valid index
+range '0' to '4'". In most cases these are false positives so use your own
+judgment when deciding whether to fix them.
-The full.txt file contains the raw output and should usually be ignored.
+The `full.txt` file contains the raw output and should usually be ignored.
-If you have any questions then post to the chromium dev mailing list. \ No newline at end of file
+If you have any questions then post to the chromium dev mailing list.
diff --git a/docs/script_preprocessor.md b/docs/script_preprocessor.md
index fc6b9f4..7b1b9ae 100644
--- a/docs/script_preprocessor.md
+++ b/docs/script_preprocessor.md
@@ -1,64 +1,143 @@
# Using the Chrome Devtools JavaScript preprocessing feature
-The Chrome Devtools JavaScript preprocessor intercepts JavaScript just before it enters V8, the Chrome JS system, allowing the JS to be transcoded before compilation. In combination with page injected JavaScript, the preprocessor allows a complete synthetic runtime to be constructed in JavaScript. Combined with other functions in the `chrome.devtools` extension API, the preprocessor allows new more sophisticated JavaScript-related developer tools to be created.
+The Chrome Devtools JavaScript preprocessor intercepts JavaScript just before it
+enters V8, the Chrome JS system, allowing the JS to be transcoded before
+compilation. In combination with page injected JavaScript, the preprocessor
+allows a complete synthetic runtime to be constructed in JavaScript. Combined
+with other functions in the `chrome.devtools` extension API, the preprocessor
+allows new, more sophisticated JavaScript-related developer tools to be created.
## API
-To use the script preprocessor, write a [chrome devtools extension](http://developer.chrome.com/extensions/devtools.inspectedWindow.html#method-reload) that reloads the Web page with the preprocessor installed:
-```
+To use the script preprocessor, write a
+[chrome devtools extension](http://developer.chrome.com/extensions/devtools.inspectedWindow.html#method-reload)
+that reloads the Web page with the preprocessor installed:
+
+```javascript
chrome.devtools.inspectedWindow.reload({
- ignoreCache: true,
+ ignoreCache: true,
injectedScript: runThisFirst,
preprocessorScript: preprocessor
});
```
-where `preprocessorScript` is source code (string) for a JavaScript function taking three string arguments, the source to preprocess, the URL of the source, and a function name if the source is an DOM event handler. The preprocessorerScript function should return a string to be compiled by Chrome in place of the input source. In the case that the source is a DOM event handler, the returned source must compile to a single JS function.
-
-The [Chrome Preprocessor Example](http://developer.chrome.com/extensions/samples.html) illustrates the API call in a simple chrome devtools extension. Download and unpack the .zip file, use `chrome://extensions` in Developer Mode and load the unpacked extension. Then open or reopen devtools. The Preprocessor panel has a **reload** button that triggers a simple preprocessor.
-The preprocessor runs in an isolated world similar to the environment of Chrome content scripts. A `window` object is available but it shares no properties with the Web page `window` object. DOM calls in the preprocessor environment will operate on the Web page, but developers should be cautious about operating on the DOM in the preprocessor. We do not test such operations though we expect the result to resemble calls from the outer function of `<script>` tags.
-
-In some applications the developer may coordinate runtime initialization using the `injectedScript` property in the object passed to the `reload()` call. This is also JavaScript source code; it is compiled into the page ahead of any Web page scripts and thus before any JavaScript is preprocessed.
-
-The preprocessor is compiled once just before the first JavaScript appears. It remains active until the page is reloaded or otherwise navigated. Navigating the Web page back and then forward will result in no preprocessing. Closing devtools will leave the preprocessor in place.
+where `preprocessorScript` is source code (string) for a JavaScript function
+taking three string arguments: the source to preprocess, the URL of the source,
+and a function name if the source is a DOM event handler. The
+`preprocessorScript` function should return a string to be compiled by Chrome
+in place of the input source. In the case that the source is a DOM event
+handler, the returned source must compile to a single JS function.
+
+The
+[Chrome Preprocessor Example](http://developer.chrome.com/extensions/samples.html)
+illustrates the API call in a simple chrome devtools extension. Download and
+unpack the .zip file, use `chrome://extensions` in Developer Mode and load the
+unpacked extension. Then open or reopen devtools. The Preprocessor panel has a
+**reload** button that triggers a simple preprocessor.
+
+The preprocessor runs in an isolated world similar to the environment of Chrome
+content scripts. A `window` object is available but it shares no properties with
+the Web page `window` object. DOM calls in the preprocessor environment will
+operate on the Web page, but developers should be cautious about operating on
+the DOM in the preprocessor. We do not test such operations though we expect the
+result to resemble calls from the outer function of `<script>` tags.
+
+In some applications the developer may coordinate runtime initialization using
+the `injectedScript` property in the object passed to the `reload()` call. This
+is also JavaScript source code; it is compiled into the page ahead of any Web
+page scripts and thus before any JavaScript is preprocessed.
+
+The preprocessor is compiled once just before the first JavaScript appears. It
+remains active until the page is reloaded or otherwise navigated. Navigating the
+Web page back and then forward will result in no preprocessing. Closing devtools
+will leave the preprocessor in place.
## Use Cases
The script preprocessor supports transcoding input source to JavaScript. Use cases include:
- * Adding write barriers for Querypoint debugging,
- * Supporting feature-specific debugging of next generation EcmaScript using eg Traceur,
- * Integration of development tools like coverage analysis.
- * Analysis of call sequences for performance tuning.
-Several JavaScript compilers support transcoding, including [Traceur](https://github.com/google/traceur-compiler#readme) and [Esprima](http://esprima.org/).
-
-## Implementation
-The implementation relies on the Devtools front-end hosting an extension supplying the preprocessor script; the front end communicates with the browser backend over eg web sockets.
+* Adding write barriers for Querypoint debugging.
+* Supporting feature-specific debugging of next generation EcmaScript using,
+  e.g., Traceur.
+* Integration of development tools like coverage analysis.
+* Analysis of call sequences for performance tuning.
-The devtools extension function call issues a postMessage() event from the devtools extension iframe to the devtools main frame. The event is handled in ExtensionServer.js which forwards it over the [devtools remote debug protocol](https://developers.google.com/chrome-developer-tools/docs/protocol/1.0/page#command-reload). (See [Bug 229971](https://code.google.com/p/chromium/issues/detail?id=229971) for this part of the implementation and its status).
+Several JavaScript compilers support transcoding, including
+[Traceur](https://github.com/google/traceur-compiler#readme) and
+[Esprima](http://esprima.org/).
-When the preprocessor script arrives in the back end, `InspectorPageAgent::reload` stores the preprocessor script in `m_pendingScriptPreprocessor`. After the browser begins the reload operation, it calls `PageDebuggerAgent::didClearWindowObjectInWorld` which moves the processor source into the `scriptDebugServer()`.
+## Implementation
-Next the browser prepares the page environment and calls `PageDebuggerAgent::didClearWindowObjectInWorld`. This function clears the preprocessor object pointer and if it is not recreated during the page load, no scripts will be preprocessed. At this point we only store the preprocessor source, delaying the compilation of the preprocessor until just before its first use. This helps ensure that the JS environment we use is fully initialized.
+The implementation relies on the Devtools front-end hosting an extension
+supplying the preprocessor script; the front end communicates with the browser
+backend over, e.g., web sockets.
+
+The devtools extension function call issues a postMessage() event from the
+devtools extension iframe to the devtools main frame. The event is handled in
+`ExtensionServer.js` which forwards it over the
+[devtools remote debug protocol](https://developers.google.com/chrome-developer-tools/docs/protocol/1.0/page#command-reload).
+(See [Bug 229971](https://crbug.com/229971) for this part of the implementation
+and its status).
+
+When the preprocessor script arrives in the back end,
+`InspectorPageAgent::reload` stores the preprocessor script in
+`m_pendingScriptPreprocessor`. After the browser begins the reload operation, it
+calls `PageDebuggerAgent::didClearWindowObjectInWorld`, which moves the
+preprocessor source into the `scriptDebugServer()`.
+
+Next the browser prepares the page environment and calls
+`PageDebuggerAgent::didClearWindowObjectInWorld`. This function clears the
+preprocessor object pointer and if it is not recreated during the page load, no
+scripts will be preprocessed. At this point we only store the preprocessor
+source, delaying the compilation of the preprocessor until just before its first
+use. This helps ensure that the JS environment we use is fully initialized.
Source to be preprocessed comes from three different places:
- 1. Web page `<script>` tags,
- 1. DOM event-listener attributes, eg `onload`,
- 1. JS `eval()` or `new Function()` calls.
-
-When the browser encounters either a `<script>` tag (`ScriptController::executeScriptInMainWorld`) or an element attribute script (`V8LazyEventListener::prepareListenerObject`) we call a corresponding function in InspectorInstrumentation. This function has a fast inlined return path in the case that the debugger is not attached.
-If the debugger is attached, InspectorInstrumentation will call the matching function in PageDebuggerAgent (see core/inspector/InspectorInstrumentation.idl). It checks to see if the preprocessor is installed. If not, it returns.
+1. Web page `<script>` tags,
+1. DOM event-listener attributes, e.g. `onload`,
+1. JS `eval()` or `new Function()` calls.
-The preprocessor source is stored in PageScriptDebugServer.
-If the preprocessor is installed, we check to see if it is compiled. If not, we create a new `ScriptPreprocessor` object. The constructor uses `ScriptController::executeScriptInIsolatedWorld` to compile the preprocessor in a new isolated world associated with the Web page's main world. If the compilation and outer script execution succeed and if the result is a JavaScript function, we store the resulting function as a `ScopedPersistent<v8::Function>` member of the preprocessor.
-
-If the `PageScriptDebugServer::preprocess()` has a value for the preprocessor function, it applies the function to the web page source using `V8ScriptRunner::callAsFunction()`. This calls the compiled JS function in the ScriptPreprocessor's isolated world and retrieves the resulting string.
+When the browser encounters either a `<script>` tag
+(`ScriptController::executeScriptInMainWorld`) or an element attribute script
+(`V8LazyEventListener::prepareListenerObject`) we call a corresponding function
+in InspectorInstrumentation. This function has a fast inlined return path in the
+case that the debugger is not attached.
-When the preprocessed JavaScript source runs it may call `eval()` or `new Function()`. These calls cause the V8 runtime to compile source. Immediately before compiling, V8 issues a beforeCompile event which triggers `ScriptDebugServer::handleV8DebugEvent()`. This code is only called if the debugger is active. In the handler we call `ScriptDebugServer::preprocessEval()` to examine the ScriptCompilationTypeInfo, a marker set by V8, to see if we are compiling dynamic code. Only dynamic code is preprocessed in this function and only if we are not executing the preprocessor itself.
+If the debugger is attached, InspectorInstrumentation will call the matching
+function in PageDebuggerAgent (see core/inspector/InspectorInstrumentation.idl).
+It checks to see if the preprocessor is installed. If not, it returns.
-During the browser operation, API generation code, debugger console initialization code, injected page script code, debugger information extraction code, and regular web page code enter this function. There is currently no way to distinguish internal or system code from the web page code. However the internal code is all static. By limiting our preprocessing to dynamic code in the beforeCompile handler, we know we are only operating on Web page code. The static Web page code is preprocessed as described above.
+The preprocessor source is stored in PageScriptDebugServer.
+If the preprocessor is installed, we check to see if it is compiled. If not, we
+create a new `ScriptPreprocessor` object. The constructor uses
+`ScriptController::executeScriptInIsolatedWorld` to compile the preprocessor in
+a new isolated world associated with the Web page's main world. If the
+compilation and outer script execution succeed and if the result is a JavaScript
+function, we store the resulting function as a `ScopedPersistent<v8::Function>`
+member of the preprocessor.
+
+If `PageScriptDebugServer::preprocess()` has a value for the preprocessor
+function, it applies the function to the web page source using
+`V8ScriptRunner::callAsFunction()`. This calls the compiled JS function in the
+ScriptPreprocessor's isolated world and retrieves the resulting string.
+
+When the preprocessed JavaScript source runs it may call `eval()` or
+`new Function()`. These calls cause the V8 runtime to compile source.
+Immediately before compiling, V8 issues a beforeCompile event which triggers
+`ScriptDebugServer::handleV8DebugEvent()`. This code is only called if the
+debugger is active. In the handler we call `ScriptDebugServer::preprocessEval()`
+to examine the ScriptCompilationTypeInfo, a marker set by V8, to see if we are
+compiling dynamic code. Only dynamic code is preprocessed in this function and
+only if we are not executing the preprocessor itself.
+
+During the browser operation, API generation code, debugger console
+initialization code, injected page script code, debugger information extraction
+code, and regular web page code enter this function. There is currently no way
+to distinguish internal or system code from the web page code. However the
+internal code is all static. By limiting our preprocessing to dynamic code in
+the beforeCompile handler, we know we are only operating on Web page code. The
+static Web page code is preprocessed as described above.
## Limitations
-We currently do not support preprocessing of WebWorker source code. \ No newline at end of file
+We currently do not support preprocessing of WebWorker source code.
diff --git a/docs/seccomp_sandbox_crash_dumping.md b/docs/seccomp_sandbox_crash_dumping.md
index 1397664..0c776fe 100644
--- a/docs/seccomp_sandbox_crash_dumping.md
+++ b/docs/seccomp_sandbox_crash_dumping.md
@@ -1,24 +1,46 @@
-## Introduction
-Currently, Breakpad relies on facilities that are disallowed inside the Linux seccomp sandbox. Specifically, it sets a signal handler to catch faults (currently disallowed), forks a new process, and uses ptrace() (also disallowed) to read the memory of the faulted process.
+# seccomp Sandbox Crash Dumping
+
+Currently, Breakpad relies on facilities that are disallowed inside the Linux
+seccomp sandbox. Specifically, it sets a signal handler to catch faults
+(currently disallowed), forks a new process, and uses ptrace() (also disallowed)
+to read the memory of the faulted process.
## Options
+
There are three ways we could do crash dumping of seccomp-sandboxed processes:
- * Find a way to permit signal handling safely inside the sandbox (see below).
- * Allow the kernel's core dumper to kick in and write a core file.
- * This seems risky because this code tends not to be well-tested.
- * This will not work if the process is chrooted, so it would not work if the seccomp sandbox is stacked with the SUID sandbox.
- * Have an unsandboxed helper process which ptrace()s the sandboxed process to catch faults.
+
+* Find a way to permit signal handling safely inside the sandbox (see below).
+* Allow the kernel's core dumper to kick in and write a core file.
+ * This seems risky because this code tends not to be well-tested.
+ * This will not work if the process is chrooted, so it would not work if
+ the seccomp sandbox is stacked with the SUID sandbox.
+* Have an unsandboxed helper process which `ptrace()`s the sandboxed process
+  to catch faults (a minimal sketch follows this list).
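+
+For the third option, the core tracing primitive is small; here is a minimal
+sketch (error handling omitted; `ReadWord` is a hypothetical helper name):
+
+```cpp
+#include <sys/ptrace.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+// Hypothetical helper-process primitive: attach to the sandboxed process
+// and read one word of its memory, the basic operation a ptrace()-based
+// crash dumper needs. Error handling is omitted for brevity.
+long ReadWord(pid_t pid, void* addr) {
+  ptrace(PTRACE_ATTACH, pid, nullptr, nullptr);
+  waitpid(pid, nullptr, 0);  // Wait until the tracee stops.
+  long word = ptrace(PTRACE_PEEKDATA, pid, addr, nullptr);
+  ptrace(PTRACE_DETACH, pid, nullptr, nullptr);
+  return word;
+}
+```
+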
## Signal handling in the seccomp sandbox
-In case a trusted thread faults with a SIGSEGV, we must make sure that an untrusted thread cannot register a signal handler that will run in the context of the trusted thread.
+
+If a trusted thread faults with a `SIGSEGV`, we must make sure that an
+untrusted thread cannot have registered a signal handler that will run in the
+context of the trusted thread.
Here are some mechanisms that could make this safe:
- * sigaltstack() is per-thread. If we opt not to set a signal stack for trusted threads, and set %esp/%rsp to an invalid address, trusted threads will die safely if they fault.
- * This means the trusted thread cannot set a signal stack on behalf of the untrusted thread once the latter has switched to seccomp mode. The signal stack would have to be set up when the thread is created and not subsequently changed.
- * clone() has a CLONE\_SIGHAND flag. By omitting this flag, trusted and untrusted threads can have different sets of signal handlers. This means we can opt not to set signal handlers for trusted threads.
- * Again, per-thread signal handler sets would mean the trusted thread cannot change signal handlers on behalf of untrusted threads.
- * sigprocmask()/pthread\_sigmask(): These can be used to block signal handling in trusted threads.
+
+* `sigaltstack()` is per-thread. If we opt not to set a signal stack for
+ trusted threads, and set %esp/%rsp to an invalid address, trusted threads
+ will die safely if they fault.
+ * This means the trusted thread cannot set a signal stack on behalf of the
+ untrusted thread once the latter has switched to seccomp mode. The
+ signal stack would have to be set up when the thread is created and not
+ subsequently changed.
+* `clone()` has a `CLONE_SIGHAND` flag. By omitting this flag, trusted and
+ untrusted threads can have different sets of signal handlers. This means we
+ can opt not to set signal handlers for trusted threads.
+ * Again, per-thread signal handler sets would mean the trusted thread
+ cannot change signal handlers on behalf of untrusted threads.
+* `sigprocmask()/pthread_sigmask()`: These can be used to block signal
+  handling in trusted threads (see the sketch after this list).
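+
+A minimal sketch of the third mechanism, run during trusted-thread setup (the
+function name is hypothetical):
+
+```cpp
+#include <pthread.h>
+#include <signal.h>
+
+// Block every blockable signal on the calling (trusted) thread. A
+// synchronously generated fatal signal, such as a SIGSEGV from a real
+// fault, forces the default action even while blocked, so the thread dies
+// safely instead of running a handler registered by an untrusted thread.
+void BlockAllSignalsOnThisThread() {
+  sigset_t all;
+  sigfillset(&all);
+  pthread_sigmask(SIG_BLOCK, &all, nullptr);
+}
+```
+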
## See also
- * LinuxCrashDumping
- * [Issue 37728](http://code.google.com/p/chromium/issues/detail?id=37728) \ No newline at end of file
+
+* [LinuxCrashDumping](linux_crash_dumping.md)
+* [Issue 37728](https://crbug.com/37728)
diff --git a/docs/spelling_panel_planning_doc.md b/docs/spelling_panel_planning_doc.md
deleted file mode 100644
index 2b0be5d..0000000
--- a/docs/spelling_panel_planning_doc.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# High Level
-
-There will be a context menu from which the display of the spelling panel can be toggled. This will tie into some methods in webkit that will make sure that the selection is in the right place and in sync with the panel. By catching the messages that the spelling panel sends we can also do the right things when the user asks to correct words, ignore words, move to the next misspelled word and learn words. Additionally, the language of the spellchecker can also be changed through the spelling panel.
-
-# Details
-
-## Toggling the Spelling Panel
-
-Design document for the addition of the spelling panel to Chromium
-
- * Add a new define, `IDC_SPELLING_PANEL_TOGGLE`, to chrome\_dll\_resource.h.
- * Add code to `RenderViewContextMenu::AppendEditableItems` to check the state of the spelling panel and make the right decision. Note that this has to touch the function `SpellCheckerPlatform::SpellingPanelVisible()` which should only be called from the main thread (which this is, so it's ok).
- * Showing the spelling panel works as follows: `RenderViewContextMenu::ExecuteCommand` will need another case to handle the added define. It calls `source_tab_contents_->render_view_host()->ToggleSpellPanel()`, which in turn does `Send(new ViewMsg_ToggleSpellPanel(routing_id(),bool))`. The bool should be the current state of the spelling panel, which is cached on the webkit side to avoid an extra IPC call that would be difficult to execute, due to the limitations with `SpellCheckerPlatform::SpellingPanelVisible()`. This message is caught in `RenderView`, which caches the visibility and then calls `ToggleSpellPanel` on the focused frame. This call ends up at `WebFrameImpl`, which forwards the call to the webkit method `Editor::showSpellingGuessPanel`. From here, webkit does a few things and then calls `advanceToNextMispelling`, which calls `client()->updateSpellingUIWithMisspelledWord(misspelledWord);` which will eventually end up back in the browser due to an IPC call. We can update the panel using `[[NSSpellChecker sharedSpellChecker] updateSpellingPanelWithMisspelledWord:nextMisspelledWord]`. After that, `client()->showSpellingUI(true)` is called. This puts us back in the webkit glue code in editor\_client\_impl.cc. From here, we grab the current `WebViewDelegate` and call `ShowSpellingUI`. This call ends up in `RenderView`, since it implements `WebViewDelegate`, which finally does `Send(new ViewHostMsg_ShowSpellingPanel(routing_id_,show)`). This is caught in `resource_message_filter.cc`, which forwards the call to `SpellCheckerPlatform::ShowSpellingPanel` where we finally display the panel. Hiding the spelling panel is a similar process (i.e. we go through webkit and eventually receive a message back telling us to hide the spelling panel).
-
-## Spellchecking Words
- * `advanceToNextMisspelling` in webkit ensures that the currently misspelled word is selected. When the user clicks on the change button in the spelling correction panel, the panel sends a message up the responder chain. We catch this message (`changeSpelling`) in `render_widget_host_view_mac.mm` and interrogate the spelling panel (the sender) to find out what the new word is. We then send an IPC message to the renderer telling it to replace the selected word and advance to the next misspelling, the machinery for which already exists in webkit. The spelling panel will also send the `checkSpelling` message, although this is not very well documented. Anytime we receive this message, we should advance to the next misspelled word, which we can do using `advanceToNextMisspelling`.
-
-## The Find Next Button
- * When the Find Next button is clicked, the spelling panel sends just the `checkSpelling` message, so little additional work is needed to enable Find Next.
-
-## Learning Words
- * When the Learn Button is clicked, the spelling panel handles telling OS X to learn the word, but we still need to move to the next word and provide it with the next misspelling. Again, this is just a matter of catching the `checkSpelling` message.
-
-## Ignoring Words
- * In order to support ignoring words, we need to have unique document tags for every RenderView; this could be done at a more fine grain level, but this is how mainline webkit does it. Whenever a spellcheck request is generated in webkit, it asks for a document tag, which we can obtain from the browser through an IPC call (the actual tags are generated by `[NSSpellChecker uniqueSpellDocumentTag]`). This tag is stored in the RenderView and all spellchecking requests from webkit now bundle the tag. On platforms other than OS X, it is eventually ignored.
- * When the user clicks the Ignore button in the panel, we receive an `ignoreSpelling` message. We send this to `SpellCheckerPlatform::IgnoreWord`, where we can make the calls to the NSSpellChecker to ignore the word. However, there is a problem. We don't know what the tag of the document that we are in is. To solve this, we cache the document tag whenever we spellcheck a word and use that document tag.
- * When a RenderView is closed, it sends an IPC message to the Browser telling it that the document with its tag has been closed. This lets us forget those words that we no longer need to ignore.
-
-## Unresolved Issues
- * The spelling panel displays `Multilingual` as an option for the spelling language, which is not currently supported by Chromium.