...
This guide is a constant work in progress. If something is missing or confusing, please update the guide with better information.
To develop Earthdata Search, you will need a computer running Mac OS X 10.8+ and the following:
brew install qt
Earthdata Search has been open sourced and is available at https://github.com/nasa/earthdata-search.
...
If you are an external contributor, please fork the EDSC repo from https://github.com/nasa/earthdata-search.
There are a few other related code repositories that may be of interest to you. Links to those repositories are below:
The README at https://github.com/nasa/earthdata-search contains documentation detailing configuration, setup, and running Earthdata Search. After following the installation guidelines in the README, follow the rest of the guide for the commands needed to run an initial setup.
Command | Description
---|---
`rake -T` | List rake commands available in the project
`rake doc` | Generate documentation. UI documentation is generated under doc/ui/index.html
`rake test` | Run all tests, including unit, functional, integration, and UI tests
`rake jasmine` | Run Jasmine / Coffeescript specs in a browser
`rake spec` | Run RSpec specs
`rake test:csslint` | Run CSSLint on generated documentation
`rake test:deadweight` | Run deadweight to find unexercised CSS in UI documentation
`rails s` | Run the app locally
`rake notes` | View all TODO and FIXME comments
JIRA Agile Board: https://bugs.earthdata.nasa.gov/secure/RapidBoard.jspa?rapidView=209
JIRA Project: https://bugs.earthdata.nasa.gov/browse/EDSC
Github site: https://github.com/nasa/earthdata-search
Bitbucket site: https://git.earthdata.nasa.gov/projects/EDSC
Bamboo Build: https://ci.earthdata.nasa.gov/browse/EDSC-EDSCDB/DBN2
...
Earthdata Search uses an agile development process with two-week sprints. At all times, our main branch maintains a potentially releasable product increment. Once a commit is merged into main, it will immediately trigger a build and deployment to our SIT environment.
...
We have periodic retrospectives, but more importantly, we try to maintain direct communication and adjust for problems as they arise.
...
...
Our estimation may differ from typical agile projects in the following ways:
Bugs are tackled as soon as possible. We cannot have a huge backlog of bugs. If we've marked something done and it is defective, we fix it. Quality is non-negotiable.
...
No change may go into main until it has undergone peer review in Github and is successfully building in GitHub Actions. Main is to remain deployable at all times, so ensure it contains production-ready code with green builds.
Occasionally, due to intermittent problems with test timing or execution order, our main build will fail. Fixing the issue and ensuring it does not happen again becomes the highest priority when this happens.
...
...
Info: At the end of every sprint, in order to make patching a release easier, please tag the main branch at the latest commit. Also remember to update the Release Version on Bamboo for all of the projects. These tags are referred to as releases in Github and can be viewed here: https://github.com/nasa/earthdata-search/releases
Our code generally follows Ruby on Rails conventions. The descriptions below describe useful conventions not outlined by Rails; they touch on the most important pieces of code and do not attempt to describe every directory.
Earthdata Search is implemented primarily using Ruby on Rails 4.1 running on Ruby 2.1 (MRI). For specific bugfix or patch releases, see the project's .ruby-version and Gemfile.lock files.
Production instances run on unicorn / nginx and are backed by a Postgres database.
Earthdata Search is primarily a client to numerous web-facing services:
Earthdata Search uses a responsive HTML5 boilerplate from http://www.initializr.com for our basic layout. It provides default CSS "reset" rules, browser detection, and feature implementation for older browsers.
Client-side code is written in Coffeescript, and uses jQuery or plain Javascript for DOM interaction.
We use knockout.js for handling data models and keeping the interface in sync with changing data. Our knockout code can be found under app/assets/javascripts/models and is further subdivided into three types:
We use a heavily-customized Leaflet.js to draw our maps and Leaflet.draw to allow the user to draw and edit spatial bounds. Our key customizations are handlers for projection switches, rendering of GIBS-based layers for a set of granule results, and translation / interpolation of ECHO polygons into Leaflet-compatible geometries. Customizations are found in the app/assets/javascripts/modules/maps directory.
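To illustrate the interpolation idea, here is a simplified sketch (not the actual EDSC map code, which performs geodesic interpolation on the sphere): polygon edges are subdivided with intermediate points so that long edges follow the intended path when the projection changes.

```javascript
// Simplified illustration: subdivide each polygon edge by inserting
// evenly spaced intermediate points. Real geodesic interpolation works
// on the sphere; this sketch interpolates linearly in [lat, lng].
function interpolateEdge(start, end, segments) {
  const points = [];
  for (let i = 0; i < segments; i++) {
    const t = i / segments;
    points.push([
      start[0] + (end[0] - start[0]) * t,
      start[1] + (end[1] - start[1]) * t
    ]);
  }
  return points;
}

// Densify a closed polygon: each edge contributes `segments` points,
// ending just before the next vertex (which starts the next edge).
function densifyPolygon(latLngs, segments) {
  const result = [];
  for (let i = 0; i < latLngs.length; i++) {
    const start = latLngs[i];
    const end = latLngs[(i + 1) % latLngs.length];
    result.push(...interpolateEdge(start, end, segments));
  }
  return result;
}
```

With `segments` set to 1 the polygon is returned unchanged; higher values add intermediate points along every edge.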
For icon fonts, we use Font Awesome when appropriate and create custom icons through IcoMoon when Font Awesome does not suit our needs. IcoMoon files can be imported from the app/assets/fonts directory and customizations should be exported there.
Our granule timeline is a custom SVG component implemented in the app/assets/javascripts/modules/timeline folder.
We use a custom version of Bootstrap for some UI elements. Our customized version of bootstrap can be downloaded here.
We use a jQuery plugin for our datetime pickers. Usage information can be found here.
For the temporal recurring year range we use bootstrap-slider.
Fast and consistent tests are critical. The full suite should run in under 10 minutes, and faster is better. If the suite gets slow, fix it. If a spec fails intermittently, find the problem and make it pass consistently. In order to ensure speed and consistency, we have mocked all of our external service calls using VCR and customized Capybara to avoid reloading sessions between every spec (generally this means you want to use `before(:all)` instead of `before(:each)` in specs and ensure that there is a corresponding `after(:all)` block that resets the page state).
The entire suite of Earthdata Search tests, including unit, functional, and UI tests, may be run using the `rake test` command.
Earthdata Search uses RSpec for its unit, functional, and integration tests. Tests may be run by using the `rspec` command in the project root.
Integration specs use Capybara and CapybaraWebkit to simulate browser interactions. We include the capybara-screenshot gem and publish screenshots produced by failing builds to aid debugging.
We document the application's behavior using our RSpec integration specs. To generate this documentation, run `rspec spec/features/ --format=documentation -o doc/specs.html`. Generated documentation will appear in doc/specs.html.
In order to allow us to describe the application behavior using RSpec, developers must read and follow the guidelines in the "Testing" section of this document's style guide.
For testing Javascript, we use Jasmine, which has an RSpec-like syntax. Developers should exercise their Javascript in the same way they exercise Ruby code. Jasmine tests are located under spec/javascripts. They can be run continuously using `rake jasmine` or once using `rake jasmine:ci`.
In addition to the RSpec and Jasmine tests documented in the previous section, we perform additional tests to ensure that our UI looks and functions as intended.
Reusable CSS rules and Javascript components should be displayed in the project's pattern portfolio document found at docs/ui/index.html. Developers may add to the portfolio by editing docs/ui/templates/index.html.erb. We generate the portfolio by running `rake doc:ui` in the project root (or `rake doc` to generate all project documentation).
To see which rules are demonstrated, we run Deadweight on the pattern portfolio to scan for unused rules using `rake test:deadweight`.
To ensure the quality of our CSS, we run CSS Lint using `rake test:csslint`. Developers are encouraged to read the CSS Lint wiki (https://github.com/stubbornella/csslint/wiki/Rules) to learn about the reasoning behind the rules.
We run locally using Pow, with a server called edsc.dev. We use SQLite databases in local test and development environments.
There are 4 shared deployment environments for Earthdata projects, including Earthdata Search:
System Integration Testing (SIT)
An environment for performing integration tests on builds, ensuring that new code works correctly in a production environment before placing it in front of customers or users. This is roughly equivalent to ECHO's "testbed" environment.
User Acceptance Testing (UAT)
An environment for verifying that builds meet the customer and user expectations before promoting them to operations. We expect partners and advanced users to use this environment frequently for their own testing, so it will be public and must have a high uptime. This is roughly equivalent to ECHO's "partner-test" environment.
Operations (Ops)
The production environment, running the canonical public-facing site.
SearchLab (Lab)
An environment for highly experimental features. This is typically where major prototyping efforts are deployed, demoed, and tested.
Changes to shared environments must be deployed through Bamboo via the Earthdata Search deployment project. Any successful build of the main branch in GitHub Actions will result in the code being sent to the deploy branch of the ECC Git repo (Bitbucket). Once that branch receives the code, the branch is built and deployed to the EDSC SIT environment (https://search.sit.earthdata.nasa.gov).
See "The Pattern Portfolio for HTML, CSS, and Javascript testing"
Progressive enhancement involves starting with a baseline HTML document that is readable and usable without any styling or Javascript. We accomplish this by using semantic markup. From there we enhance the markup by unobtrusively adding CSS styles and Javascript.
By starting with working basic HTML, we ensure we have a page that's minimally usable by:
The key point here is that a missing browser feature or a single script or style error should not render the site unusable.
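As an illustrative sketch (hypothetical, not project code), an enhancement can be guarded so it only runs when every capability it needs is present; a missing feature leaves the baseline behavior intact instead of breaking the page:

```javascript
// Hypothetical sketch: run an enhancement only when its required
// capabilities exist. Otherwise the baseline HTML behavior remains.
function enhanceIfSupported(capabilities, required, apply, baseline) {
  const supported = required.every((name) => Boolean(capabilities[name]));
  return supported ? apply(baseline) : baseline;
}

// Invented example: a "live filter" enhancement that needs history
// support; without it, the plain form-submit baseline is kept.
const withHistory = enhanceIfSupported(
  { history: true },
  ['history'],
  (page) => ({ ...page, liveFilter: true }),
  { liveFilter: false }
);
const withoutHistory = enhanceIfSupported(
  { history: false },
  ['history'],
  (page) => ({ ...page, liveFilter: true }),
  { liveFilter: false }
);
```

The names here are invented for illustration; the point is that the enhancement path and the baseline path both stay usable.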
This is similar to Progressive Enhancement described above. We target narrow screens (typically mobile screens) first and add additional styles as screens get wider, using media selectors.
The reason for this is mostly practical. Mobile renderings tend to have a much simpler set of styles than larger renderings. If we targeted large screens first, much of our mobile effort would be in undoing the styles aimed at larger screens.
Another key point here is that we will plan to be mobile friendly from the start. It is much easier to build this in from the beginning than to attempt to construct it later on.
HTML is just markup and easy to learn. CSS is Turing-complete, but not really a programming language. They're easy to dismiss.
The reality is, though, that HTML and CSS provide very few mechanisms for code reuse and organization. Their size and complexity has a direct and perceptible impact on page speed, and they are the most user-visible part of the codebase. It is exceedingly difficult to create clean, performant, reusable, and extensible frontend assets. It generally requires much more care than the corresponding backend code, since backend languages are designed with these aspects in mind.
Frontend authoring is a development discipline and requires a great deal of care and consideration, to the point that most of this guide focuses on frontend development.
It is typically very difficult to extract complexity from front-end code. All new components should be focused on reuse, versatility, and extensibility. When possible, do not add new components at all, but reuse or extend existing components, which can be found in the Pattern Portfolio.
When developing frontend code, the unit of reuse should be the module, component, or behavior, not the page. Design components that can be used across multiple pages or that are versatile enough to be used multiple times on the same page, possibly for different purposes. Write CSS, Javascript, and partial HTML for components, not for pages, in order to promote robustness and reuse and keep code size in check.
There may be good exceptions to every rule in this guide. When in doubt, follow the guide, but make exceptions as necessary. Always document these decisions. Further, whenever you write code in a non-standard way, or you are faced with multiple competing options and make an important choice, document those decisions as well.
Every line of application code, including UI markup, should be exercised in a test.
Exercise boundary conditions, error handling, and varying inputs at the unit or functional level.
Integration specs should demonstrate user-visible system behavior.
Remember that integration tests run much more slowly than unit tests, so prefer to test more thoroughly at the unit level.
Integration tests should be placed in the "spec/features/" folder. All other tests should go in the default locations generated by Rails (e.g. "spec/models/")
The chain of RSpec "describe" and "context" blocks leading up to the final "it" block should form a human-readable sentence. This is particularly true for integration specs, where we are documenting system behavior through spec names.
Consider an example where we don't use this style.
Bad Example:
describe "Account creation" do
…
context "messages" do
…
it("should display success messages") { … }
it("should display failure messages") { … }
end
it("recovers passwords") { … }
it("should send emails to users") { … }
end
Consider the sentences produced by the above:
The spec fails to describe the system. Reading the sentences, we don't know why a particular behavior might happen. Some of the sentences don't entirely make sense.
We fix the problem by using more descriptive contexts and paying attention to the sentences we're constructing with our specs.
Improved Example:
describe "Account creation" do
…
context "for users providing valid information" do
it("displays a success message") { … }
it("sends an email to the user") { … }
end
context "for users providing duplicate user names" do
it("displays an informative error message") { … }
it("prompts users to recover their passwords") { … }
end
end
Consider the sentences produced by the above:
The above sentences more adequately describe the behavior of the system given varying inputs.
http://jasonrudolph.com/blog/2008/07/30/testing-anti-patterns-the-ugly-mirror/
Tests should describe how the system responds to certain inputs. They should not simply duplicate the code under test.
Ruby makes it very easy to stub methods and specify return values. Often, this can lead to fragile tests which don't perform any useful validation. If a test stubs every call made by a method, for instance, the test doesn't verify that the method actually works; simultaneously, the test will break any time the method changes.
Mocks couple tests and code, and should be used very sparingly. Valid reasons to use mocks include:
When in doubt, it's better to not mock.
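A hypothetical Javascript illustration of the difference (names invented for this example): stub only the true external boundary, then assert on the real logic under test. Stubbing every call the method makes would verify nothing and break on any refactor.

```javascript
// Code under test: formats a summary from a service's search results.
function searchSummary(service, keyword) {
  const results = service.search(keyword);
  return `${results.length} result(s) for "${keyword}"`;
}

// The stub replaces a (hypothetical) network-backed service — the one
// external boundary — while the formatting logic still runs for real.
const stubService = { search: () => [{ id: 'C1' }, { id: 'C2' }] };
const summary = searchSummary(stubService, 'MODIS');
```

A test asserting on `summary` exercises the real behavior; a test that also stubbed `searchSummary` itself would only echo the stub back.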
The test suite should provide developers with rapid feedback regarding the correctness of their code. To accomplish this, they should execute quickly. Keep performance in mind when writing tests. The following guidelines will help minimize execution time:
If performance becomes a problem, we may segregate tests into "fast" and "full" runs, but ideally we will avoid this.
If you see a failure and you suspect it was caused by some intermittent problem, e.g. a timeout that is too short or an external service being down, it is not enough to simply re-run the tests. Fix the problem. If a problem truly cannot be fixed, document why, catch the specific error that cannot be fixed, and throw a more meaningful one.
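For instance (a hypothetical sketch, not project code), a documented, truly unfixable failure can be caught narrowly and rethrown with context, so the next reader gets an actionable error instead of a mystery:

```javascript
// Hypothetical error type for a documented, unfixable upstream timeout.
class UpstreamTimeoutError extends Error {}

// Catch only the specific known error and rethrow with context;
// anything else is a real bug and should surface unchanged.
function fetchWithContext(fetchFn, description) {
  try {
    return fetchFn();
  } catch (error) {
    if (error instanceof UpstreamTimeoutError) {
      throw new Error(`${description} timed out upstream: ${error.message}`);
    }
    throw error;
  }
}
```

The key point is the narrow `instanceof` check: a blanket catch-and-rethrow would hide genuine bugs behind the "known issue" message.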
The Pattern Portfolio
Factor non-trivial HTML into partials and helpers and demonstrate its use in the Pattern Portfolio. This allows other developers to find and reuse partials and ensure that new code does not break existing code.
Catalog custom Javascript components in the Pattern Portfolio. Show examples of configuration options, if appropriate.
Create offline pages, similar to the Pattern Portfolio, to exercise Javascript components in their various states. Avoid requiring a running application instance or performing server communication in Jasmine specs.
Integration specs should still perform simple checks on Javascript components, but the bulk of Javascript testing should be performed offline.
Factor any non-trivial markup patterns into partials or helpers. Non-trivial markup patterns include patterns which require an element to have a specific child or children to obtain a certain behavior or style. For instance, a header containing a span child may trigger CSS image replacement behavior.
Demonstrate all markup in the Pattern Portfolio. Call helpers and partials within the portfolio to ensure that markup stays in sync.
...
The chain of Jest "describe" blocks leading up to the final "test" block should form a human-readable sentence. This is particularly true for integration specs, where we are documenting system behavior through spec names.
Consider an example where we don't use this style.
Bad Example:
describe('Account creation', () => {
…
describe('messages', () => {
…
test('should display success messages', () => { … })
test('should display failure messages', () => { … })
})
test('recovers passwords', () => { … })
test('should send emails to users', () => { … })
})
Consider the sentences produced by the above:
The test fails to describe the system. Reading the sentences, we don't know why a particular behavior might happen. Some of the sentences don't entirely make sense.
We fix the problem by using more descriptive contexts and paying attention to the sentences we're constructing with our tests.
Improved Example:
describe('Account creation', () => {
…
describe('for users providing valid information', () => {
test('displays a success message', () => { … })
test('sends an email to the user', () => { … })
})
describe('for users providing duplicate user names', () => {
test('displays an informative error message', () => { … })
test('prompts users to recover their passwords', () => { … })
})
})
Consider the sentences produced by the above:
The above sentences more adequately describe the behavior of the system given varying inputs.
http://jasonrudolph.com/blog/2008/07/30/testing-anti-patterns-the-ugly-mirror/
Tests should describe how the system responds to certain inputs. They should not simply duplicate the code under test.
The test suite should provide developers with rapid feedback regarding the correctness of their code. To accomplish this, they should execute quickly. Keep performance in mind when writing tests. The following guidelines will help minimize execution time:
If performance becomes a problem, we may segregate tests into "fast" and "full" runs, but ideally we will avoid this.
If you see a failure and you suspect it was caused by some intermittent problem, e.g. a timeout that is too short or an external service being down, it is not enough to simply re-run the tests. Fix the problem. If a problem truly cannot be fixed, document why, catch the specific error that cannot be fixed, and throw a more meaningful one.
React testing library best practices:

- Select elements by their Role (buttons, checkboxes, etc.). If there are multiple elements of the same role on a component, use a secondary filter to ensure you select the correct ones for your assertions, for example: `getByRole('button', { name: 'button-name' })`
- Avoid `await` statements for userEvent methods; userEvent is already wrapped in an await
- Use `screen` to select elements on the virtual DOM
- Use a `waitFor` block ...
Minimize the overall depth of HTML to decrease page size, increase readability, and improve rendering speed.
See this example and description from Stack Overflow. Use Rails helpers to dynamically generate HTML with widely varying elements and structure. Use Rails partials to generate more static content. Remember to demonstrate partials and helpers in the pattern portfolio.
Earthdata Search uses SCSS to generate its CSS. It follows the guidelines for scalable CSS outlined by SMACSS, with key items reproduced in this document. The CI build checks CSS style using CSS Lint. Developers are strongly encouraged to read the CSS Lint rules.
Use the Pattern Portfolio to demonstrate all styles in the application. This allows other developers to find styles that already exist instead of re-inventing the wheel, and it allows developers to ensure that new styles do not break existing styles. The CI build ensures that each style is used at least once in the portfolio.
...
The base site contains HTML boilerplate libraries which add CSS classes to the html and body element that detect commonly misbehaving browsers (older IE versions) and browser capabilities. Use these classes to target browsers or capabilities rather than relying on CSS hacks.
New project code should be written in Coffeescript. It eliminates much of the boilerplate and gotchas of Javascript, producing easier-to-read code that is more accessible to developers with all levels of front-end experience.
https://github.com/polarmobile/coffeescript-style-guide
Try to build Javascript components and widgets that could apply throughout the site, rather than on a single page or in a single situation. Good questions to ask when writing code is "Can I make this into a widget?" or "Can I apply this behavior to all elements with this class?". If the answer is no, perhaps the element could be described as a composition of multiple components (scrollable, zebra-striped, selectable list rather than one-off granule list)
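A minimal sketch of that composition idea (invented names, not real EDSC widgets): instead of a one-off "granule list" component, small behaviors are written independently and composed onto the same element descriptor.

```javascript
// Hypothetical sketch: each behavior is a small function that adds one
// capability to a plain element descriptor.
const scrollable = (el) => ({ ...el, classes: [...el.classes, 'scrollable'] });
const zebraStriped = (el) => ({ ...el, classes: [...el.classes, 'zebra'] });
const selectable = (el) => ({ ...el, classes: [...el.classes, 'selectable'], selected: null });

// Compose behaviors right-to-left onto a base descriptor.
const compose = (...behaviors) => (el) =>
  behaviors.reduceRight((acc, behavior) => behavior(acc), el);

// A "granule list" is then just a composition of reusable behaviors.
const granuleList = compose(scrollable, zebraStriped, selectable)({ tag: 'ul', classes: [] });
```

Each behavior remains reusable on its own, so a different page can build, say, a selectable-only list from the same pieces.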
...
When building the interface, use the History API to ensure that history entries are pushed to the stack appropriately. Push entries to the stack when the user reaches points they would reasonably expect to bookmark. Avoid pushing entries so frequently that backing out of a state using the back button becomes tedious or impossible.
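The push-versus-replace decision can be modeled in plain Javascript (a simplified sketch of the browser History API's semantics, not project code): bookmark-worthy states are pushed, while rapid intermediate states replace the current entry so the back button stays usable.

```javascript
// Simplified model of history.pushState vs history.replaceState.
class HistoryModel {
  constructor(initialState) {
    this.entries = [initialState];
    this.index = 0;
  }
  // Use for states a user would reasonably bookmark. Pushing discards
  // any forward entries, matching browser behavior.
  pushState(state) {
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push(state);
    this.index += 1;
  }
  // Use for rapid intermediate updates (e.g. map panning) so the back
  // button doesn't have to step through every tiny change.
  replaceState(state) {
    this.entries[this.index] = state;
  }
  back() {
    if (this.index > 0) this.index -= 1;
    return this.entries[this.index];
  }
}

// Invented example: only the settled search is pushed; zoom changes
// replace it, so one "back" returns to the previous page state.
const history = new HistoryModel('/search');
history.pushState('/search?q=MODIS');
history.replaceState('/search?q=MODIS&zoom=3');
history.replaceState('/search?q=MODIS&zoom=4');
```

Had every zoom change been pushed instead, backing out would require stepping through each intermediate zoom level.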
...
Merging to main will kick off a build on GitHub Actions. Bamboo will automatically kick off a deployment to SIT on a successful build.
If the build on main fails and requires intervention, a manual deployment to SIT will be required.
Once your code is merged into the e2e-services branch on GitHub, you'll need to push that branch to the e2e-services branch on Bitbucket. Before you can push to Bitbucket, you'll need to add it as a remote:
...
Once on the NASA VPN, visit https://ci.earthdata.nasa.gov/
...
...
EDSC-DBN2, which will prompt you for your code 700 token. Further instructions for each environment are below.
This is the most common deployment method. After working a ticket, you've issued a Pull Request that has been reviewed and merged into main on GitHub. When GitHub Actions completes a build on main, it runs bin/ecc-sync, which syncs main from GitHub to deploy on Bitbucket. Bamboo has a deployment trigger configured to deploy to SIT automatically on a successful build.
At any time you can push your code to Bitbucket and create a release from that branch. Note that you should not push these branches to any environment except SIT, and that you should check with all necessary stakeholders before doing so. Once you are done testing that branch on SIT, return SIT back to the normal main branch deployment.
Once the remote is established, ensure that you're on the e2e-services branch, and push it to the Bitbucket remote:
git push bitbucket e2e-services
This will kick off an e2e-services build on Bamboo.
Once on the NASA VPN, visit https://ci.earthdata.nasa.gov/browse/EDSC-EDSCDB which will prompt you for your code 700 token.
The following example creates a back port for version 1.69.2, meaning that 1.69.2 is the version currently deployed and we need to create 1.69.3 to push out a bug fix.
First we'll need to ensure we have the most recent tags, so fetch everything from git.
git fetch
We'll create a new branch off of the currently deployed tag.
...
Now we're on a branch that contains the exact code deployed for tag 1.69.2. We'll find the commits that we need to get into this back port and cherry-pick each one.
...
Shortly after pushing to Bitbucket you should see a new build appear on Bamboo; this build will be your most recent commit. When the build is complete you'll see a button on the right-hand side of the page titled Create release; clicking that button will allow you to create a new release named 1.69.3.
Select the 1.69.x branch, name the release, and click Create release. From here you'll be sent to the release; this is the link that should be provided to the OPS team to deploy.
...