...

  • Stand-ups occur daily at 10:00 AM EST. (877) 847-0013,8518026#
  • Sprint planning meetings occur as part of the EED2 SAFe efforts, typically on the first day of each sprint.
  • Retrospectives and sprint overviews take place on the Friday before the start of the sprint.
  • Developers should use EOSDIS Slack accounts for chat and join the #edsc-dev channel for team communication. See your team lead for an invitation to the channel.

...

Earthdata Search uses an agile development process with two-week sprints. At all times, our master branch maintains a potentially releasable product increment. Once a commit is merged into master, it will immediately trigger a build and deployment to our SIT environment, and we would like to eventually move to a continuous deployment model.

Sprints give us periodic checkpoints for feedback but are orthogonal to our releases.

...

No change may go into master until it has undergone peer review in Github and is successfully building in TravisCI. Master is to remain deployable at all times, so ensure it contains production-ready code with green builds.

...

Day-to-day development proceeds as follows:

  1. The developer chooses a JIRA issue from among those in the active sprint, assigns it to themselves, and moves it to the "In Progress" column of the agile board.
  2. The developer implements the change in a new git branch named after the issue number, e.g. EDSC-123. Commits are also prefixed with the issue number, e.g. "EDSC-123: Add user accounts page".
  3. (Optional) When in need of collaboration or input, the developer pushes the branch and opens a new pull request without assigning a reviewer. Collaborators may pull the branch or view and comment on the diff in Github.
  4. When the change is complete, the developer pushes the branch and opens a pull request (without assigning a reviewer), triggering a TravisCI build for that branch, and ensures the build is green.
  5. Once the build is green, the developer assigns the pull request for the branch to each member of the development team and moves the issue into the "Pending Review" column in JIRA.
  6. The reviewer looks at the code, ensuring passing tests, adequate test coverage, code quality, and correct/complete implementation. They also run the code and manually test it, paying close attention to usability and consistency with similar features and interactions throughout the rest of the application.
  7. The original developer fixes items brought up during review until the reviewer is satisfied and has approved the pull request. In most cases we like to have two approvals; in some cases that is not needed. The team lead will have a good idea as to what needs additional review. Use your best judgement.
  8. Once sufficient approvals have been granted via Github, the original developer merges the branch. At this point the remote branch can be deleted.
  9. Once master has built and deployed to SIT, the original developer verifies the new functionality/fix is working and assigns the JIRA issue to the EDSC QA team member for testing and verification.
  10. If the QA process reveals updates need to be made, the QA team member works directly with the original developer to resolve any issues, following the process outlined above, until QA approves the changes.
  11. The QA team member moves the JIRA issue into the "Done" column, typically with a resolution of "Ready for Test."
  12. Once deployed to the UAT environment, the QA team member executes a regression testing protocol, ensuring new features work correctly and that the build/release is stable. They also reach out to a primary stakeholder for the issue (for instance, the person requesting the change) so that they may test the implementation. If satisfied, the ticket is transitioned to "Verified Internal" (in the case of EDSC QA approval) or "Verified External" (in the case of an external reviewer).
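The branch and commit naming convention above can be checked mechanically, for instance from a commit-msg hook. A minimal sketch in Ruby (the helper name is ours, not project code; the EDSC- prefix comes from the examples above):

```ruby
# Sketch: validate the "EDSC-123: Summary" commit-message convention.
# valid_commit_message? is a hypothetical helper, not part of the repo.
ISSUE_PREFIX = /\AEDSC-\d+: \S/.freeze

def valid_commit_message?(message)
  ISSUE_PREFIX.match?(message)
end

valid_commit_message?("EDSC-123: Add user accounts page") # => true
valid_commit_message?("Add user accounts page")           # => false
```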

Technologies

Code structure

...

Shared Environments

There are 4 shared deployment environments for Earthdata projects, including Earthdata Search:

...

The production environment, running the canonical public-facing site.

SearchLab (Lab)

An environment for highly experimental features. This is typically where major prototyping efforts are deployed, demoed, and tested.

Deploying changes

Changes to shared environments must be deployed through the Earthdata Search deployment project. Any successful build of the master branch in Travis CI will result in the code being sent to the deploy branch of the ECC Git repo (BitBucket). Once that branch receives the code, it is built and deployed to the EDSC SIT environment (https://search.sit.earthdata.nasa.gov).

...

Consider the sentences produced by the above:

  1. Account creation messages should display success messages.
  2. Account creation messages should display failure messages.
  3. Account creation recovers passwords.
  4. Account creation should send emails to users.

The spec fails to describe the system. Reading the sentences, we don't know why a particular behavior might happen. Some of the sentences don't entirely make sense.

...

Consider the sentences produced by the above:

  1. Account creation for users providing valid information displays a success message.
  2. Account creation for users providing valid information sends an email to the user.
  3. Account creation for users providing duplicate user names displays an informative error message.
  4. Account creation for users providing duplicate user names prompts users to recover their passwords.

The above sentences more adequately describe the behavior of the system given varying inputs.

...

Mocks couple tests and code, and should be used very sparingly. Valid reasons to use mocks include:

  1. Calls to external services which cannot or should not be made during tests.
  2. Calls which are expensive to perform (I/O) and irrelevant to the test at hand.
  3. Calls which would require an unreasonable amount of setup and fixtures and are irrelevant to the test at hand.

When in doubt, it's better to not mock.

...

The test suite should provide developers with rapid feedback regarding the correctness of their code. To accomplish this, tests should execute quickly, so keep performance in mind when writing them. The following guidelines will help minimize execution time:

  1. Test varying inputs and edge cases at the unit or functional level, rather than the integration level.
  2. Avoid running integration tests with Javascript enabled unless Javascript is necessary for the feature under test.
  3. Avoid calling external services, particularly ones which cannot be run in a local environment. Use mocks for these services.
  4. Avoid loading external images, CSS, and Javascript in integration tests.
  5. Avoid or disable interactions and interface elements that will cause Capybara to wait. For instance, disable animations or transitions.
  6. Skip to the page under test in integration tests; there is no need to start at the home page for every spec (though you should have a spec which verifies you can start at the home page).
  7. Avoid increasing timeouts to fix intermittent problems. Find other means.
  8. Time your tests.
  9. Follow this style guide for performant HTML, CSS, and Javascript.

If performance becomes a problem, we may segregate tests into "fast" and "full" runs, but ideally we will avoid this.
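For guideline 8, wall-clock timing can be as simple as Ruby's stdlib Benchmark (the timed block here is a stand-in for a slow spec):

```ruby
require "benchmark"

# Sketch: measure how long a block takes, so slow tests can be found
# and fixed rather than tolerated. The work inside is a placeholder.
elapsed = Benchmark.realtime do
  1_000.times { [3, 1, 2].sort }
end

puts format("took %.4fs", elapsed)
```

RSpec's `--profile` flag offers similar per-example timing, printing the slowest examples in a run.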

...