When evaluating apps, a good triage helps set the stage.
In the interest of documenting our evaluation workflow, we worked up the following outline and flowchart, which illustrate our initial intake process for evaluating applications. While there will always be slight variations due to how individual applications work, the steps outlined here provide the foundation of our evaluation process. Because these resources are freely available, anyone interested in privacy and basic information security -- from vendors to school and district staff, students, parents, and other privacy advocates -- can use this work.
Our full evaluation process is documented in three parts:
- Triage (steps A, B, and C) outlined below;
- The questions, with citations and background, we use to evaluate the terms;
- An introductory Information Security Primer.
For people interested in the background documentation we used when compiling and vetting the final version of the question set, this post provides a list of works consulted.
App Triage
The steps described in the text version below are more comprehensive than what the flowchart depicts: every application is slightly different, and the triage process is adjusted accordingly. You can download a full version of the flowchart here (PNG download).
Step A: Verify that the terms are present.

- A1. Go to the home page of the app/site. Find the terms of service, privacy policy, and any other related policies. Make sure that the terms are for this specific service (and not, for example, for the public-facing website). Note the URLs of the policies, and make sure that the policies contain an "effective on" or "updated" date.
- A2. On the product website or via search, locate any Android or iOS apps for the product. Navigate to the respective stores where the apps can be downloaded or purchased (e.g., the Google Play store, the Apple App Store, the Chrome Web Store, or Firefox Add-ons). The store listings should link to privacy policies (although the Chrome Web Store has no information on terms, and Firefox extensions currently have inconsistent information on privacy terms). Make sure that these policies link to the same location as the home page; a sketch for comparing policy URLs across pages follows this list. If the policies are different, each set of policies should get a separate evaluation. Having multiple policies for the same service creates a potentially serious usability issue if the terms are significantly different or if the data collected is significantly different.
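Part of the URL comparison in A1 and A2 can be scripted. The sketch below is a minimal example, assuming the Python `requests` library and placeholder URLs: it collects the links whose anchor text mentions "privacy" or "terms" from two pages so the noted policy URLs can be compared. The app-store listings themselves still need to be checked by hand.

```python
# Sketch: collect candidate policy links from a page so the URLs can be
# compared across the home page, the sign-up page, and other pages noted
# during triage. URLs below are placeholders; requires `requests`.
from html.parser import HTMLParser
import requests

POLICY_WORDS = ("privacy", "terms")

class PolicyLinkParser(HTMLParser):
    """Record the hrefs of anchors whose link text mentions privacy or terms."""
    def __init__(self):
        super().__init__()
        self._href = None
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href and any(word in data.lower() for word in POLICY_WORDS):
            self.links.add(self._href)

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

def policy_links(url):
    parser = PolicyLinkParser()
    parser.feed(requests.get(url, timeout=10).text)
    return parser.links

# Compare the sets of policy URLs found on two pages.
home = policy_links("https://example-app.com/")          # hypothetical URLs
signup = policy_links("https://example-app.com/signup")
print("Home page policies:   ", home)
print("Sign-up page policies:", signup)
print("Same URLs?", home == signup)
```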
Step B: Create an account.

- B1. Go to the account-creation page for the app/site. Make sure that:
- B1a. The terms of service/privacy policy are linked from the account-creation page.
- B1b. The URLs of these policies are the same as the ones on the home page of the site.
- B1c. The URLs of these policies are the same as the ones on the app pages.
- B1d. The page loads via https.
- B1e. If you attempt to load the page via http, the site forces you back to https (see the sketch after this list).
- B2. Create a sample account (with a throwaway email address):
- B2a. Note the data required to create the account; check it against what's described in the terms.
- B2b. If the account requires an age, attempt to create an account for an under-13-year-old user.
- B2c. Note whether the account requires email verification.
- B2d. Note any trackers set on the account-creation page.
- B2e. Note whether social login is supported.
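The https checks in B1d and B1e can be confirmed with a couple of requests. This is a minimal sketch, assuming the Python `requests` library and a placeholder sign-up URL; a certificate error on the first request, or a final URL that is still http on the second, is worth noting in the evaluation.

```python
# Sketch for B1d/B1e: confirm the account-creation page loads over HTTPS and
# that a plain-HTTP request is redirected back to HTTPS.
# The sign-up path is a placeholder; requires the `requests` library.
import requests

SIGNUP = "example-app.com/signup"  # hypothetical path

# B1d: the page should load over https without certificate errors.
https_resp = requests.get(f"https://{SIGNUP}", timeout=10)
print("HTTPS status:", https_resp.status_code)

# B1e: an http request should end up on an https URL after redirects.
http_resp = requests.get(f"http://{SIGNUP}", timeout=10, allow_redirects=True)
print("Redirected to HTTPS:", http_resp.url.startswith("https://"))
print("Redirect chain:", [r.headers.get("Location") for r in http_resp.history])
```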
Step C: Logged In.

- C1. Make sure that the terms of service and privacy policies are linked and available from within the site while you're logged in. Verify that the URLs are the same as noted in A1, A2, and B1a.
- C2. Remove the "s" from "https" in the URL (i.e., try to load the page over plain http) and see if you can still access the site. Try this on multiple pages.
- C3. If profiles are supported, note the data elements collected in a profile.
- C4. If profiles are supported and they collect age, try to set the age to under 13.
- C5. Log out, then log back in. On the login page, make sure that the terms of service/privacy policies are linked. Verify that the URLs are the same as noted in A1, A2, and B1a.
- C6. Return to the profile edit page. Log out, then use your browser's back button to see if you can access the profile edit page. You should be stopped before you can see the "edit" details. (A related scripted check follows this list.)
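The back-button test in C6 has to be done in a browser, but two related server-side signals can be checked in a script: whether the profile-edit page is marked non-cacheable, and whether the same URL is refused when no session cookie is sent. A minimal sketch, assuming the Python `requests` library and placeholder URL and cookie values:

```python
# Sketch related to C6: check the caching headers on a logged-in-only page
# and confirm the same URL is not served to an unauthenticated request.
# The URL and cookie name/value are placeholders; requires `requests`.
import requests

EDIT_URL = "https://example-app.com/profile/edit"        # hypothetical path
SESSION = {"session": "REPLACE_WITH_TEST_SESSION_COOKIE"}

# Logged in: "no-store" (or at least "no-cache, private") makes it harder
# for the back button to replay the page from the browser cache.
logged_in = requests.get(EDIT_URL, cookies=SESSION, timeout=10)
print("Status (with session):", logged_in.status_code)
print("Cache-Control:", logged_in.headers.get("Cache-Control"))

# Logged out: the same URL should redirect to a login page or return
# 401/403, not the edit form.
logged_out = requests.get(EDIT_URL, timeout=10)
print("Status (no session):", logged_out.status_code)
print("Final URL (no session):", logged_out.url)
```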
Our general triage ends with step C6. When the terms lack clarity, or when a district requests it, we perform additional testing based on the Information Security Primer. These steps are not relevant to every app/site and therefore are not covered in the flowchart. The steps below, beginning with C7, are a subset of what the primer covers and serve as our starting point for an additional information security triage; the primer remains our authoritative document.
- C7. Note any first- or third-party trackers set on profile-related pages.
- C8. If profiles are supported, see if there are any visibility settings.
- C9. If there are visibility settings, set your profile to private. Log out, then return to the URL of your profile page and see if you can access the profile.
- C10. Additional interactions (not relevant for all apps/will vary widely among apps):
- C10a. What can you create on the site (class, blog post, video, etc.)?
- C10b. When you create something, does it generate a URL with a numeric ID? If yes, what happens when you alter a single digit in that ID? (See the sketch after this list.)
- C10c. Does your profile have a numeric ID in its URL? If yes, what happens when you alter a single digit in that ID?
- C10d. Can you see content or information (a class, post, profile, etc.) created by another person on the site?
- C10e. Can you message or contact any other person on the site (create a comment, DM, etc.) -- DON'T do this; just see if the option exists.
- C10f. Can you share audio, video, or text?
- C10g. Can you share anything you create on the site with another site user?
- C10h. Do any pages have social share icons?
- C10i. Are there options to report/flag any unwanted interactions?
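For C10b and C10c, the ID-alteration check can be scripted against content you created yourself. This is a minimal sketch, assuming the Python `requests` library and a placeholder URL and session cookie; it changes the trailing numeric ID by one and reports the status code, which is as far as this test should go.

```python
# Sketch for C10b/C10c: alter one digit of a numeric ID in a URL for content
# you created, and see whether the altered URL returns another user's content,
# a 403/404, or a redirect. Only probe IDs adjacent to your own test content,
# and stop at observing the status code. URL and cookie are placeholders.
import re
import requests

OWN_URL = "https://example-app.com/posts/10482"   # hypothetical content you created
SESSION = {"session": "REPLACE_WITH_TEST_SESSION_COOKIE"}

match = re.search(r"(\d+)$", OWN_URL)
if match:
    altered_id = str(int(match.group(1)) + 1)     # change the ID by one
    altered_url = OWN_URL[:match.start(1)] + altered_id
    resp = requests.get(altered_url, cookies=SESSION, timeout=10)
    print("Altered URL:", altered_url)
    print("Status code:", resp.status_code)       # 403/404 is the hoped-for answer
else:
    print("No trailing numeric ID found in the URL.")
```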
Image credit: First Steps by Jonas Jovaisis, released under a CC0 Public Domain license.