Privacy evaluations are multifaceted, and informed decisions by professional educators are essential.
As we have been working on the privacy evaluations, one of the most difficult elements has been figuring out how to visually represent what people should look at when they evaluate educational technology, and the privacy and security implications of that technology. The evaluations we run have never been intended as a replacement for an informed decision by an education professional. Rather, our goal continues to be reducing the work required to make an informed decision. While many people asked us to produce something that was an absolute guide - effectively reducing privacy and security assessment down to a strict yes/no - we opted not to go down that path. It's an oversimplification, and like many oversimplifications, it can be inaccurate or misleading.
In the evaluations, the details reside in the evaluation text. Our process is detailed - to get a greater sense of it, watch the video in this post, read the questions, and read the information security primer. Our summaries pull highlights from the full process. The resulting visual - the bar icon - is an approximate representation of the full evaluation. Attempts to convert the bar icon into a rating, or claims that a single visual can represent a multi-hour process completed by multiple individuals, are not accurate. The icons represent potential, and as a symbol they show where a teacher, student, school, parent, or district might see issues to be addressed. Sometimes the issues indicate places where policies can be improved, but in some cases - as we will discuss below - issues are related to the various ways data could be collected within an app, or different ways an application can be used. In our evaluations, the bar icons are a small first step in understanding this larger picture.
Visuals draw people in - they are immediate and visceral. The risk of visuals - especially when discussing privacy - is that they convey a false and inaccurate sense of certainty. With our evaluations, if a person looks at the icons and does not read the accompanying summaries, they are getting a very incomplete picture. If a person is just reading the summaries and not looking at the actual app or its policies, they are getting an incomplete picture. Our evaluations are a guide designed to make the work easier. They are best used to augment - not replace - an informed professional within a school or district determining how an application works.
Since we launched last week, the overwhelming majority of feedback has been about the visuals, with only a small number of people asking questions about the text. This was something we anticipated - part of this is a natural reaction to something new, and part of this is an attempt to understand how this new system works. I have been asked one question more frequently, by far, than any other: "Does one bar mean that I shouldn't use an application?"
The answer to that question is an absolute, resounding "No." A one bar indicator can mean many things. A small subset of the possible meanings includes:
- the terms aren't clear or omit details;
- the terms contain contradictory claims that make the intent of the terms impossible to assess;
- there is a range of ways people can use the software, and end users need training and support to make best use of the default feature set;
- the app includes ways that data can be shared that might not work for all educational or social contexts;
- the app includes interactions between users that might not work for all educational or social contexts;
- there are disconnects between what is stated in the terms and what users can do on the site;
- the terms don't mention, or provide inadequate details about, the consent mechanisms required to use the site;
- etc, etc, etc.
On their own, none of these reasons are showstoppers that merit not using a site. They are, however, things to consider when evaluating a site. If an app receives a single bar in one or more categories, that does not indicate that the app should not be used - it indicates that there are factors that merit consideration. Consideration and thought are not bad things - they lead to more deliberate and more intentional use of technology, and that's a good thing.
It is also worth noting that one thing we see very frequently is a situation where a vendor's practice is not reflected in their terms. We often see applications where a vendor is doing many things right, but this good practice is not reflected in their terms. Our advice to vendors in this situation has always been - and will continue to be - document your good practice in your terms. This allows people to see the good work you are doing. When it comes to privacy and security, don't make people guess. If you have contracts with third parties that prohibit them from reusing data, say so. If you encrypt all data at rest, say so. If you will delete data in the event of a bankruptcy, say so. These types of guarantees help people get a clearer sense of how a vendor operates. These statements help build and maintain trust.
As people evaluate technology, one of the elements they consider is risk. There are benefits to viewing tech through the lens of risk versus safety - and a sane aversion to risk is difficult to argue with. However, levels of risk can vary within the same application. For example, a student information system (or SIS) can contain a full range of student data that is critical to the smooth running of a school. Some SISes can contain health information, or information about special needs accommodations, which in turn requires staff training on both secure data handling and physical security (e.g., locking your account or shutting down your machine when you leave your desk). One school might not store any health information in its SIS, where another might store detailed information. Within a school, one person might share only scant detail on health-related issues, where their colleague might share detailed accounts. All of these behaviors by people using the application create different potential levels of risk.
Filtering appliances offer a comparably nuanced risk profile. Filters track the browsing habits of all users behind the filter, teacher and student alike. Different schools can use the same filtering appliance in different ways. The privacy implications of a filter used exclusively at school, versus a filter that is used both at home and at school, will vary widely. The level of potential risk will vary widely based on how well school staff are trained in handling sensitive information, or how well individual vendor staff are trained in handling sensitive information. As with an SIS, we have different levels of potential risk supported within the same app.
As these examples show, the same technologies can support different types of implementations, each with a different risk profile. In evaluating software to help people make informed decisions, we are attempting to provide decision makers with a guide to asking their own questions. Most implementations will be affected by local decisions to access features supported by the technology, so assessing risk is going to be inherently fluid. That also means that it would be irresponsible and misleading to evaluate based exclusively on the best case scenario. People need to be aware of their choices, and of the implications of those choices. We also have no delusions that we will always get this 100% right, which is why we are as transparent as possible about the criteria we use to evaluate. As noted earlier, the full set of questions we use to evaluate has been publicly available for months, and we have written and shared an information security primer. Our evaluations are based on information freely available to everyone - we don't charge for evaluations, and we don't require information to be disclosed privately to us.
For those reading this who still use their phones to actually call people - we have all had good conversations when our signal strength indicator was at one bar. For the rest of us: we have texted people, watched video, shared pictures, and more, all when we had one bar. In short, one bar works. It's a visual indicator of potential, not an absolute measure of certainty.