This image is from a recent Twitter thread (with 4500+ retweets and 450+ comments as of this writing) by Chris Gilliard (@hypervisible), whose research and pedagogy often point to the consequences of online systems surveilling us, built as they are on the monetization of data collected about individuals. Groups peripheral to whatever “use case” drove a system’s design end up marginalized, and those groups tend to be lower-income, people of color, and otherwise disadvantaged. Reading through this thread makes me seriously consider doing a factory reset of all my devices and deleting all my social media accounts. This litany of documented incidents makes me feel disrespected, used, and dehumanized.

Currently, these problems seem most glaringly obvious to me in the context of educational uses of social media, software-as-a-service (SaaS), and other digital tools. We require students to use tools that surveil them, tools they are right to distrust.

This is a new form of oppression. As a student, I need to use these tools to learn, but they are watching my every move. As Carol Dweck’s work on mindset has highlighted, when every task is a test of my intelligence and ability, I may shut down exploration and learning and aim instead to perform. I will play it safe, working beneath my ability level just so I don’t appear wrong or dumb to those watching. We have no reason to trust the designers of these systems, the caretakers of the data. The Twitter thread above holds ample evidence that tech companies do not respect individuals and instead are aimed only at profit. They are the oppressors.

As 2018 begins, I am thinking again about Paulo Freire’s Pedagogy of the Oppressed, particularly the part where he points out that gross imbalances of power harm not only the Oppressed but also the Oppressors. An us-versus-them power structure denies the humanity of the oppressed and damages the humanity of the oppressors. When, in our fervent debate about #edtech, did we forget this?

I am certain most educational tech designers and software engineers do not see themselves as oppressive, nor do I think they reflect much on the power they wield as they design our digital worlds. In fact, in day-to-day work, many may see themselves as disempowered, hard-pressed to enact change within the system they inhabit. (See any meme involving “code monkeys.”) Many of them probably feel they have little to no voice in how educational systems are designed, deployed, or secured.

Yet, en masse, they are deciding what data to collect, how it is aggregated, who has access, and what decisions are made based upon that data. In the name of customizing learning experiences, many are guilty of digital redlining, limiting what certain students can see or do within their systems. The algorithms and machine learning tools recreate the systemic biases already present in our world. Learning, stretching our potential, and discovering new things all become harder, the opposite of what education should do.
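To make the failure mode concrete, here is a minimal sketch, with entirely invented group names, numbers, and thresholds: an adaptive system that gates “advanced” content on predictions learned from historical outcomes quietly turns yesterday’s inequities into tomorrow’s restricted menus.

```python
# Hypothetical sketch: an adaptive system that gates "advanced" content on
# predictions learned from historical outcomes. The group names, numbers, and
# threshold are invented for illustration only.

# Historical completion rates by (invented) demographic group -- already skewed
# by unequal access to devices, bandwidth, tutoring, etc.
historical_completion = {
    "group_a": 0.85,   # historically well-resourced students
    "group_b": 0.55,   # historically under-resourced students
}

def recommend_track(student_group: str, threshold: float = 0.7) -> str:
    """Recommend a learning track using only the group's historical average.

    This is the digital-redlining failure mode: the system never measures the
    individual student, it just projects the past onto them.
    """
    predicted_success = historical_completion[student_group]
    return "advanced" if predicted_success >= threshold else "remedial"

for group in historical_completion:
    print(group, "->", recommend_track(group))
# group_a -> advanced
# group_b -> remedial
```

The disparity in the training data becomes a disparity in opportunity, before the student has done anything at all.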

Yet I do understand what tech companies are attempting to do. They are trying to fulfill the potential outlined by Clayton Christensen and his collaborators when they wrote Disrupting Class. At their noblest, ed tech designers are interested in providing individualized learning experiences, where no one’s flow is interrupted by an annoying quiz or assessment, where the system notices what you’re working on and helps you keep learning. I still believe this is possible, and I believe it’s possible to do it ethically.

So, I do want the data. I want to understand what is going on for my students. Where is communication breaking down on their teams? When do they struggle with the digital tools they are using? How can I best support their efforts?

But students should own their data and have the right to see the algorithm guiding them. No heavy-handed manipulation, no black boxes, and no proprietary B.S. I want my students to be in control of their learning, or there’s no hope of them becoming lifelong learners.
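Here is a small, hypothetical sketch of what “seeing the algorithm” could mean in practice: every recommendation comes back with the rule spelled out in plain language and the exact data it used, all of it visible to the student. The field names and the 60% threshold are my own inventions, not any vendor’s actual logic.

```python
# Hypothetical sketch of a transparent recommendation: no black box, just the
# rule and the student's own data, returned alongside the suggestion.

from dataclasses import dataclass

@dataclass
class Recommendation:
    next_activity: str
    explanation: str      # the rule, spelled out for the student
    data_used: dict       # only the student's own data, visible to the student

def recommend(quiz_scores: list[float]) -> Recommendation:
    recent = quiz_scores[-3:]
    recent_avg = sum(recent) / len(recent)
    if recent_avg < 0.6:
        activity = "review module"
        rule = "average of your last 3 quiz scores was below 60%"
    else:
        activity = "challenge problem set"
        rule = "average of your last 3 quiz scores was 60% or above"
    return Recommendation(
        next_activity=activity,
        explanation=f"Suggested because the {rule}.",
        data_used={"last_3_quiz_scores": recent},
    )

rec = recommend([0.4, 0.7, 0.9])
print(rec.next_activity)   # challenge problem set
print(rec.explanation)     # Suggested because the average of your last 3 quiz scores was 60% or above.
print(rec.data_used)       # {'last_3_quiz_scores': [0.4, 0.7, 0.9]}
```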

I have been part of software development processes. Good ones involve sitting alongside a business and really understanding the needs at multiple levels, with many, many different kinds of users involved. Are they asking all types of students and, at the K-12 level, their parents? Are they involving all kinds of instructors, as well as the learning scientists who study how we learn? Or are they only listening to those making the purchasing decisions?

So, if you want me to trust these technologies so that I will use them in my classroom, I want a seat at the table where people decide how and when to collect that data. I want to know how data will be analyzed and what adaptive algorithms will be employed. I want my students to have the ability to see their data and the algorithm they are interacting with.

I want in.