The Job Search

I was on the job market this year. While the opportunities were equal for anyone applying, I wouldn’t call them equitable. In most ways, I am extremely privileged: I am cisgender, not first-generation, Asian, and grew up in a white household. One area where I can speak to my identity being a challenge is interviewing as a mother.

Here are a few snapshots of the experience:

1. SCREEN TIME

I owe thanks to a combination of my husband, PBS Kids, and DisneyPlus for giving me the time to work on applications. Most academic jobs require some combination of a cover letter, CV, teaching statement, research statement, and diversity statement. These are not trivial to write, even if they become quicker and quicker to produce. Pre-kids, I had no problem putting in extra hours to write. Now it’s more difficult to secure uninterrupted time.

2. “Do you have kids?”

I went to two onsite interviews, and during both, people asked if I had kids. One of the people who asked was on the committee and one was not. Although I understand how it came up in conversation, I (wrongly) thought it was common knowledge that this question is inappropriate to ask. I decided to share openly about my family because I wanted to be in a supportive environment, but I’m aware that people have biases, and some mothers may want to keep their mothering status private while interviewing. I also suspect that my appearance as a woman triggered the question. I doubt most men are asked if they have kids.

3. THE ONSITE

The onsite can be stressful, and it is made more stressful by being away from tiny humans who need your help. During one onsite, I was trying to make conversation over breakfast with a committee member while my partner was trying to corral our kids for daycare. I had left detailed instructions the night before, but there was an unforeseen crisis: a new hole in our kid’s pants. I wanted to focus on the conversation and learn about the programs and how I fit into them, but half of my brain was trying to figure out how to help my partner solve the problem. (For the record, I said I was busy, he solved the problem on his own, and he did not ask me anything else the entire day.)

4. NURSING

On the subject of my partner: he is wonderful. He cared for our kids alone for six days while I interviewed and attended a conference. The one thing he cannot do that I can is nurse. So instead of nursing, he took on weaning our 1-year-old. This was great. That said, the interview period, for me, also included the major physical and hormonal changes that come with weaning.

5. GUILT

It is easy to find reasons to feel guilty. Should I even be writing this post? What if revealing the challenges of being a mother only worsens employers’ biases against hiring mothers? What opportunities are my kids missing to develop their brains while I let them have screen time (see 1)? Am I signing up to miss the most precious years of my kids’ lives? Am I doing a disservice to my employer by having so many other obligations? Surely other people without kids could spend more time perfecting grant applications or publishing. I can reason away much of this guilt. But the extra noise, and the self-regulation it demands, take energy and intention away from other things that require focus.

The takeaway is not: poor mothers, interviewing is so difficult. Rather, it is this: challenge your own biases against women and mothers and HIRE THEM! They are efficient, productive humans with extra-strength focus, by necessity. Right now they may have distractions, but if you are patient and give them a chance to keep their careers alive, they bring valuable perspectives from having mentored a human being in every aspect of life for multiple years, along with a personal investment in the educational system they are entering.

Teaching a QuantCrit research methods course

This semester I set out to translate our team’s thinking about, and research using, a QuantCrit perspective into a course for PhD students. Not knowing of any other QuantCrit methods curricula, I began working from scratch. Teaching the class has been highly enjoyable, and I thank my amazing students for engaging so fully in the intellectual work. I hope to refine my curriculum and make it publicly available for others who wish to teach a similar course. That will take time, though, so in the meantime here is an overview of the course (so far) for those who might be interested:

Week 1: Overview of the course, create groups for the semester, and identify a dataset for each group to use throughout the semester. It’s critical that the datasets have student social identifier information (e.g., race and gender) so that they can answer research questions about diversity, equity, and inclusion.

Week 2: Getting to know their datasets. Where did they come from, and what biases might already be baked into them? Brainstorming research questions. We also include an overview of QuantCrit.

Week 3: Creating descriptive statistics and basic figures. Identifying trends that are important to answering their research questions.
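To give a flavor of the kind of disaggregation we aim for, here is a minimal sketch in Python with pandas. The dataset, column names, and values are hypothetical stand-ins, not any group’s actual data:

```python
# Disaggregated descriptive statistics with pandas.
# The DataFrame is hypothetical stand-in data.
import pandas as pd

df = pd.DataFrame({
    "race":   ["A", "A", "B", "B", "B", "A", "B", "A"],
    "gender": ["woman", "man", "woman", "man", "woman", "man", "man", "woman"],
    "grade":  [3.1, 3.4, 2.9, 3.7, 3.3, 3.0, 3.5, 3.2],
})

# Disaggregate by social identifiers instead of reporting one overall
# mean, so that group-level trends stay visible.
summary = df.groupby(["race", "gender"])["grade"].agg(["count", "mean", "std"])
print(summary)
```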

Week 4: Data cleaning. What kinds of bad data might exist in your dataset? How might removing data bias your findings? How might not removing data bias your findings?
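One way to make those questions concrete, sketched below with a hypothetical dataset and a hypothetical cleaning rule (not a prescription): apply a candidate rule, then audit whom it removes and how the group-level findings shift.

```python
# Audit a cleaning rule before committing to it (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "race":  ["A", "A", "B", "B", "B", "A"],
    "grade": [3.1, 0.0, 2.9, 3.7, 0.0, 3.0],
})

# Candidate rule: treat grades of 0 as bad records and drop them.
cleaned = df[df["grade"] > 0]

# Whom does the rule remove? If removals concentrate in one group,
# keeping and dropping those rows bias the findings in different ways.
removed = df.loc[~df.index.isin(cleaned.index)]
print(removed["race"].value_counts())
print(df.groupby("race")["grade"].mean())       # before cleaning
print(cleaned.groupby("race")["grade"].mean())  # after cleaning
```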

Week 5: Missing data. Who is missing from your data? What causes may have led to their absence? How might this bias your findings?
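A small sketch of how a group might start answering “who is missing,” again with hypothetical data:

```python
# Quantify missingness by group (hypothetical stand-in data).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "race":  ["A", "A", "B", "B", "B", "A"],
    "score": [620.0, np.nan, np.nan, 540.0, np.nan, 700.0],
})

# Missingness rate by group: if one group is missing far more often,
# a complete-case analysis will under-represent that group.
print(df["score"].isna().groupby(df["race"]).mean())
```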

Week 6: Multiple imputation. How can we create multiple imputation models that will limit the biases due to missing data? Students also read and critique a pair of articles that both examine issues of DEI, one from a QuantCrit perspective and one from a deficit perspective.
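The exercises could be done with several tools; as one illustration, here is a sketch using scikit-learn’s IterativeImputer (a MICE-style imputer) on hypothetical numeric data. Drawing several imputations and pooling them is the core idea:

```python
# MICE-style multiple imputation via scikit-learn (hypothetical data).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([
    [3.1, 620.0],
    [2.9, np.nan],
    [3.7, 700.0],
    [np.nan, 540.0],
    [3.3, 660.0],
])

# Draw several imputed datasets with posterior sampling, analyze each,
# then pool; the spread across imputations carries the uncertainty the
# missingness introduces.
imputed = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
    for s in range(5)
]
column_means = np.array([m.mean(axis=0) for m in imputed])
print(column_means.mean(axis=0))  # pooled point estimates
print(column_means.std(axis=0))   # between-imputation variability
```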

Week 7: Creating regression models. How do the variables we include in our models influence what the models are saying? For example, if prior preparation is included in a model, the model now predicts how groups would do if they had the same prior preparation, which is usually not true of the actual populations. We also discuss what it means to include a social identity variable in a model. It’s important that we consider how to interpret these variables in ways that don’t support deficit ways of thinking. For example, we name racism and sexism as causes, rather than race and gender.
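A minimal sketch of that covariate-adjustment point, using statsmodels and hypothetical data:

```python
# Compare group gaps with and without adjusting for prior preparation
# (hypothetical data; not from any actual study).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "grade": [3.1, 3.4, 2.9, 3.7, 3.3, 3.0, 3.5, 3.2, 2.8, 3.6],
    "group": ["A", "A", "B", "A", "B", "B", "A", "B", "B", "A"],
    "prep":  [4.0, 4.5, 3.0, 5.0, 3.5, 3.2, 4.8, 3.8, 2.9, 4.6],
})

raw = smf.ols("grade ~ C(group)", data=df).fit()
# The adjusted model answers a counterfactual question: how would the
# groups compare IF they had the same prior preparation? That premise
# is usually not true of the actual populations.
adjusted = smf.ols("grade ~ C(group) + prep", data=df).fit()
print(raw.params)
print(adjusted.params)
```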

Week 8: Checking model assumptions and multi-level models.
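As one sketch of the multilevel piece, here is a random-intercept model in statsmodels on simulated data (students nested within classrooms; all numbers invented):

```python
# Random-intercept multilevel model with statsmodels (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classrooms, n_students = 10, 20
classroom = np.repeat(np.arange(n_classrooms), n_students)
classroom_effect = rng.normal(0, 0.5, n_classrooms)[classroom]
prep = rng.normal(4, 1, n_classrooms * n_students)
grade = 2.0 + 0.3 * prep + classroom_effect + rng.normal(0, 0.4, prep.size)
df = pd.DataFrame({"grade": grade, "prep": prep, "classroom": classroom})

# A random intercept per classroom accounts for the nesting that an
# ordinary single-level regression would ignore.
model = smf.mixedlm("grade ~ prep", data=df, groups=df["classroom"]).fit()
print(model.summary())
```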

Week 9: Model specification and defining equity. How might the model specification process bias findings? For example, if p-values are used to determine whether variables should be included in a model, then it is highly unlikely that interaction terms for minoritized groups will be included.
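A short sketch of the alternative: specify the interaction a priori and keep it, rather than letting a p-value screen drop it (hypothetical data as in the Week 7 sketch):

```python
# Keep subgroup interaction terms by design, not by p-value screening
# (hypothetical data). Small minoritized subgroups rarely have the
# sample size to clear p < .05 even when the interaction matters.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "grade": [3.1, 3.4, 2.9, 3.7, 3.3, 3.0, 3.5, 3.2, 2.8, 3.6],
    "group": ["A", "A", "B", "A", "B", "B", "A", "B", "B", "A"],
    "prep":  [4.0, 4.5, 3.0, 5.0, 3.5, 3.2, 4.8, 3.8, 2.9, 4.6],
})

model = smf.ols("grade ~ C(group) * prep", data=df).fit()
print(model.params)   # interaction term retained by design
print(model.pvalues)  # possibly "non-significant," but kept anyway
```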

This is as far as I have gotten at the moment, but here is a rough plan for the rest of the course.

Week 10: Data visualization. How to show data in ways that don’t support deficit interpretations.
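One idea I plan to show, sketched here with matplotlib and made-up numbers: plot group means with their uncertainty so small differences aren’t over-read, and label axes in terms of outcomes rather than group deficits.

```python
# Group means with uncertainty, rather than bare bars (made-up data).
import matplotlib.pyplot as plt

groups = ["Group A", "Group B", "Group C"]
means = [3.2, 3.3, 3.1]
sems = [0.15, 0.12, 0.18]

x = list(range(len(groups)))
fig, ax = plt.subplots()
ax.errorbar(x, means, yerr=sems, fmt="o", capsize=4)
ax.set_xticks(x)
ax.set_xticklabels(groups)
ax.set_ylabel("Mean course grade")
ax.set_title("Group means with standard errors (hypothetical data)")
plt.show()
```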

Week 11: P-values and their pernicious impacts, particularly in diversity, equity, and inclusion investigations.

Week 12: More on uncertainty and effect sizes.
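As a placeholder for what this might look like in practice, here is a sketch that reports a standardized effect size with a bootstrap interval instead of a bare p-value (hypothetical data):

```python
# Effect size with uncertainty instead of a bare p-value
# (hypothetical data).
import numpy as np

a = np.array([3.1, 3.4, 3.0, 3.6, 3.2])
b = np.array([2.9, 3.3, 3.7, 3.2, 3.0])

pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"Cohen's d = {(a.mean() - b.mean()) / pooled_sd:.2f}")

# A simple bootstrap interval conveys how uncertain the estimate is.
rng = np.random.default_rng(0)
boot = []
for _ in range(2000):
    ra = rng.choice(a, size=a.size, replace=True)
    rb = rng.choice(b, size=b.size, replace=True)
    sd = np.sqrt((ra.var(ddof=1) + rb.var(ddof=1)) / 2)
    boot.append((ra.mean() - rb.mean()) / sd)
print(np.percentile(boot, [2.5, 97.5]))
```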

Weeks 13-15: Working on their analyses, and reading and critiquing examples of quantitative diversity, equity, and inclusion research.

Final: Submit their draft of a QuantCrit paper based on their analysis (minus the literature review).

Qualtrics Tips and Tricks

Before I joined the STEM Equity group as a postdoc, I worked in the Grasping the Rationality of Instructional Practices lab, known partly for their bluies (little blue storyboard characters) and long online surveys. They have done some studies I would have thought were inconceivable, such as having national samples of in-service teachers each complete over 24 hours’ worth of questionnaires. As another example, I helped them build a geometry teaching simulation using the survey logic in Qualtrics to guide participants through several different choose-your-own-adventure storylines. Through working in that group and administering surveys primarily through Qualtrics, I have learned a few things that (I think) increase the likelihood that your participants will complete the survey feeling more like the bluey on the left than the bluey on the right.

(All opinions expressed are those of the author and do not necessarily represent the views of LessonSketch or the University of Michigan.)

  • Have a progress bar visible. That way participants will not feel hopeless if the survey goes on longer than expected.
  • Give participants an outline of the different sections, and then name each section when they start it. Just as an outline at the start of a presentation helps an audience, this gives people a sense of what the survey entails and lets them feel good about progressing through the sections.
  • Let people know that they can stop and come back IF they stay on the same device and that device does not delete/reset cookies. They will be very irritated with you if you say they can stop and come back, and then they return to the survey only to find they have to start over. School computers often reset cookies every day.
  • Break long studies up into separate sections and email them out one at a time. Email the next one only after the previous one has been completed, so that the study does not seem overwhelming.
  • Do not put too many questions on one page, as progress is not saved until participants hit Next. Also, if participants are dropping out, fewer questions per page makes it easier to use the data to narrow down which questions are deterring them from continuing.

Happy Qualtrics-ing!

Mollee Shultz

QuantCrit resources updates

We’re planning on updating a bunch of our resource pages for QuantCrit. Some of these will also become blog posts. For those who are newer to QuantCrit and interested in resources on how to enact it in your research, we recommend our Tenets of QuantCrit page.