
PLO 8: Designing and Conducting Research and Evaluation to Support Evidence-Based Decision-Making

  • Writer: Mingzhe Xue

Among the MLIS Program Learning Outcomes, PLO 8, "design and conduct research and evaluation studies to inform evidence-based decision-making," has been one of the most important to my growth as an information professional. As I think about my future in librarianship, research support, and information work more broadly, I do not see evidence-based decision-making as something limited to formal academic research. I see it as a professional habit of mind: the ability to define a problem clearly, select appropriate methods, evaluate the strengths and limitations of those methods, and use findings responsibly to support decisions about services, systems, and users. In many library and information settings, professionals are asked to improve programs, assess user needs, interpret patterns of behaviour, or justify changes in practice. For that reason, PLO 8 matters to me not only as a research outcome but as a foundation for accountable and reflective professional work.

My development in this area became especially clear through the Learning Coach project, which I worked on in collaboration with Prof. Fatemeh Salehian Kia. The project drew on student data originally collected from University of Michigan students, and it focused on analyzing interaction patterns in a learning analytics environment in order to identify meaningful behavioural patterns related to self-regulated learning. My role involved working with the underlying data structure, helping develop and refine the analysis pipeline, supporting sequence-based analysis and clustering, generating outputs, and contributing to the interpretation of findings. More importantly, this project pushed me to see research and evaluation not as secondary layers added after technical work, but as the logic that should shape the work from the beginning.

One of the most important shifts in my thinking during the MLIS program has been learning to approach information problems as research and evaluation questions rather than simply as technical tasks. Before entering the program, I was already comfortable with digital tools, data handling, and systems thinking. Earlier in my studies, I often focused on whether I could complete a task correctly: build a dataset, run an analysis, or produce an output. Over time, however, I began to ask different questions. What exactly is the research question? What assumptions are embedded in the method? What forms of evidence can the data actually support? What would count as an overinterpretation? How should findings be communicated so that they remain useful without becoming overstated? The Learning Coach project made those questions unavoidable, and that is why it represents the strongest example of my development in PLO 8.

One of the first lessons I learned from this project was that evidence-based research begins long before final results. It begins with conceptual clarity and data preparation. The project required transforming event log data into behavioural sequences that could be analyzed in a meaningful way. At first glance, such work might appear purely technical, but I came to understand it as a research design decision. How actions are coded, how sessions are segmented, and how transitions are interpreted all shape what kinds of patterns become visible later. If those early representations are poorly designed, the later analyses may still produce outputs, but the evidence they generate will be weaker or more misleading. This taught me that strong research is often built in the quiet, earlier stages, before polished figures or summaries appear.
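To make this concrete, the kind of design decision described above can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the action names and the 30-minute session gap are invented assumptions, chosen only to show how a segmentation rule shapes the sequences that every later analysis depends on.

```python
from datetime import datetime, timedelta

# Hypothetical raw event log: (student_id, timestamp, action) tuples.
# The action vocabulary and session threshold are illustrative choices,
# not the project's actual coding scheme.
events = [
    ("s1", datetime(2024, 3, 1, 9, 0), "view_goal"),
    ("s1", datetime(2024, 3, 1, 9, 5), "check_progress"),
    ("s1", datetime(2024, 3, 1, 9, 10), "revise_plan"),
    ("s1", datetime(2024, 3, 1, 14, 0), "view_goal"),  # long gap: new session
]

SESSION_GAP = timedelta(minutes=30)

def to_sessions(events):
    """Segment one student's time-ordered events into sessions,
    splitting whenever the gap between actions exceeds SESSION_GAP."""
    events = sorted(events, key=lambda e: e[1])
    sessions, current, last_time = [], [], None
    for _, ts, action in events:
        if last_time is not None and ts - last_time > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(action)
        last_time = ts
    if current:
        sessions.append(current)
    return sessions

print(to_sessions(events))
```

Changing the single `SESSION_GAP` constant changes which behavioural sequences exist at all, which is why segmentation is a research design decision rather than a purely technical one.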

A second major area of growth involved method selection and revision. In the Learning Coach project, we were trying to identify recurrent patterns in students’ behaviour, which required comparing different approaches to sequence analysis and clustering. One of the most valuable aspects of the experience was realizing that a method can be computationally valid and still be conceptually inappropriate for the question at hand. Some approaches were capable of generating results, but did not preserve temporality or adjacency in ways that matched the behavioural meaning we were trying to study. This pushed me toward more critical method evaluation: not asking merely whether a technique could run successfully, but whether it was fit for purpose. That moment marked an important development in my understanding of PLO 8. Designing research is not simply about choosing available tools. It is about selecting or revising methods so that the evidence they produce is appropriate to the research question.
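The "computationally valid but conceptually inappropriate" distinction above can be shown with a toy comparison. This sketch uses invented sequences and two generic distance measures, not the project's actual methods: an order-blind bag-of-actions distance sees two differently ordered sequences as identical, while an order-sensitive edit distance preserves the temporal difference that a behavioural question may depend on.

```python
from collections import Counter

# Two hypothetical behavioural sequences containing the same actions
# in a different order.
a = ["plan", "monitor", "revise", "monitor"]
b = ["monitor", "revise", "monitor", "plan"]

def bag_distance(x, y):
    """Order-blind distance: compares action counts only."""
    cx, cy = Counter(x), Counter(y)
    return sum((cx - cy).values()) + sum((cy - cx).values())

def edit_distance(x, y):
    """Order-sensitive (Levenshtein) distance between sequences."""
    prev = list(range(len(y) + 1))
    for i, xi in enumerate(x, 1):
        curr = [i]
        for j, yj in enumerate(y, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (xi != yj)))  # substitution
        prev = curr
    return prev[-1]

print(bag_distance(a, b))   # order-blind: the sequences look identical
print(edit_distance(a, b))  # order-sensitive: the difference is preserved
```

Both functions run without error on any input, which is exactly the point: whether the resulting distance is *evidence* depends on whether the measure matches the behavioural meaning being studied.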

The project’s evaluation dimension became even clearer during clustering and pattern analysis. Our goal was not simply to generate clusters, but to determine whether those clusters represented meaningful differences in student behaviour. This required comparing multiple forms of evidence: sequence structures, recurring patterns, and later associations with other variables such as GPA and selected categorical attributes. I learned that evaluation is strongest when it is layered rather than dependent on a single output. A result in isolation is rarely enough. Instead, I needed to consider whether different analyses converged, whether interpretations remained stable across outputs, and whether cluster narratives stayed grounded in the underlying data. This gave me a much more mature understanding of evaluation. Good evidence is not simply something contained in a table or visualization. It is built through comparison, triangulation, and interpretive restraint.
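The layered evaluation described above can be sketched as checking whether two independent forms of evidence converge for each cluster. Everything here is invented for illustration: the cluster labels, action names, and GPA values are hypothetical, and the project's actual analyses were richer than this.

```python
from collections import Counter
from statistics import mean

# Hypothetical clustered sequences paired with an external variable (GPA).
students = [
    {"cluster": 0, "seq": ["plan", "monitor", "revise"], "gpa": 3.6},
    {"cluster": 0, "seq": ["plan", "monitor", "monitor"], "gpa": 3.4},
    {"cluster": 1, "seq": ["browse", "browse", "exit"], "gpa": 2.9},
    {"cluster": 1, "seq": ["browse", "exit", "browse"], "gpa": 3.0},
]

def top_bigram(seqs):
    """Most frequent adjacent action pair across a cluster's sequences."""
    counts = Counter((s[i], s[i + 1]) for s in seqs for i in range(len(s) - 1))
    return counts.most_common(1)[0][0]

for label in sorted({s["cluster"] for s in students}):
    members = [s for s in students if s["cluster"] == label]
    print(label,
          top_bigram([m["seq"] for m in members]),    # structural evidence
          round(mean(m["gpa"] for m in members), 2))  # external association
```

When the structural pattern and the external variable tell compatible stories, the cluster narrative rests on triangulated evidence; when they diverge, that divergence is itself a signal to restrain interpretation.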

Another important part of my learning involved translating findings into forms that could inform decisions. The Learning Coach project eventually moved toward persona-building and the interpretation of behavioural clusters, which introduced a new challenge: how to make technical findings meaningful to collaborators and potentially useful for future design or intervention, while still preserving evidentiary integrity. I found this especially valuable because it sits at the boundary between research and practice. On one hand, I needed to ground claims in observable patterns and remain careful about uncertainty. On the other hand, I needed to help articulate why these patterns mattered in terms that collaborators could use. This taught me that evidence-based decision-making is not only about producing data-driven insights. It is also about framing evidence appropriately, distinguishing between observation and inference, and making clear what conclusions are warranted and what remain speculative.

The collaborative nature of the project also shaped my development. Because the Learning Coach work was done with Prof. Fatemeh in a broader research context, I had to think not only about technical correctness, but about how my analytical decisions affected the direction of a shared project. This required a stronger sense of responsibility. I needed to be explicit about what my methods were doing, transparent about limitations, and attentive to how findings might be interpreted by others. That experience helped me grow as both an analyst and a collaborator. It reminded me that research is methodological, but it is also social. Evidence is produced within teams, interpreted through discussion, and made useful through communication.

This work was not without challenges. One persistent challenge was the tension between complexity and clarity. Behavioural data can become analytically dense very quickly, and I often had to decide how much complexity was necessary to preserve meaning and how much would make the results harder to interpret or communicate. A second challenge was learning to tolerate revision. There were points where earlier assumptions, coding schemes, or methodological choices had to be reconsidered. Over time, I stopped seeing revision as a sign that the work had gone wrong and began to see it as a sign of stronger research practice. A third challenge involved recognizing the limits of the data and of my own interpretations. Because learning analytics data are traces rather than direct expressions of student intention, the project required humility as well as analytical precision.

At the same time, the project also brought important successes. I became more confident in designing workflows rather than simply following predefined procedures. I improved my ability to connect technical choices to research goals. I also became better at recognizing the limits of interpretation, which I now see as a strength rather than a weakness. Most importantly, I gained experience in helping produce evidence that could support reflection and decision-making in an applied educational context. That feels highly relevant to my future career because libraries and information organizations increasingly depend on assessment, user research, and data-informed planning. Whether I work in academic libraries, research support, or another information setting, I expect to draw on this ability to investigate problems systematically and translate findings into practical insight.

PLO 8 has also shaped how I think about professional identity. I entered the program already comfortable with technology, but the MLIS has shown me that technical competence alone is not enough. What makes information work effective is the ability to ask careful questions, design appropriate studies, evaluate evidence critically, and use findings ethically and responsibly. The Learning Coach project helped me see that research and evaluation are not separate from practice; they are part of how responsible practice is carried out.

Looking ahead, I still have areas for growth. I would like to strengthen my skills in formal evaluation design, especially in connecting quantitative findings with qualitative interpretation. I also want to continue improving my ability to communicate methodological choices and evidentiary limits to mixed audiences, including collaborators who may not share the same technical background. In future work, I would also like to become more systematic in documenting research decisions as projects unfold, so that the reasoning behind an analysis remains visible and reproducible from beginning to end.

Overall, the Learning Coach project represents the clearest example of my growth in relation to PLO 8. It challenged me to participate in a real collaborative research setting, work with student data from the University of Michigan, refine methods, evaluate evidence carefully, and connect findings to decision-making in an applied context. For those reasons, it demonstrates my learning in relation to PLO 8 more clearly than any single course assignment or isolated technical task. More importantly, it reflects the kind of information professional I hope to become: someone who approaches real-world problems with rigor, humility, curiosity, and a commitment to evidence-informed practice.

Copyright and permissions note

This reflection was written by Mingzhe Xue and is included in the portfolio for educational and reflective purposes. The Learning Coach project discussed in this reflection was carried out in collaboration with Prof. Fatemeh Salehian Kia within a broader research context. Supporting project artifacts, analytical outputs, and related materials should be understood as emerging from that collaborative research environment rather than as solely independent works unless otherwise specified. The underlying student data originated from University of Michigan sources and are subject to privacy, governance, and research-use constraints; they are not reproduced in full in this portfolio. Any figures, excerpts, or derived materials included here are presented only to document the student’s role and learning, with appropriate respect for authorship, confidentiality, and permission requirements.




