Accreditation gets a bad reputation.
I should know about bad reputations: I’ve spent most of my career working in assessment, a field many professionals actively avoid calling by name. People often see accreditation as a scary, externally imposed process in which an institution must make itself look perfect, do everything excellently, and comply with whatever is asked, even when it doesn’t serve the institution’s needs.
Accreditation is widely understood to be important. People know they must do whatever is necessary to remain in compliance; otherwise, they risk losing financial aid, the transferability of academic credits to other institutions, and/or the ability to operate in certain locations or through certain modalities.
Given the power at stake, misunderstanding accreditation carries major downsides.
First, people can invoke accreditation as a pretext to get attention or to force change. Be careful with this kind of usage: it can create a boy-who-cried-wolf situation, where genuine compliance concerns are no longer taken seriously.
Second, people may incorrectly believe their work has to mirror or adhere to external, unfamiliar, or unrelated standards. If that were true, I wouldn’t be a big fan either.
Third, people may believe accreditation happens only in single, time-bound events, such as a report or a site visit, rather than as an ongoing mindset and process that reinforces everyday work.
I’m here to help set the record straight. My hope in doing so is, in part, to clear accreditation’s name, so that it no longer seems so authoritarian, absolute, and prescriptive. More importantly, I want people to see the benefits of accreditation and the ways it can inform and improve their work.