Tuesday 28 November 2017

Training Evaluation Strategy Design


training programme evaluation - training and learning evaluation, feedback forms, action plans and follow-up

This section begins with an introduction to training and learning evaluation, including some useful learning reference models. The introduction also explains that, for evaluation to be truly effective, the training and development itself must be appropriate for the person and the situation. Good modern personal development and evaluation look beyond the obvious skills and knowledge required for the job, the organisation or the qualification. Effective personal development must also consider individual potential (natural abilities, often hidden or suppressed), individual learning styles, and whole-person development (life skills, in other words). Where training or teaching seeks to develop people (rather than focusing only on a particular qualification or skill), development must be approached more flexibly and individually than in traditional paternalistic (authoritarian, prescribed) methods of design, delivery and testing. These principles also apply to the teaching and development of young people, which interestingly offers useful lessons for workplace training, development and evaluation.

introduction

An important aspect of any kind of assessment is its effect on the person being assessed. Feedback is important for people to know how they are progressing, and assessment is also crucial to learners' confidence. And since people's commitment to learning depends so heavily on confidence and the belief that the learning is achievable, the way that tests and assessments are designed and administered, and the way results are presented back to learners, is a very important part of the learning and development process. People can switch off from the whole idea of learning and development very quickly if they receive only negative, critical test results and feedback. Always look for positive outcomes. Encourage and support - do not criticise without offering some positives, and certainly never concentrate on failure, or that is exactly what you will produce. This is a much overlooked factor in all kinds of evaluation and testing, and because this element is not typically built into evaluation and assessment tools, the point is emphasised clearly here. So always remember: evaluation is not just for the trainer, teacher, organisation or policy-maker - evaluation is absolutely necessary for the learner too. That is perhaps the most important reason for assessing people properly, fairly and with as much encouragement as the situation allows.

Most of the specific content and tools below for workplace training are based on the work of Leslie Rae, an expert and author on the evaluation of learning and training programmes, and this contribution is gratefully acknowledged. W Leslie Rae has written over 30 books on training and the evaluation of learning; he is an expert in his field. His guide to the effective evaluation of training and learning, training courses and learning programmes is a useful set of rules and techniques for all trainers and HR professionals.
This training evaluation guide is complemented by an excellent series of free learning evaluation and follow-up tools, created by Leslie Rae. It is recommended that you read this article before using the free evaluation and training follow-up tools. In particular, see the notes on this page about self-assessment when measuring skills before and after training (i.e. skill improvement and training effectiveness), which relate specifically to the 3-Test tool (explained and described below). See also the section on Donald Kirkpatrick's training evaluation model, which sets out the basic theory and principles for evaluating learning and training. See also Bloom's Taxonomy of learning domains, which establishes fundamental principles for training and for evaluating learning, and thereby training effectiveness. Erik Erikson's psychosocial (life stages) theory is very helpful in understanding how people's training and development needs change according to age and stage of life. These generational aspects are increasingly important in meeting people's needs (now a legal requirement under age discrimination law) and in making the most of what different age groups can offer work and organisations. Erikson's theory is particularly helpful when considering broader personal development needs and opportunities beyond the obvious job skills and knowledge.

Multiple Intelligences theory (the section includes free self-tests) is extremely relevant to training and learning. This model helps to address natural abilities and individual potential, which in many people can be hidden or suppressed (often by employers). Learning styles theory is very important for training and teaching, and features in Kolb's model and in the VAK learning styles model (also with a free self-test tool). Learning styles theory also relates to methods of assessment and evaluation, where inappropriate testing can severely distort results. Testing, like delivery, must take account of people's learning styles; for example, some people find it very difficult to prove their competence in a written test, but can show remarkable competence when asked to give a physical demonstration. Text-based evaluation instruments are not the best way to assess everyone.

The conscious competence learning stages theory is also a helpful perspective for learners and teachers. The model helps to explain the learning process for trainers and learners, and also helps to refine judgements about competence, since competence is rarely a simple yes-or-no matter. The conscious competence model is especially helpful to teachers and learners when feelings of frustration arise from an apparent lack of progress. Progress is not always easy to see, but can be happening nonetheless.

lessons from (and perhaps also for) children's education

While these various theories and models are presented here mainly for adult, work-related training, the principles also apply to the education of children and young people, and offer some useful foundations for workplace training and development. Notably, while evaluation and assessment are vitally important (because if you cannot measure it, you cannot manage it),
the most important thing is to train and develop the right things in the right way. Assessment and evaluation (and the testing of children) will not ensure effective learning and development if the training and development has not been properly designed in the first place. Lessons for the workplace are everywhere you look in children's education, so please forgive this digression. If children's education in the UK ever worked well, successive governments managed to wreck it by the 1980s and have made it worse since. This was achieved by imposing a ridiculously narrow range of skills and delivery methods, similarly narrow testing criteria and targets, and a self-defeating administrative burden. All of this perfectly characterises the arrogance and delusion of X-Theory management structures, in this case from high and mighty officials and politicians who do not live in the real world, who never went to an ordinary school and whose children did not either. A big lesson from this for organisations and workplace training is that X-Theory policy and narrow-mindedness are a disastrous combination. Incidentally, according to some of the same people, society is broken and our schools and parents are to blame and are responsible for sorting out the mess. Blaming the victims is another classic behaviour of incompetent governance. Society is not broken; it simply lacks proper accountable leadership, which raises another interesting point: the quality of any leadership (government or organisation) is defined by how it develops its people. Good leaders have a responsibility to help people understand, develop and fulfil their own potential. That is quite different from merely training them to do a job, or teaching them to pass an exam and get into university, which ignores far more important human and societal needs and opportunities.

Thankfully, modern educational thinking (and hopefully policy too) now seems to address the broader development needs of the individual child, rather than concentrating only on transferring knowledge in order to pass tests and exams. Knowledge transfer for the purpose of passing tests and exams, especially when based on such an arbitrary and extremely narrow notion of what should be taught and how, has little meaning or relevance to the development potential and needs of most young people, and even less relevance to the demands and opportunities of the real modern world, to say nothing of the life skills needed to become a fulfilled adult who can make a positive contribution to society. The desperately flawed UK children's education system of the past thirty years, and its negative effects on society, offer many useful lessons for organisations. Perhaps most significantly: if you fail to develop people as individuals, and aim only to transfer knowledge and skills to meet the organisational priorities of the day, then you will seriously hinder your chances of fostering a happy, productive community within your workforce, assuming that is what you want (and if it is not, that is another subject entirely).
Assuming you do want to develop a happy and productive workforce, it makes sense to look at, and learn from, the mistakes that have been made in children's education:

The range of learning is defined far too narrowly and ignores individual potential, which is then devalued or blocked (classic arrogant X-Theory management: stifling and suppressive). Policy-makers give top or exclusive priority to the obvious academic intelligences (reading, writing, arithmetic and so on), when others of the multiple intelligences (especially interpersonal and intrapersonal capabilities, helpfully underpinned by emotional intelligence) arguably have far greater value in work and society, and certainly cause more problems in work and society when under-developed.

Testing and assessment of learners and teachers measures the wrong things, too narrowly and in the wrong way - like measuring the weather with a thermometer. Testing (of the wrong kind, though no single kind would be adequate) is used to judge and pronounce on people's fundamental worth, which quite obviously damages self-esteem, confidence, ambition, dreams, life purpose and so on (nothing too serious then...).

Broader individual development needs - especially life skills - are ignored (many organisations and educational policy-makers seem to think that people are robots, that their work and personal lives are not connected, and that work is unaffected by well-being or depression, etc.).

Individual learning styles are ignored (learning is delivered mainly through reading and writing, when many people learn far better through experience, observation, etc. - again see Kolb and VAK).

Testing and assessment concentrate on proving retained knowledge in a distinctly unfair situation that suits only certain kinds of people, rather than assessing people's application, interpretation and development of capability, which is what real life requires (see Kirkpatrick's model, and consider the importance of assessing what people do with their improved capability, beyond assessing whether they have retained the theory, which means relatively little).

Children's education has traditionally ignored the fact that developing confident, happy, productive people is much easier if you first and foremost help people discover what they are good at - whatever it is - and then build on that.

Teaching, training and learning must be aligned with individual potential, individual learning styles and wider life development needs. And this broad, flexible, individual approach to human development is just as vital for the workplace as it is for schools.

Returning to workplace training itself and the work of Leslie Rae:

evaluation of workplace learning and training

There have been many surveys of the use of evaluation in training and development (see the research findings).
While surveys may initially appear encouraging, suggesting that many trainers and organisations use training evaluation extensively, when more specific and searching questions are asked it often turns out that many professional trainers and training departments use only 'reactionnaires' (general, vague feedback forms), including the invidious 'happy sheet', relying on questions such as 'how good did you feel the trainer was?' and 'how enjoyable was the training course?'. As Kirkpatrick, among others, teaches us, even well-produced reactionnaires are not a proper validation or evaluation of training.

For effective training and learning evaluation, the principal questions should be: To what extent were the identified training needs objectives of the programme achieved? To what extent were the learners' objectives achieved? What specifically did the learners learn, or what were they usefully reminded of? What commitment have the learners made about the learning they intend to implement on their return to work? And, back at work: How successful were the trainees in implementing their action plans? To what extent were they supported in this by their line managers? To what extent has the action listed above achieved a return on investment (ROI) for the organisation, either in terms of identified objectives being satisfied or, where possible, in monetary terms?

Organisations frequently fail to carry out these evaluation processes, especially where the HR department and the trainers do not have sufficient time to do so, and where the HR department does not have sufficient resources - people and money - to do so. Obviously the evaluation cloth must be cut according to the resources available (and the culture and atmosphere), which tend to differ from one organisation to another. The fact remains that good, methodical evaluation produces good, reliable data; conversely, where little evaluation is carried out, little is known about the effectiveness of the training.

evaluation of training

There are two main factors that need to be resolved: Who is responsible for the validation and evaluation processes? What resources of time, people and money are available for validation and evaluation purposes? (Within this, consider the effect of variation, for instance an unexpected cut in budget or staffing, i.e. predict and plan contingency to deal with variation.)

responsibility for the evaluation of training

Traditionally, any evaluation or other assessment has essentially been left to the trainers, 'because that is their job'. My (Rae's) contention is that a 'training evaluation quintet' should exist, each member of the quintet having roles and responsibilities in the process (see Assessing the Value of Your Training, Leslie Rae, Gower, 2002). Considerable lip service appears to be paid to this, but actual practice tends to be much less. The training evaluation quintet advocated consists of: senior management, the trainer, the line manager, the training manager, and the trainee or learner. Each has their own responsibilities, which are detailed next.

senior management - training evaluation responsibilities

Awareness of the need for, and value of, training to the organisation.
The necessity of involving the training manager (or equivalent) in senior management meetings where decisions are made about future changes in which training will be essential. Knowledge of, and support for, training plans. Active participation in training events. Requirement that evaluation is carried out, and requirement for a regular summary report. Policy and strategic decisions based on results and ROI data.

the trainer - training evaluation responsibilities

Provision of any necessary pre-programme work etc. and programme planning. Identification at the start of the programme of the knowledge and skills of the trainees. Provision of training and learning resources to enable the learners to learn within the objectives of the programme and the learners' own objectives. Monitoring the learning as the programme progresses. At the end of the programme, assessment of, and receipt of reports from, the learners on the levels of learning achieved. Ensuring the production by the learners of an action plan to reinforce, practise and implement the learning.

the line manager - training evaluation responsibilities

Work needs and people identification. Involvement in the development of the training programme and the evaluation. Support for pre-event preparation and holding briefing meetings with the learner. Giving ongoing and practical support to the training programme. Holding a debriefing meeting with the learner on their return to work to discuss, agree or help to modify, and agree action on, their action plan. Reviewing the progress of the learning implementation. Final review of implementation success and assessment, where possible, of the ROI.

the training manager - training evaluation responsibilities

Management of the training department and agreement of the training needs and the programme application. Maintenance of interest in, and support of, the planning and implementation of the programmes, including practical involvement where required. The introduction and maintenance of evaluation systems, and production of regular reports for senior management. Frequent, relevant contact with senior management. Liaison with the learners' line managers, and arranging learning-implementation responsibility programmes for those managers. Liaison with line managers, where necessary, in the assessment of the training ROI.

the trainee or learner - training evaluation responsibilities

Involvement in the planning and design of the training programme where possible. Involvement in the planning and design of the evaluation process where possible. Obviously, to take an interest in and an active part in the training programme or activity. To complete a personal action plan during and at the end of the training, for implementation on return to work, and to put this into practice with the support of the line manager. To take an interest in and support the evaluation processes.

N.B. Although the principal role of the trainee in the programme is to learn, the learner must be involved in the evaluation process. This is essential, since without their comments much of the evaluation could not take place; nor would the new knowledge and skills be implemented. If trainees neglect either responsibility, the organisation is wasting its investment in the training.
Trainees help more readily if the process avoids the look and feel of a paper-chase or number-crunching exercise. Instead, make sure the trainees understand the importance of their input - exactly what they are being asked for, and why.

training evaluation and validation options

As suggested earlier, what you are able to do, rather than what you would like to do or what should be done, depends on the resources available and the support of the culture. The following summarises a spectrum of possibilities within these dependencies.

1 - do nothing

Doing nothing to measure the effectiveness and results of any business activity is never a good option, but it may perhaps be justified in the training field under the following circumstances: when the organisation, even when prompted, shows no interest in the evaluation and validation of training and learning, from line managers up to the board; when you, as the trainer, have a firm process in place for planning training to meet organisational and personal development needs; when you have a reasonable level of assurance or evidence that the training delivered is fit for purpose and is obtaining results, and that the organisation (particularly the line managers and the board, the potential sources of criticism and complaint) is satisfied with the training provision; or when you have far better things to do than carry out training evaluation, particularly where evaluation is difficult and cooperation is sparse.

Even in these circumstances, however, a time may come when having maintained a basic evaluation system pays off, for example: you receive a sudden, unexpected demand to justify part or all of the training activity (such demands can arise, for example, with a change of management or policy, or a new initiative); you see an opportunity, or a need, to produce your own justification (for example, to argue for improved training resources, staffing or budgets, new premises or equipment); or you are trying to change jobs and need evidence of the effectiveness of your previous training activities.

Doing nothing is always the least desirable option. At some point someone senior to you may be moved to ask, 'can you prove what you say about how successful you are?' Without evaluation records you are likely to be at a loss for evidence.

2 - minimal action

The absolute minimum action for the beginning of an evaluation is as follows: at the end of every training programme, give the learners sufficient time and support in the form of programme information, and have the learners complete an action plan based on what they have learned on the programme and what they intend to implement on their return to work. This action plan should include not only a description of the intended action, but also comments on how they intend to implement it, a timescale for starting and completing it, any resources required, and so on. A fully detailed action plan always helps learners to consolidate their thoughts. The action plan has a secondary use in demonstrating to the trainers, and anyone else interested, the types and levels of learning that have been achieved.
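The action plan itself can be captured in any format. Purely as an illustration (not part of Rae's toolkit), the sketch below records the elements just described as a simple structured record; all field names and the example content are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionPlanItem:
    """One intended post-course action (illustrative field names only)."""
    action: str        # what the learner intends to do back at work
    how: str           # how they intend to implement it
    start_by: str      # planned start date
    complete_by: str   # planned completion date
    resources: List[str] = field(default_factory=list)  # support or resources needed

@dataclass
class ActionPlan:
    learner: str
    programme: str
    items: List[ActionPlanItem] = field(default_factory=list)

# Example usage with invented content
plan = ActionPlan(
    learner="A. Learner",
    programme="Time management workshop",
    items=[ActionPlanItem(
        action="Introduce a weekly planning session for the team",
        how="Block 30 minutes every Monday morning",
        start_by="2017-12-04",
        complete_by="2018-01-31",
        resources=["agreement of line manager"],
    )],
)
```

A record like this also makes the secondary use mentioned above easier: the trainer can scan the items to see what kinds and levels of learning the programme produced.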
Learners should also be encouraged to show and discuss their action plans with their line managers on returning to work, whether or not this kind of follow-up has been initiated by the manager.

3 - minimal desirable action for evaluation

On returning to work to implement the action plan, the learner should ideally be supported by their line manager, rather than the implementation being left entirely to the learner. The line manager should hold a debriefing meeting with the learner soon after their return to work, covering a number of questions and essentially discussing and agreeing the action plan and the support the learner will receive in implementing it. As described earlier, this is a clear responsibility of the line manager, and one which demonstrates to senior management, to the training department and, certainly not least, to the learner that a positive attitude is being taken towards the training. Contrast this with what so often happens: an employee is sent on a training course, after which all thought of management follow-up is forgotten.

The initial line manager debriefing meeting is not the end of the learning relationship between the learner and the line manager. At the first meeting, objectives and support must be agreed; then arrangements are made for interim reviews of implementation progress; after that, a final review meeting needs to consider future action. This process demands minimal action by the line manager - little more than the kind of observations a line manager would be making anyway when supervising the work of their staff. This process of review meetings requires little effort and time from the manager, but does much to demonstrate, at the very least to the staff, that their managers take training seriously.

4 - training programme basic validation approach

The action plan and implementation approach described in (3) is the responsibility of the learners and their line managers and, apart from the provision of advice and time, requires no resource involvement from the trainer. There are two further parts of an approach which likewise require only the provision of time for the learners to describe their feelings and information. The first is the reactionnaire, which seeks the views, opinions, feelings and so on of the learners about the programme. This is not at the 'happy sheet' level, nor a simple tick-list, but one that allows realistic feelings to be stated. This type of reactionnaire is described in the book (Assessing the Value of Your Training, Leslie Rae, Gower, 2002). It seeks a score for each question against a six-point range from good to bad, and also asks the learners to give reasons for their scores, which is particularly important where a score is low. Reactionnaires should not be automatic events on every course or programme. This type of evaluation can be reserved for new programmes (for example the first three events), or for occasions when there are indications that something is going wrong with the programme. Example reactionnaires are available in the set of free training evaluation tools; a minimal sketch of summarising such scores follows below.
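As a minimal sketch of the kind of summary such scoring allows (not Rae's actual reactionnaire), the snippet below averages scores on a six-point scale and flags low-scoring questions whose written reasons deserve attention. The questions, scores and threshold are invented for illustration.

```python
# Minimal sketch: summarising reactionnaire scores on a 6-point scale (6 = good, 1 = bad).
# Question texts and scores are illustrative only.
responses = {
    "Relevance of the content to my job": [6, 5, 4, 6, 5],
    "Pace of the programme":               [3, 4, 2, 3, 4],
    "Opportunity to practise new skills":  [5, 6, 5, 4, 6],
}

for question, scores in responses.items():
    average = sum(scores) / len(scores)
    flag = "  <- review the reasons given" if average < 4 else ""
    print(f"{question}: {average:.1f}{flag}")
```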
The next evaluation instrument which, like the action plan, should where possible be used at the end of every course, is the Learning Questionnaire (LQ). This can be a relatively simple instrument that asks the learners what they learned on the programme, what they were usefully reminded of, and what was not included that they had expected, or would have liked, to be included. Scoring ranges can be included, but these are minimal and subordinate to the textual comments made by the learners. There is an alternative to the LQ called the Key Objectives LQ (KOLQ), which seeks to establish the amount of learning achieved by posing the relevant questions against the list of key objectives for the programme. Where a reactionnaire and an LQ/KOLQ are used, they must not be filed away and forgotten at the end of the programme, as is the common tendency, but used to produce a training evaluation and validation summary. A factually based evaluation summary is necessary to support any claim that a programme is good, effective or satisfies the objectives set. Evaluation summaries can also be helpful for public relations around the training programme, and so on. Example learning questionnaires and key objectives learning questionnaires are included in the set of free evaluation tools.

5 - total evaluation process

When it becomes necessary, the processes described in (3) and (4) can be combined and supplemented by other methods to produce a full evaluation process that covers all eventualities. Few occasions or environments allow this full process to be applied, particularly where there is no quintet support, but it is the ultimate aim. The process is summarised below:

Training needs identification and setting of objectives by the organisation.
Planning, design and preparation of the training programmes against the objectives.
Pre-course identification of people with needs, and completion of the preparation required by the training programme.
Pre-course briefing meeting between learner and line manager.
Pre-course or start-of-programme identification of the learners' existing knowledge, skills and attitudes (see the 3-Test before-and-after training example tool, manual version (pdf), manual version (xls) and working file version; I am grateful to F Tarek for sharing the pdf file Arabic translation of the three-test version, and the same tool as a doc file, Arabic translation of the three-test version).
Interim validation as the programme progresses.
End-of-programme assessment (3-Test example tool, manual version and working file version).
Completion of the end-of-programme reactionnaire.
Completion of the end-of-programme Learning Questionnaire or Key Objectives Learning Questionnaire.
Completion of the action plan.
Post-course debriefing meeting between learner and line manager.
Line manager observation of implementation progress.
Review meetings to discuss the progress of implementation.
Final implementation review meeting.
Assessment of ROI.

Whatever else you do, do something. The processes described above allow considerable latitude depending on resources and the culture and environment, so there is always the possibility of doing something; obviously the more tools used and the broader the approach, the more valuable and effective the evaluation will be.
But be pragmatic. Large, expensive, critical programmes always justify more evaluation and scrutiny than small, one-off, non-critical training activities. Where there is major investment and expectation, the evaluation should be correspondingly detailed and complete. Training managers should particularly clarify measurement and evaluation expectations with senior management before embarking on substantial new training activity, so that appropriate evaluation processes can be built in when the programme is designed. Where large and potentially critical programmes are planned, training managers should err on the side of caution and make sure that adequate evaluation processes are in place. As with any investment, a senior executive is always likely to ask, 'what did we get for our investment?', and when they do, the training manager needs to be able to provide a fully detailed answer.

measuring improvement through self-assessment

The 3-Test before-and-after training example (see the manual version (pdf), the manual version (xls) and the working file version) is a useful tool and a helpful illustration of the challenge of measuring improvement in capability after training using self-assessment. A critical element within the tool is the assessment referred to as the 'revised pre-training ability', which is carried out after the training. The revised pre-training ability is a reassessment, made after the training, of the level of ability that existed before the training. This will usually differ significantly from the ability assessment made before the training, because, implicitly, we do not fully understand competence and ability in a skill area before we are trained in it. People commonly over-estimate their ability before training. After training, many people realise that they actually had a lower level of competence than they first believed (i.e. before the training). It is important to allow for this when attempting to measure real improvement using self-assessment. This is the reason for revising, after the training, the pre-training assessment of ability. Moreover, in many situations, after training people's appreciation of competence in a given skill area expands enormously: they realise how big and complex the subject is, and they become more aware of their real ability and of their opportunities for improvement.

Because of this it is possible for a person before training to imagine (in ignorance) that they have a competence level of, say, 7 out of 10. After training their ability typically improves, but so does their awareness of the true nature of competence, and so they may then judge themselves, after training, to be only, say, 8 or 7 or even lower at 6 out of 10. This looks like a regression. It is not, of course, which is why a reassessment of the pre-trained ability is important. Extending the example, a person's revised assessment of their pre-trained ability could be, say, 3 or 4 out of 10 (revised downwards from 7/10), because now the person can make an informed (revised) assessment of their actual competence before training. A useful reference model for understanding this is the conscious competence learning model. Before we are trained we tend to be unconsciously incompetent (unaware of our true ability and of what competence actually is).
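To make the arithmetic concrete, here is a minimal sketch of the comparison that this before-and-after self-assessment relies on. The figures follow the example above (7 before, 8 after, pre-training score revised down to 4), and the function name is an assumption for illustration rather than part of the 3-Test tool itself.

```python
def self_assessed_improvement(pre: int, post: int, revised_pre: int) -> dict:
    """Compare the naive improvement with the improvement measured against the
    revised (post-training) assessment of pre-training ability."""
    return {
        "apparent improvement (post - pre)": post - pre,
        "real improvement (post - revised pre)": post - revised_pre,
    }

# Example from the text: the learner rates themselves 7/10 before training and
# 8/10 afterwards, but revises the pre-training rating down to 4/10 once they
# understand what competence in the skill area really involves.
print(self_assessed_improvement(pre=7, post=8, revised_pre=4))
# {'apparent improvement (post - pre)': 1, 'real improvement (post - revised pre)': 4}
```

The naive comparison suggests almost no gain (or even a regression), while the revised comparison shows the real improvement; this is exactly why the revised pre-training assessment matters.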
After training we become more consciously aware of our true level of competence, as well as hopefully becoming more competent too. When we use self-assessment tools it is important to allow for this, hence the design of the 3-Test before-and-after training tool - see also the manual version (pdf) and the manual version (xls). In other words: in measuring improvement between before and after training using self-assessment, it is useful first to revise our pre-training assessment, because before training our assessment of ability is usually over-optimistic, which can (falsely) suggest an apparently small improvement or even a regression (because we thought we were more skilled than we now realise we actually were). Note that this self-assessment aspect of learning evaluation is only one part of the overall evaluation which can be addressed. See Kirkpatrick's learning evaluation model for a wider appreciation of the issues.

the trainer's overall responsibilities - aside from training evaluation

Over the years the trainer's roles have changed, but the basic purpose of the trainer is to provide efficient and effective training programmes. The following suggests the elements of the basic role of the trainer, but it must be borne in mind that different circumstances will require modifications of these activities.

1. The basic role of a trainer (or however they may be designated) is to offer and provide efficient and effective training programmes aimed at enabling the participants to learn the knowledge, skills and attitudes required of them.

2. A trainer plans and designs the training programmes, or otherwise obtains them (for example, distance learning or e-technology programmes on the Internet or on CD/DVD), in accordance with the requirements identified from the results of a TNIA (Training Needs Identification and Analysis - or simply TNA, Training Needs Analysis) for the relevant staff of an organization or organizations.

3. The training programmes cited at (1) and (2) must be completely based on the TNIA, which has been: (a) completed by the trainer on behalf of and at the request of the relevant organization, or (b) determined in some other way by the organization.

4. Following discussion with, or direction by, the organization management, who will have taken into account costs and values (e.g. ROI - return on investment in the training), the trainer will agree with the organization management the most appropriate form and methods for the training.

5. If the appropriate form for satisfying the training need is a direct training course or workshop, or an intranet-provided programme, the trainer will design this programme using the most effective approaches, techniques and methods, integrating face-to-face practices with various forms of e-technology wherever this is possible or desirable.

6. If the appropriate form for satisfying the training need is some form of open learning programme or e-technology programme, the trainer, with the support of the organization management, should obtain the materials, plan their utilization, and be prepared to support the learner in the use of the relevant materials.

7. The trainer, following contact with the potential learners, preferably through their line managers, to seek some pre-programme activity and/or initial evaluation activities, should provide the appropriate training programme(s) to the learners put forward by their organization(s).
During and at the end of the programme, the trainer should ensure that: (a) an effective form of training/learning validation is followed, and (b) the learners complete an action plan for implementation of their learning when they return to work.

8. Provide, as necessary, having reviewed the validation results, an analysis of the changes in the knowledge, skills and attitudes of the learners to the organization management, with any recommendations deemed necessary. The review would include consideration of the effectiveness of the content of the programme and the effectiveness of the methods used to enable learning, that is, whether the programme satisfied the objectives of the programme and those of the learners.

9. Continue to provide effective learning opportunities as required by the organization.

10. Enable their own CPD (Continuing Professional Development) by all possible developmental means - training programmes and self-development methods.

11. Arrange and run educative workshops for line managers on the subject of their fulfilment of their training and evaluation responsibilities.

Dependent on the circumstances and the decisions of the organization management, trainers do not, under normal circumstances:

1. Make organizational training decisions without the full agreement of the organizational management.

2. Take part in the post-programme learning implementation or evaluation, unless the learners' line managers cannot or will not fulfil their training and evaluation responsibilities.

Unless circumstances force them to behave otherwise, the trainer's role is to provide effective training programmes, and the role of the learners' line managers is to continue the evaluation process after the training programme, counsel and support the learner in the implementation of their learning, and assess the cost-value effectiveness or (where feasible) the ROI of the training. Naturally, if action will help the trainers to become more effective in their training, they can take part in, but not run, any pre- and post-programme actions as described, always remembering that these are the responsibilities of the line manager.

leslie rae's further references and recommended reading

Annett, Duncan, Stammers and Gray, Task Analysis, Training Information Paper 6, HMSO, 1971. Bartram, S. and Gibson, B. Training Needs Analysis, 2nd edition, Gower, 1997. Bartram, S. and Gibson, B. Evaluating Training, Gower, 1999. Bee, Frances and Roland, Training Needs Analysis and Evaluation, Institute of Personnel and Development, 1994. Boydell, T. H. A Guide to the Identification of Training Needs, BACIE, 1976. Boydell, T. H. A Guide to Job Analysis, BACIE, 1970. A companion booklet to A Guide to the Identification of Training Needs. Bramley, Peter, Evaluating Training Effectiveness, McGraw-Hill, 1990. Buckley, Roger and Caple, Jim, The Theory and Practice of Training, Kogan Page, 1990. (Chapters 8 and 9.) Craig, Malcolm, Analysing Learning Needs, Gower, 1994. Davies, I. K. The Management of Learning, McGraw-Hill, 1971. (Chapters 14 and 15.) Easterby-Smith, M. Braiden, E. M. and Ashton, D. Auditing Management Development, Gower, 1980. Easterby-Smith, M. How to Use Repertory Grids in HRD, Journal of European Industrial Training, Vol 4, No 2, 1980. Easterby-Smith, M. Evaluating Management Development, Training and Education, 2nd edition, Gower, 1994. Fletcher, Shirley, NVQs Standards and Competence, 2nd edition, Kogan Page, 1994. Hamblin, A. C. The Evaluation and Control of Training, McGraw-Hill, 1974. Honey, P.
The Repertory Grid in Action, Industrial and Commercial Training, Vol II, Nos 9, 10 and 11, 1979. ITOL, A Glossary of UK Training and Occupational Learning Terms, ed. J. Brooks, ITOL, 2000. Kelly, G. A. The Psychology of Personal Constructs, Norton, 1953. Kirkpatrick, D. L. Evaluation of Training, in Training and Development Handbook, edited by R. L. Craig, McGraw-Hill, 1976. Kirkpatrick, D. L. Evaluating Training Programs: The four levels, Berrett-Koehler, 1996. Laird, D. Approaches to Training and Development, Addison-Wesley, 1978. (Chapters 15 and 16.) Mager, R. F. Preparing Objectives for Programmed Instruction, Fearon, 1962. (Later re-titled: Preparing Instructional Objectives, Fearon, 1975.) Manpower Services Commission, A Glossary of Training Terms, HMSO, 1981. Newby, Tony, Validating Your Training, Kogan Page Practical Trainer Series, 1992. Odiorne, G. S. Training by Objectives, Macmillan, 1970. Parker, T. C. Statistical Methods for Measuring Training Results, in Training and Development Handbook, edited by R. L. Craig, McGraw-Hill, 1976. Peterson, Robyn, Training Needs Analysis in the Workplace, Kogan Page Practical Trainer Series, 1992. Philips, J. Handbook of Training Evaluation and Measurement, 3rd edition, Butterworth-Heinemann, 1977 Philips, J. Return on Investment in training and Performance Improvement Programs. Butterworth-Heinemann, 1977 Philips, P. P.P. Understanding the Basics of Return on Investment in Training, Kogan-Page,2002 Prior, John (ed.), Handbook of Training and Development, 2nd edition, Gower, 1994. Rackham, N. and Morgan, T. Behaviour Analysis in Training, McGraw-Hill, 1977. Rackham, N. et al. Developing Interactive Skills, Wellens, 1971. Rae, L. Towards a More Valid End-of-Course Validation, The Training Officer, October 1983. Rae, L. The Skills of Human Relations Training, Gower, 1985. Rae, L. How Valid is Validation, Industrial and Commercial Training, Jan.-Feb. 1985. Rae, L. Using Evaluation in Training and Development, Kogan Page, 1999. Rae, L. Effective Planning in Training and Development, Kogan Page, 2000. Rae, L. Training Evaluation Toolkit, Echelon Learning, 2001. Rae, L. Trainer Assessment, Gower, 2002. Rae, L. Techniques of Training, 3rd edition, Gower, 1995. (Chapter 10.) Robinson, K. R. A Handbook of Training Management, Kogan Page, 1981. (Chapter 7.) Schmalenbach, Martin, The Death of ROI and the Rise of a New Management Paradigm, Journal of the Institute of Training and Occupational Learning, Vol. 3, No.1, 2002. Sheal, P. R. How to Develop and Present Staff Training Courses, Kogan Page, 1989. Smith, M. and Ashton, D. Using Repertory Grid Techniques to Evaluate Management Training, Personnel Review, Vol 4, No 4, 1975. Stewart, V. and Stewart A. Managing the Managers Growth, Gower, 1978. (Chapter 13.) Thurley, K. E. and Wirdenius, H. Supervision: a Re-appraisal, Heinemann, 1973. Warr, P. B. Bird, M. and Rackham, N. The Evaluation of Management Training, Gower, 1970. Whitelaw, M. The Evaluation of Management Training: a Review, Institute of Personnel Management, 1972. Wills, Mike, Managing the Training Process, McGraw-Hill, 1993. The core content and tools relating to workplace training evaluation is based on the work of Leslie Rae, MPhil, Chartered FCIPD, FITOL, which is gratefully acknowledged. 
Leslie Rae welcomes comments and enquiries about the subject of training and its evaluation, and can be contacted via businessballs or direct: Wrae804418 at aol dot com

a note about ROI (return on investment) in training

Attempting financial ROI assessment of training is a controversial issue. It's a difficult task to do in absolute terms due to the many aspects to be taken into account, some of which are very difficult to quantify at all, let alone to define in precise financial terms. Investment - the cost - in training may be easier to identify, but the benefits - the return - are notoriously tricky to pin down. What value do you place on improved morale? Reduced stress levels? Longer careers? Better qualified staff? Improved time management? All of these can be benefits - returns - on training investment. Attaching a value and relating it to a single cause, i.e. the training, is often impossible. At best, therefore, many training ROI assessments are necessarily best estimates.

If ROI-type measures are required in areas where reliable financial assessment is not possible, it's advisable to agree a best possible approach, or a notional indicator, and then ensure this is used consistently from occasion to occasion, year on year, course to course, allowing at least a comparison of like with like to be made and trends to be spotted, even if the financial data is not absolutely accurate. In the absence of absolutely quantifiable data, find something that will provide a useful, if notional, indication. For example, after training sales people, the increased number and value of new sales made is an indicator of sorts. After motivational or team-building training, reduced absentee rates would be an expected output. After an extensive management development programme, the increase in internal management promotions would be a measurable return. Find something to measure, rather than say it can't be done at all, but be pragmatic and limit the time and resources spent according to the accuracy and reliability of the input and output data. Also, refer back to the very original Training Needs Analysis that prompted the training itself - what were the business performance factors that the training sought to improve? Use these original drivers to measure and relate to the organizational return achieved.

The problems in assessing ROI are more challenging in public and non-profit-making organizations - government departments, charities, voluntary bodies, etc. ROI assessment in these environments can be so difficult as to be insurmountable, so that the organization remains satisfied with general approximations or vague comparisons, or accepts wider forms of justification for the training without invoking detailed costing. None of this is to say that cost- and value-effectiveness assessment should not be attempted. At the very least, direct costs must be controlled within agreed budgets, and where possible, attempts at more detailed returns should be made. It may be of some consolation to know that Jack Philips, an American ROI guru, recently commented about training ROI: 'Organisations should be considering implementing ROI impact studies very selectively on only 5 to 10 per cent of their training programmes, otherwise it becomes incredibly expensive and resource intensive.'

training evaluation research

This research extract is an example of the many survey findings that indicate the need to improve evaluation of training and learning.
It is useful to refer to the Kirkpatrick learning evaluation model to appreciate the different stages at which learning and training effectiveness should be evaluated. Research published by the UK's British Learning Association in May 2006 found that 72% (of a representative sample) of the UK's leading learning professionals considered that learning tends not to lead to change. Only 51% of respondents said that learning and training was evaluated several months after the learning or training intervention. The survey was carried out among delegates of the 2006 conference of the UK's British Learning Association. Speaking on the findings, David Wolfson, Chairman of the British Learning Association, said: 'These are worrying figures from the country's leading learning professionals. If they really do reflect training in the UK, then we have to think long and hard about how to make the changes that training is meant to give. It suggests that we have to do more - much more - to ensure that learning interventions really make a difference.' The British Learning Association is a centre of expertise that produces best practice examples, identifies trends and disseminates information on both innovative and well-established techniques and technologies for learning. The aim is to synthesise existing knowledge, develop original solutions and disseminate this to a wide cross-sector membership.

There are many different ways to assess and evaluate training and learning. Remember that evaluation is for the learner too - evaluation is not just for the trainer or the organisation. Feedback and test results help learners know where they are, and directly affect the learner's confidence and their determination to continue with the development - in some cases with their own future personal development altogether. Central to improving training and learning is the question of bringing more meaning and purpose to people's lives, aside from merely focusing on skills and work-related development and training courses. Learning and training enable positive change and improvement - for people and employers - when people's work is aligned with people's lives: their strengths, personal potential, goals and dreams, outside work as well as at work.

Evaluation of training can only be effective if the training itself is effective and appropriate. Testing the wrong things in the wrong way will give you unhelpful data, and could be even more unhelpful for learners. Consider people's learning styles when evaluating personal development. Learning styles are essentially a perspective on people's preferred working, thinking and communicating styles. Written tests do not enable all types of people to demonstrate their competence. Evaluating retention of knowledge alone is a very limited form of assessment. It will not indicate how well people apply their learning and development in practice. Revisit Kirkpatrick's theory and focus as much as you can on how the learning and development is applied, and on the change and improvements achieved, in the working situation. See the notes about organizational change and ethical leadership to help understand and explain these principles further, and how to make learning and development more meaningful and appealing for people.
authorship/referencing

© Leslie Rae original content (main workplace learning evaluation content and tools) 2004-13; Alan Chapman edit and contextual materials 2004-2013.

Evaluating Training and Results (ROI of Training)

Also See the Library's Blogs Related to Evaluating Training and Results (ROI)

In addition to the articles on this current page, also see the following blogs that have posts related to Evaluating Training and Results (ROI). Scan down the blog's page to see the various posts; see also the section 'Recent Blog Posts' in the sidebar of the blog, or click on 'next' near the bottom of a post in the blog. The blog also links to numerous free resources.

Preparation for Evaluating Training Activities and Results

The last phase of the ADDIE model of instructional design, or systematic training, is evaluation. However, the evaluation really should have started even during the previous phase - the implementation phase - because the evaluation covers both the activities of the trainer as they are being implemented and the results of the training as it nears an end or is finished. Evaluation includes getting ongoing feedback, e.g. from the learner, trainer and learner's supervisor, to improve the quality of the training and to identify whether the learner achieved the goals of the training. Before proceeding with the guidelines in this topic, the reader would benefit from first reviewing the information about formal and systematic training, especially the ADDIE model, at Formal Training Processes - Instructional Systems Design (ISD) and ADDIE. Then scan the contents of the fourth phase of the ADDIE model of systematic planning of training, Implementing Your Training Plan. (This evaluation phase is the fifth phase of the ADDIE model.) Also, note that there is a document, Complete Guidelines to Design Your Training Plan, which draws together the guidelines from the various topics about training plans to help you develop a training plan. That document also provides a Framework to Design Your Training Plan that you can use to document the various aspects of your plan.

Perspective on Evaluating Training

Evaluation is often looked at from four different levels (the 'Kirkpatrick levels') listed below. Note that the farther down the list, the more valid the evaluation. A minimal sketch of recording findings against these levels follows this passage.

Reaction - What does the learner feel about the training?
Learning - What facts, knowledge, etc. did the learner gain?
Behaviours - What skills did the learner develop, that is, what new information is the learner using on the job?
Results or effectiveness - What results occurred, that is, did the learner apply the new skills to the necessary tasks in the organization and, if so, what results were achieved?

Although level 4, evaluating results and effectiveness, is the most desired result from training, it is usually the most difficult to accomplish. Evaluating effectiveness often involves the use of key performance measures - measures you can see, e.g. faster and more reliable output from the machine after the operator has been trained, higher ratings on employees' job satisfaction questionnaires from the trained supervisor, etc. This is where following sound principles of performance management is of great benefit.

Suggestions for Evaluating Training

Typically, evaluators look for validity, accuracy and reliability in their evaluations. However, these goals may require more time, people and money than the organization has.
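Purely as an illustration of how findings could be organised against the four levels above (not a prescribed format; the enum, field names and example records are assumptions), a minimal sketch:

```python
from enum import IntEnum
from dataclasses import dataclass

class KirkpatrickLevel(IntEnum):
    REACTION = 1   # what the learner felt about the training
    LEARNING = 2   # facts, knowledge and skills gained
    BEHAVIOUR = 3  # what the learner now does differently on the job
    RESULTS = 4    # organisational results achieved

@dataclass
class EvaluationEvidence:
    programme: str
    level: KirkpatrickLevel
    method: str    # e.g. reactionnaire, test, observation, performance records
    finding: str

# Illustrative records only
evidence = [
    EvaluationEvidence("Sales induction", KirkpatrickLevel.REACTION,
                       "end-of-course reactionnaire", "average 5.2 out of 6"),
    EvaluationEvidence("Sales induction", KirkpatrickLevel.RESULTS,
                       "performance records", "new orders up versus a comparable branch"),
]
```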
Evaluators are also looking for evaluation approaches that are practical and relevant. Training and development activities can be evaluated before, during and after the activities. Consider the following very basic suggestions:

Before the Implementation Phase

Will the selected training and development methods really result in the employee learning the knowledge and skills needed to perform the task or carry out the role? Have other employees used the methods and been successful? Consider applying the methods to a highly skilled employee. Ask the employee for their impressions of the methods. Do the methods conform to the employee's preferences and learning styles? Have the employee briefly review the methods, e.g. documentation, overheads, etc. Does the employee experience any difficulties understanding the methods?

During Implementation of Training

Ask the employee how they're doing. Do they understand what's being said? Periodically conduct a short test, e.g. have the employee explain the main points of what was just described to them, for example in the lecture. Is the employee enthusiastically taking part in the activities? Is he or she coming late and leaving early? It's surprising how often learners will leave a course or workshop and immediately complain that it was a complete waste of their time. Ask the employee to rate the activities from 1 to 5, with 5 being the highest rating. If the employee gives a rating of anything less than 5, have the employee describe what could be done to get a 5.

After Completion of the Training

Give him or her a test before and after the training and development, and compare the results. Interview him or her before and after, and compare results. Watch him or her perform the task or conduct the role. Assign an expert evaluator from inside or outside the organization to evaluate the learner's knowledge and skills.

One Approach to Calculate Return On Investment (ROI)

(This section was written by Leigh Dudley. The section mentions HRD - activities of human resource development - but the guidelines are as applicable to training and development.)

The calculation of ROI in training and development or HRD begins with the basic model, where sequential steps simplify a potentially complicated process. The ROI process model provides a systematic approach to ROI calculations. The step-by-step approach keeps the process manageable so that users can tackle one issue at a time. The model also emphasizes that this is a logical process that flows from one step to another; applying it consistently from one ROI calculation to another provides consistency, understanding, and credibility. Each step of the model is briefly described below.

Collecting Post-Program Data

Data collection is central to the ROI process and is its starting point. Although the ROI analysis is (or should be) planned early in the training and development cycle, the actual ROI calculation begins with data collection. (Additional information on planning for the ROI analysis is presented later under 'Essential Planning Steps'.) The HRD staff should collect both hard data (representing output, quality, cost, and time) and soft data (including work habits, work climate, and attitudes). Collect Level 4 data using a variety of methods, as follows:

Follow-up Questionnaires - Administer follow-up questionnaires to uncover specific applications of training. Participants provide responses to a variety of types of open-ended and forced-response questions. Use questionnaires to capture both Level 3 and Level 4 data.
The example below shows a series of Level 4 impact questions contained in a follow-up questionnaire for evaluating an automotive manufacturer's sales training program in Europe, with appropriate responses. HRD practitioners can use the data in an ROI analysis. Program Assignments - Program assignments are useful for simple, short-term projects. Participants complete the assignment on the job, using the skills or knowledge learned in the program. Report completed assignments as evaluation information, which often contains Level 3/Level 4 data. Convert Level 4 data to monetary values and compare the data to cost to develop the ROI. Action Plans - Developed in training and development programs, action plans on the job should be implemented after the program is completed. A follow-up of the plans provides evaluation information. Level 3/Level 4 data are collected with action plans, and the HRD staff can develop the ROI from the Level 4 data. Performance Contracts - Developed prior to conducting the program, and when the participant, the participant's supervisor, and the instructor all agree on planned specific outcomes from the training, performance contracts outline how the program will be implemented. Performance contracts usually collect both Level 3 and Level 4 data and are designed and analyzed in the same way as action plans. Performance Monitoring - As the most beneficial method to collect Level 4 data, performance monitoring is useful when HRD personnel examine various business performance records and operational data for improvement. The important challenge in this step is to select the data collection method or methods that are appropriate for the setting, the specific program, and the time and budget constraints.

Isolating the Effects of Training
Isolating the effects of training is an often overlooked issue in evaluations. In this step of the ROI process, explore specific techniques to determine the amount of output performance directly related to the program. This step is essential because many factors influence performance data after training. The specific techniques of this step will pinpoint the amount of improvement directly related to the program, increasing the accuracy and credibility of the ROI calculation. Collectively, the following techniques provide a comprehensive set of tools to tackle the important and critical issue of isolating the effects of training. Control Group - Use a control group arrangement to isolate training impact. With this technique, one group receives training while another, similar group does not receive training. The difference in the performance of the two groups is attributed to the training program. When properly set up and implemented, the control group arrangement is the most effective way to isolate the effects of training. Impact Estimates - When the previous approach is not feasible, estimating the impact of training on the output variables is another approach, and it can be carried out at the following four levels. Participants - estimate the amount of improvement related to training. In this approach, provide participants with the total amount of improvement, on a pre- and post-program basis, and ask them to indicate the percent of the improvement that is actually related to the training program. Supervisors of participants - estimate the impact of training on the output variables. Present supervisors with the total amount of improvement, and ask them to indicate the percent related to training.
Senior Managers - estimate the impact of training by providing an estimate or adjustment to reflect the portion of the improvement related to the training program. While perhaps inaccurate, having senior management involved in this process develops ownership of the value and builds buy-in. Experts - estimate the impact of training on the performance variable. Because these estimates are based on previous experience, experts must be familiar with the type of training and the specific situation. Customers sometimes provide input on the extent to which training has influenced their decision to use a product or service. Although this approach has limited applications, it can be quite useful in customer service and sales training.

Converting Data to Monetary Values
A number of techniques are available to convert data to monetary values; the selection depends on the type of data and the situation. Convert output data to profit contribution or cost savings. With this technique, output increases are converted to monetary value based on their unit contribution to profit or the unit of cost reduction. These values are readily available in most organizations and are seen as generally accepted standard values. Calculate the cost of quality, and convert quality improvements directly to cost savings. This standard value is available in many organizations for the most common quality measures (such as rejects, rework, and scrap). Use the participants' wages and employee benefits as the value for time in programs where employee time is saved. Because a variety of programs focus on improving the time required to complete projects, processes, or daily activities, the value of time becomes an important and necessary issue. The use of total compensation per hour provides a conservative estimate for the value of time. Use historical costs when they are available for a specific variable. In this case, use organizational cost data to establish the specific value of an improvement. Use internal and external experts, when available, to estimate a value for an improvement. In this situation, the credibility of the estimate hinges on the expertise and reputation of the individual. Use external databases, when available, to estimate the value or cost of data items. Research, government, and industry databases can provide important input for these values. The difficulty lies in finding a specific database related to the situation. Ask participants to estimate the value of the data item. For this approach to be effective, participants must understand the process and be capable of providing a value for the improvement. Require supervisors and managers to provide estimates when they are willing and capable of assigning values to the improvement. This approach is especially useful when participants are not fully capable of providing this input, or in situations where supervisors or managers need to confirm or adjust the participant's estimate. Converting data to monetary value is very important in the ROI model and is absolutely necessary to determine the monetary benefits from a training program. The process is challenging, particularly with the conversion of soft data, but it can be accomplished methodically using one or more of the above techniques.

Tabulating Program Costs
The other part of the equation in a cost/benefit analysis is the cost of the program. Tabulating the costs involves monitoring or developing all of the related costs of the program targeted for the ROI calculation.
Include the following items among the cost components: cost to design and develop the program, possibly prorated over the expected life of the program; cost of all program materials provided to each participant; cost of the instructor/facilitator, including preparation time as well as delivery time; cost of the facilities for the training program; cost of travel, lodging and meals for the participants, if applicable; and salaries, plus employee benefits, of the training function, allocated in some convenient way. In addition, specific costs related to the needs assessment and evaluation should be included, if appropriate. The conservative approach is to include all of these costs so that the total is fully loaded.

Calculating the ROI
Calculate the ROI using the program benefits and costs. The benefit/cost ratio (BCR) is the program benefits divided by the program costs: BCR = program benefits / program costs. (Sometimes this ratio is stated as a cost/benefit ratio, although the formula is the same as the BCR.) The net benefits are the program benefits minus the program costs: net benefits = program benefits - program costs. The ROI uses the net benefits divided by the program costs: ROI (%) = (net benefits / program costs) x 100. Use the same basic formula in evaluating other investments, where the ROI is traditionally reported as earnings divided by investment. The ROI from some training programs is high. For example, in sales training, supervisory training, and managerial training, the ROI can be quite large, frequently over 100 percent, while the ROI value for technical and operator training may be lower.
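To make the arithmetic above concrete, here is a minimal worked sketch in Python of the BCR, net benefits, and ROI formulas; the benefit and cost figures are invented purely for illustration and are not taken from the article.

```python
# Hypothetical worked example of the BCR and ROI formulas above.
# The benefit and cost figures are invented for illustration only.

program_benefits = 240_000.0  # monetary value of Level 4 results (e.g., converted output gains)
program_costs = 80_000.0      # fully loaded program costs (design, delivery, facilities, salaries)

bcr = program_benefits / program_costs              # benefit/cost ratio
net_benefits = program_benefits - program_costs     # benefits minus costs
roi_percent = (net_benefits / program_costs) * 100  # ROI expressed as a percentage

print(f"BCR: {bcr:.2f}")                      # 3.00 -> three units of benefit per unit of cost
print(f"Net benefits: {net_benefits:,.0f}")   # 160,000
print(f"ROI: {roi_percent:.0f}%")             # 200%
```

With these invented figures the program returns 200 percent, which is in the range the text mentions for sales, supervisory, and managerial training.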
Additional Resources to Guide Evaluation of Your Training
Evaluating Online Learning
Also see the Library's blogs related to this topic; in addition to the articles on this current page, the blogs have posts related to this topic and link to numerous free resources. For the category of Training and Development: to round out your knowledge of this Library topic, you may want to review some related topics, available from the link below. Each of the related topics includes free online resources. Also scan the recommended books listed below. They have been selected for their relevance and highly practical nature.

Recommended Books
Basics and General Information
Field Guide to Leadership and Supervision in Business by Carter McNamara, published by Authenticity Consulting, LLC. Provides step-by-step, highly practical guidelines for recruiting, utilizing and evaluating the best employees for your business. Includes guidelines for effectively leading yourself (as a board member or employee), other individuals, groups and organizations. Includes guidelines for avoiding burnout - a very common problem among small business employees. Many of the materials in this Library topic about staffing are adapted from this book.
Field Guide to Leadership and Supervision with Nonprofit Staff by Carter McNamara, published by Authenticity Consulting, LLC. Provides step-by-step, highly practical guidelines for recruiting, utilizing and evaluating the best staff members for your nonprofit. Includes guidelines for effectively leading yourself (as a board member or staff member), other individuals, groups and organizations. Includes guidelines for avoiding burnout - a very common problem among nonprofit staff. Many of the materials in this Library topic about staffing are adapted from this book.
Orienting and Training Employees
The following books are recommended for their highly practical nature and often because they include a wide range of information about this Library topic.

Section 4. Selecting an Appropriate Design for the Evaluation
Why should you choose a design for your evaluation? When should you do so? Who should be involved in choosing a design? How do you select an appropriate design for your evaluation?
When you hear the word experiment, it may call up pictures of people in long white lab coats peering through microscopes. In reality, an experiment is just trying something out to see how or why or whether it works. It can be as simple as putting a different spice in your favorite dish, or as complex as developing and testing a comprehensive effort to improve child health outcomes in a city or state. Academics and other researchers in public health and the social sciences conduct experiments to understand how environments affect behavior and outcomes, so their experiments usually involve people and aspects of the environment. A new community program or intervention is an experiment, too, one that a governmental or community organization engages in to find out a better way to address a community issue. It usually starts with an assumption about what will work, sometimes called a theory of change - but that assumption is no guarantee. Like any experiment, a program or intervention has to be evaluated to see whether it works and under what conditions. In this section, we'll look at some of the ways you might structure an evaluation to examine whether your program is working, and explore how to choose the one that best meets your needs. These arrangements for discovery are known as experimental (or evaluation) designs.

What do we mean by a design for the evaluation?
Every evaluation is essentially a research or discovery project. Your research may be about determining how effective your program or effort is overall, which parts of it are working well and which need adjusting, or whether some participants respond to certain methods or conditions differently from others. If your results are to be reliable, you have to give the evaluation a structure that will tell you what you want to know. That structure - the arrangement of discovery - is the evaluation's design. The design depends on what kinds of questions your evaluation is meant to answer.
Some of the most common evaluation (research) questions: Does a particular program or intervention - whether an instructional or motivational program, improving access and opportunities, or a policy change - cause a particular change in participants' or others' behavior, in physical or social conditions, in health or development outcomes, or in other indicators of success? What component(s) and element(s) of the program or intervention were responsible for the change? What are the unintended effects of an intervention, and how did they influence the outcomes? If you try a new method or activity, what happens? Will the program that worked in another context, or the one that you read about in a professional journal, work in your community, or with your population, or with your issue? If you want reliable answers to evaluation questions like these, you have to ask them in a way that will show you whether you actually got results, and whether those results were in fact due to your actions or the circumstances you created, or to other factors. In other words, you have to create a design for your research or evaluation to give you clear answers to your questions. We'll discuss how to do that later in the section.

Why should you choose a design for your evaluation?
An evaluation may seem simple: if you can see progress toward your goal by the end of the evaluation period, you're doing OK; if you can't, you need to change. Unfortunately, it's not that simple at all. First, how do you measure progress? Second, if there seems to be none, how do you know what you should change in order to increase your effectiveness? Third, if there is progress, how do you know it was caused by (or contributed to by) your program, and not by something else? And finally, even if you're doing well, how will you decide what you could do better, and what elements of your program can be changed or eliminated without affecting success? A good design for your evaluation will help you answer important questions like these. Some specific reasons for spending the time to design your evaluation carefully include: So your evaluation will be reliable. A good design will give you accurate results. If you design your evaluation well, you can trust it to tell you whether you're actually having an effect, and why. Understanding your program to this extent makes it easier to achieve and maintain success. So you can pinpoint areas you need to work on, as well as those that are successful. A good design can help you understand exactly where the strong and weak points of your program or intervention are, and give you clues as to how they can be further strengthened or changed for the greatest impact. So your results are credible. If your evaluation is designed properly, others will take your results seriously. If a well-designed evaluation shows that your program is effective, you're much more likely to be able to convince others to use similar methods, and to convince funders that your organization is a good investment. So you can identify factors unrelated to what you're doing that have an effect - positive or negative - on your results and on the lives of participants. Participants' histories, crucial local or national events, the passage of time, personal crises, and many other factors can influence the outcome of a program or intervention for better or worse. A good evaluation design can help you to identify these, and either correct for them if you can, or devise methods to deal with or incorporate them.
So you can identify unintended consequences (both positive and negative) and correct for them. A good design can show you all of what resulted from your program or intervention, not just what you expected. If you understand that your work has consequences that are negative as well as positive, or that it has more and/or different positive consequences than you anticipated, you can adjust accordingly. So you'll have a coherent plan and organizing structure for your evaluation. It will be much easier to conduct your evaluation if it has an appropriate design. You'll know better what you need to do in order to get the information you need. Spending the time to choose and organize an evaluation design will pay off in the time you save later and in the quality of the information you get.

When should you choose a design for your evaluation?
Once you've determined your evaluation questions and gathered and organized all the information you can about the issue and ways to approach it, the next step is choosing a design for the evaluation. Ideally, this all takes place at the beginning of the process of putting together a program or intervention. Your evaluation should be an integral part of your program, and its planning should therefore be an integral part of the program planning. That's the ideal; now let's talk about reality. If you're reading this, the chances are probably at least 50-50 that you're connected to an underfunded government agency or to a community-based or non-governmental organization, and that you're planning an evaluation of a program or intervention that's been running for some time - months or even years. Even if that's true, the same guidelines apply. Choose your questions, gather information, choose a design, and then go on through the steps presented in this chapter. Evaluation is important enough that you won't really be accomplishing anything by taking shortcuts in planning it. If your program has a cycle, then it probably makes sense to start your evaluation at the beginning of it - the beginning of a year or a program phase, where all participants are starting from the same place, or from the beginning of their involvement. If that's not possible - if your program has a rolling admissions policy, or provides a service whenever people need it, and participants are all at different points - that can sometimes present research problems. You may want to evaluate the program's effects only with new participants, or with another specific group. On the other hand, if your program operates without a particular beginning and end, you may get the best picture of its effectiveness by evaluating it as it is, starting whenever you're ready. Whatever the case, your design should follow your information gathering and synthesis.

Who should be involved in choosing a design?
If you're a regular Tool Box user, and particularly if you've been reading this chapter, you know that the Tool Box team generally recommends a participatory process involving both research and community partners, including all those with an interest in, or who are affected by, the program, in planning and implementation. Choosing a design for evaluation presents somewhat of an exception to this policy, since scientific or evaluation partners may have a much clearer understanding of what is required to conduct research, and of the factors that may interfere with it.
As we'll see in the how-to part of this section, there are a number of considerations that have to be taken into account to gain accurate information that actually tells you what you want to know. Graduate students generally take courses to gain the knowledge they need to conduct research well, and even some veteran researchers have difficulty setting up an appropriate research design. That doesn't mean a community group can't learn to do it, but rather that the time they would have to spend on acquiring background knowledge might be too great. Thus, it makes the most sense to assign this task (or at the very least its coordination) to an individual or small group with experience in research and evaluation design. Such a person can not only help you choose among possible designs, but explain what each design entails, in time, resources, and necessary skills, so that you can judge its appropriateness and feasibility for your context.

How do you choose a design for your evaluation?
How do you go about deciding what kind of research design will best serve the purposes of your evaluation? The answer to that question involves an examination of four areas: the nature of the research questions you are trying to answer; the challenges to the research, and the ways they can be resolved or reduced; the kinds of research designs that are generally used, and what each design entails; and the possibility of adapting a particular research design to your program or situation - what the structure of your program will support, what participants will consent to, and what your resources and time constraints are. We'll begin this part of the section with an examination of the concerns research designs should address, go on to considering some common designs and how well they address those concerns, and end with some guidelines for choosing a design that will both be possible to implement and give you the information you need about your program. Note: in this part of the section, we're looking at evaluation as a research project. As a result, we'll use the term research in many places where we could just as easily have said, for the purposes of this section, evaluation. Research is more general, and some users of this section may be more concerned with research in general than evaluation in particular.

Concerns research designs should address
The most important consideration in designing a research project - except perhaps for the value of the research itself - is whether your arrangement will provide you with valid information. If you don't design and set up your research project properly, your findings won't give you information that is accurate and likely to hold true in other situations. In the case of an evaluation, that means that you won't have a basis for adjusting what you do to strengthen and improve it. Here's a far-fetched example that illustrates this point. If you took children's heights at age six, then fed them large amounts of a specific food - say carrots - for three years and measured them again at the end of the period, you'd probably find that most of them were considerably taller at nine years than at six. You might conclude that it was eating carrots that made the children taller, because your research design gave you no basis for comparing these children's growth to that of other children. There are two kinds of threats to the validity of a piece of research.
They are usually referred to as threats to internal validity (whether the intervention produced the change) and threats to external validity (whether the results are likely to apply to other people and situations).

Threats to internal validity
These are threats (or alternative explanations) to your claim that what you did caused changes in the direction you were aiming for. They are generally posed by factors operating at the same time as your program or intervention that might have an effect on the issue you're trying to address. If you don't have a way of separating their effects from those of your program, you can't tell whether the observed changes were caused by your work, or by one or more of these other factors. They're called threats to internal validity because they're internal to the study: they have to do with whether your intervention, and not something else, accounted for the difference. There are several kinds of threats to internal validity:
History. Both participants' personal histories (their backgrounds, cultures, experiences, education, etc.) and external events that occur during the research period (a disaster, an election, conflict in the community, a new law) may influence whether or not there's any change in the outcomes you're concerned with.
Maturation. This refers to the natural physical, psychological, and social processes that take place as time goes by. The growth of the carrot-eating children in the example above is a result of maturation, for instance, as might be a decline in risky behavior as someone passes from adolescence to adulthood, the development of arthritis in older people, or participants becoming tired during learning activities toward the end of the day.
The effects of testing or observation on participants. The mere fact of a program's existence, or of their taking part in it, may affect participants' behavior or attitudes, as may the experience of being tested, videotaped, or otherwise observed or measured.
Changes in measurement. An instrument (a blood pressure cuff or a scale, for instance) can change over time, or different ones may not give the same results. By the same token, observers (those gathering information) may change their standards over time, or two or more observers may disagree on the observations.
Regression toward the mean. This is a statistical term that refers to the fact that, over time, the very high and very low scores on a measure (a test, for instance) often tend to drift back toward the average for the group. If you start a program with participants who, by definition, have very low or high levels of whatever you're measuring (reading skill, exposure to domestic violence, particular behavior toward people of other races or backgrounds, etc.), their scores may end up closer to the average over the course of the evaluation period even without any program.
The selection of participants. Those who choose participants may slant their selection toward a particular group that is more or less likely to change than a cross-section of the population from which the group was selected. (A good example is that of employment training programs that get paid according to the number of people they place in jobs. They're more likely to select participants who already have all or most of the skills they need to become employed, and neglect those who have fewer skills, and who therefore most need the service.)
Selection can play a part when participants themselves choose to enroll in a program (self-selection), since those who decide to participate are probably already motivated to make changes. It may also be a matter of chance: members of a particular group may, simply by coincidence, share a characteristic that will set their results on your measures apart from the norm of the population you're drawing from. Selection can also be a problem when two groups being compared are chosen by different standards. We'll discuss this further below when we deal with control or comparison groups.
The loss of data or participants. If too little information is collected about participants, or if too many drop out well before the research period is over, your results may be based on too little data to be reliable. This also arises when two groups are being compared. If their losses of data or participants are significantly different, comparing them may no longer give you valid information.
The nature of change. Often, change isn't steady and even. It can involve leaps forward and leaps backward before it gets to a stable place, if it ever does. (Think of looking at the performance of a sports team halfway through the season. No matter what its record is at that moment, you won't know how well it will finish until the season is over.) Your measurements may take place over too short a period, or come at the wrong times, to track the true course of the change, or lack of change, that's occurring.
A combination of the effects of two or more of these. Two or more of these factors may combine to produce or prevent the changes your program aims to produce. A language-study curriculum that is tested only on students who already speak two or more languages runs into problems with both participants' history (all the students have experience learning languages other than their own) and selection (you've chosen students who are very likely to be successful at language learning).

Threats to external validity
These are factors that affect your ability to apply your research results in other circumstances, and so to increase the chances that your program and its results can be reproduced elsewhere or with other populations. If, for instance, you offer parenting classes only to single mothers, you can't assume, no matter how successful they appear to be, that the same classes will work as well with men. Threats to external validity (or generalizability) may be the result of the interactions of other factors with the program or intervention itself, or may be due to particular conditions of the program.
Interaction of testing or data collection and the program or intervention. An initial test or observation might change the way participants react to the program, making a difference in final outcomes. Since you can't assume that another group will have the same reaction or achieve similar final outcomes as a result, external validity or generalizability of the findings becomes questionable.
Interaction of selection procedures and the program or intervention. If the participants selected or self-selected are particularly sensitive to the methods or purpose of the program, it can't be assumed to be effective with participants who are less sensitive or ready for the program. Parents who've been threatened by the government with the loss of their children due to child abuse may be more receptive to learning techniques for improving their parenting, for example, than parents who are under no such pressure.
The effects of the research arrangements.
Participants may change behavior as a result of being observed, or may react to particular individuals in ways they would be unlikely to react to others. A classic example here is that of a famous baboon researcher, Irven DeVore, who, after years of observing troupes of baboons, realized that they behaved differently when he was there than when he wasn't. Although his intent was to observe their natural behavior, his presence itself constituted an intervention, making the behavior of the baboons he was observing different from that of a troupe that was not observed.
The interference of multiple treatments or interventions. The effects of a particular program can be changed when participants are exposed to it beforehand in a different context, or are exposed to another before or at the same time as the one being evaluated. This may occur when participants are receiving services from different sources, or being treated simultaneously for two or more health issues or other conditions. Given the range of community programs that exist, there are many possibilities here. Adults might be members of a high school completion class while participating in a substance abuse recovery program. A diabetic might be treated with a new drug while at the same time participating in a nutrition and physical activity program to deal with obesity. Sometimes, the sequence of treatments or services in a single program can have the same effect, with one influencing how participants respond to those that follow, even though each treatment is being evaluated separately.

Common research designs
Many books have been written on the subject of research design. While they contain too much material to summarize here, there are some basic designs that we can introduce. The important differences among them come down to how many measurements you'll take, when you will take them, and how many groups of what kind will be involved. Program evaluations generally look for the answers to three basic questions: Was there any change in participants' or others' behavior, in physical or social conditions, or in outcomes or indicators of success during the evaluation period? Was whatever change took place, or the lack of change, caused by your program, intervention, or effort? What, in your program or outside it, actually caused or prevented the change? As we've discussed, changes and improvement in outcomes may have been caused by some or all of your intervention, or by external factors. Participants' or the community's history might have been crucial. Participants may have changed as a result of simply getting older and more mature or more experienced in the world, often an issue when working with children or adolescents. Environmental factors (events, policy change, or conditions in participants' lives) can often facilitate or prevent change as well. Understanding exactly where the change came from, or where the barriers to change reside, gives you the opportunity to adjust your program to take advantage of or combat those factors. If all you had to do was to measure whatever behavior or condition you wanted to influence at the beginning and end of the evaluation, choosing a design would be an easy task. Unfortunately, it's not quite that simple: there are those nasty threats to validity to worry about. We have to keep them in mind as we look at some common research designs. Research designs, in general, differ in one or both of two ways: the number and timing of the measurements they use, and whether they look at single or multiple groups.
We'll look at single-group designs first, then go on to multiple groups. Before we go any further, it is helpful to have an understanding of some basic research terms that we will be using in our discussion. Researchers usually refer to your first measurement(s) or observation(s), the ones you take before you start your program or intervention, as a baseline measure or baseline observation, because it establishes a baseline: a known level to which you compare future measurements or observations. Some other important research terms:
Independent variables are the program itself and/or the methods or conditions that the researcher (in this case, you) wants to evaluate. They're called variables because they can change: you might have chosen (and might still choose) other methods. They're independent because their existence doesn't depend on whether something else occurs: you've chosen them, and they'll stay consistent throughout the evaluation period.
Dependent variables are whatever may or may not change as a result of the presence of the independent variable(s). In an evaluation, your program or intervention is the independent variable. (If you're evaluating a number of different methods or conditions, each of them is an independent variable.) Whatever you're trying to change is the dependent variable. (If you're aiming at change in more than one behavior or outcome, each type of change is a different dependent variable.) They're called dependent variables because changes in them depend on the action of the independent variable, or something else.
Measures are just that: measurements of the dependent variables. They usually refer to procedures that have results that can be translated into numbers, and may take the form of community assessments, observations, surveys, interviews, or tests. They may also count incidents or measure the amount of the dependent variable (number or percentage of children who are overweight or obese, violent crimes per 100,000 population, etc.)
Observations might involve measurement, or they might simply record what happens in specific circumstances: the ways in which people use a space, the kinds of interactions children have in a classroom, the character of the interactions during an assessment. For convenience, researchers often use observation to refer to any kind of measurement, and we'll use the same convention here.

Pre- and post- single-group design
The simplest design is also probably the least accurate and desirable: the pre (before) and post (after) measurement or observation. This consists of simply measuring whatever you're concerned with in one group (the infant mortality rate, unemployment, water pollution), applying your intervention to that group or community, and then observing again. This type of design assumes that a difference in the two observations will tell you whether there was a change over the period between them, and also assumes that any positive change was caused by the intervention. In most cases, a pre-post design won't tell you much, because it doesn't really address any of the research concerns we've discussed. It doesn't account for the influence of other factors on the dependent variable, and it doesn't tell you anything about trends of change or the progress of change during the evaluation period: only where participants were at the beginning and where they were at the end.
It can help you determine whether certain kinds of things have happened (whether there's been a reduction in the level of educational attainment or the amount of environmental pollution in a river, for instance), but it won't tell you why. Despite its limitations, taking measures before and after the intervention is far better than taking no measures. Even looking at something as seemingly simple to measure pre and post as blood pressure (in a heart disease prevention program) is questionable. Blood pressure may be lower at the final observation than at the initial one, but that tells you nothing about how much it may have gone up and down in between. If the readings were taken by different people, the change may be due in part to differences in their skill, or to how relaxed each was able to make participants feel. Familiarity with the program could also have reduced most participants' blood pressure from the pre- to the post-measurement, as could some other factor that wasn't specifically part of the independent variable being evaluated.

Interrupted time series design with a single group (simple time series)
An interrupted time series uses repeated measures before and after delayed implementation of the independent variable (e.g., the program) to help rule out other explanations. This relatively strong design, with comparisons within the group, addresses most threats to internal validity. The simplest form of this design is to take repeated observations, implement the program or intervention, and then observe a number of times during the evaluation period, including at the end. This method is a great improvement over the pre- and post- design in that it tracks the trend of change, and can therefore help show whether it was actually the independent variable that caused any change. It can also help to identify the influence of external factors, such as when the dependent variable shows significant change before the intervention is implemented. Another possibility for this design is to implement more than one independent variable, either by trying two or more, one after another (often with a break in between), or by adding each to what came before. This gives a picture not only of the progress of change, but can show very clearly what causes change. That gives an evaluator the opportunity not only to adjust the program, but to drop elements that have no effect. There are a number of variations on the interrupted time series theme, including varying the observation times, implementing the independent variable repeatedly, and implementing one independent variable, then another, then both together to evaluate their interaction. In any variety of interrupted time series design, it's important to know what you're looking for. In an evaluation of a traffic fatality control program in the United Kingdom that focused on reducing drunk driving, monthly measurements seemed to show only a small decline in fatal accidents. When the statistics for weekends, when there were most likely to be drunk drivers on the road, were separated out, however, they showed that the weekend fatality rate dropped sharply with the implementation of the program, and stayed low thereafter. Had the researchers not realized that that might be the case, the program might have been stopped, and the weekend accident rate would not have been reduced.
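As a rough illustration of the single-group interrupted time series logic described above, here is a minimal Python sketch that compares the pre- and post-intervention segments of a monthly series; all figures, and the idea of looking at weekend counts as in the UK example, are hypothetical and purely for illustration.

```python
# Minimal sketch of a single-group interrupted time series comparison.
# The monthly weekend fatality counts below are invented for illustration only.

weekend_fatalities = [41, 39, 44, 40, 42, 43, 28, 25, 26, 24, 27, 25]
intervention_month = 6  # index of the first observation after the program starts

pre = weekend_fatalities[:intervention_month]
post = weekend_fatalities[intervention_month:]

pre_mean = sum(pre) / len(pre)
post_mean = sum(post) / len(post)

print(f"Mean before intervention: {pre_mean:.1f}")
print(f"Mean after intervention:  {post_mean:.1f}")
print(f"Apparent change:          {post_mean - pre_mean:+.1f}")

# A fuller analysis would also inspect the trend within each segment
# (was the series already falling before the intervention?), since a
# pre-existing trend is exactly the kind of alternative explanation
# this design is meant to expose.
```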
Interrupted time series design with multiple groups (multiple baseline/time series)
This has the same possibilities as the single time series design, with the added wrinkle of using repeated measures with one or more other groups (so-called multiple baselines). By using multiple baselines (groups), the external validity or generality of the findings is enhanced: we can see if the effects occur with different groups or under different conditions. This multiple time series design, typically a staggered introduction of the intervention with different groups or communities, gives the researcher more opportunities: You can try a method or program with two or more groups from the same population. You can try a particular method or program with different populations, to see if it's effective with others. You can vary the timing or intensity of an intervention with different groups. You can test different interventions at the same time. You can try the same two or more interventions with each of two groups, but reverse their order, to see if sequencing makes any difference. Again, there are more variations possible here.

Control group design
A common way to evaluate the effects of an independent variable is to use a control group. This group is usually similar to the participant group, but either receives no intervention at all, or receives a different intervention with the same goal as that offered to the participant group. A control group design is usually the most difficult to set up (you have to find appropriate groups, observe both on a regular basis, etc.), but it is generally considered to be the most reliable. The term control group comes from the attempt to control outside and other influences on the dependent variable. If everything about the two groups except their exposure to the program being evaluated averages out to be the same, then any differences in results must be due to that exposure. The term comparison group is more modest: it typically refers to a community matched for similar levels of the problem/goal and relevant characteristics of the community or population (e.g., education, poverty). The gold standard here is the randomized control group, one that is selected totally at random, either from among the population the program or intervention is concerned with (those at risk for heart disease, unemployed males, young parents) or, if appropriate, the population at large. A random group eliminates the problems of selection we discussed above, as well as issues that might arise from differences in culture, race, or other factors. A control group that's carefully chosen will have the same characteristics as the intervention group (the focus of the evaluation). If, for instance, the two groups come from the same pool of people with a particular health condition, and are chosen at random either to be treated in the conventional way or to try a new approach, it can be assumed that, since they were chosen at random from the same population, both groups will be subject, on average, to the same outside influences, and will have the same diversity of backgrounds. Thus, if there is a significant difference in their results, it is fairly safe to assume that the difference comes from the independent variable (the type of intervention), and not something else. The difficulty for governmental and community-based organizations is to find or create a randomized control group. If the program has a long waiting list, it may be able to create a control group by selecting at random those who will receive the intervention first, as in the sketch below.
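To make the waiting-list idea concrete, here is a minimal, hypothetical Python sketch of random assignment from a waiting list into an immediate-intervention group and a wait-list control group; the participant identifiers and group size are invented for illustration only.

```python
import random

# Hypothetical waiting list; in practice this would come from program records.
waiting_list = ["p01", "p02", "p03", "p04", "p05", "p06", "p07", "p08", "p09", "p10"]

random.seed(42)  # fixed seed so the assignment can be reproduced and audited
shuffled = random.sample(waiting_list, k=len(waiting_list))  # random order, original list unchanged

half = len(shuffled) // 2
intervention_group = shuffled[:half]  # enter the program now
control_group = shuffled[half:]       # stay on the waiting list, used for comparison

print("Intervention:", sorted(intervention_group))
print("Control:     ", sorted(control_group))
```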
Drawing a control group from a waiting list in this way creates its own problems, in that people often drop off waiting lists out of frustration or for other reasons. Being included in the evaluation may help to keep them, on the other hand, by giving them a closer connection to the program and making them feel valued. An ESOL (English as a Second or Other Language) program in Boston with a three-year waiting list addressed the problem by offering those on the waiting list a different option. They received videotapes to use at home, along with biweekly tutoring by advanced students and graduates of the program. Thus, they became a comparison group with a somewhat different intervention that, as expected, was less effective than the program itself, but was more effective than none, and it kept them on the waiting list. It also gave them a head start once they got into the classes, with many starting at a middle rather than at a beginning level. When there's no waiting list or similar group to draw from, community organizations often end up using a comparison group: one composed of participants in another place or program, and whose members' characteristics, backgrounds, and experience may or may not be similar to those of the participant group. That circumstance can raise some of the same problems related to selection seen when there is no control group. If the only potential comparisons involve very different groups, it may be better to use a design, such as an interrupted time series design, that doesn't involve a control group at all, where the comparison is within (not between) groups. Groups may look similar, but may differ in an important way. Two groups of participants in a substance abuse intervention program, for instance, may have similar histories, but if one program is voluntary and the other is not, the results aren't likely to be comparable. One group will probably be more motivated and less resentful than the other, and composed of people who already know they have a potential problem. The motivation and determination of their participants, rather than the effectiveness of the two programs, may influence the amount of change observed. This issue may come up in a single-group design as well. A program that may, on average, seem to be relatively ineffective may prove, on close inspection, to be quite effective with certain participants: those of a specific educational background, for instance, or with particular life experiences. Looking at results with this in mind can be an important part of an evaluation, and give you valuable and usable information.

Choosing a design
This section's discussion of research designs is in no way complete. It's meant to provide an introduction to what's available. There are literally thousands of books and articles written on this topic, and you'll probably want more information. There are also a number of statistical methods that can compensate for less-than-perfect designs; for instance, few community groups have the resources to assemble a randomized control group, or to implement two or more similar programs to see which works better. Given this, the material that follows is meant only as broad guidelines. We don't attempt to be specific about what kind of design you need in what circumstances, but only try to suggest some things to think about in different situations.
Help is available from a number of directions: much can be found on the Internet (see the Resources part of this section for a few sites); there are numerous books and articles (the classic text on research design is also cited in Resources); and universities are a great resource, both through their libraries and through faculty and graduate students who might be interested in what you're doing and be willing to help with your evaluation. Use any and all of these to find what will work best for you. Funders may also be willing either to provide technical assistance for evaluations, or to include money in your grant or contract specifically to pay for a professional evaluation. Your goal in evaluating your effort is to get the most reliable and accurate information possible, given your evaluation questions, the nature of your program, what your participants will consent to, your time constraints, and your resources. The important thing here is not to set up a perfect research study, but to design your evaluation to get real information, and to be able to separate the effects of external factors from the effects of your program. So how do you go about choosing the best design that will be workable for you? The steps are in the first sentence of this paragraph.

Consider your evaluation questions
What do you need to know? If the intent of your evaluation is simply to see whether something specific happened, it's possible that a simple pre-post design will do. If, as is more likely, you want to know both whether change has occurred and, if it has, whether it has in fact been caused by your program, you'll need a design that helps to screen out the effects of external influences and participants' backgrounds. For many community programs, a control or comparison group is helpful, but not absolutely necessary. Think carefully about the frequency and timing of your observations and the amount and different kinds of information you can collect. With repeated measures, you can get quite an accurate picture of the effectiveness of your program from a simple time series design. Single-group interrupted time series designs, which are often the most workable for small organizations, can give you a very reliable evaluation if they're structured well. That generally means obtaining multiple baseline observations (enough to set a trend) before the program begins; observing often and documenting your observations carefully (often with both quantitative data, expressed in numbers, and qualitative data, expressed in records of incidents and of what participants did and said); and including during-intervention and follow-up observations to see whether effects are maintained. In many of these situations, a multiple-group interrupted time series design is quite possible, often in the form of a naturally occurring experiment. If your program includes two or more groups or classes, each working toward the same goals, you have the opportunity to stagger the introduction of the intervention across the groups. This comparison with (and across) groups allows you to screen out such factors as the facilitator's ability and community influences (assuming all participants come from the same general population). You could also try different methods or time sequences, to see which works best. In some cases, the real question is not whether your method or program works, but whether it works better than other methods or programs you could be using.
Teaching a skill (for instance, employment training, parenting, diabetes management, conflict resolution) often falls into this category. Here, you need a comparison of some sort. While evaluations of some of these (medical treatment, for example) may require a control group, others can be compared to data from the field, to published results of other programs, or, by using community-level indicators, to measurements in other communities. There are community programs where the bottom line is very simple. If you're working to control water pollution, your main concern may be the amount of pollution coming out of effluent pipes, or the amount found in the river. Your only measure of success may be keeping pollution below a certain level, which means that regular monitoring of water quality is the only evaluation you need. There are probably relatively few community programs where evaluation is this easy (you might, for instance, want to know which of your pollution-control activities is most effective), but if yours is one, a simple design may be all you need.

Consider the nature of your program
What does your program look like, and what is it meant to do? Does it work with participants in groups, or individually, for instance? Does it run in cycles: classes or workshops that begin and end on certain dates, or a time-limited program that participants go through only once? Or can participants enter whenever they are ready and stay until they reach their goals? How much of the work of the program is dependent on staff, and how much do participants do on their own? How important is the program context: the way staff, participants, and others treat one another, the general philosophy of the program, the physical setting, the organizational culture? (The culture of an organization consists of accepted and traditional ways of doing things, patterns of relationships, how people dress, how they act toward and communicate with one another, etc.) If you work with participants in groups, a multiple-group design, either interrupted time series or control group, might be easier to use. If you work with participants individually, perhaps a simple time series or a single-group design would be appropriate. If your program is time-limited, either one-time-only or with sessions that follow one another, you'll want a design that fits into the schedule, and that can give you reliable results in the time you have. One possibility is to use a multiple-group design, with groups following one another session by session. The program for each group might be adjusted, based on the results for the group before, so that you could test new ideas each session. If your program has no clear beginning and end, you're more likely to need a single-group design that considers participants individually, or by the level of their baseline performance. You may also have to compensate for the fact that participants may be entering the program at different levels, or with different goals. A proverb says that you never step in the same river twice, because the water that flows past a fixed point is always changing. The same is true of most community programs. Someone coming into a program at a particular time may have a totally different experience than a similar person entering at a different time, even though the operation of the program is the same for both.
A particular participant may encourage everyone around her, and create an overwhelmingly positive atmosphere different from that experienced by participants who enter the program after she has left, for example. It's very difficult to control for this kind of difference over time, but it's important to be aware that it can, and often does, exist, and may affect the results of a program evaluation. If the organizational or program context and culture are important, then you'll probably want to compare your results with participants to those in a control group in a similar situation where those factors are different, or are ignored. There is, of course, a huge range of possibilities here: nearly any design can be adapted to nearly any situation in the right circumstances. This material is meant only to give you a sense of how to start thinking about the issue of design for an evaluation.

Consider what your participants (and staff) will consent to
In addition to the effect that it might have on the results of your evaluation, you might find that a lot of observation can raise protests from participants who feel their privacy is threatened, or from already-overworked staff members who see adding evaluation to their job as just another burden. You may be able to overcome these obstacles, or you may have to compromise (fewer or different kinds of observations, a less intrusive design) in order to be able to conduct the evaluation at all. There are other reasons that participants might object to observation, or at least intense observation. Potential for embarrassment, a desire for secrecy (to keep their participation in the program from family members or others), even self-protection (in the case of domestic violence, for instance) can contribute to unwillingness to be a participant in the evaluation. Staff members may have some of the same concerns. There are ways to deal with these issues, but there's no guarantee that they'll work. One is to inform participants at the beginning about exactly what you're hoping to do, listen to their objections, and meet with them (more than once, if necessary) to come up with a satisfactory approach. Staff members are less likely to complain if they're involved in planning the evaluation, and thus have some say over the frequency and nature of observations. The same is true for participants. Treating everyone's concerns seriously and including them in the planning process can go a long way toward assuring cooperation.

Consider your time constraints
As we mentioned above, the important thing here is to choose a design that will give you reasonably reliable information. In general, your design doesn't have to be perfect, but it does have to be good enough to give you a reasonably good indication that changes are actually taking place, and that they are the result of your program. Just how precise you can be is at least partially controlled by the limits placed on your time by funding, program considerations, and other factors. Time constraints may also be imposed. Some of the most common:
Program structure. An evaluation may make the most sense if it's conducted to correspond with a regular program cycle.
Funding. If you are funded only for a pilot project, for example, you'll have to conduct your evaluation within the time span of the funding, and soon enough to show that your program is successful enough to be refunded. A time schedule for evaluation may be part of your grant or contract, especially if the funder is paying for it.
Participants' schedules.
Participants' schedules. A rural education program may need to stop for several months a year to allow participants to plant and tend crops, for instance.
The seriousness of the issue. A delay in understanding whether a violence prevention program is effective may cost lives.
The availability of professional evaluators. Perhaps the evaluation team can only work during a particular time frame.
Consider your resources
Strategic planners often advise that groups and organizations consider resources last: otherwise they'll reject many good ideas because they seem too expensive or difficult, rather than trying to find ways to make them work with the resources at hand. Resources include not only money, but also space, materials and equipment, personnel, and skills and expertise. Often, one of these can substitute for another: a staff person with experience in research can take the place of money that would be used to pay a consultant, for example. A partnership with a nearby university could get you not only expertise, but perhaps needed equipment as well. The lesson here is to begin by determining the best design possible for your purposes, without regard to resources. You may have to settle for somewhat less, but if you start by aiming for what you want, you're likely to get a lot closer to it than if you assume you can't possibly get it.
In Summary
The way you design your evaluation research will have a lot to do with how accurate and reliable your results are, and how well you can use them to improve your program or intervention. The design should be one that best addresses key threats to internal validity (whether the intervention caused the change) and external validity (the ability to generalize your results to other situations, communities, and populations). Common research designs, such as interrupted time series or control-group designs, can be adapted to various situations and combined in various ways to create a design that is both appropriate and feasible for your program. It may be necessary to seek help from a consultant, a university partner, or simply someone with research experience to identify a design that fits your needs. A good design will address your evaluation questions, and will take into consideration the nature of your program, what program participants and staff will agree to, your time constraints, and the resources you have available for evaluation. It often makes sense to consider resources last, so that you won't reject good ideas because they seem too expensive or difficult. Once you've chosen a design, you can often find a way around a lack of resources to make it a reality.
Kirkpatrick's Four-Level Training Evaluation Model
Evaluate the effectiveness of your training at four levels. If you deliver training for your team or your organization, then you probably know how important it is to measure its effectiveness. After all, you don't want to spend time or money on training that doesn't provide a good return. This is where Kirkpatrick's Four-Level Training Evaluation Model can help you objectively analyze the effectiveness and impact of your training, so that you can improve it in the future. In this article, we'll look at each of the four levels of the Kirkpatrick model, and we'll examine how you can apply the model to evaluate training. We'll also look at some of the situations where it may not be useful.
The Four Levels
Donald Kirkpatrick, Professor Emeritus at the University of Wisconsin and past president of the American Society for Training and Development (ASTD), first published his Four-Level Training Evaluation Model in 1959, in the US Training and Development Journal. The model was then updated in 1975, and again in 1994, when he published his best-known work, Evaluating Training Programs. The four levels are Reaction, Learning, Behavior, and Results. Let's look at each level in greater detail.
Level 1: Reaction
This level measures how your trainees (the people being trained) reacted to the training. Obviously, you want them to feel that the training was a valuable experience, and you want them to feel good about the instructor, the topic, the material, its presentation, and the venue. It's important to measure reaction, because it helps you understand how well the training was received by your audience. It also helps you improve the training for future trainees, including identifying important areas or topics that are missing from the training.
Level 2: Learning
At level 2, you measure what your trainees have learned. How much has their knowledge increased as a result of the training? When you planned the training session, you hopefully started with a list of specific learning objectives: these should be the starting point for your measurement. Keep in mind that you can measure learning in different ways depending on these objectives, and depending on whether you're interested in changes to knowledge, skills, or attitude. It's important to measure this, because knowing what your trainees are learning, and what they aren't, will help you improve future training.
Level 3: Behavior
At this level, you evaluate how far your trainees have changed their behavior, based on the training they received. Specifically, this looks at how trainees apply the information. It's important to realize that behavior can only change if conditions are favorable. For instance, imagine you've skipped measurement at the first two Kirkpatrick levels and, when looking at your group's behavior, you determine that no behavior change has taken place. You might therefore assume that your trainees haven't learned anything and that the training was ineffective. However, just because behavior hasn't changed, it doesn't mean that trainees haven't learned anything. Perhaps their boss won't let them apply new knowledge. Or maybe they've learned everything you taught, but they have no desire to apply the knowledge themselves.
Level 4: Results
At this level, you analyze the final results of your training. This includes outcomes that you or your organization have determined to be good for business, good for the employees, or good for the bottom line.
Reprinted with permission of Berrett-Koehler Publishers, Inc., San Francisco, CA, from Evaluating Training Programs, © 1996 by Donald L. Kirkpatrick & James D. Kirkpatrick. All rights reserved.
Make sure that you plan your training effectively. Use our articles on Training Needs Assessment, Gagne's Nine Levels of Learning, and 4MAT to help you do this.
How to Apply the Model
Level 1: Reaction
Start by identifying how you'll measure reaction. Consider addressing these questions: Did the trainees feel that the training was worth their time? Did they think that it was successful? What were the biggest strengths of the training, and the biggest weaknesses? Did they like the venue and presentation style? Did the training session accommodate their personal learning styles? Next, identify how you want to measure these reactions; a short sketch of one way to score a reaction survey follows below.
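As a rough illustration of turning reaction questionnaires into something you can compare across sessions, here is a minimal Python sketch. The question labels, the 1-to-5 rating scale, the responses, and the "review" threshold are all hypothetical; the Kirkpatrick model itself does not prescribe any particular scoring scheme.
```python
# Minimal sketch: averaging Likert-style reaction ratings per question.
# Question labels, the 1-5 scale, and the responses are hypothetical.

from statistics import mean

questions = ["worth_my_time", "instructor", "materials", "venue"]

responses = [
    # One dict per returned questionnaire; ratings run from 1 (poor) to 5 (excellent).
    {"worth_my_time": 5, "instructor": 4, "materials": 3, "venue": 4},
    {"worth_my_time": 4, "instructor": 5, "materials": 2, "venue": 4},
    {"worth_my_time": 5, "instructor": 4, "materials": 3, "venue": 5},
]

for q in questions:
    avg = mean(r[q] for r in responses)
    flag = "  <- review this area" if avg < 3.5 else ""
    print(f"{q}: {avg:.1f}{flag}")
```
Averages like these are only a starting point; open comments and verbal feedback usually explain why a particular rating is low.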
To gather this information you'll typically use employee satisfaction surveys or questionnaires; however, you can also watch trainees' body language during the training, and get verbal feedback by asking trainees directly about their experience. Once you've gathered this information, look at it carefully. Then think about what changes you could make, based on your trainees' feedback and suggestions.
Level 2: Learning
To measure learning, start by identifying what you want to evaluate. (These things could be changes in knowledge, skills, or attitudes.) It's often helpful to measure these areas both before and after training. So, before training commences, test your trainees to determine their knowledge, skill levels, and attitudes. Once training is finished, test your trainees a second time to measure what they have learned, or measure learning with interviews or verbal assessments.
Level 3: Behavior
It can be challenging to measure behavior effectively. This is a longer-term activity that should take place weeks or months after the initial training. Consider these questions: Did the trainees put any of their learning to use? Are trainees able to teach their new knowledge, skills, or attitudes to other people? Are trainees aware that they've changed their behavior? One of the best ways to measure behavior is to conduct observations and interviews over time. Also, keep in mind that behavior will only change if conditions are favorable. For instance, effective learning could have taken place in the training session, but if the overall organizational culture isn't set up for behavior change, the trainees might not be able to apply what they've learned. Alternatively, trainees might not receive support, recognition, or reward for their behavior change from their boss. So, over time, they disregard the skills or knowledge that they have learned, and go back to their old behaviors.
Level 4: Results
Of all the levels, measuring the final results of the training is likely to be the most costly and time-consuming. The biggest challenges are identifying which outcomes, benefits, or final results are most closely linked to the training, and coming up with an effective way to measure these outcomes over the long term. Here are some outcomes to consider, depending on the objectives of your training: increased employee retention, increased production, higher morale, reduced waste, increased sales, higher quality ratings, increased customer satisfaction, and fewer staff complaints.
Considerations
Although Kirkpatrick's Four-Level Training Evaluation Model is popular and widely used, there are a number of considerations that need to be taken into account when using it. One issue is that it can be time-consuming and expensive to use levels 3 or 4 of the model, so it's not practical for all organizations and situations. This is especially the case for organizations that don't have a dedicated training or human resource department, or for one-off training sessions or programs. In a similar way, it can be expensive and resource-intensive to wire up an organization to collect data with the sole purpose of evaluating training at levels 3 and 4. (Whether or not this is practical depends on the systems already in place within the organization.) The model also assumes that each level is more important than the one before it, and that all levels are linked.
For instance, it implies that Reaction is ultimately less important than Results, and that reactions must be positive for learning to take place. In practice, this may not be the case. Most importantly, organizations change in many ways, and behaviors and results change depending on these changes as well as on training. For example, measurable improvements in areas like retention and productivity could result from the arrival of a new boss or a new computer system, rather than from training. Kirkpatrick's model is valuable for trying to evaluate training in a scientific way; however, so many variables can be changing in fast-moving organizations that analysis at level 4 can be limited in usefulness.
Key Points
The Kirkpatrick Four-Level Training Evaluation Model helps trainers to measure the effectiveness of their training in an objective way. The model was originally created by Donald Kirkpatrick in 1959, and has since gone through several updates and revisions. The four levels are Reaction, Learning, Behavior, and Results. By going through and analyzing each of these four levels, you can gain a thorough understanding of how effective your training was, and how you can improve it in the future. Bear in mind that the model isn't practical in all situations, and that measuring the effectiveness of training with it can be time-consuming and use a lot of resources. A short sketch of comparing a business outcome before and after training, one way of approaching a level 4 analysis, follows below.
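To make the level 4 challenge a little more concrete, here is a minimal Python sketch that compares a single business outcome (monthly customer-satisfaction scores, in this hypothetical example) in the months before and after a training session. The metric, the monthly values, and the training month are all made up, and, as the Considerations above note, a real analysis would also have to ask whether other organizational changes could explain the difference.
```python
# Minimal sketch: comparing one outcome metric before and after training.
# The metric, the monthly values, and the training month are hypothetical.

from statistics import mean

# Monthly customer-satisfaction scores (0-100), oldest first.
monthly_scores = [71, 73, 70, 72, 74, 78, 79, 77, 80]
training_month_index = 5  # training delivered before the 6th month's scores

before = monthly_scores[:training_month_index]
after = monthly_scores[training_month_index:]

print(f"mean before training: {mean(before):.1f}")
print(f"mean after training:  {mean(after):.1f}")
print(f"difference:           {mean(after) - mean(before):+.1f}")

# Caution: a difference here does not prove the training caused it.
# New managers, new systems, or seasonal effects could produce the same
# pattern, which is why level 4 results need careful interpretation.
```
Comparing the same metric over the same months in a similar team that was not trained (a simple control group) is one way to strengthen this kind of comparison.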
