Created on Tuesday, 19 Mar 2019. Posted in Methods
Health Outcomes Insights offers a questionnaire design review service (QuestReview™) that provides a detailed appraisal and report, based on best practice, of any weaknesses in the questionnaire's design. This is an excellent service for researchers who want to design surveys but lack the experience.
The IDEAL Collaboration is an initiative to improve the quality of research in surgery.
Important tip: recent HTA-commissioned calls have mentioned this framework!
This report is the output from a meeting in London in June 2015 that was funded by The Health Foundation, The Medical Research Council, the National Institute for Health Research and Universities UK. It brings together current approaches and methodologies for the evaluation of complex interventions in healthcare, and considers future challenges. The report covers eight themes, including patient-reported outcome measures, healthcare equity, both qualitative and quantitative methods, and evidence synthesis.
Created on Tuesday, 24 May 2016. Posted in Methods
Entitled ‘Challenges, solutions and future directions in the evaluation of service innovations in health care and public health’, the book is the result of a partnership between the MRC, the National Institute for Health Research (NIHR) and the Health Foundation, together with Universities UK and Academy Health.
“This book will enable researchers, healthcare providers and policy makers to discover the most up-to-date methodologies for evaluation research, to prioritise these methods and implement them in the most appropriate manner to transform the provision of healthcare and public health.”
Created on Tuesday, 10 May 2016. Posted in Methods
The Adaptive Designs Working Group of the MRC Network of Hubs for Trials Methodology Research has produced an online guide on why not to use A+B designs.
The document summarises the main arguments for why 3+3 and similar rule-based A+B designs are inappropriate for phase I dose escalation studies. These designs are still widely used even though superior model-based designs, such as the continual reassessment method (CRM), are available.
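To make the critique concrete, here is a minimal sketch of the rule-based logic behind the classic 3+3 design. The function name and interface are illustrative, not from the guide; the point is that the decision depends only on crude counts at the current dose, which is exactly the rigidity the guide argues against:

```python
def three_plus_three_decision(n_treated, n_dlt):
    """Rule-based decision for one dose level in a classic 3+3 design.

    n_treated: patients treated at this dose so far (3 or 6)
    n_dlt: dose-limiting toxicities (DLTs) observed among them
    Returns 'escalate', 'expand' (treat 3 more at the same dose), or 'stop'.
    """
    if n_treated == 3:
        if n_dlt == 0:
            return "escalate"   # 0/3 DLTs: move up to the next dose
        if n_dlt == 1:
            return "expand"     # 1/3 DLTs: treat 3 more at this dose
        return "stop"           # >=2/3 DLTs: dose exceeds the MTD
    if n_treated == 6:
        if n_dlt <= 1:
            return "escalate"   # <=1/6 DLTs: dose tolerated, escalate
        return "stop"           # >=2/6 DLTs: stop; MTD is the dose below
    raise ValueError("3+3 cohorts contain 3 or 6 patients")
```

A model-based design such as the CRM would instead fit a dose-toxicity curve to all accumulated data and choose the dose closest to a target toxicity rate, rather than following these fixed counting rules.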
Guest edited by Professor David Torgerson, this new series reviews previous stepped wedge studies, explores key issues of design, and forms a comprehensive overview of the current state of knowledge of the design.
There is often a tension between policy makers wishing to implement a novel intervention and researchers wanting time to do a controlled evaluation. One trial design that can address this tension is the stepped wedge randomised trial. In a stepped wedge trial, all groups or clusters of participants eventually receive the intervention, but the implementation is staggered in a random fashion, allowing a robust evaluation to take place. Recently there has been an upsurge in interest in the design, which is not yet well understood. This series of papers reviews previous stepped wedge studies and explores key issues of design, forming a comprehensive overview of the current state of knowledge of the design.
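The staggered, randomised rollout described above can be sketched as a simple randomisation of clusters to crossover steps. This is a hypothetical helper for illustration only (the function name and interface are assumptions, not part of the series): every cluster starts under control and switches to the intervention at its randomly assigned step, so by the final step all clusters have crossed over.

```python
import random

def stepped_wedge_schedule(clusters, n_steps, seed=None):
    """Randomly assign clusters to crossover steps for a stepped wedge trial.

    Each cluster starts in the control condition and switches to the
    intervention at its assigned step (1..n_steps); by the last step,
    every cluster has received the intervention.
    """
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)  # the random element: which cluster crosses over when
    # Split the randomised order into n_steps roughly equal waves.
    return {cluster: 1 + i * n_steps // len(order)
            for i, cluster in enumerate(order)}

# Example: six hospitals crossing over in three waves of two.
schedule = stepped_wedge_schedule(["A", "B", "C", "D", "E", "F"], 3, seed=1)
```

The key evaluation feature is visible in the schedule: at each step some clusters are already under the intervention while others still act as controls, which is what allows a controlled comparison even though everyone eventually receives the intervention.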
Created on Wednesday, 01 Apr 2015. Posted in Methods
Evaluation is an essential part of quality improvement and, when done well, it can help to solve problems, inform decision making and build knowledge. While evaluation comes in many shapes and sizes, its key purpose is to help us to develop a deeper understanding of how best to improve health care.
People involved in quality improvement often ask us about how to approach evaluation. Inspired by the most commonly asked questions, this guide is intended to assist those new to evaluation by suggesting methodological and practical considerations and providing resources to support further learning.
The Health Foundation guide is not prescriptive or step-by-step as people and organisations will have very diverse evaluation needs. Instead, it aims to stimulate your thinking and support your plans for an evaluation that delivers what you need.