Title
Assessing and evaluating Department of Defense efforts to inform, influence, and persuade : desk reference / Christopher Paul, Jessica Yeats, Colin P. Clarke, Miriam Matthews.
Author
Paul, Christopher, 1971-
Publication
Santa Monica, CA : RAND, [2015]

Items in the Library & Off-site

1 Item

  • Format: Text
  • Access: Request in advance
  • Call Number: UA23 .P36 2015
  • Item Location: Off-site

Details

Additional Authors
  • Yeats, Jessica M.
  • Clarke, Colin P.
  • Matthews, Miriam (Behavioral scientist)
  • National Defense Research Institute (U.S.), sponsoring body.
  • National Defense Research Institute (U.S.), issuing body.
  • United States. Department of Defense. Office of the Secretary of Defense, sponsoring body.
Description
xxx, 393 pages : color illustrations ; 26 cm
Alternative Title
Assessing and evaluating DoD efforts to inform, influence, and persuade : desk reference
Subject
  • United States. Department of Defense > Public relations
  • Psychological warfare > United States > Evaluation
  • Information warfare > United States > Evaluation
  • Propaganda, American
Note
  • "National Defense Research Institute."
  • "RR-809/1-OSD"--Page 4 of cover.
Bibliography (note)
  • Includes bibliographical references (pages 349-364) and index.
Processing Action (note)
  • committed to retain
Contents
  • Machine generated contents note: Current DoD Assessment Practice -- Current DoD Assessment Guidance -- Field Manual 3-53: Military Information Support Operations -- Field Manual 3-13: Inform and Influence Activities -- Joint Publication 5-0: Joint Operation Planning -- Integrating Best Practices into Future DoD IIP Assessment Efforts: Operational Design and the Joint Operation Planning Process as Touchstones -- Operational Design -- Joint Operation Planning Process -- What RAND Was Asked to Do -- Methods and Approach -- Different Sectors Considered -- The Most-Informative Results for DoD IIP Efforts Were at the Intersection of Academic Evaluation Research and Public Communication -- DoD IIP Efforts Can Learn from Both Success and Failure -- Outline of This Report -- The Language of Assessment -- Three Motivations for Evaluation and Assessment: Planning, Improvement, and Accountability -- Three Types of Evaluation: Formative, Process, and Summative -- Nesting: The Hierarchy of Evaluation -- Assessment to Support Decisionmaking
  • Users of Evaluation -- Requirements for the Assessment of DoD Efforts to Inform, Influence, and Persuade -- Requirements Regarding Congressional Interest and Accountability -- Requirement to Improve Effectiveness and Efficiency -- Requirement to Aggregate IIP Assessments with Campaign Assessments -- Summary -- Effective Assessment Requires Clear, Realistic, and Measurable Goals -- Effective Assessment Starts in Planning -- Effective Assessment Requires a Theory of Change or Logic of the Effort Connecting Activities to Objectives -- Evaluating Change Requires a Baseline -- Assessment over Time Requires Continuity and Consistency -- Assessment Is Iterative -- Assessment Requires Resources -- Summary -- Building Organizations That Value Research -- Building an Assessment Culture: Education, Resources, and Leadership Commitment -- Evaluation Capacity Building -- Don't Fear Bad News -- Promoting Top-to-Bottom Support for Assessment -- Secure Both Top-Down and Bottom-Up Buy-In -- Encourage Participatory Evaluation and Promote Research Throughout the Organization -- Engage Leadership and Stakeholders
  • Explain the Value of Research to Leaders and Stakeholders -- Foster a Willingness to Learn from Assessment -- Preserving Integrity, Accountability, and Transparency in Assessment -- In-House Versus Outsourced Assessment -- Tension Between Collaboration and Independence: The Intellectual Firewall -- Assessment Time Horizons, Continuity, and Accountability -- Challenges to Continuity: Rotations and Turnover -- Improving Continuity: Spreading Accountability Across Rotations -- Longer Assessment Timelines, Continuous Measures, and Periodicity of Assessment -- Preserving Integrity, Accountability, and Transparency in Data Collection -- Cultivating Local Research Capacity -- The Local Survey Research Marketplace -- Organizing for Assessment Within DoD -- Mission Analysis: Where a Theory of Change/Logic of the Effort Should Become Explicit -- Differences Between Information Operations and Kinetic Operations -- The Need to Standardize and Routinize Processes for IIP Planning and Assessment -- Overcoming a Legacy of Poor Assessment -- Assessment and Intelligence -- Summary -- Setting Objectives
  • Characteristics of SMART or High-Quality Objectives -- Behavioral Versus Attitudinal Objectives -- Intermediate Versus Long-Term Objectives -- How IIP Objectives Differ from Kinetic Objectives -- How to Identify Objectives -- Setting Target Thresholds: How Much Is Enough? -- Logic Model Basics -- Inputs, Activities, Outputs, Outcomes, and Impacts -- Logic Models Provide a Framework for Selecting and Prioritizing Measures -- Program Failure Versus Theory Failure -- Constraints, Barriers, Disruptors, and Unintended Consequences -- Building a Logic Model, Theory of Change, or Logic of an Effort -- Various Frameworks, Templates, Techniques, and Tricks for Building Logic Models -- Updating the Theory of Change -- Validating Logic Models -- Summary -- Hierarchy of Terms and Concepts: From Constructs to Measures to Data -- Types of Measures -- Identifying the Constructs Worth Measuring: The Relationship Between the Logic Model and Measure Selection -- Capturing the Sequence of Effects, from Campaign Exposure to Behavioral Change -- Upstream and Downstream Measures -- Attributes of Good Measures: Validity, Reliability, Feasibility, and Utility
  • Assessing Validity: Are You Measuring What You Intend to Measure? -- Assessing Reliability: If You Measure It Again, Will the Value Change? -- Assessing Feasibility: Can Data Be Collected for the Measure with a Reasonable Level of Effort? -- Assessing Utility: What Is the Information Value of the Measure? -- Feasibility Versus Utility: Are You Measuring What Is Easy to Observe or Measuring What Matters? -- Desired Measure Attributes from Defense Doctrine -- Constructing the Measures: Techniques and Best Practices for Operationally Defining the Constructs Worth Measuring -- Excursion: Measuring Things That Seem Hard to Measure -- MOE and MOP Elements in Defense Doctrine -- Summary -- Criteria for High-Quality Evaluation Design: Feasibility, Validity, and Utility -- Designing Feasible Assessments -- Designing Valid Assessments: The Challenge of Causal Inference in IIP Evaluations -- Designing Useful Assessments and Determining the "Uses and Users" Context -- A Note on Academic Evaluation Studies Versus Practitioner-Oriented Evaluations and Assessments -- Types or Stages of Evaluation Elaborated: Formative, Process, and Summative Evaluation Designs
  • Formative Evaluation Design -- Process Evaluation Design -- Summative Evaluation Design -- Experimental Designs in IIP Evaluation -- Quasi-Experimental Designs in IIP Evaluation -- Nonexperimental Designs -- The Best Evaluations Draw from a Compendium of Studies with Multiple Designs and Approaches -- The Importance of Baseline Data to Summative Evaluations -- Summary -- Importance and Role of Formative Research -- Characterizing the Information Environment: Key Audiences and Program Needs -- Audience Segmentation -- Social Network Analysis -- Audience Issues Unique to the Defense Sector: Target Audience Analysis -- Developing and Testing the Message -- Importance and Role of Qualitative Research Methods -- Focus Groups -- Interviews -- Narrative Inquiry -- Anecdotes -- Expert Elicitation -- Other Qualitative Formative Research Methods -- Summary -- Overview of Research Methods for Evaluating Influence Effects -- Measuring Program Processes: Methods and Data Sources -- Measuring Exposure: Measures, Methods, and Data Sources -- Capturing Variance in the Quality and Nature of Exposure
  • Methods and Best Practices for Measuring Reach and Frequency -- Measuring Self-Reported Changes in Knowledge, Attitudes, and Other Predictors of Behavior -- Knowledge or Awareness Measures -- Measuring Self-Reported Attitudes and Behavioral Intention -- Content Analysis and Social Media Monitoring -- Content Analysis with Natural Language Processing: Sentiment Analysis and Beyond -- Social Media Monitoring for Measuring Influence -- Measuring Observed Changes in Individual and Group Behavior and Contributions to Strategic Objectives -- Observing Desired Behaviors and Achievement of Influence Objectives -- Direct and Indirect Response Tracking -- Atmospherics and Observable Indicators of Attitudes and Sentiments -- Aggregate or Campaign-Level Data on Military and Political End States -- Embedding Behavioral Measures in Survey Instruments -- Techniques and Tips for Measuring Effects That Are Long-Term or Inherently Difficult to Observe -- Analyses and Modeling in Influence Outcome and Impact Evaluation -- Prioritize Data Collection over Modeling and Statistical Analysis Tools
  • The Perils of Overquantification and Junk Arithmetic -- Aggregation Across Areas, Commands, and Methods -- Narrative as a Method for Analysis or Aggregation -- Analyze Trends over Time -- Statistical Hypothesis Tests -- Multivariate Analysis -- Structural Equation Modeling -- Summary -- Survey Research: Essential but Challenging -- Sample Selection: Determining Whom to Survey -- Collecting Information from Everyone or from a Sample -- Sample Size: How Many People to Survey -- Challenges to Survey Sampling -- Interview Surveys: Options for Surveying Individuals -- Conducting Survey Interviews In Person: Often Needed in Conflict Environments -- Additional Methods of Data Collection -- The Survey Instrument: Design and Construction -- Question Wording and Choice: Keep It Simple -- Open-Ended Questions: Added Sensitivity Comes at a Cost -- Question Order: Consider Which Questions to Ask Before Others -- Survey Translation and Interpretation: Capture Correct Meaning and Intent -- Multi-Item Measures: Improve Robustness -- Item Reversal and Scale Direction: Avoid Confusion
  • Testing the Survey Design: Best Practices in Survey Implementation -- Response Bias: Challenges to Survey Design and How to Address Them -- Using Survey Data to Inform Assessment -- Analyzing Survey Data for IIP Assessment -- Analyzing and Interpreting Trends over Time and Across Areas -- Triangulating Survey Data with Other Methods to Validate and Explain Survey Results -- Summary -- Assessment and Decisionmaking -- The Presentational Art of Assessment Data -- Tailor Presentation to Stakeholders -- Data Visualization -- The Importance of Narratives -- Aggregated Data -- Report Assessments and Feedback Loops -- Evaluating Evaluations: Meta-Analysis -- Metaevaluation -- Metaevaluation Checklist -- Toward a Quality Index for Evaluation Design
ISBN
  • 9780833088901 (pbk. : alk. paper)
  • 0833088904 (pbk. : alk. paper)
LCCN
2015011673
OCLC
  • 905668031
  • SCSB-10181919
Owning Institutions
Harvard Library