Writing a report from survey data

Here is a very basic guide on how to write a report from survey data. It's not intended for absolute beginners. It is more of a reminder for those who once learned statistics, but aren't sure how to convert a statistical printout (from software such as Epi Info or SPSS) into a written report.
This is based on the format that Audience Dialogue normally uses. Following these instructions will produce a more thorough report than you normally see from commercial research companies. The purpose is to produce a report that's completely self-explanatory: one that can be passed on to any well-educated manager, who should be able to fully understand it without having to make further inquiries. When you've finished writing it, you should be able to hand it over and say "Here's the report. It tells you everything you need to know about the survey."
Note: more information on research reporting (and usage) is available in the book Know your Audience - A Practical Guide to Media Research (revised 2007), which provides a comprehensive and usable guide to all aspects of audience research. This book is available as a 384-page PDF document for just 10 Euros, US $15 or AUD $15. Click here to contact us and place an order, or simply email john(@audiencedialogue.net), removing the brackets when you type the address into your email program. We use PayPal, enabling you to pay easily and securely in different ways, including by credit card. You can scan the pages of the earlier edition here.
1. Introduction to the survey
This introductory section will usually contain a contents page, a credits page, an executive summary, and background data on the survey.
Preliminary pages
Executive summary. Many reports have a one-page "executive summary" at the beginning, so readers can get an idea of what's in the report.
Contents page. An alternative is to write an annotated contents page, with headings in the style of newspaper headlines: a one-line summary of the results shown in each section.
Who is responsible for the report:
- the organization that commissioned the survey - and why it was commissioned
- the main contact person in that organization
- the organization that did the research
- any other organizations involved in any subcontracting
- the chief researcher and the person who wrote the report.
Give addresses and contact details as well as names. Are you thinking "that seems like a lot - why bother?" The answer is that reports are often used years later, when a follow-up survey is done. The more information provided in the original report, the easier it is to contact the people involved. This is very helpful when a follow-up survey is being planned.
Background data:
- Dates - which can include the dates of the fieldwork, the date the report was completed (and any other relevant dates).
- The geographical area covered by the survey - with a map, if possible.
- Standards (of ethics and data quality) followed in the conduct of the survey - such as the ESOMAR Code of Conduct.
- Definition of the population.
- Exclusions from the sample (e.g. children under a certain age, or people not living in private households).
- How the sample was selected.
- References to other relevant documents, such as census data - but if there are a lot of references, they usually go at the end of the report.
If there is anything else that refers to the survey as a whole, it should be mentioned in this preliminary section. Next, you need to go through each question separately...
2. Write about each question in turn
Cover each question asked in the survey - about 2 pages per question is usually enough. Avoid covering one question across more than one page opening. Control variables (such as time, date, and place of survey) should be included as if they are questions. Questions should be covered in the most logical order for the hypotheses - often (but not necessarily) the same order as the questionnaire.
For each question, write about each of these 5 points: Who was asked? How many people were asked? What was the question? What type of responses were gathered? Summary of responses.
The examples below show how it can be done.
 
2.1. Who was asked?
Were all respondents asked the question, or only some - if only some, who?
2.2. How many people were asked?
This is the sample size for that question: the number of respondents who answered that question - often less than the entire sample size of the survey.
2.3. What was the question?
Here, write the exact wording of the question. Any special instructions that interviewers were given, on how to ask this question, should also be given.
2.4. What type of responses was gathered?
This means the format of the response type (if not obvious from the results). In other words, were respondents asked to choose from a list of possible answers ("multiple choice"), or was the question open-ended (letting respondents give an answer in their own words, or supply their own numbers)? How many answers were respondents allowed to give: only one, as many as they wanted, or some in-between number?
If open-ended, were the answers then combined by coding? If "other" answers were sought in addition to the multiple-choice answers, list any "others" mentioned by at least 1% of the sample.
2.5. Summary of responses
Different people understand data in different ways, so it's best to provide the response summary in four different formats: as a verbal summary, a verbal explanation, a table, and a graph. The responses to each question normally fill two pages - one opening of a report - with words on one page, and a table and/or graph on the facing page. If there's not much detail, one page is sometimes enough.
The verbal summary is like a headline, expressing the main finding from that question. One sentence is enough. This is followed by a more detailed verbal explanation of the results, both describing them in words and commenting on them.
For the frequency table - which can often be combined with a graph - the format depends on the type of answers that were given to this question. At the foot or top of the table, show the total raw numbers and the total % (usually 100%). For questions that allow multiple answers, show two percentages: % of respondents and % of answers. It's best to copy and paste a table from the statistical software into your word processor file, to avoid numeric errors - removing unnecessary elements later.
If more than about 1% of respondents who were eligible to answer this question did not answer, give here any reasons why this might have happened.
There are three main types of question, and the responses to each are displayed differently. These are (1) multiple-choice (but single-answer) questions, (2) multiple-answer questions, and (3) open-ended questions. Now for some examples of each type...
2.5.1. Multiple-choice question
For a multiple-choice question, list the frequency distribution of answers, showing both raw numbers and percentages. If the sample was less than about 200, list whole percentages, e.g. 15%. If more than about 200, give percentages with one decimal point, e.g. 15.1%.
 
Example 1: Single-answer question. Notice how the raw numbers are in a different type style from the percentages.
Gender of respondent.
Base: all respondents
Gender     No.     %
Male        87     48
Female      94     52
Total      181    100
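As a rough illustration of how a frequency table like Example 1 could be produced, here is a minimal sketch in Python with pandas. It assumes the survey data have been exported to a CSV file with one row per respondent and a column named "gender"; the file and column names are hypothetical.

# Sketch: frequency table with raw numbers and percentages.
# "survey.csv" and the "gender" column are hypothetical names.
import pandas as pd

df = pd.read_csv("survey.csv")            # data exported from SPSS, Epi Info, etc.

counts = df["gender"].value_counts()      # raw numbers per answer
percents = counts / counts.sum() * 100    # percentage of respondents

table = pd.DataFrame({"No.": counts, "%": percents.round().astype(int)})
table.loc["Total"] = [counts.sum(), 100]  # total row, as recommended above
print(table)

Copying a table produced this way straight into the word processor, as suggested above, avoids retyping errors.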
Example 2: Multiple-answer question. Note simple bar graph on right of percentages.
Q14. Base: all respondents (n=181)
"At which of these times did you listen to radio yesterday?"
Time of radio listening: Monday     No.    %*
00-04h     2    1   |
04-06h     2    1   |
06-08h    45   25   |||||||||||||
08-10h    20   11   ||||||
10-12h     7    4   ||
12-14h     7    4   ||
14-16h    13    7   ||||
16-18h    36   20   ||||||||||
18-20h    33   19   ||||||||||
20-22h    45   25   |||||||||||||
22-24h    17    9   |||||
* Total > 100%, because of multiple answers.
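For a multiple-answer question like this, the two percentages mentioned in section 2.5 (% of respondents and % of answers) could be computed along the following lines. This is only a sketch: it assumes each time slot was stored as a separate 0/1 column, and the file and column names are made up.

# Sketch: the two percentages for a multiple-answer question.
import pandas as pd

df = pd.read_csv("survey.csv")                      # hypothetical file name
slots = ["t00_04", "t04_06", "t06_08", "t08_10"]    # hypothetical 0/1 columns

mentions = df[slots].sum()                          # raw number of mentions per slot
summary = pd.DataFrame({
    "No.": mentions,
    "% of respondents": (mentions / len(df) * 100).round(),
    "% of answers": (mentions / mentions.sum() * 100).round(),
})
print(summary)    # "% of respondents" will usually add to more than 100%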
2.5.2. Numeric questions: group responses
For numeric questions with more than about 10 different responses, the responses can be grouped. E.g. if you ask respondents their exact age, the frequency distribution can show 5-year age groups: 15-19, 20-24, 25-29, and so on, up to "65 or over". The final open-ended category should have fewer respondents than the previous category - e.g. if 3% were aged 60-64 and 4% were 65 or over, extend the 5-year groupings to 65-69, so that the open-ended group becomes "70 or over". The principle here is to have only a small number in the "X or over" grouping.
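As a sketch of how that grouping might be done in Python with pandas, assuming an "age" column recorded in exact years (the file and column names are hypothetical):

# Sketch: grouping exact ages into 5-year bands with a final open-ended band.
import pandas as pd

df = pd.read_csv("survey.csv")                       # hypothetical file name

bins = [15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 200]
labels = ["15-19", "20-24", "25-29", "30-34", "35-39",
          "40-44", "45-49", "50-54", "55-59", "60-64", "65 or over"]
df["age_group"] = pd.cut(df["age"], bins=bins, labels=labels, right=False)

# Percentage in each band, in band order, ready for a frequency table
print((df["age_group"].value_counts(sort=False, normalize=True) * 100).round())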
Example 3: This table has grouped ages that were gathered as individual years, compared the survey results with the census data, and shown the difference. The table was followed by a comment on the reasons for the difference, and how much those differences might affect the survey results. Because the main purpose of the table was to compare the sample with the population, it did not include raw numbers, which would have made the presentation confusing.
Base = all people (606 of them answered)
Q.16 "What is your age?"
(grouped in 5-year bands)

Age group           Survey %    Census %    Difference %
15 to 19 years          13          19           -6
20 to 24                11          14           -3
25 to 29                15          13           +2
30 to 34                15          10           +5
35 to 39                10           9           +1
40 to 44                10           8           +2
45 to 49                 9           6           +3
50 to 54                 8           6           +2
55 and over             10          15           -5
Total                  100%        100%
2.5.3. Open-ended questions
For open-ended questions, present coded responses (plus maybe full text in appendix)
If there are only a few possible answers, these can be listed as for a multiple-response question. But if many different answers are given (more than about 20), they need to be coded and grouped. The codes need to be explained, so that readers of the report can interpret the question well. When the sample size is small, it is often best to have a minimal number of groupings, and report all the answers in an appendix.
Example 4: In this case, the hypothesis related to noticing live programs, hence the highlighting. Other responses were of less interest for the research. Notice that when a question allows multiple answers, the percentage is ambiguous (% of people or % of answers?), so you need to state what the percentage is based on.
Changes noticed on provincial radio           Ha Nam, % of respondents
Live programs                                          58
Other program changes                                  20
Non-program factors (e.g. hours, reception)            29
Total > 100% because of multiple answers
Base: all who noticed changes, n=138
2.5.4. Verbal summary
A verbal summary should show the main highlights from the table. Don't overdo it: if readers want all the details, they can read them in the table. One paragraph is often enough, and half a page is plenty, even for a complex table. Here's an example of a verbal summary - based on the above table.
When asked what changes they had noticed on provincial radio in the last 6 months, 138 people (76% of the total sample) mentioned at least one type of change. 58% of these mentioned the introduction of live programs, 20% mentioned changes to programs other than live programming, and 29% mentioned changes that were not program-related - such as broadcast hours or quality of reception. These percentages add to 107%, because a few people mentioned more than one type of change.
Take care not to confuse frequency of response (what surveys measure) with other factors. Thus in the above example it would be wrong to write "Live programs were the most popular change." Just because live programs were the most noticed change doesn't mean they were the most popular - "popular" implies best liked, which this question didn't measure. After you've written a summary, get somebody else to read it and see if they can misinterpret it. If they can, rewrite it!
If you have any other data (e.g. from a census or previous surveys) related to that question, mention it in the summary. Also discuss any problems that occurred with that question - e.g. ambiguous wording.
2.5.5. Graph the results
A graph of that frequency distribution can also be useful - but not essential. Bear in mind that graphs take a lot of space in a report. If you include a graph, make sure everything is clearly labelled, including axes, units, and values. Give it a heading that summarizes the findings in one line, and is related to the topic of the study - e.g. in the above example not "Changes noticed on provincial radio" but something like "Live programming was the change most noticed."
The best graph format to use is generally the horizontal bar chart, with one line per frequency. Pie charts take too much space, and histograms don't have enough space for labels, if there are more than a few different answers. Don't use colour graphs if your report will be photocopied in black and white - or use patterns as well as colours to distinguish the bars in your graphs.
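As an illustration, here is a minimal matplotlib sketch of such a horizontal bar chart, using the Ha Nam figures from Example 4 above; the output file name and styling are arbitrary choices.

# Sketch: horizontal bar chart of a frequency distribution (figures from Example 4).
import matplotlib.pyplot as plt

categories = ["Live programs", "Other program changes", "Non-program factors"]
percents = [58, 20, 29]

fig, ax = plt.subplots(figsize=(6, 2.5))
ax.barh(categories, percents, color="grey")   # grey bars stay legible when photocopied
ax.invert_yaxis()                             # first category at the top
ax.set_xlabel("% of respondents")
ax.set_title("Live programming was the change most noticed")
fig.tight_layout()
fig.savefig("changes_noticed.png", dpi=150)   # arbitrary output file name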
2.5.6. Significance testing
Which hypotheses (or sub-hypotheses) apply to this question? If the hypotheses can be tested by knowing only the frequency response, do a significance test (usually chi-squared for categorical data, or a t-test for numeric data) and report the results. Usually you need to quote only the t or chi-squared figure, the degrees of freedom, and the probability of the outcome. If the p value is less than .05, the result can be regarded as significant.
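A rough sketch of how these two tests might be run in Python with scipy. The gender counts come from Example 1, while the 50/50 expectation, the listening-hours figures, and the hypothesised mean of 2 hours are invented purely for illustration.

# Sketch: single-question significance tests with scipy.
from scipy import stats

# Chi-squared goodness-of-fit for a categorical question
# (are the Example 1 gender counts consistent with an assumed 50/50 split?)
observed = [87, 94]
expected = [90.5, 90.5]
chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi-squared = {chi2:.2f}, df = {len(observed) - 1}, p = {p:.3f}")

# One-sample t-test for a numeric question
# (is the mean listening time different from a hypothesised 2 hours?)
hours = [1.5, 2.0, 3.0, 0.5, 2.5, 4.0, 1.0]     # made-up responses
t, p = stats.ttest_1samp(hours, popmean=2.0)
print(f"t = {t:.2f}, df = {len(hours) - 1}, p = {p:.3f}")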
3. Cross-tabulate relevant pairs of questions
Most questions don't have much meaning on their own, so you need to cross-tabulate them with other questions. Each cross-tab will test a major or minor hypothesis. Don't try to cross-tab each question with every other - only the ones that either (a) you have hypotheses for, or (b) you want to make a demographic comparison for. If the answers to a cross-tab are not statistically significant, report them only if this is an important hypothesis. Otherwise, just report that the difference wasn't significant - e.g. "Question 3 produced no significant differences at the .05 level for different sexes, age groups, or geographical areas."
When putting tables into a report, remember that you can only squeeze about 8 columns across a page, and about 50 rows down. If there are too many different answers, you'll need to recode the question to reduce the number of columns (or, sometimes, rows).
Here's an over-simplified guide to which statistical test to use (a code sketch of the three cases follows this list):
- If both questions are categorical, use the chi-squared test.
- If one question is categorical and the other is numeric, use a means comparison (F test, or one-way ANOVA). If the categorical question has only two values (e.g. comparing the income of men and women), a t-test does the same job. Also do a graph, showing how the means vary for each category.
- If both questions are numeric, do a regression analysis to find out how much the answer to one question can predict the answer to the other. Also do a graph: a scattergram or (if there are not many different values) a bubble chart.
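Here is a minimal sketch of those three cases in Python with pandas and scipy; the tiny data frame and all its values are invented just to make the example self-contained.

# Sketch: the three cross-tabulation tests, on invented data.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "province": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "listens":  ["yes", "no", "yes", "yes", "no", "no", "yes", "yes"],
    "income":   [300, 250, 400, 380, 270, 350, 310, 420],
    "age":      [25, 40, 33, 51, 29, 44, 38, 47],
})

# (1) Both categorical: chi-squared test on the cross-tab
crosstab = pd.crosstab(df["province"], df["listens"])
chi2, p, dof, _ = stats.chi2_contingency(crosstab)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# (2) Categorical vs numeric: compare means (one-way ANOVA / F test;
#     with only two groups, stats.ttest_ind gives the equivalent t-test)
groups = [g["income"] for _, g in df.groupby("province")]
f, p = stats.f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.3f}")

# (3) Both numeric: simple regression (how well does age predict income?)
reg = stats.linregress(df["age"], df["income"])
print(f"slope = {reg.slope:.2f}, r-squared = {reg.rvalue ** 2:.2f}, p = {reg.pvalue:.3f}")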
Example 5: Recognize this? It's just Example 4, with two more columns of data. That's all a cross-tab is. However, with a cross-tab it's more important to make sure that readers cannot misinterpret the percentages. The normal convention is that percentages add to 100 downwards. If they add across (usually done to fit the table onto a single page, as in Example 6, below) you must show this very clearly - as Example 6 does.

Changes noticed on provincial radio            Ha Nam   Quang Nam   Vinh Long
(% of respondents in that province)
Live programs                                     58        61          76
Other program changes                             20        32          21
Non-program factors (e.g. hours, reception)       29        20          15
Total > 100%, because of multiple answers
Base: all who noticed changes                   n=138     n=132       n=161
Chi-squared = 22.74 (10 degrees of freedom). Significance of differences = .008 **
4. Groups of questions
Sometimes, a group of questions can be combined - for either of two reasons:
- A set of questions asked to produce a particular score on some combined variable - e.g. Snyder's self-monitoring scale. When questions have been grouped in this way, a combined score should be shown. (This is done by computing a new, combined variable in the statistical software, and reporting on that new variable - see the sketch below.)
- Grouping questions without a summary scale, to enable easier comparison of results.
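As an illustration of the first case, here is a minimal pandas sketch of computing a combined variable. The item names (q10a to q10e) and the simple sum-of-items scoring are assumptions for illustration, not a prescription.

# Sketch: a new combined variable built from a set of scale items.
# "survey.csv" and the q10a-q10e column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey.csv")
items = ["q10a", "q10b", "q10c", "q10d", "q10e"]    # assumed 1-5 scale items

# Combined score = sum of the items; left missing if any item is missing
df["combined_score"] = df[items].sum(axis=1, min_count=len(items))

print(df["combined_score"].describe())              # report on the new variable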
4.1. Combined table and summary
Report the findings as for simple frequencies above: a table and a verbal summary of it. When a question is part of a grouped set, less detail is needed for each question. You can report on the whole set together - but you will still need to discuss each individual question briefly.
4.2. One combined graph
Instead of having a graph for each question, produce one graph comparing all the questions. A horizontal bar graph (for categorical questions) or area graph (for numeric questions) is often the clearest way to present the data.
4.3. Use scale averages to compare questions
Another way to compare a grouped set of questions is to calculate averages for scales - e.g. for a Likert scale with 5 possible answers, coded 1 to 5, calculate the mean for each question and produce a graph of the means. This works when the frequency distribution for each question in the group follows the same pattern; if some questions have very different distributions - e.g. if most questions have about 20% saying "totally agree" but one question has 50% - report on that question separately.
Make sure not to include missing values in scale averages - e.g. if respondents are using a 1 to 5 scale, and "don't know" is coded 6, exclude those 6s from the calculation of averages.
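As a minimal sketch of those scale averages, assuming 1-5 Likert items with "don't know" coded 6 (the file and column names are hypothetical):

# Sketch: per-question scale means, with "don't know" (coded 6) excluded.
import numpy as np
import pandas as pd

df = pd.read_csv("survey.csv")                  # hypothetical file name
items = ["q12a", "q12b", "q12c", "q12d"]        # hypothetical 1-5 scale items

means = df[items].replace(6, np.nan).mean()     # treat 6 as missing, then average
print(means.sort_values(ascending=False))       # one mean per question, ready to graph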
4.4. Correlations
When a group of questions has the same format and the same set of answers, it's often useful to calculate a correlation matrix. However, with more than about 20 questions in such a set, a correlation matrix becomes very difficult to read. In that case, a factor analysis or cluster analysis might be better than presenting a simple correlation matrix. I don't recommend presenting correlation data unless your readers have some knowledge of statistics. Correlations are easily misunderstood, and almost nobody understands factor analysis. Cluster analysis is easier - you can just say "answers to these questions go together."
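For completeness, a correlation matrix for such a group could be produced along these lines (again with hypothetical column names):

# Sketch: correlation matrix for a group of same-format questions.
import pandas as pd

df = pd.read_csv("survey.csv")
items = ["q12a", "q12b", "q12c", "q12d"]

print(df[items].corr().round(2))    # Pearson correlations between every pair of items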
Example 6 - a table from a group of questions. This table combines the answers from 8 related questions, making it easy to compare which types of program are most and least listened to. Always include a Total column (usually 100%, as here) so that readers can see what the percentages mean, and in which direction they add up. The percentages are based on the number who answered each question. Notice that the sample sizes are in a different type, again to avoid confusion.
Respondents who answered Yes to question 1 (148 of them) were asked question 6:
"How often do you listen to [type of program] on LTVR? - Daily, weekly, monthly, or never?"
Type of program                 Daily    Weekly    Monthly    Never    Total    Sample
Phone-in programme               41%      31%        12%       18%      100%      147
Advertisement                    87        9          4         0       100%      147
News bulletin                    91        7          2         0       100%      145
Current affairs discussion       39       46         15         2       100%      146
Death announcement               52       38          2         8       100%      120
Music, reggae                    70       26          2         2       100%      145
Health program                   25       52         19         4       100%      146
Music, Arabic style              38       36         13        13       100%      141
Example 7 - A cross-tab from a group of questions. This has one question in each column and one answer in each row - generally the easiest format to read. This time the totals are not shown, because they would make no sense. Sample sizes in each area are shown as "n=" to avoid confusion with the other figures, which are all percentages.
Q.18 (Sample = all respondents )
"Which of these media do you use at least once a week? TV? Radio? Print?"
Media used weekly      Banjarmasin %      Pekanbaru %      Jakarta %
TV                          84                 59              81
Radio                       84                 92              74
Print                       57                 45              64
Total > 100% because of multiple answers
Sample size               n=151              n=185           n=181
5. Conclusions, maybe recommendations, and appendixes
When all this is done, you'll have reported on each question separately, with a page or two per question, and you'll have reported on the grouped questions. When somebody reads a report like this, they feel swamped with data. So you need to summarize it, explain how each question fits with the others, and come to some overall conclusions.
At the end you should write several pages of conclusions for people who've just read (or skimmed) the whole report, summarizing what you think are the main findings of the survey. If it's relevant to add recommendations, they can go here. However, there's no point in writing recommendations without giving reasons for them - otherwise readers won't act on them.
Finally, there can be appendixes. It's good practice to include a copy of the questionnaire - even though you have already given the wording of each question individually. The appendixes may also include tables related to sample selection, instructions to interviewers, and so on. Sometimes there's so much appendix material that it can be produced as a separate volume - the existence of which should be mentioned in the first volume. The advantage of the two-volume report is that because fewer people are interested in the technical details, fewer copies of the second volume need to be produced.
Another way of saving paper is to produce both a full survey report (with few copies distributed) and a short summary report, with many copies distributed. The disadvantage of the latter approach is that it means writing two separate reports, which costs more, and takes longer. Thus writing a summary report is usually justifiable only if the expected readership is large - say 100 or more.
If your readers all have internet connections, you can save paper (at your end) by emailing the report to them. A few suggestions, if you do this:
(1) Graphs are usually in colour, but if readers don't have colour printers, graphs can be hard to interpret. So unless you know that all your readers have colour printers, make sure that your graphs will be legible when printed in black and white.
(2) Emailed reports often get emailed onto others, who you (the writer of the report) don't know about. After we discovered that somebody had changed a crucial figure in one of our emailed reports, to make his department come out better, we started distributing reports in PDF format, which readers can't change.
If you are interested in writing research reports we suggest you seriously consider our book Know your Audience - A Practical Guide to Media Research (revised 2007), which provides a comprehensive and usable guide to all aspects of audience research including how to write and use research findings and reports.  This book is available as a 384 page PDF document for just 10 Euros, US $15 or AUD $15. Click here to learn more about the book.