
Wednesday, September 14, 2011

ASSESSING AND RECOMMENDING QUANTITATIVE RESEARCH DESIGNS

According to Creswell (2009), research studies are based on statements that can be tested as predictions. For instance, Nonprofits and Civic Engagement (Berry, 2005) is a study that examined the potential for nonprofit agencies to fully engage their clients in public policy. It was conducted on the premise that section 501(c)(3) considerably discourages public policy participation, thereby reducing nonprofits' likelihood of building, fostering, and enhancing civic engagement within the United States. This statement was partly tested in a study of the American nonprofit sector known as the Strengthening Nonprofit Advocacy Project (Berry & Arons, 2003). That the outcome of the study was tested in part across an array of institutions (the Lincoln Filene Center, Charity Lobbying in the Public Interest, and OMB Watch) not only strengthened the findings of the study but also solidified its credibility (Berry, 2005, p. 572).

Moreover, the means through which data were generated in the study (i.e., a national random-sample mail survey, phone interviews with some of the nonprofit executive directors among the survey participants, and interviews with board members and directors of selected nonprofits, among others) also strengthened the study (p. 573). Nonetheless, because the main data discussed in the study were taken almost exclusively from the mail survey of some 220,000 501(c)(3) organizations, and because the sampling frame was only sketchily outlined, the credibility of the final result was weakened. Moreover, the use of a small portion of a very large sample of nonprofits may not accurately support the assumption that section 501(c)(3) is a deterrent to nonprofit advocacy (p. 570). To sustain the validity of a study, the underlying assumption or hypothesis must be empirically relevant and connected to a range of observable phenomena, which can be achieved by applying theoretical concepts within practical settings (Reynolds, 2007, p. 52).

According to Creswell (2009), the standard practice across research paradigms is to maximize the reliability and validity of the study. Also, the deduction that if nonprofits are discouraged from advocacy they are equally dissuaded from recruiting lower-income and other less privileged Americans to participate politically could be unrelated. And the H election, which the Internal Revenue Service uses to measure expenditures on the two scales of lobbying, direct (legislative) and grassroots, is not only in stark contrast to the normative theory used in the study; most nonprofit agencies are not even aware of it as an available option (p. 574). The use of H electors as a control group, owing to their similarities to 501(c)(3)s even though they are not a true control group, seriously limited the validity of the findings.

Berry (2005) and Weitzman, Silver, and Brazil (2006) share some similarities: like the former, the latter is a study that examined ways of improving public policy through data management. Weitzman et al. (2006) was conducted against the assumption that the improvement of data practice as a tool to enhance public policy is tied to two distinct schools of thought, held by rational choice theorists and deliberative democratic theorists. The researchers based the need for the study on the claim that better information may lead to more comprehensively analytical decision making, while increased access to data encourages greater democratic governance (p. 390). Bardach (2000) argued that through rational choice theory, political agents trying to identify or evaluate problems have an array of alternatives from which to make the right choice.

The quasi-experimental design used in this study was taken from the Urban Health Initiative (UHI), conducted nationally for the Robert Wood Johnson Foundation. The UHI is intended to improve the health and safety of children. A key quasi-independent variable in the UHI study is whether the process of handling children's health and safety issues is characterized by good data practice. Accordingly, it is not possible to isolate an effective data-driven process that fully supports the hypothesis.

One limitation of Weitzman et al. (2006) is the problem posed by technology: most public and nonprofit agencies lack the full capacity to effectively manage microdata. Moreover, the basis for selecting only fifteen cities from among the thousands of cities in the United States was not disclosed in the study, and applying the findings of a few localities as valid public policy practice nationally calls the validity of the study into question. Even though the study did not comment on the problems the Freedom of Information Act poses for studies of this sort, research applying rational choice theory has shown that policy makers are denied full access to certain data, leaving public policy decisions to be made incrementally (Reidy, 2003). As in Berry (2005), the variables tested in Weitzman et al. (2006) were subject to manipulation, in that the experiment was based on the findings of an earlier, related longitudinal study.

A primary reason the causal-comparative design will be used in my proposed study, on soft power as an effective counterterrorism tool, is that no form of treatment has to be imposed on the study participants (Creswell, 2009). Causal-comparative designs are conducted after the fact, and their primary focus is to look for cause-and-effect relationships between existing groups. For instance, the study might look into how familial support of new members of a group such as Al-Qaeda affects their willingness to undertake suicide missions, as sketched below.
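As a minimal illustration of such an after-the-fact comparison, the Python sketch below contrasts two pre-existing groups on an outcome measure; the group labels, scores, and scale are hypothetical assumptions for illustration only, not data from the proposed study.

# Hypothetical causal-comparative (ex post facto) comparison: two
# pre-existing groups are compared on an outcome without imposing any
# treatment. All values below are illustrative, not real findings.
from scipy import stats

# Illustrative willingness scores (hypothetical coded responses, 1-10
# scale) for members reporting high versus low familial support.
high_family_support = [7, 8, 6, 9, 7, 8, 5, 9]
low_family_support = [4, 5, 3, 6, 4, 5, 2, 6]

# An independent-samples t-test asks whether the two group means differ
# by more than chance alone would predict.
t_stat, p_value = stats.ttest_ind(high_family_support, low_family_support)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Because the groups exist before the analysis, such a comparison can only describe an association between familial support and the outcome; without random assignment it cannot, by itself, establish causation.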

Even though quantitative studies rely heavily on experimental designs in which participants are randomly assigned to a treatment or control group, the fact that my study will not run a standardized test to measure or compare outcomes rules out such a design (Creswell, 2009, p. 134). And since the subjects of the study will not be randomly assigned to survey groups, owing to impracticality, the use of quasi-experimental designs is likewise ruled out.

To assess the validity of the study, data will be assigned numeric values with a set coding scheme. Based on the research question (would replacing the disproportionate use of force with soft power as a counterterrorism measure eliminate the level of anger that gives rise to terrorism?), one or more statistical inferences will be drawn (p. 135). The data generated through that process will then be carefully analyzed and summarized as descriptive statistics (i.e., measures of the fundamental variables, correlations, graphics, etc.). For this study, a regression analysis will be used; the aim is to establish a statistical model that best explains the relationships among the variables (such as how a change in one variable is associated with changes in the others), as sketched below.
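The sketch below shows, under assumed data, what such a coding scheme and a simple regression might look like in Python; the response categories, variable names, and values are hypothetical assumptions, not the study's actual instrument or data.

# Hypothetical sketch: a coding scheme maps survey responses to numbers,
# and a simple least-squares regression relates the coded variables.
# Everything here is illustrative, not real data.
from scipy import stats

# A simple coding scheme for Likert-style responses.
coding = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# Ten hypothetical respondents: perceived use of soft power (predictor)
# and reported level of anger on a 1-5 scale (outcome).
responses = ["disagree", "neutral", "agree", "strongly agree", "agree",
             "strongly disagree", "neutral", "agree", "disagree",
             "strongly agree"]
soft_power = [coding[r] for r in responses]
anger = [4, 3, 2, 1, 2, 5, 3, 2, 4, 1]

# Ordinary least-squares regression of anger on perceived soft power.
result = stats.linregress(soft_power, anger)
print(f"slope = {result.slope:.2f}, r^2 = {result.rvalue ** 2:.2f}, "
      f"p = {result.pvalue:.4f}")

A statistically significant negative slope would be consistent with the hypothesis that greater reliance on soft power is associated with lower levels of anger, although the regression by itself cannot establish causation.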

References



American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.

Bardach, E. (2000). A practical guide for policy analysis: The eightfold path to more effective problem solving. New York, NY: Chatham House Publishers.

Berry, J. M. (2005, September/October). Nonprofits and civic engagement. Public Administration Review, 65, 568–578.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage Publications.

Reidy, M. (2003). Key themes: Reflections from the child indicator project. Working paper, University of Chicago, Chapin Hall Center for Children.

Reynolds, P. D. (2007). A primer in theory construction. Boston, MA: Pearson Allyn and Bacon.

Weitzman, B., Silver, D., & Brazil, C. (2006, May). Efforts to improve public policy and programs through data practice: Experiences in 15 distressed American cities. Public Administration Review, 66, 386–399.
