Introduction to Structural Equation Modelling Using SPSS and AMOS





Introduction to Structural Equation Modeling Using IBM SPSS Statistics and Amos (2nd ed.)

As an alternative to existing books on the subject, which are customarily very long, very high-level and very mathematical, not to mention expensive, Niels Blunch's introduction has been designed for advanced undergraduates and Masters students who are new to SEM and still relatively new to statistics. Besides identifying the latent factors, the task of factor analysis is also to determine the extent to which each observed variable can be explained by each factor.

As mentioned before, there are two basic types of factor analysis: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The first is useful if we are only interested in searching for a structure among a set of variables, or when we want to employ a data-reduction method. The second, on the other hand, is useful when we already have preconceived ideas about the actual structure of the data, based on theoretical support or prior research. The confirmatory approach can therefore assess the degree to which the data fit the expected structure [4].

The conceptual block diagram for exploratory factor analysis
The conceptual EFA block diagram is shown in Fig. After the research design is applied and all the necessary assumptions have been checked, the EFA procedure starts from the given factor model structure. An appropriate estimation method must be selected, and the factors can then be extracted by means of the estimation procedure. In this process the factor loadings are also estimated, through the estimation of the factor matrix.
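
As a rough illustration of these steps outside SPSS, the sketch below uses the Python package factor_analyzer on a simulated, hypothetical data set (the package, the data and the two-factor structure are assumptions made for this example, not part of the original workflow):

```python
# Minimal EFA sketch: factor extraction and estimation of loadings and communalities.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n = 300
latent = rng.normal(size=(n, 2))                       # two hypothetical latent factors
L = np.array([[.8, 0], [.7, 0], [.6, 0],               # assumed loading pattern
              [0, .8], [0, .7], [0, .6]])
X = latent @ L.T + rng.normal(scale=.5, size=(n, 6))   # six observed indicator variables
survey = pd.DataFrame(X, columns=[f"x{i}" for i in range(1, 7)])

# Factor extraction: two factors, maximum-likelihood estimation, no rotation yet.
fa = FactorAnalyzer(n_factors=2, method="ml", rotation=None)
fa.fit(survey)
print(fa.loadings_)            # estimated factor matrix (factor loadings)
print(fa.get_communalities())  # communalities h_i^2 of the observed variables
```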

When selecting how many factors to include in a model, we must make sure that their number still captures sufficient variability in the underlying data. If the unrotated factor solution does not provide enough information to unambiguously interpret the factors in relation to the examined variables, a rotation of the factors must additionally be applied, using orthogonal or oblique methods [4].



If any variable is a candidate for deletion, or if we want to change the number of factors or the type of estimation method, the model must be respecified. Naturally, an appropriate evaluation of the model's validity and reliability must also be carried out as part of the factor analysis procedure.

Fig.: The conceptual EFA block diagram
B. The presentation of the factor model structure
EFA is designed for the situation in which the connections between the indicator variables and the latent factors are unknown or uncertain [5]. The calculation of this matrix then also enables the estimation of the communalities h_i^2, which appear in expression (4). In this way we gain the ability to identify the minimum possible number of factors that account for the covariation among the observed variables [5]. All of these methods have specific properties, with accompanying advantages and weaknesses.

The decision about the factor extraction (estimation) method is related to the objectives of the factor analysis and to theoretical knowledge about the basic characteristics of the relations between the variables [4]. The main advantage of the ML method is its ability to provide a wide range of indexes for testing the model's goodness of fit. This method also enables statistical significance tests for the correlations and factor loadings, as well as the determination of confidence intervals [52].

On the other hand, the main limitation of the maximum likelihood method is its assumption of multivariate normality, the violation of which can lead to false results [52]. The PAF method, by contrast, does not rely on the assumption of multivariate normality, which can definitely be treated as an advantage.
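
The difference between the two extraction methods can be sketched as follows, assuming factor_analyzer's "ml" and "principal" options correspond to maximum likelihood and principal-axis factoring respectively (the data are again simulated and hypothetical):

```python
# Sketch: the same extraction performed with ML and with principal-axis factoring.
import numpy as np, pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))
L = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
survey = pd.DataFrame(latent @ L.T + rng.normal(scale=.5, size=(300, 6)),
                      columns=[f"x{i}" for i in range(1, 7)])

ml = FactorAnalyzer(n_factors=2, method="ml", rotation=None)
ml.fit(survey)     # ML: enables likelihood-based fit indexes, but assumes normality
paf = FactorAnalyzer(n_factors=2, method="principal", rotation=None)
paf.fit(survey)    # PAF: no normality assumption, fewer goodness-of-fit indexes
print(ml.loadings_)
print(paf.loadings_)
```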

But this method also has certain weaknesses, since it cannot provide as wide a range of goodness-of-fit indexes as the ML method [52].
Factor rotation
When the factor matrix has been extracted, a judgment must be made about the interpretation of the corresponding factors. The factor loadings indicate the degree of association between a variable and a factor, and thus enable the interpretation of the relationship of each variable with each factor [4].

In many cases the unrotated factor solution is not capable of providing information that offers the most adequate interpretation of the observed variables and the corresponding factors. The rotation of factors must then additionally be applied, whereby the reference axes of the factors are turned about the origin until some other position is reached [4]. This manoeuvre redistributes the variance from the earlier factors to the later ones, which usually also yields a simpler and theoretically more interpretable factor pattern.

The examination of the literature shows that several rotation methods have been developed, such as the varimax, equimax, orthomax, quartimax, promax and procrustes methods [51]. These factor rotation methods can be either orthogonal or oblique.
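
Two of the listed methods can be sketched as follows, again assuming the factor_analyzer package and hypothetical simulated data: varimax keeps the rotated factors orthogonal, while promax allows them to correlate.

```python
# Sketch: orthogonal (varimax) vs. oblique (promax) rotation of the factor matrix.
import numpy as np, pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 2))
L = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
survey = pd.DataFrame(latent @ L.T + rng.normal(scale=.5, size=(300, 6)),
                      columns=[f"x{i}" for i in range(1, 7)])

varimax = FactorAnalyzer(n_factors=2, rotation="varimax")   # orthogonal rotation
varimax.fit(survey)
promax = FactorAnalyzer(n_factors=2, rotation="promax")     # oblique rotation
promax.fit(survey)
print(varimax.loadings_)   # rotated loadings: compare their interpretability
print(promax.loadings_)    # with the unrotated solution from the earlier sketch
```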

All the details about the rotation methods can be found in the appropriate literature [4, 5, 6, 51].
Verification of the appropriateness of applying factor analysis
Since it must be ensured that the data matrix has sufficient correlations to justify the use of factor analysis, the latter is in principle not appropriate in the case of low or approximately equal correlations [4]. In other words, if no substantial number of correlations is greater than 0.30, factor analysis is probably inappropriate.


To check the suitability of applying factor analysis, the so-called Bartlett test of sphericity can also be employed, which investigates the statistical significance of the correlations among the variables. If the test confirms that the correlation matrix is not equal to the identity matrix, then the observed variables are significantly correlated to a certain level and factor analysis can be applied.
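
The test statistic can be computed directly from the correlation matrix R; the sketch below implements the usual formula chi2 = -(n - 1 - (2p + 5)/6) * ln|R| with p(p - 1)/2 degrees of freedom on hypothetical data (the factor_analyzer package also provides a calculate_bartlett_sphericity helper for the same purpose):

```python
# Sketch: Bartlett's test of sphericity computed from the correlation matrix R.
import numpy as np, pandas as pd
from scipy.stats import chi2

def bartlett_sphericity(df: pd.DataFrame):
    n, p = df.shape
    R = df.corr().to_numpy()
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return stat, chi2.sf(stat, dof)            # test statistic and p-value

rng = np.random.default_rng(2)
survey = pd.DataFrame(rng.normal(size=(300, 6)),      # hypothetical, uncorrelated data
                      columns=[f"x{i}" for i in range(1, 7)])
stat, p_value = bartlett_sphericity(survey)
print(stat, p_value)   # a small p-value would indicate R differs from the identity matrix
```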

Another test that checks the appropriateness of applying factor analysis is the so-called Kaiser-Meyer-Olkin (KMO) test, which measures the strength of the intercorrelations among the variables. The suitability of the data as judged by the KMO test, conducted to help decide whether or not to apply factor analysis, is illustrated in Table 4 [4].

Table 4. The appropriateness of the data as checked by the KMO test, used to help decide whether or not to apply factor analysis (KMO value vs. appropriateness).
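
A sketch of the KMO check, assuming the calculate_kmo helper from the factor_analyzer package and the same kind of hypothetical simulated data as above:

```python
# Sketch: KMO measure of sampling adequacy for hypothetical data.
import numpy as np, pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 2))
L = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
survey = pd.DataFrame(latent @ L.T + rng.normal(scale=.5, size=(300, 6)),
                      columns=[f"x{i}" for i in range(1, 7)])

kmo_per_variable, kmo_total = calculate_kmo(survey)
print(kmo_total)          # overall KMO, to be judged against the thresholds of Table 4
print(kmo_per_variable)   # per-variable measure of sampling adequacy
```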

Criteria for the number of factors to extract
It is important to choose a number of factors that still captures sufficient variability in the data, in the sense of maximizing the proportion of explained variance. At the same time we also want to limit the number of these factors, so a compromise must be found between how much variability we want to keep and how many variables we want to exclude from further analysis. A number of procedures have been designed to determine the optimal number of factors to retain, such as Kaiser's "eigenvalue greater than one" rule, Cattell's scree plot, the percentage-of-variance criterion (PVC), and so on [4]. For example, the PVC approach requires that at least a specified amount of variance be explained by the derived latent factors; a sketch of these criteria is given below.
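
The eigenvalue-based criteria can be sketched as follows, again on hypothetical simulated data with the factor_analyzer package (the 60 % figure in the comment is an arbitrary example of a pre-specified PVC target, not a value taken from the source):

```python
# Sketch: Kaiser's eigenvalue-greater-than-one rule and the percentage-of-variance criterion.
import numpy as np, pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(4)
latent = rng.normal(size=(300, 2))
L = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
survey = pd.DataFrame(latent @ L.T + rng.normal(scale=.5, size=(300, 6)),
                      columns=[f"x{i}" for i in range(1, 7)])

fa = FactorAnalyzer(rotation=None)
fa.fit(survey)
eigenvalues, _ = fa.get_eigenvalues()             # eigenvalues of the correlation matrix
print((eigenvalues > 1).sum())                    # Kaiser: retain factors with eigenvalue > 1
cumulative = np.cumsum(eigenvalues) / len(eigenvalues)
print(cumulative)                                 # PVC: keep enough factors to reach, e.g., 60 %
```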

A slightly modified version of this criterion involves selecting enough factors to achieve a pre-specified communality for each of the original measured variables, so that each of them is adequately represented [4].
The convergent validity and the statistical significance of factor loadings
Convergent validity means that convergence can be found between the variables that measure the same phenomenon; in other words, a measure is valid if it truly measures what it is supposed to measure [4].

Convergent validity is usually reflected in the size of the factor loadings, where the variables within a single factor must be highly correlated [5]. As it turns out, the significance of the factor loadings depends on the sample size of the data: in general, the smaller the sample size, the higher the required loadings. Table 5 lists the sample-size thresholds at which each factor loading value can be considered significant [4].

Table 5. Thresholds of the sample sizes needed for factor loadings to be considered significant.
Interpretation of the factor matrix
In the interpretation of factor loadings, most factor solutions unfortunately do not result in a simple structure in which each variable has a single high loading on only one factor. If a variable has more than one significant loading, we are dealing with a so-called cross-loading.

If we want each variable to load on only one factor, different types of rotation are usually required, which eliminate the cross-loadings and thus enable a simpler structure. If a variable still has significant cross-loadings after rotation, it is a candidate for deletion [4]. Besides this, a variable is also a candidate for deletion if it does not have at least one significant loading. The communalities must also be inspected for each variable, to check whether the variables reach an acceptable level of explanation.
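
The screening of a rotated loading matrix for such deletion candidates can be sketched as follows; the loading values, the 0.4 significance threshold and the 0.5 communality cut-off used here are illustrative assumptions (the communality formula as a row sum of squared loadings presumes an orthogonal solution):

```python
# Sketch: flag variables with cross-loadings, no significant loading, or low communality.
import pandas as pd

loadings = pd.DataFrame(                         # hypothetical rotated factor matrix
    [[.78, .10], [.71, .05], [.62, .45],         # x3 loads on both factors (cross-loading)
     [.08, .81], [.12, .74], [.20, .25]],        # x6 loads on neither factor
    index=[f"x{i}" for i in range(1, 7)], columns=["F1", "F2"])

significant = loadings.abs() > 0.4               # illustrative significance threshold
cross_loading = significant.sum(axis=1) > 1      # more than one significant loading
no_loading = significant.sum(axis=1) == 0        # not a single significant loading
communalities = (loadings ** 2).sum(axis=1)      # h_i^2 for an orthogonal solution
low_communality = communalities < 0.5            # illustrative cut-off
print(loadings[cross_loading | no_loading | low_communality])   # deletion candidates
```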



As suggested by Hair [4], variables are candidates for deletion if their communalities are smaller than 0.5.
The discriminant validity
Discriminant validity means that the observed phenomenon is independent of, and separated from, the other phenomena; in other words, the variables that measure a certain phenomenon are not highly correlated with the variables that measure some other phenomenon [4].

This implies that the variables should relate more strongly to their own factor than to any other factor. Discriminant validity is usually investigated by inspecting the so-called factor correlation matrix, where the correlations between factors must not exceed a certain pre-specified level [5].
The reliability
Reliability concerns the level of consistency between multiple measurements of a certain variable.

A commonly used measure is so-called internal consistency, which means that a "reliable" set of variables will consistently load on the same factor. This follows from the fact that the individual indicators of the scale should all measure the same factor and should therefore be highly intercorrelated [4]. The most widely used reliability measure is the so-called Cronbach's alpha coefficient, which is usually calculated for each factor [4].
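
Cronbach's alpha can be computed directly from the indicator variables of one factor; the sketch below uses the standard formula alpha = k/(k - 1) * (1 - sum of item variances / variance of the summed score) on hypothetical simulated indicators:

```python
# Sketch: Cronbach's alpha for the set of indicators belonging to one factor.
import numpy as np, pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(6)
factor = rng.normal(size=300)                          # one hypothetical latent factor
items = pd.DataFrame({f"x{i}": 0.8 * factor + rng.normal(scale=.5, size=300)
                      for i in range(1, 5)})           # four indicators of that factor
print(cronbach_alpha(items))                           # compare against the usual guideline
```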

According to Hair [4], the generally agreed lower limit for the Cronbach's alpha coefficient is 0.7. CFA, in turn, enables us to test how well the observed variables represent a smaller number of constructs [4]. This type of analysis is similar to EFA in many respects, but philosophically it is essentially different. In CFA, the number of factors for a given set of variables must be defined in advance, as well as which factor each variable loads on. It must also be specified whether these factors are correlated or not.

Moreover, each variable is assigned to only a single factor, and cross-loadings are not present at all. Naturally, all the factors in the model may also covary among themselves. This means that CFA enables a confirmatory test of our measurement theory, verifying that our systematic representation of a theoretical factor model is consistent with the data reflected in the observed variables. The measurement theory can then be combined with a so-called structural theory in order to fully specify a SEM model [4], as introduced more precisely in the next section.

As already mentioned in the introductory section, CFA provides the means to construct the measurement part of the SEM model. Since it is pointless to validate the structural model if the measurement model has not been validated first, structural modeling is usually a two-stage process: in the first stage, the measurement model is constructed and validated by means of CFA, while in the second stage the design of the whole structural model is completed by adding the structural part of the model and appropriately validating the entire model structure.
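
As an illustration of the first (measurement) stage outside AMOS, the sketch below specifies a small CFA model with the semopy package, assuming its lavaan-style model syntax; the factor structure, the variable names and the simulated data are all hypothetical:

```python
# Sketch: a two-factor CFA measurement model estimated with semopy (assumed API).
import numpy as np, pandas as pd
import semopy

rng = np.random.default_rng(7)
latent = rng.normal(size=(300, 2))
L = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
data = pd.DataFrame(latent @ L.T + rng.normal(scale=.5, size=(300, 6)),
                    columns=[f"x{i}" for i in range(1, 7)])

# Each indicator is assigned to exactly one factor; the two factors are allowed to covary.
cfa_description = """
F1 =~ x1 + x2 + x3
F2 =~ x4 + x5 + x6
F1 ~~ F2
"""
cfa = semopy.Model(cfa_description)
cfa.fit(data)
print(cfa.inspect())             # loadings and the factor covariance
print(semopy.calc_stats(cfa))    # goodness-of-fit indices (chi-square, CFI, RMSEA, ...)
```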

All the details about the design and construction of the CFA model, the estimation methods used in CFA, and the appropriate evaluation of the CFA model can be found in the corresponding literature [, ]. There it is also shown how the expressions related to the CFA model can be applied, similarly to what is illustrated above for the EFA model.

c. The convergent and discriminant validity and reliability in the CFA analysis
It is always necessary to investigate the convergent and discriminant validity, as well as the reliability, when a CFA analysis is conducted [4]. The thresholds for these measures are shown in Table 6 [4]. If all AVE (average variance extracted) estimates are bigger than the corresponding SIC (squared inter-construct correlation) estimates, this can be taken as good evidence of discriminant validity [4].
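
The AVE/SIC comparison reduces to simple arithmetic once the standardized loadings and the factor correlation are available; all numbers below are hypothetical:

```python
# Sketch: average variance extracted (AVE) vs. squared inter-construct correlation (SIC).
import numpy as np

ave_f1 = np.mean(np.square([0.78, 0.71, 0.66]))   # AVE = mean of squared standardized loadings
ave_f2 = np.mean(np.square([0.80, 0.74, 0.69]))
sic = 0.45 ** 2                                   # squared correlation between the two factors
print(ave_f1, ave_f2, sic)
print(ave_f1 > sic and ave_f2 > sic)              # True supports discriminant validity
```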

In SEM, a structure of interrelationships expressed in a set of equations, similar to a system of multiple regression equations, is examined. These equations reveal all of the relationships among the constructs (the dependent and independent factors) involved in the analysis. As in factor analysis, the constructs are unobservable latent factors, each represented by multiple variables [4].

The main origin of SEM lies in two familiar multivariate techniques: factor analysis and multiple regression analysis; SEM can thus be treated as a unique combination of both. SEM is a method that is gaining in popularity because it combines confirmatory factor analysis and regression analysis (simultaneous equations models) in order to depict a variety of different relationships between unmeasurable latent factors [3].

In SEM, variables can be either exogenous or endogenous, and a whole set of interrelations (direct, indirect, multiple and reversed) can be applied. Additional explanation of the variables involved in SEM and the characteristics of their relationships can be found in the literature [4].
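
Continuing the earlier hypothetical semopy sketch, the structural part is added by regressing the endogenous latent factor on the exogenous one; the model, the variable names and the simulated data are again assumptions made only for illustration:

```python
# Sketch: a small SEM with a measurement part and one structural path (semopy, assumed API).
import numpy as np, pandas as pd
import semopy

rng = np.random.default_rng(8)
xi = rng.normal(size=300)                              # exogenous latent factor
eta = 0.6 * xi + rng.normal(scale=.8, size=300)        # endogenous latent factor

def indicators(factor, names):
    # Hypothetical indicators: each loads on the given factor plus measurement error.
    return {name: 0.8 * factor + rng.normal(scale=.5, size=300) for name in names}

data = pd.DataFrame({**indicators(xi, ["x1", "x2", "x3"]),
                     **indicators(eta, ["y1", "y2", "y3"])})

sem_description = """
XI  =~ x1 + x2 + x3
ETA =~ y1 + y2 + y3
ETA ~ XI
"""
model = semopy.Model(sem_description)
model.fit(data)
print(model.inspect())           # loadings plus the structural path ETA <- XI
print(semopy.calc_stats(model))  # fit indices for the complete model
```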

The main definitions of the variables involved in SEM modeling, and the specifics of their relationships
A. From Fig. it follows that the model must next be specified, which means defining all causal paths between the variables on the basis of theory and prior theoretical knowledge. Model identification and estimation are the next steps in the procedure.


During model identification, care must be taken to achieve over-identification, which means that more pieces of information (known variances and covariances of the observed variables) are available than the number of parameters we want to estimate. Concerning model estimation, a suitable estimation method must be chosen in order to achieve successful model verification, which is the next step of the SEM procedure.
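
The counting behind over-identification is straightforward: with p observed variables the data supply p(p + 1)/2 distinct variances and covariances, and the difference between this number and the number of free parameters gives the degrees of freedom. The figures below are hypothetical:

```python
# Sketch: checking over-identification with the counting rule.
p = 10                      # observed variables in a hypothetical model
known = p * (p + 1) // 2    # distinct variances and covariances available in the data
free_parameters = 23        # hypothetical number of loadings, paths and (co)variances
degrees_of_freedom = known - free_parameters
print(known, degrees_of_freedom, degrees_of_freedom > 0)   # 55, 32, True -> over-identified
```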

Here the model is tested and its fit quality is evaluated, where different fit indices can be applied. If these indices indicate poor fit, post-hoc modifications of the model specification must additionally be carried out. Otherwise, the model is ready for appropriate interpretation and the reporting of the achieved results.

An even clearer presentation of all the significant steps in SEM modeling, and of how it incorporates the CFA measurement model, is given in Fig. For this purpose, the corresponding measurement model must first be converted into the structural model. In the next step, the validity of the structural model must be assessed: the goodness-of-fit indices and the directions of all paths are investigated, and the structural parameters are estimated.

If the structural model passes the validity test, it can be used to draw conclusions.

Otherwise, the model needs to be refined. The example includes four latent factors and 16 observed indicator variables. Each factor is measured via four indicators, and the covariances for each pair of factors are given in the CFA model. When the conversion to the SEM model is completed, some factors and their indicators have an exogenous (independent) character, while the others have an endogenous (dependent) character.

Further details about the SEM model structure, all the variables involved, and their relationships and characteristics are given in the sequel. The structure of the SEM model is represented by the path diagram in Fig. , and the meaning of the corresponding variables and coefficients is shown in Table 7. In this case we are dealing with 6 exogenous indicators, 4 endogenous indicators, 3 exogenous latent factors and 2 endogenous latent factors.

The exogenous latent factors are assumed to covary with each other, and the directions of the causal paths related to the constructs can be clearly seen. Since the path diagram is not very convenient for theoretical purposes, it can be further converted into an analogous system of matrix equations. All the details about the mathematical structure of SEM models, their matrix analysis and the derivation of the variance-covariance matrices can be found in the appropriate literature [, ].
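
For reference, the matrix form most commonly quoted in the SEM literature uses the LISREL-type notation sketched below (this general notation is an assumption here and may differ in detail from the symbols used in the source's Table 7):

```latex
% Structural part: relations among the endogenous (eta) and exogenous (xi) latent factors
\[ \eta = B\,\eta + \Gamma\,\xi + \zeta \]
% Measurement part: endogenous indicators y and exogenous indicators x
\[ y = \Lambda_y\,\eta + \varepsilon, \qquad x = \Lambda_x\,\xi + \delta \]
```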

The problem of normality in the process of SEM estimation
When estimating the parameters of a SEM model, the maximum likelihood (ML) method, which is the default estimation method in most SEM software packages, is usually used. This method demands certain statistical assumptions about multivariate normality and in principle provides accurate estimates if the data are continuous and normally distributed. According to Kline [53], multivariate normality implies, among other things, that all the individual univariate distributions are normal, that the joint distribution of any pair of variables is bivariate normal, and that all bivariate scatterplots are linear and homoscedastic.


Since it is difficult to investigate all aspects of multivariate normality, at least the univariate distribution of each observed variable should be inspected []. Non-normality is usually screened by examining the skewness and kurtosis of the measurements. There is some disagreement in the literature about how much non-normality is still admissible for an efficient use of the ML method.
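
A univariate screening of skewness and kurtosis can be sketched as follows (the simulated variables are hypothetical, and no specific cut-off is implied here, given the disagreement noted above):

```python
# Sketch: screening each observed variable for univariate non-normality.
import numpy as np, pandas as pd
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(9)
data = pd.DataFrame({"x1": rng.normal(size=300),
                     "x2": rng.exponential(size=300)})   # x2 is deliberately skewed

for column in data:
    # kurtosis() reports excess kurtosis (0 for a normal distribution)
    print(column, skew(data[column]), kurtosis(data[column]))
# Variables with large absolute skewness or kurtosis are candidates for transformation,
# or call for an estimator that is more robust to non-normality than ML.
```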


This comprehensive Second Edition offers readers a complete guide to carrying out research projects involving structural equation modeling (SEM). Updated to include extensive analysis of AMOS' graphical interface, a new chapter on latent curve models and detailed explanations of the structural equation modeling process, this second edition is the ideal guide for those new to the field. The book includes learning objectives, key concepts and questions for further discussion in each chapter, along with helpful diagrams and screenshots that expand on the concepts covered in the text.

It also offers real-life examples from a variety of disciplines to show how SEM is applied in real research contexts, exercises for each chapter on an accompanying companion website, and a new glossary.
