Journal of theoretical and applied electronic commerce research

On-line version ISSN 0718-1876

J. theor. appl. electron. commer. res. vol.13 no.2 Talca May 2018 


Investigating the Post-Adoption Attitude of the Web Based Content Management System within Organization

Yujong Hwang1  2 

Jin-Young Chung2 

Dong-Hee Shin3 

1DePaul University, School of Accountancy & MIS, Chicago, USA

2Kyung Hee University, College of International Studies, Yong-in, Republic of Korea

3Chung-Ang University, School of Media and Communication, Seoul, Republic of Korea


In the technology acceptance literature, few studies have examined end users’ post-adoption attitudes toward technologies within organizational contexts before end users start using the technology, that is, at the pre-implementation stage. This research proposes that perceived usefulness, appropriateness, and perceived behavioral control influence post-adoption attitudes toward web based content management systems. The proposed model was empirically tested using the Partial Least Squares technique on data from 148 web based content management system end users collected from a large organization in a field setting. As theorized, all three variables were found to be significant determinants of system users’ post-adoption attitude in the pre-implementation stage. Furthermore, appropriateness was found to be the strongest determinant of system users’ post-adoption attitude. The study findings provide important insights on enhancing system users’ post-adoption attitude in the pre-implementation stage.

Keywords: Post-adoption attitude; Appropriateness; Perceived usefulness; Perceived behavioral control; System implementation; Partial Least Squares (PLS)

1 Introduction

Technology acceptance (TA) studies have been successful in creating knowledge about the factors that affect users’ acceptance of new technologies; such acceptance has usually been measured using behavioral intention and self-reported usage. Authors such as Szajna have cautioned against the use of self-reported usage as a substitute for actual/objective usage [32]. Also, the bulk of the TA literature has offered little guidance to practitioners on how to manage the implementation process. The TA literature has not sufficiently addressed the temporal aspects of the acceptance process, and as such there seems to be a paucity of research specifically aimed at understanding the temporal aspects of the acceptance phenomenon [40]. This research adopts the view that the acceptance process is part of the multistage technology implementation process. As such, looking at specific stages within that process will allow for a better understanding of how to influence end users’ acceptance of new technologies within the workplace.

This research aims to provide some guidance to practitioners and management, especially in relation to the pre-implementation phase. The research question is: In a mandatory adoption environment, and specifically in the pre-implementation phase, what are the variables that are expected to influence and explain employees’ attitudes toward adopting and using the system upon its rollout? At the pre-implementation stage, initial attitudes and expectations toward using the technology are formed. In the context of this research, pre-implementation is synonymous with pre-deployment, that is, the period before the new system has been rolled out and put to use. This stage extends from the time the decision is made to adopt a certain technology to the actual deployment of the system. The criticality of this stage stems from the fact that this is when communications and knowledge about the system are first sent throughout the organization.

Prospective users of the new system begin to form their attitudes toward the use of the technology even before it is deployed. Such attitudes are important because they serve as cues to interpret the environment and affect end users’ expectations as they relate to the system and its usage once deployed. A classic study by Ginzberg finds that the realism of end users’ expectations at the pre-implementation stage is associated with both attitudinal and behavioral success measures [14]. Furthermore, research has found that different sets of beliefs come into play at different stages of a project. For example, the findings of Karahanna et al. support the premise that a different belief structure exists at different stages of the acceptance process: pre-adoption attitudes are mainly determined by a richer set of antecedents, suggesting a more complex process through which users form their attitudes, while post-adoption attitudes are mainly determined by beliefs regarding usefulness and image [19].

Recently, researchers have begun to take a more critical approach when looking at the Technology Acceptance Model (TAM) and the literature that developed and evolved around it [20]. It has been argued that even though the parsimony of the TAM has been its main strength, it has also become, in a way, a limitation and a liability. The TAM might have enabled the building of a narrow cumulative tradition [8]. This tradition can be characterized as incremental, with little added to our knowledge at each step along the way. Critics suggest that the simplicity and rigor of the model became an attraction for this stream of research, thus limiting the attention that would otherwise have been paid to other streams [8], [20], [31]. The parsimony of the TAM might have served the acceptance literature by focusing researchers’ efforts and allowing for highly predictive models, but it also limited our understanding of the acceptance phenomenon. However, it is not fair to criticize the TAM for receiving so much attention.

Interestingly, looking at the bigger picture and at how the TAM has evolved over time, through the TAM2, the Unified Theory of Acceptance and Use of Technology (UTAUT), and the TAM3, one is struck by how the integrative TAM (i.e., the UTAUT) looks more than ever like its origin, the Theory of Reasoned Action (TRA), or, to be more precise, the Theory of Planned Behavior (TPB). The UTAUT aims to explain user intentions to use an IS and subsequent usage behavior. The theory holds that there are four key constructs: 1) performance expectancy, 2) effort expectancy, 3) social influence, and 4) facilitating conditions. The UTAUT constructs of social influence and facilitating conditions are extremely similar to the TPB’s subjective norm and perceived behavioral control [8]. It is as if the literature has undergone a long process of building a tradition that essentially ended up looking like the theory from which the TAM originated. Research utilizing the TAM has also been characterized by the extensive use of student samples, the simplicity of the applications tested, and the use of self-reported measures [20], [21], [38]. Additionally, the issue of common method variance has been raised recently [31].

2 Research Model and Hypotheses

A main goal of this research is to investigate the role different factors play in users' acceptance of new information technology within the workplace, specifically in a mandated adoption environment and at the pre-implementation stage. The pre-implementation stage this research refers to represents the pre-deployment period, which generally spans from the time when the decision to adopt a new system is made by senior management to the time when the system is actually rolled out. The focus of this research is on end users and the process through which they accept, support, and use the new system, or reject, resist, and underutilize it.

This research is aimed at addressing some of the gaps in the technology acceptance literature within the IS discipline. Specifically, this research attempts to fill gaps in three main areas. The first area this research addresses, by looking specifically at the pre-implementation stage of technology adoption, is the temporal gaps that exist in the acceptance literature. Secondly, this research aims to gain a better understanding of the acceptance process in mandatory adoption environments. Even with all the success that the TAM has achieved, it still fails to provide guidance on how to manage this process. Venkatesh et al. point to the fact that even though technology acceptance models might generally provide us with an idea of users’ intentions and usage behaviors, they fail to provide guidance to designers [38]. However, what is not mentioned is that technology acceptance as a process is relevant not only to system designers but also to those concerned with aspects of its implementation. Venkatesh et al. [38] argue that in most cases of innovation implementation failure, the burden falls not on the innovation itself but rather on the implementation process.

Recently, Venkatesh and Bala introduced the TAM3 and argued for a research agenda pointing to the fact that more research is indeed needed with regard to the implementation aspects of technology acceptance [36]. They attempt to redirect technology acceptance research toward a more practical orientation. This latest trend of moving technology acceptance research toward building richer and more practical models might be a response to the recent criticisms of the TAM, as it represents the most dominant model in the technology acceptance literature. For example, in Lee et al., a leading IS researcher is quoted as saying: “imagine talking to a manager and saying that to be adopted technology must be useful and easy to use. I imagine the reaction would be ‘Duh!’ The more important questions are what make technology useful and easy to use” [20] p. 766.

Criticisms of the TAM also include its lack of a means to account for temporal aspects of the TA process [25]. Legris et al. point to the fact that TA literature treats IT implementation as being independent from organizational dynamics [21].

Figure 1: Proposed research model 

2.1 Perceived Usefulness

The proposed model with hypotheses is shown in Figure 1. One of the TAM’s central premises is the instrumentality assumption. Specifically, the TAM was built on the premise that the underlying mechanism by which the model’s main variables are said to operate is their instrumentality. Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) are said to be instrumental for achieving rewards that are extrinsic in nature through the potential increase in performance that results from using the technology; as such, people skip the affective process and rely on a cognitive appraisal process that directly links performance to intentions based on the rewards [12]. The authors further argue that the direct influence of perceived usefulness on behavioral intention, which led to the removal of the attitude construct from the model, is based on “the idea that, within organizational settings, people form intentions toward behaviors they believe will increase their job performance, over and above whatever positive or negative feelings may be evoked toward the behavior per se. This occurs because enhanced performance is instrumental to achieving various rewards that are extrinsic to the content of the work itself, such as pay increases and promotions” [12] p. 986.

One might argue that such a statement is a general one that ignores many aspects of the workplace. For example, Robey suggests a model of user behavior for IT applications and argues that the use of the system is mostly associated with increased job performance, but that this relationship is mediated by both extrinsic and intrinsic rewards [28]. Davis et al. operationalize the variable extrinsic motivation using the same items that were used for perceived usefulness, while intrinsic motivation was operationalized using items such as enjoyable, pleasant, and fun [13]. Venkatesh et al. suggest that intrinsic motivation as operationalized by [13] is most similar to the attitude construct [38]. One can argue that attitude should not be viewed in the same way as intrinsic motivation; the TRA, which provided the theoretical base for the TAM, advances behavioral intentions, not attitudes, as the construct aimed at capturing motivational factors. Furthermore, it is unlikely that the direct use of a system will lead to rewards such as pay raises or promotions. Additionally, Bandura points to the value of considering intrinsic motivation [7]. Armenakis et al. argue for capturing both intrinsic and extrinsic rewards when measuring change recipients’ beliefs with regard to change efforts [6]. Ryan and Deci also emphasize the importance of considering both intrinsic and extrinsic motivation [30]. Conceptualizing motivation as either increased performance, which is implicitly presumed to lead to rewards, or playfulness and enjoyment [37] might not be suitable for organizational settings where the use of the system is mandated.

The instrumentality premise, along with the underlying implicit assumption that increased performance is a shared goal among organizational actors, reflects a rather untenable rational view of organizational reality [18]. Within organizations, the use of a system of some sort has become a part of how jobs are done; thus, the kinds of rewards attached to such usage might not be appropriately captured using traditional operationalizations of constructs such as Perceived Usefulness [12]. Furthermore, increased job performance might not be a shared goal among users; other factors, such as power [22], might come into play. Additionally, increased job performance might not be instrumental in furthering an individual’s higher-level goals; depending on the context and situation, individuals might have different plans to achieve their goals, which may or may not be aligned with the goals that led to the introduction of the technology.

That being said, this research is not dismissing the importance of intrinsic and extrinsic motivation, nor is it aimed at re-conceptualizing the motivational forces behind technology acceptance. This part of this research simply aims to test the validity of the claim that usefulness perceptions by end users are instrumental to achieving rewards. By testing the relationship between Perceived Usefulness and Attitude, more informed statements can be made about the relevance and importance of perceived usefulness as it relates to behavioral intention.

H1: Perceived usefulness will exert a positive influence on attitude.

2.2 Appropriateness

Repenning suggests that managers’ failure to wholeheartedly support an innovation, regardless of their perceptions of how appropriate it is, is a recipe for failure [26]. One can further argue that blind support is not likely, thus appropriateness of the solution or the system becomes even more critical. As such, influencing appropriateness beliefs through messages and management actions becomes central to the change effort. Armenakis and Harris suggest that one of the key components of a change message is the part about appropriateness [5]. Once the sense of a need has been established, the search for a solution that is appropriate for overcoming the discrepancy begins. But in situations where the solution is not of free choice but rather selected by top management or any other higher decision making authority, influencing the appropriateness beliefs of those who will be affected by the change is a more complex process; perceiving that there is a discrepancy and a need for action doesn’t automatically qualify the management’s suggested solution as the appropriate one. Rogers’ arguments with regards to compatibility with needs suggest that change agents’ roles include helping targeted employees identify and recognize that there is a discrepancy which needs to be addressed [29]. This role is complemented by the change agents’ additional role of influencing compatibility perceptions. Moreover, [29] advances the generalized premise that meeting needs results in faster rates of adoption.

Within the IS literature, many constructs have been used which are somewhat related to the concept of appropriateness. For example, compatibility has been utilized in the study of technology adoption [1], [19], [24]. Other constructs which have been used by IS researchers and are relevant to the concept of compatibility are task-technology fit (TTF) [15] and job relevance [37]. However, the operationalization of compatibility and other relevant constructs has been more narrowly focused; items were reflective of how compatible the system or the technology is with one’s work and tasks [1]. Ward et al. find that perceived organizational benefits (POB) from adopting and using an information system have a significant direct influence on users’ attitudes toward using the technology in both the pre-implementation and post-implementation phases [39]. The influence that POB has on pre-implementation attitudes suggests that if employees believe the system will benefit the organization, they will have more favorable attitudes toward its use in the future. At this early stage, believing that the system will be beneficial to the organization implies that the system is appropriate.

For this research it seems more reasonable to address appropriateness at a more general level because end users’ evaluation of the more specific constructs would likely be more relevant in later stages of the implementation project, when they would have enough direct experience with the system. As such the following hypothesis is proposed:

H2: Appropriateness will exert a positive influence on Attitude.

2.3 Perceived Behavioral Control

Ajzen argues that any human behavior, regardless of how mundane it may seem, is not completely under one’s volitional control [4]. Any behavior is subject to external and internal factors that might hinder its performance. By introducing the concept of Perceived Behavioral Control (PBC), Ajzen aimed to expand the TRA so that it could be applied to a wider range of human behaviors that are not completely under the actor’s volitional control [2]-[4]. PBC refers to one’s perceptions of the ease or difficulty associated with performing a behavior [3]. PBC is a product of one’s control beliefs, that is, beliefs about “the presence or absence of requisite resources and opportunities” [3] p. 196. The TPB, as previously mentioned, has been used extensively in IS research [9], [23], [33], [37], [38]. The TPB offers the opportunity to account for more variables that might come into play when studying acceptance at this early stage of the acceptance phenomenon. The context of this research requires a richer theoretical foundation than, say, the TAM or the TRA. Moreover, the UTAUT [38], which came as an extension and integration of many of the theories that have been used in the technology acceptance literature, can be considered an adaptation of the TPB.

Furthermore, self-efficacy beliefs have been tested repeatedly in the context of technology use; social cognitive theory was tested by Compeau et al., and it was found that self-efficacy beliefs play an important role in explaining computer usage [11]. Self-efficacy beliefs were positively associated with affect (i.e., attitude), performance outcome expectations, personal outcome expectations, and usage, and negatively associated with anxiety. Bandura, the originator of the self-efficacy construct, argues that such beliefs affect one’s expended effort and persistence in a behavior. The TPB's construct of perceived behavioral control (PBC) originated from Bandura's work on self-efficacy [3]. This research argues that in a mandatory pre-implementation environment, PBC will play a role in forming the attitude toward using the system rather than influencing behavioral intention directly; thus, it is hypothesized that:

H3: PBC will exert a positive influence on Attitude.

3 Research Method

To test the theoretical model and the study’s hypotheses, a search effort was initiated to find an institutional setting where a new system was being implemented. A university in the Chicago area was identified as a potential site for the study. The university was in the process of implementing a new Web Based Content Management System (WB-CMS) at the institutional level.

3.1 Project Overview

The main researcher met with the IS team to make sure that the setting fulfilled the study’s criteria. The researcher explained to the IS team that the implementation project should involve a new system, a new device, a new application, or a switch from one system to another; in other words, end users should be able to identify the novelty and the newness of the system. Mandatoriness is established if the end users have no choice but to use the new system. The initial meetings confirmed that the project met this study’s criteria. Further communications, meetings, and interviews were conducted to establish the fit between the setting and the study requirements and to collect more information about the project.

A WB-CMS, in this case Microsoft SharePoint, is a system that optimizes the acquisition, production, management, and deployment of content on a website. The university, at the leadership level, had reached the conclusion that the web presentation of the institution is a vital recruiting, marketing, and communication tool, that it reflects the institution's brand, and that it can and should help the university strengthen its presence on the web. Prior to that conclusion, university websites were developed in a haphazard way in which school entities would use the tools of their choice and, in many instances, hire a student worker or an outside entity to create a website. This process was the norm for many years, and it reached the point where the university had a multitude of sites with no consistency in navigation, brand, or content. Specifically, there were varying degrees of graphic sophistication, outdated content, a lack of message continuity, a disjointed view of the institution, and recurring high costs to retool those websites. The university’s first experience with an institution-wide WB-CMS effort took place in 2006-2007. The CMS of choice was Serena Collage, and the migration was voluntary, involving some colleges and other university units. However, shortly after the project started, the company dropped the product and ceased any future development of its CMS system, although it promised to continue providing support. The choice of Collage was driven by many factors, mainly the fact that the system worked in a way such that if the system itself went down, the websites would remain up. Furthermore, the team tasked with looking for a CMS solution at the time had found that Collage was the most popular among similar institutions and concluded that it was the best fit for the higher education model. Additionally, Collage was an already built product, so it did not require much optimization or customization to roll out.

The implementation team partnered with six colleges and 46 smaller units to move their sites from their existing systems to Collage. The implementation team ultimately trained approximately two hundred people to use Collage over the course of the implementation effort. As per the implementation team, the process did not require ongoing training; it was ultimately decided to apply the train-the-trainer approach. Collage was the first attempt to get non-technical people, who are closer and more familiar with content creation, to manage websites. Unfortunately, this goal was never completely realized; users had to be technically qualified at some level to work with Collage, so webmasters at colleges and units continued to play the main role. Some colleges decided that it was too difficult to use, abandoned it, and went back to their old ways of doing business. Added to that was the fact that the company decided to drop the product, giving university entities a reason to go independent. Even though Collage did not achieve the complete set of goals for its implementation, it was not all bad as per the experience of the implementation team. The implementation project achieved some degree of integration across the sites that migrated to Collage and in those instances produced sites that had similar navigation, branding, look, and feel. Additionally, it got people in the university more comfortable with, and even interested in, the idea of using a system to manage content.

A couple of years later, the university leadership decided more forcefully that, in this day and age, the web is one of, if not the, most important recruiting tools for the institution. It was reiterated that it was no longer acceptable for a university website to be stale or to have outdated content or broken links. Websites needed to be more integrated and consistent, containing fresh content that was timely and accurate. At that point, a team from a different area (the IS department) than the one that had implemented Collage was charged with developing a small website as a pilot. The leadership wanted to see and feel something before giving directions to implement a university-wide system. The team knew that Collage was no longer an option. The team used Microsoft SharePoint 2007 to build the site and did so in a fairly short amount of time. It was an improvement over Collage, and users of the website became less dependent on technical support. The team additionally built some other, smaller websites using the same product. Soon after the launch of the pilot website, the leadership directed the team to start a university-wide project to implement a WB-CMS that could take the institution’s web presence to the next level.

3.2 Sampling Procedure

A purposive sampling technique, followed by snowball sampling, was used to recruit participants. Purposive sampling is mainly concerned with “selecting units (e.g., individuals, groups of individuals, institutions) based on specific purposes associated with answering a research study’s questions” [34]. To select participants for the study, the researcher was in close contact with the SharePoint Implementation Team, who identified prospective users based on the implementation plan. The SharePoint Implementation Team, throughout the process of planning the project, identified potential users for project training and communication purposes. The SharePoint implementation plan consisted of establishing working groups from the implementation sites. Those groups served as the main communication tool for the implementation team, in which requirements and other project-related issues were discussed. The working groups were made up of individuals who had been identified as the primary sources of input regarding the development of applications and widgets. The working groups’ members matched the criteria set for prospective participants in this study: they would be using SharePoint 2010 upon its rollout as the main tool for content management on their respective sites. Additionally, the training plan, which was based on input about the users of the system, was used to identify prospective participants for this study.

The researcher was introduced to the main contacts at the implementation sites through e-mail and by attending some of the working groups’ meetings. The researcher visited the main contacts from the working groups at their sites and requested that they identify prospective users of SharePoint 2010. Furthermore, those meetings were used to explain the purpose of this study and to ensure that the prospective participants matched the criteria for the study. Once the lists of prospective users were compiled, an introductory email was drafted by the researcher with the counsel of the SharePoint Implementation Team. The email invited prospective participants to take part in the survey and explained that the data would be used for this study’s purposes only and that anonymity was ensured. The email recipients were given the option to opt out of further communications if they did not want to be a part of the study. The introductory email was sent to around 220 prospective users. A final list of 200 willing participants was compiled based on the responses to the introductory email. The finalized list of participants was sent an e-mail containing a link to an anonymous online survey.

3.3 Survey Refinement

This research was conducted at the individual level of analysis and used the survey method in an organizational setting as the means to collect the data. This self-report method was used to measure the latent variables. Survey items measuring all the variables were adopted from previous studies where they had gone through multiple reliability and validity tests. All measurements used a 5-point Likert scale (see Appendix A for the detailed items in the model). Even though the scale items were adopted from existing literature with established reliability and validity, the researcher took additional steps to refine the wording of the items and identify any problematic ones. The first step included conducting a focus group with the SharePoint Implementation Team. The goals of this step were threefold. First, the team had the chance to point to items which they felt were inappropriate or might affect the implementation project. Second, as the team charged with the implementation, they were asked to identify the types of users for the system. Third, the team was asked to review the survey and comment on its content as implementers. This was meant to ensure that the study variables cover the main aspects of the phenomenon of interest.

The focus group discussions revealed some issues with the wording of some items as they related to the project. Also, the words used in the scaling of some items were modified to better fit the context of the study based on the group’s feedback. The group discussions indicated that the variables as represented by the scale items seemed able to capture the phenomenon of interest. The survey was further reviewed by the researcher, and the necessary changes were made accordingly. The next step included identifying some users who had already gone through the change to SharePoint 2010. The SharePoint Implementation Team introduced the researcher to a group of users who had their websites up and running on SharePoint 2010. The researcher sent the five users who agreed to participate in the survey refinement process an email containing a link to the final draft of the online survey. They were asked not to share the link or the survey with any other people. In the email, the researcher asked the users to comment on the time/length of the survey. Also, the users were asked to identify and comment on the wording of any scale items which they felt were ambiguous, inappropriate, or unrelated to the project.

3.4 The Final Sample

As mentioned earlier, a finalized list of 200 willing participants was used to send the online survey link. The survey tool collected data anonymously. Two days before sending the e-mail link, the willing participants were sent an email notifying them that they would be receiving the survey link in two days. The email described the nature of the study, that no identifying information would be collected, and that the analysis of the data would be at the aggregate level. Once the email containing the survey link was sent, the recipients were given a one-week window to complete the survey. A reminder was sent three days later, and a final reminder was sent one day before the deadline. Of the 200 willing participants, 172 people took the survey. The data were screened for incomplete surveys and any anomalies. A final set of 148 usable surveys was used in the data analysis.

To ensure the adequacy of the sample size, the researcher followed the recommendations of Hair et al. [16]. The authors, while not dismissing the 10 times rule commonly used in PLS-SEM studies, state that researchers should determine the required sample size through a power analysis based on the part of the model with the largest number of predictors. The 10 times rule states that the sample size should be at least ten times the maximum number of arrowheads pointing at any latent variable in the research model. For this research, with three paths pointing at Attitude, the minimum sample size is 30. The sample size of 148 is therefore more than satisfactory for PLS-SEM analysis purposes.
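The arithmetic of the 10 times rule is simple enough to sketch. The function below is only an illustration of the rule as stated above (the function name is ours), using the three structural paths pointing at Attitude in this study’s model as the example input.

```python
def min_sample_size_10x(max_paths_to_any_construct: int) -> int:
    """'10 times rule' minimum sample size (Hair et al. [16]):
    ten times the largest number of structural paths (arrowheads)
    pointing at any latent variable in the model."""
    return 10 * max_paths_to_any_construct

# Attitude receives three paths (PU, Appropriateness, PBC):
print(min_sample_size_10x(3))  # → 30
```

With an observed sample of 148, the model comfortably clears this floor, which is why the power-analysis caveat of Hair et al. does not change the conclusion here.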

3.5 The Data Analysis

The Partial Least Squares (PLS) method, implemented in SmartPLS 2.0, was used to test the research model. Structural Equation Modeling (SEM) is a second-generation technique that enables researchers to test relationships between multiple independent and dependent variables [41], and PLS is an SEM technique that has been used extensively in the IS field [27], [35]. Generally speaking, the two most common approaches to structural model estimation are covariance-based SEM (CB-SEM) and variance-based PLS. The choice between them depends mainly on the objective of the research and its statistical assumptions, among other criteria. PLS is a component-based approach that places minimal demands on sample size and distributional assumptions and can handle complex models with multiple relationships [16]; these features make it more suitable for theory development and variance explanation, and thus for this study. The research is exploratory in nature: while technology acceptance is well established in the IS field, with numerous articles examining many aspects of the phenomenon, the study of technology acceptance at the pre-implementation stage in a mandatory environment is relatively new.

3.6 Data Screening and Preparation

Before the analysis in SmartPLS 2.0, the data set was examined for missing data. The 15% rule was followed, whereby any observation missing 15% or more of its answers is removed; none of the observations met this threshold. Hair et al. recommend using mean replacement when less than 5% of the values per indicator are missing [16]. The data were screened visually multiple times, and none of the indicators were missing any values. The screening process also looked for straight-lining and other inconsistent response patterns. As mentioned earlier, 172 surveys were submitted; of these, 20 were incomplete and were removed in the first round. Two further completed surveys were removed for inconsistent response patterns: one respondent, for example, answered a pair of reverse-worded items in a way that contradicted his or her responses to the other items measuring the same variable. Table 1 shows the demographic variables for the final sample of 148 survey participants.

Table 1: Demographic variables 
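The screening rules just described (the 15% rule for respondents and mean replacement for indicators missing fewer than 5% of their values) can be sketched in plain Python. This is a hypothetical illustration, not the procedure actually run on the study data; missing answers are encoded as `None`.

```python
def screen_responses(responses, row_cutoff=0.15, col_cutoff=0.05):
    """Screen survey data per the rules above (illustrative sketch):
    drop any respondent missing `row_cutoff` (15%) or more of their
    answers, then mean-replace any indicator whose missing rate is
    below `col_cutoff` (5%) [16]. `responses` is a list of equal-length
    answer lists; None marks a missing answer."""
    n_items = len(responses[0])
    # 15% rule: remove respondents with too many missing answers
    kept = [row[:] for row in responses
            if sum(a is None for a in row) / n_items < row_cutoff]
    # mean replacement for sparsely missing indicators
    for j in range(n_items):
        column = [row[j] for row in kept]
        n_missing = sum(a is None for a in column)
        if 0 < n_missing / len(kept) < col_cutoff:
            mean = (sum(a for a in column if a is not None)
                    / (len(kept) - n_missing))
            for row in kept:
                if row[j] is None:
                    row[j] = mean
    return kept
```

Under these thresholds, a respondent answering 7 of 8 items (12.5% missing) is retained, while one answering only 4 of 8 is dropped.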

3.7 Specifying the PLS-SEM Path and Measurement Model

The PLS specification process starts with the structural model, which shows the relationships between the study’s variables; the modeling procedure does not allow circular relationships. The paths in the model represent the research hypotheses, that is, the hypothesized relationships between the model’s latent variables. The PLS algorithm tests the significance of these relationships (i.e., paths) and produces the path coefficients together with the R² (the explained variance) of the model’s dependent variables. A PLS path model comprises two sub-models that constitute the foundation of the PLS method: the measurement model (also termed the outer model), which relates the measured variables (indicators) to their corresponding latent variables (constructs), and the structural model (also termed the inner model), which relates the latent variables to one another.

The measurement model in PLS is assessed by examining internal consistency, convergent validity, and discriminant validity [10], [17]. Internal consistencies (similar to Cronbach’s alpha) of .7 or higher are considered adequate [10], [11], [17]. Convergent and discriminant validity are assessed by applying the criteria that (1) the square root of the average variance extracted (AVE) by a construct from its indicators should be at least .707 (i.e., AVE > .50); (2) it should be greater than that construct’s correlation with other constructs [17], [41]; and (3) an item should load more highly on the construct it is intended to measure than it does on another construct. The structural model and hypotheses are assessed by examining the significance of the path coefficients (similar to standardized beta weights in a regression analysis) and the variance accounted for by the antecedent constructs.
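These validity criteria can be expressed as a short computational check. The sketch below is a hypothetical illustration, assuming standardized indicators (so the AVE reduces to the mean squared loading); the function names are ours and the numbers in the usage example are made up.

```python
import math

def ave(loadings):
    """Average variance extracted by a construct from its indicators,
    computed here as the mean squared standardized loading."""
    return sum(l * l for l in loadings) / len(loadings)

def validity_ok(loadings, corrs_with_other_constructs):
    """Criteria (1) and (2) above: sqrt(AVE) must be at least .707
    (i.e., AVE > .50) and must exceed the construct's correlation with
    every other construct (the Fornell-Larcker criterion)."""
    root_ave = math.sqrt(ave(loadings))
    return (root_ave >= 0.707
            and all(root_ave > abs(r) for r in corrs_with_other_constructs))

# Hypothetical construct with loadings .85, .88, .90 and correlations
# of .60 and .68 with the other constructs:
print(validity_ok([0.85, 0.88, 0.90], [0.60, 0.68]))  # → True
```

Criterion (3), the cross-loading check, is read directly off the factor structure matrix (Table 3) rather than computed per construct.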

3.8 Test Results

Table 2 shows the internal consistency reliabilities and correlations. As recommended [17], [41], all internal consistency reliabilities exceeded .7, and all diagonal elements (the square root of the variance shared between each construct and its measures) exceeded .707 and were higher than the correlations between the target construct and the other constructs.

Table 2: Reliability, correlations and the square root of AVE (n=148) 

Table 3 presents the factor structure matrix of the study variables. All items exhibited high loadings (>.707) on their respective constructs, and no item loaded more highly on a construct it was not intended to measure, demonstrating strong convergent and discriminant validity. Collectively, the psychometric properties of the constructs were considered more than adequate.

Table 3: Items’ loading and cross loadings 

The PLS structural model and hypotheses were assessed by examining the path coefficients and their significance levels. Following Chin [10], bootstrapping (with 500 resamples) was performed on the model to obtain estimates of standard errors for testing the statistical significance of the path coefficients with a t-test. Figure 2 shows that the overall sample (n=148) supports all three hypotheses, with a relatively high R² of 72 percent. Appropriateness shows the strongest path to Attitude in the model.
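The bootstrap logic behind this significance test can be illustrated on a single path. The sketch below substitutes a simple OLS slope for the full PLS estimation (SmartPLS resamples the entire model), so it is a conceptual stand-in with hypothetical data, not a reproduction of the study’s analysis.

```python
import random
import statistics

def slope(xs, ys):
    """OLS slope of y on x, standing in for one path coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

def bootstrap_t(xs, ys, n_resamples=500, seed=42):
    """Resample cases with replacement, recompute the coefficient each
    time, and return estimate / bootstrap standard error: the t-like
    statistic used to judge path significance (cf. Chin [10])."""
    rng = random.Random(seed)
    n = len(xs)
    estimate = slope(xs, ys)
    resampled = []
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        resampled.append(slope([xs[i] for i in idx],
                               [ys[i] for i in idx]))
    return estimate / statistics.stdev(resampled)

# Hypothetical data for n=148: a genuine effect of about .5 plus noise
xs = [i / 10 for i in range(148)]
noise = random.Random(7)
ys = [0.5 * x + noise.gauss(0, 1) for x in xs]
print(bootstrap_t(xs, ys) > 1.96)  # significant at the .05 level
```

In the actual analysis this resampling is applied to every structural path simultaneously, which is why the reported coefficients in Figure 2 each carry their own significance level.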

4 Discussion

Research in the technology acceptance literature has found the relationship between Perceived Usefulness (PU) and Behavioral Intention (BI) to be the most consistent of TAM’s relationships. This finding, along with the instrumentality premise discussed earlier, led to the removal of the Attitude construct from the original TAM. This research found PU to be a significant predictor of Attitude, thus supporting hypothesis 1, with a moderate effect size (.15) in explaining Attitude’s variance (R²). These findings suggest that beliefs about the usefulness of the system may play a role in forming attitudes toward using the new system at this early stage. Testing the model also revealed that Appropriateness (App) has a significant effect on Attitude, thus supporting hypothesis 2. Appropriateness, as a variable within the context of this study, captures end users’ perceptions of how the new system will improve the performance of the organization as a whole and how appropriate it is given the context and need. Appropriateness has a moderate-to-strong effect size (.31) in explaining Attitude’s variance, twice the effect of PU. This finding suggests that end users’ perceptions of the appropriateness of the system being implemented play a significant role in forming their attitudes toward their future use of the technology.

Figure 2: Test results 

4.1 Contribution

The earlier discussion of similar constructs such as compatibility [1], [19], [24], which is more specific and concerned with capturing the personal aspects of fit, offers insight into this finding. Variables such as compatibility are more appropriate once actual interaction between the users and the system has occurred. At earlier stages, such as the pre-implementation stage, messages about the appropriateness of the new system will have a favorable effect on end users’ attitudes; as the project progresses, however, the content of the appropriateness message must become more personal and pitched at the end users’ level.

Perceived Behavioral Control (PBC) was also found to have a significant effect on Attitude, thus supporting hypothesis 3, with a moderate effect size (.19) in explaining Attitude’s variance. Users’ perceptions of the availability of resources and of their ability to acquire the knowledge necessary to use the system influence their attitudes toward using it. This finding suggests that providing resources, such as time and training, that focus on enhancing the skills of prospective users will have a positive effect on how they view their future system use [9].

Jointly, the three variables (PU, App, and PBC) explain a relatively large portion of Attitude’s variance (72%). Of the three, Appropriateness displayed the strongest effect size. This finding highlights the importance of communications that emphasize the appropriateness of the system being implemented [5]. It further points to the importance of communicating the selection process for the new system and the main decision drivers behind it. Messages signaling that the choice of SharePoint 2010 was based on an informed decision-making process increase end users’ confidence in the appropriateness of the system and ultimately have a positive effect on their attitudes toward using it. The long planning and preparation period of the project appears to have helped create a positive attitude among end users: this period involved regular communications and workgroup meetings, which appear to have increased end users’ confidence in the appropriateness of the chosen system, the availability of resources, and the system’s usefulness.

4.2 Limitations

While this study offers new insights into the technology acceptance phenomenon at the pre-implementation stage in a mandatory environment, it is not without limitations. First, the sample was drawn from a single organization (a university) and concerned a single system (content management). Additionally, the items measuring the study’s variables were modified to reflect both the nature of the project (i.e., the system to be implemented) and its stage. As such, the results of the study may not be generalizable to other contexts.

The second limitation is the use of a self-report survey to collect the data. Measuring all variables in the same survey raises the issue of common method bias, which therefore cannot be ruled out. Furthermore, answering items measuring Perceived Usefulness may have posed a challenge for respondents, since they had not yet interacted with the system; however, their likely familiarity with WB-CMS in general and with Microsoft products (i.e., the interface) may have reduced that possibility.

The third limitation is that Perceived Ease of Use was not included in the model. However, our main focus is not to retest the TAM; rather, we test other factors, such as Appropriateness, in determining Attitude in a mandatory environment. Future studies can examine how the original TAM relates to our study to complete the picture of this important phenomenon.

The fourth limitation is that the correlations among several constructs are high in the data analysis. However, the test results show that convergent and discriminant validity hold based on the criteria stated above: the square root of the average variance extracted (AVE) of each construct is at least .707 (i.e., AVE > .50), exceeds that construct’s correlations with the other constructs, and each item loads more highly on the construct it is intended to measure than on any other construct. Future research could overcome many of these limitations by extending the study to more organizations and different systems.

Furthermore, a longitudinal study collecting data at multiple points in the project would offer deeper insights into the acceptance process as it relates to organizational change efforts; for example, data could be collected after the first training session, after the last training session, upon rollout, and after three months of usage. Future research could also incorporate more organizational and contextual variables into acceptance models. The change management, social psychology, and leadership literatures are examples of fields that can enrich and deepen our understanding of the technology acceptance phenomenon and allow for a better understanding of the factors that affect the strength of the relationships between the variables of interest.

5 Conclusion

The major goal of this study is to explore and understand the technology acceptance attitude phenomenon in a mandatory pre-implementation environment within an organization. The technology acceptance phenomenon has been studied extensively within the IS field; however, studies in a context where acceptance is mandatory and still at the pre-implementation stage are lacking. This research was conducted with the goal of answering major research questions within such a context. Those questions guided hypothesis development and allowed for exploring the nature of the relationships between the study’s variables and how they differ in this context from the contexts examined before.

The research model for this study was developed to answer the major research questions, which aim to understand post-adoption attitude at the pre-implementation stage in a mandatory environment. It also highlights the role of appropriateness and attitude as a conduit for the acceptance process among end users. The introduction of a new technology into a workplace is in fact a change endeavor subject to contextual factors that affect individuals’ reactions to the technology and its use. The results and analysis show that the model has high explanatory and predictive power and offers valuable insights and guidance for implementers initiating technology-related changes within organizations.


This work was supported by a grant from Kyung Hee University in 2014 (KHU-20140353).

All three authors are co-corresponding authors.


[1] R. Agarwal and J. Prasad, The role of innovation characteristics and perceived voluntariness in the acceptance of information technology, Decision Science, vol. 28, no. 3, pp. 557-582, 1997. [ Links ]

[2] I. Ajzen, From intentions to actions: A theory of planned behavior, in Action-Control: From Cognition to Behavior (J. Kuhl and J. Beckman, Eds.). Heidelberg, Germany: Springer,1985, pp. 11- 39. [ Links ]

[3] I. Ajzen, The theory of planned behavior, Organizational Behavior and Human Decision Processes, vol. 50, no. 2, pp. 179-211, 1991. [ Links ]

[4] I. Ajzen, Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior, Journal of Applied Social Psychology, vol. 32, no. 4, pp. 665-683, 2002. [ Links ]

[5] A. A. Armenakis and S. G. Harris, Crafting a change message to create transformational readiness, Journal of Organizational Change Management, vol. 15, no. 2, pp. 169-183, 2002. [ Links ]

[6] A. A. Armenakis, J. B. Bernerth, J. P. Pitts, and H. J. Walker, Organizational change recipients’ beliefs scale, Journal of Applied Behavioral Science, vol. 43, no. 4, pp. 481-505, 2007. [ Links ]

[7] A. Bandura, Social Foundations of Thought and Action. Englewood Cliffs, NJ: Prentice-Hall, 1986. [ Links ]

[8] I. Benbasat and H. Barki, Quo vadis, TAM?, Journal of the Association for Information Systems, vol. 8, no. 4, pp. 211-218, 2007. [ Links ]

[9] S. A. Brown, A. P. Massey, M. M. Montoya-Weiss, and J. R. Burkman, Do I really have to? User acceptance of mandated technology, European Journal of Information Systems, vol. 11, no. 4, pp. 283-295, 2002. [ Links ]

[10] W. W. Chin, The partial least squares approach to structural equation modeling, in Modern Methods for Business Research (G. A. Marcoulides, Ed.). Mahwah, NJ: Lawrence Erlbaum Associates, 1998, pp. 295-336. [ Links ]

[11] D. R. Compeau, C. A. Higgins and S. Huff, Social cognitive theory and individual reactions to computing technology: A longitudinal study, MIS Quarterly, vol. 23, no. 2, pp. 145-158, 1999. [ Links ]

[12] F. D. Davis, R. P. Bagozzi and P. R. Warshaw, User acceptance of computer technology: A comparison of two theoretical models, Management Science, vol. 35, no. 8, pp. 982-1003, 1989. [ Links ]

[13] F. D. Davis, R. P. Bagozzi and P. R. Warshaw, Extrinsic and intrinsic motivation to use computers in the workplace, Journal of Applied Social Psychology, vol. 22, no. 14, pp. 1111-1132, 1992. [ Links ]

[14] M. J. Ginzberg, Early diagnosis of MIS implementation failure: Promising results and unanswered questions, Management Science, vol. 27, no. 4, pp. 459-478, 1981. [ Links ]

[15] D. L. Goodhue and R. L. Thompson, Task-technology fit and individual performance, MIS Quarterly, vol. 19, no. 2, pp. 213-236, 1995. [ Links ]

[16] J. F. Hair Jr., G. T. M. Hult, C. Ringle, and M. Sarstedt, A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). New York: SAGE Publications, Incorporated, 2013. [ Links ]

[17] Y. Hwang and K. Lee, Investigating the moderating role of uncertainty avoidance cultural values on multidimensional online trust, Information & Management, vol. 49, no. 3, pp. 171-176, 2012. [ Links ]

[18] A. Jeyaraj, J. W. Rottman and M. C. Lacity, A review of the predictors, linkages, and biases in IT innovation adoption research, Journal of Information Technology, vol. 21, no. 1, pp. 1-23, 2006. [ Links ]

[19] E. Karahanna, D. W. Straub and N. L. Chervany, Information technology adoption across time: A cross-sectional comparison of pre-adoption and post-adoption beliefs, MIS Quarterly, vol. 23, no. 2, pp. 183-213, 1999. [ Links ]

[20] Y. Lee, K. A. Kozar and K. Larsen, The technology acceptance model: Past, present, and future, Communications of the Association for Information Systems, vol. 12, no. 50, 752-780, 2003. [ Links ]

[21] P. Legris, J. Ingham and P. Collerette, Why do people use information technology? A critical review of the technology acceptance model, Information & Management, vol. 40, no. 3, pp. 191-204, 2003. [ Links ]

[22] M. L. Markus, Power, politics, and MIS implementation, Communications of the ACM, vol. 26, no. 6, pp. 430-444, 1983. [ Links ]

[23] K. Mathieson, Predicting user intentions: Comparing the technology acceptance model with the theory of planned behavior, Information Systems Research, vol. 2, no. 3, pp. 173-191, 1991. [ Links ]

[24] G. C. Moore and I. Benbasat, Development of an instrument to measure the perception of adopting an information technology innovation, Information Systems Research, vol. 2, no. 3, pp. 192-223, 1991. [ Links ]

[25] W. J. Orlikowski and C. S. Iacono, Research commentary: Desperately seeking the IT in IT research-a call to theorizing the IT artifact, Information Systems Research, vol. 12, no. 2, pp. 121-134, 2001. [ Links ]

[26] N. Repenning, A simulation-based approach to understanding the dynamics of innovation implementation, Organization Science, vol. 13, no. 2, pp. 109-127, 2002. [ Links ]

[27] C. M. Ringle, M. Sarstedt and D. W. Straub, Editor's comments: A critical look at the use of PLS-SEM in MIS quarterly, MIS Quarterly, vol. 36, no. 1, pp. iii-xiv, 2012. [ Links ]

[28] D. Robey, User attitudes and management information system use, Academy of Management Journal, vol. 22, no. 3, pp. 527-538, 1979. [ Links ]

[29] E. Rogers, Diffusion of Innovations, 5th ed. New York: Free Press, 2003. [ Links ]

[30] R. M. Ryan and E. L. Deci, Self-determination theory and the facilitation of intrinsic motivation, social development, and wellbeing, American Psychologist, vol. 55, no. 1, pp. 68-78, 2000. [ Links ]

[31] D. W. Straub and A. Burton-Jones, Veni, Vidi, Vici: Breaking the TAM logjam, Journal of the Association for Information Systems, vol. 8, no. 4, pp. 223-229, 2007. [ Links ]

[32] B. Szanja, Empirical evaluation of the revised technology acceptance model, Management Science, vol. 42, no. 1, pp. 85-92, 1996. [ Links ]

[33] S. Taylor and P.A. Todd, Understanding information technology usage: A test of competing models, Information Systems Research, vol. 6, no. 2, pp. 144-176, 1995. [ Links ]

[34] C. Teddlie and F. Yu, Mixed methods sampling a typology with examples, Journal of Mixed Methods Research, vol. 1, no. 1, pp. 77-100, 2007. [ Links ]

[35] N. Urbach and F. Ahlemann, Structural equation modeling in information systems research using partial least squares, Journal of Information Technology Theory and Application, vol. 11, no. 2, pp. 5-40, 2010. [ Links ]

[36] V. Venkatesh and H. Bala, Technology acceptance model 3 and a research agenda on interventions, Decision Sciences, vol. 39, no. 2, pp. 273-315, 2008. [ Links ]

[37] V. Venkatesh and F. D. Davis, A theoretical extension of the technology acceptance model: four longitudinal field studies, Management Science, vol. 46, no. 2, 186-204, 2000. [ Links ]

[38] V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis, User acceptance of information technology: Toward a unified view, MIS Quarterly, vol. 27, no. 3, pp. 425-478, 2003. [ Links ]

[39] K. W. Ward, S. A. Brown and A. P. Massey, Organizational influences on attitudes in mandatory system use environments: A longitudinal study, International Journal of Business Information Systems, vol. 1 no. 1-2, pp. 9-30, 2005. [ Links ]

[40] W. Xia and G. Lee, The influence of persuasion, training, and experience on user perceptions and acceptance of IT innovation, in Proceedings of the Twenty First International Conference on Information Systems, Pittsburgh, 2001, pp. 371-384. [ Links ]

[41] M. Y. Yi and F. D. Davis, Developing and validating an observational learning model of computer software training and skill acquisition, Information Systems Research, vol. 14, no. 2, pp. 146-169, 2003. [ Links ]

Appendix A

Appropriateness [6]

I believe that the change to the new system will have a favorable effect on our operations.

When I think about the change to the new system I realize that it is appropriate for our organization.

The change to the new system will improve the performance of our organization.

I believe that the change to the new system will prove to be the best for our situation as an organization.

Perceived Usefulness [12]

I believe I would find using SharePoint in my job to be useful.

Using SharePoint would enhance my effectiveness in my job.

Using SharePoint in my job would improve my performance.

Perceived Behavioral control [33]

I believe that I will have the resources necessary to use SharePoint.

I believe that I will have the knowledge necessary to use SharePoint upon its rollout.

Attitude [33]

All things considered…

…my expectation and use of SharePoint upon its roll-out is: (Very Bad…Very Good)

…my adoption/use of SharePoint upon its roll-out is: (Very Worthless…Very Valuable)

…my adoption/use of SharePoint upon its roll-out is: (Very Negative…Very Positive)

Received: May 20, 2017; Revised: October 03, 2017; Accepted: October 17, 2017

This is an open-access article distributed under the terms of the Creative Commons Attribution License.