Reliable and valid measurement scales for determinants of the willingness to accept knowledge

Before any acquired knowledge is used or adds value to the receiving project (members), it must be accepted by its recipients, leading to an increase in the recipients' positive attitudes towards, and intended use of, the acquired knowledge. To be willing to accept knowledge, the receiving project's team members must perceive it as valuable and easy to use. The focus of this exploratory paper is to develop and empirically test relevant sub-dimensions of perceived value and ease-of-use. The sub-dimensions were identified through a literature review, and measurement scales were developed empirically by applying a well-established scale development methodology. Because completing a questionnaire containing all initially generated items would be a cumbersome task with a high possibility of (non)response biases, the item pool was reduced by asking respondents, acting as judges, to evaluate each item in terms of its relevance to the definition of the sub-dimension. For this purpose, a questionnaire was developed that measured a respondent's judgement of the relevance using a six-point Likert scale. In total, 321 postgraduate students in the field of Engineering, Technology and Project Management at a South African university responded (August 2017). Because these students were active project managers, this characteristic qualifies them as appropriate subjects. Respondents had an average project work experience of just under 5 years, ranging from a few months up to 40 years of involvement in projects.


INTRODUCTION
Project-based organizing in the economy and society at large is an important managerial practice and is increasingly studied by project management, business, and management scholars [1]-[3]. Because the generation of economic and social value is increasingly knowledge- and information-based, processes like creativity, knowledge development, and innovation become highly relevant in projects. Due to their relatively flexible nature, projects are regarded as very suitable breeding grounds for knowledge creation in the context of its application; however, their temporary nature hinders the sedimentation of knowledge, because when the project dissolves and its members move on, the created knowledge is likely to disperse [4]. This phenomenon is often labelled the project learning paradox [5]. It follows from this paradox that one of the major challenges for project managers concerns the transfer of knowledge created in a project to other organisational contexts, such as subsequent projects or the permanent organisation [6].
At an abstract level, the knowledge transfer process can be modelled as a communication process comprising three basic building blocks [7]: a source (and its context), a transfer, and a recipient (and its context). If we add certain project characteristics to the equation, an interesting and relevant issue surfaces. It is commonly accepted that a project is an action-oriented, temporary endeavour, oftentimes conducting a unique task [8].
The more unique the task performed by the project, the higher the likelihood that the project generates knowledge that is more difficult to apply in other contexts. Focusing on other contexts implies that one has to look at the recipients of knowledge generated in previous projects inside or outside the organisation [9]. In particular, the recipient has to be willing to accept the acquired knowledge, which can be defined as the likelihood that knowledge received will be used in subsequent activities. Here, we assume that at least a part of the knowledge developed in a previous project can potentially be used in subsequent ones. This knowledge can be project related or, more broadly, related to capabilities for managing projects [10]. Inspired by the Technology Acceptance (TA) Model, in which perceived value and ease-of-use are main determinants of the willingness to accept a technology, this study explores and tests measurable sub-dimensions of these two determinants in such a way that they can be applied to a project context. The TA model is widely used for studying the willingness to accept certain technologies or technological artefacts, but its determinants are not designed to measure sub-dimensions of perceived value and ease-of-use. This article, therefore, aims at developing reliable and valid measurement scales for these sub-dimensions of the two determinants of the willingness to accept acquired knowledge by project team members of receiving projects or other organisational units. The focus of this paper is to explore and identify theoretical dimensions of the perceived value and the perceived ease-of-use of acquired project knowledge and to develop scales to measure these sub-dimensions. The research questions for this study therefore are: (1) What are the sub-dimensions of perceived value and perceived ease-of-use of acquired project knowledge, and (2) how can these dimensions be measured in a reliable and valid way?
We argue that the development of reliable and valid measurement scales is crucial for the fields of knowledge and project management, and we contribute to these fields in two ways. A first contribution is the identification of measurable sub-dimensions of the perceived value and perceived ease-of-use of acquired project knowledge. Such a dedicated identification is new to these fields, as current dimensions and measurements predominantly refer to technology and technological artefacts, like, for example, digital technologies [11], mobile payment systems [12], or ride sharing services [13], and not to project knowledge.
Unlike scholars in other scientific fields (e.g. psychology and economics), scholars in knowledge and project management tend to measure similar concepts with different (self-designed) measurements, which are oftentimes not tested for their psychometric characteristics. Consequently, scholars run the risk that effects observed in studies are not the product of the factors investigated, but a result of the different ways constructs are measured.
A second contribution is that this study reduces measurement problems in knowledge and project management by systematically investigating the reliability and validity of the measurement scales developed [14]. The next section discusses the knowledge acceptance concept.

DETERMINANTS OF THE WILLINGNESS TO ACCEPT ACQUIRED PROJECT KNOWLEDGE: PERCEIVED VALUE AND EASE-OF-USE
In the Theory of Reasoned Action (TRA), a theoretical model on the antecedents of human behaviour from the field of psychology developed by Ajzen and Fishbein [15], actors first need to have the intention to behave in a certain way before they actually show the behaviour. Several meta-analyses found strong overall evidence for the predictive utility of the TRA model, as indicated, for example, by Sheppard et al. [16]. Davis [17] extended their work and introduced the Technology Acceptance (TA) Model, adding two concepts that impact an actor's attitude, namely Perceived Usefulness and Perceived Ease-of-Use [17], [18].
A later version of the TA model, TAM2, dropped the attitude concept and replaced it with the 'subjective norm' concept. As a consequence, the behavioural intention to use becomes a function of perceived usefulness (a.k.a. performance expectancy) and perceived ease-of-use (a.k.a. effort expectancy). Perceived usefulness is "the degree to which an individual believes that using a particular system or technology would enhance his or her job performance", whereas perceived ease-of-use is "the degree to which an individual believes that using a particular system or technology would be free of physical and mental effort" [17, p. 26]. The TA model has been used, modified, improved and confirmed empirically in many different settings [11], [17], [19]. Furthermore, results of several meta-analyses show the TA model to be a valid and robust model (see, for example, [20]-[22]). Its core independent concepts, perceived ease-of-use and perceived usefulness, proved to be solid predictors of behavioural intention. Consequently, this model can inform our research, and additional exploration of the literature will have little added value. For projects, one needs to determine whether receiving projects and their members are willing to accept transferred knowledge from other projects and project-related sources. Informed by the TA model, it is proposed that the perceived value and perceived ease-of-use of knowledge acquired from projects may influence the receiver's intention to use the acquired knowledge. This implies that the recipient(s) of the knowledge must be able to understand the knowledge received and have experience with the surrounding conditions and influences in which the knowledge is generated and used, for the knowledge to be meaningful to the receiving project [23]. Put differently, we argue that perceived value and ease-of-use of acquired knowledge are important determinants of the willingness to accept knowledge acquired from projects.
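In its simplest form, this TAM relationship can be written as a regression of behavioural intention on the two determinants. The sketch below fits such a model on synthetic survey data; all variable names, coefficients, and data are illustrative assumptions, not taken from any TAM study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-7 survey scores for perceived usefulness (PU) and
# perceived ease-of-use (PEOU), and a behavioural intention (BI) that,
# by construction here, depends positively on both determinants.
n = 200
pu = rng.uniform(1, 7, n)
peou = rng.uniform(1, 7, n)
bi = 0.6 * pu + 0.3 * peou + rng.normal(0, 0.5, n)

# Ordinary least squares: BI ~ b0 + b1*PU + b2*PEOU
X = np.column_stack([np.ones(n), pu, peou])
b0, b1, b2 = np.linalg.lstsq(X, bi, rcond=None)[0]
# b1 and b2 estimate the weight of each determinant in the intention to use.
```

The point of the sketch is only the functional form: intention as a weighted combination of the two perceptions, which is the structure the TA model posits.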
The need to develop new reliable and valid measurement scales for the sub-dimensions of both determinants of the willingness to accept knowledge is informed by the observation that commonly existing scales are not oriented towards the value and ease-of-use of (project) knowledge, but have technological artefacts or software as their object of measurement.
A few examples illustrate this statement. One of the items measuring perceived ease-of-use in the study of [24, p. 172] reads "It is easy for me to become skilful at using Google Applications", while [25, p. 656] use the item "The healthcare information system can reduce the paper work time" to indicate perceived usefulness. In the previous section, we concluded that there are many applications of the TA model, but none directed at explaining the willingness to accept acquired project knowledge. Furthermore, the same applies to the measurements of the main determinants (perceived value and perceived ease-of-use) [26]. This implies that, for our purposes, we will use the main determinants, but because measurement scales for the perceived value and perceived ease-of-use of acquired project knowledge are non-existent, a systematic exploration of the literature is needed to identify possible sub-dimensions which, taken together, provide a sound measurement of the overall constructs. Possible sub-dimensions were identified by searching the literature using combinations of keywords such as "measurement", "perceived value", "perceived ease-of-use", "knowledge", "projects", etc.
Analogies were used where sub-dimensions were identified that could be adapted, clarified or explained to indicate their applicability to knowledge acceptance. The results of this exploratory search of the literature are presented in the next two sections. After identifying possible sub-dimensions of both main constructs, they will be empirically investigated with the aim of developing valid measurement scales (Sections 4 and 5).

Identifying Sub-Dimensions of Perceived Value of Project Knowledge Acquired
Perceived value of project knowledge acquired is defined as the degree to which a recipient of knowledge believes that the knowledge acquired is relevant, adds value and enhances project work or project performance.
Informed by an exploratory search of the literature, the possible sub-dimensions identified are:
• Uniqueness: The rareness of the transferred project knowledge, including the difficulty to obtain or copy it (level of inimitability) and to find a substitute for it (non-substitutability) [27]. The more unique the knowledge (i.e. the higher its inimitability and non-substitutability), the higher the perceived value of the knowledge and the subsequent competitive advantage it can create or sustain for the project and the organisation [28], [29]. Unique knowledge is often tacit, complex and highly product specific [30] and embedded within a firm's knowledge reservoirs like people, tasks, tools and networks [31]. The ambiguity caused by the tacitness of the knowledge objects often makes knowledge transfer difficult, especially when there is no overlapping process compatible with both actors of learning [32].
• Relevance: The extent to which the knowledge acquired is applicable and salient to projects and subsequent organisational success [33], and whether the subsequent recipient projects and teams will learn a great deal about the technological or process know-how held by the source project [34], [35]. It can be argued that the more relevant the knowledge is to a particular problem or application at hand, the more valuable it is (Ford and Staples, 2006).
• Comprehensiveness: Pertains to the correctness and level of detail of the knowledge that is transferred from a project, which should lead to a deeper understanding of the knowledge content as defined by [36]. It should therefore include the know-what, know-why and know-how of the knowledge objects. In general, the more detail provided, the higher the comprehensiveness, but also the more time and resources it takes to provide such detail. Too much information may also lead to wasted effort and information overload. The correctness of knowledge artefacts and the inclusion of contextual meaning add to the perceived value [37].
• Ability to Improve Quality of Decision Making: The value of project knowledge acquired also lies in whether the received knowledge improves the recipient's ability to make better decisions [38].
• Source Attractiveness: Knowledge value is higher when the recipient deems the knowledge source to be attractive and/or authoritative. This increases the legitimacy of the source and adds value to the acquired knowledge [29], [33]. Source attractiveness can also be seen as the value an organisation or project attaches to specific employees or team members in terms of their influence and ability to perform their work and achieve organisational or project goals [28].

Identifying Sub-Dimensions of Perceived Ease-of-Use of the Project Knowledge Acquired
Perceived ease-of-use of project knowledge acquired is defined as the degree to which an individual working in a project believes that the use of knowledge acquired would be free of physical and mental effort. Informed by the literature, several sub-dimensions could be relevant:
• Understandability of knowledge acquired is indicated by the ease of obtaining a deeper understanding of the knowledge content [36]. It can therefore be defined as the extent to which new knowledge that is transferred from a project can be fully understood by the recipient of the knowledge [33]. For project-based organisations, this means that knowledge generated and transferred by a sender project, team or individual is to be fully and easily understood by individuals and teams elsewhere in the project or across project boundaries [37], [38].
• Speed of application signifies how quickly the recipient acquires new insights and skills [36], or how quickly the project knowledge is retrieved [38]. Should the recipient master useful knowledge, but do so slowly, early mover benefits are likely to be limited and the costs might even outweigh the anticipated benefits [33].
• Economics of Transfer relates to the effort needed to acquire knowledge from a project, as well as to transferring the knowledge through the transfer process [39]-[41]. Excessive use of resources could, similar to a low speed of transfer, also lead to the loss of early mover benefits [33]. The speed and ease with which a recipient can obtain a better understanding of the knowledge will motivate the recipient to utilise the knowledge to its full advantage, and will also enhance the perceived value of the knowledge [41].
The next section deals with the research methodology applied for identifying appropriate items to measure each of the established dimensions of the two concepts and thus develop scales for their measurement.

METHODOLOGY
Measurement scale development as a methodology has been discussed by many scholars [42]- [46]. For this study, a scale development methodology is applied as developed by Schriesheim et al. [43] and adapted by Hinkin and Tracey [46] and referred to by a number of scholars in subsequent scale development studies [45], [47], [48].
The scale development process used includes: (1) item generation, (2) item reduction, (3) content adequacy assessment and validation, and (4) item retention and deletion. Each of these process steps and its application is discussed below.

Item Generation
Item generation is the process of creating items or statements to measure a construct and its dimensions.
Research indicated that most scale development efforts combine an inductive (involving experts) and deductive approach to develop or identify items [49]. When creating items, each item should address only one issue.
Furthermore, all items should be consistent in terms of perspective and should be simple and as short as possible.
Items should be written in a language that is familiar to the target group, and negatively worded items should be avoided. The number of items compiled must ensure that the measure is internally consistent and parsimonious, comprising the minimum number of items that adequately assess the dimension of the construct [50]. As a general rule, used by different authors, 3 to 4 items per dimension should provide adequate internal consistency reliability [45], [50].
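Once pilot response data are available, the internal-consistency heuristic above can be checked, for example with Cronbach's alpha. The following sketch is illustrative only; the respondent scores are hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 6 people to a 4-item sub-dimension (1-6 Likert):
scores = np.array([
    [5, 5, 4, 5],
    [4, 4, 4, 5],
    [6, 5, 5, 6],
    [2, 3, 2, 2],
    [3, 3, 4, 3],
    [5, 6, 5, 5],
])
alpha = cronbach_alpha(scores)  # values above ~0.7 are commonly deemed adequate
```

A 3- or 4-item scale that already reaches an adequate alpha supports the parsimony argument made above: adding further items would lengthen the questionnaire without materially improving reliability.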
A literature review identified possible sub-dimensions that could measure the two determinants of knowledge acceptance: perceived value (5 sub-dimensions) and perceived ease-of-use (3 sub-dimensions). From here, groups of items were derived that could measure each of the sub-dimensions. Items were formulated in such a way that each statement related to one dimension only, to ensure that the subsequent content adequacy assessment could be simplified (see next section). A total of 81 items were compiled for the different sub-dimensions.

Item Reduction
To make the measurement instrument more practical and feasible for respondents, the number of statements needed to be reduced, as the 81 item statements that were generated would make completing a questionnaire a cumbersome task with a high possibility of (non)response biases. Reduction was accomplished by asking respondents, acting as judges, to evaluate each item in terms of its relevance to the definition of the sub-dimension. For this purpose, a questionnaire was developed that measured a respondent's judgement of the relevance using a six-point Likert scale. Respondents were also divided into two groups based on their project work experience, and their judgements compared, because previous studies [51], [52] proposed that more experienced managers process information received differently.
As mentioned, there is no specific rule about the number of items to be selected, although there are helpful heuristics: the measurement items need to be internally consistent and parsimonious and should comprise the minimum number of items that adequately assess the domain of interest [50]. This is achieved by selecting three to four items per sub-dimension. It was decided to select the four top-ranked items for each sub-dimension, while also verifying that their means were above 3.50, meaning that respondents judged the item statement as at least moderately relevant (score of 4). Additionally, an independent-samples t-test was performed on the two experience groups to determine whether there was any statistically significant difference between them. For the higher-ranked and selected items, the results showed no statistically significant differences between the two experience groups. The final lists of selected items used as part of the measurement instrument are referred to in the subsequent results section; the item generation and reduction method and detailed results are published as part of the International Association for Management of Technology (IAMOT) conference proceedings [53].
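The reduction logic described above (rank items by mean relevance, retain the top four above the 3.50 threshold, then compare the two experience groups) can be sketched as follows. Item labels, ratings, and the group split are hypothetical, and a rough critical value replaces the study's exact p-values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-6 relevance ratings for 6 candidate items of one
# sub-dimension, split into two experience groups (e.g. below vs above
# the median number of years of project work).
items = ["U1", "U2", "U3", "U4", "U5", "U6"]
low_exp = rng.integers(3, 7, size=(40, 6)).astype(float)
high_exp = rng.integers(3, 7, size=(35, 6)).astype(float)
all_resp = np.vstack([low_exp, high_exp])

# Step 1: rank items by mean relevance and keep the top four with mean > 3.50.
means = all_resp.mean(axis=0)
ranked = sorted(zip(items, means), key=lambda t: t[1], reverse=True)
retained = [name for name, m in ranked[:4] if m > 3.50]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

# Step 2: per retained item, check that the two experience groups do not
# judge relevance differently. |t| below ~2.0 roughly corresponds to
# "no significant difference" at alpha = 0.05 for these sample sizes.
flags = {name: abs(welch_t(low_exp[:, items.index(name)],
                           high_exp[:, items.index(name)])) < 2.0
         for name in retained}
```

The group comparison in step 2 mirrors the independent-samples t-test reported above; items failing either the mean threshold or the group-equivalence check would be candidates for removal.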

Content Adequacy Assessment and Validation
Content adequacy regards how well the items of a scale measure a theoretical construct. This means that a satisfactory measurement item should only measure the intended theoretical construct and no others [45]. Authoritative work in the field of content adequacy was done by Schriesheim [43], [54], who developed a variety of approaches to assess content adequacy judgements. The approach followed here involved administering a questionnaire followed by a statistical analysis of the results obtained. The questionnaire was set up to measure a respondent's view on how well an item statement, from the reduced list of items, belongs to any of the defined sub-dimension definitions of perceived value and perceived ease-of-use.
Respondents were requested to carefully read each of the item statements and indicate to which of the sub-dimension definitions the statement(s) applied. The list of statements used for the content adequacy assessment contained the items with the highest mean values from the item reduction step (see Table 1 and Table 2) for the two main constructs. To check for any interpretative biases, three additional items reflecting the other construct were included as marker items, for example "I believe in the knowledge because I believe in the person that sent me the knowledge" and the following perceived-value markers:

MV1  The knowledge is provided in such a manner that it is easy to understand
MV2  The knowledge was acquired in a short period of time
MV3  The knowledge was acquired efficiently and in a convenient way

Following [42], [54], if an item statement only applied to a single definition, the respondent had to indicate this with an "X". If the respondent felt that an item statement belonged to multiple dimension definitions, they had to indicate the most relevant belonging with the number 1, the second most belonging with the number 2, and so on. We compiled three sets of questionnaire forms with item statements randomly ordered. To control for any potential order effects, the forms were distributed equally among respondents.
In total, 114 respondents took part in this second study. All respondents were postgraduate, thus employed, students in the field of Engineering, Technology and Project Management at a South African university (March 2018). The panel had an average project work experience of just under 6 years, ranging between a few months and 20 years of involvement in projects. The respondents' judgements on each item were scored following Schriesheim's procedures [42, p. 72]: points were assigned for each entry as depicted in Table 3. Entries over "2" were not scored, as these made up less than 2.2% of the total response data; it is also doubtful whether a respondent can make such fine-grained dimension discriminations. All other non-indicated responses were set to zero. An extended matrix and Q-method approach to content assessment was applied to evaluate the data [43], [45].
First, the scored responses were consolidated into a data matrix in which the rows represent the questionnaire item statements, the columns the content categories or definition statements, and the matrix entries the mean ratings over all respondents. Table 4 and Table 5 show these matrices for our variables.
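A minimal sketch of this consolidation step follows, assuming for illustration a point scheme in which an "X" or a first rank scores 2 points and a second rank 1 point (the study's actual point values are those in its Table 3); the items, categories, and judge responses are hypothetical:

```python
import numpy as np

# Entries: "X" = sole assignment, "1"/"2" = ranked assignments, "" = blank.
# These point values are illustrative assumptions, not the study's Table 3.
POINTS = {"X": 2, "1": 2, "2": 1, "": 0}

items = ["U1", "R1", "C1"]
categories = ["Uniqueness", "Relevance", "Comprehensiveness"]

responses = [  # one (item x category) grid of strings per judge
    [["X", "", ""], ["", "X", ""], ["", "1", "2"]],
    [["1", "", "2"], ["", "X", ""], ["", "", "X"]],
    [["X", "", ""], ["", "1", "2"], ["", "", "X"]],
    [["X", "", ""], ["", "X", ""], ["", "X", ""]],
    [["2", "", "1"], ["", "X", ""], ["", "", "X"]],
]

# Consolidate into the item-by-category matrix of mean ratings: rows are
# item statements, columns are the content-category definitions.
scored = np.array([[[POINTS[cell] for cell in row] for row in judge]
                   for judge in responses], dtype=float)
matrix = scored.mean(axis=0)

# An item looks content-adequate when its highest mean rating falls in its
# intended category (here the intended category index equals the item index).
best = matrix.argmax(axis=1)
```

In this toy matrix each item's highest mean falls in its intended column; the exceptions discussed below (U1, C1, etc.) are precisely items whose highest or near-highest means spill into a non-intended column.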
For perceived value, the mean ratings clearly identified the items having the highest mean values in their respective content categories (except for U1, C1, C2 and C3). The same procedure was followed for perceived ease-of-use (except for S1). For the five exceptions, the mean values indicated potential problems, as these items can be confounded with other, non-intended dimensions. The marker items also showed the highest mean values in the intended marker categories, except MV1 and ME2, implying that care must be taken to clearly distinguish between the two constructs of perceived value and perceived ease-of-use. The factor-analysis results are shown in Table 6 and Table 7. Clear factor structures emerged, except for U1 and C2, which each loaded onto two factors, and MV1, which loaded on an incorrect factor. Table 6 depicts the Perceived Value loadings. For the Perceived Ease-of-Use items, only the ME2 item loaded on two factors (see Table 7). Following the [46] approach to content validation, as an alternative for making item retention and deletion decisions, an ANOVA procedure was employed using the same data as for the factor analysis. According to these authors, this analysis provides a direct method for assessing an item's content validity by comparing the item's mean rating on one conceptual dimension to the item's ratings on another, comparative dimension [46]. It can therefore determine whether an item's mean score is statistically higher on the proposed theoretical construct. The ANOVA indicated that there was a statistically significant difference between the mean values of the different groups under a component/dimension (p < 0.05) for both the Perceived Value and the Perceived Ease-of-Use constructs.
To check whether the groups under a component differed from each other individually, we conducted Duncan's Multiple Range Test (DMRT). The DMRT provides simultaneous comparisons by holding the probability of making a type I error for the entire set of comparisons to the a priori significance criterion [45], [46]. The results of both tests indicated that for Perceived Value, the items U1 and C1 do not have a statistically significant mean difference, therefore loading onto two different factors each, while MV1 loaded on an incorrect factor. For Perceived Ease-of-Use, the item S1 loaded on two factors. Table 8 and Table 9 present a summary of the findings of all statistical tests conducted to establish content validity. Ignoring all marker items, Table 8 indicates that the U1 item is definitely problematic in the "Perceived Value" theoretical content domain, while the C1, C2 and C3 items were found problematic in some tests.
Similarly, Table 9 shows that the S1 item is problematic in the "Perceived Ease-of-Use" dimension.

Statistical Test                           Problematic Items
Means Assessment                           S1, ME2
Extended matrix and Q-method approach      ME2
ANOVA and DMRT                             S1
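The ANOVA logic used above for content validation can be sketched as follows. The rating samples are hypothetical, and only a plain one-way F statistic is shown; Duncan's multiple range test itself is not implemented here:

```python
import numpy as np

def one_way_f(groups):
    """F statistic of a one-way ANOVA across a list of rating samples."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical mean-rating samples for one item across three content
# categories: a content-valid item is rated clearly higher on its intended
# dimension than on the comparative ones.
intended = np.array([2.0, 1.8, 2.0, 1.9, 2.0, 1.7])
comparative1 = np.array([0.2, 0.4, 0.0, 0.3, 0.1, 0.2])
comparative2 = np.array([0.5, 0.3, 0.6, 0.2, 0.4, 0.5])

f_stat = one_way_f([intended, comparative1, comparative2])
# A large F (well above the ~3.7 critical value of F(2, 15) at alpha = 0.05)
# indicates the item discriminates between dimensions; follow-up pairwise
# comparisons (the study used Duncan's multiple range test) then locate
# which category means differ from each other.
```

A problematic item such as U1 would instead show comparable means on two categories, so the pairwise follow-up would fail to separate them, which is exactly the pattern summarised in the table above.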

Item Retention and Deletion Discussion
Different follow-up options are proposed regarding the problematic items identified in the previous section. First, one could scrutinise the problematic items in terms of wording and relevance to their respective definitions and rephrase them to eliminate their possible confounding effect. If this effect is obvious, there might be limited need to re-evaluate the items through an additional content adequacy and validation study. If a completely new item statement needs to be compiled, a re-evaluation study might be compulsory, or the problematic statement should be omitted from the item pool. To understand the reasons for the confounding effects, we discuss the problematic statements below.
The item U1, "The acquired knowledge is highly content specific", loaded mainly on the factors "Uniqueness" and "Comprehensiveness". As the word "specific" relates to qualities and properties of the content, it might be that the use of the word "highly" is problematic. The word "highly" in the item statement could possibly be substituted with the words "thoroughly" or "exceedingly". In this way, the statement can also suggest thorough or exceeding content, which is more comprehensive.
Another reason for the loading onto the Comprehensiveness factor might be the different language background of the respondents. Although the English language is the business language in South Africa, there are 11 official languages in South Africa. In most cases, the English language is not the mother tongue language of respondents, which could also lead to different interpretations of key words. Thus, the word "highly" should be omitted to only indicate that "The acquired knowledge is content specific" or rephrased to "The acquired knowledge is specific in its content".
Similarly, the C1 statement "The acquired knowledge gives me a deeper understanding of the problem or situation at hand" loaded on two factors ("Knowledge Relevance" and "Knowledge Comprehensiveness"). We argue that the words "the problem or situation at hand" might suggest relevance to a very specific problem or situation, and therefore one can omit these words. For the C2 item "The acquired knowledge provides good context", the word "context" could also mean "connection", or being related or relevant to something, hence the confusion with relevance.
It is suggested that this statement is changed to "The acquired knowledge provides good contextual insight".
For the item S1, "I could identify important aspects quickly", loading on both "Comprehensibility of the knowledge" and "Speed of knowledge transfer", it might be that the phrase has an ambiguous meaning. Although the emphasis should have been on the word "quickly", therefore speed, it could also have been read as "easily comprehensible". It therefore makes sense to omit this item.

CONCLUSION
In this paper, theoretical sub-dimensions were identified for the perceived value and perceived ease-of-use of acquired project knowledge. Additionally, measurement scales for these sub-dimensions have been proposed and empirically tested. For academics, the sub-dimensions and measurement scales may provide an opportunity for future research by further developing or replicating the scales, as well as testing these scales for suitability and application within project environments. For practitioners, the sub-dimensions and measurement scales can be used, even in a reduced or simplified way, to monitor whether the knowledge developed in one project is structured and presented in such a way that it may improve the receiving behaviour of individuals in a receiving project.
The development and general use of validated and reliable measurement scales is underdeveloped in the knowledge and project management field. To measure two important determinants of the willingness to accept acquired project knowledge (perceived value and perceived ease-of-use of project knowledge acquired), it is important that sound measurement instruments are developed. The first contribution of this paper is that it developed a content-adequate measurement framework based on a well-established scale development procedure. The scales developed can be used to measure the two constructs by project members and members of other organisational units. This validated measurement will help predict intended and actual knowledge use behaviour in projects.
A second contribution lies in the literature-based identification of possible dimensions of the two determinants.
The dimension "perceived value of project knowledge acquired" has the sub-dimensions uniqueness, relevance, comprehensiveness, ability to improve decision making, and source attractiveness. The sub-dimensions of "perceived ease-of-use of project knowledge acquired" are speed of application, economics of transfer, and understandability of the received knowledge.
The third contribution concerns the formulation and validation of item statements that can be used for measuring the relevant (sub-)dimensions. These item statements were reduced to a manageable set, and a content adequacy assessment was performed to verify the independence of the item statements and to indicate items for retention or deletion. Our analyses led to a final set of usable item statements that can adequately measure the willingness to accept acquired knowledge in projects.
Although our research efforts generated an instrument measuring the willingness to accept knowledge by project members, it should not be seen as the only and best way of measurement. Schriesheim et al. [54] indicated that scale development and content adequacy assessment can best be viewed as a never-ending process.
Consequently, we do not claim to have produced the best measurement; rather, this was an attempt to identify and test suitable item statements enabling the measurement of the relevant dimensions.