TIME-DRIVEN ACTIVITY-BASED COSTING RELATED TO DIGITAL TWINNING IN ADDITIVE MANUFACTURING

Many businesses in the additive manufacturing industry have limited equipment capacity. Using time-driven activity-based costing in combination with digital twinning helps such businesses to optimise their use of time and capacity. Optimising the use of time is essential to ensure efficient process flow and to waste less time and money. To optimise, we need to analyse system dynamics and model system responses, enabling us to consider various scenarios iteratively. This paper first considers activity-based costing, driven by its most precious resource: time. Standard time is defined as the base parameter by which cost is calculated. Charge-out rates of elements are based on the actual cost of equipment, apportioned to activities according to the time spent using that equipment. The process chain is broken into elements, each of which incurs its full cost when started. The value chain develops accordingly, enabling us to predict the actual cost of production. Second, the use of digital twinning to model standard time is considered. Stochastic variation is evident, but a standard time can be allocated to each element in the process chain at a given confidence level. Together, these form a cause-and-effect prediction model. The model predicts the time that a process chain, consisting of known elements, will take; in the event of an abnormal occurrence, the updated expected time can be predicted, and, using the same rates, the new cost can be determined immediately. We propose that the digital twin can predict production cost, based on a statistically measurable stochastic variation of element duration and the time-varying charge-out rate.


INTRODUCTION: A DIGITAL TWIN FOR TIME-DRIVEN ACTIVITY-BASED COSTING
'Time is money' is a well-known saying that is often heard in various areas of the additive manufacturing business. The more time that is available, the more production hours are available, and thus the more products can be produced and possibly sold. The additive manufacturing industry is growing rapidly and is becoming more competitive. Ensuring that a business's additive manufacturing process runs flawlessly is key to distinguishing it from its competitors. For the purpose of this research it is important to keep the additive manufacturing process in mind. Time is the only resource that cannot be bought, spared, or put on hold, making it one of the most valuable resources. Optimising the use of time is essential to ensure efficient process flow and to reduce waste. Time standards are determined by using estimates, historical records, and work measurement procedures. In past years, analysts relied heavily on estimates as a way to establish standards. However, experience has shown that no individual can establish consistent and fair standards by merely looking at a job and judging the amount of time required to complete it. As a result, standard times were developed.
Digital twinning should be used to optimise the use of time. Digital twinning is the mapping of a physical asset to a digital platform. Data collected by digital twins can be used to predict breakages before they happen and to report them to human operators, saving money and time during production. Data is collected by means of manual or automatic sensors. A digital twin predicts how long each step in a process will take, and so can aid in determining standard times; in the end, standard times act as the foundation for a time-driven activity-based costing (TDABC) system. Time is considered to be the main driver, as optimised time reduces costs when hourly rates remain unchanged.

TIME-DRIVEN ACTIVITY-BASED COSTING
Standard times form the foundation for activity-based costing. With the historical records method, production standards are based on the records of similar previously performed jobs. Some jobs carry personal, unavoidable delay time to a much greater extent than they should. It is possible that the personal problems of employees will affect their motivation and speed in doing a particular task; the time that it takes to do specific jobs will also vary. Even though the physical act of additive manufacturing is performed by machine, the processes leading up to it and the unloading that takes place afterwards are done by employees. Thus it is important to allocate a share of delay time to a job to ensure that personal problems do not affect the outcome of the company or of the job that a particular employee has to do. Historical records have consistently shown deviations of as much as 50 per cent on the same operation of the same job. Any of the work measurement techniques (stopwatch time studies, predetermined time systems, standard data, time formulas, or work sampling studies) is a better way to establish fair production standards. All these techniques are based on establishing a realistic allowed time standard for performing a given task, with due allowance for fatigue and for personal and unavoidable delays [1]. Time is the main driver, as time is the variable that cannot be saved, made, or added; one only has a fixed amount of time available, and so one needs to use it optimally. 'Standard time' is defined as the normal time it takes to perform a task, plus certain allowances. It is the average time that an operation takes; for example, how long it takes to prepare the machines before an additive manufacturing job takes place.
ST = NT + allowances (1)

where:
ST = the standard time allocated to a task;
NT = the normal time it will take a person to do the task under perfect conditions;
Allowances = additional time for any ergonomic factors that might delay the task (e.g., extreme heat, which leads to more breaks and a lower work pace, and so increases ST).

NT = OT x rating (2)

where:
OT (observed time) = the statistical average time observed to do a task [1];
Rating = an expert opinion of the percentage rate at which the observed worker performed during the observed time measurements.

Employees require time to become proficient in any new or difficult operation. Frequently, time study analysts establish a standard for a relatively new operation for which there is insufficient volume for the operator to reach top efficiency. If one uses the concept of output per operator (i.e., rating the operator out of 100) as the basis for operator grading, the resulting standard may be tight, and most operators will most likely be unable to make any incentive earnings. In contrast, if the analyst sets a standard for a new job that currently has low volume and the size of the order increases, problems may occur. Temporary standards eliminate this issue. The analyst establishes the standard by considering the difficulty of the work assignment and the number of pieces to be produced. Then, by using a learning curve for the work, as well as the existing standard data, the analyst can develop an equitable temporary standard for the work. The resulting standard will be considerably more liberal than if the job involved a large volume. When released to the production floor, the standard is marked 'temporary', and includes the maximum quantity for which it applies. Once released, temporary standards should only be in effect for the duration of the contract, or for 60 days; in the case of additively manufactured medical implants, the duration is 42 days (six weeks). On expiration, they should be replaced with permanent standards, which are the standard times.
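As a simple illustration, the standard-time calculation described above can be sketched in Python. All of the numbers (observed times, rating, allowance fraction) are hypothetical, chosen only to show how OT, NT, and ST relate.

```python
# Illustrative sketch of the standard-time calculation.
# All figures are hypothetical.

observed_times = [4.2, 4.5, 4.1, 4.4, 4.3]  # stopwatch readings, minutes per cycle

# Observed time (OT): statistical average of the measurements
ot = sum(observed_times) / len(observed_times)

# Rating: analyst's judgement of the operator's pace (1.0 = normal pace)
rating = 0.95

# Normal time: NT = OT x rating
nt = ot * rating

# Standard time: normal time plus allowances
# (allowances expressed here as a fraction of normal time, e.g. 15%)
allowance_fraction = 0.15
st = nt * (1 + allowance_fraction)

print(f"OT = {ot:.2f} min, NT = {nt:.2f} min, ST = {st:.2f} min")
```

Expressing the allowance as a fraction of normal time is one common convention; a fixed allowance in minutes could be added instead.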
Work sampling is a technique used to investigate the proportions of total time devoted to the various activities that constitute a job or work situation. The results of work sampling are sufficient to determine machine and personnel utilisation, allowances applicable to the job, and production standards [1]. The same information can be obtained when using the time study method. However, work sampling provides the same information faster and at considerably less cost. In conducting work sampling studies, analysts take a comparatively large number of observations at random intervals. According to Freivalds [1], the ratio of observations of a given activity to the total observations approximates the percentage of time that the process is in that state of activity. For example, if 100 observations taken at random intervals over a day showed that the additive manufacturing machine was working in 60 instances but was idle for different reasons in 40 instances, then the downtime of the machine is 40 per cent of the working day. Optimisation of time is once again the most critical piece of the puzzle, and thus time is the main driver.
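The 60-working / 40-idle example above can be sketched as a small work-sampling calculation. The observation counts are the hypothetical ones from the example; the confidence interval uses the standard normal approximation for a proportion.

```python
# Work-sampling sketch: estimating machine utilisation from random
# observations. Counts follow the 60-working / 40-idle example.
import math

observations = 100
working = 60          # machine observed running
idle = observations - working

p = working / observations   # estimated proportion of time working
downtime = 1 - p             # estimated downtime fraction

# 95% confidence interval for the proportion (normal approximation)
z = 1.96
half_width = z * math.sqrt(p * (1 - p) / observations)

print(f"Utilisation = {p:.0%} +/- {half_width:.1%}; downtime = {downtime:.0%}")
```

The interval half-width shrinks with the square root of the number of observations, which is why work-sampling studies rely on a comparatively large number of random observations.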
Freivalds [1] mentions that the work sampling method has several advantages over the conventional time study procedure, including:
- It does not require continuous observation by an analyst over a long time.
- The total work hours expended by the analyst are usually much fewer.
- The operator is not subjected to long-period stopwatch observations.
- A single analyst can readily study crew operations.
All methods of determining standard times are sufficient, and any one can be applied.
Once the concept of standard time has been grasped, it is possible to apply the rand per hour concept to any process. This means that every hour that is used costs the business money; the less time that is used, the less money is spent. Time in this instance is the driver of money. Standard times are essential when using the time-driven activity-based costing method.
Time-driven activity-based costing (TDABC) is a cost-calculation methodology that estimates the costs of activities based on the unit cost of supplying capacity and the time required to perform the service in question [2]. TDABC is regarded as a costing model that can handle the complexity and variability of daily activities, and that can provide more accurate and transparent cost estimates. Kaplan and Anderson [10] designed TDABC to simplify and reduce the cost of implementing and maintaining its forerunner, activity-based costing (ABC). It simplifies the costing process by eliminating the need to interview and survey employees in order to allocate resource costs to activities. TDABC assigns resource costs directly to the cost objects, using only two sets of estimates: the capacity cost rate and the customer service time, both of which are relatively simple to obtain. An explanation of how to determine these estimates follows below.
Only two activities are involved in a TDABC system. First, the TDABC system calculates the cost of supplying resource capacity. For example, for any customer ordering process, the TDABC model calculates the cost of all the resources used in the process (personnel, equipment, and technology) and any necessary supervision. After it has calculated these costs, it divides the total cost by the capacity, which is the time available from the employees performing the work; this gives the capacity cost rate. Second, the TDABC model uses the capacity cost rate to drive departmental resource costs to cost objects by estimating the demand for resource capacity (usually time). The model requires only an estimate of the time required to process a particular customer order [3]. TDABC does not require that each customer order take the same time; it allows it to vary. Variations can be classified as manual or automated orders, international or domestic orders, fragile or hazardous goods, or even orders from a new customer versus one who already has a credit record. Because TDABC simulates the actual processes used to perform work throughout a business, it can capture far more variation and complexity than its forerunner, ABC, and it does not need large amounts of data, data storage space, or people to process the data.
Capacity cost rate = Cost of capacity supplied / Practical capacity of resources supplied

The TDABC model specifies the unit times it takes to perform each instance of an activity. These unit times help the business to see how much time is spent on a particular activity and how much time needs to be allocated to unused resources. The TDABC model mostly works from historical data, but its main power is to help predict the future. TDABC promotes the direct allocation of resource costs to cost centres, using easily obtained estimations of two parameters for each group of resources: the capacity cost rate, and the time required to complete a transaction or activity [4]. TDABC represents unused capacity more accurately, as employee surveys tend to overestimate the time spent on activities [5], [6].
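The two TDABC estimates described above (the capacity cost rate and the unit times per activity) can be sketched as follows. All figures, and the activity names, are hypothetical and serve only to show how a job's cost is assembled.

```python
# TDABC sketch: capacity cost rate plus unit times per activity.
# All figures and activity names are hypothetical.

# 1. Capacity cost rate = cost of capacity supplied / practical capacity
cost_of_capacity = 560_000        # rand per quarter for the department
practical_capacity_min = 700_000  # practical capacity, minutes per quarter
rate = cost_of_capacity / practical_capacity_min   # rand per minute

# 2. Unit times for each activity demanded by one job (minutes)
unit_times = {"prepare build file": 45,
              "load machine": 30,
              "unload and clean": 60}

# Cost of the job = capacity cost rate x total time demanded
job_cost = rate * sum(unit_times.values())
print(f"Rate = R{rate:.2f}/min; job cost = R{job_cost:.2f}")
```

Note that only the time actually demanded by the job is charged to it; the remainder of the practical capacity shows up as the cost of unused capacity, which is precisely what TDABC is designed to expose.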
In 2015 a study was done to analyse the use of a TDABC system on the clinical pathway for a total knee replacement (TKR), and afterwards to determine where the primary cost drivers lay [2]. The study applied the TDABC methods to a small cohort of TKR patients. About 20 patients were observed, and at each stage personal, consumable, and indirect costs were calculated. Because mean times were not used, the results were more accurate than those from cost analysis or any basic top-down approach. The primary cost drivers were identified to be the cost of theatre consumables (including the implant), corporate overheads, and overall ward and staffing costs for admission and the operating theatre. By being able to identify the significant cost drivers, it is possible to inform policymakers more accurately as they attempt to benchmark treatment costs. Possible savings include a reduction in the length of stay for post-operative care, discounts on implants, and the control of corporate overheads by using elective orthopaedic treatment centres.
In service companies, resources are assigned by labour hours, meaning that resources are only used when employees are working; and so it has been said that a TDABC system is more suitable for a service company. However, if applied to a manufacturing company, a TDABC system can optimise the use of time before and after machining takes place and make provision for machining, thus optimising the total use of time. A TDABC therefore becomes a viable and suitable model to use for manufacturing [7].
The optimal control of the use of time can contribute to how well a business functions. Cutting out idle time can add up to a large saving, as a number of extra jobs can be done in the recovered time. Time-driven activity-based costing is a new and exciting way to approach costing systems, as it is time-driven. Thus the saying that 'time is money' becomes a reality and a way to run a business.

DIGITAL TWINNING
Digital twinning is more than just a blueprint; it is a schematic, or a pair of glasses, that clearly shows how each step in the process works [8]. It is a virtual representation of the dynamics of how an Internet of Things device operates and lives its life cycle. A digital twin accompanies the actual operation of the product, from the design, through the build, to the manufacture. A digital twin can be dynamic or static, meaning that it can either change continually as the processes change or are adjusted, or it can be fixed; for example, a simulation developed and built for a certain process with all variables known can be seen as a static digital twin. Continuous learning facilitates better manufacturing. When it comes to additive manufacturing, the more research that is done the better, as in this way the processes that lead up to the actual printing job and the processes that come afterwards are streamlined to ensure a fault-free additive manufacturing job. Previous mistakes or problem areas are used to optimise the process even further. A digital twin takes into account any tolerances that may occur, as it has to drift and shift as the product does [9]. Constant recalibration of the process is necessary in order for a business to see what happens and where changes occur [10]. Analytics are required at every step, and it is important to record any information that is regarded as important or that has something to do with change in the process or product.
A digital twin acts as a living model that drives a business outcome and acts as continuous engineering, providing direct feedback. When designing a digital twin, it is important that its requirements are clearly stated, as well as the modelling software needed to create the 3D representation of the asset, and who is going to have access to the information in the digital twin. A business's Internet of Things management needs to be secure, as there will be different devices on a network, and numerous risks are involved. An identity-driven Internet of Things platform allows for authentication, configuration monitoring, and the management of each device on the network, and helps to do this quickly and securely. An identity-driven Internet of Things platform can manage the identity of every element involved in the digital twin, and provide messaging services to automate secure communications between people, systems, and things.
As an example of a digital twin, the formula y = mx + c is a mathematical twin of a straight line in two dimensions, where m is the gradient, c the offset, x the independent variable, and y the dependent variable. As x changes, y will also change. Data points can be grouped, arranged in different ways, and analysed using linear regression parameters. Applied to the mathematical twin, interpolation can be used to estimate the value of y between observations of x, and extrapolation to estimate y beyond the observed range. The results obtained from this analysis can then be used to reach certain conclusions about the data, and those conclusions can be used to make future predictions as patterns develop. By using extrapolation, results can be obtained even when certain information is not available. This concept is the core of digital twinning.
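The straight-line 'mathematical twin' above can be fitted from data by ordinary least squares: the observations estimate m and c, and the fitted line then predicts y at x values not yet observed. The data points below are hypothetical, chosen to lie roughly on y = 2x.

```python
# Fitting the mathematical twin y = mx + c by least squares.
# Data points are hypothetical, roughly following y = 2x.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares estimates of the gradient m and offset c
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
c = mean_y - m * mean_x

def predict(x):
    """The fitted twin: predicts y for any x, observed or not."""
    return m * x + c

print(f"m = {m:.2f}, c = {c:.2f}, predicted y at x=6: {predict(6):.2f}")
```

Evaluating `predict` inside the observed range of x is interpolation; evaluating it beyond that range is extrapolation, with the usual caveat that extrapolated predictions rely on the pattern continuing to hold.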

DIGITAL TWINNING AND TDABC
In order to understand how digital twinning and TDABC can be related and work together for the benefit of whatever it is that a business is trying to achieve, a simple example can be used. Consider the process involved in baking cupcakes. It is a simple process, some would say: it only requires reading and following a recipe. The first step will be to read what ingredients are necessary, and to make sure that the correct amount of each ingredient is available. If not, time will have to be taken to acquire the necessary ingredients; this is a step that could have been avoided if planning had been done beforehand, and thus time is being wasted. The unnecessary trip to the grocery store results in two problems: first, unnecessary money will be spent on petrol, parking fees, and so on; and second, an amount of time that could have been used to start the process of baking the cupcakes is lost to the trip. Both time and money are lost. The next step, when baking cupcakes, will be to prepare the batter. As the goal is to save time, or to optimise the use of time, the oven is switched on so that it can pre-heat and reach the correct temperature. Skipping this step would mean that time is unnecessarily lost. This step is another example that shows that TDABC planning will minimise the amount of time wasted, as certain activities can happen at the same time. The next step will be to pour the batter into the cupcake holders and place the mix into the oven; as the cupcakes are being baked, another batch can be started and the kitchen can be cleaned. Now more activities occur in the same amount of time, making good use of the oven's electricity and optimising the time available. The digital twin will form part of the planning phase of the baking process.
By using a digital twin, it would be known beforehand exactly how much of each ingredient is needed for each process or batch of cupcakes, how long or how much time it will take to complete the process of baking, and even how much each individual cupcake will cost in the end. The digital twin is constantly adaptable, and thus can be used to find the optimal sequence of processes or steps to ensure that the least amount of time is wasted; and, should something go wrong, the digital twin can be adapted to predict which steps to follow to ensure that the process runs as smoothly as possible. Thus the digital twin is a tool used to aid in improving the TDABC method.
Additive manufacturing follows the same pattern as in the baking example. The printing job is prepared, planned, and designed, and sent to the next person for quality checks and to ensure that the structures added are correct. Then the job is sent and loaded onto the printing machine and the job is started. Each job takes a different amount of time, during which other jobs or the next job can be prepared. Once the job is completed it is unloaded, support structures are removed, and the item is sent for heat treatment and further quality checks. By using a digital twin, the layout of the company and the time taken for each task can be simulated, and changes can be made to the layout without physically changing anything in the real system. In this way the optimal process chain can be determined for each job before actually doing it, thus optimising the use of the time available. The more jobs that take place, the more money comes in, and the better for the company. Without the use of a digital twin, activities might run in random order; and this can lead to inefficiency in the workplace. By using the digital twin, the optimal path for the process chain to follow is known, and activities can proceed without delay. The digital twin also ensures that each object or process in the process chain knows what happens before it and what happens afterwards. Nothing is left to the unknown.
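The additive manufacturing process chain described above, together with the abstract's claim that the digital twin can predict production cost from the stochastic variation of element durations, can be sketched as a small Monte Carlo simulation. The element names, their mean durations and spreads, and the charge-out rate are all hypothetical; the point is only the mechanism, in which each element's duration varies stochastically, and the simulated chain yields an expected cost and a cost at a chosen confidence level.

```python
# Sketch: predicting process-chain cost from stochastic element durations.
# Element names, durations, spreads, and the rate are hypothetical.
import random

random.seed(42)

# Process-chain elements: (mean duration, standard deviation) in minutes
elements = {"prepare job": (45, 5),
            "load machine": (30, 3),
            "print": (600, 40),
            "unload and post-process": (90, 10)}

rate = 0.80   # charge-out rate, rand per minute

def simulate_chain():
    """One realisation of the total chain duration (minutes)."""
    return sum(max(0.0, random.gauss(mu, sd)) for mu, sd in elements.values())

runs = sorted(simulate_chain() for _ in range(10_000))
expected_cost = rate * sum(runs) / len(runs)
cost_95 = rate * runs[int(0.95 * len(runs))]   # 95th-percentile cost

print(f"Expected cost = R{expected_cost:.0f}; 95% of jobs cost <= R{cost_95:.0f}")
```

If one element's distribution changes after an abnormal occurrence, only its entry in `elements` needs updating, and re-running the simulation immediately yields the new expected cost at the same rates, which is the behaviour the abstract describes.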
Digital twins allow businesses to see what could go wrong. In return, businesses can order the necessary parts to fix the problems before faults occur, thus reducing the risk of downtime caused by broken machinery. Designers previously had little opportunity to test and amend their prototypes [14]. According to the marketing director at EU Automation, digital twinning models reduce development time and costs, as analysis of the simulations improves the efficiency of the final construction [14]. As with the cupcake example, the digital twin can be used to predict results, meaning that the digital twin can give an indication of the likely outcome of certain situations according to the data that it currently uses. The digital twin can provide time estimates and cost estimates, and indicate possible problematic areas; by knowing what will happen in a process before physically completing it, pre-planning can attempt to mitigate the problems or to keep them from occurring entirely.
Another example of a simple digital twin that uses data to make predictions is the weather forecast. Every day, weather specialists look at weather data, analyse it, and then give weather predictions for the next day. These weather predictions make a huge difference in daily life. If an owner of a motorbike and a car has to commute to work daily, they will look at the weather the day before to ensure that they can plan which method of transport to use. Living in a busy city, the amount of traffic on a certain day plays a big part in how much time is spent travelling. In order to avoid traffic, weather permitting, a motorbike will be the best option to get to and from work as quickly as possible. The motorbike saves not only time but money as well, in the form of petrol used. This example links to TDABC in the sense that time is the most important factor. The weather forecast acts as a digital twin for planning travelling choices to and from work. If weather forecasts indicate rain, the car will be chosen as the method of transportation, meaning that more time will be spent on the 'travelling to work' part of the day, and thus more time and money will be spent. The weather forecast is a digital twin of how the day will be started; it gives information beforehand on what problems might occur -i.e., rain, hail, wind, snow, heat -and helps the commuter to make an informed decision so that any unnecessary problems, such as getting stuck in traffic or getting soaked in the rain, can be avoided.
Before a business gives the go-ahead on a project, planning must be done. If the planning is time-based and a digital twin is used, planning becomes even more robust: time can now be optimised because a digital twin is based on real-time data; as the process is changed, the digital twin updates, reporting on how things have changed. If the process doesn't work, the digital twin can be changed, and one can see how the changes affect the business. Businesses will now know how much each step will cost, based on the time that it will take. Possible flaws will be identified, and plans can be put in place to mitigate them. As processes are optimised, waste is reduced, more time is available, more jobs can be done, and income increases.

CONCLUSION
One of the most important parts of using a digital twin is deciding what the digital twin is required to achieve or do. By identifying the digital twin's requirements, the first step to building a digital twin is completed, as these requirements act as a foundation for what programmes will be used, which platforms are necessary, how much it might cost, and how complex the digital twin will be. Digital twinning in collaboration with TDABC can be beneficial in indicating possible bottlenecks, or in identifying which processes might or might not be as successful as others. By using the digital twin together with TDABC, time and money can be saved, as no processes have to occur physically for the identification to take place; it is known beforehand what works and what doesn't. Time, in the end, is money; time cannot be replaced or stored, and by optimising the relevant processes, money is automatically saved. Many businesses that work in the additive manufacturing industry only have a certain machine capacity, and need to plan jobs around it. By using TDABC in collaboration with digital twinning, these businesses can pre-plan in order to optimise the use of the available capacity, and so optimise the available time as much as possible. As time becomes the main driver, costs are reduced when hourly rates remain unchanged.