Compute the Incremental Unit Cost of Producing an Additional Paper Feed Drive
Design and Application of Thermal Insulation
Alireza Bahadori PhD, in Thermal Insulation Handbook for the Oil, Gas, and Petrochemical Industries, 2014
1.7.2 Economic Thickness by Algebraic Solution
Incremental cost derivation
In deriving an algebraic expression for economic thickness, a term arises that is a function of the insulation cost; in the equations given below this term is represented by the symbol C and is defined as the incremental cost of insulation.
It is important to realize that this is not the simple difference in applied cost between one thickness and the next higher one, but is more strictly interpreted as the derivative of the applied cost with respect to the volume of insulation. It should be noted that the additional cost of the finish and accessories resulting from the increasing thickness of insulation is included.
Within the context of this method of economic thickness calculation, therefore, the value of C should be obtained from a measurement or deduction of the slope of the curve for the plot of insulation cost against insulation thickness. However, in practice, such a graph is unlikely to exhibit a curve (unless forced smoothing of the plotted points is carried out) and the alternative approach is commonly adopted in these circumstances.
By this method, any one value of C can be estimated from the costs of two successive thicknesses, with the understanding that the value applies to neither one but may relate to a thickness about midway between them.
Sequential values obtained in this manner can still be very erratic and should be subsequently smoothed where the situation calls for an orderly progression of C values.
The equations for deriving the incremental cost in accordance with the two methods described in the previous paragraph are as follows:
From the slope of the cost curve:

1. Cylindrical surfaces:

$$C = \frac{10^6\,S}{\pi\,d_n} \qquad (1.31)$$

2. Flat surfaces:

$$C = 10^3\,S \qquad (1.32)$$

From the costs of two successive thicknesses:

This is determined from the change in cost divided by the corresponding change in volume of insulation, where the subscripts 1 and 2 denote the two successive thicknesses and the corresponding outermost diameters of insulation:

1. Cylindrical surfaces:

$$C = \frac{4 \times 10^6\,(P_2 - P_1)}{\pi\left(d_{n2}^{\,2} - d_{n1}^{\,2}\right)} \qquad (1.33)$$

2. Flat surfaces:

$$C = \frac{10^3\,(P_2 - P_1)}{L_2 - L_1} \qquad (1.34)$$
where
- S = slope of the curve of P against L
- d_n = the outermost diameter of the insulation (in mm)
- P = the installed cost of insulation (in $/m or $/m²)
- L = the insulation thickness (in mm)
- d_o = the outside diameter of the pipe (in mm)
- C = the incremental cost (in $/m³)
- H = evaluation period in hours (working)
- Y = cost of heat in cents per useful MJ
- q = rate of heat loss through the insulating material per unit area of hot surface, in W/m²
- λ = thermal conductivity of the insulation material in W/m·K
- θ_1 = hot face temperature in °C
- θ_m = ambient air temperature in °C
- χ = economic thickness in mm
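As a quick numerical check of the two-thickness method, the short Python sketch below implements Eqs. (1.33) and (1.34). The function names and the pipe size, thicknesses, and installed costs in the example are illustrative assumptions, not values from the handbook.

```python
import math

def incremental_cost_cylindrical(p1, p2, d_o, l1, l2):
    """Incremental cost C ($/m^3) for a pipe, from the installed costs p1, p2
    ($/m) at two successive insulation thicknesses l1, l2 (mm) on a pipe of
    outside diameter d_o (mm) -- cf. Eq. (1.33)."""
    dn1 = d_o + 2.0 * l1  # outermost diameter of insulation at thickness l1 (mm)
    dn2 = d_o + 2.0 * l2  # outermost diameter of insulation at thickness l2 (mm)
    return 4.0e6 * (p2 - p1) / (math.pi * (dn2**2 - dn1**2))

def incremental_cost_flat(p1, p2, l1, l2):
    """Incremental cost C ($/m^3) for a flat surface, from installed costs
    p1, p2 ($/m^2) at thicknesses l1, l2 (mm) -- cf. Eq. (1.34)."""
    return 1.0e3 * (p2 - p1) / (l2 - l1)

# Assumed example: a 60.3 mm pipe insulated to 50 mm and then 75 mm, with
# assumed installed costs of 18 $/m and 26 $/m for the two thicknesses.
print(round(incremental_cost_cylindrical(18.0, 26.0, 60.3, 50.0, 75.0)))  # ~550 $/m^3
```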
For the purpose of the calculation, the incremental cost C varies from 891 $/m³ (see Table 1.17). As an initial approximation, C is taken as the average of 698 $/m³.
Incremental Cost of Insulation ($/m³), by outside diameter of steel pipe and insulation thickness:

Outside Diameter of Steel Pipe (mm) | 120 to 140 mm | 140 to 160 mm | 160 to 180 mm | 180 to 200 mm | 200 to 220 mm | 220 to 240 mm | 240 to 260 mm | 260 to 280 mm | 280 to 300 mm | Average Cost ($/m³)
---|---|---|---|---|---|---|---|---|---|---
17.2 | 632 | - | - | - | - | - | - | - | - | 632 |
21.3 | 631 | - | - | - | - | - | - | - | - | 631 |
26.9 | 618 | - | - | - | - | - | - | - | - | 618 |
33.7 | 652 | 451 | - | - | - | - | - | - | - | 552 |
42.4 | 573 | 570 | 283 | - | - | - | - | - | - | 475 |
48.3 | 553 | 557 | 416 | - | - | - | - | - | - | 509 |
60.3 | 517 | 514 | 512 | 202 | - | - | - | - | - | 436 |
76.1 | 358 | 523 | 523 | 101 | - | - | - | - | - | 376 |
88.9 | 362 | 510 | 511 | 508 | - | - | - | - | - | 473 |
101.6 | 496 | 510 | 507 | 507 | 415 | - | - | - | - | 487 |
114.3 | 422 | 513 | 513 | 514 | 499 | - | - | - | - | 492 |
139.7 | 404 | 489 | 490 | 493 | 491 | - | - | - | - | 473 |
168.3 | 410 | 456 | 458 | 459 | 459 | 455 | - | - | - | 450 |
219.1 | 594 | 445 | 447 | 450 | 449 | 449 | - | - | - | 465 |
244.5 | 382 | 439 | 439 | 439 | 433 | 442 | 439 | - | - | 430 |
273 | 581 | 432 | 428 | 429 | 431 | 429 | 431 | - | - | 452 |
323.9 | 300 | 428 | 436 | 438 | 445 | 439 | 444 | 445 | - | 422 |
355.6 | 599 | 407 | 426 | 436 | 439 | 431 | 416 | 436 | - | 449 |
406.4 | 562 | 408 | 440 | 433 | 431 | 400 | 436 | 434 | - | 443 |
457 | 637 | 401 | 425 | 420 | 419 | 423 | 420 | 422 | 421 | 443 |
508 | 548 | 405 | 424 | 410 | 420 | 416 | 415 | 418 | 418 | 430 |
Over 508 | 552 | - | - | - | - | - | - | - | - | 552 |
URL: https://www.sciencedirect.com/science/article/pii/B9780128000106000010
Sustainable Energy Technologies & Sustainable Chemical Processes
Adetoyese Olajire Oyedun, Amit Kumar, in Encyclopedia of Sustainable Technologies, 2017
Electricity and power cost estimates
Fig. 10A and B show the incremental cost of electricity and the levelized cost of electricity (LCOE) for the pelletized biomass cofiring scenarios.
The incremental cost of electricity and LCOE values in Fig. 10 show an increasing trend as the cofiring level increases. The forest residue-based regular pellets show the highest incremental cost followed by wheat straw- and switchgrass-based regular pellets, but the reverse occurs with steam pretreated pellets: switchgrass-based steam pretreated pellets have the highest incremental costs, which increase as the level of cofiring increases, followed by wheat straw- and then forest residue-based pellets. This is largely due to the production cost of regular and steam pretreated pellets.
The LCOE results show a similar trend to the incremental cost of electricity results for the pelletized biomass cofiring scenarios (same as the base case). The LCOE values for pellets range from 40 to 60 $ MWh⁻¹ across different cofiring levels. At a 5% cofiring level, it is cheaper to cofire regular pellets than raw biomass. This is largely due to the feedstock costs, biomass requirement, and the transportation distance. For example, while the feedstock cost at a 5% cofiring level for forest residue pellets is 6.18 $ GJ⁻¹, the feedstock cost for raw forest residue is 6.03 $ GJ⁻¹. The biomass requirement for raw biomass is higher than that of pellets and therefore results in higher costs for raw biomass than for pellets at this cofiring level.
The difference between the LCOE values for regular and steam pretreated pellets at a 5% cofiring level is small (around 2 $ MWh⁻¹), but at a higher cofiring level, 25%, the difference is almost 10 $ MWh⁻¹.
The cost breakdown of the LCOE for the pelletized biomass at a cofiring level of 25% is illustrated in Fig. 11 . In the pelletized biomass cofiring scenario, pellet costs and maintenance costs are the major cost components of the LCOE. Coal cost and ash disposal are also significant cost components, but capital recovery cost is insignificant, given the low modification cost for the pelletized biomass scenario. The cost breakdown for the LCOE at other cofiring levels follows the same trend.
The power costs for generating electricity from a direct biomass combustion plant for pelletized biomass and cost breakdown of the power costs are given in Fig. 12A and B , respectively, for a 500-MW plant capacity.
For pelletized biomass, two scenarios were considered, one with pellet transportation costs and the other without. Pellet transportation is considered when the pellet plant is far away from the power plant, and in this study we used a distance of 150 km. Power costs, when transportation costs are included, range from 139 to 147 $ MWh⁻¹ for regular pellets and 173 to 187.50 $ MWh⁻¹ for steam pretreated pellets. Moreover, pellet transportation costs have a small effect on the power cost; without transportation costs, pelletized biomass power costs were no more than 7.7 $ MWh⁻¹ lower than in the transportation scenario. In all cases, however, the power costs are considerably higher than those for the raw biomass (base case).
The cost breakdowns for the power cost for the pelletized biomass show that the pellet cost is the major component, followed by capital recovery, maintenance, and pellet transportation costs. Largely due to the high pellet production cost, it may not be feasible to fire 100% pellets for power generation unless the production cost of pellets can be considerably reduced.
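To make the cost quantities concrete, the sketch below computes an LCOE and the incremental cost of electricity of a cofiring scenario relative to a coal-only baseline. It is a generic levelized-cost calculation under assumed inputs (plant size, capacity factor, capital recovery factor, O&M, and fuel costs), not the techno-economic model used in the study.

```python
def lcoe(capital_cost, crf, fixed_om, variable_cost_per_mwh, annual_mwh):
    """Simple levelized cost of electricity ($/MWh): annualized capital plus
    fixed O&M spread over annual generation, plus variable (fuel/feedstock) costs."""
    return (capital_cost * crf + fixed_om) / annual_mwh + variable_cost_per_mwh

def incremental_cost_of_electricity(lcoe_scenario, lcoe_baseline):
    """Incremental cost of electricity: the increase in LCOE over the baseline plant."""
    return lcoe_scenario - lcoe_baseline

# Assumed inputs: a 500 MW unit at an 80% capacity factor; cofiring adds some
# retrofit capital, slightly higher O&M, and a more expensive fuel mix.
annual_mwh = 500 * 8760 * 0.80
coal_only = lcoe(capital_cost=0.0, crf=0.0, fixed_om=60e6,
                 variable_cost_per_mwh=25.0, annual_mwh=annual_mwh)
cofiring = lcoe(capital_cost=20e6, crf=0.10, fixed_om=62e6,
                variable_cost_per_mwh=27.5, annual_mwh=annual_mwh)
print(round(incremental_cost_of_electricity(cofiring, coal_only), 2))  # $/MWh increase
```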
URL: https://www.sciencedirect.com/science/article/pii/B9780124095489105317
Economic Operation of Power Systems
P.S.R. Murty, in Electrical Power Systems, 2017
20.14 Merit Order Method
If the incremental cost characteristics are fairly constant over a wide range in operation, then neglecting the transmission losses and running reserve requirements, it is possible to prepare schedules for load allocation using incremental efficiencies. Merit tables based upon incremental efficiencies are prepared and each unit is loaded to its rated capacity in order of the highest incremental efficiency. Changes in fuel costs, plant cycle efficiencies, plant availabilities, etc., require the merit tables to be revised regularly to reflect these factors. Then, it is possible to look at the tables so prepared and schedule the generation to different units.
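A minimal sketch of merit-order loading is shown below: units are sorted by incremental cost (the cheapest, i.e., the most incrementally efficient, first) and loaded to rated capacity until demand is met, neglecting transmission losses and reserve requirements as the text assumes. The unit data are hypothetical.

```python
def merit_order_dispatch(units, demand_mw):
    """Load units to rated capacity in order of lowest incremental cost (highest
    incremental efficiency), neglecting losses and running reserve requirements."""
    schedule = {}
    remaining = demand_mw
    for name, capacity_mw, incremental_cost in sorted(units, key=lambda u: u[2]):
        loading = min(capacity_mw, remaining)
        schedule[name] = loading
        remaining -= loading
        if remaining <= 0:
            break
    return schedule

# Hypothetical merit table: (unit, rated capacity in MW, incremental cost in $/MWh)
units = [("Unit A", 200, 18.0), ("Unit B", 150, 24.0), ("Unit C", 300, 21.0)]
print(merit_order_dispatch(units, demand_mw=520))
# {'Unit A': 200, 'Unit C': 300, 'Unit B': 20}
```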
URL: https://www.sciencedirect.com/science/article/pii/B9780081011249000206
Cost-effectiveness
Louise Barnsbee, ... Son Nghiem, in Mechanical Circulatory and Respiratory Support, 2018
Choice of Model
To judge cost-effectiveness, we would ideally: (1) include all patients eligible to receive the VAD or ECMO treatment and (2) assess them over a long time, possibly their lifetime.
This is quite different from clinical studies, which generally examine a narrow population over a relatively short time. For example, to evaluate a modified VAD, a clinical study may just consider patients in one hospital for a year and may exclude the sickest patients because of ethical concerns. There is generally, therefore, a mismatch between the published evidence and the evidence we need to judge cost-effectiveness.
The usual approach is to take the available information from smaller and shorter-term studies to inform a statistical model that considers the bigger picture. For example, we may know that 1-year survival after LVAD implantation is 87% [14]. We can use this survival data to extrapolate beyond the first year, ideally by using the survival curve from a Kaplan-Meier plot.
To examine a person's lifetime, we can use state-based models. An example is in Fig. 24.1, which comes from the cost-effectiveness study by Moreno et al. [15]. People start in the "LVAD" state after having the device fitted and then either remain in that state (circular arrow), or have a heart transplant, or die. We regularly update a person's state, usually every month or year, and keep updating until everyone has moved to the death state.
The state-based model is a great simplification of reality, and a patient's journey can be far more complex than just three states (e.g., patients who recover ventricular function and have their device removed with their native heart in place). More states can be added to capture complex journeys; but if the model's states cover the major costs and health states, then it should provide useful estimates for our bigger picture.
To create patient journeys, we need to simulate a cohort of patients. Moreno et al. [15] started 100 patients in the LVAD state and then simulated their lifetimes. To simulate movements between states, we need to know the probability of moving between states. These probabilities usually come from observed data or published papers. For example, Moreno et al. [15] used data from a clinical study where 8% of patients had died 1 month after LVAD. When simulating an individual patient, we then randomly generate a number between 0 and 100. And if that number is 8 or less, we move the patient to death at that time; otherwise they remain in LVAD. Similarly, data are available on heart transplant numbers; hence we can estimate the probability of moving to the heart transplant state. Any patient who moves to a heart transplant state incurs the extra costs of the transplant (surgery and device costs).
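The sketch below illustrates this kind of cohort simulation for the three-state model. The monthly transition probabilities (other than the 8% death figure quoted above), the costs, and the QALY weights are placeholder assumptions for illustration, not the inputs used by Moreno et al. [15].

```python
import random

# Placeholder monthly inputs: transition probabilities (apart from the 8% death
# figure quoted above), per-month costs ($), and QALY weights are all assumed.
P_DEATH_FROM_LVAD = 0.08        # 8% died 1 month after LVAD implantation
P_TRANSPLANT_FROM_LVAD = 0.03   # assumed
P_DEATH_FROM_TRANSPLANT = 0.02  # assumed
MONTHLY_COST = {"LVAD": 38_000, "Transplant": 4_000}
MONTHLY_QALY = {"LVAD": 0.055, "Transplant": 0.063}
IMPLANT_COST, TRANSPLANT_COST = 90_000, 35_000  # one-off surgery/device costs

def simulate_patient(max_months=240):
    """Simulate one patient's journey, returning accumulated costs and QALYs."""
    state, costs, qalys = "LVAD", IMPLANT_COST, 0.042  # month 0: implant
    for _ in range(max_months):
        r = random.random()  # uniform draw, equivalent to the 0-100 draw in the text
        if state == "LVAD":
            if r < P_DEATH_FROM_LVAD:
                state = "Death"
            elif r < P_DEATH_FROM_LVAD + P_TRANSPLANT_FROM_LVAD:
                state = "Transplant"
                costs += TRANSPLANT_COST  # one-off cost added on the transition month
        elif state == "Transplant" and r < P_DEATH_FROM_TRANSPLANT:
            state = "Death"
        if state == "Death":
            break  # death accrues no further costs or QALYs
        costs += MONTHLY_COST[state]
        qalys += MONTHLY_QALY[state]
    return costs, qalys

cohort = [simulate_patient() for _ in range(100)]
print(sum(c for c, _ in cohort) / 100, sum(q for _, q in cohort) / 100)  # cohort averages
```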
Table 24.1 shows an example of the simulated journey of a single patient using hypothetical costs and QALYs. This patient died after 4 months and accumulated $167,000 in costs and 0.223 quality-adjusted life years. Costs vary over time; the cost of LVAD at month zero is higher ($90K) because it included the surgery and device costs, whereas the next month covers items like hospital stay, maintenance and support, and serious adverse events [16]. QALYs also change over time; the patient is assumed to be in better health 1 month after the LVAD implant (0.055 QALYs in month 1 compared with 0.042 QALYs in month 0).
Time (months) | State | Costs ($) | QALYs |
---|---|---|---|
0—implant | LVAD | 90,000 | 0.042 |
1 | LVAD | 38,000 | 0.055 |
2 | Heart transplant | 35,000 | 0.063 |
3 | Heart transplant | 4000 | 0.063 |
4 | Death | 0 | 0 |
Total | | 167,000 | 0.223 |
QALY, quality adjusted life year, on the scale of months (so maximum is 1/12 = 0.08).
Ideally, we should adapt Fig. 24.1 to have a new state every time the costs or QALYs change (i.e., "LVAD month 0," "LVAD month 1", etc.); but while this increases accuracy, it reduces the heuristic appeal of Fig. 24.1. We show an example of a more complete model in Fig. 24.2. In this example, there are changes in the costs and/or QALYs for the LVAD group up to 2 months post implant.
After simulating 100 patients, we generate summary statistics such as the average costs and QALYs per patient (see the "Incremental Costs and Outcomes" section).
We now have the costs and QALYs for our treated group. For comparison, we also need a model that describes the untreated group. That might be a conventionally managed group who were eligible for an LVAD but did not receive one. This group are likely to have different states that describe their typical journey, and so we would need to design another state-based model and estimate the costs, QALYs, and transitions between states.
The accuracy and usefulness of our estimate of cost-effectiveness will obviously depend on how well our state-based model captures reality. We are likely to get poor estimates if we have missed an important state or if our estimated transitions between states are wrong (for example, we will generally over-estimate life years if the transition probabilities to death are too small). The best way to avoid this situation is for researchers to work with expert clinicians and patients to create a model that covers the important stages of life after LVAD or ECMO. Researchers should also systematically search the literature and find the best available data to inform the costs, QALYs, and transitions.
A popular alternative to a state-based model is a decision tree where the key decisions that drive patients' journeys are modeled. For example, a decision tree examining ECMO first split patients according to whether they experienced persistent cardiac arrest or severe shock, then had a split if they were transferred, and then split if they survived or died [17]. Decision trees work well when there are a few key events after treatment, whereas state models are more useful if there are multiple health states with many potential transitions between states.
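For comparison with the state-based approach, here is a minimal sketch of how a decision tree is evaluated by rolling expected costs and QALYs back from the leaves. The branch structure loosely follows the ECMO tree described above, but every probability, cost, and QALY value is a hypothetical placeholder.

```python
# Rolling back a decision tree: each node's expected cost and QALYs are the
# probability-weighted averages of its branches. The structure loosely follows
# the ECMO tree described above; every probability, cost, and QALY value here
# is a hypothetical placeholder.
tree = {"branches": [
    {"name": "persistent cardiac arrest", "prob": 0.4, "branches": [
        {"name": "survived", "prob": 0.35, "cost": 180_000, "qaly": 4.0},
        {"name": "died", "prob": 0.65, "cost": 120_000, "qaly": 0.0},
    ]},
    {"name": "severe shock", "prob": 0.6, "branches": [
        {"name": "survived", "prob": 0.55, "cost": 150_000, "qaly": 5.0},
        {"name": "died", "prob": 0.45, "cost": 100_000, "qaly": 0.0},
    ]},
]}

def expected(node):
    """Return (expected cost, expected QALYs) for a node of the tree."""
    if "branches" not in node:
        return node["cost"], node["qaly"]
    cost = qalys = 0.0
    for child in node["branches"]:
        c, q = expected(child)
        cost += child["prob"] * c
        qalys += child["prob"] * q
    return cost, qalys

print(expected(tree))  # expected cost ($) and QALYs per patient for this strategy
```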
URL: https://www.sciencedirect.com/science/article/pii/B9780128104910000242
Energy Convergence
Ren Anderson, ... Steven Hauser, in Energy Efficiency, 2013
2 Typical Characteristics of Residential Technology Pathways
The chapter starts with an analysis of the incremental benefits of different choices for energy upgrades in new homes, focusing on the limiting case of a continuous investment in the improvement of a single building component. As can be seen from inspection of Figure 19.2, when only one efficiency measure is used to reduce home energy use – for example, the additional savings achieved by adding an additional thickness of wall insulation – the diminishing marginal returns associated with each increase in efficiency quickly limit the total level of cost-effective energy savings that can be achieved.
Initially the value of energy savings from improved efficiency increases faster than the incremental cost of financing efficiency improvements. Because of the diminishing magnitude of energy savings associated with each increment of efficiency improvement, the incremental cost of the efficiency improvement is eventually greater than the incremental savings provided by the corresponding reductions in energy use. A typical technology pathway combines all three of the trends from the simple examples shown in Figure 19.1. The 100 percent savings point in Figure 19.2 is defined as the point where total annual energy use for the home is equal to total site renewable energy generation. In other words, a ZNEH produces as much energy as it uses on an annual basis.
The point where incremental costs equal incremental savings represents the minimum cost point for the pathway. Beyond the minimum cost point, costs increase faster than savings until total costs are equivalent to costs at the starting point on the left-hand side of Figure 19.2. The point where annual costs are equal to the cost of utility-supplied energy at the starting point represents the neutral cost point. The first cost of the home at the neutral cost point is higher than the first cost at the starting point, but because of the energy savings associated with increased investments in efficiency, the net annual energy-related costs are the same at the neutral cost point as the annual utility bills for the less efficient home at the zero savings point. It is possible to go beyond the neutral cost point and achieve larger energy savings; however, because of the increasing steepness of the cost/performance curve beyond the neutral cost point, the benefits associated with larger investments in energy efficiency are relatively small compared to the increase in energy-related costs.
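The sketch below illustrates how the minimum cost and neutral cost points can be located numerically for a single-measure pathway: total annual cost is the shrinking utility bill plus the steeply rising annualized cost of the efficiency measure. The curve shape and dollar figures are assumptions for illustration, not the data behind Figure 19.2.

```python
# Total annual cost of a home versus energy-savings level for a single
# efficiency measure: the utility bill falls linearly with savings, while the
# annualized cost of the measure rises steeply (diminishing returns). The
# curve shape and dollar figures are assumptions, not the data behind Figure 19.2.
BASE_UTILITY_BILL = 2400.0  # $/yr at the zero-savings starting point (assumed)

def annual_cost(savings_fraction):
    energy_cost = BASE_UTILITY_BILL * (1.0 - savings_fraction)
    efficiency_financing = 3000.0 * savings_fraction ** 3  # assumed cost curve
    return energy_cost + efficiency_financing

savings = [i / 100 for i in range(0, 101)]
costs = [annual_cost(s) for s in savings]

min_idx = min(range(len(costs)), key=costs.__getitem__)
print("minimum cost point:", savings[min_idx])   # incremental cost = incremental savings
neutral = next(s for s, c in zip(savings, costs)
               if s > savings[min_idx] and c >= BASE_UTILITY_BILL)
print("neutral cost point:", neutral)            # annual cost back at the starting level
```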
The limits associated with use of a single efficiency measure can be partially mitigated by using multiple efficiency measures. An example of the technology pathway that results from the use of multiple efficiency measures is shown in Figure 19.3. When multiple efficiency strategies are used, system integration benefits lead to increases in energy savings without large increases in energy costs. The trend line in Figure 19.3 represents this by showing how a combination of curves for separate technologies results in the cumulative effect of those technologies acting together as a system.
The most cost-effective overall whole house design does not result from using just the most efficient or the least costly efficiency measures but from the least cost combination of all measures. As the efficiency of a home is improved, there are discrete transition points where the next step in efficiency improvement in one component generates a reduction in the cost of another component.
For example, in hot climates with large cooling loads, investments in insulation and better windows reduce both the size and the cost of the cooling system and the first cost savings resulting from use of a smaller air conditioning (AC) system can partially offset the cost of efficient windows and walls.
Costs associated with different efficiencies and sizes of AC systems are shown in Figure 19.4. As can be seen from inspection of Figure 19.4, if building efficiency is increased enough to reduce the required AC size from 5 tons to 1.5 tons, the cost of the AC system can be reduced by $3,000 while also nearly doubling the performance of the system from SEER 13 to SEER 24.5. The importance of improvements in enclosure efficiency is not just measured in terms of direct reductions in energy use but also in terms of additional savings in first costs due to reductions in equipment size. The impacts of improvements in enclosure efficiency can be multiplied by reinvesting the equipment cost savings in additional energy improvement options. These internal cost savings are included in all of the energy-related cost and energy savings curves shown in this chapter. The use of multiple efficiency measures produces larger overall energy savings without increasing overall costs [4]. System integration strategies that result from the use of combinations of energy measures allow the diminishing returns associated with a single efficiency measure to be offset by taking advantage of the system-level benefits provided by other efficiency measures.
When the incremental cost of energy savings associated with the next step in efficiency improvement is larger than the incremental cost of energy provided by a residential PV system, then the least cost solution is provided by investing in PV rather than making additional investments in efficiency. Because the cost of PV systems scales approximately linearly with system capacity for the multiple kW size systems used in residential applications, the least cost curve is linear after the PV start point in Figure 19.5.
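The PV start point can be located by comparing, step by step, the cost of the next efficiency increment per kWh saved with the cost of PV energy per kWh produced; once the efficiency step is the more expensive of the two, PV becomes the least-cost option. The step costs and the PV cost per kWh below are assumed figures, not values from the chapter.

```python
# The PV start point: take efficiency steps while each one saves energy more
# cheaply than PV can generate it, then switch to PV. The step costs and the
# PV energy cost below are assumed figures, not values from the chapter.
PV_COST_PER_KWH = 0.22  # $/kWh, assumed levelized cost of residential PV energy

# (incremental annualized cost in $, incremental kWh saved) for successive steps
efficiency_steps = [(60, 900), (80, 800), (110, 700), (150, 550), (210, 400), (300, 250)]

def pv_start_index(steps, pv_cost_per_kwh):
    """Index of the first efficiency step that costs more per kWh saved than PV
    energy costs per kWh produced; beyond it, PV is the least-cost option."""
    for i, (cost, kwh_saved) in enumerate(steps):
        if cost / kwh_saved > pv_cost_per_kwh:
            return i
    return len(steps)

i = pv_start_index(efficiency_steps, PV_COST_PER_KWH)
print(f"take efficiency steps 0 through {i - 1}, then invest in PV")
```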
The maximum energy savings are achieved by combining the system integration advantages of multiple efficiency measures shown in Figure 19.3 with the onsite power generated by the residential PV system shown in Figure 19.5. The fully developed least cost curve that results from this combination is shown in Figure 19.6. As can be seen from the increase in energy-related costs in Figure 19.6, even though it is technically possible to achieve a zero net energy home today in most major U.S. climates, residential PV systems at current average costs of $5.50/W generate power that is more expensive than power from the grid, leading to high operating costs for conventional homes with savings levels larger than 30 to 40 percent.
Figure 19.6 represents a simplified view of possible pathways to more efficient homes by focusing only on a single curve. In the more realistic case shown in Figure 19.7 involving multiple technologies, there are a large number of individual design choices, all of which use less energy than the starting building design. Because of the large number of possible combinations to be considered, the comparison of different technology choices and design approaches is simplified by focusing on the choices that provide the lowest cost energy savings. The lower bounding curve formed by the designs that deliver the lowest-cost savings is used to define the least cost curve.
URL: https://www.sciencedirect.com/science/article/pii/B9780123978790000190
Case Study
Monica Greer Ph.D., in Electricity Marginal Cost Pricing, 2012
Discussion of Figure 6.4—Average Incremental Cost and Marginal Cost
Figure 6.4 displays the average incremental and marginal cost curves generated by the total cost model in Equation (6.10). To display these results, it was necessary to compute a composite output, v, where v = Y₂/Y₁. In the case of Figure 6.4, v = 0.2, which covers most of the co-ops in the sample. Average cost falls until distributed electricity reaches about 1000 MWh (1 GWh); after this, the rising marginal cost causes average cost to rise, which accords with economic theory. The relevant issue here is that the price paid by the end user should reflect this rising marginal cost and that the rates paid by customers in the United States are typically not set in this fashion (see Chapter 8 for more details).
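For readers who want to see the average/marginal relationship mechanically, the sketch below uses a generic quadratic total cost function (a stand-in, not Equation (6.10) or the composite-output model) and shows that average cost falls until the rising marginal cost overtakes it.

```python
# Average versus marginal cost from a simple total cost function. This is a
# generic quadratic stand-in, not Equation (6.10) or the composite-output model;
# it only shows why average cost falls until rising marginal cost overtakes it.
def total_cost(y_gwh):
    """Hypothetical total cost (in $000) of distributing y GWh."""
    return 500.0 + 80.0 * y_gwh + 40.0 * y_gwh ** 2

def average_cost(y_gwh):
    return total_cost(y_gwh) / y_gwh

def marginal_cost(y_gwh, dy=1e-6):
    return (total_cost(y_gwh + dy) - total_cost(y_gwh)) / dy

# Average cost is minimized where it equals marginal cost (near y = 3.5 GWh for
# these made-up coefficients; the text reports roughly 1 GWh for the co-ops).
grid = [i / 100 for i in range(50, 1001)]
y_min = min(grid, key=average_cost)
print(y_min, round(average_cost(y_min), 1), round(marginal_cost(y_min), 1))
```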
URL: https://www.sciencedirect.com/science/article/pii/B9780123851345000065
Acceleration
Mark F. Nagata PSP, CDT, ... Theodore J. Trauner P.E., in Construction Delays (Third Edition), 2018
Managing acceleration
Before accelerating a project, it is essential to carefully plan what tasks will be accelerated and how the acceleration will be implemented. All too often, when a decision is made to accelerate a project, the response is to go to overtime or increased work weeks for every facet of the project. This is the wrong approach and is often an unnecessary or excessive expenditure of effort and money. The focus of any acceleration is to shorten the remaining duration of the critical path at the lowest cost. One of the preferred approaches to determine the most cost-effective way to accelerate work is to use a cost slope calculation. The basic equation is:

$$\text{Cost Slope} = \frac{\text{Crash Cost} - \text{Original Cost}}{\text{Original Time} - \text{Crash Time}}$$
The Crash Time is the absolute fastest time that the activity can be performed. The Crash Cost is the cost of performing the activity within the Crash Time. The Original Costs and Original Time are those that existed prior to any consideration for acceleration. Using our previous example:
The Cost Slope identifies the incremental costs per day for any specific activity to shorten that activity by 1 day. The Cost Slope of the individual work activities and the project schedule can be used to plan the project's acceleration. This is demonstrated in the next example.
Example 15.5
Using the schedule to accelerate intelligently
The project in Fig. 15.2 has 9 remaining work activities. The critical path is highlighted in red (dark gray in print versions). The project is currently expected to finish in 32 days. The owner has directed the contractor to accelerate the work so that the project finishes in 25 days. Therefore, the contractor must shorten the overall duration by 7 days.
Fig. 15.3 shows the daily incremental cost (cost slope) and the maximum number of days that each of the work activities can be reasonably accelerated.
In looking at the chart, the most affordable work activity to accelerate along the critical path would be Activity G. We can accelerate this by 2 days, before the critical path shifts to include another path of activities. After accelerating Activity G by 2 days, our schedule would look like Fig. 15.4.
Thus, after the second day of acceleration, it will be necessary to accelerate both the path that begins with Activity A, and the path that begins with Activity C in order to improve the completion date. The activities accelerated and the cost to do so are summarized in Fig. 15.5. The total cost to accelerate the project by 7 days is $46,000.
This example demonstrates a phenomenon that occurs on most projects: each day of acceleration may become more expensive than the preceding day. Accelerating some areas of work, such as Activities H and I in the previous example, would be a waste of money. Also, sometimes the work that is currently ongoing should not be accelerated in favor of accelerating future, less costly work, such as Activity C in the previous example. By applying a reasoned plan to accelerate, the project will avoid unnecessary expenses and wasted effort.
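A simple way to operationalize this reasoning is to rank crashable critical-path activities by cost slope and take the cheapest available day first, re-checking the network after each cut. The sketch below uses hypothetical activity names, cost slopes, and crash limits rather than the figures in Figs. 15.2–15.5.

```python
# Greedy crash planning by cost slope: shorten the cheapest crashable activity
# on the critical path first. Activity names, cost slopes, and crash limits are
# hypothetical, not the figures in Figs. 15.2-15.5; in practice the schedule
# must be recalculated after each cut because the critical path can shift.
activities = {
    # name: (cost slope in $/day, max days it can be crashed, on critical path?)
    "A": (4000, 2, True),
    "C": (2500, 3, False),
    "G": (1500, 2, True),
    "H": (3500, 4, False),
}

def plan_crash(activities, days_needed):
    plan, total_cost = [], 0
    remaining = {name: limit for name, (_, limit, _) in activities.items()}
    for _ in range(days_needed):
        candidates = [n for n, (_, _, critical) in activities.items()
                      if critical and remaining[n] > 0]
        if not candidates:
            break  # critical path exhausted; the network must be re-analyzed
        pick = min(candidates, key=lambda n: activities[n][0])
        plan.append(pick)
        remaining[pick] -= 1
        total_cost += activities[pick][0]
    return plan, total_cost

print(plan_crash(activities, days_needed=3))
# (['G', 'G', 'A'], 7000) -- G is cheapest but can only be crashed for 2 days
```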
URL: https://www.sciencedirect.com/science/article/pii/B978012811244100015X
SUPPLY CHAIN AT THE TACTICAL LEVEL
MANISH GOVIL PHD, JEAN-MARIE PROTH PHD, in Supply Chain Design and Management, 2002
4.3.2.6. Reducing Costs
Being able to measure profitability is a key issue in any production system. For years, so-called analytical accounting relied on arbitrary allocation of indirect costs to product types or services, and hence made it impossible to evaluate the true profitability of these product types or services. Accounting techniques have evolved dramatically (and positively) during the past few years due to the new project-oriented paradigm, which is the most important characteristic of supply chains.
Supply chain management is flow oriented, which means that a supply chain is no longer viewed as a set of departments specialized in some activities but, rather, as a set of projects, with each project having a specific objective such as manufacturing a given type of product or providing a well-defined service. To reach this goal, each project uses the facilities provided by the supply chain, ensuring that no barriers between the activities delay the completion of the project. Cost analysis should include each of the projects. More precisely, there should be a one-to-one relationship between the projects and cost data. Indeed, cost evaluation could be detailed to different levels. A cost can be evaluated for each activity performed in the project, but usually activities are grouped into activity centers, and a cost is evaluated for each activity center that takes part in the project under consideration. In some cases, projects are categorized in accordance with customer segments, which provides a more detailed view of the costs related to the projects or types of product.
Another important aspect of cost evaluation in supply chains is the concept of incremental costs. According to this point of view, only the additional costs incurred by the activity centers to perform a project are taken into account in the evaluation of the project. Indeed, the resulting cost will depend on the projects already performed in the supply chain. For instance, when launching a new type of product, a supply chain may take advantage of some underloaded resources already available. This would lead to a low incremental cost. On the contrary, the same type of product launched in a supply chain in which none of the existing resources can be used would lead to a high incremental cost. Usually, this problem can be neglected due to the reasonable spectrum of resources that are typically present in a supply chain.
If we want to evaluate the incremental costs resulting from the manufacturing of a given type of product for a given customer segment, we have to answer the following question: What is the cost per unit that could be avoided in the following activities if we stop this production?
- Design and redesign
- Order processing
- Raw materials and components
- Marketing
- Communication
- Documentation
- Transportation
- Manufacturing, which includes salaries, energy consumption, maintenance, wear of machines, quality control, and material handling
- Rework, in the case of lack of quality
- Management
- Specific storage facilities
- Promotional activities
- Packaging
- Delivery
Note that these costs should be recorded when the corresponding activities are performed.
Table 4.4 represents the way in which cost evaluation should be done in a supply chain. P1, P2, …, Pk are the projects or products being worked on in the supply chain; Pi1, Pi2, …, Piki are the parts of Pi, decomposed in accordance with customer segments; and AC1, AC2, …, ACn are the n activity centers that comprise the supply chain. The costs associated with projects and activity centers that are not represented in the table are assumed to be zero. This kind of cost analysis may lead to a decision to stop a given production activity or service for a specific type of customer or, on the contrary, to introduce a new production activity or service in the supply chain.
Project | Customer Segment | AC1 | AC2 | … | ACn | Total
---|---|---|---|---|---|---
P1 | P11 | 20 | 15 | … | 30 | 65
 | P12 | 10 | 25 | … | 15 | 50
P2 | P21 | 20 | 30 | … | 10 | 60
 | P22 | 25 | 5 | … | 18 | 48
 | P23 | 10 | 10 | … | 5 | 25
⋮ | ⋮ | | | | |
Pk | Pk1 | 17 | 3 | … | 9 | 29
 | Pk2 | 4 | 7 | … | 12 | 23
Total | | 106 | 95 | … | 99 | 300
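The table lends itself to a very small data structure: one row of incremental costs per project part (customer segment) and activity center. The sketch below mirrors the figures in Table 4.4 (with the omitted intermediate activity centers left out) and shows how a row total answers the "cost avoided if we stop this production" question, while a column total gives each activity center's incremental load.

```python
# Incremental costs by project part (customer segment) and activity center,
# mirroring the figures in Table 4.4 (the omitted intermediate activity centers
# are left out). A row total is the cost avoided if that production for that
# customer segment is stopped; a column total is an activity center's load.
costs = {
    ("P1", "P11"): {"AC1": 20, "AC2": 15, "ACn": 30},
    ("P1", "P12"): {"AC1": 10, "AC2": 25, "ACn": 15},
    ("P2", "P21"): {"AC1": 20, "AC2": 30, "ACn": 10},
    ("P2", "P22"): {"AC1": 25, "AC2": 5, "ACn": 18},
    ("P2", "P23"): {"AC1": 10, "AC2": 10, "ACn": 5},
    ("Pk", "Pk1"): {"AC1": 17, "AC2": 3, "ACn": 9},
    ("Pk", "Pk2"): {"AC1": 4, "AC2": 7, "ACn": 12},
}

def row_total(part):
    """Incremental (avoidable) cost of one project part across all activity centers."""
    return sum(costs[part].values())

def column_total(center):
    """Total incremental cost charged to one activity center across all projects."""
    return sum(row.get(center, 0) for row in costs.values())

print(row_total(("P1", "P11")))           # 65, as in the Total column
print(column_total("AC1"))                # 106, as in the Total row
print(sum(row_total(p) for p in costs))   # 300, the grand total
```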
Indeed, in the incremental evaluation described previously, we assume that each project (product type or service) is performed optimally (i.e., at the lowest cost). This implies that the following questions have been answered:
- For each activity required to complete the project, is it possible to reduce the cost by using other resources or changing the way the activity is performed (Kaizen-like approach)? If the answer is "yes," what would be the cost?
- Is it possible to perform the same project using another process that is cheaper than the current one (reengineering)? If the answer is "yes," what would be the cost?
The tool often used to analyze and possibly reduce the costs of a project is a graph in which each of the activities is represented with its cost, evaluated as explained previously. Figure 4.5 represents such a graph in which the project consists of manufacturing a type of product. To reduce the size of the graph, we represent five activity centers instead of the detailed activities. Buy, make, move, store, and sell denote these activity centers.
The following basic rules are taken into account when evaluating the costs in a supply chain:
- Costs should be attached to projects instead of departments (i.e., the approach when evaluating costs should be horizontal instead of vertical).
- Only incremental costs should be considered. These incremental costs should be evaluated for each activity of the project and even for each customer segment.
The following approaches are used to reduce the incremental costs of a project:
- A gradual approach, similar to the Kaizen approach applied to improve quality
- A more abrupt approach that consists of drastically changing the manufacturing process (reengineering)
URL: https://www.sciencedirect.com/science/article/pii/B9780122941511500047
Renewables Integration Through Direct Load Control and Demand Response
Theodore Hesser, Samir Succar, in Smart Grid, 2012
Wind Integration Cost
Wind integration costs are typically defined as those incremental costs incurred in the operational time frames that can be attributed to the variability and uncertainty introduced by wind generation. These integration costs can be minimized by thoroughly utilizing ancillary services provisioned by DLC. The increased operational costs, brought onto the bulk power system, can be temporally disaggregated to include regulation, load following, and unit commitment. The integration costs associated with the merit order effect are sometimes treated separately and sometimes bundled into unit-commitment costs. Studies find that the cost of integrating wind rises with greater wind penetrations [5]. DLC DR is capable of decreasing the regulation and load-following portions of the integration costs. Nine of the sixteen wind integration studies surveyed by Wiser disaggregate integration costs by temporal classification [21]. With the caveat that each study represents a different relative wind capacity, ranging from 11% to 48%, Figure 9.7 illustrates the average share of each temporal classification's integration cost.
Unit-commitment integration costs are typically greater than regulation or load-following integration costs. Unit-commitment cost-mitigation strategies for wind integration include, but are not limited to, aggregating wind plant output over large geographic regions [22], consolidating balancing authorities [23], increasing the fidelity of wind forecasting [24], intra-hour wind scheduling [25], making better use of physically (as opposed to contractually) available transmission lines, dynamic thermal line rating [26], and improved unit-commitment algorithms that incorporate adaptive load management [27]. Economic DR programs that provide dynamic pricing signals to participants can be utilized to mitigate the unit-commitment costs of wind integration.
Minimizing regulation and load-following integration costs is, of course, also essential to maximizing wind capacity integration. It is worth noting that Figure 9.7 is a representation of integration costs, not capacity. Therefore, since the price of regulation is several times that of other services, the cumulative MWs of required regulation are an even smaller fraction of the ancillary MWs required to integrate wind. The annual average price ($/MWh) of regulation up and down is typically two to ten times that of spinning reserves and twenty to thirty times that of non-spinning reserves [28].
The clear trend among the studies surveyed is that unit-commitment costs account for the majority of operational costs associated with VER integration. While there are exceptions, these are typically attributable to variations in accounting methodology employed by the study authors. For example, the Avista study, which focused on wind integration in the Pacific Northwest region, attributes the largest fraction of integration costs to load-following services [29]. This anomaly is driven by the study's mingling of wind forecasting error with load-following requirements. Wind forecasting errors generally fall into the unit-commitment bin, although some amount of intra-hour forecasting errors falls into load following as well.
Strategies for decreasing regulation and load-following integration costs are less extensively documented than those of unit commitment. Utilizing DR to firm VERs through ancillary services provides such a strategy.
URL: https://www.sciencedirect.com/science/article/pii/B9780123864529000097
Devices for direct production of mechanical energy
In The Efficient Use of Energy (Second Edition), 1982
Equipment.
Peak lopping requires certain additional equipment, and this incremental cost should be charged against peak lopping, not the total capital employed, in order to obtain a true picture. Ultimately, for convenience, all the capital can be allocated to this duty, but the financial mechanics should be appreciated in the costing presentation by noting that standby security is then 'free'.
If parallel operation with the Electricity Supply Board is not catered for in the generating equipment purchased, peak lopping can then only be carried out by supplying isolated circuits independently. This is not very satisfactory, but in some recent instances this practice has had to be used where standby plant was installed and the local Board had requested that the site load be reduced to say 60% of normal site requirement. If the standby plant had been designed to be capable of peak lopping the exercise would have been easy to carry out.
URL: https://www.sciencedirect.com/science/article/pii/B9780408012508500210
Source: https://www.sciencedirect.com/topics/engineering/incremental-cost