1-2 Flashcards
(15 cards)
Define a project
A project is a temporary and unique task with a clear goal. To estimate cost, we need to know:
Why the project is done (business need)
What it will create (deliverables)
How it will be done (work required)
Explain and define the concept of project cost.
What is a common problem with project cost?
What is the difference between cost and price?
The project cost is the total amount of money needed to complete the project. This includes everything: people, materials, equipment, and other resources.
A common problem is deciding what counts as a project cost. For example, if company employees work on the project:
If their salaries are counted as OPEX (operating expenses, the day-to-day costs of the business), they’re not part of the project budget.
If counted as CAPEX (capital expenditure, a project-specific investment), the project appears more expensive.
Cost
is the amount of money a business spends to produce a product or service. This includes materials, labor, manufacturing, and other related expenses.
Price
is the amount a customer pays to buy the product or service. It’s usually set higher than the cost so the business can make a profit.
Profit = Price − Cost
In the long run, companies must earn more than they spend. Price is often set by the market—where supply meets demand.
Explain the project budgeting process. What is the difference between budgeting and cost estimation?
The project budgeting process helps plan, manage, and control how money is used throughout the project. It has four main steps:
Resource Planning – Figure out what resources the project needs (like labor, equipment, and materials).
Cost Estimation – Estimate how much those resources will cost in total. This gives you a rough idea of the full project cost.
Cost Budgeting – Add a timeline: break the estimated cost down into phases or tasks. This shows when money will be spent, helping with cash flow.
Cost Control – During the project, track spending to make sure it stays within the budget. Tools like Earned Value Analysis help measure performance (see the sketch below).
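As a rough illustration of the cost-control step, here is a minimal sketch of the standard Earned Value formulas; the project figures are invented:

```python
# Minimal Earned Value Analysis sketch; all figures are invented.
# PV = planned value, EV = earned value, AC = actual cost (standard EVM terms).
pv = 40_000.0  # budgeted cost of the work scheduled so far
ev = 35_000.0  # budgeted cost of the work actually completed
ac = 42_000.0  # money actually spent so far

cpi = ev / ac  # Cost Performance Index: < 1 means over budget
spi = ev / pv  # Schedule Performance Index: < 1 means behind schedule
cv = ev - ac   # Cost Variance: negative means overspending

print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}, CV = {cv:,.0f}")
# CPI = 0.83, SPI = 0.88, CV = -7,000 -> over budget and behind schedule
```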
The difference between cost estimation and budgeting:
Cost estimation answers: “How much will this project cost in total?”
Budgeting adds: “When will we spend that money?”
What is the purpose of project cost estimation? What are the outputs of a proper cost estimation analysis?
The purpose of project cost estimation is to figure out how much money is needed to complete a project. It helps the organization:
Make sure the project is affordable, avoid wasting money and resources, and reduce the risk of going over budget.
A proper cost estimation gives three important outputs:
Point Estimate – A single best-guess number for total cost. Example: “The project will cost $50,000.”
Range Estimate – A range that shows uncertainty. Example: “We’re 95% sure it’ll cost between $40,000 and $60,000.”
Probability Distribution – A detailed chart (often made using Monte Carlo simulations) that shows how likely different cost outcomes are.
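To see how all three outputs can come from one analysis, here is a minimal Monte Carlo sketch; the cost distributions and numbers are invented for illustration:

```python
import numpy as np

# Minimal Monte Carlo cost sketch: three work packages with uncertain costs.
# All distributions and figures are invented assumptions.
rng = np.random.default_rng(42)
n = 100_000

labor     = rng.normal(25_000, 4_000, n)              # fairly well known
materials = rng.triangular(8_000, 10_000, 15_000, n)  # min / most likely / max
equipment = rng.lognormal(np.log(12_000), 0.25, n)    # right-skewed risk

total = labor + materials + equipment          # the probability distribution

point = total.mean()                           # point estimate
low, high = np.percentile(total, [2.5, 97.5])  # 95% range estimate
print(f"Point: {point:,.0f}; 95% range: {low:,.0f} to {high:,.0f}")
```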
What is a cost estimating relationship (CER)?
What is it a part of?
A Cost Estimating Relationship (CER) is a formula or model used to estimate the cost of a project or work package based on one or more key variables—called cost drivers. These might include things like hours of labor, number of components, or amount of materials.
The CER is built using:
Expert judgment (experience and intuition), or
Data analysis (using historical project data to find patterns)
It’s part of parametric estimation, where math and past data are used to predict future costs.
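A CER can be as simple as a fitted line. Here is a minimal sketch that fits cost to one assumed cost driver (labor hours); all data points are invented:

```python
import numpy as np

# Fit a simple linear CER:  cost = b0 + b1 * labor_hours.
# The historical (hours, cost) pairs below are invented for illustration.
hours = np.array([120.0, 340.0, 510.0, 800.0, 1100.0])  # cost driver
cost  = np.array([15e3, 38e3, 55e3, 82e3, 115e3])       # observed project cost

b1, b0 = np.polyfit(hours, cost, deg=1)  # least-squares line
print(f"CER: cost = {b0:,.0f} + {b1:,.2f} * hours")

# Apply the CER to a new work package with 600 labor hours:
print(f"Estimate for 600 h: {b0 + b1 * 600:,.0f}")
```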
What are the different parametric Cost estimation methods?
Parametric cost estimation uses data and mathematical models to estimate costs based on measurable project characteristics. Key methods include:
Average Cost – Takes the average cost from similar past projects.
➤ Simple and fast, but ignores differences between projects.
➤ Best when past projects are very similar in cost and size.
Unit Cost – Estimates cost using: Cost = Units × Cost per Unit.
➤ Example: 1000 meters of cable × $5 = $5,000.
➤ Assumes a stable unit cost and one main cost driver.
Regression Analysis – Uses statistical formulas (like OLS regression) to estimate costs from one or more cost drivers.
➤ Can include several factors (e.g., size, duration).
➤ Needs clean historical data. Works best with linear or simple relationships.
Machine Learning – Uses advanced algorithms like neural networks to detect complex patterns between cost drivers and project costs.
➤ Works well with large, detailed datasets.
➤ Requires more data and computational power than other methods.
What is the average cost method, and when should it be used?
The Average Cost Method uses the average of past project costs as a prediction for a new project. It assumes:
The projects are similar in size, type, and complexity.
Cost data has a normal (symmetrical) distribution.
There is little variation in cost (low standard deviation).
There is no strong relationship between project features and cost.
Cautions:
Outliers (very high or low costs) can skew the average.
Doesn’t adapt to unique project features—no customization.
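A tiny sketch of the method, including a quick check of the low-variation assumption; the past project costs are invented:

```python
import numpy as np

# Average cost method: use the mean of similar past projects as the estimate.
# The past project costs below are invented for illustration.
past = np.array([48e3, 52e3, 50e3, 47e3, 53e3])

estimate = past.mean()
cv = past.std(ddof=1) / estimate  # coefficient of variation

print(f"Estimate: {estimate:,.0f}, CV: {cv:.1%}")
# A small CV (here about 5%) supports the "little variation" assumption;
# a large CV or obvious outliers would argue against using this method.
```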
Simple Explanation:
This is the easiest method—just take the average of previous projects. It’s only useful when all projects are pretty much the same.
What is the unit cost approach and how is it calculated? What methods are used?
This method estimates cost as:
Cost = Quantity × Unit Cost (β)
There are 3 ways to calculate the unit cost (β):
Unweighted Average – Take the unit cost for each project (cost ÷ quantity), then average them. Simple, but treats small and large projects equally. Not reliable when project sizes vary.
Weighted Average – Gives more importance to larger projects. Reduces distortion from small/outlier projects. More accurate in most cases.
Constrained OLS Slope – Regression with no intercept (the line goes through the origin). Minimizes squared errors—more statistically sound. Performs best in tests (R² = 0.9461).
R² Comparison (Fit Quality):
Unweighted Average: 0.7682
Weighted Average: 0.9321
Constrained OLS Slope: 0.9461
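Here is a minimal sketch of the three β estimators side by side; the (quantity, cost) data are invented, and the R² values above come from the course material, not from this sketch:

```python
import numpy as np

# Three ways to estimate the unit cost beta in:  cost = beta * quantity.
# The (quantity, cost) data below are invented for illustration.
q = np.array([100.0, 400.0, 900.0, 1600.0])    # e.g. meters of cable
c = np.array([620.0, 2100.0, 4400.0, 8200.0])  # observed project costs

beta_unweighted = np.mean(c / q)     # average of per-project unit costs
beta_weighted   = c.sum() / q.sum()  # larger projects weigh more
beta_ols        = (q @ c) / (q @ q)  # no-intercept OLS: min sum (c - b*q)^2

print(beta_unweighted, beta_weighted, beta_ols)
```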
Simple Explanation:
Multiply how much you need by the cost per unit. You can figure out the unit cost in different ways, but the best method depends on how much data you have and how big your projects are.
What is LCOE, how is it calculated, and what does it mean?
LCOE stands for Levelized Cost of Electricity. It represents the average cost per MWh of electricity produced by a power-generating asset over its entire lifetime. It’s often used to compare the economic efficiency of different energy sources (like wind, solar, gas, etc.).
LCOE is calculated as the price per unit of electricity that makes the Net Present Value (NPV) of the project equal to zero. This means the total discounted revenues from selling electricity just match the total discounted costs of building, operating, and decommissioning the plant.
Steps to calculate LCOE:
Add all project costs:
CAPEX (building cost)
OPEX (operation and maintenance)
DECOM (shutdown/removal)
Add up all electricity production in MWh over the project’s life.
Discount both costs and electricity using a discount rate (to reflect the time value of money).
LCOE = Present Value of Total Costs ÷ Present Value of Total Electricity Produced
If market electricity price > LCOE → Project earns money.
If market electricity price < LCOE → Project loses money.
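A minimal sketch of the calculation for a hypothetical plant; every input below is invented:

```python
import numpy as np

# Minimal LCOE sketch; all inputs are invented for illustration.
r = 0.06                # discount rate
life = 25               # operating years
capex = 400e6           # construction cost, paid at year 0
opex = 12e6             # operation & maintenance cost per year
decom = 30e6            # decommissioning cost in the final year
mwh_per_year = 350_000  # annual electricity production

years = np.arange(1, life + 1)
disc = (1 + r) ** -years  # discount factors for years 1..life

pv_costs = capex + (opex * disc).sum() + decom * disc[-1]
pv_mwh = (mwh_per_year * disc).sum()

lcoe = pv_costs / pv_mwh  # break-even electricity price, $/MWh
print(f"LCOE = {lcoe:.1f} $/MWh")
```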
Simple Explanation:
LCOE tells you how much, on average, it costs to produce one MWh of electricity over the life of a power plant. If you sell electricity at a higher price than LCOE, the project makes money. It’s like a break-even electricity price.
What is the learning curve (Wright’s law), and why does it matter for offshore wind in Norway?
The learning curve (or Wright’s Law) says that the more you build of something, the cheaper it becomes to build each new unit. This happens due to gaining experience, improving efficiency, and optimizing production. In numbers: every time cumulative production doubles, the cost per unit drops by a certain percentage (the learning rate).
This is crucial for offshore wind power in Norway, especially floating wind, which is still expensive and relatively new. Norway is hoping that with more projects, costs will fall thanks to the learning effect.
Challenges in Norway:
Deep water and far-from-shore locations increase costs.
Floating wind is new, with limited experience/data.
Larger projects may not scale easily.
Increased wind power could lower electricity prices, making it harder to profit.
Historical data shows offshore wind hasn’t always followed the learning curve as expected.
So, while the learning curve gives hope that costs will go down, it’s not guaranteed—especially in Norway’s tough conditions.
Simple Explanation:
The more wind farms you build, the better and cheaper it should get—like getting better at a game the more you play. Norway hopes this will make floating wind affordable, but it’s tricky because their conditions are harder than usual.
How is the learning rate from Wright’s Law estimated?
How to estimate the learning rate:
Collect data from past projects:
Get unit costs ($/MW) and cumulative installed capacities for several projects.
Linearize the model:
Since the model is nonlinear, take the natural logarithm (ln) of both unit costs and cumulative capacities. This turns the equation into a linear form.
Run regression analysis:
Fit a straight line to the logged data: ln(unit cost) = b₀ + b₁ · ln(cumulative capacity). The slope b₁ is the learning exponent.
Calculate the learning rate:
Use the formula:
Learning rate = 1 − 2^b₁
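A minimal sketch of the whole procedure; the (capacity, cost) data points are invented:

```python
import numpy as np

# Estimate Wright's-law learning rate from unit costs and cumulative capacity.
# The data points below are invented for illustration.
cum_mw    = np.array([100.0, 250.0, 600.0, 1500.0, 4000.0])  # cumulative MW
unit_cost = np.array([4.0e6, 3.4e6, 2.9e6, 2.5e6, 2.1e6])    # $/MW

# Linearize:  ln(cost) = b0 + b1 * ln(cumulative capacity), then fit a line.
b1, b0 = np.polyfit(np.log(cum_mw), np.log(unit_cost), deg=1)

learning_rate = 1 - 2**b1  # cost drop per doubling of cumulative capacity
print(f"b1 = {b1:.3f}, learning rate = {learning_rate:.1%}")
```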
In short:
Costs go down as you build more.
Use logs and a line fit to find how fast.
That speed is the learning rate.
What is Econometrics?
Econometrics is the use of statistical tools, especially regression analysis, to study economic data. It helps turn real-world economic facts into models that can explain relationships and predict outcomes.
Example in course:
Estimating the investment cost (CAPEX) of offshore wind farms by relating costs to factors like installed capacity, water depth, and distance to shore using regression. This allows predicting costs for new projects based on past data.
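A minimal multiple-regression sketch in the spirit of that example; all project data below are invented:

```python
import numpy as np

# Regress offshore wind CAPEX on capacity, water depth and distance to shore.
# All project data below are invented for illustration.
capacity = np.array([200.0, 350.0, 500.0, 700.0, 900.0, 1200.0])  # MW
depth    = np.array([20.0, 35.0, 25.0, 40.0, 30.0, 45.0])         # m
distance = np.array([15.0, 30.0, 20.0, 50.0, 35.0, 60.0])         # km
capex    = np.array([0.8e9, 1.5e9, 1.9e9, 3.0e9, 3.4e9, 5.0e9])   # $

# OLS with an intercept column.
X = np.column_stack([np.ones_like(capacity), capacity, depth, distance])
beta, *_ = np.linalg.lstsq(X, capex, rcond=None)

# Predict CAPEX for a hypothetical new project: 800 MW, 35 m deep, 40 km out.
new = np.array([1.0, 800.0, 35.0, 40.0])
print(f"Predicted CAPEX: {new @ beta:,.0f}")
```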
Simple Explanation:
Econometrics is like using math and statistics to make sense of economic data, helping us predict costs or outcomes based on real examples.
What is Omitted Variable Bias?
Omitted variable bias happens when a variable that both affects the outcome and is correlated with the included variables is left out of a regression model.
Effects:
Makes coefficient estimates wrong (too big or too small).
Misleads which variables matter and how much.
Breaks the assumption that errors are unrelated to explanatory variables.
Even models with good statistics (high R²) can be misleading.
How to fix it:
Use theory/expert knowledge to choose variables carefully.
Avoid adding variables just because they look good statistically.
Use tools like instrumental variables or fixed effects if needed.
Test model stability with different variable sets.
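A minimal simulation of the bias; everything here is synthetic, with a true coefficient of 1.0 on x:

```python
import numpy as np

# Simulate omitted variable bias: z affects y AND is correlated with x.
# True model: y = 1.0*x + 2.0*z + noise. All data are synthetic.
rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)  # x is correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

# Correct model: regress y on both x and z.
b_full, *_ = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)

# Misspecified model: omit z and regress y on x alone.
b_bad = (x @ y) / (x @ x)

print("with z:", round(b_full[0], 2))  # close to the true 1.0
print("without z:", round(b_bad, 2))   # close to 2.0 -- biased upward
```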
Simple Explanation:
Leaving out a key factor from your model is like trying to tell a story missing an important part—it will give you the wrong picture, even if it looks good on paper.
What is the population orthogonality condition, and what is meant by endogeneity issue?
Population Orthogonality Condition
What it means:
The random “noise” — the stuff your model can’t explain — stays completely separate from your known variables. Formally, E[x·u] = 0: the error term u is uncorrelated with every explanatory variable x.
Why it matters:
If the unknown stuff isn’t messing with the known stuff, your model gives clear, honest results. You can trust what it says about cause and effect.
Plain-speech:
✅ Orthogonality good: Unknown stuff stays out of your predictors → clean, reliable estimates.
Endogeneity
What it means:
The unknown noise is tangled up with your known variables — like trying to measure the effect of sleep on energy, but caffeine is secretly involved too.
Why it matters:
Now your model gets confused. It can’t tell if X really causes Y, or if something else is pushing both around.
Plain-speech:
❌ Endogeneity bad: Unknown stuff sneaks into your predictors → dirty, misleading estimates.
What is Data-Mining and P-Value Hacking? Why Use Theory for Variable Selection?
Data-mining: Testing many variables and models until you find something that looks good by chance, without any real theoretical reason.
P-value hacking: Trying different variables or data manipulations until you get “statistically significant” results (p < 0.05), even if they aren’t meaningful.
Why it’s bad:
Leads to false positives—relationships that appear real but are actually random noise.
Results won’t hold up in new data and mislead conclusions.
Why use theory:
Selecting variables based on prior knowledge or theory before looking at data ensures you model real, meaningful relationships, not random patterns.
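A minimal simulation of why this is dangerous: test 100 pure-noise variables and roughly five will look “significant” at p < 0.05. All data here are random noise:

```python
import numpy as np
from scipy import stats

# Test 100 PURE NOISE variables against a pure noise outcome.
# With alpha = 0.05, about 5 will look "significant" by chance alone.
rng = np.random.default_rng(1)
n_obs, n_vars = 50, 100

y = rng.normal(size=n_obs)
false_hits = 0
for _ in range(n_vars):
    x = rng.normal(size=n_obs)   # candidate variable, unrelated to y
    r, p = stats.pearsonr(x, y)  # correlation test
    if p < 0.05:
        false_hits += 1

print(f"'Significant' noise variables: {false_hits} / {n_vars}")
# Data-mining would report these as findings; theory says they mean nothing.
```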
Simple Explanation:
Randomly trying lots of things until something “works” is cheating yourself; using theory is like having a map so you know where to look and what really matters.