Mining Operation Cost Model

This is a follow-up post to Physically locating GPU Servers at Massive Scale - #15 by megacrypto

I’ve spent the last couple of days building a cost model for a Zcash mining operation that starts with a given number of rigs, calculates the upfront investment and monthly costs, and projects a “time to ROI”.

What’s nice about my cost model is that it’s “just add water”: the more rigs I add, the higher the upfront cost, but the more daily profit (defined as revenue minus electricity) I make. And best of all, adding rigs doesn’t change my “days to ROI”, because I’m not a big enough fish to affect the network hashrate!
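The core of that model fits in a few lines. Here’s a sketch in Python; every number (rig cost, power draw, revenue per rig) is a placeholder you’d swap for your own quotes:

```python
# Rough sketch of the "just add water" cost model described above.
# All numbers are placeholders; plug in your own quotes and rates.

def mining_roi(num_rigs,
               rig_cost_usd=2500.0,         # assumed build cost per rig
               rig_power_watts=1000.0,      # assumed wall draw per rig
               electricity_usd_per_kwh=0.10,
               revenue_usd_per_rig_day=12.0):
    """Return (upfront_cost, daily_profit, days_to_roi)."""
    upfront = num_rigs * rig_cost_usd
    daily_power_cost = num_rigs * rig_power_watts / 1000.0 * 24 * electricity_usd_per_kwh
    daily_profit = num_rigs * revenue_usd_per_rig_day - daily_power_cost
    days_to_roi = upfront / daily_profit if daily_profit > 0 else float("inf")
    return upfront, daily_profit, days_to_roi

for rigs in (10, 30, 100):
    up, prof, days = mining_roi(rigs)
    print(f"{rigs:>3} rigs: upfront ${up:,.0f}, profit ${prof:,.2f}/day, ROI in {days:.0f} days")
```

Note that the ROI horizon comes out the same at every scale, since both the upfront cost and the daily profit scale linearly with rig count.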

What I’m having trouble modeling:

  • how much the mining reward will go down over time as my mining equipment depreciates
  • how price swings in zcash affect my ROI
  • how my “macro thesis” about crypto supports an appreciation in price (and how to know early if I’m wrong)
  • how $ETH going to PoS will affect the ROI of GPU mining
  • how much cost-benefit there is to overclocking the cards (or tuning them for power efficiency)

Other things I could use help with

  • anyone have contacts with a distributor that sells equipment in bulk?
  • at what stage is it worth hosting in a data center vs renting my own office space, vs placing mining rigs around my house?

I’m thinking of anteing in with 10-12 rigs to start, then scaling up to a 30-rig operation, and then to 100 rigs if the operation proves to be a worthwhile endeavor.

1 Like

Am planning something similar - have PM’d you.

You can both pm me.

Working with a few big farm owners who own 2000+ rigs…

So in other words, I can assist with many things :wink:

Just pm me.

Greetings!

3 Likes

Or people could talk about this in public. I mean, are you really keeping secrets here? Do they give you a competitive advantage?

I can answer one thing- when Ethereum goes to PoS, that hash power will go elsewhere. If ETC stays PoW, then a lot of it will go to ETC. But you can bet that the difficulty of most GPU mined coins will be going up.

I don’t think we can know the future of either the difficulty or the price of a coin… but I figure that, generally, the more hash rate, the higher the price of the coin. The two have a symbiotic relationship.

So in my modeling I just assume they will cancel each other out. This means in the future you will get fewer coins, but they will probably be more profitable, and the real threat to your enterprise is new generations of technology – either GPUs or someone making an ASIC for what you’re mining.
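That cancel-out assumption is easy to sanity-check numerically. A toy projection, with made-up growth and yield figures:

```python
# Toy projection of the "difficulty and price cancel out" assumption:
# monthly difficulty growth cuts the coin yield, and matching price growth
# restores the USD revenue. All growth rates and yields here are made up.

monthly_growth = 0.05          # assumed growth for both difficulty and price
coins_per_month = 30.0         # assumed starting yield
price_usd = 40.0               # assumed starting ZEC price

for month in range(1, 7):
    coins_per_month /= (1 + monthly_growth)   # more difficulty -> fewer coins
    price_usd *= (1 + monthly_growth)         # symbiotic price appreciation
    print(f"month {month}: {coins_per_month:.2f} ZEC at ${price_usd:.2f} "
          f"= ${coins_per_month * price_usd:.2f}")
```

Under this assumption the USD revenue stays flat while the coin count shrinks, which is exactly the “fewer coins, but probably more profitable per coin” outcome described above.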

1 Like

I agree. One of the more appealing things about Zcash is that the developers have stated from the beginning that Zcash cannot be mined with ASICs. I certainly hope that holds true for all of us investing in multiple GPU rigs.

1 Like

So, one thing I have discovered is that there are interesting results when you model the different cards. When you look at just the revenue per watt, higher-end cards do better, but when you account for the capital costs of the cards, lower-end cards do better. A 1060 gets a lot of the performance of a 1080 at a lot fewer dollars. The 1080 gets more performance at not that many more watts.

Since difficulty can be presumed to go up and new generations of cards are coming out, should one focus on return on dollars? Or should one focus on return on power, as that will be the ultimate determinant of a card’s profitability?
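To make the trade-off concrete, here are the two metrics side by side; the hashrates, wattages, and prices below are rough placeholders, not benchmarks:

```python
# Comparing "return on power" (Sol/W) vs "return on dollars" (Sol/$) for
# two card tiers. Figures are illustrative placeholders, not measurements.

cards = {
    "GTX 1060": {"sols": 300, "watts": 120, "price_usd": 250},
    "GTX 1080": {"sols": 550, "watts": 180, "price_usd": 600},
}

for name, c in cards.items():
    print(f"{name}: {c['sols'] / c['watts']:.2f} Sol/W, "
          f"{c['sols'] / c['price_usd']:.2f} Sol/$")
```

With numbers in this ballpark, the higher-end card wins on Sol/W while the lower-end card wins on Sol/$, which is the pattern described above.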

Mind sharing changes with your model? I’m doing essentially the same thing.

For instance, I modeled power supply pricing (e.g., 80+ Gold vs. 80+ Platinum and Titanium). With a two-year lifespan, the price difference and the energy savings about match up. But if PSUs last 10 years, then you’ll save enough to make it worth it (if you have a zero cost of capital; e.g., what would bitcoin have done if you had just put that extra $100 into bitcoin for 10 years?)

Also I discovered that you get more efficiency by using 240V, and these power supplies support that. So I’m looking into setting up a 240V system.
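A quick way to check whether the efficiency premium pays off (the load, electricity rate, and efficiency figures below are assumptions):

```python
# Does a pricier, more efficient PSU pay for itself? A sketch with assumed
# numbers: 1000 W DC load, 24/7 operation, $0.10/kWh.

dc_load_w = 1000.0
usd_per_kwh = 0.10
hours_per_year = 24 * 365

def annual_cost(efficiency):
    """Annual electricity cost of the wall draw needed to supply the DC load."""
    wall_watts = dc_load_w / efficiency
    return wall_watts / 1000.0 * hours_per_year * usd_per_kwh

gold = annual_cost(0.90)       # assumed 80+ Gold efficiency at this load
titanium = annual_cost(0.94)   # assumed 80+ Titanium efficiency
savings = gold - titanium
print(f"annual savings: ${savings:.2f}")
print(f"2-year savings: ${2 * savings:.2f}, 10-year: ${10 * savings:.2f}")
```

Compare the 2-year and 10-year savings to the PSU price premium; with these assumed numbers the premium only clearly wins on the long lifespan, matching the point above.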

2000 GPU rigs sounds like you’re getting cheap Icelandic geothermal power, or maybe you have your own little water dam, or maybe your uncle happens to be a Chinese official at some electric company.

ROI is not something you can calculate on this kind of gamble if you don’t have extremely cheap energy. Don’t buy more cards than you feel you can afford to support the community.

3 Likes

Hey there all. I am considering doing something similar in terms of setting up the business.

If you all have a group message / thread going I would like to join. It would be nice if we could work together a bit on the pitch to our respective investors. If anyone is using Nvidia cards on Linux then my business partner and I might have some monitoring code (logging for clock speeds, temperature, power, etc) that could be helpful and that we would love some help with.

Are any of y’all software developers, or at least able to develop basic scripts to automate processes?

Also, if anyone wants some deals on GPUs in bulk, I have a separate business as an authorized reseller and can get you close to the price of what the distributors are selling to me (thinking you pay shipping plus $5-$10 per card – but we can talk about it **You must reside in the US though).

Also to the larger players in the thread… Do you know how to obtain industrial electricity rates vs. basic commercial rates? That rate difference can make a big difference in terms of the return.

One more thing: What kind of operation life for the GPUs are you all using as assumptions in your models? That is, how long do you think the GPUs will run for assuming we can keep the operation temps between 50-60 C under full load (no overclocking) but running 24/7/365 with say 2% downtime annually (1 week). **Ignoring the effect that the network difficulty may have on the profitability of keeping the card running.

I know the price of Zcash has slipped here in the last few weeks, which is not great news for these business models BUT I would love to keep talking about this so we can all be more prepared to pull the trigger if and when the time is right.

Be easy my friends!

nirvana16d
So, one thing I have discovered is that there are interesting results when you model the different cards. When you look at just the revenue per watt, higher-end cards do better, but when you account for the capital costs of the cards, lower-end cards do better. A 1060 gets a lot of the performance of a 1080 at a lot fewer dollars. The 1080 gets more performance at not that many more watts.

Since difficulty can be presumed to go up and new generations of cards are coming out, should one focus on return on dollars? Or should one focus on return on power, as that will be the ultimate determinant of a card’s profitability?

I have been running my equipment decision models using a metric of:
Sols / (annual electricity cost + (33% * cost of the hardware) )

Why 33% of the cost of the hardware, you ask? Well, FASB 360-10-35-3 (Financial Accounting Standards Board) says that you should depreciate your asset over its useful life. Industry norms suggest depreciating computer equipment over 3 years (even though Nvidia says their chips will last for 50,000 hours, or 5.7 years). You can use 33% or 17.5%, BUT you need to be consistent in how you deal with these depreciation numbers. This is assuming “Straight-Line Depreciation” (with a $0 salvage value assumption), and of course there are other methods of depreciation that you could use, like the Modified Accelerated Cost Recovery System (MACRS).

A rational reason for supporting your method and consistency are really the only requirements.

So the metric I calculated above will best (in my opinion) represent the combination of electricity costs and the cost of the hardware. This is what I am using.
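For anyone who wants to drop that metric into a script, here is a minimal version (the electricity rate and card figures are made up):

```python
# The equipment-decision metric described above: Sol/s per annual dollar,
# where hardware is straight-line depreciated over 3 years (33%/yr).
# Electricity rate and card figures below are placeholders.

def sols_per_annual_dollar(sols, watts, hardware_cost_usd,
                           usd_per_kwh=0.10, depreciation_rate=1/3):
    """Sols / (annual electricity cost + annual depreciation)."""
    annual_electricity = watts / 1000.0 * 24 * 365 * usd_per_kwh
    annual_depreciation = depreciation_rate * hardware_cost_usd
    return sols / (annual_electricity + annual_depreciation)

# Example with assumed card numbers:
print(sols_per_annual_dollar(sols=300, watts=120, hardware_cost_usd=250))
```

Swap `depreciation_rate` to 17.5% (a 5.7-year life) if that is the assumption you want to carry consistently through your model.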

I am not a CPA but I did get a Masters in Accounting and a Masters in Data Science. (seems like the Accounting is coming in handy more than I thought lol)

If you have any more questions about your financial modelling, fire away or Message me personally. Happy to help!

Anything can be mined with an ASIC, so that is incorrect. But it may be less likely than for other coins.

1 Like

I agree with this. ASICs can be developed for just about anything. Actually every specialized component on your motherboard is “technically” an ASIC. If there is a long term profit motive and justification, an ASIC will be developed.

But of course the current Bitcoin ASICs cannot be re-purposed for Zcash. So we are safe there.

The question is not whether ASICs can be developed. The question is whether an ASIC can be developed that has a clear advantage over GPUs, both current and next-gen cards, in terms of price/performance/power consumption. Then there’s resale value … you can’t play video games on an ASIC after its mining days are over. :slight_smile:

1 Like

Point well taken!!!

I think the Zcash team have said that they can change the algorithm in the future, which would render ASICs useless - which I think is probably enough of a threat for people to not invest in developing ASICs for Zcash.

Out of curiosity, do you have a “below $X, it isn’t profitable to mine ZEC” threshold built into your cost model? And what would you do if ZEC falls from its current $37-$40 range to that level?

For me, this figure is about $22 per ZEC to break even. Above that, I make a “paper” profit. Below that is a “paper” loss.

I’m taking a long-term view, so I’m thinking that if ZEC continues to drop, even to $10, I could keep mining and then regain profits once the value goes back up.

With any kind of investment, profits are only realised when you actually sell. Otherwise they are “paper” profits.
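A minimal break-even sketch, if it helps anyone build their own figure; the yield and power numbers below are assumptions, not anyone’s actual inputs:

```python
# Break-even sketch: the ZEC price at which daily mining revenue equals
# daily electricity cost. Yield, wattage, and rate are assumed figures.

zec_per_day = 0.11             # assumed ZEC yield per rig per day
rig_watts = 1000.0             # assumed wall draw per rig
usd_per_kwh = 0.10             # assumed electricity rate

daily_electricity = rig_watts / 1000.0 * 24 * usd_per_kwh
breakeven_price = daily_electricity / zec_per_day
print(f"break-even at ${breakeven_price:.2f}/ZEC")
```

Anything above that price is a “paper” profit on the day’s electricity; a fuller model would also fold in depreciation, as discussed earlier in the thread.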

1 Like

Out of curiosity, for those trying to do a longish term cost model, what did you estimate for the rise/fall in difficulty?

I was expecting an average 5% increase in difficulty each month - but the difficulty has dropped by about 7% in the last two weeks. Putting the new figures in makes the future forecast change wildly. I know a lot of this is finger-in-the-air estimation of “what do you think the ZECUSD rate will be at some point in the future”, and no one can predict that with any accuracy.
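For what it’s worth, here is a small sensitivity check showing how much the assumed monthly difficulty change swings a payback estimate (the upfront cost and starting profit are placeholders):

```python
# Sensitivity of "months to ROI" to the assumed monthly difficulty change.
# Upfront cost and starting monthly profit below are made-up figures.

def months_to_roi(upfront_usd, monthly_profit_usd, difficulty_growth):
    """Profit shrinks (or grows) each month with difficulty; None if >10 years."""
    balance, profit = upfront_usd, monthly_profit_usd
    for month in range(1, 121):
        balance -= profit
        if balance <= 0:
            return month
        profit /= (1 + difficulty_growth)   # rising difficulty erodes profit
    return None

for growth in (-0.07, 0.0, 0.05, 0.10):
    print(f"{growth:+.0%}/month difficulty: "
          f"ROI in {months_to_roi(25000, 3000, growth)} months")
```

Even a few points of monthly difficulty change moves the payback date by months, which is why the forecast “changes wildly” when you plug in new figures.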

2 Likes

The only thing you can say with certainty is that whatever model you, or anyone else, comes up with, it will be far from the actual reality that unfolds, and for reasons that never even occurred to you. This whole endeavor is, by design, a never ending crap shoot. Think about sitting there, last May, coming up with cost model projections for ETH. To say that there were a couple of surprises would be a massive understatement. Anything can, and probably will, happen. If you are operating under any other assumption, you’re playing the wrong game, in my opinion.

1 Like

indeed, lots of surprises possible, good and bad

if you start a farm you need to play the long game

and make sure you have a good power contract, saves you a LOT!

Yes I’ve been depreciating the cards. I’m not sure how often new cards will come out and what effect that will have on the hash capacity of the whole network.

I think of the network as all the GPU miners-- across all alt coins. Whatever coin is most valuable they will switch to it. So we’re in a sense in competition with each other.

ZEC will continue to decline, I think, until there’s enough capacity that it reaches its fair market value. I think it’s still constrained by lack of availability.

One thing to consider is that you can depreciate your motherboard and power supply over different time periods… I think PSUs often carry 10-year warranties, and a motherboard should last you longer than a GPU.
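Sketched as a schedule, with assumed component lives (straight-line, $0 salvage, per the depreciation discussion above):

```python
# Depreciating components over different useful lives, as suggested:
# GPUs over 3 years, motherboard over 5, PSU over 10. All costs and
# lifespans here are assumptions for illustration.

components = [
    # (name, cost_usd, useful_life_years)
    ("6x GPU",      1800.0, 3),
    ("motherboard",  120.0, 5),
    ("PSU",          150.0, 10),
]

annual_depreciation = sum(cost / life for _, cost, life in components)
print(f"annual depreciation per rig: ${annual_depreciation:.2f}")
```

Splitting the lives out this way lowers the annual depreciation charge compared to lumping the whole rig in at 3 years, since the long-lived parts amortize more slowly.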