Safe long-term CPU temp? [closed]
Solution 1:
As far as I know (if someone has precise data, please report), there is no serious statistical data on how long modern microchips last as a function of their running temperature.
I know of two reasons for this:
- By the time we could gather lifetime data for a given microchip technology, that is, years after it was manufactured, that technology is... obsolete.
- Only the microchip corporations would be interested in doing the research to get such precise info about their products (or their competitors'). And they are not willing to share it; even if they did, I wouldn't trust it very much.
So I believe that end users only have the (often purely intuitive) knowledge of experienced IT specialists to go on.
This is mine (a rough sketch of these rules of thumb, in code, follows the list):
- Microcircuitry engineering is a bit like cooking: it involves a lot of probability and often gives rather random results. So you don't know how good a microchip is until you have fabricated it. Even then, its deterioration will also behave somewhat probabilistically.
- 40ºC (104ºF) or below is heaven for every microchip.
- 50ºC (122ºF) is a not bad temperature for any microchip.
- A microchip starts taking lifetime damage at 60ºC (140ºF).
- A chip running at 70ºC (158ºF), 24 hours a day and 7 days a week, will probably last 2-6 years.
- A chip running at 80ºC (176ºF), 24 hours a day and 7 days a week, will probably last 1-3 years.
- A chip running at 90ºC (194ºF), 24 hours a day and 7 days a week, will probably last 6-20 months.
- In this respect there is no difference between the main computer chips: GPU, CPU, Northbridge, Southbridge, etc.
- A given temperature is harder on a chip at high processor usage than at low processor usage. For example: a CPU that reaches 70ºC (158ºF) during 10 hours on a nearly idle Windows desktop suffers less than a(nother) CPU that reaches 70ºC (158ºF) during 10 hours of intensive CPU processing (e.g. SuperPI). Some hardware engineers report this could be because in the second case the CPU is using most of its microcircuitry, while in the first case only a small part of it is active.
- The general rule: microcircuitry is like an electrical printed circuit board whose tracks are extremely close together (there are often only 4-5 molecules between two tracks), so heat slowly melts the tracks as time goes by. Keep things as cool as possible.
- The general rule when reading the manufacturer's data: they want you not to bother cooling anything, because then it will break just after the warranty period (sometimes only a few weeks after it; incredible, I know). "It's just business", as Al Capone said.
- Prevention is important (better than waiting for failures to repair): when things start to fail, it could be due to tracks melting in the microcircuitry, or due to minor track dilation. The second case is a temporary problem. The first one is probably a permanent one.
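Just to make these rules of thumb concrete, here is a minimal Python sketch that encodes the temperature bands above. The function name and exact cut-offs are my own interpretation of the list, not measured data:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def rough_lifetime_estimate(temp_c):
    """Return a rough 24/7 lifetime guess for a chip held at temp_c.

    These bands simply encode the rules of thumb from the list above;
    they are intuition, not measured statistics.
    """
    if temp_c <= 40:
        return "heaven for every microchip"
    if temp_c <= 50:
        return "not a bad temperature; nothing to worry about"
    if temp_c < 70:
        return "lifetime damage slowly accumulating"
    if temp_c < 80:
        return "probably 2-6 years"
    if temp_c < 90:
        return "probably 1-3 years"
    return "probably 6-20 months"

for t in (40, 55, 72, 85, 95):
    print(f"{t}ºC ({c_to_f(t):.0f}ºF): {rough_lifetime_estimate(t)}")
```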
History: Changes in this document (read only if this is not the first time you visit it):
- 03-05-2014: Added ºF temperature data conversions.
Solution 2:
I'm going to cover hardware first and then towards the end what you can do if you're stuck with that hardware as your only viable option right now.
My last CPU was an X3 720 (which unlocked to an "X4 20"); it idled at 150 °F (66 °C), and I purchased it the very first day AMD 45 nm CPUs were released.
I'm now running an AMD 8350 eight-core at 4 GHz, and it always runs at room temperature.
To understand how CPUs and heat correlate, you need to understand two things: architectural design and manufacturing design. AMD and Intel are both CPU architecture designers, though AMD is an underdog and split off its manufacturing into an independent company called Global Foundries in order to stay competitive with Intel, which is ahead in manufacturing technology by a full node.
What is a full node and what is a half node? A full node is a mainstream process technology in which the silicon feature sizes are shrunk. The smaller the CPU/GPU die, the shorter the distances electrons have to travel through the silicon corridors inside the CPU, which also increases efficiency.
- 65 nm: circa 2007.
- 45 nm: Intel late 2007 / everyone else late 2008.
- 32 nm: Intel January 2010 / everyone else 2011.
- 22 nm: Intel 2012 / everyone else 2014.
- 14 nm: Intel late 2014/early 2015 / everyone else about 2015 (if we're lucky).
Now you also have to think about CPUs being made a lot like Gramma baking cookies... they're not all made the same. Some just don't come out as well as others. My socket AM3 quad core couldn't clock from 2.8 GHz to anything beyond 3.1 GHz and change, while my already blazing fast 4 GHz eight-core beast easily goes to 4.444 GHz before needing more voltage to continue overclocking (haven't messed with that yet). Keep in mind that my 32 nm 4 GHz eight-core CPU was built on a mature/refined 32 nm process, not the first run of it.
CPU Cores / Watts
My friend has the same CPU, and his house was 48 °F (9 °C) one "winter" day in Florida... his CPU was also 48 °F (9 °C). That won't matter when you have a high/full load on your CPU, though. Our 8350s have a 125 watt TDP (thermal design power). The more they work, the hotter they'll get, and that depends on how you configure your OS and applications.
You also can't forget that with Intel, unless you drop $600, you're stuck at four cores tops, while $200 got me eight cores. Talking load balance, for example: Firefox will occasionally freeze and use 25% of a quad core's CPU cycles; on my eight-core CPU it uses only 12.5%. So take a CPU's TDP and divide it by its core count: 125 watts / 8 = 15.6 watts per CPU core. Keep in mind the TDP isn't drawn by the CPU cores alone, but it should give you an idea. An Intel 3630QM's TDP is 45 watts, and 45 watts / 4 = 11 watts per core, but keep in mind it's a mobile CPU. Most desktop Intel CPUs run at 77 watts now, and 77 watts / 4 = 19 watts per core.
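If you want to play with the watts-per-core arithmetic yourself, here is a small Python sketch using the TDP figures quoted above (the names, TDPs and core counts are just the examples from this answer, not a spec sheet):

```python
# Rough watts-per-core comparison using the TDP figures quoted above.
# TDP is a design power target, not an exact per-core measurement, so
# treat these numbers as a coarse comparison only.
cpus = {
    "AMD 8350 (desktop)":    {"tdp_watts": 125, "cores": 8},
    "Intel 3630QM (mobile)": {"tdp_watts": 45,  "cores": 4},
    "Typical Intel desktop": {"tdp_watts": 77,  "cores": 4},
}

for name, spec in cpus.items():
    per_core = spec["tdp_watts"] / spec["cores"]
    print(f"{name}: {spec['tdp_watts']} W / {spec['cores']} cores "
          f"= {per_core:.1f} W per core")
```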
If you want to reduce the heat, my hardware suggestion is as follows:
The more cores you have and the lower the TDP per core, the better.
Software Configuration
Killing the Superfetch service and killing off anything else that sucks CPU cycles is your best bet. You're likely stuck with this as your only option (unless you can buy some new hardware).
Run msconfig, go to the Services tab, hide all Microsoft services, and disable everything except your anti-virus. Go to the Startup tab and do the same (there is no checkbox there to hide Microsoft items, though).
Also keep in mind that things like Windows Updates (the MS .NET language compiler, for example, is a HUGE CPU hog) will sometimes spend an hour or two at near 100% CPU usage.
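If you want to see what is actually eating CPU cycles before you start disabling things, here is a rough Python sketch using the third-party psutil package (an assumption on my part, not something built into Windows; `pip install psutil`):

```python
import time

import psutil  # third-party: pip install psutil

# Prime the per-process CPU counters, then sample again after a short wait;
# the first cpu_percent() call always reports 0.0 for each process.
procs = list(psutil.process_iter(["name"]))
for p in procs:
    try:
        p.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(2)

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.info["name"], p.pid))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# Show the ten busiest processes so you know what to chase down.
for cpu, name, pid in sorted(usage, key=lambda t: t[0], reverse=True)[:10]:
    print(f"{cpu:5.1f}%  {name} (pid {pid})")
```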
Make sure you have more RAM than you need and kill off the pagefile (this keeps the hard drive from grinding itself to death and increases system performance at the same time), and you'll get the most out of your system.
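And before killing the pagefile, it's worth checking you really do have more RAM than you use; a quick sketch with the same psutil assumption as above:

```python
import psutil  # third-party: pip install psutil

vm = psutil.virtual_memory()
sm = psutil.swap_memory()

gib = 1024 ** 3
print(f"RAM:  {vm.total / gib:.1f} GiB total, "
      f"{vm.available / gib:.1f} GiB available ({vm.percent}% used)")
print(f"Swap/pagefile: {sm.total / gib:.1f} GiB total ({sm.percent}% used)")

# If available RAM stays comfortably high and the pagefile is barely used
# even under your normal workload, disabling the pagefile is less risky.
if vm.percent < 50 and sm.percent < 5:
    print("Plenty of headroom; the pagefile is doing very little here.")
else:
    print("Memory is being used; think twice before removing the pagefile.")
```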