Artificial intelligence has been advancing on the heels of natural intelligence for years now, not least thanks to growing hardware performance. So to work with modern ML algorithms you will most likely need a powerful laptop or desktop. This post offers two builds:
The first, a beginner ML build: the minimum system that is still reasonably comfortable to work with, costing up to $500.
The second, balanced on price/performance: more expensive and more powerful, able to handle a wide range of tasks, costing about $1000.
Initially I planned to include an enthusiast build as well, but then decided that those ready to spend serious money on ML know exactly what they want and will most likely assemble a machine for a specific task.
The main indicators are the number of cores and single-core performance. Intel has faster cores, but AMD gives you more cores for the same money. Which matters more, and where the balance lies, depends on your tasks. If you run neural networks on video cards, take Intel. If you want to handle a wide range of tasks, take AMD, because with proper parallelization the computation will ultimately finish faster. Keep in mind, though, that not everything can be parallelized. As for Hyper-Threading, it speeds the system up a little, but in practice not by much, so physical cores come first.
Budget option: 4-6 cores
Medium variant: 6-8 or more cores with good per-core boost clocks.
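To illustrate why core count matters for parallelizable workloads, here is a minimal sketch using Python's standard `multiprocessing` module. The workload function is a made-up CPU-bound toy, not a real ML task; on a multi-core machine the pooled version should finish noticeably faster.

```python
import multiprocessing as mp
import time

def burn(n):
    """CPU-bound toy workload: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8

    start = time.perf_counter()
    serial = [burn(n) for n in tasks]
    t_serial = time.perf_counter() - start

    # Spread the same tasks across all physical/logical cores
    start = time.perf_counter()
    with mp.Pool(mp.cpu_count()) as pool:
        parallel = pool.map(burn, tasks)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"{mp.cpu_count()} cores: serial {t_serial:.2f}s, "
          f"parallel {t_parallel:.2f}s")
```

The speedup is bounded by the serial fraction of your program (Amdahl's law), which is exactly why "not everything can be parallelized" above.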
Budget option: 16 GB
Medium: 32 GB as two 16 GB sticks, so you can expand to 64 GB later if necessary. Sockets LGA 1151-v2 and AM4 do not support more than 64 GB. If you want a few extra percent of performance from AMD, take faster memory and be sure to run it in dual-channel mode.
In many AI guides it is advised that the amount of RAM be twice the video memory. I have not yet formed a firm recommendation on this, but I will leave it here.
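To make the dual-channel advice concrete, here is a rough calculation of theoretical peak memory bandwidth using the standard DDR formula (each channel has a 64-bit bus; real-world throughput is lower than this ceiling):

```python
def ddr4_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    """Theoretical peak bandwidth in GB/s.

    Each channel has a 64-bit (8-byte) bus, and DDR4-XXXX denotes
    mega-transfers per second.
    """
    return mt_per_s * 1e6 * 8 * channels / 1e9

# DDR4-3200: dual-channel doubles the theoretical ceiling
print(ddr4_bandwidth_gbs(3200, channels=2))  # 51.2 GB/s
print(ddr4_bandwidth_gbs(3200, channels=1))  # 25.6 GB/s
```

This is why a single large stick is a false economy here: two sticks of the same total capacity give twice the theoretical bandwidth.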
In the budget build I suggest doing without a discrete video card at all (if the processor has integrated graphics), or taking the cheapest one you can find. That way you can put the money into more powerful components elsewhere and keep the option of an upgrade. You will not compute on it – it is only there to put an image on the monitor, nothing more! How so, one might ask? The answer is simple: the GPU matters primarily for neural networks, but on a weak video card like the GeForce 1050 the computation speed is in most cases about the same as on a powerful CPU. Some gradient-boosting implementations can also use the GPU, but in my experience there is no gain over a 1050 either. And buying a more expensive card would already push us past the $500 budget.
In the mid-range build a video card is definitely needed, and preferably a powerful one. The choice of vendor is unambiguous: NVIDIA. One important detail right away – you will not need SLI at all; TensorFlow and CUDA parallelize across cards perfectly well, and the cards can even be different NVIDIA models, not necessarily identical ones. However, neural networks are trained in batches, and a larger batch requires more memory on the video card. The larger the batch the better, and so as not to be limited from below by the weakest card's batch size, it is better to take cards with the same amount of memory.
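A rough way to see why batch size is bounded by video memory is to estimate the size of one input batch alone. The sketch below only counts the raw input tensor – activations, gradients and optimizer state add a large, model-dependent overhead on top, so treat it strictly as a lower bound; the image dimensions are illustrative.

```python
def batch_input_bytes(batch, height, width, channels, bytes_per_value=4):
    """Memory taken by one float32 input batch alone.

    This is only a lower bound: activations, gradients and optimizer
    state consume much more VRAM on top of the inputs.
    """
    return batch * height * width * channels * bytes_per_value

# A batch of 64 ImageNet-sized images (224x224 RGB, float32)
mb = batch_input_bytes(64, 224, 224, 3) / 2**20
print(f"input batch alone: {mb:.1f} MiB")
```

Scaling the batch up until training crashes with an out-of-memory error, then backing off, is the usual practical way to find the limit for a given card.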
The mining era has passed and cards have become easier to buy; whether to take a used one is for everyone to decide for themselves. To make the choice easier, here is the relative performance of different cards, with the 1050 as the baseline (1X). This lets you weigh power against price and find the optimum.
- 1050 1X
- 1050Ti 1.12X
- 1060 3Gb 1.95X
- 1060 6Gb 2.09X
- 1070 2.91X
- 1070Ti 3.39X
- 1080 3.7X
- 2070 3.9X
- 1080Ti 4.42X
- 2080 4.45X
- 2080Ti 5.79X
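Given the multipliers above, a small helper can rank cards by performance per dollar. The prices in the example call are placeholders, not real quotes – substitute your local ones.

```python
# Relative performance from the table above (1050 = 1X)
PERF = {
    "1050": 1.0, "1050Ti": 1.12, "1060 3Gb": 1.95, "1060 6Gb": 2.09,
    "1070": 2.91, "1070Ti": 3.39, "1080": 3.7, "2070": 3.9,
    "1080Ti": 4.42, "2080": 4.45, "2080Ti": 5.79,
}

def rank_by_value(prices):
    """Sort cards by relative performance per dollar, best value first."""
    return sorted(
        ((card, PERF[card] / price) for card, price in prices.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical prices in dollars -- replace with what you actually see
for card, value in rank_by_value({"1060 6Gb": 250, "1070": 380, "2070": 520}):
    print(f"{card}: {value:.4f} perf/$")
```

Remember the memory caveat from above: a slightly worse perf/$ ratio can still be the right choice if the card's larger VRAM lets you use bigger batches.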
It is better not to take a card with less than 4 GB of memory – it may not be enough even for simple tasks.
When choosing a motherboard you need to know two things: whether you will overclock the processor and how many video cards you plan to connect. I am not a fan of overclocking, but if you plan to squeeze the maximum out of the system, pay attention to overclocking support. The subtler point is the number of video cards, or rather the number of PCI-E lanes. Almost all motherboards provide 16 lanes for a single video card; problems start when you connect more cards for machine learning. An x8/x8 configuration for two video cards seems optimal to me for the mid-range build. But if you plan to use only one video card, you can save by not buying a more expensive platform.
Budget build: take an HDD, say one terabyte. Of course the system will boot more slowly than from an SSD, but you can buy one later. And if you put everything on an SSD right away, there is a risk that large datasets simply will not fit. Note that installing all the necessary libraries and programs can easily take up close to 100 GB of disk space.
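Before installing the toolchain you can check free space with Python's standard library; the 100 GB threshold below simply mirrors the estimate above and can be changed.

```python
import shutil

def enough_space(path=".", needed_gb=100):
    """Return (free GB, whether the filesystem holding `path` has enough)."""
    free_gb = shutil.disk_usage(path).free / 10**9
    return free_gb, free_gb >= needed_gb

free, ok = enough_space(".")
print(f"{free:.1f} GB free -> {'ok' if ok else 'need more space'}")
```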
In the mid-range build, take an SSD plus an HDD.