Leaderboard

Popular Content

Showing content with the highest reputation on 04/10/2014 in all areas

  1. Yet another article explaining all the details one should remember for a faster ArcGIS workflow. Check this out:

When it comes to computer hardware, there is a lot you can do to increase the performance of ArcGIS. The decisions you make about processor, disk, memory, etc. are all critical issues that are often overlooked – even by the experts. The following is a list (in order of importance) of some key hardware considerations for ArcGIS – specifically for geoprocessing tasks in ESRI ArcGIS products like ArcMap and the 'arcpy' Python module. Let's get started:

Processor: Without a doubt, this is the most critical consideration. Having a modern processor with the fastest clock speed you can afford is by far the most significant performance choice you can make. Right now in Q1 2013, my quick recommendation is the Intel Core i7 3970X for a desktop system and the Intel Xeon E5-2687W for a workstation/server system. Price aside, there are some key considerations to keep in mind when deciding which processor to buy:

Chip architecture: This basically describes the techniques used to design and construct the processor. As you might imagine, processor architecture is an extremely complex and constantly evolving field – one worth reading up on if you have a strong cup of coffee, 10 minutes or so, and enjoy melodrama. Generally speaking, you want to buy the most modern (but mature) chip architecture on the market, while also making sure the chip has a high clock speed (see below). Sometimes it takes the chip manufacturers a while to release the high clock speed models when they release a new processor architecture. This effect is described in the Intel "tick-tock" model. In any case, you should probably wait until the "tock" happens, lest you end up with a processor that is actually a bit slower than the higher clock speed version (see the next section) of last year's processor family. It is very important to note that chip architectures vary considerably, and it is unwise to make straight-across comparisons of processors with different architectures. For example, a 2.8 GHz processor model from a few years ago will invariably be slower than a 2.8 GHz processor of today. Main point here: unless there is a compelling reason, don't buy yesterday's processor architecture.

Clock speed: This is critical. The clock speed is a measure of the processor "frequency", which basically refers to how fast the processor cycles – similar to someone flipping a light switch on and off really (really) fast. With each on/off cycle, the processor "computes" something. Generally speaking, you want to buy a processor with the fastest clock speed you can afford. Of course, increased clock speed (aka processor performance) does come at a price: not only do high clock speed chips cost more to purchase, they also require more electricity to power the processor and to cool it properly. This is why many large IT shops (think large server rooms and data centers) do not generally buy the highest clock speed processors – doing so increases their electricity and cooling demands. Is performance or energy efficiency more important? I will leave that up to you to decide.

Number of cores: The argument that "more is better" sounds logical here, but in truth it really depends on your intended use. Generally, the more cores a processor has, the slower the clock speed (otherwise all those cores would get too hot).
That said, having more cores theoretically allows you to run more processes at the same time without having them compete for limited processor resources. However, all good ESRI GIS users should know by now that ArcGIS is a single-threaded application – at least for now – meaning that a single instance of ArcMap will only use the equivalent resources of a single processor core. For example, let's suppose you have a 4-core processor and are running a single instance of the Clip tool. In that case, you will only be using 25% of the total system processor resources. Having lots of processor cores only makes sense if you can effectively take advantage of them. A good example where extra cores do make sense is if your hardware will be used by many people at the same time (think Citrix, Remote Desktop Protocol, virtual servers such as Hyper-V, etc.). Another use case where lots of cores make sense is if you are an intensive multi-tasker (you are running Excel, ArcMap, Python, Access, etc. all at the same time) and/or you are a decent programmer and can write scripts to split your large GIS jobs into separate processes – for example using the Python 'subprocess' or 'multiprocessing' modules (a sketch of this appears a little further down, after the 'in_memory' discussion). Unless you fit one of these use cases, I would recommend you go with a processor that has fewer cores but a higher clock speed. Something good to know: most decent processors these days have at least two cores, and with simultaneous multithreading (Intel calls it 'Hyper-Threading'), the number of cores the operating system sees is effectively doubled. This is why your two-core Intel i5 processor appears to have four cores. What's the optimum ratio of cores to clock speed? Again, I will leave that up to you to decide. My general recommendation: clock speed first, cores second. When comparing clock speeds, make sure you are comparing the same or similar chip architectures.

Cache: A processor cache is basically a type of very fast memory buffer that physically exists inside the processor. The idea is that it is much faster to read from the processor cache than from RAM, so if you load the cache with data from RAM that have a high probability of being used, the data will be more readily available, reducing latency. A decent-sized processor cache today is typically 10 MB or more. Having a larger cache will generally make the processor more efficient and faster; how much so depends on your data and algorithm structures.

Memory (aka Random Access Memory – RAM): Think of computer memory (aka RAM) as an intermediate, temporary, and fast storage area between the disk and the processor. In a way, RAM is a smaller and faster version of a disk – with no moving parts. The more data you can read and write to/from RAM (instead of the disk), the faster your process will be. How much memory you need depends a lot on how you process data and the type of data you process. My quick recommendation is to buy at least 2 GB of RAM per processor core (possibly less if you have lots of cores). Consider 3-4 GB per core or more if the machine is going to support many concurrent users, very large GIS processes, or GIS 'power users'. One benefit of having lots of RAM available is that it raises the limit on how much the ESRI 'in_memory' workspace can hold. For users who don't know, the 'in_memory' workspace is a type of RAM disk that can be used to temporarily store tables, feature classes, and, new in v10.1, raster datasets as well.
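To make those last two ideas concrete – splitting a big job across cores with the 'multiprocessing' module and staging intermediate data in the 'in_memory' workspace – here is a rough, illustrative sketch. It assumes a stand-alone Python script run outside ArcMap, and every dataset path in it is a made-up placeholder you would swap for your own data.

# Illustrative sketch only: run one Clip per worker process, staging the
# intermediate result in the 'in_memory' workspace before copying it to disk.
# All paths below are placeholders, not real datasets.
import multiprocessing
import os

def clip_tile(args):
    # arcpy is imported inside the worker so each child process gets its
    # own session and its own 'in_memory' workspace.
    import arcpy
    in_fc, clip_fc, out_gdb, name = args
    tmp = "in_memory/{0}".format(name)           # fast RAM-backed intermediate
    arcpy.Clip_analysis(in_fc, clip_fc, tmp)
    out_fc = os.path.join(out_gdb, name)
    arcpy.CopyFeatures_management(tmp, out_fc)   # persist the result to disk
    arcpy.Delete_management(tmp)                 # free the RAM again
    return out_fc

if __name__ == "__main__":
    clip_fc = r"C:\data\study_area.gdb\boundary"     # placeholder
    out_gdb = r"C:\data\results.gdb"                 # placeholder
    tiles = [(r"C:\data\parcels.gdb\tile_%d" % i, clip_fc, out_gdb,
              "clipped_tile_%d" % i) for i in range(1, 5)]

    # cpu_count() reports logical (hyperthreaded) cores; one worker per
    # physical core is a sensible starting point, so halve it if needed.
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    results = pool.map(clip_tile, tiles)
    pool.close()
    pool.join()
    print(results)

A pattern like this only pays off when each piece of work is large enough to amortize the cost of starting a new process and importing arcpy inside it.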
In addition, more memory means that you can process larger datasets with memory-intensive tools such as Dissolve and Union. Up until a few years ago, a single instance of ArcGIS physically couldn't use more than 2.1 GB of RAM because it was compiled as 32-bit software – and in practice the limit was more like 1.4 GB or so due to system overhead. However, a few years back ArcGIS users were able to take advantage of more RAM using a little-known 'Large Address Aware' setting. With this turned on, and a 64-bit operating system, you could theoretically use up to 4 GB of RAM. Now, with ArcGIS 10.1, the 'Large Address Aware' setting appears to be turned on by default, and as of v10.1 Service Pack 1 the install package includes a new '64-bit Background Geoprocessing' installer, which theoretically allows background geoprocessing tasks to use (basically) as much memory as your system has. Similar to a processor, memory also has an operating frequency (for example, 1600 MHz), and generally the faster the frequency, the faster the memory can communicate with the processor. Be sure that your processor is compatible with the memory you are purchasing. Also, be sure you are buying the latest memory type, which is DDR3 (double data rate type 3). Note that DDR4 memory is on the horizon and will be better/faster than DDR3. My final say on RAM: since lots of RAM can be expensive, don't buy a whole bunch unless you know how to use it effectively – and if you do, then 64 GB of RAM (or more!) wouldn't be unreasonable. That said, sticking to the 2 GB of RAM per processor core rule should be sufficient for the majority of single-user ArcGIS applications. Consider more for multi-user systems and/or expert ArcGIS 'power users'.

Disk: The two largest considerations for disks are their capacity (how much they can store) and their speed (how fast they can transfer data to and from the RAM and processor). Historically, disks have had spinning magnetic platters, but in the last few years Solid State Drives (SSDs) have become increasingly popular. SSDs have no moving parts and are extremely fast – in some cases approaching the speed of RAM. In fact, SSDs resemble RAM much more than they resemble a conventional magnetic disk drive with a spinning platter. If you have noticed something reading this post up to this point, it is that the dividing lines between disk, RAM, and processor are being blurred, and it may not be too long until they are blurred even more – perhaps to the point that someday they become a single integrated piece of hardware. Well, maybe. SSDs do have disadvantages. One significant issue is price: they are relatively expensive. The other is capacity: they don't hold nearly as much as magnetic platter disks (for example, you effectively can't buy a 2 TB SSD drive). My quick recommendation: if you can afford it, buy a modern large-capacity SSD (6.0 Gb/s or more and ~500 GB, which will cost you about $500 as of Jan 2013). In my opinion, their speed is worth the extra price. If you are frustrated by how long it takes ArcMap to start up, you will be amazed to see that a fast SSD brings ArcMap start time down to a few seconds or less. SSD performance really shines when running parallel tasks/processes – their throughput is quite remarkable. Magnetic disks are still the dominant disk type by far… but probably not for long.
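If you are curious how your own drive stacks up, a quick-and-dirty sequential read timing such as the sketch below gives a rough feel for throughput. It is purely illustrative: the file path is a placeholder, and the operating system file cache will flatter repeat runs, so test against a large file that has not just been read.

# Quick-and-dirty sequential read benchmark.
# Point 'path' at a large file (e.g. a big raster) on the drive to test.
import time

path = r"C:\data\some_large_file.tif"   # placeholder path
chunk = 64 * 1024 * 1024                # read in 64 MB chunks

start = time.time()
total = 0
with open(path, "rb") as f:
    while True:
        data = f.read(chunk)
        if not data:
            break
        total += len(data)
elapsed = time.time() - start

print("Read {:.1f} MB in {:.1f} s ({:.1f} MB/s)".format(
    total / 1e6, elapsed, total / 1e6 / max(elapsed, 1e-9)))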
As a 'hybrid' approach, it is possible to have both SSDs and conventional HDDs in the same computer, thereby getting the best of both worlds. Another option for advanced applications is configuring your disks (you will need at least two) in a RAID array. RAID can significantly increase the speed, capacity, and reliability of your disks, but it will be at least twice as expensive as a non-RAID setup. Final recommendation: be sure to buy enough storage capacity for your needs right now, and then multiply by three. GIS datasets aren't getting any smaller – in fact, quite the opposite! Also, although SSDs remain far more expensive per gigabyte than conventional HDDs, they are worth it, especially for data-intensive applications like GIS.

Video card: Video cards (aka graphics cards) increase the performance of the graphical display to your monitor. For example, a nice video card will make a 3D rendering in ArcScene or ArcGlobe more smooth and fluid. Personally, I don't do a lot of 3D rendering, so I may be a bit biased here. High-end video cards are geared more towards computer animators and gamers. If you don't do that kind of work, you probably don't need a high-end graphics card – a 256 MB one will do you just fine. However, one emerging technology to take note of is GPU processing, which basically allows you to take advantage of the Graphics Processing Unit (GPU) on a video card and use it for general processing tasks unrelated to graphics display. A GPU (as opposed to a CPU) has the advantage of having hundreds or sometimes thousands of processing cores, the idea being to make processes massively parallel so that each GPU core can work on a small piece of the larger pie. Is this easy to do? The answer is a resounding 'no'. Algorithms need to be rewritten at a very low level to take advantage of GPU computing, and that is very labor intensive. Note that ArcGIS at this time (Jan 2013) does not support GPU computing, but ESRI has hinted that it may in the near future. So having a GPU/CUDA-worthy video card will probably be something to consider in the near future. Right now, maybe not so much. Video cards are very easy to install, so it might be an upgrade to consider if ESRI makes GPU computing usable in ArcGIS. Thanks for reading!
    2 points
  2. The paper you are looking for: http://asprs.org/a/publications/pers/2007journal/december/2007_dec_1381-1391.pdf. Here is the simple formula: NDBI = (SWIR - NIR) / (SWIR + NIR)
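If you want to compute it in ArcGIS, a minimal sketch using Spatial Analyst map algebra is below; the band file names are placeholders, so point them at the SWIR and NIR bands of your own scene (for Landsat TM that would typically be bands 5 and 4).

# Minimal NDBI sketch using arcpy Spatial Analyst map algebra.
# The input rasters are placeholders; substitute your own SWIR and NIR bands.
import arcpy
from arcpy.sa import Raster, Float

arcpy.CheckOutExtension("Spatial")

swir = Float(Raster(r"C:\data\scene_swir.tif"))  # shortwave infrared band
nir = Float(Raster(r"C:\data\scene_nir.tif"))    # near infrared band

ndbi = (swir - nir) / (swir + nir)               # NDBI = (SWIR - NIR) / (SWIR + NIR)
ndbi.save(r"C:\data\ndbi.tif")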
    1 point
  3. Couldn't agree more. It's an age-old strategy: the more choices in the market, the more share they bring. How many packages can you count in MS Office between 2000 and 2010, or in AutoCAD, or Oracle? The more the merrier! But those names had to survive a lot of bumps in the road, surely more than ArcGIS. Holding its place at the top for almost five decades, ESRI has proved that they are the best you could ever have for a one-stop geospatial solution. What they missed is good competition. After all these years ESRI has a huge 'SMS generation' as users, sugar-coated with Facebook and Twitter, who will try anything to get things done. This is going to be bad! So why not put everything in a browser, right after their Facebook page. Yes, we agree that the desktop version is messed up on so many levels that it's better to start all over. Let us welcome the new geoprocessing engine. Let us wait for some stylishly sexy error messages.
    1 point