Georeferencing in ArcMap very slow?


I have started having a problem with georeferencing.

When I try to add control points, the cursor just starts "thinking" forever, and it's nigh unusable. I've read forum posts that say this is an issue with GDI object leaks, and the advice is to reduce the number of rasters in the TOC. I only have one, and it's a small .jpg.

Turn off snapping (Snapping toolbar, uncheck "Use Snapping"). I've had this problem before when there are many vector layers in a project: the cursor gets bogged down looking for a vertex (or edge, or whatever) to snap to. You could also try copying the image to be georeferenced into a new, empty ArcGIS project along with the bare minimum of vector layers you need to georeference it.


There are many options to help make rectifying an image easier, such as adding street name labels to the vector shapefile. I did use this, but noticed that it made ArcMap very slow. Another is to change the transparency of the 1903 map from 0 to 30% in the layer properties, making it easier to see the modern roads. However, I found that this made it harder to read the road names from 1903.

Add the Georeferencing toolbar from the Customize menu. Before adding georeferencing points to an image, it is important to get an idea of where the image should sit relative to the referenced dataset. The "Fit To Display" button on the Georeferencing toolbar menu moves the image over the current extent. Then the Rotate and Move tools help align the image in the correct orientation. For this map, the orientation was easy to figure out by looking at the north arrow. I then used landmarks such as Ile Sainte-Helene and the Lachine Canal to place it roughly in the right location. Once this is done, a trick I found helpful was to turn off "Auto Adjust", which made it easier to set the first couple of points. To set control points, first click on the unreferenced image, then the same location on the referenced shapefile. Try to spread the points out and aim for at least ten points in total.
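Under the hood, the control-point links you place define a transformation; a first-order (affine) transformation fitted by least squares is the usual default model. The plain-Python sketch below, with made-up link coordinates, illustrates the math of such a fit — it is not ArcMap's actual code.

```python
# Least-squares fit of a first-order (affine) transform from control points:
#   x' = a*x + b*y + c,  y' = d*x + e*y + f
# All coordinates here are hypothetical; real work uses your own links.

def solve_affine(src, dst):
    """Fit an affine transform to control-point pairs (needs >= 3 links)."""
    def lstsq(rows, rhs):
        # Solve the normal equations (A^T A) p = A^T b by Gaussian elimination.
        n = len(rows[0])
        ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
        atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(n)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
            ata[col], ata[piv] = ata[piv], ata[col]
            atb[col], atb[piv] = atb[piv], atb[col]
            for r in range(col + 1, n):
                f = ata[r][col] / ata[col][col]
                for c in range(col, n):
                    ata[r][c] -= f * ata[col][c]
                atb[r] -= f * atb[col]
        p = [0.0] * n
        for r in range(n - 1, -1, -1):
            p[r] = (atb[r] - sum(ata[r][c] * p[c] for c in range(r + 1, n))) / ata[r][r]
        return p

    rows = [(x, y, 1.0) for x, y in src]
    px = lstsq(rows, [x for x, _ in dst])   # a, b, c
    py = lstsq(rows, [y for _, y in dst])   # d, e, f
    return px + py

def apply_affine(p, x, y):
    a, b, c, d, e, f = p
    return a * x + b * y + c, d * x + e * y + f

# Example: four hypothetical links (image coords -> map coords), then map
# a new image coordinate through the fitted transform.
p = solve_affine([(0, 0), (1, 0), (0, 1), (1, 1)],
                 [(10, 20), (12, 20), (10, 22), (12, 22)])
print(apply_affine(p, 0.5, 0.5))  # approximately (11.0, 21.0)
```

With more than three links the fit is over-determined, which is why spreading the points out and adding extras improves the result: the least-squares solution averages out small placement errors.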

Rectify and verifying the raster

Once you are finished adjusting the image, you can click Rectify on the Georeferencing toolbar menu. As mentioned earlier, the path to the save location should not contain any spaces. Set the file type to GRID and the NoData value to 255, which will make the image 8-bit so it can be viewed more easily in other programs if desired. You could place the raster in its own folder, making it easier to find later. Although you want the best resolution possible, if the file is too large the program could crash. My first attempt resulted in only a small section of the image, probably because the grid size was too small and I wasn't patient enough for the file to save properly. Afterwards, I tried saving it several times at progressively finer resolutions. As you can see below, there is a significant difference between them. At 0.0015, no detail can be seen. At 0.00015, the roads can be seen but the road names and any other details are lost. Even at 0.000015, the image has lost some quality, but the road names can still be read. Optimally, we would use an even finer resolution, but creating a better rectified raster would take too much time, or ArcMap could crash.
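The trade-off between cell size and file size is simple arithmetic: each tenfold refinement of the cell size multiplies the cell count — and, for an 8-bit GRID, roughly the bytes — by 100. A small sketch, using a hypothetical extent rather than the actual 1903 map's:

```python
# Rough row/column arithmetic behind the cell-size choices above.
# The extent values are hypothetical, not the actual 1903 map's.

def grid_dimensions(xmin, xmax, ymin, ymax, cell):
    cols = int((xmax - xmin) / cell + 0.5)
    rows = int((ymax - ymin) / cell + 0.5)
    return cols, rows, cols * rows  # cell count ~ bytes for an 8-bit grid

for cell in (0.0015, 0.00015, 0.000015):
    cols, rows, cells = grid_dimensions(-73.65, -73.45, 45.40, 45.60, cell)
    print(f"cell {cell}: {cols} x {rows} = {cells} cells")
```

Going from 0.0015 to 0.000015 here is a 10,000-fold increase in cell count, which is why the finest setting pushes save times (and ArcMap's stability) to the limit.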

Mosaic Dataset - rasters disappear when footprint attribute is edited

I am georeferencing Sanborn fire insurance map sheets in a mosaic dataset using ArcMap. I had the typical problem of seeing only a green wireframe after adding rasters. I edited the MaxPS and MinPS as has been recommended in the forums. I have done this before and it usually fixes the problem. Something new happened this time, for which I don't have a solution. I hope someone out there can help.

When I edited the MaxPS and MinPS in the footprint attribute table of the mosaic dataset, the rasters became visible as usual, but when I saved the edits, some rasters disappeared. Any ideas?

by CodyBenkelman

Brian - please let us know if this resolves your need. Based on your description, I am guessing that the 'Max Number of Rasters per Mosaic' property (as mentioned by Peter) can enable you to display the missing rasters, but I thought it would be worth a bit more explanation re: "Why?".

Re: 'Max Number of Rasters per Mosaic', we set a default of 20 to prevent poor performance if you attempt to view too many files. Yes, you can increase that value, and also adjust MinPS and MaxPS, but if you zoom out and expect to see hundreds of small images (requiring ArcGIS to open and display a large number of files), the result can be slow, especially if the source rasters don't have pyramids.
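The selection logic described here can be sketched as a simple filter: a raster is a display candidate only when the view's pixel size falls between its MinPS and MaxPS, and at most 'Max Number of Rasters per Mosaic' candidates are drawn. The field names below mirror the footprint attribute table; the records themselves are made up.

```python
# Illustrative sketch (not Esri code) of MinPS/MaxPS candidate selection
# in a mosaic dataset, plus the per-mosaic raster cap.

def visible_rasters(footprints, view_pixel_size, max_per_mosaic=20):
    """Return the rasters eligible for display at the current pixel size."""
    candidates = [f for f in footprints
                  if f["MinPS"] <= view_pixel_size <= f["MaxPS"]]
    return candidates[:max_per_mosaic]  # the cap that defaults to 20

footprints = [
    {"Name": "sheet_01", "MinPS": 0.0, "MaxPS": 2.0},
    {"Name": "sheet_02", "MinPS": 0.0, "MaxPS": 2.0},
    {"Name": "overview", "MinPS": 2.0, "MaxPS": 500.0},
]
print([f["Name"] for f in visible_rasters(footprints, 1.0)])   # source sheets
print([f["Name"] for f in visible_rasters(footprints, 50.0)])  # overview only
```

This is why editing MinPS/MaxPS makes rasters appear or vanish: you are moving them in or out of the candidate range for the scale you happen to be viewing at.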

This leads to other key issues to consider, including the use of Pyramids and Overviews (the latter mentioned by Ben). If you aren't familiar with them, both are reduced-resolution views saved on disk to speed display. Pyramids are associated with each source raster, and Overviews are conceptually the same but associated with the mosaic of multiple rasters. In my 2nd paragraph I referred to opening hundreds of files - if pyramids do not exist, ArcGIS has to resample each raster on the fly to generate the proper resolution, and that will take time. But even if pyramids do exist, we don't recommend attempting to display hundreds, simply because opening a large number of files will still take time (thus the default of 20 individual rasters).
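Conceptually, each pyramid (or overview) level halves the resolution of the level below it, so a coarse zoom reads far fewer cells. A minimal illustration using 2x2 block averaging — smooth, in the spirit of bilinear rather than nearest-neighbor resampling — on a toy grid:

```python
# Toy illustration of one pyramid level: aggregate 2x2 blocks by averaging,
# halving the resolution. Real pyramids repeat this per level and store the
# results on disk so they never need recomputing at display time.

def build_pyramid_level(grid):
    rows, cols = len(grid), len(grid[0])
    return [
        [(grid[r][c] + grid[r][c + 1] + grid[r + 1][c] + grid[r + 1][c + 1]) / 4.0
         for c in range(0, cols - 1, 2)]
        for r in range(0, rows - 1, 2)
    ]

base = [[0, 0, 4, 4],
        [0, 0, 4, 4],
        [8, 8, 2, 2],
        [8, 8, 2, 2]]
level1 = build_pyramid_level(base)
print(level1)  # [[0.0, 4.0], [8.0, 2.0]] - a quarter of the cells
```

Each level costs a quarter of the cells of the one below, which is why pyramids add only about a third to storage while making zoomed-out display nearly free.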

As Ben noted, when you zoom out, the system will automatically switch to displaying the appropriate Overviews if they exist. These are prebuilt reduced resolution views of the mosaic - so rather than displaying hundreds of individual rasters, you should have a very fast display of one or a small number of Overviews.

I hope this explains why you see green wire frames rather than images. You called it a "typical problem" but this is how the system was designed, to avoid slow performance. (You might ask "well, why doesn't ArcGIS just do this automatically?" but think about the case of thousands or millions of images - users would be very angry if we automatically started a process to generate Pyramids and they had to wait for hours just to see if the images were in the right place -- thus the green wireframes)

Two last notes: when you create Pyramids or Overviews, I recommend that you use Bilinear resampling rather than Nearest Neighbor, so the results look smooth. But no matter which resampling method is used, when you are working with raster versions of maps, at small scales you may find the view is not very useful. In that situation, I often recommend that you add a completely different dataset at an appropriate resolution (e.g. Landsat imagery, or a hillshaded terrain, or a 1:1,000,000 map etc.). It does not show your data, but it provides geographic context.

If you would like to read more, we have published the Image Management Guidebook that provides discussion of all of these parameters.

Geoweb use case and crisis management

Nowadays, local media, NGOs, communities of practice and local authorities increasingly use GeoWeb technologies to deploy emergency-related Web applications. An analysis carried out on a series of recent events highlighted the vast diversity of types of use. However, three main categories, combining top-down and bottom-up approaches, can be considered: (1) map mashups, aimed at informing the general public with various sources of information; (2) contribution platforms for collecting testimony from, and responding to requests by, victims; (3) collaborative platforms for creating and updating base maps and contents.

Map mashups

These online applications are developed to disseminate, as fast as possible and to the general public, information coming from local authorities, emergency respondents, or media. The use of map mashups to process any type of information (fires, floods, earthquakes…) in times of crisis is quite new and is becoming more and more systematic. Put simply, map mashups make it possible to display crisis-management-related information on a map. To do so, information is first geocoded, and then integrated into the map. The benefit of such Web applications is to provide a visual, clear and coherent organization of information based on a spatial reference system (Goodchild and Glennon 2010; Liu and Palen 2010). In order to illustrate the full potential of map mashups, some examples are presented below.
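The geocode-then-integrate step usually ends in an interchange format the mapping API can consume, most commonly GeoJSON. A minimal sketch with invented incident records (the names, statuses and coordinates are illustrative, not real data):

```python
# Assembling a simple mashup layer: geocoded incident records are converted
# to a GeoJSON FeatureCollection, which mapping APIs can render directly.
import json

incidents = [  # hypothetical, already-geocoded records
    {"name": "Fire A", "status": "Going",     "lon": 145.1, "lat": -37.4},
    {"name": "Fire B", "status": "Contained", "lon": 146.3, "lat": -36.9},
]

layer = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [i["lon"], i["lat"]]},  # lon, lat order
            "properties": {"name": i["name"], "status": i["status"]},
        }
        for i in incidents
    ],
}
print(json.dumps(layer, indent=2))
```

The feed-driven mashups described below work the same way, except the incident list is refreshed continuously from an RSS or database source rather than hard-coded.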

During the devastating fires that swept through Australia in February 2009, and more specifically through the states of Victoria and New South Wales and the Australian Capital Territory, a series of map mashups appeared on the Internet. The most striking example, still active on the Web, is the Victorian Bushfires Map. From the fire database operated by the Country Fire Authority (CFA), Google Australia developed a mashup offering real-time tracking of the fires recorded by the authorities. Fires are marked with dots on a Google Maps API map. Each dot provides the characteristics of the fire it is associated with (start date and time, status, type, size, number of emergency vehicles dispatched to the scene, level of control). This mapping application is based on the dual use of a Google Maps API and an RSS feed from the CFA website, as well as on information provided by the state of Victoria. It also includes various information on road status and security measures displayed on maps produced by the media (The Age,

Extremely devastating fires also hit Southern California during the fall of 2007 and 2008. A map (Fig. 1) was set up by KPBS, the local radio station, and updated every 5 min, featuring evacuation information and shelters. The site recorded over three million visitors. Fire perimeters were determined from the data provided by the Los Angeles County Emergency Operations Center. A detailed legend listed a series of point and area symbols using variables of different colors and shapes so as to provide a clear classification of the information displayed on the map (red: evacuation areas; green: areas where evacuation notices are lifted). A quick analysis of this map raises the question of the legend and of the harmonization of graphic conventions, both of which we will discuss later.

Fire map (KPBS radio, autumn 2007)

The map produced by the Los Angeles Times daily paper (Fig. 2) received over 1.6 million visitors. It offered a wealth of structured information to describe the fires (fire extent, status, damage, number of people injured, start time, ignition point and cause, number of firefighters, evacuations, etc.). This initiative was renewed in September 2009 with a more accurate and readable map (graphic conventions, use of arrows, fire perimeter, buffer zone, etc.). It scored 500 00 hits in the first 2 weeks.

Dynamic fire map (Los Angeles Times, September 2009)

Other examples, notably for floods (for example, the dynamic multimedia map provided by the BBC during the great floods of October 2007 in England), could be cited to demonstrate the relevance of “mixing” different field sources (journalists, victims, neighbors, etc.) (De Longueville et al. 2010). More precisely, numerous informative cartographic applications were produced in response to the Haiti earthquake of January 2010 (Google’s Crisis Response, Haïti Crisis Map, ESRI’s Haiti Earthquake Map, or even the Virtual Disaster Viewer) and the Christchurch earthquake of February 2011 (ESRI’s Earthquake Incident Viewer—Fig. 3).

ESRI’s Earthquake Incident Viewer (Christchurch earthquake, February 2011)

From Scipionus to Ushahidi: testimony, situation reports and requests for on-site support

Scipionus, the precursor

After Hurricane Katrina swept over the United States in 2005, millions of information pages were created on the Internet. Unlike traditional media, which experienced broadcasting and logistical difficulties in covering the hurricane, online applications such as blogs and forums could very quickly provide reports and testimonies from the affected people. As a matter of fact, whereas most of the traditional communication infrastructure failed in the wake of the hurricane, the Internet remained the only channel to relay information on the situation in Louisiana. Among all the websites created in an emergency to mitigate the lack of data and the crisis situation, Scipionus (Fig. 4) rapidly established itself as the most popular resource for getting information on the affected zones and searching for missing persons.

Unlike a blog, where only the author has permission to modify the Web pages, the Scipionus interactive map enabled every user to publish his or her own location-based information. This participative space made it possible for local inhabitants to reassure their families and friends, or to describe the disaster and the slow receding of the flood waters. Created by a computer programmer from New Orleans, this initiative was rapidly used by telecommunication operators, TV channels, newspapers and the Web. Launched a few days after the disaster, the site drew tens of thousands of visitors looking for useful information on the areas affected by Hurricane Katrina. Based on Google Maps technology, the site offered a dynamic and interactive map of the flooded zones. Each visitor could add complementary information to give a more accurate description of the damage, or to report and find missing persons. Following the logic of mashups, the location-based data displayed on the mapping API came from different forums and blogs, which partly explains its success (Miller 2007).

Ushahidi, crisis communication platform

Following Scipionus, but developed 4 years later with more advanced technological tools, the Ushahidi Platform is the new generation of dynamic maps dedicated to crisis management (political crises, natural disasters, local conflicts, etc.). This information-gathering tool makes it possible for Internet users to follow the progress of crises, in real time, through the eyes of those directly involved in the disaster. Initially, the project aimed at reporting on a specific crisis situation on the basis of the testimonies of the people involved. A few months later, thanks to the support of an American NGO, the blog became a software application adaptable to various crisis situations. Ushahidi also provides applications to cover specific, time-bound events (violence in South Africa, Congo, Kenya and the Gaza Strip; elections in India and Mexico; the earthquake in Haiti; a blizzard in the United States…).

The Ushahidi application has a dual value: on the one hand, it allows people affected by a disaster to get information on its unfolding and evolution; on the other hand, it provides them with a set of tools to testify about the situation they are going through. Consequently, this Web application is above all a resource- and information-sharing platform allowing anyone with relevant information about a specific situation to send it via SMS, e-mail or the forms available on the website. Once this information is pooled, formalized, documented and checked, it is added to the map. The purpose is to record, aggregate and cross-check the information sent by users during the crisis in order to improve the competent authorities’ response time and deliver better help.

From a technical point of view, the platform operates according to the logic of mashups, that is to say, it combines several Web services (mapping, database, data-handling tools, visual functionalities, etc.). Free and available to all as an API, this open-source project is based on both Web (XML, JSON, AJAX) and GeoWeb (mapping APIs, OpenLayers, KML, GeoRSS) standards. The platform was developed to be fully adjustable to the needs of the organizations and the crisis settings within which it is implemented (classification and display arrangements, feedback system, safety and export functions). Classification is based on a semantic qualification of information organized by category. Moreover, the application allows text and audio data to be catalogued and integrated, thus enriching the testimonies. It is also important to point out that the people behind this ambitious project provide technical support for the implementation and the creation of new functionalities.

Put online only 3 days after the earthquake, the Ushahidi-Haiti Platform is particularly representative of the potential benefits of such an application in crisis management (Fig. 5). In 2 weeks it received over 3,000 testimonies, more than half of which had been posted via SMS. Thanks to the rapid creation of a working partnership with Digicel Haiti, a mobile telephone provider, and other organizations (firms, emergency services, NGOs), Ushahidi deployed the 4636 project to enable people on the spot to provide near-real-time feedback via SMS: calls for assistance, vital lines, potential threats, individual news, etc. Once the telecommunication networks were partially restored, a free short code was provided for sending SMS in Creole, French or English, reporting locations and needs. Each piece of information is tagged “verified” or “unverified”. Events are organized into six main categories (emergency, security threats, vital lines, services available, other, and persons news) and twenty-four sub-categories (contaminated water, looting, fire, food distribution point, shelter offered, road blocked, missing person, etc.). Besides the “Mainstream News” and “Incidents”, all the reports created can be consulted, along with the photos and videos that accompany them. Users can not only submit a testimony (via SMS, e-mail or online form), but also subscribe to alerts (via SMS or e-mail) to be notified of events occurring at a specific location or within a specific area.

Ushahidi Haiti primary interface

In order to crowdsource reports and permanently integrate them as legitimate and actionable sources of information, the system must be able to rapidly identify inaccurate, intentionally exaggerated, or accidentally wrong information. We should briefly describe the processing and validation of reports, since it evolved alongside events.

1. A person (humanitarian worker, victim, emergency respondent, etc.) texts a request by SMS (need for help, water, food, medical attention, etc.).

2. The testimony is sent to the CrowdFlower website for translation or data entry.

3. Haitian volunteers translate the testimony and add metadata (relevance, location, level of priority).

4. The SMS is transformed into a formalized report, then published in the information distribution network, and finally integrated into the interactive map. After processing, the report can thus be consulted by all the organizations involved in crisis management.
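The processing steps just described can be sketched as a minimal pipeline. The category names follow the Ushahidi-Haiti taxonomy mentioned in the text; the function names, keyword rules and sample SMS below are hypothetical, not the platform's actual code.

```python
# Hypothetical sketch of the SMS-to-report pipeline described above.

def translate(text):
    # Placeholder for the volunteer/CrowdFlower translation step.
    return text

def classify(text):
    # Toy keyword rules standing in for the volunteers' semantic tagging;
    # category names follow the Ushahidi-Haiti taxonomy.
    keywords = {"water": "vital lines", "trapped": "emergency"}
    for kw, category in keywords.items():
        if kw in text.lower():
            return category
    return "other"

def process_sms(raw_sms):
    translated = translate(raw_sms["text"])          # step 2: translation
    return {                                         # step 3: add metadata
        "text": translated,
        "location": raw_sms.get("location"),
        "category": classify(translated),
        "verified": False,   # stays unverified until cross-checked (step 4)
    }

report = process_sms({"text": "Trapped near Carrefour",
                      "location": (18.54, -72.41)})
print(report["category"], report["verified"])
```

The point of the sketch is the shape of the flow: free-text testimony enters, and a structured, categorized, initially unverified record comes out, ready to be cross-checked and placed on the map.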

More recently, Ushahidi was used in response to the Christchurch earthquake of February 2011. In the aftermath of this disaster, a dedicated application was set up to identify hazards and solutions, to request help, and to provide public information about the current situation (hazards and evacuation zones, infrastructure and road status) and available services (water, supplies, pharmacies and medical centres still open…). In 10 days, more than 1,200 reports were sent. Since the Haiti earthquake, Ushahidi has been used in various crisis situations to help victims and to provide NGOs and authorities with an online application supporting disaster response. It has become an essential tool for online crisis management.

Crisis mapping, creation and updating of base maps and data

“Crisis mapping” is another type of GeoWeb use highlighting the current trend of applying geospatial technologies in crisis situations. The purpose is to redraw (or update) the maps and plans of disaster areas so as to publish them on the Web under an open-source license (Zook et al. 2010). In Haiti, there was no recent mapping regularly updated by the government, and the national Haitian map agency was destroyed by the earthquake. After the disaster, the former maps of Haiti had become useless, so a prompt update was necessary.

Hundreds of people stepped up to support and guide emergency response as well as local organizations. In the beginning, mapping campaigns grouped under the Drawing Together initiative were carried out by hundreds of Internet volunteers all around the world. The work achieved by these “tech volunteers” was then continued by many existing initiatives such as OpenStreetMap, which quickly developed an OSM-based collaborative mapping platform specifically dedicated to Haiti. Coming from open-source communities, just like Wikipedia and other free-culture movements, these collaborative platforms demonstrated their ability to provide accurate data within a short period of time, thanks to a pre-existing technical organization, efficient collaborative tools and a dedicated community. The road network map of Port-au-Prince, which was almost blank on the evening of the 12th of January, was nearly complete 10 days later (Fig. 6). In only 2 days, over eight hundred modifications were made, even though the area was not yet fully covered by traditional providers such as Tele Atlas or Navteq.

Road network coverage of Port-au-Prince in OSM before and after the earthquake

At first, roads, paths and buildings were drawn and updated with the help of old maps produced by the CIA and of Yahoo aerial imagery, which OSM has been allowed to use since 2006. Later, in the aftermath of the earthquake, several satellites scanned the area to provide recent imagery. The firms DigitalGlobe and GeoEye granted free use of a series of high-resolution photos taken after the earthquake, which enabled the OpenStreetMap team to complete the mapping of Port-au-Prince (Fig. 7), and of the other towns affected, with a wealth of information: collapsed buildings, road blockages, health facilities, refugee camps and population relocation.

Interface of OSM Haïti map

In addition, volunteer contributors were involved in on-site data capture, providing accurate information on road conditions and on the locations of collapsed buildings, hospitals and emergency camps. The production of royalty-free data allowed free and rapid reuse of the data created on-site by NGOs and emergency agencies. The University of Heidelberg provided a new version of its GPS application OpenRouteService for live route planning (road conditions, location of camps, etc.). The German company Geofabrik gave access to the OSM-Haiti data in various file formats workable with the leading GIS and GPS software applications. Consequently, the data were quickly integrated into a series of mashups and virtual globes, and then rapidly disseminated, adapted and used on-site, via various formats and on different platforms, by NGOs and emergency agencies.
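The data distributed this way uses OSM's XML model: nodes carry coordinates, and ways (roads, building outlines) reference nodes by id. A tiny, made-up two-node road parsed with the Python standard library shows the structure:

```python
# Parsing a minimal (invented) OSM XML fragment: nodes hold lat/lon,
# ways reference node ids and carry key/value tags.
import xml.etree.ElementTree as ET

osm_xml = """<osm version="0.6">
  <node id="1" lat="18.54" lon="-72.34"/>
  <node id="2" lat="18.55" lon="-72.33"/>
  <way id="10">
    <nd ref="1"/><nd ref="2"/>
    <tag k="highway" v="residential"/>
  </way>
</osm>"""

root = ET.fromstring(osm_xml)
nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
         for n in root.iter("node")}
for way in root.iter("way"):
    coords = [nodes[nd.get("ref")] for nd in way.iter("nd")]
    tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
    print(way.get("id"), tags.get("highway"), coords)
```

Because the geometry is just shared nodes plus tags, edits from thousands of volunteers can be merged incrementally, which is what made the rapid Port-au-Prince updates possible.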

Understanding Projections: Geo-referencing

Recently I started working on the year 1959, which is currently the last set of images I have to work on.

1959 was a unique set compared to 1975 and 1995 for a few reasons. The first reason is the significant difference in development, which is to be expected given how far back in time the images were taken. The second reason is probably common among geo-referencing projects, but I had not encountered it until this year. The set was split into two types of images: geo-referenced (projected) images and non-referenced (non-projected) images. The projected images are similar to the images I completed for 1975 and 1995: they were already in the desired coordinate system and allowed for immediate geo-referencing. The non-referenced images are a little more difficult.

These images would gray out the toolbar and not allow any geo-referencing. At first I was a bit confused, but I quickly started brainstorming ideas. My first idea was to project them to the same coordinate system as the data frame. Unfortunately, due to the size of the images, projecting them was impossible on the machine I had (16 GB of RAM); the tools would fail almost instantly. However, I thought this had to be the reason, so I continued to experiment with different ways to project the images. The next idea was to project to an intermediate coordinate system, then proceed to the target system in hopes of reducing the burden, but that also fizzled out. It seems that no matter what system you are coming from (although I don't fully understand geographic transformations), the software attempts to perform the full process. Running out of ideas, I tried one final solution: I thought that perhaps the size of the images was the problem and decided to slim them down by splitting each into eight sections. The splitting scripts I wrote ran smoothly, but as soon as a script reached the projection step it crashed almost instantly (although it may have lasted slightly longer).
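For what it's worth, the splitting step itself is just window arithmetic. A sketch of computing eight pixel windows (a 4x2 grid here; the image dimensions are hypothetical):

```python
# Compute pixel windows (x0, y0, x1, y1) for splitting an image into a
# tiles_x by tiles_y grid - 4 x 2 = 8 sections, as described above.
# Integer division keeps the windows gap-free even for odd dimensions.

def tile_windows(width, height, tiles_x=4, tiles_y=2):
    windows = []
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            x0 = tx * width // tiles_x
            y0 = ty * height // tiles_y
            x1 = (tx + 1) * width // tiles_x
            y1 = (ty + 1) * height // tiles_y
            windows.append((x0, y0, x1, y1))
    return windows

windows = tile_windows(20000, 20000)
print(len(windows), windows[0], windows[-1])
```

Each window can then be handed to whatever extraction tool you use, one tile at a time, so peak memory stays at roughly an eighth of the full image.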

After none of my ideas worked, I decided to restart the computer…and that worked. Even though the images are not the same coordinate system as the data frame, I can still add control points, fit to display, etc. This was a bit frustrating since it was such a simple solution, but the lesson here is that it is best to try the simplest solution before trying the more difficult ideas.

After talking with a few people, I learned that the coordinate system should be irrelevant, because as soon as you begin geo-referencing, the image converts to that of the data frame, and even that will not matter in the end. Once the images are completed, another individual will tether or mosaic them together and assign a coordinate system at that stage, which will overwrite the current coordinate system.

ArcMap 10.4 and ArcGIS Pro: extremely slow editing of large polygons?

I am working on creating maps for a fantasy world, and I have drawn some large polygons for islands using both ArcMap 10.4 and ArcGIS Pro 1.2. These polygons are several hundred miles across and, at the current scale I am working with, contain up to 6,000 vertices each. As this project continues, I will want to edit the polygons at a smaller scale for more detail, so obviously that will mean more vertices.

Editing the polygons is HORRIBLY slow but in different ways in the two programs. ArcMap 10.4 is fast to do things like exploding features and merging polygons, but attempting to edit any of the vertices on the edge of the shapes drags everything to a complete halt for 10-20-30 minutes even for a few vertices moved! On the other hand, ArcGIS Pro edits vertices (relatively) faster, but it took the program TWO HOURS to explode nine polygons that took ArcMap 10.4 about thirty seconds to do.

I am flummoxed here. I thought maybe the size of the polygons might be the issue, but when I started investigating the Dice tool, its documentation says the tool is meant for situations where you have shapes with literally millions of vertices, so I can't imagine 6,000 should be all that bad computationally. Any advice?

PS: the polygons and the data frame are in the same projection, and I have turned off any topologies.
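Not an ArcGIS-specific fix, but one general way to keep vertex counts manageable is line simplification such as Ramer-Douglas-Peucker, which drops vertices that lie within a tolerance of the simplified shape (similar in spirit to what ArcGIS's simplify tools offer). A plain-Python sketch with illustrative coordinates:

```python
# Ramer-Douglas-Peucker simplification: recursively keep the vertex farthest
# from the chord between the endpoints, dropping those within tolerance eps.

def rdp(points, eps):
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard a degenerate chord
    # Perpendicular distance of each interior point from the chord.
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__)
    if dists[imax] > eps:
        left = rdp(points[:imax + 2], eps)    # keep the farthest point, recurse
        right = rdp(points[imax + 1:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]            # everything within tolerance

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0)]
print(rdp(line, 0.1))  # [(0, 0), (4, 0)] - wiggles below tolerance are dropped
```

At a working scale of hundreds of miles, a generous tolerance can cut thousands of coastline vertices while leaving the on-screen shape visually identical, which tends to make interactive vertex editing far more responsive.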

Running ArcGIS on the Mac

Although Esri has released an app version of ArcGIS for iOS, ArcGIS unfortunately isn't natively offered as an option for installation on a Macintosh computer. The last native effort by Esri to produce GIS software for the Mac was a ported version of ArcView in the early 1990s that never made it past pre-release. It doesn't look like Esri will produce a native port of ArcGIS for the Mac in the very near future. From the closing session of the 2011 Esri User Conference:

“We’d love to be on the Mac, but we have engineering priorities…so we have to ask ourselves what’s most important for our users,” said Jack Dangermond. “That focus is very important and we want to make sure that we don’t spread our resources too thin. In theory, we could spread our resources more on platforms and thus less on functionality. But would you want us to slow down advancement of the basic tool in order to deploy on a Mac?”

However, there is a teaser that, at some point ArcGIS will be Mac ready:

“We’ll probably start moving more towards supporting the Mac at the next release after ArcGIS 10.1.”

For those who want to stick with the most popular commercial GIS package, Esri's ArcGIS, a separate instance of the Windows OS (Windows XP and above) needs to be set up on the Macintosh to run the software. There are two main ways to accomplish this on Intel-based Macs. The first is to set up a dual-boot system. Apple offers Boot Camp for free as part of its operating system for Mac OS X 10.4 and higher; after installation of the software and a licensed Windows OS, it gives users the option of booting the computer into either the Macintosh or the Windows OS. Booting up as a Windows machine allows the user to run Windows-compatible software, including ArcGIS. The downside to this option is that the Macintosh software and operations are inaccessible while the computer is booted as a Windows machine; accessing the Macintosh side requires a reboot, and vice versa. The plus side is that the user doesn't need to operate a separate Windows computer. Since the computer boots fully as a Windows machine, memory is not split between the Windows and Macintosh sides, enabling ArcGIS to run faster than under a virtual machine installation.

A second option is to create a virtual machine (VM) on the Macintosh and then install Windows as the operating system that runs on that VM. The two leading commercial options are Parallels and VMware Fusion. Both of these virtualization programs require that a licensed copy of the Windows OS be installed. The benefit of a VM is that users can toggle back and forth between Mac and Windows programs without having to reboot. Files and directories on the Mac side can be shared with the VM so that programs in Windows can read and write those files. The downside is that memory is split between the two operating systems, meaning ArcGIS and other programs will run more slowly than on a native PC or under Boot Camp. Make sure the Macintosh computer you install the VM and ArcGIS on has at least 8 GB of RAM, preferably 16 GB.

The Digital Dark Ages

In the developed world, we capture and store almost everything that can be stored: security video, electronic communications, smartphone photos of events momentous and trivial.

Almost none of that data will survive us.

Although storage becomes cheaper every year, technology changes every year. Data must be migrated from old storage media and file formats, or it is lost to physical degradation or technological obsolescence.

Data in The Cloud never has a permanent physical home. The Cloud is a performance and requires constant flows of capital and resources to stay in operation. Changes in the economics of The Cloud will necessitate loss of some of that data. Which data will be lost to time?

Contrast the impermanence of the digital with papyrus text from 2500 BC or clay tablets from as far back as 3300 BC.

While security camera video from an ATM where there has been no criminal activity may not be something that should outlive us, your grandchildren may want to see some of those thousands of baby pictures that you took of your son in the first year of his life. You should plan accordingly.

Section Four: Meet ArcMap

In this section, we are going to take a look at ArcMap. Much like the last section, we are going to highlight some of the basic features: where to find them, what they do, etc. By the end of the semester, you shouldn't have to look most of these actions up; you should just have them down (but if you need to, you can always come back to this page via the top menu of the Wiki).

4.4.2: The Layout Of ArcMap

Based on the definition of GIS, ArcMap is software used to create, store, manage, analyze, and display data. While much of the storage and management is handled with ArcCatalog, some of it happens in ArcMap as well. ArcMap, however, is the portion of the software suite designed to create, analyze and display that data. We can use ArcMap to look at our data, explore our data, examine spatial relationships between our data, run geoprocessing tools and analyze the results, and create cartographic layouts to display the data to our customers (or instructor, in the case of this class).

We see the basic layout is the same as ArcCatalog's: a series of menus at the top above several toolbars of shortcut buttons, a thin Table of Contents which shows the layers participating in the ArcMap session, a large white space for interacting with the data, and some tabs along the right and left sides of the window hiding additional toolboxes and windows.

Figure 4.5: ArcMap's Basic Layout

4.4.3: Views and Areas of ArcMap

Data View vs Layout View

Since ArcMap is used to analyze and display our data, we have two distinct views in the software: data view and layout view. Data view is where we spend the majority of our time (compared to ArcCatalog and layout view), as the majority of our work is done creating and manipulating spatial data. Once we've taken the time to examine the data in great detail, both the features (the shapes representing real-world objects) and the attributes in the attribute table, we switch over to layout view to complete the cartographic layout. Switching can be accomplished via the "View" menu at the top of the software window or a couple of shortcut buttons at the bottom of the display window, which we will look at next.

Figure 4.6: Data View vs Layout View in ArcMap
US States and California in Two Data Frames, Data ViewUS States and California in Two Data Frames, Layout View

Display Window and Data Frames

The Display Window is the portion of ArcMap where you interact with your data, showing you both data view and layout view. In data view, the layers draw, and you can pan around, zoom in and out, and interact with your data. In layout view, you can add map elements such as a north arrow and a legend to create a final cartographic layout. This area is the main focus of ArcMap and the part you are going to spend the most time looking at. The Table of Contents lists the data shown (by default) to the left of this area, the results of tools are loaded into this area, and you create new data while looking at this area.

When you are looking at the Display Window in data view, you are looking at one Data Frame. A data frame is a collection of data layered to draw in a specified order in one geographic or projected coordinate system. While you can have more than one data frame in one MXD, ArcMap's data view will only show one at a time: the active data frame. Layout view, on the other hand, shows all of them at once, in order to create final maps with several views, such as inset maps (maps which zoom in on a smaller area to show it in greater detail, or show the overall location of the data within some larger area of context) or different data for the same area, where overlapping that data would be impossible or confusing to the reader. Another use of additional data frames is the ability to eliminate white space when the portion of the map which matters covers a large area, such as details about the lower 48, Alaska, and Hawaii.

  1. The first data frame will always be named “Layers”
  2. All other data frames will be named “New Data Frame”, “New Data Frame 2”, etc.
  3. The “active” data frame (the one displayed in Data View) will be listed in bold.
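If it helps to think about these rules programmatically, the naming behavior above can be modeled in a few lines of Python. This is a hypothetical toy model; the class and method names are made up for illustration and are not part of the real arcpy/ArcObjects API:

```python
# A toy model of the data-frame rules listed above: the first frame is
# named "Layers", later ones "New Data Frame", "New Data Frame 2", etc.,
# and exactly one frame is "active" (the one shown in Data View).

class MapDocument:
    def __init__(self):
        self.frames = ["Layers"]   # rule 1: the first data frame is "Layers"
        self.active = "Layers"     # rule 3: the active frame shows in Data View

    def add_frame(self):
        n = len(self.frames)       # rule 2: "New Data Frame", "New Data Frame 2", ...
        name = "New Data Frame" if n == 1 else "New Data Frame %d" % n
        self.frames.append(name)
        return name

mxd = MapDocument()
print(mxd.add_frame())  # New Data Frame
print(mxd.add_frame())  # New Data Frame 2
```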

Data Frame Coordinate System

We learned all about coordinate systems in Chapter Two, noting that there are two major categories: geographic coordinate systems and projected coordinate systems. All spatial data is stored in a single coordinate system, either a geographic or a projected system. Knowing which coordinate system a layer is stored in is very important, since it tells us how the data was created, what distortions may be present, and what units the displayed coordinates are in, since the measured units - angular or linear - are actually a measurement from the origin of the system in the X and Y directions to the feature being displayed. Longmont, CO is found roughly at 40° N, 105° W, meaning it is 40 degrees north and 105 degrees west of the origin of the latitude and longitude geographic grid, found under the "wing" of Africa.

We have the ability to change the stored coordinate system of a spatial layer via a tool called "Project", which looks at where the coordinates of features in one system correspond to the coordinates of another system, and reassigns those coordinates accordingly. For example, the latitude and longitude coordinates of Longmont, more specifically 40° 10' 1.944" N, 105° 6' 6.939" W, equate to the Universal Transverse Mercator (UTM) coordinates Zone 13N, E: 491320.76, N: 4446320.79 (measured in meters from the origin of the 13N zone). If we run a tool to convert the stored coordinate system of a US city center point shapefile from the WGS84 latitude/longitude geographic coordinate system to the UTM projected coordinate system, the tool will look at Longmont's latitude and longitude in the input layer, find the UTM equivalent, and write those coordinates for the city to the output layer.

Figure 4.8: Changing the Display Coordinate System
The display coordinate system can be changed without consequence to the stored coordinate system of the data listed in the Table of Contents. When the display coordinate system is changed, ArcMap projects the data on the fly, showing what the data would look like if a tool such as Project were run.

ArcMap allows us to look at several spatial layers at one time in one data frame, regardless of what coordinate system each layer is actually stored in, lining up the data the best it can for the user to interact with. We call this process projecting on the fly, meaning it is not making a fundamental change to the data (as would happen if we ran the Project tool), but instead displaying the hypothetical result as though the tool had run. We can change the display coordinate system of the data frame to our heart's content, and each time, ArcMap will project on the fly, redrawing each spatial layer in the display to match the selected data frame coordinate system.

How Do I Know a ‘Fundamental’ Change is Happening with My Data?
This chapter, and future chapters, will present options and actions and state that they do not fundamentally change the data - meaning that no permanent change is made to the data itself. ArcGIS is for displaying data, and can display data in several forms at once without changing it.
The best way to tell whether your data is being fundamentally changed or simply displayed differently is: Arc will ask you before changing data. For example, if you are presented with a confirmation box stating "Would you like to save your edits?", you most likely initiated the changes. When you run geoprocessing tools (Chapter Seven), the tool will offer a place to save the "output layer". This actually creates a new layer, complete with whatever changes were initiated - such as using the "Project" tool, which produces a new spatial layer output stored in a different coordinate system than the input layer.

Since the data frame is displayed in a single coordinate system, the display window can show you the coordinates of the cursor at any given time. These current cursor coordinates are shown at the bottom of the display window in the coordinate system of the data frame regardless of the coordinate system of each layer participating in the ArcMap session, and as you move your cursor around the window, the coordinates change. These current cursor coordinates are useful in getting a rough idea of where you are looking in the world while you are working within ArcMap. For example, if you add some data that you know is in Central Colorado and the current cursor coordinates are displaying values of 85° N, 115° E when you move your cursor over the data, you can quickly get a pretty good idea that something is wrong with the data.
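The sanity check described above is essentially a bounding-box test, and can be sketched in a few lines of Python. The box values below are approximate lat/lon bounds for Colorado, supplied here only for illustration:

```python
# A sketch of the cursor-coordinate sanity check described above: if the
# coordinates over your data fall outside the region you expect, something
# is wrong (wrong coordinate system, bad projection definition, etc.).

COLORADO_BOX = {"min_lon": -109.05, "max_lon": -102.04,
                "min_lat": 36.99, "max_lat": 41.00}

def looks_right(lon, lat, box=COLORADO_BOX):
    """Return True if a cursor coordinate falls inside the expected box."""
    return (box["min_lon"] <= lon <= box["max_lon"]
            and box["min_lat"] <= lat <= box["max_lat"])

print(looks_right(-105.1, 40.2))  # Longmont: inside the box -> True
print(looks_right(115.0, 85.0))   # the suspicious 85 N, 115 E reading -> False
```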

Figure 4.9: Current Cursor Coordinates in Action
Both ArcMap and ArcCatalog (Preview Window) show the current cursor coordinates at the bottom of the window. Notice in the animation that the coordinates scroll while the cursor is moving and pause when the cursor pauses.

When you switch over to layout view in ArcMap in order to start creating your cartographic product, a second set of cursor coordinates pops up, showing you map coordinates: a set of positive coordinates referring to where your cursor is on the printed page rather than where the data is in the real world. The coordinates are based on the same principles as the Cartesian coordinate system, with an origin at the lower-left corner of the page and positive values moving along the X and Y axes of the page itself. The concept is exactly the same as a projected coordinate system, where there is some origin somewhere in the world, and to locate places on the Earth's surface, we move away from the origin until we arrive, measuring the distance in linear units.

Figure 4.10: Map Coordinates Displayed in Inches
To locate map elements (legend, title, north arrow, etc) for printing neatly and evenly spaced, ArcMap's Layout View uses map coordinates. Also based on the Cartesian Coordinate System, map coordinates are positive numbers starting at a 0,0 origin found at the lower left corner of the page.
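Since map coordinates are an ordinary Cartesian system, placing a map element neatly is simple arithmetic. A short sketch, assuming a letter-size page measured in inches:

```python
# Layout-view map coordinates: the page is its own Cartesian system with
# the origin (0, 0) at the lower-left corner, measured here in inches.
# Centering an element is just arithmetic on those page coordinates.

PAGE_W, PAGE_H = 8.5, 11.0   # letter-size page, inches

def centered_origin(elem_w, elem_h):
    """Lower-left corner position that centers an element on the page."""
    return ((PAGE_W - elem_w) / 2.0, (PAGE_H - elem_h) / 2.0)

x, y = centered_origin(6.0, 4.0)  # a 6 x 4 inch map frame
print(x, y)  # 1.25 3.5
```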

Table of Contents

In ArcMap, the Table of Contents is the window where all of your spatial layers and data tables are displayed. Some of them might be visible, some may not, some might have special styling, others may be just tables. Regardless of how the data is displayed in your map, all of the active layers are listed in the Table of Contents. From here, you can move the data to draw in a different order, see where data is stored on your computer, and examine how different layers are displayed. You can add data to the map, remove data from the map, make changes to the display of the active layers, and access information about each layer.

As ArcMap is the software in the suite used to interact with our data and create cartographic layouts, we have the ability to make display changes, such as the color of the features when they draw on the map, whether or not we can see feature labels like state names when a US states layer is active, or which layer draws on top of the next. These display changes are not fundamental changes to the data, as it really makes no difference if that US States layer is displayed in light green every single time it's added to an ArcMap project. The five to eight little files each shapefile is made up of dictate things like where each feature lives in the world, what coordinate system the data is stored in, the coordinates of those features, and the attributes about the features - the non-spatial values which we can attribute to each feature. Not a single one of them dictates what color to fill in each state with, what color the outline should be, or whether the labels are visible (labels, however, draw from attributes, such as the name of each state). Those are all cartographic decisions made by a GIS technician each time they use the US States layer. In one project, the layer might be used for analysis and need to be featured at the front of the map; in another, the states might just be used for context - features which help visually locate other features in the world.

ArcMap can show us lots of information about our data: what it looks like and what order to draw it in, where it's stored on the computer, which layers are visible or hidden at any given time, and whether selections have been made in the attribute table (Chapter Five). Since ESRI (the company who makes ArcGIS) has no idea what you need to look at when, the Table of Contents offers us four different ways to list our data. We call these the List By options, and in this section, we will quickly look over three of the four, just to get an understanding that ArcMap can show us data in different ways, not necessarily to memorize each view right now.

List by Drawing Order:
    1. Turn layers on and off by checking and unchecking the box to the left of the layer's name.
    2. Re-order layers by dragging the name to another place in the list.
      • Feature classes draw in layers, so the top of the list is the 'top of the pile'. If you can't see a layer and it is turned on, it is likely covered by another layer. Try dragging it higher in the list to view it.
      • You can see where the layer is going by the dark black line which appears between layer names as you move your mouse around.
      • Sometimes it's hard to drop a layer at the top of the list simply because it's not cooperating with you. Try dropping it in the second place and dragging the top layer below it.
    3. Change layer symbology.
    4. Access the layer options menu for each layer.

List by Source:
      • You cannot drag the layers to reorder them. In fact, you can't reorder them at all.
      • You can see where the layers are stored/saved without accessing the properties of each individual layer.
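The 'top of the pile' rule can be sketched as follows: layers render from the bottom of the Table of Contents upward, so the layer listed first is drawn last and ends up on top. This is a toy model, not real ArcGIS code:

```python
# "Top of the list is the top of the pile": a renderer draws layers from
# the bottom of the Table of Contents upward, so whatever is listed first
# is painted last and therefore sits on top of everything else.

toc = ["Cities", "Rivers", "States"]   # Table of Contents, top to bottom

render_order = list(reversed(toc))     # draw the bottom layer first
print(render_order)                    # ['States', 'Rivers', 'Cities']
print("on top:", render_order[-1])     # drawn last -> visible on top
```

This is why a turned-on layer can still be invisible: if "Cities" were listed below "States", the state polygons would be painted over it.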

        Turning Layers On and Off and Collapsing and Expanding Layers

We've looked at the Display Window and the Table of Contents within ArcMap, stating that the layers participating in the current ArcMap session are listed in the Table of Contents, can be set to draw or not draw in the Display Window, and can be reordered so certain layers draw on top of other layers. From this, we can derive the fact that not every spatial layer on the computer is visible at once in ArcMap and that we need to intentionally add data to the MXD. While we will look at all the ways we can add data to an ArcMap session, the how isn't the most important thing right now; the fact that we can is, and from there, we can control which layers are visible and which are not.

Similar to adding an image to a Word document or your social media page, adding data to ArcMap involves opening a dialog box and looking in folders - and in the case of spatial data, geodatabases - until you find the layer you wish to add. Once you've found it, it's a matter of clicking on its name and confirming your selection with some sort of "OK" or "Add" button. When you add a layer to ArcMap, it automatically draws in the Display Window and places itself at the top of the Table of Contents. The choice for it to draw or not is controlled by checking the corresponding box in the Table of Contents.

In addition to automatically drawing each layer when it is added to a map session, the layer is added expanded, where a patch showing the color, line pattern, or point graphic the layer is represented by is listed below the layer name. For polygons, the patch is a rectangle; for polylines, it is a single straight line; and for points, it is the same graphic the points are displayed with in the map. These patches allow you to look at the Table of Contents and locate the color or symbol of a particular layer in the map, similar to using a legend in a completed cartographic product, as well as give quick access to the dialog boxes which change said color or symbol. Since the layer is expanded when it's added, you have the option to collapse it and hide the patch from view, regardless of whether the layer is drawing in the display window or not.

        Figure 4.11: Collapsing/Expanding and Turning On/Turning Off Layers in the Table of Contents within ArcMap
The check box for the states layer is checked, which means it will draw in the display window. In this screenshot, the states layer is expanded, showing the blue patch representing the color the polygon will fill in with. The rivers layer is collapsed, meaning whatever color the line draws in is currently hidden from view.

4.4.4: Menus And Toolbars

ArcGIS has a series of menus and toolbars which are most often found at the top of the window. Menus have tons of functionality, and many sub-menus with even more options. Toolbars are series of buttons which are either shortcuts for actions found in menus or controls for a process within the software, such as all the buttons we would use to draw new features in our map (Chapter Six). Again, the list here is not for you to memorize, as that will simply come in time, but for you to see that there is quite a bit of functionality in ArcMap, and the software programmers do their best to organize those functions in a logical manner. The toolbars listed are just three of many; two are examples of shortcut buttons for the menus, and one is an example of buttons for specific functionality, in this case, for changing and creating new features.

Menu Name - Highlights
File: New and save MXDs, print, and, important for turning in your labs, the Export Map option to save your MXD as a PDF
Edit: For now, the most important thing in Edit is "Undo"
View: Switch between Data View and Layout View, plus the option to pause drawing if it's drawing very slowly
Bookmarks: Create bookmarks in your MXD to save the current view and return to it later
Insert: Insert a new data frame or map elements (title, legend, scale bar) when working in Layout View
Selection: How to highlight data you'd like to interact with
Geoprocessing: The "Top Six" tools used to solve spatial problems (covered in Chapter Seven), plus the "Results" window, which shows the inputs and messages from all the geoprocessing tools run in an ArcMap session
Customize: A list of available toolbars and the extension options (covered on the next page)
Windows: Different views to interact with your data better, plus where to get your Table of Contents back when you lose it
Help: Launches your best friend in the ArcGIS world, the help menu
        Standard Toolbar
        The Standard toolbar has buttons which control the MXD (save, open, etc), add data to the MXD, and open additional windows (described below)
        Tools Toolbar
The Tools toolbar has functions used to navigate the map (pan, zoom, etc.) and explore the data in a spatial way (measuring features, finding an XY location, etc.)
        Editor Toolbar
The Editor toolbar contains a series of buttons used to draw new vector features (you create the dot-to-dot with your mouse by placing each vertex where it belongs in the world) and edit existing ones (changing how a polygon or polyline is drawn, moving a point from one place to another)
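To give a feel for what the Tools toolbar's measure function computes, here is a Python sketch of great-circle distance using the haversine formula. Note it assumes a spherical Earth, whereas ArcMap's measurements can use the actual ellipsoid, so the result is approximate:

```python
import math

# A sketch of what a measure tool computes under the hood: the
# great-circle distance between two lat/lon points (haversine formula).
# Spherical-Earth assumption; real geodesic math on the ellipsoid
# gives slightly different numbers.

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Denver to Longmont, roughly (approximate coordinates):
print(round(haversine_km(39.74, -104.99, 40.17, -105.10), 1), "km")
```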

        4.4.5: Additional Windows in ArcMap

The Table of Contents is not the only window available in ArcMap, even though it is the most common. Other windows allow you to find tools to analyze your data, search for tools or spatial data on your computer, and bring in the Catalog Tree without having to open ArcCatalog in a separate window. These windows add ease and functionality to the way that ArcMap works. Just to sound like a broken record, this section is not here for you to memorize the things ArcMap does, but to see that we interact with the software and spatial data in several ways beyond just adding data to the Table of Contents and examining it in the display window, hoping it will do a trick or provide us with an ice cream cone on a hot day.

        ArcCatalog Window

The ArcCatalog Window is basically like having the Catalog Tree tacked to the side of ArcMap. Anything you can accomplish in the Catalog Tree, such as creating, renaming, moving, and deleting folders, shapefiles, geodatabases, and feature classes, can be done in the ArcCatalog Window. The benefit of having a 'peek' at the Catalog Tree is simply not having to launch ArcCatalog, plus the two instances stay in sync. (As you move through this class, you will notice that ArcMap and ArcCatalog work on a cache, meaning changes you've made to data in ArcCatalog are not instantaneously updated in ArcMap without forcing a refresh. We will get to that.)

        Search Window

The Search Window can search for all things GIS. You can find layers, MXDs, images, and, what we will be using it for, tools in the ArcToolbox.


ArcToolbox Window

The ArcToolbox Window houses all of the geospatial tools available in ArcGIS. The tools are grouped by like function, such as all the tools we use to compare how layers relate to each other spatially. In that group, we might find tools that calculate what percentage of one layer intersects another, or a tool that will remove all parts of a layer that do not intersect with another.
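As a toy version of the "what percentage of one layer intersects another" question, here is a pure-Python sketch using axis-aligned rectangles. Real overlay tools handle arbitrary polygons; the parcel/flood-zone names are made up for illustration:

```python
# Toy overlay analysis: what percentage of one feature's area intersects
# another? Rectangles are (xmin, ymin, xmax, ymax) in map units.

def area(r):
    """Rectangle area; clamps to 0 for empty (non-overlapping) results."""
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def intersection(a, b):
    """Intersection rectangle of two axis-aligned rectangles."""
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

parcel = (0.0, 0.0, 10.0, 10.0)      # 100 square units
flood_zone = (5.0, 0.0, 15.0, 10.0)  # overlaps the right half of the parcel

overlap = intersection(parcel, flood_zone)
pct = 100.0 * area(overlap) / area(parcel)
print("%.0f%% of the parcel is in the flood zone" % pct)  # 50%
```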

In this class, you will be guided to particular toolboxes to perform certain tasks. However, knowing they are grouped together might help you find a tool for your final project or solve a problem you didn't even know you had! (Honestly, I've found more tools while looking for other tools than by knowing the tool existed in the first place.)

        Docking Windows

        Along with the Table of Contents there are other windows that can be docked along the sides of the ArcMap window to access data, tools, or ArcCatalog without launching ArcCatalog. The four most common tabs docked are: ArcCatalog, the Search Window, ArcToolbox, and the Results Window.

What makes ArcGIS so much better than other programs?

I'm doing research right now about Manifold vs ArcGIS Pro and how they're different, plus I'm just curious. I've read some articles about how ArcGIS is the most popular and the best, and I don't understand why. What does it have that all the other ones don't? It's pretty expensive, so it should be worth it.

On a serious note, where ArcGIS shines is that its parent company, ESRI, provides solutions for specific industries and problems, where the biggest competitor essentially requires a developer on staff to figure those out.

        This is as someone who greatly prefers QGIS.

        What it comes down to is that right now ESRI is a safe business choice, both for users and organisations.

        ESRI is going to be here next year, in ten years. They aren't likely to go bankrupt or be bought out. They have massive contracts with governments, militaries and multi-nationals.

ESRI practically gives the software and training packages away to universities, meaning that because it's easy for universities to teach with it, most graduates are introduced to and trained on ESRI software. This, combined with the large number of government and military users, means that there is a steady supply of trained users to employ.

        Because ESRI software is so common in organisations it's a safe skill for workers to sink time into.

Since a large part of the GIS workforce is familiar with it, they're much more likely to recommend it or choose to work with it over another package. Since a large number of large organisations use it, companies feel more comfortable choosing it.

ESRI practically writes the standards. The software is either interoperable with, or actually is, the default for many things, and will read or write almost everything, as long as you pay.

        ESRI will fix your problem, for a cost.

        Lastly, for most organisations, changing software is not a viable option. It's more likely that if they do change, they will move to ESRI because of all of the above.

        EDIT: Bonus reason - ESRI does EVERYTHING. There's no "oh, our software doesn't do X". ESRI may charge you the earth, and it might not be the best at X, but you can be pretty sure they do X. For a price.

        And as much as that price sucks, the fact that they opened up their top license + extensions for $100/year for personal use makes me kind of love Esri even more. Even though I’m sure the decision to do so was based on dollar signs, it allows the amateur GIS enthusiast access to one of the top GIS software packages for less than $9/month.

        I pay more than that for Netflix.

In my opinion, Network Analyst is superior in ArcGIS compared to QGIS.

As a student who has access to ArcGIS Pro for free, being able to find vast libraries of troubleshooting help online is far superior on ESRI's program. Learning ArcGIS is more secure for a job since it is free as a student and most businesses use it, and you can always use QGIS to refresh after your studies, but not ArcGIS.

        So easy to use that you need minimal training

ESRI is the Microsoft Windows of the GIS world. They were one of the first ones in the commercial GIS space and they executed well on their product. It's not perfect, but if you don't need the more advanced spatial and 3D analysis licenses, it's fairly affordable. And they keep investing in their product, even if it sometimes feels like it's nothing more than a bit of "Who moved my cheese?" action.

        And the dirty little secret is that you have a large, well trained work force that is willing to work for cheap because they like maps more than anyone else. And I am not talking cheap relative to other well educated college professions, I am talking cheap relative to your average construction worker kind of cheap labor.

Like Windows and Microsoft Office, it would take an epic meltdown from ESRI to lose their place in the commercial GIS space. QGIS is fantastic and getting better, but it also suffers from some really poor design flaws that are not getting better. It can sometimes feel like a mashup from 100 different decision-makers (which it is), with nobody really looking out for the overall look and feel. QGIS doesn't have the polish of ArcGIS, and it doesn't have the out-of-the-box functionality that ESRI's ArcGIS does, but it's free. You can code the functionality in there or piece it together from a collection of tools, but it takes time to search and figure it out in QGIS. When it comes to pure programming and spatial analysis, ESRI is definitely losing ground to open source solutions in Python and R.

Don't get tricked into thinking Manifold is a reasonable substitute for ArcGIS or QGIS; it simply is not. I am sure it can handle some things better, but overall it is not an equivalent to ArcGIS. It really lacks in support and is not heavily used anywhere. To put it into perspective, right now on the GIS StackExchange forum there are only 14 questions with the 'Manifold' tag; QGIS has 23,580 and ArcGIS is at 175,855. Do not go down the route of Manifold. There is one guy on this sub who seems to be really into Manifold, but beyond that, it is hardly ever discussed or referred to. If you are going to pick up knowledge on a GIS suite, go with ArcGIS; most GIS jobs require ArcGIS experience. QGIS is good and there are jobs specifying that knowledge, but I have never seen Manifold as a job requirement.

        Don't get tricked into thinking Manifold is a reasonable substitute for ArcGIS or QGIS it simply is not

        Experienced people don't talk "substitute" because they know virtually all GIS shops will use multiple tools. Manifold is no more a substitute for ArcGIS than ArcGIS is a substitute for Manifold or QGIS is a substitute. Each package does something the other packages cannot do.

        Manifold, for example, can do tasks in seconds that take ArcGIS Pro hours and QGIS over a day. People who do such work will add Manifold to their toolbox. They're not going to sit around on their hands for hours.

        A more insightful comment would have been to advise people not to get tricked into thinking any one tool can do everything.

        Nonsense. Manifold has the best support in the business. You can wait months or years for bugs to be fixed in some other GIS packages, while with Manifold bugs rarely last more than a week or two. In the next build (every two weeks or so) they get fixed. Critical bugs (rare, but happen) get fixed the next day.

        All that is free to licensees: no need to buy a maintenance contract to get new builds with bug fixes.

        To put it into perspective right now on the GIS StackExchange forum there are only 14 questions with 'Manifold' tag on it,

        That's just totally nuts. It's just as nuts as saying there are no GIS users in China because there are no posts in Chinese on GIS StackExchange.

Or, it's like saying there are no ESRI users because there are 116,000+ posts on the Manifold forum about Manifold and zero about ESRI.

Manifold users post on their own forum - the highly respected and extremely effective forum for discussions about Manifold products. Manifold people aren't stupid: they know that if they want to get help from the Manifold community or to discuss something about Manifold, it's smart to post where all the Manifold experts and users hang out. Manifold users don't post on reddit or GIS StackExchange because their own site has a far superior signal-to-noise ratio, and no issues with intellectual vandalism by downvote trolls, like reddit.

        If you don't want to learn what modern software can do, that's OK, but that's your issue, not anybody else's. Every time technology undergoes a major revolution, whether it is replacing vacuum tubes with transistors, buttons on telephones with smartphones, or single-threaded 1980's code with superfast parallel code, there's always a bunch of reactionaries talking down the new, wonderful technology. Smart people don't listen to guys who promote vacuum tube technology in 2019. They just go on to leverage the very best of what is new and modern.

An example: today's release of Viewer turns on GPU parallelism in the free product, just as it is in Release 9. Viewer has always been CPU parallel, but GPU parallelism was reserved for the commercial product. Now the free Viewer has massively parallel GPGPU technology as well. That will enable people to see for themselves how they can do work in Manifold in seconds that can take over an hour in ESRI or QGIS.

        Smart people will try that out for themselves, acquiring the street smarts to know how to use that new capability effectively, when to ignore it, and when to put it front and center in their workflow. Reactionaries and dumb people will come up with excuses why it's a great thing to use only one CPU core out of 16, and to not use any of thousands of GPU cores at all.

        I like to pick and choose tools so I get the best for whatever I'm doing. Most GIS people I know are like that as well. If I can do something in seconds instead of waiting hours for Arc or Q, I'll use Manifold. If I can do something better in Arc or Q, I'll reach for those. No big deal. Every good craftsman knows that choosing the right tool for the job is a key part of doing the job right.

        It depends on the industry, but ArcGIS has great tools and very good support. They also have just a ton of manuals and tutorials. For some industries they have extremely worthwhile tools that are actually quite a good value. For example their parcel fabric cadastral system is quite good and included in most licenses.

        It’s very nice for users because once you gain a certain level of competence you can fairly easily teach yourself nearly any function in the program.

Where I work, I extensively use both Manifold and QGIS. We're trialling ArcGIS but, to be honest, I'm struggling as to whether we need it. Manifold, whilst not all that user friendly at times, is extremely fast and reliable, and I always think of QGIS as the Swiss army knife of GIS - it will most likely do what you want it to. With this combination, do we really need ArcGIS? Especially since we're now using an alternative for our drone mapping, rather than their Drone2Map (which we've had a go at, and it didn't give us the quality we wanted), and I've written a script in Python to do something we needed in QGIS, rather than ArcGIS.

        I hear what you're saying here and mostly agree with you. But what about providing WFS/WMTS and REST endpoints for your data? Fast and scalable web GIS viewers? What about mobile field collection of data and photos etc? For me that's where ESRI wins.

        Asking which GIS is the best is a bit like asking a good artist which color is the best. :-) A good artist uses many colors and most people doing real life GIS will use many tools.

        Since the question was ArcGIS PRO and Manifold, I'll stick to that: ESRI people often use both. Manifold people often use ESRI as well. The two complement each other. Both are big packages so a comparison necessarily has to stick to rough summaries.

        ArcGIS PRO is great software that ESRI introduced a few years ago as their new flagship GIS. PRO is a classic, non-parallel GIS, in the tradition of classic packages like MapInfo, QGIS or Manifold's own classic Release 8. PRO is 64-bit, it features a trendy "ribbon" GUI, it has good integration with ESRI's Python, it has very many useful features of interest in classic GIS, and it pushes dependence on ESRI's web-based data storage and facilities as an integral part of ArcGIS PRO functionality.

        PRO is also painfully slow with bigger data or more complex tasks, it crashes too often, and it is weak at database work, including spatial DBMS. If you have 32 CPU cores in your system and 4000 GPU cores, PRO will use at most two of the CPU cores and none of the GPU cores for computation. Despite being years in the making, ArcGIS PRO does not have the performance to handle data sizes that are becoming the new normal today. A rough characterization of PRO is that it is older technology with weak internals that nonetheless provides a wide range of GIS capabilities, all packaged in a modern veneer for mass appeal. It's a great product.

        Manifold Release 9 is great software that is totally new, and decades ahead of classic software like Arc in terms of parallel technology. 9 is automatically, fully CPU parallel + GPU parallel, and it can open 100 GB of images in 1/10th of a second on an average desktop machine. The DBMS sophistication of 9 is Oracle class, far beyond any other GIS. 9 has the best spatial SQL of any GIS package or spatial DBMS. 9 is almost impossible to crash.

        9 always runs fully parallel, which makes it very fast and reliable with bigger data. 9 can easily handle data sizes that are the new normal today and in the future, likely through 2035. If you have 32 CPU cores in your system and 4000 GPU cores, 9 will use all 32 of the CPU cores and all 4000 of the GPU cores in parallel for computation, and it will do that automatically with no need for you to learn any parallel coding techniques. 9 has very good integration with 11 languages, supporting six languages as built-ins, including JavaScript, Python, C#, VBScript and others.

        Like PRO, 9 has thousands of features, but unlike PRO those features in 9 tend to be aimed at higher end spatial data engineering. The focus with 9 in recent months has shifted to "classic" GIS features, with a few hundred features having been added in the last six months (9 evolves very rapidly), now enough to allow non-experts to use 9 for daily GIS work. A rough characterization of 9 is that it is hypermodern technology with immensely strong internals. 9 provides a wide range of GIS capabilities, but 9 needs more veneer for mass appeal. Like PRO, 9 is a great product.

        9 is like a mile-high skyscraper in technology, providing an immensely greater view, but when you move into your penthouse you discover your apartment is still missing some items you want, like bigger towels for the shower and more than one choice of flavor in coffee pods for the super-duper automatic coffee maker. But getting a few more coffee pods in different flavors is way easier than building a mile-high skyscraper. PRO, in contrast, is like a great apartment with endless flavors of coffee pods that's stuck on the second floor of a building where a weak foundation limits you to three or four floors at most.

        What people want is a combination of PRO and 9: endless ESRI tools plus mile-high 9 skyscraper power and quality. ESRI users therefore now tend to use ArcGIS PRO for the extensive set of classic GIS features it provides, while augmenting PRO with Release 9 when processing is too slow or too crash-prone in Arc. They also use 9 for better spatial data engineering, including interconnecting to many spatial data sources at once, for example, copying and pasting between Oracle spatial warehouses, PostgreSQL, SQL Server and ESRI Geodatabases.

        How that symbiosis will play out over time as 9 steadily adds hundreds of features every few months will be interesting to see. I expect it will still be a mix of tools, just with different workflow.