Entering multiple images on a single page using ArcGIS Desktop?

I have 10 dynamic images.

I want to print all 10 images on a single layout page.

How can I do this using ArcGIS 9.2?

You don't mention whether these images are georeferenced. One option:

  1. Create a separate data frame for each image and place the frames manually on the layout.
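For the manual-placement approach, the frame positions can be worked out ahead of time. Below is a minimal sketch (plain Python, not arcpy) that computes grid positions for 10 frames; the letter-size page, margins, and 2-column grid are assumptions to adjust for your layout:

```python
# Sketch: compute element positions for laying out N data frames in a grid.
# Page size, margins, and grid shape are assumptions; adjust for your layout.

def grid_positions(n, cols=2, page_w=8.5, page_h=11.0, margin=0.5, gap=0.25):
    """Return (x, y, w, h) in page units for each of n frames, filled
    row by row from the top-left corner of the page."""
    rows = -(-n // cols)  # ceiling division
    cell_w = (page_w - 2 * margin - (cols - 1) * gap) / cols
    cell_h = (page_h - 2 * margin - (rows - 1) * gap) / rows
    frames = []
    for i in range(n):
        r, c = divmod(i, cols)
        x = margin + c * (cell_w + gap)
        # y is the lower-left corner, measured from the bottom of the page
        y = page_h - margin - cell_h - r * (cell_h + gap)
        frames.append((x, y, cell_w, cell_h))
    return frames

for i, (x, y, w, h) in enumerate(grid_positions(10), start=1):
    print(f"frame {i}: x={x:.2f} y={y:.2f} w={w:.2f} h={h:.2f}")
```

Each tuple gives a position and size you can use when dragging or sizing the corresponding data frame on the layout.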


Choose one of the following methods to add multiple map extents in Operations Dashboard for ArcGIS:

Configure multiple bookmarks in the same web map and embed in Operations Dashboard for ArcGIS

This option is suitable for comparing map extents from the same web map with multiple bookmarks.

  1. In ArcGIS Online Map Viewer, create multiple bookmarks that zoom in to different locations of interest in the same web map. For example, the image below shows three bookmarks created for boroughs in New York City: Brooklyn, The Bronx, and Manhattan.
  2. Save the web map.
  3. Open Operations Dashboard for ArcGIS. Click the Add drop-down arrow and select Map.
  4. Select the map from the map gallery.
  5. On the Settings tab, enable the Default Extent and Bookmarks option. After configuring, click Done.
  6. Repeat Steps 3 through 5 to embed the web map multiple times, depending on the number of map extents to display. Save the dashboard.
  7. In each map window, click the Bookmark icon at the top right corner, and click the specific bookmark to view its map extent.

The image below shows the dashboard of three boroughs in New York City configured using bookmarks. Zoom in to view the specific details.

Configure different map extents for multiple web maps and embed in Operations Dashboard for ArcGIS

This option is suitable for comparing multiple web maps with different map extents.

  1. In ArcGIS Online, open the web map's item details page and click Settings.
  2. In the General section, navigate to Extent and click Edit Extent to set the desired map extent.
  3. Open Operations Dashboard for ArcGIS. Click the Add drop-down arrow and select Map.
  4. Select the map from the map gallery. After configuring, click Done.
  5. Repeat Steps 1 through 4 to embed multiple web maps with their respective map extents. Save the dashboard.

The following image shows the dashboard containing web maps with different map extents of the live traffic situations in Amsterdam, New York City, Kuala Lumpur, and London.

Single Page Applications vs. Multi-Page Applications

If you want to understand the difference between single page applications and multi-page applications at a glance, here's a helpful graphic from an article written by Mike Wasson on the Microsoft Developer Network.

Multiple-page applications are the more "traditional" way of delivering a dynamic user experience inside of a web browser.

As you can see in the top portion of the graphic above, every change to the page, initiated by a user action, sends a request back to the server and requires rendering a new page to update the data and content from the user's perspective.

In the past, these applications were much, much bigger than their single page cousins, which meant longer load times and more complex user interfaces. Thanks to AJAX, structural complexity and the speed of data transfer aren't as much of a concern, but MPAs are still more difficult to develop than single page applications.


The exponential rise in the urban population of developing countries over the past few decades, and the accelerated urbanization that has resulted, have brought to the fore the need to develop environmentally sustainable and efficient waste management systems. Sanitary landfill constitutes one of the primary methods of municipal solid waste disposal. Optimized siting decisions have gained considerable importance as a means of ensuring minimum damage to the various environmental sub-components and of reducing the stigma borne by residents living in a landfill's vicinity, thereby enhancing the overall sustainability of the landfill life cycle.

This paper addresses the siting of a new landfill using multi-criteria decision analysis (MCDA) and overlay analysis within a geographic information system (GIS). The proposed system can accommodate new information on landfill site selection by updating its knowledge base. Several factors are considered in the siting process, including geology, water supply resources, land use, sensitive sites, air quality and groundwater quality. Weightings were assigned to each criterion according to its relative importance, and ratings according to the relative magnitude of impact. Results from testing the system on different sites show its effectiveness in the selection process.
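The weighted-sum step at the heart of such an MCDA can be sketched as follows. The criteria follow the list above, but the weights and site ratings are illustrative placeholders, not the study's values:

```python
# Minimal sketch of the weighted-sum scoring step in an MCDA site ranking.
# Criterion weights and site ratings are illustrative, not the paper's values.

def suitability(ratings, weights):
    """Weighted-sum suitability score; weights are normalized to sum to 1."""
    total_w = sum(weights.values())
    return sum(weights[c] / total_w * ratings[c] for c in weights)

# Relative importance of each criterion (higher = more important).
weights = {"geology": 3, "water_supply": 5, "land_use": 2,
           "sensitive_sites": 4, "air_quality": 2, "groundwater": 5}

# Each candidate site is rated 1 (worst impact) to 5 (best) per criterion.
sites = {
    "Site A": {"geology": 4, "water_supply": 3, "land_use": 5,
               "sensitive_sites": 2, "air_quality": 4, "groundwater": 3},
    "Site B": {"geology": 3, "water_supply": 5, "land_use": 2,
               "sensitive_sites": 4, "air_quality": 3, "groundwater": 5},
}

for name, ratings in sorted(sites.items(),
                            key=lambda kv: -suitability(kv[1], weights)):
    print(f"{name}: {suitability(ratings, weights):.2f}")
```

In a GIS overlay workflow the same arithmetic is applied per raster cell rather than per candidate site.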

April 27th, 2011 GIS Data

The National Weather Service in Birmingham is offering GIS data from storm surveys following severe weather events that unfold across Central Alabama. These data offer survey information from tornado tracks that are found by damage assessment teams in the field. The links below contain information for all 62 tornadoes that have been surveyed across the State of Alabama that formed during the morning and afternoon of April 27, 2011. Most information was produced and extracted from the National Weather Service Damage Assessment Toolkit, which has been developed by National Weather Service Meteorologists to modernize storm survey information into a geospatial format for the general public and GIS analysts. The information can be passed along to decision makers and the emergency management community after storm surveys have been completed.

ESRI Shapefiles:

All Tornado Paths - (Updated 8/27/11) Zipped folder contains polygon shapefiles of individual tornado tracks and estimated damage swaths, as well as an all-inclusive shapefile with all path information merged together. Shapefile projections are in Universal Transverse Mercator Zone 16N.

All Survey Points - (Updated 4/25/21) Zipped folder contains point shapefile locations within specific tornado paths that include detailed survey information gathered from assessment teams in the field. The attribute information includes: Event Date, Survey Date, Damage Indicator, Degree of Damage, EF-Rating at the point location, Wind Speed, Comments from the survey team regarding specific damage information, and the Tornado associated with the survey point. Shapefile projections are in Universal Transverse Mercator Zone 16N.

All Tornado Tracks - (Posted 8/28/11) Zipped folder contains line shapefiles of all 62 tornado tracks across the State of Alabama. Attribute information includes: Tornado Name, EF-Scale, Peak Wind, Counties affected, Start Time, Beginning and Ending Latitudes and Longitudes, End Time, Maximum width in miles and yards, Order Number of the tornado segment, and Tornado Number according to the statewide tornado map. Shapefile projections are in Universal Transverse Mercator Zone 16N.

Attention GIS Analysts Using ArcGIS: Please view the metadata within ArcCatalog to view a complete description of shapefile information.
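For analysts working outside ArcGIS, the fixed 100-byte header of a .shp file can be inspected with nothing but the Python standard library, for example to check the shape type and bounding box before a full load. The sketch below (using a synthetic header for the demo) illustrates the published shapefile format; it is not an NWS tool:

```python
# Sketch: parse the 100-byte main header of an ESRI shapefile (.shp) using
# only the standard library. The demo header below is synthetic.
import struct

SHAPE_TYPES = {0: "Null", 1: "Point", 3: "PolyLine", 5: "Polygon"}  # common codes

def read_shp_header(data):
    """Parse the fixed 100-byte .shp header; return its fields as a dict."""
    if len(data) < 100:
        raise ValueError("shapefile header is 100 bytes")
    file_code, = struct.unpack(">i", data[0:4])          # big-endian magic, 9994
    if file_code != 9994:
        raise ValueError("not a shapefile")
    file_len_words, = struct.unpack(">i", data[24:28])   # length in 16-bit words
    version, shape_type = struct.unpack("<2i", data[28:36])
    xmin, ymin, xmax, ymax = struct.unpack("<4d", data[36:68])
    return {"version": version,
            "shape_type": SHAPE_TYPES.get(shape_type, str(shape_type)),
            "file_bytes": file_len_words * 2,
            "bbox": (xmin, ymin, xmax, ymax)}

# Demo on a synthetic header: PolyLine type, UTM 16N-style coordinates.
hdr = (struct.pack(">7i", 9994, 0, 0, 0, 0, 0, 50)
       + struct.pack("<2i", 1000, 3)
       + struct.pack("<8d", 400000.0, 3600000.0, 600000.0, 3800000.0,
                     0, 0, 0, 0))
print(read_shp_header(hdr))
```

Replace the synthetic bytes with `open("tornado_tracks.shp", "rb").read(100)` (a hypothetical filename) to inspect a real file.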

Google Earth KMZ Files:

Tornado Survey Information KMZ (Updated 4/25/21) Google Earth KMZ contains all the detailed storm survey information gathered from the historic outbreak for the 29 tornadoes in the Birmingham County Warning Area and select tornadoes in the Huntsville County Warning Area. In select locations, damage photographs taken by assessment teams can also be viewed. Be advised that this information is still deemed as experimental. The information was extracted from the National Weather Service Damage Assessment Toolkit.

Alabama EMA Map Books:

The following map books, courtesy of the Alabama Emergency Management Agency, analyze the most affected areas from the April 27, 2011 tornado outbreak. Alabama Civil Air Patrol overflight imagery is used along with GIS roadway shapefiles overlaid on top of the imagery. The maps below were compiled by the Alabama State Oil and Gas Board, the Geological Survey of Alabama, and the University of Alabama.

Interactive Map - Tornado Paths and Survey Points:

The map below presents an interactive view of tornado paths and survey locations collected by damage assessment teams in the field from the April 27, 2011 Historic Tornado Outbreak. Many of these same points can be downloaded using the links for the Paths and Survey Point Shapefiles above. To view information from the survey points, zoom in to the specific point of interest and click on the point. A pop-up box will appear with specific survey information. Damage photographs, if they are available, can be viewed by scrolling to the bottom of the pop-up. Click on the image for the full resolution version in a separate tab. If several features appear in close proximity to one another, look for the "Next Feature" button (>) on the top right of the pop-up to view each feature. To zoom in and out, use your mouse wheel or use the plus and minus buttons in the top left corner of the map. Once again, these data are experimental. The map below can also be viewed by searching "Tornado" and clicking on "April 27, 2011 NWS BMX Interactive Survey Map."

Statewide Geospatial Information: (Updated 8/28/11)

The April 27, 2011 Tornado Outbreak had far-reaching effects across the State of Alabama. With assistance from the National Weather Service in Huntsville, the National Weather Service in Mobile, and the Alabama Emergency Management Agency, a detailed statewide tornado map was created. The map below depicts the tornado tracks starting in the morning of April 27th and ending during the evening hours. Each number below represents the start time of each tornado in chronological order, beginning on the morning of April 27th.

62 tornadoes have been confirmed across the entire state. By clicking on the numbers below, you will be taken to detailed survey information regarding the tornado of interest.

* Tornadoes #2, #41, and #54 contain asterisks because they are labeled as EF-2 (#2), EF-5 (#41), and EF-4 (#54). Those are the maximum ratings of those tornadoes while they were on the ground in Mississippi, before they moved into Alabama. Tornado #2 was an EF-1 in Alabama, #41 was an EF-3 in Alabama, and tornado #54 was an EF-3 in Choctaw County and an EF-2 in Marengo and Perry Counties.

Statewide Tornado Statistics:

Total Number of Tornadoes: 62
Total Mileage of All Tornadoes Across Alabama: 1202.47 Miles

Google Workspace security whitepaper

Google builds security into its structure, technology, operations and approach to customer data. Our robust security infrastructure and systems become the default for each and every Google Workspace customer. Beyond these levels, users are actively empowered to enhance and customize their individual security settings to meet their business needs through dashboards and account security wizards.

Google Workspace also offers administrators full control to configure infrastructure, applications, and system integrations in a single dashboard via our Admin Console—regardless of the size of the organization—simplifying administration and configuration. Consider the deployment of DKIM (a phishing prevention feature) in an on-premise email system. Traditionally, administrators would need to patch and configure every server separately, with any misconfiguration causing a service outage. Using our Admin Console, however, DKIM can be configured in minutes across thousands, or hundreds of thousands, of accounts with peace of mind and no outage or maintenance window required.

That’s just one example. Administrators have many powerful tools at their disposal, including authentication features like 2-step verification and single sign-on, and email security policies like secure transport (TLS) enforcement, which can be configured to meet the security and system integration requirements of any organization.

Access and Authentication

2-step verification and security keys

Customers can strengthen account security by using 2-step verification and security keys. 3 These can help mitigate risks such as the misconfiguration of employee access controls or attackers taking advantage of compromised accounts. 4 With the Advanced Protection Program for enterprise, we can enforce a curated set of strong account security policies for enrolled users. These include requiring security keys, blocking access to untrusted apps, and enhanced scanning for email threats.
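Many 2-step verification apps generate their codes with the TOTP algorithm (RFC 6238, built on the RFC 4226 HOTP construction). The sketch below shows how such a code is derived; it illustrates the open standard, not Google's own implementation:

```python
# Sketch of TOTP code derivation (RFC 6238 over RFC 4226 HOTP).
# Illustrative of the open standard, not Google's implementation.
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, t=None, step=30):
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    t = time.time() if t is None else t
    return hotp(secret, int(t // step))

# RFC 4226's published test secret; counter 0 yields the vector "755224".
print(hotp(b"12345678901234567890", 0))
```

A verifier recomputes the code server-side with the same shared secret and typically accepts one adjacent time window to tolerate clock drift.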

Single sign-on (SAML 2.0)

Google Workspace offers customers a single sign-on (SSO) service that lets users access multiple services using the same sign-in page and authentication credentials. It is based on SAML 2.0, an XML standard that allows secure web domains to exchange user authentication and authorization data. For additional security, SSO accepts public keys and certificates generated with either the RSA or DSA algorithm. Customer organizations can use the SSO service to integrate single sign-on for Google Workspace into their LDAP or other SSO system.

OAuth 2.0 and OpenID Connect

Google Workspace supports OAuth 2.0 and OpenID Connect, open protocols for authorization and authentication that allow customers to configure a single sign-on (SSO) service for multiple cloud solutions. Users can log on to third-party applications through Google Workspace—and vice versa—without re-entering their credentials or sharing sensitive password information.

Information Rights Management (IRM)

Most organizations also have internal policies which dictate the handling of sensitive data. To help Google Workspace administrators maintain control over sensitive data, we offer information rights management in Google Drive. Administrators and users can use the access permissions in Google Drive to protect sensitive content by preventing the re-sharing, downloading, printing or copying of the file or changing of the permissions.

Restricted email delivery

By default, users with Gmail accounts at your domain can send mail to and receive mail from any email address. In some cases, administrators may want to restrict the email addresses users can exchange mail with. For example, a school might want to allow its students to exchange mail with the faculty and other students, but not with people outside the school.

Using the restrict delivery setting allows administrators to specify the addresses and domains where users can send or receive email messages. When administrators add a restrict delivery setting, users can only communicate with authorized parties. Users who attempt to send mail to a domain not listed will see a message that specifies the policy prohibiting mail to that address, and confirms that the mail is unsent. Likewise, users receive only authenticated messages from listed domains. Messages sent from unlisted domains—or messages from listed domains that can’t be verified using DKIM or SPF records—are returned to the sender with a message about the policy.
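The delivery decision described above amounts to a domain allowlist check. A minimal sketch follows; the domain names and bounce wording are hypothetical, not Gmail's actual routing logic:

```python
# Sketch of a restrict-delivery decision: mail is allowed only to/from
# listed domains. Domain names and bounce text are illustrative.

ALLOWED_DOMAINS = {"faculty.school.edu", "students.school.edu"}

def delivery_allowed(address):
    """True if the address's domain is on the allowlist."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_DOMAINS

def route(address):
    """Deliver to authorized parties; otherwise return a policy bounce."""
    if delivery_allowed(address):
        return "deliver"
    return f"bounce: policy prohibits mail to {address}; message unsent"

print(route("teacher@faculty.school.edu"))
print(route("friend@example.com"))
```

In the real feature the same check also applies to inbound mail, combined with DKIM/SPF authentication of the sending domain.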

App access based on user context

To facilitate easier user access, while at the same time protecting the security of data, Google has developed context-aware access. 5 This provides granular controls for Google Workspace apps, based on a user’s identity and context of the request (such as device security status or IP address). Based on the BeyondCorp security model developed by Google, users can access web applications and infrastructure resources from virtually any device, anywhere, without utilising remote-access VPN gateways while administrators can establish controls over the device. You can also still set access policies, such as 2-Step Verification, for all members of an organizational unit or group.
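A context-aware access decision can be sketched as a conjunction of identity and context signals. The group membership, device posture, and network checks below are illustrative placeholders, not the actual BeyondCorp policy engine:

```python
# Sketch of a context-aware access check combining identity with request
# context (device posture, source network). All policy values are
# illustrative placeholders.
import ipaddress

CORP_NETWORK = ipaddress.ip_network("203.0.113.0/24")  # documentation range

def allow_access(user_in_group, device_encrypted, source_ip):
    """Grant access only when identity, device posture, and network all pass."""
    on_corp_net = ipaddress.ip_address(source_ip) in CORP_NETWORK
    return user_in_group and device_encrypted and on_corp_net

print(allow_access(True, True, "203.0.113.7"))
print(allow_access(True, False, "203.0.113.7"))  # fails: unencrypted device
```

Real policies can weight or combine signals differently per app, for example requiring a stricter device posture for admin consoles than for webmail.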

Asset Protection

Email spam, phishing and malware protection

Gmail protects your incoming mail against spam, phishing attempts, and malware. Our existing machine learning models are highly effective at doing this, and in conjunction with our other protections, they help block more than 99.9% of threats from reaching Gmail inboxes. One of our key protections is our malware scanner that processes more than 300 billion attachments each week to block harmful content. 6 63 percent of the malicious documents we block differ from day to day. 7 In addition, Gmail can scan or run attachments in a virtual environment called Security Sandbox. Attachments identified as threats can be placed in users' Spam folders or quarantined.

We’re continuing to improve spam detection accuracy with early phishing detection, a dedicated machine learning model that selectively delays messages (less than 0.05 percent of messages on average) to perform rigorous phishing analysis and further protect user data from compromise.

Our detection models integrate with Google Safe Browsing machine learning technologies for finding and flagging phishy and suspicious URLs. These new models combine a variety of techniques, such as reputation and similarity analysis on URLs, allowing us to generate new URL click-time warnings for phishing and malware links. As we find new patterns, our models get better with time, and adapt more quickly than manual systems ever could.

Email spoofing prevention

Spammers can sometimes forge the “From” address on an email message so that it appears to come from a reputable organization’s domain. To help prevent this email spoofing, Google participates in the DMARC program, which lets domain owners tell email providers how to handle unauthenticated messages from their domain. Google Workspace customers can implement DMARC by creating a DMARC record within their admin settings and implementing an SPF record and DKIM keys on all outbound mail streams.
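As an illustration, a DMARC policy is published as a DNS TXT record at `_dmarc.<domain>`; the domain, policy, and reporting address below are placeholders:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Here `p=quarantine` asks receivers to treat unauthenticated mail as suspicious, and `rua` names a mailbox for aggregate reports; a stricter policy would use `p=reject`.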

Warnings for employees to prevent data loss

When employees are empowered to make the right decisions to protect data, it can improve an enterprise’s security posture. To help with this, Gmail displays unintended external reply warnings to users to help prevent data loss. If you try to respond to someone outside of your company domain, you’ll receive a quick warning to make sure you intended to send that email. And because Gmail has contextual intelligence, it knows if the recipient is an existing contact or someone you interact with regularly, to avoid displaying warnings unnecessarily.

Hosted S/MIME to provide enhanced security

With Google’s hosted S/MIME solution, once an incoming encrypted email with S/MIME is received, it is stored using Google's encryption. This means that all normal processing of the email can happen, including extensive protections for spam, phishing and malware, as well as admin services (such as vault retention, auditing and email routing rules) and high-value end user features such as mail categorization, advanced search and Smart Reply. For the vast majority of emails, this is the safest solution, giving the benefit of strong authentication and encryption in transit without losing the safety and features of Google's processing.

Gmail confidential mode

Gmail users can help protect sensitive information from unauthorized access using Gmail confidential mode. Recipients of messages in confidential mode don't have the option to forward, copy, print, or download messages, including attachments. Users can set a message expiration date, revoke message access at any time, and require an SMS verification code to access messages.

Data Loss Prevention (DLP) for Gmail and Drive

Data loss prevention (DLP) 8 adds another layer of protection designed to prevent sensitive or private information such as payment card numbers, national identification numbers, or protected health information, from leaking outside of an organization. DLP enables customers to audit how sensitive data is flowing in their enterprise or turn on warning or blocking actions, to prevent users from sending confidential data. To enable this, DLP provides predefined content detectors, including detection of global and regional identifiers, medical information and credentials. Customers can also define their own custom detectors to meet their enterprise needs. For attachments and image-based documents, DLP uses Google's optical character recognition to increase detection coverage and quality. Learn more here about Gmail DLP. DLP can also be used to prevent users from sharing sensitive content in Google Drive or shared drives with people outside of your organization. In addition, customers can automate IRM controls and classification of Drive files using advanced DLP rules.
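A content detector of the kind described can be sketched as a pattern match plus a checksum. The example below flags candidate payment card numbers using the Luhn check; it is illustrative only and far simpler than a production detector:

```python
# Sketch of a DLP-style content detector: flag strings that look like payment
# card numbers (13-19 digits passing the Luhn checksum). Illustrative only.
import re

def luhn_ok(digits):
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return candidate card numbers found in text (digits, spaces, dashes)."""
    hits = []
    for m in re.finditer(r"\b(?:\d[ -]?){13,19}\b", text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("Invoice: card 4111 1111 1111 1111, ref 12345"))
```

The checksum step matters: it suppresses most random digit runs that the regex alone would flag, which keeps false-positive rates manageable.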

Configuring Google Workspace security settings

Security and alert management

With multiple security and privacy controls in place, organizations need a centralized location where they can prevent, detect, and remediate threats. The Google Workspace security center 9 provides advanced security information and analytics, and added visibility and control into security issues affecting your domain. 10 It brings together security analytics, actionable insights and best practice recommendations from Google to empower you to protect your organization, data and users.

As an administrator, you can use the security dashboard to see an overview of different security center reports. The security health page provides visibility into your Admin console settings to help you better understand and manage security risks. Furthermore, you can use the security investigation tool to identify, triage, and take action on security and privacy issues in your domain. Administrators can automate actions in the investigation tool by creating activity rules to detect and remediate such issues more quickly and efficiently. For example, you can set up a rule to send email notifications to certain administrators if Drive documents are shared outside the company.

The alert center for Google Workspace provides all Google Workspace customers with alerts and actionable security insights about activity in your domain to help protect your organization from the latest security threats, including phishing, malware, suspicious account, and suspicious device activity. You can also use the alert center API to export alerts into your existing ticketing or SIEM platforms.

Trusted domains for drive sharing

Administrators can control how users in their organization share Google Drive files and folders: for example, whether users can share files with people outside of the organization, or whether sharing is restricted to trusted domains only. 11 Optional alerts can be established to remind users to check that files aren't confidential before they are shared outside of the organization.

Video meetings safety

Google Meet takes advantage of the same secure-by-design infrastructure, built-in protection, and global network that Google uses to secure your information and safeguard your privacy. Our array of default-on anti-abuse measures, including anti-hijacking protections for both web meetings and telephony dial-ins, keeps your meetings safe.

For users on Chrome, Firefox, Safari, and the new Edge, we don't require or ask for any plugins or software to be installed; Meet works entirely in the browser. This limits the attack surface for Meet and the need to push out frequent security patches on end-user machines. On mobile, we recommend that you install the Google Meet app from the Apple App Store or the Google Play Store.

We support multiple 2-Step Verification (2SV) options for Meet that are both secure and convenient: hardware and phone-based security keys, as well as Google prompt. Meet users can enroll their account in Google’s Advanced Protection Program (APP). APP provides our strongest protections available against phishing and account hijacking and is specifically designed for the highest-risk accounts; we’ve yet to see people successfully phished if they participate in APP, even when they are repeatedly targeted. For more information, check out this page.

Endpoint management

The protection of information on mobile and desktop devices can be a key concern for customers. Google Workspace customers can use endpoint management 12 to help protect corporate data on users’ personal devices and on an organization’s company-owned devices. By enrolling the devices for management, users get secure access to Google Workspace services and organizations can set policies to keep devices and data safe through device encryption and screen lock or password enforcement. Furthermore, if a device is lost or stolen, corporate accounts can be remotely wiped from mobile devices and users can be remotely signed out from desktop devices. IT admins can also manage and configure Windows 10 devices through the Admin console, and users can use existing Google Workspace account credentials to log in to Windows 10 devices and access apps and services with single sign-on (SSO). Reports enable customers to monitor policy compliance and get information about users and devices. You can obtain further information on endpoint management here.

Reporting analytics

Google Workspace audit logs

Enterprises storing data in the Cloud seek visibility into data access and account activity. Google Workspace audit logs help security teams maintain audit trails in Google Workspace and view detailed information about Admin activity, data access, and system events. Google Workspace admins can use the Admin Console to access these logs and can customize and export logs as required.

Security reports

Google Workspace administrators have access to security reports that provide vital information on their organization’s exposure to data compromise. They can quickly discover which particular users pose security risks by not taking advantage of 2-step verification, installing external apps, or sharing documents indiscriminately. Administrators can also choose to receive alerts when suspicious login activity occurs, indicating a possible security threat.

Insights using BigQuery

Google Workspace admins can export audit logs and other information to BigQuery. With BigQuery, Google’s enterprise data warehouse for large-scale data analytics, customers can analyze Google Workspace logs using sophisticated, high-performing custom queries, and leverage third-party tools for deeper analysis.

Data Recovery

Restore a recently deleted user

An administrator can restore a deleted user account for up to twenty days after the date of deletion. After twenty days, the Admin console permanently deletes the user account, and it can’t be restored, even if you contact Google technical support. Please note that only customer administrators can delete accounts.

Restore a user’s Drive or Gmail data

An administrator can restore a user’s Drive or Gmail data for up to 25 days after the data is removed from the user’s trash, subject to any retention policies set in Vault. After 25 days, the data cannot be restored, even if you contact technical support. Google will delete all customer-deleted data from its systems as soon as reasonably practicable and within a maximum period of 180 days.
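The recovery windows above are simple date arithmetic; as a small sketch (dates are hypothetical):

```python
# Sketch of the recovery-window arithmetic described above: a deleted user
# account is restorable for 20 days, Drive/Gmail data for 25 days after
# leaving the trash. The example date is hypothetical.
from datetime import date, timedelta

def restore_deadline(event_date, window_days):
    """Last date on which a restore is still possible."""
    return event_date + timedelta(days=window_days)

deleted = date(2024, 3, 1)
print("account restorable until:", restore_deadline(deleted, 20))
print("Drive/Gmail data restorable until:", restore_deadline(deleted, 25))
```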

Retention and eDiscovery

An administrator can turn on Google Vault to retain, hold, search, and export data in support of your organization’s retention and eDiscovery needs. Vault supports such data as Gmail messages, files in Google Drive, and recordings in Google Meet, among others.

Data Residency

As an administrator, you can choose to store your covered data in a specific geographic location (the United States or Europe) by using a data region policy. Data region policies cover the primary data-at-rest (including backups) for these Google Workspace Core Services. Covered data includes Drive file content, Google Chat messages and attachments, Gmail mail subjects and messages, as well as other Core Services data.

3 Further information about deploying 2-step verification can be found on our support page.
4 See security best practices guidance on our security checklists page.
5 Integrated with Cloud Identity. Using context-aware access capabilities to protect access to Google Workspace apps requires a Cloud Identity Premium or Google Workspace Enterprise license.
6 As of February 2020.
7 As of February 2020.

8 Available to Google Workspace Enterprise and paid editions of Google Workspace for Education customers only.
9 Included with Google Workspace Enterprise edition and Google Workspace for Education Standard and Google Workspace for Education Plus.
10 You must be an administrator with a Google Workspace Enterprise, Google Workspace for Education Standard, Google Workspace for Education Plus, Drive Enterprise, or Cloud Identity Premium Edition license to access the security center. With Drive Enterprise or Cloud Identity Premium Edition, you receive a subset of security center reports on the security dashboard.
11 Certain features, such as restricting sharing to only whitelisted domains, are only available with Google Workspace Enterprise, Google Workspace for Education Plus, Drive Enterprise, Business, and Nonprofits editions.
12 Included as standard with Google Workspace.

Fifty Years of Commercial GIS – Part 1: 1969-1994

Author’s Note: During the past 50 years, there have been many contributions to commercial GIS. This article, which offers my perspective only, will hopefully provide some understanding of how GIS developed during its first 25 years and, perhaps, provoke some thoughtful reflections by those who began their careers during this time as well.

As we turn the page on our calendars to 2019, it marks 50 years of commercial GIS. GIS indeed reaches a milestone that recognizes a mature, yet still growing, technology that is finding new applications in business intelligence, advertising technology, smart cities and many emerging technologies such as autonomous vehicles, Big Data, machine learning and blockchain. During the '70s and '80s, GIS was still an immature, not-very-user-friendly technology, and few people knew how to use it. This article hopes to chronicle some of the companies that led to its commercial success.

In 1969, two companies opened their doors and began their journey toward developing some of the most impactful technologies used today. Intergraph, now Hexagon (but which began operations as M&S Computing), and Environmental Systems Research Institute, now Esri, were launched in that year. Jim Meadlock, an ex-NASA engineer from the Apollo program, founded Intergraph; Jack Dangermond, a Harvard-trained landscape architect, and his wife, Laura, founded Esri. Shortly thereafter, these companies were joined by a few others in fostering the applications and innovations associated with the technology, such as Trimble, Autodesk, Bentley Systems, Comarc Design Systems, Synercom and others. I worked for Intergraph in the '90s and found Meadlock to be a frank and honest chief executive, and when I worked in publishing, Esri was my client and I always appreciated the time Dangermond took with me for interviews and to explain his corporate direction.

One person, however, is considered the “father of GIS.” Roger Tomlinson, a Canadian geographer, is credited with the development of the first computer-aided mapping system in the early 1960s. Tomlinson was a true gentle giant of a man, standing 6 feet 5 inches or better, with a deliberate, academic cadence to his spoken word and a distinct British accent. He worked in many capacities in both government and academia and ran his own consulting practice for many years until his death in 2014. I had the privilege of hearing Dr. Tomlinson speak at many Esri user conferences.

During the decade following 1969, a singular development catapulted computer mapping: the launch of Landsat by NASA and the U.S. Department of the Interior in July 1972. Landsat 1, a multispectral imaging platform in a sun-synchronous orbit, captured digital images of the Earth’s surface and returned to the same geographic location to re-image that area every 18 days. Data were beamed to ground receiving stations around the world and then archived at the Earth Resource Observation and Science (EROS) Center, a U.S. Geological Survey facility in Sioux Falls, S.D. During the next 40 years, follow-on Landsat missions with better spatial and spectral resolution provided the longest temporal coverage of the Earth’s surface. As a graduate student at Dartmouth in the late '70s, I was processing Landsat imagery using a classification algorithm developed at the college and a Digital Equipment Corporation (DEC) PDP 11 minicomputer. I later worked at EROS and had the distinct honor of working side by side with some true innovators of remote sensing science.

Early Beginnings of GIS Software

Mapping software in the '70s allowed for the digitization of vector data with the ability to reference points, lines and polygons to a geographic coordinate system. Very little analysis was built into these early mapping software programs. However, raster data image processing and visualization on mainframe and minicomputers to support Landsat data scientists was taking off.

The GE Image 100 workstation and the Interactive Digital Image Manipulation System (IDIMS) by ESL, running on DEC VAX computers, were the mainstay software solutions in the late '70s and early '80s. Each was capable of performing advanced image integration and computational modeling. Classified Landsat imagery was sometimes printed using a large line printer that used alphanumeric characters and a large black ribbon. The objective was to fill a wall with printouts to see detail of large areas. To get a color composite image, the ink cartridge was changed four times (red, green, blue and black inks) and the same paper was rethreaded each time.

During this same time, IBM’s Geo-Facilities Information System (GFIS) ran on mainframes and leveraged the power of an object-oriented approach to store graphical data. Smallworld, also an object-oriented software solution, had its first release in 1989 and was focused primarily on serving the utility market (Smallworld was acquired by GE in 2000). (See more about these solutions from Peter Batty, one of the true GIS thought leaders, who worked at both companies).

Some early desktop computers, running Z80 processors with eight-inch floppy drives, were developed for image visualization and prototyped at EROS. The Earth Resources Data Analysis System Company (ERDAS), cofounded by Lawrie Jordan in 1979, commercially developed image processing software that utilized Z80s and PDP 11s as well. Other early GIS systems were hybrid marriages of AutoCAD and dBase. ArcInfo by Esri and the Modular GIS Environment by Intergraph were minicomputer-based GIS software systems. Atlas Graphics (later Atlas GIS) by Strategic Mapping, Inc. and MapInfo by MapInfo Corporation were released in the mid-'80s on DOS-based Intel 386 desktop computers. Both Atlas Graphics and MapInfo offered GIS at a much lower cost, thus bringing more users into GIS. Bentley Systems’ MicroStation was one of the first PC-based CAD systems, used extensively in mapping, and later became the underpinnings of Intergraph’s MGE.

However, geospatial data manipulation and visualization were hampered so much by slow microprocessor clock speed and limited random-access memory on PCs that some software had to run on minicomputers like DEC VAX, Data General and Prime. It was a constant battle to balance the need for developing complex thematic maps with visualizing the results, which often required PC users to purchase upgrades to their graphics cards. Screen redraw times were at times glacial. Users were building their own PCs to upgrade motherboards, CPUs and RAM. When Intel released their 486 chip, we all felt as though we jumped into a new era of speed. Such was the time when we leaned on Moore’s Law.

Data…The Rocket Fuel of GIS

Geographic Data Technology, founded by Don Cooke, opened for business in 1980 in Lebanon, N.H., as one of the first street centerline data companies. GDT had a contract from the U.S. Census Bureau to leverage TIGER, the Topologically Integrated Geographic Encoding and Referencing System, one of the first complete digital maps of the U.S. In 1983, ETAK, a Menlo Park, Calif., company founded by Stan Honey with backing from Nolan Bushnell, began to manufacture one of the first in-vehicle navigation systems, which utilized the newly published TIGER digital street files. However, due to the constraints of the technology at the time, it took multiple CDs to hold the digital street files of each city that ETAK chose to sell, along with hardware that included CD players and a small green-screen display that sat near the passenger-side well of the car. I remember interviewing for a job at ETAK and seeing a very crude navigation system, but at the time, it was revolutionary.

Nielsen, Claritas, National Planning Data Corporation, CACI, Donnelley Marketing Information Services, Business Location Research and MapInfo were demographic and points-of-interest data product suppliers in the '80s and early '90s. These companies provided the data in formats that certainly fueled early business applications of GIS. Wessex, a small Winnetka, Ill.-based data company founded by Scott Elliott, commercialized TIGER by delivering the data for only $995, far below the price of competing data packages. Elliott later started Directions Magazine.

The coalescing of GIS software and data companies caught the attention of Fortune Magazine and Business Week in 1989, both of which ran articles on GIS. Forbes Magazine published an extensive article in January 1992 on how Arby’s, the fast-food restaurant chain, was using GIS to locate stores and refine their target marketing. Hal Reid, the VP for Innovation for Arby’s at the time, was profiled in that article and later brought his unique understanding of the potential of GIS to both Intergraph and Dunkin’ Donuts. And, in the world of high-stakes competitiveness for the best fast-food restaurant locations, McDonald’s Corporation went so far as to build their own GIS called Quintillion. In 1994, the rights to Quintillion were acquired by Dakota Marketing.

GUI & Usability

During the early 1990s, Microsoft Windows provided a more intuitive graphical user interface that software developers leveraged to broaden the user base of GIS. More desktop GIS companies began to sprout: TYDAC published the Spatial Analysis System, and Scan/US Inc. published software under the same name; both utilized a quadtree data structure for thematic mapping. TYDAC and MapInfo recognized the early potential of targeting business users for GIS. MapInfo chose to port its software from DOS to Windows. TYDAC chose to port its software to IBM’s OS/2, which was not the best business decision. MapInfo also won a contract to provide software plus geographic and demographic data to Microsoft that allowed users to create thematic maps using Excel.

A large drawback to GIS was the lack of industry-specific software applications. In an effort to bridge the gap between the required location analytical technology and an application that spoke the language of the user, a consortium of organizations built the Geographic Underwriting System for the property and casualty sector of the insurance industry. Electronic Data Systems, the Insurance Services Office, Inc. and DataMap Inc. test marketed GUS in Florida in 1991 with the intent of going nationwide shortly thereafter.

Publishers and Academics Support Early GIS Development

While there were many books that supported the commercial development of GIS and remote sensing, there were a few noteworthy contributions. Dana Tomlin significantly contributed to geospatial data science in his book “Geographic Information Systems and Cartographic Modeling,” published in 1990. Here he explained the principles of map algebra that he had first introduced in the early '80s. Early GIS software such as SPANS could implement Tomlin’s work. In the world of business GIS, three books, I believe, offered authoritative insights: “Location Strategies for Retail and Service Firms” by Ghosh and McLafferty (1987); “Site Selection” by Bob Buckner (1982); and “Profiting from a Geographic Information System,” a compendium of business application use cases edited by Gil Castle in 1993, which brought to light applications in retail, banking and insurance that offered true competitive advantages to those that invested in the technology. However, the “father of retail site selection” was Dr. David Huff of The University of Texas, who had introduced his spatial interaction models, known as the Huff Model, in 1962 but later helped Esri develop software that allowed retailers to implement his models as well as the laws of retail gravitation, such as those proposed by William Reilly in 1931.

In the early 1990s, the first GIS-specific magazines began publishing, offering software providers an opportunity to advertise and promote the technology. GIS World was a monthly periodical, which quickly branched into region-specific magazines, GIS Europe and GIS Asia Pacific, followed by conferences and later a magazine devoted specifically to the business applications of GIS entitled Business Geographics. However, I first saw ads for both TYDAC and MapInfo in American Demographics in the late '80s, which forever changed my career from that of a geologist to GIS software marketing, so much so that I penned my first article on GIS for the Dallas Business Journal in 1989, and in 1991, I began writing the first column dedicated to “GIS in Business” for GIS World. Later, I served as the editor of Business Geographics in the late '90s and Directions Magazine, the first Internet-based GIS publication, from 2001-2014.

These early companies and thought leaders pioneered GIS and set the stage before the Internet became an essential tool in GIS and before the advent of spatially-enabled databases, Microsoft MapPoint and Google Earth. More to come in Part 2.

“Profiting from a Geographic Information System,” Gil Castle, GIS World, 1993

Thanks to Gil Castle, Hal Reid, Adena Schutzberg and Clarence Hempfield for reviewing this article and checking my memory.

From the imagemagick package, use the convert command:

You will get a single pdf containing all jpg in the current folder. The option -auto-orient reads the image's EXIF data to rotate the image.
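The convert invocation itself did not survive above; a minimal sketch (assuming ImageMagick 6, whose command-line entry point is convert; ImageMagick 7 uses magick instead, and the output name is illustrative):

```shell
# Merge every .jpg in the current directory into one PDF, one image per page.
# -auto-orient applies each image's EXIF rotation before the page is written.
convert -auto-orient *.jpg output.pdf
```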

Edit: Note that the images will be out of order if they are not numbered. If you have 10 or more, you need to name them with zero-padded numbers: filename01.jpg through filename99.jpg. The leading zeros are required for proper ordering. If you have 100 or more, use filename001.jpg through filename999.jpg.
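The shell expands *.jpg in collation order, which is why the zero-padding matters; a quick demonstration (pinned to the C locale so the order is predictable; the directory name is arbitrary):

```shell
export LC_ALL=C              # fix collation so glob order is plain byte order
mkdir -p /tmp/globdemo && cd /tmp/globdemo && rm -f ./*.jpg
touch file2.jpg file02.jpg file10.jpg
# Glob order is file02.jpg, file10.jpg, file2.jpg: the unpadded
# file2.jpg sorts AFTER file10.jpg, scrambling the page order.
printf '%s\n' *.jpg
```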

Unfortunately, convert re-encodes the images before "packing" them into the PDF. So, for minimal loss of quality, it is better to put the original .jpg files (this works with .png too) into the PDF unchanged; to do that, you need to use img2pdf.

A shorter one-liner solution, also using img2pdf as suggested in the comments:

img2pdf *.jp* --output combined.pdf

(optional) OCR the output PDF

ocrmypdf combined.pdf combined_ocr.pdf

Below are the commands from the original answer, which involve more steps and require more tools:

This command is to make a pdf file out of every jpg image without loss of either resolution or quality:

ls -1 ./*jpg | xargs -L1 -I <> img2pdf <> -o <>.pdf

This command will concatenate the pdf pages into one document:

pdftk *.pdf cat output combined.pdf

And finally, I add an OCRed text layer that doesn't change the quality of the scan in the pdfs so that they can be searchable:

An alternative to using pypdfocr :
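Neither invocation survived the formatting; assuming the merged file is named combined.pdf, the original pypdfocr step and the ocrmypdf alternative might look like this:

```shell
# pypdfocr writes a searchable copy named combined_ocr.pdf:
pypdfocr combined.pdf
# ocrmypdf does the same, with an explicit output name:
ocrmypdf combined.pdf combined_ocr.pdf
```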

Worked for me (but a warning: it turns off compression, and the resulting PDF will be big!):

The +compress option helps it not hang. NOTE: +compress turns off compression. Without the +compress option, the machine I was working on at the time seemed to hang forever (though I did not wait forever to find out). Your mileage may vary quite a bit! RTFM on the -compress option, and maybe experiment with -compress <type> if you have slow-compression or hanging problems to find out what works for you.
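The command this answer refers to was dropped; a hedged reconstruction (the output name is illustrative):

```shell
# +compress stores the pages uncompressed: a much larger PDF,
# but it avoids the apparent hang during compression.
convert *.jpg +compress output.pdf
```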

Open the jpg or png file with LibreOffice Writer and export it as PDF.

I hope this is a simple way to export to PDF.

I'm surprised nobody has pointed out pdfjam, which is a super efficient way to merge images/PDFs into a single PDF:
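The command itself was lost from this answer; in its simplest form (a sketch relying on pdfjam's defaults) it is likely just:

```shell
# Place every .jpg on its own A4 page in a single PDF;
# by default the output name ends in -pdfjam.pdf.
pdfjam *.jpg
```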

This will create a PDF in A4 format from all the .jpg files, usually named with a -pdfjam.pdf suffix. To force a specific output name, you have the --outfile <your output> option!

As far as I can see, there is no re-encoding of the files, making the command pretty fast compared to convert.

To install pdfjam, I'm not sure of the most efficient way (it comes automatically with LaTeX), but you can try:
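On Debian or Ubuntu, pdfjam ships with TeX Live's extra utilities; the package name below is my assumption, so check your distribution's repositories:

```shell
# pdfjam is bundled with the TeX Live distribution on Debian/Ubuntu.
sudo apt-get install texlive-extra-utils
```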

The following solution also relies on ImageMagick's convert but is a bit more sophisticated because:

  • It allows all images to have different dimensions, while keeping the output PDF page size the same (A4 in the example below).
  • It centers the images on the PDF page.
  • It allows you to guarantee a minimum space between image borders and PDF page borders, to allow printing the PDF without problems.
  • It does not change the image data. (So image quality is unaffected, the PDF file has about the same file size as the image, and you can re-extract the original images later with pdfimages -j file.pdf img.) At the moment, this only works with PNG – see the comment by @dma_k below.

Use my script from this answer to convert each image into its own one-page PDF file with A4 page size and a 5% border all around.

Concatenate all your one-page PDF files with PDFtk as follows:
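The PDFtk command referenced here did not survive; assuming the one-page files from the previous step are named img01.pdf, img02.pdf, and so on (the names are my assumption), the concatenation would be:

```shell
# Glob order is lexicographic, hence the zero-padded page numbers.
pdftk img*.pdf cat output merged.pdf
```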

Although convert does the job, it tries to open all the source files together, and if you have a lot of files and not a huge amount of RAM, you can run out of it.

So as an alternative you can run the following commands in a terminal while being in the folder where the jpg files are.

This converts each image to a single page pdf, one by one, without overloading the system. Then:

This merges the pdfs into a single pdf and deletes the single page ones.
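The two commands this answer describes were dropped in formatting; a hedged reconstruction (file names are illustrative; one convert per image keeps memory use low, then pdftk merges the pages):

```shell
# Convert each image to its own one-page PDF, one at a time.
for f in *.jpg; do convert "$f" "${f%.jpg}.pdf"; done
# Merge the pages, then delete the intermediate one-page PDFs.
pdftk *.pdf cat output /tmp/combined.pdf && rm -- *.pdf && mv /tmp/combined.pdf .
```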

Using img2pdf you can do that.
But sometimes you may need your images converted to a document in a particular order: by timestamp, by size or by name. To make that possible, this script does the work.

ls -tr | tr '\n' ' ' | sed 's/$/ --output mydoc.pdf/' | xargs img2pdf

In place of mydoc.pdf, enter the name of the output file you wish.
Options of the ls command (instead of -tr, use these as per your need):

  • -S , sort by file size, largest first
  • -t , sort by modification time, newest first
  • -X , sort alphabetically by entry extension
  • -r , reverse order while sorting

It is cumbersome, and the file size can explode; to avoid an exploding file size you can do these steps:

a) First, export the *.jpeg files as *.jpg files with GIMP. (Despite the different extensions, .jpeg and .jpg files are the same image format.) The jpg file may need a small white or black 'passepartout' (frame).

b) With Android and the app "photocompress", compress the jpg files to under 300 KB each.

c) Then, back on the Ubuntu desktop, you can edit these files with LibreOffice and create a PDF from them.

Surely somebody knows how to do steps a) through c) simply in the terminal?

A side effect is that, because of the reduced file size, a recipient with poor software may see degraded images, but that is not your fault.

Mapping the Twitterverse

About 30 million Twitter users share data about their location. They choose whether to release their city and state, their address or their precise latitude and longitude. Their tweet text may also reveal a user’s location as illustrated here, in which a young woman says she is at home. Credit: Chris Weidemann

What does your Twitter profile reveal about you? More than you know, according to Chris Weidemann. The GIST master's student has developed an application that follows geospatial footprints.

You start your day at your favorite breakfast spot. When your order of strawberry waffles with extra whipped cream arrives, it's too delectable not to share with your Twitter followers. You snap a photo with your smartphone and hit send. Then, it's time to hit the books.

You tweet your friends that you'll be at the library on campus. Later that day, palm trees silhouette a neon-pink sunset. You can't resist. You tweet a picture with the hashtag #ILoveLA.

You may not realize that when you tweet those breezy updates and photos of food, you are sharing information about your location.

Chris Weidemann, a graduate student in the Geographic Information Science and Technology (GIST) online master's program at USC Dornsife, investigated just how much public geospatial data was generated by Twitter users and how their information—available through Twitter's application programming interface (API)—could potentially be used by third parties. His study was published June 2013 in the International Journal of Geoinformatics.

"I'm a pretty private person, and I wish others would be more cautious with the types of information they share," said Weidemann, a geospatial technology manager for a Virginia company that builds geographic information systems for the federal government.

Weidemann understands how data is stored and how it can be accessed. He found that location metadata from users' profiles is more publicly available than most people realize.

Twitter has approximately 500 million active users, and reports show that 6 percent of users opt in to allow the platform to broadcast their location using global positioning technology with each tweet they post. That's about 30 million people sending geo-tagged data out into the Twitterverse. In their tweets, people can choose whether their information is displayed as a city and state, an address or their precise latitude and longitude.

That's only part of their geospatial footprint. Information contained in a post may reveal a user's location. Depending upon how the account is set up, profiles may include details about their hometown, time zone and language.

This heat map shows the density of all tweets during the study collection period where the location was identifiable through ambient information or direct sharing. Credit: Image courtesy of Chris Weidemann.

"There is all sorts of information that can be gleaned from things outside of the tweet itself," Weidemann said.

To get a fuller picture of what that collection of data might reveal about a Twitter user, Weidemann developed an application called Twitter2GIS to analyze the geospatial data Twitter users generate.

Using Twitter's API and Google's Geocoding API, Twitter2GIS collects tweets from either a geographic region or a specific Twitter user. That geospatial information—or ambient data—is processed by Esri ArcGIS, a GIS software program used to map and analyze data, searching for trends.

During the sampling period, an average of 4 million Twitter users divulged their physical locations through global positioning system coordinates or other active location monitoring. Additionally, 2.2 percent of tweets – about 4.4 million tweets a day – provided substantial ambient location data in the text of their posts.

By harvesting geospatial information, corporations could potentially build profiles of individuals for marketing purposes. However, it also opens users to more malicious intent.

"The downside is that mining this kind of information can also provide opportunities for criminal misuse of data," Weidemann said.

Also, an application similar to Twitter2GIS could potentially be developed to mine data from other social media outlets.

Weidemann originally produced Twitter2GIS for his capstone project in a course taught by Jennifer Swift, assistant teaching professor of spatial sciences. In "Web GIS," students study how to design, build and implement Web-mapping applications.

Swift, Weidemann's thesis adviser, said the project stood out for its thoughtful look at geospatial information. "It will help create an awareness among the general population about the information they divulge," she said.

For his master's thesis, Weidemann is taking his application one step further, expanding it to allow Twitter users to log in with their profile credentials so they can view their own Twitter geospatial footprints. They can test out the beta version of the app and provide feedback. Weidemann will continue to add functionality to the app over the fall semester, such as the ability to analyze the content of tweets.

"My intent is to educate social media users and inform the public about their privacy," Weidemann said. He is slated to complete his thesis and earn his master's degree in December 2013.

Weidemann has a profile on Twitter, but uses it infrequently and opts out of including location information in his tweets.

When developing Twitter2GIS, he tested his own Twitter account for location information. Despite his self-described conservative Twitter use, the application was able to deduce information about the location and date of a conference he was hoping to attend based on tweets he posted that only included the event's hashtag. He was surprised at the amount of information his app was able to extrapolate.