Aspect Patch Count


I am using ArcMap to create aspect maps. I want to find how many different directions the land faces (the total aspect patch count), counting only contiguous patches greater than three.

So, for example, I want to be able to say "this piece of land has 55 different slope faces while that piece of land has 60".

How do I do this?

One approach could be this:

  1. Convert your aspect dataset to a polygon feature class
  2. Union (1) with your land boundary dataset
  3. Select everything from (2) with an area greater than your desired threshold (I am assuming you meant 3 pixels)
  4. Run Summary Statistics on (3), grouping by the land polygon ID and counting the union polygon IDs.

You can use the Region Group tool to find contiguous areas and the Lookup tool to filter by the count of contiguous cells.
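Outside ArcMap, the same patch-counting idea (Region Group on a classified aspect raster, then filtering small regions) can be sketched in plain Python. The 8-class grid, the 4-connectivity choice, and the 3-cell threshold below are illustrative assumptions, not ArcMap defaults:

```python
from collections import deque

def count_patches(grid, min_size=3):
    """Count contiguous same-class patches larger than min_size cells.

    grid: 2-D list of aspect class codes (e.g. 1..8 for N, NE, ... NW).
    Uses 4-connectivity, like Region Group's FOUR option.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    patches = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            # flood-fill one patch of identical class values
            cls, size = grid[r][c], 0
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and grid[ny][nx] == cls):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if size > min_size:
                patches += 1
    return patches

# Hypothetical 4x4 classified aspect grid
aspect = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 3, 2],
    [3, 3, 3, 5],
]
print(count_patches(aspect))  # 3 patches have more than 3 cells
```

The single class-5 cell is dropped by the size filter, which is what the Lookup-based filtering step accomplishes on the Region Group output.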

Depending on the source of the original DEM, you may want to smooth it before calculating aspect as you'll likely get quite different answers depending on the level of noise in the original data (see this question).


Welcome to the website of Dr. Frank J. Rahel, Professor of Zoology and Physiology.

Contact information: Dr. Frank J. Rahel, Professor

Department of Zoology and Physiology

Box 3166, University of Wyoming

Phone: (307) 766-4212 Fax: (307) 766-5627

I teach ichthyology, fisheries management, ecology, and graduate seminars in biological invasions and climate change.

My graduate students and I study various aspects of fish ecology and fisheries management. One focus is on fish habitat use and movement patterns in regard to habitat patchiness in streams. We are interested in what constitutes a habitat patch, how these patches are rearranged by disturbances such as floods, and what factors influence fish movement among patches. Other research areas include how warmwater fish species in the Rocky Mountain region will respond to climate change, the interaction between climate change and invasive fish species, and the homogenization of aquatic biota across the world through habitat alteration and species introductions. Much of our research involves species of conservation concern, including native trout and nongame fishes such as native minnows in prairie streams.

The Laramie River is a prairie stream with occasional gallery forests of cottonwoods and willows. We are studying how fish use habitat patches such as woody debris and how spring floods influence the dynamics of these patches. The photo to the left shows the Laramie River during low flows in late summer. The photo to the right shows the same reach during spring runoff. High flows rearrange habitat patches annually and we are interested in how fish respond to this disturbance.

Eriek Hansen is a doctoral student who is examining how loss of winter ice cover due to climate warming might affect the behavior and physiology of fishes in streams of the western U.S.

Dan Gibson-Reinemer is a doctoral student who is examining how warmwater fishes will respond to the potential increase in warmwater habitats in the Rocky Mountain region due to climate warming.

We are investigating how intentional fragmentation of aquatic systems can sometimes be beneficial by preventing the spread of invasive species, preventing hybridization or keeping species from entering ecological traps such as irrigation canals. Above is a rock gabion dam that limits the upstream spread of some nonnative fishes in Muddy Creek, WY.

Potential graduate students:

If you are interested in doing graduate work in fish ecology or fisheries management, please send me an email indicating your areas of interest and career goals. It would be helpful to include a resume along with recent transcripts and GRE scores. General information about the graduate program of the Department of Zoology & Physiology is at I also supervise doctoral students in the Program in Ecology and information about this program can be found at

Selected Publications:

Rahel, F.J. 2013. Intentional fragmentation as a management strategy in aquatic systems. BioScience 63(5):362-372. [PDF]

Laske, S.M., F.J. Rahel and W.A. Hubert. 2012. Differential interactions of two introduced piscivorous salmonids with a native cyprinid in lentic systems: implications for conservation of roundtail chub. Transactions of the American Fisheries Society 141:495-506. [PDF]

Dauwalter, D.C. and F.J. Rahel. 2011. Patch size and shape influence the accuracy of mapping small habitat patches with a global positioning system. Environmental Monitoring and Assessment 179:123-135. [PDF]

Laske, S.M., F.J. Rahel, W.A. Hubert and P.A. Cavalli. 2011. Ecology of unique lentic populations of roundtail chub, Gila robusta. Western North American Naturalist 71(4):507-515. [PDF]

Rahel, F.J. 2010. Homogenization, differentiation, and the widespread alteration of fish faunas. Chapter in K. Gido and D. Jackson (eds). Community ecology of stream fishes: concepts, approaches and techniques. American Fisheries Society Symposium 73:311-326. [PDF]

Dauwalter, D.C., W.L. Fisher, and F.J. Rahel. 2010. Warmwater streams. Chapter 20 (pp 657-697) in W.A. Hubert and M.C. Quist (eds). Inland Fisheries Management in North America. 3rd Edition. American Fisheries Society, Bethesda, MD. [PDF]

Dauwalter, D.C., F.J. Rahel and K.G. Gerow. 2010. Power of revisit monitoring designs to detect forest-wide declines in trout populations. North American Journal of Fisheries Management 30:1462-1468. [PDF]

Cook, N., F.J. Rahel and W.A. Hubert. 2010. Persistence of Colorado River cutthroat trout populations in isolated headwater streams of Wyoming. Transactions of the American Fisheries Society, 139:1500-1510. [PDF]

Carlson, A. J. and F. J. Rahel. 2010. Annual intrabasin movement and mortality of adult Bonneville cutthroat trout among complementary riverine habitats. Transactions of the American Fisheries Society 139:1360-1371. [PDF]


Software quality is motivated by at least two main perspectives:

    • Risk management: Software failure has caused more than inconvenience. Software errors can cause human fatalities (see for example: List of software bugs). The causes have ranged from poorly designed user interfaces to direct programming errors, [18][19][20] see for example the Boeing 737 case, the unintended acceleration cases [21][22] or the Therac-25 cases. [23] This resulted in requirements for the development of some types of software, particularly and historically for software embedded in medical and other devices that regulate critical infrastructures: "[Engineers who write embedded software] see Java programs stalling for one third of a second to perform garbage collection and update the user interface, and they envision airplanes falling out of the sky." [24] In the United States, within the Federal Aviation Administration (FAA), the FAA Aircraft Certification Service provides software programs, policy, guidance and training, with a focus on software and complex electronic hardware that has an effect on the airborne product (a "product" is an aircraft, an engine, or a propeller). [25] Certification standards such as DO-178C, ISO 26262, IEC 62304, etc. provide guidance.
    • Cost management: As in any other field of engineering, a software product or service governed by good software quality costs less to maintain, is easier to understand and can change more cost-effectively in response to pressing business needs. [26] Industry data demonstrate that poor application structural quality in core business applications (such as enterprise resource planning (ERP), customer relationship management (CRM) or large transaction processing systems in financial services) results in cost and schedule overruns and creates waste in the form of rework (see Muda (Japanese term)). [27][28][29] Moreover, poor structural quality is strongly correlated with high-impact business disruptions due to corrupted data, application outages, security breaches, and performance problems. [30]
    • CISQ reports on the cost of poor quality estimates an impact of:
      • $2.08 trillion in 2020 [31][32]
      • $3.86 million

      ISO

      For ISO, software quality is the "capability of a software product to conform to requirements," [35] [36] while for others it can be synonymous with customer- or value-creation [37] [38] or even defect level. [39]

      ASQ

      ASQ uses the following definition: software quality describes the desirable attributes of software products. Two main approaches exist: defect management and quality attributes. [40]

      NIST

      Software Assurance (SA) covers both the property and the process to achieve it: [41]

      • [Justifiable] confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle and that the software functions in the intended manner
      • The planned and systematic set of activities that ensure that software life cycle processes and products conform to requirements, standards, and procedures

      PMI

      The Project Management Institute's PMBOK Guide "Software Extension" does not define "software quality" itself, but Software Quality Assurance (SQA), as "a continuous process that audits other software processes to ensure that those processes are being followed (includes for example a software quality management plan)", whereas Software Quality Control (SQC) means "taking care of applying methods, tools, techniques to ensure satisfaction of the work products towards quality requirements for a software under development or modification". [42]

      Other general and historic

      The first definition of quality that history remembers is from Shewhart at the beginning of the 20th century: "There are two common aspects of quality: one of them has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality." [43]

      Kitchenham and Pfleeger, further reporting the teachings of David Garvin, identify five different perspectives on quality: [44] [45]

      • The transcendental perspective deals with the metaphysical aspect of quality. In this view of quality, it is "something toward which we strive as an ideal, but may never implement completely". [46] It can hardly be defined, but is similar to what a federal judge once commented about obscenity: "I know it when I see it". [47]
      • The user perspective is concerned with the appropriateness of the product for a given context of use. Whereas the transcendental view is ethereal, the user view is more concrete, grounded in the product characteristics that meet user's needs. [46]
      • The manufacturing perspective represents quality as conformance to requirements. This aspect of quality is stressed by standards such as ISO 9001, which defines quality as "the degree to which a set of inherent characteristics fulfills requirements" (ISO/IEC 9001 [48] ).
      • The product perspective implies that quality can be appreciated by measuring the inherent characteristics of the product.
      • The final perspective of quality is value-based. [37] This perspective recognizes that the different perspectives of quality may have different importance, or value, to various stakeholders.

      The problems inherent in attempts to define the quality of a product, almost any product, were stated by the master Walter A. Shewhart. The difficulty in defining quality is to translate future needs of the user into measurable characteristics, so that a product can be designed and turned out to give satisfaction at a price that the user will pay. This is not easy, and as soon as one feels fairly successful in the endeavor, he finds that the needs of the consumer have changed, competitors have moved in, etc. [49]

      Quality is a customer determination, not an engineer's determination, not a marketing determination, nor a general management determination. It is based on the customer's actual experience with the product or service, measured against his or her requirements -- stated or unstated, conscious or merely sensed, technically operational or entirely subjective -- and always representing a moving target in a competitive market. [50]

      The word quality has multiple meanings. Two of these meanings dominate the use of the word: 1. Quality consists of those product features which meet the need of customers and thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies. Nevertheless, in a handbook such as this it is convenient to standardize on a short definition of the word quality as "fitness for use". [51]

      Tom DeMarco has proposed that "a product's quality is a function of how much it changes the world for the better." This can be interpreted as meaning that functional quality and user satisfaction are more important than structural quality in determining software quality.

      Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." [52] [53] This definition stresses that quality is inherently subjective—different people will experience the quality of the same software differently. One strength of this definition is the questions it invites software teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?".

      Other meanings and controversies

      One of the challenges in defining quality is that "everyone feels they understand it" [54] and other definitions of software quality could be based on extending the various descriptions of the concept of quality used in business.

      Software quality also often gets mixed up with Quality Assurance, Problem Resolution Management, [55] Quality Control [56] or DevOps. It overlaps with the aforementioned areas (see also the PMI definitions), but is distinctive in that it does not solely focus on testing but also on processes, management, improvements, assessments, etc. [56]

      Although the concepts presented in this section are applicable to both structural and functional software quality, measurement of the latter is essentially performed through testing [see main article: Software testing]. [57] However, testing isn't enough: according to one study, individual programmers are less than 50% efficient at finding bugs in their own software, and most forms of testing are only 35% efficient. This makes it difficult to determine software quality. [58]

      Introduction

      Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means, or a mix of both. In both cases, for each desirable characteristic there is a set of measurable attributes whose existence in a piece of software or system tends to be correlated and associated with that characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the software quality definitions above.
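As a toy illustration of one such measurable attribute, here is a sketch that counts "target-dependent statements" in Python source. Approximating them as lines referencing Windows-only standard-library modules is my assumption for the example, not a standard rule:

```python
# Hypothetical proxy for the portability attribute: flag source lines
# that reference Windows-only standard-library modules.
TARGET_DEPENDENT = ("winreg", "msvcrt", "ntpath")

def target_dependent_statements(lines):
    """Count source lines that mention a platform-specific module."""
    return sum(1 for ln in lines if any(mod in ln for mod in TARGET_DEPENDENT))

sample = [
    "import json",
    "import winreg",
    "handle = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, 'SOFTWARE')",
    "print(json.dumps({}))",
]
print(target_dependent_statements(sample))  # 2 of 4 lines are target-dependent
```

The higher the count, the more work a port to another platform implies, which is exactly the correlation between attribute and characteristic described above.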

      The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions.

      The dependence tree between software quality characteristics and their measurable attributes is represented in the diagram on the right, where each of the 5 characteristics that matter for the user (right) or owner of the business system depends on measurable attributes (left):

      • Application Architecture Practices
      • Coding Practices
      • Application Complexity
      • Documentation
      • Portability
      • Technical and Functional Volume

      Correlations between programming errors and production defects reveal that basic code errors account for 92 percent of the total errors in the source code. These numerous code-level issues eventually account for only 10 percent of the defects in production. Bad software engineering practices at the architecture level account for only 8 percent of total defects, but consume over half the effort spent on fixing problems, and lead to 90 percent of the serious reliability, security, and efficiency issues in production. [59] [60]

      Code-based analysis

      Many of the existing software measures count structural elements of the application that result from parsing the source code, such as individual instructions, [61] tokens, [62] control structures (complexity), and objects. [63]

      Software quality measurement is about quantifying to what extent a system or software rates along these dimensions. The analysis can be performed using a qualitative or quantitative approach or a mix of both to provide an aggregate view [using for example weighted average(s) that reflect relative importance between the factors being measured].
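A minimal sketch of that aggregate view, assuming made-up characteristic scores and weights (any real tool would derive both from measured attributes rather than hard-code them):

```python
def aggregate_score(scores, weights):
    """Weighted average of per-characteristic quality scores (0-100 scale)."""
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Hypothetical scores and relative importance for four characteristics
scores = {"reliability": 82, "security": 74, "efficiency": 90, "maintainability": 65}
weights = {"reliability": 0.4, "security": 0.3, "efficiency": 0.2, "maintainability": 0.1}

print(round(aggregate_score(scores, weights), 1))  # 79.5
```

Shifting the weights is how the "relative importance between the factors" mentioned above changes the aggregate without touching any individual measurement.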

      This view of software quality on a linear continuum has to be supplemented by the identification of discrete Critical Programming Errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and myriad other problems [64] that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known example of vulnerability is the Common Weakness Enumeration, [65] a repository of vulnerabilities in the source code that make applications exposed to security breaches.

      The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation, as displayed in the picture above. Thus, each characteristic is affected by attributes at numerous levels of abstraction in the application, and all of them must be included in calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. The layered approach to calculating characteristic measures displayed in the figure above was first proposed by Boehm and his colleagues at TRW (Boehm, 1978) [66] and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application.

      Structural quality analysis and measurement is performed through the analysis of the source code, the architecture, software framework, database schema in relationship to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed by development tools which are mostly concerned with implementation considerations and are crucial during debugging and testing activities.

      Reliability

      The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application's reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation.

      Assessing reliability requires checks of at least the following software engineering best practices and technical attributes:

      • Application Architecture Practices
      • Coding Practices
      • Complexity of algorithms
      • Complexity of programming practices
      • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
      • Component or pattern re-use ratio
      • Dirty programming
      • Error & Exception handling (for all layers - GUI, Logic & Data)
      • Multi-layer design compliance
      • Resource bounds management
      • Software avoids patterns that will lead to unexpected behaviors
      • Software manages data integrity and consistency
      • Transaction complexity level

      Depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by the above list of best practices to ensure a better assessment of the reliability of the delivered software.

      Efficiency

      As with Reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practice which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data.

      Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes:

      • Application Architecture Practices
      • Appropriate interactions with expensive and/or remote resources
      • Data access performance and data management
      • Memory, network and disk space management
      • Compliance with Coding Practices [67] (Best coding practices)

      Security

      Software quality includes software security. [68] Many security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting. [69] [70] These are well documented in lists maintained by CWE, [71] and the SEI/Computer Emergency Center (CERT) at Carnegie Mellon University. [67]

      Assessing security requires at least checking the following software engineering best practices and technical attributes:

      • Implementation, Management of a security-aware and hardening development process, e.g. Security Development Lifecycle (Microsoft) or IBM's Secure Engineering Framework. [72]
      • Secure Application Architecture Practices [73][74]
      • Multi-layer design compliance
      • Security best practices (Input Validation, SQL Injection, Cross-Site Scripting, Access control etc.) [75][76]
      • Secure and good Programming Practices [67]
      • Error & Exception handling

      Maintainability

      Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations with best practices in documentation, complexity avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code vs. unorganized and difficult-to-read code. [77]

      Assessing maintainability requires checking the following software engineering best practices and technical attributes:

      • Application Architecture Practices
      • Architecture, Programs and Code documentation embedded in source code
      • Code readability
      • Complexity level of transactions
      • Complexity of algorithms
      • Complexity of programming practices
      • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
      • Component or pattern re-use ratio
      • Controlled level of dynamic coding
      • Coupling ratio
      • Dirty programming
      • Documentation
      • Hardware, OS, middleware, software components and database independence
      • Multi-layer design compliance
      • Portability
      • Programming Practices (code level)
      • Reduced duplicate code and functions
      • Source code file organization cleanliness
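One item in the list above, "Reduced duplicate code and functions", can be approximated with a simple duplicate-line ratio. This is a rough sketch only; commercial analyzers typically match longer token sequences rather than single lines:

```python
from collections import Counter

def duplicate_line_ratio(lines):
    """Fraction of non-blank lines whose exact text appears more than once."""
    stripped = [ln.strip() for ln in lines if ln.strip()]
    counts = Counter(stripped)
    duplicated = sum(n for n in counts.values() if n > 1)
    return duplicated / len(stripped)

# Hypothetical file in which a summation loop was copy-pasted
source = [
    "total = 0",
    "for x in items:",
    "    total += x",
    "",
    "total = 0",
    "for x in items:",
    "    total += x",
]
print(duplicate_line_ratio(source))  # every non-blank line repeats -> 1.0
```

A rising ratio over time is one concrete signal of the "thousands of minor violations" that degrade maintainability.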

      Maintainability is closely related to Ward Cunningham's concept of technical debt, which is an expression of the costs resulting from a lack of maintainability. Reasons why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent, [78] and often have their origin in developers' inability, lack of time and goals, their carelessness, and discrepancies between the creation cost of and benefits from documentation and, in particular, maintainable source code. [79]

      Size

      Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files etc. There are essentially two types of software sizes to be measured, the technical size (footprint) and the functional size:

      • There are several software technical sizing methods that have been widely described. The most common technical sizing method is number of Lines of Code (#LOC) per technology, number of files, functions, classes, tables, etc., from which backfiring Function Points can be computed
      • The most common method for measuring functional size is function point analysis, which measures the size of the software deliverable from a user's perspective. Function point sizing is done based on user requirements and provides an accurate representation of both size for the developer/estimator and value (functionality to be delivered), and reflects the business functionality being delivered to the customer. The method includes the identification and weighting of user-recognizable inputs, outputs and data stores. The size value is then available for use in conjunction with numerous measures to quantify and evaluate software delivery and performance (development cost per function point, delivered defects per function point, function points per staff month).
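The technical-size (#LOC) measure mentioned above can be sketched as counting non-blank, non-comment lines. The '#'-comment convention is an assumption for this example; real counters are per-language and considerably more careful:

```python
def count_loc(source_lines):
    """Non-blank, non-comment physical lines - a crude #LOC measure."""
    return sum(
        1
        for ln in source_lines
        if ln.strip() and not ln.strip().startswith("#")
    )

# Hypothetical module to size
module = [
    "# load a config file",
    "import json",
    "",
    "def load(path):",
    "    with open(path) as fh:",
    "        return json.load(fh)",
]
print(count_loc(module))  # 4 countable lines
```

The comment and blank line are excluded, which is why #LOC figures from different tools rarely agree unless the counting rules are stated.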

      The function point analysis sizing standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life-cycle and it is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology agnostic and can be used for comparative analysis across organizations and across industries.

      Since the inception of Function Point Analysis, several variations have evolved and the family of functional sizing techniques has broadened to include such sizing measures as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story Points. However, Function Points has a history of statistical accuracy, and has been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the "currency" by which services are delivered and performance is measured.

      One common limitation of the Function Point methodology is that it is a manual process, and therefore it can be labor-intensive and costly in large-scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality, focused on introducing a computable metrics standard for automating the measurement of software size, while the IFPUG keeps promoting a manual approach, as most of its activity relies on FP counter certifications.

      CISQ defines sizing as estimating the size of software to support cost estimating, progress tracking or other related software project management activities. Two standards are used: Automated Function Points to measure the functional size of software, and Automated Enhancement Points to measure the size of both functional and non-functional code in one measure. [80]

      Identifying critical programming errors

      Critical Programming Errors are specific architectural and/or coding bad practices that result in the highest, immediate or long term, business disruption risk. [81]

      These are quite often technology-related and depend heavily on the context, business objectives and risks. Some may consider respect for naming conventions a minor concern, while others – those preparing the ground for a knowledge transfer, for example – will consider it absolutely critical.

      Critical Programming Errors can also be classified per CISQ Characteristics. Basic example below:

      • Reliability
        • Avoid software patterns that will lead to unexpected behavior (Uninitialized variable, null pointers, etc.)
        • Methods, procedures and functions doing Insert, Update, Delete, Create Table or Select must include error management
        • Multi-thread functions should be made thread safe, for instance servlets or struts action classes must not have instance/non-final static fields
        • Ensure centralization of client requests (incoming and data) to reduce network traffic
        • Avoid SQL queries that don't use an index against large tables in a loop
        • Avoid fields in servlet classes that are not final static
        • Avoid data access without including error management
        • Check control return codes and implement error handling mechanisms
        • Ensure input validation to avoid cross-site scripting flaws or SQL injections flaws
        • Deep inheritance trees and nesting should be avoided to improve comprehensibility
        • Modules should be loosely coupled (fanout, intermediaries) to avoid propagation of modifications
        • Enforce homogeneous naming conventions
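The input-validation item above ("Ensure input validation to avoid cross-site scripting flaws or SQL injection flaws") can be illustrated with Python's standard sqlite3 module; the table and inputs are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

user_input = "alice' OR '1'='1"  # a classic injection payload

# Critical error: user input concatenated straight into the SQL text
injected = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# Fix: a parameterized query treats the input as a plain value
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

# The injected query matches every row; the parameterized one matches none
print(len(injected), len(parameterized))
```

The vulnerable form turns the WHERE clause into a tautology and leaks both rows, which is exactly the kind of "catastrophic under specific circumstances" behavior that aggregate scores cannot capture.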

        Operationalized quality models

        Newer proposals for quality models such as Squale and Quamoco [82] propagate a direct integration of the definition of quality attributes and measurement. By breaking down quality attributes or even defining additional layers, the complex, abstract quality attributes (such as reliability or maintainability) become more manageable and measurable. Those quality models have been applied in industrial contexts but have not received widespread adoption.

        How to verify if Windows Update location is properly configured from WSUS?

        I was recently configuring a WSUS server. However, I am not sure if I've set it up correctly (WSUS isn't an everyday tool at the place where I work).

        I've been wondering: is there a command (or a tool) in Windows 7 that can be executed on a client machine and will verify the location from which the client is pulling its updates?

        It seems that the WSUS is working. The GPO is showing the clients as Kyle Brandt suggested.

        However, I just want to verify one more thing. Here's more info on the GPO I've created in order to distribute the updates.

        Correct me if I'm wrong, but I am not sure about the "Target group name for this computer" option. Does it want the names of the AD groups or the names of the WSUS groups as a value? I wasn't sure, so I included both of them - from the image you can see that I have put in the relevant AD groups and the WSUS Unassigned Computers group.


        The misconception you're having is that security through obscurity is bad. It's actually not: security only through obscurity is terrible.

        Put it this way: you want your system to be secure even if someone knows the full workings of it, apart from the key secret component that you control. Cryptography is a perfect example of this. If you are relying on them "not seeing your algorithm" by using something like a ROT13 cipher, it's terrible. On the flip side, if they can see exactly the algorithm used yet still cannot practically do anything, we have the ideal security situation.
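ROT13 makes the point concrete: the entire "secret" is the algorithm itself, so anyone who knows it reverses the ciphertext instantly, no key required. Python ships it as a text transform in the standard codecs module:

```python
import codecs

ciphertext = codecs.encode("attack at dawn", "rot13")
print(ciphertext)                           # nggnpx ng qnja
print(codecs.decode(ciphertext, "rot13"))   # attack at dawn - no key needed
```

Contrast this with a modern cipher, where publishing the algorithm costs the defender nothing as long as the key stays secret.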

The thing to realize is that you never want to count on obscurity, but it certainly never hurts. Should I password-protect / use keys for my SSH connection? Absolutely. Should I rely on changing the server from port 22 to port 2222 to keep my connection safe? Absolutely not. Is it bad to move my SSH server to port 2222 while also using a password? No; if anything, this is the best solution. Changing ("obscuring") the port will simply cut out a heap of automated exploit scanners searching the standard ports. We gain a security advantage through obscurity, which is good, but we are not counting on the obscurity: if attackers find the port, they still need to crack the password.

TL;DR: Only counting on obscurity is bad. You want your system to be secure with the attacker knowing its complete workings, apart from specifically controllable secret information (i.e., passwords). Obscurity in itself, however, isn't bad, and can actually be a good thing.

Edit: To answer your probability question more precisely: yes, in a way you could look at it like that, but appreciate the differences. Ports range from 1 to 65535 and can all be checked within a minute with a scanner like nmap. "Guessing" a random, say, 10-character password drawn from all ASCII characters has odds of about 1 in 1.8446744e+19 and would take 5.8 million years guessing 100,000 passwords a second.
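The arithmetic behind that comparison is easy to reproduce. A back-of-the-envelope sketch (assuming 95 printable ASCII characters; the exact figure depends on the character set you assume, which is why it differs somewhat from the number quoted above):

```python
# Compare the two search spaces: the full TCP port range vs. a random
# 10-character password drawn from 95 printable ASCII characters.
PORTS = 65535
ALPHABET = 95
PASSWORD_LEN = 10
GUESSES_PER_SECOND = 100_000

password_space = ALPHABET ** PASSWORD_LEN
years_to_exhaust = password_space / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"port search space:     {PORTS:,}")
print(f"password search space: {password_space:.2e}")
print(f"time to exhaust:       {years_to_exhaust:.2e} years")
```

A full port sweep is a minute of work; exhausting the password space at the quoted rate takes millions of years.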

Edit 2: To address the comment below: keys can be generated with sufficient entropy to be considered truly random (if not, it's a flaw of the implementation rather than of the philosophy). You're correct in saying that everything relies on attackers not knowing information (passwords), and the dictionary definition of obscurity is "the state of being unknown", so you can correctly say that everything counts on a level of obscurity.

Once more, though, the worth comes down to the practical security given that the information you control remains unknown. Keys, be they passwords or certificates etc., are (relatively) simple to keep secret. Algorithms and other easy-to-check methods are hard to keep secret. "Is it worthwhile" comes down to determining what it is possible to keep unknown, and judging the possibility of compromise based on that unknown information.

        Secrets are hard to keep secret. The larger a secret is and the more people that know it, the sooner it is likely to leak.

When we accuse someone of security through obscurity, what we are really saying is that we think their secret could be smaller, known by fewer people, and/or easier to change.

        Algorithms, port numbers and shared passwords all fail the second and third points above. Algorithms also fail the first point.

        The distinction between when something is an appropriate secret and just obscure is whether we know of a way of achieving the same level of security with a smaller secret that is easier to change and is known by fewer people.

        I disagree with the assertion that additional obscurity never hurts.

In the case of SSH port numbers, there is a small amount of extra time required to type -p 1234 every time you use SSH. This is only a second or two, but with the number of times I use SSH, it adds up. There's the overhead of remembering that this client is slightly different, and of teaching new hires the same. There's the case where you forget that this client is on a strange port and waste minutes looking at firewall configs and uptime monitors trying to figure out why you can't connect.
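For what it's worth, the per-connection typing cost is avoidable: a Host alias in ~/.ssh/config records the nonstandard port once. A minimal sketch (hostname, port, and user are placeholders):

```
# ~/.ssh/config
Host myserver
    HostName myserver.example.com
    Port 2222
    User deploy
```

After this, a plain `ssh myserver` connects on port 2222. It removes the typing overhead, though not the forget-the-port and teach-the-new-hire costs discussed above.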

Since port numbers are so easy to discover with a port scan, you will also have to implement an IPS that detects port scans and prevents the correct port from responding when it is checked, or implement something like port knocking. Both of these methods can be overcome and don't add anything but more obscurity, and they take up your time playing cat-and-mouse with your attacker.

The same amount of time spent disabling root logins and password authentication and switching to keys will have a better impact on security. Wasting time on obscuring details takes away from real security measures.

        In the case of a secret algorithm, the algorithm misses out on the additional scrutiny that many security researchers can provide. The obscurity (or secrecy) of the algorithm is likely causing it to be less secure.


        Security by obscurity is where you rely upon some fact which you hope is not known to an attacker. A major problem with this is that once the fact is disclosed, the security scheme is rendered useless.

        However, all SSH is doing is obscuring information. It relies on the hope that an attacker won't think to guess the correct cryptographic key.

When the phrase "security by obscurity" is discussed, it often refers to the processes involved rather than to secret information. The thing about SSH is that, as a process, it has been heavily vetted to ensure that the only thing you need to keep secret is the cryptographic key. It is not possible, even in principle, for the attacker to "think and guess" the key, because the space in which cryptographic keys live is vast.

Bruce Schneier showed that, in order to brute-force a 256-bit AES key, you would need, at a minimum, to capture the sun's entire energy output for 32 years(!). It doesn't matter how fast your computer is; that's an information-theoretic result which holds regardless of the computer you use (quantum computing notwithstanding).

        This is totally impractical with current technology. That's not to say that SSH uses AES, but it is one of the principles of good cryptography.

An example might be a bug discovered in a piece of software, where a (trusted) user finds that a specific input allows an authentication bypass. A poor manager might say, "Ah, but it's really unlikely that any untrusted user will ever discover that; why bother fixing it?" This is security by obscurity.

It's been touched on in several other answers, but there are three pieces to this puzzle: the mechanism, the implementation/configuration, and the secret data.

An example of a mechanism would be AES, or SHA-1, or, for your example, SSH.
An example of an implementation/configuration would be which port SSH is listening on, or which encryption algorithm you've chosen to encrypt your application's data. An example of secret data is a private key, or a password.

        A mechanism should never be obscure. "It's safe because you don't know how it works" is not security. It should be able to be examined in minute detail without implementations being exploitable in the absence of secret data.

An implementation may or may not be obscured. Generally, doing so neither hurts nor helps security materially. You may see fewer port scans identifying your SSH port, or you may be able to hide the encryption algorithm used for a particular ciphertext, but for a secure mechanism, without the secret data it should not matter: the mechanism should still be unexploitable. There's an argument that there's a marginal security benefit here, and a marginal harm to usability. Your mileage may vary.

        Secret data should always be obscure. If someone gets a hold of your private keys or passwords, you revoke them, create new secret data, and vow to protect it better next time.

Security through obscurity covers everything related to not fixing a particular weakness at the code or design level, and instead finding workarounds to cover the holes. Once that covering layer is removed, the vulnerability is out in the open to be exploited.

One such example is program hooks that give developers a covert means of connecting to applications in a production environment. The assumption that nobody will find them is a security myth: it is quickly shattered by anyone with enough knowledge to reverse engineer the application, and sometimes just by sniffing the network.

Usually such threats escape into the wild when they are missed during the SDLC phase of system/application design; by the time the system reaches production, it is too costly to repair them properly, and that is where workarounds and cover-ups start to emerge.

Another example: people writing their passwords on pieces of paper and putting them under their keyboards.

As a market factor, you should also know that such practices are normally found with closed-source vendors. For an open-source project the concept serves no practical purpose, since the code is released to the general public for review and just about anyone can address concerns through techniques such as code review, which is the best and most reliable way of catching such issues.

Practical examples of defeating SSH security through obscurity:

1. Run a Nessus scan on the targeted network to list vulnerable services and their mapped ports.
2. Run an nmap scan on the targeted network to find open services.
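The first point is easy to make concrete: discovering a hidden port needs nothing more than a TCP connect sweep. A minimal sketch in Python (host and port list are placeholders; nmap automates this, plus service fingerprinting, at scale):

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    # A plain TCP connect() probe: an open port accepts the connection,
    # a closed one refuses it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def find_open_ports(host: str, ports) -> list:
    # Sweeping a range is exactly what a scanner does. Moving SSH from
    # port 22 to 2222 changes where in this loop it turns up, not whether.
    return [p for p in ports if port_open(host, p)]
```
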

"Security through obscurity is no security" is perhaps more accurately stated as "a security system is only as secure as its secrets are hard to guess." Really, when you get down to it, encryption could be argued to be security through obscurity, since the encryption key is obscure. The difference is that it is so obscure that it is mathematically infeasible to find, and therefore secure.

        In any secret based security system, you want the secret to be as limited as possible and as hard to guess as possible. The more complex a secret, the more likely there is to be a flaw in it. Also, limiting the amount that must be kept secret makes it easier to keep it secret.

The statement "security through obscurity isn't security" stems from the idea that many "clever" schemes are simply convoluted ways of making something harder for an attacker to figure out. Often one detail of such an approach affects other details of other steps, so it is impossible to tell how hard it will be for an attacker with partial knowledge of a secret algorithm to determine the rest of it.

Keys, on the other hand, should be random: knowing a few bits of a cryptographic key shouldn't help you figure out the other bits, and the difficulty of figuring out the whole key is fairly well understood. Since the relative security of an algorithm is not impacted significantly (or reliably quantifiably) by keeping the algorithm secret, that secrecy doesn't add statistically significant security.

        What does make a statistically significant impact in the security of an algorithm is any problems with the algorithm. In general, published algorithms have been much more thoroughly examined for any flaws that break them and thus will generally provide a higher confidence in the security they provide.

        So in closing, most security does involve some level of obscurity, but the trick is to minimize the quantity and maximize the ease of protecting those secrets while also trying to ensure that there are not undetected flaws that will cause the system to misbehave and reveal the secrets.

In every encryption algorithm and at every login prompt, 'security by obscurity' is a major component: each relies on some kind of secret knowledge (with the exception of two-factor authentication).

        The difference between good security and bad security is connected to the properties of the secret knowledge: Does it stay secret?

A bad example is a system where you can derive information about the secret from other channels. Say you invented your own encryption algorithm, for example "zip, then XOR with your key". An attacker probing your system might determine the compression algorithm from the time your encryption scheme takes to encode different plain-text messages. The attacker has gained knowledge about your system, knows the internals of the zip algorithm, and might be able to use this to determine your key. From the outside this looks like a perfectly good algorithm; the compressed and XOR'ed data will look pretty random, yet it poses only a small challenge to a sophisticated attacker. Your key might be very long, but that does not help you distinguish between bad and good obscurity: you accidentally embedded into the algorithm a path to gaining knowledge about your secret key.
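Even ignoring the timing channel, the XOR layer itself collapses under known plaintext, since ciphertext XOR plaintext yields the key directly. A toy demonstration (key and message are made up):

```python
def xor_cycle(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: applying it twice with the same key is a no-op,
    # so "encrypt" and "decrypt" are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"k3y!"
ciphertext = xor_cycle(b"attack at dawn: attack at dawn", key)

# An attacker who knows (or guesses) any key-length stretch of plaintext
# recovers the key outright: key = plaintext XOR ciphertext.
guessed_plaintext = b"atta"
recovered_key = xor_cycle(ciphertext[: len(guessed_plaintext)], guessed_plaintext)
print(recovered_key)  # → b'k3y!'
```
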

The counterexample is RSA public-key encryption. Here the secret is a pair of large prime numbers, and the public key includes their product. Even with the RSA algorithm well known, I can give you my public key; you can encode whatever data you want with it, but it does not leak any information about the secret key.

What matters in distinguishing good from bad security is the amount of time someone needs to access your data. In your specific example, going from port 22 to 2222 is another bit of information the attacker needs, so it is a security plus. As it is easy to figure out within a short time, it adds only a very small amount, and it leaks nothing about your key. Since the port scan is trivial and a one-time cost, the total amount of information needed to learn your secret key stays essentially constant; that is why it is not considered to improve total security, hence the common saying that 'security by obscurity' does not help.

        3 RESULTS

Potential habitat areas surveyed in BC were more variable in slope than survey sites in WA, with a range of aspects (16% with slopes ≤5%), whereas survey sites in WA were on the Columbia Plateau and generally flat (85% with slopes ≤5%; Figure 1). All point count locations were dominated by Artemisia shrubs and had low coverage of native and weedy forbs compared to annual and perennial grass cover (Figure 2). On average, point count locations had a similarly low range of annual grass cover across the region (WA median = 8.8%, interquartile range [IQR] 1.3–19.1%; BC median = 1.2%, IQR 0.4–12.1%), with higher perennial grass coverage in WA (median = 31.0%, IQR 18.4–45.8%) relative to point count locations in BC (median = 7.4%, IQR 2.0–21.2%; Table 2). When annual and perennial grasses were grouped as "Grass Cover," coverage in WA remained on average higher than that in BC, but with similarly broad ranges in coverage across the region.

                       BC                      WA
Annual grass cover     1.2% (IQR 0.4, 12.1)    8.8% (IQR 1.3, 19.1)
Perennial grass cover  7.4% (IQR 2.0, 21.2)    31.0% (IQR 18.4, 45.8)

        3.1 All variable model

        We fit several models to describe suitable habitat for Sage Thrashers that included both local-scale (right side of Figure 3) and landscape-scale variables (left panels of Figure 3) across the study region. The best model based on deviance information criterion (DIC) evaluation and cross-validation analysis of the logarithmic score indicated that local-scale variables characterizing both vegetation and ground cover were important in determining Sage Thrasher occurrence (Table 3). Models that included local-scale variables typically improved measures of model fit compared to models with only landscape-scale variables (Table 5), highlighting the importance of local vegetation (DIC reduced by >50). Sage Thrashers selected habitat in low-slope regions with lower accumulations of leaf litter and grass cover (Table 4), unrelated to percent cover of either forbs or shrubs. Yet landscape-scale measures of open and very sparse habitat classes showed a negative association, indicating a marginal (nonsignificant) preference for more dense vegetation classes (Table 4).

Local-scale and landscape-scale models                             p.eff a   DIC     ΔDIC   LS b
%Grasses + %litter + veg. cover class c + D2Edge d + %slope        20.0      913.1    0.0   0.703
%Grasses + %litter + veg. cover class c + aspect e                 22.4      914.7   −1.6   0.710
%Grasses + %litter + veg. cover class c + D2Edge d + aspect e      20.4      915.9   −2.8   0.708
%Grasses + %litter + D2Edge d + aspect e                           17.6      916.8   −3.7   0.709
• Note: We report all models within 4 DIC units of the best model. Distance to habitat edge, D2Edge.
• a Effective number of parameters (p.eff).
• b Mean logarithmic score (LS).
• c Vegetation cover class is defined as "open" versus "dense."
• d Distance to habitat edge (D2Edge).
• e Aspect is categorized as "flat" (≤5%) or "not flat" (>5%).
Regression parameter          Mean (95% credible interval) a
% Grasses *                   −0.022 (−0.037, −0.008)
% Litter *                    −0.050 (−0.066, −0.035)
Veg. cover class *            −0.623 (−1.191, −0.048)
Distance to habitat edge      −0.226 (−0.530, 0.077)
% Slope *                     −0.645 (−1.151, −0.147)
Zero-inflation parameter * b  0.687 (0.6698, 0.7107)
• a Posterior estimates of parameter means (and 95% credible intervals).
• b The zero-inflation parameter is from a binomial Type 1 likelihood (Blangiardo & Cameletti, 2015).
• * Significant covariates at α = .05.

Accounting for the variability in the landscape across the sampled regions, we found Sage Thrashers were selecting similar habitat throughout the region we sampled (i.e., no significant difference between BC and WA). This implies that, with the existing variables, the sampling frames used in Canada and the United States characterize Sage Thrasher habitat consistently, or at least that habitat selection is not confounded with some unmeasured covariate linked to country.

        3.2 Landscape-scale variable model

        We fit a secondary set of models restricted to include only landscape-scale variables for which cell-based raster values were available for the broader region. As with the all-variable model, we selected the best landscape-scale variable model by DIC evaluation and used the cross-validation analysis of the logarithmic score as a second metric of model performance (Table 5). The best model included slope, distance to natural or anthropogenic habitat “edges,” vegetation cover class, and a dichotomous variable discriminating between flat (<5%) and sloped aspects (Table 6). The first three of the four covariates in the model were also identified as important in the all-variable model above. Similarly, this model predicted higher occupancy in regions of low slope, with natural and anthropogenic edges at farther distances, and less open-vegetation classes. Areas with higher likelihood of finding Sage Thrasher habitat corresponded well with the three areas already identified as critical habitat in BC and identified potential new areas currently not under consideration (Figure 4).

Landscape-scale models                                 p.eff a   DIC     ΔDIC   LS b
Veg. cover class c + D2Edge d + %Slope + aspect e      17.4      964.5    0     0.739
Veg. cover class c + aspect e                          11.4      971.7   −7.2   0.749
• Note: We report only the best model and the runner-up based on DIC value (Spiegelhalter, Best, Carlin, & Van Der Linde, 2002).
• a Effective number of parameters (p.eff).
• b Mean logarithmic score (LS).
• c Vegetation cover class is defined as "open" versus "dense."
• d Distance to habitat edge (D2Edge).
• e Aspect is categorized as "flat" (≤5%) or "not flat" (>5%).
Regression parameter           Mean (95% credible interval) a
Veg. cover class (open/dense)  −0.541 (−1.075, 0.018)
D2Edge (m) *                   −0.334 (−0.599, −0.074)
% Slope *                      −0.474 (−0.956, −0.008)
Aspect (flat ≤5% or >5%)       −0.295 (−0.974, 0.388)
Zero-inflation parameter * b   0.714 (0.702, 0.724)
• a Posterior estimates of parameter means (and 95% credible intervals).
• b The zero-inflation parameter is from a binomial Type 1 likelihood (Blangiardo & Cameletti, 2015).
• * Significant covariates at α = 0.05.


        What is land-use change?

        Looking at the surface of Earth from an airplane, one can see a mosaic of forests, grasslands, lakes, cities, farmlands, and more (Figure 1). Each of these land cover types exhibits different ecosystem functions and properties, and collectively all the types form what ecologists refer to as a landscape.

Currently, landscapes are changing dramatically via the expansion of both urban and agricultural land cover types. These changes, which are referred to as land-use change, have inevitable environmental impacts. For instance, forests and wetlands, which used to be dominant land cover types in Florida, continue to be converted into agricultural and urban lands (Kautz et al. 2007). With the conversion of forests and wetlands to other land cover types comes the loss of the many ecosystem services and functions that these ecosystems provide. A description of these ecosystem services can be found in various chapters of the United Nations' Millennium Ecosystem Assessment (MEA 2005a; MEA 2005b).

Land-use change not only causes losses in the total area of given natural and seminatural land cover types, but also changes in the spatial arrangement of these land cover types, the loss of important core habitat that they provide, and an increase in edge habitat (Shao et al. 2018; Murcia 1995). The distinction between core and edge habitat is important because each exhibits different characteristics. Habitat edges experience greater effects from nearby land cover types, and thus greater levels of stressors in urban areas, including increased pollution and invasive plants. In contrast, core habitats, i.e., the areas at the center of habitat areas, exhibit characteristics typical of less-disturbed ecosystems. They are often critical for conservation efforts and for maintaining the ecosystem services that these habitats provide. Some species do quite well in edge habitats because they are able to gather resources from multiple types of ecosystems. For these reasons, we need to determine not only how land-use change affects the total area of different land cover types, but also their spatial arrangement and the amounts of core vs. edge habitat they provide. Being able to quantify these changes is critically important for future land-use planning, to ensure that our choices regarding how we alter landscapes are more environmentally responsible.

        So how can we do this? We could simply compare satellite images taken at different times, but trying to perceive changes in land cover types simply by looking, let alone quantifying those complicated patterns precisely, would be difficult if not impossible. Satellite images of Alachua County, Florida, from 1984 and 2016 demonstrate this point (Figure 2). While it is relatively easy to detect some changes in land cover types, measuring amounts and differences in spatial patterns visually or by hand would be challenging.

One way that land-use change can be quantified is by dividing each map into cells and then assigning each cell a land cover type (e.g., urban, forest, grassland, wetland, agricultural land). The size of each cell (i.e., the "resolution") varies depending on the data source. For data on larger spatial scales, cell resolution will usually be 1 km², 300 m², or 30 m². For more information about land cover types and standard classification procedures, please consult your data provider's documentation. This post-processed land cover map is referred to as a raster map. To give a better overview of what a raster map is, we provide two maps of the urban and upland forest land cover of Alachua County, Florida, in 2001 and in 2011 (Figure 3). Urban cells are colored red, and upland forest cells are colored green. White cells are various other land cover types. Even this simplified depiction of changes in upland forest and urban cover, while easier to visualize than the initial land cover images (e.g., Figure 2), would be challenging to quantify by hand or visually.

To assist in quantification, FRAGSTATS was created: free software that can quantify spatial patterns of land cover from rasterized maps. Running FRAGSTATS on two maps, taken at two locations or at two different time periods (e.g., Figure 3), allows for quantification of differences and changes in land cover over time. Such information is critical for making decisions about land development and for the conservation of habitat and habitat core areas provided by different land cover types.

        Introduction to FRAGSTATS: An Example of Land-Use Change in Alachua County, Florida.

The FRAGSTATS team periodically updates the software and provides technical support. For this case study, you will need to download FRAGSTATS from its official download page. The version described in this fact sheet is 4.2. In addition, we highly recommend you download the help document for FRAGSTATS, which provides much more detail about FRAGSTATS than is practical in a short fact sheet.

        Below we present a case study to better demonstrate how FRAGSTATS works. In this case, we focus on the land cover change in Alachua County, Florida, from 2001 to 2011, a period over which urbanization has changed the total area and spatial arrangement of natural areas. We illustrate the utility of FRAGSTATS by using it to quantify spatial changes in urban and upland forest land cover types between 2001 and 2011.

        Before we jump into our introduction of FRAGSTATS, one step cannot be missed. FRAGSTATS does not support the raster data visualized in Figure 3. It instead works off integer grids in which each grid cell is directly assigned an integer value corresponding to the class of land cover type dominating that cell. Therefore, we need to convert the maps shown in Figure 3 to integer grids. Typically, FRAGSTATS prefers integer grid files in the GeoTIFF and/or ASCII format. This conversion can be done using ArcGIS or other GIS software. Figure 4 shows an example of a GeoTIFF file in which the gray cells represent urban areas, white cells represent upland forests, and black cells represent other land cover types merged into one class of land cover type.

We provide the two GeoTIFF files required for this exercise via the Institutional Repository at the University of Florida. To work through this example, download the entire folder to your desktop. The first GeoTIFF file, named "UrbanForestsCover2001.tif," shows the land cover for Alachua County, Florida, in 2001. The other GeoTIFF file, entitled "UrbanForestsCover2011.tif," shows the same, but for 2011. This folder also contains a text file named "ClassDescriptor" needed to run our example, as explained below, and a Word document entitled "How a tiff file is created" describing how to convert raster files to GeoTIFF files using GIS software. Again, for the purpose of this document, we have completed this step for you.

        Working with Example Files

        Open FRAGSTATS, then click on "File" in the upper left corner and select "New" from the dropdown menu. Afterward, you will see the console shown in Figure 5. Here we provide a brief introduction about different sections in this console.

        Section 1 in Figure 5 is the menu bar, where we can create, open, save, or run analyses on an integer grid file map (e.g., Figure 4).

        Section 2 of Figure 5 has two tabs in the upper left. The tab entitled Input Layer is where we input or remove data, i.e., GeoTIFF/ASCII files generated from GIS raster data. The tab entitled Analysis Parameters is where we specify extra parameters or sampling methods (Figure 6).

Section 3 of Figure 5 is where we specify what FRAGSTATS refers to as the common tables; important parameters that are necessary for running FRAGSTATS are specified here. More information about the common table area is provided below.

Section 4 is for selecting the scales at which metrics will be calculated: Patch, Class, and/or Landscape, as depicted in Figure 7. A patch is an area of a homogeneous land cover type that differs from its surroundings (Forman 1995). A class is a group of patches that share the same land cover type. Landscape usually represents the area of interest; for our example, Alachua County, Florida. The scale at which to estimate metrics depends on the research objectives.
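The patch definition above is easy to make concrete in code. The sketch below counts contiguous same-class regions in a small integer grid using 4-neighbour flood fill; it is an illustration of the idea, not FRAGSTATS's own implementation (which also supports an 8-neighbour adjacency rule):

```python
from collections import deque

def patch_counts(grid):
    # Count contiguous same-class regions (patches) per class,
    # using 4-neighbour adjacency.
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    counts = {}
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            cls = grid[r][c]
            counts[cls] = counts.get(cls, 0) + 1
            # Flood-fill every cell of this patch so it is counted once.
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and grid[ny][nx] == cls):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
    return counts

# 1 = urban, 2 = upland forest, 0 = background (as in the class descriptor)
demo = [[1, 1, 0],
        [0, 2, 2],
        [1, 0, 2]]
print(patch_counts(demo))  # → {1: 2, 0: 3, 2: 1}
```
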

After selecting the scale at which metrics will be calculated, we need to choose the specific metrics we want to calculate. This selection is done in Section 5 of Figure 5. Note that the specific choices of metrics vary depending on the selected scale (i.e., patch, class, or landscape). FRAGSTATS divides metrics into five groups: area and edge, shape, core area, contrast, and aggregation. To select these different metric groups, click on the tabs shown at the top of Section 5 in Figure 5. The spatial aspects quantified by each of these metric groups are defined in Table 1.

        Finally, Section 6 of Figure 5 is where the program shows its activity log, including errors or processing information. The purpose of these sections will become clearer as you work through this example.

The user guide provides detailed explanations of the background of FRAGSTATS, how FRAGSTATS works, and the different metrics and parameters. In addition, you can click "Help" on the menu bar, i.e., Section 1 of Figure 5, to learn more.

        Now that we have explained the FRAGSTATS console as well as metric scales and types, we will show how FRAGSTATS works by quantifying changes between 2001 and 2011 in spatial aspects of urban and upland forest cover in Alachua County, Florida.

First, input the GeoTIFF files of Alachua County land cover in 2001 and 2011. Click the button "Add layer" shown in Section 2 of Figure 5, and a dialog box will pop up (Figure 8). Next, select the data type you will use by clicking on the corresponding line in the left pane; in this example, it is GDAL GeoTIFF grid (.tif). Then click on "..." to the right of the upper right-hand box, navigate to the folder on your desktop, select the file entitled "UrbanForestCover2001.tif," and click OK. Once the first file has been uploaded, repeat these same steps to upload the file entitled "UrbanForestCover2011.tif."

        While uploading "UrbanForestCover2011.tif," look at the right pane outlined in red in Figure 8. Row count and column count show the number of cells in the rows and columns, respectively, of the uploaded GeoTIFF file of Alachua County, Florida. Cell size defines the size of each cell in the GeoTIFF file; in our example, each cell is 30 x 30 m. Some images may have multiple bands (i.e., layers); you may specify which band to import in the band box. There is only one band in our example. Some GeoTIFF files may contain cells having no data or that are not of interest for the objectives of an analysis (e.g., white cells in Figure 3 and black cells in Figure 4). The value for these cells is assigned in the "no data" box. Background value is the value assigned to background cells, which are cells beyond the area we are analyzing, i.e., Alachua County, Florida. The program will not calculate values using information from this area of the GeoTIFF image.

        After uploading the GeoTIFF files, click on the "Save as" icon shown in Section 1 of Figure 5. Projects in FRAGSTATS are called "models." Name the current model "Alachua," and save it to the folder on your desktop containing the GeoTIFF files.

        The next step is to specify the common tables: class descriptors, edge depth, edge contrast, and similarity (Section 3 of Figure 5). For this example, we only need to specify "class descriptor" and "edge depth." The class descriptor is a text file that contains information on the integers used to specify different land cover types and other information, as shown in Figure 9. We have provided the class descriptor in the text file entitled "ClassDescriptor.txt." (Note that you can create your own class descriptor using Notepad.) Upload this text file by clicking on the browse button shown in Section 3 of Figure 5, making sure to specify that you want to view "all files (*.*)." As shown in Figure 9, we set "1" to represent "urban," "2" to represent "upland forest," and "0" to represent "background." Essentially, what the user needs to know is described in four values: ID, Name, Enabled, IsBackground. For instance, the first row specifies that 0 is named "background" and that background cells are not to be used ("false") to calculate metrics.
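For readers who want to generate or check a descriptor programmatically, the sketch below parses a descriptor with the four columns described above. The comma-separated layout is an assumption based on Figure 9; compare it against the provided "ClassDescriptor.txt" before relying on it.

```python
# Hedged sketch of parsing a FRAGSTATS-style class descriptor.
# Assumes a comma-separated layout with columns ID, Name, Enabled,
# IsBackground, as described in the text; verify against your file.
import csv
import io

descriptor_text = """\
ID, Name, Enabled, IsBackground
0, background, false, true
1, urban, true, false
2, upland forest, true, false
"""

classes = {}
reader = csv.reader(io.StringIO(descriptor_text))
next(reader)  # skip the header row
for row in reader:
    cid, name, enabled, is_background = (field.strip() for field in row)
    classes[int(cid)] = {
        "name": name,
        "enabled": enabled.lower() == "true",
        "background": is_background.lower() == "true",
    }

print(classes[0]["background"])  # True: class 0 is the background class
print(classes[2]["name"])        # upland forest
```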

        Next, we will specify the edge depth used to estimate metrics on core areas of patches (Table 1). Edge depth defines how wide the edges of patches are. In this example, check the "use fixed depth" box and set the value to 100, i.e., 100 m, by clicking on the browse button ("…") shown in Section 3 of Figure 5. We selected 100 m as our edge depth because effects of edges are detected even at this large a distance from forest edges (Laurance et al. 2002). For this example, we will not specify parameters for the edge contrast or similarity boxes. To learn more about the functionality of these boxes, please see the "USER GUIDELINES > Running via the Graphical User Interface > Step 4. Specifying Common Tables [optional]" section (pages 54–60) of the "FRAGSTATS help 4.2" document.
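Conceptually, core area is what remains of a patch after stripping an edge-deep band from its boundary. A rough sketch, assuming the 30 m cells and 100 m edge depth used in this example (about three cells of edge), could look like this:

```python
# Minimal sketch of how an edge depth yields core area: erode each
# patch inward by edge_depth // cell_size cells; what survives is core.
# Assumed values: 30 m cells, 100 m edge depth (so an effective 90 m,
# since 100 // 30 = 3 whole cells). Not how FRAGSTATS computes it exactly.
import numpy as np
from scipy import ndimage

cell_size, edge_depth = 30, 100
n_iter = edge_depth // cell_size   # 3 cells of edge stripped off

forest = np.zeros((12, 12), dtype=bool)
forest[1:11, 1:11] = True          # one 10 x 10-cell forest patch

core = ndimage.binary_erosion(forest, iterations=n_iter)
print("patch area:", int(forest.sum()), "cells")  # 100
print("core area:", int(core.sum()), "cells")     # 4 x 4 = 16
```

Even this toy patch loses most of its area to edge, which previews why the core area results later in this example are so much more dramatic than the raw area changes.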

        In our example, the main interest is in quantifying how upland forest and urban areas (i.e., two land cover types) have changed in area and spatial pattern over time. Therefore, we are only interested in calculating metrics at the class level. Click on the "Analysis parameter" tab shown in Section 2 of Figure 5, check the box for "Class metrics," and select "No sampling" under "Sampling strategy" (Figure 6).

        Next, highlight "Class metrics" in Section 4 of Figure 5 and then choose the target metrics in Section 5 of Figure 5. All of the metrics we will estimate, and the tab under which they are found, are listed in Table 2. Simply check the boxes for each metric in their respective tabs. Note that when selecting Total Edge (TE), you will need to specify what counts as "edge" by clicking the browse button ("…") after "Count all background/boundary interface as edge" and selecting "Count all as edge" in the box that pops up. Please refer to the "FRAGSTATS METRICS" section of the "FRAGSTATS help 4.2" document (pages 76–170) for definitions of these and the many other metrics that FRAGSTATS can calculate.

        Finally, click "Run" in Section 1 of Figure 5. A progress window, as shown in Figure 10, will pop up: click "Proceed." To see results, click on the "Results" tab in Section 4 of Figure 5, making sure that the "Class" tab is selected (Figure 11). Results will be shown in the results window to the right.

        We can check results for the different input GeoTIFF files by highlighting different files listed in the run list (Section 1 of Figure 11). Now look at the table in Section 2 of Figure 11. The first row contains values for spatial metrics calculated for upland forests, while the second row shows the same metrics calculated for urban areas. Both rows report calculations for the GeoTIFF file highlighted in Section 1 of Figure 11. We define each metric below when we describe how to interpret model outputs.

        Spatial Inferences Revealed

        The numerical output generated by FRAGSTATS for the GeoTIFF files depicting land cover change in Alachua County, Florida, provides an understanding of not only how total area of upland forest and urban land cover has changed, but also how the spatial arrangement of land cover types has changed.

        This understanding is gained by the many spatial metrics that FRAGSTATS calculates. We calculated three types of metrics: area–edge metrics (Table 3), core area metrics (Table 4), and aggregation metrics (Table 5). Each specific metric is defined in its respective table.

        Through calculating area and edge metrics (Table 3), we found that upland forests are not only shrinking in total area, but upland forest patches are also shrinking and becoming more fragmented. For instance, declines in CA and %PLAND show that the total upland forest area in Alachua County declined by an amount that might not seem concerning (0.5%). However, we found that the remaining forest patches decreased in size (AREA_MN) by an average of close to 1 ha. In addition, the 1,145,040 m increase in TE shows that there was more forest edge in 2011 than in 2001, a sign of fragmentation.
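As a toy illustration of what these area and edge metrics measure (not a reproduction of FRAGSTATS output), the sketch below computes CA, PLAND, mean patch area, and TE on an invented grid, assuming 30 m cells and counting boundary cells against background as edge, per the "Count all as edge" option selected earlier.

```python
# Toy computation of area-edge metrics on an invented 30 m grid.
# grid values: 0 = other, 2 = upland forest (hypothetical codes).
import numpy as np
from scipy import ndimage

cell = 30.0                          # metres per cell side
grid = np.array([
    [2, 2, 0, 0],
    [2, 2, 0, 2],
    [0, 0, 0, 2],
])
forest = grid == 2

ca_m2 = forest.sum() * cell * cell                  # class area (CA)
pland = 100.0 * forest.sum() / grid.size            # percent of landscape
labeled, n = ndimage.label(forest)
area_mn = ca_m2 / n / 10_000                        # mean patch area, ha

# TE: count cell edges where forest meets anything else, including the
# landscape boundary ("count all as edge"); pad with non-forest, then
# count transitions between neighbouring cells.
padded = np.pad(forest, 1, constant_values=False)
h_edges = np.logical_xor(padded[1:, :], padded[:-1, :]).sum()
v_edges = np.logical_xor(padded[:, 1:], padded[:, :-1]).sum()
te_m = (h_edges + v_edges) * cell

print(f"CA={ca_m2:.0f} m2, PLAND={pland:.1f}%, "
      f"AREA_MN={area_mn:.3f} ha, TE={te_m:.0f} m")
```

On this grid the forest class has two patches, so a rising TE with falling AREA_MN, as in the upland forest results, signals more edge spread across smaller patches.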

        These metrics reveal that urban areas, in contrast with upland forests, are not only becoming larger but also more consolidated. Total urban area increased by 0.6% between 2001 and 2011, as revealed by increases in %PLAND. Unlike forest patch size, urban patch size (AREA_MN) on average increased by 1.6 ha while urban edge (TE) increased by 28,000 m (less than the increase exhibited by upland forests). An increase in urban patch size in conjunction with smaller increases in edge suggests that increases in urban areas occurred largely via the growth of existing urban areas.

        The core area metrics (Table 4) reveal a great loss of upland forest core area, a decline that would go undetected in an interpretation that considered only total area. Specifically, TCA and %CPLAND of upland forests declined by about 18.9%. In addition, the mean core area of upland forest decreased by 22.3% (from 3.6 ha to 2.8 ha). This result confirms that although the loss of total upland forest area seems minor, the upland forests are in fact experiencing considerable fragmentation.

        In contrast to upland forest areas, the urban areas in Alachua County are experiencing increases in their core areas. For instance, the increases in TCA and %CPLAND shown in Table 4 reveal a 14.2% increase in urban core areas. In addition, the mean size of urban core areas (CORE_MN) has increased by 0.7 ha, from 4.4 ha to 5.1 ha. The rate of increase of urban core areas in Alachua County is greater than the rate of increase for total urban area (Table 3), suggesting that sprawl from existing urban areas is contributing more to urban expansion than the emergence of isolated urban areas.

        Metrics pertaining to aggregation (Table 5) reveal that upland forest not only shrank in overall area between 2001 and 2011 but also became more fragmented. For example, the number of upland forest patches increased from 5,618 to 5,865. Furthermore, the splitting index, which increases as patches become more fragmented, rose from 579.1 to 886.1 between 2001 and 2011. These changes suggest that the increase in the number of upland forest patches is due to upland forests becoming more fragmented.
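The splitting index reported in Table 5 can be sketched directly from its standard definition, SPLIT = A^2 / sum(a_i^2), where A is the total landscape area and a_i are the individual patch areas; the numbers below are invented for illustration.

```python
# Splitting index: A^2 / sum(a_i^2). Higher values mean the same class
# area is divided into more, smaller patches. Areas here are invented.
def splitting_index(landscape_area, patch_areas):
    return landscape_area ** 2 / sum(a * a for a in patch_areas)

# One large patch vs. the same class area cut into four pieces:
print(splitting_index(100.0, [40.0]))      # 6.25
print(splitting_index(100.0, [10.0] * 4))  # 25.0
```

Holding class area fixed, subdividing it quadruples the index here, which matches the interpretation above: a rising splitting index signals fragmentation even when total area barely changes.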

        On the other hand, these same metrics reveal that urban patches became more aggregated between 2001 and 2011: the number of urban patches decreased, as did the splitting index, suggesting that urban patches coalesced somewhat during this period.

        One final note: while this case study shows the utility of FRAGSTATS for quantifying spatial patterns, outputs are only as good as the quality of the data used in an analysis. This case study relied on the high-quality National Land Cover Database from the US Geological Survey. Other datasets may not be of this quality. In addition, data quality may change over time. Such changes, along with spatial resolution and changes in protocols for land cover classification, can all affect FRAGSTATS outputs. Therefore, data products should be carefully selected to ensure that they have the characteristics and quality needed to meet the objectives of a given analysis.

        How much plastic floats in the Great Pacific Garbage Patch?

        At the time of sampling, there were more than 1.8 trillion pieces of plastic in the patch, weighing an estimated 80,000 tonnes. These figures are much higher than previous calculations.


        The center of the GPGP has the highest density, and the outer boundaries are the least dense. When quantifying the mass of the GPGP, the team chose to account only for the denser center area. If the less dense outer region were also considered, the total estimate would be closer to 100,000 tonnes.

        A total of 1.8 trillion plastic pieces were estimated to be floating in the patch – a plastic count that is equivalent to 250 pieces of debris for every human in the world.
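The per-person figure is simple arithmetic; the world-population value below (about 7.2 billion) is an assumption, since the article does not state the figure it used.

```python
# Rough arithmetic behind the "250 pieces per person" figure.
pieces = 1.8e12        # estimated plastic pieces in the patch
population = 7.2e9     # assumed world population at the time of the study
print(round(pieces / population))  # -> 250
```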

        Using a similar approach to the one used for estimating mass, the team chose to employ conservative estimates of the plastic count. While 1.8 trillion is a mid-range value for the total count, their calculations estimated that it may range from 1.1 trillion to 3.6 trillion pieces.


        Using data from multiple reconnaissance missions, a mass concentration model was produced to visualize the plastic distribution in the patch.

        The mass concentration model, pictured below, shows how concentration levels gradually decrease by orders of magnitude toward the outer boundaries of the GPGP. Concentrations are highest at the center, reaching hundreds of kg/km², and decrease to 10 kg/km² in the outermost region.

        These results show that plastic pollution at sea, while densely distributed within the patch, is scattered and does not form a solid mass, dispelling the "trash island" concept.

        Vertical distribution

        The Ocean Cleanup measured the vertical distribution of plastic during six expeditions between 2013 and 2015. Results from these expeditions showed that the buoyant plastic mass is concentrated within the top few meters of the ocean.

        Factors such as wind speed, sea state, and plastic buoyancy will influence vertical mixing. However, buoyant plastic will eventually float back to the surface in calmer seas. Larger pieces were observed to resurface much more rapidly than smaller pieces.


        Characteristics of the debris in the Great Pacific Garbage Patch, such as plastic type and age, show that plastic can persist in this region.

        Plastic in the patch has been measured since the 1970s, and calculations from subsequent years show that microplastic mass concentration is increasing exponentially – indicating that the input of plastic into the patch is greater than the output. Unless sources are mitigated, this number will continue to rise.

        #01 - This crate found in the Great Pacific Garbage Patch was produced in 1977. (Source: The Ocean Cleanup)
        #02 - This hard hat dates back to 1989. (Source: The Ocean Cleanup)
        #03 - This cover of a Nintendo Game Boy was produced in 1995. (Source: The Ocean Cleanup)

        New South Slope forest park gets a look

        Mother-daughter duo Inge, left, and Imke Durre want to preserve the forest at 11 Collier Ave. in the South Slope as a park. (Photo: Abigail Margulis)

        ASHEVILLE — A proposed forest park on downtown’s South Slope will get a serious look from city staff.

        Municipal personnel plan to meet with the development company that owns an old patch of woods in a booming area of downtown. That move follows direction from City Council at a Tuesday meeting.

        The discussion of the half acre of woods at 11 Collier Ave. was not on the council’s printed agenda. But a small crowd came to the three-hour meeting, waiting and speaking at the end during a public comment period where they asked elected officials to help preserve the 23 oaks and other trees, estimated at 80-100 years old.

        Proponents, including Inge Durre, said such a collection of old trees exists nowhere else downtown. Durre called it “a singular occurrence in the hardscape of the city center area.” An online petition to preserve the woods had drawn more than 1,000 signatures by Wednesday.

        Council members didn’t take an official vote but said there was a consensus for staff to make serious inquiries. Several council members were cautiously encouraging, but said much of the work in terms of planning and fund raising needs to be done by proponents.

        “It’s time to start getting pledges to financial commitments if you are going to be a part of this. Because we can’t do it alone,” said Councilman Chris Pelly, who supported the preservation and has spoken with representatives of the company that owns the land.

        That is the Wilmington-based Tribute Companies Inc. The company has presented plans for the site that include a 48-unit, five-story apartment building and a two-level parking garage. But on Sept. 2, Tribute voluntarily halted the city’s approval process. Company representatives said they wanted to work with proponents and the city to preserve the land and wanted to give them time.

        At Tuesday’s meeting, Pelly said it wasn’t clear how long Tribute representatives would wait. The councilman said the company was doing a market appraisal and would share that price for the property when the appraisal was finished.

        The company had suggested a land swap, he said, and had proposed some possible parcels to trade. It also might be willing to sell the property and even take a loss to preserve the woods, Pelly said. The company said affordable housing could be a component of a future project, the councilman said.

        The parcel is linked to the Ravenscroft School campus, whose main building was originally an 1840s residence, according to the National Register of Historic Places. Prior to the 2008 recession, a 150-condominium project was planned for the area but was stymied by the economic downturn.

        Buncombe County Geographic Information Systems lists the property as 0.46 acres, though discussions of the project refer to 0.55 acres. In 2011, the land was sold by First Troy Spe LLC for $314,000, county GIS says. The county now values it at $460,900 for tax purposes.

        Councilman Cecil Bothwell offered the most support for the park, saying in the 35 years he’s lived in the area, he’s seen older downtown trees disappear because of construction or other causes.

        “Over and over again, we lose these trees, and as one commenter made note of, you can’t replace old trees. There’s no way to do it,” Bothwell said.

        Vice Mayor Marc Hunt said he supported such preservation efforts but activists needed to help plan the park and raise money. Hunt pointed to land along Hominy Creek he helped preserve, a nine-acre park Pelly backed in his Haw Creek neighborhood and other city conservation efforts. In several of those cases, the city and its taxpayers ended up paying about a third of costs, he said.

        “I, frankly, am not comfortable with the proposition that the city figure out how to fund 90 percent or all of it. This really needs to be a community effort like all the other great parks we’ve been able to achieve. That would be the challenge I offer up,” he said.


        The mean pellet density across 76 surveyed stands was 1.93 pellets plot⁻¹ stand⁻¹ (SE = 0.19 pellets plot⁻¹ stand⁻¹), with a range of 0.04–6.28 pellets plot⁻¹ stand⁻¹. Converting mean pellets to hare density (Krebs et al. 2001) resulted in a range of 0.03–2.38 hares/ha, with a mean of 0.82 hares/ha (SE = 0.07 hares/ha) and a median of 0.67 hares/ha.

        Of the suite of stand-scale variables considered, only 4 were highly correlated with pellet density (Table 1): vegetation type, understory cover, sapling density, and medium tree density. Most variables were correlated with one another. To avoid overfitting by using autocorrelated variables, we used only sapling and medium tree density as predictive stand-scale variables in the model, because they created understory cover, were more highly correlated with pellet density, and are the variables most directly targeted in forest management.
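The screening step described above amounts to computing Spearman rank correlations between pellet density and each candidate stand variable. A sketch with synthetic data (not the study's) follows; the linear relationship and noise level are invented purely to produce a detectable correlation.

```python
# Hedged sketch of Spearman-based variable screening. The data are
# synthetic: sapling densities and pellet densities are simulated, not
# the values from the 76 Washington stands.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
sapling_density = rng.uniform(0, 5000, 76)   # trees/ha, invented range
# Invented positive relationship plus noise:
pellet_density = 0.001 * sapling_density + rng.normal(0, 1, 76)

rho, p = spearmanr(sapling_density, pellet_density)
print(f"r_s = {rho:.2f}, p = {p:.4g}")
```

In practice one would run this for each candidate variable, keep only those strongly correlated with pellet density, and then check the predictors against each other for collinearity before modeling.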

        Spearman rank correlations (rₛ) between densities of snowshoe hare (Lepus americanus) pellets and vegetation variables measured in 76 forest stands in northern Washington during 2003–2004. Variables measured for each focal stand included length of the stand perimeter (m), stand area (ha), edge-to-area ratio (m:ha), slope (%), aspect (°), canopy cover (%), understory cover (%), and density (trees/ha) of saplings (<10.2 cm diameter at breast height [DBH]), medium-sized trees (10.3–27.9 cm DBH), and large trees (>27.9 cm DBH).
