Export as Python script from ModelBuilder never completes?
The script below was exported from ModelBuilder and runs within ModelBuilder in about 7.5 seconds.
However, when I export it and run it as a standalone script in either PyScripter or the default Python IDLE, it creates a schema lock on the database and never ends… I have let it run for an hour.
There is no error; it simply never completes the task.
Any ideas on how to troubleshoot this, or why it works in ModelBuilder but not as a standalone script?
I'm using ArcGIS 10.2 for Desktop with a Basic level license.
```python
import arcpy

arcpy.env.overwriteOutput = True

# Local variables:
Wells_csv = "c:xxWells.csv"
Wells_Layer = "Wells_Layer"
wells = "c:xxwells"

# Process: Make XY Event Layer
arcpy.MakeXYEventLayer_management(Wells_csv, "Surface_Longitude", "Surface_Longitude", Wells_Layer, "GEOGCS['GCS_North_American_1927',DATUM['D_North_American_1927',SPHEROID['Clarke_1866',6378206.4,294.9786982]],PRIMEM['Greenwich',0.0],UNIT['Degree',0.0174532925199433]];-400 -400 1000000000;-100000 10000;-100000 10000;8.98305509728916E-09;0.001;0.001;IsHighPrecision", "")

# Process: Copy Features
arcpy.CopyFeatures_management(Wells_Layer, wells, "", "0", "0", "0")
```
The Help page entitled Exporting a model to a Python script lists a number of caveats when using this technique as an aid to learn Python/ArcPy.
I far prefer to run tools manually via their tool dialogs and then use Geoprocessing | Results to access Copy As Python Snippet, pasting that well-formed code into a Python script instead.
In your case the script only uses two tools, so I recommend running each from its tool dialog with the same parameters and, assuming that works, starting a new script using Copy As Python Snippet.
If either tool fails to complete then this is a problem with the tool rather than ArcPy (or PyScripter/IDLE).
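If both tools succeed from their dialogs, a useful next step is to time and log each geoprocessing call in the standalone script so you can see exactly which call is hanging. Below is a minimal sketch of that idea; the `run_step` helper is illustrative and not part of the exported script, and note that the export passes "Surface_Longitude" as both the X and Y field, which is worth double-checking against the CSV.

```python
import time

def run_step(name, func, *args):
    """Run one geoprocessing call and report how long it took.

    Printing a message after each step makes it obvious which
    tool call is holding the schema lock and never returning.
    """
    start = time.time()
    result = func(*args)
    print("%s finished in %.1fs" % (name, time.time() - start))
    return result

def main():
    import arcpy  # only available inside an ArcGIS Python install

    arcpy.env.overwriteOutput = True
    run_step("MakeXYEventLayer",
             arcpy.MakeXYEventLayer_management,
             "c:xxWells.csv",
             "Surface_Longitude",   # X field
             "Surface_Longitude",   # Y field -- same field as X in the
                                    # export; verify against the CSV
             "Wells_Layer")
    run_step("CopyFeatures",
             arcpy.CopyFeatures_management, "Wells_Layer", "c:xxwells")
    print(arcpy.GetMessages())  # dump tool messages, including warnings

# Call main() from the Python installation that ships with ArcGIS for Desktop.
```

Running each step through a wrapper like this also makes the script's tool messages visible in the IDE console, which ModelBuilder normally surfaces for you.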
5 things you didn't know about the new Tau VMs
Posted: Fri, 25 Jun 2021 21:00:00 -0000
Google Cloud Compute VMs are built on the same global infrastructure that runs Google's search engine, Gmail, YouTube and other services. And over the years, we’ve continued to launch more and more Compute families and VM types to serve your workload needs at the price point you’re looking for. When you take a bird’s-eye look at our Compute offerings, you’ll notice the following family types:
- General Purpose (E2/N2/N2D/N1): Virtual machines well suited to workloads that need a balance between customization, performance, and total cost of ownership.
- Compute Optimized (C2/C2D): Performance-sensitive workloads where CPU frequency and consistency are required, or applications that need more powerful cores and higher core:memory ratios.
- Memory Optimized (M1/M2): Virtual machines for the largest memory requirements of business-critical workloads.
- Accelerator Optimized (A2): VMs with the highest-performance GPUs for ML, HPC, and massively parallelized computation.
As their names might suggest, each family is optimized for specific workload requirements. While they cover use cases like dev/test, enterprise apps, HPC, and large in-memory databases, many customers still have compute requirements for scale-out workloads, like large scale Java apps, web-tier applications, and data analytics. They want focused VM features without breaking the bank or sacrificing developer productivity.
The Tau VM family is the new VM family that extends Compute Engine’s VM offerings for those looking for cost-effective performance for scale-out workloads with full x86 compatibility. Check out the official blog post and my video below to get a quick intro to the new Tau VM family and T2D, its first instance type.
If you’re like me and still want help understanding when to use T2D VMs and how they stack up, here are 5 Tau VM facts that should help:
1. T2D VMs are built on the latest 3rd generation AMD EPYC™ processors
AMD EPYC processors are x86-64 microprocessors based on AMD’s Zen microarchitecture (introduced in 2017). The third generation, Milan, came out in March 2021, building upon the previous generation with additional compute density and performance for the cloud. At our data centers, we’re able to get more performance per socket per rack, and pass that over to workloads running on T2D VMs.
The AMD EPYC processor-based VMs also preserve x86 compatibility so that you don’t need to utilize technical resources and time redesigning applications and instead can immediately take full advantage of x86 processing speed and ecosystem depth.
2. T2D VMs are well suited for cloud-native and scale-out workloads
Cloud-native workloads have led to the continued proliferation of distributed architectures. Data analytics and media streaming, for example, often leverage scale-out (horizontally scalable) multi-tier architectures. That means when additional processing power is needed, you can scale out by elastically adding or removing resources to meet changing application demands. As cluster sizes increase, the communication requirements between compute nodes rise quickly. AMD EPYC processors are built using the Zen 3 architecture, which uses a new "unified complex" design that dramatically reduces core-to-core and core-to-cache latencies. This reduces communication penalties when you need fast scale-out across compute nodes.
T2D VMs offer the ideal combination of performance and price for your scale-out workloads including web servers, containerized microservices, media transcoding, and large-scale Java applications. T2D VMs will come in predefined VM shapes, with up to 60 vCPUs per VM, and 4 GB of memory per vCPU, and offer up to 32 Gbps networking.
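As a back-of-the-envelope illustration of those predefined shapes, here is a small sketch that derives memory from the stated 4 GB-per-vCPU ratio; the `t2d-standard-N` naming convention is assumed for illustration.

```python
# Illustrative sketch: predefined T2D shapes pair N vCPUs with
# 4 GB of memory per vCPU, up to 60 vCPUs per VM.
T2D_GB_PER_VCPU = 4
T2D_MAX_VCPUS = 60

def t2d_shape(vcpus):
    """Return (machine_type, vcpus, memory_gb) for a t2d-standard shape."""
    if not 1 <= vcpus <= T2D_MAX_VCPUS:
        raise ValueError("T2D VMs offer up to 60 vCPUs")
    return ("t2d-standard-%d" % vcpus, vcpus, vcpus * T2D_GB_PER_VCPU)

for n in (1, 8, 32, 60):
    print(t2d_shape(n))
# The largest shape, with 60 vCPUs, works out to 240 GB of memory.
```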
3. T2D VMs win against other major cloud providers on absolute performance and price-performance
Let’s take an example. A 32vCPU VM with 128GB RAM will be priced at $1.3520 per hour for on-demand usage in us-central1. This makes T2D the lowest cost solution for scale-out workloads, with 56% higher absolute performance and 42% higher price-performance compared to general-purpose VMs of any of the leading public cloud vendors. You can check out how we collected these benchmark results and how to reproduce them here.
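To put that hourly rate in perspective, a quick cost sketch, assuming the quoted on-demand price and an average 730-hour month, with no sustained-use or committed-use discounts applied:

```python
# Back-of-the-envelope monthly cost for one 32-vCPU / 128 GB T2D VM
# at the quoted on-demand rate in us-central1.
HOURLY_RATE = 1.3520      # USD per hour, on demand
HOURS_PER_MONTH = 730     # average hours in a month

monthly = HOURLY_RATE * HOURS_PER_MONTH
print("~$%.2f per month" % monthly)   # ~$986.96 per month
```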
4. Google Kubernetes Engine support from day one
Google Kubernetes Engine (GKE) supports Tau VMs, helping you optimize price-performance for your containerized workloads. You can add T2D nodes to your GKE clusters by specifying the T2D machine type in your GKE node pools.
This is useful if you’re leveraging GKE's cluster autoscaler, for example, which resizes the number of nodes in a given node pool based on the demands of your workloads (another example of horizontal scaling). You specify a minimum and maximum size for the node pool, and the rest is automatic. T2D VMs in this case would provide scale-out performance and low latency during autoscaling events.
In addition, cluster autoscaler considers the relative cost of the instance types in the various pools, and attempts to expand the least expensive possible node pool. Coupled with the T2D VMs price-performance ratio, you can experience a lower total cost of ownership without sacrificing performance and scale.
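Adding a T2D node pool to an existing cluster amounts to choosing a T2D machine type when creating the pool. The sketch below builds the corresponding `gcloud container node-pools create` command; the cluster name, pool name, and zone are placeholders, not values from the post.

```python
# Hypothetical example: build the gcloud command that adds an
# autoscaling T2D node pool to an existing GKE cluster.
def t2d_node_pool_cmd(cluster, pool="t2d-pool", zone="us-central1-a",
                      machine_type="t2d-standard-8",
                      min_nodes=1, max_nodes=5):
    """Return the gcloud command (as an argument list) for the node pool."""
    return [
        "gcloud", "container", "node-pools", "create", pool,
        "--cluster", cluster,
        "--zone", zone,
        "--machine-type", machine_type,   # selects Tau T2D VMs
        "--enable-autoscaling",
        "--min-nodes", str(min_nodes),
        "--max-nodes", str(max_nodes),
    ]

cmd = t2d_node_pool_cmd("my-cluster")
print(" ".join(cmd))
# To run it for real: subprocess.run(cmd, check=True)  (requires gcloud)
```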
5. We worked with pre-selected customers to test Tau VM performance
Snap, Inc. is continuing to improve their scale-out compute infrastructure for key capabilities like AR, Lenses, Spotlight, and Maps. After testing the T2D VMs with Google Kubernetes Engine, they saw the potential for a double-digit performance gain in the company's real-world workloads. Likewise, Twitter shared their excitement about the price-performance enhancements critical for their infrastructure used to serve the global public conversation.
If you’re interested in trying out the Tau VMs (slated for Q3 2021), you can sign up here.