Processing algorithm fails from Python script

I'm trying to write a Python script that runs a geoalgorithm. What surprises me is the following:

  1. I first test the algorithm through the QGIS (2.8) interface; in my case, the GRASS interpolator v.surf.idw.
  2. I see that the result is good enough with a certain set of parameters.
  3. Then I run the same algorithm with the same parameters from a Python script. In my case:

    out_ras = processing.runalg("grass:v.surf.idw", vl, 12, 2, "field_3", False, "%f,%f,%f,%f" % (xmin, xmax, ymin, ymax), 0.5, -1, 0.001, fileoutput)

where:

  • vl is the point vector layer
  • field_3 is the field whose values are to be interpolated
  • fileoutput is the output raster file
  • (xmin, xmax, ymin, ymax) is the extent of my layer (see the sketch below for how these are built)
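For context, here is a minimal sketch of how these inputs can be prepared in the QGIS 2.x Python console; the shapefile path is a placeholder, and the comma-separated string is the extent format Processing expects:

    # Minimal sketch (QGIS 2.x Python console). The shapefile path is a
    # placeholder; replace it with the actual point layer.
    import processing
    from qgis.core import QgsVectorLayer

    vl = QgsVectorLayer("/path/to/points.shp", "points", "ogr")
    print(vl.isValid())  # make sure the layer actually loaded

    # Processing expects the extent as "xmin,xmax,ymin,ymax"
    ext = vl.extent()
    xmin, xmax, ymin, ymax = (ext.xMinimum(), ext.xMaximum(),
                              ext.yMinimum(), ext.yMaximum())
    extent_str = "%f,%f,%f,%f" % (xmin, xmax, ymin, ymax)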

This setting (which works perfectly when launched from the QGIS interface) produces a raster containing nothing but a single NoData cell. It seems that the algorithm does not recognize the input vector. I've also checked the CRS of the layer (with vl.crs().authid()) and everything looks fine.
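One thing worth checking in cases like this is the exact positional signature that runalg() expects; QGIS 2.x Processing ships a helper that prints it:

    # Prints the ordered list of parameters the algorithm expects, so the
    # positional arguments passed to runalg() can be verified one by one.
    import processing
    processing.alghelp("grass:v.surf.idw")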

Any help? Has anyone experienced different behaviour of the SAME algorithm when run from Python through Processing rather than from the QGIS UI?


It seems that my problem was in the GRASS algorithm I was using. I've now moved to the GDAL algorithm named "Grid (Inverse distance to a power)" and it works. I don't really know what was wrong: my only suspicion is that, after running the GRASS algorithm, I would need to "convert" the GRASS raster map it creates into a "readable" raster that can be loaded into QGIS correctly. I arrived at this guess after reading the Processing log file and comparing the log of the Python-based run with the one obtained after launching the process from the GUI. But this is only a guess… hoping this helps somebody else playing with GRASS and Processing.
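For reference, here is a rough sketch of what the equivalent runalg call might look like; the algorithm id and the parameter order are assumptions about the QGIS 2.x signature, so confirm them with processing.alghelp("gdalogr:gridinvdist") before relying on this:

    # Hedged sketch: the algorithm id and the parameter order below are
    # assumptions for QGIS 2.x; confirm with processing.alghelp() first.
    import processing

    out_ras = processing.runalg(
        "gdalogr:gridinvdist",
        vl,          # input point vector layer
        "field_3",   # field holding the values to interpolate
        2.0,         # power of the inverse distance weighting
        0.0,         # smoothing
        0.0, 0.0,    # radius_1, radius_2 (0 means use all points)
        0, 0,        # max_points, min_points
        0.0,         # angle
        -9999,       # value written where nothing can be computed
        fileoutput,  # path of the output raster
    )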