2.3.9. vacumm.misc.io – Input/output

2.3.9.2. Content

Inheritance diagram:
 
Inheritance diagram of vacumm.misc.io.ColoredFormatter

In/Output tools

class ColoredFormatter(msg, full_line=False)[source]

Bases: logging.Formatter

Log formatter with colors

converter()
localtime([seconds]) -> (tm_year,tm_mon,tm_mday,tm_hour,tm_min,
tm_sec,tm_wday,tm_yday,tm_isdst)

Convert seconds since the Epoch to a time tuple expressing local time. When ‘seconds’ is not passed in, convert the current time instead.

format(record)[source]

Format the specified record as text.

The record’s attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.

formatException(ei)

Format and return the specified exception information as a string.

This default implementation just uses traceback.print_exception()

formatTime(record, datefmt=None)

Return the creation time of the specified LogRecord as formatted text.

This method should be called from format() by a formatter which wants to make use of a formatted time. This method can be overridden in formatters to provide for any specific requirement, but the basic behaviour is as follows: if datefmt (a string) is specified, it is used with time.strftime() to format the creation time of the record. Otherwise, the ISO8601 format is used. The resulting string is returned. This function uses a user-configurable function to convert the creation time to a tuple. By default, time.localtime() is used; to change this for a particular formatter instance, set the ‘converter’ attribute to a function with the same signature as time.localtime() or time.gmtime(). To change it for all formatters, for example if you want all logging times to be shown in GMT, set the ‘converter’ attribute in the Formatter class.

usesTime()

Check if the format uses the creation time of the record.
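
Example (an illustrative sketch; attaching the formatter to a standard logging handler is assumed usage, with msg taken to be the log format string):
>>> import logging
>>> from vacumm.misc.io import ColoredFormatter
>>> handler = logging.StreamHandler()
>>> handler.setFormatter(ColoredFormatter('%(name)s [%(levelname)-8s] %(message)s'))
>>> logging.getLogger('myapp').addHandler(handler)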

class Logger(name, logfile=None, console=True, maxlogsize=0, maxbackup=0, cfmt='%(name)s [%(levelname)-8s] %(message)s', ffmt='%(asctime)s: %(name)s [%(levelname)-8s] %(message)s', asctime='%Y-%m-%d %H:%M', level='debug', colors=True, full_line=False, redirect_warnings=False, redirect_stdout=False, redirect_stderr=False)[source]

Bases: object

Class providing logging facilities, typically used through subclassing. Logging may be sent to the console and/or to a log file.

Params:
  • name: Name of the logger.
  • logfile, optional: Log file.
  • console, optional: Log to the console.
  • maxlogsize, optional: Maximal size of log file before rotating it.
  • maxbackup, optional: Maximal number of rotated files.
  • ffmt, optional: Format of log messages in the log file.
  • cfmt, optional: Format of log messages in the console.
  • asctime, optional: Time format.
  • level, optional: Initialize logging level (see set_loglevel()).
  • colors, optional: Use colors when formatting terminal messages?
  • full_line, optional: Colorize full line or just level name?
  • redirect_warnings, optional: Redirect messages issued by warnings.warn.
  • redirect_stdout, optional: Redirect messages issued to sys.stdout.
  • redirect_stderr, optional: Redirect messages issued to sys.stderr.
See also:

logging module
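
Example (a minimal sketch; the logger name and file name are arbitrary):
>>> from vacumm.misc.io import Logger
>>> logger = Logger('myapp', logfile='myapp.log', console=True, level='info')
>>> logger.info('Processing started')
>>> logger.warning('Something looks odd')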

critical(text, *args, **kwargs)[source]

Send a critical message

debug(text, *args, **kwargs)[source]

Send a debug message

error(text, *args, **kwargs)[source]

Send an error message

get_loglevel(asstring=False)[source]

Get the log level as an integer or a string

info(text, *args, **kwargs)[source]

Send an info message

set_loglevel(level=None, console=None, file=None)[source]

Set the log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)

Example:
>>> logger.set_loglevel('DEBUG', console='INFO')
showwarning(message, category, filename, lineno, file=None)[source]
warning(text, *args, **kwargs)[source]

Send a warning message

class NcFileObj(ncfile, mode='r')[source]

Bases: object

Simple class to properly manage a netcdf file object or file name

Examples:
>>> nfo = NcFileObj('myfile.nc')
>>> nfo.f('sst')
>>> nfo.close() # or del nfo: close file descriptor
>>> f = cdms2.open(path)
>>> nfo = NcFileObj(f)
>>> nfo.f('sst')
>>> del nfo # or nfo.close(): no effect here (descriptor stays open)
>>> f.close()
>>> nfo = NcFileObj(f)
>>> nfo.close() # close file descriptor
close()[source]
isclosed()[source]
isopen()[source]
ispath()[source]
class NcIterBestEstimate(files, time=None, toffset=None, timeid=None, tslices=None, keepopen=False, autoclose=True, id=None)[source]

Bases: object

Iterator on netcdf forecast files

This class is useful for reading the best estimate from netcdf forecast files.

Params:
  • files: A list of netcdf files.
  • toffset, optional: An integer or a tuple (<num>, ‘<units>’) to skip the first time steps of each file.
  • timeid, optional: Time id. If None, it is guessed using guess_timeid().
  • tslices, optional: A list of time slices (typically taken from a previous loop on file), to prevent guessing them.
  • keepopen, optional: Keep all file descriptors open, else close those that can be closed once no longer used.
  • autoclose: Deprecated.
Iterator:

At each iteration, it returns f,tslice

  • f: the file descriptor (may be closed),

  • tslice: the time slice

    • A slice instance.
    • None if no time axis is found (thus no slice to perform).
    • False: if nothing to read at all.
Example:
>>> for f, tslice in NcIterBestEstimate(ncfiles, toffset=(1,'day')):
...     if tslice is False or tslice is None: continue
...     var = f(time=tslice)
close()[source]

Close file descriptors that can be closed

empty()[source]

Nothing to read from this file

next(verbose=False)[source]
exception NcIterBestEstimateError[source]

Bases: vacumm.__init__.VACUMMError

args
message
class Shapes(input, m=None, proj=False, inverse=False, clip=True, shapetype=None, min_area=None, sort=True, reverse=True, samp=1)[source]

Bases: object

A class to read shapefiles and return GEOS objects, inspired by basemap.readshapefile

Here are the conversion rules from shapefile to GEOS objects :

  • Points and multipoints are interpreted as Points.
  • Polylines are interpreted as LineString.
  • Polygons are interpreted as Polygons.
Params:
  • input: Refers to a shapefile or is a Shapes instance; if a shapefile, it assumes that <input>.shp contains points, multipoints, lines or polygons, and that <input>.dbf contains their attributes.

  • proj, optional: A projection function to convert coordinates. It must accept the “inverse” keyword.

  • m, optional: A Basemap instance used for conversions when plotting.

  • inverse, optional: Invert the conversion with proj.

  • clip, optional: If in the form (xmin,ymin,xmax,ymax), clips to this box; if a polygon-like argument, it clips to this polygon (see polygons() for arguments). If simply True and m is present, it clips to the bounds of m.

  • min_area, optional: Minimal area required to keep a polygon

  • samp, optional: An integer to undersample coordinates of polygons and lines.

  • shapetype, optional:

    • If 0, it must only deal with points ;
    • if 1, only polylines ;
    • if 2, only polygons (conversion 1<->2 is automatic).
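
Example (an illustrative sketch; the shapefile basename “coastline” is hypothetical):
>>> from vacumm.misc.io import Shapes
>>> shapes = Shapes('coastline', min_area=0.1)  # reads coastline.shp and coastline.dbf
>>> if shapes.is_type(shapes.POLYGONS): shapes.plot(title='Coastline')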
INPUT_MULTIPOINTS = 8
INPUT_POINTS = 1
INPUT_POLYGONS = 5
INPUT_POLYLINES = 3
LINE = 1
LINES = 1
POINT = 0
POINTS = 0
POLY = 2
POLYGON = 2
POLYGONS = 2
POLYS = 2
clip(zone, copy=True, sort=True, reverse=True, **kwargs)[source]

Clip to zone

Params:
  • zone: [xmin, ymin, xmax, ymax]
  • copy, optional: If True, make a copy of the current instance, else simply rework the list of shapes in place.
  • If copy is True, other parameters are passed to the initialization of the new instance.
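
Example (coordinates are arbitrary):
>>> clipped = shapes.clip([-10., 43., -1., 49.], copy=True)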
get_data(key=None, proj=None)[source]

Get the numeric version of the list of geos objects (polygons, etc)

Param:
  • key: A slice selector applied to the list.
  • proj: True, or a callable to project or re-project coordinates.
get_map()[source]

Return the associated basemap instance if set

get_points(key=None, split=True, proj=None)[source]

Get all the points from all the shapes as a tuple (x,y)

get_shapes(key=None, proj=None)[source]

Get the list of geos objects (polygons, etc)

Param:
  • key: A slice selector applied to the list.
  • proj: True, or a callable to project or re-project coordinates.
get_type()[source]

Return the type of shapes

get_xy(key=None, proj=None)[source]

Shortcut to get_points(split=False)

is_type(type)[source]

Check type

Example:
>>> self.is_type(self.POLYS)
plot(select=None, ax=None, fill=None, points=False, lines=True, fillcolor=None, color='k', s=None, linewidth=None, m=None, show=True, alpha=1, autoscale=True, title=None, **kwargs)[source]

Plot shapes

Params:
  • select, optional: argument for selecting shapes in the list [default: None].
  • fill, optional: Force filling (True/False), else guessed from the shape type, i.e. filling for polygons only [default: None]
  • ax, optional: Axes instance.
  • m, optional: Map instance (created with map2()) or a Basemap instance.
  • points, optional: Plot shapes as points.
  • lines, optional: Plot shapes as lines (if not of type POINTS).
  • fill_<params>, optional: <param> is passed to PolyCollection.
  • lines_<params>, optional: <param> is passed to LineCollection or to PolyCollection.
  • points_<params>, optional: <param> is passed to scatter.
  • m_<params>, optional: <param> is passed to map2 if m is True.
  • autoscale, optional: Autoscale axis limits?
resol(deg=True)[source]

Compute the mean “resolution” of the shapes based on the first shape

  • deg:

    • if False: return the resolution in meters as the median distance between points
    • if True: return the median distance between points as a resolution in degrees (xres,yres)
sort(reverse=True)[source]

Sort shapes according to their surface or length

  • reverse: If True, greater polygons are first [default: True]
sorted()[source]
xy

XY coordinates as a (2,npts) array

class XYZ(xyz, m=None, units=None, long_name=None, transp=True, trans=False, magnet=0, rsamp=0, id=None, **kwargs)[source]

Bases: object

Class to manipulate xyz data (randomly spaced)

  • xyz: It can be either

    • a .xyz ascii file, or a netcdf/grd file with variables x, y and z,
    • a (x,y,z) tuple,
    • a (3, npts) array,
    • another XYZ instance.
  • long_name: Long name

  • units: Units

  • transform: It can be either

    • a factor applied to z at initialisation
    • a function that takes z as the only argument to filter its data.
  • exc: Polygons to exclude data (see exclude()).

    Several polygons must be passed as a tuple (poly1, poly2, …).

  • sel: Polygons to select data (see select()).

    Several polygons must be passed as a tuple (poly1, poly2, …).

  • load_<keywords>: keywords are passed to numpy.loadtxt()

  • rsamp_<keywords>: keywords are passed to rsamp()

  • Other keywords are set as attributes.

Slicing:

  • len(obj): number of xyz data
  • obj[0:2]: [(x0,y0,z0),(x1,y1,z1)]

Operations :

>>> xyz += 2
>>> xyz3 = xyz1 + xyz2/2. # concatenation
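
Construction example (a hedged sketch; the file name, units, long_name and the (x, y, z) arrays are hypothetical):
>>> from vacumm.misc.io import XYZ
>>> xyz = XYZ('bathy.xyz', units='m', long_name='Bathymetry')
>>> xyz2 = XYZ((x, y, z), units='m')  # from a (x,y,z) tuple of arrays
>>> npts = len(xyz)
>>> xmin, xmax = xyz.xmin, xyz.xmax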
astuple(mask=True)[source]

Shortcut to xyz(split=True) (see xyz())

clip(zone=None, margin=None, inverse=False, mask=False, id=None, **kwargs)[source]

Geographical selection of part of the data

  • zone: (xmin,ymin,xmax,ymax), or a float/int, or a complex polygon (see polygons()).
  • margin: Margin around zone relative to the resolution (see resol())
  • inverse: Inverse the selection.
  • mask: zone must be interpreted as a mask
consolidate()[source]

Apply radius undersampling and all exclusions and selections to data and reset them

contains(x, y)[source]

Check if one or several points are within the convex hull

  • x,y: X,Y positions as floats, lists or numpy arrays.
copy()[source]

Deep copy

del_rsamp()

Reset rsamp without affecting data

exclude(*zones)[source]

Add one or more zones where data are not used.

A zone can be :

  • an argument to polygons() to get a _geoslib.Polygon instance,
  • another XYZ instance from which the convex hull (see hull()) is used as a delimiting area
Usage:
>>> xyz.exclude([[-8,43],[-5.5,43],[-6,45.]],[[-10,45],[-7,47],[-10,49.]])
>>> xyz.exclude(polygon1,polygon2)
>>> xyz.exclude(xyz1,[-5,42,-3,48.])
exclusions()[source]

Get all exclusion polygons as a tuple

get_grid(res=None, xmin=None, xmax=None, ymin=None, ymax=None, relres=0.5, degres=False, id='xyz_grid')[source]

Generate a rectangular grid based on x/y positions and resolution

  • res: Resolution. It can be:

    • a float where then xres=yres=res
    • a tuple as (xres,yres)
    • else it is guessed using get_res() (and maybe resol()) and multiplied by relres
  • relres: Relative resolution factor applied to res when the resolution is guessed (res=None)

  • degres: When res is explicitly given, it is interpreted as degrees if degres is True.

  • xmin,xmax,ymin,ymax: Bounds of the grid. If not specified, bounds of the dataset are used (see xmin(), etc).

Note

Resolutions are adjusted (slightly decreased) when they are not multiples of the grid extensions. Therefore, the extensions of the grid are always preserved.

See also

resol(), togrid()
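
Example (the resolution value is arbitrary):
>>> grid = xyz.get_grid(res=0.01, degres=True)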

get_magnet()[source]

Get the magnet integer attribute

Note

Useful only for mixing XYZ instances

get_res(deg=False, auto=None)[source]

Get the mean X and Y resolutions in meters or degrees

get_rsamp()[source]

Get the radius sampling rsamp

get_transp()[source]

Get transp

Note

Useful only for mixing XYZ instances

get_x(mask=True)[source]

Get valid X positions

get_xmax(mask=True)[source]
get_xmin(mask=True)[source]
get_xy(mask=True)[source]

Return coordinates as a (2, npts) array xy

  • xy()[0]: X
  • xy()[1]: Y
get_xyz(mask=True, split=False)[source]

Return coordinates and data as a (3, npts) array xyz

  • xy()[0]: X
  • xy()[1]: Y
  • xy()[2]: Z
get_y(mask=True)[source]

Get valid Y positions

get_ymax(mask=True)[source]
get_ymin(mask=True)[source]
get_z(mask=True)[source]

Get valid Z values

get_zmax(mask=True)[source]
get_zmin(mask=True)[source]
grid

Rectangular grid based on x/y positions and resolution

hull(out='xy', mask=True)[source]

Return the convex hull

Returns: Depends on out
  • "xy": (xhull, yhull)
  • "ind": indices of points
  • "poly": _geoslib.Polygon instance
interp(xyo, xyz=False, **kwargs)[source]

Interpolate to (xo,yo) positions using nat.Natgrid

Params:
  • xyo: Output positions, as a (xo, yo) tuple
  • xyz: If True, return a XYZ instance instead of a numpy array
  • interp_<param>, optional: <param> is passed to the xy2xy() interpolation routine.
  • Other params are passed to XYZ initialization for the output dataset.
Returns:

An XYZ instance

magnet

Magnet integer attribute

mask()[source]

Get the current mask due to exclusion and selection polygons

See also

exclude() select()

plot(size=5.0, color=None, alpha=1.0, masked_alpha=0.3, masked_size=None, linewidth=0.0, show=True, savefig=None, savefigs=None, m=None, colorbar=True, title=None, units=None, cmap=None, mode='valid', zorder=100, masked_zorder=50, margin=2, xmin=None, xmax=None, ymin=None, ymax=None, xres=None, yres=None, **kwargs)[source]

Scatter plot of bathymetry points

Params:
  • mode, optional: ‘valid’, ‘masked’ or ‘both’.
  • size, optional: Size of markers.
  • color, optional: Color of markers.
  • alpha, optional: Alpha transparency of markers.
  • zorder, optional: zorder of markers.
  • m, optional: Use this Basemap instance to plot the points.
  • masked_size, optional: Size of masked markers.
  • masked_alpha, optional: Alpha transparency of masked markers.
  • masked_zorder, optional: zorder of masked markers.
reset_exclusions()[source]

Remove all exclusions

reset_rsamp()[source]

Reset rsamp without affecting data

reset_selections()[source]

Remove all selections

resol(convex_hull_method='delaunay', exc=[], deg=False)[source]

Return the mean resolution.

Algorithm: Median distances between facets of triangles

Returns: (xres,yres)
rsamp

Radius of undersampling

save(xyzfile, **kwargs)[source]

Save to a file

  • xyzfile: Output file name

    • write a netcdf file if it ends with “.nc” or “.grd”
    • write a sinusX file if it ends with “.snx”
    • else write an ascii file with 3 columns
  • Other keywords are passed to numpy.savetxt() for ascii saving
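
Example (output file names are arbitrary):
>>> xyz.save('bathy.nc')   # netcdf file
>>> xyz.save('bathy.xyz')  # ascii file with 3 columns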

select(*zones)[source]

Add one or more zones (polygons); only data within them are used

A zone is an argument to polygons() to get a _geoslib.Polygon instance.

Usage:
>>> xyz.select([[-8,43],[-5.5,43],[-6,45.]],[[-10,45],[-7,47],[-10,49.]])
>>> xyz.select(polygon1,polygon2)
selections()[source]

Get all selection polygons as a tuple

set_magnet(magnet)[source]

Set the magnet integer attribute. If set to 0, no magnet effect.

Note

Useful only for mixing XYZ instances

set_res(xres, yres=None)[source]

Set the resolution of the dataset

If yres is not set, it defaults to xres. When a value is negative, it is assumed to be in meters (not in degrees)

set_rsamp(rsamp)[source]

Set the radius sampling rsamp. If set to 0, no sampling.

set_transp(transp)[source]

Set transp

Note

Useful only for mixing XYZ instances

shadows()[source]

Get the polygons defining the ‘shadow’ of this dataset.

It consists of a tuple of two elements:

  • the convex hull as a polygon,
  • a list of exclusion polygons that intersect the convex hull.

Therefore, a point in the shadow must be inside the convex hull polygon, and outside the exclusion polygons.

Returns: (hull_poly, [exclusion_poly1,…])
tocfg(cfg, section, param=None)[source]

Dump one or all parameters as options to a cfg section

  • cfg: ConfigParser object
  • section: Section of cfg
  • param: A single or a list of parameter names
togrid(grid=None, mask=False, cgrid=False, **kwargs)[source]

Interpolate to a regular grid

  • grid: The output grid. It can be either:

    • a (x,y) tuple or a grid or a MV2 variable with a grid,
    • None, thus guessed using grid()
  • mask: It can be either:

    • None, False or MV2.nomask: no masking
    • an array: this mask array is directly applied
    • a Shapes instance (or ShoreLine) or a single char GSHHS resolution (and optionally ‘s’ for Histolitt)
    • a callable function so that mask = thisfunc(mask, **kwmask)
    • a float: data with this value are masked
  • mask_<param>: <param> is passed to polygon_mask() to evaluate the mask from the polygons.

  • grid_<param>: <param> is passed to grid().

  • cgrid: If True, returns bathy at U- and V-points, else at T-points

  • Other keywords are passed to griddata() for regridding.

Return: (Zx,Zy) OR Z depending on cgrid.
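
Example (a hedged sketch; mygrid is an assumed cdms2 grid and ‘i’ a GSHHS resolution letter):
>>> bathy = xyz.togrid(grid=mygrid, mask='i')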

toxy(xo, yo, mask=None, outtype='tuple')[source]

Interpolate on random points using xy2xy()

  • xo,yo: Output positions

  • mask: It can be either:

    • None, False or MV2.nomask: no masking
    • a Shapes instance (or ShoreLine) or a single char GSHHS resolution (and optionally ‘s’ for Histolitt)
  • outtype: Define output type

    • "tuple": as a tuple (x, y, z)
    • "xyz": as xyz block
    • "XYZ": as an XYZ (or subclass) instance
transp

Transparency boolean attribute

x

Valid X positions

xmax

X max

xmin

X min

xy

Coordinates as a (2, npts) array

xyz

Coordinates and data as a (3, npts) array

y

Valid Y positions

ymax

Y max

ymin

Y min

z

Valid Z values

zmax

Z max

zmin

Z min

zone(poly=False, mask=True)[source]

Get xmin,ymin,xmax,ymax

  • poly: if True, return zone as a Polygon instance
class XYZMerger(*datasets, **kwargs)[source]

Bases: object

Mix different bathymetries
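
Example (a hedged sketch; xyz1, xyz2 and mygrid are assumed to already exist):
>>> from vacumm.misc.io import XYZMerger
>>> merger = XYZMerger(xyz1, xyz2)
>>> merged = merger.xyz            # coordinates and data as a (3, npts) array
>>> gridded = merger.togrid(mygrid)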

append(d)[source]

Append a dataset to the merger

clean()[source]

Remove all current datasets

copy()[source]
get_xyz(mask=True, **kwargs)[source]

Merge the current datasets

ids()[source]
merge(**kwargs)[source]

Shortcut to xyz()

plot(color=None, marker=None, mode='cluster', title='XYZ merger', show=True, colorbar=True, savefig=None, savefigs=None, legend=True, xmin=None, xmax=None, ymin=None, ymax=None, margin=5, xres=None, yres=None, **kwargs)[source]
  • alpha: Alpha transparency:

    • applied to all points if mode="cluster"
    • applied to hidden points if mode="data"
  • mode: Display mode:

    • "cluster": Points from different datasets have different colors and markers,
      and hidden points are transparent.
    • "data": Points have the same marker, colors depends on Z value and hidden
      points are masked.
  • marker: Define a single or several markers to be used.

  • legend: Show a legend if mode="cluster".

  • title: Title of the plot.

  • m: Basemap instance.

  • m_margin: Margin for m, relative to the mean resolution (see XYZ.resol())

  • m_<keywords>: Keywords are passed to map().

  • Extra keywords are passed to XYZ.plot().

remove(d)[source]

Remove a dataset from the merger

togrid(*args, **kwargs)[source]

Interpolate merged bathymetries to a grid

tolist()[source]

Return the merger as a list of datasets

xyz

Coordinates and data as a (3, npts) array

grib2nc(filepattern, varname)[source]

Currently for test purposes only.

grib_get_names(gribfile)[source]

Return the list of unique parameter names of a grib file (using the grib message’s shortName).

grib_read_files(filepattern, varname, time=None, select=None, torect=None, samp=None, grid=None, squeeze=None, atts=None, verbose=False, **kwargs)[source]

Read cdms2 variables through one or a set of grib files.

Examples:
>>> vardict = grib_read_files("r0_2010-%m-%d_00.grb", 'u',
        ('2010-08-10', '2010-08-15', 'cc'), samp=[2, 1, 3])
>>> vardict = grib_read_files("r0_2010-??-??_00.grb", dict(shortName='u'),
        select=dict(lon=(-10.0,-5.0), lat=slice(100,200)), grid=smallgrid)
>>> vardict = grib_read_files("myfiles*.grb", [dict(shortName=['u', 'u10']), dict(shortName=['v','v10'])])
Params:
  • filepattern: must be:
    • File pattern. See list_forecast_files() for more information.
    • One or more strings of the file(s) to be processed. Strings may contain wildcard characters.
  • varname: Name of the grib variable(s) to read.
    • If a simple name, it reads this variable using the grib message’s shortName.
    • If a list of names, it reads them all.

    If a name is a dict, then it is used as a grib selector, in which case the user should not specify selectors which may interfere with the select keyword (see select()).

  • time, optional: Time selector for files and data. This keyword is mandatory if filepattern has date patterns.

  • select, optional: An additional selector applied after data have been loaded. It can be a dictionary or a Selector instance (see create_selector()).

  • torect, optional: If True, try to convert the output grid to rectangular using curv2rect() (see ncread_var()).

  • samp, optional: Undersample rate as a list of the same size as the rank of the variable. Set values to 0, 1 for no undersampling.

  • grid, optional: A grid to regrid the variable on.

  • grid_<keyword>, optional: keyword is passed to regrid().

  • squeeze, optional: Argument passed to ncread_var() to squeeze out singleton axes.

  • atts: attributes dict (or list of attributes dict for each varname)

  • verbose: function to be called for logging (sys.stderr if True, disabled with False).

Return:

  • If varname is a list of names or dicts: a dict of the loaded variables as cdms2.tvariable.TransientVariable. The keys of this dict are filled with the corresponding varname value if it is a string, or with the loaded message’s shortName/name/parameterName.
  • Else: the loaded variable as a cdms2.tvariable.TransientVariable.

list_forecast_files(filepattern, time=None, check=True, nopat=False, patfreq=None, patfmtfunc=None, patmargin=None, verbose=False, sort=True)[source]

Get a list of forecast files according to a file pattern

Params:
  • filepattern: It can be either:

    • a global matching pattern ("file??.nc"),
    • a date pattern ("file%Y-%m-%d.nc"),
    • an url ("http://site.net/file.nc"),
    • a list of files.
  • time: A time selector (('2000', '2001', 'co')).

    Warning

    This argument is mandatory if filepattern is a date pattern, and not used if filepattern is of another type.

  • check, optional: Check if local files exist.

  • nopat, optional: Never consider that input patterns have date patterns.

  • patfreq, optional: Frequency of files to generate file names for each date when filepattern is a date pattern.

  • patfmtfunc, optional: Function to use in place of strftime() to generate file names. It must take as arguments a date pattern and a CDAT component time.

  • sort, optional: If True, files are sorted alphabetically after being listed; if a callable function, they are sorted using this function (files=sort(files)).

    Warning

    Files are sorted alphabetically by default!

Examples:
>>> 'Preferred way'
>>> list_forecast_files('mrsPRVMR_r0_%Y-%m-%d_00.nc', ('2010-08-06', '2010-08-15'))
>>> list_forecast_files('http://www.ifremer.fr/data/mrsPRVMR_r0_%Y-%m-%d_00.nc', ('2010-08-06', '2010-08-15'))
>>> list_forecast_files('mrsPRVMR_r0_%Y-%m-%d_*.nc', ('2010-08-06', '2010-08-15'))
>>> 'Possible way'
>>> list_forecast_files('mrsPRVMR_r0_2010-05-??_00.nc')
>>> list_forecast_files(['mrsPRVMR_r0_2010-05-??_00.nc', 'mrsPRVMR_r0_2010-05-??_60.nc'])
>>> 'Just to filter in existing files'
>>> list_forecast_files(['mrsPRVMR_r0_2010-05-06_00.nc', 'mrsPRVMR_r0_2010-05-07_00.nc'])
>>> 'Simple conversion to list'
>>> list_forecast_files('http://www.ifremer.fr/data/mrsPRVMR_r0_2010-05-06_00.nc')
nccache_get_time(f, timeid=None, ro=False)[source]

Get a time axis from cache or netcdf file

A time axis not in the cache is read using ncget_time(), then stored in the cache.

Params:
  • f: File object or name.
  • timeid, optional: Single or list of time ids for ncget_time().

Example:

>>> taxis = nccache_get_time('myfile.nc', ['time','time_counter'])
ncfind_axis(f, specs, ignorecase=True, regexp=False, **kwargs)[source]

Find an axis in a netcdf file using ncfind_obj()

ncfind_obj(f, specs, ignorecase=True, regexp=False, ids=None, searchmode=None, **kwargs)[source]

Find a variable or an axis in a netcdf file using a name, a list of names, or matching attributes such as standard_name, long_name and units.

Objects are checked using ncmatch_obj(). It first checks the standard_name, then the names (ids), the axis, and finally the long_names and units.

Example:
>>> f = cdms2.open('temp.nc')
>>> ncfind_obj(f, 'temp')
>>> ncfind_obj(f, ['temperature','temp'])
>>> ncfind_obj(f, ('temperature','TEMP'), ignorecase=False)
>>> ncfind_obj(f, dict(standard_name="sea_surface_temperature"))
>>> ncfind_obj(f, 'lon')
Params:
  • f: A cdms2.dataset.CdmsFile.
  • specs: A string or a list of strings to look for, or a dictionary with keys “name”, “standard_name”, “long_name”, “units” and “axis”.
  • ignorecase, optional: Ignore name case when searching variable.
  • regexp, optional: Interpret long_names and units as regular expressions that must be compiled.
  • searchmode, optional: Search order when specs is a dictionary and not an OrderedDict. It defaults to None or 'snlua', which means: standard_name -> name -> long_name -> units -> axis (first letters). If specs is an OrderedDict, it simply acts as a filter to restrict the search.
Return:

The first matching object name, or None if not found.

ncfind_var(f, id, ignorecase=True, regexp=False, **kwargs)[source]

Find a variable in a netcdf file using ncfind_obj()

ncget_axis(f, checker, ids=None, ro=False, checkaxis=False, **kwargs)[source]

Get an axis in a netcdf file by searching all axes and variables

If checker is a list, dict or tuple, ncfind_axis() is called directly to search for the axis within the file.

Param:
  • checker: Can be either

    • A generic name such as ‘x’ or ‘lon’,
    • A function to check that an object is an axis of appropriate type (such as islon()). This function must accept the ‘ro’ keyword (‘readonly’).
    • An argument to ncfind_axis(): list, dict, tuple.
  • ids, optional: A list of ids to focus search.

Return:

The axis or None if not found
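
Example (the file name and axis ids are hypothetical):
>>> import cdms2
>>> f = cdms2.open('myfile.nc')
>>> lon = ncget_axis(f, 'lon', ids=['longitude', 'lon'])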

ncget_fgrid(f, gg)[source]

Get the file grid that matches a transient grid or variable

Matching is checked using ids of longitudes and latitudes.

Params:
  • f: file name or object.
  • gg: cdms2 grid or variable with a grid.
Return:

A FileGrid instance or None

ncget_grid(f, ids=None, torect=False)[source]

Get a grid of a netcdf file

Params:
  • f: Netcdf file name or object.
  • ids, optional: List of ids to help searching.
ncget_lat(f, ids=None, checkaxis=False, ro=False)[source]

Get latitude axis of a netcdf file

Params:
  • f: Netcdf file name or object.
  • ids, optional: List of ids to help searching.
ncget_level(f, ids=None, checkaxis=False, ro=False)[source]

Get level axis of a netcdf file

Params:
  • f: Netcdf file name or object.
  • ids, optional: List of ids to help searching.
ncget_lon(f, ids=None, checkaxis=False, ro=False)[source]

Get longitude axis of a netcdf file

Params:
  • f: Netcdf file name or object.
  • ids, optional: List of ids to help searching.
ncget_time(f, ids=None, checkaxis=False, ro=False)[source]

Get time axis of a netcdf file

Params:
  • f: Netcdf file name or object.
  • ids, optional: List of ids to help searching.
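
These getters share the same calling pattern. Example (the file name and ids are hypothetical):
>>> lon = ncget_lon('myfile.nc')
>>> time = ncget_time('myfile.nc', ids=['time', 'time_counter'])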
ncget_var(f, *args, **kwargs)[source]

Get a variable object as returned by cdms2.dataset.CdmsFile.getVariable() which is equivalent to f[varname].

Return: A cdms2.fvariable.FileVariable or None if not found.
See: ncfind_var()
ncmatch_obj(obj, id=None, standard_name=None, long_name=None, units=None, axis=None, ignorecase=True, searchmode=None, **kwargs)[source]

Check if an MV2 object (typically from a netcdf file) matches names, standard_names, etc.

It first checks the standard_name, then the names (ids), the axis, and finally the long_names and units.

Params:
  • obj: A MV2 array.
  • standard_name, optional: List of possible standard_names.
  • id, optional: Name (id) of this array, which defaults to the id attribute.
  • axis, optional: Axis type, as one of ‘x’, ‘y’, ‘z’, ‘t’.
  • long_name, optional: List of possible long_names or callable expression (such as regular expression method).
  • units, optional: Same as long_names but for units.
Example:
>>> ncmatch_obj(sst, standard_name='sea_surface_temperature', id=['sst'])
>>> import re
>>> ncmatch_obj(sst, long_name=re.compile('sea surface temp').match)
ncread_axis(f, name, select=None, ignorecase=True, mode='raise')[source]

Read a 1D axis

Note

Please use ncread_var() to read 2D axes.

Params:
  • mode, optional: if 'raise' raises an IOError if not found, else returns None.
ncread_best_estimate(filepattern, varname, *args, **kwargs)[source]

Read the best estimate of a variable through a set of netcdf forecast files

Warning

This function is deprecated. Please use ncread_files(), swapping the first two arguments.

This is equivalent to:

ncread_files(varname, filepattern, *args, **kwargs)
ncread_files(filepattern, varname, time=None, timeid=None, toffset=None, select=None, atts=None, samp=None, grid=None, verbose=False, ignorecase=True, torect=True, squeeze=False, searchmode=None, nibeid=None, sort=True, nopat=False, patfreq=None, patfmtfunc=None, check=True, bestestimate=True, **kwargs)[source]

Read the best estimate of a variable through a set of netcdf files

Warning

Files are listed using function list_forecast_files(). Please read its documentation before using current function.

Examples:
>>> var = ncread_files("r0_2010-%m-%d_00.nc", 'xe',
    ('2010-08-10', '2010-08-15', 'cc'), samp=[2, 1, 3])
>>> var = ncread_files("http://www.net/r0_2010-%m-%d_00.nc", 'xe',
    ('2010-08-10', '2010-08-15', 'cc'),
    timeid='TIME', toffset=(1, 'day'))
>>> var = ncread_files("r0_2010-??-??_00.nc", 'xe',
    select=dict(lon=(-10,-5), z=slice(23,24)), grid=smallgrid)
>>> xe, sst = ncread_files("myfiles*.nc", [('xe', 'sla'),('sst','temp'),'u'])
Params:
  • varname: Name of the netcdf variable to read.

    • If a simple name, it reads this variable.
    • If a list of names, it reads them all.
    • If a list of list of names, each variable is searched for using the sublist of names.
  • filepattern: File pattern. See list_forecast_files() for more information.

  • time, optional: Time selector. This keyword is mandatory if filepattern has date patterns.

  • toffset: Skip the first time steps. See NcIterBestEstimate for more information.

  • select, optional: An additional selector for reading the variable. It can be a dictionary or a Selector instance (see create_selector()).

  • atts: attributes dict (or list of attributes dict for each varname) (see ncread_var().)

  • samp, optional: Undersample rate as a list of the same size as the rank of the variable. Set values to 0, 1 for no undersampling.

  • grid, optional: A grid to regrid the variable on.

  • grid_<keyword>, optional: keyword is passed to regrid().

  • timeid, optional: Time id (otherwise it is guessed).

  • ignorecase, optional: Ignore variable name case (see ncfind_var()).

  • torect, optional: If True, try to convert the output grid to rectangular using curv2rect() (see ncread_var()).

  • Extra kwargs are used to refine the selector initialized with select.

  • squeeze, optional: Argument passed to ncread_var() to squeeze out singleton axes.

  • searchmode, optional: Search order (see ncfind_obj()).

  • sort/nopat/patfreq/patfmtfunc/check, optional: These arguments are passed to list_forecast_files().

Raise:

NcIterBestEstimateError in case of error.

ncread_obj(f, name, *args, **kwargs)[source]

Read an arbitrary netcdf object (axis or variable)

ncread_var(f, vname, *args, **kwargs)[source]

Read a variable in a netcdf file and some more

In addition to a simple f(vname, *args, **kwargs):

  • vname can be a list of variable names, in which case the first one found is read, ignoring case by default.
  • If a variable is stored on a grid that is curvilinear but actually rectangular, its grid is converted to a rectangular grid.

If the variable is not found, an error is raised (see the mode keyword).

Params:
  • f: File descriptor.
  • vname: Variable name(s) (see ncfind_var()).
  • ignorecase, optional: Case insensitive search for the name of variable.
  • Other arguments and keywords are passed to f.
  • atts: Dictionary of attributes to apply.
  • squeeze, optional: A single argument (or a list of them) interpreted as a squeeze specification passed to squeeze_variable(), to squeeze out singleton axes.
  • torect, optional: If True, try to convert the output grid to rectangular using curv2rect().
  • mode, optional: if 'raise' raises an IOError if not found, else returns None.
Example:
>>> var = ncread_var(f, ['u', 'u2'], lon=(45, 47), atts={'id':'U'})
netcdf3()[source]

Turn netcdf4 writing off with cdms2

netcdf4(level=3, deflate=1, shuffle=1)[source]

Turn netcdf4 writing on and suppress compression warning
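
Example (a usage sketch; these switches are typically called once before writing netcdf files with cdms2):
>>> netcdf4(level=3, deflate=1, shuffle=1)  # compressed netcdf4 output
>>> netcdf3()                               # back to classic netcdf3 output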

write_snx(objects, snxfile, type='auto', mode='w', z=99, xfmt='%g', yfmt='%g', zfmt='%g', close=True)[source]

Write points, lines or polygons in a sinusX file
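
Example (a hedged sketch; poly1, poly2 and the output file name are assumptions):
>>> write_snx([poly1, poly2], 'shapes.snx', type='auto', mode='w')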