Data management core routines of psyplot.

Classes:

AbsoluteTimeDecoder(array)

AbsoluteTimeEncoder(array)

ArrayList([iterable, attrs, auto_update, ...])

Base class for creating a list of interactive arrays from a dataset

CFDecoder([ds, x, y, z, t])

Class that interprets the coordinates and attributes according to the CF conventions

DatasetAccessor(ds)

A dataset accessor to interface with the psyplot package

InteractiveArray(xarray_obj, *args, **kwargs)

Interactive psyplot accessor for the data array

InteractiveBase([plotter, arr_name, auto_update])

Class for the communication of a data object with a suitable plotter

InteractiveList(*args, **kwargs)

List of InteractiveArray instances that can be plotted itself

Signal([name, cls_signal])

Signal to connect functions to a specific event

UGridDecoder([ds, x, y, z, t])

Decoder for UGrid data sets

Functions:

decode_absolute_time(times)

encode_absolute_time(times)

get_filename_ds(ds[, dump, paths])

Return the filename corresponding to a dataset

get_index_from_coord(coord, base_index)

Function to return the coordinate as integer, integer array or slice

get_tdata(t_format, files)

Get the time information from file names

open_dataset(filename_or_obj[, decode_cf, ...])

Open an instance of xarray.Dataset.

open_mfdataset(paths[, decode_cf, ...])

Open multiple files as a single dataset.

setup_coords([arr_names, sort, dims])

Sets up the arr_names dictionary for the plot

to_netcdf(ds, *args, **kwargs)

Store the given dataset as a netCDF file

to_slice(arr)

Test whether arr is an integer array that can be replaced by a slice
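In plain Python, the idea behind to_slice can be sketched as follows (an illustration of the documented behaviour only; the real function operates on numpy arrays):

```python
def to_slice_sketch(arr):
    """Return a slice equivalent to the integer sequence ``arr``, or None.

    Illustrative sketch of the documented behaviour only; the real
    ``psyplot.data.to_slice`` operates on numpy arrays.
    """
    arr = list(arr)
    if not arr:
        return None  # edge-case handling here is an assumption of the sketch
    if len(arr) == 1:
        return slice(arr[0], arr[0] + 1)
    step = arr[1] - arr[0]
    if step == 0 or any(b - a != step for a, b in zip(arr, arr[1:])):
        return None  # not evenly stepped: cannot be replaced by a slice
    stop = arr[-1] + step
    if stop < 0:
        stop = None  # a negative stop would wrap around for negative steps
    return slice(arr[0], stop, step)
```

Replacing an evenly stepped index array by a slice avoids materializing the index array and lets xarray use cheap basic indexing.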

Data:

get_fname_funcs

functions to use to extract the file name from a data store

t_patterns

mapping that translates datetime format strings to regex patterns
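t_patterns feeds get_tdata: datetime format directives are translated to regular expressions so that timestamps can be pulled out of file names. A minimal version of that idea (the directive subset and patterns below are illustrative, not psyplot's exact table):

```python
import re
from datetime import datetime

# illustrative subset of a directive -> regex mapping (not psyplot's table)
T_PATTERNS = {
    "%Y": r"\d{4}",
    "%m": r"\d{2}",
    "%d": r"\d{2}",
    "%H": r"\d{2}",
}


def extract_time(t_format, fname):
    """Find the first substring of ``fname`` matching ``t_format`` and parse it."""
    pattern = re.escape(t_format)
    for directive, regex in T_PATTERNS.items():
        pattern = pattern.replace(re.escape(directive), regex)
    match = re.search(pattern, fname)
    if match is None:
        return None
    return datetime.strptime(match.group(0), t_format)
```

For example, `extract_time("%Y%m%d", "t2m_20150131.nc")` recovers the date encoded in the file name.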

class psyplot.data.AbsoluteTimeDecoder(array)[source]

Bases: NDArrayMixin

Attributes:

dtype

property dtype
class psyplot.data.AbsoluteTimeEncoder(array)[source]

Bases: NDArrayMixin

Attributes:

dtype

property dtype
class psyplot.data.ArrayList(iterable=[], attrs={}, auto_update=None, new_name=True)[source]

Bases: list

Base class for creating a list of interactive arrays from a dataset

This list contains and manages InteractiveArray instances

Parameters:
  • iterable (iterable) – The iterable (e.g. another list) defining this list

  • attrs (dict-like or iterable, optional) – Global attributes of this list

  • auto_update (bool) – Default: None. A boolean indicating whether this list shall automatically update the contained arrays when calling the update() method or not. See also the no_auto_update attribute. If None, the value from the 'lists.auto_update' key in the psyplot.rcParams dictionary is used.

  • new_name (bool or str) – If False and the arr_name attribute of the new array is already in the list, a ValueError is raised. If True, the name is kept when it is not already in the list; otherwise new_name is set to 'arr{0}' and the array is renamed. If a string, it is always used for renaming (whether the current array name is in use or not). '{0}' is replaced by a counter

Attributes:

all_dims

The dimensions for each of the arrays in this list

all_names

The variable names for each of the arrays in this list

arr_names

Names of the arrays (!not of the variables!) in this list

arrays

A list of all the xarray.DataArray instances in this list

coords

Names of the coordinates of the arrays in this list

coords_intersect

Coordinates of the arrays in this list that are used in all arrays

dims

Dimensions of the arrays in this list

dims_intersect

Dimensions of the arrays in this list that are used in all arrays

is_unstructured

A boolean for each array whether it is unstructured or not

logger

logging.Logger of this instance

names

Set of the variable names in this list

no_auto_update

bool.

with_plotter

The arrays in this instance that are visualized with a plotter

Methods:

append(value[, new_name])

Append a new array to the list

array_info([dump, paths, attrs, ...])

Get dimension information on your arrays

copy([deep])

Returns a copy of the list

draw()

Draws all the figures in this instance

extend(iterable[, new_name])

Add further arrays from an iterable to this list

from_dataset(base[, method, default_slice, ...])

Construct an ArrayList instance from an existing base dataset

from_dict(d[, alternative_paths, datasets, ...])

Create a list from the dictionary returned by array_info()

next_available_name([fmt_str, counter])

Create a new array out of the given format string

remove(arr)

Removes an array from the list

rename(arr[, new_name])

Rename an array to find a name that isn't already in the list

start_update([draw])

Conduct the registered plot updates

update([method, dims, fmt, replot, ...])

Update the coordinates and the plot

property all_dims

The dimensions for each of the arrays in this list

property all_names

The variable names for each of the arrays in this list

append(value, new_name=False)[source]

Append a new array to the list

Parameters:
  • value (InteractiveBase) – The data object to append to this list

  • new_name (bool or str) – If False and the arr_name attribute of the new array is already in the list, a ValueError is raised. If True, the name is kept when it is not already in the list; otherwise new_name is set to 'arr{0}' and the array is renamed. If a string, it is always used for renaming (whether the current array name is in use or not). '{0}' is replaced by a counter

Raises:
  • ValueError – If it was impossible to find a name that isn’t already in the list

  • ValueError – If new_name is False and the array is already in the list

See also

list.append, extend, rename

property arr_names

Names of the arrays (!not of the variables!) in this list

This attribute can be set with an iterable of unique names to change the array names of the data objects in this list.

array_info(dump=None, paths=None, attrs=True, standardize_dims=True, pwd=None, use_rel_paths=True, alternative_paths={}, ds_description={'fname', 'store'}, full_ds=True, copy=False, **kwargs)[source]

Get dimension information on your arrays

This method returns a dictionary containing information on the arrays in this instance

Parameters:
  • dump (bool) – If True and the dataset has not been dumped so far, it is dumped to a temporary file or the one generated by paths is used. If it is False or both, dump and paths are None, no data will be stored. If it is None and paths is not None, dump is set to True.

  • paths (iterable or True) – An iterator over filenames to use if a dataset has no filename. If paths is True, an iterator over temporary files will be created without raising a warning

  • attrs (bool, optional) – If True (default), the ArrayList.attrs and xarray.DataArray.attrs attributes are included in the returning dictionary

  • standardize_dims (bool, optional) – If True (default), the real dimension names in the dataset are replaced by x, y, z and t to be more general.

  • pwd (str) – Path to the working directory from where the data can be imported. If None, use the current working directory.

  • use_rel_paths (bool, optional) – If True (default), paths relative to the current working directory are used. Otherwise absolute paths to pwd are used

  • ds_description ('all' or set of {'fname', 'ds', 'num', 'arr', 'store'}) –

    Keys to describe the datasets of the arrays. If all, all keys are used. The key descriptions are

    fname

    the file name is inserted in the 'fname' key

    store

    the data store class and module is inserted in the 'store' key

    ds

    the dataset is inserted in the 'ds' key

    num

    The unique number assigned to the dataset is inserted in the 'num' key

    arr

    The array itself is inserted in the 'arr' key

  • full_ds (bool) – If True and 'ds' is in ds_description, the entire dataset is included. Otherwise, only the DataArray converted to a dataset is included

  • copy (bool) – If True, the arrays and datasets are deep copied

  • **kwargs – Any other keyword for the to_netcdf() function

  • path (str, path-like or file-like, optional) – Path to which to save this dataset. File-like objects are only supported by the scipy engine. If no path is provided, this function returns the resulting netCDF file as bytes; in this case, we need to use scipy, which does not support netCDF version 4 (the default format becomes NETCDF3_64BIT).

  • mode ({"w", "a"}, default: "w") – Write (‘w’) or append (‘a’) mode. If mode=’w’, any existing file at this location will be overwritten. If mode=’a’, existing variables will be overwritten.

  • format ({"NETCDF4", "NETCDF4_CLASSIC", "NETCDF3_64BIT", "NETCDF3_CLASSIC"}, optional) –

    File format for the resulting netCDF file:

    • NETCDF4: Data is stored in an HDF5 file, using netCDF4 API features.

    • NETCDF4_CLASSIC: Data is stored in an HDF5 file, using only netCDF 3 compatible API features.

    • NETCDF3_64BIT: 64-bit offset version of the netCDF 3 file format, which fully supports 2+ GB files, but is only compatible with clients linked against netCDF version 3.6.0 or later.

    • NETCDF3_CLASSIC: The classic netCDF 3 file format. It does not handle 2+ GB files very well.

    All formats are supported by the netCDF4-python library. scipy.io.netcdf only supports the last two formats.

    The default format is NETCDF4 if you are saving a file to disk and have the netCDF4-python library available. Otherwise, xarray falls back to using scipy to write netCDF files and defaults to the NETCDF3_64BIT format (scipy does not support netCDF4).

  • group (str, optional) – Path to the netCDF4 group in the given file to open (only works for format=’NETCDF4’). The group(s) will be created if necessary.

  • engine ({"netcdf4", "scipy", "h5netcdf"}, optional) – Engine to use when writing netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’ if writing to a file on disk.

  • encoding (dict, optional) –

    Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}, ...}. If encoding is specified the original encoding of the variables of the dataset is ignored.

    The h5netcdf engine supports both the NetCDF4-style compression encoding parameters {"zlib": True, "complevel": 9} and the h5py ones {"compression": "gzip", "compression_opts": 9}. This allows using any compression plugin installed in the HDF5 library, e.g. LZF.

Returns:

An ordered mapping from array names to dimensions and filename corresponding to the array

Return type:

dict

See also

from_dict

property arrays

A list of all the xarray.DataArray instances in this list

property coords

Names of the coordinates of the arrays in this list

property coords_intersect

Coordinates of the arrays in this list that are used in all arrays

copy(deep=False)[source]

Returns a copy of the list

Parameters:

deep (bool) – If False (default), only the list is copied and not the contained arrays, otherwise the contained arrays are deep copied

property dims

Dimensions of the arrays in this list

property dims_intersect

Dimensions of the arrays in this list that are used in all arrays

draw()[source]

Draws all the figures in this instance

extend(iterable, new_name=False)[source]

Add further arrays from an iterable to this list

Parameters:
  • iterable – Any iterable that contains InteractiveBase instances

  • new_name (bool or str) – If False and the arr_name attribute of the new array is already in the list, a ValueError is raised. If True, the name is kept when it is not already in the list; otherwise new_name is set to 'arr{0}' and the array is renamed. If a string, it is always used for renaming (whether the current array name is in use or not). '{0}' is replaced by a counter

Raises:
  • ValueError – If it was impossible to find a name that isn’t already in the list

  • ValueError – If new_name is False and the array is already in the list

See also

list.extend, append, rename

classmethod from_dataset(base, method='isel', default_slice=None, decoder=None, auto_update=None, prefer_list=False, squeeze=True, attrs=None, load=False, **kwargs)[source]

Construct an ArrayList instance from an existing base dataset

Parameters:
  • base (xarray.Dataset) – Dataset instance that is used as reference

  • method ({'isel', None, 'nearest', ...}) – Selection method of the xarray.Dataset to be used for setting the variables from the informations in dims. If method is ‘isel’, the xarray.Dataset.isel() method is used. Otherwise it sets the method parameter for the xarray.Dataset.sel() method.

  • auto_update (bool) – Default: None. A boolean indicating whether this list shall automatically update the contained arrays when calling the update() method or not. See also the no_auto_update attribute. If None, the value from the 'lists.auto_update' key in the psyplot.rcParams dictionary is used.

  • prefer_list (bool) – If True and multiple variable names per array are found, the InteractiveList class is used. Otherwise the arrays are put together into one InteractiveArray.

  • default_slice (indexer) – Index (e.g. 0 if method is ‘isel’) that shall be used for dimensions not covered by dims and furtherdims. If None, the whole slice will be used. Note that the default_slice is always based on the isel method.

  • decoder (CFDecoder or dict) –

    Arguments for the decoder. This can be one of

    • an instance of CFDecoder

    • a subclass of CFDecoder

    • a dictionary with keyword-arguments to the automatically determined decoder class

    • None to automatically set the decoder

  • squeeze (bool, optional) – Default True. If True and the created arrays have an axis of length 1, it is removed from the dimension list (e.g. an array with shape (3, 4, 1, 5) will be squeezed to shape (3, 4, 5))

  • attrs (dict, optional) – Meta attributes that shall be assigned to the selected data arrays (additional to those stored in the base dataset)

  • load (bool or dict) – If True, load the data from the dataset using the xarray.DataArray.load() method. If dict, those will be given to the above mentioned load method

  • arr_names (string, list of strings or dictionary) –

    Set the unique array names of the resulting arrays and (optionally) dimensions.

    • if string: same as list of strings (see below). Strings may include {0} which will be replaced by a counter.

    • list of strings: those will be used for the array names. The final number of dictionaries in the return depends in this case on dims and **furtherdims

    • dictionary: then nothing happens and a dict version of arr_names is returned.

  • sort (list of strings) – This parameter defines how the dictionaries are ordered. It has no effect if arr_names is a dictionary (use a dict for that). It can be a list of dimension strings matching to the dimensions in dims for the variable.

  • dims (dict) – Keys must be variable names of dimensions (e.g. time, level, lat or lon) or ‘name’ for the variable name you want to choose. Values must be values of that dimension or iterables of the values (e.g. lists). Note that strings will be put into a list. For example dims = {‘name’: ‘t2m’, ‘time’: 0} will result in one plot for the first time step, whereas dims = {‘name’: ‘t2m’, ‘time’: [0, 1]} will result in two plots, one for the first (time == 0) and one for the second (time == 1) time step.

  • **kwargs – The same as dims (those will update what is specified in dims)

Returns:

The list with the specified InteractiveArray instances that hold a reference to the given base

Return type:

ArrayList
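How the dims parameter (and **kwargs) fans out into one array per value combination can be illustrated without psyplot: strings and scalars count as a single value, other iterables expand via a cartesian product. This is a sketch of the documented behaviour, not the actual implementation:

```python
from itertools import product


def expand_dims(dims):
    """Expand a ``dims`` mapping into one dict per value combination.

    Sketch of the behaviour documented for ``from_dataset``/``setup_coords``:
    scalars and strings count as one value; other iterables fan out.
    """
    def as_list(value):
        if isinstance(value, str) or not hasattr(value, "__iter__"):
            return [value]
        return list(value)

    keys = list(dims)
    value_lists = [as_list(dims[key]) for key in keys]
    return [dict(zip(keys, combo)) for combo in product(*value_lists)]
```

So `{'name': 't2m', 'time': [0, 1]}` yields two dictionaries (and hence two arrays), while `{'name': 't2m', 'time': 0}` yields one.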

classmethod from_dict(d, alternative_paths={}, datasets=None, pwd=None, ignore_keys=['attrs', 'plotter', 'ds'], only=None, chname={}, **kwargs)[source]

Create a list from the dictionary returned by array_info()

This classmethod creates an ArrayList instance from a dictionary containing filenames, dimension information and array names

Parameters:
  • d (dict) – The dictionary holding the data

  • alternative_paths (dict or list or str) – A mapping from original filenames as used in d to filenames that shall be used instead. If alternative_paths is not None, datasets must be None. Paths must be accessible from the current working directory. If a list (or any other iterable) is provided, the file names are replaced in the order they appear in d (note that this is unsafe if d is not an ordered mapping)

  • datasets (dict or list or None) – A mapping from original filenames in d to the instances of xarray.Dataset to use. If it is an iterable, the same holds as for the alternative_paths parameter

  • pwd (str) – Path to the working directory from where the data can be imported. If None, use the current working directory.

  • ignore_keys (list of str) – Keys specified in this list are ignored and not seen as array information (note that attrs are used anyway)

  • only (string, list or callable) –

    Can be one of the following three things:

    • a string that represents a pattern to match the array names that shall be included

    • a list of array names to include

    • a callable with two arguments, a string and a dict such as

      def filter_func(arr_name: str, info: dict) -> bool:
          '''
          Filter the array names
      
          This function should return True if the array shall be
          included, else False
      
          Parameters
          ----------
          arr_name: str
              The array name (i.e. the ``arr_name`` attribute)
          info: dict
              The dictionary with the array informations. Common
              keys are ``'name'`` that points to the variable name
              and ``'dims'`` that points to the dimensions and
              ``'fname'`` that points to the file name
          '''
          return True or False
      

      The function should return True if the array shall be included, else False. This function will also be given to subsequent instances of InteractiveList objects that are contained in the returned value

  • chname (dict) – A mapping from variable names in the project to variable names that should be used instead

  • **kwargs (dict) – Any other parameter from the psyplot.data.open_dataset function

  • filename_or_obj (str, Path, file-like or DataStore) – Strings and Path objects are interpreted as a path to a netCDF file or an OpenDAP URL and opened with python-netCDF4, unless the filename ends with .gz, in which case the file is gunzipped and opened with scipy.io.netcdf (only netCDF3 supported). Byte-strings or file-like objects are opened by scipy.io.netcdf (netCDF3) or h5py (netCDF4/HDF).

  • chunks (int, dict, 'auto' or None, optional) – If chunks is provided, it is used to load the new dataset into dask arrays. chunks=-1 loads the dataset with dask using a single chunk for all arrays. chunks={} loads the dataset with dask using engine preferred chunks if exposed by the backend, otherwise with a single chunk for all arrays. In order to reproduce the default behavior of xr.open_zarr(...) use xr.open_dataset(..., engine='zarr', chunks={}). chunks='auto' will use dask auto chunking taking into account the engine preferred chunks. See dask chunking for more details.

  • cache (bool, optional) – If True, cache data loaded from the underlying datastore in memory as NumPy arrays when accessed to avoid reading from the underlying data- store multiple times. Defaults to True unless you specify the chunks argument to use dask, in which case it defaults to False. Does not change the behavior of coordinates corresponding to dimensions, which always load their data from disk into a pandas.Index.

  • decode_cf (bool, optional) – Whether to decode these variables, assuming they were saved according to CF conventions.

  • mask_and_scale (bool, optional) – If True, replace array values equal to _FillValue with NA and scale values according to the formula original_values * scale_factor + add_offset, where _FillValue, scale_factor and add_offset are taken from variable attributes (if they exist). If the _FillValue or missing_value attribute contains multiple values a warning will be issued and all array values matching one of the multiple values will be replaced by NA. This keyword may not be supported by all the backends.

  • decode_times (bool, optional) – If True, decode times encoded in the standard NetCDF datetime format into datetime objects. Otherwise, leave them encoded as numbers. This keyword may not be supported by all the backends.

  • decode_timedelta (bool, optional) – If True, decode variables and coordinates with time units in {“days”, “hours”, “minutes”, “seconds”, “milliseconds”, “microseconds”} into timedelta objects. If False, leave them encoded as numbers. If None (default), assume the same value of decode_time. This keyword may not be supported by all the backends.

  • use_cftime (bool, optional) – Only relevant if encoded dates come from a standard calendar (e.g. “gregorian”, “proleptic_gregorian”, “standard”, or not specified). If None (default), attempt to decode times to np.datetime64[ns] objects; if this is not possible, decode times to cftime.datetime objects. If True, always decode times to cftime.datetime objects, regardless of whether or not they can be represented using np.datetime64[ns] objects. If False, always decode times to np.datetime64[ns] objects; if this is not possible raise an error. This keyword may not be supported by all the backends.

  • concat_characters (bool, optional) – If True, concatenate along the last dimension of character arrays to form string arrays. Dimensions will only be concatenated over (and removed) if they have no corresponding variable and if they are only used as the last dimension of character arrays. This keyword may not be supported by all the backends.

  • decode_coords (bool or {"coordinates", "all"}, optional) –

    Controls which variables are set as coordinate variables:

    • ”coordinates” or True: Set variables referred to in the 'coordinates' attribute of the datasets or individual variables as coordinate variables.

    • ”all”: Set variables referred to in 'grid_mapping', 'bounds' and other attributes as coordinate variables.

    Only existing variables can be set as coordinates. Missing variables will be silently ignored.

  • drop_variables (str or iterable of str, optional) – A variable or list of variables to exclude from being parsed from the dataset. This may be useful to drop variables with problems or inconsistent values.

  • inline_array (bool, default: False) – How to include the array in the dask task graph. By default(inline_array=False) the array is included in a task by itself, and each chunk refers to that task by its key. With inline_array=True, Dask will instead inline the array directly in the values of the task graph. See dask.array.from_array().

  • chunked_array_type (str, optional) – Which chunked array type to coerce this dataset's arrays to. Defaults to 'dask' if installed, else whatever is registered via the ChunkManagerEntrypoint system. Experimental API that should not be relied upon.

  • from_array_kwargs (dict) – Additional keyword arguments passed on to the ChunkManagerEntrypoint.from_array method used to create chunked arrays, via whichever chunk manager is specified through the chunked_array_type kwarg. For example if dask.array.Array() objects are used for chunking, additional kwargs will be passed to dask.array.from_array(). Experimental API that should not be relied upon.

  • backend_kwargs (dict) – Additional keyword arguments passed on to the engine open function, equivalent to **kwargs.

  • **kwargs

    Additional keyword arguments passed on to the engine open function. For example:

    • ’group’: path to the netCDF4 group in the given file to open, given as a str; supported by “netcdf4”, “h5netcdf”, “zarr”.

    • ’lock’: resource lock to use when reading data from disk. Only relevant when using dask or another form of parallelism. By default, appropriate locks are chosen to safely read and write files with the currently active dask scheduler. Supported by “netcdf4”, “h5netcdf”, “scipy”.

    See engine open function for kwargs accepted by each specific engine.

  • engine ({'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'gdal'}, optional) – Engine to use when reading netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’.

  • gridfile (str) – The path to a separate grid file or a xarray.Dataset instance which may store the coordinates used in ds

Returns:

The list with the interactive objects

Return type:

psyplot.data.ArrayList
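The three accepted forms of the only parameter (pattern string, list of names, callable) can be normalized into a single predicate. A stand-alone sketch; note that matching string patterns with fnmatch-style wildcards is an assumption here, not a documented rule:

```python
from fnmatch import fnmatch


def make_filter(only):
    """Normalize the three accepted forms of ``only`` into one predicate.

    NOTE: treating string patterns as fnmatch-style wildcards is an
    assumption of this sketch; check the psyplot sources for the exact rule.
    """
    if only is None:
        return lambda arr_name, info: True  # no filter: include everything
    if callable(only):
        return only  # already a filter_func(arr_name, info) -> bool
    if isinstance(only, str):
        return lambda arr_name, info: fnmatch(arr_name, only)
    names = set(only)  # a list of array names to include
    return lambda arr_name, info: arr_name in names
```

The resulting predicate is then applied to every `(arr_name, info)` pair in the dictionary before the arrays are constructed.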

property is_unstructured

A boolean for each array whether it is unstructured or not

property logger

logging.Logger of this instance

property names

Set of the variable names in this list

next_available_name(fmt_str='arr{0}', counter=None)[source]

Create a new array out of the given format string

Parameters:
  • fmt_str (str) – The base string to use. '{0}' will be replaced by a counter

  • counter (iterable) – An iterable where the numbers should be drawn from. If None, range(100) is used

Returns:

A possible name that is not in the current project

Return type:

str
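The naming scheme is simple to sketch as a stand-alone function mirroring the documented behaviour (format string plus a counter, with range(100) as the default counter):

```python
def next_available_name(existing, fmt_str="arr{0}", counter=None):
    """Return the first formatted name not present in ``existing``.

    Stand-alone sketch of the documented behaviour of
    ``ArrayList.next_available_name``; not the actual implementation.
    """
    existing = set(existing)
    if counter is None:
        counter = range(100)
    for i in counter:
        name = fmt_str.format(i)
        if name not in existing:
            return name
    raise ValueError(
        "Could not find a name matching %r that is not already in use" % fmt_str
    )
```

For example, with `['arr0', 'arr1']` already in use, the next available name is `'arr2'`.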

property no_auto_update

bool. Boolean controlling whether the start_update() method is automatically called by the update() method

Examples

You can disable the automatic update via

>>> with data.no_auto_update:
...     data.update(time=1)
...     data.start_update()

To permanently disable the automatic update, simply set

>>> data.no_auto_update = True
>>> data.update(time=1)
>>> data.no_auto_update = False  # re-enable automatic updates
remove(arr)[source]

Removes an array from the list

Parameters:

arr (str or InteractiveBase) – The array name or the data object in this list to remove

Raises:

ValueError – If no array with the specified array name is in the list

rename(arr, new_name=True)[source]

Rename an array to find a name that isn’t already in the list

Parameters:
  • arr (InteractiveBase) – A InteractiveArray or InteractiveList instance whose name shall be checked

  • new_name (bool or str) – If False and the arr_name attribute of the new array is already in the list, a ValueError is raised. If True, the name is kept when it is not already in the list; otherwise new_name is set to 'arr{0}' and the array is renamed. If a string, it is always used for renaming (whether the current array name is in use or not). '{0}' is replaced by a counter

Returns:

  • InteractiveBase – arr with changed arr_name attribute

  • bool or None – True, if the array has been renamed, False if not and None if the array is already in the list

Raises:
  • ValueError – If it was impossible to find a name that isn’t already in the list

  • ValueError – If new_name is False and the array is already in the list

start_update(draw=None)[source]

Conduct the registered plot updates

This method starts the updates from what has been registered by the update() method. You can call this method if you did not set the auto_update parameter to True when calling the update() method and the no_auto_update attribute is True.

Parameters:

draw (bool or None) – If True, all the figures of the arrays contained in this list will be drawn at the end. If None, it defaults to the 'auto_draw' parameter in the psyplot.rcParams dictionary

update(method='isel', dims={}, fmt={}, replot=False, auto_update=False, draw=None, force=False, todefault=False, enable_post=None, **kwargs)[source]

Update the coordinates and the plot

This method updates all arrays in this list with the given coordinate values and formatoptions.

Parameters:
  • method ({'isel', None, 'nearest', ...}) – Selection method of the xarray.Dataset to be used for setting the variables from the informations in dims. If method is ‘isel’, the xarray.Dataset.isel() method is used. Otherwise it sets the method parameter for the xarray.Dataset.sel() method.

  • dims (dict) – Keys must be variable names of dimensions (e.g. time, level, lat or lon) or ‘name’ for the variable name you want to choose. Values must be values of that dimension or iterables of the values (e.g. lists). Note that strings will be put into a list. For example dims = {‘name’: ‘t2m’, ‘time’: 0} will result in one plot for the first time step, whereas dims = {‘name’: ‘t2m’, ‘time’: [0, 1]} will result in two plots, one for the first (time == 0) and one for the second (time == 1) time step.

  • replot (bool) – Boolean that determines whether the data specific formatoptions shall be updated in any case or not. Note, if dims is not empty or any coordinate keyword is in **kwargs, this will be set to True automatically

  • fmt (dict) – Keys may be any valid formatoption of the formatoptions in the plotter

  • force (str, list of str or bool) – If a formatoption key (i.e. string) or list of formatoption keys, they are definitely updated whether they changed or not. If True, all the formatoptions given in this call of the update() method are updated

  • todefault (bool) – If True, all changed formatoptions (except the registered ones) are updated to their default value as stored in the rc attribute

  • auto_update (bool) – Boolean determining whether or not the start_update() method is called after the end.

  • draw (bool or None) – If True, all the figures of the arrays contained in this list will be drawn at the end. If None, it defaults to the 'auto_draw' parameter in the psyplot.rcParams dictionary

  • enable_post (bool) – If not None, enable (True) or disable (False) the post formatoption in the plotters

  • **kwargs – Any other formatoption or dimension that shall be updated (additionally to those in fmt and dims)

Notes

When updating to a new array while trying to set the dimensions at the same time, you have to specify the new dimensions via the dims parameter, e.g.:

da.psy.update(name='new_name', dims={'new_dim': 3})

if 'new_dim' is not yet a dimension of this array

If the no_auto_update attribute is True and the given auto_update parameter is False, the updates of the plots are registered and conducted at the next call of the start_update() method or the next call of this method (if the auto_update parameter is then True).
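This register-then-conduct pattern can be sketched abstractly: update() merges its arguments into a pending queue and start_update() flushes it. The class and attribute names below are illustrative, not psyplot's internals:

```python
class DeferredUpdater:
    """Illustrative sketch of the update()/start_update() registration
    pattern described above; not psyplot's actual implementation."""

    def __init__(self, no_auto_update=False):
        self.no_auto_update = no_auto_update
        self._pending = {}   # registered dimension/formatoption changes
        self.applied = []    # history of conducted updates (for the demo)

    def update(self, auto_update=False, **kwargs):
        # register the requested changes
        self._pending.update(kwargs)
        # conduct them now unless updating is deferred
        if auto_update or not self.no_auto_update:
            self.start_update()

    def start_update(self):
        if self._pending:
            self.applied.append(dict(self._pending))
            self._pending.clear()
```

With no_auto_update set, several update() calls accumulate into one pending change set that a single start_update() conducts at once, which is what the context-manager example above exploits.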

property with_plotter

The arrays in this instance that are visualized with a plotter

class psyplot.data.CFDecoder(ds=None, x=None, y=None, z=None, t=None)[source]

Bases: object

Class that interprets the coordinates and attributes according to the CF conventions

Methods:

can_decode(ds, var)

Class method to determine whether the object can be decoded by this decoder class.

clear_cache()

Clear any cached data.

correct_dims(var[, dims, remove])

Expands the dimensions to match the dims in the variable

decode_coords(ds[, gridfile])

Sets the coordinates and bounds in a dataset

decode_ds(ds, *args, **kwargs)

Static method to decode coordinates and time information

get_cell_node_coord(var[, coords, axis, nans])

Checks whether the bounds in the variable attribute are triangular

get_coord_idims(coords)

Get the slicers for the given coordinates from the base dataset

get_coord_info(var, dimname, coord, coords, what)

_summary_

get_decoder(ds, var, *args, **kwargs)

Class method to get the right decoder class that can decode the given dataset and variable

get_grid_type_info(var, coords)

Get info on the grid type

get_idims(arr[, coords])

Get the coordinates in the ds dataset as int or slice

get_metadata_for_section(var, section, coords)

Get the metadata for a specific section

get_metadata_for_variable(var[, coords, ...])

Get the metadata information on a variable.

get_metadata_sections(var)

Get the metadata sections for a variable.

get_plotbounds(coord[, kind, ignore_shape])

Get the bounds of a coordinate

get_projection_info(var, coords)

Get info on the projection

get_t(var[, coords])

Get the time coordinate of a variable

get_t_metadata(var, coords)

Get the temporal metadata for a variable.

get_tname(var[, coords])

Get the name of the t-dimension

get_triangles(var[, coords, convert_radian, ...])

Get the triangles for the variable

get_variable_by_axis(var, axis[, coords])

Return the coordinate matching the specified axis

get_x(var[, coords])

Get the x-coordinate of a variable

get_x_metadata(var, coords)

Get the metadata for spatial x-dimension.

get_xname(var[, coords])

Get the name of the x-dimension

get_y(var[, coords])

Get the y-coordinate of a variable

get_y_metadata(var, coords)

Get the metadata for spatial y-dimension.

get_yname(var[, coords])

Get the name of the y-dimension

get_z(var[, coords])

Get the vertical (z-) coordinate of a variable

get_z_metadata(var, coords)

Get the vertical level metadata for a variable.

get_zname(var[, coords])

Get the name of the z-dimension

is_circumpolar(var)

Test if a variable is on a circumpolar grid

is_unstructured(var)

Test if a variable is on an unstructured grid

register_decoder(decoder_class[, pos])

Register a new decoder

standardize_dims(var[, dims])

Replace the coordinate names through x, y, z and t

Attributes:

logger

logging.Logger of this instance

supports_spatial_slicing

True if the data of the CFDecoder supports the extraction of a subset of the data based on the indices.

classmethod can_decode(ds, var)[source]

Class method to determine whether the object can be decoded by this decoder class.

Parameters:
Returns:

True if the decoder can decode the given array var. Otherwise False

Return type:

bool

Notes

The default implementation returns True for any argument. Subclass this method to be specific on what type of data your decoder can decode

clear_cache()[source]

Clear any cached data. The default method does nothing but can be reimplemented by subclasses to clear data that has been computed.

correct_dims(var, dims={}, remove=True)[source]

Expands the dimensions to match the dims in the variable

Parameters:
  • var (xarray.Variable) – The variable to get the data for

  • dims (dict) – a mapping from dimension to the slices

  • remove (bool) – If True, dimensions in dims that are not in the dimensions of var are removed
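The filtering behaviour of the remove parameter can be illustrated with a small sketch (an illustration of the idea only, not the actual psyplot implementation):

```python
def correct_dims_sketch(var_dims, dims, remove=True):
    """Keep the slicers in ``dims`` that apply to ``var_dims``.

    If ``remove`` is True, entries for dimensions that ``var_dims``
    does not contain are dropped.
    """
    if remove:
        return {d: s for d, s in dims.items() if d in var_dims}
    return dict(dims)


# 'lev' is not a dimension of the variable and is therefore removed
correct_dims_sketch(("time", "lat", "lon"), {"time": 0, "lev": 2})
# → {'time': 0}
```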

static decode_coords(ds, gridfile=None)[source]

Sets the coordinates and bounds in a dataset

This static method sets those coordinates and bounds that are marked in the netCDF attributes as coordinates in ds (without deleting them from the variable attributes, because this information is necessary for visualizing the data correctly)

Parameters:
  • ds (xarray.Dataset) – The dataset to decode

  • gridfile (str) – The path to a separate grid file or a xarray.Dataset instance which may store the coordinates used in ds

Returns:

ds with additional coordinates

Return type:

xarray.Dataset

classmethod decode_ds(ds, *args, **kwargs)[source]

Static method to decode coordinates and time information

This method interprets absolute time information (stored with units 'day as %Y%m%d.%f') and coordinates

Parameters:
  • ds (xarray.Dataset) – The dataset to decode

  • gridfile (str) – The path to a separate grid file or a xarray.Dataset instance which may store the coordinates used in ds

  • decode_times (bool, optional) – If True, decode times encoded in the standard NetCDF datetime format into datetime objects. Otherwise, leave them encoded as numbers.

  • decode_coords (bool, optional) – If True, decode the ‘coordinates’ attribute to identify coordinates in the resulting dataset.

Returns:

The decoded dataset

Return type:

xarray.Dataset
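To make the 'day as %Y%m%d.%f' convention concrete, here is a minimal sketch of how a single value could be decoded (a simplified illustration; the actual AbsoluteTimeDecoder operates on whole arrays):

```python
from datetime import datetime, timedelta


def decode_absolute_time_value(value):
    """Decode a float like 20190102.5 ('day as %Y%m%d.%f')."""
    day = int(value)  # the %Y%m%d part
    frac = value - day  # the fractional part of the day
    return datetime.strptime(str(day), "%Y%m%d") + timedelta(days=frac)


decode_absolute_time_value(20190102.5)  # → noon on 2019-01-02
```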

get_cell_node_coord(var, coords=None, axis='x', nans=None)[source]

Checks whether the bounds in the variable attribute are triangular

Parameters:
  • var (xarray.Variable or xarray.DataArray) – The variable to check

  • coords (dict) – Coordinates to use. If None, the coordinates of the dataset in the ds attribute are used.

  • axis ({'x', 'y'}) – The spatial axis to check

  • nans ({None, 'skip', 'only'}) – Determines whether values with nan shall be left (None), skipped ('skip') or shall be the only one returned ('only')

Returns:

the bounds coordinate (if existent)

Return type:

xarray.DataArray or None

get_coord_idims(coords)[source]

Get the slicers for the given coordinates from the base dataset

This method converts coords to slicers (list of integers or slice objects)

Parameters:

coords (dict) – A subset of the ds.coords attribute of the base dataset ds

Returns:

Mapping from coordinate name to integer, list of integer or slice

Return type:

dict
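The core idea, reduced to plain Python (an illustration, not the actual implementation), is to look up the position of each selected coordinate value in the base coordinate:

```python
def coord_positions(base_values, selected_values):
    """Map selected coordinate values to integer positions in the base."""
    positions = {v: i for i, v in enumerate(base_values)}
    return [positions[v] for v in selected_values]


coord_positions([10, 20, 30, 40], [20, 40])  # → [1, 3]
```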

get_coord_info(var: DataArray, dimname: str, coord: DataArray, coords: Dict, what: str) Dict[str, str][source]

Get the metadata information for a single coordinate

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • dimname (str) – The name of the dimension of var to get the info for

  • coord (Union[xr.Variable, xr.DataArray]) – The coordinate to get the info from

  • coords (Dict) – Other coordinates in the dataset

  • what (str) – A label describing what this coordinate represents

Returns:

The coordinate infos

Return type:

Dict[str, str]

Raises:

ValueError – When the coordinate specifies boundaries but they could not be found in the given coords

classmethod get_decoder(ds, var, *args, **kwargs)[source]

Class method to get the right decoder class that can decode the given dataset and variable

Parameters:
Returns:

The decoder for the given dataset that can decode the variable var

Return type:

CFDecoder

get_grid_type_info(var: DataArray, coords: Dict) Dict[str, str][source]

Get info on the grid type

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • coords (Dict) – Other coordinates in the dataset

Returns:

The info on the grid type

Return type:

Dict[str, str]

get_idims(arr, coords=None)[source]

Get the coordinates in the ds dataset as int or slice

This method returns a mapping from the coordinate names of the given arr to an integer, slice or an array of integer that represent the coordinates in the ds dataset and can be used to extract the given arr via the xarray.Dataset.isel() method.

Parameters:
  • arr (xarray.DataArray) – The data array for which to get the dimensions as integers, slices or list of integers from the dataset in the base attribute

  • coords (iterable) – The coordinates to use. If not given all coordinates in the arr.coords attribute are used

Returns:

Mapping from coordinate name to integer, list of integer or slice

Return type:

dict
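Evenly spaced integer indices can be compressed into a slice, similar in spirit to the module-level to_slice() function. A minimal sketch of that idea (not the actual implementation):

```python
def to_slice_sketch(indices):
    """Return an equivalent slice for evenly spaced indices, else None."""
    if len(indices) == 1:
        return slice(indices[0], indices[0] + 1)
    step = indices[1] - indices[0]
    if step and all(b - a == step for a, b in zip(indices, indices[1:])):
        return slice(indices[0], indices[-1] + step, step)
    return None


to_slice_sketch([2, 3, 4])  # → slice(2, 5, 1)
to_slice_sketch([1, 2, 4])  # → None (not evenly spaced)
```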

get_metadata_for_section(var: DataArray, section: str, coords: Dict) Dict[str, str][source]

Get the metadata for a specific section

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • section (str) – The section name

  • coords (Dict) – Other coordinates in the dataset

Returns:

A mapping from metadata name to value for this section.

Return type:

Dict[str, str]

get_metadata_for_variable(var: DataArray, coords: Dict | None = None, fail_on_error: bool = False, include_tracebacks: bool = False) Dict[str, Dict[str, str]][source]

Get the metadata information on a variable.

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • coords (Dict, optional) – The coordinates to use. If none, we’ll fallback to the coordinates of the base dataset.

  • fail_on_error (bool, default False) – If True, an error is raised when an error occurs. Otherwise it is captured and entered as an attribute to the metadata.

  • include_tracebacks (bool, default False) – If True, the full traceback of the error is included

Returns:

A mapping from metadata sections to the metadata attributes of each section.

Return type:

Dict[str, Dict[str, str]]

get_metadata_sections(var: DataArray) List[str][source]

Get the metadata sections for a variable.

Parameters:

var (xarray.DataArray) – The data array to get the metadata for

Returns:

The sections for the metadata information

Return type:

List[str]

get_plotbounds(coord, kind=None, ignore_shape=False)[source]

Get the bounds of a coordinate

This method first checks the 'bounds' attribute of the given coord and if it fails, it calculates them.

Parameters:
  • coord (xarray.Coordinate) – The coordinate to get the bounds for

  • kind (str) – The interpolation method (see scipy.interpolate.interp1d()) that is used in case of a 2-dimensional coordinate

  • ignore_shape (bool) – If True and the coord has a 'bounds' attribute, this attribute is returned without further check. Otherwise an attempt is made to bring the 'bounds' into a format suitable for (e.g.) the matplotlib.pyplot.pcolormesh() function.

Returns:

bounds – The bounds with the same number of dimensions as coord but one additional element along each dimension (i.e. if coord has shape (4, ), bounds will have shape (5, ), and if coord has shape (4, 5), bounds will have shape (5, 6))

Return type:

np.ndarray
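For a 1D coordinate without a 'bounds' attribute, the calculation amounts to taking midpoints between cell centers and extrapolating at the ends. An illustrative sketch (not the actual implementation):

```python
def center_to_bounds(centers):
    """Compute n + 1 cell boundaries from n 1D cell centers."""
    mids = [(a + b) / 2 for a, b in zip(centers, centers[1:])]
    first = centers[0] - (mids[0] - centers[0])  # extrapolate left edge
    last = centers[-1] + (centers[-1] - mids[-1])  # extrapolate right edge
    return [first] + mids + [last]


center_to_bounds([0.0, 1.0, 2.0, 3.0])  # → [-0.5, 0.5, 1.5, 2.5, 3.5]
```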

get_projection_info(var: DataArray, coords: Dict) Dict[str, str][source]

Get info on the projection

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • coords (Dict) – Other coordinates in the dataset

Returns:

The grid mapping attributes

Return type:

Dict[str, str]

Raises:

KeyError – when the variable specified by the grid_mapping is not part of the given coords

get_t(var, coords=None)[source]

Get the time coordinate of a variable

This method searches for the time coordinate in the ds. It first checks whether there is one dimension that holds an 'axis' attribute with ‘T’, otherwise it looks whether there is an intersection between the t attribute and the variables dimensions, otherwise it returns the coordinate corresponding to the first dimension of var

Parameters:

  • var (xarray.Variable) – The variable to get the time coordinate for

  • coords (dict) – Coordinates to use. If None, the coordinates of the dataset in the ds attribute are used.

Returns:

The time coordinate or None if no time coordinate could be found

Return type:

xarray.Coordinate or None

get_t_metadata(var: DataArray, coords: Dict) Dict[str, str][source]

Get the temporal metadata for a variable.

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • coords (Dict) – The coordinates to use

Returns:

A mapping from metadata name to value.

Return type:

Dict[str, str]

get_tname(var, coords=None)[source]

Get the name of the t-dimension

This method gives the name of the time dimension

Parameters:
  • var (xarray.Variables) – The variable to get the dimension for

  • coords (dict) – The coordinates to use for checking the axis attribute. If None, they are not used

Returns:

The coordinate name or None if no time coordinate could be found

Return type:

str or None

See also

get_t

get_triangles(var, coords=None, convert_radian=True, copy=False, src_crs=None, target_crs=None, nans=None, stacklevel=1)[source]

Get the triangles for the variable

Parameters:
  • var (xarray.Variable or xarray.DataArray) – The variable to use

  • coords (dict) – Alternative coordinates to use. If None, the coordinates of the ds dataset are used

  • convert_radian (bool) – If True and the coordinate has units in ‘radian’, those are converted to degrees

  • copy (bool) – If True, vertice arrays are copied

  • src_crs (cartopy.crs.Crs) – The source projection of the data. If not None, a transformation to the given target_crs will be done

  • target_crs (cartopy.crs.Crs) – The target projection for which the triangles shall be transformed. Must only be provided if the src_crs is not None.

  • nans ({None, 'skip', 'only'}) – Determines whether values with nan shall be left (None), skipped ('skip') or shall be the only one returned ('only')

Returns:

The spatial triangles of the variable

Return type:

matplotlib.tri.Triangulation

Raises:

ValueError – If src_crs is not None and target_crs is None

get_variable_by_axis(var, axis, coords=None)[source]

Return the coordinate matching the specified axis

This method uses the 'axis' attribute of the coordinates to return the corresponding coordinate of the given variable

Parameters:

  • var (xarray.Variable) – The variable to get the dimension for

  • axis ({‘x’, ‘y’, ‘z’, ‘t’}) – The axis string that identifies the dimension

  • coords (dict) – Coordinates to use. If None, the coordinates of the dataset in the ds attribute are used.

Returns:

The coordinate for var that matches the given axis or None if no coordinate with the right axis could be found.

Return type:

xarray.Coordinate or None

Notes

This is a rather low-level function that only interprets the CF conventions. It is used by the get_x(), get_y(), get_z() and get_t() methods

Warning

If none of the coordinates has an 'axis' attribute, we use the 'coordinates' attribute of var (if existent). Since, however, the CF conventions do not determine the order in which the coordinates are saved, we fall back to pattern matching for latitude ('lat') and longitude ('lon'). If these patterns do not match, we interpret the coordinates such that x: -1, y: -2, z: -3. This is not entirely safe for awkward dimension names, but works for most cases. If you want to be a hundred percent sure, use the x, y, z and t attributes.

See also

get_x, get_y, get_z, get_t
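The lookup strategy described above can be sketched as follows (an illustration of the CF-style heuristic, not the actual psyplot code):

```python
import re


def find_axis_coord(axis, coord_attrs):
    """Find a coordinate name for ``axis`` ('x', 'y', 'z' or 't').

    ``coord_attrs`` maps coordinate names to their attribute dicts.
    """
    # 1. prefer an explicit CF 'axis' attribute
    for name, attrs in coord_attrs.items():
        if attrs.get("axis", "").lower() == axis:
            return name
    # 2. fall back to lat/lon name pattern matching
    patterns = {"x": "lon", "y": "lat"}
    if axis in patterns:
        for name in coord_attrs:
            if re.search(patterns[axis], name, re.IGNORECASE):
                return name
    return None


find_axis_coord("t", {"time": {"axis": "T"}})  # → 'time'
find_axis_coord("x", {"longitude": {}, "latitude": {}})  # → 'longitude'
```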

get_x(var, coords=None)[source]

Get the x-coordinate of a variable

This method searches for the x-coordinate in the ds. It first checks whether there is one dimension that holds an 'axis' attribute with ‘X’, otherwise it looks whether there is an intersection between the x attribute and the variables dimensions, otherwise it returns the coordinate corresponding to the last dimension of var

Parameters:

  • var (xarray.Variable) – The variable to get the x-coordinate for

  • coords (dict) – Coordinates to use. If None, the coordinates of the dataset in the ds attribute are used.

Returns:

The x-coordinate or None if it could not be found

Return type:

xarray.Coordinate or None

get_x_metadata(var: DataArray, coords: Dict) Dict[str, str][source]

Get the metadata for spatial x-dimension.

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • coords (Dict) – The coordinates to use

Returns:

A mapping from metadata name to value.

Return type:

Dict[str, str]

get_xname(var, coords=None)[source]

Get the name of the x-dimension

This method gives the name of the x-dimension (which is not necessarily the name of the coordinate if the variable has a coordinate attribute)

Parameters:
  • var (xarray.Variables) – The variable to get the dimension for

  • coords (dict) – The coordinates to use for checking the axis attribute. If None, they are not used

Returns:

The coordinate name

Return type:

str

See also

get_x

get_y(var, coords=None)[source]

Get the y-coordinate of a variable

This method searches for the y-coordinate in the ds. It first checks whether there is one dimension that holds an 'axis' attribute with ‘Y’, otherwise it looks whether there is an intersection between the y attribute and the variables dimensions, otherwise it returns the coordinate corresponding to the second last dimension of var (or the last if the dimension of var is one-dimensional)

Parameters:

  • var (xarray.Variable) – The variable to get the y-coordinate for

  • coords (dict) – Coordinates to use. If None, the coordinates of the dataset in the ds attribute are used.

Returns:

The y-coordinate or None if it could not be found

Return type:

xarray.Coordinate or None

get_y_metadata(var: DataArray, coords: Dict) Dict[str, str][source]

Get the metadata for spatial y-dimension.

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • coords (Dict) – The coordinates to use

Returns:

A mapping from metadata name to value.

Return type:

Dict[str, str]

get_yname(var, coords=None)[source]

Get the name of the y-dimension

This method gives the name of the y-dimension (which is not necessarily the name of the coordinate if the variable has a coordinate attribute)

Parameters:
  • var (xarray.Variables) – The variable to get the dimension for

  • coords (dict) – The coordinates to use for checking the axis attribute. If None, they are not used

Returns:

The coordinate name

Return type:

str

See also

get_y

get_z(var, coords=None)[source]

Get the vertical (z-) coordinate of a variable

This method searches for the z-coordinate in the ds. It first checks whether there is one dimension that holds an 'axis' attribute with ‘Z’, otherwise it looks whether there is an intersection between the z attribute and the variables dimensions, otherwise it returns the coordinate corresponding to the third last dimension of var (or the second last or last if var is two or one-dimensional)

Parameters:

  • var (xarray.Variable) – The variable to get the z-coordinate for

  • coords (dict) – Coordinates to use. If None, the coordinates of the dataset in the ds attribute are used.

Returns:

The z-coordinate or None if no z coordinate could be found

Return type:

xarray.Coordinate or None

get_z_metadata(var: DataArray, coords: Dict) Dict[str, str][source]

Get the vertical level metadata for a variable.

Parameters:
  • var (xarray.DataArray) – The data array to get the metadata for

  • coords (Dict) – The coordinates to use

Returns:

A mapping from metadata name to value.

Return type:

Dict[str, str]

get_zname(var, coords=None)[source]

Get the name of the z-dimension

This method gives the name of the z-dimension (which is not necessarily the name of the coordinate if the variable has a coordinate attribute)

Parameters:
  • var (xarray.Variables) – The variable to get the dimension for

  • coords (dict) – The coordinates to use for checking the axis attribute. If None, they are not used

Returns:

The coordinate name or None if no vertical coordinate could be found

Return type:

str or None

See also

get_z

is_circumpolar(var)[source]

Test if a variable is on a circumpolar grid

Parameters:

var (xarray.Variable or xarray.DataArray) – The variable to check

Returns:

True if the variable is on a circumpolar grid, otherwise False

Return type:

bool

is_unstructured(var)[source]

Test if a variable is on an unstructured grid

Parameters:

var (xarray.Variable or xarray.DataArray) – The variable to check

Returns:

True if the variable is on an unstructured grid, otherwise False

Return type:

bool

Notes

Currently this is the same as the is_triangular() method, but may change in the future to support hexagonal grids

property logger

logging.Logger of this instance

static register_decoder(decoder_class, pos=0)[source]

Register a new decoder

This function registers a decoder class to use

Parameters:
  • decoder_class (type) – The class inherited from the CFDecoder

  • pos (int) – The position where to register the decoder (by default, the first position)

standardize_dims(var, dims={})[source]

Replace the coordinate names through x, y, z and t

Parameters:
  • var (xarray.Variable) – The variable to use the dimensions of

  • dims (dict) – The dictionary to use for replacing the original dimensions

Returns:

The dictionary with replaced dimensions

Return type:

dict
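The replacement can be illustrated with a sketch, assuming a mapping from axis letters to coordinate names such as the decoder's x, y, z and t attributes provide (an illustration only, not the actual implementation):

```python
def standardize_dims_sketch(dims, axis_names):
    """Replace coordinate names in ``dims`` by their axis letters.

    ``axis_names`` maps axis letters to coordinate names,
    e.g. {'t': 'time', 'x': 'lon'}.
    """
    reverse = {coord: axis for axis, coord in axis_names.items()}
    return {reverse.get(d, d): v for d, v in dims.items()}


standardize_dims_sketch({"time": 2, "lon": slice(0, 5)},
                        {"t": "time", "x": "lon"})
# → {'t': 2, 'x': slice(0, 5)}
```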

supports_spatial_slicing = True

True if the data of the CFDecoder supports the extraction of a subset of the data based on the indices.

class psyplot.data.DatasetAccessor(ds)[source]

Bases: object

A dataset accessor to interface with the psyplot package

Methods:

copy([deep])

Copy the array

create_list(*args, **kwargs)

Create a psyplot.data.ArrayList with arrays from this dataset

to_array(*args, **kwargs)

Deprecated version of to_dataarray

Attributes:

data_store

The xarray.backends.common.AbstractStore used to save the dataset

filename

The name of the file that stores this dataset

num

A unique number for the dataset

plot

An object to generate new plots from this dataset

copy(deep=False)[source]

Copy the array

This method returns a copy of the underlying array in the arr attribute. It is more stable because it creates a new psy accessor

create_list(*args, **kwargs)[source]

Create a psyplot.data.ArrayList with arrays from this dataset

Parameters:
  • base (xarray.Dataset) – Dataset instance that is used as reference

  • method ({'isel', None, 'nearest', ...}) – Selection method of the xarray.Dataset to be used for setting the variables from the information in dims. If method is ‘isel’, the xarray.Dataset.isel() method is used. Otherwise it sets the method parameter for the xarray.Dataset.sel() method.

  • auto_update (bool) – Default: None. A boolean indicating whether this list shall automatically update the contained arrays when calling the update() method or not. See also the no_auto_update attribute. If None, the value from the 'lists.auto_update' key in the psyplot.rcParams dictionary is used.

  • prefer_list (bool) – If True and multiple variable names per array are found, the InteractiveList class is used. Otherwise the arrays are put together into one InteractiveArray.

  • default_slice (indexer) – Index (e.g. 0 if method is ‘isel’) that shall be used for dimensions not covered by dims and furtherdims. If None, the whole slice will be used. Note that the default_slice is always based on the isel method.

  • decoder (CFDecoder or dict) –

    Arguments for the decoder. This can be one of

    • an instance of CFDecoder

    • a subclass of CFDecoder

    • a dictionary with keyword-arguments to the automatically determined decoder class

    • None to automatically set the decoder

  • squeeze (bool, optional) – Default True. If True and the created arrays have an axis of length 1, it is removed from the dimension list (e.g. an array with shape (3, 4, 1, 5) will be squeezed to shape (3, 4, 5))

  • attrs (dict, optional) – Meta attributes that shall be assigned to the selected data arrays (additional to those stored in the base dataset)

  • load (bool or dict) – If True, load the data from the dataset using the xarray.DataArray.load() method. If dict, those will be given to the above mentioned load method

  • arr_names (string, list of strings or dictionary) –

    Set the unique array names of the resulting arrays and (optionally) dimensions.

    • if string: same as list of strings (see below). Strings may include {0} which will be replaced by a counter.

    • list of strings: those will be used for the array names. The final number of dictionaries in the return depends in this case on the dims and **furtherdims

    • dictionary: Then nothing happens and a dict version of arr_names is returned.

  • sort (list of strings) – This parameter defines how the dictionaries are ordered. It has no effect if arr_names is a dictionary (use a dict for that). It can be a list of dimension strings matching to the dimensions in dims for the variable.

  • dims (dict) – Keys must be variable names of dimensions (e.g. time, level, lat or lon) or ‘name’ for the variable name you want to choose. Values must be values of that dimension or iterables of the values (e.g. lists). Note that strings will be put into a list. For example dims = {‘name’: ‘t2m’, ‘time’: 0} will result in one plot for the first time step, whereas dims = {‘name’: ‘t2m’, ‘time’: [0, 1]} will result in two plots, one for the first (time == 0) and one for the second (time == 1) time step.

  • **kwargs – The same as dims (those will update what is specified in dims)

Returns:

The list with the specified InteractiveArray instances that hold a reference to the given base

Return type:

ArrayList
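How list-valued entries in dims produce one array per combination can be illustrated with a small sketch (the real logic lives in setup_coords(); this is an illustration only):

```python
from itertools import product


def expand_dims(dims):
    """Expand list-valued entries into one dict per combination."""
    keys = list(dims)
    iterables = [v if isinstance(v, list) else [v] for v in dims.values()]
    return [dict(zip(keys, combo)) for combo in product(*iterables)]


expand_dims({"name": "t2m", "time": [0, 1]})
# → [{'name': 't2m', 'time': 0}, {'name': 't2m', 'time': 1}]
```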

property data_store

The xarray.backends.common.AbstractStore used to save the dataset

property filename

The name of the file that stores this dataset

property num

A unique number for the dataset

property plot

An object to generate new plots from this dataset

To make a 2D-plot with the psy-simple plugin, you can just type

project = ds.psy.plot.plot2d(name='variable-name')

It will create a new subproject with the extracted and visualized data.

See also

psyplot.project.DatasetPlotter

for the different plot methods

to_array(*args, **kwargs)[source]

Deprecated version of to_dataarray

class psyplot.data.InteractiveArray(xarray_obj, *args, **kwargs)[source]

Bases: InteractiveBase

Interactive psyplot accessor for the data array

This class keeps a reference to the base xarray.Dataset where the xarray.DataArray originates from and enables switching between the coordinates in the array. Furthermore it has a plotter attribute to enable interactive plotting via a psyplot.plotter.Plotter instance.

The *args and **kwargs are essentially the same as for the xarray.DataArray method, additional **kwargs are described below.

Parameters:
  • base (xarray.Dataset) – Default: None. Dataset that serves as the origin of the data contained in this DataArray instance. This will be used if you want to update the coordinates via the update() method. If None, this instance will serve as a base as soon as it is needed.

  • decoder (psyplot.CFDecoder) – The decoder that decodes the base dataset and is used to get bounds. If not given, a new CFDecoder is created

  • idims (dict) – Default: None. dictionary with integer values and/or slices in the base dictionary. If not given, they are determined automatically

  • plotter (Plotter) – Default: None. Interactive plotter that makes the plot via formatoption keywords.

  • arr_name (str) – Default: 'data'. unique string of the array

  • auto_update (bool) – Default: None. A boolean indicating whether this list shall automatically update the contained arrays when calling the update() method or not. See also the no_auto_update attribute. If None, the value from the 'lists.auto_update' key in the psyplot.rcParams dictionary is used.

Attributes:

base

Base dataset this instance gets its data from

base_variables

A mapping from the variable name to the variable in the base dataset.

decoder

The decoder of this array

idims

Coordinates in the base dataset as int or slice

iter_base_variables

An iterator over the base variables in the base dataset

logger

logging.Logger of this instance

onbasechange

Signal to be emitted when the base of the object changes

Methods:

copy([deep])

Copy the array

fldmean([keepdims])

Calculate the weighted mean over the x- and y-dimension

fldpctl(q[, keepdims])

Calculate the percentiles along the x- and y-dimensions

fldstd([keepdims])

Calculate the weighted standard deviation over x- and y-dimension

get_coord(what[, base])

The coordinate of this data array for the given axis

get_dim(what[, base])

The name of the dimension of this data array for the given axis

gridweights([keepdims, keepshape, use_cdo])

Calculate the cell weights for each grid cell

init_accessor([base, idims, decoder])

Initialize the accessor instance

isel(*args, **kwargs)

Select a subset of the array based on position.

sel(*args, **kwargs)

Select a subset of the array based on indexes.

shiftlon(central_longitude)

Shift longitudes and the data so that they match map projection region.

start_update([draw, queues])

Conduct the formerly registered updates

to_interactive_list()

Return a InteractiveList that contains this object

update([method, dims, fmt, replot, ...])

Update the coordinates and the plot

property base

Base dataset this instance gets its data from

property base_variables

A mapping from the variable name to the variable in the base dataset.

copy(deep=False)[source]

Copy the array

This method returns a copy of the underlying array in the arr attribute. It is more stable because it creates a new psy accessor

property decoder

The decoder of this array

fldmean(keepdims=False)[source]

Calculate the weighted mean over the x- and y-dimension

This method calculates the weighted mean of the spatial dimensions. Weights are calculated using the gridweights() method and missing values are ignored. The x- and y-dimensions are identified using the decoder's get_xname() and get_yname() methods.

Parameters:

keepdims (bool) – If True, the dimensionality of this array is maintained

Returns:

The computed fldmeans. The dimensions are the same as in this array, only the spatial dimensions are omitted if keepdims is False.

Return type:

xr.DataArray

See also

fldstd

For calculating the weighted standard deviation

fldpctl

For calculating weighted percentiles
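For a regular latitude/longitude grid, the idea of a weighted spatial mean can be sketched with simple cos(latitude) weights (a simplification for illustration only; the real method uses gridweights(), which may involve cell bounds or CDOs):

```python
import math


def weighted_mean(values, lats):
    """cos(latitude)-weighted mean, ignoring missing (None) values."""
    num = den = 0.0
    for value, lat in zip(values, lats):
        if value is None:  # missing values are ignored
            continue
        weight = math.cos(math.radians(lat))
        num += weight * value
        den += weight
    return num / den


weighted_mean([1.0, 1.0, None], [0.0, 60.0, 80.0])  # → 1.0
```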

fldpctl(q, keepdims=False)[source]

Calculate the percentiles along the x- and y-dimensions

This method calculates the specified percentiles along the x- and y-dimensions. Percentiles are weighted by the gridweights() method and missing values are ignored. The x- and y-dimensions are identified using the decoder's get_xname() and get_yname() methods.

Parameters:
  • q (float or list of floats between 0 and 100) – The quantiles to estimate

  • keepdims (bool) – If True, the number of dimensions of the array are maintained

Returns:

The data array with the dimensions. If q is a list or keepdims is True, the first dimension will be the percentile 'pctl'. The other dimensions are the same as in this array, only the spatial dimensions are omitted if keepdims is False.

Return type:

xr.DataArray

See also

fldstd

For calculating the weighted standard deviation

fldmean

For calculating the weighted mean

Warning

This method loads the entire array into memory, so take care when handling big data.

fldstd(keepdims=False)[source]

Calculate the weighted standard deviation over x- and y-dimension

This method calculates the weighted standard deviation of the spatial dimensions. Weights are calculated using the gridweights() method and missing values are ignored. The x- and y-dimensions are identified using the decoder's get_xname() and get_yname() methods.

Parameters:

keepdims (bool) – If True, the dimensionality of this array is maintained

Returns:

The computed standard deviations. The dimensions are the same as in this array, only the spatial dimensions are omitted if keepdims is False.

Return type:

xr.DataArray

See also

fldmean

For calculating the weighted mean

fldpctl

For calculating weighted percentiles

get_coord(what, base=False)[source]

The coordinate of this data array for the given axis

Parameters:
  • what ({'t', 'x', 'y', 'z'}) – The letter of the axis

  • base (bool) – If True, use the base variable in the base dataset.

get_dim(what, base=False)[source]

The name of the dimension of this data array for the given axis

Parameters:
  • what ({'t', 'x', 'y', 'z'}) – The letter of the axis

  • base (bool) – If True, use the base variable in the base dataset.

gridweights(keepdims=False, keepshape=False, use_cdo=None)[source]

Calculate the cell weights for each grid cell

Parameters:
  • keepdims (bool) – If True, keep the number of dimensions

  • keepshape (bool) – If True, keep the exact shape as the source array and the missing values in the array are masked

  • use_cdo (bool or None) – If True, use Climate Data Operators (CDOs) to calculate the weights. Note that this is used automatically for unstructured grids. If None, it depends on the 'gridweights.use_cdo' item in the psyplot.rcParams.

Returns:

The 2D-DataArray with the grid weights

Return type:

xarray.DataArray
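For a regular latitude/longitude grid the cell weights are, to a good approximation, proportional to the cosine of the latitude. A minimal sketch (not the actual implementation, which can also delegate to CDOs):

```python
import numpy as np

def grid_weights(lat, lon):
    # Cell weights proportional to cos(latitude), normalized to sum to 1.
    w = np.cos(np.deg2rad(lat))[:, np.newaxis] * np.ones((lat.size, lon.size))
    return w / w.sum()

lat = np.array([-45.0, 0.0, 45.0])
lon = np.array([0.0, 90.0, 180.0, 270.0])
weights = grid_weights(lat, lon)  # shape (3, 4); equatorial cells weigh most
```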

property idims

Coordinates in the base dataset as int or slice

This attribute holds a mapping from the coordinate names of this array to an integer, slice or array of integers that represents the coordinates in the base dataset

init_accessor(base=None, idims=None, decoder=None, *args, **kwargs)[source]

Initialize the accessor instance

This method initializes the accessor

Parameters:
  • base (xr.Dataset) – The base dataset for the data

  • idims (dict) – A mapping from dimension name to indices. If not provided, it is calculated when the idims attribute is accessed

  • decoder (CFDecoder) – The decoder of this object

  • *args, **kwargs – Further parameters of InteractiveBase (i.e. plotter, arr_name and auto_update)

isel(*args, **kwargs)[source]

Select a subset of the array based on position.

Same method as xarray.DataArray.isel() but keeps information on the base dataset.

property iter_base_variables

An iterator over the base variables in the base dataset

property logger

logging.Logger of this instance

onbasechange

Signal to be emitted when the base of the object changes

sel(*args, **kwargs)[source]

Select a subset of the array based on indexes.

Same method as xarray.DataArray.sel() but keeps information on the base dataset.

shiftlon(central_longitude)[source]

Shift longitudes and the data so that they match map projection region.

Only valid for cylindrical/pseudo-cylindrical global projections and data on regular lat/lon grids. Longitudes need to be 1D.

Parameters:

central_longitude – center of map projection region

References

This function is copied from the mpl_toolkits.basemap.Basemap class. The only difference is that we do not mask values outside the map projection region.
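The shift itself can be sketched for a 1D longitude axis as follows (simplified; the actual method also reorders the data along the x-dimension):

```python
import numpy as np

def shift_lons(lons, central_longitude):
    # Map longitudes into [central_longitude - 180, central_longitude + 180)
    # and return them sorted, together with the sorting index order.
    west = central_longitude - 180.0
    shifted = (lons - west) % 360.0 + west
    order = np.argsort(shifted)
    return shifted[order], order  # apply `order` to the data as well

lons = np.array([0.0, 90.0, 180.0, 270.0])
new_lons, order = shift_lons(lons, 0.0)
# new_lons == [-180., -90., 0., 90.], order == [2, 3, 0, 1]
```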

start_update(draw=None, queues=None)[source]

Conduct the formerly registered updates

This method conducts the updates that have been registered via the update() method. You can call this method if the no_auto_update attribute of this instance is True and the auto_update parameter in the update() method has been set to False

Parameters:
  • draw (bool or None) – Boolean to control whether the figure of this array shall be drawn at the end. If None, it defaults to the 'auto_draw' parameter in the psyplot.rcParams dictionary

  • queues (list of Queue.Queue instances) – The queues that are passed to the psyplot.plotter.Plotter.start_update() method to ensure a thread-safe update. It can be None if only a single plotter is updated at a time. The number of jobs that are taken from the queue is determined by the _njobs() attribute. Note that this parameter is automatically configured when updating from a Project.

Returns:

A boolean indicating whether a redrawing is necessary or not

Return type:

bool

See also

no_auto_update, update

to_interactive_list()[source]

Return an InteractiveList that contains this object

update(method='isel', dims={}, fmt={}, replot=False, auto_update=False, draw=None, force=False, todefault=False, **kwargs)[source]

Update the coordinates and the plot

This method updates all arrays in this list with the given coordinate values and formatoptions.

Parameters:
  • method ({'isel', None, 'nearest', ...}) – Selection method of the xarray.Dataset to be used for setting the variables from the information in dims. If method is ‘isel’, the xarray.Dataset.isel() method is used. Otherwise it sets the method parameter for the xarray.Dataset.sel() method.

  • dims (dict) – Keys must be variable names of dimensions (e.g. time, level, lat or lon) or ‘name’ for the variable name you want to choose. Values must be values of that dimension or iterables of the values (e.g. lists). Note that strings will be put into a list. For example dims = {‘name’: ‘t2m’, ‘time’: 0} will result in one plot for the first time step, whereas dims = {‘name’: ‘t2m’, ‘time’: [0, 1]} will result in two plots, one for the first (time == 0) and one for the second (time == 1) time step.

  • replot (bool) – Boolean that determines whether the data specific formatoptions shall be updated in any case or not. Note, if dims is not empty or any coordinate keyword is in **kwargs, this will be set to True automatically

  • fmt (dict) – Keys may be any valid formatoption of the formatoptions in the plotter

  • force (str, list of str or bool) – If a formatoption key (i.e. a string) or a list of formatoption keys, they are definitely updated whether they changed or not. If True, all formatoptions given in this call of the update() method are updated

  • todefault (bool) – If True, all changed formatoptions (except the registered ones) are updated to their default value as stored in the rc attribute

  • auto_update (bool) – Boolean determining whether or not the start_update() method is called at the end.

  • draw (bool or None) – Boolean to control whether the figure of this array shall be drawn at the end. If None, it defaults to the 'auto_draw' parameter in the psyplot.rcParams dictionary

  • queues (list of Queue.Queue instances) – The queues that are passed to the psyplot.plotter.Plotter.start_update() method to ensure a thread-safe update. It can be None if only a single plotter is updated at a time. The number of jobs that are taken from the queue is determined by the _njobs() attribute. Note that this parameter is automatically configured when updating from a Project.

  • **kwargs – Any other formatoption or dimension that shall be updated (additionally to those in fmt and dims)

Notes

When updating to a new array while trying to set the dimensions at the same time, you have to specify the new dimensions via the dims parameter, e.g.:

da.psy.update(name='new_name', dims={'new_dim': 3})

if 'new_dim' is not yet a dimension of this array

If the no_auto_update attribute is True and the given auto_update parameter is False, the updates of the plots are registered and conducted at the next call of the start_update() method or the next call of this method (if the auto_update parameter is then True).

class psyplot.data.InteractiveBase(plotter=None, arr_name='arr0', auto_update=None)[source]

Bases: object

Class for the communication of a data object with a suitable plotter

This class serves as an interface for data objects (in particular as a base for InteractiveArray and InteractiveList) to communicate with the corresponding Plotter in the plotter attribute

Parameters:
  • plotter (Plotter) – Default: None. Interactive plotter that makes the plot via formatoption keywords.

  • arr_name (str) – Default: 'data'. unique string of the array

  • auto_update (bool) – Default: None. A boolean indicating whether this list shall automatically update the contained arrays when calling the update() method or not. See also the no_auto_update attribute. If None, the value from the 'lists.auto_update' key in the psyplot.rcParams dictionary is used.

Attributes:

arr_name

str.

ax

The matplotlib axes the plotter of this data object plots on

block_signals

Block the emitting of signals of this instance

logger

logging.Logger of this instance

no_auto_update

bool.

onupdate

Signal to be emitted when the object has been updated

plot

An object to visualize this data object

plotter

psyplot.plotter.Plotter instance that makes the interactive plotting of the data

Methods:

start_update([draw, queues])

Conduct the formerly registered updates

to_interactive_list()

Return an InteractiveList that contains this object

update([fmt, replot, draw, auto_update, ...])

Update the coordinates and the plot

property arr_name

str. The internal name of the InteractiveBase

property ax

The matplotlib axes the plotter of this data object plots on

property block_signals

Block the emitting of signals of this instance

property logger

logging.Logger of this instance

property no_auto_update

bool. Boolean controlling whether the start_update() method is automatically called by the update() method

Examples

You can disable the automatic update via

>>> with data.no_auto_update:
...     data.update(time=1)
...     data.start_update()

To permanently disable the automatic update, simply set

>>> data.no_auto_update = True
>>> data.update(time=1)
>>> data.no_auto_update = False  # re-enable automatic updates
onupdate

Signal to be emitted when the object has been updated

property plot

An object to visualize this data object

To make a 2D-plot with the psy-simple plugin, you can just type

plotter = da.psy.plot.plot2d()

It will create a new psyplot.plotter.Plotter instance with the extracted and visualized data.

See also

psyplot.project.DataArrayPlotter

for the different plot methods

property plotter

psyplot.plotter.Plotter instance that makes the interactive plotting of the data

start_update(draw=None, queues=None)[source]

Conduct the formerly registered updates

This method conducts the updates that have been registered via the update() method. You can call this method if the no_auto_update attribute of this instance is True and the auto_update parameter in the update() method has been set to False

Parameters:
  • draw (bool or None) – Boolean to control whether the figure of this array shall be drawn at the end. If None, it defaults to the 'auto_draw' parameter in the psyplot.rcParams dictionary

  • queues (list of Queue.Queue instances) – The queues that are passed to the psyplot.plotter.Plotter.start_update() method to ensure a thread-safe update. It can be None if only a single plotter is updated at a time. The number of jobs that are taken from the queue is determined by the _njobs() attribute. Note that this parameter is automatically configured when updating from a Project.

Returns:

A boolean indicating whether a redrawing is necessary or not

Return type:

bool

to_interactive_list()[source]

Return an InteractiveList that contains this object

update(fmt={}, replot=False, draw=None, auto_update=False, force=False, todefault=False, **kwargs)[source]

Update the coordinates and the plot

This method updates all arrays in this list with the given coordinate values and formatoptions.

Parameters:
  • replot (bool) – Boolean that determines whether the data specific formatoptions shall be updated in any case or not. Note, if dims is not empty or any coordinate keyword is in **kwargs, this will be set to True automatically

  • fmt (dict) – Keys may be any valid formatoption of the formatoptions in the plotter

  • force (str, list of str or bool) – If a formatoption key (i.e. a string) or a list of formatoption keys, they are definitely updated whether they changed or not. If True, all formatoptions given in this call of the update() method are updated

  • todefault (bool) – If True, all changed formatoptions (except the registered ones) are updated to their default value as stored in the rc attribute

  • auto_update (bool) – Boolean determining whether or not the start_update() method is called at the end. This parameter has no effect if the no_auto_update attribute is set to True.

  • draw (bool or None) – Boolean to control whether the figure of this array shall be drawn at the end. If None, it defaults to the 'auto_draw' parameter in the psyplot.rcParams dictionary

  • **kwargs – Any other formatoption that shall be updated (additionally to those in fmt)

Notes

If the no_auto_update attribute is True and the given auto_update parameter is False, the updates of the plots are registered and conducted at the next call of the start_update() method or the next call of this method (if the auto_update parameter is then True).

class psyplot.data.InteractiveList(*args, **kwargs)[source]

Bases: ArrayList, InteractiveBase

List of InteractiveArray instances that can be plotted itself

This class combines the ArrayList and the interactive plotting through psyplot.plotter.Plotter classes. It is mainly used by the psyplot.plotter.simple module

Parameters:
  • iterable (iterable) – The iterable (e.g. another list) defining this list

  • attrs (dict-like or iterable, optional) – Global attributes of this list

  • auto_update (bool) – Default: None. A boolean indicating whether this list shall automatically update the contained arrays when calling the update() method or not. See also the no_auto_update attribute. If None, the value from the 'lists.auto_update' key in the psyplot.rcParams dictionary is used.

  • new_name (bool or str) – If False, and the arr_name attribute of the new array is already in the list, a ValueError is raised. If True and the arr_name attribute of the new array is not already in the list, the name is not changed. Otherwise, if the array name is already in use, new_name is set to ‘arr{0}’. If new_name is a string, it will be used for renaming (whether the array name of arr is already in use or not). '{0}' is replaced by a counter

  • plotter (Plotter) – Default: None. Interactive plotter that makes the plot via formatoption keywords.

  • arr_name (str) – Default: 'data'. unique string of the array

Methods:

append(*args, **kwargs)

Append a new array to the list

extend(*args, **kwargs)

Add further arrays from an iterable to this list

from_dataset(*args, **kwargs)

Create an InteractiveList instance from the given base dataset

start_update([draw, queues])

Conduct the formerly registered updates

to_dataframe()

to_interactive_list()

Return an InteractiveList that contains this object

Attributes:

logger

logging.Logger of this instance

no_auto_update

bool.

psy

Return the list itself

append(*args, **kwargs)[source]

Append a new array to the list

Parameters:
  • value (InteractiveBase) – The data object to append to this list

  • new_name (bool or str) – If False, and the arr_name attribute of the new array is already in the list, a ValueError is raised. If True and the arr_name attribute of the new array is not already in the list, the name is not changed. Otherwise, if the array name is already in use, new_name is set to ‘arr{0}’. If new_name is a string, it will be used for renaming (whether the array name of arr is already in use or not). '{0}' is replaced by a counter

Raises:
  • ValueError – If it was impossible to find a name that isn’t already in the list

  • ValueError – If new_name is False and the array is already in the list

See also

list.append, extend, rename

extend(*args, **kwargs)[source]

Add further arrays from an iterable to this list

Parameters:
  • iterable – Any iterable that contains InteractiveBase instances

  • new_name (bool or str) – If False, and the arr_name attribute of the new array is already in the list, a ValueError is raised. If True and the arr_name attribute of the new array is not already in the list, the name is not changed. Otherwise, if the array name is already in use, new_name is set to ‘arr{0}’. If new_name is a string, it will be used for renaming (whether the array name of arr is already in use or not). '{0}' is replaced by a counter

Raises:
  • ValueError – If it was impossible to find a name that isn’t already in the list

  • ValueError – If new_name is False and the array is already in the list

See also

list.extend, append, rename

classmethod from_dataset(*args, **kwargs)[source]

Create an InteractiveList instance from the given base dataset

Parameters:
  • base (xarray.Dataset) – Dataset instance that is used as reference

  • method ({'isel', None, 'nearest', ...}) – Selection method of the xarray.Dataset to be used for setting the variables from the information in dims. If method is ‘isel’, the xarray.Dataset.isel() method is used. Otherwise it sets the method parameter for the xarray.Dataset.sel() method.

  • auto_update (bool) – Default: None. A boolean indicating whether this list shall automatically update the contained arrays when calling the update() method or not. See also the no_auto_update attribute. If None, the value from the 'lists.auto_update' key in the psyplot.rcParams dictionary is used.

  • prefer_list (bool) – If True and multiple variable names per array are found, the InteractiveList class is used. Otherwise the arrays are put together into one InteractiveArray.

  • default_slice (indexer) – Index (e.g. 0 if method is ‘isel’) that shall be used for dimensions not covered by dims and furtherdims. If None, the whole slice will be used. Note that the default_slice is always based on the isel method.

  • decoder (CFDecoder or dict) –

    Arguments for the decoder. This can be one of

    • an instance of CFDecoder

    • a subclass of CFDecoder

    • a dictionary with keyword-arguments to the automatically determined decoder class

    • None to automatically set the decoder

  • squeeze (bool, optional) – Default True. If True, and the created arrays have an axis of length 1, it is removed from the dimension list (e.g. an array with shape (3, 4, 1, 5) will be squeezed to shape (3, 4, 5))

  • attrs (dict, optional) – Meta attributes that shall be assigned to the selected data arrays (additional to those stored in the base dataset)

  • load (bool or dict) – If True, load the data from the dataset using the xarray.DataArray.load() method. If dict, those will be given to the above mentioned load method

  • plotter (psyplot.plotter.Plotter) – The plotter instance that is used to visualize the data in this list

  • make_plot (bool) – If True, the plot is made

  • arr_names (string, list of strings or dictionary) –

    Set the unique array names of the resulting arrays and (optionally) dimensions.

    • if string: same as list of strings (see below). Strings may include {0} which will be replaced by a counter.

    • list of strings: those will be used for the array names. The final number of dictionaries in the return depends in this case on dims and **furtherdims

    • dictionary: Then nothing happens and a dict version of arr_names is returned.

  • sort (list of strings) – This parameter defines how the dictionaries are ordered. It has no effect if arr_names is a dictionary (use a dict for that). It can be a list of dimension strings matching to the dimensions in dims for the variable.

  • dims (dict) – Keys must be variable names of dimensions (e.g. time, level, lat or lon) or ‘name’ for the variable name you want to choose. Values must be values of that dimension or iterables of the values (e.g. lists). Note that strings will be put into a list. For example dims = {‘name’: ‘t2m’, ‘time’: 0} will result in one plot for the first time step, whereas dims = {‘name’: ‘t2m’, ‘time’: [0, 1]} will result in two plots, one for the first (time == 0) and one for the second (time == 1) time step.

  • **kwargs – Further keyword arguments may point to any of the dimensions of the data (see dims)

Returns:

The list with the specified InteractiveArray instances that hold a reference to the given base

Return type:

ArrayList
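The way dims with list values fans out into one array per combination can be sketched with itertools.product (illustrative only; the actual expansion is performed by setup_coords()):

```python
from itertools import product

def expand_dims(dims):
    # Turn {'name': 't2m', 'time': [0, 1]} into one selection per combination.
    keys = list(dims)
    # wrap scalars (including strings) into single-element lists
    values = [v if isinstance(v, list) else [v] for v in dims.values()]
    return [dict(zip(keys, combo)) for combo in product(*values)]

selections = expand_dims({"name": "t2m", "time": [0, 1]})
# -> [{'name': 't2m', 'time': 0}, {'name': 't2m', 'time': 1}]
```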

property logger

logging.Logger of this instance

property no_auto_update

bool. Boolean controlling whether the start_update() method is automatically called by the update() method

Examples

You can disable the automatic update via

>>> with data.no_auto_update:
...     data.update(time=1)
...     data.start_update()

To permanently disable the automatic update, simply set

>>> data.no_auto_update = True
>>> data.update(time=1)
>>> data.no_auto_update = False  # re-enable automatic updates
property psy

Return the list itself

start_update(draw=None, queues=None)[source]

Conduct the formerly registered updates

This method conducts the updates that have been registered via the update() method. You can call this method if the no_auto_update attribute of this instance is True and the auto_update parameter in the update() method has been set to False

Parameters:
  • draw (bool or None) – Boolean to control whether the figure of this array shall be drawn at the end. If None, it defaults to the 'auto_draw' parameter in the psyplot.rcParams dictionary

  • queues (list of Queue.Queue instances) – The queues that are passed to the psyplot.plotter.Plotter.start_update() method to ensure a thread-safe update. It can be None if only a single plotter is updated at a time. The number of jobs that are taken from the queue is determined by the _njobs() attribute. Note that this parameter is automatically configured when updating from a Project.

Returns:

A boolean indicating whether a redrawing is necessary or not

Return type:

bool

See also

no_auto_update, update

to_dataframe()[source]
to_interactive_list()[source]

Return an InteractiveList that contains this object

class psyplot.data.Signal(name=None, cls_signal=False)[source]

Bases: object

Signal to connect functions to a specific event

This class behaves similarly to PyQt’s PyQt4.QtCore.pyqtBoundSignal

Methods:

connect(func)

disconnect([func])

Disconnect a function call to the signal.

emit(*args, **kwargs)

Attributes:

instance

owner

connect(func)[source]
disconnect(func=None)[source]

Disconnect a function from the signal. If func is None, all connections are disconnected

emit(*args, **kwargs)[source]
instance = None
owner = None
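The connect/emit mechanism can be sketched in a few lines (a simplified stand-in; the real class additionally acts as a descriptor, hence the instance and owner attributes):

```python
class MiniSignal:
    # Minimal signal: registered callbacks are invoked on emit().
    def __init__(self):
        self._funcs = []

    def connect(self, func):
        if func not in self._funcs:
            self._funcs.append(func)

    def disconnect(self, func=None):
        # With no argument, all connections are removed.
        self._funcs = [] if func is None else [f for f in self._funcs if f is not func]

    def emit(self, *args, **kwargs):
        for func in list(self._funcs):
            func(*args, **kwargs)

sig = MiniSignal()
received = []
sig.connect(received.append)
sig.emit("updated")   # received == ["updated"]
sig.disconnect()
sig.emit("ignored")   # no callbacks left, received stays unchanged
```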
class psyplot.data.UGridDecoder(ds=None, x=None, y=None, z=None, t=None)[source]

Bases: CFDecoder

Decoder for UGrid data sets

Warning

Currently only triangles are supported.

Methods:

can_decode(ds, var)

Check whether the given variable can be decoded.

decode_coords(ds[, gridfile])

Reimplemented to set the mesh variables as coordinates

get_cell_node_coord(var[, coords, axis, nans])

Checks whether the bounds in the variable attribute are triangular

get_mesh(var[, coords])

Get the mesh variable for the given var

get_nodes(coord, coords)

Get the variables containing the definition of the nodes

get_triangles(var[, coords, convert_radian, ...])

Get the triangles of the given variable.

get_x(var[, coords])

Get the centers of the triangles in the x-dimension

get_y(var[, coords])

Get the centers of the triangles in the y-dimension

is_unstructured(*args, **kwargs)

Reimplemented to always return True.

Attributes:

supports_spatial_slicing

True if the data of the CFDecoder supports the extraction of a subset of the data based on the indices.

classmethod can_decode(ds, var)[source]

Check whether the given variable can be decoded.

Returns True if a mesh coordinate could be found via the get_mesh() method

Parameters:
Returns:

True if the decoder can decode the given array var. Otherwise False

Return type:

bool

static decode_coords(ds, gridfile=None)[source]

Reimplemented to set the mesh variables as coordinates

Parameters:
  • ds (xarray.Dataset) – The dataset to decode

  • gridfile (str) – The path to a separate grid file or a xarray.Dataset instance which may store the coordinates used in ds

Returns:

ds with additional coordinates

Return type:

xarray.Dataset

get_cell_node_coord(var, coords=None, axis='x', nans=None)[source]

Checks whether the bounds in the variable attribute are triangular

Parameters:
  • var (xarray.Variable or xarray.DataArray) – The variable to check

  • coords (dict) – Coordinates to use. If None, the coordinates of the dataset in the ds attribute are used.

  • axis ({'x', 'y'}) – The spatial axis to check

  • nans ({None, 'skip', 'only'}) – Determines whether values with nan shall be left (None), skipped ('skip') or shall be the only one returned ('only')

Returns:

the bounds coordinate (if it exists)

Return type:

xarray.DataArray or None

get_mesh(var, coords=None)[source]

Get the mesh variable for the given var

Parameters:
  • var (xarray.Variable) – The data source with the 'mesh' attribute

  • coords (dict) – The coordinates to use. If None, the coordinates of the dataset of this decoder are used

Returns:

The mesh coordinate

Return type:

xarray.Coordinate

get_nodes(coord, coords)[source]

Get the variables containing the definition of the nodes

Parameters:
  • coord (xarray.Coordinate) – The mesh variable

  • coords (dict) – The coordinates to use to get node coordinates

get_triangles(var, coords=None, convert_radian=True, copy=False, src_crs=None, target_crs=None, nans=None, stacklevel=1)[source]

Get the triangles of the given variable.

Parameters:
  • var (xarray.Variable or xarray.DataArray) – The variable to use

  • coords (dict) – Alternative coordinates to use. If None, the coordinates of the ds dataset are used

  • convert_radian (bool) – If True and the coordinate has units in ‘radian’, those are converted to degrees

  • copy (bool) – If True, vertice arrays are copied

  • src_crs (cartopy.crs.Crs) – The source projection of the data. If not None, a transformation to the given target_crs will be done

  • target_crs (cartopy.crs.Crs) – The target projection for which the triangles shall be transformed. Must only be provided if the src_crs is not None.

  • nans ({None, 'skip', 'only'}) – Determines whether values with nan shall be left (None), skipped ('skip') or shall be the only one returned ('only')

Returns:

The spatial triangles of the variable

Return type:

matplotlib.tri.Triangulation

Notes

If the 'location' attribute is set to 'node', a delaunay triangulation is performed using the matplotlib.tri.Triangulation class.

get_x(var, coords=None)[source]

Get the centers of the triangles in the x-dimension

Returns:

The x-coordinate or None if it could not be found

Return type:

xarray.Coordinate or None

get_y(var, coords=None)[source]

Get the centers of the triangles in the y-dimension

Returns:

The y-coordinate or None if it could not be found

Return type:

xarray.Coordinate or None
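Computing cell centers from node coordinates and triangle connectivity can be sketched with NumPy (illustrative data; the decoder reads the nodes and the connectivity from the UGRID mesh variables):

```python
import numpy as np

# node coordinates and the node indices of each triangle (UGRID-style)
node_x = np.array([0.0, 1.0, 0.0, 1.0])
node_y = np.array([0.0, 0.0, 1.0, 1.0])
triangles = np.array([[0, 1, 2], [1, 3, 2]])

# the center of a triangle is the mean of its three node coordinates
center_x = node_x[triangles].mean(axis=1)  # [1/3, 2/3]
center_y = node_y[triangles].mean(axis=1)  # [1/3, 2/3]
```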

is_unstructured(*args, **kwargs)[source]

Reimplemented to always return True. Any *args and **kwargs are ignored

supports_spatial_slicing: bool = False

True if the data of the CFDecoder supports the extraction of a subset of the data based on the indices.

For UGRID conventions, this is not easily possible because the extraction of a subset breaks the connectivity information of the mesh

psyplot.data.decode_absolute_time(times)[source]
psyplot.data.encode_absolute_time(times)[source]
psyplot.data.get_filename_ds(ds, dump=True, paths=None, **kwargs)[source]

Return the filename corresponding to a dataset

This method returns the path to ds, or saves the dataset first if it does not yet have a filename

Parameters:
  • ds (xarray.Dataset) – The dataset you want the path information for

  • dump (bool) – If True and the dataset has not been dumped so far, it is dumped to a temporary file or the one generated by paths is used

  • paths (iterable or True) – An iterator over filenames to use if a dataset has no filename. If paths is True, an iterator over temporary files will be created without raising a warning

  • **kwargs – Any other keyword for the to_netcdf() function

  • path (str, path-like or file-like, optional) – Path to which to save this dataset. File-like objects are only supported by the scipy engine. If no path is provided, this function returns the resulting netCDF file as bytes; in this case, we need to use scipy, which does not support netCDF version 4 (the default format becomes NETCDF3_64BIT).

  • mode ({"w", "a"}, default: "w") – Write (‘w’) or append (‘a’) mode. If mode=’w’, any existing file at this location will be overwritten. If mode=’a’, existing variables will be overwritten.

  • format ({"NETCDF4", "NETCDF4_CLASSIC", "NETCDF3_64BIT", "NETCDF3_CLASSIC"}, optional) –

    File format for the resulting netCDF file:

    • NETCDF4: Data is stored in an HDF5 file, using netCDF4 API features.

    • NETCDF4_CLASSIC: Data is stored in an HDF5 file, using only netCDF 3 compatible API features.

    • NETCDF3_64BIT: 64-bit offset version of the netCDF 3 file format, which fully supports 2+ GB files, but is only compatible with clients linked against netCDF version 3.6.0 or later.

    • NETCDF3_CLASSIC: The classic netCDF 3 file format. It does not handle 2+ GB files very well.

    All formats are supported by the netCDF4-python library. scipy.io.netcdf only supports the last two formats.

    The default format is NETCDF4 if you are saving a file to disk and have the netCDF4-python library available. Otherwise, xarray falls back to using scipy to write netCDF files and defaults to the NETCDF3_64BIT format (scipy does not support netCDF4).

  • group (str, optional) – Path to the netCDF4 group in the given file to open (only works for format=’NETCDF4’). The group(s) will be created if necessary.

  • engine ({"netcdf4", "scipy", "h5netcdf"}, optional) – Engine to use when writing netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’ if writing to a file on disk.

  • encoding (dict, optional) –

    Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}, ...}. If encoding is specified the original encoding of the variables of the dataset is ignored.

    The h5netcdf engine supports both the NetCDF4-style compression encoding parameters {"zlib": True, "complevel": 9} and the h5py ones {"compression": "gzip", "compression_opts": 9}. This allows using any compression plugin installed in the HDF5 library, e.g. LZF.

Returns:

  • str or None – None, if the dataset has not yet been dumped to the harddisk and dump is False; otherwise the complete path to the input file

  • str – The module of the xarray.backends.common.AbstractDataStore instance that is used to hold the data

  • str – The class name of the xarray.backends.common.AbstractDataStore instance that is used to open the data

psyplot.data.get_fname_funcs = [<function _get_fname_netCDF4>, <function _get_fname_scipy>, <function _get_fname_nio>]

functions to use to extract the file name from a data store
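These extractors implement a simple fallback chain: each function is tried in turn and the first non-None file name wins. The pattern, sketched with hypothetical stand-in extractors:

```python
def fname_from_source(store):
    return store.get("source")

def fname_from_path(store):
    return store.get("path")

fname_funcs = [fname_from_source, fname_from_path]  # tried in order

def get_fname(store):
    # Return the first file name any extractor can provide, else None.
    for func in fname_funcs:
        fname = func(store)
        if fname is not None:
            return fname
    return None

get_fname({"path": "data.nc"})  # -> 'data.nc' (second extractor succeeds)
```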

psyplot.data.get_index_from_coord(coord, base_index)[source]

Function to return the coordinate as integer, integer array or slice

If coord is zero-dimensional, the corresponding integer in base_index will be supplied. Otherwise a slice is returned if possible; if that does not work, an integer array with the corresponding indices is returned.

Parameters:
  • coord (xarray.Coordinate or xarray.Variable) – Coordinate to convert

  • base_index (pandas.Index) – The base index from which the coord was extracted

Returns:

The indexer that can be used to access the coord in the base_index

Return type:

int, array of ints or slice
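The conversion logic can be sketched as follows (simplified; the real function operates on xarray coordinates and a pandas.Index):

```python
import numpy as np

def index_from_coord(values, base):
    # Return an int, a slice, or an integer array locating `values` in `base`.
    base = list(base)
    if np.ndim(values) == 0:  # zero-dimensional -> single integer
        return base.index(values)
    idx = np.array([base.index(v) for v in values])
    steps = np.diff(idx)
    if idx.size > 1 and (steps == steps[0]).all() and steps[0] != 0:
        # evenly spaced indices can be expressed as a slice
        return slice(idx[0], idx[-1] + steps[0], steps[0])
    return idx  # fall back to an integer array

base = [10, 20, 30, 40, 50]
index_from_coord(30, base)            # -> 2
index_from_coord([20, 30, 40], base)  # -> slice(1, 4, 1)
```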

psyplot.data.get_tdata(t_format, files)[source]

Get the time information from file names

Parameters:
  • t_format (str) – The string that can be used to get the time information in the files. Any numeric datetime format string (e.g. %Y, %m, %H) can be used, but not non-numeric strings like %b, etc. See [1] for the datetime format strings

  • files (list of str) – The files that contain the time information

Returns:

  • pandas.Index – The time coordinate

  • list of str – The file names as they are sorted in the returned index
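The extraction can be sketched with a small subset of the directive-to-regex mapping (cf. the t_patterns data; simplified, without the sorting step):

```python
import re
from datetime import datetime

# a small subset of the datetime-directive -> regex mapping
patterns = {"%Y": r"\d{4}", "%m": r"\d{2}", "%d": r"\d{2}"}

def time_from_fname(t_format, fname):
    # Build a regex from the format string, find the match and parse it.
    regex = t_format
    for directive, pat in patterns.items():
        regex = regex.replace(directive, pat)
    match = re.search(regex, fname)
    return datetime.strptime(match.group(0), t_format)

time_from_fname("%Y%m%d", "t2m_20200115.nc")  # -> datetime(2020, 1, 15)
```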

References

psyplot.data.open_dataset(filename_or_obj, decode_cf=True, decode_times=True, decode_coords=True, engine=None, gridfile=None, **kwargs)[source]

Open an instance of xarray.Dataset.

This method has the same functionality as the xarray.open_dataset() method except that it supports an additional ‘gdal’ engine to open GDAL rasters (e.g. GeoTIFFs) and that it supports absolute time units like 'day as %Y%m%d.%f' (if decode_cf and decode_times are True).
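Decoding such an absolute time unit can be sketched as follows (a simplified illustration; the actual decoding is done by the AbsoluteTimeDecoder class):

```python
from datetime import datetime, timedelta

def decode_absolute(value, date_format="%Y%m%d"):
    # Split e.g. 20200115.5 into the date part and the fraction of the day.
    day_part, frac = divmod(value, 1.0)
    date = datetime.strptime("%i" % day_part, date_format)
    return date + timedelta(days=frac)

decode_absolute(20200115.5)  # -> datetime(2020, 1, 15, 12, 0)
```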

Parameters:
  • filename_or_obj (str, Path, file-like or DataStore) – Strings and Path objects are interpreted as a path to a netCDF file or an OpenDAP URL and opened with python-netCDF4, unless the filename ends with .gz, in which case the file is gunzipped and opened with scipy.io.netcdf (only netCDF3 supported). Byte-strings or file-like objects are opened by scipy.io.netcdf (netCDF3) or h5py (netCDF4/HDF).

  • chunks (int, dict, 'auto' or None, optional) – If chunks is provided, it is used to load the new dataset into dask arrays. chunks=-1 loads the dataset with dask using a single chunk for all arrays. chunks={} loads the dataset with dask using engine preferred chunks if exposed by the backend, otherwise with a single chunk for all arrays. In order to reproduce the default behavior of xr.open_zarr(...) use xr.open_dataset(..., engine='zarr', chunks={}). chunks='auto' will use dask auto chunking taking into account the engine preferred chunks. See dask chunking for more details.

  • cache (bool, optional) – If True, cache data loaded from the underlying datastore in memory as NumPy arrays when accessed to avoid reading from the underlying data- store multiple times. Defaults to True unless you specify the chunks argument to use dask, in which case it defaults to False. Does not change the behavior of coordinates corresponding to dimensions, which always load their data from disk into a pandas.Index.

  • decode_cf (bool, optional) – Whether to decode these variables, assuming they were saved according to CF conventions.

  • mask_and_scale (bool, optional) – If True, replace array values equal to _FillValue with NA and scale values according to the formula original_values * scale_factor + add_offset, where _FillValue, scale_factor and add_offset are taken from variable attributes (if they exist). If the _FillValue or missing_value attribute contains multiple values a warning will be issued and all array values matching one of the multiple values will be replaced by NA. This keyword may not be supported by all the backends.

  • decode_times (bool, optional) – If True, decode times encoded in the standard NetCDF datetime format into datetime objects. Otherwise, leave them encoded as numbers. This keyword may not be supported by all the backends.

  • decode_timedelta (bool, optional) – If True, decode variables and coordinates with time units in {“days”, “hours”, “minutes”, “seconds”, “milliseconds”, “microseconds”} into timedelta objects. If False, leave them encoded as numbers. If None (default), assume the same value of decode_time. This keyword may not be supported by all the backends.

  • use_cftime (bool, optional) – Only relevant if encoded dates come from a standard calendar (e.g. “gregorian”, “proleptic_gregorian”, “standard”, or not specified). If None (default), attempt to decode times to np.datetime64[ns] objects; if this is not possible, decode times to cftime.datetime objects. If True, always decode times to cftime.datetime objects, regardless of whether or not they can be represented using np.datetime64[ns] objects. If False, always decode times to np.datetime64[ns] objects; if this is not possible raise an error. This keyword may not be supported by all the backends.

  • concat_characters (bool, optional) – If True, concatenate along the last dimension of character arrays to form string arrays. Dimensions will only be concatenated over (and removed) if they have no corresponding variable and if they are only used as the last dimension of character arrays. This keyword may not be supported by all the backends.

  • decode_coords (bool or {"coordinates", "all"}, optional) –

    Controls which variables are set as coordinate variables:

    • ”coordinates” or True: Set variables referred to in the 'coordinates' attribute of the datasets or individual variables as coordinate variables.

    • ”all”: Set variables referred to in 'grid_mapping', 'bounds' and other attributes as coordinate variables.

    Only existing variables can be set as coordinates. Missing variables will be silently ignored.

  • drop_variables (str or iterable of str, optional) – A variable or list of variables to exclude from being parsed from the dataset. This may be useful to drop variables with problems or inconsistent values.

  • inline_array (bool, default: False) – How to include the array in the dask task graph. By default(inline_array=False) the array is included in a task by itself, and each chunk refers to that task by its key. With inline_array=True, Dask will instead inline the array directly in the values of the task graph. See dask.array.from_array().

  • chunked_array_type (str, optional) – Which chunked array type to coerce this dataset’s arrays to. Defaults to ‘dask’ if installed, else whatever is registered via the ChunkManagerEntrypoint system. Experimental API that should not be relied upon.

  • from_array_kwargs (dict) – Additional keyword arguments passed on to the ChunkManagerEntrypoint.from_array method used to create chunked arrays, via whichever chunk manager is specified through the chunked_array_type kwarg. For example if dask.array.Array() objects are used for chunking, additional kwargs will be passed to dask.array.from_array(). Experimental API that should not be relied upon.

  • backend_kwargs (dict) – Additional keyword arguments passed on to the engine open function, equivalent to **kwargs.

  • **kwargs (dict) –

    Additional keyword arguments passed on to the engine open function. For example:

    • ’group’: path to the netCDF4 group in the given file to open, given as a str. Supported by “netcdf4”, “h5netcdf”, “zarr”.

    • ’lock’: resource lock to use when reading data from disk. Only relevant when using dask or another form of parallelism. By default, appropriate locks are chosen to safely read and write files with the currently active dask scheduler. Supported by “netcdf4”, “h5netcdf”, “scipy”.

    See engine open function for kwargs accepted by each specific engine.

  • engine ({'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'gdal'}, optional) – Engine to use when reading netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’.

  • gridfile (str) – The path to a separate grid file or an xarray.Dataset instance that may store the coordinates used in the dataset

Returns:

The dataset that contains the variables from filename_or_obj

Return type:

xarray.Dataset
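For illustration, decoding a single absolute time value of the form 'day as %Y%m%d.%f' can be sketched as follows (a simplified stand-in for psyplot's AbsoluteTimeDecoder, using only the standard library):

```python
from datetime import datetime, timedelta

def decode_absolute_day(value, day_format="%Y%m%d"):
    # Split e.g. 20200101.5 into the date part (20200101) and the
    # fractional day (0.5, i.e. 12 hours past midnight).
    date_part = int(value)
    frac = value - date_part
    return datetime.strptime(str(date_part), day_format) + timedelta(days=frac)

noon = decode_absolute_day(20200101.5)  # 2020-01-01 12:00:00
```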

psyplot.data.open_mfdataset(paths, decode_cf=True, decode_times=True, decode_coords=True, engine=None, gridfile=None, t_format=None, **kwargs)[source]

Open multiple files as a single dataset.

This function is essentially the same as the xarray.open_mfdataset() function but (like open_dataset()) supports additional decoding and the 'gdal' engine. You can further specify the t_format parameter to extract the time information from the file names and use the result to concatenate the files

Parameters:
  • paths (str or nested sequence of paths) – Either a string glob in the form "path/to/my/files/*.nc" or an explicit list of files to open. Paths can be given as strings or as pathlib Paths. If concatenation along more than one dimension is desired, then paths must be a nested list-of-lists (see combine_nested for details). (A string glob will be expanded to a 1-dimensional list.)

  • chunks (int, dict, 'auto' or None, optional) – Dictionary with keys given by dimension names and values given by chunk sizes. In general, these should divide the dimensions of each dataset. If int, chunk each dimension by chunks. By default, chunks will be chosen to load entire input files into memory at once. This has a major impact on performance: please see the full documentation for more details [2].

  • concat_dim (str, DataArray, Index or a Sequence of these or None, optional) – Dimensions to concatenate files along. You only need to provide this argument if combine='nested', and if any of the dimensions along which you want to concatenate is not a dimension in the original datasets, e.g., if you want to stack a collection of 2D arrays along a third dimension. Set concat_dim=[..., None, ...] explicitly to disable concatenation along a particular dimension. Default is None, which for a 1D list of filepaths is equivalent to opening the files separately and then merging them with xarray.merge.

  • combine ({"by_coords", "nested"}, optional) – Whether xarray.combine_by_coords or xarray.combine_nested is used to combine all the data. Default is to use xarray.combine_by_coords.

  • compat ({"identical", "equals", "broadcast_equals", "no_conflicts", "override"}, default: "no_conflicts") –

    String indicating how to compare variables of the same name for potential conflicts when merging:

    • ”broadcast_equals”: all values must be equal when variables are broadcast against each other to ensure common dimensions.

    • ”equals”: all values and dimensions must be the same.

    • ”identical”: all values, dimensions and attributes must be the same.

    • ”no_conflicts”: only values which are not null in both datasets must be equal. The returned dataset then contains the combination of all non-null values.

    • ”override”: skip comparing and pick variable from first dataset

  • engine ({'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'gdal'}, optional) – Engine to use when reading netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’.

  • t_format (str) – The format string used to extract the time information from the file names. Any numeric datetime format string (e.g. %Y, %m, %H) can be used, but not non-numeric directives like %b. See [1] for the datetime format strings

  • gridfile (str) – The path to a separate grid file or an xarray.Dataset instance that may store the coordinates used in the dataset

Returns:

The dataset that contains the variables from paths

Return type:

xarray.Dataset
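The compat="no_conflicts" rule above can be illustrated with plain dictionaries, using None to stand in for null values (a conceptual sketch of the merge semantics, not xarray's implementation):

```python
def merge_no_conflicts(a, b):
    # Values that are non-null in both mappings must agree; otherwise
    # the non-null value wins, mirroring xarray's compat='no_conflicts'.
    merged = {}
    for key in list(a) + [k for k in b if k not in a]:
        va, vb = a.get(key), b.get(key)
        if va is not None and vb is not None and va != vb:
            raise ValueError(f"conflicting values for {key!r}")
        merged[key] = va if va is not None else vb
    return merged

merged = merge_no_conflicts({"x": 1, "y": None}, {"x": 1, "y": 2})
```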

psyplot.data.setup_coords(arr_names=None, sort=[], dims={}, **kwargs)[source]

Sets up the arr_names dictionary for the plot

Parameters:
  • arr_names (string, list of strings or dictionary) –

    Set the unique array names of the resulting arrays and (optionally) dimensions.

    • if string: same as list of strings (see below). Strings may include {0} which will be replaced by a counter.

    • list of strings: those will be used for the array names. The final number of dictionaries in the return depends in this case on dims and **kwargs

    • dictionary: Then nothing happens and a dict version of arr_names is returned.

  • sort (list of strings) – This parameter defines how the dictionaries are ordered. It has no effect if arr_names is a dictionary (use an ordered dictionary in that case). It can be a list of dimension strings matching the dimensions in dims for the variable.

  • dims (dict) – Keys must be variable names of dimensions (e.g. time, level, lat or lon) or 'name' for the variable name you want to choose. Values must be values of that dimension or iterables of the values (e.g. lists). Note that strings will be put into a list. For example dims = {'name': 't2m', 'time': 0} will result in one plot for the first time step, whereas dims = {'name': 't2m', 'time': [0, 1]} will result in two plots, one for the first (time == 0) and one for the second (time == 1) time step.

  • **kwargs – The same as dims (those will update what is specified in dims)

Returns:

A mapping from the keys in arr_names to dictionaries. Each dictionary defines the coordinates of one data array to load

Return type:

dict
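The expansion described above can be sketched with itertools.product (a simplified stand-in that only covers the string-template and list-of-values cases, not psyplot's full implementation):

```python
from itertools import product

def setup_coords_sketch(arr_name_template="arr{0}", **dims):
    # Normalize scalar dimension values to one-element lists, then take
    # the cartesian product of all dimension values: one coordinate
    # dictionary per combination.
    dims = {k: v if isinstance(v, list) else [v] for k, v in dims.items()}
    keys = list(dims)
    return {
        arr_name_template.format(i): dict(zip(keys, combo))
        for i, combo in enumerate(product(*dims.values()))
    }

coords = setup_coords_sketch(name="t2m", time=[0, 1])
```

As in the dims example above, name='t2m' with time=[0, 1] expands to two coordinate dictionaries, one per time step.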

psyplot.data.t_patterns = {'%H': '[0-9]{1,2}', '%M': '[0-9]{1,2}', '%S': '[0-9]{1,2}', '%Y': '[0-9]{4}', '%d': '[0-9]{1,2}', '%m': '[0-9]{1,2}'}

mapping that translates datetime format strings to regex patterns
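For illustration, this mapping can be used to turn a datetime format string into a regular expression that locates the time stamp in a file name (a sketch of the mechanism; the helper function name is made up here):

```python
import re

# Assumed copy of psyplot.data.t_patterns
t_patterns = {'%Y': '[0-9]{4}', '%m': '[0-9]{1,2}', '%d': '[0-9]{1,2}',
              '%H': '[0-9]{1,2}', '%M': '[0-9]{1,2}', '%S': '[0-9]{1,2}'}

def t_format_to_regex(t_format):
    # Escape regex metacharacters in the format string, then replace
    # each datetime directive with its capturing number pattern.
    pattern = re.escape(t_format)
    for directive, regex in t_patterns.items():
        pattern = pattern.replace(re.escape(directive), "(" + regex + ")")
    return pattern

match = re.search(t_format_to_regex("%Y-%m-%d"), "t2m_2020-01-15.nc")
```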

psyplot.data.to_netcdf(ds, *args, **kwargs)[source]

Store the given dataset as a netCDF file

This functions works essentially the same as the usual xarray.Dataset.to_netcdf() method but can also encode absolute time units

Parameters:
  • ds (xarray.Dataset) – The dataset to store

  • path (str, path-like or file-like, optional) – Path to which to save this dataset. File-like objects are only supported by the scipy engine. If no path is provided, this function returns the resulting netCDF file as bytes; in this case, we need to use scipy, which does not support netCDF version 4 (the default format becomes NETCDF3_64BIT).

  • mode ({"w", "a"}, default: "w") – Write (‘w’) or append (‘a’) mode. If mode=’w’, any existing file at this location will be overwritten. If mode=’a’, existing variables will be overwritten.

  • format ({"NETCDF4", "NETCDF4_CLASSIC", "NETCDF3_64BIT", "NETCDF3_CLASSIC"}, optional) –

    File format for the resulting netCDF file:

    • NETCDF4: Data is stored in an HDF5 file, using netCDF4 API features.

    • NETCDF4_CLASSIC: Data is stored in an HDF5 file, using only netCDF 3 compatible API features.

    • NETCDF3_64BIT: 64-bit offset version of the netCDF 3 file format, which fully supports 2+ GB files, but is only compatible with clients linked against netCDF version 3.6.0 or later.

    • NETCDF3_CLASSIC: The classic netCDF 3 file format. It does not handle 2+ GB files very well.

    All formats are supported by the netCDF4-python library. scipy.io.netcdf only supports the last two formats.

    The default format is NETCDF4 if you are saving a file to disk and have the netCDF4-python library available. Otherwise, xarray falls back to using scipy to write netCDF files and defaults to the NETCDF3_64BIT format (scipy does not support netCDF4).

  • group (str, optional) – Path to the netCDF4 group in the given file to open (only works for format=’NETCDF4’). The group(s) will be created if necessary.

  • engine ({"netcdf4", "scipy", "h5netcdf"}, optional) – Engine to use when writing netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’ if writing to a file on disk.

  • encoding (dict, optional) –

    Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}, ...}. If encoding is specified the original encoding of the variables of the dataset is ignored.

    The h5netcdf engine supports both the NetCDF4-style compression encoding parameters {"zlib": True, "complevel": 9} and the h5py ones {"compression": "gzip", "compression_opts": 9}. This allows using any compression plugin installed in the HDF5 library, e.g. LZF.

psyplot.data.to_slice(arr)[source]

Test whether arr is an integer array that can be replaced by a slice

Parameters:

arr (numpy.ndarray) – Numpy integer array

Returns:

If arr could be converted to a slice, this is returned, otherwise None is returned

Return type:

slice or None
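The check can be sketched in plain Python (a simplified stand-in for the actual implementation, operating on lists instead of numpy arrays):

```python
def to_slice_sketch(arr):
    # An evenly spaced, strictly increasing integer sequence is
    # equivalent to a slice; anything else returns None.
    arr = list(arr)
    if not arr:
        return None
    if len(arr) == 1:
        return slice(arr[0], arr[0] + 1)
    step = arr[1] - arr[0]
    if step <= 0 or any(b - a != step for a, b in zip(arr, arr[1:])):
        return None
    return slice(arr[0], arr[-1] + 1, step)

to_slice_sketch([2, 4, 6])   # equivalent to slice(2, 7, 2)
to_slice_sketch([1, 2, 4])   # None: not evenly spaced
```

Replacing integer index arrays by slices like this allows lazy, strided reads from the underlying datastore instead of fancy indexing.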