Scenes

Scenes and SceneCollections are the primary data structures in mira. A Scene stores an image together with its annotations, while a SceneCollection combines a number of scenes into a single structure.

class mira.core.Scene(categories, image, annotations=None, metadata=None, cache=False, masks=None, labels=None)[source]

A single annotated image.

Parameters
  • categories (Union[List[str], Categories]) – The set of annotation categories for the image.

  • annotations (Optional[List[Annotation]]) – The list of annotations.

  • image (Union[ndarray, str]) – The image that was annotated. Can be lazy-loaded by passing a string filepath.

  • metadata (Optional[dict]) – Metadata about the scene as a dictionary.

  • cache (bool) – Defines caching behavior for the image. If True, image is loaded into memory the first time that the image is requested. If False, image is loaded from the file path or URL whenever the image is requested.

  • masks (Optional[List[MaskRegion]]) – A list of MaskRegion dictionaries that determine which parts of the image are shown or hidden.
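
For example, a scene might be built from an image file on disk with caching enabled. This is only a sketch: the file path, category names, and metadata are placeholders, and annotations are omitted here.

from mira.core import Scene

# Lazily load the image from a file path and keep it in memory after the
# first read (cache=True). The path and category names are illustrative.
scene = Scene(
    categories=["dog", "cat"],
    image="images/example.jpg",
    metadata={"source": "example"},
    cache=True,
)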

annotated(dpi=72, fontsize='x-large', labels=True, opaque=False, color=(255, 0, 0))[source]

Draw the annotations onto the image and return the result.

Parameters
  • dpi – The resolution for the image

  • fontsize – The font size to use for labels

  • labels – Whether or not to show labels

  • opaque – Whether to draw annotations filled in.

  • color – The color to use for annotations.

Return type

ndarray
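
As a sketch of typical usage, the returned array can be handed to any image display routine; matplotlib is used here only for display and is not required by the method. scene is assumed to be an annotated Scene object.

import matplotlib.pyplot as plt

# Render filled, green annotations without labels and display the result.
rendered = scene.annotated(opaque=True, labels=False, color=(0, 255, 0))
plt.imshow(rendered)
plt.show()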

assign(**kwargs)[source]

Get a new scene with only the supplied keyword arguments changed.

Return type

Scene

augment(augmenter=None, min_visibility=None)[source]

Obtain an augmented version of the scene using the given augmenter.

Return type

Tuple[Scene, ndarray]

Returns

The augmented scene

bboxes()[source]

Obtain an array of shape (N, 5) where the columns are x1, y1, x2, y2, and class_index, with class_index determined from the annotation configuration.
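
For instance, box geometry can be read straight out of the returned array (a sketch; scene here is assumed to be an annotated Scene):

boxes = scene.bboxes()
# Columns are x1, y1, x2, y2, class_index.
widths = boxes[:, 2] - boxes[:, 0]
heights = boxes[:, 3] - boxes[:, 1]
class_indices = boxes[:, 4].astype(int)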

compute_iou(other)[source]

Obtain the inter-scene annotation IoU.

Parameters

other (Scene) – The other scene with which to compare.

Returns

A matrix of shape (N, M) where N is the number of annotations in this scene and M is the number of annotations in the other scene. Each value represents the IoU between the two annotations. A negative IoU value means the annotations overlapped but they were for different classes.
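
A sketch of matching predictions against ground truth with this matrix; predicted and ground_truth are assumed to be Scene objects sharing the same categories, and the 0.5 threshold is illustrative:

iou = predicted.compute_iou(ground_truth)
# Best-matching ground-truth annotation for each annotation in `predicted`.
best_match = iou.argmax(axis=1)
# Negative entries indicate overlaps between different classes, so they
# fall below the threshold and are excluded.
matched = iou.max(axis=1) >= 0.5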

deferred_image()[source]

Create a deferred image.

Return type

Callable[[], ndarray]

property dimensions

Get the size of the image, attempting to determine it without reading the entire file, if possible.

Return type

Dimensions

drop_duplicates(threshold=1, method='iou')[source]

Remove annotations of the same class where one annotation covers a similar or equal area to another.

Parameters
  • method (Literal[‘iou’, ‘coverage’]) – Whether to check overlap by “coverage” (i.e., is X% of box A contained by some larger box B) or “iou” (intersection-over-union). IoU is, of course, more strict.

  • threshold – The threshold for equality. Boxes are retained if there is no larger box with which the overlap is greater than or equal to this threshold.
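
For example, the following sketch keeps a box only if no larger box of the same class covers at least 90% of it:

# Drop boxes that are at least 90% covered by a larger box of the same class.
deduplicated = scene.drop_duplicates(threshold=0.9, method="coverage")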

filepath(directory=None)[source]

Get a filepath for this image. If it is not currently a file, a file will be created in a temporary directory.

classmethod fromString(string)[source]

Deserialize scene from string.

classmethod from_qsl(item, label_key, categories, base_dir=None)[source]

Create a scene from a set of QSL labels.

Parameters
  • item (Dict) – The QSL labeling item.

  • label_key (str) – The key for the region label to use for annotation.

  • categories (Categories) – The annotation configuration for the resulting scene.

property image

The image that is being annotated

Return type

ndarray

property image_bytes

Get the image as a PNG encoded to bytes.

Return type

bytes
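
A minimal sketch of writing the encoded image to disk (the output filename is illustrative):

with open("scene.png", "wb") as f:
    f.write(scene.image_bytes)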

classmethod load(filepath)[source]

Load a scene from a filepath.

resize(resize_config)[source]

Resize a scene using a custom resizing configuration.

scores(level='annotation')[source]

Obtain an array containing the confidence score for each annotation.

segmentation_map(binary, threshold=0.5)[source]

Creates a segmentation map using the annotation scores.

Return type

ndarray

show(annotation_kwargs=None, **kwargs)[source]

Show an annotated version of the image. All arguments are passed to mira.core.utils.imshow().

Return type

Axes

show_annotations(**kwargs)[source]

Show annotations as individual plots. All arguments are passed to plt.subplots().

toString(extension='.png')[source]

Serialize scene to string.
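
A sketch of a round trip through the string representation, assuming Scene has been imported as in the constructor example above:

serialized = scene.toString(extension=".png")
restored = Scene.fromString(serialized)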

to_subcrops(max_size)[source]

Split a scene into subcrops of some maximum size while trying to avoid splitting annotations.

Parameters

max_size (int) – The maximum size of a crop (it may be smaller at the edges of an image).

Return type

List[Scene]
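
For example, a large scene could be broken into crops of at most 512 pixels per side (the size is illustrative):

crops = scene.to_subcrops(max_size=512)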

class mira.core.SceneCollection(scenes, categories=None)[source]

A collection of scenes.

Parameters
  • categories (Optional[Categories]) – The configuration that should be used for all underlying scenes.

  • scenes (List[Scene]) – The list of scenes.
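
For example, a collection can be built from a list of scenes like the one constructed earlier; the scenes variable below is a placeholder for such a list:

from mira.core import SceneCollection

# `scenes` is assumed to be a list of Scene objects built as shown above.
collection = SceneCollection(scenes=scenes)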

annotation_groups()[source]

The groups of annotations in the collection.

annotation_sizes()[source]

An array of dimensions for the annotations in the collection.

assign(**kwargs)[source]

Obtain a new scene collection with the given keyword arguments changed. If categories is provided, the annotations are converted to the new categories first.

Return type

SceneCollection

Returns

A new scene collection

augment(augmenter, **kwargs)[source]

Obtain an augmented version of the given collection. All arguments are passed to Scene.augment().

property categories

The annotation configuration

consistent()[source]

Specifies whether all scenes have the same annotation configuration.

deferred_images()[source]

Returns a series of callables that, when called, will load the image.

filter(path, value)[source]

Find scenes in the collection based on metadata.

classmethod from_qsl(jsonpath, label_key, base_dir=None)[source]

Build a scene collection from a QSL JSON project file.

image_sizes()[source]

An array of dimensions for the images in the collection.

images()[source]

All the images for a scene collection. All images will be loaded if not already cached.

label_groups()[source]

The groups of labels in the collection.

Return type

List[List[Label]]

classmethod load(filename, directory=None, force=False)[source]

Load a scene collection from a tarball. If a directory is provided, images will be saved into that directory rather than retained in memory.

classmethod load_from_directory(directory)[source]

Load a dataset that has already been extracted into a directory.

onehot(binary=True)[source]

Get the one-hot encoded (N, C) array for this scene collection. If binary is false, the score is used instead of 0/1.

Return type

ndarray
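
A sketch of using the encoding to count scenes per category; the returned value is an ndarray, so standard numpy reductions apply:

encoded = collection.onehot(binary=True)  # shape (N, C)
# Number of scenes marked positive for each category.
counts_per_category = encoded.sum(axis=0)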

sample(n, replace=True)[source]

Get a random subsample of this collection.

Return type

SceneCollection
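
For example, to pull a small random subset for quick inspection (the sample size is illustrative):

subset = collection.sample(n=16, replace=False)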

save(filename, **kwargs)[source]

Save the scene collection to a tarball.
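
A sketch of a save/load round trip; the tarball name is illustrative:

collection.save("dataset.tar.gz")
reloaded = SceneCollection.load("dataset.tar.gz")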

save_placeholder(filename, colormap)[source]

Create a placeholder scene collection representing blank images with black blobs drawn on in the location of annotations. Useful for testing whether a detector has any chance of working with a given dataset.

Parameters
  • filename (str) – The tarball to which the dummy dataset should be saved.

  • colormap (Dict[str, Tuple[int, int, int]]) – A mapping of annotation categories to colors, used for drawing the annotations onto a canvas.
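
For example (the filename is illustrative and the colormap assumes the collection uses "dog" and "cat" categories):

collection.save_placeholder(
    filename="placeholder.tar.gz",
    colormap={"dog": (255, 0, 0), "cat": (0, 0, 255)},
)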

property scenes

The list of scenes

split(sizes, random_state=42, stratify=None, group=None, preserve=None)[source]

Obtain new scene collections, split based on a given set of proportions.

For example, to get three collections containing 70%, 15%, and 15% of the dataset, respectively, you can do something like the following:

training, validation, test = collection.split(
    sizes=[0.7, 0.15, 0.15]
)

You can also use the stratify argument to ensure an even split between different kinds of scenes. For example, to split scenes containing at least 3 annotations proportionally, do something like the following:

training, validation, test = collection.split(
    sizes=[0.7, 0.15, 0.15],
    stratify=[len(s.annotations) >= 3 for s in collection]
)

Finally, you can make sure certain scenes end up in the same split (e.g., if they're crops from the same base image) using the group argument:

training, validation, test = collection.split(
    sizes=[0.7, 0.15, 0.15],
    stratify=[len(s.annotations) >= 3 for s in collection],
    group=[s.metadata["origin"] for s in collection]
)

Return type

Sequence[SceneCollection]

Returns

The new scene collections, one for each entry in sizes.

uniform()[source]

Specifies whether all scenes in the collection are of the same size. Note: This will trigger an image load.