Releases: roboflow/supervision

supervision-0.21.0

06 Jun 06:43
e50c761

πŸ“… Timeline

The supervision-0.21.0 release is around the corner. Here is the timeline:

  • 5 Jun 2024 08:00 PM CEST (UTC +2) / 5 Jun 2024 11:00 AM PDT (UTC -7) - merge develop into main, closing the list of supervision-0.21.0 features
  • 6 Jun 2024 11:00 AM CEST (UTC +2) / 6 Jun 2024 02:00 AM PDT (UTC -7) - release supervision-0.21.0

πŸͺ΅ Changelog

πŸš€ Added

[image: non-max-merging]
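The image above showcases non-max merging. A minimal sketch, assuming the sv.Detections.with_nmm method and its threshold parameter introduced in this release:

import numpy as np
import supervision as sv

# two heavily overlapping boxes of the same class
detections = sv.Detections(
    xyxy=np.array([[10, 10, 100, 100], [20, 20, 110, 110]], dtype=float),
    confidence=np.array([0.9, 0.8]),
    class_id=np.array([0, 0]),
)

# merge overlapping detections instead of suppressing the weaker ones
merged = detections.with_nmm(threshold=0.5)

sv.Detections.from_lmm allowing to parse results from LMMs (currently PaliGemma) into sv.Detections: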

import supervision as sv

paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
    sv.LMM.PALIGEMMA,
    paligemma_result,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog']
)
detections.xyxy
# array([[250., 250., 750., 750.]])

detections.class_id
# array([0])
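
sv.VertexLabelAnnotator allowing to annotate key points with customizable text labels and colors: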
import supervision as sv

image = ...
key_points = sv.KeyPoints(...)

LABELS = [
    "nose", "left eye", "right eye", "left ear",
    "right ear", "left shoulder", "right shoulder", "left elbow",
    "right elbow", "left wrist", "right wrist", "left hip",
    "right hip", "left knee", "right knee", "left ankle",
    "right ankle"
]

COLORS = [
    "#FF6347", "#FF6347", "#FF6347", "#FF6347",
    "#FF6347", "#FF1493", "#00FF00", "#FF1493",
    "#00FF00", "#FF1493", "#00FF00", "#FFD700",
    "#00BFFF", "#FFD700", "#00BFFF", "#FFD700",
    "#00BFFF"
]
COLORS = [sv.Color.from_hex(color_hex=c) for c in COLORS]

vertex_label_annotator = sv.VertexLabelAnnotator(
    color=COLORS,
    text_color=sv.Color.BLACK,
    border_radius=5
)
annotated_frame = vertex_label_annotator.annotate(
    scene=image.copy(),
    key_points=key_points,
    labels=LABELS
)

[image: vertex-label-annotator-custom-example]

[image: mask-to-rle]
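A minimal sketch of the mask-to-RLE utilities pictured above, assuming the sv.mask_to_rle and sv.rle_to_mask helpers and their signatures:

import numpy as np
import supervision as sv

# a tiny binary mask with a single foreground pixel
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True

rle = sv.mask_to_rle(mask)                            # run-length encode the mask
restored = sv.rle_to_mask(rle, resolution_wh=(3, 3))  # decode it back
assert (mask == restored).all()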

🌱 Changed
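
sv.InferenceSlicer adding support for segmentation models, as shown below: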

import cv2
import numpy as np
import supervision as sv
from inference import get_model

model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread(<SOURCE_IMAGE_PATH>)

def callback(image_slice: np.ndarray) -> sv.Detections:
    results = model.infer(image_slice)[0]
    return sv.Detections.from_inference(results)

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

[image: inference-slicer-segmentation-example]


πŸ† Contributors

@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @rolson24 (Raif Olson), @mario-dg (Mario da Graca), @xaristeidou (Christoforos Aristeidou), @ManzarIMalik (Manzar Iqbal Malik), @tc360950 (Tomasz CΔ…kaΕ‚a), @emSko, @SkalskiP (Piotr Skalski)

supervision-0.20.0

24 Apr 20:49
f7f40f0

πŸš€ Added
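
sv.KeyPoints.from_ultralytics and sv.EdgeAnnotator allowing to annotate pose-estimation key points with skeleton edges: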

import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), keypoints)

[image: edge-annotator-example]
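
sv.VertexAnnotator allowing to annotate key points with filled vertex markers: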

import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

vertex_annotator = sv.VertexAnnotator(color=sv.Color.GREEN, radius=10)
annotated_image = vertex_annotator.annotate(image.copy(), keypoints)

[image: vertex-annotator-example]

🌱 Changed

  • sv.LabelAnnotator by adding an additional corner_radius argument that allows for rounding the corners of the bounding box. (#1037)

  • sv.PolygonZone so that the frame_resolution_wh argument is no longer required for initialization (see the sketch after the warning below). (#1109)

Warning

The frame_resolution_wh parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.24.0.
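
A minimal sketch of initializing a zone without frame_resolution_wh (the polygon coordinates are hypothetical):

import numpy as np
import supervision as sv

# the zone is now defined by its polygon alone
polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)

detections = sv.Detections(...)
in_zone = zone.trigger(detections)  # boolean array: which detections fall inside the zone

The DETR panoptic segmentation example below demonstrates sv.Detections.from_transformers on segmentation output: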

import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
    outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

πŸ› οΈ Fixed

πŸ† Contributors

@onuralpszr (Onuralp SEZER), @rolson24 (Raif Olson), @xaristeidou (Christoforos Aristeidou), @jeslinpjames (Jeslin P James), @Griffin-Sullivan (Griffin Sullivan), @PawelPeczek-Roboflow (PaweΕ‚ PΔ™czek), @pirnerjonas (Jonas Pirner), @sharingan000, @macc-n, @LinasKo (Linas Kondrackis), @SkalskiP (Piotr Skalski)

supervision-0.19.0

15 Mar 12:04
55f93a8

πŸ§‘β€πŸ³ Cookbooks

Supervision Cookbooks - A curated open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for leveraging Supervision alongside diverse Computer Vision models. (#860)

πŸš€ Added

  • sv.CSVSink allowing for the straightforward saving of image, video, or stream inference results in a .csv file. (#818)
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
[video: traffic_csv_2.mp4]
  • sv.JSONSink allowing for the straightforward saving of image, video, or stream inference results in a .json file. (#819)
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
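
  • sv.CropAnnotator allowing to annotate images and videos with scaled-up crops of detections.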
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
[video: supervision-0.19.0-promo.mp4]

🌱 Changed

  • sv.ByteTrack.reset allowing users to clear tracker state, enabling the processing of multiple video files in sequence (see the sketch after this list). (#827)
  • sv.LineZoneAnnotator allowing to hide in/out count using display_in_count and display_out_count properties. (#802)
  • sv.ByteTrack input arguments and docstrings updated to improve readability and ease of use. (#787)

Warning

The track_buffer, track_thresh, and match_thresh parameters in sv.ByteTrack are deprecated and will be removed in supervision-0.23.0. Use lost_track_buffer, track_activation_threshold, and minimum_matching_threshold instead.
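
A minimal sketch of resetting the tracker between videos (the placeholder paths follow the document's convention and are hypothetical):

import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
tracker = sv.ByteTrack()

for video_path in [<SOURCE_VIDEO_A_PATH>, <SOURCE_VIDEO_B_PATH>]:
    tracker.reset()  # clear tracker state so IDs do not leak across videos
    for frame in sv.get_video_frames_generator(video_path):
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        detections = tracker.update_with_detections(detections)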

  • sv.PolygonZone to now accept a list of specific box anchors that must be in the zone for a detection to be counted (see the sketch after the warning below). (#910)

Warning

The triggering_position parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.23.0. Use triggering_anchors instead.
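
A minimal sketch of counting detections by a single anchor (the polygon and frame resolution are hypothetical; frame_resolution_wh was still required at this version and deprecated in supervision-0.20.0):

import numpy as np
import supervision as sv

zone = sv.PolygonZone(
    polygon=np.array([[100, 100], [400, 100], [400, 400], [100, 400]]),
    frame_resolution_wh=(640, 480),
    triggering_anchors=[sv.Position.BOTTOM_CENTER],  # count by feet, not the full box
)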

  • Annotators adding support for Pillow images. All supervision Annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input. (#875)
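
A minimal sketch of annotating a Pillow image (the output type matches the input type):

import supervision as sv
from PIL import Image

image = Image.open(<SOURCE_IMAGE_PATH>)
detections = sv.Detections(...)

annotator = sv.BoundingBoxAnnotator()
annotated = annotator.annotate(scene=image, detections=detections)
# `annotated` is a PIL.Image.Image, matching the input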

πŸ› οΈ Fixed

πŸ† Contributors

@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @LeviVasconcelos (Levi Vasconcelos), @AdonaiVera (Adonai Vera), @xaristeidou (Christoforos Aristeidou), @Kadermiyanyedi (Kader Miyanyedi), @NickHerrig (Nick Herrig), @PacificDou (Shuyang Dou), @iamhatesz (Tomasz Wrona), @capjamesg (James Gallagher), @sansyo, @SkalskiP (Piotr Skalski)

supervision-0.18.0

25 Jan 09:46
53f4cde

πŸš€ Added

  • sv.PercentageBarAnnotator allowing to annotate images and videos with percentage values representing confidence or another custom property. (#720)
import supervision as sv

image = ...
detections = sv.Detections(...)

percentage_bar_annotator = sv.PercentageBarAnnotator()
annotated_frame = percentage_bar_annotator.annotate(
    scene=image.copy(),
    detections=detections
)

[image: percentage-bar-annotator-example-purple]

[video: supervision-detection-smoothing.mp4]
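The video above showcases detection smoothing. A minimal sketch, assuming the sv.DetectionsSmoother utility added around this release and its update_with_detections method:

import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
tracker = sv.ByteTrack()
smoother = sv.DetectionsSmoother()

for frame in sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update_with_detections(detections)  # smoothing works per tracked object
    detections = smoother.update_with_detections(detections)

sv.OrientedBoxAnnotator allowing to annotate detections with oriented bounding boxes: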
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)

[image: oriented-box-annotator]
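
sv.ColorPalette.from_matplotlib allowing users to create an sv.ColorPalette instance from a matplotlib color map: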

import supervision as sv

sv.ColorPalette.from_matplotlib('viridis', 5)
# ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])

[image: visualized_color_palette]

🌱 Changed

  • sv.Detections.from_ultralytics adding support for OBB (Oriented Bounding Boxes). (#770)
  • sv.LineZone to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement from the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as sv.Position.BOTTOM_CENTER, or any other combination of anchors defined as List[sv.Position]. (#735)
  • sv.Detections to support custom payload. (#700)
  • sv.Color's and sv.ColorPalette's method of accessing predefined colors, transitioning from a function-based approach (sv.Color.red()) to a more intuitive and conventional property-based method (sv.Color.RED). (#756) (#769)

Warning

sv.ColorPalette.default() is deprecated and will be removed in supervision-0.21.0. Use sv.ColorPalette.DEFAULT instead.

[image: default-color-palette]

Warning

Detections.from_roboflow() is deprecated and will be removed in supervision-0.21.0. Use Detections.from_inference instead.

import cv2
import supervision as sv
from inference.models.utils import get_roboflow_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_roboflow_model(model_id="yolov8s-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

πŸ› οΈ Fixed

  • sv.LineZone functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road (see the sketch below). (#735)
[video: supervision-0.18.0-promo-sample-2-result.mp4]
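
A minimal sketch of per-lane counting with the updated sv.LineZone (the line coordinates and anchor choice are hypothetical):

import supervision as sv

lane_line = sv.LineZone(
    start=sv.Point(x=0, y=540),
    end=sv.Point(x=960, y=540),
    triggering_anchors=[sv.Position.BOTTOM_CENTER],
)

detections = sv.Detections(...)  # tracked detections with tracker_id set
crossed_in, crossed_out = lane_line.trigger(detections)
print(lane_line.in_count, lane_line.out_count)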

πŸ† Contributors

@onuralpszr (Onuralp SEZER), @HinePo (Rafael Levy), @xaristeidou (Christoforos Aristeidou), @revtheundead (Utku Γ–zbek), @paulguerrie (Paul Guerrie), @yeldarby (Brad Dwyer), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.17.1

08 Dec 14:21
bcb26f9

πŸš€ Added

  • Support for Python 3.12.

πŸ† Contributors

@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski)

supervision-0.17.0

06 Dec 15:22
36ab9dc

πŸš€ Added

[video: walking-pixelate-corner-optimized.mp4]
  • sv.TriangleAnnotator allowing to annotate images and videos with triangle markers (see the sketch after the sv.PolygonAnnotator example below). (#652)

  • sv.PolygonAnnotator allowing to annotate images and videos with segmentation mask outlines. (#602)

    >>> import supervision as sv
    
    >>> image = ...
    >>> detections = sv.Detections(...)
    
    >>> polygon_annotator = sv.PolygonAnnotator()
    >>> annotated_frame = polygon_annotator.annotate(
    ...     scene=image.copy(),
    ...     detections=detections
    ... )
[video: walking-polygon-optimized.mp4]
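
For sv.TriangleAnnotator, usage mirrors the other annotators; a minimal sketch:

    >>> import supervision as sv

    >>> image = ...
    >>> detections = sv.Detections(...)

    >>> triangle_annotator = sv.TriangleAnnotator()
    >>> annotated_frame = triangle_annotator.annotate(
    ...     scene=image.copy(),
    ...     detections=detections
    ... )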

🌱 Changed

[image: mask_annotator_speed]

πŸ› οΈ Fixed

πŸ† Contributors

@onuralpszr (Onuralp SEZER), @hugoles (Hugo Dutra), @karanjakhar (Karan Jakhar), @kim-jeonghyun (Jeonghyun Kim), @fdloopes (Felipe Lopes), @abhishek7kalra (Abhishek Kalra), @SummitStudiosDev, @xenteros, @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.16.0

19 Oct 08:26
f34993c

πŸš€ Added

[video: supervision-0.16.0-annotators.mp4]

sv.HaloAnnotator allowing to annotate images and videos with a halo effect:
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )

🌱 Changed

  • sv.LineZone.trigger now returns Tuple[np.ndarray, np.ndarray]. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside. (#482)
  • Annotator argument name from color_map: str to color_lookup: ColorLookup enum to increase type safety. (#465)
  • sv.MaskAnnotator allowing 2x faster annotation. (#426)
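
A minimal sketch of the new trigger return value (line_zone and detections are assumed to already exist):

>>> crossed_in, crossed_out = line_zone.trigger(detections)
>>> crossed_in.sum(), crossed_out.sum()  # how many detections crossed each way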

πŸ› οΈ Fixed

  • Poetry env definition allowing proper local installation. (#477)
  • sv.ByteTrack to return np.array([], dtype=int) when sv.Detections is empty. (#430)
  • YOLO-NAS detection: added the missing prediction-handling part. (#416)
  • SAM demo notebook: MaskAnnotator color_map set to index via MaskAnnotator(color_map="index"). (#416)

πŸ—‘οΈ Deleted

Warning

Deleted sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 as those are now replaced by sv.Detections.from_ultralytics and sv.Classifications.from_ultralytics. (#438)

πŸ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @kapter, @keshav278 (Keshav Subramanian), @akashpambhar (Akash Pambhar), @AntonioConsiglio (Antonio Consiglio), @ashishdatta, @mario-dg (Mario da Graca), @jayaBalaR (JAYABALAMBIKA.R), @abhishek7kalra (Abhishek Kalra), @PankajKrana (Pankaj Kumar Rana), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.15.0

05 Oct 07:54
1bddf26

πŸš€ Added

[video: supervision-0.15.0.mp4]

sv.BoundingBoxAnnotator allowing to annotate images and videos with bounding boxes:
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
  • Supervision usage example: you can now learn how to perform traffic flow analysis with Supervision. (#354)
[video: traffic_analysis_result.mov]

🌱 Changed

πŸ› οΈ Fixed

πŸ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @Killua7362 (Akshay Bhat), @fcakyon (Fatih C. Akyon), @akashAD98 (Akash A Desai), @Rajarshi-Misra (Rajarshi Misra), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.14.0

31 Aug 13:23
f82f0fa

πŸš€ Added
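
sv.InferenceSlicer allowing to run slicing-based inference, a technique popularized by SAHI: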

>>> import cv2
>>> import supervision as sv
>>> import numpy as np
>>> from ultralytics import YOLO

>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)

>>> def callback(image_slice: np.ndarray) -> sv.Detections:
...     result = model(image_slice)[0]
...     return sv.Detections.from_ultralytics(result)

>>> slicer = sv.InferenceSlicer(callback=callback)

>>> detections = slicer(image)
[video: inference-slicer.mov]
[video: detect-and-track-objects-on-video.mov]

🌱 Changed

πŸ› οΈ Fixed

πŸ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.13.0

08 Aug 09:17
4f79d29

πŸš€ Added
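
sv.MeanAveragePrecision.benchmark allowing to evaluate object-detection models against a dataset: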

>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> dataset = sv.DetectionDataset.from_yolo(...)

>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
...     result = model(image)[0]
...     return sv.Detections.from_yolov8(result)

>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
...     dataset = dataset,
...     callback = callback
... )

>>> mean_average_precision.map50_95
0.433
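
sv.ByteTrack allowing to track detections across video frames: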
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> annotator = sv.BoxAnnotator()

>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
...     results = model(frame)[0]
...     detections = sv.Detections.from_yolov8(results)
...     detections = byte_tracker.update_with_detections(detections=detections)
...     labels = [
...         f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
...         for _, _, confidence, class_id, tracker_id
...         in detections
...     ]
...     return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)

>>> sv.process_video(
...     source_path='...',
...     target_path='...',
...     callback=callback
... )
[video: byte_track_result_small.mp4]

πŸ† Contributors

@hardikdava (Hardik Dava), @kirilllzaitsev (Kirill Zaitsev), @onuralpszr (Onuralp SEZER), @dbroboflow, @mayankagarwals (Mayank Agarwal), @danigarciaoca (Daniel M. GarcΓ­a-OcaΓ±a), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)