MilleVisionNotes

This site focuses on practical examples of controlling Basler industrial cameras
using the pylon SDK and C#, based on real-world development experience.

Tag: .NET

  • Implementing Background Subtraction (Motion Detection) in WPF with Basler pylon SDK + OpenCV (C# / .NET 8)

    Implementing Background Subtraction (Motion Detection) in WPF with Basler pylon SDK + OpenCV (C# / .NET 8)

    Once your live view is working, a natural next step is highlighting only what moved. In this article, we implement a simple background subtraction pipeline to detect motion regions, draw bounding boxes, and display the results in a WPF Image.


    ✅ Environment

    Item            Details
    Camera          Basler acA2500-14gm
    SDK             pylon Camera Software Suite
    Language / GUI  C# / .NET 8 / WPF
    Libraries       OpenCvSharp4 (OpenCvSharp4.Windows, OpenCvSharp4.WpfExtensions)

    Because this implementation converts between BitmapSource and Mat, install the NuGet package:

    • OpenCvSharp4.WpfExtensions

    Recommended camera settings for stable subtraction: set ExposureAuto=Off and GainAuto=Off, and keep illumination stable (reduce flicker).

    Prerequisites from earlier posts (reference):


    Implementation Overview

    We will follow this workflow:

    1. Capture a background frame (a frame with no motion)
    2. Preprocess with Gaussian blur
    3. Compute absolute difference (AbsDiff) → threshold (Threshold) → opening (MorphologyEx(Open))
    4. Extract contours (FindContours) → draw bounding rectangles
    5. Display the processed frame in WPF

    Adding opening after threshold helps remove small speckle noise.
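    As a reference, here is a minimal OpenCvSharp sketch of steps 2 to 4. The names _backgroundBlurred, threshold, and minArea are illustrative placeholders, not the exact code from the sample project:

    // Minimal sketch of the subtraction pipeline (Mono8 input assumed).
    // _backgroundBlurred is a cached, pre-blurred background frame (CV_8UC1).
    private Mat DetectMotion(Mat frame, double threshold, double minArea)
    {
        using var blurred = new Mat();
        Cv2.GaussianBlur(frame, blurred, new Size(21, 21), 0);

        using var diff = new Mat();
        Cv2.Absdiff(_backgroundBlurred, blurred, diff);

        using var bin = new Mat();
        Cv2.Threshold(diff, bin, threshold, 255, ThresholdTypes.Binary);

        // Opening removes the small speckle noise left after thresholding.
        using var kernel = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(3, 3));
        Cv2.MorphologyEx(bin, bin, MorphTypes.Open, kernel, iterations: 2);

        Cv2.FindContours(bin, out Point[][] contours, out _,
            RetrievalModes.External, ContourApproximationModes.ApproxSimple);

        var result = frame.Clone();
        foreach (var contour in contours)
        {
            if (Cv2.ContourArea(contour) < minArea) continue; // min-area filter
            Cv2.Rectangle(result, Cv2.BoundingRect(contour), Scalar.White, 2);
        }
        return result; // the caller converts this to BitmapSource and disposes it
    }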


    🧩 XAML (Minimal UI)

    Add one column for a “Set BG” button and sliders for tuning.

    [Image: BackgroundSubtraction_View]


    🔧 Core Code (Background Subtraction Pipeline)

    This logic ideally belongs in the Model layer, but to keep diffs small from previous articles, it is implemented in the ViewModel.

    The author tested on a Mono8 camera. If you use a color camera, convert to grayscale as needed.

    The ViewModel implements IDisposable to ensure OpenCV Mat resources are released.


    Code-Behind

    Dispose the ViewModel when the window closes.
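    A minimal sketch, assuming the ViewModel is assigned to DataContext and implements IDisposable (the ViewModel class name is hypothetical):

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            DataContext = new BackgroundSubtractionViewModel(); // hypothetical name
            // Release camera and Mat resources when the window closes.
            Closed += (_, _) => (DataContext as IDisposable)?.Dispose();
        }
    }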


    Example Run

    1. Click Set BG to capture the background frame.

       [Image: BackgroundSubtraction_BG]

    2. Move an object in the scene → motion regions are detected and boxed.

    3. Use sliders to tune sensitivity and minimum area.

       [Image: BackgroundSubtraction_Detect]


    Tuning Tips

    • Retake background when illumination changes

    • Threshold: increase to suppress sensor noise; decrease to detect subtle motion

    • Morphology iterations: increase if edges are jagged; decrease if boxes “bloat”

    • Min area: simple filter to reduce false positives

    • Speed-ups:

      • Apply camera ROI to reduce the processed area
      • Cv2.Resize to process a smaller image (scale coordinates back)
      • Cache blurred background (as shown) to avoid blurring every frame

    Common Pitfalls

    • Nothing detected: threshold too high or changes too small. Fix: lower the threshold or increase the blur kernel.
    • Entire frame white: auto exposure/gain is fluctuating. Fix: set ExposureAuto/GainAuto=Off and stabilize lighting.
    • Too many speckles: sensor noise or tiny vibration. Fix: keep the opening step (MorphTypes.Open) and tune the kernel.
    • UI freezes: processing too heavy. Fix: move processing to another Task and use ROI / resizing.
    • Memory grows: Mat.Dispose() missing. Fix: use using blocks (as shown) and cache carefully.

    Summary

    • Background subtraction highlights only moving regions
    • Fixed exposure/gain + stable lighting improves reliability
    • Tune threshold, minimum area, and morphology to balance sensitivity vs noise
    • ROI and resizing are effective for performance
  • Zoom & Pan for WPF Live View Using TransformGroup (C# / .NET 8)

    Zoom & Pan for WPF Live View Using TransformGroup (C# / .NET 8)

    In the previous article, we built a minimal setup to overlay a HUD (FPS, exposure, gain) on top of live video. As a continuation, this article adds mouse-driven zoom/pan, an ROI rectangle that follows the image, and a crosshair fixed to the screen center.

    We keep the same layer-separated design (Video = zoom/pan, HUD = fixed) to minimize blur and jitter.


    Environment / Prerequisites


    Goals

    • Mouse wheel zoom centered at the cursor (1.1× per step)
    • Shift + drag (or middle button drag) to pan
    • ROI rectangle scales/moves with the image (follows zoom/pan)
    • Crosshair stays fixed in screen coordinates (HUD layer)

    XAML: Two Layers + ROI Layer (Shared Transform)

    Because the implementation has grown, some parts are omitted where appropriate.


    HalfConverter (For Center Crosshair)

    This converter makes it easy to draw lines centered in the HUD layer.
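    A sketch of what such a converter might look like (the actual implementation in the sample may differ):

    using System;
    using System.Globalization;
    using System.Windows.Data;

    // Returns half of a bound double, e.g. to place a crosshair line at
    // X = ActualWidth / 2 in the HUD layer.
    public sealed class HalfConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            => value is double d ? d / 2.0 : 0.0;

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            => throw new NotSupportedException();
    }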


    Code-Behind: Zoom / Pan

    In this article, zoom and pan are implemented purely in code-behind. You can refactor into ViewModel/Model later if needed.
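    A sketch of the core idea, assuming a TransformGroup with a ScaleTransform (_scale) and a TranslateTransform (_translate) shared by the video and ROI layers, hosted in an element named VideoHost (all names illustrative):

    // Cursor-centered wheel zoom: scale by 1.1 per step and adjust the
    // translation so the point under the cursor stays fixed.
    private void OnPreviewMouseWheel(object sender, MouseWheelEventArgs e)
    {
        double factor = e.Delta > 0 ? 1.1 : 1.0 / 1.1;
        Point p = e.GetPosition(VideoHost);

        _translate.X = p.X - factor * (p.X - _translate.X);
        _translate.Y = p.Y - factor * (p.Y - _translate.Y);
        _scale.ScaleX *= factor;
        _scale.ScaleY *= factor;
    }

    // Shift + left drag (or middle drag) pans by shifting the translation.
    private Point _panStart;
    private void OnMouseMove(object sender, MouseEventArgs e)
    {
        Point p = e.GetPosition(VideoHost);
        bool panning = (Keyboard.Modifiers == ModifierKeys.Shift && e.LeftButton == MouseButtonState.Pressed)
                       || e.MiddleButton == MouseButtonState.Pressed;
        if (panning)
        {
            _translate.X += p.X - _panStart.X;
            _translate.Y += p.Y - _panStart.Y;
        }
        _panStart = p;
    }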


    ROI Handling and Applying It in Real Apps

    In this sample, we assign the same TransformGroup to both the Image and the RoiLayer.

    • Because RoiLayer shares the transform, it naturally follows zoom/pan.
    • In real applications, you typically bind Canvas.Left/Top/Width/Height of RoiRect to a ViewModel, then apply those values to camera ROI parameters.
    • When applying ROI to the camera, remember GenICam constraints such as width/height increments.

    Meanwhile, HUD elements (crosshair, texts) remain transform-free and screen-fixed, preventing blur and keeping interaction intuitive.


    Example Screens

    The UI looks like this:

    [Image: ROIZoomDefault.png]

    Pan the live view by dragging while holding Shift. The ROI follows; the crosshair stays fixed.

    [Image: ROIZoomTrans.png]

    Zoom with the mouse wheel. The zoom ratio in the upper area updates as well.

    [Image: ROIZoomZoom.png]

    Use Fit to fit the image to the window. 1:1 sets the zoom to 1.0, and Reset returns zoom to 1.0 and pan offset to 0.

    [Image: ROIZoomFit.png]


    Summary

    • Video + ROI share the same transform; HUD stays fixed (layered design)
    • Mouse interaction provides cursor-centered zoom and intuitive pan
    • ROI follows the image while the crosshair remains screen-fixed

  • Implementing Live View in a WPF App for Basler Cameras (pylon SDK / C# / .NET 8)

    Implementing Live View in a WPF App for Basler Cameras (pylon SDK / C# / .NET 8)

    In the previous article (Displaying a Basler Camera Snapshot in a WPF App), we captured a single frame and displayed it in a WPF Image. This time, we’ll implement a live camera preview inside a WPF window.

    The live view uses an event-driven approach: we receive frames via ImageGrabbed, convert them to BitmapSource, and update an Image control. The update is performed safely via the Dispatcher to avoid blocking or crashing the UI thread.


    Environment / Assumptions

    • Basler pylon Camera Software Suite (reference Basler.Pylon)
    • .NET 8 / WPF (Windows desktop)
    • Camera: acA2500-14gm (assumed Mono8). If you use a color camera, convert according to the pixel format (e.g., BGR8packed).

    Goal

    • Live preview controlled by Connect / Start / Stop / Disconnect
    • Smooth updates by continuously refreshing a BitmapSource bound to an Image
    • Safe UI updates (always via the Dispatcher)

    UI (XAML)

    [Image: GUILiveSampleXAML]

    We add two buttons: Live (start preview) and Stop (stop preview).

    The code-behind is the same as in the previous article (set DataContext to the ViewModel and disconnect on closing).


    Model Side (BaslerCameraSample)

    If we reuse the previous StartGrabbing() implementation, frames keep piling up in ConcurrentQueue<(DateTime, IGrabResult)> _bufferedImageQueue, which is great for async saving but makes live view unnecessarily complex.

    So, we add a simple streaming method dedicated to live display:

    Add/Remove event handlers

    To keep the design flexible, we expose methods to attach/detach ImageGrabbed handlers:
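    A minimal sketch of both pieces, assuming BaslerCameraSample holds the pylon Camera instance (method names are illustrative):

    // Streaming dedicated to live display. LatestImages keeps only the newest
    // frame, which suits a preview: stale frames are dropped automatically.
    public void StartLiveView()
        => Camera.StreamGrabber.Start(GrabStrategy.LatestImages, GrabLoop.ProvidedByStreamGrabber);

    public void StopLiveView()
        => Camera.StreamGrabber.Stop();

    // Expose attach/detach so the ViewModel can subscribe without
    // referencing the Camera object directly.
    public void AttachImageGrabbedHandler(EventHandler<ImageGrabbedEventArgs> handler)
        => Camera.StreamGrabber.ImageGrabbed += handler;

    public void DetachImageGrabbedHandler(EventHandler<ImageGrabbedEventArgs> handler)
        => Camera.StreamGrabber.ImageGrabbed -= handler;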


    ViewModel: Event-Driven Preview → Update BitmapSource

    We continue using BaslerCameraSample, but now we update CurrentFrame whenever ImageGrabbed fires.

    If performance becomes an issue, consider using WriteableBitmap rather than creating a new BitmapSource every time.
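    A sketch of the handler, assuming Mono8 so the raw pixel data can be wrapped as Gray8 directly:

    // ImageGrabbed fires on a pylon worker thread, so marshal to the UI thread.
    private void OnImageGrabbed(object sender, ImageGrabbedEventArgs e)
    {
        using IGrabResult grab = e.GrabResult;
        if (!grab.GrabSucceeded) return;

        var buffer = (byte[])grab.PixelData;          // Mono8: one byte per pixel
        var bmp = BitmapSource.Create(grab.Width, grab.Height, 96, 96,
            PixelFormats.Gray8, null, buffer, grab.Width /* stride */);
        bmp.Freeze();                                  // allow cross-thread use

        Application.Current.Dispatcher.Invoke(() => CurrentFrame = bmp);
    }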


    Example Run

    Click Connect, then Live to start continuous preview. Click Stop to stop streaming.

    [Image: GUILiveSample]


    Common Pitfalls & Fixes

    • Colors look wrong (color cameras): match the conversion to the camera’s pixel format (e.g., PixelType.BGR8packed with PixelFormats.Bgr24).

    • Event handler registered twice: remove (-=) once before adding (+=), as shown in Start().

    • Rendering can’t keep up with the camera FPS: at high frame rates you often don’t need to render every frame. Drop frames using Monitor.TryEnter or Interlocked.Exchange to avoid overlapping UI updates.


    Summary

    • Implemented smooth live view using pylon’s event-driven acquisition
    • Updated WPF UI safely through the Dispatcher
    • The structure is flexible and works well with ROI and resolution changes
    • Combining live preview with triggers is already valuable for many lab/inspection setups

    Next: HUD Overlay

    In the next article, we will overlay a HUD showing values such as FPS, exposure time, and gain on top of the live preview.

  • Displaying a Basler Camera Snapshot in a WPF App (pylon SDK / C# / .NET 8)

    Displaying a Basler Camera Snapshot in a WPF App (pylon SDK / C# / .NET 8)

    “You have working code, but you want to preview images in a GUI as quickly as possible.” As a first step, this article builds a minimal WPF viewer with three buttons: Connect / Disconnect / Snap. With a single click, the app captures one frame and displays it in a WPF Image control.

    Example run


    Goal

    • Operate the camera using three buttons: Connect / Disconnect / Snap
    • Display the snapped frame in an Image
    • Ensure the camera is properly disconnected and disposed on exit (avoid resource leaks)

    Project Structure

    We reuse the existing BaslerCameraSample (class library) created in earlier articles (e.g., software-trigger capture) and add a new WPF application project BaslerGUISample, referencing that library.


    View (XAML)

    Place three buttons and an Image preview.

    Designer view example:

    [Image: XAML layout]

    In code-behind, set DataContext = new MainViewModel(); and make sure the camera disconnects in Window_Closing (shown later).


    ViewModel

    • Any MVVM base is fine (BindableBase / DelegateCommand are user-defined in this article; implementation omitted)
    • Use the existing BaslerCameraSample for connect/snap/disconnect
    • Bind CurrentFrame (BitmapSource) to Image.Source
    • If the UI freezes while connecting, consider making Connect asynchronous later

    (Reference) MainWindow Code-Behind


    Add Disconnect() to BaslerCameraSample

    Since the earlier articles focused mainly on connecting and grabbing, we now add a proper Disconnect() to ensure resources are released when the app exits.
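    A minimal sketch, assuming the class keeps the pylon camera in a _camera field:

    // Stop any active streaming, then close and dispose the camera.
    public void Disconnect()
    {
        if (_camera == null) return;

        if (_camera.StreamGrabber.IsGrabbing)
            _camera.StreamGrabber.Stop();

        _camera.Close();
        _camera.Dispose();
        _camera = null;
    }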


    Implementation Tips / Common Pitfalls

    • UI thread updates: when you move on to event-driven continuous acquisition, ImageGrabbed is raised from a non-UI thread. In that case, update CurrentFrame via Application.Current.Dispatcher.Invoke(...) (planned for a future article).

    • Stretch behavior: UniformToFill fills without margins (may crop); Uniform shows the entire image (may add margins).

    • Error handling: if connect/snap fails, show user-friendly feedback (e.g., a MessageBox) instead of failing silently.


    Summary

    • Implemented a minimal WPF viewer (Connect / Disconnect / Snap / preview)
    • Used MVVM to separate View and camera-control logic; bound BitmapSource to Image.Source
    • Added Disconnect() to reliably release camera resources on exit
    • This becomes a foundation for event-driven continuous preview in the next steps

    Next Article Preview

    Next, we’ll explain how to continuously acquire images using event-driven grabbing while updating the UI asynchronously. We’ll also connect it with queue-based saving and OpenCV display, aiming for a practical viewer.


    About the Author

    @MilleVision: sharing practical knowledge on industrial cameras and machine vision development, with a Basler pylon SDK × C# series.


    Full Sample Code Package

    A complete C# project (including unit tests) is available on BOOTH. It includes the omitted BindableBase and DelegateCommand implementations.

    • Exposure / gain / frame rate / ROI
    • Event-driven acquisition
    • Burst capture
    • Saving and logging
    • Updates planned alongside new articles

    👉 https://millevision.booth.pm/items/7316233

  • Software Trigger Capture with the Basler pylon SDK (C# / .NET)

    Software Trigger Capture with the Basler pylon SDK (C# / .NET)

    When working with industrial cameras such as Basler, you don’t always want to run the camera in free-run mode (continuous acquisition). There are many situations where you need to capture exactly when something meaningful happens, for example:

    • The moment a button on an experimental setup is pressed
    • When a temperature or voltage reaches a certain threshold
    • When an external event occurs in another device or system

    In such cases, software-triggered acquisition is extremely useful.

    In this article, we’ll look at how to perform software-trigger capture using the Basler pylon SDK in C#, and compare it against free-run / GrabOne() acquisitions using simple stopwatch-based timing.


    ✅ Environment

    Item Value
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8.0 (Windows)

    As in previous articles (for example: How to record camera settings alongside captured images (C# / .NET)), we extend the same BaslerCameraSample class and add software-trigger-related functions plus an example test.


    🔧 Configuring the Software Trigger

    In pylon, software trigger is configured by setting the trigger mode and trigger source as follows:

    When you want to capture an image, you call ExecuteSoftwareTrigger() to fire the trigger and acquire a frame at that instant:

    Before calling ExecuteSoftwareTrigger, you must start camera streaming with Camera.StreamGrabber.Start(). In other words, the camera must already be armed and grabbing when the trigger is executed.
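    Putting the two snippets together, a minimal sketch (parameter names follow the PLCamera definitions in Basler.Pylon):

    // Configure FrameStart to be triggered by software.
    camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
    camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
    camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Software);

    // The camera must already be streaming (armed) before the trigger fires.
    camera.StreamGrabber.Start(GrabStrategy.OneByOne, GrabLoop.ProvidedByStreamGrabber);

    // Fire once the camera is ready to accept a frame trigger.
    if (camera.WaitForFrameTriggerReady(1000, TimeoutHandling.ThrowException))
        camera.ExecuteSoftwareTrigger();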


    🔁 Example: Using the Software Trigger

    Here is one example of how to use the software trigger. There are many ways to structure this, but using the existing implementation from earlier articles, we follow this sequence:

    1. Enable software trigger mode
    2. Start streaming
    3. Fire the software trigger
    4. Wait for frame acquisition (via ImageGrabbed) and then stop streaming

    Because there is a delay between firing the trigger and the ImageGrabbed event, we wait until the handler confirms that a frame was actually acquired before stopping streaming.

    Here, BufferedImageQueue refers to the queue filled by the OnImageGrabbed event handler (as implemented in the earlier event-driven capture article).


    ⏱ Experiment: Software Trigger vs. “Just Call GrabOne()”

    You might wonder:

    “What’s the difference between using a software trigger and simply calling GrabOne() after an event occurs?”

    To test this, we extend ExecuteSoftwareTriggerTest to also measure the time taken by a plain GrabOne() call and compare the two.


    📊 Results

    On my test environment (Basler acA2500-14gm, GigE, AMD Ryzen 9, Windows), software-trigger capture achieved a faster time-to-image than a plain GrabOne().

    This is likely because the camera is already streaming and armed, so the sensor doesn’t need to be restarted for each frame.

    Summary:

    • Software trigger: ~112 ms. Faster image acquisition; the trigger moment is precisely known.
    • GrabOne: ~193 ms. Simple to implement, but the trigger timing is not strictly known.

    📝 Summary

    • Software trigger is ideal when you want to capture semantically meaningful moments:

      • e.g., synchronization with experimental data, test equipment, or external signals
    • Compared to GrabOne():

      • Software trigger can reduce the delay between event and image acquisition
      • It also lets you know exactly when the trigger was fired

    In the next article, we’ll look at how to display camera images in WPF, using the same design that I’m adopting in my in-development camera configuration management tool.


    Author: @MilleVision


    🛠 Full Sample Code Project (with Japanese comments)

    All sample code introduced in this Qiita series is available as a C# project with unit tests on BOOTH:

    • Includes unit tests for easy verification and learning
    • Will be updated alongside new articles

    👉 Product page

  • Reproducing Camera Conditions: Saving and Loading Basler Settings as PFS Files

    Reproducing Camera Conditions: Saving and Loading Basler Settings as PFS Files

    Basler pylon SDK × C#

    When you need to reproduce a specific capture condition or quickly switch between multiple setups, being able to save and load Basler camera settings as PFS files is extremely useful.

    In this article, we’ll look at how to save and restore camera settings in .pfs format using the Basler pylon SDK.


    What Is the PFS Format?

    A PFS file (pylon Feature Stream file) is Basler’s configuration format for saving and loading camera settings.

    ✅ Typical Use Cases

    1. Backup of camera settings

      • Decide on a stable configuration once → save it → reuse it after reboot or on another PC.
    2. Sharing experiment conditions

      • Distribute settings so everyone in a lab or production line captures under the same conditions.
    3. Loading settings from your software

      • Call camera.Parameters.Load("config.pfs") to instantly apply a predefined setup (shown later).

    ✅ Environment

    Item Value
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Language C# / .NET 8
    File Format .pfs (pylon Feature Stream)

    📝 Saving Camera Settings (Save)

    As in previous articles, we’ll extend the same BaslerCameraSample class introduced in:

    • How to Display and Save Basler Camera Images with OpenCV
    • How to Capture a Single Image Using the Basler pylon SDK in C#

    To save settings, we can use the Camera.Parameters.Save() method. The example below saves the CameraDevice settings. The ParameterPath argument can be either a direct string or one of the predefined properties of the ParameterPath class.

    In many practical scenarios, saving just CameraDevice is sufficient. However, if you want to dump multiple parameter groups at once, you can iterate over all string properties of ParameterPath via reflection.

    Even though we check Camera.Parameters.Contains(value) before calling Save(), some properties may still cause Save() to throw an ArgumentException due to model differences or unsupported paths. In that case we catch and skip them.

    Note: Relying on exceptions as control flow is not ideal; if you know of a cleaner approach, feel free to adapt this logic.


    📂 Example of a Saved PFS File

    If you open a .pfs file in a text editor, you can see that the configuration values are stored in a tab-separated (TSV) format.

    Most of the important parameters are grouped under CameraDevice, so in many cases saving just that group is enough.

    ExposureMode	Timed       // Exposure mode
    ExposureAuto	Off         // Auto exposure off (manual control)
    ExposureTimeRaw	35000       // Unit: µs
    AcquisitionFrameRateEnable	1 // Enable frame rate control
    AcquisitionFrameRateAbs	30.0003 // Frame rate setting
    

    📥 Loading Settings (Load)

    Next, let’s load a previously saved .pfs file back into the camera.
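    Loading mirrors saving; a one-line sketch:

    // Apply the saved CameraDevice settings to the connected camera.
    camera.Parameters.Load("config.pfs", ParameterPath.CameraDevice);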


    Example: Save & Load Round Trip

    The following test changes the frame rate, saves the settings, then loads them back and verifies that the value was restored.

    After loading, the camera’s frame rate returns to 10 fps, confirming that the PFS file was applied correctly.


    ⚠️ Notes & Caveats

    • What gets saved: only parameters that the camera exposes as user-accessible.
    • Trigger & device-specific settings: some hardware-dependent features may be excluded.
    • Applying a PFS to another camera: prefer the same model or one with compatible features.

    ✅ Summary

    • Using Camera.Parameters.Save() / Load() you can easily reproduce capture conditions.

    • .pfs files are essentially TSV-based configuration files.

    • Very useful for:

      • Backing up camera setups
      • Sharing experiment conditions
      • Managing multiple capture profiles

    Tool: PFS → JSON / C# DTO (Web)

    Drag & drop a PFS file to generate PfsSettingsDto.cs and JSON automatically.

    https://pfs.millevision.net/


    Coming Up Next

    In the next article, we’ll look at how to log metadata such as exposure, gain, and ROI together with captured images.

    I’m also working on a GUI tool to conveniently view and edit Basler camera settings; that will be introduced in a future post.


    Author: @MilleVision

  • Displaying and Saving Basler Camera Images with OpenCV

    Displaying and Saving Basler Camera Images with OpenCV


    pylon SDK × OpenCvSharp / C#

    When working with Basler cameras, converting captured images into OpenCV’s Mat format gives you access to a wide range of image processing algorithms.

    In this article, we’ll take a GrabResult from the Basler pylon SDK, convert it to an OpenCV Mat, and then:

    • Display it in real time using Cv2.ImShow
    • Save it as an image file using Cv2.ImWrite

    ✅ Environment

    Item Value
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Pixel Format Mono8 (8-bit grayscale)
    OpenCV Wrapper OpenCvSharp4.Windows (via NuGet)

    🎯 Goal: Convert GrabResult → Mat → Display & Save

    Conceptual flow:

    [GrabResult] → [byte[]] → [Mat] → [ImShow / ImWrite]
    

    🔧 Converting GrabResult to Mat

    We’ll use PixelDataConverter.Convert() to extract pixel data from IGrabResult into a byte[], and then wrap that buffer as an OpenCV Mat.

    A convenient way is to implement an extension method on IGrabResult:

    💡 Note: For color formats (e.g., BGR8, RGB8), you’ll need to adjust OutputPixelFormat and MatType (e.g., CV_8UC3) accordingly.
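    A sketch of such an extension for Mono8 (Mat.FromPixelData requires a recent OpenCvSharp4; older versions offer an equivalent Mat constructor):

    using Basler.Pylon;
    using OpenCvSharp;

    public static class GrabResultExtensions
    {
        // Convert a Mono8 IGrabResult into an OpenCV Mat via PixelDataConverter.
        public static Mat ToMat(this IGrabResult grabResult)
        {
            var converter = new PixelDataConverter { OutputPixelFormat = PixelType.Mono8 };
            var buffer = new byte[converter.GetBufferSizeForConversion(grabResult)];
            converter.Convert(buffer, grabResult);

            // Wrap the managed buffer, then clone so the Mat owns its own memory.
            using var wrapped = Mat.FromPixelData(grabResult.Height, grabResult.Width,
                MatType.CV_8UC1, buffer);
            return wrapped.Clone();
        }
    }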


    🖼 Real-Time Display with Cv2.ImShow

    We now extend the BaslerCameraSample class (introduced in the previous article on event-driven capture) to display buffered images using OpenCV.

    Here, we assume that _bufferedImageQueue is a queue of (DateTime, IGrabResult) that the ImageGrabbed event handler enqueues into.

    Example Test: Live Display for 10 Seconds

    Running this test opens an OpenCV window named “Camera” that shows the live feed from your Basler camera.
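    Since the original listing is omitted here, a sketch of the loop as a member of BaslerCameraSample (the method name ShowLiveView is illustrative):

    public void ShowLiveView(TimeSpan duration)
    {
        StartGrabbing(); // begins enqueueing into _bufferedImageQueue
        var sw = System.Diagnostics.Stopwatch.StartNew();
        while (sw.Elapsed < duration)
        {
            if (_bufferedImageQueue.TryDequeue(out var item))
            {
                using IGrabResult grab = item.Item2;
                using Mat mat = grab.ToMat();
                Cv2.ImShow("Camera", mat);
            }
            Cv2.WaitKey(1); // required so the HighGUI window actually repaints
        }
        StopGrabbing();
        Cv2.DestroyAllWindows();
    }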


    💾 Saving Mat with ImWrite

    To save a Mat to disk, use ImWrite(). The following example:

    1. Captures one frame using Snap()
    2. Converts it to Mat
    3. Saves the image
    4. Applies a simple threshold operation and saves again
    5. Also saves via BitmapSource for comparison

    [Image: ImShowTest]
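    A sketch of the Mat-side steps, assuming Snap() on the BaslerCameraSample instance returns an IGrabResult as in the snapshot article (the threshold value 128 is arbitrary):

    using IGrabResult grab = camera.Snap();   // camera: BaslerCameraSample instance
    using Mat mat = grab.ToMat();
    Cv2.ImWrite("SnapAndSaveMatTest_Mat.bmp", mat);

    using var bin = new Mat();
    Cv2.Threshold(mat, bin, 128, 255, ThresholdTypes.Binary);
    Cv2.ImWrite("SnapAndSaveMatTest_Mat_Thresholded.bmp", bin);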

    This creates three files:

    1. SnapAndSaveMatTest_Bitmap.bmp — saved from BitmapSource
    2. SnapAndSaveMatTest_Mat.bmp — saved from OpenCV Mat
    3. SnapAndSaveMatTest_Mat_Thresholded.bmp — thresholded version

    Comparing (1) and (2) confirms that both pipelines produce the same image.

    [Images: BitmapSource / Mat / Mat (binary)]

    📌 Tips & Common Pitfalls

    • Pixel format: for color (BGR8, RGB8), use MatType.CV_8UC3 and the correct channel order.
    • Nothing shows in ImShow: you must call Cv2.WaitKey(); otherwise the window won’t update.
    • OpenCvSharp packages: install both OpenCvSharp4 and OpenCvSharp4.runtime.windows via NuGet.
    • Performance: pinning buffers is fast, but at very high rates you may want to profile allocations.

    ✅ Summary

    • You can convert Basler pylon GrabResult to an OpenCV Mat via PixelDataConverter and a byte[] buffer.
    • Cv2.ImShow enables simple live viewing; Cv2.ImWrite lets you save images directly from Mat.
    • Once in Mat form, you can apply the full power of OpenCV: filtering, edge detection, feature extraction, etc.

    Author: @MilleVision

  • Stabilizing Basler Camera Acquisition with Event-Driven Capture and Async Saving (C# / pylon SDK)

    Stabilizing Basler Camera Acquisition with Event-Driven Capture and Async Saving (C# / pylon SDK)

    In previous articles, we covered burst capture (continuous acquisition) and performance tuning using ROI.

    As a follow-up, this article explains how to make your acquisition loop more robust by combining event-driven image grabbing with asynchronous save processing.


    ✅ Why Event-Driven Acquisition Helps

    Key characteristics:

    • Your handler is called only when a frame arrives
    • It integrates cleanly with UI frameworks like WPF

    For single-frame capture, you can simply use GrabOne(). However, for GUI applications (live previews, start/stop capture buttons, etc.), using the ImageGrabbed event often results in a cleaner design and more responsive UI.


    🔧 Minimal Event-Based Implementation

    We will extend the same BaslerCameraSample class used in previous articles:

    • ROI article: Using ROI to Boost Burst Capture Performance
    • Single-frame capture article: How to Capture a Single Image with Basler pylon SDK in C#

    First, we register an ImageGrabbed event handler and start streaming.

    It’s convenient to expose an IsGrabbing property on BaslerCameraSample:

    To stop continuous acquisition, detach the event handler and stop the stream grabber:

    Whether you attach/detach the event handler every time depends on your application design. For simple apps, you might register the handler once (e.g., in the constructor) and keep it for the lifetime of the object.
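    A minimal sketch of the three pieces, assuming a _camera field holding the pylon Camera:

    // Start event-driven acquisition.
    public void StartGrabbing()
    {
        _camera.StreamGrabber.ImageGrabbed -= OnImageGrabbed; // avoid double registration
        _camera.StreamGrabber.ImageGrabbed += OnImageGrabbed;
        _camera.StreamGrabber.Start(GrabStrategy.OneByOne, GrabLoop.ProvidedByStreamGrabber);
    }

    // True while the stream grabber is running.
    public bool IsGrabbing => _camera?.StreamGrabber.IsGrabbing ?? false;

    // Stop acquisition and detach the handler.
    public void StopGrabbing()
    {
        _camera.StreamGrabber.Stop();
        _camera.StreamGrabber.ImageGrabbed -= OnImageGrabbed;
    }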


    Example: 1 Second of Continuous Capture


    ⚠️ Problem: Saving Overhead Can Stall the Pipeline

    Depending on your PC and image size, saving each image may take several to tens of milliseconds. At higher frame rates, a simple event handler that saves directly inside ImageGrabbed may not keep up, causing:

    • Increased latency
    • Dropped frames
    • Unstable frame intervals

    🧭 Solution: Buffer with ConcurrentQueue and Save on a Separate Thread

    Overall flow:

    [Basler SDK]
         ↓
    [OnImageGrabbed event]
         ↓
    [Enqueue into ConcurrentQueue]
         ↓
    [Background task dequeues and saves frames]
    

    First, modify OnImageGrabbed to push cloned results into a queue instead of saving immediately.

    Next, run a separate asynchronous loop that dequeues items and saves them:
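    A sketch of both parts (requires System.Collections.Concurrent, System.Threading, System.Threading.Tasks, and Basler.Pylon; ImagePersistence is pylon’s built-in image writer):

    // The handler only clones and enqueues; no disk I/O here.
    private readonly ConcurrentQueue<(DateTime, IGrabResult)> _bufferedImageQueue = new();

    private void OnImageGrabbed(object sender, ImageGrabbedEventArgs e)
    {
        if (e.GrabResult.GrabSucceeded)
        {
            // Clone, because the original buffer is recycled after the handler returns.
            _bufferedImageQueue.Enqueue((DateTime.Now, e.GrabResult.Clone()));
        }
    }

    // Background loop that drains the queue and saves frames to disk.
    public async Task SaveBufferedImagesAsync(string directory, CancellationToken token)
    {
        while (!token.IsCancellationRequested || !_bufferedImageQueue.IsEmpty)
        {
            if (_bufferedImageQueue.TryDequeue(out var item))
            {
                using IGrabResult grab = item.Item2;
                ImagePersistence.Save(ImageFileFormat.Png,
                    System.IO.Path.Combine(directory, $"{item.Item1:HHmmss_fff}.png"), grab);
            }
            else
            {
                await Task.Delay(1, CancellationToken.None); // idle briefly when the queue is empty
            }
        }
    }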


    Example: Combining Event-Driven Capture with Async Saving

    We can now update the test to run acquisition and saving concurrently:

    Depending on your requirements:

    • If real-time streaming is critical and your CPU has enough headroom, you can save while grabbing.
    • If your CPU is heavily loaded but RAM is sufficient, you can buffer during capture and save after stopping (by starting SaveBufferedImages only after StopGrabbing).

    ✅ Summary

    • Using the ImageGrabbed event yields a clean, C#-idiomatic structure for continuous acquisition.
    • Directly saving in the event handler can become a bottleneck at higher frame rates.
    • A robust pattern is: Clone → Enqueue → Async Save on a background task using ConcurrentQueue.
    • This separation helps stabilize continuous acquisition and makes your pipeline more resilient under load.

    In the next article, we’ll look at integrating Basler cameras with OpenCV, enabling more advanced image processing workflows.


    Author: @MilleVision


    📦 Full Sample Project (with Japanese comments)

    A complete C# project that includes event-driven acquisition, ROI, burst capture, and asynchronous saving is available here:

    👉 https://millevision.booth.pm/items/7316233

  • Boosting Frame Rate with ROI on Basler Cameras Using the pylon SDK (C# / .NET 8)

    Boosting Frame Rate with ROI on Basler Cameras Using the pylon SDK (C# / .NET 8)


    When developing machine-vision applications, you may find yourself asking:

    • “Can I increase the frame rate if I only need a portion of the image?”
    • “Can I reduce image size to speed up processing?”

    The answer is often yes—by using ROI (Region of Interest).

    By reducing the sensor readout area, you can significantly decrease data size and improve achievable frame rate. In this article, we explore how to configure ROI using the Basler pylon SDK in C#, along with real-world comparisons of full-frame vs ROI performance.


    ✔ What You Will Learn

    • How to configure ROI (Region of Interest) on Basler cameras
    • How ROI improves frame rate and processing speed
    • Common constraints, pitfalls, and error handling

    ✔ Environment

    Item Details
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8 (Windows)
    Lens VS-LD25N (25 mm C-mount)

    🔍 What Is ROI?

    ROI (Region of Interest) defines a rectangular area on the camera sensor that you want to capture. Basler cameras expose four parameters to control ROI:

    • OffsetX — horizontal position
    • OffsetY — vertical position
    • Width — ROI width
    • Height — ROI height

    Reducing ROI has several benefits:

    • Smaller image size
    • Shorter readout time → higher potential frame rate
    • Faster downstream processing

    🔧 Setting ROI in C#

    Below are helper methods added to the BaslerCameraSample class from previous articles:

    Setting ROI in one operation

    Because ROI must remain inside the sensor boundaries, the order of parameter updates matters.
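    A sketch of such a method; the offsets are reset first so the new size always fits, then applied last:

    public void SetRoi(long offsetX, long offsetY, long width, long height)
    {
        // Reset offsets so Width/Height can be set without overflowing the sensor.
        _camera.Parameters[PLCamera.OffsetX].SetValue(0);
        _camera.Parameters[PLCamera.OffsetY].SetValue(0);
        _camera.Parameters[PLCamera.Width].SetValue(width);
        _camera.Parameters[PLCamera.Height].SetValue(height);
        _camera.Parameters[PLCamera.OffsetX].SetValue(offsetX);
        _camera.Parameters[PLCamera.OffsetY].SetValue(offsetY);
    }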

    ROI using Rect (WPF)


    📈 Why ROI Increases Frame Rate

    Sensor readout time is almost always proportional to the number of pixels captured. Reducing the readout area:

    • Lowers pixel count
    • Reduces required bandwidth
    • Allows the camera to reach higher frame rates

    This effect is independent of exposure time, so even long exposures benefit from faster readout afterward.


    ✔ Constraints When Setting ROI

    • Width / Height: many Basler models require values to follow a specific increment (2, 4, 8, 16… depending on the sensor).
    • OffsetX / OffsetY: must not exceed the sensor boundaries.

    To inspect the increment value:
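    A sketch (integer parameters in Basler.Pylon expose their step size):

    long widthIncrement = camera.Parameters[PLCamera.Width].GetIncrement();
    Console.WriteLine($"Width increment: {widthIncrement}");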

    Example test:


    📏 Testing Frame Rate with ROI


    🧪 Real-World Comparison

    Full Resolution (2592 × 1944)

    Frame timestamps demonstrate slower readout:

    Frame   0    1    2    3    4
    T [ms]  72   140  209  277  346

    ROI (640 × 480)

    Frame   0    1    2    3    4
    T [ms]  23   67   87   120  154

    Summary Comparison

    • Full frame (2592×1944): ~14.6 fps
    • ROI (640×480): ~30.0 fps

    → ROI doubles the effective frame rate in this test.


    ⚠️ Common Errors & Solutions

    • InvalidParameterException: the ROI exceeds sensor bounds, or width/height violates the increment constraints.
    • No frames after setting ROI: some camera models require reinitialization after ROI changes.

    📝 Summary

    • ROI can significantly reduce readout time and increase frame rate
    • Combining ROI with proper frame-rate settings improves real-time performance
    • Pay attention to width/height increments and sensor boundaries
    • Ideal for speeding up burst capture and reducing processing load

    Author: @MilleVision


    📦 Full Sample Code Package

    👉 https://millevision.booth.pm/items/7316233 (includes ROI, burst capture, trigger control, and full pylon SDK integration, with Japanese comments)

  • Implementing High-Speed Burst Capture with the Basler pylon SDK (C# / .NET 8)

    Implementing High-Speed Burst Capture with the Basler pylon SDK (C# / .NET 8)

    In machine vision and inspection systems, it’s often not enough to capture a single frame—you frequently need to capture multiple images in rapid succession, also known as burst capture or continuous acquisition.

    In this article, we walk through how to implement burst capture using the Basler pylon SDK in C#, including:

    • A basic capture loop
    • Controlling frame interval
    • Saving images with timestamps
    • Practical performance considerations

    ✔ What You Will Learn

    • How to implement a burst capture loop
    • How to control acquisition interval using frame-rate settings
    • How to save frames with timestamps for later analysis
    • How to avoid dropped frames and performance bottlenecks

    ✔ Environment

    Item Details
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8 (Windows)

    🔄 Basic Burst Capture: Capturing N Consecutive Frames

    Before implementing burst capture, define a helper method to control trigger mode:

    The following method performs a free-run (trigger-off) burst capture, saving each frame with both an index and elapsed time. Including timestamps makes later debugging and performance analysis much easier.
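    A sketch of both, using a simple RetrieveResult loop (ImagePersistence is pylon’s built-in image writer; saving inline is fine at moderate rates, and offloading it is covered in a later article):

    // Helper: toggle the FrameStart trigger (free-run = trigger off).
    public void SetTriggerMode(bool enabled)
    {
        _camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
        _camera.Parameters[PLCamera.TriggerMode].SetValue(
            enabled ? PLCamera.TriggerMode.On : PLCamera.TriggerMode.Off);
    }

    // Free-run burst of N frames, saving each with index + elapsed time.
    public void BurstCapture(int frameCount, string directory)
    {
        SetTriggerMode(false);
        var sw = System.Diagnostics.Stopwatch.StartNew();
        _camera.StreamGrabber.Start(frameCount);
        for (int i = 0; i < frameCount; i++)
        {
            using IGrabResult grab =
                _camera.StreamGrabber.RetrieveResult(5000, TimeoutHandling.ThrowException);
            if (!grab.GrabSucceeded) continue;
            ImagePersistence.Save(ImageFileFormat.Png,
                System.IO.Path.Combine(directory, $"BurstCaptureTest_{i:D3}_{sw.ElapsedMilliseconds}ms.png"),
                grab);
        }
        _camera.StreamGrabber.Stop();
    }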


    🕒 Controlling the Capture Interval

    Burst capture does not automatically enforce timing. To control the interval—for example, 10 fps—enable and set AcquisitionFrameRate:

    AcquisitionFrameRateEnable = true  
    AcquisitionFrameRate = desired fps
    

    Reference: How to Control and Verify Frame Rate Using the Basler pylon SDK (C# / .NET 8)
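    In C#, a sketch of this configuration (AcquisitionFrameRateAbs is the GigE naming used by the acA2500-14gm; USB models expose AcquisitionFrameRate instead):

    camera.Parameters[PLCamera.AcquisitionFrameRateEnable].SetValue(true);
    camera.Parameters[PLCamera.AcquisitionFrameRateAbs].SetValue(10.0); // 10 fps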


    📷 Example: Capturing 10 Frames at 10 fps


    📊 Example Output

    Captured images at 10 fps while the object was rotating:

    [Images: BurstCaptureTest_000_45ms.png … BurstCaptureTest_009_943ms.png (ten frames, roughly one every 100 ms)]

    ⚠️ Common Problems & How to Fix Them

    • Missing frames: saving images takes too long. Save on a background thread using ConcurrentQueue.
    • CPU load too high: full-resolution images are heavy. Reduce the ROI.
    • First-frame timestamp inconsistent: trigger-mode behavior. Use a hardware/software trigger for strict timing.

    🔁 Slow Burst Capture at Full Resolution?

    → Reduce ROI to Increase Frame Rate

    Full-resolution capture (e.g., 2592×1944) can bottleneck both bandwidth and processing time. Reducing the Region of Interest (ROI) often dramatically improves:

    • Frame rate
    • Latency
    • Stability

    ROI-based optimization will be covered in the next article.


    📝 Summary

    • Burst capture in pylon is implemented with StreamGrabber.Start() + a loop
    • Frame rate control determines capture timing
    • Image saving can become a bottleneck—offload saving to another thread if necessary
    • Combining ROI with burst capture greatly improves performance

    Author: @MilleVision


    📦 Full Sample Code Package

    A complete working version of all code shown in this article (including burst capture templates) is available here:

    👉 https://millevision.booth.pm/items/7316233