Blog

  • Software Trigger Capture with the Basler pylon SDK (C# / .NET)

    Software Trigger Capture with the Basler pylon SDK (C# / .NET)

    When working with industrial cameras such as Basler, you don’t always want to run the camera in free-run mode (continuous acquisition). There are many situations where you need to capture exactly when something meaningful happens, for example:

    • The moment a button on an experimental setup is pressed
    • When a temperature or voltage reaches a certain threshold
    • When an external event occurs in another device or system

    In such cases, software-triggered acquisition is extremely useful.

    In this article, we’ll look at how to perform software-trigger capture using the Basler pylon SDK in C#, and compare it against free-run / GrabOne() acquisitions using simple stopwatch-based timing.


    ✅ Environment

    Item Value
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8.0 (Windows)

    As in previous articles (for example: How to record camera settings alongside captured images (C# / .NET)), we extend the same BaslerCameraSample class and add software-trigger-related functions plus an example test.


    🔧 Configuring the Software Trigger

    In pylon, software trigger is configured by setting the trigger mode and trigger source as follows:
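A minimal configuration sketch, using the typed parameter names from the pylon .NET API (verify the exact selector values against your camera model):

```csharp
using Basler.Pylon;

// Sketch: select the FrameStart trigger, enable trigger mode,
// and route it to the software trigger source.
Camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
Camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
Camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Software);
```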

    When you want to capture an image, you call ExecuteSoftwareTrigger() to fire the trigger and acquire a frame at that instant:

    Before calling ExecuteSoftwareTrigger, you must start camera streaming with Camera.StreamGrabber.Start(). In other words, the camera must already be armed and grabbing when the trigger is executed.


    🔁 Example: Using the Software Trigger

    Here is one example of how to use the software trigger. There are many ways to structure this, but using the existing implementation from earlier articles, we follow this sequence:

    1. Enable software trigger mode
    2. Start streaming
    3. Fire the software trigger
    4. Wait for frame acquisition (via ImageGrabbed) and then stop streaming

    Because there is a delay between firing the trigger and the ImageGrabbed event, we wait until the handler confirms that a frame was actually acquired before stopping streaming.

    Here, BufferedImageQueue refers to the queue filled by the OnImageGrabbed event handler (as implemented in the earlier event-driven capture article).
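Putting the steps together, here is a rough sketch; EnableSoftwareTrigger and BufferedImageQueue are member names from the earlier articles in this series, assumed here for illustration:

```csharp
// Sketch of the four-step sequence described above.
EnableSoftwareTrigger();              // 1. trigger mode On, source = Software
Camera.StreamGrabber.Start();         // 2. arm the camera (must precede the trigger)
Camera.ExecuteSoftwareTrigger();      // 3. fire the trigger

// 4. wait until OnImageGrabbed has enqueued the frame, then stop streaming
while (BufferedImageQueue.IsEmpty)
    System.Threading.Thread.Sleep(1);
Camera.StreamGrabber.Stop();
```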


⏱ Experiment: Software Trigger vs “Just call GrabOne()”

    You might wonder:

    “What’s the difference between using a software trigger and simply calling GrabOne() after an event occurs?”

    To test this, we extend ExecuteSoftwareTriggerTest to also measure the time taken by a plain GrabOne() call and compare the two.
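The comparison itself needs nothing more than a Stopwatch; here is a self-contained helper (the MeasureMs name is ours, not from the article):

```csharp
using System;
using System.Diagnostics;

static class Timing
{
    // Run an action once and return its wall-clock duration in milliseconds.
    public static long MeasureMs(Action action)
    {
        var stopwatch = Stopwatch.StartNew();
        action();
        stopwatch.Stop();
        return stopwatch.ElapsedMilliseconds;
    }
}
```

In the test, one call wraps the trigger-and-wait path and another wraps a plain GrabOne(), and the two elapsed times are compared side by side.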


    📊 Results

    On my test environment (Basler acA2500-14gm, GigE, AMD Ryzen 9, Windows), software-trigger capture achieved a faster time-to-image than a plain GrabOne().

    This is likely because the camera is already streaming and armed, so the sensor doesn’t need to be restarted for each frame.

    Summary:

    Mode Acquisition Time Characteristics
    Software trigger ~112 ms Faster image acquisition; precise knowledge of when the trigger occurred
    GrabOne ~193 ms Simple to implement; trigger timing is not strictly known

    📝 Summary

    • Software trigger is ideal when you want to capture semantically meaningful moments:

      • e.g., synchronization with experimental data, test equipment, or external signals
    • Compared to GrabOne():

      • Software trigger can reduce the delay between event and image acquisition
      • It also lets you know exactly when the trigger was fired

    In the next article, we’ll look at how to display camera images in WPF, using the same design that I’m adopting in my in-development camera configuration management tool.


    Author: @MilleVision


🛠 Full Sample Code Project (with Japanese comments)

    All sample code introduced in this Qiita series is available as a C# project with unit tests on BOOTH:

    • Includes unit tests for easy verification and learning
    • Will be updated alongside new articles

    👉 Product page

  • Reproducing Camera Conditions: Saving and Loading Basler Settings as PFS Files

    Reproducing Camera Conditions: Saving and Loading Basler Settings as PFS Files

    Basler pylon SDK × C#

    When you need to reproduce a specific capture condition or quickly switch between multiple setups, being able to save and load Basler camera settings as PFS files is extremely useful.

    In this article, we’ll look at how to save and restore camera settings in .pfs format using the Basler pylon SDK.


    What Is the PFS Format?

    A PFS file (pylon Feature Stream file) is Basler’s configuration format for saving and loading camera settings.

    ✅ Typical Use Cases

    1. Backup of camera settings

      • Decide on a stable configuration once → save it → reuse it after reboot or on another PC.
    2. Sharing experiment conditions

      • Distribute settings so everyone in a lab or production line captures under the same conditions.
    3. Loading settings from your software

      • Call camera.Parameters.Load("config.pfs") to instantly apply a predefined setup (shown later).

    ✅ Environment

    Item Value
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Language C# / .NET 8
    File Format .pfs (pylon Feature Stream)

    📝 Saving Camera Settings (Save)

    As in previous articles, we’ll extend the same BaslerCameraSample class introduced in:

    • How to Display and Save Basler Camera Images with OpenCV
    • How to Capture a Single Image Using the Basler pylon SDK in C#

    To save settings, we can use the Camera.Parameters.Save() method. The example below saves the CameraDevice settings. The ParameterPath argument can be either a direct string or one of the predefined properties of the ParameterPath class.
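A minimal sketch of the call, assuming the overload that takes a path string as described above:

```csharp
// Sketch: save only the CameraDevice parameter group to a PFS file.
// ParameterPath.CameraDevice is one of the predefined path properties.
Camera.Parameters.Save("config.pfs", ParameterPath.CameraDevice);
```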

    In many practical scenarios, saving just CameraDevice is sufficient. However, if you want to dump multiple parameter groups at once, you can iterate over all string properties of ParameterPath via reflection.

    Even though we check Camera.Parameters.Contains(value) before calling Save(), some properties may still cause Save() to throw an ArgumentException due to model differences or unsupported paths. In that case we catch and skip them.

    Note: Relying on exceptions as control flow is not ideal; if you know of a cleaner approach, feel free to adapt this logic.


    📂 Example of a Saved PFS File

    If you open a .pfs file in a text editor, you can see that the configuration values are stored in a tab-separated (TSV) format.

    Most of the important parameters are grouped under CameraDevice, so in many cases saving just that group is enough.
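Because the format is plain TSV, it is also easy to inspect outside the SDK. The following stand-alone helper (ours, not part of pylon) parses key/value lines, skipping header and comment lines that start with '#':

```csharp
using System;
using System.Collections.Generic;

static class PfsReader
{
    // Parse tab-separated "key<TAB>value" lines into a dictionary.
    // Illustrative helper only; real PFS files also contain header lines,
    // which we skip here.
    public static Dictionary<string, string> Parse(IEnumerable<string> lines)
    {
        var result = new Dictionary<string, string>();
        foreach (var line in lines)
        {
            if (string.IsNullOrWhiteSpace(line) || line.StartsWith("#"))
                continue; // skip headers and comments
            var parts = line.Split('\t');
            if (parts.Length >= 2)
                result[parts[0]] = parts[1];
        }
        return result;
    }
}
```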


    📥 Loading Settings (Load)

    Next, let’s load a previously saved .pfs file back into the camera.
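A minimal sketch, mirroring the Save() call:

```csharp
// Sketch: apply a previously saved PFS file back to the camera.
Camera.Parameters.Load("config.pfs", ParameterPath.CameraDevice);
```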


    Example: Save & Load Round Trip

    The following test changes the frame rate, saves the settings, then loads them back and verifies that the value was restored.

    After loading, the camera’s frame rate returns to 10 fps, confirming that the PFS file was applied correctly.


    ⚠️ Notes & Caveats

    Point Details
    What gets saved Only parameters that are exposed as user-accessible by the camera
    Trigger & device-specific settings Some hardware-dependent features may be excluded
    Applying PFS to another camera Prefer using the same model or a model with compatible features

    ✅ Summary

    • Using Camera.Parameters.Save() / Load() you can easily reproduce capture conditions.

    • .pfs files are essentially TSV-based configuration files.

    • Very useful for:

      • Backing up camera setups
      • Sharing experiment conditions
      • Managing multiple capture profiles

    Coming Up Next

    In the next article, we’ll look at how to log metadata such as exposure, gain, and ROI together with captured images.

    I’m also working on a GUI tool to conveniently view and edit Basler camera settings; that will be introduced in a future post.


    Author: @MilleVision

  • Displaying and Saving Basler Camera Images with OpenCV

    Displaying and Saving Basler Camera Images with OpenCV


    pylon SDK × OpenCvSharp / C#

    When working with Basler cameras, converting captured images into OpenCV’s Mat format gives you access to a wide range of image processing algorithms.

    In this article, we’ll take a GrabResult from the Basler pylon SDK, convert it to an OpenCV Mat, and then:

    • Display it in real time using Cv2.ImShow
    • Save it as an image file using Cv2.ImWrite

    ✅ Environment

    Item Value
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Pixel Format Mono8 (8-bit grayscale)
    OpenCV Wrapper OpenCvSharp4.Windows (via NuGet)

🎯 Goal: Convert GrabResult → Mat → Display & Save

    Conceptual flow:


    🔧 Converting GrabResult to Mat

    We’ll use PixelDataConverter.Convert() to extract pixel data from IGrabResult into a byte[], and then wrap that buffer as an OpenCV Mat.

    A convenient way is to implement an extension method on IGrabResult:
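A sketch of such an extension method for Mono8 frames; the buffer-size helper name is from the pylon .NET converter API, so double-check it against your SDK version:

```csharp
using Basler.Pylon;
using OpenCvSharp;

public static class GrabResultExtensions
{
    // Sketch: convert a Mono8 IGrabResult to an OpenCV Mat.
    // For color formats, change OutputPixelFormat and MatType accordingly.
    public static Mat ToMat(this IGrabResult grabResult)
    {
        var converter = new PixelDataConverter
        {
            OutputPixelFormat = PixelType.Mono8
        };
        var buffer = new byte[converter.GetBufferSizeForConversion(grabResult)];
        converter.Convert(buffer, grabResult);

        // Copy the buffer into a Mat so the Mat owns its own data.
        var mat = new Mat(grabResult.Height, grabResult.Width, MatType.CV_8UC1);
        System.Runtime.InteropServices.Marshal.Copy(buffer, 0, mat.Data, buffer.Length);
        return mat;
    }
}
```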

    💡 Note: For color formats (e.g., BGR8, RGB8), you’ll need to adjust OutputPixelFormat and MatType (e.g., CV_8UC3) accordingly.


    🖼 Real-Time Display with Cv2.ImShow

    We now extend the BaslerCameraSample class (introduced in the previous article on event-driven capture) to display buffered images using OpenCV.

    Here, we assume that _bufferedImageQueue is a queue of (DateTime, IGrabResult) that the ImageGrabbed event handler enqueues into.

    Example Test: Live Display for 10 Seconds

    Running this test opens an OpenCV window named “Camera” that shows the live feed from your Basler camera.


    💾 Saving Mat with ImWrite

    To save a Mat to disk, use ImWrite(). The following example:

    1. Captures one frame using Snap()
    2. Converts it to Mat
    3. Saves the image
    4. Applies a simple threshold operation and saves again
    5. Also saves via BitmapSource for comparison
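A sketch of the core of that flow; Snap() and ToMat() are the helpers from the earlier single-frame article and the conversion described above:

```csharp
// Sketch: capture one frame, save it, then save a thresholded copy.
using var grabResult = Snap();         // single-frame capture (earlier article)
using var mat = grabResult.ToMat();    // conversion described above

Cv2.ImWrite("SnapAndSaveMatTest_Mat.bmp", mat);

// Simple binary threshold at 128, then save again.
using var binary = new Mat();
Cv2.Threshold(mat, binary, 128, 255, ThresholdTypes.Binary);
Cv2.ImWrite("SnapAndSaveMatTest_Mat_Thresholded.bmp", binary);
```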


    This creates three files:

    1. SnapAndSaveMatTest_Bitmap.bmp — saved from BitmapSource
    2. SnapAndSaveMatTest_Mat.bmp — saved from OpenCV Mat
    3. SnapAndSaveMatTest_Mat_Thresholded.bmp — thresholded version

    Comparing (1) and (2) confirms that both pipelines produce the same image.

    (Image comparison from the original article: BitmapSource / Mat / Mat (Binary))

    📌 Tips & Common Pitfalls

    Point Notes
    Pixel format For color (BGR8, RGB8), use MatType.CV_8UC3 and correct channel order
    Nothing shows in ImShow You must call Cv2.WaitKey(); otherwise the window won’t update
    OpenCvSharp packages Install both OpenCvSharp4 and OpenCvSharp4.runtime.windows via NuGet
    Performance considerations Pinning buffers is fast, but for very high rates you may want to profile allocations

    ✅ Summary

    • You can convert Basler pylon GrabResult to an OpenCV Mat via PixelDataConverter and a byte[] buffer.
    • Cv2.ImShow enables simple live viewing; Cv2.ImWrite lets you save images directly from Mat.
    • Once in Mat form, you can apply the full power of OpenCV: filtering, edge detection, feature extraction, etc.

    Author: @MilleVision

  • Stabilizing Basler Camera Acquisition with Event-Driven Capture and Async Saving (C# / pylon SDK)

    Stabilizing Basler Camera Acquisition with Event-Driven Capture and Async Saving (C# / pylon SDK)

    In previous articles, we covered burst capture (continuous acquisition) and performance tuning using ROI.

    As a follow-up, this article explains how to make your acquisition loop more robust by combining event-driven image grabbing with asynchronous save processing.


    ✅ Why Event-Driven Acquisition Helps

    Key characteristics:

    • Your handler is called only when a frame arrives
    • It integrates cleanly with UI frameworks like WPF

    For single-frame capture, you can simply use GrabOne(). However, for GUI applications (live previews, start/stop capture buttons, etc.), using the ImageGrabbed event often results in a cleaner design and more responsive UI.


    🔧 Minimal Event-Based Implementation

    We will extend the same BaslerCameraSample class used in previous articles:

    • ROI article: Using ROI to Boost Burst Capture Performance
    • Single-frame capture article: How to Capture a Single Image with Basler pylon SDK in C#

    First, we register an ImageGrabbed event handler and start streaming.

    It’s convenient to expose an IsGrabbing property on BaslerCameraSample:

    To stop continuous acquisition, detach the event handler and stop the stream grabber:
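A sketch of the start/stop wiring inside BaslerCameraSample (the method names here are ours):

```csharp
// Sketch: minimal event-driven start/stop wiring.
public void StartGrabbing()
{
    Camera.StreamGrabber.ImageGrabbed += OnImageGrabbed;
    // GrabLoop.ProvidedByStreamGrabber runs the grab loop on an SDK thread,
    // so OnImageGrabbed fires without any polling on our side.
    Camera.StreamGrabber.Start(GrabStrategy.OneByOne, GrabLoop.ProvidedByStreamGrabber);
}

public bool IsGrabbing => Camera.StreamGrabber.IsGrabbing;

public void StopGrabbing()
{
    Camera.StreamGrabber.ImageGrabbed -= OnImageGrabbed;
    Camera.StreamGrabber.Stop();
}
```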

    Whether you attach/detach the event handler every time depends on your application design. For simple apps, you might register the handler once (e.g., in the constructor) and keep it for the lifetime of the object.


    Example: 1 Second of Continuous Capture


    ⚠️ Problem: Saving Overhead Can Stall the Pipeline

    Depending on your PC and image size, saving each image may take several to tens of milliseconds. At higher frame rates, a simple event handler that saves directly inside ImageGrabbed may not keep up, causing:

    • Increased latency
    • Dropped frames
    • Unstable frame intervals

    🧭 Solution: Buffer with ConcurrentQueue and Save on a Separate Thread

    Overall flow:

    First, modify OnImageGrabbed to push cloned results into a queue instead of saving immediately.

    Next, run a separate asynchronous loop that dequeues items and saves them:
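The pattern itself is independent of the camera; here is a self-contained sketch with a generic payload standing in for the cloned IGrabResult (class and method names are ours):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class FrameSaver<T>
{
    private readonly ConcurrentQueue<T> _queue = new();
    private readonly Action<T> _save;

    public FrameSaver(Action<T> save) => _save = save;

    // Called from the grab event handler: enqueue only, never block.
    public void Enqueue(T frame) => _queue.Enqueue(frame);

    // Background loop: keep saving until a stop is requested AND the queue is drained.
    public async Task SaveLoopAsync(CancellationToken stopRequested)
    {
        while (!stopRequested.IsCancellationRequested || !_queue.IsEmpty)
        {
            if (_queue.TryDequeue(out var frame))
                _save(frame);
            else
                await Task.Delay(1);
        }
    }
}
```

Because the loop only exits once the queue is empty, frames buffered during capture are still flushed to disk after StopGrabbing.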


    Example: Combining Event-Driven Capture with Async Saving

    We can now update the test to run acquisition and saving concurrently:

    Depending on your requirements:

    • If real-time streaming is critical and your CPU has enough headroom, you can save while grabbing.
    • If your CPU is heavily loaded but RAM is sufficient, you can buffer during capture and save after stopping (by starting SaveBufferedImages only after StopGrabbing).

    ✅ Summary

    • Using the ImageGrabbed event yields a clean, C#-idiomatic structure for continuous acquisition.
    • Directly saving in the event handler can become a bottleneck at higher frame rates.
    • A robust pattern is: Clone → Enqueue → Async Save on a background task using ConcurrentQueue.
    • This separation helps stabilize continuous acquisition and makes your pipeline more resilient under load.

    In the next article, we’ll look at integrating Basler cameras with OpenCV, enabling more advanced image processing workflows.


    Author: @MilleVision


📦 Full Sample Project (with Japanese comments)

A complete C# project that includes event-driven acquisition, ROI, burst capture, and asynchronous saving is available here (with Japanese comments):

    👉 https://millevision.booth.pm/items/7316233

  • Boosting Frame Rate with ROI on Basler Cameras Using the pylon SDK (C# / .NET 8)

    Boosting Frame Rate with ROI on Basler Cameras Using the pylon SDK (C# / .NET 8)


    When developing machine-vision applications, you may find yourself asking:

    • “Can I increase the frame rate if I only need a portion of the image?”
    • “Can I reduce image size to speed up processing?”

    The answer is often yes—by using ROI (Region of Interest).

    By reducing the sensor readout area, you can significantly decrease data size and improve achievable frame rate. In this article, we explore how to configure ROI using the Basler pylon SDK in C#, along with real-world comparisons of full-frame vs ROI performance.


    ✔ What You Will Learn

    • How to configure ROI (Region of Interest) on Basler cameras
    • How ROI improves frame rate and processing speed
    • Common constraints, pitfalls, and error handling

    ✔ Environment

    Item Details
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8 (Windows)
    Lens VS-LD25N (25 mm C-mount)

    🔍 What Is ROI?

    ROI (Region of Interest) defines a rectangular area on the camera sensor that you want to capture. Basler cameras expose four parameters to control ROI:

    • OffsetX — horizontal position
    • OffsetY — vertical position
    • Width — ROI width
    • Height — ROI height

    Reducing ROI has several benefits:

    • Smaller image size
    • Shorter readout time → higher potential frame rate
    • Faster downstream processing

    🔧 Setting ROI in C#

    Below are helper methods added to the BaslerCameraSample class from previous articles:

    Setting ROI in one operation

    Because ROI must remain inside the sensor boundaries, the order of parameter updates matters.
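A sketch of an order-aware setter (the SetRoi name is ours): resetting the offsets first guarantees that the new width/height always fit on the sensor, after which the offsets can be moved to their final position.

```csharp
// Sketch: set ROI in a safe order — offsets to zero, then size, then offsets.
public void SetRoi(int offsetX, int offsetY, int width, int height)
{
    var p = Camera.Parameters;
    p[PLCamera.OffsetX].SetValue(0);
    p[PLCamera.OffsetY].SetValue(0);
    p[PLCamera.Width].SetValue(width);
    p[PLCamera.Height].SetValue(height);
    p[PLCamera.OffsetX].SetValue(offsetX);
    p[PLCamera.OffsetY].SetValue(offsetY);
}
```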

    ROI using Rect (WPF)


    📈 Why ROI Increases Frame Rate

    Sensor readout time is almost always proportional to the number of pixels captured. Reducing the readout area:

    • Lowers pixel count
    • Reduces required bandwidth
    • Allows the camera to reach higher frame rates

    This effect is independent of exposure time, so even long exposures benefit from faster readout afterward.


    ✔ Constraints When Setting ROI

    Parameter Notes
    Width / Height Many Basler models require values to follow a specific increment (2, 4, 8, 16… depending on sensor)
    OffsetX / OffsetY Must not exceed sensor boundaries

    To inspect the increment value:

    Example test:
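The increment can be read from the integer parameter (for example, Camera.Parameters[PLCamera.Width].GetIncrement() in the .NET API), and the rounding itself is pure arithmetic. The Align helper below is ours, shown so the clamping logic can be tested offline:

```csharp
using System;

static class RoiMath
{
    // Round a requested ROI dimension down to the nearest valid value:
    // clamp to [min, max], then snap to min + n * increment.
    public static long Align(long requested, long increment, long min, long max)
    {
        long clamped = Math.Max(min, Math.Min(max, requested));
        return min + (clamped - min) / increment * increment;
    }
}
```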


    📏 Testing Frame Rate with ROI


    🧪 Real-World Comparison

    Full Resolution (2592 × 1944)

    Frame timestamps demonstrate slower readout:

    Frame 0 1 2 3 4
    T [ms] 72 140 209 277 346

    ROI (640 × 480)

    Frame 0 1 2 3 4
    T [ms] 23 67 87 120 154

    Summary Comparison

    Mode Resolution Resulting Frame Rate
    Full Frame 2592×1944 ~14.6 fps
    ROI 640×480 ~30.0 fps

    → ROI doubles the effective frame rate in this test.


    ⚠️ Common Errors & Solutions

    Issue Cause
    InvalidParameterException ROI exceeds sensor bounds, or width/height violates increments
    No frames after setting ROI Some camera models require reinitialization after ROI changes

    📝 Summary

    • ROI can significantly reduce readout time and increase frame rate
    • Combining ROI with proper frame-rate settings improves real-time performance
    • Pay attention to width/height increments and sensor boundaries
    • Ideal for speeding up burst capture and reducing processing load

Author: @MilleVision


    📦 Full Sample Code Package

👉 https://millevision.booth.pm/items/7316233 Includes ROI, burst capture, trigger control, and full pylon SDK integration (with Japanese comments).

  • Implementing High-Speed Burst Capture with the Basler pylon SDK (C# / .NET 8)

    Implementing High-Speed Burst Capture with the Basler pylon SDK (C# / .NET 8)

    In machine vision and inspection systems, it’s often not enough to capture a single frame—you frequently need to capture multiple images in rapid succession, also known as burst capture or continuous acquisition.

    In this article, we walk through how to implement burst capture using the Basler pylon SDK in C#, including:

    • A basic capture loop
    • Controlling frame interval
    • Saving images with timestamps
    • Practical performance considerations

    ✔ What You Will Learn

    • How to implement a burst capture loop
    • How to control acquisition interval using frame-rate settings
    • How to save frames with timestamps for later analysis
    • How to avoid dropped frames and performance bottlenecks

    ✔ Environment

    Item Details
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8 (Windows)

    🔄 Basic Burst Capture: Capturing N Consecutive Frames

    Before implementing burst capture, define a helper method to control trigger mode:
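A sketch of such a helper (the SetTriggerMode name is ours):

```csharp
// Sketch: switch between free-run (trigger off) and triggered acquisition.
public void SetTriggerMode(bool enabled)
{
    Camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
    Camera.Parameters[PLCamera.TriggerMode].SetValue(
        enabled ? PLCamera.TriggerMode.On : PLCamera.TriggerMode.Off);
}
```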

    The following method performs a free-run (trigger-off) burst capture, saving each frame with both an index and elapsed time. Including timestamps makes later debugging and performance analysis much easier.


    🕒 Controlling the Capture Interval

    Burst capture does not automatically enforce timing. To control the interval—for example, 10 fps—enable and set AcquisitionFrameRate:
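A minimal sketch of the two calls (on this camera model the parameter carries the Abs suffix; other models use AcquisitionFrameRate):

```csharp
// Sketch: enable frame-rate control, then request 10 fps.
Camera.Parameters[PLCamera.AcquisitionFrameRateEnable].SetValue(true);
Camera.Parameters[PLCamera.AcquisitionFrameRateAbs].SetValue(10.0);
```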

    Reference: How to Control and Verify Frame Rate Using the Basler pylon SDK (C# / .NET 8)


    📷 Example: Capturing 10 Frames at 10 fps


    📊 Example Output

    Captured images at 10 fps while the object was rotating:

    45 ms 143 ms 243 ms 343 ms 443 ms
    543 ms 643 ms 743 ms 843 ms 943 ms

    (Thumbnails in the original article; each frame is saved with its index and elapsed time in the file name, e.g. BurstCaptureTest_000_45ms.png through BurstCaptureTest_009_943ms.png.)

    ⚠️ Common Problems & How to Fix Them

    Issue Cause / Solution
    Missing frames Saving images takes too long → save in a background thread using ConcurrentQueue
    CPU load too high Full-resolution images are heavy → reduce ROI
    First-frame timestamp inconsistent Trigger mode behavior → use hardware/software trigger for strict timing

    🔁 Slow Burst Capture at Full Resolution?

    → Reduce ROI to Increase Frame Rate

    Full-resolution capture (e.g., 2592×1944) can bottleneck both bandwidth and processing time. Reducing the Region of Interest (ROI) often dramatically improves:

    • Frame rate
    • Latency
    • Stability

    ROI-based optimization will be covered in the next article.


    📝 Summary

    • Burst capture in pylon is implemented with StreamGrabber.Start() + a loop
    • Frame rate control determines capture timing
    • Image saving can become a bottleneck—offload saving to another thread if necessary
    • Combining ROI with burst capture greatly improves performance

Author: @MilleVision


    📦 Full Sample Code Package

    A complete working version of all code shown in this article (including burst capture templates) is available here:

    👉 https://millevision.booth.pm/items/7316233

  • How to Control and Verify Frame Rate Using the Basler pylon SDK (C# / .NET 8)

    How to Control and Verify Frame Rate Using the Basler pylon SDK (C# / .NET 8)

    When developing vision applications—especially for real-time inspection or production environments—one of the most important performance metrics is frame rate (fps). You may need to know:

    • Can the camera actually capture frames at the rate I’ve set?
    • What limits the maximum achievable frame rate?
    • How do I read both the requested and the actual frame rate?

    This article explains how to set frame rate, enable frame-rate control, and check the actual resulting fps using the Basler pylon SDK in C#.


    ✔ Environment

    Item Details
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8 (Windows)

    💡 AcquisitionFrameRate vs. ResultingFrameRate

    Parameter Meaning
    AcquisitionFrameRate The requested frame rate (user setting)
    ResultingFrameRate The actual achievable frame rate calculated by the camera

    Even if you set 30 fps, the resulting fps may be lower depending on exposure time, ROI, interface bandwidth, etc.


    🔧 Enabling Frame Rate Control

    Frame-rate control must be explicitly enabled:

    Helper method for Boolean parameters

    AcquisitionFrameRateEnable is a boolean parameter (IBoolean), so we define:
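A sketch of such a helper and its use for enabling frame-rate control (the helper name is ours):

```csharp
// Sketch: generic helper for boolean GenICam parameters.
public void SetBooleanParameter(BooleanName name, bool value)
{
    var parameter = Camera.Parameters[name];
    if (parameter.IsWritable)
        parameter.SetValue(value);
}

// Usage: explicitly enable frame-rate control.
// SetBooleanParameter(PLCamera.AcquisitionFrameRateEnable, true);
```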


    🔧 Setting the Desired Frame Rate

    💡 Depending on the camera model, the parameter name may be:

    • AcquisitionFrameRate
    • AcquisitionFrameRateAbs

Verify the exact name via pylon Viewer, or check at runtime which name your camera exposes (for example with Camera.Parameters.Contains).


    📏 Reading the Requested and Actual Frame Rate

    Requested (user-set) frame rate:

    Actual achievable rate:
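A sketch of both reads (Abs-suffixed names apply to this camera model):

```csharp
// Sketch: read the requested and the actually achievable frame rate.
double requestedFps = Camera.Parameters[PLCamera.AcquisitionFrameRateAbs].GetValue();
double resultingFps = Camera.Parameters[PLCamera.ResultingFrameRateAbs].GetValue();
```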

    ResultingFrameRate is essential—it tells you whether your settings are physically achievable given the exposure time, bandwidth, and ROI.


    🚧 What Limits Maximum Frame Rate?

    1. Image Size / ROI

    Smaller ROI = fewer pixels = higher possible fps.

    2. Exposure Time

    If exposure is too long, the sensor cannot start the next frame.

    Example: ExposureTime = 50,000 µs (50 ms) → max possible fps is 20 fps.

    3. Interface Bandwidth (USB3 / GigE)

    High-resolution images or 24-bit RGB increase bandwidth usage, lowering achievable fps.


    📝 When Your Frame Rate Setting Is Ignored

    If you set AcquisitionFrameRate but nothing changes, check:

    ✔ Is AcquisitionFrameRateEnable set to true?

    If false, the camera ignores your setting and runs at the highest allowable frame rate.


    🧪 Example Test: 5 fps → 15 fps → 30 fps

    Example output on my machine:

    Even though 30 fps was requested, the actual fps dropped to ~14.6 fps due to bandwidth or exposure constraints.


    🧭 Troubleshooting

    Symptom Likely Cause
    Frame rate does not increase Exposure too long / ROI too large
    Cannot set AcquisitionFrameRate AcquisitionFrameRateEnable is false
    ResultingFrameRate reads 0 Camera not streaming or not initialized

    📝 Summary

    • Enable frame-rate control via AcquisitionFrameRateEnable = true
    • Set target fps with AcquisitionFrameRateAbs
    • Verify actual achievable fps via ResultingFrameRateAbs
    • Exposure time, ROI size, and bandwidth all influence the maximum fps

    Understanding both the requested and actual frame rate is essential for designing stable, real-time machine vision systems.


    🔜 Next Article

    Next, we will build on frame-rate control by implementing continuous acquisition (burst capture) for high-speed inspections and inline processing.


Author: @MilleVision

  • Manual Gain Control with the Basler pylon SDK (C# / .NET 8)

     


    Manual Gain Control with the Basler pylon SDK (C# / .NET 8)

    When working with industrial cameras such as those from Basler, exposure time alone is not always enough to achieve the brightness you need. This is especially true when:

    • The subject moves quickly
    • You must keep exposure time short
    • The lighting environment limits the available exposure

    In these situations, manual gain control becomes an important tool.

    This article explains how to read, set, and validate gain values in C# using the Basler pylon SDK. We will also examine how gain affects noise and how to balance exposure vs. gain using practical example images.


    ✔ Environment

    Item Details
    Camera Basler acA2500-14gm
    Lens VS Technology VS-LD25N (25 mm C-mount)
    SDK Basler pylon Camera Software Suite
    Lang C# / .NET 8 (Windows)

    💡 What Is Gain?

    Gain represents electronic amplification of the sensor signal.

    • Increasing gain brightens the image
    • But it also amplifies noise, reducing image quality

    Because of this trade-off, gain should be balanced with exposure time rather than used alone. Exposure increases brightness with lower noise, but may blur moving objects. Gain brightens the image without changing shutter time but introduces noise.


    🔧 Code: Setting and Reading Gain

    Below are simple helper methods for manual gain control.

    Set Gain (manual mode)

    Auto Gain Mode

    Read Gain
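A combined sketch of the three helpers (method names are ours; this model exposes gain via the integer GainRaw parameter, and the min/max accessor names may differ slightly between SDK versions):

```csharp
// Sketch: manual gain control via GainRaw.
public void SetGainRaw(long value)
{
    // Manual mode first, otherwise the gain parameter is not writable.
    Camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Off);

    var gain = Camera.Parameters[PLCamera.GainRaw];
    // Clamp to the camera's allowed range to avoid write errors.
    value = Math.Max(gain.GetMinimum(), Math.Min(gain.GetMaximum(), value));
    gain.SetValue(value);
}

public void SetGainAuto() =>
    Camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Continuous);

public long GetGainRaw() => Camera.Parameters[PLCamera.GainRaw].GetValue();
```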


    📷 Gain in Practice: Image Comparisons

    ① Exposure Fixed, Gain Increased

    In this test, exposure time is constant while only gain is changed. You can clearly see how higher gain results in brighter images but more noise.

    Gain 0 dB Gain 12 dB Gain 24 dB
    (Images from the original article)

    ② Same Brightness, Different Gain (Exposure Adjusted)

    Here, exposure time was adjusted so that the brightness remained approximately constant, allowing you to see just the difference in noise.

    Gain 0 dB (60ms) Gain 12 dB (24ms) Gain 24 dB (12ms)
    (Images from the original article)

    💡 GainRaw ↔ Gain (dB) Conversion

    Basler cameras often expose two gain parameters:

    • Gain (in dB) — user-friendly
    • GainRaw — internal integer register value

    The mapping between them varies by camera model, but it can typically be approximated by a linear transform:
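As a concrete sketch, a linear mapping with a single scale factor looks like this; the 0.0359 dB-per-step constant is an illustrative placeholder, so look up the real coefficients for your model:

```csharp
using System;

static class GainConversion
{
    // Illustrative linear GainRaw <-> dB mapping.
    // DbPerRawStep is a placeholder value, not a documented constant.
    public const double DbPerRawStep = 0.0359;

    public static double RawToDb(long raw) => raw * DbPerRawStep;

    public static long DbToRaw(double db) => (long)Math.Round(db / DbPerRawStep);
}
```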

    This is useful when a camera supports only GainRaw or when you need precise dB control.


    🧭 Troubleshooting Common Gain Issues

    Issue Cause / Solution
    Gain does not change GainAuto is not set to Off
    Parameter write error Value outside the allowed range — check GetMin() / GetMax()
    Noise becomes excessive Gain too high — prefer exposure when possible
    Cannot write to Gain Parameter not writable → likely because auto gain is active

    When Gain Cannot Be Set

    If GainAuto has not been set to Off, the gain parameter's IsWritable property will return false, preventing manual writes.

    Always check IsWritable before writing to avoid runtime exceptions.


    📝 Summary

    • Gain amplifies brightness and noise — unlike exposure, which brightens with less noise

    • Set GainAuto = Off to enable manual gain control

    • The balance between exposure and gain is crucial:

      • Exposure = cleaner images, but motion blur risk
      • Gain = no blur, but increased noise

    By comparing images side-by-side, we can see why most machine-vision systems prioritize exposure first, then use gain sparingly.


    🔜 Next Article

    With exposure and gain under control, the next major topic is trigger modes and frame synchronization. We’ll explore hardware triggers, software triggers, and best practices for synchronized acquisition.


Author: @MilleVision

  • How to Manually Control Exposure Time Using the Basler pylon SDK (C# / .NET 8)


    How to Manually Control Exposure Time Using the Basler pylon SDK (C# / .NET 8)

    Have you ever felt that your industrial camera images look too dark, too bright, or fluctuate unexpectedly?
    Automatic exposure (auto-exposure) can be useful, but in many machine-vision or inspection systems, consistent lighting is essential, and automatic adjustments can cause instability.

    With the Basler pylon SDK, you can switch exposure control to manual mode and specify an exact exposure time (in microseconds), enabling stable and predictable imaging.

    This article explains how to set, read, and validate exposure time manually using C# and the pylon SDK.


    ✔ Environment

    Item Details
    Camera Basler acA2500-14gm
    SDK Basler pylon Camera Software Suite
    Language C# / .NET 8.0 (Windows)

    💡 Background: GenICam and Camera Parameters

    Parameters such as ExposureTimeAbs used in this article follow GenICam (Generic Interface for Cameras) — an industry standard used by Basler, FLIR, IDS, JAI, and many other industrial camera manufacturers.

    With GenICam SFNC (Standard Features Naming Convention), you can access:

    • Exposure
    • Gain
    • ROI / image size
    • Trigger settings
    • Acquisition modes

    all using a common parameter model.

    👉 GenICam SFNC specification:
    https://www.emva.org/standards-technology/genicam/

    This article focuses on practical usage, but knowing that pylon follows GenICam helps when switching between camera brands.


    🔧 Setting Exposure Time Manually

    Below is an extension to the BaslerCameraSample class from the previous article.

    Method: Set Exposure Time (manual mode)


    Helper Functions

    Set auto-exposure mode (Off / Once / Continuous)

    Set an enum parameter

    Set a floating-point parameter (e.g., ExposureTimeAbs)
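A combined sketch of the method and its helpers (method names are ours; the CanSetValue check guards against enum values a given model does not support):

```csharp
// Sketch: manual exposure control, times in microseconds.
public void SetExposureTime(double microseconds)
{
    SetEnumParameter(PLCamera.ExposureAuto, PLCamera.ExposureAuto.Off);
    SetFloatParameter(PLCamera.ExposureTimeAbs, microseconds);
}

// Set an enum parameter only if the value is supported by this camera.
private void SetEnumParameter(EnumName name, string value)
{
    var parameter = Camera.Parameters[name];
    if (parameter.CanSetValue(value))
        parameter.SetValue(value);
}

// Set a floating-point parameter, clamped to the camera's allowed range.
private void SetFloatParameter(FloatName name, double value)
{
    var parameter = Camera.Parameters[name];
    parameter.SetValue(Math.Max(parameter.GetMinimum(),
                       Math.Min(parameter.GetMaximum(), value)));
}
```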


    🔍 Reading the Current Exposure Time

    Use GetValue() to read exposure parameters from the camera.


    💡 About ExposureAuto Modes

    Value Behavior
    Off Manual exposure (used in this article)
    Once Adjusts exposure once, then fixes it
    Continuous Continuously adjusts to maintain brightness

    📷 Example: Change Exposure and Capture Images

    The test below captures images at 5 ms, 20 ms, and 50 ms.


    📊 Exposure Comparison


    5 ms (darker) 20 ms (balanced) 50 ms (brighter)
    (Images from the original article)

    🧭 Common Issues & Troubleshooting

    Issue Cause / Solution
    Exposure does not change ExposureAuto is still On
    Value outside valid range Exposure too long or too short for the camera model
    Image too dark/bright Lighting or gain settings are insufficient
    Exposure fluctuates External lighting or auto exposure still enabled

    📝 Summary

    • Set ExposureAuto = Off to enable manual exposure control
    • Exposure time is specified in microseconds (μs)
    • pylon SDK accesses all parameters through Camera.Parameters[...], following GenICam SFNC

    🔜 Coming Up Next

    Exposure alone may not give enough brightness range depending on your lighting environment.
    The next article will cover manual gain control using the pylon SDK.

  • How to Capture a Single Image from Basler ace Camera using pylon SDK in C#


    How to Capture a Single Image from Basler ace Camera using pylon SDK in C#

    Basler industrial cameras can be controlled easily using the pylon Camera Software Suite.
    In this article, I’ll show you how to connect to a camera, grab a single frame, convert it to a BitmapSource, and save it as a .bmp file—all using C# and .NET 8.

    When I first started working with Basler cameras in a .NET environment, I found very few practical code examples.
    This guide is based on tested code from my own applications.

    This article covers:

    • Connecting to a camera
    • Capturing one frame (Snap)
    • Converting to BitmapSource (WPF-friendly)
    • Saving the image as .bmp

    ⚠️ Important Notice

    This article demonstrates how to develop custom applications using the Basler pylon Camera Software Suite SDK.

    • The content here is based on my own testing and implementation
    • It is not endorsed by Basler AG
    • The SDK itself is not redistributed here (no DLLs, headers, etc.)

    To use the SDK, download it directly from Basler’s official website:

    🔗 pylon SDK download
    https://www.baslerweb.com/en/products/software/basler-pylon-camera-software-suite/

    Make sure to review the EULA, especially if using the SDK in commercial or high-risk environments:
    https://www.baslerweb.com/en/service/eula/

    The sample code in this article is provided strictly for learning and reference. Use it at your own responsibility.


    ✔ Environment

    Item Details
    SDK Basler pylon Camera Software Suite
    Language C# (.NET 8.0, Windows)
    Camera Basler acA2500-14gm
    OS Windows 10 / 11

    1. Connecting to the Camera

    First, create a simple wrapper class for managing the connection.
    You can connect to the first available camera or use a specified camera name (as shown in Basler pylon Viewer).


    2. Capturing a Single Image

    The simplest way to grab one frame is by calling GrabOne() through the Stream Grabber.
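A sketch of such a Snap() wrapper (the method name is ours; the single-int timeout overload is assumed, so check it against your SDK version):

```csharp
// Sketch: grab a single frame with a 5-second timeout.
public IGrabResult Snap()
{
    IGrabResult result = Camera.StreamGrabber.GrabOne(5000);
    if (!result.GrabSucceeded)
        throw new InvalidOperationException(
            $"Grab failed: {result.ErrorCode} {result.ErrorDescription}");
    return result;
}
```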


    3. Converting to BitmapSource (for WPF)

    IGrabResult cannot be displayed directly in WPF.
    We convert it to a BitmapSource using PixelDataConverter.

    Supports:

    • RGB8packed
    • BGR8packed
    • Mono8 (default fallback)
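A sketch of the Mono8 fallback path (the extension-method name is ours; the buffer-size helper name should be checked against your SDK version):

```csharp
// Sketch: convert an IGrabResult to a WPF BitmapSource via PixelDataConverter.
public static BitmapSource ToBitmapSource(this IGrabResult grabResult)
{
    var converter = new PixelDataConverter { OutputPixelFormat = PixelType.Mono8 };
    var buffer = new byte[converter.GetBufferSizeForConversion(grabResult)];
    converter.Convert(buffer, grabResult);

    return BitmapSource.Create(
        grabResult.Width, grabResult.Height,
        96, 96,                    // DPI
        PixelFormats.Gray8, null,  // no palette needed for 8-bit grayscale
        buffer,
        grabResult.Width);         // stride = width for 8-bit mono
}
```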

    4. Saving the Image as .bmp

    A simple helper method using BmpBitmapEncoder:


    5. Example Test Code

    You can easily test the entire pipeline (grab → convert → save) using the following:


    ✔ Output Example

    The captured image will be saved as: SaveBitmapTest.bmp

    Example output:

    (Image from the original article)


    ✨ Conclusion

    Although official documentation covers the basics, hands-on .NET examples for pylon are still relatively rare.
    Fortunately, pylon’s .NET API is clean and integrates smoothly with Windows desktop applications such as WPF.