As I was reading about the applications of UV (ultraviolet) radiation in industrial operations, especially for anomaly detection, I became fascinated by the possibility of developing a proof-of-concept AI-driven industrial automation mechanism for detecting plastic surface anomalies as a research project. Thanks to its shorter wavelength, ultraviolet radiation can be employed in industrial machine vision systems to detect extremely small cracks, fissures, or gaps: UV exposure can reveal imperfections that visible light simply bounces off, catching production line mistakes overlooked by the human eye or visible-light camera sensors.
In the spirit of developing a proof-of-concept research project, I wanted to build an easily accessible, repeatable, and feature-rich AI-based mechanism showcasing as many different experiment parameters as possible. Nonetheless, I quickly realized that high-grade or even semi-professional UV-sensitive camera sensors were too expensive, too complicated to implement, or too restrictive for the features I envisioned. Even high-precision UV-only bandpass filters were impractical, since they are designed specifically for a handful of high-end full-spectrum digital camera architectures. Therefore, I scrutinized the documentation of various commercially available camera sensors to find a suitable candidate for my plastic surface anomaly detection mechanism, which relies on the direct application of UV radiation to plastic object surfaces. After this research, the Raspberry Pi camera module 3 stood out as a promising, cost-effective option since it is based on the 12-megapixel Sony IMX708 CMOS image sensor, whose blue channel retains more than 40% relative response at 400 nm. Although I knew the camera module 3 could not produce truly accurate UV-induced photography without heavily modifying the Bayer layer and the integrated camera filters, I decided to purchase one and experiment to see whether external camera filters could yield image samples exposing a sufficient discrepancy between plastic surfaces at different defect stages under UV lighting.
In this regard, I started to inspect various blocking camera filters to isolate the wavelength range I required — 100–400 nm — by absorbing the visible spectrum. After this research, I decided to utilize two different filter types separately to increase the breadth of UV-applied plastic surface image samples — a glass UV bandpass filter (ZWB ZB2) and color gel filters with different light transmission levels (low, medium, and high).
Since I did not want to constrain my experiments to a single UV-exposure quality control condition, I decided to employ three UV light sources providing different wavelengths of ultraviolet radiation — 275 nm, 365 nm, and 395 nm:
✅ DFRobot UVC Ultraviolet Germicidal Lamp Strip (275 nm)
✅ DARKBEAM UV Flashlight (395 nm)
✅ DARKBEAM UV Flashlight (365 nm)
After conceptualizing my initial prototype with the mentioned components, I needed an applicable and repeatable method to produce plastic objects with varying stages of surface defects (none, high, and extreme), composed of different plastic materials. After considering different production methods, I decided to design a simple cube in Fusion 360 and alter the slicer settings to introduce artificial but controlled surface defects (top-layer bonding issues). In this regard, I was able to 3D-print plastic objects with a great deal of variation thanks to commercially available filament types, including UV-sensitive and reflective ones, resulting in an extensive image dataset of UV-applied plastic surfaces:
✅ Matte White
✅ Matte Khaki
✅ Shiny (Silk) White
✅ UV-reactive White (Fluorescent Blue)
✅ UV-reactive White (Fluorescent Green)
Before proceeding with developing my industrial-grade proof-of-concept device, I needed to ensure that all the components I chose (camera filters, UV light sources, and plastic filaments) were compatible and sufficient to generate UV-applied plastic surface image samples with enough discrepancy (contrast) across the surface defect stages to train a visual anomaly detection model. Therefore, I decided to build a simple data collection rig based on the Raspberry Pi 4 to construct my dataset and review its validity. Since I chose the Raspberry Pi camera module 3 Wide to cover more of the surface area of the target plastic objects, I designed unique multi-part camera lenses matching its 120° ultra-wide angle of view (AOV), making the camera module 3 compatible with the glass UV bandpass filter and the color gel filters. Then, I designed two different rig bases (stands) compatible with the flashlight-form and strip-form UV light sources, enabling height adjustment when attaching the camera module case mounts (carrying the lenses) so as to change the distance between the camera (image sensor) focal point and the target plastic object surface.
After building my simple data collection rig, I was able to do the following (a minimal capture sketch follows this list):
✅ utilize two different types of camera filters — a glass UV bandpass filter (ZWB ZB2) and color gel filters (with different light transmission levels),
✅ adjust the distance between the camera (image sensor) focal point and the plastic object surfaces,
✅ apply three different UV wavelengths — 395 nm, 365 nm, and 275 nm — to the plastic object surfaces,
✅ and capture image samples of various plastic materials showcasing three different stages of surface defects — none, high, and extreme — while recording the concurrent experiment parameters.
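To illustrate how such parameter-tagged samples can be collected, here is a minimal capture sketch using the picamera2 library; the parameter labels and file-naming scheme are hypothetical placeholders rather than my exact data collection code.

```python
# capture_sample.py — a minimal sketch for collecting labeled UV-exposed
# surface images with picamera2 on a Raspberry Pi (hypothetical naming scheme).
import time
from datetime import datetime

from picamera2 import Picamera2

# Experiment parameters for the current capture session (hypothetical labels).
PARAMS = {
    "material": "matte_white",  # e.g., matte_khaki, silk_white, uv_blue, uv_green
    "filter": "zwb2",           # zwb2 | gel_low | gel_medium | gel_high
    "uv_nm": 365,               # 275 | 365 | 395
    "defect": "high",           # none | high | extreme
}

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration(main={"size": (2304, 1296)}))
picam2.start()
time.sleep(2)  # let auto-exposure and white balance settle under UV lighting

# Encode every experiment parameter into the file name so the dataset stays
# searchable by material, filter type, UV wavelength, and defect stage.
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
name = "{material}_{filter}_{uv_nm}nm_{defect}_{stamp}.jpg".format(stamp=stamp, **PARAMS)
picam2.capture_file(name)
print("Saved sample:", name)
```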
After collecting UV-applied plastic surface images with all possible combinations of the mentioned experiment parameters, I managed to construct my extensive dataset and achieve a reliable discrepancy between the different surface defect stages to train a visual anomaly detection model. In this regard, I confirmed that the camera module 3 Wide produced sufficient UV-exposed image samples to continue developing my proof-of-concept mechanism.
After successfully training my FOMO-AD model (visual anomaly detection) on Edge Impulse Studio, I decided to stop developing my mechanism on the Raspberry Pi 4 and migrated the project to the Raspberry Pi 5, since I wanted to capitalize on the Pi 5’s dual CSI ports, which allowed me to utilize two different camera modules (regular Wide and NoIR Wide) simultaneously. I added the secondary camera module 3 NoIR Wide, which is based on the same IMX708 image sensor but lacks an IR filter, to review the visual anomaly detection model's behavior with a regular camera and a night-vision camera simultaneously, working toward a feature-rich industrial-grade surface defect detection mechanism.
After configuring my dual-camera setup and the visual anomaly detection model (FOMO-AD) on the Raspberry Pi 5, I started to design a complex circular conveyor mechanism based on my previous data collection rig, letting me place plastic objects under the two cameras (regular Wide and NoIR Wide) automatically and run inferences on the images they produce simultaneously.
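As a rough illustration of this dual-camera inference loop, below is a minimal sketch using two Picamera2 instances and the Edge Impulse Linux Python SDK; the model file name is a placeholder and the visual-anomaly result handling is illustrative of the SDK's output for visual AD models, so treat this as a sketch rather than my production code.

```python
# dual_inference.py — a minimal sketch of running the FOMO-AD model on frames
# from the regular Wide and NoIR Wide camera modules attached to the
# Raspberry Pi 5's two CSI ports (model path and result handling illustrative).
import time

from picamera2 import Picamera2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # hypothetical path to the deployed Linux model

def make_camera(num):
    cam = Picamera2(num)  # camera_num 0 / 1 selects the CSI port
    # picamera2's "BGR888" format yields RGB-ordered arrays (naming quirk).
    cam.configure(cam.create_preview_configuration(
        main={"format": "BGR888", "size": (320, 320)}))
    cam.start()
    return cam

cameras = {"regular": make_camera(0), "noir": make_camera(1)}
time.sleep(2)  # allow both sensors to settle

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    for label, cam in cameras.items():
        frame = cam.capture_array()  # one RGB frame from this camera
        features, _ = runner.get_features_from_image(frame)
        result = runner.classify(features)["result"]
        # Visual anomaly models report a per-cell grid plus summary scores.
        print(label, "->", result.get("visual_anomaly_max"))
```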
Since I wanted to develop a sprocket-chain circular conveyor mechanism rather than a belt-driven one, I needed to design many custom mechanical components to achieve my objectives and conduct fruitful experiments. To align plastic objects under the focal points of the cameras without resorting to limit switches, I decided to utilize neodymium magnets and two magnetic Hall-effect sensor modules. While building these complex parts, I encountered various issues and went through several iterations before the conveyor mechanism could demonstrate the features I planned. I documented my design mistakes and adjustments below to explain my development process thoroughly for this research study :)
As I started to design the mechanical components, I also decided to develop a unique controller board (PCB) as the primary interface of the sprocket-chain circular conveyor. To reduce the footprint of the controller board, I utilized an ATmega328P and designed the board as a 4-layer PCB in the form of a custom Raspberry Pi 5 shield (hat).
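To give a sense of how the Raspberry Pi 5 could drive such a controller board from Python, here is a hedged sketch of the Pi-to-ATmega328P handshake; the serial port, baud rate, and single-character command protocol are hypothetical illustrations, not the actual firmware interface.

```python
# conveyor_client.py — a hedged sketch of commanding the ATmega328P controller
# board (conveyor hat) from the Raspberry Pi 5. Port, baud rate, and the
# single-character command protocol below are hypothetical.
import serial

conveyor = serial.Serial("/dev/ttyAMA0", 115200, timeout=10)

def advance_to_next_object():
    """Ask the controller to run the sprocket-chain conveyor until one of the
    Hall-effect modules detects the next neodymium magnet (i.e., a plastic
    object is aligned under the camera focal points), then await its reply."""
    conveyor.write(b"A")    # hypothetical 'advance' command
    ack = conveyor.read(1)  # controller acknowledges once aligned
    return ack == b"D"      # hypothetical 'done' acknowledgement

if advance_to_next_object():
    print("Plastic object aligned under the cameras")
```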
Finally, since I wanted to simulate the experience of operating an industrial-grade automation system, I developed an authentic web dashboard for the circular conveyor (a minimal sketch follows this list), which lets the user:
✅ review real-time inference results with timestamps,
✅ sort the inference results by camera type (regular or NoIR),
✅ and enable the Twilio integration to get the latest surface anomaly detection notifications as SMS.
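As an illustration of these three dashboard features, below is a minimal Flask sketch with the Twilio Python helper library; the routes, in-memory log, credentials, and phone numbers are hypothetical placeholders, not the production dashboard code.

```python
# dashboard.py — a minimal Flask sketch of the conveyor dashboard features:
# timestamped inference results, filtering by camera type, and optional
# Twilio SMS notifications (hypothetical routes and credentials).
from datetime import datetime

from flask import Flask, jsonify, request
from twilio.rest import Client

app = Flask(__name__)
results = []  # in-memory log of {"camera", "label", "time"} entries

# Hypothetical Twilio credentials and phone numbers.
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")

def notify(camera, label):
    # Send the latest surface anomaly detection notification as an SMS.
    twilio.messages.create(
        body=f"[{camera}] surface anomaly detected: {label}",
        from_="+10000000000", to="+10000000001")

@app.route("/results", methods=["POST"])
def add_result():
    # Register a new inference result with a timestamp.
    entry = {"camera": request.json["camera"],  # "regular" or "noir"
             "label": request.json["label"],    # e.g., "anomaly" / "no anomaly"
             "time": datetime.now().isoformat(timespec="seconds")}
    results.append(entry)
    if entry["label"] == "anomaly" and request.json.get("sms"):
        notify(entry["camera"], entry["label"])
    return jsonify(entry)

@app.route("/results", methods=["GET"])
def list_results():
    camera = request.args.get("camera")  # optional filter: regular | noir
    return jsonify([r for r in results if not camera or r["camera"] == camera])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```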
The following tutorial provides in-depth feature, design, and code explanations, along with the challenges I faced during the overall development process.
🎁📢 Huge thanks to ELECROW for sponsoring this project by providing their high-quality PCB manufacturing service:
Kutluhan Aktar