Vizy The AI Camera Aims To Ease Machine Vision

Cameras are getting smarter and more capable than ever, able to run embedded machine vision algorithms and pull off tricks far beyond what something like a serial camera and microcontroller board could manage, and the upcoming Vizy aims to be smarter and easier to use still. Vizy is the work of Charmed Labs, and this isn’t their first foray into accessible machine vision; they are the same folks behind the Pixy and Pixy 2 cameras. Vizy’s main goal is to make object detection and classification easy, with thoughtful hardware features and a browser-based interface.

Vizy can identify common birds with “Birdfeeder”, one of several built-in applications that use only local processing.

The usual way to do machine vision is to get a USB camera and run something like OpenCV on a desktop machine to handle the processing. But Vizy leverages a Raspberry Pi 4 to provide a tightly-integrated unit in a small package with a variety of ready-to-run applications. For example, the “Birdfeeder” application comes ready to take snapshots of and identify common species of bird, while also identifying party-crashers like squirrels.

The demonstration video on their page shows off using the built-in high-current I/O header to control a sprinkler, repelling non-bird intruders with a splash of water while uploading pictures and video clips. The hardware design also looks well thought out; not only is there a safe shutdown and low-power mode for the Raspberry Pi-based hardware, but the lens can be swapped and the camera unit itself even contains an electrically-switched IR filter.
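Charmed Labs haven’t published Vizy’s API here, but the general recipe behind “Birdfeeder” is familiar territory. The sketch below is a rough approximation using generic parts: a TensorFlow Lite classifier running on Pi camera frames, with an RPi.GPIO pin standing in for the high-current sprinkler output. The model file, labels, class names, and pin number are all placeholders, not anything from Vizy itself.

```python
# Sketch only: Vizy's actual Python API isn't shown here, so this approximates the
# "Birdfeeder" idea with generic parts -- a TFLite classifier on camera frames plus an
# RPi.GPIO pin standing in for the high-current sprinkler output. The model path,
# labels file, class names, and pin number are all assumptions.
import time
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

SPRINKLER_PIN = 18                      # hypothetical GPIO driving the valve
GPIO.setmode(GPIO.BCM)
GPIO.setup(SPRINKLER_PIN, GPIO.OUT, initial=GPIO.LOW)

interpreter = Interpreter(model_path="bird_classifier.tflite")  # assumed model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = [line.strip() for line in open("labels.txt")]          # assumed labels file

cap = cv2.VideoCapture(0)               # Pi camera exposed through V4L2
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        h, w = inp["shape"][1], inp["shape"][2]
        img = cv2.resize(frame, (w, h))
        img = np.expand_dims(img, axis=0).astype(inp["dtype"])
        interpreter.set_tensor(inp["index"], img)
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        label = labels[int(np.argmax(scores))]

        if label == "squirrel":         # party-crasher: give it a squirt
            GPIO.output(SPRINKLER_PIN, GPIO.HIGH)
            time.sleep(2)
            GPIO.output(SPRINKLER_PIN, GPIO.LOW)
        elif label != "background":     # presumably a bird: save a snapshot
            cv2.imwrite(f"bird_{int(time.time())}.jpg", frame)
        time.sleep(0.5)
finally:
    cap.release()
    GPIO.cleanup()
```

Vizy’s built-in applications presumably wrap this kind of loop behind the browser interface, which is exactly the convenience being sold here.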

Vizy has a Kickstarter campaign planned, but like many others, Charmed Labs is still adjusting to the changes the COVID-19 pandemic has brought. You can sign up to be notified when Vizy launches; we know we’ll be keen for a closer look once it does. Easier machine vision is always a good thing, because it helps free people to focus on clever ideas like machine vision-based tool alignment.

OAK Vision Modules Help You See The Forest And The Trees

OpenCV is an open-source library of computer vision algorithms whose power and flexibility have made many machine vision projects possible. But even with code highly optimized for maximum performance, we always wish for more, which is why our ears perk up whenever we hear about a hardware-accelerated vision module, and the latest buzz is coming out of the OpenCV AI Kit (OAK) Kickstarter campaign.

Two vision modules launched with this campaign: the OAK-1, with a single color camera for two-dimensional vision applications, and the OAK-D, which adds stereo cameras for that third dimension. The onboard brain is a Movidius Myriad X processor which, according to team members who have dug through its datasheet, has been massively underutilized in other products. They believe OAK modules will help the chip fulfill its potential for vision applications, delivering high performance while consuming little power in a small form factor. Reading over the spec sheet, we think it’s fair to call these “Ultimate Myriad X Dev Boards”, but we must concede “OpenCV AI Kit” sounds better. It does not provide hardware acceleration for the entire OpenCV library (likely an impossible task), but it does cover the highly demanding subset suited to Myriad X acceleration.
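The OAK boards come with their own SDK, but for a taste of what Myriad X acceleration looks like from the OpenCV side, the dnn module can already push a network onto a Myriad VPU through the Inference Engine (OpenVINO) backend. A minimal sketch, with the model filenames as placeholders for any OpenVINO IR detection model:

```python
# Sketch: not the OAK SDK itself, just an illustration of offloading a network to a
# Myriad X VPU using OpenCV's dnn module with the Inference Engine (OpenVINO) backend.
# The model files ("face.xml"/"face.bin") are placeholders for any OpenVINO IR model.
import cv2

net = cv2.dnn.readNet("face.xml", "face.bin")                  # OpenVINO IR model
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE) # route through OpenVINO
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)             # run on the Myriad VPU

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Most OpenVINO detection models take a fixed-size BGR blob; 300x300 is typical.
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300))
    net.setInput(blob)
    detections = net.forward()          # inference happens on the VPU, not the CPU
    for det in detections[0, 0]:
        if float(det[2]) > 0.5:         # [image_id, label, conf, x1, y1, x2, y2]
            h, w = frame.shape[:2]
            x1, y1, x2, y2 = [int(v) for v in det[3:7] * [w, h, w, h]]
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("myriad", frame)
    if cv2.waitKey(1) == 27:            # Esc to quit
        break
```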

Since the campaign launched a few weeks ago, some additional information has been released to help assure backers that this project has real substance. It turns out OAK is an evolution of a project we covered almost exactly one year ago that became a real product, DepthAI, so at least this is not their first rodeo. It is also encouraging that their invitation to the open hardware community has already borne fruit. Check out this thread discussing OAK for robot vision, where a question was met with an honest “we don’t have expertise there” from the OAK team, but ArduCam then pitched in with their camera module experience to help.

We wish them success for their planned December 2020 delivery. They have already far surpassed their funding goals, they’ve shipped hardware before, and we see a good start to a development community. We look forward to the OAK-1 and OAK-D joining the ranks of other hacking friendly vision modules like OpenMV, JeVois, StereoPi, and AIY Vision.

Dial In Your Multi-Headed 3D Printer With 2020 Machine Vision

Most folks who have been poking around at multi-tool 3D printing know that lining up nozzles is a gnarly but necessary pain point. Existing methods have us measure offsets either with a vernier scale or with a series of pictures taken by an upwards-facing camera. And this step is not to be ignored! Any mismatch between nozzles, and your multicolor prints end up looking like Scotty really screwed up those sliders on that transporter beam console. Fear not, however! [Danal] took this problem as an opportunity to write something that’s completely automated, brought to you by some machine vision.

Dubbed TAMV, for Tool Align Machine Vision, [Danal]’s setup adds a Raspberry Pi and an upwards-facing camera alongside his existing 3D printer motion controller. A few lines of code (and a few hours of compiling OpenCV) later, and he had himself a circle-detecting script that automatically cycles through each tool, detects the nozzle center, and calculates an offset for each tool that’s stored in the machine’s configuration file. If that’s not nifty enough, he’s made the entire setup open-source, and he included both an installation script for compiling OpenCV and a well-written set of step-by-step instructions.
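TAMV’s actual source lives in [Danal]’s repository; as a flavor of the core trick, here is a minimal stand-in that grabs a frame from the upward-facing camera, finds the circular nozzle bore with a Hough transform, and reports its pixel offset from the image center. The Hough parameters are guesses that would need tuning for a real nozzle and lens.

```python
# Minimal sketch of the nozzle-centering idea (not TAMV's actual code): find the
# nozzle bore with a Hough circle transform and report its offset from image center.
import cv2

def nozzle_offset_px(frame):
    """Return (dx, dy) of the detected nozzle center relative to image center, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                  # knock down sensor noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=100, param2=30, minRadius=10, maxRadius=60)
    if circles is None:
        return None
    x, y, _r = circles[0][0]                        # strongest circle candidate
    h, w = gray.shape
    return x - w / 2.0, y - h / 2.0                 # pixel offset from optical center

cap = cv2.VideoCapture(0)                           # upward-facing USB camera
ok, frame = cap.read()
if ok:
    print("nozzle offset (px):", nozzle_offset_px(frame))
```

The real script repeats this for every tool and converts the pixel offsets into machine units before writing them into the printer’s configuration.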

In a world where most hobbyist approaches still solve this problem manually, this is leaps and bounds ahead of what we know, and it’s a great application of machine vision built on top of a stack of recognizable hardware and software. While this project was outfitted for a Jubilee running a Duet 3 controller with a Raspberry Pi connected in “single-board computer” mode, the core features are readily adaptable to any other multi-tool machine with a similar control board stack. And for folks willing to poke under the hood, the project could even be extended into a standalone script that runs locally on your PC and simply prints the tool offsets.

It’s refreshing that, a decade after 3D printers arrived on our desks, projects like TAMV are still finding ways to make these machines more capable. For more fresh hacks in this category, check out a new spin on using Sharpie ink as a support material release agent.

Sadly, [Danal] passed away this past week, but we are grateful to have captured a snapshot in the history of this person’s life.

Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips

What has dual compressed-air cannons, 500 roll-on deodorant balls, and a machine-learning brain with a bad attitude? We didn’t know either, until [Leo Fernekes] dropped this video on his autonomous robot sentry gun and saw it in action for ourselves.

Now, we’ve seen tons of sentry guns on these pages before, shooting everything from water to various forms of Nerf. And plenty of those builds have used some form of machine vision to aim the gun onto the target. So while it might appear that [Leo]’s plowing old ground here, this build is chock full of interesting tips and tricks.

It started when [Leo] saw a video on TensorFlow basics from our friend [Edje Electronics], which gave him the boost needed to jump into an AI project. The controller he ended up with looks for humans in the scene and slews the turret onto target, where the air cannons can do their thing. The hefty ammo is propelled by compressed air, which is dumped into the chamber using a solenoid valve with an interesting driver that maximizes the speed at which it opens. Style points go to the bacteriophage T4-inspired design, and to the sequence starting at 1:34 which reminded us of the factory scene from RoboCop.
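[Leo]’s own code isn’t reproduced in the video, but the control loop boils down to “find a human, slew the turret onto them.” Here is a rough sketch of that loop using OpenCV’s built-in HOG person detector in place of his TensorFlow model; the serial port and command format for the turret controller are made up for illustration.

```python
# Rough sketch of the "find a human, slew the turret" loop -- not [Leo]'s code.
# OpenCV's built-in HOG person detector stands in for his TensorFlow model, and the
# serial port / command format for the turret controller are assumptions.
import cv2
import serial

turret = serial.Serial("/dev/ttyUSB0", 115200)      # hypothetical turret controller
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
K_PAN = 0.05                                        # proportional gain, pixels -> degrees
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects):
        x, y, w, h = max(rects, key=lambda r: r[2] * r[3])       # biggest (closest) person
        error_px = (x + w / 2) - frame.shape[1] / 2              # offset from frame center
        turret.write(f"PAN {K_PAN * error_px:.2f}\n".encode())   # made-up command format
        if abs(error_px) < 20:                                   # roughly on target
            turret.write(b"FIRE\n")                              # dump air via the solenoid
```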

[Leo] really put a ton of work into this project, and the results show. He is hoping to get an art gallery or museum to show it as an interactive piece to comment on one possible robot-human future, presumably after getting guests to sign a release. Whatever happens to it, the robot looks great and [Leo] learned a lot from it, as did we.

Using Smartphone Cameras To Make Sure Drivers Are Looking At The Road

Most of us are probably quite aware of the damage a car can inflict when driven by a distracted driver. In an ideal world, people driving a car would not let something like their phone distract them from their primary task: piloting a vehicle that weighs a metric ton or more.

Many smartphone apps as well as in-car infotainment systems have added features over the years that try to prevent a driver from using them, but they run into the issue that it’s hard to distinguish between passenger and driver. As it turns out, asking the human driver whether they are the driver doesn’t always get the expected result. This is where [Rushil Khurana] and his team at Carnegie Mellon University (CMU) have come up with a more fool-proof approach.

In their paper (PDF), they cover the algorithm and software implementation that uses the smartphone’s own front (selfie) and back cameras to determine, from views of the car’s interior, which side of the car the user is sitting on, and to deduce from that whether the user is in the driver’s seat. From there it is a fairly safe assumption that if the user is sitting in the driver’s seat and the car is moving, this user should not be looking at the phone’s screen.
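The clever part, of course, is the classifier that tells the left seat from the right using the two camera views; the decision logic wrapped around it is straightforward. A sketch of that outer logic only, with the CMU algorithm reduced to a placeholder:

```python
# Sketch of the surrounding decision logic only; the hard part -- classifying which
# side of the car the phone is on from the front/back camera frames -- is the paper's
# contribution and is reduced to a placeholder here.
from enum import Enum

class Seat(Enum):
    LEFT = "left"
    RIGHT = "right"

def classify_seat_side(front_frame, back_frame) -> Seat:
    """Placeholder for the CMU algorithm: infer seat side from the two interior views."""
    raise NotImplementedError

def should_lock_ui(front_frame, back_frame, speed_mps: float,
                   driver_side: Seat = Seat.LEFT) -> bool:
    """Lock the phone's UI if the user appears to be in the driver's seat while moving."""
    if speed_mps < 2.0:                 # parked or crawling: leave the UI alone
        return False
    return classify_seat_side(front_frame, back_frame) == driver_side
```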

In a test involving 16 different cars and 33 users, they achieved an overall accuracy of 94% with the phone held in the hand, and 92.2% while docked. This is more reliable than the other approaches covered in the paper, and as a benefit does not require any extra hardware. Who knows, upcoming smartphones may include a feature like this, so that apps can easily determine what feature set should be made available to a driver, if any.

Machine Learning Algorithm Runs On A Breadboard 6502

When it comes to machine learning algorithms, one’s thoughts do not naturally flow to the 6502, the processor that powered some of the machines in the first wave of the PC revolution. And one definitely does not think of gesture recognition running on a homebrew breadboard version of a 6502 machine, and yet that’s exactly what [Nick Bild] has accomplished.

Before anyone gets too worked up in the comments, we realize that [Nick]’s Vectron breadboard computer is getting a lot of help from other, more modern machines. He’s got a pair of Raspberry Pi 3s in the mix, one to capture and downscale images from a Pi cam, and one that interfaces to an Atari 2600 emulator and sends keypresses to control games based on the gestures seen by the camera. But the logic to convert gesture to control signals is all Vectron, and uses a k-nearest neighbor algorithm executed in 6502 assembly. Fifty gesture images are stored in ROM and act as references for the four known gesture classes: up, down, left, and right. When a match between the camera image and a gesture class is found, the corresponding keypress is sent to the game. The video below shows that the whole thing is pretty responsive.
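The real classifier is written in 6502 assembly, but the k-nearest-neighbor idea fits in a few lines of Python, which makes it easier to see what the Vectron is doing: measure the distance from the current downscaled frame to each of the fifty stored references and take a majority vote among the closest matches. The image size, value of k, reference data, and key mapping below are all stand-ins.

```python
# The real classifier runs in 6502 assembly; this is the same k-nearest-neighbor idea
# in Python to show what the Vectron is doing. Image size, k, the reference data, and
# the key mapping are all stand-ins.
import numpy as np

# Stand-in for the 50 downscaled gesture images stored in ROM, one label per image.
references = np.random.randint(0, 2, size=(50, 16 * 16))
labels = np.random.choice(["up", "down", "left", "right"], size=50)

def classify(frame: np.ndarray, k: int = 3) -> str:
    """Classify a flattened, downscaled, thresholded camera frame by k-NN majority vote."""
    # Distance to every stored reference; on the 6502 this is a byte-wise sum of
    # absolute differences accumulated in a loop.
    dists = np.abs(references - frame).sum(axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Map the winning gesture class to a keypress for the emulator (illustrative mapping).
keypress = {"up": "w", "down": "s", "left": "a", "right": "d"}
print(keypress[classify(np.random.randint(0, 2, size=16 * 16))])
```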

In our original article on [Nick]’s Vectron breadboard computer, [Tom Nardi] said that “You won’t be playing Prince of Persia on it.” That may be true, but a machine learning system running on the Vectron is not too shabby either.

Machine Vision Keeps Track Of Grubby Hands

Can you remember everything you’ve touched in a given day? If you’re being honest, the answer is, “Probably not.” We humans are a tactile species, with an outsized proportion of both our motor and sensory nerves sent directly to our hands. We interact with the world through our hands, and unfortunately that may mean inadvertently spreading disease.

[Nick Bild] has a potential solution: a machine-vision system called Deep Clean, which monitors a scene and records anything in it that has been touched. [Nick]’s system uses Jetson Xavier and a stereo camera to detect depth in a scene; he built his camera from a pair of Raspberry Pi cams and a Pi 3B+, but other depth cameras like a Kinect could probably do the job. The idea is to watch the scene for human hands — OpenPose is the tool he chose for that job — and correlate their depth in the scene with the depth of objects. Touch a doorknob or a light switch, and a marker is left on the scene. The idea would be that a cleaning crew would be able to look at the scene to determine which areas need extra attention. We can think of plenty of applications that extend beyond the current crisis, as the ability to map areas that have been touched seems to be generally useful.

[Nick] has been getting some mileage out of that Xavier lately — he’s used it to build an AI umpire and shades that help you find lost stuff. Who knows what else he’ll find to do with them during this time of confinement?
