Christmas Lights Version 2

This project is a continuation of my Christmas Lights project.

Overview

My Christmas tree lights project involved playing video on a set of addressable Christmas tree lights.

The program first determines the relative positions of the lights by turning on each light in sequence and using a camera to see where it is. This generates a list of coordinates, which is stored on disk. Then, for each frame of a video, the program resizes the frame to fit the light positions and assigns each light the color at the corresponding point on the frame. Finally, it displays the frames on the lights in sequence.
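As a rough sketch of the per-frame mapping (the function and variable names here are my own illustration, not the repo's), the resize-and-sample step might look like this:

```python
import cv2

def frame_to_colors(frame, positions):
    # positions: the calibrated list of (x, y) pixel coordinates
    width = max(x for x, y in positions) + 1
    height = max(y for x, y in positions) + 1
    # stretch the frame so every light position falls inside it
    resized = cv2.resize(frame, (width, height))
    # OpenCV stores pixels as BGR; reverse each sample to get RGB
    return [tuple(int(c) for c in resized[y, x][::-1]) for x, y in positions]
```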

The lights were driven by a Raspberry Pi with little compute capacity. To increase speed, the work was split between my laptop and the Pi. The Pi would calculate the positions. Using the positions, my laptop would calculate the colors for each frame. Using the colors for each frame, the Pi would display the images. The laptop was connected to the Pi via SSH over my home network, allowing me to play video from anywhere in my home.
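A hedged sketch of that split (the file names, CSV position format, pin, and frame rate are illustrative assumptions, not the repo's actual conventions): the laptop precomputes one color list per frame, and the Pi only has to push colors to the string.

```python
# Laptop side: precompute a color list per frame and copy it to the Pi.
import cv2
import numpy as np

positions = np.loadtxt("positions.csv", delimiter=",", dtype=int)
cap = cv2.VideoCapture("video.mp4")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame_to_colors(frame, positions))  # sketch above
np.save("colors.npy", np.asarray(frames, dtype=np.uint8))
# then, e.g.: scp colors.npy pi@raspberrypi.local:~/

# Pi side: playback is just pushing precomputed colors to the string.
import time
import board
import neopixel

pixels = neopixel.NeoPixel(board.D18, 500, auto_write=False)
for colors in np.load("colors.npy"):
    for i, c in enumerate(colors):
        pixels[i] = tuple(int(v) for v in c)
    pixels.show()
    time.sleep(1 / 30)  # roughly 30 fps
```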

Origins

My father wanted to buy a new set of Christmas lights. While looking for some online, I found a set that was addressable, and my mind immediately imagined a Christmas tree that also functioned as a video display. For Christmas that year, I got 500 LED lights to play with.

Challenges

At first, I just tried to get the lights up and running. However, they were not addressing properly: I would write code for one LED to turn on, and four would light up. After doing some research, I found out that the Raspberry Pi outputs data signals at 3.3V, while the lights can only reliably recognize a data signal at 5V. I used a level shifter to transform the 3.3V signal into 5V.
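For reference, the minimal kind of test that misbehaved, written with the CircuitPython NeoPixel library from the guide cited in Sources (the data pin D18 is an assumption; use whichever pin your level shifter feeds):

```python
import board
import neopixel

# 500 lights on one data line, driven from the Pi through the level shifter;
# auto_write pushes each change to the string immediately
pixels = neopixel.NeoPixel(board.D18, 500, auto_write=True)
pixels[0] = (255, 255, 255)  # light only the first LED, full white
```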

Now that the lights were working, I started to think about how I would play video. Unlike the pixels of a regular computer screen, lights on a Christmas tree are scattered essentially at random. I found that the best plan was to remove the randomness by determining the positions, then overlaying the image, and finally displaying the corresponding color on each light.

After I was done coding, I tested the system with the lights on a table. I found that the camera was not able to determine the positions of the lights: it would take a photo of one bright LED and determine the brightest spot of the image to be in a completely different place. I realized that this could be resolved by applying a Gaussian blur to the image before finding the brightest pixel. Bright noise pixels are usually surrounded by dim pixels, so blurring lowers their brightness; bright pixels showing LEDs are usually surrounded by other bright pixels, so they stay at a similar intensity. This allowed me to find accurate positions for the lights. I even ran trials to statistically determine the best Gaussian blur radius for the lights.

Finally, after all the kinks were worked out, I put the lights on the tree and admired my creation. I found that the best use of the lights was to play cool designs in the background as mood lighting, since a video can be repeated indefinitely.

Code

All my code can be found here: https://github.com/blucardin/ChirstmasLights3

My favourite code snippet is lines 46-55 of “detectingPositions/detect_lightAllnewHighRes.py”.

This code reads a frame from the camera, converts it to grayscale, applies a Gaussian blur of radius GaussianRadius, and finally finds the location of the pixel with the greatest intensity (maxLoc).
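In outline, the routine looks like this (a sketch reconstructed from the description above, not the verbatim repo code):

```python
import cv2

GaussianRadius = 15  # blur radius; 15 performed best in my trials (see below)

cap = cv2.VideoCapture(0)  # the camera watching the lights
ok, frame = cap.read()     # one frame, taken while a single LED is lit
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# GaussianBlur needs an odd kernel size, so radius r becomes a (2r+1)x(2r+1) kernel
blurred = cv2.GaussianBlur(gray, (2 * GaussianRadius + 1, 2 * GaussianRadius + 1), 0)
# minMaxLoc returns (minVal, maxVal, minLoc, maxLoc); maxLoc is the light's position
_, _, _, maxLoc = cv2.minMaxLoc(blurred)
```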

I wanted to show off this section of the code because of its simplicity, utility, and ingenuity. This routine for determining the positions of the lights is the core of the project. These few lines took a lot of head-scratching to come up with, and their Gaussian blur solved a lot of my problems by greatly increasing the accuracy of the generated positions.

I can manually determine the position of a light by displaying the camera image on the screen and clicking on the light in the image.

We can compare a manually generated position with an autogenerated position by calculating the Euclidean distance between the two points using the Pythagorean theorem.
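For example (the coordinates below are made up), the comparison reduces to the distance formula:

```python
import math

def position_error(manual, auto):
    """Euclidean distance between a hand-clicked point and a detected maxLoc."""
    (x1, y1), (x2, y2) = manual, auto
    return math.hypot(x2 - x1, y2 - y1)

print(position_error((412, 230), (415, 226)))  # 5.0 pixels
```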

After running this snippet of code with several Gaussian radius sizes and comparing the results against 50 manually generated positions on the same set of lights, we get the graph below.

[Graph: error between autodetected and manually selected positions, for various Gaussian blur radii]

This shows that, under testing conditions, a Gaussian blur of radius 15 produced points almost identical to those a human would pick out, with relatively low variability.

Faults

The system worked amazingly well for high-contrast or high-saturation images that did not require much resolution. However, when an image was low-contrast, low-saturation, or needed lots of detail to understand, the lights did not display it nicely. I think this comes from the lights themselves not having completely accurate color mixing, as they were made for high-saturation colors. The LEDs also blur into each other, dulling contrast. And of course, with only 500 pixels randomly distributed on a tree, detail is limited.

Determining the positions of the lights also required relatively low ambient light to be accurate. If there were any sources of light in the image other than the LEDs themselves, the positioning system would break down. If I did the project again, I would experiment with different means of determining the positions, perhaps a machine learning model that detects the LED instead of relying on raw brightness values.

Applications

Other than looking cool and being an interesting conversation piece on your tree, this project has some useful applications.

First, as a teaching tool. These lights combine electronics, software, image analysis, and computer networking in an easy-to-understand, interesting-looking package. I have used this project to show beginner programmers what you can do with code. Plus, the lights' interface is so easy to use that even a Python beginner can make cool designs with it, as the sketch below shows.
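For instance, a beginner's first animation can be as short as this red chase (same assumed wiring and pin as the snippet above):

```python
import time
import board
import neopixel

pixels = neopixel.NeoPixel(board.D18, 500, auto_write=False)

# one red pixel sweeps down the string, over and over
while True:
    for i in range(500):
        pixels.fill((0, 0, 0))   # clear everything
        pixels[i] = (255, 0, 0)  # light the current pixel red
        pixels.show()
        time.sleep(0.01)
```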

Second, in concert wristbands. At concerts, concertgoers are given wristbands that light up in different colors to create displays across the crowd. Current methods are imprecise, lighting up large sections of wristbands with the same color. Imagine if each wristband were equipped with an infrared LED that a camera could locate before the show. A command center could then calculate the color of each wristband and update it over radio signals to show an image. Using a method similar to my Christmas tree's, artists could use their crowd as a display.

[Image: concert LED light wristbands, from the Wall Street Journal video cited below]

Sources

To learn how to operate the lights and connect them to my Raspberry Pi, I used Abigail Torre’s article titled “NeoPixels on Raspberry Pi”: https://learn.adafruit.com/neopixels-on-raspberry-pi/python-usage. The information in this article was used only for connecting to the lights and for usage of the CircuitPython NeoPixel library. I wrote all the code for the project myself.

I used the Open Source Computer Vision (OpenCV) library for working with images (taking a photo, blurring it, finding the brightest pixel, etc.). I had prior knowledge of the library and used its documentation here: https://docs.opencv.org/4.x/index.html

Information and images of concert lights came from the Wall Street Journal video entitled “The Tech Behind How Concert LED Light Wristbands Work”: https://www.wsj.com/video/series/tech-behind/the-tech-behind-how-concert-led-light-wristbands-work/EAA54145-D07A-4100-8153-2EAF8D671921