Data Through Design 2022
Data Through Design (DxD) is a collective in New York City that organizes an annual data art exhibition of projects built on data about the city published through the NYC OpenData project. This project was included in the 2022 exhibit, Ground Truth IRL.
Dataset
One of the datasets available on OpenData is NYC Parks Forever Wild, which maps wildlife zones and ecologically important areas across 138 parks in NYC. The boundaries of each park’s wildlife zones are abstract graphical shapes, which seemed like a natural fit for a robotic tufting system that can fill areas with yarn.
Data Processing
The data came in the form of GeoJSON files. GeoJSON is a standard format for encoding geographic features as JSON. To process the data, I first split the dataset into a single GeoJSON file per park.
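The split can be done with a short Python script along these lines (a sketch only; the "park_name" property key is a placeholder, since the actual attribute name in the Forever Wild GeoJSON may differ):

    import json
    from collections import defaultdict
    from pathlib import Path

    # Group features by park name, then write one FeatureCollection per park.
    # NOTE: "park_name" is a placeholder; the real property key in the
    # Forever Wild GeoJSON may be different.
    with open("forever_wild.geojson") as f:
        dataset = json.load(f)

    parks = defaultdict(list)
    for feature in dataset["features"]:
        parks[feature["properties"]["park_name"]].append(feature)

    out_dir = Path("parks")
    out_dir.mkdir(exist_ok=True)
    for name, features in parks.items():
        collection = {"type": "FeatureCollection", "features": features}
        # Sanitize the park name so it is safe to use as a filename.
        safe_name = "".join(c if c.isalnum() else "_" for c in name)
        (out_dir / f"{safe_name}.geojson").write_text(json.dumps(collection))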
This allowed me to use it with vvzen’s Houdini-Geospatial-Tools, which creates a Houdini polygon primitive from a single GeoJSON file.
Houdini is a software application for work that is common in computer-generated visual effects, like procedural modeling and simulation. It provides a node-based programming environment for manipulating geometry. Off-the-shelf nodes are provided for common operations like moving geometry around, simulating particles, and adding noise, but there are also nodes that let you write arbitrary programs to create any operation you might want. By combining nodes into networks, very complex procedural workflows can be developed.
When working in Houdini I wanted to have one network per park so that I could tweak each one individually.
For each park I created a canvas polygon scaled to match the 16’x20’ monk’s cloth I was planning to use for my output. The canvas polygon was formed out of horizontal rows; later on, the map of the yarn to be tufted would be formed by removing sections of these rows.
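As a rough sketch, the row-based canvas could be built in a Houdini Python SOP like this (the row count and sampling resolution are illustrative placeholders, not the values used for the final piece):

    # Houdini Python SOP: build a canvas out of horizontal polyline rows.
    # Inside a Python SOP the `hou` module is available automatically and
    # node.geometry() is the writable output geometry.
    node = hou.pwd()
    geo = node.geometry()

    width, height = 16.0, 20.0   # canvas proportions, matching the cloth
    rows = 200                   # number of tufting rows (placeholder)
    points_per_row = 400         # sampling resolution along each row

    for r in range(rows):
        y = height * r / (rows - 1)
        poly = geo.createPolygon()
        for c in range(points_per_row):
            x = width * c / (points_per_row - 1)
            pt = geo.createPoint()
            pt.setPosition(hou.Vector3(x, y, 0.0))
            poly.addVertex(pt)
        poly.setIsClosed(False)  # keep each row an open polyline, not a face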
I moved the park polygon loaded from the GeoJSON data to the origin and scaled it to fit within the canvas. Then the shape was extruded to increase the threshold for intersecting with the canvas polygon.
Using these intersections, I created a group called “Inside” for the points in the canvas polygon that fell within the wildlife shape.
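Outside of Houdini, the same inside/outside classification can be sketched with a standard even-odd ray-casting test; this stands in for the extrusion-and-intersection test actually done in the network:

    def point_in_polygon(x, y, polygon):
        """Even-odd ray-casting test: cast a ray in +x and count crossings.

        `polygon` is a list of (x, y) vertices. Returns True when (x, y)
        falls inside the shape.
        """
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # Does this edge straddle the horizontal line through y?
            if (y1 > y) != (y2 > y):
                # x-coordinate where the edge crosses that horizontal line.
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    print(point_in_polygon(2, 2, square))  # True
    print(point_in_polygon(5, 2, square))  # False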
With this group set, I could now assign a new attribute to each point describing the direction of the tool tip. I wanted the robot to move row by row from left to right, so I assigned the direction values while walking along the rows in the canvas polygon (see the sketch after this list). The values that I assigned were:
- 0 if it’s in the same state as the previous point
- 1 if it’s entering the wildlife shape
- -1 if it’s exiting a wildlife shape
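Walking a row left to right, the assignment reduces to comparing each point’s inside flag with the previous one. A minimal sketch, assuming one list of boolean inside flags per row:

    def direction_values(inside_flags):
        """Assign the direction attribute along one row, left to right.

        inside_flags: list of booleans, True when the point is in the
        "Inside" group. Returns 1 on entering the wildlife shape, -1 on
        exiting it, and 0 when the state is unchanged.
        """
        values = []
        previous = False  # each row starts outside the shape
        for flag in inside_flags:
            if flag == previous:
                values.append(0)
            elif flag:
                values.append(1)   # entering the wildlife shape
            else:
                values.append(-1)  # exiting the wildlife shape
            previous = flag
        return values

    print(direction_values([False, True, True, False]))  # [0, 1, 0, -1]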
Using this attribute, I could now create edge primitives wherever the attribute value changes. These edges correspond to the lines that will be tufted by the robot.
Since the robot motion planner takes in a series of points and does the inverse kinematics for us, it didn’t need all of the points as input. It just needed a command at each endpoint telling it where to go down to tuft and where to come up to stop. So all of the interior points were deleted, leaving only the endpoints.
Finally, these endpoints were exported as a CSV file containing the point ID, position, and command.
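A sketch of that export in Python (the column names and the "down"/"up" command encoding are assumptions; all that is specified is that the file carries point ID, position, and command):

    import csv

    # `endpoints` is assumed to be a list of (point_id, (x, y, z), command)
    # tuples, where the command might be "down" (start tufting) or "up" (stop).
    endpoints = [
        (0, (1.2, 0.0, 0.0), "down"),
        (1, (3.8, 0.0, 0.0), "up"),
    ]

    with open("tuft_path.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "x", "y", "z", "command"])
        for point_id, (x, y, z), command in endpoints:
            writer.writerow([point_id, x, y, z, command])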
The next step will be to write a program that reads the data from this CSV file and passes it along to the motion planner, and from there out to the actual robot to tuft the map!
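That program could start as little more than the following; the MotionPlanner class here is entirely hypothetical, standing in for whatever API the real planner exposes:

    import csv

    # Hypothetical stand-in for the real motion planner's interface;
    # it only shows the intended data flow from CSV to robot commands.
    class MotionPlanner:
        def move_to(self, x, y, z):
            print(f"move_to({x}, {y}, {z})")

        def set_tufting(self, on):
            print(f"tufting {'on' if on else 'off'}")

    planner = MotionPlanner()
    with open("tuft_path.csv") as f:
        for row in csv.DictReader(f):
            # Move to the endpoint first, then change the tool state, so
            # yarn is laid down between a "down" point and the next "up".
            planner.move_to(float(row["x"]), float(row["y"]), float(row["z"]))
            planner.set_tufting(row["command"] == "down")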