Knollbot™ Preproduction

Sketching the Design

Like Athena from Zeus, the Knollbot sprang, fully formed, from my head.


I did take the time to do a sketch-based design exploration to work out other potential form factors and mechanics, although it was apparent that the original form was most likely the right one to proceed with.

Refining the Concept

Based on the feedback from play testing, it was clear that people liked the idea but thought it lacked a clear interactive element. True. The core concept is a silly bit of automation: knolling objects so you don’t have to. By definition it lacks a certain degree of interactivity.

I thought about Knollbot’s interaction cycle and it occurred to me that while there isn’t an immediate feedback loop, there is a definite feedback loop over time. Say Knollbot is in your house. You get home from work, toss your stuff on the desk and go about your business. Now it’s Knollbot’s turn to get to work. The next time you go to grab your stuff, it’s perfectly knolled. You take your stuff, come home, drop it on the table again, and Knollbot does its thing.

In this case, the “interaction” aspect of the object is purposefully subtle.

Project Plan

There are two major systems at work (and a few more sub-systems): the computer vision and the mechanical robotic action. I’ll tackle the computer vision system while I wait for the mechanical parts to arrive.

System Diagram


Bill of Materials

https://docs.google.com/spreadsheets/d/1iTZihisGcKGrZxAlQHiKuHJl-mSAEfFzB5ytI0lKEqI/edit?usp=sharing

Timeline

Nov 4 – 11

  • Finalize schematic
  • Order materials
  • Gather programming resources
  • Begin programming

Nov 12 – 18

  • Programming, cont’d
  • Gather non-material build resources

Nov 19 – 25

  • Programming, cont’d
  • Build
  • Troubleshooting

Nov 26 – Dec 2

  • Troubleshooting

Dec 3 – Forward

  • Business plan
  • Market research
  • Branding
  • Hire a CMO
  • Take meeting with President Obama about his “messy-ass desk”

PComp Final: Concept Development

A Short History of Knolling

You know knolling. Even if you don’t know you know knolling, you do know knolling.

You’ve seen it in Wes Anderson films. It’s given you pangs of jealousy on those übercool “inspiration” blogs dedicated to “curating” your “lifestyle”. And I think most illustrative of its awesome power is Bullet VIII in artist Tom Sachs’ infamous 10 Bullets workshop manual.

Knolling is the organizational technique of placing all items on a surface at 90º angles to each other and to the surface itself. 

Knolling as Philosophy

Sure. It looks great in those little square Instagram pictures. And there’s no denying that it’s functional. But unless you’re actually working for Tom Sachs (which you’re decidedly not), who’s got the time to sit there and measure out perfect right angles and place each item down on the grid, one by one, side by side, being extra careful to leave consistent padding around each object? The answer is no one. Except “lifestyle bloggers” (but you’re decidedly not one of them either).

So then, you’re thinking: what’s the point?

The point is that knolling is an exercise in organization. And an organized workspace makes for organized thoughts.

One Problem

In reality, you can’t really knoll anything. A perfectly knolled table is impossible to achieve.

Hey, champ. Chin up. Don’t get upset. It’s not you. You didn’t do anything wrong. It’s actually much much bigger than any one person. If we’re gonna blame someone, we should probably blame Greek philosopher Plato.

I don’t really want to get into all the fuzzy details of Platonic Idealism, but what it comes down to is that the idea of knolling exists as a perfect, albeit separate, entity from any individual instance of knolling in reality. Even the most beautifully, painstakingly knolled table has failed to achieve perfection in the required form (e.g. 90º angles, parallel lines, universally constant padding, etc…). It can’t. It is physically impossible to obtain the philosophically ideal version of a knolled table.

Or, it was physically impossible. Until now.

For my Physical Computing final I’m developing a robot that will bring us one step closer to the Platonic Ideal of knolling. I’m going to make my desk so organized that my thoughts will be so organized that I will be able to see simultaneously into the future and the past.

Introducing: Knollbot™

Knollbot™ is a robot that knolls.

Knollbot™’s state-of-the-art robot mechanics and computer vision algorithm guarantee a perfectly knolled table, every time. And we’re not just talking about a “perfectly” knolled table. We’re talking about the Platonic Ideal of a knolled table*.

In the next post, I’ll have some initial sketches and more information about the development process.

*You know, assuming I get my shit together and build this. And that it works and stuff.

PCOMP Midterm II: Prototyping & Build

This is the second in a series of posts on my Physical Computing Midterm project. The first one on Ideation and Concept Development is here.

Disclaimer: I still do not own any of the rights to any Jurassic Park properties; imagery, sound, video, or otherwise. Please don’t sue me. This project is still for academic purposes only.

Sensors and Control

Ok. We’ve got the idea: create a book that controls the movie version of itself. (The obvious choice being Jurassic Park.)

From the start we wanted to use a flex sensor attached to the cover of the book as a switch. Open the book, play the movie. Close the book, pause or stop.

We also decided that an FSR (force sensitive resistor) would be a great way to determine where in the book the user is by reading the weight of the pages lying flat. The user flips through the book, the weight of the pages lying on the back cover changes, and the movie fast-forwards or rewinds accordingly. It’s a really natural, pleasurable motion associated with reading that we’re mapping to the movie.

Setting Up the Sensors

We bought a few books to mess around with. It became obvious very quickly that a hardback was the way to go. We picked out a particularly boring-looking hardback that no one would miss and we got to destroying it.

We attached the sensors and wrote a short program to get them connected through the serial port. This way we could test them in a real environment and see what range of readings they were giving us.
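For anyone following along, a minimal version of that kind of test sketch might look something like this. The pin assignments (and the voltage-divider wiring they imply) are assumptions for illustration, not our exact circuit:

// Minimal test sketch: read a flex sensor and an FSR (each wired
// as a voltage divider into an analog pin) and print the raw
// values so we can watch the ranges in a real environment.
// Pin numbers are assumptions, not our exact wiring.

const int flexPin = A0;
const int fsrPin  = A1;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int flexReading = analogRead(flexPin); // 0-1023
  int fsrReading  = analogRead(fsrPin);  // 0-1023

  Serial.print("flex: ");
  Serial.print(flexReading);
  Serial.print("  fsr: ");
  Serial.println(fsrReading);

  delay(100); // slow the stream down enough to read
}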

Prototype

We found that while the flex sensor worked like a charm, the FSR was having trouble picking up the weight of the book. This was the first of a few ongoing issues with the FSR. Because the weight of the book is distributed over a much larger area than the sensor, there was very little force on the sensor itself. In order to concentrate the weight of the book in a more focused area we implemented what we’re calling “the Princess and the Pea” solution, which you’ll see below.

Now that the sensors were sensing we started thinking about the larger user experience. We wanted to create a stand for the book, something that would situate the book/sensor system for the user and look good doing it.

I had some big ideas that involved a scale model of the Jurassic Park gates.


For the sake of prototyping and troubleshooting we created a box first.

We actually had a bit of trouble with this setup. The FSR was getting good, strong readings, but they were constantly changing and the book wobbled. We decided it was a better idea to put the FSR in the book itself, on the inside back cover, and add a little hex nut or something small and hard to the back page as the “pea.”

Code

The Processing code was relatively simple. The video library makes it easy to play, pause, jump, etc. All the basic actions we were looking for. The trouble was getting a single reading off the FSR. We found (not surprisingly) that as pages turned the FSR would give us a constant stream of new readings and the movie would constantly change frames, making it impossible to jump to a time and play the movie from there. Yining tackled this problem by introducing thresholds. If the FSR was within a range of numbers, the code told Processing to pick a nice round number and use that. To skip forward or back, the new reading had to be outside the set range. This basically solved the problem, but it requires the user to flip a bunch of pages to effect the change.

I think this code could certainly be refined, maybe even on the Arduino side of things.
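For illustration, here’s a rough sketch of what that Arduino-side refinement might look like: quantize the FSR reading into buckets and only send a value when the bucket changes. This is a guess at an approach, not our actual code, and the pin number and bucket size are made up:

// Sketch of the threshold idea moved onto the Arduino (an
// assumption, not our actual code): quantize the FSR reading into
// buckets and only send a value when the bucket changes, so
// page-flipping noise doesn't constantly scrub the movie.

const int fsrPin = A1;       // assumed pin
const int bucketSize = 100;  // assumed range per "page position"
int lastBucket = -1;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(fsrPin);
  int bucket = reading / bucketSize; // snap to a nice round number

  if (bucket != lastBucket) {
    // Only report on a real change, so Processing only jumps
    // the movie when the reader has actually moved on.
    Serial.println(bucket * bucketSize);
    lastBucket = bucket;
  }

  delay(50);
}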

The Build

We created a simple bookstand emblazoned with the iconic logo. Wired the sensors through the book and into the stand. And once everything was hooked up, we had a working build.


In the next post I’ll have a more polished wrap up.

PCOMP Midterm : Ideation & Concept Development

This is the first in a series of posts detailing the process of developing and creating my Physical Computing midterm alongside Yining Shi.

Disclaimer: I do not own any of the rights to any Jurassic Park properties; imagery, sound, video, or otherwise. Please don’t sue me. This project is for academic purposes only.

Ideation

Let’s say we had three weeks to complete this assignment. That means we spent the first two weeks coming up with ideas. I’ve got a notebook full of nearly incomprehensible sketches of little candles in boxes that was initially the direction we were headed in until we a) found out that flaming candles are a no-no on the floor at ITP and b) decided we didn’t actually love the idea.


So after a 6+ hour hangout, walkabout, and sit down brainstorming session we finally hit on something we both liked.

The Book Remote

I guess by this point in the night I was pretty tired. And when I’m tired I start thinking about the classic 1993 science fiction/adventure film Jurassic Park. Directed, of course, by virtuoso Steven Spielberg with a story by none other than the king of the science fiction/adventure genre, Michael Crichton. Why do I think about Jurassic Park? Probably because it gets me amped up. The very thought of “An Adventure 65 Million Years in the Making” is enough to give me energy for days. And imagining fighting for my life against a pack of highly intelligent Velociraptors? I mean, come on. Tell me you don’t want to just punch through a wall right now.

So there I am thinking about Jurassic Park. I’m thinking about how much I enjoyed the book and the movie. The book. The BOOK!

I said to Yining, “We should make a physical version of the novel Jurassic Park that controls the movie Jurassic Park!”

“What’s Jurassic Park?”

“…”

This was a dark moment for me. After I made her watch the movie and read the book we reconvened. She liked the idea and we sketched the concept out.

Initial concept sketch
Slightly more detailed concept sketch

Now that we had a concept we began to experiment and prototype. Which I’ll cover in the next post.

Serial Communication

Lab 1: Serial to Processing

Here we’re starting with the basics of reading sensor data through a serial connection (USB in this case). Then we write a Processing sketch that reads from the serial port and outputs a graph of the sensor readings. The graph is an arbitrary visualization. It could be any type of visual that you want, as you’ll see in the second lab.

I used one of the Arduino kit potentiometers for my first setup.
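The Arduino side of this lab is about as simple as serial communication gets. Something along these lines, assuming the pot’s wiper is wired to A0:

// Read the potentiometer and send the value out over serial for
// the Processing sketch to graph. Assumes the pot's wiper is
// wired to A0.

const int potPin = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(potPin); // 0-1023
  Serial.println(reading);          // one value per line
  delay(10);
}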


It worked well enough. But something strange was happening: every once in a while I would get nothing as a reading. A string of random characters and then nothing. Sometimes I could wiggle the sensor and the readings would return, sometimes not. I think this might be on account of these kit sensors being kind of crap.

Next I decided to try out a new FSR (Force Sensing Resistor). That worked and there were no strange readings. Also, what’s nice about the FSR is that it creates really smooth curves. The growth and falloff from each high pressure input plots a nice slope, with no jumpiness.

Lab 2: Multisensor Serial Communication

In this lab we’re setting up several input devices at once and creating a parse-able string. (Sidenote: if someone is reading this and any terminology is incorrect, please let me know so I don’t look like an ignorant asshole. Thanks so much.) All that means is that we’re reading several inputs at once and telling Arduino and Processing how to break up the string of values in order to get individual readings from each sensor.

I used an FSR, a kit pot, and a toggle switch.

I had no trouble getting this up and running on Arduino and getting the Processing sketch to run, but I couldn’t get the sensor to talk to Processing. It was perplexing. Perplexing and not vexing because some aspects of the whole system were working. Arduino was giving me beautiful reading after beautiful reading.

FSR: 123, Pot: 990, Switch: 0
FSR: 130, Pot: 870, Switch: 0
FSR: 150, Pot: 720, Switch: 0
FSR: 266, Pot: 103, Switch: 1

And so on…but Processing was giving me nothing. Occasionally an “Error, disabling serialEvent() for [the port name]      null,” but I figured that was just a connection problem. That wasn’t the problem, because Arduino was still showing readings. It took some time before I realized that the strings weren’t supposed to include the sensor names: “FSR,” “Pot,” “Switch.” I had added them in for clarity, but they were screwing with the string being properly parsed. I removed those extra parts of the string and I was up and running.
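In case anyone else hits the same wall: the string Processing wants to split is just bare values and commas, one reading-set per line. On the Arduino side that looks roughly like this (pin choices are assumptions):

// Send the three readings as a bare comma-separated string --
// e.g. "123,990,0" -- with no labels, so the Processing sketch
// can split on commas and parse it cleanly. Pins are assumptions.

const int fsrPin    = A0;
const int potPin    = A1;
const int switchPin = 2; // assumes an external pull-down resistor

void setup() {
  Serial.begin(9600);
  pinMode(switchPin, INPUT);
}

void loop() {
  Serial.print(analogRead(fsrPin));
  Serial.print(",");
  Serial.print(analogRead(potPin));
  Serial.print(",");
  Serial.println(digitalRead(switchPin)); // newline ends the "packet"
  delay(100);
}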

Questions

  • As far as the code goes, I understand in theory what’s going on and I can point to which block of code does what, but I need a little explanation of some of the syntax in each of the programs.
  • Why did I sometimes get “Error, disabling serialEvent() for [the port name]      null” and sometimes not?
  • In the second lab, why aren’t we using different variables for each sensor?
  • In the Handshake method, the first “hello” is always misspelled. Why is that?
  • What am I going to have for breakfast now that this lab is done?

P-Comp: Observe and Report

This week we were asked to get out into the world to observe people’s natural behaviors, tendencies, inclinations, frustrations and pitfalls interacting with some…thing.

Mic Check

A microphone is an interactive, digital tool.

Interactive because it’s a physical device into which we input information or action (in this case speaking) and it returns information or feedback (hopefully not the deafening, high-pitched kind) to us. Tool because it takes a natural ability that we have and literally amplifies it. It augments both our vocal and aural abilities. Digital because it translates our analog audio signals into zeros and ones so that other parts of the system can use them.

It’s a common scene. A flustered speaker in the spotlight. Awkward silence as they turn the mic over in their hands. Hushed whispers off stage as a technician is summoned. The mic being passed like a race baton back and forth. The mic belly-up in someone’s hand, their fingernail scratching at the tiny nub of a button on the bottom.

This isn’t how microphones should operate. They should be as intuitive as speaking itself. Natural. Seamless. An extension of our thoughts. There are a couple simple fixes that would make the common handheld microphone easier to use.

One, a more obvious “on/off” indicator. Instead of a dim red LED at the polar opposite end from where you naturally hold it upright, maybe an LED ring on the handle, or one attached to the ring around the ubiquitous metal mesh that covers the sensitive electronics. The indicator should be obvious enough that as it’s passed from person to person (or as it’s approached from off stage) it is readily apparent whether it’s hot or not. An eye-catching indicator light might also help hold people’s attention longer, by naturally drawing people’s eyes to the speaker’s mouth. (This sounds weird, but I’ve always noticed that my hearing seems to improve when I’m listening to someone and watching their mouth form the words.)

Two, don’t make it rocket science to equalize the output. Yes, of course, there’s an art and a science to getting the sound right. But when we’re talking about a college lecture hall or something on a smaller scale than a Beyoncé concert, it makes sense that it should be as easy to operate as possible. Here I’m thinking about whatever audio tech has been shrunk down to fit into smartphones. There must be some auto-equalizing going on to make your voice come through clearly and not too loudly.

Using a microphone might not be the most common interaction in the world, but there’s still room for improvement.

P-Comp Lab: Servos & Speakers

Self Servo

Servos are small motors with a 180º angle of movement. They can be precisely controlled. Or so I’m told.

My servo circuit worked. Sort of. If I turned the control too fast, or as it reached the end of its range of movement, it started to wobble like a fat man on a kayak.

I’m not exactly sure why this happened, although I can posit that the potentiometer was the culprit, based on the serial communication feedback I was getting. At the high end of the potentiometer’s range (which was mapped to the high end of the servo’s range of movement) the numbers were jumping back and forth in a seemingly random pattern.

I’ll need to try this same setup with a more robust potentiometer and see what happens.
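In the meantime, one software-side thing I might try (just a hunch, not a verified fix) is averaging a handful of pot readings before they reach the servo, along these lines. Pins and sample count are assumptions:

// A guess at a software fix: average several pot readings before
// mapping them to the servo angle, so one noisy reading at the top
// of the range doesn't make the servo twitch. Pins are assumptions.

#include <Servo.h>

const int potPin = A0;
const int servoPin = 9;
const int numSamples = 8;

Servo myServo;

void setup() {
  myServo.attach(servoPin);
}

void loop() {
  // Average a handful of readings to knock down the jitter.
  long total = 0;
  for (int i = 0; i < numSamples; i++) {
    total += analogRead(potPin);
  }
  int smoothed = total / numSamples;

  int angle = map(smoothed, 0, 1023, 0, 179);
  myServo.write(angle);

  delay(15); // give the servo time to move
}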

Bringing the Noise

My little baby Arduino starter kit didn’t come with a speaker. So I went to the venerable Junk Shelf to see what I could find. (Spoiler alert: I found treasure.)

I can’t tell you exactly where on the Junk Shelf I found this windfall. That would break the promise I made to the wizard who showed me the way. I can tell you what I found: a wireless phone handset.


And tucked in the murky plastic shallows just beneath the outer layer I discovered a small speaker with clearly marked Power and Ground wires. Joy of joys! So I ripped it out, leaving a plastic carcass behind for the hands of fate/other ITPers to decide its ultimate destiny.


The lab itself was fairly straightforward. I hooked up a circuit and followed the instructions to write a program that told the speaker to emit a series of short tones.

Next I implemented two photodetectors as “controls.” After a little tweaking and much flailing I did find that using only one of the photodetectors gave a more consistent sound from the speaker.
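For reference, the single-photocell version behaves roughly like this sketch, with the sensor’s value mapped to the pitch of the tone. The pin numbers and the 200–900 input range are assumptions; check your own readings:

// Roughly what the lab builds up to (pins and the photocell's
// range are assumptions): read one photocell and map its value
// to the pitch of a short tone on the speaker.

const int sensorPin  = A0; // photocell in a voltage divider
const int speakerPin = 8;

void setup() {
  // nothing to set up; tone() handles the speaker pin
}

void loop() {
  // Clamp the reading so an out-of-range value can't map to a
  // nonsense frequency.
  int reading = constrain(analogRead(sensorPin), 200, 900);

  // Map the sensor's rough range to an audible frequency band.
  int pitch = map(reading, 200, 900, 120, 1500);

  tone(speakerPin, pitch, 20); // short 20 ms tone
  delay(25);
}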


At that point I also realized it was past 12:30am and whatever was emanating from my apartment must have sounded like I was trying to make love to an AM radio. So that’s where I ended this lab.

‘The Art of Interactivity’ by Chris Crawford

“I choose to define [interactivity] in terms of a conversation: a cyclic process in which two actors alternately listen, think, and speak.”

Interactivity as Conversation

I think Crawford’s definition of interactivity is swell. I also agree with his assessment that two (or more) actors are key for the process to work. However, I think there’s a more important factor tucked into that definition: “a cyclic process.”

Here’s where it really comes together for me. With a cycle we go from the more binary Action → Reaction (Speak → Listen; Listen → Think; Think → Speak, etc.) to a more complex network of causes and effects. Especially if the actors adjust their rhythm of listen, think, speak in accordance with outside factors.

The best conversations never follow a prescribed structure. It’s never listen, think, speak. The thinking is going on constantly. And speaking frequently slides into something closer to yelling. There’s an element of unpredictability to it. You might spiral off into a tangent, go off topic, only to find a new perspective on what you had just been discussing.

Interactivity should be fluid. And dynamic. It should feel natural. And it should allow and encourage discovery. Just like a great conversation.

An Aside About the “Interactivizing Step”

It’s painful when Crawford mentions the interactivizing step in the design process. Coming from an advertising background, it’s all too familiar to hear “how do we make this interactive?” As if it’s only a matter of sprinkling new, delicious and nutritious Interactivity Flakes® on whatever the idea is. When, in reality, all that’s actually being said is “how do we get people to click faster/stay longer/post on social media harder?”

If I hear “interacting with our brand” one more time, I’m going to throw a box of Interactivity Flakes® at a wall. According to Crawford, and, you know, common sense, it’s impossible to interact with a brand. A brand is the made-up face of an otherwise faceless company. A ghost putting on a sheet to answer the door.

There’s no conversation. No listen, think, speak. Only sell, sell, sell. (He’s eerily close to the future when he posits that laundry detergent boxes will soon proclaim “NEW! IMPROVED! INTERACTIVE!” Think what they/we did/let happen to organic.)

A Brief Reply to ‘A Brief Rant on the Future of Interaction Design’ by Bret Victor

‘Vision Matters’

Victor’s right. About a lot of things. But first about that Microsoft ‘Future Vision’ video at the top of his brief rant. It’s not all that visionary. It’s a design exploration at best. And a huge waste of money on motion graphics at worst.

(Although, actually, this was probably a marketing piece much more than any indication of what Microsoft R&D is working on, Future Vision-wise. It was probably designed by art directors in the marketing department and not user interface or user experience professionals. Which makes the video’s relevance moot. But, at least, it’s provided us with a really interesting place to start thinking about what we want from our tools [cue: echo SFX] of the Future.)

If movies are any indication, then futuristic has always meant simpler. And simpler has meant flatter and sleeker. As if everyone designing the future has a limited palette of ideas to work with. Eventually everything is pressed into one thin sheet of glass, and then that vanishes altogether and we’re left with some sort of floating projection to swat at. At one time these hardware and UI designs were visionary. But now you’ve been communicating with one of these visions for years.

Maybe it’s time to rethink simplicity. Is opening a jar simple? Yeah, it is. Of course…that’s opening a jar, not sending an email or color grading an image or sequencing music. But the idea translates: it’s simple, it’s a specific interaction to get a task accomplished, and it’s nowhere close to swiping a screen with a finger.

Giving Us The Finger

Victor makes the observation that almost all of these interactions have been reduced to the use of a finger. And really, just the tip of a finger.

“Hands feel things, and hands manipulate things.”

Why is this, when our bodies are so capable? We have an amazing ability to sense and decode feedback from the world around us. Weight, thickness, friction, density, balance, temperature, texture, etc. These are the world’s ways of transferring information.

The Proverbial Wallets project from the MIT Media Lab is a great example of using dynamic object properties to convey information about the world. It might seem odd, or like just an interesting exercise, but from Victor’s perspective an inflatable wallet is way more of a window into the future than the flat pieces of glass we’re used to.