As computing becomes more ubiquitous in our objects, designers need to be more aware of how to design meaningful interactions into electronically enhanced objects. At the University of Washington, a class of junior Interaction Design majors is exploring this question. These pages chronicle their efforts.

Monday, June 6, 2016

Xiao + Nicola's Blog Posts (3 – 9)

1) Processing Experiment
see post from April

2) Knolling Project
see post from April

3) Situation Description
Our device is intended to be set up in a somewhat noisy public space, like a cafe or a park (if it’s dry), where some environmental noise is occurring along with conversation. The option of printing roughly a minute’s worth of abstracted sound data might be a fun or interesting point of conversation for two or more people, or simply a cool take-away for someone alone in a noisy environment.

4) Sensing Description
The Arduino will read voltage measurements from an electret microphone, which will be interpreted as volume. After approximately every 60 seconds of sound detection, a photocell will be activated.
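One common way to turn the mic's analog signal into a volume number is the peak-to-peak swing over a short sampling window. Here is a minimal sketch of that calculation in plain C++ so it can run anywhere; the 10-bit ADC range (0–1023) and the idea of batching `analogRead()` values into a window are assumptions about the setup, not measured details:

```cpp
#include <algorithm>
#include <vector>

// Peak-to-peak amplitude over one window of ADC samples, standing in for
// an Arduino loop that would fill the window by calling analogRead() on
// the electret mic. Larger swings mean louder sound.
int peakToPeak(const std::vector<int>& samples) {
    if (samples.empty()) return 0;
    auto [lo, hi] = std::minmax_element(samples.begin(), samples.end());
    return *hi - *lo;
}
```

Each window's peak-to-peak value would then be stored as one "volume reading" toward the roughly-60-second batch.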

5) Logic Description
The Arduino makes a decision based on the photocell reading. This uses an if statement: if the photocell reading is less than X, where X is a threshold value between the readings produced when the photocell is covered versus uncovered. This statement is only evaluated once a certain number of volume readings have been taken, which is indicated to the users by a light turning on. There will also be a printout that explains the meaning of the light, so they know to cover the photocell when the light goes on (if they want a printout).
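The decision itself is just a comparison against a calibrated threshold. A minimal sketch, with 300 as a placeholder for X (the real number would have to be calibrated in the actual room):

```cpp
// Placeholder threshold between a typical "covered" reading (low) and an
// "uncovered" reading (high); 300 is an assumption, not a calibrated value.
const int THRESHOLD = 300;

// Returns true when the photocell reading suggests a hand is covering it,
// i.e. the user is asking for a printout.
bool wantsPrintout(int photocellReading) {
    return photocellReading < THRESHOLD;
}
```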

6) Actuation Description
If it is still light because no one waved at it, the photocell will stop reading, the light will turn off, and the Arduino will return to sensing and recording volume. If it is dark because someone did wave at it, the Arduino will tell the printer to start. When printing is done, volume recording begins again.
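The flow above amounts to a small state machine. A sketch of the transitions in plain C++ (the state names are ours, introduced for illustration, not taken from the actual code):

```cpp
// Three phases of the device: recording volume, waiting to see if someone
// waves at the photocell, and printing the receipt.
enum class State { Recording, AwaitingWave, Printing };

// After the light turns on we either see a wave (photocell goes dark) and
// start printing, or it stays light and we fall back to recording.
// When printing finishes, recording resumes.
State nextState(State current, bool photocellDark, bool printDone) {
    switch (current) {
        case State::AwaitingWave:
            return photocellDark ? State::Printing : State::Recording;
        case State::Printing:
            return printDone ? State::Recording : State::Printing;
        default:
            return State::Recording;
    }
}
```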

7) Prototyping Update 1
Our updated concept was that the table could detect the conversation content and print out a summary report in graphical form during a break in the conversation. We ordered two SparkFun electret microphone breakouts to sense voice volume; each microphone was supposed to detect the words from one person in the conversation. We also ordered the thermal receipt printer guts, thermal receipt paper, a power supply, and an adapter. We searched online for a speech recognition program and tested the speech recognition system on the Macintosh, but found it difficult to connect a speech recognition system to the Arduino.

We decided to change the concept to detecting the volume of speech and producing a graphical report based on that. We managed to create interesting digital graphs in Processing based on the speech volume detected by the microphone.
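As a toy version of the volume-to-graph idea, each volume reading could map to a row of characters sized to fit a receipt line. This is a simplified sketch, not our actual Processing code; the 32-character line width and the 0–1023 volume scale are assumptions:

```cpp
#include <string>
#include <vector>

// Turn a list of volume readings into a horizontal bar chart of '#'
// characters, one row per reading, scaled so the loudest possible
// reading (maxVolume) fills the full receipt width.
std::vector<std::string> volumeBars(const std::vector<int>& volumes,
                                    int maxVolume = 1023, int width = 32) {
    std::vector<std::string> rows;
    for (int v : volumes) {
        int len = (v * width) / maxVolume;
        rows.push_back(std::string(len, '#'));
    }
    return rows;
}
```

Sending rows like these to the thermal printer, one per line, would give a rough timeline of how loud the conversation was.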


8) Physical Housing Description
For the non-electronic part, we plan to use an existing table instead of building one, due to cost and time constraints. We are planning to attach the thermal receipt printer and microphone beneath the table surface. An opening in the table surface allows the receipt paper to feed up above the surface and the microphone to detect voices directly.

9) Prototyping Update 2
We found that the thermal receipt printer guts did not work at all for some reason, so we had to slow our progress while waiting for the delivery of a new printer. Unfortunately, the same issue happened with the second printer, which made us wonder if the problem was with a component other than the printer. After a couple of attempts at testing by swapping out other electrical components, we found that it was actually the power supply that was not working. After replacing the power supply, we successfully activated the thermal receipt printer. Connected to our latest code, the printer could print out visually attractive graphs made of random characters and numbers.
