Human Computer Interaction - Progress Log
- Week 2 - Consultation & kick off
- Week 3 - Flexi Dinosaur
- Week 4 - Milestone Prototype
- Week 5 - Public holiday but still cooking
- Week 6 - I'm cooked
- Week 7 - Not what I want
- Week 8 - Added a bunch of stuff
- Week 9 - Hand crank flashlight
- Week 10 - I miss Ron
- Apart from using a force-sensitive resistor in the stress ball, we could also consider using a barometric sensor in the ball, a sound sensor to detect blowing sounds, or a light sensor to detect light from a kinetic torch.
- And rather than projecting onto a plain wall, we could make the experience more immersive, for example, by creating an immersive space or projecting onto unusual objects (like rubbish bags to mimic a flower bush).
- Instead of showing the same video for projection mapping, we could explore generative images (which would require coding).
- I also consulted on the technical side of connecting Arduino to p5.js. Since p5.js requires an extra step for serial communication, he suggested I explore Processing instead.
- Refined my proposal slides to better explain my concept, objectives, and workflow.
- Bought the necessary tools for this project: Arduino board, photoresistor, and a cable adapter (since I’m using a MacBook)
- Workflow breakdown: Arduino circuit setup with LDR, Arduino IDE coding, Processing coding, and the connection between Arduino and Processing.
- Arduino Circuit Setup with LDR: I referred to a simple online tutorial [link to tutorial] to set up my circuit. Since I’ve learned some Arduino basics before, the setup was manageable; I just needed to ensure the wiring was correct. To verify that the circuit was working, I first tested it using an LED as feedback. Following the tutorial step by step helped me confirm that the photoresistor setup was functioning properly.
- Arduino IDE: Using the same tutorial, I input and uploaded the code to the microcontroller through the Arduino IDE. To test the LDR, I used my phone flashlight in a dark room and observed changes in the LED response to confirm that the sensor was detecting light accurately.
- Processing coding: Next I explored the concept of L-systems, which I plan to use to visualize the flower blooming effect. I found several generative design examples online for reference, as well as a tutorial for creating an L-system using p5.js [link to tutorial]. Then I copied the p5.js script and converted it into Processing (Java) with the help of AI. The output worked successfully and looked the same as the original p5.js version.
- Connecting Arduino Input to Processing Output: To connect Arduino and Processing, I researched online tutorials and also discussed with my GPT buddy. I learned that the key step is matching the serial communication speed: Arduino’s `Serial.begin(115200)` must match Processing’s `new Serial(..., 115200)`. Once both are aligned, the connection works smoothly. After implementing this, I successfully received the light sensor input from Arduino and controlled the visual growth in Processing: low light intensity reverses growth, mid intensity pauses it, and high intensity grows the plant.
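As a sketch, the three light ranges can be expressed in plain Java (Processing code is Java underneath, so the logic carries over directly). The 300 and 600 cutoffs here are illustrative assumptions; the real thresholds depend on the LDR, the divider resistor, and room lighting:

```java
// Sketch of the Processing-side logic, written as plain Java for clarity.
// Thresholds 300 and 600 are assumed values, not calibrated ones.
public class GrowthControl {
    // Classify a raw analogRead value (0-1023) into a growth direction.
    static int direction(int lightValue) {
        if (lightValue < 300) return -1; // low light: reverse growth
        if (lightValue < 600) return 0;  // mid light: no growth
        return 1;                        // high light: grow
    }

    // Processing receives each reading as a text line over serial,
    // e.g. "512\n"; trim and parse it before classifying.
    static int directionFromSerialLine(String line) {
        return direction(Integer.parseInt(line.trim()));
    }

    public static void main(String[] args) {
        System.out.println(directionFromSerialLine("120\n")); // -1
        System.out.println(directionFromSerialLine("512\n")); // 0
        System.out.println(directionFromSerialLine("900\n")); // 1
    }
}
```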
Rough Proposal Idea
Week 3 (6/10/2025)
Feedback
- Good progress, I earned a white flexi dinosaur!!!
- My Processing code currently has too many unnecessary lines generated by AI
- Need to understand the logic behind the code instead of relying purely on AI-generated output
- Instead of using only shapes to generate visuals through code, I can experiment with images or videos. For example: For a flower blooming effect, use an image that enlarges or shrinks, or a short video showing the blooming sequence.
- L-system: building rules on top of rules. For example: the stem grows until a certain condition is met, then the flower blooms, when there’s no further growth, the plant could gently wave.
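To make the "rules on top of rules" idea concrete, a minimal L-system expands a string generation by generation. The axiom and rule below are illustrative, not the exact ones from my project: `F` means "draw a stem segment" and the brackets push/pop the drawing position, which is how branches stack on branches:

```java
// Minimal L-system expansion sketch (plain Java).
public class LSystem {
    static String expand(String axiom, int generations) {
        String s = axiom;
        for (int g = 0; g < generations; g++) {
            StringBuilder next = new StringBuilder();
            for (char c : s.toCharArray()) {
                // Only F is rewritten; brackets and +/- turns pass through.
                next.append(c == 'F' ? "FF+[+F-F]-[-F+F]" : String.valueOf(c));
            }
            s = next.toString();
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(expand("F", 1)); // FF+[+F-F]-[-F+F]
    }
}
```

In the installation, the generation count would be gated by conditions (stem grows until a limit, then the flower blooms, then the plant just waves) rather than expanding forever.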
After clarifying my confusion about how to achieve the art style I wanted, I learned that I could do it using images or videos. To bring my projection mapping closer to my preferred art style, I started experimenting with a random video. However, based on what I learned from ChatGPT, rendering videos in Processing can be quite heavy, so it suggested using keyframes instead: multiple sequential images played in order to create a video-like effect.
- When the light intensity is high, the video plays.
- When the light intensity is medium, it stops.
- When the light intensity is low, it reverses.
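The three behaviours above reduce to stepping an index through the keyframe array each draw() tick. A sketch in plain Java, with an assumed frame count of 100:

```java
// Keyframe playback sketch: the "video" is an array of image frames, and
// each tick moves the frame index by the current direction, clamped to
// the valid range so reversing never runs past the first frame.
public class KeyframePlayer {
    int frame = 0;
    final int frameCount = 100; // assumed number of keyframe images

    // dir: +1 plays forward (high light), 0 pauses (medium), -1 reverses (low)
    void step(int dir) {
        frame = Math.max(0, Math.min(frameCount - 1, frame + dir));
    }

    public static void main(String[] args) {
        KeyframePlayer p = new KeyframePlayer();
        for (int i = 0; i < 5; i++) p.step(1);   // play forward 5 frames
        p.step(0);                               // pause: index unchanged
        for (int i = 0; i < 10; i++) p.step(-1); // reverse past the start
        System.out.println(p.frame);             // clamped at 0
    }
}
```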
Demo
Week 4 (13/10/2025)
Feedback
- Ron helped me solve the combination of the L-system with the video. I had tried it before but failed due to a careless mistake. But there are still some parts of the code that need to be tweaked to make the visual feedback look better. Currently:
- The flower image pops up immediately instead of playing the animation first.
- When the light intensity is low, the flower animation reverses, but the stem stops moving once the animation stops.
- The growing effect of the plant starts very slowly, then suddenly speeds up; I want it to move at a constant speed instead.
- Instead of making the plant grow taller, I want it to grow more flowers, not just expand vertically.
- Explore ImageMagick, an open-source tool for editing digital images.
- User testing: need to buy a kinetic torch to test whether my installation can effectively capture users’ attention.
Refinement during the class
Flexi Dinosaur!!!
I started by trying out ImageMagick, an open-source tool that runs through the terminal. It was honestly quite troublesome at first, but afterwards it really impressed me. My main goal was to use it for bulk background removal, since I couldn’t find any free online tool that could handle up to 100 images at once. So I had no choice but to explore ImageMagick, and setting it up on macOS required a few extra steps. I spent quite some time learning and figuring it out since it was completely new to me, but after experimenting, wow: I could remove the background from every image in a folder with one single command, in about a second. This is crazy!
Terminal Command
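A command along these lines does the bulk removal. This is a sketch, not my exact invocation: it assumes the frames have a near-solid white background, and the `-fuzz 10%` value would need tuning per image set:

```shell
# Bulk background removal sketch with ImageMagick's mogrify.
# -fuzz controls how loosely "white" is matched before it is made
# transparent; cleaned copies go into out/ so originals are untouched.
# Guarded so it is a no-op on machines without ImageMagick or PNGs.
mkdir -p out
if command -v magick >/dev/null 2>&1 && ls *.png >/dev/null 2>&1; then
  magick mogrify -path out -format png -fuzz 10% -transparent white *.png
fi
```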
Then I continued working on fixing minor bugs to make the overall animation look better.
- To solve the issue where the flower “pops” out when the conditions are met instead of playing the animation, I debugged the code and found that the animation was actually playing from the start as soon as the light condition was met, before the flower even appeared on the branch. Technically the code itself wasn’t wrong. To fix the “pop” effect, I adjusted the scale, so when the flower appears, it starts from scale 0 and gradually grows larger. Although it doesn’t play a blooming animation, the scaling makes it look like the flower is growing naturally.
- I also fixed a bug in the reverse growth effect: now the video animation reverses and the scale shrinks until it reaches 0, while the stem continues to shrink as well.
- To further enhance the visuals, I added two variations of flowers in different colors and organized their image frames into separate folders. This makes the scene look more visually appealing, as if two harmonious colors of flowers are blooming from the stem.
- I added a continuous rotation effect to the flowers to make the growing and blooming effects more dynamic and engaging, instead of keeping them static.
- I also fine-tuned the variables for each setting to ensure the animation looks smoother and more natural.
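The "pop" fix described above can be sketched as a scale that starts at 0 when the flower first becomes visible and climbs toward full size. The per-frame growth rate of 0.05 is an assumed value:

```java
// Sketch of the flower "pop" fix: instead of appearing at full size,
// the flower's scale starts at 0 and grows gradually, capped at 1, so
// scaling up stands in for a blooming animation.
public class FlowerScale {
    double scale = 0.0;

    void update(boolean visible) {
        if (visible && scale < 1.0) {
            scale = Math.min(1.0, scale + 0.05); // assumed growth per frame
        }
    }

    public static void main(String[] args) {
        FlowerScale f = new FlowerScale();
        for (int i = 0; i < 10; i++) f.update(true);
        System.out.println(f.scale); // roughly 0.5 after 10 frames
    }
}
```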
Milestone Prototype!
- High intensity: flower blooming and growing
- Medium intensity: remains static
- Low intensity: blooming reverses and shrinks
Week 6 (27/10/2025)
Feedback
Started conducting user testing to gather insights on:
- What kind of visual effects users prefer.
- Whether my project can achieve its objective, helping users reduce hyperactivity and improve focus.
- Instead of using Unity, explore PlayCanvas to build the AR app, which can be accessed easily using a QR code.
- For 3D modeling of the porcelain, suggested materials and methods include:
- PLA matte filament – provides a matte surface that avoids light reflection during projection.
- Optionally, apply a layer of white matte spray paint to reduce glossiness.
- Alternatively, use grey-white sandpaper to polish the surface of the 3D model for better light diffusion.
Before class, I tried changing the background of my canvas using a static image. Moving forward, my goal is to create a dynamic background that changes according to the light intensity, aligning with the main visual. I also added random flower sizes within a certain range to create more variation. With these updates, the main visual looks quite nice overall.
Meanwhile, during class, I experimented with adding a visual effect where the plant sways occasionally when it’s static. The outcome looks good as well, making the overall visual appear more natural and lively.
Week 7 (3/11/2025)
Task Completed
- Added falling particles (like petals). For now, they fall from the top of the canvas when `gen = 6` (instead of from the flower).
- Made the plant growth velocity constant by increasing the time per generation, so the animation looks more consistent.
- Set flowers to grow only when `gen = 6`.
- Bought a kinetic torch, but it’s not what I wanted: it’s hand-cranked to recharge a battery, not a true kinetic light with variable intensity control.
- Make petals fall from the flowers (not from the top).
- Learn how to build a hand-crank/kinetic torch (DIY approach).
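The constant-velocity change above amounts to driving the generation count off elapsed time rather than compounding growth per frame, so every generation takes the same amount of time. A sketch, with an assumed 2000 ms per generation:

```java
// Constant-pace growth sketch: the current generation is a function of
// elapsed time, advancing one step every fixed interval.
public class GrowthClock {
    static final int MS_PER_GEN = 2000; // assumed time per generation

    static int generation(long elapsedMs) {
        return (int) (elapsedMs / MS_PER_GEN);
    }

    public static void main(String[] args) {
        System.out.println(generation(1999)); // 0
        System.out.println(generation(2000)); // 1
        System.out.println(generation(6500)); // 3
    }
}
```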
- Need to build the hand-crank torch myself, and I have no prior experience.
- Ongoing refinement and troubleshooting to improve visuals; this requires time to understand the code and fix things step by step, rather than dumping everything into ChatGPT and getting random code.
- Petals: Start simple, create a falling ball from the flower, then swap in petal PNGs; add a wind effect to make the fall feel natural.
- Hand-crank torch: Since there’s no suitable product on the market, consider a DIY build.
- Coding approach: Instead of asking ChatGPT to rewrite everything, break the problem into algorithms and implement step by step.
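Following the "start simple" advice, the falling ball plus wind reduces to a particle that drops at a constant speed while a sine offset sways it sideways. All constants here (fall speed, wind strength, phase step) are assumed:

```java
// Falling-petal sketch: constant downward motion plus a sine-based
// horizontal sway standing in for wind. Swap the dot for a petal PNG
// once the motion feels right.
public class Petal {
    double x, y;   // position on the canvas
    double t = 0;  // particle age, drives the wind phase

    Petal(double startX, double startY) { x = startX; y = startY; }

    void update() {
        t += 0.1;
        y += 2.0;               // constant fall speed (pixels per frame)
        x += Math.sin(t) * 0.8; // wind: gentle side-to-side sway
    }

    public static void main(String[] args) {
        Petal p = new Petal(100, 0);
        for (int i = 0; i < 50; i++) p.update();
        System.out.println(p.y); // 100.0 after 50 frames
    }
}
```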
Week 8 (10/11/2025)
Task Completed
- Added background music and sound effects when the generation increases.
- Added a new condition: when light intensity > 500, the branches turn red, indicating the plant is scorching and prompting the user to decrease the light intensity.
- Changed the gradient background colour according to the growth stage of the plant.
- Added flying butterflies that appear and move randomly, using image frames and code logic to create a natural flying motion.
- Added narration for the plant, guiding users on how to control the light intensity.
- Instead of creating a single falling ball, I created an emission area for falling petals using a particle effect, restricting the emission to one area when the flowers bloom.
- Create a simple hand-crank torch.
- Conduct user testing and refine the project based on feedback.
- Want to make the visuals more appealing and impactful, but some of the visual feedback may not be engaging or noticeable enough.
- Concerned whether the hand-crank torch I built can effectively control different ranges of light intensity.
Hand-crank torch:
- A setup using a DC motor and LED will work.
- If the change in light intensity isn’t obvious, try using two LEDs so it requires more kinetic energy to light up, helping achieve the objective.
- Visual feedback:
- The piano sound effect is not obvious; reduce the BGM volume to balance the sound.
- The falling petals effect may be too subtle because there are too few; consider increasing the number for better visibility.
- Light intensity feedback: Add a spotlight on the screen that syncs with the light sensor’s input, allowing users to visually track light intensity.
- Overall, emphasize direct and clear visual feedback for better user response.
- Conduct tests to identify which areas need improvement, rather than relying on assumptions.
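The spotlight suggestion is essentially a linear map from the raw sensor reading to a spotlight size, the same idea as Processing's built-in map() function. The 20–200 pixel radius range below is an assumed choice:

```java
// Spotlight feedback sketch: linearly map the raw sensor value (0-1023)
// to a spotlight radius so users can visually track light intensity.
public class Spotlight {
    static double map(double v, double inMin, double inMax,
                      double outMin, double outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    public static void main(String[] args) {
        // Brighter light -> larger spotlight.
        System.out.println(map(0, 0, 1023, 20, 200));    // 20.0
        System.out.println(map(1023, 0, 1023, 20, 200)); // 200.0
    }
}
```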
Week 9 (17/11/2025)
Task Completed
- Created a simple hand-cranked torch using an LED light and a DC motor.
- Added a light input feedback, but it’s not very obvious yet — need to improve it by turning it into a spotlight.
- Added leaves that grow starting at `generation > 3` to introduce different visual stages and make the feedback more encouraging.
- Created a more structured user testing feedback form to collect both qualitative and quantitative data.
- Upgrade the hand-crank torch.
- Conduct at least 5 user tests.
- I want users to crank the flashlight slowly and steadily, instead of fast and aggressively.
- Currently, the torch does not produce stable light intensity when cranked slowly, and the light sensor cannot detect sufficient intensity for the plant to grow.
Hand-crank torch:
- Add a gear mechanism so that slow cranking can still generate a higher light intensity detectable by the sensors.
- Consider using LEGO gears for an easier setup.
- Gear generator
- Visual feedback:
- Adding the leaves was a good improvement.
- The light intensity feedback can be made more obvious.
- Add a challenge: when the plant wilts, make it look more natural, instead of shrinking, have the leaves and flowers fall off.
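The gear suggestion works because a gear pair multiplies the crank speed by the ratio of driver teeth to driven teeth, so slow cranking can still spin the motor fast enough for the LED. The tooth counts below (40-tooth crank gear, 8-tooth motor gear) are assumed, LEGO-style values:

```java
// Gear ratio sketch for the hand-crank torch: output speed scales with
// (driver teeth / driven teeth), so a big gear on the crank driving a
// small gear on the motor gives a speed-up.
public class GearRatio {
    static double outputRpm(double crankRpm, int driverTeeth, int drivenTeeth) {
        return crankRpm * (double) driverTeeth / drivenTeeth;
    }

    public static void main(String[] args) {
        // Cranking gently at 60 rpm through a 40:8 pair spins the motor
        // at 300 rpm, a 5x speed-up.
        System.out.println(outputRpm(60, 40, 8)); // 300.0
    }
}
```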
Week 10 (24/11/2025)
Task Completed (Based on User Feedback)
- Changed the background music to make it sound more natural and magical.
- Adjusted the plant color to turn brownish when exposed to excessively high light intensity.
- Added a new stage: 5 seconds after `gen = 6`, butterflies and stars appear.
- Added a glowing effect for each stage (later removed).
- Added a circular progress bar that glows when each stage is achieved.
- Added a sound effect when the plant shrinks.
- Changed the gradient color for improved visual appeal.
- Upgrade the hand-crank torch.
- Conduct at least 10 user tests.
- Enhance user experience based on feedback
- I’m not sure how to add a gear to my hand-crank torch, so I will need to seek help.
- I’m having trouble making the berries (flowers and leaves) fall from their original positions.

