1st Annual Ambiki Hackathon
May 30th - June 1st, 2022

Ambiki's 1st Annual Hackathon is a 48-hour event in which Ambitious Idea Labs employees will compete in teams on projects themed around improving speech recognition for marginalized populations.
On the final day there will be project presentations in front of a panel of judges.
Format
- 4 teams – drafted by randomly chosen team captains
- 2 projects (2 teams on each project)
- Project A
- Team Blue
- Team Green
- Project B
- Team Salmon
- Team Gold
- Takes place over 48 hours (May 30th - June 1st)
- Final presentations: 8 minutes + 2-minute Q&A
- Team presentations will be stack ranked across the following categories to determine the hackathon winner
- Please rank the team presentations from most impressive to least impressive
- Please rank the UI/UX (user interface / user experience) of the solutions from most impressive to least impressive
- Please rank the viability of the solutions from "most ready to be used today without additional work" to "needs the most additional work"
- Please rank the execution of the teams from most impressive execution to least impressive execution
- Please rank the innovation/creativity/wow factor of the solutions from most impressive to least impressive
Teams
Blue Team
Green Team
Salmon Team
Gold Team
Projects
Project A (Team Blue and Team Green) - Capturing a sound recording
GitHub repo: 1st Annual Hackathon (Project A)
- Easily allow a therapist to capture a voice recording of a patient
- Should work anywhere on Ambiki (including the teletherapy room)
- Could be used during an in-person session (if the therapist has their computer or device open) or during a teletherapy session
- Incorporate the foot pedal
- Hands-free way for the therapist to start or stop a recording
- Ideally want to capture
- Sound recording
- Patient
- Word or phrase attempted
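The three capture items above could be modeled as one record per recording. A minimal sketch of such a record (all names here are hypothetical, not Ambiki's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical capture record; Ambiki's real models may differ.
@dataclass
class SoundCapture:
    patient_id: int    # which patient produced the recording
    target_text: str   # word or phrase attempted
    audio_path: str    # where the recorded audio file is stored
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

capture = SoundCapture(
    patient_id=42,
    target_text="rabbit",
    audio_path="uploads/rabbit.webm",
)
print(capture.target_text)
```

Keeping the attempted word or phrase alongside the audio is what makes the recording scorable later in Project B.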
Project B (Team Salmon and Team Gold) - Scoring a sound recording
GitHub repo: 1st Annual Hackathon (Project B)
- Allow a therapist to score a sound recording (the output from Project A)
- Should allow error analysis down to the phoneme level
- Ideas to consider
- Should the therapist listen to the audio alone first (before seeing the attempted word or phrase)?
- Is there a way to gamify it?
- Should it be accompanied by a sound file demonstrating the correct or standard pronunciation?
- Hint - don't reinvent the wheel; Ambiki already has tables with seed data for words, phonemes, blends, and error analysis
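Phoneme-level error analysis typically means aligning the target phoneme sequence against what the patient produced and classifying each mismatch. A minimal sketch using sequence alignment (phoneme symbols and function names are illustrative, not Ambiki's existing schema; distortions would still need to be marked manually by the therapist):

```python
from difflib import SequenceMatcher

def phoneme_errors(target, produced):
    """Align target vs. produced phoneme sequences and classify mismatches
    as substitutions, omissions, or additions."""
    errors = []
    sm = SequenceMatcher(a=target, b=produced, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":
            errors.append(("substitution", target[i1:i2], produced[j1:j2]))
        elif op == "delete":
            errors.append(("omission", target[i1:i2], []))
        elif op == "insert":
            errors.append(("addition", [], produced[j1:j2]))
    return errors

# A classic /r/ -> /w/ substitution in "rabbit" (ARPAbet-style phonemes)
print(phoneme_errors(["R", "AE", "B", "IH", "T"],
                     ["W", "AE", "B", "IH", "T"]))
# -> [('substitution', ['R'], ['W'])]
```

The existing word and phoneme seed tables mentioned in the hint could supply the target sequences, so the UI only needs to capture what the therapist heard.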