Ford Motor Company has been delving into autonomous driving for its vehicles. Prompted by the decline in traditional motor vehicle use among younger generations, Ford has also explored autonomy for alternative modes of transportation, specifically micromobility vehicles such as electric bicycles. In this project, we explored a paradigm shift in rider-vehicle interactions, incorporating augmented reality (AR) as the single interface for our electric bicycle concept - Ford InVision.
The Problem
How do we incorporate the benefits of displaying information to the rider without distracting them to an unsafe level?
Recent increases in interactive technology in vehicles have pushed for larger and more complex displays, creating safety hazards stemming from rider distraction. In fact, “distracted driving is a factor in 27% of all crashes”, much of which stems from the use of touchscreen displays and in-vehicle infotainment.
The solution
To minimize rider distraction from the road ahead, I devised the concept of an augmented reality interface attached to the helmet’s visor - Ford InVision. This would greatly promote safety: riders cannot use InVision without wearing the helmet, and the interface is designed to minimize distraction while maximizing functionality. Being passionate about emerging technology, I saw this project as a chance to explore a new problem space, with new interaction paradigms to tackle.
Research
To learn more about current e-bikes, we conducted contextual inquiry in the form of guerrilla research. We intercepted and interviewed three diverse e-bike users while, or just before, they were riding. We wanted to identify their favorite features of their bikes, and their biggest pain points. From the interviews, we realized that they varied immensely in e-bike type (pedal-assist vs. throttle), size, and use case.
Our foundational research gave us a diverse problem space to solve for. We used affinity mapping to organize our findings by similarity to identify the major reasons people choose e-bikes over other modes of transportation. We then narrowed down the data we collected to generate three insights we wanted to focus on:
Screens either don’t show enough information, or are too crowded
Interface control systems were difficult to use while riding
Users often ride e-bikes to be more connected to their surroundings
Ideation
Once we had identified the problems we wanted to tackle, we began to develop ideas in the form of storyboards. We conducted a task analysis to prioritize important tasks, and used storyboards to expand on tasks that would involve a new way for riders to interact with InVision’s controls and interface. We made sure to create several different ways to complete the same task, as we had begun diverging on solutions.
Informed by research that identified key challenges, we opted for an augmented reality interface, seamlessly weaving solutions into the rider's environment. An AR interface uses the rider’s full field of view, giving us much more room to display information. However, we still needed to explore interaction techniques that would be intuitive and reduce cognitive load, and design an interface that displays just the right amount of information.
Early Prototypes
As this project was exploring multimodal design - combining physical and digital interactions - we designed early-stage digital and physical prototypes.
Digital
Focus and Clarity
Information and controls were strategically placed to minimize distractions while cycling. Although we originally placed buttons and information in the corners of the interface, we quickly realized this would be too difficult to interact with and distracting to the rider. Thus, key data (speed mode, range, battery) were merged into a centralized bar, with navigation arrows displayed near the speedometer. Semi-opaque backgrounds and bold numbers further enhanced clarity and reduced distraction.
First iteration of our digital prototype
Style choices
We made semi-opaque backgrounds for displayed information and controls; this created a clean look that did not distract or impede the user while mobile. We also bolded and enlarged numbers, as they reveal more relevant information to the user than the labels.
Second iteration, with navigation feature
Controls while on the move
Using eye-tracking, the user can alternate through different levels of speed assistance (Speed Mode) and toggle autonomy on or off.
Physical
Intuitive Controls
In our low-fidelity cardboard e-bike prototype, we intentionally minimized physical buttons - including only [enter] & [back] - and leveraged eye-tracking for menu navigation to keep the rider's attention on the road ahead. To navigate the prototype, users looked at an on-screen option and pressed [enter] to confirm.
First iteration of our handlebar
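The gaze-plus-button pattern described above can be sketched as a tiny selection routine. This is an illustrative model only - the function and event names are hypothetical, not part of our actual prototype code:

```python
from typing import Optional

def select_with_gaze(gaze_target: Optional[str], button: str) -> Optional[str]:
    """Gaze-to-highlight, button-to-confirm: the item under the rider's
    gaze is activated only when the physical [enter] button is pressed.
    [back] always returns to the previous menu."""
    if button == "back":
        return "previous_menu"
    if button == "enter" and gaze_target is not None:
        return gaze_target
    # Gazing alone never triggers an action - this keeps accidental
    # glances from activating controls while riding.
    return None
```

The key design property is in the last branch: looking at a control is never sufficient on its own, which is why the two physical buttons remain on the handlebar.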
AR Helmet Insights
A simulated AR interface on a modified bike helmet (transparent plastic film) allowed us to evaluate user interaction and feasibility, informing later design decisions like optimized field of view and intuitive controls.
First iteration of our AR helmet
Testing and Feedback
We conducted user testing with 12 individuals of varying levels of e-bike expertise, and used the Rose, Bud, Thorn method to garner feedback. Participants were instructed to accomplish specific tasks such as “Turn on/off Autonomy Mode”, “Turn on navigation to a certain address”, or “View your recent rides”.
After the session, we synthesized the feedback and identified several challenges participants faced:

Feedback for interface navigation
9/12 participants emphasized the need for feedback mechanisms, including hover states and notifications, to provide clarity when changes or button activations occur on the interface.

Data visualization improvements
6/12 participants highlighted the need for clearer and more understandable data visualizations, especially on the battery-range relationship page.

Voice input confirmation/retry
5/12 participants recommended incorporating voice input confirmation prompts and retry functionality to improve the overall user experience on the navigation page.

Including an onboarding tutorial
5/12 participants recommended an onboarding tutorial. Due to the novelty of our concept, participants were not used to eye-tracking or augmented reality interfaces, and we believed a general onboarding tutorial would help enhance the overall riding experience.
Final Design
Our final design consisted of three items - the augmented reality interface operated on a Microsoft Hololens 2, a physical prototype, and a mobile app.
Augmented Reality
Interface
The augmented reality interface was the bulk of our research, design, and brainstorming sessions as it was a complex and novel task for us all to work through. The following are the major features, challenges, and novel processes we designed:
Status Bar
The Status Bar, located above the center of the user’s field of view, was designed to be interacted with intermittently. As its name suggests, the Status Bar acts mostly as a container of information to be glanced at. As a result, it affords only two actionable buttons - autonomy and speed mode - to limit user distraction while riding.
Hover States
We received feedback from several testing participants that they could not tell whether they were hovering over an item with eye-tracking. Since eye-tracking is a novel interaction mechanism, users needed feedback on their navigation, so we added hover states to every actionable item.
Figma interaction issues
Along with a hover state, the autonomy feature needed an on/off toggle state, as well as an additional notification for user feedback. This made the individual Figma component very complex, and we encountered difficulties in getting a hover, toggle (on click), and notification overlay to appear in Figma.
Features
From our user research and testing sessions, we realized that e-bike users often use their vehicles to commute, and battery drainage is a constant concern. Thus, we focused the majority of our features on battery visualization and range estimates.
Navigation
Navigation was our first feature. Since riders would be able to use semi-autonomy, we believed that being able to set and view guidance on an augmented reality interface was the next logical step.
My Ride
This feature allows users to revisit recent rides and view detailed information on each ride, including power consumption in increments. This makes it easy to estimate battery range for a frequent or recent route.
Detailed Battery
This feature enables users to view detailed battery consumption estimates based on speed mode. Since higher speed modes give riders more power assistance while cycling, they reduce range. With this feature, riders can accurately estimate range for each speed mode, and balance trip duration against battery consumption.
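The range math behind this feature is simple to sketch. The battery capacity and per-mode consumption rates below are illustrative assumptions, not actual InVision specifications:

```python
# Illustrative range estimator for the Detailed Battery view.
# Capacity and per-mode consumption rates are hypothetical values.
BATTERY_CAPACITY_WH = 500  # assumed battery capacity in watt-hours

# Higher speed modes assist more, so they drain the battery faster.
CONSUMPTION_WH_PER_KM = {
    "eco": 5.0,
    "normal": 8.0,
    "sport": 12.0,
}

def estimated_range_km(battery_percent: float, mode: str) -> float:
    """Estimate remaining range for a given speed mode."""
    remaining_wh = BATTERY_CAPACITY_WH * battery_percent / 100
    return remaining_wh / CONSUMPTION_WH_PER_KM[mode]

# A rider at 60% battery sees a different range estimate per mode:
for mode in CONSUMPTION_WH_PER_KM:
    print(f"{mode}: {estimated_range_km(60, mode):.0f} km")
```

Showing all three estimates side by side is what lets the rider trade trip duration against battery consumption before choosing a mode.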
Takeover moment
Ford Motor Company also tasked us with designing a takeover moment - a circumstance in which the vehicle’s autonomy can no longer function safely and must hand control of the vehicle back to the rider. Our vehicle features Level 3 autonomy - conditional automation - where “the vehicle can perform most driving tasks, but human override is still required”.
Our takeover moment would occur when InVision’s autopilot detects a situation where it cannot safely respond to an obstruction in the rider’s path. Our takeover moment was designed to minimize cognitive overload and maximize safety for the rider. To accomplish this, we made several key design decisions:
Getting the rider's attention
Vibrating handlebars and seat provide haptic feedback to alert the rider. We included vibrations in the seat in case the rider's hands were not on the handlebar.
Flashing text that replaces the speedometer urges the rider to take control of InVision. This was added based on feedback from the critique session.
Outlining the danger
Outlining the obstacle with an orange rectangle makes the immediate danger clear to the rider without overwhelming them.
Providing the "best way out"
The final addition was to suggest the best course of action for the rider to take. This reduces cognitive load by offering the rider the easiest way out of the incoming danger.
Early sketch of takeover moment
Final takeover moment
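The three design decisions above form a fixed escalation sequence, which can be sketched as a small routine. Everything here - the signal names, the `Obstacle` fields - is a hypothetical model for illustration, not our production logic:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """A hazard detected in the rider's path (coordinates in the AR view)."""
    x: int
    y: int
    suggested_exit: str  # e.g. "veer left"

def takeover_alert(obstacle: Obstacle) -> list:
    """Fire the takeover cues in order: haptics, flashing text,
    obstacle outline, then the 'best way out'. Returns the cue list."""
    cues = []
    # 1. Haptics on both handlebar AND seat, in case hands are off the bar.
    cues.append("vibrate:handlebar+seat")
    # 2. Replace the speedometer with flashing take-control text.
    cues.append("flash:TAKE CONTROL")
    # 3. Outline the danger in orange without overwhelming the rider.
    cues.append(f"outline:orange@({obstacle.x},{obstacle.y})")
    # 4. Reduce cognitive load by suggesting the easiest way out.
    cues.append(f"guide:{obstacle.suggested_exit}")
    return cues
```

Ordering matters: the haptic cue works even when the rider is not looking at the interface, so it fires first, and the visual cues assume attention has already been captured.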
Physical design
We redesigned our physical prototype to better match a futuristic and ergonomic design language.
Handlebar Redesign
A standout change is the remade handle, shifting from a straight to a curved shape. This adjustment, based on user feedback and ergonomic considerations, prioritizes comfort by following the natural curvature of the human hand during gripping.
Refined Controls Placement
Two buttons are positioned at a suitable distance from the rider's thumb, minimizing error and discomfort.
Using Colors as Visual Cues
The color scheme serves not only an aesthetic purpose but also acts as a visual guide that emphasizes the hierarchy of controls. This intentional use of white on top of black supports quick and intuitive recognition of the different components.
Mobile app
We designed a mobile companion app for InVision. This covered the needs that the AR interface alone could not achieve.
In addition to the AR solution, we conceptualized a paired mobile app to be used when the bike is stationary. This app can be used to monitor the battery status of both bike and headset. Additionally, the app has a “Find My Bike” feature that shows the user their bike’s real-time GPS location.
Other features that are accessible on the AR headset are also accessible on mobile, including My Rides and Range. We also enabled controls adjustment, such as speed mode level, audio, and lights. Finally, we implemented the navigation feature on mobile, as some users pointed out they would like the option to type out a destination on a traditional keyboard.
Reflection
This project was, for the most part, a true delight to work on. As a tech-enthusiast, I was extremely eager to learn and get my hands on emerging technologies. Furthermore, as someone deeply passionate about automotive and transportation, I was beyond excited when I first heard about this assignment.
This project was the first time I worked on anything AR related, so attempting to conceptualize, prototype, and test (at as high a fidelity as possible) was an ongoing challenge. Creating Figma prototypes and wireframes without the medium on which we would test them was also difficult, since we had no idea how field of view, resolution, motion, and many other factors would impact our designs. When we finally received the Microsoft Hololens 2, we had to research extensively how to use it and how to display a Figma prototype on the headset. At first, I tried running Figma in the browser, but it was extremely slow and essentially unusable. I later found a workaround: mirroring a computer screen onto the Hololens. This worked, but did not allow users to click any buttons or use real eye tracking.
In terms of next steps, I believe this project has some feasibility in the near future with advances in AR, ubiquitous computing, and eye tracking. Before that, however, thorough testing needs to be done in context, with users operating AR in the real world, in motion. We need to explore in depth the interactions between users, the AR interface and controls, and the real world. I spoke with an AR expert in industry, and she advised that thorough, contextual testing should be done. Furthermore, an immediate next step could be to rebuild our interface in an AR-specific tool, such as Bezi.
Smart Loading Zones (SLZs) are a pilot program conducted by Automotus and the City of Pittsburgh to efficiently manage curbside parking in commercial areas, with the aim to decrease congestion and emissions. Our team was tasked to identify the source of existing SLZ problems, and design a solution that aligned with Automotus' vision while respecting user privacy and enhancing user experience.
The Problem
Smart Loading Zones (SLZs) promised a curbside parking revolution, but instead delivered large-scale user confusion. Unclear signage, overly-complex onboarding processes, and general misunderstanding of SLZ goals and motivations led to widespread disapproval from primary users and the public.
We took a user-centric approach, delving deep into use cases, user types, and unmet needs when designing our solution.
The solution
The focus of our project was helping private and commercial drivers utilize Smart Loading Zones in an efficient yet harmonious manner by designing a dynamic sign, redesigning the payment process, and creating a reservation feature to increase the incentives for both commercial freight drivers and private drivers to utilize Smart Loading Zones.
Research
We began by looking at the information readily available to us. We conducted an in-depth analysis of user data provided by Automotus, a heuristic evaluation of their app - CurbPass - and walked the wall to synthesize insights across methods.
Data Analysis
We sorted and visualized data given to us by Automotus, and looked at registered users, park events, and vehicle types occupying SLZs.
Registered Accounts vs Park Events
We noticed a massive discrepancy between registered user accounts and park events, suggesting that poor signage and a cumbersome registration process lead users to park without registering.
Vehicle Types that Use SLZs
Despite being named Smart Loading Zones, these zones are occupied by private cars more often, and for longer, than by commercial vehicles or freight trucks.
Heuristic Evaluation
Next, we conducted a Heuristic Evaluation on CurbPass - the app portal to register and pay for Smart Loading Zones.
Snippet from Heuristic Evaluation
We used Nielsen's Usability Heuristics, and discovered that most issues concerned payment security, onboarding complexity, and unclear pricing. The onboarding process in particular was overly complex and redundant.
Narrowing the Scope
Drawing from our preliminary research, we decided to focus our research on the lack of information communication and the low ratio of commercial vehicles using SLZs.
We had gathered valuable insights from background information, and moved our research efforts to on-site observation and in-depth interviews.
Insights
We conducted intercept interviews with 18 participants near Smart Loading Zones across Pittsburgh, and sought out a balanced variety of commercial freight drivers, ride-share drivers, and private drivers. We synthesized our interview notes using affinity clustering and developed the following insights:
Lack of clear information leaves users unable to understand SLZ use cases and goals
Conflict of use case between private and commercial drivers
Inconsistent enforcement creates misconceptions, reducing user incentives
Mismatch of mental models: The mental models users bring from conventional parking do not match how SLZs charge and enforce their zones
By looking at our insights, interview notes, and on-site interviews, we began to envision our users. Thus, we developed two user personas and created two customer journey maps for the different use cases we outlined:
User Persona for a Commercial Truck Driver
User Persona for a Private Driver
Customer Journey Map for short-term SLZ parking
Customer Journey Map for long-term SLZ parking
Ideation
Before beginning to explore solutions for our users, we looked back and consolidated our preliminary research with our intercept interviews into insights, questions, and design ideas. This process allowed us to isolate specific needs and match them with design ideas.
We proceeded to isolate specific user needs that derived from the above consolidation, and began storyboarding.
Storyboarding
We created a total of 36 storyboards, each focusing on a user need. Each storyboard also contained a leading question, follow-up discussion questions, and a varying risk level. We wanted to create solutions of varying risk levels, to probe and assess the willingness of our users to try each solution. The storyboards focused on needs such as data gathering, pricing transparency, social pressure, and street-sign design.
Snippet of our storyboarding session
We then presented these storyboards to 4 interviewees, and gathered the following insights:
Information transparency is crucial. Interviewees pointed out that displaying parking rates on the physical sign makes it easily digestible, and they know what to expect.
Reservation system saves users' time. Most participants expressed interest in the idea of reserving SLZs.
Users want to receive reminders through their phones. Participants expressed a strong interest in the idea of receiving reminders about their remaining parking time on their phones.
These findings further solidified our project direction. We decided to begin prototyping, with three artifacts in mind:
Dynamic sign - This could distill important information on the physical sign, increasing readability and payment transparency.
Reservation system for commercial freight drivers - In our interviews, many truck drivers stated that they cannot park in an SLZ already occupied by a private car, due to the limited size of the zone.
Redesigned payment process - We decided to eliminate many of the redundant onboarding processes, and redesign it to a simple and quick payment process.