Big Ideas firmly believes that rigorous program evaluation is key to understanding whether the Contest is meeting its goals. As a result, Big Ideas gathers feedback each contest year to conduct an impact assessment and a process evaluation, measuring the program's impact and learning how it can improve its offerings.
To accurately measure outcomes, the Blum Center has rigorously monitored and evaluated the Contest using annually gathered data; surveys of applicants, judges, and mentors; external evaluators; and the social science expertise of UC Berkeley graduate student researchers. In the last few years, Big Ideas has integrated statistical analysis techniques into its monitoring and evaluation systems. For instance, to determine whether teams improve in certain skills, a before-and-after analysis is conducted, controlling for key variables such as student status or start-up background.
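As an illustration, the sketch below shows one way such a before-and-after analysis could be run once survey responses are exported to a table. This is not Big Ideas' actual pipeline; the column names (pre_score, post_score, student_status, startup_background) and the data are hypothetical.

```python
# A minimal sketch (not Big Ideas' actual pipeline) of a before-and-after
# analysis: regress post-program skill ratings on pre-program ratings plus
# controls, so that the estimated change is net of the control variables.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: one row per team lead, with pre- and
# post-program confidence ratings (e.g., on a 1-5 scale) and two controls.
df = pd.DataFrame({
    "pre_score":  [2, 3, 3, 4, 2, 3, 4, 2],
    "post_score": [3, 4, 4, 4, 3, 4, 5, 3],
    "student_status": ["undergrad", "grad", "undergrad", "grad",
                       "undergrad", "grad", "grad", "undergrad"],
    "startup_background": [0, 1, 0, 1, 0, 0, 1, 0],
})

# Model post-program confidence as a function of pre-program confidence,
# controlling for student status and prior start-up experience.
model = smf.ols(
    "post_score ~ pre_score + C(student_status) + startup_background",
    data=df,
).fit()
print(model.summary())
```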
Big Ideas administers three surveys each year that feed into both analyses: a Pre-proposal Feedback Survey (for all student contest participants), a Full Proposal Feedback Survey (for finalist students), and a Judge & Mentor Feedback Survey. Additionally, Big Ideas routinely issues an Alumni Feedback Survey to former contest winners to capture more information on teams' progress after they leave the contest. Survey questions are developed by Big Ideas staff and refined each year to ensure that they accurately measure skill development and provide opportunities for feedback on Contest components (see the Big Ideas Metrics Framework in the Tools section to learn about the types of questions asked in each survey).
The Blum Center measures Big Ideas' impact in three key ways:
These three metrics reflect the broad scope of the Big Ideas pipeline, which transforms early-stage undergraduates and graduate students into a comprehensive network of innovators.
Big Ideas seeks to better understand the extent of its contribution to applicants’ development over the course of the program. It uses the following guiding questions to inform its evaluation process:
To assess the impact of the Big Ideas program on students' development over the course of their participation, Big Ideas uses a mix of quantitative and qualitative evidence to gauge how students value the services provided.
Quantitative evidence
Big Ideas team leads are asked to rank their confidence in a number of different skill areas at the time they submit their Pre-proposal and Full Proposal applications. They are also asked to report on the likelihood of implementing or working for a social venture in the next year, and to rank their top sectors of interest. The results of these two surveys are analyzed to see whether there is any significant difference between the two rounds of reporting, as sketched below.
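A minimal sketch of that pre/post comparison follows, under the assumption that each team lead's Pre-proposal rating can be paired with their Full Proposal rating for a given skill. A Wilcoxon signed-rank test suits ordinal Likert-style ratings; a paired t-test is shown alongside for comparison. The ratings are hypothetical.

```python
# Pair each team lead's Pre-proposal confidence rating with their Full
# Proposal rating for one skill, then test whether the change is significant.
from scipy.stats import ttest_rel, wilcoxon

pre  = [2, 3, 3, 4, 2, 3, 4, 2]   # confidence at Pre-proposal submission
post = [3, 4, 4, 4, 3, 4, 5, 3]   # confidence at Full Proposal submission

t_stat, t_p = ttest_rel(post, pre)   # paired t-test on the rating changes
w_stat, w_p = wilcoxon(post, pre)    # signed-rank test for ordinal data

print(f"paired t-test:       t={t_stat:.2f}, p={t_p:.3f}")
print(f"Wilcoxon signed-rank: W={w_stat:.2f}, p={w_p:.3f}")
```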
In the 2015-2016 contest year, Big Ideas found the following:
Qualitative evidence
The quantitative approach is supplemented with free-response answers in the surveys, where teams can describe in detail what they perceive the impact of Big Ideas to be. Year after year, the mentorship and advising hours with Big Ideas staff are overwhelmingly cited as the most useful contest offering. Teams particularly valued having a dedicated industry professional who could connect them with the resources they needed and offer a great deal of specific feedback on the design of the project. The varied perspectives and availability of last-minute feedback from Big Ideas staff were also reported to improve the quality of submissions. Teams also cite the amount of detail provided in the judging feedback as an important resource.
In their responses, teams also mentioned that they made significant progress in developing their proposal writing, team building, and project management skill sets. The framework and deadlines of the application provided teams with a set of deliverables that held them accountable. To strengthen their projects to meet the criteria of the Big Ideas application, applicants sought partners, conducted market surveys, built prototypes, and tested their hypotheses. For many teams, Big Ideas was the extra push they needed to actually execute a social venture.
Sample responses include:
As part of its impact assessment, Big Ideas also evaluates the extent to which teams continue to work on their Big Ideas projects and the difference those teams are making. Initially, Big Ideas created a LinkedIn group to connect past winners and keep track of their updates. Staff hoped that the group would provide a forum for past winners to share their accomplishments with each other and with staff, but the LinkedIn group has proven relatively inactive and has therefore not been a particularly effective evaluation tool. Big Ideas thus gathers information on past winners primarily by issuing alumni surveys and conducting phone interviews.
Alumni surveys and phone interviews
Alumni surveys are sent out every couple of years to capture information on graduated teams. Big Ideas captures three key metrics to help assess its influence: additional revenue generated, number of people working on the project, and number of beneficiaries or clients served to date. The Alumni Survey also requires a more detailed response about the progress projects have made to date. It prompts the respondent to report on the team's current involvement in the project, whether any key pivots have been made, and the project's current stage (design, pilot, scale, etc.). It also asks alumni to describe any key challenges they are facing in implementation and what gaps are preventing them from taking the project to the next level. Questions also focus on how the program can better prepare or support teams to deal with these obstacles, and on teams' plans for future work (see the Alumni Update Survey in the Tools section).
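Once responses are collected, the three key metrics can be tallied per contest cohort. The sketch below illustrates one way to do so, assuming responses have been exported to a table; the column names and figures are hypothetical, and this is not Big Ideas' actual reporting workflow.

```python
# Summarize the three key alumni metrics (revenue, team size, beneficiaries)
# by contest cohort from a hypothetical survey export.
import pandas as pd

# Hypothetical alumni survey extract: one row per responding team.
alumni = pd.DataFrame({
    "cohort":        [2010, 2010, 2012, 2013, 2013],
    "revenue_usd":   [0, 15000, 250000, 5000, 0],   # additional revenue generated
    "team_size":     [1, 3, 8, 2, 0],               # people working on the project
    "beneficiaries": [0, 400, 12000, 150, 0],       # clients/beneficiaries served
})

# Aggregate the three key metrics by contest cohort.
summary = alumni.groupby("cohort").agg(
    teams_responding=("cohort", "size"),
    total_revenue_usd=("revenue_usd", "sum"),
    total_team_members=("team_size", "sum"),
    total_beneficiaries=("beneficiaries", "sum"),
)
print(summary)
```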
Phone interviews cover the same content as the Alumni Surveys. Outreach to former Big Ideas winners was conducted in 2011 (to 2010 winners), and again in 2014 (to 2012-2013 winners), via follow-up phone calls to teams that did not respond to the survey. This form of direct outreach proved a more effective means of reaching past winners and allowed Big Ideas to gather information and stories that have been used in Big Ideas newsletters, in pitching Big Ideas to potential category sponsors, and as informal evidence of the Contest's impact in grant proposals. Reconnecting with past winners also allowed staff to develop a greater sense of connection to, and commitment from, past winners to the Contest. However, contacting individuals, scheduling meetings, conducting interviews, transcribing the information, and integrating it into the survey findings is very time consuming, and requires a dedicated individual (such as a part-time graduate student with experience in conducting surveys). Due to bandwidth and funding constraints, these types of interviews have not been conducted in recent years.
Big Ideas also uses all three surveys and input from staff to conduct an informal process evaluation each year to assess its execution of the program. The team collects a great deal of feedback from students, judges, and mentors on whether they utilized the resources offered and found them effective. It explores which of its strategies are most effective for conducting outreach to students and recruiting judges and mentors. It also gauges whether participation in the Contest is seamless for students, judges, and mentors (see all three surveys in the Tools section for sample process evaluation questions).
Big Ideas develops a set of recommendations each year on how the program can be improved the following year, and uses these lessons to inform its long-term strategy. The process evaluation allows the team to prioritize resources in future years and to continually reflect on how it can best serve teams going through its program.