Team:St Andrews/review

From 2011.igem.org

Revision as of 13:25, 21 September 2011 by ChristinaS (Talk | contribs)

An Internal Review of iGEM

Intro, reasons behind compilation

Methods

Information was compiled for a variety of variables and placed into a tabulated spreadsheet. The number of variables collected differed only slightly across the three years, as it depended on the availability of data for each team.

For 2008, 33 primary variables were originally created, which included the following:

Area (the region under which the team registered), university/team name, medal/prize/final six, university citation score, university rank, projected budget, budget at the time of registration, university endowment, whether the school was public or private (USA only), total academic sponsors, total biotech sponsors, total other sponsors, number of BioBricks submitted, whether the team withdrew, total students, students with no data, biology students, chemistry students, engineering students, mathematics/computer science students, physics students, medical students, art students, social science students, total advisors, engineering advisors, biology advisors, medical advisors, physics advisors, chemistry advisors, mathematics/computer science advisors, and art advisors.

2009 incorporated one extra variable, comparing each team's predicted award with the award actually received. This information was not available for 2008, as judging forms were not yet used online in that year's competition. Finally, university research score was added for 2010.

The next step in the data analysis was to create an ordinal measurement of the medal criteria and of the predicted versus awarded medals, as well as a student-to-advisor ratio. An ordinal scale assigns a particular ranking order to the data. The medal criteria were scaled as follows, so that teams that withdrew could be included in our samples:

0 – Team withdrew
1 – No medal awarded
2 – Bronze
3 – Silver
4 – Gold
5 – Finalist
6 – Grand Prize Winner

This scale now allows us to analyse the relative success of each team.
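The ordinal coding above can be sketched as a simple lookup. This is a minimal illustration, not the study's actual tooling; the award labels are assumptions about how results were recorded.

```python
# Ordinal success scale from the Methods section; award labels are
# assumed names, matching the scale rather than the iGEM site verbatim.
AWARD_SCALE = {
    "Withdrew": 0,
    "No medal": 1,
    "Bronze": 2,
    "Silver": 3,
    "Gold": 4,
    "Finalist": 5,
    "Grand Prize Winner": 6,
}

def ordinal_score(award: str) -> int:
    """Map a team's recorded outcome to its ordinal success score."""
    return AWARD_SCALE[award]
```

Coding withdrawals as 0 rather than excluding them is what lets withdrawn teams remain in the sample.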

The monetary variables were also converted to a common base of US dollars. To keep these values comparable, historical exchange rates were taken from an online currency website (www.xe.com) at a specific time in each respective year.
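The conversion step amounts to multiplying each budget by the year's fixed rate. A minimal sketch follows; the rates shown are purely illustrative placeholders, not the actual xe.com figures used in the study.

```python
# Hypothetical historical exchange rates (USD per unit of currency).
# The real study used rates from www.xe.com at a fixed time each year;
# these numbers are illustrative only.
USD_PER_UNIT = {
    ("GBP", 2008): 1.85,
    ("EUR", 2008): 1.47,
    ("USD", 2008): 1.00,
}

def to_usd(amount: float, currency: str, year: int) -> float:
    """Convert a budget figure to US dollars using that year's fixed rate."""
    return amount * USD_PER_UNIT[(currency, year)]
```

Fixing one rate per year, rather than the rate on each team's registration date, keeps all budgets within a year on the same footing.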

In total, data from 303 iGEM teams spanning the last three years was collected.

The majority of the data was found on the iGEM website for each respective year. Data regarding student and advisor numbers, sponsors, parts submitted, degrees, and advisor specialties were obtained from the various teams' wikis, accessed through the iGEM website. For advisors' specialties that could not be found there, we used the Google search engine to locate the information from their universities or laboratories. After a thorough search, anyone who could not be firmly identified was placed in the 'no data' category. The projected budget and the budget at time of registration were taken from the resource description page on each team's information page on the iGEM website. The medals or prizes awarded, as well as the number of withdrawals, were found on the iGEM results page for each year. Each university's citation score and world rank were found using 'The Times Higher Education World Rankings'. Finally, endowment and public/private status were obtained from the universities' websites.

What data was compiled and which data we chose to work with; programs used; acknowledgements

Analysis

A general correlation matrix was used initially, and this found very basic relations between the examined variables; however, this approach was employed with care, as chance relationships may be over-emphasised. Correlations of note against the award received by the team, prominent at the 1% significance level (equivalently, p < 0.01), are the number of BioBricks submitted, the projected budget, the total number of sponsors, the total number of advisors, and the student-to-advisor ratio.
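Each cell of such a correlation matrix is a Pearson coefficient, with significance judged via a t statistic on n paired observations. A minimal stdlib sketch of the two ingredients, not the statistical package actually used:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t statistic for testing r != 0, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r * r))
```

Comparing the t statistic against the critical value for n − 2 degrees of freedom gives the p-value; "prominent at the 1% level" means that p-value falls below 0.01.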

University research ranking and university citation score are an example of a pair of variables with very strong significance between them; however, this also shows how some relations are overly accentuated. This correlation is not particularly useful for our project, as the two measures are obviously dependent on each other when it comes to ranking. With this in mind, we selected the relationships that we thought would have a significant bearing on the project and placed them into a general mixed linear model.

When all of the relevant variables, including year and division, were initially incorporated into the model, the sample size fell to 122 cases. Although this is only a fraction of the original 303, it is still able to display significance in the data. Two variables are prominently significant at the 0.01 level: the number of BioBricks submitted and the total number of advisors. The total number of students is also significant at the 0.05 level.

As BioBricks are an outcome of iGEM rather than an input, we decided to focus on the input factors that may affect a team's success. Reselecting the data increased the sample to 131 cases. This time the only variables that show significance are the total number of advisors and, once again at the 0.05 threshold, the total number of students. There is strong evidence to suggest that the total number of advisors a team has does affect its success.
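The core of fitting such a model is least squares. As a stand-in for the full mixed linear model (whose software and exact specification the text does not give), here is a one-predictor least-squares fit of ordinal award score against advisor count, on made-up toy numbers:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a + b*x.
    A one-predictor stand-in for the study's mixed linear model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Toy data, purely illustrative: advisor counts vs ordinal award scores.
advisors = [1, 2, 3, 5, 8]
scores = [1, 2, 2, 4, 5]
intercept, slope = fit_line(advisors, scores)
```

A positive fitted slope is what "total advisors affects success" looks like in model terms; the real analysis additionally controls for the other selected variables.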

The next stage was to investigate whether the total number of students ultimately had an effect on the awards received. This was examined via two separate variables, the student total and the student-to-advisor ratio, to compare their consequences. With only these variables to consider, we could use the whole population of 303 cases. The statistical analysis clearly showed that the total number of advisors is again the key significant variable; the student-to-advisor ratio and total students are no longer significant. Yet again, we have strong evidence that the total number of advisors is an important factor.

Although we have shown that there is a strong positive correlation between the number of advisors on a team and its success, we were unable to analyse the breakdowns further to see whether more applicable information could be gleaned.

What does this mean for iGEM?

Get 15 advisors.