An Internal Review of iGEM

Intro, reasons behind compilation

Data Collected

Information was compiled on a variety of variables and placed into a tabulated spreadsheet. The number of variables collected differed only slightly across the three years, as it depended on the availability of data for each team.

For 2008, 33 primary variables were originally created, which included the following:

Analyzed Variables

2009 incorporated one extra variable, which compared each team's predicted award with the award it actually received. This information was not available for 2008, as judging forms were not yet used online in that year's competition. Finally, a university research score was included in the 2010 selection.

The next step in the data analysis was to create an ordinal measurement of the medal criteria and of the predicted versus awarded medals, in addition to a student-to-advisor ratio. An ordinal scale assigns a ranking order to the data. The scale for the medal criteria, constructed so that teams that withdrew could be incorporated into our sample, is as follows:

This scale now allows us to analyse the relative success of each team.
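As an illustration of how such an encoding might be applied, the sketch below maps medal outcomes onto ordinal levels and computes the student-to-advisor ratio. The numeric levels shown are our own assumptions for illustration, not necessarily the values used in the analysis.

    # Illustrative sketch only: the medal levels below are assumed
    # (withdrew lowest, gold highest); the actual scale may differ.
    MEDAL_SCALE = {
        "withdrew": 0,
        "no medal": 1,
        "bronze": 2,
        "silver": 3,
        "gold": 4,
    }

    def encode_medal(outcome: str) -> int:
        """Map a team's medal outcome onto the ordinal scale."""
        return MEDAL_SCALE[outcome.lower()]

    def student_advisor_ratio(students: int, advisors: int) -> float:
        """Students per advisor; guards against a team listing no advisors."""
        return students / advisors if advisors else float("nan")

    # Example: a gold-medal team of 12 students and 4 advisors
    print(encode_medal("Gold"), student_advisor_ratio(12, 4))  # 4 3.0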

The monetary variables were also normalised by converting them all to a US dollar base. Historical exchange rates, taken from an online currency website (www.xe.com) at a fixed date in each respective year, were used to keep these values comparable.
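A minimal sketch of that conversion step is below; the exchange rates shown are placeholders, not the historical rates actually taken from www.xe.com.

    # Hypothetical exchange rates to USD at a fixed date in each year;
    # the actual analysis used historical rates from www.xe.com.
    USD_RATES = {
        (2008, "GBP"): 1.85,  # placeholder values, not real rates
        (2009, "GBP"): 1.60,
        (2009, "EUR"): 1.40,
        (2010, "EUR"): 1.33,
    }

    def to_usd(amount: float, currency: str, year: int) -> float:
        """Convert a budget figure to US dollars at that year's reference rate."""
        if currency == "USD":
            return amount
        return amount * USD_RATES[(year, currency)]

    # Example: a 2009 UK team reporting a GBP 20,000 budget
    print(to_usd(20000, "GBP", 2009))  # 32000.0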

In total, information on 303 iGEM teams was collected over the past three years. The majority of the data was found on the iGEM website for each respective year. Data on student and advisor numbers, sponsors, parts submitted, degrees and advisor specialities were obtained from the various teams' wikis, accessed through the iGEM website. For advisors' specialities that could not be found there, we used the Google search engine to locate the information from their universities or laboratories; after a thorough search, anyone who could not be firmly identified was placed in the 'no data' category. The projected budget and the budget at time of registration were taken from the resource description on each team's information page on the iGEM website. The medals and prizes awarded, as well as the number of withdrawals, were found on the iGEM results page for each year. Each university's citation score and world rank were found using 'The Times Higher Education World Rankings'. Finally, endowment and public/private status were obtained from the universities' websites.
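For concreteness, one team's entry in the spreadsheet might be sketched as the record below. The field names are our own shorthand for a subset of the variables described above, not the spreadsheet's actual column headings.

    from dataclasses import dataclass

    # Hypothetical shorthand for a subset of the collected variables;
    # the real spreadsheet had 33+ columns with its own headings.
    @dataclass
    class TeamRecord:
        name: str
        year: int
        students: int
        advisors: int
        sponsors: int
        parts_submitted: int
        projected_budget_usd: float
        medal_score: int        # ordinal medal encoding (see scale above)
        citation_score: float   # from the Times Higher Education rankings
        world_rank: int
        endowment_usd: float
        is_public: bool         # public vs. private university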

Computational Programs

Statistical Background and Theory

Model Method

including pdf

Analysis

What does this mean for iGEM?

Get 15 advisors.

Acknowledgments

References