Team:USTC-Software/parameter

Parameter section

Background

An ideal tool already exists in electronics: the user provides a part's name, the software fetches the standard model and its associated parameters, and it returns a complete mathematical model with all parameters known (for instance, the PROTEL software can easily pull a released device from NI, with all its response parameters standardized in a device library).

For the time being, however, the state of knowledge in synthetic biology does not support this:

Most parts do not have a standard model associated with them. Some parameters of the parts have not been measured yet. And even for parts with quantitative parameter values, those values are highly context dependent and hard to transfer, with fixed parameters, across different hosts.

Further considerations

Why do we attempt automatic parameter fitting (or estimation/adjustment)?

I. If the network is large, with too many undefined parameters, adjusting all the empty parameters by hand, guided by experience and web resources, exhausts the user. And networks generated by rule-based modeling are ordinarily large.

II. Compared with manual parameter adjustment, automatic in silico parameter adjustment is more convenient. It estimates the parameter values of interest from wet-lab data, freeing the user from spending too much time searching for good parameters. (A sketch of the objective such a fit minimizes follows below.)
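As an illustration of what automatic fitting optimizes, here is a minimal sketch of a least-squares objective: the distance between a simulated model and wet-lab measurements, as a function of the unknown parameters. The toy model, the function names (simulate_model, fitting_error), and the data are hypothetical stand-ins, not part of the team's actual tool.

```python
import numpy as np

def simulate_model(params, times):
    """Hypothetical stand-in for simulating a network model.

    Here: a toy exponential-decay model y(t) = a * exp(-k * t),
    with params = (a, k). A real tool would integrate the ODEs of
    the generated network instead.
    """
    a, k = params
    return a * np.exp(-k * times)

def fitting_error(params, times, measurements):
    """Sum of squared errors between simulation and wet-lab data.

    Automatic parameter fitting searches for the params that
    minimize this value.
    """
    predicted = simulate_model(params, times)
    return float(np.sum((predicted - measurements) ** 2))

# Example: fake "wet-lab" data generated from known parameters (2.0, 0.5).
times = np.linspace(0.0, 10.0, 20)
measurements = 2.0 * np.exp(-0.5 * times)
print(fitting_error((2.0, 0.5), times, measurements))  # ~0: perfect fit
print(fitting_error((1.0, 1.0), times, measurements))  # > 0: poor fit
```

A fitting algorithm, such as the PSO or SA methods described below, then searches parameter space for the values that minimize fitting_error.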

Successful examples

We adopted a demo from a TinkerCell tutorial website that implements a simple repressilator. After the network is generated by hand, the parameters are left for the user to adjust. But as the following four figures show, the process is tedious and time consuming. (Further snapshots of the process are omitted.)

The four figures above show the difficulty of adjusting the parameters by hand.

By contrast, parameter estimation with our in silico approach finishes in a timely fashion; it is fast and convenient (see the figure below).

Algorithms

Particle Swarm Optimization (PSO) Algorithm

Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling.

Each particle keeps track of the coordinates in the problem space associated with the best solution (fitness) it has achieved so far. (The fitness value is also stored.) This value is called pbest. Another "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle; this location is called lbest. When a particle takes the whole population as its topological neighbors, the best value is a global best, called gbest.

The particle swarm optimization concept consists of changing, at each time step, the velocity of (accelerating) each particle toward its pbest and lbest locations (in the local version of PSO). Acceleration is weighted by a random term, with separate random numbers generated for the acceleration toward the pbest and lbest locations.
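Below is a minimal sketch of the update just described, assuming the global-best (gbest) variant of PSO and conventional coefficient values (inertia w = 0.7, acceleration coefficients c1 = c2 = 1.5). These constants and the function pso_minimize are illustrative, not values or names reported by the team.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(objective, bounds, n_particles=30, n_steps=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO: each step accelerates every particle toward
    its own best position (pbest) and the swarm's best (gbest)."""
    lo, hi = np.array(bounds, dtype=float).T        # bounds: [(lo, hi), ...]
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_steps):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + pull toward pbest + pull toward gbest,
        # each pull weighted by its own random term.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

Applied to the toy objective above, for example, pso_minimize(lambda p: fitting_error(p, times, measurements), [(0.0, 5.0), (0.0, 2.0)]) recovers parameters close to (2.0, 0.5).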

PSO has been successfully applied in many research and application areas. It has been reported to obtain good results faster and more cheaply than many other methods.

Simulated Annealing (SA) Algorithm

Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more efficient than exhaustive enumeration, provided that the goal is merely to find an acceptably good solution in a fixed amount of time rather than the best possible solution.

In the simulated annealing (SA) method, each point s of the search space is analogous to a state of some physical system, and the function E(s) to be minimized is analogous to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

At each step, the SA heuristic considers some neighbouring state s' of the current state s, and probabilistically decides between moving the system to state s' or staying in state s. These probabilities ultimately lead the system to move to states of lower energy. Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.

The neighbours of a state are new states produced by altering the given state in some particular way. For example, in the traveling salesman problem each state is typically defined as a particular permutation of the cities to be visited, and the neighbours of a permutation are the permutations produced, for example, by interchanging a pair of adjacent cities. The action taken to alter the solution in order to find neighbouring solutions is called a "move", and different moves give different neighbours. These moves usually alter the solution only minimally, as in the example above, so that the algorithm can retain the already-optimum parts of the solution and affect only the suboptimum parts. In the previous example, the parts of the solution are the parts of the tour.
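Here is a minimal sketch of such a move, assuming the traveling-salesman example above: a neighbour is produced by swapping one pair of adjacent cities, leaving the rest of the tour intact.

```python
import random

def neighbour(tour):
    """Return a neighbouring tour: swap one pair of adjacent cities.

    The move alters the solution minimally, keeping most of the tour
    (the already-optimum parts) unchanged.
    """
    new_tour = list(tour)
    i = random.randrange(len(new_tour) - 1)
    new_tour[i], new_tour[i + 1] = new_tour[i + 1], new_tour[i]
    return new_tour

print(neighbour(["A", "B", "C", "D"]))  # e.g. ['A', 'C', 'B', 'D']
```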

The probability of making the transition from the current state s to a candidate new state s' is specified by an acceptance probability function P(e, e', T), which depends on the energies e = E(s) and e' = E(s') of the two states and on a global time-varying parameter T called the temperature. States with a smaller energy are better than those with a greater energy. The probability function P must be positive even when e' is greater than e; this feature prevents the method from becoming stuck at a local minimum that is worse than the global one.
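The text does not give P explicitly; the sketch below assumes the classic Metropolis form, P = 1 when e' < e and P = exp(-(e' - e)/T) otherwise, together with a simple geometric cooling schedule. Both choices are assumptions made for illustration, not details specified above.

```python
import math
import random

def acceptance(e, e_new, T):
    """Metropolis acceptance (an assumed form of P(e, e', T)):
    always accept improvements; accept worse states with probability
    exp(-(e_new - e) / T) > 0, which lets the search escape local minima."""
    if e_new < e:
        return 1.0
    return math.exp(-(e_new - e) / T)

def simulated_annealing(energy, neighbour, s0, T0=1.0, cooling=0.995,
                        n_steps=10_000):
    """Minimize energy(s) starting from state s0, using the given
    neighbour(s) move to generate candidate states."""
    s, e = s0, energy(s0)
    best_s, best_e = s, e
    T = T0
    for _ in range(n_steps):
        s_new = neighbour(s)
        e_new = energy(s_new)
        if random.random() < acceptance(e, e_new, T):
            s, e = s_new, e_new
            if e < best_e:
                best_s, best_e = s, e
        T *= cooling  # geometric cooling schedule (an assumption)
    return best_s, best_e
```

Because T shrinks over time, worse moves are accepted freely early on (wide exploration) and almost never near the end (local refinement).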