Author: user's name hidden, 19 February 2012 at 20:48, essay
Scheduling is a decision-making process that is used on a regular basis in many manufacturing and services industries. It deals with the allocation of resources to tasks over given time periods and its goal is to optimize one or more objectives. The resources and tasks in an organization can take many different forms.
The following job shop program, written in the modelling language OPL, illustrates how such a scheduling problem can be stated:

01. int nbMachines = . . .;
02. range Machines 1..nbMachines;
03. int nbJobs = . . .;
04. range Jobs 1..nbJobs;
05. int nbOperations = . . .;
06. range Operations 1..nbOperations;
07. Machines resource[Jobs,Operations] = . . .;
08. int+ duration[Jobs,Operations] = . . .;
09. int totalDuration = sum(j in Jobs, o in Operations) duration[j,o];
10. scheduleHorizon = totalDuration;
11. Activity operation[j in Jobs, o in Operations](duration[j,o]);
12. Activity makespan(0);
13. UnaryResource tool[Machines];
14. minimize
15. makespan.end
16. subject to {
17. forall(j in Jobs)
18. operation[j,nbOperations] precedes makespan;
19. forall(j in Jobs)
20. forall(o in 1..nbOperations-1)
21. operation[j,o] precedes operation[j,o+1];
22. forall(j in Jobs)
23. forall(o in Operations)
24. operation[j,o] requires tool[resource[j,o]];
25. };
The first six lines define the input data of the problem, consisting of the number of machines, the number of jobs, and the number of operations. The array resource contains input data specifying the identity and the characteristics of the machine needed for processing a particular operation of a job. The array duration specifies the time required for processing each operation of a job. The keyword Activity implicitly indicates a decision variable. An activity consists of three elements: a start time, an end time and a duration. In the declaration given here, the duration of each activity is given as data. When an activity is declared, the constraint
operation[j,o].start + operation[j,o].duration =
operation[j,o].end
is automatically included in the program. The makespan activity is a dummy activity with zero processing time that will be the very last operation in the schedule. The keyword UnaryResource also implies a decision variable: it has to be decided which activities are assigned to which resources at any given time. The remaining part of the program states the problem: first the objective and then all the constraints. The word precedes is a keyword of the language, so one of the constraints is immediately translated internally into
operation[j,o].end <= operation[j,o+1].start
The word requires is also a keyword of the language. Declaring a set of requirements causes the creation of a set of so-called disjunctive constraints (see Appendix A). For example, let the resource tool[i] denote a given machine i and let operation[j1,o1] and operation[j2,o2] denote two operations which both require machine i, i.e., the data include resource[j1,o1] = resource[j2,o2] = i. The two requires statements then give rise to the pair of constraints

operation[j1,o1].end <= operation[j2,o2].start

or

operation[j2,o2].end <= operation[j1,o1].start

These disjunctive constraints imply that operation[j1,o1] either precedes or follows operation[j2,o2].
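To make these constructs concrete, here is a minimal sketch of the same job shop model written in Python with Google OR-Tools CP-SAT (the choice of CP-SAT is an assumption of this sketch; the text itself uses OPL). The interval variables play the role of the activities, the Add(...) precedence constraints mirror the precedes statements, and AddNoOverlap posts exactly the disjunctive constraints described above.

from ortools.sat.python import cp_model

def job_shop_makespan(duration, resource):
    # duration[j][o] and resource[j][o] correspond to the data arrays
    # of the OPL program above.
    model = cp_model.CpModel()
    horizon = sum(sum(row) for row in duration)        # totalDuration
    start, end, on_machine = {}, {}, {}
    for j, ops in enumerate(duration):
        for o, p in enumerate(ops):
            start[j, o] = model.NewIntVar(0, horizon, f's{j}_{o}')
            end[j, o] = model.NewIntVar(0, horizon, f'e{j}_{o}')
            # An interval variable enforces start + duration = end,
            # just as the Activity declaration does in OPL.
            iv = model.NewIntervalVar(start[j, o], p, end[j, o], f'op{j}_{o}')
            on_machine.setdefault(resource[j][o], []).append(iv)
        for o in range(len(ops) - 1):
            model.Add(end[j, o] <= start[j, o + 1])    # operation o precedes o+1
    for intervals in on_machine.values():
        # 'requires': pairwise disjunctive constraints on each machine
        model.AddNoOverlap(intervals)
    makespan = model.NewIntVar(0, horizon, 'makespan')
    model.AddMaxEquality(
        makespan, [end[j, len(ops) - 1] for j, ops in enumerate(duration)])
    model.Minimize(makespan)
    solver = cp_model.CpSolver()
    solver.Solve(model)
    return solver.Value(makespan)

For instance, job_shop_makespan([[3, 2], [2, 4]], [[0, 1], [1, 0]]) solves a two-job, two-machine instance in which the first job visits machine 0 and then machine 1, and the second job visits them in the opposite order.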
C.4 Constraint Programming vs. Mathematical Programming

Constraint programming, as a modelling tool, is more flexible than mathematical programming. The objective functions as well as the constraints can be of a more general nature, since the decision variables may represent not only integer values and real values but also set elements and even subsets of sets.

The ways in which searches for optimal solutions can be conducted in mathematical programming and in constraint programming have similarities as well as differences. One similarity between constraint programming and the branch-and-bound method applied to integer programming lies in the fact that both approaches typically search through a tree in which each node represents a partial schedule. Both approaches have to deal with issues such as how to examine the search space effectively and how to evaluate all the alternatives as efficiently as possible (i.e., how to prune the tree). One of the differences between the dichotomic search used in constraint programming and a branch-and-bound procedure for a mixed-integer programming problem lies in the fact that the dichotomic search stresses the search for feasible solutions, whereas a branch-and-bound procedure emphasizes improvements in lower bounds.
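A minimal sketch of the dichotomic search idea in Python (feasible is a hypothetical stand-in for a constraint program that searches for any schedule with makespan at most T):

def dichotomic_search(feasible, lower, upper):
    # Returns the smallest integer T in [lower, upper] with feasible(T)
    # true, assuming feasibility is monotone in T.
    best = None
    while lower <= upper:
        mid = (lower + upper) // 2
        if feasible(mid):            # a CP run that finds some schedule
            best, upper = mid, mid - 1
        else:                        # an infeasibility proof raises the bound
            lower = mid + 1
    return best

Each pass thus poses a pure feasibility question, which is exactly where constraint propagation is strong.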
Constraint programming techniques and mathematical programming techniques can also be embedded within one framework. Such an integrated approach has already proven useful for a variety of scheduling problems. Consider, for example, the application of a branch-and-price approach to the parallel machine problem Pm || Σ wjCj. Such a problem is usually formulated as a set partitioning problem and the solution techniques are typically based on column generation. A column in this parallel machine problem represents a schedule for one machine, i.e., a feasible combination of a subset of the jobs. Since the potential number of columns is huge, it is important to be able to start the optimization with an appropriate initial subset of columns. The solution strategy uses constraint programming in two ways. First, it is used when generating the initial subset of columns. Second, it is used as a subproblem algorithm for generating new columns during the optimization process. Thus, the master problem must find a set of appropriate single machine scheduling problems. The main program alternates between solving the linear programming relaxation of the set covering problem and solving a column generation problem that generates new columns for the master problem. Once the columns are satisfactory, the columns are fixed and the set covering problem is solved to find the integer optimal solution. The constraint programming subproblem is interesting. Constraint programming is used to enumerate the potential single machine schedules. In the initialization stage a set of single machine schedules is generated that is guaranteed to cover every job. In the column generation phase the constraint program is used twice. First, the optimal single machine schedule under the current set of dual multipliers is determined. After this has been determined, a search is done for all single machine schedules with a reduced cost of at least half that of the optimal schedule. This provides a large set of entering columns and eliminates one of the major weaknesses of column generation, namely the large number of iterations needed to improve the objective value in the master problem. Constraint programming also turns out to be suitable for the preprocessing stage: a constraint programming routine can detect columns that are infeasible and in this way reduce the number of columns to start out with.

Modeling and Solving Scheduling Problems in Practice

Scheduling Problems in Practice

Real world scheduling problems are usually very different from the mathematical models studied by researchers in academia. It is not easy to list all the differences between the real world problems and the theoretical models, as every real world problem has its own particular idiosyncrasies. Nevertheless, a number of differences are common and therefore worth mentioning.

(i) Theoretical models usually assume that there are n jobs to be scheduled and that after scheduling these n jobs the problem is solved. In the real world there may be at any point in time n jobs in the system, but new jobs are added continuously. Scheduling the current n jobs has to be done without perfect knowledge of the near future. Hence, some provisions have to be made in order to be prepared for the unexpected. The dynamic nature may require, for example, that slack times are built into the schedule to accommodate unexpected rush jobs or machine breakdowns.

(ii) Theoretical models usually do not emphasize the resequencing problem. In practice the following problem often occurs: there exists a schedule, which was determined earlier based on certain assumptions, and an (unexpected) random event occurs that requires either major or minor modifications in the existing schedule. The rescheduling process, which is sometimes referred to as reactive scheduling, may have to satisfy certain constraints. For example, one may wish to keep the changes in the existing schedule at a minimum, even if an optimal schedule cannot be achieved this way. This implies that it is advantageous to construct schedules that are in a sense "robust", that is, schedules in which resequencing brings about only minor changes. The opposite of robust is often referred to as "brittle".
(iii) Machine environments in the real world are often more complicated than the machine environments considered in previous chapters. Processing restrictions and constraints may also be more involved; they may be machine dependent, job dependent or time dependent.

(iv) In the mathematical models the weights (priorities) of the jobs are assumed to be fixed, i.e., they do not change over time. In practice, the weight of a job often fluctuates over time, and it may do so as a random function: a low priority job may suddenly become a high priority job.

(v) Mathematical models often do not take preferences into account. In a model a job either can or cannot be processed on a given machine; that is, whether or not the job can be scheduled on a machine is a 0-1 proposition. In reality, it often occurs that a job can be scheduled on a given machine, but that for some reason there is a preference to process it on another one; scheduling it on the first machine would only be done in case of an emergency and may involve additional costs.

(vi) Most theoretical models do not take machine availability constraints into account; usually it is assumed that machines are available at all times. In practice machines are usually not continuously available. There are many reasons why machines may not be in operation; some of these reasons are based on a deterministic process, others on a random process. The shift pattern of the facility may be such that the facility is not in operation throughout. At times preventive maintenance may be scheduled. The machines may also be subject to a random breakdown and repair process.

(vii) Most penalty functions considered in research are piecewise linear, e.g., the tardiness of a job, the unit penalty, and so on. In practice there usually does exist a committed shipping date or due date, but the penalty function is usually not piecewise linear. Such a penalty function may be regarded as a function that lies somewhere in between the tardiness function and the unit penalty function (see the sketch after item (viii)).

(viii) Most theoretical research has focused on models with a single objective. In the real world there are usually several objectives, and their respective weights may vary over time and may even depend on the particular scheduler in charge. One particular combination of objectives appears to occur very often, especially in the process industry, namely the minimization of the total weighted tardiness and the minimization of the sum of the sequence dependent setup times (especially on bottleneck machines). The minimization of the total weighted tardiness is important since maintaining quality of service is usually an objective that carries weight. The minimization of the sum of the sequence dependent setup times is important as, to a certain extent, it increases the throughput. When such a combination is the overall objective, the weights given to the two objectives may not be fixed; the weights may depend on the time as well as on the current status of the production environment. If the workload is relatively heavy, then minimization of the sequence dependent setup times is more important; if the workload is relatively light, minimization of the total weighted tardiness is more important.
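Returning to item (vii), here is a small sketch of a penalty function that lies in between the tardiness function and the unit penalty function (the particular shape and parameter values are illustrative assumptions, not taken from the text):

def tardiness(C, d, w=1.0):
    return w * max(0.0, C - d)              # piecewise linear

def unit_penalty(C, d, w=1.0):
    return w if C > d else 0.0              # 0-1 step at the due date

def practical_penalty(C, d, w_fix=1.0, w_rate=0.5, cap=4.0):
    # Zero until the due date d, a jump as soon as the job is late,
    # then a growing term that eventually levels off.
    if C <= d:
        return 0.0
    return w_fix + min(w_rate * (C - d), cap)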
(ix) The scheduling process is in practice often strongly connected with the assignment of shifts and the scheduling of overtime. Whenever the workload appears to be excessive and due dates appear to be too tight, the decision maker may have the option to schedule overtime or put in extra shifts in order to meet the committed shipping dates.

(x) The stochastic models studied in the literature usually assume very special processing time distributions. The exponential distribution, for example, is a distribution that has been studied in depth. In reality, processing times are usually not exponentially distributed. One common shape is the density function depicted in Figure 16.2.a, which can be thought of as the convolution of a deterministic (fixed value) and an Erlang(k, λ); the number of phases of the Erlang(k, λ) tends to be low, say 2 or 3. This density function tends to occur when a given task is performed manually, and it is plausible that processing times have such a density function. One can imagine that there is a certain minimum time that is needed to perform the task to be done; even the best worker cannot get below this minimum (which is equal to the fixed value). However, there is a certain amount of variability in the processing times that may depend on the person performing the task. The density function may have a tail at the right which represents those processing times during which something went wrong. One can easily show that this density function has an Increasing Completion Rate. Another type of density function that occurs in practice is the one depicted in Figure 16.2.b: the processing time is a fixed value with a very high probability, say 0.98, and with a very low probability, say 0.02, it includes an additional random time that is exponentially distributed with a very large mean. This type of density function occurs often in automated manufacturing or assembly. If a robot performs a task, the processing time is always fixed (deterministic); however, if by accident something goes wrong, the processing time immediately becomes significantly larger.
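Both distributions described under item (x) are easy to sample; the parameter values below are illustrative assumptions.

import random

def manual_task_time(minimum=2.0, k=3, rate=1.5):
    # Convolution of a deterministic minimum and an Erlang(k, rate):
    # even the best worker needs `minimum`, and the Erlang part models
    # the person-dependent variability with a tail at the right.
    return minimum + sum(random.expovariate(rate) for _ in range(k))

def automated_task_time(fixed=2.0, p_incident=0.02, mean_extra=50.0):
    # A fixed time with high probability; with small probability an
    # additional exponential delay with a very large mean (the robot jams).
    if random.random() < p_incident:
        return fixed + random.expovariate(1.0 / mean_extra)
    return fixed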
(xi) Another important aspect of random processing times is correlation. Successive processing times on the same machine tend to be highly positively correlated in practice. In the stochastic models studied in the literature, all processing times are usually assumed to be independent draws from a given distribution (or distributions). One of the effects of a positive correlation is an increase in the variance of the performance measures.

(xii) Processing time distributions may be subject to change due to learning or deterioration. When the distribution corresponds to a manual operation, there is a possibility of learning: the person performing the operation may be able to reduce the average time he needs for performing it. If the distribution corresponds to an operation in which a machine is involved, the aging of the machine may have the effect that the average processing time increases.

In spite of the many differences between the real world and the mathematical models discussed in the previous sections, the general consensus is that the theoretical research done in past years has not been a complete waste of time. It has provided valuable insights into many scheduling problems, and these insights have proven to be useful in the development of the algorithmic framework for a large number of real world scheduling systems. The remaining part of this chapter deals with examples of real world problems for which scheduling theory has turned out to be useful. In practice scheduling problems are often tackled with seemingly crude heuristics; the reason for not applying more sophisticated procedures usually lies in the relatively high frequency of random events, which often cause schedule changes during the execution of an existing schedule.

Cyclic Scheduling of a Flow Line

Consider a number of machines in series with a limited buffer between each pair of successive machines. When a buffer is full, the machine that feeds that buffer cannot release any more jobs; this machine is then blocked. Serial processing is common in assembly lines for physically large items, such as television sets, copiers and automobiles, whose size makes it difficult to keep many items waiting in front of a machine. Moreover, the material handling system often does not permit a job to bypass another, so that each machine serves the jobs according to the First In First Out (FIFO) discipline. Such flow lines with blocking are often used in Flexible Assembly Systems. Since different jobs may correspond to different product types, the processing requirements of one job may be different from those of another. Even jobs from the same product family may have different option packages. The timing of job releases from a machine may be a function of the queue at the machine immediately downstream as well as of the queue at the machine itself.

This machine environment with limited buffers is mathematically equivalent to a system of machines in series with no buffers between any machines: a buffer space can be modeled as a machine on which all processing times are zero. So any number of machines with limited buffers can be transformed mathematically into a (larger) number of machines in series with zero buffers in between. Such a model, with all buffers equal to zero, is mathematically easier to formulate, since there are no differences in buffer sizes to keep track of.

Sequences and schedules used in flow lines with blocking are often periodic or cyclic: a number of different jobs are scheduled in a certain way, and this schedule is repeated over and over again. A cyclic schedule does not necessarily achieve the maximum throughput; often an acyclic schedule does. However, cyclic schedules have a natural advantage because of their simplicity: they are easy to keep track of and they impose a certain discipline. In practice, there is often an underlying basic cyclic schedule from which minor deviations are allowed, depending upon current orders.

In order to discuss this class of schedules further, it is useful to define the Minimum Part Set (MPS). Let Nk denote the number of jobs that correspond to product type k in the overall production target, and suppose there are l different product types. If q is the greatest common divisor of the integers N1, . . . , Nl, then the vector N̄ = (N1/q, . . . , Nl/q) represents the smallest set having the same proportions of the different product types as the production target. This set is usually referred to as the Minimum Part Set (MPS). Given the vector N̄, the items in an MPS may, without loss of generality, be regarded as n jobs, where n = (N1 + · · · + Nl)/q. The processing time of job j, j = 1, . . . , n, on machine i is pij. Cyclic schedules are specified by the sequence of the n jobs in the MPS. The fact that some jobs within an MPS may correspond to the same product type and have identical processing requirements does not affect the approach described below.
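Computing the MPS vector is a short gcd calculation; a minimal sketch (the example target is an illustrative assumption):

from math import gcd
from functools import reduce

def minimum_part_set(targets):
    # targets = (N1, ..., Nl); q is the gcd of N1, ..., Nl
    q = reduce(gcd, targets)
    return [N // q for N in targets]

For example, a production target of (6, 4, 2) gives the MPS (3, 2, 1), so each MPS consists of n = 6 jobs.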
The Profile Fitting (PF) heuristic described in Chapter 6 for Fm | block | Cmax is well suited for the machine environment described above and appears to be a good heuristic for minimizing the cycle time in steady state. The cycle time is the time between the first jobs of two consecutive MPS's entering the system; minimizing the cycle time is basically equivalent to maximizing the throughput. The sketch below illustrates the cycle time concept.
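A minimal sketch of the cycle time computation, assuming zero intermediate buffers (justified by the equivalence discussed above): a job departs machine i only when it has finished processing and machine i+1 has been vacated, and the cycle time is read off once consecutive MPS copies complete at a constant interval.

def cycle_time(p, cycles=50):
    # p[i][j]: processing time on machine i of job j of one MPS, with the
    # jobs listed in the chosen MPS sequence; the MPS repeats `cycles` times.
    m, n = len(p), len(p[0])
    dep_prev = [0.0] * (m + 2)       # departure times of the previous job
    ends = []
    for count in range(cycles * n):
        j = count % n
        dep = [0.0] * (m + 2)
        dep[0] = dep_prev[1]         # enter machine 1 once it is vacated
        for i in range(1, m + 1):
            # leave machine i when processing is done and machine i+1
            # has been vacated by the previous job (blocking)
            dep[i] = max(dep[i - 1] + p[i - 1][j], dep_prev[i + 1])
        if j == n - 1:
            ends.append(dep[m])      # completion time of one MPS copy
        dep_prev = dep
    return ends[-1] - ends[-2]       # steady-state cycle time

For instance, with two machines and a two-job MPS given by p = [[1, 2], [2, 1]], the schedule settles into a repeating pattern and cycle_time(p) returns the steady-state interval between consecutive MPS completions.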
Scheduling of a Flexible Flow Line with Limited Buffers and Bypass

Consider a number of stages in series with a number of machines in parallel at each stage. A job, which in this environment is often equivalent to a batch of relatively small identical items such as Printed Circuit Boards (PCBs), needs to be processed at every stage on only one machine. Usually any of the machines will do, but it may be that not all the machines at a given stage are identical and that a given job has to be processed on a specific machine. A job may not need to be processed at a stage at all; in this case the material handling system that moves jobs from one stage to the next will usually allow the job to bypass that stage and all the jobs residing there (see Figure 16.4). The buffers at each stage may have limited capacity. If this is the case and the buffer is full, then either the material handling system must come to a standstill or, if there is the option to recirculate, the job bypasses that stage and recirculates. The manufacturing process is repetitive, and it is of interest to find a cyclic schedule (similar to the one in the previous section).

The Flexible Flow Line Loading (FFLL) algorithm was designed at IBM for the machine environment described above. The algorithm was originally conceived for an assembly system used for the insertion of components in PCBs. The two main objectives of the algorithm are

(i) the maximization of throughput, and
(ii) the minimization of WIP.
With the goal of maximizing the throughput, an attempt is made to minimize the makespan of a whole day's mix; the FFLL algorithm actually attempts to minimize the cycle time of a Minimum Part Set (MPS). As the amount of buffer space is limited, minimizing the WIP reduces the probability of blocking. The FFLL algorithm consists of three phases:
Phase 1: machine allocation,
Phase 2: sequencing,
Phase 3: timing of releases.
The machine allocation phase assigns each job to a particular machine in each bank of machines. Machine allocation is done before sequencing and timing because the workload assigned to each machine must be known before the last two phases can be performed. The lowest conceivable maximum workload at a bank would be obtained if all the machines in that bank were given the same workload. In order to obtain balanced, or nearly balanced, workloads over the machines in a bank, the Longest Processing Time first (LPT) heuristic, described in Section 5.1, is used. According to this heuristic all the jobs are, for the time being, assumed to be available at the same time, and are allocated to a bank one at a time, on the next available machine, in decreasing order of their processing time. After the allocation is determined in this way, the items assigned to a machine may be resequenced without altering the workload balance over the machines in a given bank. The output of this phase is merely the allocation of jobs, not the sequencing of the jobs or the timing of the processing.
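A minimal sketch of the LPT allocation rule for a single bank (the job data in dictionary form is an illustrative assumption): jobs are taken in decreasing order of processing time, and each goes on the machine that becomes available first, i.e., the one with the smallest workload so far.

import heapq

def lpt_allocation(times, m):
    # times: {job: processing time}; m: number of machines in the bank
    loads = [(0.0, i) for i in range(m)]         # (workload, machine) heap
    assignment = {i: [] for i in range(m)}
    for job, p in sorted(times.items(), key=lambda kv: -kv[1]):
        load, i = heapq.heappop(loads)           # least loaded machine
        assignment[i].append(job)
        heapq.heappush(loads, (load + p, i))
    return assignment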
The sequencing phase determines the order in which the jobs of the MPS are released into the system, which has a strong effect on the cycle time. The FFLL algorithm uses a Dynamic Balancing heuristic for sequencing an MPS. This heuristic is based on the intuitive observation that jobs tend to queue up in the buffer of a machine if a large workload is sent to that machine within a short time interval. This occurs when there is an interval in the loading sequence that contains many jobs with large processing times going to the same machine. Let n be the number of jobs in an MPS and m the number of machines in the entire system, and let pij denote the processing time of job j on machine i.

Scheduling a Bank of Parallel Machines with Jobs having Release Dates and Due Dates

This section focuses on the gate assignments at an airport, as described in earlier examples. Consider m machines in parallel and n jobs. Job j has a processing time pj, a release date rj and a due date dj. Job j may be processed on any machine that belongs to a set Mj. The goal is to find a feasible assignment of jobs to machines. At times, one can make in the problem formulation a distinction between hard constraints and soft constraints. A hard constraint has to be satisfied at all cost; a soft constraint basically refers to a preference. For example, it may be preferable to process job j on a machine that belongs to the set Mj, but if necessary job j can be processed on any one of the m machines. There may be additional constraints as well. For example, job j may be processed on machine i ∈ Mj only if it is started after rij and completed before dij. The entire algorithmic procedure consists of three phases:
Phase 1: Constraint guided heuristic search;
Phase 2: Constraint relaxation;
Phase 3: Assignment adjustment.
The constraint guided heuristic search phase uses a generalization of the procedure described in Section 15.3 to search for a feasible schedule, i.e., an assignment of jobs to machines that meets all due dates. More often than not, this phase does not immediately yield a feasible schedule with an assignment of all jobs. It often occurs that after a first pass through Phase 1 only a subset of the jobs has been assigned, and that with this partial assignment there does not exist a feasible schedule for the remaining jobs. If this is indeed the case, the procedure has to go through the second phase.

The constraint relaxation phase attempts to find a feasible schedule by relaxing some of the constraints. Constraint relaxation can be done in various ways. First, due dates of jobs that could not be assigned in the first phase may be postponed and/or the processing times of these jobs may be shortened. Second, in the first pass a particular assignment of a job to a machine may have been labelled impossible, whereas in reality this assignment would merely have been less preferable, because the machine in question was not perfectly suited for the particular job (i.e., a soft constraint was being violated); such an assignment can be made possible in a second pass through Phase 1 by relaxing the soft constraint. Third, one of the additional constraints may be relaxed. Constraint relaxation usually yields a complete schedule, with all jobs assigned to machines.

The assignment adjustment phase attempts to improve the schedule via pairwise swaps. Even though the main goal is to obtain a feasible schedule, other objectives may still play a role. Such objectives include balancing the workloads over the various machines and maximizing the idle times between the processing of consecutive jobs on the various machines (the latter objective is important in order to be prepared for potential fluctuations in the processing times).

Note that the second phase in this framework, the constraint relaxation phase, is different from the backtracking that may occur in the execution of a constraint program. When a program backtracks, it annuls some segments of the partial schedule it had already created; it does not, however, make any changes in the formulation or the statement of the scheduling problem. Constraint relaxation, on the other hand, refers to changes that are made in the actual formulation of the problem. The approach described in this section has at times also been referred to as the reformulative approach. The idea is simple: the model has to be reformulated until a feasible schedule is found.
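A schematic sketch of the three phases in Python; everything here, from the greedy earliest-due-date rule standing in for the constraint guided heuristic search to the particular relaxation steps, is an illustrative assumption rather than the procedure of the text.

from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    p: int                                  # processing time
    r: int                                  # release date
    d: int                                  # due date
    M: set = field(default_factory=set)     # preferred machines (soft)

def assign(jobs, m, relax_soft=False):
    # Phase 1 stand-in: greedy assignment in earliest-due-date order.
    free = [0] * m                          # next free time per machine
    schedule, unassigned = {}, []
    for j in sorted(jobs, key=lambda x: x.d):
        options = list(range(m)) if relax_soft or not j.M else sorted(j.M)
        i = min(options, key=lambda k: max(free[k], j.r))
        s = max(free[i], j.r)
        if s + j.p <= j.d:                  # job meets its due date
            schedule[j.name] = (i, s)
            free[i] = s + j.p
        else:
            unassigned.append(j)
    return schedule, unassigned

def reformulative_solve(jobs, m, max_passes=5):
    relax_soft = False
    for _ in range(max_passes):
        schedule, left = assign(jobs, m, relax_soft)
        if not left:
            return schedule                 # Phase 3 (pairwise swaps) would start here
        relax_soft = True                   # Phase 2: relax the soft constraints,
        for j in left:                      # then postpone due dates of failed jobs
            j.d += j.p
    return None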