The Role of Scheduling

Author: name hidden by the user; posted 19 February 2012, 20:48; essay

Description

Scheduling is a decision-making process that is used on a regular basis in many manufacturing and services industries. It deals with the allocation of resources to tasks over given time periods and its goal is to optimize one or more objectives. The resources and tasks in an organization can take many different forms.


   Applied Research

   Applied research may go a little bit deeper into some of the topics covered in Part III of this book. The applied topics described in this section include (i) performance analyses of heuristics, (ii) robustness and reactive scheduling, and (iii) integrated scheduling models.

   Performance analysis of heuristics is a very important area of empirical and experimental research. Currently, many job shop problems can only be dealt with on a small scale. For example, it is still very hard to find an optimal solution for an instance of Jm || Σ wjTj with, say, 10 machines and 30 jobs. If there are multiple objectives and a parametric analysis has to be done, then the problem becomes even harder. Many different types of heuristics are available, but it is not clear how effective they are in dealing with large scale scheduling problems, e.g., job shop scheduling problems with two or three objectives and with, say, 50 machines and 1000 jobs. The heuristics for these large scale job shops may require continuous improvement and fine-tuning in the future.

   Heuristics can be compared to one another with respect to several criteria. In practice, three criteria are important: first, the quality of the solution obtained; second, the amount of computer time needed to generate a good solution; and third, the time required to develop and implement the heuristic. Comparative studies of heuristics conducted in academic environments typically take only the first two criteria into account. However, in an industrial environment the third criterion is of critical importance: the time needed to develop a heuristic must be short. This is one of the reasons why in practice local search heuristics are often more popular than very sophisticated decomposition techniques such as the shifting bottleneck heuristic. The performance of any heuristic depends, of course, on the structure of the scheduling problem, e.g., the type of routing constraints in job shops.
The performance may even depend on the particular data set. Often, when one deals with a unary NP-hard problem, it turns out that most instances can be solved to optimality in a reasonably short time; however, some instances may turn out to be very hard to solve and may require an enormous amount of computing time before reaching optimality. It is of interest to find out why such instances are hard to solve. Empirical studies indicate that a heuristic often performs quite well when the data of an instance are generated randomly, whereas the same heuristic may perform quite poorly when it is applied to an instance of the same problem with data from an industrial setting. (It may be the case that industrial data have certain dependencies and correlations that make such instances hard to solve.) It would be useful to establish rules that indicate the type of algorithm that is most suitable for the type of instance under consideration. In order to characterize a problem instance properly, one may want to have a number of suitable descriptive factors, such as, for example, the due date tightness factor τ defined in Chapter 14. One may also have to assign proper weights to each of the factors. It would be useful to know which type of algorithm is most suitable for a given instance when the following is known: the size of the instance (the scale), the values of characteristic factors, and the computer time available.

   An important class of heuristic methods comprises local search procedures. The last two decades have seen an enormous amount of work on applications and implementations of local search procedures. This research has yielded interesting results with regard to neighbourhood structures. However, most of this research has focused on nonpreemptive scheduling. Preemptive scheduling has received very little attention from researchers specializing in local search procedures.
One reason is that it tends to be more difficult to design an effective neighbourhood structure for a preemptive environment than for a nonpreemptive environment. It may be of interest to focus attention first on problems that allow preemptions only at certain well-defined points in time, e.g., when new jobs are released. It is likely that in the future there will be a certain demand for industrial strength heuristics that are applicable to scheduling problems common in industry. Consider, for example, the problem Qm | rj, sjk | θ1 Σ wjTj + θ2 Cmax. This scheduling problem is typical of many process industries. The two objectives are quite common: one focuses on the due dates, and the other tries to balance the loads over the machines and minimize setup times. In the future, heuristics may be developed that are problem-specific and that can be linked easily to a variety of scheduling systems. These industrial strength heuristics may be hybrids that make use of Operations Research (OR) techniques as well as Artificial Intelligence (AI) techniques. For example, such a hybrid may combine an integer programming procedure with a constraint-based local search procedure.

   Robustness and reactive scheduling. A completely different line of empirical research involves robustness and rescheduling. As stated in the previous section, the concepts of robustness and rescheduling may lead to interesting theoretical research. However, they may lead to even more interesting empirical and experimental research. New measures for robustness have to be developed. The definition of these measures may depend on the machine environment.

   Integrated scheduling models. More practical models often combine machine scheduling aspects with other aspects, such as inventory control, workforce scheduling, maintenance scheduling, capacity control or pricing. For example, in supply chains the production scheduling function is often tied to inventory control and to transportation scheduling.
The models that are useful for the analysis of such real world environments tend to be more involved than the simpler machine scheduling models considered in this book. However, in the analysis of these more complicated models one may often have to resort to decomposition methods that partition the problem into a number of different modules. The smaller modules can then be tackled more easily using procedures that are described in this book. For example, in the airline industry, planes and crews have to be scheduled in a coherent way. An extensive amount of research has been done on pure personnel scheduling (independent of machine scheduling), but little research has been done on models that combine personnel scheduling with machine scheduling. Some more theoretical research has been done in other areas related to these types of problems, namely resource constrained scheduling (a limited number of personnel may be equivalent to a constraining resource). However, research in resource constrained scheduling has typically focused on complexity analysis and on worst case analysis of heuristics. It may be of interest in the future to study more specific models that combine machine scheduling with personnel scheduling.

   There are many scheduling applications in the information systems world. Nowadays, distributed computer systems are connected to one another in so-called grid environments, in which users can submit jobs that are automatically assigned to appropriate resources. The performance of a grid computing system depends heavily on the underlying scheduling procedures. A grid scheduling system operates on the premise that a new job in need of processing must make itself known to the "resource selector". In current systems, the resource selector acts as a gateway to the grid: it selects resources from a global directory and then allocates the job to one of the nodes on the grid. Typically, job allocation is done in two phases.
First, a job is allocated to a node on the grid, and second, within that node, the job is scheduled onto the processor. The first phase is referred to as resource allocation, whereas the second phase is referred to as job scheduling. The last decade has seen a fair amount of development and implementation of grid scheduling systems. However, it seems that less attention has been paid to the more theoretical and algorithmic aspects of these systems. 
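The two-phase allocation just described can be sketched as follows (an illustrative toy only; the node directory, load measure, and greedy least-loaded rule are assumptions, not a description of any particular grid system):

```python
def allocate_and_schedule(job, nodes):
    """Two-phase grid scheduling (toy sketch).
    Phase 1, resource allocation: pick the node with the smallest
    total load from the global directory.
    Phase 2, job scheduling: append the job to that node's queue;
    its start time is the node's load before the job arrived."""
    node = min(nodes, key=lambda n: sum(p for _, p in nodes[n]))
    start = sum(p for _, p in nodes[node])
    nodes[node].append(job)
    return node, start

# Global directory: node -> list of (job name, processing time).
nodes = {'A': [('j1', 5)], 'B': [('j2', 2)]}
print(allocate_and_schedule(('j3', 4), nodes))  # ('B', 2)
```

The greedy least-loaded rule stands in for the resource selector; real grid schedulers would consult richer node attributes than a single load number.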

       Systems Development

       Systems development may in the future focus a little more on some of the topics covered in Part III of this book. In this section the topics discussed include

       (i) problem decomposition and distributed scheduling,

       (ii) user interfaces and interactive optimization,

       (iii) scheduling description languages, and

       (iv) integration within supply chain management systems.

       Problem decomposition and distributed scheduling. Dealing with large scale scheduling problems may lead to implementations of distributed scheduling. Many industrial problems are so large that they cannot be solved on a single workstation. The computational effort has to be divided over a number of workstations or computers that may reside at different locations. With certain procedures the computational effort can be divided up rather easily, whereas with other procedures it may not be that easy. For example, when a problem is solved via branch-and-bound, it may be relatively easy to decompose the branching tree and partition the computational work involved. At periodic intervals the different workstations still have to compare their progress and share information (e.g., compare their best solutions found so far). If a problem is solved via time based decomposition, then distributed scheduling may also be applicable (as long as the schedules of the different periods are somewhat independent of one another). With the latest Internet technologies, distributed scheduling may become increasingly important in the future.

       User interfaces and interactive optimization. The development of user interfaces and interactive optimization may face some interesting hurdles in the future. The design of the user interface has to be such that interactive optimization can be done easily and effectively. A scheduler must maintain at all times a good overview of the schedule, even when the schedule contains over a thousand jobs. The user interface must make it easy to zoom in and out of a schedule. In order to allow for interactive optimization, the user interface must have provisions for clicking, dragging and dropping operations, freezing operations, dealing with cascading and propagation effects, and rescheduling.
After the user makes some manual changes, the system may reschedule automatically in order to maintain feasibility (without any user input). The (internal) algorithms that are used to maintain schedule feasibility may be relatively simple; they may only postpone some operations. However, internal algorithms may also be more involved and may perform some internal reoptimization (done automatically). On the other hand, the reoptimization process may also be managed by the user, who may want to specify the appropriate objective functions for the reoptimization process. Reoptimization algorithms may be very different from optimization algorithms that generate schedules from scratch. The main reason why reoptimizing is harder than optimizing from scratch is that an algorithm that reoptimizes has to deal with boundary conditions and constraints that are dictated by the original schedule. Embedding rescheduling algorithms in a user interface that enables the user to optimize schedules interactively is not easy.

       Scheduling description languages. Composition and integration of procedures have led to the development of so-called scheduling description languages. A scheduling description language is a high level language that enables a scheduler to write the code for a complex algorithm with only a limited number of concise statements or commands. Each statement in a description language invokes a relatively powerful procedure. For example, a statement may give an instruction to apply a tabu search procedure on a given set of jobs in a given machine environment. The input to such a statement consists of the set of jobs, the machine environment, the processing restrictions and constraints, the length of the tabu-list, an initial schedule, and the maximum number of iterations. The output consists of the best schedule obtained with the procedure.
Other statements may be used to set up various procedures in parallel or to concatenate two different procedures. Scheduling description languages are not yet very popular; the existing languages are still somewhat cumbersome and need streamlining. It is likely that there will be some improvement in the future.

       Integration within supply chain management systems. Many companies initially started out developing scheduling software for the manufacturing industry. However, they soon realized that in order to compete in the marketplace they had to offer software dealing with all aspects of supply chain management. The types of modules that are required in supply chain optimization include, besides planning and scheduling, forecasting, demand management, inventory control, and so on. Scheduling problems in supply chain management have to take forecasts, inventory levels and routings into consideration. These integrated scheduling problems are considerably harder than the more elementary problems studied in the research literature. The structure and the organization of the software must be well designed and modular.

       Deterministic and Stochastic

       Dynamic Programming

       Dynamic programming is one of the more widely used techniques for dealing with combinatorial optimization problems. It can be applied to problems that are solvable in polynomial time, as well as to problems that cannot be solved in polynomial time (see Appendix C). It has proven to be very useful for stochastic problems as well.

       B.1 Deterministic Dynamic Programming

       Dynamic programming is basically a complete enumeration scheme that attempts, via a divide and conquer approach, to minimize the amount of computation to be done. The approach solves a series of subproblems until it finds the solution of the original problem. It determines the optimal solution for each subproblem and its contribution to the objective function. At each iteration it determines the optimal solution for a subproblem that is larger than all previously solved subproblems. It finds a solution for the current subproblem by utilizing all the information obtained earlier in the solutions of the previous subproblems. Dynamic programming is characterized by three types of equations, namely

       (i) initial conditions;

       (ii) a recursive relation and

       (iii) an optimal value function.

       In scheduling, a choice can be made between forward dynamic programming and backward dynamic programming. The following example illustrates the use of forward dynamic programming.
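As a stand-in sketch (not the book's own worked example), forward dynamic programming for the single machine total weighted tardiness problem 1 || Σ wjTj can be written with the recursion V(J) = min over j ∈ J of V(J \ {j}) + wj · max(0, C(J) − dj), where C(J) is the total processing time of the jobs in J, job j is the one completing last among J, and the initial condition is V(∅) = 0:

```python
from itertools import combinations

def min_weighted_tardiness(p, w, d):
    """Forward DP for 1 || sum w_j T_j.
    V(J) is the minimum weighted tardiness of scheduling the job
    set J first; jobs in J occupy the interval [0, C(J)].
    Initial condition: V(empty set) = 0."""
    n = len(p)
    V = {frozenset(): 0}
    for size in range(1, n + 1):
        for J in combinations(range(n), size):
            J = frozenset(J)
            C = sum(p[j] for j in J)           # completion time of the last job in J
            V[J] = min(V[J - {j}] + w[j] * max(0, C - d[j]) for j in J)
    return V[frozenset(range(n))]              # optimal value function over all jobs

# Three jobs: processing times, weights, due dates.
print(min_weighted_tardiness([4, 3, 5], [1, 2, 1], [5, 6, 7]))  # 7
```

The three ingredients of the text are visible: the initial condition V(∅) = 0, the recursive relation in the inner minimum, and the optimal value function V evaluated on the full job set. The enumeration over all 2^n subsets makes this exponential, which is unavoidable for this NP-hard problem.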

   B.2 Stochastic Dynamic Programming

    Dynamic programming is often used in stochastic sequential decision processes, especially when the random variables are exponentially distributed. This class of decision processes is usually referred to as Markovian Decision Processes (MDPs). An MDP can be characterized, in the same way as a deterministic dynamic program, by

       (i) initial conditions;

       (ii) a recursive relation and

       (iii) an optimal value function.

       Constraint Programming

       C.1 Constraint Satisfaction

       C.2 Constraint Programming

       C.3 An Example of a Constraint Programming Language

       C.4 Constraint Programming vs. Mathematical Programming

       In contrast to mathematical programming, which has its roots in the Operations Research community, constraint programming has its origins in the Artificial Intelligence and Computer Science communities. Constraint programming can be traced back to the constraint satisfaction problems studied in the 1970s. A constraint satisfaction problem requires a search for a feasible solution that satisfies all given constraints. To facilitate the search for a solution to such a problem, various special purpose languages have been developed, e.g., Prolog. During the last decade of the twentieth century, constraint programming came to be used not only for solving feasibility problems, but also for solving optimization problems. Several approaches have been developed that facilitate the application of constraint programming to optimization problems. One such approach is via the Optimization Programming Language (OPL), which was designed for modeling and solving optimization problems through both constraint programming techniques and mathematical programming procedures.

       C.1 Constraint Satisfaction

       To describe the constraint programming framework, it is necessary to first define the constraint satisfaction problem. In order to be consistent with the mathematical programming material presented in Appendix A, it is advantageous to present the constraint satisfaction problem using mathematical programming terminology. Assume n decision variables x1, . . . , xn and let Dj denote the set of allowable values for decision variable xj. This set is typically referred to as the domain of the variable xj. Decision variables can take integer values, real values, as well as set elements. Formally, a constraint is a mathematical relation that defines a subset S of the set D1 × D2 × · · · × Dn such that if (x1, . . . , xn) ∈ S, then the constraint is said to be satisfied. One can also define a mathematical function f such that f(x1, . . . , xn) = 1 if and only if the constraint is satisfied. Using this notation, the Constraint Satisfaction Problem (CSP) can be defined as follows:

       fi(x1, . . . , xn) = 1,   i = 1, . . . , m
       xj ∈ Dj,   j = 1, . . . , n

       Since the problem is only a feasibility problem, no objective function has to be defined. The constraints in the constraint set may be of various types: they may be linear, nonlinear, logical combinations of other constraints, cardinality constraints, higher order constraints or global constraints. One basic difference between the constraints in a mathematical programming formulation and those in a constraint satisfaction problem lies in the fact that the constraints in a mathematical programming formulation are typically either linear or nonlinear, whereas the constraints in a constraint programming formulation can be of a more general form. Constraint satisfaction is typically solved via a tree search algorithm. Each node in the search tree corresponds to a set of domains D′1 × D′2 × · · · × D′n such that D′j ⊆ Dj. In other words, a node is nothing more than a contraction of the original domains that has not yet been proven infeasible. The tree search algorithm can branch from one node to another by assigning a value to a problem variable.
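A toy instance of this formulation (an illustrative example of the notation, not taken from the text): two variables with explicit domains and constraints encoded as 0/1 indicator functions fi, checked here by brute-force enumeration rather than tree search:

```python
from itertools import product

# Domains D1, D2 and constraints as indicator functions f_i:
# f_i(x1, x2) = 1 iff constraint i is satisfied.
D1, D2 = {1, 2, 3}, {2, 3}
f = [lambda x1, x2: 1 if x1 + x2 == 5 else 0,   # a linear constraint
     lambda x1, x2: 1 if x1 != x2 else 0]       # an "all different" constraint

# The CSP asks for any point of D1 x D2 with f_i = 1 for all i.
feasible = [(x1, x2) for x1, x2 in product(D1, D2)
            if all(fi(x1, x2) == 1 for fi in f)]
print(sorted(feasible))  # [(2, 3), (3, 2)]
```

Brute force is viable only for tiny domains; the tree search described next avoids enumerating the full Cartesian product.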
     

At each node, the following operations have to be performed:

       WHILE not solved AND not infeasible DO
           consistency checking (domain reduction)
           IF a dead-end is detected THEN
               try to escape from dead-end (backtrack)
           ELSE
               select variable
               assign value to variable
           ENDIF
       ENDWHILE

       The selection of the next variable and the assignment of its value are done by variable selection heuristics and value assignment heuristics. In job shop scheduling a variable typically corresponds to an operation and its value corresponds to a starting time. After a value is assigned to a variable, inconsistent values of unassigned variables are deleted. The process of removing inconsistent values is often referred to as consistency checking or domain reduction. One well-known technique of consistency checking is constraint propagation. For a variable x, the current domain δ(x) is the set of values for which no inconsistency can be found with the available consistency checking techniques. If, after removing inconsistent values from the current domains, a current domain becomes empty, a so-called dead-end has been reached. A dead-end means that either the original problem is infeasible or that some of the branching decisions made from the root of the tree down to this point have created an infeasibility. In such a case, the algorithm has to backtrack; that is, one or more assignments of variables have to be undone and alternatives have to be tried out. An instance is solved if every variable is assigned a value; an instance is shown to be infeasible if for a variable in the root of the tree no values remain to be tried. If constraint satisfaction is applied to the job shop problem of Chapter 7, then the problem is to verify whether there exists a feasible job shop schedule with a makespan Cmax less than a given value z∗. Domain reduction in job shop scheduling boils down to the following: given the partial schedule already constructed, each operation yet to be scheduled has an earliest possible starting time and a latest possible completion time (basically equivalent to a release date and a due date).
Whenever the starting time and completion time of an operation are fixed, some form of checking has to be done on how the newly scheduled (fixed) operation affects the earliest possible starting times and latest possible completion times of all the operations that still remain to be scheduled. The earliest possible starting time of a yet to be scheduled operation may now have to be set later, while the latest possible completion time of that operation may now have to be set earlier. Note that constraint satisfaction requires the specification of an actual makespan; this allows the procedure to construct a schedule going forward in time as well as backward in time. A branch-and-bound approach, on the other hand, often constructs a schedule going forward in time whenever the makespan is not known a priori (see, for example, Sections 3.2 and 7.1).
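The search loop above can be sketched in a few lines (a minimal backtracking solver: the consistency check only tests constraints whose variables are all assigned, which is a simplification of the domain reduction step, and smallest-domain-first is one common variable selection heuristic):

```python
def solve_csp(domains, constraints):
    """Backtracking tree search for a CSP.
    domains: dict var -> set of values.
    constraints: list of (vars, predicate) pairs; a predicate is
    checked once all of its variables have been assigned."""
    def consistent(assignment):
        return all(pred(*[assignment[v] for v in vs])
                   for vs, pred in constraints
                   if all(v in assignment for v in vs))

    def search(assignment):
        if len(assignment) == len(domains):   # solved: every variable assigned
            return assignment
        # variable selection heuristic: smallest domain first
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))
        for value in sorted(domains[var]):    # value assignment heuristic
            assignment[var] = value
            if consistent(assignment):
                result = search(assignment)
                if result is not None:
                    return result
            del assignment[var]               # undo the assignment
        return None                           # dead-end: caller backtracks

    return search({})

# Toy instance: x < y over small integer domains.
sol = solve_csp({'x': {1, 2, 3}, 'y': {1, 2}},
                [(('x', 'y'), lambda a, b: a < b)])
# sol is {'x': 1, 'y': 2}, the unique assignment with the heuristics above.
```

A production solver would shrink the current domains δ(x) by constraint propagation after each assignment instead of only re-checking completed constraints.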
     
     

   C.2 Constraint Programming 

   Originally, constraint satisfaction was used only to find feasible solutions for problems. However, the constraint satisfaction structure, when embedded in a more elaborate framework, can be applied to optimization (e.g., minimization) problems as well. An optimization problem may be formulated as follows:

       minimize g(x1, . . . , xn)

       subject to

       fi(x1, . . . , xn) = 1,   i = 1, . . . , m

       xj ∈ Dj,   j = 1, . . . , n

       The standard search procedure for finding the optimal solution is to first find a feasible solution to the Constraint Satisfaction Problem, while ignoring the objective function. Let y1, . . . , yn represent such a feasible solution. Let z∗ = g(y1, . . . , yn) and add the constraint g(x1, . . . , xn) < z∗

       to the constraint set and solve this modified Constraint Satisfaction Problem. The additional constraint forces the new feasible solution to have a better objective value than the current one. Constraint propagation may cause the domains of the decision variables to be narrowed, thus reducing the size of the search space. As the search goes on, new solutions must have progressively better objective values. The algorithm terminates when no feasible solution is found; when this happens, the last feasible solution found is optimal. A more sophisticated and efficient search procedure, often referred to as dichotomic search, requires at the outset a good lower bound L on the objective g(x1, . . . , xn). The procedure must also find an initial feasible solution that represents an upper bound U on the objective function. The dichotomic search procedure essentially performs a binary search on the objective function. The procedure computes the midpoint

       M = (U + L)/2

       of the two bounds and then solves the constraint satisfaction problem with the added constraint

       g(x1, . . . , xn) < M

       If it finds a feasible solution, then the new upper bound is set equal to M and the lower bound is kept the same; a new midpoint is computed and the search continues. If it does not find a feasible solution, then the lower bound is set equal to M and the upper bound is kept the same; a new midpoint is computed and the search continues. Dichotomic search is effective when the lower bound is strong, because the computational time needed to show that a constraint satisfaction problem does not have a feasible solution may be very long. Constraint programming can be described as a two-level architecture that at one level has a list of constraints (sometimes referred to as a constraint store) and at the other level a programming language component that specifies the organization and the sequence in which the basic operations are executed. One fundamental feature of the programming language component is its ability to specify a search procedure.
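The dichotomic search can be sketched as follows (a minimal sketch under two assumptions: the objective is integer valued, and a feasibility oracle is available that, given M, either returns a solution of the CSP with the added constraint g(x) < M or returns None):

```python
def dichotomic_search(feasible, L, U):
    """Binary search on the objective value.
    feasible(M) solves the CSP with the added constraint g(x) < M,
    returning a solution or None. Invariant: L <= optimum < U
    for an integer-valued objective."""
    best = None
    while U - L > 1:
        M = (U + L) // 2
        sol = feasible(M)
        if sol is not None:
            best, U = sol, M     # feasible: tighten the upper bound
        else:
            L = M                # infeasible: raise the lower bound
    return best                  # last feasible solution found

# Toy oracle: the smallest achievable objective value is 7,
# so the CSP with g(x) < M is feasible exactly when M > 7.
result = dichotomic_search(lambda M: 7 if M > 7 else None, 0, 100)
print(result)  # 7
```

Note that roughly half of the oracle calls prove infeasibility, which is why a strong lower bound L matters: it shortens exactly the calls that are most expensive.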

       C.3 An Example of a Constraint Programming Language

       The Optimization Programming Language (OPL) was developed during the last decade of the twentieth century to deal with optimization problems through a combination of constraint programming techniques and mathematical programming techniques. This section describes a constraint program designed for the job shop problem Jm || Cmax that is considered in Chapter 7.

       Example C.3.1 (An OPL Program for a Job Shop)

       An OPL program for the job shop problem Jm || Cmax can be written as follows:

       int nbMachines = . . .;
       range Machines 1..nbMachines;
       int nbJobs = . . .;
