1. Introduction

Most classical scheduling models assume that job processing times remain constant over time. However, in various real-life production and manufacturing systems job processing times change over time. The contributing factors to varying processing times may be machine or worker learning, machine deterioration, production system upgrades or technological shocks. Gawiejnowicz (2008) provides a wide variety of applications and up-to-date results on scheduling with time-dependent jobs. Time-dependent scheduling models can be classified into three general categories. The first category consists of linear models, whereby job processing times increase or decrease linearly as a function of the job's start time. The first papers to introduce time-deteriorating jobs were Gupta et al (1987), Gupta and Gupta (1998), and Browne and Yechiali (1990). The authors of these papers assumed that job processing times consist of two components. The former is a basic processing time which remains constant throughout the planning horizon. The latter varies over time and is the product of the job start time and a job-dependent deterioration index. The makespan minimization problem on a single machine can be solved in polynomial time using a simple sorting rule; however, the total completion time counterpart remains open even when the basic processing times are assumed to be identical for all jobs (see Mosheiov, 1991). The second category of time-dependent processing time scheduling models addresses piecewise linear job processing time functions. The underlying assumption of these models is that job processing times remain constant until a certain event occurs. Following changes in the production setting, job processing times begin to increase, or in some cases decrease, linearly at a predefined point in time.
Cheng et al (2004) and the references therein describe a variety of applications for this class of models, including firefighting efforts, searching for items under worsening light or weather conditions and maintenance scheduling. Wu et al (2009) propose a branch-and-bound algorithm as well as heuristic algorithms for solving a single machine scheduling problem with the objective of minimizing the makespan. Farahani and Hosseini (2013) show that a single machine scheduling problem with the objective of minimizing the cycle time, where the job processing time is a piecewise linear function of its start time, is polynomially solvable. The third category is devoted to scheduling settings where job processing times have two possible values, depending on their start time. The motivation for such models arises from systems which experience a single discrete change in setting, for example, the introduction of a new technology or the purchase of a more efficient machine. Under this model job processing times do not change continuously over time and are only affected by the different machine setting. These models are often referred to as step-deteriorating or step-improving problems, depending on the context. Step-deteriorating models were first introduced by Sundararaghavan and Kunnathur (1994) and later studied by Mosheiov (1995), Cheng and Ding (2001) and Jeng and Lin (2004). The makespan minimization problem was shown to be NP-hard in the ordinary sense by both Mosheiov (1995) and Cheng and Ding (2001). The same problem under the assumption that each job has a distinct event which affects its processing time is also NP-hard (see Jeng and Lin, 2004). Cheng and Ding (2001) also addressed the total completion time objective, showing that the problem is strongly NP-hard. In a recent paper, Cheng et al (2006) study a makespan minimization problem with step-improving processing times and a common critical date.
They prove the problem is NP-hard and develop an efficient pseudo-polynomial time algorithm. They also introduce a Fully Polynomial Time Approximation Scheme (FPTAS) and an online algorithm with a worst-case ratio of 2. Ji et al (2007) address the same problem and develop a simple linear time algorithm with a worst-case ratio of 5/4. For earlier results on scheduling with step-improving or step-deteriorating processing times, we refer the reader to two extensive survey papers by Alidaee and Womer (1999) and Cheng et al (2004).

Recently, several studies have investigated time-dependent scheduling models combined with other variations of classical scheduling models. Cai et al (2011) consider the single machine scheduling problem with time-dependent processing times and machine breakdowns. The objectives are to minimize the expected makespan and the variance of the makespan. Mor and Mosheiov (2012) consider a classical batch scheduling model with step-deterioration of processing times. The objective is to minimize the flowtime. Qian and Steiner (2013) consider a single machine scheduling problem with learning/deterioration effects and time-dependent processing times. By using a special assignment problem on product matrices, they solve the problem in near-linear time. Lu et al (2014) consider a single machine scheduling problem with decreasing time-dependent processing times under a group technology assumption. The group setup times and job processing times are both decreasing linear functions of their starting times. They show that the problem can be solved in polynomial time.

In this paper we consider the same setting as Cheng et al (2006) and Ji et al (2007): we assume a single machine scheduling environment and a common critical date after which job processing times reduce by a job-dependent amount. To the best of our knowledge, this is the first paper to consider a single machine setting with step-improving jobs and a critical date under a scheduling criterion which is not the makespan. We consider the total completion time objective which is paramount in determining the work in process of a manufacturing system and the level of service provided to customers. The focus on a min-sum type objective gives rise to interesting problem characteristics, various special cases which can be solved optimally and a very efficient approximation scheme for the general setting. We begin by introducing some notation in Section 2. We show that the total completion time minimization problem is NP-hard in Section 3, and discuss several special cases which can be solved in polynomial time in Section 4. We formulate a Mixed Integer Programming (MIP) model and develop an LP-based heuristic for the problem in Section 5. In Section 6, computational experiments are performed. Some concluding remarks and directions for future research are given in Section 7.

2. Problem description

A set of n non-preemptive jobs N = {1, …, n} is available for processing at time zero. All jobs in N share a common critical date D which affects their processing times. The processing time of job j is specified by two integers a_j and b_j with 0 ⩽ b_j ⩽ a_j. If job j begins processing at some time t < D, then its processing time equals a_j; if it starts at some time t ⩾ D, then its processing time is a_j − b_j. The goal consists of finding a non-preemptive schedule which minimizes the total completion time.
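Because the model is so compact, the optimum for tiny instances can be found by exhaustive search. The sketch below is our own illustration (not part of the original formulation); it relies on the observation that idle time is only ever useful immediately before D, since delaying a job beyond D gains nothing, so it enumerates every sequence together with the position of the first job that is delayed to start at D.

```python
from itertools import permutations

def total_completion_time(seq, a, b, D, split):
    """Completion-time sum for a fixed sequence in which the job at
    position `split` (0-indexed) waits until time D before starting;
    all other jobs run back to back.  A job's processing time is a[j]
    if it starts before D and a[j] - b[j] otherwise."""
    t, total = 0, 0
    for pos, j in enumerate(seq):
        if pos == split:
            t = max(t, D)
        t += a[j] if t < D else a[j] - b[j]
        total += t
    return total

def brute_force_optimum(a, b, D):
    """Exact optimum for tiny instances: try every permutation and every
    choice of the first position delayed to D (split == n means no idle)."""
    n = len(a)
    return min(total_completion_time(seq, a, b, D, split)
               for seq in permutations(range(n))
               for split in range(n + 1))
```

This enumerator is exponential in n and is intended only for checking small examples by hand, such as those appearing later in the paper.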

A schedule σ is an assignment of the jobs in N to the single machine such that each job receives an appropriate amount of processing time, and no two jobs are processed on the single machine at the same time. Let S_j(σ) and C_j(σ) denote the start and finish times of job j in schedule σ, respectively. We write S_j for S_j(σ) and C_j for C_j(σ) when the schedule σ is clear from the context. Let σ* represent the optimal schedule, that is, the one which minimizes the total completion time, and let ΣC_j(σ*) denote the minimum total completion time associated with this schedule. Let [i] denote the job in the ith position of the job sequence.

The standard classification scheme for scheduling problems (Graham et al, 1979) is α1|α2|α3, where α1 describes the machine structure, α2 gives the job characteristics or restrictive requirements, and α3 defines the objective function to be minimized. We extend this scheme to accommodate step-improving processing times with a common critical date by writing p_j = a_j or p_j = a_j − b_j and d_j = D in the α2 field. Our problem can be denoted as 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j.

3. Complexity results

This section studies the complexity of the considered problem. We use a reduction from the following problem, which is known to be NP-complete.

Even-odd partition (Garey and Johnson, 1979): Given positive integers x_1, x_2, …, x_{2t}, where x_1 ⩽ x_2 ⩽ ⋯ ⩽ x_{2t}, does there exist a set X ⊂ T = {1, 2, …, 2t} such that Σ_{j∈X} x_j = Σ_{j∈T\X} x_j = r, and X contains exactly one of {2j−1, 2j} for j = 1, 2, …, t? Assume without loss of generality that r > 3t. If not, we can multiply every element x_j (and hence r) by 3t without changing the answer to the problem.

Given any instance of the even-odd partition problem with t, r and x_j for j = 1, …, 2t, we consider the following instance of 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j, called instance I: there are 2t + 1 jobs with a_j = 2r² + x_j and b_j = 2r² for j = 1, …, 2t, a_{2t+1} = 2r² + 2r and b_{2t+1} = 2r², and the common critical date is D = 2tr² + r.

A threshold value, K, is defined as K = Σ_{j=1}^t (t − j + 1)(x_{2j−1} + x_{2j}) + (3t² + 3t)r² + (t + 4)r. In the following we prove that there exists a schedule for this instance of 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j with ΣC_j ⩽ K if and only if there exists a solution to the even-odd partition problem.

Lemma 1

  • If there exists a solution to even-odd partition problem, there exists a feasible schedule for instance I.

Proof

  • Since there exists a solution to even-odd partition, we may reindex the elements so that Σ_{j=1}^t x_{2j−1} = Σ_{j=1}^t x_{2j} = r. Consider a schedule σ constructed as in Figure 1. The jobs 2j−1 for j = 1, …, t are scheduled in this order without idle time. Then, since Σ_{j=1}^t a_{2j−1} = 2tr² + Σ_{j=1}^t x_{2j−1} = D, the jobs 2j for j = 1, …, t and job 2t+1 are scheduled in this order starting at time D without idle time. Thus,

    ΣC_j = Σ_{j=1}^t (t − j + 1)a_{2j−1} + Σ_{j=1}^t [D + Σ_{i=1}^j (a_{2i} − b_{2i})] + [D + Σ_{j=1}^t (a_{2j} − b_{2j}) + (a_{2t+1} − b_{2t+1})]
         = Σ_{j=1}^t (t − j + 1)(2r² + x_{2j−1}) + Σ_{j=1}^t [2tr² + r + Σ_{i=1}^j x_{2i}] + (2tr² + 4r)
         = Σ_{j=1}^t (t − j + 1)(x_{2j−1} + x_{2j}) + (3t² + 3t)r² + (t + 4)r
         = K.

    Figure 1: The structure of schedule σ.

    The second equality follows because Σ_{j=1}^t (a_{2j} − b_{2j}) = Σ_{j=1}^t x_{2j} = r. This implies that there exists a feasible schedule with ΣC_j ⩽ K for instance I. □

We now show that a feasible solution to instance I implies a solution to even-odd partition. Let σ* denote an optimal schedule for instance I.

Lemma 2

  • In σ*, job 2t+1 is the last job to be processed.

Proof

  • Suppose that job 2t+1 starts processing before D in σ*. Since a_{2t+1} = 2r² + 2r > 2r² + x_j = a_j for j = 1, …, 2t, job 2t+1 and the last job processed, job [2t+1], can be interchanged while the total completion time decreases or remains the same, where job [i] denotes the job in the ith position in a given sequence. Alternatively, suppose that job 2t+1 starts processing at or after D in σ*. For the jobs starting at or after D, the SPT (Shortest Processing Time) rule is optimal (Smith, 1956). Therefore, job 2t+1 must be the last job in σ* because a_{2t+1} − b_{2t+1} = 2r > x_j = a_j − b_j for j = 1, …, 2t. □

Lemma 3

  • In σ*, there are exactly t jobs that start processing before D, and the total processing time of these jobs is not greater than D.

Proof

  • Observe that a_j > 2r² for j = 1, …, 2t and 2r²t < D < 2r²(t + 1). Thus, at most t + 1 jobs can start processing before D. Suppose that t + 1 jobs start processing before D. Then, it follows that C_{[t+1]} > 2r²(t + 1). Since 2r²(t + 1) > D + (a_{[t+1]} − b_{[t+1]}), the total completion time can be decreased by inserting idle time so that job [t + 1] starts processing at D. Consequently, exactly t jobs start processing before D in σ*.

    Furthermore, the total processing time of the first t jobs is not greater than D. Otherwise, since Σ_{j=1}^{2t} (a_j − b_j) = 2r, it follows that Σ_{j=t+1}^{2t} a_{[j]} < D. Therefore, for j = 1, …, t, job [j] and job [t + j] can be interchanged while the total completion time decreases or remains the same. □

Theorem 1

  • The problem 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j is NP-hard even when b_j = b for all j.

Proof

  • Lemma 1 shows that a solution to even-odd partition implies a feasible schedule for instance I. To complete the proof, we must now show that a feasible schedule for instance I implies a solution to even-odd partition.

    From the results of Lemma 2 and Lemma 3, exactly t jobs in {1, …, 2t} start processing before D, and the remaining t jobs in {1, …, 2t} together with job 2t+1 start processing at or after D. Thus, similar to the analysis in Lemma 1, the total completion time can be expressed as

    ΣC_j = Σ_{j=1}^t (t − j + 1)a_{[j]} + Σ_{j=1}^t [D + Σ_{i=1}^j (a_{[t+i]} − b_{[t+i]})] + [D + Σ_{j=1}^t (a_{[t+j]} − b_{[t+j]}) + (a_{2t+1} − b_{2t+1})]
         = Σ_{j=1}^t (t − j + 1)(x_{[j]} + x_{[t+j]}) + (3t² + 3t)r² + (t + 5)r − Σ_{j=1}^t x_{[j]}.

    The last equality follows from the fact that Σ_{j=1}^t (a_{[t+j]} − b_{[t+j]}) = Σ_{j=1}^t x_{[t+j]} = 2r − Σ_{j=1}^t x_{[j]}.

    Since Σ_{j=1}^{2t+1} C_j ⩽ K and Σ_{j=1}^t x_{[j]} ⩽ r, it follows that {x_{[j]}, x_{[t+j]}} = {x_{2j−1}, x_{2j}} for j = 1, …, t and Σ_{j=1}^t x_{[j]} = r. As a result, the set of jobs that start processing before D provides a solution to the even-odd partition problem, which indicates that a feasible schedule for instance I implies a solution to even-odd partition.

    By combining Lemma 1 and the argument above, there exists a schedule for this instance of 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j with ΣC_j ⩽ K if and only if there exists a solution to the even-odd partition problem. Therefore, the problem 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j is NP-hard even when b_j = b for all j. □

4. Polynomially solvable cases

In this section, we study several special cases of the problem which can be solved in polynomial time.

4.1. All processing times improve according to a fixed ratio: b_j = δa_j for all 1 ⩽ j ⩽ n, and there exists k such that Σ_{j=1}^k a_j = D

We first focus on the case where job processing times are shortened by a fixed ratio if they start processing at, or after, the common critical date D. If there exists k such that Σ_{j=1}^k a_j = D, then we show that a simple sorting procedure yields an optimal solution for the problem. Otherwise, the problem remains NP-hard.

Lemma 4

  • Suppose that a_1 ⩽ a_2 ⩽ ⋯ ⩽ a_n and b_j = δa_j for all j, where 0 < δ < 1. If there exists k such that Σ_{j=1}^k a_j = D, then the SPT rule with respect to a_j is optimal.

Proof

  • For a given schedule, there are two sets of jobs: jobs that start processing before D and jobs that start processing at or after D. Since the SPT rule on each set of jobs is optimal, it suffices to show that the jobs in {1, …, k} are scheduled in the time interval [0, D] in an optimal schedule.

    Consider a schedule σ in which some jobs in {k + 1, …, n} start processing before D. In σ, let A denote the set of jobs that start processing before D and let B denote the set of jobs that start processing at or after D. Let X = A ∩ {k + 1, …, n} and Y = B ∩ {1, …, k}. Note that the SPT rule on each set of jobs is optimal. Thus, since a_i ⩽ a_j for i ∈ A\X and j ∈ X, jobs in X do not precede any jobs in A\X. Similarly, since a_i − b_i = (1 − δ)a_i ⩽ (1 − δ)a_j = a_j − b_j for i ∈ Y and j ∈ B\Y, jobs in Y precede all jobs in B\Y. Let x[i] and y[i] denote the jobs in the ith position of the job sequences in X and Y, respectively. Figure 2 depicts the schedule σ.

    Figure 2: Two possible structures of schedule σ.

    We first show that |X| ⩽ |Y|. Suppose that |X| > |Y|. From the assumption that a_1 ⩽ a_2 ⩽ ⋯ ⩽ a_n, we obtain that a_{y[i]} ⩽ a_{x[i]} for i = 1, …, |Y|. Therefore, we can exchange x[i] and y[i] for i = 1, …, |Y| without increasing the start time of x[|Y|+1]. This implies that k + 1 jobs can start their processing before D, which contradicts the assumption that Σ_{j=1}^k a_j = D, that is, that at most k jobs can start their processing before D. Hence, |X| ⩽ |Y|.

    As shown in Figure 2, there are two cases: Σ_{j∈A} a_j < D and Σ_{j∈A} a_j ⩾ D.

    Case 1: Σ_{j∈A} a_j < D.

    Since Σ_{j∈A} a_j < D, observe that

    Construct a new schedule σ′ by interchanging the jobs in A and the jobs in B in the same order as in σ. Let t = D − Σ_{j∈A\X} a_j. Then,

    Since |X|⩽|Y|,

    The last inequality follows because a_{y[i]} ⩽ a_{x[i]} for i = 1, …, |X|, δΣ_{j∈Y} a_j < t, and Σ_{j∈X} a_j < Σ_{j∈Y} a_j. As a result, we can create a new schedule σ′ in which the jobs in {1, …, k} are scheduled in the time interval [0, D] without increasing the total completion time.

    Case 2: Σ_{j∈A} a_j ⩾ D.

    Since Σ_{j∈A} a_j ⩾ D, we obtain

    Let s = Σ_{j∈X} a_j and t = D − Σ_{j∈A\X} a_j. Observe that s ⩾ t. Construct a new schedule σ′ by interchanging the jobs in A and the jobs in B in the same order as in σ. Then,

    Since |X|⩽|Y|,

    The last inequality follows because t ⩽ s, a_{y[i]} ⩽ a_{x[i]} for i = 1, …, |X|, δΣ_{j∈Y} a_j < s, and Σ_{j∈Y} a_j ⩾ Σ_{j∈X} a_j. As a result, we can create a new schedule σ′ in which the jobs in {1, …, k} are scheduled in the time interval [0, D] without increasing the total completion time. □

In the following we present an example demonstrating that even when a_1 ⩽ a_2 ⩽ ⋯ ⩽ a_n and b_j = δa_j for all j with 0 < δ < 1, if there does not exist any k such that Σ_{j=1}^k a_j = D, then the SPT rule may not be optimal. Consider an example with three jobs, where a_1 = 16, a_2 = 18 and a_3 = 22, while D = 20 and δ = 0.5. Therefore, the improved processing times a_j − b_j are 8, 9 and 11, respectively, for j = 1, 2, 3. The SPT rule yields a solution value of 95 (the completion times are 16, 34 and 45, respectively). If we schedule job 2 first, insert an idle time of 2 time units, and then schedule jobs 1 and 3, the total completion time is 85 (with completion times of 18, 28 and 39). Clearly, since at most one job can start processing before the critical date, it is beneficial to sequence first the longest job whose processing time does not exceed the critical date D, that is, job 2 in our case.
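The arithmetic of this example is easy to check mechanically. The helper below is a sketch we added (it is not part of the original paper); it computes completion times for a fixed order, with an optional position that waits until D:

```python
def completion_times(order, a, b, D, start_at_D=None):
    """Completion times for a fixed job order; the job at position
    start_at_D (if given) waits until time D before starting."""
    t, C = 0, []
    for pos, j in enumerate(order):
        if pos == start_at_D:
            t = max(t, D)
        t += a[j] if t < D else a[j] - b[j]   # step-improving times
        C.append(t)
    return C

a, b, D = [16, 18, 22], [8, 9, 11], 20        # b_j = 0.5 * a_j

spt = completion_times([0, 1, 2], a, b, D)                 # plain SPT by a_j
alt = completion_times([1, 0, 2], a, b, D, start_at_D=1)   # job 2 first, then idle
print(spt, sum(spt))   # [16, 34, 45] 95
print(alt, sum(alt))   # [18, 28, 39] 85
```

Running it reproduces the two solution values of 95 and 85 quoted in the example.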

4.2. All improved processing times are identical: a_i − b_i = a_j − b_j for all 1 ⩽ i, j ⩽ n

We consider the special case where jobs have different processing times if scheduled before D, but the same improved processing time if they start processing at or after D. If D does not coincide with the completion time of a job, it may be beneficial to insert some idle time before D, as shown below.

Lemma 5

  • Suppose that a_1 ⩽ a_2 ⩽ ⋯ ⩽ a_n and a_i − b_i = a_j − b_j for all 1 ⩽ i, j ⩽ n. The SPT rule with respect to a_j is optimal.

Proof

  • Let k denote the maximum number of jobs which can finish processing before D. Construct a schedule σ based on the SPT rule: assign the first k jobs in non-decreasing order of a_j before D; if Σ_{j=1}^{k+1} a_j ⩽ D + a_{k+1} − b_{k+1} (which is equivalent to Σ_{j=1}^k a_j ⩽ D − b_{k+1}), then job k + 1 starts directly after job k finishes at time Σ_{j=1}^k a_j; if Σ_{j=1}^k a_j > D − b_{k+1}, then idle time is inserted and job k + 1 starts at time D; the remaining jobs are assigned in non-decreasing order of a_j − b_j without any idle time.

    We now show that any feasible schedule can be transformed into σ without increasing its total completion time. Take a feasible schedule σ′. Let k′ denote the number of jobs which finish processing before D in σ′. It is clear that, in order to minimize the total completion time of the first k′ jobs, they must be ordered according to the SPT rule with respect to a_j. The remaining jobs can be ordered arbitrarily as they have identical improved processing times; without loss of generality we assume that the jobs processed after time D are also in SPT order. Now construct a new schedule σ″ by interchanging jobs [k′] and [k′ + 1], without changing the start times of the jobs in positions k′ and k′ + 1. From the assumption that the first k′ jobs and the last n − k′ jobs are in SPT order, we have that a_{[k′]} ⩽ a_{[k′+1]}. Therefore,

    By repeating this procedure, we can construct σ without increasing the total completion time of the given schedule. □
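The construction in Lemma 5 can be written out directly. The following is our sketch under the lemma's assumption that every improved time a_j − b_j equals the same constant (the function name and the sample data are ours):

```python
def lemma5_schedule(a, b, D):
    """SPT construction of Lemma 5 (all improved times a_j - b_j equal).
    Sort by a_j; let k be the maximum number of jobs finishing by D.
    Job k+1 follows job k directly when sum_{j<=k} a_j <= D - b_{k+1},
    and otherwise waits until D; the rest run without idle time."""
    n = len(a)
    jobs = sorted(range(n), key=lambda j: a[j])
    k, load = 0, 0                       # load = total a_j of the first k jobs
    while k < n and load + a[jobs[k]] <= D:
        load += a[jobs[k]]
        k += 1
    t, C = 0, {}
    for pos, j in enumerate(jobs):
        if pos == k and load > D - b[j]:  # waiting until D is cheaper
            t = max(t, D)
        t += a[j] if t < D else a[j] - b[j]
        C[j] = t
    return C, sum(C.values())
```

For a = (5, 7, 9) with common improved time 4 and D = 10, job 2 starts directly (total 33); shrinking the improved time to 1 makes it wait until D instead (total 28), illustrating when the inserted idle pays off.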

4.3. All initial processing times are identical: a_j = a for all 1 ⩽ j ⩽ n

Next, we consider the special case in which all jobs have the same processing time initially but different processing times if they begin processing no earlier than D. It is clear that exactly k = ⌊D/a⌋ jobs can finish their processing by D. The focus is on job [k + 1], the job in the (k + 1)st position in the sequence: if b_{[k+1]} ⩽ D − ka = D − ⌊D/a⌋a, then job [k + 1] should start at time ka; if, on the other hand, b_{[k+1]} ⩾ D − ka, then job [k + 1] should start at time D.
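The start-time rule for job [k + 1] can be sketched as follows; this is our illustration, and the data in the usage note are hypothetical:

```python
def identical_a_schedule(a, b, D):
    """Case a_j = a for all j: sequence by LPT on b_j (equivalently SPT
    on a - b_j); k = D // a jobs finish by D; job k+1 starts at k*a when
    b_{[k+1]} <= D - k*a, and waits until D otherwise."""
    n = len(b)
    order = sorted(range(n), key=lambda j: -b[j])   # SPT on a - b_j
    k = D // a
    t, C = 0, {}
    for pos, j in enumerate(order):
        if pos == k and b[j] > D - k * a:   # waiting until D is cheaper
            t = max(t, D)
        t += a if t < D else a - b[j]
        C[j] = t
    return C, sum(C.values())
```

With a = 10, D = 25 and b = (7, 9, 8), job [3] has b = 7 > D − ka = 5, so it waits until time 25 and finishes at 28 (total 58); lowering its b to 2 makes starting at time 20 preferable (total 60 rather than 63 with waiting).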

Lemma 6

  • Suppose that a_j = a for all j and that b_1 ⩾ b_2 ⩾ ⋯ ⩾ b_n. The SPT rule with respect to a_j − b_j (equivalently, the Longest Processing Time (LPT) rule with respect to b_j) is optimal.

The proof of Lemma 6 is based on a standard pairwise interchange argument and is similar to the proof of Lemma 5; for the sake of brevity we omit it. These results are meaningful considering that the problem is NP-hard even when b_j = b for all j, but polynomially solvable either when a_j = a for all j or when a_i − b_i = a_j − b_j for all i, j.

5. The LP-based heuristic

In this section, we develop an LP-based heuristic for the proposed problem. The scheduling problem is expressed in an MIP formulation in Section 5.1. Then, an LP-based heuristic is developed on the basis of an LP relaxation of MIP in Section 5.2.

5.1. The MIP formulation

In this section we develop an MIP formulation of the proposed problem. Without loss of generality, we assume that the jobs are indexed in non-decreasing order of a j .

Variables:

  x_ij = 1 if job j is assigned to the ith position and starts before D, and 0 otherwise, for i = 1, …, n and j ∈ N;
  y_ij = 1 if job j is assigned to the ith position and starts at or after D, and 0 otherwise, for i = 1, …, n and j ∈ N;
  S_i = the start time of the job in the ith position, for i = 1, …, n.

MIP:

  minimize Σ_{i=1}^n [S_i + Σ_{j∈N} a_j x_ij + Σ_{j∈N} (a_j − b_j) y_ij]   (1)
  subject to
  S_{i+1} ⩾ S_i + Σ_{j∈N} a_j x_ij + Σ_{j∈N} (a_j − b_j) y_ij,  i = 1, …, n − 1   (2)
  D − S_i ⩽ M Σ_{j∈N} x_ij,  i = 1, …, n   (3)
  S_i ⩾ D Σ_{j∈N} y_ij,  i = 1, …, n   (4)
  x_ij + y_ij ⩽ 1,  i = 1, …, n, j ∈ N   (5)
  Σ_{i=1}^n (x_ij + y_ij) = 1,  j ∈ N   (6)
  Σ_{j∈N} (x_ij + y_ij) = 1,  i = 1, …, n   (7)
  x_ij, y_ij ∈ {0, 1},  i = 1, …, n, j ∈ N   (8)
  S_i ⩾ 0,  i = 1, …, n   (9)

The objective function (1) seeks to minimize the total completion time. Constraint (2) ensures that the start time of the job in the (i + 1)st position of the sequence is greater than or equal to the completion time of the job in the ith position. Constraints (3) and (4) ensure that the processing time of the job in the ith position is a_{[i]} if the job starts before D and a_{[i]} − b_{[i]} if the job starts at or after D. Note that M is a sufficiently large number. Constraint (5) implies that each job starts either before or after D, but not both. Constraints (6) and (7) imply that exactly one job occupies each position in the job sequence. Constraint (8) states that x_ij and y_ij are binary variables, and Constraint (9) states that the start times of the jobs are nonnegative.

Lemma 7

  • Constraint (3) in MIP can be replaced by the equivalent constraint

    Σ_{j∈N} x_ij ⩾ (D − S_i)/(D + 1 − Σ_{j=1}^{i−1} a_j),  i = 1, …, n.   (10)

Proof

  • Constraint (3) implies that Σ_{j∈N} x_ij = 1 if S_i < D and Σ_{j∈N} x_ij = 0 if S_i ⩾ D. Suppose that S_i < D. Then every preceding job also starts before D, and since the jobs are indexed in non-decreasing order of a_j,

    S_i ⩾ Σ_{j=1}^{i−1} a_{[j]} ⩾ Σ_{j=1}^{i−1} a_j.

    Thus, 0 < D − S_i < D + 1 − Σ_{j=1}^{i−1} a_j. If S_i < D, then

    0 < (D − S_i)/(D + 1 − Σ_{j=1}^{i−1} a_j) < 1,

    which imposes Σ_{j∈N} x_ij = 1.

    Suppose that S_i ⩾ D. Then, the right-hand side of Constraint (10) is not greater than zero, which imposes Σ_{j∈N} x_ij = 0. As a result, Constraint (10) is equivalent to Constraint (3) in MIP. □

Observe that Constraint (10) is stronger than Constraint (3). We use Constraint (10) instead of Constraint (3) in order to reduce the search space when finding an optimal solution of MIP.

5.2. Heuristic

In this section, we develop an LP-based heuristic using an optimal solution of an LP relaxation of MIP. In the relaxation, we relax the binary constraints on x_ij and y_ij (Constraint (8)) and, using Lemma 7, replace Constraint (3) by Constraint (10). The resulting problem, denoted LPR, is as follows.

LPR: minimize (1)

subject to (2), (4), (5), (6), (7), (10) and

  0 ⩽ x_ij ⩽ 1, 0 ⩽ y_ij ⩽ 1,  i = 1, …, n, j ∈ N.

Note that LPR can be solved in polynomial time, for example by the ellipsoid method (Grötschel et al, 1993).

In the LP-based heuristic, let X and Y denote two disjoint subsets of the job set N, where jobs in X start processing before time D and jobs in Y start processing at or after time D. The heuristic classifies each job into one of the two sets X and Y according to an optimal solution of LPR. It then schedules the jobs in X in non-decreasing order of a_j, followed by the jobs in Y in non-decreasing order of a_j − b_j.
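The classification and sequencing steps can be sketched as follows. Solving LPR itself requires an LP solver, so here the per-job fractional weights Σ_i x_ij^r and Σ_i y_ij^r are taken as inputs; the numeric weights in the usage example are made-up stand-ins, chosen only to reproduce the split X = {2}, Y = {1, 3} of Example 1 below:

```python
def lp_rounding_schedule(a, b, D, x_weight, y_weight):
    """Rounding step of the LP-based heuristic: job j joins X (early set)
    when its fractional x-weight sum_i x_ij exceeds its y-weight, and Y
    otherwise.  X runs in SPT order of a_j from time 0 (this sketch
    assumes those jobs indeed start before D, as in Example 1); Y runs
    in SPT order of a_j - b_j from time max(D, completion of X)."""
    n = len(a)
    X = [j for j in range(n) if x_weight[j] > y_weight[j]]
    Y = [j for j in range(n) if x_weight[j] <= y_weight[j]]
    t, C = 0, {}
    for j in sorted(X, key=lambda j: a[j]):        # early jobs, full times
        t += a[j]
        C[j] = t
    t = max(t, D)                                  # late jobs start at D or later
    for j in sorted(Y, key=lambda j: a[j] - b[j]): # improved times
        t += a[j] - b[j]
        C[j] = t
    return C, sum(C.values())

# Example 1 data; the fractional weights are illustrative assumptions.
a, b, D = [16, 18, 22], [11, 9, 10], 20
C, total = lp_rounding_schedule(a, b, D,
                                x_weight=[0.2, 0.8, 0.1],
                                y_weight=[0.8, 0.2, 0.9])
print(C, total)   # completion times 25, 18, 37; total 80
```

With these assumed weights the sketch reproduces the heuristic solution value of 80 reported for Example 1.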

We now present a formal description of the heuristic.

The LP-based heuristic algorithm: construction of a schedule for 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j.

Step 1. Solve LPR and let (x^r, y^r) denote an optimal solution.

Step 2. For each job j ∈ N, assign j to X if Σ_{i=1}^n x_ij^r > Σ_{i=1}^n y_ij^r, and to Y otherwise.

Step 3. Schedule the jobs in X in non-decreasing order of a_j starting at time zero, followed by the jobs in Y in non-decreasing order of a_j − b_j starting at time max{D, Σ_{j∈X} a_j}.

The following example demonstrates the LP-based heuristic algorithm for a small problem containing three jobs.

Example 1

  • There are three jobs with a1=16, a2=18, a3=22, b1=11, b2=9, b3=10 and D=20. The optimal solution of LPR can be obtained as follows:

    Since x_{11}^r + x_{21}^r + x_{31}^r < y_{11}^r + y_{21}^r + y_{31}^r, we assign job 1 to Y. Similarly, since x_{12}^r + x_{22}^r + x_{32}^r > y_{12}^r + y_{22}^r + y_{32}^r and x_{13}^r + x_{23}^r + x_{33}^r < y_{13}^r + y_{23}^r + y_{33}^r, we assign job 2 to X and job 3 to Y. Consequently, X = {2} and Y = {1, 3}. We schedule job 2 first, and since max{D, Σ_{j∈X} a_j} = 20, we schedule job 1 at time 20 followed by job 3 at time 25. The heuristic yields a solution value of 80 with C_1 = 25, C_2 = 18 and C_3 = 37.

6. Computational study

In this section, we undertake extensive numerical tests to evaluate the quality of the solutions found for problem 1|p_j = a_j or p_j = a_j − b_j, d_j = D|ΣC_j by the proposed heuristic. The heuristic algorithm is coded in GAMS/CPLEX with the default settings, and run on a personal computer with an Intel Core i7-2600, 3.40 GHz processor. Since the existing literature contains no computational study of the setting addressed in this paper, we have adapted the data generation scheme used in Jeng and Lin (2004). In Jeng and Lin (2004), the authors study a makespan minimization problem under the assumption that job processing times are a non-linear function of their start time and due date, and they undertake a comprehensive computational study to evaluate the performance of their proposed algorithms. We have adapted their framework to our problem setting for the purpose of measuring the performance of our heuristic algorithm. In our study we aim to isolate the effects, if these exist, of the following factors: the number of jobs (n), the difference (or ratio) between the job processing times before and after the common critical date (a_j and b_j), and the restrictiveness of the common critical date (D). The results will allow us to determine the accuracy of the heuristic algorithm for any specific problem setting.

To test the effects of varying n and D, we let n ∈ {50, 100, 200, 400} and D = α∑a_j, where α ∈ {0.2, 0.4, 0.6, 0.8}. In order to determine whether or not the range of a_j has an impact on the performance of the heuristic, we let a_j ~ DU[1, 50], a_j ~ DU[1, 100], a_j ~ DU[1, 150] and a_j ~ DU[1, 200], where DU[l, u] is the discrete uniform distribution over the interval [l, u]. Moreover, the ratio of b_j to a_j may affect the performance of the heuristic. For the associated test, we let b_j ~ DU[0, βa_j], where β ∈ {0.1, 0.4, 0.7, 1.0}.

Two sets of experiments were run. In the first set, n and D were allowed to vary, while a_j ~ DU[1, 100] and b_j ~ DU[0, a_j]. These results are presented in Table 1. In the second set, a_j and b_j were allowed to vary, while n = 100 and D = 0.6∑a_j. These results are presented in Table 2. Since calculating the optimal solution is computationally difficult, we use the optimal solution value of LPR as a surrogate lower bound. As a result, our performance indicator, the relative error 100 × (z_H − z_L)/z_L, provides an upper bound on the actual relative error, where z_H and z_L represent the solution value of the proposed heuristic and the lower bound, respectively. For each condition, the table entry is the average over 20 instances. Times are given in seconds and include only computation time.

Table 1 Performance of the heuristic with different numbers of jobs and different common critical dates
Table 2 Performance of the heuristic with different processing times

In Table 1, one can observe that the error increases as the number of jobs increases. Also, the error decreases as the common critical date (D) increases. This may be due to the quality of the lower bounds obtained from LPR. As D increases, more jobs start their processing before the common critical date. Therefore, as shown in the proof of Lemma 7, the number of non-trivial Constraints (10) in LPR increases. As a result, LPR yields a tighter lower bound as the common critical date increases.

In Table 2, we observe that the range of a_j has no significant impact on the performance of the heuristic. Moreover, the error tends to be largest when b_j ~ DU[0, 0.7a_j]; however, there is no linear relationship between the error and the ratio of the standard deviation of a_j to the standard deviation of b_j.

To evaluate the performance of the proposed heuristic in terms of the deviation from the optimal solutions, we generated 20 instances of the problem with n = 70, a_j ~ DU[1, 100], b_j ~ DU[1, a_j] and D = 0.4∑a_j. Optimal solutions were obtained using GAMS/CPLEX with a fixed time limit of 1 h of CPU time. These results are presented in Table 3. The error is defined as 100 × (z_H − z_O)/z_O, where z_H and z_O represent the solution values of the heuristic algorithm and of the optimal solution, respectively. Optimal solutions were not obtained for six of the 20 instances within 1 h; in these cases, the best feasible solutions found within the time limit were used for comparison. The heuristic algorithm yields near-optimal solutions in short computation times: the average error is 1.942%, with a worst-case error of 2.633%. The average computation time required by the heuristic algorithm is less than 1 s, while approximately 50 min are required by GAMS/CPLEX. Considering that real-life manufacturing systems are often required to process thousands of jobs, even commercial optimization software packages would be unable to find optimal solutions for the problem. Therefore, it is sensible to use the heuristic algorithm instead of an exact algorithm.

Table 3 Performance of the heuristic algorithm compared with optimal solutions

7. Conclusions

We study a total completion time minimization problem on a single machine with step-improving processing times. We establish that the problem is NP-hard for the general case, and develop polynomial time solution procedures for several interesting special cases. For solving the general case, we formulate an MIP model and develop an LP-based heuristic for the problem.

In order to evaluate the effectiveness and efficiency of the heuristic, extensive computational experiments were carried out. The experimental results show that the heuristic yields near-optimal solutions in short computation times. Moreover, the proposed heuristic algorithm performs better as the common critical date increases and as the number of jobs decreases.

Future research may focus on studying step-improving processing times under different scheduling criteria, such as the total weighted completion time. Also, it would be interesting to extend our results to various multiple-machine environments.