Borland C++ Sorting Algorithm

Published Jul 6, 2014 (last update: Jul 7, 2014)

INTRODUCTION

Have you ever wondered how the software programs that sort large numbers of items actually work? We take them for granted in our everyday computing tasks, but what exactly makes them function? Many software packages implement their own algorithms for this job. I have developed my own approach for handling this important task, and I will present a detailed explanation here of how it works.

AN OVERVIEW OF MY PROBLEM

In 1996 I was working on an inventory system for a customer, using procedural C programming to sort a large number of items - about 8,000 to 10,000. The sorting program I had at the time, something I had created in the early 1990s, could only sort up to 1,500 items. That older Borland C alphabetizing code is listed on my website.

In the mid-1990s, most IBM PC based computers ran an Intel 486, Intel Pentium, AMD K-5, or a similar processor. Their capability, and that of the hard disks of the day, seemed strained by a large-capacity sorting task like the one my application required. I had to start with the basic programming idea behind my procedural C sorting code from the early 1990s and somehow expand it to process larger data files. Designing the new sorting program to do most of its work on the mechanical hard disk would simply have traded one problem for another: sorting a large data file on a disk drive would cause a severe reduction in speed because of the slowness of the drive's mechanical moving parts. The customer would certainly have objected to the slower speed, and I would have been sent back to the drawing board to start over with something more acceptable.

Performing the sorting on the hard disk was obviously a road to nowhere with a large data file. The only other option I could think of was to do the bulk of the work in memory. By concentrating the data manipulation there, I could escape the slower world of the mechanical disk drive and pick up much more speed - especially important at the time, given the less powerful processors of the day. Another compelling reason for shifting the work into memory was that doing much of it on a disk that could potentially have any number of sector errors could create catastrophic problems, throwing a wrench into the sorting process and corrupting the output file. This is also possible when concentrating the work in memory, but it is less likely to occur.

MOVING FORWARD

I will begin discussing the "nuts and bolts" of how my algorithm works shortly. This new and improved alphabetizing code was later adapted to Borland C++, and I have included pieces of the code along with diagrams to help illustrate the logic flow. Please note that some of the C++ variables are referred to as "non-persistent" variables, while the "top" and "bott" variables are called "persistent" variables. This is because "non-persistent" variables are completely reset to new values during processing, while "persistent" variables are incremented or decremented at various times but never reset. You will also notice that I refer to various data structures I use, such as "grid", "name", and "stor", as conventional data structures. They are allocated within the boundaries of the 64K data segment prescribed by the small memory model I used in the programming; this distinguishes them from the far memory data structures "s", "s1" and "s2". The algorithm operates on binary fixed-width text files. I use these in my application development because they are easy to work with, but the algorithm can easily be adjusted to work with binary variable-width (delimited) text files as well.

THE MAIN OBJECTIVE: LARGER SORTING CAPACITY

Now that I had decided to focus most of the processing in memory, I had to come up with a way to allocate capacity for a large number of items. Borland C/C++ offered six memory models: tiny, small, medium, compact, large and huge. I had always used the small memory model, the default, having grown accustomed to it since I started C coding in 1990. In the small memory model, the code and data segments each have 64K of memory available. To sort large numbers of items, I would need a much larger space than a 64K data segment that also had to hold a variety of other data structures.

I decided to use the far side of the heap, or what is known as “far memory”. To set this up, I first included a necessary C++ header file for allocating far memory:

// alloc.h is needed for far memory data structures.
#include <alloc.h>
 


Then I declared 3 far memory pointers like this near the beginning of the sorting code:

// declare far memory pointers.
unsigned long int far *s;
unsigned long int far *s1;
unsigned long int far *s2;


I allocated them like this to handle up to 16,000 items:

// allocate far memory.
s = ( unsigned long int far * ) farmalloc(65000L);
s1 = ( unsigned long int far * ) farmalloc(65000L);
s2 = ( unsigned long int far * ) farmalloc(65000L);
 


The reason I set up 3 far memory data structures is that all of them are needed to manipulate the data with the new sorting algorithm I created. Each 65,000 byte allocation holds 16,250 four-byte unsigned long entries, giving me the space to manipulate up to 16,000 items. I could have allocated room for a larger number of data records, but this was more than enough to do the job at hand.
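
The listing above omits error handling. In Borland C++, farmalloc() returns NULL when the far heap cannot satisfy a request, so a guarded version of the allocation might look like the following sketch. The NULL checks, farfree() calls, and error message are my additions, not part of the original program:

// guarded version of the far memory allocation (my addition, not in
// the original program). requires alloc.h for farmalloc()/farfree()
// and stdlib.h for exit().
s = ( unsigned long int far * ) farmalloc(65000L);
s1 = ( unsigned long int far * ) farmalloc(65000L);
s2 = ( unsigned long int far * ) farmalloc(65000L);
if( s == NULL || s1 == NULL || s2 == NULL ) {
	// release whatever did get allocated, then bail out.
	if( s != NULL ) farfree(s);
	if( s1 != NULL ) farfree(s1);
	if( s2 != NULL ) farfree(s2);
	cout << "far heap exhausted - aborting sort\n";
	exit(1);
}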

ASSIGNING A NUMERICAL WEIGHT TO EACH ITEM IN THE DATA FILE

The processing starts by applying a mathematical formula to the first four characters of each item in the binary fixed-width text file. Consider the following numerical succession of powers of "10":

10,000,000 1,000,000 100,000 10,000 1,000 100 10 1

Next, remove the following powers of "10" from the above numerical succession:

1,000,000
10,000
100
10

Removing them leaves the following updated numerical succession:

10,000,000 100,000 1,000 1

The ASCII code of each character in a given item can range from 32 to 126. Each of these codes is "mapped" to a numerical value ranging from 0 to 94 - simply the ASCII code minus 32. The numerical values of the first four characters of a given item are each multiplied by the corresponding entry in the updated numerical succession, in left-to-right fashion.

This is the math formula I use in the programming to assign numerical weights to each item:

(10,000,000 X numerical value of character 1) +
(100,000 X numerical value of character 2) +
(1,000 X numerical value of character 3) +
(1 X numerical value of character 4)

This amount is equal to the numerical weight for this item. Consider the following example:

"SMITHSON"

"S" = Character 1
"M" = Character 2
"I" = Character 3
"T" = Character 4
"H" = Character 5
"S" = Character 6
"O" = Character 7
"N" = Character 8

ASCII code for Character 1: S = 83 which corresponds to numerical value 51 per the algorithm.
ASCII code for Character 2: M = 77 which corresponds to numerical value 45 per the algorithm.
ASCII code for Character 3: I = 73 which corresponds to numerical value 41 per the algorithm.
ASCII code for Character 4: T = 84 which corresponds to numerical value 52 per the algorithm.

Now, let’s plug the numerical values from this example into the math formula to yield the numerical weight for the above item:

(10,000,000 X 51) + (100,000 X 45) + (1,000 X 41) + (1 X 52) = 514,541,052
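
For readers who would like to experiment with the weighting scheme on its own, the formula condenses into a small standalone helper. This is my own illustrative sketch, not code taken from the sorting program:

// illustrative helper for the numerical weighting formula (my sketch,
// not from the original program). needs ctype.h for toupper().
unsigned long weight_of( const char *item )
{
	static const unsigned long place[4] = { 10000000UL, 100000UL, 1000UL, 1UL };
	unsigned long total = 0;
	for( int i = 0; i < 4; i++ ) {
		int kh = toupper( (unsigned char) item[i] );
		// same mapping as the program: ascii 32-126 becomes 0-94.
		int var1 = ( kh <= 32 ) ? 0 : ( kh - 32 );
		total = total + place[i] * (unsigned long) var1;
	}
	return total;
}

// example: weight_of("SMITHSON") returns 514541052, matching the
// hand calculation above.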

This math formula is something I came up with that I believed would be a good way to assign a numerical weight to each item. Here is a partial piece of the code that performs this task in the program:

.
.
.
.
.
// open the data file "job_part.txt" for initial processing.
infile.open("job_part.txt", ios::in | ios::binary);      
inn = infile.rdbuf();                                 

// initialize far memory data structure position variables.
// "top" is the first sequential item and “bott” is number
// of items less 1, or the last sequential item. "top" and
// "bott" are what i call persistent item markers in the far
// memory data structures. this is because they are never reset
// to their original values during the entire processing sequence.
// “top” is incremented and “bott” is decremented during processing
// to assist in facilitating the overall sorting.
top = 0;                  
bott = number_of_items - 1;       
// initialize the record counter variable, “aa”.      
aa = 0;                     

	// <----- start of processing loop "a".

	// this will calculate the highest and lowest numerical weights
	// for all items.
	do {                      

		// <----- start of processing loop "b".

		// this will generate numerical weights for the items. get a
		// character from the current file pointer position and prepare
		// for further processing.
		inn -> seekpos(fileoffset, ios::in);
		first_four_numbers = 0;
		do {                      
		// convert the character to uppercase for subsequent comparison.
		kh = infile.readByte();      
		khchar[0] = kh;             
		strupr(khchar);           
		kh = khchar[0];
		// assign numerical value range of 0 to 94 to variable “var1”
		// that i have mapped to the ascii character code range of 32 to 126.
		if( kh <= 32 ) var1 = 0;
		if( kh == 33 ) var1 = 1;
		if( kh == 34 ) var1 = 2;
		if( kh == 35 ) var1 = 3;
		if( kh == 36 ) var1 = 4;
		if( kh == 37 ) var1 = 5;
		if( kh == 38 ) var1 = 6;
.
.
.
.
.
		if( kh == 119 ) var1 = 87;
		if( kh == 120 ) var1 = 88;
		if( kh == 121 ) var1 = 89;
		if( kh == 122 ) var1 = 90;
		if( kh == 123 ) var1 = 91;
		if( kh == 124 ) var1 = 92;
		if( kh == 125 ) var1 = 93;
		if( kh == 126 ) var1 = 94;
			// multiply the numeric variable "var1" for first character
			// of item by 10,000,000 and add to total for this item in
			// far memory data structure "s".
			if( first_four_numbers == 0 ) *(s+aa) = *(s+aa) + ( var1 * 10000000 );    
				// multiply the numeric variable "var1" for second character
				// of item by 100,000 and add to total for this item in far
				// memory data structure "s".
				if( first_four_numbers == 1 ) *(s+aa) = *(s+aa) + ( var1 * 100000 );      
					// multiply the numeric variable "var1" for third character
					// of item by 1,000 and add to total for this item in far
					// memory data structure "s".
					if( first_four_numbers == 2 ) *(s+aa) = *(s+aa) + ( var1 * 1000 );        
						// multiply the numeric variable "var1" for fourth character
						// of item by 1 and add to total for this item in far memory
						// data structure "s".
						if( first_four_numbers == 3 ) *(s+aa) = *(s+aa) + ( var1 * 1 );           
                                                     
			// ( first item character numerical value X 10,000,000 ) + 
			// ( second item character numerical value X 100,000 ) + 
			// ( third item character numerical value X 1,000 ) + 
			// ( fourth item character numerical value X 1 ) = numerical 
			// weighted value for the item. this accumulated numerical
			// value is stored in far memory data structure "s" for the
			// corresponding item's sequential position in the data file.

					// the first 4 characters of each item are subjected to the
					// above math formula. they are given a numerical weighting,
					// which will later be used to segment the actual sorting process
					// into "top1" and "bott1" processing regions. in other words,
					// the sorting process is performed in a compartmentalized fashion.

		// <----- end of processing loop "b".

		first_four_numbers++;
		} while( first_four_numbers < 4 );                                      

// as we are moving from the top to the bottom of the data
// file, keep track of the lowest primary and highest primary
// accumulated numerical values as they are stored in far memory
// data structure "s". the value extremes (highest and lowest)
// are assigned to the primary high “up1” and primary low “low1”
// variables, respectively.                                                     
if( aa == 0 ) {                                        
low1 = *(s+aa);                                     
up1 = *(s+aa);                                   
}
if( *(s+aa) < low1 ) low1 = *(s+aa);               
if( *(s+aa) > up1 ) up1 = *(s+aa);                 
                                                     
	// move to next record of the data file "job_part.txt". the constant
	// record length in this case is “RECORDLENGTH” and fixed field width (SDF format).                                                 
	aa++;
	fileoffset = fileoffset + RECORDLENGTH;                      

	// <----- end of processing loop "a".

	} while( aa < number_of_items );

infile.close();                                     
.
.
.
.
.


The lowest and highest numerical weights are now known, once this math formula has been applied to all the items in the data file. All numerical weights are stored in the far memory data structure "s", in positions corresponding to the items' sequential positions in the unsorted data file (see Figure 1).

[Figure 1]

.
.
.
.
.
// if the lowest primary “low1” and highest primary “up1” variables are not equal,
// meaning there are records to be sorted, then proceed with processing.
if ( low1 != up1 ) {      

	// initialize high "main" processing loop exit variable "qqq" to stay within
	// the processing loop.                            
	qqq = 0;                    
	// initialize low "main" processing loop exit variable "sss" to stay within
	// the processing loop.
	sss = 0;                    

		// <----- start of “main” processing loop.

		// the "main" processing loop will redefine "top1" and "bott1" pairs of
		// processing regions until all items are sorted.
		do {                      
                            
		// assign primary high variable "up1" to secondary low variable "low2".
		low2 = up1;               
		// assign primary low variable "low1" to secondary high variable "up2".
		up2 = low1;               

		// the loop that follows will set boundaries and numerical values for
		// processing regions "top1" and "bott1". each of these processing regions
		// can handle up to 150 items with the same numerical weighting as calculated
		// above on the first 4 characters of the item. i need to mention that there
		// is a highly unlikely possibility that the algorithm could malfunction if a
		// specific numerical weight that has been assigned to a "top1" or "bott1"
		// processing region exceeds 150 items. this is because it would exceed the row
		// depth of the 2 dimensional “grid” conventional data structure which is heavily
		// used for sorting and insertion operations in the "top1" and "bott1" processing
		// regions. i have used this algorithm in many of my programs and have never seen
		// this happen. 

		// initialize item loop counting variable "ii" to 0.
		ii = 0;                     
		// initialize upper processing region "top1" non-persistent processing region.
		top1 = 0;                   
		// initialize lower processing region "bott1" non-persistent processing region.
		bott1 = 0;                  
		// initialize the start of upper processing region "start" non-persistent item
		// marker with "top" persistent item marker.
		start = top;                
		// initialize the start of lower processing region "finish" non-persistent item
		// marker with "bott" persistent item marker.                            
		finish = bott;              

			// <----- start of processing loop "c".                            

			do {                      

				// if the numerically weighted value for the given item in far memory data
				// structure “s” for index “ii” is equal to the lowest primary variable
				// “low1” and the high “main” processing loop exit variable “qqq” is set to
				// stay within the “main” processing loop, then assign the sequential file
				// position value for the given item “ii” to far memory data structure “s1”
				// in the array position denoted by the “top” persistent item marker.
				if( *(s+ii) == low1 && qqq == 0 ) {     
				*(s1+top) = ii;                      
				// next, increment the “top” persistent item marker and “top1” non-persistent
				// item marker by 1 each.
				top++;                               
				top1++;                              
				}

				// if the numerically weighted value for the given item in far memory data
				// structure “s” for index “ii” is equal to the highest primary variable “up1”
				// and the low “main” processing loop exit variable “sss” is set to stay within
				// the processing loop, then assign the sequential numerical file position value
				// for the given item “ii” to far memory data structure “s1” in the array position
				// denoted by “bott” persistent item marker.
				if( *(s+ii) == up1 && sss == 0 ) {    
				*(s1+bott) = ii;                     
				// next, decrement the “bott” persistent item marker and increment “bott1” non-persistent
				// item marker by 1 each.                                       
				bott--;                                
				bott1++;                              
				}

				// if the numerically weighted value for the given item in far memory data structure “s”
				// is greater than the lowest primary variable “low1” and is less than the lowest secondary
				// variable “low2”, then assign numerically weighted value for the given item for index “ii”
				// in far memory data structure “s” to the lowest secondary variable “low2”.
				if( *(s+ii) > low1 && *(s+ii) < low2 ) low2 = *(s+ii);    
                                                                
				// if the numerically weighted value for the given item in far memory data structure “s” is
				// less than the highest primary variable “up1” and is greater than the highest secondary
				// variable “up2”, then assign numerically weighted value for the given item for index “ii”
				// in far memory data structure “s” to the highest secondary variable “up2”.                                                               
				if( *(s+ii) < up1 && *(s+ii) > up2 ) up2 = *(s+ii);      
                                                                
			// increment item counting variable "ii" by 1 and loop sequentially through the data file until
			// the item counting variable is at the end. this looping routine will set the records of the
			// same numerically weighted value to be processed for sorting in processing region "top1" and
			// the same for processing region "bott1". the “start” non-persistent item marker is the beginning
			// of the “top1” processing region and “start+top1” is the end. the “finish” non-persistent item
			// marker is the end of the “bott1” processing region and “finish-bott1+1” is the beginning. 

			// <----- end of processing loop "c".                                                                

			ii++;                    
			} while( ii < number_of_items );           
                           
					// first, set the variable “r” equal to 0. if the secondary low variable “low2” and the
					// secondary high variable “up2” are equal and the low "main" processing loop exit variable
					// “sss” is set to stay within the "main" processing loop, then set the low "main" processing
					// loop exit variable “sss” to attempt to exit the "main" processing loop. also, set the
					// variable “r” equal to 1 so the very next logic structure will not be evaluated.
					r = 0;
					if( low2 == up2 && sss == 0 ) {     
					sss = 1;                            
					r = 1;
					}

					// if the variable “r” is still set to 0, then evaluate the next logic structure which is
					// described as follows. if the secondary low variable “low2” and the secondary high
					// variable “up2” are equal and the low "main" processing loop exit variable “sss” is set
					// to attempt to exit the "main" processing loop, then set the high "main" processing loop
					// exit variable “qqq” to attempt to exit the "main" processing loop.
					if( low2 == up2 && r == 0 && sss == 1 ) qqq = 1;        
                                                      
					// if the secondary low numerically weighted variable “low2” is greater than the secondary
					// high numerically weighted variable “up2”, then set the low "main" processing loop exit
					// variable “sss” and the high "main" processing loop exit variable “qqq” to both exit the
					// "main" processing loop, which will cease the redefinition of successive “top1” and “bott1”
					// processing regions.
					if( low2 > up2 ) {             
					qqq = 1;                       
					sss = 1;
					}

					// next, assign secondary low weighted variable “low2” to primary low weighted variable “low1”.
					low1 = low2;                 
					// next, assign secondary high weighted variable “up2” to primary high weighted variable “up1”.
					up1 = up2;                   
					.
					.
					.
					.
					.


In the above patch of code, the first thing that occurs is a check of whether the lowest and highest numerical weights are equal, comparing the lowest primary variable "low1" to the highest primary variable "up1". If they are equal, processing is aborted, since every item would have the same numerical weight, meaning the first 4 characters of all items are identical. This would be highly unusual - such a file would already be nearly sorted to begin with, and the probability of ever encountering one is remote. In that case, the original data file would be left intact rather than reconstructed at the end. If they are unequal, "low1" and "up1" represent two different sets of numerically weighted items, and processing continues with the commencement of the "main" processing loop.

A TALE OF TWO FAR MEMORY PROCESSING REGIONS: “TOP1” AND “BOTT1”

The program cycles around a “do-while loop” which I call the “main” processing loop. I use 2 regions of far memory to facilitate the sorting process, which I call the “top1” and “bott1” processing regions. Each of these will be repeatedly redefined with each loop through the “main” processing loop. This is the “segmented mechanism” which drives the sorting process.

Both of these processing regions actually begin as numerical counters and only later come to delimit regions of far memory. First, both are initialized to 0. Then "top1" is incremented by 1 for each item in the far memory data structure "s" whose weight equals the lowest primary variable "low1" (the lowest current numerical weight), and "bott1" is incremented by 1 for each item whose weight equals the highest primary variable "up1" (the highest current numerical weight). This is done in the above code. Also, the "main" processing loop exit variables "qqq" and "sss" cannot be set to exit the "main" processing loop while both processing regions need to be redefined to process unsorted items. In other words, "qqq" must be 0 for "top1" to include the lowest current numerical weight in the processing region being defined, and "sss" must be 0 for "bott1" to include the highest current numerical weight in its processing region, which is also being defined.

One other thing to notice in the previous code is the pair of item markers "start" and "finish". "start" is assigned the value in "top", and "finish" is assigned the value in "bott". "start" is a "non-persistent" item marker that records where the "top1" processing region begins, and "finish" is a "non-persistent" item marker that records where the "bott1" processing region ends; together with the "top1" and "bott1" counts they give each region's depth. Both "top" and "bott" are "persistent" item markers that are advanced along with "top1" and "bott1". (See Figures 7 and 8 for a visual representation of the "top1" and "bott1" processing regions.)

[Figures 7 and 8]

After the redefinition process is completed, the "top1" processing region encompasses the items bearing the lowest current numerical weight, and the "bott1" processing region likewise encompasses the items bearing the highest current numerical weight. The algorithm uses both processing regions to facilitate the actual sorting process, the specifics of which I will not get into in this article; to see them, refer to the "improved alphabetizing code" hyperlink near the beginning of the article. After the sorting has been performed, the program loops around the "main" processing loop and proceeds to redefine new pairs of "top1" and "bott1" processing regions. (See Figure 2.)

[Figure 2]

The two processing regions draw closer to each other with every pass through the "main" processing loop, moving toward the center of the far memory data structure "s" as they are redefined. Each new "top1" processing region carries a higher numerical weight than its predecessor "top1" region, and each new "bott1" processing region carries a lower numerical weight than its predecessor "bott1" region. Please refer to Figures 3, 4, 5 and 6 for a visual illustration of the progression of the algorithm as successive "top1" and "bott1" processing regions are redefined with each pass through the "main" processing loop.

[Figures 3, 4, 5 and 6]

Notice what happens in Figure 6 once the processing of successive "top1" and "bott1" regions reaches the middle of the far memory data structure "s". The final "top1" processing region, holding the highest of the low numerical weights, lies adjacent to the final "bott1" processing region, holding the lowest of the high numerical weights. Processing ceases at this point because there are no more items left to sort. The "main" processing loop is then exited, and the new sorted array of item positions stored in far memory data structure "s1" is written to a new data file. (See Figures 9 and 10.)
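
The write-back pass itself is not reproduced in this article, but conceptually it just walks "s1" from position 0 to "number_of_items - 1" and copies each record from the original file into the new one. Here is a rough sketch under the same fixed record width assumptions; the "outfile" stream and the output file name are mine, for illustration only:

// rough sketch of the final write-back pass (my illustration; the
// original program's version is not shown in this article). "outfile"
// and the output file name are hypothetical.
ofstream outfile("job_sorted.txt", ios::out | ios::binary);
char record[RECORDLENGTH];
for( aa = 0; aa < number_of_items; aa++ ) {
	// *(s1+aa) holds the original sequential position of the item
	// that belongs in sorted position "aa".
	inn -> seekpos( *(s1+aa) * RECORDLENGTH, ios::in );
	for( v = 0; v < RECORDLENGTH; v++ ) record[v] = infile.readByte();
	outfile.write( record, RECORDLENGTH );
}
outfile.close();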

[Figures 9 and 10]

Here, I want to describe the ways the "main" processing loop can be exited before the data is written back to a newly sorted data file. As the processing draws to a close in the middle of the far memory data structure "s", it will not necessarily end with an even pair of final "top1" and "bott1" processing regions. It can also near completion with either the "top1" or the "bott1" processing region having its "main" processing loop exit variable set to attempt to exit. To be more specific, the "top1" processing region could have its exit variable "qqq" set to 1, meaning there are no more "top1" regions to be redefined, while the "bott1" processing region still has its exit variable "sss" set to 0, meaning there is another "bott1" processing region to be redefined and sorted. The opposite can also occur.

AN ANALOGY THAT MAY HELP CLARIFY THE LOGIC FLOW

Since this narrative may be overwhelming for some readers, I would like to take a page from American history that may help create a better understanding of how my algorithm works.

During the latter part of the 19th century, the United States turned its attention to nation building. Connecting the vast expanse of North America by way of a coast-to-coast railroad became a national priority. This was the start of America’s first Transcontinental Railroad.

Two railroad companies, the Union Pacific and the Central Pacific, spearheaded this ambitious and daunting task. The Central Pacific began building its railway eastward from Sacramento, California, while the Union Pacific began construction work heading westward from Omaha, Nebraska.

Crews in the east and west worked relentlessly for six years. On April 28, 1869, the Central Pacific's construction gang of Chinese and Irish workers laid ten miles of railroad track in a single day, the result of a $10,000 bet that it could actually be done. On May 10, 1869, the construction was completed at Promontory Summit in the Utah Territory. The Union Pacific's No. 119 engine and the Central Pacific's No. 60 engine, the Jupiter, were drawn up face to face, separated by the width of a single railroad tie. At the Golden Spike ceremony, three spikes were driven in to connect the two railways: one gold, one silver, and one a composite of gold, silver and iron. Travel time between the east and west coasts of the United States was reduced from four to six months to only six days by railroad!

Now, the progression of my algorithm is quite similar to the construction of America's first Transcontinental Railroad when you take a moment to really think about it. As the algorithm moves along, it begins to resemble two work crews gradually progressing toward a meeting point in the middle of the allocated far memory space, which is like a long stretch of terrain awaiting the arrival of "sorting construction labor", so to speak. The "top1" and "bott1" processing regions are like "two construction gangs" that commence their "sorting work" at opposite ends of the allocated memory space. Each works hard to sort items of the same numerical weight, as previously described, while constantly moving closer and closer to the other. After the program loops around the "main" processing loop and new "top1" and "bott1" processing regions have been defined, the process repeats itself. Finally, the "Golden Spike ceremony" occurs when the "top1" and "bott1" processing regions are adjacent to one another somewhere near the middle of the allocated far memory segment - Promontory Summit in the Utah Territory, if I may use that image to foster a better understanding of my algorithm.

A POTENTIAL PROBLEM AND A REMEDY

Here, I would like to expand on a potential problem with my algorithm and a recommended solution that should take care of it. The 2 dimensional "grid" conventional data structure is used extensively to manipulate items in the "top1" and "bott1" processing regions. It is designed to hold up to 150 items of the same numerical weight. You need to be conscious of how much row depth you give the 2 dimensional "grid" conventional data structure, so that it and the other conventional data structures taken together don't breach the 64K data segment of the small memory model in use. The problem arises if there are more than 150 items in a "top1" or "bott1" processing region. The algorithm won't abort or malfunction; rather, it will include only the first 150 items in the processing region. I never really tried to address this potential snafu, because it is highly unlikely to occur in the first place. There would have to be more than 150 "Smiths" or "Joneses" to trigger the glitch - something that could conceivably happen in, say, a voter registration verification data file containing a large number of identical last names.

A good way to correct this is to declare a fourth far memory data structure of the same size as each of the first 3. It would replace the 2 dimensional "grid" conventional data structure and perform its job, yet it would always be large enough to hold all the items of a particular numerical weight, because it would be allocated to hold as many items as are in the entire data file.
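
In concrete terms, the remedy is one more allocation alongside the original three. A sketch of just the declaration and allocation follows; this is my illustration, "s3" is a hypothetical name, and the indexing logic that would replace "grid" is not shown:

// declare and allocate a fourth far memory data structure (my
// suggested remedy, not in the original program). sized like "s",
// "s1" and "s2", it has one slot per item in the entire data file,
// so no group of identically weighted items can overflow it the way
// the 150-row "grid" structure can.
unsigned long int far *s3;
s3 = ( unsigned long int far * ) farmalloc(65000L);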

JUST SAY “NO” TO REDUNDANT, SPEED ROBBING CODE

Many of you may be wondering by now about the speed of the algorithm. I tested it with a binary fixed record width text file containing 10,959 part numbers. On a Gateway Pentium 4 tower with an old 6 GB Quantum Bigfoot hard drive, the processing took a little over 3 seconds. When it was run on a Dell M5030 laptop with an AMD V160 processor at 2.4 GHz, it took about 1 second. There are some areas in the "do-while" loop processing that could be redesigned or eliminated to further increase the processing speed, since less work would be required to achieve the same result. After I finished this in 1996 it seemed to work in a reasonable amount of time, so I didn't go back and try to optimize it further. Here I will elaborate on some selected areas in the code that could be improved to yield more processing speed.

	.
	.
	.
	.
	.	
		// convert the character to uppercase for subsequent comparison.
		kh = infile.readByte();      
		khchar[0] = kh;             
		strupr(khchar);           
		kh = khchar[0];
		// assign numerical value range of 0 to 94 to variable “var1”
		// that i have mapped to the ascii character code range of 32 to 126.
		if( kh <= 32 ) var1 = 0;
		if( kh == 33 ) var1 = 1;
		if( kh == 34 ) var1 = 2;
		if( kh == 35 ) var1 = 3;
		if( kh == 36 ) var1 = 4;
		if( kh == 37 ) var1 = 5;
		if( kh == 38 ) var1 = 6;
		.
.
.
.
.
		if( kh == 119 ) var1 = 87;
		if( kh == 120 ) var1 = 88;
		if( kh == 121 ) var1 = 89;
		if( kh == 122 ) var1 = 90;
		if( kh == 123 ) var1 = 91;
		if( kh == 124 ) var1 = 92;
		if( kh == 125 ) var1 = 93;
		if( kh == 126 ) var1 = 94;
	.
	.
	.
	.
	.


This block of code that tests for ASCII characters 32 through 126 could be replaced with a single arithmetic expression, since the mapping is simply the ASCII code minus 32 (clamped to 0 for codes at or below 32). That would eliminate the entire chain of repetitive conditional "if-then" comparisons while producing the same integer value for the math formula that computes numerical weights for each item.
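
As a concrete sketch, the whole "if" chain collapses to a single statement (my suggested simplification, not code from the original program):

// equivalent replacement for the "if" chain above (my suggestion).
// ascii codes at or below 32 clamp to 0; codes 33 through 126 map
// to 1 through 94, exactly as the original comparisons do. the data
// is assumed to stay within ascii 126.
var1 = ( kh <= 32 ) ? 0 : ( kh - 32 );

Here is another place for adding some speed: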

	.
	.
	.
	.
	.
		// set processing loop “2” counter variable “ii” to the non-persistent "start" item
		// marker denoting the beginning of the "top1" processing region.
		ii = start;           
       
		// <----- start of processing loop "2".                               

		do {                       

		// retrieve and calculate file stream position offset of the item for processing loop
		// “2” counter "ii" in the far memory data structure “s1”.
		far_memory_contents_2 = *(s1+ii);               
		far_memory_contents_2 = far_memory_contents_2 * RECORDLENGTH;    
                                 
		// use calculated file stream position offset "far_memory_contents_2" to retrieve all
		// characters of the item except the first 4 characters into the "name" conventional
		// data structure. compare this to the 2 dimensional "grid" conventional data structure
		// set at lower index counter "lowx". if they are equal and high processing loop “1”
		// exit variable "qqqx" is set to stay within processing loop "1", then assign the
		// sequential item position in far memory data structure "s1" for	 processing loop “2”
		// counter variable “ii” to far memory data structure "s2" for non-persistent index
		// marker "topx". next, increment non-persistent index marker "topx" by 1.
		r = 0;
		inn -> seekpos(far_memory_contents_2 + 4, ios::in);                          
		for(v = 0; v < FIELDLENGTH - 4; v++) name[v] = infile.readByte();  
		strupr(name);                                             
                                                            
		for(v = 0; v < FIELDLENGTH - 4; v++) {                          
		if( name[v] != grid[v][lowx] ) r = 1;                           
		}                                                         
                                                            
		if( r == 0 && qqqx == 0 ) {                                 
		*(s2+topx) = *(s1+ii);                                    
		topx++;
		}

		// retrieve and calculate file stream position offset of the item for processing loop
		// “2” counter "ii" in the far memory data structure “s1”.
		far_memory_contents_2 = *(s1+ii);              
		far_memory_contents_2 = far_memory_contents_2 * RECORDLENGTH;   
                                
		// use calculated file stream position offset "far_memory_contents_2" to retrieve all
		// characters of the item except the first 4 characters into the "name" conventional
		// data structure. compare this to the 2 dimensional "grid" conventional data structure
		// set at higher index counter "upx". if they are equal and low processing loop “1” exit
		// variable "sssx" is set to stay within processing loop "1", then assign the sequential
		// item position in far memory data structure "s1" for processing loop “2” counter variable
		// “ii” to far memory data structure "s2" for non-persistent index marker "bottx". next,
		// decrement non-persistent index marker "bottx" by 1.
		r = 0;
		inn -> seekpos(far_memory_contents_2 + 4, ios::in);                          
		for(v = 0; v < FIELDLENGTH - 4; v++) name[v] = infile.readByte();  
		strupr(name);                                             
                                                            
		for(v = 0; v < FIELDLENGTH - 4; v++) {                          
		if( name[v] != grid[v][upx] ) r = 1;                            
		}                                                         
                                                            
		if( r == 0 && sssx == 0 ) {                                 
		*(s2+bottx) = *(s1+ii);                                   
		bottx--;
		}

		// increment processing loop “2” counter variable “ii” by 1. processing loop “2” counter
		// variable “ii” cycles through the non-persistent "start+top1" depth for the "top1"
		// processing region.
		ii++;                              

		// <----- end of processing loop "2".

		} while( ii < start + top1 );
	.
	.
	.
	.
	.


In the "top1" and "bott1" processing sections of the code, there is a patch of code enclosed by processing loop "2". In two places, the "far_memory_contents_2" file stream position offset is calculated and then used to retrieve the item's data into the "name" conventional data structure for comparison against two different rows of the 2 dimensional "grid" conventional data structure. The offset only needs to be calculated once to achieve the same result; in fact, the "name" conventional data structure only needs to be filled once per pass of processing loop "2" instead of twice.
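
Hoisting that duplicated work out, the body of processing loop "2" could compute the offset and fill "name" once, then perform both "grid" comparisons. A condensed sketch follows (my rearrangement of the code above; the behavior is unchanged):

// compute the file stream position offset once per pass (my
// rearrangement of the code above; behavior is unchanged).
far_memory_contents_2 = *(s1+ii) * RECORDLENGTH;

// read the item's characters past the first 4 into "name" once.
inn -> seekpos(far_memory_contents_2 + 4, ios::in);
for(v = 0; v < FIELDLENGTH - 4; v++) name[v] = infile.readByte();
strupr(name);

// compare against the "lowx" row of "grid" for the "top1" region.
r = 0;
for(v = 0; v < FIELDLENGTH - 4; v++) if( name[v] != grid[v][lowx] ) r = 1;
if( r == 0 && qqqx == 0 ) {
	*(s2+topx) = *(s1+ii);
	topx++;
}

// compare against the "upx" row of "grid" for the "bott1" region.
r = 0;
for(v = 0; v < FIELDLENGTH - 4; v++) if( name[v] != grid[v][upx] ) r = 1;
if( r == 0 && sssx == 0 ) {
	*(s2+bottx) = *(s1+ii);
	bottx--;
}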

CONCLUSION

I have used this sorting algorithm in many C++ applications, typically for sorting part numbers or customer names that are to be previewed as reports. It has proven itself to be reliable as well as fast. I have also adapted it for sorting numbers and dates. If you would like to learn more about my developer skills, then please visit my software developer website. Additionally, be sure to check out my computer repair services and my "fix my computer" technical tips.


References:

http://www.accelerationwatch.com/promontorypoint.html

http://en.wikipedia.org/wiki/Promontory,_Utah

http://www.history.com/topics/transcontinental-railroad