There is a new feature in Suprtool 5.8.10 and higher that allows a user to add a number of months to a given date. Let's say you have to follow up with a customer in three months; a simple Suprtool task can do this:
base custdb,1,;
get customers
if custno="12345"
item follow-up-date,date,ccyymmdd
update
ext follow-up-date=$month(follow-up-date,+3)
xeq
This was added at the direct request of one of our customers. One feature of $month is that if the target month does not have the same number of days, Suprtool will find the closest valid date. For example, if +1 is applied to a date such as August 31st, the naive result would be "September 31", which is of course not a real date; Suprtool adjusts the result to the correct date of September 30th.
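Here is a minimal sketch of that adjustment in action; the file and field names (datefile, mydate, newdate) are hypothetical:

in datefile
item mydate,date,ccyymmdd
def newdate,1,4,double
item newdate,date,ccyymmdd
ext mydate
ext newdate=$month(mydate,+1)   { e.g. 20160831 becomes 20160930 }
list
xeq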
Occasionally we get requests to summarize data or manipulate reports so they can be re-organized. One common manipulation is to stick the contents of one line onto the end of another, based on a value in the first line. The Qedit commands to do this are fairly simple:
Append "~" "Transaction"
Glue "~"
ch "~" " @
Here is the data:
qux/lq all
Transaction Date: 20160908
Balance: 1245.85
Transaction Date: 20151213
Balance: 908.54
Transaction Date: 20121221
Balance 125.44
Here is the code in action:
qux/append "~" "Transaction"
1 Transaction Date: 20160908~
3 Transaction Date: 20151213~
5 Transaction Date: 20121221~
3 lines appended to
qux/glue "~"
1 Transaction Date: 20160908~Balance: 1245.85
3 Transaction Date: 20151213~Balance: 908.54
5 Transaction Date: 20121221~Balance 125.44
3 lines Glue-updated
qux/ch "~" " @
1 Transaction Date: 20160908 Balance: 1245.85
3 Transaction Date: 20151213 Balance: 908.54
5 Transaction Date: 20121221 Balance 125.44
3 lines changed
The Append command sticks a tilde on the end of each Transaction line, and the Glue command glues the line after the Transaction line onto the end. A tilde is used since it rarely appears in normal files; you can check that the tilde does not already occur in the file with a simple ch "~"~" @ and see if there are any occurrences.
Occasionally we get updates to customers' e-mail addresses, and often these e-mails have bad data in them. Over the years I have tracked the bad data down to Tab, Carriage Return or Line Feed characters. Luckily we have Suprtool to fix the data in very few commands:
base custdb,1,;
get d-custdata
clean "^9:^13"
if $findclean(e-mail-address)
update
ext e-mail-address=$clean(e-mail-address)
xeq
We specify decimal 9 through decimal 13, which covers Tab through Line Feed; we use $findclean to find the affected entries, and we update by running the extract of the e-mail address through the $clean function. The data gets fixed and updated in one easy step.
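The same pattern extends to other troublesome characters. A sketch of a variant that also strips embedded nulls (adding the "^0" value is my own assumption, not part of the original tip):

base custdb,1,;
get d-custdata
clean "^0","^9:^13"   { nulls plus Tab through Line Feed }
if $findclean(e-mail-address)
update
ext e-mail-address=$clean(e-mail-address)
xeq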
One of the more common questions we get is how to work with a portion of the data in a given field. Specifically, let's say you want to define the last four bytes of a given 20-byte field.
So with 20 bytes, the last four bytes begin at byte 17:
12345669132135689071     Data
12345678901234567890     Byte Position

So starting at offset 17 for a length of 4, the data extracted should be 9071.
>in atminfo
>list
>xeq
>IN atminfo (0) >OUT $NULL (0)
ATM-CARD-NUMBER = 12345669132135689071
IN=1, OUT=1. CPU-Sec=1. Wall-Sec=1.

>in atminfo
>def last-4,atm-card-number[17],4
>ext last-4
>list
>xeq
>IN atminfo (0) >OUT $NULL (0)
LAST-4          = 9071
IN=1, OUT=1. CPU-Sec=1. Wall-Sec=1.

So to deconstruct the Define command: you define a new name associated with the field you want a portion of, and you give the starting offset and the length.
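The same pattern works for any slice of the field. A hypothetical sketch pulling the first six digits instead:

>in atminfo
>def first-6,atm-card-number[1],6   { first six bytes of the card number }
>ext first-6
>list
>xeq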
Another request: insert a new line before, and a new line after, every occurrence of a given line. For example:

This is a line.

would become:
Before Line
This is a line.
After Line

Now normally, in host-based Qedit, this is extremely easy using a couple of tricks.
ch "This is a line."Before Line~This is a line.~After Line" @
divide "~" @
ch "~This is a line."This is a line." @
divide "~" @
ch "~After Line"After Line" @

Using the tilde character (again, because it is typically unique) along with Change and Divide, we end up with a new line before and a new line after. However, the customer asking for this has Qedit for Windows and the Qedit server only, which doesn't allow host-based Qedit commands. Following is a Qedit Scripting Language (QSL) script that will do just that:
-- Copyright 1996-2015 Robelle Solutions Technology Inc.
-- Version 1.10 May 21, 2015
sub create_at (line,column)
  result = {};
  result.line = line;
  result.column = column;
  return result;
endsub;
result = dialog("What string do you want to find?",1, "String to find.");
if result.button = 1 then
  userString = result.enteredText;
endif
result = dialog("What line should be before?",1, "Line to add before string.");
if result.button = 1 then
  lineBefore = result.enteredText;
endif
result = dialog("What line should be after?",1, "Line to add after string.");
if result.button = 1 then
  lineAfter = result.enteredText;
endif
theFile = qedit.activefile;
findresult = theFile.find(string: userString);
repeat while findresult
  row = theFile.lastfoundline - 1;
  where = create_at(row,1);
  theFile.insert(at: where, text: lineBefore);
  row = theFile.lastfoundline + 1;
  where = create_at(row,1);
  theFile.insert(at: where, text: lineAfter);
  findresult = theFile.find(string: userString);
endrepeat
theFile = dialog("End of Script");
This next task converts a ccyymmdd date into a "Julian" ccyyddd date, in two steps.

:comment Step One: convert date to have a century field and a
:comment beginning-of-year date
in datefile
def boy,1,4,double
def century,1,4,double
item boy,date,ccyymmdd
item mydate,date,ccyymmdd
ext mydate
ext boy=$truncate(mydate / 10000) * 10000 + 0101
ext century=$truncate(mydate / 10000)
out workfile,link
xeq

Step Two: format and determine the number of days since the Beginning of Year.

in workfile
def diff,1,4,double
def newdate,1,4,double
ext mydate
ext boy
ext century
ext diff=$days(mydate) - $days(boy) + 1
ext newdate=(century * 1000) + ($days(mydate) - $days(boy) + 1)
out convert,link
xeq

You can see the results below, including the starting date; you don't need to include all the fields in the result file:
>IN convert (0) >OUT $NULL (0)
MYDATE  = 20141213   BOY = 20140101   CENTURY = 2014   DIFF = 347   NEWDATE = 2014347
>IN convert (1) >OUT $NULL (1)
MYDATE  = 20131221   BOY = 20130101   CENTURY = 2013   DIFF = 355   NEWDATE = 2013355
>IN convert (2) >OUT $NULL (2)
MYDATE  = 20150321   BOY = 20150101   CENTURY = 2015   DIFF = 80    NEWDATE = 2015080
>IN convert (3) >OUT $NULL (3)
MYDATE  = 20100904   BOY = 20100101   CENTURY = 2010   DIFF = 247   NEWDATE = 2010247
IN=4, OUT=4. CPU-Sec=1. Wall-Sec=1.

The basis of this was to take the current date, make up a Beginning of Year date from it, and use $days to get the difference in days between the two. Thus the "Julian date", or ccyyddd, is the ccyy portion times 1000, plus the number of days difference (+1) from the beginning of the year.
The new Translate command and $translate function let you scramble the characters in a given decimal range:

Translate "^65:^90"
ext name=$translate(name)

The extract command will change "Neil Armstrong" to "Neil Zrmstrong". We've also provided a sample translation table, which will translate many readable characters to other readable characters just by using:
Translate Tounread

You can see an example below:
>get m-supplier
>ext supplier-name
>list stan
>xeq
May 02, 2014  8:08        Base STORE.TESTSD   Set M-SUPPLIER       Page 1
SUPPLIER-NAME
Makita Canada Inc
SKIL Power Tools
Black & Decker
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.

>get m-supplier
>translate tounread
>update
>ext supplier-name=$translate(supplier-name)
>xeq
Update all records from the M-SUPPLIER dataset [no]: yes
Warning: Using DBGET for the input records
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.

>get m-supplier
>ext supplier-name
>list stan
>xeq
May 02, 2014  8:12        Base STORE.TESTSD   Set M-SUPPLIER       Page 1
SUPPLIER-NAME
Npzxep Qpmpsp Wmr
HYWZ Klhtc Gllod
Poprz ; Rtrztc
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.

>get m-supplier
>translate toread
>ext supplier-name=$translate(supplier-name)
>list
>list stan
Warning: Previous >LIST options reset
>xeq
May 02, 2014  8:12        Base STORE.TESTSD   Set M-SUPPLIER       Page 1
SUPPLIER-NAME
Makita Canada Inc
SKIL Power Tools
Black & Decker
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.

This new feature is not meant as a method to encrypt data, but it does allow you to quickly make test data unreadable.
I was recently asked how you could obfuscate data in a byte-type field AND be able to bring it back. Short of writing $encrypt and $decrypt functions, which may be in the works, I came up with the following.
Note the Product-desc field:

/RUN SUPRTOOL.PUB.ROBELLE
SUPRTOOL/Copyright Robelle Solutions Technology Inc. 1981-2014.
(Version 5.6.12 Internal) MON, APR 7, 2014, 3:46 PM
Type H for help.
>BASE STORE,5,READER
>GET M-PRODUCT
>LIST
>XEQ
>GET M-PRODUCT (4) >OUT $NULL (0)
PRODUCT-DESC = Skil 3/8" Variable Speed Drill   PRODUCT-MODEL = #6523     PRODUCT-NO = 50531501
>GET M-PRODUCT (39) >OUT $NULL (1)
PRODUCT-DESC = B&D Router                       PRODUCT-MODEL = #7613-04  PRODUCT-NO = 50522001
>GET M-PRODUCT (49) >OUT $NULL (2)
PRODUCT-DESC = Skil Var. Sp. Auto-Scroll Saw    PRODUCT-MODEL = #4560     PRODUCT-NO = 50533001
>GET M-PRODUCT (50) >OUT $NULL (3)
PRODUCT-DESC = Skil 8 1/2" Circular Saw         PRODUCT-MODEL = #5665     PRODUCT-NO = 50532501
>GET M-PRODUCT (52) >OUT $NULL (4)
PRODUCT-DESC = B&D Cordless Screwdriver         PRODUCT-MODEL = #9018-04  PRODUCT-NO = 50521001
>GET M-PRODUCT (103) >OUT $NULL (5)
PRODUCT-DESC = Makita 8 1/4" Circular Saw       PRODUCT-MODEL = #5008NB   PRODUCT-NO = 50512501
>GET M-PRODUCT (146) >OUT $NULL (6)
PRODUCT-DESC = B&D Variable Speed Jig Saw       PRODUCT-MODEL = #7548-04  PRODUCT-NO = 50523001
>GET M-PRODUCT (164) >OUT $NULL (7)
PRODUCT-DESC = Makita 1/2" Router               PRODUCT-MODEL = #3601B    PRODUCT-NO = 50512001
>GET M-PRODUCT (171) >OUT $NULL (8)
PRODUCT-DESC = Makita 3/8" Var. Speed Drill     PRODUCT-MODEL = #DP3730   PRODUCT-NO = 50511501
>GET M-PRODUCT (221) >OUT $NULL (9)
PRODUCT-DESC = Skil Router                      PRODUCT-MODEL = #1835     PRODUCT-NO = 50532001
>GET M-PRODUCT (231) >OUT $NULL (10)
PRODUCT-DESC = B&D 7 1/4" Circular Saw          PRODUCT-MODEL = #5728     PRODUCT-NO = 50522501
>GET M-PRODUCT (241) >OUT $NULL (11)
PRODUCT-DESC = B&D 3/8" Variable Speed Drill    PRODUCT-MODEL = #P1149    PRODUCT-NO = 50521501
>GET M-PRODUCT (243) >OUT $NULL (12)
PRODUCT-DESC = Makita 1" Jig Saw                PRODUCT-MODEL = #4300 BV  PRODUCT-NO = 50513001
IN=13, OUT=13. CPU-Sec=1. Wall-Sec=1.

I made a simple translate file, with the intended use being in a table with data. The Key would match the original data, and the Trans field would be what the data would become.
>in trans1
>list
>xeq

KEY  TRANS     KEY  TRANS     KEY  TRANS     KEY  TRANS
A    S         a    s         1    9         /    *
B    T         b    t         2    8         &    !
C    U         c    u         3    7         .    )
D    V         d    v         4    6         "    :
E    W         e    w         5    5         _    |
F    X         f    x         6    4         -    }
G    Y         g    y         7    3
H    Z         h    z         8    2
I    A         i    b         9    1
J    B         j    c
K    C         k    d
L    D         l    e
M    E         m    f
N    F         n    g
O    G         o    h
P    H         p    i
Q    I         q    j
R    J         r    k
S    K         s    l
T    L         t    m
U    M         u    n
V    N         v    o
W    O         w    a
X    P         x    p
Y    Q         y    q
Z    R         z    r

IN=67, OUT=67. CPU-Sec=1. Wall-Sec=1.

rename trans1,translte

I use this file in a table to do the translation:
>BASE STORE,1,WRITER
>get m-product
>TABLE MYTRANS,KEY,FILE,TRANSLTE,DATA(TRANS),HOLD
There are 67 entries in MYTRANS
>DEF A,PRODUCT-DESC[1],1,BYTE
>DEF B,PRODUCT-DESC[2],1,BYTE
>DEF C,PRODUCT-DESC[3],1,BYTE
>DEF D,PRODUCT-DESC[4],1,BYTE
>DEF E,PRODUCT-DESC[5],1,BYTE
>DEF F,PRODUCT-DESC[6],1,BYTE
>DEF G,PRODUCT-DESC[7],1,BYTE
>DEF H,PRODUCT-DESC[8],1,BYTE
>DEF I,PRODUCT-DESC[9],1,BYTE
>DEF J,PRODUCT-DESC[10],1,BYTE
>DEF K,PRODUCT-DESC[11],1,BYTE
>DEF L,PRODUCT-DESC[12],1,BYTE
>DEF M,PRODUCT-DESC[13],1,BYTE
>DEF N,PRODUCT-DESC[14],1,BYTE
>DEF O,PRODUCT-DESC[15],1,BYTE
>DEF P,PRODUCT-DESC[16],1,BYTE
>DEF Q,PRODUCT-DESC[17],1,BYTE
>DEF R,PRODUCT-DESC[18],1,BYTE
>DEF S,PRODUCT-DESC[19],1,BYTE
>DEF T,PRODUCT-DESC[20],1,BYTE
>DEF U,PRODUCT-DESC[21],1,BYTE
>DEF V,PRODUCT-DESC[22],1,BYTE
>DEF W,PRODUCT-DESC[23],1,BYTE
>DEF X,PRODUCT-DESC[24],1,BYTE
>UPDATE
>ext a=$lookup(mytrans,a,trans)
>ext b=$lookup(mytrans,b,trans)
>ext c=$lookup(mytrans,c,trans)
>ext D=$lookup(mytrans,D,trans)
>ext E=$lookup(mytrans,E,trans)
>ext F=$lookup(mytrans,F,trans)
>ext G=$lookup(mytrans,G,trans)
>ext H=$lookup(mytrans,H,trans)
>ext I=$lookup(mytrans,I,trans)
>ext J=$lookup(mytrans,J,trans)
>ext K=$lookup(mytrans,K,trans)
>ext L=$lookup(mytrans,L,trans)
>ext M=$lookup(mytrans,M,trans)
>ext N=$lookup(mytrans,N,trans)
>ext O=$lookup(mytrans,O,trans)
>ext P=$lookup(mytrans,P,trans)
>ext Q=$lookup(mytrans,Q,trans)
>ext R=$lookup(mytrans,R,trans)
>ext S=$lookup(mytrans,S,trans)
>ext T=$lookup(mytrans,T,trans)
>ext U=$lookup(mytrans,U,trans)
>ext V=$lookup(mytrans,V,trans)
>ext W=$lookup(mytrans,W,trans)
>ext X=$lookup(mytrans,X,trans)
>xeq
Update all records from the M-PRODUCT dataset [no]: YES
Warning: Using DBGET for the input records
IN=13, OUT=13. CPU-Sec=1. Wall-Sec=1.

Now you cannot really read the data:
>GET M-PRODUCT
>LIST
>XEQ
>GET M-PRODUCT (4) >OUT $NULL (0)
PRODUCT-DESC = Kdbe 7*2: Nskbstew Kiwwv Drill   PRODUCT-MODEL = #6523     PRODUCT-NO = 50531501
>GET M-PRODUCT (39) >OUT $NULL (1)
PRODUCT-DESC = T!V Jhnmwk                       PRODUCT-MODEL = #7613-04  PRODUCT-NO = 50522001
>GET M-PRODUCT (49) >OUT $NULL (2)
PRODUCT-DESC = Kdbe Nsk) Ki) Snmh}Kukhel Saw    PRODUCT-MODEL = #4560     PRODUCT-NO = 50533001
>GET M-PRODUCT (50) >OUT $NULL (3)
PRODUCT-DESC = Kdbe 2 9*8: Ubkunesk Ksa         PRODUCT-MODEL = #5665     PRODUCT-NO = 50532501
>GET M-PRODUCT (52) >OUT $NULL (4)
PRODUCT-DESC = T!V Uhkvewll Kukwavkbowk         PRODUCT-MODEL = #9018-04  PRODUCT-NO = 50521001
>GET M-PRODUCT (103) >OUT $NULL (5)
PRODUCT-DESC = Esdbms 2 9*6: Ubkunesk Kaw       PRODUCT-MODEL = #5008NB   PRODUCT-NO = 50512501
>GET M-PRODUCT (146) >OUT $NULL (6)
PRODUCT-DESC = T!V Nskbstew Kiwwv Bby Kaw       PRODUCT-MODEL = #7548-04  PRODUCT-NO = 50523001
>GET M-PRODUCT (164) >OUT $NULL (7)
PRODUCT-DESC = Esdbms 9*8: Jhnmwk               PRODUCT-MODEL = #3601B    PRODUCT-NO = 50512001
>GET M-PRODUCT (171) >OUT $NULL (8)
PRODUCT-DESC = Esdbms 7*2: Nsk) Kiwwv Vrill     PRODUCT-MODEL = #DP3730   PRODUCT-NO = 50511501
>GET M-PRODUCT (221) >OUT $NULL (9)
PRODUCT-DESC = Kdbe Jhnmwk                      PRODUCT-MODEL = #1835     PRODUCT-NO = 50532001
>GET M-PRODUCT (231) >OUT $NULL (10)
PRODUCT-DESC = T!V 3 9*6: Ubkunesk Ksa          PRODUCT-MODEL = #5728     PRODUCT-NO = 50522501
>GET M-PRODUCT (241) >OUT $NULL (11)
PRODUCT-DESC = T!V 7*2: Nskbstew Kiwwv Drill    PRODUCT-MODEL = #P1149    PRODUCT-NO = 50521501
>GET M-PRODUCT (243) >OUT $NULL (12)
PRODUCT-DESC = Esdbms 9: Bby Ksa                PRODUCT-MODEL = #4300 BV  PRODUCT-NO = 50513001
IN=13, OUT=13. CPU-Sec=1. Wall-Sec=1.

Then you can reverse the Key and Trans fields in the translation file and translate the data back. I reversed the translate fields:
>BASE STORE,1,WRITER
>get m-product
>TABLE MYTRANS,KEY,FILE,TRANSLTE,DATA(TRANS),HOLD
There are 67 entries in MYTRANS
>DEF A,PRODUCT-DESC[1],1,BYTE
>DEF B,PRODUCT-DESC[2],1,BYTE
>DEF C,PRODUCT-DESC[3],1,BYTE
>DEF D,PRODUCT-DESC[4],1,BYTE
>DEF E,PRODUCT-DESC[5],1,BYTE
>DEF F,PRODUCT-DESC[6],1,BYTE
>DEF G,PRODUCT-DESC[7],1,BYTE
>DEF H,PRODUCT-DESC[8],1,BYTE
>DEF I,PRODUCT-DESC[9],1,BYTE
>DEF J,PRODUCT-DESC[10],1,BYTE
>DEF K,PRODUCT-DESC[11],1,BYTE
>DEF L,PRODUCT-DESC[12],1,BYTE
>DEF M,PRODUCT-DESC[13],1,BYTE
>DEF N,PRODUCT-DESC[14],1,BYTE
>DEF O,PRODUCT-DESC[15],1,BYTE
>DEF P,PRODUCT-DESC[16],1,BYTE
>DEF Q,PRODUCT-DESC[17],1,BYTE
>DEF R,PRODUCT-DESC[18],1,BYTE
>DEF S,PRODUCT-DESC[19],1,BYTE
>DEF T,PRODUCT-DESC[20],1,BYTE
>DEF U,PRODUCT-DESC[21],1,BYTE
>DEF V,PRODUCT-DESC[22],1,BYTE
>DEF W,PRODUCT-DESC[23],1,BYTE
>DEF X,PRODUCT-DESC[24],1,BYTE
>UPDATE
>ext a=$lookup(mytrans,a,trans)
>ext b=$lookup(mytrans,b,trans)
>ext c=$lookup(mytrans,c,trans)
>ext D=$lookup(mytrans,D,trans)
>ext E=$lookup(mytrans,E,trans)
>ext F=$lookup(mytrans,F,trans)
>ext G=$lookup(mytrans,G,trans)
>ext H=$lookup(mytrans,H,trans)
>ext I=$lookup(mytrans,I,trans)
>ext J=$lookup(mytrans,J,trans)
>ext K=$lookup(mytrans,K,trans)
>ext L=$lookup(mytrans,L,trans)
>ext M=$lookup(mytrans,M,trans)
>ext N=$lookup(mytrans,N,trans)
>ext O=$lookup(mytrans,O,trans)
>ext P=$lookup(mytrans,P,trans)
>ext Q=$lookup(mytrans,Q,trans)
>ext R=$lookup(mytrans,R,trans)
>ext S=$lookup(mytrans,S,trans)
>ext T=$lookup(mytrans,T,trans)
>ext U=$lookup(mytrans,U,trans)
>ext V=$lookup(mytrans,V,trans)
>ext W=$lookup(mytrans,W,trans)
>ext X=$lookup(mytrans,X,trans)
>xeq
Update all records from the M-PRODUCT dataset [no]: YES
Warning: Using DBGET for the input records
IN=13, OUT=13. CPU-Sec=1. Wall-Sec=1.

Now you can re-read the data, since you've updated it with the reversed translation:
>GET M-PRODUCT
>LIST
>XEQ
>GET M-PRODUCT (4) >OUT $NULL (0)
PRODUCT-DESC = Skil 3/8" Variable Speed Drill   PRODUCT-MODEL = #6523     PRODUCT-NO = 50531501
>GET M-PRODUCT (39) >OUT $NULL (1)
PRODUCT-DESC = B&D Router                       PRODUCT-MODEL = #7613-04  PRODUCT-NO = 50522001
>GET M-PRODUCT (49) >OUT $NULL (2)
PRODUCT-DESC = Skil Var. Sp. Auto-Scroll Saw    PRODUCT-MODEL = #4560     PRODUCT-NO = 50533001
>GET M-PRODUCT (50) >OUT $NULL (3)
PRODUCT-DESC = Skil 8 1/2" Circular Saw         PRODUCT-MODEL = #5665     PRODUCT-NO = 50532501
>GET M-PRODUCT (52) >OUT $NULL (4)
PRODUCT-DESC = B&D Cordless Screwdriver         PRODUCT-MODEL = #9018-04  PRODUCT-NO = 50521001
>GET M-PRODUCT (103) >OUT $NULL (5)
PRODUCT-DESC = Makita 8 1/4" Circular Saw       PRODUCT-MODEL = #5008NB   PRODUCT-NO = 50512501
>GET M-PRODUCT (146) >OUT $NULL (6)
PRODUCT-DESC = B&D Variable Speed Jig Saw       PRODUCT-MODEL = #7548-04  PRODUCT-NO = 50523001
>GET M-PRODUCT (164) >OUT $NULL (7)
PRODUCT-DESC = Makita 1/2" Router               PRODUCT-MODEL = #3601B    PRODUCT-NO = 50512001
>GET M-PRODUCT (171) >OUT $NULL (8)
PRODUCT-DESC = Makita 3/8" Var. Speed Drill     PRODUCT-MODEL = #DP3730   PRODUCT-NO = 50511501
>GET M-PRODUCT (221) >OUT $NULL (9)
PRODUCT-DESC = Skil Router                      PRODUCT-MODEL = #1835     PRODUCT-NO = 50532001
>GET M-PRODUCT (231) >OUT $NULL (10)
PRODUCT-DESC = B&D 7 1/4" Circular Saw          PRODUCT-MODEL = #5728     PRODUCT-NO = 50522501
>GET M-PRODUCT (241) >OUT $NULL (11)
PRODUCT-DESC = B&D 3/8" Variable Speed Drill    PRODUCT-MODEL = #P1149    PRODUCT-NO = 50521501
>GET M-PRODUCT (243) >OUT $NULL (12)
PRODUCT-DESC = Makita 1" Jig Saw                PRODUCT-MODEL = #4300 BV  PRODUCT-NO = 50513001
IN=13, OUT=13. CPU-Sec=1. Wall-Sec=1.

So while not really a sophisticated encryption of data, this may allow you to mask some important data for test purposes.
A recent question about phone numbers, and how to remove non-numeric characters from a byte container, raised some interesting solutions for normalizing phone numbers.
Consider the following data; you can see that the phone numbers have all sorts of different formats.
>in myphone
>list
>xeq
>IN myphone (0) >OUT $NULL (0)
PHONENUM        = #123.456.7890
>IN myphone (1) >OUT $NULL (1)
PHONENUM        = (123)567-1234
>IN myphone (2) >OUT $NULL (2)
PHONENUM        = (321).123.5678
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.

The first step in normalizing the data is to remove the non-numeric characters:
>in myphone
>set cleanchar "<null>"
>clean "^0:^47","^58:^255"
>def newphone,1,14
>ext phonenum=$clean(phonenum)
>out newphone,link
>xeq
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.

>in newphone
>list
>xeq
>IN newphone (0) >OUT $NULL (0)
PHONENUM        = 1234567890
>IN newphone (1) >OUT $NULL (1)
PHONENUM        = 1235671234
>IN newphone (2) >OUT $NULL (2)
PHONENUM        = 3211235678
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.

You can then use an edit mask to format the numbers in a consistent way. You do need to redefine the field being edited, with a define of just the length of the phone number:
>in newphone
>form
    File: newphone     (SD Version B.00.00)  Has linefeeds
       Entry:                     Offset
          PHONENUM             X14      1
    Entry Length: 14  Blocking: 1
>def my,phonenum,10
>def targ,1,12
>ext targ=$edit(my,"xxx.xxx.xxxx")
>list
>xeq
>IN newphone (0) >OUT $NULL (0)
TARG            = 123.456.7890
>IN newphone (1) >OUT $NULL (1)
TARG            = 123.567.1234
>IN newphone (2) >OUT $NULL (2)
TARG            = 321.123.5678
IN=3, OUT=3. CPU-Sec=1. Wall-Sec=1.
While delivering training recently, I was asked by a customer how to find the difference between a payment due date and the current date. However, they only wanted to list the overdue dates, as the number of days overdue; if a payment was not overdue, they wanted a zero for the days difference.
Now, the best way to calculate the difference between two dates is to convert each date using the $days function and subtract one result from the other.
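As a minimal sketch of the idea (the file and field names, dates, paid-date and due-date, are hypothetical):

in dates
item paid-date,date,ccyymmdd
item due-date,date,ccyymmdd
def diff,1,4,double
ext diff=$days(paid-date) - $days(due-date)   { positive means days late }
list
xeq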
The $days function converts any date to the number of days since 4713 BC, known as the Julian Day number; with that you can easily figure out the difference between two dates. The task below has a couple of tricks. The first is to use Suprtool to build a command file that gets the current date into a variable, in Julian Day format!
The second part of the script figures out the difference between the two dates. Keep in mind that the customer wants to know which payments are overdue, and only wants to see those, so we order the subtraction such that overdue payments give a positive number and payments that are not yet due give a negative number.
rm tdays tdays.sd
rm setdate
export EQ_DBSERVER=":8202"
suprtool << EOD_Suprtool
{ Set MYJULDAY variable to todays date in julian day format }
base membrs,5,lookup
get member-file
num 1
def tdays,1,4,double
item tdays,date,julian
ext tdays=\$today
out tdays,link
xeq
in tdays
def tdayascii,1,7,byte
ext 'export MYJULDAY="'
ext tdayascii=\$edit(tdays,"zzzzzzz")
ext '"'
out setdate
exit
EOD_Suprtool
chmod +x setdate
. ./setdate
rm setdate
suprtool << EOD_Suprtool
set varsub on
in dates
def diff,1,4,double
item a,date,ccyymmdd
ext a
ext diff=(\$MYJULDAY - \$DAYS(A))
list
xeq
exit
EOD_Suprtool

Given some dates, here is what you would see:
>IN dates (0) >OUT $NULL (0)
A    = 20131213
DIFF = 47
>IN dates (1) >OUT $NULL (1)
A    = 20131231
DIFF = 29
>IN dates (2) >OUT $NULL (2)
A    = 20140201
DIFF = -3
>IN dates (3) >OUT $NULL (3)
A    = 20140130
DIFF = -1
>IN dates (4) >OUT $NULL (4)
A    = 20140115
DIFF = 14
IN=5, OUT=5. CPU-Sec=1. Wall-Sec=1.
>exit

Now, since we only want to see positive numbers, we change the target type to logical: we then get the difference in days as a positive number, and the negatives become zero, since by definition a logical number cannot be negative.
Neo%dev/source/suprtool: ./datediff
rm: setdate non-existent
SUPRTOOL/UXia/Copyright Robelle Solutions Technology Inc. 1981-2014.
(Version 5.6 Internal) WED, JAN 29, 2014, 11:55 AM
Type H for help.
Build 11
>{ Set variable to todays date in julian day format }
>base membrs,5,lookup
>get member-file
>num 1
>def tdays,1,4,double
>item tdays,date,julian
>ext tdays=$today
>out tdays,link
>xeq
Warning: NUMRECS exceeded; some records not processed.
IN=2, OUT=1. CPU-Sec=1. Wall-Sec=1.
>in tdays
>def tdayascii,1,7,byte
>ext 'export MYJULDAY="'
>ext tdayascii=$edit(tdays,"zzzzzzz")
>ext '"'
>out setdate
>exit
IN=1, OUT=1. CPU-Sec=1. Wall-Sec=1.
SUPRTOOL/UXia/Copyright Robelle Solutions Technology Inc. 1981-2014.
(Version 5.6 Internal) WED, JAN 29, 2014, 11:55 AM
Type H for help.
Build 11
>set varsub on
>in dates
>def diff,1,4,logical     {note target is now logical}
>item a,date,ccyymmdd
>ext a
>ext diff=($MYJULDAY - $DAYS(A))
>list
>xeq
>IN dates (0) >OUT $NULL (0)
A    = 20131213
DIFF = 47
>IN dates (1) >OUT $NULL (1)
A    = 20131231
DIFF = 29
>IN dates (2) >OUT $NULL (2)
A    = 20140201
DIFF = 0
>IN dates (3) >OUT $NULL (3)
A    = 20140130
DIFF = 0
>IN dates (4) >OUT $NULL (4)
A    = 20140115
DIFF = 14
IN=5, OUT=5. CPU-Sec=1. Wall-Sec=1.
>exit
There are occasions when you need a while loop to repeat multiple edit commands in Qedit; typically the edit operations are not on the same line as the condition or data you want to change or edit.
Recently we had the following request:
I am using Qedit for HPUX. I have a file where I would like to execute the following qedit commands until end of file is reached:
fq "707C" (1/4)
lq * - 1
c 5/5 "C"
lq * + 1
Basically, I am trying to change the line previous to one with 707C in positions 1 to 4, for any occurrence of a line with 707C in positions 1 to 4. Is that possible?

Naturally, I wanted to help the customer get the job done, so here is what I came up with, in a file called mytest.
rm qedituse
export lines=$(grep '^707C' file | wc -l)
echo tq file >>qedituse
echo lq [ >>qedituse
x=0
while [ $x -lt $lines ]; do
  echo "fq '707C'(1/4);cq 5/5 'C' *-1;lq *+1" >>qedituse
  x=`expr $x + 1`
done
echo k newfile,yes >> qedituse
echo exit >> qedituse
qedit -c'use qedituse'

The resulting file had the lines with C in column 5 preceding the 707C lines:
blinC
707C ack ick bla
asdfC
707C hippy hoppy
frippy froppy
a 707 ha
ack C
707C
Neo%/home/neil:

The process breaks down as follows. The first two lines just remove the file qedituse and then store, in a variable called lines, the number of lines that start with 707C:
rm qedituse
export lines=$(grep '^707C' file | wc -l)

The next two lines simply text in the file and position the file pointer at the beginning of the file:
echo tq file >>qedituse
echo lq [ >>qedituse

The next portion is the while loop, which will go through the file "lines" number of times and issue the desired Qedit commands. Notice that cq 5/5 'C' *-1 means put a C in column 5 of the preceding line:
x=0
while [ $x -lt $lines ]; do
  echo "fq '707C'(1/4);cq 5/5 'C' *-1;lq *+1" >>qedituse
  x=`expr $x + 1`
done

The next two lines keep the file and exit:
echo kq newfile,yes >> qedituse
echo exit >> qedituse

The following runs Qedit with the generated editor commands and actually does the task:
qedit -c'use qedituse'

Given what I have learned since, I would now write things a little more compactly, as follows:
rm qedituse
echo "t file;l [" >>qedituse
export lines=$(grep '^707C' file | wc -l)
x=0
while [ $x -lt $lines ]; do
  echo "fq '707C'(1/4);cq 5/5 'C' *-1;lq *+1" >>qedituse
  x=`expr $x + 1`
done
echo "k newfile,yes;exit" >> qedituse
qedit -c'useq qedituse'

You could also use parms and variables to make this even more powerful, by allowing you to specify the string to search for and so on, as in the sketch below.
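For instance, here is a hypothetical parameterized version; an untested sketch of my own, where the positional parameters for the file name and search string are my addition:

#!/bin/sh
# Usage: ./fixprev.sh <file> <string>   (hypothetical script name)
file=$1
str=$2
rm -f qedituse
echo "t $file;l [" >>qedituse
lines=$(grep "^$str" $file | wc -l)
x=0
while [ $x -lt $lines ]; do
  echo "fq '$str'(1/4);cq 5/5 'C' *-1;lq *+1" >>qedituse
  x=`expr $x + 1`
done
echo "k newfile,yes;exit" >>qedituse
qedit -c'useq qedituse'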
There is a macro called COPYMACRO, used by some of our customers, that copies certain datasets and clones some data. The old script used Dbedit to add a new macro to the two datasets involved. Since Dbedit is not available in the Itanium version of Suprtool, we re-wrote the macro to simply use Suprtool, as opposed to simulating interactive commands inside Dbedit. The script uses features of UC4.
:BEGINREAD "Copy A Macro"
:  PRINT 'What type of close report to select?'
:  READ &OLDMACRO,"04", "Macro Name to Copy:",, 'M'
:  READ &NEWMACRO,"04","New Macro Name:",, 'M'
:  READ &INSTANCE#, "LIVE, TEST, TRN", "FI to run MACRODUP:", "TEST", 'M'
:ENDREAD
:PUT_ATT LOGIN = "LOGIN.UNIX.SPECLIVE"
:INC GET_VARIABLES
#!/usr/bin/sh
echo "UC4 job ran as &JOB_NAME"
date
export SPXBATCH=1
spxlogin -n MACRODUP FI DATA
echo Login as $SPXLOGIN
echo SPXHOME is $SPXHOME
set -x
rm MN
rm MN.sd
rm MD
rm MD.sd
suprtool <

If any customers have other scripts that use Dbedit, we can look at re-writing them for you.
Why is my Job Suddenly Failing?
I had a recent call in to support asking why, all of a sudden, some jobs started failing in a job stream. The report was that a blank line after the run of a program suddenly started giving the error:
Missing colon before command name. (CIERR 981)
In previous incarnations the job stream with the blank line would NOT fail, but nothing had changed... of course. :)
First off, I wanted to confirm that a blank line does abort a job stream:
:comment blank line is next

Missing colon before command name. (CIERR 981)
REMAINDER OF JOB FLUSHED.
CPU sec. = 1. elapsed min. = 1. THU, FEB 4, 2010, 10:47 AM.

And I found it does.
The customer, though, showed that the blank line was after the run of a program (in this case I am using QUERY as an example), and there we didn't see the same behaviour. Odd.
:comment
:comment test blank line after run query.pub.sys
:comment
:purge file1x
:query
HP32216N.03.18  QUERY/NM  THU, FEB 4, 2010, 10:18 AM
COPYRIGHT HEWLETT-PACKARD CO. 1976
exit
END OF PROGRAM
:showjcw jcw
JCW = 0
:showjcw cierror
CIERROR = 0
:eoj
CPU sec. = 2. elapsed min. = 1. THU, FEB 4, 2010, 10:18 AM.

Some investigation, and some clues from something I remembered investigating in 2001, allowed me to duplicate the failure by switching query to being run/resolved via a command file.
In the above example, even though the commands were the same, query was being run via path resolution. This is the part that was tricky; I later made the examples explicit, by specifically running query.pub.sys or by using my query command file:
:comment
:comment Blank line after query which resolves to a command file
:comment
:query
HP32216N.03.18  QUERY/NM  THU, FEB 4, 2010, 10:20 AM
COPYRIGHT HEWLETT-PACKARD CO. 1976
exit
END OF PROGRAM

Missing colon before command name. (CIERR 981)
REMAINDER OF JOB FLUSHED.
:comment query command file with garbage after exit
:query
HP32216N.03.18  QUERY/NM  THU, FEB 4, 2010, 10:44 AM
COPYRIGHT HEWLETT-PACKARD CO. 1976
exit
END OF PROGRAM
yabba dabba doo
Missing colon before command name. (CIERR 981)
REMAINDER OF JOB FLUSHED.
CPU sec. = 2. elapsed min. = 1. THU, FEB 4, 2010, 10:44 AM.

Now I switched back to running the program file directly, with the yabba dabba doo still in the job, along with some other lines:
:comment
:run query.pub.sys
HP32216N.03.18  QUERY/NM  THU, FEB 4, 2010, 10:45 AM
COPYRIGHT HEWLETT-PACKARD CO. 1976
exit
END OF PROGRAM
:showjcw jcw
JCW = 0
:showjcw cierror
CIERROR = 383
:eoj
CPU sec. = 2. elapsed min. = 1. THU, FEB 4, 2010, 10:45 AM.

In the case of a run of the program file, the garbage lines after the run are read as $stdin until the system sees a "!". So what happens is that the program exits and the extra lines left in stdin are thrown away.
However, if you switch this run to a command file, suddenly the job stops running to completion.
So, if you have a customer that switches from, say,
query
which is run thru HPPATH resolution, to having
query
be resolved via a command file, then jobs with blank lines or garbage after the exit line will suddenly start failing.
It's a bit tricky to diagnose, especially if the customer doesn't believe you or points to something else. I have not tested UDC's, but I would expect them to have the same behaviour. My guess is that when a command file ends, MPE just reads the next line in the job stream; but when a program ends, the job stream re-orients itself with regard to stdin and just reads until it sees a "!". That is the behaviour that I am seeing.
Any past labbies or gurus care to comment or have thoughts?
Why is my task with Dates not working?
Customers have been contacting us saying that tasks with $stddate, and other date functions where a century needs to be assumed, are no longer working or are finding the incorrect records:

BASE mybase,5,pass
G SHIP-DATA
ITEM SHIP-DATE,DATE,MMDDYY
IF (($STDDATE(SHIP-DATE)>= 20100101 AND $STDDATE(SHIP-DATE)<= 20100131))
OUT TEMPSHIP,LINK
X

Well, the reason for this is that the DEFAULT value of the "cutoff" for what century to assume is 10. Therefore, if your date field is January 15, 2010, but in the form 011510, the year portion of 10 will get an assumed century of 19, because the default cutoff year is 10. The solution is to issue the command:
set date cutoff 15

or a similar number, one greater than the oldest two-digit year that you want to be given a century of 20. With a cutoff of 15, for example, years 00 through 14 are assumed to be 20xx, while 15 through 99 are assumed to be 19xx. Happily, you do not need to add this command to every task; just add it to your suprmgr files. On MPE this is suprmgr.pub.sys, and on HP-UX it is /opt/robelle/suprmgr.
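A minimal sketch of such a suprmgr entry (the brace comment is my own addition):

{ assume century 20 for two-digit years below 15 }
set date cutoff 15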
Why was the Cutoff at 10?
In 1998, when we made these changes, we had to consider every type of date involved; we had a large number of customers that still had six-digit dates, with birthdates of 1913 and so on. We also had real estate and title dates that were in the 1910s. Most customers said that ten years would be no problem for them to convert all their dates to include the century.

How Do I Convert CM KSAM to NM KSAM?
Suprtool can't/won't create a new NM KSAM file for you. You will have to create the new NM file first, using the MPE BUILD command (or some other method, see below), then use Suprtool to copy the records from the old file to the new file. But Suprtool should definitely be able to copy the data *much* faster than MPE's FCOPY utility could.
The trick, of course, is getting a new NM KSAM file built correctly first. You could use a complicated BUILD command or you could do this:
:file n=newfile; disc=
:fcopy from=oldfile; to=(*n); subset=0,1

That will create a new NM KSAM file called NEWFILE, with the FLIMIT equal to the FLIMIT of your current CM file and with all the keys set up correctly, but will copy only a single record into it. Then use Suprtool:
:suprtool
>in oldfile
>out newfile, erase
>xeq

Finally, if it's a big KSAM file, you may wish to create it as temporary first, then SAVE it after you're done, to squeeze even more speed out of the process:
:file n=newfile; disc=; temp     <<-- Note the temp designation
:fcopy from=oldfile; to=(*n); subset=0,1
:suprtool
>in oldfile
>out newfile, erase
>exit
:save newfile

How Do I Print on HP-UX in Suprtool?
I came into my office this past Thursday morning to find two support e-mails asking very similar questions: how do I get output to print on a Unix device using Suprtool's List command? While I certainly knew the answers in terms of Suprtool, I had forgotten many of the other commands needed to figure out what printers are available, and some of the parameters to the lp command. But let's back up and discuss the functionality that MPE and the Suprtool List command provide through file equations, since the main question users ask is how to get the output from the List command to either a file or a device.
On MPE you can either do:
file suprlist;dev=lp

or

file suprlist=myfile;rec=-132;disc=100000;dev=disc

The first would send the output from the List command to the device LP; the second file equation would re-direct the output from the List command to a disc file. On HP-UX the concept of file equations doesn't exist. There is I/O re-direction, but this can be cumbersome when you have to input the commands and filter the output, so we specifically added options on the List command to provide the same functionality.
How do I print a file on HP-UX?
To understand printing in Suprtool, it is beneficial to understand some methods for printing on HP-UX. The most common method for printing a file is the lp command; you can get details on the command by doing a "man lp". A sample lp command that prints to a printer device called dragon:
lp -ddragon t1.cc
request id is dragon-1387 (1 file)

What printers do I have?
This is all well and good, but how do I know what devices I have on my system? One method is to do an:

lpstat -p

which outputs all printers. From that output you can determine the device names for the various printers. lpstat -d prints your default printer, and lpstat -a prints all devices that are accepting output requests, which may be your best bet on a system with a large LP subsystem. Keep in mind that this assumes the CUPS system is installed and enabled; if you want to know more about CUPS, you can start with the wiki page at http://en.wikipedia.org/wiki/CUPS. The bottom line is that you can use lpstat to determine the printer device names on your system.
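To recap the lpstat options mentioned above in one place:

lpstat -d      # show the system default destination
lpstat -p      # list all printers and their status
lpstat -a      # list destinations accepting requests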
How do I Print to a device in Suprtool for HP-UX?
There are two ways to print to a device using Suprtool for HP-UX; they are very similar, but both are worth noting. The first is through Suprtool directly:

>in bla4
>list device dragon standard
>xeq
request id is dragon-1388 (standard input)
IN=272, OUT=272. CPU-Sec=1. Wall-Sec=1.

Suprtool opens a pipe to the device dragon, sends the output from the List command to that device, and uses the lp command/program to output the file. So essentially, when you enter the command list device dragon, Suprtool is sending the output to lp with the device of dragon, as in:

/usr/bin/lp -ddragon

We also provide an environment variable to change the lp program used, from the default /usr/bin/lp to whatever you want, by setting the ROBELLE_LP environment variable to the value of the other (third-party) program:

export ROBELLE_LP=/usr/thirdparty/lp

How do I print to a file in Suprtool?
You can print to a file using Suprtool's List File option; you can also append multiple reports:

>set list formfeed on
>in bla4
>list file myreport standard
>xeq
IN=272, OUT=272. CPU-Sec=1. Wall-Sec=1.
>in bla3
>list file myreport append standard
>xeq
IN=272, OUT=272. CPU-Sec=1. Wall-Sec=1.

In addition, with the set list formfeed on command, you can put multiple reports in one file and have them print with the page eject(s) between each report. So in the above example, the list of bla4 would print out and then bla3 would print on a new page. As always, if you have any feedback, questions or advice, please feel free to e-mail me at neil at robelle.com.

Move my Qedit for Windows Settings to a new PC?
A customer e-mailed us with the following question:

Is there any way to export the connections and other settings from QEDIT on one PC and load them onto another?
Thanks.
Yes. The steps are very easy:
As a sidebar, it is probably a good idea to install the same version on each machine before moving files.
- Install Qedit on New PC.
- Run Qedit on New PC.
- Quit Qedit on New PC.
- Quit Qedit on the Old PC.
- Go to old PC and look for where QWIN.INI is on old PC.
- Look for QWIN.INI on NEW PC.
- Copy the following files from the OLD PC (they will be in the same directory where you found QWIN.INI): QWIN.INI, QEDCONN.DAT and QWINDDB.DAT. Place them on the new PC, replacing the copies in the directory where you found them.
- Start Qedit on the New PC; your settings etc. should be there.
Find the Difference between two Dates in Months?
A customer called and asked how to find the difference, in months, between a maturity date and today's date.
First step: I change the matdate to be ccyymm, and create a ccyymm date for $today. Note that the maturity date, or matdate, is a date field in the format ccyymmdd, held in an I2 container.
>in mature
>form
    File: mature     (SD Version B.00.00)  Has linefeeds
       Entry:                     Offset
          MATDATE              I2       1
    Entry Length: 4  Blocking: 1
>ext matdate=matdate / 100
>def todaydt,1,4,double
>item todaydt,date,ccyymm
>ext todaydt=$today
>out mature2,link
>xeq
IN=20, OUT=20. CPU-Sec=1. Wall-Sec=1.

So the data now looks like this:

>in mature2
>list
>xeq
>IN mature2 (0) >OUT $NULL (0)
MATDATE = 200906   TODAYDT = 200907
>IN mature2 (1) >OUT $NULL (1)
MATDATE = 200907   TODAYDT = 200907
>IN mature2 (2) >OUT $NULL (2)
MATDATE = 200908   TODAYDT = 200907
>IN mature2 (3) >OUT $NULL (3)
MATDATE = 200909   TODAYDT = 200907
>IN mature2 (4) >OUT $NULL (4)
MATDATE = 200910   TODAYDT = 200907
>IN mature2 (5) >OUT $NULL (5)
MATDATE = 200911   TODAYDT = 200907
>IN mature2 (6) >OUT $NULL (6)
MATDATE = 200912   TODAYDT = 200907
>IN mature2 (7) >OUT $NULL (7)
MATDATE = 201001   TODAYDT = 200907
>IN mature2 (8) >OUT $NULL (8)
MATDATE = 201002   TODAYDT = 200907

Now the final step converts each date to a month number and calculates the difference. For example, 201002 becomes (2010 * 12) + 2 = 24122 months, 200907 becomes (2009 * 12) + 7 = 24115 months, and the difference is 7.

SUPRTOOL/UX/Copyright Robelle Solutions Technology Inc. 1981-2009.
(Version 5.2 Internal) WED, JUL 01, 2009, 8:12 PM
Type H for help.
in mature2
ext matdate
ext todaydt
def mmdiff,1,4,double
ext mmdiff=(($truncate(matdate / 100) * 12) + matdate mod 100) - &
    (($truncate(todaydt / 100) * 12) + todaydt mod 100)
list
xeq
>IN mature2 (0) >OUT $NULL (0)
MATDATE = 200906   TODAYDT = 200907   MMDIFF = -1
>IN mature2 (1) >OUT $NULL (1)
MATDATE = 200907   TODAYDT = 200907   MMDIFF = 0
>IN mature2 (2) >OUT $NULL (2)
MATDATE = 200908   TODAYDT = 200907   MMDIFF = 1
>IN mature2 (3) >OUT $NULL (3)
MATDATE = 200909   TODAYDT = 200907   MMDIFF = 2
>IN mature2 (4) >OUT $NULL (4)
MATDATE = 200910   TODAYDT = 200907   MMDIFF = 3
>IN mature2 (5) >OUT $NULL (5)
MATDATE = 200911   TODAYDT = 200907   MMDIFF = 4
>IN mature2 (6) >OUT $NULL (6)
MATDATE = 200912   TODAYDT = 200907   MMDIFF = 5
>IN mature2 (7) >OUT $NULL (7)
MATDATE = 201001   TODAYDT = 200907   MMDIFF = 6
>IN mature2 (8) >OUT $NULL (8)
MATDATE = 201002   TODAYDT = 200907   MMDIFF = 7
>IN mature2 (9) >OUT $NULL (9)
MATDATE = 201003   TODAYDT = 200907   MMDIFF = 8
>IN mature2 (10) >OUT $NULL (10)
MATDATE = 201004   TODAYDT = 200907   MMDIFF = 9
>IN mature2 (11) >OUT $NULL (11)
MATDATE = 201005   TODAYDT = 200907   MMDIFF = 10
>IN mature2 (12) >OUT $NULL (12)
MATDATE = 201006   TODAYDT = 200907   MMDIFF = 11
>IN mature2 (13) >OUT $NULL (13)
MATDATE = 201007   TODAYDT = 200907   MMDIFF = 12
>IN mature2 (14) >OUT $NULL (14)
MATDATE = 201008   TODAYDT = 200907   MMDIFF = 13
>IN mature2 (15) >OUT $NULL (15)
MATDATE = 201009   TODAYDT = 200907   MMDIFF = 14
>IN mature2 (16) >OUT $NULL (16)
MATDATE = 201010   TODAYDT = 200907   MMDIFF = 15
>IN mature2 (17) >OUT $NULL (17)
MATDATE = 201011   TODAYDT = 200907   MMDIFF = 16
>IN mature2 (18) >OUT $NULL (18)
MATDATE = 201012   TODAYDT = 200907   MMDIFF = 17
>IN mature2 (19) >OUT $NULL (19)
MATDATE = 201101   TODAYDT = 200907   MMDIFF = 18
IN=20, OUT=20. CPU-Sec=1. Wall-Sec=1.

Fix all of My Zip codes
Recently a customer asked how they could fix some of their zip codes, which were in the form of:
123450000

and are stored in a numeric J2 field.
The customer wanted to normalize those zip codes that had the four trailing zeroes to be 12345 instead of 123450000.
The first step was to determine all of the codes that needed to be updated, and what the old and new zip would be for each.
>base membrs
>get member-file
>def new-zip,1,4,double
>if zip > 99999 and (zip mod 10000) = 0
>ext account
>ext zip
>ext new-zip = zip / 10000
>list
>xeq

So what is the above doing?
Well, the If command selects all records where the zip is greater than 99999 and ends in four zeroes, which is what (zip mod 10000) = 0 checks: for example, 123450000 mod 10000 = 0, and 123450000 / 10000 = 12345. This should isolate just those records that the customer wanted to fix!
Once we determined that we had the correct records selected we easily updated them with:
>base membrs
>get member-file
>if zip > 99999 and (zip mod 10000) = 0
>update
>ext zip = zip / 10000
>list
>xeq

Import Data Using Suprtool
One of the questions that has come up a lot lately is how do I import data with Suprtool? Let's say we have the following data layout; Image, Eloquence or SD file, it doesn't matter. The purpose of this small white paper is to extract data to a ".csv" file and then import that same data back, closing the loop so to speak.
- BIN-NO ~ J1
- LAST-SHIP-DATE ~ J2
- ON-HAND-QTY ~ J2
- PRODUCT-NO ~ Z8
- SUPPLIER-NO ~ Z8
- UNIT-COST ~ P8
- ITEM-DESC1 ~ X20
- ITEM-DESC2 ~ X20
- ITEM-DESC3 ~ X20
- ITEM-DESC4 ~ X20
To begin our project, we extract the data from a sample data source, output it to an SD file, and then use STExport to create a CSV file.
>get d-inventory
>out dinv,link
>xeq
IN=13, OUT=13. CPU-Sec=1. Wall-Sec=1.
:run stexport.pub.robelle
$in dinv
$out dinvcsv
$xeq
In=13. Out=13. CPU-Sec=1. Wall-Sec=1.
$listf dinvcsv,2
ACCOUNT=  GREEN       GROUP=  NEIL

FILENAME  CODE  ------------LOGICAL RECORD-----------  ----SPACE----
                 SIZE  TYP        EOF      LIMIT R/B  SECTORS #X MX

DINVCSV          152B  VA          13         13   1       16  1  *

So at this point we now have a CSV file, which is typically what a customer would want to import, for adding to a database etc. Note that Suprtool does not handle variable-length files, so the first step is to convert the variable-length file to fixed-length.
Step One: Convert to Fixed Length
On MPE you can convert using Fcopy:
/file dinvcsvf;rec=-152,1,f,ascii
/fcopy from=dinvcsv; to=*dinvcsvf;new

On HP-UX you can use Qedit:

/t dinvcsv
/set len 152
/set keep var off lf on
/k dinvcsvf

Or awk (thanks to Barry Lake from Allegro for fixify.sh):

#!/bin/sh
#
# Script to turn a typical bytestream file (variable length
# records) into a file with fixed length records by padding
# records with spaces as needed. The resulting record length
# will be that of the longest record in the file.

export INFILE=$1
OUTFILE=$(mktemp)

# Step 1: Find the length of the longest record in the input
# file and store it in a variable for use in the next step.
export MAXLEN=$(awk 'BEGIN { len = 0; }
  { if (length > len) len = length; }
  END { print len; }' $INFILE)
echo Maximum record length in \"$INFILE\" is $MAXLEN.

# Step 2: Get the MAXLEN variable; use it to create a string
# of that many blanks; then use that to pad each input
# record as needed to make it the same length as the
# longest record.
awk 'BEGIN { "echo $MAXLEN" | getline maxlen;
       spaces = "";
       for(i=0; i < maxlen * 1; i++) spaces = " "spaces; }
     { print substr($0 spaces, 1, maxlen) }' $INFILE >$OUTFILE

# Step 3: awk can't edit a file in place, so its output was
# written to a new file which we then pour back into the
# original file. Note: we could just as easily have done
# cp $OUTFILE $INFILE or mv $OUTFILE $INFILE, but in that
# case we'd lose the original file's creator and
# permission bits.
cat $OUTFILE > $INFILE
rm $OUTFILE

You can run the above script:

$ ./fixify.sh datafile

At this point Suprtool can read what was essentially variable-length data in a fixed-length file.
Now remember what a typical CSV file looks like: byte fields are surrounded by quotes, and fields are separated (delimited) with commas:
159,19910827,1,50532001,5053,9449,"Test index","Test index","Test index","Test index"
Step Two: Split out the Fields
The first task is to separate out each field from the record, based on the delimiter, which in this case is the comma. I use the byte length for each number field based on the rules from the Output,ASCII table.
Duplicated below:
I1 J1    06 bytes
I2 J2    11 bytes
I3 J3    16 bytes
I4 J4    20 bytes
K1       05 bytes
K2       10 bytes
Zn       n+1 bytes
Pn       n bytes
in dinvcsvf,rec 152,lf
{define targets}
def bin-x,1,6
def last-x,1,11
def qty-x,1,11
def prod-x,1,9
def supp-x,1,9
def cost-x,1,8
def desc1-x,1,22   {Note room for quotes}
def desc2-x,1,22
def desc3-x,1,22
def desc4-x,1,22
{define source}
def record,1,152
{extract using split}
ext bin-x=$split(record,first,",")
ext last-x=$split(record,",",1,",")
ext qty-x=$split(record,",",2,",")
ext prod-x=$split(record,",",3,",")
ext supp-x=$split(record,",",4,",")
ext cost-x=$split(record,",",5,",")
ext desc1-x=$trim($split(record,",",6,","))
ext desc2-x=$trim($split(record,",",7,","))
ext desc3-x=$trim($split(record,",",8,","))
ext desc4-x=$trim($split(record,",",9,last))
{where?}
out myfile,link
xeq
Step Three: Closing the Loop
Now we just need to "close the loop" and extract the individual byte-type fields into their appropriate data types. In this step we also "clean" the data of the double quotes.
Keep in mind that the data format that we want:
- BIN-NO ~ J1
- LAST-SHIP-DATE ~ J2
- ON-HAND-QTY ~ J2
- PRODUCT-NO ~ Z8
- SUPPLIER-NO ~ Z8
- UNIT-COST ~ P8
- ITEM-DESC1 ~ X20
- ITEM-DESC2 ~ X20
- ITEM-DESC3 ~ X20
- ITEM-DESC4 ~ X20
The defines for the above are in the section starting with the comment {Actual targets}
in myfile
{re-define numbers in display format}
def bin-z,bin-x,display
def last-z,last-x,display
def qty-z,qty-x,display
def prod-z,prod-x,display
def supp-z,supp-x,display
def cost-z,cost-x,display
{Actual targets}
{number}
def bin ,1,2,integer
def last,1,4,double
def qty ,1,4,double
def prod,1,8,display
def supp,1,8,display
def cost,1,4,packed
{bytes}
def desc1,1,20
def desc2,1,20
def desc3,1,20
def desc4,1,20
ext bin=$number(bin-z)
ext last=$number(last-z)
ext qty=$number(qty-z)
ext prod=$number(prod-z)
ext supp=$number(supp-z)
ext cost=$number(cost-z)
clean '"'
ext desc1=$trim($clean(desc1-x))
ext desc2=$trim($clean(desc2-x))
ext desc3=$trim($clean(desc3-x))
ext desc4=$trim($clean(desc4-x))
out loop,link
xeq

The SD file (loop) is now in the same format and layout as the original dataset extraction prior to the STExport task. We've now closed the loop.
>form loop
    File: LOOP.NEIL.GREEN     (SD Version B.00.00)
       Entry:                     Offset
          BIN                  I1       1
          LAST                 I2       3
          QTY                  I2       7
          PROD                 Z8      11
          SUPP                 Z8      19
          COST                 P8      27
          DESC1                X20     31
          DESC2                X20     51
          DESC3                X20     71
          DESC4                X20     91
    Limit: 13  EOF: 13  Entry Length: 110  Blocking: 37

So to summarize: the first step is to split the data into separate fields with byte data types; then convert, by re-defining each field with a new name to reference the data as display, and using $clean and $number to extract into the final targets.
Generate Three Random Characters
A customer recently asked how they could use Suprtool to generate some random data. The answer is: you can't. But Barry Durand came up with a reasonable Unix shell script, using something he found on the net.
Here is a script called rand3:
a[0]=a;  a[1]=b;  a[2]=c;  a[3]=d;  a[4]=e;  a[5]=f;  a[6]=g;  a[7]=h
a[8]=i;  a[9]=j;  a[10]=k; a[11]=l; a[12]=m; a[13]=n; a[14]=o; a[15]=p
a[16]=q; a[17]=r; a[18]=s; a[19]=t; a[20]=u; a[21]=v; a[22]=w
a[23]=x; a[24]=y; a[25]=z; a[26]=1; a[27]=2; a[28]=3; a[29]=4
a[30]=5; a[31]=6; a[32]=7; a[33]=8; a[34]=9; a[35]=0
echo ${a[$RANDOM%36]}${a[$RANDOM%36]}${a[$RANDOM%36]}

Running it a couple of times:

./rand3.sh
gtj
./rand3.sh
7o2

Remove Trailing Spaces
Someone asked how to remove trailing spaces on HP-UX. Here is one way:
pr -t INFILE >INFILE.spaces
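Another way, a sketch of my own using standard sed (not from the original tip), strips trailing whitespace in one pass:

sed 's/[[:space:]]*$//' INFILE > INFILE.nospaces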
Solving Floating Point Exception Errors

One of the more common tech calls we get is: "My job aborted with Floating exception(coredump); what is wrong?"
First, what is a Floating Point Exception?
A floating point exception is an HP-UX-specific arithmetic trap, which occurs when the system encounters a character that it cannot deal with when converting to/from floating point. These errors occur in Suprtool and STExport for HP-UX but not in Suprlink, mainly because Suprlink just matches bytes; it doesn't try to transform the data in any manner. It is the transformation, or coercion, from one form to another where these FPEs occur. In Suprtool it could be from nearly any type to nearly any other type; an FPE in Suprtool is typically a coercion error from some type TO floating point, display or packed.
Why?
This article will attempt to show how to find these errors, and their common causes. The most significant, and really the main, reason for a coercion error is that there is bad data in the area defined for the field being coerced. Essentially there is only one reason for an FPE: the low-level routine that converts data from one type to another encounters a character that it cannot handle. One of two possibilities exists:
- You've defined either the record size or the field position incorrectly, such that you are pointing at the wrong spot.
- You really have bad data.
Most of the time we see that there is an incorrect definition of a field or record.
Wrong record size on Input file
>in baddisp,rec 80,lf
>def a,1,5,display
>if a=65312
>ext a
>out *
>xeq
65312
IN=6, OUT=1. CPU-Sec=1. Wall-Sec=1.

If we do essentially the same job, but get the record size wrong by one, we see the dreaded:
>in baddisp,rec 81,lf
>def a,1,5,display
>if a=65312
>out *
>xeq
Floating exception(coredump)

Remember that on Unix, files are just a series of bytes; there is no record structure. So in the case above we read the first record and it lines up OK, but the next record starts at the wrong spot and the coercion fails.
Bad Field Definition:
Consider the following data:

more baddisp
01234
21222
00345
12345
54321xx
65312xx

Note that the last two records have extra characters right beside the numbers, so if we get the definition of the display field wrong we will end up trying to interpret data such as "4321x" as display data, which will of course fail.
Now we show an incorrect field definition:
>in baddisp,rec 80,lf
>def a,2,5,display     {obviously wrong}
>if a<>65312
>out *
>ext a
>xeq
1234
1222
0345
2345
Floating exception(coredump)

We actually have bad data!
Once we've investigated the layout of the record and of the field, there is still the possibility that we really do have bad data: hidden characters, escape sequences or, more often than not, nulls. In Suprtool you can list the data in hex format and look for Null (00), Tab (09), CR (0D) or Line Feed (0A); these are the common "bad" characters found in fields. You can use the $clean function to clean out this data, or you may have to trace the bad data back to its source.
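As a rough sketch of the hex check (my own example; it assumes the List command's HEX option, and the file and field names are made up):

>in mydata
>def a-bytes,1,5,byte     { redefine the suspect field as bytes }
>ext a-bytes
>list hex
>xeq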
How to find what is happening?

Where do I look first?
In any given task, the first place to look is at the fields referenced in the If command; concentrate on the display and packed fields. You can see each record as it is read by using the out * command instead of output to some file. (Hint: out * just means out to stdlist, so it may not be useful if you have lots of fields.) The List command to a file may be more useful; you could then tail the file to see where the problem was.
It is advisable to limit the number of records with the numrecs command while investigating these errors. Typically, if a record definition and/or field definition is incorrect, the abort will occur within the first few records, so a numrecs 10 command could save you from waiting for a long listing to go to the screen.
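For instance (a sketch; the input file name is made up):

>in mysdfile
>numrecs 10
>list
>xeq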
It might not be the current task
It is important to note that the problem may not be in the current task. Consider again the following data:

more baddisp
01234
21222
00345
12345
54321xx
65312xx

In the first step below we just read the data and put it into a self-describing file:

>in baddisp,rec 80,lf
>def a,2,5,display
>ext a
>out mydata,link
>xeq
IN=6, OUT=6. CPU-Sec=1. Wall-Sec=1.

Now, in a later step, we may reference the bad field, and the abort will occur then. So sometimes you have to trace back a step, to the task that created the file you are reading:

>in mydata
>form
    File: mydata     (SD Version B.00.00)  Has linefeeds
       Entry:                     Offset
          A                    Z5       1
    Entry Length: 5  Blocking: 1
>if a<>65213
>list
>xeq
>IN mydata (0) >OUT $NULL (0)
A = 1234?
>IN mydata (1) >OUT $NULL (1)
A = 1222?
>IN mydata (2) >OUT $NULL (2)
A = 345?
>IN mydata (3) >OUT $NULL (3)
A = 2345?
Floating exception(coredump)

Note that the "?" is a clue that you have a bad definition: Suprtool can handle spaces after numbers, but it aborts when it encounters the "x". Hopefully this gives you some information on what a Floating exception is.
We are working on catching SIGFPE and giving more information about the circumstances in a future version of Suprtool, but for now the above are some of the techniques you can use to find the bad data or bad definitions.
Comparing Files in Groups
I've been working on projects that require me to keep two copies of some of our source code modules, and I want to compare all the files in the given groups to monitor the changes.
Here is an MPE/Qedit command file that will compare all the files in two groups. Keep in mind that it depends on the two groups having the same number of files and the same filenames.
parm group_one group_two
purge fg1,temp >$null
purge fg2,temp >$null
purge fgout,temp >$null
echo Comparing !group_one against !group_two
listf @.!group_one,6 >fg1
listf @.!group_two,6 >fg2
build fgout;rec=-256,1,f,ascii;disc=20000;temp
file fgout;acc=append
/set totals off { No totals listed }
/tq fg1,text
/set length 256
/set right 256
/appendq "~" @
/addq last=fg2
/lsortq all
/glueq "~" @
/changeq 1 "compare " @
/changeq "~"," @
/changeq " ,"," @
/changeq " ,"," @
/changeq " ,"," @
/changeq " ,"," @
/changeq " ,"," @
/changeq " ,"," @
/changeq " ,"," @
/appendq ",tempout >$null" @
/appendq "~:if compareoutcount <> 0 then" @
/divideq "~" @
/changeq "~"" @
/appendq "~:fcopy from=tempout; to=*fgout >$null" "compareoutcount"
/divideq "~" @
/changeq "~"" @
/appendq "~:endif" "fgout"
/divideq "~" @
/changeq "~"" @
/useq *
reset fgout
/textq fgout,yes
/listq $r @
/set totals on

The command file essentially builds a list of Compare commands and then, if differences are found in the files, the compare output is copied to a single file, which is listed on my attached printer at the end. This is specific to my needs but is easily customizable. Feel free to send any comments or improvements to neil@robelle.com.
Control Where Temp Sort Space is Allocated
Occasionally we get calls and e-mails asking where Suprtool puts its sort scratch space on HP-UX. This is usually due to a job failing with an out-of-disc-space condition on a particular volume. By default, Suprtool puts the temporary space it needs in /var/tmp. If this volume runs out of space when sorting data, you may receive an error similar to this:
Error: Failure in ROBSORT'INPUT routine
ROBSORT'INPUT Error 12
Unable to write to the RSORTSCR file (length = 27657)
No space left on device
Error: ROBSORT'END Failed
ROBSORT'END Error 19
File system error Error 0

You can control which directory/volume set Suprtool uses for its temporary sort scratch files by setting the TMPDIR variable prior to running Suprtool:

export TMPDIR=/home/bigsortspace

The above will cause Suprtool to create its temporary files in the directory /home/bigsortspace.
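For example, here is a minimal shell sketch; the directory name is only an illustration, and any directory on a volume with ample free space will do:

bdf /var/tmp                       # check free space on the default scratch volume
mkdir -p /home/bigsortspace        # make sure the alternate directory exists
export TMPDIR=/home/bigsortspace   # Suprtool now builds its RSORTSCR files here

Remember that TMPDIR must be set in the same shell or job that runs Suprtool.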
Installing Suprtool for Testing

Some customers have asked recently how to install the latest version of Suprtool without making it the production version.
An example would be some of the Ecometry customers that are migrating off the HP 3000: they don't want to upgrade their production version of Suprtool or STExport, but they do want to use the clean features available in either product for migrating data off the 3000.
This HowDoI article will address these issues in the following manner:
- How to install the most recent version of Suprtool without making it the production version.
- How to use the various clean features inside Suprtool and STExport.
Installing Latest Suprtool/Suprlink/STExport to Test
The following instructions are ONLY intended to give users on MPE a method to install the Suprtool program files to TEST or use new features. This can be done by anyone that wants to test the latest version of Suprtool, STExport and Suprlink.
The steps to do this are:
- Request codes:
http://www.robelle.com/forms/request-prod-release.php
- Download the install packages from the next page:
After you have requested codes you will be taken to a page to download the installation package. Once you have downloaded the Suprtool file, follow the instructions for uploading it to your HP 3000; after that, use the following steps to restore and enable the software for testing.
- Restore files
Logon (or stay logged on) as manager.sys,pub to do the restore:

:hello manager.sys,pub
:file t=stprod;dev=disc
:restore *t;@nm.pub.robelle;create;show=short
:purge stprod
:stream robelle.job.robelle

Wait for the job to send you a completion message:
========= Robelle.Pub.Sys =================
= Robelle account structure ready.
============================================
- Enable the Software
:hello mgr.robelle,pub
:Run extend.pub
Please enter the Product Verification code : n
Please enter the Verification code : n
Please enter the Second Verification code : n

Extend will cycle through all the files and patch them. You may get an error message such as:
Error: Unable to open SUPRTOOL.PUB.ROBELLE
EXCLUSIVE VIOLATION: FILE BEING ACCESSED (FSERR 90)

This message just means that someone is running the program that Extend tried to open. This is not a problem unless it is one of the three files that need to be patched by this process, which are:
suprnm.pub.robelle
linknm.pub.robelle
stexpnm.pub.robelle
Once you have applied the codes, you should be able to run Suprtool and STExport on their own:
run suprnm.pub.robelle
run stexpnm.pub.robelle
When you restore the files you will get six files restored; the extra files will not impact anything. Please note that the method above does not load the latest help files. Installing just the binaries is typically what customers want, and since the latest manuals are now available online, this makes the most sense.
Again, this is only for those customers that want to try just the Suprtool, Suprlink and STExport binaries.
Clean Features
The main reason for most Ecometry customers to install the latest version is to get the clean features of either Suprtool or STExport.
While the clean features of each product do the same thing, they are implemented in slightly different ways.
STExport
When STExport cleans data, it simply removes all offending characters, as specified in the Clean command, from each byte field. STExport takes an SD file and creates a "PRN" file, so to speak, but it gives you more control over the file being created, such as the delimiters and the format of the numbers.
For example, if you wanted to remove any extraneous Tab characters in byte fields, the STExport task would be:
$in mysdfile
$clean "^9"
$set cleanchar " "
$out myexport
$xeq

You can have the Clean function clean the field and, instead of replacing the offending character with a space, have STExport shift the remaining characters to the left, by setting the CleanChar in the following manner:
$set CleanChar "<null>"

With this setting, STExport removes the offending characters, shifts the remaining characters left, and pads the end of the cleaned field with the appropriate number of spaces.
STExport has other easy-to-use features, such as Clean SPECIAL, which automatically cleans all byte-type fields of characters ranging from Decimal 0 to Decimal 31.
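For example, here is a minimal STExport sketch using Clean SPECIAL; mysdfile and myexport are placeholder names:

$in mysdfile
$clean special
$out myexport
$xeq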
Suprtool
The Suprtool implementation of Clean is more focused and allows you to actually fix the source data, as opposed to fixing the data "along the way" as STExport does.
Suprtool will "clean" or replace all of the characters specified in the Clean command in a byte-type field when the $Clean function is invoked. To define which characters need to be replaced, you use the Clean command with the characters you want to clean in quotes. Since most of the characters you will need to clean are unprintable, you can enter the decimal equivalent of the character: a "^" character inside the quotes, preceding the decimal number of the character you wish to clean.
Here is an example of how easy it is to clean certain "bad" characters out of the byte-type fields in your database:
>base mydb,1,;
>get customer
>clean "^9","^10","^0","^7"
>update
>ext address(1) = $clean(address(1))
>ext address(2) = $clean(address(2))
>xeq

The SPECIAL keyword automatically defines Clean characters of Decimal 0 thru Decimal 31:

>base mydb,1,;
>get customer
>clean special
>update
>ext address(1) = $clean(address(1))
>ext address(2) = $clean(address(2))
>xeq

The above method cleans the source data in the database, allowing you to fix any data prior to exporting. This gives you a little more flexibility.
You can also fix ONLY the records that contain clean characters, which makes much more sense:
>base mydb,1,;
>get customer
>clean special
>if $findclean(address(1)) or $findclean(address(2))
>update
>ext address(1) = $clean(address(1))
>ext address(2) = $clean(address(2))
>xeq

The $findclean function looks for any of the characters specified by the Clean command, so only the records containing them are updated.