From:
To: Users of Robelle Software
Re: News of the HP 3000 and of HP-UX, 1998 #5
[Paul Gobes]
[Dave Lo]
"Course was interesting throughout and Hans [Hendriks] was very knowledgeable."
"Excellent use of lab and instruction."
[Mike Shumko]
Michael Wolff takes us on a roller-coaster ride with his company, Wolff New Media LLC, as it grows from a three-person publishing company to a seventy-person Internet start-up. Learn about East Coast content versus West Coast technology, familiar Internet start-ups (Netscape, Yahoo, Excite, and others), venture capitalists, bankers, and what you have to do as a CEO to keep the money coming to support the burn rate.
Read about the start of Wired magazine, the launch of Time Warner’s Pathfinder, and meet the Internet visionaries. You’ll read about companies that have gone from nothing to millions, even billions.
Burn Rate shifts between the highs of making deals and the lows of running out of money. A journalist by trade, Mr. Wolff has an ability to intrigue that makes it difficult to put this book down. If you are in the software business or interested in the Web business, Burn Rate is worth reading.
[David Greer]
If you’ve called Robelle with questions about your account, chances are you discussed the details with Eunice Sheehan.
Eunice has been with Robelle for a little over three years, and is still surprised by the frequency of her job "enrichment." She has progressed from part-time to full-time, with her responsibilities growing along the way.
Eunice originally hails from Bradford, England in the county of Yorkshire. Before departing for Canada with her family in May of 1987, she studied Business and Commerce at Bradford and Newcastle Universities.
A true Vancouverite, Eunice enjoys hiking, bicycling, gardening and other outdoor pursuits. She is an avid reader, preferring biographies and mysteries. Eunice also volunteers time as a mentor in a College and Career youth group, helping young people stay focused and motivated. When you next chat with Eunice, perhaps she will do the same for you!
[Ken Robertson]
All new developments on Qedit for Windows will be on 32-bit operating systems (Windows 95, Windows 98 or Windows NT).
Robelle has contacted Hewlett-Packard about this issue, but has been unable to get this policy changed. As a result of HP’s decision, Qedit for HP-UX customers have the following options:
[Paul Gobes]
Right now this is a text-only version, but otherwise it has all the same great Robelle news, tips and technical information you get in our printed version. Send your request by e-mail to support@robelle.com.
:listf qedit.pub.robelle,8
********************
FILE: QEDIT.PUB.ROBELLE

7 Accessors(O:7,P:7,L:0,W:0,R:7),Share
#S252   ROBYN,MGR.ISDEV       P:1,L:0,W:1,R:1   LDEV:13
#S251   ROBYN,MGR.ISDEV       P:1,L:0,W:1,R:1   REM:198.14.25.10
#S249   NEIL,MGR.SOURCE       P:2,L:0,W:2,R     REM:136.14.10.82
#S240   FRANCOIS,MGR.SOURCE   P:1,L:0,W:1,R:1   REM:201.13.15.11
#J2339  MAILJOB,MGR.XPRESS    P:1,L:0,W:1,R:1   SPID:#O2492
#J2337  KBLOAD,MGR.IS         P:1,L:0,W:1,R:1   SPID:#O2499

[Paul Gobes]
The latest version of the MPE/iX Store command is more paranoid than earlier versions and chokes on the Emast file in Pub.Robelle. It incorrectly identifies Emast as the root file of a database, then runs into a problem when it can’t find the associated datasets. In fact Emast is not a root file; it is a Priv file that was used only by the DBMGR program bundled into our Xpress e-mail application.
If you don’t have Xpress, you can delete Emast because it is just a relic of an expired Robelle demo tape.
[Hans Hendriks]
The Get command always takes the same amount of time to read a given dataset, regardless of how many records you want to select. The Chain command takes more time as you select more chains and records. In other words, it’s data-dependent. So the question becomes, "Given that I want to select this many chains/records, which is faster, Get or Chain?"
Let’s try a few no-brainer examples. It’s pretty obvious that selecting against one key value should be done with the Chain command, and selecting 99% of the key values should be done with the Get command. But for less obvious numbers of records, where is the break even point where you should switch from Chain to Get? The question really is, "At what point do the Get and Chain commands take the same amount of time?"
When you know the answer to that question, you’ll know the best point to switch from using a Chain command (for low numbers) to a Get command (for higher numbers).
The balance point can be given in terms of the number of disc accesses (a.k.a. I/Os or input/output operations) it takes to read the data.
Suprtool’s Get command, as stated above, always takes the same number of I/Os to read a given dataset. It reads up to 50,000 bytes of data with every disc access, so the rule of thumb is to multiply the number of records by the record size in bytes, then divide by 50,000. The resulting number is the approximate number of I/Os Suprtool will need to read the dataset. For example, our d-notes dataset has 155,054 80-byte records. The ballpark estimate is 248 disc accesses to read the whole dataset ([155,054 x 80] / 50,000 = 248).
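If you would rather experiment with this rule of thumb away from Suprtool, the arithmetic is easy to sketch in a few lines of Python. This is only an illustration of the ballpark estimate described above, not the exact FREAD count; the function name is ours, and the 50,000-byte transfer size and d-notes figures are the ones quoted in this article.

  # Ballpark estimate of the disc I/Os needed by Suprtool's Get command,
  # assuming roughly 50,000 bytes are transferred on every disc access.
  def estimated_get_ios(num_records, record_bytes, bytes_per_io=50000):
      return (num_records * record_bytes) / bytes_per_io

  # d-notes example from this article: 155,054 records of 80 bytes each
  print(round(estimated_get_ios(155054, 80)))    # about 248 disc accesses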
The ballpark estimate does not take into account block sizes, the number of blocks that fit into Suprtool’s buffer, and other overhead bytes embedded in the records, but it’s accurate enough for a rough calculation. You can see the exact number of I/Os by doing a Suprtool Get task with statistics enabled (Set Stat On). The number appears in the Input FREAD calls. For example,
>get d-notes
>set stat on
>out $null
>xeq
IN=155054, OUT=155054. CPU-Sec=6. Wall-Sec=7.

** INPUT **
Input buffer (wds):       24576
Input record len (wds):      40
Input logical dev:            2
Input FREAD calls:          281    <----- LOOK HERE
Input elapsed-time (ms):   2915
Input records/block:         23
Input blocks/buffer:         24

As you can see, Suprtool actually takes 281 I/Os to read the dataset in this example. That tells you the balance point in terms of disc accesses.
But how do disc accesses relate to a specific number of chains or records? First you need to know the average number of records in each chain. If you know your data really well, you may already know this. If you’re not sure, Suprtool can help you figure it out, as can HowMessy.
Suprtool will tell you how many records there are in the dataset, and how many different chains. Divide one number into the other to see the average number of records per chain. For example,
>get d-notes
>sort key-field-name
>dup none keys
>out $null
>xeq
IN=155054, SEL=155054, OUT=3748. CPU-Sec=31. Wall-Sec=34.

In this example, the average chain has 41.3 records (155,054 records / 3,748 keys = 41.3).
HowMessy can give you a dizzying number of statistics per dataset, including the average chain length. The following example is an abbreviated listing,
                                            ...   Max     Ave    Std
Data Set          Capacity  Entries  Factor ...   Chain   Chain  Dev

D-NOTES     Det     219098   155054   70.8% ...     511   41.37  43.11

The last bit of information you need to know is exactly how Chain does its job, so that you can calculate the number of I/Os for a given Chain operation.
In detail datasets, for every key value you want to retrieve using Chain, Suprtool does one DBFIND (that’s one I/O) and as many DBGETs as there are records on the chain (assume one I/O per DBGET). So if the average chain has 41.3 records in it, then on average it would take 42.3 I/Os to retrieve the records for a given chain. If there are 10 key values (chains) to retrieve, it would take 423 I/Os (42.3 x 10 = 423).
For master datasets, there is no DBFIND, but there is still one DBGET per key value.
Now you can answer the basic question: "What is the balance point where Get and Chain take the same amount of time?" You already found that out (in our example it is 281 I/Os). Divide that by the average number of I/Os per chain (in our example it is 42.3), and you get the balance point in terms of the number of chains. The example numbers give a balance point of 6.6 chains (281 / 42.3 = 6.6). As long as you want the records from six or fewer chains, use a Chain command. If you want the records from more than six chains, use a Get command.
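If you want to check the arithmetic for your own datasets, here is a minimal Python sketch of the whole calculation, assuming the figures from this article (281 FREADs for the Get, 155,054 records spread over 3,748 key values, and one DBFIND plus one DBGET per record for each chain, at one I/O each). The function name and parameters are ours, purely for illustration.

  # Estimate the number of chains at which a Chain task costs as many disc
  # I/Os as a full Get of the dataset.  Detail datasets pay one extra I/O
  # per chain for the DBFIND; master datasets do not.
  def balance_point(get_ios, total_records, distinct_keys, detail=True):
      avg_chain = total_records / distinct_keys            # e.g. 41.3 records
      ios_per_chain = avg_chain + (1 if detail else 0)     # e.g. 42.3 I/Os
      return get_ios / ios_per_chain                       # e.g. 6.6 chains

  print(balance_point(281, 155054, 3748))   # about 6.6 -- Chain for 6 or fewer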
To code this choice in a job stream, where the key values you want to retrieve are stored in the file called Values, you would use the following commands:
!
!setvar number_of_chains,finfo("values",19)
!
!if number_of_chains > 6 then
!
!  run suprtool.pub.robelle
base mybase
get d-notes
table t,key-field-name,file,values
if $lookup(t,key-field-name)
...<other suprtool statements>
xeq
exit
!
!else
!
!  run suprtool.pub.robelle
base mybase
chain d-notes,key-field-name=t
table t,key-field-name,file,values
...<other suprtool statements>
xeq
exit
!
!endif
!

There is another cooler, sneakier way of doing this. Place the commands that are different into a file and perform a Use command of this file as part of a Suprtool task.
!
!setvar number_of_chains,finfo("values",19)
!
!if number_of_chains > 6 then
!
!  echo get d-notes > myfile
!  echo if $lookup(t,key-field-name) >> myfile
!
!else
!
!  echo chain d-notes,key-field-name=t > myfile
!
!endif
!
!run suprtool.pub.robelle
base mybase
table t,key-field-name,file,values
use myfile
...<other suprtool statements>
xeq
exit
!

One obvious drawback of this method is that you have hardcoded the balance point (6) right into the job stream. If the balance point changes (because the size of the dataset has changed or the average number of records per chain changes), you will have to update this job stream, as well as the dozens of associated job streams you have coded.
How can you make job streams avoid using a hardcoded cutoff value? That’s the subject of part 2 of this article, which will appear in the next issue of What’s Up, DOCumentation?. Stay tuned.
[Mike Shumko]
$in loadfile
$heading fieldnames
$html table title "Store Database Howmessy Report"
$out /apache/www/howmessy.html
$exit

[Neil Armstrong]
DELPART is a utility that allows you to delete an NTFS partition. FDisk will not do this. If you have to rebuild your PC from scratch and can’t get NT to recognize your CD-ROM, then you will need this utility.
I found DELPART at http://www.tmco.co.uk/nt.html.
[Neil Armstrong]
The Qedit command file below can be used to search a file for multiple strings and print the matching lines in their original order. It takes three parameters: the filename of the file you want searched, the listname of the file containing the search strings, and the maximum length (stringlength) of those strings. Save this command file as Findmany, then invoke it from inside Qedit:
/findmany custlist, strings, 10

The command file loops through the Strings file. For each string, it appends the search results to a temporary file, which it then Texts into Qedit. The results are cleaned up, sorted and printed. Here is the Findmany command file:
parm filename, listname, stringlength
purge found,temp
setvar num,1
setvar eof, finfo('!listname',19)
while num <= !eof do
   print !listname;start=!num;end=!num > temp
   input inline < temp
   setvar string str("!inline",1,!stringlength)
   :/list !filename "!string" > > found
   echo inside loop !string
   setvar num, num + 1
endwhile
/text found
/deleteq "line found"
/deleteq "lines found"
/lsortq all
/listq $lp all

[Paul Gobes]
:file formout;dev=disc
form sets
:reset formout
in formout
def pc,62,1
if pc = "%"
def set,4,16
ext "fo ",set
o use2,temp
x
u use2
:file formout=$stdlist

[Paul Gobes]
Item: CUST-ACCOUNT   Z8

>get d-sales
>extract cust-account
>list
>output *
>x
>GET D-SALES (8) >OUT $STDLIST (0)
CUST-ACCOUNT    = +5   0000000E   {overpunch}

Now try the same task with an arithmetic operation on the Extract command:
>get d-sales
>extract cust-account = cust-account * 1   {multiply by one}
>list
>output *
>x
>GET D-SALES (8) >OUT $STDLIST (0)
CUST-ACCOUNT    = 5   00000005   {no overpunch}

[Paul Gobes]
The customer wanted
/mstream *

to work the same way as
/stream *

The Mstream command file contained:
parm jobfile
run MSTREAM.MAESTRO.CCC;info="!jobfile"
/lq @ > tempfile
mstream tempfile

Consultant Michael Abootorab enhanced this command file to be usable both inside and outside of Qedit, with a file name or with an asterisk (*).
parm filename
if bound(insideqedit) and insideqedit = 0 then
   mstream !filename
else
   if '!filename' = '*' then
      /holdq all {or lq @ > filename}
      mstream hold
   else
      mstream !filename
   endif
endif

[Hans Hendriks]