Vladimir Volokh of VESOFT fame called us recently, and passed on an interesting story.
He was doing MPE system and security consulting at a user site. One of his regular steps is to run VESOFT's Veaudit tool on the system. From this he learned that every user in the production account had System Manager (SM) capability!
Giving a regular user SM capability is a really bad thing. It means that the users can purge the entire system, look at any data on the system, insert nasty code into the system, etc. And this site had recently passed their Sarbanes-Oxley audit.
Vladimir removed SM capability from the users and sat back to see what would happen. The first problem to occur was a job stream failure. The job failed because the user did not have Read access to the STUSE group, which contained the Suprtool "Use" scripts. So, Suprtool aborted.
"Background Info Break"
For those whose MPE security knowledge is a little rusty, or non-existent, here is a helpful quote from Vladimir's son Eugene, from his article, BURN BEFORE READING - HP3000 SECURITY AND YOU - available at http://www.adager.com/VeSoft/SecurityAndYou.html
When a user tries to open a file, MPE checks the account security matrix, the group security matrix, and the file security matrix to see if the user is allowed to access the file. If he is allowed by all three, the file is opened; if at least one security matrix forbids access by this user, the open fails.

For instance, if we try to open TESTFILE.JOHN.DEV when logged on to an account other than DEV and the security matrix of the group JOHN.DEV forbids access by users of other accounts, the open will fail (even though both TESTFILE's and DEV's security matrices permit access by users of other accounts).

Each security matrix describes which of the following classes can READ, WRITE, EXECUTE, APPEND to, and LOCK the file:

* CR - File's creator
* GU - Any user logged on to the same group as the file is in
* GL - User logged on to the same group as the file is in and having Group Librarian (GL) capability
* AC - Any user logged on to the same account as the file is in
* AL - User logged on to the same account as the file is in and having Account Librarian (AL) capability
* ANY - Any user
* Any combination of the above (including none of the above)

...

Whenever any group is created, access to all its files is restricted to GU (group users only).

As Eugene points out above, account users do NOT have Read access by default to a new group in their account. This was the source of the problem at the site Vladimir was visiting. When the jobs could not read the files in the new STUSE group, the system manager wielded the MPE equivalent of the medieval broadsword: give all the users SM capability.

ALTUSER PRODCLRK; CAP=SM,IA,BA,SF,...

This did solve the problem, since it certainly allowed them to read the STUSE files, but it also allowed them to read or purge any file on the system, in any account.
What he should have done was an Altgroup command immediately after the Newgroup command:
ALTGROUP stuse; access=(r:any;a,w,x,l:gu)

or specified the correct access when the group was built:

NEWGROUP stuse; access=(r:any;a,w,x,l:gu)

Since the HP 3000 runs in a corner virtually unattended (except for feeding the occasional backup tape), we often forget many of the options on the commands that are used sparingly. Neil Armstrong, my cohort in our Labs, often does a Help commandname to remind himself of the pitfalls and options on the lesser-used commands, NEWGROUP being one of them.
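By the way, you can confirm what a group's security actually allows at any time with the LISTGROUP command (a quick check; the exact report layout varies by MPE release):

:listgroup stuse

Among the attributes reported are the group's security settings; after the Altgroup above they should show Read allowed for ANY, with Append, Write, Execute and Lock restricted to GU.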
Another Story!
Just as in Vladimir's experience above, it seems as if every HP 3000 site has a Sarbanes-Oxley (SOX) story. For example, consider this discussion thread from the 3000-L:
From: Ray Shahan

I'd like to get some ideas/info on how others solved the 'access to the spool file' SOX issue. My current aggravation with the auditors is as follows: We have a snag in our production data, and I have to run JCL jobs in production to try to find the hosed data. To run these research JCLs, I have to use my logon with BA access in the job card of the JCL so that if I try to change the production data using these JCLs while doing research, my user will be tracked in the log files for the changes... geeez.
However, I, myself, can't log onto production as an online user (IA access), I can only log onto develop with IA access, so I can't view the STDLIST created by the JCL I've run. I don't want to have the operators copy the spool files to my private account every time the job is run, and the auditors won't let me view production spool files (the word ridiculous should come in here somewhere, but...). Also, I can't run GOD (the auditors nixed that long ago), or do a CHGLOGON.
So, any help with this would be great.
From: Carol Darnell

Welcome to the SOX world, Ray!
I'd love to offer a constructive suggestion, but I'm dealing with the same thing (including just losing AM and OP from my production accounts, so trying to get onto a restricted box to look at a problem is... um....er...not so nicely achieved). Losing access to much of maestro, most particularly the ability to submit jobs using a PRIV user to actually perform some of the cross-account and cross-box copies I have to do. Of course, if someone who doesn't have a CLUE tries to do something my fingers aren't permitted to do, and screws up, WHO has to repair the damage????
From: Larry BarnesIt's my understanding that SOX isn't supposed to be so restrictive that it prevents you from doing your job functions.
From: Carol Darnell

We've tried to explain what damage this can (and will) do - but compliance appears to be more critical than being able to support our customers. I'm so utterly and totally frustrated by this that I'm giving myself an ulcer. You know things are counter-productive when the general solution is 'let the stuff break to prove the point'.
From: Greg Stigers

I must say that your SOX stories are scarier than anything Stephen King ever wrote. For literary analogies, I'm more reminded of Dante's Purgatory.
From: Ray Shahan

Thanks to all who responded. We have, for the moment, gone with giving us developers "OP" cap, so we can read spool files we didn't create. Of course, "OP" cap can do store/restores and execute the REPLY command, so this has raised the eyebrows of the auditors (it just goes on and on).
Above is merely a snapshot from the thread; you can read the whole exchange in the 3000-L archives.
And good luck.
Jeff has written a general discussion of security followed by details to help ensure that your MPE systems connected to a network are secure. This tutorial is about half general networking security and half MPE-specific; it is introductory. When you get to the MPE section, you learn that MPE has an advantage because of its proprietary nature: a common type of attack usually will not work, and even if it did, the worst result would be a process abort and the loss of one networking service. And Jeff shows an example of an INETDSEC.NET.SYS config line to limit Telnet access to your system:
telnet allow 10.3-5 192.34.56.5 ahost anetwork
# The above entry allows the following hosts to attempt
# to access your system using telnet:
#   hosts in subnets 3 through 5 in network 10,
#   the host with Internet Address of 192.34.56.5,
#   the host by the name of "ahost",
#   all the hosts in the network "anetwork"
#
tftp deny 192.23.4.3

Don't forget to read Jeff's notes as well as the slides. Read the PowerPoint version in the normal view; you can resize the slide window smaller, and you can also minimize the contents window. In the HTM version, the notes are impossible to read on my monitor, and you cannot resize the slide window. People at HP must work on much larger monitors than I do....
Sometimes, the computer "dies"... so this note discusses system failures, system hangs, memory dumps, subsystem numbers, and interpreting a system abort number. Sometimes, the system is alive... so the free speedometer is discussed.

There are two basic kinds of system failure that an MPE/iX (or MPE XL) user will encounter: a "System Failure" and a "system hang".
A System Failure reports the following information on the hardware console:
SYSTEM ABORT 504 FROM SUBSYSTEM 143
SECONDARY STATUS: INFO = -34, SUBSYS = 107
SYSTEM HALT 7, $01F8

Additionally, the hex display (enabled on the console by typing control-B) displays something like:

B007 0101 02F8 DEAD

Note that the "504" and "$1F8" above are the same value, shown in decimal and in hex. Further, the hex display shows "0101" and "02F8". In each of these 4-digit hex words, the first two nibbles are a packet number and the last two are data: packet 01 carries "01" and packet 02 carries "F8", which together spell out the System Abort number ($01F8). Note: if the System Abort number is in the range 0 to $FF (decimal 255), only one packet is needed to represent it, and no packet 2 will be shown.
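You can verify the decimal/hex equivalence yourself from the CI, whose CALC command accepts $-prefixed hex literals in expressions:

:calc $1f8
504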
Step 2: Learn Debug, DAT and SAT (essential system utilities). This step is left to the student as an exercise.
Step 3: Read "Basic System Problem Analysis" by Bill Cadier.
Here is how Bill introduces his topic:
As the HP 3000 winds down it will be advantageous for owners of this system to be able to perform as much troubleshooting as possible. The amount of troubleshooting will be limited because source code for the OS is not available outside HP.

You really need to be familiar with the internals of MPE to use this paper, but it is excellent for people who are planning to provide third-party MPE support, analyzing dumps and chasing down system problems! Here is what the paper covers: Debug macros for accessing system data structures, Process Management structures, Job/Session Management structures, File System structures, finding the GUFD of an opened or closed file, Virtual Space Management structures, Memory Management structures, Dispatcher structures, Table Management, System Globals, the general registers and space registers, short vs. long pointers, and the procedure calling convention. Finally, there is a detailed case study of analyzing an actual System Abort 663, plus a system hang.

It is assumed that readers have good familiarity with the tools DEBUG, DAT and SAT. The documentation for these tools may be found online at:
http://docs.hp.com/mpeix/onlinedocs/32650-90901/32650-90901.html
Bill also provides a bit of good background information, such as:
The PA-RISC Instruction Set Reference Manual and the Procedure Calling Convention manual are pretty hard to come by. They are not at the docs.hp.com web site, so it is worth spending a little time going over some of the basics of the hardware.

DEBUG, DAT and SAT use aliases for certain of the registers: SP, the stack pointer, will always be R30; DP, the data pointer (global variables in a program context), will always be R27; RP, the procedure return pointer, is R2.
The procedure calling convention specifies that the first four argument values being passed in a procedure call be placed in registers R26 to R23, the first parameter going into R26 and onward to R23. All additional parameters are placed into the stack frame that was created by the procedure making the call.
Parameters may require more than one register; a long pointer or LONGINT, for example, will take two registers. If that occurs, the registers must be aligned, which may result in one of the registers being skipped and left unused (more on this in a bit).
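To illustrate the skipped-register case (our example, not one from Bill's paper): suppose a procedure is called with a 32-bit integer followed by a LONGINT. The integer lands in R26, but the LONGINT must occupy an aligned register pair (R26/R25 or R24/R23), so R25 is skipped and the LONGINT is passed in R24 and R23.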
GR31 is called the "millicode RP", but it is also where the "BLE" instruction initially stores the current value of the PC register before making the branch. It is moved to R2 immediately after that, in the "delay slot" of the branch.
R0 is a scratch register that contains the value 0. It cannot be written to, but it is legal to use R0 as a target register when a value is not required. For example, the "NO OP" instruction (one that does nothing) is 08000240 OR r0, r0, r0. Logically OR R0 with R0, giving R0... nothing.
Ramusage is a small, free MPE program that reports your current memory and determines how much is used by the system and how much is available to users. It then calculates the effect of adding various amounts of memory. For our production system at Robelle, which had 112 Mb of memory, it showed that by adding only 64Mb of memory, we could double the amount available to users. Here’s the actual output:
RAMUSAGE [2.47] - LPS Toolbox [A.09b]
(c) 1995 Lund Performance Solutions

SERIES 928LX  MPE/iX 6.0  #CPUS: 1
Memory size: 112 MB (117,440,512 bytes; 28,672 4KB pages)

Memory usage by "type" of Object Class:

  Class         #LogicalPages    #MB   % total
  SYSTEM_CODE           3,406     13     11.9%
  SYSTEM_DATA          10,443     40     36.4%
  TURBO_DATA            2,923     11     10.2%
  USER_CODE             4,234     16     14.8%
  USER_DATA             2,882     11     10.1%
  USER_STACK            1,548      6      5.4%
  USER_FILE             3,235     12     11.3%
  Totals:              28,671    111    100.0%

"User" pages are 51.7% of memory (58 Mb out of 112 Mb)

If you added:
  32 Mb, you'd have 1.6 times as much "User" memory. (144 total Mb)
  64 Mb, you'd have 2.1 times as much "User" memory. (176 total Mb)
  96 Mb, you'd have 2.7 times as much "User" memory. (208 total Mb)
...

Installing Ramusage is easy; it only takes about 10-15 minutes. You can download it from Allegro's Web site at...
http://www.allegro.com/software/hp3000/allegro.html#RAMUSAGE
There are three downloadable formats: LZW, Store-to-disk (STD) and tar.Z, with instructions for each, but you must set up the Allegro account first:
:hello manager.sys
:newacct allegro, mgr; cap = &
   AM,AL,GL,DI,CV,UV,LG,PS,NA,NM,CS,ND,SF,BA,IA,PM,MR,DS,PH
:altacct allegro; access=(r,x,l:any;w,a:ac)
:altgroup pub.allegro; cap = BA,IA,PM,MR,DS,PH; &
   access=(r,x,l:any;w,a,s:ac)
:newgroup data.allegro; access=(r,x,l:any;w,a,s:ac)
:newgroup doc.allegro; access=(r,x,l:any;w,a,s:ac)
:newgroup help.allegro; access=(r,x,l:any;w,a,s:ac)

I chose the STD method and followed the sample found on the same Web page.
After downloading to my Windows PC I uploaded to my 3000 using FTP. If you have never enabled FTP on your e3000, it is easy. Just follow the instructions on the Web page at
http://www.robelle.com/tips/ftp.html

The FTP commands to download ramusage are simple:
C:\>ftp my3000
User (my3000.robelle.com:(none)): paul,manager.sys,pub
ftp> binary
ftp> put ramusage.std ramusage;rec=128,1,f,binary;code=2501
ftp> quit

Then on my 3000, I restored the files from the uploaded STD file:
:hello paul,manager.sys,pub
:file ramusage;dev=disc
:restore *ramusage; @.@.@; olddate; create
:run ramusage.pub.allegro

The Ramusage program, written by the legendary Stan Sieler, is a subset of the PAGES utility (part of the Lund Performance Solutions "System Manager's Toolbox"). It does not use the Measurement Interface, so you won't see high system overhead. When asked how to calculate the amount of memory to buy, Stan provides the following formula:
Money_available / price_per_MB = Amount_of_MB_to_buy.
In other words, buy as much as you can afford!
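To make that concrete with the Kingston price quoted below (64Mb for $286, or about $4.50 per Mb), a hypothetical memory budget of $1,000 works out to $1,000 / $4.50 ≈ 222 Mb; in practice you would buy the nearest standard increment your box supports.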
Now where do you buy more memory? Well, you can get it from HP: outfield.external.hp.com/cgi-bin/spi/main.pl if you know the part number...
Or you can try one of the 14 third-party vendors who are listed under ‘Memory’ at the comprehensive vendor listing at
http://www.triolet.com/HPVend/hpvend.html.
Or try www.solutionstore3000.com for a list of vendors (select Product Name = Memory and leave every search criteria as “no option”).
For example, Kingston listed the 64Mb for our 928LX at $286, whereas HP was charging $1,090 (all prices are in US dollars). See the Kingston Web site at
http://www.ec.kingston.com/ecom/kepler/mfrmod.asp.
Another source might be third-party hardware maintainers like the ICS Group (www.icsgroup.com) which had 128Mb for $600.
Finally, if you’re really lucky (and brave), there’s eBay (www.ebay.com). The day that I looked there were offers for 9x9 systems, so I contacted one of them (Sales@Hprecovery.com) and was quoted 128Mb for $95. At that price it was worth the risk.
However you buy memory, it will probably take some searching around, because availability is likely an issue. But it is also probably the cheapest and easiest way to boost your system performance.
The development environment at Robelle is quite unusual, in that we have a single job stream, which launches all of the necessary compiling and testing steps for each product. So if I want to compile and run the entire test suite for Suprtool, I just have to issue a single stream command, :stream jrelease.suprtool
If I want to run the Qedit test suite, then I just have to stream the Qedit jrelease with the command :stream jrelease.qedit
There’s a problem though: the individual job streams that test each product can only run single threaded. Each job must complete before the next one begins, so we have to keep the job limit perfect all the time. This also means that we can’t run the Qedit and Suprtool test suites simultaneously.
This has always been a problem, as sometimes people or jobs alter the limit incorrectly, which means that multiple jobs would stream at the same time, causing test jobs to fail and results to be incorrect. This could mean losing a night’s “productive” testing, as the job streams are generally streamed in the early hours of the morning, after the nightly backup. So if you had just made a major change to a module of Suprtool, you wouldn’t know the impact of that change for another day.
Mike Shumko suggested that we try to implement jobq’s for our environment, to address this problem. Without knowing what jobq’s really were, I naturally volunteered for the job in the hope that I could alleviate this dependency we had on the job limit.
What I hoped jobq’s would do
Without even reading about jobq’s I thought they were a way to have job streams operate in specified queues, and thus be independent of jobs in other queues and the main job queue.
The Commands
To start using jobq’s immediately, you need only do two things:
:newjobq suprtool;limit=1

Then I changed all of the job cards to have ";jobq=suprtool" at the end:
!job jtest01,user.acct,suprtest;outclass=lp,3,1;inpri=7;jobq=suprtoolSince I have MPEX, I used the MPEX %qedit command to make a global change to all jobs:
%qedit j@.supr@,append ";jobq=suprtool" "!job"

This will open each file that qualifies, and append the jobq specification to the job card line. Voila! Suprtool could now run in its own jobq independently.
There are some other useful commands associated with jobq’s:
:listjobq

JOBQ       LIMIT   EXEC   TOTAL
HPSYSJQ     3500      8      10
SUPRTOOL       1      0       0
QEDIT          1      0       0
RANDMISC       1      0       0

The purgejobq command (:purgejobq suprtool) will allow you to remove any jobq's that you've defined. Showjob ;jobq will show you each job along with the jobq in which it is running. A detail line from that output would look as follows:
:showjob ;jobq

JOBNUM  STATE IPRI JIN  JLIST  JOBQ      INTRODUCED  JOB NAME
#J4567  EXEC       10S  LP     SUPRTOOL  WED 11:48A  JTEST01,USER.ACCT

You can also alter the jobq that a job is running in with the altjob command, by typing :altjob #j4567;jobq=qedit
The Practice
Since it took me only a few commands to create and change all the jobq’s on our development system, I had everything changed to take advantage of the jobq’s in short order.
So I started testing running jobs associated with the Qedit and Suprtool test suites at the same time. I quickly discovered that each jobq requires that a slot be open in the global jobq in order for the job to run. I found an explanation in the newjobq command documentation:
“The global limit takes precedence over individual queue limits. That is, even if a jobqueue has a slot available, if the overall limit has been reached, jobs have to wait till one of the jobs finish or the global limit is increased. When a global slot becomes available, the next job is picked from among the eligible jobqueues (those which haven’t yet reached their individual limits).”
I hadn’t expected this; however, it merely meant that I needed to expand the job limit to some huge value. In retrospect, I should probably have created a separate jobq for our regular background jobs, like inetd or the apache webserver. That way I could create a small job to periodically check that queue, to ensure that all the required background jobs are “up,” and take appropriate action if a job has failed.
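A sketch of that idea, with the queue name, limit and job card invented for illustration:

:newjobq background;limit=10

and each background job's card gets the same treatment as the test jobs:

!job jinetd,manager.sys;outclass=lp,1;jobq=background

A periodic :listjobq would then show at a glance whether the expected number of background jobs is executing in that queue.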
Maintenance
To my surprise, I found that after a “start norecovery” on this system, all my jobqs were missing. Again, the newjobq command help revealed what had happened:
"The job queues persist across reboots, provided a START RECOVERY is done. Any other system starts will cause the job queues to be deleted and they will have to be created again. This command is available in a session, job, or in BREAK. Pressing [Break] has no effect on this command. This command is not allowed in the SYSSTART file."
We have a command called Startall, which is used to start all of the system jobs, so I put the newjobq commands in this command file to ensure that all of the jobq's were built with the proper names and limits:
setvar hpmsgfence 2
newjobq suprtool;limit=1
newjobq randmisc;limit=1
newjobq qedit;limit=1

(Setting hpmsgfence to 2 suppresses the warning and error messages these commands would otherwise produce if the queues already exist.) This way I am assured that the jobq's always exist when we restart the system.
Problems
Personally, I have found no unexplained problems with the new jobq feature; however, recent traffic on the 3000-L mailing list did showcase this query from David Knispel:
“We ran out of disk space over the weekend. Now my JOBQs are screwed up. When I do LISTJOBQ for HPSYSJQ, it shows 6 EXEC but only a limit of 3 and total of 3. When I do a SHOWJOB, only 3 jobs show for this queue. I’m having the same problem with other queues also. Any way to fix this without bouncing the system?”
To which Richard Bayly from HP responded:
“The patch you are after is: MPELXC2B - 6.5; MPELXC2D - 7.0; MPELXC2C - 6.0 but superseded by MPELXL7A.”
Are jobq’s for you?
In conclusion, I do find the new jobq feature quite valuable, as it gives me a tool to divide my jobs into logical, if not physical, queues. I am able to manage some groups of jobs more easily and have more jobs working concurrently, getting more out of my HP e3000. If you have problems with concurrency, or with jobs running when they shouldn't, then perhaps implementing jobq's is the way to go.
We recently resolved a support issue in which a customer was having trouble compiling a C program within Qedit for MPE, and believed that Robelle software was at fault. The error in question was:
line 1438: error 1615: Default parameter specification not supported.
The offending line of code was:
fnum = FOPEN ( fname, fop, aop, -recsize,,,,,, filesize );
We've frequently done similar things in our own C programs, so we know this should work, and we often do compiles inside Qedit, as well, so we were fairly certain the problem didn't have anything to do with Qedit. For the most part Qedit simply passes non-Qedit commands to MPE for execution. But just to be sure, our first troubleshooting tactic was to have the customer try the same compile command from the MPE prompt. As expected, he got the same error, which ruled out Qedit as the culprit.
Next, we asked to see the contents of the customer's CCXL command file. In particular we wanted to see exactly how the C compiler was invoked. Here's what we found:
RUN CCOMXL.PUB.SYS; INFO="!INFO"; PARM=!_CCPARM_; XL="QCOMPXL.PUB.ROBELLE"
The XL parameter showed that the compiler had been "Qedified". (QCOMPXL allows programs which reference it to understand Qedit workfiles.) As before, we didn't think QCOMPXL could have anything to do with the problem. In fact, tests showed that the problem existed regardless of the format of the file, flat ASCII or Qedit. Even so, to be thorough, we asked the customer to run his compile without the extra XL parameter. Same result, same error, which ruled out QCOMPXL as the source of the problem.
Thus we had determined definitively that no Robelle software was involved in causing the customer's problem. Although we had suspected this from the start, it was nice to get verification of it. Nevertheless, as is our wont, we continued to work with the customer, even though this was technically no longer our issue.
We next turned our attention to source code and environmental factors. We know that the C language does not support default parameters. But we also know that in HP's implementation default parameters *are* supported, at least for system intrinsic calls, via the "#pragma intrinsic" declaration mechanism. So we asked the customer if he had included "#pragma intrinsic FOPEN" in his source file. This line of inquiry bore no fruit, however, as the customer assured us that he had included the correct pragma in his source. In addition, in our tests we found that omitting the appropriate pragma statement from our source code resulted in a different compiler error message from what the customer was seeing:
line 28: error 1634: Missing arguments only allowed on intrinsic calls.
We then asked the customer for the exact command he was using to perform his compile:
ccxl mysource, myobject, *lp; "-Aa -C -Wc,-m,-o,-w2"
That looked reasonable enough, but when we explicitly used the customer's info string in our own test compilation, we were finally able to reproduce the same error he was seeing!
Thus we knew that something about the options in the info string was causing a problem, but which option? From reading the compiler options section of the HP C/iX manual, it appeared that the customer's info string was perfectly valid. So we needed to compare it with something we knew worked.
In our development environment we had long ago configured the appropriate default info string in a CI variable that gets set up at login time, and is used in all C compilations. Since everything had been working for us for so long we simply forgot about the info string. When we compared our default info string...
"-Aa +e -DMPEIX -Wc,-w1"
...with the customer's info string, we saw several obvious differences.
What turned out to be the crucial difference is that our info string includes the "+e" argument to the "-A" option. The C/iX manual describes that argument thusly:
Allows the use of extension features, such as long pointers and using the $ character in the identifier name.
Apparently, extension features also include support for inserting default parameters, although this isn't explicitly documented. At any rate, once our customer added "+e" to his info string, all compilations worked without error. Problem solved!
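In other words, with the customer's file names from above, the working compile command became:

ccxl mysource, myobject, *lp; "-Aa +e -C -Wc,-m,-o,-w2"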
The bottom line is that two things must be true in order for C/iX compilations to support default parameters in MPE intrinsic calls: the source code must declare each such routine with "#pragma intrinsic", and the info string must enable extension features with "+e".
An ad-hoc request may come to the harried data processing manager. She may throw her hands up in despair and say, "It can't be done. Not within the time frame that you need it in." Of course, every computer-literate person knows deep down in his heart that every programming request can be fulfilled, if the programmer has enough hours to code, debug, test, document and implement the new program. The informed DP manager knows that programming the Command Interpreter (CI) can sometimes reduce that time, changing the "impossible deadline" into something more achievable.
Getting Data Into and Out of Files
So you want to keep some data around for a while? Use a file! Well, you knew that already, I'll bet. What you probably didn't know is that you can get data into and out of files fairly easily, using I/O re-direction and the print command. I/O re-direction allows input or output to be directed to a file instead of to your terminal. I/O re-direction uses the symbols ">", ">>" and "<". Use ">" to re-direct output to a temporary file. (You can make the file permanent if you use a file command.) Use ">>" to append output to the file. Finally, use "<" to re-direct input from a file.
echo Value 96 > myfile
echo This is the second line >> myfile
input my_var < myfile
setvar mynum_var str("!my_var",7,2)
setvar mynum_var_2 !mynum_var - (6 * 9)
echo The answer to the meaning of life, the universe
echo and everything is !mynum_var_2.
After executing the above command file, the file Myfile will contain two lines, "Value 96" and "This is the second line". (Without quotes, of course.) The Input command uses I/O re-direction to read the first record of the file, and assigns the value to the variable my_var. The first Setvar extracts the number ("96") from the middle of the string, and the next line uses that value in an important calculation: 96 - 54 = 42.
How can you assign the data in the second and subsequent lines of a file to variables? You use the Print command to select the record that you want from the file, sending the output to a new file.
print myfile;start=2;end=2 > myfile2
You can then use the Input command to extract the string from the second file.
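Continuing the example above:

input my_var2 < myfile2

after which my_var2 contains "This is the second line".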
Rolling Your Own System Variables
It's easy enough to create a static file of Setvar commands that gets invoked at logon time, and it's not difficult to modify the file programmatically. For example, let's say that you would like to remember a particular variable from session to session, such as the name of your favorite printer. You can name the file that contains the Setvars, Mygvars. It will contain the line:
setvar my_printer "biglaser"
The value of this variable may change during your session, but you may want to keep it for the next time that you log on. To do this, you must replace your normal logoff procedure (the Bye or Exit command) with a command file that saves the variable in a file, and then logs you off.
byebye
purge mygvars > $null
file mygvars;save
echo setvar my_printer "!my_printer" > *mygvars
bye
Whenever you type byebye, the setvar command is written to Mygvars and you are then logged off. The default close disposition of an I/O re-direction file is TEMP, which is why you have to specify a file equation. Because you are never certain that this file exists beforehand, doing a Purge ensures that it does not.
Program Control - If/Else and While
One of the reasons we like programming the CI is its lack of goto's. Consider the following:
echo Powers of 2...
if "!hpjobname" = "KARNAK" then
   setvar loop_count 1
   setvar temp 1
   while !loop_count < 10 do
      setvar temp !temp * 2
      echo 2^!loop_count = !temp
      setvar loop_count !loop_count + 1
   endwhile
else
   echo Sorry, you're not authorized to know.
endif
The above command file will display a "powers of two" table from one through nine for the user who is logged on as KARNAK. You can indent code inside If and While blocks for readability, because the CI doesn't care about leading spaces or blank lines. However, you must use the Comment command to insert comments.
There are times when you wish to exit out of a loop in an ungraceful manner. To do this, use the Return command. We often use this when we display help in a command file, and we don't want to clutter further code with a big if/endif block.
parm hidden_away="?"
if "!hidden_away" = "?" then
   echo This command file doesn't do much,
   echo but it does it well.
   return
endif
echo Local variable is !hidden_away.
Another way to terminate loops and nested command files is to use the escape command. This command will blow you right back to the CI. (Using the Return command only exits the current command file.) You can optionally set the CIERROR jcw by adding a value to the end of the Escape command.
escape 007
Simulating Arrays
It's true - arrays are not directly supported in the CI. However, because of some undocumented techniques (read: tricks), you can simulate arrays.
One question that may immediately pop into your head is "Why would I want to use arrays?" Arrays are useful for table driven events, such as returning days per month, sessions on ldevs, etc.
We won't keep you in suspense. Here's the core method:
setvar !variable_name!variable_index value
By using the expression evaluation feature of the CI, you can have a changeable variable name in the Setvar command. CAVEAT USER: This only works within command files! If you try to do this interactively, the expression evaluation on the Setvar command is performed for the part of the command line after the variable name. Within a command file, the entire line is evaluated before being passed to the CI for re-evaluation.
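Here is a tiny invented example of the trick inside a command file; after substitution, the last line executes as "setvar months_3 31":

setvar variable_name "months_"
setvar variable_index "3"
setvar !variable_name!variable_index 31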
In much the same way, you can use command files to change sequences of commands, i.e., to create self-modifying code. For example,
weirdcmd
setvar variable_command "setvar apple ""orange"""
!variable_command
If you run the command file, and then display the contents of the variable, you will see
weirdcmd          {execute command file, above}
showvar apple
APPLE = orange
To simulate arrays, you assign one variable per element. For example, you would assign months_12 for months(12). This variable can be either string or numeric, but keep in mind that string variables can be up to 256 characters long.
Here are a few command files that allow you to manipulate arrays.
arraydef
parm array_name nbr_elements=0 initial_value=0
setvar array_index 0
while !array_index <= !nbr_elements do
   setvar !array_name!array_index !initial_value
   setvar array_index array_index + 1
endwhile
The command file Arraydef allocates variables for each element of the array that you need. The call sequence would be something like
arraydef months_ 12
Just as you would put an index in parentheses, we like to put underbars at the end of array names so that the element references are more readable.
There is a limit on the storage (it was about 20K bytes in MPE/iX 5.0). The space used can be calculated by adding the number of characters in each variable name, plus 4 bytes per integer item, or the length of each string. For example, the integer variable "XYZZY" takes up 5 bytes for the name, and four more bytes for its integer value. When you run out of space, you get the following error from the CI:
Symbol table full: addition failed. To continue, delete some variables, or start a new session. (CIERR 8122)
The following command file allows you to return the space used by your pseudo-array when you are finished using it.
arraydel
parm array_name nbr_elements=0
setvar array_index 0
while !array_index <= !nbr_elements do
   deletevar !array_name!array_index
   setvar array_index array_index + 1
endwhile
To demonstrate how arrays can be set (and values returned), the following two command files, Arrayset and Arrayget, use the expression evaluation feature of the CI to convert the element number to the actual variable name. Setvar is called to set either the value, or the name of the variable passed to the command file.
arrayset
parm array_name element_nbr=0
anyparm set_value
setvar !array_name!element_nbr !set_value

arrayget
parm destination_var array_name element_nbr=0
setvar !destination_var !array_name!element_nbr
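Putting the pair to work (a hypothetical session; the values are arbitrary):

arrayset months_ 2 28
arrayget feb_days months_ 2
showvar feb_days
FEB_DAYS = 28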
Here's a quick command file to show how you can use arrays in a month table. It uses the Input command to prompt the user for the number of days for each month.
setvar months_nbr 0
while !months_nbr < 12 do
   setvar months_nbr months_nbr + 1
   input months_!months_nbr; &
      prompt="Enter the number of days in month !months_nbr: "
endwhile
deletevar months_nbr
For more on CI programming, read http://www.robelle.com/ftp/papers/progci.txt