Saturday, November 22, 2008

25 years of DB2

What kind of a mainframe blog doesn't mention DB2's 25th birthday? I hang my head in shame. DB2 celebrated its birthday on 7 July this year, and, like a forgetful relative, I didn't send a card until four months later. But, finally, here is my take on 25 years of DB2.

DB2 - which stands for DataBase 2 - first saw the light of day as an MVS product in 1983. It is a relational database and uses SQL (Structured Query Language) - pronounced letter by letter by mainframers and "sequel" by non-mainframers, apparently. Where did it come from? In the 1970s, Ted Codd created a series of rules for relational databases, on which DB2 was very loosely based - originally, DB2 broke many of Codd's rules. There was also a similar VM product called SQL/DS.

As well as on MVS (or z/OS, as it's called in its current incarnation), DB2 is available on other platforms. During the 1990s, versions were produced for OS/2 (if you remember that one), Linux, Unix, and Windows. There have been a variety of naming conventions over the years - things like DB2/VSE and DB2/6000, which were replaced by DB2 for VSE and then DB2 UDB (Universal DataBase). The current naming convention can make it harder to work out whether the mainframe version or a server version of DB2 is being discussed in any article on the topic. Interestingly, although the mainframe and server versions of DB2 are currently very similar in functionality, they are written in different languages: the mainframe version in PL/S and the server version in C++.

In the early days, the big competitors were Oracle and Informix. Well, IBM bought Informix in 2001, and Oracle now runs happily in a zLinux partition. There is also a 31-bit version of Oracle available for z/OS.

Of course, DB2 isn't IBM's only database. There's also the hierarchical IMS DB. People interested in IMS DB will be interested in the Virtual IMS Connections user group at www.virtualims.com.

As they say, other mainframe databases are available, including: CA-Datacom, CA-IDMS, Cincom's SUPRA, Computer Corporation of America's Model 204, Select Business Solutions' NOMAD, and Software AG's Adabas.

DB2 is currently at Version 9, which you might remember was code-named Viper before its launch.



Happy belated birthday DB2.

Arcati Mainframe Yearbook Is Back for 2009

I used this attention-grabbing headline so I can tell you that the Arcati Mainframe Yearbook is back for another year - or it will be very soon. The celebrated Arcati Mainframe Yearbook is one of the very few vendor-independent sources of information for mainframe users.

The Arcati Mainframe Yearbook has been the de facto reference work for IT professionals working with z/OS (OS/390) systems since 2005. It includes an annual user survey, an up-to-date directory of vendors and consultants, a media guide, a strategy section with papers on mainframe trends and directions, a glossary of terminology, and a technical specification section. Each year, the Yearbook is downloaded by 10,000 to 15,000 mainframe professionals. Last year's issue is still available at www.arcati.com/newyearbook08.

At the moment, the compilers of the Yearbook are hoping that mainframers will be willing to complete the annual user survey, which is at www.arcati.com/usersurvey09. The more users who fill it in, the more accurate and therefore useful the survey report will be. All respondents before the 5th December will receive a PDF copy of the survey results on publication. The identity and company information of all respondents is treated in confidence and will not be divulged to third parties.

Anyone reading this who works for a vendor, consultant, or service provider, can ensure their company gets a free entry in the vendor directory section by completing the form at www.arcati.com/vendorentry. This form can also be used to amend last year's entry.

As in previous years, there is the opportunity for organizations to sponsor the Yearbook or take out a half-page advertisement. Half-page adverts (5.5in x 8in max, landscape) cost $500 (UK£250). Sponsors get: a full-page advert (11in x 8in) in the Yearbook; inclusion of a corporate paper in the Mainframe Strategy section; a logo/link on the Yearbook download page on the Arcati Web site; and a brief text ad in the Yearbook publicity e-mails sent to users. Sponsorship costs $1700 (UK£850).

So, get cracking and complete the user survey so it's the most comprehensive survey ever.

The Arcati Mainframe Yearbook 2009 will be freely available for download early in January next year.

Sunday, September 14, 2008

A Mainframe SOA Strategy

To make the mainframe part of your SOA, you must service-enable your legacy systems. This does not necessarily mean your mainframe needs to support web services: you can use integration techniques to incorporate the mainframe into a distributed SOA strategy.

There are many ways to integrate the mainframe with distributed systems so it can participate in the SOA. The mainframe is an odd creature to integrate with because legacy applications were built on proprietary systems (like CICS) and protocols (like SNA). Open systems and, more recently, web services make newer systems far more interoperable.

To understand mainframe integration, you first have to understand the underlying data structures; then we can discuss integration techniques and patterns. Those data structures are described in the existing programs written for the mainframe, which are mostly COBOL programs.

COBOL Metadata Primer

COBOL is a structured language with a data division that defines data structures, including external files (the file description) and internal program storage (working storage). The file description defines the file structure to the COBOL program, so we can use this COBOL definition to understand the existing data for integration purposes. We essentially use the COBOL file definition as a map to mainframe data that is external to the distributed systems.

When working with mainframe data you will hear the term copybook used to describe file structures. A copybook is a reusable file description that the COBOL compiler copies into the code at compile time - there is no late binding of file structures. Since the same files are used over and over, the copybook allows the COBOL definition to be written once and copied into every program that accesses the file. If the file structure changes, all the programs using the copybook generally must be recompiled - although some changes do not affect the record layout and so might not affect other programs.

The copybook therefore becomes a source of metadata describing file structures on the mainframe. Since flat file structures do not have a data dictionary or system catalog description (like a database would), at times the copybook is the single source of metadata about the file structure. Mainframe databases may also have a data dictionary as a source of more metadata, but the copybook is still required for COBOL and can be used for integration purposes.

The copybook is hierarchical in structure. Data definitions can be elementary (a single level) or grouped. Grouped structures are numbered, with the super-group having a lower level number (such as the 01 and 05 levels below) and the subgroups higher numbers.

01  ORDER-MASTER.
    05  ORDER-MASTER-ID.
        10  ORDER-TYPE       PIC X(2).
        10  PART-NUMBER      PIC 9(4).
    05  CUSTOMER-NAME        PIC X(20).


The picture clause defines the data type as alphanumeric (PIC X), numeric (PIC 9), or alphabetic (PIC A). The number in parentheses, as shown above, indicates the number of characters in the data element. You might also see FILLER data elements at the end of the copybook. This is padding in the record that allows additional fields to be added to the file without changing the programs that do not need the new fields.

Mainframe integration software products can take COBOL copybooks as input and create a mapping for transforming mainframe data into a structure that distributed systems can use, such as an XML schema in ASCII - mainframe data is encoded in the EBCDIC character set, so the character encoding must be translated too.
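To make the idea concrete, here is a minimal sketch (in Python, not one of the integration products discussed) of using a copybook layout as a map to decode one fixed-width EBCDIC record. The field names and lengths are taken from the ORDER-MASTER copybook above; the sample record and the use of code page 037 are assumptions for illustration.

```python
# Decode a fixed-width mainframe record into named fields using a
# layout derived from the ORDER-MASTER copybook above.
import codecs

# (name, length) pairs in copybook order: PIC X(2), PIC 9(4), PIC X(20)
LAYOUT = [("ORDER-TYPE", 2), ("PART-NUMBER", 4), ("CUSTOMER-NAME", 20)]

def decode_record(raw: bytes) -> dict:
    """Translate an EBCDIC (code page 037) record into a dict of fields."""
    text = codecs.decode(raw, "cp037")   # EBCDIC -> Unicode
    fields, pos = {}, 0
    for name, length in LAYOUT:
        fields[name] = text[pos:pos + length].rstrip()
        pos += length
    fields["PART-NUMBER"] = int(fields["PART-NUMBER"])  # PIC 9 is numeric
    return fields

# Build a sample EBCDIC record as a distributed system might receive it
sample = codecs.encode("AB" + "0042" + "ACME CORP".ljust(20), "cp037")
print(decode_record(sample))
```

The point is simply that the copybook alone gives you enough metadata (names, offsets, lengths, and types) to pull the record apart once the EBCDIC-to-ASCII translation is done.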

Two additional useful mainframe terms are batch and online processing. Many legacy mainframe systems do not directly update master files as transactions occur. They write the transactions to a flat file (sequential records) and use batch programs to read flat files and update the master files. This is typically done at night when the online systems are down. The batch processing technique predates On-line Transaction Processing (OLTP) systems, such as CICS, and in many cases is an artifact of these legacy beginnings.

Online processing uses an OLTP system such as CICS to process multiple users' requests as transactions, updating files or databases as the transactions occur. The OLTP system manages concurrent access to resources, with support for resource locking and transactions.

Mainframe Integration Patterns

At a high level, you can integrate with the mainframe within processes or through the underlying data. Process integration involves peer-to-peer communication between the distributed system and the mainframe. Data integration operates at the data level, without direct interaction with the mainframe process - for example, using FTP or a database connection via ODBC/JDBC.

Process Integration Techniques

Screen Scraping

A "screen scraper" uses a terminal emulator on the distributed system to appear to the mainframe as a terminal to existing applications. For example, a mainframe CICS program that displays and accepts user input via a 3270 terminal can be emulated programmatically to process the screen displays and automatically enter data. The advantage of this approach is mainframe programs do not have to change. The disadvantage is that the integration is brittle, changes to the mainframe screens break the integration, and error recovery is cumbersome.

Peer-to-peer

Mainframes also support a peer-to-peer communications scheme called APPC (Advanced Program-to-Program Communication, sometimes called LU 6.2). This technique is mainly used by IBM systems to communicate with each other over a distributed protocol, but it can be used through an emulator, similar to screen scraping. If the existing mainframe applications do not already support APPC, then messaging is preferred, since the distributed system would then not have to emulate IBM APPC.

Messaging

A program that supports a mainframe online transaction - a CICS transaction, for example - can be modified to interact with a distributed system in several ways. The most common technique is asynchronous messaging - for example, MQ request-reply messages - to communicate with the mainframe. The MQ approach is attractive since MQ is supported on many platforms, it is well understood by mainframe programmers, and the distributed system can connect to MQ through a JMS client library - good support for Java.
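The request-reply shape of that conversation can be sketched without any MQ software at all. The following Python fragment uses the stdlib queue module as a stand-in for the request and reply-to queues; the correlation-id convention, the queue roles, and the handler are illustrative assumptions - a real integration would use MQ client libraries (or JMS on the distributed side) against queues defined on the mainframe.

```python
# Request-reply messaging pattern, with stdlib queues standing in for MQ.
import queue
import threading

request_q = queue.Queue()   # stand-in for the mainframe's request queue
reply_q = queue.Queue()     # stand-in for the reply-to queue

def mainframe_listener():
    """Plays the role of a CICS transaction triggered by an MQ message."""
    while True:
        corr_id, body = request_q.get()
        if body is None:            # shutdown signal for this sketch
            break
        # echo the correlation id back so the caller can match the reply
        reply_q.put((corr_id, f"processed:{body}"))

worker = threading.Thread(target=mainframe_listener, daemon=True)
worker.start()

# Distributed-system side: send a request, block for the matching reply
request_q.put(("msg-001", "ORDER 42"))
corr_id, reply = reply_q.get(timeout=5)
print(corr_id, reply)               # msg-001 processed:ORDER 42
request_q.put((None, None))         # stop the listener
```

The correlation id is the key detail: because the messaging is asynchronous, the caller needs some way to match a reply arriving on the reply queue with the request it sent.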

CICS Adapter

Many vendors provide CICS adapters that make the distributed system look like another mainframe CICS system. CICS supports region-to-region communication using a technique IBM calls Multi-Region Operation (MRO). With MRO, external data is presented to and from a CICS transaction over a queue called the Transient Data Queue (TDQ). Both the TDQ and MQ approaches require modifications to existing terminal-based CICS transactions.

Adapters typically support data format transformation, such as copybook to XML. The MQ approach is usually more attractive if the MQ software is already available on the mainframe, but at times an adapter may be cheaper and faster to implement if there is no other requirement for mainframe messaging in general.

API Adapter

Many mainframe applications have an Application Program Interface (API). If the program has an API, an adapter can be written - or bought off the shelf for popular applications such as SAP - that creates message events through the API. The advantages of this approach are real-time integration with the application, automated mapping of the application's native data formats to a message, and ease of integration. The disadvantage is the often high cost of the adapter plus the messaging software.

Web Services

There are a host of companies that can service-wrap applications to support the WS-* standards. In general, these products use process integration as outlined above to integrate with the legacy systems, and then provide a standards-based protocol on top of that integration - for example, a web services interface to MQ or a web services interface on top of a screen-scraping tool. Tools are also emerging to create services natively on the mainframe, such as tools to parse and create XML schema documents directly from COBOL or to generate WSDLs from mainframe constructs such as the CICS COMMAREA.

Data Integration Techniques

File Transfer

The most common data integration technique is file transfer using FTP. FTP is a TCP/IP application and is widely supported on many platforms. The FTP technique is well suited to batch processing. For online processing, a delta (changes only) file can be created on a timed interval (say, every five minutes) and sent via FTP to the distributed systems. There are also Managed FTP products with features such as scheduling, security, and monitoring of transfers. The advantages of FTP are low cost, wide availability, and simplicity of implementation. The disadvantages are poor support for real-time systems and the management of all the scripted transfers - the latter can be addressed with Managed FTP.
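A rough sketch of the delta-file technique follows: compare the current master extract against the previous one and keep only the new or changed records, so the periodic transfer carries changes rather than the whole file. The record keys, values, host name, and file names are all invented for illustration; only ftplib itself is real (it is in the Python standard library).

```python
# Compute a changes-only (delta) extract between two snapshots.
from ftplib import FTP  # stdlib; would be used for the actual transfer

def compute_delta(previous: dict, current: dict) -> dict:
    """Return records that are new or changed since the last transfer."""
    return {key: rec for key, rec in current.items()
            if previous.get(key) != rec}

previous = {"1001": "ACME|OPEN", "1002": "GLOBEX|OPEN"}
current  = {"1001": "ACME|CLOSED", "1002": "GLOBEX|OPEN",
            "1003": "INITECH|OPEN"}

delta = compute_delta(previous, current)
print(sorted(delta))    # only 1001 (changed) and 1003 (new)

# The transfer itself (host and credentials are placeholders):
# with FTP("mainframe.example.com") as ftp:
#     ftp.login("user", "password")
#     with open("delta.dat", "rb") as fh:
#         ftp.storbinary("STOR ORDERS.DELTA", fh)
```

In practice the delta would more likely be cut on the mainframe side by the batch job itself, but the principle - ship only what changed since the last interval - is the same.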

Data Base Connectivity

With the now-ubiquitous support for ODBC/JDBC, distributed systems can connect remotely to mainframe databases, effectively sharing the mainframe database with the distributed world. The advantage of this approach is that distributed systems can cheaply connect to the database and retrieve or insert data in real time. The disadvantage comes into play if process integration is required - access to a mainframe algorithm and not just the underlying data. The technique may also require two-phase commit if both the mainframe database and a distributed system database are being updated within a single transaction.
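The access pattern is just standard SQL over a driver interface. As a hedged illustration, the sketch below uses Python's built-in sqlite3 module as a stand-in; against DB2 for z/OS you would instead open a JDBC or ODBC connection to the remote subsystem, but the code shape is the same. The table and column names are invented for the example.

```python
# Database-connectivity pattern: plain SQL through a driver interface.
# sqlite3 stands in here for a remote ODBC/JDBC connection to DB2.
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the remote connection
conn.execute("CREATE TABLE orders (order_id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1001, "OPEN"), (1002, "SHIPPED")])
conn.commit()

# The distributed system reads mainframe-held data with ordinary SQL
rows = conn.execute(
    "SELECT order_id, status FROM orders WHERE status = ?", ("OPEN",)
).fetchall()
print(rows)    # [(1001, 'OPEN')]
```

This is exactly why the approach is cheap: no adapter, no message broker, just a driver and SQL - which is also why it gives you the data but not the mainframe's business logic.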

Data Base Adapters

There are also database adapters available that send messages when tables change, based on a database trigger. This approach message-enables the database and can make transactions entered into the database appear to distributed systems as real-time messages. You can also send messages to the adapter to update the database, though this is more typically done with a simple remote database connection. As with a database connection, this approach does not help if you require mainframe process integration.

File Adapters

Many integration vendors have mainframe file adapters that run natively on the mainframe to read and write flat files based on messages. The file adapters can be configured to send messages when a file is created, when records are added to a file, or by periodically polling a file for new records. File adapters can also create and delete files based on a message from the distributed system. The advantages of this approach are strong native support for the mainframe file system and the ability to message-enable batch systems. The disadvantage is slow transfer speed, since most file adapters send messages one record at a time, versus the blocked transmissions of FTP. File adapters should therefore be used to send delta (changes only) files.
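The polling behavior described above can be sketched in a few lines: remember how far into the flat file the adapter has read, and turn each newly appended record into one message. Real adapters run natively on the mainframe and hand the messages to a messaging product; the file name, record format, and message format here are all assumptions for illustration.

```python
# Simplified file-adapter polling: one message per newly appended record.
import os
import tempfile

def poll_new_records(path: str, offset: int):
    """Return (messages, new_offset) for records added since last poll."""
    with open(path, "r") as fh:
        fh.seek(offset)
        lines = fh.read().splitlines()
        return [f"record:{line}" for line in lines if line], fh.tell()

# Simulate a batch job appending to a flat file between polls
path = os.path.join(tempfile.mkdtemp(), "orders.dat")
with open(path, "w") as fh:
    fh.write("1001,OPEN\n")

msgs, offset = poll_new_records(path, 0)
with open(path, "a") as fh:
    fh.write("1002,OPEN\n1003,CLOSED\n")
msgs2, offset = poll_new_records(path, offset)

print(msgs)     # first poll sees the first record
print(msgs2)    # second poll sees only the two appended records
```

The one-record-one-message loop is also why throughput is the weak point of this pattern, which is the reason for the advice above to feed adapters delta files rather than full extracts.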

Sunday, September 7, 2008

Mainframe in Comeback Mode

When the biggest and toughest question - "Is this the end of the mainframe?" - was asked, a firm "No" came back from the mainframe world.

The mainframe is breathing brand new life. What is this all about? It's about the mainframe in comeback mode.

An interview with Ms Florence Hudson, VP, Marketing & Strategy, System z, published in The Hindu Business Line, will tell you more about the new life of the mainframe. The full interview is available at

http://www.thehindubusinessline.com/ew/2007/01/01/stories/2007010100110300.htm

Lusting for the dumb terminal: lessons for the virtualization market

In the old days, when mainframes roamed the world, there were things called dumb terminals that enabled users to connect to server-based applications. These dumb terminals were - as their name suggests - dumb. They had almost no intelligence other than displaying output from an application. Fast forward to today. As we moved away from traditional mainframe computing, the personal computer became the interface of choice for most users. Client/server technologies provided the capability of having a graphical front end that could communicate with back-end databases and logic. Now we are faced with an interesting twist of fate. It has become apparent that, in many situations, computer users actually don't need much intelligence on the front end. They need the logic and data that sit on the server (the customer service application, the call center application, the classroom application used to teach skills to students).

Ironically, we can't go back to the good old days of the dumb terminal. Instead we have moved to the thin client, the locked-down PC, and the virtual display interface. These approaches are part of the hot new area - virtualization. Now don't get me wrong. I think that virtualization is quite important and will become an important way for customers to utilize existing resources much more pragmatically. It will provide better protection for data and resources that might be compromised if too many users had free access to too much.

But I think it is important to keep in mind that this is not a new issue. The computer industry has a way of thinking that the old ways are always wrong and backwards. Yet unintended consequences are a fact of life -- even in an industry that loves the future and is skeptical of the past. Now, with virtualization on the rise, we are reinventing what the industry had taken for granted in the mainframe days.

Elastra Brings Virtual Mainframe to Cloud Computing

Elastra is a company with a neat kind of Cloud-based middleware. The GoogleGazer expects we'll see more of them, and more like them. It was founded by serial entrepreneur Kirill Sheynkman, who has successfully sold companies to IBM and to BEA. Elastra is funded by Hummer Winblad Venture Partners, an experienced VC firm that invests almost exclusively in software and middleware, and lately has been investing heavily in Software as a Service (SaaS) and in Cloud Computing. John Hummer sits on Elastra's board.

Elastra aims to help you easily overcome the challenges of scalability in the Cloud by making it seem almost transparent to you. Their "White Paper" is a good read, and discusses the problems of scaling as well as Elastra's solutions. Two diagrams on Elastra's website summarize what they accomplish.

Elastra provides:

Industry-standard database and application infrastructure in the Cloud that is:

  • Easily architected, configured and deployed in a complete, clustered, run-time environment
  • Elastically scaled with automated system monitoring and management
  • Priced pay-for-use
  • Delivered on-demand

Right now, Elastra runs on Amazon's infrastructure: the Amazon Elastic Compute Cloud, which provides scalability within minutes on a pay-as-you-go basis, as well as the Amazon Simple Storage Service. It would not surprise the GoogleGazer to see Elastra support some of the other platforms that we mentioned in our previous post. Meantime, they have been garnering an impressive array of clients, and support PostgreSQL, the world's most advanced open source database, and MySQL (now owned by Sun). Besides Amazon, Elastra partners with EnterpriseDB, the world's leading provider of enterprise-class products and services based on PostgreSQL, Postgres Plus, and Postgres Plus Advanced Server.

Expect to hear more about them.

Cloud Computing - Is It Old Mainframe Bess in a New Dress?

"Cloud Computing is all the rage," says InfoWeek. "Some analysts and vendors," they say, "define cloud computing narrowly as an updated version of utility computing: basically virtual servers available over the Internet. Others go very broad, arguing anything you consume outside the firewall is 'in the cloud,' including conventional outsourcing." Those who don't have a Cloud Computing offering, but still want to be considered chic, go with InfoWeek's broader definition.

The GoogleGazer prefers to define Cloud Computing as highly scalable distributed services, available on a "pay-as-you-go" basis, what we like to call "Rent-a-cloud."

The idea of Cloud Computing is certainly not new. In his autobiography, Dr. Jack B. Dennis, Emeritus Professor of Computer Science and Engineering at MIT (and MIT Class of '53), and a pioneer in the development of computer science wrote in 2003:

In 1960 Professor John McCarthy, now at Stanford University and known for his contributions to artificial intelligence, led the "Long Range Computer Study Group" (LRCSG) which proposed objectives for MIT's future computer systems. I had the privilege of participating in the work of the LRCSG, which led to Project MAC and the Multics computer and operating system, under the organizational leadership of Prof. Robert Fano and the technical guidance of Prof. Fernando Corbató.

At this time, Prof. Fano had a vision of the Computer Utility: the concept of the computer system as a repository for the knowledge of a community (data and procedures in a form that could be readily shared), a repository that could be built upon to create ever more powerful procedures, services, and active knowledge from those already in place. Prof. Corbató's goal was to provide the kind of central computer installation and operating system that could make this vision a reality. With funding from DARPA, the Defense Advanced Research Projects Agency, the result was Multics.

For those under sixty, and probably not old enough to remember, Multics (Multiplexed Information and Computing Service) was an extremely influential early time-sharing operating system, started in 1964. It proved that [mainframe-based] computing could serve many people in remote locations at the same time, and it set creative minds thinking about a generally available computer utility, connected to your house through a cable. The GoogleGazer still has an original copy of Fred Gruenberger's influential book, Computers and Communications: Toward a Computer Utility, which he read when it first appeared in 1968 - back when the GoogleGazer was an undergraduate, bra-burning and anti-Vietnam demonstrations preoccupied the college campuses, and nearly all computing was based on mainframes and batch processing. Gruenberger posited, in detail, a "computing utility" which would operate much like an electrical utility, letting you draw as much or as little as you need, while paying only for what you use.

Back to InfoWeek.

Utility computing, InfoWeek goes on to say,

is a [type of Cloud Computing that provides a] way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.

Sure sounds like Gruenberger's computer utility to the GoogleGazer.

This form of rent-a-cloud, as we noted earlier, is offered commercially by Amazon.com, Google, Sun (zembly.com for creating and hosting social applications, and Network.com for pay-as-you-go computing), IBM, and others, who now offer storage and virtual servers that IT can access on demand. In InfoWeek's view, "Early enterprise adopters mainly use utility computing for supplemental, non-mission-critical needs, but one day, they may replace parts of the datacenter." However, the GoogleGazer knows that many smaller, fast-growing high-tech outfits run their entire business off the "Cloud" of one of these major vendors, and by all reports, reliability exceeds that of most IT shops.

Software as a Service (SaaS) is a type of cloud computing that delivers a single application through the browser to thousands of customers using a multitenant architecture. On the customer side, it means no upfront investment in servers or software licensing; on the provider side, with just one app to maintain, costs are low compared with conventional hosting. Salesforce.com, according to InfoWeek, is by far the best-known example among enterprise applications, but SaaS is also common for HR applications and is used in ERP applications from vendors such as Workday. More recently, as we have noted, SaaS applications such as Google Apps and Zoho Office are causing billionaire's agita for Steve Ballmer and company, as they encroach on ground long firmly held by Microsoft Office (a risk Microsoft was forced to disclose in its SEC filings). APIs are also increasingly available in the Cloud, enabling developers to exploit the functionality of others over the Internet rather than developing, hosting, and delivering it themselves. These range from providers offering discrete business services - such as StrikeIron and Xignite - to the full range of APIs offered by Google Maps and Yahoo BOSS. The U.S. Postal Service, Bloomberg, and even online banking and conventional credit card processing services are headed in this direction.

So while the technology may be different - updated, and certainly faster, cheaper, more pervasive, and much more scalable - at the end of the day, Cloud Computing is a centralized, mainframe-like core with distributed nodes, in a prettier, sexier new miniskirt. But hey, we like the pretty dress, and the GoogleGazer believes that Cloud Computing is not only not a fad, but presages a fundamental paradigm shift that will have as powerful an effect on society as the Internet itself, and will turn out to be a truly disruptive technology.

"Strong words," you say? Well stay tuned for further proof as Cloud Computing matures over the next five years. Remember, you heard it first from the GoogleGazer.

Monday, September 1, 2008

Comparison of DB2 and VSAM ....Cont 3

Data Archival
  DB2:  Selective archival and selective retrieval; archival down to the row level; specific products available
  VSAM: No selective archival or retrieval; archival at the dataset level; dataset migration

Personnel
  DB2:  IBM and third-party training; skills easy to find; any RDBMS skill can be reused; same across platforms
  VSAM: Not much training; scarce skills; VSAM-specific and mainframe-specific skills

Data Warehouse
  DB2:  Real-time updates; direct propagation; product suites available
  VSAM: Batch updates only; extract and transform required; not suitable for a warehouse

Data Types
  DB2:  Images, video, audio, etc; contents can be in a file
  VSAM: Text only; no such option

Comparison of DB2 and VSAM.....Cont 2

Performance Tuning
  DB2:  Can be tuned at any time; writes SMF records; tuning possible at the SQL level; tools available to aid tuning; subsystem-level tuning possible; abundant tuning skills
  VSAM: Depends on initial design; no SMF records; tuning only at the application level; no tuning aids; not a subsystem; tuning skills are rare

CPU and I/O Parallelism
  DB2:  Scanning is faster
  VSAM: No parallelism

Parallel Sysplex
  DB2:  Can participate; the optimizer handles it
  VSAM: Can participate; no optimization

Reorganization
  DB2:  Direct reorganization; online reorg possible; parallel reorg
  VSAM: Delete and recreate; downtime needed; no parallelism

Recovery
  DB2:  Managed by DB2; always recoverable, from log or backup; automatic recovery; parallel recovery
  VSAM: Managed by CICS/IMS; no recovery in batch; from backup only; manual restore; no parallelism

Backup
  DB2:  Online backup possible; incremental backup; parallel backup
  VSAM: Downtime needed; no incremental backup; no parallelism

Availability
  DB2:  Parallel and online reorg and backup; less downtime
  VSAM: No online maintenance; no parallelism; more downtime

Disaster Recovery
  DB2:  Supported by DB2
  VSAM: Part of DASD recovery

Comparison of DB2 and VSAM

Hardware Independence
  DB2:  PC to mainframe
  VSAM: Mainframe only

OS Independence
  DB2:  NT, Unix, and OS/390
  VSAM: OS/390 only

Vendor Independence
  DB2:  RDBMS with ANSI standard SQL
  VSAM: IBM only

Scalability
  DB2:  PC to mainframes; up to 4000 terabytes for LOBs
  VSAM: Mainframe only; maximum size is 128 terabytes

Ease of Development
  DB2:  Standard SQL; stored procedures and triggers
  VSAM: Not so simple; no such option

Ease of Maintenance
  DB2:  Standard SQL
  VSAM: Difficult

Security
  DB2:  High degree of security
  VSAM: Only at the dataset level

Referential Integrity
  DB2:  DB2 enforces it; manages even externally stored data
  VSAM: Developer's responsibility; not applicable

Query Interface
  DB2:  Easy to view/modify data
  VSAM: Not available

Products/Tool Suite
  DB2:  IBM and third parties
  VSAM: Not available

Data Capacity
  DB2:  254 times the largest VSAM dataset
  VSAM: Limited to 2 terabytes

Data Sharing
  DB2:  Across CICS, IMS, batch, and TSO
  VSAM: Very limited support

Web and Java Support
  DB2:  JDBC, SQLJ, Net.Data
  VSAM: Needs custom interfaces

Distributed Environment
  DB2:  Consistent across platforms; stored procedures reduce network traffic
  VSAM: Mainframe only; not applicable

XML Support
  DB2:  XML extenders
  VSAM: Not supported

Performance
  DB2:  Better for large data volumes; the optimizer handles access paths; partitioning improves performance
  VSAM: Better when data volume is small; developer responsible for performance; no partitioning

MVS

MVS (Multiple Virtual Storage) is an operating system from IBM that continues to run on many of IBM's mainframes and large servers. MVS has been said to be the operating system that keeps the world going, and the same could be said of its successor systems, OS/390 and z/OS. The payroll, accounts receivable, transaction processing, database management, and other programs critical to the world's largest businesses usually run on an MVS or successor system. Although MVS has often been seen as a monolithic, centrally-controlled information system, IBM has in recent years repositioned it (and its successor systems) as a "large server" in a network-oriented distributed environment, using a three-tier application model.

The follow-on version of MVS, OS/390, no longer included "MVS" in its name. Since MVS represents a certain epoch and culture in the history of computing, and since many older MVS systems still operate, the term "MVS" will probably continue to be used for some time. Since OS/390 also comes with Unix user and programming interfaces built in, it can be used as both an MVS system and a Unix system at the same time. A more recent evolution of MVS is z/OS, the operating system for IBM's zSeries mainframes. MVS systems run older applications developed using COBOL and, for transaction programs, CICS. Older application programs written in PL/I and FORTRAN are still running. Older applications use the Virtual Storage Access Method (VSAM) for file management and the Virtual Telecommunications Access Method (VTAM) for telecommunication with users. The most common program environment today uses the C and C++ languages. DB2 is IBM's primary relational database management system (RDBMS). Java applications can be developed and run under OS/390's Unix environment.

MVS is a generic name for specific products that included MVS/SP (MVS/System Product), MVS/XA (MVS/Extended Architecture), and MVS/ESA (MVS/Enterprise Systems Architecture). Historically, MVS evolved from OS/360, the operating system for the System/360, which was released in 1964. OS/360, through its MFT and MVT variants, evolved into OS/VS1 and OS/VS2; OS/VS2 became MVS, which in turn became MVS/SP, MVS/XA, MVS/ESA, and finally OS/390 and then z/OS. Throughout this evolution, application programs written for one operating system have always been able to run on any of the later operating systems. (This is called forward compatibility.)

An MVS system is a set of basic products and a set of optional products. This allows a customer to choose the set of functions they need and exclude the rest; in practice, most customers probably use almost all of them. The main user interface in MVS systems is TSO (Time Sharing Option). The Interactive System Productivity Facility (ISPF) is a set of menus for compiling and managing programs and for configuring the system. The main work management system is either Job Entry Subsystem 2 or 3 (JES2 or JES3). Storage (DASD) management is performed by DFSMS (Data Facility Storage Management Subsystem). MVS is considerably more complex, and requires much more education and experience to operate, than smaller server and personal computer operating systems.

The Virtual Storage in MVS refers to the operating system's use of virtual memory. Virtual storage lets a program address the maximum amount of memory the architecture allows, even though real memory is actually shared among many programs; the operating system translates each program's virtual addresses into the real storage addresses where the data is actually located. The Multiple in MVS indicates that a separate virtual address space is maintained for each job or user.
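The translation just described can be sketched in a few lines. This is an illustrative Python toy: the 4 KB page size is real, but the flat page table and its contents are simplified inventions, not actual MVS control blocks.

```python
# Sketch of virtual-to-real address translation, the mechanism behind
# "Virtual Storage". The 4 KB page size is real; the flat page table
# and its contents are simplified inventions, not MVS control blocks.

PAGE_SIZE = 4096

# virtual page number -> real storage frame number (one table per address space)
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address):
    """Resolve a virtual address to the real address holding the data."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # In a real system this is a page fault: the OS would bring the
        # page in from auxiliary storage and retry the reference.
        raise LookupError("page fault on page %d" % page)
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```

Because every address space gets its own table, the same virtual address in two different jobs can resolve to two different real frames, which is exactly what the "Multiple" in MVS provides.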

Other IBM operating systems for its larger computers include or have included the Transaction Processing Facility (TPF), used in some major airline reservation systems, and VM, an operating system designed to serve many interactive users at the same time.

VSAM

VSAM is a high-performance access method used in the MVS, OS/390, and VSE/ESA operating systems. IBM first released it in 1973, and it is part of the base operating system.

VSAM provides a number of data set types or data organization schemes. They are:

  • Key-sequenced data set (KSDS)
  • Entry-sequenced data set (ESDS)
  • Relative record data set (RRDS)
  • Variable-length relative record data set (VRRDS)
  • Linear data set (LDS)
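To make the differences between these organizations concrete, here is a toy model of the three classic ones: a KSDS locates records by key through an index, an ESDS by relative byte address (RBA) in arrival order, and an RRDS by relative record number. This is a hypothetical Python sketch for illustration only; real VSAM keeps records in control intervals on DASD, not in in-memory lists.

```python
# Toy models of three VSAM organizations (illustration only; real VSAM
# keeps records in control intervals on DASD, not Python lists).
import bisect

# KSDS: records found by key through a sorted index
ksds_keys, ksds_recs = [], []

def ksds_insert(key, rec):
    i = bisect.bisect_left(ksds_keys, key)
    ksds_keys.insert(i, key)
    ksds_recs.insert(i, rec)

def ksds_read(key):
    i = bisect.bisect_left(ksds_keys, key)
    if i < len(ksds_keys) and ksds_keys[i] == key:
        return ksds_recs[i]
    return None  # record not found

# ESDS: records kept in arrival order, addressed by relative byte address
esds = []

def esds_write(rec):
    rba = sum(len(r) for r in esds)  # byte offset where the record lands
    esds.append(rec)
    return rba

# RRDS: fixed slots addressed by relative record number
rrds = [None] * 10

ksds_insert("SMITH", "payroll record")
ksds_insert("JONES", "payroll record 2")
print(ksds_read("JONES"))  # payroll record 2
print(esds_write("AAAA"))  # 0: the first record starts at RBA 0
```

A VRRDS behaves like the RRDS sketch but with variable-length records, and an LDS is simply a byte stream with no record structure at all.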

Installations have been putting more and more of their data in VSAM data sets, to the point where many have reached the 4-gigabyte architectural limit on VSAM data set size. Beginning with DFSMS V1.3, you can create and use VSAM KSDSs much larger than the 4-gigabyte limit imposed on any VSAM data set defined before that release; DFSMS V1.5 extended this to the non-KSDS types (ESDS, RRDS, VRRDS, and LDS).

VSAM record-level sharing (RLS) was introduced to bring the value of the Parallel Sysplex to existing applications. RLS itself does not provide transactional recovery; CICS provides a file-access interface on top of VSAM, and it is CICS file control that adds transactional recovery for VSAM files. This isolation and rollback capability is what enables VSAM data to be shared among CICS applications.

DB2

DB2 is an abbreviation of IBM Database 2. It was launched in June 1983 as a subsystem on MVS that allowed MVS users to build, access, and maintain relational databases using the well-known Structured Query Language (SQL).

Since then, DB2 has come a long way, exploiting the latest hardware and software technologies and accommodating the majority of user requirements. The latest versions are available on almost all platforms, including Windows, HP-UX, Sun Solaris, AIX and other UNIX variants, Linux, NUMA-Q, AS/400, and OS/390.

As the name suggests, DB2 "Universal Database" provides universal data types, universal integration, universal access from clients of all types, universal applicability (for all types of applications), universal scalability (across all types of platforms), universal reliability (for non-stop 24/7 processing) and universal manageability.

The ability to manage many concurrent users, very large databases, and high transaction rates while delivering consistently rapid response is fundamental, and DB2 achieves it across a wide range of platforms by exploiting platform-specific features. Beyond this, DB2 meets the requirements for high availability, low planned maintenance, wide connectivity, open standards, and effective manageability.
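As a small taste of the relational model and SQL that DB2 brought to MVS, here is an example using Python's built-in sqlite3 module so it runs anywhere. The table and column names are invented for illustration, and the SQL dialect and programming interfaces of DB2 for z/OS differ in many details.

```python
# A taste of SQL and the relational model. Uses Python's built-in
# sqlite3 so the example runs anywhere; the tables are invented and
# DB2 for z/OS's SQL dialect and programming interfaces differ.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT, "
             "deptno INTEGER REFERENCES dept)")
conn.execute("INSERT INTO dept VALUES (10, 'PAYROLL')")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, 'SMITH', 10), (2, 'JONES', 10)])

# The declarative join is what set relational databases apart from the
# navigational access of earlier hierarchical systems such as IMS DB.
rows = conn.execute(
    "SELECT e.ename, d.dname FROM emp e "
    "JOIN dept d ON e.deptno = d.deptno ORDER BY e.empno").fetchall()
print(rows)  # [('SMITH', 'PAYROLL'), ('JONES', 'PAYROLL')]
```

The point of SQL is that the query names the result it wants, not the access path; the database decides how to navigate the data.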

Job Entry Subsystems:

Job Scheduler:

JES, the job entry subsystem, is the MVS component that keeps track of jobs entering the system: it presents jobs to MVS for processing and sends each job's spooled output to the correct destination. A JOB is the execution of one or more related programs in sequence; each program executed by a job is called a JOB STEP.

Types of JES:

There are two main job entry subsystems, as given below:

HASP:

This is an acronym for Houston Automatic Spooling Program, the predecessor of what is now called JES2.

ASP:

This is an acronym for Attached Support Processor (later rebranded the Asymmetric Multiprocessing System), the predecessor of JES3. JES3 is better suited to shops with more than one processor.

Note that each MVS system can use either JES2 or JES3, but not both.

Job management of MVS:

Job management in MVS is shared between two main components: JES and the base control program of MVS. JES manages each job before and after the program runs; the base control program manages the job during processing.

General Phases in a Job:

The phases through which a job generally flows, from the input stage to the final purge, are:

  • Input
  • Conversion
  • Processing
  • Output
  • Print
  • Purge

Input:

JES2 accepts jobs as an input stream and can accept multiple jobs at the same time. Before JES2 can accept a job, the user must first submit it, either from a program or with commands to JES2 (JES2 commands are covered in detail in a later section). JES2 reads the submitted jobs from the input stream, assigns a job identifier to each JOB JCL statement, and places all the jobs, JCL statements, and commands in spool data sets, from which it selects jobs for further processing.

Conversion:

In this phase jobs are prepared for execution. JES2 uses a converter program that merges the JCL submitted with the job with any JCL brought in from the procedure library, then translates the merged JCL into internal text. If JES2 detects no errors in the JCL, the job is queued on the spool data set for execution; if errors are detected, JES2 issues the appropriate messages and queues the job for output processing only, not for execution.

Processing:

In this phase JES2 takes the jobs queued in the previous phase and passes them to initiators, which are defined with JES2 initialization statements (covered in detail in the JES2 commands section). Initiators select jobs according to the job classes they are set up to serve and the priority of the queued jobs.

Output:

All output produced by jobs, called SYSOUT for short, is controlled and monitored by JES2. In this phase JES2 handles output activities such as queuing data sets for printing, handling output devices, and issuing system messages, and it groups output data sets with the same characteristics together for printing.

Print:

In this phase JES2 processes the output data sets produced in the preceding output phase, selecting output based on its priority and the output class specified in the JCL. When JES2 has processed all of a job's output, it places the job on the purge queue for the final phase.

Purge:

Once all processing for a job is complete, JES2 releases the spool space assigned to the job, making it available to other jobs, and removes the job from the system.
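The six phases described above can be sketched as a simple state machine. This is an illustrative Python toy, not how JES2 is implemented; note how a JCL error routes a job from conversion straight to output, bypassing execution.

```python
# Toy model of the JES2 job phases described above. The phase names
# follow the text; the data structures are invented for illustration.

PHASES = ["input", "conversion", "processing", "output", "print", "purge"]

def advance(job):
    """Move a job to its next phase. A job whose JCL failed conversion
    is queued for output (error messages) but never for processing."""
    if job["phase"] == "conversion" and job.get("jcl_error"):
        job["phase"] = "output"
    else:
        i = PHASES.index(job["phase"])
        if i + 1 < len(PHASES):
            job["phase"] = PHASES[i + 1]
    return job["phase"]

job = {"id": "JOB00001", "phase": "input", "jcl_error": False}
while job["phase"] != "purge":
    advance(job)
print(job["phase"])  # purge
```

Driving a job with a JCL error through the same machine shows it reaching the output phase without ever passing through processing.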

Saturday, July 5, 2008

TSO Commands

ABCODE - DISPLAYS COMMON ABEND CODES AND POSSIBLE FIXES
ACCMCHK1 - DISPLAYS ACCOUNTING INFORMATION (INFO IN JOB CARD)
ACCTHELP - GIVES: GLSUM,COST CTR,CUST CODE,APPL ID'S,SYS CODE,ACCESS CODE
ACESHIST - BROWSE ACES INFORMATION FROM ORIGINAL ACES SYSTEM
ACF - DISPLAYS YOUR TSO LOGON RULES INFORMATION (AT ? ENTER L * )
ACFHELP - DEFINITIONS/EXAMPLES ARE GIVEN WHEN USING 'ACFRULE'
ACFRULE - USED TO DEFINE RULES ON WHAT OTHERS CAN DO TO YOUR DATASETS/PDS
BPRINT - USE TSO BPRINT TO PRINT THE DATASET YOU ARE CURRENTLY BROWSING
BRCLIST - BROWSE ALL DATASETS THAT CONTAIN THIS CLIST (REXX)
BRLINK - BROWSE ALL LINK LIBRARY DATASETS THAT CONTAIN THIS MEMBER
BRMLIB - BROWSE ALL ISPMLIBS THAT CONTAIN THIS MEMBER
BROBJLB - BROWSE ALL OBJECT LIBRARY DATASETS THAT CONTAIN THIS MEMBER
BRPLIB - BROWSE ALL PANEL LIBRARY DATASETS THAT CONTAIN THIS MEMBER
BRPROC - BROWSE ALL PROC LIBRARY DATASETS THAT CONTAIN THIS MEMBER
BRPSBLIB - BROWSE TEST PSBLIB
BRSLIB - BROWSE ALL SKELETON LIBRARY DATASETS THAT CONTAIN THIS MEMBER
CALCU - USED TO DO SIMPLE CALCULATIONS (ADD, SUB, MULT AND DIVIDE)
CAWRITER - LIST OF CA-DISPATCH UNIVERSAL WRITER NAMES AND DEFINITIONS
CA7 - INVOKE JOB SCHEDULE PRODUCT (CA-7 PRIMARY OPTION MENU)
CA7HIST - VIEW UP TO 14 MONTHS OF JOB REQUEST HISTORY
CA7LOOK - SHOWS ALL CONTROL-M RUN REQUESTS SUBMITTED FOR TODAY
CA7REQ - USED TO SCHEDULE HOST JOBS AT AUBURN HILLS
CHAMPDOC - DOCUMENTATION OUTLINING CHAMP RELATED QUESTIONS
CHAMPRPT - BROWSE CHAMP DAILY MOVE REPORT,UP TO LAST 5 DAYS OF INFORMATION
CMI - CNTLM JOB SCHEDULES BY JOB
CMPOST - WILL HELP YOU CATALOG UP ROUND REEL TAPES
CMPOSTC - WILL HELP YOU CATALOG UP CARTRIDGE TAPES
CMR - CNTLM JOB PERFORMANCE REPORTING SYSTEM, STATS FOR EXECUTED JOBS
COMPARE2 - USE ISPF OPTION 3.13 TO MAKE COMPARES
COUNT - EXECUTE THE LINES OF CODE COUNTER (SLOC COUNTER)
CSPACE - CALCULATES THE NEEDED ALLOCATIONS FOR DASD BASED ON INPUT INFO
DBADOCSE - DBA DOCUMENTATION OF INTEREST TO AN INFORMATION ANALYST
DBAPROC - DEFINITION OF "DBA'S" RESPONSIBILITIES AND SUPPORT PROCEDURES
DBAREQ - USED TO REQUEST: INCLUDE MOVES, DVAN, PSB (NUCREQ) AND MISC
DBASE - DISPLAYS DBA CONTACTS BY AREA THAT THEY SUPPORT
DBCHANGE - INITIAL DBA NOTIFICATION PANEL FOR DATABASE CHANGES
DBCREATE - USED TO CREATE A COPY OF A TEST DATABASE IN YOUR CATALOG
DBMAP - DISPLAYS A "PSB" MAP OF A PROGRAM OR A "DBD" MAP OF A DATABASE
DB2TIPS - TIPS ON SETUP,CODING,TESTING,CONVERTING TO DB2 DATA BASES
DISASTER - UPDATES RECOVERY PRIORITIES FOR JOBS AND ONLINE IMS PGMS
DISPDESC - DISPLAYS A DESCRIPTION AND OWNER FOR DATABASES
DPRINT - USE DPRINT TO PRINT A DATASET FROM A LISTING
DRARPTS - BROWSE DRA BACKUP REPORTS FOR CURRENT AND 4 WEEKS BACK
DSNREST - BROWSE LETTER TO COMMUNITY ON HOW TO RESTORE FROM DRA BACKUPS
DVANJCL - CREATES JCL TO RUN TEST DATAVANTAGE AS A BMP OR IN A DLI REGION
DVANREQ - CREATE A DATAVANTAGE REQUEST FOR THE DBA GROUP TO PROCESS
DVBTS - ALLOWS DATAVANTAGE TO BE EXECUTED IN "BTS" FOR UP TO 3 DB'S
EPRINT - USE EPRINT TO PRINT THE DATASET YOU ARE CURRENTLY EDITING
FATIPS - FILE AID TIPS FOR REFRESH 97 VERSION
GDG - BUILD OR DELETE GDG BASES
GENPSB - WILL GEN A PSB FROM THE PROD PAN LIB TO VERSION 1 PSBLIB ONLY
GSAMB37 - HOW TO PREVENT BLOCK ERRORS WHEN COPYING GSAM FILE FOR RESTART
HOLDJCL - VIEW A JCL MEMBER IN ENDEVOR HOLD
HOLDSORC - VIEW A SOURCE CODE MEMBER IN ENDEVOR HOLD
IMFTS - INVOKE BOOLE AND BABBAGE
IMSCHKPT - ALLOWS INQUIRY/UPDATE OF CHECKPOINT LIMIT/NUMBER AND RUN NUMBER
INCREQ - REQUESTS DATABASE INCLUDE MOVE TO PRODUCTION OR TEST
INITIAL - SET YOUR "BIN" AND "ACCT CODE" FOR OTHER CLISTS TO USE
IPACT - DISPLAYS IPACS INFO FOR A GIVEN JOB, PGM OR ACCESS CODE
IPCABEND - BROWSE UP TO 6 DAYS OF THE DAILY AHIPC MORNING REP FOR PROB
IPCPROB - BROWSE THE AHIPC PROBLEM TICKET FILE AND THE OPEN TICKET STATUS
IPCUSER - BROWSE/PRINT IPC USER GUIDE(VOL5), STDS FOR OUR ACCT., ETC;
ISPFTIPS - REFRESH 97 ISPF TIPS AND ENVIRONMENT SETTING CHANGES
JCLFORM - FOR NEW JCL OR NEW STEPS: THIS WILL REFORMAT THE JCL TO STDS
JCLX - THIS SHOWS ALL DATASETS (IN SEQ) AND THE JOBS THAT USE THEM
JOBHIST - SHOWS WHEN A JOB WAS STARTED, COMPLETED, CPU/REAL TIME, ETC;
LA - DISPLAY THE NAMES OF LIBRARIES CURRENTLY ALLOC TO YOUR LOGON ID
LIMITCHG - DISPLAYS CHECK POINT LIMIT CHANGES FOR THE LAST 7 DAYS
LINKDATE - DISPLAY LINKDATE FOR A GIVEN MEMBER FROM A SPECIFIC LIB
LISTA - LISTA ST H - LISTS ALL ALLOC DATASETS AND THEIR HISTORY INFO
LISTA - USED TO DISPLAY THE NAMES OF CURR ALLOC DATASETS TO YOUR ID
LISTBC - USED TO LIST MESSAGES SAVED IN THE BROADCAST DATASET
LISTC - LISTS ENTRIES FROM EITHER THE MASTER OR USER CATALOG
LISTC1 - EX. LISTC EN('F.F133152') ALL - WILL LIST MAX. ENTRIES ALLOWED
LISTC2 - EX. LISTC VOL - WILL LIST YOUR CATLG & VOLUMES DSN'S RESIDE ON
LISTD - DISPLAYS BASIC ATTRIBUTES OF DATASET SPECIFIED
LISTD1 - LISTD 'SYS2.TESTLIB(B0450)' - WILL DISPLAY PROGRAM LINK INFO
LJOB - DISPLAY CA7 SCHEDULE INFO FOR THE JOB ENTERED
LOCKOUT - THIS CROSS-REF IS GIVEN IN: JOB, DATABASE, PROGRAM AND SEG SEQ
LOGONID - BROWSE FILE FOR TSO LOGON ID'S(UID,FULL NAMES,LAST DATE USED)
MFSTEST - USED TO LINK OUT MFS SCREENS FROM PROD, TEST OR YOUR CATALOG
MFS3270 - USED TO CREATE/LINK NEW MFS SCREENS FROM YOUR CATALOG
MISCREQ - MISCELLANEOUS DBA REQUEST PANEL
MSG - THIS COMMAND SENDS A MULTI LINE MESSAGE TO A TSO ID
MSGSEND - THIS COMMAND SENDS A MULTI LINE MESSAGE TO A TSO ID
NEWIPACS - SETS UP FILES FOR RUNNING IPACT CLIST
NOTCOMP - GENERATE A REPORT OF PROGRAMS NOT COMPILED FOR SPECIFIC SEG
NOTIFY - ASSIGN PROGRAMMERS TO A PGM SO OPERATIONS CAN CALL ON ABENDS
OQACCLOC - VIEW DATASET OF CHANGED SOURCE LINES OF CODE COUNTS
OQACDB2 - ID FOREIGN KEY TABLES FOR DB2 TABLE-CREATE FILE FOR ISEE MVSWB
OQACDB2U - UNLOAD DB2 TABLE W/ DSNTIAUL UTIL.-CREATE FILE FOR ISEE MVSWB
OQACEJCL - EDIT ACCOUNT SUPPORT GROUPS ENDEVOR JCL LIBRARY
OQACERPT - BROWSE ENDEVOR MASTER CONTROL FILE - ELEMENT CATALOG REPORT
OQACIMSQ - VIEW IMS DISPLAY Q INFORMATION
OQACJCL - EDIT ACCOUNT SUPPORT GROUPS JCL LIBRARY-EX: FILEAID SCAN JCL
PANXREF - SHOWS "EVERY" INCLUDE (DB & PGM) AND "EVERY" PGM THAT USES IT
PDSCMP - COMPRESS A PDS IN SHARE MODE
PEEK - RUN PEEK EVALUATION SYSTEM - PL/I ONLY - COMPLEXITY EVALUATION
PGMTBLX - LIST ALL DB2 TABLES USED BY THE SPECIFIED PROGRAM
PRESTORE - COPIES AN UNCATLG PRODUCTION DATASET FROM DASD INTO YOUR CATLG
PRINTQIP - PRINT VARIOUS QIP GROUP RESULTS, INCLUDING THE ACES SYSTEM
PRODDBIC - ALLOWS YOU TO VIEW DATABASE INCLUDES FROM PWDDB.COBOL.INCLUDES
PRODDBIP - ALLOWS YOU TO VIEW DATABASE INCLUDES FROM PWDDB.PLI.INCLUDES
PRODINCL - VIEW AN INCLUDE MEMBER IN ENDEVOR PRODUCTION
PRODJCL - VIEW A JCL MEMBER IN ENDEVOR PRODUCTION
PRODSORC - VIEW A SOURCE CODE MEMBER IN ENDEVOR PRODUCTION
PROGXREF - GENERATE A REPORT OF PGMS AFFECTED BY DATABASE SEGMENT CHANGES
PRTCUT - PRINT REQUEST TO CUT SHEET FORMS(LETTER) AT SPO C.O.
PRTDOC - PRINT INFO. ON MFS, DATAVANTAGE, IMS CMDS, CHKPT, DESIGN REV
PRTFTD - PRINT REQUEST AT FLINT PRINT CENTER IN 1UP FORM
PRTJCL - PRINT REQUESTED "JCL" FROM PWDS1.SPOCH.JCLPAN
PRTLJOB - PRINT ON PC LPT1 PRINTER-CA7 SCHEDULE INFO FOR THE JOB ENTERED
PRTMIN - PRINT REQUEST TO MINIMUM PRINT AT SPO C.O.
PRTTRD - PRINT REQUEST AT TROY OUTPUT CENTER
PRT2UP - PRINT REQUEST AT FLINT PRINT CENTER IN 2UP FORM
PSBREQ - REQUEST PSB GENERATION FOR PRODUCTION AND TEST
RECEIVE - RECEIVE A DATASET FROM ANOTHER TSO ID
RESET - USED TO LOAD NEW/CHANGED ACF RULES WITHOUT LOGGING OFF
RESTORE - RESTORES YOUR OWN ARCHIVED DATASET/PDS BACK INTO YOUR CATALOG
RFSJCL - BUILDS CHECKPOINT RESTART JCL IN YOUR CATALOG FOR A PROGRAM
SCAN - USED TO SUBMIT A SCAN AGAINST PROD, TEST OR JCL LIBRARIES
SCANOUT - USED TO BROWSE THE DATASET CONTAINING THE RESULTS OF YOUR SCAN
SCHEDULE - LIST THE AUBURN HILLS SPO SCHEDULING CONTACT NAMES AND NUMBERS
SECINFO - VARIOUS DOCUMENTATION ON SECURITY ITEMS OF INTEREST TO SPO
SETERMS - DISPLAYS DEFINITIONS OF TERMS AND ACRONYMS FOR SPO ACCOUNT
SMR - SYSLOG MANAGEMENT AND RETRIEVAL
SPACE - CALCULATES SPACE REQUIRED FOR DATA BASES BASED ON BLOCK SIZE
START - USED TO START UP A PROGRAM/TRANSACTION ON IMST THAT IS STOPPED
TAPERECS - CALCULATES THE NUMBER OF RECORDS ON A TAPE
TBLPGMX - LIST PROGRAMS THAT USE THE SPECIFIED DB2 TABLE,VIEW,SYSTEM CODE
TCOMP2 - USED TO LINK OUT PROGRAMS FROM PROD, TEST OR YOUR CATALOG
TESTINCL - VIEW AN INCLUDE MEMBER IN ENDEVOR TEST
TESTJCL - VIEW A JCL MEMBER IN ENDEVOR TEST
TESTSORC - VIEW A SOURCE CODE MEMBER IN ENDEVOR TEST
TIME - DISPLAY CURRENT SYSTEM DATE AND TIME
TLSINQ - VIEW THE TAPE LIBRARY LISTING IN DATASET OR VOLUME SERIAL SEQ
TRACE - USED TO TRACE A JOB IN IMSTEST (BMP OR TRANSACTION)
TRANCNTS - SHOWS TRANS. PER MONTH (D8806=JUNE 88) AND # OF TIMES USED
TRANSMIT - TRANSMIT A DATASET TO ANOTHER TSO ID
TRANSMI1 - EX. TRANSMIT (IPCNODE.HIGHLEVEL) DATASET (DSN)
TRANSMI2 - EX. TRANSMIT (PLIPC4B.PDSJBTD) DATASET ('USERID.PLI.STDS')
VSPACE - CALCULATES SPACE REQUIRED FOR DATA BASE INDEXES
WAAPLNKL- LIST DATASETS CONTAINED IN THE LINKLIST
WHEREIS - ISPSLIB MEMBER - FIND DATASET THAT CONTAINS SKELETON MEMBER
WHEREIS - ISPPLIB MEMBER - FIND DATASET THAT CONTAINS PANEL MEMBER
WHEREIS - SYSEXEC MEMBER - FIND DATASET THAT CONTAINS REXX MEMBER
WHEREIS - SYSPROC MEMBER - FIND DATASET THAT CONTAINS CLIST MEMBER
WHEREIS - LINKLIST MEMBER - FIND DATASET THAT CONTAINS LINKLIB MEMBER
WHEREIS - ISPMLIB MEMBER - FIND DATASET THAT CONTAINS MESSAGE MEMBER
WHEREPGM - MEMBER - LOCATES ALL PROGRAM LIBRARIES THAT CONTAIN MEMBER
WHEREPSB - MEMBER - LOCATES ALL PSB LIBRARIES THAT CONTAIN MEMBER
