Tuesday, June 9, 2009

old system

This detail will be provided later.

Goals

The goal of this text is to provide the reader with a general framework for understanding all of the components of the programming environment. These include all of the components listed in Figure 1.1. A secondary goal of this text is to illustrate the design alternatives which must be faced by the developer of such system software. The discussion of these design alternatives precludes an in-depth examination of more than one or two alternatives for solving any one problem, but it should provide a sound foundation for the reader to move on to advanced study of any components of the programming environment.

Historical Note

Historically, system software has been viewed in a number of different ways since the invention of computers. The original computers were so expensive that their use for such clerical jobs as language translation was viewed as a dangerous waste of scarce resources. Early system developers seem to have consistently underestimated the difficulty of producing working programs, but it did not take long for them to realize that letting the computer spend a few minutes on the clerical job of assembling a user program was less expensive than having the programmer hand assemble it and then spend hours of computer time debugging it. As a result, by 1960, assembly language was widely accepted, the new high level language, FORTRAN, was attracting a growing user community, and there was widespread interest in the development of new languages such as Algol, COBOL, and LISP.
Early operating systems were viewed primarily as tools for efficiently allocating the scarce and expensive resources of large central computers among numerous competing users. Since compilers and other program preparation tools frequently consumed a large fraction of an early machine's resources, it was common to integrate these into the operating system. With the emergence of large scale general purpose operating systems in the mid 1960's, however, the resource management tools available became powerful enough that they could efficiently treat the resource demands of program preparation the same as any other application.
The separation of program preparation from program execution came to pervade the computer market by the early 1970's, when it became common for computer users to obtain editors, compilers, and operating systems from different vendors. By the mid 1970's, however, programming language research and operating system development had begun to converge. New operating systems began to incorporate programming language concepts such as data types, and new languages began to incorporate traditional operating system features such as concurrent processes. Thus, although a programming language must have a textual representation, and although an operating system must manage physical resources, both have, as their fundamental purpose, the support of user programs, and both must solve a number of the same problems.
The minicomputer and microcomputer revolutions of the mid 1960's and the mid 1970's involved, to a large extent, a repetition of the earlier history of mainframe based work. Thus, early programming environments for these new hardware generations were very primitive; these were followed by integrated systems supporting a single simple language (typically some variant of BASIC on each generation of minicomputer and microcomputer), followed by general purpose operating systems for which many language implementations and editors are available, from many different sources.
The world of system software has varied from the wildly competitive to domination by large monopolistic vendors and pervasive standards. In the 1950's and early 1960's, there was no clear leader and there were a huge number of wildly divergent experiments. In the late 1960's, however, IBM's mainframe family, the System 360, running IBM's operating system, OS/360, emerged as a monopolistic force that persists to the present in the corporate data processing world (the IBM 390 Enterprise Server is the current flagship of this line, running the VM operating system).
The influence of IBM's near monopoly of the mainframe marketplace can hardly be overstated, but it was not total, and in the emerging world of minicomputers, there was wild competition in the late 1960's and early 1970's. The Digital Equipment Corporation PDP-11 was dominant in the 1970's, but never threatened to monopolize the market, and there were a variety of different operating systems for the 11. In the 1980's, however, variations on the Unix operating system originally developed at Bell Labs began to emerge as a standard development environment, running on a wide variety of computers ranging from minicomputers to supercomputers, and featuring the new programming language C and its descendant C++.
The microcomputer marketplace that emerged in the mid 1970's was quite diverse, but for a decade, most microcomputer operating systems were rudimentary, at best. Early versions of Mac OS and Microsoft Windows presented sophisticated user interfaces, but on versions prior to about 1995 these user interfaces were built on remarkably crude underpinnings.
The marketplace of the late 1990's, like the marketplace of the late 1960's, came to be dominated by a monopoly, this time in the form of Microsoft Windows. The chief rivals are MacOS and Linux, but there is yet another monopolistic force hidden behind all three operating systems, the pervasive influence of Unix and C. MacOS X is fully Unix compatible. Windows NT offers full compatibility, and so, of course, does Linux. Much of the serious development work under all three systems is done in C++, and new languages such as Java seem to be simple variants on the theme of C++. It is interesting to ask: when will we have a new creative period in which genuinely new programming environments are developed the way they were on the mainframes of the early 1960's or the minicomputers of the mid 1970's?

Programming Environments

The term programming environment is sometimes reserved for environments containing language specific editors and source level debugging facilities; here, the term will be used in its broader sense to refer to all of the hardware and software in the environment used by the programmer. All programming can therefore be properly described as taking place in a programming environment.
Programming environments may vary considerably in complexity. An example of a simple environment might consist of a text editor for program preparation, an assembler for translating programs to machine language, and a simple operating system consisting of input-output drivers and a file system. Although card input and non-interactive operation characterized most early computer systems, such simple environments were supported on early experimental time-sharing systems by 1963.
Although such simple programming environments are a great improvement over the bare hardware, tremendous improvements are possible. The first improvement which comes to mind is the use of a high level language instead of an assembly language, but this implies other changes. Most high level languages require more complicated run-time support than just input-output drivers and a file system. For example, most require an extensive library of predefined procedures and functions, many require some kind of automatic storage management, and some require support for concurrent execution of threads, tasks or processes within the program.
Many applications require additional features, such as window managers or elaborate file access methods. When multiple applications coexist, perhaps written by different programmers, there is frequently a need to share files, windows or memory segments between applications. This is typical of today's electronic mail, database, and spreadsheet applications, and the programming environments that support such applications can be extremely complex, particularly if they attempt to protect users from malicious or accidental damage caused by program developers or other users.
A programming environment may include a number of additional features which simplify the programmer's job. For example, library management facilities allow programmers to extend the set of predefined procedures and functions with their own routines. Source level debugging facilities, when available, allow run-time errors to be interpreted in terms of the source program instead of the machine language actually run by the hardware. As a final example, the text editor may be language specific, with commands which operate in terms of the syntax of the language being used, and mechanisms which allow syntax errors to be detected without leaving the editor to compile the program.

Basic Loader Functions

A loader is a system program that performs the loading function: it brings an object program into memory and starts its execution. The role of the loader is shown in Figure 3.1. In Figure 3.1 the translator may be an assembler or a compiler, which generates the object program that is later loaded into memory by the loader for execution. In Figure 3.2 the translator is specifically an assembler, which generates the object program that becomes input to the loader. Figure 3.3 shows the roles of both loader and linker.








Figure 3.1 : The Role of Loader



Figure 3.2: The Role of Loader with Assembler



Figure 3.3 : The Role of both Loader and Linker


3.3 Type of Loaders

The different types of loaders are the absolute loader, the bootstrap loader, the relocating loader (relative loader), and the direct-linking loader. The following sections discuss the functions and design of each of these types of loaders.

3.3.1 Absolute Loader

The operation of an absolute loader is very simple: the object code is loaded into specified locations in memory, and at the end the loader jumps to a specified address to begin execution of the loaded program. The role of the absolute loader is shown in figure 3.3.1. The advantage of the absolute loader is that it is simple and efficient. The disadvantages are that the programmer must specify the actual load address and that subroutine libraries are difficult to use.
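The load-then-jump behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any real loader's format: a real absolute loader reads binary object records and transfers control with a machine-level jump.

```python
def absolute_load(memory, object_records, start_address):
    """Copy each object record into its fixed memory locations, then
    return the address at which execution begins.

    object_records is a list of (load_address, code_bytes) pairs whose
    addresses were fixed at assembly time -- the loader performs no
    relocation, which is what makes it an *absolute* loader.
    """
    for load_address, code in object_records:
        for i, byte in enumerate(code):
            memory[load_address + i] = byte   # place the code verbatim
    return start_address                      # control would transfer here

# Hypothetical usage: one 3-byte record loaded at 0x0200.
memory = [0] * 65536
entry = absolute_load(memory, [(0x0200, [0x8D, 0x21, 0x02])], 0x0200)
```

Note that nothing in the sketch examines the code being loaded, which is why the absolute loader is so simple and efficient.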

VAX Architecture

Memory - The VAX memory consists of 8-bit bytes, and all addresses used are byte addresses. Two consecutive bytes form a word, four bytes form a longword, eight bytes form a quadword, and sixteen bytes form an octaword. All VAX programs operate in a virtual address space of 2^32 bytes; one half is called system space, the other half process space.

Registers – There are 16 general-purpose registers (GPRs), 32 bits each, named R0 to R15. R15 serves as the program counter (PC), R14 as the stack pointer (SP), R13 as the frame pointer (FP), and R12 as the argument pointer (AP); the others are available for general use. There is also a processor status longword (PSL), which holds the flags.
Data Formats - Integers are stored as binary numbers in byte, word, longword, quadword, octaword. 2’s complement notation is used for storing negative numbers. Characters are stored as 8-bit ASCII codes. Four different floating-point data formats are also available.

Instruction Formats - VAX architecture uses variable-length instruction formats – op code 1 or 2 bytes, maximum of 6 operand specifiers depending on type of instruction. Tabak – Advanced Microprocessors (2nd edition) McGraw-Hill, 1995, gives more information.

Addressing Modes - VAX provides a large number of addressing modes. They are Register mode, register deferred mode, autoincrement, autodecrement, base relative, program-counter relative, indexed, indirect, and immediate.

Instruction Set – Instructions are symmetric with respect to data type, using a prefix for the type of operation, a suffix for the type of the operands, and a modifier for the number of operands. For example, ADDW2 is add, word length, 2 operands; MULL3 is multiply, longwords, 3 operands; and CVTWL is convert from word to longword. VAX also provides instructions to load and store multiple registers.
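The prefix/suffix/modifier naming convention lends itself to mechanical decoding. The sketch below is hypothetical: its operation and type tables cover only the mnemonics mentioned in the text, and real VAX mnemonics (such as the CVT conversions, which carry two type letters) go beyond this simple pattern.

```python
# Decompose a VAX-style mnemonic into operation, operand type, and
# operand count, following the prefix/suffix/modifier convention.
OPERATIONS = {"ADD": "add", "MUL": "multiply", "SUB": "subtract"}
TYPES = {"B": "byte", "W": "word", "L": "longword", "Q": "quadword"}

def decode_mnemonic(mnemonic):
    for prefix, operation in OPERATIONS.items():
        if mnemonic.startswith(prefix):
            type_char = mnemonic[len(prefix)]        # suffix: operand type
            count_char = mnemonic[len(prefix) + 1]   # modifier: operand count
            return operation, TYPES[type_char], int(count_char)
    raise ValueError("unknown operation: " + mnemonic)

# ADDW2 -> add, word length, 2 operands; MULL3 -> multiply, longwords, 3.
```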

Input and Output - Uses I/O device controllers. Device control registers are mapped to separate I/O space. Software routines and memory management routines are used for input/output operations.

Addressing modes & Flag Bits

Five possible addressing modes plus the combinations are as follows.

Direct (x, b, and p all set to 0): the operand address is used as it is. n and i are both set to the same value, either 0 or 1. While in general that value is 1, if it is set to 0 in format 3 we can assume that the rest of the flags (x, b, p, and e) are used as part of the address of the operand, to make the format compatible with the SIC format.

Relative (either b or p equal to 1 and the other one 0): the displacement is added to the current value stored in the B register (if b = 1) or in the PC register (if p = 1) to form the operand address.

Immediate (i = 1, n = 0): the operand value is encoded directly in the instruction (i.e., it lies in the last 12/20 bits of the instruction).

Indirect (i = 0, n = 1): the target address points to a location that holds the address of the operand value.

Indexed (x = 1): the contents of register X are added to the address to obtain the real address of the operand. This can be combined with any of the previous modes except immediate.

The various flag bits used in the above formats have the following meanings

e: e = 0 means format 3; e = 1 means format 4.

Bits x, b, p: used to calculate the target address using relative, direct, and indexed addressing modes.

Bits i and n: indicate how the target address is to be used.

b and p both set to 0: the disp field of a format 3 instruction is taken to be the target address. For a format 4 instruction, bits b and p are normally set to 0, and the 20-bit address field is the target address.

x set to 1: the value of the X register is added in calculating the target address.

i=1, n=0: immediate addressing, TA; the target address itself is used as the operand value, and no memory reference is made.

i=0, n=1: indirect addressing, ((TA)); the word at the target address is fetched, and its value is taken as the address of the operand value.

i=0, n=0 or i=1, n=1: simple addressing, (TA); the target address is taken as the address of the operand value.
Two new relative addressing modes are available for use with instructions assembled using format 3.

Mode                       Indication   Target address calculation
Base relative              b=1, p=0     TA = (B) + disp      (0 ≤ disp ≤ 4095)
Program-counter relative   b=0, p=1     TA = (PC) + disp     (-2048 ≤ disp ≤ 2047)
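The flag combinations above can be collected into a single target-address routine. This is a sketch for format 3 addressing only, with the register contents passed in explicitly rather than read from any simulated machine state.

```python
def target_address(disp, b, p, x, B=0, PC=0, X=0):
    """Compute a SIC/XE format 3 target address from the flag bits.

    b=1, p=0: base relative,  TA = (B) + disp   (0 <= disp <= 4095)
    b=0, p=1: PC relative,    TA = (PC) + disp  (-2048 <= disp <= 2047)
    b=0, p=0: direct,         TA = disp
    x=1: indexed -- (X) is added in any of the above modes.
    """
    if b and not p:
        ta = B + disp            # base relative
    elif p and not b:
        ta = PC + disp           # program-counter relative
    else:
        ta = disp                # direct addressing
    if x:
        ta += X                  # indexing combines with the other modes
    return ta

# PC-relative example: disp of 0x30 with (PC) = 0x1000 gives TA = 0x1030.
```

How the resulting TA is then used (as the operand, as the operand's address, or indirectly) is governed separately by the i and n bits, as described above.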

System Software and Machine Architecture

One characteristic in which most system software differs from application software is machine dependency.

System software supports the operation and use of a computer; application software provides the solution to a problem. An assembler translates mnemonic instructions into machine code, so the instruction formats, addressing modes, etc., are of direct concern in assembler design. Similarly, compilers must generate machine language code, taking into account such hardware characteristics as the number and type of registers and the machine instructions available. Operating systems are directly concerned with the management of nearly all of the resources of a computing system.

There are aspects of system software that do not directly depend upon the type of computing system: the general design and logic of an assembler, the general design and logic of a compiler, and code optimization techniques are largely independent of the target machine. Likewise, the process of linking together independently assembled subprograms does not usually depend on the computer being used.

e-Notes

This subject introduces the design and implementation of system software. Software is a set of instructions or programs written to carry out certain tasks on digital computers. It is classified into system software and application software. System software consists of a variety of programs that support the operation of a computer, while application software focuses on an application or problem to be solved. Examples of system software are the operating system, compiler, assembler, macro processor, loader or linker, debugger, text editor, some database management systems, and software engineering tools. This software makes it possible for the user to focus on an application or other problem to be solved, without needing to know the details of how the machine works internally.

Monday, June 8, 2009

Assembly Process

It is useful to consider how a person would process a program before trying to think about how it is done by a program. For this purpose, consider the program in Figure 2.1. It is important to note that the assembly process does not require any understanding of the program being assembled. Thus, it is unnecessary to understand the integer division algorithm implemented by the code in Figure 2.1, and little understanding of the particular machine code being used is needed (for those who are curious, the code is written for an R6502 microprocessor, the processor used in the historically important Apple II family of personal computers from the late 1970's).
; UNSIGNED INTEGER DIVIDE ROUTINE
; Takes dividend in A, divisor in Y
; Returns remainder in A, quotient in Y
START: STA IDENDL ;Store the low half of the dividend
STY ISOR ;Store the divisor
LDA #0 ;Zero the high half of the dividend (in register A)
TAX ;Zero the loop counter (in register X)
LOOP: ASL IDENDL ;Shift the dividend left (low half first)
ROL ; (high half second)
CMP ISOR ;Compare high dividend with divisor
BCC NOSUB ;If IDEND < ISOR don't subtract
SBC ISOR ;Subtract ISOR from IDEND
INC IDENDL ;Put a one bit in the quotient
NOSUB: INX ;Count times through the loop
CPX #8
BNE LOOP ;Repeat loop 8 times
LDY IDENDL ;Return quotient in Y
RTS ;Return remainder in A

IDENDL:B 0 ;Reserve storage for the low dividend/quotient
ISOR: B 0 ;Reserve storage for the divisor
Figure 2.1. An example assembly language program.
When a person who knows the Roman alphabet looks at text such as that illustrated in Figure 2.1, an important, almost unconscious processing step takes place: The text is seen not as a random pattern on the page, but as a sequence of lines, each composed of a sequence of punctuation marks, numbers, and word-like strings. This processing step is formally called lexical analysis, and the words and similar structures recognized at this level are called lexemes.
If the person knows the language in which the text is written, a second and still possibly unconscious processing step will occur: Lexical elements of the text will be classified into structures according to their function in the text. In the case of an assembly language, these might be labels, opcodes, operands, and comments; in English, they might be subjects, objects, verbs, and subsidiary phrases. This level of analysis is called syntactic analysis, and is performed with respect to the grammar or syntax of the language in question.
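A crude version of these two steps, for the simple syntax used in Figure 2.1 (labels end in a colon, comments begin with a semicolon), might look like the following sketch:

```python
def analyze_line(line):
    """Lexical analysis plus rudimentary syntactic analysis of one
    assembly source line, classifying lexemes by their role."""
    code, _, comment = line.partition(";")        # comments start at ';'
    lexemes = code.split()                        # lexical analysis: word-like strings
    label = None
    if lexemes and lexemes[0].endswith(":"):      # syntactic role: label
        label = lexemes.pop(0).rstrip(":")
    opcode = lexemes.pop(0) if lexemes else None  # syntactic role: opcode
    operands = lexemes                            # whatever remains
    return label, opcode, operands, comment.strip()
```

Applied to the first line of Figure 2.1, `analyze_line("START: STA IDENDL ;Store the low half")` classifies START as a label, STA as the opcode, IDENDL as an operand, and the rest as a comment, with no understanding of what any of them mean.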
A person trying to hand translate the above example program must know that the R6502 microprocessor has a 16 bit memory address, that memory is addressed in 8 bit (one byte) units, and that instructions have a one byte opcode field followed optionally by additional bytes for the operands. The first step would typically involve looking at each instruction to find out how many bytes of memory it occupies. Table 2.1 lists the instructions used in the above example and gives the necessary information for this step.
Opcode Bytes Hex Code

ASL 3 0E aa aa
B 1 cc
BCC 2 90 oo
BNE 2 D0 oo
CMP 3 CD aa aa
CPX # 2 E0 cc
INC 3 EE aa aa
INX 1 E8
LDA # 2 A9 cc
LDY 3 AC aa aa
ROL 1 2A
RTS 1 60
SBC 3 ED aa aa
STA 3 8D aa aa
STY 3 8C aa aa
TAX 1 AA

Notes: aa aa - two byte address, least significant byte first.
oo - one byte relative address.
cc - one byte of constant data.
Table 2.1. Opcodes on the R6502.
To begin the translation of the example program to machine code, we take the data from table 2.1 and attach it to each line of code. Each significant line of an assembly language program includes the symbolic name of one machine instruction, for example, STA. This is called the opcode or operation code for that line. The programmer, of course, needs to know what the program is supposed to do and what these opcodes are supposed to do, but the translator has no need to know this! For the curious, the STA instruction stores the contents of the accumulator register in the indicated memory address, but you do not need to know this to assemble the program!
Table 2.1 shows the numerical equivalent of each opcode in hexadecimal, base 16. We could have used any number base; inside the computer, the bytes are stored in binary, and because hexadecimal to binary conversion is trivial, we use that base here. While we're at it, we will strip off all the irrelevant commentary and formatting that was included only for the human reader, and leave only the textual description of the program.
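The claim that hexadecimal to binary conversion is trivial is easy to see: each hexadecimal digit expands independently to exactly four bits, so the conversion is digit-by-digit table lookup.

```python
# Each hex digit maps independently to a 4-bit group.
def hex_to_binary(hex_string):
    return "".join(format(int(digit, 16), "04b") for digit in hex_string)

# The STA opcode 8D from Table 2.1: 8 -> 1000, D -> 1101.
```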
8D START: STA IDENDL
aa
aa
8C STY ISOR
aa
aa
A9 LDA #0
cc
AA TAX
0E LOOP: ASL IDENDL
aa
aa
2A ROL
CD CMP ISOR
aa
aa
90 BCC NOSUB
oo
ED SBC ISOR
aa
aa
EE INC IDENDL
aa
aa
E8 NOSUB: INX
E0 CPX #8
cc
D0 BNE LOOP
oo
AC LDY IDENDL
aa
aa
60 RTS
cc IDENDL:B 0
cc ISOR: B 0
Figure 2.2. Partial translation of the example to machine language
The result of this first step in the translation is shown in Figure 2.2. This certainly does not complete the job! Table 2.1 included constant data, relative offsets and addresses, as indicated by the lower case notations cc, oo and aa aa, and to finish the translation to machine code, we must substitute numeric values for these!
Constants are the easiest. We simply incorporate the appropriate constants from the source code into the machine code, translating each to hexadecimal. Relative offsets are a bit more difficult! These give the number of bytes ahead (if positive) or behind (if negative) the location immediately after the location that references the offset. Negative offsets are represented using 2's complement notation.
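The offset arithmetic can be made concrete with a small helper. Using the addresses that Figure 2.4 eventually assigns (LOOP at 0209, with the BNE instruction's offset byte at 021C), the backward branch works out to the value EC that appears in the figures.

```python
def relative_offset(target, offset_location):
    """One-byte relative offset: the distance from the location
    immediately after the offset byte to the target, encoded in
    two's complement."""
    delta = target - (offset_location + 1)
    assert -128 <= delta <= 127, "target out of range for a one-byte offset"
    return delta & 0xFF     # two's complement representation in one byte

# BNE LOOP: target 0x0209, offset byte at 0x021C -> 20 bytes backward -> EC.
# BCC NOSUB: target 0x0218, offset byte at 0x0211 -> 6 bytes forward -> 06.
```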
8D START: STA IDENDL
aa
aa
8C STY ISOR
aa
aa
A9 LDA #0
00
AA TAX
0E LOOP: ASL IDENDL
aa
aa
2A ROL
CD CMP ISOR
aa
aa
90 BCC NOSUB
06
ED SBC ISOR
aa
aa
EE INC IDENDL
aa
aa
E8 NOSUB: INX
E0 CPX #8
08
D0 BNE LOOP
EC
AC LDY IDENDL
aa
aa
60 RTS
00 IDENDL:B 0
00 ISOR: B 0
Figure 2.3. Additional translation of the example to machine language
The result of this next translation step is shown in boldface in Figure 2.3. We cannot complete the translation without determining where the code will be placed in memory. Suppose, for example, that we place this code in memory starting at location 0200 (base 16). This allows us to determine which byte goes in what memory location, and it allows us to assign values to the two labels IDENDL and ISOR, and thus, fill out the values of all of the 2-byte address fields to complete the translation.
0200: 8D START: STA IDENDL
0201: 21
0202: 02
0203: 8C STY ISOR
0204: 22
0205: 02
0206: A9 LDA #0
0207: 00
0208: AA TAX
0209: 0E LOOP: ASL IDENDL
020A: 21
020B: 02
020C: 2A ROL
020D: CD CMP ISOR
020E: 22
020F: 02
0210: 90 BCC NOSUB
0211: 06
0212: ED SBC ISOR
0213: 22
0214: 02
0215: EE INC IDENDL
0216: 21
0217: 02
0218: E8 NOSUB: INX
0219: E0 CPX #8
021A: 08
021B: D0 BNE LOOP
021C: EC
021D: AC LDY IDENDL
021E: 21
021F: 02
0220: 60 RTS
0221: 00 IDENDL:B 0
0222: 00 ISOR: B 0
Figure 2.4. Complete translation of the example to machine language
Again, in completing the translation to machine code, the changes from Figure 2.3 to Figure 2.4 are shown in boldface. For hand assembly of a small program, we don't need anything additional, but if we were assembling a program that ran on for pages and pages, it would be helpful to read through it once to find the numerical addresses of each label in the program, and then read through it again, substituting those numerical values into the code where they are needed.
symbol address

START 0200
LOOP 0209
NOSUB 0218
IDENDL 0221
ISOR 0222
Table 2.2. The symbol table for Figure 2.4.
Table 2.2 shows the symbol table for this small example, sorted into numerical order. For a really large program, we might rewrite the table into alphabetical order before using it to finish the assembly.
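The "read through it once, then read through it again" procedure described above is exactly the classic two-pass assembler structure. The first pass needs only the byte counts from Table 2.1; the sketch below reproduces Table 2.2 (it treats the immediate forms LDA # and CPX # simply as LDA and CPX, since the example uses no other forms of those opcodes).

```python
# First pass of a two-pass assembler: assign an address to every label,
# using only the size in bytes of each instruction (from Table 2.1).
SIZES = {"STA": 3, "STY": 3, "LDA": 2, "TAX": 1, "ASL": 3, "ROL": 1,
         "CMP": 3, "BCC": 2, "SBC": 3, "INC": 3, "INX": 1, "CPX": 2,
         "BNE": 2, "LDY": 3, "RTS": 1, "B": 1}

def first_pass(lines, origin):
    symbols, location = {}, origin
    for label, opcode in lines:           # (label or None, opcode) per line
        if label is not None:
            symbols[label] = location     # the label names the current address
        location += SIZES[opcode]         # advance by the instruction size
    return symbols

# The program of Figure 2.1, reduced to (label, opcode) pairs.
program = [("START", "STA"), (None, "STY"), (None, "LDA"), (None, "TAX"),
           ("LOOP", "ASL"), (None, "ROL"), (None, "CMP"), (None, "BCC"),
           (None, "SBC"), (None, "INC"), ("NOSUB", "INX"), (None, "CPX"),
           (None, "BNE"), (None, "LDY"), (None, "RTS"),
           ("IDENDL", "B"), ("ISOR", "B")]
table = first_pass(program, 0x0200)
```

The second pass would then walk the program again, substituting these addresses into the aa aa and oo fields, just as was done by hand in Figure 2.4.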
It is worth noting the role which the meaning of the assembly code played in the assembly process. None! The programmer writing the line STA IDENDL must have understood its meaning, "store the value of the A register in the location labeled IDENDL", and the CPU, when it executes the corresponding binary instruction 8D 21 02 must know that this means "store the value of the A register in the location 0221", but there is no need for the person or computer program that translates assembly code to machine code to understand this!
This same assertion holds for compilers for high level languages. A C++ compiler does not understand that for(;;)x(); involves a loop, but only that, prior to the code for a call to the function x, the compiler should note the current memory address, and after the call, the compiler should output some particular instruction that references that address. The person who wrote the compiler knew that this instruction is a branch back to the start of the loop, but the compiler has no understanding of this!
To the translator performing the assembly process, whether that translator is a human clerk or an assembler, the line STA IDENDL means "allocate 3 consecutive bytes of memory, put 8D in the first byte, and put the 16 bit value of the symbol IDENDL in the remaining 2 bytes." If the symbol IDENDL is mapped to the value 0221 by the symbol table, then the hardware's interpretation of the assembler's output is the same as the programmer's interpretation of the source code. These relationships are illustrated in Figure 2.5.
Source Text
/ \ compiler or
programmer's / \ assembler's
view of meaning / \ view of meaning
/ \
Abstract Meaning ----- Machine Code

hardware's
view of meaning
Figure 2.5. Views of the meaning of a program.

What is an Assembler?

The first idea a new computer programmer has of how a computer works is learned from a programming language. Invariably, the language is a textual or symbolic method of encoding programs to be executed by the computer. In fact, this language is far removed from what the computer hardware actually "understands". At the hardware level, after all, computers only understand bits and bit patterns. Somewhere between the programmer and the hardware the symbolic programming language must be translated to a pattern of bits. The language processing software which accomplishes this translation is usually centered around either an assembler, a compiler, or an interpreter. The difference between these lies in how much of the meaning of the language is "understood" by the language processor.
An interpreter is a language processor which actually executes programs written in its source language. As such, it can be considered to fully understand that language. At the lowest level of any computer system, there must always be some kind of interpreter, since something must ultimately execute programs. Thus, the hardware may be considered to be the interpreter for the machine language itself. Languages such as BASIC, LISP, and SNOBOL are typically implemented by interpreter programs which are themselves interpreted by this lower level hardware interpreter.
Interpreters running as machine language programs introduce inefficiency because each instruction of the higher level language requires many machine instructions to execute. This motivates the translation of high level language programs to machine language. This translation is accomplished by either assemblers or compilers. If the translation can be accomplished with no attention to the meaning of the source language, then the language is called an assembly or low level language, and the translator is called an assembler. If the meaning must be considered, the translator is called a compiler and the source language is called a high level language. The distinction between high and low level languages is somewhat artificial since there is a continuous spectrum of possible levels of complexity in language design. In fact, many assembly languages contain some high level features, and some high level languages contain low level features.
Since assemblers are the simplest of symbolic programming languages, and since high level languages are complex enough to be the subject of entire texts, only assembly languages will be discussed here. Although this simplifies the discussion of language processing, it does not limit its applicability; most of the problems faced by an implementor of an assembly language are also faced in high level language implementations. Furthermore, most of these problems are present in even the simplest of assembly languages. For this reason, little reference will be made to the comparatively complex assembly languages of real machines in the following sections.

Short and long game thinking, tests driving design and CRAP metrics

Kent Beck recently posted on the complex “theory versus practice” issue of always automating tests, where he states, “Then a cult of [agile] dogmatism sprang up around testing–if you can conceivably write a test you must”. By classifying projects into long game and short game, he argues that ROI becomes a major issue in whether a test stays manual. He says, “Not writing the test for the second defect gave me time to try a new feature”, but several people commented that this was a technical debt tradeoff, and Guilherme Chapiewski noted he had done the same thing with a Proof of Concept that went live, and then he had to rewrite major chunks later. It is interesting that this ROI discussion reflects the experiences of the pre-agile functional automation community. Back in November 2001 (Wow! Long time ago!!), I posted to the Agile Testing list some . While many of these were from the context of two separate development teams and the automaters using expensive test tools, the risks of incomplete automation and insufficient ROI dominate. The benefits of having the same people develop both the code and the tests are great, and beyond my experience when I wrote that post.
I think the ROI issue for code-based tests will go away over time. Much of the creation of code-based tests is mechanical. Just as programming languages replaced assembler and took care of fiddly details (what registers to use, low level comparisons etc.) and build utilities replaced simple text file include statements, I think that soon it will be standard practice to have tool-created unit testing to handle mocking, dependency injection and assert-based testing. Mocking was originally very manual, then tools were developed. Dependency Injection was very manual, then tools were developed. For assert-based testing, we’ve already seen and now amongst others. I think these tools will become standard, just as coverage tools are now standard in IDEs when they originally were luxuries costing tens of thousands of dollars. Another variation of this is tools like recently by Jeffrey Frederick. Celerity is a fast way to run GUI web tests, but could be handled as a mechanical translation, not a manual one. Some meta language could generate Celerity and selected browser tests in a single step.
Mechanically generated tests are cheap to produce and overcome ROI issues. However, they only reflect the current code; the benefits of test design infusing the coding approach are missing. If tests are not being automated for whatever reason, some analysis of the refactoring risk should be done, at least to know where and what the error-prone code is. One way of doing this is using the Agitar-created , which Bob Martin recently as a way to keep design clean. While I currently believe all code should be created test first wherever possible, techniques like the CRAP metric can highlight the complicated bits for refactoring where possible. While it may be a great intellectual challenge, there is no need to refactor a complex industry standard algorithm. [Aside: is there an inherent advantage to doing test first design all the time? Perhaps, just as renaissance masters only painted and sculpted hands and faces and left the rest to their workshop staff, we only need to focus on core functions for test first and do the rest test last?]
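For reference, the CRAP metric combines a method's cyclomatic complexity with its test coverage. The formula below is the commonly published one (stated here as an assumption, since the original link above is missing): fully covered code scores just its complexity, while complex, uncovered code scores roughly complexity squared.

```python
def crap(complexity, coverage):
    """Change Risk Anti-Patterns score for one method (assumed formula).

    complexity: cyclomatic complexity of the method.
    coverage:   fraction of the method's code exercised by tests (0.0-1.0).
    """
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

# A complex method with no tests is flagged loudly; the same method
# fully covered scores only its raw complexity.
```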
As Kent says, “By insisting that I always write tests I learned that I can test pretty much anything given enough time.” Time is often a rare commodity, so Kent argues that compromises are often needed on projects with short-term goals. As Ron Jeffries said in a comment on Kent’s post, “My long experience suggests that there is a sort of knee in the curve of impact for short-game-focused decisions. Make too many and suddenly reliability and the ability to progress drop substantially.” I hope that advancements in the mechanical generation of tests don’t push us into a short-game perspective, crowding out the hand-crafting of tests to drive design. At the same time, metrics that can be run as part of the build to highlight areas for refactoring are proving valuable on all kinds of projects. By any measure, these are interesting times we live in. Long live long-game thinking!

IBM offers $2B in financing for federal HIT projects

Now here's a model we expect to see more of over the next several months: Vendor financing of key infrastructure needed to meet federal IT demands. Global technology giant IBM has announced that it is making up to $2 billion available to finance technology projects related to the demands of the new stimulus package. Clever move--not only does this bind clients to IBM technology, but the federal stimulus funds make it far less likely that IBM will get stiffed.

IBM's Global Financing arm is stepping in where banks fear to tread, offering to structure flexible payment arrangements, deferred payments, lines of credit and project financing packages for clients. The idea, IBM said, is to help healthcare organizations get going on projects before government begins doling out the stimulus funding.

Question about Model-Based Testing

If you haven’t been to Stack Overflow yet, it’s an interesting forum for asking technical questions and sorting through the answers, created by Joel Spolsky and Jeff Atwood.
I noticed a question on Model-Based Testing over there that I had something to say about. I wanted to link to articles by Harry Robinson, Ben Simo and James Bach…but as a new user, I’m allowed to add only one link. What to do? How about using my one link to go to my blog…
And here’s my answer, complete with links:
First, a quick note on terms. I tend to use James Bach’s definition of testing as “questioning a product in order to evaluate it”. All tests rely on mental models of the application under test. The term Model-Based Testing, though, is typically used to describe programming a model which can be explored via automation. For example, one might specify a number of states that an application can be in, various paths between those states, and certain assertions about what should occur on the transitions between those states. Then one can have scripts execute semi-random permutations of transitions within the state model, logging potentially interesting results.
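The state-model idea above can be sketched in a few lines. This is a toy illustration of the technique, not any particular MBT framework: the model independently predicts the state, a semi-random walk exercises the transitions, and an assertion compares application and model after every step.

```python
# Minimal model-based-testing sketch: a random walk over a state model,
# asserting after each transition that the application matches the model.
import random


class Counter:
    """Toy application under test: a counter that must never go negative."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

    def decrement(self):
        if self.value > 0:
            self.value -= 1


def run_model_based_walk(steps=1000, seed=42):
    random.seed(seed)
    app = Counter()
    model = 0  # the model's independent prediction of the state
    for _ in range(steps):
        action = random.choice(["increment", "decrement"])
        if action == "increment":
            app.increment()
            model += 1
        else:
            app.decrement()
            model = max(0, model - 1)
        # Oracle: the application must agree with the model at every step.
        assert app.value == model, f"diverged: app={app.value} model={model}"
    return app.value


run_model_based_walk()
```

Even this toy shows where the real costs land: the model, the exploration strategy, and (in anything realistic) the logging needed to sift through failures.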
There are real costs here: building a useful model, creating algorithms for exploring it, logging systems that allow one to weed through the output for interesting failures, etc. Whether or not the costs are reasonable has a lot to do with *what questions you want to answer*. In general, start with “What do I want to know? And how can I best learn about it?” rather than looking for a use for an interesting technique.
All that said, some excellent testers have gotten a lot of mileage out of automated model-based tests. Sometimes important questions about the application under test are best explored by automated, high-volume, semi-randomized tests. Here’s one very colorful example from Harry Robinson (one of the leading theorists and proponents of model-based testing) where he discovered many interesting bugs in Google driving directions using a model-based test (written with Ruby’s Watir library): http://model.based.testing.googlepages.com/exploratory-automation.pdf
Robinson has used MBT successfully at companies including Bell Labs, Microsoft, and Google, and has a number of essays here: http://www.harryrobinson.net/
Ben Simo (another great testing thinker and writer) has also written quite a bit worth reading on model-based testing: http://www.questioningsoftware.com/search/label/Model-Based%20Testing
Finally, a few cautions: to make good use of a strategy, one needs to explore both its strengths and its weaknesses. Toward that end, James Bach has an excellent talk on the limits and challenges of Model-Based Testing; http://www.satisfice.com/blog/archives/87 links to his hour-long talk (and associated slides).
I’ll end with a note about what Boris Beizer calls the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.” Scripted tests (whether executed by a computer or a person) are particularly vulnerable to the pesticide paradox, tending to find less and less useful information each time the same script is executed. Folks sometimes turn to model-based testing thinking that it gets around the pesticide problem. In some contexts model-based testing may well find a much larger set of bugs than a given set of scripted tests…but one should remember that it is still fundamentally limited by the Pesticide Paradox. Remembering its limits — and starting with questions MBT addresses well — it has the potential to be a very powerful testing strategy.

CCHIT holds release of IT system testing criteria

The Certification Commission for Healthcare Information Technology has put on hold the rollout of its new sets of completed testing criteria for multiple health IT systems while it waits for HHS to release its plans for certifying IT under the American Recovery and Reinvestment Act of 2009, also called the stimulus law.

Earlier this month, CCHIT announced it had completed work on updated versions of test scripts and criteria for use in the 2009-10 round of testing and certification.

The commission also announced it will publish in either June or July an updated certification handbook explaining the testing and certification process. But CCHIT Chairman Mark Leavitt said that it won't be taking applications from IT vendors for testing and certifying their electronic health record and other systems until HHS acts.

Leavitt said that CCHIT will defer launch of its 2009-10 testing programs until its people have had a chance to look at the initial batch of HHS-approved criteria under the stimulus act. The law mandates the creation of an HIT Policy Committee and an HIT Standards Committee to develop and review IT certification criteria as well as health information transmission standards and implementation specifications.

"The policy and standards committees have some very tight deadlines," Leavitt said.

"HHS has to take it through a public rulemaking and then it goes to OMB," Leavitt said, referring to the White House's Office of Management and Budget.

To keep the whole process on schedule, the policy and standards committees have to be done with their work by Aug. 21, Leavitt said. "Since we want to conform our process to what those committees' recommendations are, we want to hold our process" until the committees' work is completed. "They may want to add or subtract something. This will give us a chance to adapt the 2009-2010 process" to the stimulus act.

Initially, CCHIT certification lasted for three years, but testing was updated annually. Going forward, Leavitt said, he's guessing certification will be on a two-year cycle.

CCHIT has been criticized in some quarters for certifying systems only on functionality, but not ease of use. Leavitt said that CCHIT is "beginning to investigate how to test usability."

"There are a number of ways to do it, but we have to look for ways that are objective, that we can repeat," Leavitt said.

One way, Leavitt said, would be to "look for the most common tasks and then count the number of clicks to do those tasks." Those would include what Leavitt, himself a physician, calls "the speed-dial tasks in a physician's office," including refilling a prescription or taking a history on a new patient.

"You test that part of the product and you literally time it," Leavitt said. Vendors could be asked to bring in their systems and their best user and test them on these common tasks. “If it takes 150 clicks and 10 minutes, you have a big problem.

"The other end of the spectrum is you survey users," Leavitt said. "We ask the vendors for 10 sites. We want to see at least one that's measuring quality, or using (the system) to manage chronic disease. Or even do a survey as part of the reimbursement payment process."

The survey results could provide data on how many customers of a given system have applied for reimbursement under the "meaningful use" standard in the stimulus act vs. how many have qualified under that standard.

Leavitt said that the new certification criteria for 2009-10 have "a big focus on interoperability," including a requirement that EHRs be able to input and store data using the Continuity of Care Document format developed by standards development organization Health Level 7 in collaboration with ASTM International.

Another test area—an option, not a requirement this year—will be whether the systems incorporate the interoperability specification approved by the federally supported Healthcare Information Technology Standards Panel that deals with querying another data source, such as a health information exchange, for the existence of patient records.

"If they do it, we give them a gold star and everyone will know it, but if they don't, they'll still get certified," Leavitt said.

Another testing requirement that was on the CCHIT road map for inclusion in future certification criteria was that all EHRs be able to link the diagnosis code with an electronic prescription and be able to communicate the diagnosis code and prescription information together in a single electronic prescription sent to a drugstore or pharmacy benefit manager outside the physician's practice.

The American Medical Association has a long-standing and oft-reaffirmed policy against any requirement to include diagnosis codes on prescriptions "to protect patient confidentiality and to minimize administrative burdens."

According to a grid of CCHIT testing criteria posted on the organization's Web site, the specific listing of this testing requirement "will be removed in 2009 when the corresponding Foundation criterion is tested." The requirement itself isn't being eliminated, however.

Leavitt said that requiring EHRs to be able to combine prescription data with a patient's diagnosis doesn't mean physicians will be forced to do so.

"The AMA doesn't want you to provide it. Fine. Don't provide it," Leavitt said. "That's a policy decision, so go ahead and fight that one out."

But there are safety benefits, Leavitt said, allowing a second set of eyes to review the applicability of the prescription for the specified diagnosis. "It's a potential way to reduce errors." And there are financial considerations. "For some medications, in some prescribing situations, you're required to do it. I believe it has to do with health plans qualifying patients to be on a medication."

Another controversial requirement, originally proposed as a separate line item in the 2009 criteria, would require building into EHRs a back door to allow access by insurance companies for fraud control. The requirement would make EHRs conform to recommendations in the 2007 HHS-funded report by RTI International, "Recommended Requirements for Enhancing Data Quality in Electronic Health Records Systems," which, despite the title, primarily dealt with the issue of medical billing and payment fraud control.

According to CCHIT spokeswoman Sue Reber, that specific testing criterion also was de-listed—but not eliminated—sometime before the first draft of the 2009 criteria was published "because it is redundant with existing security criteria in the area of 'access control.'"

cell code

Here are some codes, with short descriptions, that work on many Nokia mobiles. On the main screen, type in:

*#06# for checking the IMEI (International Mobile Equipment Identity).

*#7780# reset to factory settings.

*#67705646# This will clear the LCD display(operator logo).

*#0000# To view software version.

*#2820# Bluetooth device address.

*#746025625# Sim clock allowed status.

#pw+1234567890+1# Shows if the SIM has restrictions.

*#92702689# - takes you to a secret menu where you may find some of the information below:
1. Displays the serial number.
2. Displays the month and year of manufacture.
3. Displays (if present) the date the phone was purchased (MMYY).
4. Displays the date of the last repair, if any (0000 otherwise).
5. Shows the life timer of the phone (time elapsed since first start).

*#3370# - Enhanced Full Rate Codec (EFR) activation. Increases signal strength and gives better signal reception. It also helps if you want to use GPRS and the service is not responding or is too slow. The phone battery will drain faster, though.

*#3370* - EFR deactivation. The phone will automatically restart. Increases battery life by up to 30% because the phone receives less signal from the network.

*#4720# - Half Rate Codec activation.

*#4720* - Half Rate Codec deactivation. The phone will automatically restart.
If you forgot the wallet code for a Nokia S60 phone, use this reset code: *#7370925538#
Note: the data in your wallet will be erased. The phone will ask for the lock code. The default lock code is 12345.

Press *#3925538# to delete the contents and code of wallet.

Unlock service provider: insert the SIM, turn the phone on and press volume up (arrow keys) for 3 seconds; it should say "pin code". Press C, then press *; a message should flash. Press * again and enter 04*pin*pin*pin#

*#7328748263373738# resets the security code.
The default security code is 12345.
Change the closed caller group (Settings > Security settings > User groups) to 00000 and your phone will sound the message tone when you are near a radar speed trap. Setting it to 500 will cause your phone to set off security alarms at shop exits, great for practical jokes! (Works with some Nokia phones.) Press and hold "0" on the main screen to open the WAP browser.

books

Nicole Kidman tops Forbes magazine's second annual list of least bankable stars.

The magazine used a simple ratio: it added up each actor's salary for their last three films, then divided the films' combined gross income by that total to get a payback factor, the dollars returned per dollar of salary paid.
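As a sketch of that ratio, measured as dollars of gross returned per dollar of salary paid (matching the "returned $X for each $1 she earned" figures below), with hypothetical numbers:

```python
# Sketch of Forbes' payback ratio with hypothetical figures (in $M).
def payback_factor(combined_gross, combined_salary):
    """Dollars of gross income returned per dollar of salary paid."""
    return combined_gross / combined_salary


# E.g. an actor paid a combined $40M whose last three films grossed a
# combined $40M returned $1 for each $1 earned.
print(payback_factor(40.0, 40.0))  # -> 1.0
```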

In Kidman's case, last year she returned $8 for each $1 she earned. This year, she returned $1 for each dollar she earned.

She was paid $17 million for "Invasion" and the film lost $2.68 for each dollar it paid her.