
Python Programming by John M. Zelle - Reference

                  Python Programming:
      An Introduction to Computer Science
                  John M. Zelle, Ph.D.

Key Points
Computers and Programs

Almost everyone has used a computer at one time or another. Perhaps you have played computer games or used a computer to write a paper or balance your checkbook. Computers are used to predict the weather, design airplanes, make movies, run businesses, perform financial transactions, and control factories.

1. The Universal Machine

A modern computer might be defined as “a machine that stores and manipulates information under the control of a changeable program.” There are two key elements to this definition. The first is that computers are devices for manipulating information. This means that we can put information into a computer, and it can transform the information into new, useful forms, and then output or display the information for our interpretation.

Every computer is just a machine for executing (carrying out) programs. There are many different kinds of computers. You might be familiar with Macintoshes and PCs, but there are literally thousands of other kinds of computers both real and theoretical. One of the remarkable discoveries of computer science is the realization that all of these different computers have the same power; with suitable programming, each computer can basically do all the things that any other computer can do. In this sense, the PC that you might have sitting on your desk is really a universal machine. It can do anything you want it to, provided you can
describe the task to be accomplished in sufficient detail. Now that’s a powerful machine!

2. Program Power

You have already learned an important lesson of computing: Software (programs) rules the hardware (the physical machine). It is the software that determines what any computer can do. Without programs, computers would just be expensive paperweights. The process of creating software is called programming, and that
is the main focus of this book.

As you probably know, programmers are in great demand. More than a few liberal arts majors have turned a couple computer programming classes into a lucrative career option. Computers are so commonplace in the business world today that the ability to understand and program computers might just give you the edge over your competition, regardless of your occupation.

3. What is Computer Science?

You might be surprised to learn that computer science is not the study of computers. A famous computer scientist named Edsger Dijkstra once quipped that computers are to computer science what telescopes are to astronomy. The computer is an important tool in computer science, but it is not itself the object of study. Since a computer can carry out any process that we can describe, the real question is: What processes can we describe? Put another way, the fundamental question of computer science is simply: What can be computed?
Computer scientists use numerous techniques of investigation to answer this question. The three main ones are design, analysis, and experimentation.

Analysis is the process of examining algorithms and problems mathematically. Computer scientists have shown that some seemingly simple problems are not solvable by any algorithm. Other problems are intractable. The algorithms that solve these problems take too long or require too much memory to be of practical value. Analysis of algorithms is an important part of computer science; throughout this book we will touch on some of the fundamental principles.

4. Programming Languages

Remember that a program is just a sequence of instructions telling a computer what to do. Obviously, we need to provide those instructions in a language that a computer can understand. It would be nice if we could just tell a computer what to do using our native language, like they do in science fiction movies (“Computer, how long will it take to reach planet Alphalpha at maximum warp?”) Unfortunately, despite the continuing efforts of many top-flight computer scientists (including your author), designing a computer to understand human language is still an unsolved problem.

Languages such as Python, C++, and Java are examples of high-level computer languages. Although they are precise, they are designed to be used and understood by humans. Strictly speaking, computer hardware can only understand a very low-level language known as machine language.

Suppose we want the computer to add two numbers. The instructions that the CPU actually carries out might be something like this.

load the number from memory location 2001 into the CPU
load the number from memory location 2002 into the CPU
add the two numbers in the CPU
store the result into location 2003
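For comparison, the same computation takes a single statement in a high-level language. Here is an illustrative sketch in modern Python 3 syntax (the variable names a, b, and result are hypothetical stand-ins for the values at memory locations 2001, 2002, and 2003):

```python
# High-level equivalent of the four machine-language steps above.
a = 5           # "load the number from memory location 2001"
b = 7           # "load the number from memory location 2002"
result = a + b  # "add the two numbers" and "store the result"
print(result)
```

The high-level programmer never sees registers or memory locations; the language implementation translates the one-line addition into the low-level steps for us.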

5. The Magic of Python

The computational processes inside the computer are like magical spirits that we can harness for our work. Unfortunately, those spirits only understand a very arcane language that we do not know. What we need is a friendly Genie that can direct the spirits to fulfill our wishes. Our Genie is a Python interpreter. We can give instructions to the Python interpreter, and it directs the underlying spirits to carry out our demands. We communicate with the Genie through a special language of spells and incantations (i.e., Python). The
best way to start learning about Python is to let our Genie out of the bottle and try some spells. 

You can start the Python interpreter in an interactive mode and type in some commands to see what happens. When you first start the interpreter program, you may see something like the following:

Python 2.1 (#1, Jun 21 2001, 11:39:00)
[GCC pgcc-2.91.66 19990314 (egcs-1.1.2 release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>>

The >>> is a Python prompt indicating that the Genie is waiting for us to give it a command. In programming languages, a complete command is called a statement.

Here is a sample interaction with the Python interpreter.

         >>> print "Hello, World"
         Hello, World
         >>> print 2 + 3
         5
         >>> print "2 + 3 =", 2 + 3
         2 + 3 = 5

Writing Simple Programs

1. The Software Development Process

Formulate Requirements Figure out exactly what the problem to be solved is. Try to understand as much as possible about it. Until you really know what the problem is, you cannot begin to solve it.

Determine Specifications Describe exactly what your program will do. At this point, you should not worry about how your program will work, but rather with deciding exactly what it will accomplish. For simple programs this involves carefully describing what the inputs and outputs of the program will be and how they relate to each other.

Create a Design Formulate the overall structure of the program. This is where the how of the program gets worked out. The main task is to design the algorithm(s) that will meet the specifications.

Implement the Design Translate the design into a computer language and put it into the computer. In this book, we will be implementing our algorithms as Python programs.

Test/Debug the Program Try out your program and see if it works as expected. If there are any errors (often called bugs), then you should go back and fix them. The process of locating and fixing errors is called debugging a program.

Maintain the Program Continue developing the program in response to the needs of your users. Most programs are never really finished; they keep evolving over years of use.

2. Elements of Programs

Now that you know something about the programming process, you are almost ready to start writing programs on your own. Before doing that, though, you need a more complete grounding in the fundamentals of Python. The next few sections will discuss technical details that are essential to writing correct programs. This material can seem a bit tedious, but you will have to master these basics before plunging into more interesting 
waters.

3. Output Statements

Now that you have the basic building blocks, identifier and expression, you are ready for a more complete description of various Python statements. You already know that you can display information on the screen using Python’s print statement. But what exactly can be printed? Python, like all programming languages, has a precise set of rules for the syntax (form) and semantics (meaning) of each statement. Computer scientists have developed sophisticated notations called meta-languages for describing programming languages. In this book we will rely on a simple template notation to illustrate the syntax of statements.

Here are the possible forms of the print statement:

print
print <expr>
print <expr>, <expr>, ..., <expr>
print <expr>, <expr>, ..., <expr>,
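The templates above use Python 2 syntax, where print is a statement. In Python 3, print became a function, so each form needs parentheses. The following sketch shows the modern equivalents of the four templates:

```python
# Python 3 equivalents of the Python 2 print statement forms above.
print()                      # bare print        -> prints a blank line
print(3 + 4)                 # print <expr>      -> prints 7
print("2 + 3 =", 2 + 3)      # comma-separated exprs, joined by spaces
print("no newline", end="")  # trailing comma    -> suppress the newline
```

The trailing-comma form of the Python 2 statement, which keeps the cursor on the same line, corresponds to passing end="" in Python 3.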

4. Definite Loops

You already know that programmers use loops to execute a sequence of statements several times in succession. The simplest kind of loop is called a definite loop. This is a loop that will execute a definite number of times. That is, at the point in the program when the loop begins, Python knows how many times to go around (or iterate) the body of the loop.

for i in range(10):
    x = 3.9 * x * (1 - x)
    print x

This particular loop pattern is called a counted loop, and it is built using a Python for statement. Before considering this example in detail, let’s take a look at what for loops are all about.

A Python for loop has this general form.

for <var> in <sequence>:
    <body>

The body of the loop can be any sequence of Python statements. The start and end of the body is indicated by its indentation under the loop heading (the for <var> in <sequence>: part).
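As a runnable illustration of the counted-loop pattern, here is the earlier loop in Python 3 syntax. In the book's full program, x is read from the user; the starting value 0.5 below is just an illustrative assumption:

```python
# A runnable counted loop (Python 3 syntax; the excerpt uses Python 2).
# The book's full program reads x from the user; 0.5 is illustrative.
x = 0.5
values = []
for i in range(10):
    x = 3.9 * x * (1 - x)  # same body as the loop shown earlier
    values.append(x)
    print(x)

print(len(values))  # the body executed exactly 10 times
```

Because range(10) produces exactly ten values, Python knows before the loop starts how many iterations there will be; that is what makes it a definite loop.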

Computing with Numbers

When computers were first developed, they were seen primarily as number crunchers, and that is still an important application. As you have seen, problems that involve mathematical formulas are easy to translate into Python programs. This chapter takes a closer look at computations involving numeric calculations.

1. Numeric Data Types

The information that is stored and manipulated by computer programs is generically referred to as data. Different kinds of data will be stored and manipulated in different ways. Consider this program that calculates the value of loose change.

You should be warned that the float type only stores approximations. There is a limit to the precision, or accuracy, of the stored values. Since float values are not exact, while ints always are, your general rule of thumb should be: if you don’t absolutely need fractional values, use an int.

A value’s data type determines what operations can be used on it. As we have seen, Python supports the usual mathematical operations on numbers. 
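The difference between approximate floats and exact ints is easy to demonstrate. A brief Python 3 sketch (the book's examples use Python 2, but the behavior of float here is the same):

```python
# floats only approximate many decimal fractions...
print(0.1 + 0.2)         # prints 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # prints False

# ...while int arithmetic is always exact
print(10**20 + 1 - 10**20)  # prints 1, exactly
```

This is why the rule of thumb above says to prefer int whenever fractional values are not genuinely needed.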

2. Accumulating Results: Factorial

Suppose you have a root beer sampler pack containing six different kinds of root beer. Drinking the various flavors in different orders might affect how good they taste. If you wanted to try out every possible ordering, how many different orders would there be? It turns out the answer is a surprisingly large number.

Let’s write a program that will compute the factorial of a number entered by the user. The basic outline of our program follows an Input-Process-Output pattern.

Input number to take factorial of, n
Compute factorial of n, fact
Output fact
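The "Process" step uses the accumulator pattern: start fact at 1 and multiply in one factor per trip through a loop. The sketch below is an illustrative Python 3 version of that pattern, not the book's exact factorial.py (which, among other things, reads n with input at the top level):

```python
# Accumulator-pattern sketch of the Input-Process-Output outline above
# (illustrative Python 3 code, not the book's exact factorial.py).
def factorial(n):
    fact = 1                        # accumulator starts at 1
    for factor in range(2, n + 1):  # multiply in 2, 3, ..., n
        fact = fact * factor
    return fact

print(factorial(6))   # 720
print(factorial(10))  # 3628800
```

Note that range(2, n + 1) must stop at n + 1 so that n itself is included as the last factor.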

3. The Limits of Int

I have talked about numeric data types as representations of familiar numbers such as integers and decimal fractions. It is important to keep in mind, however, that these numeric types are just representations, and they do not always behave exactly like the numbers that they represent. We can see an example of this as we test out our new factorial program.

>>> import factorial
Please enter a whole number: 6
The factorial of 6 is 720

>>> factorial.main()
Please enter a whole number: 10
The factorial of 10 is 3628800

>>> factorial.main()
Please enter a whole number: 13
Traceback (innermost last):
  File "<stdin>", line 1, in ?
  File "factorial.py", line 9, in main
    fact = fact * factor
OverflowError: integer multiplication

Computer memory is composed of electrical “switches,” each of which can be in one of two possible states, basically on or off. Each switch represents a binary digit or bit of information. One bit can encode two possibilities, usually represented with the numerals 0 (for off) and 1 (for on).

Three bits allow us to represent eight different values by adding a zero or one to each of the four two-bit patterns.

You can see the pattern here. Each extra bit doubles the number of distinct patterns. In general, n bits can represent 2^n different values.
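The doubling pattern is easy to verify by brute force: enumerate every n-bit string and count them. A small Python 3 sketch (my own check, not from the book):

```python
# Verify that n bits yield 2**n distinct patterns by listing them all.
from itertools import product

for n in (1, 2, 3, 8):
    patterns = list(product("01", repeat=n))  # all n-bit 0/1 strings
    print(n, len(patterns))
    assert len(patterns) == 2 ** n
```

For n = 3 this lists the eight patterns 000 through 111, matching the count described above.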


Python Programming: An Introduction to Computer Science (Goodreads reviews)

1. Betsy rosalen rated it 5 out of 5

Excellent Book for Learning Python. I've since been required to read some other free textbooks for classes I am taking and they are not even close to as good at explaining the concepts as this one is. Highly recommend it to anyone trying to learn Python.

2. Peter Herrmann rated it 5 out of 5

Great for learning Python and computer graphics. Of course, this book covers ALL aspects of Python, not just graphics. I'd gotten another Python book a few years ago - which I'd since forgotten - but it did not cover graphics. Also, to repeat, this book has great exercises for learning the language. 

3. TallabAbdelhakim rated it 5 out of 5

It's one of the best books about introductory computer science and programming out there. Easy to read, with lots of exercises to practice, and explanations worth investing your precious time in reading.

