

Three Paradigms of Computer Science

In his seminal work on scientific revolutions, Thomas Kuhn (1962) defines scientific paradigms as “some accepted examples of actual scientific practice… [that provide] models from which spring particular coherent traditions of scientific research.” The purpose of this paper is to investigate the paradigms of computer science and to expose their philosophical origins. Peter Wegner (1976) examines three definitions of computer science: as a branch of mathematics (e.g. Knuth 1968), as an engineering (‘technological’) discipline, and as a natural (‘empirical’) science. He concludes that the practices of computer scientists are effectively committed not to one but to three different ‘research paradigms’ (1). Taking a historical perspective, Wegner argues that each paradigm dominated a different decade of the 20th century: the scientific paradigm dominated the 1950s, the mathematical paradigm the 1960s, and the technocratic paradigm the 1970s—the decade in which Wegner wrote his paper. (2) We take Wegner’s historical account to hold and postulate (§5) that to this day computer science remains largely dominated by the tenets of the technocratic paradigm. We shall also go beyond Wegner and explore the philosophical roots of the dispute over the definition of the discipline.

Timothy Colburn (2000, p. 154) suggests that the different definitions of the discipline merely emanate from complementary interpretations (or ‘views’) of the activity of writing computer programs, and therefore they can be reconciled as such.

Intel Compilers on Linux Clusters

Commodity computer clusters are emerging as a cost-effective way to bring supercomputer-level performance to department-level organizations. Such clusters are typically built from widely available CPUs, such as the Intel Pentium 4, and run the robust Linux operating system. Various CPU interconnects are available, ranging from standard 100bT and 1000bT Ethernet to Myrinet or Dolphin networks.

For the computational scientist, the platform offers a somewhat more limited choice of programming tools than traditional supercomputers, although the range of these tools is steadily increasing. Typically, there is no support for shared-memory parallel programming spanning multiple CPU boards (nodes); the shared-memory model is limited to CPUs residing on a single board. On the other hand, mature distributed-memory tools, such as the Message-Passing Interface (MPI) library, are freely available. These often exploit the particular strengths of an interconnect, e.g. the Myrinet-specific MPICH-GM implementation of MPI.
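The distributed-memory, message-passing style that MPI implements can be illustrated in miniature with Python's standard-library multiprocessing module — a conceptual stand-in only, since a real cluster program would use the MPI C/Fortran bindings and an `mpirun` launcher:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Each process has its own private memory; data arrives only by message.
    data = conn.recv()                    # loosely analogous to MPI_Recv
    conn.send(sum(x * x for x in data))   # loosely analogous to MPI_Send
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send(list(range(4)))           # "rank 0" distributes the work
    print(parent.recv())                  # -> 14 (0 + 1 + 4 + 9)
    p.join()
```

The point of the sketch is the programming model: no variable is shared between the two processes, so all coordination happens through explicit send/receive pairs, exactly as in an MPI program spanning multiple nodes.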

Customising Microsoft Office to develop a tutorial learning environment

Powerful applications such as Microsoft Office’s Excel and Word are widely used to perform common tasks in the workplace and in education. Scripting within these applications allows unanticipated user requirements to be addressed. We show that such extensibility, intended to support office automation-type applications, is well suited to the creation of learning activities and learning environments. We have developed a range of tutorial activities using Excel and Word in introductory mathematics, writing and economics courses. These tutorials have the dual purpose of teaching academic concepts and practical computer literacy skills. The software architecture of our learning environment includes a database-supported back-end to automatically record students’ responses, which allows for greater control over what students do.

Additionally, this allows common procedures to be automated, improving usability and providing automated feedback to support learning. We have been applying our ideas for the last six years, and currently 1,500 students are using the environment. We suggest that this pragmatic solution can provide a high degree of interactivity and flexibility in a range of learning contexts, and that it represents a cost-effective alternative for use alongside traditional approaches.
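The database-backed recording of student responses described above can be sketched with Python's standard sqlite3 module; the abstract does not name the actual back-end or schema, so the table and column names here are purely illustrative:

```python
import sqlite3

# An in-memory database stands in for the environment's back-end store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE responses (
                    student_id TEXT,
                    question   TEXT,
                    answer     TEXT,
                    recorded   TEXT DEFAULT CURRENT_TIMESTAMP)""")

def record_response(student_id, question, answer):
    # Each submission from an Excel/Word tutorial activity is logged centrally,
    # which is what gives the instructor control over what students do.
    conn.execute(
        "INSERT INTO responses (student_id, question, answer) VALUES (?, ?, ?)",
        (student_id, question, answer))
    conn.commit()

record_response("s1001", "q1", "42")
print(conn.execute("SELECT COUNT(*) FROM responses").fetchone()[0])  # -> 1
```

Automatically logging every response, rather than relying on students to hand in workbooks, is the design choice that enables both monitoring and automated feedback.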

Software Release Management

The advent of the Internet and the use of component-based technology have each individually influenced the software development process. The Internet has facilitated geographically distributed software development by allowing improved communication through such tools as distributed CM systems, shared white-board systems, and real-time audio and video. Component-based technology has facilitated the construction of software through assembly of relatively large-grained components by defining standards for component interaction such as CORBA [5]. But it is their combined use that has led to a radically new software development process: increasingly, software is being developed as a “system of systems” by a federated group of organizations.

Sometimes such a process is initiated formally, as when organizations create a virtual enterprise [4] to develop software. The virtual enterprise establishes the rules by which dependencies among the components are to be maintained by the members of the enterprise. Other times it is the connectivity of the Internet that provides the opportunity to create incidental systems of systems.

Yahoo! Secrets

Do you Yahoo!?
If you connect to the Internet, chances are that you do. Yahoo! is the most popular site on the Internet. More people visit Yahoo! every day than visit America Online or Google or eBay or any other Internet destination. With more than 237 million users in 25 different countries (and 13 different languages), Yahoo! is visited by more than two-thirds of all Internet users at least once a month.

It’s fair to assume that you’re one of those 237 million users, and that you use Yahoo! to find other sites on the Web. But do you know everything you can do at Yahoo!? Do you know all about Yahoo! services, including free e-mail and online shopping and personal ads and stock quotes and TV schedules and travel reservations and interactive games and downloadable music and radio and real-time chat and instant messaging and... well, do you?

55 Ways to Have Fun With Google

This book, in a way, is born out of my daily weblog “Google Blogoscoped” and those who read it. Since 2003 I’ve been writing there, covering all things Google – not just the fun stuff, but news, discussion, interviews, tutorials, and everything beyond with a relation to search engines. Thanks to those reading along and providing pointers or feedback, I’ve been able to discover more interesting pages and get to know more interesting people around the world than ever before.

When I think of Google, first and foremost I think of its role in discovering knowledge, people, and people’s thoughts. Search engines are truly among the first emergent pieces of a global brain, in the good tradition of Gutenberg’s invention of printing technology, of the invention of the internet, and later of the World Wide Web. All of these bring us closer together by speeding up the rhythm in which we communicate.


The Excel 2007 Data & Statistics Cookbook

The predecessor to this book was well received, and considering the feedback from instructors and students, I have expanded the coverage of the data management features of Excel in this update, as these are quite powerful and flexible. This book makes use of Excel 2007 for Microsoft Windows. All the functionality described in this book is also available in Excel 2003, and most of it is available in earlier versions as well. However, as you have already found or will soon find, the screens look very different in the newest version of Excel. If you are using Excel 2003 or an earlier version, you may find my Excel Statistics Cookbook (2006) more to your liking.

Students, instructors, and researchers wanting to perform data management and basic descriptive and inferential statistical analyses using Excel will find this book helpful. My goal was to produce a succinct guide to conducting the most common basic statistical procedures using Excel as a computational aid. For each procedure, I provide an example problem with data, and then show how to perform the procedure in Excel. I display the output and explain how to interpret it.
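The kind of basic descriptive statistics the book computes with Excel worksheet functions (AVERAGE, STDEV, MEDIAN, and the like) can be cross-checked with Python's standard statistics module; the sample data below is made up for illustration:

```python
import statistics

scores = [88, 92, 75, 61, 99, 84]      # illustrative sample data

mean   = statistics.mean(scores)       # Excel: =AVERAGE(A1:A6)
sd     = statistics.stdev(scores)      # Excel: =STDEV(A1:A6), the sample SD
median = statistics.median(scores)     # Excel: =MEDIAN(A1:A6)

print(round(mean, 2))    # -> 83.17
print(median)            # -> 86.0
```

Note that `statistics.stdev` is the sample standard deviation (divisor n − 1), matching Excel's STDEV; Excel's STDEVP, the population version, corresponds to `statistics.pstdev`.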

ActionScript 3.0 Cookbook

Using ActionScript, you can create Flash applications that do just about anything you can imagine. But before launching into the vast possibilities, let’s start with the basic foundation. The good news is that ActionScript commands follow a well-defined pattern, sharing similar syntax, structure, and concepts. Mastering the fundamental grammar puts you well on the way to mastering ActionScript. This chapter addresses the frequent tasks and problems that relate to core ActionScript knowledge. Whether you are a beginner or master—or somewhere in between—these recipes help you handle situations that arise in every ActionScript project.

This book assumes that you have obtained a copy of Flex Builder 2 and have successfully installed it on your computer. It’s also helpful if you have some experience using a previous version of ActionScript. When you launch Flex Builder 2, the Eclipse IDE should start up and present you with a welcome screen. You are presented with various options to get started and more information about Flex and ActionScript 3, such as links to documentation, tutorials, and more. You can close that screen by clicking on the small “x” on its tab. Now you are in the Eclipse IDE itself, ready to start coding; but where do you go from here? Flex Builder 2 allows you to create three kinds of projects: a Flex project, a Flex Library project, and an ActionScript project. The difference is that Flex projects have access to the entire Flex Framework, which includes all of the Flex components, layout management, transitions, styles, themes, data binding, and all the other stuff that goes into making a Flex Rich Internet Application. Flex applications are written in MXML (a form of XML), which describes the layout and relationships between components. They use ActionScript for their business logic.

PHP/MySQL Tutorial

Unless you've been living on Mars for the last six to eight months, you've heard of open source software (OSS). This movement has gained so much momentum that even the big boys are taking notice. Companies like Oracle, Informix, and a host of others are releasing their flagship database products for that poster child of the OSS movement, Linux.

Having a massively complex RDBMS (relational database management system) is all well and good if you know what to do with it. But perhaps you are just getting into the world of databases. You've read Jay's article and you want to put up your own data-driven Web site. But you find you don't have the resources or desire for an ASP server or some pricey database. You want something free, and you want it to work with Unix.

PostgreSQL Tutorial

Postgres, developed originally in the UC Berkeley Computer Science Department, pioneered many of the object-relational concepts now becoming available in some commercial databases. It provides SQL92/SQL3 language support, transaction integrity, and type extensibility. PostgreSQL is a public-domain, open source descendant of this original Berkeley code.
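Transaction integrity, one of the features mentioned above, means that a group of statements either all take effect or none do. The idea can be sketched with Python's standard sqlite3 module standing in for a PostgreSQL client (a real setup would use a PostgreSQL driver; the table is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # A transfer is two statements that must succeed or fail together.
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
    raise RuntimeError("simulated failure mid-transaction")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
except RuntimeError:
    conn.rollback()   # the debit is undone along with everything else

print(conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0])  # -> 100
```

Because the failure triggers a rollback, no money leaves Alice's account; without transactional guarantees, the half-completed transfer would silently lose the 50.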
