
This article has been accepted for publication in IEEE Annals of the History of Computing but has not yet been fully edited. Some content may change prior to final publication. Digital Object Identifier 10.1109/MAHC.2012.56.

The Oracle Story: 1984-2001


By Andrew Mendelsohn, July 9, 2012

Abstract

This article tells the story of Oracle from 1984 through 2001, primarily through the author's experiences during those years. He worked on the software development team that built the Oracle Relational Database Management System (RDBMS). This was an exciting time when Oracle went from being a small niche software company to becoming one of the giants in the software industry. Although many observers believe Oracle's strong marketing and sales organizations were the primary reasons for its success during this time, the author argues that Oracle's success was also due to its highly innovative RDBMS product that was strongly differentiated from its competitors. This article traces the development of the Oracle RDBMS through the mainframe, minicomputer, client-server, and Internet computing eras. It calls out the key competitors at each stage and the key product innovations that allowed Oracle to compete so successfully in the market. Finally, this article also provides insight into the workings of the overall Oracle business and culture.

How I Came to Oracle: Right Time, Right Place

In 1980, when I interviewed for my first job out of MIT, I narrowed the opportunities down to two companies: Xerox and HP. At the time, Xerox PARC was world-renowned for its computer science research, with innovations such as Ethernet, the Alto personal computer (the forebear of the Apple Mac), and client-server computing. Xerox had just released its first commercial product based on this research, called the Xerox Star. I had the opportunity to work on the Xerox Star project in El Segundo, California, near LA. The job opportunity at HP was to work on their Horizon RDBMS project in Cupertino, California. HP was known as a great company at the time, but to be honest, my first choice was to work at Xerox. Then I visited both sites. The beautiful setting of HP in the land of Silicon Valley startups was much more to my liking than the suburban LA sprawl of El Segundo, so I went to HP. This turned out to be a great decision. The Xerox Star went nowhere, and I ended up at the epicenter of the RDBMS revolution.

In the fall of 1980, I started work at HP as a developer on the Horizon RDBMS project. The objective of Horizon was to build a commercial RDBMS. I was a fairly rare commodity at the time: a recent graduate who actually knew something about RDBMS. During my senior year at Princeton, I had attended a graduate seminar on relational databases. This was at the time when Professor Jeff Ullman was still at Princeton and was just starting to get interested in databases. During graduate school at MIT, I took a seminar where we studied some of the latest research in database systems, e.g., Jim Gray's "Notes on Database Operating Systems" [1], which was based on the IBM System R research.

After spending two and a half years working on the Horizon project at HP, I was approached by a headhunter to work at a startup company called ESVEL. The HP Horizon project wasn't making great progress, and it had always been my dream to work for a Silicon Valley startup. So I interviewed at ESVEL and was quickly sold on the company. ESVEL had been started by Kapali Eswaran (the ES) and Ron Revel (the VEL). ESVEL had an all-star relational database development team: Kapali Eswaran from IBM Research was the CEO, and there also was Don Slutz from IBM's System R, Roger Bamford from IBM Development, and Louise Madrid from Britton-Lee. I joined ESVEL in the summer of 1983. Soon after I joined, another RDBMS superstar, Franco Putzolu, arrived; he came from Tandem and previously had worked on IBM's System R. When I arrived, ESVEL had been working on its RDBMS for about two years and had a working data and transaction layer called MARS. They were also well along in building their SQL layer, code-named Jupiter. I was put to work for Louise Madrid on Jupiter. I worked on their SQL pre-compiler and also worked on adding subqueries to their SQL language implementation.

Although I spent less than a year at ESVEL, it was a great experience for me. While ESVEL had by far the most advanced SQL RDBMS engine in existence at the time, I quickly learned that you need more than great product technology to become a successful business. After I arrived at ESVEL, I found out that there was a major morale problem on the team. Unfortunately, Ron Revel enjoyed parachuting as a hobby, and one day his chute didn't open. Kapali Eswaran had then taken over running the company and had managed to antagonize virtually all the key developers.
Also, several of the new hires (including me) found out that our stock options were lower than expected. Franco Putzolu returned to Tandem after just a few months at ESVEL. Next, Roger Bamford left for Oracle. At that point, there was essentially a mutiny by most of the remaining key developers. We told the company's venture capitalist (VC) to either get rid of Kapali or we were all leaving. The VC did nothing, and we all quit. With the core development team gone, ESVEL was never able to advance the product much further, but it did survive a few more years as a business. I had connected Kapali with my old group at HP, and Kapali was able to sell the ESVEL RDBMS to HP. HP abandoned the Horizon project and brought the ESVEL product to market as HP Allbase. Allbase had some moderate success at HP but was eventually discontinued in favor of partnering with Oracle and other RDBMS providers; HP was really just interested in selling their server hardware. Kapali eventually sold ESVEL to Cullinet. Cullinet brought the product to market as CA-DB, but without the core development team, it was never able to compete with Oracle and others in the market.

While ESVEL didn't succeed as a business, its former developers played pivotal roles in many of the successful RDBMS products. I joined Roger Bamford at Oracle. Ken Ng went to Informix and led the development of the Informix SQL product.
Louise Madrid, Don Slutz, and Franco Putzolu went to Tandem and were instrumental in delivering Tandem's NonStop SQL product. Franco Putzolu joined Oracle about 10 years later and was a key architect on the Oracle RDBMS team. Figure 1 shows the genealogy of many of the principal RDBMS products.


<insert PowerPoint slide 1> Figure 1

Oracle in 1984

After ESVEL, I narrowed down my job choices to working at either Software Publishing or Oracle. Software Publishing had been founded by John Page, who had worked on the Horizon project at HP. Its PFS File program was a very popular PC data management application. Oracle, on the other hand, was the leader in the emerging RDBMS market. I chose Oracle, and in May 1984, I arrived at Oracle as employee #130.

Oracle was run by Larry Ellison and Bob Miner. Larry was CEO and chief database visionary, marketer, and salesman. Bob Miner ran development under Larry. Oracle was quite a unique place. I still remember my first encounter with Larry Ellison, when I was interviewing for the job at Oracle. Larry grilled me with technical questions about SQL query optimization. He also sold me on how valuable my stock options would be someday; I took this with a large grain of salt.

I went to work for Bob Miner as a database kernel developer. At Oracle, the kernel group was the small team of developers who wrote the core RDBMS engine. It consisted of Bob Miner, Derry Kabcenell, Forrest Howard, Roger Bamford, Ned Dana, and me. Bob Miner had written a lot of the original Oracle product code but was transitioning out of active development when I joined. Bob was an extremely nice guy, and his development team had great energy. Bob believed in giving his developers lots of responsibility: his trademark slogan, KMABYOYO, stood for "Kiss my ass, baby; you're on your own."

I already knew Derry Kabcenell. Four years before, Derry had recruited me to work on the Xerox Star project. Derry and another developer from Xerox, Forrest Howard, had joined the Oracle kernel group about a year before. Neither had ever worked on an RDBMS before, but both were brilliant developers. Among other things, Derry designed Oracle's row source query execution runtime and Oracle's distributed query mechanism. Derry later went on to succeed Bob Miner as the head of Oracle RDBMS development. Roger Bamford was by far the most experienced RDBMS developer at Oracle. He had worked on SQL RDBMS at IBM and had worked at ESVEL from the start in designing and developing their SQL RDBMS. Roger was a prodigious coder. He also liked innovating and was responsible for the design of several of Oracle's core innovations, including multi-version read consistency with row-level locking and Oracle's Real Application Clusters (RAC) technology. As for me, I was first set to work on the database kernel. Over the next couple of years, I added support for SQL subqueries, jointly did the first design of Oracle's stored procedure language, PL/SQL, and designed and implemented Oracle's core B-tree index component, including support for multi-version read consistency with row-level locking.

Oracle's development culture was like nowhere I had been before. I'll point out three areas where Oracle was well ahead of its time: telecommuting, distributed development, and college hiring. On my first day of work, I was given a DEC VT100 terminal and a modem so I could work from home. At HP and ESVEL, if I needed to work at odd hours, I had to come into the office to do it. At Oracle, the culture was that local developers all came to the office by day but continued working at home by night and on weekends. No one ever came into the office during odd hours. While this sounds very familiar today, I thought this was rather strange at the time, and the modem just sat on my dining room table for six months. Then one weekend I gave it a try, and I never looked back.
I would guess that working from home roughly doubled my productivity. Oracle was one of the first companies to discover the secret that developers will never stop coding if you make it fun and easy for them by letting them work at home. A few years later, Oracle replaced our modems with leased lines to the Oracle data center that gave us high-speed access to development servers, similar to what we get today with the Internet.

Oracle also blazed the trail of having a distributed development organization. Especially on the porting side of the development team, we had a large number of developers working remotely in places like Canada, Washington, and elsewhere. We also rented a nearby condominium, so it was easy for the remote developers to spend time at headquarters when necessary.

Finally, Larry Ellison always wanted the best and brightest students for the development group. The initial database kernel group developers had degrees from Harvard, MIT, and Princeton. Later in the 1980s, Oracle started a college recruiting program that formalized this. Larry hired Larry Lynn and gave him the charter to recruit the best and brightest students from the top computer science schools in the US. These included MIT, Stanford, UC Berkeley, U. Wisconsin Madison, Cal Tech, CMU, Brown, Harvard, Princeton, and U. Illinois Urbana. We were not allowed to hire new college graduates from any other schools. At the start of each summer, the newly hired students were put up at a hotel for several weeks, given training, worked on team projects, and did lots of bonding activities. Much of the current leadership of the database team at Oracle today came through this program.
As we have extended development to other countries like India and China, we have continued to follow the same general college hiring philosophy.

On the business side, Oracle was doing incredibly well. I still remember my first company meeting that June. At the meeting, Larry announced that our fiscal year revenue had more than doubled, from $5.0M in 1983 to $12.7M in 1984. (Note: Oracle's fiscal year ends on May 31.) Oracle had the leading share in the rapidly growing RDBMS market. Oracle had an aggressive sales force featuring the likes of Gary Kennedy and Tom Siebel and had established global sales channels in Europe and Asia. Finally, we had a very swank office building located on Sand Hill Road in Menlo Park, home to the Silicon Valley venture capital community. So while Oracle was only a fast-growing startup with just 100+ employees, it felt like a well-established company.

Oracle Database Generations

Over the remainder of this article, I'll review the key technology innovations we delivered at Oracle through the Oracle Database 9i release in 2001. I believe that one of the reasons the Oracle database has been so successful is that it has major competitive differentiators in the core architecture of the product versus other RDBMS products. This makes it easy to sell the product and helps it command premium prices. To gain this differentiation, Oracle development from the very early days consistently bucked conventional wisdom about how to build an RDBMS. By conventional wisdom, I mean the RDBMS technology directions blazed by the IBM System R [2] and UC Berkeley INGRES [3] research projects. These two projects were quite competitive and foreshadowed some of the future competition in the commercial RDBMS market between the SQL (from System R) and QUEL (from INGRES) query languages. Almost all the early commercial RDBMS products followed either IBM System R design principles (IBM SQL/DS, IBM DB2) or INGRES design principles (Britton-Lee, INGRES, Sybase, Microsoft, Illustra). In a number of key areas, Oracle diverged from conventional wisdom to deliver major value to customers. The corollary to this is that Oracle also did not allow competitors to maintain any real or perceived competitive advantages versus Oracle for very long. Larry Ellison was always very quick to direct development to knock off any competitor advantages.

In the 1970s and early 1980s, the IBM mainframe dominated the data center and was the key platform for running enterprise databases. With mainframe database applications, the user interacted via the IBM 3270 block-mode alphanumeric terminal, and the user interface, application, and database programs all ran on the mainframe. The mainframe database market was dominated by IBM and third-party vendors selling non-RDBMS products like IMS, ADABAS, and Cullinet. While Oracle had a product for the mainframe for many years, Oracle's core market was the minicomputer market. Minicomputers were much lower in cost than mainframes but supported the same style of computing. The minicomputer market was blazed by Digital Equipment Corporation (DEC). Oracle started out on the DEC PDP-11 minicomputer, but by the early 1980s the dominant minicomputer platform for Oracle was the hugely successful DEC VAX running the VMS operating system.

Unconventional Innovation - One Portable C Codebase

Oracle made two critical decisions early on. One was the decision in 1979 to build an IBM SQL-compatible RDBMS.
The other was the decision in 1981 to write Oracle version 3 to be portable across multiple operating system (OS) platforms using the C programming language. This was a radical departure from conventional wisdom at the time. The belief was that high-performance database systems required a special code base for each operating system. For example, IBM produced four unique code bases for its SQL RDBMS: SQL/DS for DOS/VSE and VM, DB2 for MVS, DB2 for AS/400, and DB2 for Linux/Unix/Windows. Microsoft SQL Server to this day only runs on Windows. C was also a rather obscure language at the time. It was mostly being used at universities and in the still-young UNIX operating system. Our long-time head of database product management, Ken Jacobs, liked to quip that at this time no one even knew how to spell C.

By the mid-1980s, Oracle had codified the practices around writing portable database software via our C Portable Coding Standards and our Operating System Dependent (OSD) code layer. For developers like me who worked above the OSD layer, the result was much higher productivity: I could rapidly write code that ran across any OS without having to know anything about the OS details. We also soon created the notion of a porting kit. This was the product source code along with a porting guide that enabled internal and external developers to port the Oracle database to new platforms by simply re-implementing the OSD code layer. Developers who knew a particular OS could port the Oracle database to their OS platform while remaining largely uninformed about how the Oracle database worked. The OSD layer was less than 2% of the overall source code.

Portability was critically important to Oracle's growth over the years, as we were able to navigate the transitions from mainframes, to minicomputers, to SMPs, to PCs, and to MPPs and SMP clusters. During the height of the client-server era alone, there must have been 30+ different server platforms for which Oracle was available. These included DEC VAX, Data General, Prime, Sun, ATT, Wang, Stratus, Pyramid, Sequent, nCube, IBM SP1, Silicon Graphics, Siemens Nixdorf, Elxsi, IBM RS6000, HP PA-RISC, and more. And Oracle was able to make a profitable business out of this. For each new server startup that came along, rather than risk any investment on our side on the unproven platform, we would sell the porting kit to the startup and have them do the port of Oracle to their platform. If the platform was successful, we would have a good business selling Oracle to their customers, and we would later move the port in-house. If the platform was unsuccessful, we would still have the initial money paid by the startup for the porting kit.

Key Competitor - INGRES

Through the 1990s, we always seemed to have one key competitor who was our archrival at a given time. The competitive battleground was always characterized by a series of feature wars: how do the products differ so that one is better than the other?
Ken Jacobs, our head of database product management, supplied us with detailed material on competitors' products so that we were always well-armed on both the sales and development sides to take on the competitor. When I joined Oracle in 1984, the key competitor was INGRES. The feature wars were focused around query language, query performance, and distributed query.

The query language war was QUEL vs. SQL. INGRES argued that QUEL was superior. IBM and Oracle said SQL was better. There was also a US standards (ANSI) effort to define a standard relational database language. The ANSI committee was determined to define its own relational query language that was presumably going to be better than both QUEL and SQL. Around 1984, IBM decided to provide the SQL language definition to ANSI (and INGRES did not provide the QUEL definition). The ANSI committee and its counterpart in ISO put their support behind SQL and in 1986 published the first SQL standard [5][6]. INGRES eventually accepted defeat and embraced SQL, but it lost momentum in the marketplace as it took several years to make the transition from QUEL to SQL. Professor Michael Stonebraker, who led both the UC Berkeley and commercial INGRES products and was a fervent proponent of QUEL, later took to referring to SQL as "Intergalactic Dataspeak."

The next feature war was fought over query performance. In 1983, a young professor at the University of Wisconsin, David DeWitt, published the Wisconsin benchmark [4]. The benchmark consisted of a small set of simple queries including select, project, join, and aggregate. Professor DeWitt not only published the benchmark but also published the results of running the benchmark against three databases: University INGRES, commercial INGRES, and Oracle. The initial results favored INGRES, and this of course created an uproar at Oracle. When I joined Oracle, the database kernel team was fully engaged in improving Oracle performance on the benchmark queries. Roger Bamford was building Oracle's first high-performance sort engine, so Oracle could do sort-merge joins and high-performance aggregate functions. Derry Kabcenell was building the Oracle row source dataflow query processing engine. By the end of the war, Oracle had the industry's highest performance on the Wisconsin queries, driven by these state-of-the-art query processing components. This was the first of many benchmark wars fought by Oracle. The result was always the same: the product was vastly improved, and Oracle won the war. This first benchmark war also produced a lasting change in Oracle's software license agreement: from then on, we required third parties to get our permission before publishing performance results using Oracle.

The final feature war with INGRES was fought over distributed query. INGRES had developed a distributed query technology called INGRES Star. Professor Stonebraker was able to convince the RDBMS marketplace that this was a must-have feature. While we weren't convinced that this was that important at the time, we couldn't permit INGRES to have a perceived competitive advantage over us. So Derry Kabcenell quickly came up with a design for Oracle Star; all of the Oracle kernel developers quickly built their parts; and in about three months we shipped Oracle Star. My recollection is that we were able to ship even before INGRES Star went into production. Over the years, Oracle distributed query has turned out to be a very popular product feature.
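To an application developer, an Oracle distributed query looked like ordinary SQL. Here is a minimal sketch; it assumes a database link named hq_db has already been created to the remote database, and the table and column names are hypothetical illustrations, not examples from the article.

    -- join a local table to a table in a remote database via a database link
    SELECT e.ename, d.dname
      FROM emp e, dept@hq_db d
     WHERE e.deptno = d.deptno;

The @hq_db suffix is the only visible difference from a purely local query; the optimizer and execution engine take care of the cross-database work.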
Unconventional Innovation - One RDBMS for All Workloads

The distributed query war also was the start of a major philosophical direction at Oracle. INGRES Star was actually a separate product from INGRES. It had its own specialized data dictionary, query optimizer, and query execution engine. Oracle Star was not a separate product. It was simply a new product feature. Over the years, a clear difference in philosophy emerged between Oracle and others. Competitors and academics would often claim that you needed a new kind of specialized database for certain types of workloads. INGRES built INGRES Star for distributed query; Teradata and IBM built specialized databases for scale-out, parallel query. Oracle always took the approach of adding these new features to the core Oracle database. Most customers apparently preferred the Oracle approach, as it simplified the lives of their administrators and developers, who were able to work with just one database as opposed to many.

Oracle version 5.1 with distributed query shipped in 1986 and remained the top RDBMS in the market. INGRES was second in the market and never again seriously challenged Oracle for the top spot. On the business side, Oracle's revenue nearly doubled to $23.1M by the end of fiscal year 1985 and then more than doubled to $55.3M by the end of fiscal year 1986. And in March 1986, Oracle did a very successful initial public offering with a market cap of $270M. Larry was right about the future value of the Oracle stock after all.

Client-Server Era: 1985-1994

By the mid-1980s, the PC was starting to become a factor in major enterprises, and this led to the client-server era of computing. With client-server database applications, the user interface and the application code now ran on the PC. The minicomputer was reborn as the database server. The DEC VAX minicomputer was dominant during the early years, but it was soon challenged by a host of minicomputer makers, virtually all running their own variants of the UNIX operating system. First, startups like Sequent, Pyramid, and Sun entered the market. Later, the established players like ATT, IBM, and HP followed. These new servers were all symmetric multiprocessors (SMPs): they had multiple processors residing in the server that shared the same memory. SMPs have the nice property that the systems can be scaled as the business processing requirements grow. They can start with two processors and then expand to 4, 8, 16, or more processors.

Unconventional Innovation - Multi-version Read Consistency

At the start of the client-server era, Oracle was primarily being used for running reporting workloads or mixed workloads of reporting and lightweight online transaction processing (OLTP). Typical OLTP transactions are the booking of a sale at a retail store, the transfer of funds between accounts at a bank, and the recording of billing data for a phone call.

The notion of a transaction is a fundamental property of a database and deserves some more explanation. Consider a simple funds transfer transaction: $100 is transferred from Jim's account to Bob's account. This transaction must be atomic (all or nothing) and consistent: either Jim's account balance is decreased by $100 and Bob's account balance is increased by $100, or there is no change at all. You must never end up in an inconsistent state where, say, the $100 was deducted from Jim's account balance but Bob's account balance was not changed. The transaction must be durable: no matter what failures happen after the completion of the funds transfer, the changes in the account balances must survive the failure. Finally, if two funds transfer transactions happen concurrently involving some of the same accounts, they must be isolated: they must behave as though each funds transfer followed the other in some order.

Locking is the most common mechanism used to implement transaction isolation (aka concurrency control) in a database system. There are two forms of locks: read locks and write locks. When a row of data is read by a transaction, a read lock is obtained on the row. When a row of data is updated, a write lock is obtained on the row. Locking follows the protocol that readers must wait for writers, and writers must wait for both readers and writers. For example, a request to obtain a write lock on a row already locked by another transaction must wait for the other transaction to complete.

Starting as early as Oracle version 4 in 1984, Oracle had been experimenting with what it called multi-version read consistency to support mixed workloads: when a query starts running, Oracle effectively creates a snapshot of the database and runs the query against this snapshot. This snapshot never changes for the life of the query, even as the underlying data is updated by other users, so no read locks need to be obtained on the data. This has the very nice property that read-only queries never need to wait for writers, and writers never need to wait for readers. So transactions can flow through the system at a much higher rate. This was ideal for the types of workloads being run on Oracle at the time. This was in contrast to the conventional wisdom, promoted by everyone else in the RDBMS world, that concurrency control should be implemented using read and write locks at the table, block, or row level. The problem was that conventional read/write locking worked very poorly for real-world workloads. It exhibited poor multi-user concurrency and was prone to deadlocks. Users were forced to circumvent this poor behavior by doing dirty reads (reading data without obtaining the locks). Queries run with dirty reads potentially returned inconsistent results.
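As a concrete illustration, here is a minimal SQL sketch of the funds transfer just described. The accounts table and its columns are hypothetical names of mine, not schema from the article.

    -- atomicity: both updates commit together, or neither takes effect
    UPDATE accounts SET balance = balance - 100 WHERE owner = 'Jim';
    UPDATE accounts SET balance = balance + 100 WHERE owner = 'Bob';
    COMMIT; -- durability: once the commit returns, the transfer survives any crash

Under multi-version read consistency, a report that starts before the COMMIT, say SELECT SUM(balance) FROM accounts, computes its result from its snapshot: it neither waits on the writer's row locks nor sees a state in which the $100 has left Jim's account but not yet arrived in Bob's.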
With the new, powerful SMP servers coming to market, Oracle saw the opportunity to move into the higher-end OLTP database market with the next release of the database, version 6. In order to do this, we decided the key requirement was row-level locking. Row-level locking would support the highest level of concurrency and allow us to scale much better on the new large SMPs than the other existing RDBMSs, which all used block-level locking. There were significant challenges in implementing row-level locking. All the existing row-level locking mechanisms used lock escalation: if a transaction grabbed too many row locks, the RDBMS would automatically escalate the transaction's locks to the block or even table level. This limited the use of memory by the locking system. The problem was that escalation could cause deadlocks, so we decided to implement row-level locking with no lock escalation. Finally, we decided to retain multi-version read consistency too. Nothing like this had ever been designed before in any academic or commercial setting. Roger Bamford was the key designer and builder of the core technology. I designed the B-tree indexes that leveraged Roger's mechanism to also support multi-version read consistency and row-level locking. This was by far the most technically challenging work I ever did as a developer at Oracle.

The resulting Oracle version 6 multi-version read consistency with row-level locking gave Oracle an enduring competitive advantage. It gave us the core technology we needed to be the most scalable database for OLTP on the popular SMP servers. Later releases extended multi-version read consistency with row-level locking to work across database clusters and distributed systems. To this day, none of our competitors have matched this design. Figure 2 summarizes the advantages of Oracle multi-version read consistency versus conventional locking.

<insert PowerPoint slide 2> Figure 2
Unconventional Innovation - Undo Written to the Database

As part of this effort, an unconventional logging design was implemented that also provided Oracle with an enduring competitive advantage. Logging is the mechanism used to ensure database durability in the face of system crashes. The conventional design is to mix transaction undo and redo in the log. To recover from a system crash, the redo is replayed to bring the database to the state it was in at the time of the crash, and then undo is applied to roll back any transactions that were not committed at the time of the crash. This worked fine for conventional locking systems, but in order to implement multi-version read consistency, we needed the ability to read blocks as of a point in time in the past. To do this, we needed to undo changes to the blocks from all transactions that updated the block after the snapshot time of the query. Logs periodically got recycled, so we decided to store the undo in special database tables called rollback segments.

This unconventional storage of the undo in database tables has proven to be the source of a series of differentiating features over the years. Examples include the ability to recycle the log even in the middle of a long-running transaction; fast-start rollback, which allows lazy transaction rollback after crash recovery; flashback query, which allows a SQL query to run against a database snapshot of a past point in time; flashback table, which allows a table to be restored to a point in time in the past; and our OPS and RAC database cluster technology.
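Flashback query, as it later surfaced in the SQL syntax of subsequent releases, makes this mechanism directly visible to developers. A minimal sketch, with a hypothetical table name:

    -- read the data as it stood 15 minutes ago, reconstructed from
    -- the undo that Oracle keeps inside the database itself
    SELECT owner, balance
      FROM accounts AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);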
When version 6.0 shipped in 1988, Oracle had by far the most scalable SMP database technology. We proved it by producing world-record TP1 debit-credit benchmark results. Oracle 6.0 was a complete rewrite of the lower half of the RDBMS. It was a large body of complex, new software, and it took us a year or two to stabilize this version into a robust, mature product.

Oracle's revenue roughly doubled every year from 1986 to 1989. By the end of fiscal year 1989 (June 1989), Oracle revenue was at $589M. On the surface, everything looked great. Below the surface, though, problems were starting to develop. Sybase was beginning to be perceived as the technology leader, so the sales force was having a hard time hitting the annual goal of doubling revenue. Fiscal year 1990 revenue managed to rise 66% to $971M. Then Oracle hit the wall. The next quarter (FY1991 Q1), revenue grew only about 30%, and the company announced its first-ever loss. The stock price plummeted. What happened over the next year behind the scenes at Oracle was largely not visible to me in development, since we were completely focused on finishing Oracle version 7. But during that year, Oracle came perilously close to running out of funds and had to secure a loan from Nippon Steel. Larry replaced much of the senior management team. A new CFO was brought in to clean up our balance sheet and our business practices. The head of North American sales was replaced. Finally, the company's business stabilized and started to grow again. This entire episode is described in detail in [7].


Key Competitor - Sybase

On the competition front, Oracle was facing a strong challenge from a new RDBMS startup, Sybase. Sybase was founded by Bob Epstein and others. Bob had previously worked on the University INGRES project and at the database machine startup Britton-Lee. Sybase shipped its first product release in 1986. The design center for Sybase was to build a database for client-server computing. Oracle had originally been designed for mainframe/minicomputer computing. The version 6 rewrite gave Oracle a very strong offering on the server side of client-server computing, but it lacked the innovations Sybase had introduced to minimize the message roundtrips between the client and server. With one network roundtrip, Sybase could invoke a stored procedure that performed a complex database transaction. Sybase triggers allowed developers to implement referential integrity on the server side with no additional message roundtrips. Sybase also had a programmable two-phase commit mechanism for distributed transactions. Oracle lacked all these features, so we started losing momentum in the marketplace.

Unconventional Innovation - Just-in-time SQL Compilation with Caching

The goal of Oracle version 7 was not just to neutralize Sybase's competitive differentiators; it was to blow them away. We had originally planned an interim release, Oracle 6.1, that knocked off just the Sybase referential integrity feature, but that wasn't good enough for Larry, so he cancelled the release. Oracle7 shipped in June of 1992. It had everything Larry wanted and more. We introduced Oracle PL/SQL stored procedures and triggers. Then we leapfrogged Sybase in two major ways in developer productivity. We implemented declarative ANSI-standard referential integrity: in one simple phrase, we could express integrity constraints that required Sybase developers to code 50 or more lines of complex trigger code. And we implemented declarative two-phase commit: with one SQL COMMIT statement, we could do what Sybase developers needed procedural coding to accomplish.
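The following is a minimal sketch of what these two declarative features looked like to an Oracle7 developer. All table, column, and database link names are hypothetical illustrations, not examples from the article.

    -- declarative referential integrity: one phrase replaces dozens of
    -- lines of hand-written trigger code
    CREATE TABLE dept (
      deptno  NUMBER PRIMARY KEY,
      dname   VARCHAR2(30)
    );
    CREATE TABLE emp (
      empno   NUMBER PRIMARY KEY,
      deptno  NUMBER REFERENCES dept (deptno)
    );

    -- declarative two-phase commit: one COMMIT atomically covers changes
    -- made in two different databases reached via database links
    UPDATE orders@east_db SET status = 'SHIPPED' WHERE order_id = 42;
    UPDATE stock@west_db  SET qty = qty - 1 WHERE part_id = 7;
    COMMIT;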
On the SMP scalability side, Sybase had only block-level locking, compared to the superior and mature Oracle multi-versioning and row-level locking technology. This provided Oracle with strong superiority over Sybase in the transaction processing benchmark wars and also in the war for the hearts and minds of ISVs. For example, SAP refused to support Sybase as a database under SAP due to its lack of row-level locking. It's only now, over 20 years later, that SAP is finally starting to support Sybase.

Oracle also implemented some clever technology for SQL execution called the library cache. The conventional wisdom, especially from the IBM school of RDBMS, was to focus on OLTP applications, which typically had a small set of static SQL statements. IBM's best practice was for customers to pre-compile these static SQL statements ahead of time, then compile the application program that referenced them, and then run the application. IBM did let customers create SQL dynamically at runtime, but this was not recommended for good performance. Oracle's original focus was more on dynamic reporting workloads, where queries were created dynamically by end users, often via tools. So Oracle had no mechanism for pre-compiling SQL and supported only dynamic creation, compilation, and execution of SQL. However, for OLTP, it was too expensive for every user to dynamically create and compile SQL. With the library cache, only the first user of a query went through the overhead of compiling the query. Thereafter, all subsequent users of the same query could find the compiled plan for the query and simply re-execute it.

In 1995, Oracle7 was the top RDBMS in the market. Sybase was second in the market and never again seriously challenged Oracle for the top spot. A large part of Oracle's revenue was still on the VAX VMS platform at this time; however, DEC had built a relational database named Rdb that was making inroads into Oracle's market share on the VAX platform. Fortunately for Oracle, DEC decided to sell Rdb to Oracle in 1994. This was a timely and profitable acquisition for Oracle. Oracle has taken good care of the Rdb installed base over the years, and even at the time of writing this article in 2012, Rdb remains a profitable business for Oracle. The only downside of the acquisition was that Oracle did not succeed in retaining all the top technical talent at Rdb, and a number of very good Rdb developers went to work on Microsoft SQL Server. Oracle finished fiscal year 1995 with revenue just under $3B, with almost 50% growth versus 1994.

Internet Computing Era: 1995-2001

By the mid-1990s, the Internet had changed computing again. With Internet database applications, the user interface runs in a Web browser, the application runs on an application server, and the database continues to run on a database server.

As e-commerce burst onto the scene, Oracle became the database of choice behind many Internet sites. Amazon, eBay, and Yahoo all were heavy users of the Oracle database and remain so to this day. To make the Internet connection completely clear, Larry decided to change the name of the database product. I still remember him calmly walking into one of our server meetings and telling us he had cancelled Oracle 8.1. After a dramatic pause, he said it was henceforth to be known as Oracle 8i, the database for the Internet.

The most highly marketed feature in Oracle 8i was our support for Java, the language of the Internet, as a new language for writing database stored procedures and triggers. Also, the Internet demanded that databases be online all the time, so we added a number of features to eliminate many types of planned and unplanned downtime. Fast-start recovery allowed database administrators (DBAs) to control how long database crash recovery would take by bounding roll-forward time and allowing the database to open without waiting for rollback to complete. Oracle 8i online data reorganization eliminated many forms of downtime around index creation, in-place index defragmentation, index reorganization, and table/partition reorganization. Each of these operations was done completely online: no table or row locks are held, and no users are prevented from reading or writing the data. Although Java received all the marketing hype, Oracle's strong OLTP feature set, including the new Oracle 8i features to minimize downtime, was the critical factor that enabled Oracle to be the leading engine behind e-commerce on the Internet.
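A brief sketch of what online reorganization looked like from a DBA's point of view; the index name is hypothetical. Both statements run while applications continue reading and writing the underlying table.

    -- rebuild an index online, then swap the new copy in
    ALTER INDEX orders_ix REBUILD ONLINE;
    -- defragment an index in place, reclaiming space from deleted entries
    ALTER INDEX orders_ix COALESCE;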
Larry also became a much more public figure with his promotion of the Network Computer. The Network Computer vision forecast the death of the PC: PCs on the desktop would be replaced by much lower-cost, diskless computers that only ran Web browsers accessing resources on the Internet. Oracle actually created a subsidiary for building these network computers. This was considered a radical notion at the time and drew headlines in the media. Unfortunately, this was a good idea but well before its time. The prices of PCs plummeted close to the level of the network computers, and by 2000 the idea had fizzled. Of course, today we have many forms of network computers, such as smartphones and tablet computers, accessing Internet resources in the cloud.

Oracle Culture - Server Meeting

When I talk to people outside Oracle, one of the things they admire about Oracle is how swiftly it is able to execute on major new business and engineering directions. Larry sets the direction, and the whole company immediately follows. While Larry is an extremely smart guy and knows database technology extremely well, he still needs to be highly informed about what is going on in engineering to make good decisions. In the early days of Oracle, I'm sure it was easy for Larry to simply meet with Bob Miner and later Derry Kabcenell to make informed decisions. As the company grew, I imagine this became more difficult, and sometime in the mid-90s, Larry instituted what is known inside Oracle as the database server meeting. This is a standing meeting every Tuesday where Larry, some of Larry's staff, and members of database development meet. This is the forum where we inform Larry of what we're building for new product releases and how we propose to name, price, and package new product features. The format of the meetings is very casual. Developers and product managers who are running particular projects generally present directly to Larry. The meetings are very effective at both informing Larry and allowing many levels of the development organization to get direct guidance from Larry. The middleware and applications development groups also have server meetings with Larry.

Key Competitor - Informix

By around 1995, Informix had become the key competitor for Oracle. Informix had a solid SQL RDBMS known as Informix Online Dynamic Server (ODS), and it had an ambitious CEO, Phil White, who was determined to overtake Oracle. The Oracle-Informix competitive battle was fought on two main fronts: a Transaction Processing Performance Council benchmark (TPC-C) war and a feature war around object-relational database technology. I'll focus on the object-relational feature war.

By the early 1990s, customers were starting to look to RDBMSs to store more than just the traditional structured datatypes like numbers, strings, and dates. They wanted databases to also store unstructured data like files, audio, images, video, and spatial information. This spurred database research into how to extend RDBMSs to store these new kinds of data and how to make databases more extensible so that even more datatypes could be added in the future. The leading research project on this topic was UC Berkeley's POSTGRES project. POSTGRES was the follow-on project to INGRES and was once again led by Professors Michael Stonebraker and Eugene Wong. Among other things, POSTGRES researched how to add complex objects and extensibility to an RDBMS. In the early 1990s, Professor Stonebraker founded Illustra to commercialize the POSTGRES object-relational technology and came up with a great concept for marketing it: DataBlades. A DataBlade was a mechanism for extending the Illustra RDBMS with new datatypes. Illustra provided a core set of DataBlades for audio, video, spatial, and time-series datatypes. The idea was that third parties would extend the product further with their own DataBlades. The industry analysts loved the concept, and objects and extensibility became a must-have feature for all RDBMS products.

By this time, I was running the SQL layer development team at Oracle. Larry Ellison had been following the POSTGRES project and Illustra, and he had given us the direction to add object-relational technology to the next release of the Oracle database. My guess is that Informix got wind of this project and decided to leapfrog Oracle by acquiring Illustra toward the end of 1995. At the time of the acquisition, Informix claimed that it could merge the Illustra codebase into Informix ODS in six months. This would have beaten Oracle to market with an object-relational database by at least a year. However, the actual merged product, Informix Universal Server, didn't come to market until sometime during 1997. At about the same time, we delivered Oracle version 8.0 with object-relational support and built-in datatypes for multimedia, spatial, text, and files (LOBs).

Object-relational technology had an enduring impact on RDBMSs. Before this, all unstructured data was stored in files. Today, all sophisticated database applications include substantial amounts of multimedia, spatial, and LOB data. I would estimate that at least 50% of the data in typical databases today is unstructured data. By providing the capability for a single repository to take care of the consistency, availability, and security of all the data in the application, object-relational RDBMSs greatly simplified application development and administration. While the built-in unstructured datatypes are very popular, the extensibility frameworks and complex data structure concepts were never used much by customers. They were primarily used by the RDBMS vendors themselves to build the few popular datatypes that most customers wanted.

The intense rivalry between Informix and Oracle soon came to an abrupt end. Informix was sued by the government for financial irregularities, and prior financial results had to be restated for many years. Informix ceased being a major competitor in the market.

Unconventional Innovation - Shared Everything Clusters

SMP servers have been the principal platform for running Oracle databases from the client-server era in the mid-1980s through today. SMP servers have two major weaknesses: availability and scalability. An SMP server is a single point of failure: if the server goes down, the entire database goes down. SMP servers also have limited scalability: as the number of processors grows, the cost of implementing high-performance shared memory becomes prohibitive. The remedy to these problems is to run the database on a cluster of SMP servers. A cluster contains one or more SMP server nodes that communicate over a private network. If one node fails, the remaining nodes continue to run. There is no shared memory across the cluster. In a classic shared nothing cluster, each node has its own local storage that is not visible to other nodes. In a shared disk cluster, all nodes in the cluster share the same storage.

The database research community concluded very early on that shared disk cluster database technology could not be made to scale well for OLTP workloads [8]. So virtually all the competing RDBMSs went the shared nothing direction, including Teradata, Tandem NonStop SQL, Informix XPS, and IBM DB2 Parallel Edition. The only contrarians to this view were IBM on the mainframe and Oracle. Oracle had been dabbling with shared disk database cluster technology for many years. Oracle version 5 had support for running the database on a VAX VMS cluster. Oracle version 6.2 reworked the cluster database functionality into a new feature called Oracle Parallel Server (OPS). This technology evolved over the years across the Oracle version 7, 8, and 8i releases. Customers had mixed success with OPS. It worked quite well running read-mostly, parallel query workloads for data warehouses, and mixed workloads with little write-write conflict across nodes, but it didn't scale well for most OLTP applications.

By the mid-1990s at Oracle, there was a raging debate: should we abandon our OPS shared disk cluster technology and implement shared nothing database cluster technology instead? Fortunately, at that time Roger Bamford came up with a breakthrough design that we later called cache fusion. With cache fusion, Roger invented algorithms that enabled packaged OLTP applications like SAP and Oracle E-Business Suite to scale quite well.
Larry was presented with the two alternative approaches: move to shared nothing clustering, or go with the new cache fusion-based shared disk clustering approach. The cache fusion approach was very risky. It had never been done successfully before in a commercial or even a research RDBMS. Conventional wisdom said to go with the tried and true shared nothing approach; however, shared nothing database clusters had never been successful in the market for running anything other than data warehousing (DW) workloads. Packaged OLTP applications like SAP R/3 and Oracle E-Business Suite (EBS) were simply not designed to scale on shared nothing clusters. On the other hand, shared disk databases with cache fusion had the potential to scale packaged applications without complex development work by the application developers. Larry took the risky approach and went with cache fusion. With the delivery of Oracle version 9i in 2001, Oracle delivered this radical new shared everything database cluster technology. Larry named it Real Application Clusters (RAC), as it was the first open-systems database clustering technology that could scale all database workloads, from OLTP to DW to SAP and Oracle EBS.

Unconventional Innovation - Oracle Data Warehousing

By 1994, Oracle had entered the emerging data warehousing market. A DW is a very large database containing many years of historical transaction detail data and summary data that requires parallel processing for analysis. I will focus on three key technology areas where we diverged from conventional wisdom.

First, the conventional wisdom is that you need a specialized database for running DW workloads. Examples of this are Teradata, IBM DB2 Parallel Edition, and Netezza. We viewed DW support as just a new feature to add to the core RDBMS. After all, most customer workloads are not purely OLTP or purely DW; they are a combination of both. Also, it's much lower cost for customers to be able to use the same staff to manage and develop against one database rather than multiple databases. The fact that Oracle has a single RDBMS for all workloads continues to be a major differentiator to this day.

Second, the conventional wisdom is that you need a shared nothing database architecture to run scalable parallel query against very large databases. All the specialized DW databases mentioned above have a shared nothing architecture. On a shared nothing database cluster, the notions of nodes, table partitions, and parallelism are all tied together: table partitions are local to a node, so only the processors on that node are able to scan the data in those partitions. Since we used a shared everything cluster architecture, we were able to decouple these notions: table partitions are accessible to all nodes, so processors on all nodes are able to process data in a partition in parallel. Finally, traditional shared nothing designs only support hash partitioning of the data across nodes.
Our partitioning was able to partition data by range, by hash, and eventually by combinations of both. Our shared everything approach had some major benefits versus the competitors at the time. The architecture is inherently tolerant of the loss of a node, as all remaining nodes can access all data. New nodes can be added or dropped without the need to re-partition the existing data across the new nodes, as is required by the hash partitioning schemes used by shared nothing architectures. We supported rolling window partitions: new partitions could be created and loaded online, and old partitions could be dropped online, with no downtime required of the application. Finally, range partitioning enabled the query optimizer to do partition pruning based on range predicates; hash partitioning only allowed partition pruning based on equality predicates.

Third, the conventional wisdom was that extract/transform/load (ETL), multi-dimensional analytics (OLAP), and predictive analytics had to be performed on separate data management engines specialized for each of these three areas. This was problematic, as it increased DW management complexity and created the potential for security holes due to extra data movement. Instead, we pursued the course of pushing each of these forms of processing into the database engine. ETL became ELT (extract, load, transform), where the transformations were performed using the database's parallel SQL engine. Oracle acquired and developed OLAP, ROLAP, and predictive analytics technologies, which were also pushed into the database engine. Admittedly, we were much more successful in pushing the ROLAP and predictive analytics technologies into the database than we were with OLAP.
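Here is a minimal sketch of the range-partitioned, rolling window design described above. All table, partition, and column names are hypothetical illustrations of mine, not examples from the article.

    -- a fact table partitioned by month
    CREATE TABLE sales (
      sale_date  DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2001_01 VALUES LESS THAN (TO_DATE('2001-02-01','YYYY-MM-DD')),
      PARTITION p2001_02 VALUES LESS THAN (TO_DATE('2001-03-01','YYYY-MM-DD'))
    );

    -- roll the window forward online: add the new month, drop the oldest
    ALTER TABLE sales ADD PARTITION p2001_03
      VALUES LESS THAN (TO_DATE('2001-04-01','YYYY-MM-DD'));
    ALTER TABLE sales DROP PARTITION p2001_01;

    -- a range predicate lets the optimizer prune the scan to one partition
    SELECT SUM(amount) FROM sales
     WHERE sale_date >= TO_DATE('2001-02-01','YYYY-MM-DD')
       AND sale_date <  TO_DATE('2001-03-01','YYYY-MM-DD');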

With INGRES, Sybase, and Informix no longer major competitive forces in the market, Oracle was left to compete with the two remaining heavyweights: IBM DB2 and Microsoft SQL Server. Oracle was #1 in the market, followed by IBM, with Microsoft a distant third. By this time, the plethora of minicomputer vendors from the 1990s had all either been acquired or gone out of business. Most enterprise customers ran Oracle on UNIX RISC servers from Sun, HP, and IBM. Linux was just emerging as a viable platform. Windows was emerging as a solid RDBMS platform for departmental use and was growing very rapidly.

By the end of fiscal year 2001, Oracle revenue was at about $10.9B, but it had grown just 7% that year. During Oracle fiscal year 2001, the Internet bubble had burst: Oracle stock hit a historic high of over $45 per share, but by the end of the year it had tumbled to about $15 per share. Even after the bursting of the Internet bubble, though, Oracle still had a market cap of around $90B. On the product side, we had just shipped Oracle 9i with Real Application Clusters (RAC) in May 2001. Between Oracle RAC and many of the other unconventional technologies we delivered over the previous 20 years, Oracle had a highly differentiated product versus the competition, as shown in figure 3. Most of these differentiators were built into the core architecture of the product and so were difficult for competitors to match. RDBMS industry analysts like Gartner and Forrester all agreed Oracle was the overall RDBMS market share and technology leader.

<insert PowerPoint slide 3> Figure 3
Epilog

Relational database technology has been an extremely vibrant area of innovation for an extremely long time. In the forty years since E. F. Codd published his seminal paper, there has been huge innovation. The original concept of a declarative language that operates against tables that are independent of any specific physical representation has been extremely powerful. As hardware technologies have evolved from the mainframe to minicomputers to SMPs to clusters of commodity SMPs, SQL RDBMS systems have been able to rapidly evolve to leverage the latest generation of hardware. This change all happened without requiring application developers to change their programs. This is all quite remarkable. Think about other important software technologies like specific programming languages and operating systems. All had periods of innovation and then became mature. SQL RDBMS has been a glaring exception: even after 40 years, the technology remains vibrant. You can see this in the number of database startup companies that continue to be created. No less than three SQL RDBMS startup companies were sold in 2010 for over $200M each. I see no end in sight to the possibilities for future innovation in SQL RDBMS.

On a personal level, I've spent about thirty years in the RDBMS industry, most of them at Oracle. It's been a very exciting and rewarding place to spend my career.

References

[1] Gray, J. N., "Notes on Database Operating Systems," Operating Systems: An Advanced Course, Springer-Verlag, 1978, pp. 393-481.
[2] Chamberlin, D. D., Astrahan, M. M., et al., "A History and Evaluation of System R," Communications of the ACM, vol. 24, no. 10, October 1981, pp. 632-646.
[3] Stonebraker, M., Held, G., Wong, E., Kreps, P., "The Design and Implementation of INGRES," ACM Transactions on Database Systems, vol. 1, no. 3, September 1976, pp. 189-222.
[4] Bitton, D., DeWitt, D. J., Turbyfill, C., "Benchmarking Database Systems: A Systematic Approach," Proceedings of the 1983 Very Large Database Conference, October 1983.
[5] X3.135-1986, American National Standard for Information Systems - Database Language - SQL.
[6] ISO 9075:1987, Database Language - SQL.
[7] Symonds, M., Softwar: An Intimate Portrait of Larry Ellison and Oracle, Simon & Schuster, 2003.
[8] Mohan, C., Narang, I., "Recovery and Coherency-Control Protocols for Fast Intersystem Page Transfer and Fine-Granularity Locking in a Shared Disks Transaction Environment," Proceedings of the 17th International Conference on Very Large Databases, Barcelona, September 1991.

Biography

Andrew Mendelsohn is senior vice president for Database Server Technologies at Oracle. He is responsible for the development and product management of Oracle's family of database products. These include software products such as Oracle Database, Oracle TimesTen In-Memory Database, Oracle Berkeley DB, and Oracle NoSQL Database, and engineered systems such as Oracle Exadata Database Machine, Oracle Database Appliance, and Oracle Big Data Appliance. Mr. Mendelsohn has been at Oracle since May 1984. He began his career at Oracle as a developer on Oracle Database version 5.1. Prior to joining Oracle, he worked at Hewlett-Packard and ESVEL. Mr. Mendelsohn holds a B.S.E. in electrical engineering and computer science from Princeton University and performed graduate work in computer science at MIT.
