Evolution of Computer Technology in the Last 25 Years

The advancement of computing technology is commonly divided into six generations. The physical size of computers decreased dramatically from the first-generation vacuum-tube machines to the third-generation computers built on integrated-circuit technology. Fourth- and fifth-generation computers increased chip efficiency further through very-large-scale integration (VLSI) and ultra-large-scale integration (ULSI) (Haldya, 1999). During the fifth generation, the idea of using multiple chips to solve the same problem flourished, building on parallel-computing designs developed during the fourth generation. With improved hardware, increased network bandwidth, and more efficient algorithms, massively parallel architectures allowed fifth-generation computers to increase computing efficiency significantly (Drako, 1995). This paper discusses how computer technology evolved from the end of the fifth generation to the sixth-generation computers of today.

Improvements in microprocessor technology allowed millions of transistors to be placed on a single integrated chip, opening the generation of computers based on ultra-large-scale integration, or ULSI. The 64-bit microprocessor was developed during this time and became the fifth-generation chip most widely used today. Even older fourth-generation architecture concepts such as Reduced Instruction Set Computers (RISC) and Complex Instruction Set Computers (CISC) benefited from ULSI technology. During the fourth generation, microprocessors were commonly classified as either RISC or CISC architectures, and the difference between the two was clearly distinguishable: RISC used a small, simple instruction set that required fewer transistors but more memory to accomplish a task, while CISC offered a larger instruction set that required more transistors but less memory (Hennessy, 1991). Because computing resources were limited, programmers chose the chip type best suited to the application they had to deliver. With the advancement of microprocessors, however, the 64-bit chip now has far more transistors and far more addressable memory available for computing. Today, differentiating between what used to be the two main categories of microprocessor is almost pointless, given the complexity of modern 64-bit chips on both sides: many new CISC chips behave like RISC, with increased processor clock cycles, while new RISC chips have an increased number of instructions available, like CISC (Cole, 2015).
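To make the classic trade-off concrete, the sketch below contrasts how a single statement such as a = b + c might be encoded on the two styles of machine. The mnemonics and byte sizes are invented for illustration, not real ISA encodings: a load/store RISC needs several fixed-length instructions, while a CISC can fold the memory accesses into one longer, variable-length instruction.

```python
# Hypothetical encodings of the statement "a = b + c" on a load/store RISC
# machine versus a memory-to-memory CISC machine. Mnemonics and byte counts
# are invented for illustration; they are not real ISA encodings.

risc_program = [                 # fixed-length 4-byte instructions;
    ("LOAD  r1, [b]", 4),        # operands must be loaded into registers,
    ("LOAD  r2, [c]", 4),        # computed on, then stored back
    ("ADD   r3, r1, r2", 4),
    ("STORE [a], r3", 4),
]

cisc_program = [                 # one longer, variable-length instruction
    ("ADD [a], [b], [c]", 6),    # that works directly on memory operands
]

for name, program in [("RISC", risc_program), ("CISC", cisc_program)]:
    size = sum(nbytes for _, nbytes in program)
    print(f"{name}: {len(program)} instructions, {size} bytes of code")
```

Running this prints 4 instructions and 16 bytes for the RISC version against 1 instruction and 6 bytes for the CISC version, mirroring the transistor-versus-memory trade-off described above.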

Two of the most important hardware techniques used to improve performance during the fourth and fifth generations of computer development have been pipelining and caches. Both techniques rely on using more devices to achieve higher performance. Pipelining may have been available only in some mainframes and supercomputers during the fourth generation, but it became common in computer architecture during the fifth generation, and it forms part of the baseline for sixth-generation computers, which use decentralized processing to support artificial intelligence and neural-network computing. Pipelining improves the throughput of a machine without changing the basic cycle time, increasing performance by exploiting instruction-level parallelism (Hennessy, 1991). Instruction-level parallelism is available when the instructions in a sequence are independent and can therefore be executed in parallel by overlapping them. Pipelining unarguably led to faster speeds and better performance, but the hardware could not keep up with the demand for even faster machines to run applications that process large amounts of data or critical commercial transactions very quickly. In addition to advances in pipelining, advances in cache memory significantly improved how computers access data: placing a small pool of memory inside the processor, or very close to it, reduced the need to fetch data directly from main memory. This technique made cache memories one of the most important ideas in computer architecture (Cohen, 2010).
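The throughput benefit of pipelining can be seen with a back-of-the-envelope model. The sketch below uses the standard textbook idealization, with assumed values of a five-stage pipeline and 1,000 independent instructions, and ignores stalls, hazards, and branch penalties.

```python
# Back-of-the-envelope cycle counts for n independent instructions on a
# k-stage machine, with and without pipelining. This is the standard
# textbook idealization: no stalls, hazards, or branch penalties.

def unpipelined_cycles(n: int, k: int) -> int:
    # without overlap, each instruction occupies the machine for k cycles
    return n * k

def pipelined_cycles(n: int, k: int) -> int:
    # k cycles to fill the pipeline, then one instruction completes per cycle
    return k + (n - 1)

n, k = 1000, 5  # assumed workload size and pipeline depth
print(f"unpipelined: {unpipelined_cycles(n, k)} cycles")   # 5000
print(f"pipelined:   {pipelined_cycles(n, k)} cycles")     # 1004
print(f"speedup:     {unpipelined_cycles(n, k) / pipelined_cycles(n, k):.2f}x")  # ~4.98x
```

With the pipeline kept full, the machine approaches one completed instruction per cycle, so the speedup tends toward the number of pipeline stages.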

Cache memories substantially improved performance through more effective use of memory. They were first used in the third-generation computers of the late 1960s and early 1970s, in both large machines and minicomputers; from the fourth generation on, virtually every microprocessor has included support for a cache. Although large caches can certainly improve performance, total cache size, associativity, and block size all directly affect performance, and their optimal values depend on the details of a design (Hennessy, 1991). Like microprocessors and pipelining, cache technology has improved significantly over the last two decades. Traditional cache architectures are demand-fetch: cache lines are brought into the cache only when they are explicitly required by the process. Prefetching increases the efficiency of this process by anticipating that some memory will be needed in the near future and proactively fetching it into the cache. Early prefetching was done in either software or hardware. As prefetching has grown more complex, more recent research has looked at combining the imprecise future knowledge available to the compiler with the detailed run-time information available to the hardware, for example a programmable prefetching engine built around a run-ahead table populated by explicit software instructions (Srinivasan, 2011).
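To illustrate demand fetching versus prefetching, the following minimal sketch models a direct-mapped cache with an optional next-line prefetcher. The cache geometry, access trace, and prefetch policy are all simplifying assumptions; real designs, including the run-ahead-table engine cited above, are far more sophisticated.

```python
# Minimal direct-mapped cache model with an optional next-line prefetcher.
# The geometry, trace, and policy below are illustrative assumptions only.

NUM_LINES = 8   # cache lines in the (tiny) cache
BLOCK = 4       # addresses per cache line

def simulate(trace, prefetch=False):
    tags = [None] * NUM_LINES   # one tag per cache line
    hits = 0
    for addr in trace:
        line = addr // BLOCK
        idx, tag = line % NUM_LINES, line // NUM_LINES
        if tags[idx] == tag:
            hits += 1
        else:
            tags[idx] = tag      # demand fetch: bring the line in on a miss
        if prefetch:             # next-line prefetch: eagerly fetch line + 1
            nxt = line + 1
            tags[nxt % NUM_LINES] = nxt // NUM_LINES
    return hits / len(trace)

trace = list(range(64))          # a simple sequential access pattern
print(f"demand fetch only:       {simulate(trace):.0%} hit rate")        # 75%
print(f"with next-line prefetch: {simulate(trace, prefetch=True):.0%}")  # 98%
```

On the sequential trace, the demand-fetch cache misses once per line (a 75% hit rate), while the next-line prefetcher hides every miss except the very first.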

With such advancements in core computer technologies, the ability to process data and store information has become increasingly decentralized. From the cloud to PC-over-IP technology, cheaper storage, faster processors, and higher-bandwidth wide-area networks allow modern computers to work in collaboration rather than in isolation. If the first through fifth generations focused on improving hardware efficiency to meet the demands of software engineers, the current sixth generation is more about how humans interact with computers to enrich human lives. Computers have become smaller while remaining capable of running the necessary applications on their own or through servers across the internetwork. Everything has become smarter, faster, smaller, and connected. With improved networks and parallel computing, sixth-generation computers are getting closer to simulating how the human brain functions. Using basic algorithms, probability and statistics, and economic theories, new computer technology can simulate human-like decision-making to improve human lives and help solve more complex problems. In the sixth generation, we are experiencing the true potential of commercial artificial intelligence.

References

Cole, Bernard, (2015). New CISC Architecture Takes on RISC. EE Times. Retrieved from http://www.eetimes.com

Drako, Nikos, (1995). An Overview of Computational Science. The Computational Science Education Project.

Haldya, Micky, (1999). Computer Architecture. Biyani's Think Tanks, Chap. 5, 26–27.

Hennessy, John L. & Jouppi, Norman P., (1991). Computer Technology and Architecture: An Evolving Interaction. Computer, vol. 24, 18–29.

Srinivasan, James R., (2011). Improving Cache Utilization. Technical Report no. 800, 31–35.

Cohen, Uri, (2010). From Caching to Space-based Architecture: The Evolution of Memory. Enterprise Systems Journal. Retrieved from https://esj.com/
