Monday, October 11, 2010

The End of my Blog

Well, after many months of research and scouring the internet for interesting emerging technologies, this blog now comes to an end for the assessment.

This blog tool has been critical in letting me voice and store ideas about certain key emerging technologies that I have found to be very influential over the longer term for society and businesses.

Light Peak, silicon photonics and virtualisation are all making headway, if not now then soon. I look forward to seeing further improvements in these technologies, and to what other technologies can be created because of them.

Sunday, October 3, 2010

Virtualisation and Cloud

Found an interesting article in regards to cloud computing and virtualisation:

http://www.zdnet.com/blog/virtualization/does-virtualization-equal-cloud-computing/1475

I’ve been taking part in an online discussion of virtualization over on LinkedIn. Stowe Spivey, Owner, Intermarket Solutions LLC, posed an interesting question “Is VM here to stay in and of itself or will it morph into, become a part of, the cloud?” That’s an interesting thing to consider.

From my vantage point, virtual machine software, one of five different types of virtualization technology found in the virtual processing layer, one of seven layers of virtualization technology found in my model of virtualization technology, is a useful tool in creating cloud computing environments, but may be used by itself.

Bert Armijo, 3Tera’s SVP Sales and Marketing, added this comment:

Cloud and virtualization are interlinked in much the same way cars and spark plugs are interlinked - they are different layers of a comprehensive service. In the car analogy, what we ultimately want is transportation, and along the way we become consumers of both spark plugs and cars. Some of the technology which provides our transportation we know about as consumers, while others disappear under the hood. This is what’s happening with virtualization and cloud today.

VMs, while actually a very old technology from the mainframe days, have had a major impact on the way we think about and use PC based servers. As a technology, though, virtualization is focused on resource usage, how to leverage excess capacity in servers by running multiple OS instances on the same physical box. (btw - while someone pointed out virtualization isn’t required for cloud, all successful clouds today use virtualization.)

Cloud computing (referring to infrastructure services rather than SaaS) is about more than VMs on-demand; it’s about turning data centers into online services. All the infrastructure components you’d have deployed in a physical data center in the past now have to be exposed programmatically for the cloud to accommodate your applications. Security, storage, networking, life-cycle control, inventory, HA are all required to be part of the cloud. This is obviously quite a different technology space from virtualization.

Cloud computing could not have happened without virtualization as the complexities of trying to expose a traditional data center as an online service are too complex to have ever been reliable. However, as cloud matures, VMs will disappear under the hood.

While interesting, it seems to me that analogy is incomplete. While the spark plug/automobile analogy has some merit, it doesn’t really work with the idea that spark plugs don’t necessarily have to be installed in an automobile. I’ve seen them in chain saws, scooters, motorcycles, ATVs, boats and many other motorized tools and vehicles.

Virtual machine software, in the same fashion, need not be part of a cloud computing environment. Furthermore, cloud computing can be accomplished without a virtual machine in sight.

Some forms of cloud computing, such as infrastructure as a service (IaaS), are often likely to be based upon virtual machine software. Even this form of cloud computing might be running on physical machines if the goal is high-performance computing or “extreme” transaction processing.

Thursday, September 23, 2010

EDS - VMWare Case Studies Success Story

Found a whole bunch of case study success stories on implementations of VMware ESX Server, and this particular one caught my eye because the company is quite big in Australia and manages many large international companies as well.

http://www.vmware.com/files/pdf/customers/apac_au_07Q4_cs_vmw_eds_english.pdf

Basically, the company EDS is one of the world’s largest IT services businesses, with a global client list that includes General Motors, the UK Ministry of Defence and Kraft Foods. Australian customers include the Australian Taxation Office, Westpac Banking Corporation, Telstra and the Commonwealth Bank of Australia. The company maintains an Australian workforce of more than 6,000, and its portfolio of offerings includes business process outsourcing, information technology outsourcing and application services.

What makes this case study so interesting is how they are using server virtualisation to create virtual servers for their own customers' needs, on a huge scale, with many benefits (cost reduction, speed to market, greenhouse gas emissions reduction) achieved in a short amount of time.

Desktop virtualisation

Well, it seems desktop virtualisation is the hottest IT trend in virtualisation at the moment. Basically, IT administrators are turning to this technology to help simplify management, improve ROI, accelerate provisioning of new machines, and achieve better security and compliance.

According to an article on ZDNet, 19 percent of businesses are planning to roll out this technology in the next year or so (by 2011).

http://whitepapers.zdnet.com/abstract.aspx?docid=2112035&promo=100303

Monday, September 20, 2010

Server virtualisation and the takeup of this technology

It seems virtualisation, especially server virtualisation, is really becoming mainstream, and pretty much all organisations are looking at implementing some type of virtualisation, be it for development or for full production virtual machines.

Here is an interesting article I found that looks at the statistics of virtualisation:
  • 86 percent of respondents are involved in exploring, testing or using virtualization technology.
  • The largest portion of respondents (40 percent) will approach server virtualization by implementing a standalone pilot, with success creating the case for the production environment.
  • The most important criteria when selecting a server virtualization solution are hardware reduction and infrastructure manageability.
  • The two biggest hurdles to overcome when implementing server virtualization are lack of staff expertise and identifying applications that are unaffected by virtualization. This latter hurdle is most problematic to large organizations.
  • The top three objectives for implementing virtualization in the production environment are improving disaster recovery, lowering administrative costs and immediate hardware and software savings.
http://www.virtualiqsolutions.com/docs/INS%20WP.pdf

Saturday, September 18, 2010

Virtualisation but for mobiles...

Well, the virtualisation bug seems to be spreading across all electronic hardware. Virtualisation is already a great tool for PCs and servers, and now it looks like it will extend to the mobile market too.

Found this article by Ramana Rolla while searching Google News for the future of virtualisation.

http://www.mwd.com/2010/10/virtualization-is-going-to-change-everything-about-a-mobile-performance/


Basically, VMware is looking at implementing something similar, whereby smartphones can run multiple OSes on the same hardware, giving users the ability to have applications from different platforms on the same phone.

Where will virtualisation end up next... who knows... only time will tell...

Thursday, September 16, 2010

World's largest virtual desktop implementation

Found this great article on the world's largest implementation of virtualisation on gizmag.com

http://www.gizmag.com/virtualization-green-computing/11168/

In what's billed as the world’s largest virtual desktop deployment, 356,800 virtualized desktops will be supplied to schools across Brazil, bringing computer access to millions. Userful Multiplier software effectively turns one computer into up to 10 independent PC workstations, reducing CO2 emissions by up to 15 tons per year per system and electronic waste by up to 80%.

Additional users can work on a single computer by simply attaching extra monitors, mice and keyboards. "This deployment alone saves more than 170,000 tons of CO2 emissions annually, the same as taking 28,000 cars off the road, or planting 41,000 acres of trees”, said Sean Rousseau, Marketing Manager at Userful.
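Out of curiosity, the scale quoted above can be sanity-checked with some quick arithmetic. This is a rough sketch using only the article's figures, and it assumes every host actually reaches the full 10 seats:

```python
# Sanity-check the Userful deployment figures quoted above.
# Assumes the full 10-seats-per-host ratio; real deployments may use fewer.
TOTAL_DESKTOPS = 356_800   # virtual desktops supplied to Brazilian schools
SEATS_PER_HOST = 10        # Userful Multiplier workstations per physical PC
CO2_SAVED_TONS = 170_000   # annual CO2 savings claimed by Userful
CARS_EQUIVALENT = 28_000   # "cars off the road" equivalent claimed

hosts_needed = TOTAL_DESKTOPS // SEATS_PER_HOST
physical_pcs_avoided = TOTAL_DESKTOPS - hosts_needed
tons_per_car = CO2_SAVED_TONS / CARS_EQUIVALENT

print(f"Physical host PCs needed:   {hosts_needed:,}")        # 35,680
print(f"Physical PCs avoided:       {physical_pcs_avoided:,}") # 321,120
print(f"Implied CO2 per car (tons): {tons_per_car:.1f}")
```

So roughly 321,000 physical PCs never get bought, shipped or powered, which is where the emissions and e-waste claims come from.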

With increasing CO2 emissions, growing evidence about the toxic waste from old electronic goods and more landfill than we can deal with, green computing is a hot topic.

In developing countries virtualization provides huge scope to improve student to computer ratios at a relatively low cost and in a sustainable way. It’s also preferable to us dumping our old environmentally unfriendly computers on these countries as we rush to upgrade or buy the "next big thing".

Userful Multiplier is also a low-cost, energy-saving option for schools, libraries, internet cafes, government, call centers and many other industries, leveraging the unused processing power of computers that sit idle while users check mail, work on documents or surf the web. Each user has access to the full power of the multi-core processor and when more than one user needs the processor at exactly the same time, the computer splits its resources to perform all tasks equally quickly.

According to Userful, all other virtualization solutions lead to sacrifices in performance. Userful offers the features of a full PC including high performance video for less than $50 per additional seat in large deployments (not including monitors and keyboards) and uses standard PC hardware.

Monday, September 13, 2010

A Case Study of Server Virtualisation

Found this really interesting article/case study on a successful implementation of server virtualisation

http://itleaders.com.au/articles/a-case-study-of-server-virtualisation--virtualization-.html

Server Virtualisation is getting a lot of press, but like most new technologies, this 'IT gibberish' takes some translation for business owners to see the benefits.

IT Leaders recently completed the virtualisation of its head office server rack, giving you an easy-to-understand case study.

What is Virtualisation and why is it good?

Traditionally a business will run a different server for every one or two server roles. For example, a typical situation for a smaller SME would be to have one server running Microsoft Small Business Server, which gives the business file & print sharing, an email server, and domain controller.

To this base they would then add another server for new roles. If they start running a database, this would be put on a new physical server. If they have staff logging in remotely to work, then a new physical server would be used for staff to log into.

There are good reasons to run different servers, such as:

  • Each IT business service can operate and be monitored independently - a speed issue in one service is isolated to that service only.
  • Some software doesn't play nicely together on the one Windows installation.
  • Some software will (by design) hog a lot of memory or processor, which can slow down other activities on the server.
  • If one application or server role fails, it can be worked on and repaired without having to bring down other critical business facilities (for example, if the database server has a problem it can be worked on without interrupting email, file access, internet access, etc.).


The thing about running lots of physical servers, however, is that the more you have, the more they cost to buy (capital cost), run (electricity and cooling), and maintain (maintenance and repairs).

With virtualisation, you can have multiple server environments running on a single physical server. In other words, you can have 3, 4, 5 or more installations of Windows Server running on the same server box. Each installation works independently, but all share the one set of hardware.

This reduces capital costs, running costs and maintenance costs.

Case Study: Turning Six Servers into One

Before Virtualisation

Here are the physical servers we had when the project began:

  1. Domain Controller and Email Server and File/Print Share
  2. Terminal Server and File Share
  3. Sharepoint Server
  4. WSUS Server (controls software updates for our clients)
  5. Performance Monitoring Server (monitors every computer device on all our clients networks)
  6. Reporting Server (generates performance reports from the monitoring)

In addition we wanted to set up two additional servers:

7. Data Centre monitoring (dedicated server for our new data centre facilities)
8. Backup Server (to offer our clients a backup service which runs over night across the internet, storing their data on our server.)


Our power usage for the server rack was approximately 3500 VA, and would have gone over 4500 VA by adding two more servers.

Adding more servers was also going to require more cooling, further increasing capital and running costs.

Time to maintain 8 servers properly would normally be in the realm of 20-30 hours per month.

Considerations during virtualisation

So we went down the virtualisation path for our own equipment and were interested in experimenting with the boundaries of how many virtual servers will run effectively and safely on a reasonably priced single physical server. Bear in mind that we have less than 20 staff at this point, so there is minimal load on several of the servers, a situation that will be different for each business.

Not all servers are good to virtualise. Our Performance Monitoring server currently needs its own physical server. We also decided to leave the number one server as it was, which minimised the operational impact of the project. Both of these servers are less than 12 months old and are good-quality Hewlett Packard hardware, so we were already happy with them from those angles. That left servers 2, 3, 4, 6, 7 and 8 as candidates for virtualisation.

This required purchasing one new server with two dual-core processors, good hard drive speed, and plenty of drive space. We started off with 3.5TB of drive space and 12 GB of RAM, with room to increase if required.

Using Microsoft Server 2008 Hyper-V (VMware is another good option), we then installed the host operating system and six other operating systems. We were able to cut each server over independently, resulting in a seamless transition. After each new server was brought online we monitored server software performance and hardware utilisation. With virtualisation you want to maximise the utilisation of your server hardware, but not overload it. The technology is incredibly flexible and allows you to assign hardware resources dynamically. For example, we could allocate one Windows server environment 2 GB of RAM (out of the 12 GB installed), 50 GB of drive space and one 'virtual processor', then monitor the server's performance and tweak up or down accordingly. Allocating another 1 GB of RAM is just a five-minute tweak.
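The allocate-monitor-tweak loop described in the case study can be sketched as a simple capacity check. This is purely illustrative: the guest names and sizes below are my own made-up examples, not the actual IT Leaders configuration.

```python
# Illustrative capacity check for carving a 12 GB host into guest
# allocations, in the spirit of the tuning loop described above.
# Guest names and sizes are hypothetical, not the real configuration.
HOST_RAM_GB = 12
HOST_DISK_GB = 3500   # ~3.5 TB of drive space

guests = {
    "terminal-server": {"ram_gb": 2, "disk_gb": 50},
    "sharepoint":      {"ram_gb": 2, "disk_gb": 100},
    "wsus":            {"ram_gb": 1, "disk_gb": 200},
    "reporting":       {"ram_gb": 2, "disk_gb": 100},
    "dc-monitoring":   {"ram_gb": 1, "disk_gb": 50},
    "backup":          {"ram_gb": 2, "disk_gb": 1000},
}

def headroom(guests, host_ram, host_disk):
    """Return remaining RAM and disk after all static allocations."""
    ram_used = sum(g["ram_gb"] for g in guests.values())
    disk_used = sum(g["disk_gb"] for g in guests.values())
    if ram_used > host_ram or disk_used > host_disk:
        raise ValueError("allocations exceed host capacity")
    return host_ram - ram_used, host_disk - disk_used

ram_free, disk_free = headroom(guests, HOST_RAM_GB, HOST_DISK_GB)
print(f"RAM headroom: {ram_free} GB, disk headroom: {disk_free} GB")

# The "five minute tweak": give the terminal server another 1 GB,
# then re-check that the host still has headroom.
guests["terminal-server"]["ram_gb"] += 1
ram_free, disk_free = headroom(guests, HOST_RAM_GB, HOST_DISK_GB)
print(f"After tweak, RAM headroom: {ram_free} GB")
```

The point of the check is the one the article makes: you want to keep the host busy, but an allocation plan that exceeds physical capacity should fail loudly before you apply it.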

You also need to be aware that the more roles one set of physical hardware performs, the more important it is that this hardware is reliable and that the downsides of hardware failure are catered for. You will definitely want fast-response warranties (e.g. 2hr onsite response from the hardware vendor). You may also wish to have new spare components on hand for immediate swap-out, since a 2hr warranty response does not guarantee that specific parts won't take a week or more to arrive. In our case we will keep a spare motherboard on hand, as that model will be slow to procure, and employ the extra protection of image-based backups that can be restored to any hardware.

After Virtualisation

Number of Physical Servers: 3

Number of Operating System environments: 8

Approximate power usage: 1800VA

Time to maintain: Around 11 hours per month.

So, on analysis, we have roughly halved our direct power usage (and cut it by 60% against the projected eight-server figure), halved our maintenance costs, and removed the need to upgrade cooling.
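The claimed savings can be checked against the figures listed earlier in the post. A quick sketch of the arithmetic (the 25-hour maintenance midpoint is my own assumption from the 20-30 hour estimate):

```python
# Check the savings using the figures from the case study above.
power_before_va = 3500      # original rack draw
power_projected_va = 4500   # projected draw with two more physical servers
power_after_va = 1800       # draw after virtualisation

vs_before = 1 - power_after_va / power_before_va        # ~49% cut
vs_projected = 1 - power_after_va / power_projected_va  # 60% cut

maint_before_hrs = (20 + 30) / 2   # assumed midpoint of 20-30 hrs/month
maint_saving = 1 - 11 / maint_before_hrs                # 56% cut

print(f"Power cut vs old rack:       {vs_before:.0%}")
print(f"Power cut vs projected rack: {vs_projected:.0%}")
print(f"Maintenance time cut:        {maint_saving:.0%}")
```

So the "better than halved" claim holds cleanly against the projected eight-server rack, and the maintenance time is cut by more than half against the midpoint estimate.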

Our equipment at our data centre in Varsity Lakes is designed from the ground up for virtualisation. Using the latest generation of Hewlett Packard 'blade' technology, we can run hundreds of virtual environments on industrial-strength servers, with a range of redundancy options such as complete automated fail-over in the event of hardware failure.

Talk to IT Leaders today about whether virtualisation can improve your business.

Saturday, September 11, 2010

My experiences with Virtual Desktop Technologies

I have used a number of different types of virtualisation technology over the years in IT, from local desktop virtualisation to VMware ESX Server implementations.

As a developer I used a lot of local desktop virtualisation technologies to test the various platforms our custom software had to run on. The main benefit driving the use of this technology was that we didn't have to have different hardware for different operating systems. All the operating systems could be run locally on the same machine, so we could ensure that our custom software could be deployed and run successfully without any issues.

This technology not only saves huge amounts of capital, because you don't have to buy all the physical servers, but also huge amounts of time, as virtual desktop instances can easily be created while you continue working on the local machine.

Some of the desktop virtualisation software I have used includes Microsoft's virtual desktop product, Parallels, and VirtualBox.

Friday, September 10, 2010

Disadvantages with Server Virtualisation

As with all things in technology, every emerging technology has advantages and disadvantages. Earlier I spoke about the advantages of implementing virtualisation, and how cost savings and server consolidation could be achieved with server virtualisation. Now I'm going to look at some of the disadvantages associated with virtualisation.

1. Magnified physical failures
Imagine you had 10 very important servers running on one physical host, and one of its internal components (e.g. the RAID controller) fails, wiping out all the important information for all of these machines.

2. Degraded performance
Because more than one machine is running on the same physical host, performance can drop if all the virtual machines need resources at the same time, leading to slower response times.

3. Increased complexity
Virtualisation adds a new layer of complexity to the physical host running the machines, which in turn can add considerable time and effort when working out the root cause of a problem if the issue is within the virtualisation layer.

4. Virtual machine sprawl
Virtual server management can get quite complex, and because installing a new machine is quick, a problem can arise if the number of servers grows faster than the number of administrators who are supposed to manage them.

These are just a few of the disadvantages of virtualisation I have found on the internet.

Monday, September 6, 2010

Benefits of Virtualisation

Virtualisation, Cloud Computing, Green IT, have all been touted as the next big thing in the world of computing. All for the right reasons, because there are several advantages to be had by implementing virtualisation.

Cost savings – this is one of the major benefits most often cited as a reason for implementing virtualisation.
Firstly, when you virtualise you can cut down on computing equipment, like the number of servers you employ. The net effect is a drastic reduction in the amount of energy your company uses, which represents a huge cost saving.
Secondly, by reducing the amount of computing equipment, you eliminate the need for more space to house it. Real estate can be a significant part of your costs, so when you do not need more space for equipment, you save money. Cost savings of between 50 and 70% have been quoted as typical when virtualisation is implemented.
Thirdly, if you run your IT in-house, you need to employ people to administer your systems. After virtualisation, the number of systems is greatly reduced, and therefore your cost of system administration is significantly cut down.

Simplification of IT – when you virtualise, you simplify your computing by, for example, having various applications running on a single server.

Reduced need for upgrades – when you run multiple applications each on their own server, you must upgrade every one of these systems each time there is a new patch. With virtualisation there is no need for you to upgrade (if your systems are outsourced), or you will have fewer systems to upgrade.

Improved efficiency and availability of resources – when you virtualise, your computing efficiency is dramatically improved and you use much less energy as a consequence. One clear advantage of virtualisation is the reduction in energy consumption: when you use less energy, the amount of greenhouse gas emitted into the atmosphere is reduced. Virtualisation is therefore rightly seen as one tool in the fight against global warming.

Your business is likely to recover more quickly from a disaster (if you were hit by one) when you have virtualised than when you run every system on its own physical hardware.

Strategic advantage – many people nowadays are becoming environmentally aware. As a result, more and more customers and potential customers are looking at the green credentials of the companies they do business with. If you can present your company as an environmentally friendly one, taking the right actions to protect the environment, this is a plus and can give you an advantage over your competitors.

To sum up: virtualisation will enable better system and hardware usage; curb data centre sprawl; reduce your IT administration costs by reducing the number of physical machines you have to manage; contribute towards the fight against global warming by reducing the amount of energy you use; and give you a strategic advantage over your competitors.

Saturday, August 21, 2010

Desktop Virtualisation

One type of desktop virtualisation is to use your desktop device only to display data while all the processing and storage (applications & data) is done by back-end servers.
Your desktop device could be a traditional PC or a specialised device like SunRay virtual display Client.

To implement this type of desktop virtualisation, you need to run virtual desktop infrastructure software on the desktops.

There are several virtual desktop infrastructure software options, like:
  • VMware's Virtual Desktop Infrastructure.
  • Sun VDI Software.
Benefits of Implementing this type of Desktop Virtualisation:
  • Data is stored in a centralised location for easy management and is less likely to be lost or stolen.
  • Huge amounts of savings from reduced desktop system administration.
  • Reduced IT maintenance, upgrades, and threats from viruses and other malware.
Another type of desktop virtualisation is to run software which enables you to install different operating systems on your desktop.

You could have a system running Windows and install this software, which enables you to run Linux on the same computer.

Data/Storage Virtualisation

Managing disk storage was once simple: If we needed more space, we got a bigger disk drive. But data storage needs grew, so we started adding multiple disk drives. Finding and managing these became harder and took more time, so we developed RAID, network-attached storage and storage-area networks. Still, managing and maintaining thousands of disk drives became an ever more onerous task.

The latest answer to this dilemma is storage virtualisation, which adds a new layer of software and/or hardware between storage systems and servers, so that applications no longer need to know on which specific drives, partitions or storage subsystems their data resides. Administrators can identify, provision and manage distributed storage as if it were a single, consolidated resource. Availability also increases with storage virtualisation, since applications aren't restricted to specific storage resources and are thus insulated from most interruptions.

Also, storage virtualisation generally helps automate the expansion of storage capacity, reducing the need for manual provisioning. Storage resources can be updated on the fly without affecting application performance, thus reducing downtime.
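The mapping layer described above can be pictured as a table that translates logical block addresses into physical ones. Here is a toy sketch of that idea; the disk names and block numbers are hypothetical, and this is not any vendor's actual implementation:

```python
# Toy model of the storage-virtualisation layer described above:
# applications address a logical volume, and the layer maps each
# logical block range ("extent") onto whichever physical disk holds it.
class VirtualVolume:
    def __init__(self, name):
        self.name = name
        self.extents = []  # (logical_start, length, disk, physical_start)

    def add_extent(self, logical_start, length, disk, physical_start):
        self.extents.append((logical_start, length, disk, physical_start))

    def locate(self, logical_block):
        """Translate a logical block number to (disk, physical block)."""
        for start, length, disk, phys in self.extents:
            if start <= logical_block < start + length:
                return disk, phys + (logical_block - start)
        raise ValueError(f"block {logical_block} not mapped in {self.name}")

vol = VirtualVolume("app-data")
vol.add_extent(0, 1000, "disk-A", 5000)   # first 1000 blocks on disk-A
vol.add_extent(1000, 500, "disk-B", 0)    # next 500 blocks on disk-B

print(vol.locate(999))    # ('disk-A', 5999)
print(vol.locate(1200))   # ('disk-B', 200)

# Capacity can be grown "on the fly" by appending another extent,
# without the application ever seeing a different volume.
vol.add_extent(1500, 2000, "disk-C", 0)
```

Because applications only ever see the logical volume, an administrator can grow the volume or migrate extents between disks just by rewriting entries in this mapping table, which is exactly why storage virtualisation insulates applications from most interruptions.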

Thursday, August 19, 2010

Server Virtualisation

There are three types of server virtualisation - operating system virtualisation, hardware emulation & paravirtualisation.

Operating System (OS) Virtualisation - also known as "containers" - enables a "guest" operating system to run on top of a "host" operating system. The guest operating system makes the resources of the hardware on which it is installed available to the applications using it. The applications have no interaction with the host operating system; in fact, as far as these applications are concerned, they are the only ones interacting with the hardware.

You can use this type of virtualisation (container virtualisation) to offer different operating systems to different users, with a single physical machine.

This is the ideal type of virtualisation for web hosting companies. They can host several different websites on the same physical machine, with each website having its own "container". Each website appears to be in total control of the machine, but in reality it is sharing it with other websites.

One serious drawback of operating system virtualisation is that your choice of operating system is limited by the host operating system.
An example of operating system virtualisation:
Sun - Solaris Operating System (Solaris Containers).

Hardware Emulation
With this type of virtualisation, the virtualisation software, also known as a hypervisor, serves up an emulated hardware environment for the guest operating system to operate on. The emulated hardware environment is called a virtual machine monitor (VMM). That is, the virtualisation software (hypervisor) "fools" the guest operating system into thinking it has real hardware on which to operate, by presenting it with the virtual environment known as the VMM.

The hypervisor sits between the VMM and the physical hardware and acts as an interpreter between the two. Each guest OS runs on one VMM.
This implementation means that multiple operating systems, and indeed different kinds of operating system, can run on the same machine. For example, you can run Windows and Linux on the same machine, or different versions of Windows on the same physical machine.

Software development companies can use this type of virtualisation to test their software on different operating systems without having to buy new machines for each operating system.
You can also use hardware emulation virtualisation to move your application environments onto the same physical machine.

The major disadvantage of hardware emulation is that the hypervisor (virtualisation software) hurts performance, and you will often find that applications run slower on virtualised systems.
One other drawback is that, since the hypervisor acts as an interpreter between the VMM and the physical machine, device drivers need to be installed into the hypervisor, and these drivers need updating from time to time. Users cannot install these drivers themselves, which can lead to situations where some hardware will not run in a virtualised environment because there are no hypervisor drivers for it.

Where to get hardware emulation virtualisation (hypervisor software):
VMware - VMware Server & ESX Server
Microsoft - Virtual Server (supports x86 servers only, emphasis on Microsoft OSes) and Hyper-V
Xen - open source alternative

Paravirtualisation
With a paravirtualisation implementation, the virtualisation software sits between the guest operating system and the resources of the physical machine. The virtualisation software controls access to the resources of the physical machine.

The main advantage of this implementation is that there is less performance overhead. There is also no need for device drivers, as is the case with hardware emulation.
It has its own drawbacks, however, chief among them that the guest operating system typically has to be modified to cooperate with the virtualisation software, which limits the operating systems you can run.

Examples:
Xen from XenSource (found in Red Hat and Novell distributions)
Virtual Iron, which is also based on Xen

Monday, August 16, 2010

How virtualisation works...

To understand how virtualisation works, it is better perhaps to look at the different types of virtualisation. Each works in a different way and is implemented differently.

There are three different types of virtualisation: Server Virtualisation, Data or Storage Virtualisation and Desktop Virtualisation.

Saturday, August 14, 2010

Virtualisation History

Just to give a little background on virtualisation: this is a technology which emerged in the 1960s, then went into hibernation before re-emerging around 2000.

Virtualisation was first implemented more than 30 years ago by IBM as a way to logically partition mainframe computers into separate virtual machines. These partitions allowed mainframes to “multitask”: run multiple applications and processes at the same time. Since mainframes were expensive resources at the time, they were designed for partitioning as a way to fully leverage the investment.

A great timeline on Wapedia describes the different periods in the history of virtualisation:
http://wapedia.mobi/en/Timeline_of_virtualization_development

Tuesday, August 10, 2010

Why Virtualisation?

Most machines in data centres are running at only 10-15% of their capacity much of the time. In other words, much of their capacity and other resources (like the electricity they use) is wasted. These machines can do better: by enabling virtualisation on them, they can be made to support more than one system, making better use of their resources.
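That utilisation figure translates directly into a consolidation ratio. Here is a rough sketch of the arithmetic; the 60% target load and the 20-server fleet are my own illustrative assumptions, not figures from any source:

```python
# Rough consolidation estimate from the utilisation figures above.
# If servers idle at 10-15% of capacity and a virtualised host can
# comfortably run at, say, 60% (an assumed target), several guests
# fit on each physical host.
avg_utilisation = 0.125    # midpoint of the 10-15% range
target_utilisation = 0.60  # assumed comfortable load for a host

guests_per_host = int(target_utilisation / avg_utilisation)
print(f"Roughly {guests_per_host} guests per host")

servers = 20  # a hypothetical small fleet
hosts_needed = -(-servers // guests_per_host)  # ceiling division
print(f"{servers} servers consolidate onto ~{hosts_needed} hosts")
```

Even with this conservative target, a fleet shrinks to roughly a quarter of its physical footprint, which is where the power, space and administration savings discussed in this post come from.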

To satisfy the computing needs of your company, you need to add more computing resources – desktop computers, servers, etc. You also need to store data and this might mean putting in separate data servers. This hardware takes up space. We all know that space can be a very expensive commodity. By introducing virtualisation, whereby you host several systems on a single physical server, you are effectively reducing the need for more space.

If your company is running a data centre, you will realise that its cost can run into millions of dollars. By eliminating the need to build a data centre yourself, with virtualisation you would be making some serious cost savings.

More and more people today are becoming environmentally aware. As a result, they are looking at the green credentials of the companies they buy from or want to do business with. So your customers or potential customers are watching you!
One way your company can show potential customers that you care about the environment is to be less energy-dependent by cutting down the amount of energy you consume. One key area where you can cut energy consumption is your IT and computing equipment, especially your servers. By adopting virtualisation, you reduce the number of physical servers you run, and thereby the amount of energy consumed.

This represents a double advantage - you save significant amounts of money on energy costs and you gain strategic advantage when your potential customers know you are taking concrete steps to protect the environment.

If you run your IT in-house, you are probably aware of the need to have full-time staff on your payroll administering your systems. The cost of hiring and keeping a system administrator is significant. When you implement virtualisation, the number of machines you have to take care of is reduced, and hence you cut the cost of system administration.

Monday, August 9, 2010

What is virtualisation?

Virtualisation is a software technology which enables a single computer to run several (sometimes different) guest operating systems (OS) at the same time.

Essentially, virtualisation enables you to run more than one environment on the same hardware. For example, you can run a Windows operating system (like XP) and a Linux operating system (like Ubuntu) on the same computer. By implementing virtualisation, you let different operating systems and applications share the resources of one computer.

This technology is necessary because most of today's computers (x86 computers) were designed to run one operating system on one physical machine. With this mode of operation, the resources of each machine are underutilised most of the time.

What virtualisation does in practice is separate the user from the kind of hardware they are using.
For example, you could be running Windows on a Mac, or a Linux OS on your Windows personal computer (PC).

It is important to note that virtualisation is not a server-only technology; in fact virtualisation can be applied throughout your business. Starting from the desktop, you can “virtualise” nearly all aspects of your IT infrastructure.

The claim at the moment is that virtualisation is set to dramatically change the way we compute.

There are different types of virtualisation: server virtualisation, data or storage virtualisation, and desktop virtualisation.

Saturday, August 7, 2010

The dilemma.... what emerging technology to go with in my research

As I try to finalise what my major emerging technology reports will be based on, I'm stuck on what to choose...

At first I was leaning towards nanotechnology, as this is a fairly new emerging technology. However, the field is really still in its infancy and hasn't yet produced many business applications that I would be able to discuss in my research reports.

After much deliberation and consultation with the course co-ordinator, I decided to go with "Virtualisation", a technology I was already familiar with and one that is making a re-emergence in the business world.

Tuesday, August 3, 2010

Light Peak continued...

Found a really interesting article on cnet.com about Light Peak and how Intel is bringing the optical technology to the masses.

http://news.cnet.com/8301-30685_3-10362246-264.html

One special thing to note is that Light Peak aims to replace all existing connectors with a single type of connector...

"Intel's hope for Light Peak is to create a single connection for video, storage devices, the network, printers, Webcams, and anything else that plugs into a PC. Light Peak uses circuitry that can juggle multiple communication protocols at the same time, and the Light Peak promise is for a universal connector to replace today's incompatible sockets for USB, FireWire, DVI, DisplayPort, and HDMI. It's a hot-plug technology, meaning that devices can be linked when they're up and running. "


Saturday, July 31, 2010

Light Peak

Just found out some more information on another technology, expected to reach mainstream use in one or two years, which uses optical technology to deliver high-speed data transmission between devices, similar to silicon photonics.

Description by Intel

Light Peak is a new high-speed optical cable technology designed to connect your electronic devices to each other. Light Peak delivers high bandwidth starting at 10Gb/s with the potential ability to scale to 100Gb/s over the next decade. At 10Gb/s, you could transfer a full-length Blu-Ray movie in less than 30 seconds. Optical technology also allows for smaller connectors and longer, thinner, and more flexible cables than currently possible. Light Peak also has the ability to run multiple protocols simultaneously over a single cable, enabling the technology to connect devices such as peripherals, displays, disk drives, docking stations, and more. Light Peak components are expected to begin to become available to customers in late 2010, and Intel expects to see Light Peak in PCs and peripherals in 2011.

Existing electrical cable technology in mainstream computing devices is approaching practical limits for speed and length, due to attenuation, noise, and other issues. However, optical technology, used extensively in data centers and telecom communications, does not have these limitations since it transmits data using light instead of electricity. Light Peak brings this optical technology to mainstream computing and consumer electronic devices in a cost-effective manner.
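Intel's "Blu-ray in under 30 seconds" claim above is easy to sanity-check. Assuming a single-layer 25 GB disc (my assumption) and ignoring protocol overhead:

```python
# Sanity check of Intel's "Blu-ray in under 30 seconds" claim.
# A single-layer 25 GB Blu-ray disc is an assumption; a real
# transfer would also lose some bandwidth to protocol overhead.
movie_gigabytes = 25        # single-layer Blu-ray capacity (GB)
link_gigabits_per_sec = 10  # Light Peak's initial speed

# Bytes to bits: multiply by 8, then divide by the link rate.
transfer_seconds = movie_gigabytes * 8 / link_gigabits_per_sec
print(f"Transfer time at 10Gb/s: {transfer_seconds:.0f} s")

# And at the projected 100Gb/s:
print(f"Transfer time at 100Gb/s: {movie_gigabytes * 8 / 100:.0f} s")
```

Twenty seconds at 10Gb/s, comfortably within Intel's "less than 30 seconds" figure, and a dual-layer 50 GB disc would still make it in about 40 seconds.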

Friday, July 30, 2010

How the silicon photonics link works

This is a great explanation, from Intel, of how the optical microchips work...

The optical microchips that Intel is researching are made from silicon, just like conventional microchips. This offers the advantage of being able to use existing production systems and four decades of manufacturing expertise. Furthermore, silicon is a practically inexhaustible resource: after oxygen, it is the second most common element in the Earth's crust, and can be obtained from quartz sand.

An optical microchip functions in a similar way to a conventional chip, except that its conductor paths are not made from metal, and that data is not transferred in the form of electrons. Rather, photons are transported through the silicon in light ducts, referred to as waveguides. To a large extent, an optical microchip is made of three components: a modulator that converts electronic data into light, a laser which acts as a light pump to send the photons through the silicon, and a demodulator that converts the photons back into electronic impulses.

The electronic data is recoded into light pulses in the modulator. Theoretically, this is very straightforward, because digital data only occurs in two states, namely "zero" / "one", or "on" / "off". To put it simply, the modulator operates like a light switch which switches the laser on and off, thereby passing on the digital data in the form of photons. To enable the photons to be transmitted, it is necessary to have a certain type of laser, in this case a Raman laser. Intel has developed a Raman laser composed of silicon and indium phosphide, which will be used on the company's optical microchips. The photons are converted back into electrons by the demodulator, the last missing component of an optical microchip. The demodulator is made up of a silicon core, but has a special germanium coating in order to absorb the light. When the laser pulses strike the demodulator, they are absorbed by the germanium layer, thereby causing electronic pulses to be created. These pulses are passed on to the silicon where they are amplified and can then be processed electrically.
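The "light switch" idea above can be illustrated with a toy simulation of on-off keying. This is a deliberately simplified sketch, not Intel's actual signalling (their link multiplexes several laser channels), but it captures the modulator/demodulator round trip:

```python
# Toy illustration of on-off keying (OOK): the modulator switches
# the laser on for a 1 bit and off for a 0 bit, and the demodulator
# recovers the bits from the presence or absence of light.

def modulate(bits):
    """Modulator: map each bit to a light pulse (1 -> laser on)."""
    return ["light" if b else "dark" for b in bits]

def demodulate(pulses):
    """Demodulator: germanium layer sees light -> 1, no light -> 0."""
    return [1 if p == "light" else 0 for p in pulses]

data = [1, 0, 1, 1, 0, 0, 1]
received = demodulate(modulate(data))
print("Sent:    ", data)
print("Received:", received)
```

The round trip recovers the original bits exactly, which is all the modulator and demodulator have to guarantee; everything else (speed, multiple wavelengths) is engineering on top of this simple idea.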

Wednesday, July 28, 2010

Applications for Silicon Photonics

According to Intel, Silicon photonics will have applications across the computing industry. For example, at these immense data rates one could imagine a wall-sized 3D display for home entertainment and videoconferencing with a resolution so high that the actors or family members appear to be in the room with you.

Tomorrow's datacenter or supercomputer may see components spread throughout a building or even an entire campus, communicating with each other at high speed, as opposed to being confined by heavy copper cables with limited capacity and reach. This will allow datacenter users, such as a search engine company, cloud computing provider or financial datacenter, to increase performance, capabilities and save significant costs in space and energy, or help scientists build more powerful supercomputers to solve the world's biggest problems.

"This achievement of the world's first 50Gbps silicon photonics link with integrated hybrid silicon lasers marks a significant achievement in our long term vision of ‘siliconizing' photonics and bringing high bandwidth, low cost optical communications in and around future PCs, servers, and consumer devices," Rattner said.

Monday, July 26, 2010

Why Silicon Photonics...

So the biggest question asked is: why silicon photonics? According to many in the computer industry, the traditional approach of transferring data over copper wires is reaching its practical limit. With computers and businesses needing more data, faster and more efficiently, silicon photonics steps in to provide the further boost in transmission speed that is needed.

Silicon photonics is based on optical transmission, which is not really new to the communications world. However, the bulkiness and cost of implementing traditional optics in everyday devices have so far made it impractical for mainstream use.

This is where Intel and the convergence of multiple technologies (lasers and silicon manufacturing) come in, making this possible and bringing down the cost of implementation.

Saturday, July 24, 2010

Silicon Photonics - continuing on...

Just found some more information regarding the silicon photonics link which I thought might be of interest...

This was found on a website called GizMag.com, which covers the Silicon Photonics Link by Intel:
http://www.gizmag.com/first-silicon-based-optical-data-connection/15888/

And a great presentation from Intel detailing the new technology
http://download.intel.com/pressroom/pdf/photonics/50G_Silicon_Photonics_Link.pdf?iid=pr_smrelease_vPro_materials1

Tuesday, July 20, 2010

Silicon Photonics and Intel... amazing...

Well I have just found my first emerging technology, which I believe is quite amazing and will change the face of computers and industries quite dramatically over the next few years.

My reaction when I first found out about this technology was: What? Really? Wow, that is really a big breakthrough by Intel. 50Gbps data transfer speed, which can transfer an entire HD movie in seconds.

The idea behind this technology is quite simple, but effective. The Silicon Photonics Link uses lasers (photons) to transfer data, rather than electrons, dramatically increasing data transfer speeds. To add to the excitement, this is just the starting phase for this technology, and Intel expects to achieve as much as 1Tbps (1,000Gbps) in the future.

This, I believe, is a game changer and will soon be seen in more computers and devices well into the future... I think I might make this my topic for research assignment 2... :)

Why is it so hard to find a topic to talk about

Well I have been searching the web for that elusive topic to talk about, and it is just so hard to find the right one. There are so many different emerging technologies out there, and each of them is so very different.

This has been my dilemma... what to choose.

Anyway, I thought I would blog this because it is important to show what's going through our minds at the time...

Hopefully soon I will find the answer to my question of which emerging technology I will be talking about...

Thursday, July 15, 2010

Top 5 Emerging Technologies

Here is what I believe to be the top 5 emerging technologies of today and the reasons why...

Electric Cars - Due to global warming and our economies' reliance on oil, it has become important for individuals, businesses, and governments to provide cleaner modes of transport and reduce our dependence on oil.

Wireless Energy Transfer - This emerging technology can and will have a substantial impact on how we power our devices at home and at work.

Quantum Computing - Will be able to solve certain problems much faster than traditional computers, and will impact every industry if these types of computers become mainstream.

Nanotechnology - Being able to work at the nanoscale has a huge potential impact in medicine, computing, the military, and many other fields.

Artificial Intelligence (AI) - AI has been used in a wide range of fields including medical diagnosis, stock trading, robot control, scientific discovery, and even toys.

Monday, July 12, 2010

Emerging Technology Definition

So what is an emerging technology? According to Wikipedia (2010), 'emerging technologies' are those technical innovations which represent progressive developments within a field for competitive advantage. However, opinions on the degree of impact, status, and economic viability of several emerging technologies can vary.

Emerging technologies are not restricted to one individual field; they span many. Newer technologies can also have a substantial impact on other fields.

Here is a long list of some of the emerging technologies in different fields today:
http://en.wikipedia.org/wiki/List_of_emerging_technologies

Introduction

Well, this is my third year of a Masters in IT, and I am over the hump of a four-year distance degree. I'm looking forward to this subject and to seeing what other emerging technologies out there are influencing businesses today.