A New View on Operating Systems and the World Wide Web

1. Introduction

Present day solutions for PaaS, IaaS or SaaS revolve around the concept of cloud computing and sometimes virtualization. Virtualization is not cloud computing; virtualization only extends cloud computing, by facilitating the use of the underlying resources. If there were a level of abstraction high enough that the cloud and the Internet of Things took virtualization even further, so that entire operating systems were accessed via the cloud and the end user/consumer no longer needed a dedicated access point, such a solution would indeed be seen as a remake of the present day status quo of computers and the internet.

The concept of OSaaS is not new: CoreOS, a Linux-based system, has already been released in a form close to OSaaS. If OSaaS were used as the general consumer standard, with enough functionality to allow worldwide resource sharing, the Internet of Things and the Cloud would indeed change beyond human comprehension.

The definition of cloud computing given by NIST is: “Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Virtualization, according to VMware's site, is the separation of a resource or request for a service from the underlying physical delivery of that service. Since the cloud separates either the software or the hardware resources and offers them separately to each customer in a metered fashion, cloud computing and virtualization could be regarded as interchangeable or equal, if virtualization offered resources in a metered fashion. The abstraction would add another layer of requirements: the end user would not need an operating system to access the cloud resource. In this case, the personal computer would become part of the Internet of Things (or the Internet of Everything, according to Cisco) and access operating system resources over the web.

2. Present day Operating Systems and Internet of Things

The definition of an operating system revolves around managing hardware resources for applications and being the interface between the user and the hardware. The operating system does not:

  • Allow threads from other computers to run on the computer it is installed on, so it cannot use network resources to full capacity;
  • Dynamically control hardware resources between workstations, even when supplementary resources are available;

The Internet of Things apex (or event horizon) is seen as the point when more devices are connected to the internet than there are people on the globe. It is predicted that in 2015 there will be 25 billion devices connected to the internet, for a population of 7.2 billion. If we assume that the IoT is a living organism, the device population outnumbers humans by a factor of roughly 3.5, so the world of computing is more than three times bigger than the human world. This alone makes the world of devices an unexploited resource that, if connected, could give the future a totally new perspective.

However, at this point in time, the devices:

  • Function on different platforms, and the platforms cannot be integrated;
  • Run operating systems that do not fully decouple the hardware from the software and remain semi-dependent on the hardware to a degree where it is almost impossible to share resources over the internet;

Since the new directions in technology involve studying nature and implementing natural patterns in technology and infrastructure, the next logical step is to use natural patterns in developing the IoT and shaping how the future of devices will look.

3. Why the OS?

The OS is the first level of intervention, where something can be introduced in order to change the way devices work. Modifications brought to the OS level can also overcome differences in hardware architecture.

Changing the OS to allow devices to share hardware resources over the internet, and applying a natural pattern to the cloud (or the Internet of Things) so that it becomes a structure similar to a human society, where devices are independent decision cells that can nevertheless be grouped into functional organisms, would radically improve the way we live.

4. The proposed concept

The following features are proposed as main attributes of OSaaS:

  1. Totally decouple the OS from the hardware and allow for shared hardware resources, over the internet, much like a server environment would work in a private network;
  2. Enable the end consumer to access the resource via the internet (cloud), based on a specific hardware identification system;
  3. Enable the consumer to access the resource in a metered fashion;
  4. The end consumer hardware becomes a resource of the IoT;
  5. Selective hardware resource sharing over the IoT;

SaaS offers targeted software applications for the end consumer. PaaS offers hardware and software resources, usually to build other applications. IaaS offers the hardware, hardware management, storage and networking resources.

OSaaS would have to be a combination of all three concepts, where the end consumer would actually provide the infrastructure, the producer would provide the software, and the network would automatically manage the resources and access, with the help of the operating system.

Virtualization technology offers the ability to support the distribution of OS and applications over any type of hardware system, while improving resource usage and security. The types of virtualization that are of interest for such an implementation are OS-level virtualization or hardware-level virtualization. Obviously, for the purpose of such a proposal, the usage of hardware-level virtualization is the preferred solution. This is because hardware-level virtualization handles the entire OS and application, while detaching both the OS and applications from the hardware.

In terms of metering the access to the OS as a resource, similar solutions already exist, so it all reduces to selecting and implementing a solution from an already existing wide range.

The users would be metered under a specific payment plan and would access the OS as a resource, either on demand or with non-stop access, depending on the plan. This solution would require an authentication system that is both hardware and software based, but the main security layer would have to require a hardware signature to grant access. Such systems already exist, where internet access is granted based on the NIC MAC address. This approach could be extended and complemented with other means, potentially integrated at CPU level. The user would download the OS after authentication and log in to use it; once the subscription ends, the entire OS would be deleted, moved to a cloud cache or simply inactivated.
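As an illustration only, the sketch below shows what a hardware-bound, metered access check might look like, loosely following the MAC-address idea above. The hashing scheme, the SUBSCRIPTION_DB store and the expiry logic are assumptions made for this example, not part of any existing OSaaS product.

    import hashlib
    import uuid
    from datetime import datetime

    # Hypothetical subscription records, keyed by hardware fingerprint.
    SUBSCRIPTION_DB: dict[str, dict] = {}

    def hardware_fingerprint() -> str:
        """Derive a coarse hardware signature from the NIC MAC address.
        A real deployment would combine several identifiers (CPU serial, TPM, etc.)."""
        mac = uuid.getnode()                       # MAC address as an integer
        return hashlib.sha256(str(mac).encode()).hexdigest()

    def access_allowed(now: datetime) -> bool:
        """Grant OS access only while the metered subscription is still valid."""
        record = SUBSCRIPTION_DB.get(hardware_fingerprint())
        if record is None:
            return False                           # unknown machine: no access
        return now <= record["paid_until"]

On expiry, the OS image could then be deleted, cached in the cloud or inactivated, as described above.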

Furthermore, such a solution would also integrate elements of OS-level virtualization, where each application would run in its own virtual environment. This would allow dynamic allocation of resources. If such a solution also allowed threads to run across CPUs, with slight changes to CPU architecture to support such thread operations, then the way the Internet of Things works would truly change into something organic.

The OS in this proposed architecture would have to act as a virtual machine on its own, and the personal computer would become a usable component, or an extension, on the web. This concept would be very close to para-virtualization. However, the OS itself would not need a virtual environment to work in, as it may include virtualization features itself, so that the computer does not need an underlying virtual environment to function and access hardware resources. Furthermore, the personal computer would be able to run processes and threads from other personal computers that need more processing power. Such an OS would be able to virtualize any type of PC resource: memory, hard drives, CPU, network.
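As a toy illustration of offloading work to another machine, the sketch below uses Python's standard xmlrpc modules; the function name, port and peer host are placeholders, and this is of course far coarser than the thread-level sharing the proposal describes.

    # Peer machine: expose a function that other computers may call.
    from xmlrpc.server import SimpleXMLRPCServer

    def heavy_task(n: int) -> int:
        return sum(i * i for i in range(n))        # stand-in for real work

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(heavy_task, "heavy_task")
    server.serve_forever()

    # Requesting machine (separate program): borrow the peer's CPU time.
    # import xmlrpc.client
    # peer = xmlrpc.client.ServerProxy("http://peer-host:8000")  # hypothetical host
    # print(peer.heavy_task(1_000_000))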

Since the explosion of the internet, a study by a group of researchers in China has found that the Internet doubles in size every 5.32 years, a pattern similar to Moore's Law. This makes the Internet the biggest computer in the world. Its parts are the computers of the consumers, while the information circulates freely. If the internet were compared to the physical body of a human, the information would be the blood circulating through the body. Some specific aspects of such an architecture stand out. First, the information could be easily shared and the consumer workstations could be used as a collective resource, much like cells in a body. Secondly, this approach would create a self-redundant organism, where availability of information and infrastructure would be virtually unlimited. Each PC would represent a cell that performs the same function, while a cluster of PCs would represent an organic functional structure.
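For illustration only, the doubling claim corresponds to simple exponential growth; the starting size and the 14-year span below are placeholder values chosen to connect this figure with the 741% growth quoted in the conclusions.

    # Growth implied by a doubling time of 5.32 years:
    #   size(t) = size(0) * 2 ** (t / 5.32)
    def internet_size(initial_size: float, years: float) -> float:
        return initial_size * 2 ** (years / 5.32)

    # Over the 14 years from 2000 to 2014 this gives roughly a 6.2x increase,
    # the same order as the ~741% growth (about 8.4x) cited in the conclusions.
    print(internet_size(1.0, 14))   # ~6.2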

5. Features and advantages

There is no limitation on what such an OS would have to offer in terms of functionality. Depending on the deployment environment, this approach would increase the power and the value of computing simply by making more processing power available through the web. Even designed without additional features, such an OS would offer at least the following features and advantages:

  1. Users can share hardware resources as a feature of the OSaaS (built-in or opt-in). Since virtualized environments can make additional hardware resources available, such an operating system would by default include the ability to use other PCs as extra computing power. Such a feature would be especially welcome in corporate environments.
  2. Easier recovery from failures, as the OS would simply be transferred as a copy of a standard blueprint, over the web. This could be achieved by having a set of features attached to the computer, as the computer becomes a metadata set on the web (a sketch of such a metadata set follows this list). The supplier would therefore already know the hardware components of the computer and would simply customize the OS automatically to function on that configuration. In practice, installation of an OS is just the beginning of a setup, as the subsequent actions of updating, installing additional drivers and configuring take more time than the OS installation itself.
  3. Users can work both offline and online, but must authenticate online at a given time interval in order to continue using the OS. This would all but eliminate the hacking and black markets built around illegal software sharing.
  4. Elimination of unwanted access to data, by simply shutting down the operating system. Such a facility would not eliminate every possibility of unwanted access through physical access to the hardware, but shutting the OS down on demand would more than likely cut off access to the data.
  5. Data would still be available even if the subscription were not paid. The OS would simply be “migrated” or inactivated, without damaging data or other owned applications on the host computer.
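A minimal sketch of the metadata set mentioned in point 2 might look as follows; every field name and value here is invented for illustration, not a real manifest format.

    # Hypothetical hardware metadata set published by a client machine, which a
    # supplier could use to customize the OS blueprint before delivery.
    machine_manifest = {
        "hardware_id": "sha256-of-hardware-signature",     # placeholder value
        "cpu": {"architecture": "x86_64", "cores": 8},
        "memory_gb": 16,
        "storage": [{"type": "ssd", "capacity_gb": 512}],
        "nics": [{"mac": "00:11:22:33:44:55"}],
    }

    def select_drivers(manifest: dict) -> list[str]:
        """Toy driver selection from the manifest; real logic would be far richer."""
        drivers = ["base-kernel"]
        if manifest["cpu"]["architecture"] == "x86_64":
            drivers.append("x86_64-microcode")
        if any(disk["type"] == "ssd" for disk in manifest["storage"]):
            drivers.append("nvme")
        return drivers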

The most important reasons for implementing virtualized solutions are infrastructure consolidation and supporting mission-critical areas, as stated by all major virtualization solution providers. However, virtualization does not yet seem very present in the day-to-day consumer world. Such a solution would integrate the entire desktop environment in the cloud and facilitate better resource control and optimization, especially for data.

If the concept of this solution would be extended to include support for integration with specially designated server solutions for data backup, management and security, such an environment would offer a greatly improved private cloud solution to a corporate environment.

In the public domain, such a solution would offer long-term benefits to the security of Internet traffic. For the proposed solution, integrating operating-system-level virtualization functionality into the OS would also allow for the elimination of IPR infringement. This would be achieved through an untapped feature of OS virtualization: the ability to limit access to, or entirely remove, uncertified applications running in the OS environment.

Implementing a back-up solution and cross-platform access to hardware resources for processor calls would improve the processing capacity of the entire web and would truly turn the internet into an internet of things.

6. Effects in the market

In the long run, both the technology suppliers and the consumers would win from such solutions. The implementation model for the OSaaS could mean any of the following (or a combination of them):

  • Pay-as-you-go models, where the consumer would pay for access to the OS resources in a metered fashion;
  • Any type of subscription model (monthly or yearly), where the user would pay a subscription to use the OS or some specific traits of the OS. This model is not new, as Office 365 is now sold under a subscription model.

VMware's online studies show that hardware costs were reduced by 72%, while only a minority of work environments are virtualized (36% of x86 servers). In a corporate environment, where such a technology would turn the personal computer into a usable resource of storage space and processing power, we could assume a substantial profit increase simply from cutting infrastructure costs.

In the public domain, a Windows 8 license costs £49.99 (or $101) on the Microsoft Store (figures may differ slightly at the date of publication, as this article was written in March 2015). In the long run, adding the described functionality to the OS itself would more than likely increase the license cost. Distributing the costs under a new subscription model would lower the cost impact on the end consumer.

For the supplier, such an approach would more than likely improve real income simply by increasing the raw sale price. However, the implementation of such a technology could bring other cost cuts, like:

  • The need for a smaller implementation and distribution infrastructure, since the OS itself can simply be downloaded over the web once the subscription is in place;
  • Elimination of the entire first-level support team, by implementing already existing self-healing/self-diagnostics functionality and allowing for self-repairing processes;

It is clear that there are numerous advantages for suppliers, simply from adding another layer of control to software distribution and increasing the profit from sales. But in other areas, such as education, such a solution would truly show its value by facilitating easy access to software and hardware resources.

In the research industry, such a solution would be instantly adopted, as it would allow almost non-stop access to computing resources orders of magnitude above publicly known levels.

7. Conclusions

Though many people would criticise such a view of devices and the future, the evolution of the IoT into an environment where not only information is shared as a dry stream of data, but hardware can also be used as a resource, seems natural.

Internet growth in the 2000-2014 interval, according to internetworldstats.com, was 741%. SETI@home has an active peak processing speed of 704.507 teraFLOPS from a little over 90,000 connected computers. If each device in the world were allowed to work under a similar structure, the order of magnitude of the total IoT, under these statistics, is 3600. Assuming that the main OS providers and internet providers also start using new infrastructure based on optic fibre, as well as new hardware concepts such as quantum computing, the collective power of the IoT would increase even further.
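As a rough, back-of-the-envelope illustration (the linear scaling below is a simplification introduced here, not the original estimate), the SETI@home figures can be scaled to the predicted 25 billion devices as follows.

    # Back-of-the-envelope scaling of the SETI@home figures quoted above.
    # Linear scaling is a strong simplification: it ignores device heterogeneity,
    # network overhead and the fact that most IoT devices are far weaker than PCs.
    seti_tflops = 704.507            # peak processing speed of SETI@home
    seti_hosts = 90_000              # connected computers
    iot_devices = 25_000_000_000     # predicted connected devices in 2015

    scale_factor = iot_devices / seti_hosts    # ~2.8e5
    iot_tflops = seti_tflops * scale_factor    # ~2.0e8 TFLOPS, i.e. ~200 exaFLOPS
    print(f"{scale_factor:.2e} {iot_tflops:.2e}")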

For the reasons presented above, OSaaS and the new OS concept would improve the way we see the world today.

The following recommendations should be specified in order for such a solution to exist:

  1. Implementing a common protocol to facilitate cross-platform communication.
  2. Changing the OS to allow sharing resources across workstations, including memory and CPU resources.
  3. Building a complex hardware authentication system, beyond the already existing NIC MAC address.
  4. Improving the existing Internet infrastructure and reducing PC hardware costs, in order to make them more accessible.

The Windows 8 Operating System

Now that Microsoft has released Windows 7, it is time to focus on Windows 8. Microsoft always starts working on the next operating system as soon as one is released. In Microsoft Windows 8, the user interface will be completely changed.

The new operating system may be called Midori, but there isn’t any confirmation at this time of what the actual name is going to be. At this point, it is referred to as Windows Operating System 8.

Windows Operating System 8 is currently not scheduled to be released until late 2011 or early 2012, basically 2-3 years after the release of the current operating system, Windows 7. It is likely, however, that Microsoft will ship the new Windows 8 sooner than that.

At this point, it is not fully known what will be included in Windows Operating System 8. The Windows team has announced some features that will be included, but not all. The actual list will be finalized in late 2010, when Microsoft will probably make the first beta version available to the public.

Currently, Windows operating systems are offered in either 32-bit or 64-bit versions, and it is cumbersome for software developers to maintain code for two processor widths. It is rumored that Windows Operating System 8 will be offered in 64-bit and 128-bit versions as well. Virtually every modern processor supports 64-bit computing, and many people are hoping that 32-bit support will be dropped from Windows 8. However, this probably will not happen. By the time Windows 8 is ready to launch, the cost of 128-bit chips will still be too high for the average consumer, so 128-bit processors will most likely only be used in Windows 8 servers.

Also, at this point computer users would not get any use out of 128-bit support. There will not be any software written for it for many years, as most software today is still written to support only 32-bit. Full 64-bit software support is only just beginning to appear, and it offers no additional benefit to the average computer user or gamer.

Another possible new feature of Windows 8 will be a new Hibernate/Resume engine. This means that the computer will have even faster hibernation and resume times than are currently available.

In addition, Windows 8 will also have new networking and security features built-in. It is possible that the new operating system will have a new PatchGuard system that stops viruses from changing system files. This feature was not released in Windows 7.

It will also have better multi-monitor support. Many computer users today use two monitors to multitask more effectively, and it is rumored there may even be support for three or more screens.

It is also being said that Windows 8 will run on the ARM chip, which is commonly used in smartphones. With this type of chip, a mobile version of Windows may be possible, running with much lower system specifications than those currently required on PCs.

The new operating system will also have a DFSR (Distributed File System Replication) service. This will be a feature used in Windows 8 Server: a folder-system engine that allows folder synchronization across multiple servers.

Many computer users are very excited about the release of Windows Operating System 8, because Windows 7 was basically an upgrade to Vista. Windows 8 is supposed to open up a whole new world in terms of Microsoft Windows operating systems.

How an Operating System’s File System Works

File systems are an integral part of any operating system with the capacity for long-term storage. There are two distinct parts to a file system: the mechanism for storing files and the directory structure into which they are organised. In modern operating systems, where it is possible for several users to access the same files simultaneously, it has also become necessary to implement features such as access control and different forms of file protection.

A file is a collection of binary data. A file could represent a program, a document or, in some cases, part of the file system itself. In modern computing it is quite common for there to be several different storage devices attached to the same computer. A common data structure such as a file system allows the computer to access many different storage devices in the same way; for example, when you look at the contents of a hard drive or a CD, you view them through the same interface, even though they are completely different media with data mapped onto them in completely different ways. Files can have very different data structures within them, but can all be accessed by the same methods built into the file system. The arrangement of data within the file is then decided by the program creating it. The file system also stores a number of attributes for the files within it.

All files have a name by which they can be accessed by the user. In most modern file systems the name consists of three parts: a unique name, a period and an extension. For example, the file ‘bob.jpg’ is uniquely identified by the first word ‘bob’, while the extension ‘jpg’ indicates that it is a JPEG image file. The file extension allows the operating system to decide what to do with the file if someone tries to open it. The operating system maintains a list of file extension associations: should a user try to access ‘bob.jpg’, it would most likely be opened in whatever the system's default image viewer is.
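A minimal sketch of such an association lookup is shown below; the table of extensions and handler names is invented for illustration (real systems keep these associations in the Windows registry or in MIME/desktop databases).

    import os

    # Hypothetical extension-to-application table.
    FILE_ASSOCIATIONS = {
        ".jpg": "image_viewer",
        ".txt": "text_editor",
        ".mp3": "media_player",
    }

    def open_with(filename: str) -> str:
        """Pick the default handler for a file based on its extension."""
        _, extension = os.path.splitext(filename)   # 'bob.jpg' -> ('bob', '.jpg')
        return FILE_ASSOCIATIONS.get(extension.lower(), "unknown_handler")

    print(open_with("bob.jpg"))   # -> image_viewer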

The system also stores the location of a file. In some file systems files can only be stored as one contiguous block. This simplifies storage and access to the file, as the system then only needs to know where the file begins on the disk and how large it is. It does, however, lead to complications if the file is to be extended, as there may not be enough space available to fit the larger version. Most modern file systems overcome this problem by using linked file allocation, which allows the file to be stored in any number of segments. The file system then has to store where every block of the file is and how large it is. This greatly simplifies file space allocation, but is slower than contiguous allocation, as the file may be spread out all over the disk. Modern operating systems overcome this flaw by providing a disk defragmenter, a utility that rearranges the files on the disk so that they are stored in contiguous blocks.
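The toy model below sketches the linked allocation idea; the class names, block size and disk size are arbitrary choices made for the example.

    from dataclasses import dataclass, field

    # A linked file is just an ordered chain of block numbers, so it never
    # needs one contiguous run of free space on the disk.
    @dataclass
    class LinkedFile:
        name: str
        blocks: list[int] = field(default_factory=list)

    class TinyDisk:
        def __init__(self, total_blocks: int):
            self.free_blocks = list(range(total_blocks))

        def allocate(self, name: str, blocks_needed: int) -> LinkedFile:
            if blocks_needed > len(self.free_blocks):
                raise OSError("not enough free space")
            # Grab any free blocks; they do not have to be adjacent.
            grabbed = [self.free_blocks.pop(0) for _ in range(blocks_needed)]
            return LinkedFile(name, grabbed)

    disk = TinyDisk(total_blocks=16)
    f = disk.allocate("bob.jpg", 4)
    print(f.blocks)   # contiguous now, but could be scattered after deletions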

Information about a file's protection is also integrated into the file system. Protection can range from the simple systems implemented in the FAT file system of early Windows, where files could be marked as read-only or hidden, to the more secure systems implemented in NTFS, where the file system administrator can set up separate read and write access rights for different users or user groups. Although file protection adds a great deal of complexity and potential difficulty, it is essential in an environment where many different computers or users can have access to the same drives via a network or a time-shared system such as raptor.

Some file systems also store data about which user created a file and at what time they created it. Although this is not essential to the running of the file system, it is useful to the users of the system.

In order for a file system to function properly, it needs a number of defined operations for creating, opening and editing a file. Almost all file systems provide the same basic set of methods for manipulating files.

A file system must be able to create a file. To do this there must be enough space left on the drive to fit the file, and there must be no other file with the same name in the directory in which it is to be placed. Once the file is created, the system makes a record of all the attributes noted above.

Once a file has been created, we may need to edit it. This may be simply appending some data to the end of it, or removing or replacing data already stored within it. When doing this, the system keeps a write pointer marking where the next write operation to the file should take place.

In order for a file to be useful it must, of course, be readable. To do this, all you need to know is the name and path of the file; from this, the file system can ascertain where on the drive the file is stored. While reading a file, the system keeps a read pointer, which stores which part of the drive is to be read next.

In some cases it is not possible simply to read all of the file into memory. File systems therefore also allow you to reposition the read pointer within a file. To perform this operation, the system needs to know how far into the file you want the read pointer to jump. An example of where this would be useful is a database system: when a query is made on the database, it is obviously inefficient to read the whole file up to the point where the required data is; instead, the application managing the database determines where in the file the required bit of data is and jumps to it. This operation is often known as a file seek.

File systems also allow you to delete files. To do this, the system needs to know the name and path of the file. It then simply removes the file's entry from the directory structure and adds all the space it previously occupied to the free space list (or whatever other free-space management system it uses).
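The sketch below ties these operations together in a deliberately simplified, in-memory form; real file systems track disk blocks and keep separate pointers per open handle, and the single combined pointer here is a simplification made for the example.

    # Minimal in-memory sketch of create, write, read, seek and delete.
    class ToyFileSystem:
        def __init__(self):
            self.files: dict[str, bytearray] = {}   # name -> contents
            self.pointers: dict[str, int] = {}      # name -> read/write position

        def create(self, name: str) -> None:
            if name in self.files:
                raise FileExistsError(name)
            self.files[name] = bytearray()
            self.pointers[name] = 0

        def write(self, name: str, data: bytes) -> None:
            pos = self.pointers[name]
            self.files[name][pos:pos + len(data)] = data   # overwrite or append
            self.pointers[name] = pos + len(data)

        def seek(self, name: str, offset: int) -> None:
            self.pointers[name] = offset            # reposition the pointer

        def read(self, name: str, size: int) -> bytes:
            pos = self.pointers[name]
            data = bytes(self.files[name][pos:pos + size])
            self.pointers[name] = pos + len(data)
            return data

        def delete(self, name: str) -> None:
            del self.files[name]                    # space returns to the free pool
            del self.pointers[name]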

These are the most basic operations required for a file system to function properly. They are present in all modern computer file systems, but the way they function may vary. For example, performing the delete operation in a modern file system like NTFS, which has file protection built into it, is more complicated than the same operation in an older file system like FAT. Both systems would first check whether the file was in use before continuing; NTFS would then also have to check whether the user deleting the file has permission to do so. Some file systems also allow multiple people to open the same file simultaneously and have to decide whether users have permission to write a file back to the disk while other users have it open. If two users have read and write permission on a file, should one be allowed to overwrite it while the other still has it open? Or if one user has read-write permission and another only has read permission, should the user with write permission be allowed to overwrite the file if there is no chance of the other user also trying to do so?

Different file systems also support different access methods. The simplest method of accessing information in a file is sequential access, where the information in a file is accessed from the beginning, one record at a time. To change the position in a file, it can be rewound or forwarded a number of records, or reset to the beginning of the file. This access method is based on file storage systems for tape drives, but works as well on sequential access devices (like modern DAT tape drives) as it does on random-access ones (like hard drives). Although this method is very simple in its operation and ideally suited for certain tasks, such as playing media, it is very inefficient for more complex tasks such as database management. A more modern approach that better facilitates reading tasks that are not likely to be sequential is direct access. Direct access allows records to be read or written in any order the application requires. This method of allowing any part of the file to be read in any order is better suited to modern hard drives, as they too allow any part of the drive to be read in any order with little reduction in transfer rate. Direct access is better suited to most applications than sequential access, as it is designed around the most common storage medium in use today, as opposed to one that is not used very much anymore except for large offline back-ups. Given the way direct access works, it is also possible to build other access methods on top of it, such as sequential access, or an index of all the records of the file to speed up finding data in the file.
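The contrast between the two methods can be sketched with Python's built-in file objects; the fixed record size of 64 bytes is an arbitrary assumption made for the example.

    RECORD_SIZE = 64   # assumed fixed record length

    def read_sequential(path: str):
        """Read records one after another from the start of the file."""
        with open(path, "rb") as f:
            while record := f.read(RECORD_SIZE):
                yield record

    def read_direct(path: str, record_number: int) -> bytes:
        """Jump straight to one record, as a database engine would (a 'file seek')."""
        with open(path, "rb") as f:
            f.seek(record_number * RECORD_SIZE)
            return f.read(RECORD_SIZE)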

On top of storing and managing files on a drive, the file system also maintains a system of directories in which the files are referenced. Modern hard drives store hundreds of gigabytes, and the file system helps organise this data by dividing it up into directories. A directory can contain files or further directories. As with files, there are several basic operations that a file system needs to be able to perform on its directory structure in order to function properly.

It needs to be able to create a file. This is also covered by the overview of operations on a file, but as well as creating the file, it needs to be added to the directory structure.

When a file is deleted the space taken up by the file needs to be marked as free space. The file itself also needs to be removed from the directory structure.

Files may need to be renamed. This requires an alteration to the directory structure, but the file itself remains unchanged.

It must be possible to list a directory. In order to use the disk properly, the user needs to know what is in all the directories stored on it. On top of this, the user needs to be able to browse through the directories on the hard drive.

Since the first directory structures were designed, they have gone through several large evolutions. Before directory structures were applied to file systems, all files were stored on the same level; this is basically a system with one directory in which all the files are kept. The next advancement, which would be considered the first directory structure, is the two-level directory. In this there is a single list of directories, which are all on the same level, and the files are stored in these directories. This allows different users and applications to store their files separately. After this came the first directory structures as we know them today: directory trees. Tree-structured directories improve on two-level directories by allowing directories, as well as files, to be stored inside directories. All modern file systems use tree-structured directories, but many have additional features, such as security, built on top of them.
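A toy tree-structured directory can be sketched as follows; the class and method names are chosen only for the example.

    # A directory maps names to either files or further directories,
    # mirroring the tree structure described above.
    class Directory:
        def __init__(self, name: str):
            self.name = name
            self.entries: dict[str, object] = {}   # name -> Directory or file data

        def mkdir(self, name: str) -> "Directory":
            sub = Directory(name)
            self.entries[name] = sub
            return sub

        def add_file(self, name: str, data: bytes = b"") -> None:
            self.entries[name] = data

        def listing(self) -> list[str]:
            return sorted(self.entries)

    root = Directory("/")
    home = root.mkdir("home")
    home.add_file("bob.jpg")
    print(root.listing(), home.listing())   # ['home'] ['bob.jpg']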

Protection can be implemented in many ways. Some file systems allow you to have password-protected directories: the file system will not allow you to access a directory until it is given a username and password for it. Others extend this system by giving different users or groups access permissions. The operating system requires the user to log in before using the computer and then restricts their access to areas they do not have permission for. The system used by the computer science department for storage space and coursework submission on raptor is a good example of this. In a file system like NTFS, all types of storage space, network access and use of devices such as printers can be controlled in this way. Other types of access control can also be implemented outside the file system; for example, applications such as WinZip allow you to password-protect files.
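A minimal sketch of group-based permissions, loosely in the spirit of NTFS ACLs, is shown below; the directory names, group names and permission labels are invented for illustration.

    # Hypothetical access-control table: directory -> group -> allowed actions.
    ACL = {
        "/coursework": {"students": {"read"}, "staff": {"read", "write"}},
        "/exams":      {"staff": {"read", "write"}},
    }

    def allowed(path: str, group: str, action: str) -> bool:
        """Check whether a user group may perform an action on a directory."""
        return action in ACL.get(path, {}).get(group, set())

    print(allowed("/coursework", "students", "write"))   # False
    print(allowed("/coursework", "staff", "write"))      # True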

There are many different file systems currently available on many different platforms, and depending on the type of application and the size of the drive, different situations suit different file systems. If you were to design a file system for a tape backup system, then a sequential access method would be better suited than a direct access method, given the constraints of the hardware. Also, if you had a small hard drive on a home computer, there would be no real advantage in using a more complex file system with features such as protection, as it is not likely to be needed.

If I were to design a file system for a 10 gigabyte drive, I would use linked allocation over contiguous allocation to make the most efficient use of the drive space and limit the time needed to maintain the drive. I would also design a direct access method rather than a sequential one, to make the most of the strengths of the hardware. The directory structure would be tree-based to allow better organisation of information on the drive, and would allow for acyclic directories to make it easier for several users to work on the same project. It would also have a file protection system that allowed different access rights for different groups of users, and password protection on directories and individual files. Several file systems that already implement the features described above as ideal for a 10 GB hard drive are currently available; these include NTFS for the Windows NT and XP operating systems, and ext2, which is used in Linux.
