Process-Based Security
If your device has an IP address, it needs Process-Based Security.
Let the crusade for more reliable internet security begin!
We need YOU, the Android user, to join our worldwide crusade to eliminate cyber-threats and invasions by helping us bring the SAGESecured Operating System project to reality. SAGESecured will provide the world with the most robust security protocol invented to date by allowing definitive control over what does and does not run on an Android device, permanently eliminating the means and methods that can infect the device with malicious software: PAST, PRESENT and FUTURE. Beginning with Android, SAGESecured serves as a launching pad to bring a new gold-standard security protocol to ALL operating systems, which are ill-served by conventional best practices.
Undergraduate: University of Texas, Austin. Bachelor of Business Administration, 1971.
Graduate: New York University, Master of Business Administration, 1972.
Professional designation: Certified Public Accountant
Moved to Amarillo, Texas 1972
Employed as a staff accountant at H.V. Roberson, CPAs, until 1978.
Self-employed ever since with personal investments.
Founder: SAGEFirst, Inc. – Computer security based on implementing a change-control or change-prevention protocol for applications and operating systems. Such a protocol prevents worms, viruses, Trojan horses and other exploits that involve the loading and execution of arbitrary code, past, present and future. It also eliminates the opportunity for malicious insiders to create mayhem.
Married with two sons in college.
My inspiration for Process-Based Security (PBS) was pretty simple. One day I was reading about the next big exploit and realized it was all about getting "root" on a machine. People were exploiting programs that ran as root so they could do whatever they wanted on that computer. Once someone was running as root, every program they ran also ran as root, ad infinitum.
This told me there were a few problems with the current view of security:
1) People were being given access to system resources, when it was the programs they ran that actually used the resource.
2) Giving people access to resources meant you had to give them all the access they would ever need, regardless of the program they were running at the time. When a user performed a simple function like getting a directory listing, that same access also let them read their e-mail. Security rested on trusting the directory-listing program not to read your e-mail.
3) The kinds of access given weren’t anywhere near fine-grained enough.
Trying to solve these problems is what led to PBS. With PBS I decided:
1) Set up access rights based on the program, and control users by controlling the programs they can run.
2) For each program, give it access only to what it needs (i.e. a directory listing program can only open directories for reading).
3) Allow for much finer-grained access control. Beyond the typical read, write and execute access, PBS can also express rules such as: existing data may be read, but new data may only be appended; or program X may execute program Y. Programs aren't simply marked executable so that anyone can run them; instead, one program is granted the right to execute another (the OS still checks that the target really is a program). This keeps compromised web servers from starting shells.
4) Reduce the access given to a program to what it minimally needs to run.
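The four decisions above can be pictured as a small policy table in which rights attach to programs rather than users. The sketch below is purely illustrative (the program names, resources and operation names are invented, not SAGEFirst's actual implementation):

```python
# Hypothetical Process-Based Security policy: each program is granted
# only the fine-grained rights it minimally needs.
POLICY = {
    # program         resource               allowed operations
    "/bin/ls":      {"/home":              {"read_dir"}},
    "/usr/bin/log": {"/var/log/app.log":   {"read", "append"}},  # append-only writes
    "/usr/bin/web": {"/var/www":           {"read"}},
    # /usr/bin/web has no "exec" right on /bin/sh, so a compromised
    # web server cannot start a shell.
}

def allowed(program: str, resource: str, operation: str) -> bool:
    """Deny by default; permit only what the policy explicitly grants."""
    return operation in POLICY.get(program, {}).get(resource, set())

# The directory-listing program can list /home but cannot read mail:
assert allowed("/bin/ls", "/home", "read_dir")
assert not allowed("/bin/ls", "/home/user/mail", "read")
# The logger may append to its log but not rewrite existing data:
assert allowed("/usr/bin/log", "/var/log/app.log", "append")
assert not allowed("/usr/bin/log", "/var/log/app.log", "write")
```

Note how the default is denial: a (program, resource, operation) triple that is not in the table simply returns false, which is the opposite of the root model where access is broad and trust is assumed.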
While in the US Navy between 1978 and 1984, I was in charge of all crypto codes for my shop on board the USS Midway, as well as reporting to the CMS Custodian for the security of all codes. My systems were NTDS tactical data systems, where we changed codes based on security levels. My career began with security as a primary concern.
I have worked as a contractor with companies such as Bank of America (for Microsoft), FedEx, UPS, IBM, the US Navy, NASA, First Union and Bank of Boston, to name a few; security was always the utmost concern.
Having worked with SAGEFirst since 2004 on different projects, I found that the introduction of security as an appliance fit well with my security experience in the US Navy. Securing smartphone data is my primary concern today, as it represents locations, bank accounts, communications with anyone in the world, my children's communications with others, insurance information and medical information, to name a few.
In today's world it is no longer a question of "if your personal data will be compromised" but "when". What concerns me is that most companies are only concerned with how a breach affects them instead of how it affects the client, i.e., me and my family.
I view the world of data not as corporate data but as my data that resides on corporate servers. I cannot sit around attempting to trust them; I need to control my own data. For example, I do not leave my wallet on a table for anyone to take advantage of; I consider it my responsibility to safeguard my private information, and it is no different with smartphone devices. As the world of smartphones opens up, I need to control my data on the device and take responsibility for my security as I would with my own wallet.
Corporate information on these devices also opens security holes that must be addressed so that client information is not compromised. As good community partners, corporations can not only protect themselves but also look for ways to help clients protect themselves, so that we are both winners.
SAGESecured is Process-Based Security: a deny-all, permit-only-what-the-owner-wants-to-have-running approach to security. You control what can and can't happen. It makes sense and seems simple, but it fundamentally changes the way we think about cyber protection. By limiting system access to predetermined processes, and by limiting the scope of access to only what is necessary for each process to function, SAGESecured STOPS the execution of malicious programs. The astute reader will correctly conclude that allowing only predetermined applications is a whitelist. That is just the beginning for SAGESecured, though. After mapping all the system resources the application in question needs, SAGESecured deliberately shuts down any resources in excess of those needs. Malware without access to system resources can do no harm. To a very real extent, the security we provide is a by-product of not leaving system resources available to arbitrary code.
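One way to picture the "map the needed resources, then shut down the excess" step is as a set difference. This is a conceptual sketch with made-up resource and application names, not the actual SAGESecured code:

```python
# Sketch: once we know which resources the whitelisted applications
# actually need, everything outside that union is disabled outright.
ALL_RESOURCES = {"net:80", "net:22", "fs:/var/www", "fs:/etc", "dev:camera"}

NEEDS = {  # hypothetical per-application resource maps
    "web_server": {"net:80", "fs:/var/www"},
    "updater":    {"net:22", "fs:/etc"},
}

def lock_down(all_resources, needs):
    """Return (kept, disabled): deny-all, keep only what is mapped."""
    kept = set().union(*needs.values()) if needs else set()
    return kept, all_resources - kept

kept, disabled = lock_down(ALL_RESOURCES, NEEDS)
# No whitelisted application needs the camera, so it is shut down:
assert disabled == {"dev:camera"}
```

Malware that lands on such a system finds nothing to use: any resource it would need is either reserved for a specific whitelisted process or switched off entirely.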
Think about Smart TVs, Smart Watches, Smart Fish Bowls, Smart Anything: as long as it is attached to the Internet, it is a sitting duck. With more and more connectivity to the Internet, the anti-virus model of protection becomes less and less attractive. It is bad enough to have to manage and maintain updates for the desktop, laptop, tablet, etc. Even if anti-virus were reliable, which it has never been, the aggravation of maintaining it disappears once the underlying cause is fixed. Computer security should never be a "make-work" proposition. With our project and the introduction of our protocol, the underlying problem is fixed, and people can deal with other things besides managing and maintaining unreliable updates. When the Smart House or Smart Car gains traction, the problem really needs to be fixed, not just treated. With more devices being created for the Internet every day, a reliable, permanent solution makes more sense than an ongoing headache of updates. Our SAGESecured project will also help reclaim the desktop, laptop and tablet from the security treadmill, along with all the other Smart devices that make up the "Internet of Things". Our project is designed to get more reliable security into the hands of everyone, so they can get off the treadmill of exploit, patch, update...exploit, patch, update.
Remember, computers and the Internet were designed to make life easier. They were designed to work for the owner, not the other way around...the owner was not intended to work for the computer.
Strictly speaking, the status quo for computer security is not as good as it gets. Properly configured, computers can be more secure than they have ever been since the dawn of the personal-computer age. How secure is that? Out of the box, they can be delivered permanently immune to the category of exploit involving the surreptitious loading and execution of arbitrary code. That means spyware, ransomware and all worms, Trojan horses, viruses and similar exploits, Past, Present AND FUTURE, can be relegated to the dustbin of history. Such exploits can be made irrelevant to all current and future computer users. Future generations of computer users will ask, “Is it true that computers used to drop like flies?”, because such exploits will have become a footnote in history. You can have a major part in hastening the arrival of a more secure computer infrastructure.
Why is this protocol useful to the end user? There are several reasons.
SAGESecured began as the brainchild of a Geek’s geek – the type of person who writes their own operating system and codes in Assembler for recreation.
It all started in Amarillo, Texas, the Internet Security capital of the universe since the late 1990s. :~) Once a week, he’d comment that the virus or worm of the week could have been easily prevented. After several weeks, he was challenged to prove that such “preventable” attacks could indeed be avoided.
Those were the last days of the DOS operating system, and he built a demo with DR DOS. It included an infected file from a word-processing application. When the file was opened, it was immediately denied access, and no arbitrary code was surreptitiously loaded and executed. With Microsoft’s introduction of Windows, the days of both DR DOS and MS-DOS were numbered. That was a shame, because both operating systems could have been readily modified to demonstrate the merits of the change-prevention protocol he had imagined. We missed our chance to save the (cyber) world at that time.
With the advent of Windows, everyone seemed less concerned with security and more fascinated by the shiny new toy Microsoft had shared with the world. Because Windows was proprietary, there was no way to introduce a security protocol that could prevent all worms, viruses, Trojan horses and other malware that is surreptitiously loaded and executed, and that conventional best practices can only address after a cyber-autopsy. The next virus is always going to have free rein until its profile can be distributed in an update. Polymorphic exploits basically render anti-virus software moot.
About that time, a college student from Finland gave the world a non-proprietary operating system called Linux. Linus Torvalds’s Linux was inspired by Minix, a Unix derivative, and became the first widely distributed open-source operating system. With an open-source operating system available for people to work with, we were able to create a secure web-server appliance by retrofitting our solution into Linux, demonstrating the robust nature of the change-prevention protocol we propose for permanently eliminating worms, viruses, Trojan horses and the like. The resulting appliance included an email server, an FTP server and a website-hosting server.
About that time, several high-profile web-hacking incidents were making headlines. An Albuquerque ISP was so severely hacked that it was offline for several days while the servers involved were rebuilt. It turned out that a raging feud between a site hosted by the ISP and several black-hat hackers led the black hats to take down the entire ISP when the targeted website proved too difficult to subvert. It is a delicious irony that Albuquerque should be the site of such an exploit: Microsoft was born in Albuquerque and had left for Seattle only a short time before the Route 66 exploit.
A second exploit at that time targeted Security News Portal, whose hosting ISP was also attacked by black-hat hackers. Security News Portal publicly explained that the ISP was going to shut down the site it was hosting because the site was ostensibly an attractive nuisance. As a result, we reached out and offered to host the soon-to-be-abandoned website.
So we took two high-profile targets that had previously been destroyed and used these “toxic” sites as demonstration projects for prototypes of a server incorporating the protocol we propose. The same protocol can serve Android users who want to get off the security treadmill and use an operating system that addresses and solves the problem instead of treating the symptoms. What we propose for Android has been time-tested on servers. For more on the matter, please review: http://www.securitynewsportal.com/faq.shtml and http://www.forbes.com/forbes/1998/1116/6211132a.html.
Around this time we read an article about a penetration-testing group out of Sandia National Labs in Albuquerque. After a brief back and forth, we gave them a seminar on the technology, all the source code and access to the developers. Then we gave them a server with a challenge: either tell us the content of a particular file or change the content of a particular file. After several months they told us they had not accomplished the challenge, though they did report some problems with the Linux operating system that were not a reflection on the change-prevention/change-control protocol we recommend people adopt, and they pointed out some hypothetical problems with Linux that did not result from the design of the security model we presented. The Army’s Communications-Electronics Command at Ft. Monmouth, NJ (CECOM) also inspected the protocol, evaluating an implementation of our secure web-server appliance on a mobile Compaq iPAQ server. Again, they did not subvert the security model protecting the contents of the iPAQ from malicious third-party fiddling. After the results with Sandia National Labs and CECOM, several otherwise intractable problems were solved using our secure web-server appliance.
The National Library of Medicine at the National Institutes of Health had a public facing website that was being looted and plundered on a daily basis. Once the site was hosted on a server invoking the protocol we advised to be used, the website was available all day every day. The public was able to get access to needed information without the bother of trolls preventing access.
The Incident Response Center of the Nuclear Regulatory Commission (NRC), on the other hand, wanted to collect reports from the 100+ nuclear power plants it supervises. The NRC was transitioning from a report-collecting system that incorporated faxes, emails, snail mail and even phoned-in reports to an Internet-based system that consolidated the reporting and sped up the collection of reports for processing. By implementing our change-prevention/change-control protocol, they were able to keep unauthorized users away from confidential information.
An example from the private sector…
A commercial entity in the business of overnight delivery of letters and parcels has one of the larger fleets of planes in the free world. That means it has lots of pilots, co-pilots and other personnel who are constantly updating their time in particular types of aircraft. In addition, there was a pool of applicants just as large, if not larger, who also needed to maintain their log books. Keeping up with all that information was becoming a daunting task, especially since pilots could be in any time zone when completing a leg and wanting to update their logs.
When this company wanted to simplify its credentialing process, it put the process on the Internet on a secure web-server appliance designed specifically to keep unauthorized prying eyes out of its very expensive database of highly skilled and trained individuals. The company also wanted to keep this information segregated from the accounting system, so the credentialing system was placed in front of the firewall, directly exposed to the cold, cruel Internet, to give personnel easier access to the logs they updated when they completed a trip.
These examples of real problems with real solutions demonstrate the implementation of our protocol in custom-built systems. Since those installs, the servers have performed without a hitch.
At the time, we could only demonstrate the merits of our security on Linux, which is not proprietary but was never adopted widely enough to attract the attention of crackers the way Microsoft did.
Now, with the advent and market acceptance of Android, an off-the-shelf implementation of this solution is available to a broader audience. With the completion of the port of the same security model to Android, everyday users can surf the internet knowing they will not get drive-by downloads of extraneous applications that some complete stranger might try to run.
In the early 2000s, security took a decided backseat to the bells and whistles Windows was rolling out. Then came the famous “Trustworthy Computing” memo from Bill Gates to the crew at Microsoft in mid-January 2002. For the next year or so, the market expected results from that effort to build more robust security into operating systems and applications.
More than a decade later, the results of the memo have neither reduced the need for Patch Tuesday (the second Tuesday of each month) nor lessened the reliance on anti-virus. Remember, anti-virus was a cottage industry in the early 2000s that was supposed to be a short-term bridge between the problem-plagued operating systems of the day and the systems that would incorporate the promised “Trustworthy Computing”. Since then, anti-virus has grown like the eggplant that ate Chicago and is about as useful.
After it became clear that proprietary operating systems were going to rely on the End User License Agreement rather than address the underlying cause of the ongoing problem, the surreptitious loading and execution of arbitrary code, we decided to hunker down and wait.
That wait finally ended with Android: the beta release in November 2007 and the first commercial release in September 2008. With the rollout of Android, there is now a well-received operating system on which we can demonstrate our protocol, and the market can decide how much insecurity it chooses to endure.
In the wake of Android’s advent and market acceptance, an off-the-shelf implementation of this solution is available to an initially skeptical but ultimately grateful public. With the completion of the port to Android of the same security model proven on servers, everyday Android users can surf the internet with confidence, knowing they will not be subjected to drive-by downloads of extraneous applications that some malicious coder might want to run.
Once Android cracked 50% of smartphone operating systems, we decided it was for real. We began the port in the third quarter of 2011. By the end of the fourth quarter of 2012, Android held more than a 70% market share for smartphones, and it will probably exceed 500 million smartphone installations in 2013.
No more worrying about keystroke loggers, spyware, ransomware or Trojan horses bent on stealing a person’s banking credentials. There is no telling what sort of exploit shenanigans are already in store for the Google Glass product, which is still in beta. Today, Android allows more rank-and-file computer users (desktop, laptop, notebook, smartphone, etc.) to benefit from enhanced control of what does and does not run on a system. It’s that simple, and that revolutionary.
Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
If you think about it, it’s weird. Everything about computer security has changed in the past 20 years, but computers themselves haven’t. It’s the world around them that has. An article to be published in the February 2013 issue of Communications of the ACM sums up the situation pretty succinctly:
“The role of operating system security has shifted from protecting multiple users from each other toward protecting a single…user from untrustworthy applications.…Embedded devices, mobile phones, and tablets are a point of confluence: The interests of many different parties…must be mediated with the help of operating systems that were designed for another place and time.”
The author of that article is Robert Watson. He advocates taking a fresh start to computing, what he calls a “clean slate.” He’s a senior research associate in the Security Research Group at the University of Cambridge, and a research fellow at St John's College, also at Cambridge. He’s also a member of the board of directors of the FreeBSD Foundation, and he’s my guest today by phone.
Robert, welcome to the podcast.
Robert Watson: Hi, Steven. It’s great to be with you.
Steven Cherry: Robert, computer security meant something very different before the Internet, and in your view, we aren’t winning the war. What’s changed?
Robert Watson: Right. I think that’s an excellent question. I think we have to see this in a historic context.
So in the 1970s and 1980s, the Internet was this brave new world largely populated by academic researchers. It was used by the U.S. Department of Defense, it was used by U.S. corporations, but it was a very small world, and today we put everyone and their grandmother on the Internet. Certainly the systems that we designed for those research environments, to try and solve really fundamental problems in communications, weren’t designed to resist adversaries. And when we talk about adversaries, we have to be careful, but, you know, I think it’s fair to say that there were, you know, very poor incentives from the perspective of the end user. As we moved to banking and purchasing online, we produced a target, and that target didn’t exist in the 1990s. It does exist today.
Steven Cherry: Your research is focused on the operating system. But how much of computing security is built into the operating system currently?
Robert Watson: We’ve always taken the view that operating system security was really central to how applications themselves experience security. And in historic systems, large multiuser computer systems, you know, we had these central servers or central mainframes, lots of end users on individual terminals. The role of the OS was to help separate these users from each other, to prevent accidents, perhaps to control the flow of information. You didn’t want trade secrets leaking from, perhaps, one account on a system to another one. And when we had large time-sharing systems, we were forced to share computers among many different users. Operating systems have historically provided something called access control. So you allow users to say this file can’t be accessed by this user. This is a very powerful primitive. It allows us to structure the work we do into groups, interact with each other. Users are at their own discretion to decide what they’re going to share and what they won’t.
So the observation we make on these new end-user systems like phones is that what we’re trying to control is very different. The phone is a place where lots of different applications meet. But I’m downloading software off the Internet, and this is something we’ve always, you know, encouraged users to be very cautious about. We said, “Don’t just download random programs through the Internet. You never know where it will have come from.” You know, you have no information on the provenance of the software. And on phones today, we encourage users to download things all the time. So what has changed now? Well, we’ve deployed something called sandboxing inside of these phones so that every application you download runs inside its own sandbox. And that is a very different use of security. And it is provided by the operating system, so it’s still a function of the operating system. So a phone is trying to mediate between these applications, prevent them from doing what people sort of rather vividly describe as “bricking” the phone. So you have integrity guarantees that you want. You don’t want to damage the operation of the phone. But you also don’t want information to spread between applications in ways that you don’t want.
Steven Cherry: Now, let’s talk about Clean Slate. This is research you’re conducting for the Department of Defense in the U.S., along with noted computer scientist Peter Neumann. Neumann was recently profiled in The New York Times, and he was quoted as saying that the only workable and complete solution to the computer security crisis is to study the past half-century’s research, cherry-pick the best ideas, and then build something new from the bottom up. What does that mean?
Robert Watson: That’s a great question. I mean it is an interesting problem. You know, the market is controlled by what people are willing to pay for a product. And one of the things we know about the computer industry is that it’s very driven by this concept of “time to market.” You want to get things to the consumer as soon as possible. You don’t do everything 100 percent right. You do it 90 percent right or 70 percent right, because you can always issue updates later, or once you’re doing a bit better in the marketplace, replace the parts, and your second-generation users will expect something a little bit better than what we call early adopters, who are willing to take risks as they adopt technology. So there’s a cycle there that means that we’re willing to put things out that aren’t quite ready. So when we look at algorithms to search for desired values in some large space—and we have this term which is called hill climbing, and the idea of hill climbing is that wherever you are, you look around your set of strategic choices. Do you adjust this parameter? Do you adjust that parameter? And you pick the one that seems to take you closest to the goal that you’re getting to. And you just repeat this process over time, and eventually you get to the top of the hill. So there’s a risk in this strategy. It’s not a bad strategy. It does get you to the top of a hill, but it might get you to the top of the wrong hill.
So what the Clean Slate approach advocates is not throwing the whole world away, but instead taking a step back and asking, Have we been chasing, you know, the wrong goals all along? Or have we made the right choice at every given moment given where we were, but we ended up at the top of the wrong hill? And that’s really what it’s all about. Peter talks about a crisis, and I think it is a crisis. We can see what is effectively an arms race between the people building systems and the people who are attacking systems on a daily basis. Every time you get a critical security update from your vendor or a new antivirus update—these things happen daily or weekly—they reflect the discovery and exploitation of vulnerabilities in the software that we rely on to do our jobs. So we’re clearly, as the defenders, at something of a disadvantage.
And there’s an asymmetric relationship, as we like to say. The attacker has to find just one flaw in order to gain control of our systems. And we, as defenders, have to close all flaws. We must make no mistakes, and we cannot build systems that way; it’s just not a reliable way of doing it. It doesn’t solve the problem. Antivirus is fundamentally responsive. It’s about detecting somebody’s broken into your machine and trying to clean up the mess that’s been left behind by poorly crafted malware that can’t defend itself against a knowledgeable adversary. It presupposes that they’ve gotten in, that they’ve gotten access to your data, they could have done anything they want with your computer, and it’s the wrong way to think about it. It’s not to say that we shouldn’t use antivirus in the meantime, but it can’t be the long-term answer, right? It means that somebody else has already succeeded in their goal.
Steven Cherry: Yeah, I guess what you want to do is compartmentalize our software, and I guess the New York Times article talked about software that shape-shifts to elude would-be attackers. How would that work?
Robert Watson: You know, we could try to interfere with the mechanisms used to exploit vulnerabilities. So, you know, a common past exploit mechanism, something called a buffer overflow attack. So the vulnerability is that the bounds are calculated incorrectly on a buffer inside of the software, and you overflow the buffer by sending more data than the original software author expected. And as you overflow the buffer, you manage to inject some code, or you manage to insert a new program that will get executed when the function that you’re attacking returns. So this allows the adversary to take control of your machine. So we could eliminate the bug that left a buffer overflow, but imagine for a moment that we’re unable to do that. Well, we could interfere with the way the buffer overflow exploit works. We could prevent it from successfully getting code into execution. So this is something we try to do: Many contemporary systems deploy mitigation techniques. It’s hard to get an operating system that doesn’t. If you use Windows or you use iOS, [or you] use Mac OS X, they all deploy lots of mitigation techniques that attack exploit techniques.
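Watson’s buffer-overflow description can be illustrated with a toy simulation. Real overflows happen in languages like C that do no bounds checking; here Python stands in for raw memory, and the frame layout and addresses are invented for illustration:

```python
# Toy model of a stack frame: an 8-byte buffer followed by the saved
# return address. An unchecked copy past the buffer's end overwrites it.
FRAME = bytearray(12)                           # bytes 0-7: buffer, 8-11: return address
FRAME[8:12] = (0x1000).to_bytes(4, "little")    # legitimate return address

def unchecked_copy(frame, data):
    """The vulnerable pattern: copy with no bounds check."""
    frame[0:len(data)] = data

# 8 filler bytes overflow the buffer; the next 4 replace the return address:
attack = b"A" * 8 + (0xBAD).to_bytes(4, "little")
unchecked_copy(FRAME, attack)
assert int.from_bytes(FRAME[8:12], "little") == 0xBAD   # control flow hijacked

def checked_copy(frame, data, buf_len=8):
    """The mitigation: enforce the buffer bound before copying."""
    if len(data) > buf_len:
        raise ValueError("input exceeds buffer bounds")
    frame[0:len(data)] = data

safe = bytearray(12)
try:
    checked_copy(safe, attack)          # same attack, bounds enforced
except ValueError:
    pass                                # overflow refused
assert int.from_bytes(safe[8:12], "little") == 0   # return address untouched
```

The mitigation techniques Watson mentions (stack canaries, non-executable memory, address randomization) work on the same principle as `checked_copy`: they do not remove the bug, but they stop the overwritten bytes from becoming attacker-controlled execution.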
So the one that we’re particularly interested in is one called compartmentalization. And the principle is fairly straightforward. We take a large piece of software, like a Web browser, and we begin to break it into pieces. And we run every one of those pieces in something called a sandbox. A sandbox is a container, if you will, and the software in the sandbox is only allowed to do certain things with respect to the system that runs outside the sandbox. So a nice example of this is actually in the Chrome Web browser. So in Chrome, every tab is rendered inside a separate sandbox. And the principle is that if a vulnerability is exploited by a particular Web page, it’s not able to interfere with the contents of other Web pages in the same Web browser.
So originally this functionality was about robustness. What you don’t want is a bug in the rendering of any one page to close all your other tabs, crash the whole Web browser, and require you to effectively restart your Web sessions from scratch. But Google noticed that they could align these sandboxes with the units of robustness they were already processing each tab in, to try to prevent undesired interference between them. So that’s a rudimentary example of compartmentalization. And it does work, but there were some problems with it.
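[The process-per-tab design can be approximated in miniature with OS processes: each “tab” renders in its own process, so a crash in one leaves the others untouched. The renderer function and page names below are invented for illustration.]

```python
# Minimal sketch of process-per-tab isolation: each "tab" renders in its
# own OS process, so a crash in one cannot take down the others.
import multiprocessing

def render_tab(page):
    # Hypothetical renderer: one page triggers a bug and crashes its process.
    if page == "malicious.example":
        raise RuntimeError("renderer exploited/crashed")

def browse(pages):
    results = {}
    for page in pages:
        proc = multiprocessing.Process(target=render_tab, args=(page,))
        proc.start()
        proc.join()
        # A nonzero exit code means only that one tab's process died.
        results[page] = "crashed" if proc.exitcode != 0 else "ok"
    return results

if __name__ == "__main__":
    print(browse(["news.example", "malicious.example", "mail.example"]))
```

The robustness boundary (a process) doubles as a security boundary once the kernel restricts what each renderer process may touch.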
What we’d really like to do, though, is align these sandboxes or compartments with every individual task that we’re trying to accomplish and the specific rights that it needs. And there’s an interesting principle called the principle of least privilege, an idea first really talked about in the mid-1970s, proposed at MIT. What the principle says is that every individual piece of software should run with only the rights it requires to execute. If we run software that way, then we can be successful at mitigating attacks, because when you exploit a vulnerability in a piece of software, whether it’s a buffer overflow, something more subtle, or a flaw in the logic of the program itself where we just got the rules wrong, you now gain some rights. But you gain only the rights of that particular compartment.
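[Least privilege can be illustrated with a toy capability-style model; the class and right names here are invented for illustration. Each compartment is handed only the rights it needs at creation, so compromising it yields only those rights and nothing more.]

```python
# Toy capability-style sketch of least privilege: a compartment can only
# exercise rights it was explicitly granted when it was created.
class Compartment:
    def __init__(self, name, rights):
        self.name = name
        self.rights = frozenset(rights)  # fixed at creation; cannot grow

    def perform(self, action):
        if action not in self.rights:
            raise PermissionError(f"{self.name}: no right to {action!r}")
        return f"{self.name} did {action}"

# The renderer compartment gets only what rendering needs -- even if an
# attacker fully controls it, "read_bank_cookies" simply is not available.
renderer = Compartment("renderer", {"draw_pixels", "fetch_page"})
print(renderer.perform("draw_pixels"))
try:
    renderer.perform("read_bank_cookies")  # the attacker's goal, never granted
except PermissionError as e:
    print("blocked:", e)
```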
For example, we’d really like a compromised program not to be able to see what is going on in your online banking. It would seem natural to us as users that that should be the case. But it requires very granular sandboxing, and this is part of where our Clean Slate research comes in: current computer systems were not designed to provide that granularity of sandboxing.
Steven Cherry: You’ve used the word “fundamental” a couple of times, and I think what you’re advocating is really fundamental. It’s in some ways changing the entire 60-year paradigm of computing, abandoning what’s sometimes called the von Neumann architecture. That’s John von Neumann, who coinvented game theory as well as the modern computer. And yet, you know, basically we don’t even put code and data in separate sandboxes. Am I right in thinking it’s that fundamental, and do you think the discipline of computer science is really ready for such a fundamental change?
Robert Watson: Well, it’s an interesting question. So the von Neumann architecture, as you suggest, was originally described in a paper in the mid-1940s on the heels of the success of systems like ENIAC. And what John von Neumann says is, among a number of aspects of the architecture, that if we store the program in the same memory that we store data in, we gain enormous flexibility. It opens the door to ideas like software compilers, which allow us to describe software at a high level and have the computer itself write the code that it’s later going to run. It’s a pretty fundamental change in the nature of computing.
I don’t want to roll back that aspect of computing, but we have to understand that many of the vulnerabilities we suffer today are a direct consequence of that design. I talked a moment ago about this idea of code injection attacks via the buffer overflow, where I, as the attacker, can send you something that exploits a bug and injects code. This is a very powerful model for an attacker, because suppose for a moment I couldn’t do that. Then I’d be looking for vulnerabilities that directly correspond to my goals as the attacker; I’d have to find, say, a logic bug that allows the leaking of information. I could probably find one, perhaps. But it’s much more powerful for me to be able to send you new code that you’re going to run on the target machine directly, giving me complete flexibility.
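[The danger of attacker data becoming code has a direct analogue even in high-level languages. A toy example, with an invented input string: the same untrusted bytes are harmless when treated as data, but feeding them to an evaluator lets the sender run arbitrary logic.]

```python
# Toy analogue of "data becomes code": the same attacker-supplied string
# is harmless treated as data, dangerous treated as a program.
untrusted = "__import__('os').getcwd()"  # stands in for injected code

def handle_as_data(message):
    # Data stays data: we only store or echo the bytes.
    return f"received {len(message)} bytes"

def handle_as_code(message):
    # Anti-pattern: evaluating input executes attacker-chosen logic.
    return eval(message)  # never do this with untrusted input

print(handle_as_data(untrusted))  # safe: it is just text
print(handle_as_code(untrusted))  # runs the attacker's program
```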
So, yes, we want to revisit some of these ideas. I’d make the observation that the things that are really important to us, that we want to perform really well on computers, that have to scale extremely well, so there could be lots and lots of them, are the things that we put in low-level hardware. The reason we do that is that they often have aspects of their execution which perform best when they’re directly catered to by our processor design. A nice example of this is graphics processing. Today, every computer and every mobile device ships with something that just didn’t exist in computers 10 or 15 years ago: a graphics processing unit, a GPU. You don’t buy systems without them. They’re the thing that makes it possible to blend different images, render animations at high speed, and have the kind of snazzy, three-dimensional graphics we see on current systems. Hard to imagine life without it.
The reason that was sucked into our architecture design is that we could make it dramatically faster by supporting it directly in hardware. If we now think security is important to us, extremely important to us because of the costs and the consequences of getting it wrong, there’s a strong argument for pulling that into hardware if it provides us with dramatic improvement in scalability.
Steven Cherry: Well, Robert, it sounds like we’re still in the early days of computing. I guess in car terms we’re still in maybe the 1950s. I guess the MacBook Pro is maybe a Studebaker or Starliner, and the Air is a 1953 Corvette. And it’s up to folks like you to lay the groundwork for the safe Volvos and Subarus of tomorrow. In fact, also for making our cars safe from hackers, I guess, but that’s a whole other show. Thanks, and thanks for joining us today.
Robert Watson: Absolutely. No, I think your comparison is good, right. The computer world is still very much a fast-moving industry. We don’t know what systems will look like when we’re done. I think the only mistake we could make is to think that we are done, that we have to live with the status quo that we have. There is still the opportunity to revise fundamental thinking here while maintaining some of the compatibility we want. You know, we can still drive on the same roads, but we can change the vehicles that we drive on them. Thanks very much.
Steven Cherry: Very good. Thanks again.
We’ve been speaking with Robert Watson about finally making computers more secure, instead of less.
For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
This interview was recorded 5 December 2012. Segment producer: Barbara Finkelstein; audio engineer: Francesco Ferorelli
Read more “Techwise Conversations” or follow us on Twitter.
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.