The Zero Trust model addresses the behavior of connected nodes in a network. The capabilities of those compute nodes are crucial to preserving the integrity and trustworthiness of the environment.
The core principles of Zero Trust (never trust, always verify) date back to the 1990s:
1. User and entity behavior analytics, now called continuous monitoring, means keeping track of the ongoing actions of a user or a process to see if it is straying outside its prescribed range.
2. Strong authentication, grounded in identity and access management, enforces the principle of least privilege, a concept that dates back to the 1960s.
3. Network segmentation is a contemporary form of de-perimeterization – the notion that there is no perimeter, canonized by the Jericho Forum in the early 2000s.
4. Fine-grained access control, meaning that a user or entity accesses only the resources or services it is explicitly authorized for. In networking, this means that rather than using a VPN to reach a site’s entire network, access control is placed in front of each resource, and authentication and authorization are required before use is permitted (see the sketch after this list).
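The sketch below illustrates the per-resource gate described in item 4. It is a minimal illustration, not any particular product’s API; the names (Principal, POLICY, access_resource) are hypothetical. The point is that every request is authenticated and then authorized against the specific resource, with deny as the default.

```python
# Illustrative per-resource access gate: deny by default, verify every request.
# Principal, POLICY, and access_resource are hypothetical names for this sketch.
from dataclasses import dataclass

@dataclass
class Principal:
    name: str
    authenticated: bool  # set by a prior strong-authentication step

# Explicit grants per (principal, resource, operation); anything absent is denied.
POLICY = {
    ("alice", "policy-db", "read"),
    ("alice", "policy-db", "update"),
    ("batch-job-7", "claims-queue", "read"),
}

def access_resource(principal: Principal, resource: str, operation: str) -> str:
    # Authenticate first, then authorize against the specific resource;
    # there is no blanket, VPN-style grant to everything behind a perimeter.
    if not principal.authenticated:
        raise PermissionError(f"{principal.name}: authentication required")
    if (principal.name, resource, operation) not in POLICY:
        raise PermissionError(f"{principal.name}: not authorized for {operation} on {resource}")
    return f"{principal.name} performed {operation} on {resource}"

print(access_resource(Principal("alice", True), "policy-db", "read"))   # permitted
# access_resource(Principal("alice", True), "claims-queue", "read")     # would raise PermissionError
```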
Mainframes have unique and powerful security capabilities that could ground a Zero Trust implementation in a strong anchor: the mainframe, the grand high poohbah of data centricity. On the mainframe, an additional layer of protection is built in. The assumption that a process can access only the memory explicitly allocated to it is enforced architecturally, through the virtual addressing scheme, storage protection keys, and supervisor versus problem state execution.
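As a rough illustration of the storage-key idea, the following is a simplified conceptual model, not actual z/Architecture behavior or any real API (the real machine associates a 4-bit key, plus fetch-protection, reference, and change bits, with each 4 KB block). A store succeeds only when the running program’s access key matches the block’s storage key, and key zero, normally reserved for supervisor-state system code, overrides the check.

```python
# Simplified model of storage-protection-key checking (conceptual sketch only).
# Real z/Architecture also covers fetch protection, virtual addressing, and more.

class ProtectionError(Exception):
    pass

class Storage:
    def __init__(self, num_blocks: int):
        self.block_keys = [0] * num_blocks                          # storage key per block
        self.blocks = [bytearray(4096) for _ in range(num_blocks)]  # 4 KB blocks

    def assign(self, block: int, key: int) -> None:
        """Give a block to a process by setting its storage key."""
        self.block_keys[block] = key

    def store(self, block: int, offset: int, data: bytes, access_key: int) -> None:
        # Access key 0 (supervisor-state system code) may store anywhere;
        # otherwise the access key must match the block's storage key.
        if access_key != 0 and access_key != self.block_keys[block]:
            raise ProtectionError(
                f"key {access_key} may not store into block {block} "
                f"(storage key {self.block_keys[block]})")
        self.blocks[block][offset:offset + len(data)] = data

storage = Storage(num_blocks=4)
storage.assign(block=1, key=8)              # block 1 belongs to a process running with key 8
storage.store(1, 0, b"OK", access_key=8)    # permitted: keys match
try:
    storage.store(1, 0, b"NO", access_key=9)  # denied: key mismatch
except ProtectionError as err:
    print(err)
```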
Mainframes have a dominant position in the hierarchy of computing due to their particular balance of capabilities. All computers have some ability to perform calculations and to access data. From their beginning, mainframes typically had about 30 to 50 gigabytes of data per “MIPS” (one MIPS is approximately 2 DEC VUPs, or the processing power of 4 MHz on an Intel architecture). Personal computers typically have 10 to 20 MB per MHz, so relatively speaking a mainframe is roughly 3,000 times more data rich than a PC.
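As a back-of-the-envelope check, using mid-range figures from the paragraph above (about 40 GB per MIPS for the mainframe and 15 MB per MHz for the PC) and comparing the two densities directly, as the text does:

```python
# Data-density comparison using mid-range figures cited above.
mainframe_mb_per_mips = 40 * 1000   # ~40 GB of data per MIPS
pc_mb_per_mhz = 15                  # ~15 MB of data per MHz

ratio = mainframe_mb_per_mips / pc_mb_per_mhz
print(f"Mainframe is roughly {ratio:,.0f}x more data rich")   # ~2,667, on the order of 3,000x
```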
While any kind of computing can run on any kind of computer, certain classes of problems are more data-intensive while others are more compute-intensive. The core programming for a transaction processing workload, say processing a change of address for an insurance policy, is typically about 250,000 instructions. The data against which that code runs could be in the terabytes. On a PC, the code to check for spelling errors in a document also takes about 250,000 instructions, while the target data is a few hundred bytes.
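To make the contrast concrete, here is a small calculation using representative values assumed from the figures above (1 TB of policy data for the transaction workload, 300 bytes of document text for the spell check, and 250,000 instructions of code in both cases):

```python
# Data-to-code contrast for the two workloads described above.
code_instructions = 250_000      # roughly the same code footprint for both workloads

tp_data_bytes = 1 * 10**12       # ~1 TB of policy data (assumed representative value)
spellcheck_data_bytes = 300      # a few hundred bytes of document text

print(f"Transaction workload: {tp_data_bytes / code_instructions:,.0f} bytes of data per instruction")    # ~4,000,000
print(f"Spell check: {spellcheck_data_bytes / code_instructions:.4f} bytes of data per instruction")      # ~0.001
```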
Placing a data-intensive process on a data-poor platform yields a brittle, fragile structure. Siting a process on the right platform optimizes compute resources. And Zero Trust optimizes the security and privacy protections for that content.
March 1, 2024