A Matter of Trust - The Story of where we have been with Technology and Computers

Instalment 1

A Matter of Trust

The world's data communications opened up with the introduction of the Internet. Well before that, data connections, in the form of exclusive data links, were outrageously expensive.

Huge Data Centres were often sited so as to house as many directly connected users as possible. The notion of placing a secure data centre in one location while the bulk of its users sat somewhere far away was ruled out by one overriding factor: the sheer cost of dedicated data connections.

Before the Internet, all manner of input devices, from workstations to terminals to PCs, needed to gain access to a data centre, and none of them was automatically trusted to do so.

This was seen as a good thing! It made for a thoroughly secure network, because any input device that was connected or wanted to gain access was untrusted by default!

Trust was only granted to input devices listed by the Data Centre's communications front end, which held all the security access tables of trust.

There was a marriage of trust: the input device itself needed to be trusted, and the input device in turn granted trust only to users whose sign-on credentials it accepted.

This was the standard approach to security for many years. The input device trusted only certain users' sign-on credentials, and the input device in turn was trusted only if it appeared in the security trust tables held by the communications front end; only then was it trusted by the data centre's processing itself.

The third level of trust was afforded to a few input devices, which trusted only certain users' sign-on credentials, for the maintenance of all data security trust tables: the place where trust itself was created, modified or removed.

At a programmer's level of sign-on, input devices and user credentials were still vetted by the front end's data security trust tables.

There was a fourth and final level of trust which differed slightly from this model: these input devices and user credentials did not have to pass through the communications front end.

There were a number of input devices at the Data Centre that held the highest level of trust. These input devices were given many different names, but let's just call them Prime Console Input Devices. (Sorry, no acronym that I am aware of.)

A Prime Console did not connect to the processors via a communications front end, where all the security tables were held.
A Prime Console connected directly to the processors, and so the trust granted to such an input device was unlimited.

User credentials themselves were granted trust by hard coding within the operating system. There were few such Prime input devices and few such user sign-on credentials.

This was done not so much to guarantee trust as to ensure that, no matter what, a user's sign-on credentials plus a Prime Console input device were both needed, and together guaranteed limitless power over all functions of the processors.

It was at this level that the operating systems themselves could be directly affected, not so much the programs: I'm talking about being able to terminate a rogue process or to shut the processors down altogether.

Most of these data centres had remained online, without a pause, for over three decades and often far longer.

The time needed to shut a data centre down, operating system and power included, was NOT short. It would often take many days, was highly structured, and needed the stuff of Reference Manuals.

Most of the minders, keepers and Database Analysts of the operating system would never see a data centre shut down during their working life. Such was the rarity that the manufacturer of the hardware often needed to be consulted to get it right.

With today's Transaction Tracking Systems (TTS) and Transaction Processing (TP), the DBA is king. Back then, terminating a rogue process, and moreover shutting down a data centre, was a long, audacious task. Without today's superior TTS and TP, every process needed to be shut down in a very specific order and left closed. If this was not done, the processor would come back online with its applications looking like scrambled eggs.

As an aside: a UPS was no luxury.

I'll give you an example of a very cut-down total system shutdown procedure, a far, far cry from a clustered bank of blade servers carrying operating systems, applications and communications access. If this were not important, I would have left it out!

Create and Verify user authentication used binary values, as hexadecimal values were not built into the operating systems and front-end control. Hexadecimal expressions were a neat interpretation of binary values, but at the time they added an unnecessary translation. File size was the biggest limiting factor, as hard drive capacities were only, say, 20-40 megabytes.

The neat representation of hexadecimal values consumed an uncomfortable amount of file space and was a luxury at best. Coding was done via huge but very compactable files written in assembler! The only way to shorten tedious coding was through symbolic names, and all up these huge files would generate down into very small executable files. The biggest issue for the coding was the absolute file length, one of the hard limits of the text editors.
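As a rough illustration of the binary-versus-hexadecimal trade-off described above: any textual representation costs more disk space than the raw binary it describes. The figures below are mine, for illustration only; they don't come from any particular system of the era.

```python
# Rough illustration: the same 32-bit machine word written out as raw
# bytes, as hexadecimal text, and as binary text.
word = 0xDEADBEEF

raw = word.to_bytes(4, "big")     # 4 bytes stored as raw binary
hex_text = format(word, "08X")    # 8 characters of text: "DEADBEEF"
bin_text = format(word, "032b")   # 32 characters of '0'/'1' text

print(len(raw), len(hex_text), len(bin_text))   # 4 8 32
```

So hex text doubles the storage of raw binary, and binary-as-text is worse still, which on a 20-40 megabyte drive was a real cost.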

>Dismount All
>>Input create user authority codes
>>Input verify user authority codes
>Broadcast message to all Prime Consoles: All online storage will be dismounted! Close or terminate all open users' front end connections
>>Proceed? y/n/f
>>Halt All
>Input create user authority codes
>Input verify user authority codes
>>Broadcast message to all Prime Consoles: All processors will shut down in -x hours:minutes – Terminate front end access
>>Proceed? y/n/f
>>Broadcast message to all Prime Consoles: All processors will shut down in -x minutes – Terminate front end access
>>Input create user authority codes
>>Input verify user authority codes
>>Proceed? y/n/f
>Broadcast message to all Prime Consoles: All processors will shut down in -x minutes – All front end access will be denied!
>>Input create user authority codes
>>Input verify user authority codes
>>Proceed? y/n/f
>Broadcast message to all Prime Consoles: All processors will halt and power down in -x minutes
>>Input create user authority codes
>>Input verify user authority codes
>>Proceed? y/n/f
>Halt UPS
>>Input create user authority codes
>>Input verify user authority codes
>>Broadcast message to all Prime Consoles: UPS power terminated!
>>Proceed? y/n/f
>>Broadcast message to all Prime Consoles: All processors will halt and power down immediately
>>Proceed? y/n/f
>Offline _ _ _
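The staged flow above can be sketched in outline as follows. This is a loose modern sketch, not any vendor's actual console interface; every name in it (run_stage, AUTHORITY_CODES and so on) is invented for illustration. The point it captures is that every stage broadcasts first and then requires both the create and verify authority codes before it proceeds.

```python
# A minimal sketch of the staged shutdown procedure shown above.
# All names and code values here are hypothetical.
AUTHORITY_CODES = {"create": "alpha-1", "verify": "bravo-2"}  # illustrative only

def authorised(create_code: str, verify_code: str) -> bool:
    """Both the create and the verify codes must match before any stage proceeds."""
    return (create_code == AUTHORITY_CODES["create"]
            and verify_code == AUTHORITY_CODES["verify"])

def run_stage(name: str, create_code: str, verify_code: str, log: list) -> bool:
    """Broadcast the stage, check the dual authority codes, then proceed or abort."""
    log.append(f"Broadcast to all Prime Consoles: {name}")
    if not authorised(create_code, verify_code):
        log.append(f"Stage '{name}' aborted: authority codes rejected")
        return False
    log.append(f"Stage '{name}' complete")
    return True

log = []
stages = ["Dismount All", "Halt All", "Terminate front end access", "Halt UPS"]
for stage in stages:
    if not run_stage(stage, "alpha-1", "bravo-2", log):
        break
```

A wrong code at any stage aborts the whole sequence, which mirrors why the real procedure took so long: nothing advanced without two matching authorisations.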

One other reason why shutting down such data centres took so long was to ensure that every single user of the data centre knew in advance, and was warned, that someone was going to pull the plug. Indeed, trust had to be earned in both the logical and the physical attributes of every input device.

Circa 2001. Enter the current day of, say, the year 2001 (we really don't want to get in the way of Y2K, 2038 and other system range endpoints!). Welcome the PC and the Internet.

…(second Instalment to follow 1 of 6)

Maybe Wikipedia would be interested in your story, or entering hibernation may be an answer, zzzzzz…

There were a number of input devices at the Data Centre that held the highest level of rust.

Yes, I knew some people like that. lol!

Why did I actually read it? :slight_smile:

Part 2 of 6

Circa 2001. Enter the current day of, say, the year 2001 (we really don't want to get in the way of Y2K, 2038 and other system range endpoints!). Welcome the PC and the Internet.

If you would rather just read BIND (Berkeley Internet Name Domain) as your bedtime companion for the next few years, please leave now.

If you would rather read every Request For Comments (RFC) about BIND, you may also leave, for another bedside companion and another few years. If you're willing for me to make some assumptions and leave out chunks, then stay.

If you do want to stay, I'm going to cut many corners and make big statements that I cannot easily reference.

The issue of trust is now the opposite of the model above: devices are trusted by default, rather than untrusted by default.

The Internet, being a web-type structure, assumes that the users of the system are spread broadly all over the world. The massive rise in fast, cheap data communications has its hand up, while fibre-optic cable and communications satellites expand exponentially, often without physical logic; physical logic can almost be ignored as more and more systems join bilateral agreements with one another.

Our communications are key, and where trust once needed to be earned, trust is now granted to any device that holds its hand up wanting to join the network.

Behind these logical networks sit real people at their keyboards, and where once trust needed to be granted to both the user and the input device, every device and user is now trusted by default.

Think of the Internet as a spider's web. The web is anchored by many, many top-level strands, all identical. From each of these, the web hangs multiple strands off the upper strands, and these go on to lower strands.

The web gets larger as more strands are added each day. The strands that form the web become more complex, yet the web can withstand huge winds and remain intact, however intricate it may be.

The web constantly adds adjustments, strands upon strands, and so the web perpetuates itself, with an almost endless number of strands holding up an ever more complex construction.

The Internet is not so different. It has multiple beginnings at its top level, which we call the root servers and Top Level Domains. Here each top-level domain must hold a complete copy of the entire web.

Physical constructions give way to logical constructions, and after that the dynamic nature of the web is not finite. Logic has no finite nature; but our logic must hold up against the winds of change, our ever-evolving technology and the growing number of PCs. The web grows physically and logically at an alarming, exponential rate each and every day.

The Internet also has to maintain a communications protocol to connect everything. In the example of the first pages, we saw that the communications front end held all the data security files that permitted a terminal to access the main site.

Forget that scenario now. The communication protocol of the Internet is an intrinsic part of its logical evolution. Rather than permitting a PC to gain access, it provides complete access to every single device that wants to be, or is, connected to the Internet.

In this stark openness we have logical ways of providing restricted access where trust again needs to be granted before access is gained.

This new type of trust comes in the form of 'root certificates'. O.K., here's the drill: every computer, physical or logical device, needs to trust someone. In the example of our mainframe, ultimate trust was afforded to a small number of Prime Console input devices.

Ultimate trust was then given by the hard coding of the operating system and, to a lesser extent, the applications themselves.

Well short of this ultimate trust, the front-end communications vehicle handed out differing amounts of trust, and it was a rare occasion that an exemption from the front end's control ever needed to be made.

Back to current days… The Internet and the communications protocol are one single, inseparable logical value! There is NO front-end communications device as such.

Yes, even the Internet must afford ultimate trust to someone. By and large this is taken care of by the root servers; in reality, however, ultimate trust is everywhere.

The Internet grows exponentially larger every day, and if security tables of trust needed to be maintained, they would be impossible to manage. The aspect of trust is almost completely the role of the communications protocol… Time to give this protocol a name: it's actually a collection of different parts all rolled into what we call TCP/IP.

TCP/IP has been around for a very, very long time, and you may be surprised to learn that its basic structure is almost unchanged from its inception back in 1969… Oh! You doubt me? O.K., see

From now on, if you have any more doubts about my accuracy, please leave now. You can have a bedside companion in the form of books for the next 5 years if you want to verify my accuracy. Oh! By the way, you'll have to read BIND and every RFC, as discussed in the first instalment!

O.K., now that my readership has shrunk a little, let's get on with it! Seriously, I do make mistakes, and I need to cut huge corners, just as I described in the first instalment…

Part 2 ends

OOOPs! Even I make errors; without errors life is boring… :slight_smile:

Premature comments on the first part of 6 are neither helpful nor constructive to others who may want to read this 6-part story… You can always change the channel you're watching…

On a quiet note, and between you and me and the rest of the world: I have also known people who set out to read the whole of BIND. It does not matter that it gets bigger almost every day; I have had a quiet giggle about those who would want to read it. There ARE those who keep both BIND and every RFC as their bedside companions, which frightens me just a tad :-)

On 02/09/2011 02:36 AM, zczc2311 wrote:
> You can always change the channel your watching…

did they close down all the blogs on the net?
the free ones also??

Soapbox is for “Strong opinions about mostly anything (no political or
religious content)”…

so why not begin your tome with a strong opinion (which i’ve not seen
in chapter one or two)…or at least begin with a “Why i write this
and submit it here:” and then use the rest to support your opinion…

actually, please don’t do that…this is a forum of openSUSE users
trying to help openSUSE users and nurture a ‘community’ of new users
up to guru status…and this “Soapbox” is only here to try to keep
“strong opinions” out of the help fora where they just serve to cause
disruptions and fist fights…

another way to say that is, imHo Soapbox is not here to collect
diatribes on how trustworthy the internet is (if that is your
‘strong opinion’—i can’t tell and . . . and i promise i will not
read through to chapter six to find out… and i LIKE pondering
philosophy, sociology, psychology, anthropology, marketing, group
dynamics, leadership, management, team building and etc etc etc, and
so far yours does not wind my watch…ymmv)

CAVEAT: http://is.gd/bpoMD
[NNTP posted w/openSUSE 11.3, KDE4.5.5, Thunderbird3.0.11, nVidia
173.14.28 3D, Athlon 64 3000+]
“It is far easier to read, understand and follow the instructions than
to undo the problems caused by not.” DD 23 Jan 11

Thank you for your support as always, Denver D. I found the comment from 'Shaman Penguin' quite frightening, for all the good reasons you have mentioned. I don't mind constructive criticism at all; it builds better communications. But for 'Shaman Penguin' to make such a strong comment after the first instalment ruined so much for others who do want to read… and as for his 'zzzzzzzzzz', perhaps it's best he really does read BIND… lol… Thanks mate :slight_smile:

With the communications protocol intertwined with the structure of the Internet, it takes on the protective role once played by front-end communications systems.

In our open Internet structure the trust tables are gone, as we have seen above, so any degree of authority or restriction is now provided by the communications protocol, TCP/IP.

If you have a quick look at the wiki link, you will see that the original specifications and applied use of TCP/IP were a joint venture by Bell Labs, the American military and AT&T.

(By the way, if you have ever wondered why a telephone keypad is laid out the opposite way around to a PC keyboard's number pad, you can blame it on a fight between AT&T, Bell Labs and a few others.)

The development of TCP/IP that occurred between AT&T, Bell Labs and the US military was soon abandoned as a desirable communications protocol. The reasons why it was abandoned we'll talk about in this instalment.

Collectively, TCP/IP offered an open form of communications and had the desirable property of needing NO front end to administer trust. The protocol itself administered trust when required.

This is going to be complicated, so I'll borrow extracts from the original concerns that became obvious during its development; it is these very same concerns that led the project to be abandoned by the originators of its inception, only to be picked up whole and mostly unchanged when the Internet arrived.

We are not concerned with flaws in particular implementations of the protocols, such as those used by… Rather, we discuss generic problems with the protocols themselves. As will be seen, careful implementation techniques can alleviate or prevent some of these problems. Some of the protocols we discuss are derived from Berkeley's version of the UNIX® system; others are generic… protocols.

We are also not concerned with classic network attacks, such as physical eavesdropping, or altered or injected messages. We discuss such problems only in so far as they are facilitated or possible because of protocol problems. For the most part, there is no discussion here of vendor-specific protocols. We do discuss some problems with Berkeley's protocols, since these have become de facto standards for many vendors, and not just for UNIX systems.


These initial concerns were very serious ones. As the protocol was to determine and provide trust by itself, it was open to abuse. Abuse did not come in the form of 'eavesdropping'; abuse came in the form of the protocol being able, when required, to escalate its own authority. In our Linux example, the protocol itself could become root when required and then drop back to a user-type level, without anyone authorising trust.

Trust was now determined solely by the protocol. I'm not talking about 'root'-type escalation of authority; I'm talking about its ability to escalate its own authority to an unlimited type: the kind of trust that, in instalment #1, was afforded to the limited number of Prime Consoles that bypassed the front end.

This was not the sole reason why its development was abandoned by AT&T, Bell Labs and the U.S. military, but it was a sizeable one.

I'll try to extract more of the simplest explanations of this innate ability, but if you would rather go straight to this link, that's O.K.


One of the more fascinating security holes was first described by Morris[7]. Briefly, he used TCP sequence number prediction to construct a TCP packet sequence without ever receiving any responses from the server. This allowed him to spoof a trusted host on a local network.

The normal TCP connection establishment sequence involves a 3-way handshake. The client selects and transmits an initial sequence number ISN_C, the server acknowledges it and sends its own sequence number ISN_S, and the client acknowledges that. Following those three messages, data transmission may take place. The exchange may be shown schematically as follows:

C→S: SYN(ISN_C)
S→C: SYN(ISN_S), ACK(ISN_C)
C→S: ACK(ISN_S)
C→S: data

and / or

S→C: data
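The three-message exchange just described can be modelled in a few lines. This is a toy sketch of the logic only, with no real networking; the point it captures is that the server releases data transmission only to a party that can echo ISN_S back, which a legitimate client heard but a blind spoofer must guess.

```python
# Toy model of the TCP three-way handshake: data flows only after the
# client acknowledges the server's ISN. Not a real TCP stack.
import random

def handshake(client_guess=None):
    isn_c = random.randrange(2**32)   # client's own ISN (illustrative only)
    isn_s = random.randrange(2**32)   # server's ISN, sent back to the client
    # A legitimate client echoes the ISN it heard; a spoofer, who never
    # sees the server's reply, must supply a guess instead.
    ack = isn_s if client_guess is None else client_guess
    return ack == isn_s               # server accepts data only on a match

print(handshake())   # True: the legitimate client always succeeds
```

Everything that follows in the extract is about how an attacker can make that guess with unnerving accuracy.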

That is, for a conversation to take place, C must first hear ISN_S, a more or less random number.
Suppose, though, that there was a way for an intruder X to predict ISN_S. In that case, it could send the following sequence to impersonate trusted host T:

X→S: SYN(ISN_X), SRC = T
S→T: SYN(ISN_S), ACK(ISN_X)
X→S: ACK(ISN_S), SRC = T, nasty-data

Even though the message S→T does not go to X, X was able to know its contents, and hence could send data. If X were to perform this attack on a connection that allows command execution (i.e., the Berkeley rsh server), malicious commands could be executed.
How, then, to predict the random ISN? In Berkeley systems, the initial sequence number variable is incremented by a constant amount once per second, and by half that amount each time a connection is initiated. Thus, if one initiates a legitimate connection and observes the ISN_S used, one can calculate, with a high degree of confidence, the ISN_S′ to be used on the next connection attempt. Morris points out that the reply message

S→T: SYN(ISN_S), ACK(ISN_X) does not in fact vanish down a black hole; rather, the real host T will receive it and attempt to reset the connection. This is not a serious obstacle. Morris found that by impersonating a server port on T, and by flooding that port with apparent connection requests, he could generate queue overflows that would make it likely that the S→T message would be lost. Alternatively, one could wait until T was down for routine maintenance or a reboot.

A variant on this TCP sequence number attack, not described by Morris, exploits the netstat service. In this attack, the intruder impersonates a host that is down. If netstat is available on the target host, it may supply the necessary sequence number information on another port; this eliminates all need to guess.

Defences

Obviously, the key to this attack is the relatively coarse rate of change of the initial sequence number variable on Berkeley systems. The TCP specification requires that this variable be incremented approximately 250,000 times per second; Berkeley is using a much slower rate. However, the critical factor is the granularity, not the average rate. The change from an increment of 128 per second in 4.2BSD to 125,000 per second in 4.3BSD is meaningless, even though the latter is within a factor of two of the specified rate*
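Morris's prediction trick, using the Berkeley increment figures quoted above (128 per second in 4.2BSD, and half that per new connection), can be sketched like this. The timing model is deliberately simplified: one observed ISN plus the elapsed time and connection count is enough to compute the next one exactly.

```python
# Sketch of ISN prediction against a coarse, fixed-rate counter.
# Constants follow the 4.2BSD figures quoted in the text.
ISN_PER_SECOND = 128      # fixed increment once per second
ISN_PER_CONNECT = 64      # half that amount per connection initiated

def next_isn(observed_isn: int, seconds_elapsed: float, connections_since: int) -> int:
    """Predict the ISN the server will use next, modulo the 32-bit sequence space."""
    step = int(seconds_elapsed * ISN_PER_SECOND) + connections_since * ISN_PER_CONNECT
    return (observed_isn + step) % 2**32

# Attacker probes once, then spoofs one second and one connection later:
print(next_isn(1_000_000, seconds_elapsed=1.0, connections_since=1))  # 1000192
```

With a deterministic counter like this, "prediction" is just arithmetic, which is exactly the weakness the defences below try to remove.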

The netstat protocol is obsolete, but is still present on some Internet hosts. Security concerns were not behind its elimination.

*Let us consider whether a counter that operated at a true 250,000 Hz rate would help. For simplicity's sake, we will ignore the problem of other connections occurring, and consider only the fixed rate of change of this counter. To learn a current sequence number, one must send a SYN packet and receive a response, as follows:

X→S: SYN(ISN_X) (1)
S→X: SYN(ISN_S), ACK(ISN_X)

The first spoof packet, which triggers generation of the next sequence number, can immediately follow the server's response to the probe packet:

X→S: SYN(ISN_X), SRC = T (2)

The sequence number ISN_S used in the response

S→T: SYN(ISN_S), ACK(ISN_X)

is uniquely determined by the time between the origination of message (1) and the receipt at the server of message (1). But this number is precisely the round-trip time between X and S.

Thus, if the spoofer can accurately measure (and predict) that time, even a 4-microsecond clock will not defeat this attack.

How accurately can the trip time be measured? If we assume that stability is good, we can probably bound it within 10 milliseconds or so. Clearly, the Internet does not exhibit such stability over the long term[9], but it is often good enough over the short term[2]. There is thus an uncertainty of 2500 in the possible value for ISN_S. If each trial takes 5 seconds, to allow time to re-measure the round-trip time, an intruder would have a reasonable likelihood of succeeding in 7500 seconds, and a near-certainty within a day. More predictable (i.e., higher-quality) networks, or more accurate measurements, would improve the odds even further in the intruder's favour. Clearly, simply following the letter of the TCP specification is not good enough.

We have thus far tacitly assumed that no processing takes place on the target host. In fact, some processing does take place when a new request comes in; the amount of variability in this processing is critical. On a 6 MIPS machine, one tick (4 microseconds) is about 25 instructions. There is thus considerable sensitivity to the exact instruction path followed. High-priority interrupts, or a slightly different TCB allocation sequence, will have a comparatively large effect on the actual value of the next sequence number.

This randomizing effect is of considerable advantage to the target. It should be noted, though, that faster machines are more vulnerable to this attack, since the variability of the instruction path will take less real time, and hence affect the increment less. And of course, CPU speeds are increasing rapidly.

This suggests another solution to sequence number attacks: randomizing the increment. Care must be taken to use sufficient bits; if, say, only the low-order 8 bits were picked randomly, and the granularity of the increment was coarse, the intruder’s work factor is only multiplied by 256. A combination of a fine-granularity increment and a small random number generator, or just a 32-bit generator, is better. Note, though, that many pseudo-random number generators are easily invertible.

In fact, given that most such generators work via feedback of their output, the enemy could simply compute the next ‘‘random’’ number to be picked. Some hybrid techniques have promise — using a 32-bit generator, for example, but only emitting 16 bits of it — but brute-force attacks could succeed at determining the seed.

One would need at least 16 bits of random data in each increment, and perhaps more, to defeat probes from the network, but that might leave too few bits to guard against a search for the seed. More research or simulations are needed to determine the proper parameters.
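The warning about invertible generators is easy to demonstrate with a plain linear congruential generator, whose full output IS its internal state: one observed value yields every future value. The constants below are the well-known Numerical Recipes LCG parameters, chosen purely for illustration.

```python
# Why an invertible/stateless-output generator is dangerous for ISNs:
# an LCG's output equals its next state, so prediction is trivial.
A, C, M = 1664525, 1013904223, 2**32   # classic Numerical Recipes LCG constants

def lcg(state: int) -> int:
    return (A * state + C) % M

seed = 123456789
observed = lcg(seed)        # the one "random" increment the attacker sees
predicted = lcg(observed)   # attacker feeds the output straight back in...
actual = lcg(lcg(seed))     # ...and gets the victim's next value exactly
print(predicted == actual)  # True
```

Emitting only some of the bits (say 16 of 32) raises the attacker's work factor, but as the extract notes, a brute-force search for the seed can still succeed.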

At the moment, the Internet may not have such stability even over the short-term, especially on long-haul connections. It is not comforting to know that the security of a network relies on its low quality of service.

Rather than go to such lengths, it is simpler to use a cryptographic algorithm (or device) for ISN_S generation. The Data Encryption Standard (DES) in electronic codebook mode is an attractive choice as the ISN_S source, with a simple counter as input. Alternatively, DES could be used in output feedback mode without an additional counter. Either way, great care must be taken to select the key used. The time-of-day at boot time is not adequate; sufficiently good information about reboot times is often available to an intruder, thereby permitting a brute-force attack. If, however, the reboot time is encrypted with a per-host secret key, the generator cannot be cracked with any reasonable effort.
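A sketch of that idea: a keyed function applied to a simple counter. The paper proposes DES in electronic codebook mode; since DES is not in Python's standard library, HMAC-SHA256 stands in here for the keyed block cipher, truncated to 32 bits for the sequence space. The key value is, of course, illustrative only.

```python
# Cryptographic ISN generation in the spirit of the extract: a keyed
# function over a counter. HMAC-SHA256 is a stand-in for DES-ECB.
import hashlib
import hmac

SECRET_KEY = b"per-host secret, set at install time"   # illustrative value only

def isn_from_counter(counter: int) -> int:
    """Derive a 32-bit initial sequence number from a per-host key and counter."""
    digest = hmac.new(SECRET_KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

# Consecutive counters yield unrelated-looking ISNs an observer cannot
# extrapolate without the per-host key.
print(isn_from_counter(1) != isn_from_counter(2))   # True
```

The design point is the same as the paper's: without the per-host secret, observing one ISN tells the attacker nothing useful about the next.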

Performance of the initial sequence number generator is not a problem. New sequence numbers are needed only once per connection, and even a software implementation of DES will suffice. Encryption times of 2.3 milliseconds on a 1 MIPS processor have been reported[13].
An additional defence involves good logging and alerting mechanisms.

Measurements of the round-trip time, essential for attacking compliant hosts, would most likely be carried out using ICMP Ping messages; a 'transponder' function could log excessive ping requests. Other, perhaps more applicable, timing measurement techniques would involve attempted TCP connections; these connections are conspicuously short-lived, and may not even complete SYN processing. Similarly, spoofing an active host will eventually generate unusual types of RST packets; these should not occur often, and should be logged.

There is a huge chunk of information above to digest, so I'll stop this instalment right here.

Next instalment we'll go through the above information in much plainer language, and I'll make every effort I can to make this very heavy and verbose information more palatable.

I'm not going to just dump this stuff on you and not return. Every new instalment will get more complex, and I will serve you better as a guide than as the authority itself. If you want to jump ship, that's O.K., but I'm not about to leave you with ANY chunk of information without making a great effort to be your guide.

Many will now understand where we are going. The above alludes to the necessity of some type of ultimate trust. The problem highlighted above, of how to provide something tangible whose trust cannot simply be established logically, is why we have Root Certificates of Trust in the structure of the Internet.

As I have said earlier, we need to trust someone. That someone is in reality the many people and authorities listed in your browser's list of Trusted Identities. We are still very far from getting from here to root certificates of trust, as it's not possible to jump that far ahead…

I suspect not frightening enough, “Strident Penguin”. I decided after a couple of paragraphs of the monologue, that someone with such zest for output might be hard pressed to deal with input like “constructive criticism”, or indeed any input. In spite of acknowledging DenverD’s “support”, Block #2 arrives down the channel, clearly without any hint of reconstruction or any signs of an attempt at reconstruction – so much for “constructive criticism”. Wherein lies the problem?

At a guess, I would have to go for the bottleneck being an output-bound system, with an overloaded processor that is "disabled to interrupts". I didn't need to read Block 2 to recognize the formatting and block length. The operating system isn't doing too well either, given this comment:

to make such a strong comment after the first instalment ruined so much for others who do want to read

That’s rich coming from one who continually invites readers to leave, or switch off the channel. One who makes such silly comments, cannot have much faith in one’s own output. Those who wish to read will not be deterred by strong comment any more or less than those who wish to comment will be undeterred by your protestations.

I also suspect that following DenverD’s big hint as to the inappropriate format, content, and placement of your “diatribe”, making excuses has become the name of the game. :slight_smile:

Down under where I am in Oz, it's Friday already. I normally take Friday, Saturday and Sunday off and won't even open the door to my office. I just wanted to reassure anyone who is following the story that I'm not about to dump you all with the last chunk of Part 3.

Now I become a guide rather than an authority. The first two parts were very important, and you can all see why I had to make authoritative statements!

There's not much on the Net about the time when Intel's silicon chip was still a dream, and even fewer people are still alive who were very much of that period.

A printed circuit board was exciting to look at, and there were very few of them. I started off in my 20s, but please don't make the mistake of thinking that what was done with early data centres was only small.

The first data centre I worked on had over 2,000 terminals and printers attached to it. We did not measure CPU power by translating floating-point operations into a single number.

We did, however, process nearly 3,000 transactions per second in peak periods, and the acceptable wait time from hitting Enter to response was less than a 300th of a second… The transactions may have been very small, but in those days keeping the data centre running 24/7 for over 5 years without a shutdown was no small accomplishment.

Fixes to code could be done online; however, if you were responsible for testing a new module offline and then authorising it to be put online, and something went wrong, you were lucky to keep your job. ANY downtime or reduction in performance was operationally NOT acceptable and would have amounted to huge monetary losses had it happened. So stick that in your GUI screen today and try to hope for such performance… lol :):):) rotfl!

I must have touched a very raw nerve that was far more than attitude… I was sent a hardware bomb that took out level 3 of my 4 hardware defences. I'll be back to continue as soon as I can. Thanks for the private messages of encouragement; it's nice to know that maybe the mist is clearing for a few.
While I'm away, and this will help in following instalments: why is the first assignment you are given to code a 'Random Number Generator'? I'm not asking everyone to reply, but… hey… it will come in handy to think about it. BTW, forget pi, and forget perfect and prime numbers in their natural expression, in your thoughts on how you might do this. Sorry, I'll be back in about a week… TA - SC

In the small town of Swindon, UK, sits a very large silver box. What does the box currently do, or what is its current purpose? I'm not talking about the Magic Roundabout!! :stuck_out_tongue:

Given the heckling nature of forums, I was stupid to entertain the idea that I could explain the difference between the days when trust needed to be earned and today's status, where trust is granted with absolute authority by default. The obtuse heckling from one single, uninformed, belligerent source that comments on just about every forum category makes turning text reality into current fact wasteful.
Whilst I was committed to taking this subject to its six-part conclusion, the sheer belligerent attitude and ignorance from one source has won the day… My idea of recounting the 1990 I.T. environment and comparing it against the Internet and its useless protocol has, with the associated belligerence, stopped right now… If you want to learn more, search Google.

Mate, get yourself a website or a blog and self-publish all you want. The nature of a forum is that anybody can comment. You can’t hog the stage here.

On 04/23/2011 08:36 AM, zczc2311 wrote:
> If you want to learn more search goggle

i spent hours on google seeking the place where you post your
wisdom…alas, i am too ignorant to succeed…maybe you would post a URL…

CAVEAT: http://is.gd/bpoMD
[openSUSE 11.3 + KDE4.5.5 + Thunderbird3.1.8 via NNTP]
A Penguin Being Tickled - http://www.youtube.com/watch?v=0GILA0rrR6w

A philosophical text about technology and computers, but zzzzzzz, a bit sleepy. :Z

What he said.

The silver box in Swindon has had three different roles over time; I know, as I have been there for all three.
The silver box that sits next to British Telecom's offices, as well as AT&T's, currently runs Secure Flight, as it's the only box currently large enough to do this, together with the fact that the box had remained unused for many years. The comms used for Secure Flight are secured by VTAM data access trust that has not been given away by default… So before you go shooting your mouth off about other people's countries and cultures, I would get to know your own region, as it's a small piece of land in world terms.
Other comments made by others are neither helpful nor constructive, and I get paid to write tech articles elsewhere!
If I cannot pass on to a receptive audience, to anyone who wants to listen, the story of data farms from their beginnings to the current day, so be it. I lose nothing, and I don't lose sleep over what I am yet to experience, be involved in and learn every day. Polish up on VTAM and you may be able to converse with me with some degree of authority… If you only know TCP/IP and Linux, God help you in a world market.
Secure Flight - Wikipedia, the free encyclopedia