DNET

DNET is a proprietary network protocol suite created by the Swedish company Dataindustrier AB (DIAB), originally deployed on their Databoard products. It was based upon X.25, which was particularly popular in European telecommunications circles at that time. In that incarnation it was rated at 1 Mbit/s over RS-422.
In the 1980s, ISC Systems Corporation (ISC) purchased DNET as part of its purchase of DNIX and ported it to run over Ethernet. ISC chose DNET over TCP/IP partly because of the relative light weight of the DNET protocol stack, which allowed it to run more efficiently on the target machinery. DNET was also auto-configuring, so the local network required no manual configuration; all that was needed was for each machine in a network to be given a unique name. This simplicity was advantageous in ISC's market.
Being based on X.25, DNET was connection-oriented, datagram-based (as opposed to a byte stream), supported out-of-band (interrupt) messages, and provided link-down notifications to its clients and servers so that applications did not have to provide their own heartbeats. In the financial community these were all considered advantages over, say, TCP/IP. DNET also supported Wide Area Networks (WAN) using X.25 point-to-point communication links, either leased line or dialup (see also Data link). (WAN support ''did'' require manual configuration of the gateway machines.)
DNET provided named network services, and supported a multicast protocol for finding them. Clients would ask for a named service, and the first respondent (of potentially many) would get the connection. Servers could either be resident, in which case they registered their service name(s) with the protocol stack when they were started, or transient, in which case a fresh server was forked/execed for each client connection.
DNET at ISC consisted of the following services:
* netman (the main networking client/server support handler)
* raccess (remote file access via /net/machine/path/from/raccess/root...)
* rx (remote execution)
* ncu (network login)
* bootserver (diskless workstation boot service)
* dmap (ruptime analog)
There were ''many'' more services than these at a typical DNET installation; the ones listed here are merely representative.
== netman ==
netman was the main component of DNET. It was a DNIX Handler, usually mounted on /netphys, and was responsible for all Layer 2 and Layer 3 X.25 protocol handling. It talked to the Ethernet and HDLC device drivers, and also provided the service name registry and the WAN gateway functionality. Resident servers could also use, at their own instigation, a Layer 3 protocol stack (called 'serverprot') between themselves and netman, allowing them to support up to 4095 client connections through a single file descriptor (to netman). Such servers were called ''complex'' resident servers, so named because of the relatively complicated (though not large) bit of protocol code that had to be included to handle the multiplexing and flow control. Simple resident and transient servers consumed one file descriptor per client connection. It was possible to run more than one netman process, for testing or other special purposes; such a process would be configured to use a different Ethertype and handler mount point, at a minimum. The /usr/lib/net/servtab file was the usual location of the configuration file controlling WAN configuration and transient servers.
Client applications would open /netphys/servicename; this would normally result in an open connection to a server somewhere, possibly even on the same machine. Resident servers would open /netphys/listen/servicename, which registered their service name with netman. Transient servers were pre-registered via their entry in servtab, and were forked/execed with their connection already established by netman. Machine-specific services (such as ncu, the network login service) would contain the machine name as part of the service name; installation-specific services (such as dmap, a site's machine status servers) would not.
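A minimal sketch of these open conventions, assuming an ordinary POSIX-style open(2)/close(2) as provided by DNIX (the service name dmap is used purely for illustration):
<syntaxhighlight lang="c">
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Client side: opening the service name asks netman to find a server
     * (anywhere on the network) and returns an established connection. */
    int client_fd = open("/netphys/dmap", O_RDWR);
    if (client_fd < 0)
        perror("client open /netphys/dmap");

    /* Resident-server side: opening under /netphys/listen registers the
     * service name with netman so that future clients can reach it. */
    int listen_fd = open("/netphys/listen/dmap", O_RDWR);
    if (listen_fd < 0)
        perror("server open /netphys/listen/dmap");

    if (client_fd >= 0)
        close(client_fd);
    if (listen_fd >= 0)
        close(listen_fd);
    return 0;
}
</syntaxhighlight>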
Service name resolution was handled entirely between netman processes. A client's representative would multicast the desired service name to the network using a MUI (Multicast Unnumbered Information) extension to X.25. Responses indicating server availability were directed (not multicast) back by potential server representatives. When there was more than one respondent to the multicast (as was normal for, say, dmap), the first one was selected for opening a connection; only one server was ever contacted per client service request. As with all UI-class messages in X.25, packet loss was possible, so the MUI exchange was attempted up to three times if there was no response.
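The resolution rule can be illustrated with the following sketch. It is not DNET source; send_mui_request() and wait_for_first_response() are hypothetical stand-ins for netman's internal MUI machinery:
<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdio.h>

static void send_mui_request(const char *name)
{
    /* Stand-in for multicasting a MUI frame carrying the service name. */
    printf("multicast MUI request for \"%s\"\n", name);
}

static bool wait_for_first_response(char *addr, int addrlen)
{
    /* Stand-in: a real implementation would wait briefly for directed
     * responses and record the address of the first respondent. */
    (void)addr;
    (void)addrlen;
    return false;
}

/* Multicast the name and take the first respondent; retry up to three
 * times because UI-class frames may be lost. */
static bool resolve_service(const char *name, char *addr, int addrlen)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        send_mui_request(name);
        if (wait_for_first_response(addr, addrlen))
            return true;        /* first answer wins; only it is contacted */
    }
    return false;               /* no server offered the name */
}

int main(void)
{
    char addr[64];
    if (!resolve_service("dmap", addr, sizeof addr))
        puts("no server offered the service");
    return 0;
}
</syntaxhighlight>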
The X.25 nature of connections, namely datagram control, was exposed to applications (both client and server) as an extra control byte at the beginning of each read and write through a connection. As was customary in network header processing, this byte was usually accessed at a -1 offset within an application's networking code; only the buffer allocation and the read(2)/write(2) calls were usually aware of it. This byte contained the X.25 M, D, and Q bits (for More, Delivery, and Qualifier). DNET never implemented the D (delivery confirmation) bit, but the other two were useful, particularly the M bit. The M bits were how datagrams were delimited; a byte-stream application could safely ignore them. Any read with a clear M bit indicated that the read result contained an entire datagram and could be safely processed. Reads that were too small to contain an entire datagram would get the part that fit into the buffer, with the M bit set, and the M bit would remain set on subsequent reads until a read contained the end of the original datagram. Datagrams were never packed together; a read delivered at most one. Any write with the M bit set would propagate to the other end with the M bit set, indicating to the other end that it should not process the data yet as it was incomplete. (The network was free to coalesce M'd data at its discretion.) The usual application merely wrote an entire datagram at once with a clear M bit, and paired this with a small read loop that accumulated entire datagrams before delivering them to the rest of the application. (Though not often required, due to automatic fragmentation and reassembly within the protocol stack, this protective loop ensured that any exposed fragmentation was never harmful.) The Q bit was a simple marker, and could be used to mark 'special' datagrams; in effect it was a single header bit that could be used to mark metadata.
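The usual accumulation loop might look like the sketch below. The control-byte convention is as described above, but the numeric encoding of the M bit is an assumption made only for the example:
<syntaxhighlight lang="c">
#include <string.h>
#include <unistd.h>

#define DNET_M_BIT 0x01   /* assumed encoding of the M ("More") bit */

/* Read one complete datagram from 'fd' into 'buf'; returns its length,
 * 0 on end of connection, or -1 on error or if the caller's buffer is
 * too small.  Every read starts with the connection's control byte. */
ssize_t read_datagram(int fd, char *buf, size_t buflen)
{
    char chunk[512 + 1];          /* +1 for the leading control byte */
    size_t total = 0;

    for (;;) {
        ssize_t n = read(fd, chunk, sizeof chunk);
        if (n <= 0)
            return n;             /* error or connection closed */

        size_t payload = (size_t)n - 1;          /* strip control byte */
        if (total + payload > buflen)
            return -1;
        memcpy(buf + total, chunk + 1, payload);
        total += payload;

        if (!(chunk[0] & DNET_M_BIT))            /* M clear: datagram complete */
            return (ssize_t)total;
        /* M set: this read was only a fragment; keep accumulating. */
    }
}
</syntaxhighlight>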
Out-of-band (OOB) data, which bypassed all buffering, flow control, and delivery confirmation, was sent via DNIX's ioctl mechanism. It was limited (per X.25) to 32 bytes of data. (Asynchronous I/O reads were usually used so that out-of-band data could be caught at any time.) As with UDP, it was possible to lose OOB data, but this normally happened only if it was overused; if no reader was waiting for it, OOB data was discarded.
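Only the shape of the OOB call can be sketched here; the request code and argument layout below are hypothetical placeholders, since the mechanism (a DNIX ioctl carrying at most 32 bytes) is described but the actual interface is not:
<syntaxhighlight lang="c">
#include <string.h>
#include <sys/ioctl.h>

#define DNET_SENDOOB 0x4f4f42   /* hypothetical request code */

struct dnet_oob {               /* hypothetical argument layout */
    int  len;                   /* at most 32, per the X.25 limit */
    char data[32];
};

/* Send up to 32 bytes of out-of-band data on an open DNET connection.
 * The message bypasses buffering and flow control, and may be lost if
 * no reader is waiting for it on the other end. */
int send_oob(int fd, const void *msg, int len)
{
    struct dnet_oob oob;

    if (len > 32)
        return -1;
    oob.len = len;
    memcpy(oob.data, msg, (size_t)len);
    return ioctl(fd, DNET_SENDOOB, &oob);
}
</syntaxhighlight>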
Flow control was accomplished within the network (between netman processes, and possibly involving external X.25 WAN links) using the usual X.25 mechanisms. It was exposed to applications only insofar as network data reads and writes did or did not block. If a request could be satisfied through the buffering abilities of the netman handler and/or the current state of the connection, it was satisfied immediately without blocking; if the buffering was exceeded, the request blocked until the buffers could satisfy what remained of it. Naturally, asynchronous I/O could be used to insulate the process from this blocking if it would be a problem. Also, complex resident servers used the 'serverprot' X.25 flow control mechanisms internally to avoid ever blocking on their single network file descriptor; this was vital, considering that the file descriptor was shared by up to 4095 client connections.
