 

TCP-group 1991



MAC sublayer protocol



>Date: Mon, 14 Jan 91 07:16:43 PST
>From: 14-Jan-1991 1008 <goldstein@carafe.enet.dec.com>

>BTW we had this discussion a year or two ago and maybe more than once...

Is there a good summary or archive of those discussions somewhere?  I'm
sure that I'm rehashing some old stuff.

>Token passing requires fast t/r turnaround, or each passing station will
>add lots of delay.  With TxD's in the 300 ms. range,  it's pretty
>pathetic with most ham sets.  Purpose-built radios would fix this but
>most hams have voice radios. (Mine has real relays.)

Agreed.  Am I wrong in assuming that 56k and microwave packet systems
are purpose-built radios, and so exhibit much shorter turn-on/turn-off
delays than low-speed modems connected to standard voice radios?
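
To get a feel for why the turnaround time dominates, here is a back-of-envelope
sketch in Python; the station count, token frame time, and TxD values are my
own illustrative assumptions, not measurements:

    # Dead time per full token rotation just from t/r turnaround, when every
    # station must key up at least once to pass the token, data or not.
    def rotation_overhead(stations, txd_s, token_frame_s):
        return stations * (txd_s + token_frame_s)

    # Voice radio with relays (~300 ms TxD) vs. a purpose-built data radio
    # (~2 ms), 10 stations on channel, ~3 ms token frame at 56 kbit/s.
    for txd in (0.300, 0.002):
        print(f"TxD {txd*1000:>5.0f} ms -> "
              f"{rotation_overhead(10, txd, 0.003):.2f} s per rotation")

That works out to roughly 3 seconds of overhead per rotation with the voice
radio versus about 0.05 seconds with the fast radio, before a single data
frame moves.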

For technical/installed base/compatibility reasons many (most?) of the
existing low and moderate speed nets would not be able to (or want to)
migrate to a new protocol.  However, when dealing with direct 56k user
access to backbones, as opposed to 1200/2400/9.6k access to a 56k backbone,
it seems the idea might still have merit.  And for point-to-point,
omni-to-gain types of connections and microwave links, it almost seems
like a requirement.

>Token passing has problems adding and subtracting stations.  There's
>some reconfiguration delay whenever that occurs.  Hams drop in and out
>all the time on access channels -- certainly I do.  That overhead alone
>would be very bad.

This seems the crux of the matter.  Controlling and limiting the reconfig
overhead is crucial.  However, if every so often one of the stations in
the ring/star solicits bids to enter the ring/star, with multiple bids
being resolved by 'normal' contention/collision backoff algorithms, then
we limit the time slots in which collisions can occur.  Naively, I assume
that the overhead generated during these access contention periods can't
be much worse than the contention collisions that could happen 100% of
the time under the existing scheme.  Again I'm assuming TxD's of much less
than 300ms can be achieved.
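
To make that a bit more concrete, a rough Python sketch of the bidding idea
(the interval, slot count, and backoff rule are all placeholders of mine, not
a worked-out design):

    import random

    SOLICIT_INTERVAL = 10.0   # seconds between bid solicitations (assumed)
    CONTENTION_SLOTS = 8      # slots per contention window (assumed)

    def choose_bid_slot(backoff_exponent):
        """A joining station picks a random slot, doubling its window
        (binary exponential backoff) after each collision."""
        window = min(CONTENTION_SLOTS * (2 ** backoff_exponent), 256)
        return random.randrange(window)

    def resolve_bids(bids):
        """bids is a list of (station, slot).  A bid alone in its slot wins;
        colliding bidders back off and try again next solicitation."""
        slots = {}
        for station, slot in bids:
            slots.setdefault(slot, []).append(station)
        winners = [s[0] for s in slots.values() if len(s) == 1]
        losers = [st for s in slots.values() if len(s) > 1 for st in s]
        return winners, losers

The point is just that collisions are confined to these short windows instead
of being possible at any time, as they are on a pure CSMA channel.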

>And tokens mean that everybody "on channel" has to transmit regularly,
>whether or not they have anything to send.  Given the turnaround times
>and the nature of ham operation, this is a minus -- it's like automated
>net operation without a net control, but worse.

I disagree.  After a certain period of inactivity, a station should
be dropped from the channel.  'Dropped' means that it is not automatically
passed the token, but has to contend to be added back into the ring.
It can still listen, of course, and is automatically rejoined to the ring
and passed the token when any AX.25 packet is sent to it (since in most
cases we want to be able to ack the packet in a connected mode).  The
exact mechanisms are still fuzzy in my mind, which is why I'm asking for
criticism here.  The general idea is that the "ring" consists only of
stations that have something to say and/or have transmitted/received data
(other than token or ring maintenance info) recently; in fact the ring can
degenerate to zero stations and go completely quiet if there is no activity
(maybe??).  This means that the regional network behaves as an ALOHA/CSMA
system under light load and gradually migrates to a token-based system
under heavy load.  The definitions of 'inactivity' and 'recently' are
central, and may actually vary dynamically with network loading.
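
For illustration, a minimal sketch of that membership rule in Python (the
timeout value and the structure are placeholders; as noted, the timeout
could itself vary with channel load):

    import time

    INACTIVITY_TIMEOUT = 60.0   # seconds; assumed, could track network load

    class Ring:
        def __init__(self):
            self.last_activity = {}   # callsign -> time of last real data

        def note_activity(self, callsign):
            """Data (not token/maintenance traffic) sent to or from this
            station; this also (re)joins it to the token rotation."""
            self.last_activity[callsign] = time.time()

        def prune(self):
            """Drop stations idle past the timeout.  If everyone is idle the
            ring empties and the channel falls back to plain CSMA/ALOHA."""
            now = time.time()
            self.last_activity = {c: t for c, t in self.last_activity.items()
                                  if now - t < INACTIVITY_TIMEOUT}

        def members(self):
            return list(self.last_activity)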

A possible plus is that it is exceedingly difficult (impossible?) to establish
priority routing on an Aloha/CSMA channel.  A token-based system would
allow interactive/telnet type traffic to be given priority over FTP or
SMTP traffic on heavily loaded (saturated) links.  This would seem to
address concerns/objections that some people have to TCP/IP's existence at
all.  The gain may not be worth the complexity, but it's a thought.
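
Roughly what I mean, as a Python sketch (the queue split and the byte budget
per token hold are assumptions for illustration only):

    from collections import deque

    class StationQueues:
        def __init__(self):
            self.interactive = deque()   # small, latency-sensitive frames
            self.bulk = deque()          # FTP/SMTP transfers

        def frames_for_token(self, byte_budget):
            """Frames (byte strings) to send during this token hold,
            interactive traffic first, bulk only with leftover budget."""
            out, used = [], 0
            for queue in (self.interactive, self.bulk):
                while queue and used + len(queue[0]) <= byte_budget:
                    frame = queue.popleft()
                    out.append(frame)
                    used += len(frame)
            return out

On a saturated link the bulk queue simply waits; on a lightly loaded one both
queues drain every rotation and the priority costs nothing.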

>Existing single-freq. access channels are indeed cruddy due to their
>basically Aloha nature.  But if we used simple repeaters (even baseband
>regenerators, like audio-style repeaters), we'd have true CSMA and no
>HTS.  Then the existing p-persistent pseudo-CSMA would work.

While doubling the spectrum usage of a 100kHz wide 56k signal seems
wasteful, I have heard mention of a narrower beacon being transmitted
by a central digipeater while it is receiving.  This gives almost the
same effect, allowing CSMA, but does not give the collision detection
that a full repeater might allow.  It would still support p-persistent
CSMA as you said, with the efficiency largely a function of turn-on
and propagation delays.  Doesn't this require a separate receiver
at each site to detect 'carrier sense', since without a full repeater
the site would have to monitor both the main frequency and the beacon
frequency simultaneously in order to keep from missing incoming packets
when it has something to send?
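
For what it's worth, here is how I picture the transmit decision, as a Python
sketch (p, the slot time, and the beacon-receiver interface are all my own
placeholder assumptions):

    import random

    P_PERSIST = 0.25    # transmit probability per slot (assumed)
    SLOT_TIME = 0.01    # seconds; should cover TxD plus propagation (assumed)

    def try_to_send(beacon_busy, have_frame):
        """Decide once per slot whether to key up.
        beacon_busy: callable returning carrier sense from the separate
        receiver watching the digipeater's beacon frequency."""
        if not have_frame or beacon_busy():
            return False                  # defer: channel busy or queue empty
        return random.random() < P_PERSIST

The main radio's receiver stays on the main frequency, so incoming packets
aren't missed while the beacon receiver supplies the carrier sense.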

Are there other reasons besides spectrum usage that split frequency
full duplex systems aren't used on packet?  It does seem the simpler
approach if enough spectrum is available.

-Steve




