Several variables within structures contain the same information in
different forms.
The aircraft structure contains addr and hexaddr. Hexaddr is a printable
string version of addr. However, hexaddr is only ever used in printf
statements, so we can use printf("%06x") to print addr directly. This
saves a printf call for every received message.
The modesMessage structure contains addr, plus aa1, aa2 and aa3 as
separate bytes. aa1, aa2 and aa3 are only ever used to construct addr,
and to print out, so we can use addr instead of them.
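For illustration, a minimal sketch of the idea (field names follow the
text above; the exact dump1090 code may differ):

    /* Print the 24-bit ICAO address directly instead of keeping hexaddr */
    printf("%06x", a->addr);

    /* Recover the three AA bytes from addr on demand, instead of storing them */
    unsigned char aa1 = (unsigned char)((mm->addr >> 16) & 0xFF);
    unsigned char aa2 = (unsigned char)((mm->addr >>  8) & 0xFF);
    unsigned char aa3 = (unsigned char)( mm->addr        & 0xFF);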
The original code made an attempt to reconcile a newly arrived Mode A/C
message with every known Mode-S report at the time of detection.
However, the results of matching up Modes A/C and S are only used in the
interactive display routine, and that is only called periodically from
within the backgroundTasks loop.
Doing the matching on every Mode A/C arrival incurs quite a large CPU
processing load. Moving the matching routine into the backgroundTasks
loop means that the task is performed much less frequently and therefore
uses less CPU time, as sketched below.
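Schematically, the new call pattern looks like this (routine names are
illustrative, not the exact dump1090 functions):

    static void matchModeAToModeS(void);    /* the (expensive) reconcile pass */
    static void interactiveShowData(void);  /* the periodic display refresh   */

    static void backgroundTasks(void) {
        /* The Mode A/C to Mode-S matching now runs here, once per
         * background pass, instead of on every Mode A/C arrival. */
        matchModeAToModeS();
        interactiveShowData();
    }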
Allow a greater range and negative values for Mode C (down to -1200
feet)
Stop attempting to feed ModeA/C data to SBS Output stream.
Allow Mode A only matches to Mode-S squawks when the Mode A code does
not conflict with any possible (legal) Mode C code.
Allow Mode C matches to track aircraft climbing and descending
relatively slowly. This also helps when trying to match Mode-S
altitudes, which are in 25 foot increments, with Mode C altitudes, which
are in 100 foot increments.
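A minimal sketch of the altitude comparison, using a hypothetical helper
and an illustrative tolerance of one Mode C increment:

    #include <stdlib.h>

    /* Accept a Mode C altitude as matching a Mode S altitude if they agree
     * to within 100 feet, which allows for the coarser Mode C quantisation
     * and a relatively slow climb or descent between reports. */
    static int modeCMatchesModeS(int modeC_ft, int modeS_ft) {
        return (abs(modeC_ft - modeS_ft) <= 100);
    }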
First attempt at decoding legacy SSR Modes A and C.
If the command line switch --modeac is used, the program will now
attempt to recover Mode A/C signals contained in the raw I/Q data
stream. The current recovery mechanism is quite strict and does not cope
well with overlapping and corrupt SSR replies. I estimate that less than
20% of possible returns are decoded correctly. Hopefully over the next
few iterations this can be improved.
If outputting raw data, it is recommended to use the --net-ro-size and
--net-ro-rate command line options to reduce the number of very small
Ethernet packets that will be generated by Mode A/C replies.
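For example, an invocation along these lines (option values purely
illustrative):

    ./dump1090 --modeac --net-ro-size 500 --net-ro-rate 5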
Thanks to vk1et for these.
1) Correct for the additional timestamp length in the raw output buffer
when using mlat mode
2) Don't output a timestamp when the message has been received from a
remote site (the internet). This is to avoid upsetting MLAT because
there is an indeterminate delay between reception at the remote site and
subsequent message arrival in the local dump1090 instance.
3) Allow @ character for raw data input with timestamp, and correctly
calculate the length.
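A sketch of the length calculation for 3), assuming the '@' form carries
a 12-hex-character (6 byte) timestamp before the message body:

    #include <string.h>

    /* Returns the number of hex characters in the message body of a raw
     * input line, or -1 if the line is not raw data. */
    static int rawMessageHexLen(const char *line) {
        int len = (int) strcspn(line, ";");          /* chars before the ';' */
        if (line[0] == '@') return (len - 1 - 12);   /* drop '@' + timestamp */
        if (line[0] == '*') return (len - 1);        /* drop '*' only        */
        return -1;
    }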
The original code calculates Lat/Long only if it receives two DF-17
(subtype 9 or 18) within one second of each other. I have no idea why.
It then caches the results in the aircraft's data structure for use in
the --interactive display.
When SBS-1 style ASCII output is selected (port 30003) the code does not
attempt to calculate Lat/Long from the data just received - instead it
picks it up from the cached information in the aircraft's data
structure.
However, if the data isn't being updated, this results in stale Lat/Long
being sent out. This is most likely to occur when the aircraft is at the
extreme edge of the receiver's range, when it may be getting some DF-17s
containing Lat/Long, but not 2 per second. The program will continue
sending the stale data until the aircraft finally times out (default 60
seconds).
I have modified the code to include a sbsflags variable in the aircraft
data structure. When a new Lat/Long is decoded and put into the
structure, a bit is set to indicate SBS_LAT_LONG_FRESH. Then, once the
Lat/Long is output the first time, the bit is cleared. Thereafter the
code will not populate the Lat/Long fields in the output message until
SBS_LAT_LONG_FRESH is set again.
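A minimal sketch of the flag handling (the flag name comes from the text
above; the surrounding code is illustrative):

    #define MODES_SBS_LAT_LONG_FRESH (1 << 0)

    struct aircraft {double lat, lon; int sbsflags; /* ... */};

    static void updatePosition(struct aircraft *a, double lat, double lon) {
        a->lat = lat;
        a->lon = lon;
        a->sbsflags |= MODES_SBS_LAT_LONG_FRESH;    /* mark as fresh */
    }

    /* Called when building an SBS record: returns 1 (and consumes the
     * flag) if the cached Lat/Long may be output, 0 if it is stale. */
    static int latLongIsFresh(struct aircraft *a) {
        if (!(a->sbsflags & MODES_SBS_LAT_LONG_FRESH))
            return 0;
        a->sbsflags &= ~MODES_SBS_LAT_LONG_FRESH;
        return 1;
    }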
The default is 0. This works in conjunction with --net-ro-size.
The program will attempt to gather up "net-ro-size" raw bytes before
sending them out over Ethernet. However, to avoid a long wait when the
traffic density is very low, the program will wait no more than
"net-ro-rate" 64 ms periods since the last send before sending any data
added to the output buffer. This allows the user to tailor the network
load to suit their requirements.
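A sketch of the resulting send decision, covering both options (function
and variable names illustrative):

    /* Flush if we have gathered enough bytes, or if too many 64 ms
     * periods have elapsed since the last send. */
    static int shouldFlushRawOutput(int bytesBuffered, int netRoSize,
                                    int periodsSinceSend, int netRoRate) {
        if (bytesBuffered == 0)            return 0;
        if (bytesBuffered >= netRoSize)    return 1;
        if (periodsSinceSend >= netRoRate) return 1;
        return 0;
    }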
Move the memcpy to outside the main bit loop, and just flip the modified
bit back at the end of each loop iteration if it didn't work.
Pass in a pointer to the mm structure being corrected, and fix up the
crc with the value inside the function, rather than re-calculating on
return.
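A minimal sketch of the in-place bit flip (helper name is illustrative):

    /* Toggle one message bit in place; toggling it a second time undoes
     * the change, so no memcpy of the whole message is needed. */
    static void toggleBit(unsigned char *msg, int bit) {
        msg[bit >> 3] ^= (unsigned char)(1 << (7 - (bit & 7)));
    }

Typical use inside the error-correction loop: toggle bit j, re-test the
CRC, and if the message still fails, toggle bit j back and move on to
the next candidate bit.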
1) Populate Field 3 with "111"
2) Populate Field 4 with "11111"
3) Populate Field 6 with "111111"
4) End the record with <CRLF>, rather than just <LF>
5) Increase the ctrCommon buffer size to cope with the additional field
data
Apparently, this makes the output more compatible with Plane Plotter and
RTL1090.
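A sketch of the record framing (field layout abbreviated, trailing
fields omitted; buffer and variable names illustrative):

    char rec[256];
    int  n = snprintf(rec, sizeof(rec),
                      "MSG,%d,111,11111,%06X,111111,%s,%s\r\n",
                      msgType, addr, dateStr, timeStr);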
Allow the user to specify the minimum size of raw data to be sent to the
TCP port. Dump1090 will buffer up raw data until it has at least this
many bytes to send to the TCP socket.
The default is 0, which means every frame is sent to the TCP socket as
it is decoded. The maximum value is limited to 1300 bytes.
Note the buffer will be flushed every 65 ms regardless of the amount of
data in it, so that excessive delays in transmission do not occur.
The original code wrote every individual received frame to the TCP port.
Some O/S's buffer smaller writes into larger packets. It appears that
some versions of Linux don't. The result is that the (Ethernet) network
gets bombarded with lots of small Ethernet packets.
Therefore, I've added a 1500 byte output buffer to the raw output
functions. Data is written into this buffer by the raw output routines.
Data is flushed out to the TCP port when either:
1) The latest write to the output buffer takes the contents to more than
1300 bytes, or
2) We reach the end of a processed block of data supplied by rtl-sdr,
which will be every 56 ms or so.
The end result should be that on systems detecting a lot of traffic, you
should see lots of > 1300-byte Ethernet packets. On systems receiving
less traffic, you should see one network packet every 56 ms or so.
The total number of network packets should be much reduced, and their
average size much bigger. The worst case delay in transmission will be
56 ms.
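A sketch of the buffered write path (buffer sizes from the text; names
illustrative):

    #include <string.h>

    #define RAW_OUT_BUF_SIZE 1500
    #define RAW_OUT_FLUSH_AT 1300

    static char rawOutBuf[RAW_OUT_BUF_SIZE];
    static int  rawOutUsed;

    static void rawOutFlush(void);   /* writes rawOutBuf to the TCP clients */

    static void rawOutWrite(const char *data, int len) {
        memcpy(&rawOutBuf[rawOutUsed], data, len);
        rawOutUsed += len;
        if (rawOutUsed > RAW_OUT_FLUSH_AT)
            rawOutFlush();   /* condition 1): threshold exceeded */
        /* condition 2): rawOutFlush() is also called at the end of
         * every rtl-sdr block, i.e. every 56 ms or so */
    }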
As requested by mlino
Note : I haven't been able to validate that the format is correct. I
think it should be Ok, but it needs someone with an SBS setup to check
it. Any offers?
M$ VC 6.0 does not like long long.
UNIX compilers don't like printing int64_t or uint64_t with %lld.
Raspberry Pi Linux doesn't like PRId64.
So I give up.
I've changed the affected variables to bog-standard unsigned ints.
Assuming these compile as 32-bit unsigneds, it's unlikely you'll have
the program running long enough for these to overflow.
If noise/sampling/Nyquist errors cause bit detection errors in the DF
part of the frame, then we may not be able to work out the correct
length of the message. We have to guess whether the bit should be a 0 or
a 1.
In such circumstances we assume the message length is long (112 bits).
However, if we start to get encoding errors after bit 56, then we
attempt to change our original guess at the bit and invert it. If this
change of guess would have resulted in a short message, and if the short
message would have been error free, then we can recover.
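In outline, the decision looks something like this (a sketch, not the
exact code):

    /* Decide the message length after trying both guesses for the
     * ambiguous DF bit. */
    static int chooseMsgLenBits(int errorsAfterBit56, int shortMsgWasClean) {
        if (errorsAfterBit56 && shortMsgWasClean)
            return 56;    /* invert the guess: treat it as a short message */
        return 112;       /* otherwise stick with the long-message default */
    }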
--mlat option introduced: displays raw data in Beast ASCII format with a
counter (@...;); does not affect the Beast binary format.
--interactive-rtl1090 option introduced: the order of the flight table
(in interactive mode) and some formats are adapted to the RTL1090
format, so comparison is easier.
Change (and hopefully improve) the Message bit decoder.
When decoding bits, a "0" is encoded as (m[j]<m[j+1]), and a "1" is
(m[j]>m[j+1]). However, there is a problem if m[j] == m[j+1], because we
can't decide what it's supposed to be.
Antirez's original code defaults to a '1', and then lets the bit error
detection code sort it out. I *think* it's better to default to '0',
because it's more likely that noise added to the signal will produce a
spurious '1' rather than anything subtracting from the signal to produce
a spurious '0'.
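So the per-bit decision becomes, in sketch form:

    #include <stdint.h>

    /* Decode one bit period from its two magnitude samples. */
    static int decodeBit(const uint16_t *pPayload) {
        if (pPayload[0] > pPayload[1])
            return 1;
        /* pPayload[0] <= pPayload[1], including the undecidable equal
         * case, now defaults to '0' rather than '1'. */
        return 0;
    }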
Also, Antirez's code only looks for errors in the first bit of the
message. I don't know why this is.
There is a potential problem in deciding the message length if there are
any errors in the first 5 bits decoded, because this defines the message
type (the DF-??), and it's the DF that determines how many bits the
message shall contain.
If there is an error in the first 5 bits, then we could ignore the DF
and continue decoding as a long format (112 bits). However, for weak
signals, if the message is really a short one (56 bits), this results in
the sigStrength decaying to the point where its level drops below the
squelch, so we discard a possibly decodable 56-bit message.
However, if we assume it's a short message and only decode 56 bits, and
it's really a long message, we won't have decoded all the bits.
Not sure what to do about this.
Three changes in this one
1) Change the checksum testing for DF-11
2) Recode the Checksum generation routine to use pointers.
3) Tidy up the appearance of some print debug statements
Change the I/Q lookup table for better detection. Changes fully
described in the source dump1090.c at line 347 onwards. This change
results in about 30% more frames being detected at weak signal input
levels.
Also a bug fix from the last commit - C doesn't support the min()
function.
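(The usual portable replacement is a macro along these lines:

    #define MIN(a,b) ((a) < (b) ? (a) : (b))

though care is needed if the arguments have side effects.)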
Original code loops through the analogue array m[] detecting data bits
and putting them into the bits[] array. It then loops through all the
bits[] creating the msg[] byte array. It then loops through the analogue
array m[] again calculating the signal strength.
Change this so that everything is done in one loop so we can go straight
from analogue samples to bytes, calculating the signal strength on the
fly.
Also use the results of the signal strength calculation to populate the
message record's mm.signalLevel variable.
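In sketch form the fused loop might look like this (all names
illustrative; pPayload is assumed to point at the first data-bit sample,
and the signal strength formula is one plausible choice):

    uint32_t      sigStrength = 0;
    unsigned char theByte = 0;
    unsigned char msg[14];   /* room for up to 112 bits */
    int           i;

    for (i = 0; i < msgBits; i++) {
        uint16_t a = pPayload[i * 2];
        uint16_t b = pPayload[i * 2 + 1];
        theByte <<= 1;
        if (a > b) {theByte |= 1; sigStrength += (a - b);}  /* a '1' bit */
        else       {              sigStrength += (b - a);}  /* a '0' bit */
        if ((i & 7) == 7) {msg[i >> 3] = theByte; theByte = 0;}
    }
    mm.signalLevel = sigStrength / msgBits;  /* strength computed on the fly */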
Create a pointer, pPayload, which points to the start of the data bits
in the analogue sample buffer m[], so pPayload =
&m[j+MODES_PREAMBLE_SAMPLES]. Then use this pointer to perform the data
bit detection tests. It should save a few CPU cycles per test because
accessing pPayload[2] should be quicker than
m[2+j+MODES_PREAMBLE_SAMPLES].
Also change the way phase correction works. The original code saves the
original data (from pPayload) to aux[], then phase corrects m[], and
then restores aux[] back to m[] afterwards. Change this so that m[] is
copied to aux[], phase correction is carried out in aux[], and the
pPayload pointer points to aux[]. This avoids the requirement to copy
aux[] back to m[] afterwards, which saves a fair few CPU cycles.
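A sketch of the pointer handling (applyPhaseCorrection is an
illustrative name; aux[] is assumed to be an array sized for one
message's samples):

    uint16_t *pPayload = &m[j + MODES_PREAMBLE_SAMPLES];

    if (usePhaseCorrection) {
        /* Copy into aux[] once, correct there, and decode via aux[];
         * no copy-back into m[] is needed afterwards. */
        memcpy(aux, pPayload, sizeof(aux));
        applyPhaseCorrection(aux);
        pPayload = aux;
    }
    /* ...all subsequent bit detection tests use pPayload[n]... */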
Create a preamble pointer in the message detector loop
Create a pointer, pPreamble, which points to the start of the preamble
in the analogue sample buffer m[], so pPreamble = &m[p]. Then use this
pointer to perform the preamble detection tests. It should save a few
CPU cycles per test because accessing pPreamble[2] should be quicker
than m[p+2].
Also move the decision on whether to try OutOfPhase correction to the
end of the first pass, rather than automatically going into phase
correction if the first pass fails. This saves two memcpy's if the
decision in the second pass is to not do phase correction.
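A sketch of the first few preamble tests through the new pointer (only
the leading pulse/gap checks shown):

    uint16_t *pPreamble = &m[p];

    if (!(pPreamble[0] > pPreamble[1] &&    /* expected pulse/gap shape */
          pPreamble[1] < pPreamble[2] &&
          pPreamble[2] > pPreamble[3] &&
          pPreamble[3] < pPreamble[0]))
        continue;    /* not a plausible preamble; try the next sample */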
Change the following so that M$ compilers and debuggers complain less
1) change all long long data types to uint64_t.
2) Typecast all malloc function returns to the correct pointer types.
3) Explicitly typecast all float to int conversions.
4) Remove inline variable declaration. Allowed in C++, but not in C.
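Illustrative examples of 1) and 2):

    #include <stdint.h>
    #include <stdlib.h>

    uint64_t  timestamp = 0;    /* was: long long */
    uint16_t *buf = (uint16_t *) malloc(65536 * sizeof(uint16_t));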
Apparently, the Beast output timestamp has a frequency of 12 MHz.
Therefore, I've updated the timestamp counter to simulate a 12 MHz
frequency.
Also incorporate terribl's latest updates.
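As a sketch, assuming the default 2 MHz sample rate, each sample
advances the counter by 12 MHz / 2 MHz = 6 ticks:

    #define TICKS_PER_SAMPLE 6   /* 12 MHz clock / 2 MHz sample rate */

    timestamp += (uint64_t) samplesProcessed * TICKS_PER_SAMPLE;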
Increase the speed of the I/Q to magnitude calculation lookup by
expanding the table to 65536 entries (256*256*2 bytes). At runtime, this
allows us to pick up raw I/Q bytes as a 16 bit value and index into the
magnitude table to get a 16 bit result. This removes the need for
subtracting 127, and then correcting for -ve numbers, so should be
faster, at the expense of a larger data table.
Change the maglut lookup table from 129*129 to 256*256
Initialise the maglut buffer accordingly
Change the data->maglut lookup to use the new maglut buffer
Change the I/Q data buffer pointer to a uint16_t *
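A sketch of building and using the expanded table (the scale factor and
byte-order handling are illustrative):

    #include <math.h>
    #include <stdint.h>
    #include <stdlib.h>

    uint16_t *maglut = (uint16_t *) malloc(256 * 256 * sizeof(uint16_t));
    int i, q;
    for (i = 0; i <= 255; i++) {
        for (q = 0; q <= 255; q++) {
            /* The subtract-127 and negative-value handling are baked in */
            maglut[(i * 256) + q] = (uint16_t)
                round(sqrt((i - 127) * (i - 127) + (q - 127) * (q - 127)) * 360);
        }
    }

    /* At runtime an I/Q byte pair is read as one 16-bit index (the table
     * must be initialised to match the machine's byte order):
     *     uint16_t mag = maglut[pIQ[k]];   // pIQ is a uint16_t *        */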
I messed up merging the Squawk display in interactive mode into my
master.
However, the original source posted by terribl causes a print line
length greater than 80 characters. This in turn causes the lines to
spill over on a terminal display. I have therefore re-formatted some of
the output so that it fits within 80 characters.