Allow variable declarations mixed with code, also in nested blocks with
proper scoping, and with variable initializers. E.g.:
  function fn(int a)
  {
    int b;
    int c = 10;
    if a > 20 then
    {
      b = 30;
      int d = c * 2;
      print a, b, c, d;
    }
    string s = "Hello";
  }
When an f_line is done, we have to pop the stack frame. The old code just
removed the nominal number of args/vars. Change it to use the stored ventry
value modified by the number of returned values. This allows allocating
variables on a stack frame during execution of f_lines instead of only at
the start. But we need to know the number of returned values for an f_line.
It is 1 for a term, 0 for a cmd. Store that in the f_line during
linearization.
When a new variable used the same name as an existing symbol in an outer
scope, the offset number was assigned based on the scope of the existing
symbol ($3) instead of the scope of the new symbol (sym_). That could lead
to two variables sharing the same memory slot.
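For illustration, a minimal sketch of the affected pattern (the names are
made up, not taken from the actual test case):

  function shadow_demo()
  {
    int x = 1;
    if true then
    {
      int x = 2;      # new symbol shadowing the outer x
      print x;
    }
    print x;          # with the bug, both x's could share one memory slot
  }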
Direct recursion almost worked; it just crashed on the function signature
check. Split function parsing so that the function signature is saved
before the function body is processed. Recursive calls are marked so they
can be avoided during f_same() and similar code walking.
Also, include a Tower of Hanoi solver as a test case.
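For illustration, a directly recursive function of the kind this enables
(a simple sketch, not the actual Hanoi test case):

  function fact(int n)
  {
    if n <= 1 then return 1;
    return n * fact(n - 1);   # direct recursive call to the same function
  }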
Add a literal for the empty set [], which works both for tree-based sets
and prefix sets by using the existing constant promotion mechanism.
Minor changes by committer.
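For example, the new literal can appear wherever a set constant is
expected, for both kinds of sets (illustrative sketch, names made up):

  function empty_set_demo()
  {
    int set nums = [];     # empty tree-based set
    prefix set nets = [];  # the same literal promoted to an empty prefix set
    if 42 ~ nums then print "never reached";
    if 10.0.0.0/8 ~ nets then print "never reached";
  }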
All instructions with a return value (i.e. expressions, ones with non-zero
outval, the third argument in INST()) should declare their return type.
Check that automatically with M4 macros.
Set outval of FI_RETURN to 0. The instruction adds one value to the stack,
but syntactically it is a statement, not an expression.
Add a fake return type declaration to FI_CALL, otherwise the automatic
check would fail builds.
Pass the instructions of function call arguments as vararg arguments to the
FI_CALL instruction constructor and move the necessary magic from parser
code to interpreter / instruction code.
After switching to 16-way tries, the trie format code ignored unaligned /
internal prefixes and only reported the primary prefix of a trie node.
Fix the trie format by showing internal prefixes based on the 'local'
bitmask of a node. Also do basic (intra-node) reconstruction of prefix
patterns by finding common subtrees in the 'local' bitmask.
In the future, we could improve that by doing inter-node reconstruction, so
that prefixes entered as one pattern for a subtree (e.g. 192.168.0.0/18+)
would be reported as such, like aligned prefixes are.
The lexer expression for bytestrings was too loose, also accepting
full-length IPv6 addresses. It should be restricted so that a colon is
used between every byte or not at all.
Fix the regex and also add some test cases for it.
Thanks to Alexander Zubkov for the bug report.
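To illustrate the rule (literal forms only; any surrounding config is
omitted):

  # accepted: colon between every byte, or no colons at all
  #   00:11:22:33:44:55
  #   001122334455
  # no longer accepted as a bytestring: full-length IPv6-like forms,
  # where colons separate two-byte groups
  #   2001:0db8:85a3:0000:0000:8a2e:0370:7334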
Add operators .min and .max to find the minimum or maximum element in sets
of the types clist, eclist and lclist. Example usage:

  bgp_community.min
  bgp_ext_community.max
  filter(bgp_large_community, [(as1, as2, *)]).min
Signed-off-by: Alexander Zubkov <green@qrator.net>
For convenience, trie functions generally accept as input not only NET_IPx
nets, but also NET_VPNx and NET_ROAx nets. Returned values are always of
NET_IPx types.
The prefix trie now supports a longest-prefix-match query through the
function trie_match_longest_ipX(), and it can be extended to iteration
over all covering prefixes of a given prefix (from longest to shortest)
using the TRIE_WALK_TO_ROOT_IPx() macro.
Trie walking allows enumeration of prefixes in a trie in the usual
lexicographic order. Optionally, trie enumeration can be restricted
to a chosen subnet (and its descendants).
Add trie tests intended as benchmarks that use external datasets instead
of generated prefixes. As the datasets are not included, the tests are
commented out by default.
Use 16-way (4-bit) branching in the prefix trie instead of basic binary
branching. The change makes IPv4 prefix sets almost 3x faster, but with
more memory consumption and a much more complicated algorithm.
Together with a previous filter change, it makes IPv4 prefix sets about
4.3x faster and slightly smaller (on my test data).
For numeric operators, a comma is used for disjunction in expressions like
"10, 20, 30..40". But for bitmask operators, a comma is used for
conjunction in a way that does not really make much sense. Always use
explicit logical operators (&& and ||) to connect bitmask operators.
Thanks to Matt Corallo for the bug report.
Add support for setting or reading outgoing MPLS labels using filters.
Currently this supports the addition of one label per route, for the first
next hop.
Minor changes by committer.
Broken detection of the top-level case caused a crash when return was
called from the top-of-stack position. It should behave as reject/accept.
Thanks to Damian Zaremba for the bug report.
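A sketch of the pattern that used to crash (the filter itself is
illustrative):

  filter direct_return
  {
    # return at the top level of a filter, not inside a function;
    # it should act as reject/accept depending on the value
    if net.len > 24 then return false;
    return true;
  }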
Add a 'weight' route attribute that allows getting and setting the ECMP
weight of nexthops. Like the 'gw' attribute, it is limited to the first
nexthop, but it is useful for handling BGP multipath, where an ECMP route
is merged from multiple regular routes.
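Example usage (a sketch, assuming an ECMP route produced by BGP multipath
or the kernel protocol's merge paths option):

  filter limit_ecmp_weight
  {
    # 'weight' reads and sets the ECMP weight of the first nexthop
    if weight > 10 then weight = 10;
    accept;
  }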
Compare the content of PM_ASN_SET in path masks. A reconfiguration was not
properly triggering a reload of affected protocols when the members of a
set in a path mask changed.
Also, update the printing code so that it can display sets in a path mask.
Add a missing return statement. Path masks with the same length were all
considered the same. Comparing two with different lengths would cause an
out-of-bounds memory access.
Implement a regex-like '+' operator in BGP path masks to match the previous
path mask item multiple times. This is useful as ASNs may appear multiple
times in paths due to path prepending for traffic engineering purposes.
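For instance, a sketch of how the operator might be used (the ASN and the
filter are made up):

  filter from_prepended_origin
  {
    # match paths originated by AS 64500, even when it is prepended,
    # i.e. 64500 repeated one or more times at the end of the path
    if bgp_path ~ [= * 64500+ =] then accept;
    reject;
  }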
Do not apply the dynamic type check to the second argument of AND/OR, as
it is not evaluated immediately the way a regular argument would be.
Thanks to Mikael for the bug report.
Initial parsing of test.conf must be done directly in the filter_test main
function, while reconfiguration is handled as a regular test. Also fix
several minor issues in the test code.