The Linkspace Guide

This document is kept up to date on a best-effort basis.
Sometimes the Rust linkspace docs are ahead of this guide.

This is a technical document describing the linkspace packet format and the software library used to create, query, and process packets.

lk --version
linkspace-cli linkspace-cli - 0.4.0 - main - 188b2ed - 1.74.0-nightly

Introduction

Linkspace is an open-source library and protocol for building event-driven applications that use a distributed log as their source of truth. The linkspace packet format is designed to be extremely fast to read, write, and interpret. Depending on your use case, you can limit yourself to just those read/write functions and build your event streams and network from scratch. The library has various tools to make developing increasingly complex systems much easier.

The library is structured around 4 concepts.

  • Point - Plain bytes in any format, optionally with a spacename and links to other points.
  • ABE - Language agnostic (byte) templating for convenience.
  • Query - A list of predicates and options for selecting packets.
  • Runtime - A runtime around a multi-reader single-writer database for saving and querying.

With that foundation, common challenges are addressed by a set of Conventions.

Setup

This guide uses Python and (Bash) CLI snippets.

Binary

The download contains the CLI, the Python library, and the examples used in this document.

Package manager

pip install linkspace

cargo +nightly install linkspace-cli --git https://github.com/AntonSol919/linkspace

Build from source

git clone https://github.com/AntonSol919/linkspace

for users

make install-lk install-python

for development / debug builds

source ./activate builds and sets PATH and PYTHONPATH env variables.

API overview

The linkspace API is the Packet type and a small set of functions. It is available as the Rust crate linkspace, and bindings for other languages follow the same API.

It consists of the following:

Point

Rust docs

Points are the basic units in linkspace. They carry data, link to other points, and might contain information about the who, what, when, and how. There are 3 kinds of points: datapoints, linkpoints, and keypoints. A point has a maximum size of 2^16 - 512 bytes.

The library exposes fields as properties.

The functions automatically generate the hash and prepend a netheader when you build a point. The result is a packet. For simplicity's sake, all functions in the library deal only with packets.

lk_datapoint

echo "Hello, Sol!" | lk data | lk pktf "[hash:str]\n[data]"
Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
Hello, Sol!

from linkspace import *
datap = lk_datapoint(b"Hello, Sol!\n")
print(f'Packed data {datap.data.decode()} into a packet with hash {b64(datap.hash)}')
print(lk_eval2str("Or use abe, a language agnostic template, like this: [hash:str] = [data]", datap))
Packed data Hello, Sol!
 into a packet with hash Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
Or use abe, a language agnostic template, like this: Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk = Hello, Sol!

lk_linkpoint

A linkpoint can hold data, hold links to other points, and can be found by its spacename.

It consists of these fields:

Field       Size
Group       32        the intended recipients
Domain      16        the intended application
Spacename   var<240   sequence of bytes, e.g. '/dir1/dir2/thing'
Stamp       8         big endian UNIX timestamp in microseconds
Links       48*n      a variable-length list of (Tag 16 bytes, Pointer 32 bytes)
- Link[0]   48
- Link[1]   48
- Link[2]   48
- Link[…]   48
Data        var<2^16

Each (Domain, Group) is a 'tree', and each (Domain, Group, Spacename) is a point's 'location'.

All values, including the Spacename, contain arbitrary bytes.

An entire point can be at most 2^16 - 512 bytes in size. The header is always 4 bytes. A data point can hold a maximum of 2^16 - 512 - 4 bytes. This space is shared between the links and data, so beware that too many links plus too much data won't fit into a single packet. To overcome this, create multiple points and link them in a new linkpoint, as sketched below. These limits ensure there is only a single level of fragmentation to deal with.
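
A minimal sketch of that pattern, using only the functions introduced in this chapter (the chunk size and the "data" tag are illustrative choices, not library constants):

from linkspace import *

payload = b"x" * 200_000      # too large for a single point
CHUNK = 60_000                # stays under the 2^16 - 512 limit

chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
datapoints = [lk_datapoint(c) for c in chunks]
# One linkpoint that references every chunk by hash.
index = lk_linkpoint(
    domain=b"example",
    links=[Link(b"data", p.hash) for p in datapoints],
)

The Big data section at the end of this guide shows the same idea done with the CLI.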

Point hashes, GroupIDs, and public keys are 32 bytes.
They are usually encoded in URL-safe no-padding base64, e.g. RD3ltOheG4CrBurUMntnhZ8PtZ6yAYF.
Such strings are hard to read.
The [...] syntax (ABE) allows you to name and manipulate bytes.
The following example shows that [#:pub] resolves to the same 32 bytes in both the Group and the second link.
Furthermore, if no group is provided it defaults to [#:pub].

Datapoints do not have a 'create' field, so the same data always produces the same hash. Linkpoints do: given identical fields and a fixed 'create' stamp, the Python and Bash examples would produce the same hash. By default 'create' is set to the current time (microseconds since epoch), and thus the hashes differ.
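
A quick check of that difference (the two linkpoint hashes will almost always differ because their 'create' stamps do):

from linkspace import *

# Datapoints have no 'create' stamp: same data, same hash.
assert lk_datapoint(b"hi").hash == lk_datapoint(b"hi").hash

# Linkpoints default 'create' to the current time in microseconds.
a = lk_linkpoint(domain=b"example")
b = lk_linkpoint(domain=b"example")
print(a.create, b.create, a.hash == b.hash)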

The command lk link builds one or more linkpoint packets and outputs them to stdout by default. Whenever CLI commands deal with (domain, group, spacename) tuples, these are set by the first argument: DOMAIN:GROUP:SPACENAME. Here two links are added with the tags first_tag_1 and another_tag.

lk link "a_domain:[#:pub]:/dir1/dir2/thing" -- \
          first_tag_1:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk \
          another_tag:[#:pub] \
| lk pktf

type	LinkPoint
hash	wmWoiUUS4dGI6B4HhwFS6RCjhLv-YCk6lqrBNXzhkP4
group	[#:pub]
domain	a_domain
space	/dir1/dir2/thing
pubkey	[@:none]
create	1695205480703156
links	2
	first_tag_1 Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
	another_tag Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk

data	0
echo hello | lk link my_domain --data-stdin | lk pktf
type	LinkPoint
hash	hO5Z5PzoxxQVRsubHawUDrztDOMaBfwesY860giuDwQ
group	[#:pub]
domain	my_domain
space	
pubkey	[@:none]
create	1695205480715346
links	0

data	6
hello

The API deals with arbitrary bytes, not encoded strings. Some examples of Python expressions that produce a value of type 'bytes':

  • "some string".encode()
  • the b"byte notation"
  • fields such as apkt.group, apkt.hash, apkt.domain, etc.
  • evaluating an ABE string with lk_eval.
ptr1 = lk_eval("[b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]")
link1 = Link(tag=b"first tag 1",ptr=ptr1)

ptr2 = lk_eval("[#:pub]")
link2 = Link(b"another tag",ptr2)

assert(link1.ptr == link2.ptr)

datap = lk_datapoint(b"Hello example");
link3 = Link(b"a datapoint",datap.hash)

linkp = lk_linkpoint(
    domain=b"example-domain",
    group=lk_eval("[#:pub]"),
    data=b"Hello, World!",
    links=[link1,link2,link3]
)
str(linkp)
type	LinkPoint
hash	0YezRgYL6ciOBr9xoS10bJS142ihogym9gHvF8w-iYE
group	[#:pub]
domain	example-domain
space	
pubkey	[@:none]
create	1695205480865654
links	3
	first tag 1 Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
	another tag Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
	a datapoint eVo9H3i6aoDZOT7D3FJfG5foxDXtf7f16vneRXeGLIo

data	13
Hello, World!

lk_keypoint

A keypoint is a linkpoint with an additional public key and signature.

There are functions to generate, encrypt, and decrypt a linkspace key, leaving you to deal with saving it. Alternatively, the lk_key function does it all for you, with the added benefit that you can address your own public key as [@:me:local].

export LK_DIR=/tmp/linkspace
lk --init key --decrypt-cost 0 --password "my secret" # --decrypt-cost 0 only speeds up building this doc; omit it in practice
$argon2d$v=19$m=8,t=1,p=1$mSAqlBL7G6ppJrvJ51uyB/RUqJT8thaFt49AP7nfA00$Ou/PiSKr5V+/+/Rt08VGNO4Ab0Le5SYhvL5BD8Lt110
mSAqlBL7G6ppJrvJ51uyB_RUqJT8thaFt49AP7nfA00
lk keypoint "example::" --password "my secret" | lk pktf
type	KeyPoint
hash	SdgyARmRArzp2aSCfLO1Gwonro7k3mjG_eoMPbpSLlE
group	[#:pub]
domain	example
space	
pubkey	[@:me:local]
create	1695205480903545
links	0

data	0

The CLI also accepts lk link --sign instead of lk keypoint

lk = lk_open("/tmp/linkspace",create=True)
key = lk_key(lk,b"my secret");
example_keypoint = lk_keypoint(key=key,domain=b"example")
str(example_keypoint)
type	KeyPoint
hash	gDSjocA5Ifqc1BIw1tVWZWEWILino6csDJ0Vp-T7jXM
group	[#:pub]
domain	example
space	
pubkey	[@:me:local]
create	1695205481050234
links	0

data	0

Fields

In python you can access these fields directly as bytes. Fields are not writable because they are included in the hash.

[attr for attr in dir(lk_linkpoint())  if not "__" in attr]
['comp0', 'comp1', 'comp2', 'comp3', 'comp4', 'comp5', 'comp6', 'comp7', 'comp_list', 'create', 'data', 'depth', 'domain', 'group', 'hash', 'hop', 'links', 'netflags', 'pkt_type', 'pubkey', 'recv', 'rooted_spacename', 'signature', 'size', 'spacename', 'stamp', 'ubits0', 'ubits1', 'ubits2', 'ubits3', 'until']

Here comp0..comp7 are the individual spacename components.

Some fields we've not seen so far are writable, but they are not relevant for most applications.
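
A small illustration (the values in the comments are indicative):

from linkspace import *

pkt = lk_linkpoint(domain=b"example", data=b"hi")
print(pkt.data)        # b'hi'
print(len(pkt.hash))   # 32
print(pkt.domain)      # the 16 byte domain field
print(pkt.create)      # the 8 byte big endian create stamp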

Notes

A group signals the intended set of recipients. A domain signals the activity, and in practice the application used to present an interface to the user.

A group's bytes can be chosen arbitrarily. Membership is enforced by its members. It's up to the user (or some management tool) to pick a method of data exchange.

The following do have a meaning. The [0;32] null group ([#:0]), i.e. the local-only group, is never transmitted to other devices and is never accepted from outside sources. Everything in the [#:pub] group1 is meant for everybody, i.e. the public.

By convention the group created by pubkey1 XOR pubkey2 forms a group with those keys as its only two members.
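
A minimal sketch of that convention; the two keys below are placeholders for the members' real public keys:

from linkspace import *

pubkey1 = bytes(range(32))             # placeholder 32-byte public key
pubkey2 = bytes(reversed(range(32)))   # placeholder 32-byte public key

pair_group = bytes(a ^ b for a, b in zip(pubkey1, pubkey2))
lp = lk_linkpoint(domain=b"chat", group=pair_group, data=b"just between us")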

The [#:...] syntax is part of LNS, a public registry for assigning names and naming rights, e.g. [#:sales:mycomp:com] for groups and [@:alicekey:mycomp:com] for keys.

lk_write and lk_read

The point is the content that is hashed; the packet is a mutable network header, the hash, and the point.

datap = lk_datapoint("hello")
linkp = lk_linkpoint()
keyp = lk_keypoint(key)
packet_bytes = lk_write(datap) + lk_write(linkp) + lk_write(keyp)
print(len(packet_bytes),packet_bytes)

# read the bytes as packets
(p1, packet_bytes)= lk_read(packet_bytes)
(p2, packet_bytes)= lk_read(packet_bytes)
(p3, packet_bytes)= lk_read(packet_bytes)

assert(p1 == datap)
assert(p2 == linkp)
assert(p3 == keyp)
432 b'LK1\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x97b\xe5\x8e\xd1P/\xde\xa8\xc9\x15c\xc4\xce\x00Q\xaf\xf4\x1f\x8f@oc1\xad\x86&3f\x03\x01\x00\x01\x00\thello\xff\xff\xff\xff\xff\xff\xffLK1\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x001\xe0z\tb$I\x8b\xf0\x0fDTFsR\xf0\x99\xd9\x88\xcf\xab\x0c\xd4\x853q\xa5Tek\xd0\x1c\x00\x03\x00@\x00@\x00@\x00\x06\x05\xc7\xc8\x89\xf4\xcbb\xbb;\x8b=\xd5\xceu\xe1\xfa\x88/\xe1\xa3:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00LK1\x00\x00\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\'\xf5\xfd"0\x96\xd0Lf\xa21L\x06\xdcf\xbe_,\xff\xcc\x08j\x1d\xf1i\x18K\x0e\xb5V+\x00\x07\x00\xa0\x00@\x00@\x00\x06\x05\xc7\xc8\x89\xf4\xd0b\xbb;\x8b=\xd5\xceu\xe1\xfa\x88/\xe1\xa3:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x99 *\x94\x12\xfb\x1b\xaai&\xbb\xc9\xe7[\xb2\x07\xf4T\xa8\x94\xfc\xb6\x16\x85\xb7\x8f@?\xb9\xdf\x03M(\xefb\xec\xd0\xc6\x88\x1c:}\xf4\xbfSh\xf1\xc8\x1e\x0c\x97\x9bre\xdf\xb6P\xa7\x01\x81\xccS\xc9\x18\xc7\xccvr.n5O\xc5\xcb\xb4\xff\xfa\x18\x7fc\xb9\xf3L\xb5\xf2N\xfc0\x8a\x06\x04WV\xae\x84\xc9'

The CLI automatically reads and writes in packet format from the relevant pipes.

echo datapoint:
echo -n hello | lk data | tee /tmp/pkts | xxd 
echo linkpoint:
echo -n hello | lk link my_domain:[#:pub]:/hello/world -- link1:[#:0] link2:[#:test] | tee -a /tmp/pkts | xxd 
echo keypoint:
echo -n hello | lk keypoint --password "my secret" my_domain:[#:pub]:/hello/world -- link1:[#:0] link2:[#:test] | tee -a /tmp/pkts | xxd 
cat /tmp/pkts | lk pktf [hash:str]
datapoint:
00000000: 4c4b 3100 0000 0000 ffff ffff ffff ffff  LK1.............
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 0f97 62e5 8ed1 502f dea8 c915 63c4 ce00  ..b...P/....c...
00000030: 51af f41f 8f40 6f63 31ad 8626 3366 0301  Q....@oc1..&3f..
00000040: 0001 0009 6865 6c6c 6fff ffff ffff ffff  ....hello.......
linkpoint:
00000000: 4c4b 3100 0000 0000 ffff ffff ffff ffff  LK1.............
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 9db0 f25e c4f6 7ab1 f3bb 832e 5922 84b2  ...^..z.....Y"..
00000030: d6f8 5f44 e4b7 9da4 9c37 6861 4c11 0988  .._D.....7haL...
00000040: 0003 00b4 00a0 00b4 0006 05c7 c88a 3dde  ..............=.
00000050: 62bb 3b8b 3dd5 ce75 e1fa 882f e1a3 3ad9  b.;.=..u.../..:.
00000060: 598c 3715 c589 3e0f db8b 487d 5cfd b139  Y.7...>...H}\..9
00000070: 0000 0000 0000 006d 795f 646f 6d61 696e  .......my_domain
00000080: 0000 0000 0000 0000 0000 006c 696e 6b31  ...........link1
00000090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000b0: 0000 0000 0000 0000 0000 006c 696e 6b32  ...........link2
000000c0: 2d1b f1bc 0f64 246e 3485 65a5 5b0a 9198  -....d$n4.e.[...
000000d0: b02b 33ce dc82 bfb0 56c2 4741 85b4 367c  .+3.....V.GA..6|
000000e0: 0206 0c0c 0c0c 0c0c 0568 656c 6c6f 0577  .........hello.w
000000f0: 6f72 6c64 ffff ffff                      orld....
keypoint:
00000000: 4c4b 3100 0000 0000 ffff ffff ffff ffff  LK1.............
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 47ed 1020 6ddb ec7f 2fbf 7e2e 8f95 5f72  G.. m.../.~..._r
00000030: f7cc ff99 566a 1277 02a3 ccda cf49 c98e  ....Vj.w.....I..
00000040: 0007 0114 00a0 00b4 0006 05c7 c88a 530e  ..............S.
00000050: 62bb 3b8b 3dd5 ce75 e1fa 882f e1a3 3ad9  b.;.=..u.../..:.
00000060: 598c 3715 c589 3e0f db8b 487d 5cfd b139  Y.7...>...H}\..9
00000070: 0000 0000 0000 006d 795f 646f 6d61 696e  .......my_domain
00000080: 0000 0000 0000 0000 0000 006c 696e 6b31  ...........link1
00000090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
000000b0: 0000 0000 0000 0000 0000 006c 696e 6b32  ...........link2
000000c0: 2d1b f1bc 0f64 246e 3485 65a5 5b0a 9198  -....d$n4.e.[...
000000d0: b02b 33ce dc82 bfb0 56c2 4741 85b4 367c  .+3.....V.GA..6|
000000e0: 0206 0c0c 0c0c 0c0c 0568 656c 6c6f 0577  .........hello.w
000000f0: 6f72 6c64 ffff ffff 9920 2a94 12fb 1baa  orld..... *.....
00000100: 6926 bbc9 e75b b207 f454 a894 fcb6 1685  i&...[...T......
00000110: b78f 403f b9df 034d addb 54e2 1eb2 91d4  ..@?...M..T.....
00000120: eb97 34f0 f5d6 acd8 13b5 7071 30a2 9a61  ..4.......pq0..a
00000130: 5a58 b8e8 7ea6 45be df5f 6788 2d6a 8f87  ZX..~.E.._g.-j..
00000140: b728 0b19 5a9d 97fb 24cd 5c07 9da8 ebf9  .(..Z...$.\.....
00000150: c037 c14c f76b 5145                      .7.L.kQE
D5di5Y7RUC_eqMkVY8TOAFGv9B-PQG9jMa2GJjNmAwE
nbDyXsT2erHzu4MuWSKEstb4X0Tkt52knDdoYUwRCYg
R-0QIG3b7H8vv34uj5VfcvfM_5lWahJ3AqPM2s9JyY4

Linkspace can be used in two general ways: a classic client/server setup where you control the entire network, or a fully distributed mode where each user manages their own runtime. In the latter, an application does not have to deal with IO sockets directly.

ABE

Rust docs

ABE (Ascii-Byte-Expr) is a tiny language-agnostic byte templating engine. Its core structure is a string representation of delimited bytes (of the type: [ ([u8], delimiter) ]). Its primary purpose is to make it easy (for developers) to read and write sequences of bytes (0..=255) in plain ASCII, including the null (0) byte. In addition, it supports evaluation of functions that act as shorthand for long sequences of bytes.

Linkspace packets have no concept of encoding formats. All fields are fixed length or prefix their exact length.

ABE is used for things like queries, printing, and most arguments of the CLI.

ABE is not meant to be a programming language! It's primarily meant to read and write arbitrary bytes in some context and quickly beat them into a desired shape. Some things are limited by design. If there is no obvious way to do something, use a general-purpose language for your use case.

When building an application you can choose where to use ABE and when to use a different encoding.

Basic Encoding

  • Most printable ASCII characters are written as is.
  • Newline is an external delimiter.
  • : and / are internal delimiters, separating two byte expressions.
  • [ and ] wrap an expression.
  • :, /, \, [, ] can be escaped with a \.
  • \x00 up to \xFF for bytes.
  • \0 equals \x00, \f equals \xFF.

We can encode binary into valid abtxt as follows. We'll come back to lk_encode in more depth later.

printf "hello" | lk encode -i
printf "world/" | lk encode -i
printf "nl \n" | lk encode -i
printf "open [ close ]" | lk encode -i
printf "emoji ⌨" | lk encode -i

All fields are arbitrary bytes, and lk_encode can print them as abtxt:

multiline = """newline
tab	""".encode() # encode implies utf-8

lkp = lk_linkpoint(spacename=[b"hello",b"world/",multiline,b"open [ close ]"])

print(lk_encode(lkp.comp0),"\t",list(lkp.comp0))
print(lk_encode(lkp.comp1),"\t",list(lkp.comp1))
print(lk_encode(lkp.comp2),"\t",list(lkp.comp2))
print(lk_encode(lkp.comp3),"\t",list(lkp.comp3))
print(lk_encode(lkp.comp4),"\t",list(lkp.comp4), lkp.comp4.decode("utf-8"))
hello 	 [104, 101, 108, 108, 111]
world\/ 	 [119, 111, 114, 108, 100, 47]
newline\ntab\t 	 [110, 101, 119, 108, 105, 110, 101, 10, 116, 97, 98, 9]
open \[ close \] 	 [111, 112, 101, 110, 32, 91, 32, 99, 108, 111, 115, 101, 32, 93]
 	 []

lk_eval

ABE is evaluated by substituting an expression ( [..] ) with its result. For example in [u8:97], the function 'u8' is called with the arguments ["97"]. The function 'u8' reads the decimal string and writes it as a byte. The byte 97 equals the character 'a'. The byte 99 equals the character 'c'.

lk eval "ab[u8:99]" | xxd
00000000: 6162 63                                  abc
lk eval --json "h[u8:101]ll[u8:111] / world:etc" 
[[null,"hello "],["/"," world"],[":","etc"]]

Note that bytes are joined after evaluating. In the example this results in h + e + ll + o + ' ' => 'hello '. The meaning of the delimiters ('\n', ':', '/') depends on the context. For instance, lk eval prints them 'as is' for the outer expression.

The rest of this chapter explains ABE further in depth.

The ABE functions can shorten your code, and almost every CLI argument is an ABE expression.
These expressions can directly refer to the context, such as a packet.
This makes ABE a powerful tool.

But knowing all of ABE's features is not required to use linkspace.

ABE is a language of convenience.

With a basic grasp of its purpose (reading and writing separated bytes, i.e. [ [u8] ]), expression substitution ([..]), and the ':' and '/' delimiters,
the rest of the guide (starting at Query) can be read while you return here for reference in case something is unclear.

There are two modes for tokenizing (before evaluation).

  • Strict: \n, \t, \r and bytes outside the range 0x20(SPACE)..=0x7e(~) are escaped.
  • Parse Unencoded: bytes outside 0x20..=0x7e are read 'as-is'.

Both error when bytes are incorrectly escaped or when unclosed [ or ] brackets exist.

Sub-expressions

A list of functions/macros can be found by evaluating [help].

Functions
  • [fn]
  • [fn:arg0]
  • [fn:arg0:arg1]

The arguments are plain bytes. A function can take up to 8 arguments. Usually the result is concatenated with its surrounding bytes. The empty function '[:...]' resolves to its first argument.

  • hello [:world] == hello world

Arguments are evaluated before application. [fn0:[fn1]] will call fn1 and use its result as the first argument to fn0.

You can chain results with /. It uses the result as the first argument to the next function.

  • [:97/u8] = [u8:97] = a
  • [:97/u8/?u] = [?u:[u8:97]] = 97
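
In Python, the two chained examples above can be checked directly with lk_eval:

from linkspace import *

assert lk_eval("[u8:97]") == b"a"
assert lk_eval("[:97/u8]") == b"a"       # ':' yields "97", which 'u8' parses into byte 97
assert lk_eval("[:97/u8/?u]") == b"97"   # '?u' prints the byte back as decimal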

You can think of ABE functions as a translation of conventional function calling.

[name:arg1:arg2] name(arg1,arg2)
[name:[other_name:argA]:arg2] name( other_name(argA) , arg2 )
[other_name:argA/name:arg2] name ( other_name(argA) , arg2 )

Functions are aware if they are first or not.
The vast majority of functions do not care.

[[:u8]:97] is explicitly not allowed. Variable function identifiers are conceptually interesting but practically begging for bugs.

Note: describing ABE can be a bit tricky in relation to conventional languages. Specifically, there is no syntax to "reference" a function, they are always resolved to their result. i.e. fn name(){..}; let x = name; let y = name(); has the () syntax to differentiate between calling a function or referencing a value. There are no 'variables' by design, because ABE is not meant to be used that way.

Macros

The second type of operation is applying a macro. Whereas functions are called after their arguments are evaluated, a macro receives everything up to its matching ']' as is, without evaluating [..] expressions.

  • [/a_macro]
  • [/a_macro:arg0:arg1]
  • [/a_macro:[fn:arg0]:arg1/hello]

The /a_macro macro operates on :[fn:arg0]:arg1/hello without it being evaluated.

Scope & Context

Functions and Macros are defined in a scope. Scopes can be chained, so that if no matching function is found it looks in the next scope. The standard scope chain has multiple functions and macros to manipulate bytes. You can see all active scopes with the [help] function.

Sometimes the scope chain is extended with additional context:

Argv

A scope containing functions resolving to an argument vector.

inp = "Rm9ycmVzdA" # the base 64 encoding of the word "Forrest"
lk_eval("[0] [1/b], [0]!",argv=["Run",inp])
b'Run Forrest, Run!'

Packet

By providing a packet, the packet scope is added to the chain. This adds functions such as hash, group, spacename etc. These are bytes that you can use as arguments.

e.g. [hash/?b] encodes the hash in base64.

For convenience all packet fields accept 'str' and 'abe' as a first argument to print them in a default format.

[hash:str] [hash/?b]
[group:str] [group/?b]
[create:str] [create/?u]
[links_len:str] [links_len/?u]
 

The [/links:...] macro iterates over every link in a packet. It evaluates the inner expression for each link, setting the tag and ptr functions.

lk pktf is eval that reads packets from stdin and puts each one in scope.

lk link "::" -- tag1:[#:0] tag2:[#:pub] | \
    lk pktf "HASH:[hash/?b]\n[/links:TAG = [tag:str] PTR = [ptr:str] \n]"
HASH:UKlxiPfr6SMxLi2Fh_5SJ539wYygccadiKBgXiAJIAI
TAG = tag1 PTR = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
TAG = tag2 PTR = Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
lp = lk_linkpoint(links=[Link("hello",PUBLIC),Link("world",PRIVATE)])
lk_eval2str("hash:[hash:str]\\n[/links:[tag:str] [ptr:str]\\n]",pkt=lp)
hash:fmrIKgaXdtpHC5hVbxuhB6RaXwvrKvmAY_TNbSpqCf0
hello Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk
world AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

Runtime

Having a linkspace runtime in the scope gives you access to functions like:

  • # and @ ( see LNS ) for named groups, keys, and other data
  • readhash

When using lk_open, the instance is automatically set up as a scope.

readhash is considered bad practice: it is fine for hacking something together, but it doesn't give you much room to handle errors or async behavior. You can, however, do some wizardry by combining it with [/links].

Usage notes

ABE expressions evaluate into a list of [ (?sep,bytes) ]. Sometimes each element has a different meaning, e.g. [ ( 0, domain ) , ( :, group) ] in the CLI arguments. You can process this list with lk_tokenize_abe.

But in the majority of cases we don't care about the list and only want a single result. lk_eval does just that. It interprets the separators as plain characters.

Finally, consider what you would expect to happen when a macro takes an ABE expression as its final argument:

  • [/links:abc[:hello]/world]
  • [/readhash:[#:pub]:the pkt:[pkt]]
  • [/:hello/world]

The choice was made that if the final argument is an ABE expression that will be evaluated, it doesn't need wrapping []. Instead, the entire tail is interpreted as is. This reduces the need to escape ':' and '/', but complicates some other expressions.

We can add an expression to --write arguments:

lk link :: --write "stdout-expr:hello world:/ [hash:str]"

In the case of a file, this leaves us in the situation that the second argument is the file name and the tail of the expression will be evaluated. One option is to use [/:..] to read ':' and '/' as is:

lk link :: --write "file-expr:[/:./afolder:with/colons]:hello world:/ [hash:str]"

Help

A full list of active scopes can be viewed with the help function.

The following naming conventions are used:

- ending with '?' is a predicate to check a property.
- starting with '?' is a basic reverse operation, e.g. [u8:97/?u] == 97. It is similar to but less powerful than lk_encode, and lacks '[]' brackets.
- b_RADIX_ ( b2, b8, b16 ); 'b' defaults to the base64 radix.
- u_SIZE_ ( u8, .., u128 ) parses a decimal into big endian bytes; ?u interprets big endian bytes and prints them as decimal.

lk_eval2str("[help]",pkt=lk_linkpoint(),argv=["hello"]) # the help won't show up if no scope is set. 
The context has one or more scopes active
Each scope has functions and macros
For each function the option set  ['[' , '/' , '?'] is given
These refers to its use as:
 '['  => Can be used to open   '[func/..]'
 ':'  => Can be used in a pipe '[../func]'
 '?'  => Can be encoded (i.e. 'reversed') to some extend '[../?:func]' || [?:..:func]

# bytes
Byte padding/trimming
## Functions
- ?a               [/           1..=1     encode bytes into ascii-bytes format  
- ?a0              [/           1..=1     encode bytes into ascii-bytes format but strip prefix '0' bytes  
- a                [/?          1..=3     [bytes,length = 16,pad_byte = \0] - alias for 'lpad'  
- f                [/           1..=3     same as 'a' but uses \xff as padding   
- lpad             [/           1..=3     [bytes,length = 16,pad_byte = \0] - left pad input bytes  
- rpad             [/           1..=3     [bytes,length = 16,pad_byte = \0] - right pad input bytes  
- ~lpad            [/           1..=3     [bytes,length = 16,pad_byte = \0] - left pad input bytes  
- ~rpad            [/           1..=3     [bytes,length = 16,pad_byte = \0] - right pad input bytes  
- lcut             [/           1..=2     [bytes,length = 16] - left cut input bytes  
- rcut             [/           1..=2     [bytes,length = 16] - right cut input bytes  
- ~lcut            [/           1..=2     [bytes,length = 16] - lcut without error  
- ~rcut            [/           1..=2     [bytes,length = 16] - lcut without error  
- lfixed           [/           1..=3     [bytes,length = 16,pad_byte = \0] - left pad and cut input bytes  
- rfixed           [/           1..=3     [bytes,length = 16,pad_byte = \0] - right pad and cut input bytes  
- replace          [/           3..=3     [bytes,from,to] - replace pattern from to to  
- slice            [/           1..=4     [bytes,start=0,stop=len,step=1] - python like slice indexing  
- ~utf8            [/           1..=1     lossy encode as utf8  

# UInt
Unsigned integer functions
## Functions
- +                [/           1..=16     Saturating addition. Requires all inputs to be equal size  
- -                [/           1..=16     Saturating subtraction. Requires all inputs to be equal size  
- u8               [/?          1..=1     parse 1 byte  
- u16              [/?          1..=1     parse 2 byte  
- u32              [/?          1..=1     parse 4 byte  
- u64              [/?          1..=1     parse 8 byte  
- u128             [/?          1..=1     parse 16 byte  
- ?u               [/           1..=1     Print big endian bytes as decimal  
- lu               [/           1..=1     parse little endian byte (upto 16)  
- lu8              [/           1..=1     parse 1 little endian byte  
- lu16             [/           1..=1     parse 2 little endian byte  
- lu32             [/           1..=1     parse 4 little endian byte  
- lu64             [/           1..=1     parse 8 little endian byte  
- lu128            [/           1..=1     parse 16 little endian byte  
- ?lu              [/           1..=1     print little endian number  

# base-n
base{2,8,16,32,64} encoding - (b64 is url-safe no-padding)
## Functions
- b2               [/?          1..=1     decode binary  
- b8               [/           1..=1     encode octets  
- b16              [/?          1..=1     decode hex  
- ?b               [/           1..=1     encode base64  
- 2mini            [/           1..=1     encode mini  
- b                [/?          1..=1     decode base64  
- ?bs              [/           1..=1     encode base64 standard padded  
- bs               [/?          1..=1     decode base64 standard padded  

# comment function / void function. evaluates to nothing

## Functions
- C                [/           1..=16     the comment function. all arguments are ignored. evaluates to ''  

# help

## Functions
- help             [/           0..=16     help  
## Macros
- help             desribe current eval context  

# logic ops
ops are : < > = 0 1 
## Functions
- size?            [/           3..=3     [in,OP,VAL] error unless size passes the test ( UNIMPLEMENTED )  
- val?             [/           3..=3     [in,OP,VAL] error unless value passes the test ( UNIMPLMENTED)  
## Macros
- or               :{EXPR}[:{EXPR}]* short circuit evaluate until valid return. Empty is valid, use {_/minsize?} to error on empty  

# encode
attempt an inverse of a set of functions
## Functions
- eval             [/           1..=1     parse and evaluate  
- ?                [/           2..=8     encode  
- ??               [/           2..=8     encode - strip out '[' ']'  
- ???              [/           2..=8     encode - strip out '[func:' + ']'  
## Macros
- ?                find an abe encoding for the value trying multiple reversal functions - [/fn:{opts}]*   
- ~?               same as '?' but ignores all errors  
- e                eval inner expression list. Useful to avoid escapes: eg file:{/e:/some/dir:thing}:opts does not require escapes the '/'   

# static-lns
static lns for local only [#:0] and public [#:pub]
## Functions
- #                [ ?          1..=16     resolve #:0 , #:pub, and #:test without a db  
- @                [ ?          1..=16     resolve @:none  

# microseconds
utilities for microseconds values (big endian u64 microsecond since unix epoch)
arguments consists of ( [+-][YMWDhmslu]usize : )* (str | delta | ticks | val)?

## Functions
- us               [/           0..=16     if chained, mutate 8 bytes input as stamp (see scope help). if used as head assume stamp 0  
- now              [            0..=16     current systemtime  
- epoch            [            0..=16     unix epoch / zero time  
- us++             [            0..=16     max stamp  

# space
space utils. Usually [//some/space] is the most readable
## Functions
- ?space           [/           1..=1     decode space  
- si               [/           2..=3     space idx [start,?end]  
- s                [/?          1..=8     build space from arguments - alternative to [//some/path] syntax  
## Macros
-                  the 'empty' eval for encoding space. i.e. [//some/space/val] creates the byte for /some/space/val  
- ~                similar to '//' but forgiving on empty components  

# lns

## Functions
- #                [ ? <partial> 1..=7     (namecomp)* - get the associated lns group  
- ?#               [/           1..=1     find by group# tag  
- @                [ ? <partial> 1..=7     (namecomp)* - get the associated lns key  
- ?@               [/           1..=1     find by pubkey@ tag  
## Macros
- lns              [:comp]*/expr  

# private-lns
Only look at the private claims lookup tree. Makes no requests
## Functions
- private#         [ ?          1..=7     (namecomp)* - get the associated lns group  
- ?private#        [/           1..=1     find by group# tag  
- private@         [ ?          1..=7     (namecomp)* - get the associated lns key  
- ?private@        [/           1..=1     find by pubkey@ tag  
## Macros
- private-lns      [:comp]*/expr  

# filesystem env
read files from Ok("/tmp/linkspace")/files 
## Functions
- files            [/           1..=1     read a file from the LK_DIR/files directory  

# database
get packets from the local db.
e-funcs evaluate their args as if in pkt scope.
funcs evaluate as if [/[func + args]:[rest]]. (e.g. [/readhash:HASH:[group:str]] == [readhash:..:group:str])
## Functions
- readhash         [/           1..=16     open a pkt by hash and use tail args as if calling in a netpkt scope  
- read             [/           2..=16     read but accesses open a pkt by dgpk space and apply args. e.g. [read:mydomain:[#:pub]:[//a/space]:[@:me]::data:str] - does not use default group/domain - prefer eval ctx  
## Macros
- readhash         HASH ':' expr (':' alt if not found)   

# Unset<abe::eval::EScope<linkspace_common::eval::OSEnv>>


# netpkt field
get a field of a netpkt. also used in watch predicates.
## Functions
- netflags         [            0..=1     ?(str|abe) - netpkt.netflags  
- hop              [            0..=1     ?(str|abe) - netpkt.hop  
- stamp            [            0..=1     ?(str|abe) - netpkt.stamp  
- ubits0           [            0..=1     ?(str|abe) - netpkt.ubits0  
- ubits1           [            0..=1     ?(str|abe) - netpkt.ubits1  
- ubits2           [            0..=1     ?(str|abe) - netpkt.ubits2  
- ubits3           [            0..=1     ?(str|abe) - netpkt.ubits3  
- hash             [            0..=1     ?(str|abe) - netpkt.hash  
- type             [            0..=1     ?(str|abe) - netpkt.type  
- size             [            0..=1     ?(str|abe) - netpkt.size  
- pubkey           [            0..=1     ?(str|abe) - netpkt.pubkey  
- signature        [            0..=1     ?(str|abe) - netpkt.signature  
- group            [            0..=1     ?(str|abe) - netpkt.group  
- domain           [            0..=1     ?(str|abe) - netpkt.domain  
- create           [            0..=1     ?(str|abe) - netpkt.create  
- depth            [            0..=1     ?(str|abe) - netpkt.depth  
- links_len        [            0..=1     ?(str|abe) - netpkt.links_len  
- data_size        [            0..=1     ?(str|abe) - netpkt.data_size  
- spacename        [            0..=1     ?(str|abe) - netpkt.spacename  
- rspacename       [            0..=1     ?(str|abe) - netpkt.rspacename  
- comp0            [            0..=1     ?(str|abe) - netpkt.comp0  
- comp1            [            0..=1     ?(str|abe) - netpkt.comp1  
- comp2            [            0..=1     ?(str|abe) - netpkt.comp2  
- comp3            [            0..=1     ?(str|abe) - netpkt.comp3  
- comp4            [            0..=1     ?(str|abe) - netpkt.comp4  
- comp5            [            0..=1     ?(str|abe) - netpkt.comp5  
- comp6            [            0..=1     ?(str|abe) - netpkt.comp6  
- comp7            [            0..=1     ?(str|abe) - netpkt.comp7  
- data             [            0..=1     ?(str|abe) - netpkt.data  

# print pkt default

## Functions
- pkt              [            0..=0     default pk fmt  
- netpkt           [            0..=0     TODO default netpkt fmt  
- point            [            0..=0     TODO default point fmt  
- pkt-quick        [            0..=2     [add recv? =false , data_limit = max] same as pkt but without dynamic lookup  
- html-quick       [            0..=0     same as html but without dynamic lookup  
- netbytes         [            0..=0     raw netpkt bytes  

# select link

## Functions
- links            [            0..=4     [delim='\n',start=0,stop=len,step=1] - python like slice indexing  
- link             [            1..=1     [suffix] get first link with tag ending in suffix  
## Macros
- links            :{EXPR} where expr is repeated for each link binding 'ptr' and 'tag'  

# recv
recv stamp for packet. value depends on the context
## Functions
- recv             [            0..=1     recv stamp - returns now if unavailable in context  
- recv_now         [            0..=1     recv stamp - returns an error if not available in context  

# user input list
Provide values, access with [0] [1] .. [7] 
## Functions
- 0                [            0..=0     argv[0]  
- 1                [            0..=0     argv[1]  
- 2                [            0..=0     argv[2]  
- 3                [            0..=0     argv[3]  
- 4                [            0..=0     argv[4]  
- 5                [            0..=0     argv[5]  
- 6                [            0..=0     argv[6]  
- 7                [            0..=0     argv[7]  

lk_encode

Translate bytes into abe such that lk_eval(lk_encode(X)) == X

We can get meta: lk_encode itself is available as the macro [/?:bytes:options].

data = bytes([0,0,0,255])
abe = lk_encode(data)
assert data == lk_eval(abe)
print("ab  text:", abe)
abe = lk_encode(data,"u8/u32/b") # Try to encode as expression
print("abe text:", abe)
ab  text: \0\0\0\f
abe text: [u32:255]

DEFAULT_FMT

This is how packets are printed by default using lk pktf or Python's str(pkt).

import linkspace
print(linkspace.DEFAULT_PKT)
type\t[type:str]\nhash\t[hash:str]\ngroup\t[/~?:[group]/#/b]\ndomain\t[domain:str]\nspace\t[spacename:str]\npubkey\t[/~?:[pubkey]/@/b]\ncreate\t[create:str]\nlinks\t[links_len:str]\n[/links:\t[tag:str] [ptr:str]\n]\ndata\t[data_size:str]\n[data/~utf8]\n

lk_tokenize_abe

LNS

LNS is a system for publicly naming keys and groups, and adding auxiliary data to them. It allows you to register as @:Alice:nl, #:sales:company:com, etc.

LNS is easy to use from an ABE expression, both for lookup and reverse lookup.

See lns for info.

You can create local bindings, allowing you to reference [@:my_identity:local] or [#:friends:local].
By default lk_key sets up the [@:me:local] identity.

lk eval "[#:pub]" | lk encode "@/#/b"
lk eval "[@:me:local]" | lk encode "@/#/b"
group = example_keypoint.group
print("The bare bytes:", group)

# encode as b64
b64 = lk_encode(group,"b")
print("b64 encoded   :", b64)

# Try to express as a [#:..], on failure try as [@:..], fallback to [b:...]
try_name = lk_encode(group,"#/@/b")
print("Or through lns:", try_name)

print("Pkt's pubkey  :",example_keypoint.pubkey)
try_keyname = lk_encode(example_keypoint.pubkey,"#/@/b")
print("Similarly lns :", try_keyname)


The bare bytes: b'b\xbb;\x8b=\xd5\xceu\xe1\xfa\x88/\xe1\xa3:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19'
b64 encoded   : [b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]
Or through lns: [#:pub]
Pkt's pubkey  : b'\x99 *\x94\x12\xfb\x1b\xaai&\xbb\xc9\xe7[\xb2\x07\xf4T\xa8\x94\xfc\xb6\x16\x85\xb7\x8f@?\xb9\xdf\x03M'
Similarly lns : [@:me:local]

Query

Rust docs

A query is a list of predicates and options used to define a set of packets. They're used in various ways, most notably you can use them to read (lk_get, lk_get_all), await (lk_watch) and request (lk_pull) packets.

lk_query

Queries are newline separated. A predicate is an ABE 3-tuple, field ':' test-operation ':' value, and constrains the set of accepted packets. Options are context dependent and start with ':'.

A query might look like this:

group:=:[#:pub]
domain:=:example
spacename:=:/hello/world
pubkey:=:[@:me:local]
create:>:[now:-1D]

A predicate can be set multiple times. In the example above we could add create:<:[now:+2D] to constrain it further. Queries are designed such that you can concatenate their strings and get their union. If the result is the empty set an error is returned.
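
A small sketch of that, using lk_query_parse (introduced below); the extra statement further constrains the result:

from linkspace import *

q = lk_query_parse(lk_query(), """
group:=:[#:pub]
domain:=:example
create:>:[now:-1D]
""")
q = lk_query_parse(q, "create:<:[now:+2D]")   # concatenating an extra predicate
lk_query_print(q, True)                       # inspect the merged query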

There are 4 basic test operations and a couple of aliases.

Basic Op   Meaning
>          greater
<          less
0          all '0' bits in the value are '0' in the field
1          all '1' bits in the value are '1' in the field

The following are shorthand and resolve to one or more of the basic tests.

Derived Op   Expansion
=            >(val-1) and <(val+1)
>=           >(val-1)
<=           <(val+1)
*=           the last n bytes must equal val
=*           the first n bytes must equal val
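
As a rough illustration of the '=' alias (a sketch; lk_query_print may render the two queries slightly differently):

from linkspace import *

# '=' on a value is shorthand for the two strict bounds around it.
eq = lk_query_parse(lk_query(), "data_size:=:[u16:5]")

bounds = lk_query_push(lk_query(), "data_size", ">", lk_eval("[u16:4]"))
bounds = lk_query_push(bounds, "data_size", "<", lk_eval("[u16:6]"))

# Both accept exactly the packets whose data field is 5 bytes long.
lk_query_print(eq, True)
lk_query_print(bounds, True)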

The CLI can act as a guide in creating queries: see lk print-query --help.

Many CLI commands (e.g. print-query, watch) take a domain:group:spacename:(?depth) as their first argument. If no depth is given, the depth is constrained by default, except for watch-tree, which leaves the depth unconstrained by default.

Here we look for the domain 'my', the group [#:pub], with a spacename starting at /hello and with one additional spacename component.

lk print-query "my:[#:pub]:/hello:*" --signed
:mode:tree-desc
type:1:[b2:00000111]
domain:=:[a:my]
group:=:[b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]
prefix:=:/hello
depth:<:[u8:3]
depth:>:[u8:0]
template = lk_query_parse(lk_query(),"group:=:[#:pub]")
a_copy = lk_query(template)
lk_query_print(a_copy)
type:1:\x02
group:=:b\xbb;\x8b=\xd5\xceu\xe1\xfa\x88\/\xe1\xa3\:\xd9Y\x8c7\x15\xc5\x89>\x0f\xdb\x8bH}\\\xfd\xb19

lk_query_parse

Adds multiple constraints to a query. You can pass a multi-line string or one statement per line. Each line is evaluated as an ABE expression, and you can set a pkt or argv context.

Returns an error if the resulting set is empty. The full list of predicates and their byte sizes is given under Known predicates & options below.

q = lk_query()

stmt = """
group:=:[#:pub]
domain:=:example
"""

q = lk_query_parse(q,stmt,
               "depth:<:[u8:4]",
               "data_size:<:[0]",argv=[int(10).to_bytes(2)]) 
lk_query_print(q,True)
type:1:[b2:00000011]
domain:=:[a:example]
group:=:[b:Yrs7iz3VznXh-ogv4aM62VmMNxXFiT4P24tIfVz9sTk]
data_size:<:[u16:10]
depth:<:[u8:4]

lk_query_push

Similar to lk_query_parse, but it only adds a single statement, and the last argument expects raw bytes.

q = lk_query()
q = lk_query_push(q,"data_size","<",bytes([0,4])) # less than 4
q = lk_query_push(q,"data_size","<",lk_eval("[u16:20]"))  # less than 20
q = lk_query_push(q,"data_size","<",int(3).to_bytes(2))  # less than 3
lk_query_print(q)
type:1:\x01
data_size:<:\0\x03

Adding a contradiction returns an error.

try:
  r = lk_query_push(q,"data_size",">",bytes([0,100])) # greater than 100 and smaller than 3 can not both be true
except Exception as e :
  r = ("That's not possible",e)
r

("That's not possible", RuntimeError("Error adding rule 'data_size'\n\nCaused by:\n    0: data_size:>:[u16:100]\n    1: incompatible Greater 100"))

lk_query_print

Print a query as text. The printed query will have merged overlapping predicates. The boolean argument sets whether to use ABE expressions or stick to a representation without expressions.

lk_query_print(q,True)
type:1:[b2:00000001]
data_size:<:[u16:3]

The b2 function reads a binary representation.
The types are: datapoint=[b2:0000_0001], linkpoint [b2:0000_0011], keypoint [b2:0000_0111].
Setting a 'group', 'domain', 'spacename', 'links', or 'create' predicate automatically excludes the datapoint type.
Setting pubkey or signature excludes link and data points.

More on predicates

  • group requires 32 bytes but will try to parse base64.
  • domain requires 16 bytes but will prepend '\0' bytes if fewer are given.
  • spacename and prefix only take the = op. Their value is the spacename bytes, e.g. spacename:=:[//hello/world], but they'll accept /hello/world as well.

Besides the fields in a point, predicates also apply to the hash and variable net header fields.

The netheader fields can be mutated, and are stored in the database when a packet is first written. Domain applications should avoid these fields. They are used when writing an exchange process. (For more notes on that see dev/exchange.md)

The netheader is 32 bytes, named as follows:

Field     Size
Prefix    3     magic bytes 'LK1'
NetFlags  1     see source code
hop       2     number of hops since creation
stamp     8
ubits0    4
ubits1    4
ubits2    4
ubits3    4     RESERVED

All of these can be mutated, except for the prefix and ubits3.

Recv

The final predicate is 'recv'. This is an 8 byte stamp set when the packet was first read. It can be used to filter, for example recv:>:[now:-1D].

It is considered bad design for applications to depend on this. They should use the create stamp to avoid depending on any group-exchange specific behavior.

The recv predicate depends on the context. When reading from the database, recv is set to the time the packet was first received: lk watch-log --bare -- "recv:>:[now:-1D]". When reading from a pipe, it is set to the moment the pipe reads the packet: lk watch-log | lk filter "recv:>:[now:-1D]".

In both cases the predicate recv:<:[now:+1m] would stop the process after 1 minute.

Options

Options are additional configurations for a set of predicates.

They are of the form ':' name ':' rest. Depending on the context/function, they may be ignored. Developers are free to expand their meaning for their use case, as in a group exchange process.

Unlike predicates, which have a well-defined meaning when queries are concatenated, options have no standard meaning when two queries carry the same option. Options should define their own logic for when they appear multiple times.

The following options are known to have a specific meaning:

NOTE: These options will change somewhat in coming versions.

name            value                        use                                              multiple
:mode           (tree/hash/log)-(asc/desc)   set the table to read from                       last is used
:follow                                      also output linked packets                       N/A
:qid            <any>                        identify/close an active query                   last is used
:notify-close                                send a final dummy packet when closing a query   N/A

Known predicates & options

The full list of predicates and options is printed by lk print-query --help.

Unknown options are added but ignored by most library functions. This allows other processes to define additional options that they understand.

lk print-query --help
hash         - the point hash e.g. \[b:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\]
group        - group id e.g. \[#:pub\]
domain       - domain - if fewer than 16 bytes, prepadded with  e.g. \[a:example\]
prefix       - all points with spacename starting with prefix - only accepts '=' op e.g. /hello/world
spacename    - exact spacename - only accepts '=' op e.g. /hello/world
pubkey       - public key used to sign point e.g. \[@:me:local\]
create       - the create stamp e.g. \[now:-1H\]
depth        - the total number of space components - max 8 e.g. \[u8:0\]
links_len    - the number of links in a packet e.g. \[u16:0\]
data_size    - the byte size of the data field e.g. \[u16:0\]
recv         - the recv time of a packet e.g. \[now:+1D\]
i_branch     - total packets per uniq (group,domain,space,key) - only applicable during local tree index, ignored otherwise e.g. \[u32:0\]
i_db         - total packets read from local instance e.g. \[u32:0\]
i_new        - total newly received packets e.g. \[u32:0\]
i            - total matched packets e.g. \[u32:0\]
hop          - (mutable) number of hops e.g. \[u16:5\]
stamp        - (mutable) variable stamp e.g. \[now\]
ubits0       - (mutable) user defined bits e.g. \[u32:0\]
ubits1       - (mutable) user defined bits e.g. \[u32:0\]
ubits2       - (mutable) user defined bits e.g. \[u32:0\]
ubits3       - (mutable) user defined bits e.g. \[u32:0\]
type         - the field type bits - implied by other predicates e.g. \[b2:00000001\]
netflags     - (mutable) netflags e.g. \[b2:00000000\]
size         - exact size of the netpkt when using lk_write or lk_read - includes netheader and hash  e.g. \[u16:4\]

The following options are available

	:mode
	:qid
	:follow
	:notify-close


query - print full query from common aliases

Usage: lk print-query [OPTIONS] [DGPD] [-- <EXPRS>...]

Arguments:
  [DGPD]      
  [EXPRS]...  

Options:
  -p, --print-expr               print the query
      --print-text               print in ascii-byte-text format (ABE without '[..]' expressions)
      --mode <MODE>              [default: tree-desc]
      --db-only                  only match locally indexed pkts           | i_new:=:[u32:0]
      --new-only                 only match new unindexed pkts             | i_db:=:[u32:0]
      --max <MAX>                match upto max packets.                   | i:<:[u32:max]
      --max-branch <MAX_BRANCH>  match upto max per (dm,grp,space,key) pkts | i_branch:<:[u32:max_branch]
      --max-index <MAX_INDEX>    match upto max from local index           | i_db:<:[u32:max_index]
      --max-new <MAX_NEW>        match upto max unindexed pkts             | i_new:<:[u32:max_new]
      --signed                   match only signed pkts                    | pubkey:>:[@:none]
      --unsigned                 match only unsigned pkts                  | pubkey:=:[@:none]
      --watch                    Add :qid option (generates qid)
      --qid <QID>                set :qid option (implies --watch)
      --follow                   Add :follow option
      --until <UNTIL>            add recv:<:[us:INIT:+{until}] where INIT is set at start
  -b, --bare                     do not read any domain:group:space argument - WARNING - this might include all datapoints depending on mode and filters
  -h, --help                     Print help

General Pkt IO Options:
      --private  enable io of linkpoints in [#:0] [env: LK_PRIVATE=]

lk_hash_query

[This function might be removed]

A shorthand for getting a packet by hash. It can be used with lk_watch. If you expect the value to be known locally, lk_get_hash is faster.

Runtime

Rust docs

You can open/create an instance with lk_open. If given no directory it opens $LK_DIR or $HOME/linkspace.

An instance is a handle to a multi-reader, single-writer database. The instance is thread local. Each thread or process requires calling lk_open.

An application can save to the database (lk_save). To read from the database you can directly get a packet by hash (lk_get_hash) or using a query (lk_get, lk_get_all).

A thread can register a watch (lk_watch). A watch is a query & function that is called for each new packet matching the query. By default the function is immediately called for each match in the database.

Whenever any thread saves a new packet all other threads receive a signal.

By default nothing happens: a thread does not see the new packets until its read transaction is updated. This includes calls to lk_get* and lk_watch.

Update the transaction and process all the new packets with lk_process or lk_process_while. This runs all functions registered as a watch. Only after that is the thread's view of the database updated to the latest state.
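
A minimal sketch of that loop. The exact argument order of lk_save, lk_watch, and the callback shape are assumptions here; the typed Python package documents the real signatures:

from linkspace import *

lk = lk_open("/tmp/linkspace", create=True)

# Save a linkpoint; other threads/processes are signalled that something new exists.
lk_save(lk, lk_linkpoint(domain=b"example", group=lk_eval("[#:pub]"), data=b"hi"))

# Register a watch: the callback runs for matches already in the database
# and for new packets that arrive later.
q = lk_query_parse(lk_query(), "domain:=:example", "group:=:[#:pub]", ":qid:demo")
def on_match(pkt):
    print(str(pkt))
lk_watch(lk, q, on_match)

# Update this thread's read transaction and run the registered watch callbacks.
lk_process(lk)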

The Rust docs are currently the most up to date, but the python package has most functions typed and commented as well.

lk_open

lk_save

lk_get

lk_get_all

lk_get_hash

lk_watch

lk_process

lk_process_while

lk_close_watch

Conventions

Rust docs

Conventions are functions built on top of the other linkspace functions. They provide a standard way for unrelated processes to loosely couple/interface with each other by encoding data into linkspace packets.

Generally they require the caller to also run lk_process or lk_process_while

One general convention is that domains and spacenames starting with \xff are for meta things such as status queries and packet exchange.
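
For example, per the [help] output earlier, [f:...] left-pads its input to 16 bytes with \xff, so such a meta domain can be written as:

from linkspace import *

# '[f:exchange]' pads the 8-byte name with 8 leading \xff bytes.
assert lk_eval("[f:exchange]") == b"\xff" * 8 + b"exchange"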

lk_status_set

Status queries allow us to communicate whether a process exists that is handling a specific type and a specific instance.

The function signature is (domain, group, obj_type, instance).

  • A request is a packet of the form DOMAIN:[#:0]:/\fstatus/GROUP/type(/instance?) with no data and no links.
  • A reply is of the form DOMAIN:[#:0]:/\fstatus/GROUP/type/instance with some data and at least one link.

Note that the packets are in `#:0`. This function is only for local status updates.

The group argument does not ask anything inside GROUP; it only signals which group the query is about. Other processes are meant to answer a request.

The following are statuses that the exchange process should set:

  • exchange GROUP process
  • exchange GROUP connection PUBKEY
  • exchange GROUP pull PULL PULL_HASH

lk_status_poll

Request the status of a `domain group obj_type ?instance timeout`.

lk_pull

A pull request is made by a domain application and signals the set of packets it wants. The function takes the query and saves it as: [f:exchange]:[#:0]:/pull/[query.group]/[query.domain]/[query.qid]

Note that from a domain's perspective, there is no such thing as 'fully synchronized'.
It is entirely up to the developer to structure their points such that it provides the right level of sync.
For example, 'log' packets that link to known packets from a single device's perspective.

Pull queries must have the predicates domain:=:.. and group:=:.., and :qid.

An exchange process (such as in the tutorial) watches these packets and attempts to gather them. The exchange is only responsible for pull requests received when it is running. The exchange drops requests when you reuse the 'qid'. The function returns the hash of the request.

A domain application should be conservative with its query. Requesting too much can add overhead.
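
A rough sketch of issuing a pull request; the lk_pull argument order shown here is an assumption, so check the bindings:

from linkspace import *

lk = lk_open("/tmp/linkspace", create=True)

# Pull queries must pin a domain and a group and carry a :qid.
q = lk_query_parse(lk_query(),
                   "domain:=:example",
                   "group:=:[#:pub]",
                   "create:>:[now:-7D]",
                   ":qid:example-backlog")
request_hash = lk_pull(lk, q)   # returns the hash of the saved pull request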

lk_key

Reads (or creates) an encrypted private key using the local LNS. It can then be referenced with [@:NAME:local].

LNS

See LNS for some general information. See abe#lns for how to use LNS for lookup and reverse lookup.

The LNS system works by making a claim in lns:[#:pub]:/claim/test/example/john, which we'll call $Claim1. A claim can have 3 types of special links. The first link with the tag pubkey@ has as its ptr the pubkey bytes to use when referring to @:john:example:test. The first link with the tag group# has as its ptr the group bytes to use when referring to #:john:example:test. Every tag ending with '^', e.g. root_00^, is an authority public key. An authority has the right to vote for its direct subclaims, for example the claim lns:[#:pub]:/claim/test/example/john/home.

$Claim1 becomes 'live' when a single authority of claim/test/example creates a vote by creating a keypoint lns:[#:pub]:/claim/test/example/john with the link vote:$Claim1.hash. The first claim to get a majority of votes wins.

Advanced topics

Big data

One way of reading/writing data larger than approximately 2^16 bytes is to create a linkpoint with multiple ("data", datap_hash) links. That gives you space for roughly 85 MB. You can go infinitely large by adding a ("continue", next_linkpoint) link.

python -c 'print("-" * 200000000,end="")' \
    | lk data \
    | lk collect example:: --create [epoch] --collect-tag 'data' --chain-tag 'continue'\
    | lk save -f \
    | lk filter example:: \
    | lk pktf "[hash:str] [links_len:str]"
JIhVzkxzP5kfPhjdudu0a6nDcbnOGKixsEvd4SXeoNc 1333
KxvjqFFRCRDFpzyNsXr80K3wMkBUD9Sszhfcjpt9qro 1334
1FAKPrMTY8nBQ4zeoXVvgn_ESQJcjJQ8l9fJDSmDEgA 411

Advanced tip: we could have made this shorter with collect --forward db --write db --write stdout-expr "[hash:str] [links_len]"

Note that to recreate the data you have to do a 'depth first' search starting from the last result

lk watch-hash 1FAKPrMTY8nBQ4zeoXVvgn_ESQJcjJQ8l9fJDSmDEgA | lk get-links -R pause | lk pktf -d '' [data] > /tmp/bytes
python -c 'print("-"*200000000,end="")' > /tmp/bytes2
diff /tmp/bytes /tmp/bytes2 && echo ok
ok

Note the pktf -d '' to remove the delimiter between packets.
An alternative approach would be to use lk get-links --write file-expr:[/:/tmp/bytes]:[data] --forward null

Q&A

Why Big Endian?

The tree index is in the expected order when using the numbers as space components. E.g. lk linkpoint ::/some/dir/[now] will come after lk linkpoint ::/some/dir/[now:-1D] because now > (now - one day)

Every user of my domain app needs X from my server/I want to add advertisements to my domain app.

Hardcode a public key into the app and combine it with a group exchange service. Either use an existing group, or use the group: their-key XOR your-key for personalized stuff

I'm not in control of the user! / Anybody in my group can leak data from it!?

If this is news to you, you're suffering from security-theater. I don't make the rules, I just make them obvious.

A domain application can write outside its own domain space.

Yes, the current API has no restriction. Maybe at some point we can effectively restrict processes through wasm or some other access control.

Why don't queries support negative predicates?

In most cases the meaning would be non-obvious and/or slow to implement, and it would remove their "string concat == union" property.

Furthermore, when you want to exclude something it is much clearer to define two phases explicitly, i.e. all packets in group X, then exclude those with property Y.

Why not use an SQL backend? / Why invent queries?

First off, if you desire to run SQL queries it is not too difficult to stream packets into a SQL table with `lk pktf` or some custom code, and query them.

Why it's not the primary backend/query method has multiple reasons.

SQL isn't magic, and it's a non-trivial price to pay for something that is not a great fit for a few fundamental problems, including:

  1. What are the tables peers should have?
  2. How to constrain a query you receive/as it travels to multiple peers?
  3. how to encode bytes?

All could be solved in a number of ways, but most solutions are quickly going to bloat and usually create multiple incompatible sublanguages depending on the context.

Linkspace queries support arbitrary bytes, can be constrained/tested through concatenation, have a consistent meaning w.r.t. predicates, and can easily be expanded with options.

Footnotes:

1

the hash of lk_datapoint(b"Hello, Sol!\n")

Created: 2023-09-20 Wed 12:24
