This repository was archived by the owner on Aug 2, 2021. It is now read-only.

Conversation


@jpeletier jpeletier commented Aug 13, 2018

Abstract

The current MRU implementation requires users to agree upon a predefined frequency and start time before publishing updates about a given topic. This causes many problems if that update frequency is not honored, and requires users to know other users' update frequencies / start times in order to look up their updates on common topics.

This PR removes this limitation via a novel adaptive frequency resource lookup algorithm. This algorithm automatically adjusts to the publisher's actual update frequency and converges quickly whether an update is found or not.

Users "following" a publisher automatically "tune" to the perceived frequency and can guess easily where the next update ought to be, meaning that subsequent lookups to get a newer update run faster or can be prefetched. This also allows to monitor a resource easily.

The algorithm is described below.

As a result, interactions with Swarm's MRUs are greatly simplified, since users don't have to come up with a start time and frequency upfront; they can simply start publishing updates about the topic they want.

API changes

HTTP API

To publish an update:

1. Get resource meta-information

  • GET /bzz-resource:/?topic=<TOPIC>&user=<USER>&meta=1
  • GET /bzz-resource:/<MANIFEST OR ENS NAME>/?meta=1

Where:

  • user: Ethereum address of the user who publishes the resource
  • topic: Resource topic, encoded as a hex string.

Note:

  • If topic is omitted, it is assumed to be zero, 0x000...
  • If name=<name> is provided, a subtopic is composed with that name

A common use is to omit topic and just use name, allowing for human-readable topics.

You will receive a JSON response like the one below:

{
  "view": {
    "topic": "0x6a61766900000000000000000000000000000000000000000000000000000000",
    "user": "0xdfa2db618eacbfe84e94a71dda2492240993c45b"
  },
  "epoch": {
    "level": 16,
    "time": 1534237239
  }
}

2. Post the update

POST /bzz-resource:/?topic=<TOPIC>&user=<USER>&level=<LEVEL>&time=<TIME>&signature=<SIGNATURE>

body: binary stream with the update data.
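For illustration, below is a minimal Go sketch of this two-step flow. It assumes a local gateway at http://localhost:8500 and that the update signature has already been computed elsewhere (e.g. by a client library); the struct simply mirrors the JSON shown above, and all names are illustrative, not part of the node's API.

package mruexample

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// metaResponse mirrors the JSON returned by the meta=1 query described above.
type metaResponse struct {
    View struct {
        Topic string `json:"topic"`
        User  string `json:"user"`
    } `json:"view"`
    Epoch struct {
        Level uint8  `json:"level"`
        Time  uint64 `json:"time"`
    } `json:"epoch"`
}

// publishUpdate fetches the suggested epoch for the next update and then POSTs
// the update data. The signature is assumed to be computed by the caller.
func publishUpdate(topic, user, signature string, data []byte) error {
    // Step 1: get the resource meta-information (suggested level and time).
    resp, err := http.Get(fmt.Sprintf(
        "http://localhost:8500/bzz-resource:/?topic=%s&user=%s&meta=1", topic, user))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    var meta metaResponse
    if err := json.NewDecoder(resp.Body).Decode(&meta); err != nil {
        return err
    }

    // Step 2: post the update, echoing back the suggested epoch.
    url := fmt.Sprintf(
        "http://localhost:8500/bzz-resource:/?topic=%s&user=%s&level=%d&time=%d&signature=%s",
        topic, user, meta.Epoch.Level, meta.Epoch.Time, signature)
    postResp, err := http.Post(url, "application/octet-stream", bytes.NewReader(data))
    if err != nil {
        return err
    }
    defer postResp.Body.Close()
    if postResp.StatusCode != http.StatusOK {
        return fmt.Errorf("update rejected: %s", postResp.Status)
    }
    return nil
}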

To get the last update:

  • GET /bzz-resource:/?topic=<TOPIC>&user=<USER>
  • GET /bzz-resource:/<MANIFEST OR ENS NAME>

Note:

  • Again, if topic is omitted, it is assumed to be zero, 0x000...
  • If name=<name> is provided, a subtopic is composed with that name

A common use is to omit topic and just use name, allowing for human-readable topics.
Thus, this is also valid: GET /bzz-resource:/?name=profile-picture&user=<USER>
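As a Go sketch, under the same localhost:8500 gateway assumption, retrieving the latest update by name is a single GET:

package mruexample

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

// getLatest fetches the most recent update published by user under the given
// human-readable name (i.e. omitting topic, as described above).
func getLatest(name, user string) ([]byte, error) {
    resp, err := http.Get(fmt.Sprintf(
        "http://localhost:8500/bzz-resource:/?name=%s&user=%s", name, user))
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("lookup failed: %s", resp.Status)
    }
    return ioutil.ReadAll(resp.Body)
}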

To get a previous update:

  • GET /bzz-resource:/?topic=<TOPIC>&user=<USER>&time=<T>
  • GET /bzz-resource:/<MANIFEST OR ENS NAME>?time=<T>

Advanced search:

If you have an idea of when the last update happened, you can also hint the lookup algorithm by adding the following extra parameters:

  • hint.time: Time at which you think the last update happened
  • hint.level: Approximate period you think the updates were happening at, expressed as log2(T) rounded down. For example, for a resource updating every 300 seconds, the level should be set to 8, since log2(300) ≈ 8.2. See the Adaptive Frequency algorithm below for details.

Note that this only affects first lookups. Your Swarm node keeps track of the last updates it sees and automatically uses the last seen update as a hint. Providing these parameters overrides that automatic hint.
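For example, a quick way to derive hint.level from an estimated update period, as an illustrative sketch (the cap of 25 matches the highest level constant discussed later in this description):

package mruexample

import "math/bits"

// hintLevel returns floor(log2(periodSeconds)), capped to the highest level (25).
// For example, a 300-second period gives level 8.
func hintLevel(periodSeconds uint64) uint8 {
    if periodSeconds == 0 {
        return 0
    }
    level := uint8(bits.Len64(periodSeconds) - 1) // position of the highest set bit
    if level > 25 {
        level = 25
    }
    return level
}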

To publish a manifest:

POST /bzz-resource:/?topic=<TOPIC>&user=<USER>&manifest=1 with an empty body.

Note: this functionality could be moved to the client and removed from the node, since this just creates a JSON and publishes it to bzz-raw, so the client could actually create this itself and call client.UploadRaw().

CLI

Creating a resource manifest:

swarm resource create is redefined as a command that only creates and publishes an MRU manifest.

swarm resource create [command options]

creates and publishes a new Mutable Resource manifest pointing to a specified user's updates about a particular topic.
          The topic can be specified directly with the --topic flag as a hex string.
          If no topic is specified, the default topic (zero) will be used.
          The --name flag can be used to specify subtopics with a specific name.
          The --user flag allows this manifest to refer to a user other than yourself. If not specified,
          it defaults to your local account (--bzzaccount)

OPTIONS:
--name value   User-defined name for the new resource, limited to 32 characters. If combined with topic, the resource will be a subtopic with this name
--topic value  User-defined topic this resource is tracking, hex encoded. Limited to 64 hexadecimal characters
--user value   Indicates the user who updates the resource

Update a resource

swarm resource update [command options] <0x Hex data>

creates a new update on the specified topic
          The topic can be specified directly with the --topic flag as a hex string
          If no topic is specified, the default topic (zero) will be used
          The --name flag can be used to specify subtopics with a specific name.
          If you have a manifest, you can specify it with --manifest instead of --topic / --name
          to refer to the resource

OPTIONS:
--manifest value  Refers to the resource through a manifest
--name value      User-defined name for the new resource, limited to 32 characters. If combined with topic, the resource will be a subtopic with this name
--topic value     User-defined topic this resource is tracking, hex encoded. Limited to 64 hexadecimal characters

Quick and dirty test:

In this example, the user wants to publish their profile picture so it can be found by anyone who knows their Ethereum address.

# (OPTIONAL) Publish a manifest:
swarm --bzzaccount "<YOUR ADDRESS>" resource create --name "profile-picture"

# the above command will output a manifest hash. We will refer to it as $MH later on

# Publish the first update:
$ IMAGE=$(swarm up myprofilepicture.jpg) && swarm --bzzaccount "<YOUR ADDRESS>" resource update --name "profile-picture" "0x1b20$IMAGE"

# To retrieve the latest update:
$ curl 'http://localhost:8500/bzz-resource:/?name=profile-picture&user=<YOUR ADDRESS>'

# Alternatively, if we created a manifest, we can use it:
$ IMAGE=$(swarm up myprofilepicture.jpg) && swarm --bzzaccount "<YOUR ADDRESS>" resource update --manifest "$MH" "0x1b20$IMAGE"

# Your last profile picture can be viewed in:
# http://localhost:8500/bzz:/$MH
# (Only if a manifest was published)

Adaptive frequency lookup algorithm

At the core of this PR is a new lookup algorithm with the following properties:

  • Does not require the user to commit to an update frequency, and lookup time does not grow linearly with the time elapsed since the last update.
  • The algorithm finishes quickly if no updates are found.
  • Once the last update is found, subsequent lookups take less time.
  • If we have a rough idea of when the last update happened, we can hint the algorithm to get a faster lookup.
  • It allows time-based lookups.
  • The lookup key of the next update can be predicted and monitored.

Revamping the resource frequency concept

Note: Starting with this PR, in the documentation and this text, we use the strict definition of frequency as f = 1 / T. Thus, higher frequencies mean shorter periods of time.

In this new implementation, period lengths are expressed as powers of 2. The highest frequency (shortest period, one update every second) is expressed as 2⁰ = 1 second. The lowest update frequency is currently set to 2²⁵ = 33554432 seconds, which is roughly one year.

Therefore, the frequency can be encoded as just the exponent. We call this exponent the frequency level, or level for short. A higher level means a longer period and thus a lower frequency.

Introducing Epochs

Now that we have determined a finite set of possible frequencies, we can divide time into a grid of epochs. One epoch is a concrete time range at a specific frequency level, starting at a specific point in time, called the epoch base time. Level 0 epochs have a maximum length of 2⁰ = 1 second. Level 3 epochs have a maximum length of 2³ = 8 seconds, etc.

image

To refer to a specific epoch (its epoch ID), we need to know the epoch base time and the epoch level.

image

We will use this epoch addressing scheme to derive a chunk address in which to store a particular update.

Epoch base time

To calculate the epoch base time of any given instant in time at a particular level, we use the simple formula:

baseTime(t, level) = t & ( 0xFFFFFFFFFFFFFFFF << level )

In other words, we are dropping the level least significant bits of t.
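A direct Go transcription of the formula, as an illustrative sketch (the function name is arbitrary):

package mruexample

// baseTime drops the `level` least significant bits of t, yielding the epoch
// base time of instant t at that level.
// For example, baseTime(1534092715, 25) == 1509949440 (see the first-update
// example below).
func baseTime(t uint64, level uint8) uint64 {
    return t & (0xFFFFFFFFFFFFFFFF << level)
}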

Seeding algorithm

The seeding algorithm describes the process followed by the update publisher to determine in what epoch to "plant" the content so it can be found (harvested) by users. The algorithm works as follows:

First updates

Any first resource update will have a level of 25.

Note: We have chosen 25 as the highest level. This is a constant in code and can be changed.

Thus, if as of this writing it is August 12th, 2018 at 16:51 UTC, the Unix time is 1534092715. Therefore, the epoch base time is 1534092715 & 0xFFFFFFFFFE000000 = 1509949440

The epoch ID for a first update made now is therefore (1509949440, 25)

image

Subsequent updates

To determine the epoch in which to store a subsequent update, the publisher needs to know where they stored the previous update. This should be straightforward. However, if the publisher can't or doesn't want to keep track of this, they can always use the harvesting algorithm (see below) to find their last update.

The selected epoch for a subsequent update must be the epoch with the highest possible level that is not already occupied by a previous update.

Let's say that we want to update our resource 5 minutes later. The Unix Time is now 1534093015.

We calculate baseTime(1534093015, 25) = 1509949440.
This results in the same epoch as before, (1509949440, 25), which is already occupied. Therefore, we decrease the level and calculate again:
baseTime(1534093015, 24) = 1526726656

Thus, the next update will be located at (1526726656, 24).

image

If the publisher keeps updating the resource exactly every 5 minutes, the epoch grid will look like this:

update #1,  t=1534092715, epoch=(1509949440, 25)
update #2,  t=1534093015, epoch=(1526726656, 24)
update #3,  t=1534093315, epoch=(1526726656, 23)
update #4,  t=1534093615, epoch=(1530920960, 22)
update #5,  t=1534093915, epoch=(1533018112, 21)
update #6,  t=1534094215, epoch=(1534066688, 20)
update #7,  t=1534094515, epoch=(1534066688, 19)
update #8,  t=1534094815, epoch=(1534066688, 18)
update #9,  t=1534095115, epoch=(1534066688, 17)
update #10, t=1534095415, epoch=(1534066688, 16)
update #11, t=1534095715, epoch=(1534066688, 15)
update #12, t=1534096015, epoch=(1534083072, 14)
update #13, t=1534096315, epoch=(1534091264, 13)
update #14, t=1534096615, epoch=(1534095360, 12)
update #15, t=1534096915, epoch=(1534095360, 11)
update #16, t=1534097215, epoch=(1534096384, 10)
update #17, t=1534097515, epoch=(1534096896, 9)
update #18, t=1534097815, epoch=(1534097408, 11)
update #19, t=1534098115, epoch=(1534097408, 10)
update #20, t=1534098415, epoch=(1534097920, 9)
update #21, t=1534098715, epoch=(1534098176, 8)
update #22, t=1534099015, epoch=(1534098432, 10)
update #23, t=1534099315, epoch=(1534098944, 9)
update #24, t=1534099615, epoch=(1534099200, 8)
update #25, t=1534099915, epoch=(1534099456, 15)
update #26, t=1534100215, epoch=(1534099456, 14)
update #27, t=1534100515, epoch=(1534099456, 13)
update #28, t=1534100815, epoch=(1534099456, 12)
update #29, t=1534101115, epoch=(1534099456, 11)
update #30, t=1534101415, epoch=(1534100480, 10)

If the publisher keeps updating every 5 minutes (300s), we can expect the updates to stay around level 8-9 (2⁸ = 256 seconds, 2⁹ = 512 seconds). The publisher can, however, vary this update frequency at any time, or just update at random intervals. This does not affect the algorithm.
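Building on the baseTime sketch above, here is one illustrative way to compute the next epoch directly instead of scanning levels one by one: take the position of the highest bit in which the previous epoch's base time and the current time differ, but never go more than one level below the previous one. This sketch is not the node's code, but it reproduces the table above (for example, prev=(1509949440, 25) and now=1534093015 yield (1526726656, 24)).

package mruexample

import "math/bits"

const highestLevel = 25 // lowest update frequency: 2^25 seconds, roughly one year

// Epoch identifies one cell of the time grid.
type Epoch struct {
    Time  uint64 // epoch base time
    Level uint8
}

// getNextEpoch returns the epoch in which an update made at `now` should be
// stored, given the epoch of the previous update.
func getNextEpoch(prev Epoch, now uint64) Epoch {
    // Highest differing bit between the previous base time and now; the OR term
    // prevents the level from dropping by more than one per update.
    mix := (prev.Time ^ now) | (1 << (prev.Level - 1))
    if mix == 0 {
        // Updating again within the same level-0 second: stay at level 0.
        return Epoch{Time: baseTime(now, 0), Level: 0}
    }
    level := uint8(bits.Len64(mix) - 1) // position of the highest set bit
    if level > highestLevel {
        level = highestLevel
    }
    return Epoch{Time: baseTime(now, level), Level: level}
}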

image

Here is a closer look at the converging levels further down:

image

Harvesting algorithm

The harvesting algorithm describes how to find the latest update of a resource. This involves looking up an epoch and walking back in time until we find the last update.

Start Epoch

To select the best starting epoch to walk our grid, we have to assume the worst case, which is that the resource was never updated after we last saw it.

If we don't know when the resource was last updated, we assume 0 as the "last time" it was updated.

We can guess a start level as the position of the most significant nonzero bit of XOR(last update time, now). The bigger the difference between the two times (last update time and now), the higher the level will be, since the update frequency we are estimating is lower.

If the resulting level is higher than 25, we use 25.
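An illustrative Go sketch of this guess, reusing the highestLevel constant from the previous sketch:

package mruexample

import "math/bits"

// guessStartLevel estimates the level at which to begin the search, from the
// base time of the last known update (0 if unknown) and the current time.
func guessStartLevel(lastBase, now uint64) uint8 {
    mix := lastBase ^ now
    if mix == 0 {
        return 0 // same instant as the last known update
    }
    level := uint8(bits.Len64(mix) - 1) // most significant nonzero bit
    if level > highestLevel {
        level = highestLevel
    }
    return level
}

The first lookup then targets the epoch (baseTime(now, level), level) with the level returned by this guess.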

Walking the grid - a simple algorithm

Consider the following grid. In it we have marked in yellow where the updates have happened in the past.

image

All of the above is unknown to the harvester. All we know is that the last seen update happened at (20,2), marked in light orange. We call this the "hint". The algorithm will consider this hint but will discard it if it turns out not to contain an update. An invalid hint can potentially slow down the algorithm but won't stop it from finding the last update.

Now it is t=26 and we want to look for the last update. Our guess at a start level is:

XOR(20, 26) = 14 = 1110b, whose most significant nonzero bit is bit #3. Thus, our first lookup will happen at (baseTime(26,3), 3) = (24,3), shown in dark blue below:

image

If a lookup at (24,3) fails, we consider that there are no updates at lower levels either, since the seeding algorithm would've filled (24,3) before going down. This means there are no updates on or after t=24. Thus, our search area is reduced to the area directly below (20,2) (green area). We restart the algorithm as if now were 23.

If, however, a lookup at (24,3) succeeds, then the last update could either be the one at (24,3) itself, or there may be a later one in the epochs below (blue area). At this point we know the last update happened at 24 <= t <= 26. We restart the algorithm with the hint set to (24,3) instead of the original one. If that lookup then fails, the last update was indeed the one at (24,3).

This is how the algorithm would play out if now is t=26 and the last update is in (22,1):

image

Lookups are, in this order:

#1 (24,3) (fails)
#2 (22,1) (succeeds, but we don't know yet whether this is the latest update; go down one level to confirm)
#3 (23,0) (fails)
#4 (22,0) (fails, so we return the last found value, at (22,1))

Locking on / following a resource

Once we have found the last update of a resource, we can easily calculate in what epoch the next update will appear, if the publisher actually makes one.

In figure 9 above, if the last update was found at (22,1) and now it is t=26, the next update must happen exactly at (24,3). This holds true until t=32. Beyond that point, the next update can be expected at (32,4) until t=48

Therefore, the node following the resource could sample the expected epoch, keeping in sync with the publisher.
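For example, using the getNextEpoch sketch from the seeding section above, a follower that found the last update at (22,1) at t=26 would compute the epoch to watch as follows:

package mruexample

import "fmt"

// exampleFollow prints the epoch a follower should sample next, for the
// figure-9 example above: last update at (22,1), now t=26.
func exampleFollow() {
    next := getNextEpoch(Epoch{Time: 22, Level: 1}, 26)
    fmt.Println(next) // {24 3}, i.e. epoch (24,3), as predicted above
}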

Final notes:

Please let me know your feedback, questions and test issues while I document and clean up the code. I hope you like this feature. I am available on Gitter (@jpeletier). Enjoy!!

@jpeletier jpeletier requested review from lmars and nolash as code owners August 13, 2018 15:46
@jpeletier jpeletier changed the title from [PREVIEW] Adaptive frequency to [PREVIEW] Adaptive frequency MRUs Aug 13, 2018
@jpeletier jpeletier changed the base branch from master to mru-query August 13, 2018 21:22
@jpeletier jpeletier force-pushed the mru-autofreq branch 4 times, most recently from cc6ad1b to fafcab0 August 14, 2018 13:39
@zelig zelig assigned zelig and jpeletier and unassigned zelig Aug 17, 2018
Member: extract this in a constructor please

Contributor Author: Done. Thanks. The constructor was already there (LookupLatest), just not used.

Member: this comment is no longer correct right?

Contributor Author: Corrected. Thanks.

Member: why not use pointer to LookupParams?

Contributor Author: lp gets modified so I don't want to touch the original. Corrected.

Member: metainformationA

Contributor Author: Fixed. Thanks.

Member: ok got it

@zelig zelig mentioned this pull request Aug 30, 2018
@nagydani

I like the overall design, but I have a few questions:

  1. Is the order of query string parameters fixed, or can they be switched?
  2. Is the API extensible with new parameters, and how should the presence of unknown parameters be handled? Are they ignored, or is it an error?
  3. Same about responses.
  4. How about adding protocol version numbers to avoid compatibility issues? I feel that this is going to change a lot once people actually start using them.

@jpeletier
Contributor Author

@nagydani, thank you for your comments.

Is the order of query string parameters fixed, or can they be switched?

The order is not fixed. Parameters work in any order.

Is the API extensible with new parameters, and how should the presence of unknown parameters be handled? Are they ignored, or is it an error?

Unknown parameters are ignored. No error will be thrown if you put an unknown parameter.

Same about responses.

Responses are either:

  • binary content (the resource you asked for)
  • an HTTP error
  • a JSON object. The client ignores fields it does not recognize.

How about adding protocol version numbers to avoid compatibility issues? I feel that this is going to change a lot once people actually start using them.

Yes, we should have API versioning, and not only in bzz-resource but throughout. The easiest right now would be to add it as another query string parameter, e.g. ?version=1. The cleanest would be to have it as part of the path: /bzz-resource:/v1/?x=1&y=2. I think we need a more holistic API design so that the versioning scheme is coherent across services.

karalabe and others added 6 commits September 21, 2018 14:16
cmd/faucet: cache internal state, avoid sync-trashing les
`(void)data;` may cause link error on Windows.
Package p2p/enode provides a generalized representation of p2p nodes
which can contain arbitrary information in key/value pairs. It is also
the new home for the node database. The "v4" identity scheme is also
moved here from p2p/enr to remove the dependency on Ethereum crypto from
that package.

Record signature handling is changed significantly. The identity scheme
registry is removed and acceptable schemes must be passed to any method
that needs identity. This means records must now be validated explicitly
after decoding.

The enode API is designed to make signature handling easy and safe: most
APIs around the codebase work with enode.Node, which is a wrapper around
a valid record. Going from enr.Record to enode.Node requires a valid
signature.

* p2p/discover: port to p2p/enode

This ports the discovery code to the new node representation in
p2p/enode. The wire protocol is unchanged, this can be considered a
refactoring change. The Kademlia table can now deal with nodes using an
arbitrary identity scheme. This requires a few incompatible API changes:

  - Table.Lookup is not available anymore. It used to take a public key
    as argument because v4 protocol requires one. Its replacement is
    LookupRandom.
  - Table.Resolve takes *enode.Node instead of NodeID. This is also for
    v4 protocol compatibility because nodes cannot be looked up by ID
    alone.
  - Types Node and NodeID are gone. Further commits in the series will be
    fixes all over the codebase to deal with those removals.

* p2p: port to p2p/enode and discovery changes

This adapts package p2p to the changes in p2p/discover. All uses of
discover.Node and discover.NodeID are replaced by their equivalents from
p2p/enode.

New API is added to retrieve the enode.Node instance of a peer. The
behavior of Server.Self with discovery disabled is improved. It now
tries much harder to report a working IP address, falling back to
127.0.0.1 if no suitable address can be determined through other means.
These changes were needed for tests of other packages later in the
series.

* p2p/simulations, p2p/testing: port to p2p/enode

No surprises here, mostly replacements of discover.Node, discover.NodeID
with their new equivalents. The 'interesting' API changes are:

 - testing.ProtocolSession tracks complete nodes, not just their IDs.
 - adapters.NodeConfig has a new method to create a complete node.

These changes were needed to make swarm tests work.

Note that the NodeID change makes the code incompatible with old
simulation snapshots.

* whisper/whisperv5, whisper/whisperv6: port to p2p/enode

This port was easy because whisper uses []byte for node IDs and
URL strings in the API.

* eth: port to p2p/enode

Again, easy to port because eth uses strings for node IDs and doesn't
care about node information in any way.

* les: port to p2p/enode

Apart from replacing discover.NodeID with enode.ID, most changes are in
the server pool code. It now deals with complete nodes instead
of (Pubkey, IP, Port) triples. The database format is unchanged for now,
but we should probably change it to use the node database later.

* node: port to p2p/enode

This change simply replaces discover.Node and discover.NodeID with their
new equivalents.

* swarm/network: port to p2p/enode

Swarm has its own node address representation, BzzAddr, containing both
an overlay address (the hash of a secp256k1 public key) and an underlay
address (enode:// URL).

There are no changes to the BzzAddr format in this commit, but certain
operations such as creating a BzzAddr from a node ID are now impossible
because node IDs aren't public keys anymore.

Most swarm-related changes in the series remove uses of
NewAddrFromNodeID, replacing it with NewAddr which takes a complete node
as argument. ToOverlayAddr is removed because we can just use the node
ID directly.
liangdzou and others added 16 commits September 25, 2018 12:26
The contributing instructions in the README are not in the GitHub contributing
guide, which means that people coming from the GitHub issues are less likely to
see them.
* signer: remove local path disclosure from extapi

* signer: show more data in cli ui

* rpc: make http server forward UA and Origin via Context

* signer, clef/core: ui changes + display UA and Origin

* signer: cliui - indicate less trust in remote headers, see ethereum#17637

* signer: prevent possibility swap KV-entries in aes_gcm storage, fixes ethereum#17635

* signer: remove ecrecover from external API

* signer,clef: default reject instead of warn + validate new passwords. fixes ethereum#17632 and ethereum#17631

* signer: check calldata length even if no ABI signature is present

* signer: fix failing testcase

* clef: remove account import from external api

* signer: allow space in passwords, improve error message

* signer/storage: fix typos
* Added more details to the clef tutorial

* Fixed last issues with the comments on the clef tutorial
*Total -- 171.97kb -> 127.26kb (26%)

/swarm/api/testdata/test0/img/logo.png -- 17.71kb -> 4.02kb (77.29%)
/cmd/clef/sign_flow.png -- 35.54kb -> 20.27kb (42.98%)
/cmd/clef/docs/qubes/qrexec-example.png -- 18.66kb -> 15.79kb (15.4%)
/cmd/clef/docs/qubes/clef_qubes_http.png -- 13.97kb -> 11.95kb (14.44%)
/cmd/clef/docs/qubes/clef_qubes_qrexec.png -- 19.79kb -> 17.03kb (13.91%)
/cmd/clef/docs/qubes/qubes_newaccount-2.png -- 41.75kb -> 36.38kb (12.86%)
/cmd/clef/docs/qubes/qubes_newaccount-1.png -- 24.55kb -> 21.82kb (11.11%)
…hash-length

swarm/network/stream: fix DoS invalid offered hashes length
…tl-pr

swarm: prevent forever running retrieve request loops
…ry-expansion

cmd/swarm: use expandPath for swarm cli path parameters
swarm/storage/mru/lookup: fixed getBaseTime
Added NewEpoch constructor

swarm/api/client: better error handling in GetResource()


swarm/storage/mru: Renamed structures.
Renamed ResourceMetadata to ResourceID. 
Renamed ResourceID.Name to ResourceID.Topic

swarm/storage/mru: Added binarySerializer interface and test tools

swarm/storage/mru/lookup: Changed base time to time and + marshallers

swarm/storage/mru:  Added ResourceID (former resourceMetadata)

swarm/storage/mru: Added ResourceViewId and serialization tests

swarm/storage/mru/lookup: fixed epoch unmarshaller. Added Epoch Equals

swarm/storage/mru: Fixes as per review comments

cmd/swarm: reworded resource create/update help text regarding topic

swarm/storage/mru: Added UpdateLookup and serializer tests

swarm/storage/mru: Added UpdateHeader, serializers and tests

swarm/storage/mru: changed UpdateAddr / epoch to Base()

swarm/storage/mru: Added resourceUpdate serializer and tests

swarm/storage/mru: Added SignedResourceUpdate tests and serializers

swarm/storage/mru/lookup: fixed GetFirstEpoch bug

swarm/storage/mru: refactor, comments, cleanup

Also added tests for Topic
swarm/storage/mru: handler tests pass

swarm/storage/mru: all resource package tests pass

swarm/storage/mru: resource test pass after adding
timestamp checking support

swarm/storage/mru: Added JSON serializers to ResourceIDView structures

swarm/storage/mru: Sever, client, API test pass

swarm/storage/mru: server test pass

swarm/storage/mru: Added topic length check

swarm/storage/mru: removed some literals,
improved "previous lookup" test case

swarm/storage/mru: some fixes and comments as per review

swarm/storage/mru: first working version without metadata chunk

swarm/storage/mru: Various fixes as per review

swarm/storage/mru: client test pass

swarm/storage/mru: resource query strings and manifest-less queries


swarm/storage/mru: simplify naming

swarm/storage/mru: first autofreq working version



swarm/storage/mru: renamed ToValues to AppendValues

swarm/resource/mru: Added ToValues / FromValues for URL query strings

swarm/storage/mru: Changed POST resource to work with query strings.
No more JSON.

swarm/storage/mru: removed resourceid

swarm/storage/mru: Opened up structures

swarm/storage/mru: Merged Request and SignedResourceUpdate

swarm/storage/mru: removed initial data from CLI resource create

swarm/storage/mru: Refactor Topic as a direct fixed-length array

swarm/storage/mru/lookup: Comprehensive GetNextLevel tests

swarm/storage/mru: Added comments

Added length checks in Topic
swarm/storage/mru: fixes in tests and some code comments

swarm/storage/mru/lookup: new optimized lookup algorithm

swarm/api: moved getResourceView to api out of server

swarm/storage/mru: Lookup algorithm working

swarm/storage/mru: comments and renamed NewLookupParams

Deleted commented code


swarm/storage/mru/lookup: renamed Epoch.LaterThan to After

swarm/storage/mru/lookup: Comments and tidying naming



swarm/storage/mru: fix lookup algorithm

swarm/storage/mru: exposed lookup hint
removed updateheader

swarm/storage/mru/lookup: changed GetNextEpoch for initial values

swarm/storage/mru: resource tests pass

swarm/storage/mru: valueSerializer interface and tests



swarm/storage/mru/lookup: Comments, improvements, fixes, more tests

swarm/storage/mru: renamed UpdateLookup to ID, LookupParams to Query

swarm/storage/mru: renamed query receiver var



swarm/cmd: MRU CLI tests
cmd/swarm: remove rogue fmt

swarm/storage/mru: Add version / header for future use-

acud commented Sep 28, 2018

closing due to merge to upstream

@acud acud closed this Sep 28, 2018
