Submitted: 10 January 2019
Rejected: 17 April 2019
Review A: 2. Weak reject
Review B: 2. Weak reject
Review C: 2. Weak reject
Review D: 1. Reject
Review A
2. Weak reject
1. No familiarity
1. Low
The authors present OWebSync, a middleware that supports synchronization of browser-based clients editing tree-like data structures (e.g., JSON) that can tolerate eventual consistency and have no inter-node constraints. This is done by combining a state-based CRDT with a Merkle tree to determine which portions of the state need to be resynchronized. OWebSync operates in the same way both for online clients and for resynchronizing a previously offline client. The authors evaluate OWebSync against three other solutions for synchronizing state -- Yjs, ShareDB, and Legion -- with a varying number of clients and either 100 or 1000 objects in a single document. The main conclusion from their results is that OWebSync has the most consistent performance at both the 50th and 99th percentile, increasing only from 2.3s to 2.7s from the smallest to the largest online test, with comparable performance and limited variability both online and when resynchronizing after being offline. However, at small scale OWebSync is significantly slower than ShareDB and Legion. This improved performance at scale and during recovery comes at the cost of increased network usage compared to the alternatives.
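To make the mechanism concrete for the other reviewers, the sketch below shows, in TypeScript, one way a Merkle tree over a JSON-like document can restrict synchronization to the subtrees that actually differ. This is my own illustrative reconstruction, not the authors' code; the hashing scheme, node layout, and function names are all assumptions.

```typescript
// Illustrative only: hash every subtree, then descend only into branches
// whose hashes differ, so unchanged subtrees never cross the network.
import { createHash } from "crypto";

interface MerkleNode {
  hash: string;                         // hash over child hashes (or the leaf value)
  children?: Map<string, MerkleNode>;   // absent for leaves
  value?: string;                       // JSON-encoded leaf payload
}

function hashOf(input: string): string {
  return createHash("sha256").update(input).digest("hex");
}

// Build a Merkle tree over a nested JSON-like value.
function buildTree(data: unknown): MerkleNode {
  if (typeof data === "object" && data !== null) {
    const children = new Map<string, MerkleNode>();
    for (const [key, val] of Object.entries(data)) {
      children.set(key, buildTree(val));
    }
    const combined = [...children.entries()]
      .map(([k, c]) => `${k}:${c.hash}`)
      .sort()
      .join("|");
    return { hash: hashOf(combined), children };
  }
  const value = JSON.stringify(data);
  return { hash: hashOf(value), value };
}

// Return the paths whose subtrees differ between two replicas; only these
// paths need to be fetched and merged by the CRDT layer.
function diffPaths(a: MerkleNode, b: MerkleNode, path: string[] = []): string[][] {
  if (a.hash === b.hash) return [];                 // identical subtree: nothing to sync
  if (!a.children || !b.children) return [path];    // difference at a leaf
  const keys = new Set([...a.children.keys(), ...b.children.keys()]);
  const diffs: string[][] = [];
  for (const key of keys) {
    const ca = a.children.get(key);
    const cb = b.children.get(key);
    if (!ca || !cb) diffs.push([...path, key]);     // branch added or removed
    else diffs.push(...diffPaths(ca, cb, [...path, key]));
  }
  return diffs;
}
```

If this reading is right, a client returning from an offline period only needs to exchange hashes along the differing branches, which would explain why resynchronization cost tracks the size of the divergence rather than the length of the disconnection.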
The authors do a good job of describing their solution and placing it in the context of alternatives. I liked the use of a Merkle tree as an alternative to tracking what various clients have seen. I also appreciate that the authors implemented three of the alternatives and show results that indicate an alternative is sometimes better.
While the authors say they ran 48 benchmarks, it is really one benchmark run with different parameters: three numbers of clients, two numbers of objects, four implementations, and online vs. resynchronization. I would like to see additional benchmarks, including multiple overlapping sets of documents (not just one document) and measurements from real-world experience with their industrial use cases.
Overall, I found the paper well written and the main contributions clear. I appreciate that the authors implemented a range of alternatives and compared OWebSync against them.
As indicated above, I would like to see a broader range of benchmarks executed against these implementations. For instance, if there are many documents with limited sharing, how does ShareDB compare to OWebSync after a client has been offline? What happens when the client is offline for more than one minute? What is the experience of eDesigners or eWorkforce in actually using their solution?
I would also like to see a little more analysis of where a solution like ShareDB is better. For instance, from the results in figure 5 with a small number of clients (<8), it seems like ShareDB wins if clients don't work offline. But what happens with 4 clients and significant periods of offline work?
I personally found the argument for stable performance even after a network disconnect more compelling than the better performance at a large number of clients. I am not convinced that 24 concurrent clients will be common in practice.
I was somewhat confused by figure 2b, since the same UUID is used for all of the items, whereas the description of the ORMap on page 4 says each item has its own UUID.
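For what it is worth, my reading of the page 4 text corresponds to something like the sketch below (TypeScript, my own hypothetical reconstruction, not the authors' data structure): every add of an item tags it with a fresh UUID, which is what makes observed-remove semantics work. If that is correct, reusing one UUID for all items in figure 2b is at best a confusing simplification.

```typescript
// Hypothetical sketch of per-item UUID tagging in an observed-remove map.
import { randomUUID } from "crypto";

interface ORMapEntry<V> {
  uuid: string;  // unique tag created for this particular add
  value: V;
}

class ORMap<V> {
  private entries = new Map<string, ORMapEntry<V>>();

  // Each add (or re-add) of a key mints a fresh UUID for that item.
  set(key: string, value: V): string {
    const uuid = randomUUID();
    this.entries.set(key, { uuid, value });
    return uuid;
  }

  get(key: string): V | undefined {
    return this.entries.get(key)?.value;
  }

  // A remove only takes effect for the UUID it observed, so a concurrent
  // re-add (which carries a different UUID) survives the merge.
  remove(key: string, observedUuid: string): void {
    const current = this.entries.get(key);
    if (current && current.uuid === observedUuid) {
      this.entries.delete(key);
    }
  }
}
```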
Review B
2. Weak reject
2. Some familiarity
2. Medium
This paper presents OWebSync: a web middleware that supports seamless synchronization of both online and offline clients that concurrently edit shared data sets.
The paper provides two motivating case studies and then provides the rationale and more background on synchronization mechanisms such as CRDTs.
The analysis of the performance evaluation is comprehensive in terms of objects, clients, and time to synchronize updates after network failures.
Lack of implementation details of OWebSync, such as how state-based CRDTs are combined with the specific enhancements based on Merkle trees.
OWebSync implements a fine-grained data synchronization model and leverages Merkle trees and convergent replicated data types to achieve this performance. Experimental results show that, in comparison with operation-based and other state-based approaches, OWebSync scales better to tens of concurrent editors on a single document, recovers better from offline situations, and achieves acceptable interactive performance with limited network overhead at larger scale.
Some problems need to be addressed.
Review C
2. Weak reject
2. Some familiarity
2. Medium
The paper proposes a synchronization system that allows clients to collaboratively edit objects even when disconnected and synchronize once they reconnect to a server.
Review D
1. Reject
2. Some familiarity
2. Medium
This paper presents OWebSync, an approach for synchronization among multiple clients in a web-based application. OWebSync uses a variation of state-based CRDTs in order to minimize synchronization time, especially in the case of long disconnections. In particular, OWebSync leverages Merkle trees in order to quickly determine the differences between two large states and to minimize unnecessary network traffic. The evaluation suggests that OWebSync performs similarly to Δ-CRDTs during connected operation, but can achieve lower synchronization time in the presence of lengthy disconnects.
Thank you for submitting your paper to ATC. Unfortunately, I don't think this paper should be accepted at ATC in its current form. Here are the main reasons behind my decision.
The contribution of the paper feels quite incremental. Your approach is quite similar to many of the approaches that you describe, except that you are using Merkle trees in order to quickly identify differences and minimize synchronization time. This seems more like an implementation-level optimization than a conceptual contribution.
Also, the contribution of the paper is quite narrow. It applies mostly to web-based applications with a large number of clients, all concurrently making a large number of edits to a large number of objects; where the application is using tree-structured JSON objects; and where disconnections are frequent.
I would have preferred to see an evaluation based on real user data. Instead, the evaluation is based on a synthetic workload where the clients keep performing updates for 8 minutes at the rather alarming rate of 1 update every second. It is hard for me to imagine that this workload imitates realistic usage of these applications. Do you really expect users to make updates so frequently? And in fact, would *all* users be making updates at the same time, even during disconnections?
Finally, the results of the evaluation are rather underwhelming. During the “fully online” scenario, Legion is essentially just as good as OWebSync, except with a higher variance. It is only during the disconnected scenario, and especially with a large number of users, that OWebSync shows a sizable performance benefit. Even in this somewhat extreme case, Legion is still within the 10 second limit, though. Not as good as OWebSync, for sure, but still on the verge of acceptable.
Suggestions
I suggest that you rethink how you present your paper. Currently the paper reads more like a low-level performance optimization, rather than a conceptual contribution. I suggest you focus on determining whether there exists an insight behind the mechanism of OWebSync; an insight that could then be leveraged to increase performance.
Your current introduction includes a rather lengthy “related work” paragraph. I suggest that you rethink this paragraph in two ways. First, it is currently going into too much detail about various other approaches, before the reader has acquired enough context to understand these details. Second, this paragraph is clearly too long. I suggest that you aggressively prune the content in this paragraph. You don't need to give all the details here (that's what the “Related work” section is for). All you need to do is give the high-level intuition for why previous approaches fail where you succeeded. When rethinking this paragraph, it would help if you had already identified the critical insight of OWebSync. This would make it easier to distill the essence of why previous approaches fall short of your goal.
Nits
On a few occasions you use "less" where "fewer" should be used.
On page 9, you write [13][14] instead of [13,14].