a little class for combining behavior, layout and style


I like some of what React offers. I also think it's bloated, over-hyped, over-engineered, and it sees the web as a compile target rather than a development platform.

I like most of what Web Components offer. They're a part of the web platform and they offer real encapsulation, even for CSS. But like most web APIs designed by consensus, they're awkward.


Tonic is about 250 lines of code. It borrows goals and ideas from React but is built on native Web Components. It works in all browsers. It's stable. It's the minimum of what is needed to organize application code, the flow of data and accommodate component based architecture.

You can find the core library here and a collection of components here on Github.

2019-7-3 Update

Tonic is about a year old. To celebrate a year without any new features, let's add a new feature...

Your render function can now be async or an async generator. This provides a declarative way to express the intent of your render method. For example...

class SomeComponent extends Tonic {
  async * render () {

    yield loadingState()

    return await getStuff()
  }
}

What are JavaScript then-ables?

async / await improves program flow and reduces the number of callbacks in your code. The await keyword can be used to pause the current code path and wait for an async function to return a value without blocking the event loop.

async function main () {
  await sleep(100)
}


An implementation for the sleep function might look like this...

const sleep = n => new Promise(r => setTimeout(r, n))

However, as this example demonstrates, the code in the promise starts executing immediately. Promises are eager (as opposed to being lazy), and sometimes we want them to be lazy. Here is a detailed explanation of why an eager promise may not be what you want.
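To make the eagerness visible, here is a small sketch (the variable names are mine): the function passed to the Promise constructor runs as soon as the promise is constructed, before any consumer attaches a handler.

```javascript
// The executor runs the moment the Promise is constructed,
// before anything calls .then() or awaits the promise.
let started = false

const promise = new Promise(resolve => {
  started = true // this line runs immediately
  resolve(42)
})

// Nothing has awaited the promise yet, but work has begun.
console.log(started) // true
```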

A then-able is lazy. It has no constructor. It's any function, object or class that implements a then method.

Await-able Classes

To create an async class, implement a then method on it!

class Foo {
  then (resolve, reject) {
    setTimeout(() => resolve(42), 1024)
  }
}

async function main () {
  const answer = await new Foo()
  // answer === 42
}

Await-able Objects

You can do the same thing with an object. You can name the callback functions whatever you want. Also, you aren't required to use or care about the rejection callback.

const Foo = {
  then (resolve) {
    setTimeout(() => resolve(42), 1024)
  }
}

async function main () {
  const answer = await Foo
  // answer === 42
}

Await-able object factories

const Foo = num => ({
  then (resolve) {
    setTimeout(() => resolve(num), 1024)
  }
})

async function main () {
  const answer = await Foo(42)
  // answer === 42
}

Async then-ables

Object and class methods can use the async keyword, just like functions.

const Foo = {
  async then (resolve) {
    resolve(await request('https://foo.com'))
  }
}

Destructuring assignments provide a way to return multiple values...

class Foo {
  then (resolve) {
    request('https://foo.com', (err, res) => resolve({ err, res }))
  }
}

async function main () {
  const { err, res } = await new Foo

  // More than one err? Const is block-scoped!
  {
    const { err, res } = await new Foo
  }

  // Destructured values can also be aliased.
  const { err: namedError, res: namedResponse } = await new Foo
}



This is a collection of notes that explore peer-to-peer topics.


This collection focuses on the following topics...

These notes are not complete and don't advocate any particular approach. They are related to my work on dat-cxx.

illustrated lamport timestamp


With the client-server model, you can easily determine the order of events in a system because they are all maintained by a single source. This is critical in, for example, a chat application.

But with the distributed model, how do we know if an event happened before another? How can we thread together datasets from different sources in the correct order?


A Lamport Timestamp is one solution to determine the order of events in a distributed system. Although it may not solve all problems in this problem space, it is a useful primitive that we will explore.

Clocks vs. Logical Clocks

Why don't we use regular time stamps? Most clocks count time at different rates and experience failures that require resynchronization. This means they are reliably unreliable for determining the order of events.

Lamport Timestamps use a Logical Clock to keep track of the order of events on each node. A logical clock is not a clock that keeps track of the time, it's a monotonically increasing counter. So, when a node in a network receives a message, it re-synchronizes its counter (its clock) with the node that sent the message.


Node A increments its clock before each event that happens. An event is something meaningful, like when it creates some data. When node A eventually sends its payload over the network, it will include the current value of its clock.

let clock = 0

// A monotonically increasing function.
function createTimeStamp () {
  clock += 1
  return clock
}

function doSomething (data) {
  // Do something with some data.
  return {
    data,
    clock: createTimeStamp()
  }
}

// Eventually send the data to the network.
sendToNetworkQueue(doSomething({ ... }))

When node B receives a message from node A, it will decide how to set its own clock. If the clock value in the message is greater than its own value, it will use the value in the message. Otherwise it will use its own value. In either case, it will also increment its own clock by 1.

let clock = 0

// Eventually receive some data from the network.
function receiveFromNetworkQueue (message) {
  clock = Math.max(message.clock, clock) + 1
}

Here we semi-randomly fail to tell the other node about an event that happened, illustrating what happens when a node is eventually synchronized.
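To sketch the synchronization rule end to end (the node objects and helper names here are hypothetical, not from any library), two nodes converge like this:

```javascript
// Two nodes, each with its own logical clock.
const nodeA = { clock: 0 }
const nodeB = { clock: 0 }

// An event is anything meaningful; it increments the clock.
function localEvent (node) {
  node.clock += 1
}

// Sending counts as an event; the receiver takes the max
// of the two clocks and then increments by one.
function send (from, to) {
  localEvent(from)
  to.clock = Math.max(from.clock, to.clock) + 1
}

localEvent(nodeA) // A: 1
localEvent(nodeA) // A: 2
localEvent(nodeB) // B: 1
send(nodeA, nodeB) // A: 3, B: max(3, 1) + 1 = 4

console.log(nodeA.clock, nodeB.clock) // 3 4
```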

This may not be the correct primitive for all your use cases. For example, Lamport Timestamps don't express causality; the reason why one event happened before another isn't in scope of this solution. That is something that can be achieved using a Vector Clock.

Special thanks to Fedor Indutny and Feross Aboukhadijeh for reviewing this post. ♥

vector clocks

In the previous post, I wrote about how Lamport Timestamps (aka Logical Clocks) can help determine the order of events in a distributed system.


Logical clocks only offer "Partial Ordering": they can order some pairs of events, but they can't give the total ordering of all events or explain why a system arrived at its state.


Vector Clocks build on the idea of Logical Clocks to help track causality in a distributed system.

Here is an example vector clock in a network where there are three participating nodes...

{ alice: 0, bob: 1, carol: 0 }

To set up a node, we will give it an id and an in-memory object to store some data.

const myId = 'alice'
const data = {}

sending messages

When a node writes some data, it increments its own logical clock in the vector and includes the vector as a property of the message it will attempt to send. We also add the value as a property of the message.

function write (key, value) {
  if (!data[key]) {
    data[key] = { clock: { [myId]: 0 } }
  }

  data[key].clock[myId] += 1
  data[key].value = [value]

  send(key, data[key])
}

In this case we made the value property an array. This is because we must anticipate the possibility of concurrent messages — that is, a message was received where two nodes have a logical clock with the same count.

In this case we can push the new value onto the array and allow the conflict to be resolved somehow (we'll discuss this more later).

receiving messages

When a node receives a message it increments its own Logical Clock in its local copy of the vector.

Then for each node in the message's vector, it compares the local clock count (if there is one) to the clock count in the received message, taking the max of the numbers.

const max = arr => Math.max.apply(null, Object.values(arr))

function receive (message) {
  const key = message.key

  // If this is new data, there is no need to compare anything.
  // We can store it locally and return early from the function.
  if (!data[key]) {
    data[key] = message
    data[key].clock[myId] = max(message.clock) + 1
    return
  }

  // We have received the message, update our clock.
  data[key].clock[myId] += 1

  const localClock = data[key].clock
  const messageClock = message.clock

  // For each node in the vector of the message...
  for (const id of Object.keys(messageClock)) {
    const a = localClock[id] || 0
    const b = messageClock[id]

    const isConcurrent = a === b

    if (isConcurrent) {
      data[key].conflict = true
    }

    const happenedBefore = a < b

    if (happenedBefore) {
      data[key].value = [message.value]
    }

    localClock[id] = Math.max(a, b)
  }
}

handling concurrent messages

Two messages that are received at the same time and have the same logical clock count are "concurrent".
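As an illustrative sketch (this compare helper is not part of the code above), two vector clocks can be compared element-wise: if one vector is less than or equal to the other at every position, it happened before; if each vector has at least one larger entry, the messages are concurrent.

```javascript
// Compare two vector clocks. Returns -1 if a happened before b,
// 1 if b happened before a, and 0 if they are concurrent (or equal).
function compare (a, b) {
  let aLess = false
  let bLess = false

  const ids = new Set([...Object.keys(a), ...Object.keys(b)])

  for (const id of ids) {
    const x = a[id] || 0
    const y = b[id] || 0
    if (x < y) aLess = true
    if (x > y) bLess = true
  }

  if (aLess && !bLess) return -1
  if (bLess && !aLess) return 1
  return 0
}

console.log(compare({ alice: 1 }, { alice: 1, bob: 1 })) // -1
console.log(compare({ alice: 2 }, { bob: 1 })) // 0
```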

To understand what to do with this type of data, we need to create a resolution function. This function decides which data comes before, which data descends from which, or how to merge the conflict. Two common strategies are...

  1. Reject the data and send it back to the clients asking for it to be resolved. This might mean asking them to manually merge or discard some of the data.

  2. Last-Writer-Wins uses wall-clock timestamps. If you consider clock drift (mentioned in the first post), there is a high probability of losing data with this strategy.
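As a sketch of the second strategy (the resolveLWW name and the message shape are hypothetical), a last-writer-wins resolver simply keeps the value with the newest wall-clock timestamp, which is exactly how data gets lost when clocks drift.

```javascript
// Keep whichever conflicting value carries the newest timestamp.
function resolveLWW (values) {
  return values.reduce((a, b) => (b.ts > a.ts ? b : a))
}

const winner = resolveLWW([
  { value: 'red', ts: 1561000000000 },
  { value: 'blue', ts: 1561000005000 }
])

console.log(winner.value) // 'blue'
```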

research timeline

When discussing Vector Clocks we should consider some other closely related research...

Version Vectors also build on the idea of Lamport Timestamps, but are specifically meant to track changes to data in distributed systems. They are also the basis for optimistic replication.


Each message sent by a node contains a vector clock that has all the node names (and their corresponding clock counts) who want to write to the same field of data.

This can be a problem: a data structure that grows to an unbounded size is an issue in larger networks with more frequent writes. Strategies for dealing with this are often based on what suits your use cases best. For example, two possible solutions are...

  1. If a network has a finite number of nodes, a message that has reached all nodes can be considered "complete", be marked as such, and have its historic information removed.

  2. If a network defines an acceptable threshold of nodes, then once a message has reached that threshold it can be considered complete and cleaned up.
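A minimal sketch of the first strategy (the knownNodes list and the prune helper are hypothetical names): once an entry's clock contains every node in a finite network, mark it complete and drop its history.

```javascript
// The finite set of nodes in the network.
const knownNodes = ['alice', 'bob', 'carol']

// Mark an entry complete once every node appears in its clock,
// then discard the historic counts.
function prune (entry) {
  const seenAll = knownNodes.every(id => id in entry.clock)

  if (seenAll) {
    entry.complete = true
    entry.clock = {}
  }

  return entry
}

const entry = prune({ clock: { alice: 2, bob: 1, carol: 4 } })
console.log(entry.complete) // true
```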

implementing dat

In my spare time I am implementing dat by following this, this and this as references.

You can follow this post and this github org for updates and information.

DTN recap


Over the last 10 years I've been a part of various meet-ups, IRC channels and p2p networks that are interested in building distributed systems. In 2015 I pulled together a lot of the leaders and contributors from the projects that I found interesting for an event we called DTN, aka Data Terra Nemo, Decentralize The Network, Don't Think Normal. It went well!


About 6 months ago Feross, Mikeal and Juan convinced me I should do another one and it made sense! After all, many of the projects we discussed in 2015 were just ideas (libp2p), still in the prototyping phase (dat) or didn't exist at all (filecoin, patchwork). It happened!

This event works for a few reasons.

  1. Zero profit. 100% of what we earn gets spent on the conference — no exceptions. All funds go towards flights, hotels, food, A/V, etc.

  2. We curate speakers who are hard-working, highly motivated implementers, but also kind, empathetic people. Software is nothing without the people who work together to make it.

  3. One of the most important reasons this works is that it's collaborative and not competitive. We're sharing fundamental ideas. And while we're taking different approaches to solving some of the same problems, we're all interested in the best outcome. Things like programming languages are trivial details.

What happened?

Over 2 days about 150 people attended. We recorded about 12 hours of video from 12 speakers. We had 12 formal talks and several impromptu lightning talks. I'll be posting the videos online once they are processed.

We discussed the technical (and some social) challenges in coding fundamental primitives that help people build a more distributed and decentralized internet. We shared language agnostic patterns for building solutions but also many concrete solutions — actual code that illustrates what problems we're solving today.

How did you do it?

I have no idea. It wasn't easy. Unexpected things happened. People are hard to organize. I'm not an event organizer. I have no idea what I'm doing. I almost broke even, though. Ultimately it was a lot of fun. And the lesson here is that anyone can do this. If you want help putting on your own event, I'm happy to discuss what I know! My handle is heapwolf most places on the internet. If you're not on any decentralized networks yet, you can even reach me on Twitter.

What's next?

Let's do it again! Many of the projects we met to discuss are moving faster, more people are involved, so let's not make it a 4 year gap this time. Should we do the same dates? Different Location? Let's talk.

Thank you!

If you bought a ticket, you were a sponsor! So I want to thank all the sponsors, especially those who bought high value tickets. You are helping to build a better internet for everyone.

Linux on MacBook Pro

I recently switched from MacOS to Linux.


Because learning is fun.


There are thousands of posts about which Linux is the best. There isn't one right answer. For me it's a distribution that reflects my own values. I like to keep things simple.



You can see from the output that Arch Linux is running on MacBook Pro hardware. That's a screenshot of the terminal running ZSH and TMUX. MacOS, aka Darwin, is a BSD variant and it's Unix-ish, so transitioning to Linux is easy and familiar.


There are other desktop options, but Gnome basically gives you an improved version of MacOS. I added some MacOS icons to make the transition more seamless.



Before you do anything, you'll need a USB-C to USB hub. You can buy a cheap keyboard and mouse to plug into it. You'll also need a USB drive. Get one with a few gigabytes; they're cheap.




This won't be a complete guide. I don't think a complete guide exists. You'll need to do some searches to find solutions for issues specific to your machine. Also, this could brick your computer. So don't do any of this unless you really know how to yolo. If you're a part-time nerd, check out this link.

The two best resources I found were this and this. And of course the arch wiki was incredibly helpful. I got here by analyzing these resources and doing some searches.


Start by downloading the ISO from here, pick a link with http! After you download it, verify the signature.

You can use this app to make the USB bootable from the downloaded iso.

Plug everything in and turn your computer on while holding down the command key. When the menu pops up, select your USB drive and continue booting it. Say goodbye to MacOS, you're about to install Arch.


You'll see some logs and then you'll get dropped into a command line. The first thing we're going to do is partition and format the disk. Your prompt is going to look like root@archiso ~ #, the command we're going to run is lsblk.

root@archiso ~ # lsblk

You'll see a list of block devices. Mine looks like this, but yours will look different (I don't have a USB drive plugged in here). You need to figure out which one is your storage device and which one is your USB device. You can probably determine that by the sizes of things.

nvme0n1     259:0    0 233.8G  0 disk
|- nvme0n1p1 259:1    0   489M  0 part /boot
|- nvme0n1p2 259:2    0 222.6G  0 part /
 \ nvme0n1p3 259:3    0  10.3G  0 part [SWAP]


After you figure out your device's name, write it down; we're going to need it again. Now let's edit it. For example, to edit my device I would run the following command.

root@archiso ~ # parted /dev/nvme0n1

You're in the partition editor. Next, destroy everything on the device.

mklabel gpt

Then with the next two commands create the boot partition.

mkpart ESP fat32 1049kB 538MB
set 1 boot on

"Take the size of your hard disk, that you noted yourself earlier, and subtract the size of your RAM from it. I've got 8GB of RAM, so for SIZE I've put in: 243GB (251GB minus 8GB)." - Phil

You might have to adjust these values, but try something like this to start.

mkpart primary ext4 538MB 243GB
mkpart primary linux-swap 243GB 100%


Next we're going to format the partitions. Run lsblk again to see the new partitions you made, replace foo with your actual device name.

mkfs.vfat -F32 /dev/foo1
mkfs.ext4 /dev/foo2
mkswap /dev/foo3
swapon /dev/foo3

If you get the error Not enough clusters for a 32 bit FAT!, try increasing the size of your partition as discussed here.


Next we need to get the OS onto your newly partitioned and formatted device and configure it.


Let's mount the partitions, replacing foo with your actual device name.

root@archiso ~ # mount /dev/foo2 /mnt

root@archiso ~ # mkdir -p /mnt/boot
root@archiso ~ # mount /dev/foo1 /mnt/boot


Select the right mirror by moving it to the top of the list.

root@archiso ~ # vi /etc/pacman.d/mirrorlist

Run the pacstrap command to copy the OS onto your device. I needed to install not only the base and the base-devel packages, but also the wireless networking packages so that I could get on the Wifi. Later I realized that after installing the gnome package, network connectivity is handled really well by the networkmanager package.

root@archiso ~ # pacstrap /mnt base base-devel dialog wireless_tools netcfg wpa_supplicant


Generate your File System Table with this command.

root@archiso ~ # genfstab -U -p /mnt >> /mnt/etc/fstab

Open it with vi, there are some changes you'll need to make...

"Make sure that the line of the ext4 partition ends with a 2, the swap partition’s line ends with a 0, and the boot partition’s line ends with a 1. This configures the partition checking on boot." - Phil

root@archiso ~ # vi /mnt/etc/fstab


Now change the root and configure the OS.

root@archiso ~ # arch-chroot /mnt


After you run arch-chroot, the prompt will change slightly. Type in the following command to pick a new password for the root user. Write this down.

[root@archiso /]# passwd


Let's get your keyboard and trackpad working. Use pacman (the Arch package manager) to install some things.

pacman -S git linux-headers dkms

Now edit the keyboard configuration file.

[root@archiso /]# vi /etc/dracut.conf.d/keyboard.conf

Add the following line.

add_drivers+="applespi intel_lpss_pci spi_pxa2xx_platform apple-ib-tb"

Now make the system aware of our new modules

vi /etc/initramfs-tools/modules

Add the following lines


Now get and build the drivers. If you have a touch-bar, check out the branch for that using git checkout touchbar-driver-hid-driver after you clone.

[root@archiso /]# git clone https://github.com/roadrunner2/macbook12-spi-driver.git
[root@archiso /]# cd macbook12-spi-driver
[root@archiso /]# ln -s `pwd` /usr/src/applespi-0.1
[root@archiso /]# dkms install applespi/0.1

There are some tweaks you can do, but at this point you should have a working keyboard and trackpad!



Open the locale.gen file and uncomment the line with en_US.UTF-8 UTF-8.

[root@archiso /]# vi /etc/locale.gen
[root@archiso /]# locale-gen
[root@archiso /]# echo LANG=en_US.UTF-8 > /etc/locale.conf
[root@archiso /]# export LANG=en_US.UTF-8


Set the timezone. To get a list of time zones, use timedatectl list-timezones.

[root@archiso /]# ln -s /usr/share/zoneinfo/Zone/SubZone /etc/localtime

Set the hardware clock.

[root@archiso /]# hwclock --systohc --utc


Add kernel modules by opening or creating the following file.

[root@archiso /]# vi /etc/modules

Add the following two lines.



Set your host name, replace foo with your actual hostname.

[root@archiso /]# echo foo > /etc/hostname

Then add matching entries to /etc/hosts.

127.0.0.1   localhost.localdomain foo
::1         localhost.localdomain foo


Install and enable the DHCP daemon.

[root@archiso /]# pacman -S dhcpcd
[root@archiso /]# systemctl enable dhcpcd


Install EFI tools and use them to install systemd-boot on your boot partition.

[root@archiso /]# pacman -S dosfstools
[root@archiso /]# bootctl --path=/boot install
[root@archiso /]# vi /boot/loader/entries/arch.conf

Add the following lines and replace foo with the name of your storage device.

title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img
options root=/dev/foo2 rw elevator=deadline quiet splash resume=/dev/foo3 nmi_watchdog=0

Now tell the bootloader about your new bootable option.

[root@archiso /]# echo "default arch" > /boot/loader/loader.conf

Exit arch-chroot, unplug all those USBs and reboot your system.

[root@archiso /]# exit
[root@archiso /]# reboot

After this you should get dropped into a command line again. But this time you will be running your new OS, and you will have keyboard and mouse support. You may still have some work to do if any of your devices aren't working (audio either works out of the box or can be tricky to get going).

There are also configuration things to do, but that depends on how you want to use your computer. Most of the code in the official repositories is seen by a lot of eyes, but personally I try to stay away from the AUR if I can, and I try to audit the packages I install.

You should read the Security section of the Arch Wiki.

Good luck, or congratulations, depending on where you are. Hit me up on either Twitter (@heapwolf) or Freenode IRC (heapwolf) if you have questions.





Curriculum Vitae


Name: Paolo Fragomeni, Software Engineer


Email: paolo@async.ly

Web: https://hx.ht

Twitter: https://twitter.com/heapwolf

Github: https://github.com/heapwolf


I left MIT in 2010 to co-found Nodejitsu (a PaaS, since integrated with GoDaddy). Most recently I founded Voltra Co. (entertainment software), which joined Conductor Lab. In addition to being a technical founder, CTO, and engineer, I have worked in the infosec space.


Computer Science Research. Software Engineering: programming design and engineering, concurrent and distributed programming, metaprogramming, functional programming, ECMAScript (JavaScript), and key-value stores.


CTO, Cofounder at Voltra Co.

January 2016 - August 2018 (2.5 years)

Voltra Co. was a cloud storage service and set of audio products. Voltra's desktop and mobile players sync so you can stream your music from anywhere. The only ad-free fully hi-res platform. On this project I worked with Electron, JavaScript, Node.js, C++, CSS3, HTML5, Stylus, Jade, Webpack, React, React-Native, and Amazon Web Services. Voltra joined Conductor Lab in August 2018.

VP of Engineering at Now Secure

November 2014 - January 2016 (1 year 3 months)

Built and coordinated engineering and security research teams. Set technical goals and worked hands-on on lots of projects. On this project I worked primarily with C, C++, JavaScript, Node.js, and HTML5.

Engineer, CTO at Mic

January 2014 - November 2014 (11 months)

Hereishow.to joined mic.com, where I served as CTO. Built an engineering team and integrated components of Here Is How into existing products. On this project I worked with Amazon Web Services, Node.js, JavaScript, HTML5, and CSS3.

Engineer, CTO, Cofounder at Here Is How

November 2012 - January 2014 (1 year 3 months)

A CMS for technical writing with a web-based interface similar to Medium.com. This project was acqui-hired by mic.com. On this project I worked with Docker, JavaScript, Node.js, WebSockets, C, C++, HTML5, and CSS3.

Engineer, CTO, Cofounder at Nodejitsu

September 2010 - December 2012 (2 years 4 months)

Co-founder, Chief Technology Officer. Lots of R&D. Conceptualized and implemented products that simplify and manage application deployments for the Node.js platform.