US Counties Map in WebGL using D3 and Three.js

Last Updated On 24 Feb 2020

So the elections are around the corner and you’re ready to create a kickass dataviz of the polls. Or maybe you’re preparing to map the spread of the coronavirus across the US. Well, whatever you’re here for, let me help out with it!

Final demo map animation

Let’s get started!

Following this D3 example, we start by creating an SVG map of 3,142 US counties.

The SVG is created using D3 - see the full source code here


createD3Map () {
  // `us` is the loaded TopoJSON of US counties and `topojson` is the
  // topojson-client library (see the full source for how they are loaded)
  const svg = d3.select(this.$refs['map-svg'])
  const path = d3.geoPath()
  svg.append('g')
      .attr('class', 'counties')
    .selectAll('path')
    .data(topojson.feature(us, us.objects.counties).features)
    .enter().append('path')
      .attr('id', d => d.id) // the county id lets us match the path back to its county
      .attr('d', path)
}

The snippet generates an SVG map where each county is a path element with an id that matches it to its county name.

This is great if you don’t need too much interactivity and animation, say if you’re just trying to compare values across the US.

Now say you want to compare a county against all the others whenever the mouse rolls over it. You could achieve this by updating the fill attribute of every county’s SVG node with the appropriate value. That would probably work, but performance would suffer: on each mouseover the browser has to re-render all 3,142 county paths, which is a costly operation for the CPU and would likely slow down your application.
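For illustration, the naive SVG version of that interaction could look something like the sketch below, where colorScale and distanceTo are hypothetical helpers, not part of the original code:

// Naive SVG approach (sketch): on every mouseover, recompute and set the fill
// of all 3,142 county paths. colorScale and distanceTo are hypothetical helpers.
svg.selectAll('g.counties path').on('mouseover', function () {
  const hoveredId = this.id
  svg.selectAll('g.counties path')
    .attr('fill', d => colorScale(distanceTo(hoveredId, d.id)))
})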

Could we use the GPU instead?

By using the parallel powers of WebGL we can try to speed up the map by rendering it with the GPU instead of the CPU.

The first thing we need to do is generate a texture of the map, or more specifically of the counties, that we will upload to the GPU. Because the GPU doesn’t know about county outlines and only deals with rectangular areas, we need to place each county into its own rectangle and upload this mapped map to the GPU - how meta is that?

This technique is called texture packing. To generate it we will follow these main steps:

  • compute each county’s bounding-box data from its SVG node element using getBBox()

  • use the GrowingPacker library to compute the packed positions

  • redraw all 3,142 US counties onto a canvas, using their packed positions

Sprite-Sheet Counties

The code below creates the texture packing data which allows us to generate a texture and its UV map information.

const blocks = [...this.$refs["map-svg"].querySelectorAll("g.counties path")]
  .map((path) => {
    // bounding box of the county path, scaled up to the texture resolution
    const { x, y, width, height } = path.getBBox();
    return {
      id: parseInt(path.getAttribute("id"), 10),
      x: x * this.scale,
      y: y * this.scale,
      w: width * this.scale,
      h: height * this.scale,
      path,
    };
  })
  // pack the larger blocks first for a tighter fit
  .sort((a, b) => Math.min(b.w, b.h) - Math.min(a.w, a.h));
const packer = new GrowingPacker();
packer.fit(blocks); // adds a `fit` property ({ x, y }) to each block

See the full source code here
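With the packed positions computed, the last step is to redraw every county into its slot on a canvas, which then becomes the WebGL texture. Below is a minimal sketch of that step, assuming a 1024×1024 canvas (which matches the UV math used later); the function name drawPackedTexture is illustrative, not the exact helper from the repo.

// Sketch: redraw each county path into its packed slot and turn the canvas
// into a texture. drawPackedTexture is an illustrative name.
drawPackedTexture (blocks) {
  const canvas = document.createElement('canvas')
  canvas.width = canvas.height = 1024
  const ctx = canvas.getContext('2d')
  ctx.fillStyle = '#ffffff'
  blocks.forEach((block) => {
    if (!block.fit) return // the packer could not place this block
    ctx.save()
    // move to the packed slot, then cancel the county's original position
    ctx.translate(block.fit.x, block.fit.y)
    ctx.scale(this.scale, this.scale)
    ctx.translate(-block.x / this.scale, -block.y / this.scale)
    ctx.fill(new Path2D(block.path.getAttribute('d')))
    ctx.restore()
  })
  return new THREE.CanvasTexture(canvas)
}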

Let’s write some shaders

For efficient rendering we will use instanced geometry: a single plane geometry, instanced once per county, onto which we project each county’s texture based on its UV data.

If you are not familiar with instanced geometries, they are a way of rendering multiple mesh instances from a single geometry buffer. In this case we define a plane geometry that we will resize to each county’s box.

And if you recall from earlier, we already have the UV data ready from the texture pack, where every instance has its own box area (x, y, width, height) on the texture.

Think of it as a pool of particles, each given its own image (the county texture) and its own dimensions (the county bounds), both of which we defined earlier using SVG.

Keep in mind: although we are not covering the entire code in this tutorial, you can access the complete code base on GitHub, as well as the boilerplate code for geometry instancing on the three.js examples page

Add Uniforms

uniforms: {
  map: { value: canvasTexture },
  isPicking: { type: 'f', value: 0.0 }
}
  • map is the texture pack we generated earlier.

  • isPicking will be used for mouse/county intersection; we’ll come back to it later

Define Attributes

// Create the plane instance geometry
const geometry = new THREE.InstancedBufferGeometry();
geometry.copy(new THREE.PlaneBufferGeometry(1, 1, 1, 1));

const instances = this.counties.length;
const ratios = [];
const countyIndexes = [];
const countyTags = [];
const offsets = [];
const uvOffsets = [];
const uvScales = [];
for (let i = 0, l = this.counties.length; i < l; i++) {
  const block = this.counties[i];
  ratios.push(RESET_VALUE);
  countyIndexes.push(i);
  // encode (i + 1) as an RGBA color, used later for GPU picking
  countyTags.push(
    (((i + 1) >> 16) & 255) / 255,
    (((i + 1) >> 8) & 255) / 255,
    (((i + 1) >> 0) & 255) / 255,
    1
  );
  // center the instance on its county box (the map is re-centered around the origin)
  offsets.push(block.x - 650 + block.w / 2, -block.y + 380 - block.h / 2, 0);
  // UV offset and scale of the county inside the 1024x1024 packed texture
  uvOffsets.push(block.fit.x / 1024, -block.fit.y / 1024);
  uvScales.push(block.w / 1024, block.h / 1024);
}

geometry.setAttribute(
  "ratio",
  new THREE.InstancedBufferAttribute(new Float32Array(ratios), 1)
);
geometry.setAttribute(
  "countyIndex",
  new THREE.InstancedBufferAttribute(new Float32Array(countyIndexes), 1)
);
geometry.setAttribute(
  "countyTag",
  new THREE.InstancedBufferAttribute(new Float32Array(countyTags), 4)
);
geometry.setAttribute(
  "offset",
  new THREE.InstancedBufferAttribute(new Float32Array(offsets), 3)
);
geometry.setAttribute(
  "uvOffsets",
  new THREE.InstancedBufferAttribute(new Float32Array(uvOffsets), 2)
);
geometry.setAttribute(
  "uvScales",
  new THREE.InstancedBufferAttribute(new Float32Array(uvScales), 2)
);
  • countyIndex and countyTag will be used later on for mouse intersection

  • offset is the x, y, z location of the county on the map

  • uvOffsets and uvScales allow us to map the texture onto the instance plane.
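With the uniforms and attributes defined, the material ties everything together. Here is a minimal sketch of how they might be wired into a THREE.RawShaderMaterial, with vertexShader and fragmentShader being the GLSL strings shown below (the exact options are assumptions; see the full source for details):

// Sketch: combining the geometry, uniforms and shaders into a mesh.
// The exact material options are assumptions.
const material = new THREE.RawShaderMaterial({
  uniforms: {
    map: { value: canvasTexture },
    isPicking: { type: 'f', value: 0.0 }
  },
  vertexShader,
  fragmentShader,
  transparent: true
});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);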

Vertex Shader

The vertex shader takes care of applying the instance offset (the position of the county on the map) as well as scaling the plane to fit the county image.

It also passes the feature ratio of the instance to the fragment shader that we will use to affect the color rendering later on.

precision highp float;
uniform float time;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec2 uv;
varying vec2 vUv;
attribute float ratio;
varying float vRatio;
attribute float countyIndex;
attribute vec4 countyTag;
attribute vec3 position;
attribute vec3 offset;
attribute vec4 color;
attribute vec2 uvOffsets;
attribute vec2 uvScales;
varying vec4 vCountyTag;

void main(){
  vCountyTag = countyTag;
  // scale the unit plane to the county's pixel size and move it to its offset
  vec3 pos = position * vec3(uvScales.xy * 1024.0, 1.0);
  pos = pos + offset;
  // remap the plane UVs into the county's slot of the packed texture
  vUv = vec2(uv.x, 1.0-uv.y);
  vUv *= uvScales;
  vUv = vec2(vUv.x, 1.0-vUv.y);
  vUv += vec2(uvOffsets.x, uvOffsets.y);
  vRatio = ratio;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

Fragment Shader

While the vertex shader takes care of setting the proper position of the geometry, the fragment shader is responsible for drawing it to the screen.

For that we use a transparent plane and draw the county image onto it, using the UV coordinates passed down from the vertex shader.

precision highp float;
uniform float time;
uniform sampler2D map;
uniform float isPicking;
varying vec2 vUv;
varying float vRatio;
varying vec4 vCountyTag;

void main() {
  vec2 uv = vUv;
  vec4 color = texture2D(map, uv);
  // discard dark texels, i.e. anything outside the county shape in the packed texture
  if (color.x + color.y + color.z < 0.9) {
    discard;
  }
  if (isPicking == 1.0) {
    // picking pass: output the county id encoded as a color
    gl_FragColor = vCountyTag;
  } else {
    // normal pass: tint the county based on its ratio value
    color.r = 1.0 - vRatio * 0.7;
    color.g = 0.5 - vRatio * 0.2;
    color.b = vRatio * 0.9;
    gl_FragColor = color;
  }
}

User Interactions

Next we want to add interactivity to the map: when the mouse rolls over a county, we locate it and update all the other counties relative to it. Say we hover Los Angeles County: we then want to update every other county’s ratio with its distance to Los Angeles.

With SVG this would be very easy to do (although less scalable), as we could simply query the DOM or the SVG node to know exactly which county is selected.

But since we’re drawing within the GPU context, we have no simple way of querying it from the CPU to know exactly what the user selected.

The solution I came up with works as follows:

  • Assign a unique id to each county instance and encode it as a color (RGB)

  • When the user moves the mouse, render a back buffer of the map using the id colors instead of the normal render

  • Back on the CPU, sample the pixel value at the mouse location and decode it into a county id

First, let’s encode the county id into RGB values using the following snippet:

countyTags.push(
  (((i + 1) >> 16) & 255) / 255,
  (((i + 1) >> 8) & 255) / 255,
  (((i + 1) >> 0) & 255) / 255,
  1
);

This is why we created the isPicking: { type: ‘f’, value: 0.0 } uniform earlier - we use it as a Boolean where 0 is false and 1 is true.

When the mouse moves, we toggle the isPicking uniform to 1, draw one pass to the back buffer, sample the pixel under the mouse, then roll isPicking back to 0:

const samplePoint = () => {
  // switch the shader to the picking pass
  mesh.material.uniforms.isPicking.value = 1;

  renderer.setRenderTarget(pickingRenderTarget);
  renderer.render(scene, camera);
  // read the pixel under the mouse (y is flipped: the render target origin is bottom-left)
  renderer.readRenderTargetPixels(
    pickingRenderTarget,
    this.mouse.x,
    pickingRenderTarget.height - this.mouse.y,
    1,
    1,
    pixelBuffer
  );
  renderer.setRenderTarget(null);
  renderer.clear();

  // switch back to the normal render
  mesh.material.uniforms.isPicking.value = 0;
};
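For context, samplePoint relies on an offscreen render target and a small pixel buffer. Here is a minimal sketch of that setup; the sizing and the mousemove wiring are assumptions, see the full source for the actual code.

// Sketch: the offscreen target and pixel buffer that samplePoint() reads from.
const { width, height } = renderer.getSize(new THREE.Vector2());
const pickingRenderTarget = new THREE.WebGLRenderTarget(width, height);
const pixelBuffer = new Uint8Array(4); // RGBA of the single pixel we read back

renderer.domElement.addEventListener("mousemove", (event) => {
  const rect = renderer.domElement.getBoundingClientRect();
  this.mouse = { x: event.clientX - rect.left, y: event.clientY - rect.top };
  samplePoint();
});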

Finally we can sample the buffer for a hit:

// interpret the pixel as an id (0 means no county was hit)
let id = (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];

id -= 1;

if (id >= 0) {
  if (id >= this.counties.length) {
    console.error("ERROR! ---> ", id, ">", this.counties.length, pixelBuffer);
  } else {
    const index = id; // index of the hovered county in this.counties
    // WE HAVE FOUND A COUNTY!
  }
}
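Once we have a valid index, we can react to the hover, for example by updating the per-instance ratio attribute so every other county responds to the hovered one. A minimal sketch, where computeRatio is a hypothetical helper (e.g. a normalized distance between two counties) and not part of the original code:

// Sketch: update the instanced "ratio" attribute in response to the hovered county.
const hovered = this.counties[index];
const ratioAttr = geometry.getAttribute("ratio");
for (let i = 0; i < this.counties.length; i++) {
  ratioAttr.setX(i, computeRatio(hovered, this.counties[i]));
}
ratioAttr.needsUpdate = true; // re-upload the attribute to the GPU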

Note that an alternative and simpler way would have been to use a raycast - example

The limitation with raycasting is that we would get a hit on the full bounding box and not on the county outline itself. So our approach is a bit more involved, but it offers pixel-perfect picking!

Final demo and source code

Check out the following:

all steps demo

final demo

full source code

That’s about it! I hope you enjoyed this tutorial. If you have any questions or comments, please use the section below or reach out on Twitter. Enjoy!

About The Author

Headshot of Michael Iriarte aka Mika

Hi, I'm Michael aka Mika. I'm a software engineer with years of experience in frontend development. Thank you for visiting tips4devs.com. I hope you learned something fun today! You can follow me on Twitter, see some of my work on GitHub, or read more about me on my website.