This article describes how to make a fried egg across various Web rendering APIs.
The HTML/CSS Way
We begin this journey with the basics, HTML and CSS.
First we create the layout, which is just a pink container with two children, the egg white and the yolk.
<div class="egg">
<div class="white"></div>
<div class="yolk"></div>
</div>
We have these two rectangles (yolk and white) inside our main egg container (the pink background). It does not quite look like an egg yet. Let's apply some CSS rules to change the shape of each container.
We first set the width and height of each container, then we use the border-radius: 50% property to round the corners and make each one more circle-like. Finally we set the respective background colors and add a small border around each shape so it looks like a soft shadow.
Below is the full code. Be aware that it is not exactly valid CSS syntax. Instead it uses SCSS (Sassy CSS), which enables features such as nesting, mixins, variables and even built-in utilities such as random(n), which we use here to make the egg look a bit different every time the SCSS is compiled into CSS with your favorite tooling (for example, width: 45% + random(20) might compile to width: 57%).
.egg {
  position: relative;
  background: hotpink;
  width: 100%;
  height: 100%;

  .white {
    position: absolute;
    background: white;
    width: 45% + random(20);
    height: 50% + random(20);
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%) rotate(0deg + random(180));
    border-radius: 50%;
    border: 2vmin solid darken(white, 4%);
  }

  .yolk {
    position: absolute;
    background: gold;
    width: 25%;
    height: 25%;
    top: 48% + random(4);
    left: 48% + random(4);
    transform: translate(-50%, -50%);
    border-radius: 50%;
    border: 2vmin solid darken(gold, 1%);
  }
}
Congratulations if you are new to programming and made it this far. Keep in mind this is more of a hack than a common technique, but it remains fun to explore.
Next, check out another approach using the SVG path API.
Using SVG Path
Scalable Vector Graphics, aka SVG, is an XML-based standard developed by the World Wide Web Consortium (W3C) for sharing graphical content on the Web.
SVGs are great for responsive design elements. Try zooming in and see how the edges remain sharp! That's because vector graphics can scale to any resolution, which is definitely a great asset when you have to support different devices and screen sizes.
One great advantage of using SVG is the control it gives over the responsive behavior, which is defined using the viewBox and preserveAspectRatio attributes. Learn more about them here.
But let's not forget what we came here for and start frying that egg! Check out the full code below:
<svg xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 100 100">
<rect width="100" height="100" fill="hotpink"/>
<path d="M35,17 C48,4 71,17 75,31 C79,45 73,59 75,71 C77,83 64,94 52,90 C40,86 29,78 26,67 C19,45 23,30 35,17 Z"
stroke-width="3" stroke="whitesmoke" fill="white">
</path>
<circle
cx="50.5" cy="49.5" r="13"
stroke-width="3" stroke="#fad300" fill="gold"
>
</circle>
</svg>
First, we define a viewBox that gives us a viewport of 100 by 100 units to work with.
We can later rely on these units when placing elements and the SVG automagically scales everything accordingly. Pretty convenient indeed!
Let's start adding elements. The easiest parts are the background and the egg yolk, for which we use the built-in rect and circle elements.
Drawing the egg white, however, requires a less primitive shape. For that level of customization we use the path element, which gives us much finer control over the shape.
Let's take a closer look at the gibberish-looking part below:
d="M35,17 C48,4 71,17 75,31 C79,45 73,59 75,71 C77,83 64,94 52,90 C40,86 29,78 26,67 C19,45 23,30 35,17 Z"
It simply translates to the following commands
Move to {x:35, y:17}
Cubic Bézier curve to {x:75, y:31}
using control points {x:48, y:4} and {x:71, y:17}
Cubic Bézier curve to {x:75, y:71}
using control points {x:79, y:45} and {x:73, y:59}
etc...
For more information regarding the path element and the meaning of the d attribute, refer to the MDN Path documentation.
If you were not already familiar with SVG, you should by now have a pretty good idea of its potential. In the next chapter we will explore how to make an egg with a more advanced SVG feature this time: SVG filters.
SVG Filters
In this chapter we take a look at another great feature of SVG. Filters in SVG take a similar approach to CSS filters, but with an XML syntax instead.
You can learn more about filters in this great Codrops article. Once you're back, let's take a look at the full code for this fried egg.
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <defs>
    <filter id="blur-filter">
      <feTurbulence type="fractalNoise" baseFrequency=".02"/>
      <feDisplacementMap in="SourceGraphic" scale="25"/>
      <feComposite in="SourceGraphic" operator="atop"/>
    </filter>
  </defs>
  <rect width="100%" height="100%" fill="hotpink"/>
  <circle cx="50" cy="50" r="40" stroke-width="3"
          stroke="whitesmoke" fill="white"
          filter="url(#blur-filter)"/>
  <circle cx="50" cy="50" r="15" stroke-width="3"
          stroke="#fad300" fill="gold"/>
</svg>
In the previous chapter we used a circle element for the yolk and a custom path for the egg white. This time we're using two circles, and we alter the shape of the egg white with a displacement filter instead.
The first step is to define the filter in the defs section of the SVG. This is where we define elements, patterns and filters ahead of time, so we can refer to them and reuse them multiple times later.
One thing to keep in mind is that these definitions are global to the HTML DOM. This means that if one SVG declares defs, any other SVG on the page has access to them. That can be useful if you want to update the patterns of multiple SVGs at once, but it can also create issues when id names clash with one another. For this reason it's important to always use unique and specific id attributes.
Try changing the fractal noise type and base frequency in feTurbulence, or the scale of feDisplacementMap.
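If you would rather experiment without touching the markup, here is a small, purely illustrative JavaScript snippet (assuming the filter above is on the page) that tweaks those attributes from the browser console:
// Hypothetical console experiment: grab the filter defined above
// and poke at its primitives to see how the egg white deforms.
const filter = document.querySelector('#blur-filter');
const turbulence = filter.children[0];     // the <feTurbulence> primitive
const displacement = filter.children[1];   // the <feDisplacementMap> primitive
turbulence.setAttribute('baseFrequency', '0.05'); // finer, busier noise
displacement.setAttribute('scale', '40');         // stronger distortion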
You're one step away from making an omelette!
Of course this article is just skimming the surface of SVG filters, but we still have a long way to go! In the next chapter we will look into the Canvas 2D API.
Canvas 2D
So far we've looked into making a fried egg with HTML, CSS and SVG. Easy stuff, you say? This time we get our hands dirty and go all the way with JavaScript!
For that we use the canvas element, which exposes a 2D rendering API to JavaScript.
// Query the canvas from the DOM tree
const canvas = container.querySelector('canvas');
// Get the 2D context for the canvas
const context = canvas.getContext('2d');
All we do is query the canvas element and use it to request the '2D' rendering context. We save this reference for later as we will need it to execute drawing commands.
const size = container.getBoundingClientRect().width;
const scale = size / 100;
// make it a square by applying the width as the height
canvas.width = size;
canvas.height = size;
This configuration step seems a bit more involved, but it is in a way similar to setting the viewBox of an SVG as seen in a previous chapter.
We get the size of the parent container and apply it to the canvas. Be aware that this operation does not set the computed on-screen size of the canvas; instead, it sets the resolution of its drawing buffer.
For example, a canvas could have a width attribute of 500 units but a 250px width in CSS. In that case your canvas would have a density of 2 device pixels per CSS pixel, which is probably well suited for retina displays.
This kind of situation can be handled by checking the value of window.devicePixelRatio. Keep in mind that the higher it is, the more pixels there are to render, which can quickly have an impact on performance. With great power comes great responsibility.
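As a minimal sketch (reusing the container and canvas variables from above, and capping the ratio at an arbitrary value of 2), taking the pixel ratio into account could look something like this:
// Minimal sketch: size the drawing buffer according to devicePixelRatio.
// The cap at 2 is an arbitrary, illustrative choice.
const dpr = Math.min(window.devicePixelRatio || 1, 2);
const cssSize = container.getBoundingClientRect().width;
canvas.style.width = `${cssSize}px`;   // on-screen (CSS) size
canvas.style.height = `${cssSize}px`;
canvas.width = cssSize * dpr;          // drawing-buffer resolution
canvas.height = cssSize * dpr;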
Note how we define a scale ratio using 100 as a base; we will come back to it shortly.
The rest of the code is fairly straightforward. Each time we want to perform a draw operation, we set the main settings first: fillStyle for the fill color, strokeStyle for the border color, lineWidth for the border width, and so on. With these we can draw a pink rectangle for the background and a yellow circle for the yolk.
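As a rough sketch (reusing the context and scale variables from above, with colors echoing the earlier examples), that part could look like this:
// Rough sketch of the background and the yolk.
context.fillStyle = 'hotpink';
context.fillRect(0, 0, canvas.width, canvas.height);

context.fillStyle = 'gold';
context.strokeStyle = '#fad300';
context.lineWidth = 3 * scale;
context.beginPath();
context.arc(50 * scale, 50 * scale, 15 * scale, 0, Math.PI * 2);
context.fill();
context.stroke();
// Note: in the finished drawing the yolk is drawn last so it stays on top of the egg white.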
Now, remember the earlier chapter on SVG paths? We draw the egg white with the same kind of Bézier curves, this time using Canvas drawing commands.
// Our path was drawn in a 100x100 square but now
// we're drawing to a scaled canvas,
// so we need to adjust our scale
context.save();
context.scale(scale, scale);
// Let's start drawing the path
context.beginPath();
context.moveTo(35,20);
context.bezierCurveTo(48,7, 76,17, 79,31);
context.bezierCurveTo(83,45, 81,56, 84,68);
context.bezierCurveTo(86,80, 68,94, 57,90);
context.bezierCurveTo(45,86, 27,87, 23,77);
context.bezierCurveTo(17,55, 23,33, 35,20);
context.fill();
context.stroke();
// We're done drawing, let's reset the scale for later
context.restore();
The only caveat we need to deal with here is that our path coordinates were generated for a 100 by 100 unit viewport. That is why, before executing the drawing commands, we scale the entire context using the scale ratio calculated earlier.
Before scaling the context, we save its initial state. Once we're done with the drawing operations, we restore the context to the state it was in before the scale was applied.
We used the bezierCurveTo command here. Alternatively, we could have drawn the same shape by building a Path2D object from the same path syntax we used previously in SVG.
const path = new Path2D('M35,20 C48,7 76,17 79,31 C83,45 81,56 84,68 C86,80 68,94 57,90 C45,86 27,87 23,77 C17,55 23,33 35,20 Z');
context.fill(path);
context.stroke(path);
As you can see, canvas unleashes the power of JavaScript and allows for much more dynamic drawings than our HTML/CSS and SVG eggs. The drawback, however, is that we now have to put more effort into managing the responsive behavior of our canvas and its resolution.
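For example, one way to handle it (a sketch only, where draw() is assumed to wrap the drawing commands shown earlier) is to watch the container and redraw whenever it changes size:
// Sketch: keep the canvas resolution in sync with its container.
// `draw()` is a hypothetical function wrapping the drawing code above.
const resizeObserver = new ResizeObserver(() => {
  const size = container.getBoundingClientRect().width;
  canvas.width = size;   // resizing the buffer also clears the canvas,
  canvas.height = size;  // so everything needs to be redrawn
  draw();
});
resizeObserver.observe(container);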
At this point chances are you have evolved into an egg frying enthusiast who wants to start cooking hundreds of them, every 16 milliseconds. That's when you want to start looking for more dedicated 'cooking' tools and tap into the power of the GPU. This is what we cover in the next section about 2D drawing in WebGL.
WebGL 2D
Got tired of the CPU yet? Or maybe the CPU got tired of your script? Time to move up to WebGL and unleash the power of the GPU!
One common misconception about WebGL is that it's for 3D only. In a way that is true, as the 3D rendering pipeline is part of the core architecture of WebGL with its vertex and fragment shaders. But it's also very powerful at rendering 2D layers, stacked and blended on top of each other. Sometimes you don't need the entire toolbox of complex tris and geometries; sometimes you may just need to do some rendering on a quad.
Because the WebGL API is pretty low level and its boilerplate code can become cumbersome, we will avoid confusion by using the popular framework Three.js and focus essentially on the fragment shader.
Have you ever stumbled upon the ShaderToy website and wondered how the heck it works? The site uses WebGL to render a quad on the screen, and from there it focuses only on the fragment shader to draw on a 2D canvas.
You may think it's an odd way to over-complicate graphics rendering for a 2D output, and you would probably be right.
But let's look at it from this perspective: A quad is two triangles. If you can draw on triangles you can draw on anything in 3D. This is a great way to get started with WebGL by putting aside some of the complexity of 3D while getting comfortable with using its shader language GLSL.
With that said, let's jump into the code and set up a Three.js 2D scene using THREE.OrthographicCamera.
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const scene = new THREE.Scene();
const renderer = new THREE.WebGLRenderer({canvas});
renderer.setPixelRatio(window.devicePixelRatio);

// Create quad
const geometry = new THREE.PlaneGeometry(2, 2, 1, 1);
const material = new THREE.ShaderMaterial({...});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

// Render the scene once the quad has been added
renderer.render(scene, camera);
Three.js allows us to get the ball rolling really quickly! We create a camera, a scene and a renderer, then we add a quad to the scene.
Note that the quad mesh is composed of a PlaneGeometry and a ShaderMaterial.
We use THREE.ShaderMaterial because it allows us to define our own shaders.
Since we're just interested in modifying the rendered pixels and not in affecting the actual geometry at all, we use a boilerplate vertex shader:
varying vec2 vUv;

void main() {
  vUv = uv;
  gl_Position = vec4(position, 1.0);
}
The vertex shader has two jobs: computing the position of each vertex of the quad and passing the UV coordinates to the fragment shader through a varying (a variable shared between the vertex and fragment shaders).
Fragment shaders are executed for every pixel of your fried egg. The reason they are so powerful is that they run in parallel on your GPU.
Running code in parallel makes you think about it from a different perspective, and because of the GPU's architecture the variable types and data structures are different from what you may be used to in JS land.
varying vec2 vUv;

float draw_circle_gradient(vec2 coord, float radius) {
  return smoothstep(0.0, length(coord), radius);
}

float draw_circle(vec2 coord, float radius) {
  return step(length(coord), radius);
}

void main() {
  vec3 pink = vec3(1, 0.41, 0.71);
  vec3 white = vec3(1, 1, 1);
  vec3 yellow = vec3(1, 0.84, 0);
  vec3 yellowDark = vec3(0.98, 0.82, 0);

  vec2 p = -1.0 + 2.0 * vUv;

  float circle = draw_circle_gradient(p + vec2(0., 0.25), 0.35);
  circle = mix(circle, draw_circle_gradient(p + vec2(0.35, -0.1), 0.35), 0.5);
  circle = mix(circle, draw_circle_gradient(p + vec2(-0.3, -0.2), 0.35), 0.5);

  vec3 color = pink;

  float eggWhite = smoothstep(0.5, 0.51, circle);
  if (eggWhite > 0.0) {
    color = white;
  }

  float yolkOutline = draw_circle(p + vec2(0., 0.1), 0.35);
  if (yolkOutline > 0.0) {
    color = yellowDark;
  }

  float yolk = draw_circle(p + vec2(0., 0.1), 0.3);
  if (yolk > 0.0) {
    color = yellow;
  }

  gl_FragColor = vec4(color, 1.0);
}
A fragment shader's output is defined by its main function, whose role is to set the RGBA (red, green, blue, alpha) value of the pixel in a variable called gl_FragColor.
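To connect the dots with the ShaderMaterial call we left as {...} earlier, here is a sketch of how the two programs could be wired up, assuming they are stored in template-string constants (fragmentShader is assumed to hold the fried egg shader listed above):
// Hypothetical wiring of the two GLSL programs into the quad's material.
const vertexShader = `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = vec4(position, 1.0);
  }
`;
// fragmentShader holds the fried egg GLSL from above as a template string.
const material = new THREE.ShaderMaterial({ vertexShader, fragmentShader });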
This is of course just the tip of the iceberg. It is in no way optimized and was simplified for readability.
If you want to learn more on the topic, I strongly recommend checking out the Book of Shaders, and even taking it a step further to dig into signed distance functions, as explained by Inigo Quilez in his incredibly useful articles.
Since we just took a detour to look into a very specific use of WebGL, let's continue next by frying eggs in 3D using WebGL.
WebGL 3D
We are getting to the end of our egg frying journey, so it's time to top it off with some 3D!
As we did in the previous chapter, WebGL 2D, we are going to use Three.js and start once again by creating a scene with a pink background color. Then, just like before, we add a camera, except this time we use THREE.PerspectiveCamera.
What this means, in short, is that with a perspective camera objects appear smaller as they get farther from the camera, similar to how your eyes work in the real world.
const scene = new THREE.Scene();
scene.background = new THREE.Color(0xff69b4);

const camera = new THREE.PerspectiveCamera(20, 1, 1, 10000);
camera.position.y = 140;
camera.position.z = 200;
camera.lookAt(scene.position);
With our scene and camera set up, let's keep things simple and use the built-in Three.js primitive geometries (circle and sphere).
Of course we could open up Blender and try modelling a more realistic looking egg geometry, add textures, lights, environment maps and shadows but that would go beyond the scope of this article.
So for now we can create the egg white using a simple CircleGeometry. By default the circle stands upright (facing the Z axis), so we rotate it 90° (half of π in radians) around the X axis so that it lies flat on the floor.
const eggWhite = new THREE.Mesh(
new THREE.CircleGeometry(30, 40),
new THREE.MeshBasicMaterial({ color: 0xffffff })
);
eggWhite.rotation.x = -Math.PI/2;
scene.add(eggWhite);
Next, the yolk is created using a SphereGeometry. We give it a basic material with a yellow color and add the mesh to the scene.
const eggYolk = new THREE.Mesh(
new THREE.SphereGeometry(10, 32, 32),
new THREE.MeshBasicMaterial({ color: 0xffd700 })
);
scene.add(eggYolk);
Your egg is now ready for the kitchen.
In the last step we create a renderer and call renderer.render(scene, camera).
const renderer = new THREE.WebGLRenderer({ canvas });
renderer.setPixelRatio(window.devicePixelRatio);
renderer.render(scene, camera);
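The snippet above renders a single frame. If you want the egg to actually sizzle, a common pattern (sketched here with purely illustrative jitter values) is to re-render inside a requestAnimationFrame loop:
// Illustrative render loop: jiggle the yolk a tiny bit every frame.
function sizzle(time) {
  eggYolk.position.x = Math.sin(time / 100) * 0.5;
  eggYolk.position.z = Math.cos(time / 130) * 0.5;
  renderer.render(scene, camera);
  requestAnimationFrame(sizzle);
}
requestAnimationFrame(sizzle);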
Look! The egg is sizzling! Yummy!
And this concludes our series of little gourmet experiments. I hope you enjoyed the adventure and that it will help you decide which pipeline to use in different situations.