
WebGL2 3D - Data Textures

This post is a continuation of a series of posts about WebGL2. The first started with fundamentals and the previous was about textures.

In the last post we went over how textures work and how to apply them. We created them from images we downloaded. In this article, instead of using an image, we'll create the data in JavaScript directly.

Creating data for a texture in JavaScript is mostly straightforward, depending on the texture format. WebGL2 supports a ton of texture formats, though. It supports all the un-sized formats from WebGL1:

Format           Type                    Channels  Bytes per pixel
RGBA             UNSIGNED_BYTE           4         4
RGB              UNSIGNED_BYTE           3         3
RGBA             UNSIGNED_SHORT_4_4_4_4  4         2
RGBA             UNSIGNED_SHORT_5_5_5_1  4         2
RGB              UNSIGNED_SHORT_5_6_5    3         2
LUMINANCE_ALPHA  UNSIGNED_BYTE           2         2
LUMINANCE        UNSIGNED_BYTE           1         1
ALPHA            UNSIGNED_BYTE           1         1

They're called un-sized because how they are actually represented internally is undefined in WebGL1; it is defined in WebGL2. In addition to those un-sized formats there's a slew of sized formats, including:

Sized Format    Base Format  R bits  G bits  B bits  A bits  Shared bits  Color renderable  Texture filterable
R8              RED          8                                            •                 •
R8_SNORM        RED          s8                                                             •
RG8             RG           8       8                                    •                 •
RG8_SNORM       RG           s8      s8                                                     •
RGB8            RGB          8       8       8                            •                 •
RGB8_SNORM      RGB          s8      s8      s8                                             •
RGB565          RGB          5       6       5                            •                 •
RGBA4           RGBA         4       4       4       4                    •                 •
RGB5_A1         RGBA         5       5       5       1                    •                 •
RGBA8           RGBA         8       8       8       8                    •                 •
RGBA8_SNORM     RGBA         s8      s8      s8      s8                                     •
RGB10_A2        RGBA         10      10      10      2                    •                 •
RGB10_A2UI      RGBA         ui10    ui10    ui10    ui2                  •
SRGB8           RGB          8       8       8                                              •
SRGB8_ALPHA8    RGBA         8       8       8       8                    •                 •
R16F            RED          f16                                                            •
RG16F           RG           f16     f16                                                    •
RGB16F          RGB          f16     f16     f16                                            •
RGBA16F         RGBA         f16     f16     f16     f16                                    •
R32F            RED          f32
RG32F           RG           f32     f32
RGB32F          RGB          f32     f32     f32
RGBA32F         RGBA         f32     f32     f32     f32
R11F_G11F_B10F  RGB          f11     f11     f10                                            •
RGB9_E5         RGB          9       9       9               5                              •
R8I             RED          i8                                           •
R8UI            RED          ui8                                          •
R16I            RED          i16                                          •
R16UI           RED          ui16                                         •
R32I            RED          i32                                          •
R32UI           RED          ui32                                         •
RG8I            RG           i8      i8                                   •
RG8UI           RG           ui8     ui8                                  •
RG16I           RG           i16     i16                                  •
RG16UI          RG           ui16    ui16                                 •
RG32I           RG           i32     i32                                  •
RG32UI          RG           ui32    ui32                                 •
RGB8I           RGB          i8      i8      i8
RGB8UI          RGB          ui8     ui8     ui8
RGB16I          RGB          i16     i16     i16
RGB16UI         RGB          ui16    ui16    ui16
RGB32I          RGB          i32     i32     i32
RGB32UI         RGB          ui32    ui32    ui32
RGBA8I          RGBA         i8      i8      i8      i8                   •
RGBA8UI         RGBA         ui8     ui8     ui8     ui8                  •
RGBA16I         RGBA         i16     i16     i16     i16                  •
RGBA16UI        RGBA         ui16    ui16    ui16    ui16                 •
RGBA32I         RGBA         i32     i32     i32     i32                  •
RGBA32UI        RGBA         ui32    ui32    ui32    ui32                 •

And these depth and stencil formats as well:

Sized Format        Base Format      Depth bits  Stencil bits
DEPTH_COMPONENT16   DEPTH_COMPONENT  16
DEPTH_COMPONENT24   DEPTH_COMPONENT  24
DEPTH_COMPONENT32F  DEPTH_COMPONENT  f32
DEPTH24_STENCIL8    DEPTH_STENCIL    24          ui8
DEPTH32F_STENCIL8   DEPTH_STENCIL    f32         ui8

Legend:

  • a single number like 8 means 8 bits that will be normalized from 0 to 1.
  • a number preceded by an s like s8 means a signed 8-bit number that will be normalized from -1 to 1.
  • a number preceded by an f like f16 means a floating point number.
  • a number preceded by an i like i8 means an integer number.
  • a number preceded by a ui like ui8 means an unsigned integer number.

We won't use this info here, but note the half and float texture formats: unlike WebGL1, they are always available in WebGL2, but by default they are not marked as color renderable and/or texture filterable. Not being color renderable means they cannot be rendered to. Rendering to a texture is covered in another lesson. Not being texture filterable means they must be used with gl.NEAREST only. Both of those features are available as optional extensions in WebGL2.
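If you need those features you have to ask for them. Here's a minimal sketch of checking for the two relevant extensions (EXT_color_buffer_float and OES_texture_float_linear are the actual extension names):

// check for the optional float texture features in WebGL2:
// EXT_color_buffer_float makes float formats color renderable,
// OES_texture_float_linear makes 32-bit float formats texture filterable
const canRenderToFloat = !!gl.getExtension('EXT_color_buffer_float');
const canFilterFloat = !!gl.getExtension('OES_texture_float_linear');
if (!canRenderToFloat) {
  console.log('can not render to floating point textures');
}
if (!canFilterFloat) {
  console.log('can only use gl.NEAREST with 32-bit float textures');
}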

For each of the formats you specify both the internal format (the format the GPU will use internally) and the format and type of the data you're supplying to WebGL. Here is a table showing which format and type you must use to supply data for a given internal format:

Internal Format                          Format           Type                            Source Bytes Per Pixel
RGBA8, RGB5_A1, RGBA4, SRGB8_ALPHA8      RGBA             UNSIGNED_BYTE                   4
RGBA8_SNORM                              RGBA             BYTE                            4
RGBA4                                    RGBA             UNSIGNED_SHORT_4_4_4_4          2
RGB5_A1                                  RGBA             UNSIGNED_SHORT_5_5_5_1          2
RGB10_A2, RGB5_A1                        RGBA             UNSIGNED_INT_2_10_10_10_REV     4
RGBA16F                                  RGBA             HALF_FLOAT                      8
RGBA32F, RGBA16F                         RGBA             FLOAT                           16
RGBA8UI                                  RGBA_INTEGER     UNSIGNED_BYTE                   4
RGBA8I                                   RGBA_INTEGER     BYTE                            4
RGBA16UI                                 RGBA_INTEGER     UNSIGNED_SHORT                  8
RGBA16I                                  RGBA_INTEGER     SHORT                           8
RGBA32UI                                 RGBA_INTEGER     UNSIGNED_INT                    16
RGBA32I                                  RGBA_INTEGER     INT                             16
RGB10_A2UI                               RGBA_INTEGER     UNSIGNED_INT_2_10_10_10_REV     4
RGB8, RGB565, SRGB8                      RGB              UNSIGNED_BYTE                   3
RGB8_SNORM                               RGB              BYTE                            3
RGB565                                   RGB              UNSIGNED_SHORT_5_6_5            2
R11F_G11F_B10F                           RGB              UNSIGNED_INT_10F_11F_11F_REV    4
RGB9_E5                                  RGB              UNSIGNED_INT_5_9_9_9_REV        4
RGB16F, R11F_G11F_B10F, RGB9_E5          RGB              HALF_FLOAT                      6
RGB32F, RGB16F, R11F_G11F_B10F, RGB9_E5  RGB              FLOAT                           12
RGB8UI                                   RGB_INTEGER      UNSIGNED_BYTE                   3
RGB8I                                    RGB_INTEGER      BYTE                            3
RGB16UI                                  RGB_INTEGER      UNSIGNED_SHORT                  6
RGB16I                                   RGB_INTEGER      SHORT                           6
RGB32UI                                  RGB_INTEGER      UNSIGNED_INT                    12
RGB32I                                   RGB_INTEGER      INT                             12
RG8                                      RG               UNSIGNED_BYTE                   2
RG8_SNORM                                RG               BYTE                            2
RG16F                                    RG               HALF_FLOAT                      4
RG32F, RG16F                             RG               FLOAT                           8
RG8UI                                    RG_INTEGER       UNSIGNED_BYTE                   2
RG8I                                     RG_INTEGER       BYTE                            2
RG16UI                                   RG_INTEGER       UNSIGNED_SHORT                  4
RG16I                                    RG_INTEGER       SHORT                           4
RG32UI                                   RG_INTEGER       UNSIGNED_INT                    8
RG32I                                    RG_INTEGER       INT                             8
R8                                       RED              UNSIGNED_BYTE                   1
R8_SNORM                                 RED              BYTE                            1
R16F                                     RED              HALF_FLOAT                      2
R32F, R16F                               RED              FLOAT                           4
R8UI                                     RED_INTEGER      UNSIGNED_BYTE                   1
R8I                                      RED_INTEGER      BYTE                            1
R16UI                                    RED_INTEGER      UNSIGNED_SHORT                  2
R16I                                     RED_INTEGER      SHORT                           2
R32UI                                    RED_INTEGER      UNSIGNED_INT                    4
R32I                                     RED_INTEGER      INT                             4
DEPTH_COMPONENT16                        DEPTH_COMPONENT  UNSIGNED_SHORT                  2
DEPTH_COMPONENT24, DEPTH_COMPONENT16     DEPTH_COMPONENT  UNSIGNED_INT                    4
DEPTH_COMPONENT32F                       DEPTH_COMPONENT  FLOAT                           4
DEPTH24_STENCIL8                         DEPTH_STENCIL    UNSIGNED_INT_24_8               4
DEPTH32F_STENCIL8                        DEPTH_STENCIL    FLOAT_32_UNSIGNED_INT_24_8_REV  8
RGBA                                     RGBA             UNSIGNED_BYTE                   4
RGBA                                     RGBA             UNSIGNED_SHORT_4_4_4_4          2
RGBA                                     RGBA             UNSIGNED_SHORT_5_5_5_1          2
RGB                                      RGB              UNSIGNED_BYTE                   3
RGB                                      RGB              UNSIGNED_SHORT_5_6_5            2
LUMINANCE_ALPHA                          LUMINANCE_ALPHA  UNSIGNED_BYTE                   2
LUMINANCE                                LUMINANCE        UNSIGNED_BYTE                   1
ALPHA                                    ALPHA            UNSIGNED_BYTE                   1
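To pick an example from the table above, to make an RGBA32F texture you supply the data with format RGBA and type FLOAT, 16 bytes per pixel. A minimal sketch (the 2x2 size and the colors are made up for illustration):

// a 2x2 RGBA32F texture. Per the table above, internal format RGBA32F
// takes format = RGBA and type = FLOAT, 4 floats per pixel
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32F, 2, 2, 0, gl.RGBA, gl.FLOAT,
    new Float32Array([
      1, 0, 0, 1,   0, 1, 0, 1,  // red, green
      0, 0, 1, 1,   1, 1, 1, 1,  // blue, white
    ]));
// RGBA32F is not texture filterable by default so use gl.NEAREST
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);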

Let's create a 3x2 pixel R8 texture. Because it's an R8 texture there is only 1 value per pixel in the red channel.

We'll take the sample from the last article. First we'll change the texture coordinates to use the entire texture on each face of the cube.

// Fill the buffer with texture coordinates for the cube.
function setTexcoords(gl) {
  gl.bufferData(
      gl.ARRAY_BUFFER,
      new Float32Array([
        // front face
        0, 0,
        0, 1,
        1, 0,
        1, 0,
        0, 1,
        1, 1,
        ...

Then we'll change the code that creates a texture

// Create a texture.
var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);

-// Fill the texture with a 1x1 blue pixel.
-gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
-              new Uint8Array([0, 0, 255, 255]));

// fill texture with 3x2 pixels
const level = 0;
const internalFormat = gl.R8;
const width = 3;
const height = 2;
const border = 0;
const format = gl.RED;
const type = gl.UNSIGNED_BYTE;
const data = new Uint8Array([
  128,  64, 128,
    0, 192,   0,
]);
gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, width, height, border,
              format, type, data);

// set the filtering so we don't need mips and it's not filtered
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

-// Asynchronously load an image
-...

And here's that

Oops! Why is this not working?!?!?

Checking the JavaScript console we see an error something like this:

WebGL: INVALID_OPERATION: texImage2D: ArrayBufferView not big enough for request

It turns out there's a kind of obscure setting in WebGL left over from when OpenGL was first created. Computers sometimes go faster when data is a certain size. For example it can be faster to copy 2, 4, or 8 bytes at a time instead of 1 at a time. WebGL defaults to using 4 bytes at a time so it expects each row of data to be a multiple of 4 bytes (except for the last row).

Our data above is only 3 bytes per row, 6 bytes total, but WebGL is going to try to read 4 bytes for the first row and 3 bytes for the 2nd row, a total of 7 bytes, which is why it's complaining.

We can tell WebGL to deal with 1 byte at a time like this

const alignment = 1;
gl.pixelStorei(gl.UNPACK_ALIGNMENT, alignment);

Valid alignment values are 1, 2, 4, and 8.
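Put another way, the number of bytes WebGL expects per row is the row's unpadded size rounded up to a multiple of the alignment. A sketch of that computation (the function is just for illustration, not part of the WebGL API):

// how many bytes WebGL expects each row of source data to take
// for a given UNPACK_ALIGNMENT (the last row is not padded)
function bytesPerRow(width, bytesPerPixel, alignment) {
  const unpadded = width * bytesPerPixel;
  return Math.ceil(unpadded / alignment) * alignment;
}

bytesPerRow(3, 1, 4);  // 4, which is why our 3-byte rows fail above
bytesPerRow(3, 1, 1);  // 3, which is why an alignment of 1 works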

I suspect in WebGL you will not be able to measure a difference in speed between aligned data and un-aligned data. I wish the default was 1 instead of 4 so this issue wouldn't bite new users but, in order to stay compatible with OpenGL the default needed to stay the same. That way if a ported app supplies padded rows it will work unchanged. At the same time, in a new app you can just always set it to 1 and then be done with it.
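Conversely, if you wanted to keep the default alignment of 4 you could pad each row of the data to a multiple of 4 bytes instead. A sketch of what that padded version of our 3x2 data above would look like:

// the same 3x2 R8 data with each row padded to 4 bytes,
// which satisfies the default UNPACK_ALIGNMENT of 4
const data = new Uint8Array([
  128,  64, 128, 0,  // row 1: 3 pixels + 1 byte of padding
    0, 192,   0, 0,  // row 2: 3 pixels + 1 byte of padding
]);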

With the alignment set to 1, things should be working. The cube renders in shades of red because the texture only has a red channel; sampling a RED format texture yields (red, 0, 0, 1).

And with that covered let's move on to rendering to a texture.

Pixel vs Texel

Sometimes the pixels in a texture are called texels. Pixel is short for Picture Element. Texel is short for Texture Element.

I'm sure I'll get an earful from some graphics guru but as far as I can tell "texel" is an example of jargon. Personally I generally use "pixel" when referring to the elements of a texture without thinking about it. 😇
