It was an idea I had while washing dishes at work: can we do refraction in the vertex shader? You see, games with fancy water typically implement refraction using simple screen-space distortion. It's a classic effect, but if you've ever spent some time looking at how water distorts objects in real life you'll realize this method misses some of its most interesting properties.

Screen-space methods are really good at capturing the high-frequency distortions caused by ripples in the water's surface; what they fail to capture are large-scale distortions. Refraction "squishes" the underwater image: if you look directly down into a body of water, it'll appear about 25% shallower than it actually is. This shallower depth is known as the water's apparent depth. In fact, as the viewing angle gets shallower, apparent depth approaches 0 and the underwater image becomes completely flat. For large bodies of water this can be difficult to observe, but in ponds, pools, or even bathtubs it's a very obvious and visually interesting effect.
The challenge in achieving this in screen space is the same challenge you encounter doing anything in screen space: all you have to work with are the contents of the screen. If you try doing large-scale screen-space refraction, you'll inevitably run into situations where you're sampling a point that's either offscreen or occluded. This is where vertex displacement comes in: rather than shooting our view ray off into the hinterlands, why not distort the scene itself? If we model our water as a flat plane (as developers are wont to do), we can simulate refraction by applying a view-dependent Y offset to underwater vertices. Combined with screen-space distortion for high-frequency detail, this gets us much more realistic refraction than screen-space methods alone.
I developed this technique independently in 2024, but as it turns out there was prior work on it. In 2020 a team of researchers led by Hongli Liu published "Two-phase real-time rendering method for realistic water refraction," detailing a more advanced[a] version of the very same method. The full PDF is freely available, and I recommend giving it a read! I owe them the representation of the math below, as the equation they present is much cleaner than the one I used originally. Liu et al. don't give the vertex shader technique any particular name, so for catchiness purposes I will be referring to it as planar vertex refraction.
If you wanna try this out you can jump to the Usage section below, or read on for a brief technical explanation. If you have questions or know of any other sources on this technique please e-mail me at tentabrobpy@gmail.com.
Math
Given our index of refraction \(i\) and the positions of our camera and underwater vertex, we want to find the path of a refracted view ray from camera to vertex satisfying Snell's Law

\[\sin\theta_i = i\,\sin\theta_r\]

where \(\theta_i\) is the angle of incidence and \(\theta_r\) is the angle of refraction. Tracing along this ray from where it enters the water gives us the vertex's refracted image, the position where it would appear to be when viewed through the water. We find the entry point by solving the following equation:
\[\frac{x}{\sqrt{x^2+a^2}} = i\,\frac{d-x}{\sqrt{(d-x)^2+b^2}}\]

where \(a\) is the camera's height above the water, \(b\) is the vertex's depth below it, \(d\) is the horizontal distance between them, and \(x\) is the horizontal distance from the camera to the entry point (the left side is \(\sin\theta_i\), the right side \(i\sin\theta_r\)). We can then evaluate the linear expression \(-\frac{a}{x}d+a\), the height of the straight line from the camera through the entry point at distance \(d\), to give us \(b'\), the vertex's apparent height under refraction.
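As a quick sanity check (my own aside, not from Liu et al.): for a near-vertical view we can use the small-angle approximation \(\sin\theta \approx \tan\theta\) and solve the entry equation by hand:

\[\frac{x}{a} = i\,\frac{d-x}{b} \;\Rightarrow\; x = \frac{i\,a\,d}{b + i\,a} \;\Rightarrow\; b' = -\frac{a}{x}d + a = -\frac{b}{i}\]

With \(i \approx 1.33\) for water, the vertex appears at about 75% of its true depth, recovering the 25% figure from the introduction.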

Implementation
The entry point equation above can be solved analytically, but doing so isn't practical in real-time. We have a few different options for dealing with this; here, I'll show a simple lookup-based approach.[b] We'll start with the case where our camera stays above water, then move on to handling an underwater view.
Above-water view
First, note that viewing angle has a large influence on the refracted image. Given our water plane's surface normal, we can calculate a normalized view vector from vertex to camera and take cosL = dot(viewDir, surfaceNormal) to get the cosine of the viewing angle. In the above-water case this value stays between 0 and 1, making it a good candidate for a lookup coordinate.
If we fix the viewing angle, moving the camera along the view vector will still affect the refracted image. We need a way of accounting for how close the camera is to the water's surface. Since the camera's height will always be a fraction of the total height from camera to vertex, we can encode this relationship in the ratio yRatio = a / (a + b), giving us our second coordinate.

For a planar interface, these two ratios uniquely determine the refracted image. To create the lookup texture we generate the necessary constants from the UV coordinates, calculate x to arbitrary precision using a root-finding algorithm, then store a Y offset ratio \(\frac{b'}{b}\) which we can multiply with our vertex's Y at runtime to get its refracted image.
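To make that concrete, here's a minimal sketch of the per-texel lookup generation, under the conventions above. The scale invariance of the two ratios lets us fix a + b = 1; the function name, bisection as the root finder, and the iteration count are my own choices, not from the paper.

// Solves the entry point equation by bisection and returns the Y offset
// ratio b'/b for one texel. Run once per texel of the lookup texture,
// with uv.x = cosL and uv.y = yRatio (both clamped away from 0 and 1
// to avoid the degenerate straight-down and grazing cases).
float refractionRatio(float cosL, float yRatio, float ior) {
    float a = yRatio;        // camera height (normalized so a + b = 1)
    float b = 1.0 - yRatio;  // vertex depth
    float d = sqrt(1.0 / (cosL * cosL) - 1.0); // horizontal distance from the viewing angle
    // f(x) = x*sqrt((d-x)^2 + b^2) - ior*(d-x)*sqrt(x^2 + a^2) is the
    // entry equation with the denominators cleared; it's negative at
    // x = 0 and positive at x = d, so bisect for the root.
    float lo = 0.0;
    float hi = d;
    for (int iter = 0; iter < 48; iter++) {
        float x = 0.5 * (lo + hi);
        float f = x * length(vec2(d - x, b)) - ior * (d - x) * length(vec2(x, a));
        if (f < 0.0) { lo = x; } else { hi = x; }
    }
    float x = 0.5 * (lo + hi);
    float apparentDepth = (a / x) * d - a; // the magnitude of b' = -a/x * d + a
    return apparentDepth / b;              // the stored ratio b'/b
}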
Underwater view
The case of an underwater view proves more difficult. Above-water points seen from underwater will "recede" endlessly upward; an arbitrarily large offset like this is hard to represent in a lookup texture. The good news is we can use the same lookup: if we modify our coordinates, taking abs(cosL) and 1.0 - yRatio for above-water points, we can take the reciprocal of our lookup result to get the correct Y offset ratio.
Precision wasn't an issue in the previous section, but in this case taking the reciprocal of 8-bit lookup values can result in severe artifacts. Storing our lookup as a 16-bit float texture is sufficient for clean results.
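Putting both cases together, the runtime side might look something like this. This is a sketch under my own naming assumptions (u_refractionLUT is the 16-bit lookup from above, the water plane is horizontal at height u_waterY, and the underwater branch reflects my reading of the coordinate flip):

uniform sampler2D u_refractionLUT; // 16-bit float lookup of b'/b ratios
uniform vec3 u_cameraPos;
uniform float u_waterY; // height of the water plane

// Returns the world position with the refracted Y offset applied.
vec3 applyPlanarRefraction(vec3 worldPos) {
    float camH = u_cameraPos.y - u_waterY;  // signed camera height
    float vertH = worldPos.y - u_waterY;    // signed vertex height
    // Only displace when the surface lies between camera and vertex
    if (camH * vertH >= 0.0) { return worldPos; }
    vec3 viewDir = normalize(u_cameraPos - worldPos);
    float cosL = abs(viewDir.y); // the surface normal is +Y
    float yRatio = abs(camH) / (abs(camH) + abs(vertH));
    if (camH > 0.0) {
        // Above-water view: scale the vertex's depth by the stored ratio
        float r = textureLod(u_refractionLUT, vec2(cosL, yRatio), 0.0).r;
        return vec3(worldPos.x, u_waterY + vertH * r, worldPos.z);
    } else {
        // Underwater view: flipped coordinates and the reciprocal ratio,
        // so above-water points recede upward as the ratio approaches 0
        float r = textureLod(u_refractionLUT, vec2(cosL, 1.0 - yRatio), 0.0).r;
        return vec3(worldPos.x, u_waterY + vertH / max(r, 1e-4), worldPos.z);
    }
}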

Lastly, we need to apply a Snell's window effect on the underside of our water plane. I won't go into detail here, but basically refraction prevents you from seeing above the water's surface past a critical viewing angle; beyond it, total internal reflection takes over. The following is a simple implementation resembling the ubiquitous "Fresnel" term.
// Returns 1.0 if we can see above the water, 0.0 otherwise
float snells_window(vec3 normal, vec3 lookDir, float ior) {
    float cos_theta = dot(normal, lookDir);
    // Past the critical angle, sin(theta) * ior exceeds 1.0 and total
    // internal reflection takes over
    return step(sqrt(1.0 - cos_theta * cos_theta) * ior, 1.0);
}
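In the underside's fragment shader you might use it to blend between the refracted above-water image and the internally reflected scene, along these lines (aboveWaterColor and reflectedColor are placeholder names, not from the article):

float window = snells_window(surfaceNormal, lookDir, 1.33);
vec3 color = mix(reflectedColor, aboveWaterColor, window);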
Water-to-air refraction creates a rather striking effect.
Usage
Limitations
As with anything, there are limitations to how this technique can be used. Here's a few things to keep in mind as you work with it:
- Tessellation - Models should be sufficiently tessellated for correct results. This is especially important for level geometry, where your boxy Portal chamber's giant ass floor triangles can end up intersecting other objects.
- Lighting - Lighting should be performed using the vertex's original position, depth, and normal (see the sketch after this list). You'll want to disable vertex refraction in the shadow pass, as points will end up in completely different locations relative to the light vs. the camera.
- Culling - Extreme vertex displacement tends not to play well with frustum culling. It's easy for a model's vertices to end up well outside their bounding box, which can lead to objects that should be visible suddenly popping out of existence as you move the camera. You can avoid this by extending the bounding box vertically; underwater objects should have the top of their bounds no lower than the water's surface.
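As an illustration of the lighting point, a vertex shader can pass the undisplaced position along for shading while rasterizing the displaced one. This is a sketch with my own names; applyPlanarRefraction is the hypothetical helper from earlier.

uniform mat4 u_model;
uniform mat4 u_viewProj;
in vec3 a_position;
out vec3 v_lightingPos; // undisplaced world position, used for lighting and depth

void main() {
    vec3 worldPos = (u_model * vec4(a_position, 1.0)).xyz;
    v_lightingPos = worldPos;                         // light with the true position
    vec3 displaced = applyPlanarRefraction(worldPos); // rasterize the refracted image
    gl_Position = u_viewProj * vec4(displaced, 1.0);
}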
Is planar vertex refraction right for you?
Let me be real with you: I don't think this technique is particularly hard to come up with. I'm sure more than one tech artist has made the same connection between apparent depth and vertex displacement, and I was surprised to find very little prior information about the idea online. Why is it so obscure?
There are many possible reasons, but I think mainly it's just not very plug-and-play and not always practical depending on your goals. Slap a nifty water shader on a plane and you have screen-space refraction in no time, whereas vertex refraction has to be applied to all underwater objects and particles individually. For most applications a simple screen-space effect is "good enough," and realistic refraction isn't worth the extra effort.
That being said, if realistic refraction is important to you then vertex refraction is flexible, easy to implement, and gives amazing results for dastardly cheap. It adds an excellent touch of realism for applications involving small bodies of water like ponds or pools where refraction is most visible. I imagine it being most useful for mobile or games with small scope, but hey, it scales up just as well for larger levels if you're making the next Half-Life 2 or something.
In any case, I think this technique deserves to be known. I hope I've presented it in an accessible way and that you'll try it out for yourself!
Footnotes
- [a] I kind of handwave "screen-space distortion" here, but the method outlined by Liu et al. includes a screen-space phase specifically designed as a refinement of the vertex phase. If physical accuracy is paramount to your application I'd definitely recommend checking out their paper.
- [b] Other options include root-finding algorithms and function approximation. IOU a comprehensive addendum.
References
- Liu, H., Han, H., & Fei, G. (2020). Two-phase real-time rendering method for realistic water refraction. Virtual Reality & Intelligent Hardware, 2(2), 132–141. https://doi.org/10.1016/j.vrih.2019.12.005
Attributions
- Piscina aquecida com bar molhado by João Marcello Morais
- Stanford Bunny PBR by hackmans
- Crab Model Free low-poly 3D model by georgebutch888