![]() |
Sample scene with different PBR materials. |
This time we will talk about something really interesting and useful: Physically Based Rendering. I am not going to explain everything about this topic, as it is huge and there is already a ton of information about it on the web. However, I'd like to share my experience with adding PBR to my engine and the struggles I ran into along the way.
But what is PBR? If you only know that it makes things look fancy and realistic, here is a brief explanation. By now you should realize that one of the main goals of computer graphics is to simulate "proper" light behavior: we want the objects in our scene to interact with light sources as they do in the real world. We want to shade them, have reflections and do many other complicated things. And it so happened that J. Kajiya introduced the general rendering equation in 1986:
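For reference, here is the rendering equation in its common modern form (notation varies slightly between sources):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i
```

Here $L_o$ is the outgoing radiance from point $x$ in direction $\omega_o$, $L_e$ is emitted radiance, $f_r$ is the BRDF, $L_i$ is incoming radiance from direction $\omega_i$, and the integral runs over the hemisphere $\Omega$ around the surface normal $n$.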
And since then people have been trying to mimic and solve it for all kinds of purposes. Of course, games became 3D much later, but they also needed lights. People found partial solutions and approximations of that equation for real-time applications (e.g. Lambertian diffuse), but they were not 100% accurate. Just look at that nasty integral over the hemisphere above! How the heck should we compute it in real time? This task seems to be impossible...
But approximations... they are everywhere in this world... Developers and researchers did not stop at Lambert and other simplified shading models, and so in the early 2000s the PBR shading model was introduced. In a way, it is another solution to that equation which takes some physical properties into consideration: conservation of energy, microsurface scattering, etc. So it is more "physical" and thus more accurate and computationally expensive. Fortunately, it was popularized for real-time applications by companies like Epic Games. If you are not familiar with the concepts of radiance, irradiance, flux and BRDF, you can read about them here. By the way, I have used the Cook-Torrance BRDF model, as it is very popular in rendering engines. (Read about it here)
![]() |
That's how our BRDF radiance equation looks now. D, F, G terms are explained in the link above. |
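For reference, with the Cook-Torrance BRDF substituted into the reflectance integral, the equation is commonly written as (this is the standard form used by the theory links at the end of the post):

```latex
L_o(p, \omega_o) = \int_{\Omega} \left( k_d \frac{c}{\pi} + \frac{D \, F \, G}{4 \, (\omega_o \cdot n)(\omega_i \cdot n)} \right) L_i(p, \omega_i) \, (\omega_i \cdot n) \, d\omega_i
```

where $c$ is the albedo, $k_d$ the diffuse fraction, and $D$, $F$, $G$ the distribution, Fresnel and geometry terms.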
So, as you can see, we have diffuse and specular components. We can separate this integral into two integrals and solve them separately. For direct lighting (light coming straight from sources) it is not that big of a deal: we compute a Lambert diffuse term for the left integral and a specular term with tricky but straightforward formulas: D for the GGX distribution, F for the Schlick-Fresnel approximation and G for Smith's geometry. (So if things are already unclear, make sure to read the theory!) In general, this is just some math that can be calculated in our shader. However, the results are not satisfying :(
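If you want to sanity-check those formulas outside the shader, here is a minimal CPU-side sketch of the three terms. It mirrors the choices in my effect file below (GGX distribution, Schlick Fresnel, Smith geometry with Karis's k = a/2 remapping); the function names are mine, not part of any API.

```cpp
#include <cassert>
#include <cmath>

static const float Pi = 3.141592654f;

// Normal distribution term D (Trowbridge-Reitz / GGX), same alpha convention as the shader.
float D_GGX(float nDotH, float roughness) {
    float alpha2 = roughness * roughness;
    float denom = nDotH * nDotH * (alpha2 - 1.0f) + 1.0f;
    return alpha2 / (Pi * denom * denom);
}

// Fresnel term F (Schlick's approximation), scalar f0 for simplicity.
float F_Schlick(float f0, float lDotH) {
    return f0 + (1.0f - f0) * std::pow(1.0f - lDotH, 5.0f);
}

// One-direction geometry factor with Karis's k = a/2 remapping.
float G1(float nDotX, float a) {
    float k = a / 2.0f;
    return nDotX / (nDotX * (1.0f - k) + k);
}

// Geometry term G (Smith): product of the view and light factors.
float G_Smith(float roughness, float nDotV, float nDotL) {
    float a = roughness * roughness;
    return G1(nDotL, a) * G1(nDotV, a);
}
```

For example, at normal incidence on a fully rough surface, D collapses to 1/Pi and G to 1, which is a quick way to verify a shader port.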
Why? Well, we are only calculating radiance for direct lighting. But look at our integral again: it is over all directions (ω) in the hemisphere. Pretty unrealistic to compute, I agree... But it doesn't mean that we shouldn't try! Let's dive into a more complicated part of the PBR implementation: IBL, or Image-Based Lighting.
You have probably already realized that we will work with environment maps / cubemaps. That is a nice and relatively cheap way to calculate environment diffuse light, via a so-called irradiance map. Basically, it is a pre-convolved cubemap: for every texel direction we average the incoming light sampled over the hemisphere around that direction. Fortunately, you can generate an irradiance map with a third-party tool, such as CubeMapGen by AMD, or you can precompute it yourself in code.
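To build intuition for what that convolution computes, here is a minimal C++ sketch for a single output direction. The types and names are made up for illustration; a real implementation runs this (or a GPU equivalent) for every texel of the destination cubemap, with directions sampled from the source environment map.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One incoming-light sample: a direction and the radiance arriving from it.
struct RadianceSample { Vec3 dir; Vec3 radiance; };

// Cosine-weighted average of incoming radiance over the hemisphere around 'normal'.
// Samples below the horizon get zero weight via max(n.l, 0).
Vec3 ConvolveIrradiance(const Vec3& normal, const std::vector<RadianceSample>& samples) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    float weightSum = 0.0f;
    for (const auto& s : samples) {
        float nDotL = std::max(dot(normal, s.dir), 0.0f);
        sum.x += s.radiance.x * nDotL;
        sum.y += s.radiance.y * nDotL;
        sum.z += s.radiance.z * nDotL;
        weightSum += nDotL;
    }
    if (weightSum > 0.0f) {
        sum.x /= weightSum; sum.y /= weightSum; sum.z /= weightSum;
    }
    return sum;
}
```

In the shader, reading the irradiance map with the surface normal then replaces this whole loop with a single texture fetch.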
Next, we should deal with the specular part. It consists of two textures: a radiance map and an integration map. The second one is just a lookup texture for the BRDF term, so you can download a precomputed one from the Internet. The first one, however, must contain several pre-filtered mip levels. Your shader will calculate indirect specular with something like the split-sum approximation; that is why for different roughness values of your material you sample different mips of the cubemap. The screenshot below should give you an intuition.
![]() |
Image taken from https://polycount.com/discussion/139342/introducing-lys-open-beta |
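The pre-filtering above comes from Karis's split-sum approximation, which factors the specular part of the integral into two pre-computable pieces:

```latex
\int_{\Omega} L_i(\omega_i) \, f_r(\omega_i, \omega_o) \, (n \cdot \omega_i) \, d\omega_i
\;\approx\;
\underbrace{\frac{\sum_{k} L_i(\omega_k)\,(n \cdot \omega_k)}{\sum_{k} (n \cdot \omega_k)}}_{\text{pre-filtered radiance map}}
\;\cdot\;
\underbrace{\int_{\Omega} f_r(\omega_i, \omega_o)\,(n \cdot \omega_i) \, d\omega_i}_{\text{BRDF integration map}}
```

The first factor is stored in the mip chain of the radiance cubemap (one mip per roughness level); the second is the 2D integration map, indexed by roughness and n·v.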
The mip-map generation process required some additional code in my engine, so that I could load my radiance map into the PBR shader with all mips at once. For example, here is my IBLRadianceMap class:
#include "stdafx.h"
#pragma comment(lib, "D3DCompiler.lib")
#include <WinSDKVer.h>
#define _WIN32_WINNT 0x0600
#include <SDKDDKVer.h>
#define NOMINMAX
#define WIN32_LEAN_AND_MEAN
#include "Common.h"
#include "Game.h"
#include "GameException.h"
#include "IBLRadianceMap.h"
#include "IBLCubemap.h"
#include "QuadRenderer.h"
#include "ShaderCompiler.h"
#include <DDSTextureLoader.h>

namespace Library
{
	IBLRadianceMap::IBLRadianceMap(Game& game, const std::wstring& cubemapFilename) :
		mCubeMapFileName(cubemapFilename),
		mEnvMapShaderResourceView(nullptr),
		mQuadRenderer(nullptr), mIBLCubemap(nullptr), myGame(&game)
	{
	}

	IBLRadianceMap::~IBLRadianceMap()
	{
		ReleaseObject(mEnvMapShaderResourceView);
		// The Game instance is not owned by this class, so only drop the pointer.
		myGame = nullptr;
		mQuadRenderer.reset(nullptr);
		mIBLCubemap.reset(nullptr);
	}

	void IBLRadianceMap::Initialize()
	{
		assert(myGame != nullptr);
		ID3D11Device1* device = myGame->Direct3DDevice();

		HRESULT hr = DirectX::CreateDDSTextureFromFile(device, mCubeMapFileName.c_str(), nullptr, &mEnvMapShaderResourceView);
		if (FAILED(hr))
		{
			throw GameException("Failed to load an Environment Map!", hr);
		}

		mQuadRenderer.reset(new QuadRenderer(device));
		mIBLCubemap.reset(new IBLCubemap(device, 7, 256)); // 7 mips, 256x256 top level
		mConstantBuffer.Initialize(device);
	}

	void IBLRadianceMap::Create(Game& game)
	{
		ID3D11Device* device = game.Direct3DDevice();
		ID3D11DeviceContext* deviceContext = game.Direct3DDeviceContext();

		// Vertex shader
		ID3DBlob* vertexShaderBlob = nullptr;
		HRESULT hrloadVS = ShaderCompiler::CompileShader("RadianceMapVS.hlsl", "main", "vs_5_0", &vertexShaderBlob);
		if (FAILED(hrloadVS)) throw GameException("Failed to load a shader: RadianceMapVS.hlsl!", hrloadVS);

		ID3D11VertexShader* vertexShader = nullptr;
		HRESULT hrcreateVS = device->CreateVertexShader(vertexShaderBlob->GetBufferPointer(), vertexShaderBlob->GetBufferSize(), nullptr, &vertexShader);
		if (FAILED(hrcreateVS)) throw GameException("Failed to create vertex shader from RadianceMapVS.hlsl!", hrcreateVS);

		// Pixel shader
		ID3DBlob* pixelShaderBlob = nullptr;
		HRESULT hrloadPS = ShaderCompiler::CompileShader("RadianceMapPS.hlsl", "main", "ps_5_0", &pixelShaderBlob);
		if (FAILED(hrloadPS)) throw GameException("Failed to load a shader: RadianceMapPS.hlsl!", hrloadPS);

		ID3D11PixelShader* pixelShader = nullptr;
		HRESULT hrcreatePS = device->CreatePixelShader(pixelShaderBlob->GetBufferPointer(), pixelShaderBlob->GetBufferSize(), nullptr, &pixelShader);
		if (FAILED(hrcreatePS)) throw GameException("Failed to create pixel shader from RadianceMapPS.hlsl!", hrcreatePS);

		// Input layout: position + texture coordinates for the full-screen quad.
		D3D11_INPUT_ELEMENT_DESC inputLayoutDesc[2];
		inputLayoutDesc[0].SemanticName = "POSITION";
		inputLayoutDesc[0].SemanticIndex = 0;
		inputLayoutDesc[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
		inputLayoutDesc[0].InputSlot = 0;
		inputLayoutDesc[0].AlignedByteOffset = 0;
		inputLayoutDesc[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
		inputLayoutDesc[0].InstanceDataStepRate = 0;
		inputLayoutDesc[1].SemanticName = "TEXCOORD";
		inputLayoutDesc[1].SemanticIndex = 0;
		inputLayoutDesc[1].Format = DXGI_FORMAT_R32G32_FLOAT;
		inputLayoutDesc[1].InputSlot = 0;
		inputLayoutDesc[1].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
		inputLayoutDesc[1].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
		inputLayoutDesc[1].InstanceDataStepRate = 0;

		UINT numElements = sizeof(inputLayoutDesc) / sizeof(inputLayoutDesc[0]);
		ID3D11InputLayout* layout = nullptr;
		device->CreateInputLayout(inputLayoutDesc, numElements, vertexShaderBlob->GetBufferPointer(), vertexShaderBlob->GetBufferSize(), &layout);
		assert(layout != nullptr);

		deviceContext->PSSetShaderResources(0, 1, &mEnvMapShaderResourceView);
		deviceContext->IASetInputLayout(layout);

		// Set sampler state (trilinear wrap).
		ID3D11SamplerState* trilinearWrap = nullptr;
		D3D11_SAMPLER_DESC samplerStateDesc;
		ZeroMemory(&samplerStateDesc, sizeof(samplerStateDesc));
		samplerStateDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
		samplerStateDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
		samplerStateDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
		samplerStateDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
		HRESULT hr = device->CreateSamplerState(&samplerStateDesc, &trilinearWrap);
		if (FAILED(hr))
		{
			throw GameException("ID3D11Device::CreateSamplerState() failed.", hr);
		}
		deviceContext->PSSetSamplers(0, 1, &trilinearWrap);

		// Set shaders
		deviceContext->VSSetShader(vertexShader, nullptr, 0);
		deviceContext->PSSetShader(pixelShader, nullptr, 0);

		// Draw every face of the cubemap, mip by mip (256 down to 4).
		for (int faceIndex = 0; faceIndex < 6; faceIndex++)
		{
			int size = 256;
			for (int mipIndex = 0; mipIndex < 7; mipIndex++)
			{
				CD3D11_VIEWPORT viewPort(0.0f, 0.0f, static_cast<float>(size), static_cast<float>(size));
				deviceContext->RSSetViewports(1, &viewPort);
				DrawFace(device, deviceContext, faceIndex, mipIndex, mIBLCubemap->mSurfaces[faceIndex]->mRenderTargets[mipIndex]);
				size /= 2;
			}
		}

		// Release the temporary D3D objects created above.
		ReleaseObject(vertexShaderBlob);
		ReleaseObject(pixelShaderBlob);
		ReleaseObject(vertexShader);
		ReleaseObject(pixelShader);
		ReleaseObject(layout);
		ReleaseObject(trilinearWrap);

		// Reset the back buffer.
		game.SetBackBuffer();
	}

	ID3D11ShaderResourceView** IBLRadianceMap::GetShaderResourceViewAddress()
	{
		return mIBLCubemap->GetShaderResourceViewAddress();
	}

	void IBLRadianceMap::DrawFace(ID3D11Device* device, ID3D11DeviceContext* deviceContext,
		int faceIndex, int mipIndex, ID3D11RenderTargetView* renderTarget)
	{
		mConstantBuffer.Data.Face = faceIndex;
		mConstantBuffer.Data.MipIndex = mipIndex;
		mConstantBuffer.ApplyChanges(deviceContext);
		mConstantBuffer.SetVSConstantBuffers(deviceContext);
		mConstantBuffer.SetPSConstantBuffers(deviceContext);

		float clearColor[4] = { 1.0f, 1.0f, 1.0f, 1.0f };

		// Set the render target and clear it.
		deviceContext->OMSetRenderTargets(1, &renderTarget, nullptr);
		deviceContext->ClearRenderTargetView(renderTarget, clearColor);

		// Render the cubemap face.
		mQuadRenderer->Draw(deviceContext);
	}
}
And that is it! I will leave my final PBR shader/effect for you to explore. I understand that there are many possible improvements, but for now I am really satisfied with the results. I can load fancy 4K textures and get the realism out of them :)
//*****************************************************************************//
//*****************************************************************************//
//                     Simple PBR + IBL implementation                         //
//*****************************************************************************//
//*****************************************************************************//

#define FLIP_TEXTURE_Y 0

static const float Pi = 3.141592654f;

cbuffer CBufferPerFrame
{
	float3 LightPosition;
	float LightRadius;
	float3 CameraPosition;
	float Roughness;
	float Metalness;
}

cbuffer CBufferPerObject
{
	float4x4 WorldViewProjection : WORLDVIEWPROJECTION;
	float4x4 World : WORLD;
}

// Albedo, normal, roughness, metalness, ao maps
Texture2D albedoTexture;
Texture2D roughnessTexture;
Texture2D metallicTexture;
Texture2D normalTexture;
Texture2D aoTexture;

// IBL inputs
TextureCube irradianceTexture;
TextureCube radianceTexture;
Texture2D integrationTexture;

SamplerState ColorSampler
{
	Filter = MIN_MAG_MIP_LINEAR;
	AddressU = WRAP;
	AddressV = WRAP;
};

SamplerState SamplerAnisotropic
{
	Filter = ANISOTROPIC;
	MaxAnisotropy = 16;
	AddressU = Wrap;
	AddressV = Wrap;
};

struct VS_INPUT
{
	float4 ObjectPosition : POSITION;
	float2 TextureCoordinate : TEXCOORD;
	float3 Normal : NORMAL;
	float3 Tangent : TANGENT;
};

struct VS_OUTPUT
{
	float4 Position : SV_Position;
	float3 Normal : NORMAL;
	float3 Tangent : TANGENT;
	float3 Binormal : BINORMAL;
	float2 TextureCoordinate : TEXCOORD0;
	float3 WorldPosition : TEXCOORD1;
	float Attenuation : TEXCOORD2;
};

RasterizerState BackFaceCulling
{
	CullMode = BACK;
};

float2 get_corrected_texture_coordinate(float2 textureCoordinate)
{
#if FLIP_TEXTURE_Y
	return float2(textureCoordinate.x, 1.0 - textureCoordinate.y);
#else
	return textureCoordinate;
#endif
}

float3 get_vector_color_contribution(float4 light, float3 color)
{
	// Color (.rgb) * Intensity (.a)
	return light.rgb * light.a * color;
}

// ===============================================================================================
// http://graphicrants.blogspot.com.au/2013/08/specular-brdf-reference.html
// ===============================================================================================
float GGX(float NdotV, float a)
{
	float k = a / 2;
	return NdotV / (NdotV * (1.0f - k) + k);
}

// ===============================================================================================
// Geometry with Smith approximation:
// http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_notes.pdf
// http://graphicrants.blogspot.fr/2013/08/specular-brdf-reference.html
// ===============================================================================================
float G_Smith(float a, float nDotV, float nDotL)
{
	return GGX(nDotL, a * a) * GGX(nDotV, a * a);
}

// ================================================================================================
// Fresnel with Schlick's approximation:
// http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_notes.pdf
// http://graphicrants.blogspot.fr/2013/08/specular-brdf-reference.html
// ================================================================================================
float3 Schlick_Fresnel(float3 f0, float3 h, float3 l)
{
	return f0 + (1.0f - f0) * pow((1.0f - dot(l, h)), 5.0f);
}

// ================================================================================================
// Lambertian BRDF
// http://en.wikipedia.org/wiki/Lambertian_reflectance
// ================================================================================================
float3 DirectDiffuseBRDF(float3 diffuseAlbedo, float nDotL)
{
	return (diffuseAlbedo * nDotL) / Pi;
}

// ================================================================================================
// Cook-Torrance BRDF
// ================================================================================================
float3 DirectSpecularBRDF(float3 specularAlbedo, float3 positionWS, float3 normalWS, float3 lightDir, float roughness)
{
	float3 viewDir = normalize(CameraPosition - positionWS);
	float3 halfVec = normalize(viewDir + lightDir);

	float nDotH = saturate(dot(normalWS, halfVec));
	float nDotL = saturate(dot(normalWS, lightDir));
	float nDotV = max(dot(normalWS, viewDir), 0.0001f);

	float alpha2 = roughness * roughness;

	// Normal distribution term with Trowbridge-Reitz/GGX.
	float D = alpha2 / (Pi * pow(nDotH * nDotH * (alpha2 - 1) + 1, 2.0f));

	// Fresnel term with Schlick's approximation.
	float3 F = Schlick_Fresnel(specularAlbedo, halfVec, lightDir);

	// Geometry term with Smith's approximation.
	float G = G_Smith(roughness, nDotV, nDotL);

	return D * F * G;
}

// ================================================================================================
/************* Vertex Shader *************/
VS_OUTPUT vertex_shader(VS_INPUT IN)
{
	VS_OUTPUT OUT = (VS_OUTPUT) 0;

	OUT.Position = mul(IN.ObjectPosition, WorldViewProjection);
	OUT.WorldPosition = mul(IN.ObjectPosition, World).xyz;
	OUT.TextureCoordinate = get_corrected_texture_coordinate(IN.TextureCoordinate);
	OUT.Normal = normalize(mul(float4(IN.Normal, 0), World).xyz);
	OUT.Tangent = normalize(mul(float4(IN.Tangent, 0), World).xyz);
	OUT.Binormal = cross(OUT.Normal, OUT.Tangent);

	float3 lightDirection = LightPosition - OUT.WorldPosition;
	OUT.Attenuation = saturate(1.0f - (length(lightDirection) / LightRadius)); // Attenuation

	return OUT;
}

// ================================================================================================
// Split sum approximation
// http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf
// ================================================================================================
float3 ApproximateSpecularIBL(float3 specularAlbedo, float3 reflectDir, float nDotV, float roughness)
{
	// Mip level is in [0, 6] range and roughness is [0, 1].
	float mipIndex = roughness * 6;

	float3 prefilteredColor = radianceTexture.SampleLevel(SamplerAnisotropic, reflectDir, mipIndex).rgb;
	float3 environmentBRDF = integrationTexture.Sample(SamplerAnisotropic, float2(roughness, nDotV)).rgb;

	return prefilteredColor * (specularAlbedo * environmentBRDF.x + environmentBRDF.y);
}

float3 DirectLighting(float3 normalWS, float3 lightColor, float3 lightPos, float3 diffuseAlbedo,
	float3 specularAlbedo, float3 positionWS, float roughness, float attenuation)
{
	float3 lighting = 0.0f;
	float3 pixelToLight = lightPos - positionWS;
	float lightDist = length(pixelToLight);
	float3 lightDir = pixelToLight / lightDist;

	float nDotL = saturate(dot(normalWS, lightDir));
	if (nDotL > 0.0f)
	{
		lighting = DirectDiffuseBRDF(diffuseAlbedo, nDotL) * attenuation
			+ DirectSpecularBRDF(specularAlbedo, positionWS, normalWS, lightDir, roughness) * attenuation;
	}

	return max(lighting, 0.0f) * lightColor;
}

float3 IndirectLighting(float roughness, float3 diffuseAlbedo, float3 specularAlbedo, float3 normalWS, float3 positionWS)
{
	float3 viewDir = normalize(CameraPosition - positionWS);
	float3 reflectDir = normalize(reflect(-viewDir, normalWS));
	float nDotV = max(dot(normalWS, viewDir), 0.0001f);

	// Sample the indirect diffuse lighting from the irradiance environment map.
	float3 indirectDiffuseLighting = irradianceTexture.SampleLevel(SamplerAnisotropic, normalWS, 0).rgb * diffuseAlbedo;

	// Split sum approximation of specular lighting.
	float3 indirectSpecularLighting = ApproximateSpecularIBL(specularAlbedo, reflectDir, nDotV, roughness);

	return indirectDiffuseLighting + indirectSpecularLighting;
}

float4 pixel_shader(VS_OUTPUT IN) : SV_TARGET
{
	float3 sampledNormal = (2 * normalTexture.Sample(SamplerAnisotropic, IN.TextureCoordinate).xyz) - 1.0; // Map normal from [0..1] to [-1..1]
	float3x3 tbn = float3x3(IN.Tangent, IN.Binormal, IN.Normal);
	float3 normalWS = normalize(mul(sampledNormal, tbn)); // Transform normal to world space

	float3 diffuseAlbedo = pow(albedoTexture.Sample(SamplerAnisotropic, IN.TextureCoordinate).rgb, 2.2); // sRGB -> linear

	// Cheap trick if you need a custom metalness/roughness:
	// a negative constant means "read the other value from its texture".
	float metalness = 1.0;
	if (Roughness < 0)
	{
		metalness = Metalness;
	}
	else
	{
		metalness = metallicTexture.Sample(SamplerAnisotropic, IN.TextureCoordinate).r;
	}

	float roughness = 1.0;
	if (Metalness < 0)
	{
		roughness = Roughness;
	}
	else
	{
		roughness = roughnessTexture.Sample(SamplerAnisotropic, IN.TextureCoordinate).r;
	}

	float3 specularAlbedo = float3(metalness, metalness, metalness);

	float3 lightPos = LightPosition;
	float3 lightColor = float3(0.8f, 0.8f, 0.8f);

	float3 directLighting = DirectLighting(normalWS, lightColor, lightPos, diffuseAlbedo,
		specularAlbedo, IN.WorldPosition, roughness, IN.Attenuation);
	float3 indirectLighting = IndirectLighting(roughness, diffuseAlbedo,
		specularAlbedo, normalWS, IN.WorldPosition);

	return float4(directLighting + indirectLighting, 1);
}

/************* Techniques *************/
technique11 pbr_material
{
	pass p0
	{
		SetVertexShader(CompileShader(vs_5_0, vertex_shader()));
		SetGeometryShader(NULL);
		SetPixelShader(CompileShader(ps_5_0, pixel_shader()));
		SetRasterizerState(BackFaceCulling);
	}
}
Links:
1) https://learnopengl.com/PBR/Theory
2) https://blog.selfshadow.com/publications/s2013-shading-course/
To read: "Physically Based Rendering: From Theory to Implementation" by M. Pharr and G. Humphreys