By Developers, For Developers

Historical errata for The Ray Tracer Challenge

PDF Pg | Paper Pg | Type | Description | Fixed on | Comments
? | TYPO

Chapter 1, dot product pseudocode:
w should not be included in the calculation

2018-09-05 | Hello Mark! The dot product is actually defined for any number of dimensions (see https://en.wikipedia.org/wiki/Dot_product#Algebraic_definition). While it's true that we're only ever going to use them with vectors, where w=0, I figured it was easier to just implement the 4d solution, than to try to make an exception here and just omit the w from the calculation. This also makes it (possibly?) easier to catch bugs in your code, if you happen to be taking the dot product of two points, instead of vectors.
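For readers who want to see what the 4-component version looks like, here is a minimal sketch in Python (the function name and tuple layout are illustrative, not the book’s pseudocode):

def dot(a, b):
    # a and b are 4-tuples (x, y, z, w). For vectors w is 0, so the last
    # term contributes nothing; for points (w = 1) it does contribute,
    # which can help flag an accidental point-point dot product.
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]

print(dot((1, 2, 3, 0), (2, 3, 4, 0)))   # 20, same as the plain 3D dot product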
108 | TYPO

“6. The smallest sphere is scaled by a third, before being translated.”

The code below is repeated from the previous point (for the ‘right’ sphere) instead of introducing a new object.

2018-09-02
20 | SUGGEST

I am not sure if I am right, but I tried to implement the tuple so that it can only have 0 or 1 as its fourth value. So if we subtract a point from a vector, we get a negative number. In “addition” it is explained why it doesn’t make sense to add points. Maybe it would be nice to mention here what to do when a point is subtracted from a vector. Maybe it’s also unnecessary, or I misunderstood; then I apologize.
Best, Theresa

2018-09-03
2/3 | SUGGEST

In the introduction chapter for points and vectors you use a Cartesian coordinate system. What I stumbled over was that the system you used is a left-hand system, without any notice to the reader. As an engineer I am used to right-hand Cartesian coordinate systems, and it took some time until I figured out that this is actually a left-hand system.

While I think this is not an issue (you can define coordinate systems as you want), you should define it explicitly when not using a standard (right-hand?) system. See for example Wikipedia on the right-hand rule; during my education I was nearly always confronted with right-hand systems (except for some weird examples in exams ;-)

2018-09-03
53 | ERROR

Around page 53. Callout titled “Fluent APIs”

“Depending on your implementation language, you may be able to present a more intuitive interface for concatenating transformation matrices. A fluent API, for instance, could let you declare your transformations like this:
transform ← identity().rotate_x(π / 2).scale(5, 5, 5).translate(10, 5, 7)
Each call in this chain multiplies its matrix by the result of the previous call, effectively turning the chain “inside-out”.”

The last sentence is somewhat misleading.

It is correct that in the normal “Builder Pattern”, the chain is traversed in order, i.e. translate is applied to the result of scale, scale is applied to the result of rotate, and rotate is applied to the result of identity. However, for matrix transforms, the functions must be applied in reverse order.

At the very least, you should change the last two sentences to:

transform ← rotate_x(π / 2).scale(5, 5, 5).translate(10, 5, 7).identity()
Each call in this chain multiplies its matrix by the result of the following call, effectively turning the chain “inside-out”. Identity() must be passed to Translate, which must be passed to Scale, etc.

2018-09-03 | Thanks for the comment, Ron. I think the issue is that I'm not explaining myself well, here. The point of the fluent API is that you can describe the transformations in a "natural" order, and the API figures out the proper order in which the matrices should be multiplied. So, yes, absolutely, the underlying matrices must be multiplied in the reverse order, but the fluent API lets you describe them in the order you want them applied, and the rest happens behind the scenes. I'll give this text another pass in the next version, to see if I can explain it better.
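As an illustration of the reply above, here is one possible way a fluent API could keep the “natural” reading order while still applying the transformations first-to-last; this is a Python sketch with hypothetical class and method names, not the book’s pseudocode, and it composes point-mapping functions instead of multiplying matrices:

import math

class Transform:
    # Each step is stored as a point -> point function. Chaining appends
    # steps in reading order, and apply() runs them first-to-last, which
    # is exactly what multiplying the matrices in reverse order achieves.
    def __init__(self, steps=None):
        self.steps = steps or []
    def _add(self, fn):
        return Transform(self.steps + [fn])
    def rotate_x(self, r):
        return self._add(lambda p: (p[0],
                                    p[1] * math.cos(r) - p[2] * math.sin(r),
                                    p[1] * math.sin(r) + p[2] * math.cos(r)))
    def scale(self, sx, sy, sz):
        return self._add(lambda p: (p[0] * sx, p[1] * sy, p[2] * sz))
    def translate(self, tx, ty, tz):
        return self._add(lambda p: (p[0] + tx, p[1] + ty, p[2] + tz))
    def apply(self, p):
        for step in self.steps:
            p = step(p)
        return p

t = Transform().rotate_x(math.pi / 2).scale(5, 5, 5).translate(10, 5, 7)
print(t.apply((1, 0, 1)))   # approximately (15, 0, 7): rotated, then scaled, then translated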
95 | SUGGEST

“Note that in order for the test to pass, intersect_world() must return the intersections in sorted order!”

Intersection list is used to find a hit, which is an intersection with minimal positive t. I think you don’t need to sort the list to do this. In fact it increases complexity giving nothing in return.

2018-09-13 | Piotr, you're right, in this specific instance, it is not necessary to return the list in sorted order. However, in later chapters (in particular the last chapter, on CSG), being able to iterate over all intersections in increasing order will simplify some things. I'll add a bit of explanation to clarify this point. Thanks for bringing it up!
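For what it’s worth, a hit() that does not depend on the list being sorted is simply a minimum over the non-negative t values; a small Python sketch (the Intersection type here is illustrative):

from collections import namedtuple

Intersection = namedtuple("Intersection", ["t", "obj"])

def hit(intersections):
    # The hit is the intersection with the lowest non-negative t,
    # regardless of how the list happens to be ordered.
    candidates = [i for i in intersections if i.t >= 0]
    return min(candidates, key=lambda i: i.t) if candidates else None

xs = [Intersection(7, "s"), Intersection(-3, "s"), Intersection(2, "s")]
print(hit(xs).t)   # 2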
113 | TYPO

“Scenario: There is no shadow when nothing is colinear with point and light”

colinear -> collinear

2018-09-13
125 | TYPO

“Scenario: Computing the normal on a scaled shape”

There are two scenarios named the same. The first should be “Scenario: Computing the normal on a translated shape”.

2018-09-12
125 | ERROR

The `features/shapes.feature` example has two scenarios, and they’re both the same. I’m assuming they should be “Computing the normal on a scaled shape” and “Computing the normal on a translated shape”. Currently it’s a copy/paste of the “scaled” shapes scenario.

2018-09-12
134 | TYPO

In pseudocode 134/135:

"""
function lighting(material, light, point, eyev, normalv, in_shadow)
  if material has a pattern
    color ← stripe_color_at(material.pattern, point)
  else
    color ← material.color
  end if
"""

stripe_color_at -> stripe_at

2018-09-13
100 | 88 | ERROR

Picture of eye vector in final test of lighting function is located at the same Y level as normal and light but in the test, eye vector is vector(0, 1, –1) which should be (0, 0, –1).

2018-09-13
30 | TYPO

Tuple for matrix multiplication is listed alternatively as (1,2,3,1) and (1,–2,3,1).
Probably a typo, at the very least confusing for the reader.

2018-09-28
23 | SUGGEST

Hi! Thanks for a great book!

I am assuming the book is written in LaTeX.
Mathematical operators and functions should be typeset in upright font.
For instance, the magnitude of v, on PDF page 23 is typeset in italics. One should instead
define a new command or a new math-operator:

1. \newcommand{\magnitude}[1]{\mathrm{magnitude}(#1)}
2. \DeclareMathOperator{\magnitude}{magnitude}

Then in the LaTeX-document:

1. \magnitude{v} = \sqrt{v_x^2 + v_y^2 + v_z^2} // as a command
2. \magnitude(v) = \sqrt{v_x^2 + v_y^2 + v_z^2} // as an operator

Other occurrences: when typesetting rotation matrices, sine and cosine should be typeset using the built-in
math operators \sin and \cos.

I am having a blast with this book so far! Keep it up! :)

2018-09-28
107 | 97 | TYPO

intersections.feature line 4 should be
hit <- intersection(ray, sphere)
or hit <- hit(intersection(ray, sphere))

2018-09-29 | I'm sorry, I'm not able to see which scenario you're referring to. I've looked at all of them that seem relevant and they all seem correct. Can you please tell me not only the page number, but the name of the scenario in question? Thank you!
- | TYPO

In the Scenario: Lighting with the light behind the surface

The last line has an assignment (Then result ← color[0.1, 0.1, 0.1]) when it should have an equality check: result = color[0.1, 0.1, 0.1]

I caught this because of a passing test before I’d written any code. Always a horror.

2018-09-28
117 | ERROR

In the pseudo code

intersections <- intersect(world, r)

should be

intersections <- intersect_world(world, r)

as the type “world” is not appropriate for the function “intersect”, and ALL objects within the world should be considered for the ray’s path.

2018-09-28
152 | ERROR

For “Test #4: Strike a Reflective Surface”:

In the pseudocode, the shape is not attached to the scene.

2018-09-28
154 | ERROR

For “Test #6: Avoid Infinite Recursion”:

Upper plane, specifically its normal vector, needs to be turned upside down by applying scaling(1f, –1f, 1f) first in order to get proper results.

2018-09-29 | Piotr -- please feel free to email me (jamis@jamisbuck.org) about this issue. It shouldn't actually be necessary to flip the plane upside down, because the prepare_hit function should be inverting the normal if it points away from the ray's origin. If your code is doing this and you are still finding it necessary to flip the plane over, please let me know and I'll be happy to help troubleshoot it.
30 | TYPO

This will likely have been reported already, but just in case: the tuple reference in the middle of the page has values 1, −2, 3, 1. References to the same tuple in other parts of the same page have values 1, 2, 3, 1. I would tend to think that the stray ‘−’ needs to be removed for the sake of clarity. Apologies if this has already been raised.

2018-09-28
103 | SUGGEST

The transform_view method can yield incorrect results yet still pass all the unit tests prescribed before.
For example I experienced a problem where every single object in the world was rendered with a 500% stretch on the Y axis.
Every test in the book passed with flying colours, yet the generated output on page 109 was completely wrong.
Several hours debugging and testing later I noticed I’d forgotten to normalise the forward vector in the transformView method of the camera.
Fixing that fixed the output as well.
Unit tests still passed.

Suggestion: add some more tests to catch this.

In the rendering test on page 108, the orientation matrix becomes different:
Correct:
orientation: Matrix{dimension=4,grid=
[–1.0, 0.0, 0.0, 0.0]
[0.0, 1.0, –0.0, 0.0]
[–0.0, –0.0, –1.0, 0.0]
[0.0, 0.0, 0.0, 1.0]
}

forward vector not normalised
orientation: Matrix{dimension=4,grid=
[–5.0, 0.0, 0.0, 0.0]
[0.0, 25.0, –0.0, 0.0]
[–0.0, –0.0, –5.0, 0.0]
[0.0, 0.0, 0.0, 1.0]
}

The transformation matrix then becomes:
Correct:
transformation: MutationMatrix{matrix=Matrix{dimension=4,grid=
[–1.0, 0.0, 0.0, 0.0]
[0.0, 1.0, 0.0, 0.0]
[0.0, 0.0, –1.0, –5.0]
[0.0, 0.0, 0.0, 1.0]
}}

without forward.normalise():

transformation: MutationMatrix{matrix=Matrix{dimension=4,grid=
[–5.0, 0.0, 0.0, 0.0]
[0.0, 25.0, 0.0, 0.0]
[0.0, 0.0, –5.0, –25.0]
[0.0, 0.0, 0.0, 1.0]
}}

The existing tests on the validity of the results of the transform_view method nowhere caught this. Either by sheer bad luck or through missing certain corner cases, all of them pass if the forward vector isn’t normalised.

2018-09-29 | Jeroen, this is very curious. If I comment out the normalization of the forward vector, I find that at least two of the transform scenarios fail for me. Specifically, "Scenario: The view transformation moves the world" and "Scenario: An arbitrary view transformation". Were those two tests passing for you? If so, could you email me (jamis@jamisbuck.org) and show me how they were implemented?
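For reference, a compact sketch of the view transformation with the forward vector normalized up front, following the structure visible in the matrices above; this is Python with plain tuples and hypothetical helper names, not the book’s code:

import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def norm(v):
    m = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0] / m, v[1] / m, v[2] / m)
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def view_transform(frm, to, up):
    forward = norm(sub(to, frm))        # normalizing here is the step that was missed
    left = cross(forward, norm(up))
    true_up = cross(left, forward)
    orientation = [[left[0], left[1], left[2], 0],
                   [true_up[0], true_up[1], true_up[2], 0],
                   [-forward[0], -forward[1], -forward[2], 0],
                   [0, 0, 0, 1]]
    translation = [[1, 0, 0, -frm[0]],
                   [0, 1, 0, -frm[1]],
                   [0, 0, 1, -frm[2]],
                   [0, 0, 0, 1]]
    return matmul(orientation, translation)

# "The view transformation moves the world" expects translation(0, 0, -8):
print(view_transform((0, 0, 8), (0, 0, 0), (0, 1, 0)))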
179 | ERROR

I can’t get the “Test #2: A Ray Misses a Cube” to work. Given the discussion in this section and the algorithms, I don’t see that the algorithm works when the ray has zero direction in two axes (3D case) or one axis (2D case). Adding a couple more examples to the “Scenario Outline: A ray misses a cube” helps:

point(0.5, 0.5, 3), vector(0, 0, –1) : by inspection, this one should intersect the cube at t = 2 and t = 4 (it hits the z faces)

point(2.5, 2.5, 3), vector(0, 0, –1) : by inspection, this one should miss the cube (the ray is parallel to the z axis but outside the z faces)

However, since the x and y directions are 0, the check_axis() function returns -INF,INF for min,max and these two cases are therefore treated the same in local_intersect(cube, ray). The result is an intersection in the latter case when it should miss.

2018-10-03 | Ron, excellent catch! You're right, and this snuck past me because I was relying on my implementation language's native support for infinity. I'll update the book shortly; in the meantime, you can make this work by changing your check_axis function to set tmin and tmax to (respectively) "(-1 - origin) * INFINITY" and "(1 - origin) * INFINITY", instead of simply "-INFINITY" and "INFINITY".
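A sketch of check_axis with the fix from the reply folded in (Python; EPSILON and the slab bounds -1..1 are as in the book’s cube chapter, the rest is illustrative):

EPSILON = 0.0001
INFINITY = float("inf")

def check_axis(origin, direction):
    # Where the ray enters and exits this axis' slab (-1..1), in units of t.
    tmin_numerator = -1 - origin
    tmax_numerator = 1 - origin
    if abs(direction) >= EPSILON:
        tmin = tmin_numerator / direction
        tmax = tmax_numerator / direction
    else:
        # Ray is parallel to this pair of faces: keep the numerator's sign,
        # so a ray that starts outside the slab still reports a miss.
        tmin = tmin_numerator * INFINITY
        tmax = tmax_numerator * INFINITY
    return (tmin, tmax) if tmin <= tmax else (tmax, tmin)

# point(2.5, ...) with zero x direction: both values are -infinity, so the
# overall tmax ends up below tmin and the cube is correctly missed.
print(check_axis(2.5, 0.0))   # (-inf, -inf)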
12 | 29 | SUGGEST

On page 12, in the section “Putting it together,” we are told to make a world with gravity and wind.

Then later, on page 96, we’re told to make a world class as well, but it is completely different from the original world class.

Perhaps the initial world class should be renamed? If not, you either have to delete it, thus causing the earlier example to fail, or move it somewhere else (in which case, you might get the linker complaining about duplicate definitions). Renaming it to something like “Environment” might be helpful.

2018-10-11
84 | SUGGEST

Add a test to verify that the world_normal calculation is correct. I got all tests passing at this point by omitting the transpose and merely multiplying the inverse transform by the object_normal. It wasn’t until I tried to create the scene on page 110 that I realized something was wrong. Suggest the following test:
Scenario: The normal on a sphere at a non-axial point on a rotated sphere
Given s ← sphere()
And set_transform(s, rotY(π/4))
When n ← normal_at(s, point(√3/3, √3/3, √3/3))
Then n = vector(√3/3, √3/3, √3/3)

2018-10-11 | Excellent catch, Paul. Thank you! It turns out that you need to have a test for the scaling transform in order to ensure the inverse transform is used, but you're right that the rotation is indeed needed to catch the transpose. I've updated that test to concatenate a scaling and a rotation so that both cases are caught.
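A sketch of the world-space normal calculation showing where both the inverse and the transpose come in; this uses numpy as a stand-in for the book’s matrix type, with points and vectors as 4-component arrays (w = 1 and w = 0) and the sphere assumed to be at the origin:

import numpy as np

def normal_at(transform, world_point):
    inv = np.linalg.inv(transform)
    object_point = inv @ world_point
    object_normal = object_point - np.array([0.0, 0.0, 0.0, 1.0])
    world_normal = inv.T @ object_normal    # transpose of the inverse
    world_normal[3] = 0.0                   # the transpose can leave a stray w
    return world_normal / np.linalg.norm(world_normal)

r = np.pi / 5
rotation_z = np.array([[np.cos(r), -np.sin(r), 0, 0],
                       [np.sin(r),  np.cos(r), 0, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 1]])
scaling = np.diag([1.0, 0.5, 1.0, 1.0])
p = np.array([0.0, np.sqrt(2) / 2, -np.sqrt(2) / 2, 1.0])
print(normal_at(scaling @ rotation_z, p))   # ~ (0, 0.97014, -0.24254, 0)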
203 | TYPO

“You’ll see your old test_shape() function from Chapter 10, Patterns, on page 133 used here, as a generic shape to demonstrate the addition of the new attribute.”

The test_shape() function appeared first in Chapter 9, Planes.

2018-10-11
201 | SUGGEST

Suggestion for Chapter 14, Groups as a whole. I think it’s worth mentioning explicitly that we don’t implement ‘local_normal_at’, even though a group is a shape.

A group is a kind of shape, and a shape requires implementations of two methods: ‘local_intersect’ and ‘local_normal_at’. In the implementation of groups only the former is provided. This is completely fine, because ‘local_intersect’ for groups returns a list of intersections with references to the concrete objects in the group, and those objects are asked about normals. Nevertheless, a statically typed language would require it, though in a dynamically typed one you can leave the ‘local_normal_at’ method unimplemented. If your language requires it, you can write one that throws an exception, for example.

2018-10-11
101 | ERROR

Hello,

I think that in the given scenario (see below), the color returned by color_at includes the diffuse and specular parts, too. So c should be color(1.502, 1.502, 1.502) (which would be clipped to color(1,1,1)). Or am I wrong?

By the way: thank you very much for this fantastic book, I love this book and the clear explanations.

Regards,
Ulrich

Scenario: The color with an intersection behind the ray
Given world ← default_world()
And outer ← the first object in world
And outer.material.ambient ← 1
And inner ← the second object in world
And inner.material.ambient ← 1
And ray ← ray(point(0, 0, –0.75), vector(0, 0, 1))
When c ← color_at(world, ray)
Then c = inner.material.color

2018-10-17 | EDIT: You're right, Ulrich! I was depending on the shadow calculations, which don't occur until the next chapter. You can fix the text by putting the ray at (0, 0, 0.75) and pointing in (0, 0, -1), instead of what the text says. I'm fixing this now.

Hello Ulrich! In that particular scenario, the ray originates at (0, 0, -0.75), inside the outer sphere, so the outer sphere casts its shadow on the inner sphere, where the intersection occurs. Thus, there should be no specular or diffuse contribution to the intersection; only the ambient value should contribute, resulting in a color exactly equal to the intersected object's color.

I'd be happy to help figure out what's happening in your code. I probably need to explain something better somewhere, or add another test earlier. Please email me at jamis@jamisbuck.org, even if you figure out the issue yourself. I really want to make sure people hit as few of these sorts of road bumps as possible!
101 | ERROR

Agree with Ulrich: I traced through the calculation for the given scenario, and the color_at includes both the diffuse (0.502, 0.502, 0.502) and specular (1, 1, 1) components, thus giving the color (1.502, 1.502, 1.502) instead of the expected white (1, 1, 1).

Should we be clipping colors?

Here are my very messy calculations as I run through the lighting call step by step:
Run through lighting:
1. Effective colour = white
2. lightV:
1. Light position is (–10, 10, –10)
2. point is hit point, i.e. (0, 0, –0.5)
3. v unnormalized = (–10, 10, –9.5), length = sqrt(290.25)
4. v normalized = –0.5869, 0.5869, –0.5576 (*)
3. ambient component = white
4. light.normal = lightv dot normalv = (*) . (0, 0, –1) = 0.5576
5. diffuse component = 0.5576 > 0 so diffuse_component = diffuse (0.9) * effective colour(white) * light.normal
   = 0.9 * 0.5576 * white = 0.50185
6. reflectv = lightv reflect normalv =(*) reflect (0, 0, –1) = (0.5869, –0.5869, –0.55761)
7. reflect_dot_eye = 0
8. specular_component = 0
9. ambient component + diffuse component + specular component = (1.502, 1.502, 1.502) ?!

2018-10-17 | Sebastian, though I responded to you elsewhere, I wanted to update this here in case others run into this. Thank you for the detailed step-by-step of your calculations! The only bit missing is the shadow calculation; you should find that the outer sphere is casting its shadow on the surface of the inner sphere, so that there should actually be no specular or diffuse contributions from the light source.
11 | SUGGEST

“You certainly don’t need any deep understanding of the dot product to use it, though. (Lucky you!) For now, just make the test pass, and then move on.”

I think that even if you don’t want to elaborate on the dot product, it’s worth providing a geometrical interpretation of it, just like for the cross product on the next page, and mentioning that it involves the cosine of the angle between the two vectors.
Then in Chapter 6, Light and Shading, you could refer to that feature explicitly to explain why the dot product is used in light calculations, where the cosine between different vectors matters.
You are also using this cosine feature in the refracted color calculations in Chapter 11, so it would be even more valuable to mention the cosine law when you describe the dot product.

2018-10-17
89 | TYPO

The specular component again falls off to zero, so the total intensity should be 0.1 + √2⁄2 + 0, or approximately 0.7364.

More precisely, because the diffuse factor is 0.9, it should be
0.1 + 0.9 * √2⁄2 + 0

The same issue on the next page:

“The total intensity should therefore be 0.1 + √2⁄2 + 0.9, or approximately 1.6364.”

-> 0.1 + 0.9 *√2⁄2 + 0.9

2018-10-17
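A quick numeric check of the corrected expressions (Python):

print(0.1 + 0.9 * (2 ** 0.5) / 2)         # 0.7363..., i.e. approximately 0.7364
print(0.1 + 0.9 * (2 ** 0.5) / 2 + 0.9)   # 1.6363..., i.e. approximately 1.6364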
101 | ERROR

First of all, thanks for the great and immensely entertaining book.

I think the test scenario: “The color with an intersection behind the ray” is not correct.

As Ulrich and Sebastian have mentioned, following all the calculations, I also get the resulting color to be Color( 1.50186, 1.50186, 1.50186 )

I do not understand your suggestion that the shadow calculation is missing, as shadows are not treated until next chapter.

Kind regards,
Bas

2018-10-17 | That is an excellent point which I think I had completely overlooked, Bas. I think I may have got some tests crossed between chapters in my reference implementation. I'll take a closer look and see what I can figure out.

[UPDATE] Yes, you're right! I'm updating the book, but you can make the test pass by putting the ray at (0, 0, 0.75) pointing at (0, 0, -1), instead of what the text says. Thanks again!
65 | SUGGEST

“Also, make sure the intersections are returned in increasing order, to make it easier to determine which intersections are significant, later.
t1 ← (-b - √(discriminant)) / (2 * a)
t2 ← (-b + √(discriminant)) / (2 * a)
if t1 > t2 then swap(t1, t2)”

You don’t have to swap t1 and t2 because they are already sorted. This follows from the fact that the coefficient ‘a’, being a square, is always positive here.

The same applies on page 190 in Chapter 13, Cylinders, where ‘a’ is a sum of squares.

2018-10-18
154 | ERROR

On PDF page number 154: in the definition of the function shade_hit in the call of lighting, the second argument “object” was forgotten. This new argument “object” was introduced on PDF page number 138.

Call of lighting on PDF page number 154: <<surface ← lighting(hit.object.material, world.light, hit.point, hit.eyev, hit.normalv, shadowed)>>

From PDF page number 138: <<Add object as yet another parameter for your lighting() function. The tests and pseudocode in this book assume the new function signature is lighting(material, object, light, point, eyev, normalv, in_shadow).>>

BTW A remark to this second argument of lighting: “object” contains “material”. So either leave out the argument “material” or (better) instead of “object” just use “object.transformation” which is used by lighting.

2018-10-23 | Excellent catch! I'll get that fixed. Thanks, also, for your suggestion about simplifying the method signature for the lighting() function. I'll think more about that.
103 | SUGGEST

First of all, this is a fantastic book, well done! I have a suggestion. I was able to get all the way to PDF p103 with all tests passing until the test “The view transformation moves the world”, which kept failing. It turned out that my implementation of “Matrix.col(num)” all the way back from the Matrix chapter was returning the row at the specified index, not the col. The test that tests that “col()” is working happens to have row 1 and col 1 identical, so my bug slipped right through. So it may be useful to either alter this test so that every element of the matrix is different so there’s no chance of the implementation being wrong or add another test with different rows and cols.

Keep up the great work! I’m thoroughly enjoying working through this book.

2018-10-23
119 | SUGGEST

“In pseudocode, your prepare_hit() function will need to do something like this: hit.point ← hit.point + hit.normalv * 0.0001”

The 0.0001 is IMHO not optimal in conjunction with the EPSILON constant (= 0.00001) from page 5 (the one used to compare floating point numbers). If I use both constants with the values suggested by you, the following three tests fail in my C# implementation after completing chapter 8:

“Precomputing the state of an intersection” on page 97: hit.point.z = –1.0001, but (–1 - (–1.0001)) = 0.0001 is not less than 0.00001 (= the EPSILON constant on page 5).

“An intersection occurs on the inside” on page 98/99: hit.point.z = 0.9999, but (1 - 0.9999) = 0.0001 (same case as before).

“Shading an intersection from the inside” on page 99: c = color(0.90495, 0.90495, 0.90495) in my implementation, but (0.90498 - 0.90495) = 0.00003, which is not less than 0.00001(= the EPSILON constant on page 5).

I solved the issue by setting EPSILON = 0.0001, now all tests pass.

Alternatively it helps in my implementation to set “hit.point ← hit.point + hit.normalv * 0.000001” (with two zeros added) and leave EPSILON = 0.00001. I still don’t get the acne effect then. Please note that it is not enough to use “hit.point ← hit.point + hit.normalv * 0.00001” (with just one zero added), because “Precomputing the state of an intersection” still fails then. hit.point.z = –1.00001 in that case, but (–1 - (–1.00001)) = 0.00001, which is not less than 0.00001 (but equal to it, and on page 5 it says “if abs(a - b) < EPSILON”).

BTW: Great book :)

Best regards,
Torsten

2018-11-09
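One way to avoid the interaction described in this report is to leave the intersection point untouched and keep the offset in a separate field (the over_point that other reports in this list refer to), so the precomputation tests still compare against the exact hit point. A minimal Python sketch, with illustrative names:

EPSILON = 0.0001

def prepare_hit_points(point, normalv):
    # Keep the exact intersection point for the comparison tests, and a
    # separate, slightly offset point for the shadow (and reflection) rays.
    over_point = tuple(p + n * EPSILON for p, n in zip(point, normalv))
    return point, over_point

point, over_point = prepare_hit_points((0.0, 0.0, -1.0), (0.0, 0.0, -1.0))
print(point)        # (0.0, 0.0, -1.0)    -- still matches the precomputation test
print(over_point)   # (0.0, 0.0, -1.0001) -- used when casting the shadow ray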
131 | SUGGEST

In the “Putting it together” section of the Planes chapter, it took me a while to figure out why my plane was bright white at the very top. It appeared to be because the specular component of the default material (which was assigned to the default plane) was high enough to make it look white. After I dropped the specular component to 0, the plane faded to black like the picture in the book. It might be worth mentioning that planes using the default material, which has a non-zero specular highlight, might not look like the pictures in the book.

As a broader suggestion, it might be helpful for the reader to give the details of the scenes that you produce in the “putting it together” section (which produce some really great scenes!). You do this in some chapters, but doing it for each one would be a great way to verify that our tracers are working (beyond the tests of course).

Once again — really enjoying this book!

2018-11-09 | Thank you, Sten! I actually found a bug during the second technical review related to this. It turns out that the method used to compute the specular component was being described and implemented incorrectly in the book. The eye_dot_reflect value should NOT be raised to the power of shininess until AFTER it is compared to zero. I've updated the book with this correction, which should fix the issue you saw with the plane being illuminated strangely.
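To illustrate the fix described in the reply, the specular branch ends up looking something like this (a Python sketch; the variable names follow the book’s lighting() pseudocode, everything else is illustrative):

def specular_contribution(reflect_dot_eye, shininess, specular, light_intensity):
    # Compare against zero first; only then raise to the shininess power.
    if reflect_dot_eye <= 0:
        return (0.0, 0.0, 0.0)   # black: the reflection points away from the eye
    factor = reflect_dot_eye ** shininess
    return tuple(c * specular * factor for c in light_intensity)

print(specular_contribution(-0.7071, 200, 0.9, (1.0, 1.0, 1.0)))   # (0.0, 0.0, 0.0)
print(specular_contribution(1.0, 200, 0.9, (1.0, 1.0, 1.0)))       # (0.9, 0.9, 0.9)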
154 | ERROR

A light source is missing in test “color_at with mutually reflective surfaces” on page 154.

color_at(…) (as defined on page 101) calls shade_hit(…), which in turn calls is_shadowed() (see page 120). is_shadowed(…) (as defined on page 117) relies on a light source being present. But as the test “color_at with mutually reflective surfaces” on page 154 does not use the default_world, but a new one, no light source is present.

I thought it would be a good idea to add a default light source even to the empty world (the one created by simply calling “world()”), but that breaks the test “Creating a world” on page 96, which explicitly checks for the non-existence of a light source.

Best regards,

2018-11-12
125 | TYPO

“One possible way to implement this […] is to declare a local_intersect(local_ray, shape) function […]” → “[…] local_intersect(shape, local_ray) […]” (the parameters are used the other way round in your code).

Best regards,

2018-11-09
68 | SUGGEST

On paper pp 65, it states:
“Also, make sure the intersections are returned in increasing order, to make it easier to determine which intersections are significant, later.”

yet here, the final hit() test requires hit() to support unordered intersections.

Presumably, keeping them in order is more efficient. In this case, the test should be updated to supply them in sorted order so that the implementation can rely on the sort order.

(if not, then the code/comment references to keeping them in increasing order needs to be removed/revised)

2018-11-12 | Thank you! I'll clarify that the intersections() function should have the ultimate responsibility of keeping the list sorted. This is important in later chapters, as you create more complex scenes with multiple shapes, where it's not feasible for each shape to be responsible for (re)sorting the list as new intersections are added.
101 | ERROR

Further to the other reports about: “The color with an intersection behind the ray” being incorrect…

There’s an alternative fix instead of relying on side-effects of moving the ray…

Since you only want the Ambient term to be used in the calculation (irrespective of shadows, at this point in the book), you can also make the test work by setting diffuse = specular = 0 along with setting ambient = 1. That yields a color equal to the second object’s material color.

2018-11-12 | Thank you, that's a good point! The test has been corrected for the next release.
119 | SUGGEST

IMHO, it might be a little nicer to use the same Epsilon constant for the “adjustment factors” within the code - like the hit adjustment here - and have the comparison functions used for the tests instead check if the difference is less than Epsilon*2 (or Epsilon*10 which would permit a full digit’s round-up/down).

It might be a little easier to deal with “dialling in” the accuracy that way rather than playing with several constants throughout the code.

2018-11-14
219 | SUGGEST

Just a small suggestion that the “A ray strikes a triangle” test should verify that xs.count = 1 before verifying xs[0].t = 2.

2018-11-27
245 | TYPO

para. 2: … CSG object, the CSG object will product a list of intersections…

s/product/produce/

2018-11-27
71 | TYPO

“it’s” in “shrinking it’s direction vector” should be “its” I think. If so, there are other instances in the book as well.

107 | ERROR

It’s entirely likely I’m missing something. But after spending a few hours of head scratching, and everything else seemingly working correctly, it seems the `Constructing a ray when the camera is transformed` test’s direction vector is incorrect. If I follow the algorithm given, my result vector is: [tan(π/6), tan(-π/6), tan(-π/6)]

Lee, I'm pretty certain that the test is correct, but I'd be happy to work with you to figure out what's wrong here. If you post this on the forum I set up at http://forum.raytracerchallenge.com, I'll see what I can find out. Thanks, and sorry for the frustration!
61 | ERROR

The label on the graph axis looks wrong for coordinate (0, 0, –1) on the x axis. Should be the z axis. Same on page 62.

Hey Randall, thanks for the input. That actually *is* the z axis; it's just that I was trying to handwrite an italic font, and looking at it now I can see that it does, indeed, look rather like an x. It might be too late to fix now, but I'll see if I can update it when I get the book back for a final proofread.
7 | SUGGEST

If the purpose of tuple negation is to find the opposite, e.g. of a vector, shouldn’t then the “w” component be left alone? So that a vector remains a vector? In the test “Negating a tuple” the w-component is also negated.

231 | ERROR

In the definition of local_normal_at for smooth triangles, the computed vector must be normalized before it is returned. Otherwise, “Test #17: Normal interpolation” will fail.

61 | TYPO

The text says “if the ray originates at (0,0,–5) and passes directly through the origin”, but the next figure shows the ray in the xOy plane with the ray origin at (–5,0,0)!

Suggestion, change the horizontal axis label from x to z for consistency or change the numbers to match the figure. It is probably easier to change the axis label.

192 | TYPO

The second line of the pseudo code shows:
if t0 > t1 then swap(t0, t1)

Which isn’t in the previous pseudo code snippets, but it isn’t shown as added here. And the tests pass without it.

277 | ERROR

The Bibliography page is blank (except for the heading)!

58 | 39 | SUGGEST

The Scenario “Calculating the inverse of a matrix” is the first time where use of an EPSILON smaller than 0.0001 will result in the test failing. On page xix under “Comparing floating-point numbers”, it suggests “using a tiny value like 0.0001 for EPSILON”. I used a smaller one (10e-7), which caused no trouble until page 58 under Inverting Matrices - Implementing Inversion.

I’d suggest mentioning on page xix that many of the tests assume that particular value.

89 | ERROR

Under Ch.6, Putting It Together:
Add a light source. Here’s one possible configuration, with a white light
behind, above and to the left of the eye:

light_position ← point(–10, 10, –10)
light_color ← color(1, 1, 1)
light ← point_light(light_position, light_color)

If we’re following the left-hand co-ordinate system, shouldn’t the light_position be point(–10, –10, –10) in order to be “above” the sphere? I tried this out, and it does seem to match the given image.

198 | 180 | ERROR

The direction vector for the third ray striking a cylinder test is not normalized and will not return the t0 and t1 values listed using the given pseudocode implementation. I believe the implementation is correct but it assumes the ray direction vector is normalized. Very minor issue but it took me a few minutes to understand what was happening.

198 | 180 | SUGGEST

The issue I reported is absolutely not an issue. Not 10 seconds after I submitted it I noticed that the test code assumed the normalization. My mistake - sorry. Loving the book.

4 | SUGGEST

I find the use of Given When Then in the Scenarios confusing. See the Gherkin reference for guidelines.
A Scenario with no When (as on page 4/pdf and many others) is confusing, as the Given is what is being asserted, so it must be assumed to be “true”, but the Then tests the assertion, so the Scenario must always pass (once the implementation is in place); or, put another way, without a When, nothing is being tested. Simply changing the Given to a When in these cases would address that. A precondition/assertion/Given is not needed for every Scenario, but every Scenario needs a When.
There are also many Scenarios with multiple When statements, which is a discouraged practice (see the Gherkin reference), and in these cases it confuses the purpose of the test. For example (from p21/pdf):

Scenario: Constructing the PPM pixel data
Given c ← canvas(5, 3)
And c1 ← color(1.5, 0, 0)
And c2 ← color(0, 0.5, 0)
And c3 ← color(–0.5, 0, 1)
When write_pixel(c, 0, 0, c1)
And write_pixel(c, 2, 1, c2)
And write_pixel(c, 4, 2, c3)
And ppm ← canvas_to_ppm(c)
Then lines 4-6 of ppm are …

This implies that we are testing both write_pixel and canvas_to_ppm, but write_pixel was verified in prior tests, so it does not need to be tested here. If write_pixel were being tested, then there are missing Then statements to verify it. I would suggest that in this scenario (and others like it) most of the When statements should be changed to Given statements, so that the actual thing being tested is the only thing in a When step. In this case it would be changed to the following.

Scenario: Constructing the PPM pixel data
Given c ← canvas(5, 3)
And c1 ← color(1.5, 0, 0)
And c2 ← color(0, 0.5, 0)
And c3 ← color(–0.5, 0, 1)
And write_pixel(c, 0, 0, c1)
And write_pixel(c, 2, 1, c2)
And write_pixel(c, 4, 2, c3)
When ppm ← canvas_to_ppm©
Then lines 4-6 of ppm are …

This makes it clear that what is being tested is the canvas_to_ppm, and we don’t need to verify that the write_pixel calls updated the canvas correctly as well.

119 | SUGGEST

In this spec in “Refactoring Shapes”:

Scenario: The default material
Given s ← test_shape()
When m ← s.material
Then m = material()

Any reason not to combine the last two lines, like the scenario above it? (“The default transformation”), e.g.:

Scenario: The default material
Given s ← test_shape()
Then s.material = material()

21 | SUGGEST

Regarding this test case

Scenario: Constructing the PPM pixel data
Given c ← canvas(5, 3)
And c1 ← color(1.5, 0, 0)
And c2 ← color(0, 0.5, 0)
And c3 ← color(–0.5, 0, 1)
When write_pixel(c, 0, 0, c1)
And write_pixel(c, 2, 1, c2)
And write_pixel(c, 4, 2, c3)
And ppm ← canvas_to_ppm(c)
Then lines 4-6 of ppm are
"""
255 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 128 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 255
"""
my output is
"""
1.5 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0.5 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
"""
it is not clear to me why the first number in the string is 255,
in the second line 128, and in the third 255?

I get clamping if the number is less than 0 or bigger than 255,
but how does 1.5 suddenly become 255? What did I miss? I just assumed
Color (1.5, 0, 0) -> String “1.5 0 0”
how and why did it become “255 0 0”, and in which part of the book is that explained,
because I am stuck now.

I could not open Discussion,
so I report this here,

regards,
Elvis
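In case it helps anyone else who gets stuck here: the book’s canvas stores each color component as a float in the 0–1 range, and canvas_to_ppm scales it to the 0–255 range (clamping first) before writing it out, rather than writing the raw floats. A rough sketch of the per-component conversion in Python (function name illustrative):

def to_ppm_value(component, max_value=255):
    # Clamp to the 0..1 range, then scale to 0..max_value and round.
    clamped = min(1.0, max(0.0, component))
    return round(clamped * max_value)

print(to_ppm_value(1.5))    # 255 (clamped to 1.0, then scaled)
print(to_ppm_value(0.5))    # 128 (0.5 * 255 = 127.5, rounds to 128)
print(to_ppm_value(-0.5))   # 0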

136 | SUGGEST

The tests for the “Making a 3D Checker” patterns don’t cover any x-y-z interaction and allow an implementation like this one:

| white -> (abs(x) + abs(y) + abs(z)) % 2 == 0 // abs used instead of floor
| black -> otherwise

to make all tests green, with disappointing results :)

The fact that the floor notation is so similar to the abs one makes the situation worse and confuses the most superficial readers :D

Similar considerations apply to other patterns.

Thank you for the wonderful book, btw.

147 | ERROR

On page 147, the test for infinite recursion of ReflectedColor() doesn’t recurse infinitely if I use the computation’s OverPoint in ReflectedColor(). The ray hits the upper plane and seems to stop.

If I use Point in ReflectedColor() instead, then I get infinite recursion with a timeout/stack-level-too-deep issue. Page 145 already said to use OverPoint, though.

171 | 153 | SUGGEST

The two lines in this pseudocode look like this:

> comps.n1 ← last(containers).material.refractive_index

Would it make sense to reference the shape first, to clarify that the material belongs to the shape and not the intersection?

> comps.n1 ← last(containers).shape.material.refractive_index

36 | SUGGEST

Add ‘And cofactor(A,2,1) = 21’ to the test

In my implementation of cofactor I managed to have a bug, because in my chosen language the modulo operator has higher precedence than addition. So when I wrote ‘row + col % 2 == 0’ it was interpreted as ‘row + (col % 2) == 0’ and not ‘(row + col) % 2 == 0’.

Unfortunately this still passed the tests for cofactor for (0,0) and (1,0) and I didn’t realize my mistake until later when implementing inverse (after some debugging), adding one more case to the cofactor tests exposed the error.
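A tiny illustration of the precedence trap in Python terms (where % also binds tighter than + and == binds loosest); the sign helper is illustrative, not the book’s code:

def cofactor_sign(row, col):
    # Negate the minor when row + col is odd.
    return -1 if (row + col) % 2 == 1 else 1

# The buggy version parses as "(row + (col % 2)) == 1". It happens to agree
# with the correct version for (0, 0) and (1, 0), but not for (2, 1):
buggy_sign = lambda row, col: -1 if row + col % 2 == 1 else 1

print(cofactor_sign(2, 1), buggy_sign(2, 1))   # -1 1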

95 | ERROR

In the test “Scenario: Shading an intersection,” we are finding the shaded colour of “the first object in w” which is a pinkish/magenta coloured sphere.

The test expects the result “Then c = color(0.38066, 0.47583, 0.2855)” which is a greenish colour.

My actual result after writing the code to satisfy this test is color(0.47582649135129296, 0.09516529827025859, 0.47582649135129296) which is a dark magenta colour.

My result seems to be reasonable, and I suspect that the book’s expected result for this test is erroneous.

95 | SUGGEST

Please disregard my previous error report regarding the test “Scenario: Shading an intersection,” it seems the error was in my code, not in the book.

Apologies

189 | ERROR

I believe the scalar calculation when a = 0 is incorrect. The book indicates the value is: t = -c/(2b). However, from the quadratic formula a·t^2 + b·t + c = 0, one would get t = -c/b when a = 0.

This also seems to be borne out by the test case (Intersecting a cone with a ray parallel to one of its halves) on PDF page 190, where xs[0].t = 0.35355. For this test case, the ray (using the book’s formula) would intersect the cone at coordinates (0., 0.249998, -0.750002), which can be verified to not be on the cone, since the sum of the squares of the x and z components does not equal the square of the y component. Using instead xs[0].t = 0.7071067811865476 (twice the test case’s value) gives the point (0., 0.5, –0.5), which is on the cone.

Hopefully someone can verify whether or not I’ve made an error.
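A quick numeric check of the example in this report (Python; this assumes the unit double-napped cone x^2 + z^2 = y^2 and the ray from the scenario):

import math

origin = (0.0, 0.0, -1.0)
d = (0.0, math.sqrt(2) / 2, math.sqrt(2) / 2)   # normalize(vector(0, 1, 1))

def on_cone(t):
    p = tuple(o + t * di for o, di in zip(origin, d))
    return abs(p[0]**2 + p[2]**2 - p[1]**2) < 1e-6

print(on_cone(0.35355))             # False: (0, 0.25, -0.75) is not on the cone
print(on_cone(1 / math.sqrt(2)))    # True:  (0, 0.5, -0.5) is on the cone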

176 | 158 | SUGGEST

I had a floating-point math issue while doing Chapter 11 in Golang:

The test “The refracted color with a refracted ray” kept failing with different values for the refracted color:

“Expected Color( 0.30000000000000004 0.30000000000000004 0.30000000000000004 ) to be equal to Color( 0 0.99888 0.04725 )”

And I finally realized that it’s a combination of the math in the Sphere intersection logic and the Hit() method. Here’s the code-path:

RefractedColor(comps) -> ColorAt(refractedRay) -> Intersect(ray) -> spheres.each { |obj| obj.Intersect(ray) }

In the obj.Intersect(ray) call above, the 2nd intersection was 0.000008, versus –0.00001 for the other TRC implementations on GitHub that I tested.

After we have those intersections, ColorAt(refractedRay) moves on and gets the hit from them with Hit(intersections). For me, it chose 0.000008 because it was the smallest positive number. But for the other implementations it skipped that one (because they had a negative value for it), and chose the next intersection.

I couldn’t see any other issue than just floating-point math, so my fix was to use EPSILON instead of 0 when removing less-than-zero values in Hit().

Not sure if it’s worth including this fix, but hopefully this helps someone else who has this failing test too.

137 | ERROR

Implementation of checkers pattern as given causes acne for default plane.

To explain why:
let’s look at all the points on the plane where floor(p_x) and floor(p_z) are zero (this is the square with corners at (0, 0, 0) and (1, 0, 1)).

For any point on this square, p_y is exactly zero, but since in reality the y coordinate would have minor errors, it would randomly be either slightly above zero (having floor(p_y)=0) or slightly below zero (having floor(p_y)=–1). This means that some of the points on that square will be rendered with color c_a and some with color c_b.

I eventually “solved” this issue by changing the algorithm slightly to the following:

color(p, c_a, c_b) = c_a if (floor(p_x + epsilon) + floor(p_y + epsilon) + floor(p_z + epsilon)) mod 2 = 0, else c_b

(note: the same would probably be an issue with the striped pattern if I had tested it with the plane x=0)
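For reference, the adjusted formula from this report as a Python sketch (the epsilon value is illustrative):

import math

EPSILON = 1e-4

def checkers_color(point, color_a, color_b):
    # Nudge each coordinate before flooring so points that sit exactly on a
    # cell boundary (like y = 0 on the default plane) don't flip between
    # neighbouring cells due to floating-point noise.
    total = sum(math.floor(c + EPSILON) for c in point)
    return color_a if total % 2 == 0 else color_b

print(checkers_color((0.5, 0.0, 0.5), "white", "black"))      # white
print(checkers_color((0.5, -1e-12, 0.5), "white", "black"))   # still white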

158 | SUGGEST

First of all: Excellent book! I absolutely love it! Thank you for all of the hard work and careful preparation in laying out such a daunting topic. So here’s an issue I’ve found:

In Test #7 for implementing refraction, I’m thinking that we should not be using comps.under_point every time. I found that I have to switch to this logic:

point start_point = comps.inside ? comps.over_point : comps.under_point;
ray refract_ray = ray(start_point, direction)

If I don’t have this particular logic I’ve found that my spheres are always black, because when the light tries to exit the sphere, the first intersection that it hits is the edge of the sphere. Then, recursively, when it finds the next intersection it gets the same point, because it’ll hit the inside edge, move back inside a little bit and then move forward (effectively bumping its head against the wall).

My test to capture this was something like this:

- create a default world, but remove all the scene objects (but keep the light)

- Create a sphere at the origin with 0 ambient, diffuse, and specular. Make it completely transparent. For simplicity make refractive index 1.0.

- Put a white plane behind it, rotated on the X axis (transform = translateZ(6) * rotateX(90deg))
- Shoot a ray from (0, 0, –10) directly into the sphere (direction = (0, 0, 1)). This should NOT be black, but for my code it was. You should get some sort of lighter gray.

This might be an implementation bug on my part, but all of my tests are passing on my end. I’m just wondering if this is a more robust handling of this edge case.

Happy to discuss! chipbell4@gmail.com

184 | TYPO

the line if t0 > t1 then swap(t0, t1) appears on this page as if it was previously part of the code, but it didn’t exist in the previous version of the code on page 180.

187 | SUGGEST

Maybe I’m missing something but I don’t understand why the test “dist < 1” is needed in local_normal_at. Since the normal is only calculated for points on the object’s surface, can’t we just assume that if the point has y coordinate close to maximum then it’s on the top cap (and same for minimum and bottom cap)?

83 | ERROR

To get the reflect test to pass, parentheses need to be added around “normal * 2 * dot(in, normal)” in the reflect function. The line should be
return in - (normal * 2 * dot(in, normal)).
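For reference, the reflection formula itself as a short Python sketch with plain 3-tuples; the precedence problem in this report is language-specific, so here the grouping is made explicit:

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def reflect(v_in, normal):
    # in - (normal * 2 * dot(in, normal))
    k = 2 * dot(v_in, normal)
    return tuple(i - n * k for i, n in zip(v_in, normal))

print(reflect((1, -1, 0), (0, 1, 0)))   # (1, 1, 0), reflecting a 45-degree approach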
