
I am creating a stylized animation in which everything takes place on a 2D plane.

I have a complex geometry-node setup that applies some stylized effects, but they are all built to work in 2D. I quickly discovered that working with flat planes alone makes camera moves impractical.

For example, I want to fly through some mountains and have their profiles change as they get closer to the camera. So I had the idea of modeling a 3D scene and then flattening it onto the camera plane, allowing for a traditional 3D pipeline. Basically, there is a "Witness" camera and a "Render" camera.

The Witness camera moves while the Render camera is static. Using geometry nodes, I flatten the 3D objects onto the Witness camera plane, but then I need to transform them to my Render camera plane. I will then apply a slight offset in Z to separate elements.

This is where I'm having some trouble.

I understand the concept; I just don't know the proper maths to get my flattened geometry into the same position relative to the new camera. I can get the matrix of both cameras and the location of the center (bounding box) of my flattened geometry, but how do I calculate where that geometry should be in relation to the new camera?

If I were just doing this in the 3D viewport, it would be as simple as parenting the flattened geometry to my Witness camera and then setting its transforms to be identical to the Render camera's.
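In matrix terms, that parenting trick computes the relative transform `M = M_render · M_witness⁻¹` and applies it to the geometry's world-space points. A minimal pure-Python sketch of the idea, with no Blender dependency and made-up camera poses:

```python
import math

def mat_translate(x, y, z):
    """4x4 translation matrix."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def mat_rot_z(angle):
    """4x4 rotation about Z by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_invert_rigid(m):
    """Invert a rotation+translation matrix: [R | t]^-1 = [R^T | -R^T t]."""
    rt = [[m[j][i] for j in range(3)] for i in range(3)]   # transpose of R
    t = [m[i][3] for i in range(3)]
    it = [-sum(rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [rt[0] + [it[0]], rt[1] + [it[1]], rt[2] + [it[2]], [0, 0, 0, 1]]

def apply_point(m, p):
    """Transform a 3D point by a 4x4 matrix."""
    x, y, z = p
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3]
                 for i in range(3))

# Made-up world matrices for the two cameras.
witness = mat_mul(mat_translate(5, 0, 2), mat_rot_z(math.radians(90)))
render = mat_translate(0, -10, 2)

# Relative transform: first undo the Witness pose, then apply the Render pose.
relative = mat_mul(render, mat_invert_rigid(witness))

# A point at the Witness camera's origin lands at the Render camera's origin.
print(apply_point(relative, (5, 0, 2)))  # -> (0.0, -10.0, 2.0)
```

The same composition works for every vertex of the flattened geometry, since the flattening pinned everything to the Witness camera plane.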

[Screenshot of the Blender interface showing the scene setup and geometry nodes]

  • Just as a side note, not sure if relevant: I recommend orthographic cameras for 2D renders. – Commented Aug 26, 2025 at 18:47

2 Answers


I ran into a similar problem in the past, where I wanted to transfer the object(s) in a camera view A to a camera view B. Luckily, the solution wasn't too elaborate, so I'll just share my old node tree from my asset library.

[Screenshot of the node tree]

In my case, I named my Witness camera "Source Camera" and my Render camera "Duplicate Camera". First, I found the inverse transformation of my Source Camera, then applied it to my object. Why? Imagine doing this manually: you would parent the object to the Source Camera, zero the Source Camera's transforms, then apply the Duplicate Camera's transforms to put the object in the view of the Duplicate Camera.

Likewise, applying the inverse Source Camera transformation sends the object into a zero-transform state relative to the Source Camera; only then can the Duplicate Camera's transformation be applied.

[Screenshot of the result]

I left out applying the cameras' scale because, like you, I was using this method for 2D modeling, grease pencil, and parallax adjustment: I wanted to tweak the settings of my Duplicate/Render camera without my object being affected by the camera's scale. In doing so, I concluded that position and rotation are the only camera transforms that need to be applied for this method to work. You can apply the scale too, but it isn't really necessary.
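If you ever do need to discard a matrix's scale explicitly, one common way (a sketch, assuming the matrix has no shear) is to normalize the columns of its 3x3 block, which keeps rotation and translation intact:

```python
import math

def strip_scale(m):
    """Return a copy of a 4x4 matrix with scale removed: each column of the
    3x3 rotation block is normalized to unit length, keeping rotation and
    translation intact (assumes no shear)."""
    out = [row[:] for row in m]
    for j in range(3):
        length = math.sqrt(sum(m[i][j] ** 2 for i in range(3)))
        for i in range(3):
            out[i][j] = m[i][j] / length
    return out

# A made-up camera matrix with uniform scale 2 and translation (1, 2, 3):
scaled = [[2, 0, 0, 1], [0, 2, 0, 2], [0, 0, 2, 3], [0, 0, 0, 1]]
unscaled = strip_scale(scaled)
# The rotation block collapses to identity; the translation column survives.
```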

The result is as shown above. This should work for any mesh, flat or otherwise.


Blender 4.5.2

  • Hmmm, this is definitely getting me somewhere, but it isn't giving me an exact one-to-one lineup between the cameras. The geometry shows up bigger in my Duplicate camera. It also only works if I plug in the source geometry instead of my flattened geometry; my flattened geometry is obviously undergoing transformations due to the raycast. I will play around with this a bit more tomorrow. You've given me some good logic to work with. Many thanks for talking through it and sharing your example. – Commented Aug 27, 2025 at 4:09
  • @Jonathan Farrell, I always recommend sharing your blend file when you have a specific issue to resolve, as I did through blend-exchange.com. Even if you don't want to share your project with the public, attempting to recreate it in a reduced form will often reveal the real problem. – Commented Aug 27, 2025 at 13:59
  • I got it figured out. The issue I was having is that the object's initial transform was causing the offset. Once I apply transforms on the object, it goes to the right place. – Commented Aug 27, 2025 at 16:57
  • @Jonathan Farrell, good job! 👍 – Commented Aug 27, 2025 at 19:13

For anyone else in the future, this is what my final node group looks like. Very important: any object this modifier is applied to has to have its transforms applied, or it won't line up between the camera views.

[Screenshot of the final node group]

  • I would refer to blender.meta.stackexchange.com for how to write an appropriate answer, rather than just posting a follow-up suggestion. – Commented Aug 27, 2025 at 19:17
  • As for your suggestion, transforms on the object you're trying to send to another camera view do not really need to be applied. If you're using the Object Info node, just switch it to Relative. If you're inputting Geometry, use the Self Object node along with the Object Info node to handle the transformations. – Commented Aug 27, 2025 at 19:23
  • A good example of why someone should share their blend file through blend-exchange.com, so that we can make direct suggestions. – Commented Aug 27, 2025 at 19:26
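On the point raised in the comments: instead of destructively applying the object's transforms, the object's own world matrix can be folded into the same composition, so vertices go local → world before the camera swap. A pure-Python sketch with made-up, translation-only matrices to keep the arithmetic obvious (real camera matrices would include rotation as well):

```python
def translate(x, y, z):
    """4x4 translation matrix (rotations left out for brevity)."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def invert_translation(m):
    """Inverse of a pure translation matrix."""
    return translate(-m[0][3], -m[1][3], -m[2][3])

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply_point(m, p):
    """Transform a 3D point by a 4x4 matrix."""
    x, y, z = p
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3]
                 for i in range(3))

obj = translate(1, 0, 0)       # the object's un-applied transform
witness = translate(5, 0, 2)   # made-up Witness camera pose
render = translate(0, -10, 2)  # made-up Render camera pose

# Full chain: local -> world (obj) -> Witness-local -> Render-world.
full = mat_mul(mat_mul(render, invert_translation(witness)), obj)

# A vertex at the object's local origin:
print(apply_point(full, (0, 0, 0)))  # -> (-4, -10, 0)
```

Applying the object's transforms first (so `obj` becomes the identity) simply drops the rightmost factor, which is why the offset disappeared once the transforms were applied.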
