Rendering a large image in Max in small pieces

Coolhand, Mountain Lair, Posts: 1,296, Member
Does anyone know how to break a really large render into smaller, more easily renderable chunks?

Posts

  • Guerrilla, Helsinki, Posts: 2,868, Administrator
    Render Region should work pretty well.

    Here's an official-looking tutorial:
    Autodesk - Autodesk 3ds Max Services & Support - Rendering Very Large Images

    I gave it a shot myself before I googled that; here's how it went, in 10 easy steps:

    1. Fire up your scene, go to Render Scene and set the output resolution to whatever your desired output resolution happens to be.
    2. Change the Render dropdown bit from 'View' to 'Region' (or one of the other options if you feel like it.)
    3. Set the region dimensions and location in Viewport Configuration (in my case I just did a 640x480 divided into 4, so 320x240). You want to set up the idiotically named Sub-Region. :p
    4. Put the marquee in x0, y0 and hit render. Click OK on the viewport.
    5. Wait for render.
    6. Change background colour to make different regions stand out a bit :p
    7. Move the marquee to the next bit (in my case x320, y0)
    8. Render
    9. Repeat
    10. Don't clear the VFB in between renders. You should have a complete image after all the regions have been rendered.

    I'm not entirely sure it's actually any more memory efficient, but the region render did have a slightly lower total render time.

    There are a couple of maxscripts that do something similar, but people seem to want money for them, and I can't imagine it being too hard to script.
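
    Something like this rough, untested sketch is what I have in mind (the sizes and output name are made up, and it leans on MAXScript's render() and pasteBitmap(), so double-check the exact arguments against the MAXScript reference):

        -- untested sketch: render a 640x480 output as a 2x2 grid of tiles
        -- and paste them into one bitmap. Sizes and file name are made up.
        fullW = 640
        fullH = 480
        tilesX = 2
        tilesY = 2
        tileW = fullW / tilesX
        tileH = fullH / tilesY

        finalBmp = bitmap fullW fullH  -- the assembled full-size image

        for ty = 0 to tilesY - 1 do
            for tx = 0 to tilesX - 1 do
            (
                x1 = tx * tileW
                y1 = ty * tileH
                -- renderType:#region renders only this rectangle of the frame
                tileBmp = render outputwidth:fullW outputheight:fullH renderType:#region region:#(x1, y1, x1 + tileW, y1 + tileH) vfb:false
                -- copy the rendered rectangle into place in the final bitmap
                pasteBitmap tileBmp finalBmp (box2 x1 y1 tileW tileH) (point2 x1 y1)
                free tileBmp  -- let go of the tile before rendering the next
            )

        finalBmp.filename = "bigrender.tga"  -- made-up output path
        save finalBmp
        display finalBmp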

    Anyway, pics:

    #1 Region Render
    #2 View Render
  • [Deleted User], Posts: 3, Member
    I would imagine that rendering bits of the image would save a lot more memory than rendering the whole thing, right?
  • Guerrilla, Helsinki, Posts: 2,868, Administrator
    Yeah, it would, but I have no idea whether Render Region actually culls any geometry or lighting information (or whatever might be eating your memory), or whether it's all still there but just doesn't get processed. But yeah, obviously smaller output equals less memory eaten, and barring horrible memory leaks in the renderer, you should get most of that memory back when the pic's finished rendering.

    Anyone have any hard scientific stuff to add? I'm just sort of making this up. :p
  • aszazeroth, Posts: 209, Member
    A much more efficient way is to fire up a Backburner server on the same machine (or a few more) and use the render-to-strips function. Way leaner on memory; the downside is that all the Backburner nodes need at least your Max configuration/plugins. I've done this with HUGE 300 dpi print proofs several times.
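
    The Backburner submission itself is point-and-click, but if you want the same strip idea in a plain local script, here's a rough, untested sketch (resolution, strip count and file names are invented, with the same render()/pasteBitmap caveats as Guerrilla's tile sketch above):

        -- untested sketch: render N horizontal strips and save each band to
        -- its own file, like Backburner's strips mode hands bands to nodes.
        -- Resolution, strip count and file names are made up.
        fullW = 3000
        fullH = 2000
        strips = 8
        stripH = fullH / strips

        for s = 0 to strips - 1 do
        (
            y1 = s * stripH
            -- render just this horizontal band of the full frame
            stripBmp = render outputwidth:fullW outputheight:fullH renderType:#region region:#(0, y1, fullW, y1 + stripH) vfb:false
            -- crop the band out of the full-size frame and save it
            bandBmp = bitmap fullW stripH
            pasteBitmap stripBmp bandBmp (box2 0 y1 fullW stripH) (point2 0 0)
            bandBmp.filename = "strip_" + (s as string) + ".tga"  -- made-up names
            save bandBmp
            free stripBmp
            free bandBmp
        )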
  • Guerrilla, Helsinki, Posts: 2,868, Administrator
    Found this while looking for something else.

    MaxScripts
  • LennO, Posts: 0, Member
    Guerrilla wrote: »
    Yeah, it would, but I have no idea whether Render Region actually culls any geometry or lighting information (or whatever might be eating your memory), or whether it's all still there but just doesn't get processed. But yeah, obviously smaller output equals less memory eaten, and barring horrible memory leaks in the renderer, you should get most of that memory back when the pic's finished rendering.

    Anyone have any hard scientific stuff to add? I'm just sort of making this up. :p

    Actually, with raytracing, nothing CAN be culled away: any given ray *can* hit any given object. The whole polygonal geometry has to be stored in the acceleration tree, regardless of the region rendered. Memory consumption for the scene will be the very same, with the exception of textures, which can have a delayed read, i.e. they are only loaded when the first ray intersection calls for them. The framebuffer itself will consume less memory though, which may or may not affect render performance. Mental ray, for example, is well known for crashing at large output resolutions. Most renderers won't even let you render to extremely large output resolutions.
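
    (Back-of-envelope for the framebuffer point, assuming an uncompressed 4-channel buffer at 4 bytes per channel: a 10,000 × 10,000 output needs 10,000 × 10,000 × 4 × 4 bytes, roughly 1.6 GB, for the buffer alone, before any of the renderer's working memory.)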
  • Guerrilla, Helsinki, Posts: 2,868, Administrator
    That makes sense. Any ideas on really big renders then? :)
  • Shinohara, Posts: 0, Member
    LennO wrote: »
    Actually, with raytracing, nothing CAN be culled away: any given ray *can* hit any given object.
    Technically true, but in practice not so much. Any decent modern raytracer (including Max's scanline renderer and mental ray) bins polys into smaller chunks by voxels, BSP trees, or some other method of spatial partitioning; this way each ray can be tested against a much smaller number of boxes, and only raytraced against triangles that have a high chance of contributing to the rendered pixel. Mental ray and the scanline renderer both let you tweak how the spatial partitioning is done, which can help quite a bit with larger images; there should be some sections in the appropriate areas of the Max manual that give starting pointers for adjusting these settings. Mental ray and scanline also have settings to use limited memory; not sure if Brazil does, though (I seem to recall Coolhand mentioning in the past that he uses Brazil).
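
    To give a flavour of the "test against boxes first" idea, here's a toy, untested slab test sketched in MAXScript (nothing Max-specific beyond Point3 indexing; it's just the standard ray-vs-axis-aligned-box check that spatial partitioning schemes rely on):

        -- toy ray-vs-axis-aligned-box "slab" test: the cheap check a spatial
        -- partition makes before any triangle inside the box is touched
        fn rayHitsBox rayPos rayDir boxMin boxMax =
        (
            tMin = -1.0e30
            tMax = 1.0e30
            for axis in #(1, 2, 3) do
            (
                if (abs rayDir[axis]) < 1.0e-9 then
                (
                    -- ray runs parallel to this slab pair: must start between them
                    if rayPos[axis] < boxMin[axis] or rayPos[axis] > boxMax[axis] then return false
                )
                else
                (
                    t1 = (boxMin[axis] - rayPos[axis]) / rayDir[axis]
                    t2 = (boxMax[axis] - rayPos[axis]) / rayDir[axis]
                    if t1 > t2 then
                    (
                        tmp = t1
                        t1 = t2
                        t2 = tmp
                    )
                    if t1 > tMin then tMin = t1
                    if t2 < tMax then tMax = t2
                    if tMin > tMax then return false  -- slab intervals don't overlap
                )
            )
            tMax >= 0.0  -- true unless the box is entirely behind the ray
        )

        -- e.g. rayHitsBox [0,0,0] [1,0,0] [5,-1,-1] [6,1,1] --> true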
  • LennO, Posts: 0, Member
    Shinohara wrote: »
    Technically true, but in practice not so much...

    Actually, yes, what I said is absolutely true, and if you read the sentence after the one you quoted, you'll see I even specifically mentioned the acceleration structure:

    "Actually, with raytracing, nothing CAN be culled away – any given ray *can* hit any given object. The whole polygonal geometry has to be stored in the acceleration tree, regardless of region rendered."

    NOTHING is culled away from the scene; every single triangle is held in memory. What is reduced is the amount of TIME (CPU cycles) you spend on ray/triangle intersection testing. Acceleration data structures such as BSP trees and octrees do NOT reduce the amount of memory used (in fact, they increase it, as all triangles have to be stored in the structure rather than in a simple array or linked list, and a typical BSP can get quite large); they are there to accelerate the rendering, so you don't spend lots of CPU cycles ray-testing every triangle, only those stored in the sub-trees your ray traverses. Adjusting the BSP settings in mental ray, for example, will give you EITHER fast renders and a huge memory footprint (higher depth / small voxel size) OR slow renders and a low memory footprint (low depth, large number of tris per voxel). Finding the balance for every single scene (or using the grid for evenly distributed tris) is key to rendering very complex scenes.

    In short: every single tri is held in memory at any given time during raytracing (with the exception of delayed-read geometry). The acceleration structure lets you discard a huge number of them during ray intersection testing, but that reduces rendering time, not memory footprint.
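
    (For intuition on the dial you're turning: a full binary tree of depth d can hold up to 2^(d+1) - 1 nodes, so allowing more BSP depth grows the structure roughly exponentially, while each leaf then holds fewer triangles to test per ray. Higher depth buys time at the cost of memory; bigger leaves do the reverse.)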
  • aszazeroth, Posts: 209, Member
    Wow, lots of cool technical stuff... I actually learned a few things. Still, I like my "cheaty" Backburner approach =)