This is a new option for panorama cameras to render stereo images that can be used in virtual reality devices.
The option is available in the Camera panel when Multi-View is enabled (the Views option in the Render Layers panel).
Known limitations:
------------------
* Parallel convergence is not supported (you need to set a really high convergence distance to simulate this effect).
* Pivot was not supposed to affect the render, but it does. This has to be looked into; for now set it to CENTER (as in the snippet after this list).
* Derivatives in the perspective camera need to be pre-computed, or we should get rid of kcam->dx/dy (Sergey's words, I don't fully grasp the implications here).
* This works in perspective mode and in panorama mode. However, to fully benefit from this effect in perspective mode you need to render a cube map (there is an addon for this, developed separately; perhaps we could include it in master).
* We have no support for "neck distance" at the moment. This is supposed to help with objects at short distances.
* We have no support to rotate the "Up Axis" of the stereo plane. Meaning, we hardcode 0,0,1 as UP and create the stereo pair relative to that (although we could take the camera local UP when rendering panoramas, this wouldn't work for perspective cameras).
* We have no support for interocular distance attenuation based on the proximity of the poles (which helps to reduce the pole rotation effect/artifact).
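For reference, a minimal Python sketch of enabling this (2.7x-era bpy paths; the name of the spherical-stereo property on the Cycles camera settings is an assumption):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.render.use_multiview = True        # the Views option in the Render Layers panel
    scene.render.views_format = 'STEREO_3D'  # left/right pair

    cam = scene.camera.data
    cam.type = 'PANO'
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'
    cam.cycles.use_spherical_stereo = True   # assumed property name for the new option
    cam.stereo.pivot = 'CENTER'              # see the pivot limitation above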
THIS NEEDS DOCS - both in the 2.78 release log and the Blender manual.
Meanwhile you can read about it here: http://code.blender.org/2015/03/1451
This patch specifically dates from March 2015, as you can see in the code.blender.org post. Many thanks to all the reviewers, testers and minor sponsors who helped me maintain spherical-stereo for 1 year.
All that said, have fun with this. This feature was what got me started with Multi-View development (at the time what I was looking for was Fulldome stereo support, but the implementation is the same). In order to get this into Blender I had to aim it at a less specific use case; thus Multi-View started (this was December 2012, during SIGGRAPH Asia and a chat I had with Paul Bourke during the conference). I don't have the original patch anymore, but you can find a re-based version of it from March 2013, right before I started with the Multi-View project: https://developer.blender.org/P332
Reviewers: sergey, dingto
Subscribers: #cycles
Differential Revision: https://developer.blender.org/D1223
Parallel rendering was not working.
The idea of having the parallel convergence mode render as parallel but
visualize as off-axis was good, but it was leading to some complications
in the code.
I think it's clearer to the user if parallel looks and renders as
parallel, and if she wants to pre-visualize the converged planes, she can
simply set the camera to off-axis temporarily.
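In practice this is just the convergence mode on the camera's stereo settings; a small sketch of the two modes discussed above (CameraStereoData names from the Multi-View API):

    import bpy

    st = bpy.context.scene.camera.data.stereo

    # Render truly parallel; the convergence distance no longer affects this mode.
    st.convergence_mode = 'PARALLEL'
    st.interocular_distance = 0.065

    # To pre-visualize the converged planes, temporarily switch to off-axis.
    st.convergence_mode = 'OFFAXIS'
    st.convergence_distance = 1.95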
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or its individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
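As an illustration of the "Scene Render Views" entry above, a hedged sketch adding a third view to the default stereo pair (the render_view_add operator and the suffix property names are assumptions based on the 2.7x API):

    import bpy

    scene = bpy.context.scene
    scene.render.use_multiview = True
    scene.render.views_format = 'MULTIVIEW'  # arbitrary number of views

    # Add a view; it renders through the camera whose name ends with the suffix.
    bpy.ops.scene.render_view_add()
    view = scene.render.views[-1]
    view.name = 'center'
    view.camera_suffix = '_C'  # assumed property name
    view.file_suffix = '_C'    # assumed property name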
Missing Bits
============
First rule of Multi-View bug reports: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't work (or crashes) when *Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely small todos, and may wait until we are sure none of the above is happening.
Apart from that, these are the known issues:
* Compositor Image Node works poorly with Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting camera from Multi-View when looking from camera is problematic
* Animation Playback (ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors of the original branch in github:
* Alexey Akishin
* Gabriel Caraballo
A new checkbox "High quality" is provided in camera settings to enable
this. This creates a depth of field that is much closer to the rendered
result and even supports aperture blades in the effect, but it's more
expensive too. There are optimizations to do here since the technique is
very fill rate heavy.
People, be careful, this -can- lock up your screen if depth of field
blurring is too extreme.
Technical details:
This uses geometry shaders + instancing and is an adaptation of
techniques gathered from:
* http://bartwronski.com/2014/04/07/bokeh-depth-of-field-going-insane-
* http://advances.realtimerendering.com/s2011/SousaSchulzKazyan%20-%20in%20Real-Time%20Rendering%20Course).ppt
TODOs:
* Support dithering to minimize banding.
* Optimize fill rate in geometry shader.
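A hedged sketch of enabling it from Python, assuming the checkbox maps to a use_high_quality flag on the camera's GPU depth-of-field settings (that name and the blades property are assumptions):

    import bpy

    cam = bpy.context.scene.camera.data
    dof = cam.gpu_dof               # viewport depth-of-field settings
    dof.use_high_quality = True     # assumed name of the new checkbox
    dof.fstop = 2.8                 # lower f-stop = stronger blur (and heavier fill rate)
    dof.blades = 6                  # aperture blades, only used by the high quality path
    cam.dof_distance = 5.0          # focus distance used by the effect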
This commit introduces a few ready-made effects for the 3D viewport
and OpenGL rendering.
Included effects are Depth of Field, accessible from the camera view,
and Screen Space Ambient Occlusion. These effects can be turned on and
tweaked from the shading panel in the 3D viewport.
Offscreen rendering will use the settings of the current camera.
WIP documentation can be found here:
http://wiki.blender.org/index.php/User:Psy-Fi/Framebuffer_Post-processing
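For scripted setups, the same toggles shown in the shading panel live on the 3D view space data; a sketch assuming the GPUFXSettings names use_dof/use_ssao (the ssao.factor property is an assumption):

    import bpy

    # Find a 3D viewport and enable the new effects on it.
    for area in bpy.context.screen.areas:
        if area.type == 'VIEW_3D':
            fx = area.spaces.active.fx_settings
            fx.use_ssao = True    # screen space ambient occlusion
            fx.ssao.factor = 1.0  # assumed property: occlusion strength
            fx.use_dof = True     # depth of field, visible from the camera view
            break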
This patch adds the option to set minimum/maximum latitude/longitude values for
the equirectangular panorama camera in Cycles, as discussed in T34400.
The separate functions in kernel_projection.h are needed because the regular
ones are also used as helper functions for environment map sampling.
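A sketch of the new options from the Python side, assuming they are exposed as latitude/longitude min/max properties on the Cycles camera settings and take radians:

    import bpy
    from math import radians

    cam = bpy.context.scene.camera.data
    cam.type = 'PANO'
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'

    # Limit the panorama to the upper front hemisphere
    # (property names and the use of radians are assumptions).
    cam.cycles.latitude_min = radians(0.0)
    cam.cycles.latitude_max = radians(90.0)
    cam.cycles.longitude_min = radians(-90.0)
    cam.cycles.longitude_max = radians(90.0)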
Reviewers: #cycles, sergey
Reviewed By: #cycles, sergey
Subscribers: dingto, sergey, brecht
Differential Revision: https://developer.blender.org/D960
Notes:
* Made those edits by fully checking the py files, so I should have spotted most needed edits; it remains quite probable I missed a few, we'll fix them if/when someone notices...
* Also made some cleanup "on the road"!
shows both title safe and action safe areas following more modern standards.
Patch #32822 by Harley Acheson, full description:
Our current "title safe" camera display option is anachronistic. It shows a
border of 10% on all edges, which used to be the recommended title safe area
for 4:3 content on standard definition CRT televisions. However we are very
unlikely to create new projects that output for SD TV at that aspect ratio.
This patch changes the option to "safe areas" and indicates the
"title safe" area (also known as "graphic safe") as well as the "action safe"
area. "Title Safe" is an area visible on all reasonably maintained sets, where
text was certain not to be cut off. "Action Safe" is a larger area that
represented where a "perfect" set (with high precision to allow less
overscanning) would cut the image off.
The current recommendation for Action Safe is 3.5% on all edges, which is the
maximum overscan for TVs now. The recommended Title Safe is now 5% vertically
and 10% horizontally for content that is of wider aspect ratio than 4:3. The
reason for the difference between the horizontal and vertical margins is that
wider content would be letterboxed on an older 4:3 television, giving it
additional margin.
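The margins above map onto the scene safe-area settings; a sketch assuming the scene.safe_areas and show_safe_areas names:

    import bpy

    scene = bpy.context.scene

    # (x, y) margins as fractions of the render size, per the recommendations above.
    scene.safe_areas.title = (0.10, 0.05)     # title/graphic safe: 10% horizontal, 5% vertical
    scene.safe_areas.action = (0.035, 0.035)  # action safe: 3.5% on all edges

    # Show the overlays in the camera view.
    scene.camera.data.show_safe_areas = True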
For sample images see:
http://www.dalaifelinto.com/?p=399 (equisolid)
http://www.dalaifelinto.com/?p=389 (equidistant)
The 'use_panorama' option is now part of a new Camera type: 'Panorama'.
Created two other panorama cameras:
- Equisolid: most lenses on the market simulate this model (e.g. Nikon, Canon, ...).
This works like a real lens up to an extent. The final result also takes the
sensor dimensions into account.
.:. to simulate a Nikon D2Xs with a 10.5mm lens (see the snippet below), use:
sensor: 23.7 x 15.7
fisheye lens: 10.5
fisheye fov: 180
render dimensions: 4288 x 2848
- Equidistant: this is not a real lens model, although the old equidistant lens
simulated it. The result is always a circular fisheye that takes up the whole sensor
(in other words, it doesn't take the sensor dimensions into consideration).
This is perfect for fulldomes ;)
For the UI we have 10 to 360 as soft values and 10 to 3600 as hard values (because we can).
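Following the Nikon example above, a sketch with the Cycles camera properties (fisheye_lens/fisheye_fov are assumed names, with the FOV given in radians):

    import bpy
    from math import radians

    scene = bpy.context.scene
    cam = scene.camera.data

    cam.type = 'PANO'
    cam.cycles.panorama_type = 'FISHEYE_EQUISOLID'
    cam.cycles.fisheye_lens = 10.5         # mm
    cam.cycles.fisheye_fov = radians(180)  # degrees in the UI, radians in the API
    cam.sensor_width = 23.7                # mm
    cam.sensor_height = 15.7               # mm

    scene.render.resolution_x = 4288
    scene.render.resolution_y = 2848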
Reference material:
http://www.hdrlabs.com/tutorials/downloads_files/HDRI%20for%20CGI.pdf
http://www.bobatkins.com/photography/technical/field_of_view.html
Note, this is not a real simulation of the light path through the lens.
The ideal solution would be this:
https://graphics.stanford.edu/wikis/cs348b-11/Assignment3
http://www.graphics.stanford.edu/papers/camera/
Thanks Brecht for the fix, suggestions and code review.
Kudos to the dome community for keeping me stimulated on the topic since 2009 ;)
Patch partly implemented during lab time at VisGraf, IMPA - Rio de Janeiro.
environment map, by enabling the Panorama option in the camera.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Camera#Panorama
The focal length and sensor settings are not used; the UI can still be tweaked to
communicate this. Also, panorama should probably become a proper camera type like
perspective or ortho.
- Added support for variable-size sensor width and height.
- Added presets for the most common cameras; new presets can also be defined by the user.
- Added an option to control which dimension (vertical or horizontal) of the sensor
  size defines the FOV. The old behavior of automatic FOV calculation is also kept
  (see the sketch after this list).
- The renderer, viewport, game engine and COLLADA importer/exporter should
  deal fine with these changes. Other exporters will be updated soon.
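A small sketch of the new settings (the 'HORIZONTAL'/'VERTICAL'/'AUTO' values of sensor_fit are assumptions; the Super 35 numbers are only an example):

    import bpy

    cam = bpy.context.scene.camera.data

    # Variable-size sensor, e.g. a Super 35 film back.
    cam.sensor_width = 24.89   # mm
    cam.sensor_height = 18.66  # mm

    # Which sensor dimension defines the FOV; 'AUTO' keeps the old behavior.
    cam.sensor_fit = 'HORIZONTAL'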
- Make gettext stuff draw-time, so switching between languages
  can now happen without a restart.
- Added option to translate the visible interface (menus, buttons, labels)
  and tooltips. Now it's possible to have an English UI and localized tooltips.
- Clean-up sources, do not use gettext stuff for things which can be
collected with RNA.
- Fix issues with Windows 64-bit and the ru_RU locale on my desktop
  (it was a codepage issue).
- Added operator "Get Messages" which generates a new text block
  with all strings collected from RNA.
- Changed the script for updating blender.pot so it now appends
  messages collected from RNA to the automatically gathered messages.
To update .pot you have to re-generate messages.txt using "Get Messages"
operator and then run update_pot script.
- Clean up old translation stuff which wasn't used and most probably
wouldn't be used.
- Return back "International Fonts" option, so if it's disabled, no
gettext lookups happens on draw.
- Merged read_homefile function back. No need in splitting it.
TODO:
- Custom fonts and font size.
  The current font isn't nice, at least for the Russian locale; it's
  difficult to read.
- Put references to messages.txt so gettext can merge translation when
name/description of some property changes.
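For reference, a hedged sketch of the resulting options as they might be toggled from Python (the 2.6x-era preference property names below are assumptions):

    import bpy

    system = bpy.context.user_preferences.system
    system.use_international_fonts = True   # when disabled, no gettext lookups on draw
    system.use_translate_interface = False  # keep menus, buttons and labels in English
    system.use_translate_tooltips = True    # ...but localize the tooltips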
- render check for ortho/panorama combination wasn't working since the flags were not initialized at the time of checking.
- disable panorama button in ortho mode.
ui/ --> startup/bl_ui
op/ --> startup/bl_operators
scripts/startup/ is now the only auto-loading script dir, which also gives some speedup for Blender loading.
~/.blender/2.56/scripts/startup works for auto-loading scripts too.
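For completeness, a minimal auto-loaded module as it could be dropped into one of these startup directories (assuming startup modules get their register()/unregister() called on load):

    # my_startup.py -- save into scripts/startup/ or ~/.blender/2.56/scripts/startup/
    import bpy

    def register():
        # Called automatically when the startup modules are loaded.
        print("my_startup registered")

    def unregister():
        print("my_startup unregistered")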