/*
 * ***** BEGIN GPL LICENSE BLOCK *****
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 *
 * Copyright (c) 2007 The Zdeno Ash Miklas
 *
 * This source file is part of VideoTexture library
 *
 * Contributor(s):
 *
 * ***** END GPL LICENSE BLOCK *****
 */

/*
 * VideoTexture module.
 *
 * The only build system that works for sure is the MSVC project files. I've
 * tried my best to update the other build systems, but I count on the
 * community to check and fix them.
 *
 * This is Zdeno Miklas's video texture plugin ported to trunk. The original
 * plugin API is maintained (it can be found at
 * http://home.scarlet.be/~tsi46445/blender/blendVideoTex.html)
 * EXCEPT for the following:
 *
 * - The module name is changed to VideoTexture (instead of blendVideoTex).
 * - A new (and only) video source is now available: VideoFFmpeg().
 *   You must pass 1 to 5 arguments when you create it (you can use named
 *   arguments):
 *       VideoFFmpeg(file)                               : play a video file
 *       VideoFFmpeg(file, capture, rate, width, height) : start a live video capture
 *
 * file:
 *   In the first form, file is a video file name, relative to the startup
 *   directory. It can also be a URL; FFmpeg will happily stream a video from
 *   a network source.
 *   In the second form, file is empty or is a hint for the format of the
 *   video capture. On Windows, file is ignored and should be empty or not
 *   specified. On Linux, FFmpeg supports two types of device: VideoForLinux
 *   and DV1394. The user specifies the type of device with the file
 *   parameter:
 *       [<device_type>][:<standard>]
 *       <device_type> : 'v4l' for VideoForLinux, 'dv1394' for DV1394; defaults to 'v4l'
 *       <standard>    : 'pal', 'secam' or 'ntsc'; defaults to 'ntsc'
 *   The driver name is constructed automatically from the device type:
 *       v4l   : /dev/video<capture>
 *       dv1394: /dev/dv1394/<capture>
 *   If you have a different driver name, you can specify it explicitly
 *   instead of the device type. Examples of valid file parameters:
 *       /dev/v4l/video0:pal
 *       /dev/ieee1394/1:ntsc
 *       dv1394:ntsc
 *       v4l:pal
 *       :secam
 *
 * capture:
 *   Index number of the capture source, starting from 0. The first capture
 *   device is always 0. The VideoTexture module knows that you want to start
 *   a live video capture when you set this parameter to a number >= 0;
 *   setting it to a number < 0 indicates video file playback. The default
 *   value is -1.
 *
 * rate:
 *   The capture frame rate, 25 frames/sec by default.
 *
 * width:
 * height:
 *   Width and height of the video capture in pixels; the default value is 0.
 *   On Windows you must specify these values and they must match the capture
 *   device's capabilities. For example, if you have a webcam that can capture
 *   at 160x120, 320x240 or 640x480, you must specify one of these pairs of
 *   values or opening the video source will fail.
 *   On Linux, default values are provided by the VideoForLinux driver if you
 *   don't specify width and height.
 *
 * Simple example
 * **************
 *
 * 1. Texture definition script:
 *
 *     import VideoTexture
 *
 *     contr = GameLogic.getCurrentController()
 *     obj = contr.getOwner()
 *     if not hasattr(GameLogic, 'video'):
 *         matID = VideoTexture.materialID(obj, 'MAVideoMat')
 *         GameLogic.video = VideoTexture.Texture(obj, matID)
 *         GameLogic.vidSrc = VideoTexture.VideoFFmpeg('trailer_400p.ogg')
 *         # Streaming is also possible:
 *         #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('http://10.32.1.10/trailer_400p.ogg')
 *         GameLogic.vidSrc.repeat = -1
 *         # If the video dimensions are not a power of 2, scaling must be
 *         # done before sending the texture to the GPU. This is done by
 *         # default with gluScaleImage(), but you can use a faster, less
 *         # precise scaling by setting scale to True. The best approach is
 *         # to convert the video offline so that the dimensions are right.
 *         GameLogic.vidSrc.scale = True
 *         # FFmpeg always delivers the video image upside down, so flipping
 *         # is enabled automatically:
 *         #GameLogic.vidSrc.flip = True
 *
 *     if contr.getSensors()[0].isPositive():
 *         GameLogic.video.source = GameLogic.vidSrc
 *         GameLogic.vidSrc.play()
 *
 * 2. Texture refresh script:
 *
 *     obj = GameLogic.getCurrentController().getOwner()
 *     if hasattr(GameLogic, 'video'):
 *         GameLogic.video.refresh(True)
 *
 * You can download this demo here:
 *     http://home.scarlet.be/~tsi46445/blender/VideoTextureDemo.blend
 *     http://home.scarlet.be/~tsi46445/blender/trailer_400p.ogg
 */
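
/*
 * A companion sketch (not from the original notes) showing the live capture
 * form described above; the device index, frame rate and size are assumptions
 * that must match your hardware:
 *
 *     import VideoTexture
 *
 *     contr = GameLogic.getCurrentController()
 *     obj = contr.getOwner()
 *     if not hasattr(GameLogic, 'video'):
 *         matID = VideoTexture.materialID(obj, 'MAVideoMat')
 *         GameLogic.video = VideoTexture.Texture(obj, matID)
 *         # Windows: file must be empty; width/height are mandatory and must
 *         # match a capture mode of the device (here device 0, 25 fps, 640x480)
 *         GameLogic.vidSrc = VideoTexture.VideoFFmpeg('', 0, 25, 640, 480)
 *         # Linux equivalent, letting the VideoForLinux driver pick the size:
 *         #GameLogic.vidSrc = VideoTexture.VideoFFmpeg('v4l:ntsc', 0, 25)
 *         GameLogic.video.source = GameLogic.vidSrc
 *         GameLogic.vidSrc.play()
 */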

/** \file ImageRender.h
 *  \ingroup bgevideotex
 */

#ifndef __IMAGERENDER_H__
#define __IMAGERENDER_H__

#include "Common.h"
|
|
|
|
|
2012-06-16 20:20:07 +00:00
|
|
|
#include "KX_Scene.h"
|
|
|
|
#include "KX_Camera.h"
|
|
|
|
#include "DNA_screen_types.h"
|
|
|
|
#include "RAS_ICanvas.h"
|
|
|
|
#include "RAS_IRasterizer.h"
|

/*
 * BGE: Various render improvements.
 *
 * bge.logic.setRender(flag) enables/disables the render pass.
 *   The render pass is enabled by default but it can be disabled with
 *   bge.logic.setRender(False). Once disabled, the render pass is skipped and
 *   a new logic frame starts immediately. Note that VSync no longer limits
 *   the fps when render is off, but the 'Use Frame Rate' option in the Render
 *   Properties still does; to run as many frames as possible, untick the
 *   option. This function is useful when you don't need the default render,
 *   e.g. when doing offscreen render to a device other than the monitor.
 *   Note that without VSync, you must limit the frame rate by other means.
 *
 * fbo = bge.render.offScreenCreate(width, height [, samples=0] [, target=bge.render.RAS_OFS_RENDER_BUFFER])
 *   Use this method to create an offscreen buffer of the given size, with the
 *   given MSAA samples, targeting either a render buffer
 *   (bge.render.RAS_OFS_RENDER_BUFFER) or a texture
 *   (bge.render.RAS_OFS_RENDER_TEXTURE). Use the former if you want to
 *   retrieve the frame buffer on the host and the latter if you want to pass
 *   the render to another context (textures are proper OpenGL objects, render
 *   buffers aren't). The object created by this function can only be used as
 *   a parameter of the bge.texture.ImageRender() constructor, to send the
 *   render to the FBO rather than to the frame buffer. This is best suited
 *   when you want to create a render of a specific size, or if you need an
 *   image with an alpha channel.
 *
 * bge.texture.<imagetype>.refresh(buffer=None, format="RGBA", ts=-1.0)
 *   Without arguments, the refresh method of the image objects is pretty much
 *   a no-op: it simply invalidates the image so that on the next texture
 *   refresh the image will be recalculated. It is now possible to pass an
 *   optional buffer object to transfer the image (and recalculate it if it
 *   was invalid) to an external object. The object must implement the
 *   'buffer protocol'. The image will be transferred as "RGBA" or "BGRA"
 *   pixels depending on the format argument (only those 2 formats are
 *   supported), and ts is an optional timestamp, used if the image depends on
 *   it (e.g. VideoFFmpeg playing a video file). With this function you no
 *   longer need to link the image object to a Texture object to use it: the
 *   image object is self-sufficient.
 *
 * bge.texture.ImageRender(scene, camera, fbo=None)
 *   Render to buffer is possible by passing an FBO object (see offScreenCreate).
 *
 * bge.texture.ImageRender.render()
 *   Allows asynchronous render: call this method to render the scene without
 *   extracting the pixels yet. The function returns as soon as the render
 *   commands have been sent to the GPU. The render proceeds asynchronously on
 *   the GPU while the host can perform other tasks. To complete the render,
 *   either call refresh() directly or refresh the texture of which this
 *   object is the source. Asynchronous render is useful for optimal
 *   performance: call render() on frame N and refresh() on frame N+1 to give
 *   the GPU as much time as possible to render the frame while the game
 *   engine performs other tasks.
 *
 * Support negative scale on camera.
 *   Camera scale was previously ignored in the BGE. It is now injected into
 *   the modelview matrix as a vertical or horizontal flip of the scene
 *   (respectively if scaleY < 0 and scaleX < 0). Note that the actual value
 *   of the scale is not used, only the sign. This allows flipping the image
 *   produced by ImageRender() without any performance penalty: the flip is
 *   integrated in the render itself.
 *
 * Optimized image transfer from ImageRender to buffer.
 *   Previously, images transferred to the host always went through buffers in
 *   VideoTexture. It is now possible to transfer ImageRender images to an
 *   external buffer without an intermediate copy (i.e. directly from OpenGL
 *   to the buffer) if the attributes of the ImageRender object are set as
 *   follows: flip=False, alpha=True, scale=False, depth=False, zbuff=False.
 *   (If you need to flip the image, use camera negative scale.)
 */
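
/*
 * A minimal usage sketch (not part of the original notes) of the offscreen
 * and asynchronous render path described above; the camera object name and
 * the 640x480 buffer size are assumptions:
 *
 *     import bge
 *
 *     def setup():
 *         scene = bge.logic.getCurrentScene()
 *         cam = scene.objects['Camera']          # assumed camera object name
 *         fbo = bge.render.offScreenCreate(640, 480, 0,
 *                                          bge.render.RAS_OFS_RENDER_BUFFER)
 *         bge.logic.offrender = bge.texture.ImageRender(scene, cam, fbo)
 *         # attribute settings enabling the direct OpenGL-to-buffer transfer:
 *         bge.logic.offrender.flip = False
 *         bge.logic.offrender.alpha = True
 *         bge.logic.offrender.scale = False
 *         bge.logic.offrender.depth = False
 *         bge.logic.offrender.zbuff = False
 *         bge.logic.offbuffer = bytearray(640 * 480 * 4)   # RGBA pixels
 *
 *     # run once per logic frame
 *     if not hasattr(bge.logic, 'offrender'):
 *         setup()
 *     else:
 *         # complete the render queued on the previous frame
 *         bge.logic.offrender.refresh(bge.logic.offbuffer, "RGBA")
 *     # queue this frame's render without waiting for the pixels
 *     bge.logic.offrender.render()
 */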

#include "RAS_IOffScreen.h"
#include "RAS_ISync.h"

#include "ImageViewport.h"

/// class for rendering a 3D scene
class ImageRender : public ImageViewport
{
public:
    /// constructor for a camera render, optionally into an offscreen buffer
    ImageRender(KX_Scene *scene, KX_Camera *camera, PyRASOffScreen *offscreen);
    /// constructor for a mirror render (observer looking at a mirror object)
    ImageRender(KX_Scene *scene, KX_GameObject *observer, KX_GameObject *mirror, RAS_IPolyMaterial *mat);
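
    /* The mirror form above is exposed to Python as bge.texture.ImageMirror.
     * A hedged usage sketch, assuming an observer camera, an object named
     * 'Mirror', and a mirror material registered as 'MAMirrorMat':
     *
     *     from bge import logic, texture
     *
     *     scene = logic.getCurrentScene()
     *     observer = scene.objects['Camera']   # viewpoint reflected by the mirror
     *     mirror = scene.objects['Mirror']     # flat object carrying the mirror material
     *     matID = texture.materialID(mirror, 'MAMirrorMat')
     *     logic.mirrortex = texture.Texture(mirror, matID)
     *     logic.mirrortex.source = texture.ImageMirror(scene, observer, mirror, matID)
     *     # then, on each following frame: logic.mirrortex.refresh(True)
     */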

    /// destructor
    virtual ~ImageRender (void);

    /// get background color
    float getBackground (int idx);
    /// set background color
    void setBackground (float red, float green, float blue, float alpha);

    /// get clipping distance
    float getClip (void) { return m_clip; }
    /// set clipping distance
    void setClip (float clip) { m_clip = clip; }

    /// render status
    bool isDone() { return m_done; }
    /// render frame (public so that it is accessible from Python)
    bool Render();
    /// in case an FBO is used, method to unbind it
    void Unbind();
    /// wait for render to complete
    void WaitSync();

protected:
    /// true if ready to render
    bool m_render;
    /// is render done already?
    bool m_done;
    /// rendered scene
    KX_Scene * m_scene;
    /// camera for render
    KX_Camera * m_camera;
    /// do we own the camera?
    bool m_owncamera;
    /// offscreen buffer target, if rendering offscreen
    PyRASOffScreen *m_offscreen;
    /// object to synchronize the render even if there is no buffer transfer
    RAS_ISync *m_sync;
    /// for mirror operation
    KX_GameObject * m_observer;
    KX_GameObject * m_mirror;
    float m_clip;                  // clipping distance
    float m_mirrorHalfWidth;       // mirror width in mirror space
    float m_mirrorHalfHeight;      // mirror height in mirror space
    MT_Point3 m_mirrorPos;         // mirror center position in local space
    MT_Vector3 m_mirrorZ;          // mirror Z axis in local space
    MT_Vector3 m_mirrorY;          // mirror Y axis in local space
    MT_Vector3 m_mirrorX;          // mirror X axis in local space
    /// canvas
    RAS_ICanvas* m_canvas;
    /// rasterizer
    RAS_IRasterizer* m_rasterizer;
    /// engine
    KX_KetsjiEngine* m_engine;
/// background color
float m_background[4];
/// render 3d scene to image
virtual void calcImage (unsigned int texId, double ts) { calcViewport(texId, ts, GL_RGBA); }
/// render 3d scene to image
virtual void calcViewport (unsigned int texId, double ts, unsigned int format);
void setBackgroundFromScene(KX_Scene *scene);
void SetWorldSettings(KX_WorldInfo* wi);
};
#endif
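The mirror members declared above (m_observer, m_mirror, the mirror axes and
m_clip) back the ImageMirror source of the VideoTexture module. A minimal usage
sketch, assuming placeholder object names 'Observer' and 'Mirror' and a material
'MAMirrorMat':
import bge
scene = bge.logic.getCurrentScene()
observer = scene.objects['Observer']   # usually the active camera
mirror = scene.objects['Mirror']       # the mirror plane object
matID = bge.texture.materialID(mirror, 'MAMirrorMat')
if not hasattr(bge.logic, 'mirrorTex'):
    bge.logic.mirrorTex = bge.texture.Texture(mirror, matID)
    bge.logic.mirrorTex.source = bge.texture.ImageMirror(scene, observer, mirror, matID)
    bge.logic.mirrorTex.source.clip = 100.0  # clipping distance (m_clip above)
# refresh once per frame:
bge.logic.mirrorTex.refresh(True)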