4.0.1: MVC OpenGL Part 2

The Render Loop
Copy the OpenGLMVCPt1 target from the previous tutorial, and run the app to make sure everything still works.  From this starting point, we will continue by refactoring our CVDisplayLink code.  Create a new Swift file called SwiftOpenGLViewDelegates.swift.  Make sure to import CoreVideo.CVDisplayLink.

import Foundation
import CoreVideo.CVDisplayLink

Define the DisplayLink type.

struct DisplayLink {
    
}

With SwiftOpenGLView as our reference, we’ll start to define what this delegate should implement.  We know that we need a property to access the link and we need a callback function that calls our drawView() method.

struct DisplayLink {
    let id: CVDisplayLink
    let callback: CVDisplayLinkOutputCallback = {(displayLink: CVDisplayLink, inNow:
                      UnsafePointer<CVTimeStamp>, inOutputTime: UnsafePointer<CVTimeStamp>,
                      flagsIn: CVOptionFlags, flagsOut: UnsafeMutablePointer<CVOptionFlags>,
                      displayLinkContext: UnsafeMutableRawPointer?) -> CVReturn in
        let view = unsafeBitCast(displayLinkContext, to: SwiftOpenGLView.self)
        view.displayLink?.currentTime = Double(inNow.pointee.videoTime) /
                                               Double(inNow.pointee.videoTimeScale)
        let result = view.drawView()
        
        return result
    }
}
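The `displayLinkContext` round trip (passUnretained → toOpaque → unsafeBitCast) in the callback can be demonstrated in isolation. This is a minimal, standalone sketch where the hypothetical `Box` class stands in for `SwiftOpenGLView`:

```swift
import Foundation

// Box is a hypothetical stand-in for SwiftOpenGLView: we pass an unretained
// reference out as a raw pointer, then bit-cast it back to the class type.
final class Box {
    var value = 42
}

let box = Box()

// What init?(forView:) will do: wrap the reference in an opaque raw pointer.
let context = UnsafeMutableRawPointer(Unmanaged.passUnretained(box).toOpaque())

// What the callback does: recover the reference from the raw pointer.
let recovered = unsafeBitCast(context, to: Box.self)
recovered.value = 7

print(box.value)   // 7 -- same instance, no copy was made
```

Because `passUnretained` does not retain the object, the view must outlive the display link; that's why we stop the link in the view's `deinit`.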

Last, we see there is some additional setup code: the current time, the time interval since the last frame, and the ability to start and stop the link.  With regard to starting and stopping, we need to protect against starting the link when it is already running or stopping it when it is already stopped.  We'll add a Bool for this purpose.

struct DisplayLink {
    let id: CVDisplayLink
    let callback: CVDisplayLinkOutputCallback = {(displayLink: CVDisplayLink,
                    inNow: UnsafePointer<CVTimeStamp>, inOutputTime: UnsafePointer<CVTimeStamp>,
                    flagsIn: CVOptionFlags, flagsOut: UnsafeMutablePointer<CVOptionFlags>,
                    displayLinkContext: UnsafeMutableRawPointer?) -> CVReturn in

        let view = unsafeBitCast(displayLinkContext, to: SwiftOpenGLView.self)
        view.displayLink?.currentTime = Double(inNow.pointee.videoTime) /
                                               Double(inNow.pointee.videoTimeScale)
        let result = view.drawView()
        
        return result
    }
    var currentTime: Double = 0.0 {
        willSet {
            deltaTime = newValue - currentTime
        }
    }
    var deltaTime: Double = 0.0
    var running: Bool = false
    
    init?(forView view: SwiftOpenGLView) {
        var newID: CVDisplayLink?
        
        if CVDisplayLinkCreateWithActiveCGDisplays(&newID) == kCVReturnSuccess {
            self.id = newID!
            CVDisplayLinkSetOutputCallback(id, callback,
                          UnsafeMutableRawPointer(Unmanaged.passUnretained(view).toOpaque()))
        } else {
            return nil
        }
    }
    
    mutating func start() {
        if !running {
            CVDisplayLinkStart(id)
            running = true
        }
    }
    mutating func stop() {
        if running {
            CVDisplayLinkStop(id)
            running = false
        }
    }
}
You may have noticed that we defined the DisplayLink's id as a let instead of a var.  We're able to do this because Swift allows us to postpone setting the value of a let: in fact, we may set it at any point during init() as long as it is set exactly once by the time init() completes.  This used to not be the case, but Swift is getting better all the time.  In SwiftOpenGLView, remove the CVDisplayLink code and replace it with our DisplayLink.
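A minimal sketch of those two features, a failable initializer together with a deferred `let`, without any CoreVideo involved (the `Resource` type here is hypothetical, not part of the project):

```swift
import Foundation

// A failable init may bail out before the deferred `let` is assigned;
// otherwise the `let` must be assigned exactly once before init returns.
struct Resource {
    let id: Int   // declared, but not yet assigned

    init?(candidate: Int?) {
        guard let value = candidate else {
            return nil        // bail out before `id` is ever set
        }
        self.id = value       // assigned once, inside init: allowed for a let
    }
}

print(Resource(candidate: 7)?.id as Any)    // Optional(7)
print(Resource(candidate: nil) == nil)      // true
```

DisplayLink's init?() follows exactly this shape: if CVDisplayLinkCreateWithActiveCGDisplays fails, we return nil before `id` is ever touched.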


final class SwiftOpenGLView: NSOpenGLView {
    fileprivate var shader = Shader()
    fileprivate var vao = VertexArrayObject()
    fileprivate var vbo = VertexBufferObject()
    fileprivate var tbo = TextureBufferObject()
    fileprivate var data = [Vertex]()
    
    fileprivate var view = FloatMatrix4()
    fileprivate var projection = FloatMatrix4()
    
    var displayLink: DisplayLink?
    
    required init?(coder: NSCoder) {
       
        ...

        displayLink = DisplayLink(forView: self)
    }
    
    override func prepareOpenGL() {
        
        ...
        
        displayLink?.start()
    }

    ...

    func drawView() -> CVReturn {
        guard let context = self.openGLContext else {
            print("Could not acquire an OpenGL context")
            return kCVReturnError
        }
        
        context.makeCurrentContext()
        context.lock()
        
        if let time = displayLink?.currentTime {
        
           ...

        }

        context.flushBuffer()
        context.unlock()
        
        return kCVReturnSuccess
    }
    
    deinit {
        displayLink?.stop()
        shader.delete()
        vao.delete()
        vbo.delete()
        tbo.delete()
    }

}

The Mysterious CVTimeStamp

I glossed over it before, but let's take a moment to talk about how we get currentTime and deltaTime.  If you have experience with CVTimeStamp, then its various fields may not be so confusing for you, but personally, these took me some time to figure out.  Unfortunately, Apple's developer documentation is rather poor at explaining what each field is and/or how each field is helpful.  To a certain extent, CVDisplayLink feels like an unfinished API: for instance, the version property... it's unused and always 0!  I won't cover the use of smpteTime (Society of Motion Picture and Television Engineers) here, but it is a very common timing convention that lays out time in the format HH:MM:SS:FF.  The flags argument may contain information about any given time stamp, and reserved is simply not supposed to be used according to the documentation (so descriptive, haha).

let callback: CVDisplayLinkOutputCallback = {(displayLink: CVDisplayLink,
                    inNow: UnsafePointer<CVTimeStamp>, inOutputTime: UnsafePointer<CVTimeStamp>,
                    flagsIn: CVOptionFlags, flagsOut: UnsafeMutablePointer<CVOptionFlags>,
                    displayLinkContext: UnsafeMutableRawPointer?) -> CVReturn in
    
    //  CVTimeStamp has five fields.  Three of the five are very useful for
    //  keeping track of the current time, calculating delta time, the frame
    //  number, and the number of frames per second.  Two of the fields are
    //  a little more ambiguous as to what they are and how they may be
    //  useful.  The useful fields are videoTime, videoTimeScale, and
    //  videoRefreshPeriod.  The reason not all of the fields are readily
    //  understandable is that the developer documentation is very bad about
    //  using alternate names for each of the fields and thus does not do a
    //  good job of describing the fields or comparing them to one another.
    //  Thankfully, CaptainRedmuff on StackOverflow asked a question that
    //  provided the equation that calculates frames per second.  From that
    //  equation, we can extrapolate the value of each field.
    //
    //  @hostTime = current time in Units of the "root".  Yeah, I don't know.
    //    The key to this field is to understand that it is in nanoseconds
    //    (e.g. 1/1_000_000_000 of a second) not Units.  To convert it to
    //    seconds, divide by 1_000_000_000.  Interestingly, dividing by
    //    videoRefreshPeriod and videoTimeScale in a calculation for frames
    //    per second still yields the appropriate number of frames.  This
    //    works as a result of proportionality--dividing seconds by seconds.
    //    However, dividing hostTime by videoTimeScale to get the time in
    //    seconds does not work like it does for videoTime.
    //
    //    framesPerSecond:
    //      (videoTime / videoRefreshPeriod) / (videoTime / videoTimeScale) = 59
    //          and
    //      (hostTime / videoRefreshPeriod) / (hostTime / videoTimeScale) = 59
    //          but
    //      hostTime * videoTimeScale ≠ seconds, but Units
    //      i.e. seconds * (Units / seconds) = Units
    //
    //  @rateScalar = ratio of "rate of device in CVTimeStamp/unitOfTime" to
    //    the "Nominal Rate".  I think the "Nominal Rate" is
    //    videoRefreshPeriod, but unfortunately, the documentation doesn't
    //    just say videoRefreshPeriod is the Nominal rate and then define
    //    what that means.  Regardless, because this is a ratio, and we know
    //    the value of one of the parts (e.g. Units/frame), we know that the
    //    "rate of the device" is frame/Units (the units of measure need to
    //    cancel out for the ratio to be a ratio).  This makes sense in that
    //    rateScalar's definition tells us the rate is "measured by timeStamps".
    //    Since there is a frame for every timeStamp, the rate of the device
    //    equals CVTimeStamp/Unit or frame/Unit.  Thus,
    //
    //      rateScalar = frame/Units : Units/frame
    //
    //  @videoTime = the time the frame was created since the computer started
    //    up.  If you turn your computer off and then turn it back on, this
    //    timer returns to zero.  The timer is paused when you put your
    //    computer to sleep.  This value is in Units, not seconds.  To get the
    //    number of seconds this value represents, you have to apply
    //    videoTimeScale.
    //  @videoRefreshPeriod = the number of Units per frame (i.e. Units/frame)
    //    This is useful in calculating the frame number or frames per second.
    //    The documentation calls this the "nominal update period"
    //
    //      frame = videoTime / videoRefreshPeriod
    //
    //  @videoTimeScale = Units/second, used to convert videoTime into seconds
    //    and may also be used with videoRefreshPeriod to calculate the expected
    //    framesPerSecond.  I say expected, because videoTimeScale and
    //    videoRefreshPeriod don't change while videoTime does change.  Thus,
    //    to calculate fps in the case of system slow down, one would need to
    //    use videoTime with videoTimeScale to calculate the actual fps value.
    //
    //      seconds = videoTime / videoTimeScale
    //
    //      framesPerSecondConstant = videoTimeScale / videoRefreshPeriod
    //
    //  Time in DD:HH:mm:ss using hostTime
    let rootTotalSeconds = inNow.pointee.hostTime / 1_000_000_000
    let rootDays = inNow.pointee.hostTime / (1_000_000_000 * 60 * 60 * 24) % 365
    let rootHours = inNow.pointee.hostTime / (1_000_000_000 * 60 * 60) % 24
    let rootMinutes = inNow.pointee.hostTime / (1_000_000_000 * 60) % 60
    let rootSeconds = inNow.pointee.hostTime / 1_000_000_000 % 60
    Swift.print("rootTotalSeconds: \(rootTotalSeconds) rootDays: \(rootDays) rootHours: \(rootHours) rootMinutes: \(rootMinutes) rootSeconds: \(rootSeconds)")
    
    //  Time in DD:HH:mm:ss using videoTime
    let totalSeconds = inNow.pointee.videoTime / Int64(inNow.pointee.videoTimeScale)
    let days = (totalSeconds / (60 * 60 * 24)) % 365
    let hours = (totalSeconds / (60 * 60)) % 24
    let minutes = (totalSeconds / 60) % 60
    let seconds = totalSeconds % 60
    print("totalSeconds: \(totalSeconds) Days: \(days) Hours: \(hours) Minutes: \(minutes) Seconds: \(seconds)")
    
    print("fps: \(Double(inNow.pointee.videoTimeScale) / Double(inNow.pointee.videoRefreshPeriod)) seconds: \(inNow.pointee.videoTime / Int64(inNow.pointee.videoTimeScale))")
    
    ...
}
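The DD:HH:mm:ss arithmetic above can be checked in plain Swift without a display link. The sample values here are made up: a videoTimeScale of 1_000_000_000 Units/second and a videoTime representing 1 day, 2 hours, 3 minutes, 4 seconds of uptime:

```swift
import Foundation

// Hypothetical sample values standing in for the CVTimeStamp fields.
let videoTimeScale: Int64 = 1_000_000_000
let videoTime: Int64 = (((24 + 2) * 60 + 3) * 60 + 4) * videoTimeScale

// seconds = videoTime / videoTimeScale
let totalSeconds = videoTime / videoTimeScale

// Break total seconds into DD:HH:mm:ss exactly as the callback does.
let days    = (totalSeconds / (60 * 60 * 24)) % 365
let hours   = (totalSeconds / (60 * 60)) % 24
let minutes = (totalSeconds / 60) % 60
let seconds = totalSeconds % 60

print("\(days):\(hours):\(minutes):\(seconds)")   // 1:2:3:4
```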

Now that we've finished that little diversion, let's get back to the task.  We need a data source delegate that will allow SwiftOpenGLViewController to provide an instance of SwiftOpenGLView with content to display.  This means we won't hold onto a shader, VBO, VAO, and TBO in the view; we'll request them instead.  This is the final step toward a generic view that draws OpenGL content.


protocol GraphicViewDataSource {
    func loadScene()
    func prepareToRender(_ scene: SceneName, for time: Double)
    func render(_ scene: SceneName, with renderer: Renderer)

}

There are two new types here that we haven't seen before:  SceneName and Renderer.  SceneName is just a typealias for String, while Renderer is a new protocol which was taken from the WWDC '15 presentation Protocol-Oriented Programming.  We're going to consider an instance of NSOpenGLContext as a rendering environment since it's the place where the drawing result is going to reside and then be used for display.


// A helper type to tell glDraw* commands what type of primitive to draw
enum RenderElementType: UInt32 {
    case points = 0
    case lines = 1
    case triangles = 4
}
protocol Renderer {
    func render(_ elementCount: Int32, as elementType: RenderElementType)
}
// Much like the functions of Core Graphics that create bezier curves
// and points, glDraw* commands take on a very similar task making them
// a perfect extension to NSOpenGLContext
extension NSOpenGLContext: Renderer {
    func render(_ elementCount: Int32, as elementType: RenderElementType) {
        glDrawArrays(elementType.rawValue, 0, elementCount)
    }

}
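Because Renderer is a protocol, any conforming type can be swapped in for NSOpenGLContext. One practical payoff is testability: this sketch (the `MockRenderer` class is hypothetical, not part of the tutorial) records draw calls instead of issuing glDrawArrays, so scene code can be exercised without a live GL context:

```swift
import Foundation

// RenderElementType and Renderer as defined in the tutorial.
enum RenderElementType: UInt32 {
    case points = 0, lines = 1, triangles = 4
}
protocol Renderer {
    func render(_ elementCount: Int32, as elementType: RenderElementType)
}

// A hypothetical mock that records calls rather than drawing.
final class MockRenderer: Renderer {
    private(set) var calls = [(count: Int32, type: RenderElementType)]()
    func render(_ elementCount: Int32, as elementType: RenderElementType) {
        calls.append((count: elementCount, type: elementType))
    }
}

let renderer = MockRenderer()
renderer.render(36, as: .triangles)   // a cube: 12 triangles x 3 vertices
print(renderer.calls.count)           // 1
```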

Now let's create our Scene.  Define our new type at the bottom of SwiftOpenGLObjects.  A Scene has a name, shader, VAO, VBO, TBO, and vertex data array, but additionally defines a Light and a Camera.  We'll look at those in just a minute, but they're exactly what they sound like.  A Scene may be initialized at any time because, just as we discussed last time, we are going to hold off on running any OpenGL code until an NSOpenGLContext is active by using a load(_:) method.  This method does exactly what we have been doing in prepareOpenGL(), so we can delete all of that from SwiftOpenGLView.  For brevity, I have left out the vertices from the data property and the source code for the vertex and fragment shaders, but you'll want to make sure these are present when you define this type.

There are two other important methods we're implementing below:  update(_:) and draw(_:).  The update(_:) method allows us to update the uniforms (which are the Light and Camera in this case), while draw(_:) is where we place our drawing code.  Both update(_:) and draw(_:) will be very important when we implement the SwiftOpenGLView's data source.

typealias SceneName = String
struct Scene {
    var name = SceneName()
    var shader = Shader()
    var vao = VertexArrayObject()
    var vbo = VertexBufferObject()
    var tbo = TextureBufferObject()
    var light = Light()
    var camera = Camera()
    let data: [Vertex] = [ ... ]
    
    private init() {}
    init(named name: String) {
        self.name = name
    }
    
    mutating func load(into view: SwiftOpenGLView) {
        view.scene = name
        tbo.loadTexture(named: "Texture")
        
        vbo.load(data)
        
        vao.layoutVertexPattern()
        vao.unbind()
        
        camera.position = FloatMatrix4().translate(x: 0.0, y: 0.0, z: -5.0)
        camera.projection = FloatMatrix4.projection(aspect: Float(view.bounds.size.width /
                                                    view.bounds.size.height))
        
        let vertexSource = " ... "
        let fragmentSource = " ... "
        shader.create(withVertex: vertexSource, andFragment: fragmentSource)
        shader.setInitialUniforms(for: &self)
    }
    mutating func update(with value: Float) {
        light.position = [sin(value), 2.0, -2.0]
        camera.position = FloatMatrix4().translate(x: 0.0, y: 0.0, z: -5.0)
    }
    mutating func draw(with renderer: Renderer) {
        shader.bind()
        vao.bind()
        
        light.updateParameters(for: shader)
        camera.updateParameters(for: shader)
        
        renderer.render(vbo.vertexCount, as: .triangles)
        
        vao.unbind()
    }
    
    mutating func delete() {
        vao.delete()
        vbo.delete()
        tbo.delete()
        shader.delete()
    }
}

The Light and Camera types don't contain complicated code, so we'll be brief.  The key aspect of the code is how we handle setting and updating uniforms.  To avoid looking up uniform locations every time we draw, we capture those locations once and reuse them for updates.  Using this method, we can restrict uniform updates to only those values that have changed.

struct Light {
    private enum Parameter: String {
        case color = "light.color"
        case position = "light.position"
        case ambientStrength = "light.ambient"
        case specularStrength = "light.specStrength"
        case specularHardness = "light.specHardness"
    }
    
    var color: [GLfloat] = [1.0, 1.0, 1.0] {
        didSet {
            parametersToUpdate.append(.color)
        }
    }
    var position: [GLfloat] = [0.0, 1.0, 0.5] {
        didSet {
            parametersToUpdate.append(.position)
        }
    }
    var ambientStrength: GLfloat = 0.25 {
        didSet {
            parametersToUpdate.append(.ambientStrength)
        }
    }
    var specularStrength: GLfloat = 3.0 {
        didSet {
            parametersToUpdate.append(.specularStrength)
        }
    }
    var specularHardness: GLfloat = 32 {
        didSet {
            parametersToUpdate.append(.specularHardness)
        }
    }
    
    private var shaderParameterLocations = [GLuint : [Parameter : Int32]]()
    private var parametersToUpdate: [Parameter] = [.color,
                                                   .position,
                                                   .ambientStrength,
                                                   .specularStrength,
                                                   .specularHardness]
    
    mutating func attach(toShader shader: Shader) {
        let shader = shader.id
        var parameterLocations = [Parameter : Int32]()
        
        parameterLocations[.color] = glGetUniformLocation(shader, Parameter.color.rawValue)
        parameterLocations[.position] = glGetUniformLocation(shader, Parameter.position.rawValue)
        parameterLocations[.ambientStrength] = glGetUniformLocation(shader,
                                                     Parameter.ambientStrength.rawValue)
        parameterLocations[.specularStrength] = glGetUniformLocation(shader,
                                                     Parameter.specularStrength.rawValue)
        parameterLocations[.specularHardness] = glGetUniformLocation(shader,
                                                     Parameter.specularHardness.rawValue)
        
        shaderParameterLocations[shader] = parameterLocations
    }
    mutating func updateParameters(for shader: Shader) {
        if let parameterLocations = shaderParameterLocations[shader.id] {
            for parameter in parametersToUpdate {
                switch parameter {
                case .color:
                    if let location = parameterLocations[parameter] {
                        glUniform3fv(location, 1, color)
                    }
                case .position:
                    if let location = parameterLocations[parameter] {
                        glUniform3fv(location, 1, position)
                    }
                case .ambientStrength:
                    if let location = parameterLocations[parameter] {
                        glUniform1f(location, ambientStrength)
                    }
                case .specularStrength:
                    if let location = parameterLocations[parameter] {
                        glUniform1f(location, specularStrength)
                    }
                case .specularHardness:
                    if let location = parameterLocations[parameter] {
                        glUniform1f(location, specularHardness)
                    }
                }
            }
            parametersToUpdate.removeAll()
        }
    }
                }
            }
            parametersToUpdate.removeAll()
        }
    }
}
struct Camera {
    private enum Parameter: String {
        case position = "view"
        case projection = "projection"
    }
    
    var name: String = "Camera"
    var position = FloatMatrix4() {
        didSet {
            parametersToUpdate.insert(.position)
        }
    }
    var projection = FloatMatrix4() {
        didSet {
            parametersToUpdate.insert(.projection)
        }
    }
    
    private var shaderParameterLocations = [GLuint : [Parameter : Int32]]()
    private var parametersToUpdate: Set<Parameter> = [.position, .projection]
    
    mutating func attach(toShader shader: Shader) {
        let shader = shader.id
        var parameterLocations = [Parameter : Int32]()
        
        parameterLocations[.position] = glGetUniformLocation(shader, Parameter.position.rawValue)
        parameterLocations[.projection] = glGetUniformLocation(shader,
                                                 Parameter.projection.rawValue)
        
        shaderParameterLocations[shader] = parameterLocations
    }
    mutating func updateParameters(for shader: Shader) {
        if let parameterLocations = shaderParameterLocations[shader.id] {
            for parameter in parametersToUpdate {
                switch parameter {
                case .position:
                    if let location = parameterLocations[parameter] {
                        glUniformMatrix4fv(location, 1, GLboolean(GL_FALSE),
                            position.columnMajorArray())
                    }
                case .projection:
                    if let location = parameterLocations[parameter] {
                        glUniformMatrix4fv(location, 1, GLboolean(GL_FALSE),
                            projection.columnMajorArray())
                    }
                }
            }
            parametersToUpdate.removeAll()
        }
    }
}
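The pattern Light and Camera share (a didSet observer marks a parameter dirty, and updateParameters flushes only what's dirty) can be sketched without any OpenGL. The `Dial` type here is a hypothetical stand-in for a uniform-owning struct; note that, as with Camera, a Set avoids flushing the same parameter twice:

```swift
import Foundation

// A minimal sketch of the dirty-flag pattern.
struct Dial {
    private enum Parameter { case volume, balance }

    var volume: Float = 1.0  { didSet { parametersToUpdate.insert(.volume) } }
    var balance: Float = 0.0 { didSet { parametersToUpdate.insert(.balance) } }

    // Everything starts dirty so the first flush uploads all values
    // (didSet does not fire for the default values assigned at init).
    private var parametersToUpdate: Set<Parameter> = [.volume, .balance]

    // Stands in for updateParameters(for:); returns how many parameters
    // actually needed uploading.
    mutating func flushUpdates() -> Int {
        let flushed = parametersToUpdate.count
        parametersToUpdate.removeAll()   // nothing is dirty until a didSet fires
        return flushed
    }
}

var dial = Dial()
print(dial.flushUpdates())   // 2 -- initial flush uploads everything
print(dial.flushUpdates())   // 0 -- nothing changed since
dial.volume = 0.5
print(dial.flushUpdates())   // 1 -- only volume was dirtied
```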

Right.  Now that we have all of the pieces in place, let's return to our data source protocol, GraphicViewDataSource.  We'll use this protocol to ensure a view controller can load, prepare, and "draw" a scene.

protocol GraphicViewDataSource {
    func loadScene()
    func prepareToRender(_ scene: SceneName, for time: Double)
    func render(_ scene: SceneName, with renderer: Renderer)
}

Note that the render(_:_:) method has a Renderer as a parameter.  This is what allows the scene to be drawn into the NSOpenGLContext (our Renderer).  Now let's make our SwiftOpenGLViewController conform to the protocol.


class SwiftOpenGLViewController: NSViewController, GraphicViewDataSource {
    @IBOutlet weak var interactiveView: SwiftOpenGLView!
    var scenes = [String : Scene]()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        scenes["Scene"] = Scene(named: "Scene")
        interactiveView.dataSource = self
    }
    
    func loadScene() {
        scenes["Scene"]?.load(into: interactiveView)
    }
    
    func prepareToRender(_ scene: SceneName, for time: Double) {
        scenes[scene]!.update(with: Float(time))
    }
    
    func render(_ scene: SceneName, with renderer: Renderer) {
        scenes[scene]!.draw(with: renderer)
    }

}

We also added a Dictionary of Scenes that we can draw into the view.  For now, it'll only contain one, but it prepares us for the next extension of our project.  We fill the Dictionary in viewDidLoad() with a Scene, but the Scene is essentially empty.  It doesn't actually contain drawable content until we call the Scene's load(_:) method in loadScene().  The prepareToRender(_:_:) and render(_:_:) methods are pretty self-explanatory, but they allow for updating the model and telling the model to draw, respectively.  Now jump into SwiftOpenGLView and use the data source.  Be sure all of the Shader, VertexArrayObject, VertexBufferObject, TextureBufferObject, and Vertex data are removed from SwiftOpenGLView since we don't need them in here anymore.

final class SwiftOpenGLView: NSOpenGLView {
    var scene: SceneName?
    var displayLink: DisplayLink?
    var dataSource: GraphicViewDataSource?
    
    ...
    
    override func prepareOpenGL() {
        super.prepareOpenGL()
        
        glClearColor(0.5, 0.5, 0.5, 1.0)
        
        dataSource?.loadScene()
        displayLink?.start()
    }

    ...
    
    func drawView() -> CVReturn {
        guard let context = self.openGLContext else {
            print("Could not acquire an OpenGL context")
            return kCVReturnError
        }
        
        context.makeCurrentContext()
        context.lock()
        
        if let time = displayLink?.currentTime {
            glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
            glEnable(GLenum(GL_CULL_FACE))
            
            if let scene = scene {
                dataSource?.prepareToRender(scene, for: time)
                
                dataSource?.render(scene, with: context)
            }
        }
        
        context.flushBuffer()
        context.unlock()
        
        return kCVReturnSuccess
    }
    
    deinit {
        displayLink?.stop()
    }
}
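The view/data-source handshake above can be exercised without AppKit or OpenGL at all. In this sketch every type is a hypothetical stand-in: MockView for SwiftOpenGLView, MockController for SwiftOpenGLViewController, and SceneDataSource is a trimmed, class-bound cut of GraphicViewDataSource (class-bound so the view can hold it weakly and avoid a retain cycle, a design choice rather than the tutorial's exact declaration):

```swift
import Foundation

typealias SceneName = String

protocol SceneDataSource: AnyObject {
    func prepareToRender(_ scene: SceneName, for time: Double)
}

// Stands in for SwiftOpenGLView: knows only a scene name and a data source.
final class MockView {
    var scene: SceneName?
    weak var dataSource: SceneDataSource?

    func drawFrame(at time: Double) {
        guard let scene = scene else { return }   // nothing to draw yet
        dataSource?.prepareToRender(scene, for: time)
    }
}

// Stands in for SwiftOpenGLViewController: owns the scenes, records updates.
final class MockController: SceneDataSource {
    private(set) var updates = [(scene: SceneName, time: Double)]()
    func prepareToRender(_ scene: SceneName, for time: Double) {
        updates.append((scene: scene, time: time))
    }
}

let view = MockView()
let controller = MockController()
view.dataSource = controller

view.drawFrame(at: 0.0)     // no scene assigned yet: the call is skipped
view.scene = "Scene"
view.drawFrame(at: 0.016)

print(controller.updates.count)   // 1
```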

With that, we've completed our move to MVC.  From here, we'll re-implement the ability to take user input in a way that is far more extendable than before.  Until then... enjoy your 3D cube.  Hint: if you add a call to one of FloatMatrix4's rotate methods, you'll be able to see the cube a little better.


I hope these tutorials are helping you!  If you have questions, find errors, feel that something is wrong or confusing, please let me know in the comments below.  Help me make these tutorials better!

The files corresponding to this project target are located here on GitHub.

Comments

  1. Myles,

    This question is really off-topic but I'm hoping that you can help me with it. I've been all over Stack Overflow and the web in general but can't find any Mac-specific information.

    In the application I'm working on I need to draw a "waterfall" display. It's done by drawing a single pixel height line at the top of the view and then scrolling down that line before drawing the next line. This is repeated over and over with a new line added at the top and old lines vanishing out the bottom of the view. It seems like this should be doable using glBlitFramebuffer but I can't figure out the correct way to do it.

    I am able to draw the initial line. It's actually a set of points drawn with: glDrawArrays(GLenum(GL_POINTS), 0, GLint(numberOfPoints)) but I haven't been able to make it scroll.

    If you have any insight, I could use a nudge in the right direction.

    Thanks

  2. I don't know that I completely understand what you are trying to do. However, let me start by saying that OpenGL is used for drawing 3D vertices and editing them. It is used in 2D art by passing a Rect-like object through the vertex shader and then applying it as a texture to that Rect-like object. Once that drawing has been done, and the data is in the frame buffer, the individual pixel data may thereafter be altered and combined with other frame buffers.

    It sounds like you want a view that drops down from the top of the screen (i.e. Notifications center on the iPhone). You could accomplish this simply by moving the Rect down the screen, or you could consider the view your Rect, and two frame buffers one with the background and one with the foreground (the view being waterfalled) and then combine the two together such that the waterfalling view is drawn bottom to top into the background frame buffer. The first option would not give you per pixel control, but would still look very smooth and is more likely to be easily implemented and understood. The second option gives you per pixel control, but would be more difficult to handle. I have not yet gotten a handle on frame buffers myself. I'm learning OpenGL as I am writing these tutorials and hoping to make the road easier for those that follow. I hope that helps, though I am not sure it is what you were looking for in an answer. If I misunderstood your question or issue, please let me know.

  3. Myles,

    Thanks very much for trying to help. I'm doing just what you described, "learning OpenGl as you write these tutorials". I wrote my question after spending a long day trying, unsuccessfully, to figure out my issue. The next day (as usual) things were clearer and I was able to figure it out.

    The effect I wanted is common in spectrum analyzers and is known there as a "waterfall" display. Within a view you draw a line of dots at the top of the view that represent the strength of a signal at a range of frequencies (from lowest to highest across the view). Then at some interval you scroll that line down (the width of the line) and add a new line. The effect is that you get a history of the signal strength at each frequency shown by the view. Old values scroll out the bottom of the view.

    Here's the "draw" part of the code that finally worked.

    // clear the view if this is the initial draw
    if _shouldClear { glClear(GLbitfield(GL_COLOR_BUFFER_BIT)) }
    _shouldClear = false

    // select the Program & bind the VAO
    glUseProgram(_shaders[0].program!)
    glBindVertexArray(_vaoHandle)

    // scroll the display down one pixel
    glBlitFramebuffer(GLint(0), GLint(1), GLint(frame.width), GLint(frame.height), // from
    0, GLint(0), GLint(frame.width), GLint(frame.height - 1), // to
    GLenum(GL_COLOR_BUFFER_BIT), GLenum(GL_LINEAR))

    // draw the Waterfall points
    glDrawArrays(GLenum(GL_POINTS), 0, GLint(intNumberOfBins))

    // Unbind the VAO
    glBindVertexArray(0)

    // swap the buffer and unlock the context
    CGLFlushDrawable(openGLContext!.CGLContextObj)
    CGLUnlockContext(openGLContext!.CGLContextObj)

    I'm still dealing with minor issues like what happens when you resize the window containing my NSOpenGLView but sooner or later I'll work those out as well. Apparently (at least in the Mac) the frame buffer is accessible when you have the context locked and therefore you can do glBlitFrameBuffer(0, ...) operations.

    Please keep up the good work, learn some more and then share it with the rest of us.

    Thanks,
    Doug

  4. Now I understand what you were trying to do. I'm glad you got it working. It's almost a histogram view, but you're not separating out the intensity of RGB. As far as resizing is concerned, the view object itself resizes automatically if you have applied auto layout in IB (Interface Builder). I realize you already know that (having a background in Cocoa), but I add that for other readers. Then to tell OpenGL to use the entirety of the frame, you have to tell the context how big the drawing space is with glViewport(). Finally, to make things look right (proportionate), you need to supply a newly calculated projection matrix. Then your view will resize as expected -- even go full screen. Note that you don't actually have to implement NSOpenGLView's reshape(). If you get "flickering" or whiteout of the view upon resizing, it's because you are using your custom draw() method, but you have placed a call to it in the view's drawRect(_:) method. drawRect(_:) is called when the window is resized, and if it is called but a new context is not requested, you get a view whose context has nothing in it -- as though OpenGL weren't connected and thus a white/gray screen like when you don't call CGLFlushDrawable(_:).

    In regard to contexts, when using double buffering it is required to lock the focus before making changes to state (i.e. accessing a frame buffer). You don't want too many hands in the pot as it were, or cooks in the kitchen, haha. Locking the focus ensures that only one thread is working on a given context at any time. Now there may be a way with GCD (Grand Central Dispatch) to do things a little differently in a multithreaded environment, but I have very little experience there, unfortunately. Something I'll look into in the future to maximize app performance. For now, lock those contexts.

