Sanctimonia Updates and a Plea for Assistance

A brief summary of progress since Withstand the Fury’s last update includes (in chronological order):

  • Added clouds to the horizon bar.
  • Projectiles are cast in the direction your character is facing.
  • Added destructive particles.
  • Added secondary and tertiary (ad infinitum) particles.
  • Implemented render queue for proper overlay of graphical objects.
  • Added particle lighting.
  • Improved particle and object shadows by adding multiple types and variations.
  • Created weather system and associated client-side rain particles and cloud shadows.
  • Created an audio engine with three variations of open water and crashing waves, plus wind and rainfall.
  • Numerous bug fixes and partially implemented new features.

As always, complete weekly summaries may be found on my web site here.

I’m programming Sanctimonia in GAMBAS, an object-oriented BASIC dialect native to Linux and recently ported to OSX. Historically the lead developer of GAMBAS, Benoît Minisini, and the programmer responsible for the SDL component, Laurent Carlier, have been very helpful in allowing me to progress with my game, both by improving GAMBAS itself toward my ends and by helping me with the intricacies of the language’s syntax and logic.

Even so, I have run into some limitations of their incomplete implementation of SDL in GAMBAS. Recent posts to the mailing list have largely fallen on what seem to be deaf ears, whether bug reports or questions about functionality. I no longer have time to wait for a reply that may not be coming. I’ve given well over $300 in Euros to these two developers and am quite financially poor myself, so I’ve made no small sacrifice to propel my project onward.

I need help in understanding and potentially correcting specific issues with the SDL component of GAMBAS in order to move forward with my project.

This is the line I didn’t want to cross, but I think I may have to. If you are able and willing, I’d like to work out an arrangement between us so that we may both be rewarded for our efforts. Sanctimonia could become a blip on the radar of indie gaming sites, or it could displace the most profitable of online games and redefine the genre. We’d need to work out something in the middle so we’re both protected whatever the outcome.

If you have a decent understanding of C, preferably with some Linux experience and general good programming practices, and are interested in helping this project (and, by extension, GAMBAS), please let me know.

14 Responses

  1. fearyourself says:

    Hmmm… I do understand the issues but what is missing from your post is:

    – What needs to be done? How much work is necessary?
    – Why don’t the Gambas people help anymore? Are they giving up on this part of their tool?

    Finally, I think the problem might be that you offered money. Potentially that makes people care only if they get paid, whereas free help, though more complicated or rare to find, is more stable once people are on board.

  2. Sanctimonia says:

    I know generally what needs to be done but not how much work it would take. I would guess that researching how the existing objects and methods operate would require the most time, and that implementing changes would be somewhere between light and moderate difficulty. The changes would need to be submitted upstream so they could be incorporated into the project. I’m sure Benoît would be overjoyed to see someone new making commits.

    Here’s an example. Currently the gb.sdl component supports blitting an image in RAM to an SDL surface, which is the screen/window. I think these blits are from RAM to video RAM, so they’re partially accelerated. This works pretty well. Blitting image-to-image however is constrained to RAM operations. There is no way to load images into video RAM then have one blitted to the other, then finally to the SDL surface which would keep all operations in hardware. If there was a way to create an SDL surface the same way an image can be created in RAM, I could create “scratch areas” in hardware for frame composition and blit the final frame to the SDL buffer. This would increase the frame rate beyond belief, as currently there are many image-to-image blits.
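
    Roughly, at the plain SDL 1.2 C level, what I’m picturing is something like this (just a sketch of the concept, not anything gb.sdl exposes today, and SDL_HWSURFACE is only ever a request that the driver may ignore):

    #include <SDL/SDL.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32,
                                               SDL_HWSURFACE | SDL_DOUBLEBUF);

        /* A "scratch area" for composing the frame off-screen,
           using the same pixel format as the window. */
        SDL_Surface *scratch = SDL_CreateRGBSurface(SDL_HWSURFACE, 640, 480,
                                                    screen->format->BitsPerPixel,
                                                    screen->format->Rmask,
                                                    screen->format->Gmask,
                                                    screen->format->Bmask,
                                                    screen->format->Amask);

        /* Stand-in for a sprite/tile loaded elsewhere. */
        SDL_Surface *tile = SDL_CreateRGBSurface(SDL_SWSURFACE, 64, 64,
                                                 screen->format->BitsPerPixel,
                                                 screen->format->Rmask,
                                                 screen->format->Gmask,
                                                 screen->format->Bmask,
                                                 screen->format->Amask);
        SDL_FillRect(tile, NULL, SDL_MapRGB(tile->format, 255, 0, 0));

        SDL_Rect dst = { 100, 100, 0, 0 };
        SDL_BlitSurface(tile, NULL, scratch, &dst);    /* image-to-image blit  */
        SDL_BlitSurface(scratch, NULL, screen, NULL);  /* final blit to window */
        SDL_Flip(screen);
        SDL_Delay(2000);

        SDL_FreeSurface(tile);
        SDL_FreeSurface(scratch);
        SDL_Quit();
        return 0;
    }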

    Benoît and Laurent do help, but they (especially Benoît) are pretty busy and I’m far from their only customer. Benoît just yesterday fixed a bug in gb.sdl.sound for me, for example. Benoît’s trying to release GAMBAS 3, as it’s been in alpha for a while, so he’s primarily concerned with killing bugs and not modifying/expanding functionality. Not sure what Laurent’s story is. So it’s not that they don’t care, but they have bigger fish to fry than the SDL component at the moment. My biggest fish right now is SDL, as my frame rate is going farther down the toilet as the game progresses.

    I donated to Benoît and Laurent both to show gratitude for their good work and in the hope it would encourage their continued assistance. I’m far from their only benefactor, however, and my wishes are but a drop in the ocean.

    I’ll put together a list of things I need and post it in a moment, just so everything’s out on the table.

  3. Sanctimonia says:

    Here’s the list, short but sweet:

    (1) Allow the creation of images in video memory with support for existing image methods from the gb.image, gb.image.effect, gb.image.imlib and gb.image.io components.

    (2) Images created in video memory should use the OpenGL backend to support the PaintImage, DrawImage and Rotate methods.

    (3) The Width and Height parameters of the PaintImage and DrawImage methods should perform scaling using the OpenGL backend.

    I’ve noticed that GAMBAS somehow uses precedence in the way it handles the methods provided by its various components. For example, if gb.image provides one of the same methods as gb.image.imlib (PaintImage, for example) and both components are selected in a project, the method from gb.image.imlib will be used, overriding the slower one from gb.image. I’m not sure how this is achieved but it is something to be aware of.

  4. fearyourself says:

    I’ll go point by point; your text is in quotes:

    “I know generally what needs to be done but not how much work it would take. I would guess that researching how the existing objects and methods”

    Yeah, I remember when I went into Gambas; it wasn’t so easy to do, but not super difficult either.

    “The changes would need to be submitted upstream so they could be incorporated into the project. I’m sure Benoît would be overjoyed to see someone new making commits.”

    Not sure about that, people tend to not like external commits.

    “Currently the gb.sdl component supports blitting an image in RAM to an SDL surface, which is the screen/window. I think these blits are from RAM to video RAM, so they’re partially accelerated”

    That really depends how the surface is defined. Generally it is done at the creation of the SDL surface. So if it is software based, you are correct. Otherwise, if it is already in video ram, there is nothing to be done. You’d have to look at how Gambas creates their surfaces.
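
    In plain SDL 1.2 C terms, that decision is made by the flags you pass when the surface is created, and you can check afterwards what you actually got. A small sketch, nothing Gambas-specific:

    #include <stdio.h>
    #include <SDL/SDL.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        /* The hardware/software decision is made here, by the flags you request. */
        SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32,
                                               SDL_HWSURFACE | SDL_DOUBLEBUF);
        /* SDL_HWSURFACE is only a request; check what you were actually given. */
        printf("%s surface\n",
               (screen->flags & SDL_HWSURFACE) ? "hardware" : "software");
        SDL_Quit();
        return 0;
    }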

    “Blitting image-to-image however is constrained to RAM operations. There is no way to load images into video RAM then have one blitted to the other, then finally to the SDL surface which would keep all operations in hardware”

    That is correct except if the surfaces are created in video ram from the start.

    “If there was a way to create an SDL surface the same way an image can be created in RAM, I could create “scratch areas” in hardware for frame composition and blit the final frame to the SDL buffer”

    Well technically this isn’t really necessary. Normally, SDL allows you to do double buffering, so you’ve got that buffer, it is the screen surface. When you flip, you go from one to the other.

    “This would increase the frame rate beyond belief, as currently there are many image-to-image blits.”

    It really depends, benchmarking would be necessary to assess that. It could but it might not. Generally, it’s better to first reduce the number of blits you are doing, then worry about this.

    “My biggest fish right now is SDL, as my frame rate is going farther down the toilet as the game progresses”

    Have you thought of threading, parallelizing the code, etc. ?

    Now for your list:

    “(1) Allow the creation of images in video memory with support for existing image methods from the gb.image, gb.image.effect, gb.image.imlib and gb.image.io components.”

    Like I said, if you request the SDL surface to be in hardware, then it’s the runtime’s decision to put it there or not.

    “(2) Images created in video memory should use the OpenGL backend to support the PaintImage, DrawImage and Rotate methods.”

    Now that’s way more tricky. Are you doing SDL or OpenGL ? You can’t really mix them up that easily. Normally, people either do SDL with blitting, flipping, etc; or they do SDL + OpenGL, using SDL to open the window and handle events and do pure OpenGL code for the rendering. I, for example, do the latter.

    As far as I know, I don’t see how you would mix SDL blitting + OpenGL except to do this:

    – Prepare your SDL image
    – Blit what you want on it

    But then, if you’re using OpenGL, you have to:
    – Generate an OpenGL texture
    – Send off the SDL image to the OpenGL back end
    – Do your OpenGL code to make it the new image

    The passage between SDL and OpenGL will be costly to the point that you will be crying.

    “(3) The Width and Height parameters of the PaintImage and DrawImage methods should perform scaling using the OpenGL backend.”

    If by scaling you mean changing the size of the images, this is automatic as long as they are OpenGL textures. There is nothing to be done here. However, this brings me back to my previous point.

    Basically, from what I can tell, you want performance. To get that performance, you surely will have to change the way you do your rendering. Either by blitting much less, or by going into an OpenGL way of doing things, which is generally quite different from what would be done in a pure SDL world.

    Hopefully, this helps a bit,
    Jc

  5. Sanctimonia says:

    Thanks for the completely thorough response. Wow. 🙂

    “Not sure about that, people tend to not like external commits.”

    From what I’ve seen (I read the user and dev mailing lists every day), Benoît really does love people helping, both with bug reports and code contributions. It’s a big project with a big user base, and all the users are programmers too. I’ve never really seen him get snooty or pissed, even when others throw the rare tantrum. To be safe simply sending an email to the mailing list (which I could do) with the patch and explanation would probably be all it took to get it included. I could even add an example project to show that it works. If formatting or other convention deviations were an issue he or another dev (at least two others are interested in OpenGL and SDL) could make the code compliant.

    “That really depends how the surface is defined. Generally it is done at the creation of the SDL surface. So if it is software based, you are correct. Otherwise, if it is already in video ram, there is nothing to be done. You’d have to look at how Gambas creates their surfaces.”

    I think the SDL surface used as the rendering window is created in hardware by default and without option, but that GAMBAS allows copying images in system memory to the surface. A bit of a hardware/software hybrid to preserve component method interoperability, especially considering there isn’t much “hardware” there. I researched the Window and Screen classes and they have no properties for controlling whether they use hardware or software. Laurent had this to say about SDL:

    “SDL 1.2.x library itself isn’t hardware accelerated under linux. The SDL component use only the SDL library to keep window/images in RAM, but all drawing in the window is done with OpenGL (using texturing), so image are cached (send only one time in the VRAM). If you modify the Image continuously, the Image will need to be load in VRAM before displaying continously also.

    Currently blitting image to image is slow because done in software, but i’ve plans to do this with OpenGL (so in VRAM directly) through the Framebuffer Object extension.”

    He didn’t say how “the Image will need to be load in VRAM before displaying continously”, probably because it’s not possible right now.
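
    For what it’s worth, I gather the Framebuffer Object route he mentions would look roughly like this at the raw C level (function names from the GL_EXT_framebuffer_object extension; this is only my sketch of the idea, nothing gb.sdl exposes yet):

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Sketch: an off-screen "image" living in VRAM via GL_EXT_framebuffer_object.
       Assumes a current GL context and that the EXT entry points are available.
       Anything drawn while the FBO is bound lands in the texture instead of the
       window, which is exactly the image-to-image case kept in hardware. */
    GLuint make_scratch_texture(int w, int h, GLuint *fbo_out)
    {
        GLuint tex, fbo;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* empty image in VRAM */

        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, tex, 0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);     /* back to the window  */

        *fbo_out = fbo;
        return tex;
    }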

    As I understand it he also told me that SDL was basically a wrapper for OpenGL that provided basic blitting functionality. GAMBAS has what appears to be pretty robust OpenGL support, though I’m not sure how that relates to expanding the functionality of its SDL component (I’m guessing it could only make it easier). Here is the documentation for the OpenGL and SDL components, etc., that are currently supported:

    gambasdoc.org/help/comp?v3

    Docs in italics with a prepended ./ are not yet documented.

    It doesn’t matter to me if I use SDL or OpenGL as long as I’m able to hardware accelerate operations like blitting, scaling, and rotation in a way that is compatible with native software methods such as DrawAlpha. OpenGL is extremely low level, so if I used it directly I’d need to create procedures that accomplish the basic functions I require, effectively hiding the low level nature of OpenGL to avoid a complete rewrite of the rendering engine.

    “That is correct except if the surfaces are created in video ram from the start.”

    I think the only surface created in video RAM is the render window/screen. There’s basically only one SDL surface from what I can tell.

    “Well technically this isn’t really necessary. Normally, SDL allows you to do double buffering, so you’ve got that buffer, it is the screen surface. When you flip, you go from one to the other.”

    I might be able to pull things off with only two surfaces, but the compositing of the various layers is pretty complex and involves multiple sublayers of varying sizes. Even so, I don’t think the current SDL implementation supports double buffering. I tried to create two screens once just for that purpose and it wasn’t too happy about it. I don’t think the surface supports read operations either, so ultimately I’d be using images in RAM as the source for a write.

    “It really depends, benchmarking would be necessary to assess that. It could but it might not. Generally, it’s better to first reduce the number of blits you are doing, then worry about this.”

    That is true and I agree completely. Believe me that I don’t write inefficient code and wonder why it’s slow. If I do I always optimize it later, even if it requires a major overhaul or changes in logic. I’ve done everything I can to make things more efficient short of removing major features. I have compared FPS rates between experiments extensively. What’s killing me is software scaling, rotation, and image-to-image blits. The fastest thing I can do is blit an image directly to the window/screen surface, which is passing an image in RAM to an image in video RAM. Often this is accompanied by software scaling and/or rotation, which really kills the frame rate (even on small images).

    “Have you thought of threading, parallelizing the code, etc. ?”

    I have, though mostly for the server and not the rendering engine. I might be able to do it with the rendering engine, though I’d have to run multiple GAMBAS applications (currently no native support for multithreading) and figure out some way to get them communicating (maybe local loopback network interface or the file system to pass image pointers and completion states). That’s something I’d consider once hardware acceleration was working and were I to need abnormally large resolutions to be rendered (greater than 1920×1080).

    “Like I said, if you request the SDL surface to be in hardware, then it’s the runtime’s decision to put it there or not.”

    I’m pretty sure it is in hardware by default, as image-to-surface blits are much faster than image-to-image blits. Sadly there are no options when creating a window/screen using the SDL component, or properties to determine if it is in hardware or software.

    “Now that’s way more tricky. Are you doing SDL or OpenGL ? You can’t really mix them up that easily. Normally, people either do SDL with blitting, flipping, etc; or they do SDL + OpenGL, using SDL to open the window and handle events and do pure OpenGL code for the rendering. I, for example, do the latter.”

    I think the GAMBAS SDL component implementation is extremely barebones, judging from Laurent’s apologies and its limited functionality. I think he’s created a hybrid of a partial SDL implementation using OpenGL as the backend for its operations.

    “As far as I know, I don’t see how you would mix SDL blitting + OpenGL except to do this:

    – Prepare your SDL image
    – Blit what you want on it

    But then, if you’re using OpenGL, you have to:
    – Generate an OpenGL texture
    – Send off the SDL image to the OpenGL back end
    – Do your OpenGL code to make it the new image

    The passage between SDL and OpenGL will be costly to the point that you will be crying.”

    Hmmm. I think I don’t know enough about the relationship between SDL and OpenGL. My understanding is that it’s an OpenGL/DirectX wrapper with event handling as you mentioned. They should go hand in hand, whether transparently or explicitly (though I seriously don’t know shit about them so please excuse me!). I think the SDL component implementation is doing what you’re saying transparently, but in a severely limited way.

    “If by scaling you mean changing the size of the images, this is automatic as long as they are OpenGL textures. There is nothing to be done here. However, this brings me back to my previous point.”

    That would be true except that currently images seem to be created in software, scaled and rotated in software, and finally written to the SDL window/screen surface which is in hardware and using OpenGL. Maybe the properties of the window/screen surface could be extended to the creation of images (Dim someimage As Surface = New Surface[128, 128]), so they could be written to each other using regular OpenGL texture methodology.

    “Basically, from what I can tell, you want performance. To get that performance, you surely will have to change the way you do your rendering. Either by blitting much less, or by going into an OpenGL way of doing things, which is generally quite different from what would be done in a pure SDL world.”

    Right now I’m doing a reasonable number of straight image blits, a small number of scaling and rotation functions via normal image methods, and a moderate number of DrawAlpha methods which must be done in software. The DrawAlpha method is extremely fast, so that is perfectly acceptable. What’s killing me are the straight blits, scaling and rotations.

    From what I gather from your comments, my feeble conjecture and Laurent’s comments, we’re working with possible incompatibilities between SDL’s accelerated operations, OpenGL’s accelerated operations and GAMBAS’s software image operations. What’s a mystery to me is how the SDL component of GAMBAS apparently uses OpenGL to perform its window/screen writes while maintaining interoperability with GAMBAS’s software methods such as DrawAlpha. The supposedly interoperable (though limited) relationship between GAMBAS’s native image processing, SDL and OpenGL may be the glue that binds these areas together.

    Maybe I should shoot this thread to Benoît and Laurent to see what their input is?

    And now to end what has taken me longer to write than any post or reply in my life! Holy crap. 🙂

    • WtF Dragon says:

      I either feel twice as smart or ten times dumber* reading the discussion you two are having.

      Can’t decide which, though.

      * because I’m an intellectual dwarf (but: “very formidable over short distances!”) by comparison, not because it’s a dumb conversation…

  6. fearyourself says:

    WTF: 🙂

    Sanctimonia: Here are my answers to yours.

    “To be safe simply sending an email to the mailing list (which I could do) with the patch and explanation would probably be all it took to get it included. I could even add an example project to show that it works. If formatting or other convention deviations were an issue he or another dev (at least two others are interested in OpenGL and SDL) could make the code compliant.”

    Yeah, that’s generally how it works in most open sourced projects.

    “I think the SDL surface used as the rendering window is created in hardware by default and without option, but that GAMBAS allows copying images in system memory to the surface.”

    Yes, so does SDL. Probably Gambas just does some wrapper work around SDL.

    “SDL 1.2.x library itself isn’t hardware accelerated under linux. The SDL component use only the SDL library to keep window/images in RAM, but all drawing in the window is done with OpenGL (using texturing), so image are cached (send only one time in the VRAM). If you modify the Image continuously, the Image will need to be load in VRAM before displaying continously also.

    Currently blitting image to image is slow because done in software, but i’ve plans to do this with OpenGL (so in VRAM directly) through the Framebuffer Object extension.”

    Yeah, but then you move away from plain SDL and actually start writing your own OpenGL version of how to do it.

    “It doesn’t matter to me if I use SDL or OpenGL as long as I’m able to hardware accelerate operations like blitting, scaling, and rotation in a way that is compatible with native software methods such as DrawAlpha.”

    Yeah, it seems to me that you want to do OpenGL but just don’t have the motivation to jump to it. If all you are doing is blitting images and not creating new images on the fly all the time, then you should probably work on a simple wrapper of the OpenGL version to allow it. All you do is have your textures in VRAM all the time and you can blit them on the buffered screen in the right order.

    ” OpenGL is extremely low level, so if I used it directly I’d need to create procedures that accomplish the basic functions I require, effectively hiding the low level nature of OpenGL to avoid a complete rewrite of the rendering engine.”

    Exactly. Most likely, that is what you’d have to do, or what you are essentially asking someone to do in the SDL component. If performance is what you’re missing in the SDL world and you want to rotate and scale a lot, you go into OpenGL mode; there is almost no cutting corners there.

    “I might be able to pull things off with only two surfaces, but the compositing of the various layers is pretty complex and involves multiple sublayers of varying sizes. Even so, I don’t think the current SDL implementation supports double buffering. I tried to create two screens once just for that purpose and it wasn’t too happy about it. I don’t think the surface supports read operations either, so ultimately I’d be using images in RAM as the source for a write.”

    Careful, double buffering doesn’t mean you have both screens available. You essentially have one single screen surface pointer, and as you are writing (blitting) to one buffer, the card is displaying the other. Then you call a “flip” function: your newly drawn buffer goes off to be displayed, and you get back the one that was being shown. You then start blitting again on that one while the video card displays the frame you just finished.
    Modern cards sometimes even allow more than just 2 buffers, allowing the program to prepare more screens in advance.
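
    In SDL 1.2 C terms a frame is just this (a sketch; it assumes the window was opened with the SDL_DOUBLEBUF flag):

    #include <SDL/SDL.h>

    /* Sketch: one double-buffered frame. There is only ever one screen surface
       pointer; SDL_Flip() swaps which buffer it refers to. */
    void render_frame(SDL_Surface *screen, SDL_Surface *sprite, SDL_Rect *pos)
    {
        SDL_FillRect(screen, NULL, 0);                /* clear the back buffer */
        SDL_BlitSurface(sprite, NULL, screen, pos);   /* compose this frame    */
        SDL_Flip(screen);                             /* swap front and back   */
    }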

    “What’s killing me is software scaling, rotation, and image-to-image blits. The fastest thing I can do is blit an image directly to the window/screen surface, which is passing an image in RAM to an image in video RAM. Often this is accompanied by software scaling and/or rotation, which really kills the frame rate (even on small images).”

    Yeap, that is logical. As long as you have X images that you want to blit from, you could send them all off to video ram and use OpenGL to accelerate it all.

    “I have, though mostly for the server and not the rendering engine. I might be able to do it with the rendering engine, though I’d have to run multiple GAMBAS applications (currently no native support for multithreading) and figure out some way to get them communicating (maybe local loopback network interface or the file system to pass image pointers and completion states). That’s something I’d consider once hardware acceleration was working and were I to need abnormally large resolutions to be rendered (greater than 1920×1080).”

    Then it’s over: if you have to run multiple Gambas applications, the sheer cost of getting them to talk to each other is too much to do anything interesting here.

    “I think the GAMBAS SDL component implementation is extremely barebones, judging from Laurent’s apologies and its limited functionality. I think he’s created a hybrid of a partial SDL implementation using OpenGL as the backend for its operations.”

    It sure sounds like it. I’m sorry to tell you that what you really seem to want is that wrapper around an OpenGL API. It shouldn’t be that difficult to do, but you might have to rethink certain ways you were sending things to Gambas’ rendering system.

    “Hmmm. I think I don’t know enough about the relationship between SDL and OpenGL. My understanding is that it’s an OpenGL/DirectX wrapper with event handling as you mentioned. They should go hand in hand, whether transparently or explicitly (though I seriously don’t know shit about them so please excuse me!). I think the SDL component implementation is doing what you’re saying transparently, but in a severely limited way.”

    Not exactly hand in hand. SDL normally doesn’t do things only in OpenGL, and that’s why you are paying the price. The point was to have it work even if OpenGL was not available, hence the software side of things. SDL alone works very well for programs that do blitting but no rotations or scaling. As soon as you want those, you must go to OpenGL, unless you can pre-calculate all your rotations and scalings without blowing up the memory footprint.
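
    To give an idea of the cost: pre-rendering a single 128×128 RGBA sprite at 1-degree steps is already 360 × 128 × 128 × 4 bytes, about 23 MB, before any scale steps, so it only pays off for a handful of small sprites.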

    “That would be true except that currently images seem to be created in software, scaled and rotated in software, and finally written to the SDL window/screen surface which is in hardware and using OpenGL”

    Exactly.

    “Maybe the properties of the window/screen surface could be extended to the creation of images (Dim someimage As Surface = New Surface[128, 128]), so they could be written to each other using regular OpenGL texture methodology”

    Well, in the OpenGL view of things you almost never blit to an image. You blit to the screen. So basically what you need is a simple way to turn those blits into passes over the screen in the right order.

    Now, say you want to NOT regenerate or redraw parts: you can do that with the OpenGL extensions, I believe, but I’ve never seen any real need for this in our kinds of games.

    “From what I gather from your comments, my feeble conjecture and Laurent’s comments, we’re working with possible incompatibilities between SDL’s accelerated operations, OpenGL’s accelerated operations and GAMBAS’s software image operations. What’s a mystery to me is how the SDL component of GAMBAS apparently uses OpenGL to perform its window/screen writes while maintaining interoperability with GAMBAS’s software methods such as DrawAlpha. The supposedly interoperable (though limited) relationship between GAMBAS’s native image processing, SDL and OpenGL may be the glue that binds these areas together.”

    Actually most likely not. Most likely you have to decide on this:

    A) Can you pre-create all rotations/scales without making the memory footprint explode?
    B) Do you really need any of the software elements such as the alpha function from Gambas at each frame or only at startup?
    C) How much SDL code is sprinkled in your code?

    For example, in my case, I use SDL+OpenGL in this way:
    SDL opens a window and an OpenGL context
    Textures are loaded up into SDL surfaces and then sent to OpenGL textures at the first use
    SDL does event management
    OpenGL does all the rendering

    In my case, there are only 13 files that are aware of OpenGL out of 92 and only each for like a function here or there. SDL is a little bit more present because it is used for events and so I have it in 30 files.
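
    In skeleton form the split looks like this (a sketch, not my actual code):

    #include <SDL/SDL.h>
    #include <GL/gl.h>

    /* Skeleton of that split: SDL owns the window and the events,
       OpenGL owns all the drawing. */
    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
        SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);   /* window + GL context */

        int running = 1;
        while (running) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev))                /* SDL: event handling */
                if (ev.type == SDL_QUIT)
                    running = 0;

            glClear(GL_COLOR_BUFFER_BIT);             /* OpenGL: rendering   */
            /* ...draw textured quads here... */
            SDL_GL_SwapBuffers();                     /* present the frame   */
        }
        SDL_Quit();
        return 0;
    }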

    I think you have to go there; the more important question is: are you ready?

  7. Sanctimonia says:

    Alright, I think I have a much better understanding of what is and what needs to be. And I’m as ready as I’ll ever be, haha.

    First, SDL itself is pretty barebones and the GAMBAS implementation even more so. While this surprises me, it is what it is and therefore in my case is nearly useless. I’m going to use OpenGL but will keep SDL around for window management and audio.

    “A) Can you pre-create all rotations/scales without making the memory footprint explode?”

    Not possible, as I’m rotating (and will be scaling) the entire landscape which is built dynamically from five datasets, 15 interconnective alpha blending tiles and nine 2048×2048 textures. Not to mention the water layer, which is animated and also built from several layers.
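
    (For scale: the nine base textures alone are 9 × 2048 × 2048 × 4 bytes, roughly 150 MB uncompressed assuming 32-bit RGBA, before a single rotated or scaled copy of anything, so pre-baking really isn’t an option.)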

    “B) Do you really need any of the software elements such as the alpha function from Gambas at each frame or only at startup?”

    DrawAlpha is used every frame for light sources, water and hill shading, and every time the player traverses a tile for landscape tile generation. The variations it creates are for all practical purposes unlimited.

    “C) How much SDL code is sprinkled in your code?”

    I just use it to create a window/screen, event handling and audio (gb.sdl.sound). Fortunately the gb.sdl component is compatible with gb.opengl and GAMBAS’s software image libraries, so I should be able to mix them to a limited degree (keeping DrawAlpha).

    Using the GambasGears example as a starting point, I’ve been perusing various OpenGL tutorials and have built a skeleton OpenGL project. Currently it only sets a background color and creates an untextured quad. I can’t figure out how to load a GAMBAS image into an OpenGL texture and bind it to the quad, however. Any clues about this?

    My project is here with some additional annotations:

    http://www.eightvirtues.com/sanctimonia/misc/

    Since GAMBAS’s software image libraries can write to the SDL window just like OpenGL can, I should be able to convert parts of the program one at a time over to OpenGL without breaking the entire application. Right now I’m just trying to work out the basics of OpenGL so I can plan my attack. Any insight would be appreciated.

  8. fearyourself says:

    Normally, you do something like this to load up a texture into OpenGL:

    // Have OpenGL generate a texture object handle for us
    glGenTextures (1, &texture);

    // Bind the texture object
    glBindTexture (GL_TEXTURE_2D, texture);

    // Set the texture’s stretching properties
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Edit the texture object’s image data using the information SDL_Surface gives us
    glTexImage2D (GL_TEXTURE_2D, 0, 4, img->w, img->h, 0, texture_format, GL_UNSIGNED_BYTE, img->pixels);
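
    (A note on the undeclared names above: img is an SDL_Surface pointer you have already loaded, for example with IMG_Load from SDL_image, texture is a GLuint, and texture_format is usually worked out from the surface along these lines:)

    GLenum texture_format;
    if (img->format->BytesPerPixel == 4)      // surface has an alpha channel
        texture_format = (img->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
    else
        texture_format = (img->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;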

    Jc

  9. fearyourself says:

    Then for the textures you would do something like this:

    //Set back texturing on
    glEnable (GL_TEXTURE_2D);

    glBindTexture (GL_TEXTURE_2D, texture);
    glBegin (GL_QUADS);
    glTexCoord2f (txtLeft, txtBottom);
    glVertex3f (pos.x,pos.y, 0);

    glTexCoord2f (txtRight, txtBottom);
    glVertex3f (pos.x+pos.w, pos.y,0);

    glTexCoord2f (txtRight, txtTop);
    glVertex3f (pos.x+pos.w, pos.y + pos.h, 0);

    glTexCoord2f (txtLeft, txtTop);
    glVertex3f (pos.x, pos.y + pos.h,0);
    glEnd ();

    Where glTexCoord2f and glVertex3f go hand in hand to map a point in the world (glVertex) and a texture coordinate (glTexCoord).

    Jc

  10. Sanctimonia says:

    Awesome, thank you. It’s now working pretty well and I can control multiple textures with per-pixel accuracy, including rotation.

    One thing I can’t figure out is how to allow the texture’s alpha channel to show the background. It just renders it as black, ignoring it from what I can tell. The source image does have an alpha channel, so either it’s not being loaded into the texture or it’s just not being displayed.

    Here’s what I have working so far, including a couple of lines that attempt to get alpha blending working:

    http://www.eightvirtues.com/sanctimonia/misc/opengl_texturing
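
    (My understanding is that the usual fixed-function recipe is glEnable(GL_BLEND) plus glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) before drawing the quad, assuming the texture really was uploaded with an alpha format, but something is evidently still missing on my end.)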

  11. Sanctimonia says:

    Update: I got it working. The last line here was required along with the gb.opengl.glu component:

    ‘ Set up texture.
    Gl.BindTexture(Gl.GL_TEXTURE_2D, textures[0])
    Gl.TexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR)
    Gl.TexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR)
    Gl.TexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP)
    Gl.TexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP)
    Gl.TexImage2D(t1)
    Glu.Build2DMipmaps(t1)

    I’m starting to integrate the OpenGL code into the main project now, so wish me luck. Hopefully tonight I’ll have it at 1920×1080 with a decent frame rate.

  12. Sanctimonia says:

    Actually not so much. It was revealed to me that there was a bug in GAMBAS which caused the problem, and it has now been fixed. “Glu.Build2DMipmaps(t1)” is no longer necessary to get the alpha channel working. Thanks to Tomek for finding the workaround prior to the bug fix. =)

    Now if I can figure out how to create a triangle strip (or whatever’s good for a polygonal height map) and assign texture coordinates for a grid of textures, I’ll be happy. Damn thee, OpenGL, thou hast lost an Eighth, and thy lies have brought thee low!
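
    The shape I’m picturing is roughly this at the raw C level (heights[][] and the GRID_* sizes are just placeholders, so treat it as a sketch of the vertex ordering rather than working engine code):

    #include <GL/gl.h>

    #define GRID_W 64
    #define GRID_H 64
    extern float heights[GRID_H][GRID_W];   /* placeholder height map */

    /* One row of the height map as a triangle strip; call for z = 0..GRID_H-2.
       Texture coordinates are just the grid position stretched over 0..1. */
    void draw_heightmap_row(int z)
    {
        glBegin(GL_TRIANGLE_STRIP);
        for (int x = 0; x < GRID_W; x++) {
            /* Alternate between row z and row z+1 so the strip zig-zags. */
            glTexCoord2f((float)x / (GRID_W - 1), (float)z / (GRID_H - 1));
            glVertex3f((float)x, heights[z][x], (float)z);

            glTexCoord2f((float)x / (GRID_W - 1), (float)(z + 1) / (GRID_H - 1));
            glVertex3f((float)x, heights[z + 1][x], (float)(z + 1));
        }
        glEnd();
    }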