What format does the wxPython Image object expect for data when instantiating it using the wx.Image(width, height, data) constructor?
I am trying to generate an image by specifying each pixel. For this, I have written the little test below to see how it works, but apparently I am not supplying the data in the right format.
import numpy as np
import wx


class Test(wx.Frame):
    def __init__(self, *args, **kwargs):
        super(Test, self).__init__(*args, **kwargs)
        self.initialize()

    def initialize(self):
        self.SetSize(500, 500)
        self.SetTitle("Test")

        panel = wx.Panel(self)

        width = 500
        height = 500
        image_data = np.random.randint(0, 256, size=(width, height, 3))

        image = wx.Image(width=width, height=height, data=image_data)
        print(image)

        bitmap = image.ConvertToBitmap()
        wx.StaticBitmap(panel, bitmap=bitmap, size=(500, 500))


def main():
    app = wx.App()
    window = Test(None, style=wx.DEFAULT_FRAME_STYLE ^ wx.RESIZE_BORDER)
    print(type(window))
    window.Show()
    app.MainLoop()


if __name__ == "__main__":
    main()
This code opens a window displaying a striped, colorful image made up of black, red, blue and green pixels. Instead, I would have expected every pixel to be a random colour (not just red, blue and green) and far fewer pixels that are pitch black. The documentation on the wxPython site and on the original wxWidgets site only says that "data" ought to be in "RGB format", which I thought I had supplied. What am I doing wrong here?
Edit: Example output of the code above
Solution 1:
As one of the comments has already mentioned, the wxWidgets documentation for the underlying C++ implementation asks for an unsigned char array. In essence, the Image object expects its data as a flat buffer in which every pixel is given by three bytes, one for each channel of the RGB image.
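For illustration, here is a minimal sketch (not from the question itself) of a hand-built image consisting of one red pixel next to one green pixel, encoded as exactly six bytes:

import wx

# Hypothetical 1x2 image: each pixel contributes three unsigned bytes (R, G, B),
# so two pixels make a buffer of exactly six bytes.
data = bytes([255, 0, 0,    # red pixel
              0, 255, 0])   # green pixel
img = wx.Image(2, 1, data)  # width=2, height=1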
In practice, either a bytes object or, for example, a numpy array with dtype ubyte (numpy.uint8) will work. Passing an array of default-sized numpy integers instead causes each multi-byte int to be reinterpreted as several separate bytes, which produces the striped image shown in the original post.
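A minimal sketch of the fix for the question's code, assuming the rest of initialize() stays unchanged: generate the random values directly as unsigned bytes, so that every pixel already occupies exactly three bytes.

# Rows of the buffer correspond to image rows, so the array is shaped
# (height, width, 3); with the question's square 500x500 image this happens
# to match the original shape, but the dtype is the crucial change.
image_data = np.random.randint(0, 256, size=(height, width, 3), dtype=np.uint8)
image = wx.Image(width, height, image_data.tobytes())
bitmap = image.ConvertToBitmap()

An existing array of default ints can likewise be converted with image_data.astype(np.uint8) before it is handed to wx.Image.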