Hi! I'm having some problems with the gfxconv stuff I'm working on (...as usual :-/ ). First off, sorry for the long letter; just delete it if you're not interested in color optimization issues.

The routine is meant to take raw 24-bit 320x400 images and convert them to the C64-style enhanced multicolor mode of Jeri's new videoboard (at the same resolution). I'm at the last stage of the optimizer algorithm. Things like reducing the number of colors, picking a suitable background color and selecting color triads for each 4x8 color block are done (and work OK as far as I can tell).

(A short explanation: the mode works exactly like the usual multicolor bitmap mode, except for the resolution and the colors. The resolution is just 'expanded'; there is no real change in the organization. As for the colors: there is a palette with 256 entries, instead of the original fixed 16-color 'palette'. Color registers (background and the others) are treated as 8-bit indexes into this palette -- all 8 bits are used, instead of 4 bits as on the original VIC. In bitmap modes the situation is similar, with one addition: the color memory values (the usual 4-bit nibbles) give the low nibble of each index, while the high nibble is taken from the upper 4 bits of the color RAM byte, and this high nibble is common to all color indexes in the respective 4x8 color block. A minimal sketch of this index composition is in the P.S. below.)

There is a problem. In MC mode, the above (last) rule means that all 3 MC colors of a color block must come from one 16-color palette chunk. (Different color blocks can select colors from different chunks, of course.) That's no problem when it's just one block, or a few -- but the rule must hold for _all_ such color blocks in the image, creating a heavy dependence between colors. (A quick calculation shows that if there are just 20 colors in the whole image, and each color appears together with every other color at least once, then with the above organization they occupy exactly 256 palette entries.) I inserted a small piece of code that listed the color dependencies in the test image, and the results are, hmmm... 'embittering'. Even with the whole image reduced to just 64 colors, some colors depend on 30-40 other colors in the map.

I think I know the 'direction', I just don't know the way. I hope someone has done similar programming tricks and could give me some help. Imagine an X*X symmetrical matrix, where X is the total number of colors in the image. A '+' in position (i,j) of the matrix means that the two colors i and j were found in the same color block of the image at least once (i.e. they're dependent). (Crosses in the (i,i) positions correspond to the fact that the i-th color was found in the image at all (i.e. it 'depends on just itself').)

The problem to be solved: from the above dependency matrix, form <=16 disjoint groups, each containing <=16 elements, while ignoring the least possible number of dependencies. (One naive greedy idea in that 'direction' is sketched in the P.P.S. below -- I suspect it isn't good enough.)

...Well, this is what's beyond me at the moment. Anyone with some ideas for a working algorithm?...

Thanks, L.
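P.S. To make the index composition above concrete, here is a minimal sketch in C (the function and parameter names are mine, just for illustration):

    /* Compose the final 8-bit palette index for one multicolor pixel:
     * the low nibble is the usual 4-bit color value, the high nibble
     * comes from the upper 4 bits of the color RAM byte, which is
     * shared by all color indexes of the 4x8 block. */
    unsigned char palette_index(unsigned char low_nibble,
                                unsigned char color_ram_byte)
    {
        return (unsigned char)((color_ram_byte & 0xF0) | (low_nibble & 0x0F));
    }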
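P.P.S. The naive greedy idea, as an untested C sketch (all names are made up; dep[][] can be the 0/1 '+' matrix above, or better, a count of how often each pair of colors shares a block). It starts with every color in its own group and keeps merging the two groups whose union rescues the most dependencies, never letting a group grow past 16:

    #define MAXC 256   /* max colors after reduction */
    #define NGRP 16    /* max number of palette chunks */
    #define GSZ  16    /* colors per chunk */

    int dep[MAXC][MAXC];   /* dep[i][j] > 0: colors i,j share a block */
    int grp[MAXC];         /* grp[i] = group (chunk) of color i */
    int gsize[MAXC];       /* current size of each group, 0 = dead */

    /* dependencies rescued if groups a and b were merged */
    static int merge_gain(int nc, int a, int b)
    {
        int i, j, gain = 0;
        for (i = 0; i < nc; i++)
            if (grp[i] == a)
                for (j = 0; j < nc; j++)
                    if (grp[j] == b)
                        gain += dep[i][j];
        return gain;
    }

    void partition_colors(int nc)
    {
        int i, ngroups = nc;

        for (i = 0; i < nc; i++) { grp[i] = i; gsize[i] = 1; }

        while (ngroups > 1) {
            int a, b, g, best = -1, ba = -1, bb = -1;

            /* find the feasible merge that rescues the most deps */
            for (a = 0; a < nc; a++) {
                if (!gsize[a]) continue;
                for (b = a + 1; b < nc; b++) {
                    if (!gsize[b] || gsize[a] + gsize[b] > GSZ) continue;
                    g = merge_gain(nc, a, b);
                    if (g > best) { best = g; ba = a; bb = b; }
                }
            }
            if (ba < 0) break;                       /* nothing mergeable */
            if (ngroups <= NGRP && best == 0) break; /* no more to gain */

            for (i = 0; i < nc; i++)                 /* merge bb into ba */
                if (grp[i] == bb) grp[i] = ba;
            gsize[ba] += gsize[bb];
            gsize[bb] = 0;
            ngroups--;
        }
        /* any dep[i][j] with grp[i] != grp[j] is an ignored dependency */
    }

This is just agglomerative clustering with a size cap, not a real graph partitioner: it can get stuck above 16 groups in pathological cases (when every pair of surviving groups would exceed 16 colors), and then something has to be split or dropped by force. Whatever dependencies still cross group borders at the end are the ones to ignore -- the affected blocks must be re-colored with substitutes from within their chunk. Is there something smarter?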