--- Rainer Buchty <buchty@cs.tum.edu> wrote:

> > Good idea, but also not enough. This does not check the addressing,
> > i.e. that there are 32k unique RAM locations.
>
> Beats me. I mean, wasn't that the purpose of the 32k loop to step
> through addressing?
>
> Maybe I'm misunderstanding you.

What he means is that if there's a select problem, writing to $1000 could
be the same as writing to $2000. What you typically have to do is reset
all memory in the test range to a known value ($00, $FF, $AA, whatever),
then write a semi-unique value (like the page number, i.e., $1000 = $10,
$1100 = $11 ... $2000 = $20) to each location/block (depending on how fine
a granularity you want to test), and read each unverified location before
you write to it, to see if it already holds an interesting value.

Typically, you first test memory in the range to make sure you don't have
any stuck bits or holes in the address space, then run another test to
make sure you don't have any overlaps in addressing (which could be caused
by a faulty LS138 selector (or whatever you are using), bad/crossed wires,
or even a faulty PAL equation, if you are using one for address decode).

Some tests make more sense for evaluating designs, others for spotting
production problems, and still others for field tests on known working
designs with suspect components. Back in the old days (core memory), they
even used test patterns to see whether too many adjacent 1 or 0 bits
resulted in noise or bad reads/writes. At least now, the RAM chips that
make it out the door don't give us those headaches.

-ethan

=====
Visit "The Seventh Continent"
http://penguincentral.com/penguincentral.html
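A minimal C sketch of the read-before-write aliasing test described above,
assuming a simulated ram[] array standing in for the memory-mapped 32k
range; RAM_BASE, RAM_PAGES, FILL, and test_address_decode are illustrative
names, not anything from the original post. On real hardware this would be
a tight 6502 loop poking the actual address space rather than hosted C:

    #include <stdint.h>
    #include <stdio.h>

    #define RAM_BASE  0x1000u  /* hypothetical start of the range under test */
    #define RAM_PAGES 128u     /* 128 x 256 bytes = 32 KB */
    #define FILL      0x00u    /* known background value */

    /* ram[] stands in for the memory-mapped RAM under test. */
    static volatile uint8_t ram[RAM_PAGES * 256];

    static int test_address_decode(void)
    {
        int errors = 0;

        /* Pass 1: reset the whole range to a known background value. */
        for (uint32_t i = 0; i < RAM_PAGES * 256; i++)
            ram[i] = FILL;

        /* Pass 2: before marking each page, read first; if the location
           already carries another page's marker instead of FILL, two
           addresses are selecting the same physical cell. */
        for (uint32_t page = 0; page < RAM_PAGES; page++) {
            uint32_t off = page * 256;
            /* semi-unique marker: the page number, $10, $11, ... */
            uint8_t marker = (uint8_t)((RAM_BASE >> 8) + page);

            if (ram[off] != FILL) {
                printf("alias: page $%02X already holds $%02X\n",
                       marker, (unsigned)ram[off]);
                errors++;
            }
            ram[off] = marker;
        }

        /* Pass 3: re-read every marker; a mismatch means a later write
           landed on top of an earlier page (overlapping decode). */
        for (uint32_t page = 0; page < RAM_PAGES; page++) {
            uint32_t off = page * 256;
            uint8_t marker = (uint8_t)((RAM_BASE >> 8) + page);

            if (ram[off] != marker) {
                printf("overlap: page $%02X reads back $%02X\n",
                       marker, (unsigned)ram[off]);
                errors++;
            }
        }
        return errors;
    }

    int main(void)
    {
        int e = test_address_decode();
        printf("%d addressing error(s)\n", e);
        return e != 0;
    }

Note this only catches decode overlaps; the stuck-bit/hole test mentioned
above would be a separate pass writing and reading back patterns like $AA
and $55 at every location.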