On 7/8/2019 12:56 AM, afachat_at_gmx.de wrote:
>
> What use cases did you have in mind?

In my MMU design for the 6809 (CoCo), I use 8kB, since that's what the CC3 employs. But, I wonder if 8kB is too large. I also know that mapping the upper 4 bits (4kB granularity) makes the highest nybble stand out as the MMU "page" in code; 8kB (top 3 bits) is harder to glean.

> I would map them in the mirror area behind one of the existing chips (but
> maybe not the SID, considering there are multi-SID configurations around that
> already use it). So maybe $DC80

I am planning to hide the registers when not needed; does that change your thoughts? Also, since I am in the RAM sockets, I won't be visible when RAM is not mapped into a memory location, which means one needs to map in all RAM to see me if I hide behind I/O.

>
>> * If multiple mappings were possible, map them into a small space
>> (say, 16 bytes for the 4K page size, 16 pages for 64kB) and slide
>> that window to the specific map, or use one of the pages mapped into
>> the memory map as the place to store all the mappings?
>
> I don't think I understand. I gather, from the 4k example, you would map on
> page boundaries, i.e. a 4k page in the target (physical) address space would
> also be on page (4k) boundary, so you'd only have to remap the upper 4 address
> bits into 4+ new address bits.

Sorry, and yes, that's the idea if 4kB is the plan.

> I _think_ I read that you have multiple mappings in mind. And that you are
> thinking not like memory mapped registers, but an MMU that fetches the mapping
> from memory when needed (or loaded)? I would not load "on demand", that seems
> to be rather complex. I'd rather go for an approach where the MMU is either
> loaded "manually" by the CPU (easiest), so no constraints on where to store a
> mapping.

Again, my apologies. I glossed over the specifics. On my other MMU project, which is 8kB granularity, I support hundreds of MMU "mappings". I do so by holding the mappings in a 32kB 15ns SRAM. When the MMU is on, there is a "task" number register. When a memory access is requested, the MMU takes the top 3 bits of the address, adds them to the task register (multiplied by 8), and uses the resulting address to pull the mapping for that "task". Then, as normal, it uses that result as the top bits of the extended address. (A rough sketch of this lookup follows below.)

My question is around how to expose the mappings to the programmer so they can change them. On my other project, the system already supports an MMU, so there is a 16-byte space in I/O for the registers (2 tasks are supported by the original HW). The first 8 bytes are the task 0 mappings, whereas bytes 8-15 are the second task's. Since I support more "tasks", I re-purposed the 16 locations as a "sliding window": if you want to update the task 6 and 7 mappings, set the "MMU mapping access register" to task 6 (putting 6 and 7 in the window), and update the values. It seems to work OK, since a program rarely needs to modify all the task mappings at once. But, I initially implemented an idea where the entire set of mappings was "mapped" into an 8kB page in the MMU. Then, to update the mappings, just update $base + task*8 + mapping location in normal RAM. Either has merit, but both have drawbacks.

> However, if you are using a paged mapping, there is no reason why two
> different mappings should not point to the same target (physical) address.
> So it's all to the software.

Yep.
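To make that lookup concrete, here is a rough C model of the translation described above. The function, the register names, and the one-byte mapping width are my own illustration, not the actual hardware interface:

  #include <stdint.h>
  #include <stdio.h>

  /* Rough model of the 8kB-granularity task lookup described above.
   * mapping_sram stands in for the 32kB 15ns SRAM: one byte per 8kB page,
   * eight entries per task.  Names and widths are illustrative only. */
  static uint8_t  mapping_sram[32768];
  static uint16_t task_reg;                 /* active "task" number register */

  /* Translate a 16-bit CPU address into an extended (physical) address. */
  static uint32_t mmu_translate(uint16_t cpu_addr)
  {
      uint8_t  page  = cpu_addr >> 13;          /* top 3 bits: 8kB page 0-7        */
      uint16_t index = task_reg * 8 + page;     /* task*8 + page selects the entry */
      uint8_t  phys  = mapping_sram[index];     /* physical 8kB page number        */
      return ((uint32_t)phys << 13) | (cpu_addr & 0x1FFF);
  }

  int main(void)
  {
      mapping_sram[3 * 8 + 7] = 0xC2;           /* task 3, CPU page 7 -> phys page $C2 */
      task_reg = 3;
      printf("$FFFC -> $%06X\n", mmu_translate(0xFFFC));   /* prints $185FFC */
      return 0;
  }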
On the TANDY CoCo 3, they have the "CRM" page, which is a 256-byte constant page at $fexx that will be the same no matter the mapping selected for page 7 ($e000-$ffff). But, I wonder if they did that so folks would not need to dedicate an entire 1/8th of the address space for common code.

>
>> * Is it enough to allow the first page to be remapped to "move" zpage
>> and stack, or should it be possible to remap at the 256 byte
>> granularity in page 0?
>
> I guess that depends on your use case. If you, like me, want to page in
> executables, a 4k page at the bottom often suffices, that includes zp and
> stack.

Yep, that was my idea.

> If you want to be able to just use "more zeropage", remapping zp (or even
> stack) separately maybe more what you want.

And there are two ways to do this. One is to move zp/stack within the 64kB space (essentially, an MMU on top of an MMU), and the other is to set the granularity of the page size for 6502 pages $00 and $01 to 256 bytes, meaning you can remap those spaces to anywhere in the larger physical address space. I think the C128 uses the latter approach.

>
> If both are mapped separately, I'd suggest a two-staged approach. As we are
> "stuck" with the 6502/6510, there is no base page register, but that is
> something the MMU could simulate, to create a new page address for zp. This
> should then be fed into the page mapped MMU. So you can use one or both
> approaches independently.

I can do that, though that's the MMU-on-an-MMU approach above, and it means that the eventual zp space needs to already be mapped into the 64kB space. I like the idea, as it's easier to implement, but will that pose too many constraints on a developer?

>
>> * Is there anything from the C128 MMU that makes sense to utilize
>> (PCRs, LCRs...)
>
> Having a pre-stored mapping absolutely makes sense. It reduces the context
> switch times considerably, when you can just set one value to switch the
> mapping instead of the whole mapping.

My idea is that an entire "map" of settings will be stored as a "task", and so changing the active task value will instantly change the settings. But, per the above, I can see also creating a set of "LCR" spaces, where the user can store a task value in a PCR (say PCR1) and then, by simply storing anything to LCR1, PCR1 will get loaded into CR (called TASKNUM in my idea). (A rough sketch follows at the end of this message.)

>
> André

--
Jim Brain
brain_at_jbrain.com
www.jbrain.com
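A rough sketch of that PCR/LCR-style switch, modeled in C. The register names follow the idea above; the number of PCR/LCR pairs and the "store anything to trigger" behavior are assumptions for illustration:

  #include <stdint.h>
  #include <stdio.h>

  /* Rough model of the PCR/LCR idea described above: pre-load a task number
   * into a PCR, then a single store to the matching LCR copies it into the
   * active task register (TASKNUM).  The register count and the write-trigger
   * semantics are assumptions, not the actual design. */
  enum { NUM_PCR = 4 };

  static uint8_t pcr[NUM_PCR];  /* pre-configuration registers: staged task numbers */
  static uint8_t tasknum;       /* active task register ("CR"/TASKNUM above)        */

  static void write_pcr(int n, uint8_t task) { pcr[n] = task; }

  /* Writing any value to LCRn performs the one-store context switch. */
  static void write_lcr(int n, uint8_t ignored) { (void)ignored; tasknum = pcr[n]; }

  int main(void)
  {
      write_pcr(1, 6);    /* stage task 6 in PCR1 ahead of time            */
      write_lcr(1, 0);    /* later, one store to LCR1 switches the mapping */
      printf("active task: %u\n", tasknum);
      return 0;
  }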