How can CP/M file caching symbols be disabled?

nuc1e0n
Member
Posts: 49
Joined: Wed Jun 10, 2020 12:34 am

How can CP/M file caching symbols be disabled?

Post by nuc1e0n »

Firstly, I'm not sure if the following approach is best for what I'd like to achieve here, so I'm open to any suggestions of less 'hacky' alternatives.

As a result of the fairly new file caching for CP/M, lib/crt/classic/crt_cpm_fcntl.asm has this block of code:

Code:

IF CLIB_OPEN_MAX > 0
    SECTION bss_crt
    PUBLIC  __fcb
__fcb:
    defs    CLIB_OPEN_MAX * 166	; 166 bytes per cached FCB entry
ENDIF
For my project, I'm compiling a program for CP/M, MSXDOS1 and MSXDOS2. The initial loader program determines whether it's running on MSXDOS2 and uses MSXDOS2 file access routines or CP/M's FCB based file access routines otherwise. (It also detects whether an MSXDOS2 compatible memory mapper bios extension is available independently of this, and uses it even for MSXDOS1 if found.)
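For reference, the usual way to make that determination at runtime (a sketch of the common technique, not my exact loader code — labels and equates are illustrative) is BDOS function 6Fh (_DOSVER), which MSXDOS2 implements but MSXDOS1 does not, so pre-clearing B and checking for a major version of 2 or more distinguishes the two:

```asm
; Sketch of the usual MSXDOS2 version check via BDOS function 6Fh
; (_DOSVER). DOS1 doesn't implement 6Fh and won't set B, so B is
; pre-cleared and any major version below 2 is treated as the
; DOS1/CP/M FCB path. Illustrative labels, not my actual loader code.

BDOS            equ     0005h   ; BDOS entry point
_DOSVER         equ     06Fh    ; get MSX-DOS version number

detect_dos2:                    ; returns carry set if DOS1/CP/M
        ld      c,_DOSVER
        ld      b,0             ; DOS1 leaves B alone, so clear it first
        call    BDOS
        ld      a,b             ; B = kernel major version under DOS2
        cp      2               ; carry set => version < 2 => use FCBs
        ret
```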

However, I found that this only works if the __sgoioblk and __sgoioblk_end symbols are located at the same address in both the msx2 and fcb variants.

I have been able to keep these symbols at the same address by placing #pragma output CRT_ENABLE_STDIO = 0 in my C source code, then manually writing an assembly language file that contains these symbols.
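For anyone wanting to do the same, such a hand-written file looks roughly like this — a minimal sketch only: the section and symbol names follow the CRT, but the reserved size is an assumption on my part and must match what your build of the CRT actually expects:

```asm
; Minimal sketch of a hand-written stand-in for the stdio block when
; CRT_ENABLE_STDIO = 0. Section and symbol names follow the z88dk CRT;
; the size below is an assumption and must match your CRT build.

        SECTION bss_crt

        PUBLIC  __sgoioblk
        PUBLIC  __sgoioblk_end

__sgoioblk:
        defs    10 * 10         ; illustrative: room for the stdio FILE table
__sgoioblk_end:
```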

Now that the CP/M file caching exists, I've found that I need to move the __fcb symbol as well (which is fine for me to do). I also need to be careful that the fcb and msx2 routines don't go past address 0x3fff, as I'm banking the memory from 0x4000 to 0x7fff.

But although __sgoioblk and __sgoioblk_end can be disabled from the standard crt0 startup code when using nofileio, the __fcb symbol in crt_cpm_fcntl.asm cannot be.

What I'd like to request is that the __fcb symbol be wrapped by an IF CRT_ENABLE_STDIO = 1 block, like so:

Code:

; CP/M style FCB support (CP/M + MSXDOS1)

    SECTION bss_crt

IF CRT_ENABLE_STDIO = 1
IF CLIB_OPEN_MAX > 0
    SECTION bss_crt
    PUBLIC  __fcb
__fcb:
    defs    CLIB_OPEN_MAX * 166	; 166 bytes per cached FCB entry
ENDIF
ENDIF

    PUBLIC  defltdsk
defltdsk:       defb    0	;Default disc
If file I/O is disabled, the symbols __sgoioblk and __sgoioblk_end aren't even defined at present, so the memory used by __fcb would just be wasted at the moment anyhow, right?

Finally, is there perhaps some better means of detecting whether MSXDOS2 file access routines are available and using the FCB based ones if they aren't?
dom
Well known member
Posts: 2072
Joined: Sun Jul 15, 2007 10:01 pm

Re: How can CP/M file caching symbols be disabled?

Post by dom »

The caching takes place at the fcntl layer, not the stdio layer, so excluding __fcb when CRT_ENABLE_STDIO = 0 would be incorrect.

The old nofileio directive sets CLIB_OPEN_MAX to 0, which does achieve what you want:

Code:

    ; Maximum number of fds available
    IF !DEFINED_CLIB_OPEN_MAX
        ; Map this old nofileio pragma into a modern form
        IF DEFINED_nofileio
            defc    CLIB_OPEN_MAX = 0
        ELSE
            defc    CLIB_OPEN_MAX = CLIB_FOPEN_MAX
        ENDIF
    ENDIF
    PUBLIC  __CLIB_OPEN_MAX
    defc    __CLIB_OPEN_MAX = CLIB_OPEN_MAX
However, that does have knock-on effects and will completely prevent files from being opened using the CP/M API.

I think we would need a new directive to exclude the block. However, I'm wondering if just rearranging the memory map would work for you? As I understand it, you need the sgoioblk and fcb variables to sit low in memory. At the moment it's fudged by "disabling" stdio and defining sgoioblk in a separate file.

Both of these symbols are in the bss_crt section, so if we define that section first of all, they'll be placed directly after the crt0 code and will end up around 0x200 or thereabouts.

At the moment there is only a way to override the entire memory map, but you just need a small tweak, so I'm proposing the following:

1. You have a file called pre-mmap.inc in your project, this has the contents:

Code:

SECTION bss_crt
2. Compile with an additional flag that indicates that the pre-mmap.inc file exists
3. Delete your additional files defining sgoioblk

I think this will work: you can "dry run" it by putting a "SECTION bss_crt" before the INCLUDE "crt/classic/crt_runtime_selection.asm" in cpm_crt0.asm.
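In other words, the dry-run edit is just this (a sketch, untested; paths as quoted above):

```asm
; Dry-run sketch for cpm_crt0.asm: declare bss_crt before the include so
; the section is registered first and lands straight after the crt0 code.

        SECTION bss_crt         ; registered early => placed after crt0
        ; (the real crt0 will want to switch back to its own code
        ;  section before emitting any further instructions)

        INCLUDE "crt/classic/crt_runtime_selection.asm"
```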
nuc1e0n

Re: How can CP/M file caching symbols be disabled?

Post by nuc1e0n »

> Both of these symbols are in the bss_crt section, so if we define that section first of all they'll be placed directly after the crt0 code so will end up around 0x200 or thereabouts.

So with z88dk, sections are ordered in memory according to when they are first encountered by the compiler? If so, that's very interesting. I'll try just putting an empty "SECTION bss_crt" declaration in my assembly code and report back.

It's not so much that they have to sit low in memory, just that they must not be in the range 0x4000-0x7fff. It's more that I'm compiling two different .com file variants of the C runtime functionality, and these variables need to be in the same locations in both, so that the pages of my application code at 0x4000-0x7fff can always refer to them without any problems. I'm even keeping some of the C standard library code in the paged code sections. The lower in memory I can keep these variables, the easier it is to ensure they stay in the same locations.
nuc1e0n

Re: How can CP/M file caching symbols be disabled?

Post by nuc1e0n »

Looking again at my code, although I still need to keep __sgoioblk and __sgoioblk_end in the same places, it doesn't actually matter where __fcb is in memory, so long as everything ends up below 0x4000. If I enable optimisations for the C runtime code, even with the extra FCB caching the whole executable still only reaches 0x3FD4 when loaded (close to the wire, but still small enough). __fcb is also only defined by the startup code for CP/M and MSXDOS1, not MSXDOS2, so by default it doesn't get defined at all otherwise. That means there's no problem to solve other than removing my hacked-up references to the symbol.

I probably should have told the rubber duck on my desk all this before troubling folks with a forum post. Sorry about that :S