RTEMS 5 on mcp750 fails
Miroslaw Dach
miroslaw.dach at gmail.com
Tue May 24 00:55:51 UTC 2022
I have changed the CONFIGURE_EXTRA_TASK_STACKS
from
#define CONFIGURE_EXTRA_TASK_STACKS (4000 * RTEMS_MINIMUM_STACK_SIZE)
down to
#define CONFIGURE_EXTRA_TASK_STACKS (3000 * RTEMS_MINIMUM_STACK_SIZE)
so now it is 24MB instead of the original 32MB.
The system boots now without the error: INTERNAL_ERROR_TOO_LITTLE_WORKSPACE
Unfortunately the problem is not completely solved:
I now get a periodic error message:
pthread_create error No more processes
CAS: unable to start the event facility
I used the diagnostic shell command rt malloc:
Number of free blocks: 28
Largest free block: 6288
Total bytes free: 12456
Number of used blocks: 16522
Largest used block: 1048848
Total bytes used: 30342640
Size of the allocatable area in bytes: 30355096
Minimum free size ever in bytes: 12456
Maximum number of free blocks ever: 107
Maximum number of blocks searched ever: 54
Lifetime number of bytes allocated: 31278104
Lifetime number of bytes freed: 917336
Total number of searches: 37008
Total number of successful allocations: 18126
Total number of failed allocations: 40
Total number of successful frees: 1604
Total number of successful resizes: 169
It looks like there is a problem with the memory allocation (Total number
of failed allocations: 40).
The number of free blocks is also quite small (Number of free blocks: 28).
Could you please advise which config parameters to tweak to resolve the
issue with the failed allocations? I hope that the RAM size (32MB) of the
mcp750 card is good enough.
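Following Joel's suggestion quoted below, I understand the direction would be something like this configuration sketch (values illustrative only; the real defaults live in the EPICS rtems_config.c):

```c
/* Sketch only: with a unified work area the heap and RTEMS workspace
 * share one pool, so task stacks are allocated from the same memory
 * and no static CONFIGURE_EXTRA_TASK_STACKS reservation is needed. */
#define CONFIGURE_UNIFIED_WORK_AREAS
#define CONFIGURE_UNLIMITED_OBJECTS
/* Note: CONFIGURE_UNLIMITED_ALLOCATION_SIZE is the number of objects
 * allocated per table extension, not a size in KB. */
#define CONFIGURE_UNLIMITED_ALLOCATION_SIZE 32
/* With a unified work area, leave CONFIGURE_EXTRA_TASK_STACKS unset. */
```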
Best Regards
Mirek
On Sat, 21 May 2022 at 12:42, Miroslaw Dach <miroslaw.dach at gmail.com>
wrote:
> Hi Joel,
>
> >If it is configured for unified workspace, you wouldn't specify extra
> stacks and this wouldn't have been tripped.
> Do you mean that CONFIGURE_EXTRA_TASK_STACKS is not taken into account
> when CONFIGURE_UNIFIED_WORK_AREAS is defined?
> I have checked
> the powerpc-rtems5/mcp750/lib/include/rtems/confdefs/wkspace.h and it seems
> the CONFIGURE_EXTRA_TASK_STACKS is used to calculate the stack size?
>
> #define _CONFIGURE_STACK_SPACE_SIZE \
>   ( _CONFIGURE_INIT_TASK_STACK_EXTRA \
>     + _CONFIGURE_POSIX_INIT_THREAD_STACK_EXTRA \
>     + _CONFIGURE_LIBBLOCK_TASKS_STACK_EXTRA \
>     + CONFIGURE_EXTRA_TASK_STACKS \
>     + rtems_resource_maximum_per_allocation( _CONFIGURE_TASKS ) \
>       * _Configure_From_stackspace( CONFIGURE_MINIMUM_TASK_STACK_SIZE ) \
>     + rtems_resource_maximum_per_allocation( CONFIGURE_MAXIMUM_POSIX_THREADS ) \
>       * _Configure_From_stackspace( CONFIGURE_MINIMUM_POSIX_THREAD_STACK_SIZE ) \
>     + _CONFIGURE_HEAP_HANDLER_OVERHEAD )
>
> #else /* CONFIGURE_EXECUTIVE_RAM_SIZE */
>
> What is the meaning of CONFIGURE_UNLIMITED_ALLOCATION_SIZE? Does it
> express the size in KB?
> Currently epics defines:
>
> #define CONFIGURE_UNLIMITED_ALLOCATION_SIZE 32
> #define CONFIGURE_UNLIMITED_OBJECTS
>
> Best Regards
> Mirek
>
> On Sat, 21 May 2022 at 10:06, Joel Sherrill <joel at rtems.org> wrote:
>
>>
>>
>> On Sat, May 21, 2022, 11:02 AM Miroslaw Dach <miroslaw.dach at gmail.com>
>> wrote:
>>
>>> Hi Heinz,
>>>
>>> Thanks for the pointer. I have found exactly this setting and am going
>>> to reduce CONFIGURE_EXTRA_TASK_STACKS, which is currently set by
>>> default in EPICS to 32MB.
>>> The mcp750 has just 32MB, so it is no wonder that it does not work.
>>>
>>
>> If it is configured for unified workspace, you wouldn't specify extra
>> stacks and this wouldn't have been tripped.
>>
>> Unified and unlimited can avoid some issues.
>>
>> --joel
>>
>>>
>>> I will test the new configuration next week.
>>> Have a nice weekend
>>> Best Regards
>>> Mirek
>>>
>>>
>>> On Sat, 21 May 2022 at 05:39, Heinz Junkes <junkes at fhi-berlin.mpg.de>
>>> wrote:
>>>
>>>> Hi Mirek,
>>>> You can find the configuration of the RTEMS at EPICS here:
>>>>
>>>> epics-base/modules/libcom/RTEMS/posix/rtems_config.c
>>>>
>>>> There for example
>>>> ...
>>>> #define CONFIGURE_EXTRA_TASK_STACKS (4000 * RTEMS_MINIMUM_STACK_SIZE)
>>>>
>>>> #define CONFIGURE_UNLIMITED_ALLOCATION_SIZE 32
>>>> #define CONFIGURE_UNLIMITED_OBJECTS
>>>> #define CONFIGURE_UNIFIED_WORK_AREAS
>>>> …
>>>>
>>>> best regards,
>>>> Heinz
>>>>
>>>> > On 21. May 2022, at 00:21, Joel Sherrill <joel at rtems.org> wrote:
>>>> >
>>>> > On Fri, May 20, 2022 at 3:55 PM Miroslaw Dach <
>>>> miroslaw.dach at gmail.com>
>>>> > wrote:
>>>> >
>>>> >> Hi Chris,
>>>> >>
>>>> >> Thank you very much for your expertise and attached links which are
>>>> very
>>>> >> helpful.
>>>> >> As regards the INTERNAL_ERROR_TOO_LITTLE_WORKSPACE error message:
>>>> >> Is it possible to determine, with some debug information, what
>>>> >> WORKSPACE size RTEMS expects?
>>>> >>
>>>> >
>>>> > The CONFIGURE_MEMORY_OVERHEAD option is there as a mechanism to toss
>>>> > extra memory at the workspace in case confdefs.h makes a mistake. I
>>>> > think this is more likely to be something else, like a misconfigured
>>>> > system. But adding this to your RTEMS configuration with some chunk
>>>> > of memory like 128K might allow this to proceed. But if it is a
>>>> > memory allocation error in the BSP, like not assigning memory right
>>>> > to RTEMS, this won't fix it. See:
>>>> >
>>>> >
>>>> https://docs.rtems.org/branches/master/c-user/config/general.html#configure-memory-overhead
>>>> >
>>>> > Chris has touched one of the boards in the motorola_powerpc family
>>>> > more recently than I have. I would think this would work unless
>>>> > something is off in the EPICS configuration of RTEMS. There was a
>>>> > discussion of the EPICS RTEMS configuration with Till recently on
>>>> > tech-talk and I thought it looked OK.
>>>> >
>>>> > --joel
>>>> >
>>>> >>
>>>> >> Best Regards
>>>> >> Mirek
>>>> >>
>>>> >> On Wed, 18 May 2022 at 00:53, Chris Johns <chrisj at rtems.org> wrote:
>>>> >>
>>>> >>> On 18/5/2022 9:36 am, Miroslaw Dach wrote:
>>>> >>>> Dear RTEMS Users and Developers,
>>>> >>>>
>>>> >>>> I have built RTEMS 5 with EPICS 7 and tried to boot my
>>>> >>>> application on the mcp750 cPCI board.
>>>> >>>> The first thing that I encountered is that the boot file is in
>>>> >>>> ELF format, so I converted it to binary:
>>>> >>>> powerpc-rtems5-objcopy -I elf32-powerpc -O binary myApp.boot myApp.boot.bin
>>>> >>>
>>>> >>> We removed the various post-link hooks.
>>>> >>>
>>>> >>>> So far so good
>>>> >>>> Next, I booted the system with my app and it fails in the
>>>> *bsp_early*
>>>> >>>> function in
>>>> >>>> bsps/powerpc/motorola_powerpc/start/bspstart.c
>>>> >>>>
>>>> >>>> The boot sequence:
>>>> >>>> Network Boot File load in progress... To abort hit <BREAK>
>>>> >>>>
>>>> >>>> Bytes Received =&1270208, Bytes Loaded =&1270208
>>>> >>>> Bytes/Second =&635104, Elapsed Time =2 Second(s)
>>>> >>>>
>>>> >>>> Residual-Data Located at: $01F88000
>>>> >>>>
>>>> >>>> Model: (e2)
>>>> >>>> Serial: MOT0000000
>>>> >>>> Processor/Bus frequencies (Hz): 366680480/66671508
>>>> >>>> Time Base Divisor: 4000
>>>> >>>> Memory Size: 2000000
>>>> >>>> Residual: 1f88000 (length 27148)
>>>> >>>>
>>>> >>>> PCI: Probing PCI hardware
>>>> >>>>
>>>> >>>> RTEMS 5.0.0/PPC load:
>>>> >>>> Uncompressing the kernel...
>>>> >>>> done
>>>> >>>> Now booting...
>>>> >>>> -----------------------------------------
>>>> >>>> Welcome to rtems-5.0.0 (PowerPC/Generic (classic FPU)/mcp750) on
>>>> >> Mesquite
>>>> >>>> cPCI (MCP750)
>>>> >>>> -----------------------------------------
>>>> >>>> idreg 0 = 0x1208029271
>>>> >>>> OpenPIC found at 0xc1000000.
>>>> >>>> pci : Configuring interrupt routing for 'Mesquite cPCI (MCP750)'
>>>> >>>> pci : No bridge from bus 0 towards root found
>>>> >>>> pci : No bridge from bus 0 towards root found
>>>> >>>> pci : Device 1:0x0b:0 routed to interrupt_line 27
>>>> >>>> pci : Device 1:0x0d:0 routed to interrupt_line 25
>>>> >>>> Cleared PCI errors: pci_stat was 0x2280
>>>> >>>> OpenPIC Version ? (2 CPUs and 16 IRQ sources) at 0x3238002688
>>>> >>>> OpenPIC Vendor 0 (Unknown), Device 0 (Unknown), Stepping 2
>>>> >>>> OpenPIC timer frequency is 8333848 Hz
>>>> >>>
>>>> >>> This all looks OK.
>>>> >>>
>>>> >>>>
>>>> >>>> *** FATAL ***
>>>> >>>> fatal source: 0 (INTERNAL_ERROR_CORE)
>>>> >>>> fatal code: 2 (INTERNAL_ERROR_TOO_LITTLE_WORKSPACE)
>>>> >>>> RTEMS version:
>>>> 5.0.0.fc89cc76804499eba3f3bc4097b795a84f07571a-modified
>>>> >>>> RTEMS tools: 7.5.0 20191114 (RTEMS 5, RSB 5 (6225eadda1de), Newlib
>>>> >>> 7947581)
>>>> >>>> executing thread is NULL
>>>> >>>>
>>>> >>>> My application in binary format, uncompressed and with stripped
>>>> >>>> symbols, is 2.3M plus a 501K bootloader, so I do not think it is
>>>> >>>> an issue with the WORKSPACE? The mcp750 has 32MB of RAM.
>>>> >>>> How can I detect the real cause of
>>>> >>>> INTERNAL_ERROR_TOO_LITTLE_WORKSPACE?
>>>> >>>> Could it be a problem with the linker script ppcboot.lds?
>>>> >>>
>>>> >>> I do not think so. I suggest you check the Classic API Guide here:
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> https://ftp.rtems.org/pub/rtems/releases/5/5.1/docs/html/c-user/config/intro.html#sizing-the-rtems-workspace
>>>> >>>
>>>> >>> The accounting of memory is better and this means the extra space
>>>> >>> needed may need to be adjusted. I am not sure where in EPICS this
>>>> >>> is controlled and whether it can be overridden in your local
>>>> >>> configuration.
>>>> >>>
>>>> >>> RTEMS 5 has a unified workspace and heap. This means the heap and
>>>> >>> workspace can use memory until it is all used. The benefit is that
>>>> >>> there is no need to manage the workspace size statically:
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> https://ftp.rtems.org/pub/rtems/releases/5/5.1/docs/html/c-user/config/general.html#configure-unified-work-areas
>>>> >>>
>>>> >>> Chris
>>>> >>>
>>>> >> _______________________________________________
>>>> >> users mailing list
>>>> >> users at rtems.org
>>>> >> http://lists.rtems.org/mailman/listinfo/users
>>>>
>>>>