Hello_via_task Ada example on Raspberry Pi
Jan Sommer
soja-misc at aries.uberspace.de
Mon Jul 27 18:12:47 UTC 2015
Ok, some new developments on this front.
This weekend I managed to build the gcc cross-compiler for the Raspberry Pi with Ada enabled through the RTEMS Source Builder (RSB).
In the end it was quite simple. I don't know why it didn't work out when I tried it the first time a few months ago.
However, now I don't have to bother keeping up with compatible versions of binutils, gcc, newlib, etc. and can rely on the RSB.
For reference, if someone stumbles across this thread, I did the following:
1. Build the cross-compiler without Ada:
../source-builder/sb-set-builder --log=build.log --prefix=/opt/rtems-4.11-pi 4.11/rtems-arm
2. Build the BSP for the Raspberry Pi with multilib enabled and the same install prefix:
../rtems/configure --target=arm-rtems4.11 \
--enable-rdbg --enable-rtemsbsp="raspberrypi" --enable-multilib \
--enable-networking --enable-cxx --enable-posix \
--enable-tests=samples --prefix=/opt/rtems-4.11-pi --enable-rtems-debug
3. Build the cross-compiler with Ada:
../source-builder/sb-set-builder --log=build.log --prefix=/opt/rtems-4.11-pi 4.11/rtems-arm --with-ada
Additionally, I built QEMU with the Raspberry Pi emulation as explained here:
http://wiki.osdev.org/Raspberry_Pi_Bare_Bones#Testing_your_operating_system_.28QEMU.29
I briefly tried debugging the hello_via_task binary; the program starts hanging after the call to pthread_cond_timedwait in gcc-4.9.3/gcc/ada/s-taprop-posix.adb:699.
I will look into it more deeply next week. One question though: quite a few variables which store the return values of syscalls are optimized out. Is there an easy way to tell the RSB to compile the toolchain with -O0? For RTEMS, does "make VARIANT=debug all" still turn off optimization?
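For context on where it hangs: at that point the delay boils down to an absolute-deadline condition wait. A rough C equivalent of the pattern (a simplified sketch, not the actual s-taprop-posix.adb code; the clock choice and the names are my assumption) would be:

    #include <errno.h>
    #include <pthread.h>
    #include <time.h>

    /* Simplified sketch of an absolute-deadline wait used for a delay
       (assumption: not the real GNAT runtime code, just the same pattern). */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    static void delay_seconds(time_t secs)
    {
      struct timespec abstime;

      /* Build an absolute deadline from the realtime clock. */
      clock_gettime(CLOCK_REALTIME, &abstime);
      abstime.tv_sec += secs;

      pthread_mutex_lock(&lock);
      /* Returns ETIMEDOUT once the deadline passes; hanging here suggests
         the timeout never expires (clock/tick trouble) or the condition
         variable is never woken. */
      while (pthread_cond_timedwait(&cond, &lock, &abstime) != ETIMEDOUT)
        ; /* ignore spurious wakeups */
      pthread_mutex_unlock(&lock);
    }

If the ETIMEDOUT never arrives, the clock tick under QEMU would be the first thing I suspect.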
Best regards,
Jan
On Friday, 17 July 2015, 16:46:24, Jan Sommer wrote:
> It is basically derived from the standard ada-examples from rtems.org.
>
> hello_world_ada worked as it only runs in the main task.
> hello_via_task hangs after the delay call.
>
> In order to compile the hello_via_task example I had to add the line
> #include <rtems/score/stackimpl.h>
>
> to /opt/rtems-4.11-pi/arm-rtems4.11/include/rtems/posix/pthreadimpl.h;
> otherwise the function _Stack_Minimum() couldn't be found.
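>
> As a sketch, the workaround is just one extra include near the other
> includes at the top of the installed header (exact placement approximate;
> this is my reading of the missing-declaration error):
>
>     /* In /opt/rtems-4.11-pi/arm-rtems4.11/include/rtems/posix/pthreadimpl.h:
>        pull in the declaration of _Stack_Minimum() used by this header. */
>     #include <rtems/score/stackimpl.h>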
>
> I will check if your patch for gcc changes anything. Compiling the toolchain worked for me without any patching.
>
> Cheers,
>
> Jan
>
> On Thursday, 16 July 2015, 16:58:23, Joel Sherrill wrote:
> > The free lines look like you might not be printing the address freed
> > correctly but it doesn't matter. I don't think it explains anything you
> > are seeing.
> >
> > I built a SPARC Ada toolset. Can you post your test case including
> > Makefile, etc.? Just tar up the directory.
> >
> > I want to see if I can tell what is happening.
> >
> > FWIW attached is the patch I needed to build the 4.9 branch of gcc
> > current as of today.
> >
> > On 7/15/2015 4:23 PM, Jan Sommer wrote:
> > > I placed the debug statements in the files you suggested, rebuilt RTEMS, and compiled both examples: hello_world_ada (which works) and hello_via_task (which hangs).
> > > The output generated by Workspace_Allocate is basically the same for both examples; they only differ in the addresses they print, while the sizes and order of the allocations are identical, except that the hello_via_task example makes one additional allocation at the very end.
> > > However, several allocations seem to have failed, especially in the beginning (but in both examples). I don't know if this is a severe failure or if it is expected to happen. Nevertheless, the output I get for the hello_via_task example is the following (as said before, the output for the hello_world_ada example is the same except for the last allocation):
> > >
> > > Workspace_Allocate_or_fatal_error(24) from 44A50/0 -> 10ED78
> > > Workspace_Allocate_or_fatal_error(112) from 4161C/0 -> 10EDB8
> > > Workspace_Allocate_or_fatal_error(28) from 41660/0 -> 10EE50
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate_or_fatal_error(1304) from 4161C/0 -> 10EE90
> > > Workspace_Allocate_or_fatal_error(24) from 41660/0 -> 10F3D0
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate_or_fatal_error(26080) from 4161C/0 -> 10F410
> > > Workspace_Allocate_or_fatal_error(100) from 41660/0 -> 115A18
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(480) from 417CC/0 -> 115AA0
> > > Workspace_Allocate(52) from 417E0/0 -> 115CA8
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(736) from 417CC/0 -> 115D00
> > > Workspace_Allocate(52) from 417E0/0 -> 116008
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate_or_fatal_error(1512) from 4161C/0 -> 116060
> > > Workspace_Allocate_or_fatal_error(128) from 41660/0 -> 116670
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(384) from 417CC/0 -> 116718
> > > Workspace_Allocate(52) from 417E0/0 -> 1168C0
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(1856) from 417CC/0 -> 116918
> > > Workspace_Allocate(52) from 417E0/0 -> 117080
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(224) from 417CC/0 -> 1170D8
> > > Workspace_Allocate(52) from 417E0/0 -> 1171E0
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(1664) from 417CC/0 -> 117238
> > > Workspace_Allocate(52) from 417E0/0 -> 1178E0
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(416) from 417CC/0 -> 117938
> > > Workspace_Allocate(52) from 417E0/0 -> 117B00
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(180) from 417CC/0 -> 117B58
> > > Workspace_Allocate(56) from 417E0/0 -> 117C30
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate_or_fatal_error(288) from 402EC/0 -> 117C90
> > > Workspace_Allocate(36512) from 417CC/0 -> 117DD8
> > > Workspace_Allocate(132) from 417E0/0 -> 120CA0
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(1232) from 417CC/0 -> 120D48
> > > Workspace_Allocate(132) from 417E0/0 -> 121240
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(2128) from 417CC/0 -> 1212E8
> > > Workspace_Allocate(172) from 417E0/0 -> 121B60
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(960) from 417CC/0 -> 121C30
> > > Workspace_Allocate(52) from 417E0/0 -> 122018
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(192) from 417CC/0 -> 122070
> > > Workspace_Allocate(52) from 417E0/0 -> 122158
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(480) from 417CC/0 -> 1221B0
> > > Workspace_Allocate(52) from 417E0/0 -> 1223B8
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(928) from 417CC/0 -> 122410
> > > Workspace_Allocate(52) from 417E0/0 -> 1227D8
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(384) from 417CC/0 -> 122830
> > > Workspace_Allocate(52) from 417E0/0 -> 1229D8
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(384) from 417CC/0 -> 122A30
> > > Workspace_Allocate(52) from 417E0/0 -> 122BD8
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(256) from 417CC/0 -> 122C30
> > > Workspace_Allocate(52) from 417E0/0 -> 122D58
> > > Workspace_Free(0) from 4173C/0
> > > Workspace_Allocate(4096) from 4448C/0 -> 122DB0
> > > Workspace_Allocate(8192) from 4448C/0 -> 124000
> > > Workspace_Allocate(102400) from 4448C/0 -> 126398
> > > Workspace_Allocate(16384) from 4448C/0 -> 142CF8
> > > *** GNAT/RTEMS Hello World Test ***
> > >
> > > Welcome to the World of Lady Ada
> > >
> > > Initiating 2.5 second delay
> > >
> > >
> > >
> > > On Tuesday, 14 July 2015, 15:26:01, Joel Sherrill wrote:
> > >>
> > >> On 7/14/2015 3:13 PM, Jan Sommer wrote:
> > >>> One question. If the Ada main thread is created using pthread in the beginning and then creates new tasks
> > >>> (in Ada), do these tasks take their stack from the Ada main thread or do they get their own stack
> > >>> allocated from available memory?
> > >>
> > >> Ada tasks are just pthreads and their stack comes from the workspace.
> > >>
> > >> One option is some selectively placed debug statements.
> > >>
> > >> + score/src/objectallocate.c already has a debug section you
> > >> can turn on to print on object allocation failure. This would
> > >> let you know if you ran out of pthreads, semaphores, etc.
> > >>
> > >> + libcsupport/src/malloc.c for a NULL being returned.
> > >>
> > >> + score/src/wkspace.c - _Workspace_Allocate for a NULL being
> > >> returned. There is a printk in here which prints every allocation
> > >> and the return addresses.
> > >>
> > >> If it is a resource issue, that should help narrow it down.
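> > >>
> > >> The wkspace.c hook prints lines roughly of this shape (paraphrased
> > >> from memory, not the exact source; check score/src/wkspace.c for the
> > >> real conditionally compiled code):
> > >>
> > >>   void *_Workspace_Allocate( size_t size )
> > >>   {
> > >>     void *memory = _Heap_Allocate( &_Workspace_Area, size );
> > >>
> > >>     /* size, the caller's return addresses, and the block returned */
> > >>     printk(
> > >>       "Workspace_Allocate(%ld) from %p/%p -> %p\n",
> > >>       (long) size,
> > >>       __builtin_return_address( 0 ),
> > >>       __builtin_return_address( 1 ),
> > >>       memory
> > >>     );
> > >>     return memory;
> > >>   }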
> > >>
> > >>> On Sunday, 12 July 2015, 18:31:57, Jan Sommer wrote:
> > >>>> On Sunday, 12 July 2015, 10:26:56, Joel Sherrill wrote:
> > >>>>> Just to make sure we are on the same page, what gcc version?
> > >>>>
> > >>>> gcc 4.9.2
> > >>>>
> > >>>>>
> > >>>>> It sounds like the number of resources configured might be off. Can you try unlimited objects and unified workspace? They are CONFIGURE_ parameters.
> > >>>>>
> > >>>>
> > >>>> I added
> > >>>> #define CONFIGURE_UNIFIED_WORK_AREAS
> > >>>> #define CONFIGURE_UNLIMITED_OBJECTS
> > >>>> to the rtems-init.c of the Ada examples. I hope this does not conflict with the other defines below (see the confdefs sketch after them), like
> > >>>> #define CONFIGURE_MAXIMUM_TASKS 20
> > >>>> #define CONFIGURE_MAXIMUM_SEMAPHORES 20
> > >>>>
> > >>>> #define CONFIGURE_GNAT_RTEMS
> > >>>> #define CONFIGURE_MAXIMUM_ADA_TASKS 20
> > >>>>
> > >>>> #if !defined(CONFIGURE_MAXIMUM_FAKE_ADA_TASKS)
> > >>>> #define CONFIGURE_MAXIMUM_FAKE_ADA_TASKS 0
> > >>>> #endif
> > >>>>
> > >>>> #if !defined(ADA_APPLICATION_NEEDS_EXTRA_MEMORY)
> > >>>> #define ADA_APPLICATION_NEEDS_EXTRA_MEMORY 0
> > >>>> #endif
> > >>>>
> > >>>> /* Account for any extra task stack size */
> > >>>> #define CONFIGURE_MEMORY_OVERHEAD \
> > >>>> (ADA_APPLICATION_NEEDS_EXTRA_MEMORY + GNAT_MAIN_STACKSPACE)
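> > >>>>
> > >>>> Regarding the possible conflict, my reading of confdefs.h (guards
> > >>>> paraphrased from memory, so worth double-checking the installed header)
> > >>>> is that the unlimited defaults only apply to maxima that are not already
> > >>>> defined, so the explicit values above should win:
> > >>>>
> > >>>>   /* Rough shape of the guards in <rtems/confdefs.h> (paraphrased): */
> > >>>>   #ifdef CONFIGURE_UNLIMITED_OBJECTS
> > >>>>     #ifndef CONFIGURE_UNLIMITED_ALLOCATION_SIZE
> > >>>>       #define CONFIGURE_UNLIMITED_ALLOCATION_SIZE 8
> > >>>>     #endif
> > >>>>     #ifndef CONFIGURE_MAXIMUM_TASKS
> > >>>>       #define CONFIGURE_MAXIMUM_TASKS \
> > >>>>         rtems_resource_unlimited(CONFIGURE_UNLIMITED_ALLOCATION_SIZE)
> > >>>>     #endif
> > >>>>     /* ...and similarly for semaphores, POSIX threads, and so on. */
> > >>>>   #endif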
> > >>>>
> > >>>> The task still hangs after it calls the delay.
> > >>>>
> > >>>> Best regards,
> > >>>>
> > >>>> Jan
> > >>>>
> > >>
> > >>
> > >
> >
> >