[GSoC 2020] : BSP Buildset for EPICS (next steps)
junkes
junkes at fhi-berlin.mpg.de
Mon Aug 10 15:26:05 UTC 2020
Hello Mritunjay,
I have now finished an EPICS variant that works with libbsd so far. DHCP
and NFS work; for NTP I added a primitive reader. This is sufficient for
testing.
You can find the development here:
https://gitlab.fhi.mpg.de/junkes/epics-base.git
It's not perfect yet. The adaptation to the legacy stack and the
processing of the environment variables from flash (u-boot etc.) are
still missing.
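(A side note on the "primitive reader": it is basically a one-shot SNTP
query. The sketch below only illustrates the idea; it is not the code from
the repository above, and the server address, error handling and the RTEMS
glue are left out.)

  /* Illustrative one-shot SNTP query (sketch, not the repository code).
   * Sends a single client request and converts the server's transmit
   * timestamp from the NTP epoch (1900) to the Unix epoch (1970). */
  #include <stdint.h>
  #include <string.h>
  #include <time.h>
  #include <unistd.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  int sntp_query_once(const char *server_ip, time_t *result)
  {
      unsigned char pkt[48];
      struct sockaddr_in sa;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      if (fd < 0)
          return -1;

      memset(pkt, 0, sizeof(pkt));
      pkt[0] = 0x1B;                        /* LI=0, VN=3, Mode=3 (client) */

      memset(&sa, 0, sizeof(sa));
      sa.sin_family = AF_INET;
      sa.sin_port = htons(123);             /* NTP/SNTP port */
      inet_pton(AF_INET, server_ip, &sa.sin_addr);

      if (sendto(fd, pkt, sizeof(pkt), 0,
                 (struct sockaddr *)&sa, sizeof(sa)) != sizeof(pkt) ||
          recv(fd, pkt, sizeof(pkt), 0) < 48) {
          close(fd);
          return -1;
      }
      close(fd);

      /* Transmit timestamp (seconds) is the big-endian word at offset 40. */
      uint32_t secs = ((uint32_t)pkt[40] << 24) | ((uint32_t)pkt[41] << 16) |
                      ((uint32_t)pkt[42] << 8)  |  (uint32_t)pkt[43];
      *result = (time_t)(secs - 2208988800UL); /* NTP epoch -> Unix epoch */
      return 0;
  }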
[h1@earth QtC-epics-base (7.0 *+)]$ ./startQemu softIoc
Script name: ./startQemu
qemu-system-aarch64: warning: nic cadence_gem.1 has no peer
WARNING: OS Clock time was read before being set.
Using 1990-01-02 00:00:00.000000 UTC
initConsole --- Info ---
stdin: fileno: 0, ttyname: /dev/ttyS1
stdout: fileno: 1, ttyname: /dev/ttyS1
stderr: fileno: 2, ttyname: /dev/ttyS1
tcsetattr failed: I/O error
time set to : 04/14/14 07:30:06.000055589 UTC
Startup.
epicsThreadSetPriority called by non epics thread
***** RTEMS Version: rtems-5.0.0-m2003 (ARM/ARMv4/xilinx_zynq_a9_qemu)
*****
***** Initializing network (dhcp) *****
nexus0: <RTEMS Nexus device>
zy7_slcr0: <Zynq-7000 slcr block> on nexus0
cgem0: <Cadence CGEM Gigabit Ethernet Interface> on nexus0
miibus0: <MII bus> on cgem0
e1000phy0: <Marvell 88E1111 Gigabit PHY> PHY 0 on miibus0
e1000phy0: none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX,
1000baseT-FDX, 1000baseT-FDX-master, auto
e1000phy1: <Marvell 88E1111 Gigabit PHY> PHY 23 on miibus0
e1000phy1: none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX,
1000baseT-FDX, 1000baseT-FDX-master, auto
info: cgem0: Ethernet address: 52:54:00:12:34:56
info: lo0: link state changed to UP
---- Wait for DHCP done ...
dhcpcd: unknown option -- pv4only
info: version 6.2.1 starting
cgem0: cgem_mediachange: could not set ref clk0 to 25000000.
info: cgem0: link state changed to UP
dhcpcd: unknown option -- pv4only
debug: cgem0: executing `ioc boot' PREINIT
***** Primary Network interface : cgem0 *****
debug: cgem0: executing `ioc boot' CARRIER
***** Primary Network interface : cgem0 *****
info: DUID 00:01:00:01:1a:de:4a:fe:52:54:00:12:34:56
info: cgem0: IAID 00:12:34:56
info: cgem0: soliciting an IPv6 router
debug: cgem0: delaying Router Solicitation for LL address
debug: cgem0: using ClientID
00:46:48:49:20:74:65:73:74:20:63:6c:69:65:6e:74
info: cgem0: soliciting a DHCP lease
debug: cgem0: sending DISCOVER (xid 0x86686938), next in %0.1f seconds
info: cgem0: carrier lost
debug: cgem0: executing `ioc boot' NOCARRIER
***** Primary Network interface : cgem0 *****
info: cgem0: carrier acquired
dhcpcd: unknown option -- pv4only
debug: cgem0: executing `ioc boot' CARRIER
***** Primary Network interface : cgem0 *****
info: cgem0: IAID 00:12:34:56
info: cgem0: soliciting an IPv6 router
debug: cgem0: delaying Router Solicitation for LL address
debug: cgem0: using ClientID
00:46:48:49:20:74:65:73:74:20:63:6c:69:65:6e:74
info: cgem0: soliciting a DHCP lease
debug: cgem0: sending DISCOVER (xid 0x441a0d89), next in %0.1f seconds
debug: cgem0: wrong xid 0x86686938 (expecting 0x441a0d89) from 10.1.0.1
debug: cgem0: sending DISCOVER (xid 0x441a0d89), next in %0.1f seconds
info: cgem0: offered 10.1.0.104 from 10.1.0.1
debug: cgem0: sending REQUEST (xid 0x441a0d89), next in %0.1f seconds
debug: cgem0: acknowledged 10.1.0.104 from 10.1.0.1
debug: cgem0: checking for 10.1.0.104
debug: cgem0: sending ARP probe (1 of 3), next in %0.1f seconds
debug: cgem0: sending ARP probe (2 of 3), next in %0.1f seconds
---- Wait for DHCP done ...
debug: cgem0: sending ARP probe (3 of 3), next in %0.1f seconds
info: cgem0: leased 10.1.0.104 for 6000 seconds
debug: cgem0: renew in 3000 seconds, rebind in 5250 seconds
debug: cgem0: adding IP address 10.1.0.104/24
info: cgem0: adding host route to 10.1.0.104 via 127.0.0.1
err: cgem0: ipv4_addroute: File exists
info: cgem0: adding route to 10.1.0.0/24
err: cgem0: ipv4_addroute: File exists
info: cgem0: adding default route via 10.1.0.1
debug: cgem0: writing lease `/var/db/dhcpcd-cgem0.lease'
debug: cgem0: executing `ioc boot' BOUND
***** Primary Network interface : cgem0 *****
Interface TGP bounded
rtems_bsdnet_bootp_server_name : 1001.1001@10.1.0.1:/Volumes/Epics
rtems_bsdnet_bootp_boot_file_name :
/Volumes/Epics/myExample/bin/RTEMS-xilinx_zynq_a9_qemu/myExample.boot
rtems_bsdnet_bootp_cmdline :
/Volumes/Epics/myExample/iocBoot/iocmyExample/st.cmd
debug: cgem0: sending ARP announce (1 of 2), next in 2.0 seconds
-------------- IFCONFIG -----------------
cgem0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu
1500
options=80008<VLAN_MTU,LINKSTATE>
ether 52:54:00:12:34:56
inet6 fe80::5054:ff:fe12:3456%cgem0 prefixlen 64 scopeid 0x1
inet 10.1.0.104 netmask 0xffffff00 broadcast 10.1.0.255
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet 127.0.0.1 netmask 0xffffff00
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
groups: lo
-------------- NETSTAT ------------------
Routing tables
Internet:
Destination Gateway Flags Netif Expire
default 10.1.0.1 UGS cgem0
10.1.0.0/24 link#1 U cgem0
10.1.0.104 link#1 UHS lo0
127.0.0.1 link#2 UH lo0
Internet6:
Destination Gateway Flags
Netif Expire
::1 link#2 UH
lo0
fe80::%cgem0/64 link#1 U
cgem0
fe80::5054:ff:fe12:3456%cgem0 link#1 UHS
lo0
fe80::%lo0/64 link#2 U
lo0
fe80::1%lo0 link#2 UHS
lo0
***** Until now no NTP support in RTEMS 5 with rtems-libbsd *****
***** Ask ntp server once... *****
time from ntp : 08/10/20 15:08:41.000055589 UTC
***** Setting up file system *****
***** Initializing NFS *****
rtems_bootp_server_name: 1001.1001@10.1.0.1:/Volumes/Epics
nfsMount("1001.1001@10.1.0.1", "/Volumes/Epics", "/Volumes/Epics")
Mount 1001.1001@10.1.0.1:/Volumes/Epics on /Volumes/Epics
Warning: EPICS_TIMEZONE (CST6CDT,M3.2.0/2,M11.1.0/2) unrecognizable --
times will be displayed as GMT.
check for time registered , C++ initialization ...
***** Preparing EPICS application *****
chdir("/Volumes/Epics/myExample/iocBoot/iocmyExample/")
***** Starting EPICS application *****
dbLoadDatabase("../../dbd/softIoc.dbd")
Can't register 'system' command -- no command interpreter available.
softIoc_registerRecordDeviceDriver(pdbbase)
# Begin /Volumes/Epics/myExample/iocBoot/iocmyExample/st.cmd
iocInit()
Starting iocInit
############################################################################
## EPICS R7.0.3.2-DEV
## Rev. R7.0.3.1-105-ge597f8104c18ec7b9fc5-dirty
############################################################################
debug: cgem0: sending ARP announce (2 of 2)
Warning: RSRV has empty beacon address list
epicsThreadRealtimeLock Warning: Unable to lock the virtual address
space.
VM page faults may harm real-time performance. errno=22
iocRun: All initialization complete
# End /Volumes/Epics/myExample/iocBoot/iocmyExample/st.cmd
On 2020-08-08 21:58, Mritunjay Sharma wrote:
> On Sat, Aug 8, 2020 at 11:10 PM Gedare Bloom <gedare at rtems.org> wrote:
>
>> On Sat, Aug 8, 2020 at 11:08 AM Mritunjay Sharma
>> <mritunjaysharma394 at gmail.com> wrote:
>>>
>>>
>>>
>>> On Sat, Aug 8, 2020 at 9:46 PM Heinz Junkes <junkes at fhi-berlin.mpg.de> wrote:
>>>>
>>>> Hallo Mritunjay,
>>>> everything looks pretty good. I'm commenting on the text. I also send this mail
>>>> to two EPICS experts (Andrew Johnson and Michael Davidsaver). Maybe they also have some ideas.
>>>
>>>
>>> Thank you so much Heinz! It will be really a great help!
>>>>
>>>>
>>>>
>>>>>
>>>>> Current Status:
>>>>>
>>>>> 1) Successfully built EPICS7 with RTEMS5 by hand for pc-386
>>>>> 2) Worked on the RSB recipe.
>>>>> In the process, I wrote:
>>>>> i) rsb/rtems/config/epics/epics-7-1.cfg
>>>>> ii)rsb/rtems/config/epics/epics-base.bset
>>>>> iii)rsb/source-builder/config/epics-7-1.cfg
>>>>> 3) Added a patch for RTEMS-pc-386 support, which made the above recipe work successfully.
>>>>> 4) Therefore, successfully built EPICS7 with RTEMS5 using the RSB recipe as well, for pc-386 as of now.
>>>>> 5) Sent 4 patches for review of the same.
>>>>>
>>>>> What problems remain for the next steps?
>>>>>
>>>>> 1) How to make it work across different architectures?
>>>>> 2) Existing EPICS works on the old legacy network stack.
>>>>> 3) I am not using the EPICS upstream branch. It is being built
>>>>> from Heinz's epics playground.
>>>>> 4) Doubts about how to start with testing.
>>>>>
>>>>> My research work for Problem no. 1:
>>>>>
>>>>> I have gone through the EPICS developer guide from here
>>>>> exhaustively in the past couple of days, and here are a few interesting things
>>>>> that I found which can help:
>>>>>
>>>>> 1) "The main ingredients of the build system are:
>>>>> * A set of configuration files and tools provided in the EPICS base/configure directory
>>>>> * A corresponding set of configuration files in the <top>/configure directory of a non-base <top> directory
>>>>> structure to be built. The makeBaseApp.pl and makeBaseExt.pl scripts create these configuration files. Many of
>>>>> these files just include a file of the same name from the base/configure directory.
>>>>> * Makefiles in each directory of the <top> directory structure to be built
>>>>> * User created configuration files in build created $(INSTALL_LOCATION)/cfg directories.
>>>>> "
>>>>>
>>>>> Remarks: Now since it is also mentioned in the guide that "makeBaseApp.pl
>>>>> creates directories and then copies template files into the newly created directories
>>>>> while expanding macros in the template files. EPICS base provides two sets of template files: simple and example."
>>>>> Can we think of using makeBaseApp.pl to that end? Allowing the user
>>>>> to change the configurations from the terminal?
>>>>
>>>> I don't think that makeBaseApp.pl will help you. This is intended to build an example IOC. It takes the settings from the above mentioned configuration files.
>>>>
>>>> You "only" need to specify the location of the RTEMS installation in configure/os/CONFIG_SITE.Common.RTEMS.
>>>> RTEMS_VERSION =
>>>> RTEMS_BASE =
>>>>
>>>> Then you have to define the target in configure/CONFIG_SITE:
>>>> ...
>>>> # Which target architectures to cross-compile for.
>>>> # Definitions in configure/os/CONFIG_SITE.<host>.Common
>>>> # may override this setting.
>>>> CROSS_COMPILER_TARGET_ARCHS=
>>>> ...
>>>> e.g. "CROSS_COMPILER_TARGET_ARCHS=RTEMS-xilinx_zynq_a9_qemu"
>>>>
>>>> And for each target there must be a file in configure/os:
>>>> e.g. CONFIG_Common.RTEMS-xilinx_zynq_a9_qemu
>>>> If it is not provided by EPICS, the RSB should install it there (adapted to the target to be used by epics make).
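(Putting the above together, the edits on the EPICS side would look roughly
like this; the RTEMS_BASE path is only a placeholder for wherever the RSB
installed the tool chain and BSP:)

  # configure/os/CONFIG_SITE.Common.RTEMS
  RTEMS_VERSION = 5
  RTEMS_BASE = /opt/rtems/5        # placeholder: RSB install prefix

  # configure/CONFIG_SITE
  CROSS_COMPILER_TARGET_ARCHS = RTEMS-xilinx_zynq_a9_qemu

  # plus, per target, a file such as
  # configure/os/CONFIG_Common.RTEMS-xilinx_zynq_a9_qemu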
>>>
>>
>> Probably they should be provided by EPICS for known-to-work
>> configurations. If they are not, we should push upstream.
>>
>>>
>>> Yes, Heinz. I followed the above steps and created a patch which I applied to the configuration files in the RSB recipe.
>>> The problem with it is that it's made only for pc-386 and I have to hardcode the location of the RTEMS installation in
>>> configure/os/CONFIG_SITE.Common.RTEMS. My doubt is how to modify the patch so that it can offer a user-specific location of the RTEMS
>>> installation and BSP?
>>>>
>>
>> I still think this should be done through some kind of pre-processing
>> (scripting) over these configuration files for a given target, using
>> some fixed pattern, rather than by patching. A patch is fine for
>> proof-of-concept, but I don't think we want to have x patches for x
>> BSP targets of EPICS. Maybe it is fine, there aren't that many BSP
>> targets right now, but I can see this kind of patch-based
>> configuration getting a little unwieldy.
>>
>> If you proceed with the patch-based approach, you need to figure out
>> how to pick the right patch to apply based on the target/bsp build. So
>> that would be your next step. Create a patch for the zynq, and see if
>> you can dynamically determine which one to apply (zynq or pc386) based
>> on RSB internal state/variables.
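(To illustrate the scripting idea with a fixed pattern instead of one patch
per BSP -- a purely hypothetical helper, names and paths are placeholders --
something run over the EPICS configure files before make could look like:)

  #!/bin/sh
  # Hypothetical helper: ./set-rtems-target.sh <rtems-version> <rtems-base> <bsp>
  # e.g. ./set-rtems-target.sh 5 /opt/rtems/5 xilinx_zynq_a9_qemu
  RTEMS_VERSION="$1"; RTEMS_BASE="$2"; BSP="$3"

  # Point EPICS at the RTEMS tools (same files as mentioned above).
  sed -i -e "s|^RTEMS_VERSION *=.*|RTEMS_VERSION = ${RTEMS_VERSION}|" \
         -e "s|^RTEMS_BASE *=.*|RTEMS_BASE = ${RTEMS_BASE}|" \
      configure/os/CONFIG_SITE.Common.RTEMS

  # Select the cross target to build for.
  sed -i "s|^CROSS_COMPILER_TARGET_ARCHS *=.*|CROSS_COMPILER_TARGET_ARCHS = RTEMS-${BSP}|" \
      configure/CONFIG_SITE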
>
> Thank you for the suggestions. I will start working on creating the patch for zynq and will see if
> something can be done to dynamically determine them.
>
>>>>
>>>>>
>>>>> 2) "The startup directory in EPICS base contains a perl script, EpicsHostArch.pl, which can be used to define
>>>>> EPICS_HOST_ARCH. This script can be invoked with a command line parameter defining the alternate compiler (e.g.
>>>>> if invoking EpicsHostArch.pl yields solaris-sparc, then invoking EpicsHostArch.pl gnu will yield
>>>>> solaris-sparc-gnu).
>>>>> The startup directory also contains scripts to help users set the path and other environment variables"
>>>> This has nothing to do with 2)
>>>
>>>
>>> I am sorry for the misunderstanding. All the 4 points mentioned here are my observations only for the Problem No.1
>>> `1) How to make it work across different architectures?`
>>>>
>>>>
>>>> There's no need to adjust anything here. The EPICS make recognizes the architecture on which it is started.
>>>>
>>>>> Remarks: As with EPICS_HOST_ARCH, can we do something similar for CROSS_COMPILER_TARGET_ARCHS?
>>>>>
>>>>> 3) ") The following is a summary of targets that can be specified for gnumake:
>>>>> * <action>
>>>>> * <arch>
>>>>> * <action>.<arch>
>>>>> * <dir>
>>>>> * <dir>.<action>
>>>>> * <dir>.<arch>
>>>>> * <dir>.<action>.<arch>
>>>>> where:
>>>>> <arch> is an architecture such as solaris-sparc, vxWorks-68040, win32-x86, etc.
>>>>> <action> is help, clean, realclean, distclean, inc, install, build, rebuild, buildInstall, realuninstall, or uninstall"
>>>>>
>>>>> Remarks: Now, similar to the above, can we do something analogous for the cross-compiler target architectures?
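(For example, with the target architecture used above, the <arch> and
<action>.<arch> forms would allow invocations such as:)

  make RTEMS-xilinx_zynq_a9_qemu          # build only this cross target
  make clean.RTEMS-xilinx_zynq_a9_qemu    # clean only this cross target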
>>>>>
>>>>
>>>> But this does not refer to 3) ?
>>>
>>>
>>> No no, this remark is also only for problem 1, as explained above. Slight misunderstanding here :)
>>>>
>>>>
>>>>
>>>>
>>>> 3) You have to take "my" repo at the moment, because the adaptations for RTEMS 5 are not yet included in the official epics-base. This is a chicken-and-egg problem: RTEMS is only now in the release phase, so my changes have not been merged yet.
>>>
>>>
>>> Ok, I hope Dr. Gedare and Chris can help you with that.
>>
>> We just need to be ready for when RTEMS 5.1 is officially released.
>> Hopefully soon, but I don't have a timeline. Releases are mostly
>> volunteer time, so they happen when they happen. We're trying to get
>> better about that, but it is hard (due to lack of incentives).
>
> I think that makes it clear, Heinz.
>
>>>>
>>>>
>>>> 2) I'm just about to figure out in the EPICS Makefiles whether the target was built with the legacy stack or the libbsd stack. That part already works.
>>>> Now I also have to adjust rtems_init.c accordingly. I still have to clean up a little bit, but I hope to have this finished by the middle of the week.
>>>
>>>
>>> Thank you so much for the update!
>>>
>>> So I would like to ask my other mentors - what can I do for the time being? What should be the next steps for this week?
>>
>> Prepare the zynq patch and try to work out the logic of how to select
>> the right patch to apply.
>>
>> Then similar logic might be usable to script the configuration changes
>> of EPICS so we don't need patches.
>
> Sure, I will do and update soon.
>
>>> And yes, how should I begin the testing part?
>>
>> Heinz suggested earlier to look at the CI test scripts
>> that the EPICS maintainers use.
>
> Yes, it slipped my mind. I will check and get back to you.
>
> Thanks
> Mritunjay Sharma
>
>>> I have tried to find some resources, but I think it would be
>>> better if you could point me to somewhere to look.
>>>
>>> Thanks
>>> Mritunjay Sharma
>>>>
>>>>
>>>>> These were the small doubts that came out of the research work I did.
>>>>> I would like the mentors' opinion on what would be the optimal way to approach the
>>>>> project after this. What could be some resources for better research on the
>>>>> above problems?
>>>>>
>>>>> Also, for the reference:
>>>>> Link to the changes in commits of rsb can be found here: https://github.com/RTEMS/rtems-source-builder/compare/master...mritunjaysharma394:epics-support
>>>>>
>>>>> The patch for epics can be found here: https://github.com/mritunjaysharma394/epics-mritunjay/tree/master/patches
>>>>>
>>>>>
>>>>> Thanks
>>>>> Mritunjay Sharma
>>>>>
>>>>>
>>>>>
>>>>>
>>>> Heinz