libi2c: extra parameters needed for low-level SPI driver

Arnout Vandecappelle arnout at
Wed Jun 9 09:25:54 UTC 2010

 Hi all,

 I'm porting our old RTEMS 4.7 application to the upcoming RTEMS 4.10.  In 
the process, I'd like to use the new libi2c rather than our own hacked-
together SPI driver.  However, I'm running into a bit of difficulty when it 
comes to setting the channel parameters.

 Just to jog your memory, libi2c uses the following procedure to perform an 
SPI transfer:

 send_start(bus);                    // Acquire bus lock
 ioctl(bus, SET_TFRMODE, &mode);     // Set channel parameters
 send_addr(bus, cs, 1);              // Enable chip select cs for read/write
 ioctl(bus, START_READ_WRITE, &buf); // Do a read/write transfer
 ioctl(bus, START_READ_WRITE, &buf); // (may be repeated)
 send_stop(bus);                     // Release bus lock

 The SPI peripheral of our SoC has 4 pre-defined chip selects, each with its 
own register to configure channel parameters.  Also, there are more channel 
parameters than what is mentioned in the tfrmode struct, e.g. delay between 
chip select and start of clock.

 With the current libi2c interface, we can't use these predefined channel 
parameters because at the time the SET_TFRMODE ioctl is done, the chip 
select to use is not known yet.  So we would have to program all four chip 
select register sets with the same parameters on every SET_TFRMODE ioctl.

 A second problem is that when I use a high-level i2c device driver, e.g. 
spi-sd-card, it doesn't set the additional delay parameters.

 A third problem is that I need to use more than the four pre-defined chip 
selects, i.e. some of the GPIOs are used as SPI chip selects.  Up to now, we 
used a callback for this that is set by the high-level device driver and 
that is called by the SPI driver at the time it does a send_addr.  With 
libi2c, I would need to pre-allocate a mapping from addresses to callbacks 
to perform the actual chip select.

 I see two possible solutions for these issues:

1. A major rewrite of the libi2c interface, where the SET_TFRMODE ioctl and 
send_addr are replaced by a single

 send_device_addr(bus, device_mode, 1);

and where device_mode is a struct like this:

typedef struct {
  uint32_t channel;
  void (*set_chip_select) (uint32_t address);
  rtems_libi2c_tfr_mode_t tfr_mode;
} rtems_libi2c_device_mode_t;

 The structure can be extended with low-level driver private data (e.g. my 
delays).  High-level driver configuration structures (e.g. 
sd_card_driver_entry) can have a pointer to this struct.

 Note that the high-level driver is obliged to use the 
rtems_libi2c_send_device_addr(...) function, else there is no way to pass 
the device parameter.

2. Use a fixed-size table in the low-level driver to map addresses to 
device-specific information in a structure like rtems_libi2c_device_mode_t.  
The SET_TFRMODE ioctl would then be ignored by the low-level driver.  
Instead, the BSP must pass the device-specific information to the low-level 
driver with a custom low-level function.  

 This requires no change to the libi2c interface.  However, it does mean 
there is no reuse of the scheme between different low-level drivers.  Also,  
it means the low-level driver must have a fixed-size mapping table similar 
to the driver table in libi2c.

 Does anybody have other suggestions?  Do you prefer one solution over the 
other?  Are there issues I didn't consider in either solution?


Arnout Vandecappelle                               arnout at mind be
Senior Embedded Software Architect                 +32-16-286540
G.Geenslaan 9, 3001 Leuven, Belgium                BE 872 984 063 RPR Leuven
GPG fingerprint:  31BB CF53 8660 6F88 345D  54CC A836 5879 20D7 CF43
