dmaengine: Use sg_dma_len(sg) instead of sg->length

sg->length may or may not contain the length of the DMA region to transfer,
depending on the architecture - sg_dma_len(sg) always will, though. For the
architectures which use the drivers modified by this patch it is probably the
case that sg->length contains the DMA transfer length, but to be consistent
and future-proof, change them to use sg_dma_len(sg).

To quote Russell King:
	sg->length is meaningless to something performing DMA.

	In cases where sg_dma_len(sg) and sg->length are the same storage, then
	there's no problem. But scatterlists _can_ (and on some architectures do)
	get split - especially when you have an IOMMU which can allow you to
	combine a scatterlist into fewer entries.

	So, anything using sg->length for the size of a scatterlist's DMA transfer
	_after_ a call to dma_map_sg() is almost certainly buggy.
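
For illustration only (this sketch is not part of the patch), the intended
pattern is to map the scatterlist first and then program the hardware from
the mapped entries, using sg_dma_address()/sg_dma_len() and the entry count
returned by dma_map_sg(); program_descriptor() below is a hypothetical
driver-specific helper:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical helper that programs one hardware descriptor. */
static void program_descriptor(dma_addr_t addr, unsigned int len);

static int setup_sg_transfer(struct device *dev, struct scatterlist *sgl,
			     unsigned int sg_len,
			     enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i, nents;

	nents = dma_map_sg(dev, sgl, sg_len, dir);
	if (!nents)
		return -ENOMEM;

	/*
	 * Iterate over the mapped entries; nents may be smaller than sg_len
	 * when an IOMMU has coalesced the list.
	 */
	for_each_sg(sgl, sg, nents, i)
		program_descriptor(sg_dma_address(sg), sg_dma_len(sg));

	return 0;
}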

The patch has been generated using the following Coccinelle semantic patch:
<smpl>
@@
struct scatterlist *sg;
expression X;
@@
-sg[X].length
+sg_dma_len(&sg[X])
@@
struct scatterlist *sg;
@@
-sg->length
+sg_dma_len(sg)
</smpl>
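
For reference, the transformation can be reproduced by saving the rule above
to a .cocci file and running spatch on the driver sources, e.g. something
like "spatch --sp-file sg_dma_len.cocci --in-place drivers/dma/mxs-dma.c"
with a recent Coccinelle (option spelling varies between versions).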

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
diff --git a/drivers/dma/mxs-dma.c b/drivers/dma/mxs-dma.c
index 655d4ce6..3db3a48 100644
--- a/drivers/dma/mxs-dma.c
+++ b/drivers/dma/mxs-dma.c
@@ -415,9 +415,9 @@
 		ccw->bits |= BF_CCW(MXS_DMA_CMD_NO_XFER, COMMAND);
 	} else {
 		for_each_sg(sgl, sg, sg_len, i) {
-			if (sg->length > MAX_XFER_BYTES) {
+			if (sg_dma_len(sg) > MAX_XFER_BYTES) {
 				dev_err(mxs_dma->dma_device.dev, "maximum bytes for sg entry exceeded: %d > %d\n",
-						sg->length, MAX_XFER_BYTES);
+						sg_dma_len(sg), MAX_XFER_BYTES);
 				goto err_out;
 			}
 
@@ -425,7 +425,7 @@
 
 			ccw->next = mxs_chan->ccw_phys + sizeof(*ccw) * idx;
 			ccw->bufaddr = sg->dma_address;
-			ccw->xfer_bytes = sg->length;
+			ccw->xfer_bytes = sg_dma_len(sg);
 
 			ccw->bits = 0;
 			ccw->bits |= CCW_CHAIN;