What does the C syntax `Type varname[integer];` do? [on hold]

I'm looking through some low-level Objective-C code and I see this:
Byte seq[termLength];
(termLength is an NSUInteger, which is an unsigned long in my environment; Byte is a UInt8, which is an unsigned char)
I'm not as familiar with the C part of Objective-C... what does this do? To my eye it looks like it creates a new array of Bytes named seq that is termLength long, without initializing the memory therein. But then later on I see this:
memcpy(seq + bufLen, pre, preLen);
I'm quite confused about this part. bufLen is an NSUInteger. How would one add an NSUInteger to a Byte[]? What would that even do?

What you are describing is a C array; since termLength isn't a compile-time constant, Byte seq[termLength] is specifically a C99 variable-length array, and your reading is right: its contents start out uninitialized. You can read about C arrays here:
https://en.wikibooks.org/wiki/C_Programming/Arrays_and_strings
In an expression, a C array decays to a pointer to its first element, so pointer arithmetic applies. If you add bufLen to seq, what you end up with is a pointer to the position bufLen bytes into seq. So (seq + bufLen)[0] is the same byte as seq[bufLen], (seq + bufLen)[1] is seq[bufLen + 1], and so on. Hopefully bufLen is less than termLength.
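To make that concrete, here is a minimal C sketch of the pattern in the question; the actual values of termLength, bufLen, and pre are invented for illustration:

#include <string.h> /* memcpy */

typedef unsigned char Byte; /* stands in for UInt8/Byte */

void example(void)
{
    unsigned long termLength = 16; /* stands in for the NSUInteger */
    Byte seq[termLength];          /* C99 variable-length array, contents uninitialized */

    unsigned long bufLen = 4;
    const Byte pre[] = { 0xDE, 0xAD, 0xBE, 0xEF };
    unsigned long preLen = sizeof pre;

    /* seq decays to a Byte*, so seq + bufLen points bufLen bytes in;
       this copies pre into seq[4]..seq[7], leaving seq[0]..seq[3] untouched. */
    memcpy(seq + bufLen, pre, preLen);
}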

Related

Passing byte array from C# to C++ DLL as char*

I am passing a byte[] from C# to a C++ DLL.
Inside the C++ DLL, I need to call a function which accepts and reads an istream object. I intend to receive the byte[] from C# as a char* and convert it to an istream.
C++ DLL
extern "C" _declspec(dllexport) bool CheckData(char* data, int dataLength)
C#
[DllImport("example.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern bool CheckData(byte[] incoming, int size);
public void Process(byte[] bytes)
{
    CheckData(bytes, bytes.Length);
}
Although it seems to work fine, I find that the equivalent data type of byte[] is unsigned char* in C++. I thought of changing to unsigned char*, but most streams in C++ work on char*, not unsigned char*.
I would like to ask:
1) Both char* and unsigned char* point to 1-byte elements; what happens behind the scenes? Is there any potential problem if I keep using byte[] with char*?
2) In case there is any problem, how should I use unsigned char* to construct an istream object?
Actually, the char * and unsigned char * types are not 1 byte but 4 bytes in size, assuming we are talking about a Win32 application: those are pointers, and all pointers have the same size regardless of the size of the data being pointed at.
When the P/Invoke mechanism sees an array of "simple values" as a function argument, it happily feeds a pointer to the start of the array to the C function underneath. After all, all it really knows about the C function from the info in the DLL is where its code starts. As far as I know, the number and types of the arguments are not encoded in the symbol name, so it trusts the info you provided. That means even if you had fed it an int array, the actual call to the C function would have worked, since the arguments pushed on the stack (a pointer and an int) match the ABI of the function. Of course the processing would probably have been wrong, as the element size wouldn't have matched.
See also https://msdn.microsoft.com/en-us/library/75dwhxf7(v=vs.110).aspx for more details about what happens.
Processing is where the difference between unsigned char and char comes in: if on the C# side you do some math on the byte values (ranging 0 to 255), then pass them to the C side where char values (-128 to 127) are expected for some more math, something could go wrong. If the C side just uses the buffer as a way to move data around, it's all fine.
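A tiny C sketch of that signedness point (assuming a platform where plain char is signed, as with MSVC on x86; the byte value 0xC8 is made up):

#include <stdio.h>

int main(void)
{
    unsigned char raw = 0xC8;  /* the byte value 200 coming from C# */
    char c = (char)raw;        /* same bits, reinterpreted as plain char */

    /* Where char is signed this prints "200 -56": identical bits,
       different interpretation once arithmetic gets involved. */
    printf("%u %d\n", (unsigned)raw, (int)c);
    return 0;
}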

When to use size_t vs uint32_t?

When to use size_t vs uint32_t? I saw a method in a project that receives a parameter called length (of type uint32_t) to denote the length of the byte data to deal with; the method calculates a CRC of the byte data received. The type of the parameter was later refactored to size_t. Is there a technical superiority to using size_t in this case?
e.g.
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(uint32_t)length;
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length;
According to the C specification:
size_t ... is the unsigned integer type of the result of the sizeof operator
So any variable that holds the result of a sizeof operation should be declared as size_t. Since the length parameter in the sample prototype could be the result of a sizeof operation, it is appropriate to declare it as a size_t.
e.g.
unsigned char array[2000] = { 1, 2, 3 /* ... */ };
uint16_t result = [self calculateCRC16FromBytes:array length:sizeof(array)];
You could argue that the refactoring of the length parameter was pointlessly pedantic, since you'll see no difference unless:
a) size_t is more than 32 bits
b) the size of the array is more than 4 GB
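A small sketch of the failure mode behind condition b), assuming an LP64 platform where size_t is 64 bits (the 5 GB length is made up):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    size_t bigLength = 5000000000ull;         /* ~5 GB buffer length */
    uint32_t truncated = (uint32_t)bigLength; /* silently wraps modulo 2^32 */

    /* Prints "5000000000 705032704": a uint32_t length parameter would
       see the wrong size for any buffer over 4 GB. */
    printf("%zu %" PRIu32 "\n", bigLength, truncated);
    return 0;
}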

Different C array declarations [duplicate]

Possible Duplicate:
sizeof array clarification
I have 2 arrays declared
GLfloat gCubeVertexData[216] = { list of numbers};
and an array declared:
GLfloat *resultArray = malloc(sizeof(GLfloat) * [arrayOfVerticies count]);
for (int i = 0; i < [arrayOfVerticies count]; i++)
{
    resultArray[i] = [[arrayOfVerticies objectAtIndex:i] floatValue];
}
Why is it that when I do sizeof(gCubeVertexData) I get 864 (a GLfloat is 4 bytes, so divide by 4 and you get 216),
but when I do sizeof(resultArray) I get 4? Even though if I were to print out resultArray[100] I get the correct number, and there are a lot more than 4 numbers stored?
Because gCubeVertexData is an array, and resultArray is a pointer. In the case of an array, the compiler knows how many bytes it is required to allocate for the array, so it explicitly knows a size (in the case of variable-length arrays in C99, it can also be computed easily at runtime, perhaps by messing with the stack pointer).
However, in the case of malloc(), the compiler has no knowledge of the size of the memory pointed to by the pointer (that size can only be obtained in non-standard, platform-dependent ways anyway), so it just returns the size of the variable itself, which is a pointer in this case; you get back sizeof(GLfloat *) in the end.
Because with sizeof(resultArray) you are getting the size of the pointer to the first element.
The type of resultArray is simply GLfloat *, i.e. "pointer to GLfloat", and your machine uses 4 bytes to store a pointer. The size of the allocation behind the pointer is not visible to the sizeof operator.
Therefore, sizeof resultArray == sizeof (GLfloat *), which is what you're seeing.
Look at the declarations of gCubeVertexData and resultArray. The first is an array with 216 elements; the latter is just a pointer. C (and thus C++ and Objective-C) lets you use pointers to access arrays, but that does not mean they have the same type.
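A self-contained C sketch of the same effect (the 4 bytes reported in the question imply a 32-bit process; a 64-bit build would print 8 for the pointer):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    float stackArray[216];                          /* a true array */
    float *heapArray = malloc(216 * sizeof(float)); /* just a pointer */

    printf("%zu\n", sizeof stackArray); /* 864: 216 elements of 4 bytes */
    printf("%zu\n", sizeof heapArray);  /* 4 or 8: size of the pointer itself */

    free(heapArray);
    return 0;
}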

Converting int[] to byte[]: How to look at int[] as if it were byte[]?

To explain: I have an array of ints as input. I need to convert it to an array of bytes, where 1 int = 4 bytes (big endian). In C++, I can easily just cast it and then access the data as if it were a byte array, without copying or counting the data - just direct access. Is this possible in C#? And in C# 2.0?
Yes, using unsafe code:
int[] arr = ...
fixed (int* ptr = arr) {
    byte* ptr2 = (byte*)ptr;
    // now access ptr2[n]
}
If the compiler complains, add a (void*):
byte* ptr2 = (byte*)(void*)ptr;
You can create a byte[] 4 times the length of your int[].
Then you iterate through your integer array and get the byte array from:
BitConverter.GetBytes(int32);
Next you copy the 4 bytes from this call to the correct offset (i * 4) using Buffer.BlockCopy.
Have a look at the BitConverter class. You could iterate through the array of int, and call BitConverter.GetBytes(Int32) to get a byte[4] for each one.
If you write unsafe code, you can fix the array in memory, get a pointer to its beginning, and cast that pointer.
unsafe
{
    fixed (int* pi = arr)
    {
        byte* pb = (byte*)pi;
        ...
    }
}
An array in .NET is prefixed with the number of elements, so you can't safely convert between int[] and byte[] that point to the same data. You can cast between uint[] and int[] (at least as far as .NET is concerned; support for this feature in C# itself is a bit inconsistent).
There is also a union based trick to reinterpret cast references, but I strongly recommend not using it.
The usual way to get individual integers from a byte array in native-endian order is BitConverter, but it's relatively slow. Manual code is often faster. And of course it doesn't support the reverse conversion.
One way to convert manually, assuming little-endian (this managed about 400 million reads per second on my 2.6 GHz i3):
byte GetByte(int[] arr, int index)
{
    uint elem = (uint)arr[index >> 2];
    return (byte)(elem >> ((index & 3) * 8));
}
I recommend manually writing code that uses bitshifting to access individual bytes if you want to go with managed code, and pointers if you want the last bit of performance.
You also need to be careful about endianness issues. Some of these methods only support native endianness.
The simplest way in type-safe managed code is to use:
byte[] result = new byte[intArray.Length * sizeof(int)];
Buffer.BlockCopy(intArray, 0, result, 0, result.Length);
That doesn't quite do what I think your question asked, since on little endian architectures (like x86 or ARM), the result array will end up being little endian, but I'm pretty sure the same is true for C++ as well.
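For comparison, the direct cast the question describes on the C/C++ side; a minimal C sketch showing the little-endian result on x86/ARM (the sample value 0x11223344 is made up):

#include <stdio.h>

int main(void)
{
    int values[] = { 0x11223344 };
    unsigned char *bytes = (unsigned char *)values; /* view the same memory as bytes */

    /* On a little-endian machine this prints "44 33 22 11":
       the least significant byte sits at the lowest address. */
    for (int i = 0; i < 4; i++)
        printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}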
If you can use unsafe{}, you have other options:
unsafe
{
    // A cast can't appear directly in the fixed initializer,
    // so pin the int[] first, then reinterpret the pointer.
    fixed (int* pi = intArray)
    {
        byte* result = (byte*)pi;
        // Do stuff with result.
    }
}

How to declare 2D byte array

I am trying to make a 2D byte array.
Can anybody give the code for declaring a zero-initialized 2D byte array in Objective-C?
Since Objective-C is a strict superset of C, you can just use a pure C definition and it will work fine. Note that a single block of width*height bytes is a plain char*, not char** (a char** would be an array of row pointers, each of which you'd have to allocate separately):
char* myMatrix = malloc(width * height);
// access element (x, y) as myMatrix[y * width + x]
You could also use an NSArray of NSArrays, but that's not a 2 dimensional array. It's a jagged array and considerably less easy to use than a plain byte array.
Another alternative is using an NSData/NSMutableData object. That is the Foundation way of working with byte arrays. See NSMutableData class reference for more information.
NSMutableData* data = [NSMutableData dataWithLength:1024]; // One kilobyte
void* dataPointer = [data mutableBytes]; // Get a pointer to the raw bytes
I'm cheating by doing this in C.
size_t width;  // assumed set elsewhere
size_t height; // assumed set elsewhere
unsigned char *twoDimArray = calloc(width * height, sizeof(unsigned char));
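If a genuinely two-dimensional, zero-initialized C object is wanted rather than a flat buffer, C99 also allows a pointer to a variable-length row type; a sketch with made-up dimensions:

#include <stdlib.h>

int main(void)
{
    size_t width = 16, height = 8;

    /* One zeroed allocation, indexable as grid[y][x] thanks to the
       pointer-to-VLA type; released with a single free(). */
    unsigned char (*grid)[width] = calloc(height, sizeof *grid);
    if (grid == NULL)
        return 1;

    grid[2][3] = 0xFF; /* row 2, column 3 */

    free(grid);
    return 0;
}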
