Thursday, October 30, 2008
My attention ended up divided after visiting a friend to show off the cave flying game. Because of our brainstorming, I now feel more like I should be doing Internet-based games instead. So I've been working on that. Luckily I had already worked on this for a good while a month ago, so I have some infrastructure in place: a general idea, and a server set up to host the lobby, which is the place that lists the existing games so you can pick one to join. I have two different games partially working, but on the client side I've only gotten far enough to resolve a domain name; I haven't even figured out how to transfer data yet. But how hard can that be?
Yesterday I figured out how the login system should work. After weighing several options, it seems that a normal login/password will have to do. I cannot tie accounts to a certain iPhone based on device ID alone, because a person might change their device to an iPhone 4G or whatever, or someone else might want to play on the same device. I hate asking people to create accounts, because I hate creating accounts myself, so maybe I will have an option to play as a guest. In that case your score won't be saved, though, so you'll start from zero each time.
Of course the best option would be to always start off as a guest, and then only after you've made some progress to have the option to lock that guest account as your own account... hmm...
Tuesday, October 28, 2008
Game design time
I've decided to continue on the cave flying game, at least for now. Well, truth be told, I've spent the entire day just slacking off and reading game design related articles. Lost Garden has an interesting presentation called Mixing Games and Applications. It skims some common game mechanics, so it was an interesting thing to read while trying to decide what kinds of levels my game should have and how the learning curve should go. I've never been a fan of tutorials inside games, and reading that presentation made me even more determined that the user should be allowed to discover how to play, instead of being explicitly told what to do.
In my prototype, I can tilt the phone on two axes to move the ship along the X and Y coordinates. In the proto the only activity is obstacle avoidance. If I merely throw the player into this with different kinds of obstacle courses, it would seem to be quite boring. Instead I should put them on a nice curve where they learn new things and are challenged after each level. I don't think I need new types of things on every level; sometimes I'll probably get away with reusing older challenges and just increasing the speed and number of obstacles a bit, but new elements should be introduced at times.
I'm even considering restricting movement to the X axis at first, pinning the player to the bottom of the screen. Perhaps I can then surprise them in a later level with the ability to also fly up and down. How to show them that they can now move their ship along the Y axis without explicitly telling them to tilt the phone, I'm not sure.
Monday, October 27, 2008
Collision detection works!
2D collisions seem to be working well. In this case they are a bit more accurate than using a bounding sphere. So filled with enthusiasm (okay, stock market fear, but somewhere deep down there was some enthusiasm too) I went to show my project in its current state to a friend. What was his reaction? It was: "omg have you gone mad, you bought a mac, traitor!". It took a while for the situation to recover from that, but eventually he recognized that the iPhone is a pretty cool platform.
Sadly he wasn't all that into my project, and instead we started to brainstorm what I should REALLY be doing. That isn't so bad, as coding this has mostly been a learning experience. We agreed that it should be something with clear mass appeal, and something challenging enough that there would be a bit less competition. We figured that maybe most developers are not as comfortable with network programming as we are, so we should make an Internet-based game. As a bonus, there don't seem to be very many of those on the platform yet.
But should I really just abandon this project I've been working on? It's been my experience that if you always abandon what you are doing when you discover something even better to do, you end up never completing anything. I have to admit, though, that these lobby-based games would seem to have much wider appeal.
Thursday, October 23, 2008
Collision detection thoughts
So now that I have my method lovingly called "getTrianglesTransformedByCurrentOpenGLMatrix", which does seem to produce identical results with accelerated transforms, how do I use that for my collision detection? Well, for the needs of this game I would like to know if the spaceship is going to collide with the next obstacle or not. I would like to know that even before the collision happens, so that I can warn the user. Then when the obstacle is near enough and if the player has not adjusted their position, the ship should explode.
Before, I was planning to do this properly and actually see whether the polygonal objects intersect or not, but a friend convinced me otherwise. It won't matter, as long as it works well enough that the play experience isn't disturbed by it. So I will instead just have a two-dimensional collision volume for the ship and disregard the Z coordinate in the collision detection. I think I'll place this collision volume at the base of the ship, because that part is most visible to the player and any error there would be too glaring.
My obstacles are very low-poly, but I have some power-ups that may be smaller than the ship itself. If I do the detection with simple is-vertex-inside-any-triangle tests, then I should probably subdivide the collision volume to add some extra vertices, so that a power-up can't just slide through the ship because no vertex of the ship happened to fall inside any of the power-up's triangles.
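The 2D core of that test is the classic point-in-triangle check. A minimal sketch of the kind of test I mean (plain C, names are mine): the point is inside if it's on the same side of all three edges, which the sign of a 2D cross product tells us.

// Is point (px,py) inside the 2D triangle (x0,y0)-(x1,y1)-(x2,y2)?
// Works for either winding order.
static float edgeSide(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax); // z of the 2D cross product
}

int pointInTriangle(float px, float py,
                    float x0, float y0, float x1, float y1, float x2, float y2) {
    float s0 = edgeSide(x0, y0, x1, y1, px, py);
    float s1 = edgeSide(x1, y1, x2, y2, px, py);
    float s2 = edgeSide(x2, y2, x0, y0, px, py);
    int hasNeg = (s0 < 0) || (s1 < 0) || (s2 < 0);
    int hasPos = (s0 > 0) || (s1 > 0) || (s2 > 0);
    return !(hasNeg && hasPos); // all signs agree (or on an edge) => inside
}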
Matrices from OpenGL, without OpenGL
"For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification."
This at least clarifies the order of the values given to me by the glGetFloatv call. Now, if I have an x,y,z vertex, how do I transform it by the returned matrix? I found a mention on the web that I'm supposed to divide by W. But if I don't have a W to begin with, then what should it be? Hmm, it makes sense that it should be one. Now I should be able to do the multiplication. Let's see if I'll manage to introduce a bug here:
// vertex[0..3] and newVertex[0..3] are float[4]; vertex[0..2] holds x,y,z.
// Multiply by the column-major matrix m, then divide the components by w.
vertex[3] = 1; // w
for (int i = 0; i < 4; i++) {   // four components, indices 0..3
    newVertex[i] = m[i]*vertex[0] + m[4+i]*vertex[1] + m[8+i]*vertex[2] + m[12+i]*vertex[3];
}
newVertex[0] /= newVertex[3];
newVertex[1] /= newVertex[3];
newVertex[2] /= newVertex[3];
Edit: Wow, it works.
Collision detection continued
I've been working on, or at least thinking about, the collision detection problem for the past few days, at least while not distracted by the financial crisis. I'm an eternal optimist and have been buying stock regardless of the downturn, but it has not changed direction yet, and it makes me almost physically nauseous to watch my money disappear at an alarming pace from my etrade account. So I tend to log on to etrade and click refresh refresh refresh instead of working.
One slight problem I encountered before I could even begin to test for collisions is that I only have access to local coordinates, but I need world coordinates. Normally the local -> world transformation is performed by OpenGL, but it is not possible to access the transformed coordinates, because they only exist in the 3D accelerator chip for an instant. AFAIK I now have to ask OpenGL to give me the matrix (glGetFloatv), gather all vertex coordinates from the meshes and then do the matrix multiplication myself. Currently I'm really confused about the order of components in the matrix given to me by OpenGL. Also I'm not sure what to do with the extra row and column that matrix has. I suspect it is about the "w component" which I have to somehow multiply or divide x, y, z by, but I'm not sure exactly how.
Until I understand this, I suppose any attempt to code this will just result in a tangled mess.
Ralph Waldo Emerson
Poet/philosopher Ralph Waldo Emerson seems to be a startuppy kind of guy. I enjoyed this quote particularly:
"What I must do is all that concerns me, not what the people think. This rule, equally arduous in actual and in intellectual life, may serve for the whole distinction between greatness and meanness. It is the harder, because you will always find those who think they know what is your duty better than you know it. It is easy in the world to live after the world's opinion; it is easy in solitude to live after our own; but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude."
"What I must do is all that concerns me, not what the people think. This rule, equally arduous in actual and in intellectual life, may serve for the whole distinction between greatness and meanness. It is the harder, because you will always find those who think they know what is your duty better than you know it. It is easy in the world to live after the world's opinion; it is easy in solitude to live after our own; but the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude."
Tuesday, October 21, 2008
Another way to do the collision detection
Here's another idea I had for detecting the collision. I'm not sure how to do the line-triangle intersection detection though, so whether this is simpler would depend on that.
Read a bit on the subject. It seems to be simple. To know if a line segment defined by two points goes through a triangle, first you check if the segment crosses the plane defined by the triangle. This is actually cleverly easy: see if the start point of the segment is on the opposite side of the plane from the end point. But hmm... somehow I need to know the intersection point to do the point-in-triangle check after that...
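Having read a bit more, the intersection point falls out of the same signed distances used for the side test. A sketch of the whole check in plain C; the helper names are mine, and it assumes the segment isn't lying in the plane:

typedef struct { float x, y, z; } vec3;
static vec3 sub(vec3 a, vec3 b) { vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static vec3 cross(vec3 a, vec3 b) { vec3 r = {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; return r; }
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Does the segment from a to b cross triangle (t0,t1,t2)?
int segmentHitsTriangle(vec3 a, vec3 b, vec3 t0, vec3 t1, vec3 t2) {
    vec3 n = cross(sub(t1,t0), sub(t2,t0));   // triangle normal (not normalized)
    float da = dot(n, sub(a,t0));             // signed distance of a from the plane
    float db = dot(n, sub(b,t0));             // signed distance of b from the plane
    if (da * db > 0.0f) return 0;             // both endpoints on the same side
    if (da - db == 0.0f) return 0;            // segment lies in the plane; skip this degenerate case
    float t = da / (da - db);                 // parametric hit point along the segment
    vec3 p = { a.x + (b.x-a.x)*t, a.y + (b.y-a.y)*t, a.z + (b.z-a.z)*t };
    // p is on the triangle's plane; check that it lies inside all three edges
    if (dot(cross(sub(t1,t0), sub(p,t0)), n) < 0.0f) return 0;
    if (dot(cross(sub(t2,t1), sub(p,t1)), n) < 0.0f) return 0;
    if (dot(cross(sub(t0,t2), sub(p,t2)), n) < 0.0f) return 0;
    return 1;
}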
iPhone tunnel game progress
It's been a few days, so how is the game coming along? Quite well, actually. I took a step back to think about how I could have multiple levels of content. If I set everything in code, it would be too laborious to create any meaningful amount of play. I came up with a simple level system that allows me to make each level a single text file that events can easily be added to.
The ship slides forward in the level at variable speed and there is a certain draw distance that the program tries to maintain. If it notices that an object mentioned in the level file has come into draw distance, then it makes an instance of it. At first it did this by loading the model file from disk (or is it flash ram?), but that created a one-frame pause in the game when an object was loaded, so I had to preload everything in the beginning of a level, and then just make references to the already in-memory objects when they come into view.
Currently the level file has just two kinds of lines: either the tunnel graphics should change at some depth, or an obstacle should appear at some depth. This seems to work well now; I created a level about 10 seconds long with various obstacles appearing that the player can avoid by tilting the device. It's not clear from this whether or not this would be an enjoyable game, but I think it might be. Obstacle avoidance is a pretty common game element, and players do seem to enjoy it.
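To make that concrete, here's what a level file in this spirit might look like, with a matching parse loop. The format and all names here are invented for illustration; my real file differs:

#include <stdio.h>
#include <string.h>

/* Hypothetical level file, one event per line:
     tunnel   0    rock      -- tunnel graphics change at depth 0
     obstacle 120  grid      -- obstacle "grid" appears at depth 120
     obstacle 180  bars
     tunnel   250  lava
*/
void loadLevel(FILE *fp) {
    char kind[16], name[64];
    float depth;
    while (fscanf(fp, "%15s %f %63s", kind, &depth, name) == 3) {
        if (strcmp(kind, "tunnel") == 0) {
            // queue a tunnel texture change at this depth
        } else if (strcmp(kind, "obstacle") == 0) {
            // queue a (preloaded) obstacle model to appear at this depth
        }
    }
}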
I've now come to a sort of mental block. The player cannot crash into the obstacles; they just slide through them. I feel that the collision detection code is absolutely crucial to get right. If the player feels that collisions aren't handled properly, they may feel betrayed by the game. If you die, it should be your own fault, not the fault of inadequate collision detection. But 3D collision detection is not an easy problem. Luckily in my case the player object is very simple, and the obstacles are totally flat.
I was really happy that OpenGL was doing all the matrix operations for me, but now it's coming back to bite me. To do collision detection, I need to know where the vertices are in world space. So I think I'll have to write matrix multiplication code that mimics what OpenGL is doing, so I can get the post-transform data. Once I have the ship and an obstacle in world space, I should be able to see where the flat obstacle is in relation to the ship, then take a Z-slice of the ship at that point. After this the collision detection becomes a 2D issue of seeing whether the flat obstacle collides with a flat slice of the ship.
I also plan to have spherical power-ups and bonuses that can be picked up. For those it could be sufficient to see if any vertex of the world-space ship is inside the sphere.
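That test is cheap. A sketch, comparing squared distances to skip the square root (function and parameter names are mine):

// Is any world-space ship vertex inside the pickup sphere?
// verts holds x,y,z triplets; (cx,cy,cz) and r describe the sphere.
int shipTouchesSphere(const float *verts, int vertexCount,
                      float cx, float cy, float cz, float r) {
    float r2 = r * r;
    for (int i = 0; i < vertexCount; i++) {
        float dx = verts[i*3+0] - cx;
        float dy = verts[i*3+1] - cy;
        float dz = verts[i*3+2] - cz;
        if (dx*dx + dy*dy + dz*dz <= r2) return 1; // squared distance beats sqrt()
    }
    return 0;
}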
Thursday, October 16, 2008
iPhone 3D object spinning retrospect
Wow, it works. Last time I was trying to outline what I would need to get a 3D object loader and displayer working, and now about three days later it works -- I have a mushroom I created in Meshworks spinning smoothly on the iPhone. Quite pretty. Now let's see what I listed three days ago and see how it panned out.
I thought I would need bitmaps for the textures and a mapping of which texture to use with which mesh. Well, of course it's more complicated than that: I realized I would also need texture coordinates for each vertex. I decided that texture mapping is not important at this point; I cannot allow myself to become one of those people who tinker on a 3D engine in their spare time. No, this has to become a playable game as fast as possible, and texture mapping usually isn't totally essential to gameplay.
I figured I'd need to have a list of meshes. Now I have a nice 3D object class, each object of which contains 3D mesh instances. Each mesh then contains a vertex list and additionally the color of the mesh, which I could easily get from the file I parse. I was worried about the vertex etc. data loader being complex, but with some shortcuts it is actually easy to get that information out of a WRL file output by Meshworks. I didn't attempt to write a general WRL reader; mine only understands the specific output of Meshworks, so if there is extra whitespace in the wrong place, it won't work. That means I've decided to stick with Meshworks, even this particular version of it.
I supposed there would be a list of vertices, then another list of triangles referring to the vertex list. That's indeed how it is in the WRL file. I made the unnecessary move of expanding that data into a plain polygon list with no shared vertices, but it turns out OpenGL ES can draw from the indexed data directly.
I had totally ignored lighting in my original list. To know the brightness of each polygon, I had to specify where the lights are, and the material properties, like how strong specular highlights should be on a surface. And to be able to compute these things, of course OpenGL then wanted to know where the surface normals are pointing. I tried to refer to my linear algebra text, but in the end did the copypasta PHP coder thing and just copied the normal calculation routine from some sample code. Well, maybe I mistyped something, but I had to tweak it for hours before it actually calculated the normals correctly. It was really difficult to debug, because just looking at float values in a debugger it's not so easy to say if a vector is pointing in the correct direction.
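For reference, the core of what that routine does is just a cross product of two edge vectors, normalized. This is a sketch from memory, not the exact code I copied:

#include <math.h>

// Face normal of triangle (p0,p1,p2), each a float[3], written into n.
void faceNormal(const float *p0, const float *p1, const float *p2, float *n) {
    float u[3] = { p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2] };   // edge p0 -> p1
    float v[3] = { p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2] };   // edge p0 -> p2
    n[0] = u[1]*v[2] - u[2]*v[1];                             // cross product
    n[1] = u[2]*v[0] - u[0]*v[2];
    n[2] = u[0]*v[1] - u[1]*v[0];
    float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; } // normalize
    // Note: the winding order decides which way the normal points, which is
    // exactly the kind of thing that made my version hard to debug.
}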
Another thing I ignored was setting up the projection to look OK. When you create a sample project in Xcode, initially it sets you up with 2D projection. All the sample code on the net refers to some GLU functions to set up a perspective projection, but those are missing from my framework. I guess the right thing to do may have been to again learn from my lin. algebra text how to REALLY do it, but instead I again just copied a working projection matrix from an example. Just too eager to get this project forward!
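Having since understood what gluPerspective actually builds, here's a sketch of an equivalent matrix for glLoadMatrixf, in OpenGL's column-major layout. The helper is my own, not a GLU call:

#include <math.h>

// Fill m[16] (column-major) with a perspective projection,
// equivalent to gluPerspective(fovyDegrees, aspect, zNear, zFar).
void perspectiveMatrix(float m[16], float fovyDegrees, float aspect,
                       float zNear, float zFar) {
    float f = 1.0f / tanf(fovyDegrees * (float)M_PI / 360.0f); // cot(fovy/2)
    for (int i = 0; i < 16; i++) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}
// Usage: glMatrixMode(GL_PROJECTION); glLoadMatrixf(m);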
I've learned a great deal about OpenGL in the past 3 days, and it's exciting to be able to rather easily display 3D objects now. Hopefully this will be useful later, and not just a distraction. I'll try to blog some more about my progress soon.
Monday, October 13, 2008
Graphics time - 3D
I did some profiling and noticed that around half of the time was spent in the rand() calls. I was also writing data 8 bits at a time; changing that to 32 bits immediately boosted performance, but CPU usage was still 100% and the FPS was only around 13, fluctuating depending on background processes. If what is in fact happening is that Quartz is making a texture of my bitmap, uploading it to the GPU and rendering it that way, then I would actually be closer to the metal by just using OpenGL ES directly. This is a bit scary for me though, as I don't really know anything about OpenGL. I know some basics about vectors, matrices etc., but the biggest thing I've done is a rotating cube (which did take 3rd place in a Javascript competition though, haha).
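For the record, the shape of the fix: write the framebuffer 32 bits at a time with a cheap inline generator instead of four rand() calls per pixel. The xorshift generator here is my illustration, not necessarily what I ended up with:

#include <stdint.h>

static uint32_t rngState = 0x12345678u;   // any nonzero seed

static inline uint32_t xorshift32(void) { // a few shifts and xors, no libc call
    rngState ^= rngState << 13;
    rngState ^= rngState >> 17;
    rngState ^= rngState << 5;
    return rngState;
}

// Fill the framebuffer with noise, one 32-bit write per pixel.
void fillNoise(uint32_t *pixels, int count) {
    for (int i = 0; i < count; i++)
        pixels[i] = xorshift32();          // vs. four rand()%256 calls per pixel
}
// Usage: fillNoise((uint32_t *)bitmap, 320*480);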
I read some introductory text, but the concept of "shaders" bothers me. What the heck is a shader? I remember checking it on wikipedia before, but the best I could understand is that it's some kind of routine executed on the GPU against a vertex, or maybe they can be executed per-pixel too? Just guessing from the names "vertex shader" and "pixel shader". But what is a "fragment shader"? No idea. Wikipedia: "A pixel shader is a shader program, often executed on a graphics processing unit. It adds 3D shading and lighting effects to pixels in an image, for example those in video games. Microsoft's Direct X and Open GL support pixel shaders. In OpenGL a pixel shader is called a fragment shader." Ah, just a synonym.
Now, I have to admit it would be very sexy to display some 3D models of my own. But seriously, no more cubes! I've done so many of them. Always a cube on a new platform, then I get bored and make another cube a year later on another platform. Na-ah, should be a proper model at least now if I try this at all. But how do I get models with some data easy enough to load? I'm scared. Stuff I imagine I will need to load:
- bitmaps of the textures
- list of meshes
- which texture goes with which mesh
- coordinates of vertices in each mesh
- which vertices form polygons
Then to spin a 3D object...
- load object & textures
- do opengl magic to let it know about list of vertices and polygons
- maybe enter into some texture modes before each mesh? dunno.
- to spin, perhaps alter the object space -> world space transformation matrix?
- will opengl remember my vertex list etc. or do I tell it again on each frame? no idea.
So you can see I'm a bit confused about this. Let's see if there is some simple modeling tool for mac.
Saturday, October 11, 2008
Graphics time
I've been attending demo scene events for years, so I have a certain respect for software rendered gfx effects. Now I'm curious about how to push pixels on the iPhone, so let's see how far I can get with that tonight!
First up: diving into the Core Graphics documentation. Hmm... tried to check how many colors the iPhone screen can actually display. The specs on Apple's page don't mention it. It certainly looks like more than 64k colors, but it must be less than full 24-bit color, or otherwise they would prominently advertise it as a feature. Just wondering if my framebuffer should be 24 bits to make it as native as possible.
... 6 hours pass ...
I created and displayed my first raw bitmap data! Feel free to copy my code (please note it turned out to be too slow for much use). As a disclaimer I just got this to display anything without crashing minutes ago, so there's likely something still wrong with the code. Here's the init part.
// bitmap and ir live on in the class as instance variables:
//   unsigned char *bitmap;
//   CGImageRef ir;
CGDataProviderRef provider;
bitmap = malloc(320*480*4);   // 320x480 pixels, 4 bytes per pixel (RGBX)
provider = CGDataProviderCreateWithData(NULL, bitmap, 320*480*4, NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
ir = CGImageCreate(
    320,                        // width
    480,                        // height
    8,                          // bits per component
    32,                         // bits per pixel
    4 * 320,                    // bytes per row
    colorSpaceRef,
    kCGImageAlphaNoneSkipLast,  // RGBX: ignore the alpha byte
    provider,
    NULL,                       // decode array
    NO,                         // don't interpolate
    kCGRenderingIntentDefault
);
And then when I want to show a buffer:
for (int i = 0; i < 320*480*4; i++) {
    bitmap[i] = rand() % 256;   // fill with white noise, one byte at a time
}
CGRect rect = CGRectMake(0, 0, 320, 480);
CGContextDrawImage(context, rect, ir);
My only problems now are that I'm obviously leaking memory by not deallocating anything (I should at the very least free() the bitmap data), and that my code lives in drawRect and only runs once. I don't know how to get the screen to refresh. Also I have no idea if this will be fast enough to refresh at 30fps, but I'm guessing it should be. It scares me a bit that I can't really know what unnecessary hoops this code is jumping through on the iPhone, since I'm not really getting a raw display buffer pointer but instead going through some classes that do who knows what before the data actually ends up on the screen.
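For my own notes, here's roughly what I think the fixes look like; untested sketches, so caveat emptor. CGImageCreate retains the provider and color space, so those can be released right away, and an NSTimer can mark the view dirty so drawRect: runs again:

// Right after CGImageCreate: the image now holds its own references.
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);

// On teardown (bitmap must outlive ir, since the image reads from it):
CGImageRelease(ir);
free(bitmap);

// Refresh: schedule a repaint ~30 times a second, e.g. in awakeFromNib.
[NSTimer scheduledTimerWithTimeInterval:1.0/30.0 target:self
        selector:@selector(tick) userInfo:nil repeats:YES];

- (void)tick {
    [self setNeedsDisplay];   // queues another drawRect: call
}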
I discovered another adventure gamish thing that you can do in the Interface Builder - sometimes it's possible to drag code files from Xcode to IB to get IB to notice they exist. I would have never thought about even trying that, just saw it mentioned on another blog. Still trying to wrap my head around the relationship between Xcode and IB.
Ugh! I tried running the above code on a real iPhone device and was only getting around 5fps! Clearly I'm doing something wrong, the iPhone is definitely powerful enough to push pixels if I just figure out a better way to do the updates. But right now I'm too sleepy to think about anything except maybe getting some quality time with HL2DM before getting some sleep =)
Thursday, October 09, 2008
Interface Builder strikes again
Just spent hours on a simple thing I couldn't understand. I had a tab view controller in Interface Builder, and in my own code I had a class called FirstViewController. To reference this, I figured I should add a FirstViewController type view controller into Interface Builder as well. Hilarity ensued, as unbeknownst to me the tab view controller was already instantiating a FirstViewController (not sure how that works), so I had TWO instances of the same controller. At first I was really perplexed how on earth my instance variables were suddenly changing values in the debugger; then I happened to notice that the address of "self" was different.
I just wanted my FirstViewController to be the delegate of a picker object, but now I had one instance being the delegate and a different instance doing everything else. After learning my mistake I was trying to hunt down the extra instance, and noticed that one of the tabs in the tab controller was already declaring itself a FirstViewController. Well, how to reference that, since the class isn't visible in MainWindow.xib alongside the other stuff? It took a bit longer to realize I can drag delegate references not only to the xib window, but also to certain visible controls! Felt like one of those LucasArts adventure games where you miss a puzzle because you don't notice that a certain object is clickable.
Audio works!
Can't believe it's only been two days, because I feel like I've been battling with iPhone audio forever. I was trying to set it up, but somehow my callback was never called, and I was starting to lose hope. I tried to keep things minimal, but it turned out I was keeping them too minimal, because I was neglecting to prime my sound buffers. I thought I wouldn't need to prime them; that I could just start playback and then fill the buffers as they are requested from the callback function. Turns out the callback is only called when a buffer runs out, and since I had added no buffers, it never got called!
To keep the callback function simple, I thought I would just create noise with rand() and fill the buffer with that instead of reading from a file. Again I neglected something important: setting bufferReference->mAudioDataByteSize. It was 0 by default, so the sound system must have figured there is no more sound to play. After fixing that I heard the sweetest sound ever: white noise being played from my phone!
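To sketch the working setup from memory (so treat the details as approximate): a 16-bit mono PCM queue whose callback fills each buffer with rand() noise, primed by hand before AudioQueueStart:

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

static void noiseCallback(void *userData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
    SInt16 *samples = (SInt16 *)buffer->mAudioData;
    UInt32 count = buffer->mAudioDataBytesCapacity / sizeof(SInt16);
    for (UInt32 i = 0; i < count; i++)
        samples[i] = (SInt16)(rand() & 0x7FFF) - 16384;           // white noise
    buffer->mAudioDataByteSize = buffer->mAudioDataBytesCapacity; // the field I forgot!
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

void startNoise(void) {
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, noiseCallback, NULL, NULL, NULL, 0, &queue);

    // Priming: enqueue a few filled buffers first, or the callback never fires.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 8192, &buffer);
        noiseCallback(NULL, queue, buffer);   // fill + enqueue by hand
    }
    AudioQueueStart(queue, NULL);
}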
Next up: learning how to use picker view to select sound waveform.
Tuesday, October 07, 2008
Next step: audio
Now that the first test app works and I somewhat understand what outlets are, I wonder what would be the next step? At least I should know how to have multiple views and change between them using a tab bar or similar control. So I should learn basic navigation.
As a brief detour, though, I am curious about how recording sound works. I have some ideas for apps that need sound recording, upload and download, but I'm a bit concerned that it might be difficult. At least the network part. How do I know if the net connection is on? How do I show a progress bar for download/upload? Should there be a cancel button in case the transfer is taking forever, for example if it happens over plain GPRS? What about compression, is there some basic compression algorithm included in the API?
I recall seeing some example code about sound recording, let's dig that up.
The example is called SpeakHere. It seems there is no simple recordAudioOKThxBye-style function; you have to stream the audio to a file yourself. Fair enough; it doesn't seem to be all that complicated to do, and it's probably something I will eventually have to learn anyway. There seems to be PCM encoding built in. I saw a passing mention of MP3. I wonder which formats the iPhone supports. MP3 would be sweet for shortening transfer times and also for using the same files later when playing back from Flash, but is it possible?
Read up on "Audio Queue Services". Documentation mentioned the following "kAudioFormatMPEGLayer3 - MPEG-1/2, Layer 3 audio. Uses no flags. Available in iPhone OS 2.0 and later.", so it would appear the encoder is present. It's a bit overwhelming to set all of the structures at once and hope that I don't miss any vital flags or attributes, so I'll try to start with something really simple. Simplest thing I can imagine is setting up a callback function for sound playback and just fill the buffers with rand(), hopefully white noise can then be heard from the speaker.
Found a useful tutorial on the subject.
First test app works!
Phew, took a nice while to wrap my mind around how the controls work, but now I have a small app with three text fields constantly updating with the accelerometer data. For extra credit I added an image too which moves based on the accelerometer data.
Funny "bug" I had was registering to receive accelerometer events, then not receiving any. For the life of me I couldn't understand why. I was running the app in the simulator at this time. Went to meditate on this by pwning some noobs on Half-Life 2 DM and after coming back and looking at it again it was stupidly obvious - it's a SIMULATOR. It HAS NO accelerometer! So after running it on the real device it worked just fine :-)
Funny "bug" I had was registering to receive accelerometer events, then not receiving any. For the life of me I couldn't understand why. I was running the app in the simulator at this time. Went to meditate on this by pwning some noobs on Half-Life 2 DM and after coming back and looking at it again it was stupidly obvious - it's a SIMULATOR. It HAS NO accelerometer! So after running it on the real device it worked just fine :-)
How Xcode and Interface Builder relate
I'm starting to understand now how Xcode ties in with the Interface Builder.
My first confusion was this: in the main function when UIApplicationMain is created, how can it know what its delegate is when it is not explicitly mentioned in the arguments? Answer: It's mentioned in the MainWindow.xib file. This file is an XML file which is turned into a "nib file" later (when building?). Double clicking on it in Xcode brings up the Interface Builder. Clicking on "file's owner" and then pressing apple-shift-I brings up the Inspector, where I could then see the delegate -› MoveMeAppDelegate relationship (wtf had to press alt-b to get the › character).
Next I'd like to understand how to reference things set from the Interface Builder from my code. Specifically, how to change the text in a label? What identifies the label in my code?
[24 hours pass]
Okay wow, somehow that was really tough to figure out. To change the text in a label, I needed to get a reference to the label object. I was really confused trying to drag a line from "referencing outlet" to somewhere, with nothing accepting the drag. Turns out this is where IBOutlet comes into play. I had to have IBOutlet UILabel *label; in the class I am dragging to; then the drag is accepted (although at one point I seemed to sense a delay before Interface Builder realized the drag could be accepted?).
So the controller that accepted a drag from a textfield "referencing outlet" looks like this:
@interface ThreeFieldsViewController : UIViewController {
    IBOutlet UILabel *label;
}
@property (nonatomic, retain) IBOutlet UILabel *label;
@end
Then additionally in the .m file I had to @synthesize label. Didn't check if it would work without that. Actually, it would be interesting to test if the Interface Builder code that gets generated just sets the attribute directly, or calls setLabel? Let's see. Yep, setters and getters are called if and only if there is a referencing outlet.
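In other words, roughly:

// ThreeFieldsViewController.m
@synthesize label;   // generates -label and -setLabel: backed by the ivar

// These two lines are equivalent; the dot notation calls the setter:
self.label = someLabel;
[self setLabel:someLabel];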
As a bonus I discovered that if you make a method and tag it IBAction, you can drag action references from components to that in Interface Builder. Not sure if there are some interesting arguments passed that could somehow be read. Next up: trying to make an app with three labels that get updated by a timer with data from the accelerometer.
Monday, October 06, 2008
Interface Builder
For someone who hasn't coded much, I think it's a bit dangerous to start with a graphical interface builder in an IDE. It gives you the wrong idea that everything is really easy: just drag and drop stuff and BAM (channeling Steve Jobs here)! Of course you'll end up spending most of the time (as you should) in the actual code, and building interfaces will just be a short break. At school we had tools like this, and there were people who confused building applications with designing their interfaces; for them it was a shock how much work there was underneath, beyond just dragging stuff to build the interface.
With this in mind I am approaching Interface Builder a bit carefully, almost trying not to have too much fun with it. It does of course make sense to use it. I could create all the components in code, and I almost prefer to do so, but I have to admit that it must be faster to use this tool if I can just learn to use it properly. I want to get stuff done fast, therefore I must learn this. So I've started it up and started dragging stuff around. At this point I still don't understand how this ties into Xcode. I do know there are some "nib files" and that the controls can be loaded from them, somehow relating to the initWithCoder method.
The goal for tonight will be to understand how to create some simple text labels in the Interface Builder and then how to set the text to those labels from Xcode.
Sunday, October 05, 2008
iPhone dev 17
Oh lord, I just discovered that curly braces require one extra keystroke on the Finnish keyboard layout on the Mac. Somehow the layout isn't the familiar one from Windows. I would use the USA layout, but then writing Scandinavian characters would be a pain. I have to press alt-shift-8 to get a curly brace!
Spent hours today trying to find out why a sample application won't run on the iPhone. Turned out in my Info.plist file the bundle identifier was the same as with another app, so it wouldn't install another one with the same id.
Next I challenged myself to create a small app which would have three textfields that display the raw data coming from the accelerometers. I got stuck early -- I wanted to use a timer to fire an event at certain intervals. Spent a very long time trying to find info in the docs. Looked at some sample applications, but they were more hardcore and had actual threads to do the timing. Then finally NSTimer was mentioned in a forum post.
timo = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(onTimer) userInfo:nil repeats:YES];

- (void)onTimer {
    int test;
    test = 10;   // dummy body, just somewhere to set a breakpoint
}
Disappointed a bit that I couldn't finish this dead simple app in one evening. I'm starting to get a feeling where iPhone development falls on the difficulty scale. Maybe 5 times easier than Symbian development, but still 2-3 times more time consuming than Flash development.
Hello World in Flash from nothing: 5 minutes
Hello World on the iPhone simulator: 2 hours
Hello World on the iPhone: 6 hours (mostly figuring out app signing, device id blah blah issues)
Hello World in Symbian: coder pronounced dead at the hospital due to massive internal bleeding
Code Signing Provisioning Profile
Came across a problem that I noticed others were also having on some forums.
When you create a provisioning profile in Apple's portal, then download it and try to select it under Target Info, it often isn't there. You click on Code Signing Provisioning Profile > Any iPhone OS Device, but it doesn't show up in the list. I'm not sure if there is a cleverer way to do this, but I can get it to show up by right-clicking and selecting "show definitions", then replacing the hexadecimal values shown with what I find in /Users/YOURUSERNAMEHERE/Library/MobileDevice/Provisioning Profiles. When I then right-click again and select "show values", it's there.
Hope this helps someone =)
iPhone dev 16 - memory management
Found an article about the memory management issues, and noticed that I am indeed making a mistake. After allocing and initing an object, I am increasing the retain count by one. This is not necessary; the retain count is already one at this point.
The second mistake I think I am making is failing to release my textfielddelegate and UITextField. Maybe I could use autorelease?
But what happens when I set the delegate by doing testText.delegate = x? Is the retain count of x incremented by the delegate setter method? The API docs show that the property is declared like so:
@property(assign) id delegate
"assign Specifies that the setter uses simple assignment. This is the default."
Okay, so it would seem that the retain count does not get incremented, which means it is my responsibility to release the object at the end. Great, seems to work!
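So the corrected version of my earlier snippet would look roughly like this:

MyTextFieldDelegate *x = [[MyTextFieldDelegate alloc] init]; // retain count already 1
UITextField *testText = [[UITextField alloc] initWithFrame:testFrame];
testText.delegate = x;   // assign property: no retain happens here

// ... later, when tearing down (e.g. in dealloc):
[testText release];
[x release];             // balances the alloc; the extra [x retain] is gone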
iPhone dev 15 - How do I monitor the events that the textfield sends? Where does it send them to?
Last time I wondered what happens with the textfield events. Now I've read a bit more about delegates and how to use them in practice. Controls have a "delegate", called that because application-specific behavior is delegated to it. For textfields the delegate protocol is UITextFieldDelegate. To implement protocols, angle brackets are used when declaring a class, so here is the code I used in my declaration in MyTextFieldDelegate.h:
#import <UIKit/UIKit.h>
@interface MyTextFieldDelegate : NSObject <UITextFieldDelegate> {
}
- (BOOL)textFieldShouldReturn:(UITextField *)exTextField;
@end
Then the implementation in MyTextFieldDelegate.m is very simple:
#import "MyTextFieldDelegate.h"
@implementation MyTextFieldDelegate
- (BOOL)textFieldShouldReturn:(UITextField *)exTextField {
[exTextField resignFirstResponder];
return YES;
}
@end
I found that "resignFirstResponder" line on some Mac development forum. I tried to read the docs to learn what it means. The docs say, unhelpfully, that it makes something release its first responder status, whatever that is. In plain English, I discovered, it means that it makes the soft keyboard disappear, at least in this case.
Above you see the delegate class, but that wouldn't do much if the textfield didn't know about its delegate. I think this part must be slightly wrong because I'm never releasing the objects I create, but as an initial test this worked:
MyTextFieldDelegate *x = [[MyTextFieldDelegate alloc] init];
[x retain]; // (unnecessary, as it turns out -- see iPhone dev 16 above: alloc/init already leaves the count at 1)
UITextField *testText = [[UITextField alloc] initWithFrame:testFrame];
testText.delegate = x;
The dot notation surprised me. After all the talk about automatically synthesized getters and setters, I expected to actually have to call methods to set variables. After reading the docs a bit more, it turns out that I actually AM calling a method here! The dot notation in Objective-C is exactly the same as calling [testText setDelegate:x]; the dot is just a shortcut. This is very clever, because it allows you to expose properties conveniently, but at the same time, if necessary, allows you to run code when they are accessed.
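That last part means a hand-written setter would still be triggered by the dot syntax. A sketch (the NSLog call is just for illustration):
- (void)setDelegate:(id)newDelegate {
    NSLog(@"delegate is being set"); // arbitrary code runs even when callers write obj.delegate = x
    delegate = newDelegate;          // the plain assignment itself
}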
I'm starting to like this more and more, but memory management still confuses me. How do I see what I am failing to release? I don't want to leak memory on someone's iPhone.
Saturday, October 04, 2008
iPhone dev 14 - my first control!
Created my first control in the MoveMe sample program, in the file MoveMeAppDelegate.m, method applicationDidFinishLaunching. Defined the size of a text field as a CGRect like so:
CGRect testFrame = CGRectMake(10, 10, 100, 100);
Next I created the object itself:
UITextField *testText = [[UITextField alloc] initWithFrame:testFrame];
To see it, I had to do some extra magic that somehow adds it as a subview. I had to call this after the other similar calls to make sure it's not obscured:
[window addSubview:testText];
It wasn't immediately obvious that the field had even appeared, but when clicking on the upper left, where it is supposed to be, a keyboard did pop up and I was able to type. Stuff I'd like to understand next:
- When am I supposed to release this field?
(in dealloc do [testText release] maybe? nope, didn't work.)
- Why when creating UIViewController it is stored in self, and then released? Won't that destroy it? Apparently not.
- How do I output some debug text?
- How do I monitor the events that the textfield sends? Where does it send them to?
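For reference, the whole experiment in one piece (inside applicationDidFinishLaunching):
CGRect testFrame = CGRectMake(10, 10, 100, 100);                       // x, y, width, height
UITextField *testText = [[UITextField alloc] initWithFrame:testFrame]; // create the control
[window addSubview:testText];                                          // attach it to the window, after the other addSubview calls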
Friday, October 03, 2008
iPhone dev 13
Already the fifth day of development, with not much to show for it. I'm going through the MoveMe sample application now. Yesterday I got an OpenGL ES sample running. So cool that a little device like this actually runs OpenGL. I don't yet grasp the structure of the programs very well. I know that there is some function you call in main, which starts off the message loop, and apparently your own code goes into a delegate class.
Learned that building an Xcode project turns it into a "bundle", which is a directory on the iPhone that contains code and data. Since I don't believe anyone reads this (I'm just talking out loud to concentrate better on the task), I guess I can reveal what it is that I'd like to build. I thought the iPhone would be great for the drawing in a "draw and guess" game. I have a Flash version of it mostly working on MySpace (haven't released it though). You draw something, others guess what it is; others draw, you guess. Guessers and drawers get points; repeat. People seem to like that game, and it would be cool to draw with your finger. I'm now pondering whether iPhone users should only be able to draw, or to guess too. Maybe it's my unfamiliarity with the iPhone soft keyboard, but it seems too painful to seriously use as part of the game. Maybe I can let players choose a game mode. But if everyone is just drawing, will there be enough people left to guess?
Another game which occurred to me after getting this phone is one where you are given a topic, then have to try to photograph that thing in a very limited time using the camera. For example "take a picture of a fork!", then you rush out to your kitchen (hopefully connected to the net with WiFi), take the picture. Judging whether people took pics of forks or not would be done peer-to-peer. You get a picture and have to decide whether it is a fork or not. Maybe with some Slashdot style meta-judgement too to make sure you are giving correct judgements. Well, just a crazy idea.
iPhone dev 12
The sample app is now running on my iPhone! It took a good while to wrap my head around the app signing procedure, and I don't think I completely understand it even now. I had to get a certificate (for signing my code?), put my name in some settings, and create some kind of "provisions" (some kind of combination of everything else), an app id and other stuff too. The important thing is that it runs, and I can now concentrate on coding. Perhaps I will have to understand this properly if I get other members on my team, or maybe deployment won't work without understanding it (although I hope it will).
It's now 5 am, and this could be a good point to get some sleep, at least after the Biden - Palin debate. Xcode seems awesome and I'm excited to learn more about using it.
iPhone dev 11
Couldn't even sleep with all this new stuff beckoning me to hack some more. Trying to install the iPhone SDK, but I realized I don't even know where installed software ends up on a Mac. Managed to open a terminal and started building the locate db in order to find it. It was also weird that the pipe character moved to alt-7. Not complaining; I was getting bored with my current system, and this is refreshing. The db is built, and locate says the SDK is in /Developer, but how do I start the IDE?
Oh yeah, those strange plus and minus signs in method declarations indicate whether the method is a class method or an instance method.
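In other words (a sketch, using methods NSString really has):
+ (id)string;          // plus: a class method, called on the class itself, as in [NSString string]
- (NSUInteger)length;  // minus: an instance method, called on an object, as in [myString length]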
iPhone dev 9 - got the hardware
Today all the hardware arrived: an iPhone, a WLAN box and a Mac Mini. This stuff really was disruptive to getting things done -- I haven't felt this much like a child since... well, since I was a child! Walking to the post office to get the iPhone almost turned into a run because I couldn't wait to play with it. Then when I had everything, stuff worked really well right out of the box. I noticed, though, that I accidentally got a 2G iPhone instead of a 3G one, but that doesn't really make any difference for development (goodbye to my plans of starting to use VoIP with Asterisk, though). I feel like I've joined some cult now that I have this stuff. I even read the holy texts of Apple -- namely the folklore about pirate flags and Woz's pranks. I have truly joined the dark side.
I have tried to get back to reading the Cocoa fundamentals documentation even while I feel a bit giddy and would prefer to just play around with this stuff. I have to remind myself I got these for a purpose -- to develop an app for the iPhone which will then pay for this expensive hardware. So with this in mind I've muddled through the fundamentals documentation, but it's getting awfully abstract. Well, of course it is abstract, because I just reached the design patterns part. Fun to read about abstractions in something which in itself already feels abstract to me at this point (Cocoa programming). I think for my motivation's sake I should try to get something going with Xcode while I read on.
Thursday, October 02, 2008
iPhone dev 8
My iPhone and Mac Mini will arrive today. Might be disruptive to my Cocoa study.
"Sometimes using a protocol can avoid subclassing". Not sure what that means, not sure what "delegates" are. Code is in .m files, headers in .h files. Saw how to declare classes. Instead of C-style "#include", "#import" is used instead. It's like require_once in PHP.
#import "Superclass.h"
-- function and data type declarations --
@interface ClassName : Superclass {
    -- instance variables --
}
-- method and property declarations --
@end
The .m file could then look something like this:
#import "ClassName.h"
@implementation ClassName
-- stuff --
@end
If I see "IBOutlet" in code later, it is somehow related to "nib files" and Interface Builder synchronizing with Xcode. Vague at this point. The documentation mentioned that on the iPhone the applicationWillTerminate method gets called when the app shuts down, and that it is the place where state should be saved.
Getters and setters can be automatically synthesized. "copy" and "retain" tell whether object values should be copied into the property, or whether the pointer should be stored instead and the retain count incremented. Something very strange was mentioned about "KVB", "KVC" and "KVO" that I had no idea about.
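From what I gathered, the declarations look something like this (a sketch with made-up property names):
@property (copy) NSString *title; // the setter stores a copy of the incoming string
@property (retain) UIView *view;  // the setter retains the new value (and releases the old one)
Then @synthesize title, view; in the .m file generates the actual accessor methods.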
Cool thing: in printf-style format strings (NSLog and friends) you can say %@ and then provide an object, and the string returned by that object's "description" method will be inserted at that point. There was a page about threads. It said exceptions should be handled within each thread; they cannot propagate out of the thread that raised them. It talked about how error-prone thread programming is, and that I should copy data and try to minimize possible conflicts arising from shared data. Events should just be handled by the main thread, and UIKit objects should only be used from the main thread. I imagine I may use threads with socket programming. It also said not all Cocoa classes are thread safe.
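So something like this, I assume (myThing is a made-up object):
NSLog(@"the object is: %@", myThing); // inserts whatever [myThing description] returns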
"Sometimes using a protocol can avoid subclassing". Not sure what that means, not sure what "delegates" are. Code is in .m files, headers in .h files. Saw how to declare classes. Instead of C-style "#include", "#import" is used instead. It's like require_once in PHP.
#import
-- function and data type declarations --
@interface ClassName : Superclass {
-- instance variables --
}
-- method and property declarations --
@end
The .m file could then look something like this:
#import "ClassName.h"
@implementation ClassName
-- stuff --
@end
If I see "IBOutlet" in code later, that is somehow related to "nib files" and the Interface Builder synchronizing with Xcode. Vague at this point. Documentation mentioned that on the iPhone the applicationWillTerminate method gets called when the app shuts down and is the place where state should be saved.
Getters and setters can be automatically synthesized. "copy" and "retain" tell whether object variables should be copied or if the pointer should be stored instead and retain count incremented. Something very strange was mentioned about "KVB", "KVC" and "KVO" that I had no idea about.
Cool thing: in printf strings you can say %@ and then provide an object, and at that point any string returned by that object's "description" method will be inserted. There was a page about threads. Said exceptions should be handled by each thread, cannot be thrown away from thread. Talked about how error-prone thread programming is, that I should copy data and try to minimize possible conflicts arising from shared data. Events should just be handled by main thread, also UIKit objects should only be used in main thread. I imagine I may use threads with socket programming. Said not all Cocoa classes are thread safe.
iPhone dev 7
There is a Windows-style event loop. On the Mac it lives in NSApplication, and on the iPhone it's in UIApplication. In AppKit.h there is a function, NSApplicationMain, that creates the application object, sets up an autorelease pool, loads the UI from something called a "nib file" (apparently a file that contains files, maybe even directories?) and starts handling events. On the iPhone the equivalent function is called UIApplicationMain.
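On the iPhone side this seems to boil down to the main.m the project template generates (quoting from memory, so treat it as a sketch):
#import <UIKit/UIKit.h>
int main(int argc, char *argv[]) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    int retVal = UIApplicationMain(argc, argv, nil, nil); // starts the event loop; returns when the app exits
    [pool release];
    return retVal;
}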
@"test" is shorthand for creating an NSString that contains "test". In some cases empty string @"" can mean no value / default value. String literals shouldn't be used as dictionary keys? Setter methods are called setSomeVariable, but getters are just "someVariable". Typical framework usage: create subclass, override methods to implement own functionality. Cocoa uses MVC.
@"test" is shorthand for creating an NSString that contains "test". In some cases empty string @"" can mean no value / default value. String literals shouldn't be used as dictionary keys? Setter methods are called setSomeVariable, but getters are just "someVariable". Typical framework usage: create subclass, override methods to implement own functionality. Cocoa uses MVC.
Wednesday, October 01, 2008
iPhone dev 6
Init may return a different object than was allocated. For example, in the singleton case it may return the already existing object. For this reason you should always use the object returned by init. Objective-C seems to support exceptions (or is it a Cocoa feature? I'm confused about the distinction). Self, super. Strange plus and minus signs near method declarations -- maybe the plus signs have something to do with factories? Noted in the explanation of the "respondsToSelector" introspection method that it tells you whether an object responds to a certain method -- so "selector" does indeed mean a method? "autorelease" was mentioned many times, but I don't know what that is. Section about class clusters: a public superclass with many private subclasses, and you instantiate the subclasses through factory methods in the superclass -- for example a Number superclass which can create Ints, Floats and so on. Skipping the sections about class cluster details and "creating a singleton instance"; I'll come back to them if the need arises to create my own cluster objects or singletons, it's just too tiring to read about them now.
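So the safe pattern is apparently to nest the calls and keep only what init hands back (a sketch with a made-up class):
MyClass *a = [MyClass alloc];
[a init];                            // risky: ignores init's return value, which may be a different object
MyClass *b = [[MyClass alloc] init]; // safe: keeps whatever init actually returned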
iPhone dev 5
SEL is the data type of a selector, but I couldn't really understand what selectors are. Are they methods? Reference counting is called "retain counting". On alloc the retain count is 1. If the retain count reaches zero, the "dealloc" method gets called on the object, and after that the memory is released. If you "copy" an object, the retain count (usually?) becomes one for the copy. There are things called "autorelease pools", but their use is discouraged on the iPhone. Somehow everything in the pool gets released at the same time, and somehow objects can be added to such a pool without directly referencing the pool by name (at least the sample code looked like that). AppKit on the Mac has some kind of autorelease pool already created at the beginning. There are conventions on when to call release on objects: if you created the object, you should also release it; if you got it from somewhere else, you shouldn't. There was something about releasing objects created by class factories that I didn't understand. alloc -> init -> usable object. In addition to allocating memory, the alloc method also sets a cool explicit "isa" property on the object that points to the object's class, and zeroes all the instance variables.
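The lifecycle, as I understand it so far (a sketch with a made-up class):
MyClass *obj = [[MyClass alloc] init]; // retain count starts at 1
[obj retain];                          // 2
[obj release];                         // 1
[obj release];                         // 0 -> dealloc gets called, then the memory is freed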
iPhone dev 4
@property is syntax for declaring properties whose getter and setter methods can then be created automatically. Enumeration of sets can be done with the nice "in" syntax, as in some other languages. Calling object methods has a bit of a strange syntax: [object method]. It's also possible to give named arguments, but I'm not sure whether the part before each : is a method name or an argument name: [object keyword1:something keyword2:somethingelse]. Where is the method name? Is it "something"? Not sure. NSObject is the root class of everything, and defines some methods like init (constructor?) and reference counting (retain, release).
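Looking at a real call from the frameworks, I now think the colons belong to the method name (a sketch; dictionary and player are made up):
[dictionary setObject:player forKey:@"winner"]; // the method's full name is setObject:forKey: -- player and @"winner" are the arguments
for (id key in dictionary)                      // and this is the "in" enumeration syntax
    NSLog(@"key: %@", key);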
iPhone dev 3
NSObject is the root class for Cocoa classes. Stuff starting with the UI prefix is UIKit related. Objective-C has garbage collection as of version 2.0, but it cannot be used on the iPhone because of performance. Cocoa classes seem cleanly designed. I didn't encounter a regexp class, although I didn't check whether it's among NSString's methods. The event mechanism in iPhone UIKit differs from the Mac Application Kit. Looked at Objective-C example code; saw lots of weird square brackets. The "id" datatype can hold any Cocoa object, so it's convenient for enumeration. Dynamic typing, binding, loading. New feature: "categories". By dropping mysterious @ marks in strategic places in your code, you can add methods to existing classes without subclassing. Protocols are like Java interfaces.
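Categories apparently look something like this (a sketch; the isBlank method is made up):
@interface NSString (MyAdditions) // adds a method to every NSString, no subclassing needed
- (BOOL)isBlank;
@end
@implementation NSString (MyAdditions)
- (BOOL)isBlank {
    return [self length] == 0;
}
@end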