
Unity3D Game on Android Platform Computer Science Essay

Paper Type: Free Essay | Subject: Computer Science
Wordcount: 3552 words | Published: 1st Jan 2015


Unity3D is a popular 3D game engine of recent years, particularly suitable for independent game developers and small teams. It is cross-platform game development software whose product line includes Unity, Unity Pro, Asset Server, iOS, iOS Pro, Android and Android Pro. Without writing complex code, programmers can quickly build a scene using Unity3D's visual integrated development environment.


Unity3D provides special tools that make programming a game easier. For example, platform-related operations are encapsulated internally, the complex relationships between game objects are managed through different visual views, and games can be scripted in JavaScript, C# or Boo. A script is automatically compiled into a .NET DLL file, so the three scripting languages have essentially the same performance, executing roughly 20 times faster than traditional interpreted JavaScript. These scripting languages also have good cross-platform support: developers can deploy games to platforms such as Windows, Mac, Xbox 360, PlayStation 3, Wii, iPad, iPhone and Android, and games can even run on the Web by installing a plug-in. There are many techniques for developing Unity3D games, and each has its drawbacks.


This part summarizes existing methods related to my research work on Unity3D games on the Android platform.

1. Augmented Reality and Image Recognition Based Framework for Treasure Hunt Games (2012)

Zsolt Balint, Botond Kiss, Beata Magyari and Karoly Simon introduced new possibilities in game creation and customization [1]. They proposed a framework that supports the development of these types of games and opens up the exciting possibilities offered by virtual content. The framework is based on a client-server architecture, and its complexity is relatively high, as it is composed of several subsystems. Client applications have been developed for the iOS and Android mobile platforms. While playing, the checkpoints provided by the server are displayed on a map, and a proposed route is marked to help the user find and reach them. The Android client uses the Mina framework to establish the connection with the server, and communication between the server and the mobile client applications is based on the TCP/IP protocol. The main advantage of Mina is efficient concurrency handling: thousands of client requests can be served at the same time using a single execution thread on the server side, so no expensive thread-creation operations are needed when new clients connect. The proposed software consists of three subsystems: the server, the Android client and the iOS client.
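Mina's single-thread concurrency model rests on non-blocking I/O multiplexing, which Java provides in the standard library through java.nio selectors. The sketch below uses plain java.nio rather than Mina's actual API; the echo protocol, class name and port handling are illustrative. It shows one thread accepting and serving a client from a single event loop:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

// Minimal single-threaded echo server using a Selector: one thread
// multiplexes accept and read events for any number of clients,
// which is the concurrency model Mina builds on.
public class SelectorEcho {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // A client on another thread, only to exercise the server in this demo.
        Thread client = new Thread(() -> {
            try (SocketChannel ch = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
                ch.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
                ByteBuffer buf = ByteBuffer.allocate(64);
                ch.read(buf);
                buf.flip();
                System.out.println("client got: " + StandardCharsets.UTF_8.decode(buf));
            } catch (IOException e) { e.printStackTrace(); }
        });
        client.start();

        boolean served = false;
        while (!served) {                       // the single-threaded event loop
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {       // new connection: no new thread needed
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {  // data arrived: echo it back
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ch.read(buf);
                    buf.flip();
                    ch.write(buf);
                    served = true;
                }
            }
            selector.selectedKeys().clear();
        }
        client.join();
        server.close();
    }
}
```

Mina layers codecs, filter chains and handler callbacks on top of exactly this kind of selector loop, which is why thousands of connections can share one server thread.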

The development of a new game requires the customization of some framework elements. A framework for the development of augmented-reality-based adventure games for the Android and iOS platforms has been proposed. Thanks to the software architecture, a high number of simultaneously connected clients can be served, and the system can be easily maintained and upgraded. By customizing specific elements of the framework, online treasure and scavenger hunt games can be easily created and published. The searched "treasures" are part of the augmented reality; in this way the cost of game creation is significantly reduced compared with games that use real objects. Furthermore, augmented reality offers many other interesting possibilities, such as the transformation of scenes. Image recognition methods are used for "treasure" localization, so objects can be localized more precisely and the application can be used inside buildings.

2. A Study and Implementation of Graphics Engine on Android Platform (2012)

Cuixia Ni, Guang Jin and Xianliang Jiang developed a new graphics engine [2]. Their testing environment was Android and their testing tool was the Android Virtual Device. The engine can be divided into three modules: a text sprite module, an animation module and an auxiliary module. The text sprite is the text content displayed in a scene. Owing to the improvement of games and high expectations from players, game scenes have moved from static to dynamic; players expect the scenes to be dynamic and variable. In this paper, the authors used a timer, tiled textures and some basic transformations to realize those functions. A Timer Handler class sets a series of times for events and triggers them at the given moments. A Physics Handler class is the basis of the movement effects; its functions are to animate objects according to simple physical rules and to make interaction effects more lifelike, which gives players a better experience.
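The Timer Handler idea above can be sketched as follows; the class and method names are illustrative stand-ins, not the paper's actual API. Events are registered with a delay and fired once the accumulated game time passes them:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a timer handler: schedule(delay, action) registers an event,
// and update(dt), called once per frame, fires events whose time has come.
public class TimerHandler {
    private static class Timed {
        final double fireAt; final Runnable action; boolean fired;
        Timed(double fireAt, Runnable action) { this.fireAt = fireAt; this.action = action; }
    }
    private final List<Timed> events = new ArrayList<>();
    private double elapsed = 0.0;

    public void schedule(double delaySeconds, Runnable action) {
        events.add(new Timed(elapsed + delaySeconds, action));
    }

    // Called once per frame with that frame's delta time.
    public void update(double dt) {
        elapsed += dt;
        for (Timed t : events) {
            if (!t.fired && elapsed >= t.fireAt) { t.fired = true; t.action.run(); }
        }
    }

    public static void main(String[] args) {
        TimerHandler timers = new TimerHandler();
        timers.schedule(0.5, () -> System.out.println("spawn enemy"));
        timers.schedule(1.0, () -> System.out.println("play animation"));
        for (int frame = 0; frame < 4; frame++) timers.update(0.3); // 4 frames of 0.3 s
    }
}
```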

3. An Innovative ICT Service Creation Approach based on IMS and Android Collaboration ( 2011)

Chung-Shih Tang, Yi-Kai Chiang, Chin-Yuen Twu, Ying-Dian Tsou and Gong-Da Fan proposed a typical network architecture comprising an IMS core network for voice communication and an Android Apps network for application download and execution [3]. The paper describes how SIP clients in Android CPEs register to the IMS core, which is mainly composed of the Home Subscriber Server (HSS) and the S-CSCF, and communicate with each other through IMS session control. At the same time, the Android CPEs can download applications from any Android Apps market. Some applications are stand-alone and fully executed on the local Android CPE; others may need the Android CPE to cooperate with a corresponding third-party AP server. The authors proposed and implemented an integrated ICT service creation platform to support the development of IMS-initiated collaborative Android applications. Based on the proposed approach, telecom operators can not only operate and manage their own Android markets but also make the best use of their telecom services and thus increase their competitiveness. Introducing such an ICT service creation platform into an IMS-based NGN is expected to be of great advantage. The significant effects they describe include: (a) revenue gained from Android app download and execution; (b) enhanced utilization of existing telecom resources; (c) improved subscriber retention for telcos; (d) a new style of use case for integrated ICT Android apps; and finally (e) no need for large investment in an IMS-based NGN.

4. Design and Implementation of the Game Engine based on Android Platform (2011)

Wu Yan-hui, Yao Xia-xia and He-Jin proposed a game engine design that begins by analyzing the operating mechanism of the game [8]. Taking Android as an example, they analyzed the design principles behind a game engine on Android and how the engine executes. From a programming point of view, a game on the Android platform has an activity class, a process control class, a game thread class and some other game objects. The activity is the basic execution unit of a game and is responsible for controlling its life cycle: starting the game, pausing it, exiting it, and so on. The process control class provides a way to switch between multiple interfaces (such as the start screen, main menu, game scene and help information), through which users control the running of the game. The game thread class continuously cycles through events that may occur, calculates the game state and refreshes the screen. The events a game program handles fall into two types: those generated by hardware devices (such as a key press) and those produced by internal objects in the game program (such as a collision between game objects). Game objects are the actual moving entities of the game; each needs to define corresponding pictures and actions to perform in the game, and when a special event occurs a game object executes the corresponding action according to the running logic of the game. To enhance the running effect, components such as players, timers and labels are often used. More complex games can also be divided into a number of units according to the game plot, with each unit treated as a game scene. To improve running efficiency, the authors introduced JNI into the development of the game engine.
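The game-thread loop the authors describe (cycle through pending events, advance the game state, refresh the screen) can be sketched as below. The Event type, the posted "key pressed" event and the bounded frame count are illustrative stand-ins, not Android classes or the paper's code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a game thread: each loop iteration drains the event queue,
// advances the state, and would then redraw the screen.
public class GameThread implements Runnable {
    interface Event { void apply(GameThread g); }

    private final Queue<Event> events = new ArrayDeque<>();
    private volatile boolean running = true;
    private int frame = 0;

    // Events come from hardware (key presses) or from game objects (collisions).
    public void post(Event e) { synchronized (events) { events.add(e); } }

    @Override public void run() {
        while (running) {
            // 1. handle pending events
            synchronized (events) {
                Event e;
                while ((e = events.poll()) != null) e.apply(this);
            }
            // 2. advance the game state
            frame++;
            // 3. refresh the screen (a draw call would go here)
            if (frame >= 3) running = false;   // bounded only for this demo
        }
        System.out.println("frames simulated: " + frame);
    }

    public static void main(String[] args) throws InterruptedException {
        GameThread g = new GameThread();
        g.post(x -> System.out.println("key pressed event handled"));
        Thread t = new Thread(g);
        t.start();
        t.join();
    }
}
```

On Android the loop would run on a dedicated thread separate from the UI thread, with the activity's life-cycle callbacks starting and stopping it.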

5. Adaptive Classification and Strategy Making System for Android Soccer Games (2012)

Ming-Yuan Shieh, Juing-Shian Chiou and Chun-I Ko proposed the signal-processing procedures for an android robot soccer game [5]. When the game starts, top-view image sequences of the field are first captured by a global CCD camera; the objective features of the robots, the ball and environmental objects such as the goal are then determined through digital image-processing procedures. These features supply the strategy-making subsystem with data for path planning, game-strategy decisions and movement-control commands. Two steps precede the proposed image-processing procedures. The first is a global image indexing scan, which determines the centre locations of objects such as the ball and the robots. The second is a further scan for local recognition of objects, which distinguishes whether an object is the ball, a teammate robot or an opponent. The visual subsystem in the proposed system mainly consists of a CCD camera, a frame-grabber card, attached modules and digital image-processing programs; its key goal is to determine the parameters of every object in the game field, such as the locations of the ball, the robots, the goal and the sidelines. The authors describe the sequence of image-processing procedures used in the visual subsystem and implemented a vision-based android robot soccer system for an actual FIRA AndroSot game. A visual subsystem based on an omni-directional vision system recognizes objective features, while the host computer performs the image processing, strategy making and control-command decisions, and sends appropriate commands to every android robot to kick the ball, track it or defend. Based on a SOMNN scheme, the object segmentation is insensitive to varying illumination, and the adaptive background-generation algorithm provides proper salient-motion detection.
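The global image indexing scan can be illustrated by computing an object's centre as the centroid of the pixels carrying its colour label. The tiny label array below stands in for a captured frame, and the labelling itself (colour segmentation of the raw image) is assumed to have happened already; all names are illustrative:

```java
// Sketch of the "global image indexing scan": find an object's centre
// as the centroid of pixels matching its colour label (1 = "ball").
public class GlobalScan {
    static int[] centroid(int[][] labels, int target) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < labels.length; y++)
            for (int x = 0; x < labels[y].length; x++)
                if (labels[y][x] == target) { sumX += x; sumY += y; count++; }
        if (count == 0) return null;            // object not on the field
        return new int[] { (int) (sumX / count), (int) (sumY / count) };
    }

    public static void main(String[] args) {
        int[][] frame = {
            {0, 0, 0, 0, 0},
            {0, 1, 1, 0, 0},
            {0, 1, 1, 0, 0},
            {0, 0, 0, 0, 0},
        };
        int[] c = centroid(frame, 1);
        System.out.println("ball centre: (" + c[0] + ", " + c[1] + ")");
    }
}
```

The second, local-recognition pass would then examine a small window around each centre to decide whether the object is the ball, a teammate or an opponent.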

6. Research on Key Technologies Base Unity3D Game Engine (2012)

Jingming Xie developed the component model applied in Unity3D games, which provides a scalable programming architecture [6]. Unity3D has many predefined components, and programmers can combine some of them to create a feature-rich GameObject. In game development, besides directly using GameObjects predefined in Unity3D, programmers can create an empty GameObject holding the position, rotation and scale of an object and then add scripts or other components to it. To make it easier to manage game objects of the same type, Unity3D provides the Prefab, a template-like technology; a Prefab can contain both objects and game resources such as 3D models. A game contains a number of scripts. A script is a class controlling the behaviour of the game, and it should inherit from a base class called MonoBehaviour, which defines common event-triggering methods: when a predefined event occurs, the appropriate method is executed automatically. The four Android-related classes distributed in Unity3D are AndroidInput, AndroidJNIHelper, AndroidJNI and AndroidJavaObject.
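Unity scripts are written in C#, JavaScript or Boo, so the Java sketch below only illustrates the component pattern the paper describes: a GameObject is a container, and attached components, like MonoBehaviour subclasses, have event methods that the engine calls automatically. All class and method names here are illustrative stand-ins, not Unity's API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the GameObject/component pattern. Component plays the role
// of MonoBehaviour: start() runs once on attach, update() every frame.
public class ComponentModelDemo {
    static abstract class Component {
        void start() {}
        void update() {}
    }

    static class GameObject {
        final String name;
        final List<Component> components = new ArrayList<>();
        GameObject(String name) { this.name = name; }
        void addComponent(Component c) { components.add(c); c.start(); }
        void update() { for (Component c : components) c.update(); }
    }

    // A "script": moves its object a little each frame.
    static class Mover extends Component {
        float x = 0;
        @Override void update() { x += 0.5f; System.out.println("x = " + x); }
    }

    public static void main(String[] args) {
        GameObject player = new GameObject("player"); // like an empty GameObject
        player.addComponent(new Mover());             // attach a script component
        for (int frame = 0; frame < 2; frame++) player.update();
    }
}
```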


They generated random NPCs using a dynamic algorithm. NPC is the abbreviation of non-player character. Dynamically changing the number and location of NPCs keeps players strongly interested in a game and extends its vitality. NPCs can be designed according to the difficulty and logic of a game, and this algorithm can be called whenever NPCs need to be created. There are two main ways of firing bullets in game development. One is to model the bullet as a true 3D object decorated with texture and colour; this approach must control the flight path of the bullet and destroy it when it goes out of bounds. The other is to use ray tracing: a ray colliding with an object means a bullet has hit it. To achieve more realistic effects, cartridge cases pop out or sparks flash at the muzzle when bullets are fired; moreover, drawing light between the launch point and the target point simulates the firing of bullets even more realistically. In a 3D shooting game, a sight is usually provided to the player, who moves the mouse to aim and fire. The principle of this technique is to place a graphic sight at the centre of the screen; moving the mouse actually moves the camera, which creates the illusion of the sight moving across the screen. In fact, in the game code the coordinates of the camera are the real target coordinates.
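The dynamic NPC-generation idea can be sketched as below. The difficulty formula, the play-area bounds and the fixed random seed are illustrative assumptions, not taken from the paper:

```java
import java.util.Random;

// Sketch of dynamic NPC generation: the NPC count scales with a
// difficulty level, and each NPC gets a random position in the play area.
public class NpcSpawner {
    public static void main(String[] args) {
        Random rng = new Random(42);           // fixed seed for a repeatable demo
        int difficulty = 3;
        int count = 2 + 2 * difficulty;        // harder game -> more NPCs (assumed formula)
        double width = 100.0, height = 100.0;  // play-area bounds (assumed)
        for (int i = 0; i < count; i++) {
            double x = rng.nextDouble() * width;
            double y = rng.nextDouble() * height;
            System.out.printf(java.util.Locale.ROOT, "NPC %d at (%.1f, %.1f)%n", i, x, y);
        }
        System.out.println("spawned " + count + " NPCs");
    }
}
```

In a real game the call would be made whenever the plot or difficulty demands new NPCs, with positions filtered against the scene geometry.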

7. Mobile Technology for Games and Augmented Reality (2011)

Jon A. Preston and Jeff Chastine proposed an object-oriented approach that utilizes more advanced game engines such as the Unity Pro engine [7]. The main point of the paper is that students are quickly able to use these engines to realize advanced games, which drives a deeper understanding of the concepts. They used the Unity engine, which is free but can be upgraded to a "pro" edition to deploy to mobile devices. Beyond traditional games as many would imagine them, these technologies can be used in interesting ways that appeal to a broad audience. One example they describe is a mobile augmented-reality "smart book", in which the user interacts with virtual objects shown atop the real world via the phone's camera. When the user flips a page of the book, the characters on the phone can change, move around, and interact with objects on the page. Such experiences foster creativity, since students design not only the computational aspect on the phone but also the artistic aspect within the book; a virtual character standing on top of a picture in the real world captures this type of experience. Given the ubiquity of mobile computing devices, their appeal to students, and the accessibility of modern game engines, students can benefit from working with Unity and the Qualcomm Augmented Reality Toolkit.

8. Implementing Gymkhanas with Android Smartphones, a Multimedia M-Learning Game (2011)

Gregorio Robles and Jesus M. Gonzalez-Barahona presented the design of a generic game (a gymkhana) using Android mobile-phone technology [4]. Their system comprises an application for participants, a web-based infrastructure for the organizers, and web services that allow interaction between them. The main objective of the work is to simplify and enrich the experience of the participants. They presented a game in which users, usually in teams, have to surpass one challenge after another. These types of games can be used for learning purposes by focusing questions on the content to be learned, and they also support other kinds of learning, such as cooperative working and geographic (or local, in the case of a city) knowledge. The authors showed how this smartphone implementation outperforms a traditional system both in technical terms and in the human resources devoted to organizing and controlling such an event. Once the system has been installed, no complex technical knowledge is required of organizers or participants: organizers can set up and run a gymkhana using a simple web interface, while participants face a typical smartphone application environment. Depending on the type of challenge, participants make use of technologies commonly found in smartphones, such as geopositioning and pictures. Additional features include instant messaging between participants and organizers and the possibility of using augmented reality.

9. A New Method of Virtual Reality Based on Unity3D (2010)

Sa Wang, Zhengli Mao, Changhai Zeng, Huili Gong, Shanshan Li and Beibei Chen used Unity3D as the development platform for virtual reality [9]. They designed a Browser-Server three-tier VR framework comprising a presentation layer, a logic layer and a data layer. Adopting the hierarchical approach of geographic information systems, they divided the study area into four layers: a terrain layer, a building layer, a transport layer and a vegetation layer. Unity3D has a highly optimized graphics pipeline for both DirectX and OpenGL. Creating a scene in Unity3D involves shading, programming, collision detection, showing information through GUI controls, and publishing and integration. Unity3D comes with 40 built-in shaders, ranging from simple to advanced, all of which integrate with any type of light, with or without light cookies.

They describe that, to publish online, the virtual reality system is generated with the first kind of build target: Web Player. The project is built through "Build Settings" in the File menu. Other kinds of builds can also be generated, such as Windows Standalone, which produces an executable file that can be installed on users' computers. The virtual reality system can therefore be run for different purposes and even on different operating systems.

10. PicoLife: A Computer Vision-based Gesture Recognition and 3D Gaming System for Android Mobile Devices (2011)

Mahesh Babu Mariappan, Xiaohu Guo and Balakrishnan Prabhakaran developed PicoLife, which is powered by two engines [10]. The first is a mobile-optimized computer-vision engine based on OpenCV that tracks the user's hand in real time. The second is a 3D engine that runs their motion-capture-guided 3D models, which make up the game characters. The paper describes the object tracking and 3D modelling processes involved in PicoLife, covering three main implementations: (a) the training procedure for cascaded Haar classifiers, later used to perform real-time object tracking efficiently on mobile platforms; (b) the procedure for real-time object tracking using the cascaded Haar classifiers, for which they used the Haar-training XML file obtained after training the classifiers: first, get an image from the live video sequence; second, convert the image to grayscale with a call to the cvtColor function; third, improve the contrast of the image being processed; and (c) the 3D modelling process used to create the game characters. To obtain realistic animation effects, they used motion-capture equipment to accurately record human motion in 3D space. Their specific focus was on (1) an advanced, mobile-optimized, real-time object-tracking engine that recognizes hand gestures for a novel vision-based gaming interface, and (2) an advanced motion-capture-guided 3D modelling pipeline for creating the game characters. They benchmarked three mobile platforms, namely Texas Instruments' OMAP3630 (Motorola Droid X running Android Gingerbread), Qualcomm's MSM8660 Snapdragon (HTC Evo 3D running Android Gingerbread) and Texas Instruments' OMAP4430 (Blaze development platform running Android Gingerbread), using four computer-vision algorithms from PicoLife (FAST, STAR, SURF and cascaded Haar classifiers) and their 3D models (static and dynamic).
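The grayscale-conversion step (OpenCV's cvtColor with a colour-to-gray conversion code) reduces each RGB pixel to a single intensity. OpenCV documents the ITU-R BT.601 luma weights for this conversion; the stdlib-only sketch below applies that formula to individual pixels, without depending on OpenCV itself:

```java
// The grayscale step of the tracking pipeline, per pixel:
// gray = 0.299 R + 0.587 G + 0.114 B (ITU-R BT.601 weights, as used by OpenCV).
public class GrayscaleStep {
    static int toGray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    public static void main(String[] args) {
        System.out.println(toGray(255, 255, 255)); // pure white stays at full intensity
        System.out.println(toGray(255, 0, 0));     // pure red maps to a dim gray
    }
}
```

Working on one grayscale channel instead of three colour channels is what makes the subsequent contrast enhancement and Haar-cascade detection cheap enough for mobile hardware.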


Unity3D is applicable not only to game development but also to real-world applications. To develop special scene functions in a simple way, we can use Unity3D's scripts or editor. The most powerful editor is Visual Studio; scenes in Unity3D can also be compiled with Visual Studio by adding a reference to UnityEngine.dll. The affinity of the development language makes existing mobile applications easy to port to the Android platform. At the same time, Android is a Linux-based system, so the large number of open-source libraries available for Linux can be used to develop rich applications. On top of the Linux kernel, Android provides a variety of libraries and a complete application framework; it is released under the Apache licence and ships with Google's Dalvik virtual machine to run applications. Combining Unity3D games with the Android platform can shorten the development cycle significantly. The combination of Unity3D and Android is efficient, offers a user-friendly interface and provides more customizable parameters, so that new users can pick up and play a game quickly.

