
Performance Results for Android Emulators - with and without Intel HAXM


Welcome to my blog about Android emulator performance. I started comparing the performance of the generic emulator in Eclipse with that of the x86 emulator that uses the Intel® Hardware Accelerated Execution Manager (Intel HAXM) and was quite surprised at the difference. So here is the blog!


 

Description:

I recently started learning about Android app development, and in the process I have been going through the training tutorials on developer.android.com.  The app that I used for the tests below is based on the "MyFirstApp" offered in the training.  I made it as far as creating the two intents, so there isn't much processing it has to do (it is probably the most basic app that exists). I shifted my attention to emulator performance because I got really tired of waiting for the emulator to load so that I could test my app.

Test System Configuration:

  • Windows 7
  • HP Elitebook 2560p
  • Processor has Intel® VT-x, and it is enabled in the BIOS
  • Beacon Mountain (takes care of installing all components: Eclipse, Intel® Hardware Accelerated Execution Manager (Intel HAXM), Android ADT, Android SDK, etc.)  (Note: Intel VT-x is required to install Beacon Mountain.)

Disclaimer: The performance results below are based on my system, which has all the software associated with a corporate IT build installed and running.  It was not prepped for "performance testing."  The results below will vary according to the specific system configuration.

The Basic Emulators:

In my installation of Eclipse, I have just the two basic emulators:  Nexus_7_arm and the Nexus_7_x86. 

Open the Android SDK Manager to see which emulators are available:

Each available image is tied to a specific Android API version.

For Android 4.4, you can install an ARM System Image and an Intel x86 Atom System Image.

Intel HAXM is also included.  Go to the "Extras" section of the Android SDK Manager and make sure it is installed.

Support for Intel HAXM was introduced in Release 17 of the Android ADT and it works with Intel x86 images going back to API Level 10.

The tests below were run against both emulators as well as my Note 2 "phablet."

This window comes up when you run your app. If your device is in "Developer Mode" and connected via a USB cable, it will show up as an option for the "running device."  Note that the Google USB driver did not work for the Note 2; I had to get the driver from the Samsung website.

The results:

The chart shows why I started looking into Intel HAXM: I got really tired of waiting for "MyFirstApp", which is really tiny and insignificant, to load and run.

While uploading directly to my Note 2 wins the race at 7 seconds, waiting 43 seconds is much more desirable than waiting a total of two and a half minutes.

Since the Note 2 does not need to "boot" every time it runs, I decided to see how much time the emulators were spending just loading the OS.

The Nexus_7_arm emulator takes 2.2 times longer to load.  Note that at 40 seconds the x86 emulator is almost done.

The Note 2 time is 0 since it does not load the OS every time the app is uploaded.

Then I decided to see how much time each took to upload, install and run my app.

The ARM emulator took 23 times longer to upload and install than the x86 emulator (over a minute vs. 3 seconds for the x86 emulator), and the x86 emulator was faster than my Note 2 by 1 second.  I was not surprised by that.

 

Summary

While you still have to wait for the OS to load when using the Intel x86 emulator with Intel HAXM enabled, the total run time is much faster than waiting for the generic ARM emulator (which takes 3.65x more time overall to run).  Since the biggest difference was in uploading and installing the app, I would imagine that if my app were actually complex, the overall performance difference would be far greater.

The really neat thing about this emulator is that even if your app is targeted at ARM devices, the Intel HAXM-enabled emulator can be used for testing. This can save a lot of time.



  • Turning Your Tablets into Real Home Consoles


    Many users use their tablets primarily to play games. Just browse the rankings of the most-downloaded applications to see how thoroughly video games dominate. Touch screens have allowed developers to create applications that are extremely easy to pick up. This simplicity has won over the same audience as the Wii, and nowadays almost everyone plays on a tablet. Some home-console developers even wonder whether their audience may end up reduced to the most hardcore gamers. I think they are right to worry, because not only have tablets taken their mainstream audience, tablets may also take away part of their identity. It has become possible to easily turn a tablet into a genuine home console.

    How to make the transformation

    Two main elements characterize a home console: controllers and the big screen. Thanks to Bluetooth support, it is now possible to connect controllers to a tablet. As for the big-screen display, some devices have an HDMI output, so a simple cable is enough to connect to the TV. In addition, the Miracast and WiDi technologies (AirPlay on iOS) can transmit a tablet's display to a compatible TV over a Wi-Fi signal (i.e., wirelessly). You can also get small boxes to plug into the TV if it is not Miracast or WiDi compatible.

    Designing for touch

    The problem with our tablet-console is that the vast majority of current games are designed for touch interaction. Unfortunately, most of them will therefore be unplayable on the big screen. This is mainly the case for games that require precisely touching certain objects with a finger. The big-screen display is then completely useless, since the player has to look at the tablet screen before every interaction. Moreover, such games will most likely never support controllers (the interface is far too removed from the original design). This is the case, for example, for Smiley's Pop (a match-3 game) and Anka (a point-and-click adventure game), two of our tablet games.

    Anka and Smiley's Pop, two incompatible games.

    Using the tablet as a controller

    Fortunately, many touch games are perfectly playable on the big screen. This is the case for the many arcade games that use virtual analog sticks. Not only does using the TV not hurt playability, it actually improves it, since the game is no longer obscured by the players' fingers.

    Other kinds of touch interaction also work very well on the big screen, notably gestures that do not need to start from a precise spot. We are in fact currently developing a game that uses this kind of interaction.

    To move our character, you simply slide your finger from left to right (or vice versa). A quick tap rotates the blocks in place. Finally, a bottom-to-top gesture throws the blocks onto the play area. Each of these gestures must be made inside the player's zone, but that zone is large enough that you do not need to look at the tablet while playing.

    Finally, all games that mainly use the gyroscope for control will also be perfectly playable. Immersion is even improved, since the TV screen stays fixed, unlike the tablet, which can sometimes end up tilted at an angle that hurts visibility.

    Bluetooth controllers

    Still very few mobile developers support Bluetooth controllers, but that should not last. It seems obvious that in the near future we will see more and more mobile games that support them. These controllers are fairly recent, and there is inevitably some lag before developers take an interest. Granted, it requires a bit more work than touch-only input, but if the option is handled from the start of development, it ultimately adds very little extra investment. As far as Ovogame is concerned, we no longer hesitate: our next mobile game will support a 100% controller-driven interface in addition to touch. It is worth it, because controller support pays for itself in several ways.

    All our games run on three mobile platforms: Android, iOS, and BB10. All three support Bluetooth controllers, so the initial investment carries over to all three environments (which together represent more than 80% of the market).

    A few Android micro-consoles already exist: OUYA, Shield, and a few others already announced. Without any changes, the Android version of our game already runs perfectly on OUYA (since it runs in landscape mode and supports controllers). There are strong rumors that Amazon and Google will release other Android micro-consoles. It is clear that in the near future there will be a real micro-console market and, thanks to Android compatibility, reaching it will take very little work.

    Other prospects are opening up for game creators, with Microsoft and Sony seeking to attract independent developers to their new consoles (not to mention Valve with the Steam Box). That adds up to a lot of opportunity for developers whose games can be played with a controller. This is why we think mobile platforms will benefit as well, through ports of console games to mobile.

    Miracast-WiDi

    WiDi displays a tablet's screen on a TV over a Wi-Fi signal. This has advantages but also drawbacks. When playing, an HDMI cable can get in the way if it is too short or if it prevents comfortable handling of the tablet; a wireless connection is then much nicer.  On the other hand, the video quality is not as good as over an HDMI cable. The image is compressed, which can introduce some artifacts, and the signal occasionally suffers brief dropouts, which can be unpleasant.

    One interesting feature of WiDi is that it allows two screens with different content: the screen shown on the TV is not necessarily the same as the tablet's. For example, the play area could be shown on the big screen while the tablet displays a dashboard. This opens up new gameplay possibilities, a bit like those the Wii U enables with its tablet-controller.

    Conclusion

    Tablets are not done shaking up the video game world. Bringing them closer to home consoles should benefit everyone: players get a new use for their tablet, and mobile developers can further broaden (at little cost) the distribution of their mobile games onto consoles.

  • MiraSlide: Developing a Multi-Screen Android (4.2+) Application Using WiDi, by Example


    In this post, I will walk you through the steps to develop an Android application that uses WiDi. By the end, you should know the basics needed to build your own multi-screen solutions on Android. WiDi is far from limited to the Android world, however, and I invite you to read Pierre S's article on developing WinJS (Windows 8.1) applications.

    First of all, you will need some background.

    Prerequisites

    Android: If you are new to Android development, I strongly recommend reading all the getting-started guides on the Android developers site.

    Wireless Display (or WiDi): This audio/video streaming technology developed by Intel is an alternative to Miracast, from the Wi-Fi Alliance consortium, and to Apple's AirPlay. However, WiDi allows supporting devices to communicate with Miracast receivers; in the end, WiDi is meant to be a "super-layer" on top of Miracast. To learn more about WiDi, I recommend this overview by Pierre S: Tout savoir sur le WiDi.

    Finally, if you want to push your knowledge of Wireless Display on Android even further, you can watch the video of the Paris Android User Group talk introducing WiDi for Android by Xavier Hallade (slides here).

    Enough links; let's get to the development!

    MiraSlide: a presentation application

    MiraSlide logo

    Source code of the MiraSlide project

    Google Play page of the application

    The goal of this application is to meet a simple need: speakers are either loaded down with equipment, or at the mercy of whatever hardware is lent to them, or both. Here, Wireless Display combined with a device lighter than a laptop, one that can therefore be used as a remote control, lets us project and drive the slides from a phone or tablet, while keeping in hand the extra information a laptop would give us (a timer, notes, and so on).

    So here is the very simple goal of MiraSlide:

    1. You select your presentation file, connect your device to a Wireless Display receiver screen, and launch the presentation.

    2. The first page of your presentation appears on the receiver screen.

    3. Your device's screen then offers a timer, plus a remote control showing the current slide, any notes, and previous/next buttons.

    MiraSlide presentation

    Now that we have a clearer idea of the application we want to build, let's look at the APIs.

    Using WiDi in Android

    For those who skimmed the prerequisites: WiDi (and, more broadly, the notion of an external display) appears in the Android framework with API 17 (Android 4.2). Two essential elements are introduced:

    • The DisplayManager is the interface that lets the application discover the available displays and interact with them.
    • A Presentation is a view similar to a Dialog (which it extends) but projected onto a given Display. One of the most important consequences of Presentation extending Dialog is that it is necessarily attached to an Activity. So if that Activity is paused (roughly speaking, if it is no longer visible on screen), the Presentation will no longer appear on the associated Display (and the default screen-mirroring mode will kick in). If you then return to your Activity, the Presentation will be shown on the Display again (see the lifecycle sketch just after this list).
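
    To make that pause behavior concrete, here is a minimal lifecycle sketch (an illustration, not code from MiraSlide; HostActivity is a hypothetical name, and in MiraSlide the Presentation would be the PdfViewerPresentation subclass shown later):

     import android.app.Activity;
     import android.app.Presentation;
     import android.view.Display;

     public class HostActivity extends Activity {

         private Display mDisplay;          // obtained from the DisplayManager (see below)
         private Presentation mPresentation;

         @Override
         protected void onResume() {
             super.onResume();
             // Re-show the presentation when we come back to the foreground;
             // while we were paused the system fell back to screen mirroring.
             if (mPresentation == null && mDisplay != null) {
                 mPresentation = new Presentation(this, mDisplay);
                 mPresentation.show();
             }
         }

         @Override
         protected void onStop() {
             super.onStop();
             // Dismiss rather than leave the window attached: the system hides it
             // anyway once the Activity is no longer visible, and dismissing
             // avoids leaking the window.
             if (mPresentation != null) {
                 mPresentation.dismiss();
                 mPresentation = null;
             }
         }
     }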

    The implementation is ultimately very simple:

    1. Getting the Display
    2. Creating and showing the Presentation
    3. Adding listeners

    1. Getting the Display

    This can be done in two ways. Either by using the MediaRouter, introduced in API 16:

     MediaRouter mediaRouter = (MediaRouter) context.getSystemService(Context.MEDIA_ROUTER_SERVICE);
     MediaRouter.RouteInfo route = mediaRouter.getSelectedRoute();
    
     if (route != null) {
         Display presentationDisplay = route.getPresentationDisplay();
    
         if (presentationDisplay != null) {
    
             // Your code...
    
         }
     }

    Or by using the DisplayManager:

     DisplayManager displayManager = (DisplayManager) mActivity.getSystemService(Context.DISPLAY_SERVICE);
    
     // Selecting DISPLAY_CATEGORY_PRESENTATION prevents the DisplayManager from returning inappropriate
     // Displays, such as the device's own built-in display.
     Display[] displays = displayManager.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION);
    
     if (displays.length == 0) {
    
         // If there is no external display connected, we launch the Display settings. We could launch
         // the Wi-Fi display settings with ACTION_WIFI_DISPLAY_SETTINGS, but it is a hidden static value
         // because such settings may not exist (a device may lack Wireless Display support even
         // with API >= 17).
         startActivity(new Intent(Settings.ACTION_DISPLAY_SETTINGS));
    
     } else {
    
         // We should show a dialog box to let the user select the display if there is more than one,
         // but for this example we simply choose the first one.
         Display display = displays[0];
    
     }

    2. Creating and showing the Presentation

    Nothing complicated here either. To be created, the Presentation only needs its parent Activity and the Display it will be shown on. The show() method then displays the Presentation on the Display. Like a Dialog, the Presentation has a setContentView() method with which you can set the view to display:

     private void showPresentation() {
         mPresentation = new MyPresentation(this, mDisplay);
         mPresentation.show();
     }
    
     private class MyPresentation extends Presentation {
     
         /* constructors ... */
    
         @Override
         public void onCreate(Bundle savedInstanceState) {
             super.onCreate(savedInstanceState);
             View v = getLayoutInflater().inflate(R.layout.presentation, null);
             setContentView(v);
         }
     }

    3. Adding listeners

    To make your system more robust, you can register listeners with the DisplayManager so you are notified when Displays are added or removed. This is particularly handy to keep a Presentation from trying to carry on after its associated Display has been disconnected:

     mDisplayManager.registerDisplayListener(new DisplayListener() {
    
         @Override
         public void onDisplayRemoved(int displayId) {
             // Stop presentation ...
             // Show a message to the user to reconnect the display
    
         }
    
         @Override
         public void onDisplayChanged(int displayId) {
             // Something happened. You should check that everything is OK before continuing
    
         }
    
         @Override
         public void onDisplayAdded(int displayId) {
             // If you were waiting for a display, maybe you should use it!
    
         }
     }, null);

    Implementing the code in MiraSlide

    We are obviously not going to walk through all of MiraSlide's code; that would be long and pointless, since the Wireless Display code is actually quite short compared to the rest. We will therefore focus on the following points:

    1. Getting the Display
    2. Selecting the Display
    3. Creating and showing the Presentation
    4. Controlling the Presentation

    First, though, I will quickly explain the overall structure of the code, which splits into four main parts.

    • MainActivity is the application's only Activity. That way, if the Presentation is running and you navigate between the different views, the Presentation does not stop (a rough skeleton of this class follows after this list).
    • SelectionFragment is the first view; it lets the user select the (PDF) file containing the slides, as well as the Display on which to show the Presentation. Finally, it launches said Presentation.
    • ControllerFragment is the view shown when the Presentation is launched. It contains a timer, the current slide, and previous/next buttons.
    • PdfViewerPresentation is the Presentation view that manages what is shown on the Display. It contains the engine that retrieves and displays the images of the slides requested by the ControllerFragment.
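
    Since the snippets below call setDisplay(), getDisplay(), stopPresentation(), and launchPresentation() on the parent Activity, here is a rough skeleton of MainActivity for orientation (a sketch only; the real class in the MiraSlide repository does more, and imports are omitted as in the other snippets):

     public class MainActivity extends FragmentActivity {

         private Display mDisplay;                    // selected in SelectionFragment
         private PdfViewerPresentation mPresentation; // null until the projection is launched

         public void setDisplay(Display display) {
             mDisplay = display;
         }

         public Display getDisplay() {
             return mDisplay;
         }

         public PdfViewerPresentation getPresentation() {
             return mPresentation;
         }

         // Dismiss the running presentation, e.g. when its Display has been removed.
         public void stopPresentation() {
             if (mPresentation != null) {
                 mPresentation.dismiss();
                 mPresentation = null;
             }
         }

         // launchPresentation() is shown in full in section 3 below.
     }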

    1. Getting the Display

    • When the SelectionFragment is created, we grab the DisplayManager and register the listeners.
    • Every time the SelectionFragment is started or resumed (in onResume), we check the state of the Displays. If there is none, we offer to open the display settings; if there is exactly one, we select it automatically; and if there are several, we let the user pick the Display they want.

    Here is the associated code:

     public class SelectionFragment extends Fragment implements OnClickListener, DisplayListener {
    
         // ...
    
         // The display manager is the object to get information about the different displays
         private DisplayManager mDisplayManager;
    
         @Override
         public void onCreate(Bundle savedInstanceState) {
             super.onCreate(savedInstanceState);
    
             // ...
    
             // We get the display manager to get info about the displays. We also register for any change
             // in the (dis)connection of the displays.
             mDisplayManager = (DisplayManager) mActivity.getSystemService(Context.DISPLAY_SERVICE);
             mDisplayManager.registerDisplayListener(this, null);
         }
    	
         @Override
         public void onResume() {
             super.onResume();
    
             // On resume, we check the state of each button.
             checkLaunchable(getView());
         }
    
         // We check the selection of the PDF file and the display, update the color of the buttons, and update
         // whether the "Launch Projection" button should be enabled
         // @param v : the global view of the fragment
         private void checkLaunchable(View v) {
             checkDisplay((TextView) v.findViewById(R.id.fragment_selection_button_selectwirelessdisplay));
    
             if (mActivity.getDisplay() != null) {
                 v.findViewById(R.id.fragment_selection_button_selectwirelessdisplay).setBackgroundResource(R.drawable.button_green);
             } else {
                 v.findViewById(R.id.fragment_selection_button_selectwirelessdisplay).setBackgroundResource(R.drawable.button_red);
             }
    
             // ...
    
         }
    
         // We check the state of the external displays, update the text of the display button, and if there is only
         // one external display, we auto select it
         private void checkDisplay(TextView displayButton) {
             Display[] displays = mDisplayManager.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION);
    
             if (displays.length > 1 && mActivity.getDisplay() == null) {
                 displayButton.setText("Select a wireless display");
             } else if (displays.length == 1) {
                 mActivity.setDisplay(displays[0]);
                 displayButton.setText("Display selected " + displays[0].getName());
             } else {
                 mActivity.setDisplay(null);
                 displayButton.setText("Connect to a wireless display");
             }
         }
    
         // Methods called when a display is added or removed. We change the button state if we add or remove a
         // display, and we stop the presentation if a display is removed.
    
         @Override
         public void onDisplayAdded(int displayId) {
             checkLaunchable(getView());
         }
    
         @Override
         public void onDisplayChanged(int displayId) {
         }
    
         @Override
         public void onDisplayRemoved(int displayId) {
             if (mActivity.getDisplay() != null && displayId == mActivity.getDisplay().getDisplayId()) {
                 mActivity.stopPresentation();
                 checkLaunchable(getView());
             }
         }
     }

    2. Selecting the Display

    As described above, the Display selection button changes according to the Displays connected to the device:

    • If no Display is connected, the button reads Connect to a wireless display. If the user taps it, the following code is called and the phone's display settings screen is shown. If the device supports Wireless Display, a Screen mirroring or Wireless Display button (or a translation of it) should appear (see the image below). Tapping it lists the visible Miracast-compatible devices; the user just has to tap one to connect to it.

     if (displays.length == 0) {
         // If there is no external display connected, we launch the Display settings. We could launch the Wi-Fi
         // display settings with ACTION_WIFI_DISPLAY_SETTINGS, but it is a hidden static value because such
         // settings may not exist (a device may lack Wireless Display support even with API >= 17).
         startActivity(new Intent(Settings.ACTION_DISPLAY_SETTINGS));
     }

    Connecting a Wireless Display on Android

    • If exactly one Display is connected, or if a Display has already been selected, the button reads Display selected DISPLAY_NAME. If the user taps it, the same code as in the several-Displays case is executed.

    • If several Displays are connected, the button reads Select a wireless display. If the user taps it, the following code is called. A dialog box opens, listing the available Displays. If the user taps one of them, it is selected and we are back in the previous case.

     // If there are one or more external displays, we show a dialog box with the list of displays.
     // The user can select the display he wants, or close the dialog.
     final ArrayAdapter<String> arrayAdapter = new ArrayAdapter<String>(mActivity, android.R.layout.select_dialog_singlechoice);
     for (int i = 0; i < displays.length; i++) {
         arrayAdapter.add(displays[i].getName());
     }
    
      AlertDialog.Builder builder = new AlertDialog.Builder(mActivity).setIcon(R.drawable.ic_launcher).setTitle("Select a display")
             .setNegativeButton("cancel", null).setAdapter(arrayAdapter, new DialogInterface.OnClickListener() {  
    
                 @Override
                 public void onClick(DialogInterface dialog, int which) {
                     // When the user chooses a display through the dialog, we set it in the parent Activity
                     // and update the state of the buttons.
                     mActivity.setDisplay(displays[which]);
                     checkLaunchable(getView());
                 }
     });
     builder.show();

    3. Creating and showing the Presentation

    Once a Display has been selected, along with a PDF file, the Launch Projection button is enabled. When the user taps it, the Presentation is created and shown, and the user is switched to the Controller view. Here is the code that runs:

     // SelectionFragment.java
    
     // ...
    
 @Override
 public void onClick(View v) {
     if (v.getId() == R.id.fragment_selection_button_launchprojection) {
         // On click on the "launch projection" button, we... launch the projection
         mActivity.launchPresentation();
     }
 }
     // MainActivity.java
    
     // ...
    
 // Create and show the presentation (using the display and the PDF path selected in the selection
 // fragment). This method registers listeners that control the visibility of the controller fragment
 // and whether mShowHideControllerActionBarButton should be enabled. Then it shows the presentation.
     public void launchPresentation() {
         mPresentation = new PdfViewerPresentation(this, mDisplay, mPdfPath);
    
         mPresentation.setOnShowListener(new OnShowListener() {
    
             @Override
             public void onShow(DialogInterface dialog) {
    
                 showHideController(true);
                 enableShowHideControllerActionBarButton(true);
                 mControllerFragment.notifyViewPager();
             }
         });
         mPresentation.setOnDismissListener(new OnDismissListener() {
    
             @Override
             public void onDismiss(DialogInterface dialog) {
                 showHideController(false);
                 enableShowHideControllerActionBarButton(false);
             }
         });
    
         mPresentation.show();
     }

    Initializing the Presentation is very simple, but the code may look a bit complicated; this is due to preparing the PDF and displaying it. The key points are that initialization happens in the constructor, and that building the view to display, and showing the first slide, happen in onCreate(Bundle) and are applied with the setContentView(View) method. Here is the heavily simplified code.

     // PdfViewerPresentation.java
    
     // ...
     
 // Constructor. It creates the Presentation, then loads the display info and the PDF to show
     public PdfViewerPresentation(Context context, Display display, String pdfFilePath) {
         super(context, display);
         mPdfFilePath = pdfFilePath;
    
         // ...
     }
    
     @Override
     public void onCreate(Bundle savedInstanceState) {
         super.onCreate(savedInstanceState);
    
         createContentView();
    
         showPage();
     }
    
 // Create the ImageView that shows the bitmaps of the PDF pages
     private void createContentView() {
         mImageView = (ImageView) getLayoutInflater().inflate(R.layout.presentation_main, null);
         setContentView(mImageView);
     }
    
     // Show the current page on the Presentation view
     private void showPage() {
         mImageView.setImageBitmap(getPage(mPage));
     }
    
 // Return the Bitmap of the specified PDF page
     public Bitmap getPage(int page) {
         
         // ...
     }

    4. Controlling the Presentation

    Once the Presentation is launched, the user can control it through the ControllerFragment. The latter contains a ViewPager that displays the slides of the PDF. When the user changes slides (either by swiping or by pressing the Previous and Next buttons), the ControllerFragment notifies the parent Activity of the newly selected slide, and the Activity passes the information up to the Presentation so that it updates itself. Here is the corresponding code:

     // ControllerFragment.java
     
     // ...
     
 // We create the ViewPager that shows the pages of the PDF file. If the user changes the slide, the
 // listener tells the parent Activity to change the image in the presentation.
     // 
     // @param v : the fragment view
     //
     private void createViewPager(View v) {
         mSlidesPagerAdapter = new SlidesPagerAdapter(((FragmentActivity) getActivity()).getSupportFragmentManager());
         mSlidesViewPager = (ViewPager) v.findViewById(R.id.fragment_controller_pager);
         mSlidesViewPager.setAdapter(mSlidesPagerAdapter);
         mSlidesViewPager.setOnPageChangeListener(new SimpleOnPageChangeListener() {
    
             @Override
             public void onPageSelected(int page) {
                 ((MainActivity) getActivity()).getPresentation().moveTo(page);
             }
         });
     }
    
 @Override
 public void onClick(View v) {
     if (v.getId() == R.id.fragment_controller_button_pageprev) {
         // On click on the prev button, we move the viewpager one slide back (which will move the presentation slide as well)
         mSlidesViewPager.setCurrentItem(mSlidesViewPager.getCurrentItem() - 1);

     } else if (v.getId() == R.id.fragment_controller_button_pagenext) {
         // On click on the next button, we move the viewpager one slide next (which will move the presentation slide as well)
         mSlidesViewPager.setCurrentItem(mSlidesViewPager.getCurrentItem() + 1);
     }
 }
     // MainActivity.java
     
     // ...
     
 // Return the Presentation if it has been created. A presentation needs a display and a PDF file
     public PdfViewerPresentation getPresentation() {
         return mPresentation;
     }
     // PdfViewerPresentation.java
     
     // ...
    
     // Move the presentation to the page 'page'
     public void moveTo(int page) {
         if (page >= 0 && page < getPageCount()) {
             mPage = page;
             showPage();
         }
     }
    
     // Show the current page on the Presentation view
     private void showPage() {
         mImageView.setImageBitmap(getPage(mPage));
     }
    
 // Return the Bitmap of the specified PDF page
     public Bitmap getPage(int page) {
         
         // ...
     }

    Conclusion

    Developing an application that uses external displays is really not complicated on Android. The framework is simple and works well. That said, the application we built here, despite its potential, is very far from pushing WiDi to its limits the way a 3D game or a video-streaming application could. Finally, note that part of the Wireless Display API is currently hidden in the Android code and not available in the SDK. That API lets you control the discovery of, and connection to, external displays yourself. Using it is therefore risky, since it can change without warning or behave differently from one device to another. Done well, however, it can greatly simplify the application for the user.

    Sources

    WiDi for Android presentation by Xavier Hallade (slides)

    the Android developer site

    Source code of the MiraSlide project

    Google Play page of the application

  • Intel XDK: Accessing Native Mobile Features in Your HTML5 Application


    The Intel XDK development environment is designed for developers who want to use their HTML5 knowledge to build hybrid applications for mobile devices (phones and tablets) and for other platforms such as Google Chrome. To get started, you first need to download and install the new Intel XDK application.

    Intel XDK is a set of development tools that help you code, debug, test, and build mobile web applications and cross-platform HTML5 hybrid applications.

    This tutorial aims to explain the basics of building a hybrid mobile application with the Intel XDK development environment.

    You can find the Intel XDK JavaScript API documentation at this URL: http://software.intel.com/en-us/node/492826

    Each project in the Intel XDK editor corresponds to an HTML5 application. You can create an HTML5 project from scratch, import existing HTML5 applications, or modify one of the included sample applications.

    To begin, we open a new project. This generates the following index.html file:

    <!DOCTYPE html><!--HTML5 doctype-->
    <html>
    <head>
      <title>Your New Application</title>
      <meta http-equiv="Content-type" content="text/html; charset=utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0" />
      <style type="text/css">
        /* Prevent copy paste for all elements except text fields */
        *  { -webkit-user-select:none; -webkit-tap-highlight-color:rgba(255, 255, 255, 0); }
        input, textarea  { -webkit-user-select:text; }
      </style>
      <script src='intelxdk.js'></script>
      <script type="text/javascript">
        /* This code runs as soon as intel.xdk activates */
        var onDeviceReady=function(){
          //hide splash screen
          intel.xdk.device.hideSplashScreen();
        };
        document.addEventListener("intel.xdk.device.ready",onDeviceReady,false);
        </script>
    </head>
    <body>
      <!-- content goes here-->
    </body>
    </html>

    Sound alert and vibration

    We start with two simple features: a beep and triggering the device's vibrator. We add these two JavaScript functions just before the closing "</script>" tag:

    function beepOnce()
    {
        try
        {
            intel.xdk.notification.beep(1);
        }
        catch(e) {}
    }
    
    function vibrateDevice()
    {
        try
        {
            intel.xdk.notification.vibrate();
        }
        catch(e) {} 
    }

    Then we add two buttons inside the "<body>" tags:

    <div><a ontouchstart="beepOnce();">Beep</a></div>
    <div><a ontouchstart="vibrateDevice();">Vibreur</a></div>

    The functions to remember are "intel.xdk.notification.beep(1);" and "intel.xdk.notification.vibrate();". The argument to the beep function (here 1) is the number of beeps played on each call.

    Full code:

    <!DOCTYPE html><!--HTML5 doctype-->
    <html>
    <head>
      <title>Camera</title>
      <meta http-equiv="Content-type" content="text/html; charset=utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0" />
      <style type="text/css">
        /* Prevent copy paste for all elements except text fields */
        *  { -webkit-user-select:none; -webkit-tap-highlight-color:rgba(255, 255, 255, 0); }
        input, textarea  { -webkit-user-select:text; }
      </style>
      <script src='intelxdk.js'></script>
      <script type="text/javascript">
        /* This code runs as soon as intel.xdk activates */
        var onDeviceReady=function(){
          //hide splash screen
          intel.xdk.device.hideSplashScreen();
        };
        document.addEventListener("intel.xdk.device.ready",onDeviceReady,false);
    
        function beepOnce()
        {
            try
            {
                intel.xdk.notification.beep(1);
            }
            catch(e) {}
        }
    
        function vibrateDevice()
        {
            try
            {
                intel.xdk.notification.vibrate();
            }
            catch(e) {} 
        }
        </script>
    </head>
    <body>
      <div><a ontouchstart="beepOnce();">Beep</a></div>
      <div><a ontouchstart="vibrateDevice();">Vibration</a></div>
    </body>
    </html>
    

    Music player

    Intel XDK provides access to the device's music player. A simple way to play a sound file:

    function playSound()
    {
        try
        {
            intel.xdk.player.playSound("music.wav");
        }
        catch(e) {}
    }

    Full code for the music player:

    <!DOCTYPE html><!--HTML5 doctype-->
    <html>
    <head>
      <title>Camera</title>
      <meta http-equiv="Content-type" content="text/html; charset=utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0" />
      <style type="text/css">
        /* Prevent copy paste for all elements except text fields */
        *  { -webkit-user-select:none; -webkit-tap-highlight-color:rgba(255, 255, 255, 0); }
        input, textarea  { -webkit-user-select:text; }
      </style>
      <script src='intelxdk.js'></script>
      <script type="text/javascript">
        /* This code runs as soon as intel.xdk activates */
        var onDeviceReady=function(){
          //hide splash screen
          intel.xdk.device.hideSplashScreen();
        };
        document.addEventListener("intel.xdk.device.ready",onDeviceReady,false);
    
        function playSound()
        {
            try
            {
                intel.xdk.player.playSound("music.wav"); 
            }
            catch(e) { } 
        }
        </script>
    </head>
    <body>
      <div><a ontouchstart="playSound();">Play</a></div>
    </body>
    </html>
    

    Native alert

    To make hybrid applications feel closer to native ones, the Intel JavaScript API lets you enhance certain components such as alerts. Rather than raising your alerts with the "alert()" function, you can use:

    intel.xdk.notification.alert("Titre","Alerte","Confirmer");

    Camera control

    Here is an example that adds a photo-capture feature, to be added before the closing "</script>" tag:

    document.addEventListener("intel.xdk.camera.picture.add",onSuccess); 
    document.addEventListener("intel.xdk.camera.picture.busy",onSuccess); 
    document.addEventListener("intel.xdk.camera.picture.cancel",onSuccess); 
    
    function capturePhoto() {
      intel.xdk.camera.takePicture(50,false,"jpg");
    }
    
    function onSuccess(evt) {
    
      if (evt.success == true)
      {
        // create image 
        var image = document.createElement('img');
        image.src=intel.xdk.camera.getPictureURL(evt.filename);
        image.id=evt.filename;
        document.body.appendChild(image);
      }
      else
      {
        if (evt.message != undefined)
        {
            alert(evt.message);
        }
        else
        {
            // alert("error capturing picture");
        }
      }
    }

    Full code:

    <!DOCTYPE html><!--HTML5 doctype-->
    <html>
    <head>
      <title>Camera</title>
      <meta http-equiv="Content-type" content="text/html; charset=utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0" />
      <style type="text/css">
        /* Prevent copy paste for all elements except text fields */
        *  { -webkit-user-select:none; -webkit-tap-highlight-color:rgba(255, 255, 255, 0); }
        input, textarea  { -webkit-user-select:text; }
      </style>
      <script src='intelxdk.js'></script>
      <script type="text/javascript">
        /* This code runs as soon as intel.xdk activates */
        var onDeviceReady=function(){
          //hide splash screen
          intel.xdk.device.hideSplashScreen();
        };
        document.addEventListener("intel.xdk.device.ready",onDeviceReady,false);
    
        document.addEventListener("intel.xdk.camera.picture.add",onSuccess); 
        document.addEventListener("intel.xdk.camera.picture.busy",onSuccess); 
        document.addEventListener("intel.xdk.camera.picture.cancel",onSuccess); 
    
        function capturePhoto() {
          intel.xdk.camera.takePicture(50,false,"jpg");
        }
    
        function onSuccess(evt) {
    
          if (evt.success == true)
          {
            // create image 
            var image = document.createElement('img');
            image.src=intel.xdk.camera.getPictureURL(evt.filename);
            image.id=evt.filename;
            document.body.appendChild(image);
          }
          else
          {
            if (evt.message != undefined)
            {
                alert(evt.message);
            }
            else
            {
                // alert("error capturing picture");
            }
          }
        }
        </script>
    </head>
    <body>
      <div><a ontouchstart="capturePhoto();">Photo</a></div>
    </body>
    </html>
    

    Geolocation

    To get the device's current location, Intel XDK provides the getCurrentPosition function. It asynchronously acquires the device's approximate latitude and longitude. When data is available, the success callback is invoked. If an error occurs while getting position data, the error callback is invoked.

    intel.xdk.geolocation.getCurrentPosition(suc,fail);

    Full code:

    <!DOCTYPE html><!--HTML5 doctype-->
    <html>
    <head>
      <title>Camera</title>
      <meta http-equiv="Content-type" content="text/html; charset=utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0" />
      <style type="text/css">
        /* Prevent copy paste for all elements except text fields */
        *  { -webkit-user-select:none; -webkit-tap-highlight-color:rgba(255, 255, 255, 0); }
        input, textarea  { -webkit-user-select:text; }
      </style>
      <script src='intelxdk.js'></script>
      <script type="text/javascript">
        /* This code runs as soon as intel.xdk activates */
        var onDeviceReady=function(){
          //hide splash screen
          intel.xdk.device.hideSplashScreen();
        };
        document.addEventListener("intel.xdk.device.ready",onDeviceReady,false);
    
    var getLocation = function() 
    {
        var suc = function(p){
            if (p.coords.latitude != undefined)
            {
                currentLatitude = p.coords.latitude;
                currentLongitude = p.coords.longitude;
            intel.xdk.notification.alert("Titre","Latitude : "+currentLatitude+"\nLongitude : "+currentLongitude,"Valider");
            }
    
        };
        var fail = function(){ 
            alert("geolocation failed"); 
            getLocation();
        };
    
        intel.xdk.geolocation.getCurrentPosition(suc,fail);
    }
        </script>
    </head>
    <body>
      <div><a ontouchstart="getLocation();">Géolocalisation</a></div>
    </body>
    </html>
    

    Thanks for following this tutorial. You are now equipped to build hybrid mobile applications with the Intel XDK development environment.

  • Intel® XDK Release Notes (December 2013)


    What’s new:

    • Added two new tabs: Debug and Profile. Use these tabs to remote-debug your app on a device or to analyze the performance of your app's JavaScript using App Analyzer.
    • Remote Debugging allows users to debug on Android 4.x mobile devices connected via USB; it lets you set breakpoints in code, inspect variables, and single-step through source code.
    • App Analyzer supports the Crosswalk APIs and provides performance similar to apps built using the Intel XDK Crosswalk target build.
    • Improved Tizen app builds allowing Tizen code-signing to submit apps to Tizen App Store.
    • Added hot keys to switch between tabs in the XDK:
      • ['Ctrl'] + ['Tab'] = Switch to the previous tab (shows the tab-switcher UI); if ['Ctrl'] is held and ['Tab'] is pressed again, cycle through the tabs in the switcher UI
      • ['Ctrl'] + ['Shift'] + ['Tab'] = Show the tab-switcher (if ['Ctrl'] is held and ['Tab'] is pressed, cycle through the tabs in the switcher UI in reverse order)
      • ['Ctrl'] + [Number Key] = Switch to a tab based on its order in the tab bar
      • Example: ['Ctrl'] + ['1'] = Switch to the first tab in the list (currently Develop)
      • ['Ctrl'] + ['0'] = Switch to the Projects tab
      • Note: on OS X, the shortcuts that do not contain the ['Tab'] key can be used by substituting ['Command'] for ['Ctrl'] (but ['Ctrl'] can still be used if desired)
    • App Designer multi-page support, themes, and interactivity.
    • App Designer allows creating pages and sub-pages and hooking buttons up to pages, sidebars, popups, or custom scripts.
    • App Designer allows selecting an animated transition for a link to a page, and more.
    • App Designer allows users to "drop in" third-party themes for frameworks that support them.
    • App Designer allows basic JavaScript editing within App Designer.
    • Brackets updated to Sprint 34.1.
    • Editor supports autocompletion for Cordova 2.9.0 and intel.xdk APIs.
    • Better editor integration with the OS:
      • File->Open lets you open any file on the filesystem
      • File->Save As allows saving to any location on the local filesystem
      • "Show in OS" context menu in the file tree opens the OS file manager (e.g., Explorer, Finder) at the project file location.
      • The editor now uses JSHint instead of the JSLint used in previous versions. JSHint can be configured using a .jshintrc file in the project root. For documentation on the configuration options and file format, refer to the JSHint Docs (a minimal example follows after this list).
      • Added copy/cut/paste context menu in the editor.
    • Images can be previewed in the editor. In previous versions they appeared as binary content.
    • Code/Design buttons are always displayed when using App Designer and App Starter projects. The Design button is only enabled on .html files in these projects.
    • Emulator now supports apps that use the synchronous form of XMLHttpRequest (XHR).
    • Emulator settings now let you choose whether the JavaScript console should be cleared each time the app restarts (the default is to clear it). The app restarts when you click reload, change projects, change devices, or change the source while the always-reload setting is selected.
    • Emulator automatically restarts the debugger if it is running when the app restarts. The debugger must still close during the restart operation itself.
    • Emulator now correctly initializes the Accelerometer panel to +9.81 on the z-axis, not -9.81.
    • App Designer / App Starter opens automatically in a new project.
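
    As referenced above, here is a minimal sketch of what a .jshintrc might contain (these are standard JSHint options, not Intel XDK-specific; JSHint tolerates comments in this file):

     {
         "browser": true,                  // assume browser globals such as document and window
         "devel": true,                    // allow console, alert, and friends
         "undef": true,                    // warn when a variable is used without being declared
         "unused": true,                   // warn about variables that are declared but never used
         "globals": { "intel": false }     // treat the intel.xdk namespace as a read-only global
     }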

    Known issues:

    Issue: Cannot access project files on Windows via a UNC path.
    Workaround: Make sure all your files are stored on a local drive.

    Issue: Under emulation, the Intel XDK Accelerometer API reverses left-right and up-down. That is, tilting the 3D image in the accelerometer panel so that the top edge appears farther away/lower will make the can roll toward the bottom in the "rolling can" demo. The emulation for the Cordova Accelerometer API works correctly.
    Workaround: Pretend the accelerometer panel presents a view looking up (or just think backwards). Sorry, we will fix this soon.

  • CERTIFACE AND INTEL, Together in the Fight Against Fraud


    Certiface is an application for certifying people over the web. It uses state-of-the-art facial biometrics to prevent duplicate faces and to combat fraud. The system runs in the cloud, the customer's identity is preserved, and no physical contact is involved.

    In a world where document fraud, card cloning, and forgeries of every kind make the front pages of newspapers and TV news, an innovative solution arrives whose core value is protecting the identity of honest people: Certiface. It is innovative in several ways, bringing technology that has exceeded expectations for the accuracy of identifying people by their faces while fully preserving citizens' integrity and privacy.

    Certiface is a solution that uses state-of-the-art facial biometric technology to certify people. Its main goal is to prevent duplicate faces and to combat fraud in the consumer market.

    As a fraud-prevention solution for granting credit, a typical activity of companies that serve end consumers, and one considered mission-critical both in the financial market and in retail, it must provide high availability and low response times while also being fast and inexpensive to deploy.

    To meet these requirements, the solution runs in the cloud, and its high performance with millions of users comes from libraries that get the most out of the Intel processors in the servers, such as the TBB, IPP, and MKL libraries, which are used to compute the biometric code that is later stored in the system's central database. The performance is due first of all to the fast processing of matrix operations provided by the MKL library.

    Integrating the IPP library gave Certiface 10x higher performance and, together with the TBB library, Certiface came to use all of these resources in parallel, making it possible to process facial biometrics simultaneously on all the cores available in the system. This significantly increases processing speed, in proportion to the number of processor cores, thereby exploiting the full computational power of Intel-based hardware.

    The Intel architecture is present at every stage of the solution's use, from the server processors that support the cloud operation to mobile devices running the Android operating system on Intel processors, through which users communicate with the Certiface technology. This combination of technologies makes it possible to fight fraud in the field with the full power of mobile processing. Thanks to the evolution of these platforms, face-detection processing on the image, combined with other computer-vision routines, makes the processing distributed and effective without heavy bandwidth consumption.

    Since the technology is minimally intrusive, no physical contact with equipment is required, which makes deploying, using, and operating the solution quite simple and makes it ideal for adoption by, for example, large consumer-credit companies.

    INTEL server: the biometric computation uses the TBB, IPP, and MKL libraries.

    Follow the links below to learn more:

    * Text written jointly by Plauto Diniz and Alessandro de Oliveira Faria (Cabelo).

     

     

  • Learn Android App Development with Yanqing (1): Installing the Android Development Environment


    Setting up the Android development environment is a prerequisite for developing Android applications. It is very simple and consists mainly of two steps:

    1. Download and install the ADT.
    2. Download and install the JDK.

    This post covers these two steps. It is aimed at beginners; intermediate and advanced readers should skip it.

    What is the ADT?

    ADT stands for Android Developer Tools. For details, see: http://developer.android.com/tools/index.html

    Where can I download the ADT?

    Go to http://developer.android.com/sdk/index.html and click the "Download The SDK" button on the right, as shown in Figure 1; just follow the steps to complete the download.

    Figure 1

    Where can I download the JDK?

    Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html and click the "JDK Download" button, as shown in Figure 2. JDK 7 is the latest version as I write this post; download whatever the latest version is for you.

    Figure 2

    Once the downloads are complete, install the ADT and then the JDK. Simple, isn't it? Go give it a try!

     

  • Learn Android App Development with Yanqing (2): Configuring the Android SDK Manager


    In Learn Android App Development with Yanqing (1): Installing the Android Development Environment, I briefly described how to install the Android development environment. With the environment in place, we need to configure the Android SDK Manager, which is the focus of this post.

    The Android SDK Manager is an important part of the Eclipse setup. Launch eclipse/eclipse.exe from the installation directory and open the "Android SDK Manager" item under the "Windows" menu, as shown in Figure 1.

    Figure 1

    In the Android SDK Manager, the Android SDK Tools, Android SDK Platform-tools, and Android SDK Platform packages are required, as shown in Figure 2; the other items are optional.

    Figure 2

    In this post, Android 4.4 (API 19) is the package to install: select it and click the install button in the lower-right corner to begin the installation.

    During installation you may hit an error along the lines of "Missing SDK platform Android, API 19". It took me quite a while to discover that the main cause is an outdated ADT. What?! Didn't I download the latest ADT in part one of this series? How could it be too old?! I puzzled over this for quite a while too, and eventually found that although the ADT itself was the latest, some of its contents were still stale. So how do we fix it?

    No rush, it is simple: just update Eclipse, as shown in Figure 3. Open the "Check for Updates" item under the "Help" menu and wait patiently.

    Figure 3

    The Content.jar package gets updated while you wait. Now try the installation again: does it work?

    Ha ha, I suspect someone is all smiles now.

    To install the other packages, just select them in the Android SDK Manager and click the install button in the lower-right corner; then you can pour yourself a coffee, put your feet up, and enjoy a little happy time.

    References:

    Learn Android App Development with Yanqing (1): Installing the Android Development Environment

  • Blog Challenge Android
  • Icon image: 

  • Java*
  • Android*
  • Laptop
  • Tablet
  • Desktop
  • Developers
  • Students
  • Android*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8

  • Learn Android App Development with Yanqing (Part 3): Installing the Android x86 Emulator System Image


In Learn Android App Development with Yanqing (Part 2): Configuring the Android SDK Manager, we covered how to use the Android SDK Manager to download the tools needed for Android development, including the x86 Emulator System Image. Why write about this again? Under normal circumstances the Android SDK Manager can handle it, but because my network access was blocked, I could not install it through the Android SDK Manager directly. This post explains where these resources can be downloaded and how to install the Android x86 Emulator System Image manually.

The Intel website provides download links for the Android x86 Emulator System Image, which I have compiled below:

Click whichever of the above links interests you, then extract the downloaded archive into the SDK/system-images/android-?? directory. Taking android-19 as an example, this directory usually contains two subdirectories, armeabi-v7a and x86; the x86 directory from the extracted archive is what gets copied here.

The last step is to open the "Android Virtual Device Manager" item under the "Window" menu in Eclipse and add the image to the AVD list.

Give it a try. Did it work? :)

References:

Learn Android App Development with Yanqing (Part 1): Installing the Android Development Environment

Learn Android App Development with Yanqing (Part 2): Configuring the Android SDK Manager

  • Blog Challenge Android
  • Icon image: 

  • Java*
  • Android*
  • Laptop
  • Tablet
  • Desktop
  • Developers
  • Students
  • Android*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Learn Android App Development with Yanqing (Part 4): Installing Intel HAXM


In Learn Android App Development with Yanqing (Part 2): Configuring the Android SDK Manager, we covered how to use the Android SDK Manager to download the tools needed for Android development, including Intel HAXM. Why write about this again? Under normal circumstances the Android SDK Manager can handle it, but because my network access was blocked, I could not install it through the Android SDK Manager directly. This post explains how to download and install it.

Before downloading Intel HAXM, some readers may not know what it is, so here is a brief introduction. HAXM stands for Hardware Accelerated Execution Manager; its main purpose is to provide a much more efficient Android Emulator on x86. After all, a slow Android Emulator is a headache for developers, isn't it? :)

The Intel website provides the download: Intel® Hardware Accelerated Execution Manager 1.0.6 (R3). Windows 8 users also need to install the patch file haxm-windows_r03_hotfix.zip; users on other platforms should read the download notes as well.

To give readers a deeper understanding of HAXM's system requirements and installation steps, I found a good article that interested readers can study carefully: Installation Instructions for Intel® Hardware Accelerated Execution Manager - Microsoft Windows*

And once Intel HAXM is installed, how do we configure it?

See the article Speeding Up the Android* Emulator on Intel® Architecture.

References:

Learn Android App Development with Yanqing (Part 1): Installing the Android Development Environment

Learn Android App Development with Yanqing (Part 2): Configuring the Android SDK Manager

Learn Android App Development with Yanqing (Part 3): Installing the Android x86 Emulator System Image

     

Icon image: 

  • Java*
  • Android*
  • Laptop
  • Tablet
  • Desktop
  • Developers
  • Android*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Intel(R) App Preview-HTML5 Support Test Sample


    The source code for this sample can be found here: https://github.com/gomobile/sample-html5-support-test or download the Intel® App Preview Android, iOS or Windows Phone app to check out all the samples.

    Introduction

When developing mobile apps that leverage HTML5 on platforms such as iOS, Windows, Android, or even BlackBerry, it quickly becomes evident that not every platform renders or supports the same features, from CSS3 to WebRTC and APIs like WebGL. The purpose of the HTML5 Support Test sample is to provide resources for identifying the HTML5 features supported in the native webView. The resources made available are the following sites: http://html5test.com/ and http://rng.io/. This information matters because all HTML5 (HTML, JS, CSS3) content is rendered in a webView when a native app (.ipa [iOS], .apk [Android], .xap [Windows Phone], etc.) is built with the Intel® XDK Build System.

The UI of this sample was kept minimalistic so that the content is at the center of the user's attention. The content for both the HTML5Test and Ringmark pages is embedded within an iframe, which keeps the content in the app instead of in the native web browser. As you can see above, HTML5Test running within the sample reports 412 out of 555 points on an iPhone 5 running iOS 7. This score rates support across semantics, multimedia, 3D, graphics & effects, performance & integration, and offline & storage features; HTML5Test gathers its findings using browser-sniffing methods. Ringmark is another popular test suite that measures the tested browser's capabilities in areas such as accelerated canvas, media, touch events, navigator, indexed DB, and more; its results are displayed in a color-coded format. As you may already know, HTML5 feature support has been growing with every new release of a mobile operating system. This is very evident in the Android ecosystem, where updates arrive as frequently as every six to nine months.
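As a rough illustration (not part of the sample) of the kind of probing these test suites perform, a few representative feature checks might look like this:

    // Hypothetical feature probes, similar in spirit to what html5test.com runs
    function detectFeatures() {
        var canvas = document.createElement('canvas');
        return {
            canvas: !!canvas.getContext,                           // 2D canvas
            webgl: !!(window.WebGLRenderingContext &&
                      (canvas.getContext('webgl') ||
                       canvas.getContext('experimental-webgl'))),  // 3D / WebGL
            localStorage: 'localStorage' in window,                // offline & storage
            indexedDB: 'indexedDB' in window,                      // indexed DB
            touch: 'ontouchstart' in window                        // touch events
        };
    }
    console.log(detectFeatures());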

    Design Considerations

The major feature of this app is to provide resources for viewing support for the HTML5 standard on a mobile device. After opening the app, the only elements present are the two buttons (HTML5Test and Ringmark) and the HTML5 logo.

An iframe element is used to display the web pages within the sample instead of taking users out of the application and into the browser. By default, the iframe has a display attribute of none and no URL set as a source. Another important attribute to note is sandbox, which, as shown below, restricts same-origin requests, content navigation, form submission, and scripts.

     

When the HTML5Test button is pressed, the iframe's display attribute is set to inline and the sandbox is relaxed to allow only scripts and same-origin requests. This lets the content load with minimal restrictions. The same is done for displaying the Ringmark page, as sketched below.
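A minimal sketch of this pattern (the ids testFrame and html5TestButton are assumed names for illustration, not the sample's actual markup):

    <iframe id="testFrame" sandbox style="display: none;"></iframe>
    <script>
    // Sketch of the pattern described above; element ids are assumed.
    document.getElementById('html5TestButton').addEventListener('click', function () {
        var frame = document.getElementById('testFrame');
        // Relax the sandbox so the test suite's own scripts can run.
        frame.setAttribute('sandbox', 'allow-scripts allow-same-origin');
        frame.src = 'http://html5test.com/';
        frame.style.display = 'inline';   // make the hidden iframe visible
    });
    </script>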

    Shadow Mapping Algorithm for Android*


    By Stanislav Pavlov

    Downloads


    Shadow Mapping Algorithm for Android* [PDF 440KB]

    "There is no light without shadows" - Japanese proverb

    Because shadows in games make them more realistic and interesting, including well-rendered shadows in your games is important. Currently, most games do not have shadows, but this situation is changing. In this paper we will discuss a common method for realizing shadows, called Shadow Mapping.

    Shadow Mapping Theory

Shadow mapping is one of the most common techniques for shadow generation in real-time applications. The method is based on the observation that whatever can be seen from the position of the light source is lit; the rest is in shadow. The principle is to compare the depth of the current fragment, expressed in the coordinate system of the light source, with the depth of the geometry closest to the light source.

    The algorithm consists of just two stages:

    1. The shadow map generation
    2. The rendering stage

The algorithm's main advantage is that it is easy to understand and implement. Its disadvantage is that it requires additional CPU and GPU resources and calculations to make the picture more realistic, since the shadow map must be generated into the depth buffer in an extra rendering pass, which can slow things down.

    Algorithm Realization

To create a shadow map, it is necessary to render the scene from the position of the light source. Thus, we obtain the shadow map in the depth buffer, which contains the depth values closest to the light source geometry. This approach has the advantage of speed, since the depth buffer generation algorithm is implemented in the hardware.

At the final stage, rendering occurs from the camera position. Each point of the scene is translated into the coordinate system of the light source, and we calculate the distance from this point to the light source. The calculated distance is compared with the value stored in the shadow map: if the distance from the point to the light source is greater than the stored value, the point is in the shadow of some object placed in the path of the light.
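Expressed as a formula (notation mine, matching the description above): with $d(p)$ the fragment's depth in the light's coordinate system and $z_{\text{map}}$ the depth stored in the shadow map,

$$\text{lit}(p) = \begin{cases} 1, & d(p) \le z_{\text{map}}(p_{xy}) \\ 0, & d(p) > z_{\text{map}}(p_{xy}) \end{cases}$$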

The code in this article uses the Android SDK (ver. 20) and the Android NDK (ver. 8d). A fully native application is taken as the basis: http://developer.android.com/reference/android/app/NativeActivity.html

    The Android MegaFon Mint* smartphone is based on the Intel® Atom™ processor Z2460: http://download.intel.com/newsroom/kits/ces/2012/pdfs/AtomprocessorZ2460.pdf

    Initialization

The shadow map is stored in a separate texture with format GL_DEPTH_COMPONENT, size 512x512 (shadowmapSize.x = shadowmapSize.y = 512), and 32 bits per texel (GL_UNSIGNED_INT). As an optimization, you can use 16-bit textures (GL_UNSIGNED_SHORT). Creating such a texture is possible on devices supporting GL_OES_depth_texture [for documentation see http://www.khronos.org/registry/gles/extensions/OES/OES_depth_texture.txt].

The GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T parameters are set to GL_CLAMP_TO_EDGE, so when the sampler is asked for any value outside the texture, the value at the boundary is returned. This is done to reduce artifacts from the shadows in the final rendering stage. "Tricks with the fields" (the border region) will be discussed in another blog.

            //Create the shadow map texture
    	glGenTextures(1, &m_textureShadow);
    	glBindTexture(GL_TEXTURE_2D, m_textureShadow);
    	checkGlError("bind texture");
    	// Create the depth texture.
    	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowmapSize.x, shadowmapSize.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
    	checkGlError("image2d");
    	// Set the textures parameters
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    	// create frame buffer object for shadow pass
    	glGenFramebuffers(1, &m_fboShadow);
    	glBindFramebuffer(GL_FRAMEBUFFER, m_fboShadow);
    	glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_textureShadow, 0);
    	checkGlError("shadowmaptexture");
    	status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    	if(status != GL_FRAMEBUFFER_COMPLETE) {
    		LOGI("init: ");
    		LOGI("failed to make complete framebuffer object %xn", status);
    	}
    	glBindFramebuffer(GL_FRAMEBUFFER, 0);
    

    The next initialization phase is the preparation of shaders.

Below is the vertex shader for the shadow map generation stage:

attribute vec3 Position;
    
    uniform mat4 Projection;
    uniform mat4 Modelview;
    
    void main(void)
    {
    	gl_Position = Projection * Modelview * vec4(Position, 1);
    }
    
Pixel shader (shadow map generation stage):

    highp vec4 Color = vec4(0.2, 0.4, 0.5, 1.0);
    
    void main(void)
    {
    	gl_FragColor = Color;
    }
    

The main task of these shaders is to write out the geometry's depth, in other words, to generate the depth buffer for the main stage.

    Stages of shadow map rendering

This stage differs from ordinary scene rendering in the following ways:

1. The FBO that acts as our depth buffer, with the shadow map texture attached, is bound: glBindFramebuffer(GL_FRAMEBUFFER, m_fboShadow).
2. Shadows can be rendered using an orthographic projection for directional sources (the sun) or a perspective projection for conical (omni) sources. In the example, the chosen perspective projection matrix lightProjectionMatrix has a wide viewing angle of 90 degrees.
3. Color writes to the frame buffer are disabled with glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE). This optimization can be very useful if you use a complex pixel shader.
4. At this stage, only the back faces of the polygons are drawn: glCullFace(GL_FRONT). This is one of the easiest and most effective ways to reduce shadow-map artifacts. (Note: it does not help for all geometries.)
5. The drawing area is set 1 pixel smaller on each side than the shadow map: glViewport(0, 0, shadowmapSize.x - 2, shadowmapSize.y - 2). This is done to leave a "field" (border) on the shadow map.
6. After drawing all the elements of the scene, the state is restored: glCullFace(GL_BACK), glBindFramebuffer(GL_FRAMEBUFFER, 0), and glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE).
    void RenderingEngine2::shadowPass() {
    	GLenum status;
    	glEnable(GL_DEPTH_TEST);
    	glBindFramebuffer(GL_FRAMEBUFFER, m_fboShadow);
    	status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    	if (status != GL_FRAMEBUFFER_COMPLETE) {
    		LOGE("Shadow pass: ");
    		LOGE("failed to make complete framebuffer object %xn", status);
    	}
    	glClear(GL_DEPTH_BUFFER_BIT);
    	glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    
    	lightProjectionMatrix = VerticalFieldOfView(90.0,
    			(shadowmapSize.x + 0.0) / shadowmapSize.y, 0.1, 100.0);
    	lightModelviewMatrix = LookAt(vec3(0, 4, 7), vec3(0.0, 0.0, 0.0), vec3(0, -7, 4));
    	glCullFace(GL_FRONT);
    	glUseProgram(m_simpleProgram);
    	glUniformMatrix4fv(uniformProjectionMain, 1, 0,
    			lightProjectionMatrix.Pointer());
    	glUniformMatrix4fv(uniformModelviewMain, 1, 0,
    			lightModelviewMatrix.Pointer());
    	glViewport(0, 0, shadowmapSize.x - 2, shadowmapSize.y - 2);
    
    	GLsizei stride = sizeof(Vertex);
    	const vector& objects = m_Scene.getModels();
    	const GLvoid* bodyOffset = 0;
    	for (int i = 0; i < objects.size(); ++i) {
    		lightModelviewMatrix = objects[i].m_Transform * LookAt(vec3(0, 4, 7), vec3(0.0, 0.0, 0.0), vec3(0, -7, 4));
    		glUniformMatrix4fv(uniformModelviewMain, 1, 0, lightModelviewMatrix.Pointer());
    		glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, objects[i].m_indexBuffer);
    		glBindBuffer(GL_ARRAY_BUFFER, objects[i].m_vertexBuffer);
    
    		glVertexAttribPointer(attribPositionMain, 3, GL_FLOAT, GL_FALSE, stride,
    				(GLvoid*) offsetof(Vertex, Position));
    
    		glEnableVertexAttribArray(attribPositionMain);
    
    		glDrawElements(GL_TRIANGLES, objects[i].m_indexCount, GL_UNSIGNED_SHORT,
    				bodyOffset);
    
    		glDisableVertexAttribArray(attribPositionMain);
    	}
    	glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    
    	glBindFramebuffer(GL_FRAMEBUFFER, 0);
    	glCullFace(GL_BACK);
    }
    

     

    Rendering scenes with shadows

The first step of this stage is to bind the shadow map texture obtained in the previous pass:

    glActiveTexture(GL_TEXTURE0);
    	glBindTexture(GL_TEXTURE_2D, m_textureShadow);
    	glUniform1i(uniformShadowMapTextureShadow, 0);
    
    void RenderingEngine2::mainPass() {
    	glClearColor(0.5f, 0.5f, 0.5f, 1);
    	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    	modelviewMatrix = scale * rotation * translation
    			* LookAt(vec3(0, 8, 7), vec3(0.0, 0.0, 0.0), vec3(0, 7, -8));
    	lightModelviewMatrix = LookAt(vec3(0, 4, 7), vec3(0.0, 0.0, 0.0), vec3(0, -7, 4));
    
    	projectionMatrix = VerticalFieldOfView(45.0, (screen.x + 0.0) / screen.y, 0.1, 100.0);
    	mat4 offsetLight = mat4::Scale(0.5f) * mat4::Translate(0.5, 0.5, 0.5);
    	mat4 lightMatrix = lightModelviewMatrix * lightProjectionMatrix	* offsetLight;
    	glUseProgram(m_shadowMapProgram);
    	glUniformMatrix4fv(uniformLightMatrixShadow, 1, 0, lightMatrix.Pointer());
    	glUniformMatrix4fv(uniformProjectionShadow, 1, 0, projectionMatrix.Pointer());
    	glUniformMatrix4fv(uniformModelviewShadow, 1, 0, modelviewMatrix.Pointer());
    
    	glViewport(0, 0, screen.x, screen.y);
    
    	glActiveTexture(GL_TEXTURE0);
    	glBindTexture(GL_TEXTURE_2D, m_textureShadow);
    	glUniform1i(uniformShadowMapTextureShadow, 0);
    
    	GLsizei stride = sizeof(Vertex);
    	const vector& objects = m_Scene.getModels();
    	const GLvoid* bodyOffset = 0;
    	for (int i = 0; i < objects.size(); ++i) {
    		modelviewMatrix = scale * rotation * translation * LookAt(vec3(0, 8, 7), vec3(0.0, 0.0, 0.0), vec3(0, 7, -8));
    		glUniformMatrix4fv(uniformTransformShadow, 1, 0, objects[i].m_Transform.Pointer());
    		glUniformMatrix4fv(uniformModelviewShadow, 1, 0, modelviewMatrix.Pointer());
    		glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, objects[i].m_indexBuffer);
    		glBindBuffer(GL_ARRAY_BUFFER, objects[i].m_vertexBuffer);
    
    		glVertexAttribPointer(attribPositionShadow, 3, GL_FLOAT, GL_FALSE,
    				stride, (GLvoid*) offsetof(Vertex, Position));
    		glVertexAttribPointer(attribColorShadow, 4, GL_FLOAT, GL_FALSE, stride,
    				(GLvoid*) offsetof(Vertex, Color));
    		glVertexAttribPointer(attribNormalShadow, 3, GL_FLOAT, GL_FALSE, stride,
    				(GLvoid*) offsetof(Vertex, Normal));
    		glVertexAttribPointer(attribTexCoordShadow, 2, GL_FLOAT, GL_FALSE,
    				stride, (GLvoid*) offsetof(Vertex, TexCoord));
    
    		glEnableVertexAttribArray(attribPositionShadow);
    		glEnableVertexAttribArray(attribNormalShadow);
    		glEnableVertexAttribArray(attribColorShadow);
    		glEnableVertexAttribArray(attribTexCoordShadow);
    
    		glDrawElements(GL_TRIANGLES, objects[i].m_indexCount, GL_UNSIGNED_SHORT,
    				bodyOffset);
    
    		glDisableVertexAttribArray(attribColorShadow);
    		glDisableVertexAttribArray(attribPositionShadow);
    		glDisableVertexAttribArray(attribNormalShadow);
    		glDisableVertexAttribArray(attribTexCoordShadow);
    	}
    }
    

The most interesting parts of rendering the shadows are the shaders. Here's the technique.

    Vertex shader (draws shadows):

    attribute vec3 Position;
    attribute vec3 Normal;
    attribute vec4 SourceColor;
    attribute vec2 TexCoord;
    
    varying vec4 fColor;
    varying vec3 fNormal;
    varying vec2 fTexCoord;
    varying vec4 fShadowMapCoord;
    
    uniform mat4 Projection;
    uniform mat4 Modelview;
    uniform mat4 lightMatrix;
    uniform mat4 Transform;
    
    void main(void)
    {
    	fColor = SourceColor;
    	gl_Position = Projection * Modelview * Transform * vec4(Position, 1.0);
    	fShadowMapCoord = lightMatrix * Transform * vec4(Position, 1.0);
    	fNormal = normalize(Normal);
    	fTexCoord = TexCoord;
    }
    

In addition to its usual work, the vertex shader transforms each vertex into the light source's coordinate space. In this example, the transformation into light space is given by the matrix lightMatrix, and the result is passed to the pixel shader through fShadowMapCoord.
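The offsetLight factor built in mainPass(), mat4::Scale(0.5f) * mat4::Translate(0.5, 0.5, 0.5), is the usual bias that remaps light-space clip coordinates from $[-1, 1]$ to the $[0, 1]$ texture range. Written out as a matrix (inferring the row-vector convention from the multiplication order used in the C++ code):

$$B = \begin{pmatrix} \tfrac{1}{2} & 0 & 0 & 0 \\ 0 & \tfrac{1}{2} & 0 & 0 \\ 0 & 0 & \tfrac{1}{2} & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} & 1 \end{pmatrix}, \qquad (x', y', z', w') = (x, y, z, w)\, M\, V_L\, P_L\, B,$$

so that after the perspective divide $x'/w'$ and $y'/w'$ fall in $[0, 1]$ and can index the shadow map directly, while $z'/w'$ is the depth to compare.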

    Pixel shader (draws shadows):

    uniform highp sampler2D shadowMapTex;
    
    varying lowp vec4 fColor;
    varying lowp vec3 fNormal;
    varying highp vec2 fTexCoord;
    varying highp vec4 fShadowMapCoord;
    
    highp vec3 Light = vec3(0.0, 4.0, 7.0);
    highp vec4 Color = vec4(0.2, 0.4, 0.5, 1.0);
    
    void main(void)
    {
    	const lowp float fAmbient = 0.4;
    	Light = normalize(Light);
    	highp float depth = (fShadowMapCoord.z / fShadowMapCoord.w);
    	highp float depth_light = texture2DProj(shadowMapTex, fShadowMapCoord).r;
    	highp float visibility = depth <= depth_light ? 1.0 : 0.2;
    	gl_FragColor = fColor * max(0.0, dot(fNormal, Light)) * visibility;
    }
    
    

The pixel shader computes each fragment's depth relative to the light source and compares it with the corresponding value in the depth map. If the value does not exceed the depth stored in the map, the fragment is visible from the light's position; otherwise, it is in shade. In this example we simply scale the color intensity by the coefficient visibility; in general, more sophisticated techniques are used.
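In equation form, the shader's last lines compute

$$C = C_{\text{frag}} \cdot \max(0,\ \mathbf{N} \cdot \mathbf{L}) \cdot v, \qquad v = \begin{cases} 1.0, & z \le z_{\text{light}} \\ 0.2, & \text{otherwise,} \end{cases}$$

where $z = \texttt{fShadowMapCoord.z} / \texttt{fShadowMapCoord.w}$ and $z_{\text{light}}$ is the depth fetched from the shadow map with texture2DProj.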

    About the Authors

    Stanislav works in the Software & Service Group at Intel Corporation. He has 10+ years of experience in software development. His main interest is optimization of performance, power consumption, and parallel programming. In his current role as an Application Engineer providing technical support for Intel® processor-based devices, Stanislav works closely with software developers and SoC architects to help them achieve the best possible performance on Intel platforms. Stanislav holds a Master's degree in Mathematical Economics from the National Research University Higher School of Economics.

    Iliya, co-author of this blog, is also a Senior Software Engineer in the Software & Service Group at Intel. He is a developer on the Intel® VTune™ Amplifier team. He received a Master’s degree from the Nizhniy Novgorod State Technical University.

  • Shadow Mapping
  • rendering
  • Shader
  • vcsource_index
  • vcsource_type_techsample
  • vcsource_os_windows
  • vcsource_domain_graphics
  • vcsource_type_productsample
  • vcsource_type_techarticle
  • Developers
  • Android*
  • Game Development
  • Graphics
  • URL
  • Opinion: Matt's Top 10 Tech & Gaming Predictions for 2014

    $
    0
    0

    Out with 2013 and in with 2014!!!

    So here are my top ten (10) predictions for technology and gaming related things in the coming new year.  I can hardly wait!

    1) PC "Next"?  It's your SmartPhone!  Expect the specs and performance of these little buggers to make some very interesting baby steps, and leaps.  In terms of 'wearable' computing I don't like wearing watches, glasses, necklaces, having piercings, wearing rings etc.  So the phone is as good as it gets for me, and likely most people, when it comes to 'wearable computing'.  The winner in this next age of computing will target your phone as having more convergence, not less.  So expect them to pair and connect better with things like Smart/er TVs & displays; and peripherals such as mice, keyboards, and gamepads.  Note:  My caveat here is if no one in 2014 realizes this, then shame on them, and the industry for such an obvious miss.

2) Tablets grow up.  There are dumb Tablets and there are smart Tablets. If you follow any of my previous blogs you'll know what I'm talking about.  Tablets are nothing more than a PC 'form-factor'.  Tablets such as Microsoft's Surface Pro, and other 2in1's demonstrate what a Tablet PC should and can be.  Expect marketing and some analyst firms to continue to obfuscate this for as long as possible. (They want you to buy both)  The reality of it though is that Tablets are nothing more than the latest bright shiny object of PC-land.  When Laptops came out there were similar debates about the impact to Desktops.  Ultimately we as consumers don't care so much about what form-factor the PC takes next, or what OS it's running; just as long as it's allowing us to connect, run, and/or play the software apps we love and care about.  (For work or play)

    3) An Xbox Surface or something bigger?  After all - why wouldn't they?  I'll conjecture and take it a few steps further. Xbox has very much turned into Microsoft's entertainment brand.  So it's not just games anymore; but technically their portal to other forms of entertainment such as movies/tv/video, and music. (Long Live Zune). Ok, great, so what's the big deal?  I believe at this point they could do one of two things.  1) Either go bigger and turn Xbox into a more fully grown OEM-type brand. (ala Apple) This could be an attempt to divide consumer from business software applications. - OR - 2) Go smaller; but lock the OS and API's down even further in an attempt to position Xbox as being "premium" content in an effort to charge more for their connected cross screen cloud apps.  IMHO both of these are very poor decisions.  Please burn the Innovators Dilemma book since that has now turned into herd mentality strategic thinking - good grief.  I really hope I'm wrong on this #3.

4) Microsoft buys or invests heavily into an OEM Display Mfg.  I won't put this past them at this point.  OEM manufacturers' (Mfgs') response to Windows 8 and 8.1 and their impact on their businesses, coupled with the over-hype of Tablets supposedly being the demise of PCs, has really backfired for those with too many eggs in the Microsoft basket.  As a result, many OEMs are more vulnerable than before to purchase and/or takeover.  Even without an outright purchase, many OEMs will be desperate enough to agree to terms and conditions they wouldn't otherwise. We're likely to see Microsoft do more of what we've already seen in buying/propping up some select OEMs with cash/stock/etc. (e.g. Dell).  Unfortunately I also expect us to lose a few OEMs over the next ~24 months.  I really hope I'm wrong in this prediction as well.

    5) Google = Wow.  Given Google's success with Android, and even Chrome (especially this holiday); we should expect to see them gain additional traction in both the consumer and even work environments; both domestically and abroad.  Keep a close eye on their partnerships; especially with Amazon, and Samsung.  I fully expect both Android and Chrome to mature more fully and become more capable over time.

    6) Apple's next big thing?  It seems that everyone is expecting Apple to unveil the 2nd coming in the next few years. Which is somewhat unfair to expect; but this is what happens when one sets such a high bar and former precedents.  Given their patent filings; we should start to see a bigger push from them into the living room. (Gamepads! Yay! Gaming from Apple finally?!?)  This will likely spark an even bigger "Destroy all Monsters" type of fight for what we affectionately here like to call the "Hearth".

    7) Amazon = Dark Horse.  Given that Amazon has such an incredible online retail presence I fully expect them to go very big into more, not less, consumer devices in 2014.  We've already received tons of hints about their push into gaming into the living room as well.  This will most likely look like a Kindle on steroids (which I think they should call the Bonfire)((Should I tm that for them? Here - Kindle Bonfire(tm)).  I'd also keep a very close eye on their partnerships with the likes of Google (for Android), and Qualcomm (for Snapdragon, etc.).

    8) Consoles vs PC Gaming.  This will be interesting to watch.  I'm not feeling the same sense of excitement for this 8th Gen of Consoles as there was for the previous generation.  Great, so GTA V hits a billion in 3 days.  This is awesome. There will always be a few games like that.  However; the true test will be to see how these suckers perform over the next 36 months. Remember that 'Destroy all Monsters' analogy I just mentioned?  Consider this: the 8th Gen's biggest competitor is ironically the previous 7th Gen.  PC's are going bigger into the living room.  (Enter SteamMachines & even just normal Windows/MacOSX PCs).  We have Amazon and Apple likely making a play.  So grab a bag of popcorn. This will be interesting to watch and see how this unfolds over the next few years.  Ultimately I think the real form factor winner for gaming will look something like today's TabletPC form factors.  (+Docking Stations for enhanced graphics etc).  PC Gaming will continue to dominate globally revenue-wise.  I expect Xboxes in China to perform about as well as they did in S. Korea.  What a lot of people still fail to understand is that Consoles tend to be a luxury item in most of the known world.  A smart strategist would pass go, collect the $200, and converge the platforms.

9) Smarter Devices and Voice.  Well... my voice prediction for 2013 didn't get as far as I'd hoped - darn it.  I'm still hopeful that someone will create something like we see in the Iron Man movies such as the "Jarvis" personal assistant.  (A PA?)  Couple that with more RFID-type enabled devices; which can be embedded in nearly anything nowadays such as business cards, trading/game cards, clothes, toys (e.g. Skylanders, Infinity, etc), you name it; and we now have a recipe for some very interesting connected and smarter homes, and businesses.  A little too 'Big Brother'? Yes; which is why I want my "PA-Jarvis" to be locally hosted, and not in the cloud.  I'm hoping my Jarvis will be my first line of defense before I go on the www.  This type of artificial intelligence (AI) is reason enough for me to start demanding more personal computing horsepower again.

    10) Big Data.  Am I the only one sick and tired of hearing about this?  I'm pretty sure the NS of A is more than willing to share with us all what a PITA it is to suss through that much data.  I do find it amusing that all those tin-foil hat people that we all used to make fun of might have actually been onto something.  Who are we really helping when there's an algorithm that enables so few to have access to so much data?  I have to stop and ask myself, how does this really help me, or anyone for that matter?   For now I'll just have to trust that it'll never be abused or hacked into.  /sigh.

    Sorry for the long post.  I love to pontificate on the future; and hopefully some of my predictions never come to pass!  I hope you've all had a great 2013!?  I hope you all have an even better 2014!  Onward and upward!

    Best wishes,

    Matt

     

  • Technology predictions
  • gaming
  • technology
  • PC Gaming
  • Consoles
  • Smartphones
  • tablets
  • Icon image: 

  • Android*
  • Windows*
  • Laptop
  • Phone
  • Server
  • Tablet
  • Desktop
  • Developers
  • Partners
  • Professors
  • Students
  • Android*
  • Apple iOS*
  • Apple Mac OS X*
  • Microsoft Windows* (XP, Vista, 7)
  • Microsoft Windows* 8
  • Unix*
  • Intel XDK, Crosswalk Runtime and WebGL


    Introduction

This article is about developing Android apps using the Intel XDK and three.js. It gives an overview of how to develop a GUI-based app for Android using this wonderful tool. I have taken help from an existing article while explaining Three.js, and the full Three.js documentation gives a lot of information to work with.

Pretty new to the Android platform

For the last 15 months I have been developing apps for the Windows desktop, so I am very new to the Android platform. This experience will be new for me as I explore the (for me) unknown world of Android. The things I cover might not be new, but I have given it a try.

Why I chose the Intel XDK

I have a little knowledge of HTML and wanted an IDE where I could apply my HTML skills. The whole IDE experience was new to me because I had never used the Intel XDK before. It's a cloud-based IDE that requires you to be connected to the internet throughout the entire process of creating a package for distribution. Pluses for the Intel XDK are that you don't have to configure the Android ADT bundle and it has a built-in emulator to test your app, with the option to choose from different form factors such as the Google Nexus 4, Google Nexus 7, Lenovo K900, etc. A minus I found was that the IDE sometimes froze if I worked in it for a long time; at those times I had to restart the IDE and then resume my work. Overall my experience using the Intel XDK was a good one, because I had little trouble developing my app.

    Exploring Three.js 

     

Three.js came to my liking while I was searching for some Processing-based examples on the net. Essentially another creative coder's delight (http://threejs.org/), it has a lot of options powered by WebGL and is very helpful for creating great-looking GUI apps to have fun with. It is open source, with a lot of examples to work with. One catch: as you are developing for Android, not all browsers support WebGL; in that case you can use the Canvas renderer and you are on your way.

     

What is the Intel XDK?
The Intel XDK is a cross-platform IDE for developing solid HTML5 apps, and you can update your code while connected to the internet. After you build the app you can distribute it to different platforms. Android apps are created the same way, and in the build option you can create the APK. This IDE lets you code once and distribute to different platforms. The newly updated XDK has a Crosswalk build option for Android, currently in beta, which helps bring native capabilities to your HTML5, JavaScript, and CSS3 apps. During the development phase you can test the app on different form factors using the emulator. All in all, it's a great platform to develop HTML5 apps and distribute them.

    Download Link

Step-by-step process of downloading the Intel XDK, with figures

The next step detects your OS.

Save the file; the .exe will be downloaded. Follow the steps below to install and launch it.

The project lifecycle of an Intel XDK Android project is shown below.

When you open the Intel XDK you are presented with the option to start a new project. You can start fresh with a blank template, or reuse and modify any demo. The available options are:

    • i) Start with a Blank Project
    • ii) Work with a Demo
    • iii) Import an Existing App: here you can port older apps made with the XDKs, PhoneGap apps, appMobi apps, and HTML5 API-based apps, but not Java apps.
    • iv) Use App Starter: it uses App Framework 2.0. Full details are available here: http://app-framework-software.intel.com/
    • v) Start with App Designer: App Designer lets you start the project using App Framework, Bootstrap API, jQuery Mobile, or Topcoat.


Since we are targeting Three.js, we will work with a demo, specifically the Crosswalk demo, and modify it by inserting additional code in the index.html file and adding the required three.js files. There is great information explaining and giving an overview of the Crosswalk runtime on the Intel website: http://software.intel.com/en-us/html5/articles/crosswalk-application-runtime

     

What is Three.js?

Three.js is a library that makes WebGL (3D in the browser) very easy. While a simple cube in raw WebGL would take hundreds of lines of JavaScript and shader code, the Three.js equivalent is only a fraction of that. Three.js is a lightweight, cross-browser JavaScript library/API used to create and display animated 3D computer graphics in a web browser. Three.js scripts may be used in conjunction with the HTML5 canvas element, SVG, or WebGL.
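To give a sense of how little code that fraction is, here is a minimal, self-contained sketch of a spinning cube (my own illustration, not part of the voxel painter example below; it assumes the era-appropriate API with THREE.CubeGeometry and THREE.CanvasRenderer):

    <script src="three.min.js"></script>
    <script>
    // Minimal three.js scene: a spinning wireframe cube.
    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1000 );
    camera.position.z = 300;

    // CanvasRenderer works even in webviews without WebGL, as noted earlier.
    var renderer = new THREE.CanvasRenderer();
    renderer.setSize( window.innerWidth, window.innerHeight );
    document.body.appendChild( renderer.domElement );

    var cube = new THREE.Mesh(
        new THREE.CubeGeometry( 100, 100, 100 ),
        new THREE.MeshBasicMaterial( { color: 0x00ff80, wireframe: true } )
    );
    scene.add( cube );

    function animate() {
        requestAnimationFrame( animate );
        cube.rotation.x += 0.01;
        cube.rotation.y += 0.01;
        renderer.render( scene, camera );
    }
    animate();
    </script>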

     

Starting afresh
We will decode one of the examples and create a new APK from it. The creators of Three.js have done an excellent job; with all credit to them, I am using one of their examples to get going: https://github.com/mrdoob/three.js/blob/master/examples/canvas_interactive_voxelpainter.html
1) Open the Intel XDK.  2) Click on Project.  3) Click on Start a new Project.

    Click on Work with a demo

Select Crosswalk and click Next.

     

Click on Create. Once the project is created, you will get a congratulations message.

Change the index.html page to your liking, as it carries the main changes in the app, and also add the required Three.js JavaScript files.

A close look at the index.html page
Let's see the flow of the index.html file within the Intel XDK.

After changing the index.html file and adding the required .js files to the threejs folder (you need to access the files from the project's Windows directory structure and add them manually; in my case I copied them into the main project folder, E:\IntelXDK_Projects\eXAMPLE2\threejs), click Emulate. You can choose from the many available emulators to check the project.

The magic of the Intel XDK and Crosswalk in bringing WebGL

Extending the Crosswalk demo with the Intel XDK helps you bring WebGL to Android. As per the discussion in this topic, the role of Crosswalk with the Intel XDK is described as follows:

Crosswalk can be thought of as an alternate runtime for Android devices. It is only compatible with Android 4.0 and higher devices, so cannot be applied to older Android 2.x and 3.x devices. It is in a preliminary (alpha) release state right now; I do not know when it will be released to beta or final release. When it does become available there will be documentation describing in more detail what Crosswalk offers in comparison to using the built-in webview on Android 4.x devices.

From a discussion with Bob Duffy, I found out that Crosswalk with the Intel XDK replaces the default webview, since the Android webview doesn't support WebGL; the Intel XDK thereby provides Chromium and WebGL on pre-4.4 devices.

     

So the key in building a new project is extending the index.html page, which already has the Crosswalk runtime associated with it. An important file in the project is manifest.json. Taking help from the documentation, we see that the application structure contains manifest.json in the root directory; the main entry point is then referenced from this manifest file.

The file format:

    
    {
      "name": "WebGL Sample",
      "manifest_version": 1,
      "version": "0.0.0.1",
      "app": {
        "launch": {
          "local_path": "index.html"
        }
      }
    }
    
    

The Crosswalk project is in beta and undergoing changes, but you can certainly experiment and learn more. As the Crosswalk discussion describes it:

At the heart of the Crosswalk web runtime is the Blink* rendering and layout engine. Blink provides the same HTML5 features and capabilities found in today's modern web ecosystem, such as WebGL* and Web Audio*. Crosswalk enables state-of-the-art HTML5-based applications that make the most of today's leading edge mobile devices.

Crosswalk with the Intel XDK provides access to the WebGL API.


     
    The Build process

     

     

Here lies the main action, where the APKs are created. The Build menu has all the options for distributing the app to multiple platforms. Here you can edit assets as well as images that you want to add to the app. For Android there are two options:

    • i) Android: creates the normal APKs that you can distribute.
    • ii) Crosswalk for Android (in beta): creates a Crosswalk Runtime Android APK, with the option to build for ARM-based devices or the x86 architecture.

     

The build process, with figures

You will see that the build is about to be created. Click on Build App Now.

The next figure shows the build process.

You will get a message that the build was successful.

The whole app development process revolves around the index.html page; any update here changes the app and its whole flow. Make your changes to the index.html page and include the necessary Three.js files. Tweaking the code from GitHub will help you explore. There is also the Crosswalk build, which lets you create the package for x86 or ARM; it's in beta, but you can try it.

     

The anatomy of the index.html page (creating a new project)

Any change made to index.html determines how the app will finally look, so we need to include the necessary Three.js files, and the whole logic needs to be implemented here. I dug deep into the three.js GitHub repository to check which examples could be brought into the Intel XDK and turned into an APK, so I have broken down the index.html page and its modifications to give a proper view of the project. For learning purposes I have taken help from the repository; it's very useful for exploring three.js. The primary contributors to this library are Mr.doob and theo-armour. With due respect to them (they have done an excellent job making Three.js what it is now), I am exploring these repositories to learn, share, and contribute.

Let's start

To be more compatible with different mobile platforms, we need to declare a viewport with the device width and height.

The device width allows the layout to adjust to different devices, be it a tablet or phones of various sizes.

     

The declaration:

    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=0">

This also means that when the device's orientation changes, the app remains properly usable.

     

The style tag

The style tag controls how the app is rendered on the device. Here is the modification that determines how the app will look; for this project we modify the style tag as follows:

    <style>
    			body {
    				font-family: Monospace;
    				background-color: #f0f0f0;
    				margin: 0px;
    				overflow: hidden;
    			}
    		</style>
    
    

To avoid having to reload the page again and again, we include three.min.js within the head tag of the HTML, referencing it in a <script></script> tag. Including the three.js script allows the important actions to execute within the three.min.js script; herein lies the logic of implementing three.js, hence we include it in the head tag.

Next comes the initialization of the variables, i.e., implementing how the 3D GUI will behave. We implement animations such as object movements and interactions, zooming in on objects or moving out, and we start by calling the init() method.

     

In the entry point of the three.js script we need to append the container element and its child elements' behaviors. To get the geometry to work, we need to declare the variables and their implementation logic here.

Looking at the Three.js script, we see that it is essentially a 3D GUI depiction involving:

    • i) Scenes
    • ii) Cameras
    • iii) Projectors
    • iv) Renderers and objects

Certain modifications in the Three.js script allow applying the plane geometry to face normals.

We use:

var normalMatrix = new THREE.Matrix3();

For the camera with a perspective projection we use:

camera = new THREE.PerspectiveCamera();

Modifying the custom grid involves changes to the geometry, hence we do the following:

    
      var size = 500, step = 50;
    
    				var geometry = new THREE.Geometry();
     
    				for ( var i = - size; i <= size; i += step ) {
     
    					geometry.vertices.push( new THREE.Vector3( - size, 0, i ) );
    					geometry.vertices.push( new THREE.Vector3(   size, 0, i ) );
     
    					geometry.vertices.push( new THREE.Vector3( i, 0, - size ) );
    					geometry.vertices.push( new THREE.Vector3( i, 0,   size ) );
     
    				}
     
    				var material = new THREE.LineBasicMaterial( { color: 0x000000, opacity: 0.2 } );
     
    				var line = new THREE.Line( geometry, material );
    				line.type = THREE.LinePieces;
    				scene.add( line );
     
    
    

We use a projector to change the behavior of the objects, to implement mouse movements, and to select certain objects. This also helps with projection into screen space.

The light reflection as well as the ambient-light effect is controlled in these lines of code, which determine how the lighting will look.

var ambientLight = new THREE.AmbientLight( 0x606060 );

Taking a look at the variable declarations:

target = new THREE.Vector3( 0, 200, 0 );

In the declaration above we declare a 3D vector. A 3D vector is, in general, a geometric quantity that has magnitude and direction.
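For reference (a standard formula, not from the article): the magnitude of $\mathbf{v} = (x, y, z)$, which is what THREE.Vector3's length() method returns, is

$$\|\mathbf{v}\| = \sqrt{x^2 + y^2 + z^2}.$$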

var normalMatrix = new THREE.Matrix3();

It is a 3×3 matrix.

For projection purposes we use mouse2D and mouse3D.

Further modifications and the whole HTML code are shown below. It creates a grid on which you can place boxes and design with them. This is an excerpt modified from the link above.

    <!DOCTYPE html>
    <html lang="en">
    	<head>
    		<title>three.js canvas - interactive - voxel painter</title>
    		<meta charset="utf-8">
    		<meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0">
    		<style>
    			body {
    				font-family: Monospace;
    				background-color: #f0f0f0;
    				margin: 0px;
    				overflow: hidden;
    			}
    		</style>
    	</head>
    	<body>
     
    		<script src="../build/three.min.js"></script>
     
    		<script src="js/libs/stats.min.js"></script>
     
    		<script>
     
    			var container, stats;
    			var camera, scene, renderer;
    			var projector, plane;
    			var mouse2D, mouse3D, raycaster, theta = 45,
    			isShiftDown = false, isCtrlDown = false,
    			target = new THREE.Vector3( 0, 200, 0 );
    			var normalMatrix = new THREE.Matrix3();
    			var ROLLOVERED;
     
    			init();
    			animate();
     
    			function init() {
     
    				container = document.createElement( 'div' );
    				document.body.appendChild( container );
     
    				var info = document.createElement( 'div' );
    				info.style.position = 'absolute';
    				info.style.top = '10px';
    				info.style.width = '100%';
    				info.style.textAlign = 'center';
    				info.innerHTML = '<a href="http://threejs.org" target="_blank">three.js</a> - voxel painter<br><strong>click</strong>: add voxel, <strong>control + click</strong>: remove voxel, <strong>shift</strong>: rotate, <a href="javascript:save()">save .png</a>';
    				container.appendChild( info );
     
    				camera = new THREE.PerspectiveCamera( 40, window.innerWidth / window.innerHeight, 1, 10000 );
    				camera.position.y = 800;
     
    				scene = new THREE.Scene();
     
    				// Grid
     
    				var size = 500, step = 50;
     
    				var geometry = new THREE.Geometry();
     
    				for ( var i = - size; i <= size; i += step ) {
     
    					geometry.vertices.push( new THREE.Vector3( - size, 0, i ) );
    					geometry.vertices.push( new THREE.Vector3(   size, 0, i ) );
     
    					geometry.vertices.push( new THREE.Vector3( i, 0, - size ) );
    					geometry.vertices.push( new THREE.Vector3( i, 0,   size ) );
     
    				}
     
    				var material = new THREE.LineBasicMaterial( { color: 0x000000, opacity: 0.2 } );
     
    				var line = new THREE.Line( geometry, material );
    				line.type = THREE.LinePieces;
    				scene.add( line );
     
    				//
     
    				projector = new THREE.Projector();
     
    				plane = new THREE.Mesh( new THREE.PlaneGeometry( 1000, 1000 ), new THREE.MeshBasicMaterial() );
    				plane.rotation.x = - Math.PI / 2;
    				plane.visible = false;
    				scene.add( plane );
     
    				mouse2D = new THREE.Vector3( 0, 10000, 0.5 );
     
    				// Lights
     
    				var ambientLight = new THREE.AmbientLight( 0x606060 );
    				scene.add( ambientLight );
     
    				var directionalLight = new THREE.DirectionalLight( 0xffffff );
    				directionalLight.position.x = Math.random() - 0.5;
    				directionalLight.position.y = Math.random() - 0.5;
    				directionalLight.position.z = Math.random() - 0.5;
    				directionalLight.position.normalize();
    				scene.add( directionalLight );
     
    				var directionalLight = new THREE.DirectionalLight( 0x808080 );
    				directionalLight.position.x = Math.random() - 0.5;
    				directionalLight.position.y = Math.random() - 0.5;
    				directionalLight.position.z = Math.random() - 0.5;
    				directionalLight.position.normalize();
    				scene.add( directionalLight );
     
    				renderer = new THREE.CanvasRenderer();
    				renderer.setSize( window.innerWidth, window.innerHeight );
     
    				container.appendChild(renderer.domElement);
     
    				stats = new Stats();
    				stats.domElement.style.position = 'absolute';
    				stats.domElement.style.top = '0px';
    				container.appendChild( stats.domElement );
     
    				document.addEventListener( 'mousemove', onDocumentMouseMove, false );
    				document.addEventListener( 'mousedown', onDocumentMouseDown, false );
    				document.addEventListener( 'keydown', onDocumentKeyDown, false );
    				document.addEventListener( 'keyup', onDocumentKeyUp, false );
    
    				//
     
    				window.addEventListener( 'resize', onWindowResize, false );
    
    			}
     
    			function onWindowResize() {
     
    				camera.aspect = window.innerWidth / window.innerHeight;
    				camera.updateProjectionMatrix();
     
    				renderer.setSize( window.innerWidth, window.innerHeight );
     
    			}
     
    			function onDocumentMouseMove( event ) {
     
    				event.preventDefault();
     
    				mouse2D.x = ( event.clientX / window.innerWidth ) * 2 - 1;
    				mouse2D.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
     
    				var intersects = raycaster.intersectObjects( scene.children );
     
    				if ( intersects.length > 0 ) {
     
    					if ( ROLLOVERED ) ROLLOVERED.color.setHex( 0x00ff80 );
     
    					ROLLOVERED = intersects[ 0 ].face;
					ROLLOVERED.color.setHex( 0xff8000 );
     
    				}
     
    			}
     
    			function onDocumentMouseDown( event ) {
     
    				event.preventDefault();
     
    				var intersects = raycaster.intersectObjects( scene.children );
     
    				if ( intersects.length > 0 ) {
     
    					var intersect = intersects[ 0 ];
     
    					if ( isCtrlDown ) {
     
    						if ( intersect.object != plane ) {
     
    							scene.remove( intersect.object );
     
    						}
     
    					} else {
     
    						normalMatrix.getNormalMatrix( intersect.object.matrixWorld );
     
    						var normal = intersect.face.normal.clone();
    						normal.applyMatrix3( normalMatrix ).normalize();
     
    						var position = new THREE.Vector3().addVectors( intersect.point, normal );
     
    						var geometry = new THREE.CubeGeometry( 50, 50, 50 );
     
    						for ( var i = 0; i < geometry.faces.length; i ++ ) {
     
    							geometry.faces[ i ].color.setHex( 0x00ff80 );
     
    						}
     
    						var material = new THREE.MeshLambertMaterial( { vertexColors: THREE.FaceColors } );
     
    						var voxel = new THREE.Mesh( geometry, material );
    						voxel.position.x = Math.floor( position.x / 50 ) * 50 + 25;
    						voxel.position.y = Math.floor( position.y / 50 ) * 50 + 25;
    						voxel.position.z = Math.floor( position.z / 50 ) * 50 + 25;
    						voxel.matrixAutoUpdate = false;
    						voxel.updateMatrix();
    						scene.add( voxel );
     
    					}
     
    				}
    			}
     
    			function onDocumentKeyDown( event ) {
     
    				switch( event.keyCode ) {
     
    					case 16: isShiftDown = true; break;
    					case 17: isCtrlDown = true; break;
     
    				}
     
    			}
     
    			function onDocumentKeyUp( event ) {
     
    				switch( event.keyCode ) {
     
    					case 16: isShiftDown = false; break;
    					case 17: isCtrlDown = false; break;
     
    				}
    			}
     
    			function save() {
     
    				window.open( renderer.domElement.toDataURL('image/png'), 'mywindow' );
    				return false;
     
    			}
     
    			//
     
    			function animate() {
     
    				requestAnimationFrame( animate );
     
    				render();
    				stats.update();
     
    			}
     
    			function render() {
     
    				if ( isShiftDown ) {
     
    					theta += mouse2D.x * 3;
     
    				}
     
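				// Orbit the camera around the target point; theta changes on shift-drag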
    				camera.position.x = 1400 * Math.sin( theta * Math.PI / 360 );
    				camera.position.z = 1400 * Math.cos( theta * Math.PI / 360 );
    				camera.lookAt( target );
     
    				raycaster = projector.pickingRay( mouse2D.clone(), camera );
     
    				renderer.render( scene, camera );
     
    			}
     
    		</script>
     
    	</body>
    </html>
    
    

     The project as it looks in the emulator 

    After this experiment, we see that a simple modification to index.html, plus the required three.js files, gives you some cool GUI effects that you can use in your own projects.

     

    • The possibilities are endless with three.js, as you can also develop games with it.
    • Three.js is an excellent WebGL tool that helps you explore 3D GUI applications in an innovative manner.

    Combine it with the Intel XDK IDE and you can create some great APKs.

      Nexus 7 emulator images

    This article is an attempt to showcase how Three.js can be used to develop good GUI-based WebGL Android apps with the Intel XDK IDE. Internet connectivity is required throughout the project process. As I learn more, I will try to contribute more. Check out the GitHub repository for Three.js, explore the examples, and experiment; I had fun tweaking the code.

     

     Good resources  

     

    You can learn a lot from the Three.js questions on Stack Overflow

    Intel XDK Documentation

    Three.js documentation

    APK Examples and the Code link


  • html5 Intel XDK
  • HTML5
  • JavaScript*
  • Android*
  • Phone
  • Tablet
  • Developers
  • Students

  • Using the Beacon Mountain Toolset and NDK for Native App Development



    Download as PDF

    Download Source Code

    Summary: The goal of this project is to demonstrate how easy it is to build native Android apps with the Beacon Mountain toolset and the Android NDK. We will do this by building a simple game, walking through the steps of installing tools with Beacon Mountain, building the game, and testing it with the Intel® Hardware Accelerated Execution Manager (Intel® HAXM) emulator. Commented source code is also available.

    Installing Beacon Mountain

    Beacon Mountain is a one-click install for most of the tools needed for developing Android* applications, including Eclipse* and the Android SDK and NDK. This can save hours or even days of downloading, building, and installing different packages and development tools.

    Install Beacon Mountain from here: http://software.intel.com/en-us/vcsource/tools/beaconmountain

    Creating the project

    1. Open Eclipse ADT and create a new workspace called MazeGame.



      Click the New Android Application button and set the project name to MazeGame. Change all API levels to API 17: Android 4.2.



      Click Next, accepting all default settings, until the Finish button appears, then click it.
       
    2. Since we are creating an app that will involve native C++ code, we need to set the NDK location. Click Window->Preferences and expand the Android menu. Browse to the location of your Beacon Mountain install folder, select the NDK folder inside it, and click OK.


       
    3. To enable native C++ compilation, right-click the project, and select Android Tools->Add Native Support.



      Accept the default library name by clicking Finish.
       
    4. By default, our project will only build for ARM devices. To enable building for x86 devices, we'll need to create an Application.mk file alongside our Android.mk in the /jni folder and add the following:
    APP_ABI := x86 armeabi
    APP_STL := stlport_static
    

    After building, you should see armeabi and x86 folders inside MazeGame/MazeGame/bin.

    Game Structure

    Although there are many good ways to structure our game, we'll start with the simplest possible format:

    • A nearly empty activity that loads a view.
    • A view that extends GLSurfaceView. We'll call into our native code from here to render each frame.
    • A C++ MazeGame class that will manage all the game objects, the physics engine, communication with the Java* wrapper and OpenGL* setup.
    • A C++ GameObject class that will manage object position, 3D model parsing, and drawing itself.

    Calling Native C++ Code From Java

    To call native code, we'll need to load our library (the one we configured when we created the project) at the end of our view file.

    static {
        System.loadLibrary("MazeGame");
    }
    

    Note that the actual library (inside the lib/x86 folder) will be called libMazeGame.so, not MazeGame.so.

    We'll also need to define Java versions of the native functions we'll be calling:

        public native void init(int rotationDegrees);
        public native void restart();
        public native void setRotation(int degrees);
        public native void loadResources(Bitmap circuitBoardBitmap, Bitmap componentsBitmap, Bitmap stripesBitmap, Bitmap ballBitmap);
        public native void resize(int width, int height);
        public native void renderFrame(double timeStepSeconds, double currTimeSeconds);
        public native void accelerometerChanged(float x, float y);
        public native void deinit();
    
    

    Finally, we'll need to define these functions in MazeGame.cpp. Native functions must follow a very specific naming convention to be callable from Java:

    JNIEXPORT void JNICALL Java_com_example_mazegame_MazeGameView_init(JNIEnv* env, jobject thisClazz, int rotationDegrees)
    {
        gameInst = new MazeGame(env, thisClazz, rotationDegrees);
        gameInst->restart();
    }
    

    Notice the function name: after the Java_ prefix, it encodes the full package and class name of the Java class that calls into it, then the method name, with underscores as separators. Also, the first two arguments (the JNI environment and the calling object) are supplied by the system, so they are required here, and there are no matching parameters for them on the Java side.

    Because this is C++ and not C, we'll need to declare these functions with extern "C" linkage above the function definitions.

    extern "C" {
    JNIEXPORT void JNICALL Java_com_example_mazegame_MazeGameView_init(JNIEnv* env, jobject obj, int rotationDegrees);
    // ... declarations for the remaining native functions ...
    }
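
    Putting the Java side together, here is a minimal sketch of how the view might drive these native calls from a GLSurfaceView renderer. Only the native signatures come from this article; the renderer wiring and the lastTimeNs helper are assumptions for illustration.

    package com.example.mazegame;

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    import android.content.Context;
    import android.opengl.GLSurfaceView;

    // Sketch: forward GLSurfaceView callbacks to the native library.
    public class MazeGameView extends GLSurfaceView implements GLSurfaceView.Renderer {

        private long lastTimeNs; // assumed helper for computing the time step

        public MazeGameView(Context context, int rotationDegrees) {
            super(context);
            init(rotationDegrees); // native: creates the MazeGame instance
            setRenderer(this);     // render continuously on the GL thread
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            lastTimeNs = System.nanoTime();
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            resize(width, height); // native: update the viewport
        }

        public void onDrawFrame(GL10 gl) {
            long now = System.nanoTime();
            renderFrame((now - lastTimeNs) / 1e9, now / 1e9); // native
            lastTimeNs = now;
        }

        // The native declarations and the static loadLibrary block
        // shown earlier live in this class.
        public native void init(int rotationDegrees);
        public native void resize(int width, int height);
        public native void renderFrame(double timeStepSeconds, double currTimeSeconds);

        static { System.loadLibrary("MazeGame"); }
    }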

    Calling Java From C++

    Some tasks, like playing sounds and opening dialogs, are best done in Java, so we'll need a way to call back out from our native C++ code. In our constructor, we'll save references to the calling class and the PlaySound method on that class:

    MazeGame::MazeGame(JNIEnv* env, jobject clazz, int rotationDegrees)
    {
        _environment = env;
        _callingClass = (jclass)(env->NewGlobalRef(clazz));
        jclass viewClass = env->FindClass("com/example/mazegame/MazeGameView");
        _playSoundMethodID = env->GetMethodID(viewClass, "PlaySound", "(Ljava/lang/String;)V");
        _showGameOverDialogMethodID = env->GetMethodID(viewClass, "ShowGameOverDialog", "()V");
        // ... rest of construction elided ...
    }

    Then, when we are ready to play a sound, we can simply call the saved reference:

    void MazeGame::playSound(const char* soundId)
    {
        jstring jstr = _environment->NewStringUTF(soundId);
        _environment->CallVoidMethod(_callingClass, _playSoundMethodID, jstr);
    }
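
    For completeness, here is a hedged sketch of the Java-side counterparts in MazeGameView that these method IDs resolve to. The bodies are assumptions; only the signatures are fixed by the "(Ljava/lang/String;)V" and "()V" descriptors above.

    // Sketch of the Java callbacks resolved via GetMethodID above.
    public void PlaySound(String soundId) {
        // Assumed detail: look up and play a SoundPool sample keyed by soundId.
    }

    public void ShowGameOverDialog() {
        // These callbacks arrive on the native/GL thread, so hop to the
        // UI thread (View.post) before touching dialogs or views.
        post(new Runnable() {
            @Override
            public void run() {
                // build and show the game-over dialog here
            }
        });
    }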
    
    

    Integrating Box2D

    One of the best things about the NDK is that it lets development teams use existing C++ libraries, such as the well-known Box2D physics engine. After downloading and unzipping Box2D, move it into the jni folder. We'll also need to compile in all of the Box2D sources via our jni/Android.mk file:

    LOCAL_PATH := $(call my-dir)
    
    include $(CLEAR_VARS)
    
    LOCAL_MODULE    := maze-game
    FILE_LIST := $(wildcard $(LOCAL_PATH)/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Collision/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Collision/Shapes/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Common/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Dynamics/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Dynamics/Contacts/*.cpp) $(wildcard $(LOCAL_PATH)/Box2D/Dynamics/Joints/*.cpp)
    LOCAL_SRC_FILES := $(FILE_LIST:$(LOCAL_PATH)/%=%)
    
    # Standard NDK closing directive that builds the shared library
    include $(BUILD_SHARED_LIBRARY)
    

    Now we can include Box2D in our code:

    #include <Box2D/Box2D.h>
    ...
    b2World* _world;
    

    Testing with the Intel HAXM Emulator

    The Intel HAXM emulator, part of the Beacon Mountain toolset, provides a massive speed increase over the stock Android emulators. This can be crucial for game development, as testing many scenarios becomes impossible at low frame rates.

    Begin by right-clicking the project and choosing Properties. Click the Run/Debug Settings item in the left-nav. To test our project, we'll need to add a launch configuration. So click the New button and select Android Application from the list.

    Under the Android tab, click Browse and select the main project. Then click the Target tab and select the x86 device from the list.

    Click OK. We can now test our project by right-clicking it and selecting Run As->Android Application.

    Summary

    This has been a high-level overview of how the Beacon Mountain toolset can accelerate Android game development. For more information, download the full source code of the sample application or check out the Beacon Mountain home page (http://software.intel.com/en-us/vcsource/tools/beaconmountain).

  • Intel® HAXM
  • Beacon Mountain
  • emulator
  • applications
  • Frame rendering
  • x86
  • ARM
  • Developers
  • Android*
  • Intel Hardware Accelerated Execution Manager (HAXM)
  • OpenCL*
  • Development Tools
  • Game Development
  • User Experience and Design

    Protected attachments:

    Attachment              Size
    MazeGame_Release.zip    1.78 MB
  • URL
  • An Alternative Approach to Android App Development and Unconventional Uses of the Android SDK Toolset


    I believe most Android app-development enthusiasts are already quite skilled with the Android SDK toolset. What I want to introduce here, however, are some less common ways of using the SDK tools, i.e., their "unconventional usage."

    First of all, most Android application developers build their applications against the Android emulator. This workflow is convenient: Eclipse provides an integrated development environment, and there is essentially nothing to configure or operate by hand. But it also has some serious drawbacks:

    1. The Android emulator has limited memory. If you are developing a resource-hungry application, this becomes a real headache: the emulator runs very slowly. Worse, the higher the emulator's resolution is set, the more it lags, which adds further friction to development. At this point some developers choose to buy a development board or an Android phone to develop against. But there is actually a better way to develop Android applications; I'll keep you in suspense for a moment.

    2. Typically, code editing and the emulator run on the same machine. Have you ever considered splitting the two across separate machines, or using a virtual machine to run the Android system?

    Sounds interesting, doesn't it?

    In fact, Android is not only available for ARM! Allow me to introduce Android-x86; if you haven't heard of it, it's worth a web search.

    Developing applications against android-x86 takes just a few steps:

    1. Find a Linux host or install a virtual machine.

    2. Download the android-x86 source code and build it (I won't go into detail here; plenty of guides are available online).

    3. Run the resulting Android image (usually named generic-x86.iso) in a virtual machine.

    After these steps, you will have what amounts to a complete Android operating system. The advantage is that you can freely configure the system's memory size, flash size, CPU frequency, and other hardware properties, so your application development no longer has to account for how hardware constraints throttle the emulator. No more sipping coffee and nibbling bread while anxiously watching a sluggish emulator! You can even take a spare machine and install the Android system directly on a desktop or laptop!

    Now we come to the key question: how do you connect your application-development workflow to this android-x86 system?

    That brings us to the second part of this article: the unconventional usage of the Android SDK toolset.

    First, the brute-force approach: you can copy your built APK onto a USB drive, mount the drive on the android-x86 system, and install the application from there. Obviously, if that were the only way, I would not be writing this article.

    The Android SDK toolset is already powerful enough on its own; read the documentation for each SDK tool carefully and you will discover the trick. Here is a brief explanation:

    The adb tool is not limited to talking to the emulator; it has a more powerful mode of use.

    How to use adb connect:

    As described above, you now have android-x86 running in a virtual machine, or, if you have hardware to spare, installed outright on another machine. Here are the detailed steps.

    First, press ALT+F1 in the virtual machine (or on the host running Android). You will be surprised to find that this system also provides a command-line interface! On reflection, this is not surprising at all: Android uses the Linux kernel, and ALT+F1 is the Linux shortcut for switching to a virtual console. Likewise, ALT+F7 switches back to the graphical interface, just as on a Linux system.

    At the command line, type the netcfg command to see the system's IP address; suppose it shows 192.168.1.160.

    Next comes the unconventional adb usage:

    adb connect 192.168.1.160:5555

    (The part before the colon is your android-x86 system's IP address; the part after it is the port number, which is fixed.)

    Once the connection succeeds, all the preparation is done.

    Now, the moment of magic:

    On your Windows host, run adb install with any APK you have, and you will be delighted to find that the application is installed onto the android-x86 system and runs blazingly fast, many times faster than the stock emulator!

    Better still, when you run an application from Eclipse, no emulator window opens; the application magically runs on the android-x86 system instead!

    How does this work?

    The principle is simple. The reason adb install and Eclipse originally deployed to the emulator is that adb was connected to the emulator by default. Once you attach to the android-x86 system with adb connect, that "default emulator" becomes your virtual machine or your second Android machine, so every operation aimed at the "emulator" is now executed on the android-x86 system.

    Neat, isn't it?

    If you're interested, give it a try! It may cost you some time up front, but the fun, and the development speedup afterward, will repay you in ways you don't expect!

  • Curated Home
  • Android*
  • Advanced
  • URL
  • Test Your Apps for Free on Intel-based Android Devices at AppThwack

    $
    0
    0

    Our friends over at AppThwack run a pretty neat service. You can upload your Android or web app to their virtual device lab, and they'll install it on actual devices, run some tests, and send you the results. You can also script your own tests if you want.

    Normally, you pay for this service by the minute, based on how long your tests take, and how many devices you want to test on. But we've set up an arrangement with them to make testing on Intel-based Android devices completely free, courtesy of the Intel Developer Zone. You can test on any of these Android tablets and phones with Intel Atom processors:

    • Asus MeMO Pad FHD 10
    • Dell Venue 7
    • Dell Venue 8
    • Lenovo IdeaPhone K900
    • Motorola Droid RAZR i
    • Samsung Galaxy Tab 3 10

    More and more Android devices have Intel processors. If you're a developer, and want to see how your apps perform on Intel-based Android tablets and phones, now you have a free and easy way to test them. And if you need help with your app, don't forget to check out the rest of our Android development tools and resources at software.intel.com/android.

    Happy testing!

    EDIT: We've got a post that shows you how to sign up for AppThwack, upload your app, run some tests, and get the results. Check it out.

  • AppThwack
  • testing
  • News
  • Debugging
  • Development Tools
  • Android*
  • Phone
  • Tablet
  • Developers
  • Students
  • How-To: Test your app for FREE on Intel Android devices using appthwack.com


    AppThwack is a cloud-based real-device testing service. The website lets you test your Android, iOS, and web applications on real devices in the cloud, which means you can test your app on actual hardware without owning a single device. To do so, you upload your package file to the AppThwack cloud. Customers are normally charged by device minutes per month; for Intel Android devices, however, testing is absolutely free. Before you start, you need to create an account by providing your email address, user name, and password.

    AppThwack.com account creation
    Screenshot: from appthwack.com website

    After creating the account, you will be asked to create a project by selecting the project type. Select Android App as the project type.
    The project creation screen looks like this:
    Project creation
    Screenshot: from appthwack.com website

    Once you create a project, you’ll be asked to upload your APK file. 
    Upload APK
    Screenshot: from appthwack.com website

    After successfully uploading the APK, select Intel Atom FREE (6) from the devices drop-down. The "(6)" means it supports 6 Intel Android devices. The following 6 devices are supported:

    • Asus MeMO Pad FHD 10
    • Dell Venue 7
    • Dell Venue 8
    • Lenovo IdeaPhone K900
    • Motorola Droid RAZR i
    • Samsung Galaxy Tab 3 10

    Select Intel Atom
    Screenshot: from appthwack.com website

    After selecting this option from the drop-down, you can run the built-in tests or any other kind of test the website offers. All tests performed on Intel Atom devices are free on this website. Once everything is set up, click the “Go” button to schedule a test. Your APK is now queued to be tested on appthwack.com. 

    Test completion
    Screenshot:from appthwack.com website

    After the scheduled tests complete, you are presented with a full report on the results. The report shows how many tests ran, how many passed, and how many failed, with reasons for the failures. It also shows which devices were selected to perform the test.
    You can even see how the app looked on each device by reviewing screenshots of your application, along with detailed logs and performance metrics for your app on the various devices. 
    Test Results
    Screenshot: from appthwack.com website

    After all the tests, you can download the full report as a zip file. The zip contains per-device results in CSV form, primarily the log information for each device. When your testing is complete, you can delete your project.

    Start uploading your APK today and test your app on Intel Android devices for free.

     

  • AppThwack
  • testing
  • android
  • Android*
  • Phone
  • Tablet
  • Developers
  • Migrate Android* Phone Apps to Tablets


    Introduction

    One of the problems arising from the last few years of rapid mobile device proliferation is that, because smartphones came out before tablets, developers designed applications for smartphones first. When tablets arrived with similar operating systems but different screen sizes, developers had a new market for their software; however, most did not change their approach to user interaction. The Android* OS can adjust most smartphone applications to tablet-sized screens, but the result usually doesn't look right and doesn't provide the experience users expect from a tablet application.

    When developers build applications for tablets, they need to redesign and redeploy the application to match tablet users' visual and user-experience expectations. In this article, we give advice and perspective to developers who want to migrate their Android smartphone applications to Android tablets.

    How to Begin

    First, we need to come up with a methodology to make the migration as efficient as possible, without spending too much time and resources.

    To do this, we will walk through the hardware and software differences between smartphones and tablets as well as user experience scenarios for both of the devices.

    The diagram below shows the steps to follow:

    Device Analysis: Tablets vs Smartphones

    HW Differences

    The most obvious hardware difference between smartphones and tablets is screen size. Devices with screens larger than 7 inches are considered tablets, while smartphones have smaller screens of 2.5”, 3.5”, 4.2”, and so on. Tablets' larger screens bring different resolutions, user experience, and human-computer interaction that developers need to be aware of.



    (Motorola Razr-i* Smartphone with 4.3” screen size, Asus Fonepad* with 7” screen size, and Android* tablet with 10.1” screen size, respectively.)

    People tend to use tablets with their larger screen sizes for tasks that are more intensive and where they tend to spend more time within an application, like more lengthy reading, more interaction with games, and watching movies. Phones are smaller, hence more mobile to carry around and convenient for getting quick information. Smartphone applications are designed to navigate application features quickly and supply small chunks of information on smaller screens.

    Device size affects how people hold the device, how many fingers they use at the same time, or in what situation they use the device, like standing, sitting, walking, etc. These differences determine the human-computer interaction.

    In addition to the user experience issues, screen resolutions change between devices so developers should consider screen resolution differences and how they affect user habits when migrating smartphone applications to tablets.

    Another hardware difference to account for is that most tablets do not have a cellular network connection like smartphones do, so application developers should assume the absence of a cellular connection when designing their tablet applications.

    SW Differences

    The Android OS was originally designed and developed for smartphones. Google then created v3.0, a.k.a. Honeycomb*, for tablets. Since Android v4.0, a.k.a. Ice Cream Sandwich*, the same OS is used on smartphones and tablets, and the releases following v4.0 have shipped for both. The pictures below show a phone and a tablet with the Ice Cream Sandwich home screen, illustrating the basic interface differences users encounter on the two devices.



    (Android* Ice Cream Sandwich* 4.3” Phone and 10.1” Tablet Home Screen)

    The main differences between the home screens are:

    • The notification area is at the top of the screen on smartphones, while it is usually in the bottom right corner on tablets (this changes with tablet screen size and the manufacturer's UI design).
    • The Applications menu icon is at the bottom center of the smartphone screen, while it is mostly in the top right corner on tablets.
    • The quick navigation buttons are at the bottom of the screen on smartphones, but in the bottom left corner of the screen on tablets.

    If your Android smartphone application was developed for Android v4.0 or later, the software stack is the same on tablets. If it was developed for an earlier version, then even though Android* is backward compatible, you should review the SDK version history for changes, detect deprecated methods, and update and rebuild the code.

    User Experience: Human-Computer Interaction with Tablets

    As mentioned in the hardware differences section, people use phones and tablets differently. People usually carry smartphones with them, so they tend to use them for communication (calls, messaging, tweeting, etc.) and for short reading sessions rather than concentrated activities like reading books or articles, writing long emails, and other lengthy entertainment or productivity tasks. It is common to show short pieces of information on the small phone screen. When you move your app to tablets, you can show more information on the screen, and users tend to dwell on it longer instead of quickly scanning for information.

    It is also more common to use tablets instead of smartphones for production tasks, including writing longer notes, photo and video editing, creating presentations, writing long emails, etc. These uses mean that people tend to use tablets in more static situations rather than mobile conditions, since tablets are larger, have bigger screens to enjoy, larger soft keyboards that make typing easier, and so on.

    Tablets are rotated more often than phones, as both landscape and portrait modes are large enough on a tablet to legibly show text and graphics. Designing the tablet application's user interface to work well in both landscape and portrait orientations is important, as the application can then present an enhanced user experience compared with a smartphone. One suggestion is to give more of the window to data space, so users can more easily enter text or follow the state of the application.

    Design Decisions

    After analyzing the hardware, software, and user interaction differences between tablets and smartphones, it is easier to see what the design decisions are for tablet applications. Here’s a list of the most common actions that need to be considered:

    • Redesigning the user interface to account for how people use the application on a tablet.
    • Adding more visuals and text on the screen.
    • Recreating some visual content for the user interface due to differences in screen resolution. To fill the larger screen, the OS must stretch some visual resources, distorting them; it is usually better to replace an image that would be stretched with a higher definition one.

    Android UI Development

    In most cases Android adjusts your application layout to fit the current device screen. Sometimes this works fine, but in other cases the UI might not look good and needs adjustments. Even if you designed a dynamic UI, scaled images and widgets might not be user friendly. Since your existing XML layouts were designed for smartphones, you need to create new XML files designed for tablet-sized devices.

    Here we have some quick tips for designing the user interface for tablets.

    • Use new high-res image resources instead of low-res ones to improve the quality of the user interface; otherwise your app's images will be pixelated and your application will look bad.

    • Rescaling low-res images or letting the OS stretch the user interface widgets can look worse than you think, so try to centralize the user interface elements. Below is a sample user interface for a phone app that was stretched to fit on a tablet.



      (TextField widget on a phone being adjusted to show on a tablet)

    • Absolute layouts are hard to manage, since you need to specify exact locations of the user interface elements, which makes it hard for your application to accommodate different screen sizes and adjust for landscape and portrait modes. Use linear or relative layouts instead.

    Developing the new User Interface

    The Android SDK lets developers create multiple UIs with XML files for different screen sizes. The system handles most of the work of rendering your application properly on each screen configuration by scaling layouts to fit the screen size and density, and by scaling bitmap drawables as appropriate for the screen density.

    In the Android project you can use these folders to ensure your app uses appropriate screen design.

    res/layout/my_layout.xml                 (regular screen size)

    res/layout-large/my_layout.xml           (larger screen size)

    Or declare the specific resolution screen with:

    res/layout-sw600dp/main_activity.xml     (7” tablets, 600dp wide and bigger)

    res/layout-sw720dp/main_activity.xml     (10” tablets, 720dp wide and bigger)



    (Android Project Folder Structure shows resource folders for different resolutions)
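
    If you also need to branch in code, a complementary runtime check is possible. The sketch below is a minimal assumption-laden example: the 600dp threshold simply mirrors the sw600dp qualifier above, using the same smallest-width metric (available since API 13).

    import android.app.Activity;
    import android.content.res.Configuration;

    public class MainActivity extends Activity {

        // True on devices whose smallest width is at least 600dp, the
        // same metric the res/layout-sw600dp qualifier matches against.
        private boolean isTabletSized() {
            Configuration config = getResources().getConfiguration();
            return config.smallestScreenWidthDp >= 600;
        }
    }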

    Using Fragments

    The Android SDK provides fragments for creating more dynamic user interfaces. Using fragments, developers can include more functionality on a single tablet screen, giving users more capability at once. Developers can create sub-activities for different parts of the screen by defining a fragment for each sub-activity in an Android activity class.



    (Fragments Visualization on devices)

    Fragments have their own lifecycle, similar to an activity's, with a stack, state, and back stack.

    When an activity creates a fragment, it is attached to the activity, so the developer can define what the attached fragment does.

    public void onAttach(Activity activity) {
        super.onAttach(activity);
    }
    

    Multiple fragments can be attached to and detached from an activity, so developers can add more functionality to one screen by managing fragments. The fragment APIs provide the FragmentManager class to manage fragment objects within an activity, as the sketch below shows.
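
    As a minimal sketch (the container id R.id.reading_pane and the DetailsFragment class are hypothetical names for illustration), swapping the content of one pane from inside an activity looks like this:

    // Hypothetical helper inside an Activity (android.app.FragmentTransaction):
    // replace the contents of one pane with a new fragment without
    // leaving the current screen.
    private void showDetails() {
        FragmentTransaction tx = getFragmentManager().beginTransaction();
        tx.replace(R.id.reading_pane, new DetailsFragment());
        tx.addToBackStack(null); // the Back key undoes the swap
        tx.commit();
    }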

    Using Fragments, you don’t have to start a new activity and switch to a brand new window. The user can stay on the same screen and continue to interact with the same UI. Using the same screen fragments can mean that your app runs faster also.



    (Fragments Lifecycle)

    Fragments also have subclasses like DialogFragment, ListFragment, and PreferenceFragment that parallel their activity counterparts, e.g., ListActivity. By using the appropriate subclass, you can design more dynamic applications.

    The Android SDK comes with sample fragment applications for both phones and tablets. The screenshots below are from a sample that uses list fragments to dynamically change the reading pane without changing the window and starting a new activity.



    (ListFragment on tablet)



    (ListFragment on Phone)

    Android Tablet Application Deployment

    To declare support for multiple screens for your application, edit your application's AndroidManifest.xml file and add the <supports-screens> tag. The tag supports the `android:requiresSmallestWidthDp`, `android:compatibleWidthLimitDp`, and `android:largestWidthLimitDp` attributes. Your manifest file will then look something like this:

    <manifest ... >
      <supports-screens android:requiresSmallestWidthDp="600" />
        ...
    </manifest>
    


    Conclusion

    Migrating your application from smartphone to tablet is recommended in order to present the best user experience; a purpose-built tablet design attracts more attention than a smartphone application merely stretched onto a tablet. As discussed in this article, the Android* SDK helps make the migration and redesign easier once you decide on the new tablet design for your application.

    Other Related Articles

    1. “User Experience Design Guidelines for Tablets running Android*” article helps you to build a User Interface that looks great and accomplishes your goals.

      http://software.intel.com/en-us/articles/user-experience-design-guidelines-for-tablets-running-android
    2. “Designing for User Experience, Power, and Performance on Tablets” article helps you by giving issues to consider when working out your User Interface and overall user experience.

      http://software.intel.com/en-us/articles/designing-for-user-experience-power-and-performance-on-tablets
    3. “Intel for Android* Developers Learning Series #4: Android Tablet Sensors” walks through sensor-based application development on Android tablets; it is a good read for reviewing the application development process on Android tablets.

      http://software.intel.com/en-us/articles/intel-for-android-developers-learning-series-4-android-tablet-sensors
    4. “Mobile OS Architecture Trends” gives an overview of mobile OS design and architecture, covering user experience, power management, security design, cloud support, and openness.

      http://software.intel.com/en-us/articles/mobile-os-architecture-trends

    About Author

    Onur has been working as a Software Engineer at Intel® Corporation for more than 3 years. He has worked on Android* and Linux* systems with various Intel platforms and technologies. He is currently a Software Development Engineer at Intel Labs Istanbul.

    Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

    Copyright © 2014 Intel Corporation. All rights reserved.

    *Other names and brands may be claimed as the property of others.

  • Developers
  • Android*
  • URL