Using Docker Containers for Development

I recently bought a new computer with 64GB of memory because I was always maxing out my old one at 16GB. I work on a lot of different software at the same time. I might be building an iOS app in Xcode while building the API in VS Code and running multiple microservices inside Kubernetes for the app to communicate with. Plus I always end up with a million browser tabs open, which love to eat memory.

Anyway I had a clean slate and wanted to keep this new computer organized.

What I Wanted

  1. Sandbox my development environment for my different applications. 

    I wanted to be able to run my work applications next to my weekend and hobby projects without fear of one interfering with the other. Yes, you can use tools like asdf, pyenv, and rbenv to help manage your language versions. But even then you’ll run into issues when application dependencies require different system dependencies like Postgres and OpenSSL.

    If you upgrade to the latest version of some software only to find out it breaks your application, removing it and installing the previous one can be challenging.

    I want to be able to easily remove software and not worry that there will be leftover artifacts.

  2.  Make it quick and easy for someone on my team to get started.

    Setting up a new computer for someone, or installing the right tools to work on a new application, can be a pain. I’ve seen developers take days trying to get all their applications running at a new job.

    Using a tool like Skaffold (https://skaffold.dev/) definitely helps. Letting a developer open an application, run `skaffold dev`, and start developing right away is very nice. But you still need the right version of Skaffold installed, and you still need to install all of your dependencies locally to get things like code completion in your IDE.

  3. Keep a consistent environment between everyone working on a project.

    This can help to mitigate some of the “it works on my machine” issues.    

    Maybe your app is a Python script that needs a different version of Python than the one that ships with the OS. You can use tools like asdf or pyenv with virtualenv, but everyone still needs to download the same version to stay consistent. You also have to make sure everyone is using the same tool so you know whether your application needs a .tool-versions or a .python-version file.

Upgrading software is hard to sync across developer environments. Not too long ago I started working on a new application that someone else had set up. I ran the command to start it, but it didn’t work. Looking into the code, the format seemed wrong, but the person who set it up had already told me he had it running. It turns out he was using a newer version of Helm that had a different syntax. I had to upgrade mine and then update all my other applications to use the new format. That’s exactly what I don’t want to happen.

What Do I Do?

It seems that a Docker container is exactly what I need. But is there an easy way to develop inside one?

So I set out on my quest to find a solution. After a 10-second Google search, my quest was finished! I came across Visual Studio Code Remote – Containers (https://code.visualstudio.com/docs/remote/containers). This was exactly what I wanted: a simple way to create development containers and attach my IDE to them.

If you’re not using Visual Studio Code, I recommend trying it out; otherwise you’ll need to find an equivalent way to edit inside Docker containers with your IDE.

The documentation for it is very thorough with some good examples.

Simply install the extension, and when you open a project you can choose to open it in the container. The first time, it will build the image and the container for you and then attach your window to it. If the container has already been built, it attaches immediately.

You can see an example development container I use for a lot of my projects at https://github.com/frenzylabs/devcontainer.

I keep all my project files on my computer and just mount them into the container. I might create a Docker volume for node modules or crates to speed up some of the file access; it just depends.
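For reference, here’s a minimal devcontainer.json sketch along those lines. The image, extension, and mount target below are just placeholder assumptions; my actual setup is in the repo linked above.

{
  // Hypothetical Node project; swap in whatever base image your app needs.
  "name": "my-api",
  "image": "node:lts",

  // The project folder itself is mounted into the container automatically.
  // This extra mount keeps node_modules in a named Docker volume for faster file access.
  "mounts": [
    "source=my-api-node-modules,target=/workspaces/my-api/node_modules,type=volume"
  ],

  // Extensions to install inside the container.
  "extensions": ["dbaeumer.vscode-eslint"],

  // Run after the container is created.
  "postCreateCommand": "npm install"
}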

Some Other Benefits

1. You can keep your development environment very close to production.

2. You can monitor memory and process usage very easily with the `docker stats` command.

3. It makes cleaning up your computer much easier.  If I need more space, it’s easy to find which items are taking the most space and which ones are in use.  I can run `docker system df -v` to get details on image, container, volume, and cache sizes.

4. If you do accidentally delete everything you can easily build it all again!

Go give it a try!

Setting Up a Ceph Filesystem with Kubernetes on DigitalOcean

Recently my company created an application for managing 3D printing projects, profiles, and slices. Check it out at layerkeep.com.

We wanted users to be able to keep track of all their file revisions and also be able to manage the files without having to go through the browser. To accomplish this, we decided to use Git, which meant we needed a scalable filesystem.

The first thing we did was set up a Kubernetes cluster on DigitalOcean.

Currently, DigitalOcean only provides Volumes that are ReadWriteOnce. Since we have multiple services that need access to the files (API, nginx, slicers), we needed to be able to mount the same volume with ReadWriteMany.

I decided to try s3fs with DigitalOcean Spaces, since Spaces is an S3-compatible object store. I set up the CSI driver from https://github.com/ctrox/csi-s3 and tried both the s3fs and goofys mounters. Both worked, and both were way too slow. Most of our API calls access the filesystem multiple times, and each access took between 3 and 15 seconds, so I moved on to Ceph.

Ceph Preparation:

There is a great storage manager called Rook (https://rook.github.io/) that can be used to deploy many different storage providers to Kubernetes.
** Kubernetes on DigitalOcean doesn’t support FlexVolumes so you need to use CSI instead.

Hardware Requirements
You can check the Ceph docs to see what you might need: http://docs.ceph.com/docs/jewel/start/hardware-recommendations/#minimum-hardware-recommendations

Create the Kubernetes Cluster

Follow the directions here to create the cluster: https://www.digitalocean.com/docs/kubernetes/how-to/create-clusters/

** Initially I tried a 3-node pool with 1 CPU and 2GB of memory per node, but it wasn’t enough; Ceph needed more CPU on startup. I changed each node to 2 CPUs and 2GB of memory, which worked.

We’ll keep all Ceph services constrained to this pool by naming it “storage-pool” (or whatever name you want) and adding a node affinity for that name later.

Cluster Access

Make sure you follow DigitalOcean’s directions for accessing the cluster with kubectl. (https://www.digitalocean.com/docs/kubernetes/how-to/connect-to-cluster/)

You also might want to add a Kubernetes Dashboard. (https://github.com/kubernetes/dashboard)

SSH:
Right now it doesn’t look like you can SSH into the Droplets that DigitalOcean creates when you create a node pool. I wanted to have access just in case, so I went to the Droplets section and reset the root password for each of them. I was then able to add my SSH key and disable root login. I recommend doing this before adding any services.

Create Volumes
Go to the Volumes section of the DigitalOcean dashboard. We want to create a volume for each node in the node pool we just created. Don’t format it. Then attach it to the correct Droplet. Remember that volumes can be increased in size later, but they can’t be decreased without creating a new one.

Create the Ceph Cluster

Clone the Rook repository or just copy the ceph directory from: https://github.com/rook/rook/tree/release-1.0/cluster/examples/kubernetes/ceph

cd cluster/examples/kubernetes/ceph

Modify the cluster.yaml file.

This is where we’ll add the node affinity to run the Ceph cluster only on nodes in the “storage-pool” node pool.

placement:
  all:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: doks.digitalocean.com/node-pool
            operator: In
            values:
            - storage-pool
    podAffinity:
    podAntiAffinity:
    tolerations:
    - key: storage-pool
      operator: Exists
 

There are also other configs that are commented out that you might need to change. For example, if your disks are smaller than 100 GB you’ll need to uncomment `databaseSizeMB: "1024"`.
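For example, the storage section of cluster.yaml ends up looking something like this. This is only a sketch based on the Rook example file; whether you use all devices or a device filter depends on how you attached your volumes, and the sizes should match your disks.

storage:
  useAllNodes: true
  useAllDevices: true   # pick up the unformatted volumes attached to each droplet
  config:
    # Uncomment if the disks are smaller than 100 GB
    # databaseSizeMB: "1024"
    # Uncomment if the disks are 20 GB or smaller
    # journalSizeMB: "1024"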

Modify the filesystem.yaml file if you want. (Filesystem Design)
Once you’re done configuring you can run:


kubectl apply -f ceph/common.yaml
kubectl apply -f ceph/csi/rbac/cephfs/
kubectl apply -f ceph/filesystem.yaml
kubectl apply -f ceph/operator-with-csi.yaml
kubectl apply -f ceph/cluster.yaml

If you want the ceph dashboard you can run:
kubectl apply -f ceph/dashboard-external-https.yaml

The operator should now create your cluster. You should see 3 managers, 3 monitors, and 3 OSDs. Check here for issues: https://rook.github.io/docs/rook/master/ceph-common-issues.html
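A quick sanity check is to list the pods in the rook-ceph namespace:

kubectl get pods -n rook-ceph
# Look for rook-ceph-mgr-*, rook-ceph-mon-*, and rook-ceph-osd-* pods in the Running state.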

Deploy the CSI

https://rook.github.io/docs/rook/master/ceph-csi-drivers.html

We need to create a secret to give the provisioner permission to create the volumes.

To get the adminKey we need to exec into the operator pod. We can print it out in one line with:

POD_NAME=$(kubectl get pods -n rook-ceph | grep rook-ceph-operator | awk '{print $1;}'); kubectl exec -it $POD_NAME -n rook-ceph -- ceph auth get-key client.admin

Create a secret.yaml file:

apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: default
data:
  # Values under data must be base64-encoded.
  # Required if provisionVolume is set to true
  adminID: YWRtaW4=    # "admin", base64-encoded
  adminKey: {{ PUT THE BASE64-ENCODED RESULT FROM THE LAST COMMAND }}
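Something like the following produces the base64-encoded values and creates the secret (this assumes the $POD_NAME variable from the one-liner above is still set):

# Base64-encode the admin ID and admin key before pasting them into secret.yaml
echo -n "admin" | base64                                                              # -> YWRtaW4=
kubectl exec $POD_NAME -n rook-ceph -- ceph auth get-key client.admin | tr -d '\n' | base64

kubectl apply -f secret.yaml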

Create the CephFS StorageClass.

We’ll need to modify the example storageclass in ceph/csi/example/cephfs/storageclass.yaml.

The storageclass.yaml file should look like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  # Comma separated list of Ceph monitors
  # if using FQDN, make sure csi plugin's dns policy is appropriate.
  monitors: rook-ceph-mon-a.rook-ceph:6789,rook-ceph-mon-b.rook-ceph:6789,rook-ceph-mon-c.rook-ceph:6789

  # For provisionVolume: "true":
  # A new volume will be created along with a new Ceph user.
  # Requires admin credentials (adminID, adminKey).
  # For provisionVolume: "false":
  # It is assumed the volume already exists and the user is expected
  # to provide path to that volume (rootPath) and user credentials (userID, userKey).
  provisionVolume: "true"

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # The secrets have to contain user and/or Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default

reclaimPolicy: Retain
allowVolumeExpansion: true

Change the storage class name to whatever you want.

*If you changed metadata.name in filesystem.yaml to something other than “myfs” then make sure you update the pool name here.

Create the PVC:

Remember that Persistent Volume Claims are accessible only from within the same namespace.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: csi-cephfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
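Apply it and confirm the claim binds (the file name is just whatever you saved the manifest as):

kubectl apply -f pvc.yaml
kubectl get pvc my-pv-claim   # STATUS should eventually show Bound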

Use the Storage

Now you can mount the volume in your Kubernetes resources using the persistent volume claim you just created.  An example Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: webserver
 namespace: default
 labels:
   k8s-app: webserver
spec:
 replicas: 2
 selector:
   matchLabels:
     k8s-app: webserver
 template:
   metadata:
     labels:
       k8s-app: webserver
   spec:
     containers:
     - name: web-server
       image: nginx
       volumeMounts:
       - name: my-persistent-storage
         mountPath: /var/www/assets
     volumes:
     - name: my-persistent-storage
       persistentVolumeClaim:
         claimName: my-pv-claim

 

Both deployment replicas will have access to the same data inside /var/www/assets.
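If you want to convince yourself, you can write a file from one replica and read it back from the other. A rough sketch (pod names will differ on your cluster):

# Grab the two webserver pod names
PODS=($(kubectl get pods -l k8s-app=webserver -o jsonpath='{.items[*].metadata.name}'))

# Write from the first replica, read from the second
kubectl exec ${PODS[0]} -- sh -c 'echo "hello from replica 0" > /var/www/assets/test.txt'
kubectl exec ${PODS[1]} -- cat /var/www/assets/test.txt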

Additional Tools

You can also test and debug the filesystem using the Rook toolbox.  (https://rook.io/docs/rook/v1.0/ceph-toolbox.html).

First start the toolbox with:  kubectl apply -f ceph/toolbox.yaml

Shell into the pod.

TOOL_POD=$(kubectl get pods -n rook-ceph | grep tools | head -n 1 | awk '{print $1;}'); kubectl exec -it $TOOL_POD -n rook-ceph -- /bin/bash

Run Ceph commands:  http://docs.ceph.com/docs/giant/rados/operations/control/

Validate the filesystem is working by mounting it directly into the toolbox pod.
From: https://rook.io/docs/rook/v1.0/direct-tools.html

 

# Create the directory
mkdir /tmp/registry

# Detect the mon endpoints and the user secret for the connection
mon_endpoints=$(grep mon_host /etc/ceph/ceph.conf | awk '{print $3}')
my_secret=$(grep key /etc/ceph/keyring | awk '{print $3}')

# Mount the file system
mount -t ceph -o mds_namespace=myfs,name=admin,secret=$my_secret $mon_endpoints:/ /tmp/registry

# See your mounted file system
df -h

Try writing and reading a file to the shared file system.

echo "Hello Rook" > /tmp/registry/hello
cat /tmp/registry/hello

# delete the file when you're done
rm -f /tmp/registry/hello

Unmount the Filesystem

To unmount the shared file system from the toolbox pod:

umount /tmp/registry
rmdir /tmp/registry

No data will be deleted by unmounting the file system.

Monitoring

Now that everything is working you should add monitoring and alerts.

You can add the Ceph dashboard and/or Prometheus/Grafana to monitor your filesystem.
http://docs.ceph.com/docs/master/mgr/dashboard/
https://github.com/rook/rook/blob/master/Documentation/ceph-monitoring.md

Include System Libraries Using Swift Package Manager or CocoaPods

I’m currently using Swift Package Manager to build a framework for an iOS project.  Why? Because I like the clean, modular approach: there’s no need to have an Xcode project, and I find it faster for CI testing.

Unfortunately, Swift Package Manager doesn’t work for the iOS project itself. And since my team is already familiar with CocoaPods, that is what the iOS project uses.

Now I will explain how I included a system library inside a Swift framework that is then used in an iOS project with CocoaPods.

I’ll show you how I set up CommonCrypto to work with both Swift Package Manager and CocoaPods.

For Swift Package Manager:

  • A. Create a git repo for the swift package wrapper around the system library.   
    1. Add a module.modulemap file to the repo and add the system header you are wrapping.
      module CCommonCrypto [system] {
        header "/usr/include/CommonCrypto/CommonCrypto.h"
        export *
      }
    2.  Add Package.swift file to repo
      import PackageDescription

      let package = Package(
        name: "CCommonCrypto"
      )
    3.  Commit the files and add a tag to them (Swift Packages need a tag to be used)
      git tag 0.0.1
      git push origin 0.0.1
      (you can overwrite previous tag by using the "-f" flag on both of those commands)
  • B. Create another repo for the Swift Package that will use the system package wrapper.
    1. Create the Package.swift file and add the repo created in step A as a dependency.

      import PackageDescription

      let package = Package(
        name: "CommonCrypto",
        targets: [
          Target(name: "CommonCrypto"),
        ],
        dependencies: [
          .Package(url: "https://github.com/kmussel/ccommoncrypto.git", "0.0.1"),
        ]
      )
    2. Add the “Sources” folder and a subfolder named the same thing you named your target in the Package.swift file
      Sources -> CommonCrypto
    3. Inside the subfolder (CommonCrypto in this case) add all the Swift files that you want to use.
      1.  In each file where you want to use the dependency package, import it using the name from its Package.swift file:
        import CCommonCrypto

Now you can create another Swift package and include Repo 2 as a dependency; when you run `swift build`, it will download and include the dependencies.

You can also run `swift package generate-xcodeproj` if you want to use an Xcode project.

Getting the Swift Package to Work with CocoaPods:

  • C.  Repo 1 is not needed.  We just need to update Repo 2 with the necessary CocoaPods files.
    1.  Add another subfolder under the subfolder we created in step B.2. The name of this subfolder should match the name of the package from step A.2, so that CocoaPods will recognize the import statement from step B.3.1.
    2. Copy the module.modulemap file from step A.1. into this subfolder.
    3. Create another subfolder in the root directory.  It can be named anything you want but I called it “CocoaPods” since it is only used for that.

    4. Inside the “CocoaPods” directory, create a subfolder for each SDK you want this package to target,
      e.g. iphoneos, iphonesimulator, macosx, etc.

    5. Create a module.map file under each of the subfolders you just created, pointing to the correct system headers:
      CocoaPods -> iphoneos -> module.map
      module CCommonCrypto [system] {
        header "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS10.1.sdk/usr/include/CommonCrypto/CommonCrypto.h"
        export *
      }
    6. Create the Podspec file for it
      1. Our source_files will just be the .swift files inside the Sources directory.  These files are added to the Xcode project.
        s.source_files = "Sources/**/*.swift"
      2. Normally any file not in source_files will be removed, but the project needs access to our “CocoaPods” directory to know which module maps to include. To keep the “CocoaPods” directory around without adding it to the project, we use preserve_paths.
        s.preserve_paths = 'CocoaPods/**/*'
      3. We then tell Xcode where the include paths are for each SDK. CocoaPods installs the pod in the PODS_ROOT directory under a subdirectory named the same as this Podspec.
        s.pod_target_xcconfig = {
          'SWIFT_INCLUDE_PATHS[sdk=iphoneos*]'        => '$(PODS_ROOT)/CommonCrypto/CocoaPods/iphoneos',
          'SWIFT_INCLUDE_PATHS[sdk=iphonesimulator*]' => '$(PODS_ROOT)/CommonCrypto/CocoaPods/iphonesimulator',
          'SWIFT_INCLUDE_PATHS[sdk=macosx*]'          => '$(PODS_ROOT)/CommonCrypto/CocoaPods/macosx'
        }
      4. Next, the header path in each of the module.map files probably won’t be the same for every user, so we need to change it when they run pod install. We create a script to do this and execute it using the prepare_command in CocoaPods. For example, my path is “/Applications/Xcode-beta.app/Contents/Developer/”, so the script replaces the default with that. I grabbed the script from this site, but you should probably modify it to handle SDK versions too.
        s.prepare_command = <<-CMD
        ./CocoaPods/injectXcodePath.sh
        CMD

 Now you can include Repo 2 in the Podfile of your iOS project.

The entire Podspec:
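Assembled from the pieces above, it looks roughly like this. The metadata fields (version, license, authors, deployment targets) are placeholders; the real file lives in the repo linked below.

Pod::Spec.new do |s|
  s.name         = "CommonCrypto"
  s.version      = "0.0.1"                       # placeholder
  s.summary      = "Swift wrapper around the CommonCrypto system library."
  s.homepage     = "https://github.com/kmussel/commoncrypto"
  s.license      = "MIT"                         # placeholder
  s.authors      = "kmussel"
  s.source       = { :git => "https://github.com/kmussel/commoncrypto.git", :tag => s.version.to_s }
  s.ios.deployment_target = "9.0"                # placeholder
  s.osx.deployment_target = "10.11"              # placeholder

  # Only the Swift sources get added to the Xcode project.
  s.source_files = "Sources/**/*.swift"

  # Keep the CocoaPods directory (with the per-SDK module.map files) around after install.
  s.preserve_paths = "CocoaPods/**/*"

  # Point Swift at the right module.map for each SDK.
  s.pod_target_xcconfig = {
    'SWIFT_INCLUDE_PATHS[sdk=iphoneos*]'        => '$(PODS_ROOT)/CommonCrypto/CocoaPods/iphoneos',
    'SWIFT_INCLUDE_PATHS[sdk=iphonesimulator*]' => '$(PODS_ROOT)/CommonCrypto/CocoaPods/iphonesimulator',
    'SWIFT_INCLUDE_PATHS[sdk=macosx*]'          => '$(PODS_ROOT)/CommonCrypto/CocoaPods/macosx'
  }

  # Rewrite the Xcode path in the module.map files at pod install time.
  s.prepare_command = <<-CMD
    ./CocoaPods/injectXcodePath.sh
  CMD
end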

You can see the entire CommonCrypto example here:

https://github.com/kmussel/commoncrypto
https://github.com/kmussel/ccommoncrypto

Google Universal Analytics and Tag Manager with Enhanced Ecommerce

Google Analytics! It used to be simple: add this snippet of code to your page and you’re finished. Now, depending on the options you want, there can be a lot more work to do. You could be using classic Google Analytics or Universal Analytics.  You could be using the ecommerce plugin, enhanced ecommerce, or neither.  You could be using the data layer or macros. You could be using any combination of those with Google Tag Manager.  And depending on which combination you use, you will have to code it differently.

I’ll show you how I set up analytics using Universal Analytics and Tag Manager with Enhanced Ecommerce and the Data Layer.

Finding the correct documentation for the analytics combination I’m using was frustrating.  Here is a list of the docs that were helpful to me.

First off, if you haven’t used Google Tag Manager, I would read:
Getting Started and
How It Works

Basically, once the JavaScript snippet is deployed to the site, it allows non-developers to manage what data they want to collect from the site without involving the developers.

For example, let’s say a website has a login button with an ID of “login-btn”.  A user can now use Tag Manager to add a tag called “Log In” with a rule that fires the tag when someone clicks an element with an ID of “login-btn”.  The rule would look like this: {{element id}} equals login-btn.  Depending on the type of tag you use, you might also need to add {{event}} equals gtm.click to the rule.

Now your site will start collecting data every time a user clicks on the login button without your developer having to make any changes to the code.

** Once you cross over to the Ecommerce world, a developer is going to be required.

The Basic Steps:

Google Tag Manager

The data that Tag Manager uses and then sends to Google Analytics is retrieved from the Data Layer or from macros.   The recommended approach is to use the Data Layer.

I recommend going over the development docs if you haven’t already. https://developers.google.com/tag-manager/devguide

In their docs they mention two ways to populate the data layer and fire the tags:

  1. Declare all needed information in the data layer above the container snippet
  2. Use HTML Event Handlers

If you have a traditional multi-page application, you will most likely use option 1.  If you have a Single Page Application (SPA), you will need to use option 2.   Personally, I’ve been using this on a SPA with Angular, so I never used option 1.

If you are using option 1, you would create a tag in Google Tag Manager with these attributes:

Tag type : Universal Analytics
Track type : Pageview
Enable Enhanced Ecommerce Features: true
Use Data Layer: true
Basic Settings – Document Path: {{url path}}
Firing Rule: {{event}} equals gtm.js

In this case you would make sure all the data was in the dataLayer before the container snippet.

dataLayer = [{
 'ecommerce': {
  'impressions': [{
   'name': productObj.name,                      // Name or ID is required.
   'id': productObj.id,
   'price': productObj.price,
   'brand': productObj.brand,
   'category': productObj.cat,
   'variant': productObj.variant,
   'list': 'Search Results',
   'position': 1
  }]
 }
}];

For option 2 you would create a tag like this:

Tag type : Universal Analytics
Track type : Event
Event Category: Ecommerce
Event Action: Product Click
Enable Enhanced Ecommerce Features: true
Use Data Layer: true
Basic Settings – Document Path: {{url path}}
Firing Rule: {{event}} equals productClick

Then in your javascript you would send the data by pushing it to the dataLayer object like this:

dataLayer.push({
  'event': 'productClick',
  'ecommerce': {
    'click': {
      'actionField': {'list': 'Search Results'},      // Optional list property.
      'products': [{
        'name': productObj.name,                      // Name or ID is required.
        'id': productObj.id,
        'price': productObj.price,
        'brand': productObj.brand,
        'category': productObj.cat,
        'variant': productObj.variant
      }]
    }
  }
});


I recommend using a generic event to help keep things manageable. I’m using the Angular JavaScript framework with angulartics (http://luisfarzati.github.io/angulartics/).  The readme here (https://github.com/luisfarzati/angulartics#for-google-tag-manager) explains what tags, rules, and macros need to be set up.  Even if you aren’t using Angular, the setup is the same for a generic event.

Also for both of those tags, make sure you check both “Enable Enhanced Ecommerce Features”  and  “Use Data Layer”.

** If you are using angulartics you’ll need to make sure it handles ecommerce. You just need to make sure one of the top-level keys is ‘ecommerce’.   I just overrode the module’s event tracking and did this:

$analyticsProvider.registerEventTrack(function(action, properties){
  var dataLayer = window.dataLayer = window.dataLayer || [];
  var data = {
    'event': 'interaction',
    'target': properties.category,
    'action': action,
    'target-properties': properties.label,
    'value': properties.value,
    'interaction-type': properties.noninteraction
  };
  if(properties.ecommerce)
    data['ecommerce'] = properties.ecommerce;

  dataLayer.push(data);
});

Make sure you look at the docs for Google Tag Manager Enhanced Ecommerce; they show what attributes to use when you push your data to the data layer.  https://developers.google.com/tag-manager/enhanced-ecommerce
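For example, a purchase event follows the same pattern as the product click above. This is just a sketch based on those docs; the transaction fields are placeholders:

dataLayer.push({
  'event': 'purchase',                    // generic event used in the firing rule
  'ecommerce': {
    'purchase': {
      'actionField': {
        'id': 'T12345',                   // Transaction ID (required)
        'revenue': '35.43',
        'tax': '4.90',
        'shipping': '5.99'
      },
      'products': [{
        'name': productObj.name,
        'id': productObj.id,
        'price': productObj.price,
        'quantity': 1
      }]
    }
  }
});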

Viewing the Data You Sent in Google Analytics

Go to Reporting and go to the Conversions section.  Under there you’ll see the Ecommerce reports.

* When you push your products to the dataLayer the id field maps to the Product SKU.

Dynamic Remarketing

In Google Tag Manager, check the “Enable Display Advertising Features” box for each tag you want to use this feature with.
In Google Analytics, go to the Admin section, select the property you want, and then click on “Dynamic Attributes”. Then in Step 2, for Product ID, select “Product SKU”. https://support.google.com/analytics/answer/6002231

Debugging Locally

Click on each tag you created to edit it. Under “More settings”, click on “Cookie Configuration” and set the Cookie Domain to “none”.
You can also set up another view in Google Analytics, and in Tag Manager you can create a macro for the tag’s Tracking Code, like here.

A couple useful articles about Google Tag Manager Macros:
http://www.simoahava.com/analytics/macro-guide-google-tag-manager/
http://www.simoahava.com/analytics/macro-magic-google-tag-manager/

I know this can be confusing, and there are a lot of questions that I didn’t answer. The biggest thing for me is knowing where I can find the answers, so hopefully this article, along with the links I posted, will help you forge ahead through the world of Google Analytics.

Inter-Service Communication using Client Certificate Authentication

I love service-oriented architecture. But like all things, it needs security; in this case, to make sure that one service has permission to talk to another. There are a few different ways to achieve this, but I really like using SSL client certificates. It’s very simple to add other services, and your web server (Apache, nginx) will handle the validation for you.

I wrote an article on getting this setup here: http://www.sitepoint.com/inter-service-communication-using-client-certificate-authentication/

iPhone Push Notifications

iPhone Push Notification Testing:

Testing the interaction after receiving a remote push notification on the iPhone can be very annoying. Apple does not make it easy to set up remote push notifications. Plus, development time is much slower if you try to build and test this interaction while waiting for remote notifications. And unfortunately, Apple doesn’t provide a way to test remote notifications on the simulator.

So instead of using a remote notification, I used a local notification to test this interaction.
In your UIApplication delegate, instead of using this delegate method:


- (void)application:(UIApplication *)application 
     didReceiveRemoteNotification:(NSDictionary *)userInfo

I used this:


- (void)application:(UIApplication *)application 
        didReceiveLocalNotification: (UILocalNotification *)notification

You can then schedule a local notification (Apple Docs). This will then simulate receiving a notification when you are in the app.

To simulate the notification when the app isn’t in the foreground I created a local notification inside the delegate method:


- (void)applicationDidEnterBackground:(UIApplication *)application

I set it up so that as soon as you closed the app, the notification would fire, showing the default alert box.

Here is the code I used to create the local notification inside the DidEnterBackground method.


NSDate *nowDate = [NSDate date];
NSTimeInterval seconds = 0;
NSDate *newDate = [nowDate dateByAddingTimeInterval:seconds];

UILocalNotification *localNotif = [[UILocalNotification alloc] init];
if (localNotif != nil)
{
  localNotif.fireDate = newDate;
  localNotif.timeZone = [NSTimeZone defaultTimeZone];

  localNotif.alertBody = @"You have new notifications";
  localNotif.alertAction = NSLocalizedString(@"View", nil);

  localNotif.soundName = UILocalNotificationDefaultSoundName;
  localNotif.applicationIconBadgeNumber = 2;

  NSDictionary *infoDict = [NSDictionary dictionaryWithObject:@"2" 
       forKey:@"NewNotifications"];
  localNotif.userInfo = infoDict;

  [[UIApplication sharedApplication] scheduleLocalNotification:localNotif];
  [localNotif release];
}


When you click the “View” button on the alert box, the application will come to the foreground and the didReceiveLocalNotification will be called. You can then do what you need to do based on what data you passed to it via the NSDictionary object in the local notification.

So the main difference in your code between the local and the remote notification is that with the local notification you access the data you sent via [notification userInfo], while with the remote notification the dictionary is passed directly to the delegate method.
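For example, a minimal sketch of the local notification handler, pulling out the "NewNotifications" value set above:

- (void)application:(UIApplication *)application
    didReceiveLocalNotification:(UILocalNotification *)notification
{
    // The data we attached when scheduling the notification.
    NSDictionary *userInfo = [notification userInfo];
    NSString *newCount = [userInfo objectForKey:@"NewNotifications"];
    NSLog(@"Received local notification with %@ new notifications", newCount);
    // Handle it the same way you would handle the userInfo dictionary
    // passed to didReceiveRemoteNotification:.
}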

CoreJS

Server-side Javascript using Google’s V8 Engine.

Recently I started working on an open source project with my good friend, Wess Cope, who is more passionate about JavaScript than anyone I’ve ever met. That passion drove the desire to be able to use JavaScript for everything. So we built CoreJS, which you can find at https://github.com/frenzylabs/CoreJS.

We used http://wiki.commonjs.org/wiki/CommonJS#Low-level_APIs as a reference for what should be implemented in CoreJS.

Aren’t there already server-side JavaScript frameworks?
The main one out there is Node.js. Overall it’s pretty good, but the main issue we had with it is that it forces everything to be asynchronous. Asynchronous is great, but we wanted the flexibility to have both; there are definitely times when you don’t want something to be asynchronous.
So what we ended up doing is making our functions asynchronous when a callback is passed and synchronous otherwise.

For example our HTTP post request:


// Synchronous
var data = Http.post("/path/to/file", {arg: "arg1", arg2: "arg2"});

// Asynchronous
Http.post("/path/to/file", {arg: "arg1", arg2: "arg2"}, function(data) {
  print(data);
});

CoreJS also utilizes an event-based model (using libevent) as well as threading.

It’s definitely new and not complete yet, but check it out and give us some feedback. We tried to add some decent documentation on how it all works, because we know how annoying it was to try to build this given the lack of documentation out there. https://github.com/frenzylabs/CoreJS/wiki