Preparing for the Gig Economy
by Steve Blais

The “gig economy”. That is supposedly where we are headed. There are predictions based on a study by Intuit that by 2020 over 40% of the workforce will be working “gigs” rather than full time permanent employment. This is nothing new. Thirty years ago, Tom Peters predicted the “Corporation of One” in which everyone working for a company would be consultants rather than employees.

A “gig” is a term from music and the performing arts designating a paid appearance of limited duration. Those of us who are consultants and work primarily on short-term contracts (less than a year in length) are familiar with the concept, and use “engagement” as a more sophisticated term than “gig”. “Gigs” are becoming more common in the IT industry, now rivaling the performing arts in numbers. A series of short-term work engagements may therefore be in your future if you work in any area of the IT industry, especially as a project manager, software developer, business analyst, tester, database administrator, or data analyst. Any project-oriented position, meaning any position whose holder works on projects rather than processes, is subject to being “gigged”.

Why a ‘gig’ economy?

Companies have several reasons for “gigging”:

  • Since each project is unique and likely requires different skills, the organization can bring on the expensive expertise necessary for the one project for only the duration of the project and not commit to skills that may not be needed thereafter
  • In the US, the organization does not have to pay benefits to contract or temporary workers, such as insurance, retirement, or leave time
  • In the US, the organization does not have to pay social security taxes on the contractors
  • The budget for a contractor can be fixed
  • The cost of acquiring and terminating contract workers is much less than with a full time employee
  • The HR cost for contractors is significantly less than for employees
  • The organization can “try and buy”. If a contractor doesn’t work out, the contractor can be removed without notice and replaced forthwith. If the contractor does work out and there is a good relationship, the contractor can transition into employment
  • The burden of responsibility for quality is on the contractor rather than the organization’s mid-level management. In other words, the mid-level manager does not have to apply motivational tricks and such since the contractor’s continued earnings depend on the job the contractor is doing. No other motivation is necessary.
  • There is a proliferation of websites and organizations devoted to providing contract workers, especially in the IT business.
  • Technology today allows work to be done from anywhere rather than an office which means an organization does not have to pay for office space, office supplies, in-house cafeterias, and other artifacts necessary to the care and feeding of full time employees on site.

And there are more reasons for organizations to increase the number of contracted workers while reducing the number of full time employees. Power structures, except in government, are no longer as dependent on the number of full time staff working for a manager, so senior managers are more willing to cut costs by moving to contractors.

For many companies, the transition started with the recession of the last decade.

And for many more companies, as baby boomer employees retire, the organizations are not replacing them with full time employees but bringing on contractors instead.

Be Prepared

If the predictions come true, and signs are certainly supporting that premise, full time employment opportunities may be drying up as the positions are replaced by contract workers. When you are in the job market, you may find that you are competing with contract workers as well as others looking for full time employment. You may find yourself forced into the contract market whether you want to be or not.

For the younger people reading this, it probably is not a matter of if, but a matter of when.

Some tips on making yourself viable in a gig economy:

  • Hone your skills, especially the skills that are in demand and skills that are somewhat esoteric (the former keeps job opportunities available but probably at a market rate; the latter reduces your competition and allows a higher rate of return although the number of opportunities may be less).
  • Network. Connect with those organizations that act as agents for contract workers and consultants. Keep contacts at every organization you have worked for, and with your peers who may go to other organizations. Any contact may be a lead into another contract
  • Keep abreast of technology. Don’t find yourself with skills that are no longer needed. Make time for training and learning new technologies.
  • Rework your resume to list your accomplishments and your contributions to organizations rather than a laundry list of the organizations you have worked for and your positions
  • Decide whether you want to work on an hourly or daily basis, pick a range of rates (making sure you have a bottom line below which you will not go—remember that the organization will typically try to get you at the lowest rate you have charged even if that was to a “favored customer”), determine how you will pay for incidentals like travel and living if you are on site away from home, and so forth
  • Set up your home to be able to work comfortably and productively should that be a requirement or option
  • Enhance and guard your reputation carefully; you may find yourself living and dying based solely on your reputation

The biggest issue for the contractor is that of quality. You are responsible for the quality of your work. There is latitude when the organization hires an employee. Late arrivals may be tolerated for a while and some disciplinary discussions may be in order. Many organizations view an employee’s lack of motivation or performance to be at least in part the responsibility of the manager and so the employee may be given third, fourth, and fifth chances. In many organizations, there is a long process of disengagement when an employee does not work out with a number of poor reviews and warnings before termination action can be taken. A full time permanent employee is expected to be, well, permanent.

Not so with a contractor. Unless you have a contract that specifies differently, you can be removed on the spot and compensated just for the work done. And if you are being paid “by the piece” (for example, writing a fully executable piece of code, or completing 50 test cases for a system) you may not get paid at all if the “piece” is not completed. The onus is on the contractor.

In other words, in the “gig” economy, you are on your own and cannot blame your fellow office workers, your manager, office politics, or anyone else. Your success is your own and both failures and successes will follow you from gig to gig.

While this appears daunting at first, for the individual the ‘gig economy’ will be freeing and exhilarating, rewarding and fulfilling. You can work on the jobs that challenge and excite you and not have to do “scut work” or “busy work” between projects. You don’t have to play politics. You can focus on the work and your performance. You can work when you want to work and take off when you want. (I have a friend who works hard six months of the year and then takes a “vacation” to go surfing and beach combing for the other six months in different parts of the world.)

In the end, the gig economy means that you, and only you, will be responsible for your success, and there is probably no better way to be.

Do Agile Teams Go On Forever?
by Steve Blais

One of the precepts of agile teams is that they do not disband and reassemble at the end of the project or at any time beforehand. In fact, the concept is that the “project” never ends, so the team just stays together working on a never-ending backlog of items to be done.

This is ideal. The team grows together, works more and more collaboratively as it works together over time. An agile team that has worked together for a long period of time and has reached that pinnacle of Performing (as defined by Bruce Tuckman), would not need a Scrum Master, daily stand-ups, or even retrospectives. And the team would produce prodigiously.

However, in today’s world, individuals, not teams, are still the basic unit of the workforce, and every member of every team is available for transfer to another team any time management sees the need to balance workloads or even out costs across multiple projects.

Of course, agile advocates would not consider this “Agile”, but it is real life. And despite the advantages of a standing team of Agile developers, projects do come to an end.

In one example that I witnessed, an agile team of eleven came to the end of their part of the contract. Since the termination was scheduled just before the end of the year, we prevailed on management to extend the team into the new year. The team had been together for just over three years. They started working in the Scrum framework and as time went on they continued to improve and work together as a team. Eventually, they found that the Scrum Master became extraneous and was not really adding any value to the team. They were able to conduct their own ceremonies and resolve their own issues. They also abandoned the daily stand-up because it was redundant. Since they were co-located, they were constantly updating each other and resolving impediments. They practiced “swarming”, similar to a Kanban flow, in which the entire team worked together on any problems or impediments that came up. And, in the end, they did away with the formal retrospectives because they held retrospectives constantly throughout the sprint by noting activities that they did that worked and didn’t work and making adjustments as they went rather than holding off until the end of the sprint.

The team was in all regards a Tuckman “performing” team. But the organization for which they worked was out of budget for the project and also basically out of work since the team had completed just about everything. The remaining items on the backlog were of so low priority, the organization decided not to pursue them. To the organization’s credit, they did try to identify another project or work for the team, but being the middle of the budget year, it did not succeed. There were other agile teams working on other projects and the team members were given a number of alternatives. Several joined other agile teams. A couple had had their fill of “Agile” and went back to some non-agile mainframe work, one left software development and moved into a quasi-management position, and two left the organization completely.

I worked with the team for their last several weeks helping in the transition and capturing lessons learned to guide the organization for any future repetitions of the termination of a team (there have not been any since this one). The interesting aspect of the phase out is that not one of the team members felt any bitterness or regret and all were, to a person, grateful for the experience of working in Agile and working with each other.

The Plain Old, Put On, Product Owner
by Steve Blais

Although much of the agile literature and many of the agile implementations center around the development team, perhaps not enough attention has been given to the product owner.

How can a product owner, even an otherwise good product owner, impair the success of the agile team?

  • Not knowing the full extent of what needs to be done to complete a feature or system and therefore not including necessary items on the backlog
  • Knowing the product, but not being able to communicate that knowledge well enough to the developers
  • Not being able to devote the necessary time (for many projects this means nearly full time) to the product being developed and therefore not being available to answer the team’s questions
  • Lacking a clear vision about the whole product and/or many parts of it (for example, the user interface)
  • Failure to include the entire business community which has a stake in the outcome of the product in the decisions for the development of the product
  • Letting the team dictate the outcome of the product and having it rejected by the user community

While such failures cannot be attributed to the development team, they may still be considered failures of Agile.

There is an irony here. One of the reasons that Schwaber and Sutherland eliminated the project manager from Scrum and replaced that role with the Scrum Master and Product Owner was that they felt the responsibilities of the project manager were too great for a single role. The split into the two Scrum roles was a good idea, assigning the “soft”, team-coaching functions to the Scrum Master and the “hard”, authority-driven parts of the role to the product owner. The problem is that in reality the product owner’s responsibilities have grown beyond those of the typical PMBOK® Guide project manager. (The Scrum Master’s responsibilities have as well, but that’s a different discussion.)

As a former programmer (developer in today’s terms), who has been subject to multiple business managers and others telling me what they wanted and when at the same time, I applaud the concept of “one neck to wring” which limits the business authority over the team of developers to only one voice. However, in practice this means that the product owner must be an interface between all the business people requiring input to a particular feature or function or system, all those business constituents who may be impacted by changes being made, and the development team as well. In the past, this might be a job for the business analyst—full time. But since the business analyst as a role has been eliminated from Scrum and therefore most of Agile, it falls on the product owner to do it. And the product owner, according to Scrum, is supposed to be part time, working in the business area to be sure that the product owner understands the business rationale behind the items on the product backlog. (This particular aspect differentiates Scrum from Extreme Programming (XP) which demands that the business representative (called the On Site Customer) be devoted not only 100% to the project, but physically co-located with the team for the duration.)

What is the typical product owner expected to do?

  • Negotiate and mediate among the affected business units
  • Deal with individual business constituents and their idiosyncratic requirements
  • Build and maintain or “groom” the product backlog and be able to explain in detail every single item
  • Be responsible for the prioritization and ultimate delivery of the product which includes release planning
  • Maintain positive relationships with the members of the development team
  • Review and comment on the product in progress every two weeks
  • Be available to answer questions from any member of the development team at any time
  • Attend the various ceremonies of Scrum at the team’s request
  • Provide motivation and vision to the development team to spur them on
  • Provide management with progress and other status about the product

All while still doing their primary job for which they were hired. Oh, yes, and take the blame from the development team for poor backlog items and just about anything else that gets in the way. (I have seen the product owner listed as an “impediment” so many times that I think that removing the product owner will remove all the “impediments” to success for any given team).

Ivar Jacobson calls the product owner “the single indispensable person on the project, without whom nothing can be done”, but also calls the product owner an “Achilles’ heel”. Ken Schwaber reminds us that “Scrum does not define … what the Product Owner should do.” Schwaber goes on to say, “Delegation of product owner responsibilities continues the deep divide between development and its customers.”

The product owner was designed from the developer’s point of view to help the development team understand what the business wants so that the team can produce a valuable product. As such, the product owner role is skewed toward a developer perspective while being defined from a business perspective. This, of course, can create as big an issue as the one it was created to solve.

How do you define the product owner on your teams? Does the product owner have all the responsibilities listed above? If not, who does have them? We’ll talk more of the role of product owner in upcoming articles.

PMBOK Guide® is a registered mark of the Project Management Institute, Inc.



Why Choose React?
by ROI’s Web Development Team

Suddenly, everybody seems to be talking about React. In Stack Overflow’s latest annual developer survey, the largest such survey on earth, React didn’t just win the “trending tech” section; it wasn’t even close. React was up 311%, double the year-on-year growth of its nearest competitor.


So, what is React, and why are people so excited about it?

React is a JavaScript Single Page Application framework, like Angular and Ember and Meteor and Durandal… and so many others that I don’t have the space for them all here. These days, it seems you can hardly turn around without someone releasing another SPA framework. So why, with so many frameworks out there, would you want to use React?

Facebook created React. Why do they use it? According to the React website, they created it “to solve one problem: building large applications with data that changes over time.”

That’s interesting: it tells us that React is designed to handle large sites. And the fact that it’s being used by some of the biggest names out there, like Instagram and Uber and Expedia, tells us it can play in the big leagues. But it still doesn’t tell us why we should choose it over all the other frameworks. So what is it that has the community so excited about React?

For me, the heart of React—the reason why I find it a joy to write, and one of the most productive frameworks I’ve ever used—is a very simple idea: unidirectional data flow. React components don’t update the DOM. All they do is render their output based on the current state of the data. That’s it. As a React programmer, you don’t update the DOM at all. Ever. There’s no complex logic responding to data changes and modifying the DOM. You just render the output based on the current data.

And that’s wonderful. It’s like the early days of the web, when you never had to worry about anything complex happening on the client, because every change meant a full request-response cycle and a new page being sent back to the client. Life was so much simpler then. Well, now that simplicity is back, but without the server round-trip.

Of course, the complexity is all still there underneath, otherwise you couldn’t do all of the cool stuff that makes your SPA modern and functional. But the beauty of React is that the complexity is managed for you by the engine itself. Your component re-renders a virtual DOM any time anything changes. React uses a diffing algorithm and executes the minimum necessary DOM updates to bring the two back in sync. And those updates are fast, very fast.


The fundamental simplicity of React leaves you with pure JavaScript components that fulfill the Single Responsibility principle and are clean, testable, and devoid of dependencies on the current state of the DOM (and the fragility that comes with that). All they do is render their output based on the current state of the data, and raise events based on user actions. And they’re not even limited to rendering HTML. React Native uses the same principles to build native Android and iOS apps. Nor are they limited to JavaScript-aware clients. Isomorphic React allows your components to render on the server-side as readily as on the client.

React is flexible, fast, and a joy to develop. So how do you go about writing a React component? Well, that’s a story for another day.

Introspection in Python 2.7
by Arthur Messenger

I was reading about introspection in Python 2.7 and came across code similar to what is shown in Figure 1: class Bag. It turns out that it is a very interesting passage of code for me as it uses many of what I would consider intermediate Python techniques. In Part I of this blog post, I will cover some of the interesting Python constructs that crossed my mind when looking at the code. Part II is a short explanation of introspection in Python 2.7.

1 class Bag:
2     def __init__(self, **d):
3         for k,v in d.iteritems():
4             exec("self.%s = %s" % (k,v))

Figure 1: class Bag (the line numbers are only for reference)

This code as written does not execute in Python 3. ROI is in the process of converting our Python courses to Python 3 as Ubuntu 16.04 has Python 3 as the default version. Hopefully, there will be a few blog posts on our experiences.

Part I: Looking at and Using the Passage
Line 2 declares a **d parameter. This is a keyword-value gathering parameter: extra keyword arguments are collected into a dictionary. The code below shows how it works.

#! /usr/bin/env python

def afun(**d):
    print "type of d: ", type(d)
    for key in d.keys():
        print key, '=', d[key]

afun(l = 5, m = 'this', n = [1,2,3])

Figure 2:

If you execute it, you see:

$ ./
type of d:  <type 'dict'>
m = this
l = 5
n = [1, 2, 3]

As you can see, **d creates a dictionary.

Here is the class Bag again.

1 class Bag:
2     def __init__(self, **d):
3         for k,v in d.iteritems():
4             exec("self.%s = %s" % (k,v))

Figure 3: class Bag

It is the exec() function of line 4 that is of interest. Why not eval()?

The syntax for eval is:

eval(string[, global[, local]])

The string is an expression to be parsed; the value of the parsed expression is returned. An example is below.

In [1]: a = eval('5 + 3')
In [2]: print a
8

global is a dictionary which is used as the global namespace. If __builtins__ is not a key in the dictionary, a reference to the built-in namespace is inserted under that key before the string (expression) is parsed. The key/value pairs defined in global take precedence over the script’s own globals. If global is omitted, the global namespace of the script is used.

local is another dictionary with local definitions.

If the local dictionary is omitted, it defaults to the global dictionary. If both dictionaries are omitted, the environment where the eval is called is used.

Below is an example of eval using the global dictionary.

In [1]: x = 100
In [2]: a = eval('x + 10', { 'x':20 })
In [3]: print a
30

Notice that the key/value pair in the supplied global dictionary takes precedence over the value assigned before the eval in the script; the result uses x = 20, not the x = 100 from the script’s global namespace.

Here is the exec statement again:

exec("self.%s = %s" % (k,v))

Here I am using the tuple (parenthesized) form of exec because it makes for a very easy translation to Python 3, where exec is a function.

The general format for the tuple format is:

exec(expression[, global[, local]])

The expression can be a string, a file containing a script, or a code object. global and local are the same as in eval. As a statement, exec() does not return a value. The changes are made as side effects of the execution of the code in the expression. Below is a simple example.

In [1]: exec("%s = %d" % ("y", 25))
In [2]: print y
25

In the class Bag, the exec statement can add new attributes at the time the object is created. This is the original reason for looking into introspection.

I wanted to bring the Bag class definition into IPython. This blog post explores three different methods of making code in a file (a module) available to a script.

Approach 1:
I could use:

import bag

and then accessing the class Bag as:

a = bag.Bag(l = "'this'", m = 5, n = [1, 2, 3])

And executing:

In [1]: import bag
In [2]: a = bag.Bag( l = "'this'", m = 5, n = [1, 2, 3])
In [3]: a.l
Out[3]: 'this'
In [4]: a.m
Out[4]: 5
In [5]: a.n
Out[5]: [1, 2, 3]

On In [2], notice the "'this'". The doubled quoting is required by the exec statement used in line 4 of Figure 1 above.
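To see why the doubled quotes are needed, consider that exec pastes the value into source text: an integer interpolates into valid source, but a bare string interpolates into an undefined name. A small sketch, using the Python 3 exec function form (the class mirrors Figure 1, with d.items() in place of d.iteritems()):

```python
# Mirrors Figure 1's Bag, written to run under Python 3.
class Bag:
    def __init__(self, **d):
        for k, v in d.items():
            # v is pasted into source text: m=5 becomes "self.m = 5",
            # but l="this" would become "self.l = this" -- an undefined name.
            exec("self.%s = %s" % (k, v))

a = Bag(l="'this'", m=5)      # the embedded quotes travel into the source
print(a.l, a.m)               # this 5

try:
    Bag(l="this")             # exec sees "self.l = this"
except NameError as e:
    print("NameError:", e)
```

So "'this'" interpolates to the valid source self.l = 'this', while a plain "this" produces a NameError.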

Approach 2:
I could use:

from bag import Bag

and then accessing the class Bag as:

a = Bag(l = "'this'", m = 5, n = [1, 2, 3])

and executing:

In [1]: from bag import Bag
In [2]: a = Bag(l = "'this'", m = 5, n = [1, 2, 3])
In [3]: a.l
Out[3]: 'this'
In [4]: a.m
Out[4]: 5
In [5]: a.n
Out[5]: [1, 2, 3]

These two methods are in common use and are working here exactly as expected.

Approach 3:
There is a third way using the built-in execfile().

execfile() is closer to the C/C++ #include concept. Each time the execfile() function is executed, the file is accessed, parsed, and executed, because the Python interpreter has no control over the contents of the file and must assume it has changed.

Using the builtin function execfile():


and using class Bag:

a = Bag(l = "'this'", m = 5, n = [1, 2, 3])

and executing:

In [1]: execfile("./")
In [2]: a = Bag(l = "'this'", m = 5, n = [1, 2, 3])
In [3]: a.l
Out[3]: 'this'
In [4]: a.m
Out[4]: 5
In [5]: a.n
Out[5]: [1, 2, 3]

The function execfile() parses the file each time it is called. This is different from the import statement, which only executes the module (file) on the first import, creating a .pyc file which is used for subsequent imports.

Figure 4 is a very short program using execfile().

#! /usr/bin/env python

execfile("./")

a = Bag(l = "'this'", m = 5, n = [1, 2, 3])

print a.l
print a.m
print a.n

Figure 4:

Figure 5 shows the directory before execution, execution, and the directory after execution for

$ ls
$ ./
[1, 2, 3]
$ ls

Figure 5: Directory listing and execution of

Notice that no .pyc file is generated for

Figures 6 and 7 show the same sequence using import instead. Notice the .pyc generated by the import statement.

#! /usr/bin/env python

from bag import Bag

a = Bag(l = "'this'", m = 5, n = [1, 2, 3])

print a.l
print a.m
print a.n

Figure 6:

$ ls
$ ./
[1, 2, 3]
$ ls       bag.pyc

Figure 7: Directory listing and execution of

Using execfile() inside a module also prevents import from creating a .pyc when that module is imported. This is not usually a problem unless you have initialization code that is designed to be run only once.

Lastly, the first parameter to execfile() is the path to the file to be parsed. A second and a third parameter are also available: the second is a dictionary of global variables to be used for parsing and executing the file, and the third, which only makes sense when execfile() is used inside a class or function, is a dictionary of local variables. They work the same as in eval().
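Since execfile() disappears in Python 3, and ROI is in the middle of a Python 3 conversion, it is worth noting that the same behavior can be approximated with compile() and exec(). A sketch (the helper and the temp-file demonstration are mine, not from the original code):

```python
import os
import tempfile

def execfile(path, globals_dict=None, locals_dict=None):
    """Approximate Python 2's execfile() under Python 3 (a sketch)."""
    with open(path) as f:
        code = compile(, path, "exec")  # keep the filename for tracebacks
    exec(code, globals_dict, locals_dict)

# Demonstration: write a tiny module to a temp file and pull its definitions in.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("GREETING = 'hello'\n")
    path =

namespace = {}
execfile(path, namespace)     # the file's definitions land in the dictionary
print(namespace["GREETING"])  # hello
os.remove(path)
</imports>
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("X = 41 + 1\n")
    p =
ns = {}
execfile(p, ns)
assert ns["X"] == 42
os.remove(p)
</test>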

Part II: Introspection
As it turned out, the tools for introspection are very easy to understand. I really haven’t come across a good use case for using a class like Bag and then introspection to see what the object contains.

In [1]: execfile("")

The file executed has the definition of the class Bag.

1 class Bag:
2     def __init__(self, **d):
3         for k,v in d.iteritems():
4             exec("self.%s = %s" % (k,v))

In [2]: a = Bag(l = "'this'", m = 5, n = [1, 2, 3])

This creates the object a that is of type Bag.

In [3]: getattr(a, 'l')
Out[3]: 'this'

The syntax is:

getattr(object, attribute_name_as_string[, default_value])

In [4]: getattr(a, 'Q')
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-4-47aea6ca945c> in <module>()
----> 1 getattr(a, 'Q')

AttributeError: Bag instance has no attribute 'Q'

The error can be trapped in a try/except block, but using the default value available in getattr is faster.

In [5]: getattr(a, 'Q', "Not found!")
Out[5]: 'Not found!'

Instead of a default string, a boolean test could be used.

In [6]: hasattr(a, 'Q')
Out[6]: False

Returns True if object has attribute, and False otherwise.

In [7]: setattr(a, 'Q', "Added!")

This adds a new attribute. The syntax is:

setattr(object, \
    attribute_name_as_string, \
    value)

In [8]: getattr(a,'Q')
Out[8]: 'Added!'

Just showing that it was added.

In [9]: setattr(a, 'l', "THIS")

This is done to show that you can modify an attribute of the object.

In [10]: getattr(a, 'l')
Out[10]: 'THIS'

Confirmation that the attribute was changed.
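The same three built-ins suggest a safer way to write Bag in the first place: setattr stores the value as an object rather than pasting it into source text, so the doubled quotes disappear and the class runs unchanged in Python 2 and Python 3. A sketch:

```python
class Bag:
    """Like Figure 1's Bag, but built with setattr instead of exec."""
    def __init__(self, **d):
        for k, v in d.items():
            setattr(self, k, v)   # the value is stored as-is; no quoting tricks

a = Bag(l="this", m=5, n=[1, 2, 3])   # plain "this" -- no embedded quotes
print(a.l, a.m, a.n)
</imports>
b = Bag(l="this", m=5, n=[1, 2, 3])
assert b.l == "this"
assert b.m == 5
assert b.n == [1, 2, 3]
assert not hasattr(b, "q")
</test>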

C Function
by Arthur Messenger

I asked a friend to create a function and put it in a module for me. I must not have stated that I wanted the function to be written in Python because what I got back was in C. Not exactly what I needed. After some thought and discussion, I said “Thank you.”

I know of SWIG, SIP, Boost.Python, Pyrex, and ctypes. These are all methods of adding C code to a Python script. SWIG and SIP will work for medium to large libraries, though they are not part of the “batteries included”; they are good solutions if you have larger libraries to convert. Pyrex is interesting but very new to me, and again not part of the “batteries included”. Boost.Python requires you to be an expert in C++, and it has a reputation for not maintaining backward compatibility; I cannot afford to be burned on this a second time. ctypes is part of the “batteries included” and works well with a small number of C functions.

This blog was written as a reminder to me of how to set this up. I have written, as always, the simplest program I can for starting out in this area.

Here is the C function:

C Function 1

It takes in one parameter and returns one value.

Passing in more than one parameter is just a matter of listing the additional parameters, in the same order as the C declaration. None, integers, longs, byte strings, and Unicode strings are the only native Python types that can be passed directly to a C function. None is passed in as a null pointer; strings are passed as a pointer to a data block. Other data types and structures can be used, but they require building a unique ctypes data object for each object type. You can also pass back pointers to data that are converted to the correct type in Python.
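These conversion rules can be tried out against a library every system already ships, before building your own. The sketch below assumes a Unix-style C library that ctypes can locate; the symbol chosen, strlen, is my example, not the article’s hello function:

```python
import ctypes
import ctypes.util

# Locate and load the platform's C library (falling back to the running
# process's own symbols if find_library comes up empty, which works on Linux).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# argtypes/restype tell ctypes how to convert values at the C boundary:
# the byte string goes in as char *, the result comes back as size_t.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))   # 5
```

Declaring argtypes and restype explicitly is the design choice worth copying: without them, ctypes guesses, and a wrong guess silently corrupts values at the boundary.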

I created a second function, goodbye (in goodbye.c), to show a little more about the library, which I built using the following procedure.

Step 1: Creating Position Independent Code (PIC) object

gcc -c -fPIC -o hello.o hello.c

gcc -c -fPIC -o goodbye.o goodbye.c

The -c stops after compilation, producing an object file instead of attempting to link. The -fPIC creates the position independent code. You may also see this flag as -fpic: -fPIC is truly independent, while -fpic on some environments creates slightly smaller code. The -o <name>.o names the output object module, which must end in .o. The code has now been compiled into binary objects.

The <name>.c is the name of the source code file.

I did not have to build or specify any unique data object because my functions are passing and returning only data types that the ctype module understands.

If you are planning on a lot of changes to your library, you need to set up a repository for these object modules so you do not have to recreate them each time you want to update the library. There is no method of adding or subtracting data from a shared object library. You just create a new one.

Step 2: Creating the Shared Object Libraries

gcc -shared -Wl,-soname, -o hello.o goodbye.o

The -shared creates the shared library. The -Wl,-soname, passes the shared object name to the linker. The lib prefix marks a library; hg is this library’s name and is replaced by the name of your library. The .so is for shared object. The .1.0 represents that this is the first release; the command syntax requires a version to be given. The -o names the library file.

You can see the objects in the library with the command:

nm -D --defined-only

C Function 2

What is of interest to us are the entries for hello and goodbye.

Step 3: Creating Python Module Wrapping

C Function 3

The magic is in line 3. This creates the object templib, and each of the functions in the C library becomes an attribute of the object. By using the absolute path in the ctypes.CDLL call, I do not have to have an exported LD_LIBRARY_PATH variable to find the shared object library. Line 5 tells ctypes to convert the returned pointer to a character string into a Python string. Please note that the use of CDLL does NOT mean that this is making a Windows-style DLL; this library is usable in Python on any OS that Python supports. Line 6 was added just to make this look more like a standard module, so that calling from a Python program is just libhg.hello("john"). Lines 8 and 9 repeat the actions of lines 5 and 6 for the function goodbye.

Step 4: Testing

Execute the following:

C Function 4

Porting from Standard GAE to Managed VM: Part I
by Arthur Messenger

It started out so simple. I found this little module, RandomWords, for generating random words or word lists, and I wanted to show how to add this package to the Standard GAE environment (GAE). So I modified the GAE HelloWorld app to say "Hello <random_word>". This did not work: GAE only allows you to import modules that are written in pure Python, and RandomWords compiles a C shared object as part of its install. Now curious, I found a module, names, written in pure Python, that generates random first names, last names, or full names based on 1990 US Census data. This blog post covers what I did to make this work.

The question came up: How do you use RandomWords? Answer: Move it to Google Managed VMs (MVM). At least for Python, there are two ways to port my HelloCensus app to Managed VMs. The first is to use a custom install with a base compatible with the Standard GAE environment; that is the subject of Part II: Porting a Python App to a Compatible MVM Environment. The second, porting my Standard GAE app to the standard MVM environment, is the subject of Part III.

Adding a Python Module in the Standard GAE Development Environment

Step 1:  Creating a Google Project

This is just standard stuff; I created a separate project so that I could measure the cost of this exercise. The project name is roi-add-gae.

Step 2: Creating the Virtual Environment

Create a virtual environment.

  • Execute: virtualenv HelloNames
  • Execute: cd HelloNames
  • Execute: source bin/activate
Step 3:  Creating the main.py and index.html Files

Figure 1: main.py

import os

import jinja2
import webapp2

import names

JINJA_ENVIRONMENT = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
    extensions=['jinja2.ext.autoescape'],
    autoescape=True)

class MainPage(webapp2.RequestHandler):
    def get(self):
        random_name = names.get_full_name()

        template_values = {
            'random_name': random_name,
        }

        template = JINJA_ENVIRONMENT.get_template('index.html')
        self.response.write(template.render(template_values))

app = webapp2.WSGIApplication([
    ('/', MainPage),
], debug=True)


Figure 2: index.html

<!DOCTYPE html>
{% autoescape true %}
<html>
  <head>
  </head>
  <body>
    <p> Hello {{ random_name }} </p>
  </body>
</html>
{% endautoescape %}

Nothing very exciting as far as coding goes.

Three of the imports are of interest. webapp2 is a default module in GAE; if you can live with the default version, which the documentation marks as deprecated, there is nothing to do, but I want to use a newer version. jinja2 has to be specified in the libraries section of app.yaml to be included in the environment. The GAE documentation lists all of the provided modules.

Figure 3: app.yaml

  1 runtime: python27
  2 api_version: 1
  3 threadsafe: true
  5 handlers:
  6 - url: /.*
  7   script: main.app
  9 libraries:
 10 - name: webapp2
 11   version: "2.5.1"
 12 - name: jinja2
 13   version: "2.6"


Lines 10 through 13 tell GAE to include these libraries. Lines 10 and 11 include webapp2 version 2.5.1, which is later than the default version; that is the reason it is here. Lines 12 and 13 add jinja2 at version 2.6. If the version line is not included, the latest version of the module at the time of deployment is added. The version of the module can only be changed by redeploying the app.

Making the module names available is a three-step process.

  1. In the root directory of the app (the directory with app.yaml), create a new file called appengine_config.py. Figure 4 shows the two lines which must be in the file.

Figure 4: appengine_config.py

from google.appengine.ext import vendor

vendor.add('lib')

lib is the name of a directory in the root of the app which will contain the modules to be imported by the app.

  2. Change to the root directory of the app and make the directory lib with the command mkdir lib.
  3. Still in the root directory, use pip to install the module in lib.

$ pip install -t lib names
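The two lines in appengine_config.py do the heavy lifting. As a rough sketch (an assumption about the effect, not the SDK's actual implementation), vendor.add('lib') behaves much like prepending the lib directory to sys.path so that the pip-installed packages become importable:

```python
import os
import sys

def add_vendor_dir(path):
    """Make packages installed under `path` importable,
    similar in effect to vendor.add(path)."""
    abs_path = os.path.abspath(path)
    if abs_path not in sys.path:
        sys.path.insert(0, abs_path)

# Mirrors the vendor.add('lib') line in appengine_config.py.
add_vendor_dir('lib')
```

After this runs, `import names` resolves against lib/ first, which is why the pip install into lib in the step above is all the deployment needs.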

Step 4:  Testing in the Development Environment
  1. Execute: dev_appserver.py .
  2. Open a browser window to localhost:8080
  3. Kill the dev server with a <Ctrl-C>

Step 5: Uploading to Production and Testing
  1. Execute: gcloud config set project roi-add-gae
  2. Execute: gcloud preview app deploy
  3. Open a browser to https://roi-add-gae.appspot.com


Step 6:  Cleanup

Execute: deactivate

This stops the virtual environment. What to do about the code and control files? I will zip up the directory and archive it in the Python folder on the Google Drive.

HUB.DOCKER.COM and Deleting a Repository
by Arthur Messenger


If you have used the registry at hub.docker.com, you already know that the option to delete an individual tagged entry is not available on the interface.

From what I can tell, there isn’t a way to accomplish this, even in the REST interface. The best I have been able to do is to use the REST DELETE command to delete the repository. This means downloading any images I want to save, deleting the repository, recreating the repository, and uploading these saved images back to the repository.

The rest of this blog describes what happened when I used the REST DELETE command to delete the repository.

Steps to Delete a Repository

  • Go to https://hub.docker.com, sign in, and go to the Repositories screen. I did this to verify the repository I wanted to delete.


  • I wanted to delete the repository arthurm10/hello.js.
  • Open a terminal window.
  • The command is:
curl --raw -L -X DELETE \
 --user <repository>:<password> \
 -H "Accept: application/json" \
 -H "Content-Type: application/json" \
 --post301 \
 https://hub.docker.com/v2/repositories/<repository>/<name>/
  1. The -L says follow any redirects (see 5.)
  2. <repository> is the string in front of the / and is also called the namespace.
  3. <name> is the string after the /.
  4. <password> is the password used for access to the repository.
  5. The --post301 option prevents curl from switching to a GET after a 301 redirect.
  • The previous example was a template. Here is my invocation with my userid:
curl --raw -L -X DELETE \
> --user arthurm10:<password> \
> -H "Accept: application/json" \
> -H "Content-Type: application/json" \
> --post301 \
> https://hub.docker.com/v2/repositories/arthurm10/hello.js/
  • The return was:
 Arthurs-MacBook-Pro:~ arthur$

Not very helpful.

  • The website didn’t show anything useful until after a refresh and then showed this:


  • I waited 10 minutes for the action to complete, then tried refreshing the screen. Same thing. Then, for reasons I don't understand, I issued the same command again using the up arrow. I got exactly the same response on the command line.
  • Refreshed the browser window and the repository is gone.


I really don’t know if it’s gone. (I am not paying for the hub account.) It’s just no longer cluttering up my window.
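For scripting, the curl template above can be mirrored with Python's standard library. This is a sketch: the endpoint path and basic-auth scheme are copied from the curl command in this post and may not match Docker Hub's current API.

```python
import base64
import urllib.request

def build_delete_request(namespace, name, password):
    """Build the DELETE request for a Docker Hub repository,
    mirroring the curl template in this post."""
    url = "https://hub.docker.com/v2/repositories/%s/%s/" % (namespace, name)
    req = urllib.request.Request(url, method="DELETE")
    # --user <repository>:<password> becomes a basic-auth header.
    creds = base64.b64encode(("%s:%s" % (namespace, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    req.add_header("Accept", "application/json")
    req.add_header("Content-Type", "application/json")
    return req
```

Sending it is then just `urllib.request.urlopen(build_delete_request("arthurm10", "hello.js", password))`, which, like curl, returns an empty body on success.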

Interesting Reads
by Arthur Messenger



Google Cloud Platform Load Balancer

Google shares software network load balancer design powering GCP networking

A very quick introduction to Google's load balancer, showing their sophistication in balancing loads. Read it alongside a quick reflection on the hardware load balancers of the early days of web servers, noting how every device used to accomplish these and similar tasks is being virtualized.

History of Kubernetes

Borg, Omega, and Kubernetes

At twenty-four PDF pages, it is a good read for the ride to work. This is an extremely well-written treatise on Google's experience with three container systems. Especially revealing is their candidness in the section at the end soliciting ideas for remaining unsolved issues.

Micro Python

These two articles will give you some insight into Micro Python. Engineers who work at the embedded level are familiar with the constraints brought on by limited CPU cycles and memory. They routinely work in assembler and C, so it's interesting to see them considering higher-level languages.

Using Micro Python for real-time software development

Getting Started with Micro Python

But before you go too deep into Micro Python, take a quick look at their approach to fixing an old standby, C language.

Fixing C



Comparing Google Cloud Platform with Amazon Web Services
by Doug Rehnstrom

It’s common when talking about cloud computing services to divide them into Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and Data as a Service (DaaS). So, let’s do that and compare Amazon’s AWS and Google’s GCP offerings in each category.