The End of the Career as a Stage in Life?

I’ve had several conversations and read several pieces across the web on changing careers, preparing for retirement, and, generally, what we’re supposed to be doing with our working lives.  I’m an intern and full-time PhD student, so I’m still in the highly fluid state of an early career.  But I’ve learned that, usually around this time, people begin to settle into what becomes their “career.”

The career is an odd thing really.  It’s the product of the need for job security for individual employees and a part of the corporate bargain (typically with unions) to take care of the worker in exchange for dedicated service.  It’s protection from the at-will work typical of piecework, temp work, support work, and other forms of what is now called “precarious labor,” and it’s a luxury of sorts for the middle and upper-middle classes and some members of the working class.  The classic career was built on tenure and guaranteed by the pension.  So long as you gave several decades of your life to a single company, that company paid for you to live the last decade or two of your life without the need to work.  Today, tenure doesn’t guarantee job security and few people have pensions.  The bright side of this is that the golden handcuffs are much looser than they were ten or twenty years ago.

Everyone with a decent-paying job knows the golden handcuffs – feeling locked into a job or company because the pay and benefits are too good to give up, even if you hate the work.  With the demise of pensions and decreasing importance of tenure, workers’ material ties to an organization do not increase over time like they used to.  At the age of 50, you can move to another job and take your retirement with you.  If you’re moving into a job/career that you’re good at or that’s in higher demand, you may even earn a higher income.

What this means is that people who are riding out a job, who feel like they threw their lives away, or who say that work is a sacrifice you make to live the life you want in retirement are less and less able to justify that perspective.  It’s not that these feelings have gone away.  It’s that workers are not as locked into a job as they once were.  Right now, many people are discovering this in the middle of their careers.  However, I find few young people (including myself) or mid-career people who see themselves changing directions when they’re 45, 50, or 60.  But there’s less and less reason not to start thinking about it and more and more reason to start planning for it.

Right now, the (middle-class western) life course is to go to college, get married, have kids, settle on a career, retire.  If we were to put ages to these, they would be something like: 18, 24, 27, 35, 65.  Notice that there are thirty years between the time one settles on a career and the time one retires in which nothing in the life course is supposed to change (not so oddly enough, “the mid-life crisis” occurs in the middle of this).  In part, this is an artifact of the career system, whereby the same thing is supposed to happen for thirty years.  (For those who say a lot happens in those thirty years, you’re right.  But seldom are promotions, vacations, children’s graduations, or personal accomplishments as life-defining as going to college, having children, or getting married.  That’s what makes these ‘stages’ in the life course.  Things like divorce happen, but they’re not ‘supposed’ to happen.)   In a sense, when we signed up for the career with a pension and tenure ladder, we created a thirty-year period of stasis.  As those supports have gone away, we now have an opportunity to rethink how we want to live our lives during those thirty years.  In a sense, we now have an extra thirty years to define and redefine our lives in the same fundamental ways that we did with school, marriage, our first professional job, and children.

If you’ve followed me up to this point and agree that moving the 401(k) and finding better, higher-paying jobs in your 40s and 50s are possible, the question now is: what do you want to do with your extra 30 years?

I do feel the need to reassure hiring managers (and economists) that such worker mobility is actually a good thing.  And I don’t think I have to say much.  While replacing an employee is expensive (several thousand dollars in replacement costs, lost wages’ worth of work, and lost value in expertise), no one wants an unproductive or under-productive employee who isn’t engaged in their work – the employee who rides out their tenure to retirement or who only skirts by with the minimum.  (Actually, few people work like this.  People generally hate being bored and feeling like what they do is meaningless for very long.  We’re good at finding meaning and energy in co-workers, family, or something else and bringing it into the job.)  But in a society without careers, employees are increasingly making an active choice to work in a particular job with a particular company.  If employees can move more frequently and define their lives in such flexible ways, those who stay are those who want to be there.  The premise of employee engagement, as consultants are so quick to emphasize, is exactly people’s ability to regularly affirm themselves through work.

So, as work becomes more flexible with the decreasing role of tenure and flexibility of retirement savings, we may want to consider getting rid of the idea of a career as we’ve conceptualized it.  In this increasingly flexible world, we’ve been given an extra thirty years (or a full 1/3rd of our lifespan) to redefine ourselves.  What would you do with all that time?


2^N Analytics – Why More Data is Never Enough

The fervor over big data has largely focused on the number of data points now at our disposal, from which ever more specific and powerful analytic insights can supposedly be drawn.  But managing the volume of computation is not the biggest challenge.  The biggest challenge is what I call 2^N Analytics – creating knowledge when the number of possible analyses proliferates faster than the data itself.  As I’ll show, even very small datasets are impossible to compute exhaustively.  The challenge now, as it has always been, is developing analysis and knowledge without the ability to compute it all.

In computer science, there is a class of problems called NP-complete problems.  For these, no known algorithm runs in polynomial time, so solving them at a useful scale is considered computationally intractable.  One such problem is finding cliques in a network.  (Cliques are groups of people, or nodes, in which everyone is connected to everyone else.)  To solve it by brute force, you literally check every possible subset of nodes to determine whether all of its members are connected to one another.  Mathematically, this requires on the order of 2^N computations: every node added to the network doubles the number of subsets to check (it’s actually 2^N − 1 non-empty subsets, but I’m rounding here).  In a network of three people, a computer must do 2^3, or 8, computations.  In a network of just 300 nodes, the computer must do 2^300, or about 2×10^90, calculations!  Just for reference, it would take IBM’s Sequoia supercomputer, the fastest we have, around 6×10^73 seconds to compute, which is more than 10^56 times as long as the universe has been around!
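To make the blow-up concrete, here is a minimal brute-force sketch in Python (a toy illustration of my own, not anything from an actual analysis): it examines every subset of a small network and keeps the ones that are cliques.  The subset loop is what costs roughly 2^N steps, which is why the same code is hopeless at 300 nodes.

```python
from itertools import combinations

def find_cliques_brute_force(nodes, edges):
    """Check every subset of nodes and keep the ones in which every pair
    is connected (i.e., the cliques). Roughly 2^N subsets to examine."""
    edge_set = {frozenset(pair) for pair in edges}
    cliques = []
    for size in range(2, len(nodes) + 1):
        for subset in combinations(nodes, size):
            if all(frozenset(pair) in edge_set for pair in combinations(subset, 2)):
                cliques.append(subset)
    return cliques

# A toy network of five people; every added node doubles the number of subsets.
nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
print(find_cliques_brute_force(nodes, edges))   # includes the triangle ('A', 'B', 'C')
print(2 ** 300)                                  # subset count for a 300-node network
```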

Big Data presents the same problem, but not because we have 50 million data points.  Instead, we have 50 million data points across 300 dimensions.  High-dimensional data analysis explodes in the same exponential way that clique detection does.  Every new dimension added to the data roughly doubles the space of possible analyses, so the count grows like 2^N.  If we have just two dimensions, say cost and sales, and we’re trying to predict profit, we can estimate the isolated effect of each, the effect of each controlling for the other, and the interactive effect of both – four models for just two variables.  As the number of analytic parameters increases, the possibilities for analytic insight grow exponentially.  This is not so new, really.  The most widely used social survey, the General Social Survey, collects data on over 1,000 dimensions, from race and gender to attitudes about the environment and politics.
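As a hedged illustration (synthetic data, made-up coefficients, not a real product portfolio), here are those four specifications written out as regression formulas; with 300 dimensions the list of candidate formulas would be astronomically longer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the cost/sales example (the numbers are invented).
rng = np.random.default_rng(0)
df = pd.DataFrame({"cost": rng.normal(10, 2, 500), "sales": rng.normal(100, 20, 500)})
df["profit"] = 3 * df["sales"] - 5 * df["cost"] + rng.normal(0, 10, 500)

# The four specifications for just two dimensions; each added dimension
# roughly doubles the number of possible specifications.
formulas = ["profit ~ cost", "profit ~ sales", "profit ~ cost + sales", "profit ~ cost * sales"]
for formula in formulas:
    fit = smf.ols(formula, data=df).fit()
    print(formula, round(fit.rsquared, 3))
```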

However, there’s another 2^N problem that makes this more salient than ever.  As the number of dimensions grows, our ability to gain meaningful insight from them diminishes because there aren’t enough individual observations.  A basic heuristic in statistics is that, for every variable you put into a linear regression, you need 10-15 observations, so a regression on all 300 dimensions needs only 3,000-4,500 observations.  As above, multiply the 2×10^90 possible analyses across those 300 dimensions by another 10 or 15 observations each and the problem keeps growing.  But it gets even more mind-numbingly complicated, and computationally intractable, when we want to do an analysis within dimensions.

Let’s return to the cost and sales example.  Say you want to compare sales for low-cost versus high-cost items.  Knowing your product portfolio, you know that items over $1,000 are your high-end items.  But, though you have 1,000 observations, you’ve only sold five items over $1,000 in the past year.  You have a lot of data, but not a lot of data about this fairly rare event.  So, all of a sudden, the two dimensions you could analyze in four ways become impossible to analyze, even with 1,000 data points, because the category of interest is too rare.  The thing is, rarity becomes extremely common in 2^N Analytics, and this is a big problem.  Every dimension added has at least 2 and as many as N sub-dimensions (distinct values).  In the case of low- and high-cost items, a variable with 1,000 distinct values (assuming every item costs something slightly different) is reduced to a two-value variable.  This is typically a strength, but when you want to make inferences about specific sub-dimensions (the supposed power of big-N data), the data can run out fairly quickly.

Let’s use an example with the entire U.S. population.  Using the U.S. census (some of the oldest big data, now containing roughly 300 million people), say you want to compare the probability of unemployment (7%) for a black (12%) man (50%) in his thirties (13%) in a poor neighborhood (12%) of Detroit (0.25%) to that of a similar man in a similar place in Chicago (1%).  [Note that I’m treating these probabilities as independent; the actual unemployment rate for black men in these places is much higher.  I use these figures because I happen to know most of them off the top of my head.]  Combining these probabilities (.12 × .5 × .13 × .12 = .000936; × .0025 or × .01 for the city; × .07 for unemployment; × 300 million), you find that there are about 701 such men in Detroit (49 of whom are unemployed) and 2,889 in Chicago (of whom 202 are unemployed).  In adding five variables, we’ve cut a data set of 300,000,000 people down to about 3,500 people, of whom only 251 have the outcome we’re testing.  The power of big-N data is that we still have several thousand people.  But there are still a couple hundred variables in the American Community Survey (an in-depth survey of samples within the U.S.) we could add to understand the employment likelihood of these two groups: political ideology, family, education, transportation access, home ownership, and so on.  Who wants to imagine how small the data becomes when you compare these 3,500 people by their political ideology and family structure?
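For those who want to check the arithmetic, here is the shrinking-population calculation as a short sketch (same rounded rates as above, treated as independent, so the totals come out slightly different from my hand calculation):

```python
# Shrinking 300 million people down to a comparison group, using the rounded,
# independence-assuming rates quoted above.
population = 300_000_000
profile = {"black": 0.12, "male": 0.50, "in_thirties": 0.13, "poor_neighborhood": 0.12}
city_share = {"Detroit": 0.0025, "Chicago": 0.01}
unemployment = 0.07

base = population
for rate in profile.values():
    base *= rate                      # men fitting the demographic profile nationwide

for city, share in city_share.items():
    in_city = base * share
    print(f"{city}: {in_city:,.0f} such men, {in_city * unemployment:,.0f} unemployed")
```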

Hopefully by now, I’ve convinced you that these computational problems are not solved with bigger data and faster computers.  Big data has made us better at getting estimates at a fine-grained level, but the scale needed to compute everything should be considered unreachable.  Instead, the promise of big data relies on analysts and their ability to choose the right features, set up the right kind of data collection, perform the right kind of analysis, and develop the right kind of conclusions.  What is new is neither the data nor the computers, but our capacity to analytically and computationally reduce these 2^N problems to a meaningful and manageable scale from which we can build new insight.


Research Fugue: Measuring Power in Political Campaigns

I’ve been working on a project inspired by the Center for Investigative Reporting and moderated by Kaggle.  I used a network analysis of the movement of money between campaign committees to measure the extent to which different campaigns and different committees were more or less independent, controlling, or broadly influential.  It turns out that corporations have the most broadly influential committees while the most seasoned congressional candidates are the most independent.  However, when you look at the committees that are the most controlling or dependent, things get a bit interesting.  You can download the report and raw results at Influence, Control, Dependent, and Independent.  The code will be up soon.


Doing Program Evaluation Scientifically

I was inspired to write this post after reflecting on James Boutin’s series of posts critiquing the construction and use of data in schools.  There are a lot of ways to screw up evaluations, beginning with misguided initial theories, terrible instrument design, and inept analysis and interpretation.  In this post, I’m not going to tell you all of the ways you can fail and how to succeed.  There are too many for a single post.  Instead, I want to provide the big picture process for doing evaluation scientifically so that you know what you should be getting into when you decide to evaluate.

Evaluation has two components – assessing the causal processes and developing the monitoring system (i.e., benchmarks) to continually track them.  The causal assessment tells you what about your program and what about your operating environment is influencing your outcomes.  It allows you to say something like, “participation in our interview-skills training program increases the probability of employment by 25%, but lack of access to public transportation decreases our clients’ probability by 30%.”  The benchmarks let you keep track of these influential variables and outcomes and detect any changes or problems with the program.  They allow you to say, “over the past year, 50 clients have participated in our interview-skills training, but 40 did not have access to public transportation.”  Together, these two pieces of information can play a very influential role in getting city government to expand train or bus routes in your direction or to increase funding for bus passes.

My suggestion for a general strategy is to perform a causal analysis once every five or ten years and use the findings to select which benchmarks to track.  This 5-10 year interval is a heuristic.  Some programs operate in very dynamic environments that change quickly relative to other programs.  The more dynamic your environment and the more changes you make to your program, the more often you will have to redo the causal analysis.  In the example above, a new bus line might change the interview program’s dynamics in several indirect ways: more clients may come from new areas, changing group dynamics, while better access to other resources like a public library or health facilities may improve participants’ job chances independently of your program.

Causal Evaluation:  Assessing causal relationships is not only the most important part of evaluation, but also the most difficult and the most susceptible to bias, misinterpretation, and generally terrible research.  That is why I strongly advise hiring an expert, typically someone with at least master’s-level training in appropriate research methodologies.  Causal inference involves the highest standards of social science research and requires some of the most sophisticated methods we’ve developed (which is why I describe this approach as doing evaluation “scientifically”).  In essence, I suggest paying the $10,000-$50,000 (or more for larger, more complex programs and organizations) once every five to ten years to hire a well-qualified contract researcher or consultant.  My earlier post “Researching With Nonprofits” goes a bit into what this process might be like.  Even better would be to hire one full-time, but I won’t get into the difficulties of financing operating costs.

The most important part of putting the causal evaluation together is the program logic model (for entrepreneurs, this is why you must make one).  Writing out the logic model makes explicit what you believe are the most important processes determining your program’s outcomes, and it is the starting point for designing the analysis.  Depending on how much data you can gather and the extent to which you’re able to randomly select clients to participate in programs, you can expect several waves of data collection or possibly one big one.  Large amounts of data allow for the sophisticated analyses that provide evidence for causal inference.  Small datasets require multiple measures over time, both to gather enough data and to add temporal variables that help support causal inference.  So, if your organization or program is small, you can expect waves of data collection lasting for a period determined by the turnover in your program.
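For concreteness, here is a minimal sketch of the kind of model a causal analysis might estimate, on simulated data and assuming clients were randomly assigned to the training (the variable names are hypothetical, and a real design would need far more care than this):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical client-level data: random assignment to the interview-skills
# training is what would let a model this simple carry a causal reading.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "trained": rng.integers(0, 2, n),       # randomly assigned to the training
    "has_transit": rng.integers(0, 2, n),   # access to public transportation
})
true_p = 0.30 + 0.25 * df["trained"] + 0.20 * df["has_transit"]
df["employed"] = rng.binomial(1, true_p)

model = smf.logit("employed ~ trained + has_transit", data=df).fit(disp=False)
print(model.summary())
# Average marginal effects turn the coefficients into percentage-point statements
# like the ones above ("training raises the probability of employment by X points").
print(model.get_margeff().summary())
```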

So what do you get for your investment?  It depends on the results.  If the study fails to find any significant causal connections and there’s nothing wrong with the data, then a full program review is in order since your program logic model has not received empirical validation.  This is the difference between benchmarks and a causal analysis and why benchmarks are not useful in themselves.  For the interview skills program example, benchmarks would say “40 clients used the service and 30 received a job offer.”  Great, right?  Nope.  The causal analysis concludes that those 30 people would have gotten those jobs without the training.

Benchmarks tell you what’s happening.  The causal analysis can tell you whether you should take credit for it.  The overall goal then is to get the causal part right and then ride on the results for as long as the causal dynamics remain stable.

Benchmarking: If the study succeeds in isolating key causal relationships, then those variables become benchmarks.  To go back to the interview-skills program example, if you find that, say, access to transportation, clients’ education level, and involvement in other programs all affect the probability of receiving a job offer, then you collect that information, put it into a spreadsheet, and monitor the changes.  So, if the rate of job offers decreases, you can look and find that your client base in the last cycle was less educated or less involved in the rest of your programs.  Thus, you can say that the program is working with more disadvantaged clients and that you need to do more to get clients involved in other programs.   Hopefully, you can see how this might inspire confidence among your staff and board and encourage donors to open their wallets.
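A spreadsheet is fine for this; as a sketch, here is the same monitoring logic in a few lines of pandas (the numbers and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical quarterly benchmarks for the interview-skills program.
benchmarks = pd.DataFrame({
    "quarter": ["2012Q1", "2012Q2", "2012Q3", "2012Q4"],
    "clients": [52, 48, 55, 50],
    "job_offers": [31, 30, 28, 21],
    "with_transit": [40, 38, 37, 25],
    "some_college": [20, 19, 18, 12],
}).set_index("quarter")

# Convert counts to per-client rates and flag quarter-over-quarter drops of
# more than ten percentage points as candidates for a closer look.
rates = benchmarks[["job_offers", "with_transit", "some_college"]].div(benchmarks["clients"], axis=0)
drops = rates.diff()
print(rates.round(2))
print(drops[drops < -0.10].dropna(how="all"))
```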

Long-Term Planning:  The basic feature of planning evaluations over time is understanding the dynamics of your environment.  As mentioned above, programs not only have their own dynamics, which may change over time, but they also operate within dynamic environments whose causal processes will change.  I see three indicators that a new causal analysis might be necessary.  First, front-line staff and program managers can recognize when dynamics are changing.  Changes in client demographics, new complaints about new issues, or decreasing contact with potential employers can each indicate new dynamics entering the program.  Second, changes in benchmarks can indicate underlying changes in the causal dynamics.  For example, in the interview-skills program, if job offers decline and none of the other measures change correspondingly, it might be time to do another causal analysis.  Finally, dynamics will likely change when you substantively alter your programs.  If you redesign the program to include resume writing or professional writing, factors associated with writing, like immigration status, race, and education, will likely influence how well clients do in the program and, if the writing component has an impact, the rate of job offers.

Lastly, I would like to take note of the national and sector-level governments, organizations, and thinkers currently pushing for accountability.  While I believe that data-informed program development and evaluation is the way to go, there isn’t a one-size-fits-all approach to developing good data, and the limited capacity of organizations to do their own high-standard evaluations is probably the single biggest barrier to accountability.  Anyone can do research, but doing good research by social-scientific standards requires specific training in hypothesis testing, data collection design, and data analysis.  If the accountability movement wants to succeed, it needs to develop the financial and technical resources necessary for organizations to build this capacity.


The Diminishing Power of the Public, Part 1: Nonprofits as Privatization

This is the first in a series of posts on privatization, the decline of public power, and its implications for democracy and the provision of public and social goods.

A common argument among globalization’s flattening-earth theorists is that state power is being eclipsed by capital mobility, international governmental organizations, immigration, and innovations in transportation and communication.  Here, I want to walk through a counter-argument I’m thinking about.  Historically speaking, state autonomy was actually diminished by democratization.  The better question is whether public power, engendered by democratic processes and public accountability, is diminishing.  I argue that public power is significantly diminishing, at least in the U.S., and being replaced by a multitude of private powers.  The major forms of this privatization are the outsourcing of responsibility for the provision of public and social goods, the encroachment of private organizations on these goods’ provision, and the privatization of public funds.  In this first part, I want to introduce the question of the declining power of the public and elaborate my first argument: that the provision of public and social goods is being outsourced to private corporations, particularly nonprofits.

First, there’s an ambiguity in the idea of state power.  For globalization researchers, the decline in state power is the declining ability of the state to determine its own policies.  The primary driver for many is global capital flight in which, if states choose anti-capitalist policies, multinational corporations will pick up and move.  Hence, states are forced to dismantle welfare, minimize taxation, and deregulate.  While I would agree that state policy is being influenced by global capital markets, I believe that this conception of state power as policy autonomy obscures what state autonomy actually is.  I argue that states generally are less and less autonomous the more democratic they become.  Democratic states are significantly less autonomous because they are fundamentally beholden to the voters, interest groups, and other public groups that shape elections, policy making, and program implementation.  In essence, the decline of state autonomy has already happened for democracies.

The more pertinent change in state power over the past four decades, best exemplified by the U.S., is the increasingly private control of state money and programmatic responsibility.  This is a broader sense of privatization than the typical one, in which governments contract public enterprises like waste management and parking meters out to for-profit companies.  I define privatization as private control of, and responsibility for, public resources and programs.  Of course, privatization comes with political overtones, and I do not mean to take sides as to whether these trends are better or worse for providing public and social goods.  I only mean to hypothesize about privatization’s relationship to public power.

Nonprofits as Privatization:  Prime examples of private responsibility for public programs are nonprofits and traditional privatization initiatives.  Some may be surprised to consider nonprofits a form of privatization, but they are, in fact, privately operated corporations, typically incorporated under state law and granted federal tax exemption under section 501(c)(3).  What is categorically significant about this form of privatization is that the implementation of publicly determined programs is not democratically accountable in the same ways as public programs.  Charter schools are a perfect example of the nonprofit form of privatization.  We elect the school boards who oversee our public school systems.  We do not elect the CEOs who run charter school management corporations.  Some may think this is a specious distinction since charters are overseen by school boards or other state offices (hence they are still “public”).  But two important differences should be noted.  First, charter schools are granted exemptions from some of the (democratically chosen) rules and regulations governing public schools.  Second, the oversight process is at arm’s length compared with traditional public schools.

The potential implications of nonprofit privatization are surely more numerous than what I’ve come up with, but here are some key points.  First, this privatization likely leads to more innovation, at minimum because of sheer organizational diversity and competition for funding.  This diversity cuts both ways: some organizations will be much less effective, and potentially harmful, while others are wildly successful.  The key is the competitive mechanisms that ensure the ineffective fail and the effective survive.  This brings me to my second implication.  The arm’s-length relationship between democratic oversight and program implementation problematizes the oversight process, because inspection and grant reporting, rather than direct management and public reporting, ensure compliance.  While direct management is no panacea for good governance (think of state-run institutions for people with mental illness), an annual inspection has little hope of doing better.  This, I believe, is the source of the accountability movement in the third sector.

Third, it allows public programs to tap into a broader range of private resources, particularly foundations (this is more apparent in social services, like homeless shelters and services for people with developmental disabilities, than in education).  The access to private wealth for public and social programs is a double-edged sword.  On the one hand, the depth of private, philanthropic pocketbooks is enormous.  While some policy areas have long thrived on a mix of public and private funding (health, education, research, the arts), other areas like mental illness, job re-training, and homelessness have much more fragmented funding histories that have been positively transformed by the development of the third sector.  On the other hand, it has enabled the retrenchment of the state and the decline in public funding for publicly initiated programs.  Access to private resources did not necessarily cause state budgets to continue to be scaled back, but the ability of social and public services to tap private wealth has certainly prevented widespread failure in the nonprofit marketplace in the face of declining public funding.

Finally, this privatization may have shifted the onus of civic engagement onto professionalized volunteerism and under-informed philanthropy, rather than political action or democratic civic organizing.  This point goes back to the shift in the public provision of services from benevolent associations (like the Elks) to nonprofits.  Before the post-WWII era, public and civic resources circulated through communities via politically active civic groups with regular meetings and democratically elected leadership.  There was a marriage of long-term civic engagement, political activism, and community self-help.  Those days are long gone, replaced by short-term, hyper-circumscribed volunteerism within the professional machinery of a corporation, however virtuous its intentions.  Individual philanthropy, rather than being a donation to your civic group’s democratically controlled community pot, is determined by friendship networks (“the ask”), entertainment (galas, concerts, and the like), and emotional appeals.  This is an information-poor market driven by social convenience and an appealing narrative, rather than long-term social relationships, systematic knowledge, and democratic control over the use of donor funds.  It should come as no surprise that nonprofit leaders like Sean Stannard-Stockton and nouveau-riche philanthropists like Bill Gates and Pierre Omidyar are so interested in treating philanthropy as a form of investment.  There is widespread concern that the philanthropic marketplace is driven by emotion and convenience (and institutionalized traditions among old-school foundations) rather than impact.  As for volunteers and donors, they’ll have to get their democratic community elsewhere.

In conclusion, the increasing private control over public resources and responsibilities, which I’ve broadened to include nonprofits, has significant, if morally ambiguous, consequences.  Broadly speaking, this shift represents a significant decline in the power of the public to control the provision of public and social services.  The nonprofit form of privatization is not, as some may argue, a capitalist takeover of the public sector, because the nonprofit sector is categorically not capitalistic (though it is a marketplace).  Other forms of the declining power of the public, however, are capitalistic, as I explain in the next post on the encroachment of private enterprises on public services.


DonorsChoose Supplement Part 3: Market Corrections

Note: In preparation for the results announcement by DonorsChoose, this series is meant to carve up different issues raised by my work on the DonorsChoose Data and address them directly and more fully.  You can find the original announcement and report at Predicting Success on DonorsChoose.org.

If, based on my findings, we believe that some deserving projects are being unfairly disadvantaged by, say, teacher gender, metropolitan location, or even state of origin, there are a couple of ways we can use the algorithm to change the dynamics of the market and test the efficacy of those changes.  My reasoning is that if the DonorsChoose market is biased towards urban schools, for example, and we don’t believe urban schools are any more deserving than suburban or rural schools (see my discussion of deservingness for why this might be true), then that is systematic under-valuation in the market.  Using the algorithm, we can test potential correctives for it.

The first intervention was actually suggested to me by Jonathan Eyler-Werve.  He suggested that search pages could weight the search results by whether projects are urban, suburban, or rural, or by state, so that under-valued projects are found earlier.  Technically speaking, this would be a weighted random sort, such that rural and suburban projects, randomly selected from those returned by a user’s search, show up earlier.  So, say you’re looking to help out a music project that’s coming down to the wire.  You might not care whether it’s urban or suburban, but, as things stand now, the higher number of urban projects in the system means that roughly 60% of the projects you’ll see will be urban.  You’ll be more likely to donate to an urban school by sheer roll of the dice.  With the weighted random sort, the search results balance out the proportions of urban, suburban, and rural projects.  Of course, this would not apply to searches that explicitly ask for urban, suburban, or rural projects.  Testing the impact of this correction would involve re-running the analysis that produced the model on post-implementation success rates and seeing whether the significance of the urban/suburban/rural variables decreased.  If the significance decreases, the bias has decreased.
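Here is a sketch of what such a weighted random sort might look like (my own toy implementation with invented proportions, not DonorsChoose’s actual code):

```python
import random

# Hypothetical search results, skewed toward urban projects the way the raw pool is.
projects = (
    [{"id": i, "area": "urban"} for i in range(60)]
    + [{"id": i, "area": "suburban"} for i in range(60, 85)]
    + [{"id": i, "area": "rural"} for i in range(85, 100)]
)

# Weight each project so that urban, suburban, and rural are equally likely to
# surface, regardless of how many of each appear in the result set.
target_share = {"urban": 1 / 3, "suburban": 1 / 3, "rural": 1 / 3}
counts = {area: sum(p["area"] == area for p in projects) for area in target_share}
weights = [target_share[p["area"]] / counts[p["area"]] for p in projects]

# Weighted sampling without replacement gives the re-ordered list shown to the donor.
order, pool, w = [], list(projects), list(weights)
while pool:
    i = random.choices(range(len(pool)), weights=w, k=1)[0]
    order.append(pool.pop(i))
    w.pop(i)

print([p["area"] for p in order[:10]])   # the first page is no longer urban-dominated
```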

Another form of market correction, one I mentioned in the report, would be allowing donors to see a project’s chances of success or sort their results by them.  This directly informs donors of the value the market has assigned to each project and lets donors decide whether the project really deserves only a 30% chance.  Thus, a donor could look at two similar projects, like two music programs in Chicago, and know that one has a 60% chance of success and the other an 80% chance.  If the donor thinks the first one is actually more deserving, they might be more motivated to donate to it to improve its chances.  They may even start a giving page around it.  This is a donor-driven market correction in which donors use their own preferences to decide whether the 60% project is really less deserving than the 80% project.

Monitoring the effect of this implementation would involve re-running the model after launch and testing for changes in projects’ probabilities of success.  Thus, if the original algorithm predicted a 60% probability of success and the post-implementation data shows some of those projects going to 80%, we can see which variables correlate with the increase in probability.  If we find, for example, that projects posted by female teachers increase in probability, then we can infer that donors are correcting for the existing gender bias.  The same goes for any variable measured.
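Here is one way that check could be operationalized, sketched on synthetic data with hypothetical variable names (the real analysis would use the full model from the report):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for project data before and after the probability display
# went live; the simulated gender gap shrinks after launch, the effect we hope to see.
rng = np.random.default_rng(2)
n = 4000
df = pd.DataFrame({
    "post_launch": rng.integers(0, 2, n),
    "teacher_female": rng.integers(0, 2, n),
    "urban": rng.integers(0, 2, n),
})
logit_p = -0.2 + 0.3 * df["urban"] - 0.3 * df["teacher_female"] * (1 - df["post_launch"])
df["funded"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Fit the same model to each period and compare coefficients; a teacher_female
# coefficient that moves toward zero after launch suggests donors are correcting
# for the gender bias described above.
pre = smf.logit("funded ~ teacher_female + urban", data=df[df["post_launch"] == 0]).fit(disp=False)
post = smf.logit("funded ~ teacher_female + urban", data=df[df["post_launch"] == 1]).fit(disp=False)
print(pd.DataFrame({"pre": pre.params, "post": post.params, "shift": post.params - pre.params}))
```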

Finally, offline strategic initiatives can be developed to target under-valued projects.  For example, a foundation focusing on rural development may be very interested in building support on DonorsChoose for rural projects.  Most importantly, this research provides justification for that strategy, in that rural schools are less likely to reach project completion.  Thus, such a foundation might be convinced to offer matching funds to rural projects or distribute gift cards in rural areas to raise awareness of DonorsChoose and build the rural donor pool.  The same goes for under-engaged states.  I’m not sure what the redemption rate of gift cards is, though it could easily be figured out from the data provided for this competition.  In the case of a matching-funds initiative, assessing the impact would involve the first method mentioned above: seeing whether the significance of the rural variable decreased during the funding period.  As for recruiting new donors through gift cards, not only can we assess how many people used the gift cards, but, using the second method mentioned above, we can estimate the lasting effect of the initiative.

If you have any other ideas about how this prediction algorithm might be used to improve the DonorsChoose market, please feel free to share them in the comments section below.


The Revolutionary Potential of Social Enterprise

Over the past forty years, we’ve become accustomed to the legal, economic, organizational, and moral distinctions between for-profit and non-profit enterprises.  For-profits, like McDonald’s and Procter & Gamble, provide goods and services in exchange for money, which then gets distributed to workers, owners, investors, and the like.  Non-profits, like Feeding America or the Salvation Army, provide free or nominally priced services to those who likely could not afford them otherwise.  Their income comes from the generosity of individuals, philanthropists, and governments/taxpayers and is spent to pay employees and subsidize the cost of these services to clients.  Money left over may be socked away for a rainy day or invested in expansion.  There are no investors or shareholders in the for-profit sense (though nonprofits still take out bank loans and other lines of interest-bearing credit).  This institutionalized distinction between for-profit and nonprofit, I believe, is becoming incoherent, and social enterprise demonstrates its revolutionary potential.

First, a hypothetical: what would it take for McDonald’s to become a nonprofit?  Under the IRS definition of a 501(c)(3), it would need to be operated for an exempt purpose and its income could not “inure” to controlling individuals or shareholders.  I may be wrong, but a $1 McDouble seems like a charitable price for feeding those in poverty.  The key difference, I believe, is the “inuring” of profit.  So, McDonald’s could take its ownership private, change certain lines of investment capital, rework its executive benefits, and voilà!  (I ignore the limits on political activity because they are less relevant to my point here.)  In fact, Panera Bread now has a self-sustaining nonprofit arm integrated with its restaurants.

What social enterprise has demonstrated is that you don’t have to give things away for free to be a nonprofit.  This is an inversion of what business students say: “you can do well and do good.”  This is the revolutionary potential of social enterprise.  Many for-profit businesses could qualify as charitable.  Many charities could turn a profit and still be providing a social good.  The line between socially beneficial activity and business is being recognized as contingent, because there are not many activities that could not be considered “charitable.”  The distinction we’re accustomed to is the artifact of a custom whereby services for those in need were organized by nonprofits.  For-profit entrepreneurs and executives are only now realizing that, for the most part, the only thing preventing them from being a nonprofit is “inuring” profit.  The biggest disadvantage to filing as a nonprofit is the loss of access to investment capital.  Hence the L3C designation.

L3C stands for low-profit limited liability company.  Essentially, L3Cs are for-profit companies that, because they provide a social good or service, can accept return-bearing investments from traditional nonprofit sources like foundations and governments (called “program-related investments“) but cannot have profit as a “significant purpose.”  While the initial rationale for the L3C was to enable would-be nonprofit organizations to attract more (traditionally capitalist-like) investment, it can go both ways.  Would-be for-profit companies that happen to provide a charitable service can adopt the L3C as a sign of their ethical commitment to consumers.  (What operations and rules define profit as a non-”significant purpose” is left wide open (4th paragraph), hence I only assert that the L3C is an ethical signal rather than an operational restriction.)

Capitalism would be completely different if the business community recognized that much of what it produces could be considered charitable in a legal sense.  Imagine a world where McDonald’s, Walmart, and Coke are nonprofits (or L3Cs; I want to address the question of capital access in a later discussion).  They already provide cheap food, clothing, potable water, and other essential items to billions of people.  Would you buy a burger from a nonprofit McDonald’s or a for-profit Burger King?  Would Walmart clothes still be made in sweatshops?  How silly does the ideology of shareholder value sound now?

There is no clear push for so radical a restructuring of capitalism.  But the emergence of new, energizing strategies in business, social enterprise in particular, indicates that Americans are seriously challenging the assumption that conducting business is ultimately just for profit and that providing services to those in need is just charity.  How far we could take it seems pretty revolutionary.
