Can Mixed Methods Be Used for Environmental Economics?
Carl Mackensen
Professor Sanjay Pandey
Mixed Methods
Final Paper
Abstract
My research question is whether environmental economics can be conducted rigorously using mixed methods. I am interested in this because I intend to complete an environmental economic analysis for my dissertation. I employed a keyword search of Google Scholar and the GWU library websites to identify economic analyses that used mixed methods. They ran the gamut in terms of methodology used and analytics produced. As such, I refined my analysis to ten key articles which showcase the different means available to economic analysts. These articles were all relevant to my research question, as it is exploratory in nature. Reviewing these studies informs the way in which I will go about conducting my dissertation. I found answers to what methods are available, though my search was not exhaustive. I then put the ten most compelling articles into a table, where I analyzed what each had to offer and how each was structured. I found that the methods are diverse, but that all are defensible and rigorously researched. In terms of what this means for my research question, I now feel motivated to employ mixed methods in my dissertation, and I have learned a great deal about this methodology. What is missing is exactly how I will employ it, and what conclusions I may draw. This depends on what I ultimately decide to focus on in my dissertation, though I have a fairly straightforward idea of what it will be.
Section One: Write an introduction talking about your research question(s) and why does this question interest you?
At my core, I am an economist. My first master's was from Duke University, in Environmental Economics and Policy. As I move through the TSPPA and complete my PhD in PPPA, I am increasingly drawn back to the methods of economists that I learned over a decade ago. My research question is: how do economists employ mixed-methods methodology to implement studies that are more powerful and impactful than traditional, quantitative studies? For my first master's, my training was almost entirely in quantitative methods. Now, having taken the Mixed-Methods course at GWU, I am increasingly interested in employing such a methodology in my dissertation. Specifically, I will be comparing Greek and Turkish Cypriot farmers in their access to finance and capital for climate mitigation and adaptation. I chose this topic because Cyprus is a single geographical entity separated by two governments. As such, it forms the basis of a perfect natural experiment. I will be conducting surveys, primarily of farmers, and attempting some non-market valuation of the ecosystem services which they perform by virtue of their farming activities. This may employ either revealed-preference or stated-preference measures to approximate these values.
As such, I view this assignment as an opportunity to better familiarize myself with the literature on mixed-methods methodologies employed by economists. I am interested in this because it will serve my dissertation well. I am interested in employing such a methodology in my dissertation, in turn, because I believe, after taking this course, that such a method can be more fruitful than either a strictly qualitative or a strictly quantitative one. I am interested in my dissertation topic because I believe climate change poses an existential risk to life on Earth, with the only equivalently dangerous human activity being nuclear weapons proliferation. I have devoted my academic career thus far to Environmental Economics and Policy. This began as an undergraduate student at GWU, when I took an Environmental and Natural Resource Economics course as part of my Economics major. I also majored in Psychology and Philosophy, and minored in Statistics and Peace Studies. My first master's is described above; my second was an MPA in energy and environment from Columbia University, and my third was an MIA in security and sustainability from The Hertie School in Berlin. Now I have moved on to the PhD, and I hope that my dissertation will be informed by a mixed-methods methodology.
As for a roadmap for this paper, I begin by scanning the literature. I describe my search strategy and my results: how many articles I found and what struck me about them. I detail how relevant these articles are to my research questions and what makes them relevant. I then detail what kinds of adjustments are necessary to the search strategy and research questions in order to move on. In the next section, selecting the literature, I focus on ten specific studies and how I chose them. I detail whether reviewing these studies will help me answer my research questions fully, what I will find answers to, and what I won't. I ponder how I would have expanded upon this if I did not have a time constraint. In my next section, extraction, I describe what information I need from the studies and why, and I create a table for this purpose. I then describe in words the information extracted in the table, to highlight and go over any key points. I conclude by reflecting on what this work means for my research question, what new items I have picked up about the research question, what I may have missed or remain ignorant of, and how this may be remedied.
Section Two - SCANNING THE LITERATURE (name the literature here and hereafter) -What search strategy did you develop to identify relevant scholarly literature (search terms, databases, inclusion / exclusion rules)? -What were the results of your search – how many articles did you identify; what struck you about these articles? How relevant are the articles to your research question(s) and what makes them relevant? -What kind of adjustment is needed for you to proceed (to the search strategy? to the research questions?)?
Section Two A: Literature
Silva, J. A., & Mosimane, A. W. (2013). Conservation-based rural development in Namibia: a mixed-methods assessment of economic benefits. The Journal of Environment & Development, 22(1), 25-50.
Al-Hamad, A., Yasin, Y. M., & Metersky, K. (2024). Predictors, barriers, and facilitators to refugee women’s employment and economic inclusion: A mixed methods systematic review. PLOS ONE, 19(7), e0305463.
Downward, P., & Mearman, A. (2007). Retroduction as mixed-methods triangulation in economic research: reorienting economics into social science. Cambridge Journal of Economics, 31(1), 77-99.
Malhotra, S. K., Mantri, S., Gupta, N., Bhandari, R., Armah, R. N., Alhassan, H., ... & Masset, E. (2024). Value chain interventions for improving women's economic empowerment: A mixed‐methods systematic review and meta‐analysis. Campbell Systematic Reviews, 20(3), e1428.
Mistry, R. S., Lowe, E. D., Benner, A. D., & Chien, N. (2008). Expanding the family economic stress model: Insights from a mixed‐methods approach. Journal of Marriage and Family, 70(1), 196-209.
Masset, E., Kapoor Malhotra, S., Gupta, N., Bhandari, R., White, H., MacDonald, H., ... & Sharma Waddington, H. (2023). PROTOCOL: The impact of agricultural mechanisation on women's economic empowerment: A mixed‐methods systematic review. Campbell Systematic Reviews, 19(3), e1334.
Salloum, R. G., D’Angelo, H., Theis, R. P., Rolland, B., Hohl, S., Pauk, D., ... & Fiore, M. (2021). Mixed-methods economic evaluation of the implementation of tobacco treatment programs in National Cancer Institute-designated cancer centers. Implementation Science Communications, 2, 1-12.
Cloutier, L. M., Arcand, S., Laviolette, E. M., & Renard, L. (2017). Collective economic conceptualization of strategic actions by Québec cidermakers: a mixed methods–based approach. Journal of Wine Economics, 12(4), 405-415.
van der Putten, I. M., Evers, S. M., Deogaonkar, R., Jit, M., & Hutubessy, R. C. (2015). Stakeholders’ perception on including broader economic impact of vaccines in economic evaluations in low and middle income countries: a mixed methods study. BMC Public Health, 15, 1-11.
Schmidt, S. M., Iwarsson, S., Hansson, Å., Dahlgren, D., & Kylén, M. (2023). Homeownership While Aging—How Health and Economic Factors Incentivize or Disincentivize Relocation: Protocol for a Mixed Methods Project. JMIR Research Protocols, 12(1), e47568.
Dopp, A. R., Mundey, P., Beasley, L. O., Silovsky, J. F., & Eisenberg, D. (2019). Mixed-method approaches to strengthen economic evaluations in implementation research. Implementation Science, 14, 1-9.
Ananthapavan, J., Sacks, G., Moodie, M., Nguyen, P., & Carter, R. (2022). Preventive health resource allocation decision-making processes and the use of economic evidence in an Australian state government—a mixed methods study. PLOS ONE, 17(9), e0274869.
Rathod, S. D., Guise, A., Annand, P. J., Hosseini, P., Williamson, E., Miners, A., ... & Platt, L. (2021). Peer advocacy and access to healthcare for people who are homeless in London, UK: a mixed method impact, economic and process evaluation protocol. BMJ Open, 11(6), e050717.
Beulen, Y. H., Geelen, A., De Vries, J. H., Super, S., Koelen, M. A., Feskens, E. J., & Wagemakers, A. (2020). Optimizing low–socioeconomic status pregnant women’s dietary intake in the Netherlands: Protocol for a mixed methods study. JMIR Research Protocols, 9(2), e14796.
Stuber, J. M., Lakerveld, J., Beulens, J. W., & Mackenbach, J. D. (2023). Better understanding determinants of dietary guideline adherence among Dutch adults with varying socio-economic backgrounds through a mixed-methods exploration. Public Health Nutrition, 26(6), 1172-1184.
Section Two B: Reflection
In order to find the required scholarly articles, I searched both Google Scholar and the GWU Library databases. I used the search term ‘economics mixed methods.’ I included articles which were focused on economic issues and employed a mixed-methods methodology. My search returned hundreds of articles and books. Being unable to peruse them all, I examined the first 50 results from each respective database. I subsequently identified 15 articles. I was struck by both the depth of the analytics of these articles and the breadth of topics covered by the search term. The 15 articles are all good examples of mixed-methods approaches to different economic questions, and as such they are relevant to my research question articulated above. In order to move forward, I needed to focus on articles which are core studies of issues, as opposed to meta-studies or reflections on methodology. I need articles which do real-world analytics relevant to my research questions, and as such, I narrowed the selection to ten core articles which form the basis of the remainder of my analysis.
Section Three: SELECTING THE LITERATURE - For your “adjusted search”, draw a flow diagram to document the results of your search. -How did you decide on which 10 studies to review? Elaborate - will reviewing these studies help you answer your research questions fully? What will you likely find good answers to and what will remain unanswered. If you did not have the time constraint of a semester, how would you augment this step?
I decided on the ten studies described below by diving deeply into the literature returned by my search. I found articles which are not reflections on methodology, but which tackle real-world problems with their analytics. I made sure that each article employed a mixed-methods methodology and focused on economic issues of different varieties which would inform my dissertation. Reviewing these articles will allow me to answer the bulk of my research questions fully, as my research question is exploratory in nature.
I want to better understand the nature of economic mixed methods approaches to analytics. In order to do this, I need to survey examples of such analyses. I am doing this because my dissertation will employ a mixed methods approach to the topic at hand. As such, I want to completely familiarize myself with all the types of mixed methods available to economists.
I will likely find answers to what different methodologies economists have used in order to perform mixed methods analytics. This is the nature of my research question, specifically, what mixed methods approaches are available to economists for their analytics. I will likely not exhaust this topic, as my search is limited by time and length of this assignment. If I did not have the time constraint of this semester, I would augment this analysis by also searching for articles by economists which have the key search words ‘qualitative’ and ‘quantitative.’ Doing this would produce more results than my initial search, and allow for a fuller picture of what methods are available to economists.
Section Four: EXTRACTING RELEVANT INFORMATION FROM THE LITERATURE 3 - Knowing the research question, what information do you need from the studies? Why? - Design and populate a table to extract information from the 10 papers. - Describe in words, in a page or two, the information extracted in the table – purpose is to describe the table as well as highlight and elaborate on key points.
As my research question is primarily exploratory, whatever information I can glean from the examination of the refined set of ten studies will be helpful. This is because I am attempting to learn best practices for when I complete my dissertation. My dissertation will be an environmental economics mixed-methods study of Cypriot farmers, comparing their access to capital for climate change mitigation and adaptation. Any published economic study is therefore of interest to me in terms of the methodology it employs to examine its respective topic. The information I need from these studies is primarily methodological, but it can also include the research questions posed, as well as the conclusions that could be drawn as a result of their analyses.
Section Four B: Reflection
The table above covers a myriad of topics of interest for the ten papers I looked at in greater depth. Most notably, the papers have all been completed within the last twenty years. They employ different, though sometimes overlapping methodological frameworks for their analyses. Their findings, as a result, are varied. The key point from this table is that economic mixed methods approaches are diverse, and as such, there are many tools available to me to employ in my dissertation.
All of the studies, however, share a few things in common. Most notably, aside from all having been published and all using mixed methods, the articles are deep in the scholarship around their desired topics of interest. As such, they are highly informative as to the methods and conclusions available to me for use in my dissertation.
I have not yet decided exactly what form my dissertation will take in terms of methodology, but I am certain that I wish to employ a mixed-methods methodology. As such, all of these papers have been fruitful for me to examine moving forward. My research question for this paper was exploratory, namely, how do economists use mixed methods. What I have found is a plethora of examples as to how and why these methods are employed.
For my specific dissertation, which will be informed by this paper, I will likely engage in non-market valuation of the benefits of climate adaptation and mitigation which result from access to capital. As such, I will need to perform interviews and surveys. The above literature has made clear to me how and why this should be done. I will then move on to the quantitative measure, namely, taking that qualitative data and determining the value added to society, as an economist would, for these interventions. As such, it will prove to be challenging, but impactful and generalizable.
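To make the qualitative-to-quantitative step concrete, the following is a minimal sketch of how stated-preference survey responses might be scaled into a rough non-market value estimate. All numbers, and the farm count, are hypothetical illustrations, not data from the dissertation or any cited study.

```python
# Hypothetical stated-preference survey: each farmer's stated willingness
# to accept (EUR/year) for maintaining ecosystem services on their land.
# These figures are invented for illustration only.
responses = [120.0, 80.0, 200.0, 150.0, 100.0]

mean_wta = sum(responses) / len(responses)   # per-farm annual value
n_farms = 1000                               # hypothetical population of farms
aggregate_value = mean_wta * n_farms         # naive scale-up to the population

print(mean_wta)         # 130.0
print(aggregate_value)  # 130000.0
```

A real analysis would of course need a sampling frame, protest-response screening, and sensitivity checks rather than a simple mean, but the sketch shows the basic arithmetic by which interview and survey outputs become an economic value.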
Section Five: Write a conclusion that answers the following questions: - What does all this work mean for my research question? What new things have I learned about the research question? What limits and blind spots remain and how may they be addressed?
This work shows that the answer to my core research question, that of whether economic analysis can be performed using mixed methods, is a resounding yes. I have learned many different methods and methodologies for completing economic analysis of multiple different types and kinds of data, for different purposes and different research questions. I feel confident now that I can complete a mixed-methods evaluation like that described above for my dissertation.
I have learned that I am not limited to the traditional methods of quantitative analysis for my research question. I can combine surveys with interviews for qualitative analysis, and complete quantitative analysis either separately, or making use of the qualitative outputs. Further, once I define my research questions for the dissertation adequately, I feel confident that I can use these methods to make impactful and substantive contributions to my field. I also believe now that I do not have to wait to go to Cyprus to begin the analytics. I can conduct systematic and systemic reviews of the existing data and research, and have this form the basis of my future analysis.
The limits and blind spots that remain for my research question moving forward are primarily which of the existing mixed methods methodologies I may choose to employ. I want to conduct environmental economics, specifically non market valuation, but I am not limited to the traditional methods of revealed preference or stated preference. As such, my only limit is my imagination.
Human Capital and Public Finance Final: Vocational School in the USA and Germany: Which System is Better?
Human Capital and Public Finance
Professor Burt Barnow
Carl Mackensen
Final Paper
Vocational School in the USA and Germany: Which System is Better?
Introduction
The USA and Germany have markedly different educational paradigms. These paradigms mean that students are taught in dramatically different ways. The purpose of this project is to determine whether GDP, and perhaps GDP growth rate, are related to the proportion of students who are in vocational schools. This will help determine whether vocational schools are more economically beneficial to a country than the alternative. This is done by means of a fixed-effects regression of GDP per capita on the percentage of the population enrolled in vocational school, for the years 2005 to 2020.
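The fixed-effects design described above can be sketched as a simple within estimator: demean GDP per capita and enrollment share inside each country, then fit an OLS slope on the demeaned data. The panel below is invented toy data for illustration; the real analysis would use actual country-year observations for 2005-2020.

```python
# Within (fixed-effects) estimator sketch for the proposed regression of
# GDP per capita on vocational enrollment share. Data are made up.
from collections import defaultdict

# (country, year, vocational enrollment share %, GDP per capita)
panel = [
    ("USA", 2005, 10, 35000), ("USA", 2006, 12, 36000), ("USA", 2007, 14, 37000),
    ("DEU", 2005, 20, 30000), ("DEU", 2006, 22, 31000), ("DEU", 2007, 24, 32000),
]

def fixed_effects_slope(rows):
    """Demean x and y within each country, then compute the pooled OLS slope."""
    groups = defaultdict(list)
    for country, year, x, y in rows:
        groups[country].append((x, y))
    num = den = 0.0
    for obs in groups.values():
        x_bar = sum(x for x, _ in obs) / len(obs)
        y_bar = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - x_bar) * (y - y_bar)
            den += (x - x_bar) ** 2
    return num / den

print(fixed_effects_slope(panel))  # 500.0 with this toy panel
```

In practice one would use a panel-regression library (with year effects and clustered standard errors) rather than this hand-rolled version, but the sketch shows what "fixed effects" buys: each country's level differences are absorbed, so the slope reflects only within-country variation.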
I am interested in this subject because I am a dual US and German citizen. I have completed a high degree of higher education, but I have seen that in Germany, my cousins have completed their education in a vastly different mode of schooling. Furthermore, I have yet to secure a vibrant full-time job that challenges me, and this seems to be attainable to my cousins even while they are still pursuing their educations. In order to better understand this, before we get into the literature review, let us examine some of the initial differences between the US and Germany.
In the US, vocational school varies from state to state. Specifically, it consists of post-secondary schools that teach the skills necessary to help students get jobs in certain fields. These schools are predominantly privately owned institutions. About 30% of these credentials are awarded by two-year community colleges. Other programs are offered through military training or government-operated adult education centers. Historically, these credentials have been considered less lucrative than a bachelor's degree; however, some programs lead to higher income than a bachelor's. High schools have offered vocational courses such as home economics, wood and metal shop, typing, business courses, drafting, construction, and auto repair, but these have mostly been cut due to funding issues. School-to-work refers to a series of federal and state pushes to link academics to work, sometimes including gaining work experience on a job site without pay.
In Germany, schooling also varies from state to state. Students can complete three types of school-leaving qualifications, ranging from the more vocational Hauptschulabschluss and Mittlere Reife to the more academic Abitur. German public universities generally don’t charge fees. Germany is well known internationally for its vocational training model, the Ausbildung (apprenticeship), with about 50 percent of all school leavers entering vocational training. Germany has high standards for the education of craftspeople. Historically, very few attended college: in the 1950s, 80 percent had only primary school of 6 or 7 years, and 5 percent entered college. An upper-middle-class life was still attainable by craftspeople. Today, more people attend college, but craftspeople are still highly valued in German society.
Literature Review
To date, there is no research available that directly compares US and German vocational schools, and their outcomes. As such, my literature review looks at each system separately, trying to distill the essence of how vocational school operates in each respective country. I begin with Germany, and then move on to the US.
The German Model
Germany maintains high standards for its craftspeople, or those who go through apprenticeship. “In the 1950s, 80 percent had only Volksschule ("primary school") education of 6 or 7 years. Only 5 percent of youths entered college at this time and still fewer graduated. In the 1960s, six percent of youths entered college. In 1961 there were still 8,000 cities in which no children received secondary education.” (https://en.wikipedia.org/wiki/Education_in_Germany#Apprenticeship, viewed on 6/1/2024). Germany is not a country populated by those without education, however. Those who did not receive higher education were still highly skilled, and upper middle class. Though more people pursue higher education now, skilled laborers are still highly sought after.
Before the 20th century, the relationship between an apprentice and his master was fatherly in nature. For the most part, those learning were very young. Masters had to impart not just skill, but also the virtues of a skilled laborer, as well as spiritual guidance. Training ended with the Freisprechung, or exculpation. Following this, the trainee could call himself a Geselle, or journeyman. As such, he could either become a master or work for a master. In these times, labor and crafts were known as the ‘virtuous crafts.’ More recently, those seeking apprenticeship must find a master, or Ausbilder. The Ausbilder must teach the craft, provide social skills and character lessons, and sometimes give room and board. Originally, most trainees had only primary education. Today, only trainees who have completed secondary school can attempt apprenticeships. Apprenticeship typically takes three years, during which the trainee both works and goes to vocational school. This entire way of doing things is known as the dual education system (Duale Ausbildung).
In Germany, this system was formalized by the passing of the Vocational Training Act of 1969 (Pritchard, 1992). It was further reformed in 2005. In the past, “vocational training was organized by the various guilds through apprenticeships, as their members sought to ensure that they had a talented labor pool to perpetuate their respective industries.” (www.technicaleducationmatters.org, viewed 6/1/2024). The Vocational Training Act made the system uniform across Germany, and formed the basis on which government, the private sector, and trade unions could work together to facilitate the dual system.
In The German Vocational Education and Training System: Its Institutional Configuration, Strengths, and Challenges, Solga et al. (2014) give considerable thought to the nature of vocational schools in Germany. Germany is well known for its high-quality educational system, including vocational education and training (VET). There are two primary aspects to this way of doing things: a company-based training program, done in conjunction with a school-based component, usually amounting to one or two days a week. Apprentices, as they are known, receive their more generalized secondary education in main subjects alongside the theory relevant to their occupation. This combination of theoretical and practical knowledge, obtained through both school and work, operates in concert with the public-private character of the system's governance. The recession of 2008 drew a great deal of international attention to this dual system: while youth unemployment increased greatly across Europe, this was not the case in Germany.
This dual system is to a high degree a part of the labor market structure. Job-relevant skills are rewarded by those who employ these students, as well as given weight during collective bargaining. The overall system depends greatly on how well firms are doing. This system of apprenticeship appears to be a desirable route into skilled labor for the many young people who are not able to continue on to higher education. It gives society skilled labor for both the service and occupational sectors. The primary negative, which is similar to higher education generally, is that it leaves out the low achievers. It is difficult for other countries to emulate the dual system: it has evolved over a long period, and the normative and institutional requirements for facilitating such a system are quite high. We can still conclude that making connections between schools and firms facilitates transitioning to the workplace, and that certification and standardization of training allow for transfer of skills between companies. This system works not only with the private sector, but with trade unions as well.
In Vocational Education and Training in Germany: Trends and Issues, Cockrill & Scott attempt a deep dive into the current structure of the German vocational education system and look at some of the key issues and stresses under which it operates. They begin by describing the dual system and how it is undergirded by the larger educational system. They cite some problems which may harm the continued worth of the program. Among the most important are pressure to differentiate the provision of training, demands for greater flexibility, and the need to redesign the way funding and costs are configured.
In What Matters in the Transition from School to Vocational Training in Germany: Educational Credentials, Cognitive Abilities or Personality?, Protsch (2011) further explores the dual system, specifically the transition from school to apprenticeship, or vocational training. He argues that the system is an institutional means of ameliorating the exclusion of people from specific class backgrounds from access to learning. Specifically, the author asks whether the transition from school to vocational training and apprenticeship allows students with lower secondary degrees or intermediate degrees to show their abilities, regardless of school credentials. This is further examined by an appeal to the place of the Big Five personality traits in the process. He finds that the type of school degree is vital to the transition process: specifically, the transition is easier for those with intermediate degrees than for those with lower ones. Further, he finds that the relative importance of personality, credentials, and abilities for success in the labor market is not universal; the factors that aid this transition vary significantly based on the nature of the school degree.
The US Model
Vocational school in the United States varies based on which state you are in. Vocational, or technical, schools are post-secondary schools in which students study after completing high school or getting a GED. They teach skills needed to allow students to get jobs in specific industries. Two-year colleges play a significant role, allowing for transfer of credits to four-year schools. Military and government-operated adult education centers are also employed.
In the past, vocational school was seen as providing a lower return on investment in the long run than an undergraduate degree. However, there are a number of crafts jobs which allow for a decent income, and cost significantly less time and money. “Even ten years after graduation, there are many people with a certificate or associate degree who earn more money than those with a degree” (Torpey, 2019).
Traditionally, high schools would give some vocational courses, such as home economics, wood and metal shop, typing, business courses, drafting, construction, and auto repair. (https://en.wikipedia.org/wiki/Vocational_education_in_the_United_States viewed 6/1/2024). These programs have largely been cut, however. This may be due to funding issues, and the push to emphasize academics due to the standards based education reform.
The largest discrepancy between vocational school and traditional schooling is the amount of time students must invest to finish their studies. Vocational schools mostly offer programs of one to two years. Traditional schools also require a broader education, whereas vocational schools focus specifically on what the student needs for their desired field.
The federal government is involved by virtue of the Carl D. Perkins Vocational and Technical Education Act. “The Office of Career, Technical, and Adult Education in the US Department of Education also supervises activities funded by the act, along with grants to individual states and other local programs.” (Office of Career, Technical, and Adult Education).
By the early 20th century, the US sought to emulate the German model. “Researchers such as Holmes Beckwith described the relationship between the apprenticeship and continuation school models in Germany and suggested variants of the system that could be applied in an American context.” (Beckwith, 1913). This system of industrial education grew following World War I, and became the contemporary vocational education system. The following is a rough timeline of events:
· Vocational education was initiated with the passing of the Smith-Hughes Act in 1917, set up to reduce the reliance on foreign vocational schools, improve domestic wage earning capacity, reduce unemployment, and protect national security.
· Around 1947, the George-Barden Act expanded federal support of vocational education to support vocations beyond agriculture, trade, home economics, and industrial subjects.
· The National Defense Education Act, signed in 1958, focused on improving education in science, mathematics, foreign languages, and other critical areas, especially in national defense.
· In 1963, the Vocational Education Act added support for vocational education schools for work-study programs and research.
· The Vocational Education Amendments of 1968 modified the Act and created the National Advisory Council on Vocational Education.
· The Vocational Education Act was renamed the Carl D. Perkins Vocational and Technical Education Act in 1984.
· Amendments in 1990 created the Tech-Prep Program, designed to coordinate educational activities into a coherent sequence of courses.
· The Act was renamed the Carl D. Perkins Career and Technical Education Act of 2006.
(https://en.wikipedia.org/wiki/Vocational_education_in_the_United_States, viewed 6/1/2024)
In Depth Over Breadth: The Value of Vocational Education in U.S. High Schools, Kreisman et al. examine how vocational school in the US is seen to contribute to national welfare and education. In 1983, A Nation at Risk was published. Since then, those in decision-making positions have attempted to halt the decline in academic readiness among US students. International standardized tests give worrying signs as to our country's ability to meet higher education needs, underscoring that many young people are not ready for college or work. As a result, many states have made high school graduation more challenging. Today, high school students complete more courses and higher-level studies than they did three decades ago. This would seem to portend beneficial change on the topic.
These gains, however, have come at the cost of vocational, career, and technical education. Some argue this is good: it better prepares students for higher education and keeps them out of dead-end jobs. Others point out that there are not enough skilled professionals, and that for a subset of students, vocational school could be the distinction between middle- and lower-class lives. Forcing everyone into academic courses while neglecting the trades results in a dearth of tradespeople and corresponding credentials. This makes us wonder, "What is the relationship between modern-day vocational or career and technical coursework and high-school graduates' success in college or in the workforce? Is vocational education an off ramp to college foisted upon lackluster students, or a different and less costly path toward adult success?"
The authors examined 4,000 adults to answer these questions. Their study looked at a representative sample of young working professionals and found that students will follow the vocational path if it is made available to them. It is not the case that 'unfit' students are left to languish in these schools. "Further, we find that not all vocational classes are equal: students earn about 2 percent more annually for each advanced or upper-level vocational class they take, but enjoy no wage premium for having completed lower-level or introductory vocational study." (Kreisman, 2019).
Several conclusions can be drawn from these findings. Primarily, those who benefit from vocational coursework seek out those courses; programs which limit this, like a higher academic course requirement, are not best for these students. Further, the benefits accrue to students who pursue advanced study in a specific field, not to those who sample scattered introductory courses. Because of this, depth of study should be facilitated.
In Economic Returns to Vocational Courses in U.S. High Schools, Bishop and Mañe examine further the economic gains of high-school-level vocational classes. What they call high school career-technical education (CTE) in the US is big business. Students spent "1.5 billion hours in vocational courses of one kind or another. Of the twenty-six courses taken by the typical high school graduate, 4.2 (16%) are career-tech courses (NCES, 2003a). Courses in general labor market preparation (principles of technology, industrial arts, typing, keyboarding, etc.) and family and consumer sciences are offered in almost every lower and upper secondary school." Further, "High school graduates in the year 2000 took 1.2 full-year introductory CTE courses during upper secondary school and probably almost as many during middle school (NCES, 2003a)." Occupation-specific education is also available to most high school students.
95% of high school students attend comprehensive high schools, 60% of which offer preparation for specific labor markets. Students without access to such programs can spend a portion of their school days at a local vocational-technical center; these account for roughly 6.2% of high schools. In many larger, urban school districts, students can attend a full-day CTE school. "About 4.6% of the nation's high schools are of this type offering concentrated occupational studies and related academic coursework all in one building (Silverberg et al., 2003)." Dedicated CTE schools offer a larger range of technical programs and are usually of higher quality. Many students take part in CTE: "Nearly every graduate takes at least one CTE course and 90.7% take at least one occupation specific course."
“Forty-four percent take three or more occupation specific courses and 25% take a sequence of three or more courses in a specific occupational field (referred to as an occupational concentration) (Levesque, 2003). Occupational concentrators allocate about one-third of their time in high school to vocational courses. The total number of occupational vocational credits earned has been remarkably stable: 3.00 for 1982 graduates and 3.03 for year 2000 graduates (Digest of Education Statistics, 2003: 163). Averaging across all graduates, introductory vocational courses accounted for 4.5% of courses taken during the four years of high school by graduates in the year 2000. Occupation-specific vocational courses account for 11.6% of courses taken. The twenty-five per cent of graduates who are occupational concentrators allocate about one-third of their time in high school to vocational courses.”
Bailey and Berg, in their piece The Vocational Education and Training System in the United States, detail the system further. They argue that the relationship between work and education is clear and obvious: people who do not graduate high school usually cannot find a well-paying job, and experience higher levels of poverty and unemployment.
Finishing high school is seen as the minimum, with regard to education and job preparedness. Those without it usually work in low-skilled or unskilled jobs. Those with an associate degree or certificate usually work in technical or skilled jobs. These students are less likely to reach professional or managerial work, which typically goes to those holding bachelor's degrees, and for some positions, postgraduate degrees.
Data and Methods
To complete my analysis, I needed data on GDP by year for Germany and the US, as well as population for each year. I was able to find this, through the websites listed in my References section, for the years 2005 to 2020. I also included the percentage of students in vocational schools in each country for these years. It proved very difficult to find other demographic or economic variables available for both Germany and the US over these years, which results in some confusing outcomes in my analysis, discussed later.
I begin my analysis by running a fixed effects regression of German versus US GDP per capita, the idea being that GDP per capita is a decent indicator of the economic vibrancy of the country in question. I computed GDP per capita for the years 2005 to 2020 by finding GDP and population for each country and year, then dividing GDP by population.
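The per capita calculation described above can be sketched in a few lines of Python with pandas; the GDP and population figures below are illustrative placeholders, not the actual values from my dataset.

```python
import pandas as pd

# Illustrative country-year panel; GDP and population values here are
# placeholders, not the actual figures from the sources in my References.
data = pd.DataFrame({
    "country":    ["Germany", "Germany", "USA", "USA"],
    "year":       [2005, 2020, 2005, 2020],
    "gdp_usd":    [2.85e12, 3.85e12, 1.30e13, 2.10e13],
    "population": [82.5e6, 83.2e6, 295.5e6, 331.5e6],
})

# GDP per capita = GDP / population, as described in the text.
data["gdp_per_capita"] = data["gdp_usd"] / data["population"]
```

Doing the division in code (rather than cell by cell in Excel) also makes unit errors easier to spot, since every row is computed the same way.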
After comparing GDP per capita between the US and Germany, to get a baseline of how these countries are doing relative to one another, I regressed GDP per capita on the percentage of students in vocational schools, a dummy for whether the country was Germany or the US, and the year. I then ran the same regressions for Germany and the US individually, to examine the relationships within each country and better compare them.
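A minimal sketch of this specification, using statsmodels' formula interface on a hypothetical panel (the column names and all numbers below are my own illustrative assumptions, not the original data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: GDP per capita (US$) and vocational-school share
# for both countries over several years (illustrative numbers only).
panel = pd.DataFrame({
    "country":   ["Germany"] * 4 + ["USA"] * 4,
    "year":      [2005, 2010, 2015, 2020] * 2,
    "gdp_pc":    [34000, 41000, 41500, 46000, 44000, 48000, 56000, 63000],
    "voc_share": [0.52, 0.50, 0.48, 0.47, 0.12, 0.11, 0.10, 0.09],
})

# Pooled regression with a country dummy and a year trend,
# matching the specification described in the text.
pooled = smf.ols("gdp_pc ~ voc_share + C(country) + year", data=panel).fit()
print(pooled.params)

# Separate regressions for each country, as in the text.
for name, grp in panel.groupby("country"):
    fit = smf.ols("gdp_pc ~ voc_share + year", data=grp).fit()
    print(name, fit.params["voc_share"])
```

With only sixteen country-year observations in the real data, the per-country regressions leave very few degrees of freedom, which is one reason the coefficients should be read cautiously.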
Results
For my first regression, as stated above, I compared the GDP per capita for the US and Germany. The output is below.
This shows that Germany has a GDP per capita of 10,221.63 dollars less than the US. This is unsurprising, as the US leads the world in GDP per capita, and is a significantly larger and more robust economy.
I then ran a fixed effects regression of percentage of vocational school students on GDP per capita. Results are below.
Here, the coefficient implies that a one percentage point increase in vocational school participation is associated with a 3,274,199 dollar increase in per capita income; year was included as a control variable to tease out any time trends. This makes little to no sense, and I will elaborate in my Discussion section.
I then ran individual regressions of the same variables for the US and Germany separately. Below are the findings.
Germany:
US:
These results show a massive increase for Germany, as well as a massive decrease for the US, in per capita GDP based on a one unit increase in vocational school students. Again, this makes little to no sense.
Discussion
While the first regression seems accurate in teasing out the differences in per capita income between the US and Germany for 2005 to 2020, the remaining regressions do not make intuitive sense. First and foremost, it is highly unlikely that a one-unit increase in vocational school attendance, for either country, could change per capita GDP by millions of dollars, whether positively, as for Germany, or negatively, as for the US. One explanation is that the explanatory power of the included variables was too weak, and that more controls and fixed effects were needed to make the study more accurate. Had I had information on race, gender, or other demographics, this could have been a much more nuanced study; variables such as the percentage of workers holding degrees, and which degrees, would also have allowed for more detail. Working with what I was able to access, however, has led to results which seem to defy common sense.
What is more likely is that an error in my computation applied a factor of 1,000 to one of the numbers, meaning the differences are much more likely to be in the thousands rather than the millions. This makes intuitive sense, and after reviewing my Excel file, this seems to be the case.
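A quick back-of-the-envelope check, with made-up numbers, illustrates how a units mismatch of this kind inflates every downstream figure by exactly a factor of 1,000:

```python
# Hypothetical figures: GDP recorded in thousands of US$ (a common
# convention in source tables) and population in raw counts.
gdp_in_thousands = 3.0e9      # i.e., $3.0 trillion total GDP
population = 83.0e6

# Wrong: dividing without converting units first.
wrong_gdp_pc = gdp_in_thousands / population            # ~36 "dollars"

# Right: convert GDP to dollars before dividing.
right_gdp_pc = gdp_in_thousands * 1_000 / population    # ~36,145 dollars

# Any regression coefficient estimated from the mismatched series
# inherits the same factor-of-1,000 distortion.
assert abs(right_gdp_pc / wrong_gdp_pc - 1_000) < 1e-9
```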
What, then, can we say about the comparison between the US and Germany, as regards the percentage of vocational school attendees and per capita GDP? Each country works differently. For the US, an increase in this population is associated with lower GDP per capita; for Germany, a higher one. It is important to remember, from the Literature Review, that these countries have vastly different paradigms and social structures. Considered in that light, these differences make sense.
Conclusion
German and US vocational school image and performance are markedly different. Germany has a lower per capita income, and per capita income increased over time. For every percentage point increase in German vocational school attendance, per capita income went up by $3,274; for every percentage point increase in the US, it went down by $1,191. I surmise that, at least in Germany's case, more vocational school attendance will lead to higher economic activity. Based on anecdotal evidence and interviews, German vocational school can lead to a happy, upper-middle-class life for most. Vocational school in the US is looked down upon, given our obsession with the four-year degree as the only means to economic empowerment. I hope my study sheds light on alternatives.
Discussion Questions for Further Work
• Why would vocational attendance help Germany but hurt the USA?
• What other covariates should I include in my analysis?
• What do these results mean in the context of Human Capital?
• Should the US try to emulate Germany? Or vice versa?
References
Torpey, Elka (January 2019). "High-wage occupations by typical entry-level education, 2017". Bureau of Labor Statistics. Department of Labor. Retrieved February 9, 2019.
Beckwith, Holmes (1913). German Industrial Education and its Lessons for the United States. Washington, D.C.: United States Bureau of Education.
(https://en.wikipedia.org/wiki/Vocational_education_in_the_United_States, viewed 6/1/2024).
https://en.wikipedia.org/wiki/Education_in_Germany#Apprenticeship, viewed on 6/1/2024
www.technicaledeucationmatters.org, viewed 6/1/2024
Solga, H., Protsch, P., Ebner, C., & Brzinsky-Fay, C. (2014). The German vocational education and training system: Its institutional configuration, strengths, and challenges (No. SP I 2014-502). WZB Discussion Paper.
Cockrill, A., & Scott, P. (1997). Vocational education and training in Germany: Trends and issues. Journal of vocational education and training, 49(3), 337-350.
Protsch, P., & Dieckhoff, M. (2011). What matters in the transition from school to vocational training in Germany: Educational credentials, cognitive abilities or personality?. European Societies, 13(1), 69-91.
Office of Career, Technical, and Adult Education
Kreisman, D., & Stange, K. (2019). Depth over breadth: The value of vocational education in US high schools. Education Next, 19(4), 76-84.
Bishop, J. H., & Mañe, F. (2005). Economic returns to vocational courses in US high schools. In Vocationalisation of secondary education revisited (pp. 329-362). Dordrecht: Springer Netherlands.
Digest of Education Statistics, 2003: 163
Bailey, T., & Berg, P. (2009). The vocational education and training system in the United States. In Vocational Training (pp. 271-294). Routledge.
For German vocational school numbers and German population:
https://www.statista.com/statistics/1182007/number-vocational-trainees-germany/
For USA population:
For USA vocational school:
https://nces.ed.gov/pubs/web/95024-2.asp
For GDP measures for both countries:
https://www.macrotrends.net/global-metrics/countries/DEU/germany/gdp-gross-domestic-product
Environmental Policy Final
Professor Nina Kelsey
Carl Mackensen
Word Count: 2173
Introduction
There have been perhaps few projects in history that so grip the human mind and heart as space exploration. Perhaps nothing has had the same draw on so many imaginations since the discovery of the New World. It is in that spirit that I recommend NASA support the privatization of all aspects of space exploration and development, now that governments have proven exploration is technically feasible. This will truly transform humanity.
Negotiations
On December 8, 1966, it was announced that the twenty-eight countries forming the United Nations Outer Space Committee had reached agreement on a treaty that for the first time declared principles governing the activities of states in the "exploration and use of outer space, the moon, and other celestial bodies." (Johnson, 1966). On December 17th, 1966, the treaty was put forward by the Political Committee of the General Assembly with full support and no dissenters. On the 19th, it achieved a unanimous vote in the UN General Assembly. For the first time, "nations often in conflict with one another and adhering to widely divergent political philosophies have agreed on the first Treaty of general applicability governing activity in outer space." (U.N. Doc. A/C.l/L.396 (1966)).
The treaty's official name is the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies, also known as the Outer Space Treaty (hereafter OST). It is the foundation of all space law and has numerous parties. It was put forward through the United Nations, and on January 27th, 1967, was signed by the United States, United Kingdom, and the Soviet Union. It entered into force on October 10th, 1967. As of March 2024, "115 countries are parties to the treaty – including all major spacefaring nations – and another 22 are signatories." (Treaty, retrieved September 16th, 2017)
The OST was implemented in part due to the rise of the intercontinental ballistic missiles of the 1950s, which traveled through space. The USSR launched the first satellite, Sputnik, in October 1957. This led to a ‘space race’ between the USA and USSR. In turn, this resulted in the promulgation of the view that outer space be exempt from military development, or ownership. All members of the UN GA supported a resolution banning the use of weapons of mass destruction in space on October 17th 1963. Arms control talk continued in December 1966, leading up to the taking up of the OST. “Key provisions of the Outer Space Treaty include prohibiting nuclear weapons in space; limiting the use of the Moon and all other celestial bodies to peaceful purposes; establishing that space shall be freely explored and used by all nations; and precluding any country from claiming sovereignty over outer space or any celestial body.” (Dembling, page 419)
Between 1968 and 1984, the OST led to four further agreements, including "rules for activities on the Moon; liability for damages caused by spacecraft; the safe return of fallen astronauts; and the registration of space vehicles." (Buono, 2020). The OST sat at the nexus of international treaties and great-power negotiations aimed at securing world stability in the nuclear age. The OST also makes clear that space is available to all people, from all countries, for all humankind. It is based largely on the Antarctic Treaty of 1961, which explicitly barred not only territorial claims but any buildup of military forces. (OST, 2021). Because of this, space law remains largely unsettled around newer activities such as mining celestial bodies like the Moon or asteroids. It is nonetheless foundational to space law, and forms the basis for international projects both present and future, including the International Space Station and the US Artemis Program, which seeks to return astronauts to the Moon.
For the purposes of this piece, we are most interested in Article II of the OST, which forbids any governing body from 'appropriating' any celestial body, including asteroids, planets, and moons, "whether by declaration, use, occupation, or any other means." (Frakes, 2003). Artificial objects in space, however, are considered to be within the jurisdiction of the country that launched them (OST Article 8), and that state is further responsible for any damages the object may cause (OST Article 7). The OST is primarily an arms treaty that mandates the peaceful use of outer space and its objects, and as such does not speak to mining or ownership of resources in space. It is therefore debatable whether the mining and removal of resources in space falls under the OST's prohibition on taking ownership of things in space, or is instead permissible commercial use. (Koch, 2008).
In 2015, private companies in the US pressed the US government, which in turn passed the US Commercial Space Launch Competitiveness Act of 2015, legalizing mining in space. (McCarthy, 2015). Luxembourg, Japan, China, India, and Russia are introducing similar legislation. The US has also put forward a set of bilateral agreements called the Artemis Accords, which attempt to clarify some of the ambiguous language of the OST, including around the use of resources found in space.
The Client
The client to which this piece is addressed is the National Aeronautics and Space Administration, or NASA, an independent agency of the US federal government. Its mandate includes the civil space program and research in both space science and aeronautics. NASA was founded in 1958 as the successor to the National Advisory Committee for Aeronautics, with the aim of making space exploration more civilian in character, focused on science and peaceful operations in space. It has led the space exploration programs of the US, including Projects Mercury and Gemini, the Apollo Moon landing missions, Skylab, and the Space Shuttle, and currently supports the International Space Station as well as the above-mentioned Artemis Program.
The headquarters of NASA is in Washington, DC. Employees are required to be US citizens, barring exceptional circumstances. The administrator is nominated by the President and confirmed by the US Senate. NASA has four strategic goals:
· Expand human knowledge through new scientific discoveries
· Extend human presence to the Moon and on towards Mars for sustainable long-term exploration, development, and utilization
· Catalyze economic growth and drive innovation to address national challenges
· Enhance capabilities and operations to catalyze current and future mission success
In terms of resources, NASA’s budgets are constructed by NASA and given approval by the governing administration, and then passed on to the US Congress. Below is a breakdown by year:
Year | Budget Request (bil. US$) | Authorized Budget (bil. US$) | U.S. Government Employees
2018 | $19.092 | $20.736 | 17,551
2019 | $19.892 | $21.500 | 17,551
2020 | $22.613 | $22.629 | 18,048
2021 | $25.246 | $23.271 | 18,339
2022 | $24.802 | $24.041 | 18,400 (est.)
NASA’s resources are not simply monetary, however. The organization has a wealth of intellectual capital in its workforce, a history of accomplishment, and a recognizable brand synonymous with elite performance of its mission.
The strategic position of NASA in the exploration of space and the conduct of space science cannot be overstated. With the possible exception of the Russian space program, no other entity has been so responsible for developing the resources of space and surpassing the hurdles to exploration. NASA, however, is not nearly as well funded as the general public believes. There was for some time a movement, 'A Penny for NASA,' which sought to give NASA one percent of the entire federal budget. In reality, as stated above, its budgets have been far below this, despite people believing the figure to be multiple times higher. As space exploration becomes privatized, NASA can return to its mission of pushing the envelope of exploration and technology. It therefore has an existential impetus to encourage the movement of established technology and methods to the private sector, so that it can focus on what it does best.
The obstacles that lie in the way of NASA's return to an emphasis on exploration, through privatization of all other missions, are primarily budgetary and PR-related. Many believe that spending money on the space program at all is foolish, and maintain that such money should be spent here at home; they seek to defund NASA and mothball all of its programs. Alternatively, there are those who maintain that privatization of anything, particularly something previously provided by government, is tantamount to evil, allowing the greedy to take over yet another industry and turn space into a trash heap or strip mine. NASA's capability to meet these obstacles, and to push forward new legislation, rests primarily on its expertise to date and its articulation of exploration as paramount to the human experience. Without exploration, what are we? We would never have stood upright to look at the horizon, or developed tools to make our thriving easier. From a more nuts-and-bolts perspective, NASA can point to spin-offs of technology originally developed for space exploration, such as electric motors, solar power, and energy efficiency. Should it fully support privatization, it can point to the further development of new technologies and falling prices for established methods, as this is what competition engenders.
Recommendations
NASA wants to explore space. This can be pursued in a number of ways, such as increasing its budget, devoting more people to the organization, increasing ties with space organizations abroad, and so on. The best way for NASA to achieve this goal, in my opinion and as I have put forward above, is to privatize as much of the space exploration infrastructure as possible. Don't just allow SpaceX and Blue Origin to fly capsules to the International Space Station. Allow companies to mine asteroids for precious metals. Allow people or organizations to purchase real estate on Mars. Allow the bounty of exploration to fall not just to the general public in the form of increased scientific knowledge, but also as profits to private companies the public can invest in. NASA already partners with the European Space Agency, Japan Aerospace Exploration Agency, Roscosmos, China National Space Administration, and Indian Space Research Organization. Let these partnerships set the stage for new international law around privatization.
How can NASA accomplish these goals? What actions can be taken? NASA can pursue a multi-pronged approach to the legalization of privatization, which would facilitate exploration. Legalization of privatization could be pursued at both the international and domestic levels. This could be done via both the UN, as the original OST was, and through the US Congress, as domestic funding is done. The negotiation strategy would build on NASA’s history as an elite organization that accomplishes moon-shot goals. In both venues, NASA could make a similar argument, but targeted at different segments of society. Domestically and internationally, NASA could cite the rich history world powers have of initially financing exploration, such as the exploration of the New World, followed by privatization of activities once the large expenditure of exploration has been completed by a government. Columbus may have been the first to reach the New World, but it was the Dutch East India company which truly began the development of it. This argument could be put forward both for the US, and internationally for other countries. In fact, in terms of NASA’s objectives, the more entities that enter into the competitive sphere the better, for that will bring down prices and increase need for further pure science exploration, just what NASA seeks. Hence this argument is truly international in scope.
The obstacles NASA would face would come primarily, as stated above, from those who would prioritize other uses for research and development dollars. NASA could counter this by arguing that it is simply doing what it has always done: facilitating exploration. That many people would gain financially is simply an added bonus, one in no way related to the amount of money spent on NASA. NASA could again emphasize the value of spin-offs and the importance of exploration for humankind, and make clear that, through the legalization of the privatization of space resources, the whole world would stand to benefit.
Conclusion
It is here that I leave my piece. Exploration is something that humans have always done, and excelled at. It may be one of the few attributes that make us truly unique as a species. As such, there is little that excites the mind more than finding what is beyond the nearest horizon, and how we can conquer it and flourish there. Were NASA and international governments to privatize all space exploration, humanity would enter a new era. An era of scientific exploration mixed with entrepreneurial development. An era the likes of which we can scarcely imagine today.
References
Dembling, Paul G., Arons, Daniel M., The Evolution of the Outer Space Treaty, Journal of Air Law and Commerce 33 (1967) pp. 419 - 456
President Lyndon B. Johnson. U.S./U.N. Press Release 5011, reprinted in 2 PRESIDENTULZ. DOCUMENTS 1781 (1966); 11 DEP'T STATE BULL. 912 (1966); N.Y. Times, 9 Dec. 1966, at 1, col. 8.
"Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies," annexed to a resolution of the General Assembly. U.N. Doc. A/C.l/L.396 (1966). The text of the treaty is reproduced in 33 J. AIR L. & COM. 132 (1967)
Buono, Stephen (2 April 2020). "Merely a 'Scrap of Paper'? The Outer Space Treaty in Historical Perspective". Diplomacy and Statecraft. 31 (2): 350-372
"The Outer Space Treaty". www.unoosa.org. Retrieved 24 September 2021.
Frakes, Jennifer (2003). "The Common Heritage of Mankind Principle and the Deep Seabed, Outer Space, and Antarctica: Will Developed and Developing Nations Reach a Compromise?". Wisconsin International Law Journal (21 ed.): 409
Koch, Jonathan Sydney (2008). "Institutional Framework for the Province of all Mankind: Lessons from the International Seabed Authority for the Governance of Commercial Space Mining". Astropolitics. 16 (1): 1–27
U.S. Commercial Space Launch Competitiveness Act (H.R.2262). 114th Congress (2015–2016) Sponsor: Rep. McCarthy, Kevin. 5 December 2015
Heiney, Anna (14 August 2020). "NASA, SpaceX Targeting October for Next Astronaut Launch". blogs.nasa.gov. Retrieved 27 August 2020.
Mann, Adam; Harvey, Ailsa (August 17, 2022). "NASA's Artemis program: Everything you need to know". Space.com.
"NASA FY2022 Strategic Plan" (PDF). Archived (PDF) from the original on September 7, 2022
Human Capital and Public Finance Book Report on The Future of Work
Burt Barnow
Book Report
Carl Mackensen
The book I am reviewing is The Future of Work: Robots, AI and Automation, by Darrell M. West, published in 2018. I will summarize each chapter, then critique the book, posing questions about the book itself as well as further questions the book raises which it would have been nice to see answered. The book is broken into sections: the first has chapters on robots, AI, and the Internet of Things; the second on rethinking work, lifetime learning, and a new social contract; the final section includes chapters on whether politics is up to the task, and on economic and political reform. Overall, the book explores the impact of emerging technologies on work, education, politics, and public policy. It argues we must rethink work and move toward lifetime learning, and it questions whether the US is up to the task of easing this transition. If we cannot meet the coming issues, there could be serious economic and political disruptions.
Section One: Summary
Robots
The argument for replacing labor with technology is an old one. In brief, the following quote sums it up: "They're always polite, they always upsell, they never take a vacation, they never show up late, there's never a slip-and-fall, or an age, sex, or race discrimination case." (Page 3). Also, a computer kiosk doesn't need to be paid 15 dollars an hour to take orders. Digital tools cut costs, improve productivity, and reduce reliance on human employees. Tech is replacing both blue- and white-collar jobs. This is not the first time we have faced a 'megachange': one hundred years ago we moved from an agrarian to an industrial economy, and now we are moving from an industrial to a digital economy. Poor governance is a serious threat to expanding the definition of jobs, revising the social contract, and extending models of lifetime learning. Given current political dysfunction, inequality, polarized media coverage, and societal divisions, it is not clear that we can weather the storm of disruptions. We need more effective governance, and its absence could undermine democracy.
Costs of robots have fallen, leading to greater use as a substitute for labor, particularly in China and abroad. In the US, the push for a higher minimum wage has exacerbated this. New robots do everything from working in hazardous areas to culling sick chickens to providing hotel services to enabling education through virtual telepresence. Robots used to perform mechanical, repetitive tasks; today they go further, learning from the experiences of other devices. They sense and learn, performing specific tasks and adjusting as they go. Some feel a sense of disappointment with these advances: they wanted tech to empower people, undermine social hierarchy, and revolutionize daily life, and this has not occurred. Some cite rules and regulations as slowing development, but most big problems are beyond tech: access to healthcare, poverty rates, lack of access to education. We need to address the underlying social and economic problems. Sometimes tech makes things worse, increasing inequality or heightening social and cultural tensions. Further, who is liable for a robot's actions? What about the gig economy, where workers don't have benefits?
AI
Some see no threat; others, such as Musk, see large disruption and even existential risk. AI is spreading into everything. It can improve human productivity, change communication and commerce, and change how people get information by introducing algorithms into decision making. It is causing change in the workforce and in daily life. This section looks particularly at AI, machine learning, facial recognition, autonomous vehicles, drones, VR, and digital assistants.
AI could transform much. Machines make decisions using certain criteria. They are included in finance, transportation, defense, resource management, and beyond. One example would be stock market trades powered by quantum computers; another, smart buildings and energy. Defense data mining for unusual or suspect situations is another possibility. Service delivery would also be a venue, such as tailored medical help in emergencies. Bankruptcy law is another, with systems reading past cases and evolving. China is also investing heavily, particularly in face and voice recognition, using massive data for administrative operations, law enforcement, and national security. Every country needs data for AI to work.
Internet of Things
New tech increases efficiency, but business needs people to act as consumers. Tech can create jobs and aid society, and how innovation affects workers is also important. The IOT links a lot of things together through high-speed communication and intelligent software. High-speed networks, sensors, and automated processes are all part of this. Health care, transportation, energy management, and public safety could all be affected. The developed world will be increasingly connected, accelerating tech innovation and its impact on society. Increased convenience will be the name of the game, and it will be disruptive to the social, economic, and political status quo. High-speed 5G networks will be instrumental. Small devices will do high-level computations. Connected devices will number in the billions of nodes. High-speed internet and instant transactions are already coming to fruition. The huge amount of data generated will be analyzed in real time for better decision making. Suggestions will be more personalized and immersive, resulting in enhanced experiences. This can be applied to appliances, home security, energy grids, and entertainment. Faster uploads and downloads facilitate digital services. Geography will no longer be a constraint, particularly for underserved rural and urban populations. Cloud storage facilitates this further. Data analytics mines health information for the benefit of the consumer, through deidentifying, cleaning, aggregating, and probing data in large databases. Tech helps increase quality and reduce costs.
Rethinking Work
There is the possibility of a new human era. We need to rethink work, and what counts as work. We need to provide benefits for those whose work doesn’t provide them, and broaden work to include volunteering, parenting, mentoring, and expand leisure time activities.
Machines will displace workers. Between tech and outsourcing, there just isn't as much need for as many full-time workers. Large firms with small US-based full-time workforces will define the 21st century economy. It will be characterized by a small workforce, an external supply chain, and reliance on independent contractors or outsourcing. New jobs today are in online ventures. The number of jobs lost in some sectors exceeds those being created. The biggest falloff is among the high-school educated and Black men. Increased tech and trade are responsible for reduced demand for young and middle-aged male employees. Worker prosperity is also a concern. Worker incomes have suffered over the past several decades due to tech, and also decreased union bargaining power. In some fields, tech is substituted for labor. Tech destroys jobs and creates new and better ones, but a smaller number of them.
“Researchers predict AI will outperform writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). These experts believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years.” (page 68)
Tech is changing operations, not just the number of jobs. It is hard to predict the impact due to its early stage. “47 percent of US workers have a high probability of seeing their jobs automated over the next 20 years” (page 70). Most likely to be automated are “physical activities in highly structured and predictable environments…which make up 51 percent of activities in the economy and account for almost $2.7 trillion in wages” (Page 71).
Others dispute this, saying new jobs will be created and gains and losses will even out. Work will be transformed, but humans will still be needed. The high and low ends of work may be favored, while the middle may be hollowed out. Some see the need for robots as the population grows and ages to maintain the standard of living. Others still argue that completely new jobs will be developed, as has happened previously.
This brings us to new models and the sharing economy; business models are changing. The "fissured workplace," in which "business leaders have devolved authority to a broad network of outside companies, distant suppliers, and remote managers" (page 79), will be the name of the game. The sharing economy, "the peer-to-peer-based activity of obtaining, giving, or sharing access to goods and services, coordinated through community-based online services" (page 80), will also develop, as will the gig economy. Sharing will continue to grow: ride sharing, bike sharing, Airbnb, eBay, Craigslist, second-hand goods on Amazon, and renting goods. This is good for flexibility and part-time work, but there are no benefits for the workers. Work can also be broadened to include part-time positions, volunteering, mentoring, and parenting. Millennials have different approaches to work and leisure, and want to help their communities. With less work and more leisure, we should think about making volunteer work eligible for social benefits.
There could be more leisure time. If workers are not needed, we can construct meaning outside of work. Even if we work, we may have more leisure. The time for nonwork activities such as art, culture, music, sports, and theater will grow. We will move from consumption to creativity. There will also be more time for family and friends, as well as hobbies.
A New Social Contract
The new economy doesn’t guarantee income, health care, or retirement benefits. Employers are moving to temporary staff. The lack of benefits could lead to societal discontent. “Employment is becoming less routine, less steady, and generally less well remunerated. Social policy will therefore have to cover the needs of not just those outside the labor market, but even many inside it.” (Page 89). The lack of full time jobs would exacerbate socioeconomic divisions by weakening the distribution of benefits. New models are needed. Some possibilities include establishing citizen accounts with portable benefits provided by the government, paid family and parental leave, revamping the earned income tax credit to help the working poor, expanding trade adjustment assistance for technology disruptions, providing a UBI, and deregulation of licensing requirements so that it is easier to pursue part time positions.
Who should pay? Bill Gates proposed a 'robot tax.' Some say the problem isn't automation, but the inequality caused by it. We may end up with inequality, social conflict, political unrest, and a repressive government. How do we retrain workers, and who pays the transition costs? We could raise the income tax for high earners in the top 1%, though the amount needed would probably be more. We could enact a progressive tax on high-consumption goods. A solidarity tax, taxing the net property, stock, pension, and financial assets owned by high-net-worth people, is also a possibility that is used in some countries. Trickle-down economics, in which benefits go to the top 1%, has been shown not to work.
Lifetime Learning
With rapid technological, organizational, and economic transition, it is vital that people engage in lifelong learning. We can expect to switch jobs, see whole sectors disrupted, and need to develop additional skills due to economic shifts. There are different possibilities for lifetime learning: community colleges, private businesses, and distance learning can all offer vocational training that is inexpensive and accessible to adults. However, these skills need to be those needed for the long term, and young people need the most relevant training. We also need some way for people to pay for new skills acquisition. Disruption, driven by the rise of digital tech, will be the hallmark of the future workforce.
Is Politics Up to the Task?
A major challenge currently is how to generate a social consensus around needed workforce and policy changes. Changes could threaten income provision, health benefits, and retirement support. Developed countries could have underemployment or unemployment, which could lead to risks to civil peace and prosperity. Things are currently very polarized. It is hard to get people to think about digital disruption and the future of work.
There have been large disruptions in the past. Changing from agrarian to industrial was devastating for workers. Real wages fell 10 percent between 1770 and 1810, and didn't rise until 60 to 70 years later (page 128). Change brought severe transition costs. Mass production and factories created the need to retrain workers, address food safety, enact child labor laws, reduce economic concentration, and manage mass migration. Leaders devised new policies and built new business models, and economic and political reforms helped societies adapt. Primary elections, direct election of senators, and state initiatives gave people a voice. Social Security and unemployment insurance were started. Women were given the vote and the income tax was introduced. Following WWII, there was similar turmoil: the World Bank and IMF were started, and the Marshall Plan was introduced.
Political and business leaders have difficulty coming up with realistic ideas that Congress will agree on. Tech can undermine truth and trust. Public discontent is signaled through politics. Inequality makes it impossible to finance what is needed, and this is tied to emerging technologies. It is not just a financial issue, but affects politics and basic governance. The rich are much more involved in politics and hold views different from those of the public, being far more conservative on issues related to social opportunity, education, and health care. They do not support a major role for the public sector. There is a strong link between 'affluence and influence.' All of this makes everyday people cynical.
'Flexicurity' is the provision of benefits separately from jobs. We could also change the number of hours worked. Government will lose tax dollars with autonomous vehicles and other progress. The challenge is to bring the benefits of the digital revolution to a wider range of people. Many don't have digital access, including 20% in America and half the population around the world.
There are risks to inaction. Worsening inequality and social upheaval are some possibilities. Growth alone doesn't help inequality. Trump saw manufacturing and trade as bad for workers, but in reality, the problem has spread far beyond that. There is a mismatch between economic output and political representation. The 15% of American counties that voted for Hillary Clinton, largely on the coasts, generated 64% of GDP; the other 85% of counties, which voted for Trump, generated 36%. These areas felt neglected. US geographical economic inequality is growing. There is a push against globalization, free trade, immigration, and open economies. There is also a decline in trust in journalism coupled with a rise in disinformation campaigns.
Economic and Political Reform
Politicians could take short-term measures for security that are actually detrimental in the long run. There are, however, some short-run answers: developing a new concept of work that includes parenting, mentoring, and volunteering; enacting paid family and medical leave; expanding the earned income tax credit; and improving health, education, and well-being.
To address economic dislocations, enacting universal voting to reduce political polarization, reducing geography-based inequities, improving legislative representation, abolishing the electoral college, enacting campaign finance reform, and adopting a solidarity tax to finance needed social programs would all help.
In the past, jobs weren't the only means of having meaning in life. Meaning used to be linked to your family, ethnic group, religion, neighborhood, or tribe. We could return to this, and have a job be only part of what we do. We need to be able to earn income and social benefits outside of a job. Some possible means include privately operated citizen accounts, worker-controlled benefits, or government-run benefit exchanges. Workers should control their benefits, and they should be portable across employment and geography. The demise of the American Dream and the social mobility associated with it has impacted the working and middle classes negatively. We need to help the health and education of those negatively impacted by tech innovation and economic disruption, including by improving pre-school and worker training.
We also need a new kind of politics. We are highly polarized around the role of government. Conservatives want less of it, while liberals believe the public sector has important tasks: stabilizing the economy, dealing with imperfect markets, and helping people adjust.
Section Two: Critique
In this section I detail some of the questions I had after completing this work. Some concern the basic foundations of the text; others are questions that I would have liked to have seen answered, but weren't.
Does the author provide adequate evidence for conclusions and recommendations?
The author provides virtually no evidence for his conclusions and recommendations. He cites a few quotes from famous thinkers here and there, and the work is sprinkled with references to the past, but there is no true evidence or quantitative analysis anywhere in it. This is a core weakness of the piece, as it makes all of its arguments read as off-the-cuff armchair philosophizing by someone who may well be qualified to do so, but without any true substance to the arguments.
Did the author use appropriate qualitative and quantitative methods?
What methods there were, were almost entirely qualitative. This is not to say that qualitative methods are bad ones; I certainly believe that a well-articulated case study can teach much, but this is not the form that the qualitative analysis takes. As I say above, it is more in the way of anecdotes that the author introduces these methods than in any substantive or defensible way.
Did the author ignore relevant literature?
There was virtually no reference to other texts in this piece. Instead, the author relied on a few quotes and public opinion surveys. For someone who wants to completely reshape the nature of work, you would think that a deep dive into the relevant literature would form a backbone for his arguments. This was not the case.
What further research and recommendations are called for?
In his final chapter, the author details Economic and Political Reform that he would like to see take place as a result of his analysis. This is all fine and good, and much of it makes sense if you accept his initial premises, but without a substantive analytical framework, there is little in the way of supportive evidence in place to have us accept his recommendations.
The following questions are ones that were generated by me in response to reading this piece.
What is the role of government?
The author argues for a lot of change. This is fine, but he doesn't get to the core issue in his prescriptions; namely, what the role of government is. For his assertions to hold, and his recommendations to take effect, we need a clearly argued and reasonable defense of why government should take up these tasks, as opposed to the private sector or individual citizens. Personally, while I favor market mechanisms in terms of policy prescriptions, I am also a New Deal Democrat, and believe in big and bold government programs. I believe the government should spearhead much of the development that the author argues for.
Who should pay the costs?
This is touched on briefly, by virtue of discussing different taxes, particularly the solidarity tax and wealth tax, but the author admits that his analysis is insufficient. In order to actually implement most of these changes, we will need a radical rethinking of what taxes are taken from whom, and how they are redistributed throughout society. Personally, I believe that a consumption tax, combined with other taxes, would go a long way towards funding the author's recommendations.
How should that payment take place?
Again, wealth and solidarity taxes are argued for, but this is a weak argument. Something like a consumption based tax would make a lot more sense, or perhaps private industry paying directly into funds that allow for the movement of benefits for workers from one place to another.
Do we need radical change, or incremental?
This is not addressed at all. The author cites some projections of how much of the economy will be driven by tech in the future, but there is no prescription as to whether his proposed changes should take place immediately or gradually over time. Personally, after reading this piece, I do agree that change has to take place, and fast, but to do so overnight would alienate many in society. Realistically, a phased-in approach makes a lot more sense.
Given the speed of tech change and inequality, will developing countries be left in the dust, or leap frog?
This is not addressed whatsoever in this text. The core focus is on the US, and perhaps other developed countries. The author does cite that inequality will worsen both nationally for the US, and internationally for all people, but there is no talk of how to ameliorate this. I believe the developing world holds a unique opportunity for leap frogging old technology and moving right into the new, as Africa has seen solar and wind developments for energy without needing a classical grid. This would go a long way towards bridging the gap between the developed and developing.
Should growth be the focus of macroeconomic policy, or a new, circular economy?
This is not alluded to whatsoever in this piece, aside from the seemingly implicit bias that growth is always a good thing. I do not believe that growth needs to be the focus of all economic policy. We can have an economy of maintaining the status quo while our population grows, but also focus on a circular, recycling economy in which value is added at every transaction, even ones where traditionally there would be waste.
What would you want work to look like for your children and grandchildren?
Future is in the title of the book, so the author does write with future generations in mind, but he never makes it personal enough to really pack a rhetorical punch. Personally, for my future and my children’s future, I would like to see an economy that operates at the height of technological development, but also one where everyone has a fair chance at working towards what is rightfully theirs. I also see a world where divisions that currently plague us have been passed by, and where everyone on Earth has the same basic starting conditions. I realize this is likely impossible, but I can still be motivated to get us as close to that world as possible.
Economics of Technological Change and Innovation Final
Carl Mackensen
Professor Vonortas
Economics of Technological Change and Innovation
Final Exam
Question 2
The maximum potential of an innovation requires its being taken up by the economy at large. In order to realize increased productivity everywhere, producers must focus on new technical processes. Positive externalities will result for the buyers and users of innovations, which in turn justifies intervention in the market by policymakers. "In the history of diffusion of many innovations, one cannot help being struck by two characteristics of the diffusion process: its apparent overall slowness, on the one hand, and the wide variations in the rates of acceptance of different inventions, on the other." (Rosenberg, 1972). It is quite expensive to examine the rate of uptake of a new innovation at the overall aggregate level. Specific individual firms or consumers make the decision to take up the new product or service, and it proves difficult to explain why some do so sooner and others later.
In terms of innovation adoption, market saturation takes place, “when there is complete displacement of older generation methods by a new technique or the displacement of earlier product varieties by the new types of goods and services. At this point the Schumpeterian notion of creative destruction is fully realized.” (Vonortas, 10/9/2023). In reality, incomplete saturation is more likely. To explain the decision making to take up new goods and services, we can examine the place of social norms, the way firms and customers are related, and how decisions come to take place.
E.M. Rogers, in "Diffusion of Innovations," argued that there are five components which influence possible uptake of an innovation: "The relative advantage of the innovation; its compatibility with the potential adopter's current way of doing things and with social norms; the complexity of the innovation; trialability, the ease with which the innovation can be tested by a potential adopter; and observability, the ease with which the innovation can be evaluated after trial." (Vonortas, 10/9/2023). Rogers also argued that a number of exogenous aspects of society may either aid or hinder adoption. These include: whether the decision is a collective, individual, or central-authority one; the channels of communication used to convey information about the change, such as word of mouth or media; the specifics of the social system within which possible uptake is situated, including interconnectedness and norms; and how much marketing takes place.
While others focus on the external environment, Economists focus on the rational decision making of individuals, in which a person weighs the benefits of adoption against the costs of change. Such a cost-benefit analysis occurs in an environment of uncertainty and imperfect information. The final decision takes place on the demand side; however, costs and benefits are usually influenced by supply-side decision making. The final rate of diffusion is obtained by aggregating the decisions of individuals.
One way of construing diffusion of an innovation is the Epidemic Model. In this model, the spread of disease is replaced by the diffusion of technology. Here, two people have a random encounter, where one has already taken up the change and the other hasn't. Information about the innovation then changes hands, which possibly leads to the other person taking it up. "There is a fixed population of potential adopters N and all members of this population are identical in all characteristics. When a meeting occurs between an adopter and a non-adopter there is a fixed probability B that the current non-adopter will become adopter. The chance of such a meeting occurring depends on the proportion of the population that have already adopted the innovation D. Such meetings are random encounters, so the probability that an adopter meets a non-adopter is D(1-D). Then, the rate of adoption is dD/dt = BD(1-D)". (Vonortas, 10/9/2023). Predictions can be garnered from this model: the rate of adoption over time is bell-shaped, cumulative adoption follows a logistic, S-shaped curve, and saturation takes place when all have adopted. How quickly saturation occurs depends on the likelihood that an encounter leads to uptake.
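The dynamics of dD/dt = BD(1-D) can be sketched numerically. The following is a minimal Euler-step simulation, assuming illustrative values for the conversion probability B, the initial adopted fraction, and the time step (none of these numbers come from the source):

```python
# Euler simulation of the epidemic diffusion model dD/dt = B * D * (1 - D),
# where D is the fraction of the population that has adopted.
# B, D0, dt, and steps are illustrative assumptions, not values from the source.

def simulate_diffusion(B=0.5, D0=0.01, dt=0.1, steps=200):
    """Return the adopted fraction D at each time step."""
    D = D0
    path = [D]
    for _ in range(steps):
        D += B * D * (1 - D) * dt  # adoption is fastest when D is near 1/2
        path.append(D)
    return path

path = simulate_diffusion()
# Cumulative adoption traces the S shape: slow start, rapid middle, saturation.
print(path[-1] > 0.9)
```

Plotting path against time would show the logistic S curve, while plotting the per-step increments would show the bell-shaped adoption rate the model predicts.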
There are, however, limitations to this model. While the S-shaped adoption path is well documented empirically, the model gives it no basis in economics. Further, people and firms aren't all identical, and not all decisions are rational: people may take up the technology en masse, or resist it. We also cannot assume a static population of potential adopters. Finally, while adoption is S-shaped, the model gives us no way of knowing the time scale.
There is another possible model we can use, based on the plausible idea that tech diffusion depends on how much economic gain it offers those who take it up, relative to the status quo. "The advantage may consist of higher returns on investment through cost savings, increased sales, higher prices for an improved product, or a combination of these. It is an expected advantage representing the net present value of a future stream of monetary benefits, discounted by the probability that these benefits will actually occur. These probabilities are in reality the assessments of decision makers of the chances that a given decision will yield the anticipated results." (Vonortas, 10/9/2023). As decision makers get more information about the tech change, we can again expect an S-shaped diffusion path.
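The expected advantage described in this quote can be written as a probability-weighted net present value. A small sketch, with all numbers (benefit stream, confidence level, discount rate) chosen purely for illustration:

```python
# Expected advantage of adoption: the NPV of a stream of monetary benefits,
# each weighted by the decision maker's assessed probability of occurring.
# All parameter values are hypothetical illustrations, not from the source.

def expected_advantage(benefits, probabilities, discount_rate):
    """Probability-weighted NPV of benefits relative to the status quo."""
    return sum(
        p * b / (1 + discount_rate) ** t
        for t, (b, p) in enumerate(zip(benefits, probabilities), start=1)
    )

# A firm expects cost savings of 100 per year for three years, is 80% confident
# the savings will materialize, and discounts at 5%.
adv = expected_advantage([100, 100, 100], [0.8, 0.8, 0.8], 0.05)
print(round(adv, 2))
```

In this framing, adoption occurs only when the expected advantage exceeds the cost of switching, consistent with the cost-benefit logic above.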
Stimulus-response models are yet another way of explaining the S-shaped adoption curve. They allow for differences between firms, which in turn explain the way tech diffusion takes place. Stimulus-response questions focus on which aspects of firms and their environment explain when firms adopt, as well as what information, such as market or technical information, must be internalized ahead of adoption. The stimulus is information, while firm attributes influence the degree to which firms adopt.
There are a number of aspects that influence how an innovation diffuses through society. These include: “Characteristic of the Innovation, such as its origin, effect on inputs, place in the established production scheme, changes in the innovation, and complementariness among innovations. Characteristics of Potential Adopters, including tech specificity of the existing system, the firm’s financial position, tech capability, market position and alternative strategies, managerial attitudes, age of firms and industries. Characteristics of the Diffusion Process, such as external and internal information, external interests in diffusion, international diffusion. Characteristics of the Institutional Environment, including the patent system, laws and government regulations, specification writing agencies, insurance companies, and so on, and labor unions.” (Vonortas, 10/9/2023)
Factors on the supply-side which influence diffusion rates include the continuity of inventive activity, improvements in inventions after their first introduction, the development of technical skills among users, the development of skills in machine-making, complementariness between different techniques, and improvements to old technologies as well as the institutional context. (Vonortas, 10/9/2023)
Lastly, there are network effects and lock-in effects which influence tech diffusion. Network effects refer to the gains that one realizes based on the decisions of others to adopt. It is possible that the best tech will not be adopted, as network effects can lead to the adoption and maintenance of inferior tech once it becomes a standard or is universally adopted. Lock-in arises because there is a cost associated with changing tech.
Network externalities and standards are related, insofar as standards give rise to products that generate network externalities. "A technological standard increases the probability that communication between two products will be successful. Standards ease learning and encourage adoption when the same or similar standards are used in a range of products. A successful standard increases the size of the potential market for a good, which can be important in lowering the cost of its production and in increasing the variety and availability of complementary goods." (Vonortas, 10/9/2023).
In the end, products live in an ecosystem of their own making. The S-shaped curve details the ways in which technology adapts, grows, replaces itself, and promotes economic growth. Adoptions of technology take on characteristic and similar traits, making the S-shaped curve a reality. The diffusion rate can be construed in many ways, such as an epidemic or a natural word-of-mouth phenomenon. The core question becomes: when does an individual, firm, or country jump from one S curve to the next? This depends on a whole host of factors. Most prominently, the installed base of the technology must reach a certain minimum level, and the new tech must prove a marked improvement over the tech it replaces. To jump either too early or too late involves significant costs and penalties, so it is vital to time things correctly. In order to maintain technological dominance, there should be a ready stream of potential new technologies available to decision makers. Lock-in effects can be significant hurdles to tech adoption, but there is also the potential for leapfrogging. We must keep one eye to the future when evaluating where we are.
Question 3
The Neoclassical Growth Model begins with a simple aggregate production function, Y = A f(K, L), where Y is GDP, K is the capital stock, L is the size of the employed workforce, and A is the level of technology. The specific aggregate production function modifies this slightly to Y = A K^a L^(1-a). Increasing K or L individually will increase Y, but at a diminishing rate: there are diminishing marginal products of K and L. The level of technology in the second equation is exogenous, and can either increase or decrease. An increase in technology increases the value added for a given set of inputs. Likewise, an innovation in the production process increases value added, both for firms and more generally.
We can rewrite the second equation as output per worker: y = Y/L = (A K^a L^(1-a))/L = A K^a / L^a = A k^a, where k = K/L. This can be interpreted as saying that output per worker depends on the level of technology and capital per worker. Another vital piece is the accumulation equation. Accumulation can only add to the stock of capital. Savings, s, is a constant proportion of output and is used for capital investment. A portion of this investment goes towards replacing obsolete and depleted capital, known as depreciation, d. In a closed economy, overall investment equals savings. For capital to grow, depreciation must be lower than savings. Labor, on the other hand, cannot be accumulated; however, the model assumes a stable, exogenous rate of labor growth. The core assumptions are given by the following equations: first, as discussed, y = A k^a, meaning output per worker equals the technology level times capital per worker raised to the power a; and second, dK/dt = sY - dK, meaning the change in capital over time equals savings times output minus depreciation times capital.
A further equation can be derived: dk/dt = sy - (d + n)k. This can be interpreted as saying that the change in capital per worker equals savings per worker minus the depreciation and dilution components. The growth of capital per worker equals total investment per worker, minus the investment required to cover depreciation and to equip new workers. Given the production function and the accumulation equation, both in per-worker terms, we can solve the model. In sum, the core outcome of the neoclassical model is that the economy converges to a steady state of capital and output per worker. An important corollary is that the further the economy is below the steady-state level, the faster its growth rate of output per worker will be.
Increasing the savings rate results in a higher steady-state capital stock per worker and increased labor productivity. If we allow individuals to determine the level of savings, the final analysis of the model does not change: the economy still reaches a steady state of output per worker, or per capita, because of the diminishing marginal product of capital. When technology improves, output per capita grows even if capital per capita remains the same, and the marginal product of capital increases as well.
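Setting dk/dt = 0 in the accumulation equation dk/dt = sAk^a - (d + n)k yields the steady state k* = (sA/(d + n))^(1/(1-a)). A quick numerical check of this, with illustrative parameter values that are not from the source:

```python
# Solow model: dk/dt = s*A*k**a - (d + n)*k, steady state where dk/dt = 0.
# All parameter values below are illustrative assumptions, not from the source.

s, A, a, d, n = 0.2, 1.0, 0.3, 0.05, 0.01  # savings, technology, capital share,
                                           # depreciation, labor-force growth

# Closed-form steady state: s*A*k**a = (d + n)*k  =>  k* = (s*A/(d+n))**(1/(1-a))
k_star = (s * A / (d + n)) ** (1 / (1 - a))

# Iterating the accumulation equation from below converges to k*, and growth
# slows as k approaches the steady state, illustrating the corollary that an
# economy further below its steady state grows faster.
k = 0.1 * k_star
for _ in range(1000):
    k += s * A * k**a - (d + n) * k  # one period of per-worker accumulation
print(abs(k - k_star) < 1e-6)
```

Raising s in this sketch raises k*, and hence steady-state output per worker A k*^a, matching the point about the savings rate above.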
Baumol, Blackman, and Wolff (1994) offer some additions that the preceding theory neglects. First, they offer the convergence hypothesis: when one country is more productive than another due to discrepancies in their modes of production, the worse-off countries that are close to the leaders can begin a process of catching up, and some will even succeed. Further, "the catch up process will continue as long as the economies that are approaching the leader's performance have a lot to learn from the leader." As long as there is something to be learned from the leader, the follower will continue to grow until it has fully internalized all of the leader's innovations. There are also countries that lag significantly behind the leaders. It is not practical for them to focus on the leader's innovations, and these countries risk falling even further behind.
There are several criticisms of this framework. Today there is significant inequality within advanced nations, and inequity both within and between nations. Further, there is sample bias in the convergence evidence: the countries examined are ex post successful. They are chosen on the criterion of whether they achieved economic prosperity by the end of the period examined, rather than on their situation at the start. Looking at industrialized, middle-range, and low-income countries, it has been shown that the first two converge, while the last falls further behind. The reasons given were "lack of education and impeding social arrangements that tend to swamp the advantages of backwardness" (Baumol, Blackman, and Wolff, 1994).
There are other models available to explain growth discrepancies. Innovation is one such lens. Here there is an upgrading of resources as well as structural change. At different stages, countries have different absorptive capacities. Further, in the early stages, autocratic regimes may outperform democratic ones, as there is a need to consume less and invest more. How do we put productive activity to good use? There is imitative entrepreneurship, in which other nations are imitated, but the activity is new to the developing country in question. Further, just because a country gets close to the top players does not mean it is guaranteed to stay there.
Abramovitz (1994) notes why leaders grow more slowly than followers. Leaders have already invested heavily, and it takes time to change. Those behind have less capital, so there is an opportunity for capital investment; laggards can move labor from low-productivity to high-productivity activities, often into cities. Further, there is the question of social capability: growth is a "socio cultural economic technological phenomenon" (Vonortas, 2023). There is a need to entice multinationals to invest, whether with tax relief, cheap labor, well-trained labor, or a large market. Lee and Malerba (2017) distinguish the middle-income trap from the low-income trap. Few countries manage to completely close the gap with the rich; doing so requires dramatic policy change and transformation, and many fail. Further, catching up does not mean copying: as followers get closer, they need to change strategy. In the neoclassical model, the mainstream view is that government intervention is justified where markets fail. Public goods are one such example, and knowledge has aspects of a public good. Serious positive externalities lead to underinvestment, and concentrated markets are not innovative, which also warrants intervention. People are assumed to know productive techniques and to be rational, but this is not always so; we observe bounded rationality. Those catching up need capability building and institution building through creative innovation systems. Lastly, when a nation enters a market is critical: laggards enter late and suffer a disadvantage, but they can offset this with cheap resources or labor.
Cirera and Maloney (2017) outline the 'innovation paradox' in the first chapter of their work for the World Bank. They argue that developing countries do far less innovation than their developed counterparts. They cite Pritchett (1997), who found a 'Great Divergence' over the last two hundred years: the poor do not catch up, while the rich continue to grow. This is argued to be due to discrepancies in the rate of adoption of innovation between rich and poor. The capacity to "identify, absorb, and adapt technologies…is indeed a key part of the divergence story" (Cirera and Maloney, 2017, page 3). Countries that one hundred years ago may have started with the same general economic conditions had significantly different capacities to innovate. Countries that have historically been unable to innovate and apply technological development in their existing firms are also unlikely to do so in newer industries and ventures. Therefore, innovation capacity seems to be the more vital factor in economic development.
I find these arguments interesting when set against the neoclassical account of traditional growth. Consider, as an example, the state of the world at the end of World War II compared to today. Germany and Japan were decimated, with the USA and the Soviet Union the only two great powers left standing. While the USA continued to grow through pro-growth, neoclassical development, it also invested deeply in innovation. The Soviet Union maintained a system of centralized economic planning, which led to its stagnation and eventual collapse in 1991. Germany and Japan, on the other hand, came back from the edge of nonexistence in developmental terms and became, by the end of the century, two of the richest countries in the world. How did this happen? Through an emphasis on education, human capital development, science, and innovation. Further, they pursued perfecting existing technology rather than costly experiments in the completely novel, leaving the latter to the USA and other rich countries. These are lessons from which today's developing world could learn.
Question 4
Chapter seven of Gregory Tassey's book "The Innovation Imperative" (2007) makes a number of claims about the way technological development takes place: "As the engine of long-term economic growth, technology drives the creation of entirely new industries, adds substantial value to the economy, but eventually becomes obsolete and loses its value" (Tassey, 2007, page 180). This is how technological diffusion occurs within a society. The "installed base effect" means that industries that dominate one cycle rarely succeed in the following one. For the forty years after WWII, the US dominated the technological sphere, and during this time, managing life cycles was not a priority. That is no longer the case; life cycle analysis is an increasingly important component of understanding economic growth. There are major cycles, and minor ones nested within them. For long-term growth to be sustained, the attributes that shape the S-shaped growth curves must be understood.
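S-shaped growth curves of the kind Tassey describes are commonly modeled with a logistic function. The following is a minimal sketch with purely illustrative parameter values (the names K, r, and t0 are my notation, not Tassey's): adoption is slow early in the cycle, rapid mid-cycle, and flat as the technology's possibilities are exhausted.

```python
import math

def logistic_diffusion(t, K=1.0, r=0.5, t0=10.0):
    """Logistic S-curve: slow early adoption, rapid mid-cycle growth,
    saturation as the cycle matures.
    K = saturation level, r = growth rate, t0 = inflection point."""
    return K / (1 + math.exp(-r * (t - t0)))

early = logistic_diffusion(0)      # low adoption at cycle start
midpoint = logistic_diffusion(10)  # half of saturation at t0
late = logistic_diffusion(25)      # near saturation late in the cycle
```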
Life cycles are accelerating, but there is constraint on innovation. Business concerns of old and new companies are different. “High tech firms are concerned with amount and type of government R&D funding, IP rights, cost of risk capital, and availability of scientists and engineers and…skilled workers. Older…industries…cite taxes, regulation, health and pension costs, trade barriers and tort laws as their most serious problems.” (Tassey, 2007, page 181).
To manage the attributes of long-term economic growth, we must recognize that product cycles are nested within larger technology life cycles, which in turn make up a major technology cycle. A series of product cycles can grow from an underlying generic technology base. As a technology cycle proceeds, aspects of the technology move toward routinization and standardization; the consequent slowing of change signals the approaching exhaustion of the generic technology's possibilities. As a life cycle nears its natural end, competition shifts from significant product changes to smaller, incremental ones, as well as to innovations in the means of production, or processes. Price-based competition becomes the norm.
When we examine a life cycle, we find that technological change and innovation occur in fits and starts over time, driven largely by occasional improvements in the underlying technology. This does not mean, however, that when one cycle ends another simply begins. There is a demand-side effect in which an older technology is still needed. As such, we have what can be construed as nested life cycles: life cycles begin at differing origination points within the major overall life cycle, before older technologies are completely obsolete and replaced. When the generic technology reaches saturation and is available to all, technological change occurs at the level of products. Indeed, the underlying technology for each part of a larger system must be open to use by all in order to facilitate multiple streams of innovation, which in turn push the overall system forward.
By the time a technology reaches the middle of its saturation phase, with correspondingly large markets and a well-defined structure, larger firms with more R&D resources can undertake significant generic research. New technology, on the other hand, typically suffers from underinvestment, as firms see it as risky.
Overall technology life cycles are significant and relevant to the long-term economic growth of a region, as they facilitate a progression of nested cycles encompassing a whole field of related technological development. It is also the case, however, that shifting from one cycle to another can be particularly onerous. "The length of a major cycle and the competitive position of the domestic industry over such cycles are particularly vital for general purpose technologies due to the fact that they create a whole ecosystem of innovative industries with huge economic impact. Unfortunately, global leverage by an initially innovative domestic industry is usually not continued over an entire technology life cycle." (Tassey, 2007)
Lee and Malerba offer a different interpretation of how technology changes. Changes in industry leadership, known as 'catch-up cycles,' take place over time within a given sector of the economy. During these cycles, laggards come forward as would-be leaders and the established leaders are usurped; the process then repeats. Their 'sectoral system framework … identifies windows of opportunity that may emerge during the long run evolution of an industry' (Lee and Malerba, 2017, page 338). Three such windows exist: the first concerns changes in knowledge and technology; the second, changes in demand; and the third, changes in public policy and institutions. A window opening, coupled with the responses it elicits among the latecomers and laggards, shapes both catch-up and leadership. Different sectors of course differ, with specific cases studied including 'mobile phones, cameras, semiconductors, steel, mid-sized jets, and wines' (Lee and Malerba, 2017, page 349).
There are four stages to the catch-up cycle: entry, gradual catch-up, forging ahead, and falling behind (Lee and Malerba, 2017, page 349). Leapfrogging, in which one actor adopts newer technology than its rivals, can take place as part of the forging-ahead stage. Windows of opportunity arise frequently and often surprisingly. The authors conclude on a 'Schumpeterian' note, as 'exploiting a technological window is very critical to forging ahead' (Lee and Malerba, 2017, page 349). Further, windows can be competence-enhancing or competence-destroying in terms of technological development, and we must also consider the capabilities and strategies of both the advanced players and the laggards. New technology often coexists with existing technology, at least for a time; alternatively, rapid change from one schema to another can take place (as with cell phones). Laggards must capitalize on windows and 'build sector-specific capabilities that support actors, networks, and institutions' (Lee and Malerba, 2017, page 350), though this may be time intensive. The developed, on the other hand, must guard against lock-in and traps; they should attempt to maintain their advantage over others, and even improve it through innovation. A laggard can become stuck in the 'middle income trap,' whereby it fails to 'upgrade to high value-added products and is confined to performing activities with low value in the global value chain' (Lee and Malerba, 2017, page 350). Such countries should pursue policies that build innovation capabilities so as to take full advantage of a window of opportunity opening, and develop systems in which catch-up innovation can be fully embraced.
The most profound thing a laggard country, industry, or firm can do to escape the middle-income trap is to invest. I do not mean this strictly in the capital sense, though that is important, as neoclassical growth theory illustrates. What I mean, as alluded to in my second essay above, is to invest in education that makes innovation fruitful. Through this, laggards have the potential to leapfrog the leading countries and adopt technology that is substantially better, more developed, or more efficient. The deployment of cell phones and of solar energy are good examples. Countries such as Kenya and India have, at least since the end of World War II and the fall of colonialism, been laggards in their adoption of technology. With primarily agrarian economies, their growth was anything but assured. First came the boom in mobile telephony and the internet.
Rather than install land lines in huts, people in Kenya adopted cell phones. Kenya was also among the first low-income countries to use cell phones as a means of mobile banking, developing a system by which small amounts of money could be paid from one person to another by phone. This revolutionized the country's entire financial industry and allowed many small businesses to grow and flourish, with the further effect of increasing international trade.
India, on the other hand, faced a similar scenario, but with electricity provision. Over the last twenty years the climate crisis has grown in significance for the entire world. It is now generally accepted that we cannot continue with business as usual where greenhouse gas emissions are concerned. When India was told it could not develop the same way the Western industrialized countries had, it accused its would-be international minders of hypocrisy. Instead of India proceeding fully with greenhouse-gas-emitting electrification, however, during the Paris climate negotiations Al Gore helped broker a fateful deal to share the technology of Western PV solar companies with India.
In brief, countries can leapfrog their former leaders by adopting new technology in novel and useful ways. This rests on the fundamental sharing of methods and innovations, as well as on the absorptive capacity of the country in question. To fully capitalize on a window of technological change, a unit must be prepared. For a nation, this means having a populace that is well educated, particularly in science and technology. As the old saying goes, luck is opportunity meeting preparedness.
References
Abramovitz (1994).
Baumol, Blackman, and Wolff (1994).
Cirera and Maloney (2017). The Innovation Paradox. World Bank.
Lee and Malerba (2017). Catch-up cycles and changes in industrial leadership: Windows of opportunity and responses of firms and countries in the evolution of sectoral systems. Research Policy, 46.
Tassey, Gregory (2007). The Innovation Imperative.
Vonortas, Nicholas (2023). Economics of Technological Change and Innovation.
Pricing Fresh Air: Relating Smoking to Carbon Emission
PPPA 6014
Professor Anil Nathan
Final Policy Brief
Carl Mackensen
12/15/2023
Abstract
Clean air is vital not just to human civilization but to the entire world's biome and continued existence. The climate crisis is real, and we can already see its effects in raging forest fires, higher sea levels and flooding, and rising temperatures with records broken every year, to say nothing of the anthropogenic global mass extinction event. It is therefore of paramount importance to address this problem in every way we can. The air is polluted because it is a global commons: it is non-rivalrous and non-excludable, so people feel free to emit as much pollution, particularly carbon from energy use, as they see fit. Pollution is also an externality, in which a third party is affected by the economic activity of two others, and there is a free-rider effect in which many are tempted to skimp on cutting carbon emissions after others have committed to doing so. One proposed solution is to put a price on carbon, which would be highly effective. Another method, examined here, is to attempt to put a dollar figure on the price of breathing fresh air.
In this paper, this is done through a case study of the economics of smoking and its cessation, using the cost of smoking as a proxy for the shadow price of clean air. Through this, we can examine the figures on damages and make a first approximation of the social cost of breathing polluted air. We can then conclude that the best approach is to make polluting more difficult, and one of the best ways to do that is to make it more costly. This will affect every aspect of society and the economy, on both a local and a global scale. Whether it is done through a tax or a market mechanism is mainly academic; the real results would be extremely beneficial to those who have been, and will continue to be, adversely affected. Climate change is caused by one group of people and harms another, separated in time and space. Given this, we can attempt to define the damage of polluted air more systematically and econometrically. This is done through the separate exercise of putting dollar amounts on breathing unclean air, via smoking. This is a nuanced difference in procedure from pricing carbon directly; in truth, the two go hand in hand, but it is important to note at the outset that they employ different means. A price on carbon, which is touched on here, is separate from the cost of breathing dirty air. Pricing air, as a global commons, is a different sort of economic solution to the problem, and doing so would take us a long way toward dealing with this vital and serious issue.
I: Introduction
Pollution is a systems-level problem. It has pervaded every aspect of the global economy since the dawn of the first Industrial Revolution, and it can best be combated through systems-level thinking and systems-level solutions. This paper endeavors to do so by examining the broader issues in the climate justice movement and then a case study that puts figures on the damages of breathing unclean air, via the social costs of smoking and the benefits of cessation. Why smoking? Because, as will be detailed, what we are essentially dealing with is an externality in which one group of people, though removed in time and geography from another, does damage to them through the pollution of the air. In essence, the environmental economist attempts to put cost figures on these harms so that it can best be determined how to help people. Looking at something related to breathing pollution that has been clearly documented and has a long-standing literature, such as smoking, is therefore eminently helpful in this pursuit.
The people who put the carbon in the air may be temporally removed by generations, or geographically removed by thousands of miles, but this matters very little. It is still one group of people doing active harm to another group of marginalized people. This is at the heart of the growing climate justice movement, an outgrowth of environmental justice, which is itself the intersection between social justice and environmental issues.
Many of the solutions proposed for dealing with air pollution, specifically carbon, revolve around putting a price on its emission. If we can calculate the social cost of carbon and incorporate it into our subsequent analyses of how much to pollute, we can maximize the cost-benefit relationship. Estimates range from 40 dollars per ton of CO2 emitted to as high as 800 dollars per ton, with the most likely estimate being approximately 417 dollars per ton (Nuccitelli, 2018). Looking purely at damages to the USA, social cost of carbon estimates are roughly 40 to 50 dollars per ton, while the value of the corresponding benefits of reduction is on the order of seven times this, or 280 to 350 dollars per ton (Nuccitelli, 2018).
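To give a concrete sense of what these prices mean, here is a back-of-the-envelope sketch converting a carbon price per metric ton into a surcharge per gallon of gasoline. The prices are the ones cited above; the figure of roughly 8.9 kg of CO2 per gallon is an assumption on my part, a commonly cited combustion estimate, not a number from the text.

```python
# Assumed: ~8.9 kg CO2 emitted per gallon of gasoline burned
# (a commonly cited estimate; not from the sources discussed here).
KG_CO2_PER_GALLON = 8.9

def surcharge_per_gallon(price_per_ton):
    """Dollar surcharge per gallon implied by a $/metric-ton CO2 price."""
    return price_per_ton * KG_CO2_PER_GALLON / 1000.0

low = surcharge_per_gallon(40)       # low-end estimate, ~ $0.36/gallon
central = surcharge_per_gallon(417)  # central estimate, ~ $3.71/gallon
high = surcharge_per_gallon(800)     # high-end estimate, ~ $7.12/gallon
```

Even the low-end price is visible at the pump, which is exactly the point: the externality becomes part of the price of the polluting activity.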
Whether this takes the form of a tax or a market mechanism such as a cap-and-trade marketable permit scheme is somewhat irrelevant for our purposes, so long as firms internalize their costs of production and what was once an externality becomes simply part of the cost of doing business. There are pros and cons to both a flat tax per ton emitted and a cap-and-trade scheme. However it is accomplished, we could, as scientists and statisticians have long called for, truly come to price the effects of carbon emissions.
But is this the best way? Or, alternatively, is it the only way? It does address some of the core issues around carbon emissions, namely putting a price on them. Is it possible to attempt to put a price on the value of a breath of fresh air? To find its shadow price? After all, carbon emissions are, at least seemingly at the outset, a global commons problem. No one owns the air, and as a result, everyone is incentivized to pollute it.
There are numerous solutions available for dealing with a commons issue. Privatization of the commons is possible, but it would seem draconian to force people to pay for the air they breathe. One can regulate emissions, as the social cost of carbon attempts to do through pricing and market mechanisms, but this too is a method open to many different interpretations. How much should a ton of carbon emitted into the air cost, and how is this best enforced? In the remainder of this paper, then, I will focus on the cost of smoking as a proxy for putting a price on clean air. It is an interesting approach to this commons issue and features a dynamic and substantive body of literature.
II: Literature Review
The Social Cost of Smoking
For many years, the public health community has argued that smoking imposes great societal costs and that smokers should bear the burden of these costs. There are three primary types of costs: the direct medical costs of preventing, diagnosing, and treating smoking-related diseases; the indirect morbidity costs of lost earnings from work due to smoking; and the indirect mortality costs of lost future earnings due to premature smoking-caused deaths (Chaloupka and Warner, 2000, page 1575). This research has been done primarily in the United States, but other countries, including Canada, Great Britain, and China, have conducted analyses as well. Furthermore, many state-specific analyses have been conducted within the US based on the Smoking-Attributable Morbidity, Mortality, and Economic Costs (SAMMEC) model (Shultz et al., 1991).
These analyses use a great variety of methods for estimating the different cost components, and they leave much to be desired. They omit or ignore certain types of smoking-related health care, such as the treatment of burn victims from smoking-caused fires and perinatal care for the low-birth-weight babies of smoking mothers (Chaloupka and Warner, 2000, page 1576). A few studies have dealt with the costs of treating diseases caused by tobacco smoking, but none have attempted to value intangible costs, such as the pain and suffering of smoking-related disease victims and their families. These intangible costs may well exceed those that are already quantifiable.
These studies considerably underestimate smoking's burden on the health care system due to their failure to consider how smoking complicates many illnesses not directly associated with it. For example, diabetics who smoke often have more complications of their diabetes than those who do not. Smokers recover more slowly from surgeries of all types than nonsmokers, extending post-surgical hospital stays, and smokers with HIV may be more likely to develop near-term AIDS than nonsmokers with HIV. Including such costs in these cost-of-smoking analyses could increase the estimates by 50% or more (Chaloupka and Warner, 2000, page 1576). These studies also neglect many direct nonmedical costs, such as the time and transportation costs of getting patients to and through health care services, the direct costs of home modifications to deal with smoking-related disabilities, damage to buildings from smoking-induced fires, smoking-related maintenance costs in industrial and home settings, and the increased frequency of laundering necessitated by smoking. Leaving out these nonmedical costs is routine in nearly all of the broader cost-of-illness literature; sometimes the omissions are acknowledged, with authors saying the costs seemed too negligible to warrant further investigation.
The indirect morbidity and mortality costs have frequently been critiqued as an insufficient means of valuing the avoidable premature loss of life, since they place no value on life per se. Some studies also calculate the economic 'benefits' of smoking, such as the reduction in Social Security payments to smokers who die prematurely and the medical expenditures avoided due to their premature deaths (Shoven et al., 1989). There is, as a result, a large body of studies attempting to discern whether the overall impact of smoking is positive or negative. Whether these 'negative costs,' or cost offsets, should be incorporated into the calculation has become a major issue in the academic battle over the very definition of the social costs of smoking. The stakes of this debate are potentially substantial. At the heart of the public health community's case for a higher cigarette tax is the social cost argument: smokers (or the industry that feeds their addiction) impose an enormous economic burden on society and should pay for it through higher taxes. Using the public health construction of social cost, some analysts have concluded that the United States cigarette excise tax should be on the order of three to four dollars or more to cover these costs. Economists of many different political persuasions have rejoined that to determine an optimal cigarette excise tax, the proper notion of social cost is the traditional economist's measure of externalities, the costs imposed by smokers on others, which excludes costs borne by smokers' own family members.
Economics of Smoking Cessation
It is clear from the above that smoking imposes a huge economic burden on society, currently up to 15% of total healthcare costs in developed countries (Parrott et al., 2004). Smoking cessation can save years of life at a very low cost compared with alternative interventions. The most straightforward benefits of cessation are gains in life expectancy and the prevention of disease. Cessation also improves quality of life, as smokers tend to report lower health status than non-smokers, and this improves after stopping smoking. There are also wider economic benefits to people and society, from reductions in the effects of passive smoking on non-smokers and from savings to the health service and the employer. These larger benefits are usually omitted from economic evaluations of cessation interventions, which as a result underestimate the true value for money of such programs.
There have been many estimates of the economic cost of smoking in terms of health resources; in the United States, they typically range from roughly 0.6% to 0.85% of gross domestic product (Parrott et al., 2004). In absolute terms, the US Public Health Service estimates a total cost of $50 billion a year for the treatment of smoking-related diseases, in addition to an annual $47 billion in lost earnings and productivity (Parrott et al., 2004). As a percentage of gross domestic product, the economic burden of smoking appears to be increasing. In truth, however, the burden may not be growing; rather, as more diseases are recognized as smoking-caused, the share of costs attributed to smoking rises, and earlier estimates may simply have understated the true cost.
In the United States, passive smoking has been estimated to cause roughly 19 percent of total expenditure on childhood respiratory conditions, and maternal smoking has been shown to increase healthcare expenditure by $120 a year for children under age five and $175 for children under age two (Parrott et al., 2004). Absenteeism due to smoking-related disease is also a large source of lost productivity, a cost incurred by employers. An estimated 34 million days are lost annually in England and Wales through sickness absence resulting from smoking-related illness, and in Scotland the cost of this productivity loss is about 400 million pounds (Parrott et al., 2004).
There is clear evidence that smoking cessation interventions are effective. To show value for money, however, the costs as well as the effectiveness of such programs must be examined. Overwhelmingly, the evidence is that face-to-face cessation interventions offer excellent value for money compared with the large majority of other medical interventions, though complex factors influence cost-effectiveness. The cost-effectiveness of putting the US Agency for Healthcare Research and Quality's clinical guidelines on smoking cessation into practice has been estimated for combined interventions based on smokers' preferences among the five basic recommended interventions. The cost of implementation was estimated at $6.3 billion in the first year, as a result of which society would gain 1.7 million new quitters, at an average cost of $3,779 per quitter, $2,587 per life-year saved, and $1,915 per quality-adjusted year of life (Parrott et al., 2004). In this study, the most intensive interventions were calculated to be more cost-effective than briefer therapies. We must be careful when extrapolating from these results, however, as cost-effectiveness estimates are likely to be time- and country-specific and highly dependent on the healthcare system in question.
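As a sanity check on the headline numbers cited from this study, a simple ratio of total cost to new quitters gives a ballpark figure close to, though not identical with, the reported per-quitter average (the reported figure presumably reflects the study's own weighting across intervention types):

```python
# Rough check on the AHRQ guideline cost-effectiveness figures cited
# above. The simple ratio differs slightly from the reported $3,779,
# which presumably reflects weighting across the five interventions.

first_year_cost = 6.3e9   # estimated first-year implementation cost, $
new_quitters = 1.7e6      # estimated new quitters gained

average_cost_per_quitter = first_year_cost / new_quitters  # ~ $3,706
```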
III: Discussion and Extrapolations
So what can be concluded from this collection of data, analyses, and extrapolations? First, smoking is damaging not only to the individual, which has been well documented for quite some time, but to society at large, which has not always been included in the cost calculus of previous analyses. We can therefore attempt to incorporate these social costs into our understanding of the damage that smoking does, and come to a better understanding of how much it actually costs us to smoke a cigarette or, as I would put forward, breathe polluted air.
If we treat the negative externalities of carbon emissions in a similar vein to these analyses, as a public health issue, we can perhaps come to a better understanding of the true cost of emitting carbon. That is precisely what this paper aims to do, by pricing fresh air and health, with the recommendation that we must make it more difficult to emit carbon through principled, evidence-based policies, nonviolent activism, and advocacy. As a result, both people and the planet will benefit greatly.
A price on carbon would filter through every facet of the economy, and consequently society, around the world. Food that travels long distances would go up in price because of the fuel cost of transporting it, and local food would as a result become relatively cheaper, all else held constant. Travel by car would also become more expensive, and more people would turn to public transportation. Investment in research and development into alternative forms of energy production for consumer use would flourish, even though the oil and gas industries still receive substantial subsidies. With a price on carbon, renewables would become even more productive and beneficial, from a societal and economic perspective, than they currently are. Innovation would be spurred toward storage solutions we cannot yet fully conceive of, and distributed generation, such as rooftop solar as opposed to traditional centralized power plants, would flourish. A price on carbon is truly a silver bullet that would benefit all of us.
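The pass-through to transport costs can be made concrete with a hypothetical sketch. The $50-per-tonne price below is an assumed illustration, and the ~8.89 kg of CO2 per gallon of gasoline is the commonly cited EPA combustion factor; neither figure comes from the text.

```python
# Illustrative pass-through of a carbon price to the pump price of gasoline.
# Assumptions (not from the text): a hypothetical $50/tonne CO2 price and
# the commonly cited EPA factor of ~8.89 kg CO2 per gallon of gasoline.

carbon_price_usd_per_tonne = 50.0
kg_co2_per_gallon = 8.89

surcharge_per_gallon = carbon_price_usd_per_tonne * kg_co2_per_gallon / 1000
print(f"Implied surcharge: ${surcharge_per_gallon:.2f} per gallon")
# → Implied surcharge: $0.44 per gallon
```

Even this modest assumed price visibly raises fuel costs, which is the mechanism by which transport-intensive goods would become relatively more expensive.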
The understated irony of comparing smoking to carbon emissions is that both activities feature an addictive component. For better or worse, smokers are addicted to smoking cigarettes, and emitters are addicted to emitting carbon. Smokers have a very inelastic demand for cigarettes; people who argue for taxing cigarettes often say that the price will deter would-be smokers. That may be true to a certain degree, but it essentially amounts to taxing an addicted population for being addicted. Similarly, at the systems level, we seem unable to move past carbon emissions when it comes to energy production. Because the negative externality is not incurred by the economic agents making the decision, it does not factor in at all. Even where it is considered, as under RGGI in the US or the EU's Emissions Trading System, we may not see dramatic changes in energy use as a result of a higher price. Perhaps it will simply be an additional cost that the public has to bear, because we are locked into this framework. However, I like to think about this topic differently.
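The inelasticity point can be made concrete. The literature surveyed by Chaloupka and Warner (2000) generally finds price elasticities of cigarette demand of roughly -0.4; the 10 percent price increase below is a hypothetical scenario for illustration.

```python
# Illustration of inelastic cigarette demand, using the rough consensus
# elasticity of about -0.4 from the literature Chaloupka and Warner (2000)
# survey. The 10% price increase is a hypothetical scenario.

elasticity = -0.4          # % change in quantity per 1% change in price
price_increase_pct = 10.0  # hypothetical tax-driven price rise

quantity_change_pct = elasticity * price_increase_pct
print(f"A {price_increase_pct:.0f}% price rise reduces consumption by "
      f"about {abs(quantity_change_pct):.0f}%")
# → A 10% price rise reduces consumption by about 4%
```

A 4 percent drop in consumption for a 10 percent price rise is exactly the pattern described above: the tax deters some would-be smokers, but most of the addicted population simply pays more.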
By bringing our emissions in line with scientists' projections for limiting warming to a manageable 1.5 degrees Celsius, countless lives would be spared from death, destruction, relocation, economic hardship, and forced migration. Droughts and wildfires would be mitigated, and rural farmers in low-income countries would not have to abandon their ancestral farming territories. Island nations would not be demolished. And these are merely the benefits to humankind, which do not include the saving of millions of species and vulnerable ecosystems. Such measures would have a far-reaching impact.
All of this is contingent on a price on carbon being universally adopted. When there is no price on carbon, everyone has an incentive to free ride on others taking serious action while pursuing a low-cost approach themselves. The sections of the literature review above attempt to flesh out a different, though related, topic: namely, what is the shadow price of breathing fresh air? While it is not so simple as to come up with a figure per breath, we can use the information summarized above to inform how we approach the topic of breathing clean air. As we have seen, smoking is costly, and cessation has real economic benefits. Pricing fresh air is, similarly, not far removed from introducing a price on carbon.
IV: Conclusion
This paper looks at the economics of smoking cessation and the damages of smoking because it is one of the few areas of the literature with established facts, studies, and analyses on what I described earlier, namely, the cost of breathing polluted air. This is an attempt to get at the social cost, for humans, of dealing with air pollution from an environmental economic standpoint. The analysis serves as a proxy for a further discussion and examination of the social cost of carbon, which itself has a growing base of literature, and to which I hope this is a modest but helpful addition. Putting a 'price' on a breath of clean air may seem odd to the layperson, but as I have attempted to detail, it would be invaluable in promoting the prevention and mitigation of damages to people and the environment. What we face is one group of people, though removed in time and space, indirectly harming another group of people through their actions, whether initially unknowingly or, more recently, through lack of education or apathy. To combat this, we must first and foremost educate, and subsequently advocate, for change at a systemic level. This can begin with putting a price on carbon and/or pricing fresh air.
A concept of restorative justice could also be incorporated into the framework I have proposed: earmark the funds raised from the taxation of carbon emissions and redistribute them to those who have already been, and will continue to be, most affected. If we were to do this, both people and the planet would benefit. While this would not be the only method needed to abate the dramatic consequences of the climate crisis, nor would it work best in every situation, it would be a good first step along the long path toward redressing the structural oppression detailed above.
How this would actually be done is up for debate and falls outside the realm of this paper. I mention it here to ground the discussion of restorative climate justice firmly in the work of the theorist Galtung, whose notion of positive peacebuilding has been adopted by the UN and other international organizations. Further work could also be done on avenues for similar research, such as the social cost of smog in cities, or the decreased value of real estate close to pollution-emitting sources. This, while interesting and certainly within the purview of environmental economics, falls outside the realm of this paper. I leave it for future analysis.
V: References
[1] Chaloupka, Frank J. and Warner, Kenneth E. (2000), “The Economics of Smoking”, Chapter 29 in Handbook of Health Economics, Elsevier B.V., pages 1539-1627.
[2] Parrott, Steve and Godfrey, Christine (2004), “Economics of Smoking Cessation”, BMJ 328(7445): 947-949.
[3] Nuccitelli, Dana (October 21, 2018), “New study finds incredibly high carbon pollution costs”, CCL Economics Policy Network Team, Citizens’ Climate Lobby, https://citizensclimatelobby.org/new-study-finds-incredibly-high-carbon-pollution-costs/
[4] Shoven, J.B., Sundberg, J.O., and Bunker, J.P. (1989), “The social security cost of smoking”, in: D.A. Wise, ed., The Economics of Aging (University of Chicago Press, Chicago), pages 231-254.
[5] Shultz, J.K., Novotny, T.E., and Rice, D.P. (1991), “Quantifying the disease impact of cigarette smoking with SAMMEC II software”, Public Health Reports 106: 326-333.
Ethical Leadership - Public Management
Carl Mackensen
Professor Sanjay Pandey
Public Management
Final Paper
Introduction
In this piece I examine ethical leadership from a myriad of perspectives. I begin with Rainey's work on transformational and charismatic leadership. I then examine some empirical evidence and turn to authentic leadership. Spiritual leadership is articulated thereafter, followed by the situational antecedents of ethical and unethical behavior. I then examine the importance of ethical role models, as well as the significance of the ethical situation of the organization in question. Thereafter, I look at personality traits which impact ethical development and leadership, and then take up the relevance of motivation to ethical leadership. I close with a meditation on Virtue Ethics, which matters for ethical leadership because the traits of the person in question are found to be of paramount importance, and conclude with a reflection on all that I have found.
Transformational Leadership
In the 1970s, researchers were dissatisfied with the theories then in place. Research focused on exchanges and on highly quantified models and analyses. Many argued for attending to larger aspects of leadership and alternative sources of thought, including looking to history and politics and increasing qualitative research. Burns (1978) was influential here. He was able to tease apart transformational leadership from transactional leadership. Transactional leadership entails receiving support and performance in exchange for attending to worker needs and giving rewards. Transformational leaders, on the other hand, focus on subordinates’ goals and try to raise them to higher levels, with an emphasis on transcendental higher-level goals such as Maslow’s self-actualization. In doing so, they rise above individual self-interest and work toward the betterment of the community, organization, or country.
Bass was able to systematize and routinize this approach. Transformational leadership lifts us up: among followers, it shifts preoccupation from lower- to higher-order concerns. In addition, these leaders can encourage their followers to give up self-interest by demonstrating that their needs are met by, or tied to, the community or higher-order attributes. However, this modus operandi can have negative aspects. Bad transformational leadership can hurt both followers and outsiders, as evidenced by Adolf Hitler. Transformational leadership has an emotional and an intellectual component. The emotional component is charisma. The intellectual component includes paying close attention to people on an individual level, and doing so in a “benevolent, developmental, mentoring nature, as well as intellectual stimulation.” (Rainey, 2021, 365) Subordinates usually admire such leaders because they are particularly adept at what they do.
On occasion, transactional leadership is necessary to provide goals and direction, as well as rewards. Relying too heavily on such exchanges with subordinates, especially punishment, can have detrimental impacts. “Transformational leadership lifts and expands the goals of individuals, not by overemphasizing direct, extrinsic satisfaction of self-interest, but rather by inspiring new, higher aspirations. Empowerment, charisma, inspiration, individual consideration, and intellectual stimulation are all hallmarks. They don’t directly control followers but influence the climate in which they work. This leads to a concern on managing organizational culture.” (Rainey, 2021, 365)
Charismatic Leadership
Researchers have looked at how leaders can influence subordinates not merely through authority or hierarchy, but also through personal traits and attributes. There are two main schools of thought: an attributional perspective, and a self-concept theory. The attributional perspective examines charisma as a set of traits which subordinates attribute to their leaders. This causes identification with the leader, whose attributes become internalized. Followers are motivated to please their leader and to emulate him or her. In particular, followers are more likely to do this when the leader does the following:
1) Advocates a vision that is different from the status quo, but still acceptable to followers
2) Acts in unconventional ways in pursuit of the vision
3) Engages in self-sacrifice and risk taking in pursuit of the vision
4) Displays confidence in the leader’s ideas and proposals
5) Uses visioning and persuasive appeals to influence followers, rather than relying mainly on formal authority
6) Uses the capacity to assess context and locate opportunities for novel strategies
(Rainey, 2021, page 366). Situations where these traits can be built upon and capitalized on are usually where such leaders emerge.
The self-concept theory looks at the enumerated traits of leaders and followers. In addition, it derives from the way in which people prefer to maintain their self-concept, and the way in which these leaders influence that. “Leaders have charismatic effects on followers when the followers:
1) Feel that the leader’s beliefs are correct
2) Willingly obey the leader and feel affection for him or her,
3) Accept high performance goals for themselves
4) Become emotionally involved in the mission of the group and feel that they contribute to it, and
5) Regard the leader as having extraordinary abilities.
(Rainey, 2021, page 366)
Charismatic leaders achieve such effects by articulating a positive vision through good communication. They show confidence and trust in their subordinates, hold them to a high standard, and give them praise, rewards, and the resources to act. They usually take risks and make sacrifices for the larger community. As a result, subordinates have a stake in the leader's success and in what the leader does; because of this, they will endeavor to aid their leader and to work harder.
Not all aspects of charismatic leadership are good ones. People can come to rely on the leader; what, then, happens when the leader leaves? Secondly, Rainey distinguishes between “positive charismatics and negative charismatics” (Rainey, page 367). An example of the latter would be Hitler. Negative charismatics can be “self-absorbed, dependent on adulation, and excessively self-confident. They may take excessive risks and inhibit followers from suggesting improvements or pointing out problems” (Rainey, page 367).
Empirical Work
Of late, there have been a number of high-profile cases of degraded public morality in business and government. This has resulted in increased attention to whether leaders behave ethically, which is of the utmost importance both for credibility and for substantially impacting followers, and which also affects the careers of managers. Most work on this issue examines ethical leadership in the workplace. Its antecedents, outcomes, and contingencies are still largely unknown. It is for these reasons that we must focus on a behavioral and perceptual view of ethical leadership.
Early empirical work on transformational leadership usually portrayed it as positive, moral, and values based. Researchers have looked at authentic and pseudo-transformational leadership or personalized (unethical) and socialized (ethical) charismatic leadership. Here, we can examine the social versus self-oriented use of power and the morality of the means and ends to differentiate between ethical and unethical leaders.
Authentic Leadership
Authentic transformational leadership has a moral foundation and emphasizes serving the collective rather than oneself. In contrast, pseudo-transformational leaders behave immorally and focus on self-serving rather than collective goals. It is difficult for those being led to distinguish the good from the bad, because doing so requires knowledge of the leader's true intentions. Authentic transformational leadership assumes that people act on altruistic values for the good of the group, organization, or society, but this group focus can compete with broader morality: leaders could pursue what is best for the group while denying the needs of outsiders.
We can describe morality along two axes: being a moral person, and being a moral manager. The latter concerns how those in managerial and leadership roles promote ethics in the workplace. Brown and colleagues (2005, p. 120) define ethical leadership as “the demonstration of normatively appropriate conduct through personal actions and interpersonal relationships, and the promotion of such conduct to followers through two-way communication, reinforcement, and decision-making.” Questions that remain include: ethical for whom, what constitutes ethical failure, and does this include out-group members’ moral rights? (Hartog, page 412)
Exchange relationships develop through a series of mutual exchanges that yield a pattern of reciprocal obligation (e.g., Masterson et al. 2000). Over time, the norm for reciprocity leads to followers reciprocating the fair and caring treatment of ethical leaders through showing desired behaviors (e.g., Walumbwa et al. 2011).
Traits
Observers of organizations, both public and private, have long argued that personal attributes like integrity are significant for the impression of being an effective leader. Studies have corroborated this. For example, survey research has linked perceived leader effectiveness with perceptions of the leader's honesty, integrity, and trustworthiness (Den Hartog et al., 1999). Cognitive trust (the exercise of care in work, being professional and dependable; McAllister, 1995) has likewise been associated with effective styles of leadership (Dirks and Ferrin, 2002).
The interviews accompanying these surveys showed that several personal characteristics shaped perceptions of ethical leadership. These leaders were believed to be trustworthy and honest. Beyond that, ethical leaders were construed as fair-minded and principled, which impacted their decision-making. It was also perceived that they give weight to others and to society as a whole, and that they are ethical both personally and professionally. Those who conducted the study construed these attributes as constituting a moral person, which is a key aspect of ethical leadership. Such a designation reflects the belief or perception of motivation, character, and traits.
In addition, this work showed another significant component of ethical leadership: what Treviño and colleagues described as the moral manager component. This attribute of leading ethically reflects the person’s attempts to influence subordinates’ ethical or unethical behavior. These leaders incorporated ethics as a pronounced aspect of their leadership style by communicating ethics and values in their messaging. They observably and purposefully role-model good behavior and utilize a rewards schema to hold subordinates accountable for good behavior. “Such explicit behavior helps the ethical leader to make ethics a leadership message that gets followers' attention by standing out as socially salient against an organizational backdrop that is often ethically neutral at best.” (Treviño et al., 2000; Treviño et al., 2003)
Authentic leaders are “individuals who are deeply aware of how they think and behave and are perceived by others as being aware of their own and others' values/moral perspective, knowledge, and strengths; aware of the context in which they operate; and who are confident, hopeful, optimistic, resilient, and high on moral character” (Avolio, Luthans, & Walumbwa, 2004, p. 4). Luthans and Avolio (2003, p. 4) view authentic leadership as a “root construct” that “could incorporate charismatic, transformational, integrity and/or ethical leadership.” But they also argue that these constructs are distinct from each other.
“Self-awareness, openness, transparency, and consistency are at the core of authentic leadership.” (Brown et al., 2006, page 599) It is also paramount to give weight to beneficial end goals and values and to have concern for others, instead of being beholden to self-interested motivations. “Authentic leaders show good character and virtue traits such as hope, optimism, and resiliency.” (Brown, 2006, page 599) This leadership style is comparable to ethical leadership, but some key aspects differ, including authenticity and awareness of oneself. “Authenticity, or being true to oneself, was rarely if ever mentioned in the interviews conducted by Treviño & colleagues (2000) about ethical leadership.” (Brown et al., 2006, page 599)
Spiritual Leadership
Spiritual leadership is an altogether different leadership style. It “is comprised of the values, attitudes, and behaviors that are necessary to intrinsically motivate one's self and others so that they have a sense of spiritual survival through calling and membership” (Fry, 2003, p. 711) and “is inclusive of the religious- and ethics and values-based approaches to leadership” (Fry, page 693). Alternatively, spiritual leadership has also been described as “occurring when a person in a leadership position embodies spiritual values such as integrity, honesty, and humility, creating the self as an example of someone who can be trusted, relied upon, and admired. Spiritual leadership is also demonstrated through behavior, whether in individual reflective practice or in the ethical, compassionate, and respectful treatment of others” (Reave, 2005, p. 663).
Situation and Context
In addition to individual influences, we can examine situational influences, the precursors of ethical leadership. These include “ethical role modeling, the organization's ethical context, and the moral intensity of the issues that the leader faces in his or her work.” (Brown, 2006, page 600) Both leaders and followers can learn from role models. “By observing an ethical role model's behavior as well as the consequences of their behavior, leaders should come to identify with the model, internalize the model's values and attitudes, and emulate the modeled behavior (Bandura, 1986).”
Those interviewed by Treviño et al. (2000) put forth that visibly observing an ethical role model was highly relevant as a precursor of ethical leadership. To examine ethical role modeling, Weaver, Treviño, & Agle (2005) interviewed people who had been mentored at work by role models who behaved ethically. Attributes relevant for learning from ethical role models, including “caring, honesty, fairness and behaviors such as setting high ethical standards and holding others accountable were similar to those previously associated with ethical leadership. But, interviewees also identified some characteristics of ethical role models that differed from those previously associated with ethical leadership such as willingness to turn mistakes into learning experiences and humility.” (Brown, 2006, page 600) Weaver and colleagues called ethical role modeling a “side by side phenomenon” because “ethical role models are well known by their daily conduct and interactions — the way they behave and the way they treat other people” (Weaver et al., 2005, p. 12).
Role Models
Brown & Treviño (2006b) looked into the impact of three different types of role models insofar as they influence ethical leadership: early childhood role models, career mentors, and top managers. They found that having experienced an ethical mentor in one's professional life was positively related to ethical leadership. Leaders who reported having had an ethical role model in the workplace were significantly more likely to be seen as ethical by their subordinates. Early childhood role models, however, as well as top-management ethical role modeling, had no relationship to ethical leadership. This is in line with what Weaver et al. (2005) found, and it is intuitive because “early childhood ethical role models would not necessarily have modeled behavior relevant to leadership in the workplace.” (Brown, 2006, page 600) This led the authors to conclude that “Being able to identify a proximate, ethical role model during one's career is positively related to ethical leadership.” (Brown, 2006, page 601)
Ethical Situation
Of additional importance to ethical leadership is the organization's ethical situation (Treviño, Butterfield, & McCabe, 1998). Most research has examined ethical climate (Victor & Cullen, 1988) and ethical culture (Treviño, 1990). Ethical climate refers to “the prevailing perceptions of typical organizational practices and procedures that have ethical content” or “those aspects of work climate that determine what constitutes ethical behavior at work” (Victor & Cullen, 1988, p. 101). Treviño (1986) proposed “ethical culture as a subset or slice of the organization's overall culture that can moderate the relationship between an individual's moral reasoning level and ethical/unethical behavior.” (Brown, 2006, page 602) The overall culture of an organization exerts less influence on those at higher levels of moral development.
Treviño, Weaver, Gibson, & Toffler (1999) determined that cultural components (including leadership and reward structures that uplift ethical behavior, fair treatment of employees, the inclusion of ethics in everyday decision-making, and an employee-minded orientation) all aided prosocial, ethics-based behaviors and attitudes. Of particular note for ethical culture is the system of rewards which encourages ethical or unethical actions (Treviño et al., 1999).
Ethical leadership comes about in scenarios in which an ethical culture and context are upheld. Role models, policies, and norms can all be helpful toward this end. In such settings, people become habituated to the idea that ethical behavior has benefits and corresponds to promotion and good outcomes. If an organization lacks an ethical culture and context, and instead fosters unethical behavior, those in positions of power take actions in line with the ethos of their organization; this corresponds to unethical leadership. This yields a second maxim: “An ethical context that supports ethical conduct will be positively related to ethical leadership.” (Brown, 2006, page 602)
Moral Intensity
“Moral awareness (recognizing the moral aspects of a given situation) is a first interpretive step in the ethical decision-making process (Jones, 1991; Rest, 1986).” (Brown, 2006, page 602) In order to act ethically, we must first comprehend that a particular situation has an ethical aspect. The intensity of moral situations is more explored in business studies than in other disciplines; it includes how serious the consequences are, as well as social consensus (Brown, 2006, page 602). Should someone in a position of power confront a situation with serious consequences for their behavior, and act well, subordinates will take note and emulate that behavior. People look to leaders in serious and intense scenarios. Brown “proposes that morally intense situations will interact with the ethical context to influence ethical leadership. Specifically, morally intense situations will enhance the relationship between ethical contexts and ethical leadership.” (Brown, 2006, page 602) This leads to another maxim, namely: “Moral intensity (magnitude of consequences and social consensus) enhances the relationship between ethical context and ethical leadership.” (Brown, 2006, page 602)
Personality Traits
Aside from situational elements, personality traits also influence ethical leaders. The Five Factor Model (Tupes & Christal, 1961) is of note. “The Five Factor (or Big Five) typology conceptualizes personality as clusters of traits that are organized within five dimensions: agreeableness (describing someone who is altruistic, trusting, kind and cooperative), openness (imaginative, curious, artistic, insightful), extraversion (active, assertive, energetic and outgoing), conscientiousness (dependable, responsible, dutiful, determined), and neuroticism (anxious, hostile, impulsive, stressed).” (Brown, 2006, page 603) Meta-analysis has determined that extraversion and openness to experience are the traits most strongly associated with general leadership effectiveness (Brown, 2006, page 603), that conscientiousness and extraversion are associated with leader emergence (Judge, Bono, Ilies, & Gerhardt, 2002), and that neuroticism and agreeableness are only weakly related to leadership (Judge et al., 2002).
It has been proposed, though this is debated, that agreeableness is the personality trait most strongly associated with ethical leadership, “because it incorporates being trusting, altruistic, and cooperative.” (Brown, 2006, page 603) “By definition, ethical leaders are altruistically motivated, caring, and concerned about their followers and others in society (Treviño et al., 2003).” Conscientious people show self-control, plan carefully, and are well organized and reliable (Brown, 2006, page 603). “Low scorers are not necessarily lacking in moral principles, but they are less exacting in applying them” (Costa & McCrae, 1992, p. 16). To be seen as ethical leaders, conscientious individuals must articulate forthright standards and principles, and be willing to apply them not just to their subordinates but also to themselves. “Neuroticism is negatively associated with ethical leadership” because it reflects a leader’s tendency to let negative emotions such as anger, fear, and anxiety take control (Brown, 2006, page 603); neurotic individuals tend toward hostility.
Motivation
How and why people are motivated as leaders is also a serious consideration in what constitutes ethical leadership. McClelland's (1975, 1985) “theory of motivation specifies that individuals are driven by three main motives— the power motive (the need to influence others), the achievement motive (the desire to accomplish something better or more efficiently than it has been done previously), and the affiliation motive (the desire to have positive relationships with others).” (Brown, 2006, page 603) “Research suggests that a high need for power, a moderate need for achievement, and a moderate to low need for affiliation are associated with leader effectiveness.” (McClelland & Boyatzis, 1982) With respect to the need for power, there is a distinction between those who seek power for self-aggrandizement and those who want to help others. “Research by Howell & Avolio (1992) revealed important differences between socialized and personalized charismatic leadership, with the former being the more ethical of the two styles of leadership.” (Brown, 2006, page 604)
Virtue Ethics
Much of the discussion we have examined heretofore describes the traits that make a leader ethical. This way of seeing the world has its roots in Aristotelian Virtue Ethics, so it is worth examining Virtue Ethics to a degree before we conclude. Aristotle asks: what attributes make someone a virtuous person? He argues that virtue is what all humans should aspire toward, and that to achieve it people must follow a life of reason. In Modern Moral Philosophy, Anscombe argues that contemporary secular philosophy has gone far afield of its traditions. On her account, modern moral philosophy's talk of moral law is incoherent, because there is no longer anyone understood to issue the laws we are meant to follow.
Virtues and vices are dispositions that come about through habit. Virtues are desirable, and vices undesirable. Virtues lead to good character, from which good actions spring, and the opposite is true of vices. For Aristotle, virtue lies at the mean between two vices: courage sits between the vices of foolhardiness and cowardice. Courage was believed to be the primary virtue needed by people, because it is required to begin the journey toward acquiring the other virtues. Geach, however, did not accept this so readily. He said, “Courage in an unworthy cause is no virtue; still less is courage in an evil cause. Indeed I prefer not to call this non-virtuous facing of danger ‘courage.’” (Geach, page 114) For Geach, there may be deeds which appear full of courage but which are actually vices, or actively bad. Plato's Euthyphro raises something similar: a situation in which a son must prosecute his own father in a murder trial. In that dialogue, Socrates presses Euthyphro back and forth on whether this prosecution should take place. It could be put forward that the virtue of being a good family member outweighs the duty to prosecute a killing (Tredennick et al., pages 19 to 41).
Why is it desirable to have virtues? This depends on the virtue being discussed. Rachels summarizes Aristotle's view: "virtues are important because the virtuous person will fare better in life" (Rachels, page 178). Aristotle's work, therefore, can be seen as a love letter to the virtuous life, or a life of human flourishing. It is more than simply what one should and shouldn't do. Not all people need have the same virtues, however. Nietzsche said, "How naïve it is altogether to say: 'Man ought to be such-and-such!' Reality shows us an enchanting wealth of types, the abundance of a lavish play and change of forms – and some wretched loafer of a moralist comments: 'No! Man ought to be different.' He even knows what man should be like, this wretched bigot and prig: he paints himself on the wall and comments, 'Ecce homo!' (Behold the man!)." (Kaufmann, page 491).
As people have variegated dispositions and attributes, the flourishing described depends on each individual nature. Aristotle responds that certain virtues are always needed, irrespective of time, place, and circumstance, because there are commonalities between all humans. Much of Aristotle's work focused on friendship and political involvement. To him the basic description of the human was the 'zoon politikon', or social creature. For Aristotle human flourishing is at its highest when it is pursued with the goal of ameliorating the human condition. In the end, character is the primary concern for virtue ethicists.
Conclusion
I have covered a lot of ground in this piece and have attempted, through my research and literature review, to deconstruct what goes into making a moral leader. It is not so simple as to paint one person as good and another as evil; there is a complex interplay between personality, situation, context, traits, role models, and motivation. What can be said, however, is that ethical leadership is becoming an important consideration not just for public service organizations but increasingly for businesses as well. The need to steward the Earth through the climate crisis, and all that that entails, is motivation enough for organizations and hierarchies of every kind to reexamine their foundations and build them up to meet the challenge that faces us. Good behavior is rewarded, and though this should not be our primary motivation, perhaps it is a good enough foot-in-the-door technique to begin the process by which leaders become ethical. It is certainly high time, with all that we have witnessed in recent years, that leaders take these findings seriously. I can only hope that this morally minded disposition takes root and grows, as does our culture and society.
References
Rainey, Hal G. et al, (2021). Understanding and Managing Public Organizations, Sixth Edition, John Wiley and Sons, Hoboken, New Jersey.
Brown, M.E. and Treviño, L.K., (2006), Ethical leadership: A review and future directions, The Leadership Quarterly, 17, pp. 595-616
Brown ME, Treviño LK, Harrison DA. (2005). Ethical leadership: a social learning perspective for construct development and testing. Organ. Behav. Hum. Decis. Process. 97, 117–34
Masterson SS, Lewis K, Goldman BM, Taylor MS., (2000). Integrating justice and social exchange: the differing effects of fair procedures and treatment on work relationships. Acad. Manag. J. 43:738–49
Walumbwa FO, Mayer DM, Wang P, Wang H, Workman K, Christensen AL., (2011), Linking ethical leadership to employee performance: the roles of leader–member exchange, self-efficacy, and organizational identification. Organ. Behav. Hum. Decis. Process. 115:204–13
Den Hartog DN, House RJ, Hanges PJ, Ruiz-Quintanilla SA, Dorfman PW., (1999), Culture specific and cross-culturally generalizable implicit leadership theories: Are attributes of charismatic/transformational leadership universally endorsed? Leadersh. Q. 10:219–56
Den Hartog, Deanne N., (2015), Ethical Leadership, Annu. Rev. Organ. Psychol. Organ. Behav., 2:409–34
D.J. McAllister, (1995), Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations, Academy of Management Journal, 38, pp. 24-59
K.T. Dirks, D.L. Ferrin, (2002), Trust in leadership: Meta-Analytic findings and implications for research and practice, Journal of Applied Psychology, 87, pp. 611-628
L.K. Treviño, L.P. Hartman, M. Brown, (2000), Moral person and moral manager: How executives develop a reputation for ethical leadership, California Management Review, 42, pp. 128-142
L.K. Treviño, M. Brown, L.P. Hartman, (2003), A qualitative investigation of perceived executive ethical leadership: Perceptions from inside and outside the executive suite, Human Relations, 55, pp. 5-37
L. Reave, (2005), Spiritual values and practices related to leadership effectiveness, The Leadership Quarterly, 16, pp. 655-687
L.W. Fry, (2003), Toward a theory of spiritual leadership, The Leadership Quarterly, 14, pp. 693-727
B. Avolio, F. Luthans, F.O. Walumbwa, (2004), Authentic Leadership: Theory Building for Veritable Sustained Performance, Working paper, Gallup Leadership Institute, University of Nebraska, Lincoln
K.S. Cameron, J.E. Dutton, R.E. Quinn (Eds.), (2003), Positive Organizational Scholarship., Berrett–Koehler, San Francisco
A. Bandura, (1986), Social foundations of thought and action, Prentice–Hall, Englewood Cliffs, NJ
Brown, M. E., & Treviño, L. K. (2006b). Role modeling and ethical leadership. Paper presented at the 2006 Academy of Management Annual Meeting. Atlanta, GA.
G.R. Weaver, L.K. Treviño, B. Agle, (2005), “Somebody I look up to”: Ethical role models in organizations, Organizational Dynamics, 34, pp. 313-330
L.K. Treviño, G.R. Weaver, D.G. Gibson, B.L. Toffler, (1999), Managing ethics and legal compliance: What hurts and what works, California Management Review, 41, pp. 131-151
B. Victor, J.B. Cullen, (1988), The organizational bases of ethical work climates, Administrative Science Quarterly, 33, pp. 101-125
L.K. Treviño, (1986), Ethical decision making in organizations: A person–situation interactionist model, Academy of Management Review, 11, pp. 601-617
L.K. Treviño, K.D. Butterfield, D.M. Mcabe, (1998), The ethical context in organizations: Influences on employee attitudes and behaviors, Business Ethics Quarterly, 8, pp. 447-476
L.K. Treviño, (1990), A cultural perspective on changing organizational ethics, in R. Woodman, W. Pasmore (Eds.), Research in Organizational Change and Development, JAI Press, Greenwich, CT, pp. 195-230
T.M. Jones, (1991), Ethical decision making by individuals in organizations: An issue contingent model, Academy of Management Review, 16, pp. 366-395
J.R. Rest, (1986), Moral development: Advances in research and theory, Praeger, New York
T.A. Judge, J.E. Bono, R. Ilies, M.W. Gerhardt, (2002), Personality and leadership: A qualitative and quantitative review, Journal of Applied Psychology, 87, pp. 765-780
E.C. Tupes, R.E. Christal, (1961), Recurrent personality factors based on trait ratings (Tech. Rep. ASD-TR-61-97), U.S. Air Force, Lackland Air Force Base, TX
P.T. Costa Jr., R.R. McCrae, (1992), Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual, PAR, Odessa, FL
D.C. McClelland, (1975), Power: The inner experience, Irvington, New York
D.C. McClelland, (1985), Human Motivation, Scott, Foresman, Glenview, IL
D.C. McClelland, R.E. Boyatzis, (1982), Leadership motivation pattern and long term success in management, Journal of Applied Psychology, 67, pp. 737-743
J.M. Howell, B.J. Avolio, (1992), The ethics of charismatic leadership: Submission or liberation?, Academy of Management Executive, 6, pp. 43-54
Aristotle, (2020), The Nicomachean Ethics, translated by Adam Beresford, Penguin Classics
Anscombe, G.E.M., (1958), Modern Moral Philosophy, Philosophy, 33(124), pp. 1-19
Geach, Peter, (1977), The Virtues, Cambridge University Press, Cambridge
Hugh Tredennick and Harold Tarrant, (2003), Plato: The Last Days of Socrates, Penguin Books, New York
James Rachels, (2019), The Elements of Moral Philosophy, Ninth Edition McGraw-Hill Education, New York, NY
Nietzsche, Friedrich, (1954), Twilight of the Idols, in Walter Kaufmann (trans.), The Portable Nietzsche, Viking Press, New York
Literature of Policy Final
Section One: Introduction
Yehezkel Dror noted that the government reform movements of the 1960s, and subsequently policy analysis, included an 'economic approach to public decision making.' In this piece, I first examine the historical development that led up to this reform and the antecedents of the economic way of thinking, pointing out the forefathers of this approach: people who theorized, discussed, promoted, and advocated for efficiency, the understanding of motivation, the use of market mechanisms in policy problems, the specialization of the workforce, and so on. In Section Three I articulate the pros and cons of this revolution and development, first by clearly stating the benefits and costs, and then by examining the SO2 Cap and Trade Marketable Permit Scheme, a fundamental and paradigmatic example of the type of policy economists advocate. I then conclude with a reflection on this topic.
Section Two: The Historical Development of the Economic Mindset
In this section I will endeavor to detail the historical development of the economic approach in the field of public policy. A number of key concepts were antecedents of the full-throated use of economics, including specialization, efficiency, rationality, the use of science, social equity, motivation, and empirics. I will also give voice to the counterarguments and responses that developed in reaction to the economic way of thinking.
I: Early Public Administration and Antecedents
Going back to Rabin and Bowman are the concepts that the administration is specially trained and separate both from the people and from politics (classnotes, 1/30/23). Specialization is a key aspect of economic thinking. Further, the idea that administration is more of a behavioral problem than a structural one is also relevant: it precedes an economic approach to policy because the forefathers of behavioral approaches viewed administration as fundamentally about humans, the basis of economics, though this had not yet been articulated. That administration must be modernized and enlightened, using more empirics and emphasizing science and efficiency, is also of note. The placement of faith in numbers, or an empirical approach, is a precursor to economic thinking.
Wilson is also clearly relevant, with his emphasis on efficiency and science and his claim that administration is a business devoid of politics. The theorists of the early 20th century saw administration as a specialized profession. In response to the spoils system, they sought to enlighten and ennoble administrators by making them a separate class with specialized functions. When this did not work out, the scientific basis of the field was questioned. Wilson, writing after the Civil War, wanted administration to be more like a business (Rabin and Bowman). In his piece "The Study of Administration," Wilson asked what government can do, and how to do it with efficiency, a clear precursor of economic thinking. He emphasized the science of administration by looking at government in action, treating administration as a business with politics removed. Wilson wrote in a particular context. The frontier was closing; the previous solution to conflict had simply been to move west, and for the first time Americans had to confront one another as open space dried up. Urbanization was beginning, with all that it entailed: crime, environmental issues, manufacturing, technological change, the railroad, infrastructure, new industries, immigration, internationalization, and income inequality. The Politics-Administration Dichotomy is summarized by the position that politics should be separate from administration: the elected decide on a policy, and the unelected execute it. This specialization is at the heart of modern economics. Wilson argued that we can learn from efficient countries regardless of whether they are oppressive. This preoccupation with efficiency is also a precursor to economic thinking.
Goodnow, in his piece "Politics and Administration," emphasizes that the scientific, technological, and commercial elements of government lie in the administration, which is a basis of economic thought. White's work "Introduction to the Study of Public Administration" is also relevant, as it states that there is one process of administration. Again, efficiency is emphasized, as is the specialization of tasks and people. With the Industrial Revolution, laissez faire was dropped as new problems arose. Social cooperation was needed, with the state intervening for the weak. This is a new sort of economic thinking: that of welfare economics.
The emphasis by Friedrich on keeping bureaucracy effective is similar. Experts, science, and empirical data are privileged, and objectivity is held to govern decision making. Both Finer and Friedrich emphasize a reliance on experts and science: for Finer, they are restrained by the elected; for Friedrich, by norms. Both make assumptions about human nature, which is again what economics does; for Finer, it is negative, for Friedrich, positive. They set up the conditions for the 'man of reason.' They privilege expertise and empirical data over experience, folk knowledge, and rules of thumb. This is an emphasis on pure rationality that assumes self-interest and uses science. At its core this is based in the US Constitution, or the notion that the typical American is a 'man of reason.'
The debate between the Federalists and Anti-Federalists points to a division in the precursors to economic thinking. The debate pits a strong central government against states' rights, and fundamentally involves two different views of society. The question is: how are we meant to govern ourselves? The Federalists wanted government driven by reason, objectivity, and capitalism; the Anti-Federalists wanted governance through local associations, in-person discourse, and debate, built bottom up rather than top down. Both are indicative of different economic frameworks which would flourish later in the 20th century.
Stivers's notion of Bureau Men, who are guided by science, efficiency, and fixing the system, is important and exemplifies the economic tradition. This debate took place during the Industrial era. Following women's enfranchisement, public administration became more technical and efficient. It was also ruled by a specialized class, or the elite. The response by Wamsley and Wolf, that public equity matters more than efficiency, heralds a different governing economic model.
II: Scientific Management
The notion of "Scientific Management," developed by Taylor in the interwar period, is clearly influential. During this time, the federal government was growing, and Scientific Management was traditional public administration on steroids. The meritocracy that Taylor envisioned after his trip to Europe in the 1880s clearly has roots in economic theory. It is in essence a carrot-and-stick approach (classnotes, 2/13/23): you break down a task, and objective measurement of tasks yields less argument and favoritism. Under Scientific Management, you get buy-in from workers, people specialize, and knowledge is accrued, leading to an optimal output, all of which is connected to economics. The selection and development of workers, and the combination of science with trained workers, are also important, as is the emphasis on collaboration. The move from rules of thumb to concrete measurements is likewise indicative of economic thinking, and that many business schools took up Scientific Management shows how deeply entrenched in Economics it is. In the public sector, it focused on waste and inefficiencies. However, there were critiques that this style was too rigid, overly reliant on management, dehumanizing, and caused overwork.
For Weber, the levels of authority, management, and hierarchy are of note, as are specialization and following rules that apply to all. Gulick's work "Notes on the Theory of Organization," which emphasizes that the division of work and specialization are the foundation of organizations, is significant, as are his points that organizations need authority and that there are limits of time, energy, and knowledge. This brought Scientific Management into the public sector. As the head of a New York bureau, Gulick differed from Wilson: Wilson wanted to stop specific abuses, such as the spoils system, while Gulick advocated restructuring, rationalizing, professionalizing, and remaking government. Operating under constraints is clearly an economic issue; the division of labor and specialization are clearly economic concepts, as is the emphasis on technical efficiency. Gulick argues that both top-down and bottom-up structures are needed. This movement started in the private sector, moved to public sector academics, and lastly to the public sector itself.
The accounting practices detailed by Rosenthal set the stage for Scientific Management to come to the fore. They emphasized control, hierarchy, numerical outputs, efficiency over lived experience, and numbers as abstractions of people. Barnard's principles of cooperative action, being effectiveness, efficiency, and relation to motives, are also of note, and Simon's preoccupation with efficiency and specialization is important.
III: The Human Relations Movement
The Human Relations Movement of the 1920s to 1940s offers an interesting reaction to Scientific Management. Before this, business studies predominated. Emphasis on the psycho-social was undertaken during World War I and World War II. During these wars, there was a large increase in the bureaucracy on both the left and the right. Notably, women entered the workforce, and the theoretical field became richer. At its core, HR looked at human motivation. This is very much in keeping with economics. Previously, a rational agent had been assumed, going back to philosophers such as John Stuart Mill and Adam Smith. Now, questions like what inspires public service and what makes someone perform came to the fore. Motivation is key to economics, and is traditionally thought of strictly in terms of the carrot and stick. Maslow's Hierarchy of Needs was an important step, identifying that needs begin with the most basic physiological needs and continue up through the emotional to self-esteem and self-actualization. McGregor is also of note. He argued that we are at a state of economic and political development that allows us to manage people differently, because they want to self-actualize and have self-esteem; work became the means to do this. Decentralization was emphasized, putting control lower down the hierarchy, which is in keeping with leftist economics. Merton's position that we should examine values and how we pay for things, and that a budget is a statement of values, is also relevant to economics.
Barnard emphasized principles of cooperative action, in keeping with welfare economics. Effectiveness and efficiency were emphasized, and the question of how an organization satisfies its motives was answered with efficiency. Elements of formal structure include incentives and authority, and cooperation is vital. Mayo in the 1930s, with his "The Human Problems of an Industrial Civilization," wrote pre-New Deal, when capitalism reigned unfettered. Researchers challenged business economic mindsets and showed that there is a physical side to people's work as well. The factory-ization of work, a very economics-based movement, led to monotony and fatigue, as well as melancholy, income inequality, and globalization. More material wealth and less community was also common. The social world disappeared, and the state took this up.
IV: Iron Triangles
The 1960s to 1970s saw the rediscovery of politics with iron triangles. I won't describe that phenomenon in detail, as it lies outside the purview of this piece, though I will explore some of the relevant thinking of the theorists of that time. Wilson asked why private and public sector bureaucracies are so different. Bureaucracy was seen as out of control. Simon bridged the gap between Scientific Management, HR, and the new politics. In the iron triangle setup, people generally act in self-interest, which is in line with economic thinking. Waldo examined how different events impacted the rise of public administration, specifically how the wars led to the expansion of PA. The governing ideology was science and efficiency, very much in keeping with economics. He argued for a more decentralized administration and less hierarchy. He also noted that PA deals with the good life, though to him it is not a rational exercise: not economics but political theory. He questioned what is public and what is private. Values were important, and he distinguished between management (scientific) and administration.
Lowi begins with the end of Capitalism, putting forward that an economy can't self-regulate. There would be a transition from iron triangles to issue networks, where organized networks get what they want. "Interest group liberalism is socialism for the organized and Capitalism for the unorganized." (classnotes, 3/6/2023). For Derthick and Quirk, iron triangles and issue networks inherently exposed bounded rationality, an economic concept that would come to the fore in the coming years.
V: Policy Analysis
Policy Analysis was another reaction that took theorists in a different direction. Values were emphasized, rather than process and procedure: what you are efficient at matters, where before efficiency in general was its own end. It found its roots in PPBS, which began in World War II alongside computers and new analytical methods, and it sought to allocate resources more effectively. The 'whiz kids' at the Pentagon said that government was not allocating resources rationally and that budgeting ought to be done around programs. It was also argued that the federal government should be more rational. In the 1970s, the Ford Foundation got involved, finding that traditional PA was not analytical enough. The basic idea was that organizations would train public servants in the analytical tools of the social sciences, specifically Economics and Political Science, as well as operations research, or optimization. MPP graduates were meant to be like MBAs, taught how to think and how to approach problems. In the 1980s, MPP programs took off. APPAM was founded in the 1980s, and within it there was little management, with applied economics predominating. (classnotes, 3/20/2023)
VI: Thinking Like an Economist
In her work "Thinking Like an Economist," Elizabeth Popp Berman details the advent of economic-style thinking in PA. Between the 1960s and 1980s, economics-style reasoning proliferated and shaped the structure and implementation of federal programs, including "healthcare, environmental, housing, transportation, antitrust regulation and other forms of market governance." (Ngumbah, 2023) This reasoning was proclaimed as value-neutral and based in economic logic; efficiency was paramount, at the expense of equality and other values. "The economic style of reasoning is a loose approach to policy problems that are grounded in the academic discipline of economics but has travelled well beyond it. It is often perceived as politically neutral, but it nevertheless contains values of its own - like choice, competition and especially efficiency" (Berman 2022, Page 4). In this account, economic thinking became prominent across social policy, market governance, and social regulation. It began in the 1950s and influenced policymaking between 1965 and 1985, starting in academia with economics PhD programs graduating students.
Two intellectual communities came into being between 1960 and 1980: the systems analysts of the Research and Development (RAND) Corporation and the industrial organization economists. The former asked how government should make decisions; the latter, how we should govern markets. Their antecedents were the institutional economists and the macroeconomists. The former began in the 1930s but had diminished influence post-WWII; the latter also arose in the 1930s, peaked in the 1960s, and approached the entire economy, focusing on "employment levels, economic growth, inflation rates, and business cycles" (Berman 2022, Page 25). The Planning Programming Budgeting System (PPBS) began with the large-scale goals of agencies, found programs that could be used to realize these goals, quantified the cost-effectiveness of these programs, and used that to influence budgeting (Berman 2022, Page 43). This faction was strongly linked with the Kennedy Administration and aimed to better government by improving budgets. PPBS's influence waned after the 1970s, but the links between economists and policymakers remained. The industrial organization economists, from Harvard and Chicago, began the law and economics developments of the 1960s and sought to institutionalize an economic approach favoring deregulation. The governance of markets was reprioritized, with emphasis on efficiency and deregulation and with corporate power and market stability ignored. Government increased regulation in the 1970s, and as a result those seeking deregulation coupled with economists pushing for efficiency and cost-benefit analysis. By the 1980s and 1990s economists sought to improve government through tools such as the SO2 Cap and Trade Marketable Permit Scheme, which will be examined in Section Three. It is important to note that the economic way of thinking places no emphasis on the non-measurable, such as concepts like justice, ecology, equity, diversity, and inclusion. This allied well with Republicans, with Democrats being sidelined.
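The PPBS procedure described above, taking an agency goal, quantifying each candidate program's cost-effectiveness, and letting the result drive budgeting, can be sketched in a few lines. This is a hypothetical illustration only: the program names and figures below are invented for the example, not drawn from Berman or any real budget.

```python
# A minimal sketch of the PPBS logic: rank alternative programs that serve
# the same agency goal by cost per unit of outcome. All data are invented.

programs = {
    "job_training_A": {"cost": 4_000_000, "outcome": 2_500},  # e.g. trainees placed
    "job_training_B": {"cost": 3_000_000, "outcome": 1_200},
    "job_training_C": {"cost": 5_500_000, "outcome": 4_000},
}

def cost_per_outcome(p):
    """Dollars spent per unit of measured outcome (lower is better)."""
    return p["cost"] / p["outcome"]

# Rank programs from most to least cost-effective to guide the budget.
ranking = sorted(programs, key=lambda name: cost_per_outcome(programs[name]))
print(ranking)
```

The sketch also makes Berman's critique concrete: only what appears in the `outcome` field can count, so anything non-measurable drops out of the ranking entirely.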
VII: Political Economy and Public Choice
Political Economy and Public Choice then came to the fore. Rational Choice Theory from economics was taken up by political science: we all try to maximize self-interest, it was argued. There are many assumptions, including a static human nature, that we know our preferences, that preferences are fixed and stable, and that they change only slowly. The adaptation to political science started with Anthony Downs and his Median Voter Model, in which voters gravitate towards the middle and parties do the same; the parties' self-interest is to maximize votes. In his piece "An Economic Theory of Democracy," Downs articulates Rational Choice Theory with two hypotheses: citizens behave rationally, and representatives try to get the most votes. Voters have rational preferences: they establish which party they prefer and vote accordingly. There is uncertainty, however, and voters can be influenced; information helps curb this. Voters will consume information until its marginal cost equals its marginal benefit. Parties try to maximize votes, but uncertainty restricts them. Rationality is assumed. One criticism is that people don't behave rationally. Another assumption is that elections are markets: voters rely on cues about which product to buy, and representatives buy votes with policy. Niskanen, in his piece "Bureaucracy and Representative Government," offers a supply-side model of public services in which the bureaucrat maximizes utility. The social outcome is that bureaucracy is too big, with too many public projects.
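Downs's Median Voter Model can be illustrated with a small sketch. The voter ideal points below are invented for the example; the point being demonstrated is the standard one: with single-peaked preferences on one left-right dimension, a platform at the median voter's ideal point is never beaten in a two-candidate contest, which is why vote-maximizing parties converge to the middle.

```python
# Illustration of the Median Voter Model: each voter supports the platform
# closest to their ideal point on a 0-1 left-right spectrum (invented data).

def votes_for(platform, rival, voters):
    """Count voters strictly closer to `platform` than to `rival`."""
    return sum(abs(v - platform) < abs(v - rival) for v in voters)

def beats_or_ties(platform, rival, voters):
    """True if `platform` wins at least as many votes as `rival`."""
    return votes_for(platform, rival, voters) >= votes_for(rival, platform, voters)

voters = [0.1, 0.2, 0.35, 0.5, 0.55, 0.7, 0.9]  # ideal points
median = sorted(voters)[len(voters) // 2]        # the median voter's position

# The median platform is never beaten by any rival position tried here.
for rival in [0.0, 0.25, 0.6, 1.0]:
    assert beats_or_ties(median, rival, voters)
```

Against the extreme platform 0.0, for instance, the median platform wins five of the seven voters, which is the mechanism behind Downs's claim that both parties drift to the center.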
VIII: Street Level Bureaucrats
The study of street-level bureaucrats is an extension of the HR movement, which asked how people behave inside organizations; this literature asks how they behave in the external environment. It looks at citizens and sub-populations. The big questions are: how do bureaucrats serve stakeholders, what drives the behavior of a bureaucracy, how can we understand motivations better, are bureaucrats agents of citizens or regulators, and how are new populations served? The attempt to better understand motivations is key to economic thought. Lipsky's piece "Street Level Bureaucracy" contains some HR and some economic analysis. Fundamentally, the piece sought to understand bureaucracy. The main problem is the scarcity of resources, a very economic issue. The street-level bureaucrat (SLB) serves the population and makes choices. That bureaucrats are connected to each other can result in a better allocation of resources, and this structure promotes decentralization, another economic concept. When resources are scarce, there is a tension between the need to scientifically manage and the need to deal with people's issues as they arise. When SLBs rationalize, they pass on cost, time, energy, and red tape. New Public Management deals primarily with economics, whereas New Public Service deals with rights.
IX: New Public Management
New Public Management, or NPM, is a response to what came before, but is fundamentally 'its own thing' (classnotes, 4/12/23). The implications of the classical model are that we seek self-interest and that we want to put ourselves out of business. In the 1970s there was stagflation, the co-occurrence of high unemployment and high inflation, driven primarily by the oil supply shocks. This gave birth to NPM, an economically minded way of doing government. Market mechanisms were relied on; contracting out was common, as was increased competition between agencies. Citizens were seen as customers rather than as citizens with a stake in democracy, which is either a strength or a flaw, depending on whom you ask. Contracting out decreased the number of government employees, though government continued to grow. NPM started in the Commonwealth countries, then moved to academia in the USA, and then was put into practice; it was fundamentally popularized during the Clinton Administration. Some examples of the rhetoric surrounding it are Gore's ashtray example and the space pen example. It was seen as a solution to many problems: it was pro-consumer, brought prices down, eliminated waste, was non-inflationary, and appealed to conservatives because of smaller government. It also opened up new areas of research in academia.
Government was seen as doing something innovative. However, NPM led to a split between left and right: it was one thing to deregulate, and quite another to have environmental, health, and other outcomes significantly worsen. In "Breaking Through Bureaucracy," Barzelay articulated that the problem of PA is that it is inefficient. Bureaucracy was associated with hierarchical control, a lack of efficiency, and a lack of innovation, so a new model was proposed: government should be a service provider to customers, and market mechanisms should be utilized because they supply incentives and competition. The role of administration was reimagined, in a way related to the HR movement. There is implicitly an appeal to responsibility, accountability, increased innovation, and increased problem solving, along with more discretion for PA and less hierarchy and control. Some critiques are: who is the customer, what is a right, and is business influence good?
In “The Global Public Management Revolution,” Kettl looked at New Zealand in the 1970s, which struggled with an inefficient socialism and turned to the Chicago School, where markets were seen as successful. This approach went around the world, to Europe, the US, and academia. Reagan and Thatcher looked to markets, and NPM really took hold in the US under Clinton. Government in the US did not shrink; authority was sent to the states, and for big problems global governance was sought. Some critiques are that this view doesn’t look at the downsides, some predictions didn’t work out, and there was fatigue. Manifestations of NPM included school vouchers, agency scorecards, and performance budgeting; customer service was also an issue. In the US this was a fairly moderate movement, as Clinton ran as a moderate. What does NPM leave out? It ‘gets things done,’ often at the expense of other values and processes, including diversity, equity, and inclusion.
X: New Public Service
New Public Service, or NPS, was a response to NPM, and somewhat of a critique. Some antecedents were Waldo, who was a skeptic of expertise, and Wilson, who moved PA toward a field of expertise while supporting public opinion. In their work New Public Service: Serving, Not Steering, Denhardt and Denhardt argue that public value is not about markets but about adding value: communities should engage together and propose solutions. NPS also harkens back to the HR movement because of its humanistic component. It sought to bring dignity back to public service and restore respect for expertise, as well as to give administrators more discretion to act entrepreneurially. Civic engagement, or people directly participating in democracy, was also relevant, and equity was included among the positions of NPS. NPS emphasizes ways of democratic participation beyond voting, though voting remains important, as it is necessary for a democracy. With rampant examples of suppression and challenges to voting, NPS was seen as bringing voting back into the fold. Mary Parker Follett saw the interpersonal process as important to democracy, which implicitly results in decentralization. McSwite harkened back to the Anti-Federalists, with an emphasis on discourse, debate, bottom-up participation, and decentralization. In the 1990s, the emphasis was on Diversity, Equity, and Inclusion, or DEI; talk began around globalization and minorities, and NPS inspired new focus on DEI.
The idea of representative bureaucracy, which goes back to the 1950s, is that the bureaucracy should be a microcosm of the larger society. Disappointment with neoliberal economics, which didn’t deliver, also fostered NPS. NPS was seen as an enabler of democracy, and the 1990s were ‘the decade that history forgot,’ sandwiched between the fall of the Berlin Wall and 9/11. During this time there was increasing democracy, markets were seen as good, the internet came to fruition, globalization was taking hold, and there was generally ‘irrational exuberance.’ Resources no longer seemed scarce, and public servants didn’t face hard choices. There was a ‘peace dividend,’ which funded social programs rather than defense. Following 9/11 and the recession of 2008, however, things returned to the way they had been.
XI: The Legacy of Empire and Colonialism
During this time, western countries took a hard look at their colonial past; their legacy of resource extraction and slavery came home to roost. This is directly tied not just to racism, as I will discuss, but to the economic mindset. There is a familiar narrative about the origins of American public administration that is incomplete and self-serving. The narrative says the field emerged mainly out of municipal reform efforts between the 1890s and 1910s. America was different from Europe, which emphasized a top-down and centralized structure; Americans preferred decentralization and democracy. PA became defined as a field in the 1920s and 1930s, which supposedly insulated it from the racist ideologies that permeated disciplines founded earlier. This narrative is incomplete. America experimented with empire, and PA scholars reinvented themselves as experts of colonial administration, justifying themselves with appeals to the ‘white man’s burden.’ Citizens of the colonies were not considered American citizens but members of ‘subject races,’ and colonial administrators looked to Europe for lessons.
Goodnow was a key figure. “Goodnow believed that humanity was divided by color and degree of civilization, and that there was competition among races. The white race had reached the highest stage of civilization, mainly through its mastery of science and technology (Goodnow 1913). The brown and yellow peoples of India, China, and Japan were less advanced but still had some degree of civilization, while in South America interbreeding had produced “a new race ... vastly superior to the Indian race as the Spaniards found it”” (Roberts, page 187). He believed that white countries had the right and obligation to establish empires. Often natives were considered unworthy of self-rule. There was broad support in American academia for building an American empire; this was not separate from domestic administration, though it predated it. Scholars such as Young, Lowell, and Munro also contributed to this body of work and to college courses. It was often thought that colonial administrators required unique and specific training, and GWU was among the schools that considered setting up a school of colonial administration. There was one exception: Sudhindra Bose, of the University of Iowa and an Indian native, condemned racial prejudice.
The study of colonial administration was a focus of American scholars’ work and was tied to reform within the US, with the dependencies used as laboratories where rulers had complete autonomy; scholars argued that the Constitution did not apply to these areas. Reform could be done more quickly in the dependencies and then imported home. Scholars such as Willoughby cut their teeth as administrators in the dependencies, only to come home to high-status positions. Many thought the local populations were not ready for self-government. In 1910, President Taft created the Commission on Economy and Efficiency, which sought reforms at home and was heavily influenced by the municipal reform movement and by the principles of colonial administration, emphasizing strong executive leadership by a white elite. Often these scholars returned to posts abroad to continue to spread their ideologies in practice. Again, Bose took issue with many of these positions.
More recently there have been calls on the field of PA to look more squarely at its racist past. “Many people who we count among the pioneers of public administration were deeply engaged in the study and practice of colonial administration between 1898 and 1918. They designed and operated systems for governing subject peoples and justified these systems by invoking theories of racial difference. They were unabashed in describing these as systems of white rule, designed to bring the “blessings of Anglo-Saxon civilization” to “backward races” (Roberts). This was unambiguously a project in “top-down state building.” Military forces were used to pacify the dependencies, often through brutal methods. The systems of civilian administration that were imposed afterward always reserved final authority in the hands of American administrators. People living in the dependencies were classified as subjects and not citizens (Roberts). Rather than distancing themselves from the experience of European states, American experts studied the European empires closely, searching for lessons on how to rule American colonies.” (Roberts, page 193). Non-Teutonic US citizens were subject to immigration and naturalization restrictions, voting restrictions, restrictions on public employment and access to public services, controls on home ownership and mobility, and biased treatment by police and courts. Reformers did not seek to ameliorate these conditions, but endorsed them. This did not disappear when overt racism went out of fashion; as PA was coalescing in the 1920s, blatant discrimination still occurred. Experts in colonial administration were able to continue to speak this way because there were rarely representatives of the ‘dependent races’ present to challenge them.
XII: Current and Future Theorists
The Rational Choice Movement, or RCM, is deeply steeped in economics; one antecedent is Simon’s bounded rationality. More recently, the response to NPM has been more critical. With the advent of technology, NPM was characterized as removing bureaucratic discretion, being neutral, and turning policy problems into technical problems. This did not work. NPM tries to quantify everything, for better or worse, to automate, and to quantify human inputs. Often, however, the resulting algorithms are biased at best and discriminatory at worst. In “Automating Inequality,” Eubanks shows the depth to which this can happen and what it bodes for the future. This is a response to NPM and its economic basis, and represents a shift to a more human way of doing things. Here, the solution is NPS: the poor need to be organized so that they can effect political change.
Section Three: Pros and Cons, and a Tangible Example
The effects of economic thinking on policy and policy analysis have been varied. As detailed in the history above, there is a long legacy of economics and its antecedents being used in theories of policy and PA. The positives include introducing concepts that improved policy making: an emphasis on efficiency, specialization, incentives, market mechanisms, and motivations for work, much of which rests on the ‘rational man’ assumption. The negatives include the factory-ization of work and the corresponding fatigue and alienation, abstracting people into numbers to the extent that it is dehumanizing, using economics as a justification for slavery and colonialism, and removing power from the public and putting it in the hands of specialists.
It is most informative to study the pros and cons of economic thinking’s effect on policy by examining a case study: the SO2 cap-and-trade marketable permit scheme. The first attempt at dealing with SO2 was put forward under the 1970 Clean Air Act. The 1990 CAA Amendments sought to revise and improve on the original CAA by implementing a cap-and-trade (CAT) system for SO2 emissions. The hoped-for advantages of the new program were clear from an economic perspective: free trading and market mechanisms would lead to efficient operation, while the overarching objective of decreasing pollution would be realized. All that was required for the successful operation of such a program, and for efficient abatement, were cost-minimizing utilities and an efficient market for trading permits.
As the primary goal of the CAAA under Title IV was to set SO2 emission levels at half of 1980 levels, an aggregate nationwide cap was set at roughly 8.95 million tons. This overall goal was to be achieved in two stages. The first stage began in 1995 and targeted the 110 dirtiest coal-fired power plants in the nation. Stage 2 began in 2000 and extended coverage to smaller power plants that produced at least 25 megawatts of electricity, as well as plants with a fuel sulfur content greater than 0.05%.
The program issued a total number of permits equal to the desired cap, with each permit allowing its owner to emit one ton of SO2. The historic heat output of each plant was used as a baseline for how many permits were issued to an individual firm. Thereafter, firms could trade these permits with outside firms or among subsidiary plants. In addition, if a firm held more permits at the end of a given year than it needed that year, it was allowed to bank the extra permits for later use or trade. Therefore, for a given calendar year, total aggregate emissions must be less than or equal to the cap plus any outstanding unused banked permits from previous years.
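The cap-plus-banking constraint described above can be sketched in a short calculation (a hypothetical illustration; the emission figures are invented, and only the 8.95-million-ton cap comes from the text):

```python
# Illustration of the SO2 permit banking constraint: in any year, aggregate
# emissions may not exceed the cap plus unused permits banked from prior years.

CAP = 8_950_000  # national cap in tons of SO2, per the 1990 CAAA Title IV target

def allowed_emissions(cap: int, banked: int) -> int:
    """Maximum lawful aggregate emissions for the year."""
    return cap + banked

def update_bank(banked: int, cap: int, actual_emissions: int) -> int:
    """Permits left over this year (cap minus emissions) carry into the bank."""
    return banked + cap - actual_emissions

# Year 1: firms over-comply, emitting below the cap, and bank the surplus.
bank = update_bank(banked=0, cap=CAP, actual_emissions=8_000_000)
print(bank)                          # 950000 banked permits

# Year 2: banked permits let aggregate emissions lawfully exceed the cap.
print(allowed_emissions(CAP, bank))  # 9900000 tons allowed
```

This makes the temporal concern raised later concrete: a large enough bank lets a future year's emissions rise well above the nominal cap.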
The number of permits issued to a firm by the government was, for the most part, below its current level of emissions. In that case, a firm could reduce its emissions to the number of permits in its possession. This abatement generally means moving production from dirtier to cleaner facilities, burning coal with a lower sulfur content, or installing ‘best available technology,’ which was usually scrubbers. In no situation does the government mandate by what means a given firm should decrease its emissions to the level it has allowances for; this is left up to the firm and the market. In the end, all that CAT requires is that a firm emit only as much SO2 as it has permits for. This is where the market component comes into play. If a firm cannot decrease its SO2 emissions, it has two remaining options: it can purchase permits from other firms that do not have immediate need for them, or it can reallocate permits within its own company, thereby realigning SO2 emissions more efficiently. This allows ‘pollution rights’ to be used by those that truly value them: firms operating at a high marginal abatement cost (MAC) buy permits from firms with a lower MAC.
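The logic of MAC-based trading can be shown with a minimal two-firm sketch (the linear MAC curves and all numbers are invented for illustration; the principle, that trading equalizes marginal abatement costs, is the standard one described above):

```python
# Why permit trading minimizes total abatement cost: each firm abates until its
# marginal abatement cost (MAC) equals the permit price, so the low-cost firm
# abates more and sells permits to the high-cost firm.

# Assume linear MAC curves: MAC_i(q) = c_i * q, where q is tons abated.
c_lo = 1.0   # low-cost firm's MAC slope ($ per ton, per ton abated)
c_hi = 4.0   # high-cost firm's MAC slope
total_required = 100.0  # tons the two firms must abate between them

# Cost-minimizing split: MACs equal (c_lo*q_lo == c_hi*q_hi) with
# q_lo + q_hi == total_required. Solving the two-firm case in closed form:
q_lo = total_required * c_hi / (c_lo + c_hi)  # low-cost firm abates 80 tons
q_hi = total_required - q_lo                  # high-cost firm abates 20 tons

# Both firms face the same market-clearing permit price: the common MAC.
price = c_lo * q_lo
assert abs(price - c_hi * q_hi) < 1e-9  # MACs equalized at the optimum
print(q_lo, q_hi, price)  # 80.0 20.0 80.0
```

The high-cost firm buys permits rather than abating expensively, and the low-cost firm profits by abating beyond its allocation, which is exactly the efficiency argument for the program.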
At first examination, it would appear evident that CAT’s success buttresses the position that economic, free-market solutions lead to unbridled success. Examining the literature, however, shows that there is still some debate. Specifically, there are concerns that CAT may have led to unforeseen externalities, as it placed no regional or temporal controls on the trading market. SO2 is a highly regional pollutant, predominantly affecting those in the immediate vicinity of the pollution site. While aggregate national emissions may reach the desired level, there may still be dangerous concentrations of pollution in specific areas, and harmful concentrations of SO2 may well arise as a result of trading and banking. While it may be cost-effective for firms in the west to sell a large proportion of their permits to eastern counterparts, this may still have dire consequences: inhabitants of the areas surrounding the eastern power plants may be exposed to a much higher level of pollution than their western counterparts, who face much lower concentrations. It seems reasonable to hypothesize that this could lead to economic costs in the east that exceed the benefits accrued. While the net national economic benefits of CAT may be positive, very real negative effects may be felt by the inhabitants of a region in which pollution increases as a result of permit trading.
The fact that permits can be banked indefinitely may cause similar damage. So far, firms have generally over-complied, banking a large proportion of their permits in anticipation of future need or unforeseen events. Should this trend continue, future emissions could significantly exceed the capped amount; this was already shown to be the case during the first few years of Stage 2. As a result, firms may emit SO2 at considerably high levels, given the temporal nature of banking. This would lead to a lack of pollutant symmetry similar to the geographic concern, though of a temporal nature. Before we wholeheartedly accept CAT as an unambiguous success, these concerns must be addressed.
The CAT program was an amendment to previous legislation governing such emissions, and as such plants must continue to abide by local standards as well. It would still be highly informative to establish which states and counties, if any, experienced an increase in SO2 emissions in either Stage 1 or Stage 2 relative to Stage 0, as states and counties can often be out of attainment, can purposefully or inadvertently obfuscate attempts at measurement and regulation, and can drive aggregate state- or county-level SO2 emissions effects.
We have in this example an encapsulation of what NPM sought to put forward: namely, that we can employ market mechanisms for policy objectives. We also see both the benefits and pitfalls of such a tactic. Through the implementation of the program, emissions fell to the number of permits issued, but localized and temporal hotspots, or externalities, occurred as a result. This shows both the promise and the peril of economics’ influence on policy decisions. Other perils include, as described above, colonialism and slavery, as well as ignoring certain fundamental human values such as DEI.
Section Four
This piece has examined the history of economic thinking in PA across numerous concepts, including motivation, value, market mechanisms, efficiency, and specialization. It has also pointed out those in opposition to this framework. In Section Three, I delve explicitly into the pros and cons and give a detailed example of a paradigmatic program, the SO2 CAT scheme. The pros and cons are clear. Efficiency is clearly beneficial, as it replaces waste with thrift; policy makers are presented with scarce budget resources and are expected to do the best they can with them. This is what economists have to offer: they help us think about how to deal with scarce resources. However, there are negatives as well. DEI is not included, and at its worst this way of thinking leads to the abstraction of human lives into numbers, and even to outright oppression, slavery, and colonialism. The only way to take the good and omit the bad is with a thorough understanding of each. That is what this piece hopes to provide.
Section Five
References
Rabin & Bowman, eds., Politics & Administration, Chs 1-3, 11-13 (1984)
W. Wilson, “The Study of Administration” (1887)
F. Goodnow, “Politics & Administration” (1900)
L. White, “Introduction to the Study of Public Administration” (1926)
C. Stivers, Bureau Men & Settlement Women (2002)
G. Wamsley & J. Wolf, Refounding Public Administration (1990)
F. Taylor, The Principles of Scientific Management (1947)
C. Rosenthal, Accounting For Slavery: Masters & Management (2019)
F. Taylor, “Scientific Management” (1912)
M. Weber, “Bureaucracy” (1922)
L. Gulick, “Note on the Theory of Organization” (1937)
C. Barnard, “Informal Organizations & their Relation to Formal Organizations” (1938)
H. Simon, Administrative Behavior (1945)
Maslow, “A Theory of Human Motivation” (1943)
D. McGregor, “The Human Side of Enterprise” (1957)
E. Mayo, Human Problems of Industrial Civilization (1933)
J. Wilson, “The Bureaucracy Problem,” The Public Interest: 6 (1967): 3-9.
D. Waldo, The Administrative State (1948)
T. Lowi, The End of Liberalism (1969)
M. Derthick & P. Quirk, The Politics of Deregulation (1985)
E. Berman, Thinking Like an Economist. (2022)
L. Ngumbah Wolloh, Summary of Thinking Like an Economist, (3/27/2023)
Downs, An Economic Theory of Democracy (1957)
W. Niskanen, Bureaucracy & Representative Government (1971)
M. Lipsky, Street-Level Bureaucracy (1980)
M. Barzelay, Breaking Through Bureaucracy (1992)
D. Kettl, The Global Public Management Revolution (2000)
Denhardt & Denhardt, New Public Service: Serving Not Steering (2003)
M. Follett, The New State (1919)
Roberts. “American Empire & the Origins of PA.” Perspectives on Public Management & Governance. 2020: 185-194.
Eubanks, Automating Inequality
Science and Tech Policy Final
Question One
For the past three to four decades, there has been much debate about the role of university entrepreneurship in America. The Small Business Association was established in 1982 (Vonortas, 11/22/2022) in response to the stagflation of the 1970s; looking for solutions, policy makers (hereafter PMs) believed that small companies were better equipped to kick-start the economy and growth. Also in the 1980s, legislation came forward that gave intellectual property (hereafter IP) to researchers, even if they had originally accepted government funds. The debate centers on the core role of a university: whether it should foster small businesses and entrepreneurship, or focus on university research and university-industry relations. Fully articulating the nature of this debate may lead us to conclusions about the ideal balancing act, which is vital for both educational and economic concerns. In this paper, I go through the history of this debate, delve into specific factors within it, and conclude with lessons learned and recommendations. The debate has never been more timely, as the economy is set to slow in the near future and universities have become increasingly profit-focused. Getting the articulation of the debate correct, and drawing good conclusions from it, is therefore of the highest importance, and that is what I will do here.
We can consult data ranging as far back as the 1970s. An Information Technology and Innovation Foundation report found that “universities and federal laboratories have become more important sources of the top 100 innovations over the last 35 years. In 1975, industry accounted for more than 70% of the 100 most significant R&D advances; by 2006, academia was responsible for more than 70% of the top 100 innovations.” (Vonortas, page 28). The Bayh-Dole Act facilitated innovation by standardizing IP ownership of inventions created with federally funded research (Vonortas, page 29). It allows the inventing institution, whether a nonprofit or a university, to “retain intellectual property ownership from federally sponsored research and development.” (Vonortas, page 30). Further, the incentives it creates for technology transfer are vital for institutions seeking innovation. “There are many reasons for this growth in commercialization stemming from the passage of the Bayh-Dole Act: universities have substantially increased investment in technology transfer programs, faculty have become aware of the commercial potential of their research results, and industry has realized the benefits of collaborating with universities.” (Vonortas, page 30). Indeed, university research not merely plays a part in the creation of new products, but has started entirely new industries (Vonortas, page 28).
The passage of the Bayh-Dole Act was the result of years of opposition and emotional debate. Specifically, United States Senator Long was concerned that taxpayers would not receive a direct benefit of government-funded research (Baumel 2009). His opposition led to the inclusion of compromises which, as explained by the preamble of the law, “ensure that the Government obtains sufficient rights” and “protect(s) the public against nonuse or unreasonable use of inventions” (Vonortas, page 31). In general, the Bayh-Dole Act has served its purpose as a legal framework for technology commercialization. It created a stable, regulated environment for the arrangement of intellectual property rights proceeding from federally funded research activities (Vonortas, page 31).
Research universities are only a small portion of higher-education institutions, but they are vital to economic growth. Through universities’ primary mission of research, basic questions are examined and new knowledge is generated. This is not the end of the story, however: research is the starter in the fuel mix that leads to innovation that improves industry (Vonortas, page 27). Universities must at times revisit what their core mission is and examine what they are truly producing. “As U.S. universities expand their patenting, licensing, and commercializing of research, their potential to drive domestic innovation and economic growth increases. However, there is a balancing act to be achieved: creating new innovations while not decreasing the university’s primary role of education, research, and community outreach.” (Vonortas, page 27). Depending on the industry, spillover of research to the private sector can be high or low, but it is certainly the case that a number of industries have benefited from the application of university research in the economy. These fields include “agriculture, aerospace, biotechnology, medicine, software, computers, telecommunications, as well as social sciences industries such as network systems and communications, financial services, and transportation and logistical services.“ (Vonortas, pages 27-28). The development of university entrepreneurship finds its roots in industry’s call for technological innovation, as well as in universities’ desire for new sources of funding following reduced federal funding for research (Vonortas, page 29).
There are specific mechanisms by which research makes its way into the private sector. This can be through the employment of recent graduates, or more directly through university Technology Transfer Offices (TTOs). It is often argued that American commercial success in high-technology sectors of the economy owes “an enormous debt to the entrepreneurial activities of American universities” (Vonortas, page 28). Technology transfer takes place in both directions, however. “These forms of technology transfer allow mutually beneficial relationships in which research findings and business information can be shared between and amongst universities, the government, and the private sector.” (Vonortas, page 29). The Bayh-Dole Act was passed in 1980; at that time, there were only 25 technology transfer offices. By the twenty-fifth anniversary of the Act, in 2005, there were 3,300 such offices (Vonortas, page 35). The point of a TTO is to promote the utilization of inventions from university research. It allows universities and researchers to capitalize on the rights they gain through the Bayh-Dole Act while attempting to allay concerns regarding conflicts of interest. Rather than relying on researchers to commercialize their inventions or implementing broad innovation strategies, many universities have channeled their innovation activities through a centralized TTO. TTOs are dedicated to identifying research with potential commercial interest, providing legal and commercialization support to researchers, assisting with questions of marketability and funding sources, and serving as a liaison to industry partners interested in commercializing university technologies (Vonortas, page 35).
The effectiveness of a TTO is typically measured by its commercial output, including licensing (number of licenses, licensing revenue), equity positions, coordination capacity (number of shared clients), information processing capacity (invention disclosures, sponsored research), and royalties and patents (number of patents, efficiency in generating new patents) (Vonortas, page 36). The TTO is intended to facilitate the transition between academic research and commercialization. While some universities’ TTOs are effective in disseminating inventions, others have become hindrances to technology transfer through layers of administration and bureaucracy (Vonortas, page 40). One possible idea to bridge the gaps in technology transfer, discussed by Litan and Mitchell (2010), is to create an open, competitive licensing system for university technology (Vonortas, page 41).
Often, the creation of new firms springs from university research. “The attractions of using university-developed inventions to create new start-up companies (a new company created to commercialize a particular technology) have become widely recognized.” (Vonortas, page 37). Sometimes commercialization of an invention is best suited to the creation of a start-up company, and TTOs are beginning to place more emphasis on new business start-ups as an optimal commercialization path. As evidence of this increase, the number of start-up firms for commercialization of university research grew from 241 in 1994 to 555 in 2007 (Vonortas, page 37). There are conflicting ideas on the role TTOs should play in promoting the launch of new firms, ranging from no role in start-ups to a very involved role in helping start-up firms succeed (Vonortas, page 37). There is also much debate about whether incentives to faculty actually result in the promulgation of new technology and business developments; often, findings are that this is not the case.
Perhaps one of the most pronounced conflicts surrounding a university’s governance is how to support entrepreneurial activities without losing control of its primary education, research, and public service missions (Vonortas, page 32). It is clear that should a university become too business-oriented, it does so at the expense of its mission of higher education. Universities are increasingly moving toward the model of ‘the business of higher education,’ with tuition rising and faculty increasingly limited to adjunct professors. This preoccupation with business concerns may be a further step in that direction: the colonization of learning by profit, at the expense of learning itself.
In sum, universities have a vital role to play in the development of business, particularly start-ups. With the passage of various laws and the education sector moving increasingly toward business, this makes sense. However, we must ask ourselves: at what cost do these activities take place? If faculty are entirely focused on commercializing their discoveries, will they come to consider teaching students a requisite and lesser function of their job, rather than its primary focus? We must balance the emphasis universities place on business development against their core mission: teaching. It is all well and good to encourage growth through universities, but it would be a poor trade-off indeed if it also resulted in less education for the future workers of the economy. Universities owe a debt to students to create an environment fostering personal growth and exploration. Business concerns can be a part of this, but only if schools’ core missions are met. In the end, the two don’t have to be mutually exclusive; universities can have their cake and eat it too. However, as they grow and foster business, they must at least equally encourage student growth. To do otherwise would be trading short-term gains for the economy’s long-term development.
Question Two
Initially, before the electrification of the economy, energy use was limited to burning biomass such as peat, sticks, or occasionally coal, supplemented by wind and water power to turn mills. These activities had little to no environmental impact. Since the electrification of the economy, however, the developed world has relied on cheap and secure sources of energy to power society. Only within the last 40 years has it become readily apparent that the damage we are doing to our environment is unsustainable, significantly impacting not just our own health but the very viability of every ecosystem on the planet. It is for this reason that it is vitally important to examine the collection of energy sources available to us today, critique them, and offer insight into how and where government should intervene to improve the situation. That is what I will do in this paper: I will describe each component of the energy system in the United States and its pros and cons, and then detail how, why, and where government should intervene. There is nothing as vital as this, as not just our own future economy is at stake, but the very viability of all life on Earth.
The energy source that holds the lion’s share of energy provision for our economy is fossil fuels: fuels that were once living organic matter, compressed underground over millions of years into substances that are highly energy dense. That is to say, given their weight and size, they can be exploited for a great deal of energy. This category includes coal, oil and its derivatives, and natural gas. Historically, the US has relied on coal-fired power plants to create electricity. This is done by burning the coal to heat water into steam, which turns the mechanism by which electricity is created, usually wire coils rotating around a central magnet. The electricity is then spread throughout the country via the grid. Gas operates in much the same way but is cleaner relative to coal. Oil is refined into gasoline and other petroleum products and is used primarily by our transportation system, though some power plants run on oil. The advantages of fossil fuels are that they are very energy dense, which is what makes it possible to power a car or an airplane, or to have a power station work effectively. In addition, fossil fuels are historically easy to use in our power system, which was initially designed to run on them; they are secure and can be used throughout the day; and the infrastructure to use them effectively is already in place. The negatives have become clear only more recently. First, fossil fuels have historically been imported from autocratic countries. Second, burning fossil fuels emits a large amount of carbon dioxide into the atmosphere, as well as sulfur dioxide in the case of coal, and particulate matter. Considering carbon dioxide alone, the effect of using fossil fuels is that the world is now heating: the global climate crisis.
We have already heated the world by roughly one degree Celsius, and are now attempting to limit warming to at most two degrees. The effects of this warming are catastrophic. We are in a man-made extinction event. Severe weather is increasing. Coastal regions flood, and will eventually be submerged. The cumulative effects of so much CO2 in the atmosphere simply cannot be ignored, which is why we must turn to alternative sources of power.
The next class of power source I will describe comprises what I consider the ‘medium impact’ ones, namely hydro power and nuclear power. In more recent history, these power sources have become more readily available and practical. Again, both create electricity by turning a central turbine that generates power. Hydro does this by setting up a dam and allowing the passing water to turn a generator. Nuclear does this by sustaining a fission reaction that generates heat, which heats water into steam and turns a generator. Both are stable and reliable sources of energy that operate all day, every day, and as such are good for baseload power. Both are essentially carbon free in operation. The primary downside of each is the danger it poses to the wider environment. For dams, this means interrupting the river’s watershed and killing off wildlife, as well as having to relocate towns and people along the riverside that will be flooded upstream of the dam. For nuclear, the concerns are primarily what to do with the radioactive waste created, which remains dangerous for thousands of years, and what happens if the plant fails and there is nuclear contamination, as seen at Fukushima in Japan, among others.
Lastly, the final class of power sources is the newest and fastest growing one, namely renewables. This includes solar, wind, geothermal, and tidal. Solar and wind are the fastest growing in the US, so I will begin with them. Solar photovoltaic systems convert direct or indirect sunlight into electrical power through the photovoltaic effect in semiconductor cells. Wind, whether onshore or offshore, relies on wind energy to turn a turbine and create electrical current. The advantages of these two sources are that they are entirely carbon free, and their prices are falling fast compared with other sources. The disadvantages are that we are still scrambling to make the grid conducive to these sources, and they are intermittent: their output is high at some times and minimal at others. As such, battery systems are needed to store energy when there is excess and release it when there is a shortfall. This is the primary negative. Geothermal relies on the Earth’s internal heat to warm water and turn a generator, and tidal is essentially similar to hydroelectric but placed off the coast, relying on the changing tides to turn generators. Again, these are completely carbon free, but the grid still needs work to be able to manage them.
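The intermittency problem can be made concrete with a toy dispatch model. All of the numbers below (the solar output profile, the flat demand, the battery size) are illustrative assumptions chosen for the sketch, not real grid data; the point is only to show the mechanism of storing midday surplus and releasing it later in the day.

```python
# Toy model: a battery smooths intermittent solar output against flat demand.
# All numbers are illustrative assumptions, not real grid data.

demand = [10] * 8                     # constant demand (MW) over eight 3-hour blocks
solar = [0, 0, 5, 20, 25, 15, 3, 0]   # solar output (MW) peaking at midday

capacity = 30.0  # battery capacity (simplified to MW-blocks)
charge = 0.0     # current state of charge
unserved = 0.0   # demand the system fails to meet

for gen, load in zip(solar, demand):
    surplus = gen - load
    if surplus > 0:
        # store excess generation, up to the battery's capacity
        charge = min(capacity, charge + surplus)
    else:
        # discharge to cover the shortfall, as far as the battery allows
        draw = min(charge, -surplus)
        charge -= draw
        unserved += (-surplus) - draw

print(f"Unserved demand: {unserved:.1f} MW-blocks")
```

Without the battery, every shortfall block (42 MW-blocks in total here) would go unserved; with it, the midday surplus covers the evening, leaving 25 MW-blocks unserved overnight. This is why storage sizing, and pairing intermittent renewables with baseload sources, matters.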
What can government do to level the playing field and encourage a zero carbon future? There are several tactics it could take, and I will explore them here. First, the most straightforward, and the one most lauded by Economists, would be to place a price on carbon. This would affect every aspect of society and internalize the real costs of carbon-heavy energy provision into the system, rather than leaving them as externalities. Europe has such a system in place, as do some regions of the US, but the entire country should have a federal carbon pricing system in effect. The revenues from the price could be fed into subsidies for renewables. This brings me to the second thing government could do: end subsidies for dirty energy, and redeploy them to carbon free sources. It makes little to no sense to encourage the continued use of fossil fuels just because they have historically been the easiest sources for electricity generation. These subsidies should instead be used to encourage zero carbon sources of energy. Another option is to develop more of the ‘medium impact’ sources I have described above, namely hydro and nuclear. These sources are great for baseload power, are essentially carbon free, and can operate around the clock. France generates the large majority of its electricity (roughly 70 percent) from nuclear power, and as a result is not at the whim of fossil fuel producing countries that may have dubious motives concerning the West. It does not make sense to put nuclear plants in earthquake-prone regions, but for the vast majority of the country, this is not a concern.
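The logic of internalizing the externality can be shown with a back-of-the-envelope calculation. The cost and emissions figures below are illustrative assumptions, not official estimates; the sketch shows only that a carbon charge proportional to each source's emissions can reorder the cost ranking.

```python
# Back-of-the-envelope effect of a carbon price on electricity costs.
# Costs ($/MWh) and emissions intensities (tCO2/MWh) are illustrative
# assumptions chosen for the example, not official estimates.

sources = {
    # name: (private cost $/MWh, emissions tCO2/MWh)
    "coal":  (75, 1.0),
    "gas":   (60, 0.45),
    "solar": (70, 0.0),
}

def cost_with_carbon_price(price_per_ton):
    """Add the carbon charge each source would pay per MWh generated."""
    return {
        name: cost + price_per_ton * emissions
        for name, (cost, emissions) in sources.items()
    }

for p in (0, 50):
    costs = cost_with_carbon_price(p)
    cheapest = min(costs, key=costs.get)
    print(f"carbon price ${p}/t: {costs} -> cheapest: {cheapest}")
```

With these made-up numbers, gas is cheapest at a zero carbon price, while at $50 per tonne the ranking flips and solar becomes cheapest, which is precisely the 'level playing field' effect a carbon price is meant to produce.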
Two other possible solutions are electrification and energy efficiency. We must electrify the entire energy system, including not just cars but also aviation, end-use appliances, and so on. With the grid moving toward a zero carbon future, electrifying everything that currently runs on fossil fuels, whether stove tops or water heaters, would go a long way toward ending emissions. Energy efficiency is another tool that could be utilized. This includes a broad array of activity: not just changing light bulbs to LEDs, but also insulating homes, businesses, and industrial facilities so that less energy is needed to heat and cool them. Lastly, another possibility is investing in future ‘moon shot’ energy sources, such as fusion. It was recently reported that, for the first time, a fusion reaction produced more energy than was delivered to the fuel to ignite it. Developing fusion would go a long way toward ending the grid’s reliance on fossil fuels. There are also other types of solar power used around the world, such as direct solar thermal heating of water, and concentrated solar power, which uses focusing mirrors to heat a central receiver, often containing molten salt. These could be developed and exploited as well. Battery storage must also be encouraged on the grid, as it will be necessary to provide baseload power from intermittent renewables.
The real impetus for a change to renewables and carbon free sources of electricity is to forestall the global climate crisis. There are other routes that government can take to reach this goal as well. Emissions are concentrated, per capita and historically, in the Western industrialized countries. As such, the choices of citizens of these countries can substantially ameliorate the climate crisis. Most specifically, citizens can do three things, and government should encourage them: fly less, drive less, and eat less meat (Nicholas). Should governments incentivize these behaviors and encourage responsible consumption, it would go a long way toward solving the climate crisis.
In brief, the future for a zero carbon emission economy is a bright one. There are many routes available to us. Fossil fuels that produce harmful emissions must be phased out as rapidly as possible and replaced with both medium-impact providers, such as hydro and nuclear, and renewables and experimental sources. With a diverse portfolio of energy sources, we can be confident that we will be able to meet the growing energy needs of our economy while not contributing to the global climate crisis. We must also think beyond our own borders. CO2 is a global pollutant, and, as such, we must convince other countries to follow our example if we are to make any headway in forestalling the worsening of the environmental catastrophe that is already taking place. We must act as leaders in the field, as President Biden has done by rejoining the Paris Agreement and through his Green New Deal-inspired legislation. We must show the world what American ingenuity can do when harnessed for the benefit of all. As such, we must be willing not only to lead, but to share our knowledge and practices with the rest of the world. In doing so, America will regain its primacy as a high tech hub of development, recreate good manufacturing jobs, and lead the world again. Only then will the bane of the climate crisis be kept at bay. Only then will the world be safe for future generations. Only then will we have not just prosperity and growth, but regeneration of life on this planet. After all, we cannot all escape to new worlds. This is the only planet we have, and, as such, it is certainly worth saving.
Question Three
As COVID-19 has made the public aware, there are serious issues at stake when discussing a country’s national health system. Lives are literally on the line. Countries varied in their approaches, revealing good and bad aspects of their respective systems in comparison to one another. In this piece, I describe the core strengths and weaknesses of the US health system, and then the possible points for government intervention. I differentiate between research and development (hereafter R&D) of pharmaceuticals (hereafter pharma), drug pricing, and service delivery. For each issue I will spend one paragraph describing the situation in the USA, followed by a second paragraph detailing possible government interventions. I will then close with a summary of core takeaways.
First, regarding R&D of pharmaceuticals, the USA is in a unique position. “The Food and Drug Administration approves drugs if they are shown to be ‘safe and effective.’ It does not consider what the relative costs might be.” (Kliff, page 9). The NIH spends $40 billion a year funding basic research that contributes to new medications (Vonortas, 11/8/22). The results of this research go to the private sector. Typically, drugs are more expensive in the US than in other countries; I will delve fully into this later in this piece. Why is this the case? Primarily, to provide incentives for innovation. Profitability makes pharma more attractive to investors. “Economic research suggests that price regulation might mean less innovative drugs, too. Investors respond to economic incentives. When they see a market that will pay lots of money for their products, they’ll put more money toward developing the type of drugs that market wants.” (Kliff, pages 15-16). In other countries, the decision to approve a drug is based on its marginal benefit. Does it add value compared to existing treatments? If so, how much? In essence, these countries ask if it is worth it. There is a trade off between a higher price and more drugs, or a lower price and fewer. This can also be reframed as a trade off between innovation and access. It has a direct effect on the R&D of pharma. In essence, the USA is subsidizing the development of drugs that the rest of the world can then choose to accept or reject. Patents and intellectual property protection are also an important aspect of the process. “Patent rights play an important role in the development and pricing of pharmaceutical products. Patent law seeks to encourage innovation by granting the holder of a valid patent a temporary monopoly on an invention, potentially enabling him to charge higher-than-competitive prices.” (CRS, page 1).
Patents allow companies in the USA to make sure that they recoup their R&D costs and make a profit. However, for more tailored drugs, “a lot of candidate drugs fail…because they aim for targets that are not actually relevant to the biology of the condition involved.” (The Economist, page 12).
Policy makers ask: how much innovation do we need to pay for? Some of the biggest lobbying of lawmakers is around this issue. Marketing expenditure, for product differentiation and general awareness, is also exorbitantly high for drug manufacturers. One way that policy makers (hereafter PMs) could intervene is by limiting the lobbying power of big pharma. Rather than appease special interests, PMs could seek to break up monopolies and promote competition. After all, a large part of pharma R&D is done with public money. Why shouldn’t pharma R&D be treated as a public good in the USA? Yes, we must keep a profit incentive for innovation, but perhaps it would make sense to follow the playbook of other advanced countries, and ask about the tradeoffs listed above before granting 20-year monopoly rights to large multinational corporations. There is some legislation that protects generic manufacturers and fast tracks their approval. This could be expanded upon and more deeply encouraged.
The next issue I address is that of drug pricing, which I have already alluded to in the portion on pharma R&D above. To quote Kliff, “The United States is exceptional in that it does not regulate or negotiate the prices of new prescription drugs when they come onto market. Other countries will task a government agency to meet with pharmaceutical companies and haggle over an appropriate price. These agencies will typically make decisions about whether these new drugs represent any improvement over the old drugs — whether they’re even worth bringing onto the market in the first place. They’ll pore over reams of evidence about drugs’ risks and benefits.” (Kliff, page 3). In the US, drugmakers set their own prices. Other countries negotiate prices, because health is seen as a public good. Since every drug comes to market, there are higher copays at the drugstore. However, it isn’t so clear cut that we should simply mandate slashed drug prices. “What’s harder to see is that if we did lower drug prices, we would be making a tradeoff. Lowering drug profits would make pharmaceuticals a less desirable industry for investors. And less investment in drugs would mean less research toward new and innovative cures.” (Kliff, pages 3-4). However, just because we have more drugs doesn’t mean that we are necessarily getting better treatment. “We get expensive drugs that offer little additional benefit but might be especially good at marketing.” (Kliff, page 9).
The government could intervene in one simple manner: by unifying insurance purchasing power and negotiating prices with pharma companies the same way that other countries do. What would happen? “We’d spend less on prescription drugs. If the United States set up an agency that negotiated drug prices on behalf of the country’s 319 million residents, it would likely be able to demand discounts similar to those of European countries. This would mean that health insurance premiums wouldn’t go up nearly as quickly — they might even go down.” (Kliff, page 14). There would be tradeoffs. We would lose access to certain drugs that are currently covered. However, simply having drugs available on the market does not by definition mean that Americans are benefiting more than other people. Having a drug that is too expensive to buy is the same as not having it on the market at all. Another possible fix would be, as stated above in the R&D section, limiting the lobbying power and marketing of pharmaceutical companies. Again, with more regulation, pharma could still make a tidy profit while not charging absurdly high amounts. Lastly, encouraging generics, either by fast tracking their approval or by limiting big pharma’s monopoly periods, would also go a long way toward alleviating drug prices.
The final issue to be examined is that of service delivery. Increasingly, at least since the 1980s, the push in the field has been toward personalized medicine and treatments. This accelerated with the sequencing of the human genome, a working draft of which was completed in 2000. However, this created a huge expectation that essentially every disease would correspond to a gene error for which treatments could be developed. This has not been the case. The environment, and its interplay with genetics, is at the root of what causes certain genes, and diseases, to be expressed. Still, studies of rare and personalized diseases are “not just a worthwhile end in themselves. Understanding what goes wrong…can reveal basic information about the body’s workings that may be helpful for treating other ailments….That will help doctors personalize their interventions.” (Economist, page 4). Further, an outgrowth of this attempt at personalization has been the ‘data deluge’ (Vonortas, 11/8/2022). “The increase in other forms of data about individuals, whether in other molecular information from medical tests, electronic health records, or digital data recorded by cheap, ubiquitous sensors, makes what goes on in those lives ever easier to capture. The rise of artificial intelligence and cloud computing is making it possible to analyse this torrent of data.” (Economist, page 4). All of this data needs to be crunched, and increasingly it is information technology, whether AI or cloud computing, that is doing it. As this matures, we may be able to form a very clear portrait of the full health profile of an individual, and tailor treatment accordingly. However, there is still the issue of rare diseases that affect fewer than 200,000 people. Growth in this area has been slow, as companies see less profit in it.
There are a number of ways government could step in and contribute to change. The first would be to encourage the sequencing of as many genomes as possible, and to build ‘biobanks’ holding this information. This would allow for the treatment of not just common diseases, but rare ones as well. The second is to discourage the already present monopolistic tendencies of companies, which would otherwise only be encouraged by the move to big data. Health is a public good, and PMs should treat it as such. If we encourage entities that do not seek profit, such as charities, nonprofits, and NGOs, to help, this could go a long way in the promulgation of life saving information and services. Another core issue is that the diseases we treat well are predominantly those of white straight cis males. Growing the genetic database allows for better treatment not just for those who find themselves outside the majority, but for all people, as we have more information on which to base treatment.
Whether it be pharma R&D, drug pricing, or personal service provision of health care, there are some key takeaways for PMs. First, stand up to the private sector. A huge amount of public money goes into the sector, and as such, government should ask for something in return. Breaking up monopolies would go a long way toward better provision. Other countries do it, so why can’t we? Treat health like the public good that it is, instead of the latest frontier in which to make exorbitant profits. For pharma R&D, encourage profits, but at a limited scale. This would allow for R&D that still innovates, while reducing the associated prices. Another idea would be allowing other countries to contribute money or other resources toward pharma R&D. We saw this recently with the COVID-19 vaccine, when Pfizer partnered with the German firm BioNTech, which was started by a Turkish-German husband and wife team. Another recommendation would be to conduct studies that are truly representative of the global population. An example of doing otherwise is how Japan declined to mass vaccinate until studies had been done with only Japanese participants. Finally, we should both democratize and personalize health care provision, by making good use of all the data available and giving people as much access to their own information as possible. However, we must be careful not to allow the sale of personal information without people’s consent. We have seen through the rise of social media just how problematic this can be. We must likewise be wary of cyber attacks that could leave billions vulnerable as a result of sloppy data handling. This too must be accounted for, through the private and public sectors cooperating on security and setting standards. In the end, the future of health care is a bright one, as long as we can learn from the past.
References
Question One
Pascoe, Cherilyn E. and Nicholas S. Vonortas (2014) “University Entrepreneurship: A Survey of U.S. Experience”, in Nicholas S. Vonortas, Phoebe C. Rouge and Anwar Aridi (eds) Innovation Policy: A Practical Introduction, Springer. [Ch 3]
Question Two
International Energy Agency (2022) World Energy Investment 2022, Paris: IEA, “Overview and Key Findings” and “R&D and Technology Innovation” sections.
Nicholas, Kimberly (2021) Under the Sky We Make: How to Be Human in a Warming World, New York: Putnam.
Question Three
Kliff, Sarah (2016) “The True Story of America’s Sky-High Prescription Drug Prices”, Vox, November 30.
Congressional Research Service (2019) “Drug Pricing and the Law: Pharmaceutical Panel Disputes”, In Focus, Washington: CRS, May 17.
“Personalized Medicine” (2020) The Economist, March 14.
Vonortas, Nicholas (2022) Class lecture, November 8.
Philosophy of Policy Ethics Final
Philosophy of Policy
Professor Sanjay Pandey
Final Paper
Carl Mackensen
Objective Ethics Examined: Counter Arguments, Philosophical Frameworks, and Conclusions
Introduction
This paper examines whether Objectivity can be found in Ethics, that is, whether there are universally true statements of right and wrong. It is of paramount importance to policy, as well as to our individual lives, to determine whether right and wrong exist universally, or merely shift based on external factors. Here, I detail what objective ethics is, then examine some potential critiques of it, before concluding by considering some contenders for the most defensible ethical system, should objectivity exist.
Part One: What is Objective Ethics?
Objective Ethics, simply put, is the position that there exist statements of right and wrong that are universally true. Such a system has been sought since prehistoric times by people in all places, and different times and cultures have come up with different views of what is right and wrong, defensible and not, morally praiseworthy and otherwise. To me, Objective Ethics, in the secular sense, is the guide by which we know how humans can best flourish in this world. It takes into account what type of being humans are, and what, as a result, it means to be human in the world. I believe that there exist moral truths that are not only objective, but transcendental. This means that they would remain true even if all humans were to cease to be tomorrow. But this is not the place to detail that argument; rather, it is the purpose of this paper to argue first for objective ethics, and then to detail some of the most historically prominent systems of belief that promulgate it. I hope to construct a system that is of use to me personally moving forward. For, truly, ethics is nothing more than mere navel-gazing if it is not put into practice.
Part Two: Cultural Relativism
It has oft been said, “when in Rome, do as the Romans do.” This, in essence, is the heart of the Cultural Relativism argument. Different societies, whether separated geographically or in time, have different rules. As such, when operating within a society, we should abide by those rules. These rules are not universal; they are specific to the culture. Thus, there is no sensible way to construe an ‘objective morality,’ as everything depends on the culture in question. In sum, “different cultures have different moral codes. What is right within one group may horrify another group, and vice versa.” (Rachels, Page 14). An example from Herodotus is that of the Greeks and the Callatians. The burial practices of the two diverged widely, with the former burning their dead, and the latter eating theirs. If you were to ask members of either group about the practice of the other, they would be horrified and denounce it as an affront to the Gods. But within this comparison is the essence of cultural relativism. Neither group is ‘right’; there are only differing sentiments that are culture bound and determine how we should react.
Another example is that of the Eskimos, examined by Freuchen. “The Eskimos…seemed to care less about human life. Infanticide…was common…When elderly family members became too feeble, they were left out in the snow to die.” (Rachels, Page 15). To a Western audience, these practices would seem barbaric. However, in Eskimo communities, they were routinely practiced and make perfect sense within the logic of their society. Again, this is the crux of cultural relativism. “To call a custom ‘correct’ or ‘incorrect’ would imply that we can judge it by some independent or objective standard of right and wrong. But in fact, we would merely be judging it by the standards of our own culture. No independent standard exists; every standard is culture-bound.” (Rachels, Page 16). When we bring critiques to a practice, we are viewing that practice within our own construct, and, as such, this is an unfair manner in which to treat another society. This is perhaps most intuitively rooted in the feeling that we don’t want someone else judging our own commonplace behavior as abhorrent, so we should not do this to others. “Cultural relativism says…that there is no such thing as universal truth in ethics; there are only the various cultural codes.” (Rachels, Page 16)
In brief, the language of the Cultural Differences Argument can be constructed thusly:
1) Different cultures have different moral codes
2) Therefore, there is no objective truth in morality. Right and wrong are only matters of opinion, and opinions vary from culture to culture
(Rachels, Page 18). However, upon closer examination, the conclusion does not follow from the premise. The stated premise deals with the beliefs of people, while the conclusion deals with what is actually truly so. It is perfectly plausible that the members of certain societies may simply be incorrect. A good example of this comes from the debate around a flat versus round Earth. It is clear that only one of these groups is correct. While moral issues seem a bit murkier than hard scientific facts, with a little thought and logic, we can often find a similar parallel.
What would it mean if Cultural Relativism were actually true? It would mean a number of things. Most notably, it would mean that:
1) We could no longer say that the customs of other societies are morally inferior to our own.
2) We could no longer criticize the code of our own society.
3) The idea of moral progress is called into doubt.
(Rachels, Pages 19 to 20). On the first point, we would not be able to condemn actions found in other societies. On the face of things this may seem positive, as we certainly don’t want to encourage bigotry. However, what about a starker example, such as Nazi Germany, or female genital mutilation? Do we truly want to say that there are no circumstances whatsoever in which members of one group may criticize members of another? These examples make it clear that this is not the case. Further, we would not be able to criticize our own activities. All acts would be relative to the time and place in which they occurred, and as such could not be judged by history. Again, on the face of things this may seem desirable, as a certain amount of historical context is required to understand any activity or action. When we look at serious examples, however, this once more falls away. Do we want to say that slavery was justified because it was the common practice of 1600s England and its empire? Certainly not. We have made moral progress since then. Again, the argument falls by the wayside. Lastly, we would not be able to say that we have made progress at all, for similar reasons. As alluded to in the response to the slavery argument above, this is clearly not the case. We should always bring a certain historical context to our judgements, but to say that contemporary Germany is better than Nazi Germany is not a far reach in terms of realistic moral statements.
We may differ in our beliefs, but at our core we do not differ in our values. “Often, what seems to be a big difference turns out to be no difference at all.” (Rachels, Page 21). Just because two societies differ in customs, this does not mean they differ in values. Two good examples brought up above are the Eskimos, and the Greeks and Callatians. From the outside, the Eskimo practices may seem broadly condemnable. They kill babies, and leave the elderly to die. After a little bit of examination, however, we find that they are not so different from ourselves. Freuchen details how both of these activities are needed for the general health of the population. Female babies are killed because males are the ones that grow up to do the hunting which is requisite for survival. Were more females allowed to grow and mature, they would drain the resources of the whole, potentially leading to the entire collapse of the society. The same can be said of the elderly. As such, the Eskimos are simply engaging in something that every society does; that of self maintenance and survival. Similarly, regarding the burial customs, each society believed that they were honoring their dead. While burning or eating the dead may seem abhorrent to members of each respective culture, within the culture they are engaging in activities that stem from the same values.
Elaborating on this point, there are some values that are necessary to have a society at all. These include caring for the young, and prohibitions on murder and lying. Were everyone simply to lie to each other, or the threat of being killed ever present, or children not looked after until they could take care of themselves, these societies would simply not exist. “There are some moral rules that all societies must embrace, because those rules are necessary for societies to exist…Not every moral rule can vary from society to society.” (Rachels, Pages 23 to 24). Many who consider an action deplorable may nonetheless be hesitant to call it outright wrong. There are three main reasons for this.
1) First, there is an understandable nervousness about interfering in the social customs of other people.
2) Second, people may feel, rightly enough, that we should be tolerant of other cultures.
3) Finally, people may be reluctant to judge because they do not want to express contempt for the society being criticized.
(Rachels, Page 26). All of these points are valid and well thought out. However, when we introduce a stark example, we can again get some insight into what is truly the case. This time, let’s examine female genital mutilation. Based on a series of articles in the New York Times by Dugger, in many countries, this takes the form of the removal of the clitoris, or other damage done to the sexual organs of females. This is expressly done so that they do not experience sexual pleasure, and as such, so the argument goes, are better wives for they are less likely to cheat on their husbands. Surely we must feel less squeamish about condemning such a deplorable act, aside from any attempts at being polite or considerate. It is obvious on its face that these practices are not acceptable, and fall outside the realm of any notions of interfering, or tolerance, or showing contempt.
In brief, then, we find that cultural relativism is not defensible for a number of reasons. As with many thought experiments, this becomes expressly clear when we engage in serious consideration of the implications of the theory. If we consider Nazism, or slavery, or female genital mutilation, we see that it is not simply enough to ‘live as the Romans do’ and turn a blind eye to bad action. What we can deduce instead is that there are truly values that are universal, though they express themselves in different ways, and that at times some cultural practices go against these values. Whether they are the values required to have a society at all, or more specific regarding the treatment of a minority group, we can determine that some activity is simply reprehensible.
Part Three: Subjectivism
A similar, though distinct, line of attack on objective universal ethics is that of Subjectivism. Specifically, “ethical Subjectivism is the theory that our moral opinions are based on our feelings and nothing more…According to this theory, there is no such thing as right or wrong.” (Rachels, Page 34). A good example would be a debate over the legalization of gay marriage between former Vice President Mike Pence and the head of the Human Rights Campaign, the group responsible for lobbying for marriage equality. Both feel they are correct, and we cannot say whether one side actually is or not. Under subjectivism, they are simply competing views, each of which is an opinion to be respected. “If ethics has no objective basis, then morality is all just opinion, and our sense that some things are ‘really’ right or ‘really’ wrong is just an illusion.” (Rachels, Page 35)
There are several flavors of subjectivism. The first is Simple Subjectivism. In essence, what this entails is, “when a person says that something is morally good or bad, this means that he or she approves of that thing, or disapproves of it, and nothing more.” (Rachels, Page 35). Approval or disapproval is not in any way an indicator of whether something is objectively right or wrong, it is merely a statement of preference. However, following this tack to its logical conclusion, there is no room for disagreement. “When one person says ‘X is morally acceptable,’ and someone else says, ‘X is morally unacceptable,’ they are disagreeing. However, if Simple Subjectivism were correct, then they would not be.” (Rachels, Page 35). As such, we find that simple subjectivism doesn’t stand up to scrutiny.
A second approach is that of Emotivism. Under this view, “moral language is not fact-stating; it is not used to convey information. It is used, first, as a means of influencing people’s behavior…the utterance is more like a command than a statement of fact…Second, moral language is used to express attitudes.” (Rachels, Page 37). For Emotivism, then, moral conflict exists in a way that it does not under Simple Subjectivism. However, in pronouncing that moral utterances are not fact-stating, we miss something of the truth. Statements may express attitudes or beliefs, but they are also ways of making a claim about the truth or falsehood of a moral statement. Disagreement takes several forms: it can be in belief or in attitude. Under Emotivism, moral disagreement is only the latter.
Lastly, there is the Error Theory, articulated by Mackie. Under this view, ethics does not contain facts, and people are never right or wrong. However, it is the case that people believe that they are right. As such, we should construe them as attempting to put forward objective statements.
At the heart of Ethical Subjectivism is a theory of value and epistemology called Nihilism. Nihilists believe that values are not real, that these things are unknowable, or not comprehensible by the human mind. People may believe this or that in terms of moral beliefs, but in reality nothing is either right or wrong, or good or bad. Competing claims are neither reports of our own attitudes (Simple Subjectivism) nor expressions of our feelings (Emotivism); instead, they are errors made by fallible humans. Different sides make statements about morality which are incorrect, as there are no values on which to base them. As such, we conclude that there are no true moral claims at all. This is the heart of Nihilism.
This may appeal for complex or hard-to-tease-out moral issues, but it is less convincing for stark examples, such as Nazism, slavery, genital mutilation, and so on. To defeat Nihilism is to defeat Subjectivism altogether. Nietzsche famously said that God is dead, and we have killed him. It is in the tradition of Western Philosophy that, with increased scrutiny, different moral positions have risen and fallen, only to eventually be discarded as incorrect, outdated, or inapplicable. This culminates in mid-20th-century philosophy such as Existentialism, as articulated by Sartre, or Absurdism, as detailed by Camus. These are competing ways in which to view the world, and ways to address Nihilism. Both grew out of World War II, and both were in vogue for a time. More recently, with the advent of contemporary philosophy such as feminism and postmodernism, the debate over objectivity has resurfaced. In the remainder of this paper, having articulated cultural relativism and subjectivism, I will discuss three main contenders for the basis on which to build a universal morality. The first is Virtue Ethics, as detailed by Aristotle; the second is Utilitarianism, as described by Mill; and the last is Deontology, as articulated by Kant.
Part Four: Virtue Ethics
Aristotle puts forward that the goodness of man is Virtue, and that this is what all humans should strive for. His core concern is this: what traits of character make someone good? This is indivisible from a life of reason, as seen from the ancient Greek point of view. Anscombe believes that secular philosophy has drifted too far from its Greek roots. Modern moral philosophy, under her view, is not logical, as it operates on a system of ‘law without a lawgiver.’ This is an expression of the secular nature of philosophy today. Instead, she argues, we should return to the traditional virtues within a person, such as courage, truthfulness, self-control, diligence, and kindness.
Aristotle articulated that a virtue is a habituated action expressing a trait of character. Vices are similarly habituated; the difference is that virtues are good, and vices bad. Virtues are praiseworthy, while vices are condemnable. People who are virtuous are attractive to us, and those who are vice-ridden are repellent. Different people serve different purposes for us, and as such we seek out different things in different people; we want different things in a doctor and in a president. However, we also evaluate people simply as people, and in this way we arrive at the idea of a good person. Virtues, then, are habituated expressions of character which are good for any person to have.
Which attributes does Aristotle describe as virtues? For him, a virtue resides at the middle point between two vices. Between the vices of cowardice and foolhardiness lies the virtue of courage: cowardice is a deficiency, while foolhardiness is an over-expression. Aristotle considered courage to be the primary virtue, as it is a prerequisite for the pursuit of all the other means between vices, that is, the other virtues. Geach, a recent philosopher, however, took issue with this. He said, “Courage in an unworthy cause is no virtue; still less is courage in an evil cause. Indeed I prefer not to call this non-virtuous facing of danger ‘courage.’” (Geach, page 114). For Geach, this is because many actions may seem courageous but, when in service of evil, are actually condemnable. Similarly, Plato, in his dialogue Euthyphro, describes a situation in which a son is called on to testify against his father in a murder trial. Socrates debates whether this should take place, but Euthyphro sees nothing wrong in it. While murder is certainly murder, it could be argued that other virtues are at play that should intervene in this behavior, namely, being a good family member. (Tredennick et al, pages 19 to 41). As such, there is more to virtue ethics than Aristotle originally articulated.
We must ask, why are virtues good? An appropriate response differs based on the virtue in question. Having courage is important for different reasons than being honest, or being loyal to family. In the end, Aristotle says, “virtues are important because the virtuous person will fare better in life.” (Rachels, page 178). As such, Aristotle’s entire meditation is a treatise on human flourishing, and not merely a collection of admonitions about what should and shouldn’t be done in different circumstances. To truly flourish as a person, one must be virtuous. Should we ask the same virtues of all people? Nietzsche said no. Specifically, on the topic, he argued, “How naïve it is altogether to say: ‘Man ought to be such-and-such!’ Reality shows us an enchanting wealth of types, the abundance of a lavish play and change of forms – and some wretched loafer of a moralist comments: ‘No! Man ought to be different.’ He even knows what man should be like, this wretched bigot and prig: he paints himself on the wall and comments, ‘Ecce homo!’ (Behold the man!)” (Kaufmann, page 491). As such, we can construe that virtues should differ somewhat from person to person. Human flourishing differs because people are different, have different attributes and personalities, occupy different roles, and so on. Aristotle’s response is that certain virtues are required at all times and in all places. No matter how different people may be, or how differently societies may grow, certain virtues are required for human flourishing in any and all circumstances. This is because all humans share certain basic conditions. Aristotle devoted a large section of his work to friendship and political involvement. He called humans ‘zoon politikon’, which roughly translates to political or social animal. Under his view, the highest human flourishing is that done in service of humanity, whether through politics or teaching.
For virtue ethics, character is the central concern. This may seem incomplete, however, as it does not tell us what to do in certain situations. It is good to have a theory of morality that describes what people should aspire to be, but it is equally needed to have one that tells us what to do in tricky moral conundrums. This is where the two remaining schools I look at come into play, namely, Utilitarianism and Deontology.
Part Five: Utilitarianism
Jeremy Bentham, who lived from 1748 to 1832, had a novel approach to morality, different from past versions. He argued that morality is “not about pleasing God, nor is it about being faithful to abstract rules; instead, it is about making the world as happy as possible.” (Rachels, page 101). For him, this Principle of Utility requires maximizing happiness in the world. James Mill was one of his students, and James’s son, John Stuart Mill (1806 to 1873), argued for the most complete version of Utilitarianism. In 1861 he published Utilitarianism, in which he put forth that we are not just permitted but required to cause the most happiness in the world. Peter Singer, the contemporary utilitarian, says that it is not “a system of nasty puritanical prohibitions…designed to stop people from having fun.” (Singer, page 1). Classical Utilitarianism can be articulated in three primary statements.
1) The morality of an action depends solely on the consequences of the action; nothing else matters.
2) An action’s consequences matter only insofar as they involve the greater or lesser happiness of individuals.
3) In the assessment of consequences, each individual’s happiness gets equal consideration.
(Rachels, page 118). The theory is remarkably egalitarian and concise, and it gives us guidance on how to behave in difficult situations. However, many reject this theory. I’ll look at some of the reasons why, before concluding with why I believe Utilitarianism is defensible.
It can be put forward that the query ‘what things are good’ is different from that of ‘what actions are right.’ Utilitarianism answers the second query by going back to the first. What is right is what is good, and what is good is happiness. Mill says, “The utilitarian doctrine is that happiness is desirable, and the only thing desirable, as an end; all other things being only desirable as means to that end.” (Mill, chapter 4, paragraph 2). This in turn engenders the question, what is happiness? Many see this as simply pleasure. In ancient times, this was known as hedonism. Hedonism is broadly dismissed by most ethical philosophers as insufficient in terms of a guiding ethical philosophy. This is simply because there are things other than pleasure that we also consider important. Some say that right action brings about pleasure, others have articulated goals that are in themselves valuable.
A second critique is that consequences are not all that matter. A classic thought experiment along these lines is that of the sacrificial lamb and utopia. Imagine that society could be a perfect utopia for all members, but that in order to achieve this, one innocent person must be killed every ten years. A strict utilitarian would argue that the ends justify the means, and that the greater good of a utopian society justifies a death every ten years. However, this thought experiment leaves us uncomfortable with the strict utilitarian’s conclusion. It reveals that there are things more important than coldly tallying consequences: things like justice, rights, and beauty.
There are still arguments that answer these critiques. Firstly, it is argued that utility is not served by actions that are harmful, even if they promote the greater good. This is because these harmful actions also have consequences. However, this argument is somewhat incomplete, as we see that sometimes this is the case, but not always. The second argument is that utilitarianism helps us to determine rules that we should live by, rather than actions to be taken. Here, we don’t look at the results of each specific action, but instead we ask: what enumeration of rules is best? What rules should we construct to promote happiness? Under this view, we then evaluate acts according to whether they are in line with these rules. What about exceptions to these rules, however? The rule-utilitarian would put forward that we can have a general rule that a rule can be broken if it is in the best interest of human flourishing. It may be wrong to steal a loaf of bread, but it should certainly not be condemnable to do so to feed your own starving family. The final response to critiques is that common sense needs modification. Smart writes, “Admittedly utilitarianism does have consequences which are incompatible with the common moral consciousness, but I tended to take the view ‘so much the worse for the common moral consciousness.’ That is, I was inclined to reject the common methodology of testing general ethical principles by seeing how they square with our feelings in particular instances.” (Smart, page 10). Smart argues that we must consider why values are important. For him, what makes values important is their consequences. Further, we can’t trust our commonplace thinking in divergent and extreme cases. We may say all lying is wrong because we have seen negative consequences, but to say this when some lies result in a better world is an incomplete view. Lastly, we should consider every consequence.
For the sacrifice and utopia example, is it better to have a world in which everyone is subject to violent death? What if the population in question is over 8 billion, as it is currently, and as such the lottery is 8 billion to one every ten years? We may not say this is an ideal outcome, but considering the full accounting of the situation presented must give us pause. Common sense, values, and weighing all outcomes are all important. If we look backwards in time, or across the world now, we see many situations in which what is publicly acceptable should not be. This was addressed in response to critiques of objective ethics earlier in this paper. Perhaps the lasting guidance of utilitarianism, when revised, is that we should not follow things out of habit, but rather attempt to set up a rational system that tells us not just what to be, but what to do, in the interest of human flourishing.
Part Six: Deontology
Immanuel Kant is the originator of the school of ethical thought called Deontology, the idea that human beings are special, and that duty to persons is paramount in our evaluation of moral considerations. Humans are better than other animals, and other animals are only valuable insofar as they aid human welfare. Kant says, “But so far as animals are concerned, we have no direct duties. Animals…are there merely as means to an end. That end is man.” (Infield, pages 239 to 240). In brief, people have a degree of dignity that other beings or objects do not have. He argued this based on two positions. Firstly, people desire things, and things that meet these desires therefore have value. Objects only have value to the degree that they aid people, and animals are things. Secondly, people have dignity due to the fact that they can act rationally. Kant argued that moral praiseworthiness can only come from people acting in goodwill, that is, from being motivated by duty. Were there to be no people, there would be no morality. This is directly in conflict with the ‘transcendental’ school of objective morality, which puts forward that moral truths are true even in the absence of people. But we can reserve this critique for later. For Kant, the Categorical Imperative was the source of all moral considerations. In brief, it is, “Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only.” (Beck, page 46).
What does this mean? It means treating people well. We must respect them, and refrain from using them, manipulating them, devaluing them, and so on. No matter what our goals are, or what the consequences may be, we must always treat people with a basic level of respect, based on the type of being that people are. It is important to stipulate that under Kant we can still ‘use’ people in the sense of employing them for services and asking favors of them, because in these circumstances people freely choose to enter into an agreement; the prohibition comes about when we treat them only as a means to an end. People should decide things for themselves, and not be forced into activity. Further, we should endeavor to develop ourselves, and not just others.
Bentham said, “all punishment is mischief: all punishment in itself is evil.” (Bentham, page 170). Society punishes people, but punishment always involves hurting them. This is a core tenet of justice, one component of ethical considerations at the societal level. Kant, however, said, “When, however, someone who delights in annoying and vexing peace-loving folk receives at last a right good beating, it is certainly an ill, but everyone approves of it and considers it as good in itself even if nothing further results from it.” (Beck, page 170). Bentham responds, “if punishment ought at all to be admitted, it ought to be admitted in as far as it promises to exclude some greater evil.” (Bentham, page 171). Kant argued that utilitarianism is in conflict with concerns of human dignity. It makes us consider how to use people. Punishment, in turn, only attempts to reform people into what others want them to be like, rather than allowing them themselves the ability to make that choice. We can punish them and pay them back, but we cannot manipulate them. Kant believes in punishment because guilty people have stepped on others’ dignity, and that the punishment should be proportionate to the crime. “But what is the mode and measure of punishment which public justice takes as its principle and standard? It is just the principle of equality, by which the pointer of the scale of justice is made to incline no more to the one side than the other…Hence it may be said: ‘If you slander another, you slander yourself; if you steal from another, you steal from yourself; if you strike another, you strike yourself; if you kill another, you kill yourself.’ This is…the only principle which…can definitely assign both the quality and the quantity of a just penalty.” (Rachels, page 151).
Further, on capital punishment, he says, “Even if a civil society resolved to dissolve itself with the consent of all its members – as might be supposed in the case of a people inhabiting an island resolving to separate and scatter throughout the whole world – the last murderer lying in prison ought to be executed before the resolution was carried out. This ought to be done in order that everyone may realize the desert of his deeds, and that blood-guiltiness may not remain on the people; for otherwise they will all be regarded as participants in the murder…” (Rachels, pages 151-152). In brief, Kant believes justice is not done if the guilty remain unpunished. This is because we must treat people as ends unto themselves, or rational beings who take up the consequences of their actions. To be responsible means to accept this punishment.
In brief, then, Kant gives us another theory, similar in scope to utilitarianism, that tells us what type of actions we should pursue, based on his account of what type of beings we are. We must treat people with respect, not as a means to an end, but as responsible beings in their own right. As such, punishment is permissible, not because of the consequences it leads to, but because of the way we construe a human being to be constituted. I disagree with Kant’s assessment of the human condition. Under his view, were a murderer to come to your door and ask whether their quarry is hiding in your home, it would be morally required to answer truthfully, because you thereby allow the murderer to make their own decisions and be responsible for their own actions. Or, as articulated above, it would not be right to steal a loaf of bread to feed your own starving family. Kant’s view forces us to do things that conflict with what we intuitively know about moral questions, namely, that sometimes we must do distasteful things in the greater service of human flourishing.
Part Seven: Conclusion
We have seen that Cultural Relativism and Subjectivism are not defensible when considering serious moral quandaries. We have likewise seen that there are numerous approaches to constructing how it is humans can best flourish, and do right action. We have surveyed Virtue Ethics, Utilitarianism, and Deontology, some of the largest names in the field of Ethics.
To me, Deontology fails as a moral system for precisely the reasons I articulate at the end of that section; it does not allow for common-sense measures that promote human flourishing. Both Virtue Ethics and Utilitarianism have something to offer in my view. Virtue Ethics tells us what type of person to be, while Utilitarianism tells us what types of actions we should pursue. In particular, Rule Utilitarianism is very appealing, as it allows us to nest a system of rules that can take precedence over one another, facilitating a more nuanced approach to how we are guided to act. Therefore, it is my conclusion that there is such a thing as objective morality, and that it is best served by pursuing both Virtue Ethics and Utilitarianism. In this way, we can discern what it means to flourish as human beings, and we can be guided in what to pursue and how to act. Humans are truly remarkable beings, and there are a whole host of other ethical frameworks that fall outside the bounds of this survey of a paper. Absurdism, Existentialism, Feminism, and contemporary Post-Modernism all have their strong and weak points, and I would love to devote more time to detailing them, and to debating which components of each theory best serve human beings as well-rounded people. I leave this to the future, and to other scholars. For the time being, I am satisfied with the combination of Virtue Ethics and Rule Utilitarianism. It guides me further than other schools, and allows me to flourish, to the best degree possible.
Constructing a system of objective ethics is of paramount importance to the pursuit of policy and policy making, as it allows us to come up with a structure or edifice which, when consulted, gives us direction in terms of how we should act. Should we decrease carbon emissions? Certainly, because they result in harmful consequences. Should we lie in the pursuit of good consequences? Only when that lying would truly be called for, but not otherwise. What type of legislator should I become? One who adheres to certain virtues. Being a policy maker in the absence of a system of morality is a dangerous game, as one can fall prey to Nihilism, and simply act from a place of blind self-interest. Having examined the pros and cons of objective ethics, as well as some systems that influence what type of ethics this should be, allows us to better understand how and why to act. It allows us to serve others not from a place of self-aggrandizement, but from a sense of purpose. It allows us to flourish. There is little more praiseworthy that I could imagine.
Part Eight: References
• The Elements of Moral Philosophy, Ninth Edition, James Rachels, McGraw-Hill Education, New York, NY, 2019
• Ethics: A Graphic Guide, Dave Robinson and Chris Garratt, Publishers Group West, Berkeley, CA, 2013
• The Histories, Herodotus, translated by Aubrey de Sélincourt, Penguin Classics, 2003
• Book of the Eskimos, Peter Freuchen, World Publishing Co., 1961
• Series of articles on female genital mutilation, Celia W. Dugger, The New York Times
• Ethics and Language, Charles L. Stevenson, AMS Press, 1944
• Ethics: Inventing Right and Wrong, J. L. Mackie, Penguin Books, 1991
• Objectivity and Truth: You’d Better Believe It, Ronald Dworkin, Philosophy & Public Affairs, 1996
• The Nicomachean Ethics, Aristotle, translated by Adam Beresford, Penguin Classics, 2020
• Modern Moral Philosophy, Elizabeth Anscombe, Philosophy, 1958
• The Virtues, Peter Geach, Cambridge University Press, Cambridge, 1977
• Plato: The Last Days of Socrates, translated by Hugh Tredennick and Harold Tarrant, Penguin Books, New York, 2003
• Twilight of the Idols, Friedrich Nietzsche, translated by Walter Kaufmann in The Portable Nietzsche, Viking Press, New York, 1954
• Practical Ethics, Peter Singer, Cambridge University Press, Cambridge, 1993
• Utilitarianism, John Stuart Mill, 1861
• Utilitarianism: For and Against, J. J. C. Smart and Bernard Williams, Cambridge University Press, Cambridge, 1973
• Lectures on Ethics, Immanuel Kant, translated by Louis Infield, Harper and Row, New York, 1963
• Foundations of the Metaphysics of Morals, Immanuel Kant, translated by Lewis White Beck, Bobbs-Merrill, Indianapolis, 1959
• The Principles of Morals and Legislation, Jeremy Bentham, Hafner, New York, 1948
• The Metaphysical Elements of Justice, Immanuel Kant, translated by John Ladd, Bobbs-Merrill, Indianapolis, 1965
• Critique of Practical Reason, Immanuel Kant, translated by Lewis White Beck, University of Chicago Press, Chicago, 1949
Science and Tech Policy Memo
To Whom It May Concern,
Benjamin Franklin once said, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” In brief, increasing security constraints on Science and Technology (hereafter S&T) would be tantamount to a stranglehold on the very thing that makes S&T so fruitful and beneficial to our society – the free and open exchange of ideas.
The argument in favor of increasing security around the S&T community must first be articulated before we can examine its pros and cons. In essence, those in favor of this put forth that S&T can be used against the safety and interests of the United States and its population. Therefore, we should restrict all aspects of S&T to conform to the wishes of the security minded. This takes many forms. It may include limiting student visas for both those coming into and out of the country, deciding what can and can’t be published, classifying material and research, limiting international conference participation, and generally discouraging international links for those promoting S&T, particularly for what is deemed a sensitive area. We must ask ourselves, “At what point do scientific openness and free exchange of scientific information pose a risk to national security?” (Neal et al., pg 310)
The pros are limited, given the nature of S&T. They mainly include having more control over an unruly and open system. Potentially dangerous S&T would remain in US hands, and there would be a lesser chance of bad actors using it to damage the US and its interests, so the argument goes. This would benefit the US in the short term, but the long-term aims of the country are antithetical to it. The United States invested heavily in science for national security and defense purposes during WWII and the Cold War, and it was S&T’s potential to contribute to national security that spawned the current US system for support of research. (Neal et al., pg 318) While this may be where the modern system was born, it does not follow that we must limit S&T to the degree desired by some. The goal should be a system where S&T can be used for legitimate purposes while opponents are prevented from accessing it. This requires a fine balance. Craig Venter, the former president of Celera Genomics, said, “Some people argue that publishing each genome is like publishing the blueprint to the atomic bomb. But it is also the blueprint for a deterrent and for a cure.” (Neal et al., pg 322)
The cons are substantial. “Openness is the very heartbeat of science, the means toward progress, whereas secrecy is the password of the security community, a culture in which the sharing of information jeopardizes safety.” (Neal et al., pg 322) Foreign students who come to the US to study often stay, and improve the workforce. Immigrants make up a significant percentage of the total number of American scientists who have received a Nobel Prize. (Neal et al., pg 323) “Scientists often do not know what they will learn from their work or how their findings will be used until the research is done.” (Neal et al., pg 319) It is the very nature of S&T that collaboration is vital to deeper understandings and the fostering of nascent technologies. Limiting S&T in the ways described above would mean not just the erosion of ties, but a direct assault on our national goals – safety, prosperity, development, the freedoms we hold dear, and an open world, all of which benefit from S&T collaboration.
In brief, the effects and dangers of pursuing this would be to discourage S&T, and therefore to lessen the potential for transformational change in the interest of the US. There are numerous examples to draw upon. One from recent times would be the promulgation of visa restrictions for international students following the 9/11 terrorist attack. President Bush issued Homeland Security Presidential Directive 2, limiting student visas for those receiving training in sensitive areas (particularly concerning weapons of mass destruction). (Neal et al., pg 323) This proved very difficult to enforce, as what fell under the heading of WMD was vague. Consulate officials were asked to more deeply vet applicants, with the result being increased wait times and at times outright cancellation of study in the US. This led to applicants studying, working, and eventually settling abroad in competitor countries. This was not the first time this issue came up. “In January 1982…the State Department asked American universities to deny designated foreign students access to specific courses of study and laboratories and, further, to monitor their movements…(which) were refused in most cases.” (Neal et al., pg 320) In 1982, “the (Panel on Scientific Communication and National Security) concluded that (1) ‘security by secrecy’ would ultimately weaken US technological capabilities; (2) there was no practical way to restrict international scientific communication without disrupting domestic scientific communication; (3) the nation must build ‘high walls around narrow areas’ in pursuit of ‘security by accomplishment’; and (4) controls should be devised only for ‘gray areas.’” (Neal et al., pg 320) Undertakings like the Human Genome Project and the search for the cause of SARS would not have been completed nearly as quickly without international partnerships. (Neal et al., pg 323) Many of those involved in designing and building the first atomic bomb were immigrants who came here seeking asylum from fascism and war in Europe, including Einstein. (Neal et al., pg 323)
Three examples of this that are germane to China specifically, but also to the general world order of S&T development, are the reaction to the coronavirus, Space issues, and quantum computing. Taking each in turn, let’s look first at the coronavirus. The virus began in China, but due to a lack of global public health coordination, it rapidly spread across the world. This was a result of the secrecy between China and the US, and the suspicion with which they shared information. Beyond the initial spread, the formulation of a successful vaccine could have been greatly accelerated by intense global collaboration, as was the case with the development of the Pfizer vaccine in partnership with a small German start-up. Some countries chose not to collaborate, such as Japan, which sought human trials on Japanese citizens rather than accepting the trials already conducted, leading to a longer rollout and increased loss of life. In the end, much suffering, pain, and even death could have been avoided with more scientific collaboration between countries, rather than less.
Regarding space, the case is somewhat different. Some countries still collaborate, even in the midst of Russia’s aggression against Ukraine, specifically on the International Space Station. However, this bastion of S&T collaboration is nearing the end of its life, and Russia and China are seeking to build stations that are completely their own. Further, redundant missions to the Moon to profess national supremacy are wasting time and resources that could be better spent on exploring asteroids, or moving on to Mars. Last, there is the issue of satellites, and the current buildup of operational capabilities to cripple or disable assets that are vital for national security. Were countries to share their information and logistics, rather than hunker down into protectionism, much could be gained. While space may be the next theater of war, this does not mean that our space-related S&T must be hampered by a cold war.
Finally, there is the realm of quantum computing. Again, the US and China seem at odds. Both treat the completion of this technology as central to the national security of the future, but are pursuing competitive and redundant research endeavors that duplicate each other’s work. Were the countries to share their knowledge, a workable technology could be developed for the benefit of all, with clear dual-use applications, rather than renewing a cold-war-style tech buildup that wastes effort by duplicating what should be fundamental basic research.
These three examples matter for precisely the same reasons outlined in sections one and two above. S&T, at its best, works to make the lives of people better. True, it can also be used against individual countries’ national interests. This debate is perennial and goes back to the first time one group of hominids began using tools. Stone tools could be used to open food, or to attack an enemy. When humanity comes together to share knowledge that would benefit all, we mark ourselves as unique in the animal kingdom. In fact, it may be the very definition of what makes us human. Let us not use S&T as an outpost for novel forms of division, but rather as the firm foundation for a new, and plentiful, global society.
References
Homer A. Neal, Tobin L. Smith, and Jennifer B. McCormick (2008) Beyond Sputnik: U.S. Science Policy in the 21st Century, University of Michigan Press, Ann Arbor.
Additional Reading
Garisto, Daniel. China Is Pulling Ahead in Global Quantum Race, New Studies Suggest. Scientific American. July 15th, 2021. https://www.scientificamerican.com/article/china-is-pulling-ahead-in-global-quantum-race-new-studies-suggest/
Knickmeyer, Ellen. A new space race? China adds urgency to US return to moon. AP, September 15th, 2022. https://apnews.com/article/astronomy-russia-ukraine-space-exploration-science-technology-f98448825e588e8902bb74519b55ba9f
Silver, Laura, Devlin, Kat and Huang, Christine. Americans Fault China for Its Role in the Spread of COVID-19. Pew Research, July 30th, 2020. https://www.pewresearch.org/global/2020/07/30/americans-fault-china-for-its-role-in-the-spread-of-covid-19/
Research Synthesis and Design: Neuroeconomics and Nonmarket Valuation
Carl Mackensen
Kathy Newcomer
Research Methods
Research Synthesis and Design
Summer 2022
I: Introduction
Climate change is a pernicious and ever-evolving issue. In order to address it, we need to encourage action across the entire strata of society, from individuals to elected officials. The primary goal of this piece is to examine how we can best motivate citizens to take up the cause of climate change, and hand a mandate to elected officials who would follow through on the subject. The research questions address what types of cues people respond to, how demographic groups differ, which brain regions are associated with the presentation and analysis of climate-related information, and whether social desirability plays a role. The design of the study is straightforward. A random sample will be taken from the general population and presented with a rank-ordering task for types of investment. The information will be said to come from different sources, in order to see which source elicits the greatest response. This will be done in an MRI scanner and result in fMRI scans of brain activity. The survey in the MRI will be a straightforward ranking exercise, with investment in climate change mitigation and prevention compared to other expenditures. In addition, participants will provide demographic information and participate in lengthy qualitative interviews to better understand where they are coming from. The analysis will consist of a simple regression used to compare the coefficients on dummy variables for each demographic and treatment group, where each treatment group was told the information was provided by a different source. The potential limitations narrow down to the honesty of responses, causality (whether the presented prompt actually causes the response, or whether an intervening omitted variable does), and generalizability, as this study, while random, will only represent the USA; for full generalizability it should be expanded to the entire world, particularly every country that relies on fossil fuels. 
The sections that follow describe the problem being addressed, the research questions, the study design, the survey, the analysis, and potential limitations, followed by a literature review of relevant articles and the conclusion.
II: What Problem is being Addressed?
Climate change is the increase in severe weather outcomes, such as droughts, floods, fires, sea level rise, and more, driven by anthropogenic activity, specifically the burning of fossil fuels, which emits carbon dioxide and methane into the atmosphere and warms the planet through the greenhouse effect. This has been occurring since the start of the Industrial Revolution and the advent of burning coal for power and heat. Climate change has reached such a severe point that, should action not be taken to limit the rise in temperatures to 1.5 degrees Celsius, the effects could very well endanger not just all of humanity, but all life on the planet. Action must be taken within a very small window, ideally before 2030, with drastic cuts to emissions. This issue poses a classic collective action problem, in which there is little motivation for individual action because others have the potential to free ride. If America drastically cuts emissions but Vietnam does not, Vietnam would be better off, at least in the short term, because it would benefit from cheap, dirty fuel sources while emissions would still go down overall. Action is further complicated by the fact that greenhouse gases are a global pollutant and require action by all emitters to reach the stated emissions targets, unlike other types of pollution, such as sulfur dioxide, which are regional and can be addressed by a single country.
For us to make true change, everyone has to change their behavior. We need to not only change the composition of our energy sources to renewable sources like solar, wind, and hydro, but also change how we heat and cool our homes, how we transport people, what we eat, and more. In order for public officials to make change on the issue, they need a clear mandate that change is desired. Right now, change is stymied by a number of Republican areas in the USA that still have key constituencies employed by fossil fuel industries. As such, despite the provision of the most up-to-date science available, elected officials are still hesitant to make change. Some go so far as to outright deny the existence of climate change, calling it a ‘Chinese Hoax’ meant to derail the economy. In order for meaningful laws to be passed, the public must not just be made aware of the issue, but actively call for action. This is difficult, as people and legislators both have competing priorities and desires. A dollar spent on mitigating climate change is a dollar not spent on anything else.
How, then, can we influence both the public and legislators to take up the issue? A number of groups are attempting to do just this, but in order to influence the public, and thereafter legislators, we need to examine the best means available to effect change.
III: Research Questions
The research questions that I will attempt to address in this study are directly influenced by the goals outlined in the previous section, namely, how best to influence people to care about climate change. To wit, they include the following.
1) What types of informational cues influence people to choose to act to address environmental problems caused by climate change?
2) How do race, age, and gender affect how likely people are to choose to act to address environmental problems caused by climate change?
3) What brain regions are associated with the act of weighing environmental investment options?
4) How prevalent is the threat of social desirability on responses to questions about addressing environmental problems caused by climate change?
These questions seek to best understand how we can make a difference. If different cues prompt people to address climate change, messaging should be tailored to the individual type of person so as to be most effective. Similarly, if there are demographic differences in how people respond to messaging, broken down by race, age, gender, and so on, then this too should be considered when reaching out to people. The reason brain regions are particularly interesting in this analysis is twofold. First, we would be able to tease out whether someone is lying in their responses, as the regions associated with lying are known and established. We could then omit these responses, or follow up with the individual participant and question them further. Second, brain regions are interesting because they are very specific in terms of function. We may find that all respondents have a particular region activate for, say, the valuation process, and as such we would in essence be able to peer into their decision-making process. Alternatively, brain regions for traditional values may light up, which would correspond to different outcomes. Brain region analysis is truly coming of age, and has not been used in this context yet. Lastly, for social desirability, we would again be able to see via the brain regions whether a comparison is being made, or whether the participant is acting on their own values.
IV: Type of Design
Since, for the purposes of this paper, I have unlimited means, access, and ability to perform my ideal analysis, the design seeks to examine, with the best available measures, exactly how we can best motivate people to make change. As such, a randomized group will be selected from the general public for a randomized controlled trial. This could be done through a lottery system. The question of how to motivate people to actually participate falls outside the realm of this paper, but conceivably there could be some benefits associated with participating. These could be the prestige of working on such a high-profile study, or monetary benefits for participation. However, I wouldn’t want the rewards to be so completely outsized that those who don’t participate would feel negatively towards participants. Again, however, this falls beyond the purview of this paper, and we can assume that this randomly selected group would be motivated to participate.
Participants will be split into several groups, each of which will watch a different prerecorded message. The message will be said to originate from either scientists, government officials, activists, or celebrities. After this, participants will be placed in a magnetic resonance imaging (MRI) scanner. Functional MRI (fMRI) scans will take place while participants complete a brief series of questions. This allows us to see which areas of the brain are working while each question is answered.
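As an illustration of the assignment step, the following sketch shows one way the randomly selected pool could be dealt into the four message-source groups of near-equal size. This is a minimal sketch under stated assumptions: the participant IDs, group labels, and fixed seed are hypothetical conveniences for illustration, not a finalized protocol.

```python
import random

# The four message-source treatments named in the design.
SOURCES = ["scientists", "government officials", "activists", "celebrities"]

def assign_treatments(participant_ids, seed=42):
    """Randomly assign participants to message-source groups of near-equal size.

    A fixed seed (hypothetical choice) makes the assignment reproducible
    for audit purposes.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal the shuffled participants round-robin into the four groups,
    # so group sizes differ by at most one.
    return {pid: SOURCES[i % len(SOURCES)] for i, pid in enumerate(ids)}

# Example with 20 hypothetical participants: 5 per group.
assignments = assign_treatments(range(20))
```

In practice, assignment would be done by someone outside the administration team so the study remains double-blind, as described below.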
I take ethics very seriously in considering the type of design to be used in my proposed study. The most important ethical practice available to me, drawn from medical literature and practice, is informed consent. First, participants would be made aware of the benefits and costs of participation. Costs might include boredom from being stationary in an MRI, and some discomfort from being in the machine for roughly 30 minutes. The benefits would include adding to critical knowledge, as well as the other possible fringe motivators described above, such as money or prestige. Again, though, we can assume that participants will readily take part, and do so earnestly. Aside from the costs and benefits and informed consent, this study should be double-blind, with no one who administers it knowing which of the treatments (here, the origin of the message) the participant has been assigned to. All individuating data should be stripped from the final data. In my dream-world scenario this would be a project falling under the ‘big data’ heading, and as such would include the participation of thousands of people, not just in the USA but around the world. However, for a pilot study like the one I am proposing, we can assume that the initial analysis would be restricted to a random sample of Americans and USA residents.
V: The Survey
The survey will be administered while the participants are in the MRI and fMRI scans are being conducted. Ideally, it would be as short and to the point as possible, so as not to induce fatigue that would influence answers (perhaps people would not take things seriously and would just seek to finish the survey as fast as possible to get out of the scanner). In order to gather all of the relevant information accurately, and in a timely manner, long thought will have to be given to the exact nature of the survey. Inside the scanner, participants have access to a screen on which information can be displayed, as well as audio through earphones, though audio can be difficult in practice, as MRIs are relatively loud machines and participants in scanners often wear ear plugs. As such, we would rely on visually displayed information. Participants also have access to a small handheld controller that they can operate while in the scanner, in order to register their responses to visually presented information.
First, we will ask whether the participant leans politically left, right, or center. Then, we will ask a series of trade-off questions about expenditures on environmental issues. For example, this could take the form of ‘your state has a budget surplus of $1,000 per capita. Should this be spent on climate change mitigation, or public schools?’ This would allow us in the analysis to rank order, as well as put an average price on, how much environmental expenditures are valued.
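To make the rank-ordering and pricing idea concrete, here is a minimal sketch, with entirely made-up responses and hypothetical field names, of how one participant's binary trade-off answers could be tabulated into a preference share and an implied average per-capita valuation of climate spending.

```python
# Illustrative responses (not the final instrument): each record notes whether
# the participant chose climate spending over an alternative use of a stated
# per-capita surplus. All values and field names are invented for illustration.
responses = [
    {"amount": 1000, "alternative": "public schools", "chose_climate": True},
    {"amount": 1000, "alternative": "road repair",    "chose_climate": False},
    {"amount": 500,  "alternative": "public schools", "chose_climate": True},
    {"amount": 500,  "alternative": "tax rebate",     "chose_climate": True},
]

def climate_preference_share(rows):
    """Fraction of trade-off questions in which climate spending was chosen."""
    return sum(r["chose_climate"] for r in rows) / len(rows)

def implied_valuation(rows):
    """Average per-capita dollar amount the participant directed to climate."""
    chosen = [r["amount"] for r in rows if r["chose_climate"]]
    return sum(chosen) / len(chosen) if chosen else 0.0

share = climate_preference_share(responses)  # 3 of 4 choices -> 0.75
value = implied_valuation(responses)         # mean of 1000, 500, 500
```

The preference shares across all participants would then feed into the regression described in the analysis section, with the implied valuation serving as one candidate dependent variable.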
VI: The Analysis
Following the survey and fMRI scans, I will analyze whether there are any demographic differences among participants in terms of their responses. This would take the form of t-tests of group averages, or regressions with dummy variables coded from demographic information such as race, sex, age bucket, political affiliation, and so on. This aspect of the analysis should prove straightforward enough in terms of the actual statistical analysis. Ideally, a regression would be run with expenditure on environmental investment as the dependent variable, and all other information coded as independent variables. For the demographic analysis, therefore, it simply becomes a matter of looking at the coefficients on the independent variables of interest, as well as their significance levels.
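As a minimal illustration of the dummy-variable logic, the following sketch uses invented numbers to show that, in a regression on a single 0/1 treatment dummy, the OLS coefficient is simply the difference between the treatment and control group means. A full analysis would of course include many regressors and significance tests; this only isolates the core reasoning.

```python
# Toy data (entirely made up): stated climate spending per capita by treatment,
# where treated == 1 means the message was attributed to scientists.
spending = [620, 580, 700, 640, 510, 530, 480, 500]
treated  = [1,   1,   1,   1,   0,   0,   0,   0]

def dummy_coefficient(y, d):
    """OLS slope on a single 0/1 dummy: equals the difference in group means."""
    y1 = [yi for yi, di in zip(y, d) if di == 1]
    y0 = [yi for yi, di in zip(y, d) if di == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

effect = dummy_coefficient(spending, treated)  # 635 - 505 = 130
```

With several demographic and treatment dummies in one regression, each coefficient has the analogous interpretation of a mean difference holding the other coded variables constant, which is exactly the comparison the design calls for.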
Then I will examine which brain regions are active during the survey. This is the unique strength of fMRI scans: by measuring the oxygenation of regions of the brain during the survey in the scanner, they show which regions are active. I will compare the different groups in terms of their responses. We will be able to rate things like source credibility, including confidence, reliability, and truthfulness, and how responses are affected. It is increasingly well established which regions of the brain are responsible for different aspects of human experience. Using this as a second analytical tool would be highly valuable, giving us some insight into the thought pattern of each individual participant, perhaps even beyond what they know of themselves, as well as, in the aggregate, which regions were turned on or off.
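One hedged sketch of the group comparison: given summary activation values for a single region of interest (the numbers below are invented purely for illustration), Welch's two-sample t statistic could test whether mean activation differs between two treatment groups. A real fMRI analysis would use specialized pipelines and multiple-comparison corrections; this shows only the underlying statistical idea.

```python
import math
import statistics

# Hypothetical mean activation values (e.g., percent BOLD signal change in one
# region of interest) for participants in two treatment groups.
group_a = [0.42, 0.51, 0.39, 0.47, 0.44]
group_b = [0.31, 0.28, 0.35, 0.30, 0.33]

def welch_t(a, b):
    """Welch's two-sample t statistic, which does not assume equal variances."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t_stat = welch_t(group_a, group_b)  # positive: group_a more activated on average
```

The same comparison would be repeated region by region, which is why correcting for multiple comparisons matters in any full fMRI analysis.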
These analyses would answer the first three research questions. The final one, on social desirability, is somewhat trickier to disentangle. The best way to answer this research question would be to compare what the participant said would be the desired environmental investment with the brain regions that light up for cognition related to self-worth or self-pleasure. This would prove somewhat challenging, as it merges the two analytical methods that I have deliberately kept separate to this point in order to facilitate ease of analysis, but it is not impossible. First, we would identify those participants whose brain scans indicated they were acting out of social desirability pressures in their responses. Then we would run the same regression outlined above and see how much they say they value the environment. This would begin to give us a toehold into how much these professed environmentalists claim they would like to see change, versus their more sincere counterparts, who would display different brain scans and have different responses.
I would supplement these analyses with as much qualitative work as possible. If the sample is smaller, and time and budget are not an issue, I would send trained interviewers into the participants’ homes for one-on-one interviews either before or after the scan; the timing would also be randomized, as it could affect responses. The interviewers would ask questions designed to build a full portrait of the individual: their fiscal priorities, experiences with environmentalism and investing, attitudes toward government programs and private sector initiatives, what educational and media sources they regularly consult, how much they value future generations compared to present consumption, and whether they have had any experience with climate-related hardship, such as natural disasters. Again, this would be deindividuated so as to protect participants, with the qualitative work tied only to the number assigned to each individual for the scanning. Interviewers would not know which treatment the individual was assigned to, and the interviews would randomly take place either before or after the scan and survey. This randomized pre- and post-treatment interviewing would add another aspect to the study that we could analyze, namely, whether the interview itself contributed to any changes in the attitudes of the participants, all else held constant.
VII: Potential Limitations
I have attempted to make this study as comprehensive as possible in terms of how it is conceived, run, and analyzed. However, as with any human endeavor, particularly new ones, there are potential limitations inherent in the activity. First, there is the limit of what we can technically know. This applies to all levels of this project, from the qualitative interviews, to the surveys and self-reported data, to the fMRI scans. All are limited in terms of what we can actually infer. Brain scans only go so deep, so to speak, and while some regions are known to be associated with certain thoughts or behaviors, others remain mysterious, and only generalities can be spoken of. Also, for a self-reported survey, there is always the question of how honest participants will be. I have attempted to address the social desirability aspect, but participants may attempt to deceive or misrepresent themselves in other ways, without even consciously knowing that they are doing so. An additional issue is bias. Participants in the different treatment groups could come to the study with markedly different outlooks and assumptions about the sources to which they are assigned (celebrities, activists, government officials, or scientists). How this information is reported to them could significantly impact the nature of their responses, and thereafter both the statistical analysis and the brain scans.
Next there is the question of causality. This is all-important in terms of designing something that can be applied to the real world. If the goal is to learn how we can have the biggest impact on climate change through different people presenting the same information, whether the source actually causes the difference between treatment groups may be up for debate. There could be some intervening omitted variable that biases the responses and analyses. The setup of the study may be such that we internalize this bias, reach faulty conclusions, and therefore cannot actually do anything to address the stated issue of climate change and people’s beliefs about it. I attempt to get at this by combining the quantitative approach of a randomized trial with different treatments, a double-blind design, the fMRI scans, and the qualitative interviews, but I am sure there are additional things that I have missed.
Last, there is the issue of generalizability. Climate change is a global problem and requires the active participation of everyone on the planet. Free riding, as stated above, is pernicious and could infect even the most well-meaning of people. If this study, despite being randomized with different treatments and different types of analyses, is restricted to the USA only, as I have proposed for this trial analysis, it would fail to meet the multicultural validity standard, and hence generalizability would be sacrificed. Ideally, this same study should be run in every country in the Global North that is responsible for emissions. Different cultures may value different sources of information, have different brain regions light up on average, or have different back stories in interviews for why they do or do not care about and value the environment. Doing this study globally would allow us to tailor responses and recommendations to each specific area, as there are bound to be differences. Similarly, if we want to focus initially on just the USA, we could run this study separately for different demographic groups, conducting the randomized controlled trial within each group. The responses of and results from Native Americans may be vastly different than those of New England conservatives, and so on. Again, both the macro and micro administration of this study depend on time and money being available, as well as on participants who are willing to take part, but for the purposes of this paper, we can assume that these issues are not present. How something appeals to a person may be highly personal and vary from person to person. This study may give some general ideas, but it is not a panacea.
Before moving on to the review of relevant articles, it also makes sense to examine whether this proposed study meets the standards of different types of validity and reliability. Those to be examined include measurement validity and measurement reliability, internal and external validity, statistical conclusion validity, and multicultural validity. In terms of measurement validity and reliability, I believe that the way the study is set up will return measurements that are both valid and reliable. However, there could be threats to this. If the causal mechanism is not straightforward and there is an omitted variable, or if different people have different neural pathways for how they value or prioritize things, then these measures could suffer. I believe that I have accounted for this by including the fMRI method, as it accounts for these issues and would allow us to see further what the situation is. While internal validity would be strong for these reasons as well, external validity would suffer because we are limiting ourselves to the USA. This holds true for multicultural validity as well. This can be ameliorated by, as I said, conducting the same study in other countries that currently do and historically have used fossil fuels. Lastly, for statistical conclusion validity, I believe the analysis is statistically straightforward, and the conclusions would follow from the analysis detailed. However, again, causality could be an issue. To remedy this, we would simply need to control for as many variables as possible in the analysis, which would inform the pre-test demographic study and the post-test qualitative interview.
VIII: Literature Review
I conducted a review of six articles related to this study. Predominantly, they were fMRI studies, with a few additional modalities and types of analysis. The first was Neural signatures of betrayal aversion: an fMRI study of trust. Here, people make either 'risky' decisions when facing uncertainty, or 'trusting' ones where they rely on someone else. Differences between these decisions are due to 'betrayal aversion', or not wanting to be betrayed by someone you trust. The research questions were how significant this effect is, and what area of the brain is associated with it. The research design had participants, while in an fMRI scanner, make 82 binary trust decisions of different stakes, choosing to trust or not to trust, after which payouts were calculated. The data collection techniques were fMRI scans and game results. Thirty investors were sampled. Investment winnings were tabulated by demographic group, and the key finding was that significantly more trust was observed when betrayal aversion did not influence decision making. Activity was found in the right anterior insular cortex, as well as the medial frontal cortex and right dorsolateral prefrontal cortex. Measurement validity and reliability were not issues, nor was internal validity, but external validity was an issue, as only 30 investors were sampled, which is not representative of the public. The statistical conclusions were valid, but there was little multicultural validity, as the sample was not random or representative.
The second article was The neuroscience of investing: fMRI of the reward system. The primary question is what the correlates of reward system brain regions are with experimental investment behavior, and whether a model can be made. Subjects played a monetary incentive delay task, with repeated trials in which they make or lose money depending on their ability to pay attention and react quickly; this took place in an MRI scanner. The data were fMRI scans and game results, and the population was Stanford students. It was found that the nucleus accumbens (NAcc) and medial prefrontal cortex (MPFC) are associated with rewards, while impulsivity and motivated excitement may be rooted in the NAcc. Again, measurement validity, measurement reliability, internal validity, and statistical conclusion validity were all fine based on the structure of the study, but external validity and multicultural validity both suffered, as the study only looked at Stanford students.
The third piece was Mindfulness training increases cooperative decision making in economic exchanges: Evidence from fMRI. The key question is how mindfulness training impacts the emotional component of economic exchanges. This was a randomized longitudinal design involving mindfulness training or an alternative. fMRI scans of participants playing a game were conducted. The participants were 51 volunteers, evenly split between men and women and including white and Black participants, 'who want to learn to deal with stress issues in everyday life.' The study sought to find brain regions associated with cooperation. It was found that mindfulness increases cooperation, and that the septal region, linked to social attachment, was activated. Measurement validity and reliability are questionable here, as there is no real way to quantify how ‘mindful’ a person is. Internal validity is alright, as is statistical conclusion validity; however, external validity and multicultural validity suffer, as there were only 51 self-selected participants.
Fourth, I looked at Risk patterns and correlated brain activities: Multidimensional statistical analysis of fMRI data in economic decision making study. The question was what neural substrates underlie decisions under risk in investment. Participants were exposed to an investment decision task while in an MRI scanner. A time series of 3D images of the brain was analyzed with a panel version of the dynamic semiparametric factor model (DSFM). fMRI scans and investment results were tabulated. Seventeen subjects participated, each exposed to an investment decision task. fMRI brain region scans and the DSFM, along with nonparametric statistical modeling, were used. Key findings included that decision making is comprised of valuation, comparison, and the final choice, and that risk is associated with different brain regions than previously established findings on regions for investment. Measurement validity was alright, but measurement reliability was questionable, as risk is different for each individual. Internal validity and statistical conclusion validity were also fine; however, again, external validity and multicultural validity suffered, as there were only 17 subjects.
Fifth, I read The mere green effect: An fMRI study of pro-environmental advertisements. The motivating observation was that purchasing behavior does not reflect people's stated preference for green products. Do green ads work? In an MRI scanner, participants were exposed to green and standard ads and then rated their preferences and how much they liked the product. fMRI scans were taken during the rating. The sample was 24 right-handed women recruited through a community database. It was found that ratings were more favorable for green ads, but the fMRI showed the opposite: participants had more activation in regions associated with personal value and reward (the ventromedial prefrontal cortex and ventral striatum) in response to the control ads. Measurement validity was questionable, as it relied on self-reported scores as opposed to some objective measure. Measurement reliability was fine, as were internal and statistical conclusion validity, but again, both external validity and multicultural validity suffered due to the limited participant pool.
Lastly, I looked at Identity on social networks as a cue: Identity, retweets, and credibility. The question was how social media cues influence people's judgments of source credibility for risk information. A posttest after treatment, in which participants looked at social media posts, was administered, in addition to a self-report questionnaire. Participants were recruited from undergraduate communication courses at a southern research university: 434 people total, ranging from 18 to 55 years old, roughly half men and half women. It was found that different online heuristic prompts influence judgments of competence, goodwill, and trustworthiness, while cues for authority strongly influenced credibility. Measurement validity and reliability are questionable, as this required self-reporting. Internal validity and statistical conclusion validity were alright, as was external validity given the large sample, but multicultural validity suffered given a lack of diversity.
In sum, there were similarities, as well as differences, between the various studies I examined. The five fMRI studies all went about doing similar things and taking fMRI scans. For the most part, this centered on investment under different conditions, or rating preferences. The conclusions varied, with different study setups finding different regions of the brain lighting up. Another similarity regards the external validity of these papers. Their sample sizes were all quite small, and rarely representative of the public. Further, in some cases, they were self-selected. This does not bode well for any generalizability to the broader public, and certainly not to the rest of the world. One strength the studies shared, however, is that they all had relatively straightforward and reliable designs, with good measurement validity, measurement reliability, internal validity, and statistical conclusion validity. It is hard to knock a decent fMRI study, as they are so in-depth and analytical by their nature. The only critique that flows from this falls under the statistical conclusion validity front, where we could argue that correlation does not equate to causation. Just because a certain brain area is lighting up does not necessarily mean that it is being caused by whatever the participants are doing. There may be intervening omitted processes, resulting from the prompts, which in turn trigger the brain regions. This is a problem for all such studies.
IX: Conclusion
Through analyzing the previous articles, we can come to a synthesis that sets the stage for this novel study. fMRI techniques have not yet been used to examine green investing, though the constituent parts are present in the current literature. This experiment has the potential to address the stated research questions, as well as to gauge how honest participants’ answers are. While there are open questions regarding the honesty of responses, the causal mechanism, and the generalizability of conclusions, the outcome of this study would prove highly interesting, and would ideally pave the way for further analysis of what motivates people to take up the cause of green investing. Climate change is a serious and all-pervasive issue that will only grow in importance as time goes on. In order to best effect change, this study attempts to get at the root of how to motivate people to care about the issue, and in turn to press legislators and elected or appointed officials to make change. This is done through the structure of the survey and how it is administered, analyzed, and compared to the existing literature. With results in hand, I imagine we would see that certain cue sources for the information provided are more influential than others, and that efforts should be made to use those sources to reach people. How best to actually do this once the results are in lies beyond the purview of this paper, and I leave it to others in the future.
X: References
Peterson, R. L. (2005). The neuroscience of investing: fMRI of the reward system. Brain Research Bulletin, 67, 391-397.
Kirk et al. (2016). Mindfulness training increases cooperative decision making in economic exchanges: Evidence from fMRI. NeuroImage, 138, 274-283.
Bommel et al. Risk patterns and correlated brain activities: Multidimensional statistical analysis of fMRI data in an economic decision making study. Psychometrika, 79(3), 489-514.
Lin et al. (2018). Identity on social networks as a cue: Identity, retweets, and credibility. Communication Studies, 69(5), 461-482.
Aimone, J. A., Houser, D., & Weber, B. (2014). Neural signatures of betrayal aversion: An fMRI study of trust. Proceedings of the Royal Society B, 281, 20132127.
Vezich, I. S., Gunter, B. C., & Lieberman, M. D. (2017). The mere green effect: An fMRI study of pro-environmental advertisements. Social Neuroscience, 12(4), 400-408.
Energy Insecurity and COVID-19 Replication Paper
Econometrics II
Professor Leah Brooks
Carl Mackensen
4/22/2022
Part I: Introduction
Energy insecurity is a significant issue for those dealing with poverty. It becomes markedly harder to manage when family members fall sick and miss working time, or have to pay medical bills. This article, Sociodemographic disparities in energy insecurity among low-income households before and during the COVID-19 pandemic, by Memmott, Carley, Graff, and Konisky, attempts to answer the question, “Did COVID-19 make families at risk of energy insecurity worse off on this measure?” They do this using a fixed effects model with a number of demographic and COVID covariates, examining the relationship between these covariates and three measures of energy insecurity: whether the family was unable to pay a bill, whether a disconnection notice was issued, and whether a disconnection was completed. They found that marginalized groups were at the most risk, and that COVID exacerbated energy insecurity for these groups. For my extension, I attempt to address some omitted variable bias by including both state-level fixed effects and month-of-unemployment fixed effects. I was, for the most part, able to replicate what the authors did, and my novel extension is an interesting addendum to their work.
Part II: Article Summary
Energy insecurity is, in essence, the inability to meet basic energy needs for survival. It mostly affects those in poverty, or members of marginalized groups such as racial minorities or disabled people. COVID-19 directly contributed to the exacerbation of energy insecurity for those at risk of it. In this study, the authors administered a survey between April and May 2020 to households at or below 200 percent of the federal poverty line. The survey also captured data for the previous year, so that the analysis could examine energy insecurity and a number of correlates across both pre-COVID and early-COVID periods. The specific question the authors sought to answer was how COVID impacted energy insecurity for impoverished families, making the causal claim, through a fixed effects model, that COVID exacerbated energy insecurity for marginalized groups. Their specification for all of their regressions, both logistic and otherwise, was:
Energy Insecurity = α + β1(demographic correlates) + β2(COVID correlates) + ε
The primary measures of energy insecurity in this survey are being unable to pay an energy bill, receiving a notice of disconnection, and receiving a disconnection. After examining the relationship between a number of demographic factors and energy insecurity, the authors used logistic regressions to tease out the effect of the correlates on energy insecurity. This was examined over both the previous year (before COVID onset) and the previous month (the early days of COVID). It was found that the poor, ethnic minorities, the disabled, and other marginalized groups were at greater risk of exacerbated energy insecurity as a result of COVID-19.
In an effort to describe the effect in greater detail, the authors also included data in their surveys on COVID-19-related issues. These included “whether they had received a COVID-19 stimulus payment…, whether their employment status had changed due to the pandemic and whether someone in their household had symptoms of or a positive test for COVID-19” (page 189). They also constructed a measure of hardship due to COVID-19. Reestimated logistic regressions showed that, even with the COVID measures included, the same correlates were responsible for energy insecurity both before and in the early days of the pandemic, but that the COVID-19 correlates were also positively correlated with energy insecurity. The table in Appendix II, Table One details both their findings and my replication, which I discuss below. Appendix II, Table Two focuses on the COVID measures. Interestingly, the COVID measure of receiving the stimulus was found to be negatively correlated with the three energy insecurity measures. This makes sense, as having more money means it can be put toward basic needs like energy provision. The remaining three COVID measures of hardship, lost job hours, and symptoms all had positive coefficients, to a greater or lesser extent, which again makes sense, as they negatively impact one’s ability to work and subsequently pay bills.
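Since these are logistic regressions, the reported coefficients are log-odds, and exponentiating them gives odds ratios. A minimal sketch of that conversion (my own illustration, using the authors' -0.315 stimulus coefficient from Table 10 as the example value):

```python
import math

def odds_ratio(log_odds_coef):
    """Convert a logistic regression coefficient (log-odds) to an odds ratio."""
    return math.exp(log_odds_coef)

# The authors' Table 10 coefficient for COVID stimulus on "could not pay bill".
stimulus = odds_ratio(-0.315)
# An odds ratio below 1 means receiving the stimulus is associated with
# lower odds of being unable to pay an energy bill, all else held constant.
print(round(stimulus, 2))  # about 0.73, i.e. roughly a 27% reduction in odds
```

This is only an interpretation aid; it does not re-run any estimation.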
Part III: How the Results Match
After reading this piece, I found a few specific aspects of the analysis to be the most relevant to the authors’ final conclusions, and the most amenable to furthering their exploration of the topic. The first was Figure 4 from the original article, in which energy insecurity was examined across all three measures (whether a family was unable to pay a bill, whether a disconnection notice was issued, and whether there was a disconnection) for a number of COVID-19-related correlates. Accordingly, I found the means and standard deviations for each correlate, each of which had three energy insecurity measures. I did this in Stata, and put the resulting information into Excel to make comparable bar graphs for each correlate. While the original paper was able to include the three measures of energy insecurity for each of the eight correlates on a single graph, I had to make a separate graph for each correlate, each containing the three energy insecurity measures. The results can be found in Appendix Section I, with Figure 4 repeated after the fourth of the correlate graphs for ease of comparison. I was able to come up with comparable percentages of the energy insecurity measures for each correlate; however, my standard deviations diverged substantially from the authors’ 95% confidence intervals for each measure, and as a result I did not include them. I was able to replicate the majority of the findings of the paper, but not the confidence intervals. This is likely because the authors used a larger sample than I did; it is unclear why they had more respondents in their summary table of means, as I used the same underlying data. Confidence intervals aside, however, the percentages for each figure generally match in size. Their figure, included below, is in percentage terms, while mine is in proportions between zero and one.
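The group shares behind a figure like this are simple conditional means of a binary outcome. A pure-Python sketch of the computation (the actual replication was done in Stata; the records below are made up for illustration):

```python
def group_shares(records, group_key, outcome_key):
    """Share of respondents with outcome == 1, within each level of group_key."""
    totals, hits = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + r[outcome_key]
    return {g: hits[g] / totals[g] for g in totals}

# Made-up survey records: a COVID correlate and one insecurity measure.
survey = [
    {"lost_job_hours": 1, "could_not_pay_bill": 1},
    {"lost_job_hours": 1, "could_not_pay_bill": 0},
    {"lost_job_hours": 0, "could_not_pay_bill": 0},
    {"lost_job_hours": 0, "could_not_pay_bill": 0},
]
print(group_shares(survey, "lost_job_hours", "could_not_pay_bill"))
# {1: 0.5, 0: 0.0}
```

The same computation, repeated per correlate and per insecurity measure, yields the bars plotted in each graph.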
I went on to recreate one of the logistic regressions run by the authors, specifically the one in the Appendix, Table 10: the logistic regression predicting energy insecurity in the last month, i.e., during the onset of COVID-19, when the survey was administered. This is an important baseline measure to include, as will become apparent when I discuss my own analysis. The original results, compared to mine, are in Appendix Section II, Table One: the logistic regression of the demographic and COVID covariates on the three measures of insecurity for the preceding month, alongside the corresponding coefficients I found. I was, for the most part, able to come close in terms of both the sign and size of the coefficients. This is somewhat unsurprising, as I used their data directly. However, some discrepancies remain. The full results for Table 10 and my replication of it are in Appendix II, Table One, but it is most informative to compare the coefficients on the COVID covariates specifically, as they measure how COVID impacted energy insecurity. I have therefore made a side-by-side comparison table of these results, juxtaposing their findings with my own, in Appendix II, Table Two. First, for could not pay energy bill last month, the first measure, COVID stimulus, is -0.315 in theirs, while I found 0.441. This is odd, as both the sign and magnitude differ. The rest of the COVID variables are similar in size and sign. The same holds for the second measure of energy insecurity, received a shutoff notice: theirs was -0.388 and mine was 0.477 for the COVID stimulus measure. Lastly, for disconnection, the same issue arises, with theirs being -1.225 and mine 0.162 for the stimulus measure, while the remaining four COVID measures were again fairly similar.
This is interesting, because it points to some heterogeneity between what I found and what the authors found. This could be because they included additional fixed effects not specified in the paper, or it could be a result of my using a limited data set. As described above, it makes logical sense for the stimulus coefficients to be negative in relation to the energy insecurity measures, as receiving the stimulus means having more money to pay bills. My differing findings suggest that the relationship may be more nuanced: perhaps those who received the stimulus were still at risk of energy insecurity, as they remained unable to pay their bills.
I then went on to recreate one of the robustness checks the authors employed in their final analysis, Table 12 from the appendix of tables, which runs standard OLS regressions to predict the energy insecurity measures for the previous month, including the covariates and COVID conditions. I ran the three regressions, one for each measure of energy insecurity. Again, I was able to predominantly recreate what the authors found; the comparative results are in Appendix III, Table One. Here, I came much closer to their values. Perhaps this is because they did not add additional covariates or fixed effects for this model, but rather did exactly what they said they did and simply ran the regression on the coefficients listed. The full results of the regression are in Appendix III, Table One, but I again pulled out the four COVID measures for a side-by-side comparison in Appendix III, Table Two. For the first measure, COVID stimulus on the not-paying-a-bill criterion, theirs was -0.033 while mine was 0.045, with the remaining COVID coefficients similar. For the second measure, receiving a shutoff notice, theirs was -0.031 for COVID stimulus and mine was 0.035, with the rest comparable. Lastly, for disconnection, COVID stimulus was -0.033 for them and 0.030 for me, with the remaining four comparable. All of this points to there being some systematic difference between the data or specification they used and what I used. It still makes sense that receiving a stimulus means less energy insecurity, as they found, but my own findings point to a more complex portrait, in which other issues may be at play.
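Because these robustness checks regress a binary outcome with OLS (a linear probability model), the coefficients read directly as probability changes rather than log-odds. A trivial sketch of that interpretation, again using the authors' -0.033 stimulus coefficient as the example value:

```python
def pct_point_change(ols_coef):
    """Interpret a linear-probability-model coefficient as the change,
    in percentage points, in the probability of the binary outcome."""
    return round(ols_coef * 100, 1)

# The authors' Table 12 stimulus coefficient on "could not pay bill":
# receiving the stimulus is associated with a 3.3 percentage point lower
# probability of being unable to pay, holding other covariates constant.
print(pct_point_change(-0.033))  # -3.3
```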
Part IV: Issues of Causality
The authors used fixed effects for both their logistic regression and OLS regression. It is certainly conceivable, however, that there are additional omitted variables which would bias the results. Specifically, the measures of interest are whether the COVID variables resulted in increased energy insecurity, as the authors claim they found, holding all demographic variables constant. It was found that COVID exacerbated the issue of energy insecurity, and that marginalized groups were particularly affected.
What omitted variables could bias the authors’ estimates? It is straightforward to imagine a number of them. First, there is the question of which US state a household lives in. Different states had vastly different COVID responses, as well as underlying differences in how they approach poverty generally and energy insecurity specifically. It is conceivable that different state COVID responses would contribute variables correlated with both the COVID measures and the measures of energy insecurity. These could take the form of rent suspensions for certain populations during COVID, how states managed furloughed and unemployed populations, whether energy providers enforced their disconnections or disconnection threats, and how much savings people in a given state had. There are surely other channels as well; regardless, there may be very many variables correlated with both energy insecurity and the COVID measures. Second, there is the temporal issue of when a given household became affected by COVID. For this data, we have a month-of-unemployment variable. It is conceivable that those who lost their jobs earlier would have a more difficult time paying for energy than those who lost their jobs more recently. As such, this would be correlated with both the COVID measures and the energy insecurity measures.
Part V: The Extension
In order to address the causality issues and omitted variable bias threats outlined above, I conducted a novel analysis using the data provided by the authors. I chose to redo the logistic regression that I replicated for Table 10, which is the heart of the paper’s ultimate findings, and to include both state-level fixed effects and fixed effects for the month of losing employment. This yields a broader picture of the relationship between energy insecurity and the impact of COVID-19. I used all of the same original covariates as controls. I report my findings for the three measures of energy insecurity in Appendix IV, Table One, compared directly with those of the authors. With this comparison, we have a fuller picture of the overall effect.
My regression took the form:
Energy Insecurity = α + β1(demographic correlates) + β2(COVID correlates) + γ(state fixed effects) + δ(month of unemployment fixed effects) + ε
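Operationally, the fixed effects in this specification are just indicator (dummy) variables for each state and each month of unemployment, with one reference level dropped to avoid perfect collinearity with the intercept. A minimal sketch of that encoding (my own illustration; the actual estimation was done in Stata):

```python
def fixed_effect_dummies(values, drop_first=True):
    """Encode a categorical column as 0/1 indicator columns,
    dropping one reference level to avoid perfect collinearity."""
    levels = sorted(set(values))
    keep = levels[1:] if drop_first else levels
    return {f"fe_{lvl}": [1 if v == lvl else 0 for v in values] for lvl in keep}

# Made-up state column for four respondents; the same encoding applies
# to the month-of-unemployment variable.
states = ["MD", "VA", "MD", "TX"]
print(fixed_effect_dummies(states))
# {'fe_TX': [0, 0, 0, 1], 'fe_VA': [0, 1, 0, 0]}
```

Each resulting column then enters the regression as one more covariate, absorbing whatever is constant within that state or month.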
What is particularly interesting to me is that, after the inclusion of the state and month-of-unemployment fixed effects, the results for the demographic and COVID covariates are closer to those of the original paper’s Table 10. This leads me to believe that the authors did indeed include other covariates in their logistic regression analysis, at least to the extent of state-level fixed effects. The full results of the regression are in Appendix IV, Table One, but as above I pulled out the COVID-related covariates for direct comparison between their original logistic regression and my full specification with state and month-of-unemployment fixed effects; this is in Appendix IV, Table Two. For the measure did not pay an energy bill, the COVID stimulus coefficient was -0.315 in theirs and 0.567 in mine, with COVID hardship and lost job hours both comparable, but COVID symptoms differing: theirs was 0.449 and mine was -0.033. For received a shutoff notice, COVID stimulus again differed, theirs being -0.388 and mine 0.763, while COVID hardship was larger for me, at 2.102 against their 0.813; the remaining measures were comparable. Lastly, for the disconnection measure, theirs was -1.225 and mine was 3.814 for the stimulus measure, a marked difference, while the remaining variables differed, but not markedly enough to detail here. Again, their finding that the stimulus is negatively related to energy insecurity makes sense, but my more robust specification points toward a broader story about this relationship.
These differences could have arisen because I included additional covariates in my logistic regression: not just the demographic and COVID variables, but also the state and month-of-unemployment fixed effects. Further, the differences noted are significant, implying some difference in the underlying data used to construct the results.
It is also interesting to examine the outcomes of the state-level fixed effects, reproduced in Appendix IV, Table Three, and the month-of-unemployment fixed effects, reproduced in Appendix IV, Table Four. This has the potential of, if not solving the endogeneity issue laid out above, then at least making the specification more accurate. For the state fixed effects, a large number of states were omitted for lack of respondents, but the results are still interesting and informative. For the second measure of energy insecurity, shutoff notices, Arizona had a coefficient of -0.21, Georgia -3.11, and Maryland -0.32. This means that, on this measure, conditions for the potentially energy insecure were markedly better in these states than elsewhere. For the disconnection measure, Arizona had a coefficient of -1.94, California -0.19, Hawaii -0.17, Illinois -0.86, Delaware -1.30, Ohio -3.60, Pennsylvania -3.27, and Texas -0.96. Again, this means that, all else held constant, it was markedly better to be poor during COVID in these states than in others.
In terms of the month-of-unemployment fixed effects, the results were equally interesting. There were no clear temporal trends; rather, losing one’s job in different months is associated with different degrees of the energy insecurity measures. A number of months actually had negative coefficients relative to the rest, implying that it was better, in terms of energy insecurity in the previous month, to lose one’s job during those months than otherwise. This could be the result of a time lag between losing a job and becoming energy insecure. Perhaps savings and wealth that were drawn upon affected households’ energy expenditures as well. This proves difficult to quantify.
What these results show is that the general landscape of the logistic regression analysis of COVID and demographic variables against the three measures of energy insecurity is more complex than the authors initially let on. Perhaps they did not think to include these fixed effects, or found the results too strange to include in their analysis; regardless, there is the issue of causality and omitted variable bias to contend with. There are a number of possible omitted variables to consider when weighing the authors’ causal claims, and I have attempted to examine these issues and report the findings. What is further interesting is that, even in the presence of this novel analysis, the results the authors first reported are actually strengthened. I do not know why the authors would omit such an analysis from their reported findings, as it strengthens their position that COVID exacerbated the three energy insecurity measures, but perhaps they simply did not want to delve into explaining the somewhat odd findings outlined in this section.
Part VI: Conclusion
In brief, I was, for the most part, able to replicate the work of the authors. First, I replicated the summary statistics figure for the means of different demographics across the three energy insecurity measures. This was mostly straightforward, though the confidence intervals were somewhat off, as the authors used a different sample population than I did. I then replicated the results from Table 10, the logistic regression of a number of demographic and COVID covariates on the three separate measures of energy insecurity. My numbers were generally similar, and pulling out the COVID measures I found some differences as well as some similarities, again most likely because they used a different subset of the data. Following this, I replicated the ordinary least squares regression, finding mostly similar numbers, though again, pulling out the COVID measures, there were some differences, for the reasons detailed above.
What is most interesting to me is the extension that I conducted. It was found that some states had better or worse conditions for those at the threat of energy insecurity. This could be because different states had different COVID policies, as well as energy policies. Some states mandated that their energy providers not disconnect service. As such, it makes sense that for some states, there would be negative correlations for energy insecurity, while others had positive. This is a step in the right direction in terms of teasing out the potential omitted variable bias.
I also found no clear time trend in which losing one’s job at a particular point results in energy insecurity. This could be because there was a lag in the effects of unemployment, or because unemployment insurance was utilized, or savings were drawn upon. Energy is not the first bill to be passed over in times of great hardship, but nor is it the last. As a result, some months had positive coefficients and some negative, with no clear temporal pattern.
It was also instructive that the original findings of the authors were, for the most part, strengthened by my novel analysis, which they did not choose to conduct themselves, though they readily could have, having the necessary data. Whether later-COVID and post-COVID data also line up with these findings I leave to future research.
Part VII: References
Memmott, T., Carley, S., Graff, M., & Konisky, D. M. (2021). Sociodemographic disparities in energy insecurity among low-income households before and during the COVID-19 pandemic. Nature Energy, 6, 186-193.
The Climate Crisis: Politics and Public Policy
PSC 8229 Politics and Public Policy
Dr. Elizabeth Rigby
Fall 2021
Carl Mackensen
Policy Application V
12/23/2021
The Climate Crisis: Politics and Public Policy
I: Introduction
The climate crisis poses an existential threat to humanity, as well as to all life on Earth. It is perhaps the most salient and defining issue of our times, and what we do over the next decade will prove vital to keeping the planet from warming too greatly, with all that such warming would entail. Economists propose a simple and, according to them, effective means of changing the situation for the better for all members of society: putting a price on carbon emissions. Such an action would impact every aspect of humanity, from electricity generation to buying habits for locally versus internationally sourced avocados. Why is this the case? Because, according to economists, if you want people to do less of something, or buy less of something, making that thing more expensive is the simplest and most efficacious means of doing so. There is debate about the exact form the price would take, whether a straightforward command-and-control (CAC) tax, or a cap-and-trade (CAT) program similar to what the United States of America did for sulfur dioxide under an amendment to the Clean Air Act. This debate appears in this paper insofar as it bears on the politics and policy landscape, which is the main focus of what follows. What I examine here is why federal legislation on the issue has stalled, who the players are and how they operate, and what recommendations can be made. In short, I examine the politics and policy of the issue as it currently stands at the time of writing. Specifically, I concentrate on the USA, though comparisons to other countries can be illustrative.
II: Issues with Federal Legislation
A price on carbon is not anathema to US politics. There are regions of the country that already employ one scheme or another by which carbon is priced. Specifically, California and the 11 member states of the Regional Greenhouse Gas Initiative (RGGI) in the northeast of the country both have prices on carbon, through different mechanisms and laws, which result in lower emissions. Many European countries also have carbon prices, usually significantly higher than in the US areas mentioned. Why, then, is it so difficult for the United States of America to pass federal legislation enacting such a mechanism?
Simply put, because there has been a historic effort by special interest groups that benefit from the status quo to deny even the basic science of the issue. Here we can be informed by E. E. Schattschneider’s The Semi-Sovereign People. That work details how the primary group represented in federal legislation and action is business groups, which work with the Republican party to maintain the status quo and continue to enjoy special treatment. These groups are well represented, and actively work to prevent new issues or legislation that would threaten their dominance or way of operating. They prevent such issues from even entering the public sphere as debate, and generally dominate the discourse. Additionally, the privatization and socialization of an issue is examined. This language does not refer to whether goods and services are provided by the free market or a government program, but rather to whether the actors taking part in the discourse and action around an issue are ‘private’, as in the status quo of special interests and the Republican party, or ‘public’, as when the general populace or activists are included. Public action on an issue can prove vital to the success of putting something on an agenda and moving that agenda forward, and different actors at the legislative level can either gain or lose by making the debate and agenda setting more public or private.
First, before examining each of these claims in turn, it is instructive to briefly detail some history of the issue by way of a precis. Evidence of the greenhouse effect resulting from the emission of greenhouse gases has existed for over a century. For quite some time, however, it was not well known or understood, and the USA and other countries used fossil fuels to power their industrial revolutions. Starting in roughly the early 1990s, scientists began to sound the alarm that wantonly emitting as much as we like into the atmosphere could have dire consequences. By the first decade of the 2000s, this was quite well understood. By the time of the Paris Climate Accord in 2015, it was so well understood that many countries, with the prompting of the UN IPCC, pledged to significantly decrease their emissions. Progress stalled under the Trump administration and sputtered during the recent UN COP meeting in Glasgow in 2021, despite the Biden administration making climate action a centerpiece of its Build Back Better spending package. Currently, at the time of writing, that legislation, though passed in the House, will not move forward in the Senate, thanks to the razor-thin 50-50 split between the parties and a single Democrat, Manchin, defecting from the party on the issue.
It is informative to examine each of Schattschneider’s theories in turn and apply them to the climate crisis. First, regarding special interests and the Republican party: thanks to both working in concert, even the basic science of the climate crisis has been in question among the populace. In no other industrialized, developed democratic country that I know of, whether speaking of left-wing or right-wing parties, is the science up for debate. What is to be done about it may still dominate discourse and discussion, but the basic science is settled and referenced as such. The special interests working with the Republican party have done such a thorough job of misinforming the public that many who identify as Republican view the climate crisis as fabricated; former President Trump called it a ‘Chinese hoax.’ Those who do not view it as outright false view action on the topic as so detrimental to the current economy as to be cost prohibitive, never mind that the potential global damages of inaction are estimated in the tens of trillions of dollars. This works to the advantage of those who want to keep the status quo of relying on fossil fuels and emitting as much as possible. Regarding the socializing or privatizing of debate on the issue, the Republican party wants discourse to remain private to maintain the status quo, while progressive Democrats are fueled by the increasingly public actions of protestors and activists.
III: Bipartisanship
This is not to say that bipartisanship is dead, either generally or on this issue specifically. The book The Limits of Party: Congress and Lawmaking in a Polarized Era details how, despite the current climate feeling more partisan than at any point in living memory, federal legislators still work cooperatively and pass legislation in a bipartisan manner. The authors show this by examining congressional voting records of numerous kinds and performing statistical analyses on them, supplementing their analytics with interviews with professionals in the know, often presented after the analytics. In essence, the authors argue that things have changed little. Their position is that party control does not result in more legislative successes, and that laws are not passed on a strictly partisan basis. Minority support is needed at least as much as in the 1970s, and the majority party routinely fails to pass its priorities. The authors do stipulate that the parties are mostly more ideologically homogenous than in the past, and they examine whether the failure to pass priorities is due to minority party opposition or to diminished discipline within the majority party. They found that minority veto points were not the cause of this legislative failure; instead, coalition building within the party and a lack of party cohesion were the source. It is also found that trying to overpower the minority does not result in success; rather, bipartisanship is imperative.
This can certainly be seen in the case of the Build Back Better (BBB) bill currently in the Senate, where the lack of support from Senator Manchin has proven enough to completely stop the legislation's progression. It may prove to be the case that a separate, stand-alone bill on the climate crisis, with the support not just of Democrats but also of a number of centrist Republicans, could be a better vehicle forward.
But what would this look like? Again, as referenced above, we can look both to the states in the USA that have already acted on the issue and to countries abroad, such as those in the EU. A core question is whether and to what degree the legislation should use market mechanisms, which centrists and Republicans are more likely to support, or government command and control (CAC) taxes or prohibitions, which those on the left are more likely to support. This debate can be informed by Lerman's Good Enough for Government Work: The Public Reputation Crisis in America (And What We Can Do to Fix It). In this book, Lerman argues that government work was once considered one of the highest standards of performance, but that the perception of it has degraded more recently. Services that are good are now assumed to be private ones, and services that are poor to be public ones, despite counterexamples in each circumstance. Partly for this reason, many support a cap and trade (CAT) program employing market mechanisms over a more traditional CAC approach. Despite CAT having been shown to work quite effectively for sulfur dioxide emissions, many on the left still support a CAC approach, being uncomfortable with the concept of issuing permits for emissions, which they construe as a license to emit at all.
IV: Public versus Private Methods
The advantage of a CAT approach is that, with trading of permits, total emissions will be at or below the cap (which can be set at any level), and that, as a result of market behavior and the trading of permits, the emissions that do take place will go to the most economically advantageous uses in terms of the provision of goods and services. In essence, firms for which it is more costly to emit than to abate will abate, and will sell permits to those for which it is cheaper to buy permits than to retrofit their operations. Alternatively, the advantage of CAC methods is that the decarbonization most needed is sometimes not the low-hanging economic fruit, and that time and consistent action are required for transformational change, such as the move to renewables in the grid. Supporters of this fiat approach argue for taxing or limiting emissions directly, or even banning them outright, through a battery of regulations.
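To make the efficiency logic of permit trading concrete, here is a minimal sketch with two hypothetical firms. The cost figures and cap are assumed, illustrative values, not data from any study cited here; the point is only that trading meets the same cap at lower total cost.

```python
# Hypothetical illustration of why permit trading lowers total abatement cost.
# Two firms each emit 100 units; the cap requires 100 units of total abatement.
# Firm A abates cheaply ($20/unit); Firm B abates expensively ($50/unit).

ABATEMENT_COST = {"A": 20, "B": 50}   # $ per unit abated (assumed values)
REQUIRED_ABATEMENT = 100              # units the cap removes in total

# Without trading: each firm must abate half the total itself.
uniform_cost = sum(cost * REQUIRED_ABATEMENT / 2 for cost in ABATEMENT_COST.values())

# With trading: abatement shifts to the low-cost firm, which sells its
# surplus permits to the high-cost firm at a price between $20 and $50.
cheapest = min(ABATEMENT_COST.values())
trading_cost = cheapest * REQUIRED_ABATEMENT

print(f"Total cost without trading: ${uniform_cost:,.0f}")  # $3,500
print(f"Total cost with trading:    ${trading_cost:,.0f}")  # $2,000
```

Either way the cap is met exactly; trading simply reallocates who does the abating, which is the sense in which economists call the outcome efficient.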
Wilson's piece examining both markets and government is also instructive. Wilson puts forward that markets are more efficient than governments, but that there is still great power in government workers doing what they were hired to do. He examines agency capture much as Schattschneider does, though he presents it as a complex process. Wilson would argue that both strong markets and strong government are needed, with each doing its respective part to ameliorate the situation: the efficiency of a CAT program would be welcomed, while the regulation of emissions could be handled by policymakers and government workers. Perhaps he would argue for a two-pronged approach, employing both CAT and CAC policies.
In reality, a CAT program, a tax, or regulations can each be effective. Measures like tax credits or CAC techniques that take action on the topic are usually the result of compromises between the parties over what their ideal legislation would be. As described above, bipartisanship is not dead, and there may be many avenues for action on the issue at the federal level, if not in the BBB bill then in some stand-alone piece that garners more widespread support in both Houses of Congress. The barriers to moving forward, as described above in the detailing of the current political environment, are mainly the defection of Democrats from supporting a bill and the capture of Republicans by special interests. Again, this can be ameliorated by broadening the theater from a privatized one to a more public one. In fact, we are seeing exactly that with the emergence of organizations like the Fridays for Future (FFF) movement, of which Greta Thunberg is the de facto leader, and, domestically, the Sunrise Movement, which focuses on electing those in favor of the Green New Deal (GND).
Harrison's piece on carbon pricing is also relevant. Harrison argues that CAT programs and taxes would be a better option than CAC policies. Market-based policy proponents such as centrists and even some Republicans would agree, though those who favor CAC policies do not. Looking at Finland, Denmark, Germany, and Canada as case studies, the author argues that a tax is optimal. Policy entrepreneurship is needed, and the voting public must be made aware of the issues as well. Policy entrepreneurship is the concept that there must be a safe space for policymakers to act in, put forward powerful ideas, and carve out a role for government action.
V: Where Do We Go from Here?
Kingdon offers some advice to activists as well as policymakers in his discussion of agendas. The first piece of advice is to know your place: be aware of where you stand in the policy process and how you can best operate. The second is to persist and be opportunistic: persevere through the hard times, and capitalize when faced with an advantageous situation. Third is to understand the other streams. Sometimes a window for action opens, and by that point the chance to begin collaborating with members of other streams has already passed. It is therefore always important to cultivate these relationships, not just because they are good to have in and of themselves, but because they can be enormously useful when the time is right. Whether with policy entrepreneurs, politicians, or journalists, attempt to make connections; oftentimes networks are already established and may greatly influence the course of events. Lastly, accept chance. How would all of this advice apply to something like the FFF movement? Know that you are a movement of young people, continue to maintain pressure on those who are responsible, capitalize on opportunities, keep up discourse with those of diverse backgrounds and positions, and accept the randomness inherent in what you are doing.
Olson's piece on organizations and free riding is also relevant. It argues that publicly minded individuals who join groups are not just making a cost-benefit calculation, but may choose to act for a number of reasons. Collective action is generally defined as any action by a group of people on a single issue, and it often runs into free riding, in which group members gain the benefits of action without doing any of the requisite work. This is important to understanding how issues are brought to the fore. Action on the climate crisis is, at its core, a collective action question that also involves free ridership. Whether it takes the form of countries discussing emissions targets at international venues like COP, or of local activists marching in the streets, it is all too easy for personal action (whether on behalf of a country or by an individual) to be discouraged by the belief that nothing can be achieved, that the issue will be taken up by others, or that action should wait until other parties, such as those responsible for the issue in the first place, act first. All of this is defeatist thinking, and it negates the very real change that collective action can bring about, regardless of the level at which it occurs. Most anyone promoting action on the climate crisis has likely gone through a period of depression or anxiety about the state of affairs and the future that was almost paralyzing: how could anything an individual does matter? It is instructive in such a moment to think of the practical application of a particular theory of Ethics, the discipline that, more than trying to dominate an individual's life with rules, seeks to promote human flourishing. According to Mill, a Consequentialist, we must act as if our actions would be generalized to all of society. We do not shoplift, we vote, we obey traffic laws, because were everyone to do otherwise we would all surely be worse off.
The same applies to action on climate, and, interestingly, those who are involved with FFF or the Sunrise Movement largely already accept that we all must do our part. Collective action and the free rider problem are not really issues for these young people; they are simply motivated by the desire to make the world a better place. Unfortunately, the same cannot always be said of election-minded legislators. Here, bold action is needed, and the tools described above are most salient for bringing it about.
Punctuated equilibrium, as described by Baumgartner and Jones, can also be applied to the climate crisis and to what to do about it. They put forward that change is not linear, but rather takes place in fits and starts, in a pattern they describe as punctuated equilibrium. Within the policy process, instability may arise from the way agendas are set by the relevant participants, and change can take place rapidly when the circumstances are right. What makes circumstances right depends heavily on who the players are; for the climate crisis, these would be both organizations like FFF and the Sunrise Movement and policy entrepreneurs such as AOC and the members of her cohort. There are times when, for whatever reason, the public is open to broad changes, most often during crises. With the coronavirus ravaging the world's population, there is much on the table for action that was not there before. Progressive legislators and grassroots advocates alike should seize this opportunity for what it is worth, and attempt to work not only on getting out of our current situation, but on laying the foundation for the change we so desperately need on climate.
VI: Conclusion
The Chinese phrase “may you live in interesting times” originally had more the ring of a curse than of a benediction, and these are certainly interesting times. Another Chinese maxim comes to mind when digesting the issue thoroughly, however: that crisis and opportunity share a character in the written language. There is also the larger question of the trade-offs this debate is actually over. Jason Bordoff, a former Obama administration official now at Columbia SIPA, whom I had for US Energy Policy, said that when considering a policy you have to look not just at the environmental impact, but at the economic and national security impacts as well. This is certainly the case for action on the climate crisis, whether through a price on carbon or otherwise.
We will have to make some hard decisions in the coming years: about what our consumptive lifestyle does to our natural environment; about what counts as dirty or clean fuel, such as whether nuclear power should be included as a baseload energy provider or scrapped over concerns about the severity and longevity of its waste; and about where and how our energy is sourced and provided. There are ongoing conversations at many levels about how advisable it is to rely on energy from sources with questionable motives toward the USA. That would pose the stick. The carrot would be the economic boost of making all of the changes necessary to transition our own economy, of being at the forefront of the industry and leading the world's production of such goods and services, and of claiming the integrity, when negotiating with nations more hesitant about transitioning to renewables, that comes from having committed to such action ourselves.
Again, Economists have a straightforward answer: putting a price on carbon. But this does not mesh well with the goals of either legislators in Congress or activists such as FFF or the Sunrise Movement. What, then, would it take to get both legislators and activists on board? I would argue that more involvement in the process on both ends would foster change. Perhaps we can take an incremental approach when needed, and a punctuated equilibrium approach for significant change when possible. Whether it takes the form of a price on carbon or of piecemeal regulations, some action is necessary to limit damages from the climate crisis that are already being felt.
This becomes even more apparent when we broaden our view to other countries around the world. As heat waves, droughts, flooding, sea level rise, and more become commonplace, action such as aid to migrants and infrastructure repair will be increasingly needed. There is no way the USA can simply wall up its borders and expect to ride out the coming changes, and I would argue that such an approach would not live up to our legacy as a country. Now is the time for bold action by both government and the private sector, and now is the era by which future generations will judge us. Whether we are legislators or grassroots activists, we can make change not simply by calling for it, but by advocating relentlessly for it through whatever means are available to us. To do otherwise would be tantamount to surrender, and that is not a reality I would wish on anyone, whether they hold US citizenship or otherwise.
VII: References
Lerman, Amy E. 2019. Good Enough for Government Work: The Public Reputation Crisis in America (And What We Can Do to Fix It). Chicago: University of Chicago Press.
Curry, James M., and Frances E. Lee. 2020. The Limits of Party: Congress and Lawmaking in a Polarized Era. Chicago: University of Chicago Press.
Martin Lodge, Edward C. Page, and Steven J. Balla. 2015. The Oxford Handbook of Classics in Public Policy and Administration. Commentaries on:
• James Q. Wilson, Bureaucracy: What Government Agencies Do and Why. By Bill Gormley.
• E.E. Schattschneider, The Semi-Sovereign People. By Donald Studler.
• John Kingdon. Agendas, Alternatives, and Public Policies. By Scott Greer.
• Frank Baumgartner and Bryan Jones, Agendas and Instability in American Politics. By Peter John.
• Mancur Olson, The Logic of Collective Action. By David Lowery.
Harrison, Kathryn. 2010. “The Comparative Politics of Carbon Taxation.” Annual Review of Law and Social Science 6: 507-529. https://www.annualreviews.org/doi/pdf/10.1146/annurev.lawsocsci.093008.131545. Viewed October 23, 2021.
The Space Economy: Past, Present, and Future
I: Introduction
The Space Economy, or the commercial use of space, encompasses essentially everything done for business purposes beyond the bounds of our terrestrial environment. Historically, the exploration of space and subsequent activity there, whether the Moon landing, the International Space Station, or probes sent to other planets and beyond our solar system, has been limited to large countries that could afford to shoulder the level of investment required over many years to complete such missions. The landscape of the productive use of space, however, is rapidly changing, and is set to change even more in the coming decades. As such, it seems both timely and informative to explore the current state of the space economy, as well as the possibilities for its growth.
II: The Current Landscape
Globally, the space sector is a technology-dense environment that employed a minimum of 900,000 people around the world as of 2013 (OECD 2014). This figure includes public administrations, such as space agencies and departments within civil and defense government agencies; the manufacturing industry, including builders of satellites, rockets, and ground systems; the direct supply of components; and the larger services sector, consisting primarily of satellite telecommunications. It does not, however, include the heavy investment in research and development, of which universities and research institutions are direct, and often large, recipients. Most innovation occurs there, though the private sector is increasingly active in this regard as well, as there seems to be the promise of capturing profit from space-based projects.
Establishing and developing capabilities in space is highly coveted for strategic purposes, with both companies and countries continuing to invest in the pursuit. While many perceive investment in space-associated industry as expensive, actual investment by the G20 countries measured as a percentage of GDP is quite low; the United States, with the largest program in the world, spends only 0.3% of GDP (OECD 2014). While OECD countries have the deepest pockets for space budgets ($50.8 billion in 2013) (OECD 2014), more and more countries are moving into the sector, such as Brazil, Russia, India, and China. The overall space economy had approximately $323 billion in revenues in 2015, with 58% going to consumer services such as satellite-related business, 33% to the manufacturing supply chain, and 8.4% to satellite operators (Hively 2016).
III: Present Industries
Space Transportation
The space transportation industry earns the majority of its revenue from putting satellites into orbit around the Earth. Private and government satellites are placed in low Earth orbit and geosynchronous Earth orbit. In the United States, the Federal Aviation Administration has licensed four commercial spaceports, while sites in China and Russia have also added capability. Commercial space flight has encouraged investment in reusable launch vehicles, which allow larger payloads to be placed into orbit at lower cost. Companies such as SpaceX and Blue Origin have made headlines with their pursuit of these technologies for purely commercial ventures.
Satellites and Equipment
Commercial satellite manufacturing is primarily for civilian or non-profit use; it does not include military or human space flight programs. Annual growth of satellite manufacturing in the United States has, since approximately 1996, been roughly 11 percent, a doubling roughly every seven years, while the global figure for the same period was 13 percent, a doubling every six to seven years (Wikipedia, 2017). The ground equipment manufacturing sector, which includes the production of ground station communication terminals, mobile telephones, and home television receivers, has grown at a similar rate (Wikipedia, 2017). The businesses and organizations that own and operate satellites provide access to data and telecommunication companies, for a price. Satellites are also used for imagery of both Earth and space. Navigation is another core function, with geospatial positioning being a primary purpose: longitude, latitude, and elevation can all be determined to a very high degree of specificity using satellites.
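As a quick check on the doubling times quoted above, the standard compound-growth formula can be applied. This is a small sketch; the growth rates are the approximate figures cited in the text, and the function name is my own.

```python
import math

def doubling_time(annual_growth_rate: float) -> float:
    """Years for a quantity to double at a constant compound growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# US satellite manufacturing: ~11% annual growth since roughly 1996
# Global satellite manufacturing: ~13% annual growth over the same period
print(f"11% growth doubles in {doubling_time(0.11):.1f} years")  # 6.6 years
print(f"13% growth doubles in {doubling_time(0.13):.1f} years")  # 5.7 years
```

The results are broadly consistent with the quoted figures of "roughly every seven years" and "every six to seven years."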
Space Tourism
Space tourism is space travel for the recreation, leisure, or business of those willing and able to pay for such services. There are a number of different types, including orbital, suborbital, and lunar space tourism. To date, these services have only been provided by the Russian Space Agency. Aerospace companies such as Blue Origin and Virgin Galactic, as well as SpaceX, are all working to fill this niche market, and as prices come down, it may become accessible to an increasing segment of the population.
IV: Future Possibilities
Asteroid Mining
Asteroid mining is exactly what it sounds like: the mining of asteroids, minor planets, and other near-Earth objects for monetary gain. Minerals can be taken from asteroids or comets and then either used for construction in space or brought back to Earth. These minerals include gold, iridium, silver, osmium, palladium, platinum, rhenium, rhodium, ruthenium, and tungsten, which could be brought back to Earth, and iron, cobalt, manganese, molybdenum, nickel, aluminum, and titanium, which could be used for construction (Wikipedia, 2017).
Given how expensive it currently is to put things into space using conventional, non-reusable rockets, owing both to the cost of fuel and to the loss of the physical infrastructure after a single use, mining in space could solve a great many issues associated with the desire to spread economic and human activity throughout the solar system. For instance, water captured in space could be decomposed into its constituent elements, oxygen and hydrogen, and used for fuel, so that fuel would not have to be lifted from Earth. There are a number of challenges before this can be realized, however: not just the cost of getting the necessary infrastructure into space, but the identification of good candidate asteroids suitable for mining and the technical challenges of transporting the infrastructure to the asteroid and mining once there. As a result, terrestrial mining remains, at present, the only way to obtain raw minerals.
Given that Earth's resources are becoming increasingly scarce, however, and that both public and private funding of space development continue to grow as they have, this could well change. There are concerns about the fact that any massive development of an element rare on Earth, such as platinum, that could be found on an asteroid, mined, and brought back to Earth would result in a glut on the market and, as a result, doom the venture to low profits. There are also considerable costs associated with asteroid mining, including research and development, exploration and prospecting, construction and infrastructure, operations and engineering, environmental costs, and time costs. There is also considerable concern about the legal status of anything procured in space; previous treaties have stated that no person or country can own anything there. As such, asteroid mining has a number of hurdles to leap before it can reach fruition.
Space Based Solar Power
In essence, space-based solar power is the idea of collecting solar power in space and sending it back to Earth. There are a number of advantages to this, including a higher collection rate, a longer collection period, and the ability to place a solar array where there is near-constant sunlight. Roughly 55 to 60 percent of solar energy is lost as it travels through Earth's atmosphere due to reflection and absorption (Wikipedia, 2017). Following collection, energy would be transmitted back to Earth's surface and received by collector sites, most likely in the form of microwave radiation. Launching materials into orbit, however, remains highly costly; a gigawatt-scale system, comparable to a large commercial power plant, would require approximately 80,000 tons of material to be placed in orbit (Wikipedia, 2017). Should the cost of transporting goods into space come down, or, more likely, should an in-space manufacturing system be brought online to construct things out of existing space resources, space-based solar power could well become a great solution to both climate change and resource depletion.
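Using the atmospheric loss figure cited above, a rough back-of-the-envelope estimate of the space-based collection advantage might look like the following sketch. The ground-site collection hours are an assumed illustrative value, not a figure from the source.

```python
# Back-of-the-envelope check on the space-based solar collection advantage,
# using the 55-60% atmospheric loss figure cited above.

atmospheric_loss = 0.575          # midpoint of the 55-60% loss range
space_hours_per_day = 24.0        # near-constant sunlight in a suitable orbit
ground_hours_per_day = 8.0        # assumed rough average for a good ground site

# Per-unit-area gain from skipping the atmosphere entirely:
intensity_gain = 1 / (1 - atmospheric_loss)                 # ~2.35x

# Additional gain from the longer daily collection period:
duration_gain = space_hours_per_day / ground_hours_per_day  # 3x

print(f"Combined advantage per unit of collector area: "
      f"~{intensity_gain * duration_gain:.1f}x")            # ~7.1x
```

A roughly sevenfold advantage per unit of collector area helps explain why the concept remains attractive despite the launch-cost hurdle.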
There are additional issues, however, such as the wireless transmission of power. While, as detailed above, the collecting satellite would convert solar energy into electrical energy, it would then have to beam that energy to a receiver on Earth in either microwave or laser form. While this would likely not be harmful to plant or animal life, a great deal of land would be needed for the receiving stations. In addition, the solar array itself would be vulnerable to both solar radiation and micrometeoroids. Despite these hurdles, space-based solar power is being pursued by Japan, China, and Russia (Wikipedia, 2017).
Terraforming
Terraforming is the process by which a space body, such as a planet, is made similar to Earth, ideally to the point of becoming habitable, by changing its atmosphere, temperature, ecology, and surface features. Given the example of the rise of greenhouse gases on Earth and the resultant dramatic shift in global climate, it has now been proven that humans can change their environment to the point of affecting an entire planet. Proposals for dealing with climate change are similar in nature to those that could be employed in the future to modify another planet and make it habitable. Whether this means seeding the atmosphere with particles to alter how much light it absorbs or reflects, placing a large mirror in space to deflect some of the incoming solar radiation, or dramatically altering the atmosphere's composition through carbon sequestration and storage, such methods have moved out of the realm of pure science fiction and have become either science fact or very real possibilities for the future.
Mars is usually seen as the ideal candidate for terraforming. There have been numerous studies done on changing the temperature and atmosphere of the planet. However, as with other projects detailed in this report, the economic power for such dramatic and large scale work is yet to materialize, and remains a significant hurdle. In addition, there are a host of questions around not just the technical logistics and methodology of doing this, but also the ethics, economics, and politics associated with deliberately modifying something completely beyond human reach previously.
Should the cost of reaching space decrease, and with it the cost of constructing infrastructure in space, Mars is a good candidate because it has an approximately 24-hour day, surface conditions closer to Earth's than those of any other planet, and a wealth of water currently locked in polar ice. A thicker atmosphere would be required, but this could be accomplished through the emission of greenhouse gases, much as has happened terrestrially on Earth.
V: Conclusion
The economy of space is a bright place for new and adventurous development. It would seem that, with a bit of investment and technical progress, developing the space economy from where it stands now to its potential future could well solve some of humankind's biggest problems, including, but not limited to, resource depletion, population growth, and energy consumption. The question, then, becomes how we go about facilitating this. Historically, space development has been the purview of large nation states competing or collaborating with one another. That phase of development seems to have stagnated recently, however, and private enterprise has taken up the slack. No less a figure than Stephen Hawking stated that, for humanity to survive, it will have to spread throughout the solar system, and perhaps beyond. Hopefully industry, in partnership with governments, can facilitate this in the coming years, before it becomes too late.
VI: References
OECD (2014), The Space Economy at a Glance 2014, OECD Publishing
Hively, Carol. 2016. Space Foundation Report Reveals Global Space Economy at $323 Billion in 2015. Space Foundation. Retrieved November 3rd, 2017, from https://www.spacefoundation.org/media/press-releases/space-foundation-report-reveals-global-space-economy-323-billion-2015
Commercial use of space. (n.d.). Retrieved November 5th, 2017, from Wikipedia: https://en.wikipedia.org/wiki/Commercial_use_of_space
Asteroid mining. (n.d.). Retrieved November 6th, 2017, from Wikipedia:
https://en.wikipedia.org/wiki/Asteroid_mining
Space-based solar power. (n.d.). Retrieved November 7th, 2017, from Wikipedia: