His Motive was Unclear?

A man stabs Salman Rushdie and The New York Times says, “the attacker’s motive was unclear.”

Unclear? Seriously? Are we that compromised by relativistic motives and legalisms? Do they not recall the fatwa issued on Mr. Rushdie’s life? I am surprised they didn’t add “alleged” to that sentence. But then again, even the NYT couldn’t ignore the video of the brutal attempted murder.

Bari Weiss would know. She left the NYT in disgust over the blowback it got a few years back for publishing an op-ed by Republican Senator Tom Cotton. It marked the beginning of the “words are violence” trend in America (and elsewhere), and she knew it.

I am old enough to remember “sticks and stones may break my bones, but words can never hurt me.”

Today?

You know the answer. All of us keep our mouths shut. Free speech is dead.

This piece, authored by Ms. Weiss at her blog (her work, not mine), is excellent. I put it here for my own reference, and perhaps yours.


We Ignored Salman Rushdie’s Warning

by Bari Weiss

We live in a culture in which many of the most celebrated people occupying the highest perches believe that words are violence. In this, they have much in common with Iranian Ayatollah Ruhollah Khomeini, who issued the first fatwa against Salman Rushdie in 1989, and with Hadi Matar, the 24-year-old who, yesterday, appears to have fulfilled his command when he stabbed the author in the neck on a stage in Western New York.

The first group believes they are motivated by inclusion and tolerance—that it’s possible to create something even better than liberalism, a utopian society where no one is ever offended. The second we all recognize as religious fanatics. But it is the indulgence and cowardice of the words are violence crowd that has empowered the second and allowed us to reach this moment, when a fanatic rushes the stage of a literary conference with a knife and plunges it into one of the bravest writers alive.

I have spoken on the same stage where Rushdie was set to speak. You can’t imagine a more bucolic place than the Chautauqua Institution—old Victorian homes with screened-in porches and no locks, a lake, American flags, and ice cream everywhere. It was founded in 1874 by the Methodists as a summer colony for Sunday school teachers. Now, it attracts the kind of parents and grandparents who love Terry Gross and never miss a Wordle. It is just about the last place in America where you would imagine an act of such barbarism.

And yet as shocking as this attack was, it was also 33 years in the making: The Satanic Verses is a book with a very bloody trail.

In July 1991, the Japanese translator of the condemned book, Hitoshi Igarashi, 44 years old, was stabbed to death outside his office at the University of Tsukuba, northeast of Tokyo. The same month, the book’s Italian translator, Ettore Capriolo, was also stabbed—this time, in his own home in Milan. Two years later, in July 1993, the book’s Turkish translator, the prolific author Aziz Nesin, was the target of an arson attack on a hotel in the city of Sivas. He escaped, but 37 others were killed. A few months later, Islamists came for William Nygaard, the book’s Norwegian publisher. Nygaard was shot three times outside his home in Oslo and was critically injured.

And those are just the stories we remember.

Think back to 1989, when 12 people were killed at an anti-Rushdie riot in Mumbai, the author’s birthplace, where the book was also banned. Five Pakistanis died in Islamabad under similar circumstances.

As for Rushdie himself, he took refuge in England, thanks to round-the-clock protection from the British government. For more than a decade, he lived under the name “Joseph Anton” (the title of his memoir), moving from safe house to safe house. In the first six months, he had to move 56 times. (England was not immune from the hysteria: Rushdie’s book was burned by Muslims in the city of Bradford—and, at the suggestion of police, two WHSmith shops in Bradford stopped carrying the book.)

Salman Rushdie has lived half of his life with a bounty on his head—some $3.3 million promised by the Islamic Republic of Iran to anyone who murdered him. And yet, it was in 2015, years after he had come out of hiding, that he told the French newspaper L’Express: “We are living in the darkest time I have ever known.”

You would think that Rushdie would have said such a thing at the height of the chaos, when he was in hiding, when those associated with the book were being targeted for murder. By 2015, you might run into Rushdie at Manhattan cocktail parties, or at the theater with a gorgeous woman on his arm. (He had already been married to Padma, for God’s sake.)

So why did he say it was the “darkest time” he had ever known? Because what he saw was the weakening of the very Western values—the ferocious commitment to free thought and free speech—that had saved his life.

“If the attacks against Satanic Verses had taken place today,” he said in L’Express, “these people would not have defended me, and would have used the same arguments against me, accusing me of insulting an ethnic and cultural minority.”

He didn’t have to speculate. He said that because that is exactly what they did.

See, when Salman Rushdie was under siege, the likes of Tom Wolfe, Christopher Hitchens, Norman Mailer, Joseph Brodsky, and Seamus Heaney stood up to defend him. The leader of the pack was Susan Sontag, who was then president of PEN America, and arranged for the book to be read in public.  Hitchens recalled that Sontag shamed members into showing up on Rushdie’s behalf and showing a little “civic fortitude.”

That courage wasn’t an abstraction, especially to some booksellers.

Consider the heroism of Andy Ross, the owner of the now-shuttered Cody’s Books in Berkeley, which carried the book and was bombed shortly after the fatwa was issued.

Here’s Ross:

“It was pretty easy for Norman Mailer and Susan Sontag to talk about risking their lives in support of an idea. After all they lived fairly high up in New York apartment buildings. It was quite another thing to be a retailer featuring the book at street level. I had to make some really hard decisions about balancing our commitment to freedom of speech against the real threat to the lives of our employees.”

After the bombing, he gathered all of his staff for a meeting:

“I stood and told the staff that we had a hard decision to make. We needed to decide whether to keep carrying Satanic Verses and risk our lives for what we believed in. Or to take a more cautious approach and compromise our values. So, we took a vote. The staff voted unanimously to keep carrying the book. Tears still come to my eyes when I think of this. It was the defining moment in my 35 years of bookselling. It was at that moment when I realized that bookselling was a dangerous and subversive vocation, because ideas are powerful weapons. I didn’t particularly feel comfortable about being a hero and putting other people’s lives in danger. I didn’t know at that moment whether this was an act of courage or foolhardiness. But from the clarity of hindsight, I would have to say it was the proudest day of my life.”

That was the late 1980s.

By 2015, America was a very different place.

When Rushdie made those comments to L’Express, it was in the fallout of PEN, the country’s premier literary group, deciding to honor the satirical French magazine Charlie Hebdo with an award. Months before, a dozen staff members of Charlie Hebdo had been murdered by two terrorists in their offices. It was impossible to think of a publication that deserved to be recognized and elevated more.

And yet the response from more than 200 of the world’s most celebrated authors was to protest the award. Famous writers—Joyce Carol Oates, Lorrie Moore, Michael Cunningham, Rachel Kushner, Michael Ondaatje, Teju Cole, Peter Carey, Junot Díaz—suggested that maybe the people who had just seen their friends murdered for publishing a satirical magazine were a little bit at fault, too. That if something offends a minority group, perhaps it shouldn’t be printed. And those cartoonists were certainly offensive, even the dead ones. These writers accused PEN of “valorizing selectively offensive material: material that intensifies the anti-Islamic, anti-Maghreb, anti-Arab sentiments already prevalent in the Western world.”

Here’s how Rushdie responded: “This issue has nothing to do with an oppressed and disadvantaged minority. It has everything to do with the battle against fanatical Islam, which is highly organized, well-funded, and which seeks to terrify us all, Muslims as well as non-Muslims, into a cowed silence.”

He was right. They were wrong. And their civic cowardice, as Sontag may have described it, is in no small part responsible for the climate we find ourselves in today. (As I wrote this, I got a news alert from The New York Times saying the attacker’s “motive was unclear.”)

Motive was unclear?

The words are violence crowd is right about the power of language. Words can be vile, disgusting, offensive, and dehumanizing. They can make the speaker worthy of scorn, protest, and blistering criticism. But the difference between civilization and barbarism is that civilization responds to words with words. Not knives or guns or fire. That is the bright line. There can be no excuse for blurring that line—whether out of religious fanaticism or ideological orthodoxy of any other kind.

Today our culture is dominated by those who blur that line—those who lend credence to the idea that words, art, song lyrics, children’s books, and op-eds are the same as violence. We are so used to this worldview and what it requires—apologize, grovel, erase, grovel some more—that we no longer notice. It is why we can count, on one hand—Dave Chappelle; J.K. Rowling—those who show spine.

Of course, it is 2022 that the Islamists finally get a knife into Salman Rushdie.

Of course, it is now, when words are literally violence and J.K. Rowling literally puts trans lives in danger and even talking about anything that might offend anyone means you are literally arguing I shouldn’t exist.

Of course, it’s now, when we’re surrounded by silliness and weakness and self-obsession, that a man gets on stage and plunges a knife into Rushdie, plunges it into his liver, plunges it into his arm, plunges it into his eye.

That is violence.


Postscript: Rushdie will likely lose an eye and could be permanently disabled in a variety of other ways.

 


The Business of Business is Business

First, it was that business had to “save the planet.” Then it was that business had to save the whales, the dolphins, and other assorted species. Today it is ESG – environmental, social, and governance – code for social justice and the so-called “triple bottom line.” It is all bullshit. In the end, a business must make a profit to stay in business. One profit, one bottom line. And by staying in business, it employs people who can take their earnings and do with them whatever they please. But for a business to abandon its mission of making a profit by producing goods and services that consumers and other businesses want or need is to commit a kind of misfeasance.

This article, written 50+ years ago by Dr. Milton Friedman, is seminal. I am capturing it here for my own future reference and as a memory of when rational business minds tended the till.

The Business of Business is Business

The New York Times Magazine

September 13, 1970, originally titled:

The Social Responsibility of Business is to Increase its Profits

by Milton Friedman

When I hear businessmen speak eloquently about the “social responsibilities of business in a free-enterprise system,” I am reminded of the wonderful line about the Frenchman who discovered at the age of 70 that he had been speaking prose all his life. The businessmen believe that they are defending free enterprise when they declaim that business is not concerned “merely” with profit but also with promoting desirable “social” ends; that business has a “social conscience” and takes seriously its responsibilities for providing employment, eliminating discrimination, avoiding pollution and whatever else may be the catchwords of the contemporary crop of reformers. In fact, they are — or would be if they or anyone else took them seriously — preaching pure and unadulterated socialism. Businessmen who talk this way are unwitting puppets of the intellectual forces that have been undermining the basis of a free society these past decades.

The discussions of the “social responsibilities of business” are notable for their analytical looseness and lack of rigor. What does it mean to say that “business” has responsibilities? Only people have responsibilities.

A corporation is an artificial person and, in this sense, may have artificial responsibilities, but “business” as a whole cannot be said to have responsibilities, even in this vague sense. The first step toward clarity in examining the doctrine of the social responsibility of business is to ask precisely what it implies for whom.

Presumably, the individuals who are to be responsible are businessmen, which means individual proprietors or corporate executives. Most of the discussion of social responsibility is directed at corporations, so in what follows I shall mostly neglect the individual proprietor and speak of corporate executives.

In a free-enterprise, private-property system, a corporate executive is an employee of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make as much money as possible while conforming to the basic rules of the society, both those embodied in law and those embodied in ethical custom. Of course, in some cases his employers may have a different objective. A group of persons might establish a corporation for an eleemosynary purpose — for example, a hospital or a school. The manager of such a corporation will not have money profit as his objective but the rendering of certain services.

In either case, the key point is that, in his capacity as a corporate executive, the manager is the agent of the individuals who own the corporation or establish the eleemosynary institution, and his primary responsibility is to them.

Needless to say, this does not mean that it is easy to judge how well he is performing his task. But at least the criterion of performance is straightforward and the persons among whom a voluntary contractual arrangement exists are clearly defined.

Of course, the corporate executive is also a person in his own right. As a person, he may have many other responsibilities that he recognizes or assumes voluntarily — to his family, his conscience, his feelings of charity, his church, his clubs, his city, his country. He may feel impelled by these responsibilities to devote part of his income to causes he regards as worthy, to refuse to work for particular corporations, even to leave his job, for example, to join his country’s armed forces. If we wish, we may refer to some of these responsibilities as “social responsibilities.” But in these respects, he is acting as a principal, not an agent; he is spending his own money or time or energy, not the money of his employers or the time or energy he has contracted to devote to their purposes. If these are “social responsibilities,” they are the social responsibilities of individuals, not of business.

What does it mean to say that the corporate executive has a “social responsibility” in his capacity as businessman? If this statement is not pure rhetoric, it must mean that he is to act in some way that is not in the interest of his employers. For example, that he is to refrain from increasing the price of the product in order to contribute to the social objective of preventing inflation, even though a price increase would be in the best interests of the corporation. Or that he is to make expenditures on reducing pollution beyond the amount that is in the best interests of the corporation or that is required by law in order to contribute to the social objective of improving the environment. Or that, at the expense of corporate profits, he is to hire “hard-core” unemployed instead of better qualified available workmen to contribute to the social objective of reducing poverty.

In each of these cases, the corporate executive would be spending someone else’s money for a general social interest. Insofar as his actions in accord with his “social responsibility” reduce returns to stockholders, he is spending their money. Insofar as his actions raise the price to customers, he is spending the customers’ money. Insofar as his actions lower the wages of some employees, he is spending their money.

The stockholders or the customers or the employees could separately spend their own money on the particular action if they wished to do so. The executive is exercising a distinct “social responsibility,” rather than serving as an agent of the stockholders or the customers or the employees, only if he spends the money in a different way than they would have spent it.

But if he does this, he is in effect imposing taxes, on the one hand, and deciding how the tax proceeds shall be spent, on the other.

This process raises political questions on two levels: principle and consequences. On the level of political principle, the imposition of taxes and the expenditure of tax proceeds are governmental functions. We have established elaborate constitutional, parliamentary and judicial provisions to control these functions, to assure that taxes are imposed so far as possible in accordance with the preferences and desires of the public — after all, “taxation without representation” was one of the battle cries of the American Revolution. We have a system of checks and balances to separate the legislative function of imposing taxes and enacting expenditures from the executive function of collecting taxes and administering expenditure programs and from the judicial function of mediating disputes and interpreting the law.

Here the businessman — self-selected or appointed directly or indirectly by stockholders — is to be simultaneously legislator, executive and jurist. He is to decide whom to tax by how much and for what purpose, and he is to spend the proceeds — all this guided only by general exhortations from on high to restrain inflation, improve the environment, fight poverty and so on and on.

The whole justification for permitting the corporate executive to be selected by the stockholders is that the executive is an agent serving the interests of his principal. This justification disappears when the corporate executive imposes taxes and spends the proceeds for “social” purposes. He becomes in effect a public employee, a civil servant, even though he remains in name an employee of a private enterprise. On grounds of political principle, it is intolerable that such civil servants — insofar as their actions in the name of social responsibility are real and not just window-dressing — should be selected as they are now. If they are to be civil servants, then they must be selected through a political process. If they are to impose taxes and make expenditures to foster “social” objectives, then political machinery must be set up to make the assessment of taxes and to determine through a political process the objectives to be served.

This is the basic reason why the doctrine of “social responsibility” involves the acceptance of the socialist view that political mechanisms, not market mechanisms, are the appropriate way to determine the allocation of scarce resources to alternative uses.

On the grounds of consequences, can the corporate executive in fact discharge his alleged “social responsibilities”? On the one hand, suppose he could get away with spending the stockholders’ or customers’ or employees’ money. How is he to know how to spend it? He is told that he must contribute to fighting inflation. How is he to know what action of his will contribute to that end? He is presumably an expert in running his company — in producing a product or selling it or financing it. But nothing about his selection makes him an expert on inflation. Will his holding down the price of his product reduce inflationary pressure? Or, by leaving more spending power in the hands of his customers, simply divert it elsewhere? Or, by forcing him to produce less because of the lower price, will it simply contribute to shortages? Even if he could answer these questions, how much cost is he justified in imposing on his stockholders, customers, and employees for this social purpose? What is his appropriate share and what is the appropriate share of others?

And, whether he wants to or not, can he get away with spending his stockholders’, customers’, or employees’ money? Will not the stockholders fire him? (Either the present ones or those who take over when his actions in the name of social responsibility have reduced the corporation’s profits and the price of its stock.) His customers and his employees can desert him for other producers and employers less scrupulous in exercising their social responsibilities.

This facet of “social responsibility” doctrine is brought into sharp relief when the doctrine is used to justify wage restraint by trade unions. The conflict of interest is naked and clear when union officials are asked to subordinate the interest of their members to some more general social purpose. If the union officials try to enforce wage restraint, the consequence is likely to be wildcat strikes, rank-and-file revolts, and the emergence of strong competitors for their jobs. We thus have the ironic phenomenon that union leaders — at least in the U.S. — have objected to government interference with the market far more consistently and courageously than have business leaders.

The difficulty of exercising “social responsibility” illustrates, of course, the great virtue of private competitive enterprise — it forces people to be responsible for their own actions and makes it difficult for them to “exploit” other people for either selfish or unselfish purposes. They can do good — but only at their own expense.

Many a reader who has followed the argument this far may be tempted to remonstrate that it is all well and good to speak of government’s having the responsibility to impose taxes and determine expenditures for such “social” purposes as controlling pollution or training the hard-core unemployed, but that the problems are too urgent to wait on the slow course of political processes, that the exercise of social responsibility by businessmen is a quicker and surer way to solve pressing current problems.

Aside from the question of fact — I share Adam Smith’s skepticism about the benefits that can be expected from “those who affected to trade for the public good” — this argument must be rejected on the grounds of principle. What it amounts to is an assertion that those who favor the taxes and expenditures in question have failed to persuade a majority of their fellow citizens to be of like mind and that they are seeking to attain by undemocratic procedures what they cannot attain by democratic procedures. In a free society, it is hard for “good” people to do “good,” but that is a small price to pay for making it hard for “evil” people to do “evil,” especially since one man’s good is another’s evil.

I have, for simplicity, concentrated on the special case of the corporate executive, except only for the brief digression on trade unions. But precisely the same argument applies to the newer phenomenon of calling upon stockholders to require corporations to exercise social responsibility (the recent G.M. crusade, for example). In most of these cases, what is in effect involved is some stockholders’ trying to get other stockholders (or customers or employees) to contribute against their will to “social” causes favored by the activists. Insofar as they succeed, they are again imposing taxes and spending the proceeds.

The situation of the individual proprietor is somewhat different. If he acts to reduce the returns of his enterprise in order to exercise his “social responsibility,” he is spending his own money, not someone else’s. If he wishes to spend his money on such purposes, that is his right and I cannot see that there is any objection to his doing so. In the process, he, too, may impose costs on employees and customers. However, because he is far less likely than a large corporation or union to have monopolistic power, any such side effects will tend to be minor.

Of course, in practice the doctrine of social responsibility is frequently a cloak for actions that are justified on other grounds rather than a reason for those actions.

To illustrate, it may well be in the long-run interest of a corporation that is a major employer in a small community to devote resources to providing amenities to that community or to improving its government. That may make it easier to attract desirable employees, it may reduce the wage bill or lessen losses from pilferage and sabotage or have other worthwhile effects. Or it may be that, given the laws about the deductibility of corporate charitable contributions, the stockholders can contribute more to charities they favor by having the corporation make the gift than by doing it themselves, since they can in that way contribute an amount that would otherwise have been paid as corporate taxes.

In each of these — and many similar — cases, there is a strong temptation to rationalize these actions as an exercise of “social responsibility.” In the present climate of opinion, with its widespread aversion to “capitalism,” “profits,” the “soulless corporation” and so on, this is one way for a corporation to generate good will as a byproduct of expenditures that are entirely justified in its own self-interest.

It would be inconsistent of me to call on corporate executives to refrain from this hypocritical window-dressing because it harms the foundations of a free society. That would be to call on them to exercise a “social responsibility”! If our institutions, and the attitudes of the public make it in their self-interest to cloak their actions in this way, I cannot summon much indignation to denounce them. At the same time, I can express admiration for those individual proprietors or owners of closely held corporations or stockholders of more broadly held corporations who disdain such tactics as approaching fraud.

Whether blameworthy or not, the use of the cloak of social responsibility, and the nonsense spoken in its name by influential and prestigious businessmen, does clearly harm the foundations of a free society. I have been impressed time and again by the schizophrenic character of many businessmen. They are capable of being extremely farsighted and clearheaded in matters that are internal to their businesses. They are incredibly shortsighted and muddle-headed in matters that are outside their businesses but affect the possible survival of business in general. This shortsightedness is strikingly exemplified in the calls from many businessmen for wage and price guidelines or controls or incomes policies. There is nothing that could do more in a brief period to destroy a market system and replace it by a centrally controlled system than effective governmental control of prices and wages.

The shortsightedness is also exemplified in speeches by businessmen on social responsibility. This may gain them kudos in the short run. But it helps to strengthen the already too prevalent view that the pursuit of profits is wicked and immoral and must be curbed and controlled by external forces. Once this view is adopted, the external forces that curb the market will not be the social consciences, however highly developed, of the pontificating executives; it will be the iron fist of government bureaucrats. Here, as with price and wage controls, businessmen seem to me to reveal a suicidal impulse.

The political principle that underlies the market mechanism is unanimity. In an ideal free market resting on private property, no individual can coerce any other, all cooperation is voluntary, all parties to such cooperation benefit or they need not participate. There are no “social” values, no “social” responsibilities in any sense other than the shared values and responsibilities of individuals. Society is a collection of individuals and of the various groups they voluntarily form.

The political principle that underlies the political mechanism is conformity. The individual must serve a more general social interest — whether that be determined by a church or a dictator or a majority. The individual may have a vote and say in what is to be done, but if he is overruled, he must conform. It is appropriate for some to require others to contribute to a general social purpose whether they wish to or not.

Unfortunately, unanimity is not always feasible. There are some respects in which conformity appears unavoidable, so I do not see how one can avoid the use of the political mechanism altogether.

But the doctrine of “social responsibility” taken seriously would extend the scope of the political mechanism to every human activity. It does not differ in philosophy from the most explicitly collectivist doctrine. It differs only by professing to believe that collectivist ends can be attained without collectivist means. That is why, in my book “Capitalism and Freedom,” I have called it a “fundamentally subversive doctrine” in a free society, and have said that in such a society, “there is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”

Milton Friedman (1912-2006) received the Nobel in economic science in 1976. He was a leader of the Chicago school of economics, associated with the University of Chicago, and a lifelong advocate for free markets. His economic ideas influenced a generation of conservative politicians, notably Ronald Reagan and Margaret Thatcher, making him one of the most consequential economists of the 20th century.

 


Travesty of Travesty – Apparently, Some Doctors Out There are Pro-Life

Recently, University of Michigan medical students walked out of their white coat ceremony after their “demands” weren’t met. Problem is: They missed a transcendent lecture about staying human in an age of machines.

The following piece, appearing at Bari Weiss’ Common Sense blog on Substack, was written by a professor of medicine at UCSF. It is his work, not mine, and is worth pondering. Imagine – there are doctors who will speak publicly about being pro-life.

Amazing.


Dr. Kristin Collier is an assistant professor of internal medicine at the University of Michigan, where she has served on faculty for 17 years. She also is the director of the medical school’s Program on Health, Spirituality and Religion and has been published in journals including the Journal of the American Medical Association and the Annals of Internal Medicine.

Many describe her as a consummate physician and superb teacher—deeply liked and respected by her peers. That’s why, out of some 3,000 faculty at Michigan, Dr. Collier was chosen by students and her peers to be this year’s White Coat Ceremony speaker. The White Coat Ceremony is one bookend of medical school (graduation is the other), where students put on their white coats for the first time, take the Hippocratic oath, and begin the long path to becoming a doctor.

The trouble is that Professor Collier has views on abortion that are out of step with many Michigan medical students—likely the majority of them. She has stated that she defines herself as pro-life, though she does not state the extent of her position (i.e., whether she allows exemptions for rape or incest). In that same interview, in which she talks about her personal transformation from a pro-choice atheist to a Christian, she laments the intolerance for religious people among medical colleagues.

“When we consider diversity in the medical profession, religious diversity is not—should not—be exempt from this goal.”

After Michigan announced her speech, the university made it clear that Dr. Collier would not be addressing abortion in her talk. “The White Coat Ceremony is not a platform for discussion of controversial issues, and Dr. Collier never planned to address a divisive topic as part of her remarks,” the Dean of the medical school, Marshall Runge, wrote to students and staff earlier this month.

That didn’t stop hundreds of students and staff from signing a petition demanding Dr. Collier be replaced with another speaker. “While we support the rights of freedom of speech and religion, an anti-choice speaker as a representative of the University of Michigan undermines the University’s position on abortion and supports the non-universal, theology-rooted platform to restrict abortion access, an essential part of medical care,” they wrote. “We demand that UM stands in solidarity with us and selects a speaker whose values align with institutional policies, students, and the broader medical community.”

The school stood firm.

But on Sunday, just as Dr. Collier rose to give her remarks, dozens of students and family members began walking out of their white coat ceremony.

By now we are all accustomed to such displays from American students.

The particular shame here is that those who walked out missed a transcendent lecture about the meaning of practicing medicine in a culture that increasingly treats human beings like machines.

“The risk of this education and the one that I fell into is that you can come out of medical school with a bio-reductionist, mechanistic view of people and ultimately of yourself. You can easily end up seeing your patients as just a bag of blood and bones or human life as just molecules in motion,” Dr. Collier said.

“You are not technicians taking care of complex machines, but human beings taking care of other human beings,” she said. “Medicine is not merely a technical endeavor but above all a human one.”

Dr. Collier has handled the whole thing with grace. She tweeted yesterday: “I’ve heard that some of the students who walked out have been harassed and targeted—please stop. Everyone has a right to stand up for what they believe in.”

The logic of students walking out on a lecture because they disagree with the speaker on an unrelated topic apparently has no limit.

In medicine, abortion is an important life-or-death issue. So too are universal health care, immigration, and school closures. All these topics have the highest stakes. And all are controversial. If students walk out on speakers over unrelated issues, where does it end?

Would they learn about the nephron from a nephrologist who favors strict immigration limits? Could they learn how to perform CPR from an instructor who lobbied to keep schools open during Covid-19?

Most concerning, what does it mean for American patients if their future doctors cannot sit through a speech by a beloved professor who has a different view on abortion? Could you trust a physician knowing that they may judge you for holding views they deem beyond the pale?

As a professor at UCSF medical school, I worry deeply that we are not preparing our future doctors to practice medicine on real people in the real world. Medicine has to meet patients where they are; often that means caring for people and working with people with whom we disagree.

We can’t walk out on that.


Right on Schedule: A Killer Virus Emerges Just Before an Important Election

I expect either COVID or monkeypox, or both, will be cited as yet another reason to stay away from the election this coming Fall.

Funny how that happens.

The following was written by Donald G. McNeil Jr. and appeared in Bari Weiss’ Common Sense blog. I am reposting it here for my own reference, and yours (should it be of value). It is their work product, not mine.

Mr. McNeil writes …


As if one plague weren’t enough, we now have another: monkeypox. And we’re not handling it well.

At the moment, unless you are a gay man with multiple anonymous or casual sexual partners, you are probably not at much risk. In this new non-African outbreak that began in May, most of the cases have been inside that network—at least thus far. In seven central and west African countries, there have been tens of thousands of suspected cases and hundreds of deaths attributed to the virus over the last two decades. This new outbreak has risen from a dozen cases in Portugal, Spain, and Britain in mid-May to more than 12,000 in over 50 countries, from Iceland to Australia. Over 1,000 of them are in the U.S., many of them in cities like New York, Los Angeles, San Francisco, and Chicago.

But there’s no guarantee it will stay inside that network. Monkeypox is transmitted by sex, by skin-to-skin contact, by towels and sheets, and possibly even by kissing or coughing by patients with sores inside their mouths. A few nurses have caught it from patients, as have family members. There’s much we don’t know about it yet.

Some sexually transmitted diseases—like AIDS—have stayed mostly inside gay male sexual networks. (In the West, that is. In Africa, more than 50 percent of HIV cases are in women and girls.) Some, like syphilis, circulate generally but are more common among gay men. Others like herpes and HPV are widespread among heterosexuals. There’s no way to know yet where monkeypox will go.

We have a vaccine—two vaccines, actually—and also a treatment, so we are in a far better position than we were at the beginning of HIV, Covid or virtually any other previous epidemic. But right now, it’s still spreading fast, with the global number of cases rising by about 1,000 a day.

If there are two effective vaccines for this disease and one solid treatment, why are we losing the fight?

I blame several factors: shortages of vaccines and tests, the initial hesitancy by squeamish health agencies to openly discuss who was most at risk, and the refusal by the organizers of lucrative gay sex parties to cancel them over the past few months—even as evidence mounted that they are super-spreader events.

The monkeypox virus—misnamed because it normally circulates in African rodents, not simians—is related to smallpox, but it’s not nearly as lethal. The successful 25-year effort to eradicate smallpox held it in check: The smallpox vaccine also prevents monkeypox.

But vaccination ended in 1980 because the old vaccines had some rare but very dangerous side effects. So, in the 1990s, monkeypox cases began reappearing in rural Africa, mostly in children born after 1980.

In central Africa, two strains circulate: one with a fatality rate of about 10 percent, another of about one percent. It now looks as if, sometime in 2017 in southeast Nigeria, an even less lethal but more transmissible variant of the second strain emerged. It has circulated in Nigerian cities since then and a few cases were found among Nigerians traveling to Europe and the U.S. One new study suggests that variant was circulating in Europe at least as early as this past March and picking up new mutations.

This May, it appeared among gay men, especially those who had visited four venues: the Darklands leather fetish festival in Belgium; the annual Pride Festival in Spain’s Canary Islands; a gay rave at Berlin’s Berghain techno club; and the Paraiso sauna in Madrid, which, since it had darkened cubicles for orgies, a bondage cell and a bar, was really more of a huge sex club than a spa.

Even though one “sex-positive” party after another has turned into a super-spreader event, there has been no willingness by the organizers of such parties to cancel or even reschedule them until more men can be vaccinated. June was Pride Month in New York, and cases are surging in the city now. Two recent parties in San Francisco, Electrolux Pride and the Afterglow Blacklight Discotheque, had cases linked to them. And yet more events, like Provincetown Bear Week, are going forward anyway.

Outside of Africa, the disease has not killed anyone yet. But for some victims, the pain—especially from pustules inside the mouth or rectum—is so severe that it requires hospitalization. Others feel miserable for weeks; a few get permanent scars. Also, those with monkeypox are supposed to stay in isolation until all the pox crust over and the scabs fall off, which can take up to a month.

The negligible fatality rate won’t necessarily persist if the virus escapes its current network: mostly young, mostly healthy adult men. In Africa, children and pregnant women are the most likely to die from monkeypox. (Older people, by contrast, are not. Many Africans born before 1980 and most Americans born before 1972 were vaccinated as children and doctors believe they still have some residual protection.)

We have two vaccines:

The older one, ACAM2000, approved in 2007, was made after fears were raised that Saddam Hussein had stocks of weaponized smallpox. The National Strategic Stockpile contains 100 million doses. But it has a very small risk of seriously hurting or even killing someone with undiagnosed H.I.V. or another immune-suppressive condition, or someone with widespread skin problems, such as eczema.  It also has about a one-in-500 risk of heart inflammation, which is scary but can usually be treated.

The newer one, Jynneos, made by Bavarian Nordic in Denmark and approved in 2019, is much safer. It has been tested on people with H.I.V. and with skin problems.  As of three years ago, Bavarian Nordic had supplied our Strategic National Stockpile with 28 million doses, but all those have expired. Nordic had a contract to make a longer-lasting freeze-dried version, but when this epidemic began, the stockpile held only 64,000 usable doses; another 800,000 are in bulk frozen form in Denmark. The company is currently converting them into a usable form. (According to New York magazine, the company did not do this sooner because, during the Covid pandemic, it couldn’t get the FDA to inspect its new factory. The Washington Post reports that the inspection has now been done and the vaccines are being processed and loaded onto freezer planes in batches of 150,000 doses at a time.)

We also have an antiviral medicine, tecovirimat or Tpoxx, which was developed by Siga Technologies as a defense against a smallpox bioterrorism event. It also lessens monkeypox symptoms and comes in both oral and intravenous forms. But many men are finding it hard to get because the Centers for Disease Control requires that men getting it be enrolled in a clinical trial.

So where are we now? As in all epidemics, we’re in the early “fog of war” period. We don’t have rapid tests, so we have no idea how many cases the country really has. (The current test requires swabbing a pustule—those may not appear until many days after the initial infection.) We don’t know all the ways it’s transmitted. We don’t know if there is asymptomatic transmission. And we don’t have nearly enough Jynneos vaccine. When a city like New York gets a shipment, all the appointments are gone within minutes.

I started writing on Medium about monkeypox on May 23, when most media outlets were saying either, “Monkeypox? Eww!” or “Don’t worry, it’s not Covid.” I argued that we needed to take the threat seriously.

In five subsequent articles, I’ve made some fairly strident suggestions.

First, that we talk frankly about risky gay male sex networks instead of fretting about stigmatization. Stopping an epidemic is more important than greenwashing it. Second, that this summer’s Pride celebrations for men (not the parades, the “sex-positive” after-parties) should be rescheduled until autumn, when many more men will be vaccinated, and rapid tests should be available to enable testing right at the party door. Third, that we stop pretending that ring vaccination could ever work and instead offer vaccine to all men with multiple sex partners, and to all sex workers. (Vaccinating the “ring” of contacts around each case is impossible when men have anonymous sex—they don’t know who their contacts are.) Fourth, that we roll out both the Jynneos and the ACAM2000 vaccines and screen men for the risks posed by the older vaccine. Fifth, that the government offer a month’s shelter to all men who test positive so they can isolate safely under medical surveillance.

Some of these things are finally being done, but I still don’t think we’re doing enough.

In mid-June, the CDC did finally issue warnings that were blunt about the risks of anonymous hook-ups, even mentioning fetish gear and sex toys.

On June 23, New York City’s health department imitated Montreal and started offering vaccine to all gay men who had recently had multiple or anonymous sex partners. Other American cities followed suit.

This is progress, but there still isn’t enough of the safer Jynneos vaccine. And the CDC doesn’t think the threat yet warrants releasing the huge stockpile of ACAM2000.

The Food and Drug Administration needs to do whatever it can to speed up access to the stocks of Jynneos owned by the U.S. government but frozen in Denmark. If they aren’t enough to stop the epidemic, some hard choices about ACAM2000 will have to be made. The FDA must also speed up access to Tpoxx. In an epidemic, it is unfair to demand that every suffering recipient enroll in a clinical trial in order to be treated.

Moreover, gay men—particularly the owners of businesses and organizations sponsoring parties at which sex is encouraged—are not, in my opinion, making the sacrifices needed to slow down the spread.  Their reluctance to reschedule reminds me of the early 1980s, when the owners of San Francisco’s bathhouses bitterly fought the city’s attempts to close them, cloaking themselves in the mantle of gay freedom even as their clients died of AIDS. There was just too much money to be made. For Bear Week—as in the baths 40 years ago—the sponsors now claim their events will be “educational.”

I see the need to stop this epidemic as urgent. Helping more men avoid misery is in itself a worthy goal. But beyond that, after 50 years of progress, a serious backlash against gay acceptance is growing in this country. If the virus, which is already pegged a “gay disease,” keeps spreading and even kills some children or pregnant women, that will get much worse.

For once, we have the tools to stop a budding epidemic. But we don’t yet have enough of them. We should ask men to show restraint until enough rapid tests and vaccines are ready. Then we should crush it.

After that, for the long term, we should encourage other nations to join us in investing billions in a campaign to wipe out this virus at its source, in Africa. Not only would that save the lives of Africans—most of them children—but it’s far cheaper than having to fight this battle again and again on our soil.

 


Successful People Keep Lists


Successful people keep lists so that they do not forget certain things or simply to track their progress. The lives of more than a thousand Jews were forever saved because Oskar Schindler kept his list. That was, of course, an extraordinary example of list-making.

In everyday life, such lists have a way of contributing positively to the success of successful people, since they offer balance and direction. A list also helps you stay organized and focused. I keep my lists on my iPhone (using the Notes app), on my tablet, and in my Commonplace Book, so that I always have them at hand. They live in all three places for a variety of reasons, including the fact that paper never runs out of battery life, so I am always sure to have at least one copy.

Wherever you keep your lists, here are some things you should keep written down to help you attain success.

1. Creative ideas

Ideas can be very volatile. They are important to your success, as one idea could change the direction and course of your whole life. This is why it is essential to have a list of ideas written down. These should be the ideas you want to act upon, or the ones that excite you just as you receive their spark. Many successful minds carry a journal or notebook at all times. This way, when an idea pops up, they write it down, so they won’t forget it for future use.

2. Books to be read

Every successful person reads. Reading books boosts your chances for success, as books challenge you intellectually and inspire smart decisions. Whether it is non-fiction or fiction, always have a list of the books that will serve your career or success interests. Sometimes such books are recommended by others, while other times we simply stumble upon them.

3. Thoughts to share with others

Just as creative ideas can trigger your own success, it is important to write down thoughts that could inspire and educate others. Success is not just about you, but also about offering others a hand in achieving their own success. You could offer those thoughts or that knowledge in the form of blog posts, an article in a well-known magazine, or a book. Doing so could also propel your career and make you an authority in your field.

4. The interesting people you have met

I once met an important and knowledgeable person at a conference. Even after a very interesting discussion, I did not take down his contact information. This is something I still regret, because who knows when I might have needed his expert advice again? It is always important to keep a list of the interesting people you have met and how you can reach out to them again. Their knowledge and expertise could add to the talent pool of a business you are building or working for.

5. Media persons

From TV hosts to journalists, media people can be pivotal in offering you the recognition you need to be successful. Reaching out to such people, and keeping a list of them, can be rewarding and can give you the boost in publicity you need. Don’t be shy about connecting with a media person or engaging them on social media. Make sure you are noticed by them.

6. Progress journal

Success requires that you keep track of your progress. Many successful people, like Oprah, Eminem, and J.K. Rowling, keep a journal which they use to track their progress. Progress may not mean initial and momentary success, but rather a holistic view of what you define as success. With such a journal, you can identify your areas of strength and weakness and see how you can navigate your terrain for the future.

7. To do lists (I call mine a TooDue™ List)

This is a list of things that you intend to accomplish in the future. For some, it is a “bucket list”; either way, it helps you identify what is important and what is not. It also helps you ascertain what direction you want to take. Each item on this list should help to make you a more fulfilled person.

8. Things to see list

Many may not consider this essential, but just as books engage and develop the mind, certain movies, documentaries, and TV shows can have a considerable impact on our success. Try checking out reviews and listening to or reading critics to learn which shows would align with your goals.

 

Do you keep lists? I’d love to hear from you!

 

— from www.lifehacker.com


How to Think like an Adult: Dr. Russo’s Review of Cognitive Distortions

This is a rather long post, intended for clients and students as an adjunct to what I’ve covered in sessions or classes. It is a repeat of a much earlier post (from 2016) and is worth re-reading.

In my practice as a mentor and life coach I have worked with many clients who present with clinically significant distress that, if not wholly based on distortive thinking, is largely the result of so-called “monkey mind.” The idea is simply that, when we cognitively distort what has happened to us, our minds jump around like excited monkeys, in ways which result in clinical distress way out of proportion to what actually occurred (or is occurring).

We begin this little treatise on distortive thinking by examining what the great ancient stoic, Epictetus, had to say. He taught that philosophic inquiry is simply a way of life and not just a theoretical approach to the world. To Epictetus, all external events are beyond our control. Further, he taught that we should calmly, and without passion, accept whatever happens. Individuals are responsible for their own actions, which they can moderate through rigorous self-discipline. The quote, attributed to him, sums it up nicely:

“Men are disturbed not by things, but by the view which they take of them.”

Within counseling, the seminal theorists Aaron Beck and Albert Ellis seized upon Epictetus as they developed their respective therapeutic techniques: cognitive behavior therapy (CBT) and rational emotive behavior therapy (REBT). As we go about examining cognitive distortions, keep in mind that both theorists had turned their backs on then-traditional psychoanalytic techniques, which saw depression as arising from motivational-affective considerations; in other words, as misdirected anger, swallowed anger, or “bottled up anger.”

By the way, I do not use the term “anger management.” I stress to clients that anger, like any other emotion, is meant to be learned from. What occasioned your anger? Did it arise from a violation of your core beliefs? Injustice? No matter the activating event, it is important to salute the emotion – it’s real, after all – and to master how you handle it. Don’t bottle it up. Feel it and then use it to change your world. I call this “anger mastery,” a term borrowed from Kevin Burke.

Anyway, in his practice, Beck found that his clients reported their feelings of depression in ways differing from these psychoanalytical conceptualizations of depression. Like Ellis, Beck found his clients showed evidence of irrational thinking that he called systematic distortions. The basic premise of Beck’s Cognitive-Behavioral Therapy therefore concerns these distortions and follows the philosophy of Epictetus: it is not things that make us unhappy, but how we view them. Consequently, if we avoid struggling to change things, and instead change our own interpretations of them, we change how we feel and how we act in the future.

So, then, what is cognition? Cognitions are verbal or pictorial representations, available to the conscious mind, which form according to the interpretations we make about the things that happen to us. These interpretations and assumptions are shaped by a bunch of unconscious presuppositions we make about people and things, based on past experience ‐ and when I say past experience, I mean experiences going all the way back to birth.

When we are infants, we take in the whole world in a rather naive and unthinking fashion. Some term this process one of “accepting introjected beliefs.” We live with these introjections throughout life (some refer to this as the “appraisal approach” to the world). By way of a simple example, most of us are reluctant to touch a hot stove because of what our mothers and fathers commanded us NOT to do; namely, to touch a hot stove. We lived with that introjected understanding of stoves for a long time, accepting that they were right. But we never knew for ourselves that they were right! That is, until we accidentally touched a hot stove.

In order to make sense of our world, we recursively form cognitive filters, schemas, or a set of assumptions and expectations of how events will transpire, and what they mean to us. Such expectations can be (and often are) illogical and irrational.  It is as if our accumulated introjections preclude us from making even the simplest of logical leaps of faith.

These introjections get in the way and often result in the aforementioned clinical distress. Beck’s therapy seeks to uncover instances where distorted, illogical thoughts and images lead to unwanted or unproductive emotions. We say unproductive emotions because, while these emotions can be either good or bad, both can lead to unproductive behaviors.

Beck’s typology of cognitive distortions is somewhat like Ellis’s notion of “irrational beliefs,” which the latter challenged in therapy through a process of disputation. All distortions represent evidence of our emotions overriding a logical thought process. We may therefore label them as logical fallacies.

Notwithstanding the overriding clinical efficacy of appealing to our clients’ intellectual reasoning abilities (which, by itself, can be problematic), it is helpful to share with clients a list of irrational beliefs, or cognitive distortions, as a starting point for the work we will do:

  1. Catastrophizing or Minimizing ‐ weighing an event as far more important than it is, or failing to give it enough weight.
  2. Dichotomous Thinking ‐ committing the false dichotomy error ‐ framing phenomena as an either/or when there are other options. (Remember here the “genius of AND versus the tyranny of OR”).
  3. Emotional Reasoning ‐ feeling that your negative affect necessarily reflects the way a situation really is.
  4. Fortune Telling ‐ anticipating that events will turn out badly. I see this one perhaps more than any other distortion.
  5. Labeling ‐ this occurs when we infer the character of a person from one behavior, or from a limited set of behaviors; i.e., a person who forgets something one time is “an idiot.” This amounts to “judging a book by its cover” thinking.
  6. Mental Filter ‐ we all have mental filters, but this distortion refers to specific situations where we filter out evidence that an event could be other than a negative one for us.
  7. Mind Reading ‐ believing that we can know what a person thinks solely from their behaviors. This one is strongly related to “projection”: attributing to someone else the feelings that we, ourselves, might have in a similar situation.
  8. Overgeneralization ‐ one event is taken as proof of a series or pattern of events. Basic statistics courses teach us (or SHOULD teach us) that genuine patterns are hard to find in nature; what look like patterns only appear to be such, and we can choose to view them otherwise.
  9. Personalization ‐ assuming that we are personally at fault for some negative external event. Said another way, we take complete responsibility for something that is nowhere near our fault!
  10. Should Statements ‐ statements that begin with “Shoulds” or “Musts” are often punishing demands we make on ourselves. Generally, the assumption that we Must or Should do something is absolutist, and therefore most likely false.

Ellis, Beck, and the other theorists and therapists who employ CBT approaches in their practices take similar approaches to cognitive distortions. They begin with a laser-like focus on symptom relief, which in turn means finding the cognitive distortions from which many clients suffer. Beck’s CBT is short-term in nature, as we know, but remember that empirical research has shown symptom reduction to be as effective as longer-term help. The idea here is that by focusing on symptoms we can help effect “core” character changes.

Beck’s CBT and Ellis’s REBT have many therapeutic approaches in common. And while this treatise was not intended to be a detailed review of those therapies, it might be helpful to see how each addresses the notion of cognitively distorted thinking.

In both approaches, the therapist is active, didactic, and directive. This means that she tells the client what she is doing, the reason why she is doing it, and even teaches the client how to do it for himself. For example, if she assigns homework, she tells the client the reasoning behind it. This has the additional benefit of allowing the client to practice new behaviors in the actual environment where they will occur. Asking the client to keep a list of cognitive distortions handy, and to practice recognizing them, is one example.

I am an REBT devotee. As such, I often provide clients with a toolkit of self-help techniques. I want my client, in effect, to become a specialist in dealing with his own problems. My intent is to help the client become more independent. In other words, I am putting myself out of business by freeing my clients from the need for therapy. I want them to anticipate similar events in their future and to be equipped with techniques they can employ themselves.

So, for example, when the client senses that they are suffering from distortive thinking, they will be able to whip out the list of ten cognitive distortions and do the work themselves.
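For my more technically minded readers, here is a minimal sketch, in Python, of what such a self-help checklist might look like. It is purely illustrative: the distortion names come from the list above, but the prompts and function names are my own invention for this post and are not drawn from Beck’s or Ellis’s materials.

```python
# Purely illustrative "thought record" helper: walk a troubling thought past
# the ten distortions listed above and note which ones seem to apply.
DISTORTIONS = [
    "Catastrophizing or Minimizing", "Dichotomous Thinking", "Emotional Reasoning",
    "Fortune Telling", "Labeling", "Mental Filter", "Mind Reading",
    "Overgeneralization", "Personalization", "Should Statements",
]

def thought_record(thought, flagged):
    """Summarize which of the ten distortions were flagged for a given thought."""
    hits = [d for d in DISTORTIONS if d in flagged]
    if not hits:
        return f"Thought: {thought!r} -- no distortions flagged; what is the evidence?"
    return f"Thought: {thought!r} -- possible distortions: {', '.join(hits)}"

print(thought_record("I can't do this; it is too hard.",
                     {"Fortune Telling", "Labeling"}))
```

The point is not the code, of course, but the habit it models: name the thought, check it against the list, and then go looking for evidence.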

One thing that CBT and REBT therapists (among others) focus upon is the here-and-now, the present, as well as the future. Without question, events in our past have shaped who we are. We need to embrace that fact, but at the same time we need to look at what we can do now to change our view of life. We cannot change the past. We can only adjust our view of present and future events. Consequently, I approach the therapeutic relationship in a highly collaborative fashion, as opposed to an authoritarian, adversarial, or neutral one. While I begin as the authority on what we are about to do in therapy, I actively transfer the power in the relationship to the client.

To that end, and like Beck, I often set an agenda at the beginning of therapy and then return to that agenda time and time again to gauge progress. Beck set a great example when he outlined these precise steps (in terms of agenda and session structure):

  1. Set an agenda
  2. Review self‐report data
  3. Review presenting problem
  4. Identify problems and set clearly definable and measurable goals
  5. Educate patient on the cognitive model – discuss cognitive distortions
  6. Discuss the patient’s expectations for therapy
  7. Summarize session and assign homework
  8. Elicit feedback from the patient

Let’s focus for a moment on step 5. Without question, behavioral therapists have many tools at their disposal, including psychoeducation, relaxation training, coping skills, exposure, and response prevention. But it is cognitive restructuring that most specifically addresses distorted thinking, and that can offer the aforementioned symptom relief.

At step 5, I will acquaint my client with the ideas of both Beck and Ellis and talk about how both (but mostly Ellis) seek to focus on the client’s core evaluative beliefs about himself. Of course, Ellis tended to revert somewhat to the past by explaining how unconscious conflict may exist, based upon past experiences, while Beck tended to eschew this. Beck was more concerned with working with observable behaviors, and thereby potentially uncovering the distorted thinking.

To that end, I focus more on what are referred to as one’s “automatic thoughts,” which tend to link back to core beliefs. Ellis would have me attack those core beliefs (to the extent they are maladaptive), while Beck would have me simply try to change them. But when you stop and think about it, BOTH would have the therapist help the client to change those core beliefs.

We all have automatic thoughts – indeed, such automatic thinking helps to keep us alive. The so-called Gift of Fear comes into play here: while largely unconscious, it governs our approach to the world. It is when such automatic thinking results in distress that we as therapists are called upon.

Beck did not view automatic thoughts as unconscious in a Freudian sense. He merely saw them as operating without our notice; in a word, “automatic.”

Remember from above that such thinking arises from the underlying assumptions and rules we have accepted (via introjections) and made up (through experience) about how to deal with the world. And it is HOW we have previously dealt with the world, for good or for bad, that has resulted in our core beliefs (about ourselves and about others around us). They are, almost by definition, highly charged and rigid “takes” on the world. They govern what we do.

Beck, and to a large extent Ellis, engaged in what they called cognitive restructuring. First, you identify the cognitive distortions that appear in those automatic thoughts and that point to the self-defeating core beliefs the client has allowed to take root in his or her cognitions. Often this is done through active disputation. I prefer to go about disputation in a somewhat scientific way: through guided discovery, hypothesis testing, supporting hypotheses with evidence, and looking for alternative explanations. Clients are, for the most part, receptive.

Here is an example:

Automatic thought: I can’t do this; it is too hard.

Assumption: I will fail.

Core Belief: Because I am a loser.

I would begin with the statement, “I cannot do this.” I would work with the client to uncover “real evidence” of their inability to do … whatever. I simply ask, “What evidence do you have?” And often there is NO evidence. The statement “I can’t do this” is not literally true. Perhaps the client is having trouble because he is trying to do too much at once. The core belief is what I then attack: by asking when he has NOT been a loser; by asking for examples of a time when he resolved a situation to his satisfaction; and by asking, is that what you truly believe about yourself?

In the most general sense, we can discuss cognitive restructuring in the following fashion. Perception and experiencing are active processes that involve both inspective and introspective data, in that the client’s cognitions represent a synthesis of internal stimuli (mental filters) and external stimuli (the world about him). How people appraise a situation is generally evident in their cognitions (thoughts and visual images). These cognitions constitute their stream of consciousness or phenomenal field, which reflects their configuration of themselves, their past and future, and their world. Alterations in the content of their underlying cognitive structures affect their affective state and behavioral patterns. Through psychological intervention, clients can become aware of their cognitive distortions. Correction of those faulty, dysfunctional constructs can lead to clinical improvement.

According to cognitive theory, cognitive dysfunctions are the core of the affective, physical, and other associated features of depression. Apathy and low energy result from a person’s expectation of failure in all areas. Similarly, paralysis of will stems from a person’s pessimistic attitude and feelings of hopelessness.

Take depression: a negative self-perception whereby people see themselves as inadequate, deprived, and worthless. They experience the world as negative and demanding. They learn self-defeating cognitive styles – to expect failure and punishment, and to expect it to continue for a long time. The goal of cognitive therapy is to alleviate depression and to prevent its recurrence by helping clients to identify and test negative cognitions, to develop alternative and more flexible schemas, and to rehearse both new cognitive and behavioral responses within the confines of the therapeutic setting. By changing the way people think, the depressive disorder can be alleviated.

The beginnings of Cognitive Restructuring employ several steps:

  1. Didactic aspects.   The therapy begins by explaining to the client the theoretical concepts of CBT or REBT, focusing on the belief that faulty logic leads to emotional pain. Next, the client learns the concepts of joint hypothesis formation and hypothesis testing. In depression, the relationship between depression and faulty, self-defeating cognitions is stressed, as well as the connection between affect and behavior, and the rationale behind treatment.

  2. Eliciting automatic thoughts.   Every psychopathological disorder has its own specific cognitive profile of distorted thought, which provides a framework for specific cognitive intervention. In depression, we see the negative triad: a globalized negative self-view, negative view of current experiences and a negative view of the future.

    For example, in hypomanic episodes we see inflated views of self, experience, and future. In anxiety disorder, we see irrational fear of physical or psychological danger. In panic disorder, we see catastrophic misinterpretation of bodily and mental experiences. In phobias, we see irrational fear in specific, avoidable situations. In paranoid personality disorder, we see a negative bias and perceived interference by others. In conversion disorder, a concept of motor or sensory abnormality. In obsessive-compulsive disorder, repeated warnings or doubts about safety and repetitive rituals to ward off these threats. In suicidal behavior, hopelessness and a deficit in problem solving. In anorexia nervosa, the fear of being fat. In hypochondriasis, the attribution of a serious medical disorder.

  3. Testing automatic thoughts.   Acting as a teacher, the therapist helps a client test the validity of her automatic thoughts. The goal is to encourage the client to reject inaccurate or exaggerated thoughts.  As therapists know all too well, clients often blame themselves for things outside their control.

  4. Identifying maladaptive thoughts.    As client and therapist continue to identify automatic thoughts, patterns usually become apparent. The patterns represent rules – maladaptive general assumptions – that guide a client’s life.

    As an example: “To be happy, I must…” The primary assumption is, “If I am nice, and suffer for others, then bad things won’t happen to me,” with a secondary assumption, “It is my fault when bad things happen to me, because I was not nice enough.” Therefore, “Life is unfair, because I am nice and still bad things happen.”

    You can see how such rules inevitably lead to disappointment and, ultimately, depression.

Some concluding thoughts from Beck about depression, and his view of how psychopathology occurs in general:

  • Emotional disorders are the result of distorted thinking or an unrealistic appraisal of life events.
  • How an individual structures reality determines his emotional state.
  • A reciprocal relation exists between affect and cognition, wherein one reinforces the other, resulting in escalations of emotional and cognitive impairment.
  • Cognitive structures organize and filter incoming data and are acquired in early development.
  • Too many dissonant distortions lead to maladjustment.
  • Therapy involves learning experiences that allow the client to monitor distorted thinking, to recognize the relation between thoughts, feelings, and behavior, to test the validity of automatic thoughts, to substitute more realistic cognitions, and to identify and alter the underlying assumptions that predispose the client to the distorted thoughts in the first place.

Finally, both Beck and Ellis came up with what they saw as the rudiments of so-called Mature Thinking as compared to primitive thinking. They comprise a set of ways of thinking about yourself, the world, and the future, that lead to cognitive, emotional and behavioral success in life.

Primitive thinking is non-dimensional and global: I am the living embodiment of failure.

Mature thinking is multidimensional and specific: I make mistakes sometimes, but otherwise I can be clever at many things.

Primitive thinking is absolutistic and moralistic: I am a sinner, and I will end up in hell.

Mature thinking is relativistic and non-judgmental: I sometimes let people down, but there is no reason I can’t make amends.

Primitive thinking is invariant: I am hopeless.

Mature thinking is variable: There may be some way…

Primitive thinking resorts to “character diagnosis” and labeling: I am a coward.

Mature thinking examines behaviors and engages in behavior diagnosis: I am behaving like a coward right now.

Primitive thinking is irreversible and sees things as immutable: There is simply nothing I can do about this.

Mature thinking is reversible, flexible and ameliorative: Let’s see what I can do to fix this…

Hopefully this piece has taught you something about cognitive distortions and how everyone – all of us – suffers from them from time to time. The key is to try always to engage in mature thinking. Hard to do! And often it can take a lifetime! But I urge you to try!

© Dr. Joseph V Russo (2019), All Rights Reserved


We Are Raising Excellent Sheep (only these sheep cannot be shorn)

Without question, this article nails it, at least to my way of thinking, teaching as I do freshmen coming into University. I am reminded each and every semester of how the high schools are doing their best to turn out exemplarily woke graduates who cannot write to save their lives, are barely passable in terms of math skills, and haven’t yet read even one of the classics (you know, dead white guys like Aristotle, Plato, et al.).

Yet they have their preferred pronouns down straight, have variously colored hair (I have yet to figure out that meta-message), and are anxious to find a statue they can desecrate or even tear down. Mention (in my classes, anyway) how boys are boys and girls are girls and you get hauled before a civil rights star chamber. XX and XY, they say, are dog whistles for desiring a continuance of something called toxic masculinity and the oppressive patriarchy.

Yes, we are turning out sheeple. This article speaks to the whole idea of how we do so only at our peril. Read on.


We Are Raising Excellent Sheep

By William Deresiewicz

(c) 2022, All Rights are His (assuming his preferred pronoun is indeed “his”)

I taught English at Yale University for ten years. I had some vivid, idiosyncratic students—people who went on to write novels, devote themselves to their church, or just wander the world for a few years. But mostly I taught what one of them herself called “excellent sheep.”

These students were excellent, technically speaking. They were smart, focused, and ferociously hard-working.

But they were also sheep: stunted in their sense of purpose, waiting meekly for direction, frequently anxious and lost.

I was so struck by this—that our “best and brightest” students are so often as helpless as children—that I wrote a book about it. It came out in 2014, not long before my former colleague Nicholas Christakis was surrounded and browbeaten by a crowd of undergraduates for failing to make them feel coddled and safe—an early indication of the rise of what we now call wokeness.

How to reconcile the two phenomena, I started to wonder. Does wokeness, with its protests and pugnacity, represent an end to sheephood, a new birth of independence and self-assertion, of countercultural revolt? To listen to its radical-sounding sloganeering—about tearing down systems and doing away with anyone and anything deemed incorrect—it sure sounded like it.

But indications suggest otherwise. Elite college graduates are still herding toward the same five vocational destinations—law, medicine, finance, consulting, and tech—in overwhelming numbers. High-achieving high school students, equally woke, are still crowding toward the same 12 or 20 schools, whose application numbers continue to rise. This year, for example, Yale received some 50,000 applications, more than twice as many as 10 years ago, of which the university accepted less than 4.5%.

Eventually, I recognized the deeper continuities at work. Excellent sheephood, like wokeness, is a species of conformity. As a friend who works at an elite private university recently remarked, if the kids who get into such schools are experts at anything, it is, as he put it, “hacking the meritocracy.” The process is imitative: You do what you see the adults you aspire to be like doing. If that means making woke-talk (on your college application; in class, so professors will like you), then that is what you do.

But wokeness also serves a deeper psychic purpose. Excellent sheephood is inherently competitive. Its purpose is to vault you into the ranks of society’s winners, to make sure that you end up with more stuff—more wealth, status, power, access, comfort, freedom—than most other people. This is not a pretty project, when you look it in the face. Wokeness functions as an alibi, a moral fig leaf. If you can tell yourself that you are really doing it to “make the world a better place” (the ubiquitous campus cliché), then the whole thing goes down a lot easier.

All this helps explain the conspicuous absence of protest against what seem like obviously outrageous facts of life on campus these days: the continuing increases to already stratospheric tuition, the insulting wages paid to adjunct professors, universities’ investment in China (possibly the most problematic country on earth), the draconian restrictions implemented during the pandemic.

Yes, there have been plenty of protests, under the aegis of wokeness, in recent years: against statues, speakers, emails about Halloween costumes, dining hall banh mi. But those, of course, have been anything but countercultural. Students have merely been expressing more extreme versions of the views their elders share. In fact, of the views that their elders have taught them: in the private and upscale public high schools that have long been dominated by the new religion, in courses in gender studies, African American studies, sociology, English lit.

In that sense, the protesters have only been demonstrating what apt pupils they are. Which is why their institutions have responded, by and large, with pats on the head. After the Christakis incident, two of the students who had most flagrantly attacked the professor went on to be given awards (for “providing exemplary leadership in enhancing race and/or ethnic relations at Yale College”) when they graduated two years later.

The truth is that campus protests, not just in recent years but going back for decades now, bear only a cosmetic resemblance to those of the 1960s. The latter represented a rejection of the authority of adults. They challenged the very legitimacy of the institutions at which they were directed, and which they sought to utterly remake. They were undertaken, at a time when colleges and universities were still regarded as acting in loco parentis, by students who insisted on being treated as adults, as equals; who rejected the forms of life that society had put on offer; and who were engaged, at considerable risk—to their financial prospects, often to their physical safety—in a project of self-authoring.

I was involved in the anti-apartheid protests at Columbia in 1985. Already, by then, the actions had an edge of unreality, of play, as if the situation were surrounded by quotation marks. It was, in other words, a kind of reenactment. Student protest had achieved the status of convention, something that you understood you were supposed to do, on your way to the things that you’d already planned to do, like going to Wall Street. It was clear that no adverse consequences would be suffered for defying the administration, nor were any genuinely risked. Instead of occupying Hamilton Hall, the main college classroom building, as students had in 1968, we blocked the front door. Students were able to get to their classes the back way, and most of them did (including me and, I would venture to say, most of those who joined the protests). “We’ll get B’s!” our charismatic leader reassured us, and himself—meaning, don’t worry, we’ll wrap this up in time for finals (which is exactly what happened). The first time as tragedy, the second time as farce.

And, so, it’s been since then: the third, fourth, tenth, fiftieth time.

In a recent column, Freddie deBoer remarked, in a different context, that for the young progressive elite, “raised in comfortable and affluent homes by helicopter parents,” “[t]here was always some authority they could demand justice from.”

That is the precise form that campus protests have taken in the age of woke: appeals to authority, not defiance of it. Today’s elite college students still regard themselves as children and are still treated as such. The most infamous moment to emerge from the Christakis incident, captured on a video the world would later see, exemplifies this perfectly. Christakis’s job as the head of a residential college, a young woman (one could more justly say, a girl) shriek-cried at him, “is not about creating an intellectual space! It is not! Do you understand that? It’s about creating a home!”

We are back to in loco parentis, in fact if not in law. College is now regarded as the last stage of childhood, not the first of adulthood. But one of the pitfalls of regarding college as the last stage of childhood is that if you do so then it very well might not be. The nature of woke protests, the absence of Covid and other protests, the whole phenomenon of excellent sheephood: all of them speak to the central dilemma of contemporary youth, which is that society has not given them any way to grow up—not financially, not psychologically, not morally.

The problem, at least with respect to the last two, stems from the nature of the authority, parental as well as institutional, that the young are now facing. It is an authority that does not believe in authority, that does not believe in itself. That wants to be liked, that wants to be your friend, that wants to be thought of as cool. That will never draw a line, that will always ultimately yield.

Children can’t be children if adults are not adults, but children also can’t become adults. They need something solid: to lean on when they’re young, to define themselves against as they grow older. Children become adults—autonomous individuals—by separating from their parents: by rebelling, by rejecting, by, at the very least, asserting. But how do you rebel against parents who regard themselves as rebels? How do you reject them when they accept your rejection, understand it, sympathize with it, join it?

The 1960s broke authority, and it has never been repaired. It discredited adulthood, and adulthood has never recovered. The attributes of adulthood—responsibility, maturity, self-sacrifice, self-control—are no longer valued, and frequently no longer modeled. So, children are stuck — they want to be adults, but they don’t know how. They want to be adults, but it’s easier to remain children. Like children, they can only play at being adults.

So here is my commencement message to the class of 2022. Beware of prepackaged rebellions; that protest march that you’re about to join may be a herd. Your parents aren’t your friends; be skeptical of any authority that claims to have your interests at heart. Your friends may turn out to be your enemies; as one of mine once said, the worst thing you can do to friends is not be the person they want you to be.

Self-authoring is hard. If it isn’t uncomfortable, it isn’t independence.

Childhood is over.

Dare to grow up.

 


The Yerkes-Dodson Law

This post is largely for my students, past, present, and future, but perhaps it makes for interesting reading for others. Aside from the intrinsic value of understanding the concept of curvilinear relationships, the Law affords us a solid understanding of why too much stress is harmful to just about anything.

Note: I reference Vygotsky’s Zone of Proximal Development which can be more fully explained here.

The Yerkes-Dodson Law and Performance

Class, I mangled this morning’s lecture when it came to presenting the Yerkes-Dodson law. I wanted to depict for you the relationship between too little arousal (stress), or too much, and the resulting impact on performance (learning). We can and often do take students far beyond Vygotsky’s Zone of Proximal Development (“ZPD”), or even too far into the ZPD itself, and that can result in frustration. By the same token, too little stress (challenge) and the typical child might become bored. All of this suggests that elevated arousal levels can improve performance up to a certain point.

This has everyday implications for you and me. This little Announcement will hopefully relate how this works and why sometimes a little bit of stress can actually help you perform your best.

Arousal and Performance

Have you ever noticed that you perform better when you are just a little bit nervous? For example, you might do better at an athletic event if you are excited about participating or do better on an exam if you are somewhat anxious about your score.

In psychology, this relationship between arousal levels and performance is known as the Yerkes-Dodson Law. What impact can this have on our behavior and performance?

How the Law Works

The Yerkes-Dodson Law suggests that there is a relationship between performance and arousal. Increased arousal can help improve performance, but only up to a certain point; once arousal becomes excessive, performance diminishes. Picture the classic inverted-U curve, sketched numerically below: performance rises with arousal, peaks, and then falls off.
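For my students who like to see a curvilinear relationship in numbers rather than words, here is a minimal sketch in Python. It only illustrates the shape of the law: the parameters are made up rather than fitted to any data, and the “simple task” versus “complex task” split anticipates the research discussed further below.

```python
import math

def performance(arousal, optimal, tolerance):
    """Toy inverted-U: performance peaks at the 'optimal' arousal level and
    falls off on either side. Illustrative only; not an empirical model."""
    return math.exp(-((arousal - optimal) ** 2) / (2 * tolerance ** 2))

# Hypothetical parameters: a simple task tolerates a wide band of arousal,
# while a complex task peaks at lower arousal and degrades faster.
for arousal in range(1, 10):
    simple = performance(arousal, optimal=6, tolerance=3)
    complex_task = performance(arousal, optimal=4, tolerance=1.5)
    print(f"arousal {arousal}: simple task {simple:.2f}, complex task {complex_task:.2f}")
```

Run it and you will see performance climb, peak, and fall, with the complex task peaking earlier and collapsing faster – exactly the curvilinear relationship the law describes.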

The law was first described in 1908 by psychologists Robert Yerkes and John Dillingham Dodson. They discovered that mild electrical shocks could be used to motivate rats to complete a maze, but when the electrical shocks became too strong, the rats would scurry around in random directions to escape.

The experiment demonstrated that increasing stress and arousal levels could help focus motivation and attention on the task at hand, but only up to a certain point.

The anxiety you experience before an exam is one example of how the Yerkes-Dodson Law operates. An optimal level of stress can help you focus on the test and remember the information that you studied, but too much test anxiety can impair your ability to concentrate and make it more difficult to remember the correct answers.

Athletic performance offers another great example of the Yerkes-Dodson Law. When a player is poised to make an important move, like making a basket during a basketball game, an ideal level of arousal can sharpen their performance and enable them to make the shot. When a player gets too stressed out, however, they might instead “choke” and miss the shot.

Observations

So, how do you determine what arousal levels are ideal? The key thing to remember is that this can vary from one task to the next. Research in 2007 found, for example, that performance levels decrease earlier for complex tasks than for simple tasks even with the same levels of arousal. What does this mean exactly?

If you are performing a relatively simple task, you are capable of dealing with a much larger range of arousal levels. Household tasks such as doing laundry or loading the dishwasher are less likely to be affected by either very low or very high arousal levels.

If you were doing a much more complex task, such as working on a paper for a class or memorizing difficult information, your performance would be much more heavily influenced by low and high arousal levels.

If your arousal levels are too low, you might find yourself drifting off or even falling asleep before you can even get started on the assignment. Arousal levels that are too high could be just as problematic, making it difficult to concentrate on the information long enough to complete the task.

Too much and too little arousal can also have an effect on different types of athletic performance tasks. While a basketball player or baseball player might need to control excessive arousal in order to concentrate on successfully performing complex throws or pitches, a track sprinter might rely on high arousal levels to motivate peak performance.

In such cases, the type of task and the complexity of the task play a role in determining the optimal level of arousal.

— Dr Russo (307) 761 9197

 


Professional Decline – It May be Coming Sooner than You Think

I may have posted this a couple of years ago, but it is worth a re-post (if for no other reason than to remind me that my own professional decline has already happened). The article is useful in my own life as well as with my clients.

YOUR PROFESSIONAL DECLINE IS COMING (MUCH) SOONER THAN YOU THINK

Here’s how to make the most of it.

By Arthur C. Brooks, in The Atlantic (c) 2019


“It’s not true that no one needs you anymore.”

These words came from an elderly woman sitting behind me on a late-night flight from Los Angeles to Washington, D.C. The plane was dark and quiet. A man I assumed to be her husband murmured almost inaudibly in response, something to the effect of “I wish I was dead.”

Again, the woman: “Oh, stop saying that.”

I didn’t mean to eavesdrop but couldn’t help it. I listened with morbid fascination, forming an image of the man in my head as they talked. I imagined someone who had worked hard all his life in relative obscurity, someone with unfulfilled dreams—perhaps of the degree he never attained, the career he never pursued, the company he never started.

At the end of the flight, as the lights switched on, I finally got a look at the desolate man. I was shocked. I recognized him—he was, and still is, world-famous. Then in his mid‑80s, he was beloved as a hero for his courage, patriotism, and accomplishments many decades ago.

As he walked up the aisle of the plane behind me, other passengers greeted him with veneration. Standing at the door of the cockpit, the pilot stopped him and said, “Sir, I have admired you since I was a little boy.” The older man—apparently wishing for death just a few minutes earlier—beamed with pride at the recognition of his past glories.

For selfish reasons, I couldn’t get the cognitive dissonance of that scene out of my mind. It was the summer of 2015, shortly after my 51st birthday. I was not world-famous like the man on the plane, but my professional life was going very well. I was the president of a flourishing Washington think tank, the American Enterprise Institute. I had written some best-selling books. People came to my speeches. My columns were published in The New York Times.

But I had started to wonder: Can I really keep this going? I work like a maniac. But even if I stayed at it 12 hours a day, seven days a week, at some point my career would slow and stop. And when it did, what then? Would I one day be looking back wistfully and wishing I were dead? Was there anything I could do, starting now, to give myself a shot at avoiding misery—and maybe even achieve happiness—when the music inevitably stops?

Though these questions were personal, I decided to approach them as the social scientist I am, treating them as a research project. It felt unnatural—like a surgeon taking out his own appendix. But I plunged ahead, and for the past four years, I have been on a quest to figure out how to turn my eventual professional decline from a matter of dread into an opportunity for progress.

Here’s what I’ve found.

The field of “happiness studies” has boomed over the past two decades, and a consensus has developed about well-being as we advance through life. In The Happiness Curve: Why Life Gets Better After 50, Jonathan Rauch, a Brookings Institution scholar and an Atlantic contributing editor, reviews the strong evidence suggesting that the happiness of most adults declines through their 30s and 40s, then bottoms out in their early 50s. Nothing about this pattern is set in stone, of course. But the data seem eerily consistent with my experience: My 40s and early 50s were not an especially happy period of my life, notwithstanding my professional success.

So, what can people expect after that, based on the data? The news is mixed. Almost all studies of happiness over the life span show that, in wealthier countries, most people’s contentment starts to increase again in their 50s, until age 70 or so. That is where things get less predictable, however. After 70, some people stay steady in happiness; others get happier until death. Others—men in particular—see their happiness plummet. Indeed, depression and suicide rates for men increase after age 75.

This last group would seem to include the hero on the plane. A few researchers have looked at this cohort to understand what drives their unhappiness. It is, in a word, irrelevance. In 2007, a team of academic researchers at UCLA and Princeton analyzed data on more than 1,000 older adults. Their findings, published in the Journal of Gerontology, showed that senior citizens who rarely or never “felt useful” were nearly three times as likely as those who frequently felt useful to develop a mild disability, and were more than three times as likely to have died during the course of the study.

One might think that gifted and accomplished people, such as the man on the plane, would be less susceptible than others to this sense of irrelevance; after all, accomplishment is a well-documented source of happiness. If current accomplishment brings happiness, then shouldn’t the memory of that accomplishment provide some happiness as well?

Maybe not. Though the literature on this question is sparse, giftedness and achievements early in life do not appear to provide an insurance policy against suffering later on. In 1999, Carole Holahan and Charles Holahan, psychologists at the University of Texas, published an influential paper in The International Journal of Aging and Human Development that looked at hundreds of older adults who early in life had been identified as highly gifted. The Holahans’ conclusion: “Learning at a younger age of membership in a study of intellectual giftedness was related to … less favorable psychological well-being at age eighty.”

This study may simply be showing that it’s hard to live up to high expectations, and that telling your kid she is a genius is not necessarily good parenting. (The Holahans surmise that the children identified as gifted might have made intellectual ability more central to their self-appraisal, creating “unrealistic expectations for success” and causing them to fail to “consider the many other life influences on success and recognition.”) However, abundant evidence suggests that the waning of ability in people of high accomplishment is especially brutal psychologically. Consider professional athletes, many of whom struggle profoundly after their sports career ends. Tragic examples abound, involving depression, addiction, or suicide; unhappiness in retired athletes may even be the norm, at least temporarily. A study published in the Journal of Applied Sport Psychology in 2003, which charted the life satisfaction of former Olympic athletes, found that they generally struggled with a low sense of personal control when they first stopped competing.

Recently, I asked Dominique Dawes, a former Olympic gold-medal gymnast, how normal life felt after competing and winning at the highest levels. She told me that she is happy, but that the adjustment wasn’t easy—and still isn’t, even though she won her last Olympic medal in 2000. “My Olympic self would ruin my marriage and leave my kids feeling inadequate,” she told me, because it is so demanding and hard driving. “Living life as if every day is an Olympics only makes those around me miserable.”

Why might former elite performers have such a hard time? No academic research has yet proved this, but I strongly suspect that the memory of remarkable ability, if that is the source of one’s self-worth, might, for some, provide an invidious contrast to a later, less remarkable life. “Unhappy is he who depends on success to be happy,” Alex Dias Ribeiro, a former Formula 1 race-car driver, once wrote. “For such a person, the end of a successful career is the end of the line. His destiny is to die of bitterness or to search for more success in other careers and to go on living from success to success until he falls dead. In this case, there will not be life after success.”

Call it the Principle of Psychoprofessional Gravitation: the idea that the agony of professional oblivion is directly related to the height of professional prestige previously achieved, and to one’s emotional attachment to that prestige. Problems related to achieving professional success might appear to be a pretty good species of problem to have; even raising this issue risks seeming precious. But if you reach professional heights and are deeply invested in being high up, you can suffer mightily when you inevitably fall. That’s the man on the plane. Maybe that will be you, too. And, without significant intervention, I suspect it will be me.

The Principle of Psychoprofessional Gravitation can help explain the many cases of people who have done work of world-historical significance yet wind up feeling like failures. Take Charles Darwin, who was just 22 when he set out on his five-year voyage aboard the Beagle in 1831. Returning at 27, he was celebrated throughout Europe for his discoveries in botany and zoology, and for his early theories of evolution. Over the next 30 years, Darwin took enormous pride in sitting atop the celebrity-scientist pecking order, developing his theories, and publishing them as books and essays—the most famous being On the Origin of Species, in 1859.

But as Darwin progressed into his 50s, he stagnated; he hit a wall in his research. At the same time an Austrian monk by the name of Gregor Mendel discovered what Darwin needed to continue his work: the theory of genetic inheritance. Unfortunately, Mendel’s work was published in an obscure academic journal and Darwin never saw it—and in any case, Darwin did not have the mathematical ability to understand it. From then on he made little progress. Depressed in his later years, he wrote to a close friend, “I have not the heart or strength at my age to begin any investigation lasting years, which is the only thing which I enjoy.”

Presumably, Darwin would be pleasantly surprised to learn how his fame grew after his death, in 1882. From what he could see when he was old, however, the world had passed him by, and he had become irrelevant. That could have been Darwin on the plane behind me that night.

It also could have been a younger version of me, because I have had precocious experience with professional decline.

As a child, I had just one goal: to be the world’s greatest French-horn player. I worked at it slavishly, practicing hours a day, seeking out the best teachers, and playing in any ensemble I could find. I had pictures of famous horn players on my bedroom wall for inspiration. And for a while, I thought my dream might come true. At 19, I left college to take a job playing professionally in a touring chamber-music ensemble. My plan was to keep rising through the classical-music ranks, joining a top symphony orchestra in a few years or maybe even becoming a soloist—the most exalted job a classical musician can hold.

But then, in my early 20s, a strange thing happened: I started getting worse. To this day, I have no idea why. My technique began to suffer, and I had no explanation for it. Nothing helped. I visited great teachers and practiced more, but I couldn’t get back to where I had been. Pieces that had been easy to play became hard; pieces that had been hard became impossible.


Perhaps the worst moment in my young but flailing career came at age 22, when I was performing at Carnegie Hall. While delivering a short speech about the music I was about to play, I stepped forward, lost my footing, and fell off the stage into the audience. On the way home from the concert, I mused darkly that the experience was surely a message from God.

But I sputtered along for nine more years. I took a position in the City Orchestra of Barcelona, where I increased my practicing but my playing gradually deteriorated. Eventually I found a job teaching at a small music conservatory in Florida, hoping for a magical turnaround that never materialized. Realizing that maybe I ought to hedge my bets, I went back to college via distance learning, and earned my bachelor’s degree shortly before my 30th birthday. I secretly continued my studies at night, earning a master’s degree in economics a year later. Finally, I had to admit defeat: I was never going to turn around my faltering musical career. So, at 31 I gave up, abandoning my musical aspirations entirely, to pursue a doctorate in public policy.

Life goes on, right? Sort of. After finishing my studies, I became a university professor, a job I enjoyed. But I still thought every day about my beloved first vocation. Even now, I regularly dream that I am onstage, and wake to remember that my childhood aspirations are now only phantasms.

I am lucky to have accepted my decline at a young enough age that I could redirect my life into a new line of work. Still, to this day, the sting of that early decline makes these words difficult to write. I vowed to myself that it wouldn’t ever happen again.

Will it happen again? In some professions, early decline is inescapable. No one expects an Olympic athlete to remain competitive until age 60. But in many physically nondemanding occupations, we implicitly reject the inevitability of decline before very old age. Sure, our quads and hamstrings may weaken a little as we age. But as long as we retain our marbles, our quality of work as a writer, lawyer, executive, or entrepreneur should remain high up to the very end, right? Many people think so. I recently met a man a bit older than I am who told me he planned to “push it until the wheels came off.” In effect, he planned to stay at the very top of his game by any means necessary, and then keel over.

But the odds are he won’t be able to. The data are shockingly clear that for most people, in most fields, decline starts earlier than almost anyone thinks.

According to research by Dean Keith Simonton, a professor emeritus of psychology at UC Davis and one of the world’s leading experts on the trajectories of creative careers, success and productivity increase for the first 20 years after the inception of a career, on average. So, if you start a career in earnest at 30, expect to do your best work around 50 and go into decline soon after that.

The specific timing of peak and decline vary somewhat depending on the field. Benjamin Jones, a professor of strategy and entrepreneurship at Northwestern University’s Kellogg School of Management, has spent years studying when people are most likely to make prizewinning scientific discoveries and develop key inventions. His findings can be summarized by this little ditty:

Age is, of course, a fever chill
that every physicist must fear.
He’s better dead than living still
when once he’s past his thirtieth year.

The author of those gloomy lines? Paul Dirac, a winner of the 1933 Nobel Prize in Physics.

Dirac overstates the point, but only a little. Looking at major inventors and Nobel winners going back more than a century, Jones has found that the most common age for producing a magnum opus is the late 30s. He has shown that the likelihood of a major discovery increases steadily through one’s 20s and 30s and then declines through one’s 40s, 50s, and 60s. Are there outliers? Of course. But the likelihood of producing a major innovation at age 70 is approximately what it was at age 20—almost nonexistent.

Much of literary achievement follows a similar pattern. Simonton has shown that poets peak in their early 40s. Novelists generally take a little longer. When Martin Hill Ortiz, a poet and novelist, collected data on New York Times fiction best sellers from 1960 to 2015, he found that authors were likeliest to reach the No. 1 spot in their 40s and 50s. Despite the famous productivity of a few novelists well into old age, Ortiz shows a steep drop-off in the chance of writing a best seller after the age of 70. (Some nonfiction writers—especially historians—peak later, as we shall see in a minute.)


Entrepreneurs peak and decline earlier, on average. After earning fame and fortune in their 20s, many tech entrepreneurs are in creative decline by age 30. In 2014, the Harvard Business Review reported that founders of enterprises valued at $1 billion or more by venture capitalists tend to cluster in the 20-to-34 age range. Subsequent research has found that the clustering might be slightly later, but all studies in this area have found that the majority of successful start-ups have founders under age 50.

This research concerns people at the very top of professions that are atypical. But the basic finding appears to apply more broadly. Scholars at Boston College’s Center for Retirement Research studied a wide variety of jobs and found considerable susceptibility to age-related decline in fields ranging from policing to nursing. Other research has found that the best-performing home-plate umpires in Major League Baseball have 18 years less experience and are 23 years younger than the worst-performing umpires (who are 56.1 years old, on average). Among air traffic controllers, the age-related decline is so sharp—and the potential consequences of decline-related errors so dire—that the mandatory retirement age is 56.

In sum, if your profession requires mental processing speed or significant analytic capabilities—the kind of profession most college graduates occupy—noticeable decline is probably going to set in earlier than you imagine.

Sorry.

If decline not only is inevitable but also happens earlier than most of us expect, what should we do when it comes for us?

Whole sections of bookstores are dedicated to becoming successful. The shelves are packed with titles like The Science of Getting Rich and The 7 Habits of Highly Effective People. There is no section marked “Managing Your Professional Decline.”

But some people have managed their declines well. Consider the case of Johann Sebastian Bach. Born in 1685 to a long line of prominent musicians in central Germany, Bach quickly distinguished himself as a musical genius. In his 65 years, he published more than 1,000 compositions for all the available instrumentations of his day.

Early in his career, Bach was considered an astoundingly gifted organist and improviser. Commissions rolled in; royalty sought him out; young composers emulated his style. He enjoyed real prestige.

But it didn’t last—in no small part because his career was overtaken by musical trends ushered in by, among others, his own son, Carl Philipp Emanuel, known as C.P.E. to the generations that followed. The fifth of Bach’s 20 children, C.P.E. exhibited the musical gifts his father had. He mastered the baroque idiom, but he was more fascinated with a new “classical” style of music, which was taking Europe by storm. As classical music displaced baroque, C.P.E.’s prestige boomed while his father’s music became passé.

Bach easily could have become embittered, like Darwin. Instead, he chose to redesign his life, moving from innovator to instructor. He spent a good deal of his last 10 years writing The Art of Fugue, not a famous or popular work in his time, but one intended to teach the techniques of the baroque to his children and students—and, as unlikely as it seemed at the time, to any future generations that might be interested. In his later years, he lived a quieter life as a teacher and a family man.

What’s the difference between Bach and Darwin? Both were preternaturally gifted and widely known early in life. Both attained permanent fame posthumously. Where they differed was in their approach to the midlife fade. When Darwin fell behind as an innovator, he became despondent and depressed; his life ended in sad inactivity. When Bach fell behind, he reinvented himself as a master instructor. He died beloved, fulfilled, and—though less famous than he once had been—respected.

The lesson for you and me, especially after 50: Be Johann Sebastian Bach, not Charles Darwin.

How does one do that?

A potential answer lies in the work of the British psychologist Raymond Cattell, who in the early 1940s introduced the concepts of fluid and crystallized intelligence. Cattell defined fluid intelligence as the ability to reason, analyze, and solve novel problems—what we commonly think of as raw intellectual horsepower. Innovators typically have an abundance of fluid intelligence. It is highest relatively early in adulthood and diminishes starting in one’s 30s and 40s. This is why tech entrepreneurs, for instance, do so well so early, and why older people have a much harder time innovating.

Crystallized intelligence, in contrast, is the ability to use knowledge gained in the past. Think of it as possessing a vast library and understanding how to use it. It is the essence of wisdom. Because crystallized intelligence relies on an accumulating stock of knowledge, it tends to increase through one’s 40s, and does not diminish until very late in life.

Careers that rely primarily on fluid intelligence tend to peak early, while those that use more crystallized intelligence peak later. For example, Dean Keith Simonton has found that poets—highly fluid in their creativity—tend to have produced half their lifetime creative output by age 40 or so. Historians—who rely on a crystallized stock of knowledge—don’t reach this milestone until about 60.

Here’s a practical lesson we can extract from all this: No matter what mix of intelligence your field requires, you can always endeavor to weight your career away from innovation and toward the strengths that persist, or even increase, later in life.

Like what? As Bach demonstrated, teaching is an ability that decays very late in life, a principal exception to the general pattern of professional decline over time. A study in The Journal of Higher Education showed that the oldest college professors in disciplines requiring a large store of fixed knowledge, specifically the humanities, tended to get evaluated most positively by students. This probably explains the professional longevity of college professors, three-quarters of whom plan to retire after age 65—more than half of them after 70, and some 15 percent of them after 80. (The average American retires at 61.) One day, during my first year as a professor, I asked a colleague in his late 60s whether he’d ever considered retiring. He laughed and told me he was more likely to leave his office horizontally than vertically.


Our dean might have chuckled ruefully at this—college administrators complain that research productivity among tenured faculty drops off significantly in the last decades of their career. Older professors take up budget slots that could otherwise be used to hire young scholars hungry to do cutting-edge research. But perhaps therein lies an opportunity: If older faculty members can shift the balance of their work from research to teaching without loss of professional prestige, younger faculty members can take on more research.

Patterns like this match what I’ve seen as the head of a think tank full of scholars of all ages. There are many exceptions, but the most profound insights tend to come from those in their 30s and early 40s. The best synthesizers and explainers of complicated ideas—that is, the best teachers—tend to be in their mid-60s or older, some of them well into their 80s.

That older people, with their stores of wisdom, should be the most successful teachers seems almost cosmically right. No matter what our profession, as we age, we can dedicate ourselves to sharing knowledge in some meaningful way.

A few years ago, I saw a cartoon of a man on his deathbed saying, “I wish I’d bought more crap.” It has always amazed me that many wealthy people keep working to increase their wealth, amassing far more money than they could possibly spend or even usefully bequeath. One day I asked a wealthy friend why this is so. Many people who have gotten rich know how to measure their self-worth only in pecuniary terms, he explained, so they stay on the hamster wheel, year after year. They believe that at some point, they will finally accumulate enough to feel truly successful, happy, and therefore ready to die.

This is a mistake, and not a benign one. Most Eastern philosophy warns that focusing on acquisition leads to attachment and vanity, which derail the search for happiness by obscuring one’s essential nature. As we grow older, we shouldn’t acquire more, but rather strip things away to find our true selves—and thus, peace.

At some point, writing one more book will not add to my life satisfaction; it will merely stave off the end of my book-writing career. The canvas of my life will have another brushstroke that, if I am being forthright, others will barely notice, and will certainly not appreciate very much. The same will be true for most other markers of my success.

What I need to do, in effect, is stop seeing my life as a canvas to fill and start seeing it more as a block of marble to chip away at and shape something out of. I need a reverse bucket list. My goal for each year of the rest of my life should be to throw out things, obligations, and relationships until I can clearly see my refined self in its best form.

And that self is … who, exactly?

Last year, the search for an answer to this question took me deep into the South Indian countryside, to a town called Palakkad, near the border between the states of Kerala and Tamil Nadu. I was there to meet the guru Sri Nochur Venkataraman, known as Acharya (“Teacher”) to his disciples. Acharya is a quiet, humble man dedicated to helping people attain enlightenment; he has no interest in Western techies looking for fresh start-up ideas or burnouts trying to escape the religious traditions they were raised in. Satisfied that I was neither of those things, he agreed to talk with me.

I told him my conundrum: Many people of achievement suffer as they age, because they lose their abilities, gained over many years of hard work. Is this suffering inescapable, like a cosmic joke on the proud? Or is there a loophole somewhere—a way around the suffering?

Acharya answered elliptically, explaining an ancient Hindu teaching about the stages of life, or ashramas. The first is Brahmacharya, the period of youth and young adulthood dedicated to learning. The second is Grihastha, when a person builds a career, accumulates wealth, and creates a family. In this second stage, the philosophers find one of life’s most common traps: People become attached to earthly rewards—money, power, sex, prestige—and thus try to make this stage last a lifetime.

The antidote to these worldly temptations is Vanaprastha, the third ashrama, whose name comes from two Sanskrit words meaning “retiring” and “into the forest.” This is the stage, usually starting around age 50, in which we purposefully focus less on professional ambition, and become more and more devoted to spirituality, service, and wisdom. This doesn’t mean that you need to stop working when you turn 50—something few people can afford to do—only that your life goals should adjust.

Vanaprastha is a time for study and training for the last stage of life, Sannyasa, which should be totally dedicated to the fruits of enlightenment. In times past, some Hindu men would leave their family in old age, take holy vows, and spend the rest of their life at the feet of masters, praying and studying. Even if sitting in a cave at age 75 isn’t your ambition, the point should still be clear: As we age, we should resist the conventional lures of success in order to focus on more transcendentally important things.

I told Acharya the story about the man on the plane. He listened carefully and thought for a minute. “He failed to leave Grihastha,” he told me. “He was addicted to the rewards of the world.” He explained that the man’s self-worth was probably still anchored in the memories of professional successes many years earlier, his ongoing recognition purely derivative of long-lost skills. Any glory today was a mere shadow of past glories. Meanwhile, he’d completely skipped the spiritual development of Vanaprastha, and was now missing out on the bliss of Sannyasa.

There is a message in this for those of us suffering from the Principle of Psychoprofessional Gravitation. Say you are a hard-charging, type-A lawyer, executive, entrepreneur, or—hypothetically, of course—president of a think tank. From early adulthood to middle age, your foot is on the gas, professionally. Living by your wits—by your fluid intelligence—you seek the material rewards of success, you attain a lot of them, and you are deeply attached to them. But the wisdom of Hindu philosophy—and indeed the wisdom of many philosophical traditions—suggests that you should be prepared to walk away from these rewards before you feel ready. Even if you’re at the height of your professional prestige, you probably need to scale back your career ambitions in order to scale up your metaphysical ones.

When the New York Times columnist David Brooks talks about the difference between “résumé virtues” and “eulogy virtues,” he’s effectively putting the ashramas in a practical context. Résumé virtues are professional and oriented toward earthly success. They require comparison with others. Eulogy virtues are ethical and spiritual and require no comparison. Your eulogy virtues are what you would want people to talk about at your funeral. As in, “He was kind and deeply spiritual,” not, “He made senior vice president at an astonishingly young age and had a lot of frequent-flier miles.”

You won’t be around to hear the eulogy, but the point Brooks makes is that we live the most fulfilling life—especially once we reach midlife—by pursuing the virtues that are most meaningful to us.

I suspect that my own terror of professional decline is rooted in a fear of death—a fear that, even if it is not conscious, motivates me to act as if death will never come by denying any degradation in my résumé virtues. This denial is destructive, because it leads me to ignore the eulogy virtues that bring me the greatest joy.

How can I overcome this tendency?

The Buddha recommends, of all things, corpse meditation: Many Theravada Buddhist monasteries in Thailand and Sri Lanka display photos of corpses in various states of decomposition for the monks to contemplate. “This body, too,” students are taught to say about their own body, “such is its nature, such is its future, such is its unavoidable fate.” At first this seems morbid. But its logic is grounded in psychological principles—and it’s not an exclusively Eastern idea. “To begin depriving death of its greatest advantage over us,” Michel de Montaigne wrote in the 16th century, “let us deprive death of its strangeness, let us frequent it, let us get used to it; let us have nothing more often in mind than death.”

Psychologists call this desensitization, in which repeated exposure to something repellent or frightening makes it seem ordinary, prosaic, not scary. And for death, it works. In 2017, a team of researchers at several American universities recruited volunteers to imagine they were terminally ill or on death row, and then to write blog posts about either their imagined feelings or their would-be final words. The researchers then compared these expressions with the writings and last words of people who were actually dying or facing capital punishment. The results, published in Psychological Science, were stark: The words of the people merely imagining their imminent death were three times as negative as those of the people actually facing death—suggesting that, counterintuitively, death is scarier when it is theoretical and remote than when it is a concrete reality closing in.

For most people, actively contemplating our demise so that it is present and real (rather than avoiding the thought of it via the mindless pursuit of worldly success) can make death less frightening; embracing death reminds us that everything is temporary and can make each day of life more meaningful. “Death destroys a man,” E. M. Forster wrote, but “the idea of Death saves him.”

Decline is inevitable, and it occurs earlier than almost any of us wants to believe. But misery is not inevitable. Accepting the natural cadence of our abilities sets up the possibility of transcendence, because it allows the shifting of attention to higher spiritual and life priorities.

But such a shift demands more than mere platitudes. I embarked on my research with the goal of producing a tangible road map to guide me during the remaining years of my life. This has yielded four specific commitments.

  1. JUMP

The biggest mistake professionally successful people make is attempting to sustain peak accomplishment indefinitely, trying to make use of the kind of fluid intelligence that begins fading relatively early in life. This is impossible. The key is to enjoy accomplishments for what they are in the moment, and to walk away perhaps before I am completely ready—but on my own terms.

So: I’ve resigned my job as president of the American Enterprise Institute, effective right about the time this essay is published. While I have not detected deterioration in my performance, it is only a matter of time. Like many executive positions, the job is heavily reliant on fluid intelligence. Also, I wanted freedom from the consuming responsibilities of that job, to have time for more spiritual pursuits. In truth, this decision wasn’t entirely about me. I love my institution and have seen many others like it suffer when a chief executive lingered too long.

Leaving something you love can feel a bit like a part of you is dying. In Tibetan Buddhism, there is a concept called bardo, which is a state of existence between death and rebirth—“like a moment when you step toward the edge of a precipice,” as a famous Buddhist teacher puts it. I am letting go of a professional life that answers the question “Who am I?”

I am extremely fortunate to have the means and opportunity to be able to walk away from a job. Many people cannot afford to do that. But you don’t necessarily have to quit your job; what’s important is striving to detach progressively from the most obvious earthly rewards—power, fame and status, money—even if you continue to work or advance a career. The real trick is walking into the next stage of life, Vanaprastha, to conduct the study and training that prepare us for fulfillment in life’s final stage.

  2. SERVE

Time is limited, and professional ambition crowds out things that ultimately matter more. To move from résumé virtues to eulogy virtues is to move from activities focused on the self to activities focused on others. This is not easy for me; I am a naturally egotistical person. But I have to face the fact that the costs of catering to selfishness are ruinous—and I now work every day to fight this tendency.

Fortunately, an effort to serve others can play to our strengths as we age. Remember, people whose work focuses on teaching or mentorship, broadly defined, peak later in life. I am thus moving to a phase in my career in which I can dedicate myself fully to sharing ideas in service of others, primarily by teaching at a university. My hope is that my most fruitful years lie ahead.

  3. WORSHIP

Because I’ve talked a lot about various religious and spiritual traditions—and emphasized the pitfalls of overinvestment in career success—readers might naturally conclude that I am making a Manichaean separation between the worlds of worship and work and suggesting that the emphasis be on worship. That is not my intention. I do strongly recommend that each person explore his or her spiritual self—I plan to dedicate a good part of the rest of my life to the practice of my own faith, Roman Catholicism. But this is not incompatible with work; on the contrary, if we can detach ourselves from worldly attachments and redirect our efforts toward the enrichment and teaching of others, work itself can become a transcendental pursuit.

“The aim and final end of all music,” Bach once said, “should be none other than the glory of God and the refreshment of the soul.” Whatever your metaphysical convictions, refreshment of the soul can be the aim of your work, like Bach’s.

Bach finished each of his manuscripts with the words Soli Deo gloria—“Glory to God alone.” He failed, however, to write these words on his last manuscript, “Contrapunctus 14,” from The Art of Fugue, which abruptly stops mid-measure. His son C. P. E. Bach added these words to the score: “Über dieser Fuge … ist der Verfasser gestorben” (“At this point in the fugue … the composer died”). Bach’s life and work merged with his prayers as he breathed his last breath. This is my aspiration.

  4. CONNECT

Throughout this essay, I have focused on the effect that the waning of my work prowess will have on my happiness. But an abundance of research strongly suggests that happiness—not just in later years but across the life span—is tied directly to the health and plentifulness of one’s relationships. Pushing work out of its position of preeminence—sooner rather than later—to make space for deeper relationships can provide a bulwark against the angst of professional decline.

Dedicating more time to relationships, and less to work, is not inconsistent with continued achievement. “He is like a tree planted by streams of water,” the Book of Psalms says of the righteous person, “yielding its fruit in season, whose leaf does not wither, and who prospers in all he does.” Think of an aspen tree. To live a life of extraordinary accomplishment is—like the tree—to grow alone, reach majestic heights alone, and die alone. Right?

Wrong. The aspen tree is an excellent metaphor for a successful person—but not, it turns out, for its solitary majesty. Above the ground, it may appear solitary. Yet each individual tree is part of an enormous root system, which is together one plant. In fact, an aspen is one of the largest living organisms in the world; a single grove in Utah, called Pando, spans 106 acres and weighs an estimated 13 million pounds.

The secret to bearing my decline—to enjoying it—is to become more conscious of the roots linking me to others. If I have properly developed the bonds of love among my family and friends, my own withering will be more than offset by blooming in others.


When I talk about this personal research project I’ve been pursuing, people usually ask: Whatever happened to the hero on the plane?

I think about him a lot. He’s still famous, popping up in the news from time to time. Early on, when I saw a story about him, I would feel a flash of something like pity—which I now realize was really only a refracted sense of terror about my own future. “Poor guy” really meant “I’m screwed.”

But as my grasp of the principles laid out in this essay has deepened, my fear has declined proportionately. My feeling toward the man on the plane is now one of gratitude for what he taught me. I hope that he can find the peace and joy he is inadvertently helping me attain.

Arthur C. Brooks is a contributing writer at The Atlantic, the William Henry Bloomberg Professor of the Practice of Public Leadership at the Harvard Kennedy School, and a professor of management practice at the Harvard Business School. He’s the host of the podcast series How to Build a Happy Life and the author of From Strength to Strength: Finding Success, Happiness, and Deep Purpose in the Second Half of Life.
Posted in Anxiety, Counseling Concepts, Death, Depression, General Musings, People in general, Positive Mental Attitude

The Struggle with Porn is Really a Struggle with Responsibility

[This is for my clients struggling with pornography]

In this very short post, a follow-up to what I’ve written recently about the attachment to porn that some 90% of men have (and likely suffer from), I want to remind my clients that I speak of porn as an escape from responsibility.

Indeed, as they undoubtedly sense deep within their conscience, these men feel guilt and shame not because they consume porn but because they are eschewing responsibility.

How utterly convenient is it that one can surf to a porn site and avoid having to interact with a real person?

There is good news: those feelings of guilt are nature’s reminder that we have a moral sense, and that moral sense need not be a burden.

Quoting here from The Sociopath Next Door (by Dr. Martha Stout):

No, the best part of possessing a moral sense is the deep and beautiful gift that comes to us inside, and only inside, the wrappings of conscience. The ability to love comes bundled up in conscience, just as our spirits are bundled up in our bodies. Conscience is the embodiment of love, imbued into our very biology. It lives in the part of the brain that reacts emotionally, and in their favor, when the ones we love need our attention, our help, or even our sacrifice. We have already seen that when someone’s mind is not equipped to love, he can have no genuine conscience either, since conscience is an intervening sense of responsibility based in our emotional attachments to others.

We can turn this psychological equation around. The other truth is that should a person have no conscience, he could never truly love. When an imperative sense of responsibility is subtracted from love, all that is left is a thin, tertiary thing—a will to possess, which is not love at all.

Think about it.

Posted in General Musings, Pornography