Category Archives: English

Digital Darwinism

First, some memes copied from the book in place of proper reading notes:

Understand the meaning behind a technology, not the technology itself.

New technology should be applied to the core of the business, not the edge. (It seems most conservative companies are only willing to apply new technology to edge businesses, to limit risk.)

People usually underestimate the depth of a technology's application, overestimate its short-term impact, and underestimate its long-term impact.

The biggest anxiety in a transformation is timing: when to adopt the new technology, and whether to wait a little longer for something better and more mature (much like buying a phone or a computer).

Traditional accounting methods cannot evaluate the loss from missing a new technology or idea, so finance is the wrong department to lead a transformation.

Technology is the backdrop of a transformation, not its driver; what people need are solutions, not the technology itself.

Several approaches to adopting new technology:

  • Self-disruption: Netflix, for example, directly converting its DVD-rental customers into streaming subscribers. Bold and risky.
  • Continuous reinvention, improvement, innovation and gap-filling: lower risk, renewing yourself bit by bit.
  • Keep the existing business unchanged, and invest in new businesses that apply the new technology.
  • Invest in hedge funds.

On AI strategy: as a company, have a deliberately designed strategy, rather than running small experiments at the margins.

Bolting on existing models will not bring notable success; the model in your head has to be renewed. It should not be a matter of taking the former XXX and simply relabeling it the digital-age XXX or the AI-age XXX.

The AI applications we see so far merely replace work humans already do (positions, responsibilities), which forfeits the real point of transformation. Changing the enterprise architecture should be the premise; think about doing things humans have never done.

Forget big data; focus on innate data. The data does not need to be large in scale, so long as it is useful and effective.

Design new technical solutions around people, rather than building a solution/product first and then pouring money into promoting it.


My biggest takeaway after finishing: only if we find a genuinely new paradigm can we put AI to good use. Otherwise we are merely having AI repeat, and replace, what humans do.

Diplomacy

Reading 10~20 pages a day, I finished Kissinger's Diplomacy.

The Chinese translation is titled 大外交; the 大 ("grand") lends it gravitas.

It starts with Richelieu, then moves through Metternich, the British prime ministers, Bismarck, the eve of World War I, Versailles, Wilson, the run-up to World War II, the middle and late stages of the war, the postwar years, the Cold War, the Korean War, the Suez Crisis and the Vietnam War; the later chapters mostly cover the political guidance and diplomatic orientation of successive American presidents, up to the collapse of the Soviet Union and the post-Cold War period.

Read simply as stories, it is thoroughly gripping: Napoleon III crushed by Bismarck; Stalin and Hitler scheming against each other before the open break; the maneuvering across the Big Three's several meetings; Khrushchev provoking crisis after crisis with no endgame; the ice-breaking diplomacy between China and the US; how the Vietnamese communists left a deeply mired America fleeing Saigon in disarray; and how Gorbachev could not salvage the situation, losing control entirely.

Overall, Kissinger still portrays American diplomacy as a pendulum swinging between idealism and national interest. He also argues that after the Cold War the US will continue to prioritize national interest, and that no one will have the chance to challenge it, since it is the sole superpower on Earth.

My own sense is that it would be very hard for the US to return to Wilsonianism, and, with international capital as strong as it is now, harder still to return to isolationism.

For China, the path it seeks must be one in which the US can gradually cede its global interests and retreat to a Wilsonian or Monroe-Doctrine posture. Otherwise it will meet forceful suppression from the US (or perhaps from the global capital most tightly bound to American interests).

Why Bitcoin Could Never Be Invented In A University

Original article: https://bitcoinmagazine.com/culture/bitcoin-could-never-be-invented-in-a-university

The author, Korok Ray, is an associate professor at the Mays Business School of Texas A&M University and director of the Mays Innovation Research Center.

The gist: academic research in today's universities and colleges concentrates on incremental work within single disciplines, so cross-disciplinary innovation is hard to come by, while Bitcoin is precisely a joint achievement of cryptography, monetary economics and network science. The article ends with some suggestions for how universities could develop.

Since the announcement of its inception in October 2008, Bitcoin has reached a market capitalization of over $1 trillion. Its growth has drawn both retail and institutional investment, as the financial community now begins to see it as a legitimate store of value and an alternative to traditional assets like gold. Innovations in second-layer settlements like the Lightning Network make it increasingly possible for bitcoin to serve as a medium of exchange.

Yet, Bitcoin has a precarious and somewhat checkered history in academia. Curricula in universities are largely devoid of any mention of Bitcoin. Instead, the teachings are often left to student clubs and nonprofits. Over time this may change, as Bitcoin and the entire cryptocurrency market continue to grow, attracting attention from top talent in both engineering and business. Bitcoin's absence from universities is not a problem with Bitcoin itself, but rather with the academy, with its insufficient embrace of innovation, its emphasis on backward-looking data analysis and its excessive preoccupation with individual disciplines rather than collective knowledge. Bitcoin can serve as an inspiration for what academic research can and should be. In fact, it presents a roadmap to change higher education for the better.

Similarities With The Academy

One may wonder why anyone should even assume a relationship between Bitcoin and universities. Technologists are in constant contact with real needs of customers today, while faculty develop basic science that (may) have application far into the future. After all, innovations like Facebook, Microsoft, Apple and even Ethereum were launched by young men who didn’t graduate from college. Yet, it’s no accident Silicon Valley and Route 128 both emerged in proximity to our nation’s greatest coastal universities. So, there’s certainly a correlation between universities and the tech sector. Even so, Bitcoin is different. Bitcoin has an even tighter relationship with its intellectual and academic roots. To understand this, we must peer into Bitcoin’s history.

At the turn of the century, a ragtag band of cryptographers, computer scientists, economists and libertarians — the cypherpunks — exchanged messages over an internet mailing list. This was an obscure electronic gathering of a diverse cadre of scientists, technologists and hobbyists who were developing and sharing ideas of advancements in cryptography and computer science. Here’s where some of the early giants of applied cryptography spent time, like Hal Finney, one of the early pioneers of Pretty Good Privacy (PGP).

It was on this mailing list that the pseudonymous creator of Bitcoin, Satoshi Nakamoto, announced his solution for an electronic payment system. After that announcement, he began to field questions from the forum on both the concept and its execution. Shortly thereafter, Nakamoto provided the full implementation of Bitcoin. This allowed participants of the forum to download the software, run it and test it on their own.

The Bitcoin white paper bears similarity to academic research. It follows the structure of an academic paper, has citations and looks similar to what any paper in computer science may look like today. Both the white paper and the conversations around it reference prior attempts at implementing the proof-of-work algorithm, one of the core features of Bitcoin. For example, the white paper cites HashCash from 2002, also part of the corpus of knowledge that preceded Bitcoin. Adam Back came up with proof-of-work for HashCash while trying to solve the problem of eliminating spam in emails.

Thus, Bitcoin didn’t fall out of the sky, but emerged out of a long lineage of ideas developed over decades, not days or weeks. We tend to think of technology as operating at warp speed, changing rapidly and being driven by ambitious, young college dropouts, but Bitcoin wasn’t based on “move fast and break things.” It was and is the opposite: a slow, careful deliberation based on decades of real science practiced not by kids, but more like their parents. The cryptography forum was similar in nature to an academic research seminar, where professional scientists politely but insistently attempt to tear down ideas to arrive at the truth. Though the concept of a white paper is now all the rage among alternative cryptocurrency coins and tokens, it’s the hallmark method of communicating ideas among the professional research community.

Even though the cryptocurrency economy today occupies center stage in the financial press and a growing share of national attention, when it emerged Bitcoin was as far from this as possible. It was obscure, technical and very fringe. In its long gestation from ideas that had been around for decades but unknown except to a small circle of cryptographers, economists and political philosophers, Bitcoin shares more in common with other radical innovations, like the internet, the transistor and the airplane. Just like those innovations, the story of Bitcoin is the triumph of individual reason over collective misperception. Just as the Wright brothers proved the world wrong by showing man could fly even though physicists claimed it was mathematically impossible, so too did Bitcoin confound the naysayers by building digital scarcity for the first time ever.

Why should we focus on Bitcoin rather than some of the other cryptocurrency tokens, like Ethereum? If you look under the hood, the majority of the innovation in cryptocurrency came from Bitcoin. For example, Ethereum relies on the same elliptic curve as Bitcoin, utilizing the same public key cryptography. Bitcoin emerged over a long gestation period of secret development by a pseudonymous applied cryptographer, and was released and debated on an obscure mailing list. For this reason, Bitcoin shares many similarities with the arcane academic circles that occupy modern universities. No professional cryptographer made Ethereum; rather, it was a teenager who even admits he rushed its development. Thus, it's only Bitcoin that has a deep connection to the academy, whereas the more incremental innovations crowding the cryptocurrency space now are more similar to the small advances taken in the modern technology sector.

Differences From The Academy

Bitcoin differs from the academy in important ways. Most significantly, Bitcoin is fundamentally interdisciplinary in a way universities today aren’t. Bitcoin fuses together three separate disciplines: mathematics, computer science and economics. It’s this fusion that gives Bitcoin its power and shatters traditional academic silos.

Public key cryptography has been the major innovation in applied cryptography and mathematics since its conception 50 years ago. The core concept is simple: a user holds a private key known only to themselves, which generates a public key known to all. The user can therefore distribute the public key freely without any security consequence, as only the holder of the private key can produce valid signatures. Public key cryptography achieves this through one-way functions: transformations of data that are easy to compute but infeasible to reverse. In Bitcoin, this occurs through elliptic curves over finite fields of prime order.
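The one-way property described above can be seen directly with a standard hash function. Here is a minimal Python sketch using SHA-256 from the standard library; it illustrates one-wayness in general, not Bitcoin's elliptic-curve key derivation itself:

```python
import hashlib

# SHA-256 as a one-way function: trivial to compute forward,
# computationally infeasible to invert.
digest = hashlib.sha256(b"attack at dawn").hexdigest()
print(digest)  # 64 hex characters

# A tiny change in the input yields a completely unrelated digest
# (the "avalanche effect"), so the output reveals nothing about the input.
digest2 = hashlib.sha256(b"attack at dusk").hexdigest()
print(digest == digest2)  # False
```

Recovering the input from either digest would require brute-forcing the entire input space, which is exactly the asymmetry that signatures and proof-of-work both exploit.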

But public key cryptography isn’t enough. Because Bitcoin seeks to serve as an electronic payment system, it must solve the double-spending problem. If Alice pays Bob using bitcoin, we must prevent Alice from also paying Carol with that same bitcoin. But in the digital world, copying data is free and therefore, preventing double spending is seemingly hopeless. For this, Nakamoto utilized the blockchain, a construct from computer science. Cryptographer David Chaum laid the groundwork for the concept of a blockchain as early as 1983, in research that emerged from his computer science dissertation at Berkeley.

The blockchain is a linked list that points backwards to the original (genesis) block. Each block contains thousands of transactions, each transaction containing the ingredients for transferring bitcoin from one address to another. The blockchain solves the double-spending problem because it’s distributed, i.e., publicly available to all nodes on the Bitcoin network. These nodes constantly validate the blockchain with new transactions added only when all other nodes on the network agree (consensus). In our prior example, when Alice pays Bob, this transaction enters the blockchain, which all nodes observe. If Alice tries to use those same bitcoin to pay Carol, the network will reject that transaction since everyone knows that Alice has already used those bitcoin to pay Bob. It’s the distributed, public nature of the blockchain that prevents double spending, a problem unique to electronic payments.
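The backward-linked structure described above can be sketched in a few lines of Python. This is a toy model, not Bitcoin's actual block format: each block records the hash of its parent, so altering any historical block breaks every link after it:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    """A block is just its parent's hash plus a list of transactions."""
    return {"prev_hash": prev_hash, "transactions": transactions}

# The genesis block points at nothing; every later block points back at its parent.
genesis = make_block("0" * 64, ["coinbase -> alice: 50"])
b1 = make_block(block_hash(genesis), ["alice -> bob: 10"])
b2 = make_block(block_hash(b1), ["bob -> carol: 5"])

# Tampering with any historical block changes its hash, which invalidates
# the prev_hash stored in every descendant -- the chain of links is what
# makes the shared history hard to rewrite.
assert b2["prev_hash"] == block_hash(b1)
```

Since every node holds a copy of this chain, a second spend of the same coins simply fails validation against the history everyone already agrees on.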

Indeed, Satoshi designed the blockchain specifically as a solution to double spending. It’s inherently inefficient, as it requires the entire network to constantly validate and reproduce the same data. This is also why most applications of blockchain technology outside of Bitcoin make little sense, as it forces an inefficient solution custom-built for electronic payments onto other applications that would be efficiently solved with central databases. The notion of a blockchain as a reverse-linked list by itself is not revolutionary in computer science, but its distributed nature specifically designed to prevent double spending is.

Even so, cryptography and blockchain aren’t enough. There needs to be a reason for the network to secure the blockchain. This is where the economics of Bitcoin shine. Nakamoto proposed a group of computers that would prove that the history of transactions did in fact occur. This proof requires costly work to be done. Nakamoto solved this by setting up a tournament in which individual computers (called miners) would compete to find a seemingly random answer through a one-way function called SHA256. The winner would receive newly minted bitcoin, which the network would release. The answer to the function must be sufficiently challenging that the only way to solve it is to deploy more computational resources. Bitcoin mining requires real computation and therefore real energy, similar to gold mining a few generations ago. But unlike gold mining, the issuance schedule of new bitcoin is known by everyone.
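The mining tournament can be imitated with a toy proof-of-work loop. This is an illustrative sketch only: real Bitcoin double-hashes an 80-byte binary header against a much harder target, but the brute-force structure is the same:

```python
import hashlib

def mine(block_header: str, difficulty: int) -> int:
    """Find a nonce such that SHA256(header + nonce) starts with
    `difficulty` zero hex digits. Because SHA-256 is one-way, the
    only strategy is to try nonces by brute force."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Toy header; 4 leading zero hex digits ~ 65,000 hashes on average.
header = "prev_hash|merkle_root|timestamp"
nonce = mine(header, difficulty=4)
digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Note the asymmetry: finding the nonce costs many hashes, but any node can verify the winner's claim with a single hash, which is what makes the contest cheap to audit.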

The economics of mining is the design of a contest that rewards new bitcoin to miners that solve a puzzle. This is a form of a microeconomics mechanism, i.e., a game economy design where individual agents compete for a reward. The macroeconomics of Bitcoin pertains to the issuance schedule, which adjusts predictably over time, with the block reward reducing by half every four years. This forces the constraint of 21 million bitcoin. This inherently limits the inflationary growth of the currency and imposes a constraint no fiat currency today must adhere to. The difficulty of the underlying puzzle adjusts roughly every two weeks regardless of the computing power of the network, providing a robust implementation despite exponential advances in computing power in the decades since Bitcoin launched.
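The 21 million cap follows arithmetically from the halving schedule: a subsidy of 50 bitcoin per block, halving every 210,000 blocks, sums to a geometric series of roughly 50 * 210,000 * 2 = 21,000,000. A quick check in Python (simplified with floats; the real protocol rounds down to whole satoshis, so the true cap is a hair under 21 million):

```python
# Block subsidy starts at 50 BTC and halves every 210,000 blocks
# (roughly every four years at one block per ten minutes).
subsidy = 50.0
blocks_per_halving = 210_000
total = 0.0
while subsidy >= 1e-8:  # stop once the subsidy drops below one satoshi
    total += subsidy * blocks_per_halving
    subsidy /= 2
print(round(total))  # 21000000
```

No discretionary authority can alter this schedule without changing the consensus rules, which is the algorithmic "sound money" property the article refers to.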

This interdisciplinary feature of Bitcoin is existential, not incremental. Without any of its three components (public key cryptography, a backward-linked blockchain and a mining contest using proof-of-work), Bitcoin would not function. By itself, each of the three components consisted of a coherent body of knowledge and ideas. It was their combination that was Nakamoto’s genius. So too will future radical innovations need to link together multiple disciplines in existential ways, without which their combination would not survive.

Why Not The Academy?

Why could Bitcoin not have emerged out of the academy? First, Bitcoin is inherently interdisciplinary, yet scholars at universities are rewarded for excellence in single domains of knowledge. Bitcoin fuses together ideas from computer science, mathematics and economics, yet it is unlikely any single university faculty member would have the breadth of knowledge necessary for interdisciplinary consilience.

Second, the academy suffers from incrementalism. Academic journals explicitly ask their authors for the incremental contribution their work provides to the literature. This is how knowledge advances, inch by inch. But Bitcoin — like other radical innovations in history, such as the airplane and the transistor — made giant leaps forward that would likely not have survived the peer review process of the academy.

Third, Bitcoin rests on libertarian political foundations which are out of favor among the mainstream academy, especially professional economists. Baked into the software are algorithmic representations of sound money, where the Bitcoin protocol releases new bitcoin on a predictable schedule. This is very different from the world we live in today, where the Federal Open Market Committee has full discretionary authority on the money supply. The cypherpunks who vetted Bitcoin v0.1 shared a skepticism of collective authority, believing technology and cryptography can provide privacy to individuals out of the watchful eyes of the government or any large organization.

Most economists don’t share this skepticism towards central authority. At least the social science community never took Bitcoin seriously. Besides, the Federal Reserve has an outsize role in both funding and promoting mainstream academic economic research. It recruits from top Ph.D. programs, hires bank presidents and governors who were former professors of economics, and encourages its staff to publish in the same academic journals as the academy. It is no wonder the university of faculty, influenced by the culture of the Fed, would not embrace technology that radically replaces it.

I asked all living Nobel laureates of economics to speak at the Texas A&M Bitcoin Conference, and all but one declined. Some admitted to not knowing enough about Bitcoin to warrant a lecture; at least they were honest about the constraints of the disciplinary model in which they've so successfully thrived. Others, like Paul Krugman, view cryptocurrencies as the new subprime mortgage (he also once predicted that the internet would have the same impact on the economy as the fax machine). Academic economists dedicated almost no attention to Bitcoin's rise and even now remain ignorant of how the Bitcoin blockchain works, despite its being arguably the only real innovation in finance of the last decade.

Bitcoin is first and foremost an intellectual contribution. It doesn’t require a deep knowledge of industry, special insight into the current practices of firms or knowledge of idiosyncratic details of the labor and capital markets. It didn’t build from existing practice, but rather from existing theory. For these reasons, Bitcoin emerged unapologetically out of the land of ideas, and should, in some sense, have come from the academy. An academic economist could’ve possibly designed the mining tournament, a computer scientist developed the blockchain and a mathematician developed public key cryptography. It takes an unlikely fellow (or team) to combine these three innovations together. Universities develop faculties with deep expertise in their individual disciplines but do nothing to tie the disciplines together in the way Bitcoin does. For this reason, Bitcoin couldn’t have emerged out of the university, even though it rests on disciplines well established within the university. The problem isn’t the knowledge itself but its organization. And therein lies the opportunity.

How Did We Get Here?

In its current form, the academy is not suited for innovations like Bitcoin. After students enter graduate school, they learn the techniques of their own discipline, which they use to publish in specialized journals that earn them tenure and future academic recognition with a small set of peers within that discipline. These isolated corridors of knowledge have ossified over centuries ever since the early universities. How did this happen?

There are two primary trends in the academy since World War II. By far the most important is the digital revolution. As computing power became accessible to anyone, the objective of science shifted from building theory to measurement. Suddenly, a wide array of social and natural science data was available to researchers from a laptop anywhere in the world. The growth of the internet spread data sharing and data availability, and advances in microprocessing power made large-scale analysis of data cheap and easy. The academic community shifted en masse to data analysis and moved from trend to trend on 10-15 year cycles. The first cycle was on summary statistics and variance analysis, the second was on linear regression and the third on machine learning. When problems arose in the specific domain of each discipline, rarely did scholars return to their underlying theory for revision. Instead, they simply fed more data into the machine, hoping measurement error and omitted variables were to blame.

The growth of big data and statistics, in concert with machine learning, has led us to the present where artificial intelligence (AI) is a black box. No researcher can fully explain what exactly AI is doing. At the same time, questions have become smaller. Before, development economics as a field would ask, “Why is Africa so poor?” Now, research in the field asks whether placing a sign on the left or the right side of a bathroom door is more likely to lead to usage. This preoccupation with causality is intellectually worthwhile but comes at a high price, as often the researcher must narrow his domain to behaviors that are easily observable and measurable. The large, complex and mathematical theories developed after World War II were largely untestable, and so empirical researchers abandoned those theoretical foundations. Where once academics held the intellectual high ground by asking the biggest questions of the day, now empirical research dominates academic journals. Experimental physicists and empirical economists alike mostly cite other data-driven work.

As computers filtered throughout our society, students were exposed to computation earlier in their lives. By the time they arrived in college and graduate school, they already had a basic facility with data manipulation and analysis. Why bother with mathematics when a few simple experiments and linear regressions can produce tables of results that can be quickly published? Over time, students gravitated towards data work as the academic profession slowly migrated away from math.

It became far easier for journals to accept papers with some small experimental or empirical fact about the world. Given that editors and referees make decisions on academic research on a paper-by-paper basis, there's no overarching evaluation of whether the body of empirical and experimental work truly advances human knowledge. As such, data analysis has run amok, with teams of researchers making ever more incremental advances, mining the same core data sets, and asking smaller and more meaningless questions. Does rain or sunshine affect the mood of traders and therefore their stock picks? Can the size of a CFO's signature on an annual statement measure his narcissism and predict whether he will commit fraud? (I'm not making this stuff up.)

One might think that advances in computation would have led research to verify some of the theories developed after World War II, but that has not been the case. In technical terms, many of those complex models are endogenous, with multiple variables determined in equilibrium simultaneously. As such, it’s a challenge for empirical researchers to identify specifically what’s happening, such as whether increasing the minimum wage will increase unemployment, as Economics 101 suggests. That has led to a turn to causality. But causal inference requires precise conditions, and often those conditions do not hold over the economy but rather in a few specific examples, like U.S. states that adopted anti-abortion laws at different times. The Freakonomics revolution in economics may not dominate the Nobel Prizes, but certainly has influenced the majority of published social science research.

The chief problem with this data-driven approach is its ultimately backward-looking approach. By definition, data is a representation of the world at a point in time. The entire fields of business and economics research are now almost wholly empirical, where scholars race to either gather new datasets or use novel and empirical techniques on existing datasets. Either way, the view is always from the rearview mirror, looking back into the past to understand what did or didn’t happen. Did low interest rates cause the Global Financial Crisis? Do abortions reduce crime? Does the minimum wage reduce employment? These questions are fundamentally preoccupied with the past, rather than designing new solutions for the future.

The second trend has been the shrinking of the theory community, both inside and outside the academy. The number of theorists has vastly shrunk, and they also have refused to collaborate with their much larger empirical and experimental colleagues. This tribalism led theorists to write ever more complex, intricate and self-referential mathematical models with little basis in reality and no hope for possible empirical validation. Much of game theory remains untestable, and string theory is perhaps the most extreme example of a self-referential world that can never be fully verified or tested.

Finally, academic theory trails technology by a long time. Often, mathematicians, physicists and economists provide ex-post rationalizations of technologies that have already been successful in industry. These theories don’t predict anything new, but rather simply affirm conventional wisdom. As the complexity of theory grows, its readership falls, even among theorists. Just like everything else in life, the tribalism of theory leads the community to act as a club, barring members who don’t adopt its arcane language and methods.

Thus, we’ve arrived at something of a civil war; the theory tribe is shrinking year by year and losing relevance to reality, while the empirical/experimental data community grows over time, asking smaller questions with no conceptual guidance. Both academics and technologists are left in the dark about what problems to solve and how to approach them. It also leads to a pervasive randomness in our collective consciousness, leading us to blow in whatever direction the winds of the moment take us. Economics has well-established theories of markets and how they function, yet technology companies are massive marketplaces unmoored in much of that same economic theory. Computer science rests on a sturdy foundation of algorithms and data structures, yet the theory community is obsessed with debates on computational complexity, while trillion-dollar tech companies perform simple A/B tests to make their most significant decisions.

We’ve reached a tipping point in the scale of human knowledge, where scholars refine their theories to ever more precise levels, speaking to smaller and smaller communities of scholars. This specialization of knowledge has led to hyperspecialization, where journals and academic disciplines continue to divide and subdivide into ever smaller categories. The profusion of journals is evidence of this hyperspecialization.

From Science To Engineering

Much future innovation will occur at the boundaries of the disciplines, given that much knowledge has already been discovered within existing disciplines, but there must be a greater transformation. Universities today still largely adopt the scientific method, establishing knowledge for its own sake and seeking to know the natural, physical and social world, but we shouldn’t stop there. Given their fundamental knowledge, scientists are in the best position to engineer better solutions for our future. Moving to an engineering mindset will force academics to design and implement solutions to our most pressing problems. In the long term, it will also close the gap between the academy and industry. The pressure students face to search for jobs and start companies, which takes a toll on their academic coursework, emerges because there’s a gap between the needs of the market and the academic curriculum. Were this gap to close, and students instead spent time in college building better solutions for the future, this cognitive dissonance would dissipate.

This transformation has already begun in some disciplines, like economics. One of the most successful applied areas of economics is market design, which unambiguously adopted an engineering mindset and delivered three Nobel Prizes in the last decade alone. These scholars came from engineering and adapted game theory to build better markets that can work in the real world, such as better ways to match kidney donors to recipients, students to schools or medical residents to hospitals. They also designed many of the largest auctions in use today, such as the government's spectrum auctions and the ad auction within Google. There's no reason the rest of the economics profession, or even the rest of higher education and the academic community, cannot similarly position itself to adopt more of this engineering mindset.

Over time, closing this gap between the academy and industry will relieve much of the public outcry against escalating tuition and student debt. Once professors orient their research toward developing better solutions for society, so too will their students and the companies that employ them. Students will no longer resent their faculty for spending time on research rather than teaching if that research directly creates technologies that ultimately benefit the students, their future employers and society at large. Over time, this will naturally close the skills gap that America currently faces. Universities will no longer need to focus on STEM skills explicitly, but rather on providing technological solutions that will ultimately draw heavily from the STEM areas anyway.

A Call To Action

How can we reform higher education to produce the next Bitcoin? Of course, the next Bitcoin won’t be Bitcoin per se, but rather a first-principled innovation that conceives of an old problem in an entirely new way. I have three specific recommendations for university culture, priorities and organizational structure.

First, the academy must explicitly embrace engineering more than science, even on the margin. The Renaissance and the Age of Reason have led American higher education to celebrate science and knowledge for its own sake. The motto for Harvard is "Veritas," or "truth," while that of the University of Chicago is "Crescat scientia, vita excolatur," meaning "Let knowledge grow from more to more, and so human life be enriched." These universities, based on the scientific and liberal arts traditions, have done much to establish the corpus of knowledge necessary for human progress, but this last half-century has been the age of the engineering universities, with Stanford and MIT competing to build solutions for the world, not just to understand it. This ethos of engineering should extend beyond engineering departments, even and especially to the social sciences. For example, require all freshmen to take a basic engineering class to learn the mental framework of building solutions to problems. Economists have articulated the benefits of sound money for generations, but only through an engineered system like Bitcoin can those debates become reality.

This shift toward engineering is already happening somewhat within the social sciences. For example, the recent Nobel Prizes given to Paul Milgrom and Bob Wilson in economics celebrated their work in designing new markets and auctions to solve real resource allocation problems that governments and society face. This community of microeconomic theorists is still a small minority within the economics profession, yet their work blends theory and practice like no other field and should have higher representation among practicing scholars. Universities should abandon the forced equity of treating all disciplines as equal, allocating an even share of faculty lines and research dollars to every discipline no matter its impact on society. Instead, prioritize disciplines willing and able to build solutions for the future. This culture must come from the top and permeate down to the recruiting decisions of faculty and students.

Second, reward interdisciplinary work. The traditional, centuries-old model of deep disciplinary work is showing its age, while most of the exciting innovations of our time lie at the boundaries of the disciplines. Universities pay lip service to interdisciplinary work as a new buzzword across college campuses, but unless the incentives for faculty change, nothing will. Promotion and tenure committees must reward publications outside of a scholar's home discipline and especially collaborations with other departments and colleges. While large government agencies, like the National Science Foundation, have increased the allocation of funding toward cross-disciplinary teams, when it comes time for promotion and tenure decisions, faculty committees are woefully old-fashioned and still reward scholars within rather than across disciplines. Over time, I expect this to change as the older generation retires, but the most pressing problems of society cannot wait and universities should pivot faster now. Unless promotion and tenure committees explicitly announce recognition for interdisciplinary work, nothing else matters.

Third, the academy must aim high. Too often, academic journals are comfortable seeking incremental contributions to the fund of knowledge. Our obsession with citations and small improvements inevitably leads to small steps forward. Academic communities have a reflexive desire to be self-referential and tribal. Therefore, scholars prefer small conferences of like-minded peers. Some of the biggest steps forward in the history of science came from giant leaps of understanding that could only have occurred outside of the mainstream. Bitcoin is one example, but not the only one. Consider the discovery of the double helix, the invention of the airplane, the creation of the internet and, more recently, the discovery of the mRNA sequence for the COVID-19 vaccine. True progress comes from unapologetically tossing out the existing intellectual orthodoxy and embracing an entirely fresh look. The standards of excellence for our faculty and students must insist they aim to solve the biggest problems facing humanity. Too often this discourse is silenced on campus, and over time, it erodes the spirit of our young people. To achieve this, allocate research funding based on impact and make these requirements strict.

The vast increase in wealth from the technology sector has put various pressures on campus. For one, it induces young students to drop out and start new companies, following in the footsteps of the young founders who dominate the technological and financial press. This happens only because there’s a rift between the rewards of the market and the activities of the university. Remember that Bitcoin emerged from a small community of intellectuals seeking to engineer a solution to an ancient problem using new technology. This could’ve easily occurred within the academy, and in some sense, it should have.

The corporate firm, either start-up or established, is the natural locus for incremental innovation. The constant noise of customer needs, investor demands and industry knowledge make it a natural place for small changes in society’s production possibilities. Radical innovation is uniquely suited to the academy with its longer, more deliberate timescale, access to deep science and isolation from the noise of the market, but it’s up to the academy to rise to that challenge. Let Bitcoin inspire us, so the academy becomes the quarterback and not just the spectator to the next radical innovation of our time.

The Purpose Driven Life

I stumbled into this one because of the Amazon reviews: an improbable 5.0 rating. In fact, only a third of the way through did it dawn on me that books of this kind, bound up with religious faith, probably never get the chance to receive low-star reviews.

The author asks readers to read one chapter a day for forty days straight. In terms of content, though, much of the doctrine is repeated again and again in a rather circular way.

How to worship God, how to hold yourself to the character of Christ, how to build fellowship with other believers, how to repay God's grace, and how to spread the gospel.

These five Purposes run through the whole book.

Looking back, even if we are pure atheists who accept only evolution, we too should, like every religion, ultimately give back to the background culture that nurtured our own thinking. This doctrine merely asks us to idolize and deify the object of that giving-back, from which it follows quite simply how one should act in order to attain the eternal.

Do not chase the fleeting gains of this world and abandon what is eternal in the universe. Seen through the lens of culture, these eternals attach themselves to the various religious doctrines, outlooks on life and systems of value that allow humanity to be passed down from generation to generation.

Count Zero

Having flipped through this book, my conclusion is simply that my English reading ability is not up to it.

I could not piece together the thread of the story at all, only follow assorted fragmentary scenes and bits of plot.

By comparison, Neuromancer was much easier to follow.

It seems I should go back to reading the Chinese editions.

Neuromancer (神经漫游者)

Not a very long science-fiction novel, but it uses so many obscure English words that it took me a long time; only with the help of some outside summaries can I barely claim to have finished it.

Seen against the rise of the metaverse over the past two years, Neuromancer is undoubtedly a classic metaverse novel.

There are also its connections to later films, such as The Matrix and Inception.

And there is the question of why the classic cyberpunk novel came from Gibson, who was practically clueless about technology.

Why, as technology evolves, do people live ever lower, ever harder lives? The novel's answer is the cheap services machines provide for the underclass, designed only to squeeze the last bit of money from their pockets; capsule hotels, for example.

Many concepts that were science fiction at the time have since been realized, and indeed built for the low end. The bosses, by contrast, take their pleasures amid unspoiled nature.

Given how many obscure words it contains, a rereading is probably off in the indefinite future.

Good to Great

I finally finished this book, in its electronic edition.

Admittedly, most of the companies the book holds up as Great fared rather miserably over the following decade, unworthy of the title, Fannie Mae most of all.

You could read that as a curse, or as evidence that the strengths the author identifies cannot make up for the flaws these companies may still carry.

The traits the author finds in companies that went from Good to Great are these:

Level 5 leadership: a CEO who is humble yet highly capable, and who thinks of the company's development before personal gain or loss. My extended thought: having such a leader is certainly fortunate, but while people like this are not rare, few of them become CEO, because in the competition for the role their humility makes them easy to overshadow by louder colleagues. When companies choose a CEO, it is usually not this type who wins out, which raises the question of what real leadership is. That is also why it tends to be companies on unremarkable tracks, the ones the louder candidates have abandoned for shinier arenas, where such a person gets the chance to be CEO and a Great company can emerge.

Get the right people into the company first, then get the right people into the right seats. Today this counts as conventional wisdom.

Face the brutal facts calmly, but do not turn pessimistic; persist. Only optimism can achieve Greatness, but it must be optimism in the face of truly brutal reality.

The Hedgehog Concept: focus on your core competence and the goals that follow from it. This matters greatly: re-examine your core competence and goals repeatedly and periodically, stay clear-headed, and do not be diverted by sudden opportunities. Once you have made easy money, you can never get back into the hedgehog mindset.

The 3 Circles, or more precisely three questions: Where does your own advantage lie? What is the key to your growth engine? What are you passionate about? If these three can be brought into alignment, you are more or less unstoppable.

Discipline in work.

Master the technology first, then use it to accelerate the business, without chasing technological leadership for its own sake. This point is interesting: the statistics show that companies with leading technology do not necessarily become Great, yet to become Great a company must still make sensible use of mature, leading technology to accelerate its business.

Accumulate progress every day and let success build gradually, rather than betting on a sudden transformation. The proverbial "a little better every day"...

Passing Huaqing Palace (过华清宫)

Here is my attempt to translate the famous poem, with reference to other translations.

Passing Huaqing Palace
Du Mu

Viewed from Chang’an, Mount Li seemed piled with embroidery, fold upon fold;
On the hilltop, countless gates opened one after another.

The imperial concubine smiled as a horse galloped up, trailing red dust;
No one knew it had come because the lychees arrived.

Restart the English writing

What level is my English grammar at? I am not sure.

Today, I installed the Grammarly app in my Chrome browser. When I type English on web pages, the plug-in checks my input and offers hints when it finds errors.

Therefore, I will try to do some writing on my personal blog, guipo.com.

Blink

I read the original English edition on a Digital Paper device. Almost every page had one or two words whose meaning escaped me; checking a dictionary on and off, I got through it more or less.

What shocked me most about Blink is that the abilities we have always prided ourselves on, judging people, reading situations, the very places AI was supposed to be unable to replace us, have already been deconstructed, detail by detail.

Take mind-reading, body-language analysis, facial-expression analysis, speech analysis: all of it shows that postmodernism plus statistics plus computing has already deconstructed our everyday life. Follow this line of thought and management roles may well end up better done by AI; then AI rules humanity, AI takes over the low-level work as well, and humans are left with no place to stand.

I used to think postmodernism was absurd; now I realize how badly wrong that was. Once postmodernism is put into application, human life, relationships and behavior are bound to be dismembered, and humanity can hardly resist a powerful, self-learning AI deconstructing human society. For now, big data only runs statistics at the level of populations; a little further on, intelligent assistants will build on big data to deconstruct human individuals.

How can humans fight back? Individual liberation and encouraging diversity would help resist AI to some degree, but AI develops far faster than human generations turn over. If humanity cannot mount a resistance within the current generation, it will have no chance at all.