
Effective Slack alerts


"I think if you maintain a force in the world that comes into people's sleep, you are exercising a meaningful power."

— Don DeLillo, Underworld

Over the last few years, Slack quietly took over my world. As 8th Light grew from one office in Chicago to five cities on two continents, group chat became an important tool for collaborating across multiple time zones and diverse interests. On client teams, Slack became an always-on part of our development workflow, aggregating information from many disconnected systems in one place (right next to the cat gifs). In open source, all but the most stubborn holdouts left the old world of IRC to settle an archipelago of undiscovered Slacks.

Some days, Slack lives up to its multibillion dollar promise to "put collaboration at your fingertips." On other days, it's "an all-day meeting with random participants and no agenda." Either way, it's usually the first app I open in the morning, and the last one I close. Even then, it follows me home on my phone and seeps into my email inbox, one of those tools I can never really quit.

Slack took off in tech in part by making it easy for teams to aggregate notifications in one place and respond in real time. On many software teams, bots and integrations provide a constant stream of updates from project management tools, bug trackers, version control, and continuous integration systems, at every step of the development and deployment pipeline. Unlike email, where billions of unread automated messages have become permaclutter in inboxes and servers around the world, the real-time nature of these alerts is a good match for a real-time tool like chat. More important, Slack and its many integrations have made monitoring and alerting easier and more accessible for small teams.

Sending alerts to Slack can have huge benefits by providing fast feedback in one central location, but it's easy for notifications to become noisy and overwhelming. More than one of my teams has resorted to creating dedicated "no robots" channels where humans can talk to each other free of automated interference. When notifications become noise, teams lose the information sharing benefits that made Slack so useful in the first place, and critical signals can quickly drown in a sea of irrelevant detail.

Fortunately, the same principles that apply to monitoring production systems apply to Slack notifications. Good alerts of any kind should be actionable, focused, unique, real, and urgent.

Actionable

If a human doesn't need to do something in response to an alert, it's probably noise. Does your team need to see every new Git commit on every branch, or just a few important ones? Does anyone need to react when the CI build passes or test coverage increases? Team chat makes it easy to tell whether notifications are actionable: just pay attention to whether anyone discusses them or does something when they appear. If an automated message doesn't provoke a conversation or communicate something useful, think hard about whether it really needs to exist.

Focused

One person's signal is another's noise. Focus notifications on the smallest set of people who need to know about a particular alert, rather than notifying everyone of everything all the time (a sure way to drive everyone bonkers). Create user groups to notify subsets of the full team. Use separate channels with distinct purposes instead of sending all your alerts to one place. And consider dedicated channels for important alerts that might otherwise disrupt human conversations.
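As a rough sketch of what this looks like mechanically, an alert posted through a Slack incoming webhook can mention a user group instead of the whole channel. (The webhook URL and the S0123ABCDE group ID below are placeholders, not real values.)

curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Staging deploy failed. <!subteam^S0123ABCDE> please take a look."}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX

Only members of that group get pinged; everyone else can skim or mute the channel without missing anything addressed to them.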

Unique

Integrations sometimes overlap. If two tools are posting similar messages at the same time, one of them is probably noise. But the perils of duplication run deeper. "Don't repeat yourself" is about knowledge, both in code and on Slack. If the same notification appears in many channels, the information it conveys and the conversation it triggers will end up fragmented between different people in different places.

Real

At their best, false positives are distracting annoyances. At their worst, they create chronic background noise that will eventually cause real alerts to go unnoticed. If an alert is a false alarm, fix or remove it right away. This is obvious in principle but surprisingly hard in practice. Software changes frequently and alerts tend to lag behind the pace of code changes. It’s easy for alerts that used to be useful to turn into noise over time as the circumstances of your application change. Be ruthless about pruning false positives.

Urgent

Real time tools are about the present. If your alert does not need attention in the present moment, don’t send it to chat. (Ever notice that Slack's own weekly summary messages are emails? Now imagine the horrifying nightmare world where Slackbot sends these messages instead). Of course, if your alert really really needs attention in the present moment, don't rely only on Slack. Send it over multiple channels and consider using a tool designed more explicitly for incident response and escalation. "Hope someone on the team hasn't discovered Do Not Disturb" is not an effective alerting strategy.

Slack has squarely conquered the tech world. As it turns towards worlds yet to conquer, automated alerts may well become part of workplaces everywhere. Whether your team monitors a fleet of servers, a folder of spreadsheets, or occasional dentist appointments, following the principles of effective monitoring will make your alerts more effective and your team happier.


Essential & Relevant: A Unit Test Balancing Act


I have never been a fan of "DRYing" out unit tests (i.e., abstracting duplicated test setup). I have always preferred to keep all of my test setup inside each individual test, and I opined about how this made my test suite more readable, isolated, and consistent, despite all of the duplication. I've never been good at articulating why I preferred to do things this way, but I felt that it was better than the alternative: a test suite full of setup methods that forced me to scan many lines of code to try to understand how the tests work.

Then, I read xUnit Test Patterns by Gerard Meszaros. In his book, he codified some of the most profound formulas for writing unit tests. Of them all, the most well-known is probably The Four-Phase Test. Though it was later disseminated in distilled form as "Arrange, Act, Assert" (and its BDD variant "Given, When, Then"), the core of it remains the same: all unit tests, in all programming languages, can take the following form:

test do
  setup
  exercise
  verify
  teardown
end

In the setup step, we instantiate our system under test, or SUT, as well as the minimum number of dependencies it requires to ensure it is in the correct state:

user = User.new(first_name: "John", last_name: "Doe")

In the exercise step, we execute whatever behavior we want to verify, often a method on our subject, or a function we're passing our subject into:

result = user.full_name()

In the verify step, we assert that the result of the exercise step matches our expectation:

assert(result == "John Doe")

Finally, in the teardown step, we restore our system to its pre-test state. This is usually taken care of by the language or framework we're using to write our tests.
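If our setup had touched something external, such as a row in a database, an explicit teardown might be a single cleanup call (the User.delete_all() below is hypothetical, shown only to complete the picture):

User.delete_all()

For the in-memory user above there is nothing to clean up, so we can lean on the framework's defaults.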

All together, our test ends up like so:

// Example 1
...
  describe("User#full_name") do
    it("returns the full name of the user") do
      user = User.new(first_name: "John", last_name: "Doe")
      result = user.full_name()
      assert(result == "John Doe")
    end
  end
...

It's in the "setup" step where we want to establish only the essential & relevant information needed throughout the test. Example 1 demonstrates this: we're verifying that a user's full name is the concatenation of their first and last names; therefore, including the first and last names explicitly within the test setup is both essential & relevant.

In Meszaros's book, he writes about a testing anti-pattern called the Obscure Test, which addresses the imbalance between what is essential and what is relevant to our test setup.

Non-Essential & Irrelevant

As an example of non-essential & irrelevant test setup, we could tweak our original assertion like this:

// Example 2
...
  describe("User#is_logged_in?") do
    it("returns false by default") do
      user = User.new(first_name: "John", last_name: "Doe")
      result = user.is_logged_in?()
      assertFalse(result)
    end
  end
...

Here, instead of testing user.full_name() as the concatenation of first_name and last_name, we're testing that the user returned by User.new() responds to the is_logged_in?() message with false.

Is having a first_name and last_name relevant to is_logged_in?()? Probably not, but perhaps a user is only valid with a first_name and last_name, which is what makes that setup essential to the test. In this case, the only essential & relevant setup we need explicitly in our test is a valid user who is not logged in.

Having this irrelevant setup makes for an Obscure Test of the Irrelevant Information variety.

...Irrelevant Test can also occur because we make visible all the data the test needs to execute rather than focusing on the data the test needs to be understood. When writing tests, the path of least resistance is to use whatever methods are available (on the SUT and other objects) and to fill in all the parameters with values whether or not they are relevant to the test.

-xUnit Test Patterns

We fix this by extracting a setup function/factory method:

// Example 3
...
  describe("User#is_logged_in?") do
    it("returns false by default") do
      user = valid_user()  // setup function
      result = user.is_logged_in?()
      assertFalse(result)
    end
  end
...

The relevant information is here by way of the method name, and the essential setup is on the other side of the valid_user() method.
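What lives on the other side of that method might be as small as this (a sketch; the exact fields depend on what makes a user valid in our domain):

def valid_user()
  User.new(first_name: "John", last_name: "Doe")
end

The test reader doesn't need to know which fields a valid user requires, only that this one is valid and not logged in.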

Essential But Irrelevant

Assuming there are a lot of tests with similar setup, it's common to pull duplicated setup code into a setup function like the example above. This is also the solution for writing tests that have a verbose setup, and it helps us to ensure that we don't include any essential but irrelevant information in our tests:

// Example 4
...
  describe("User#full_name") do
    it("returns the full name of the user") do
      user = User.new(
          first_name: "John",
          last_name: "Doe",
          street_address: "1000 Broadway Ave",
          city: "New York",
          state: "New York",
          zip_code: "11111",
          phone_number: "555555555"
          )
      result = user.full_name()
      assert(result == "John Doe")
    end
  end
...

In this case, it may be essential to instantiate a valid user with a first_name, last_name, street_address, etc., but some of it is irrelevant to our assertion!

Like in Example 1, we're asserting against user.full_name(), and we established that including the first_name and last_name in the setup was in fact relevant to our test. However, if we used the valid_user() setup function from Example 3 here, our setup would not contain all of the relevant information:

// Example 5
...
  describe("User#full_name") do
    it("returns the full name of the user") do
      user = valid_user() // setup function
      result = user.full_name()
      assert(result == "John Doe")
    end
  end
...

This type of Obscure Test is called Mystery Guest.

When either the fixture setup and/or the result verification part of a test depends on information that is not visible within the test and the test reader finds it difficult to understand the behavior that is being verified without first having to find and inspect the external information, we have a Mystery Guest on our hands.

-xUnit Test Patterns

This is a case where there is essential & relevant information missing from the test. The solutions here are to 1) create an explicitly named setup function that returns the user we need, 2) create a setup function that returns a mutable user that we can update before our assertion, or 3) alter our setup function to accept parameters:

// Example 6
...
describe("User#full_name") do
  it("returns the full name of the user") do
    user = valid_user(first_name: "John", last_name: "Doe")  // new setup function
    result = user.full_name()
    assert(result == "John Doe")
  end
end
...

This is called a Parameterized Creation Method and we use it to execute all of the essential but irrelevant steps for setting up our test. With it, we're able to keep our test setup DRY by creating a reusable method that keeps essential information inline.
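A sketch of what that parameterized setup function could look like (the extra field values are illustrative defaults, not prescribed ones):

def valid_user(first_name: "John", last_name: "Doe")
  User.new(
      first_name: first_name,
      last_name: last_name,
      street_address: "1000 Broadway Ave",
      city: "New York",
      state: "New York",
      zip_code: "11111",
      phone_number: "555555555"
      )
end

The caller supplies only the values that matter to the test at hand; everything that is merely essential for validity stays behind sensible defaults.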


When deciding whether to DRY out our unit tests, I've found it important to consider what is essential to our setup vs. what is relevant to our test reader. There are thousands of pages more about what makes good unit tests, and I find this topic particularly nascent as the focus begins to shift from "why should we TDD" to "how do we TDD well." Being able to articulate what is essential & relevant to a test is the key to finding the balance between people like me, who have always opposed DRY unit tests, and people who prefer to keep things tidy. There are smells in both directions, but essential & relevant is the middle ground.


Retract with the old, add with the new


How Datomic made me reconsider data

Recently I’ve been working on a client project whose data paradigm has opened my eyes to a new way to look at and explore data. We’re using a Datomic database, which has compelled me to confront and challenge some of the assumptions I’d previously made about data storage.

Relational Data

For many of us, the idea of a database instantly brings to mind a data store that utilizes a relational model. We think of tables that have relationships to one another using primary keys, and data that can be queried using SQL. We understand data as occupying individual spaces in these tables, and we navigate specifically to those places in memory whenever we want to access, delete, or change data. For example, I may have a contacts table that looks like this:
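The values below are illustrative, but the shape is the familiar one: each contact occupies a single row, addressed by its primary key.

contact_id | first_name | last_name | primary_email_address
-----------|------------|-----------|----------------------
1          | Jane       | Doe       | jane@example.com
2          | Eli        | Manning   | eli@nygiants.com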

It’s possible that some of our contacts have several email addresses that they’d like to share with us. Let’s create another table called alternate_email_addresses to store this information, if we have it. We can link the contact to their alternate_email_address by including the contact_id in this table:
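Again with an illustrative row, linking Eli's alternate address back to his contact_id:

id | contact_id | email_address
---|------------|--------------------
1  | 2          | eli@themannings.com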

We can query for Eli Manning’s alternate email address(es) using the following SQL statement:

SELECT *
FROM alternate_email_addresses
WHERE contact_id=2;

Now, let’s pretend that Eli decided to quit football, leave the New York Giants, and start a software apprenticeship at 8th Light. We will need to change his primary email address in our database from “eli@nygiants.com” to “eli@8thlight.com,” since his New York Giants email address will presumably be deactivated.

UPDATE contacts
SET primary_email_address='eli@8thlight.com'
WHERE contact_id=2;

As you'll notice, after changing Eli's primary_email_address, we no longer have a record that this email address was ever "eli@nygiants.com." We could have moved it to the alternate_email_addresses table. However, because this email address has been deactivated, we would want to add a flag to that table to identify deactivated email addresses.

This change may require some other data cleanup: we would probably want to indicate that all of the other records in alternate_email_addresses are still active. This could get tricky depending on how many contacts we have. And all of this effort is just to save the fact that Eli used to have a different email address. This information may never be needed again… But what if it is?

Let’s take a little break from our New York Giants (there are some weeks when many of us fans sure want to!) to talk about Datomic and how it helps us address that question.

Datomic

Datomic is a distributed database that uses a logical query language called Datalog. Rich Hickey's Intro to Datomic video is a great overview of the rationale, architecture, and mechanics of Datomic, and I highly recommend viewing it if you’re interested in learning more. For now, I’ll give a brief introduction on its three main components: the data store, Peers, and the Transactor.

The data store is unique in that it is external, which allows users to persist their data in anything from a SQL system such as Postgres to a NoSQL service such as Amazon DynamoDB. The Datomic team reasoned that storing data is a problem that computing has already solved, so they focused on other challenges, like handling reads and writes.

A Peer is any application that includes the Datomic Peer Library. These Peers have the ability to query data within the application, communicate with other elements of the system (Transactor and data store), and also represent a partial copy of the database through caching. Because our application is able to query its local memory, each application instance has the ability to interact with the dataset independently.

The Transactor is separate from both of these components. Its sole responsibilities are to write information to the data store, and then alert its Peers about new data updates. It ensures that all data remains ACID compliant.

This three-part architecture has a lot of interesting implications, and there’s a lot of potential for really cool and exciting innovations in the future.

However, I can’t wait any longer to mention my favorite, and perhaps most mind-bending part about Datomic: all data in Datomic is immutable. This means that each element of data can never be changed, and is never deleted!

Immutability

The main tenet of Datomic is that it never forgets. In Rich Hickey’s Intro to Datomic video, he likens Datomic to the method of record keeping that has been used for hundreds of years, before computers or databases. Facts were written down on paper—on hanging wall calendars, or in address books—and each new piece of information that needed to be recorded would simply be added. Existing data would never be overwritten or erased.

I understand that you’re probably dying to see how this immutability can be helpful, and how it can even work in a large-scale production application that ever needs to edit data. Why would we want to go back in time to an older way of recording data?

In relational databases, we often overwrite our existing records with new information. We saw this in our example with Eli Manning’s email address. In order to represent that Eli’s primary_email_address is now “eli@8thlight.com,” we had to lose the fact that it was once “eli@nygiants.com” by overwriting it with the new data. This idea of “new information replaces old” originated when disk space was expensive, and computers didn’t have any to spare. Data is saved to a specific place, and then recovered via a pointer. We remove old information with the assumption (or the hope) that we’ll never need it again, and this allows us to add new data without eating up additional disk space.

Nowadays, though, disk space is plentiful, and we can afford to save everything.

Rather than organizing data in a series of boxes (or relational tables) that are stored in a particular place, Datomic databases can be thought of as a ledger of facts that are written at a particular time. Whenever we write to the data store, we’re adding a new set of facts that we want to be able to look back to and remember.

Every transaction written by the Transactor is put in a new place in memory and is given a timestamp, and every piece of data is connected to the transaction that added it. Because we know which transaction added which piece of data, we are able to easily access all of our data as-of any point in time, or even within a window of time.

As we’ll soon see, we don’t need to worry about losing Eli’s email address from his days in the NFL, because we can always query our database as-of a time before his 8th Light apprenticeship, when he was still throwing touchdowns for the Giants.

The Datom

Datomic’s one and only data structure is the Datom. That’s right: every piece of data in Datomic fits the definition of a Datom. As per the official Datomic glossary, the definition of a Datom is:

An atomic fact in a database, composed of entity/attribute/value/transaction/added.

For those of us whose brains are accustomed to thinking of data in rows and columns, we can correlate a Datom to a row, and a Datom’s attribute to a column.

Let’s take a look at how Eli Manning’s information may be stored in a Datomic database comprised of several Datoms.
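Sketched as a set of Datoms (the transaction numbers are illustrative, chosen to match the discussion below):

entity | attribute                | value                 | transaction | added
-------|--------------------------|-----------------------|-------------|------
2      | :contact/first-name      | "Eli"                 | 1000        | true
2      | :contact/last-name       | "Manning"             | 1000        | true
2      | :contact/email-addresses | "eli@nygiants.com"    | 1000        | true
2      | :contact/email-addresses | "eli@themannings.com" | 1000        | true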

We have one Datom for each attribute that makes up Eli’s contact entity. This entity itself is represented by a unique entity-id, 2, which is generated by the Transactor.

Let’s go back to our example of replacing Eli’s New York Giants email address with a brand new 8th Light one. In Datomic, it is possible to define attributes with varying cardinality so that they can be associated with just one value, or with multiple values. This means that the idea of an alternate, or additional email address, is already built in. We’ll touch on this a little more soon, but for now let’s take a look at the new fact that we would write in our database to represent replacing both of Eli’s email addresses:
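Again sketched with illustrative values:

entity | attribute                | value                 | transaction | added
-------|--------------------------|-----------------------|-------------|------
2      | :contact/email-addresses | "eli@8thlight.com"    | 1001        | true
2      | :contact/email-addresses | "eli@themannings.com" | 1001        | true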

Notice that:

  • We have the same entity number, because we’re still working with Eli’s contact entity.
  • We include the attribute we’re editing, :contact/email-addresses.
  • The value of that attribute is what we’re changing, so notice how we still include the “eli@themannings.com” address, but we change the other one. This allows us to say that these are both of Eli’s current email-addresses as of transaction 1001.
  • Like we mentioned before, if we want to see what Eli's email-addresses were as-of transaction 1000, or 999, or 1, we could do that at any time.

In the column farthest to the right, we have our “added” component of the Datom, which is true for all of these Datoms. We’re also able to retract a fact if it is no longer true for an entity. To illustrate this, let’s suppose that Eli decides to go off the grid entirely and deactivate both of his current email-addresses. To reflect this in our database, we would want to add a new Datom that says that these email-addresses are now false.
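Those retraction Datoms might look like this (the transaction number is again illustrative):

entity | attribute                | value                 | transaction | added
-------|--------------------------|-----------------------|-------------|------
2      | :contact/email-addresses | "eli@8thlight.com"    | 1002        | false
2      | :contact/email-addresses | "eli@themannings.com" | 1002        | false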

To Schema, or not to Schema

Datomic does utilize a schema, though it differs from what we typically think about with traditional relational schemas. A Datomic schema describes specific characteristics of attributes, but doesn’t necessarily need to limit those attributes to a specific type of entity—this can be done in the application code. Defining an attribute requires three pieces: the unique identifier (or name) of the attribute (:db/ident), the type (:db/valueType), and the cardinality (:db/cardinality). You can see that even Datomic’s built-in attributes follow this structure: the identifier of each (:db/ident, :db/valueType, :db/cardinality) is defined under the :db namespace.

For our contact database, we may create a schema defining three attributes:
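A sketch of that schema, written as a transaction of attribute definitions (the exact map syntax can vary a bit between Datomic versions):

[{:db/ident       :contact/first-name
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident       :contact/last-name
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one}
 {:db/ident       :contact/email-addresses
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/many}]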

Although it is not essential to include a namespace for each attribute that matches up to its entity, it does help to avoid name collisions, and it also helps to communicate the intent of each attribute. In this schema, we are expressing that we’re setting our database up for a contact entity with three attributes.

Benefits & Challenges

Implementing a looser structure in our data allows us to be adaptable to almost anything that our ever-changing requirements may throw at us. With Datomic, we are not bound by a rigid structure of existing tables, relationships, and keys.

The beauty of the Datom being fairly generic is that it’s also ubiquitous. We can use it for any entity use case that comes our way. What if Eli’s big brother Peyton also decides that he wants to join 8th Light, but instead of supplying his email addresses he gives us several phone numbers? We could easily accommodate this with our loosely structured Datomic database. We would need to add a :contact/phone-numbers attribute to our schema:
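Again as a sketch:

[{:db/ident       :contact/phone-numbers
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/many}]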

Then we would simply add a Peyton entity and associate the attributes that are pertinent to him, without having to worry about the mismatch in data between the two contact entities (email-addresses vs. phone-numbers).
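That transaction might be as simple as the following (the phone numbers are invented for illustration):

[{:contact/first-name    "Peyton"
  :contact/last-name     "Manning"
  :contact/phone-numbers ["555-0100" "555-0101"]}]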

And we're good to go!

The possibilities for extensibility are endless! Going further (or maybe a little bit too far), we could even start recording details about each of the Super Bowls that every new 8th Light apprentice has won. Again, we wouldn’t have to worry about the fact that most incoming 8th Lights do not, in fact, have a Super Bowl ring. Yet we could easily still add this attribute to both of the Manning entities.

Conclusion

Perhaps this is a lesson that we can bring into our development of systems that use a relational model. Why should we shy away from adding more data that is useful to our application just because our current structure of tables isn’t set up for it? After exploring Datomic’s ability to allow us to be flexible, adaptable, and agile, I am encouraged to continually seek out more creative ways to represent data.

It is worth mentioning that Datomic is still a relatively new technology, which brings about its own set of challenges. It is still evolving, and as such, there isn’t always a plethora of resources online. It’s often difficult to find more than a few Stack Overflow entries, blog posts, and tutorials to assist, besides Datomic’s own documentation. However, in lots of Googling, I have noticed that Rich Hickey himself is very active in answering questions on both Stack Overflow and even a Google Group dedicated to Datomic, which is cool and encouraging. There are several early adopters who are using Datomic in production in addition to my current client—so though Datomic is new, it is production-ready and the community surrounding it is supportive and growing.

Sometimes it can feel like you’re the first one to ever try to query data in a particular way—which can be daunting. But honestly, the opportunity to think about data in a new and different way makes it worth it. Understanding data in a linear/time-sensitive way—instead of relationally—is eye-opening. It’s helped me to understand my data more thoroughly, and also realize new, creative ways that data can be linked and understood. We’re not always restricted by the database structure that we currently have—and this allows us to add even more relationships between our data.


America's Ur-Choropleths


Choropleth maps of the United States are everywhere these days, showing various distributions geographically. They’re visually appealing and can be very effective, but then again not always. They’re vulnerable to a few problems. In the U.S. case, the fact that states and counties vary widely in size and population means that they can be a bit misleading. And they make it easy to present a geographical distribution to insinuate an explanation. Together the results can be frustrating. Gabriel Rossman remarked to me a while ago that most choropleth maps of the U.S. for whatever variable in effect show population density more than anything else. (I think there’s an xkcd cartoon strip about this, too.) The other big variable, in the U.S. case, is Percent Black. Between the two of them, population density and percent black will do a lot to obliterate many a suggestively-patterned map of the United States. Those two variables aren’t explanations of anything in isolation, but if it turns out it’s more useful to know one or both of them instead of the thing you’re plotting, you probably want to reconsider your theory.

So as a public service, here are America’s two ur-choropleths, by county. First, Population Density.

US Population Density.

US Population Density Estimates, by county, 2014. Source: US Census.

And second, Percent Black.

Percent Black Population, by county.

Percent Black Population, by county, 2013. Source: US Census.

And as a bonus, here are those two variables plotted against each other, with region highlighted.

Population Density vs Percent Black Population, by county.

Population Density vs Percent Black Population. Source: US Census.

If you’re interested in making some maps of your own, the code and data are on github. Thanks to Bob Rudis for his excellent R projection code, by the way.


My thoughts on quadratic voting and politics as education


That is the new paper by Lalley and Weyl.  Here is the abstract:

While the one-person-one-vote rule often leads to the tyranny of the majority, alternatives proposed by economists have been complex and fragile. By contrast, we argue that a simple mechanism, Quadratic Voting (QV), is robustly very efficient. Voters making a binary decision purchase votes from a clearinghouse paying the square of the number of votes purchased. If individuals take the chance of a marginal vote being pivotal as given, like a market price, QV is the unique pricing rule that is always efficient. In an independent private values environment, any type-symmetric Bayes-Nash equilibrium converges towards this efficient limiting outcome as the population grows large, with inefficiency decaying as 1/N. We use approximate calculations, which match our theorems in this case, to illustrate the robustness of QV, in contrast to existing mechanisms. We discuss applications in both (near-term) commercial and (long-term) social contexts.

Eric Posner has a good summary.  I would put it this way.  Simple vote trading won’t work, because buying a single vote is too cheap and thus a liquid buyer could accumulate too much political power.  No single vote seller internalizes the threshold effect which arises when a vote buyer approaches the purchase of an operative majority.  Paying the square of the number of votes purchased internalizes this externality by an externally imposed pricing rule, as is demonstrated by the authors.  This is a new idea, which is rare in economic theory, so it should be saluted as such, especially since it is accompanied by outstanding execution.
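To make the arithmetic concrete: under the quadratic rule, one vote costs 1, ten votes cost 100, and a hundred votes cost 10,000, so the nth vote costs n^2 - (n-1)^2 = 2n - 1, roughly twice the number of votes already held. Assembling a decisive bloc therefore becomes progressively more expensive, which is how the pricing rule internalizes the threshold externality described above.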

The authors give gay marriage as an example where a minority group with more intense preferences — to allow it — could buy up the votes to make it happen, paying quadratic prices along the way.

My reservation about this and other voting schemes (such as demand revelation mechanisms) is that our notions of formal efficiency are too narrow to make good judgments about political processes through social choice theory.  The actual goal is not to take current preferences and translate them into the right outcomes in some Coasean or Arrovian sense.  Rather, the goal is to encourage better and more reasonable preferences and also to shape a durable consensus for future belief in the polity.

(It is interesting to read the authors’ criticisms of Vickrey-Clarke-Groves mechanisms on p.30, which are real but I do not think represent the most significant problems of those mechanisms, namely that they perform poorly on generating enough social consensus for broadly democratic outcomes to proceed and to become accepted by most citizens.  One neat but also repugnant feature of democratic elections is how they can serve as forums for deciding, through the readily grasped medium of one vs. another personae, which social values will be elevated and which lowered.  “Who won?” and “why did he win?” have to be fairly simple for this to be accomplished.)

I would gladly have gay marriage legal throughout the United States.  But overall, like David Hume, I am more fearful of the intense preferences of minorities than not.  I do not wish to encourage such preferences, all things considered.  If minority groups know they have the possibility of buying up votes as a path to power, paying the quadratic price along the way, we are sending intense preference groups a message that they have a new way forward.  In the longer run I fear that will fray democracy by strengthening the hand of such groups, and boosting their recruiting and fundraising.  Was there any chance the authors would use the anti-abortion movement as their opening example?

If we look at the highly successful democracies of the Nordic countries, I see subtle social mechanisms which discourage extremism and encourage conformity.  The United States has more extremism, and more intense minority preferences, and arguably that makes us more innovative more generally and may even make us more innovative politically in a good way.  (Consider say environmentalism or the earlier and more correct versions of supply-side economics, both innovations with small starts.)  But extremism makes us more innovative in bad ways too, and I would not wish to inject more American nutty extremism into Nordic politics.  Perhaps the resulting innovativeness is worthwhile only in a small number of fairly large countries which can introduce new ideas using increasing returns to scale?

By elevating persuasion over trading in politics (at some margins, at least), we encourage centrist and majoritarian groups.  We encourage groups which think they can persuade others to accept their points of view.  This may not work well in every society but it does seem to work well in many.  It may require some sense of persuadability, rather than all voting being based on ethnic politics, as it would have been in say a democratic Singapore in the early years of that country.

In any case the relevant question is what kinds of preference formation, and which kinds of groups, we should allow voting mechanisms to encourage.  Think of it as “politics as education.”  When it comes to that question, I don’t yet know if quadratic voting is a good idea, but I don’t see any particular reason why it should be.


cauliflower cheese



What, you’ve never had cauliflower cheese before? Why, it’s right up there on the American Heart Association’s recommended diet, above the kale and below the oat bran. Okay, well, maybe just the cauliflower is. I realize this dish may sound strange if you’ve never heard of it. The first time I saw it on a menu in the UK last fall, I thought a word was missing, perhaps “with” or “and.” I mean, you cannot make cheese out of cauliflower or vice-versa, or at least I hope not.* And then I tried it, bubbling and brown in a small ramekin alongside my roast** at a tiny inn in the middle of nowhere that looks like something you’d see in Bridget Jones’s Diary (basically where I learned everything I knew about the UK before I got there, well, that and Morrissey songs) and I stopped talking. I stopped thinking. My heart may or may not have stopped beating for a moment, though I’m sure it was love, not fibrillations. How could it be anything but, when cauliflower florets are draped with a sharp cheddar cheese sauce spiked with mustard and a bit of cayenne and then baked in the oven until bronzed and, wait, what were we talking about again?

cauliflower, spice, s/p, butter, milk, cheese
chopped florets

This is a British dish, if the sharp cheddar, mustard powder, cayenne and charmed name didn’t give it away. I realize that British food has long been a punching bag for other supposedly superior world cuisines, but I found this to be anything but the case. Even if I had, the awesome names of national dishes — toad in the holes, bubble and squeaks, spotted dicks, singing hinnies, jam roly-polys and doorstop sandwiches — would have more than compensated for any failures in the flavor department.

cook until firm-tender

... Read the rest of cauliflower cheese on smittenkitchen.com


