If you’re new to the topic of “open access” (“unrestricted online access to scholarly research”), or you want a broad perspective, I’d suggest, instead of this book, the fairly good Wikipedia entry on “Open Access,” from which the above definition comes. After that, try John Willinsky’s The Access Principle (2006, available free from MIT Press), which focuses on the key idea of expanding access to knowledge creation and use, through various models in different contexts globally. (If you do want to read Open Access and the Humanities, get the complete Kindle, ePub, or PDF version free at Unglue.it or the Internet Archive, not from Cambridge Books Online’s amusingly not-with-the-program collection of 16 separate PDF files.)
Martin Eve’s recent Open Access and the Humanities, by contrast, has a narrower and somewhat polemical purpose: to review current debates among mainly UK (and some US and EU) academics over alternative ways to pay for and provide access to scholarly articles and monographs in the humanities, and to advocate a particular (and controversial) model in which academics give away all copyright-based rights to their work except attribution. The author is a lecturer at the University of Lincoln, UK, and a founder of a new publishing venture, the Open Library of Humanities, of which I was also a co-founder.
“iTown” proposal/visualization by Alfred Twu, 2014.
Recently, discussion of the Bay Area’s housing crisis seems to be reaching a crescendo. Here are four intersecting threads I saw or joined on Twitter, leading to (#4) a proposal for a “Reshape Silicon Valley” public event and envisioning workshop. Featuring, by section:
- Startup incubator heads & venture capitalists
- Urban planners & transit advocate
- Journalists & filmmaker
- Entrepreneurs & technologists
1. Startup incubator heads & venture capitalists:
On Nov 3, Sam Altman, President of leading startup incubator Y Combinator (based in Mountain View), tweeted:
There were many responses to this tweet, including creative ideas such as
[note: Altman’s observation is essentially what urban planning and housing studies commonly call the “homevoter hypothesis” (or “homeowner hypothesis”), set out in The Homevoter Hypothesis by William A. Fischel, 2002.]
comment submitted 3pm Thurs on “The Search of Silence” by Allison Arieff, The New York Times Opinion, 20 March.
great piece, and welcome attention to a crucial issue which, as ARUP’s Cushner aptly (and punningly?) notes, is “often overlooked.”
Sound concerns are still often dismissed as intolerance or efforts at cultural/class suppression — which they might sometimes be in part, but not generally. As you observe, sound issues are highly complex and intermingled with other factors, such as lighting and people’s sense of control over their environment. There is huge opportunity for ahead-of-the-curve companies such as ARUP who understand this and develop leading expertise.
Of particular interest to me is the great potential for “sound interfaces” to devices and information systems. This may be a key to addressing the crucial problem of managing our attention for better health, productivity, and engagement.
I found especially interesting the contrast between ARUP’s Sound Labs/prototyping approach and the “highly regulated spaces like hospitals or airports [which] feel…like the worst noise offenders.” We might infer a more general lesson there: complex human environments like cityscapes need iterative and adaptive design, evaluated on total outcomes; conventional regulatory control may prevent this, fail to work, or even backfire. Many areas such as traffic control, building codes, zoning, and parking might benefit from such rethinking, as the “Lean Urbanism” movement, most recently, advocates.
Palo Alto, California
comment posted on Southern Fried Science (David Shiffman) post of 10 March 2014, “5 things we discussed in my #scio14 ‘social media as a scientific research tool’ session.”
> it can be inexpensive (even free) and simple to get the data you need.
It may not be as simple as it appears. To take the example of Twitter — probably the most-used and most-studied social data source — most collection tools use either the Twitter Search API or the Streaming API, both of which have known incompleteness and sample bias. So, for example, a collection of “all” tweets with a given hashtag, made with those tools, will likely not include all tweets actually sent with that hashtag; and it is hard to know what portion of tweets were missed, or in what pattern.
The only data source Twitter even claims any completeness for is full “firehose” data, available only by arrangement with them or one of their data partners like Gnip. Even with this data, there are questions about how its completeness or neutrality might be assessed or verified. The scrupulous path, I think, is to assume there isn’t really any “raw” or self-evidently neutral data, from any source so complex and mediated as Twitter; there are just data artifacts, which have to be critically interpreted.
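To make the incompleteness point concrete, here is a minimal Python sketch using made-up numbers, not real Twitter output: it simulates two partial collections of the “same” hashtag and shows that even their union misses tweets. The catch, of course, is that in real collection you never have the `full_set` to measure against — which is exactly the verification problem described above.

```python
# Simulated illustration (all "tweet IDs" are invented integers):
# two partial collections of the same hashtag, as a Search API and a
# Streaming API collection might produce. In real use, full_set is
# unobservable, so the missing portion cannot be measured directly.

full_set = set(range(1, 101))          # the (unknowable) complete set of 100 tweets
search_api = set(range(1, 101, 2))     # partial sample: e.g. misses low-ranked tweets
streaming_api = set(range(1, 101, 3))  # partial sample: e.g. rate-limited stream

combined = search_api | streaming_api  # merging both collections
missed = full_set - combined           # still-missing tweets

print(f"Search API collected:    {len(search_api)} of 100")
print(f"Streaming API collected: {len(streaming_api)} of 100")
print(f"Combined:                {len(combined)} of 100")
print(f"Still missing:           {len(missed)}")
```

Even combining both sources, a third of the simulated tweets are never seen — and without ground truth, a researcher could not know whether the gap is random or systematically biased.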
Conversary, Palo Alto
Note: I’m posting the comment here because, as quite often happens, I wrote the comment, submitted it (after logging in, with a Twitter account in this case), nothing appeared, and there was no information about whether or how it might be posted. Site-specific comment systems are almost all broken from a commenter’s standpoint.
comment on “How to use social media for science — 3 views” (tips from science and journalism pros at the American Association for the Advancement of Science (AAAS) annual meeting), by Alison Bert, Elsevier Connect, 25 February 2014.
Great panel and excellent writeup. I followed parts of #AAASmtg via Twitter remotely, but wish I’d been there in person.
The panel seems to have focused on ways & reasons scientists might post on social media, which perhaps was implied by the panel title “Engaging with Social Media.” However, I’d like to pose the question: is it possible that the most important potential use of social media, at least for most scientists, is not posting, but reading, discovery, and more indirect use?
James Watt’s “centrifugal governor,” 1788
Below are the tweets from an exchange on Twitter with Tim O’Reilly about “algorithmic regulation.” The term was apparently coined by O’Reilly in a Google+ post 19 Sept 2011:
18 months after President Obama authorized a program providing $7.6 billion to states to help homeowners escape foreclosure, funds have been awarded to only about 7500 homeowners. In the same time period, banks have foreclosed on 1.5 million homes.
Stories like this one fuel disgust with government. What they really highlight is that we’re trying to manage 21st century problems with 19th century methods.
Rather than building a government bureaucracy to award funds, the program should have set goals for number of homeowners whose mortgages would be relieved (or even better, the conditions that would justify loan modification) and left it to the banks to meet those expectations.
The regulatory overhead should have been in testing outcomes, not managing process.
Let me be clear by analogy. Imagine that Google’s search quality team wrote a set of rules for sites to be approved for inclusion in Google, and had a bureaucracy to allow sites into search results. Instead, Google tests the quality of search results, and uses algorithmic regulation to remove results that are deemed bogus.
The analogy in this case isn’t exact, but the idea of algorithmic regulation is central to all internet platforms, and provides a fruitful area for investigation in the design of 21st century government.
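The centrifugal-governor image above captures the core loop O’Reilly is pointing at: measure an outcome, compare it to a target, apply a correction, repeat. A minimal sketch of that feedback logic in Python — illustrative numbers only, not a model of any real regulatory or technical system:

```python
# Feedback-loop sketch of "regulating by outcome": measure the output,
# compare to the target, apply a proportional correction -- the logic of
# Watt's centrifugal governor. All numbers are purely illustrative.

def regulate(output, target=100.0, gain=0.5, steps=20):
    """Proportional controller: nudge output toward target each step."""
    history = [output]
    for _ in range(steps):
        error = target - output
        output += gain * error      # correction proportional to the error
        history.append(output)
    return history

history = regulate(output=40.0)
print(f"start={history[0]:.1f}, end={history[-1]:.1f}")
```

The regulator never pre-approves a particular state; it only measures the gap between outcome and goal and corrects — which is the contrast O’Reilly draws between testing outcomes and managing process.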
Subsequently O’Reilly wrote a book chapter, “Open Data and Algorithmic Regulation,” in the 2013 compilation volume Beyond Transparency from Code for America (free to download). I recently came across and read this chapter, and my comment on it led to this exchange: