SXSW Interactive 2015 Wrap-Up

Spring break has come and gone in Austin, which means that we’re recovering from another amazing SXSW Interactive festival. This year for me was a year of narrative story technologies and Community.  For the last several years I’ve been going to SXSW with my wife, Irma, and this year she had her own session.  That meant she spent a lot of time in the women-in-tech tracks, and we didn’t see each other as much as usual.  It’ll be interesting to read her write-up when she gets to it.

Friday – Al, Tim, BBQ, Old Friends, & 3D Printed Clothes

Friday started off with Life in the OASIS: Emulating the 1980’s in-Browser, a panel from Ready Player One author Ernest Cline (who has a new novel coming out, Armada, which I just pre-ordered), and Jason Scott, rogue librarian at the Internet Archive, talking about 80’s video games and in-browser emulators. Unfortunately due to our bus driver getting lost (transportation was a recurring pain-point at SXSW this year) I wasn’t able to make the session, but Jason, being a free-range archivist, put up the audio for all of us to enjoy.

Al Gore

While we didn’t have time to get to the Life in the OASIS session, we had some time to burn till the session after, so Irma and I headed to Exhibit Hall 5, which is big and always has a lot of room to plop down and get your stuff sorted.  SXSW is the kind of conference where you can be just looking for a place to get your bearings and end up listening to Al Gore talk about climate change, the Pope, and his newfound optimism, which is exactly what happened.

After Al we moved up front for a presentation from perennial SXSW personality Tim Ferriss, who had a 30 minute How To Rock SXSW in 4 Hours talk, followed by Q&A. It’s always weird for me to see Tim at SXSW. I was in the front row of his first SXSW talk on the tiny Day Stage promoting the about-to-be-released 4 Hour Work Week, way back in 2007.  To say our paths diverged would be an interesting understatement.  The main points of Tim’s talk were: don’t be a jerk, and treat everyone like they could make your career (because they probably can).  He had some hangover cure suggestions (eat avocados before you go party), and reminded all the introverts to take the time to breathe.

Tim Ferriss

One anecdote he told on the treating everyone well point was from one of his early CESes.  He spent most of his time in the bloggers lounge (a good place to meet people), and while everyone was trying to get the attention of Robert Scoble, he chatted up the lady checking people in.  Eventually he made a comment about Robert, and she said, “Oh, you should totally talk to him.  He’s my husband, let me give you a ring back in San Francisco and we’ll have lunch.”  So yea, you never know who people are.  I was standing in line at a session later in the conference and started talking to the lady next to me, who turned out to be the head of innovation at Intuit.

After Tim it was time for lunch, and we ended up at Ironworks BBQ. They have a $16.45 3 meat sampler plate (beef rib, sausage, and brisket), and well… a picture’s worth a thousand words…

Ironworks BBQ Plate

Our buddy Matt Sanders (formerly a Polycot, then an HPer, and now at Librato) was in town from San Francisco for the conference, and joined us to indulge in smoked meat.  We ended up eating at Ironworks at least 3 more times, which was kind of expensive, but fast and good.

After lunch Matt and I headed over to the new JW Marriott to a panel from Dutch fashion designer Pauline van Dongen titled Ready to Wear? Body Informed 3D Printed Fashion. This session was a perfect example of what makes SXSW such a unique conference: It’s a subject that I’m curious about, but one I’d never go to a conference specifically to see.  Pauline was wearing some of her tech-enabled fashions (a shirt with solar cells embedded in it), and talked about how fashion meets technology and how often in technology we design for the static (interlocking shapes), not the organic.  She profiled two of her projects, one a sleeve that morphs based on the wearer’s movement, and the other a neck ruff that uses electrically contracting wire to ‘breathe’ while worn. The challenges she faced (48 hour print cycles, unpredictability of material behavior) and insights discovered were really interesting, and it was one of the panels I kept thinking about most over the next few days.

After this panel I wandered through the job fair for a bit, which has expanded significantly in the last year.  It was interesting to see Target and Apple looking for candidates at SXSW.

Saturday – Storytelling Machines, Future Crime, New Parents in Tech

Talk Photo

First thing Saturday morning was my session with Jon Lebkowsky: Machines That Tell Stories.  We had a great turnout, and there are notes from the discussion at the link. Looking over the schedule, storytelling and storytelling systems were a very hot topic.  I was talking to Deus Ex Machina (an interactive theater project) producer Robert Matney later about how it felt like the story zeitgeist erupted out of nowhere, and the flood of sessions made for a very interesting conference.  The discussion was really interesting, and it was gratifying to hear that there was a lot of cross-pollination between attendees.  I even heard that people were still connecting at the airport on their way home.

Socks

Chris Hurd, one of my friends and the guy behind DVinfo.net, gave me a tip one time from his years working big trade shows like NAB: The best way to keep a spring in your step at a long conference is to change your socks in the middle of the day.  So after leading our session, and stopping into the 3M booth, we went back to the car, dumped our stuff, and I changed my socks.  We had a long day ahead of us, and it was definitely worth it.

Next I went to a session titled Future Crimes From the Digital Underworld by Marc Goodman. It’s always interesting to see the people who’ve given a talk a lot of times versus people who are presenting the material for the first time.  Marc’s obviously really practiced at this talk, complete with jokes, audience-call-outs, and what have you.  It’s a fun talk, but the net-net is that everything in security is terrible, and it’s just going to get worse with trillions of IoT devices.  I didn’t need Marc to tell me that.  I have Taylor Swift.

After that was Irma’s meetup: New Parents in Tech. She had an interesting turnout, with only one other woman (there was a lot of competition for women technologists this year, with a strong moms-in-tech panel opposite), but a lot of dads.  Two product guys from Fisher Price showed up, too, and I had an interesting discussion with them about Baby’s Musical Hands (Clara’s first app) and iPad cases (they don’t sell many anymore, possibly due to the kid market saturating with hand-me-downs).  After a good discussion, it was on to…

Saturday/Sunday – Community

Harmontown

Community!  And Harmontown!  And Dan Harmon!

Ok, I’ll admit it. Community is my biggest takeaway from SXSW. It’s my favorite TV show, the only thing I watch obsessively.  I’ve seen every episode, most of them a half dozen times. I follow the actors (even the lesser-known ones) on twitter. Fanboy = Me.

Yahoo! Screen picked up Community last year after NBC didn’t renew it.  The switch from broadcast to streaming distribution made it perfect SXSW fodder, especially after the Harmontown documentary premiered at SXSW Film last year.  Yahoo! pulled out the big guns, though, and beyond just having Dan show up to talk about the switch, they brought the whole cast, and premiered the 1st episode of the new season a few days early for the fans.  It was epic.

Community Cast

We had great seats for Harmontown, and were next to the stage when they brought out the cast and showed the season premiere. When they showed the episode everybody sat down on the floor, and in one of the most amazing things I’ve ever been in the middle of, we all sang along kumbaya style to the theme song.

The episode was great, and we had a wonderful time.

The next morning, after a panel I’ll talk about in a second, was an SXSW panel with the Community cast.  Everyone was there again, and there was a great discussion about the show.  For my money, it was even more interesting than Community’s previous Paleyfest discussions, probably due to the fact that there wasn’t a real moderator, just Dan Harmon asking the cast questions.  We had front row seats for that one, too.

Community Panel

Some notes for Community fans:

  • In the Q&A someone mentioned Conspiracy Theories and Interior Design as their favorite (it’s my favorite too, I was the weirdo in the audience who applauded at that), and Dan told a story about how they essentially threw out the last third of the episode (the original ending involved the teachers creating the conspiracy) while they were shooting.  The final scene didn’t get scripted until they were in the study room blocking it out. They had NBC Standards and Practices on the phone, because there’s a lot of gunplay, and they were describing it, and finally the person from Standards and Practices said ‘Is there any way you can make it about gun safety?’  And if you’ve seen the episode, that’s how the ending happened.  Lots of Community episodes come together at the last minute.
  • The speech Ben Chang gives in Analysis of Cork-Based Networking (about the character being a real person, but just being portrayed as crazier and crazier) was lifted nearly verbatim from an email Ken Jeong wrote to Dan Harmon about the character. When Ken was performing it, he teared up (those are real tears) because he was so touched.
  • Alison Brie’s contract is up this season; we’ll see if she’s back if they make a movie or Season 7.
  • In discussing the longer episodes now that they aren’t constrained by NBC commercial breaks, Joel McHale noted that The Dick Van Dyke Show episodes were 29 minutes, which made me think that there’s an interesting comparison between The Dick Van Dyke Show as Community and I Love Lucy as The Big Bang Theory.

Ok, so that’s Community.  It was great.  Watch it on Yahoo! Screen.

Sunday – Transmedia Storytelling, Bot Authors, & Makers

Before the morning Community panel was a session titled Worlds Without Boundaries: Books, Games, Films, with James Frey (author and media maker) and John Hanke (architect behind Google’s Ingress game).  It was a fascinating discussion about media that crosses boundaries.  James mentioned he was heavily inspired as a kid by the book Masquerade, which included puzzles and a treasure hunt in the real world.  In his series, Endgame, there’s a puzzle pointing to keys that unlock a chest in the Caesars Palace casino in Las Vegas holding $500,000, $1 million, and $1.5 million in gold (one prize per book, respectively).  They’re doing an app with Niantic Labs (Google), and they’re planning films.  It’s an interesting product development scheme: Have a stable of creatives come up with a world.  Sell some of the rights (film, TV), partner to do some products (games), and do others in-house (books, novellas).

Floor

After the Community panel I spent some time in the SXSW trade show.  General themes this year were lots of Japanese hardware startups (on Kickstarter, natch), lots of country pavilions, and almost no hosting or cloud booths (save for Softlayer).  A lot more music, and a little ergonomic furniture.  Overall a less interactive-heavy trade show than years gone by.  I’m not entirely sure why that was, but there you go.  WordPress didn’t come with their great t-shirts this year, so I guess I’ll have to actually go to the store to buy my clothes.  I did manage to pick up a new Olloclip, though, and even got to see it built in front of me.

In the afternoon I made it over to the Automated Insights panel When Robots Write The News, What Will Humans Do?, moderated by James Kotecki, Automated Insights’ PR guy. This was a great discussion between Robbie Allen, the CEO of Automated Insights, and Lou Ferrara, a VP from the Associated Press.  Automated Insights’ software produces AP stories in sports and stock reporting from raw data, and it was interesting to hear their discussion about what will be automated and where human value really lies.  Automated Insights’ pitch is producing one billion pieces of content for one person each, which I think everyone can agree is where a large part of the content we consume is headed.
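
To make the raw-data-in, story-out idea concrete, here’s a toy sketch of template-driven recap generation in Python.  It’s just an illustration of the general technique, not Automated Insights’ actual system, and all the field names are made up.

```python
# Toy data-to-text generation: turn a structured box score into a short recap.
# A stand-in for the general technique, not Automated Insights' system.
def recap(game):
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home"], game["away"])
        if game["home_score"] > game["away_score"]
        else (game["away"], game["home"])
    )
    verb = "edged" if margin <= 3 else "beat" if margin <= 10 else "routed"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return (f"{winner} {verb} {loser} {high}-{low} on {game['date']}. "
            f"{game['top_scorer']} led all scorers with {game['top_points']} points.")

print(recap({"home": "Austin", "away": "Houston", "home_score": 98, "away_score": 87,
             "date": "March 15", "top_scorer": "J. Smith", "top_points": 31}))
```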

After the Automated Insights panel I headed over to SXSW Create, the free maker area of the conference.  While I was there I got to try out Lumo, a new interactive projector for kids that’s about to hit Indiegogo.  More on that later.

The Gaming Expo next door to Create was as crazy as ever, and really starting to outgrow the space they have for it.  The only larger space downtown is the Convention Center, though, which puts them in kind of a bind.  VR headsets were everywhere (almost always accompanied by lines of people waiting to use them), and it was good to see indie games like That Dragon, Cancer represented.

Monday – API Fails, Narrative Systems, Non-Linear Story Environments, Enchanted Objects, Home Projector Installations, & BBQ Science

The next day I tried to get into Wynn Netherland’s talk Secrets to Powerful APIs, but I was late, so Matt and I couldn’t fulfill our tweet-promise.

Instead, Matt and I went to the MedTech expo, and saw an interesting startup doing a small wireless body temp monitor for babies (slap a bandage over it, battery’s good for a month), some interesting sleep trackers (lots of quantified self folks at SXSW) and of course Withings with their smart watch whose battery lasts 8 months.

After that I headed to Technology, Story, and the Art of Performance by Elena Parker (who came to our Machines That Tell Stories session and had some of the best insights) and Michael Monello from Campfire, a division of Sapient Nitro that specializes in unique interactive experiences.

Elena and Michael’s presentation was one of the meatiest sessions I attended, full of helpful, hard-won insights into interactive projects.  Slides are available here, and audio is here.  They showcased three projects: Deja View, a project for Infiniti where actors you see on screen talk to you through the phone (dynamic video branching and voice recognition), Hunted, an ARG-like that used some clever magic tricks to make users think they were being controlled, and a project for the From Dusk Till Dawn TV series, where players could call in and talk to the character Santanico Pandemonium and she would try to recruit them for her cult (branching narrative, voice recognition).

Some of my major takeaways from the presentation:

  • MaxMax: Elena talked about how while in games you program for MinMax (the system constantly minimizes the player’s chances while maximizing the game’s chances, by attacking the player, moving enemies toward them, etc), in interactive story experiences you want to optimize for MaxMax, where you give the users as much of a chance of progressing as you can.  They’re likely only going to experience it once (replay value not being high, except for people who want to understand how the system works), so work as hard as you can to make sure they succeed.
  • Embrace Genre: When you’re giving people a new and unfamiliar experience, ground them in tropes and genre conventions they already understand. That way they have something to hold on to.
  • 3 Act Structure: Use the standard exposition, rising action, climax story structure.  Everybody understands it and it works; if you’re re-inventing all the other wheels, don’t re-invent that one.
  • Magic!: There’s an interesting cross-over with the magic community.  They worked with a magician to design some of their interactive tricks (powered, in the end, by people sitting in a call center). Fooling the brain is what magicians do, and that’s what delights our users.
  • Don’t Branch: Traditional Choose Your Own Adventure stories use a branching structure, which leads to some short experiences and some long ones.  That’s a negative for experiences you want users to fully enjoy, so instead of branches, create a looping structure where each act breaks into sections, but they all come back together at the end (see the sketch after this list).  Something like this:  -=-=-=-
  • Test: When you’re testing an interactive narrative, write out only the main 80% line first, and test it on 5 people.  This will validate your assumptions about how most people will view it, and won’t waste your time creating alternate paths if your base assumptions are broken.  Once you’ve passed 5, write out the rest and test on 50 people (I think this was how it went, they should post the slides soon) to validate your overall script.  Then run a production beta test on as many people as you can to get data for all the subtle things you wouldn’t expect.
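
Here’s a minimal sketch of what that loop-and-reconverge structure might look like as data, versus a classic branching tree.  This is my own illustration of the idea from the panel, not anything they showed, and the section names are invented.

```python
import random

# Each act fans out into parallel sections but reconverges before the next act,
# so every player gets roughly the same length of experience. Toy illustration
# of the "don't branch, loop" idea; all names are made up.
STORY = [
    {"act": 1, "sections": ["meet_the_caller", "find_the_phone", "ignore_the_phone"]},
    {"act": 2, "sections": ["follow_the_clues", "ask_a_stranger"]},
    {"act": 3, "sections": ["confrontation"]},  # everyone reconverges here
]

def play(choose=random.choice):
    path = []
    for act in STORY:
        path.append(choose(act["sections"]))  # pick one section per act...
    return path  # ...but the experience is always len(STORY) acts long, never a short dead end

print(play())
```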

Stories Asunder Panel

After this panel I kept with the story theme and went to the Stories Asunder: Tales for the Internet of Things panel, with Lisa Woods from the Austin Interactive Installation meetup, and her team that produced Live at the Dead Horse Drum, an iBeacon-powered non-linear location-based story experience on the east side of Austin.  Also joining them was Klasien van de Zandschulp from Lava Lab, who’s created some really fascinating geo-fence/iBeacon-based non-linear story experiences in the Netherlands, including one under development where inhabitants of museum paintings create a social network the user can browse (think HogwartsPaintingBook).  They had some really interesting examples, and I hope to experience some of their work soon.

Enchanted Objects

On the way to my next panel I walked by the SXSW bookstore, and noticed that David Rose, whose book Enchanted Objects I’d done a double-take on a few days before, was going to be signing it just about then.  So I bought a copy and a few minutes later David showed up, and we had a great 10-minute conversation about projection mapped interactive art objects.  David teaches at the MIT Media Lab, in addition to a lot of other stuff, and his book has moved straight to the top of my to-read stack.  It sounds exactly like a subject I’ve posted about here before, and something that feels like it’s moving from Bruce Sterling design fiction to real world product very quickly.

After talking to David I headed to the Storytelling Engines for Smart Environments panel, which had Meghan Athavale (aka Meg Rabbit) from Lumo Play on it. Meg’s been doing interactive projection installations in museums for many years, and has had the question ‘Can I get this in my house?’ posed more than a few times. Recently component prices have been dropping, so she’s designed the Lumo Interactive Projector, a projector-based toy for kids, and is about to run an Indiegogo for it.  Meg’s the kind of entrepreneur you can’t help but root for.  She came to SXSW by herself, set up a booth in Create, and is trying to drum up as much excitement as she can.  I really hope her Indiegogo is a big hit.

After Storytelling Engines I headed over to the GE BBQ Research Center with Matt and Irma for some free BBQ.  It was good, but Irma didn’t care for it.  Research accomplished!

Tuesday – AR/VR, Moonshots, New Assets, Space Cleaners, & Happy Bruce

Mixed Reality Habitats

Tuesday morning I hit the Mixed Reality Habitats: The New Wired Frontier panel presented by IEEE. My biggest ‘wow’ takeaway, aside from the fact that nobody seems to know what Microsoft’s up to with HoloLens or those Magic Leap guys (light fields?) with their headset, was from Todd Richmond, Director at USC’s Institute for Creative Technologies, who said his group felt that most people would be wearing headsets (Google Glass-like or HoloLens-style AR, or Oculus Rift-style VR) 8 hours a day for work in 5-10 years.  When someone says something like that, I think it’s time to take notice.  Consider the headsets of today as the original iPhone.  Think about how far we’ve come in the eight years since that was released.

After that I watched Astro Teller speak about Moonshots at Google [x]. His main point was that they always strive to fail quickly and get real-life feedback as fast as possible.  He talked about a bunch of wild projects they’re working on like delivery drones, internet by high-altitude self-driving balloons, kite-based wind power, the self-driving car, and others.  With each he emphasized how failing early led to faster learning.

I hit a session entitled How to Rob a Bank: Vulnerabilities of New Money, with some fairly impressive speakers. Their main point seemed to be that your personal information is an emerging asset class that you should be concerned about.  Just like the dollars in your bank account, your purchase history and address and Facebook posts have value, and we don’t really know how to protect that yet.

On our way back through the trade show Irma and I ran into Astroscale, a company from Singapore started by some Japanese ex-finance guys (follow me, here).  They’ve hired engineering resources to design a satellite that will de-orbit space debris.  Imagine that your $150 million satellite is going to be impacted by a bit of out-of-control space junk.  You pay these guys $10 million, and they go find that space junk, attach their micro-satellite to it, and de-orbit it before it can crash into you.  And they’re running a promotional time capsule project with Pocari Sweat and National Geographic to collect well wishes from kids and send them via a SpaceX launch to the moon.  So yea, 30 years after Reagan’s Star Wars and Brilliant Pebbles, and here’s what we’ve got.  I’m surprised it isn’t on Kickstarter.

The end of SXSW is always Bruce Sterling’s talk, and this year was no different.  Bruce was kind of happy this year, and was almost channeling Temple Grandin in appearance, but happy Bruce isn’t always the most interesting Bruce.  If you’d like to give his talk a listen, it’s up on SoundCloud.  Hopefully next year he’ll have some tales from Casa Jasmina to share.

So that was it, SXSW Interactive 2015.  5 days of old friends, new friends, stories, the future, BBQ, and space junk.  I can’t wait to see you next year!

Machines That Tell Stories: SXSW 2015

Last Saturday at SXSW Interactive Jon Lebkowsky and I curated a Core Conversation titled Machines That Tell Stories. I proposed the topic as a book project to Jon last year, and we put together this discussion as a stepping stone. Software storytellers are in the air. There were over a dozen sessions at SXSW this year on storytelling systems, and that kind of consensus usually heralds a new wave about to break. We’ve set up a Twitter and a Tumblr for this project, if you want to follow along.

Machines That Tell Stories Placards

Our argument: Software is moving beyond raw data and into narrative.  First it will help you weave the tales you want to spin, but soon it may be telling stories better than all but the best human storytellers.

The conversation was all over the place, and I don’t think anyone recorded it, but here are some notes and references that could be helpful…


  • Lisa Cron’s Wired for Story: “A story is how what happens affects someone who’s trying to achieve what turns out to be a difficult goal, and how she changes as a result.”
  • Wired For Story Takeaway: Story is about mechanics; the trappings that you think of as important aren’t as critical as hitting the right beats that resonate with the human brain.
  • The Future of Storytelling Conference – Great speaker videos
  • Dwarf Fortress’s Legends mode, Procedural Poetry Analysis (Leave the creative imagination up to the user. Provide concrete, easy to procedurally generate elements, and let the brain fill in the rest.)
  • Weavrs as storytellers
  • The Nest Home Report monthly email as a machine-generated story
  • Collaborative human/machine storytelling at DARPA
  • Machine data into text reporting at Automated Insights (1 billion articles for 1 person each, instead of 1 article for 1 billion readers). More at CNN.
  • Games by Angelina – Procedurally generated videogames, played through brute-force to see if they’re solvable. Potentially compare play throughs to known-pleasing physical interactions (progressively more complex button presses and movements)
  • Mechanical Turk as a part of a story machine, using human filtering to produce more compelling procedural content
  • Turing in The Imitation Game: The question isn’t whether machines will think like humans, it’s whether machines will think like machines.
  • tmbotg – Random TMBG tweeting bot, sometimes interacted with by humans due to serendipity
  • Why limit to text? Is software that generates a song based on your day’s quantified self data creating a story?
  • Shadows of Mordor’s Nemesis System as a storytelling engine – characters continue to exist when you aren’t looking, maintain the thread without you
  • Games as half-way points: PROCJAM’s The Inquisitor as procedural murder mystery
  • NaNoGenMo – Software generated novels
  • Eugene Goostman – Chatbot & Winner of the Loebner Prize.  13-year-old Ukrainian boy personification: more constraints (space on Twitter, language barrier with Eugene) result in increased credibility
  • Deus Ex Machina interactive theater project in Austin, SMS polling to a web UI to allow for story decisions
  • Communal entertainment as a cultural touchstone: In a world where everyone gets personalized entertainment, does it become harder to relate to other people?  (No more watercooler conversations?)
  • Storium as a story generation human/software collaboration system

We had a great crowd for the conversation, and even managed to be “Hot” in the schedule.  Thanks to everyone who was able to make it!

Schedule-Hot

Games That Play Themselves

Some thoughts on RPGs and God games that keep playing when you aren’t watching, and what new hardware platforms like the Raspberry Pi and cheap tablets might mean for them.

A few days ago a new iOS app called Dreeps landed in my news feed, heralded with headlines like Maybe The Laziest RPG You Could Ever Play and A Video Game That Plays Itself. Dreeps is an app where a little robot boy goes on an adventure, Japanese RPG style.  You set an alarm to tell him to rest, and that’s it.  When the alarm goes off, he gets up and gets on with his adventure, fighting monsters and meeting NPCs.  There’s pixel art and chiptune audio.  Dialog is word balloons with squiggly lines for text.  It’s all very atmospheric.  You just don’t do anything, really, but watch when you want and suggest he get up when he’s resting after a fight.

Dreeps is a lot like Godville, a game I talked about in a post about Pocket Worlds back in 2012.  They’re games that appear (depending on the implementation) to be running and progressing even when you’re not around.  While Godville does its magic with text, Dreeps has neat graphics and sound.  They’re essentially the same game, though.  A singular hero you have slight control over goes on a quest.  In Godville it’s for your glory (since you’re their god); in Dreeps it’s to destroy evil (I think).


Both Dreeps and Godville are passive entertainment experiences; they’re worlds that are all about you, but not really games you play.  They’re games you experience, or perhaps we need a new word for this kind of thing.  While books and TV shows and music (although not playlists, as we’ve seen with Pandora) are hard to create for just one person’s unique enjoyment, games are great at that.  They can take feedback and craft an experience just for you, and as we build more complex technology and can access more external datasets, they can get even more unique.

Imagine a game like Dreeps where the other characters (or maybe even the enemies) are modeled algorithmically after your Facebook friends (or LinkedIn contacts).  Take their names, mash them through a fantasy-name-izer, do face detection and hue detection to pick hair color and eye color, maybe figure out where they’re from (geolocated photos, profile hometowns or checkins) for region-appropriate clothing.  Weather from where they are, or where your friends live, maybe playing on an appropriate map.  You could even use street view and fancy algorithms to identify key regional architectural elements and generate game levels that ‘feel’ like the places they live.  That starts to get pretty interestingly personalized, though much less predictable.
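
Just to make that idea a little more concrete, here’s a tiny hypothetical sketch of the friend-to-character pipeline.  Everything in it is made up for illustration: a real version would use actual face detection and geolocation APIs, while this one just derives attributes from a hash of the name so the example stays self-contained.

```python
import hashlib

# Hypothetical sketch: turn a (made-up) social profile into a game character.
# A real version would use face detection and geodata; here attributes are
# derived from a hash of the name so the example is self-contained.
HAIR = ["black", "brown", "red", "silver"]
SYLLABLES = ["thar", "iel", "dor", "wyn", "gar", "ith"]

def fantasy_name(real_name):
    h = hashlib.md5(real_name.encode()).digest()
    return (SYLLABLES[h[0] % len(SYLLABLES)] + SYLLABLES[h[1] % len(SYLLABLES)]).title()

def character_from_friend(profile):
    h = hashlib.md5(profile["name"].encode()).digest()
    return {
        "name": fantasy_name(profile["name"]),
        "hair": HAIR[h[2] % len(HAIR)],                        # stand-in for hue detection
        "region": profile.get("hometown", "the borderlands"),  # stand-in for geodata
    }

print(character_from_friend({"name": "Alex Rivera", "hometown": "Austin"}))
```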

Animal Crossing screenshot

Mike Diver over at Vice posted an article about Dreeps titled I Am Quite OK With Video Games That Play Themselves, where his main point was that he’s figured out that he’s actually bad at games, and it’s nice to have something where you can enjoy the progression without worrying about your joystick skills.  Maybe Mike should spend more time with Animal Crossing, a game series I think Dreeps shares a lot of DNA with.  In Animal Crossing your character inhabits a town that progresses in real-time.  You can go fishing and dig up treasure and pick fruit and talk to the other inhabitants in your little village, but the world keeps going when you’re not playing, so if you leave it alone for a long time, you come back to a game that’s progressed without you (with the game characters wondering where you’ve been).  Dreeps is like that, but without the active user participation.  It’s like a zen Farmville.  Take out the gamification, add in some serenity.

It feels like Dreeps could be a really fantastic lock-screen-game, if that’s a thing.  You nudge your phone awake, and see your guy trudging along.  He’s always there, in a comforting, reassuring, living way.  Maybe Samsung or someone with some great cross-vertical reach could implement lock-screen or sleep-screen as a platform across TVs, phones, tablets, fridges, etc.  That’d be something.

Ant farm

I was talking to a friend of mine about these kinds of games yesterday, pondering where this is headed, and I mentioned that the experience almost feels like an Ecosphere.  Ecospheres are those totally enclosed ecosystems, where aside from providing a reasonable temperature and sunlight, you’re a completely passive observer. There’s something nice about walking by and peeking in on it every once in a while.  Something comforting about knowing that even when you’re not watching it’s going on about its fantastically complex business without you.  But there’s also a spiritual weight to it, because it’s a thing that could cease to exist.  I could cover the Ecosphere with a sheet or leave it out in the cold, I could delete Godville or Dreeps from my phone, or have my phone stolen, unable to retrieve my little robotic adventurer.

The weight we carry with these sorts of things isn’t huge yet.  In fact, I stopped checking in on my Godville character a few months ago, after over a year of nearly daily care.  Sometimes you just lose the thread.  But these systems are going to become more complex, more compelling.  They’re going to have more pieces of ourselves in them.  How would I feel if a friend of mine was a major character in Dreeps, always showing up to help me out, and then he died in real life?  What if Dreeps decides to shutter their app, or not release an upgrade for the new phone I get after that?  Would I leave my device plugged in, forever stuck at iOS whatever, just so the experiences could keep going?  The Weavrs I created for myself back in 2012 are gone, victims of the onward march of technology and the unportability of complex cloud-based systems.  I’m fortunate that I never got too attached.  Dreeps is an app, but there’s still a lot there outside of my control.

GodBender

I’m particularly interested in where this stuff intersects with physical objects.  Tamagotchis are still out there, and we’re building hardware with enough smarts to be able to create interesting installations.  There’s an Austin Interactive Installation meetup I keep meaning to go to that’s probably full of folks who would have great ideas about this.  Imagine a pico-projector or LCD screen and a Raspberry Pi running a game like Dreeps, but with the deep complexity and procedural generation systems of Dwarf Fortress.  Maybe a god game like Populous, with limited interaction.  You’d be like Bender in Godfellas, watching a civilization grow.  Could that sit in your home, on your desk or by the bookshelf, running a little world with little adventurers for years and years?  Text notifications on your phone when interesting things happened.  A weekly email of news from their perspective?  As it sat on your desk for longer, would it be harder and harder to let go of?  When your kids grew up, would they want to fork a copy and take it with them?

Four years ago there were no low-power, GPU-sporting Raspberry Pis or globally interconnected Nest thermostats or dirt-cheap tablet-sized LCD screens or PROCJAM.  Minecraft was still in alpha, the indie game scene hadn’t exploded, the App Store was still young, procedural content generation was a niche thing.  Now all those pieces are there, just waiting to be plugged together.  So who’s going to be the first one to do it?

Data Day Texas 2015 Recap

Saturday was Data Day Texas (twitter), a single-day conference covering a variety of big data topics up at the University of Texas’s conference center.  I went in my HP Helion big data guy role, and my wife Irma went as a python developer and PyLadies ATX organizer.  I’ve written up some notes on the conference for those interested and unable to attend.  As far as I know, there weren’t any recordings made, so these notes may be more useful than they would be for better-archived conferences.

The conference was held at the University of Texas’s Conference Center.  It’s a nice facility, and probably appropriate for the number of people, but I think the place they hold Lone Star Ruby is a little friendlier.  Conference organizers estimated the turnout at about 600 folks.  From what I saw, when presenters asked questions like ‘how many of you are x’, the audience breakdown was something like:

  • 70% app developers (not clear # of big data app vendors vs devs wanting to use big data)
  • 10% data scientists
  • 10% business types
  • 10% ops people

Big takeaways were that landscape immaturity is a big deal, and that’s forcing people to weigh trade-offs between the approaches they think are right and the ones with the most traction (the specific example was Samza vs Spark Streaming at Scaling Data), because nobody wants to commit to building out all the features themselves, or to getting stuck with the also-ran.  This is a problem for serious developers who want to architect or build systems with multi-year lifespans.  Kafka got mentioned a lot as a glue piece between parts of data pipelines, both at the front and at the back.  Everybody was talking about Avro and Parquet as best practice formats, and there were lots of calls not to just throw CSVs into HDFS.  There was a Python Data Science talk that ended on a somewhat gloomy note (the chance to build a core Python big data tool may have passed, and a lot of work will need to be done to stay competitive, slides at http://www.slideshare.net/wesm/pydata-the-next-generation).

The specific sessions I went to:

A New Year in Data Science: ML Unpaused – Paco Nathan from Databricks

A talk that wandered through the ecosystem.  Paco’s big into containers right now.  Things he specifically called out as good:

The Thorn in the Side of Big Data: Too Few Artists by Christopher Ré

A Few Useful Things to Know about Machine Learning by Pedro Domingos

He emphasized focusing on features, not algorithms, as you develop your big data solutions.  Don’t get tied to a model, as our practices are all around proving or disproving models.  Build something that helps you build models.

Machine Learning: A Historical and Methodical Analysis (Historic, AI Magazine 1983)

He recommended the Partially Derivative Podcast, too.

Application Architectures with Hadoop – Mark Grover

Related to the O’Reilly book: http://shop.oreilly.com/product/0636920033196.do

Mark talked about the likely tradeoffs weighed in building a Google Analytics-style clickstream processing pipeline.  He talked about Avro and Parquet, optimizing partition size (>1 gig of data per day = daily partitions, <1 gig = monthly/weekly), Flume vs Kafka and Flume + Kafka, Kafka Channel as a buffer to ensure non-duplication, Spark Streaming as a micro-batch framework, and the tradeoffs of resiliency vs latency.  I think the clickstream analytics example is one of the ones in the book, so if this is interesting and you want more details, just buy an early access copy.
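
To make that partition-sizing rule of thumb concrete, here’s roughly how the resulting Hive-style directory layout might differ.  The paths and the exact threshold are just my reading of the advice, not something from the talk.

```python
from datetime import date

# Rough sketch of the partition-sizing rule of thumb: about a gig or more of
# data per day justifies daily partitions, otherwise fall back to monthly.
# Paths and the threshold are illustrative, not from the talk.
def partition_path(base, day, est_daily_gb):
    if est_daily_gb >= 1.0:
        return f"{base}/year={day.year}/month={day.month:02d}/day={day.day:02d}"
    return f"{base}/year={day.year}/month={day.month:02d}"

print(partition_path("/data/clickstream", date(2015, 1, 10), est_daily_gb=4.2))
print(partition_path("/data/clickstream", date(2015, 1, 10), est_daily_gb=0.2))
```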

Beyond the Tweeting Toaster – P. Taylor Goetz

A general talk about sensors, Arduino, and Hadoop.  The demo was a tweeting IoT device, and Irma won it in the giveaway!

Real Time Data Processing Using Spark Streaming – Hari Shreedharan

Hari talked about Spark Streaming’s general use cases.  Likely flow was:

Ingest (Kafka/Flume) -> Processing (Spark Streaming) -> R/T Serving (HBase/Impala)

He talked about how Spark follows the DAG to re-create results as its fault-tolerance model.  This was pretty cool, and an interesting way of thinking about the system.  Because you know all the steps taken to create the data, you can re-generate it at any time if you lose part of it by tracing it back and running those steps on that data subset again.  Spark uses Resilient Distributed Datasets to do this, and Spark Streaming essentially creates timestamped RDDs based on your batch interval (default 2 seconds).

There’s good code reuse between Spark Streaming and regular Spark, since you’re running on RDDs in the same code execution environment.  No need to throw your code away and start over if you want to switch between batch and micro-batch.
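
For reference, here’s a minimal PySpark sketch of that Kafka -> micro-batch flow, assuming the pyspark.streaming.kafka helper that shipped around Spark 1.3.  The broker address, consumer group, and topic name are placeholders.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils  # needs the spark-streaming-kafka package available

sc = SparkContext(appName="EventCounts")
ssc = StreamingContext(sc, 2)  # 2-second micro-batches -> a timestamped RDD per batch

# Placeholder ZooKeeper quorum, consumer group, and topic map.
events = KafkaUtils.createStream(ssc, "zk-host:2181", "demo-group", {"events": 1})

counts = (events.map(lambda kv: kv[1])             # Kafka messages arrive as (key, value)
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))  # the same transformations you'd run in batch

counts.pprint()
ssc.start()
ssc.awaitTermination()
```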

Containers, Microservices and Machine Learning – Paco Nathan

On the container and microservices front, Paco recommended watching Adrian Cockcroft’s DockerCon EU keynote, State of the Art In Microservices.  He then walked through an example using TextRank and PageRank as a way to create keyword phrases out of a connected text corpus (specifically Apache mailing lists).
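
Here’s a toy version of that TextRank idea, using networkx’s PageRank on a word co-occurrence graph.  It’s a simplified sketch of the technique, not Paco’s actual pipeline.

```python
import networkx as nx

# Toy TextRank: rank words by PageRank over a co-occurrence graph built with a
# sliding window, then keep the top-scoring words as keyword candidates.
def keywords(text, window=3, top_n=5):
    words = [w.strip(".,").lower() for w in text.split() if len(w) > 3]
    graph = nx.Graph()
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            graph.add_edge(words[i], words[j])
    ranks = nx.pagerank(graph)
    return sorted(ranks, key=ranks.get, reverse=True)[:top_n]

print(keywords("Spark streaming jobs reuse Spark batch code, so streaming "
               "pipelines and batch pipelines share most of their Spark code."))
```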

He mentioned Databricks’ Spark training resources, which look extensive: http://databricks.com/spark-training-resources

Building Data Pipelines with the Kite SDK – Joey Echevarria

http://www.kitesdk.org/

Kite is an abstraction layer between the engine and your data that enforces best practices (always use compression, for instance).  It uses a db->table->row model that it calls namespace->dataset->entity.  He mentioned that they’d seen little performance difference between using raw HDFS vs Hive for ETL tasks, all things considered.  Use Avro for row-based data (when you need context) and Parquet for column-oriented data (when you need to sum/scan or only deal with a few columns).
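
To make the row-versus-column distinction concrete, here’s a small sketch using fastavro and pyarrow, which are my choice of Python libraries for illustration (nothing from the talk): Avro writes whole records together, while Parquet lets you read back just the columns you need.

```python
import fastavro
import pyarrow as pa
import pyarrow.parquet as pq

records = [{"user": "a", "url": "/home", "ms": 120},
           {"user": "b", "url": "/cart", "ms": 340}]

# Row-oriented: Avro keeps each record together, good when you need full context.
schema = {"name": "Click", "type": "record",
          "fields": [{"name": "user", "type": "string"},
                     {"name": "url", "type": "string"},
                     {"name": "ms", "type": "int"}]}
with open("clicks.avro", "wb") as out:
    fastavro.writer(out, schema, records)

# Column-oriented: Parquet lets a scan touch only the columns it needs.
table = pa.table({key: [r[key] for r in records] for key in records[0]})
pq.write_table(table, "clicks.parquet")
print(pq.read_table("clicks.parquet", columns=["ms"]).to_pydict())
```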

Building a System for Event-Oriented Data by Eric Sammer, CTO of Scaling Data

A great talk on practical problems building large scale systems.  Scaling Data has built a product that essentially creates a Kafka firehose for the enterprise datacenter, re-creating a lot of tooling I’ve seen at Facebook and other places, and making a straightforward-to-install enterprise product out of it.  They pipe stuff into Solr for full-text search (a la Splunk), feed dashboards for alerts, archive everything for later forensics, etc.

He recommended this blog post by Jay Kreps at LinkedIn on real-time data delivery mechanics:

http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying

He said their biggest nut to crack was the two-phase delivery problem: guaranteeing that events would only land once.  They write to a tmp file in HDFS, close the HDFS file handle and ensure it’s synced, then mark the events as consumed in Kafka, then go process the tmp file.
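
That land-exactly-once dance looks something like the sketch below.  The hdfs and consumer objects here are hypothetical duck-typed stand-ins, not any particular library’s API; the point is just the ordering of the steps.

```python
import uuid

# Sketch of the stage-then-commit pattern described above. `hdfs` and `consumer`
# are hypothetical clients (create/rename and commit methods), not a real API.
def land_batch(hdfs, consumer, events, final_dir):
    tmp_path = f"{final_dir}/_tmp/{uuid.uuid4()}.avro"

    handle = hdfs.create(tmp_path)           # 1. write the batch to a tmp file
    handle.write(serialize(events))
    handle.flush_and_sync()                  # 2. make sure it's durably on disk
    handle.close()

    consumer.commit()                        # 3. only now mark the offsets consumed in Kafka

    hdfs.rename(tmp_path, f"{final_dir}/{uuid.uuid4()}.avro")  # 4. promote the tmp file for processing

def serialize(events):
    return "\n".join(events).encode()        # placeholder serialization
```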

He talked a lot about Summingbird.  He said it was probably the right way to add stuff up, but that it was too big and gangly, so they’d written something themselves.  He recommended this paper by Oscar Boykin on Summingbird that covers a lot of the problems building this kind of system.

He also talked about Samza (the best approach for the transform part of the pipeline, in their opinion, but low level and lacking community support), Storm (rigid, slow in their experience), and Spark (they hate it, but the community likes it, so they use it).

Wrapup

It was a harried conference (no lunch break, no afternoon break; if you were feeling burned out, you had to skip a session), but that might be the nature of a one-day brain-binge.  The organizers were happy to reserve a table for PyLadies in the Data Lounge, and they had a mini-meetup and got a little outreach done.

SXSW 2014: The One About Privacy

Kramer SXSW Cloud Placards

Two weekends ago SXSW Interactive graced our fair city, and as usual, I was there and even spoke a little.  Thankfully my house wasn’t robbed this time.

This year’s SXSW Interactive was heavy on privacy, internet security, and wresting our freedom back.  There weren’t keynotes from social players aiming to get you to join their thing; instead it was Edward Snowden, Julian Assange, and Neil deGrasse Tyson telling you to learn and think for yourself.  It’s a refreshing change, and I’m eager to see what the tone of next year will be.

SXSW started really going on Friday this year, as the conference pushes up against its 5-day time window.  There wasn’t much in the morning on Friday.  Eric Schmidt, Jared Cohen, and Steven Levy had an interesting talk, mainly riffing on their book, The New Digital Age.  Eric and Jared are mainly concerned with technology’s greater impacts, but there’s a certain large corporate mindset in what they say that clearly paints Google as a crusader for good.  You wouldn’t expect much else from the authors, but if you read it, keep that in the back of your mind.  There are other opinions.  One notable excerpt, considering the Wikileaks presentation the next day, was Eric Schmidt’s argument against transparency, which boils down to ‘Imagine what would happen if everything was transparent and open, nations wouldn’t be able to defend themselves from aggressors because they’d have to publish their attack plans beforehand,’ which is just, well.  Ugh.

Show Your Work!

Next up was Austin Kleon, who’s on tour supporting his new book, Show Your Work!  Austin’s keynote was the first one I really got something out of, my first big takeaway of the conference, which was that the concept of the lone creative visionary genius is patently false, that we’re all products of the environment we’re in, and that by showing your work in progress and getting involved and contributing, you can be a Scenius (hat tip to Brian Eno, there).  The people to avoid when you enter a creative community are the Vampires (people who feed off others’ energy to create their own work) and Human Spam (people who exist only to promote their thing, and are tone deaf to anything else).  Once SXSW pushes up video, Austin’s is a keynote worth checking out.

Julian Assange at SXSW 2014

Saturday was Julian Assange, and you can watch the talk yourself here, but the long and short was that privacy is good, governments do bad things, people will act better if they know that what they’re doing will be made public later, and it’s impossible to do bad things on a large scale without creating a paper trail.  Julian is obviously a smart guy, but he isn’t a very dynamic speaker, and takes about three times as long to answer a question as he should.  He was in front of a green screen, and they used this constantly dripping WikiLeaks graphic that was incredibly distracting.  Pair that with his slow delivery, and it didn’t make for a very exciting presentation.

Neil deGrasse Tyson at SXSW 2014

There were exciting people at SXSW, though, and this year’s most exciting (to the point that he won Speaker of the Event) was Neil deGrasse Tyson.  Neil is a dynamic speaker, knows how to rally a crowd, and was there on the eve of the premiere of his new series, Cosmos.  The takeaway from Neil is that science is cool, we keep learning new things, and the universe is an amazing, mind-bogglingly-immense place; if you consider it in relation to our tiny planet and our tiny lives, it really puts everything in perspective the next time the kid is screaming and smearing blackberries on the table.

SXSW Gaming Expo

After Tyson’s talk we headed over to the SXSW Gaming Expo, which is a free event and a lot of fun for children of all ages.  They had a pretty big CCG pit, shown above, an area just for indie video games, and a big Gaming Tournament area where they were playing something I don’t keep up with anymore.

Escape the Internet

Saturday night was the EFF-Austin SXSW party, In the future everything will work: Cyberpunk 2014.  It was a great time, with a cool Museum of Computer Culture exhibit of old machines and cutting-edge-way-back-when HyperCard decks.  Due to another commitment we weren’t able to stay late, but there was a panel discussion with Cory Doctorow, Bruce Sterling, Gareth Branwyn, William Barker and my buddy Jon Lebkowsky.  I’m bummed that I missed that, but you gotta do what you gotta do.

SXSW 2014 Session

Sunday was my session, a Core Conversation titled A Cloud of One’s Own with me and Dave Sanford of the Austin Personal Cloud meetup group.  We had a really good turnout, and had a great, wide-ranging conversation on everything from Quantified Self analytics to Home Automation to crypto and authentication standards.  Dave prefaced the session with Life Simplified with Connected Devices, a piece of design fiction from the Connected Devices Laboratory at BYU.  It was written by Phil Windley’s son (Phil of Fuse connected car and Kynetx fame).

SXSW Bitcoin ATM

After my talk we hit the trade show.  There was a bitcoin ATM this year, and NASA had a great booth with a 1/10th scale inflatable model of their new rocket that’s intended to take astronauts to Mars (this is Irma with a smaller model). There were some other interesting booths, including another good WordPress offering.  Irma and I grabbed some lunch, and while we were eating made our biggest missed connection of the whole conference, when only a sheet of glass separated us from Community’s own Troy Barnes, Donald Glover.  It was a very squee moment.  Donald was in town as Childish Gambino on his Deep Web Tour, and did a hackathon with WordPress.  We didn’t make it out to any of his events, which is a shame, but it was nice to have him here.

Edward Snowden at SXSW

Monday started with Cory Doctorow talking with Barton Gellman about security and privacy and leaks.  It was arguably a more interesting talk than the actual Snowden interview that followed. The Snowden interview (video here) was a videocast with more technical challenges than Assange’s.  Assange was using Skype, though they lost audio.  Snowden was using Google Hangouts, but said he was bouncing through 7 proxies to get there.  The Snowden discussion was set up with two other speakers from the ACLU, so if he couldn’t make it, there would still be a talk, but this relegated him to a ‘voice on the phone’ role.  He’s very sharp, that Mr. Snowden, and knows how to make his point quickly.  It was an interesting contrast to Assange.  There weren’t any big bombshells from the Snowden interview, but it was an interesting moment in time.

After the Snowden talk I walked over to the Identity, Reputation, and Personal Clouds meetup session, as the organizers had been kind enough to come to ours.  We had a good discussion, and I ran into Chris Dancy, who I didn’t know before but seems to be the most connected man on the planet.  It was nice of him to come to the meetup, and we certainly had a rousing discussion.

Infinite Future Panel

After the meetup was a great talk and interview with Adam Savage (I walked by someone I eventually recognized as Jamie Hyneman on the walk back to the Convention Center).  I tried to get into the Infinite Future panel with Joi Ito, Bruce Sterling, Warren Ellis, and Daniel Suarez, but the room was tiny and way overbooked, and Irma couldn’t get in, so we headed back to the trade show instead, and then home.  I heard it was great, and I hope there’s video or a recording somewhere.

Takei at SXSW

Tuesday was a quieter day.  We showed up for George Takei’s interview, which was funny and interesting, though for someone whom the US put into a detention camp, he has some interesting ideas on what Ed Snowden should do.  After that we bumped into our friend Carlos Ovalle, who’d been live tweeting the conference.  Carlos won my Personal Cloud SXSW Badge Contest the year before, and it was great to see the conference was good enough for him to come back.

After lunch it was Tim Ferriss talking about his new show The Tim Ferriss Experiment. He regaled us with stories of the brutal nature of shooting reality TV on a tight schedule, but the biggest takeaway I had was a quote from Jim Rohn on the law of averages: “You are the average of the five people you spend the most time with.” That really struck me, as I can see that pretty clearly, but it’s interesting to quantify.

Jon Lebkowsky Introduces Bruce Sterling at SXSW 2014

The last session of SXSW is where Bruce Sterling gives his thoughts, and this year was no different.  My buddy Jon Lebkowsky introduced him (left).  Bruce’s talk was subdued this year, covering notable people who weren’t there (or were only there virtually), and who should be invited later.  It wasn’t a barn burner like it had been in the past, but I think Bruce’s thoughts were perhaps more with the things he is building and making, where he’s getting his hands dirty with real stuff.  It’s worth your time to track down some previous talks, though, because they’re great.

To close off, I will leave you with this, a photo of the traditional slice of Peanut Butter Mousse Pie that we had at Moonshine after SXSW wrapped up.  Moonshine and this pie are almost a tradition in themselves at this point.

Moonshine Peanut Butter Mousse Pie

Following up on Austin’s idea of scenius and sharing your work, I’ve started a newsletter, Muniment.  I send it out every week or two, and preview new stuff that will end up on the blog, or put more context around interesting stuff I see.  You should sign up.

Book Review: On Intelligence by Jeff Hawkins

On Intelligence

With On Intelligence, I find myself in the unique position of having heavily evangelized a book before I’ve even finished it.  I read half of it and started buying copies for friends.  This is something I’ve never done before, so if you’re busy, here’s the quick tl;dr: if you’re interested in how intelligence works, namely how the brain functions at a high level (learning patterns, predicting the future, forming invariant representations of things) and how we might functionally simulate that with computers, do not pass go, do not collect $200, go buy a copy (Amazon, Powells) and read it.

Still here?  Good, because I have a lot to say.  This isn’t really a book review, it’s more of a book summary and an exhortation to action.  You’ve been warned.

A Little Backstory

Earlier this year I went to OSCON, and at OSCON the keynote that impressed me the most was by Jeff Hawkins, creator of the PalmPilot and founder of Handspring.  Here’s the video:

As appropriate for an Open Source conference, Jeff’s company, Numenta, was announcing that they were open sourcing their neocortical simulator library, NuPIC, and throwing it out there for people to hack on.  NuPIC was based on the work Numenta had done on neocortical simulations since Jeff wrote the book On Intelligence in 2005.  NuPIC is software that simulates the neocortex, the sheet of grey matter on the outside of your brain where all your experiences live.  3 years of French?  It’s in the neocortex.  The ability to figure out that two eyes and a nose equals a face?  The neocortex.  The neocortex even has the ability to directly control your body, so that muscle memory you rely on to do that thing you do so well, like riding a bike or painting or driving a car?  That’s all in your neocortex.  It’s the size of a large dinner napkin (the largest in humans, but every mammal has one), is about as thick as 7 business cards, and wraps around the outside of your head.  It is you.

Intrigued, I went to the full length session that the Numenta team presented…

One of their main demos was an electrical consumption predictor for a gymnasium.  When initialized, the NuPIC system is empty, like a baby’s brain.  Then you start to feed it data, and it starts to try to predict what comes next.  At first, its predictions fall a little behind the data it’s receiving, but as the days of data go by, it starts to predict future consumption an hour out (or whatever you’ve configured), and it gets pretty good at it.  Nobody told NuPIC what the data was; just like our DNA doesn’t tell our brains about French verbs, the structure is there, and with exposure it gets populated and begins to predict.
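
The feed-data-in, get-a-prediction-out loop looked roughly like the sketch below.  To be clear, this is a dumb stand-in (an hour-of-day running average), nothing like NuPIC’s cortical learning algorithm internally; it only mimics the shape of the online learn-and-predict loop from the demo.

```python
from collections import defaultdict

# Stand-in online predictor: learns a running average of consumption per hour
# of day and predicts the next hour. Not NuPIC; it just mimics the
# feed-one-record, get-one-prediction loop from the demo.
class HourOfDayPredictor:
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def learn_and_predict(self, hour, kwh):
        self.totals[hour] += kwh
        self.counts[hour] += 1
        nxt = (hour + 1) % 24
        if self.counts[nxt] == 0:
            return None  # nothing learned about that hour yet
        return self.totals[nxt] / self.counts[nxt]

model = HourOfDayPredictor()
for day in range(3):                      # a few days of fake gym load
    for hour in range(24):
        observed = 5.0 + (10.0 if 8 <= hour <= 18 else 0.0)
        model.learn_and_predict(hour, observed)

print("predicted 9am load:", model.learn_and_predict(8, 15.0))
```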

At the end of the talk, their recommendation for learning more about this stuff was to read On Intelligence.  So, eventually, that’s what I did.

A Little Hyperbole

The simulation, in software and silicon, of the biological data handling processes, and building software off of that simulation, is the most interesting thing I’ve seen since Netscape Navigator.  Everything up to your iPhone running Google Maps is progressive enhancement and miniaturization of stuff I’ve seen before.  Building brains feels different.

Newton Ad

I have a Newton Messagepad 2000 around here somewhere.  It had mobile email over packet radio with handwriting recognition in 1997.  In 2001 I was using a cell phone with a color screen to look up directions and browse web sites in Japan.  It’s all iterating, getting better bit by bit, so that when we look back in 10 years we think that we’ve made gigantic leaps.  Have we?  Maybe, but software is still stubbornly software-like.  If I repeat the same error 10 times in a row, it doesn’t rewrite itself.  My computer doesn’t learn about things, except in the most heavy handed of ways.

Sir You Are Being Hunted Map

Writing and shipping working software is hard; there’s rarely time to focus on the things that aren’t necessities.  In video games this has given us nicer and nicer graphics, since for the last few generations, graphics have sold games.  Now developers are starting to realize there’s a huge swath of interesting stuff you can do that doesn’t require a 500 person art department and a million dollar budget.  Sir, You Are Being Hunted has procedural map generation.  RimWorld has AI storytellers that control events in the game world to create new experiences.

Looking beyond the game space, a few weeks ago I was talking with a large networking company about some skunkworks projects they had, and one of them was a honeypot product for catching and investigating hack attempts.  The connections between deep simulations like Dwarf Fortress or the AI Storyteller in RimWorld and how a fake sysadmin in a honeypot should react to an intruder are obvious.  If it’s all scripted and the same, if the sysadmin reboots the server exactly 15 seconds after the attacker logs in, it’s obviously fake.  For the product to work, and for the attacker to be taken, it has to feel real, and in order to fool software (which can pick up on things like that 15 second timer), it has to be different every time.

One thing that these procedural and emergent systems have in common is that they aren’t rigidly structured programs.  They are open to flexibility, they are unpredictable, and they are fun because unexpected things happen.  They’re more like a story told by a person, or experiencing a real lived-in world.

I believe that to do that well, to have computers that surprise and delight us as creators, is going to require a new kind of software, and I think software like Numenta’s NuPIC neocortical simulator is a huge step in that direction.

Let’s Deflate That a Bit

Ok, so NuPIC isn’t a whole brain in a box.  It’s single threaded, it’s kind of slow to learn, and it can be frustratingly obtuse.  One of the samples I tried did some Markov chaining-style text prediction, but since they fed each letter into the system as a data point instead of whole words, the system would devolve into returning ‘the the the the the’, because ‘the’ was the most common word in the data set I trained it with.
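
For comparison, here’s what a plain word-level Markov chainer looks like; feeding it whole words instead of letters avoids that kind of collapse into ‘the the the’, though it has its own brand of nonsense.  This is the generic technique, not the NuPIC sample.

```python
import random
from collections import defaultdict

# Generic word-level Markov chain, for comparison with the letter-fed sample
# mentioned above. Not NuPIC; just the classic technique.
def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

chain = build_chain("the brain predicts the future and the future surprises the brain")
print(babble(chain, "the"))
```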

Neocortical simulators are a new technology in the general developer world.  We’ve had brute force data processing systems like Hadoop, methods developed to deal with the problems of the Googles of the world, and now we have NuPIC.  The first steps towards Hadoop were rough, and the first steps towards neocortical simulators are going to be rough.

It’s also possible that we’re entering another hype phase, brought on by the rise of big data as the everywhere-buzzword.  We had the decades of AI, the decade of Expert Systems, the decade of Neural Networks, but without a lot to show for it.  This could be the decade of the neocortex, where in 10 years it’ll be something else, but it’s also possible that just like the Web appeared once all the pieces were in place, the age of truly intelligent machines could be dawning.

Oh, This Was a Book Review?

It’s hard to review On Intelligence as a book, because how well it’s written or how accessible the prose may be is so much less important than the content.  Sandra Blakeslee co-wrote the book, and undoubtedly had a large hand in hammering Jeff’s ideas into consumable shape.  It isn’t an easy read due to the ideas presented, but it’s fascinating, and well worth the effort.

In the book Jeff describes the memory-prediction framework theory of the brain.  The theory essentially states that the neocortex is a big non-specialized blob that works in a standard, fairly simple way.  The layers in the sheet of the neocortex (there are six of them) communicate up and down, receiving inputs from your sensory organs, generalizing the data they get into invariant representations, and then pushing predictions down about what they will receive data about next.  For instance, the first layer may get data from the eye and say, there’s a round shape here, and a line shape next to it.  It pushes ‘round shapes’ and ‘line shapes’ up to the next level, and says “I’ll probably continue to see round shapes and line shapes in the future”.  The bouncing around of your natural eye movements gets filtered out, and the higher levels of the brain don’t have to deal with it.  The next level up says, “These round shapes and line shapes seem to be arranged like a nose, so I’m going to tell the layer up from me that I see a nose”.  The layer up from that gets the ‘I see a nose’ and two ‘I see an eye’ reports and says, “Next layer up, this is a face”.  If it gets all the way to the top and there’s no mouth, which doesn’t match the invariant representation of ‘face’, error messages get sent back down, warning flags go off, and we can’t help but stare at poor Keanu…

Neo Mouth

These layers are constantly sending predictions down (and across, to areas that handle other related representations) about what they will experience next, so when we walk into a kitchen we barely notice the toaster and the microwave and the oven and the coffee maker, but put a table saw onto the counter and we’ll notice it immediately.
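Here’s a toy version of that kitchen moment.  To be clear, this is only the flavor of the memory-prediction idea, not Jeff’s actual cortical algorithm: a “layer” remembers what it has seen in a context, predicts more of the same, and only the mismatches get passed up the hierarchy.

```python
# Toy sketch of the prediction-and-surprise idea -- not the book's cortical
# algorithm, just the flavor: a layer predicts what it expects in a context,
# and only mismatches bubble upward.
from collections import defaultdict

class ContextLayer:
    def __init__(self):
        self.expected = defaultdict(set)   # context -> things seen there before

    def learn(self, context, item):
        self.expected[context].add(item)

    def observe(self, context, item):
        if item in self.expected[context]:
            return None                    # matches the prediction: nothing to report
        self.learn(context, item)          # surprising once, expected next time
        return f"surprise in {context}: {item}!"

layer = ContextLayer()
for thing in ("toaster", "microwave", "oven", "coffee maker"):
    layer.learn("kitchen", thing)

for thing in ("toaster", "coffee maker", "table saw"):
    report = layer.observe("kitchen", thing)
    if report:
        print(report)   # only the table saw makes it up the hierarchy
```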

As we experience things, these neurons get programmed, and as we experience them more, the connections to other things strengthen.  I figure this is why project-based learning works so much better than rote memorization: you’re cross-connecting more parts of your brain, making it easier for that information to pop up later.  Memory palaces probably work the same way.  (I’m also halfway through Moonwalking with Einstein, about that very thing.)

So, Have We Mentioned God Yet?

The Microcosmic God

This is where things start to get weird for me.  I grew up in a very religious family, and a large part of religion is that it gives you an easy answer to the ‘what is consciousness’ question when you’re young.  Well, God made you, so God made you conscious.  You’re special, consciousness lets you realize you can go to heaven, the dog isn’t conscious and therefore can’t, etc.

About a third of the way into On Intelligence I started having some minor freakouts, like you might have if someone let you in on the Truman Show secret.  It was like the fabric of reality was being pulled back, and I could see the strings being pulled.  Data in, prediction made, prediction fulfilled.  Consciousness is a by-product of having a neocortex.  (Or so Jeff postulates at the end of the book.)  You have awareness because your neocortex is constantly churning on predictions and input.  Once you no longer have predictions, you’re unconscious or dead, and that’s that.

Kid + Robot

That’s a heavy thing to ponder, and I think if I pondered it too much, it would be a problem.  One could easily be consumed by such thoughts.  But it’s like worrying about the death of the solar system.  There are real, immediate problems, like teaching my daughter how stuff (like a Portal Turret) works.

Let’s Wrap This Thing Up With A Bow

I’m sorry this post was so meandering, but I really do think that neocortical simulators and other bio-processing simulations are going to be a huge part of the future.  Systems like this don’t get fed a ruleset; they learn over time, and they can continue to learn, or be frozen in place.  Your self-driving car may start with a car brain that’s driven simulated (Google Street View) roads millions of miles in fast-forward, and then thousands of miles in the real world.  Just like everyone runs the same iOS, we could all be running a neocortex built on the same data.  (I imagine that really observant people will be able to watch Google’s self-driving cars and, by minor variations in their movements, tell what software release they’re running.)  Or we could allow ours to learn, adjust its driving patterns to be faster, or slower, or more cautious.

The power of software is that once it is written, it can be copied with nearly no cost.  That’s why software destroys industries.  If you write one small business tax system, you can sell it a million times.  If you grow a neocortex, feed it and nurture it, you’ve created something like software.  Something that can be forked and copied and sold like software, but something that can also continue to change once it’s out of your hands.  Who owns it?  How can you own part of a brain?  Jeff writes in the book about the possibilities of re-merging divergent copies.  That’s certainly plausible, and starts to sound a whole lot like what I would have considered science fiction 10 years ago.

I finally finished On Intelligence.  I have Ray Kurzweil’s book, How to Create a Mind, on my nightstand.  I’ve heard they share a lot of similar ideas.  Ray’s at Google now, solving their problem of understanding the world’s information.  He’s building a brain, we can assume.  Google likes to be at the forefront.

Doctor

We could throw up our hands and say we’re but lowly developers, not genius computer theorists or doctors or what have you.  The future will come, and all we can do is watch.  The problem with that is that Google’s problems will be everyone’s problems in 5 years, so for all the teeth gnashing about Skynet and BigDog with a Google/Kurzweil brain, it’s much more productive to actually get to work getting smarter and more familiar with this stuff.  I wouldn’t be surprised if by 2020 ‘5+ Years Experience Scaling Neocortical Learning Systems in the Cloud’ was on a lot of job postings.  And for the creative, solving the problem of how the Old Brain’s emotions and fears and desires interface with the neocortex should be rife with experimental possibilities.

NuPIC is on github.  They’re putting on hackathons.  The future isn’t waiting.  Get to it.

Updates

Here’s a video from the Goto conference where Jeff talks about the neocortex and the state of their work.  This video is from October 1st of 2013, so it’s recent.  If you have an hour, it’s really worth a watch.

Book Review: Kill Decision by Daniel Suarez

Kill Decision Book Cover

Daniel Suarez’s latest techno-thriller Kill Decision isn’t a happy book.  It’s an especially unhappy book if you’re excited about quadcopters, RC planes, self-organizing swarm AI, or any of that neat, fun stuff.

Daniel’s first published book was Daemon, a novel about a programmer who, upon discovering that his time is up, creates a distributed dumb-agent network of actions and actors triggered by reports in news feeds.  The thing that made Daemon so interesting wasn’t just that concept, it was that Daniel has a really good grasp on the technology, so everything that happened in the book kind of made sense.  There was no magic bullet; it was all ‘oh, yeah, that could work’.

Kill Decision is a book about drones, specifically autonomous drones that can kill.  It was only a few years ago that I remember wondering when someone was going to strap a handgun (even a fake one) to a quadcopter and attempt a robbery by drone.  Kill Decision is a book about just that, except the handgun is quadcopter optimized and the person getting robbed is the USA.

It’s been a while since I’ve read any popular techno-thrillers, but from what I remember, Kill Decision follows the arc pretty well.  There’s a tough soldier type, a naive but smart audience proxy, a team of good guys for gun fodder, and a big bad.  The pacing is good, the details are good, and the book keeps you guessing.  I guess my only complaint is also the book’s point: in the end, with a robot that can kill, it’s really hard to figure out who the bad guy is.  In Kill Decision there isn’t a Snidely Whiplash twirling his mustache just off stage, at least that we get to see, and that lack of a direct villain gives the book a feeling of existential angst.  The bots just keep coming, and in the end, there isn’t a clear win or loss.

Lots of thrillers are spy novels with more gadgets.  They’re Jason Bourne, a lone operative outwitting the watchful, ever-present eye of big evil.  It’s a big data dream, outwitting the system.  Kill Decision is different.  Kill Decision is a zombie novel, except the zombies are cheap, deadly, swarming technology.

If you can handle that kind of anxiety, and you like books about AI, maker, and military technology, Kill Decision is an easy recommendation.  Also, go watch this video of Joi Ito interviewing Daniel Suarez at the Media Lab.  Joi gives Kill Decision two thumbs up.

Magical Objects: The Future of Craft

Marken Photograph

Of the thousands of pictures I’ve taken since I got into photography, there are only a few on display in my house.  Only one of them is what you might call professionally framed.  It’s that one, to the right.  It was taken in Marken, Netherlands, on the Wandelroute Rond Marken Over de Dijk.  Not exactly here, but close by, on a little path at the edge of an island next to the ocean.  The thing is, it isn’t a photograph.  It looks like a photograph, but it’s actually a panorama, digitally spliced together from half a dozen shots.  It’s a photograph, re-interpreted by software.  And it could be the first step on the road to something new.

Ode to a Camera Gathering Dust

A few weeks ago I read a blog post by Kirk Tuck talking about the recent drop in camera sales, and the general decline of photography as a hobby.  Kirk’s assertion was that when a lot of us got into photography, gear made a big difference.  There was the high end to yearn for, but with the right skill and tricks you could make up for it.  There were good sized communities online where you could share photos with other people in the same spot, and you were all getting a little better.  It was something you could take pride in.  Now all the gear is great.  Your cell phone camera is great.  It’s hard to stand out.  Everyone has read the same tutorials, everyone can do HDR and panoramas.  They can even do them in-camera with one button.  And as photography goes, so goes video.

Dust Bunny 3D Prints

For a while I thought that 3D printing and the maker movement might be a little like photography.  There’s plenty of gear to collect, and it can make a big difference in the final product, but skill and technique and creativity still count for a lot.  Now I’m leaning towards 3D printing and the maker movement really being a rediscovery of the physical after the birth of the age of software.  Before personal computers ate the world you could still find plenty of folks who knew about gear ratios and metallurgy and who’d put together crystal radios when they were kids.  I grew up in the 80s, and I don’t know anything about either of those things, but I was diagnosing IRQ conflicts before I liked girls.  So the maker movement is kind of new, and photography is kind of past the curve, so what’s new-new?  What’s going to eat our time and interest and energy and fill our walls and display shelves next?  What are we going to collect and tinker with and obsess over?

Beautiful, New Things

It’s been said that we’re all in the attention game now.  Attention is currency.  In an indirectly monetized world it’s what people have to give.  When you create something, you’re vying for that bit of attention.  Given that, I think we’re looking at the birth of a new kind of craft, and a new kind of object.

Let’s call them magical objects: Objects that use software and computation to break or make irrelevant their inherent limitations, for the purpose of entertaining or informing.  They’re objects that use software to amplify their Attention Quotient.  (AQ, is that a thing? It should be.)

First, I’d like you to look at a video that hit a few days ago, Box.  It’s what happens when you combine a bunch of creative folks, some big robot arms, projectors, cameras, and a whole bunch of software.

That’s pretty awesome, right?  Not really practical for your house, but pretty.  Let’s find something smaller, something more intimate.  Maybe something more tactile.  Something like… a sandbox…

Ok, now we’re getting somewhere.  It’s a sandbox that reacts to your input.  The software and the projectors and the cameras make the sandbox more than just a sand table with some water on it; the whole thing becomes an application platform, with sand and touch as its interface.  The object becomes magical.  When you look at a sandbox, you know what it can do.  When you look at an augmented sandbox, you don’t know what it does.  You have to play with it.  You have to explore.  It has a high attention quotient.
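The real augmented sandbox does this with a Kinect-style depth camera and a calibrated projector; this sketch fakes both ends and just shows the core loop, turning a height grid into terrain bands you’d project back onto the sand.

```python
# Sketch of the augmented-sandbox loop: read a height grid (the real rig gets
# this from a depth camera), classify each cell into a terrain band, and hand
# the colored frame to a projector aimed at the sand. The capture and
# projection ends are stand-ins, not any real device's API.

def classify(height):
    # Elevation bands, lowest to highest.
    bands = [(0.2, "deep"), (0.35, "water"), (0.45, "sand"),
             (0.7, "grass"), (0.9, "rock")]
    for limit, name in bands:
        if height < limit:
            return name
    return "snow"

def frame_from_heights(heights):
    """heights: 2D list of floats in [0, 1], tallest sand pile = 1.0."""
    return [[classify(h) for h in row] for row in heights]

# Pretend depth frame: a little mound of sand surrounded by a moat.
depth_frame = [
    [0.30, 0.30, 0.30, 0.30, 0.30],
    [0.30, 0.55, 0.75, 0.55, 0.30],
    [0.30, 0.75, 0.95, 0.75, 0.30],
    [0.30, 0.55, 0.75, 0.55, 0.30],
    [0.30, 0.30, 0.30, 0.30, 0.30],
]
for row in frame_from_heights(depth_frame):
    print(" ".join(f"{cell:>5}" for cell in row))
```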

These kinds of objects are going to proliferate like crazy in the next few years.  We’re already starting to see hints of it in iOS 7’s parallax wallpaper.  The only reason that parallax wallpaper exists is to make your iDevice more magical.  It serves no other purpose than to use software (head distance, accelerometer movement tracking) to overcome the limitations of hardware (a 2D display), for the purpose of delighting the user (magic).

Kids These Days

So as we think about the future, let’s step back for a second, and think about the children.  At the Austin Personal Cloud meetup a few weeks ago I had a realization that everyone in the room was probably over the age of 30, and there were plenty over the age of 50.  We have to be really careful about prognosticating and planning the future, because the world that we see isn’t the world that those in their teens and 20s see.  They have different reference points, and they’re inspired by different things.  I’ve written before about Adventure Time and The Amazing World of Gumball as training for future engineers.  But it occurs to me that when it comes to magical objects, we only need to look at the name to tell us where the inspiration for the next generation will spring from.

Luna Lovegood

Part of the thing that makes Harry Potter’s world wonderful is that things are more than they appear.  A car isn’t just a car, a hat isn’t just a hat, and a map isn’t just a map.  For all the plot-driving magical objects in Harry Potter like the Time Turner, there are plenty of wandering portraits, chocolate frog trading cards, and miscellaneous baubles.  They amp up the attention quotient of the world.  Maybe they’re the reason we don’t see Harry and Hermione checking Facebook all day, or maybe they just have awful coverage at Hogwarts.

My daughter’s about to turn 2, and her newest discovery is that if she holds a cup to her ear, it kind of sounds like the ocean.  After I showed her that, she held the cup to her ear for a good 20 minutes.  I hold the cup up to my ear, and I hear science.  She holds the cup to her ear, and she hears magic.  Her eyes are wide, and she says, “Ocean!” over and over.

We can make these magical objects now, and we have a generation that would love more meaningful interaction from physical things.  We just need to start assembling the bits and deciding on a few simple standards so we can create ecosystems of art.  We don’t have magic, but we have something that’s nearly as good.  We have software…

That’s a documentary about Processing.  You don’t need to watch the whole thing, but it’s pretty, and interesting. Processing is a programming language for visual arts.  Usually those interesting visual things live on a screen, or through a projector in space or on a building.  They rarely live in your house.  But they could, and they could be really cool.

Wherein We Sketch Out the Future

I think that by combining the artistic software movement, emergent behavior fields like procedural game world generation, and a little bit of hardware hacker know-how, we can create a new type of thing.  A magical, home object.  Let’s look at one…

Back of an Envelope Sketch

So this is a thing.  Literally a back-of-an-envelope sketch.  It’s a bowl, or a box, with an arm extending over it.  In the bowl is sand, or perhaps something more pure-white but still eco-friendly and non-toxic.  At the end of the arm is a little pod; it has two cameras in it, for stereoscopic 3D, and a pico projector.  Maybe there’s even another projector pointing up out of it.  Under the bowl is the descendant of a Raspberry Pi, or a BeagleBone Black, or something like it.  It lives on a side table or end table in your house.

This magical device runs programs.  The programs use the sand (or whatever you put under the arm) as an interface.  It can recognize other objects, maybe little shovels or pointers or what have you.  Maybe simple programs are like our virtual sandbox above.  Maybe it’s like a bonsai, but instead of a virtual tree, it runs a simulation of a little ecosystem.  Dig out your valleys and pile up your mountains, and see trees grow, animals roam the steppes, birds fly…  Maybe you can even run a game on that, like Populous, but instead of looking into the screen you can walk around it and touch it.  You can watch your little minions wander around the landscape.  Maybe you can talk to it.  Maybe it’s like the asteroid that hits Bender in Futurama’s Godfellas episode, like Black & White but designed for the long haul.  Maybe when I’m not running my civilization on it, it plays selections from a feed of cool Processing visualizations across my ceiling.
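Underneath, the “magic” would be ordinary simulation code.  Here’s a purely hypothetical tick of that bowl ecosystem: grass creeps across the low ground, a couple of grazers wander and eat, and the projector (not shown) just paints the current state back onto the sand.

```python
# A hypothetical tick of the bowl's ecosystem: grass spreads to low-lying
# cells, grazers wander and eat it. Just to show the "magic" is ordinary
# simulation code plus a projector.
import random

def tick(terrain, grass, grazers):
    size = len(terrain)
    # Grass spreads to low cells next to existing grass.
    for (x, y) in list(grass):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % size, (y + dy) % size
            if terrain[ny][nx] < 0.5 and random.random() < 0.3:
                grass.add((nx, ny))
    # Grazers take a random step and eat whatever grass they land on.
    for i, (x, y) in enumerate(grazers):
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        grazers[i] = ((x + dx) % size, (y + dy) % size)
        grass.discard(grazers[i])

terrain = [[random.random() for _ in range(8)] for _ in range(8)]
grass, grazers = {(4, 4)}, [(0, 0), (7, 7)]
for _ in range(20):
    tick(terrain, grass, grazers)
print(f"{len(grass)} grassy cells, grazers at {grazers}")
```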

Back to the Beginning

I’m sure there will be all kinds of form factors for these magical objects.  They’ll come in pocket-sized compacts, or ceiling projectors, or robotically controlled room projectors (imagine a bunch of tiny Disney-esque mice that live in your house, but are only projected onto the walls and floorboards, not actually chewing through them).  Or maybe it’s like my photo of Marken, in a frame on the wall, except that it’s based off a video clip, or some software analyzes the scene and says, “Hey, this is grass, let’s make it wave a little, and these are clouds, so they should float by, and this is a sailboat, so it should drift back and forth.”  And maybe, if you lean in really close, you can hear the ocean.

Building a Personal Cloud Computer

Wednesday I presented a talk at the Austin Personal Cloud meetup about Building a Personal Cloud computer.  Murphy was in full effect, so both of the cameras we had to record the session died, and I forgot to start my audio recorder.  I’ve decided to write out the notes that I should have had, so here’s the presentation if it had been read.

Personal Cloud Meetup Talk.001

In this presentation we’re talking about building a personal cloud computer.  This is one approach to the personal cloud, there are certainly others, but this is the one that has been ringing true to me lately.

Personal Cloud Meetup Talk.002

A lot of what people have been talking about when they speak about the personal cloud is really personal pervasive storage.  These are things like Dropbox or Evernote.  It’s the concept of having your files everywhere, and being able to give permission to things that want to access them.  Think Google Drive, as well.

These concepts are certainly valid, but I’m more interested in software, and I think computing really comes down to running programs.  For me, the personal cloud has storage, but its power is in the fact that it executes programs for me, just like my personal computer at home.

That computer in the slide is a Commodore +4, the first computer I ever laid fingers on.

Personal Cloud Meetup Talk.003

Back then, the idea of running programs for yourself still appealed to the dreamers.  They made movies like TRON, and we anthropomorphized the software we were writing.  These were our programs doing work for us, and if we were just smart enough and spent enough time at it, we could change our lives and change the world.

Personal Cloud Meetup Talk.004

This idea isn’t new; in fact, AI pioneers were talking about it back in the 50s.  John McCarthy was thinking about it back then, as Alan Kay relates when he talks about his 3rd age of computing:

They had in view a system that, when given a goal, could carry out the details of the appropriate computer operations and could ask for and receive advice, offered in human terms, when it was stuck. An agent would be a ‘soft robot’ living and doing its business within the computer world.

That’s been the dream for a long time…

Personal Cloud Meetup Talk.005

But that never really happened.  The personal computer revolution revolutionized business, and it changed how we communicated with each other, but before the Internet things didn’t interconnect to the point where software could be a useful helper, and then we all went crazy making money with .com 1.0 and Web 2.0, and it was all about being easy and carving out a market niche.  Then something else hit…

Personal Cloud Meetup Talk.006

Mobile exploded.  If you’ll notice, mobile applications never really had an early adopter phase.  There was no early computing era for mobile.  You could say that PDAs were it, but without connectivity that isn’t the same as the world we have now.  Most developers couldn’t get their app onto a mobile device until the iOS app store hit, but that platform was already locked down.  There was no experimentation phase with no boundaries.  We still haven’t had the ability to have an always-connected device in our pocket that can run whatever we want.  The Ubuntu phones may be that, but we’re 6 iterations into the post-iPhone era.

Personal Cloud Meetup Talk.007

And who doesn’t love mobile?  Who doesn’t love their phone?  They’re great, they’re easy to use, they solve our problems.  What’s wrong with them?  Why do we need something else?  Well, let’s compare them to what we’ve got…

Personal Cloud Meetup Talk.008

With the PC we had a unique device insofar as we owned the hardware, we owned our data, and, EULA issues aside, we owned the software.  You could pack up your PC, take it with you to the top of a mountain in Nepal, and write your great novel or game or program, with no worries about someone deactivating it or the machine being EOLed.  Unfortunately the PC is stuck at your house, unscalable, badly networked, loaded with an OS that was designed for compatibility with programs written 25 years ago.  It isn’t an Internet era machine.

With the web we got Software as a Service (SaaS), and with this I’m thinking about the Picasas and Flickrs and Bloggers of the world.  No software to maintain, no hardware to maintain, access to some of your data (but not all of it; think of Flickr not giving you traffic metrics unless you paid, and only giving you export rights if you were paid up).  But in this new world you can’t guarantee your continuity of experience.  Flickr releases a redesign and the experience you’ve depended on goes away.  The way you’ve organized and curated your content no longer makes sense.  Or maybe, as in the case of sites like Gowalla, the whole thing just disappears one day.

Mobile has its own issues.  You often don’t own the hardware; you’re leasing it, or it’s locked up and difficult to control.  You can’t take your phone to another provider, you can’t install whatever software you want on it.  Sometimes it’s difficult to get data out.  How do you store the savegame files from your favorite iPhone game without a whole-device snapshot?  How do you get files out of a note taking app if it doesn’t have Dropbox integration?  In the end, you don’t even really own a lot of that software.  Many apps only work with specific back-end services, and once your phone gets older, support starts to disappear.  Upgrade or throw it in the junk pile.

Cloud offers us new options.  We don’t have to own the hardware, we can just access it through standards compliant means.  That’s what OpenStack is all about.  OpenStack’s a platform, but OpenStack is also an API promise.  If you can do it with X provider, you can also do it with Y provider.  No vendor lock-in is even one of the bullet points on our homepage at HP Cloud.

Implicit in cloud is that you own your own data.  You may pay to have it mutated, but you own the input and the output.  A lot of the software we use in cloud systems is either free, or stuff that you own (usually by building it or tweaking it yourself).  It’s a lot more like the old PC model than Mobile or SaaS.

Personal Cloud Meetup Talk.009

All of these systems solve specific types of problems, and for the Personal Cloud to really take off, I think it needs to solve a problem better than the alternatives.  It has to be the logical choice for some problem set.  (At the meetup we spent a lot of time discussing exactly what that problem could be, and if the millennials would even have the same problems those of us over 30 do.  I’m not sure anyone has a definitive answer for that yet.)

Personal Cloud Meetup Talk.010 Personal Cloud Meetup Talk.011

This is what I think the Personal Cloud is waiting for.  This explosion of data from all our connected devices, from the metrics of everything we do, read, and say, and what everyone around us says and does.  I think the Personal Cloud has a unique place, being Internet-native, as the ideal place to solve those problems.  We’re generating more data from our activities than ever before, and the new wave of Quantified Self and Internet of Things devices is just going to amplify that.  How many data points a day does my FitBit generate?  Stephen Wolfram’s been collecting personal analytics for decades, but how many of us have the skill to create our own suite of tools to analyze it, like he does?

Personal Cloud Meetup Talk.012

The other play the Personal Cloud can make is as a defense against the productization of you.  Bruce Sterling was talking about The Stacks years ago, but maybe there’s an actual defensive strategy against just being a metric in some billion dollar corporation’s database.  I worked on retail systems for a while, and it wouldn’t surprise me at all if, based on the order of items scanned out of your cart at Target (plus some anonymized data mining from store cameras), they could reconstruct your likely path through the store.  Track you over time based on your hashed credit card information, and they know a whole lot about you.  You don’t know a whole lot about them, though.  Maybe the Personal Cloud’s place is to alert you to when you’re being played.
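For the skeptical, here’s why “hashed” doesn’t mean “anonymous” (the data below is invented; this isn’t any retailer’s actual pipeline): the hash hides the card number, but it produces the same stable key every visit, so visits stitch together into a profile just fine.

```python
# Hashing hides the card number but yields the same stable key every visit,
# so purchases still link into a profile. Invented data, purely illustrative.
import hashlib

def card_key(card_number, salt="store-wide-salt"):
    return hashlib.sha256((salt + card_number).encode()).hexdigest()[:12]

visits = [
    ("2013-09-02", "4111111111111111", ["diapers", "coffee", "batteries"]),
    ("2013-09-09", "4111111111111111", ["diapers", "coffee", "dog food"]),
    ("2013-09-09", "5500000000000004", ["magazine", "gum"]),
]

profiles = {}
for date, card, basket in visits:
    profiles.setdefault(card_key(card), []).append((date, basket))

for key, history in profiles.items():
    print(key, "->", len(history), "visit(s):", history)
```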

Personal Cloud Meetup Talk.013

In the end I think the Personal Cloud is about you.  It’s about privacy, it’s about personal empowerment.  It’s uniquely about you and your needs, just like the Personal Computer was personal, but the PC can’t keep up anymore, so the Personal Cloud Computer will take up that mantle.

Personal Cloud Meetup Talk.014 Personal Cloud Meetup Talk.015

The new dream, I think, is that the Personal Cloud Computer runs those programs for you, and acts like your own TRON.  It’s your guardian, your watchdog, your companion in a world gone data mad.  Just like airbags in your car protect you against the volume of other automobiles and your own lack of perfect focus, so your Personal Cloud protects you against malicious or inconsiderate manipulation and your own data privacy unawareness.

Personal Cloud Meetup Talk.016

To do this I think the Personal Cloud Computer has to play a central role in your digital life.  I think it needs to be a place that other things connect to, a central switching station for everything else.

Personal Cloud Meetup Talk.017

And I think this is the promise it can fulfill.  The PC was a computer that was personal.  We could write diary entries, work on our novel for years, collect our photos.  In the early days of the Internet, we could even be anonymous.  We could play and pretend, we could take on different personas and try them out, like the freedom you have when you move to a new place or a new school or job.  We had the freedom to disappear, to be forgotten.  This is a freedom that kids today may not have.  Everything can connect for these kids (note the links to my LinkedIn profile, Flickr Photos, Twitter account, etc in the sidebar), though they don’t.  They seem to be working around this, routing around the failure, but Google and others are working against that.  Facebook buys Instagram because that’s where the kids are.  Eventually everything connects and is discoverable, though it may be years after the fact.

Personal Cloud Meetup Talk.018

So how do I think this looks, when the code hits the circuits?  I think the Personal Cloud Computer (or ‘a’ personal cloud computer) will look like this:

  • A Migratory – Think OpenStack APIs, and an orchestration tool optimized for provider price/security/privacy/whuffie.
  • Standards Compliant – Your PCC can talk to mine, and Facebook knows how to talk to both.
  • Remotely Accessible – Responsive HTML5 on your Phone, Tablet and Desktop. Voice and Cards for Glass.
  • API Nexus – Everything connects through it, so it can track what’s going on.
  • with Authentication – You authenticate with it, Twitter authenticates with it, you don’t have a password at Twitter.
  • Application Hosting – It all comes down to running Apps, just like the PC.  No provider can build everything, apps have to be easy to port and easy to build.
  • Permission Delegation – These two apps want to talk to each other, so let them.  They want to share files, so expose a cloud storage container/bucket for them to use.
  • Managed Updates – It has to be up to date all the time, look to Mobile for this.
  • Notifications – It has to be able to get ahold of you, since things are happening all the time online.
  • and Dynamic Scaling Capabilities – Think spinning up a Hadoop cluster to process your lifelog camera data for face and word detection every night, then spinning it down when it’s done.

Personal Cloud Meetup Talk.022

So how do we actually make this happen?  What bits and bobs already exist that look like they’d be good foundational pieces, or good applications to sit on top?

Personal Cloud Meetup Talk.023

No presentation these days would be complete without a mention of Docker, and this one is no different.  If you haven’t heard of Docker, it’s the hot new orchestration platform that makes bundling up apps and deploying lightweight linux container images super-easy.  It’s almost a PaaS in a box, and has blown up like few projects before it in the last 6 months.  Docker lets you bundle up an application and run it on a laptop, a home server, in a cloud, or on a managed Platform as a Service.  One image, multiple environments, multiple capacities.  Looking at something like the Ubuntu Edge, Docker seems like a perfect way to sandbox applications iOS-style, but still give them what they need to be functional.

Personal Cloud Meetup Talk.024

Hubot is a chat bot, a descendant of the IRC bots that flourished in the 90’s.  Hubot was built by GitHub, and was originally designed to make orchestration and system management easier.  Since GitHub’s teams connect and collaborate in text-based chat rooms, Hubot sits in there waiting for someone to give it a command.  Once it hears a command, it goes off and does it, whether that be to restart a server, post an image, or tell a joke.  You can imagine that you could have a Personal Cloud Computer bot that you’d say ‘I’m on my way home, and it’s pot roast night’ to, and it would switch on the Air Conditioner, turn on the TV and queue up your favorite show, and fire up the crock pot.
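Hubot scripts are CoffeeScript, but the pattern is simple enough to sketch in a dozen lines of Python: register regex listeners, and when a message matches, run the handler.  The home-automation actions here are obviously stand-ins, not real integrations.

```python
# Sketch of the Hubot-style listen-for-a-pattern, run-a-handler idea in
# Python. The "actions" are hypothetical stand-ins (just prints).
import re

HANDLERS = []

def respond(pattern):
    def register(fn):
        HANDLERS.append((re.compile(pattern, re.I), fn))
        return fn
    return register

@respond(r"on my way home")
def prep_house(match):
    print("-> turning on the A/C, queueing up your show, starting the crock pot")

@respond(r"restart (\S+)")
def restart(match):
    print(f"-> restarting server {match.group(1)}")

def hear(message):
    for pattern, handler in HANDLERS:
        match = pattern.search(message)
        if match:
            handler(match)

hear("I'm on my way home, and it's pot roast night")
hear("please restart web-03")
```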

Personal Cloud Meetup Talk.025

The great thing about Hubot, and the thing about these Personal Cloud Bots, is that like WordPress plugins, they’re developed largely by the community.  GitHub being who they are, Hubot embraces the open development model, and users have developed hundreds of scripts that add functionality to Hubot.  I expect we’ll see the same thing with the Personal Cloud Computer.

Personal Cloud Meetup Talk.027

I’ve talked about Weavrs pretty extensively here on the blog before, so I won’t go into serious depth, but I think that the Personal Cloud Computer is the perfect place for something like Weavrs to live.  Weavrs are social bots with big-data-derived personalities; you can create as many of them as you like and watch them do their thing.  That’s a nice playground to play with personalities, to experiment and see what bubbles to the top from the chaos of the internet.

Personal Cloud Meetup Talk.031

If you listen to game developers talk, you’ll start to hear about that initial dream that got them into game development, the dream of a system that tells stories, or tells stories collaboratively with you.  The Kickstarted game Sir, You Are Being Hunted has been playing with this, specifically with its procedural British Countryside Generator.  I think there’s a lot of room for that closely personal kind of entertainment experience, and the Personal Cloud Computer could be a great place to do it.

Personal Cloud Meetup Talk.032

Aaron Cope is someone you should be following if you aren’t.  He used to be at Flickr, and is now at the Cooper-Hewitt Design Museum in New York.  His Time Pixels talk is fantastic.  Two of the things Aaron has worked on that are of interest here are Parallel Flickr (a networkable backup engine for Flickr that lets you back up your photos and your contacts’ photos, but is API compatible with Flickr) and privatesquare (a foursquare checkin proxy that lets you keep your checkins private if you want, or make them public).  That feels like a really great Personal Cloud app to me, because it plays to that API Nexus feature.

Personal Cloud Meetup Talk.033

The Numenta guys are doing some really interesting stuff, and have open sourced their brain simulation system that does pattern learning and prediction.  They want people to use it and build apps on top of it, and we’re a long way away from real use, but that could lead to some cool personal data insights that you run yourself.  HP spent a bunch of money on Autonomy because extracting insights from the stream of data has a lot of value.  Numenta could be a similar piece for the Personal Cloud.

Personal Cloud Meetup Talk.036

That’s the Adafruit Pi Printer; Berg has their Little Printer, and they’re building a cloud platform for these kinds of things.  These devices bring the internet to the real world in interesting ways, and there’s a lot of room for personal innovation.  People want massively personalized products, and the Personal Cloud Computer can be a good data conduit for that.

Personal Cloud Meetup Talk.037

Beyond printers, we have internet-connected thermostats and doorknobs, and some of the service companies behind them will inevitably go away before people stop using their products.  What happens to your wifi thermostat or wifi lightbulbs when the company behind them goes away?  The Personal Cloud lets you keep supporting that hardware yourself; it lets you maintain your own service continuity.

Personal Cloud Meetup Talk.038 Personal Cloud Meetup Talk.039

Having an always-on personal app platform lets us utilize interesting APIs provided by other companies to process our data in ways we can’t with open source or our own apps.  Mashape has a marketplace that lets you pick and switch between API providers, and lets you extend your Personal Cloud in interesting ways, like getting a sentiment analysis for your Twitter followers.
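The shape of that looks something like the sketch below.  The endpoint, key header, and response format are entirely invented for illustration, not any real Mashape listing: your always-on personal app ships text off to a hosted sentiment API and folds the scores back into your own data.

```python
# Hypothetical "rent someone else's API" sketch: the endpoint and response
# shape are made up for illustration, not a real marketplace listing.
import requests

API_KEY = "your-marketplace-key"                     # hypothetical credential
ENDPOINT = "https://api.example.com/v1/sentiment"    # hypothetical endpoint

def follower_sentiment(bios):
    scores = []
    for bio in bios:
        resp = requests.post(ENDPOINT,
                             headers={"X-Api-Key": API_KEY},
                             json={"text": bio},
                             timeout=10)
        resp.raise_for_status()
        scores.append(resp.json().get("score", 0.0))  # assume a -1..1 score
    return sum(scores) / len(scores) if scores else 0.0

print(follower_sentiment(["Love open source and tacos", "Everything is terrible"]))
```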

Personal Cloud Meetup Talk.041

In addition to stuff we can touch over the network, there’s a growing market of providers that let you trigger meatspace actions through an API.  TaskRabbit has an API, oDesk does, Shapeways does, and we haven’t even begun to scratch the surface of the possibilities that opens up.

Personal Cloud Meetup Talk.042

One thing to watch is how the Enterprise market is adapting to utility computing and the cloud.  The problems they have (marketplaces, managed permissions, security for apps that run on-premises, big data) are problems that all of us will have in a few years.  We can make the technology work for enterprises and startups, but for end users, we have to make it simple.  We have to iPhone it.

Personal Cloud Meetup Talk.045

So where do we start?  I think we have to start with a just good enough, minimum viable product that solves a real problem people have.  Early adopters adopt a technology that empowers them or excites them in some way, and whatever Personal Cloud platforms appear, they have to scratch an itch.  This is super-critical.  I think the VRM stuff from Doc Searls is really interesting, but it doesn’t scratch an itch that I have today in a way I can comprehend.  If you’ve been talking about something for years, what will likely happen is not that it’ll eventually grow up, it’s that something radical will come out of left field that uses some of those ideas, but doesn’t honor all of them.  That’s my opinion, at least.  I think the Personal Cloud community that’s been going for years with the Internet Identity Workshop probably won’t be where the big new thing comes from, but a lot of their ideas will be in it.  That’s just my gut feeling.

Personal Cloud Meetup Talk.046

The last caveat is that Apple and Microsoft and Google are perfectly positioned to make this happen easily, vendor lock-in included.  They all already do cloud.  They all have app stores.  They have accounts for you, and they want to keep you in their system.  Imagine an Apple App Store that goes beyond your iPhone, iPad, and even Apple TV, and lets you run apps in iCloud.  That’s an easy jump for them, and a huge upending of the Personal Cloud world.  Google can do the exact same thing, and they’re even more likely to.

Personal Cloud Meetup Talk.047

So thanks for your time, and for listening (reading).  If you have comments, please share them.  It’s an exciting time.

PyTexas 2013: 110 Miles to Aggie Land

Two weeks ago I had the pleasure of speaking at my first PyTexas conference.  I’d never been to PyTexas before, but I’ve been to its Ruby relative, Lone Star Ruby, a bunch of times.  In a lot of ways it was similar (the local crowd, lots of enthusiasts, two tracks of talks), but in some ways, very different…

A&M Memorial Student Center

The first and most notable thing to mention about PyTexas is that it’s held at the Memorial Student Center at Texas A&M University, which is in College Station.  That means the conference is two hours from Austin and Houston, and three hours from San Antonio and Dallas/Fort Worth.  This isn’t a complaint, it’s a nice facility, but it explains something about PyTexas: it’s not and will never be a large programming conference, simply due to being too far from the Texas programmer population.  That being said, it’s impressive how many people they’ve pulled in, and it’s a testament to the Texas Python community that so many people (about 100 folks the day I was there) made the trip.

The tradeoff for the drive is that the event (being hosted by the A&M School of Architecture) is really inexpensive ($25 early bird, $50 regular).  I would have expected that to mean a big student turnout, but that didn’t seem to be the case.  School hadn’t started yet, so that may be one reason.  There were a lot of interested, engaged professionals there, and a lot of people doing serious day to day work with python.  I saw a couple of Rackers, and though there wasn’t anyone else I knew from HP Cloud, there was some OpenStack talk in the halls.

PyTexas Registration

My wife has been getting into python recently, and since I wasn’t planning on spending the night away from home (2 year old daughter + 7 months pregnant wife = at home at night), I talked her into coming with me for the day.  Registration was well organized, and there were good snacks.  The event had a few sponsors I wasn’t familiar with, including MapMyFitness, which tracks exercise metrics for folks, and StormPulse, which provides weather forecasts for businesses.  It’s always nice to see businesses showing how they’re using a language for real.  The Lone Star Ruby conference companies tend towards web startups and Rails.

HP Cloud Stickers

The gender balance was about what you’d expect, maybe 10:1.  If it was a little bigger there might be a more organized outreach, but right now it’s just word of mouth.  I did hear about it on the PyLadies ATX list, and there may have been more women on the tutorial day.

I think there were some challenges on the organization side of the conference.  Speakers didn’t seem to get into the registration system, and two of the speakers didn’t show up.  That’d be easier to compensate for at a bigger conference, but when there are only two tracks it really shows.  Unfortunately one of the no-shows was Thomas Hatch of SaltStack, whose talk I was really looking forward to.  Maybe it’s online somewhere.

I’d proposed two talks, but only had time to prepare one, so I ended up spending 50 minutes talking the audience through building two simple Bottle applications.  One of the apps serves as an API service, the other as a web-exposed UI.  The code for both, built step by step with comments, is up on GitHub.  I’ll link to the video of the talk whenever they post it.
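The real, step-by-step code is in that GitHub repo; purely as a sketch of the shape (not the talk’s actual code), the two apps boil down to a JSON API service and a thin UI that calls it:

```python
# Rough sketch of the two-app shape: a JSON API service and a thin UI that
# calls it. Not the talk's actual code -- ports and data here are arbitrary.
import json
import urllib.request

from bottle import Bottle

api = Bottle()
ITEMS = ["first item", "second item"]

@api.get("/items")
def list_items():
    return {"items": ITEMS}   # Bottle serializes dicts to JSON automatically

ui = Bottle()

@ui.get("/")
def index():
    with urllib.request.urlopen("http://localhost:8081/items") as resp:
        items = json.load(resp)["items"]
    rows = "".join(f"<li>{item}</li>" for item in items)
    return f"<h1>Items</h1><ul>{rows}</ul>"

if __name__ == "__main__":
    # Run each app in its own process:
    #   api.run(host="localhost", port=8081)   # start the API first
    #   ui.run(host="localhost", port=8080)    # then the UI
    api.run(host="localhost", port=8081)
```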

PyTexas Panorama

Walker Hale from the Baylor College of Medicine down in Houston spoke before me, talking about Bottle’s sister microframework Flask.  Flask and Bottle are really, really, really similar, so he stole a bit of my thunder, but I think the audience enjoyed the live coding I did (with paper diffs!), and I got some good feedback.  Unfortunately the Memorial Students Center is a no-hat building (out of respect for the Aggies who’ve given their lives in defense of the country), so the audience had to endure my out of control mop.

Docker Lightning Talk

Lunch was included in the cost of registration and provided by a nice local food truck.

There were a couple of lightning talks at the end of the day, including Barbara Shaurette of PyLadies Austin talking about her interesting new initiative to connect professional programmers with high school computer classrooms.  No set of lightning talks would be complete without the next big thing, Docker.io, so of course there were two (!) of those.  Docker’s going to take over the world, believe me.

PyTexas was a fun little conference, though driving down in the morning and back in the evening was really exhausting.  It’s small, and isn’t as slick as some larger conferences, but it has a nice raw charm.  The love the attendees and speakers have for python really shows through.  If it’s easy for you to get to, and you aren’t busy, I recommend it.  If they moved it to Austin or San Antonio, I’d go for the whole thing and I think the conference would be at least three times as big.  (Speaking of Texas python conferences, if you haven’t signed the Austin PyCon 2016/2017 petition, please do!)