Matt Asay, Author at ReadWrite — https://readwrite.com/author/matt-asay/

How eero uses Akka to make the truly connected home a reality https://readwrite.com/akka-eero-home-wifi-in-the-cloud-pt1/ Wed, 20 Jul 2016

The post How eero uses Akka to make the truly connected home a reality appeared first on ReadWrite.

Nest revolutionized the humble thermostat and in many ways made the concept of Internet of Things (IoT) part of the mainstream consumer tech conversation. Smart homes would have intelligent devices communicating among themselves and with the owner via a mobile app over the cloud.

That’s the theory, anyway.

In practice, even things like the fussy home WiFi router, possibly the most loathed home computing device ever invented, keep giving us the Internet of Broken, Disconnected Things. That’s one reason I find eero, a San Francisco startup, so interesting. Instead of conceptualizing the home WiFi router as an isolated endpoint for the Internet, eero’s cloud-based, mesh networking approach to WiFi threatens to heighten user expectations with sub-second response times and always-on availability.

To deliver on this promise, eero needed a platform that would make it easy to scale as it grew in complexity and that could handle concurrency at a very high scale. I recently spoke with John Lynn, Cloud Platform Manager at eero, to learn how the team designed its integrated device and cloud solution.

As IoT becomes a more pervasive delivery model for consumer technology, eero represents a look at the future of how consumer devices, cloud-native applications and the back-end of IoT participate in delivering a highly reactive experience to customers.

ReadWrite: What was the genesis for eero?

John Lynn: We were founded in 2014 to blanket the home in fast, reliable WiFi while setting new standards in user-friendliness and accessibility options. We created a product that evolves beyond the old WiFi model where you just set up a router and pray that it never breaks. Before we even began product development, our founders were certain that the single router model wasn’t the future where a growing army of connected devices are all competing for valuable WiFi. To provide reliable coverage and consistent performance across large homes and WiFi-crowded urban environments, we chose to deliver our product as a distributed mesh platform.

RW: A lot of the value of eero seems to live in your cloud platform.

JL: Instead of the arcane home networking equipment approach of forcing consumers to log in via IP address, eero delivers a cloud-based back-end that allows customers to access their device and network information from any location over the Internet. When a WiFi network is experiencing connectivity issues, the user can access troubleshooting data on a mobile device via the cloud, regardless of location. The eero cloud is like having a networking engineer constantly making sure the network is working well for our customers.

RW: What considerations did you face in architecting a cloud-based solution?

JL: We turned to Lightbend and its Reactive Platform for help in building our backend solution in the cloud. The challenge was how to build a highly available, high performance infrastructure that’s able to communicate with each eero device.

The central problem with traditional web architectures boils down to concurrency and database shared memory. To solve these challenges, we chose Akka — an open-source toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications. Akka and the actor model gave us a powerful in-memory data architecture that allowed for very high performance communication between customer device endpoints and the eero cloud. It’s a messaging architecture that lets us scale customer endpoints, broker data between those endpoints and the back-end, and deliver a highly responsive cloud interface to customers, regardless of the status of individual devices.

Akka also allows eero to model each customer’s network and nodes with actors, including pushing firmware updates out to endpoints. Our architecture thinking is very much informed by the ideas behind the Reactive Manifesto.
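Akka itself runs on the JVM, but the actor idea Lynn describes is easy to sketch. Below is a minimal, hypothetical Python illustration of the concept — all names are mine, not eero's — in which each device is modeled as an actor that owns its state privately and processes mailbox messages one at a time, so no locks are needed to mutate that state.

```python
import queue
import threading

class Actor:
    """Minimal actor: private state, a mailbox, one message processed at a time."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        self._mailbox.put(message)   # fire-and-forget: the caller never blocks

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:      # poison pill shuts the actor down
                break
            self.receive(message)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

    def receive(self, message):
        raise NotImplementedError

class DeviceActor(Actor):
    """One actor per customer device; state is never shared, only messaged."""
    def __init__(self, device_id):
        super().__init__()
        self.device_id = device_id
        self.firmware = "1.0.0"

    def receive(self, message):
        kind, payload = message
        if kind == "update_firmware":
            self.firmware = payload  # safe: only this actor's thread mutates state

device = DeviceActor("eero-living-room")
device.send(("update_firmware", "2.1.3"))
device.stop()
print(device.firmware)               # 2.1.3
```

Because only the actor's own thread ever touches `self.firmware`, concurrency is handled by message passing rather than shared-memory locks — the property the eero team credits for scalability.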

RW: For more technical readers, elaborate on the specific problems your design had to address and how you solved for them.

JL: For example, idempotency can be difficult when someone gets click-happy with a web form. From a RESTful service standpoint, an operation, or service call, is idempotent if clients can make that same call repeatedly while producing the same result. In other words, making multiple identical requests has the same effect as making a single request.
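One common way to achieve this, sketched below in Python with illustrative names (not eero's actual API), is an idempotency key: the server records the result of each request ID, so a repeated click replays the stored result instead of repeating the side effect.

```python
import uuid

processed = {}            # request_id -> result (in production: a shared store with TTL)
balance = {"account": 0}

def deposit(request_id, amount):
    """Idempotent handler: replays of the same request_id have no extra effect."""
    if request_id in processed:
        return processed[request_id]      # duplicate click: return the cached result
    balance["account"] += amount          # the side effect happens exactly once
    result = {"ok": True, "balance": balance["account"]}
    processed[request_id] = result
    return result

rid = str(uuid.uuid4())
deposit(rid, 50)
deposit(rid, 50)                          # identical retry is a no-op
print(balance["account"])                 # 50, not 100
```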

Bespoke distributed locking mechanisms and centralized control systems are used to prevent worker processes from stepping on each other’s state. More concurrent requests, long-running workers, and massively parallelized jobs become very complex, very quickly. And as things scale, the introduction of performance optimizations, like caching, further complicates things.

But the biggest problem of traditional web architectures is that the database becomes the shared memory in a vastly concurrent system. One of the things you learn early on in multi-threaded programming is that shared memory introduces a lot of complexity. Primitives such as locks, semaphores, and mutexes are employed to guarantee consistency across concurrent threads. In typical web services, we rarely attempt to coordinate access to data.
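That complexity is easy to demonstrate with a small Python sketch: four threads perform a read-modify-write on a shared counter, and the lock serializes access so no update is lost. Remove the lock and updates can silently disappear — precisely the coordination most web tiers never attempt.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:           # remove this lock and updates can be lost
            counter += 1     # read-modify-write: several steps, not one atomic op

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)               # 400000 with the lock; possibly less without it
```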

With the popularity of powerful ORMs and MVC frameworks, it becomes easy to fetch the data you need from the database in order to service a request. If you need to guarantee consistency, you’re on your own. As systems become more distributed with multiple request servers, async workers, caches, etc., there’s an increased likelihood that different parts of your system have different representations of the same piece of data. As data moves throughout your systems, consistency is harder and harder to maintain.
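A toy cache-aside example (hypothetical keys and values) shows how quickly this drift appears: once a writer updates the database without invalidating the cache, two parts of the system hold different representations of the same piece of data.

```python
database = {"user:42:email": "old@example.com"}
cache = {}

def read_through(key):
    """Cache-aside read: populate the cache on a miss, serve from it thereafter."""
    if key not in cache:
        cache[key] = database[key]
    return cache[key]

read_through("user:42:email")                  # first read warms the cache
database["user:42:email"] = "new@example.com"  # writer forgets to invalidate

print(database["user:42:email"])               # new@example.com
print(read_through("user:42:email"))           # old@example.com -- stale copy
```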

RW: What other benefits did you see by building a cloud-first home device?

JL: We’re committed to continuous product innovation over time – from new features and capabilities in the devices themselves, to mobile application features that make managing home networks easy. We’ve updated our software numerous times since we launched the product, and the reliability and flexibility that Akka brings to the messaging between devices and the back-end is core to our ability to innovate.

Is the GPL the right way to force IoT standardization? https://readwrite.com/gpl-force-iot-standardization-pl1/ Mon, 11 Jul 2016

The post Is the GPL the right way to force IoT standardization? appeared first on ReadWrite.

The Internet of Things has tremendous potential, but remains a mishmash of conflicting “standards” that don’t talk to each other. As various vendors erect data silos in the sky, what is actually needed is increased developer communication between disparate IoT projects.

I’ve argued before that this is one reason IoT needs to be open sourced, providing neutral territory for developers to focus on code, not business models. But there’s still an open question as to what kind of open source best facilitates developer-to-developer sharing. In Cesanta CTO and co-founder Sergey Lyubka’s view, the restrictive GNU General Public License (GPLv2) is the right way to license IoT, at least for now.

He might be right.

Giving developers something to work with

By Evans Data’s estimates, there are now 6.2 million developers worldwide focused on IoT applications and systems, up from 4.1 million developers last year. Importantly, this swelling developer population isn’t primarily interested in cashing in on IoT. As VisionMobile’s extensive survey data uncovers, these developers are mostly looking for fun and a challenge as they explore IoT boundaries.

Not surprisingly, then, open source has become the lingua franca of IoT projects, with 91% of IoT developers acknowledging the use of open source software in at least one area of their projects, according to a separate VisionMobile survey. The reason, the report concludes, is that “open source technology is very strong in solving the nitty-gritty, niche challenges that developers have; areas that commercial vendors would struggle to address.”

The question, then, isn’t whether open source should be part of IoT. It is, and will continue to be such. No, the question is what kind of open source license is best-suited to reaching this growing population of IoT developers.

Permissive or insistent?

According to Lyubka, a more restrictive, free software license like the GPLv2 is the best approach for IoT licensing, at least as it pertains to firmware. In his view, the GPL ensures that “firmware [will be] easily available and affordable for prototyping and DIYing.”

He also feels it’s the best license because it affords the developer the option to dual-license her code, offering a proprietary (“commercial”) license of the same code so that the originating developer gets paid while the downstream developer can use the code without concern of having to open source her own proprietary code.

He explains this in more detail:

We need more developers to easily access the internet of things and code for connected devices. We need to share ideas amongst engineers and product developers to better understand what works and what doesn’t.

There is no reason why startups, DIYers and even established companies should have to pay for firmware as they experiment and prototype exciting new products that will help fulfill the market mandate.

At the same time, businesses who develop IoT solutions need to be able to compensate their developers to keep making those IoT solutions stronger, simpler and more scalable for everyone.

That’s why the GPLv2 option, in my opinion, works again for IoT firmware. Once someone commercially applies your code and doesn’t want to open their own solution, they pay.

Though I’ve spent years arguing that such restrictive licensing inhibits developer adoption and offers a poor way to monetize code, Lyubka may have a point in this early IoT market. It’s true that developers increasingly turn to permissive, Apache-style licensing (or completely eschew licensing), but there’s something to be said for a copyleft approach, forcing developers to stick together in the early days of a project, IoT or otherwise.

Would copyleft help us standardize IoT?

Given the tremendous importance of standardizing IoT protocols and firmware, allowing disparate systems to talk to each other and even share code, it makes sense to keep developers from pulling code and embedding it in a proprietary product, thereby creating more IoT silos.

The early days of Linux, for example, were arguably aided by GPL licensing that kept all the developers rowing in the same direction, differentiating themselves at the packaging layer rather than in foundational code differences between distributions.

In the long run, permissive licensing like MIT or Apache strikes me as the absolute best approach, given their propensity to lower barriers to developer adoption. But there just might be reason to force IoT firmware to cohere, at least in the early days.

I’d love to hear your thoughts, one way or another.

Employers aren’t picky when it comes to developers https://readwrite.com/employers-arent-picky-comes-developers-pl1/ Fri, 01 Jul 2016

The post Employers aren’t picky when it comes to developers appeared first on ReadWrite.

Given how consumed the world has become with big data, artificial intelligence, and the Internet of Things, one would think employers would be laser-focused on hiring people with those skills. According to a new Dice hiring report, however, employers seem to want generalists, not specialists.

This isn’t to say that there isn’t demand for IoT-focused developers. As VisionMobile highlighted two years ago, there is a desperate need for millions of IoT developers to help build the future. But when the job reqs start flowing, employers want generic “developers” or “software engineers” to the tune of 66%.

What’s your strategy?

Even though it’s still new, there’s a lot of money in IoT. Analyst firm IDC forecasts that firms will spend upwards of $232 billion on IoT technologies in 2016. Gartner polled enterprises to uncover IoT adoption and found that 50% of companies plan to roll out an IoT project in 2016. In total, 64% of enterprises expect to climb aboard the IoT train at some point in the not-so-distant future.

That’s the good news.

The bad news is that few of them seem to know why.  According to Chet Geschickter, research director at Gartner, firms are stymied by IoT, even as they rush to implement it, largely for two reasons:

The first set of hurdles are business-related. Many organizations have yet to establish a clear picture of what benefits the IoT can deliver, or have not yet invested the time to develop ideas for how to apply IoT to their business. The second set of hurdles are the organizations themselves. Many of the survey participants have insufficient expertise and staffing for IoT and lack clear leadership.

In other words, enterprises know that IoT will be BIG, BIG, BIG…but they don’t have a clue what to do with it, and they don’t have the in-house talent necessary to figure it out. A Northeastern University-Silicon Valley survey of 200 IoT professionals at the recent Sensor World found the same: nearly 50% of those surveyed pinpoint the development of a “comprehensive IoT strategy” as the biggest challenge in IoT.

This is eerily similar to what plagued big data early on.

Just get me a developer

It’s interesting, therefore, that enterprises aren’t aggressively trying to hire IoT developers. At least, not self-styled IoT developers.

According to the Dice report, developers are in heavy demand: 51% of hiring managers identify “developers” as their top hiring priority in 2016, and another 15% pick out “software engineers.” While the two are similar, a software engineer tends to be the person designing a system, while the developer actually builds it.

Data specialists (analytics, etc.) are the top priority for just 3% of hiring managers. And IoT-specific professionals? Well, that’s a rounding error.

This doesn’t evince a lack of interest in IoT. Far from it. If anything, it may simply be a recognition that great developers can apply their expertise to any number of application types. The key is to find a great developer; she can then learn IoT.

Could Blackberry have a real chance in IoT? https://readwrite.com/crazy-blackberry-might-actually-real-chance-iot-tl1/ Mon, 27 Jun 2016

The post Could Blackberry have a real chance in IoT? appeared first on ReadWrite.

The world may be melting down into Brexitian chaos, but for a company like Blackberry that’s the least of its worries. After all, customers already voted themselves out of Blackberry’s ecosystem years ago, choosing to embrace Apple’s iPhone or Google’s Android phones. Brexit is just one more kick in the teeth for a company that has struggled for years to regain relevance.

And yet…there are signs of life at Blackberry.

The most important sign is the Internet of Things. While every company is pretending to have an IoT strategy these days, Blackberry actually has the raw materials necessary to build a highly relevant IoT business.

Chasing a niche is a good plan

To be honest, it has been years since I last thought about Blackberry, either as a vendor or as blog fodder. I didn’t want to use their phones and I couldn’t muster any energy to write yet another eulogy for the once powerful company.

So when I saw that Larry Dignan had penned a piece suggesting that Blackberry Radar could fuel a turnaround, I was surprised but intrigued. Larry is a smart guy. Why would he bother writing about a corporate corpse?

Blackberry Radar is an “end-to-end, Internet of Things (IoT)-based system that monitors the location of trailers and containers and delivers timely, actionable data to transportation managers via a secure, online portal.” In one sense, it’s a niche solution for the transportation vertical. Even though Blackberry CEO John Chen rightly notes that “there are anywhere between three million to 12 million [truck] trailers currently in the U.S. alone,” only a fraction of which (14 to 20%) have telematics services attached, it’s still hardly reason to go long on BBRY as an investment.

Does Blackberry have the right IoT assets?

However, it’s not so much Blackberry Radar in itself that is interesting. Rather, it’s that Radar reminds us of the systems expertise that Blackberry has honed over decades.

For example, Chen went on to talk about QNX, the embedded, real-time operating system that has been around for eons, and sits at the heart of Blackberry’s connected car platform:

We’ve built and operate a secure end-to-end system to deliver over-the-air software updates to cars, to automotive, automobile. This technology is a growing imperative for automotive OEMs, with the average vehicle nowadays using about 60 million to 100 million lines of software code. Our solution will help the auto industry provide proactive maintenance update, without time consuming visit to the repair shop. This solution has been derived from our technology for updating 50 million mobile phones in over 100 countries.

If that sounds like exactly what is needed to operate a powerful, sophisticated IoT network then that’s because it is.

It Just Works

Two years ago I wrote about how a lack of standardization in IoT would make open source an imperative. While that shift toward open source is happening, it’s also true that developers and the enterprises they serve are hungry for workable platforms that can get them started faster. That’s where Blackberry comes in.

Companies have been calling out for someone to solve the Wi-Fi, real-time location tracking, bar codes, mobile and GPS-related IoT problems they have. Blackberry, because of its years building out a massive smartphone network, coupled with its QNX experience, has this in spades.

So today it’s right that Blackberry should start with an isolated market like trucking logistics. But there’s no reason that this same system, and its underlying assets, can’t power a whole host of other IoT projects in vastly different markets. Could this be the start of a Blackberry resurrection?

Most IoT developers aren’t in it for the money https://readwrite.com/iot-developers-arent-money-pl1/ Mon, 20 Jun 2016

The post Most IoT developers aren’t in it for the money appeared first on ReadWrite.


Developers aren’t necessarily like you and me. You may choose to spend your free time making bird houses or watching Friends reruns. Developers, meanwhile, are trying to get Windows 95 to run on an Apple Watch (and succeeding).

This need to tinker is especially pronounced within the IoT developer set. According to new research from VisionMobile, analyzing survey data from over 4,400 IoT developers, there are eight segments of IoT developers, and just a third of these developers are professionally involved in IoT projects, compared to 50 to 70% in other markets. What this means, in practice, is that most IoT developers are just in it for fun and learning, and don’t have any interest in making you money.

This seems like bitter medicine for those IoT platform companies that hope to corral a body of developers to extend their hardware or service. Indeed, as Stijn Schuermans notes, “Key players in every IoT market build their strategy around developers who can extend the product beyond what it was when it left the factory.”

Just because IoT developers aren’t overwhelmingly motivated by cash doesn’t mean they can’t deliver huge benefits to those that are. It’s just a matter of harnessing different motivations to build up value that makes a platform enticing.

Fun-loving hobbyists

Just like the prince in Monty Python’s Holy Grail, some developers just want to sing. According to VisionMobile’s recently released IoT Developer Segmentation report, many, indeed most, IoT developers are in it for fun, learning, and personal development. A full 22% of IoT developers have zero interest in making money, either for you or for them, and another 21% of IoT developers are “simply exploring the technology without any specific use case in mind.”

Interestingly, “Fun-loving Hobbyists (~1/3 of IoT developers) and Explorers looking for opportunities and learning (~1/3) form the overwhelming majority of IoT developers in 2016, a much higher number than in other sectors like mobile or cloud development.” IoT, it seems, is heavily driven by developers looking to take Raspberry Pi and other hardware into insanely cool new ground. The percentage of IoT developers that are hoping to collect a paycheck has remained static, and relatively small, for some time.

Even for the money grubbers among the IoT developer set, creativity and a sense of belonging to a developer community weigh more heavily in their motivations than simply cash.

IoT-dog millionaire?

Which is not to say that there is no money to be made in IoT, or that its fun-loving developers can’t be helpful to those seeking to build businesses. Attracting these developers early turns out to be very important, too: though roughly a third of IoT developers start out unaffiliated with any particular vertical, after three years of experience writing IoT code, that number plummets to under 10%. Similarly, the share of developers with only a limited understanding of how to profit from IoT drops from 29% to just 10% within three years.

As VisionMobile’s report uncovers, though nearly two-thirds of IoT developers are Hobbyists and Explorers – developers not focused on cash but rather personal exploration – these same developers will “influenc[e] the evolution of IoT technology going forward, by making certain technologies more popular than others, and by taking those technologies into their professional lives at a later stage.” Not surprisingly, the report continues, “the Internet of Things is still a young, emerging market, where the excitement and fun of new technology is more important than money or business success in a not yet fully developed market.”

In sum, there is a land grab for IoT developers today, or should be, but it’s not about delivering developer cash. The cash will come, but today platform providers need to be thinking about how to provide opportunities for developers to explore this still nascent market.

In practice, those platforms that want to entice IoT developers must make their tooling approachable and the documentation clear enough to allow casual development. Raspberry Pi is a classic example of a developer platform that hits all the right notes in terms of giving developers an easy, cheap-to-use playground to experiment upon.

Those that can appeal to the fun side of IoT developers today will find it should translate into their business motivations tomorrow.

What enterprise wants from Google’s cloud https://readwrite.com/enterprise-wants-googles-cloud-pl1-2/ Tue, 07 Jun 2016

The post What enterprise wants from Google’s cloud appeared first on ReadWrite.

Google has a dream. It’s an ambitious dream but, with hard work and dedication, Google’s cloud chief Diane Greene figures Google can realize its dreams.

The dream? To be hopelessly, blindingly dull. Enterprise dull.

Greene is telling people that Google is “very serious about the enterprise,” but she has yet to demonstrate that Google really understands the enterprise in the way her old company, VMware, does. The problem, then, for this would-be contender to the cloud crown is that the enterprise is finding it hard to be “very serious” about Google, and it’s even harder to see this changing anytime soon.

And the last shall be first?

By rights Google should be completely dominating Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) markets. Unlike every other major contender, Google was born in the cloud. Unlike an Oracle or IBM, it doesn’t have a data center past to shed. Unlike Microsoft, it doesn’t have tens of billions of dollars tied up in server-based software. And unlike Amazon Web Services, Google already runs the most sophisticated cloud on the planet to support its daily operations.

And yet…Google doesn’t dominate IaaS or PaaS or any area of cloud computing. In fact, according to Forrester’s tally of cloud revenue, Google is dead last among cloud vendors.

To be fair, Google has had to play catch-up. Amazon Web Services officially launched in 2006, and by 2007 already claimed nearly 200,000 developers. Google App Engine launched in April 2008, with Google Cloud Storage joining the fray in May 2010 (four years after AWS released S3). At each step, Google fell a bit further behind as AWS relentlessly introduced new services while improving others.

However, the same could be said of Microsoft Azure, which also launched after AWS (announced in October 2008 but released in February 2010), yet has had much more success than Google in attracting customers. In fact, Microsoft started off more slowly than Google, with one commentator describing Azure’s documentation in 2011 as “incomprehensible.”

Fast forward to 2016, however, and Microsoft Azure is the clear alternative to AWS, grabbing over $1.5 billion in platform revenue in 2015, double that of any other competitor (except AWS, of course), and five times as much as Google. Even if we use Morgan Stanley numbers – which show AWS at roughly $8 billion, Microsoft Azure at $1.1 billion, and Google at $500 million – Google has a long way to go.

Over their heads?

Greene told a crowd at Google I/O that “We are quite enterprise-ready,” but this isn’t yet a credible claim. Microsoft has corporate DNA coming out of its Redmond wazoo, and mainly needs to help developers believe that it can also master the cloud. Through a series of incremental advances and a super-savvy CEO, Microsoft is winning over developers.

Amazon, for its part, is both learning to speak suit (witness its increased willingness to indulge in “hybrid cloud” CIO fantasies, though Amazon Web Services GM Matt Wood tells me it’s just a matter of helping enterprises transition to full public cloud) and also helping the suits to grok cloud. AWS has made massive investments in both technology and people to ensure that it can exceed expectations for what public cloud computing can do in terms of performance and security, but also to meet expectations for customer support.

Google, quite simply, has not.

At GCP Next, the Google cloud confab, Google trotted out a bevy of enterprise-y customers, and then chairman Eric Schmidt started to toss around words like “tedious” to get the suits’ attention: “the cloud is about automating the tedious details and empowering people.” You had me at “tedious,” Dr. Schmidt.

The company couldn’t stop at tedious, however, and had to indulge in science fiction.

As Jack Clark puts it, Google wants to give outsiders access to its inside operations. But Google’s “powerful internal systems work in radically different ways [than how most data centers work], which can make selling it harder.” Google needs to dumb things down, in other words.

Google keeps missing this message, however. At GCP Next the company touted machine learning and other cutting edge applications, but rather than inspiring the masses it probably frightened them. As former Netflix cloud chief Adrian Cockcroft notes, “For new server-less computing and machine learning applications Google have a compelling story, but they don’t appeal to the kind of mainstream enterprise applications that are currently migrating to AWS.”

Which is why, perhaps not surprisingly, when Google put customers on stage, they tended to be from its sweet spot: media and advertising. AWS put GE on stage; Google landed Disney. Both are great brands, but the former says something to the bulk of mainstream enterprises: we understand your pain.

Google, on the other hand, kept talking “NoOps,” anathema to mainstream enterprises that have hordes of IT folks that aren’t looking forward to retirement just yet.

A deeper problem than tech

If it were just a matter of marketing or technology, however, Google would be fine, even though, as Cockcroft argues, Google “is falling further behind AWS and Azure rather than catching up.” Ultimately, however, the problem comes down to people. Google’s cloud business is filled with exceptionally smart people, but it still seems to lack the safe-and-stodgy enterprise DNA that might help enterprises place more trust in it.

Google, after all, has a lot to prove. This is the company that exults in the perpetual beta and has notoriously non-existent customer support. (I have personally experienced the hell of trying to reach a live person to get an issue fixed with Gmail and other Google services.) Google’s history is one of rapid trial-and-error, pulling the plug on services that users depend upon and spinning up whimsically named new ones. It’s an impressive company that innovates at a frenetic pace, but that’s not the right way to win over the enterprise.

Google, in short, needs to learn to be boring.

A race against time

Unfortunately, it doesn’t have much time. As Adrian Cockcroft points out, “entire datacenters are being closed and replaced by AWS.” Public cloud VMs are dramatically outpacing (20X growth) private cloud VM growth (3X) as enterprises push more workloads to public clouds. AWS is getting most of this business today, with Microsoft a growing contender in second place.

Amazon is redefining the enterprise while Microsoft cleans up with those that want one foot in the new world and one in the old. This is part technology but, again, it’s also part culture.

Microsoft, for example, wins deals against AWS despite having a less-impressive tech line-up, according to Gartner analyst Lydia Leong: “Azure almost always loses tech evals to AWS hands-down, but guess what? They still win deals. Business isn’t tech-only.” It’s also culture, and in both culture and tech, Google may be too far ahead of its time, and its prospective customers.

The post What enterprise wants from Google’s cloud appeared first on ReadWrite.

]]>
Pexels
Open source near ubiquitous in IoT, report finds https://readwrite.com/open-source-near-ubiquitous-iot-report-pl1/ Wed, 04 May 2016 17:00:28 +0000 https://readwrite.com/?p=1513

Open source is increasingly standard operating procedure in software, but nowhere is this more true than Internet of Things development. […]

The post Open source near ubiquitous in IoT, report finds appeared first on ReadWrite.

]]>

Open source is increasingly standard operating procedure in software, but nowhere is this more true than Internet of Things development. According to a new VisionMobile survey of 3,700 IoT developers, 91% of respondents use open source software in at least one area of their software stack. This is good news for IoT because only open source promises to reduce or eliminate the potential for lock-in imposed by proprietary “standards.”

What’s perhaps most interesting in this affection for open source, however, is that even as enterprise developers have eschewed the politics of open source licensing, IoT developers seem to favor open source because “it’s free as in freedom.”

All open source, all the time

According to VisionMobile’s survey data, IoT developers both use and contribute to open source projects. This isn’t surprising given the wealth of open source options available to IoT developers, whether software, hardware, or data.

As for operating systems, developers can choose between Raspbian, Ubuntu Core, Google Brillo, Contiki, FreeRTOS, or other open options. For frameworks or libraries developers are also spoiled for choice: Siddhi, bip.io, KinomaJS, RHIOT, Zetta, and Yaler, among many others. In fact, the software options are so rich that 71% of IoT developers expect to use one or more of these options.

As VisionMobile concludes, this high adoption rate suggests that “open source technology is very strong in solving the nitty-gritty, niche challenges that developers have; areas that commercial vendors would struggle to address.”

But it’s not just software.

Indeed, hardware components like Raspberry Pi, Arduino, Flutter, and more capture the fealty of 77% of IoT developers. Beyond software and hardware, by some estimates 41% of developers not only use but also publish open data for IoT.

This open source adoption isn’t mere pragmatism, however, which is somewhat different from enterprise adoption of open source. As VisionMobile finds, “Only 1 in 5 open source [IoT] users is completely pragmatic when it comes to open source decisions (only use open source when it’s the best alternative).”

Let IoT freedom ring

That open source is more than a matter of a $0.00 price tag is clear from contribution levels. A majority – 58% – of IoT developers contribute back in at least one part of the stack. Yes, core contribution rates are somewhat low – 9% to 12% – but this is true of open source, generally. It turns out that it’s very hard to invest the time necessary to build up enough expertise in a particular project to become a core committer.

Source: VisionMobile

Even so, developers remain committed to open source even when they’re not steering IoT projects. A majority – 55% – cite ideology as the key driving factor behind their open source adoption, while a lesser 35% indicate they use it because it’s the best option due to community updates.

This isn’t to suggest that IoT developers are pie-in-the-sky idealists. Thirty-two percent do indicate they like the community support, and it’s telling that the above-mentioned 35% believe open source is better because communities make it so.

VisionMobile highlights:

The popularity of open source communities increases from 49% for IoT developers with less than a year of software experience to 70% for developers with over 6 years’ experience. They are the second most important source of information for IoT developers, right after vendor documentation. Likewise, the popularity of Q&A sites increases from 39% (no experience) to 58% (6+ years’ experience).

However, there’s a big caveat in all this free-as-in-speech love for open source: hobbyist developers tend to care much more about software freedom, and they still comprise a big chunk of the IoT developer population. A full 64% of the hobbyist crowd is into the freedom ideology of open source, whereas professional developers skew pragmatic.

In other words, while open source will remain a big deal to IoT developers even as the space commercializes, we’re likely to see it embraced more for its quality than for its ideology over time.

The post Open source near ubiquitous in IoT, report finds appeared first on ReadWrite.

]]>
Pexels
What everyone’s missing in Apple’s earnings drop https://readwrite.com/apple-pay-services-earnings-miss-pr1/ Thu, 28 Apr 2016 19:00:32 +0000 https://readwrite.com/?p=1395

Apple’s iPhone sales declined for the first time since 2003. That’s the bad news, and judging from the pundits, the sky […]

The post What everyone’s missing in Apple’s earnings drop appeared first on ReadWrite.

]]>

Apple’s iPhone sales declined for the first time since 2003. That’s the bad news, and judging from the pundits, the sky has started to fall on Apple’s halcyon days. One analyst goes so far as to say that “the only thing that is plainly clear concerning Apple is that it has saturated the market with its legacy, hardware products and largely achieved its total addressable market.”

The truth, however, is very different.

What this analyst and others seem to be missing is that the future for Apple’s iPhone business has less to do with hardware and more to do with services. While Apple will always generate huge sales (and profits) from its hardware sales, its second largest revenue-generating category in the quarter was “Services,” or things like App Store sales, Apple Music, and Apple Pay. Each of these businesses is soaring, and isn’t dependent on stratospheric sales of devices to grow.

We are all doomed!

Source: Screen shot from Seeking Alpha

No one should have been surprised by Apple’s Q2 earnings miss. Back in January the company reported the softest revenue growth since the iPhone was introduced in 2007, and warned that Q2 revenue would decline for the first time in 13 years. Analysts wrung their hands at the time, but waited to go into full-scale panic until after Q2 results hit.

Not that Apple is alone in this. According to Strategy Analytics, global smartphone shipments declined for the first time ever, dropping 3% to 334.6 million devices from 345 million devices. Apple contributed 51.19 million iPhones to the slide, down from 61.17 million units a year earlier.

ZDNet’s Jason Perlow challenges Apple’s future, arguing that headwinds from developing economies like China mean “a meteor is heading right for… [Apple, forcing it to cede the market] to the Huaweis, the ZTEs and the Xiaomis of the world, the big commodity producers.”

Despite the fact that Apple has shown time and time again that it can sell upmarket, and grow that upmarket pie, Perlow may be right.

But where Perlow’s analysis goes awry, as with many other onlookers, is in this statement: “Like the PC industry before it, the smartphone/mobile device industry has become so mature that there isn’t much new that can be done.” In fact, there is “much new that can be done.” It’s just that the new runs on the phones, or on servers that connect with the phones over wireless networks. That’s where the big bank in the sky will shower Apple again.

The Apple services force awakens

While Apple’s iPhone shipments keep slowing, use of them keeps accelerating. Apple characterizes such use as “engaged” users, i.e., those who have purchased a service (movie in iTunes, app from the App Store, and so on) within the last 90 days. In Q1 the number of “engaged” Apple customers grew 25%, generating $5.5 billion in Services revenue, which was up 15% year over year.

In Q2, those engaged customers spent even more, with Services revenue at its highest ever amount, climbing 20% to $6 billion. As Apple CEO Tim Cook detailed on the earnings call, App Store revenue also jumped 35% to top Q1 2016’s all-time record, and Apple Music has finally reversed quarters of decline to nab over 13 million paying subscribers. (Even so, I’m steering clear of it for now.)

Here’s where things get interesting. According to Cook, to really get a read on Apple’s future you need to decouple devices from the services that they enable:

The Services business is powered by our huge installed base of active devices, which crossed 1 billion units earlier this year….[T]hose 1 billion-plus active devices are a source of recurring revenue that is growing independent of the unit shipments we report every three months. In fact, the purchase value of services tied to our installed base was a record $9.9 billion in the March quarter, up 27% over last year, accelerating from the 24% growth rate we reported in the December quarter.

One of the most promising Services that Apple offers, one that I personally have started to use daily, is Apple Pay. Cook extolled the success of Apple Pay: “Apple Pay is growing at a tremendous rate, with more than five times the transaction volume of a year ago and 1 million new users per week.” That 5X increase in transaction volume meshes well with what Cook reported in the Q1 earnings call: “In the second half of 2015, we saw a significant acceleration in usage, with a growth rate 10 times higher than in the first half of the year.”

Oh, and as if Apple weren’t profitable enough, its Services business shows “a profitability that is higher than company average,” according to Apple CFO Luca Maestri.

In short, the question isn’t whether the iPhone era is over. It’s not, and things like the Apple Watch will buttress and augment it. But we need to stop making a fetish of device sales when the services powered by those devices increasingly command center stage. In this area Apple is making real progress, and it’s only just beginning. Importantly, no one else – not Samsung, not Google, not Facebook – is creating a similarly rich services ecosystem that consumers are happy to pay for.

The post What everyone’s missing in Apple’s earnings drop appeared first on ReadWrite.

]]>
Pexels
Amazon to competitors: You’re not failing enough https://readwrite.com/amazon-competitors-not-failing-sl1/ Mon, 11 Apr 2016 23:25:44 +0000 https://readwrite.com/?p=604

It’s a tough slog, competing with Amazon, particularly Amazon Web Services. The cloud computing giant earns billions more in revenues than its […]

The post Amazon to competitors: You’re not failing enough appeared first on ReadWrite.

]]>

It’s a tough slog, competing with Amazon, particularly Amazon Web Services. The cloud computing giant earns billions more in revenues than its next nearest competitors, even as it cranks out innovation at a dizzying pace. To such legacy IT and cloud competitors, failure is familiar.

The problem, according to Amazon CEO Jeff Bezos, however, is that Amazon competitors don’t fail nearly enough.

In his annual letter to Amazon shareholders, Bezos exults, “[W]e are the best place in the world to fail,” a willingness that translates into a culture that succeeds through iteration – to the tune of $7.3 billion in cloud revenue last year, with around $10 billion expected this year.


Amazon: the best place in the world to fail

There are many things to like about Amazon, and many things that set it apart. But according to Bezos, perhaps the defining attribute of Amazon is its appreciation of failure:

One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins. To invent you have to experiment, and if you know in advance that it’s going to work, it’s not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there.

The reason that this willingness to tolerate and even celebrate failure is so critical, he continues, is that outsized returns hover behind potential failures:

Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten.

We all know that if you swing for the fences, you’re going to strike out a lot, but you’re also going to hit some home runs.

The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it’s important to be bold. Big winners pay for so many experiments. (Emphasis added.)

And so AWS experiments, and at an ever-increasing pace. In fact, in 2015 AWS added 722 significant new features to its 70-plus cloud services (S3, EC2, Aurora, etc.), representing a 40% increase over 2014.

Not that this willingness to fail is aimless.

Atomizing the Amazon

According to Bezos, “90 to 95% of what we build in AWS is driven by what customers tell us they want.” Importantly, however, the company is structured in a way that allows atomistic innovation as it seeks to build on behalf of its customers. Amazon “is made up of many small teams with single-threaded owners, enabling rapid innovation,” Bezos notes.

Critically, such teams “speak” to each other through open APIs, an approach mandated years ago by Bezos:

  • All teams will henceforth expose their data and functionality through service interfaces.
  • Teams must communicate with each other through these interfaces.
  • There will be no other form of inter-process communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
  • It doesn’t matter what technology they use.
  • All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

In case Amazon employees thought this was optional, Bezos closed affectionately: “Anyone who doesn’t do this will be fired. Thank you; have a nice day!”

What would Bezos build?

Though AWS, Amazon’s retail business, and the Kindle get much of the press, one of the more interesting areas in which Amazon is giving customers what they want is IoT. Interestingly, Amazon is hitting IoT from both the developer and consumer angles.

On the developer side, last October Amazon announced the AWS IoT Platform – “a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices.” It’s a natural fit for the AWS crowd, giving developers an easy way to source fully managed infrastructure for building IoT applications.

Amazon also has released developer candy for the Amazon Echo, a voice-activated device that has the potential to serve as a central hub for all home automation. Though the Echo is focused on consumers, Amazon has opened it up to developers so they can build services (like Domino’s Pizza ordering) that extend the Echo platform.

In the time that I’ve owned the Echo I’ve seen a host of service improvements as Amazon and its developer ecosystem iterate on my happiness. Amazon released the Echo with relatively little fanfare. I didn’t even know about it until I saw one at my sister’s house and watched as they updated their collective shopping list by voice. It has quickly become a small but growing part of my family’s life.

Of course, it still has the potential to flame out like Amazon’s ill-fated Fire Phone. But that’s the point. Amazon is happy to make lots of bets, big and small, as it flirts with failure. In so doing, the company has ensured that its retail, cloud, and device businesses keep booming, not despite repeated failures, but because of them.

The post Amazon to competitors: You’re not failing enough appeared first on ReadWrite.

]]>
Pexels
Shipping out: How Docker could scale up massively https://readwrite.com/docker-could-scale-up-massively-sf4/ Fri, 01 Apr 2016 16:30:31 +0000 https://readwrite.com/?p=277

The world is rushing to embrace Docker containers as the new, easy way to package applications. As cool as it […]

The post Shipping out: How Docker could scale up massively appeared first on ReadWrite.

]]>

The world is rushing to embrace Docker containers as the new, easy way to package applications. As cool as it is to package applications with containers, as I have written before, the biggest challenge companies face is actually deploying their Docker containers – especially at scale.

Well, nothing spells scale in computing quite like CERN, the particle and nuclear physics research institute outside Geneva, Switzerland. It’s the home of the Large Hadron Collider and some 8,000 scientists from 500 universities. CERN and Mesosphere, whose pioneering concept of a data center operating system (DCOS) I wrote about in 2014, are working on a clever approach to solving the problem of scaling Docker containers in production.

I recently spoke with Mesosphere co-founder and Apache Mesos creator Benjamin Hindman to learn more.

ReadWrite: What’s so challenging about storing and shipping Docker containers in production?

Benjamin Hindman: One of the beauties of containers as a way to package applications is that you don’t need to include an operating system, unlike with a virtual machine image, which means you can keep them small. The smaller the image, the less that you need to store and the less that you need to send around the network.

But it’s easy for container images to get huge; as you include all the dependent libraries and supporting files in the container’s file system, the containers can get very big. And sometimes, without even knowing it, you end up adding things you don’t actually need.

Docker uses what it calls layers to help reuse parts of the file system between containers. Using layers can help the containers stay small, assuming all of the containers properly build on top of preexisting layers. But accomplishing this takes diligence, and from what I’ve seen in practice, this rarely is the case.

RW: Explain to me why Docker’s layers don’t work in production.

BH: From what I’ve seen in practice it’s pretty rare that developers diligently build on top of each other’s layers. In fact, it’s almost too easy to create a layer that diverges in such a way that another developer won’t want to build on top of your layer because that will actually bring in unnecessary stuff for your application.

The consequences of this can be pretty severe though.

For example, consider two cases, one where everyone ends up adding some library to their container image independently and another where they build on top of a layer that includes the library. In the world where the library is contained within the layer you’ll only have to download the layer the first time you launch a container which uses that layer, and all subsequent containers will get to reuse that layer.

In the world where each container image includes the library independently you’ll have to re-download the bits for the library every single time. This can be extremely wasteful, both on repository storage and the network. Repository sizes explode, choking network traffic with gigabytes of Docker downloads and storage requirements go through the roof.
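A back-of-the-envelope model makes the cost of the two cases concrete. The sizes and container count below are invented for illustration, not drawn from the interview:

```python
# Two deployment worlds for N containers that all need the same
# 200 MB library: (1) the library lives in one shared base layer,
# downloaded once per node; (2) every image vendors its own copy.
LIB_MB = 200   # size of the shared library (assumed)
APP_MB = 20    # size of each app's own layer (assumed)
N = 50         # containers deployed to one node (assumed)

# Case 1: shared layer -- the library crosses the network once.
shared = LIB_MB + N * APP_MB

# Case 2: independent copies -- the library is re-downloaded N times.
independent = N * (LIB_MB + APP_MB)

print(shared, independent)  # 1200 11000
```

Even in this toy model, the independent-copy case moves roughly nine times as many megabytes, which is the repository-bloat and network-choke problem Hindman describes.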

At Mesosphere we see customers struggling with this problem in production. We tested a number of alternative solutions.

One approach was basically doing something where you pushed containers out to a few nodes and then they self-propagated using peer-to-peer technologies. That led to the insight that we should really just be looking inside the container image and shipping only the data that we haven’t previously shipped in the past. That is to say, focus on and address the content that we need within the container rather than focusing on layers which contain the content.

RW: How did integrating CVMFS from CERN come about to solve this problem?

BH: I was in Switzerland giving a talk at CERN and I got to meet with some of the team that had built CernVM-FS (CVMFS), a technology originally developed by CERN back in 2008.  At the time, CERN was looking into hardware virtualization in a similar way that people are trying out containers today — how best to deploy applications. Instead of creating images or packages, CERN wanted to use a globally distributed file system. This would allow scientists to install their software once on a web server, and then access it from anywhere in the world. When I was in Geneva their team gave me a demo of CVMFS and I could immediately see that it was a perfect match for containers and would solve our problem.

CVMFS is perfect for propagating containers because it uses a combination of extensive indexing, de-duplication, caching and geographic distribution to minimize the number of components associated with individual downloads, and it’s all automated. This significantly reduces the amount of duplicate data that needs to be transferred and greatly speeds up the transfer of files that share data.

We realized that if we integrated CVMFS with Apache Mesos and the Mesosphere DCOS we could massively reduce the redundant data transfers and make container distribution very fast. That was our ah-hah moment!


RW: How does Apache Mesos and the Mesosphere DCOS deal with containers?

BH: Mesos and the DCOS rely on what we call “containerizers.” Containerizers are responsible for isolating running tasks from one another, as well as for limiting the resources (such as CPU, memory, disk and network) available to each task.  A containerizer also provides a runtime environment for the task which we call a container. A container itself is not a primitive or first class object from Linux, it’s more of an abstract thing using control groups (cgroups) and namespaces. The Mesos containerizer supports all the different image formats that exist today, including Docker and appc.
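The bookkeeping half of a containerizer – tracking each task’s resource limits against what the node can offer – can be modeled in a few lines. This is a toy model only: it does not create real cgroups or namespaces (which require root), and none of the class names are Mesos APIs.

```python
# Toy model of a containerizer's resource accounting: launch a task
# only if the node can honor its CPU/memory limits, and release those
# resources when the container is destroyed. Illustrative names only.
from dataclasses import dataclass, field


@dataclass
class Resources:
    cpus: float
    mem_mb: int


@dataclass
class Node:
    total: Resources
    used: Resources = field(default_factory=lambda: Resources(0.0, 0))


class Containerizer:
    def __init__(self, node: Node):
        self.node = node
        self.containers = {}   # task_id -> Resources

    def launch(self, task_id: str, limits: Resources) -> bool:
        # Refuse the task if granting its limits would oversubscribe the node.
        if (self.node.used.cpus + limits.cpus > self.node.total.cpus
                or self.node.used.mem_mb + limits.mem_mb > self.node.total.mem_mb):
            return False
        self.node.used.cpus += limits.cpus
        self.node.used.mem_mb += limits.mem_mb
        self.containers[task_id] = limits
        return True

    def destroy(self, task_id: str) -> None:
        limits = self.containers.pop(task_id)
        self.node.used.cpus -= limits.cpus
        self.node.used.mem_mb -= limits.mem_mb


c = Containerizer(Node(Resources(cpus=4.0, mem_mb=8192)))
print(c.launch("web", Resources(2.0, 4096)))    # True
print(c.launch("batch", Resources(3.0, 2048)))  # False: over the CPU budget
```

In a real containerizer the accepted limits would then be written into cgroup controllers, and the task’s root file system – whatever image format it came from – would be mounted beneath it.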

RW: How far do you think this integration of Apache Mesos and CVMFS can scale?

BH: Theoretically it should scale to millions of containers. We’re testing now. The good news is that we already know that Mesos and DCOS can scale and that CVMFS can scale.

And the actual integration work was straightforward. The way it works is that instead of downloading the entire image up front, our integration uses the CVMFS client to mount the remote image root directory locally. It takes as its input the name of the CVMFS repository (which internally is mapped to the URL of the CVMFS server) as well as a path within the repository that needs to be used as container image root.

So now you can have multiple container images published within the same CVMFS repository. From the point of view of the containerizer, nothing changes. It is still dealing with the local directory that contains the image directory tree, on top of which it needs to start the container.

The big advantage, however, is that the fine-grained deduplication of CVMFS (based on files or chunks rather than layers with Docker) means we now can start a container without actually having to download the entire image. Once the container starts, CVMFS downloads the files necessary to run the task on the fly. Because CVMFS uses content addressable storage, we never need to download the same file twice. The end result is a much more efficient way to deploy Docker containers at massive scale without blowing up storage capacity and choking network bandwidth.
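The content-addressed deduplication Hindman describes can be sketched with a hash-keyed cache. This is an illustrative model in the spirit of CVMFS, not its actual protocol or client code:

```python
# Files are addressed by the hash of their content, so a chunk already
# in the local cache is never fetched again -- even when it appears in
# many different container images. Sizes and "images" are invented.
import hashlib


def digest(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()


class ContentStore:
    def __init__(self):
        self.cache = {}        # digest -> bytes already on this node
        self.bytes_fetched = 0

    def fetch(self, d: str, remote: dict) -> bytes:
        """Download a chunk only if we don't already hold it."""
        if d not in self.cache:
            self.cache[d] = remote[d]
            self.bytes_fetched += len(remote[d])
        return self.cache[d]


# Two container images that share a large library file.
libssl = b"x" * 1000
image_a = [b"app-a binary", libssl]
image_b = [b"app-b binary", libssl]

remote = {digest(f): f for f in image_a + image_b}
store = ContentStore()

for image in (image_a, image_b):
    for f in image:
        store.fetch(digest(f), remote)

# The shared 1000-byte library crosses the network exactly once.
print(store.bytes_fetched)
```

Layer-based deduplication only wins when developers happen to share layers; addressing content directly, as above, makes the deduplication automatic at file or chunk granularity.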

The post Shipping out: How Docker could scale up massively appeared first on ReadWrite.

]]>
Pexels
Microsoft’s New Developer Song Wasn’t Written In Redmond https://readwrite.com/microsoft-developers-song-pc1/ Fri, 25 Mar 2016 21:30:36 +0000 https://readwrite.com/?p=123

Microsoft used to be able to count on developers to embrace its technologies and extend its lead in the enterprise. […]

The post Microsoft’s New Developer Song Wasn’t Written In Redmond appeared first on ReadWrite.

]]>

Microsoft used to be able to count on developers to embrace its technologies and extend its lead in the enterprise. Today that relationship is much more complicated, as evidenced by a new Stack Overflow developer survey.

Let’s be clear: the new Microsoft under CEO Satya Nadella has been rejuvenating its partnership with developers for years, and that work is bearing fruit. But as the Stack Overflow survey of over 50,000 developers highlights, there’s still work to do. Maybe.

Even as the survey suggests growing dissatisfaction with Microsoft’s native platform development tooling, Microsoft has extended its reach to where developers prefer to play: cloud and web application development. So Microsoft may be losing one skirmish in the native platform battle, even as it wins the overall developer war.

Microsoft developers loving the new vibe…maybe

Though Microsoft is arguably cool again, much of its development tooling is not, according to Stack Overflow’s massive survey. For example, Visual Basic is the second-most popular development environment…but 79.5% of developers apparently hope to never see it again.

Before we pen a eulogy for Windows, however, a quick look at Windows 10, which was the fastest-growing desktop OS in the Stack Overflow 2016 survey, shows remarkable growth. Less than a year after its release, nearly 21% of developers have embraced it. So maybe Windows developers were just waiting for Microsoft to get its desktop OS act together?

Then there’s Sharepoint, Microsoft’s collaboration technology. A full 72% of developers who use Sharepoint hope to never use it again. As for Microsoft and mobile, 65% of developers are moving away from Windows Phone, according to the survey.

Source: Stack Overflow

Despite all this apparent developer antipathy toward Microsoft tooling, it’s important to look at the macro trends.

For example, Microsoft has twice the cloud revenue of any competitor (besides Amazon Web Services, of course), according to Forrester analysis. This cloud love is a good indication of developer love, because developers now live in the cloud.

So, while Microsoft still gets a lot of developer love with .Net and C#, the company is taking those developers and helping them along to the future (cloud) by giving them a clean on-ramp with Azure, and then extending their horizons beyond Microsoft-built technology with support for Node.js and a range of other technologies on Azure (including Linux!). In this way, Microsoft is proving it can speak the polyglot language that developers demand.

The data tells the truth…slant

Nadella has stated his goal of having “every developer on every platform…build intelligent apps” on Microsoft Azure and beyond. It’s no longer enough to serve up Windows, .Net, and other tools to encase developers in an all-encompassing cocoon they’ll never leave. The world doesn’t work that way anymore. Developers demand choice, especially as they move to the free-flowing world of cloud and web.

Looking at the survey data, developers seem to be down on Microsoft technology. But what is, exactly, “Microsoft technology”?

Is it Linux? No way, you scream, and yet Microsoft now embraces Linux on Azure. Or what about Android? Yes, Microsoft has not backed away from requiring patent royalties for Android, as Simon Phipps has argued, but it has opened Azure to Android developers. And then there’s Apache Mesos, Drupal (competitor to Sharepoint), and a host of other open-source technologies too long to list here, not to mention Microsoft’s opening up of CNTK, its artificial intelligence engine and, in fact, opening up its entire R&D process.

Source: Stack Overflow

So Microsoft gets a few strikes against it for home-grown technologies, which often just means they’re popular (developers, like all of us, love to complain about their tools), even as it gets kudos for opening up to a host of technologies that developers love.

In short, if we ask whether developers are falling out of love with Microsoft technologies, the answer is that “it’s complicated.” That complication arises from developers’ shift to the cloud and web, two areas built on technologies we own in common (open source). By making Azure a platform that makes it simple and powerful to run these winning open source projects, Microsoft is winning…no matter what developers may say about its native development tools.

The post Microsoft’s New Developer Song Wasn’t Written In Redmond appeared first on ReadWrite.

]]>
Pexels
Big Data Investors Put a Premium on Proprietary Software https://readwrite.com/investors-premium-proprietary-big-data/ Wed, 09 Mar 2016 02:01:00 +0000 http://ci01e70cd1d0002653

Open source may be the foundation for big data, but it's not the ticket to riches.

The post Big Data Investors Put a Premium on Proprietary Software appeared first on ReadWrite.

]]>

Despite a meager track record for generating outsized returns, investors keep piling money into open source start-ups. As reported by The Wall Street Journal, a minimum of 110 “open source startups” had raised more than $7 billion from venture capitalists as of 2015, up over 100% since 2013. 

The hottest open source companies are focused on big data, with companies like DataStax, Cloudera, and MongoDB bagging billion-dollar valuations as they earn ever-increasing revenues. But even hotter, according to new data from mutual fund filings and Dow Jones VentureSource, are proprietary big data software startups.

This is surprising in some ways, as most big data technology is open source. The trick seems to be how companies choose to monetize it.

Valuing open source

The only public benchmark for open source big data companies is Hortonworks, which went public at a billion-dollar valuation only to see its value slide to $638 million as of the time of this writing. 

Much of the investor concern over Hortonworks’ valuation stems from a persistent worry that the company’s pure-play open source business model doesn’t work. This is ironic given that Hortonworks has taken far less time to get to $100 million in revenue than some industry bellwethers – like open source peer Red Hat, not to mention Oracle and Salesforce.

Source: Hortonworks

Even so, this concern over open source business models plagues other companies, too, including MongoDB, which has seen its mutual fund investors write down their valuations of the company by 30% in the last two years:

Source: Dow Jones VentureSource

Proprietary software’s free pass

The more a big data company focuses on proprietary differentiation, even for otherwise open source products, however, the more investors have tended to give it a free pass.

Take Cloudera, for example, perhaps the closest analog to Hortonworks. Both companies offer Hadoop distributions, but Cloudera has been much more willing to offer proprietary add-ons to complement its open source platform. In response, investors have bid up its still-private shares by 75% in the last two years:

Source: Dow Jones VentureSource

Step away from open source entirely, however, and valuations have risen even more.

Domo – a big data analytics startup that has been happy to question Hadoop’s right to the big data throne – has seen its valuation go up 90% in the last two years, despite going against the open source grain:

Source: Dow Jones VentureSource

Or take Palantir, whose software is used to uncover patterns in massive quantities of data. Palantir uses and contributes quite a lot of open source software, but makes its money selling proprietary software. Its reward? A 152% increase in valuation since Q3 2012:

Source: Dow Jones VentureSource

Selling more than free

None of which is to suggest that proprietary software is better than open source. After more than 15 years working for open source companies, I simply don’t believe that.

But it is an indication that the right way to monetize open source is by selling something other than open source software. Former Wall Street analyst Peter Goldmacher nailed this years ago, arguing that the companies getting rich in the “big data Gold Rush” are the “apps and analytics vendors that abstract the complexity of working with very complicated underlying technologies into a user-friendly front end,” or those “business people that have identified opportunities to use data to create new opportunities or disrupt legacy business models.”

The first group fits the Palantir model. The second includes companies like Facebook or Uber.

Investors aren’t infallible in their estimation of where value lies in the big data ecosystem, as Hortonworks’ Shaun Connolly, vice president of strategy, points out in the article referenced above. But they’re a reasonable indication of where most of the big data money is pooling. And while open source is an essential ingredient of nearly all big data companies, delivering proprietary value on top of that software seems best at paying the bills.

The post Big Data Investors Put a Premium on Proprietary Software appeared first on ReadWrite.

This New Open Source Project Is 100X Faster than Spark SQL In Petabyte-Scale Production https://readwrite.com/new-fast-sql-project/ Tue, 23 Feb 2016 06:53:02 +0000 http://ci01e5e060d0012661

Alluxio is getting attention from Baidu and other data giants because of its in-memory speed at profound scale.

The post This New Open Source Project Is 100X Faster than Spark SQL In Petabyte-Scale Production appeared first on ReadWrite.

Baidu, like Google, is much more than a search giant. Sure, Baidu, with a $50 billion market cap, is the most popular search engine in China. But it’s also one of the most innovative technology companies on the planet. 

Also like Google, Baidu is exploring autonomous vehicles and has major research projects underway in machine learning, translation, image recognition, and neural networks. These represent enormous data-crunching challenges. Few companies manage as much information in their data centers.

In its quest to dominate the future of data, Baidu has attracted some of the world’s leading big data and cloud computing experts to help it manage this explosive growth and build out an infrastructure to meet the demands of its hundreds of millions of customers and new business initiatives. Baidu knows what it means to have peak traffic hammering I/O and stressing the data tier.

Which is what makes it so interesting that Baidu turned to a young open source project out of UC Berkeley’s AMPLab called Alluxio (formerly named Tachyon) to boost performance.

Co-created by one of the founding committers behind Apache Spark — also born at AMPLab — Alluxio is suddenly getting a lot of attention from big data computing pioneers that range from the global bank Barclays to Alibaba and engineers and researchers at Intel and IBM. Today Alluxio released version 1.0, bringing new capabilities to this software that acts like a programmable interface between big data applications and the underlying storage systems, delivering blazing memory-centric performance. 

Shaoshan Liu

I spoke to Baidu Senior Architect Shaoshan Liu about his experiences running Alluxio in production to find out more.

ReadWrite: What problem were you trying to solve when you turned to Alluxio?

Shaoshan Liu: How to manage the scale of our data, and quickly extract meaningful information, has always been a challenge. We wanted to dramatically improve throughput performance for some critical queries.

Due to the sheer volume of data, each query was taking tens of minutes, or even hours, just to finish — leaving product managers waiting hours before they could enter the next query. Even more frustrating was that modifying a query would require running the whole process all over again. About a year ago, we realized the need for an ad-hoc query engine. To get started, we came up with a high-level specification: the query engine would need to manage petabytes of data and finish 95% of queries within 30 seconds.

We switched to Spark SQL as our query engine. Many use cases have demonstrated its superiority over Hadoop MapReduce in terms of latency. We were excited and expected Spark SQL to drop the average query time to within a few minutes. But it did not quite get us all the way. While Spark SQL did help us achieve a 4-fold increase in the speed of our average query, each query still took around 10 minutes to complete.

Digging deeper, we discovered our problem. Since the data was distributed over multiple data centers, there was a high probability that a query would hit a remote data center in order to pull data over to the compute center: this is what caused the biggest delay when a user ran a query. It was a network problem. 

But the answer was not as simple as bringing the compute nodes to the data center.

RW: What was the breakthrough?

SL: We needed a memory-centric layer that could provide high performance and reliability, and manage a petabyte scale of data. We developed a query system that used Spark SQL as its compute engine, and Alluxio as the memory-centric storage layer, and we stress-tested for a month. For our test, we used a standard query within Baidu, which pulled 6TB of data from a remote data center, and then we ran additional analysis on top of the data.

The performance was amazing. With Spark SQL alone, it took 100-150 seconds to finish a query; using Alluxio, where data may hit local or remote Alluxio nodes, it took 10-15 seconds. And if all of the data was stored in Alluxio local nodes, it took about five seconds, flat — a 30-fold increase in speed. Based on these results, and the system’s reliability, we built a full system around Alluxio and Spark SQL.

RW: How has this new stack performed in production?

SL: With the system deployed, we measured its performance using a typical Baidu query. Using the original Hive system, it took more than 1,000 seconds to finish a typical query. With the Spark SQL-only system, it took 300 seconds. But using our new Alluxio and Spark SQL system, it took about 10 seconds. We achieved a 100-fold increase in speed and met the interactive query requirements we set out for the project.
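Those speedups follow directly from the query times Liu quotes; a quick back-of-the-envelope check, using only the numbers from the interview:

```python
# Query times quoted in the interview, in seconds.
hive_secs = 1000     # original Hive system
spark_secs = 300     # Spark SQL-only system
alluxio_secs = 10    # Alluxio + Spark SQL system

speedup_vs_hive = hive_secs / alluxio_secs    # the "100-fold" figure
speedup_vs_spark = spark_secs / alluxio_secs  # 30x over Spark SQL alone
print(speedup_vs_hive, speedup_vs_spark)      # 100.0 30.0
```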

In the past year, the system has been deployed in a cluster with more than 200 nodes, providing more than two petabytes of space managed by Alluxio, using an advanced feature (tiered storage) in Alluxio. This feature allows us to take advantage of the storage hierarchy, e.g. memory as the top tier, SSD as the second tier, and HDD as the last tier; with all of these storage mediums combined, we are able to provide two petabytes of storage space.
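For readers curious what that tiering looks like in practice, here is a minimal sketch of an Alluxio worker’s tiered-storage configuration. The property names follow Alluxio 1.x conventions, but the paths, quotas, and tier sizes are illustrative assumptions, not Baidu’s actual settings:

```properties
# Three storage tiers: RAM on top, SSD in the middle, HDD at the bottom.
# Paths and quotas below are placeholders.
alluxio.worker.tieredstore.levels=3
alluxio.worker.tieredstore.level0.alias=MEM
alluxio.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
alluxio.worker.tieredstore.level0.dirs.quota=16GB
alluxio.worker.tieredstore.level1.alias=SSD
alluxio.worker.tieredstore.level1.dirs.path=/mnt/ssd
alluxio.worker.tieredstore.level1.dirs.quota=200GB
alluxio.worker.tieredstore.level2.alias=HDD
alluxio.worker.tieredstore.level2.dirs.path=/mnt/hdd
alluxio.worker.tieredstore.level2.dirs.quota=2TB
```

Hot data gets served from the memory tier, while colder data spills down to SSD and HDD, which is how a cluster can present petabytes of managed space with memory-class performance on the working set.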

Besides performance improvement, what is more important to us is reliability. In the past year, Alluxio has been running stably within our data infrastructure and we have rarely seen problems with it. This gave us a lot of confidence. 

Indeed, we are preparing for larger scale deployment of Alluxio. To start, we verified the scalability of Alluxio by deploying a cluster with 1,000 Alluxio workers. In the past month, this cluster has been running stably, providing over 50 TB of RAM space. As far as we know, this is the largest Alluxio cluster in the world.

Is There An App Divide? https://readwrite.com/apps-for-the-rich/ Thu, 18 Feb 2016 06:06:00 +0000 http://ci01e552b50001265f

Developers in lower-income markets shut out from the big opportunities

The post Is There An App Divide? appeared first on ReadWrite.

Mobile has promised to be the great social leveler, with 90% of the world’s population over the age of 6 projected to own a phone by 2020. But according to a new Caribou Wireless study, virtually all of the value in the app economy is captured by the rich.

Rich consumers. Rich developers. 

Unfortunately, it’s only going to get worse. Developing economies sometimes scrape themselves out of a trade deficit by manufacturing for more established economies (e.g., Taiwan building semiconductors for the West). But in the app economy, “69% of developers [in lower-income countries] were not able to export” their apps to higher-income markets, as a report from Caribou Wireless finds.

The Rich Get Richer

Apps have never been a great way to make money. According to VisionMobile’s survey of 8,000+ developers, revenue from app store sales grew 70% year over year in 2015. Even so, “60%+ of developers are under the app poverty line,” the report concludes, meaning the developers make less than $500 per month on iOS apps and even less from Android.

Dig into those numbers, however, and it’s clear (though perhaps not surprising) that things are much worse in lower-income economies, as Caribou Wireless finds:

  • 81% of developers [are] in high-income countries, which are also the most lucrative markets;
  • 95% of the estimated value in the app economy is captured by just 10 countries;
  • ~33% of developers only serve their domestic market, “but this inability to export to other markets was much more pronounced for developers in lower-income countries, where 69% of developers were not able to export, compared to high-income countries, where only 29% of developers were not able to export. For comparison, only 3% of U.S. developers did not export.”

Part of the problem for a developer in a lower-income market is that she is sometimes blocked by Google and Apple, which control the dominant app stores, from selling her apps through those stores. 

But even where developers face no such prohibitions, their micro-markets are too small to sustain them, while they get lost in the larger markets, assuming they’re not geo-blocked. The app store model results in a winner-takes-all bonanza for the fortunate few developers who can stand out. Everyone else…languishes.

Buying A Future

Fortunately, there’s mobile commerce. Caribou Wireless’ report focuses on income derived from selling apps, but a far larger opportunity awaits those that sell physical goods through mobile websites and apps. Though the U.S. dominates the app economy, it has been comparatively slow to embrace mobile commerce, leaving a more open playing field, as a new Criteo report uncovers:

Source: Criteo

This acceleration of mobile commerce has been evident for some time, and it promises to do more for the app developer underclass than apps ever have. How much more? Consider that in 2015 mobile commerce was 2.5x bigger than the entire app economy, according to VisionMobile.

That’s a lot more “pie” for aspiring developers, if only they would reach for it. According to that same VisionMobile developer survey, just 9% of app developers are focused on mobile commerce. This needs to change if app developers hope to get above the app poverty line. 

In short, rather than trying to export mobile apps to established economies, aspiring app developers should turn their attention to selling physical goods through mobile apps and websites within their home markets. In such markets mobile phones are already the preferred consumption device, giving them a solid base of consumers.

New Open Source Contributions Might Just Save Docker https://readwrite.com/open-source-docker/ Thu, 11 Feb 2016 01:02:03 +0000 http://ci01e4e942e0002661

Managing storage and networking in Docker containers can be a nightmare. New Kubernetes contributions may help.

The post New Open Source Contributions Might Just Save Docker appeared first on ReadWrite.

As I’ve written before, Docker provides a better way to package and distribute software, which is one reason Docker adoption keeps booming, growing 5X in the last year. Awesome! But good luck getting those Docker containers into production. Not so awesome.

Enter Google. Google (now Alphabet) helped us all by open sourcing and spinning off its container orchestration solution called Kubernetes. Kubernetes makes a big difference, yet simply orchestrating Docker with Kubernetes doesn’t necessarily make life easy for developers. There remains a yawning gap from test to production. 

Why? Docker containers remain mired in the enormous complexity associated with managing the dependencies in the networking and storage layers. It’s a modern-day variation of yesteryear’s DLL hell.

This gap represents a huge market opportunity for a team of former Cisco Unified Computing Systems (UCS) honchos who started a company called Datawise. While still in stealth, their team has quietly been working on critical new capabilities for Kubernetes that they donated to the popular open-source project. Internally dubbed “Project 6” by the company’s engineers, the contributed code makes it much easier to use Kubernetes to deploy containers in production. 

Mark Balch

I recently caught up with Mark Balch, vice president of Products at Datawise, to learn more about their Kubernetes contributions and how the new software helps organizations actually get Docker off developer notebooks and running real workloads on servers.

ReadWrite: Tell me about the Kubernetes contributions Datawise made. Is this yet another case of vanity contributions to an open-source project that help no one but the contributing vendor?

Mark Balch: Our Kubernetes contributions have been accepted into the main trunk for version 1.2. We provided a vendor-agnostic, standard platform for I/O resource scheduling in Kubernetes. Now developers can describe network and storage requirements when building an application by just using the familiar Kubernetes pod definition file. That frees developers to work with the network and storage providers that deliver the best capabilities to meet their cloud-native application needs.
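To make that concrete: a Kubernetes pod definition already declares compute requirements in a standard way, and the idea is that network and storage requirements get expressed in the same file. The sketch below shows the standard CPU/memory request syntax; the pod name, image, and values are illustrative placeholders, and the exact I/O-related fields depend on the Kubernetes version and scheduler extensions in use:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-api                    # hypothetical application
spec:
  containers:
  - name: orders-api
    image: example/orders-api:1.0     # placeholder image
    resources:
      requests:
        cpu: "500m"                   # half a core
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "512Mi"
    # Network and storage I/O requirements would be declared alongside
    # these requests; field names vary by version, so treat this as a sketch.
```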

RW: So what is hard about storage and networking for containers? How have challenges in networking and storage been an obstacle to their production use?

MB: There is broad agreement that production-grade networking and storage for containers is lacking. As everyone finds out quickly enough, do-it-yourself doesn’t work. It’s a maintenance nightmare. 

Storage is a complete custom design: use a SAN, use local caching, the list goes on. There is absolutely no standard workflow or best practices. Networking, for its part, is either a software overlay model (overhead, high latency), operational compromises (port mapping, lack of interoperability with existing network services and networks), or custom integration. 

For developers it means the risk of getting stuck in a walled garden with lock-in to AWS or other cloud providers. Or you can hire consultants to build a custom environment, then pay more to maintain it. Or hire internal employees to build and maintain what should work off the shelf. No thanks.

RW: What’s the significance of the new Kubernetes open-source contributions by Datawise? What have you “unlocked” at the storage and networking layers for Kubernetes-managed containers?

MB: No code changes required. That’s the biggest advantage our contributions give to developers. Docker containers just work in Kubernetes. There is no vendor lock-in. Our technology-agnostic APIs to specify storage and network requirements are in the open source container specification. Then we contributed another set of APIs to extend workload placement (scheduling) and configuration so containers are deployed for optimal performance to get what the developer requested. 

Before Datawise’s contributions, it was a complete Wild West. Developers worked separately to configure network and storage and then somehow custom linked to their containers.

RW: How far are we from containers on bare metal displacing virtual machines?

MB: Virtual machines are ubiquitous and not going away any time soon. Enterprises have enormous investments in their virtualized infrastructure. But there is absolutely a growing ecosystem around bare-metal containers and Kubernetes, including OpenStack Magnum. OpenStack was built on KVM, yet Magnum supports both virtualized and bare-metal deployments. Clearly developers want options, and the leaders in container deployment today, like Google, are running bare metal. We expect more enterprises to increase the mix of bare metal running containers in their data centers.

RW: Where do you see Kubernetes having momentum today?

MB: Beyond the obvious example of Google, Kubernetes is at the core of solutions from Red Hat and CoreOS, and it has certainly been embraced by OpenStack’s Magnum project, as mentioned. VMware demonstrated Kubernetes integration last year. It’s not the only game in town, but arguably it provides the deepest out-of-box container support to date. 

Having said that, we’re not a Kubernetes company – we support an open ecosystem with the goal to ensure rapid deployment with predictable results regardless of the open source tool chain.

RW: Google has an army of engineers to manage things like this. How are average enterprises supposed to handle all these highly customized configurations around storage and networking in their own production container environments?

MB: That’s the key point. If you have the cloud-native database PhD army, deploy your forces. Most businesses don’t. They ultimately want to create unique value and not re-engineer what has already been solved. Our goal is to help enterprises deploy containerized applications quickly, knowing with certainty how they will perform, and that they will work off the shelf in an open ecosystem.

Google’s Mobile Challenge https://readwrite.com/google-mobile-challenge/ Tue, 26 Jan 2016 21:13:31 +0000 http://ci01e3a9dfe0009512

Mobile is about apps, not search. That could be a problem.

The post Google’s Mobile Challenge appeared first on ReadWrite.

Google’s master plan has always been clear: Get more people using the Internet, and sell more ads alongside their searches. As I’ve written, that adds up to $6.30 per Internet user per year.

Unfortunately, that plan has hit a snag, as the Guardian’s Charles Arthur uncovers. As the desktop dwindles and mobile devices surge, “new users and new platforms on which Google is available aren’t as valuable as the old ones.”

Put more bluntly, “Mobile search is a real problem for Google: people don’t do it nearly as much as … it would like.”

Don’t Drink Don’t Search: What Do You Do?

To arrive at this conclusion, Arthur digs through Google’s public numbers. As he notes, there are 1.8 billion smartphones in use outside China (Google doesn’t have a straightforward presence in China, though it’s working to fix that), and 50 billion mobile searches per month. 

The resulting math is simple, even for an English major like me: 0.925 mobile searches per day per device. And you don’t need my writing background to pen the sorry conclusion: anemic mobile search revenue for Google. 
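The arithmetic is easy to reproduce from the article’s figures (a 30-day month is assumed, which is why the result rounds a hair differently than the 0.925 quoted):

```python
# Arthur's mobile-search math, reproduced from the article's figures.
monthly_mobile_searches = 50e9   # mobile searches per month
smartphones_in_use = 1.8e9       # smartphones in use outside China
days_per_month = 30              # assumption

per_device_per_day = monthly_mobile_searches / smartphones_in_use / days_per_month
print(round(per_device_per_day, 3))  # ~0.926: less than one search per device per day
```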

Yes, Arthur details, this same basic math applies to the desktop, too. But given that many desktops are not in regular use, the actual number of searches per user per day is greater than one. And, more importantly, the spread of non-searchers to power searchers is very different: “although the proportion doing more than seven searches per day is about the same (5% or so), [in mobile] you have a far greater number who don’t ever get beyond zero.”

As he concludes: 

“So there is the problem for Google: the PC base is static or even falling, while the number of people holding smartphones is growing. But the latter group tends not to use search, and so doesn’t see its most profitable ads.”

What do they do?

It’s An App-Eat-Search World

The answer, in a word, is “apps.”

We already know that 90% of our smartphone use is consumed by apps, not the mobile Web. Of course, some of that app time is actually Web time, as we read articles and watch videos through apps like Facebook and Twitter. But still, most people don’t spend much of their time browsing the mobile Web. 

This could be a big problem for Google.

For Apple, however, it’s fantastic, as Arthur details: “[I]t turns out that search wasn’t actually the gatekeeper to mobile; having a well-stocked app store is. That’s where the searching really happens.”

Apple generated over $20 billion in app revenues in 2015, keeping more than $6 billion for itself. While that doesn’t add up to Google’s $12 billion-plus in mobile search revenue (it was $11.8 billion in 2014), Apple’s App Store revenue has been growing at a 50% clip, with no end to the growth in sight. Plus Google reportedly paid Apple $1 billion last year just to remain the default search engine on the iPhone.

Google has reorganized as Alphabet to simultaneously let Google focus on its core advertising business while the rest of the company figures out different ways to make money. It comes not a moment too soon. As the company must know better than anyone, mobile changes everything, including how often we search.

Photo of Google CEO Sundar Pichai by Owen Thomas for ReadWrite

Apple TV: 2,500+ Apps And Growing Like Crazy https://readwrite.com/apple-tv-tvos-app-store/ Wed, 09 Dec 2015 18:03:35 +0000 http://ci01dfb15c00012a83

There's an app on your TV for that.

The post Apple TV: 2,500+ Apps And Growing Like Crazy appeared first on ReadWrite.

I admit it. I doubted.

I’ve been an Apple TV customer for years, but I’ve been content to let Apple TV serve a very basic purpose: giving me access to Netflix, iTunes TV shows and movies, and little else. 

So when Apple CEO Tim Cook declared Apple TV’s app-centric experience the “future of TV,” I rolled my eyes. 

Well, I’m not rolling them anymore.

While I haven’t seen my life change through an appified Apple TV experience, I’m convinced that developers are the key to any product taking flight. And the data says that developers love Apple TV, to the tune of 2,624 apps today, according to Appfigures. Ariel Michaeli, the founder and CEO of the app-tracking service, projects that will rise to 5,000 by the end of December and 10,000 by early 2016.

Maybe TV was meant to be about apps, after all.

There’s Apps In Them Thar Hollywood Hills

We’re just a month into Apple shipping an app-friendly new version of its Apple TV hardware and tvOS software. On average, 447 apps find their way to tvOS every day, which is where Michaeli gets his projections from. The question is, what kinds of apps? And the answer? 

Games. Lots and lots of games.

Source: appfigures

Of course, games aren’t the only apps available. Mashable does a good job of showcasing the type of newly minted apps that point to the future of Apple TV. But given that we generally default to our TVs to entertain us, it’s not a bad thing that categories like games and entertainment dominate tvOS apps. 

Importantly for developers, Appfigures finds that 39% of the available apps are paid up front, rather than free with in-app purchases. This suggests, according to Appfigures’ Michaeli, that “consumers trust the new device to provide them with an experience that’s worth paying for upfront.”

What Consumers Want To “Watch”

While games dominate the available app inventory, they don’t (yet) dominate consumer interest. 

To glean consumer interest, Appfigures pulled the top 50 tvOS apps in terms of downloads and then matched them to categories. Just 16% of the top 50 apps are games, while 56% fall into the entertainment category. Small wonder, then, that the top 10 apps are all streaming apps from popular TV services and cable channels.

But the real story isn’t about the plethora of mostly unloved apps. That’s the story of apps, generally: 90% of our time is spent on mobile apps, but the vast majority of that goes to a precious few apps. On phones, that’s Facebook, messaging, and so forth. On TV, no surprise, we like to watch.

What that tells us is that tvOS and the Apple TV App Store are the real deal, that developers are paying attention, and this isn’t just another “most amazing thing ever” boast by Apple. 

A year and a half ago, ReadWrite’s Adriana Lee wrote that for smart TVs, “it’s all about the apps now.” I’m a believer. The future of TV really does look like apps, and now, with tvOS bringing its powerful App Store to the living room, Apple looks set to own that future.

Touch ID Is The Gift That Keeps On Giving https://readwrite.com/apple-pay-touch-id-holidays/ Mon, 30 Nov 2015 18:54:07 +0000 http://ci01def4a8000099de

Apple Pay may be slow to start, but Touch ID will still be a game changer for commerce.

The post Touch ID Is The Gift That Keeps On Giving appeared first on ReadWrite.

If you are like Farhad Manjoo, it’s hard to understand the appeal of Apple Pay.

Judging from the reactions to his tweet, he’s in good company.

And, indeed, I was in that company until very recently. I use a Killspencer iPhone 6 case that affixes to the back of my phone and keeps my driver’s license and three credit cards with me at all times. Nothing could be easier than pulling out a card and swiping to purchase.

Nothing, it turns out, except Apple Pay.

The reason has nothing to do with Apple Pay, per se, and everything to do with Touch ID, which is by far the biggest innovation in mobile phones in a long, long time.

Who Pays With Apple Pay?

Apple CEO Tim Cook declared 2015 would “be the year of Apple Pay.” A year later, it’s clear that 2015 has been anything but that, for all the reasons ReadWrite outlined a year ago. 

Apple is used to launching products and seeing them soar, bringing in massive profits. Apple Pay adoption, however, has been much more pedestrian.

According to survey data collected by PYMNTS.com, while Apple Pay adoption keeps rising, that rise is very, very slow:

Source: PYMNTS.com

As for why adoption has been so slow, chalk it up to inertia and ignorance:

Source: PYMNTS.com

Ignorance of Apple Pay will fade over time, leaving the biggest problem being ease of use. But I’d argue that this, too, has more to do with ignorance than anything else. 

Touch ID To The Rescue 

As mentioned, I’ve been an Apple Pay unbeliever for the past year. As someone that tends to be an early adopter, I tried it early on but kept running into issues with the registry of my debit card. 

This wasn’t a problem with Apple Pay: It was a problem with my bank, which is so concerned about my security that they make it virtually impossible to sign up a card. (Example: “On June 5, 2003, you bought something that cost $23.35. Please name the merchant.” What?!) 

Because of persistent fraud problems with that debit card, I switched to paying with my Chase VISA credit card, which was a snap to set up. (Take a picture of your credit card and boom! you’re registered.) Once registered, I started to test the waters at various merchants, hoping to pay with Apple Pay.

As it turns out, the merchants I visit most frequently (grocery stores) all accept Apple Pay, meaning that I now use Apple Pay more often than not. 

This wouldn’t have been the case, however, if using Apple Pay weren’t significantly easier than pulling out one of my cards (which, remember, are affixed to the back of my phone). But it is.

And the reason is Touch ID. 

While some think Apple Pay requires you to open the Apple Wallet (formerly Passbook), find the card, hold it in front of the register, and then authenticate using Touch ID, you don’t have to do any of that.

All you actually have to do is hold the phone near the register with your finger on the Touch ID “home” button. That’s it.

Immediately afterward, you get a notification that tells you the purchase went through. They can be turned off, but personally, I love those notifications so much that I’ve enabled them for all my credit card purchases (using Apple Pay or not). It’s fascinating to see how you’re billed in real-time. Uber, for example, immediately dings me $1 to test my card when I order a car. 

A Bright Future For Apple Pay

We’ve already seen more than half of online retail purchases shift to our mobile devices from the desktop web this holiday season, according to the Adobe Digital Index (ADI), which tracks over 90% of all online purchases. This represents people like me deciding it’s wiser to buy online with a mobile device than to park in crowds of Black Friday shoppers.

But we’re also going to increasingly see people paying with their phones, predominantly via Apple Pay, as users discover the ease and simplicity of Touch ID. Faster than a card swipe and easier for those of us who can hardly remember what to do with a pen, Touch ID-enabled Apple Pay is inevitable, even if you haven’t yet succumbed. 

Trust me. You will. 

Lead photo courtesy of McDonald’s

One Googler’s War Against JavaScript Frameworks https://readwrite.com/javascript-frameworks-apps-developers/ Wed, 25 Nov 2015 21:41:34 +0000 http://ci01de5e24b0012a83

Making developers happy, but at users’ expense.

The post One Googler’s War Against JavaScript Frameworks appeared first on ReadWrite.

Google cares a lot about the mobile web. 

Though the web giant can arguably be faulted for underinvesting in HTML5 for years, the company is more than making up for it now, doubling down with missionary zeal to convert would-be app heathens like India’s Flipkart back to the web, and mounting serious efforts to dramatically improve mobile web performance. 

See also: How Good Developers Deal With Bad Code 

Given Google’s now-obvious concern for the mobile web, should we trust its view on the JavaScript frameworks used to build mobile web apps? Because at least one representative Googler, Paul Lewis from the Chrome Developer Relations team, thinks the developer benefits of JavaScript frameworks are outweighed by poor user experience.  

The problem with this view, according to Ember.js co-founder Tom Dale, is that it’s wrong. 

Making Developers Happy At Users’ Expense

The hot issue in app development seems to be whether app publishers should be building native apps or web apps. There are great reasons for both, and there are signs that frameworks like React Native are so fantastic that the whole debate may go away. 

However, the heart of the debate between Lewis and Dale is about developers leaning on frameworks, versus building on the web stack without a framework safety net. 

Lewis, for his part, acknowledges that there are significant developer benefits that come from JavaScript frameworks like Ember.js or Angular.js, Google’s homegrown framework. They’re fun, he says, and they help developers build a minimum viable product extra fast, among other reasons. 

The problem, however, is that they may impose a significant performance hit, a definite no-no, especially in an environment with spotty network connectivity and relatively underpowered devices.

But this isn’t merely a matter of performance degradation for the end-user, though that can be significant. In Lewis’ experience, frameworks are suboptimal because they impose hits on latency, bandwidth, CPU usage (battery draining), memory usage, and more.

It’s also not a straightforward win for developers. They must learn the framework, then relearn it when it changes—or learn an entirely new one when the next cool framework hits the web.

Making Users Happy Because Developers Are Happy

Dale—with whom I used to work at Strobe, an HTML5 company acquired by Facebook—wades in and takes issue with a number of Lewis’ contentions. 

His primary argument: “Frameworks let you manage the complexity of your application as it and the team building it grows over time.” 

That is, the bigger and more complex an app becomes, the harder it will be for that single developer to keep up with changing features. The issue dramatically worsens once new developers join the project. “Many developers have worked on a project where the complexity of the codebase swelled to be so great that every new feature felt like a slog,” he said. “Frameworks exist to help tame that complexity.” 

If you’re just building an app for a quick-and-dirty demo, Dale continues, coding without the benefit of a framework’s guidance isn’t an issue. 

But when you’re building for the long haul, a framework can be critical, and can ultimately help end users. “The more productive the developer is, the more bugs they can fix, features they can add, slow code they can profile,” Dale said. 

Finding Middle Ground

Dale’s long-term view doesn’t necessarily hold up, according to Microsoft developer evangelist Christian Heilmann. Apps and business priorities change. 

But Lewis’ opinion may not hold up either, argues Paravel lead developer Dave Rupert: “[I]n client services if I deliver a site that is super fast but impossible to maintain, I have failed at my job.”

The correct answer is probably that both Dale and Lewis are right. And wrong. As ever, it depends on the app and it depends on the developer. However, Lewis finishes with one statement that seems right, regardless of whether you’re pro- or anti-framework: “Investing in knowledge of the web platform itself is the best long-term bet.” 

Using a framework is probably a good idea, but it’s an even better idea for developers to ensure they’re savvy about what the frameworks are trying to abstract. 

The post One Googler’s War Against JavaScript Frameworks appeared first on ReadWrite.

Yelp Open Sources Its PaaS To Liberate You From Docker https://readwrite.com/paas-yelp-cloud/ Tue, 10 Nov 2015 18:16:46 +0000

Yelp wants to keep you from getting locked into someone else's cloud.

The post Yelp Open Sources Its PaaS To Liberate You From Docker appeared first on ReadWrite.


The world’s best cloud and big data software isn’t for sale. Instead, you download it for free.

You won’t see Oracle, IBM, HP, or any of the erstwhile enterprise IT giants developing it, either. In fact, this incredibly rich treasure trove of software isn’t being developed by software vendors at all. It’s the Googles and Facebooks that are releasing it.

Well, add Yelp to that list. 

See also: Building Your Own Cloud Is “Table Stakes,” Says Former AWS Engineer

Yelp, a quiet hero in open source, just released its internal platform as a service (PaaS), cleverly named PaaSTA, to the open source community. The coding genius behind PaaSTA at Yelp is Kyle Anderson, a site reliability engineer for the company who has been tinkering with servers for more than a decade. He and his team at Yelp worked on PaaSTA for 18 months, and today it runs more than 100 production applications at Yelp.

I sat down with Anderson to plumb the details of this impressive contribution.

Not Just Any Old PaaS

Open source is nothing new for Yelp. Indeed, it’s already a leading contributor to more than 58 other open-source projects.

But this is different. 

This is Yelp giving away the secret sauce that powers its computing infrastructure, which has to scale to support user-generated reviews of more than 50 million local businesses in 32 countries. PaaSTA is Yelp’s internal platform for automating the deployment and management of services running inside Docker containers. 

PaaSTA relies on three core components of Mesosphere’s Datacenter Operating System (DCOS), all of which are open source: Apache Mesos, Marathon, and Chronos. Mesos handles the work of actually deploying containers onto servers, while Marathon (which was developed by Mesosphere) makes sure long-running PaaSTA services re-launch should something crash. Chronos schedules containers to launch at preordained times for recurring tasks or batch processing.
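
To make that division of labor concrete, here is a minimal sketch of the kind of JSON document a long-running service looks like when handed to Marathon's `/v2/apps` REST endpoint. The service name, Docker image, and resource numbers are hypothetical, and this is an illustrative shape rather than what PaaSTA itself generates.

```python
import json

# Hypothetical long-running service definition, in the general shape
# Marathon's /v2/apps endpoint accepts as JSON. Marathon keeps
# "instances" copies running and relaunches any that crash; Mesos
# handles the actual placement of containers onto servers.
app_definition = {
    "id": "/example-service",     # hypothetical service name
    "instances": 3,               # Marathon relaunches crashed copies
    "cpus": 0.25,                 # resources Mesos must offer
    "mem": 256,                   # in MB
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/service:latest"},  # hypothetical image
    },
}

payload = json.dumps(app_definition, indent=2)
print(payload)
```

Deploying would then amount to an HTTP POST of this payload to the Marathon master; Chronos accepts analogous job definitions with a schedule attached.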

ReadWrite: What is PaaSTA? Why did you go this route instead of using a commercial PaaS? Why are you giving it to the community as open source?

Kyle Anderson: PaaSTA is an opinionated PaaS built on existing, opinionated, open-source tools like Mesos and Marathon. It gives developers a coherent workflow for going from a raw idea, to a git repo, to a monitored service in production.

Commercial PaaS solutions are often less flexible than in-house solutions. In some sense an in-house built PaaS organically grows to meet an organization’s particular needs. Sometimes this is good, and can lead to good cohesion with your particular environment. Sometimes this is bad, and leads to crufty unmaintained spaghetti. 

With PaaSTA, we needed something that was flexible enough to allow developers to make the transition from our legacy platform, and give us room to grow and remain flexible in the long term. PaaSTA is the outcome of this effort.

We are sharing PaaSTA with the community because we think it’s pretty cool, and we are proud of it! We want others to be able to benefit from what we’ve worked hard to create. We were only able to build such a cool PaaS by standing on the shoulders of some open-source giants.

PaaSTA In Action

RW: Describe how Yelp uses PaaSTA: scale, services, workloads.

KA: Yelp uses PaaSTA as the default platform for all new services, and for legacy services, which are moving over at a rapid pace. Scale on this platform is a very tractable problem, thanks to the hard work that has already been put into scaling Mesos. If we need to get more hardware, we can do that, or if we need to burst we can scale up our auto-scaling groups (ASGs) in AWS.

But at the same time, scaling the “number” of services is also easy in PaaSTA. 

This is because in PaaSTA, a service is just a git repo and a couple of config files describing how it should be run and monitored. This is super powerful for organizations with lots of teams. No team needs to be “blocked” on getting a new service running—the barrier to entry is very low for developers. 
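
As an illustration of how low that barrier is, the sketch below models a per-service config as a handful of declarative fields describing how the service should run and be monitored. The field names and validation rule here are hypothetical, not PaaSTA's actual config schema.

```python
# Hypothetical per-service config: a few declarative fields are all a
# team needs to get a service scheduled and monitored. (Illustrative
# only -- this is not PaaSTA's real schema.)
service_config = {
    "service": "example-service",
    "instances": 2,
    "cpus": 0.5,
    "mem": 512,
    "monitoring": {"team": "example-team", "page": True},
}

REQUIRED = ("service", "instances", "cpus", "mem", "monitoring")

def validate(config):
    """Return the list of missing required fields (empty means valid)."""
    return [key for key in REQUIRED if key not in config]

missing = validate(service_config)
print("valid" if not missing else f"missing: {missing}")
```

The point of the sketch is that onboarding a new service is a config review, not a provisioning project: commit files like this alongside the service's git repo and the platform does the rest.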

Splitting out applications into individual services is not a new technique, despite the “microservices” hype. It is the natural progression as organizations grow and scale. The important part is having a platform that can grow up with you. For Yelp, PaaSTA is that “growing up.”

PaaSTA right now just handles “stateless” workloads, either long running or scheduled. In our opinion, the storage primitives are a tad immature for running production stateful workloads (like MySQL, Cassandra, etc.). For those workloads, we use traditional infrastructure building tools. Luckily, that works great for Yelp! Most services that our developers develop are stateless.

On The Shoulders Of Mesos

RW: How does Apache Mesos fit into your strategy? Why did you elect to run your PaaS on Mesos?

KA: We chose Mesos as a proven technology for building scalable distributed systems in real-life production settings. 

We also chose to build on it because of its “opinionated” nature. Mesos “does one thing well,” and that is the resource management of clusters. It doesn’t actually do the scheduling and decide what to run; it leaves that up to frameworks.

We really like this model; it means we can start with relatively simple frameworks like Marathon and Chronos, but we can expand with our own custom frameworks. For example, we already use a custom Mesos framework called “Seagull” to handle running large test suites across a large number of Mesos slaves using Amazon Spot instances.

Another reason we believe that Mesos is a good foundation upon which to build a PaaS is the fact that it has pluggable executors and containerizers. That means that we are not locked into say, Docker. Docker is cool, but we don’t want to be locked into one particular container implementation. 

I’m really excited for pluggable “containerizers.” So far, we limit things to particular CPU shares and memory, but wouldn’t it be cool if we could go up a level and start talking about cost per hour to run your service? Mesos doesn’t care what the metrics are; it just sees ints, floats, and sets.
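
That resource-agnostic model is easy to sketch: a framework's scheduler just compares each offer's named scalars against what a task requests, and the names themselves carry no meaning to Mesos. The function below is a toy version of that matching logic, not Mesos's actual offer protocol.

```python
# Toy version of the offer-matching a Mesos framework performs.
# Mesos itself never interprets the resource names -- they are just
# scalars (or sets) -- so "cost_per_hour" works as well as "cpus".
def offer_satisfies(offer, requirements):
    """True if the offer has at least the requested amount of every resource."""
    return all(offer.get(name, 0) >= amount
               for name, amount in requirements.items())

offer = {"cpus": 4.0, "mem": 8192, "cost_per_hour": 0.12}
task = {"cpus": 0.25, "mem": 256}

print(offer_satisfies(offer, task))         # enough cpus and mem: True
print(offer_satisfies(offer, {"gpus": 1}))  # no gpus offered: False
```

Because the matching is purely numeric, treating dollars per hour as just another resource dimension needs no changes to the scheduler at all.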

I look forward to that day when we can see compute as a raw utility, and I believe that Mesos is going to help empower that shift in thinking.

Go Custom Or Commercial?

RW: What has been the payoff? The ROI? What are the benefits of your approach over how you did this before or if you had gone with an off-the-shelf PaaS from a vendor? How is your PaaS better for Yelp than alternatives you considered?

KA: The payoff is huge. The most obvious return comes from the lower barrier to writing new services. 

It used to be that developers would have “a dream deferred” due to the large overhead of provisioning a new service, getting resources, adding monitoring, etc. Now it is so easy to launch services, we find that developers use the platform for launching experiments during our frequent hackathons. 

It is a good sign that we built something good when developers choose to use it, even when they are free to do anything (during the hackathon).

Longer term, we expect to see better resource utilization out of our hardware thanks to real automated scheduling (as opposed to manual partitioning). We also look forward to fine-grained auto-scaling and taking advantage of opportunistic low cost servers (AWS Spot Fleet / Spot instances). We are already leveraging this in dev environments for bursty workloads. 

From an organizational perspective, the biggest gain from building your own PaaS is the chance to reuse your existing engineering work on the components. For Yelp, this means we can reuse our existing service discovery mechanisms, monitoring tools, and docker images, to allow service authors to incrementally grow into PaaSTA. 

With other commercial PaaS offerings, the migration is often much more significant.

Because PaaSTA is built from existing open-source components, you can also put those components to work in auxiliary applications. For example, in PaaSTA we use Sensu to do the monitoring and alerting for services, but we can also use Sensu in a more generic way to monitor more conventional things like switches and routers.

A more turnkey PaaS from a vendor may have a tightly integrated monitoring solution with their product, but it is unlikely you can use that same monitoring tool for other things. With PaaSTA, the value of the whole is truly greater than the sum of its parts.

Lead photo by George Thomas 
