Chris Adamson is the co-author of iOS 10 SDK Development (Pragmatic Programmers) and Learning Core Audio (Addison-Wesley Professional). He's also an independent iOS and Mac developer, based in Grand Rapids, Michigan, where he writes the Time.code() blog at subfurther.com/blog and hosts livestreams at invalidstream.com. Over the years, he has managed to own fourteen and a half Macs.
AV Foundation, introduced in iOS 4, offers broad and deep media functionality to third-party apps. Working from a "media document" model, it provides APIs for audio and video capture, editing, playback, and export. Developed in parallel with the iOS version of iMovie (which it powers), AV Foundation is a sensible first choice for most media needs on iOS, and is so compelling that it is being added to Mac OS X in Lion. In this session, we'll survey the kinds of features AV Foundation provides (and note the cases where you'd want to use something else, like Core Audio), and then dig into the basics of its capture, playback, and editing features.
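To make that "media document" model concrete, here's a minimal, playground-style sketch of AVF playback in today's Swift (the bundled file name is a hypothetical stand-in):

    import AVFoundation

    // Minimal AV Foundation playback: point a player at a media URL and go.
    // "movie.m4v" is a hypothetical bundle resource.
    guard let url = Bundle.main.url(forResource: "movie", withExtension: "m4v") else {
        fatalError("media file not found")
    }
    let player = AVPlayer(url: url)
    player.play()

    // For on-screen video, attach the player to a layer in the view hierarchy:
    let playerLayer = AVPlayerLayer(player: player)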
The iOS version of iMovie uses the AV Foundation framework, and indications are that Final Cut Pro X will be using the Mac OS X version of AVF. And if AV Foundation is powerful enough to provide the core functionality of Final Cut, it must have some great stuff going on, right? In this session, we'll dig into the more powerful (and more challenging) APIs in AV Foundation, including reading and writing raw samples, performing live processing of incoming data at capture time, and advanced editing features like mixing audio and video tracks and adding Core Animation-based titles.
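As a taste of the "raw samples" side, here is a hedged sketch of pulling decoded frames out of a movie with AVAssetReader; the function, its URL parameter, and the BGRA format choice are illustrative, and error handling is minimal:

    import AVFoundation
    import CoreMedia

    // Read decoded video frames from an asset, one CMSampleBuffer at a time.
    func readSamples(from url: URL) throws {
        let asset = AVAsset(url: url)
        let reader = try AVAssetReader(asset: asset)
        guard let track = asset.tracks(withMediaType: .video).first else { return }
        // Decode to 32-bit BGRA so we can touch the pixels ourselves
        let output = AVAssetReaderTrackOutput(
            track: track,
            outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                             kCVPixelFormatType_32BGRA])
        reader.add(output)
        reader.startReading()
        while let sampleBuffer = output.copyNextSampleBuffer() {
            // Each buffer wraps one decoded frame; this is where we mess with it
            if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                _ = CVPixelBufferGetWidth(imageBuffer)
            }
        }
    }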
AV Foundation -- introduced in iOS 4, ported to Lion, and enhanced further in iOS 5 -- delivers a comprehensive framework for audio and video capture and playback. The capture functionality is so good, it's now the preferred option for still photography applications. In this session, we'll focus squarely on AV Foundation as a media capture framework. Attendees will learn:

* How to get the most out of the device for still photography, by using AV Foundation to access the flash, white balance, and image resolution.
* How to capture audio and video to the file system.
* How to process incoming audio and video capture buffers in memory, to create real-time effects or pick out interesting parts of the scene on the fly.
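For a flavor of the first two bullets, here's a sketch of capture configuration using the modern Swift API (camera permission is assumed to be granted, and most error handling is skipped):

    import AVFoundation

    // Device settings must be bracketed by lockForConfiguration/unlockForConfiguration.
    func configureCameraForStills(session: AVCaptureSession) throws {
        session.sessionPreset = .photo   // full still-image resolution
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        try camera.lockForConfiguration()
        if camera.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
            camera.whiteBalanceMode = .continuousAutoWhiteBalance
        }
        if camera.hasFlash, camera.isFlashModeSupported(.auto) {
            camera.flashMode = .auto   // newer code sets this per-shot via AVCapturePhotoSettings
        }
        camera.unlockForConfiguration()
    }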
There sure are a lot of "Core" frameworks in iOS, but what do they do for you? Core Foundation is often assumed to just be a C version of the familiar Objective-C objects in Foundation, but wait... what's this CFPlugIn? That sure doesn't have an NS-equivalent. And collections like CFBagRef and CFTreeRef, what are they? What's this CFUUID that Apple says I have to use instead of -[UIDevice uniqueIdentifier]?
That's just the beginning: beyond Core Foundation, there's even more C-only functionality to be had. Core Graphics' CGPDF functions let you draw to and from PDFs, and even parse their notoriously nasty innards. And there are more interesting C-only treats in Core Text, Core Telephony, and others.
In this session, we'll make peace with iOS' C frameworks by getting used to the conventions of allocators, opaque types, run loops, and the toll-free bridge, and tour some of the unique functionality that's only available at this level of the iOS stack.
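As a small taste of those conventions, here's allocator-based creation and the toll-free bridge as seen from Swift (which manages CF memory for us; in C, the "Create rule" would oblige a CFRelease):

    import CoreFoundation
    import Foundation

    // The Create rule: anything from a "Create" or "Copy" function is ours to manage.
    let uuid = CFUUIDCreate(kCFAllocatorDefault)              // allocator-based creation
    let cfString = CFUUIDCreateString(kCFAllocatorDefault, uuid)

    // CFString is toll-free bridged to NSString/String:
    let swiftString = cfString! as String
    print(swiftString)   // a unique string, e.g. "68753A44-4D6F-1226-9C60-0050E4C00067"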
"In Soviet Russia, panel questions you!"
Borrowing an idea from the Penny Arcade Expo (PAX) and the panels held there by Harmonix (makers of the "Rock Band" games), a "Reverse Q&A" literally turns the tables on the traditional panel. Speakers become questioners, and attendees are the ones with the answers.
It's a novel idea, letting attendees have their moment in the limelight to say what they're really thinking, and letting speakers learn more about what people want from conferences, books, and their development life in general. With a combination of polls, follow-ups, person-on-the-street questions, and funny stories that we can all relate to, the Reverse Q&A will shake the cobwebs out of the old panel format and turn it into a two-way discussion that both sides of the mics can learn from.
If your iOS app streams video, then you're going to be using HTTP Live Streaming. Between the serious support for it in iOS and App Store rules mandating its use in some cases, there realistically is no other choice. But where do you get started, and what do you have to do? In this session, we'll take a holistic look at how to use HLS. We'll cover how to encode media for HLS (and get the best results for all the clients and bitrates you might need to support), how to serve that media (and whether it makes sense to let someone else do it for you), and how to integrate the HLS stream into your app.
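The integration step is the easy part: playing an HLS stream is the same AVPlayer code as playing a flat file, pointed at the variant playlist. A sketch, with a hypothetical .m3u8 URL:

    import AVFoundation

    let streamURL = URL(string: "https://example.com/stream/index.m3u8")!
    let playerItem = AVPlayerItem(url: streamURL)

    // Optionally cap the bitrate the client will select, e.g. on cellular:
    playerItem.preferredPeakBitRate = 1_000_000   // 1 Mbps

    let player = AVPlayer(playerItem: playerItem)
    player.play()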
Core Audio is the low-level iOS API for processing audio in real time, used in games, virtual instruments, web radio clients, music mixers, and more. It also has a well-earned reputation for being a challenging and unforgiving framework. In this advanced all-day tutorial, we'll shake off the scary and dig into Core Audio, playing with its powerful pieces and cutting through its thicket of jargon and secrets disclosed only in header files.
The all-day class will be broken into four hands-on sections:
1. Sounds and Samples, Properties and Callbacks: the concepts of digital audio, the conventions of Core Audio, and our first audio apps
2. Pretty Packets All in a Row: Recording and Playback with audio queues
3. Dreaming of Streaming: Building a web radio client with Core Audio
4. Go With the Flow: Audio Units and Audio Processing Graphs
This is an advanced tutorial, and it is assumed that attendees are intermediate to advanced iOS developers. At a minimum, you should know how to build and run apps, link and use iOS frameworks, and be fairly comfortable with C code (including pointers and malloc/free). Of course, you should have a MacBook with Xcode 4.3 or higher. Being able to deploy to an iOS device during the tutorial is optional, but may be helpful.
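The class itself works in C, but to give a sense of the conventions covered in section 1 (description structs, properties, OSStatus checking), here is the classic find-and-instantiate dance for the RemoteIO audio unit, transliterated into Swift:

    import AudioToolbox

    // Describe the unit we want by type / subtype / manufacturer...
    var description = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    // ...then find a matching component and instantiate it, checking OSStatus as we go.
    guard let component = AudioComponentFindNext(nil, &description) else {
        fatalError("RemoteIO unit not found")
    }
    var remoteIOUnit: AudioUnit?
    var err = AudioComponentInstanceNew(component, &remoteIOUnit)
    assert(err == noErr, "couldn't create RemoteIO instance")
    err = AudioUnitInitialize(remoteIOUnit!)
    assert(err == noErr, "couldn't initialize RemoteIO")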
Core Audio gets a bunch of neat new tricks in iOS 6, particularly for developers working with Audio Units. New effect units include an improved ability to vary pitch and playback speed, a digital delay unit, and OS X's powerful matrix mixer. There's also a new place to use units: the Audio Queue now offers developers a way to "tap" into the data being queued up for playback. To top it all off, a new "multi-route" system allows us to play out of multiple, multi-channel output devices at the same time.
Want to see, and hear, how all this stuff works? This section is the place to find out.
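For a preview, here's how the new pitch/speed unit is described in code. Instantiation follows the same find-and-instantiate pattern as any other unit, so the parameter calls are sketched as comments (the variable name is hypothetical):

    import AudioToolbox

    // iOS 6's NewTimePitch effect varies pitch and playback rate independently.
    var description = AudioComponentDescription(
        componentType: kAudioUnitType_FormatConverter,
        componentSubType: kAudioUnitSubType_NewTimePitch,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    // ...after instantiating timePitchUnit from this description:
    // AudioUnitSetParameter(timePitchUnit, kNewTimePitchParam_Rate,
    //                       kAudioUnitScope_Global, 0, 1.5, 0)   // 1.5x speed
    // AudioUnitSetParameter(timePitchUnit, kNewTimePitchParam_Pitch,
    //                       kAudioUnitScope_Global, 0, 300, 0)   // +300 cents (3 semitones)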
Tim O'Reilly once passed along an observation from Broderbund founder Doug Carlston: "We consider a productivity application to be any application where the user's own data matters more to him than the data we provide."
The iPad is an absurdly wonderful device for this kind of application: big touch screen for drawing/designing/writing, gigs of storage, wireless networking to put your work in Dropbox or iCloud, and multi-core CPUs to grind through the heavy lifting. And don't let the lack of a user-visible file system fool you: inside the SDK, there is deep support for writing world-class applications to create, manage, edit, export, and distribute the user's data. Cut/copy/paste, import/export, rip/mix/burn, it's all there. But how do you find it, and how do you put the pieces together?
In this all-day tutorial, we'll look at the most essential SDKs for writing apps that let users make the most of their data, whether it's words, pictures, numbers, or media. We'll cover:
...plus more neat tricks to make our users more productive.
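As one example of that deep support for user data, a minimal UIDocument subclass gets saving, loading, and iCloud coordination almost for free. The plain-text "document" here is a deliberately simple, hypothetical stand-in:

    import UIKit

    class TextDocument: UIDocument {
        var text = ""

        // Snapshot the model for writing
        override func contents(forType typeName: String) throws -> Any {
            return Data(text.utf8)
        }

        // Rebuild the model from what was read
        override func load(fromContents contents: Any, ofType typeName: String?) throws {
            guard let data = contents as? Data else {
                throw CocoaError(.fileReadCorruptFile)
            }
            text = String(decoding: data, as: UTF8.self)
        }
    }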
Audiobus is an iOS app that allows other apps to work together as an audio-processing toolchain: play your MIDI keyboard into one app, run it through filters in other apps, and mix it in a third. All in real time, foreground or background. That such a thing is possible on the locked-down iOS platform is remarkable enough, but what's even more remarkable is that hundreds of audio apps have added Audiobus support in the few months since its debut, including Apple's own GarageBand. In this session, we'll take a look at the Audiobus SDK and see how to create inputs, outputs, and filters that can be managed by the Audiobus app to process audio in collaboration with other apps on the device.
The iPhone is the best iPod Apple's ever made, and the iPad has replaced the TV for many users. And while developers can use documentation and books to master the media frameworks (AV Foundation, Core Audio, and the rest), there's nothing in Xcode that will keep your audio from dropping out, fix artifacting on video with a lot of motion, or properly balance performance on the most capable new Retina devices against backwards compatibility with older ones. This session offers a ground-level intro to what's actually in your iTunes songs and streaming videos, and how to best encode them for the realities of iOS devices, their storage capacities, and the networks they live on. We'll shoot, compress, and stream, all from a MacBook Air, and take a close look and listen to the results.
From the days of Mac telling PC about iMovie, to 2013's holiday ad with the kid making an on-the-spot family Christmas video, Apple's platforms have long excelled at working with audio and video. In the '90s and 2000s, that was thanks to QuickTime, but over the last few years, that's given way to AV Foundation, introduced in iOS 4 and Mac OS X 10.7 (Lion). This Objective-C-based framework gives the iOS or Mac developer the ability to play and create audio and video files, stream from the network, and capture from cameras and microphones, all with a deep level of customizability.
In this class, you'll get a deep dive into AV Foundation on iOS -- playing, capturing, editing, and exporting -- touring the highlights of the framework, and peeking behind the curtain into the underlying Core Media framework. Nearly all of the material covered can also be ported as-is to OS X.
The tentative agenda is as follows:
Because AV Foundation's capture classes are not supported in the Simulator, attendees should bring an iOS device and everything needed to run code on that device (i.e., an appropriate cable, and Xcode provisioned to deploy to the device).
AV Foundation makes it reasonably straightforward to capture video from the camera and edit together a nice family video. This session is not about that stuff. This session is about the nooks and crannies where AV Foundation exposes what's behind the curtain. Instead of letting AVPlayer read our video files, we can grab the samples ourselves and mess with them. AVCaptureVideoPreviewLayer, meet the CGAffineTransform. And instead of dutifully passing our captured video frames to the preview layer and an output file, how about if we instead run them through a series of Core Image filters? Record your own screen? Oh yeah, we can AVAssetWriter that. With a few pointers, a little experimentation, and a healthy disregard for safe coding practices, Core Media and Core Video let you get away with some neat stuff.
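Here's a sketch of the capture-plus-Core-Image idea. The delegate class and filter choice are illustrative, and rendering back into the incoming buffer is exactly the kind of "healthy disregard" the session endorses; it assumes an AVCaptureVideoDataOutput already added to a running session:

    import AVFoundation
    import CoreImage
    import CoreMedia

    class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let context = CIContext()
        let filter = CIFilter(name: "CIPhotoEffectNoir")!   // any built-in filter

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            filter.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
            guard let filtered = filter.outputImage else { return }
            // Render the filtered result back into the same pixel buffer
            context.render(filtered, to: pixelBuffer)
        }
    }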
Graphics on iOS and OS X isn't just about stroking shapes and paths in Core Graphics and trying to figure out OpenGL. The Core Image framework gives you access to about 100 built-in filters, providing everything from photographic effects and color manipulation to face-finding and QR Code generation. It can leverage the power of the GPU to provide performance fast enough to perform complex effects work on real-time video capture. But even if you're not writing the next Final Cut Pro or Photoshop, it's easy to call in Core Image for simple tasks, like putting a blur in part of your UI for transitions or privacy reasons. In this session, we'll explore the many ways Core Image can make your app sizzle.
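For instance, that "privacy blur" is a few lines with the built-in CIGaussianBlur filter. A sketch (cropping back to the original extent hides the blur's edge bleed):

    import CoreImage
    import UIKit

    func blurred(_ image: UIImage, radius: Double = 10) -> UIImage? {
        guard let input = CIImage(image: image),
              let blur = CIFilter(name: "CIGaussianBlur") else { return nil }
        blur.setValue(input, forKey: kCIInputImageKey)
        blur.setValue(radius, forKey: kCIInputRadiusKey)
        let context = CIContext()   // expensive to create; reuse one in real code
        guard let output = blur.outputImage?.cropped(to: input.extent),
              let cgImage = context.createCGImage(output, from: input.extent)
        else { return nil }
        return UIImage(cgImage: cgImage)
    }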
When the first 128K Macs landed in 1984, it was the first time many of us could undo a mistake with just a keystroke, or exchange data between documents or applications with cut/copy/paste and the system clipboard. Fast forward 30 years and we all use this stuff… but do you know how to actually implement it? Especially on iOS, these everyday features are surprisingly absent from many developers' toolchests. In this session, we'll flash back to the era of Reagan, Rubik's Cubes, and Return of the Jedi, to see how these hot hits of the early '80s are represented in modern-day Cocoa.
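For the impatient, here are both features in miniature: copy and paste through the shared system pasteboard, and undo by registering the inverse of each change. The Model class is a hypothetical stand-in:

    import UIKit

    // Cut/copy/paste: the system clipboard is UIPasteboard.general.
    func copyToClipboard(_ text: String) {
        UIPasteboard.general.string = text
    }

    func paste() -> String? {
        return UIPasteboard.general.string
    }

    // Undo: each change registers a closure that restores the old value.
    class Model {
        let undoManager = UndoManager()
        var title = "" {
            didSet {
                let oldTitle = oldValue
                undoManager.registerUndo(withTarget: self) { model in
                    model.title = oldTitle   // re-registering here gives us redo for free
                }
            }
        }
    }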
watchOS 2.0 brings media functionality to Apple Watch, offering audio and video playback and audio capture. But before you set out to write Logic or Final Cut for the watch, be warned: what's available on the wrist has its limits, and you hit them quickly. In this session, we'll see what the WKInterfaceController offers us for miniature mobile media, and how we can get the benefits of AV Foundation and Core Audio by moving our movies, songs, and podcasts back and forth between the watch and the iPhone.
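A sketch of the built-in watchOS media player, assuming a file already transferred to the watch (the controller class is illustrative):

    import WatchKit

    class MediaInterfaceController: WKInterfaceController {
        func playMedia(at fileURL: URL) {
            // Presents the system playback UI; not meant for arbitrary network streams
            presentMediaPlayerController(with: fileURL, options: nil) {
                didPlayToEnd, endTime, error in
                if let error = error {
                    print("playback failed: \(error)")
                }
            }
        }
    }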
Apple TV offers a friendly SDK, full of familiar view controllers and Foundation classes, with everything an iOS developer needs to develop their own streaming channel. Except for… you know… the streaming part. In this session, we'll look at how Apple's HTTP Live Streaming video works -- from flat files or live sources -- and how to get it from your computer to a streaming server and then to an Apple TV. We'll also look at common challenges for building streaming channel apps, like serving metadata, protecting content, and supporting single sign-on.
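On the client side, the "streaming part" boils down to handing an HLS playlist to the standard player UI. A sketch, with a hypothetical channel URL:

    import AVKit
    import AVFoundation
    import UIKit

    func presentStream(from presenter: UIViewController) {
        let url = URL(string: "https://example.com/channel/master.m3u8")!
        let playerController = AVPlayerViewController()   // standard tvOS playback UI
        playerController.player = AVPlayer(url: url)
        presenter.present(playerController, animated: true) {
            playerController.player?.play()
        }
    }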
With Facebook shutting down Parse, everybody knows to never again depend on a third party for their backend solution, right? Sure, and after you spend six months trying to write your own syncing service, how's that working? In 2016, Google added a ton of features to Firebase, its popular backend-as-a-service solution. Firebase's primary offering is a realtime database in the cloud that syncs changes to and from multiple concurrent users, and its Swift-friendly iOS SDK makes it ideal for mobile use. In this session, you'll learn how to set up a Firebase backend and build an iOS app around it.
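A sketch of the write-and-observe cycle against the Realtime Database (module and class names as in the Swift-renamed SDK of that era; the "messages" path and payload are hypothetical):

    import FirebaseDatabase

    let messages = Database.database().reference(withPath: "messages")

    // Write: push a new child with an auto-generated key; it syncs to all observers
    messages.childByAutoId().setValue(["text": "Hello, Firebase", "sender": "chris"])

    // Read: this closure fires now and again on every subsequent change
    messages.observe(.value) { snapshot in
        for child in snapshot.children.allObjects as? [DataSnapshot] ?? [] {
            print(child.value ?? "")
        }
    }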
Swift is great for writing iOS and Mac apps, and its creators also mean for it to be used as a systems programming language. However, certain traits about Swift make it officially off-limits for use in some audio/video-processing scenarios. What's the deal, is it not fast enough or what? We'll look at what media apps can and can't do in Swift, and what you're supposed to do instead. We'll also look at strategies for knowing what responsibilities to dole out to Swift and to C, and how to make those parts of your code play nicely with each other.
What’s Apple planning for its media frameworks in the next 12 months? What’s it doing with Apple TV, or the HTTP Live Streaming standard? We won’t know until the curtain drops on WWDC! In this talk, we’ll amass everything audio- and video-related that gets announced throughout the week, combine it with the solid base of frameworks already present in the Apple platforms, and figure out from there what we’re going to be playing with in 2018.