The end of DST?

Twice a year we go through the process of changing clocks forward or back as daylight saving time rolls around or comes to a crashing end. Yesterday I had to change the time on 5 devices; luckily a bunch more changed automatically. While this process is a bit of a pain and losing the hour of sleep wreaks havoc on our schedules for a while, I like daylight saving time as the extra hour of daylight is most welcome.

Recently, a member of the California Legislature introduced a bill (AB-2496) to eliminate DST. While this sounds like it would simplify things and spare us the schedule adjustment, everything I’ve read indicates that people generally want DST all year long, not year-round PST. Unfortunately, this bill doesn’t address that.

If people really wanted DST all year long and still wanted that extra “hour” of daylight, then we, as a society, would have to shift our schedules and our notions of when things start. So instead of a normal workday being 9-5 (OK, I know that 9-5 is a cliché, but bear with me), we’d make it 8-4. Everything would have to shift so that we would have the perception of an extra hour of daylight; we wouldn’t gain an hour of daylight, we’d just start and end routine activities (like work) earlier so that our free time falls while it is still light out. This, of course, is never going to happen.

Given that we’re stuck either with our current system of DST or with ditching DST and not shifting our schedules, ditching DST is not easy. Yes, we wouldn’t have to change our clocks, but think about all the computers that automatically change theirs. Instead of Pacific Time, which assumes that DST is observed, we’d need another option like “California time” which has no DST. This means that computers and IoT devices would have to be updated to support it. That isn’t technically difficult, but rolling it out could be harder than the daylight saving time switch in 2007 (mandated by the Energy Policy Act of 2005), as consumers would have to explicitly choose the new time zone (with the last change, only the rules specifying when DST starts and ends had to be updated, with no user interaction).
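If you’re curious where those rules live, they’re in the tz database that ships with essentially every OS; on a Mac or Linux box you can dump the Pacific transitions with zdump. A hypothetical no-DST “California time” would mean adding a brand-new entry like this to the database and getting every device to select it:

    # Show every DST transition the system knows about for the Pacific zone;
    # grep narrows the list to this year's two changes.
    zdump -v America/Los_Angeles | grep 2016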

If you take a look at an iCalendar entry, you can see that most modern calendaring programs already take into account DST rules.

    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//Apple Inc.//Mac OS X 10.11.3//EN
    CALSCALE:GREGORIAN
    BEGIN:VTIMEZONE
    TZID:America/Los_Angeles
    BEGIN:DAYLIGHT
    TZOFFSETFROM:-0800
    RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
    DTSTART:20070311T020000
    TZNAME:PDT
    TZOFFSETTO:-0700
    END:DAYLIGHT
    BEGIN:STANDARD
    TZOFFSETFROM:-0700
    RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
    DTSTART:20071104T020000
    TZNAME:PST
    TZOFFSETTO:-0800
    END:STANDARD
    END:VTIMEZONE
    BEGIN:VEVENT
    CREATED:20160304T043923Z
    UID:AAAAAA
    DTEND;TZID=America/Los_Angeles:20160311T220000
    TRANSP:OPAQUE
    X-APPLE-TRAVEL-ADVISORY-BEHAVIOR:AUTOMATIC
    SUMMARY:Some Event
    DTSTART;TZID=America/Los_Angeles:20160311T180000
    DTSTAMP:20160304T043925Z
    LAST-MODIFIED:20160304T043923Z
    SEQUENCE:0
    END:VEVENT
    END:VCALENDAR

Yes, EVERY calendar entry you have has DST rules in it because the rules are a mess worldwide. So while calendaring programs are already equipped to handle a DST change, are we ready for it? Can you imagine someone in California scheduling a meeting for people in New York and Portland? People in Phoenix already do this, so it should be easy, right?

As much as I don’t like changing clocks and having to wake up the Monday after the change to DST to get ready for my day, I’m OK with the current system. I know that a recent study shows a link between the switch to DST and stroke, but there are also links between the Super Bowl and violence. Should we get rid of the Super Bowl as well just because of that? (There are other reasons to get rid of a sport that almost encourages head injuries, but that is a different story.)

Review: UniFi AP AC Lite and AP AC LR

I’ve been using my EdgeRouter Lite for more than 6 months now and couldn’t be happier with it. After posting my review, Ubiquiti contacted me and asked if I was interested in testing out some new hardware. As I love playing with new hardware, I couldn’t say no. I had actually been eyeing the 802.11ac access points, but the price tag put me off as I didn’t need a new wireless access point; my Time Capsule has been working fine in bridge mode, providing coverage throughout my house pretty well.

Ubiquiti sent me a UniFi AP AC Lite and a UniFi AP AC LR for testing. Both units are basically identical, with the LR providing better range and potentially better throughput on the 2.4 GHz band. I’m going to focus on the LR, as the price difference ($89 vs. $109) is so small that, for home and small business use, the LR is a no-brainer compared to the Lite (the Lite is also a bit smaller, which could make it fit in better on the ceiling in a home).

Most home users purchase an off-the-shelf router such as the Apple Time Capsule, which includes a router as well as a WiFi access point. This serves most people’s needs; however, some people find that they need additional access points to fill in the dead spots in their homes. To do this, they use either repeaters or additional routers in bridge mode, which wastes a large portion of the router. While this isn’t what I’m doing, as I didn’t need to fill in gaps in coverage, I was quite intrigued by a WiFi access point that simply did WiFi. In addition, the UniFi access points are enterprise-grade, which means (to me) that they’re highly reliable and highly configurable.

When I first opened the AC Lite (I tested it first), the box contained the access point, a mounting bracket, and a PoE injector. A PoE injector supplies power over Ethernet; this means that only 1 Cat 6 cable runs to the access point, while the injector sits near the switch, plugged into a power strip. The first thing that disappointed me about this access point is that it doesn’t use the 802.3af PoE standard, which would have let me connect it directly to my Linksys PoE+ switch. When I asked Ubiquiti about this, I was told that a lot of their customers are price conscious, and when deploying a lot of devices, the cost difference can be significant. In those cases, their customers use the UniFi Switch, which provides passive PoE (like the injector). For my testing setup, I simply turned the access point upside down (nose pointing down) on a high shelf. For permanent installation, they should be mounted on a ceiling (the docs indicate they can also be wall mounted, but based on the antenna design, ceiling mounting will be better). If I had known about access points this cost effective that could be PoE powered, I would definitely have run extra Cat 6 to central places in the ceilings. Anyone who is remodeling and putting in Ethernet cable should throw in a few extra runs in the ceilings to mount access points; even if they aren’t UniFi access points, some type of PoE access point could easily be installed.

The second step was to install the UniFi Controller software on my server. The software is used for initial setup (there are also Android and iOS apps that configure the access point), monitoring, and ongoing maintenance of one or more access points, as well as some of the other products in the UniFi line. The controller is a Java app and installed without too many problems. If you’re installing on OS X Server, I recommend changing the ports it uses in ~/Library/Application Support/UniFi/data/system.properties; OS X Server likes to run its web server on the standard ports even if you turn off websites. Note that you have to run the UniFi Controller once to create this file. Also, when modifying the file, make sure there is nothing on the line after the port, such as a comment, as that will prevent the file from being read. (After spending 30 minutes trying to figure this out, I found a forum post with this information in it.)
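The relevant entries look something like this (the key names come from Ubiquiti’s documentation; the port numbers here are just examples, and remember that nothing can follow the value on its line):

    # ~/Library/Application Support/UniFi/data/system.properties
    unifi.http.port=8081
    unifi.https.port=8444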

I don’t like Java apps for daily use, but for server use I have no objections to them (I also run Jenkins, and it runs well without bogging down the machine). After installing the controller, I wanted to use my own SSL certificate (I’m not a huge fan of accepting self-signed SSL certificates). I set up an internal hostname for the machine and then, with my SSL certificate in a PEM file, imported it into the controller’s keystore:

# Convert the PEM certificate into a PKCS12 bundle (use aircontrolenterprise as the export password to match the import below)
openssl pkcs12 -export -in server.pem -out ~/Desktop/mykeystore.p12 -name "unifi"
# Quote the path; it contains a space
cd "/Users/mediacenter/Library/Application Support/UniFi/data"
# aircontrolenterprise is the controller's default keystore password
keytool -importkeystore -srckeystore ~/Desktop/mykeystore.p12 -srcstoretype PKCS12 -srcstorepass aircontrolenterprise -destkeystore keystore -storepass aircontrolenterprise

(This requires restarting the controller software.)

Once I had the controller software installed, I went to a browser and connected to port 8443 on my server. The controller software walks you through the simple setup, and then the access point is up and running. I don’t recommend stopping there, as there are a number of options to set up to take full advantage of the access point.

[Screenshot: the UniFi Controller interface]

The controller interface is very utilitarian but, in my opinion, not easy to use. For a basic access point, it shows a lot of irrelevant stuff that can’t be turned off. The good news is that the controller software isn’t used all that often. I spent a bit of time experimenting with the interface to get what I wanted. First off, I wanted separate 2.4 GHz and 5 GHz networks. If you have one SSID for both bands, Apple devices pick the band with the better signal; this tends to be 2.4 GHz, and they won’t jump over to 5 GHz automatically. I found a reference to an Apple technote describing the behavior. By separating the 2.4 GHz and 5 GHz networks, you can explicitly select the band. (Apparently the band steering option in the UniFi access points is supposed to help with this.) Next up was a guest network. While the controller can set up a guest network and portal mode, this turns on QoS (Quality of Service) and actually degrades performance even if no one is connected to the guest network. That was not acceptable to me, so I just created a separate SSID, told it to use VLAN 1003, and used what I wrote about before to separate out the traffic. While I would have liked to use the built-in guest network and play with the portal, I rarely have people on the guest network, so the tradeoff wasn’t worth it for me.
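For reference, the router side of that guest VLAN is just a tagged subinterface on the EdgeRouter; it looks something along these lines (a sketch only: the physical interface and subnet here are assumptions, and the guest firewall rules from my earlier post still apply):

    configure
    # Create VLAN 1003 on the LAN interface with its own guest subnet
    set interfaces ethernet eth1 vif 1003 address 192.168.103.1/24
    set interfaces ethernet eth1 vif 1003 description "Guest WiFi"
    commit
    save
    exit

The matching piece in the UniFi controller is simply entering 1003 as the VLAN for the guest SSID.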

There are also settings for controlling transmit power and bands on the access point, but the default settings work for me.

So now that everything was set up, the next question was, “Do they work?” Well, it’s pretty hard for access points not to work! I set up the networks separate from my Time Capsule so that I didn’t subject my household to my testing, and put my devices on them. Would my devices stay connected? Would the access point have hiccups and require rebooting? How was its performance?

I’ve been testing with my 2012 MacBook Pro, iPhone 6 and iPhone 6s, and iPad Mini 2. The iPhone 6 and 6s do 802.11ac; the iPad Mini 2 and the MacBook Pro do 802.11n. I’ve found that the MacBook Pro consistently stays connected to the 5 GHz network (its preferred network) and usually negotiates at 300 Mbps. Using iperf connecting to a local server, I get 150-200 Mbps. That’s not too shabby. The connection is rock solid, and I don’t see the MacBook Pro switching to the 2.4 GHz network. The iPad Mini 2 stays connected to the 2.4 GHz network, but it seems to require me to toggle WiFi periodically to see all the networks, including the 5 GHz one; I have no idea why, but it’s not an access point issue. On the 2.4 GHz network I can get 50-60 Mbps, and on the 5 GHz network 110-140 Mbps. My iPhone has no problem with the 5 GHz network and gets 100-110 Mbps. (I used iPerf3 on iOS to do the measurements; it has an awful user interface, but it does work.) I saw similar, if not better, performance with my Apple Time Capsule. Reading the forums suggests that these access points trade some single-user performance for supporting many users, which is the opposite of my situation. However, the performance is more than acceptable given that I currently have a 100 Mbps Internet connection, and the only time I could exceed that is on my internal network.
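For anyone who wants to reproduce the numbers, the measurement setup is minimal: iperf3 running on the wired server and on the wireless client (the server address here is a placeholder):

    # On the wired server:
    iperf3 -s
    # On the wireless client; 30 seconds averages out momentary dips:
    iperf3 -c 192.168.1.10 -t 30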

While I don’t live in a condo or a multi-unit dwelling with units stacked on top of each other, I do live in an area with crowded airwaves. As you can see below, the 2.4 GHz band has a few peaks (my networks) and a lot of other access points. Performance on 2.4 GHz is acceptable, and since I don’t normally run speed tests, it is more than adequate for my 50 Mbps downstream cable modem connection (for now, until I hopefully get 200 Mbps next month).

[Screenshot: 2.4 GHz spectrum]

The 5 GHz band is a lot less crowded, which is why I try to get my devices on it at all costs (I’m tempted to have the devices forget the 2.4 GHz network, but I suspect that would cause more problems).

[Screenshot: 5 GHz spectrum]

Since I love statistics, I turned on SNMP in the UniFi controller (this actually tells the access point to turn on SNMP; monitoring is done by connecting to the AP, not the controller) and set up Cacti to monitor traffic. There is, of course, very little use in me monitoring traffic on my network, but I’m always curious about network performance and utilization. The graphs do tell me that very, very rarely do I see bandwidth spikes above 50 Mbps.

[Screenshot: Cacti graphs]
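If you want to poll the AP yourself, the standard net-snmp tools work; a quick sanity check looks something like this (the AP address and the community string are placeholders for whatever you configured):

    # List the AP's interfaces, then read the traffic counters that Cacti graphs.
    snmpwalk -v2c -c public 192.168.1.20 ifDescr
    snmpget -v2c -c public 192.168.1.20 ifInOctets.2 ifOutOctets.2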

This access point is definitely a step up from consumer-grade router/access point combos. It is extremely flexible, cost effective, and unobtrusive (I forgot to mention that it looks like a smoke detector). I’ve been so happy with my EdgeRouter Lite and this access point that I have already purchased a UniFi AP AC Pro to see how that will perform.

Pros

  • Highly configurable
  • Easy to install
  • PoE for placement with just an Ethernet cable
  • Unobtrusive
  • SNMP capable
  • Decent performance in the single user environment
  • Low cost

Cons

  • Lite and LR units use passive PoE instead of 802.3af
  • Controller software is a bit cumbersome to use
  • Not all advertised features are currently available such as band steering and airtime fairness
  • Guest portal and rate limiting options drastically affect performance

Summary

While the UniFi access points are designed for enterprises, they are a great addition to the EdgeRouter Lite. If you have a little time to set up an access point and can deal with the not-so-consumer-friendly controller software, I definitely recommend this line of access points. If you’re OK with 3×3 MIMO on 2.4 GHz and 2×2 MIMO on 5 GHz (vs. 3×3 on 5 GHz), then the LR access point is probably the better bang for your buck. The Lite may not be a great choice for the home network, where $20 isn’t going to break the bank, unless the smaller size is attractive for mounting. In my case, I’ll be mounting 1 access point behind my TV and 1 in my office, so no one will see them. If you’re like me and the lack of 802.3af PoE bothers you, then the Pro access point is the way to go. Since I already have a PoE switch (actually 2 of them, and neither is a Ubiquiti switch that provides passive PoE), having to use an unsightly injector (which uses an extra power outlet) doesn’t excite me.

The Ubiquiti forums provide a wealth of information for the tinkerer. Ubiquiti staff are very helpful and provide lots of answers (as do community members). The controller software and AP firmware are updated all the time, which is very exciting; I don’t need new features, but a fresh UI and more options (such as being able to turn off the LED without resorting to the command line) would be nice.

For better coverage, getting at least 2 access points would go a long way toward full coverage in a house. While 1 will get me coverage bars all over my house, a second one will give me better performance and not just bars of coverage. Once I get the Pro unit, I’ll be able to space out my access points.

Most home users just accept mediocre WiFi coverage and buy into the marketing of router/access point vendors claiming that their access points perform better than others. The problem is that an access point can have higher transmit power (up to the maximum allowed), but that doesn’t help if your device can’t connect or can’t get good WiFi performance. More access points are going to provide better, more consistent coverage, and the UniFi access points do that quite well at a reasonable price.

NOTE: Test units were provided to me at no cost from Ubiquiti Networks. However, that didn’t influence the results of this review and no conditions were placed on what I wrote about the units.

Enhancing my TV Viewing Experience

In my last post about TV viewing, I wrote that I’ve switched to MythTV with a few custom scripts to export to Plex. This setup has been working well, and I’m quite pleased with Plex on the AppleTV. One of the features of the AppleTV 4 is the ability to ask Siri things including “What did he say?”, which rewinds the video a few seconds and turns on subtitles. This is an extremely useful feature that we’ve used a number of times watching shows on Netflix. However, since I have my own system for recording and watching shows, I didn’t have this ability.

So, I decided to see what it would take to enable this. From what I read, Plex will show subtitles, and if the subtitles are in the video container itself (i.e., in the mp4), showing them doesn’t require extra processing power on the Plex Media Server. That got me thinking: if I could turn the closed captioning into subtitles and put them in the mp4, maybe I could get this feature working.

Using the ever-so-powerful ffmpeg, I figured out how to extract the closed captioning (closed captioning isn’t the same as subtitles, as far as I can tell, since the two are transmitted differently):

/usr/local/bin/ffmpeg -f lavfi -i "movie=${TEMPDIR}/${FILENAME}[out0+subcc]" -map s ${TEMPDIR}/${FILENAME}.srt

where ${FILENAME} is the MPEG2 file with the commercials stripped out (otherwise the subtitle timings wouldn’t match up).

Then I could use this:

    /usr/local/bin/ffmpeg -i ${TEMPDIR}/${FILENAME}.mp4 -f srt -i ${TEMPDIR}/${FILENAME}.srt -metadata:s:s:0 language=eng -c:v copy -c:a copy -c:s mov_text "${PLEXDIR}/${TITLE}/${OUTPUT_FILENAME}"

which merges the newly created subtitle file (.srt) into the mp4 file.

After doing all that (the processing takes a while), I had an mp4 file with subtitles in it! I tried it out in QuickTime Player, turned on subtitles, and the words showed up. Next I put the video into Plex and played it through the Plex interface; good there as well. The last test was to play it through the AppleTV and use the Siri command to find out the last words. Amazingly, that worked as well! There is a slight hiccup where Plex stutters after the subtitles are turned off, but that’s pretty minor.

Putting that all together, I updated my plexexport script that I call from MythTV. While this whole setup has taken a little while to get running, I just smile every time we watch shows, as it really works seamlessly and requires no babysitting (I do check on the recorded shows each morning just to make sure that nothing went wrong, but that’s just because I can’t believe it really works!).
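For reference, the subtitle half of that script boils down to the two commands above chained together; here is a stripped-down sketch (the real script also handles the commercial cutting and cleanup, and the variables are the same placeholders as above):

    #!/bin/sh
    # Extract the closed captions from the cut MPEG2 file into an .srt...
    /usr/local/bin/ffmpeg -f lavfi -i "movie=${TEMPDIR}/${FILENAME}[out0+subcc]" \
        -map s "${TEMPDIR}/${FILENAME}.srt"
    # ...then mux the .srt into the finished mp4 as a mov_text subtitle track.
    /usr/local/bin/ffmpeg -i "${TEMPDIR}/${FILENAME}.mp4" -f srt \
        -i "${TEMPDIR}/${FILENAME}.srt" -metadata:s:s:0 language=eng \
        -c:v copy -c:a copy -c:s mov_text \
        "${PLEXDIR}/${TITLE}/${OUTPUT_FILENAME}"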

The TiVo works wonderfully for the average user; I’m not the average user, and I consider all this work a hobby. Will I get tired of putting together my own systems like this? I have no idea, but for now I’ll keep working on it until it is perfect (this system is almost perfect, but I’m sure there is more to do).

OS X Server and the runaway process

Yesterday I heard the fan running like mad on my Mac Mini. My Mini is my workhorse machine, serving as my media center, surveillance system, and build server. This is not uncommon, as it processes video at times, and when it builds certain projects (Swift seems to tax it more), the processor has to work harder. Normally the load is about 30% (quad-core i7), but when the fan comes on loud enough for me to notice, the load is hitting the 50-90% range. I did a little poking at the machine and saw a process called sdmd hogging the CPU. I had never heard of this process, but a quick search found some references on Apple’s support discussion threads.

I read through all the responses and turned off “Create personal folders when users connect on iOS”.

[Screenshot: File Sharing settings]

Unfortunately, that was only half the solution. The other half, which I had ignored on the first pass, was a setting in each shared folder that enables iOS sharing. I had recently added a shared item and didn’t notice the iOS setting.

[Screenshot: shared folder settings]

Once I unchecked iOS, the load started going down and the little files stopped being created. By this time, I had over 150,000 little files! These were thumbnails from my cameras collected over the last week. Using the Finder to delete the files was an exercise in waiting, so I searched for a UNIXy way to do it and found the answer: switch to the /Library/Server/ServerDocs/Data directory and issue:

sudo find . -type f -print -delete

This quickly nukes every file in the directory tree.
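If you’re curious how bad things have gotten before you delete anything, counting the files is far faster than waiting for the Finder to display them:

    cd /Library/Server/ServerDocs/Data
    # Count the files first; in my case this reported over 150,000.
    find . -type f | wc -l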

I still have mixed feelings about OS X Server; it is working pretty well for me right now, performing basic functions including Time Machine backups, but sometimes it does magic, and it takes me time to figure out how to undo that magic.

Review: Amazon Echo

[19 Jan 2015, 7:38 AM – Minor Edits]

When Apple first announced HomeKit, I was excited for an easy-to-use system for home automation. Unfortunately, or fortunately, they left the actual implementation to the manufacturers. The first HomeKit devices that came out were pretty simplistic and would let you turn lights on and off. To me, this was a bit useless. For automation to be truly valuable, there had to be rules for different things to happen based on inputs, and they had to work even if my phone wasn’t around (what good is automation if it can’t turn on lights in the middle of the night to scare someone away when you’re not home?). Now that more manufacturers are getting into automation, we’re seeing hubs such as Insteon’s incorporate HomeKit; I’m not exactly sure what it does, but it is a start.

I periodically glance at the forums for the Vera hub that I use, and I saw that someone had created a Vera HomeKit Bridge. I had some time one day, so I installed it, picked up one of the apps on the iOS App Store that had an Apple Watch component, and gave it a whirl. Telling Siri on my watch to turn a light on and off was interesting, but given how slow Siri was, it was more of a gimmick than anything else.

I was now intrigued by voice control of my automation, but it had to be more seamless and work even when I didn’t have my watch on my wrist (raising my wrist to yell at Siri to turn on a light was pretty lame). I saw that someone had created a Bridge Application for the Amazon Echo, so I started looking at the Echo. It looked very gimmicky, but with a lot of promise. This bridge, if it worked better than the HomeKit bridge, could be the next step in my home automation. The cost of the Echo was a bit more than I was willing to spend on an experiment, so I put it on my wishlist.

Much to my surprise, one Saturday evening the Echo arrived (someone bought it for me off my wishlist), and I quickly set it up, told it to discover devices (the Echo Bridge emulates a Hue Bridge), and was ready to start experimenting. I was turning lights on and off in no time. My son got a bit too excited about the Echo and basically annoyed my wife to no end; the Echo was banished to my office. After a few days, my wife let me put the Echo back in our kitchen (we have a split/tri-level house with the kitchen/living area in the middle). My son soon learned not to annoy my wife, and I started routinely using the Echo to turn lights on and off. For instance, when I go out back to put compost in the bin, I tell the Echo to turn on the back lights (unfortunately, due to some choices I made, the switch for the back lights is not near the door).

So now I had a “toy” to control my automation. In a few short weeks, the “toy” became a tool that I routinely used to turn off lights my family had left on around the house. It wasn’t until the day my wife walked upstairs, sat on the couch, and used the Echo to turn off the downstairs light that I knew the Echo was a keeper. Since we have an open floor plan with our living room adjacent to our kitchen, the Echo can “hear” us anywhere in our main living quarters, which is about 500 square feet. (Our house isn’t all that big, and the light switch was literally 5 feet from the couch.) While I initially thought that turning lights on and off was too simplistic, it is really something done all the time in a house that otherwise can’t be automated.

Other than the Echo controlling my automation, I use it occasionally to hear news and weather and we sometimes ask it questions.

The Amazon Echo has tons of potential, and if it weren’t for the Echo Bridge, it would be an expensive, seldom-used gadget for me. The bridge makes the Echo an excellent addition to my home; however, the price is a bit much for what it does. Developers are adding more “skills” to it all the time, and maybe it will come to make sense for more than just early adopters.

Some people have privacy concerns over an always-on microphone. As far as I understand it, the trigger word (“Alexa”) is processed locally, and nothing is transmitted until the trigger word is heard. Having the processing done remotely after the trigger word is said is fine with me; nothing I say to the Echo is all that interesting.

Pros

  • Voice recognition works well.
  • Microphone picks up voice from across the room.
  • Easy setup.
  • Developers are extending built in functionality.

Cons

  • Limited utility for most people.
  • High cost for a gadget.
  • May have privacy concerns.
  • Doesn’t integrate well into iOS ecosystem (tasks and reminders need IFTTT to get to the device).
  • Current skills are pretty mundane.

Summary

For me, the Amazon Echo has been an excellent addition to my home automation system. I don’t think a day goes by that we don’t use it. However, without the bridge to my Vera automation system, the Echo would go unused. The current price is a bit much without a particular use, so I wouldn’t recommend purchasing the Echo unless you have a use in mind and are willing to get accustomed to controlling things with your voice. The Echo Bridge is a great piece of software that completes the Echo; while it was designed around the Vera, anything you can control via a URL can be controlled via the bridge.

At this point, I think I’d feel like one hand was tied behind my back if I didn’t have the Echo!

Economics of Water Conservation

Last week I received my latest City of San Diego water bill; we’re billed every 2 months. Our bill has a base fee for water, a base fee for sewer, a few other fixed costs, and then the actual cost of water, which of course varies with usage. What I immediately noticed is that Tier 1 water costs $3.9183 per HCF (1 HCF = 748 gallons). We used 6 HCF, for a total of $23.51 in variable costs; the fixed costs total over $100. Just for reference, the average usage in my area is 13 HCF, so we’re doing pretty well at conserving.

While I completely understand that the fixed costs cover infrastructure and personnel, the actual water cost provides no incentive to conserve. One of the fixed fees is adjusted each year based on usage, but it is still fixed for the entire year. If I save 1 HCF of water every 2 months (a 16% reduction), I’ll reduce my bill by a whopping $3.92! If, on the other hand, I double my usage, I’ll pay only an additional $23.51, which is nothing.

What I propose is a fee structure where each HCF of water costs, say, $20, with no fixed costs; throw in a minimum fee that includes, say, 4 or 6 HCF of water, and now there is an incentive for people in my area to reduce usage. Under the current rate structure, the average user in my area pays about $155 (I think the base sewer fee is a bit higher, but I’m not sure); under my proposed $20 per HCF, the average user would pay $260. People reading this are probably saying that it is outrageous to increase water rates this much, but think about how it would encourage conservation while still covering the fixed costs of infrastructure and personnel.
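To make the comparison concrete (the roughly $104 in fixed costs is my estimate from my own bill; the other numbers are from above):

    # Average user (13 HCF every 2 months) under the current structure:
    echo "104 + 13 * 3.9183" | bc    # about $155
    # The same user under the proposed flat $20/HCF:
    echo "13 * 20" | bc              # $260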

Of course, my proposed number comes from simply taking my total bill and dividing by the number of HCF used, so it could easily be adjusted. There is probably something I’m missing here, as this simple pricing change seems like it would go a long way toward encouraging conservation.

Attempting to perfect TV viewing system

As I’ve written about in the past, I’ve gone to great lengths to record and play TV shows (I don’t torrent; I get my shows over the air). My system up until now has relied on Elgato’s EyeTV 3 software running on my Mac Mini, combined with a few scripts and now a command line program I wrote called EyeTVExporter to drive ffmpeg. The whole system has worked well, but one problem with EyeTV is that I have to screen share to the Mac Mini to add or modify schedules, and the process is far too complex to maintain in the long run. I also don’t have much faith that EyeTV will continue to be updated; the latest issue I have with it is that the export routines (either in the app or via AppleScript) convert all the audio to stereo instead of preserving the AC3 5.1 audio that is recorded with many shows and exists in the MPEG2 file that EyeTV stores internally. When I’ve tried to just export to MPEG2 with the cut marks that mark the commercials, it has garbled some of the video.

Never content with the status quo, I decided to give MythTV a try this weekend. I had tried to install it before, but the pre-canned installers failed. So this time I decided to build MythTV with MacPorts using the instructions on the MythTV wiki. Other than a few minor hiccups due to old cruft lying around, the install succeeded after a while of building. I also installed MythWeb to manage the whole thing (I changed the port on the version of Apache it installed so I didn’t have to futz with the standard OS X Server Apache install).

After the initial configuration and purchasing a 2-month trial of Schedules Direct for the guide data, I was off and running. The first step was recording. That went smoothly, and the built-in commercial skipping seemed adequate for my needs; having it built in instead of using an external component was a big win for me. The next part was trickier and required me to work on a script. I created a user job in the mythbackend with the following calling parameters:

plexexport %DIR% %FILE% %CHANID% %STARTTIMEUTC% "%TITLE%" %SEASON% %EPISODE%

I then created a script that I placed in /usr/local/bin/plexexport (chmod +x the file after placing it there). Basically, the script takes the commercial flags that MythTV inserted, converts them to a cut list, and then exports to the appropriate location for Plex to pick up. My ffmpeg settings seem to work well for 1080i and 720p video and preserve the 5.1 audio. I did find that ffmpeg didn’t stop when it was done, once turning my test video into 9 hours of output; using a little script magic and ffprobe, I cleared up that problem by explicitly telling ffmpeg the duration.
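The duration fix is only a couple of lines; roughly like this (a sketch: the paths are placeholders, and my real encoding flags go where the copy codecs are):

    # Ask ffprobe for the source duration, then cap the output with -t so
    # ffmpeg stops when the show ends instead of running on for hours.
    DURATION=$(/usr/local/bin/ffprobe -v error -show_entries format=duration \
        -of default=noprint_wrappers=1:nokey=1 "${TEMPDIR}/${FILENAME}")
    /usr/local/bin/ffmpeg -i "${TEMPDIR}/${FILENAME}" -t "${DURATION}" \
        -c:v copy -c:a copy "${TEMPDIR}/${FILENAME}.mp4"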

MythTV can run the user job right after a show records, but I want everything to happen at night, when I don’t have to hear the box if I’m in my office, so I created a simple script, based on a sample I found, that I invoke every night (I use Lingon to set it up). This script tells MythTV to locate all shows that are less than 24 hours old and to queue a commercial flagging job and a Plex export job for each. I have MythTV set up to do 1 job at a time so I don’t have to worry about the processor getting overloaded.
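The heart of that nightly script looks something like this (a rough sketch, not my exact script; it assumes the stock mythconverg database with credentials in ~/.my.cnf, and that the Plex export is UserJob1):

    # Queue commercial flagging and the Plex export for anything
    # recorded in the last 24 hours.
    mysql -N mythconverg -e \
        "SELECT chanid, DATE_FORMAT(starttime, '%Y%m%d%H%i%s')
         FROM recorded WHERE starttime > NOW() - INTERVAL 1 DAY;" |
    while read CHANID STARTTIME; do
        mythutil --queuejob Commflag --chanid "$CHANID" --starttime "$STARTTIME"
        mythutil --queuejob UserJob1 --chanid "$CHANID" --starttime "$STARTTIME"
    done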

The last piece of this puzzle is deleting old recordings automatically, and yes, I have another script to accomplish that task.

This may sound more complicated than my old setup, but before I had 4 scripts and 1 custom program doing the same work. If this all works, it will have been worth my time.

This setup lets me use a web browser to schedule shows and easily view upcoming recordings. While the web interface isn’t great, it is usable. One caveat is that mucking with some things can cause lots of problems, as the web interface does very little error checking.

Does this save me money? No; Schedules Direct is $25/year while the EyeTV guide data is $20/year, so it is $5 more. However, by undertaking this now, I’ve proven to myself that when/if EyeTV stops functioning, I have a viable alternative. Hopefully this experiment pans out.

New Car Decision

A few weeks ago I went to the San Diego International Auto Show with my father and my son to kick some tires and see if anything wowed me. We looked at the “green area” and there wasn’t much exciting there. To keep my son entertained, we went to Camp Jeep, rode in a Jeep Renegade on their indoor course, and my son climbed a rock wall. Unfortunately (for my car shopping), this was the highlight of the trip.

We went by the Audi booth and didn’t see the A3 e-tron, which was quite disappointing. Where is Audi’s commitment to the car if they didn’t bother showcasing it? We took a look at Acura and Lexus as well. The Acura RDX, which I’ve been eyeing for 9 years, is a nice looking car, but man, the gas mileage kills me. The Lexus CT200h caught my eye because of its excellent gas mileage, but the overall look isn’t enough for me to bite.

I’ve decided that the best choice for me right now is to stick with my 2003 Toyota Highlander. I’m going to keep my eye out for Apple to update its list of cars supporting CarPlay and keep re-evaluating. This whole process has honed in on what I want in a new car:

  • CarPlay – This may seem a bit like I am an Apple fanboy (OK, I am), but not having to update maps and having an interface that can be updated interests me.
  • Decent gas mileage – This means either a hybrid or a plug-in hybrid at the moment. I’m not ready for a pure electric car; we may consider one for my wife’s next car and keep my car, which is driven less, as the one that consumes gas.
  • Semi-luxurious interior – At the car show, I sat in a lot of cars. The leather in Acura, Audi, and Lexus cars seemed much nicer than that of say the Subaru and the whole cabin just seemed better put together (hard to describe, but felt nicer). I’m not saying I want a Mercedes or a Ferrari interior, just a step up from a Ford.
  • Decent looks

So I’ll wait and see what 2017 models are released and whether anything catches my eye. It’s kind of a letdown, as I’ve been looking forward to a new car for a while; however, doing nothing is the right choice.

First week with the Apple TV 4th Generation

Last Friday I received my new 4th Generation Apple TV (note to people living in Southern California…don’t pay extra for rush shipping from Apple as products shipped from Apple tend to arrive quickly) and I’ve been playing with it on and off since. For years now, I’ve been looking for the best way to handle my family’s video entertainment needs. I’ve played with a number of streaming boxes including the Apple TV 1st generation, Apple TV 2nd generation, Roku 3, and Fire TV 1st generation. With the exception of the 1st generation Apple TV, all the boxes have served some of our needs.

When my wife first wanted to watch Netflix, I bought the 2nd generation Apple TV, as watching it on our Nintendo Wii was not really something I wanted to do. At the time, we were recording shows with Elgato’s EyeTV and playing them through a Mac Mini. Having 2 devices was not ideal, so I pieced together a setup to put the EyeTV recordings into iTunes and then play them on the Apple TV. This setup worked quite well for several years.

Two years ago, when I bought a new Vizio TV that had Amazon Prime Video (and Netflix) apps built in, I tried to use them and found the UI impossible to use. I had heard good things about the Roku, so I purchased one. That left me with 2 boxes: one for Amazon Prime Video/Netflix and one for recorded TV shows. I saw that the Roku had a Plex app, so I set up Plex, changed my setup to move TV shows to Plex, and was basically back down to 1 box (except for AirPlay). Last year, I was given a Fire TV, which had Netflix and Amazon Prime Video (obviously) as well as the ability to play games. It also had a Plex app, so I was hopeful that this box could be the box, and that it would give my son the ability to learn to play some video games (yes, I know I’m corrupting my son by wanting him to play games!). Unfortunately, the 1st generation Fire TV’s UI was pretty poor and difficult to navigate. It does have “X-Ray” for Prime Video, so it has stayed in use for that as well as some games (my son likes Minecraft). I wrote about the Fire TV before.

So where did that leave me before the new Apple TV? Well, we still used the Roku 3 for most of our watching but switched to the Fire TV for games and Amazon Prime Video. The old Apple TV was only used for AirPlay. As with every streaming box I’ve tried, I was hopeful that the 4th generation Apple TV would replace all the others.

After I unpacked the Apple TV, I put it in place of my old Apple TV in my equipment rack and fired up the TV. Due to issues with RF signals, I was expecting to have to mount it on the wall, but was going to put that off for a while (it turns out the Bluetooth is good enough that I can just leave it on the top shelf of my rack). When I started the setup for the Apple TV, I couldn’t understand why moving my finger on the remote didn’t change the menus. It turns out I had the same problem as others: I was holding the remote upside down. (This continues to be a problem I have to solve.) The Apple TV is able to turn my TV on and off using HDMI-CEC, which is an awesome feature. Also, the volume buttons control my Vizio sound bar via IR, so the Apple TV remote is on track to be the only remote I need.

On day 1, we were able to watch Netflix and play a few games. I did buy the SteelSeries Nimbus game controller so that we could play games more easily. Pangea had a number of titles available, and since I had already bought them on the iPad and the games were universal purchases, I could download them at no additional charge. There were a few other games available as well. My son and I played games, and in general they work well with the game controller. No Minecraft or Goat Simulator yet, so the Fire TV still has to stay connected.

After a few days, Plex appeared on the App Store and I quickly downloaded it. It took me about a day to figure out why all my video was being transcoded when played: I had set the maximum streaming bandwidth to 10 Mbps instead of unlimited. After that, Plex seemed to work well. The only feature Plex is missing is the ability to delete episodes. I like the scrubbing, and the navigation works well. I do have to train my wife to use the remote and Plex, but that will come with time.

Now that I had a few games, Netflix, and Plex, the Apple TV could become the primary device. I unplugged the Roku (I’ll probably have to plug it back in for my wife until she gets used to the Apple TV, however). I am very close to making the Apple TV the only box we need, and I have high hopes that this will happen. However, Amazon has to get off its high horse and port its iPad Amazon Prime Video app to the Apple TV, and Minecraft and Goat Simulator have to be ported as well. (Yes, Goat Simulator is a dumb game, but my son loves it.) Given that these are all on the iPad, it shouldn’t be a monstrous effort (famous last words) to bring them to the Apple TV.

I’ve been using the Apple TV for a week and am really enjoying it; I’ve been playing Oceanhorn, watching Netflix, and watching TV shows via Plex. My son has been watching Bill Nye via Netflix and playing a few games. He loves the box because one remote turns the TV on and off and changes the volume, and he can use Siri to skip ahead; I’m not as enamored with Siri as he is. I really hope to retire the Fire TV, but Amazon Prime Video will likely prevent that from happening.

Even without 4K video (I don’t have a 4K TV, and all of my content is at most 1080p), the Apple TV, in my opinion, is the box to get for people who don’t have anything today. For people who already have an Apple TV or another box, the decision becomes a lot more difficult. The Roku boxes work really well for video; they just don’t have the ability to play many (casual) games. The Fire TV (the 1st generation, at least, before the promised software update) has a horrible UI. If someone needs or wants Amazon Prime Video, the only real option today is the Roku (sorry, AirPlay from an iPad isn’t viable). The Apple TV has tons of potential and does most of what I need it to do today. I’m quite satisfied with the purchase. (It doesn’t hurt that I managed to get a good price for my used 2nd generation Apple TV on eBay.)

When is zip not zip?

Like most experienced iOS developers, I use an automated build system. A colleague of mine and I have spent portions of the last 2 years building up our system so what we do looks like magic to others! As part of this system, we’ve written tools and put together scripts to package our application as an .ipa (iPhone application). An .ipa file is simply a zip file with the extension changed.

Well, it isn’t that simple. It appears that how the zip is created is just as important as the structure of the package. There are various flavors of zip, libraries that zip, and other tools that zip. In one of our tools, we were using a zip library. It appears that Apple made a change in iOS 9.0.2 or 9.1 that caused applications created by our tool to not install on devices. However, the problem was only present if the app was installed over the air or through iTunes; installing through Xcode’s Devices window succeeded. After an arduous day of debugging trying to determine the failure point (provisioning is usually to blame for failures, and those can be super frustrating), I switched our tool to use the command line zip (/usr/bin/zip) and, amazingly, the problem went away.
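For context, the packaging itself is trivial with the system zip; the essential steps boil down to this (a minimal sketch with codesigning omitted; MyApp.app is a placeholder):

    # An .ipa is just a zip archive with the app inside a Payload directory.
    mkdir Payload
    cp -R MyApp.app Payload/
    # -y stores symlinks as symlinks, which matters for embedded frameworks.
    /usr/bin/zip -qry MyApp.ipa Payload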

It would appear that iTunes, iOS itself, and Xcode use slightly different methods for unzipping and installing applications. Since Apple’s xcrun packaging command (PackageApplication) uses /usr/bin/zip, I think it is a safe bet. It is invoked with something like:

/usr/bin/xcrun -sdk iphoneos PackageApplication -v MyApp.app -o MyApp.ipa --sign "iPhone Distribution: Scott Gruby"

On a side note, it also appears that there is an error in the PackageApplication script found at:

/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/PackageApplication

that has:

"--resource-rules=$destApp/ResourceRules.plist");

In Mac OS X 10.10 and higher, this argument is no longer valid, so if you use this command, you need to modify the script to remove the --resource-rules argument.