• So today a staff member at one of the schools in my district told me that she didn’t understand all the financial emails she was receiving from Charles Schwab. Turns out she thought my IT status updates were marketing for brokerage services. Now that we’ve met face to face, she’s not deleting them anymore. Yay!

    I guess I need to work on my subject lines.

    PS. I am running for the Board of Directors of CUE. If you are a CUE member, don’t forget to vote.

  • Things used to be simple. I architected networks, administered servers and made sure email flowed uninterrupted. I was an IT guy and I was pretty good at it. Then one day I became a classroom teacher and my simple life in IT, focused on uptime and putting six computers in the back of every classroom, became much more complicated. I became fascinated by the art and craft of learning and how technology might improve the school experience for kids. I took a hard look at how we provided and funded technology for teachers and realized we were doing it badly. I started looking at places like Minarets High School that were pushing the boundaries of student trust and teacher empowerment, using technology not for its own sake but as an instrument capable of transforming the learning process to focus on individual students instead of teaching to the middle. I started reading books like Drive, The Tipping Point and Disrupting Class, and somewhere along the way I stopped being just an IT guy.

    You see, I am no longer as interested in servers and networks as I once was. I still see them as necessary and recognize their importance to the bigger picture, but they aren’t my main focus anymore. They are commodities with known costs that can be planned and budgeted for with a little bit of forward thinking. The same goes for most of what might be considered “traditional IT”. Plan to refresh the network every 10 years. You can budget for that. Replace the wireless network every 5 years. You can budget for that too. Replace laptops every 4 years and desktops and servers every 5, and your major support issues go away. Take advantage of free cloud-based services and hosted service offerings to significantly reduce IT support requirements. Use open source wherever possible to further control costs. Scale out student computing with cheap mobile non-Windows devices. It’s all fairly straightforward (and if you think about it, pretty boring). It’s basically a Deferred Maintenance Plan for technology, and anyone with a spreadsheet can make one (a minimal sketch follows).
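    To make the spreadsheet math concrete, here is a minimal sketch in Python. The asset counts and unit costs are made-up placeholders, not my district’s figures; only the refresh cycles mirror the ones above.

    ```python
    # Annualize each asset class over its refresh cycle.
    # Quantities and unit costs are placeholder assumptions;
    # plug in your own inventory and pricing.
    ASSETS = {
        # name: (quantity, unit_cost_usd, refresh_cycle_years)
        "network_core": (1,   80_000, 10),
        "wireless_aps": (120, 600,    5),
        "laptops":      (300, 900,    4),
        "desktops":     (400, 700,    5),
        "servers":      (10,  6_000,  5),
    }

    def annual_refresh_budget(assets):
        """Flat annualized replacement cost per asset class."""
        return {name: qty * cost / years
                for name, (qty, cost, years) in assets.items()}

    breakdown = annual_refresh_budget(ASSETS)
    for name, amount in sorted(breakdown.items(), key=lambda kv: -kv[1]):
        print(f"{name:14s} ${amount:>9,.0f}/yr")
    print(f"{'TOTAL':14s} ${sum(breakdown.values()):>9,.0f}/yr")
    ```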

    The challenge comes in convincing leadership that this is how to build a sustainable and affordable integrated technology environment for teaching and learning. Convincing folks that the thinking and funding models of the past 30 years are obsolete can get tiring, but once a district moves past this point, technology ceases to be an add-on and starts to become an enabler.

    That’s why I’m interested in moving past the discussions about technology’s merits and role in the classroom (it’s 2013, for crying out loud) to look at new pedagogies (or the old ones too long ignored) that get to the heart of learning. However, this is the domain of the “Curriculum & Instruction” world, where IT folks generally find themselves marginalized and reduced to filling equipment and software orders when schools try to spend all their leftover dollars before the end of the year.

    What I think most school leaders don’t yet see is the monumental shift that is happening right now with technology in education. For decades technology was at worst peripheral and at best something you went to a lab to learn, but in the past three years technology has infiltrated the very heart of education. Edupreneurs, brave teachers young and old and many others are transforming what it means to educate, using technology in new and powerful ways. Some, like Sugata Mitra, are showing how technology has the potential to fundamentally disrupt the foundational beliefs our current education system is built on.

    When powerful connected devices drop below $150 (which they will within the next 18 months), it will be difficult for districts to continue to say that they can’t afford one for every student. A school full of 1:1 classrooms looks completely different than today’s technology-barren classrooms. I don’t think school leaders comprehend this yet, although State Superintendent of Public Instruction Torlakson apparently does. His Ed Tech Task Force has called for one Internet-connected device for every student in CA.

    School business leaders see technology as a cost to be contained; curriculum and instruction leaders see technology as something to be defined, professionally developed and used to address specific deficiencies in learning. Technology leaders are often caught in the middle. This is true now more than ever, as both the Business and C&I people are about to have their worlds upended by the education technology tidal wave. Unfortunately for many it will be a rogue wave that catches them unprepared. There will be winners and losers, which is sad because we’re talking about kids’ futures here.

    I am currently caught between these two worlds, watching the wave come barreling in. One foot is still solidly in IT, advocating for smart infrastructure decisions and sustainable funding that minimizes support and maximizes the ability to scale out student computing. The other foot is creeping into C&I, urging teacher empowerment through technology to build lifelong learners and develop professionals with the ability to adapt to the rapid change technology is about to unleash upon them.

    I hear often that technology is just a tool, and that may be. But then so were the printing press, the pencil and the chalkboard. Systems either adapt or become obsolete and die. It’s time for school districts to recognize the technology wave is coming and adapt before it’s too late. The forward-thinking districts are taking the necessary steps: building technology sustainability into their budgets, moving past one-time technology professional development days to ongoing, continuous learning opportunities for teachers, building technology integration into Common Core implementation and bringing technology leaders to district leadership tables to treat technology as a critical strategic component in planning and operations moving forward.

    What’s your district doing to prepare for the coming wave?

  • Today I was in a Google Hangout on Air with Dr. Yong Zhao.

    Hangout With Dr. Zhao

    Dr. Zhao is author of several books about education including his latest, “World Class Learners: Educating Creative and Entrepreneurial Students” and has keynoted all over the world. I saw his keynote at ISTE12 and was totally amazed.

    Today the power of the Internet enabled my co-host (Mike Vollmert) and me to connect with a fellow educator who is passionate about our education system and our students’ futures, and to have a discussion about big ideas and issues facing education. I can’t imagine what my education would have been like if these kinds of tools had been available to me in my classes back in the day. What’s really sad is that in the majority of classrooms today, these powerful tools for connecting and learning still aren’t being used. But think about what would be possible if they were. Every kid could connect with someone who is passionate about something they are passionate about, and would have the freedom to pursue their own interests, set their own goals and discover the world in an environment of trust, support and connectedness. These are the kinds of learning experiences we should be building for our 21st Century learners. The technology is here; why aren’t we?

    You can find more interviews with big thinkers in education at http://rebootedpodcast.com

  • This is part 4 in a series of posts on my not-so-great experience inheriting a VDI infrastructure at a school district. We left off with the student virtual desktops running on the repurposed, formerly redundant hardware at the secondary site and teachers and staff running on the DO hardware. Through the first few months of school, teacher and staff performance remained an issue, mainly because the VMs themselves originally weren’t allocated enough system resources to run Windows XP SP3 well. On top of that, the C: and D: (user data) drives were continually running out of space with every Adobe or Java update. The SAN was rapidly approaching 65% utilization, which for NFS represents the beginning of the performance degradation threshold. Increasing memory and hard drive space to the level needed to improve Windows XP performance would require doubling the hardware resources, which represented a significant investment in continuing to run VDI.

    On the student desktop side, desktops were running OK off the secondary hardware while the 200 new Wyse clients slowly came online. However, we started seeing network connection errors, disconnects and View provisioning errors, probably related to running the View hosts across a 600Mbps link instead of local to the vCenter server. Student machines were also affected by the limited resources assigned to the OS, and again, increasing them to appropriate settings would require doubling the hardware. For our older machines running the VDI Blaster, we continued to see bad hard drives causing tough-to-diagnose connectivity issues.

    We also started to experience vCenter server issues with dropped connections, loss of connectivity and the vCenter service crashing. Through many troubleshooting calls with VMware support and many KB-suggested steps, we would resolve the issues for a time, but they would inevitably return and bring more problems with them each time. And then we went on Thanksgiving break and something amazing happened. The system worked fine. With 30-100 users, the system didn’t experience any of the major performance issues we were seeing under normal use. This led me to the basic conclusion that, fundamentally, VM sprawl was killing the system.

    While at the VMworld conference over the summer, I heard a VMware rep say during a presentation that Hertz, the rental car company, built a 4,000-seat VDI test infrastructure using traditional storage, just like us, and that the system hit the wall at 800 active users. Extrapolating that to our system, which was supposedly built to support 1,500 Windows XP SP3 systems with 512MB RAM and 8GB HDD, they probably hit the wall at 300, but because the district cut over all desktops in one shot, there was no wall to stop them. They had started off with an under-engineered system, overloaded it on day one, and then, because of the ease of adding new clients and the perception that thin clients were cheap for schools to purchase, continued to add clients and suffer performance problems without regard to the inevitable consequences.
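    That extrapolation is just proportional scaling, a rough heuristic rather than a sizing methodology, but the arithmetic is worth showing:

    ```python
    # Scale the Hertz anecdote to our deployment, assuming the performance
    # "wall" moves linearly with the number of provisioned seats.
    hertz_seats = 4000   # seats in Hertz's test infrastructure
    hertz_wall  = 800    # active users where their system hit the wall
    our_seats   = 1500   # seats our system was supposedly built for

    our_wall = our_seats * hertz_wall / hertz_seats
    print(our_wall)  # 300.0 -- versus the 850 active connections we were running
    ```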

    Now, running 1,200 VMs with 850 active connections, the system was continuously failing. The split scenario didn’t address the core storage issues because the drive allocations inside the VMs were not sufficient, and the connectivity issues required constant daily intervention to keep the student desktops running. Going into Winter break, something had to change. In Part 5: enter the Big Band-aid and the Great Migration plan.

  • In my previous VDI posts, I outlined the virtual desktop infrastructure I inherited and discussed the stability issues being faced. In this, part three, I will talk about redundancy, or the lack thereof, and how I was able to use some over-engineering to my advantage, if only for a little while.

    Summer was fast coming to a close, a few hundred more virtual desktops were about to come online and system performance was horrible. We had already decided not to go down the road of making the significant investments required to rebuild a VDI system capable of supporting the swelling number of desktops and providing acceptable levels of performance for staff and teachers. Desktops, after all, were a dying paradigm, but if we didn’t do something, the whole system would implode under the load of the additional desktops. Enter the promise of total system redundancy.

    The initial system design called for a redundant backup site with identical SAN and server hardware to fail over to in the event of an outage at the DO. Using VMware’s Site Recovery Manager (SRM), the VMs at the DO would automatically fail over to the secondary site and come up with only minor interruption. Email would continue to flow, users would continue to work on their virtual desktops, all would be right with the world. In fact, everyone was under the impression that this was in place when I arrived. Only it wasn’t. SRM was not working. SAN replication was not working. Digging further, I found that even had they been working, when the VMs failed over to the secondary site they would have had nowhere to go, because every site except the secondary would have been unable to reach them. There were no redundant links from the other school sites to the secondary site. There was also no backup Internet connection at the secondary site. It would have been a failover to nowhere.

    A further breakdown revealed that SRM had been configured to fail over servers only (no View virtual desktops) and was at some point actually working. However, it broke when the DO site was upgraded to 4.1 and the secondary site was not. To add insult to injury, in the course of evaluating storage upgrade options, it was discovered that the DO SAN, the one running all the district’s servers, email and virtual desktops, was purchased with only a single controller. Somehow the project moved forward with redundant firewalls, web filters and routers, but not a redundant SAN controller. So much for redundancy. As for the site connections, the redundancy plan obviously called for a network design that never materialized, leaving the district with a secondary site full of equipment, under warranty, sucking up power and AC, that was basically sitting idle with nothing to do except run two backup Active Directory servers.

    So in an unorthodox (and probably unsupported) move, I decided to harness the idle power of the secondary site to run the student virtual desktops. Because of the way the View connection brokers were set up with the DevonIT Echo server, I could not separate the student pools onto their own vCenter server in the time allotted, as much as I wanted to. Instead, I attached the hosts from the secondary site to the vCenter at the DO, moved the student master images over to the secondary SAN and reconfigured the pools for the new cluster, effectively running all of the student virtual desktops on the secondary site hardware. This re-use of the redundant hardware saved us for a while. We found out later that the combined IOPS between the two SANs in this configuration was running at 18,000-20,000, which would have easily brought the DO SAN to its knees.
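    For a sense of scale, here is the back-of-the-envelope math on those IOPS numbers. Using the ~850 active connections mentioned later in this series as the load source is my assumption:

    ```python
    # Observed combined IOPS spread across the desktops generating load.
    # VDI sizing guides of the era budgeted roughly 10-25 IOPS per
    # Windows XP desktop at steady state, so these figures are plausible.
    combined_iops   = (18_000, 20_000)  # measured across the two SANs
    active_desktops = 850               # approximate active connections

    for iops in combined_iops:
        print(f"~{iops / active_desktops:.0f} IOPS per active desktop")
    # ~21-24 IOPS each -- far more than a single-controller SAN
    # also carrying the district's servers could absorb alone
    ```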

    However, even over the 600Mbps connection from the DO to the secondary site, we started to see problems. Pool errors during provisioning pointed to communication problems, so we disabled the “stop provisioning on error” setting. Incidents of student computers disconnecting, either because of the increased sensitivity of PCoIP or because of the old hardware being used as clients, increased. Under the increased student load, vCenter started becoming unresponsive. Staff and teacher pools continued to have performance problems even with the students running on separate hardware. Despite this, the system did not totally crash and burn. It limped into the new school year with no money invested and another 200 desktops online. Or so we thought.

    In part 4, how 200 new desktops turned into 300 and what happened when schools actually started using them.

  • This is the second post in a multi-part series about my experience with VDI over the past 10 months. In the first post I laid out the VDI situation I inherited. To recap: post-VMworld in August, I realized that I had a VDI system that was under-spec’d, not well implemented or configured, a major version out of date (4.1u1), badly over-utilized (suffering from VM sprawl, more on that later) and providing users with a very poor computing experience. So I set out to develop a plan to fix it, as any good IT person would do.

    Initially I was looking for ways to stabilize the system and improve the end user experience. Never mind that the desktop paradigm for teachers and students is horribly outdated in the age of anywhere, anytime learning. Never mind that tying teachers to a desktop fixed in space makes building collaborative Professional Learning Communities around student assessment data basically impossible. And never mind that virtual desktops unable to run Skype or Google Hangouts or webcams, that can’t play videos or connect to other classrooms over the Internet, or to authors or NASA, that continuously run out of hard drive space with every Adobe Flash or Java update, do not empower teachers or students with 21st Century learning abilities and are not the kind of computing environments we should be building for teachers today.

    I was looking for cost-effective ways to get the system back to what it was designed to do, which was provide a platform for teachers to take attendance, enter grades, check email and marginally support student computing. It turns out cost-effective and VDI don’t really play well together. Just to stabilize the system, do a health check and migrate to 5.0 was a six-figure prospect. Adding the hardware to increase RAM and HDD capacity in guest VMs: more six figures. Fixing the storage problems with something better suited to the peak demands of 1,200 virtual desktops: more six figures. The management software needed to really see what was going on with the complex moving parts? Only five figures, but with a high recurring cost. Replacement endpoint devices for teachers: six figures yet again. The numbers kept adding up, and no matter how I tried to slice and dice them, the conclusion was that getting the system stable and viable over the next three years was going to be expensive. Certainly much more than the low-cost system it was initially pitched as.

    There was another factor I was considering when looking at price. Support had been pre-paid for five years, with the VMware renewal just one budget year away and the SAN and server renewals due the year after. On top of that, the server hardware and existing SAN would soon be five years old. Five years, for critical infrastructure that 99% of all the desktops in the district were running on. Now, I have run servers out to seven and even eight years, but never critical systems. Five years has always been my end of life for critical production servers, and in this case the equipment had experienced two major high-heat events when the air conditioning failed in the server room. In one instance, the thermometers were pegged at 120 degrees and the SAN did not shut itself down. Not the environment that lends itself to extending the life of computer hardware.

    Factor in a significant investment to make the VDI system right, a critical lack of sysadmin capacity and skill level (I’ve learned more about VDI in the past 10 months than I care to know; it’s basically my second job) and the prospect of significant support renewal costs on the horizon, and the only cost-effective solution was obvious: scale back the number of users to a point where the existing hardware could support decent performance and phase out the VDI system over time. We would make the best use of the investment that had been made, but not throw more money into an outdated paradigm that we weren’t equipped to support and couldn’t afford to maintain over the long term. But this would take time. Time we did not have.

    Storage was the major issue. With several schools bringing new thin clients online over the summer (purchases already in the pipeline when I arrived), we were looking at a total system collapse if we didn’t do something. The solution presented itself in an unexpected place. In part three, we talk redundancy!

  • I sat down to write a post about storage solutions and my recent decision to purchase a Nimble Storage array, but I wanted to properly address why I was looking for new storage in the first place, and I didn’t want a simple post to turn into an 8,000-word Greek tragedy. So let me set the stage with the (multi-part) backstory, and I’ll write the storage post a bit later.

    Almost 10 months ago today, I stepped into a new district and inherited a VDI infrastructure that on paper most IT people dream about. Lots of Dell chassis and blades, big iron SANs, VMware View, redundancy. The whole nine yards. At least on paper. You see, the district, back in 2009, decided to be an early adopter of VDI. Rather than pilot a few dozen users and scale up, they went all in and cut over every user in the district virtually overnight. And they did it with a vendor that had never implemented VDI on such a scale. To be fair, I don’t think very many had back then. Suffice it to say, there were issues. Upon my arrival I found a system suffering from major performance problems with many different causes.

    The traditional SAN storage, which pretty much everyone now acknowledges is the critical factor in VDI desktop environments, was an obvious bottleneck. The system was also suffering from a major case of VM sprawl. More and more client machines had been added without consideration for server-side capacity. After all, adding VMs was so easy in the new View environment. Additionally, for the school sites, adding clients had become a cheap proposition: sub-$400 Wyse thin clients, or free in the case of donated desktops running thin client software (we are using DevonIT VDI Blaster).

    As if all that were not enough, in planning for the initial hardware resources, the absolute bare minimum requirements for memory and hard drive space were used. The guest VMs were set up to run Windows XP SP3 with 512MB of RAM and an 8GB HDD. As you can imagine, this caused operating system performance issues inside the VMs, in addition to the external storage performance issues with the SAN. In a perfect world, we would simply allocate more resources to each VM; however, a lack of forward planning meant that the original hardware purchased was just enough to meet the initial guest VM requirements. We did not have enough host memory or SAN space to provide additional system resources to the Windows XP clients, forget about trying to do an upgrade to Windows 7.
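    To put the shortfall in rough numbers: the per-VM allocations are the actual build figures above, while the doubled targets and the 1,200-VM fleet size are my approximations from elsewhere in this series.

    ```python
    # Aggregate resource math for the fleet, before linked-clone savings
    # or memory overcommit (both would reduce the raw totals somewhat).
    vms = 1200
    ram_gb_now,  ram_gb_target  = 0.5, 1.0   # XP SP3 minimum vs. usable
    disk_gb_now, disk_gb_target = 8,   16

    print(f"Host RAM:  {vms * ram_gb_now:,.0f} GB now -> {vms * ram_gb_target:,.0f} GB needed")
    print(f"SAN space: {vms * disk_gb_now:,.0f} GB now -> {vms * disk_gb_target:,.0f} GB needed")
    # 600 -> 1,200 GB of RAM and 9,600 -> 19,200 GB of disk:
    # roughly double the hardware, just to make XP comfortable.
    ```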

    A cost-saving decision that haunts us to this day (and there were others) was to re-use old computer hardware. Those machines are now anywhere from 8 to 12 years old. Not only do hardware failures among these systems present themselves as intermittent connection and stability problems, but many of them can only run the RDP protocol. Audio and video playback, something that has become critical to classroom instruction over the last few years, is painful if not downright impossible under RDP. This severely limits the options teachers and students have for accessing 21st Century learning resources.

    There are also many moving parts in the VDI system. There is a SQL Server machine hosting the databases for vCenter and View. I have had to dust off my rusty database server skills to fix major downtime-causing issues with both the SQL Server itself and the individual databases. There are also two connection servers providing the View connection brokering. There have been several issues with both of these; the most interesting was corruption in the local ADAM database, which caused all kinds of odd behavior with our View desktop pools. The single vCenter server is managing both ESX server hosts and View hosts and often appears to “pause” under the heavy task load of serving up 1,400+ available VMs with over 800 active connections. After a particularly bad power outage, when all the systems went down hard, two View hosts appeared perfectly fine but, once active in their respective clusters, caused all kinds of havoc with desktop provisioning. Active Directory and networking also play a pivotal role in the system, and on more than one occasion both have thrown a wrench into the works in one way or another.

    By now you may be thinking, “What’s the big deal? IT departments spend lots of time keeping VDI running. Isn’t that what the IT department should be doing?” No. We should not. Not with a staff of three, including me. After attending the VMworld conference this summer, I was struck by how often I heard “storage team”, “server team” and “database team” in conversations about supporting VDI. These were people talking about 200-500 desktop deployments. There I was with 1,200 (at the time) VMs thinking, “Teams? I’ve got me, a network engineer and a desktop tech. There are no teams!” With the limited staff available to me and the many moving parts, complex enterprise moving parts I might add, keeping the system running was an exercise in extreme firefighting. We had no time to be proactive, and when the system hiccuped, every user in the district was affected. It was an untenable situation to find oneself in, but that is where I was after coming back from VMworld: hit with the realization that I had a hugely complex system that had not been set up well and was failing on many levels.

    What was I to do? The answer, perhaps, to come later in Part 2.

  • In the continuing saga of my iPad mini vs. Nexus 7 use, I’ve come up against another issue: standby time. I’m using both devices daily now. The Nexus 7 is my breakfast table news reader. I spend about 25 minutes in the morning on Flipboard reading the headlines before I set it down for the day. I’m using the iPad mini at night for watching video (most recently the Ray Mears Bushcraft series on YouTube), which I do for about an hour. I then put it in my backpack, where it usually spends the day at work.

    Battery

    What I’ve found is that with these use cases, the Nexus 7 runs out of juice within two days, even with minimal use, while the iPad mini can go three to four days without requiring a charge. In fact, I constantly find myself picking up the Nexus 7 in the morning and getting the 13% battery notice, or on a few occasions finding that it has turned itself off, and when I power it on, it immediately shuts down again. I’ve yet to have that experience with the iPad mini. Even when I get down to 20% and then 10%, I can still make it through a video before plugging it in for the night.

    A few weeks ago I took the kids skiing in wireless no man’s land and left the iPad on the dresser with around 60% battery. When we came home after being away for three days, it still had over 50% left. The Nexus 7, which was half charged as well, was completely dead. I’m also seeing the same thing with my kid’s iPad: she uses it for 20-30 minutes daily and we only have to charge it maybe once a week.
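    Worked out as rough standby drain rates (the percentages are approximate, and the Nexus figure is only a lower bound since it was already dead when we got back):

    ```python
    # Standby drain over the three-day ski trip.
    days = 3
    ipad_drain  = (60 - 50) / days  # ~3.3% per day
    nexus_drain = (50 - 0)  / days  # >=16.7% per day (dead on return)

    print(f"iPad mini: ~{ipad_drain:.1f}%/day, Nexus 7: >={nexus_drain:.1f}%/day")
    # Roughly a 5x difference in standby drain, consistent with the
    # two-day vs. four-day experience above.
    ```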

    Standby time is one of those things I’m really starting to appreciate in daily use of these tablets and Apple seems to be doing it better than anyone else at the moment.

  • Today was day one of the California League of Schools (CLS) K-12 Common Core, English Learners & Technology Conference in Monterey, CA. This is my second year (maybe third, it’s late, OK?) attending as a presenter, and unlike other EdTech conferences, the focus here is not as “tech heavy”. Today’s keynote by Dr. Kate Kinsella is a perfect example. None of the strategies or topics presented required technology to implement. However, that did not stop my mind from going into overdrive thinking about all the ways technology could be integrated into teaching academic language, which was the main topic of the keynote.

    I am not an English Teacher. I don’t even play one on TV, so I found the keynote presentation about Common Core and English Language Arts fascinating. I hadn’t given much thought to all that goes into teaching kids English fluency. The closest experience I’ve had has been watching Kid1 spend every waking moment with a book glued to her face since she was old enough to read (and I don’t remember when/how that happened exactly) and hearing Kid2, now just over two, start using complete sentences and emulating her big sister’s fascination with books. So my understanding of ELA instruction is mighty thin.

    I’ve known the Common Core was coming for some time and realized early on that it harbored big changes to what classroom instruction should/would look like (that’s why I pushed so hard for modern teacher tech and 1:1 student computing at Le Grand UHSD) but this morning I came away with a clearer picture of just how big the hurdle for ELA (and all teachers actually) is about to become. Here are some of my notes from the session:

    • Students are going to be required to read more informational text, with a much higher level of academic vocabulary than found in the old standards and much more challenging than what is currently tested under the CST.
    • Students are going to have to learn to write differently, in the form of academic summary vs. what they “liked” about a text.
    • “The New Basic” will be Far Below Basic (FBB) under Common Core, implying that students who score Basic on the current CST tests will struggle under the new Common Core assessments and score lower than they do now.
    • Implementing Common Core successfully does not mean doing what we’ve been doing, only better, but looking at changing what we’re doing altogether.
    • When planning lessons, it is no longer enough to ask students what they think about the objectives or for their ideas on them. Students must be able to answer with justification, evidence and conclusions and explain why they answered the way that they did.
    • Group work is overdone and poorly executed. Group work and partner work can be effective when used with structured procedures, scaffolding and repetition.
    • High-utility vocabulary will be important to students’ academic success.

    I was impressed with how Dr. Kinsella modeled her instructional methodologies throughout the session with active audience participation. She repeatedly stressed teaching career- (and college-) appropriate communication. Basically, these are the soft skills that employers and universities complain students graduating high school don’t have.

    A random thought that popped into my head at one point was, “It sounds like she wants to make kids act and sound like little academics!” And I suppose she does. I’m curious to know what Sir Ken Robinson’s take on this approach would be, since Common Core and ELA instruction tailored around informational text and strict academic language would seem to further drive creativity and play out of our classrooms. But then again, Dr. Kinsella did seem to think Kindergarten teachers posed a particular challenge, and I’m quite fond of the idea that all school should look more like Kindergarten.

    We were provided an excellent 58-page handout (yes, 58 pages!) that I will be sharing with my Ed Services department when I get into the DO on Monday. While technology was mostly absent, save the keynote and video presentations used, it was an informative and thought-provoking opening. Common Core is coming and things are going to change. That much is certain. Those that have recognized this and have already started adapting are poised to give their students a distinct advantage in preparedness for what awaits them beyond school. For the rest, it could get ugly.

    What do you think? Are informational text and academic vocabulary the way forward for preparing kids for the unknown?

    PS. Tomorrow at 2:30 I present:

    Small School Big Tech – The 1:1 Challenge

    iPads, netbooks, Chromebooks, MacBooks, tablets, apps, Wifi, cloud, Google Apps. What’s a school to do? How do we scale from 60 to 600 to 6,000 devices? We’ll talk about strategies for leveraging free and open source resources to minimize infrastructure costs and maximize classroom technology from a district perspective. Where are we spending our limited technology dollars? Build a five-year tech budget with a ten-year vision. Have a plan! What does the future of edtech look like, and what’s important to invest in now? We’ll discuss key areas to focus on for building a 21st Century technology footprint for today and tomorrow.

    I should probably start working on the slide deck…

  • I don’t think the iPad mini likes me very much. This is an update regarding the picture quality of the iPad mini when used as a doc cam. Upon further research, it turns out the rear camera on the mini should be just as good as the iPad’s (3rd and 4th Gen), so I have no idea why I was getting such pixelated images when zooming, compared to my old iPad (3rd Gen), during my doc cam testing. I decided to run the test again. Here is a picture of the setup:

    iPad Doc Cam

    And the results:

    iPad 3rd Gen Full Zoom

    iPad 3rd Gen

    iPad mini Full Zoom (From same height, with same lighting and pointed at the same document)

    iPad mini

    You can see the iPad mini image is much more pixelated. This is not a function of the picture upload or of encoding for the web. This is how the two pictures look on the iPads. You’ll notice how crisp the camera icons are in the iPad mini screenshot above.

    So what am I doing wrong here? Shouldn’t the iPad mini have better picture quality than my old iPad 3? I’m staying away from the mini for teachers mainly because of this issue, and now I’m a little concerned, given that they should all have the exact same cameras. Anyone care to test this with a 4th Gen iPad?