There is no box

thinking out loud about technology, education and life

  • The Executive Briefing Center. A magical place where customers go to become enamored with products or ecosystems or solutions. Cisco, Brocade, Apple, Google and Microsoft. I’ve been to several of these corporate spaces and I’m struck by how different each one is.

    Cisco’s was over-the-top impressive. Like visiting Oz. But look behind the curtain and you’ll find a room full of boxes, wires and network engineers making it work. They wowed with the tech but the reality isn’t so nice and clean. Superintendents beware.

    Brocade was very straightforward. Nice facility, down to business. No giant heads or hidden curtains, just good clean big-iron hardware. I think they tell it how it is. Tech Directors, ask your questions.

    Apple. Clean and crisp and focused. Just like their products. Polished presentations on point and showcasing the ecosystem. Very tailored to the audience. Impressive, most impressive. CBOs, watch your checkbooks.

    Google. A startup environment with stacks of extra chairs along the wall and round tables. Loosey goosey and organic. Or perhaps just youthful and inexperienced. I have a feeling that the next briefing will be iteratively better and different just like their products. Go back often to stay up to date.

    Microsoft. Winding spaces, with stairs and hallways and doors everywhere. Chaperones in the halls. Almost locked down but not quite. An air of restriction in movement and options. A reflection of corporate culture evident in their products. The space feels like being inside of Windows. I’m not sure who this space is for.

    It’s interesting to me how clearly these corporate cultures are reflected in their spaces. How do the spaces in our schools reflect school culture and student outcomes, and absent billions of dollars, what can we do about it?

  • In a perfect EdTech world, I would give every teacher an ultrabook running Ubermix, an iPad with AppleTV and a Projector or other Large Format Display (LFD) device. Perhaps even two. This would be the basic “Technology Package”. I’d wrap it around Google Apps for Education and the web. Then I’d throw in a classroom set of student devices; Chromebooks, iPads, Ubermixed notebooks, Nexus 7 tablets or BYOD devices and shake well. Windows and Office would be things learned about in history books as part of the first great wave of personal computing.

    Absent too would be printers. The bane of Help Desks everywhere. When everyone has a device, printing becomes a throwback to a different era. A muscle memory that must be excised through conscious and concerted effort. Packaged Curriculum would also be a thing of the past, replaced with teacher generated and curated content, projects, inquiry, search and the web. With a device for every student, they would own their learning, top to bottom.

    Classroom technology doesn’t have to be expensive or difficult or restricted. It doesn’t have to conform to the old business norms of yesterday. It just requires a different way of thinking about education technology and what we want to accomplish in the classroom. Free your OS and the rest will follow.

  • RDP in Chrome

    It’s finally happened! Someone (2X, specifically) has made a true HTML5 RDP client for Chrome. No public gateways, no server-side clients. Just an RDP client in the browser. I think I am in heaven. Now I really can use a Chromebook and get some Windows Admin work done too. Check it out – http://www.2x.com/rdp-client/chrome/?c=1

  • Think Differently About Your IT Spending:

    EdTech Free


  • So today a staff member at one of the schools in my district told me that she didn’t understand what all these Financial emails were that she was receiving from Charles Schwab. Turns out she thought my IT status updates were marketing for brokerage services. Now that we’ve met face to face, she’s not deleting them anymore. Yay!

    I guess I need to work on my subject lines.

    PS. I am running for the Board of Directors of CUE. If you are a CUE member, don’t forget to vote.

  • Things used to be simple. I architected networks, administered servers and made sure email flowed uninterrupted. I was an IT guy and I was pretty good at it. Then one day I became a classroom teacher and my simple life in IT, focused on up time and putting six computers in the back of every classroom, became much more complicated. I became fascinated by the art and craft of learning and how technology might improve the school experience for kids. I took a hard look at how we provided and funded technology for teachers and realized we were doing it badly. I started looking at places like Minarets High School that were pushing the boundaries of student trust and teacher empowerment, using technology not for its own sake but as an instrument capable of transforming the learning process to focus on individual students instead of teaching to the middle. I started reading books like Drive, The Tipping Point and Disrupting Class and somewhere along the way I stopped being just an IT guy.

    You see, I am no longer as interested in servers and networks as I once was. I still see them as necessary and recognize their importance to the bigger picture but for me they aren’t my main focus anymore. They are commodities with known costs that can be planned and budgeted for with a little bit of forward thinking. The same goes for most of what might be considered “traditional IT”. Plan to refresh the network every 10 years. You can budget for that. Replace the wireless network every 5 years. You can budget for that too. Replace laptops every 4 years and desktops and servers every 5 and your major support issues go away. Take advantage of free cloud-based services and hosted service offerings and reduce the support requirements of IT significantly. Use open source wherever possible to further control costs. Scale out student computing with cheap mobile non-Windows devices. It’s all fairly straightforward (and if you think about it, pretty boring). It’s basically a Deferred Maintenance Plan for technology and anyone with a spreadsheet can make one.
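The spreadsheet version of that Deferred Maintenance Plan is simple enough to sketch: annualize each asset class by dividing its replacement cost by its refresh cycle. All asset counts and unit costs below are hypothetical examples, not figures from any actual district budget.

```python
# Sketch of a technology "Deferred Maintenance Plan": for each asset
# class, the yearly set-aside is (quantity * unit cost) / refresh years.
# Every number here is a made-up illustration.

assets = [
    # (name, quantity, unit_cost, refresh_years)
    ("Network core/switching", 1, 250_000, 10),
    ("Wireless network",       1, 100_000, 5),
    ("Teacher laptops",        400,    800, 4),
    ("Desktops",               300,    600, 5),
    ("Servers",                10,   6_000, 5),
]

def annual_budget(assets):
    """Return the yearly set-aside needed to refresh everything on cycle."""
    return sum(qty * cost / years for _, qty, cost, years in assets)

print(f"Annual refresh set-aside: ${annual_budget(assets):,.0f}")
```

The point of the exercise is that once each refresh cycle is written down, the technology budget becomes a single predictable number instead of a series of surprise capital requests.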

    The challenge comes in convincing leadership that this is how to build a sustainable and affordable integrated technology environment for teaching and learning. Convincing folks that their thinking and funding models for the past 30 years are obsolete can get tiring but once a district moves past this point, technology ceases to become an add-on and starts to become an enabler.

    That’s why I’m interested in moving past the discussions about technology’s merits and role in the classroom (it’s 2013 for crying out loud) to look at new pedagogies (or the old ones too long ignored) that get to the heart of learning. However, this is the domain of the “Curriculum & Instruction” world where IT folks generally find themselves marginalized and reduced to filling equipment and software orders when schools try to spend all their leftover dollars before the end of the year.

    What I think most school leaders don’t yet see is the monumental shift that is happening right now with technology in education. For decades technology was peripheral at worst and something to go to a lab to learn at best, but in the past three years technology has infiltrated the very heart of education. Edupreneurs, brave teachers young and old and many others are transforming what it means to educate using technology in new and powerful ways. Some, like Sugata Mitra, are showing how technology has the potential to fundamentally disrupt the foundational beliefs that our current education system is built on.

    When powerful connected devices reach sub-$150 (which they will within the next 18 months), it will be difficult for districts to continue to say that they can’t afford one for every student. A school full of 1:1 classrooms looks completely different than today’s technology-barren classrooms. I don’t think school leaders comprehend this yet, although State Superintendent of Public Instruction Torlakson apparently does. His Ed Tech Task Force has called for one internet-connected device for every student in CA.

    School business leaders see technology as a cost to be contained; curriculum and instruction leaders see technology as something to be defined, professionally developed and used to address specific deficiencies in learning. Technology leaders are often caught in the middle. Now more than ever this is true, as both the Business and C&I people are about to have their worlds upended by the education technology tidal wave. Unfortunately for many it will be a rogue wave that catches them unprepared. There will be winners and losers, which is sad because we’re talking about kids’ futures here.

    I am currently caught between these two worlds, watching the wave come barreling in. One foot is still solidly in IT, advocating for smart infrastructure decisions and sustainable funding that minimize support and maximize the ability to scale out student computing. The other foot is creeping into C&I, urging teacher empowerment through technology to build lifelong learners and develop professionals who will have the ability to adapt to the rapid change that technology is about to unleash upon them.

    I hear often that technology is just a tool, and that may be. But then so was the printing press, the pencil and the chalkboard. Systems either adapt or become obsolete and die. It’s time for school districts to recognize the technology wave is coming and adapt before it’s too late. The forward-thinking districts are taking the necessary steps: building technology sustainability into their budgets, moving past one-time technology professional development days to ongoing, continuous learning opportunities for teachers, building technology integration into Common Core implementation and bringing technology leaders to district leadership tables to start looking at technology as a critical strategic component in planning and operations moving forward.

    What’s your district doing to prepare for the coming wave?

  • Today I was in a Google Hangout on Air with Dr. Yong Zhao.

    Hangout With Dr. Zhao

    Dr. Zhao is author of several books about education including his latest, “World Class Learners: Educating Creative and Entrepreneurial Students” and has keynoted all over the world. I saw his keynote at ISTE12 and was totally amazed.

    Today the power of the Internet enabled my co-host (Mike Vollmert) and me to connect with a fellow educator who is passionate about our education system and our students’ futures and have a discussion about big ideas and issues facing education. I can’t imagine what my education would have been like if these kinds of tools had been available to me in my classes back in the day. What’s really sad is that in the majority of classrooms today, these powerful tools for connecting and learning still aren’t being used. But think about what would be possible if they were. Every kid could connect with someone who was passionate about something they were passionate about and would have the freedom to pursue their own interests, set their own goals and discover the world in an environment of trust, support and connectedness. These are the kinds of learning experiences we should be building for our 21st Century learners. The technology is here; why aren’t we?

    You can find more interviews with big thinkers in education at http://rebootedpodcast.com

  • This is part 4 in a series of posts on my not so great experience inheriting a VDI infrastructure at a school district. We left off with the student virtual desktops running on the repurposed secondary-site hardware (the equipment originally intended for redundancy) and teachers and staff running on the DO hardware. Through the first few months of school, teacher and staff performance remained an issue, mainly because the VMs themselves originally weren’t allocated enough system resources to run Windows XP SP3 well. On top of that, the C: and D: (user data) drives were continually running out of space with every Adobe or Java update. The SAN was rapidly approaching 65% utilization, which for NFS represents the beginning of the performance degradation threshold. Increasing memory and hard drive space to the level needed to improve Windows XP performance would require doubling the hardware resources, which represented a significant investment in continuing to run VDI.
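The 65% NFS threshold is the kind of thing worth watching programmatically rather than discovering the hard way. A minimal sketch of that check, with hypothetical datastore names and capacities (the real SAN volumes and sizes aren’t in the post):

```python
# Sketch: flag datastores at or past the ~65% utilization mark, the
# point where (per the post) NFS performance degradation begins.
# Datastore names and capacities are hypothetical illustrations.

DEGRADATION_THRESHOLD = 0.65

datastores = {
    # name: (used_gb, capacity_gb)
    "do-san-vol1": (6_600, 10_000),
    "do-san-vol2": (4_100, 10_000),
}

def at_risk(datastores, threshold=DEGRADATION_THRESHOLD):
    """Return {name: utilization} for datastores at/over the threshold."""
    return {name: used / cap
            for name, (used, cap) in datastores.items()
            if used / cap >= threshold}

for name, util in at_risk(datastores).items():
    print(f"WARNING: {name} at {util:.0%} utilization")
```

In a real deployment the used/capacity numbers would come from the storage array or vCenter rather than a hard-coded dict, but the alerting logic is the same.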

    On the student desktop side, desktops were running OK off of the secondary hardware while the 200 new Wyse clients slowly came online. However, we started seeing network connection errors, disconnects and View provisioning errors, probably related to running the View hosts across a 600 Mbps link instead of local to the vCenter server. Student machines were also affected by the limited resources assigned to the OS, and again, increasing them to the appropriate settings would require doubling the hardware. For our older machines running the VDI Blaster, we continued to see bad hard drives causing tough-to-diagnose connectivity issues.

    We also started to experience vCenter server issues with dropped connections, loss of connectivity and the vCenter service crashing. Through many troubleshooting calls with VMware support and trying many KB-suggested steps, we would resolve the issues for a time, but they would inevitably return and bring more problems with them each time. And then we went on Thanksgiving break and something amazing happened. The system worked fine. With 30-100 users, the system didn’t experience any of the major performance issues we were seeing under normal use. This led me to the basic conclusion that VM sprawl was fundamentally killing the system.

    While at the VMWorld conference over summer, I heard a VMware rep during a presentation say that Hertz, the rental car company, built a 4,000-seat VDI test infrastructure using traditional storage, just like us, and that the system hit the wall at 800 active users. Extrapolating that out to our system, which was supposedly built to support 1,500 Windows XP SP3 systems with 512MB RAM and 8GB HDDs, they probably hit the wall at 300, but because the district cut over all desktops in one shot, there was no wall to stop them. They had started off with an under-engineered system, overloaded it on day one, and then, because adding new clients was easy and thin clients were perceived as inexpensive for schools to purchase, continued to add clients and suffer performance problems without regard for the inevitable consequences.
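That extrapolation is just a ratio, and it assumes (roughly) that load scales linearly with provisioned seats; written out with the numbers from the Hertz anecdote and our 1,500-seat design:

```python
# Back-of-the-envelope "performance wall" extrapolation, assuming the
# reference deployment's wall scales linearly to our seat count.

reference_seats = 4000   # Hertz VDI test infrastructure (as reported)
reference_wall = 800     # active users where it hit the wall

our_seats = 1500         # what our system was supposedly built for

wall_ratio = reference_wall / reference_seats        # 0.2
our_estimated_wall = our_seats * wall_ratio          # 300 active users

print(f"Estimated wall: ~{our_estimated_wall:.0f} active users")
```

A crude heuristic, but it matches the observed behavior: the system was fine under holiday-break loads of 30-100 users and fell over well below its nominal capacity.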

    Now, running 1,200 VMs with 850 active connections, the system was continuously failing. The split scenario didn’t address the core storage issues, because the drive allocations inside the VMs were not sufficient, and the connectivity issues required constant daily intervention to keep the student desktops running. Going into Winter break, something had to change. In Part 5: enter the Big Band-aid and the Great Migration plan.

  • In my previous VDI posts, I outlined the virtual desktop infrastructure I inherited and discussed the stability issues being faced. In this, part three, I will talk about redundancy, or the lack thereof, and how I was able to use some over-engineering to my advantage, if only for a little while.

    Summer was fast coming to a close, a few hundred more Virtual Desktops were about to come on line and system performance was horrible. We had already decided not to go down the road of making the significant investments required to rebuild a VDI system capable of supporting the swelling number of desktops and providing acceptable levels of performance for staff and teachers. Desktops after all were a dying paradigm but if we didn’t do something, the whole system would implode under the load of the additional desktops. Enter the promise of total system redundancy.

    The initial system design called for a redundant backup site with identical SAN and server hardware to fail over to in the event of an outage at the DO. Using VMware’s Site Recovery Manager (SRM), the VMs at the DO would automatically fail over to the secondary site and come up with only minor interruption. Email would continue to flow, users would continue to work on their virtual desktops, all would be right with the world. In fact, everyone was under the impression that this was in place when I arrived. Only it wasn’t. SRM was not working. SAN replication was not working. Digging further, I found that even had they been working, when the VMs failed over to the secondary site, they would have had nowhere to go, because no site other than the secondary site itself would have been able to reach them. There were no redundant links from the other school sites to the secondary site. There was also no backup Internet connection at the secondary site. It would have been a failover to nowhere.

    A further breakdown revealed that SRM had been configured to fail over servers only (no View virtual desktops) and was at some point actually working. However, it broke when the DO site was upgraded to 4.1 and the secondary site was not. To add insult to injury, in the course of evaluating storage upgrade options, it was discovered that the DO SAN, the one running all the district’s servers, email and virtual desktops, was purchased with only a single controller. Somehow the project moved forward with redundant firewalls, web filters and routers, but not a redundant SAN controller. So much for redundancy. As for the site connections, the redundancy plan obviously called for a network design that never materialized, leaving the district with a secondary site full of equipment, under warranty, sucking up power and AC, that was basically sitting idle with nothing to do except run two backup Active Directory servers.

    So in an unorthodox (and probably unsupported) move, I decided to harness the idle power of the secondary site to run the student virtual desktops. Because of the way the View connection brokers were set up with the DevonIT echo server, I could not separate out the student pools onto their own vCenter server in the time allotted, as much as I wanted to. Instead, I attached the hosts from the secondary site to the vCenter at the DO, moved the student master images over to the secondary SAN and reconfigured the pools for the new cluster, effectively running all of the student virtual desktops on the secondary site hardware. This re-use of the redundant hardware saved us for a while. We found out later that the combined IOPS between the two SANs in this configuration was running at 18,000-20,000, which would have easily brought the DO SAN to its knees.
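That 18,000-20,000 IOPS figure is easy to sanity-check. Assuming each light-use Windows XP desktop generates somewhere in the mid-teens of IOPS at steady state (a common rule-of-thumb range, not a measurement from this system), 1,200 VMs land right in the observed band:

```python
# Rough aggregate-IOPS estimate for a VDI cluster. The per-desktop
# range is an assumed rule of thumb, not measured data from this system.

vm_count = 1200
iops_per_vm_low, iops_per_vm_high = 15, 17  # assumed steady-state per desktop

low = vm_count * iops_per_vm_low      # 18,000
high = vm_count * iops_per_vm_high    # 20,400

print(f"Estimated aggregate demand: {low:,}-{high:,} IOPS")
```

Estimates like this, made before cutover, are exactly how an under-sized SAN gets caught on paper instead of in production.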

    However, even over the 600 Mbps connection from the DO to the secondary site, we started to see problems. Pool errors during provisioning pointed to communication problems, so we disabled the “stop provisioning on error” setting. Incidents of disconnecting student computers, either because of the increased sensitivity of PCoIP or because of the old hardware being used as clients, increased. Under the increased student load, vCenter started becoming unresponsive. Staff and teacher pools continued to have performance problems even with the students running on separate hardware. Despite this, the system did not totally crash and burn. It limped into the new school year with no money invested and another 200 desktops online, or so we thought.

    In part 4, how 200 new desktops turned into 300 and what happened when schools actually started using them.

  • This is the second post in a multi-part series about my experience with VDI over the past 10 months. In the first post I laid out the VDI situation I inherited. To recap the situation: post-VMWorld in August, I realized that I had a VDI system that was under-spec’d, not well implemented or configured, was a major version out of date (4.1u1), was badly over-utilized (suffering from VM sprawl, more on that later) and was providing users with a very poor computing experience. So I set out to develop a plan to fix it, as any good IT person would do.

    Initially I was looking for ways to stabilize the system and improve the end user experience. Never mind that the desktop paradigm for teachers and students is horribly outdated in the age of anywhere, anytime learning. Never mind that tying teachers to a desktop fixed in space makes building collaborative Professional Learning Communities around student assessment data basically impossible. And never mind that virtual desktops unable to run Skype or Google Hangouts or webcams, that can’t play videos or connect to other classrooms over the Internet, or to authors or NASA, that continuously run out of hard drive space with every Adobe Flash or Java update, do not empower teachers or students with 21st Century learning abilities and are not the kind of computing environments we should be building for teachers today.

    I was looking for cost-effective ways to get the system back to what it was designed to do, which was provide a platform for teachers to take attendance, enter grades, check email and marginally support student computing. It turns out, cost-effective and VDI don’t really play well together. Just to stabilize the system, do a health check and migrate to 5.0 was a six-figure prospect. Adding the hardware to increase RAM and HDD capacity in guest VMs, more six figures. Fixing the storage problems with something better suited for the peak demands of 1,200 virtual desktops, more six figures. The management software needed to really see what was going on with the complex moving parts? Only five figures, but with a high recurring cost. Replacement endpoint devices for teachers, six figures yet again. The numbers kept adding up, and no matter how I tried to slice and dice them, the conclusion was that getting the system stable and viable over the next three years was going to be expensive. Certainly much more than the low-cost system it was initially pitched as.

    There was another factor I was considering when looking at price. Support had been pre-paid for five years, with the VMware renewal just one budget year away and SAN and server renewals due the year after. On top of that, the server hardware and existing SAN would soon be five years old. Five years for critical infrastructure that 99% of all the desktops in the district were running on. Now, I have run servers out to seven and even eight years, but never critical systems. Five years has always been my end of life for critical production servers, and in this case the equipment had experienced two major high-heat events when the air conditioning failed in the server room. In one instance, the thermometers were pegged at 120 and the SAN did not shut itself down. Not the kind of environment that lends itself to extending the life of computer hardware.

    Factor in a significant investment to make the VDI system right, a critical lack of sysadmin capacity and skill level (I’ve learned more about VDI in the past 10 months than I care to know; it’s basically my second job) and the prospect of significant support renewal costs on the horizon, and the only cost-effective solution was obvious: scale back the number of users to a point where the existing hardware could support decent performance and phase out the VDI system over time. We would make the best use of the investment that had been made but not throw more money into an outdated paradigm that we weren’t equipped to support and couldn’t afford to maintain over the long term. But this would take time. Time we did not have.

    Storage was the major issue. With several schools bringing new thin clients online over summer (purchases already in the pipeline when I arrived), we were looking at a total system collapse if we didn’t do something. The solution presented itself in an unexpected place. In part three, we talk redundancy!