The Access Network
Last year, around the time the iPhone 7 was launched I think, I had been reading about the new Apple Teacher program and got quite excited about signing up – only to find out that I couldn’t because it was for the United States only. It did point me to a page that they promised to update as the program became available in other countries or regions – and I had even been remembering to check! The last time I checked, after we came back from the Christmas holidays, I was still faced with the single line of availability: United States.
Anyway, last Thursday, I got an email notifying me that the Apple Distinguished Educator (ADE) programme was open to applications again. Remembering the heft of the application last time, I thought I would have a quick glance to see what was involved this time. Imagine my surprise to find that being an existing Apple Teacher was a prerequisite to applying to be an ADE!
When I dug deeper into things, I found that the list of countries had been updated (on January 24th, just in time for BETT?) and now included Australia, Denmark, Hong Kong, Ireland, Mexico, the Netherlands, New Zealand, Singapore, Sweden and the United Kingdom as well as the United States.
So obviously I had to go and have a look.
An early stumbling block you might face is to do with your Apple ID. The Apple Teacher site states pretty specifically that it’s your own personal Apple ID you’ve to sign up with, and not an ID provided by your establishment. That’s fine for people like me – who surf the wave of our copious IDs with ease – but for some other teachers it may prove a bit more challenging.
Once you are through the sign-up hoop, you will find yourself logged into the Apple Teacher Learning Centre. Pick your device of choice – essentially iOS or Mac – and there is a set of tutorials and quizzes for you to complete to become an Apple Teacher. I can’t speak for Mac, but the iOS ones were:
Having completed the quizzes for iOS, I can confirm that they are not pitched at “Expert” level, the main plank of evidence being that I managed to pass them all. I got a very nice, shiny email for my trouble:
Interestingly, passing quizzes opens up more quizzes and the interface itself is pretty user friendly – as you’d expect from Apple. I’m looking forward to seeing how the site and the program develop, that’s for sure.
If you’re interested, you can sign yourself up for Apple Teacher at:
It’s been a while since I blogged (a freshly minted child and 2 house moves will do that kind of thing to you….) but I saw something this week that made me think “People need to know about that. I should stick it on my blog.” Given how inactive I’ve been on here for so long, there may be a fundamental flaw in my logic there, but we’re going to let that slide for the moment….
Office Lens – did I mention it was free?
The thing that I saw was down to Ian Stuart. I had been asking some questions about OneNote and Class Notebook, and obviously Ian is the Go-To-Guy for such queries. He came out to visit me at school (many thanks Ian!) and ran through a few things with me. One of them was the amazing set of ‘Learning Tools’ available as a plugin for OneNote, and given our iOS situation he showed me the free Office Lens app too, but gave the caveat that it was only available in an iPhone version – although this could be used on the iPad like many iPhone apps.
After I got home, I went to download Office Lens to my iPad and found out that the info Ian had given me was inaccurate. There was an iPad version of Office Lens available! Turns out that it had literally just been released that day. I must have been one of the very first people to download it (and did I mention it was free?).
So what does it do?
Well, put simply, if you have a piece of text, you point Office Lens at it, take a photo of it and it will then read it to you and also convert it into an editable document. See the pics below for an idea of how it works.
First, frame your document in the camera, and capture an image using the onscreen red button.
A thumbnail will be displayed of the image you just captured. You can now take more pictures, if you have more pages to scan.
Choose where you want the image to be sent.
Let’s start with the Immersive Reader.
The conversion is reasonably quick, on a decent signal at least.
Immersive Reader provides a clean and pretty clutter free interface.
Press the play button, and the text will be read out to you. The speed of the reading can be varied to suit your individual needs.
The current word being spoken is highlighted as it is read, and you can make the speech faster or slower to suit.
Did I mention it was free? And we’re not finished yet…..
If you have a compatible OneDrive account – like I don’t know, a school account or through Glow – then you can upload the scanned document to Word through OneDrive….
…where it just happens to become fully editable text. As with any OCR technology, it’s not perfect – but it is pretty good.
As a simple, user-friendly app, it’s mightily impressive. And did I mention it was free? Get it for iOS at http://tiny.cc/OfficeLens
It’s also available as an Android or Windows (naturally) app, but I haven’t seen them up and running. Definitely worth a look though.
So, that’s Lens. What about ‘through a lens’?
Well, an interesting thing happened when I was showing a colleague how Lens worked. This technology, which would have been jaw-dropping a couple of years ago say, is free to download and easy to use – and I’m listening to myself say “Yeah – it’s a shame you can’t change the colour of the background it’s reading from, or how the highlighting works. And I wish you could add a Scottish accent….”
And then I stopped and listened to myself. I smiled, and thought about what the app is capable of and what our reaction to seeing it was. And it’s a telling glimpse of where we are. We are insatiable. It doesn’t matter how good a piece of software, hardware or work is, we always want it to do more, be more, achieve more. Which is good, in a way, and where progress and improvement comes from. But sometimes you just need to stop for a minute and say good job, well done.
So Microsoft; good job, well done.
So, it’s been a while since I posted, but I had some really exciting news today and thought this would be a good place to break it, as well as a good reason to get me writing again.
If you work with me, follow me on Twitter or are unfortunate enough to be one of my Facebook friends, then you will have been getting plagued for about the last fortnight to vote for a video I created telling people about “My SMART moment”. This was part of the application process for the 2014 Global SMART Exemplary Educator Summit in Calgary this summer – from July 19th to 26th. Sorry you had to put up with it, and thanks to anyone who watched the video, voted, harassed anyone else into voting, thought about voting or liked, RTd or shared.
Although I never got enough votes to snag myself an automatic space (some of those other SMARTees had thousands of votes!) between my video, the votes cast and my application form I must have done enough to impress someone on the panel, because I received an email today telling me that I had been selected to attend!
I’m delighted and excited, and looking forward to meeting educators from around the world, learning loads and possibly getting a sneak peek at the new technologies that SMART are developing.
Hopefully, I’ll be able to tell you all about it on here, but that’s in the future. For now, here’s some flag mashups I created using Notebook (obviously!) to celebrate! Which is your favourite?
Last weekend, I took part in the Pedagoo event #tmlovelibraries. It was a fantastic day, and I learned loads. At the pub session afterwards, there was a sort of TeachMeet Unplugged event, similar in feel to the TeachMeet 365 events or, as Fearghal testified, to the very early TeachMeets themselves. Fearghal had asked us all to come with something we were prepared to share; as I have been doing a bit of work with OpenBadges and have been very impressed with them, I decided that this was what I was going to talk about.
Then I hit the problem. 2 minutes is not a very long time, particularly to talk about something you have been working on for months and have found out so much about. So, to keep things short, I decided to create an OpenBadge for all the participants of tmlovelibraries and then give it to them as a present. By claiming it, they could find out a bit about Openbadges themselves.
This idea seemed to work well in the keeping things short arena, as well as the engaging the audience area – the word ‘gift’ seemed to be the important one in achieving this! As Fearghal commented on the night, my talk also had the effect of taking his carefully honed structure and blasting it into a million pieces as people went scurrying to the internet to find their badge. The badge is shown below, together with its claim code for anyone who was there. To claim it, navigate to the badg.us site and insert the claim code ‘kapyua’ into the “Claim award from code” box. This will prompt you to either sign in to your Mozilla Backpack if you already have one, or sign up with an email address to create one before awarding you the tmlovelibraries – Participant badge, which you can then display on your blog, Facebook profile or Twitter feed.
In the impromptu break that followed my talk, I was talking to a few different people, and realised that there was a real appetite for finding out more about using OpenBadges. Quite a few people had looked at the concept themselves, before deciding that the project was too technical for them to use effectively. This, of course, is exactly the same decision I came to myself when I first started looking into digital badges. I had been impressed with the ease of creating badges for recognising various achievements on Edmodo, but had hoped for some way to display them in fronter, our school’s virtual learning environment. When I had approached the extremely helpful people at Edmodo asking if this was possible, they said that whilst they were happy for the badges to be displayed elsewhere, it would need to be purely a case of copying them as an image and uploading them elsewhere.
I felt sure that there had to be a more efficient way of doing this, and went off doing a bit of digital badge research. It soon became clear that OpenBadges were exactly what I was looking for, but despite the fact that there were plentiful resources available for those with an ability to code, there was nothing I could find that was very user-friendly for a class teacher.
Until I chanced across the ForAllBadges site that is. Straight from the off, ForAllBadges allowed me to create an OpenBadge simply by uploading an image to the site and filling in the information fields to attach to it. Perfect for what I wanted. But ForAllBadges had far more to offer than I had been looking for. It gave me a whole badge-management system, allowing me to upload classes and add staff, create and issue badges and – most crucially given the age of my pupils – a way to display the badges earned without needing a Mozilla Backpack (currently, a Mozilla Backpack is only available to learners over the age of 13).
I soon had a pilot badge system up and running and a fronter page created with links to the pupils’ individual Trophy Rooms; here their badges could be seen through viewing their ForAllBadges badge journal. After an email exchange with the amazing people at ForAllBadges, the ability for the student to add a reflective comment to their badge journal was quickly added. This setup now allowed for a badge to be created, issued, displayed and reflected upon, as well as having the advantage of being part of the OpenBadge system, allowing a great degree of portability for the badges once the pupil reaches the age of 13 (or Mozilla update their terms & conditions to allow under 13s to have a Backpack with permission from their parent/carer – a change that is on the cards very soon I believe).
This was perfect for what I was looking to use it for in school, but perhaps a bit too complicated to use in ‘open play’. I had been thinking that OpenBadges could be a great way to document CPD activities such as TeachMeets or MOOCs for example, but how could an event organiser award a badge to someone whose details they didn’t know? Would they have to do all the data-inputting themselves? This sounded like a prohibitive amount of work.
Fortunately, a site that David Muir had pointed me towards had the answer. Badg.us allows a user to create badges very simply, and in much the same way as ForAllBadges. However, the badg.us site interfaces directly with the Mozilla Backpack and Persona sign-in service, making it a far more user-friendly solution when you will be issuing badges to people from outwith your organisation or whose details you are unaware of in advance. It also lightens the administrative burden of issuing badges, as the onus is on the claimant to provide their details. The site allows you to set up reusable codes (like the one above) for large-scale issuing, or one-use codes when you are looking to target your badge claimants more precisely (I used this to create “Presenter” and “Organiser” badges for tmlovelibraries, printed up claim codes for these and gave them to Fearghal to distribute).
In my opinion, these tools make the whole process of creating and awarding badges far more accessible to the typical classroom practitioner; teachers who, much like myself and Fearghal, would previously have found the process too technical can use these services to gain the benefits of OpenBadges without having to become coding wizards. Other tools have been developed that can do a similar job – for instance, WPBadger and WPBadgeDisplay allow you to utilise WordPress blogs to issue and display badges, whilst OpenBadges.me provides a very useful badge designer for either online use or as a WordPress plugin. Recently, the ForAllBadges site has joined together with its sister site ForAllRubrics, and you can set things up so that once a rubric has been completed, an OpenBadge can be awarded automatically. After some late-night Twitter conversations between myself and the founder of ForAllSystems, ForAllRubrics also has built-in links to the CfE Experiences & Outcomes. A very handy teacher toolkit!
So, now it begins to get exciting. The badges are no longer a concept. Now that a teacher – or a student? – can create and award these badges, what might they do with them? I have a number of ideas that I’ll be trying in my school, and I know Fearghal had an inclination to use them as part of a programme he delivers at his school (this provoked a very interesting side discussion with David Gilmour about extrinsic/intrinsic motivation). I know that other organisations (including the Scout Association and – believe it or not – the SQA) have been looking at introducing them too.
What would you do with OpenBadges?
Audio Blog Post – click here to listen
created using vozme.com
I thought that I had been doing pretty well on the ITR12 course, and then December came along. In common with teachers around the country, as soon as the calendar turns to 1st December I lose all semblance of any order to my professional life, as all the ‘other things’ – which, to be fair, are vital parts of the wider life of the school – start making increasing demands on my time, at the same time as my nearest and dearest start doing the same outside of work. I managed to do very little for the course during this time (but was impressed I managed to do anything!). Normally, the holidays can be a good time to pick up some of the slack, but this year we were lucky enough to have had arranged a trip to New York and so no slack could be taken up. “No problem,” I thought, “I’ll just get dug right in when we get home.”
Or alternatively I’ll catch some bug and spend the next two and a half weeks feeling absolutely lousy and unable to focus on any kind of work!
Anyway, feeling a bit more human now, and noticing the course moving on relentlessly without me I thought I had better try and get caught up. So, apologies for lagging behind, but the catch-up starts now!
I have to admit to having some amount of trepidation about my forthcoming confession. There’s no need for it really, I could easily write a blog post reflecting fairly honestly on my audio experiences without making things so clear, but I feel that to do so would be disingenuous at best and downright dishonest at worst. So here it is.
I hated it.
Actually, the answer to that is No, but Yes as well. That sounds a bit confusing, so I should probably explain.
During my ‘audio experience’ I had a chance to listen to a couple of stories and a novel as audiobooks. These I really enjoyed. The stories were fairy tales from CDs being given away with The Guardian during September, and feature actors Stephen Mangan and Tamsin Greig on voice duties. The novel was the audiobook of “The Great Hamster Massacre” read by someone whose name I didn’t recognise – Susie Riddell – but who turned out to be a graduate of the Royal Welsh College of Music and Drama; a professional actress and voiceover artist who has narrated many books, acted in many radio plays and is currently a regular in “The Archers”.
With such expertise on voice duties, it is perhaps unsurprising that I really enjoyed these. And perhaps looking back over the years and seeing where I have enjoyed many radio dramas or professional readings, no great surprise. I can easily see why people would enjoy listening to these, and why people would choose to listen to them. I even thought about audiobooks for my car journey to and from work – could be interesting and make a change.
Next, encouraged by my audiobook experience and inspired by David’s blog post, I decided to have a go at using my computer and browsing the internet via audio.
And I hated it.
Why was it so bad?
First of all, I have to admit I’m no expert at setting up or using voice accessibility on the computer (in this case Windows Narrator), so perhaps I contributed to my own downfall somewhat. But then, on the other hand, I’m the guy who should be able to do it for our school, so I’m not going to cut myself any slack there.
I could find nothing properly. Nothing. Think about that for a minute – absolutely nothing. Despite the fact I am a pretty proficient computer user and have some experience in assistive technology, I was unable to open a file, start an application or browse to a webpage purely using the audio. I had to peek. A lot.
And that’s not the half of it. David sums it all up pretty well in his fantastic post, so I’m not going to try and do the same (although I am going to recommend you read his post!) but I will add a couple of points of my own.
Firstly, just listening to the voices is hard, hard work. Much harder than listening to a recorded voice – even an amateur one – and certainly much harder than listening to a ‘voice professional’ like those discussed above. To try and illustrate what I mean, I am going to insert some short audio clips in here as evidence. Using the introduction to this post as the reading material, I am going to add a text generated Chirbit of a synthetic voice reading the passage and then an AudioBoo of myself reading it (I did ask Stephen Fry to record the same clip too, but it turns out he’s rather busy).
Check this out on Chirbit
I find the synthetic voice – and it’s not just this one, it’s most of them – incredibly difficult to listen to. They often seem to read too quickly, although I know you can slow the speed of a lot of them down. And if you miss a bit, or want to check something again, it can be very awkward, but it’s more than that, I just don’t think my ears ‘like’ doing it.
To compound this misery, the screen reader reads out everything that’s on screen – and I mean everything – including loads of things that aren’t on screen too! Compare that to when you read a piece of text yourself: you know you’re just looking for the actual body of text, so you probably ignore internet addresses, headers, footers, font type, page numbers, prices, copyright notices….you get the idea. The screen reader has no such discrimination; depending on how much text or how many links are on a page, you may end up being there for quite a while. You take for granted how much filtering you do when reading, without even thinking about it; when you suddenly lose this ability it’s a nightmare. Then you have to think about all the keyboard shortcuts at the same time, to try and get Narrator to read what you want it to. I’ve been trying for about a fortnight with the shortcuts in front of me and I still can’t manage it properly.
The last thing I’m going to mention is just how tiring the whole experience is. Possibly due to an interaction of the previous two points, I found the whole experience exhausting. I couldn’t believe how tiring I was finding such simple tasks – and there’s a thought to take back to the classroom.
Now, perhaps some or all of this is due to never having done these things before. Perhaps I would get better the more I practised, and would find the whole experience less uncomfortable. I would like to think that this would be the case, because if it doesn’t get easier, and that is what some students have to go through every day then I think we need to come up with a Better Way – and fast.
I’d been asked by my Head Teacher to see what my network had to say about concept mapping. A few shouts on Twitter and some retweets from the pedagoo crew got me a pile of responses, so thanks to Kenny Pieper, Fearghal Kelly, Drew Burrett, Sinclair Mackenzie, Alan Stewart, Samantha Williams, Malcolm Wilson and Allan Reid for all their help.
A pile of stuff actually. On the free side, as well as being pointed towards bubbl.us which I have used before, I was also given links to FutureLabs exploratree and the quite interesting text2mindmap whilst Google suggested I take a look at Simple Mapper and I also stumbled across the Seeing Reason Tool from Intel. Commercial resources mentioned included SMART’s SMART Ideas, Mindomo, MindMeister and creately (most of which have free versions with limited functionality). Alan sent an address for a Livebinder which as well as having most of these links and a pile of others, also reminded me how useful LiveBinder could be.
Sadly not. Over and above the resources themselves, I’d been hoping to find examples from people who are working with concept mapping already, and nobody seemed to have anything to share on this point. We’d also been quite hopeful of finding someone who might be able to deliver some training on the effective use of concept mapping, and whilst I had noticed that iansyst had a mention of concept mapping training on their site, I could find little else.
So, that’s where things stand just now. But I’ll keep looking and listening and see if I can find out anything else!
I got a DM on Twitter from a friend of mine in the Western Isles. He had been reading some of my #itr12 posts, and wanted to draw my attention to something pretty fundamental – he could hardly read the text! He suggested that changing the font and its size might be a good idea.
I said “No problem” and headed off to edit my posts. Interesting thing though, my WordPress blog was offering me very limited font choices, and no way of selecting font size. Hmmmm. What’s a chap to do?
The solution involved a number of things. It involved installing the Editor FontSize, FontMeister, Space Invaders, Text Control and WP Editor plugins for my blog (not just them as it happens, I tried a few others on for size too!). Basically, these all provide some sort of formatting help, meaning I can now choose my font size, line spacing and certain other features too. But the fonts are best of all.
Any website has to take a gamble, as it can never know what fonts are installed on the computer it is being displayed on. Sure, Times New Roman is likely to be available, but apart from that it’s a lottery. Obviously there are some fonts which a computer is more likely to have than others, but essentially it’s a gamble. Websites get round this by suggesting a font family – a list of fonts to try, and an order to try them in. From an accessibility point of view, this isn’t great, as you can never be sure what font your reader will be seeing.
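In a page’s stylesheet, one of these fallback lists looks something like the sketch below (the particular fonts chosen here are just an illustration, not what this blog actually uses):

```css
/* The browser tries each font in turn and uses the first one it
   finds installed, falling back to a generic family at the end. */
body {
  font-family: "Helvetica Neue", Arial, Verdana, sans-serif;
}
```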
But all that is changing. Web Fonts are OpenSource fonts which are available from the web – that means that they don’t have to be installed on a computer to be able to be displayed on that computer. Using FontMeister I added some Google Web Fonts as options to my blog, and am currently trying the Andika font to see how I get on with it. There is talk of the Open Dyslexic font being available as a web font soon too.
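For the curious, pulling in a web font like Andika boils down to linking the font provider’s stylesheet and then naming the font in your own CSS – roughly like this (the exact URL format depends on the Google Fonts service, so treat this as a sketch):

```html
<!-- Fetch the Andika web font from Google's servers... -->
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css?family=Andika">
<style>
  /* ...then use it, keeping a generic fallback in case the
     font fails to download. */
  body { font-family: "Andika", sans-serif; }
</style>
```

The plugin route (FontMeister) just does this wiring for you behind the scenes.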
Turned out to be nearer 15, but that’s still okay for a post I reckon!!!!!
I had never really given much thought to the structure of my documents until I started the ITR12 course. I don’t really know why, it had just never been something that had cropped up I guess. And that’s a large part of the problem here – PR. Making documents structured isn’t difficult or time-consuming and it doesn’t need any expensive software; all it needs is an increased awareness.
Well, there’s the obvious answer to this question, and the less obvious answer too. The obvious answer is that by making your document structured, you make it easier for screen readers to ‘understand’ your document, and as a result make it more accessible to the person using the reader, thus giving them a better chance of understanding it. The less obvious answer is that there are benefits of structuring your document anyway – it is easier for anyone to navigate around; it can be a more dynamic document with hyperlinks to other sections; its portability is increased (eg for export to PDF or conversion to HTML); and apart from anything else, after an initial period, it should be quicker to create than an unstructured document.
The most important part of making a structured document is formatting your document properly. For the most part this means using headings to break your document into sections. “Easy,” I hear you cry, “I do that all the time anyway!!!!” But do you really? When you are putting a heading into your document, do you select font size, type heading, select heading text, Bold, Underline, return, change font size back & Un-Bold, Un-underline? Yeah, me too. So that must be good, right?
Wrong. Whilst this may look like a structured document, there is no ‘metadata’ attached to this structure to allow it to be correctly identified. What you need to do is open your word processor up and have a look for a bit of the interface that you have probably largely ignored until now – the styles section. You know the one….
By using the style settings to apply styles, we can create a document that is capable of providing screen reading software with the information it needs to ‘make sense’ of the document. Now, this seems very simple – and it is. After an initial period spent setting your styles up the way you want them (choice of font, font size, font style), it actually makes it quicker to format your document than marking each heading out as you need it.
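The same idea is easiest to see in HTML, where the difference between ‘looks like a heading’ and ‘is a heading’ is explicit (a rough illustration, not a real document):

```html
<!-- Visually similar, but only the second one carries the metadata
     a screen reader needs to announce it and let users jump to it. -->
<p><b>Introduction</b></p>   <!-- bold text: no structural meaning -->
<h2>Introduction</h2>        <!-- a real heading: part of the outline -->
```

Word’s styles do exactly the same job as those heading tags – they attach the meaning, not just the appearance.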
Well, that’s the main thing, but there are another couple of things to bear in mind too. The first of these is remembering to add alternative text (alt text) to any images that you put into your document. This will allow screen readers to provide a description of the image for a reader who cannot see it. Care needs to be taken with the alt text – if the filename is used as default, for instance, this is likely to be something pretty meaningless and user-unfriendly, such as image (1).png. Providing a short but accurate description of the image (eg ‘style options from Word’ for the image above) is much more useful.
The same principle applies to any links you add to your text. Hyperlinking to another blogpost on this site, the address to use would be http://h-blog.me.uk/?p=365. Now, if a screen reader reads that out, it isn’t going to mean much to the reader. The title of the blog post “EduBlogs Awards – My Nominations” would make a lot more sense. Adding this descriptive text to a hyperlink can be easily achieved by typing (or cutting and pasting) the desired text into your document, selecting it, right clicking and choosing ‘edit hyperlink’.
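In HTML terms, those two principles come out roughly like this (a sketch – the image filename is made up for illustration):

```html
<!-- Alt text gives the screen reader something meaningful to say
     instead of the raw filename. -->
<img src="image (1).png" alt="Style options from Word">

<!-- Descriptive link text replaces a raw address that would be
     painful to hear read aloud. -->
<a href="http://h-blog.me.uk/?p=365">EduBlogs Awards – My Nominations</a>
```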
As well as these three main principles, font size needs to be considered, and should be at least 12 points. Underlining text should be avoided, as this can make reading text more difficult, as can using block capitals. Text should not be justified, as the differences in word and letter spacing can cause problems with reading; rather it should be left-aligned. Any bulleted or numbered lists should be formatted using the relevant tools rather than numbered/bulleted by hand. Similarly, columns should be added using the correct formatting tools rather than by ‘tabbing’. For larger documents, a table of contents should be considered – this should be easy to create for a document that is properly structured!
If you are lucky enough to be using the 2010 version of Word, there is a built-in accessibility checker that can help you spot accessibility issues in your document. It will highlight these to you, advising how important it feels the error is and offering advice on how to fix it. Similar extensions are available for OpenOffice and LibreOffice.
That is a very good question. I think it is possibly a lack of education about the benefits of structured documents as well as how easy it can be to provide that structure at the time of writing. As excuses go, it’s pretty flimsy; so perhaps it’s time we all took a bit of responsibility for sharing the information with our colleagues.