Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Muhammad Younus

1
A cross-functional team has members with a variety of skills, but that does not mean each member has all of the skills.
Specialists Are Acceptable on Agile Teams

It is perfectly acceptable to have specialists on an agile team. And I suspect a lot of productivity has been lost by teams pursuing some false holy grail of having each team member able to do everything.
If my team includes the world’s greatest database developer, I want that person doing amazing things with our database. I don’t need the world’s greatest database developer to learn JavaScript.

Specialists Make It Hard to Balance Work

However, specialists can cause problems on any team using an iterative and incremental approach such as agile. Specialists make it hard to balance the types of work done by a team. If your team does have the world’s greatest database developer, how do you ensure your team always brings into an iteration the right amount of work for that person without bringing in too much for the programmers, the testers, or others?
To better see the impact of specialists, let’s look at a few examples. In Figure 1, we see a four-person team where each person is a specialist. Persons 1 and 2 are programmers and can only program. This is indicated by the red squares and the coding-prompt icon within them. Persons 3 and 4 are testers who do nothing but test. They are indicated by the green squares and the pencil-and-ruler icons within those. You can imagine any skills you’d like, but for these examples I’ll use programmers (red) and testers (green).
The four-person team in Figure 1 is capable of completing four red tasks in an iteration and four green tasks in an iteration. They cannot do five red tasks or five green tasks.

But if their work is distributed across two product backlog items as shown in Figure 2, this team will be able to finish that work in an iteration. But any allocation of work that is not evenly split between red and green work will be impossible for this team to complete. This means the specialist team of Figure 1 could not complete the work in any of the allocations shown in Figure 3.

The Impact of Multi-Skilled Team Members

Next, let’s consider how the situation is changed if two of the specialist team members of Figure 1 are now each able to do both red and green work. I refer to such team members as multi-skilled individuals. Such team members are sometimes called generalists, but I find that misleading. We don’t need someone to be able to do everything. It is often enough to have a team member or two who has a couple of the skills a team needs rather than all of the skills.
Figure 4 shows this team. Persons 1 and 2 remain specialists, only able to do one type of work each. But now, Persons 3 and 4 are multi-skilled and each can do either red or green work. This team can complete many more allocations of work than could the specialist team of Figure 1. Figure 5 shows all the possible allocations that become possible when two multi-skilled members are added to the team.
By replacing just a couple of specialists with multi-skilled members, the team is able to complete any allocation of work except work that would require 0 or 1 unit of either skill. In most cases, a team can avoid planning an iteration that is so heavily skewed simply through careful selection of the product backlog items to be worked on. In this example, if the first product backlog item selected was heavily green, the team would not select a second item that was also heavily green.
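The logic of these figures can be sketched as a small feasibility check. This is an illustration, not anything from the original article; it assumes each person contributes two task-units per iteration, as in the examples:

```javascript
// Feasibility check for an iteration's work allocation.
// team: array of skill lists, e.g. ['red'] for a specialist,
// ['red', 'green'] for a multi-skilled member. Each person can
// complete two task-units per iteration.
function canComplete(team, red, green) {
  let redCap = 0, greenCap = 0, flex = 0;
  for (const skills of team) {
    if (skills.length === 2) flex += 2;        // multi-skilled capacity
    else if (skills[0] === 'red') redCap += 2; // programmer capacity
    else greenCap += 2;                        // tester capacity
  }
  // Specialists absorb what they can; multi-skilled members cover the rest.
  const redLeft = Math.max(0, red - redCap);
  const greenLeft = Math.max(0, green - greenCap);
  return redLeft + greenLeft <= flex;
}
```

With it, the all-specialist team of Figure 1 can do (4 red, 4 green) but not (5, 3), while the mixed team of Figure 4 also handles skewed allocations like (6, 2).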
The Role of Specialists on an Agile Team

From this, we can see that specialists can exist on high-performing agile teams. But it is the multi-skilled team members who make that possible. There is nothing wrong with having a very talented specialist on a team, and there are actually many good reasons to value such experts.
But a good agile team will also include multi-skilled individuals. These individuals can smooth out the workload when a team needs to do more or less of a particular type of work in an iteration. Such individuals may also benefit a team in bringing more balanced perspectives to design discussions.

Evidence from My Local Grocery Store

As evidence that specialists are acceptable as long as they are balanced by multi-skilled team members, consider your local grocery store. A typical store will have cashiers who scan items and accept payment. The store will also have people who bag the groceries for you. If the bagger gets behind, the cashier shifts and helps bag items. The multi-skilled cashier/bagger allows the store to use fewer specialist baggers per shift.

What Role Do Specialists Play on Your Team?

What role do specialists play on your team? What techniques do you use to allow specialists to specialize? Please share your thoughts in the comments below.

2
Whatever you do, don’t call this an ‘interesting’ idea

My understanding of the word interesting came not from school but from a 14-inch black-and-white television showing Star Trek reruns in the late 1970s. ‘Fascinating is a word I use for the unexpected,’ I heard Mr Spock explain. ‘In this case, I should think interesting would suffice.’

Spock was the epitome of logic in the original Star Trek series. Although he had a human mother, it was the Vulcan half that was firmly in control. If he said that something was interesting, as I understood it, then he was describing an expected, objective fact. That notion is embedded deeply in today’s popular culture: cable news segments, websites and Facebook posts compete for our attention with surprising but allegedly genuine – interesting – truths.

It didn’t occur to me at the time that when Spock said that something was interesting, he wasn’t talking about that thing, he was talking about himself. Forty years later, I see things more clearly. The well-meaning writers on Star Trek set a bad example for us all, and the taint has only kept spreading. Calling something interesting is the height of sloppy thinking. Interesting is not descriptive, not objective, and not even meaningful.

Interesting is a kind of linguistic connective tissue. When introducing an idea, it’s easier to say ‘interesting’ than to think of an introduction that’s simultaneously descriptive but not a spoiler. I hear interesting all the time at conferences when someone is introducing a speaker. I hear interesting on the radio, when a host introduces an upcoming interview. These flighty little protocols happen so rapidly that they transit almost below the level of conscious discourse, serving only to prime me to pay attention.



In practice, interesting is a synonym for entertaining. This conflation has become especially problematic in higher education. Back in 2010, an article in US News & World Report said that the number-one sign of a bad professor is that ‘the professor is boring … Even in the very first classes, you can tell if the professor presents the material in an interesting way.’ Likewise, a blog post from Concordia University in Portland about teaching strategies offers advice on ‘how to become a professor who keeps lectures interesting’. The Princeton Review’s series of college guides (eg The Best 381 Colleges) gives every college and university a ‘Profs interesting rating’.

In today’s data-driven educational enterprise, faculty who do not entertain frequently do not get promoted – or even retained – because of the influence of student evaluations. The same goes for information technology workshops and conferences I attend, where questions such as ‘I found the speaker interesting’ on evaluation forms help to determine who is invited back in subsequent years. TED talks are the logical conclusion of this fashion, inspiring lectures with high production values and well-rehearsed presentations. They hold one’s interest, but they convey little information. Seriously, what do you remember from the last five ‘interesting’ TED talks that you watched?

What’s the result of society’s increasing emphasis on entertainment over substance? Novelty and innovation are valued above rigour; boring truth loses out to flamboyant falsehoods. I see it in today’s click-bait headlines, and even in the practice of science.

People say interesting to convey importance – and they shouldn’t. I review papers for academic conferences and scientific journals, and I’m routinely frustrated when other reviewers write dismissively that an article under consideration ‘isn’t very interesting’. That word, it does not mean what these reviewers mean. What they’re trying to say is that the scientific findings aren’t presented effectively, or that the results are only incremental, or (heaven help us) that the findings are not new, but merely replicate work that’s been done by others.

Replication and repeatability are thought by many laypersons to be shared ideals among scientists. In practice, few scientific studies are ever replicated. Last year, a survey by Vox.com of 270 scientists found few attempting replication studies because of the difficulty in funding and publishing them. Funding agencies pride themselves on sponsoring transformative, breakthrough research – interesting work that, almost by definition, doesn’t repeat (read: replicate) what’s been done before. And journals generally don’t print articles that merely replicate findings that have been previously published; such articles aren’t considered sufficiently interesting.

The results are bad for the practice of science, because the scientific method relies on replication. Without it, it takes a lot longer for erroneous studies to be corrected. But getting things right is not interesting, it’s pedantic.

So, when you write or speak, don’t say that something is interesting. It might attract your interest, sure, but whether your audience finds something interesting is determined by a complex set of preconditions including their background knowledge and other items competing for their attention. Their interest depends, too, on their pre-existing emotional state. The Diagnostic and Statistical Manual of Mental Disorders (the DSM-5) states that ‘markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day’ over two weeks or more is one of the diagnostic symptoms of major depressive disorder. Meaning that, if your audience doesn’t find your astronomy talk interesting, the fault might indeed be in themselves, and not in the stars.

Conversely, if someone tells you ‘this is interesting’, remember that they aren’t describing the thing at all. They are describing the effect of that thing on them. Even though we hear it a lot from the would-be Vulcans around us, interesting is a subjective, emotional word, not the objective, logical word we want it to be.

It must be Spock’s human half talking.
[collected]

3
Software Engineering / HTML5 Video Autobuffers, Always
« on: April 20, 2017, 03:18:26 PM »
John Gruber of Daring Fireball says that the HTML5 video element, simple as it is, always autobuffers on Safari, Chrome, and Firefox. It’s something others have also come up against. Any videos on the page will start downloading right away, regardless of the “autobuffer” attribute’s setting:

The HTML5 spec defines an autobuffer attribute for the video and other media elements (bold emphasis added):

The autobuffer attribute is a boolean attribute. Its presence hints to the user agent that the author believes that the media element will likely be used, even though the element does not have an autoplay attribute. (The attribute has no effect if used in conjunction with the autoplay attribute, though including both is not an error.) This attribute may be ignored altogether.

It would appear, in my testing, that all three of these browsers take the spec up on the aforebolded offer to ignore this attribute. Even if you do not explicitly turn this attribute on, Safari, Chrome, and Firefox will still auto-buffer the content for your <video> (and <audio>) elements. There is no way to suppress this using HTML markup.

As Gruber points out, this might seem like a good thing for fast UI: videos start playing as soon as the user wants them to. That would be true in a world of unlimited bandwidth, but for now, this feature is likely to be a massive bandwidth hog. There is a nice workaround, albeit one that peels back the utter simplicity of a single <video> tag:

In the HTML markup, rather than a <video> element, use an <img> element with the intended poster frame.
Add an onclick JavaScript handler to the <img> element, which, when invoked, does some DOM jiggery-pokery to remove the just-clicked-upon <img> element and replace it with a <video> element that sources the intended video files.
And, in fact, that is exactly what I resorted to for my PastryKit videos. Do a View Source on that page to see the solution.
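The two-step swap described above can be sketched like this (the data-video attribute convention and file paths are illustrative, not from Gruber’s actual page):

```javascript
// Build the <video> markup that will replace a clicked poster <img>,
// so nothing buffers until the user actually asks for the video.
function videoMarkupFor(videoSrc, posterSrc) {
  return '<video src="' + videoSrc + '" poster="' + posterSrc +
         '" controls autoplay></video>';
}

// In the browser, wire up each poster image; clicking swaps it for the
// real <video> element, which then starts buffering and playing:
//   document.querySelectorAll('img[data-video]').forEach(function (img) {
//     img.onclick = function () {
//       img.outerHTML = videoMarkupFor(img.dataset.video, img.src);
//     };
//   });
```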

It’s difficult to see <video> becoming the web’s standard video component if every video buffers as soon as the page loads.

4
The WebM project is dedicated to developing a high-quality, open video format for the web that is freely available to everyone.

The WebM launch is supported by Mozilla, Opera, Google and more than forty other publishers, software and hardware vendors.

WebM is an open, royalty-free, media file format designed for the web.

WebM defines the file container structure, video and audio formats. WebM files consist of video streams compressed with the VP8 video codec and audio streams compressed with the Vorbis audio codec. The WebM file structure is based on the Matroska container.
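In practice, a page can probe for exactly this combination (VP8 video plus Vorbis audio) with the media element’s canPlayType. The helper below just assembles the MIME string described above:

```javascript
// MIME type for a WebM file as described above: VP8 video and Vorbis
// audio in a Matroska-based container.
function webmMimeType() {
  return 'video/webm; codecs="vp8, vorbis"';
}

// In a browser:
//   var support = document.createElement('video').canPlayType(webmMimeType());
//   // canPlayType returns "probably", "maybe", or "" depending on the build
```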



It happened. Today, Google is up on stage at I/O unveiling a new WebM project alongside a slew of partners (notably: Mozilla and Opera on the browser side) that gets the On2 codec out into the open. This is huge news for the fight for Open Video, and everyone will now have eyes on Safari.

YouTube will be a huge push here, and you can go to their html5 version: http://www.youtube.com/html5 and check it out. Today it is available in trunk builds on Chromium and Firefox. Soon, an Opera beta, Chrome dev release, and more.

The project is going after:

Openness and innovation. A key factor in the web’s success is that its core technologies such as HTML, HTTP, and TCP/IP are open for anyone to implement and improve. With video being core to the web experience, a high-quality, open video format choice is needed. WebM is 100% free, and open-sourced under a BSD-style license.

Optimized for the web. Serving video on the web is different from traditional broadcast and offline mediums. Existing video formats were designed to serve the needs of these mediums and do it very well. WebM is focused on addressing the unique needs of serving video on the web.

Low computational footprint to enable playback on any device, including low-power netbooks, handhelds, tablets, etc.*

Simple container format

Highest quality real-time video delivery

Click and encode. Minimal codec profiles, sub-options; when possible, let the encoder make the tough choices.

* Note: The initial developer preview releases of browsers supporting WebM are not yet fully optimized and therefore have a higher computational footprint for screen rendering than we expect for the general releases. The computational efficiencies of WebM are more accurately measured today using the development tools in the VP8 SDKs. Optimizations of the browser implementations are forthcoming.

Congrats Open Web.

Update: Flash will ship VP8, as will IE9. Now everyone looks at the Safari team :)

(One thing though about IE9 support: “In its HTML5 support, IE9 will support playback of H.264 video as well as VP8 video when the user has installed a VP8 codec on Windows.”). That is a bummer.

5
Software Engineering / HTML5 Video; YouTube Perspective
« on: April 20, 2017, 03:15:43 PM »
The YouTube API blog put their point of view on HTML5 video on the table. I would love to know why they felt like this was the right time, and what their angle is. I find myself often confused with the Google strategy. On one hand they are doing amazing things for the Open Web (Chrome, tools, Steve Souders and Web performance work), but on the other we see an alignment with Adobe and Flash (a differentiator to Apple).

Man, I am torn. The pragmatist totally gets it. But the guy who realizes that it was the Web’s openness that allowed the likes of Google to come from nothing to the powerhouse that it is today in a decade gets confused.

If you are a Flash fan you see this as “see! Flash is here to stay!” As someone who wants to see Web standards get better fast, we see some of the momentum (the fact that we have audio and video, and the WebM codec) and the features that we still need to get in:

Robust video streaming

Closely related to the need for a standard format is the need for an effective and reliable means of delivering the video to the browser. Simply pointing the browser at a URL is not good enough, as that doesn’t allow users to easily get to the part of the video they want. As we’ve been expanding into serving full-length movies and live events, it also becomes important to have fine control over buffering and dynamic quality control. Flash Player addresses these needs by letting applications manage the downloading and playback of video via Actionscript in conjunction with either HTTP or the RTMP video streaming protocol. The HTML5 standard itself does not address video streaming protocols, but a number of vendors and organizations are working to improve the experience of delivering video over HTTP. We are beginning to contribute to these efforts and hope to see a single standard emerge.

Content Protection

YouTube doesn’t own the videos that you watch – they’re owned by their respective creators, who control how those videos are distributed through YouTube. For YouTube Rentals, video owners require us to use secure streaming technology, such as the Flash Platform’s RTMPE protocol, to ensure their videos are not redistributed. Without content protection, we would not be able to offer videos like this.

Encapsulation + Embedding

Flash Player’s ability to combine application code and resources into a secure, efficient package has been instrumental in allowing YouTube videos to be embedded in other web sites. Web site owners need to ensure that embedded content is not able to access private user information on the containing page, and we need to ensure that our video player logic travels with the video (for features like captions, annotations, and advertising). While HTML5 adds sandboxing and message-passing functionality, Flash is the only mechanism most web sites allow for embedded content from other sites.
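The HTML5 message-passing mentioned above works roughly like this sketch; the origins and the message format here are illustrative assumptions, not anything YouTube described:

```javascript
// Serialize a command for a sandboxed player iframe; the embedding page
// and the player would exchange these strings via window.postMessage.
function buildPlayerMessage(cmd, args) {
  return JSON.stringify({ cmd: cmd, args: args || {} });
}

// Embedding page:
//   frame.contentWindow.postMessage(buildPlayerMessage('play'),
//                                   'https://player.example.com');
// Inside the player iframe, verify the sender before acting:
//   window.addEventListener('message', function (e) {
//     if (e.origin !== 'https://embedder.example.com') return;
//     var msg = JSON.parse(e.data);
//     // ...dispatch on msg.cmd...
//   });
```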

Fullscreen Video

HD video begs to be watched in full screen, but that has not historically been possible with pure HTML. While most browsers have a fullscreen mode, they do not allow javascript to initiate it, nor do they allow a small part of the page (such as a video player) to fill the screen. Flash Player provides robust, secure controls for enabling hardware-accelerated fullscreen displays. While WebKit has recently taken some steps forward on fullscreen support, it’s not yet sufficient for video usage (particularly the ability to continue displaying content on top of the video).

Camera and Microphone access

Video is not just a one-way medium. Every day, thousands of users record videos directly to YouTube from within their browser using webcams, which would not be possible without Flash technology. Camera access is also needed for features like video chat and live broadcasting – extremely important on mobile phones which practically all have a built-in camera. Flash Player has provided rich camera and microphone access for several years now, while HTML5 is just getting started.

Time to knuckle down and deliver great new video features in the browsers!

6
Software Engineering / Secure Document Sharing
« on: April 20, 2017, 03:12:18 PM »
Aaron created this originally to allow him to share documents between his own computers in a secure manner.

The web application uses local storage in IE 5+ and Firefox 2 to store an encryption key. This means that your notes are encrypted with a key that never leaves your machine.

In traditional web applications your data is not encrypted at all. It’s sitting there on the server in plaintext for all to see. If the provider screws up, your data will be leaked, as happened with AOL.

Even if it were encrypted, you’d either have to enter the key every time you wanted to view the data, or else store the key on the server, which would defeat the purpose of encrypting it in the first place.

Local storage offers a way around this issue, and I wrote halfnote as a sort of proof of concept for client side encryption in web applications.
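The idea can be sketched like this. The XOR “cipher” below is a deliberately toy stand-in (Halfnote used real, strong encryption), and the storage key name is made up:

```javascript
// Toy illustration of client-side encryption: the key lives only in the
// browser's storage, so the server only ever sees ciphertext.
// XOR is NOT real encryption; it merely stands in for the strong cipher
// a real app would use.
function xorCipher(text, key) {
  let out = '';
  for (let i = 0; i < text.length; i++) {
    out += String.fromCharCode(text.charCodeAt(i) ^ key.charCodeAt(i % key.length));
  }
  return out;
}

// In the browser (the 'noteKey' name is an assumption):
//   localStorage.setItem('noteKey', secret);               // never sent anywhere
//   upload(xorCipher(note, localStorage.getItem('noteKey'))); // ciphertext only
```

Applying the same function twice with the same key recovers the plaintext, which is what lets the notes round-trip through the server without the server ever holding the key.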

Features

Client-side encryption. Your notes are strongly encrypted with your account password before they leave your browser. Even if I totally screw up someday and pull an AOL, your data will remain pretty safe.
Auto-save. Save buttons are lame. Halfnote saves your notes automatically whenever you stop typing.
Synchronization. You can have Halfnote open on multiple computers and they will stay in sync with each other. You don’t have to worry about accidentally overwriting notes you entered on another computer.

7
Software Engineering / Google Apps – Premier Edition
« on: April 20, 2017, 03:09:59 PM »
From the You-Know-When-Ajax-Has-Gone-Mainstream-Dept, Google announced today it will be offering businesses a premium service for its key productivity applications, at $50/user/year. The package includes:

Access to office-style applications – Google Docs & Spreadsheets, Google Page Creator. No presentation package yet – perhaps Google should acquire S5 :-).

Access to communication applications – GMail (@your-own-domain), Google Calendar, Google Talk (voice/IM).
Access to Google Homepage (maybe corporations could deck this out to become their intranet homepage?)
Control panel to manage the domain
Ads can be turned off
Storage at 10GB/user
Integration with organisation’s sign-on and email infrastructure

Phone support
The apps themselves are available to anyone, but the integration and extra services come with the premium service. Google provides this comparison table.

The giant elephant in this room is your company’s data sitting on Google’s servers. In the absence of an “Apps Appliance” sitting inside the firewall, there will always be a major proportion of the market unwilling to commit to a solution like this – increased risk of data loss, theft, and manipulation. Google’s pure-external model keeps things nice and simple, but it’s not for everyone.

Zoho, for example, offers an “in-premise edition” to run inside an organization’s network, as does Zimbra’s collaboration app. It’s also becoming possible to make your own stack, with apps like Wikicalc and the various wikis, though nothing as comprehensive as Google’s offering. It’s feasible MS will move their apps in that direction too.

The comparison among these approaches will be worth watching in coming months. For now, though, it’s great to see how much Ajax and the web have evolved in the past two years, with Google providing a lot of the inspiration. From TechCrunch: “Beyond competition and concerns, tonight is a good time to recognize the incredible force of innovation that Google is as well. Its nearly full-service suite of sophisticated, integrated online services is something of historic proportion.”

8
Software Engineering / Search for the Holy Mail (template)
« on: April 20, 2017, 03:08:41 PM »
Glen Lipka has been frustrated with the task of producing quality HTML email that works across various email clients, which of course got even harder when Outlook 2007 took email design back a few years.

Anyway, Glen thinks that he nailed it:

Outlook 2007 actually has a little more CSS support than I thought.  Just because I don’t get positioning and float and a decent DIV or the right box model or margins, doesn’t mean that I can’t still make it work.  Using a couple of tables, borders, padding and width, I think I came up with a solid solution that still looks like clean html.
I refuse to use spacer.gif.  Spacer.gif can kiss my shiny metal ass.  Boo spacer!  As a side note, I sometimes interview web developers for positions.  I look at their html.  If I see spacer.gif I say, “Nope, they stink”.  Sorry, it’s a pet peeve.
Opening up your email html in Word 2007 is NOT the same thing as opening up your html as an email in Outlook 2007.  They are really really close, but they have differences.  I kept seeing them, so I stopped trying to use Word 2007.
There is a bug in Outlook 2007.  If you have a table, and each cell has padding of 10px and then you put a cell in the middle to be 0px, it shortens the height of the cell and basically makes a HOLE in your table.  I was dumbstruck by this one.  It seemed impossible to do, but it does it.  The fix is to keep the padding on top and bottom, but remove left and right.  The bug is related to height, not to width.  It shortens vertically, but not horizontally.
Gmail is evil.  They only allow inline CSS.  They do this to avoid overlapping CSS rules. They could have dealt with overlap CSS rules using a rewrite scheme that put a prefix in front of all the classes.  It just made the html really messy.  I did my best in my template above to make it clean.  But still, that’s lame.
Gmail strips all height css rules.  Why height??  What did height ever do to them?  I got around this problem using padding, but in the dynamic app, it means we need to calculate specific padding rules based on the height the user requested minus the height of existing content.  Not trivial, but doable.  Why does Gmail allow width?  What’s the deal with height?
Borders can not be defined as 0px width.  In Outlook 2007, if you declare a border as 0px width, it shows up anyway.  I couldn’t figure that out, so I said, “Ok, I won’t do that.”  I saved an example which works in the browser, but not Outlook 2007.
Divs can have borders, but not padding in Outlook 2007.  Why not Microsoft?  Come on, work with me here.  Meet me halfway.  YUCK!
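Since Gmail keeps only inline CSS, any dynamic email builder ends up writing helpers like this hypothetical one, which folds a style object into a style attribute value:

```javascript
// Turn a style object into an inline style attribute value, since Gmail
// strips <style> blocks and only honors inline CSS.
function inlineStyle(styles) {
  return Object.keys(styles)
    .map(function (prop) { return prop + ':' + styles[prop]; })
    .join(';');
}

// Example: simulate height with vertical padding, since Gmail strips
// height rules (as described above); the values here are illustrative.
var cell = '<td style="' +
  inlineStyle({ 'padding-top': '10px', 'padding-bottom': '10px', width: '200px' }) +
  '">&nbsp;</td>';
```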

9
Software Engineering / Google and Mozilla 3D Round-up
« on: April 20, 2017, 03:07:03 PM »
Years ago, we covered an announcement about Mozilla’s plans to basically put OpenGL ES in the browser and call it Canvas 3D and to do so working with a new working group over at the OpenGL standards body, Khronos.

This week, we covered Google’s own 3D announcement, a plug-in offering a high-level scene graph API and embedded V8 run-time.

And of course, don’t forget about Opera’s 3D work, which we covered back in November 2007.

So now there are three approaches to 3D:

Mozilla: Low-level, OpenGL wrapper
Opera: Mid-level proprietary scene-graph-ish API
Google: The full COLLADA monty
Where should the web go? Mozilla’s Chris Blizzard compares the debate to Canvas vs. SVG:

Canvas is a very simple API, much like what we’ve proposed to Khronos for 3D support. It’s well-scoped, well understood and integrates very well with other web technologies. And it’s been getting a huge amount of traction on the web. People are writing all kinds of really neat technology on top of it, including useful re-usable libraries for visualization. Have a look through Google’s own promotional site for Chrome – a huge number of them use canvas. It has traction. And we’ve gone through a couple of iterations – we’ve added support for text and a couple of other odds and ends once we understood what people were trying to do with it.

Now compare this to SVG and SMIL. Each of those specs are multi-hundred page documents with very large APIs and descriptions of how to translate their retained-mode graphics into something that’s usable on the web. (SVG 1.1 is a 719 page PDF. SVG 1.2 Tiny is 449 pages. The spec for SMIL is a 2.7MB HTML file.) We’ve seen some implementation of SVG and SMIL in browsers, but it’s been slow in coming and hasn’t seen full interoperability testing nor any real pick up on the web. The model for these specs was wrong, and I think it shows.

Chris doesn’t directly say that Google’s approach is “wrong”, but he wonders if the Google proposal of a bigger and more ambitious API would represent too great a compatibility burden for browser vendors and developers.

In the comments of his post, Henry Bridge of the Google O3D team replied; here’s a lightly edited excerpt:

We agree that to keep a standards process focused, APIs should be as minimal as possible while remaining useful, and so we would likely keep things like that out of any first attempt at a standard and, as you say, let it evolve over time. But the usefulness question brings up an important, and we think, unresolved point. We’d love to build the animation and skinning system in JS, but we just couldn’t get a JS-based animation system fast enough — even on our retained-mode API. Javascript is getting faster all the time and we love that, but until someone builds some apps it’ll be hard to know what’s fast enough.

Standardizing [an Open GL-like] immediate mode API for JS makes total sense. It’s a well defined problem, lots of people know GL, and we think it will be useful. But some of the demos we wrote _already_ don’t run well without a modern JS implementation, and moving to [Open GL] won’t help that (but we’d love to be proven wrong). That’s why we think it makes sense to explore both an immediate and a retained mode 3D, and make sure they work well together.

What do you think?

10
Software Engineering / WebGL available in Firefox Nightly
« on: April 20, 2017, 03:04:50 PM »
We previously mentioned that WebGL had landed in WebKit source, joining Firefox.

Vladimir Vukićević of Mozilla has posted on how it now shows up in a nightly build, instead of just in source (which required a compiler flag, etc.).

This is incredibly exciting, as Jon Tirsen said:

Your next 3D shooter will sport a nice “Your browser is not supported please install Chrome, Safari or Firefox.” (Re: WebGL.)

Hopefully IE gets there too of course (Opera is in the group so we should see something there too).

Here is Vlad:

Along with the Firefox implementation, a WebGL implementation landed in WebKit fairly recently.  All of these implementations are going to have some interoperability issues for the next little while, as the spec is still in flux and we’re tracking it at different rates, but will hopefully start to stabilize over the next few months.

If you’d like to experiment with WebGL with a trunk nightly build (starting from Friday, September 18th), all you have to do is flip a pref: load about:config, search for “webgl”, and double-click “webgl.enabled_for_all_sites” to change the value from false to true. You’ll currently have the most luck on MacOS X machines or Windows machines with up-to-date OpenGL drivers.

We still have some ways to go, as there are issues in shader security and portability, not to mention figuring out what to do on platforms where OpenGL is not available.  (The latter is an interesting problem; we’re trying to ensure that the API can be implementable on top of a non-GL native 3D API, such as Direct3D, so that might be one option.)  But progress is being quickly made.

When paired with high-performance JavaScript, such as what we’ve seen come from both Firefox and other browsers, WebGL should allow for some exciting fully 3D-enabled web applications. We’ll have some simple demos linked for you soon.

11
Software Engineering / Making the OpenSocial API feel more at home
« on: April 20, 2017, 02:58:09 PM »
Chris Chabot has been doing a lot of experimentation with the new OpenSocial APIs. He has written up his experience and created two prototype wrappers.

The first short article has some general information and background.

The second article includes the first library, which you can tell to load (owner, viewer, ownerFriends and/or viewerFriends) information. It presents this information in a uniform way (instead of requiring different types of calls for different information fields) and with proper, consistent error handling. With it you can very easily create your first OpenSocial container application in a friendly, Prototype-style environment. You can take a direct look at the library itself.

The third article contains an Ajax.Request implementation, since Prototype’s version won’t work well (or at all) in the cross-domain environment of OpenSocial containers. It lets you reuse your current Prototype-based programs by mimicking Prototype’s Ajax calls as closely as possible given the constraints of the situation. Under the hood, _IG_FetchContent is used to talk back to the server.
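To illustrate the general idea (this is a made-up sketch, not Chris’s actual library; GadgetRequest is an invented name): a Prototype-style request object can be layered over _IG_FetchContent by translating its single-callback result into onSuccess/onFailure options:

```javascript
// Hypothetical sketch: wrap the container-provided _IG_FetchContent in
// a Prototype-style interface with onSuccess/onFailure options.
// The `fetch` parameter exists so the transport can be swapped out.
function GadgetRequest(url, options, fetch) {
  fetch = fetch || _IG_FetchContent; // default to the container API
  fetch(url, function (responseText) {
    if (responseText !== null && responseText !== undefined) {
      if (options.onSuccess) { options.onSuccess({ responseText: responseText }); }
    } else if (options.onFailure) {
      options.onFailure();
    }
  });
}
```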

It is good to see people take the raw APIs and make them feel more like their library of choice.

12
Software Engineering / Canvas Color Cycling, digital imaging
« on: April 20, 2017, 02:54:04 PM »
Interest in Canvas, as well as mobile apps, has led to a renaissance of old-school 8-bit graphics. Joe Huckaby of Effect Games has been playing around with color cycling, leading to some stunning effects.

Anyone remember color cycling from the 90s? This was a technique often used in 8-bit video games of the era to achieve interesting visual effects by cycling (shifting) the color palette. Back then video cards could only render 256 colors at a time, so a palette of selected colors was used. But the programmer could change this palette at will, and all the onscreen colors would instantly change to match. It was fast, and took virtually no memory.

There’s a neat optimization going on here too: instead of clearing and redrawing the entire scene with each frame, he only updates the pixels that change:

In order to achieve fast frame rates in the browser, I had to get a little crazy in the engine implementation. Rendering a 640×480 indexed image on a 32-bit RGB canvas means walking through and drawing 307,200 pixels per frame, in JavaScript. That’s a very big array to traverse, and some browsers just couldn’t keep up. To overcome this, I pre-process the images when they are first loaded, and grab the pixels that reference colors which are animated (i.e. are part of cycling sets in the palette). Those pixel X/Y offsets are stored in a separate, smaller array, and thus only the pixels that change are refreshed onscreen. This optimization trick works so well, that the thing actually runs at a pretty decent speed on my iPhone 3GS and iPad!
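Both ideas can be sketched in plain JavaScript (a simplified illustration, not Joe’s actual engine code; palettes here are just arrays of color values, and the function names are made up): rotate the colors in an animated palette range, and precompute at load time which pixel offsets reference animated entries so only those pixels are redrawn.

```javascript
// Rotate palette[start..end] by one slot (the last entry wraps around
// to the start) -- the classic color-cycling step.
function cyclePalette(palette, start, end) {
  var last = palette[end];
  for (var i = end; i > start; i--) {
    palette[i] = palette[i - 1];
  }
  palette[start] = last;
}

// One pass at load time: remember only the offsets of pixels whose
// palette index falls inside the cycling range, so each frame only
// those pixels need to be refreshed onscreen.
function animatedOffsets(indexedPixels, start, end) {
  var offsets = [];
  for (var i = 0; i < indexedPixels.length; i++) {
    if (indexedPixels[i] >= start && indexedPixels[i] <= end) {
      offsets.push(i);
    }
  }
  return offsets;
}
```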

13
Software Engineering / Speech Recognition with Javascript
« on: April 20, 2017, 02:49:35 PM »
Recently Google’s free text-to-speech API has made the rounds. The reverse is also possible: converting speech to text.

With speechapi.com’s javascript API, it is possible to build interesting speech-web mashups that include both speech-to-text as well as text-to-speech.

A combination of several technologies and open source tools makes this possible. In the browser, Flash is used to access the microphone and stream the audio to an RTMP server. Red5 is used because it’s a versatile media server that has the benefit of being open source and free.

Once that audio is received on the server, it needs to be converted to text. There are many speech recognition engines to choose from. Many are proprietary and provide very good accuracy results, but they are pricey and closed source. There are some state-of-the-art open source speech recognition engines too, such as Julius and Sphinx, to name a couple. The speechapi service uses Sphinx because it is license-friendly and has a strong community.

Now this is great: we can transmit audio and convert it to text, but we need to control the process and use the results in the web page. That is where JavaScript comes in. Speechapi.com provides a JavaScript API. There is a setupRecognition method that sets up the grammar used in the speech-to-text process. There is a simple grammar mode, where you can just provide a comma-separated list of words. JSGF is also supported and is useful for more complex grammars. There are also methods that communicate with the Flash control to indicate when to start and stop transmitting audio. You can also use the Flash control’s built-in press-to-speak button to specify the speech endpoints.
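For the simple grammar mode, the comma-separated word list could be assembled with a small helper like this (buildSimpleGrammar is a made-up name for illustration, not part of the speechapi.com API):

```javascript
// Normalize a word list into the comma-separated string the simple
// grammar mode expects: trimmed, lowercased, empty and duplicate
// entries dropped.
function buildSimpleGrammar(words) {
  var seen = {};
  var out = [];
  for (var i = 0; i < words.length; i++) {
    var w = words[i].trim().toLowerCase();
    if (w && !seen[w]) {
      seen[w] = true;
      out.push(w);
    }
  }
  return out.join(',');
}
```

The result is what you would hand to the recognition setup as the simple grammar.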

Recognition results are returned to your web page in a callback that you specify in the speechapi constructor. The results are passed from the server to the client as a JSON string. The result object contains the raw text results as well as other information that can be useful for your speech client, like pronunciation and “grammar tags” that can be useful for semantic interpretation of the results.

We think this technology is pretty cool and we encourage you to try it out. You can try it for free at speechapi.com, where you just include a few lines of JavaScript and HTML in your webpage to enable speech recognition. We are also open sourcing the package over the next few months, so sign up at our site if you’re interested.
Thanks for your interest
Younus

14
Software Engineering / SoundManager2 now with HTML5 Audio
« on: April 20, 2017, 02:46:47 PM »
Scott Schiller, the best moustache-d frontend engineer around, has updated his awesome SoundManager library. The latest SoundManager 2 version now comes with free HTML5 Audio support, which makes it an HTML5 Audio()-capable JavaScript Sound API that is backwards-compatible via a Flash fallback for MP3/MP4 formats. The existing SM2 API seamlessly uses HTML5 where supported (currently experimental) and, of course, works on iPad.

Highlights

Experimental HTML5 Audio() support, with Flash fallback for MP3/MP4 as required. (HTML5 disabled by default except for iPad + Palm Pre, but easily configurable.)
100% Flash-free, HTML5-only playback of MP3, MP4 (AAC) and WAV files possible on Apple iPad and Palm Pre (and Safari 4.1.5 on OS X 10.5; buggy behaviour observed with 4.1.5 on OS X 10.6, see https://bugs.webkit.org/show_bug.cgi?id=32159#c9 )
API is unchanged, transparent whether using HTML5 or Flash; SM2 handles switching of technology behind the scenes, depending on browser support.

Here is how it works:

soundManager.useHTML5Audio

Determines whether HTML5 Audio() support is used to play sound, if available, with Flash as the fallback for playing MP3/MP4 (AAC) formats. Browser support for HTML5 Audio varies, and format support (e.g. MP3, MP4/AAC, OGG, WAV) can vary by browser/platform.

The SM2 API is effectively transparent, consistent whether using Flash or HTML5 Audio() for sound playback behind the scenes. The HTML5 Audio API is roughly equivalent to the Flash 8 feature set, minus ID3 tag support and a few other items. (Flash 9 features like waveform data etc. are not available.)

SoundManager 2 + useHTML5Audio: Init Process

At DOM ready (if useHTML5Audio = true), a test for Audio() is done, followed by a series of canPlayType() tests to see if MP3, MP4, WAV and OGG formats are supported. If none of the “required” formats (MP3 + MP4, by default) are supported natively, then Flash is also added as a requirement for SM2 to start.

soundManager.audioFormats currently defines the list of formats to check (MP3, MP4 and so on), their possible canPlayType() strings (long story short, it’s complicated) and whether or not they are “required” – that is, whether Flash should be loaded if they don’t work under HTML5. (Again, only MP3 + MP4 are supported by Flash.) If you had a page solely using OGG, you could make MP3/MP4 non-required, but many browsers would not play them inline.
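The format-probing decision can be sketched roughly like this (a simplified illustration of the logic described, not SM2’s actual source; needsFlash and the shape of the formats map are made-up for this example):

```javascript
// Probe each format via canPlayType(); require Flash only when some
// "required" format isn't playable natively.
// `formats` maps name -> { types: [canPlayType strings], required: bool }
function needsFlash(audio, formats) {
  for (var name in formats) {
    var f = formats[name];
    if (!f.required) { continue; }
    var playable = false;
    for (var i = 0; i < f.types.length; i++) {
      // canPlayType() returns '', 'maybe' or 'probably'; any non-empty
      // string counts as possible support.
      if (audio.canPlayType(f.types[i])) { playable = true; break; }
    }
    if (!playable) { return true; } // fall back to Flash for this format
  }
  return false;
}
```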

SM2 will indicate its state (HTML 5 support or not, using Flash or not) in console.log()-style debug output messages when debugMode = true.

15
David Humphrey and his hit squad of audio gurus have some amazing new demos for us. Perfect for a Friday. This is all through the rich Mozilla Audio API work, which will hopefully be pushed into other browsers at some point in the not-so-distant future.
Charles Cliffe has some awesome audio-driven WebGL visualizations. David narrates:
What I like most about these (other than the fact that he’s written the music, js libs, and demo) is that these combine a whole bunch of JavaScript libraries: dsp.js, cubicvr.js and beatdetection.js, and processing.js. Some people will tell you that doing anything complex in a browser is going to be slow; but Charles is masterfully proving that you can do many, many things at once and the browser can keep pace.

Corban and Ricard Marxer have been busy exploring how far we can push audio write, and managed to also produce some amazing demos. The first is by Ricard, and is a graphic equalizer (video is here):
The second is by Corban, and shows a JavaScript based audio sampler. His code can loop forward or backward, change playback speed, etc. (video is here)

Chris McCormick has been working on porting Pure Data to JavaScript, and already has some basic components built. Here’s one that combines processing.js and webpd (video is here):
I think that my favourite demo by far this time around is one that I’ve been waiting to see since we first began these experiments. I’ve written in the past that our work could be used to solve many web accessibility problems. A few weeks ago I mentioned on IRC that someone should take a shot at building a text-to-speech engine in JavaScript, now that we have typed arrays. Yury quietly went off and built one based on the flite engine. When you run this, remember that you’re watching a browser speak with no plugins of any kind. This is all done in JavaScript (demo is here, video is here).

In order to do this he had to overcome some interesting problems, for example, how to load large binary voice databases into the page. The straightforward approach of using a JS array was brittle, with JS sometimes running out of stack space trying to initialize the array. After trying various obvious ways, Yury decided to use the web to his advantage, and pushed the binary data into a PNG, then loaded it into a canvas, where getImageData allows him to access the bytes very quickly, using another typed array. The browser takes care of downloading and re-inflating the data automatically.
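The PNG trick can be sketched with plain typed arrays (a simplified illustration of the idea, not Yury’s actual code; the canvas round-trip is simulated with a plain RGBA buffer, and bytesToRGBA/rgbaToBytes are invented names):

```javascript
// Pack arbitrary bytes into RGBA pixel data, 3 payload bytes per pixel
// (R, G, B), with alpha forced to 255 so premultiplied alpha can't
// distort the data. In the real trick the browser's PNG pipeline
// handles download and decompression, and getImageData() hands the
// pixel bytes back.
function bytesToRGBA(bytes) {
  var pixels = Math.ceil(bytes.length / 3);
  var rgba = new Uint8Array(pixels * 4);
  for (var i = 0; i < bytes.length; i++) {
    var px = Math.floor(i / 3), ch = i % 3;
    rgba[px * 4 + ch] = bytes[i];
  }
  for (var p = 0; p < pixels; p++) { rgba[p * 4 + 3] = 255; }
  return rgba;
}

// Recover the original bytes from the RGBA buffer, skipping the alpha
// channel of every pixel.
function rgbaToBytes(rgba, length) {
  var bytes = new Uint8Array(length);
  for (var i = 0; i < length; i++) {
    bytes[i] = rgba[Math.floor(i / 3) * 4 + (i % 3)];
  }
  return bytes;
}
```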

My favourite line is:
What began as a series of experiments by a small group of strangers, has now turned into something much larger.
What an awesome community you guys have… and we are all benefitting. Thank you.
