Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Muhammad Younus

Pages: [1] 2
1
Software Engineering / Re: What is Ontology?
« on: May 06, 2017, 10:40:48 AM »
What is the difference between an algorithm and an ontology?

2
vista done...

3
Beautiful!!

4
Software Engineering / Re: The future of wireless communications
« on: April 22, 2017, 11:28:54 AM »
Wireless communication may well become the only mode of communication within a few decades.

5
Software Engineering / Re: Tongue Drive System to Operate Computers
« on: April 22, 2017, 11:27:10 AM »
I have MATLAB code for a project, "eye-tracking-based mouse operation". If you want, I can share it with you.

6
Madam, I believe there is no single method that is 100% effective for engaging students in the classroom. In my opinion, classroom teaching should be multidimensional; as a teacher, one cannot follow the same rules or methods in every class.

7
A cross-functional team has members with a variety of skills, but that does not mean each member has all of the skills.
Specialists Are Acceptable on Agile Teams

It is perfectly acceptable to have specialists on an agile team. And I suspect a lot of productivity has been lost by teams pursuing some false holy grail of having each team member able to do everything.
If my team includes the world’s greatest database developer, I want that person doing amazing things with our database. I don’t need the world’s greatest database developer to learn JavaScript.

Specialists Make It Hard to Balance Work

However, specialists can cause problems on any team using an iterative and incremental approach such as agile. Specialists make it hard to balance the types of work done by a team. If your team does have the world’s greatest database developer, how do you ensure your team always brings into an iteration the right amount of work for that person without bringing in too much for the programmers, the testers, or others?
To better see the impact of specialists, let’s look at a few examples. In Figure 1, we see a four-person team where each person is a specialist. Persons 1 and 2 are programmers and can only program; this is indicated by the red squares and the coding prompt icon within them. Persons 3 and 4 are testers who do nothing but test; they are indicated by the green squares and the pencil and ruler icons within them. You can imagine any skills you’d like, but for these examples I’ll use programmers (red) and testers (green).
The four-person team in Figure 1 is capable of completing four red tasks in an iteration and four green tasks in an iteration. They cannot do five red tasks or five green tasks.

But if their work is distributed across two product backlog items as shown in Figure 2, this team will be able to finish that work in an iteration. However, any allocation of work that is not evenly split between red and green work will be impossible for this team to complete. This means the specialist team of Figure 1 could not complete the work in any of the allocations shown in Figure 3.

The Impact of Multi-Skilled Team Members

Next, let’s consider how the situation is changed if two of the specialist team members of Figure 1 are now each able to do both red and green work. I refer to such team members as multi-skilled individuals. Such team members are sometimes called generalists, but I find that misleading. We don’t need someone to be able to do everything. It is often enough to have a team member or two who has a couple of the skills a team needs rather than all of the skills.
Figure 4 shows this team. Persons 1 and 2 remain specialists, only able to do one type of work each. But now, Persons 3 and 4 are multi-skilled and each can do either red or green work. This team can complete many more allocations of work than could the specialist team of Figure 1. Figure 5 shows all the possible allocations that become possible when two multi-skilled members are added to the team.
By replacing just a couple of specialists with multi-skilled members, the team is able to complete any allocation of work except work that would require 0 or 1 unit of either skill. In most cases, a team can avoid planning an iteration that is so heavily skewed simply through careful selection of the product backlog items to be worked on. In this example, if the first product backlog item selected was heavily green, the team would not select a second item that was also heavily green.
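To make the arithmetic behind these figures concrete, here is a small illustrative sketch in JavaScript. The team mixes and the two-tasks-per-person capacity are assumptions inferred from the description above, not taken from the figures themselves:

// Hypothetical model: each member completes tasksPerPerson units per iteration.
// redOnly and greenOnly are specialists; multi members can take either color of work.
function isFeasible(red, green, team) {
  const { redOnly, greenOnly, multi, tasksPerPerson } = team;
  const capacity = (redOnly + greenOnly + multi) * tasksPerPerson;
  return red + green === capacity                       // the iteration is fully loaded
      && red <= (redOnly + multi) * tasksPerPerson      // enough people who can do red work
      && green <= (greenOnly + multi) * tasksPerPerson; // enough people who can do green work
}

// All-specialist team (two red-only, two green-only): only the even 4/4 split works.
const specialists = { redOnly: 2, greenOnly: 2, multi: 0, tasksPerPerson: 2 };
console.log(isFeasible(4, 4, specialists)); // true
console.log(isFeasible(5, 3, specialists)); // false

// One specialist of each color plus two multi-skilled members (the mix consistent
// with the allocations described above): any split from 2/6 through 6/2 works.
const mixed = { redOnly: 1, greenOnly: 1, multi: 2, tasksPerPerson: 2 };
console.log(isFeasible(6, 2, mixed)); // true
console.log(isFeasible(7, 1, mixed)); // false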
The Role of Specialists on an Agile Team

From this, we can see that specialists can exist on high-performing agile teams. But it is the multi-skilled team members who make that possible. There is nothing wrong with having a very talented specialist on a team, and there are actually many good reasons to value such experts.
But a good agile team will also include multi-skilled individuals. These individuals can smooth out the workload when a team needs to do more or less of a particular type of work in an iteration. Such individuals may also benefit a team in bringing more balanced perspectives to design discussions.

Evidence from My Local Grocery Store

As evidence that specialists are acceptable as long as they are balanced by multi-skilled team members, consider your local grocery store. A typical store will have cashiers who scan items and accept payment. The store will also have people who bag the groceries for you. If the bagger gets behind, the cashier shifts and helps bag items. The multi-skilled cashier/bagger allows the store to use fewer specialist baggers per shift.

What Role Do Specialists Play on Your Team?

What role do specialists play on your team? What techniques do you use to allow specialists to specialize? Please share your thoughts in the comments below.

8
Whatever you do, don’t call this an ‘interesting’ idea

My understanding of the word interesting came not from school but from a 14-inch black-and-white television showing Star Trek reruns in the late 1970s. ‘Fascinating is a word I use for the unexpected,’ I heard Mr Spock explain. ‘In this case, I should think interesting would suffice.’

Spock was the epitome of logic in the original Star Trek series. Although he had a human mother, it was the Vulcan half that was firmly in control. If he said that something was interesting, as I understood it, then he was describing an expected, objective fact. That notion is embedded deeply in today’s popular culture: cable news segments, websites and Facebook posts compete for our attention with surprising but allegedly genuine – interesting – truths.

It didn’t occur to me at the time that when Spock said that something was interesting, he wasn’t talking about that thing, he was talking about himself. Forty years later, I see things more clearly. The well-meaning writers on Star Trek set a bad example for us all, and the taint has only kept spreading. Calling something interesting is the height of sloppy thinking. Interesting is not descriptive, not objective, and not even meaningful.

Interesting is a kind of linguistic connective tissue. When introducing an idea, it’s easier to say ‘interesting’ than to think of an introduction that’s simultaneously descriptive but not a spoiler. I hear interesting all the time at conferences when someone is introducing a speaker. I hear interesting on the radio, when a host introduces an upcoming interview. These flighty little protocols happen so rapidly that they transit almost below the level of conscious discourse, serving only to prime me to pay attention.



In practice, interesting is a synonym for entertaining. This conflation has become especially problematic in higher education. Back in 2010, an article in US News & World Report said that the number-one sign of a bad professor is that ‘the professor is boring … Even in the very first classes, you can tell if the professor presents the material in an interesting way.’ Likewise, a blog post from Concordia University in Portland about teaching strategies offers advice on ‘how to become a professor who keeps lectures interesting’. The Princeton Review’s series of college guides (eg The Best 381 Colleges) gives every college and university a ‘Profs interesting rating’.

In today’s data-driven educational enterprise, faculty who do not entertain frequently do not get promoted – or even retained – because of the influence of student evaluations. The same goes for information technology workshops and conferences I attend, where questions such as ‘I found the speaker interesting’ on evaluation forms help to determine who is invited back in subsequent years. TED talks are the logical conclusion of this fashion, inspiring lectures with high production values and well-rehearsed presentations. They hold one’s interest, but they convey little information. Seriously, what do you remember from the last five ‘interesting’ TED talks that you watched?

What’s the result of society’s increasing emphasis on entertainment over substance? Novelty and innovation are valued above rigour; boring truth loses out to flamboyant falsehoods. I see it in today’s click-bait headlines, and even in the practice of science.

People say interesting to convey importance – and they shouldn’t. I review papers for academic conferences and scientific journals, and I’m routinely frustrated when other reviewers write dismissively that an article under consideration ‘isn’t very interesting’. That word, it does not mean what these reviewers mean. What they’re trying to say is that the scientific findings aren’t presented effectively, or that the results are only incremental, or (heaven help us) that the findings are not new, but merely replicate work that’s been done by others.

Replication and repeatability are thought by many laypersons to be a shared ideal among many scientists. In practice, few scientific studies are ever replicated. Last year, a survey by Vox.com of 270 scientists found few attempting replication studies because of the difficulty in funding and publishing. Funding agencies pride themselves on sponsoring transformative, breakthrough research – interesting work that, almost by definition, doesn’t repeat (read: replicate) what’s been done before. And journals generally don’t print articles that merely replicate findings that have been previously published; such articles aren’t considered sufficiently interesting.

The results are bad for the practice of science, because the scientific method relies on replication. Without it, it takes a lot longer for erroneous studies to be corrected. But getting things right is not interesting, it’s pedantic.

So, when you write or speak, don’t say that something is interesting. It might attract your interest, sure, but whether your audience finds something interesting is determined by a complex set of preconditions including their background knowledge and other items competing for their attention. Their interest depends, too, on their pre-existing emotional state. The Diagnostic and Statistical Manual of Mental Disorders (the DSM-5) states that ‘markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day’ over two weeks or more is one of the diagnostic symptoms of major depressive disorder. Meaning that, if your audience doesn’t find your astronomy talk interesting, the fault might indeed be in themselves, and not in the stars.

Conversely, if someone tells you ‘this is interesting’, remember that they aren’t describing the thing at all. They are describing the effect of that thing on them. Even though we hear it a lot from the would-be Vulcans around us, interesting is a subjective, emotional word, not the objective, logical word we want it to be.

It must be Spock’s human half talking.
[collected]

9
Software Engineering / HTML5 Video Autobuffers, Always
« on: April 20, 2017, 03:18:26 PM »
John Gruber of Daring Fireball says that the HTML5 video element, simple as it is, always autobuffers on Safari, Chrome, and Firefox. It’s something others have also come up against. Any videos on the page will start downloading right away, regardless of the “autobuffer” attribute’s setting:

The HTML5 spec defines an autobuffer attribute for the video and other media elements (bold emphasis added):

The autobuffer attribute is a boolean attribute. Its presence hints to the user agent that the author believes that the media element will likely be used, even though the element does not have an autoplay attribute. (The attribute has no effect if used in conjunction with the autoplay attribute, though including both is not an error.) This attribute may be ignored altogether.

It would appear, in my testing, that all three of these browsers take the spec up on the aforebolded offer to ignore this attribute. Even if you do not explicitly turn this attribute on, Safari, Chrome, and Firefox will still auto-buffer the content for your <video> (and <audio>) elements. There is no way to suppress this using HTML markup.

As Gruber points out, this might seem like a good thing for fast UI: videos start playing as soon as the user wants them to. That would be true in a world of unlimited bandwidth, but for now, this feature is likely to be a massive bandwidth hog. There is a nice workaround, albeit one that peels back the utter simplicity of a single <video> tag:

In the HTML markup, rather than a <video> element, instead use an <img> element with the intended poster frame.
Add an onclick JavaScript handler to the <img> element, which, when invoked, does some DOM jiggery-pokery to remove the just-clicked-upon <img> element and replace it with a <video> element that sources the intended video files.
And, in fact, that is exactly what I resorted to for my PastryKit videos. Do a View Source on that page to see the solution.
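A minimal sketch of that workaround might look something like the following; the markup and file names here are placeholders for illustration, not Gruber’s actual PastryKit code:

<!-- A clickable poster frame stands in for the video until the user asks for it. -->
<img id="poster" src="poster.jpg" alt="Play video" onclick="swapInVideo(this)">

<script>
function swapInVideo(img) {
  // Build the <video> element only at click time, so nothing buffers on page load.
  var video = document.createElement('video');
  video.src = 'movie.mp4';   // placeholder source
  video.controls = true;
  video.autoplay = true;     // start playback now that the user has clicked
  img.parentNode.replaceChild(video, img);
}
</script>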

It’s difficult to see <video> becoming the web’s standard video component if every video buffers as soon as the page loads.

10
The WebM project is dedicated to developing a high-quality, open video format for the web that is freely available to everyone.

The WebM launch is supported by Mozilla, Opera, Google and more than forty other publishers, software and hardware vendors.

WebM is an open, royalty-free, media file format designed for the web.

WebM defines the file container structure, video and audio formats. WebM files consist of video streams compressed with the VP8 video codec and audio streams compressed with the Vorbis audio codec. The WebM file structure is based on the Matroska container.
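In practice, pointing a <video> element at a WebM file looks roughly like this; a generic example with placeholder file names, not code from the announcement:

<video controls width="640" height="360">
  <!-- WebM: VP8 video plus Vorbis audio in a Matroska-based container -->
  <source src="clip.webm" type='video/webm; codecs="vp8, vorbis"'>
  <!-- Fallback for browsers without WebM support -->
  <source src="clip.mp4" type="video/mp4">
  Your browser does not support the video element.
</video>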



It happened. Today, Google is up on stage at I/O unveiling a new WebM project alongside a slew of partners (notably: Mozilla and Opera on the browser side) that gets the On2 codec out into the open. This is huge news for the fight for Open Video, and everyone will now have eyes on Safari.

YouTube will be a huge push here, and you can go to their html5 version: http://www.youtube.com/html5 and check it out. Today it is available in trunk builds on Chromium and Firefox. Soon, an Opera beta, Chrome dev release, and more.

The project is going after:

Openness and innovation. A key factor in the web’s success is that its core technologies such as HTML, HTTP, and TCP/IP are open for anyone to implement and improve. With video being core to the web experience, a high-quality, open video format choice is needed. WebM is 100% free, and open-sourced under a BSD-style license.

Optimized for the web. Serving video on the web is different from traditional broadcast and offline mediums. Existing video formats were designed to serve the needs of these mediums and do it very well. WebM is focused on addressing the unique needs of serving video on the web.

Low computational footprint to enable playback on any device, including low-power netbooks, handhelds, tablets, etc.*

Simple container format

Highest quality real-time video delivery

Click and encode. Minimal codec profiles, sub-options; when possible, let the encoder make the tough choices.

* Note: The initial developer preview releases of browsers supporting WebM are not yet fully optimized and therefore have a higher computational footprint for screen rendering than we expect for the general releases. The computational efficiencies of WebM are more accurately measured today using the development tools in the VP8 SDKs. Optimizations of the browser implementations are forthcoming.

Congrats Open Web.

Update: Flash will ship VP8, as will IE9. Now everyone looks at the Safari team :)

(One thing though about IE9 support: “In its HTML5 support, IE9 will support playback of H.264 video as well as VP8 video when the user has installed a VP8 codec on Windows.”). That is a bummer.

11
Software Engineering / HTML5 Video; YouTube Perspective
« on: April 20, 2017, 03:15:43 PM »
The YouTube API blog put their point of view on HTML5 video on the table. I would love to know why they felt like this was the right time, and what their angle is. I find myself often confused with the Google strategy. On one hand they are doing amazing things for the Open Web (Chrome, tools, Steve Souders and Web performance work), but on the other we see an alignment with Adobe and Flash (a differentiator to Apple).

Man, I am torn. The pragmatist in me totally gets it. But the part of me that realizes it was the Web’s openness that allowed the likes of Google to come from nothing to the powerhouse that it is today in a decade gets confused.

If you are a Flash fan, you see this as “See! Flash is here to stay!” Those of us who want to see Web standards get better fast see some of the momentum (the fact that we have audio and video, and the WebM codec) and the features that we still need to get in:

Robust video streaming

Closely related to the need for a standard format is the need for an effective and reliable means of delivering the video to the browser. Simply pointing the browser at a URL is not good enough, as that doesn’t allow users to easily get to the part of the video they want. As we’ve been expanding into serving full-length movies and live events, it also becomes important to have fine control over buffering and dynamic quality control. Flash Player addresses these needs by letting applications manage the downloading and playback of video via Actionscript in conjunction with either HTTP or the RTMP video streaming protocol. The HTML5 standard itself does not address video streaming protocols, but a number of vendors and organizations are working to improve the experience of delivering video over HTTP. We are beginning to contribute to these efforts and hope to see a single standard emerge.

Content Protection

YouTube doesn’t own the videos that you watch – they’re owned by their respective creators, who control how those videos are distributed through YouTube. For YouTube Rentals, video owners require us to use secure streaming technology, such as the Flash Platform’s RTMPE protocol, to ensure their videos are not redistributed. Without content protection, we would not be able to offer videos like this.

Encapsulation + Embedding

Flash Player’s ability to combine application code and resources into a secure, efficient package has been instrumental in allowing YouTube videos to be embedded in other web sites. Web site owners need to ensure that embedded content is not able to access private user information on the containing page, and we need to ensure that our video player logic travels with the video (for features like captions, annotations, and advertising). While HTML5 adds sandboxing and message-passing functionality, Flash is the only mechanism most web sites allow for embedded content from other sites.
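The sandboxing and message-passing functionality referred to here is the sandboxed iframe plus window.postMessage. A rough, generic sketch with placeholder origins follows; it is not YouTube’s embed code:

<!-- Embedding page: the sandbox attribute limits what the embedded player may do. -->
<iframe id="player" src="https://player.example.com/embed" sandbox="allow-scripts"></iframe>

<script>
// The embedding page listens for messages coming from the player frame.
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://player.example.com') return; // verify the sender
  console.log('player says:', event.data);
});
// Inside the player frame, state changes would be reported with something like:
// parent.postMessage({ state: 'playing' }, 'https://embedding-site.example.com');
</script>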

Fullscreen Video

HD video begs to be watched in full screen, but that has not historically been possible with pure HTML. While most browsers have a fullscreen mode, they do not allow javascript to initiate it, nor do they allow a small part of the page (such as a video player) to fill the screen. Flash Player provides robust, secure controls for enabling hardware-accelerated fullscreen displays. While WebKit has recently taken some steps forward on fullscreen support, it’s not yet sufficient for video usage (particularly the ability to continue displaying content on top of the video).
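For what it’s worth, browsers later closed this particular gap with the Fullscreen API, which lets script request fullscreen in response to a user gesture. A generic sketch of that later API, with a placeholder button id:

document.querySelector('#fullscreen-button').addEventListener('click', function () {
  var video = document.querySelector('video');
  if (video.requestFullscreen) {
    video.requestFullscreen();        // standard Fullscreen API
  } else if (video.webkitRequestFullscreen) {
    video.webkitRequestFullscreen();  // older WebKit-prefixed variant
  }
});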

Camera and Microphone access

Video is not just a one-way medium. Every day, thousands of users record videos directly to YouTube from within their browser using webcams, which would not be possible without Flash technology. Camera access is also needed for features like video chat and live broadcasting – extremely important on mobile phones which practically all have a built-in camera. Flash Player has provided rich camera and microphone access for several years now, while HTML5 is just getting started.
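The corresponding browser capability arrived later as getUserMedia. A generic sketch of that later API, assuming a <video id="preview"> element on the page:

// Ask the user for camera and microphone access, then show a live preview.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(function (stream) {
    var preview = document.querySelector('#preview'); // assumed preview element
    preview.srcObject = stream;
    preview.play();
  })
  .catch(function (err) {
    console.error('Camera/microphone access denied or unavailable:', err);
  });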

Time to knuckle down and deliver great new video features in the browsers!

12
Software Engineering / Secure Document Sharing
« on: April 20, 2017, 03:12:18 PM »
Aaron created this originally to allow him to share documents between his own computers in a secure manner.

The web application uses local storage in IE 5+ and Firefox 2 to store an encryption key. This means that your notes are encrypted with a key that never leaves your machine.

In traditional web applications your data is not encrypted at all. It’s sitting there on the server in plaintext for all to see. If the provider screws up, your data will be leaked, as happened with AOL, for example.

Even if it were encrypted, you’d either have to enter the key every time you wanted to view the data, or else store the key on the server, which would defeat the purpose of encrypting it in the first place.

Local storage offers a way around this issue, and I wrote halfnote as a sort of proof of concept for client side encryption in web applications.
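Halfnote’s actual implementation isn’t shown here, but the general idea (derive the key from the password on the client and send only ciphertext to the server) can be sketched with today’s Web Crypto API. This is an illustrative stand-in, not Aaron’s code:

// Illustrative only: derive a key from the account password in the browser and
// encrypt the note before it ever leaves the machine.
async function encryptNote(password, plaintext) {
  const enc = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const baseKey = await crypto.subtle.importKey(
    'raw', enc.encode(password), 'PBKDF2', false, ['deriveKey']);
  const key = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt: salt, iterations: 100000, hash: 'SHA-256' },
    baseKey, { name: 'AES-GCM', length: 256 }, false, ['encrypt']);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv: iv }, key, enc.encode(plaintext));
  // Only the ciphertext (plus the salt and iv needed to re-derive and decrypt)
  // would be sent to the server; the password and key stay on the client.
  return { salt: salt, iv: iv, ciphertext: ciphertext };
}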

Features

Client-side encryption. Your notes are strongly encrypted with your account password before they leave your browser. Even if I totally screw up someday and pull an AOL, your data will remain pretty safe.
Auto-save. Save buttons are lame. Halfnote saves your notes automatically whenever you stop typing.
Synchronization. You can have Halfnote open on multiple computers and they will stay in sync with each other. You don’t have to worry about accidentally overwriting notes you entered on another computer.

13
Software Engineering / Google Apps – Premier Edition
« on: April 20, 2017, 03:09:59 PM »
From the You-Know-When-Ajax-Has-Gone-Mainstream-Dept, Google announced today it will be offering businesses a premium service for its key productivity applications, at $50/user/year. The package includes:

Access to office-style applications – Google Docs & Spreadsheets, Google Page Creator. No presentation package yet – perhaps Google should acquire S5 :-).

Access to communication applications – GMail (@your-own-domain), Google Calendar, Google Talk (voice/IM).
Access to Google Homepage (maybe corporations could deck this out to become their intranet homepage?)
Control panel to manage the domain
Ads can be turned off
Storage at 10GB/user
Integration with organisation’s sign-on and email infrastructure

Phone support
The apps themselves are available to anyone, but the integration and extra services come with the premium service. Google provides this comparison table.

The giant elephant in this room is your company’s data sitting on Google’s servers. In the absence of an “Apps Appliance” sitting inside the firewall, there will always be a major proportion of the market unwilling to commit to a solution like this – increased risk of data loss, theft, and manipulation. Google’s pure-external model keeps things nice and simple, but it’s not for everyone.

Zoho, for example, offers an “in-premise edition” that runs inside an organization’s network, as does Zimbra’s collaboration app. It’s also becoming possible to assemble your own stack with apps like Wikicalc and the various wikis, though nothing as comprehensive as Google’s offering. It’s feasible that MS will move their apps in that direction too.

The comparison among these approaches will be worth watching in coming months. For now, though, it’s great to see how much Ajax and the web have evolved in the past two years, with Google providing a lot of the inspiration. From TechCrunch: “Beyond competition and concerns, tonight is a good time to recognize the incredible force of innovation that Google is as well. Its nearly full-service suite of sophisticated, integrated online services is something of historic proportion.”

14
Software Engineering / Search for the Holy Mail (template)
« on: April 20, 2017, 03:08:41 PM »
Glen Lipka has been frustrated with the task of producing quality HTML email that works across various email clients, which of course got even harder when Outlook 2007 took email design back a few years.

Anyway, Glen thinks that he nailed it:

Outlook 2007 actually has a little more CSS support than I thought.  Just because I don’t get positioning and float and a decent DIV or the right box model or margins, doesn’t mean that I can’t still make it work.  Using a couple of tables, borders, padding and width, I think I came up with a solid solution that still looks like clean html.
I refuse to use spacer.gif.  Spacer.gif can kiss my shiny metal ass.  Boo spacer!  As a side note, I sometimes interview web developers for positions.  I look at their html.  If I see spacer.gif I say, “Nope, they stink”.  Sorry, it’s a pet peeve.
Opening up your email html in Word 2007 is NOT the same thing as opening up your html as an email in Outlook 2007.  They are really really close, but they have differences.  I kept seeing them, so I stopped trying to use Word 2007.
There is a bug in Outlook 2007.  If you have a table, and each cell has padding of 10px and then you put a cell in the middle to be 0px, it shortens the height of the cell and basically makes a HOLE in your table.  I was dumbstruck by this one.  It seemed impossible to do, but it does it.  The fix is to keep the padding on top and bottom, but remove left and right.  The bug is related to height, not to width.  It shortens vertically, but not horizontally.
Gmail is evil.  They only allow inline CSS.  They do this to avoid overlapping CSS rules. They could have dealt with overlap CSS rules using a rewrite scheme that put a prefix in front of all the classes.  It just made the html really messy.  I did my best in my template above to make it clean.  But still, that’s lame.
Gmail strips all height css rules.  Why height??  What did height ever do to them?  I got around this problem using padding, but in the dynamic app, it means we need to calculate specific padding rules based on the height the user requested minus the height of existing content.  Not trivial, but doable.  Why does Gmail allow width?  What’s the deal with height?
Borders can not be defined as 0px width.  In Outlook 2007, if you declare a border as 0px width, it shows up anyway.  I couldn’t figure that out, so I said, “Ok, I won’t do that.”  I saved an example which works in the browser, but not Outlook 2007.
Divs can have borders, but not padding in Outlook 2007.  Why not Microsoft?  Come on, work with me here.  Meet me halfway.  YUCK!
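A minimal sketch of the kind of table-based, inline-CSS layout being described (generic markup, not Glen’s actual template): padding instead of height, no spacer.gif, and every style declared inline so Gmail keeps it.

<!-- Two-column layout built from a table; all styles are inline for Gmail. -->
<table width="600" cellpadding="0" cellspacing="0" style="border: 1px solid #cccccc;">
  <tr>
    <!-- Use top/bottom padding rather than height, which Gmail strips. -->
    <td width="400" style="padding: 10px; font-family: Arial, sans-serif;">
      Main story text goes here.
    </td>
    <td width="200" style="padding: 10px; background-color: #f4f4f4;">
      Sidebar content goes here.
    </td>
  </tr>
</table>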

15
Software Engineering / Google and Mozilla 3D Round-up
« on: April 20, 2017, 03:07:03 PM »
Years ago, we covered an announcement about Mozilla’s plans to basically put OpenGL ES in the browser and call it Canvas 3D and to do so working with a new working group over at the OpenGL standards body, Khronos.

This week, we covered Google’s own 3D announcement, a plug-in offering a high-level scene graph API and embedded V8 run-time.

And of course, don’t forget about Opera’s 3D work, which we covered back in November 2007.

So now there are three approaches to 3D:

Mozilla: Low-level, OpenGL wrapper
Opera: Mid-level proprietary scene-graph-ish API
Google: The full COLLADA monty
Where should the web go? Mozilla’s Chris Blizzard compares the debate to Canvas vs. SVG:

Canvas is a very simple API, much like what we’ve proposed to Khronos for 3D support. It’s well-scoped, well understood and integrates very well with other web technologies. And it’s been getting a huge amount of traction on the web. People are writing all kinds of really neat technology on top of it, including useful re-usable libraries for visualization. Have a look through Google’s own promotional site for Chrome – a huge number of them use canvas. It has traction. And we’ve gone through a couple of iterations – we’ve added support for text and a couple of other odds and ends once we understood what people were trying to do with it.

Now compare this to SVG and SMIL. Each of those specs are multi-hundred page documents with very large APIs and descriptions of how to translate their retained-mode graphics into something that’s usable on the web. (SVG 1.1 is a 719 page PDF. SVG 1.2 Tiny is 449 pages. The spec for SMIL is a 2.7MB HTML file.) We’ve seen some implementation of SVG and SMIL in browsers, but it’s been slow in coming and hasn’t seen full interoperability testing nor any real pick up on the web. The model for these specs was wrong, and I think it shows.
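For comparison, the sort of minimal canvas usage Blizzard has in mind fits in a few lines; a trivial, generic example:

<canvas id="c" width="200" height="100"></canvas>
<script>
var ctx = document.getElementById('c').getContext('2d');
ctx.fillStyle = 'green';
ctx.fillRect(10, 10, 100, 50);   // draw a filled rectangle
ctx.fillText('canvas', 10, 90);  // text support, added to the spec later as noted above
</script>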

Chris doesn’t directly say that Google’s approach is “wrong”, but he wonders if the Google proposal of a bigger and more ambitious API would represent too great a compatibility burden for browser vendors and developers.

In the comments of his post, Henry Bridge of the Google O3D team replied; here’s a lightly edited excerpt:

We agree that to keep a standards process focused, APIs should be as minimal as possible while remaining useful, and so we would likely keep things like that out of any first attempt at a standard and, as you say, let it evolve over time. But the usefulness question brings up an important, and we think, unresolved point. We’d love to build the animation and skinning system in JS, but we just couldn’t get a JS-based animation system fast enough — even on our retained-mode API. Javascript is getting faster all the time and we love that, but until someone builds some apps it’ll be hard to know what’s fast enough.

Standardizing [an Open GL-like] immediate mode API for JS makes total sense. It’s a well defined problem, lots of people know GL, and we think it will be useful. But some of the demos we wrote _already_ don’t run well without a modern JS implementation, and moving to [Open GL] won’t help that (but we’d love to be proven wrong). That’s why we think it makes sense to explore both an immediate and a retained mode 3D, and make sure they work well together.

What do you think?
