Show Posts


Messages - najnin

Telecom Forum / Different Kinds of Switches in Telecommunication
« on: July 28, 2015, 02:47:54 PM »
Digital Switch
A digital switch is a device that handles digital signals generated at or passed through a telephone company central office and forwards them across the company's backbone network. It receives the digital signals from the office's channel banks that have been converted from users' analog signals and switches them with other incoming signals out to the wide area network.
Digital switches are described in terms of classes based on the number of lines and features that are provided. A private branch exchange (PBX) is a digital switch owned by a private company. A centrex is a digital switch at the central office that manages switching for the private company from the central office.

Softswitch (software switch) is a generic term for any open application program interface (API) software used to bridge a public switched telephone network (PSTN) and Voice over Internet Protocol (VoIP) by separating the call-control functions of a phone call from the media gateway (transport layer).

Switching fabric
Switching fabric is the combination of hardware and software that moves data coming into a network node out through the correct port (door) to the next node in the network.
Switching fabric includes the switching units (individual boxes) in a node, the integrated circuits that they contain, and the programming that allows switching paths to be controlled. The switching fabric is independent of the bus technology and infrastructure used to move data between nodes and also separate from the router. The term is sometimes used to mean collectively all switching hardware and software in a network.
The term uses a fabric metaphor to suggest the possible complexity and web-like structure of switching paths and ports within a node. The switching fabric typically includes data buffers and the use of shared memory.

Intelligent switch
An intelligent switch is a high-level storage area network (SAN) routing switch that provides features such as storage virtualization, quality of service (QoS), remote mirroring, data sharing, protocol conversion, and advanced security. Intelligent switches are an important part of storage area management (SAM), a methodology that is gaining in importance as networks become increasingly complex and expensive to deploy, operate, and maintain.
Intelligent switches can make it possible to manage storage in heterogeneous environments, reduce SAM costs, and provide expandability and scalability for existing SANs in large and growing businesses. However, intelligent switches are still in the evolutionary stage, and may not be an ideal solution for smaller enterprises, or in SANs not expected to grow or change substantially in the immediate future.
The intelligent switches in some sophisticated SANs are the latest product in a technology that is decades old. Primitive intelligent switches first appeared in telephone networks during the 1980s, and were used for automatic call routing. Similar switches found applications in other communications networks, including the Internet as it evolved during the 1990s.

Telecom Forum / How to measure voice quality -- MOS
« on: July 28, 2015, 02:08:51 PM »
There have been some attempts at standardizing the measurement of voice quality. One of the best known is the MOS scale, which is based on subjective measurements. We can also mention the E-model, since it brings in objective parameters such as network delay and packet loss. Something interesting about this model is that it provides for converting its results to the MOS scale, which gives us a standard scale for quantifying voice quality.

MOS scale
The MOS scale is actually an ITU recommendation, specifically ITU-T P.800. It defines a voice quality scale based on subjective samples collected through a series of techniques known as Absolute Category Rating (ACR).
For this, a group of people is brought together and asked to rate voice quality subjectively. Before the evaluation starts, they must listen to some examples predefined in the recommendation so that they have a frame of reference.
Once this is done, a series of phrases (also predefined by the recommendation) is transmitted through the telephone line, and the listeners proceed to rate the voice quality.

 The following is a summary of the MOS scale.

MOS rating   Quality      Effort

5            Excellent    No effort needed
4            Good         Attention necessary, but no significant effort needed
3            Acceptable   Moderate effort
2            Poor         Considerable effort
1            Bad          Cannot be understood

A disadvantage of the MOS scale is, without a doubt, the amount of time needed to determine voice quality on even a single line. Imagine coordinating a series of tests with a large group of people who must first be trained, just to evaluate the voice quality of one line.
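The E-model mentioned above offers a faster, objective alternative: network impairments are condensed into a single R-factor, which can then be converted into an estimated MOS. A minimal Python sketch of the standard R-to-MOS conversion formula from ITU-T G.107 (the function name is our own):

```python
def r_to_mos(r):
    """Convert an E-model R-factor (0..100) to an estimated MOS (ITU-T G.107)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# A default, unimpaired narrowband connection has R of about 93.2,
# which maps to an estimated MOS of about 4.4.
print(round(r_to_mos(93.2), 2))  # 4.41
```

This gives an instant estimate from measurable parameters such as delay and packet loss, instead of hours of listening tests.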

I liked this post and learned a lot! Thank you.

It does seem affordable!

Good to know. This will come in handy! Thanks!

Telecom Forum / Features of 5th generation Mobile Communication
« on: June 30, 2015, 12:00:14 PM »
From a user's point of view, the major difference between current generations and the expected 5G techniques must be something more than increased maximum throughput. Other requirements include:

•   Lower battery consumption.
•   Lower outage probability; better coverage and higher data rates available at the cell edge.
•   Multiple concurrent data transfer paths.
•   Around 1 Gbps data rate while in mobility.
•   More security; better cognitive radio/SDR security.
•   Higher system-level spectral efficiency.
•   Worldwide wireless web (WWWW): wireless-based web applications with full multimedia capability beyond 4G speeds, and more applications combined with artificial intelligence (AI), as human life will be surrounded by artificial sensors that could be communicating with mobile phones.
•   Not harmful to human health.
•   Cheaper traffic fees due to low infrastructure deployment costs.

The 5G core is to be a re-configurable, multi-technology core. It could be the convergence of new technologies such as nanotechnology, cloud computing and cognitive radio, based on an all-IP platform.

Unified Frame Vision for Lower Part of Spectrum (< 6 GHz)

•   Classical "bit pipe" traffic (type I) with high spectral efficiency exploits orthogonality and synchronicity wherever possible, e.g. when serving cell-centre users.
•   Vertical layering at common time-frequency resources generates a non-orthogonal signal format supporting interference limited transmissions more efficiently (heterogeneous cell structures and cell edge). For high-volume data applications in those cell areas (type II), a multi-cell, multiuser transceiver concept is required.
•   Machine-Type Communication (MTC) is expected to be one dominant application of 5G systems. For this sporadic traffic type (type III), a contention based-access technique is attractive, saving overhead by dropping the strict synchronicity requirement.
•   For sensor-type traffic (type IV), the open Weightless standard [3] has shown that, from an energy-efficiency perspective, it is beneficial to stretch transmissions in time by spreading.

High Data Rate Communication in Higher Part of Spectrum (> 10 GHz)

Use the mm-wave bands
•   Access link 
•   Fronthaul link 
•   Backhaul link 
•   Device to device links

 Establish an overlay network where and when high capacity / data rate is needed 
•   Seamless integration into 3GPP standards
•   Full indoor & outdoor mobility support
•   Cost & energy reduction
Fig. 2

IT Forum / Re: Hide Drives
« on: June 30, 2015, 11:03:11 AM »
I can try this when needed. Thanks.

A useful post!

Can you talk with everyone on your friend list using FB Messenger?

Telecom Forum / Re: Ericsson Radio Dot System in Bangladesh
« on: June 15, 2015, 02:15:48 PM »
Good to know. I liked this post.

Telecom Forum / Re: Find your disgusting caller location & name
« on: June 15, 2015, 02:14:15 PM »
A useful post! Thanks.

Telecom Forum / Minimally Invasive Education for Primary Children
« on: May 02, 2015, 02:00:04 PM »
Minimally invasive education (MIE) is a form of learning in which children operate in unsupervised environments. The methodology arose from an experiment done by Sugata Mitra while at NIIT in 1999, often called The Hole in the Wall, which has since gone on to become a significant project with the formation of Hole in the Wall Education Limited (HiWEL), a cooperative effort between NIIT and the International Finance Corporation, employed in some 300 'learning stations', covering some 300,000 children in India and several African countries.

Professor Mitra, Chief Scientist at NIIT, is credited with proposing and initiating the Hole-in-the-Wall programme.

The Experiment
When Professor Mitra met the British science-fiction writer Arthur C. Clarke, Clarke told him that primary education should be self-organized: a student should be able to achieve it from his surroundings, without a teacher. From 1982 onwards, Professor Mitra pondered how this could be done. Finally, on 26 January 1999, his team carved a "hole in the wall" that separated the NIIT premises from the adjoining slum in Kalkaji, New Delhi. Through this hole, a freely accessible computer with a touch pad and an internet connection was put up for use. Within a few hours, some young boys noticed it and began experimenting with it. The computer proved popular among the slum children, and with no prior experience they learned to use it on their own. This prompted Mitra to propose the following hypothesis: the acquisition of basic computing skills by any set of children can be achieved through incidental learning, provided the learners are given access to a suitable computing facility with entertaining and motivating content and some minimal (human) guidance.

Mitra has summarised the results of his experiment as follows: given free and public access to computers and the Internet, groups of children can
•   Become computer literate on their own, that is, they can learn to use computers and the Internet for most of the tasks done by lay users.
•   Teach themselves enough English to use email, chat and search engines.
•   Learn to search the Internet for answers to questions within a few months' time.
•   Improve their English pronunciation on their own.
•   Improve their mathematics and science scores in school.
•   Answer examination questions several years ahead of time.
•   Change their social interaction skills and value systems.
•   Form independent opinions and detect indoctrination.

Nowadays, this kind of self-exploratory IT education runs not only in India but also in South Africa, Uganda, Rwanda, Mozambique, Zambia, Swaziland, Botswana, Nigeria and Cambodia, without any assistant, and the slum children perform very well. It is a discovery-based learning system. HiWEL, along with NIIT and the IFC, is running this project in around 600 Playground Learning Stations (PLS).

The Oscar-winning film Slumdog Millionaire, based on the novel "Q & A" by Vikas Swarup, was inspired by this project.

Here is the TED video where Dr. Sugata explains his idea of minimally invasive education for the future,

Another video on this topic,

A light-emitting diode (LED) is a two-lead semiconductor light source. It is a basic p-n junction diode which emits light when activated. When a biasing voltage is applied to the leads, electrons are able to recombine with electron holes within the device, releasing energy in the form of photons. This effect is called electroluminescence, and the color of the light (corresponding to the energy of the photon) is determined by the energy band gap of the semiconductor.
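Since the emitted photon carries roughly the band-gap energy, the emission wavelength follows directly from E = hc/λ. A small Python sketch of this relationship (the band-gap figures in the comments are approximate textbook values):

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def emission_wavelength_nm(band_gap_ev):
    """Approximate peak emission wavelength (nm) for a direct-gap semiconductor."""
    return HC_EV_NM / band_gap_ev

# GaAs (~1.42 eV) emits in the infrared, around 870 nm.
# GaN  (~3.4 eV) emits in the near-UV/violet, around 365 nm,
# which is why GaN-based materials were the key to blue LEDs.
print(round(emission_wavelength_nm(1.42)))  # 873
print(round(emission_wavelength_nm(3.4)))   # 365
```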

An LED is often small in area (less than 1 mm²), and integrated optical components may be used to shape its radiation pattern.
Appearing as practical electronic components in 1962, the earliest LEDs emitted low-intensity infrared light. Infrared LEDs are still frequently used as transmitting elements in remote-control circuits, such as those in remote controls for a wide variety of consumer electronics. The first visible-light LEDs were also of low intensity, and limited to red. Modern LEDs are available across the visible, ultraviolet, and infrared wavelengths, with very high brightness.

Early LEDs were often used as indicator lamps for electronic devices, replacing small incandescent bulbs. They were soon packaged into numeric readouts in the form of seven-segment displays, and were commonly seen in digital clocks.

Recent developments in LEDs permit them to be used in environmental and task lighting. LEDs have many advantages over incandescent light sources including lower energy consumption, longer lifetime, improved physical robustness, smaller size, and faster switching. Light-emitting diodes are now used in applications as diverse as aviation lighting, automotive headlamps, advertising, general lighting, traffic signals, and camera flashes. However, LEDs powerful enough for room lighting are still relatively expensive, and require more precise current and heat management than compact fluorescent lamp sources of comparable output.

The blue and white LED

The first high-brightness blue LED was demonstrated by Shuji Nakamura of Nichia Corporation in 1994 and was based on InGaN. Its development built on critical advances in GaN nucleation on sapphire substrates and the demonstration of p-type doping of GaN, developed by Isamu Akasaki and Hiroshi Amano in Nagoya. In 1995, Alberto Barbieri at the Cardiff University Laboratory (GB) investigated the efficiency and reliability of high-brightness LEDs and demonstrated a "transparent contact" LED using indium tin oxide (ITO) on (AlGaInP/GaAs). The existence of blue and high-efficiency LEDs quickly led to the development of the first white LED, which employed a Y3Al5O12:Ce ("YAG") phosphor coating to mix down-converted yellow light with blue to produce light that appears white. Nakamura was awarded the 2006 Millennium Technology Prize for his invention. On October 7, 2014, the Nobel Prize in Physics was awarded to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura for "the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources" or, less formally, LED lamps.
The development of LED technology has caused their efficiency and light output to rise exponentially, with a doubling occurring approximately every 36 months since the 1960s, in a way similar to Moore's law. This trend is generally attributed to the parallel development of other semiconductor technologies and advances in optics and material science, and has been called Haitz's law after Dr. Roland Haitz.
In 2000 and 2002, processes for growing gallium nitride (GaN) LEDs on silicon were successfully demonstrated. In January 2012, Osram demonstrated commercial high-power InGaN LEDs grown on silicon substrates. It has been speculated that the use of six-inch silicon wafers instead of two-inch sapphire wafers and epitaxy manufacturing processes could reduce production costs by up to 90%.

Why blue in particular?

Well, blue was the last -- and most difficult -- advance required to create white LED light. And with white LED light, companies are able to create smartphone and computer screens, as well as light bulbs that last longer and use less electricity than any bulb invented before. At the time, scientists developed LEDs that emitted everything from infrared light to green light… but they couldn't quite get to blue. That required chemicals, including carefully-created crystals, that they weren't yet able to make in the lab.

Once they did figure it out, however, the results were remarkable. A modern white LED light bulb converts more than 50 percent of the electricity it uses into light. Compare that to the 4 percent conversion rate of incandescent bulbs, and you have one efficient bulb. Besides saving money and electricity for all users, white LEDs' efficiency makes them appealing for bringing lighting to people living in regions without an electricity supply. A solar installation can charge an LED lamp to last a long time, allowing kids to do homework at night and small businesses to keep working after dark.


LEDs also last up to 100,000 hours, compared to 10,000 hours for fluorescent lights and 1,000 hours for incandescent bulbs. Switching more houses and buildings over to LEDs could significantly reduce the world's electricity and materials consumption for lighting.

A white LED light is easy to make from a blue one. Engineers use a blue LED to excite some kind of fluorescent chemical in the bulb. That converts the blue light to white light.


Telecom Forum / Monte Carlo Simulation: History and Application Example
« on: October 11, 2014, 01:24:25 PM »
Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; typically one runs simulations many times over in order to obtain the distribution of an unknown probabilistic entity. The name comes from the resemblance of the technique to the act of playing and recording results in a real gambling casino. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to obtain a closed-form expression, or infeasible to apply a deterministic algorithm. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration and generation of draws from a probability distribution.
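As a quick illustration of the numerical-integration use case, the classic example is estimating π: sample points uniformly in the unit square and count the fraction that lands inside the quarter circle. A minimal Python sketch (the function name is illustrative):

```python
import random

def estimate_pi(n_samples=100_000, seed=42):
    """Monte Carlo estimate of pi via the quarter-circle area."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point falls inside the quarter circle
            inside += 1
    # Quarter-circle area is pi/4; the unit square has area 1.
    return 4.0 * inside / n_samples

print(estimate_pi())  # close to 3.1416; the error shrinks as 1/sqrt(n)
```

The 1/√n convergence is exactly why these methods shine in high dimensions, where deterministic quadrature becomes infeasible.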

The modern version of the Monte Carlo method was invented in the late 1940s by Stanislaw Ulam, while he was working on nuclear weapons projects at the Los Alamos National Laboratory. It was named by Nicholas Metropolis, after the Monte Carlo Casino, where Ulam's uncle often gambled. Immediately after Ulam's breakthrough, John von Neumann understood its importance and programmed the ENIAC computer to carry out Monte Carlo calculations.

Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example,

In telecommunications, when planning a wireless network, design must be proved to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.

Monte Carlo Example for BER in BPSK System:
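A minimal Python sketch of such an experiment, assuming BPSK over an AWGN channel with hard-decision detection (function names are our own): random bits are mapped to ±1, Gaussian noise is added, and the simulated bit error rate is compared against the theoretical value Q(√(2·Eb/N0)).

```python
import math
import random

def qfunc(x):
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber_mc(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo estimate of BPSK bit error rate over an AWGN channel."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))  # noise std dev for unit-energy symbols
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        s = 1.0 if bit else -1.0           # BPSK mapping: 0 -> -1, 1 -> +1
        r = s + rng.gauss(0.0, sigma)      # AWGN channel
        if (r > 0) != bool(bit):           # hard decision at the receiver
            errors += 1
    return errors / n_bits

# At Eb/N0 = 4 dB the simulated BER should be close to the
# theoretical Q(sqrt(2 * Eb/N0)), about 1.25e-2.
print(bpsk_ber_mc(4.0), qfunc(math.sqrt(2 * 10 ** 0.4)))
```

Note that the accuracy of the estimate depends on observing enough errors: at low BER, the number of simulated bits must grow so that roughly 100 or more errors are counted.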

