Show Posts



Topics - nafees_research

1
21st century skills and the 4th Industrial Revolution (by Manzoor Ahmed)

Prime Minister Sheikh Hasina recently said, "It is not only Bangladesh, the whole world will need skilled manpower… and for that we have reformed our education system, giving priority to vocational training." She was speaking at the international conference on "Skills Readiness for Achieving SDG and Adopting Industrial Revolution 4.0" on February 2, 2020. The event was organised by the Institute of Diploma Engineers Bangladesh (IDEB) and the Colombo Plan Staff College in Manila, Philippines.

The Prime Minister has rightly indicated an important priority. The question is: how are buzzwords such as the "Fourth Industrial Revolution" understood and what is happening on the ground in the thousands of secondary level institutions across the country?

Klaus Schwab, the founder of the World Economic Forum and the organiser of the annual Davos Summit, is credited with popularising this term. As Schwab explains, the First Industrial Revolution started in the 1780s, using water and steam power to mechanise production. The Second, beginning in the 1870s, used electric power to create assembly lines and enable mass production. The Third, starting from the 1960s, used electronics and information technology, also known as digital technology, to automate production. The Fourth Industrial Revolution (4IR) now builds on the digital revolution.

The latest Industrial Revolution blurs the lines between the physical, digital and biological spheres in an unprecedented way. The 4IR is radically different because, unlike the previous three, it is more than a technological shift in economic production. It opens unlimited possibilities for addressing critical challenges of poverty, inequality and sustainable development. However, beyond the hype surrounding 4IR, its potential and challenges have to be seen from the perspective of the real world, especially from the point of view of low-income countries like Bangladesh, where the majority of the world's people still live. The prospects and problems are spectacularly different for most people in these countries compared with those in wealthier countries.

Over 80 percent of our workforce is employed in the informal economy, which is not regulated by worker welfare and rights standards. According to the 2017 Labour Force Survey, a third of the workforce has no education, 26 percent have only primary education and 31 percent have at most secondary education. Over 40 percent of workers are engaged in the low-skill, low-wage agricultural sector. The concept note for the Eighth Five Year Plan (FY2021-25), now under preparation, says that the overall quality of the labour force is far below the level needed to achieve the planned 15 percent growth in manufacturing, to expand the organised service sector, and to facilitate the transition to an upper-middle-income country.


Life and livelihoods for the majority of people in Bangladesh are still largely characterised by the technologies of the second or even the first Industrial Revolution. At the same time, ironically, most people are also touched by the third Industrial Revolution through the penetration of mobile phone technology. The features of 4IR are found among a small, better-educated and privileged segment of the population who benefit from or contribute to its development at home or abroad. What this means is that technologies and people's skills, as well as their attitudes and aspirations, have to be lifted simultaneously across all four phases of the industrial revolutions, starting from wherever people are on this spectrum. This is where skills formation, the role of the education system and the relevance of 21st century skills come in.



What we call 21st century skills are not necessarily all novel, nor do they mark a clean break from what was important in the 20th or the 19th century. There are common and timeless elements of quality and relevance for learners and for society in any system of education. Education systems have always struggled to achieve and maintain these essential elements, and they have not become invalid in the 21st century.

This formulation of 21st century skills recognises the value of the foundational skills of multiple literacies, the essential tools for learning. These are the base on which the higher-order skills of solving problems and thinking critically are built. Young people also have to be helped to develop social and emotional maturity and to acquire moral and ethical values, the qualities of character. A lifelong learning approach has to be adopted for this. As with technology adoption and adaptation, skills development and education also need to attend to the perennial, essential elements that can respond to the diverse phases of technology, production, consumption, lifestyle and expectations in which people find themselves.

The education authorities—the two divisions of the Bangladesh Ministry of Education and the National Curriculum and Textbook Board—are engaged in a review of school curricula in the context of 21st century challenges. More important than formulating the curriculum is finding effective ways of implementing it. Teachers—their skills, professionalism and motivations—are the key here. So is the way students' learning is assessed. Look at the negative backwash effects of the current public examinations: they come too early and too frequently, raise serious questions about what they actually assess, and distort the teaching-learning process in schools.

A good move, now under consideration, is to delay streaming students into different tracks from 9th grade to 11th grade. The aim is to build a common foundation of competencies for all, and not to force young people to foreclose their life options early.

Klaus Schwab has warned that we face the danger of a job market increasingly segregated into "low-skill/low-pay" and "high-skill/high-pay" segments, giving rise to growing social tensions. Coping with the implications of this danger for education and skill development is a continuing concern. We cannot discuss the numerous structural and operational obstacles to necessary reforms in education and skills formation, and how to deal with them, within the confines of this article. But we can hardly ignore them either.

The decision-makers of today find it difficult to free themselves from the trap of traditional, linear thinking. They are too absorbed by the multiple, immediate crises knocking at their doors every day. Can they find the time and focus their minds enough to think strategically, looking at the bigger picture and over a longer time horizon, about the forces of change and disruption that are shaping our future?

Manzoor Ahmed is Professor Emeritus at Brac University.

Source: https://www.thedailystar.net/opinion/news/21st-century-skills-and-the-4th-industrial-revolution-1868884

2
QA Mechanism / THE World University Rankings 2020: methodology
« on: February 17, 2020, 12:16:48 PM »

In collecting and considering data for the World University Rankings, we are scrupulous and transparent. Here we detail what goes into our assessment of almost 1,400 institutions worldwide

The Times Higher Education World University Rankings are the only global performance tables that judge research-intensive universities across all their core missions: teaching, research, knowledge transfer and international outlook. We use 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons, trusted by students, academics, university leaders, industry and governments.

The performance indicators are grouped into five areas: Teaching (the learning environment); Research (volume, income and reputation); Citations (research influence); International outlook (staff, students and research); and Industry Income (knowledge transfer).

Teaching (the learning environment): 30%
Reputation survey: 15%
Staff-to-student ratio: 4.5%
Doctorate-to-bachelor’s ratio: 2.25%
Doctorates-awarded-to-academic-staff ratio: 6%
Institutional income: 2.25%

The most recent Academic Reputation Survey (run annually) that underpins this category was carried out between November 2018 and March 2019. It examined the perceived prestige of institutions in teaching. The responses were statistically representative of the global academy's geographical and subject mix. The 2019 data are combined with the results of the 2018 survey, giving more than 21,000 responses.



A high proportion of postgraduate research students suggests not only that an institution is committed to nurturing the next generation of academics, but also that it provides teaching at the highest level, attractive to graduates and effective at developing them. This indicator is normalised to take account of a university's unique subject mix, reflecting the fact that the volume of doctoral awards varies by discipline.

Institutional income is scaled against academic staff numbers and normalised for purchasing-power parity (PPP). It indicates an institution’s general status and gives a broad sense of the infrastructure and facilities available to students and staff.

Research (volume, income and reputation): 30%
Reputation survey: 18%
Research income: 6%
Research productivity: 6%
The most prominent indicator in this category looks at a university’s reputation for research excellence among its peers, based on the responses to our annual Academic Reputation Survey.

Research income is scaled against academic staff numbers and adjusted for purchasing-power parity (PPP). This is a controversial indicator because it can be influenced by national policy and economic circumstances. But income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure. This indicator is fully normalised to take account of each university’s distinct subject profile, reflecting the fact that research grants in science subjects are often bigger than those awarded for the highest-quality social science, arts and humanities research.

To measure productivity, we count the number of papers published in the academic journals indexed by Elsevier's Scopus database per scholar, scaled for institutional size and normalised for subject. This gives a sense of a university's ability to get papers published in quality peer-reviewed journals. Last year, we devised a method to give credit for papers published in subjects where a university declares no staff.

Citations (research influence): 30%
Our research influence indicator looks at universities’ role in spreading new knowledge and ideas.

We examine research influence by capturing the average number of times a university’s published work is cited by scholars globally. This year, our bibliometric data supplier Elsevier examined 77.4 million citations to 12.8 million journal articles, article reviews, conference proceedings, books and book chapters published over five years. The data include more than 23,400 academic journals indexed by Elsevier’s Scopus database and all indexed publications between 2014 and 2018. Citations to these publications made in the six years from 2014 to 2019 are also collected.

The citations help to show us how much each university is contributing to the sum of human knowledge: they tell us whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community to expand the boundaries of our understanding, irrespective of discipline.

The data are normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.

We have blended equal measures of a country-adjusted and non-country-adjusted raw measure of citations scores.

In 2015-16, we excluded papers with more than 1,000 authors because they were having a disproportionate impact on the citation scores of a small number of universities. In 2016-17, we designed a method for reincorporating these papers. Working with Elsevier, we developed a fractional counting approach that ensures that every university with academics among the authors of such a paper receives at least 5 per cent of the value of the paper, while those that provide the most contributors receive a proportionately larger share.
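As a rough sketch of how such a floor-plus-proportional split could work (THE has not published the exact formula, so the simple "5 per cent floor on the proportional share" reading below is an assumption, and the names are illustrative):

```python
def fractional_credit(author_counts):
    """Split one paper's citation credit across contributing institutions.

    author_counts maps institution -> number of the paper's authors it
    contributed. Every institution receives at least 5% of the paper's
    value; larger contributors receive proportionally more. Because of
    the floor, the shares can sum to slightly more than 1.0.
    """
    total = sum(author_counts.values())
    return {inst: max(0.05, n / total) for inst, n in author_counts.items()}

# A 1,000-author collaboration paper: a university contributing 300 of
# the authors receives 30% of the value, one with a single author still 5%.
credits = fractional_credit({"Uni A": 300, "Uni B": 1, "Uni C": 699})
```

The floor keeps a single-author contribution from rounding to nothing, while large contributors still dominate the credit, which matches the behaviour the paragraph above describes.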

International outlook (staff, students, research): 7.5%
Proportion of international students: 2.5%
Proportion of international staff: 2.5%
International collaboration: 2.5%
The ability of a university to attract undergraduates, postgraduates and faculty from all over the planet is key to its success on the world stage.

In the third international indicator, we calculate the proportion of a university’s total research journal publications that have at least one international co-author and reward higher volumes. This indicator is normalised to account for a university’s subject mix and uses the same five-year window as the “Citations: research influence” category.

Industry income (knowledge transfer): 2.5%
A university’s ability to help industry with innovations, inventions and consultancy has become a core mission of the contemporary global academy. This category seeks to capture such knowledge-transfer activity by looking at how much research income an institution earns from industry (adjusted for PPP), scaled against the number of academic staff it employs.

The category suggests the extent to which businesses are willing to pay for research and a university’s ability to attract funding in the commercial marketplace – useful indicators of institutional quality.

Exclusions

Universities can be excluded from the World University Rankings if they do not teach undergraduates, or if their research output amounted to fewer than 1,000 relevant publications between 2014 and 2018 (with a minimum of 150 a year). Universities can also be excluded if 80 per cent or more of their research output is exclusively in one of our 11 subject areas.
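The stated thresholds translate into a simple eligibility check. The sketch below is one reading of those rules; the function and argument names are illustrative, not THE's:

```python
def eligible(teaches_undergrads, pubs_by_year, max_subject_share):
    """Apply the stated exclusion rules for the 2020 ranking.

    pubs_by_year: relevant publication counts for 2014-2018, per year.
    max_subject_share: largest fraction (0.0-1.0) of output falling in
    any single one of the 11 subject areas.
    """
    if not teaches_undergrads:
        return False                        # must teach undergraduates
    if sum(pubs_by_year.values()) < 1000:   # fewer than 1,000 publications overall
        return False
    if min(pubs_by_year.values()) < 150:    # minimum of 150 in each year
        return False
    if max_subject_share >= 0.8:            # 80%+ concentrated in one subject
        return False
    return True

# Example profile: 1,025 publications over five years, none below 150.
pubs = {2014: 210, 2015: 190, 2016: 220, 2017: 200, 2018: 205}
```

A university passing the volume tests can still be excluded if, say, 85 per cent of its output sits in a single subject area.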

Data collection

Institutions provide and sign off their institutional data for use in the rankings. On the rare occasions when a particular data point is not provided, we enter a conservative estimate for the affected metric. By doing this, we avoid penalising an institution too harshly with a “zero” value for data that it overlooks or does not provide, but we do not reward it for withholding them.

Getting to the final result

Moving from a series of specific data points to indicators, and finally to a total score for an institution, requires us to match values that represent fundamentally different data. To do this, we use a standardisation approach for each indicator, and then combine the indicators in the proportions indicated above.

The standardisation approach we use is based on the distribution of data within a particular indicator, where we calculate a cumulative probability function, and evaluate where a particular institution’s indicator sits within that function.

For all indicators except for the Academic Reputation Survey, we calculate the cumulative probability function using a version of Z-scoring. The distribution of the data in the Academic Reputation Survey requires us to add an exponential component.
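For the non-survey indicators, that Z-score-to-cumulative-probability step can be sketched with the standard normal CDF (a minimal interpretation; THE's exact implementation, including the survey's exponential component, is not public):

```python
import statistics
from math import erf, sqrt

def standardise(values):
    """Convert raw indicator values into cumulative-probability scores.

    Each value is Z-scored against the indicator's own distribution and
    mapped through the standard normal CDF, so a score is the probability
    of a randomly chosen institution falling below that value.
    """
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [0.5 * (1 + erf((v - mu) / (sd * sqrt(2)))) for v in values]

# An institution sitting exactly at the indicator's mean scores 0.5.
scores = standardise([10.0, 20.0, 30.0])
```

These per-indicator scores would then be combined with the percentage weightings listed for each pillar to produce a total.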

The calculation of the Times Higher Education World University Rankings 2020 has been independently audited by professional services firm PricewaterhouseCoopers (PwC), making these the only global university rankings to be subjected to full, independent scrutiny of this nature.

Source: https://www.timeshighereducation.com/world-university-rankings/world-university-rankings-2020-methodology

3
Quantum Technology

Quantum technology, particularly quantum information science and the development of future electronics, is considered one of the key technologies that will fundamentally change human life and lead us into the Fourth Industrial Revolution. It lies at the heart of many future technologies because most of them – artificial intelligence and the Internet of Things, for example – are based on, and limited by, the power of computation and communication. It is believed that only quantum information processing can allow AI and related technologies to achieve their full potential, or even revolutionise them.

We are now entering the era of the so-called second quantum revolution, where many intriguing quantum properties, such as superposition, entanglement and nonlocality, can be utilised to develop new technologies. Implementation of these quantum technologies combines a wide range of research, from quantum theory and algorithms to all aspects of quantum engineering in materials, devices, architectures, and so on. Over the past decade, NCKU has put a lot of resources and manpower towards quantum science and technology. This investment has led to the finest research outcomes in Taiwan and a new Center for Quantum Frontiers of Research and Technology (QFort).



QFort focuses on three strongly correlated research directions: theory for quantum devices and quantum computations; superconductor-semiconductor (and other emergent materials) hybrid quantum devices and qubits; and quantum material foundry.

Many leading companies, including Google, Microsoft, Intel and IBM, have started to develop their own quantum computers or quantum software. The ultimate goal is to construct a universal quantum computer that provides overwhelming advantages over classical machines. The research team at QFort will use quantum computing platforms such as IBM Q and Rigetti Computing to test its quantum theory, and from this develop efficient quantum algorithms for the desired tasks. The team plans to examine non-trivial properties of quantum correlations, including spatial quantum steering and its temporal version, by constructing efficient quantum circuits running on these platforms. In addition, to support the experimental research, it plans to study temporal quantum correlations in the centre's hybrid quantum devices, which includes formulating precise measures of quantum correlations and investigating applications in quantum networks. The team also plans to explore potential quantum-sensing applications.

 

Another major research direction is to create and develop complementary metal-oxide-semiconductor (CMOS)-compatible (i.e., promising for rapid scaling) hybrid quantum bits and devices by integrating superconductors with semiconductors or other emergent materials, including topological insulators. This approach can address a major challenge in scaling up the number of semiconductor spin qubits: entanglement between them has been limited by very short-range exchange interactions. Hybridising superconducting microwave cavities with semiconductor spin qubits makes it possible to achieve long-distance entanglement of a large number of qubits. The research team has already achieved many pioneering advances in semiconductor quantum electronics. For example, it developed the world's first all-electric, all-semiconductor spin transistor by employing quantum point contacts as the spin injector and detector. It has also created a method to simultaneously manipulate and probe two spin types with controllable quantum phase correlation, which could lead to new kinds of interferometer or new design principles for spintronics. Moreover, the zigzag Wigner crystal and several new types of Hall effect recently observed by the team are not merely of fundamental interest but also have potential technological applications, for example serving as a quantum mediator to couple qubits that are physically apart.

Emergent quantum materials can create new paths to advance quantum technology, since the intriguing and exotic quantum features of these materials can lead to new design principles and architectures in quantum logic; for example, topological quantum computing based on Majorana fermions. The QFort centre integrates scholars with different expertise, ranging from theoretical prediction to numerical simulation, epitaxial growth and material characterisation, to achieve cutting-edge breakthroughs in quantum materials. Their research interests centre on topological materials, complex oxides and van der Waals materials. Some of their notable works include the theoretical prediction of the very first topological semimetal, the all-optical, non-volatile control of multiple ferroic orders above room temperature, and the first realisation of a gate-free monolayer pn diode.

Quantum science and technology is a highly interdisciplinary field, associated with physics, nanotechnology, material science, electrical engineering, computer science, and so on. QFort at NCKU has successfully integrated outstanding scientists and engineers from different disciplines and different countries to serve as a research hub and facilitate the development of quantum technology.

Source: https://www.timeshighereducation.com/hub/national-cheng-kung-university/p/quantum-technology

4
Secretly building ships under a ship-breaking permit!

Bangladesh is now the world's top ship-breaking country, a recognition it received in a recent report by the international NGO Shipbreaking Platform. However, although the country's shipyards are licensed only to dismantle scrap ships imported from abroad, many yards are building new vessels in violation of the rules. To evade government revenue, these vessels are being assembled from the unused parts of scrapped ships, adding risk to the waterways. The Department of Environment says this is a direct violation of the law: ship-breaking yards have been given clearance to cut up imported old ships, not to build new ones, and building a new vessel of any size without a licence is entirely illegal.

New shipbuilding at these yards is documented in a recent report by the Department of Environment. According to the report, evidence of new ship construction was found at five yards in Sitakunda upazila of Chattogram. At four of them (M/s HA Mannan Steel, M/s Crystal Shippers Limited, M/s Ocean Ispat and M/s Jiri Subedar Steel Re-Rolling Mills Limited), barges are being built directly from old scrap. At the fifth, M/s RA Ship Breaking Yard, inspectors found no on-site evidence but suspect that new ships are being built there secretly.

The two inspectors who visited the sites recently recommended filing enforcement cases against the four yards accused of carrying out unauthorised shipbuilding at ship-breaking yards. The cases were recommended under the Bangladesh Environment Conservation Act 1995 (amended 2010) and the Environment Conservation Rules 1997 (amended 2002).

Inspectors of the Department of Environment said that despite a shortage of manpower they inspect the shipyards regularly, but that they keep receiving reports of owners secretly building new vessels from old scrap at their yards and putting them to use.

According to the inspectors, no scrap ship has been cut at M/s HA Mannan Steel for eight or nine years, yet its owner has built a barge from old ship plates and is operating it elsewhere. At M/s Crystal Shippers Limited, construction of a ship is in its final stage. At the M/s Ocean Ispat yard, one newly built ship is already in use at sea and another is nearly complete. At the M/s Jiri Subedar Steel Re-Rolling Mills Limited yard, two ships are under construction, one of them in its final stage.

Finally, although there were allegations of shipbuilding at the M/s RA Ship Breaking yard, the inspection found no evidence of it; officials nonetheless fear that barges may be being built there in secret. Department of Environment officials are quietly monitoring whether new ships are being built at other yards as well, and, citing unconfirmed sources, they say more than ten additional yards may be doing so.

On this issue, Mohammad Moazzem Hossain, director (Chattogram region) of the Department of Environment, told Bonik Barta that ship breaking and shipbuilding require separate clearances, and that no one may build ships under a ship-breaking licence. "The evidence we have shows four establishments building six new ships without the Department of Environment's permission, which is a clear violation of environmental law. We are also checking whether they hold any permit from BIDA (Bangladesh Investment Development Authority). We will take strict action against these establishments, and if necessary the department will not hesitate to cancel the yards' approvals."

Last year, evidence was found of two new ships being built at Khaja Steel, owned by the steel group KSRM, and of one ship at the Golden Iron Alliance shipyard owned by Golden Ispat Limited. The Department of Environment subsequently fined them and stopped the construction.

Industry insiders say that although the country's ship-breaking sector leads the world, its problems are mounting, and the relevant authorities should investigate whether these ships are being built to evade government taxes.

Source: https://bonikbarta.net/home/news_description/220611/%E0%A6%AD%E0%A6%BE%E0%A6%99%E0%A6%BE%E0%A6%B0-%E0%A6%85%E0%A6%A8%E0%A7%81%E0%A6%AE%E0%A7%8B%E0%A6%A6%E0%A6%A8-%E0%A6%A8%E0%A6%BF%E0%A7%9F%E0%A7%87-%E0%A6%97%E0%A7%8B%E0%A6%AA%E0%A6%A8%E0%A7%87-%E0%A6%9C%E0%A6%BE%E0%A6%B9%E0%A6%BE%E0%A6%9C-%E0%A6%A8%E0%A6%BF%E0%A6%B0%E0%A7%8D%E0%A6%AE%E0%A6%BE%E0%A6%A3?fbclid=IwAR0gUPsWDPU2PM6X62yKpNJXTnBPp_0H4gnEHF2OZXKLT12yCg5wEpYDPhM

5
Education heads one way, the job market another

We are moving towards becoming a middle-income and then a high-income country. Many industrial and business enterprises have grown up in the country. Their entrepreneurs say there is no shortage of highly educated young people; the problem is that this generation lacks job-ready education and is not proficient in English. For this reason, employers are passing them over and hiring foreign workers, who take several billion dollars out of Bangladesh every year.

Meanwhile, our students are looking for jobs after finishing bachelor's and master's degrees. Almost every one of them needs work, yet most cannot find it. A large share of the blame lies with our education system, which is why we are losing out in the competition of this age of globalisation.

Our examinations follow a strange routine. When exams approach, memorise English grammar, paragraphs, dialogues and letters; write them down in the exam; and passing is all but assured. Then go around with the certificate. But getting a job on that certificate is very hard. A large portion of these certificate-holders cannot write a proper letter in English, cannot compose an email, cannot speak English, and most cannot give a presentation to foreign buyers or other audiences, because practical training is almost entirely absent from the education system.

Many engineers are in the same position. They finish their studies without learning hands-on, and so cannot perform properly in their working lives.

Technology is changing around the world and new fields keep emerging, while we stand in the same place. According to business leader Fazlul Hoque, there is a serious shortage of professional human resources at the middle and senior levels of organisations. The so-called education system cannot meet the demands of today's job market. In the management of the textile and garment industries in particular, the influence of nationals of neighbouring countries is very strong. The employability of general-stream graduates is very poor, and even most of those with technical education cannot work properly. Problems of this kind run through almost every part of the education system.

There are now thousands of manufacturing businesses in the country. Job advertisements for office positions draw many times more applications than there are posts, while applications for technical positions are very few. Millions of students pursue general education; interest in technical subjects is low.

Experts believe technical education should be made compulsory up to grade eleven. Higher education in general subjects after higher secondary should be reduced, and the uptake of technical education increased. According to businesspeople and entrepreneurs, we must determine what kind of human resources each industry needs and develop the workforce according to industrial demand, as Germany and many other developed countries do.

No country in the world has a fully skilled workforce, and in a country like ours the problem is deeper. There is no doubt that the crisis is acute. But to tell the truth, even that is not the real problem. The problem lies in the mindset of policymakers. Solving it requires the right initiatives, effective planning and a commitment to rapid implementation.

One thing must be remembered: the very people we call unskilled are the ones foreigners are working with. In almost every country of the world, Bangladeshis have a good reputation for their work, for their skill and for their honesty, and many hold important positions abroad. Yet at home we complain that we cannot find skilled human resources.

In almost no country does the government do everything; the private sector must also take responsibility. Many of the country's industrial enterprises earn hundreds of crores of taka. How many training centres do they have? How many research centres? They too need skilled human resources. Alongside the government, it is urgent that these enterprises take responsibility for developing skilled people.

Illegal foreign workers are employed in many private companies. Neither the illegal workers nor the companies are above the law of the land. And not only private companies: illegal foreign workers are also employed in many government projects. How is that possible? In some cases project contracts may provide for hiring foreigners, but a strict condition should apply: hiring foreigners may be considered only for work that cannot be done by local workers, and a target must be set for local human resources to acquire the specialised skills within a fixed period.

The Centre for Policy Dialogue (CPD) has conducted a study on the garment industry. According to its research, foreign workers are present in 24 percent of the country's ready-made garment factories, and foreigners take several billion dollars out of Bangladesh every year.

In most cases our colleges and universities are producing educated unemployed. More than two million people enter the job market every year. Most of those graduating from universities remain unemployed, yet the flood into higher education shows no sign of stopping. Enormous state and family resources are spent on every student, who invests a great deal of money and time to finish studying. After all this, how employable is he or she in the job market? That is a very big question today, and policymakers must start thinking seriously about it now.

If we are to become a middle-income country by 2021 and a developed country by 2041, the education curriculum must be aligned with the demands of the job market. Even teaching English properly would solve much of the problem. It is deeply regrettable that in nearly 50 years since independence we have not achieved even that much.

Ashfaquzzaman: journalist

Source: https://www.prothomalo.com/opinion/article/1639261/%E0%A6%B6%E0%A6%BF%E0%A6%95%E0%A7%8D%E0%A6%B7%E0%A6%BE-%E0%A6%9A%E0%A6%B2%E0%A6%9B%E0%A7%87-%E0%A6%8F%E0%A6%95%E0%A6%A6%E0%A6%BF%E0%A6%95%E0%A7%87-%E0%A6%9A%E0%A6%BE%E0%A6%95%E0%A6%B0%E0%A6%BF%E0%A6%B0-%E0%A6%AC%E0%A6%BE%E0%A6%9C%E0%A6%BE%E0%A6%B0-%E0%A6%86%E0%A6%B0%E0%A7%87%E0%A6%95-%E0%A6%A6%E0%A6%BF%E0%A6%95%E0%A7%87

6
দিনাজপুরে ডায়াবেটিস ও ক্যান্সার প্রতিরোধক ‘চিয়া’ চাষ
দিনাজপুরে ডায়াবেটিস – পুষ্টি ও ওষুধি গুণ সম্পন্ন লাভজনক ফসল ‘চিয়া’ বীজ বাংলাদেশে চাষের সফলতা পেয়েছে। বাংলাদেশ কৃষি বিশ্ববিদ্যালয়ের গবেষণার পর দিনাজপুরের এক কৃষক চিয়া বীজের চাষে সফলতাও পেয়েছেন। স্বল্প খরচে লাভজনক হওয়ায় দিনাজপুরের কৃষক নুরুল আমিন বাণিজ্যিকভাবে এর চাষ শুরু করেছেন। বাংলদেশের জলবায়ু ও মাটি এই চিয়া ফসলের চাষ উপযোগী। তাই বাংলাদেশে রবি ফসল চাষযোগ্য যে কোনো জমিতে ‘চিয়া’ চাষ সম্ভব। কৃষকরা সহজেই এই বীজ উৎপাদন ও সংরক্ষণ করতে পারবেন।

Chia, a grain crop with medicinal properties, is rich in protein and antioxidants and can help prevent diabetes, cancer, heart disease and other illnesses. Professor Dr. Mashiur Rahman of the Department of Agronomy at Bangladesh Agricultural University says the crop can therefore play a major role in protecting public health in Bangladesh.

Chia is grown as a medicinal crop in several countries, including Mexico, Guatemala, Canada and Colombia. Although it is little known and little used in Bangladesh, demand for it is high across America and Europe. Because it is both healthful and profitable, many people have approached Nurul Amin in Dinajpur to learn about its cultivation. If chia farming spreads across the country, farmers will profit and public health will benefit.
In 2016, Professor Dr. Mashiur Rahman of Bangladesh Agricultural University received one kilogram of chia seed from a friend in Canada and first grew it experimentally at the university's agronomy research farm. After that trial succeeded, he supplied seed to farmer Nurul Amin, whom he knew, and helped him take up the crop.

After his successful trial, farmer Nurul Amin of Sundarban village in Dinajpur Sadar upazila has now planted chia on 45 decimals of land. Recently a team of senior agriculturists visited his field, including Dr. Mehedi Masud, project director at the Department of Agricultural Extension, Professor Dr. Mashiur Rahman of the Department of Agronomy at Bangladesh Agricultural University, and the director of horticulture.

Nurul Amin said, "With the help of Professor Dr. Mashiur Rahman of the Department of Agronomy at Bangladesh Agricultural University, I first grew chia on five decimals of land in 2017 and got 10 kilograms of seed. With that seed I am now farming 45 decimals. Chia is sown in October and harvested after 110 to 120 days. I expect about 100 kilograms from my 45 decimals." He added that he will sell the chia to an importer in the country at Tk 1,000 per kilogram.

Source: https://dhakalive24.com/national-news/%E0%A6%A6%E0%A6%BF%E0%A6%A8%E0%A6%BE%E0%A6%9C%E0%A6%AA%E0%A7%81%E0%A6%B0%E0%A7%87-%E0%A6%A1%E0%A6%BE%E0%A7%9F%E0%A6%BE%E0%A6%AC%E0%A7%87%E0%A6%9F%E0%A6%BF%E0%A6%B8/

7
Google is collecting your data: what to do to stay safe

We use Google's services for various purposes every day, and through these services Google collects our data, according to a report by the Indian technology news outlet Gadgets Now.

According to the report, if you use an Android smartphone or the Chrome browser on a computer, Google most likely knows about your every move.

The report adds that Google holds detailed records of everything we download from the Google Play Store and of which websites we visit on our computers. Whatever help you take from Google Assistant is also stored in your Google account.

However, you can see what information Google is collecting about you, and you can even stop it from collecting any further information.

Here is how to stop Google from collecting your data:

1. First, sign in to Gmail on your computer and click the profile picture in the top right corner.

2. Next, click the 'Manage your account' option. The full account settings will open in a new tab.

3. Now, from the 'Privacy and personalization' option, go to 'Manage your data and personalization'. There, under the 'Activity controls' section, you will find 'Web & App Activity'.

4. This section is what stores your data across Google's various sites and apps. Under 'Privacy and personalization' you will also find Location History and YouTube History.

5. To stop Google from collecting this data, switch each of these off. If you leave them on, Google will keep gathering information about you.

Source: https://www.jugantor.com/tech/274487/%E0%A6%97%E0%A7%81%E0%A6%97%E0%A6%B2-%E0%A6%86%E0%A6%AA%E0%A6%A8%E0%A6%BE%E0%A6%B0-%E0%A6%A4%E0%A6%A5%E0%A7%8D%E0%A6%AF-%E0%A6%A8%E0%A6%BF%E0%A6%9A%E0%A7%8D%E0%A6%9B%E0%A7%87-%E0%A6%A8%E0%A6%BF%E0%A6%B0%E0%A6%BE%E0%A6%AA%E0%A6%A6-%E0%A6%A5%E0%A6%BE%E0%A6%95%E0%A6%A4%E0%A7%87-%E0%A6%AF%E0%A6%BE-%E0%A6%95%E0%A6%B0%E0%A6%AC%E0%A7%87%E0%A6%A8

8
UGC's research grant services go online

From now on, the University Grants Commission of Bangladesh (UGC) will process research grants for university teachers online. Commission chairman Professor Dr. Kazi Shahidullah inaugurated the "Online Submission for Research Grants" software at the UGC today (11 February 2020).

Applicants can access the service by clicking 'Research Grants' in the online application section of the UGC website. To do so, they must first register with their national ID (NID).

Until now, the UGC's Research Support and Publication Division has processed this support manually: support for attending international conferences, seminars, symposia and workshops abroad; travel costs for pursuing higher degrees; support for national and international conferences, seminars, symposia and workshops within the country; and support for MPhil, PhD and post-doctoral research.

All of these institutional services will now be easier to access through the Online Submission for Research Grants software. Grants can be processed quickly and easily, applicants can track the current status of their applications, and the system will also report the number of users and the volume of research support provided.

The inauguration ceremony was chaired by UGC member Professor Dr. Md. Sazzad Hossain, with commission member Professor Dr. Dil Afroza Begum as special guest, and was attended by the commission's divisional heads and officials of various levels. Md. Omar Faruque, director (current charge) of the UGC's Research Support and Publication Division, and Mohammad Jaminur Rahman, additional director of the IMCT Division, presented the features and functionality of the 'Online Submission for Research Grants' software.

In his address as chief guest, the UGC chairman said that with the Online Submission for Research Grants software, no one will need to come to the UGC in person for a research grant, and services will be delivered faster. The system will improve the efficiency of UGC officers and staff and improve the working environment, though its proper use must be ensured to realise those benefits. He also advised the country's universities and the UGC to make their services people-friendly so that the benefits of higher education reach everyone.

In his remarks as chair, Professor Sazzad Hossain said we must build a culture of using technology to keep pace with the developed world and ensure its proper use everywhere in building Digital Bangladesh. He advised that care be taken so that software development, and the easy delivery of services through it, is not disrupted.

Source: https://www.facebook.com/ugc.gov.bd/

10
The Top 5 Tech Trends That Will Disrupt Education In 2020 - The EdTech Innovations Everyone Should Watch

One solid indicator that EdTech is big business is the number of billionaires the sector has created. According to Deloitte, the Chinese education market should reach $715 billion by 2025 and has already produced seven new billionaires. The richest is Li Yongxin, who leads Offcn Education Technology, a provider of online and offline training for people preparing for civil-service exams, but other EdTech business leaders were represented as well. Here we consider the key technologies that underpin the EdTech revolution, along with the top 5 tech trends set to disrupt education in 2020.


Key Technologies that Underpin the EdTech Revolution

A discussion about the top tech trends that will disrupt education must first begin with the technologies that will influence these trends.

Artificial intelligence will continue to fill gaps in learning and teaching and help personalize and streamline education. As students interact with connected Internet of Things (IoT) devices and other digital tools, data will be gathered. This big data, and the analysis of it, is instrumental for personalizing learning, determining interventions, and judging which tools are effective. Extended reality, including virtual, augmented, and mixed realities, helps create different learning opportunities that can engage students even further. Education is increasingly becoming mobile, and educational institutions are figuring out ways to enhance the student experience by implementing mobile technology solutions. Of course, this technology requires a capable network to handle the traffic demands, and 5G technology will provide powerful new mobile data capabilities. Finally, blockchain technology offers educational institutions a secure way to store student records.

Top 5 Tech Trends That Will Disrupt Education in 2020

1.  More accessible education

There aren't only financial considerations when speaking about how accessible education is. The UN estimates there are more than 263 million kids globally who are not getting a full-time education. While there are many reasons behind this statistic, such as lack of access to a qualified educational facility, there are also issues with proper materials, learning accommodations, and more. Online learning makes education available even in remote areas and makes it easy to share curricula across borders. EdTech solutions can overcome many common barriers to a quality education.

Technology can improve access to education. Digital textbooks that can be accessed online 24/7 won't require transportation to get to an educational facility or library during certain hours. Digital copies are relatively cheap to produce, so textbook fees aren't as taxing for digital versions as they might be with physical versions that cost more to create. Similarly, translating physical textbooks into all the languages natively spoken is cost-prohibitive for publishers when they are producing only physical copies of books. Digital versions make these translations much more feasible.

Within the classroom, the ultimate accommodation for learning differences is called differentiated learning. This allows students to have learning that is tailored to their personal needs. This and student-paced learning where students can move through and review material at the speed they need is much more feasible when using technology. There are also tech solutions for students who have physical or learning disabilities.

2.  More data-driven insights

Just like it does for other industries, technology can help educational institutions and educators be more effective and efficient. By analyzing data on how digital textbooks are consumed or how educational technology is used, institutions can gain valuable data-driven insights into how to enhance learning, as well as information for deciding which tools aren't effective. Technology, including big data, machine learning, and artificial intelligence, will also allow for more in-depth personalization of content for an individual's learning needs. At the university level, data is no longer siloed in individual departments' Excel spreadsheets but is consolidated at the institution level, so insights can be extracted. With data-driven insights making it easy to see where students need more support and what support is necessary, teachers are freed up to inspire students and change lives.

3.  More personalized education

While a personalized education experience isn't a novel concept, technology can make achieving it much easier. Today's classrooms are diverse and complex, and access to technology helps better meet each student's needs. Technological tools can free teachers up from administrative tasks such as grading and testing to develop individual student relationships. Teachers can access a variety of learning tools through technology to give students differentiated learning experiences outside of the established curriculum.

4.  More immersive education

Extended reality encompassing virtual, augmented, and mixed reality brings immersive learning experiences to students no matter where they are. A lesson about ancient Egypt can literally come alive when a student puts on a VR headset and walks around a digital version of the time period. Students can experience hard-to-conceptualize current-day topics through extended reality, such as walking among camps of Syrian refugees. This technology enables learning by doing. Students are used to using voice interfaces at home when asking Alexa to define a word when doing homework, but this technology can also support learning and improve education in other ways. Chatbots can deliver lectures via conversational messages and engage students in learning with a communication tool they have become quite comfortable with, such as what CourseQ offers. Ultimately, if chatbots can make the learning process more engaging for students and reduce the workload on human educators, their use in education will continue to grow.

5.  More automated schools

Many schools already rely on online assessments that are flexible, interactive, and efficient to deliver. Automation will continue to alter schools as more smart tools get incorporated, including face-recognition technology to take attendance and autonomous data analysis that informs learning decisions and helps automate administrative tasks, so teachers don't need to analyze the data themselves. When a student interacts with online technology, they leave a digital footprint that informs learning analytics. Automation will also help control building costs, by automatically managing lighting and heating/cooling systems, and help keep students safe with automated school security systems.

Source: https://www.forbes.com/sites/bernardmarr/2020/01/20/the-top-5-tech-trends-that-will-disrupt-education-in-2020the-edtech-innovations-everyone-should-watch/#29bffe352c5b

11
Universities will be essential to meeting the SDGs in 10 years

If one of the purposes of a university is to find solutions to society’s biggest challenges, there could not be a better and more challenging to-do list than the United Nations’ 17 Sustainable Development Goals, the “Global Goals”.

The template of indicators encourages urgent attention on everything from poverty to continuous collaboration, supported by a series of targets that map a pathway to success.

This universal framework, adopted by all UN member states in 2015, has been broadly adopted worldwide and universities are seen as having a crucial role to play through teaching, finding novel solutions in their research and promoting Global Goal ideas across institutions and into our communities.

The United Nations Academic Impact (UNAI) initiative, which seeks specifically to align institutions of higher education, scholarship and research with the United Nations and with each other, has designated 17 global hubs to help take the SDG agenda forward.

De Montfort University in Leicester is currently the UK’s only higher education institution with hub status and has responsibility for SDG 16: peace, justice and strong institutions.

The role was awarded to the university following a positive working relationship with the UN’s Department of Global Communications to energise universities’ support for refugees and migrants in their communities.

Each hub has a responsibility to serve as a resource for best practices for the UNAI network, currently composed of more than 1,400 universities and colleges in more than 130 countries worldwide.

For universities seeking to champion an SDG, there may be some more obvious and tempting goals among the 17 to take on, for example SDG 1: no poverty, SDG 3: good health and well-being or SDG 4: quality education. These all feel like an easy sell when asking academic colleagues to deliver work on them.

But since becoming the SDG 16 hub, university academic and professional staff at DMU have revelled in the challenge of understanding the goal and its interlinkages within their own fields.

Last month, for example, there were 15 senior East Midlands police detectives on campus discussing their biggest challenges in organised and serious crime to 30 academic staff united by the goal of seeking impact on SDG 16 targets.

It is at lectures and events like this that researchers representing communities from the arts, health, business, law, technology or media have enthusiastically come together to share their knowledge, and in some cases rethink how they apply it in a new context. The real opportunity of being an SDG hub lies in using the Global Goals to unite academic staff behind a common objective.

One of the challenges for universities around the SDGs is how to progress towards targets on global issues at a time when UK institutions are increasingly being asked to focus on their local civic and community commitments.

There is no simple solution to this tension, other than to adopt the mantra of “think globally, act locally”.  The majority of the targets across the SDGs can be as relevant in our cities and rural communities as in developing countries. And potential solutions can be tested here and scaled.

Universities have already made great strides in getting their own houses in order to demonstrate leadership on these issues. Climate emergency declarations, plastic reduction schemes, investment in heating efficiency and even meat bans are now commonplace in UK higher education.

In the communities we serve, the SDG messages are being shared and exciting projects co-created and developed around them.

Furthermore, many international issues often manifest themselves closer to home. The hidden crisis of human trafficking and its link to UK business supply chains is one stark example.

Momentum is growing across the sector and league table rankings, such as Times Higher Education’s University Impact Rankings, have helped focus minds. Now that the UN’s “Decade of Action” to accelerate the impact of the SDGs has begun we must act. Missing the targets for 2030 is not an option.

Mark Charlton is associate director of public engagement in the directorate of social impact and engagement at De Montfort University.

Mr. Charlton will be speaking at the THE University Impact Forum: Peace, Justice & Strong Institutions at the University of Deusto in Bilbao, Spain, 20-21 February 2020.

Source: https://www.timeshighereducation.com/blog/universities-will-be-essential-meeting-sdgs-10-years

12
From furniture to iPhones, young Indians are renting everything

Millennials are reshaping society, transforming everything from technology to social institutions, values and the shape and direction of the economy. This remarkable generation, now aged 24 to 39, is building the so-called shared economy. Conventional ideas of ownership hold little appeal for them, and they see long-term commitments as chains. Millennials spend their money on "experiences"; permanent ownership means little to them. This outlook is why the business of renting out everything from furniture to cell phones is becoming popular worldwide.

In Bangladesh, renting out wedding outfits, event-management services and air conditioners is an old local business. In India, however, app-based rental commerce is now in full swing. Companies such as Furlenco, RentoMojo and GrabOnRent rent out all manner of everyday goods at low prices, and many also move the goods free of charge.

The news agency AFP recently published a report on the trend. In it, Spandan Sharma, a 29-year-old from Mumbai, says he owns no flat, no car, not even a chair. People like him are a growing share of India's millennials. They are shattering conventional notions of ownership, renting everything from furniture to iPhones rather than buying.



Spandan Sharma has furnished his home for 4,247 rupees a month; everything from the furniture to the fridge and microwave oven is rented. His father, who worked at a state-owned bank, saved little by little to buy a flat and a car, but Sharma has learned to see life differently: he believes in "investing in experiences". In just seven years he has had a place of his own in five cities across two countries, something unthinkable for his father but now simply reality for Sharma.

It is not only for the home: millennials are renting office equipment too, says young entrepreneur Vandita Morarka. The 25-year-old founded the women's rights non-profit One Future Collective in 2017 and has rented almost everything in her office, from tables and chairs to laptops. By saving so much up-front investment, she says, she can pay her 25 staff properly. "The system lets me take more risks," she says. "If we ever have to move the office somewhere far away, we won't need to invest a large sum again, and there is no hassle of hauling things around."

Industry insiders say renting has emerged as a highly promising business: young people no longer want to buy anything, and unlike older generations they have no qualms about sharing possessions.
Geetansh Bamania, founder of Bengaluru-based RentoMojo, says he expects one million orders within 30 months. The company rents out home and office furniture, household goods, gym equipment, iPhones and smart home devices such as Google Home and Amazon Echo. Renting smartphones, Bamania says, lets young people experience the latest premium device on the market at a fraction of the cost.

Furlenco was founded in 2012 by former investment banker Ajith Karimpana. The company now has more than 100,000 customers, and Ajith expects its revenue to exceed USD 300 million by 2023.

Several research firms note that in the United States, websites such as Rent the Runway and Nuuly encourage fashion-conscious customers to rent clothes instead of buying them, while Chinese customers can even rent BMW cars through an app. Seeing such successes abroad, the rental business is booming in India too: furniture, home appliances and even gold jewellery can now be rented through apps.

Consultancy PricewaterhouseCoopers (PwC) estimates that annual revenue from app-based rental businesses will reach USD 335 billion by 2025. Another research firm, Research Nester, estimates that the furniture rental market in India alone will be worth USD 1.89 billion by 2025.

Adapted from NDTV by Umme Salma

Source: https://bonikbarta.net/home/news_description/216528/%E0%A6%AB%E0%A6%BE%E0%A6%B0%E0%A7%8D%E0%A6%A8%E0%A6%BF%E0%A6%9A%E0%A6%BE%E0%A6%B0-%E0%A6%A5%E0%A7%87%E0%A6%95%E0%A7%87-%E0%A6%86%E0%A6%87%E0%A6%AB%E0%A7%8B%E0%A6%A8-%E0%A6%B8%E0%A6%AC%E0%A6%87-%E0%A6%AD%E0%A6%BE%E0%A7%9C%E0%A6%BE%E0%A7%9F-%E0%A6%A8%E0%A6%BF%E0%A6%9A%E0%A7%8D%E0%A6%9B%E0%A7%87-%E0%A6%AD%E0%A6%BE%E0%A6%B0%E0%A6%A4%E0%A7%80%E0%A7%9F-%E0%A6%A4%E0%A6%B0%E0%A7%81%E0%A6%A3%E0%A6%B0%E0%A6%BE?fbclid=IwAR178MP6P3-18cM4aWHzDeM0ngGghkc19axl1HcB4eARUbvny9xfDdWsSD4

13
Manager and machine: The new leadership equation

In a 1967 McKinsey Quarterly article, “The manager and the moron,” Peter Drucker noted that “the computer makes no decisions; it only carries out orders. It’s a total moron, and therein lies its strength. It forces us to think, to set the criteria. The stupider the tool, the brighter the master has to be—and this is the dumbest tool we have ever had.”1
How things have changed. After years of promise and hype, machine learning has at last hit the vertical part of the exponential curve. Computers are replacing skilled practitioners in fields such as architecture, aviation, the law, medicine, and petroleum geology—and changing the nature of work in a broad range of other jobs and professions. Deep Knowledge Ventures, a Hong Kong venture-capital firm, has gone so far as to appoint a decision-making algorithm to its board of directors.

What would it take for algorithms to take over the C-suite? And what will be senior leaders’ most important contributions if they do? Our answers to these admittedly speculative questions rest on our work with senior leaders in a range of industries, particularly those on the vanguard of the big data and advanced-analytics revolution. We have also worked extensively alongside executives who have been experimenting most actively with opening up their companies and decision-making processes through crowdsourcing and social platforms within and across organizational boundaries.

Our argument is simple: the advances of brilliant machines will astound us, but they will transform the lives of senior executives only if managerial advances enable them to. There’s still a great deal of work to be done to create data sets worthy of the most intelligent machines and their burgeoning decision-making potential. On top of that, there’s a need for senior leaders to “let go” in ways that run counter to a century of organizational development.

If these two things happen—and they’re likely to, for the simple reason that leading-edge organizations will seize competitive advantage and be imitated—the role of the senior leader will evolve. We’d suggest that, ironically enough, executives in the era of brilliant machines will be able to make the biggest difference through the human touch. By this we mean the questions they frame, their vigor in attacking exceptional circumstances highlighted by increasingly intelligent algorithms, and their ability to do things machines can’t. That includes tolerating ambiguity and focusing on the “softer” side of management to engage the organization and build its capacity for self-renewal.

The most impressive examples of machine learning substituting for human pattern recognition—such as the IBM supercomputer Watson’s potential to predict oncological outcomes more accurately than physicians by reviewing, storing, and learning from reams of medical-journal articles—result from situations where inputs are of high quality. Contrast that with the state of affairs pervasive in many organizations that have access to big data and are taking a run at advanced analytics. The executives in these companies often find themselves beset by “polluted” or difficult-to-parse data, whose validity is subject to vigorous internal debates.

This isn’t an article about big data per se—in recent Quarterly articles we’ve written extensively on what senior executives must do to address these issues—but we want to stress that “garbage in/garbage out” applies as much to supercomputers as it did 50 years ago to the IBM System/360.2 This management problem, which transcends CIOs and the IT organization, speaks to the need for a turbocharged data-analytics strategy, a new top-team mind-set, fresh talent approaches, and a concerted effort to break down information silos. These issues also transcend number crunching; as our colleagues have explained elsewhere, “weak signals” from social media and other sources also contain powerful insights and should be part of the data-creation process.3
The incentives for getting this right are large—early movers should be able to speed the quality and pace of decision making in a wide range of tactical and strategic areas, as we already see from the promising results of early big data and analytics efforts. Furthermore, early movers will probably gain new insights from their analysis of unstructured data, such as e-mail discussions between sales representatives or discussion threads in social media. Without behavioral shifts by senior leaders, though, their organizations won’t realize the full power of the artificial intelligence at their fingertips. The challenge lies in part with the very notion that machine-learning insights are at the fingertips of senior executives.

That’s certainly an appealing prospect: customized dashboards full of metadata describing and synthesizing deeper and more detailed operational, financial, and marketing information hold enormous power for the senior team. But these dashboards don’t create themselves. Senior executives must find and set the software parameters needed to determine, for instance, which data gets prioritized and which gets flagged for escalation. It’s no overstatement to say that these parameters determine the direction of the company—and the success of executives in guiding it there; for example, a bank can shift the mix between lending and deposit taking by changing prices. Machines may be able to adjust prices in real time, but executives must determine the target. Similarly, machines can monitor risks, but only after executives have determined the level of risk they’re comfortable with.

Consider also the challenge posed by today’s real-time sales data, which can be sliced by location, product, team, and channel. Previous generations of managers would probably have given their eyeteeth for that capability. Today’s unwary executive risks drowning in minutiae, though. Some are already reacting by distancing themselves from technology—for instance, by employing layers of staffers to screen data, which gets turned into more easily digestible PowerPoint slides. In so doing, however, executives risk getting a “filtered” view of reality that misses the power of the data available to them.

As artificial intelligence grows in power, the odds of sinking under the weight of even quite valuable insights grow as well. The answer isn’t likely to be bureaucratizing information, but rather democratizing it: encouraging and expecting the organization to manage itself without bringing decisions upward. Business units and company-wide functions will of course continue reporting to the top team and CEO. But emboldened by sharper insights and pattern recognition from increasingly powerful computers, business units and functions will be able to make more and better decisions on their own. Reviewing the results of those decisions, and sharing the implications across the management team, will actually give managers lower down in the organization new sources of power vis-à-vis executives at the top. That will happen even as the CEO begins to morph, in part, into a “chief experimentation officer,” who draws from acute observance of early signals to bolster a company’s ability to experiment at scale, particularly in customer-facing industries.

We’ve already seen flashes of this development in companies that open up their strategy-development process to a broader range of internal and external participants. Companies such as 3M, Dutch insurer AEGON, Red Hat (the leading provider of Linux software), and defense contractor Rite-Solutions have found that the advantages include more insightful and actionable strategic plans, as well as greater buy-in from participants, since they helped to craft the plan in the first place.4

In a world where artificial intelligence supports all manner of day-to-day management decisions, the need to “let go” will be more significant and the discomfort for senior leaders higher. To some extent, we’re describing a world where top executives’ sources of comparative advantage are eroding because of technology and the manifested “brilliance of crowds.” The contrast with the command-and-control era—when holding information close was a source of power, and information moved in one direction only, up the corporate hierarchy—could not be starker. Uncomfortable as this new world may be, the costs of the status quo are large and growing: information hoarders will slow the pace of their organizations and forsake the power of artificial intelligence while competitors exploit it.

The human edge
If senior leaders successfully fuel the insights of increasingly brilliant machines and devolve decision-making authority up and down the line, what will be left for top management to do?

Asking questions
A great deal, as it turns out—starting with asking good questions. Asking the right questions of the right people at the right times is a skill set computers lack and may never acquire. To be sure, the exponential advances of deep-learning algorithms mean that executive expertise, which typically runs deep in a particular domain or set of domains, is sometimes inferior to (or can get in the way of) insights generated by deep-learning algorithms, big data, and advanced analytics. In fact, there’s a case for using an executive’s domain expertise to frame the upfront questions that need asking and then turning the machines loose to answer those questions. That’s a role for the people with an organization’s strongest judgment: the senior leaders.

The importance of questions extends beyond steering machines, to interpreting their output. Recent history demonstrates the risk of relying on technology-based algorithmic insights without fully understanding how they drive decision making, for that makes it impossible to manage business and reputational risks (among others) properly. The potential for disaster is not small. The foremost cautionary tale, of course, comes from the banks prior to the 2008 financial crisis: C-suite executives and the managers one and two levels below them at major institutions did not fully understand how decisions were made in the “quant” areas of trading and asset management.

Algorithms and artificial intelligence may broaden this kind of analytical complexity beyond the financial world, to a whole new set of decision areas—again placing a premium on the tough questions senior leaders can ask. Penetrating this new world of analytical complexity is likely to be difficult, and an increasingly important role for senior executives may be establishing a set of small, often improvisatory, experiments to get a better handle on the implications of emerging insights and decision rules, as well as their own managerial styles.

Attacking exceptions
An increasingly important element of each leader’s management tool kit is likely to be the ability to attack problematic “exceptions” vigorously. Smart machines should get better and better at telling managers when they have a problem. Early evidence of this development is coming in data-intensive areas, such as pricing or credit departments or call centers—and the same thing will probably happen in more strategic areas, ranging from competitive analysis to talent management, as information gets better and machines get smarter. Executives can therefore spend less time on day-to-day management issues, but when the exception report signals a difficulty, the ability to spring into action will help executives differentiate themselves and the health of their organizations.

Senior leaders will have to draw on a mixture of insight—examining exceptions to see if they require interventions, such as new credit limits for a big customer or an opportunity to start bundling a new service with an existing product—and inspiration, as leaders galvanize the organization to respond quickly and work in new ways. Exceptions may pave the way for innovation too, something we already see as leading-edge retailers and financial-services firms mine large sets of customer data.

Tolerating ambiguity
While algorithms and supercomputers are designed to seek answers, they are likely to be most definitive on relatively small questions. The bigger and broader the inquiry, the more likely that human synthesis will be central to problem solving, because machines, though they learn rapidly, provide many pieces without assembling the puzzle. That process of assembly and synthesis can be messy and slow, placing a fresh premium on the senior leaders’ ability to tolerate ambiguity.

A straightforward example is the comfort digitally oriented executives are beginning to feel with a wide range of A/B testing to see what does and does not appeal to users or customers online. A/B testing is a small-scale version of the kind of experimentation that will increasingly hold sway as computers gain power, with fully fledged plans of action giving way to proof-of-concept (POC) ones, which make no claim to be either comprehensive or complete. POCs are a way to feel your way in uncertain terrain. Companies take an action, look at the result, and then push on to the next phase, step by step.
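As a concrete sketch of the kind of experiment described above, the outcome of a simple A/B test can be assessed with a standard two-proportion z-test; the visitor and conversion counts below are hypothetical.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing the conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts at 5.8% vs. A's 5.0%, with 10,000 visitors each.
z = two_proportion_z(500, 10_000, 580, 10_000)
print(round(z, 2))  # → 2.5; |z| > 1.96 is significant at the 5% level
```

Each step of a proof-of-concept rollout can be gated on a check like this before the company pushes on to the next phase.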

This necessary process will increasingly enable companies to proceed without knowing exactly where they’re going. For executives, this will feel rather like stumbling along in the dark; reference points can be few. Many will struggle with the uncertainty this approach provokes and wrestle with the temptation to engineer an outcome before sufficient data emerge to allow an informed decision. The trick will be holding open a space for the emergence of new insights and using subtle interventions to keep the whole journey from going off the cliff. What’s required, for executives, is the ability to remain in a state of unknowing while constantly filtering and evaluating the available information and its sources, tolerating tension and ambiguity, and delaying decisive action until clarity emerges. In such situations, the temptation to act quickly may provide a false sense of security and reassurance—but may also foreclose on potentially useful outcomes that would have emerged in the longer run.

Employing ‘soft’ skills
Humans have and will continue to have a strong comparative advantage when it comes to inspiring the troops, empathizing with customers, developing talent, and the like. Sometimes, machines will provide invaluable input, as Laszlo Bock at Google has famously shown in a wide range of human-resource data-analytics efforts. But translating this insight into messages that resonate with organizations will require a human touch. No computer will ever manage by walking around. And no effective executive will try to galvanize action by saying, “we’re doing this because an algorithm told us to.” Indeed, the contextualization of small-scale machine-made decisions is likely to become an important component of tomorrow’s leadership tool kit. While this article isn’t the place for a discourse on inspirational leadership, we’re firmly convinced that simultaneous growth in the importance of softer management skills and technology savvy will boost the complexity and richness of the senior-executive role.

How different is tomorrow’s effective leader from those of the past? In Peter Drucker’s 1967 classic, The Effective Executive, he described a highly productive company president who “accomplished more in [one] monthly session than many other and equally able executives get done in a month of meetings.” Yet this executive “had to resign himself to having at least half his time taken up by things of minor importance and dubious value … specific decisions on daily problems that should not have reached him but invariably did.”5 There should be less of dubious value coming across the senior executive’s desk in the future. This will be liberating—but also raises the bar for the executive’s ability to master the human dimensions that ultimately will provide the edge in the era of brilliant machines.

About the author(s)
Martin Dewhurst and Paul Willmott are directors in McKinsey’s London office.

Source: https://www.mckinsey.com/featured-insights/leadership/manager-and-machine?cid=other-eml-cls-mip-mck&hlkid=87157089145745578f14324d0af83e00&hctky=1370656&hdpid=5de210fe-6adc-4201-9d10-1e5f9c04e8ef

14
Why digital transformation is now on the CEO’s shoulders

Big data, the Internet of Things, and artificial intelligence hold such disruptive power that they have inverted the dynamics of technology leadership.

When science and technology meet social and economic systems, you tend to see something akin to what the late Stephen Jay Gould called “punctuated equilibrium” in his description of evolutionary biology. Something that has been stable for a long period is suddenly disrupted radically—and then settles into a new equilibrium.1 Analogues across social and economic history include the discovery of fire, the domestication of dogs, the emergence of agricultural techniques, and, in more recent times, the Gutenberg printing press, the Jacquard loom, urban electrification, the automobile, the microprocessor, and the Internet. Each of these innovations collided with a society that had been in a period of relative stasis—followed by massive disruption.

Punctuated equilibrium is useful as a framework for thinking about disruption in today’s economy. US auto technology has been relatively static since the passage of the federal Interstate Highway Act, in 1956. Now the synchronous arrival of Tesla, Uber, and autonomous vehicles is creating chaos. When it’s over, a new equilibrium will emerge. Landline operators were massively disrupted by cell phones, which in turn were upended by the introduction of the iPhone, in 2007—which, in the following decade, has settled into a new stasis, with handheld computing changing the very nature of interpersonal communication.

The evidence suggests that we are seeing a mass disruption in the corporate world like Gould’s recurring episodes of mass species extinction. Since 2000, over 50 percent of Fortune 500 companies have been acquired, merged, or declared bankruptcy, with no end in sight. In their wake, we are seeing a mass “speciation” of innovative corporate entities with largely new DNA, such as Amazon, Box, Facebook, Square, Twilio, Uber, WeWork, and Zappos.

Mass-extinction events don’t just happen for no reason. In the current extinction event, the causal factor is digital transformation.

Awash in information
Digital transformation is everywhere on the agendas of corporate boards and has risen to the top of CEOs’ strategic plans. Before the ubiquity of the personal computer or the Internet, the late Harvard sociologist Daniel Bell predicted the advent of the Information Age in his seminal work The Coming of Post-Industrial Society.2 The resulting structural change in the global economy, he wrote, would be on the order of the Industrial Revolution. In the subsequent four decades, the dynamics of Moore’s law and the associated technological advances of minicomputers, relational databases, personal computers, the Internet, and the smartphone have created a thriving $2 trillion information-technology industry—much as Bell foretold.

In the 21st century, Bell’s dynamic is accelerating, with the introduction of new disruptive technologies, including big data, artificial intelligence (AI), elastic cloud computing (the cloud), and the Internet of Things (IoT). The smart grid is a compelling example of these forces at work. Today’s electric-power grid—composed of billions of electric meters, transformers, capacitors, phasor measurement units, and power lines—is perhaps the largest and most complex machine ever developed.3 An estimated $2 trillion is being spent this decade to “sensor” that value chain by upgrading or replacing the multitude of devices in the grid infrastructure so that all of them are remotely machine addressable.4

When a power grid is fully connected, utilities can aggregate, evaluate, and correlate the interactions and relationships of vast quantities of data from all manner of devices—plus weather, load, and generation-capacity information—in near real time. They can then apply AI machine-learning algorithms to those data to optimize the operation of the grid, reduce the cost of operation, enhance resiliency, increase reliability, harden cybersecurity, enable a bidirectional power flow, and reduce greenhouse-gas emissions. The power of IoT, cloud computing, and AI spells the digital transformation of the utility industry.

A virtuous cycle is at work here. The network effects of interconnected and sensored customers, local power production, and storage (all ever cheaper) make more data available for analysis, rendering the deep-learning algorithms of AI more accurate and making for an increasingly efficient smart grid. Meanwhile, as big data sets become staggeringly large, they change the nature of business decisions. Historically, computation was performed on data samples, statistical methods were employed to draw inferences from those samples, and the inferences were in turn used to inform business decisions. Big data means we perform calculations on all the data; there is no sampling error. This enables AI—a previously unattainable class of computation that uses machine and deep learning to develop self-learning algorithms—to perform precise predictive and prescriptive analytics.

The benefits are breathtaking. All value chains will be disrupted: defense, education, financial services, government services, healthcare, manufacturing, oil and gas, retail, telecommunications, and more. To give some flavor to this:

Healthcare. Soon all medical devices will be sensored, as will patients. Healthcare records and genome sequences will be digitized. Sensors will remotely monitor pulse, blood chemistry, hormone levels, blood pressure, temperature, and brain waves. With AI, disease onset can be accurately predicted and prevented. AI-augmented best medical practices will be more uniformly applied.
Oil and gas. Operators will use predictive maintenance to monitor production assets and predict and prevent device failures, from submersible oil pumps to offshore oil rigs. The result will be a lower cost of production and a lower environmental impact.
Manufacturing. Companies are employing IoT-enabled inventory optimization to lower inventory carrying costs, predictive maintenance to lower the cost of production and increase product reliability, and supply-network risk mitigation to assure timely product delivery and manufacturing efficiency.

Source: https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/why-digital-transformation-is-now-on-the-ceos-shoulders?cid=other-eml-cls-mip-mck&hlkid=bd8b865d926e45aa9859f6e89b216204&hctky=1370656&hdpid=5de210fe-6adc-4201-9d10-1e5f9c04e8ef

15
The promise and challenge of the age of artificial intelligence

AI promises considerable economic benefits, even as it disrupts the world of work. These three priorities will help achieve good outcomes.

The time may have finally come for artificial intelligence (AI) after periods of hype followed by several “AI winters” over the past 60 years. AI now powers so many real-world applications, ranging from facial recognition to language translators and assistants like Siri and Alexa, that we barely notice it. Along with these consumer applications, companies across sectors are increasingly harnessing AI’s power in their operations. Embracing AI promises considerable benefits for businesses and economies through its contributions to productivity growth and innovation. At the same time, AI’s impact on work is likely to be profound. Some occupations as well as demand for some skills will decline, while others grow and many change as people work alongside ever-evolving and increasingly capable machines.

This briefing pulls together various strands of research by the McKinsey Global Institute into AI technologies and their uses, limitations, and impact. It was compiled for the Tallinn Digital Summit that took place in October 2018. The briefing concludes with a set of issues that policy makers and business leaders will need to address to soften the disruptive transitions likely to accompany its adoption.

AI’s time may have finally come, but more progress is needed
The term “artificial intelligence” was popularized at a conference at Dartmouth College in the United States in 1956 that brought together researchers on a broad range of topics, from language simulation to learning machines.

Despite periods of significant scientific advances in the six decades since, AI has often failed to live up to the hype that surrounded it. Decades were spent trying to describe human intelligence precisely, and the progress made did not deliver on the earlier excitement. Since the late 1990s, however, technological progress has gathered pace, especially in the past decade. Machine-learning algorithms have progressed, especially through the development of deep learning and reinforcement-learning techniques based on neural networks.

Several other factors have contributed to the recent progress. Exponentially more computing capacity has become available to train larger and more complex models; this has come through silicon-level innovation including the use of graphics processing units and tensor processing units, with more on the way. This capacity is being aggregated in hyperscale clusters, increasingly being made accessible to users through the cloud.

Another key factor is the massive amounts of data being generated and now available to train AI algorithms. Some of the progress in AI has been the result of system-level innovations. Autonomous vehicles are a good illustration of this: they take advantage of innovations in sensors, LIDAR, machine vision, mapping and satellite technology, navigation algorithms, and robotics all brought together in integrated systems.

Despite the progress, many hard problems remain that will require more scientific breakthroughs. So far, most of the progress has been in what is often referred to as “narrow AI”—where machine-learning techniques are being developed to solve specific problems, for example, in natural language processing. The harder issues are in what is usually referred to as “artificial general intelligence,” where the challenge is to develop AI that can tackle general problems in much the same way that humans can. Many researchers consider this to be decades away from becoming reality.

Deep learning and machine-learning techniques are driving AI
Much of the recent excitement about AI has been the result of advances in the field known as deep learning, a set of techniques to implement machine learning that is based on artificial neural networks. These AI systems loosely model the way that neurons interact in the brain. Neural networks have many (“deep”) layers of simulated interconnected neurons, hence the term “deep learning.” Whereas earlier neural networks had only three to five layers and dozens of neurons, deep learning networks can have ten or more layers, with simulated neurons numbering in the millions.
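That layered structure can be illustrated in a few lines of code; the weights below are arbitrary placeholders, not trained values, and a real network would have vastly more neurons per layer.

```python
def relu(xs):
    # Common nonlinearity: negative activations are clipped to zero.
    return [max(0.0, v) for v in xs]

def layer(inputs, weights, biases):
    # One dense layer: each simulated neuron takes a weighted sum of all inputs.
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

x = [1.0, -2.0]                                         # input features
h1 = layer(x, [[0.5, -0.3], [0.8, 0.1]], [0.0, 0.1])    # hidden layer 1
h2 = layer(h1, [[1.0, -1.0], [0.2, 0.4]], [0.0, 0.0])   # hidden layer 2
print(h2)   # each layer re-represents the previous layer's output
```

Stacking ten or more such layers, with millions of neurons, is what makes a network “deep.”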

There are several types of machine learning: supervised learning, unsupervised learning, and reinforcement learning, with each best suited to certain use cases. Most current practical examples of AI are applications of supervised learning. In supervised learning, often used when labeled data are available and the preferred output variables are known, training data are used to help a system learn the relationship of given inputs to a given output—for example, to recognize objects in an image or to transcribe human speech.
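A toy instance of supervised learning: ordinary least squares recovers the input-to-output relationship from a handful of labeled pairs (the data are invented for illustration).

```python
# Labeled training data: inputs xs with known outputs ys (hypothetical).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]

# Fit y ≈ a*x + b by ordinary least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(round(a, 2), round(b, 2))   # → 1.99 0.05: the learned relationship

pred = a * 5.0 + b                # predict the output for an unseen input
```

The "supervision" is the labels themselves: the known outputs steer the fit.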

Unsupervised learning is a set of techniques used without labeled training data—for example, to detect clusters or patterns, such as images of buildings that have similar architectural styles, in a set of existing data.
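By contrast, a few iterations of k-means, shown here as a stripped-down one-dimensional sketch with invented numbers, find cluster structure without any labels at all.

```python
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]   # hypothetical unlabeled data
c1, c2 = points[0], points[-1]            # initial cluster centers

for _ in range(5):
    # Assign each point to its nearest center, then move each center
    # to the mean of its assigned points.
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(sorted(g1), sorted(g2))   # the grouping emerges from the data alone
```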

In reinforcement learning, systems are trained by receiving virtual “rewards” or “punishments,” often through a scoring system, essentially learning by trial and error. Through ongoing work, these techniques are evolving.
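That trial-and-error loop can be sketched on a two-armed bandit; the payout probabilities are invented, and a simple epsilon-greedy rule stands in for more sophisticated algorithms.

```python
import random

random.seed(0)
true_reward = [0.3, 0.7]     # hidden payout probabilities (hypothetical)
value = [0.0, 0.0]           # the agent's learned value estimates
counts = [0, 0]

for _ in range(2000):
    # Explore a random arm 10% of the time, otherwise exploit the best-known arm.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = value.index(max(value))
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]   # running mean

print(value.index(max(value)))   # the arm the agent has come to prefer
```

The “reward” signal alone, with no labeled examples, is enough for the estimates to converge toward the better-paying arm.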

Limitations remain, although new techniques show promise
AI still faces many practical challenges, though new techniques are emerging to address them. Machine learning can require large amounts of human effort to label the training data necessary for supervised learning. In-stream supervision, in which data can be labeled in the course of natural usage, and other techniques could help alleviate this issue.

Obtaining data sets that are sufficiently large and comprehensive to be used for training—for example, creating or obtaining sufficient clinical-trial data to predict healthcare treatment outcomes more accurately—is also often challenging.

The “black box” complexity of deep learning techniques creates the challenge of “explainability,” or showing which factors led to a decision or prediction, and how. This is particularly important in applications where trust matters and predictions carry societal implications, as in criminal justice applications or financial lending. Some nascent approaches, including local interpretable model-agnostic explanations (LIME), aim to increase model transparency.
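In the same spirit, though far simpler than LIME itself, a crude perturbation probe can hint at which input drives a model's output; the model, its weights, and the inputs below are all hypothetical.

```python
def model(credit_score, income):
    # Opaque stand-in for a learned model (hypothetical weights).
    return 0.004 * credit_score + 0.002 * income

base = {"credit_score": 650.0, "income": 400.0}
base_out = model(**base)

effects = {}
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.10                  # perturb one input by 10%
    effects[name] = round(model(**bumped) - base_out, 3)

print(effects)   # the larger shift points at the more influential input
```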

Another challenge is that of building generalized learning techniques, since AI techniques continue to have difficulties in carrying their experiences from one set of circumstances to another. Transfer learning, in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity, is one promising response to this challenge.
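A stylized sketch of that idea: a frozen "pretrained" feature extractor is reused, and only a small new head is fit on the target task. The extractor, the data, and the task (y = 2x²) are invented for illustration.

```python
def pretrained_features(x):
    # Stands in for layers learned on the original task; kept frozen.
    return (x, x * x)

# Labeled examples for the new, related task.
data = [(1.0, 2.0), (2.0, 8.0), (3.0, 18.0)]
feats = [pretrained_features(x) for x, _ in data]
targets = [y for _, y in data]

# Fit only the two head weights by solving the 2x2 least-squares normal equations.
s11 = sum(f[0] * f[0] for f in feats)
s12 = sum(f[0] * f[1] for f in feats)
s22 = sum(f[1] * f[1] for f in feats)
t1 = sum(f[0] * y for f, y in zip(feats, targets))
t2 = sum(f[1] * y for f, y in zip(feats, targets))
det = s11 * s22 - s12 * s12
w = ((t1 * s22 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
print(w)   # only these head weights were "trained" on the new task
```

Because the heavy feature-learning is carried over, the new task needs only a tiny amount of data and computation.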

Businesses stand to benefit from AI
While AI is increasingly pervasive in consumer applications, businesses are beginning to adopt it across their operations, at times with striking results.

AI’s potential cuts across industries and functions
AI can be used to improve business performance in areas including predictive maintenance, where deep learning’s ability to analyze large amounts of high-dimensional data from audio and images can effectively detect anomalies in factory assembly lines or aircraft engines. In logistics, AI can optimize routing of delivery traffic, improving fuel efficiency and reducing delivery times. In customer service management, AI has become a valuable tool in call centers, thanks to improved speech recognition. In sales, combining customer demographic and past transaction data with social media monitoring can help generate individualized “next product to buy” recommendations, which many retailers now use routinely.
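The anomaly-detection idea behind predictive maintenance can be caricatured in a few lines: flag a sensor reading that strays more than three standard deviations from its recent baseline. The readings and the threshold are invented; production systems use far richer models.

```python
import statistics

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 14.5]   # hypothetical vibration data
baseline = readings[:-1]                               # recent "normal" history
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

latest = readings[-1]
is_anomaly = abs(latest - mu) > 3 * sigma              # three-sigma rule
print(is_anomaly)   # the spike at 14.5 is flagged for maintenance
```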

Such practical AI use cases and applications can be found across all sectors of the economy and multiple business functions, from marketing to supply chain operations. In many of these use cases, deep learning techniques primarily add value by improving on traditional analytics techniques.

Our analysis of more than 400 use cases across 19 industries and nine business functions found that AI improved on traditional analytics techniques in 69 percent of potential use cases (Exhibit 1). In only 16 percent of AI use cases did we find a “greenfield” AI solution that was applicable where other analytics methods would not be effective. Our research estimated that deep learning techniques based on artificial neural networks could generate as much as 40 percent of the total potential value that all analytics techniques could provide by 2030. Further, we estimate that several of the deep learning techniques could enable up to $6 trillion in value annually.

Exhibit 1

So far, adoption is uneven across companies and sectors
Although many organizations have begun to adopt AI, the pace and extent of adoption has been uneven. Nearly half of respondents in a 2018 McKinsey survey on AI adoption say their companies have embedded at least one AI capability in their business processes, and another 30 percent are piloting AI. Still, only 21 percent say their organizations have embedded AI in several parts of the business, and barely 3 percent of large firms have integrated AI across their full enterprise workflows.

Other surveys show that early AI adopters tend to think about these technologies more expansively, to grow their markets or increase market share, while companies with less experience focus more narrowly on reducing costs. Highly digitized companies tend to invest more in AI and derive greater value from its use.

At the sector level, the gap between digitized early adopters and others is widening. Sectors highly ranked in MGI’s Industry Digitization Index, such as high tech and telecommunications, and financial services are leading AI adopters and have the most ambitious AI investment plans (Exhibit 2). As these firms expand AI adoption and acquire more data and AI capabilities, laggards may find it harder to catch up.

Exhibit 2

Several challenges to adoption persist
Many companies and sectors lag in AI adoption. Among the barriers most often cited by executives are the absence of an AI strategy with clearly defined benefits, a shortage of talent with the appropriate skill sets, functional silos that constrain end-to-end deployment, and a lack of leadership ownership of and commitment to AI.

On the strategy side, companies will need to develop an enterprise-wide view of compelling AI opportunities, potentially transforming parts of their current business processes. Organizations will need robust data capture and governance processes as well as modern digital capabilities, and be able to build or access the requisite infrastructure. Even more challenging will be overcoming the “last mile” problem of making sure that the superior insights provided by AI are inculcated into the behavior of the people and processes of an enterprise.

On the talent front, much of the construction and optimization of deep neural networks remains an art requiring real expertise. Demand for these skills far outstrips supply; according to some estimates, fewer than 10,000 people have the skills necessary to tackle serious AI problems, and competition for them is fierce. Companies considering the option of building their own AI solutions will need to consider whether they have the capacity to attract and retain workers with these specialized skills.

Economies also stand to benefit from AI, through increased productivity and innovation
Deployment of AI and automation technologies can do much to lift the global economy and increase global prosperity. At a time of aging populations and falling birth rates, productivity growth becomes critical for long-term economic growth. Even in the near term, productivity growth has been sluggish in developed economies, dropping to an average of 0.5 percent in 2010–14 from 2.4 percent a decade earlier in the United States and major European economies. Much like previous general-purpose technologies, AI has the potential to contribute to productivity growth.

AI could contribute to economic impact through a variety of channels
The largest economic impacts of AI will likely be on productivity growth through labor market effects including substitution, augmentation, and contributions to labor productivity.

Our research suggests that labor substitution could account for less than half of the total benefit. AI will augment human capabilities, freeing up workers to engage in more productive and higher-value tasks, and increase demand for jobs associated with AI technologies.

AI can also boost innovation, enabling companies to improve their top line by reaching underserved markets more effectively with existing products, and over the longer term, creating entirely new products and services. AI will also create positive externalities, facilitating more efficient cross-border commerce and enabling expanded use of valuable cross-border data flows. Such increases in economic activity and incomes can be reinvested into the economy, contributing to further growth.

The deployment of AI will also bring some negative externalities that could lower, although not eliminate, the positive economic impacts. On the economic front, these include increased competition that shifts market share from nonadopters to front-runners, the costs associated with managing labor market transitions, and potential loss of consumption for citizens during periods of unemployment, as well as the transition and implementation costs of deploying AI systems.

All in all, these various channels net out to significant positive economic growth, assuming businesses and governments proactively manage the transition. One simulation we conducted using McKinsey survey data suggests that AI adoption could raise global GDP by as much as $13 trillion by 2030, about 1.2 percent additional GDP growth per year. This effect will build up only through time, however, given that most of the implementation costs of AI may be ahead of the revenue potential.
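As a rough plausibility check on those figures (the roughly $80 trillion world-GDP base is our assumption, not taken from the report), an extra 1.2 percent of growth compounded from 2018 to 2030 is indeed on the order of $13 trillion:

```python
base_gdp = 80.0            # world GDP in trillions of US dollars (assumed)
extra_growth = 0.012       # ~1.2 percent additional growth per year
years = 12                 # 2018 -> 2030

# Cumulative uplift from compounding the extra growth over the period.
uplift = base_gdp * ((1 + extra_growth) ** years - 1)
print(round(uplift, 1))    # → 12.3, on the order of the ~$13 trillion cited
```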

The AI readiness of countries varies considerably
The leading enablers of potential AI-driven economic growth, such as investment and research activity, digital absorption, connectedness, and labor market structure and flexibility, vary by country. Our research suggests that the ability to innovate and acquire the necessary human capital skills will be among the most important enablers—and that AI competitiveness will likely be an important factor influencing future GDP growth.

Countries leading the race to supply AI have unique strengths that set them apart. Scale effects enable more significant investment, and network effects enable these economies to attract the talent needed to make the most of AI. For now, China and the United States are responsible for most AI-related research activities and investment.

A second group of countries that includes Germany, Japan, Canada, and the United Kingdom have a history of driving innovation on a major scale and may accelerate the commercialization of AI solutions. Smaller, globally connected economies such as Belgium, Singapore, South Korea, and Sweden also score highly on their ability to foster productive environments where novel business models thrive.

Countries in a third group, including but not limited to Brazil, India, Italy, and Malaysia, are in a relatively weaker starting position, but they exhibit comparative strengths in specific areas on which they may be able to build. India, for instance, produces around 1.7 million graduates a year with STEM degrees—more than the total of STEM graduates produced by all G-7 countries. Other countries, with relatively underdeveloped digital infrastructure, innovation and investment capacity, and digital skills, risk falling behind their peers.

AI and automation will have a profound impact on work
Even as AI and automation bring benefits to business and the economy, major disruptions to work can be expected.

About half of current work activities (not jobs) are technically automatable
Our analysis of the impact of automation and AI on work shows that certain categories of activities are technically more easily automatable than others. They include physical activities in highly predictable and structured environments, as well as data collection and data processing, which together account for roughly half of the activities that people do across all sectors in most economies.

The least susceptible categories include managing others, providing expertise, and interfacing with stakeholders. The density of highly automatable activities varies across occupations, sectors, and, to a lesser extent, countries. Our research finds that about 30 percent of the activities in 60 percent of all occupations could be automated—but that in only about 5 percent of occupations are nearly all activities automatable. In other words, more occupations will be partially automated than wholly automated.

Three simultaneous effects on work: Jobs lost, jobs gained, jobs changed
The pace and extent to which automation is adopted, and its impact on actual jobs, will depend on several factors besides technical feasibility. Among these are the cost of deployment and adoption, and labor market dynamics, including the quantity and quality of labor supply and associated wages. The labor factor leads to wide differences between developed and developing economies. Another factor is the business benefit beyond labor substitution, often involving the use of AI for beyond-human capabilities, which contributes to the business case for adoption.

Social norms, social acceptance, and various regulatory factors will also determine the timing. How all these factors play out across sectors and countries will vary, and for countries will largely be driven by labor market dynamics. For example, in advanced economies with relatively high wage levels, such as France, Japan, and the United States, the share of jobs affected by automation could be more than double that in India.

Given the interplay of all these factors, it is difficult to make predictions but possible to develop various scenarios. First, on jobs lost: our midpoint adoption scenario for 2016 to 2030 suggests that about 15 percent of the global workforce (400 million workers) could be displaced by automation (Exhibit 3).

Exhibit 3

Second, jobs gained: we developed scenarios for labor demand to 2030 based on anticipated economic growth through productivity and by considering several drivers of demand for work. These included rising incomes, especially in emerging economies, as well as increased spending on healthcare for aging populations, investment in infrastructure and buildings, energy transition spending, and spending on technology development and deployment.

The number of jobs gained through these and other catalysts could range from 555 million to 890 million, or 21 to 33 percent of the global workforce. This suggests that the growth in demand for work, barring extreme scenarios, would more than offset the number of jobs lost to automation. However, it is important to note that in many emerging economies with young populations, there will already be a challenging need to provide jobs to workers entering the workforce and that, in developed economies, the approximate balance between jobs lost and those created in our scenarios is also a consequence of aging, and thus fewer people entering the workforce.
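The workforce shares quoted across these scenarios are mutually consistent: 400 million displaced workers described as 15 percent of the global workforce imply a workforce of roughly 2.7 billion, and against that base the 555 million to 890 million jobs gained indeed come out at about 21 to 33 percent. The implied workforce size is an inference from the quoted figures, not a number stated in the report:

```python
# Cross-check the workforce percentages quoted in the scenarios.
# The implied global workforce size is inferred, not stated in the source.
displaced = 400e6          # workers displaced, midpoint adoption scenario
displaced_share = 0.15     # quoted as 15% of the global workforce

workforce = displaced / displaced_share  # implied global workforce (~2.67 billion)

gained_low, gained_high = 555e6, 890e6   # quoted range of jobs gained
low_share = gained_low / workforce
high_share = gained_high / workforce
print(f"Implied workforce: {workforce / 1e9:.2f} billion")
print(f"Jobs gained: {low_share:.0%} to {high_share:.0%} of the workforce")
```

Dividing the displaced count by its quoted share recovers the workforce base, and the jobs-gained range then reproduces the 21 to 33 percent figures.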

No less significant are the jobs that will change as machines increasingly complement human labor in the workplace. Jobs will change as a result of the partial automation described above, and jobs changed will affect many more occupations than jobs lost. Skills for workers complemented by machines, as well as work design, will need to adapt to keep up with rapidly evolving and increasingly capable machines.

Four workforce transitions will be significant
Even if there will be enough work for people in 2030, as most of our scenarios suggest, the transitions that will accompany automation and AI adoption will be significant.

First, millions of workers will likely need to change occupations. Some of these shifts will happen within companies and sectors, but many will occur across sectors and even geographies. While occupations requiring physical activities in highly structured environments and in data processing will decline, others that are difficult to automate will grow. These could include managers, teachers, nursing aides, and tech and other professionals, but also gardeners and plumbers, who work in unpredictable physical environments. These changes may not be smooth and could lead to temporary spikes in unemployment (Exhibit 4).

Exhibit 4

Second, workers will need different skills to thrive in the workplace of the future. Demand for social and emotional skills such as communication and empathy will grow almost as fast as demand for many advanced technological skills. Basic digital skills have been increasing in all jobs. Automation will also spur growth in the need for higher cognitive skills, particularly critical thinking, creativity, and complex information processing. Demand for physical and manual skills will decline, but these will remain the single largest category of workforce skills in 2030 in many countries. The pace of skill shifts has been accelerating, and it may lead to excess demand for some skills and excess supply for others.

Third, workplaces and workflows will change as more people work alongside machines. As self-checkout machines are introduced in stores, for example, cashiers will shift from scanning merchandise themselves to helping answer questions or troubleshoot the machines.

Finally, automation will likely put pressure on average wages in advanced economies. Many of the current middle-wage jobs in advanced economies are dominated by highly automatable activities, in fields such as manufacturing and accounting, which are likely to decline. High-wage jobs will grow significantly, especially for high-skill medical and tech or other professionals. However, a large portion of jobs expected to be created, such as teachers and nursing aides, typically have lower wage structures.

In tackling these transitions, many economies, especially in the OECD, start in a hole, given the existing skill shortages and challenged educational systems, as well as the trends toward declining expenditures on on-the-job training and worker transition support. Many economies are already experiencing income inequality and wage polarization.

AI will also bring both societal benefits and challenges
Alongside the economic benefits and challenges, AI will impact society positively, helping to tackle societal challenges ranging from health and nutrition to equality and inclusion. However, it also creates pitfalls that will need to be addressed, including unintended consequences and misuse.

AI could help tackle some of society’s most pressing challenges
By automating routine or unsafe activities and those prone to human error, AI could allow humans to be more productive and to work and live more safely. One study looking at the United States estimates that replacing human drivers with more accurate autonomous vehicles could save thousands of lives per year by reducing accidents.

AI can also reduce the need for humans to work in unsafe environments such as offshore oil rigs and coal mines. DARPA, for example, is testing small robots that could be deployed in disaster areas to reduce the need for humans to be put in harm’s way. Several AI capabilities are especially relevant. Image classification performed on photos of skin taken via a mobile phone app could evaluate whether moles are cancerous, facilitating early-stage diagnosis for individuals with limited access to dermatologists. Object detection can help visually impaired people navigate and interact with their environment by identifying obstacles such as cars and lamp posts. Natural language processing could be used to track disease outbreaks by monitoring and analyzing text messages in local languages.

Our work and that of others has highlighted numerous use cases across many domains where AI could be applied for social good. For these AI-enabled interventions to be effectively applied, several barriers must be overcome. These include the usual challenges of data, computing, and talent availability faced by any organization trying to apply AI, as well as more basic challenges of access, infrastructure, and financial resources that are particularly acute in remote or economically challenged places and communities.

AI will need to address societal concerns including unintended consequences, misuse, algorithmic bias, and questions about data privacy
In economic terms, difficult questions will need to be addressed about the widening economic gaps across individuals, firms, sectors, and even countries that might emerge as an unintended consequence of deployment. Other areas of concern include the use and misuse of AI. These range from use in surveillance and military applications to use in social media and politics, and where the impact has social consequences such as in criminal justice systems. We must also consider the potential for users with malicious intent, including in areas of cybersecurity. Multiple research efforts are currently under way to identify best practices and address such issues in academic, nonprofit, and private-sector research.

Some concerns are directly related to the way algorithms and the data used to train them may introduce new biases or perpetuate and institutionalize existing social and procedural biases. For example, facial recognition models trained on a population of faces corresponding to the demographics of artificial intelligence developers may not reflect the broader population.

Data privacy and use of personal information are also critical issues to address if AI is to realize its potential. Europe has led the way in this area with the General Data Protection Regulation, which introduced more stringent consent requirements for data collection, gives users the right to be forgotten and the right to object, and strengthens supervision of organizations that gather, control, and process data, with significant fines for failures to comply. Cybersecurity and “deep fakes” that could manipulate election results or perpetrate large-scale fraud are also a concern.

Three priorities for achieving good outcomes
The potential benefits of AI to business and the economy, and the way the technology addresses some of the societal challenges, should encourage business leaders and policy makers to embrace and adopt AI. At the same time, the potential challenges to adoption, including workforce impacts, and other social concerns cannot be ignored. Key challenges to be addressed include:

The deployment challenge
We have an interest in embracing AI, given its likely contributions to business value, economic growth, and social good, at a time when many economies need to boost productivity. Businesses and countries have a strong incentive to keep up with global leaders such as the United States and China. Increased and broad deployment will require accelerating the progress being made on the technical challenges, as well as making sure that all potential users have access to AI and can benefit from it. Among the measures that may be needed:

Investing in and continuing to advance AI research and innovation in a manner that ensures that the benefits can be shared by all.
Expanding available data sets, especially in areas where their use would drive wider benefits for the economy and society.
Investing in AI-relevant human capital and infrastructure to broaden the talent base capable of creating and executing AI solutions to keep pace with global AI leaders.
Encouraging increased AI literacy among business leaders and policy makers to guide informed decision making.
Supporting existing digitization efforts that form the foundation for eventual AI deployment for both organizations and countries.
The future of work challenge
A starting point for addressing the potential disruptive impacts of automation will be to ensure robust economic and productivity growth, which is a prerequisite for job growth and increasing prosperity. Governments will also need to foster business dynamism, since entrepreneurship and more rapid new business formation will not only boost productivity, but also drive job creation. Addressing the issues related to skills, jobs, and wages will require more focused measures. These include:

Evolving education systems and learning for a changed workplace by focusing on STEM skills as well as creativity, critical thinking, and lifelong learning.
Stepping up private- and public-sector investments in human capital, perhaps aided by incentives and credits analogous to those available for R&D investments.
Improving labor market dynamism by supporting better credentialing and matching, as well as enabling diverse forms of work, including the gig economy.
Rethinking incomes by considering and experimenting with programs that would provide not only income for work, but also meaning and dignity.
Rethinking transition support and safety nets for workers affected, by drawing on best practices from around the world and considering new approaches.
The responsible AI challenge
AI will not live up to its promise if the public loses confidence in it as a result of privacy violations, bias, or malicious use, or if much of the world comes to blame it for exacerbating inequality. Establishing confidence in its abilities to do good, at the same time as addressing misuses, will be critical. Approaches could include:

Strengthening consumer, data, and privacy and security protections.
Establishing a generally shared framework and set of principles for the beneficial and safe use of AI.
Best practice sharing and ongoing innovation to address issues such as safety, bias, and explainability.
Balancing the business and national competitive race to lead in AI against the need to ensure that the benefits of AI are widely available and shared.

Source: https://www.mckinsey.com/featured-insights/artificial-intelligence/the-promise-and-challenge-of-the-age-of-artificial-intelligence
