Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - sanzid.swe

16
SparkLabs Group has announced the launch of Connex, its latest startup accelerator, focusing on IoT, smart cities and proptech.

According to the company, the accelerator is based on SparkLabs’ proven model and will leverage its presence in key innovation ecosystems, including Silicon Valley, Seoul, Shenzhen, Taipei and Singapore.

The new accelerator will make investments in startups that address key market drivers that include 5G monetisation, sustainability and green building initiatives and leverage emerging technology enablers like Low Power WANs, 5G, eSIMs, AI and security.

One company already working with Connex is Nokia. “Working with SparkLabs Connex will enable us to address the growing smart city opportunity,” said Danial Mausoof, global head of enterprise sales at Nokia. “We can help solve the challenges of smart city orchestration with our solutions, and expand our smart city ecosystem while answering the needs of our enterprise customers.”

According to IDC research published last month, worldwide spending on smart city initiatives will total £95 billion in 2020, an increase of 18.9% year over year. The top 100 cities investing in smart initiatives in 2019 represented nearly 29% of global spending, the research found. IDC also predicted that Singapore will remain one of the top investors in smart city initiatives, followed by Tokyo, New York City and London.

Additionally, ABI Research’s latest smart cities market data report, released in January, predicted that smart utility metering and video surveillance will hold a significant portion of the smart city segment, together representing 87% of the total number of smart city connections by 2026. According to the report, though surveillance cameras with embedded AI computing capabilities from vendors like NVIDIA are already being deployed, low latency 5G connectivity will allow real-time local response management in the future by utilising powerful cloud-based AI capabilities.

Reference: https://www.iottechnews.com/news/2020/mar/02/sparklabs-launches-connex-iot-and-smart-city-accelerator/

17
Waymo has announced that it has raised $2.25 billion (£1.75bn) in a fundraising round led by Silver Lake, Canada Pension Plan Investment Board and Mubadala Investment Company. It is Waymo’s first external investment round, which also included Magna, Andreessen Horowitz and AutoNation, as well as parent firm Alphabet.

Waymo CEO John Krafcik wrote in a company blog: “With this injection of capital and business acumen, alongside Alphabet, we’ll deepen our investment in our people, our technology, and our operations, all in support of the deployment of the Waymo Driver around the world.”

During the 2020 CES event in Las Vegas, Waymo announced that its self-driving cars have now driven more than 20m miles on public roads and that the company’s autonomous vehicle technology is almost ready for widespread deployment.

As this publication reported at the time, while a number of self-driving vehicles have been involved in accidents, Waymo has so far managed to avoid any fatalities like the Arizona crash involving Uber’s self-driving test vehicle, which earned its driverless car project bad publicity. While self-driving vehicle technologies are now fairly advanced, the regulatory environment to support them is still nascent. Until regulations catch up, self-driving cars will be restricted to select areas.

Pony.ai, an autonomous vehicle technology provider, secured £312.6 million in funding from Toyota last month, pushing the company’s valuation to more than £2.31bn. The firm is aiming to deepen and expand its partnership with Toyota in mobility services. There are also plans to further boost their combined efforts in autonomous vehicle technology development and mobility service deployment.

Reference: https://www.iottechnews.com/news/2020/mar/03/waymo-secures-225-billion-first-external-investment/

18
A so-called ‘DeepFake’ video of a Trump speech was broadcast on a Fox-owned Seattle TV network, highlighting a very present AI threat.

The station, Q13, aired a doctored Trump speech in which he somehow appeared even more orange and pulled amusing faces.

Following the broadcast, a Q13 employee was sacked. It’s unclear if the worker created the clip or whether it was just allowed to air.

The video could be the first DeepFake to be televised, but it won’t be the last. Social media provides even less filtration and enables fake clips to spread with ease.

We’ve heard much about sophisticated disinformation campaigns. At one point, the US was arguably the most prominent creator of such campaigns to influence foreign decisions.

Russia, in particular, has been linked to vast disinformation campaigns. These have primarily targeted social media with things such as their infamous Twitter bots.

According to Pew Research, just five percent of Americans have ‘a lot of trust’ in the information they get from social media. This is far lower than their trust in national and local news organisations.

It’s not difficult to imagine an explosion in doctored videos that appear like they’re coming from trusted outlets. Combining the reach of social media with the increased trust Americans have in traditional news organisations is a dangerous concept.

While the Trump video appears to be a bit of fun, the next could be used to influence an election or big policy decision. It’s a clear example of how AI is already creating new threats.

Reference: https://artificialintelligence-news.com/2019/01/14/trump-speech-deepfake-ai-threat/

19
Nvidia launched an online space called AI Playground on Monday which allows people to mess around with some deep learning experiences.

AI Playground is designed to be accessible in order to help anyone get started and learn about the potential of artificial intelligence. Who knows, it may even inspire some to enter the field and help to address the huge skill shortage.

The experience currently features three demos:

Imagine InPainting
Artistic Style Transfer
Photorealistic Image Synthesis

As you probably guessed from their names, all of the current demos are based around imagery.

Imagine InPainting allows the user to upload their own image and edit it with powerful AI tools. Content can be removed and replaced.

Artistic Style Transfer is fairly self-explanatory. The style of one uploaded image can be applied to another. This will help to satisfy the curiosity of anyone who wondered how it would look if Leonardo da Vinci painted them instead of Lisa Gherardini. A convolutional neural network trained on 80,000 images of people, scenery, animals, and moving objects powers this demo.

Finally, Photorealistic Image Synthesis. This demo entirely fabricates photorealistic images and environments with eerie detail.

Bryan Catanzaro, VP of applied deep learning research at Nvidia, said in a statement:

“Research papers have new ideas in them and are really cool, but they’re directed at specialised audiences. We’re trying to make our research more accessible.

The AI Playground allows everyone to interact with our research and have fun with it.”

Nvidia plans to add more demos to its AI Playground over time.

Reference: https://artificialintelligence-news.com/2019/03/19/deep-learning-nvidia-ai-playground/

20
Amazon has kicked off its annual re:Invent conference in Las Vegas and made three major AI announcements.

During a midnight keynote, Amazon unveiled Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.

Transcribe Medical

The first announcement we’ll be talking about is likely to have the biggest impact on people’s lives soonest.

Transcribe Medical is designed to transcribe medical speech for primary care. The feature is aware of medical speech in addition to standard conversational diction.

Amazon says Transcribe Medical can be deployed across “thousands” of healthcare facilities to provide clinicians with secure note-taking abilities.

Transcribe Medical offers an API and can work with most microphone-equipped smart devices. The service is fully managed and sends back a stream of text in real-time.

Furthermore, and most importantly, Transcribe Medical is covered under AWS’ HIPAA eligibility and business associate addendum (BAA). This means that any customer that enters into a BAA with AWS can use Transcribe Medical to process and store personal health information legally.

SoundLines and Amgen are two partners which Amazon says are already using Transcribe Medical.

Vadim Khazan, president of technology at SoundLines, said in a statement:

“For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data.”

SageMaker Operators for Kubernetes

The next announcement is Amazon SageMaker Operators for Kubernetes.

Amazon’s SageMaker is a machine learning development platform and this new feature lets data scientists using Kubernetes train, tune, and deploy AI models.

SageMaker Operators can be installed on Kubernetes clusters and jobs can be created using Amazon’s machine learning platform through the Kubernetes API and command line tools.
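To make this concrete, a training job submitted through the operator looks roughly like the sketch below. This is a hedged illustration: the field names follow the operator’s published examples as best I recall them, and the role ARN, container image and S3 bucket are all placeholders, so check the AWS documentation before relying on any of it.

```yaml
# Illustrative TrainingJob custom resource for the SageMaker operator.
# All ARNs, image URIs and bucket names below are placeholders.
apiVersion: sagemaker.aws.amazon.com/v1
kind: TrainingJob
metadata:
  name: xgboost-example
spec:
  roleArn: arn:aws:iam::123456789012:role/sagemaker-execution-role
  region: us-east-1
  algorithmSpecification:
    trainingImage: 123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest
    trainingInputMode: File
  outputDataConfig:
    s3OutputPath: s3://example-bucket/output
  resourceConfig:
    instanceType: ml.m4.xlarge
    instanceCount: 1
    volumeSizeInGB: 5
  stoppingCondition:
    maxRuntimeInSeconds: 86400
```

Applying a manifest like this with `kubectl apply` asks the operator to provision and run the job on SageMaker rather than on the cluster itself.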

In a blog post, AWS deep learning senior product manager Aditya Bindal wrote:

“Customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale.”

Amazon says that compute resources are pre-configured and optimised, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete.

SageMaker Operators for Kubernetes is generally available in AWS regions including US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland).

DeepComposer

Finally, we have DeepComposer. This one is a bit more fun for those who enjoy playing with hardware toys.

Amazon calls DeepComposer the “world’s first” machine learning-enabled musical keyboard. The keyboard features 32 keys spanning two octaves, and is designed for developers to experiment with pretrained or custom AI models.

In a blog post, AWS AI and machine learning evangelist Julien Simon explains how DeepComposer taps a Generative Adversarial Network (GAN) to fill in gaps in songs.

After the composer records a short tune, a model for their favourite genre is selected and its parameters are set. Hyperparameters are then set along with a validation sample.

Once this process is complete, DeepComposer then generates a composition which can be played in the AWS console or even shared to SoundCloud (then it’s really just a waiting game for a call from Jay-Z).

Developers itching to get started with DeepComposer can apply for a physical keyboard ahead of availability, or get started now with a virtual keyboard in the AWS console.

Reference: https://artificialintelligence-news.com/2019/12/03/amazon-ai-announcements-reinvent-2019/

21
Several experts have given their thoughts on what threats AI poses, and unsurprisingly fake content is the current biggest danger.

The experts, who were speaking on Tuesday at the WSJ Pro Cybersecurity Executive Forum in New York, believe that AI-generated content is of pressing concern to our societies.

Camille François, chief innovation officer at social media analytics firm Graphika, says that deepfake articles pose the greatest danger.

We’ve already seen what human-generated “fake news” and disinformation campaigns can do, so it won’t be of much surprise to many that involving AI in that process is a leading threat.

François highlights that fake articles and disinformation campaigns today rely on a lot of manual work to create and spread a false message.

“When you look at disinformation campaigns, the amount of manual labour that goes into creating fake websites and fake blogs is gigantic,” François said.

“If you can just simply automate believable and engaging text, then it’s really flooding the internet with garbage in a very automated and scalable way. So that I’m pretty worried about.”

In February, OpenAI unveiled its GPT-2 tool which generates convincing fake text. The AI was trained on 40 gigabytes of text spanning eight million web pages.
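GPT-2 is a large transformer, but the core idea it scales up – repeatedly predicting the next character or token from the preceding context – can be illustrated with a toy character-level Markov model. This is emphatically not GPT-2’s architecture, just a minimal sketch of the same autoregressive sampling loop:

```python
import random
from collections import defaultdict

def train(text, order=3):
    """Build an order-n character model: map each n-gram of context
    to the characters observed to follow it in the training text."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80, order=3):
    """Sample one character at a time from the model, conditioning on
    the last n characters -- the same predict-next-token loop that
    GPT-2 performs at vastly larger scale."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # context never seen in training; stop early
        out += random.choice(choices)
    return out

corpus = "the quick brown fox jumps over the lazy dog. " * 20
model = train(corpus)
print(generate(model, "the"))
```

The concern the panellists raise is precisely that, at GPT-2 quality rather than this toy’s, such a loop lets one machine produce plausible text at a scale no troll farm can match.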

OpenAI decided against publicly releasing GPT-2 fearing the damage it could do. However, in August, two graduates decided to recreate OpenAI’s text generator.

The graduates said they do not believe their work currently poses a risk to society and released it to show the world what was possible without being a company or government with huge amounts of resources.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” said Vanya Cohen, one of the graduates, to Wired.

Speaking on the same panel as François at the WSJ event, Celeste Fralick, chief data scientist and senior principal engineer at McAfee, recommended that companies partner with firms specialising in detecting deepfakes.

Among the scariest AI-related cybersecurity threats are “adversarial machine learning attacks”, whereby a hacker finds and exploits a vulnerability in an AI system.

Fralick provides the example of an experiment by Dawn Song, a professor at the University of California, Berkeley, in which a driverless car was fooled into believing a stop sign was a 45 MPH speed limit sign just by using stickers.

According to Fralick, McAfee itself has performed similar experiments and discovered further vulnerabilities. In one, a 35 MPH speed limit sign was once again modified to fool a driverless car’s AI.

“We extended the middle portion of the three, so the car didn’t recognise it as 35; it recognised it as 85,” she said.
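The stop-sign attacks work by nudging the input just enough to push it across a model’s decision boundary. A minimal sketch of that idea on a toy linear classifier – the weights and inputs are made up for illustration and have nothing to do with the real traffic-sign models McAfee tested:

```python
import numpy as np

# Toy linear classifier: class 1 if w.x + b > 0, else class 0.
# Weights are illustrative, not taken from any real model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.3, 0.4])  # input correctly classified as class 0

def predict(x):
    return int(w @ x + b > 0)

# FGSM-style perturbation: step each feature a small amount epsilon in
# the direction that increases the score, i.e. along sign(gradient).
# Visually the change is tiny, but the predicted class flips.
epsilon = 0.4
x_adv = x + epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # prints: 0 1
```

The stickers on the speed-limit sign play the role of `x_adv - x` here: a small, targeted change in the input chosen to exploit how the model draws its boundary.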

Both panellists believe entire workforces need to be educated about the threats posed by AI in addition to employing strategies for countering attacks.

There is “a great urgency to make sure people have basic AI literacy,” François concludes.

Reference: https://artificialintelligence-news.com/2019/12/04/experts-discuss-current-biggest-threats-ai/

22
Researchers from Intel have published a study examining whether AI can recognise people’s faces using thermal imaging.

Thermal imaging is often used to protect privacy because it obscures personally identifying details such as eye colour. In some places, like medical facilities, it’s often compulsory to use images which obscure such details.

AI is opening up many new possibilities so Intel’s researchers set out to determine whether thermal imaging still offers a high degree of privacy.

Intel’s team used two data sets:

The first set, known as SC3000-DB, was created using a Flir ThermaCam SC3000 infrared camera. The data set features 766 images of 40 volunteers (21 women and 19 men) who each sat in front of a camera for two minutes.
The second set, known as IRIS, was created by the Visual Computing and Image Processing Lab at Oklahoma State University. It features 4,190 images collected by 30 people and differs from the first set in that it contains various head angles and expressions.

Each image from the data sets was first cropped to contain only the person’s face.

A machine learning model then numerically encoded facial features from the images as vectors. Another model, trained on VGGFace2 – a data set of visible light face images – was used to test whether features learned from visible light could be applied to thermal images.

The model trained on visible image data performed well in distinguishing among volunteers by extracting their facial features: 99.5 percent accuracy was observed for the SC3000-DB data set and 82.14 percent for IRIS.
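Matching in this kind of study ultimately comes down to comparing embedding vectors: two images of the same person should map to nearby vectors. A minimal sketch with made-up three-dimensional embeddings (real face embeddings are hundreds of dimensions, and the threshold here is illustrative, not Intel’s):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    """Decide whether two embeddings belong to the same person.
    Real systems tune the threshold on held-out data."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Made-up embeddings: two thermal images of the same volunteer should
# map to nearby vectors, a different volunteer to a distant one.
person1_img1 = np.array([0.9, 0.1, 0.3])
person1_img2 = np.array([0.85, 0.15, 0.25])
person2_img1 = np.array([0.1, 0.9, 0.2])

print(same_person(person1_img1, person1_img2))  # True
print(same_person(person1_img1, person2_img1))  # False
```

The reported accuracy figures measure how often such comparisons identify the right volunteer across the whole data set.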

Intel’s research shows that thermal imaging may not offer the privacy that many currently believe it does, as it is already possible to distinguish people using it.

“Many promising visual-processing applications, such as non-contact vital signs estimation and smart home monitoring, can involve private and or sensitive data, such as biometric information about a person’s health,” wrote the researchers.

“Thermal imaging, which can provide useful data while also concealing individual identities, is therefore used for many applications.”

Reference: https://artificialintelligence-news.com/2020/01/10/intel-examines-ai-recognise-faces-thermal-imaging/

23
Controversial healthcare app maker Babylon Health has criticised the doctor who first raised concerns about the safety of their AI chatbot.

Babylon Health’s chatbot is available in the company’s GP at Hand app, a digital healthcare solution championed by health secretary Matt Hancock that has also been integrated into Samsung Health since last year.

The chatbot aims to reduce the burden on GPs and A&E departments by automating the triage process to determine whether someone can treat themselves at home, should book an online or in-person GP appointment, or go straight to a hospital.

A Twitter user under the pseudonym of Dr Murphy first reached out to us back in 2018 alleging that Babylon Health’s chatbot was giving unsafe advice. Dr Murphy recently revealed himself as Dr David Watkins and went public with his findings at The Royal Society of Medicine’s “Recent developments in AI and digital health 2020” event, in addition to appearing in a BBC Newsnight report.

Over the past couple of years, Dr Watkins has provided many examples of the chatbot giving dangerous advice. In one example, an obese 48-year-old heavy smoker who presented with chest pains was advised to book a consultation “in the next few hours”. Anyone with any common sense would have said to dial an emergency number straight away.

This particular issue has since been rectified but Dr Watkins has highlighted many further examples over the years which show, very clearly, there are serious safety issues.

In a press release (PDF) on Monday, Babylon Health calls Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

According to the release, Dr Watkins has conducted 2,400 tests of the chatbot in a bid to discredit the service while raising “fewer than 100 test results which he considered concerning”.

Babylon Health claims that in just 20 cases did Dr Watkins find genuine errors while others were “misrepresentations” or “mistakes,” according to Babylon’s own “panel of senior clinicians” who remain unnamed.

Speaking to TechCrunch, Dr Watkins called Babylon’s claims “utterly nonsense” and questioned where the startup got its figures from, as “there are certainly not 2,400 completed triage assessments”.

Dr Watkins estimates he has conducted between 800 and 900 full triages, some of which were repeat tests to see whether Babylon Health had fixed the issues he previously highlighted.

The doctor acknowledges Babylon Health’s chatbot has improved, but says it still gives concerning advice at a rate of around one in three instances. In 2018, when Dr Watkins first reached out to us and other outlets, he says this rate was “one in one”.

While it’s one account versus the other, the evidence shows that Babylon Health’s chatbot has issued dangerous advice on a number of occasions. Dr Watkins has dedicated many hours to highlighting these issues to Babylon Health in order to improve patient safety.

Rather than welcome his efforts and work with Dr Watkins to improve their service, it seems Babylon Health has decided to go on the offensive and “try and discredit someone raising patient safety concerns”.

In their press release, Babylon accuses Watkins of posting “over 6,000” misleading attacks but without giving details of where. Dr Watkins primarily uses Twitter to post his findings. His account, as of writing, has tweeted a total of 3,925 times and not just about Babylon’s service.

This isn’t the first time Babylon Health’s figures have come into question. Back in June 2018, Babylon Health held an event where it boasted its AI beat trainee GPs at the MRCGP exam used for testing their ability to diagnose medical problems. The average pass mark is 72 percent. “How did Babylon Health do?” said Dr Mobasher Butt at the event, a director at Babylon Health. “It got 82 percent.”

Given the number of dangerous suggestions to trivial ailments the chatbot has given, especially at the time, it’s hard to imagine the claim that it beats trainee GPs as being correct. Intriguingly, the video of the event has since been deleted from Babylon Health’s YouTube account and the company removed all links to coverage of it from the “Babylon in the news” part of its website.

When asked why it deleted the content, Babylon Health said in a statement: “As a fast-paced and dynamic health-tech company, Babylon is constantly refreshing the website with new information about our products and services. As such, older content is often removed to make way for the new.”

AI solutions like those offered by Babylon Health will help to reduce the demand on health services and ensure people have access to the right information and care whenever and wherever they need it. However, patient safety must come first.

Mistakes are less forgivable in healthcare due to the risk of potentially fatal or life-changing consequences. The usual “move fast and break things” ethos in tech can’t apply here.

There’s a general acceptance that rarely is a new technology going to be without its problems, but people want to see that best efforts are being made to limit and address those issues. Instead of welcoming those pointing out issues with their service before it leads to a serious incident, it seems Babylon Health would rather blame everyone else for its faults.

Reference: https://artificialintelligence-news.com/2020/02/26/babylon-health-doctor-ai-chatbot-safety-concerns/

24
Elon Musk has once again called for more stringent regulations around the development of AI technologies.

The founder of Tesla and SpaceX has been one of the most vocal prominent figures in expressing concerns about AI – going as far as to call it humanity’s “biggest existential threat” if left unchecked.

Of course, given the nature of the companies Musk has founded, he is also well aware of AI’s potential.

Back in 2015, Musk co-founded OpenAI – an organisation with the aim of pursuing and promoting ethical AI development. Musk ended up leaving OpenAI in February last year over disagreements with the company’s work.

Earlier this week, Musk said that OpenAI should be more transparent and specifically said his confidence is “not high” in former Google engineer Dario Amodei when it comes to safety.

Responding to a piece by MIT Technology Review about OpenAI, Musk tweeted: “All orgs developing advanced AI should be regulated, including Tesla.”

In response to a further question of whether such regulations should be via individual governments or global institutions like the UN, Musk said he believes both.

Musk’s tweet generated some feedback from other prominent industry figures, including legendary Id Software founder John Carmack who recently stepped back from video game development to focus on independent AI research.

Carmack asked Musk: “How would you imagine that working for someone like me? Cloud vendors refuse to spawn larger clusters without a government approval? I would not be supportive.”

Coder Pranay Pathole shared Carmack’s scepticism about Musk’s call, saying: “Large companies ask for regulations acting all virtuous. What they are really doing is creating barriers for entry for new competition because only they can afford to comply with the new regulations.”

The debate over the extent of AI regulations and how they should be implemented will likely go on for some time – we can only hope to get them right before a disaster occurs. If you want to help Musk in building AI, he’s hosting a “super fun” hackathon at his place.

Reference: https://artificialintelligence-news.com/2020/02/19/elon-musk-stringent-ai-regulation-tesla/

25
A team of MIT researchers have used AI to discover a welcome new antibiotic to help in the fight against increasing resistance.

Using a machine learning algorithm, the MIT researchers were able to discover a new antibiotic compound to which bacteria did not develop any resistance during a 30-day treatment period in mice.

The algorithm was trained using around 2,500 molecules – including about 1,700 FDA-approved drugs and a set of 800 natural products – to seek out chemical features that make molecules effective at killing bacteria.

After the model was trained, the researchers tested it on a library of about 6,000 compounds known as the Broad Institute’s Drug Repurposing Hub.
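In outline, this kind of virtual screening is a train-then-rank loop: fit a model that scores molecules for antibacterial activity, then score a large library and keep the top candidates. A toy sketch of that loop, using random binary “fingerprint” vectors and synthetic labels standing in for real molecular representations (nothing here reflects MIT’s actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training set: binary "fingerprint" vectors for
# molecules labelled active (1) or inactive (0) against a target.
n_train, n_bits = 200, 32
X = rng.integers(0, 2, (n_train, n_bits)).astype(float)
true_w = rng.normal(size=n_bits)
y = (X @ true_w > 0).astype(float)          # synthetic activity labels

# Minimal logistic-regression trainer (batch gradient descent).
w = np.zeros(n_bits)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))          # predicted activity probability
    w -= 0.1 * (X.T @ (p - y)) / n_train    # gradient step on log-loss

# "Virtual screening": score an unseen library and keep the top hits.
library = rng.integers(0, 2, (1000, n_bits)).astype(float)
scores = 1 / (1 + np.exp(-(library @ w)))
top_hits = np.argsort(scores)[::-1][:10]    # indices of the 10 best candidates
```

The speed-up the researchers describe comes from exactly this shape of pipeline: once trained, scoring millions of candidate compounds is just fast matrix arithmetic, so only the top-ranked handful need expensive lab testing.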

“We wanted to develop a platform that would allow us to harness the power of artificial intelligence to usher in a new age of antibiotic drug discovery,” explains James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

“Our approach revealed this amazing molecule which is arguably one of the more powerful antibiotics that has been discovered.”

Antibiotic resistance is terrifying. Researchers have already discovered bacteria that are immune to current antibiotics, and we’re very much in danger of illnesses that are currently simple to treat becoming deadly once more.

Data from the Centers for Disease Control and Prevention (CDC) already indicates that antibiotic-resistant bacteria and antimicrobial-resistant fungi cause more than 2.8 million infections and 35,000 deaths a year in the United States alone.

“We’re facing a growing crisis around antibiotic resistance, and this situation is being generated by both an increasing number of pathogens becoming resistant to existing antibiotics, and an anaemic pipeline in the biotech and pharmaceutical industries for new antibiotics,” Collins says.

The recent coronavirus outbreak leaves many patients with pneumonia. With antibiotics, pneumonia is rarely fatal nowadays unless a patient has a substantially weakened immune system. The current death toll for coronavirus would be much higher if antibiotic resistance were to set healthcare back to the 1930s.

MIT’s researchers claim their AI is able to check more than 100 million chemical compounds in a matter of days to pick out potential antibiotics that kill bacteria. This rapid checking reduces the time it takes to discover new lifesaving treatments and begins to swing the odds back in our favour.

The newly discovered molecule is called halicin – after HAL, the AI in the film 2001: A Space Odyssey – and has been found to be effective against E. coli. The team is now hoping to develop halicin for human use (a separate machine learning model has already indicated that it should have low toxicity to humans, so early signs are positive).

Reference: https://artificialintelligence-news.com/2020/02/21/mit-researchers-use-ai-to-discover-a-welcome-new-antibiotic/

26
When Alexa replied to my question about the weather by tacking on ‘Have a nice day,’ I immediately shot back ‘You too,’ and then stared into space, slightly embarrassed. I also found myself spontaneously shouting words of encouragement to ‘Robbie’ my Roomba vacuum as I saw him passing down the hallway. And recently in Berkeley, California, a group of us on the sidewalk gathered around a cute four-wheeled KiwiBot – an autonomous food-delivery robot waiting for the traffic light to change. Some of us instinctively started talking to it in the sing-song voice you might use with a dog or a baby: ‘Who’s a good boy?’

We’re witnessing a major shift in traditional social life, but it’s not because we’re always online, or because our tech is becoming conscious, or because we’re getting AI lovers like Samantha in Spike Jonze’s film Her (2013). To the contrary, we’re learning that humans can bond, form attachments and dedicate themselves to non-conscious objects or lifeless things with shocking ease. Our social emotions are now being hijacked by non-agents or jabbering objects such as Amazon’s Alexa, Apple’s Siri or IBM’s Watson, and we’re finding it effortless, comfortable and satisfying.

The sophistication level of human-like simulation that AI needs in order to elicit our empathy and emotional entanglement is ridiculously low. A Japanese study in 2008 showed that elderly residents of a senior care home were quickly drawn into substantial social interactions with a rudimentary, toy-like robot seal named ‘Paro’. The seniors experienced increased motor and emotional stimulation with the bot, but also increased social interactions with each other regarding Paro. Tests showed that the reactions of the seniors’ vital organs to stress improved after the introduction of the robot. And in a test in 2018 at the Max Planck Institute for Intelligent Systems in Germany, researchers built robots that administered ‘soft-warm hugs’ to people, who reported feeling trust and affection for the robot – even saying that they felt ‘understood by’ the robot. The point is not that robots are now such convincing counterfeit persons that we’re falling into relationships with them. It’s that humans are suckers for any vague sign of social connection. All of us are a hair’s breadth away from Tom Hanks’s character in Cast Away (2000), who forges a deep bond with a volleyball he names Wilson.

Recently, science has come to understand the emotions of social bonding, and I think it helps us understand why it’s so easy to fall into these ‘as-if intimacies’ with things. Care or bonding is a function of oxytocin and endorphin surging in the brain when you spend time with another person, and it’s best when it’s mutual and they’re feeling it too. Nonhuman animals bond with us because they have the same brain chemistry process. But the system also works fine when the other person doesn’t feel it – and it even works fine when the other person isn’t even a ‘person’. You can bond with things that cannot bond back. Our emotions are not very discriminating and we imprint easily on anything that reduces the feeling of loneliness. But I think there’s a second important ingredient to understanding our relationship with tech.

The proliferation of devices is certainly amplifying our tendency for anthropomorphism, and many influential thinkers claim that this is a new and dangerous phenomenon, that we’re entering into a dehumanising ‘artificial intimacy’ with gadgets, algorithms and interfaces. I respectfully disagree. What’s happening now is not new, and it’s more interesting than garden-variety alienation. We are returning to the oldest form of human cognition – the most ancient pre-scientific way of seeing the world: animism.

Animistic beliefs dominate the everyday lives of people in Southeast and East Asia, as I discovered while living there for several years. Local spirits, called neak ta in Cambodia, inhabit almost every farm, home, river, road and large tree. Thai people usually refer to these spirits as phii, and the Burmese call them nats. The next time you visit a Thai restaurant, notice the spirit house near the cash register or kitchen, probably decorated with offerings such as flowers, fruit, even a shot of alcohol. These offerings are designed to please neak ta and phii, but also to distract and pull mischievous spirits into the mini-homes, thereby saving the real homes from malady and misfortune. Animism was never entirely supplanted by modern beliefs, and we see it fancifully portrayed in the Japanese films of Hayao Miyazaki.

Animists take the same as-if perspective toward their spirits that I take toward Alexa. They understand that the shot glass of booze is not really consumed by the thankful ghost (it’s still there the next day), but they gently commit to it anyway.

Animism is strong in Asia and Africa, but really it is everywhere around the globe, just below the surface of more conventional official religions. In actual numbers and geographic spread, belief in nature spirits trounces monotheism, because even the one-godders are closet animists. Spend some time in New Orleans, with its voodoo and hoodoo cultures, and you’ll see that animism is alive and interwoven with mainstream religions such as Catholicism.

The word ‘animism’ was first employed by the English anthropologist Edward Burnett Tylor (1832-1917) to describe the early ‘primitive’ stage of human religion – a stage that was eventually supplanted by what was later called Axial Age monotheism, which in turn would be supplanted, Tylor hoped, by what we’d call Deism. Anthropologists today debate the usefulness of the term animism since folk religions are so diverse, but two essential features mark all animism: one, belief that there are ‘agents’ or even persons in natural objects and artifacts (and even geographic places); and two, belief that nature has purposes (teleology) woven throughout it. Animism commits to the view that there are many kinds of persons in the world, only some of whom are humans.

Sigmund Freud (1856-1939) typified the usual condescension about animism when he wrote in Totem and Taboo (1919) that ‘spirits and demons were nothing but the projection of primitive man’s emotional impulses’. But I want to extend the more charitable view of David Hume (1711-76) that we are all somewhat animistic – even secular humanists and science devotees. ‘There is a universal tendency among mankind to conceive all beings like themselves and to transfer to every object those qualities with which they are familiarly acquainted and of which they are intimately conscious.’

Animism is not so much a set of beliefs as a form of cognition. I think we are all natural-born animists, and those of us in Western developed countries slowly learn to discount this mode of cognition in favour of a mechanical view of the world. Indigenous approaches to nature are dubbed uneducated or juvenile because they use agency and purpose to think about nature (eg, ‘the pine tree is for the warbler,’ or ‘the river wants revenge’, etc). However, some philosophers and psychologists are striking back, pointing out that animistic thinking reveals many of the subtle ecological relations in nature that mechanical approaches miss.

If animist thinking is childish and uneducated, then why are indigenous peoples so much better at surviving and thriving in local natural ecologies? Some kinds of animism are adaptive and aid our survival, because they focus our attention on ecological connections, but they also train our social intelligence to predict and respond to other agents. If your world is thick with other agents – all vying for their desires and goals – then you spend a lot of time organising, revising and strategising your own goals in a social space of many competing aims.

So our new ‘tech-animism’ might not be detrimental at all. I might not really be ‘helping’ the robot, and it might not be ‘helping’ me, but behaving as if we’re actually relating – even bonding – keeps our empathic skills honed and ready for when it really counts. Immersion in tech relationships is not creating the loneliness epidemic. It’s a response to it. The actual causes of the loneliness epidemic started way before digital dominance. Our new animism – animism 2.0 – might be quite helpful in keeping the social emotions and skills healthy enough for real human bonding, perspective-taking and empathy. Instead of dehumanising us, this tech-animism could actually be keeping us human.

Reference: https://aeon.co/ideas/ancient-animistic-beliefs-live-on-in-our-intimacy-with-tech

27
A name that counts among today's biggest threats in the IT world: the Ransomware virus…
In the past few days it has aggressively attacked around 75,000 computers across the world, seizing the data on them and extorting money in exchange. As a result, quite a few people I know (and don't know) in Bangladesh have fallen victim to this virus too!

In short, Ransomware is a destructive virus that spread out of Russia — calling it a computer-data kidnapper would not be wrong!!
Its core strategy is to get into a PC and, within moments, encrypt all the important data on that computer.
Then the hackers' game begins! A notice appears on your PC screen demanding money to decrypt the data, generally starting at about 300 dollars!!
One more thing to remember… a fresh operating system install won't save you either!! Your data will remain in its encrypted state…
So how do you escape this Ransomware, then???

Let me say this up front…

No truly effective counter-tool for Ransomware has been built yet.
Prevention is better than cure — keep that in mind at all times.
1. Don't open mail from unknown senders. Spam mail is easy to recognise — straight to delete!
2. Don't dive blindly into a link just because it says "free iPhone"! There is no such thing as a free lunch 😛
3. The same goes for every social site. Don't click a "You won't believe what this girl did in public (watch the video)" caption with a thumbnail. One click can be the end of you!
4. Check downloaded files thoroughly before running them. A virus can be hiding inside files that look legitimate!
5. Don't use pirated or cracked antivirus software. This shows up in roughly 10 percent of cases.
6. Don't poke around the dark web or software related to it. It is of no use to you and will only land you in bigger trouble. (Unless you happen to be a hacker yourself — that's a different story.)
7. Malware makers update their programs every day, so keep your system and antivirus updated too!
Finally, I'll say this…
If you want to hold your head high as a future CSE engineer, throw out that pirated Windows today and install any Linux distro — Ubuntu, for example! 🙂
How much longer with stolen products, and catching viruses while trying to crack them??
Believe it or not, if you had grown up with Linux you would say yourself that it is a thousand times better an operating system than Windows!!!
It's all really a matter of habit…


Just as developers have yet to find a tool that protects against Ransomware one hundred percent, I have likewise yet to find a Windows tool that has no alternative on Linux!! 🙂
You can do everything here — Linux has a substitute for everything Windows offers.
Anyway…


Why Ubuntu is a thousand times more secure than Windows, why we need Ubuntu, why I enjoy using it and why you should use it too — I'll go into detail in a future post.
Stay with free (Open Source) software…
Stay free of Ransomware…
Thanks, everyone… 🙂


Original post on my blog: https://bit.ly/2wrwugk

28
In the previous parts we learned many kinds of tasks that we can apply in our day-to-day work. Think of those as the tasks of a single day — your daily routine: waking up, brushing your teeth, eating, resting, sleeping again, going out, and so on.

We could represent one day's tasks in a programming language, but what if those same tasks are supposed to run for a week, a month, or for a lifetime… how do we run that in a program? Yes — this is exactly what loop statements are for. We usually know them by two names: the for loop and the while loop. In today's part I'll show how these two loops work in Bash programming.


for loop
Since we all have some basic programming knowledge, I hope everyone is at least somewhat familiar with loops too. Below I show the structure of a Bash for loop and try to break it down:

for var in loopingLimit
do
   statement(to be executed)
done
Now, if we want to print the numbers 1 through 10 by running a for loop, we write it like this:

for i in 1 2 3 4 5 6 7 8 9 10
do
  echo $i
done
On the first line, after the for keyword I wrote i — this i is the loop variable, which carries the value of each iteration of the loop. After it comes the looping range. In Bash this can also be generated with the seq (sequence) command via command substitution: writing for i in $(seq 1 10) here would give the same output.
On line 2, do marks the start of what the loop will execute.
On line 3, the statement prints i — that is, the value of i on every iteration.
Finally, the done keyword closes the do block.
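The seq variant mentioned above can be written out as a full script. This is a small sketch of my own, not code from the original post:

```shell
#!/bin/bash
# Same 1-to-10 loop, but generating the list with seq
# through command substitution instead of typing it out.
for i in $(seq 1 10)
do
  echo $i
done
```

Running it prints the numbers 1 through 10, one per line, exactly like the hand-written list.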

while loop
Below I show the structure of a Bash while loop and try to break it down:

while command
do
 Statement (to be executed)
done
Now, to print the numbers 1 through 10 with a while loop this time, we write it like this:

i=1
while [ $i -le 10 ]
do
   echo $i
   i=$(($i+1))
done

Here too, the first thing I did was declare a variable named i — you could call it our looping variable.
Then, with the while keyword, I wrote the condition: the loop runs from 1 up to and including 10 ($i -le 10).
Inside the do block we print the looping variable's value, and increment i by one on each iteration so that the loop can move forward.
Finally, the done keyword closes the do block.
And with that, the while loop gives us exactly the same output as the for loop. In general, any problem can be solved with either of the two loops — use whichever feels more natural to you, although depending on the task one may be more convenient than the other.
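As a quick exercise combining the while loop with the arithmetic expansion shown above, here is a small sketch of my own that adds up the numbers 1 through 10:

```shell
#!/bin/bash
# Sum the numbers 1 through 10 with a while loop.
i=1
sum=0
while [ $i -le 10 ]
do
   sum=$(($sum + $i))   # accumulate the running total
   i=$(($i + 1))        # advance the loop variable
done
echo $sum                # prints 55
```

The same computation could of course be written with a for loop over $(seq 1 10) — another case where either loop does the job.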


Original post on my blog: https://bit.ly/2PKkoWK

29
In the previous part I wrote about multiple conditional logic. This time I'll try to show how, when there are many conditions, we can do the very same job much more easily with a switch case instead of elif.

A switch case puts many execution cases under a single condition; whichever case the condition matches is the only one that executes. Now look at the code below:

echo "Is it raining?"
read ans

case $ans in
    yes|y) echo "Yes, raining!";;
    no|n) echo "No,not raining!";;
    *) echo "Invalid input!"
esac
There are quite a few things to understand here, but with a little effort, becoming the boss of this is no big deal!

First I printed a line and read input into a variable named ans — yes or no, depending on whether it is currently raining.
On the fourth line I used the case keyword, which is required for working with cases. After it comes the variable whose answer we will work with, then a space and the in keyword, and from there the cases begin.
I worked with three cases here: yes, no, and anything else. (N.B.: don't forget the double semicolon ( ;; ) at the end of every case except the last one.)
When typing the answer, a user may enter yes or the shortcut y — either one. Hence yes|y: the '|' symbol expresses "or" between the two patterns. After writing the pattern you close it with a closing parenthesis, and what follows is the body — the part that executes when that case matches.
Likewise I wrote no|n for no, and the rest is as before.
Then we used '*' as a pattern, which essentially means everything else. If you have worked with SQL databases, you know that a star symbol means ALL — it's much the same here: any input not caught by the earlier cases executes this body.
Finally, esac is the closing keyword of case — it is exactly case spelled in reverse (have a look).
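To see the same '|' alternatives and '*' catch-all at work outside the yes/no example, here is a small sketch of my own — a hypothetical filetype helper that classifies a file name by its extension:

```shell
#!/bin/bash
# Hypothetical helper: classify a file name by its extension
# using case glob patterns.
filetype() {
    case $1 in
        *.jpg|*.png) echo "image";;   # either extension matches
        *.sh) echo "script";;
        *) echo "unknown";;           # everything else
    esac
}

filetype photo.png   # image
filetype deploy.sh   # script
filetype notes.txt   # unknown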
Now look at the code below:

echo "Enter a number between 1 and 5"
read n
case $n in
 5) echo "Five!";;
 4) echo "Four!";;
 3) echo "Three!";;
 2) echo "Two!";;
 1) echo "One!";;
 *) echo "Not valid!"
esac


Original post on my blog: https://bit.ly/2TBsMsN

30
In an earlier post I made it reasonably clear what conditional logic is, why it exists and how to write it. In today's post I'll show multiple logic — that is, using several logical conditions together.

Working with more than one logical test in a single condition:
Multiple logic really means multiple pieces of reasoning — situations where more than one condition must hold for our task to run. As we've already seen, a single conditional can be represented like this:

if [ condition.. ];
then echo "body.."
else
echo "else-body.."
fi
Suppose we're told: if Mr. Fahad has a son, the child will study CSE, and if a daughter, medicine. That can be done very easily. But suppose instead: if Mr. Fahad has a son AND the baby weighs more than 2 kg at birth, then he studies CSE; if a daughter, medicine… how do we do that?? Yes — that is multiple logic.
Now let's look at the code below:

read a
if [ $a -gt 5 -a $a -lt 8 ]
then echo "ok!"

else
if [ $a -le 5 ]
then echo "out of lower range!"
else echo "out of upper range!"
fi
fi
On the first line we read a variable as input, and the second line does the real work of the multiple logic.
Looking closely at the second line, we see two conditions — $a -gt 5 and $a -lt 8 — joined together by a keyword, -a, which here means AND. Expressions of this kind are represented in Bash programming as follows:
NOT → !
AND → -a
OR → -o
On line 3 I wrote the body of that condition — it executes when the condition is true.
After that, the else keyword is used and another conditional is written inside it; this is called a nested condition. (I trust this is clear to everyone.)
At the end there are two fi's — two closing keywords for if: one for the first if and one for the inner if.
Write this in a text editor and run it, and you'll see a nice little script come to life.
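As a side note — my own addition, not part of the original post — the same AND can also be written by joining two separate test commands with &&, which many people find easier to read than -a:

```shell
#!/bin/bash
# The same range check, written with two separate tests joined by &&.
a=6
if [ $a -gt 5 ] && [ $a -lt 8 ]
then echo "ok!"
else echo "out of range!"
fi
```

With a=6 this prints ok!; with a=9 (or a=3) it takes the else branch instead.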


Working with many conditions inside the same conditional logic:
Let's look at the program below:

read a
if [ $a -gt 0 ]
then echo "positive!"
elif [ $a -eq 0 ]
then echo "zero!"
else echo "negative!"
fi
I hope you've understood. Here, on line 4, the elif keyword is used to handle a condition beyond just if and else.

Now we can use this concept to solve a problem:

#Problem: Mr. Fahad has had a son, and the baby weighed 2.3 kg. As planned, he enrolled the boy in CSE engineering. As a consequence, Mr. Fahad now has to calculate the GPA of every one of his son's subjects by hand, all the time. The marks scale goes like this:

80–100% → A+
75–79% → A
70–74 → A-
65–69 → B+
60–64 → B
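The grading table above maps straight onto an elif chain. Here is a sketch of one possible solution — my own, wrapped in a hypothetical grade function that takes the mark as an argument instead of using read, purely to make it easy to test. The below-60 bucket is not in the table, so I've labelled it "below B" as a placeholder:

```shell
#!/bin/bash
# Hypothetical helper: map a mark (0-100) to a letter grade
# with an elif chain, following the table above.
grade() {
    if [ $1 -ge 80 ]; then echo "A+"
    elif [ $1 -ge 75 ]; then echo "A"
    elif [ $1 -ge 70 ]; then echo "A-"
    elif [ $1 -ge 65 ]; then echo "B+"
    elif [ $1 -ge 60 ]; then echo "B"
    else echo "below B"   # placeholder: not specified in the table
    fi
}

grade 85   # A+
grade 72   # A-
grade 61   # B
```

Because the tests run top-down, each elif only needs a lower bound — the upper bound is implied by the branch above it.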

Original post on my blog: https://bit.ly/2VQaUgz

Pages: 1 [2] 3 4