Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Muzaffar

1
Ramadan and Fasting / The Dos and Don’ts of Ramadan
« on: May 19, 2018, 11:01:05 AM »

Everything we do – from our social interactions to a task as menial as eating – has certain rules of engagement that we must abide by. Likewise, during the holy month of Ramadan, Muslims are required to take care and follow certain rules, whether spoken or unspoken.
Among the most basic acts expected of a Muslim is abstention from food and drink. However, that is not all. There is much more to Ramadan than the mechanical act of keeping and breaking a fast. One should keep in mind that this month is about patience, forgiveness and goodness, and anything that hinders a Muslim from achieving this state should be avoided at all costs.
The DO NOTs of Ramadan
Lose patience
It is imperative that one does not lose patience with other people. It is no surprise that deprivation of food and the scorching sun can make one irritable and vulnerable – which is why keeping calm is highly advisable. Don't yell, fight or indulge in any destructive behavior, because Ramadan is all about patience and tolerance, and it does not do a Muslim any good to act against the spirit of this blessed month. The Prophet (PBUH) said:
And it is the month of patience, and surely, the reward of patience is Paradise. (Al-Kafi 4:66)
Begrudge anyone
Forgiveness is one of the recurring themes of Ramadan; if our Lord Almighty can forgive us the sins we commit throughout the year, why should we not follow suit? The Holy Prophet Muhammad (PBUH) has said:
It (Ramadan) is the month, whose beginning is mercy, its middle, forgiveness, and its end, emancipation from the fire (of hell). (Bihar al-Anwar 93: 342)
So in the spirit of Ramadan, forgive anyone who has wronged you and don’t hold grudges – this Ramadan try to let the little things pass because Allah loves those who forgive others.
Gossip or backbite
In our daily lives, we sometimes bad-mouth other people and discuss their problems and lives. This is another act one should be extra careful to avoid, especially in this month. Watch your actions and guard yourself against the evils of gossip and backbiting. Allah wants men to refrain from food, but He also loves those who make a constant effort to better themselves. It is even said in a hadith:
“Whoever does not give up false speech and acting upon it, Allah has no need of him giving up his food and drink.” (Al-Bukhari)
While these are the things that a Muslim should definitely refrain from, there are many other actions that one should engage in, during this holy month.
The DO’S of Ramadan
Recite the Quran
Remember the Almighty as frequently as possible. Ramadan is the greatest opportunity to communicate with Allah because there are no barriers between man and his Almighty God. Satan's wrath does not beset man during this month, as the Messenger of Allah (PBUH) said:
"When the month of Ramadan starts, the gates of the heaven are opened and the gates of Hell are closed and the devils are chained." (Imam Bukhari)
Hence, it is the perfect opportunity to seek forgiveness from the Lord, reestablish ties with Him and recite Quran as much as possible.
Follow the Sunnah
It is always advisable for a Muslim to follow in the footsteps of the Holy Prophet (PBUH); however, the spirit of Ramadan requires that the Sunnah of the Prophet (PBUH) be adhered to in order to reap the benefits of this holy month. The routine of the Prophet (PBUH) during this month should be a standard for every Muslim, no matter where they are. Imitating the best of human beings will surely make us better people and even better role models as Muslims.
“The most honored by Allah amongst you are those best in taqwa."(Q 49:13)
Taraweeh
Another essential element of this month is the performance of the Taraweeh prayer, in addition to the five daily prayers. Taraweeh prayers are to be performed after Isha each day to create a stronger bond with the Almighty and to gain the blessings of God Almighty.
Give Zakat
Zakat, one of the five pillars of Islam, is an act every Muslim should perform. Ramadan is all about helping others and feeling their pain – putting oneself in another's shoes and having empathy. What better time to help someone than the holy month of Ramadan? In a month which asks of a Muslim nothing but goodness, be aware of those who need your help and give Zakat, because it fulfills your obligations not only towards the Lord but also towards His creation.
This Ramadan, let us try to be better humans and even better Muslims, and observe the rules of this month. It is our chance to be better than we were in the past year, so in the spirit of doing good and being better, do keep these little reminders with you throughout Ramadan.


Source: https://www.islamicfinder.org/iqra/the-dos-and-donts-of-ramadan/


2

The month of blessings and giving, the Holy month of Ramadan, is here! As Muslims, it is our duty to remember that we might be fortunate enough to associate our Ramadan with food and festivities but there are millions of people in the world who have nothing but hunger and poverty to associate with Ramadan. Just as Allah (SWT) has blessed us in this life, it is our moral duty to help the poor and the needy.
The whole act of fasting is a reminder of the suffering and pain the poor go through every day of their lives, with nothing to eat or drink. The purpose of fasting is first to empathize with those who are not as fortunate as we are, and then to act upon that empathy and help them. It is by the grace of Allah (SWT) that we have been lucky enough to have food on our table every day, which is all the more reason for us to feel a greater sense of responsibility towards the people around us.
There is no bigger incentive for a Muslim to give back in Ramadan than the fact that all rewards and blessings for good deeds are multiplied many times over in the Holy month. The rewards that we can earn in just one month can become equivalent to the rewards we might otherwise earn in a lifetime.
This Ramadan, it is your opportunity to give back to society and earn Allah’s (SWT) blessings and rewards. When you prepare iftar at home, it would not hurt to make some extra food, which you can then distribute among the people in your community.
There are various types of people whom you can help at the time of iftar in Ramadan – not necessarily those in need of charity, but regular people around you – including:
1. People in Hospitals (patients and attendants)
This Ramadan, you can visit hospitals and distribute food amongst the patients, their waiting families and the attending staff at the hospital. Hospitals usually do not have good cafeterias or food choices for either the patients or the attendants, hence, you can make small food bags with the essentials for iftar and give them out – they will wish you well for it.
2. Traffic police officers
While we are busy breaking our fasts with lavish iftar feasts at home, people like community traffic police officers are on duty almost 24/7 and may not get the time even to break their fasts while they are on the road. So perhaps this Ramadan, you can drive around and hand out iftar food bags to traffic police officers so that they too can break their fasts and enjoy a good meal while on duty.
3. People stuck in traffic
In Ramadan, around the time of iftar, people are often stuck in traffic while trying to reach home to their loved ones. Always keep some extra dates in your car during Ramadan and if you see people around you stuck on the road during Maghrib, distribute these dates or some other food/drink item among them so that they are able to break their fast on time because at the end of the day, Ramadan is all about sharing with others and caring for them.
4. Ambulance drivers
Ambulance drivers need to be on call 24/7 – they do not have the luxury of choosing their working hours; their duty depends on the emergencies arising in the locality, and they need to stay vigilant. Moreover, hospital food is not always the best option, so you could take this opportunity to distribute iftar food amongst ambulance drivers as well.
5. Street children
Prepare food bags for street children this Ramadan; drive around the city and distribute the food bags to the children who are roaming the streets. If you remember areas in the city where you have seen street children selling small toys and stickers etc. or even begging, visit those places and instead of money, give them food.
6. Guards
Almost every shop, every school, every street that you see has a guard patrolling it. You can give out the food bags to them as well – they will really appreciate it. Their monthly incomes are usually extremely low and with their monthly bills and family expenditures, they barely have anything left. Hence, share your food with them - they also deserve your kindness, just as much as the rest of mankind.
There are plenty of people that can use your help this Ramadan; all you have to do is make the intention and then work towards helping them. Allah (SWT) loves those who are kind to His creation, and what better way to earn Allah's (SWT) love than by being good to His creation?

Source: https://www.islamicfinder.org/iqra/6-types-of-people-you-can-help-at-iftar-in-ramadan/

3

By Raúl Álvarez
May 12, 2018 - Updated May 12, 2018, 17:17
Many believe that the future lies in the use of bitcoin and other cryptocurrencies, but this takes on a new dimension when brands or large companies also believe it and are willing to back this type of project. We already saw this with Telegram: although its public offering was ultimately canceled, the proposal moved ahead with private funding and raised as much as $1.7 billion. Now, it seems, it is the turn of someone much bigger.
According to Alex Heath, Facebook is planning to launch its own cryptocurrency, which would be used for transactions within the platform, ranging from mobile payments to sending money to contacts and purchasing products in the Marketplace.
Facebook's quest to become an independent institution
So far there are few details, since the project would be at a very early planning stage, but it would at least already have the foundations needed to get started. In fact, the idea is for the cryptocurrency to be operational within a couple of years.
What matters here is that, thanks to its large user base and especially its daily active users, Facebook would not only have a tool for carrying out transactions, saving a great deal in management, operations and commissions, but would also help popularize this kind of technology among people who know nothing about it.
In short, this move by Facebook could act as a catalyst in the world of cryptocurrency, promoting its use and driving greater adoption, not only within Facebook but throughout the world.
It is said that David Marcus, who took command of Facebook's new blockchain division after leaving his position as head of Messenger, would be responsible for developing the new cryptocurrency, with a mandate to "explore how to make the most of blockchain in Facebook, starting from scratch."
In fact, Facebook has already confirmed that it has formed a small new team dedicated to exploring different applications of blockchain, in order to take advantage of the full power of this technology. The company added that, for now, it has nothing else to share.

Courtesy: www.xataka.com/criptomonedas/facebook-quiere-tener-su-propia-criptomoneda-para-transacciones-dentro-de-su-plataforma

4

By Rob Enderle
Mar 5, 2018 10:57 AM PT

Dell recently collaborated with the Institute for the Future -- an interesting think tank largely driven by futurists, which focuses on helping firms ride future waves rather than being killed by them -- on a survey that creates a frightening view of 2030. It could be far closer to Terminator than the utopia we once hoped for.
I think more companies should go through a process like this. The reason is that it could help overcome what has become an overwhelming trend to ignore the future and instead focus excessively on quarterly results.
That trend has proven to be a company killer, driving short-term decisions -- like massive layoffs and cutbacks -- that spike income but eventually kill the firm. Dell and the institute surveyed both futurists and thousands of business leaders to assess what the future likely will be and how well prepared we are for it. Their responses are fascinating, and I'll summarize them here.
I'll close with my product of the week: Blade Shadow PC, a new PC in the Cloud service that may represent the future of personal computing.
Half of Business Leaders Are Clueless
One of the big takeaways from the survey is that business leaders collectively had no real clue about the future. Responses were so close to 50/50 on every major question as to be worthless. Business leaders have not been thinking about and discussing the future enough to get to consensus; thus they are inconsistently -- and likely incorrectly -- guessing about the future that will come.
This does not bode well for the long-term strategic investments they are making. A question as simple as "Will automated systems free up our time?" should have gotten a resoundingly positive response. Yet exactly 50 percent disagreed, suggesting that in their shops, this was not the case, and they doubted it ever would be.
If automation hasn't been freeing up time, then someone clearly has done something very wrong -- and if 50 percent of firms have been deploying technology badly, as this survey suggests, then perhaps some focus should be placed on fixing that.
One particularly frightening response came up in a query about artificial intelligence. When asked for their response to the statement, "We'll learn on the job with AR," 46 percent agreed, implying that they would deploy autonomous machines but only learn how to use them after they were deployed.
If you do not know how to use a thing, how do you select the best thing in the first place? If you were going to create Skynet (the evil AI in the Terminator movie franchise), that's probably how you'd do it. Gee, let's deploy an AI over our weapons systems and then learn how to -- oops, we're dead...
Another troubling response was that only 42 percent believed that offloading undesirable tasks to machines would result in a gain in job satisfaction. So, 58 percent believed that our future -- the future those very folks will be helping to create -- will suck. I'm not even sure how you get there without bad intent.
Think about it: Create a scenario in which a machine does all the stuff you hate about your job, but as a result, your job satisfaction drops. What is implied is that the machine must also be affecting your work in ways you'd like even less. However, the implication is that fixing that just isn't a priority for the majority of business leaders. On the contrary, they seem to expect that with their guidance, employee job satisfaction will degrade.
The survey results also imply that 50 percent of the businesses surveyed not only would fail by 2030, but also that their leaders expected them to fail and weren't prioritizing ways to avoid that outcome. More importantly, if an individual, group or company believes strongly enough that there is no chance of success, then failure likely will be a self-fulfilling prophecy. About half the business leaders appeared to need an attitude adjustment.
Businesses Will Spend Billions to Put Themselves out of Business
Where the survey shows clear intent is with respect to change. Recall that half the companies not only had no real clue about the coming technology but also believed it would make things worse. More than 85 percent indicated that in five years R&D would drive their organization forward, that they would convert to a software-defined business, that they would deliver their product as a service, and that their cybersecurity defense would become effective.
This brings to mind a survey and task force I was part of right before Y2K. We discovered a bunch of companies that appeared underprepared but were reporting they were ready. It made no sense until someone came up with the idea to look at the retirement dates for the related CIOs.
There was a high correlation between CIOs at firms that weren't ready but reported they were, and retirement dates prior to 2000. Fortunately, much of this was caught, and we didn't end up in a Mad Max world, but it was a surprisingly close thing. This survey indicates that a lot of business leaders think they will be ready by 2030, but they aren't doing what it will take to be ready. Might want to check their retirement dates.
Here's the thing: Anyone running a race intuitively knows that you have to know where you are going before you start running. Otherwise, there is a high probability that you'll accelerate away from the finish line rather than toward it. This survey indicates that around half of the companies are going to spend a ton of money without knowing where they are going, making things worse.
Missing Advice
The report covering the survey has some great advice on how to go fast, but it lacks advice on where to go. In short, it is strong on executing but weak on strategic direction. That is not unusual. Dell, like any vendor, is focused on what it does -- and what it does isn't what its customers do.
Dell wouldn't be a good place to go to for advice on the future of healthcare. It would, however, be good at helping a healthcare company get where it wanted to go.
In a way, technology firms are like mechanics in a car race. They can help a good driver win a race, but if the driver doesn't know which way to go, or sucks at driving, the mechanic likely will just help the driver get lost or dead faster.
What I find interesting, however, is the embedded advice from Karen Quintos, the chief customer officer at Dell:
"Stronger human-machine partnerships will result in stronger human-human relationships, as companies take a customer-first approach and lead with insights. By applying machine learning and AI to customer data, companies will be able to predict and understand customer behavior like never before."
This sounds like an excellent place to start to build an idea of where a company should be focusing to evolve into the future. Using machine learning and AI to begin to determine direction would be a powerful first step, but you'd still need to learn how to use those tools before you deployed them. Otherwise, you'd still end up pointed in the wrong direction.
Wrapping Up: Dell as an Example
A company like Dell isn't a good place to go for advice on direction -- only for advice on how to get there faster and more efficiently. However, one of the things that makes Dell unique is that it is private, and therefore has significant resistance to the excessive quarterly focus that plagues most other firms. This suggests it can be used as an example of how to determine that direction.
Eliminate the excessive focus on quarterly results (go private if you have to), ensure that leadership is both competent and aligned with the goals of the firm's strategic future (too many CEOs are excessively focused on their own short-term compensation), and create a relatively diverse organization focused like a laser on customer values. That's Dell, and it makes a very nice example of a firm that has set a good direction and is executing against that vision.
Michael Dell's aspirational statement that "we're entering the next era of human-machine partnership, a more integrated, personal relationship with technology that has the power to amplify exponentially the creativity, inspiration, intelligence and curiosity of the human spirit" needs one addition. My suggestion is this: "If you don't set the right goals, all this seemingly wonderful stuff will just put you out of business faster."
Put a different way, putting a rocket on your car could guarantee that you win a race -- unless you are facing a brick wall rather than the race course. So, pointing the car in the right direction first should be a major plan requirement. This study suggests that about 50 percent of the firms plan both to buy rockets and to face them in the wrong direction. If that isn't fixed, it won't end well.
By the way, one final thought: Were I running a team to set strategic goals, Dell's partner in this study would be one of the places I'd go to start. The Institute for the Future is one of the better resources to help you plan for the future you want to live in.

Courtesy: www.technewsworld.com/story/85171.html

5
Science and Information / Facebook's Vulnerabilities Surface
« on: May 14, 2018, 10:35:42 AM »

By Denis Pombriant • CRM Buyer • ECT News Network
Mar 7, 2018 12:35 PM PT

Adam Smith famously referred to "the invisible hand" of the free market in his landmark book The Wealth of Nations, and with that made himself one of the very first political economists. Smith's observation was so on point that most of us assume markets run through the agency of individuals pursuing their enlightened self-interests. A lot of this drove the evolution of CRM as a tool for tracking customers.
If you pay attention today, you can notice the not-so-invisible hand functioning in multiple areas. For instance, if you've been following the aftermath of the school shooting in Florida, you know that a group of kids kick-started a nationwide activist movement to get something done about gun safety. The #MeToo movement -- women banding together to change the workplace by eliminating sexual harassment -- is another example. So is the Black Lives Matter movement.
What they have in common is the initiative by engaged individuals to cause change in what essentially are "marketplaces" in the broadest conception of that term. Much closer to home, even in the technology world we're seeing the stirrings of user dissatisfaction with social media, and it's not clear where this will go. Its impact on CRM could be big, because social has become one of the key channels linking vendors and customers.
Grudging Assistance
A recent article in Wired, "Facebook Doesn't Know How Many People Followed Russians on Instagram" by Issie Lapowsky, documents Facebook's foot dragging on producing information for the various inquiries surrounding the 2016 American election.
Jonathan Albright of Columbia University's Tow Center for Digital Journalism has been looking at the details and producing information that's uncomfortable for Facebook. He's been quoted in Wired, The New York Times and elsewhere.
Albright's work has uncovered many things concerning Facebook's approach to the investigation that you might consider passive-aggressive. For instance, when asked why it had not produced information about how many people had seen Instagram information created by Russian-operated troll accounts, a Facebook spokesperson said, "We have not been asked to provide that information." (Facebook owns Instagram.)
It's not necessary to repeat the article here; it's worth reading, but that's your call. It documents how Facebook has assisted investigators so far, but only if they ask the right questions. The final paragraph summarizes this point:
"Facebook has shown consistent reticence in detailing how these trolls infiltrated its platform and who that propaganda reached. They've repeatedly had to correct prior statements about the reach of these ads and accounts. By working with outside researchers like Albright, the company might be able to paint a more complete picture, but Facebook has been unwilling to open its data up to researchers."
It's not necessary to re-examine every time Facebook denied its involvement or disputed findings that upwards of 150 million people saw content from the Russians, or that all the U.S. intelligence services have agreed that the Russians did indeed hack the election. That's all very interesting from another journalistic angle but not this one.
When Doubt Sets In
The totality of Facebook's unwitting involvement in the hack combined with its efforts to downplay its importance brings up a bigger issue for Facebook and, by extension, all social media: How useful are Facebook and social media generally, considering the Russian hacks?
A glib answer might be that they don't have to be terribly useful because they are free, and users get whatever utility they want from using them. That argument misses the point. If Facebook's utility were small, or especially if it were disputable, its business model would be in serious trouble.
Social media's primary product always has been the user. It is valuable to each of us when we use it to gather information about our personal graphs, and we knowingly pay an in-kind fee by letting social sites collect data about us, which they then can sell to advertisers. It's a classic network effect -- the greater the audience, the more valuable the output of its data.
But what happens if the veracity of information on social media is in doubt? Social media's value is directly proportional to its veracity. If one can doubt that veracity, then it might be prudent to seek alternatives. People and corporations that invest heavily in using information from social media might begin doubting if their investments deliver value.
My Thoughts
Facebook's approach to the hacking scandal so far has been to deny and ignore it, only admitting something when there's no other choice. This presents another problem associated with stonewalling -- dissipating trust. However unpleasant the facts, the more a party tries to ignore or hide them, the lower the market's trust in that entity.
The truth value of what people post on the networks and what they believe about the truthfulness of others' posts makes social media's world go around. That truth is what makes some people spend hours a day surfing the sites. Once that trust begins to erode, even a little, the business model can begin to unravel.
Whoever is advising Facebook on its strategy should reconsider. It's human nature to dislike dealing with criticism and serious accusations. However, impeding the free flow of information won't solve the problem. Free markets depend on transparency, and Facebook is a free market of ideas. If it stops being that, or even if people just stop believing it, then there's no reason for them to continue using it.

Courtesy: www.technewsworld.com/story/85185.html

6

By Rob Enderle
Apr 9, 2018 10:56 AM PT

The shooting incident that took place last week at YouTube had less to do with guns than with the failure of the police to act on information in a timely way and the inability of social media to be anything but part of the problem.
Google has been giving this issue little more than lip service, but I expect it has become motivated to do more, given that YouTube -- not some distant school -- was the latest target. Funny how personal risk can change perspectives.
However, in both the Parkland and YouTube events, social media -- Facebook in the case of Florida, and YouTube in the case of YouTube -- seemed to help inflame the attackers, or at least did nothing to reduce their anger or eliminate the threats.
Foreign governments apparently want to interfere with elections and polarize U.S. citizens, which showcases abuse of an incredible power. Why can't that power be used in better ways -- say to keep kids and Silicon Valley employees safer?
It can, and I'll suggest how before closing with my product of the week: a new satellite box for the TiVo service.
 
The Problem
The tech industry has a problem, as I pointed out in a recent column and as the book Technically Wrong spells out in detail.
That problem appears to be worst with social media companies, which have exhibited nearly complete disregard for their users -- who aren't their customers -- and even disregard for their home countries.
At the heart of the problem is the disconnect between those who provide revenue for these "free" firms and those who use them. Mark Zuckerberg years ago became rather famous for pointing out that those who used his service were stupid, though he used a far more interesting term at the time.
Since then he has insisted that he's changed his mind, but I watched him in an interview on TV last week, and it seemed he still thought we were all dumb f*cks, claiming that Facebook shared only what users put on the service to share. In other words, "what's the problem again?"
I'm starting to think the name "Zuckerberg" should be the new single word alternative to "tone deaf."
We know that foreign operators used social media services aggressively in an effort to influence the outcome of the last presidential election and to change public opinion on a national level. In short, social networks have been used aggressively to harm U.S. citizens.
We also have learned that in both the Parkland and the YouTube shootings, social media should have been aware of an attack threat long before one occurred.
Whether we agreed to share information or not, we certainly did not sign up to weaken our country or to facilitate mass shootings.
Social media services have demonstrated the power to change opinions, and they certainly have the data and resources to identify threats. The thing is, they need to manage these capabilities appropriately and at scale. I interviewed a company last week that has a solution.
Darwin Ecosystem + IBM Watson
Darwin Ecosystem is one of a new class of companies that is artificial intelligence-centric. In this case, it uses the IBM Watson platform to analyze writing to determine personality types and changes in personality.
One of the interesting things it did during the last election was to analyze the candidates. It even created a dynamic graph so you could look at each key personality trait individually.
One of the interesting findings was that, over time, the personality differences between Clinton and Trump seemed to converge, while Sanders remained largely the same. It's arguable that the convergence appeared to hurt Clinton and assist Trump's victory (though with all of the social media influence, this cause and effect is questionable). Another finding was that, at least with regard to certain personality traits, Trump was surprisingly close to Obama, also suggesting a connection between personality and success.
What Darwin Ecosystem focuses on, however, is analyzing writing to determine employee problems at every level in a company. For instance, a board could use it as a tool to gauge whether a CEO was taking direction and becoming the CEO it wanted, as opposed to becoming far too enamored with abusing the privileges of the office. Management could use it to make sure an employee was well utilized and not burning out or even becoming violent.
Anecdotal evidence suggests this tool also could be used to determine if a child or adult was becoming depressed and suicidal. In theory, it could be used to determine if someone was developing homicidal tendencies.
Like all AI systems, this one uses pattern analysis and deep learning to make determinations about the authors of written works. As with most new AI systems, Darwin Ecosystem's tools can be used at scale.
This suggests that were a solution like Darwin Ecosystem's applied to services like Facebook and YouTube, and tied into a response system modeled after what the Russians allegedly used against the U.S., you could have a strong defense against violence.
This same model could be used to identify traitors, bullies, trolls, and a whole variety of bad actors as well, and then appropriate steps could be taken to alter their behavior. Basically, what it would amount to is human programming at scale. We're already doing it -- we just don't apply it properly to our defense.
Violence Defense
What I'm suggesting is a solution that would start with an AI-based threat detection system that first would identify those who were displaying violent characteristics. Then it would be followed by two threat mitigation programs. One would feed posts designed to point the individual to nonviolent forms of rebellion or attack, while another would notify authorities who then could respond at the appropriate level.
The system could generate and send to law enforcement a report ranking threats and attacks by their potential, so law enforcement or social services could respond appropriately.
The mitigating posts sent to the emerging attacker would provide information on the collateral damage associated with attacks, how attackers were punished or killed, and credible posts suggesting meaningful alternatives that could result in better outcomes than violent acts.
They could include pointers to suicide prevention or other services that could deal with whatever the AI, or the flagged human moderator, might decide would be the best program to move the target off the violent path.
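To make the detect-then-respond idea sketched above more concrete, here is a minimal, hypothetical illustration in Python: a toy text classifier that scores posts for concerning language and routes them to the two response tiers described (de-escalation content vs. notifying authorities). The training examples, thresholds, and tier names are invented for illustration; this is not Darwin Ecosystem's or IBM Watson's actual system.

```python
# Hypothetical sketch of the detection-and-routing idea described above.
# This is NOT Darwin Ecosystem's or IBM Watson's implementation; the toy data,
# thresholds, and response tiers are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 1 = concerning language, 0 = benign.
posts = [
    "I am going to make them all pay for what they did",
    "nobody will laugh after I bring my plan to the school",
    "had a great time at the concert with friends last night",
    "looking forward to the long weekend and some hiking",
]
labels = [1, 1, 0, 0]

# A simple text classifier standing in for the far richer models the column describes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def triage(new_posts, notify_threshold=0.8, mitigate_threshold=0.5):
    """Score each post and route it to a response tier."""
    report = []
    for text in new_posts:
        risk = model.predict_proba([text])[0][1]  # probability of "concerning"
        if risk >= notify_threshold:
            action = "notify law enforcement / social services"
        elif risk >= mitigate_threshold:
            action = "queue de-escalation and support content"
        else:
            action = "no action"
        report.append({"post": text, "risk": round(float(risk), 2), "action": action})
    # Rank the report so responders see the highest-risk items first.
    return sorted(report, key=lambda r: r["risk"], reverse=True)

for entry in triage(["they will regret it when I show up", "see you at dinner"]):
    print(entry)
```

In a real deployment the model, thresholds, and escalation paths would have to be far more carefully built and audited; the point here is only the shape of the pipeline: score, rank, route.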
Instead of allowing technology to be used against us, we could be using it to keep us safe.
Wrapping Up
These giant social media companies have become not only a danger to the nation, but also a danger to their own employees. Their near total disregard for their users has created an environment where they really should start thinking about building defensible fortresses rather than open campuses. If they don't change their behavior, their consistent lack of regard for users undoubtedly will result in more violence.
However, there are AI tools that could be used to neutralize blossoming attackers and/or alert authorities about related impending harm, rather than being used to manipulate the populace in general.
These services ultimately will manipulate us in some ways. However, they could be used to help keep us safe. They've done little to address other mass shootings, but given that their lives are now on the line as well, maybe they will step up.
By the way, for anyone thinking of a better perimeter camera system, I also ran into Umbo Computer Vision last week. Its intelligent (neural network) camera system is capable of identifying a variety of approaching threats and eventually should be useful in identifying an approaching known attacker.
Note to the NRA: This is a path that has nothing to do with gun control and yet could do a lot to keep the next mass shooting from happening. If folks stop shooting other folks, the Second Amendment would be a ton safer. Just saying.

Courtesy: www.technewsworld.com/story/85257.html

7
Science and Information / What Should We Expect From AI?
« on: May 14, 2018, 10:28:31 AM »

By Jim McGregor
Apr 12, 2018 7:00 AM PT
Fear mongering about killer robots and the recent deaths connected with Uber and Tesla autonomous vehicles have rekindled concerns about artificial intelligence in the machines around us. We are well beyond answering Alan Turing's question, "can machines think?" There is now good reason to ask how we should think of AI, and what we should expect from it.
There have been phenomenal advances in AI in just the past few years. They are due in part to advances in processor technology that have increased exponentially the compute performance for artificial neural networks, the development of deep learning software frameworks, and the massive amounts of data mined directly from the Internet and the world around us.
We now can train artificial neural networks in the time it would take to make a cup of coffee. Should that scare people? Not really.
Don't Expect Perfection
You have to remember that these solutions are being trained for a specific function. They do not think out of the box, do not ponder the meaning of life, and do not have feelings. In most cases, especially today, both the initial training and continued training are limited to large server systems in cloud data centers.
As a result, public interaction with AI is limited to cloud-related services like Web browsers or trained models that then are passed down to what we call "edge devices" (referring to the edge of the network) such as smart speakers, smartphones or even cars.
Eventually, continued training or even initial training may be done at the edge, but that may take a revolutionary change in processor technology -- such as neuromorphic computing, which is only in the research stages.
"AI" is exactly as the name implies -- the ability to acquire and apply knowledge and skills -- meaning that it learns over time and, more importantly, learns with additional data. The more data a system utilizes for training in the form of files or even live sensors, the more accurate it will be in performing a specific task.
However, as a form of intelligence, it never will be perfect. Just as humans learn through new information and interactions, so do machines. New teenage drivers may be caught by surprise the first time they drive on ice, but they learn from the experience and get better with time. So too will AI-based systems, but there always will be uncertainty with new data or circumstances.
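As a rough illustration of "learns with additional data," the sketch below trains a simple classifier in small batches and reports test accuracy as more examples arrive. The dataset and model are generic stand-ins, not any particular AI product, and accuracy generally (though not perfectly) improves as more data is seen.

```python
# Illustrative sketch only: a simple classifier that, like the systems described
# above, tends to get more accurate as it sees more data and can keep learning
# from new examples after deployment via incremental updates.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(random_state=0)
classes = np.unique(y)

# Feed the training data in batches; partial_fit lets the model keep learning
# as new data arrives, and accuracy on held-out data generally rises.
for start in range(0, len(X_train), 200):
    batch_X = X_train[start:start + 200]
    batch_y = y_train[start:start + 200]
    model.partial_fit(batch_X, batch_y, classes=classes)
    print(f"after {start + len(batch_X):4d} examples: "
          f"accuracy = {model.score(X_test, y_test):.2f}")
```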
A Safer World
The potential for AI to enhance people's lives and change society are endless, but the areas where we'll see the greatest short-term impact are healthcare and transportation. Consider the possibility of having genetically engineered prescriptions for each person, or the ability to find cures for an infectious disease in days, or even hours, because of the abilities of AI systems.
Also think about autonomous trucks and cars being able to ferry people and goods around the world with no need for stop lights. This is all possible, and it's coming sooner than you think.
AI already is used in a wide variety of scientific, financial, Web applications, user interfaces, manufacturing, and more. This is one of the most enabling advances in technology ever -- and like other major advances, it will change the world dramatically. However, it won't be perfect.
With autonomous vehicles, for example, the only way to eliminate any possibility of a human death is to separate pedestrian and vehicular traffic completely. That might happen, but it will require significant infrastructure changes that could take from decades to a century.
As a result, there will be more accidents that may result in more deaths from cars and other autonomous machines enabled by AI. However, the number of deaths and injuries will be drastically lower compared to human-operated machines. Just as airline accidents have become uncommon, so too will auto and other accidents, due to the use of AI. The rarity of such accidents, however, will result in spectacular headlines when they do occur.
AI also will be used in defense applications, another case in which it should improve systems to reduce or prevent virtual and physical attacks, as well as loss of human life.
So, what should we expect from AI? We should expect a safer world with significant advances enabled through intelligent systems. How should we think about AI? We should consider it a breakthrough technology that already is changing the world around us for the better.

Courtesy: www.technewsworld.com/story/85269.html

8

By Rob Enderle
Apr 16, 2018 10:42 AM PT
After reading Facebook CEO Mark Zuckerberg's testimony, and viewing some video clips of his appearances before Senate and House committee hearings last week, it became very clear to me -- and I expect many in Congress (these were unprecedented events, and it's an election year) -- that social media companies need to be regulated.
However, I think this is only a step in the path that governments -- and I do mean more than the U.S. -- will take to ensure their effectiveness and protect their people. I'm not suggesting the U.S. Congress, whose members currently appear to be woefully out of step, could run these companies better.
What I can see easily, though, is that as technology evolves, the need for national security will drive Congress to take control of the digital identities of citizens, both to protect the people better and to ensure that the government and country survive.
I'll share some thoughts on that this week and close with my product of the week: IBM's new "Skinny" System Z mainframe.
 
More Powerful Than a Nuclear Bomb
If we think in terms of ordnance, the nuclear bomb is the most powerful, and no government would be willing to allow a company to amass any of those things -- let alone enough to overthrow the government. Yet all a bomb can do is intimidate and destroy. It doesn't really control.
Were a firm to acquire a critical mass of nuclear weapons, it is likely more than one government would work inordinately hard to remove the threat to ensure the sovereignty of that government. (I'm thinking there would be a crater where that company used to be.)
What the various world governments are coming around to understanding is that deep data on individuals can be far more powerful than a bomb. Deep data can be used to overthrow governments without the government even knowing it is at risk.
The citizens are the real power behind a government. If you can control them, you effectively can control a nation -- the government becomes both redundant and subordinate.
We are currently dealing with the fact that Facebook, which clearly now has that power, sold it to a third party, potentially aiding a foreign government's efforts to control a national U.S. election. Attempting to influence an election isn't new -- governments have done it to one another for centuries. However, this may very well be the first time that a large company has had this power and made it available to a hostile nation for use against its own country.
This kind of control and really bad decision making used to exist only in the public sector. Based on investigations in the U.S. and in the EU, the public sector appears to be coming around, slowly, to the idea that something on a national level -- if not world level -- needs to be done.
It is interesting to note that traditional media outlets, which were hurt massively when both Google and Facebook emerged, seem to be on the front lines in this effort.
Taking Over Facebook
The U.S. government currently has the power to seize all of Facebook's physical assets. The FTC has in place a consent decree with Facebook, which seemingly was violated. Based on the formula in the decree, the fines that could be imposed possibly could exceed Facebook's current assets by a significant amount. Certainly, they could amount to more than even the most capital-rich company could pay. The U.S. government has the power, authority, and resources to fine Facebook into nonexistence.
Current discussions suggest the plan is to do something far less severe than that -- but do realize that if you or I were facing this situation after committing a crime, the fact that we couldn't pay the fine likely would be our problem to solve.
I doubt the government actually will take over Facebook at this point. However, the lack of control over user data and the fact that a second company apparently has misused it suggest that we probably haven't seen the last of these disclosures. (You may recall how the information about the extent of the Yahoo breaches kept getting worse over time.) It's possible that the Facebook scandal still could escalate to a level even Facebook couldn't survive.
I'm not expecting a near-term fix, but what I do foresee is that with 44 senators attending the Zuckerberg hearing, "fixing Facebook," or possibly eventually nationalizing it, will become a common political goal. That goal should mature to action around the time of the next presidential election -- thus the five- to 10-year range of my prediction.
Who Should Own Your Data?
Given that most of us have been giving away our data unthinkingly, and that it can be used to manipulate us as a nation, ownership shouldn't reside with us alone. In much the same way that the government attempts to protect us from ourselves with laws and restrictions surrounding alcohol use, drug use, sexual behavior, reproduction, driving, smoking -- well, the list could go on for a while -- it probably will conclude that it must at least have joint ownership over our data.
It's arguable that the governments of some countries, where power is more absolute -- for example, Russia, North Korea and China -- already do.
Still, given that governments tend to misuse the power they have, such ownership likely would result in illegal actions within the government that potentially could subvert democratic processes. We certainly have seen this in governments that own or control their own press or media.
Citizens' personal data only makes that control far more effective. In the hands of any government, it not only would ensure the death of democratic processes, but also the eventual abuses of citizens at massive scales.
I think personal data should be regulated by an organization independent of any government and with the power to defend itself against any government. I'd point to the United Nations as the closest entity that approaches that power, but the UN really isn't much more than a paper tiger. The kind of power I'm talking about would guarantee its becoming even more deeply compromised by the most powerful governments controlling it.
One of my favorite TV shows years ago was The Man from U.N.C.L.E. "U.N.C.L.E." stood for "United Network Command for Law and Enforcement." It was kind of a superset of Interpol with far more capabilities and power, and that's what we need in the case of data protection, before any one government -- even our own -- nationalizes Facebook and Google.
Google Under the Radar
Facebook is now in the spotlight, but I think Google is by far the bigger problem. I'm still thinking about the books Brotopia and Technically Wrong. In both books, Google is represented as by far the worse actor of the two firms.
Google also has far more data about people and far more that we didn't realize we gave up. It effectively holds a superset of what Facebook holds, and it has been accused of lobbying against regulations to limit sex trafficking -- the sale of young girls into slavery. Google apparently was the most powerful company aggressively trying to block the legislation.
It recently failed, but until then it had been successful, possibly because it had far more influence on the Obama administration than it does on the Trump administration. However, those efforts ensure its position at the top of the pile of bad-acting U.S. companies.
There is a lot of concern about someone developing an artificial intelligence system that would have Terminator-like tendencies. The company that has come up in most every conversation I've been part of as most likely to develop such a system is Google. Ironically, the company with the tag line "do no evil" appears to have evolved into the stereotypical Bond villain.
Google's potential power and focus makes it far more dangerous in the long term than Facebook ever could become.
Wrapping Up
Whether we foolishly gave Facebook power or watched Google seize it, the result is that it isn't in the long-term health interests of any government -- or even those of Facebook and Google -- that they have it.
If nothing is done to mitigate the risks, governments will view these firms as the existential threats they already are and use draconian measures to seize them. Nationalizing these companies on a country-by-country level is now the most likely outcome. These firms still might have the power to avoid this threat and take actions to put in place a company- and government-independent control structure to ensure our personal information won't be weaponized against us.
Certainly, any small number of employees could come up with a more practical plan than members of the U.S. Congress, who continually seem to struggle with technology. However, if Facebook and Google don't fix the threats they represent, they are building toward their own terminations. Last week's hearings should have been a huge wake-up call to both.

Courtesy: https://www.technewsworld.com/story/85276.html

9

By Ed Moyle • E-Commerce Times • ECT News Network
Apr 18, 2018 5:00 AM PT

The security skills gap has become a topic of acute interest among practitioners responsible for building security teams for their organizations -- and keeping them running smoothly. It impacts everything from how they staff, how they cultivate and develop their workforces, and how they train, to the operational controls they put in place, and potentially numerous other things about their security programs.
The term "skills gap," in a nutshell, refers to specific challenges organizations have confronted over the past few years in finding and retaining competent, trained resources for security efforts. It is a measurable trend across the industry as a whole.
For example, it takes most organizations (54 percent) more than three months to fill open security positions, the recently released 2018 ISACA Global State of Cybersecurity Survey found. That figure is consistent with its prior year's findings.
In terms of the skills in highest demand, technical skills are the most difficult to find, and the level of position being sought is individual contributor rather than managerial in nature, the ISACA data suggest.
While these data points are interesting in and of themselves -- for example as a generic barometer of staffing considerations in security as a whole -- they also are important in ways that may not be intuitive. At least, that's true for savvy practitioners. That is, the report serves as a tool for security managers to benchmark their own staffing performance.
The fact that the skills gap exists and is being measured by numerous parties outside your organization means that the measurements you take about your own team can be compared directly to an objective, organization-agnostic benchmark. How often do opportunities to do that arise?
Say you're planning your daughter's birthday party and you're thinking about serving ice cream. If your daughter doesn't like vanilla, how much would it influence your decision making about which flavor to buy if I told you that vanilla was the most popular ice cream flavor in the world? Or that it was the most popular flavor in the U.S.? Both of those statements would be true, but would that matter? Not at all, right?
Are You Keeping Track?
The point is that both types of information can be useful. Understanding the broader trend is important because having that can help you plan more effectively. For example, knowing that it might be challenging to staff up certain skills (e.g., technical skills) might cause you to invest in strategies to maintain talent you already have in order to minimize attrition.
Further, that knowledge might prompt you to invest in strategies that let you creatively cultivate new team members in unconventional ways (e.g. through internships, "externships," or other avenues), or invest in strategies that automate some processes.
There could be multiple viable options, but picking the one that is right for you is dependent on having some clue about what is going on in the first place.
However, understanding the broader trend in the context of how your team specifically performs is exponentially more valuable. Why? Because it lets you evaluate how the strategies you invest in are playing out. For example, if you decide to serve ice cream (vanilla or otherwise) every Friday to help make the workplace more fun, is it a useful talent retention strategy? Who can tell if you're not measuring the outcome?
Benchmarking your own staffing efforts relative to peers, while valuable, does take a bit of legwork. It means, first of all, that you're keeping track of performance metrics relative to staffing considerations ("temet nosce" -- know yourself).
It likewise means that you're keeping an eye on data sources available externally -- that you have some degree of situational awareness of staffing issues.
Neither of these things is rocket science, but you'd be surprised how frequently security managers (even CISOs and CIOs) don't track things like turnover, open headcount, time to fill positions, staff training goals/needs, and so forth.
It's not that they don't want to -- it's just that doing so is less of an operational priority than more tactical considerations -- like dealing with the threat du jour, or deploying operational tools.
Remember the triad of people, process and technology? Each one is an important pillar in organizational performance. An advantage in any one of these areas means an advantage relative to peers overall. Those who can't find staff, who have sub-par staff, or who otherwise have an ineffective or operationally deficient staffing strategy are at a disadvantage, while those who excel in these areas have an advantage.
Taking It Forward
As a practical measure, what can organizations do to make sure they're developing their teams in a competitive way? There are a few things that can be helpful:
1.   It is a good idea to keep track of some metrics about staffing -- both your organization's ability to bring in new folks and to retain existing personnel. The few metrics I listed above are a useful starting point, but they are by no means the only possible options.
You might want to track softer instrumentation, like staff perception about opportunities for advancement, fun in the workplace, and overall job satisfaction. These things can be correlated to harder values like turnover rate in a particular area, or other metrics that are more outcome-focused. The specific choice is up to you, of course, but the fact that you're tracking something will give you data that can be honed and explored over time (a minimal sketch of computing a few such metrics follows this list).
2.   Trending information can be valuable. In fact, it's so important in terms of your ability to correlate measures you implement to specific goals and outcomes that it's often better to have less specificity in terms of what you measure but a higher frequency of doing so.
For example, if you're experimenting with a new training regimen, you may find it more useful to assess the perceived value of the training more frequently (which allows you to get more real-time feedback and potentially pivot if you're not getting what you want) vs. doing a more in-depth exploration of employee perceptions less frequently, perhaps once a year.
3.   It's useful to solicit partners. HR organizations often do an employee satisfaction survey or engagement survey, for example, or use another measuring instrument (or combination of them) to benchmark employee perceptions of the organization at large.
Leveraging this data where it already exists can provide useful data points that can help security leaders build the best teams and -- maybe even more importantly -- retain the resources that have proven so difficult to replace.
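As a small, hypothetical example of point 1 above, the snippet below computes a few of the staffing metrics mentioned (turnover, open headcount, time to fill) from made-up records; the field names and figures are illustrative only, not taken from the ISACA report.

```python
# Minimal, hypothetical sketch of tracking the staffing metrics mentioned in
# point 1: turnover, open headcount, and time to fill. Sample records and
# field names are invented for illustration.
from datetime import date
from statistics import median

team_size = 12                      # current security headcount
departures_last_12_months = 3       # people who left in the trailing year
open_reqs_today = 2                 # currently unfilled positions

# Each filled requisition: (date opened, date an offer was accepted).
filled_reqs = [
    (date(2017, 9, 1), date(2018, 1, 15)),
    (date(2017, 11, 20), date(2018, 2, 10)),
    (date(2018, 1, 5), date(2018, 4, 30)),
]

turnover_rate = departures_last_12_months / team_size
days_to_fill = [(closed - opened).days for opened, closed in filled_reqs]

print(f"annual turnover: {turnover_rate:.0%}")
print(f"open headcount: {open_reqs_today}")
print(f"median time to fill: {median(days_to_fill)} days")
```

Numbers like these, captured consistently over time, are what make the external benchmarks from reports such as ISACA's actually usable for comparison.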

Courtesy: https://www.technewsworld.com/story/85285.html

10


By Chris Bucholtz • CRM Buyer • ECT News Network
May 9, 2018 9:59 AM PT

Much of the discussion around sales and artificial intelligence has been outwardly focused. It's exciting to think about using AI to examine leads, sift through data, and help deliver answers that allow salespeople to close more and bigger deals.
However, the excitement over using AI to organize customer information overshadows another use of AI that could be equally lucrative. AI also can be used to look inward and solve pressing sales issues -- for example, sales churn.
Annual sales turnover rates are 26.9 percent for inside salespeople and 25.7 percent for outside salespeople, according to DePaul University's Sales Effectiveness Survey. The average of the two -- 26 percent -- is twice as high as the average for all other professions.
Also, the cost of replacing salespeople has been increasing. The cost of replacing the average salesperson has ballooned to almost US$115,000, the same study indicated. If your sales organization has 100 salespeople, and your retention is average, you're losing 26 salespeople a year, and it costs you $2.99 million to replace them.
Now, imagine that you could halt that churn. If you could even reduce it by a third, you'd add $1 million to your bottom line without landing a single new customer (although having those sales slots filled probably would help you get signatures on a lot more deals).
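For reference, here is the arithmetic from the two paragraphs above, worked through with the cited DePaul figures; the variable names are just for illustration.

```python
# Worked version of the arithmetic above, using the figures cited from the
# DePaul survey: ~26% average turnover and ~$115,000 replacement cost.
team_size = 100
turnover_rate = 0.26
replacement_cost = 115_000

salespeople_lost = team_size * turnover_rate          # 26 people per year
annual_cost = salespeople_lost * replacement_cost     # about $2.99 million

reduced_rate = turnover_rate * (2 / 3)                # cut churn by a third
savings = (turnover_rate - reduced_rate) * team_size * replacement_cost

print(f"annual replacement cost: ${annual_cost:,.0f}")
print(f"savings from cutting churn by a third: ${savings:,.0f}")
```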
Don't Punish the Overachievers
How can you attack the churn problem? First, you have to be able to spot the problem -- only then can you take steps to address it. Spotting the problem is where AI can make a big impact.
To make AI your ally in reducing churn, you need to identify the indicators of churn, the telltale signs that a salesperson is getting close to jumping ship. AI can't read salespeople's minds, but it can read the data -- if you train it properly and give it access to the right information around key indicators. What might those indicators be?
If you have an overachieving salesperson whose targets for bonuses or accelerators have repeatedly been adjusted upward, that salesperson is likely to look for other opportunities. Moving the goalposts on top performers is dangerous -- the first adjustments may make sense, because they correct early assumptions about the salesperson's capabilities.
If you make adjustments beyond that, however, a salesperson can become resentful. It can seem as if goals will be raised until the salesperson fails -- a perception that pits the salesperson against the company and can push the salesperson to seek greener pastures.
Using AI to flag such situations can enable managers to counter those perceptions and manage more effectively at the same time.
Offer the Right Tools
Engaged salespeople want to know about the products they sell, and about additions to the product line. An automated training tool can provide insight into salespeople's interest in continuing to represent the company: a drop-off in usage may suggest a likely churn scenario.
The flip side is when increased use of educational material fails to move the needle, creating a frustrating scenario that could cause a salesperson to move to a new gig. Using AI to find correlations in the data can point out team members whom managers may need to work with to overcome frustrations and keep talent.
A similar scenario is an increase in a salesperson's use of sales content and other material in the sales enablement system that isn't matched by an increase in sales performance. A salesperson may become dissatisfied with the support tools the organization uses and may decide to move on. If trained properly, AI can spot these conditions and flag them for managers.
The failure to capitalize on available bonuses and incentives is another red flag for churn. When salespeople go on autopilot and aren't moved by new incentives, they're likely to be disengaged and thus ready to jump to another company.
Low utilization of sales technology is another classic indicator of sales dissatisfaction. It suggests that salespeople aren't happy with the tools they've been given -- and if that's the case, they may already be looking elsewhere.
Train Your AI Well
These are just a few areas where AI can surface indicators that salespeople are on the lookout for other opportunities. Sales managers need to examine their own operations and understand the most common sources of dissatisfaction, then work with Sales Ops to identify the data sources they need to train AI through the process of machine learning.
An organization might even set up a "churn scoring" system, similar to the scoring done in marketing, that weighs the churn likelihood of every salesperson on an ongoing basis and lets managers focus their retention efforts on those who score highest.
The flip side is that this also can help with planned attrition: if a company regularly lets some low performers go, it's smarter to say goodbye to the ones who are already dissatisfied, and to retain and train salespeople whose performance is lackluster but whose behavior shows they are engaged and still want to stay with your team.
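
As a rough sketch of what such a churn-scoring system might look like, the Python snippet below weighs the indicators discussed in this article. The signal names, weights, and thresholds are invented for illustration; a real system would learn them from historical attrition data rather than hard-coding them.

# Minimal rule-based churn score built from the indicators discussed above.
# Weights and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class RepSignals:
    quota_adjustments: int      # times targets were raised after the first correction
    training_usage_drop: float  # fractional drop in training-tool usage (0-1)
    content_use_no_lift: bool   # more enablement content used, but flat results
    incentives_ignored: int     # new incentives the rep did not pursue
    tool_utilization: float     # share of available sales tools actually used (0-1)

def churn_score(s: RepSignals) -> float:
    """Return a 0-100 score; higher means more likely to leave."""
    score = 0.0
    score += 15 * s.quota_adjustments            # repeated goalpost moves
    score += 40 * s.training_usage_drop          # disengaging from product training
    score += 15 if s.content_use_no_lift else 0  # effort without results
    score += 10 * s.incentives_ignored           # on autopilot despite new incentives
    score += 20 * (1 - s.tool_utilization)       # unhappy with the tools provided
    return min(score, 100.0)

rep = RepSignals(quota_adjustments=2, training_usage_drop=0.5,
                 content_use_no_lift=True, incentives_ignored=2, tool_utilization=0.3)
print(f"Churn score: {churn_score(rep):.0f}/100")  # flag this rep for a retention talk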
Training AI to spot these indicators may seem simple -- but it's not. AI focused on potential customers brings with it a major data challenge that sales and Sales Ops must work together to solve.
AI focused on salespeople hands Sales Ops a similar but different data challenge, tapping into internal systems -- like sales performance management -- to power AI. It may be tough at first, but saving the business $1 million per year finally might get Sales Ops the credit it deserves, and at the same time deliver a stronger, longer-tenured and more skilled sales force.

Courtesy: www.technewsworld.com/story/85328.html

11

On January 22, 2018, Amazon opened their first Amazon Go store in Seattle, Washington. The store - which one shopper called the "future of grocery shopping" - is equipped with sensors and cameras that track which items customers take or put back. With the help of a special Amazon Go app, the system automatically charges customers after they exit the store. This eliminates the need for lines, making shopping a much quicker experience.

https://www.ranker.com/list/cool-new-tech-2018/eric-vega

12

No one should ever drive drunk, but it can be difficult to monitor your blood alcohol content without taking a breathalyzer test. While there are some small breathalyzers on the market, they can be clumsy and awkward to use in public. Proof is a unique and elegant solution to this problem: a wearable wristband that can discreetly analyze your BAC through your skin. The makers of Proof managed to earn nearly twice what they were asking for after launching a successful crowdfunding campaign. The wristband is set to begin shipping at full production capacity in late 2017.

Source: https://www.ranker.com/list/cool-new-tech-2018/eric-vega

13

Smart glasses are far from a new invention. The glasses, which project information onto your retina as you go about your day, once came with clunky equipment attached. Cameras, extra appendages, and microphones made the glasses a bit of an eyesore that easily drew unwanted attention to their wearers. Intel's new Vaunt glasses, however, offer a streamlined design that's vastly more user-friendly. The glasses look exactly like regular eyeglasses and are available in various prescription lenses. This means users can enjoy the benefits of the glasses throughout the day without attracting unwanted stares or having to deal with uncomfortable equipment.

https://www.ranker.com/list/cool-new-tech-2018/eric-vega

14

In 2017, the cinema camera maker RED announced it was working on a smartphone with a "holographic screen." Consumers were unsure what this meant until September 2017. Essentially, the phone would provide a 3D viewing experience on screen, projecting things like buildings and landscapes in an interactive map. A user could essentially take a virtual tour via the phone screen, looking around, above, behind, or in between various objects.
The RED smartphone is currently available for pre-order, for the fairly hefty price tag of $1,195. No official release date has been announced, but the company says users can expect to see the product sometime in early 2018.

Source: https://www.ranker.com/list/cool-new-tech-2018/eric-vega

15
Science and Information / Putin Has a Brand New Limo
« on: May 09, 2018, 11:26:35 AM »

Putin ditches his Mercedes-Benz for a locally-designed Rolls-Royce-inspired limo with help from Porsche and Bosch.
 
Vladimir Putin was recently inaugurated for his fourth term as Russian president, and the occasion called for a special debut: The car known as Project Cortege, a locally-built presidential limousine co-developed by Porsche, Bosch, and the Moscow-based Central Scientific Research Automobile & Engine Institute. After almost three decades, it seems the Russian leadership is now officially done buying Mercedes-Benzes.
Is the Cortege limousine powered by a V8 or a V12? You tell us! Sources are highly unclear on that, as usual. Not that it matters. Rest assured, Putin's new ride is fast enough, bulletproof, and will spot nefarious secret agents within a radius of four miles. Allegedly.
It's got a long wheelbase, it drives, it corners, it stops. It's black, and it's only slightly Chrysler 300/Rolls-Royce Phantom-looking. Not much more a President can ask for.

Courtesy: Máté Petrány, www.popularmechanics.com, May 8, 2018

Pages: [1] 2 3 4