Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - motiur.swe

1
Daffodil International University in research report from SCOPUS (2003-2020)

Total Documents: 753
Documents in 2019: 280 (the highest annual publication count to date, as of 02/02/2020)
Total Authors: 587


Charts included in the report:

  • Documents by year (2003-2020)
  • Top Subject Area (2003-2020)
  • Type of publication (2003-2020)
  • Top sources (2003-2020)
  • Top Collaboration Institution (2003-2019)
  • Top Authors (2003-2020)
  • Top collaboration Country (2003-2020)

3
This time you’ll build a basic Deep Neural Network model to predict Bitcoin price based on historical data. You can use the model however you want, but you do so at your own risk.

You might be asking yourself something along the lines of:

Can I still get rich with cryptocurrency?
Of course, the answer is fairly nuanced. Here, we’ll have a look at how you might build a model to help you along the crazy journey.

Here is the plan:

  • Cryptocurrency data overview
  • Time Series
  • Data preprocessing
  • Build and train LSTM model in TensorFlow 2
  • Use the model to predict future Bitcoin price

Data Overview
Our dataset comes from Yahoo! Finance and covers all available (at the time of this writing) data on Bitcoin-USD price. Let’s load it into a Pandas dataframe:

csv_path = "https://raw.githubusercontent.com/curiousily/Deep-Learning-For-Hackers/master/data/3.stock-prediction/BTC-USD.csv"
df = pd.read_csv(csv_path, parse_dates=['Date'])
df = df.sort_values('Date')

Note that we sort the data by Date just in case it isn't already in chronological order.
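
To sketch the remaining steps of the plan (preprocessing and the LSTM model), here is a minimal, illustrative version in TensorFlow 2. It is not the article's exact code: the choice of the 'Close' column, the 60-day window, the layer sizes and the epoch count are all assumptions made for the example.

import numpy as np
import tensorflow as tf

# Scale the closing price to [0, 1] and slice it into fixed-length windows.
close = df['Close'].values.reshape(-1, 1).astype('float32')
close = (close - close.min()) / (close.max() - close.min())   # min-max scaling

SEQ_LEN = 60   # use the previous 60 days to predict the next one (assumed value)
X = np.array([close[i:i + SEQ_LEN] for i in range(len(close) - SEQ_LEN)])
y = close[SEQ_LEN:]

# A small LSTM that maps each 60-day window to the next day's (scaled) price.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(SEQ_LEN, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=10, batch_size=32, shuffle=False)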

For more info:
https://towardsdatascience.com/cryptocurrency-price-prediction-using-lstms-tensorflow-for-hackers-part-iii-264fcdbccd3f

4



It’s 2019, and the majority of the ML community is finally publicly acknowledging the prevalence and consequences of bias in ML models. For years, dozens of reports by organizations such as ProPublica and the New York Times have been exposing the scale of algorithmic discrimination in criminal risk assessment, predictive policing, credit lending, hiring, and more. Knowingly or not, we as ML researchers and engineers have not only become complicit in a broader sociopolitical project that perpetuates hierarchy and exacerbates inequality, but are now actively responsible for disproportionate prison sentencing of black people and housing discrimination against communities of color.

Acknowledgement of this bias cannot be where the conversation ends. I have argued, and continue to argue, that even the individual ML engineer has direct agency in shaping fairness in these automated systems. Bias may be a human problem, but amplification of bias is a technical problem — a mathematically explainable and controllable byproduct of the way models are trained. Consequently, mitigation of existing bias is also a technical problem: how, algorithmically, can we ensure that the models we build are not reflecting and magnifying human biases in data?

Unfortunately, it is not always immediately possible to collect “better training data.” In this post, I give an overview of the following algorithmic approaches to mitigating bias, which I hope can be useful for the individual practitioner who wants to take action:

adversarial de-biasing of models through protection of sensitive attributes,
encoding invariant representations with semi-supervised, variational “fair” autoencoders,
dynamic upsampling of training data based on learned latent representations, and
preventing disparity amplification through distributionally robust optimization.
I do my best to link to libraries/code/tutorials/resources throughout, but if you’re looking to dive right in with code, the AI Fairness 360 toolkit looks like a decent place to start. Meanwhile, let’s get started with the math :).

I. Adversarial De-biasing
The technique of adversarial de-biasing is currently one of the most popular techniques to combat bias. It relies on adversarial training to remove bias from latent representations learned by the model.

Let Z be some sensitive attribute that we want to prevent our algorithm from discriminating on, e.g. age or race. It is typically insufficient to simply remove Z from our training data, because it is often highly correlated with other features. What we really want is to prevent our model from learning a representation of the input that relies on Z in any substantial way. To this end, we train our model to simultaneously predict the label Y and prevent a jointly-trained adversary from predicting Z.

The intuition is as follows: if our original model produces a representation of X that primarily encodes information about Z (e.g. race), an adversarial model could easily recover and predict Z using that representation. By the contrapositive, if the adversary fails to recover any information about Z, then we must have successfully learned a representation of the input that is not substantially dependent on our protected attribute.

We can think of our model as a multi-head deep neural net with one head for predicting Y and another head for predicting Z. We backpropagate normally, except that we send back a negative signal on the head that predicts Z by using the negative gradient.
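
As a rough sketch of that multi-head set-up, the snippet below builds a shared representation with two heads and flips the adversary's gradient with a custom gradient-reversal layer in TensorFlow 2. The layer sizes and the binary Y/Z heads are illustrative assumptions, not the exact architecture from any particular paper.

import tensorflow as tf
from tensorflow.keras import layers, Model

@tf.custom_gradient
def reverse_gradient(x):
    # Identity on the forward pass, negated gradient on the backward pass.
    def grad(dy):
        return -dy
    return tf.identity(x), grad

class GradientReversal(layers.Layer):
    def call(self, x):
        return reverse_gradient(x)

def build_debiasing_model(n_features, n_hidden=64):
    inputs = layers.Input(shape=(n_features,))
    # Shared representation that should not encode the protected attribute Z.
    h = layers.Dense(n_hidden, activation='relu')(inputs)
    # Head 1: predict the task label Y from the shared representation.
    y_out = layers.Dense(1, activation='sigmoid', name='y')(h)
    # Head 2: the adversary tries to recover Z, but its gradient is flipped
    # before it reaches the shared layers, pushing them to "forget" Z.
    z_out = layers.Dense(1, activation='sigmoid', name='z')(GradientReversal()(h))
    model = Model(inputs, [y_out, z_out])
    model.compile(optimizer='adam',
                  loss={'y': 'binary_crossentropy', 'z': 'binary_crossentropy'})
    return model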

For More:
https://towardsdatascience.com/algorithmic-solutions-to-algorithmic-bias-aef59eaf6565

5
Machine Learning/ Deep Learning / Neural Network Optimization
« on: June 30, 2019, 01:24:50 PM »
This article is the third in a series of articles aimed at demystifying neural networks and outlining how to design and implement them. In this article, I will discuss the following concepts related to the optimization of neural networks:

  • Challenges with optimization
  • Momentum
  • Adaptive Learning Rates
  • Parameter Initialization
  • Batch Normalization
You can access the previous articles below. The first provides a simple introduction to the topic of neural networks, to those who are unfamiliar. The second article covers more intermediary topics such as activation functions, neural architecture, and loss functions.

These tutorials are largely based on the notes and examples from multiple classes taught at Harvard and Stanford in the computer science and data science departments.

All the code that is discussed in this and subsequent tutorials on the topics of (fully connected) neural networks will be accessible through my Neural Networks GitHub repository, which can be found at the link below.

https://github.com/mrdragonbear/Neural-Networks

Challenges with Optimization
When talking about optimization in the context of neural networks, we are discussing non-convex optimization.

Convex optimization involves a function in which there is only one optimum, corresponding to the global optimum (maximum or minimum). There is no concept of local optima for convex optimization problems, making them relatively easy to solve — these are common introductory topics in undergraduate and graduate optimization classes.

Non-convex optimization involves a function which has multiple optima, only one of which is the global optimum. Depending on the loss surface, it can be very difficult to locate the global optimum.

For a neural network, the curve or surface that we are talking about is the loss surface. Since we are trying to minimize the prediction error of the network, we are interested in finding the global minimum on this loss surface — this is the aim of neural network training.

There are multiple problems associated with this:

What is a reasonable learning rate to use? Too small a learning rate takes too long to converge, while too large a learning rate can overshoot minima and prevent the network from converging at all.
How do we avoid getting stuck in local optima? One local optimum may be surrounded by particularly steep regions of the loss surface, and it may be difficult to ‘escape’ it.
What if the loss surface morphology changes? Even if we can find the global minimum, there is no guarantee that it will remain the global minimum indefinitely. A good example of this is when training on a dataset that is not representative of the actual data distribution — when applied to new data, the loss surface will look different. This is one reason why making the training and test datasets representative of the total data distribution is of such high importance. Another good example is data whose distribution habitually changes due to its dynamic nature — for instance, user preferences for popular music or movies, which change day-to-day and month-to-month.
Fortunately, there are methods available that provide ways to tackle all of these challenges, thus mitigating their potentially negative ramifications.
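
As a small illustration of two of the remedies listed at the top of this post (momentum and adaptive learning rates), here is how they might be set up in TensorFlow 2; the hyperparameter values are placeholders, not recommendations.

import tensorflow as tf

# Decay the learning rate over time instead of committing to a single fixed value.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.96)

# SGD with momentum: accumulated velocity helps the optimizer roll through
# shallow local optima and flat regions of the loss surface.
sgd_momentum = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)

# Adaptive methods such as Adam maintain a per-parameter learning rate.
adam = tf.keras.optimizers.Adam(learning_rate=1e-3)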

Local Optima
Previously, local minima were viewed as a major problem in neural network training. Nowadays, researchers have found that when using sufficiently large neural networks, most local minima incur a low cost, and thus it is not particularly important to find the true global minimum — a local minimum with reasonably low error is acceptable.

For More:

https://towardsdatascience.com/neural-network-optimization-7ca72d4db3e0

6
Machine Learning/ Deep Learning / Transfer Learning
« on: March 16, 2019, 02:03:25 PM »
Transfer Learning is the reuse of a pre-trained model on a new problem. It is currently very popular in the field of Deep Learning because it enables you to train Deep Neural Networks with comparatively little data. This is very useful since most real-world problems typically do not have millions of labeled data points to train such complex models. This blog post is intended to give you an overview of what Transfer Learning is, how it works, why you should use it and when you can use it. It will introduce you to the different approaches of Transfer Learning and provide you with some resources on already pre-trained models.





What is it?


In Transfer Learning, the knowledge of an already trained Machine Learning model is applied to a different but related problem. For example, if you trained a simple classifier to predict whether an image contains a backpack, you could use the knowledge that the model gained during its training to recognize other objects like sunglasses.

With transfer learning, we basically try to exploit what has been learned in one task to improve generalization in another. We transfer the weights that a Network has learned at Task A to a new Task B.


The general idea is to use knowledge, that a model has learned from a task where a lot of labeled training data is available, in a new task where we don’t have a lot of data. Instead of starting the learning process from scratch, you start from patterns that have been learned from solving a related task.

Transfer Learning is mostly used in Computer Vision and Natural Language Processing Tasks like Sentiment Analysis, because of the huge amount of computational power that is needed for them.

Transfer Learning is not really a Machine Learning technique on its own. It can be seen as a ‘design methodology’ within Machine Learning, like, for example, active learning. It is also not an exclusive part or study area of Machine Learning. Nevertheless, it has become quite popular in combination with Neural Networks, since they require huge amounts of data and computational power.

How it works


For example, in computer vision, Neural Networks usually try to detect edges in their earlier layers, shapes in their middle layers and task-specific features in their later layers. With transfer learning, you use the early and middle layers and only re-train the later layers. This lets us leverage the labeled data of the task the model was initially trained on.

Let’s go back to the example of a model trained to recognize a backpack in an image, which will now be used to identify sunglasses. In its earlier layers, the model has learned to recognize objects, so we will only re-train the later layers, so that it learns what separates sunglasses from other objects.


In Transfer Learning, we try to transfer as much knowledge as possible from the previous task the model was trained on to the new task at hand. This knowledge can take various forms depending on the problem and the data. For example, it could be how models are composed, which would allow us to more easily identify novel objects.


Why is it used?


Using Transfer Learning has several benefits that we will discuss in this section. The main advantages are that you save training time, that your Neural Network performs better in most cases, and that you don’t need a lot of data.

Usually, you need a lot of data to train a Neural Network from scratch, but you don’t always have access to enough data. That is where Transfer Learning comes into play, because with it you can build a solid Machine Learning model with comparatively little training data, since the model is already pre-trained. This is especially valuable in Natural Language Processing (NLP), where expert knowledge is usually required to create large labeled datasets. You also save a lot of training time, because it can sometimes take days or even weeks to train a deep Neural Network from scratch on a complex task.

According to Demis Hassabis, the CEO of DeepMind Technologies, Transfer Learning is also one of the most promising techniques that could someday lead us to Artificial General Intelligence (AGI).

When you should use it


As is always the case in Machine Learning, it is hard to form rules that are generally applicable. But I will provide you with some guidelines.

You would typically use Transfer Learning when (a) you don’t have enough labeled training data to train your network from scratch and/or (b) there already exists a network that is pre-trained on a similar task, which is usually trained on massive amounts of data. Another case where its use would be appropriate is when Task-1 and Task-2 have the same input.

If the original model was trained using TensorFlow, you can simply restore it and re-train some layers for your task. Note that Transfer Learning only works if the features learned from the first task are general, meaning that they can be useful for another related task as well. Also, the input of the model needs to have the same size as it was initially trained with. If you don’t have that, you need to add a preprocessing step to resize your input to the needed size.

Approaches to Transfer Learning


Now we will discuss the different approaches to Transfer Learning. Note that these have different names throughout the literature, but the overall concept is mostly the same.

1. Training a Model to Reuse it
Imagine you want to solve Task A but don’t have enough data to train a Deep Neural Network. One way around this issue would be to find a related Task B, where you have an abundance of data. Then you could train a Deep Neural Network on Task B and use this model as a starting point to solve your initial Task A. Whether you have to use the whole model or only a few layers of it depends heavily on the problem you are trying to solve.

If you have the same input in both Tasks, you could maybe just reuse the model and make predictions for your new input. Alternatively, you could also just change and re-train different task-specific layers and the output layer.

2. Using a Pre-Trained Model
Approach 2 would be to use an already pre-trained model. There are a lot of these models out there, so you have to do a little bit of research. How many layers you reuse and how many you retrain depends, as already mentioned, on your problem, and it is therefore hard to form a general rule.

Keras, for example, provides nine pre-trained models that you can use for Transfer Learning, prediction, feature extraction and fine-tuning. You can find these models, along with a brief tutorial on how to use them, here.

There are also many research institutions that released models they have trained. This type of Transfer Learning is most commonly used throughout Deep Learning.
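
As a hedged sketch of this approach, the snippet below loads one of the Keras pre-trained models (Inception-v3, discussed further down), freezes its layers and adds a new classification head. The input size and the binary head are assumptions made for the example.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Load a network pre-trained on ImageNet, without its task-specific top layer.
base = tf.keras.applications.InceptionV3(include_top=False,
                                         weights='imagenet',
                                         input_shape=(299, 299, 3),
                                         pooling='avg')
base.trainable = False   # freeze the early and middle layers

# Add a new head for the task at hand, e.g. backpack vs. sunglasses.
inputs = layers.Input(shape=(299, 299, 3))
features = base(inputs, training=False)
outputs = layers.Dense(1, activation='sigmoid')(features)
model = Model(inputs, outputs)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, ...)   # train only the new head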

3. Feature Extraction
Another approach is to use Deep Learning to discover the best representation of your problem, which means finding the most important features. This approach is also known as Representation Learning, and it can often result in much better performance than can be obtained with hand-designed representations.


Most of the time in Machine Learning, features are manually hand-crafted by researchers and domain experts. Fortunately, Deep Learning can extract features automatically. Note that this does not mean that Feature Engineering and domain knowledge aren’t important anymore, because you still have to decide which features you put into your network. But Neural Networks have the ability to learn which of the features you feed into them are really important and which ones aren’t. A representation learning algorithm can discover a good combination of features within a very short timeframe, even for complex tasks which would otherwise require a lot of human effort.

The learned representation can then be used for other problems as well. You simply use the first layers to spot the right representation of features but you don’t use the output of the network because it is too task-specific. Simply feed data into your network and use one of the intermediate layers as the output layer. This layer can then be interpreted as a representation of the raw data.

This approach is mostly used in Computer Vision because it can reduce the size of your dataset, which decreases computation time and makes it more suitable for traditional algorithms as well.
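
A minimal sketch of this 'fixed feature extractor' idea, assuming a Keras pre-trained network and a random stand-in batch of images:

import numpy as np
import tensorflow as tf

# Take the pooled output of an intermediate layer instead of the final,
# task-specific output layer.
extractor = tf.keras.applications.MobileNetV2(include_top=False,
                                              weights='imagenet',
                                              pooling='avg')

images = np.random.rand(8, 224, 224, 3).astype('float32') * 255.0   # stand-in batch
images = tf.keras.applications.mobilenet_v2.preprocess_input(images)
features = extractor.predict(images)   # one feature vector per image
# `features` can now be fed into a traditional classifier such as an SVM.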

Popular Pre-Trained Models
There are some pre-trained Machine Learning models out there that have become quite popular. One of them is the Inception-v3 model, which was trained for the ImageNet “Large Visual Recognition Challenge”. In this challenge, participants had to classify images into 1,000 classes, like “Zebra”, “Dalmatian”, and “Dishwasher”.

Here you can see a very good tutorial from TensorFlow on how to retrain image classifiers.

Microsoft also offers some pre-trained models which are available for both R and Python development, through the MicrosoftML R package and the microsoftml Python package.

Other quite popular models are ResNet and AlexNet. I also encourage you to visit pretrained.ml which is a sortable and searchable compilation of pre-trained deep learning models, along with demos and code.

Summary
In this post, you have learned what Transfer Learning is and why it matters. You also discovered how it is done, along with some of its benefits. We talked about why it can reduce the size of your dataset, why it decreases training time, and why you also need less data when you use it. We discussed when it is appropriate to use Transfer Learning and what the different approaches to it are. Lastly, I provided you with a collection of models that are already pre-trained.

Source: towardsdatascience

7
What is Transfer Learning?
Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task.

Transfer Learning differs from traditional Machine Learning in that it is the use of pre-trained models that have been used for another task to jump start the development process on a new task or problem.

Transfer learning involves the concepts of a domain and a task. A domain D consists of a feature space 𝒳 and a marginal probability distribution P(X) over the feature space, where X = {x_1, …, x_n} ∈ 𝒳. For document classification with a bag-of-words representation, 𝒳 is the space of all document representations, x_i is the i-th term vector corresponding to some document, and X is the sample of documents used for training.

The benefit of Transfer Learning is that it can reduce the time it takes to develop and train a model by reusing pieces or modules of already developed models, which speeds up the training process and accelerates results.

Source: medium

8
Machine Learning/ Deep Learning / How do exactly machines learn?
« on: July 28, 2018, 11:33:10 AM »
The process flow depicted here represents how machine learning works:


There are two popular methods of machine learning, named supervised learning and unsupervised learning. It is estimated that about 70 percent of machine learning is supervised learning, while unsupervised learning accounts for roughly 10–20 percent. Other, less often used methods are semi-supervised and reinforcement learning.

Supervised Learning

This kind of learning is possible when inputs and outputs are clearly identified, and algorithms are trained using labeled examples. To understand this better, let’s consider the following example: a piece of equipment could have data points labeled F (failed) or R (runs).


The learning algorithm using supervised learning would receive a set of inputs along with the corresponding correct output to find errors. Based on these inputs, it would further modify the model accordingly. This is a form of pattern recognition, as supervised learning happens through methods like classification, regression, prediction, and gradient boosting. Supervised learning uses patterns to predict the values of the label on additional unlabeled data.

Supervised learning is more commonly used in applications where historical data predict future events, such as fraudulent credit card transactions.
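
A toy scikit-learn sketch of this idea, using synthetic data as a stand-in for labeled historical records (for example, transactions flagged as fraudulent or legitimate):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic labeled data: each row is an example, y holds the known labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                        # learn from labeled examples
print('held-out accuracy:', clf.score(X_test, y_test))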


Unsupervised Learning

Unsupervised learning, unlike supervised learning, is used with data sets that have no historical labels. An unsupervised learning algorithm explores the data to find structure on its own. This kind of learning works best for transactional data; for instance, it helps in identifying customer segments and clusters with certain attributes—this is often used in content personalization.


Popular techniques used in unsupervised learning include self-organizing maps, nearest-neighbor mapping, singular value decomposition, and k-means clustering. Online recommendations, identification of data outliers, and segmentation of text topics are all examples of unsupervised learning.
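
For example, a minimal k-means clustering sketch in scikit-learn, with synthetic unlabeled data standing in for customer records:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data: the algorithm must discover the segment structure on its own.
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])        # cluster assignment for the first ten points
print(kmeans.cluster_centers_)    # one centroid per discovered segment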


Semi-Supervised Learning

As the name suggests, semi-supervised learning is a bit of both supervised and unsupervised learning and uses both labeled and unlabeled data for training. In a typical scenario, the algorithm would use a small amount of labeled data with a large amount of unlabeled data.


This type of learning can again be used with methods such as classification, regression, and prediction. Examples of semi-supervised learning would be face and voice recognition techniques.


Reinforcement Learning

This is a bit similar to the traditional type of data analysis; the algorithm discovers through trial and error and decides which action results in greater rewards. Three major components can be identified in reinforcement learning functionality: the agent, the environment, and the actions. The agent is the learner or decision-maker, the environment includes everything that the agent interacts with, and the actions are what the agent can do.


Reinforcement learning occurs when the agent chooses actions that maximize the expected reward over a given time. This is best achieved when the agent has a good policy to follow.


Some Machine Learning Algorithms And Processes

If you’re studying machine learning, you should familiarize yourself with these common machine learning algorithms and processes: neural networks, decision trees, random forests, associations and sequence discovery, gradient boosting and bagging, support vector machines, self-organizing maps, k-means clustering, Bayesian networks, Gaussian mixture models, and more.

Other tools and processes that pair up with the best algorithms to aid in deriving the most value from big data include:

   
  • Comprehensive data quality and management
  • GUIs for building models and process flows
  • Interactive data exploration and visualization of model results
  • Comparisons of different machine learning models to quickly identify the best one
  • Automated ensemble model evaluation to identify the best performers
  • Easy model deployment so you can get repeatable, reliable results quickly
  • Integrated end-to-end platform for the automation of the data-to-decision process


Source: simplilearn

9
Abstract. Text vectorization, feature extraction and machine learning algorithms play a vital role in the field of sentiment classification. The accuracy of sentiment classification varies depending on the machine learning approach, vectorization model and feature extraction method used. This paper presents multiple evaluations, with the necessary steps needed to achieve the highest accuracy for classifying the sentiment of reviews. We apply two n-gram vectorization models, Unigram and Bigram, individually. Later on, we also apply the feature extraction method TF-IDF with Unigram and Bigram respectively. Five ensemble machine learning algorithms, namely Random Forest (RF), Extra Tree (ET), Bagging Classifier (BC), Ada Boost (ADA) and Gradient Boost (GB), are used here. The key finding of this study is which combination of vectorization model (Bigram, Unigram), feature extraction method (TF-IDF) and ensemble classifier gives the best sentiment classification performance.
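
To illustrate one of the combinations evaluated (Bigram vectorization with TF-IDF feeding a Random Forest), here is a small scikit-learn sketch; the two toy reviews are made up for the example and are not the paper's dataset.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

reviews = ['great phone, loved the camera', 'terrible battery, very disappointed']
labels  = [1, 0]   # 1 = positive sentiment, 0 = negative

model = make_pipeline(
    TfidfVectorizer(ngram_range=(2, 2)),           # Bigram + TF-IDF features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(reviews, labels)
print(model.predict(['loved the camera']))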

Details: http://iopscience.iop.org/article/10.1088/1742-6596/1060/1/012036/pdf

10
Cyber security is the practice of ensuring the integrity, confidentiality and availability (ICA) of information. It represents the ability to defend against and recover from accidents like hard drive failures or power outages, and from attacks by adversaries. The latter includes everyone from script kiddies to hackers and criminal groups capable of executing advanced persistent threats (APTs), and they pose serious threats to the enterprise. Business continuity and disaster recovery planning are every bit as critical to cyber security as application and network security.

Types of cyber security


The scope of cyber security is broad. The core areas are described below, and any good cyber security strategy should take them all into account.

Critical infrastructure

Critical infrastructure includes the cyber-physical systems that society relies on, including the electricity grid, water purification, traffic lights and hospitals. Plugging a power plant into the internet, for example, makes it vulnerable to cyber attacks. The solution for organizations responsible for critical infrastructure is to perform due diligence to understand the vulnerabilities and protect against them. Everyone else should evaluate how an attack on critical infrastructure they depend on might affect them and then develop a contingency plan.

Network security

Network security guards against unauthorized intrusion as well as malicious insiders. Ensuring network security often requires trade-offs. For example, access controls such as extra logins might be necessary, but they slow down productivity.

Tools used to monitor network security generate a lot of data — so much that valid alerts are often missed. To help better manage network security monitoring, security teams are increasingly using machine learning to flag abnormal traffic and alert to threats in real time.

Cloud security


The enterprise’s move into the cloud creates new security challenges. For example, 2017 has seen almost weekly data breaches from poorly configured cloud instances. Cloud providers are creating new security tools to help enterprise users better secure their data, but the bottom line remains: Moving to the cloud is not a panacea for performing due diligence when it comes to cyber security.

Application security

Application security (AppSec), especially web application security, has become the weakest technical point of attack, but few organizations adequately mitigate all the OWASP Top Ten web vulnerabilities. AppSec begins with secure coding practices, and should be augmented by fuzzing and penetration testing.

Rapid application development and deployment to the cloud has seen the advent of DevOps as a new discipline. DevOps teams typically prioritize business needs over security, a focus that will likely change given the proliferation of threats.

Internet of things (IoT) security

IoT refers to a wide variety of critical and non-critical cyber physical systems, like appliances, sensors, printers and security cameras. IoT devices frequently ship in an insecure state and offer little to no security patching, posing threats to not only their users, but also to others on the internet, as these devices often find themselves part of a botnet. This poses unique security challenges for both home users and society.

Types of cyber threats

Common cyber threats fall under three general categories:

Attacks on confidentiality: Stealing, or rather copying, a target's personal information is how many cyber attacks begin, including garden-variety criminal attacks like credit card fraud, identity theft, or stealing bitcoin wallets. Nation-state spies make confidentiality attacks a major portion of their work, seeking to acquire confidential information for political, military, or economic gain.

Attacks on integrity: Also known by its common name, sabotage, integrity attacks seek to corrupt, damage, or destroy information or systems, and the people who rely on them. Integrity attacks can be subtle — a typo here, a bit fiddled there — or a slash and burn campaign of sabotage against a target. Perpetrators can range from script kiddies to nation-state attackers.

Attacks on availability: Preventing a target from accessing their data is most frequently seen today in the form of ransomware and denial-of-service attacks. Ransomware encrypts a target's data and demands a ransom to decrypt it. A denial-of-service attack, typically in the form of a distributed denial-of-service (DDoS) attack, floods a network resource with requests, making it unavailable.

The following describes the means by which these attacks are carried out.

Social engineering


Attackers aren't going to hack a computer if they can hack a human instead. Socially engineered malware, often used to deliver ransomware, is the No. 1 method of attack (not a buffer overflow, misconfiguration, or advanced exploit). An end-user is tricked into running a Trojan horse program, often from a website they trust and visit often. Ongoing user education is the best countermeasure against this attack.

Phishing attacks

Sometimes the best way to steal someone's password is to trick them into revealing it. This accounts for the spectacular success of phishing. Even smart users, well-trained in security, can fall for a phishing attack. That's why the best defense is two-factor authentication (2FA) — a stolen password is worthless to an attacker without a second factor, such as a hardware security token or a soft-token authenticator app on the user's phone.

Unpatched software

It's hard to blame your enterprise if an attacker deploys a zero-day exploit against you, but failure to patch looks a lot like failure to perform due diligence. If months and years pass after disclosure of a vulnerability, and your enterprise has not applied that security patch, you open yourself to accusations of negligence. Patch, patch, patch.

Social media threats


Catfishing isn't just for the dating scene. Believable sock puppet accounts can worm their way through your LinkedIn network. If someone who knows 100 of your professional contacts strikes up a conversation about your work, are you going to think it strange? Loose lips sink ships. Expect social media espionage, of both the industrial and nation-state variety.

Advanced persistent threats

Speaking of nation-state adversaries, your enterprise has them. Don't be surprised if multiple APTs are playing hide-and-go-seek on your corporate network. If you're doing anything remotely interesting to someone, anywhere, you need to consider your security posture against sophisticated APTs. Nowhere is this more true than in the technology space, an industry rich with valuable intellectual property many criminals and nations will not scruple to steal.

Source: J.M. Porup, senior writer, CSO  (security geek since 2002)

11
IEEE Blockchain Newsletter, July 2018
 

Blockchain. The market for this fast-growing technology is expected to hit $20 billion by 2024. It has the potential to transform industries far beyond its roots in banking – supply chain, IoT, and power and energy, for example. With an eye to shaping the future of this disruptive technology, we’d like to welcome you to the inaugural issue of the quarterly IEEE Blockchain Newsletter.

Proudly sponsored by the IEEE Blockchain Initiative, this first edition of our newsletter brings you a carefully curated selection of timely, relevant insights, including real-world blockchain use cases for medical record and data sharing, expert commentary on the need for a stronger and more reliable blockchain, and new IoT access controls based on smart contracts. But before we dive into the blockchain pool, we want to introduce and thank the following editors for their diverse contributions and efforts in making the IEEE Blockchain Newsletter the best it can be:

   
  • Mohammed Atiquzzaman, Edith Kinney Gaylord Presidential Professor, School of Computer Science, University of Oklahoma, USA
  • Claire-Isabelle Carlier, Enterprise Architecture Business Analyst, Brookfield Renewable Partners, Canada
  • Kim-Kwang Raymond Choo, Cloud Technology Endowed Professorship, University of Texas at San Antonio, Senior IEEE Member, USA
  • Francisco Curbera, Director, Blockchain and Consumer Health Development, IBM Watson Health, USA
  • Mahmoud Daneshmand, Industry Professor, Stevens Institute of Technology, IEEE Life Member, USA
  • Andy Lippman, Senior Research Scientist, Massachusetts Institute of Technology (MIT), USA
  • Chengnian Long, Professor, Shanghai Jiaotong University, China
  • Qinghua Lu, Senior Research Scientist, Data 61, CSIRO, Australia
  • Ammar Rayes, Distinguished Engineer, Technology Director, Cisco Systems, USA
  • Weisong Shi, Professor, Wayne State University; IEEE Fellow, USA
  • Hong Wan, Associate Professor, School of Industrial Engineering, Purdue University, USA
  • Honggang Wang, Associate Professor, University of Massachusetts (UMass) Dartmouth, USA
  • Jiang Xiao, Associate Professor, Huazhong University of Science and Technology, China
  • Zheng Yan, Professor, Xidian University; IEEE Senior Member, China
  • Shucheng Yu, Associate Professor, Electrical and Computer Engineering, Stevens Institute of Technology, USA
  • Yan Zhang, Professor, Department of Informatics, University of Oslo; IEEE Senior Member, Norway
Though Satoshi Nakamoto's brilliant Bitcoin: A Peer-to-Peer Electronic Cash System white paper was published some 10 years ago, only recently has blockchain technology – also known as distributed ledger technology – begun attracting tangible research and application attention in academia and industry. Why? First-generation blockchain systems focused primarily on digital currencies.

With blockchain’s second generation and the rise of smart contracts, the realistic applicability of the technology across a broader landscape of domains and scenarios like supply chain management, connected health, and IoT, has captured people’s imagination. The challenge now is in demonstrating and identifying potential pitfalls, and exploring how this disruptive, decentralized technology can improve or even revolutionize existing industries, most of which, if not all, are centralized systems requiring meaningful levels of trust between participants. Today, all aspects of blockchain technologies, including interactions and implications for other emerging technologies like AI, big data, edge computing, and industrial internet, are being thoroughly revisited and evaluated. And, this is just the beginning.

IEEE has a rich library of special issues focusing on blockchain, as do many other external publications. However, the IEEE Blockchain Newsletter is the first to be solely dedicated to this innovative technology, with the goal of informing and advancing blockchain and related technologies. What can you expect to find in each issue? Reports and articles covering new blockchain designs, challenges, applications, standards development, and more. By publishing timely, concise technical articles, this newsletter delivers the practical knowledge and forward-looking vision researchers, industry practitioners, and casual observers need to fully tap into blockchain’s profound potential. This newsletter is also a springboard for fostering cross-industry research, education, and innovation.

If you’re working in the blockchain arena, why not consider contributing to the effort? We’re always on the lookout for succinct articles written at the architectural rather than the design level. While articles can detail company or organizational efforts, we ask authors to refrain from incorporating commercial agendas or direct marketing in their submissions. Submissions will be reviewed and given feedback by our editorial team. The submission deadline for our September 2018 issue is August 6, so you still have some time to get us your draft.

Thanks for checking out the first issue of the IEEE Blockchain Newsletter, and welcome to the block(chain) party. Feel free to share this newsletter with colleagues and friends. Please contact the Managing Editor at blk-newsletter@ieee.org to submit articles. We think it’s going to be a great experience, and we hope you do too.

 
Chonggang Wang
Editor-in-Chief, IEEE Blockchain Newsletter

12
Blockchain / The Blockchain a New Web 3.0?
« on: July 16, 2018, 01:23:56 PM »

13
Abstract - Open Source Software Development (OSSD) is a movement that challenges many traditional and commercial theories of software development. A group of developers, programmers, and other community members develop Open Source Software (OSS) in a collaborative manner. The community and its contributors provide great support to make the source code of the software easily understandable and modifiable. However, no significant standard model or methodology for OSSD has yet been established, and researchers are currently proposing methodologies in this area. This paper proposes a new model, OScrum, by modifying Scrum to make it applicable to OSSD. The proposed model has been constructed after analyzing the key metrics, pillars, and values of Scrum and OSSD. The model has been evaluated by comparing its implementation process with the working procedure of OSSD. The result shows that the implementation process of OScrum has a very close relationship with the process of OSSD and therefore fits well in such software development.

Details: http://ijssst.info/Vol-19/No-3/paper20.pdf

14
Machine learning is a data analytics technique that teaches machines or computers to do what comes naturally to humans and animals: learn from previous data or experience. Machine learning algorithms use computational methods to learn information directly from data without relying on a predetermined equation as a model. As the number of samples available for learning increases, the algorithms improve their performance.

Why Machine Learning Matters


Machine learning has become a key technique with the rise in big data for solving problems in areas, such as:

  • Computational finance, for credit scoring and algorithmic trading
  • Image processing and computer vision, for face recognition, motion detection, and object detection
  • Computational biology, for tumor detection, drug discovery, and DNA sequencing
  • Energy production, for price and load forecasting
  • Automotive, aerospace, and manufacturing, for predictive maintenance
  • Natural language processing, for voice recognition applications

More Data, More Questions, Better Answers

Machine learning algorithms find natural patterns in data that generate insight and help make better decisions and predictions. They are used every day to make critical decisions in medical diagnosis, stock trading, energy load forecasting, and more. For example, media sites rely on machine learning to sift through millions of options to give song or movie recommendations. Retailers use it to gain insight into their customers’ purchasing behavior.


15
Software Engineering / Understanding the American Education System
« on: July 06, 2018, 07:44:49 PM »
The American education system offers a rich field of choices for international students. There is such an array of schools, programs and locations that the choices may overwhelm students, even those from the U.S. As you begin your school search, it’s important to familiarize yourself with the American education system. Understanding the system will help you narrow your choices and develop your education plan.

The Educational Structure
PRIMARY AND SECONDARY SCHOOL
Prior to higher education, American students attend primary and secondary school for a combined total of 12 years. These years are referred to as the first through twelfth grades.
Around age six, U.S. children begin primary school, which is most commonly called “elementary school.” They attend for five or six years and then go on to secondary school.
Secondary school consists of two programs: the first is “middle school” or “junior high school” and the second program is “high school.” A diploma or certificate is awarded upon graduation from high school. After graduating high school (12th grade), U.S. students may go on to college or university. College or university study is known as “higher education.”

More: https://www.studyusa.com/en/a/58/understanding-the-americaneducation-system
