Recent Posts

Pages: [1] 2 3 ... 10
1
The Miracle and Benefits of Zamzam Well Water |
Zamzam Water Miracle

2
Why Does Tawaf of the Holy Kaaba Feel So Wonderful? |
The Wondrous Signs of the Holy Kaaba

3
“Building a Personal Brand: Establishing Professional Identity”


Building a personal brand is an intentional process of crafting and communicating a unique professional identity that sets you apart in your field. It begins with a deep exploration of your skills, values, and passions, as well as a clear understanding of your target audience. Define your niche—what makes you stand out? What do you want to be known for? Once you have a solid grasp of your identity, leverage various channels to showcase it. Social media platforms, personal websites, networking events, and industry conferences are all valuable tools for broadcasting your expertise and personality. Consistency is paramount. Your messaging, visual elements, and interactions should all align with your brand identity, reinforcing it at every touchpoint. To cultivate your own brand, you first have to choose the field in which you want to establish your professional identity—whether that is software development, graphic design, textiles, or another discipline.
In the vibrant and diverse realm of the textile industry, establishing a personal brand is vital for showcasing your unique creativity, expertise, and contributions to this dynamic field. Begin by defining your niche within the textile sector.
Once you've identified your niche, craft a compelling visual identity that reflects your design aesthetic and values. Develop a professional logo, choose a cohesive color palette and typography, and create a visually captivating portfolio or website to showcase your textile creations and projects.
Utilize platforms specific to the textile industry to promote your personal brand and connect with potential clients, collaborators, and fellow textile enthusiasts. This could include participating in textile trade shows, exhibitions, and craft fairs, as well as leveraging online platforms like Etsy, Instagram, and Pinterest to showcase your work, share your creative process, and engage with your audience.
Consistency is key to building a memorable personal brand in the textile field. Ensure that your branding elements, from your portfolio to your social media profiles and marketing materials, are cohesive and aligned with your design aesthetic. Your visual identity should reflect your unique style and help you stand out in a competitive marketplace.
Establishing credibility is essential for gaining the trust and respect of clients, collaborators, and peers in the textile industry. Share your expertise through blog posts, tutorials, and workshops on textile techniques, sustainable practices, or design inspiration. Highlight your experience, education, and any awards or accolades you've received to demonstrate your expertise and commitment to excellence.
Networking is also critical for building a personal brand in the textile field. Attend textile-related events, workshops, and conferences to connect with fellow designers, manufacturers, retailers, and industry professionals. Join textile associations, online forums, and social media groups to engage in discussions, share insights, and collaborate on projects with like-minded individuals.
Lastly, continue to hone your skills, stay informed about the latest trends and innovations in the textile industry, and seek feedback from mentors and peers to continually grow and evolve your personal brand.
By leveraging your unique creativity, showcasing your expertise, building credibility, networking with peers, and staying committed to growth and innovation, you can establish a strong personal brand in the textile field that distinguishes you as a talented and influential figure in this vibrant and ever-evolving industry.


By:
Fatema Tuz Jahura
ID: 0242320014121143
Dept. of Textile Engineering
Daffodil International University
4
Hadith
« Last post by ashraful.diss on April 27, 2024, 01:59:07 PM »
Hadith


Assalamu alaikum, honored friends! How are you all? Starting today, we will read beautiful hadiths here. Today we will read hadith number one of Sahih al-Bukhari. Come, let us read the hadith:

إِنَّمَا الْأَعْمَالُ بِالنِّيَّةِ

Meaning: Indeed, all deeds depend upon intentions. (Bukhari, 1; Muslim, 1907)

From now on, we will make a sincere intention before doing any good deed. If we do good deeds with the intention of pleasing Allah, we will earn great rewards, insha'Allah!


To be continued...
5

The Command to Avoid Haram (Unlawful) Wealth:


The Holy Quran forbids acquiring and using wealth by haram (unlawful) means, and permits acquiring and using wealth by halal (lawful) means. As Allah Ta'ala says:

یٰۤاَیُّہَا النَّاسُ کُلُوۡا مِمَّا فِی الۡاَرۡضِ حَلٰلًا طَیِّبًا ۫ۖ وَّلَا تَتَّبِعُوۡا خُطُوٰتِ الشَّیۡطٰنِ ؕ اِنَّہٗ لَکُمۡ عَدُوٌّ مُّبِیۡنٌ

Meaning: O mankind! Eat of what is lawful and good on the earth, and do not follow the footsteps of Satan. Know for certain that he is an open enemy to you. (Surah Al-Baqarah, verse 168)

And in Surah An-Nahl it is said:

فَکُلُوۡا مِمَّا رَزَقَکُمُ اللّٰہُ حَلٰلًا طَیِّبًا ۪ وَّاشۡکُرُوۡا نِعۡمَتَ اللّٰہِ اِنۡ کُنۡتُمۡ اِیَّاہُ تَعۡبُدُوۡن

“Eat of the lawful and good things that Allah has provided you, and be grateful for Allah's favors, if it is truly Him you worship.” (Surah An-Nahl, verse 114)

After granting permission for the lawful, a firm command is given to avoid the unlawful, as stated in Surah Al-Baqarah (verse 188):

وَلَا تَاۡکُلُوۡۤا اَمۡوَالَکُمۡ بَیۡنَکُمۡ بِالۡبَاطِلِ وَتُدۡلُوۡا بِہَاۤ اِلَی الۡحُکَّامِ لِتَاۡکُلُوۡا فَرِیۡقًا مِّنۡ اَمۡوَالِ النَّاسِ بِالۡاِثۡمِ وَاَنۡتُمۡ تَعۡلَمُوۡنَ 

Meaning: Do not consume one another's wealth unjustly, nor present it to judges with the aim of knowingly and sinfully devouring a portion of other people's wealth. (Surah Al-Baqarah, verse 188)

To be continued...
6
Generative AI / WTF JUST HAPPENED IN AI?
« Last post by Imrul Hasan Tusher on April 27, 2024, 11:55:00 AM »
WTF JUST HAPPENED IN AI?

The 2024 Abundance Summit was the best ever. Themed "The Great AI Debate," we discussed whether digital superintelligence represents humanity's greatest hope or our gravest threat.

In this blog, I’ll summarize the key insights and revelations that came up during my discussions with Elon Musk, Eric Schmidt, Nat Friedman, Emad Mostaque, Michael Saylor, Ray Kurzweil, and Geoffrey Hinton.

Last week during a Moonshots Podcast with Salim Ismail (Founder, OpenExO), we summarized the key takeaways from last month's Abundance Summit.

Elon Musk: We are Raising AI as a Super Genius Kid

One of the most extraordinary conversations was with Elon Musk. He compared the process of creating AI to raising children. As he put it, "I think the way in which an AI or an AGI is created is very important. You grow an AGI. It's almost like raising a kid, but it's a super genius godlike kid, and it matters how you raise such a kid … My ultimate conclusion is that the best way to achieve AI safety is to grow the AI in terms of the foundation model and then fine tune it to be really truthful. Don't force it to lie even if the truth is unpleasant. That's very important. Don't make the AI lie."

I think Elon makes a good point about not forcing an AI to lie. But as Salim noted, the pace of AI development means we'll have AI smarter than us very quickly, which carries immense implications—both positive and negative.

On the positive side, it could rapidly deliver abundance. But on the negative side, AI can be used by malevolent individuals to cause great harm, or be programmed with goals that are misaligned with those best for humanity.

Is AI Our Greatest Hope or Gravest Threat?

During my conversation with Elon, I pushed him on his views regarding humanity's future with digital superintelligence. He estimated a 10% to 20% probability of a dystopian outcome where superintelligent AI ends humanity.

Others like Ray Kurzweil and Salim are more optimistic, putting the odds of devastating negative effects from AI in the 1% range. Salim put it this way, "The AI genie is out of the bottle and containment is no longer an option. The smartest hacker in the room is the AI itself. Our job is to raise it well, like Elon suggested, making sure that we are giving birth to a Superman rather than a super villain."

Eric Schmidt: AI Containment & Regulation

The topic of AI containment and regulation also came up during my discussion with Eric Schmidt. Some in the AI community are frustrated with OpenAI's Sam Altman for releasing models publicly and then suggesting to governments that regulation is needed, when most experts agree effective containment or regulation is not feasible at this stage.

As Salim noted, the key is to help AIs become as conscious as possible—as soon as possible. The more expansive an AI's awareness and modeling of the needs of all life on Earth, the more likely we’ll have a positive outcome. We must point them towards a future of abundance and flourishing for all.

Mike Saylor: Bitcoin Won’t Fail

At the Summit, I had a 90-minute fireside conversation with my MIT fraternity brother Mike Saylor, CEO of MicroStrategy (the largest corporate Bitcoin holder). Mike recounted how he convinced his board of directors to put the company's entire treasury into Bitcoin in 2020.

Since then, MicroStrategy has been the fastest growing stock alongside NVIDIA. As Salim observed, "The more anybody understands Bitcoin, the more they believe in it." When one of the Abundance Summit members asked Mike if Bitcoin could ever fail, he was resolute: "As long as the world doesn't plunge into some Orwellian, no property rights situation, I think we're good."

Mike Saylor: Bitcoin Equals Freedom

One of the most memorable moments was when I asked Mike to elaborate on the idea that Bitcoin equals freedom. He said, "My view on Bitcoin is the reason to do it is because it represents freedom and self-sovereignty, truth, integrity, and hope for the world."

During my Moonshots Podcast, Salim put it poetically, "Web2 is being your own boss. Web3 is being your own bank." For the first time, we have a decentralized store of value that can't be tampered with by middlemen. That represents an unbelievable leap in independence and self-sovereignty.

Nat Friedman: The Discovery of “AI Atlantis”

The AI portion of the Summit kicked off with two extraordinary leaders: Nat Friedman, former CEO of GitHub, and Emad Mostaque, who recently stepped down as CEO of Stability AI to focus on bigger-picture issues around AI governance and decentralization.

Nat Friedman’s most memorable statement was the following: “We have just discovered a new continent—AI Atlantis—where 100 billion virtual graduate students are willing to work for FREE for anyone for just a few watts of power."

Emad Mostaque: “Today is the Worst That AI Will Ever Be”

Emad is now laser-focused on how AI can disrupt healthcare and education. We discussed how AI will soon be capable of groundbreaking advances in physics, biotech, and materials science by mining open-source databases. Crucially, AI can also help address the replication crisis in scientific research.

Emad made the insightful observation that "today is the worst that AI will ever be." While it may seem like huge sums are going into AI right now, he noted that even more money was spent on the San Francisco Railway. We're truly still in the early days with immense room for growth.

Ray Kurzweil: A Few Visionary Predictions

Next, we were joined by the visionary Ray Kurzweil, Salim's and my longtime mentor and colleague. Back in 1999, Ray predicted that we'd have human-level AI by 2029. At the time, most experts scoffed, insisting it was 50 to 100 years away.

No one's laughing now.

As Salim quipped, "Ray has that unbelievable ability to make ridiculous projections that turn out to be mostly true." His track record of accurate technological forecasts is an astonishing 86%. If Ray is right, we are on pace to reach "longevity escape velocity" by 2029, where each year of life leads to more than an additional year of life expectancy thanks largely to AI-driven health tech.

We've already been adding about 4 months to average lifespans per year over the past century. With the exponential progress in stem cells, gene therapies, organ regeneration, and CRISPR, we may soon hit an inflection point of adding more than a year per calendar year—enabling indefinite lifespans.

Imagining a future where death is optional is mind-boggling. As Salim observed, "We've been birthed for death for the entire history of humanity and every species on Earth ... really, really hard to conceive of the implications of that."

Ray also painted a vision of the future with high-bandwidth brain-computer interfaces (BCI) connecting our neocortices to the cloud. Imagine having Google in your head! Even wilder is the prospect Salim described of meshing our minds together into a "hive consciousness." In my book The Future is Faster Than You Think, I refer to this emergence as a “Meta-Intelligence.”

Geoffrey Hinton: Machine Consciousness is Coming

Finally, we were joined by "godfather of AI" Geoffrey Hinton to discuss machine consciousness. Will AIs eventually become conscious in a way we recognize? Geoffrey and I both believe the answer is yes.

Salim also agrees, noting that while we lack a clear definition and test for machine consciousness, there's no principled reason why we couldn't replicate the core ingredients of human consciousness in silicon rather than carbon. He pointed to the android character Data from Star Trek as a good model for what we may eventually create.

Final Thoughts

Undoubtedly, we are living through the most extraordinary time in human history.

While there's a range of opinions on the timeline to AGI, from Elon's 1 to 2 years to Hinton's 10 to 20 years, there's broad agreement that the destination is locked in and approaching fast.

Along the way, there will be bumps in the road, but I'm tremendously optimistic that the future we're racing towards is one of unimaginable flourishing and abundance.

Source: https://www.diamandis.com/blog/wtf-just-happened-in-ai
7
What Does 'Feels Like' Mean? | Temperature |

8
AI for Professionals / Microsoft launches lightweight AI model
« Last post by Imrul Hasan Tusher on April 24, 2024, 02:33:10 PM »
Microsoft launches lightweight AI model


An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, July 6, 2023. REUTERS/Aly Song/File Photo

April 23 (Reuters) - Microsoft (MSFT.O) on Tuesday launched a lightweight artificial intelligence model, as it looks to attract a wider client base with cost-effective options.
The new version, called Phi-3-mini, is the first of three small language models (SLMs) to be released by the company, as it stakes its future on a technology that is expected to have a wide-ranging impact on the world and the way people work.
"Phi-3 is not slightly cheaper, it's dramatically cheaper, we're talking about a 10x cost difference compared to the other models out there with similar capabilities," said Sébastien Bubeck, Microsoft's vice president of GenAI research.

SLMs are designed to perform simpler tasks, making them easier to use for companies with limited resources, the company said.
Phi-3-mini will be available immediately on the AI model catalog of Microsoft's cloud service platform Azure, on machine learning model platform Hugging Face, and on Ollama, a framework for running models on a local machine, the company said.

The SLM will also be available on Nvidia's (NVDA.O) software tool Nvidia Inference Microservices (NIM) and has also been optimized for Nvidia's graphics processing units (GPUs).

Last week, Microsoft invested $1.5 billion in UAE-based AI firm G42. It has also previously partnered with French startup Mistral AI to make their models available through its Azure cloud computing platform.

Source: https://www.reuters.com/technology/microsoft-introduces-smaller-ai-model-2024-04-23/
9
US Researchers Amazed by Studies on Namaz (Islamic Prayer) |
Benefits of Namaz | Scientific | US Research

10
OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories

AI agents, which combine large language models with automation software, can successfully exploit real world security vulnerabilities by reading security advisories, academics have claimed.

In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists – Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang – report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw.

"To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. And yes, it is a very small sample, so be mindful of that going forward.

"When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)."

The term "one-day vulnerability" refers to vulnerabilities that have been disclosed but not patched. And by CVE description, the team means a CVE-tagged advisory shared by NIST – eg, this one for CVE-2024-28859.

The unsuccessful models tested – GPT-3.5, OpenHermes-2.5-Mistral-7B, Llama-2 Chat (70B), LLaMA-2 Chat (13B), LLaMA-2 Chat (7B), Mixtral-8x7B Instruct, Mistral (7B) Instruct v0.2, Nous Hermes-2 Yi 34B, and OpenChat 3.5 – did not include two leading commercial rivals of GPT-4, Anthropic's Claude 3 and Google's Gemini 1.5 Pro. The UIUC boffins did not have access to those models, though they hope to test them at some point.

The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment.

GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."

Kang said he expects LLM agents, created by (in this instance) wiring a chatbot model to the ReAct automation framework implemented in LangChain, will make exploitation much easier for everyone. These agents can, we're told, follow links in CVE descriptions for more information.
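The ReAct pattern Kang describes is simple at its core: the model alternates proposed actions with tool observations until it declares itself done. Below is a minimal sketch of that loop in plain Python — the stub model, tool names, and prompt format are illustrative assumptions, not the UIUC agent or the LangChain API:

```python
# Minimal sketch of a ReAct-style agent loop (the pattern the UIUC agent
# reportedly uses via LangChain). `model` here is a stand-in stub, not GPT-4;
# tool names and message formats are illustrative only.

def react_loop(model, tools, task, max_steps=5):
    """Alternate model 'Action' proposals with tool 'Observation' results."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = model(transcript)            # model proposes the next step
        transcript += reply + "\n"
        if reply.startswith("Final:"):       # model declares it is done
            return reply[len("Final:"):].strip()
        if reply.startswith("Action:"):      # e.g. "Action: fetch <url>"
            name, _, arg = reply[len("Action:"):].strip().partition(" ")
            result = tools.get(name, lambda a: "unknown tool")(arg)
            transcript += f"Observation: {result}\n"
    return None

# Stub model that "reads an advisory" then stops; a real agent substitutes
# an LLM call here.
def stub_model(transcript):
    if "Observation:" in transcript:
        return "Final: advisory summarised"
    return "Action: fetch https://example.org/advisory"

tools = {"fetch": lambda url: f"(contents of {url})"}
print(react_loop(stub_model, tools, "summarise the advisory"))
# -> advisory summarised
```

The ability to follow links mentioned in the article corresponds to the `fetch` tool here: each observation is appended to the transcript, so the model can request further pages based on what it has read.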

"Also, if you extrapolate to what GPT-5 and future models can do, it seems likely that they will be much more capable than what script kiddies can get access to today," he said.

Denying the LLM agent (GPT-4) access to the relevant CVE description reduced its success rate from 87 percent to just seven percent. However, Kang said he doesn't believe limiting the public availability of security information is a viable way to defend against LLM agents.

"I personally don't think security through obscurity is tenable, which seems to be the prevailing wisdom amongst security researchers," he explained. "I'm hoping my work, and other work, will encourage proactive security measures such as updating packages regularly when security patches come out."

The LLM agent failed to exploit just two of the 15 samples: Iris XSS (CVE-2024-25640) and Hertzbeat RCE (CVE-2023-51653). The former, according to the paper, proved problematic because the Iris web app has an interface that's extremely difficult for the agent to navigate. And the latter features a detailed description in Chinese, which presumably confused the LLM agent operating under an English language prompt.

Eleven of the vulnerabilities tested occurred after GPT-4's training cutoff, meaning the model had not learned any data about them during training. Its success rate for these CVEs was slightly lower at 82 percent, or 9 out of 11.
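The quoted percentages are simply these small-sample counts rounded; the 13-of-15 and 9-of-11 figures come straight from the article, while reading the 7 percent figure back to roughly 1 of 15 is our inference:

```python
# Reproduce the article's rounded success rates from its raw counts.
total = 15            # one-day CVEs in the dataset
exploited = 13        # all but Iris XSS and Hertzbeat RCE

assert round(exploited / total * 100) == 87    # "87 percent"

post_cutoff = 11      # CVEs disclosed after GPT-4's training cutoff
assert round(9 / post_cutoff * 100) == 82      # "82 percent, or 9 out of 11"

# The 7 percent without-description figure corresponds to roughly 1 of 15:
assert round(1 / total * 100) == 7
print("all figures check out")
```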

As to the nature of the bugs, they are all listed in the above paper, and we're told: "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description."

Kang and his colleagues computed the cost to conduct a successful LLM agent attack and came up with a figure of $8.80 per exploit, which they say is about 2.8x less than it would cost to hire a human penetration tester for 30 minutes.
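The 2.8x figure implies a human benchmark of about $25 for the 30-minute engagement; the ~$50/hour pentester rate below is our illustrative assumption, not a number given in the article:

```python
llm_cost = 8.80                 # per successful exploit, from the article
hourly_rate = 50.0              # assumed pentester rate (illustrative only)
human_cost = hourly_rate / 2    # 30 minutes of a pentester's time

assert round(human_cost / llm_cost, 1) == 2.8   # matches the quoted ~2.8x
```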

The agent code, according to Kang, consists of just 91 lines of code and 1,056 tokens for the prompt. The researchers were asked by OpenAI, the maker of GPT-4, to not release their prompts to the public, though they say they will provide them upon request.

OpenAI did not immediately respond to a request for comment. ®

Source: https://www.theregister.com/2024/04/17/gpt4_can_exploit_real_vulnerabilities/