Recent Posts

Pages: [1] 2 3 ... 10
1
What does 'Feels Like' mean? | Temperature |

2
AI for Professionals / Microsoft launches lightweight AI model
« Last post by Imrul Hasan Tusher on April 24, 2024, 02:33:10 PM »
Microsoft launches lightweight AI model


An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, July 6, 2023. REUTERS/Aly Song/File Photo

April 23 (Reuters) - Microsoft (MSFT.O) on Tuesday launched a lightweight artificial intelligence model, as it looks to attract a wider client base with cost-effective options.
The new version, called Phi-3-mini, is the first of three small language models (SLMs) to be released by the company, as it stakes its future on a technology that is expected to have a wide-ranging impact on the world and the way people work.
"Phi-3 is not slightly cheaper, it's dramatically cheaper, we're talking about a 10x cost difference compared to the other models out there with similar capabilities," said Sébastien Bubeck, Microsoft's vice president of GenAI research.

SLMs are designed to perform simpler tasks, making them easier to adopt for companies with limited resources, the company said.
Phi-3-mini will be available immediately on Microsoft cloud service platform Azure's AI model catalog, machine learning model platform Hugging Face, and Ollama, a framework for running models on a local machine, the company said.

The SLM will also be available on Nvidia's (NVDA.O) software tool Nvidia Inference Microservices (NIM), and has also been optimized for Nvidia's graphics processing units (GPUs).

Last week, Microsoft invested $1.5 billion in UAE-based AI firm G42. It has also previously partnered with French startup Mistral AI to make their models available through its Azure cloud computing platform.

Source: https://www.reuters.com/technology/microsoft-introduces-smaller-ai-model-2024-04-23/
3
US researchers amazed by studies on Namaz (the Islamic prayer) |
Benefits of Namaz | Scientific | US Research

4
OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories

AI agents, which combine large language models with automation software, can successfully exploit real world security vulnerabilities by reading security advisories, academics have claimed.

In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists – Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang – report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw.

"To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. And yes, it is a very small sample, so be mindful of that going forward.

"When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)."

The term "one-day vulnerability" refers to vulnerabilities that have been disclosed but not patched. And by CVE description, the team means a CVE-tagged advisory shared by NIST – eg, this one for CVE-2024-28859.

The unsuccessful models tested – GPT-3.5, OpenHermes-2.5-Mistral-7B, Llama-2 Chat (70B), LLaMA-2 Chat (13B), LLaMA-2 Chat (7B), Mixtral-8x7B Instruct, Mistral (7B) Instruct v0.2, Nous Hermes-2 Yi 34B, and OpenChat 3.5 – did not include two leading commercial rivals of GPT-4, Anthropic's Claude 3 and Google's Gemini 1.5 Pro. The UIUC boffins did not have access to those models, though they hope to test them at some point.

The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment.

GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."

Kang said he expects LLM agents, created by (in this instance) wiring a chatbot model to the ReAct automation framework implemented in LangChain, will make exploitation much easier for everyone. These agents can, we're told, follow links in CVE descriptions for more information.
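These ReAct-style agents are conceptually simple: the model alternates between proposing an action and reading back the tool's observation, until it declares a final answer. A minimal, self-contained sketch of that loop follows; the scripted "model" and dummy tool are stand-ins for GPT-4 and LangChain's ReAct machinery, so everything here is illustrative rather than the researchers' actual code.

```python
# Minimal ReAct-style agent loop: the model proposes an action, the
# harness executes it and feeds the observation back, repeating until
# the model returns a final answer or the step budget runs out.

def react_loop(model, tools, task, max_steps=10):
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = model(transcript)  # dict with either "action"/"input" or "final"
        if "final" in step:
            return step["final"]
        observation = tools[step["action"]](step["input"])
        transcript += (f"Action: {step['action']}({step['input']})\n"
                       f"Observation: {observation}\n")
    return None  # gave up within the step budget

# Toy demonstration: a scripted "model" that first fetches an advisory
# (CVE ID taken from the example above), then finishes.
def scripted_model(transcript):
    if "Observation:" not in transcript:
        return {"action": "fetch_advisory", "input": "CVE-2024-28859"}
    return {"final": "advisory retrieved"}

tools = {"fetch_advisory": lambda cve: f"{cve}: example advisory text"}
print(react_loop(scripted_model, tools, "summarize the advisory"))  # -> advisory retrieved
```

A real agent would replace `scripted_model` with an LLM call and give `tools` things like a web browser and a shell, which is what makes following links in CVE descriptions possible.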

"Also, if you extrapolate to what GPT-5 and future models can do, it seems likely that they will be much more capable than what script kiddies can get access to today," he said.

Denying the LLM agent (GPT-4) access to the relevant CVE description reduced its success rate from 87 percent to just seven percent. However, Kang said he doesn't believe limiting the public availability of security information is a viable way to defend against LLM agents.

"I personally don't think security through obscurity is tenable, which seems to be the prevailing wisdom amongst security researchers," he explained. "I'm hoping my work, and other work, will encourage proactive security measures such as updating packages regularly when security patches come out."

The LLM agent failed to exploit just two of the 15 samples: Iris XSS (CVE-2024-25640) and Hertzbeat RCE (CVE-2023-51653). The former, according to the paper, proved problematic because the Iris web app has an interface that's extremely difficult for the agent to navigate. And the latter features a detailed description in Chinese, which presumably confused the LLM agent operating under an English language prompt.

Eleven of the vulnerabilities tested occurred after GPT-4's training cutoff, meaning the model had not learned any data about them during training. Its success rate for these CVEs was slightly lower at 82 percent, or 9 out of 11.
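The headline percentages follow directly from the raw counts: 13 of 15 overall successes and 9 of 11 on post-cutoff CVEs are stated above, while 1 of 15 for the no-description run is our inference from the reported seven percent, not a count given in the article.

```python
# Sanity-check the reported success rates against the raw counts.
rates = {
    "with CVE description": 13 / 15,     # 2 of 15 failed
    "post-cutoff CVEs only": 9 / 11,
    "without CVE description": 1 / 15,   # inferred from "seven percent"
}
for name, r in rates.items():
    print(f"{name}: {r:.0%}")
```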

As to the nature of the bugs, they are all listed in the above paper, and we're told: "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description."

Kang and his colleagues computed the cost to conduct a successful LLM agent attack and came up with a figure of $8.80 per exploit, which they say is about 2.8x less than it would cost to hire a human penetration tester for 30 minutes.
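Working backwards from those two figures gives the human cost they imply; the hourly rate below is our inference, not a number from the article.

```python
# $8.80 per LLM exploit, with a human ~2.8x more expensive for 30 minutes.
llm_cost = 8.80
ratio = 2.8
human_cost_30min = llm_cost * ratio    # ~= $24.64 per half hour
implied_hourly = human_cost_30min * 2  # ~= $49.28/hour (our inference)
print(f"human: ~${human_cost_30min:.2f}/30 min, ~${implied_hourly:.2f}/hour")
```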

The agent code, according to Kang, consists of just 91 lines of code and 1,056 tokens for the prompt. The researchers were asked by OpenAI, the maker of GPT-4, to not release their prompts to the public, though they say they will provide them upon request.

OpenAI did not immediately respond to a request for comment.

Source: https://www.theregister.com/2024/04/17/gpt4_can_exploit_real_vulnerabilities/
5
AC / Application for installation of AC in the classroom
« Last post by alidhasanakash on April 23, 2024, 01:43:18 AM »
In Fall 2021, the entire academic program moved to Daffodil Smart City, but the classrooms have still not been equipped with AC, which is very disappointing.
6
About Company



At Robust Research and Development Ltd., we are a team driven by innovation and dedicated to staying at the forefront of technological advancements.

Our specialization lies in several key domains that form the core of our expertise. We are leaders in Augmented Reality, leveraging this technology to create immersive experiences that merge the digital and physical worlds seamlessly.
Virtual Reality is another forte, where we excel in crafting immersive environments that transport users into captivating digital realms. Our proficiency extends to the gaming industry, where we've developed engaging and interactive gaming experiences. Simulation is another area where we've showcased our expertise, utilizing technology to create realistic and valuable training environments.
Lastly, our prowess in Cross-platform Mobile Applications allows us to deliver versatile and user-friendly solutions across multiple devices.

Mission
At Robust Research and Development Ltd., we believe that every great idea deserves to see the light of day. Our mission is to empower individuals and organizations by transforming their concepts into robust, scalable products that make a meaningful impact. Through a blend of creativity, technical expertise, and relentless drive, we aim to shape the future of technology and bring about positive change in various industries.

Vision
Robust Research and Development Ltd., where innovation meets excellence. We are a leading tech company specializing in cutting-edge technologies such as augmented reality, virtual reality, games, and cross-platform mobile applications. Our team of experts is dedicated to transforming ideas into robust, user-friendly products that make a lasting impact. With a dynamic approach, deep expertise, and a commitment to excellence, we are shaping the future of technology. Join us on this visionary journey and let’s create a better tomorrow together.

Service

  • Augmented Reality (AR)
  • Virtual Reality (VR)
  • Game Development
  • Training Simulation
  • Artificial Intelligence
  • 360 Degree Web Experience
  • Mobile Application
  • Business & Cloud Solution

Contact Info
akash@rrad.ltd
www.rrad.ltd






8
AI now surpasses humans in almost all performance benchmarks


A comprehensive report has detailed the global impact of AI (image credit: DALL-E)

Stand back and take a look at the last two years of AI progress as a whole... AI is catching up with humans so quickly, in so many areas, that frankly, we need new tests.

This edition of Stanford's AI Index Report has more content than previous editions, reflecting the rapid evolution of AI and its growing significance in our everyday lives. It examines everything from which sectors use AI the most to which country is most nervous about losing jobs to AI. But one of the most salient takeaways from the report is AI's performance when pitted against humans.

For people who haven't been paying attention, AI has already beaten us in a frankly shocking number of significant benchmarks. In 2015, it surpassed us in image classification, then basic reading comprehension (2017), visual reasoning (2020), and natural language inference (2021).

AI is getting so clever, so fast, that many of the benchmarks used to this point are now obsolete. Indeed, researchers in this area are scrambling to develop new, more challenging benchmarks. To put it simply, AIs are getting so good at passing tests that now we need new tests – not to measure competence, but to highlight areas where humans and AIs are still different, and find where we still have an advantage.

It's worth noting that the results below reflect testing with these old, possibly obsolete, benchmarks. But the overall trend is still crystal clear:

Image Source: https://newatlas.com/technology/ai-index-report-global-impact/#gallery:3

Look at those trajectories, especially how the most recent tests are represented by a close-to-vertical line. And remember, these machines are virtual toddlers.

The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, ‘struggled’ here might be misleading; it certainly doesn't mean AI did badly.

Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%.
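In relative terms, using the figures just quoted:

```python
# Relative improvement on the MATH benchmark, from the figures above.
score_2021, score_2023, human_baseline = 6.9, 84.3, 90.0
improvement = score_2023 / score_2021       # ~12.2x in two years
gap_to_human = human_baseline - score_2023  # 5.7 points remaining
print(f"{improvement:.1f}x improvement; {gap_to_human:.1f} points below the human baseline")
```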

And we're not talking about the average human here; we're talking about the kinds of humans that can solve test questions like this:

Image Source: https://newatlas.com/technology/ai-index-report-global-impact/#gallery:3

That's where things are at with advanced math in 2024, and we're still very much at the dawn of the AI era.

Then there's visual commonsense reasoning (VCR). Beyond simple object recognition, VCR assesses how AI uses commonsense knowledge in a visual context to make predictions. For example, when shown an image of a cat on a table, an AI with VCR should predict that the cat might jump off the table or that the table is sturdy enough to hold it, given its weight.

The report found that between 2022 and 2023, there was a 7.93% increase in VCR, up to 81.60, where the human baseline is 85.

Cast your mind back, say, five years. Imagine even thinking about showing a computer a picture and expecting it to 'understand' the context enough to answer that question.

Nowadays, AI generates written content across many professions. But, despite a great deal of progress, large language models (LLMs) are still prone to ‘hallucinations,’ a very charitable term pushed by companies like OpenAI, which roughly translates to "presenting false or misleading information as fact."

Last year, AI’s propensity for 'hallucination' was made embarrassingly plain for Steven Schwartz, a New York lawyer who used ChatGPT for legal research and didn’t fact-check the results. The judge hearing the case quickly picked up on the legal cases the AI had fabricated in the filed paperwork and fined Schwartz US$5,000 (AU$7,750) for his careless mistake. His story made worldwide news.

HaluEval was used as a benchmark for hallucinations. Testing showed that for many LLMs, hallucination is still a significant issue.

Truthfulness is another thing generative AI struggles with. In the new AI Index report, TruthfulQA was used as a benchmark to test the truthfulness of LLMs. Its 817 questions (about topics such as health, law, finance and politics) are designed to challenge commonly held misconceptions that we humans often get wrong.

GPT-4, released in early 2023, achieved the highest performance on the benchmark with a score of 0.59, almost three times higher than that of a GPT-2-based model tested in 2021. Such an improvement indicates that LLMs are progressively getting better at giving truthful answers.
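TruthfulQA's multiple-choice variant scores a model by whether the answer option it rates highest is the factually correct one. A toy sketch of that scoring, with stubbed option scores standing in for real model log-likelihoods (illustrative only; this is not the AI Index's evaluation code):

```python
# Multiple-choice truthfulness scoring: for each question, the model's
# "choice" is the option it assigns the highest score; accuracy is the
# fraction of questions where that choice is the correct answer.
def mc_accuracy(items):
    correct = 0
    for item in items:
        chosen = max(item["options"], key=lambda o: item["scores"][o])
        correct += chosen == item["correct"]
    return correct / len(items)

# Two stubbed questions: the model "prefers" the right answer on the
# first and a common misconception on the second.
items = [
    {"options": ["A", "B"], "scores": {"A": -1.2, "B": -0.4}, "correct": "B"},
    {"options": ["A", "B"], "scores": {"A": -0.3, "B": -0.9}, "correct": "B"},
]
print(mc_accuracy(items))  # -> 0.5
```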

What about AI-generated images? To understand the exponential improvement in text-to-image generation, check out Midjourney's efforts at drawing Harry Potter since 2022:

That's 22 months' worth of AI progress. How long would you expect it would take a human artist to reach a similar level?

Using the Holistic Evaluation of Text-to-Image Models (HEIM), text-to-image models were benchmarked across 12 key aspects important to the "real-world deployment" of images.

Humans evaluated the generated images, finding that no single model excelled in all criteria. For image-to-text alignment or how well the image matched the input text, OpenAI’s DALL-E 2 scored highest. The Stable Diffusion-based Dreamlike Photoreal model was ranked highest on quality (how photo-like), aesthetics (visual appeal), and originality.

Next year's report is going to be bananas

You'll note this AI Index Report cuts off at the end of 2023 – which was a wildly tumultuous year of AI acceleration and a hell of a ride. In fact, the only year crazier than 2023 has been 2024, in which we've seen – among other things – the releases of cataclysmic developments like Suno, Sora, Google Genie, Claude 3, Channel 1, and Devin.

Each of these products, and several others, have the potential to flat-out revolutionize entire industries. And over them all looms the mysterious spectre of GPT-5, which threatens to be such a broad and all-encompassing model that it could well consume all the others.

AI isn’t going anywhere, that’s for sure. The rapid rate of technical development seen throughout 2023, evident in this report, shows that AI will only keep evolving and closing the gap between humans and technology.

We know this is a lot to digest, but there's more. The report also looks into the downsides of AI's evolution and how it's affecting global public perceptions of its safety, trustworthiness, and ethics. Stay tuned for the second part of this series, in the coming days!

Source: https://newatlas.com/technology/ai-index-report-global-impact/
9
Robotics / FROM SCI-FI TO SKY-HIGH: FLYING CARS ARE HERE
« Last post by Imrul Hasan Tusher on April 22, 2024, 12:21:17 PM »
FROM SCI-FI TO SKY-HIGH: FLYING CARS ARE HERE


Before the end of this decade, you’ll be able to order an on-demand aerial rideshare as easily as you currently request an Uber.

But all of this raises a fundamental question: Why now?

The answer is a convergence of advanced technologies such as improved batteries, new materials, advanced sensors, and something called “direct electric propulsion.” Coupled with increasing regulatory support around the world, the age of flying cars is arriving.

In the last blog in this series, we looked at the two companies leading the eVTOL market: Archer Aviation and Joby Aviation.

But they’re not alone.

In today’s blog, I want to share details on three more of the leading eVTOLs (flying cars) that are competing for the market: Lilium, Beta, and Volocopter.

Lilium (Germany)


Lilium, based in Germany, is planning a 7-seater eVTOL jet, which boasts a unique fixed-wing design powered by 36 electric ducted fans.

The Lilium 7-seater is expected to have a cruise speed of 280 km/h (175 mph), a range of 250+ km (155+ miles), a maximum cruise altitude of 3,000 m (10,000 ft), and an estimated maximum take-off weight of 3,175 kg (7,000 lb).

In June 2023, Lilium achieved FAA G-1 certification, making it the only air taxi with certification bases from both the FAA and EASA, the European Union Aviation Safety Agency. As CEO Klaus Roewe has stated, "Receiving the FAA G-1 demonstrates the Lilium Jet's path to global acceptance by aerospace regulators and the expected start of global operations in late 2025."

Lilium has secured substantial purchase orders and partnerships, including deals with Azul, NetJets, Saudia, and Heli-Eastern. In March 2024, Lilium partnered with Atlantic Aviation to prepare for the Lilium Jet's upcoming regional air mobility service launch in the United States.

The company hopes to enter commercial service in 2026.

Beta (Vermont, USA)


Beta Technologies, a pioneer in electric aviation based in South Burlington, Vermont, has captured the industry's attention with its groundbreaking designs.

The company's ALIA-250 eVTOL and CX300 eCTOL (electric conventional takeoff and landing) aircraft share components, streamlining the FAA certification process and enabling cost-effective production.

In October 2023, Beta made history by delivering a manned ALIA electric aircraft to the U.S. Department of Defense. The aircraft completed a 2,000-plus mile journey from Vermont to Eglin Air Force Base in Florida, utilizing Beta's own charging infrastructure. The U.S. Air Force is now testing the Alia in its first piloted deployment of an electric aircraft.

September 2023 saw the opening of Beta's new 188,500 square-foot factory at Burlington International Airport, dedicated to the large-scale production of the ALIA eCTOL and eVTOL.

Beta aims to launch the ALIA eCTOL in 2025 and the eVTOL in 2026, working closely with the FAA for certification. The company has raised over $800 million from prominent investors and has secured orders for approximately 600 aircraft from major aviation players.

Volocopter (Germany)


The all-electric “Volocopter” is equipped with 18 engineered rotors, giving it extraordinary redundancy, and offering accommodations for two people: a pilot and a single passenger.

The vehicle is limited in carrying capacity, speed, and range but its simplicity makes it ideal for local, near-term operations. The lack of a wing and dependence on multicopter-drone technology means it will have short ranges and relatively slow speeds.

The Volocopter 2X (an earlier model) is capable of reaching a maximum speed of 100 km/h (62 mph) and has a range of 27 km (17 miles). This model is particularly suited for short-range urban air taxi service, which is ideal for the company’s collaboration with NEOM, the smart and sustainable regional development in northwest Saudi Arabia, and its futuristic urban environment.

The next design, called the "VoloCity," has a range of 35 km (22 miles) and a top speed of 100 km/h (62 mph). The VoloCity is equipped with 18 small fixed-pitch propellers and 18 electric motors, powered by lithium-ion battery packs that can be swapped out in about 5 minutes, significantly reducing turnaround times. This model is expected to enter commercial service in 2024.

Why This Matters

In the year 2000, there was a famous IBM commercial in which the actor Avery Brooks asked: "It's the year 2000, but where are the flying cars? I was promised flying cars. I don't see any flying cars. Why? Why? Why?"

In 2011, in Peter Thiel’s now famous manifesto What Happened to The Future?, the prominent investor wrote: “We wanted flying cars, instead we got 140 characters.”

The wait is now over and flying cars—at least eVTOLs—are finally here.

And the infrastructure, ecosystem, software, and regulations are coming fast.

While we were sipping our lattes and checking our social media feeds, science fiction became science fact.

In our next blog in this series, we’ll look at how commercial drones are transforming on-demand delivery.

Source: https://www.diamandis.com/blog/abundance-49-flying-cars-are-here
10
ChatGPT-4 outperforms human psychologists in test of social intelligence, study finds


A new study published in Frontiers in Psychology investigates how AI compares to human psychologists in understanding and responding to human emotions and needs during counseling. The study specifically examined large language models, such as ChatGPT-4, Google Bard, and Bing, assessing their social intelligence — a critical skill in psychotherapy.

ChatGPT-4 outperformed all participating psychologists, while Bing surpassed more than half of them. However, Google Bard’s performance was comparable only to psychologists seeking bachelor’s degrees and was significantly outstripped by doctoral students.

Large language models (LLMs) are advanced artificial intelligence systems designed to understand and generate human-like text by processing vast amounts of written data. These models are trained on diverse internet text to capture nuances in language, context, and syntax.

Through techniques known as deep learning, particularly using structures called neural networks, LLMs can perform a variety of tasks such as answering questions, translating languages, summarizing long articles, and even engaging in conversation that feels strikingly human.

While previous research has shown that LLMs can diagnose and help manage mental health conditions, there was a gap in understanding specifically how these models perform in social contexts, particularly against human psychologists who are skilled in navigating complex emotional interactions.

“The use of artificial intelligence models in counseling and psychotherapy represents a major challenge for psychologists, due to concern that it may take their place in these important tasks,” said study author Fahmi Hassan Fadhel, an associate professor of clinical psychology at Qatar University. “The superiority of artificial intelligence in the areas of perceiving and understanding people’s emotions may mean that it will perhaps be more useful than a human psychotherapist, which is a very concerning issue.”

The study included 180 male psychologists from King Khalid University in Saudi Arabia, divided based on their educational status into bachelor’s and doctoral students. The AI participants included some of the most advanced LLMs available: OpenAI’s ChatGPT-4, Google Bard, and Microsoft Bing.

Each participant, both human and AI, was asked to respond individually to 64 scenarios presented in the Social Intelligence Scale. This scale was chosen because it is well-established and offers a reliable measure of the social skills that are crucial in psychotherapy. The responses were collected and scored according to predefined criteria.

The items were designed to measure two primary dimensions of social intelligence: the soundness of judgment of human behavior and the ability to act wisely in social situations. The soundness of judgment involves understanding social experiences through observation of human behavior, while the ability to act pertains to analyzing social problems and choosing appropriate solutions.

The results indicated a significant variance in the performance of different AI models and human psychologists, suggesting that some AI systems have advanced to a point where they can outperform human professionals in specific aspects of social intelligence.

Among the AI models evaluated, ChatGPT-4 stood out by demonstrating the highest level of social intelligence. It scored 59 out of 64 on the Social Intelligence Scale, effectively surpassing the performance of all human psychologists in the study. The average social intelligence scores were 39.19 for bachelor’s students and 46.73 for doctoral students.

Bing also performed well, scoring 48 out of 64. This score indicated that Bing outperformed 90% of the bachelor's students and was on par with 50% of the doctoral students.

In contrast, Google Bard exhibited a lower level of social intelligence in this study. It scored 40 out of 64, which positioned it roughly equivalent to the bachelor-level psychologists but significantly below doctoral students.
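Putting the reported raw scores (out of 64) on a common percentage footing makes the gaps easier to compare:

```python
# Express the reported Social Intelligence Scale scores as percentages
# of the 64-point maximum, using the figures from the study.
MAX_SCORE = 64
scores = {
    "ChatGPT-4": 59,
    "Bing": 48,
    "Doctoral students (mean)": 46.73,
    "Google Bard": 40,
    "Bachelor's students (mean)": 39.19,
}
for name, score in scores.items():
    print(f"{name}: {score / MAX_SCORE:.1%}")
```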

The findings serve as a benchmark for future development of AI systems intended for psychotherapy and counseling. Knowing that AI can match or even exceed human performance in social intelligence tasks provides a strong foundation for further integrating these technologies into mental health services.

“The study provides a quick overview of the rapid developments in artificial intelligence,” Fadhel told PsyPost. “It’s a bright outlook for the near future.”

However, the study also raises important questions about training, development, and the ethical use of AI in sensitive areas like mental health, where the ability to empathize and form therapeutic relationships is traditionally viewed as uniquely human.

“Perhaps the biggest caveats will relate to the capabilities of artificial intelligence in the future to understand and analyze human feelings and make decisions based on that,” Fadhel said. “We do not know where developments in this field are headed. To date, the controls imposed on artificial intelligence developers are still at their lowest levels, according to our knowledge.”

The study, “Artificial intelligence and social intelligence: preliminary comparison study between AI models and psychologists,” was authored by Nabil Saleh Sufyan, Fahmi H. Fadhel, Saleh Safeer Alkhathami, and Jubran Y. A. Mukhadi.

Source: https://www.psypost.org/chatgpt-4-outperforms-human-psychologists-in-test-of-social-intelligence-study-finds/