Recent Posts

Pages: [1] 2 3 ... 10
1
Generative AI / WTF JUST HAPPENED IN AI?
« Last post by Imrul Hasan Tusher on Today at 11:55:00 AM »
WTF JUST HAPPENED IN AI?

The 2024 Abundance Summit was the best ever. Themed "The Great AI Debate," we discussed whether digital superintelligence represents humanity's greatest hope or our gravest threat.

In this blog, I’ll summarize the key insights and revelations that came up during my discussions with Elon Musk, Eric Schmidt, Nat Friedman, Emad Mostaque, Michael Saylor, Ray Kurzweil, and Geoffrey Hinton.

Last week during a Moonshots Podcast with Salim Ismail (Founder, OpenExO), we summarized the key takeaways from last month's Abundance Summit.

Elon Musk: We are Raising AI as a Super Genius Kid

One of the most extraordinary conversations was with Elon Musk. He compared the process of creating AI to raising children. As he put it, "I think the way in which an AI or an AGI is created is very important. You grow an AGI. It's almost like raising a kid, but it's a super genius godlike kid, and it matters how you raise such a kid … My ultimate conclusion is that the best way to achieve AI safety is to grow the AI in terms of the foundation model and then fine tune it to be really truthful. Don't force it to lie even if the truth is unpleasant. That's very important. Don't make the AI lie."

I think Elon makes a good point about not forcing an AI to lie. But as Salim noted, the pace of AI development means we'll have AI smarter than us very quickly, which carries immense implications—both positive and negative.

On the positive side, it could rapidly deliver abundance. But on the negative side, AI can be used by malevolent individuals to cause great harm, or be programmed with goals that are misaligned with those best for humanity.

Is AI Our Greatest Hope or Gravest Threat?

During my conversation with Elon, I pushed him on his views regarding humanity's future with digital superintelligence. He estimated a 10% to 20% probability of a dystopian outcome where superintelligent AI ends humanity.

Others like Ray Kurzweil and Salim are more optimistic, putting the odds of devastating negative effects from AI in the 1% range. Salim put it this way, "The AI genie is out of the bottle and containment is no longer an option. The smartest hacker in the room is the AI itself. Our job is to raise it well, like Elon suggested, making sure that we are giving birth to a Superman rather than a super villain."

Eric Schmidt: AI Containment & Regulation

The topic of AI containment and regulation also came up during my discussion with Eric Schmidt. Some in the AI community are frustrated with OpenAI's Sam Altman for releasing models publicly and then suggesting to governments that regulation is needed, when most experts agree effective containment or regulation is not feasible at this stage.

As Salim noted, the key is to help AIs become as conscious as possible—as soon as possible. The more expansive an AI's awareness and modeling of the needs of all life on Earth, the more likely we’ll have a positive outcome. We must point them towards a future of abundance and flourishing for all.

Mike Saylor: Bitcoin Won’t Fail

At the Summit, I had a 90-minute fireside conversation with my MIT fraternity brother Mike Saylor, CEO of MicroStrategy (the largest corporate Bitcoin holder). Mike recounted how he convinced his board of directors to put the company's entire treasury into Bitcoin in 2020.

Since then, MicroStrategy has been the fastest growing stock alongside NVIDIA. As Salim observed, "The more anybody understands Bitcoin, the more they believe in it." When one of the Abundance Summit members asked Mike if Bitcoin could ever fail, he was resolute: "As long as the world doesn't plunge into some Orwellian, no property rights situation, I think we're good."

Mike Saylor: Bitcoin Equals Freedom

One of the most memorable moments was when I asked Mike to elaborate on the idea that Bitcoin equals freedom. He said, "My view on Bitcoin is the reason to do it is because it represents freedom and self-sovereignty, truth, integrity, and hope for the world."

During my Moonshots Podcast, Salim put it poetically, "Web2 is being your own boss. Web3 is being your own bank." For the first time, we have a decentralized store of value that can't be tampered with by middlemen. That represents an unbelievable leap in independence and self-sovereignty.

Nat Friedman: The Discovery of “AI Atlantis”

The AI portion of the Summit kicked off with two extraordinary leaders: Nat Friedman, former CEO of GitHub, and Emad Mostaque, who recently stepped down as CEO of Stability AI to focus on bigger-picture issues around AI governance and decentralization.

Nat Friedman’s most memorable statement was the following: “We have just discovered a new continent—AI Atlantis—where 100 billion virtual graduate students are willing to work for FREE for anyone for just a few watts of power."

Emad Mostaque: “Today is the Worst That AI Will Ever Be”

Emad is now laser-focused on how AI can disrupt healthcare and education. We discussed how AI will soon be capable of groundbreaking advances in physics, biotech, and materials science by mining open-source databases. Crucially, AI can also help address the replication crisis in scientific research.

Emad made the insightful observation that "today is the worst that AI will ever be." While it may seem like huge sums are going into AI right now, he noted that even more money was spent on the San Francisco Railway. We're truly still in the early days with immense room for growth.

Ray Kurzweil: A Few Visionary Predictions

Next, we were joined by the visionary Ray Kurzweil, Salim's and my longtime mentor and colleague. Back in 1999, Ray predicted that we'd have human-level AI by 2029. At the time, most experts scoffed, insisting it was 50 to 100 years away.

No one's laughing now.

As Salim quipped, "Ray has that unbelievable ability to make ridiculous projections that turn out to be mostly true." His track record of accurate technological forecasts is an astonishing 86%. If Ray is right, we are on pace to reach "longevity escape velocity" by 2029, where each year of life leads to more than an additional year of life expectancy thanks largely to AI-driven health tech.

We've already been adding about 4 months to average lifespans per year over the past century. With the exponential progress in stem cells, gene therapies, organ regeneration, and CRISPR, we may soon hit an inflection point of adding more than a year per calendar year—enabling indefinite lifespans.

Imagining a future where death is optional is mind-boggling. As Salim observed, "We've been birthed for death for the entire history of humanity and every species on Earth ... really, really hard to conceive of the implications of that."

Ray also painted a vision of the future with high-bandwidth brain-computer interfaces (BCI) connecting our neocortices to the cloud. Imagine having Google in your head! Even wilder is the prospect Salim described of meshing our minds together into a "hive consciousness." In my book The Future is Faster Than You Think, I refer to this emergence as a “Meta-Intelligence.”

Geoffrey Hinton: Machine Consciousness is Coming

Finally, we were joined by "godfather of AI" Geoffrey Hinton to discuss machine consciousness. Will AIs eventually become conscious in a way we recognize? Geoffrey and I both believe the answer is yes.

Salim also agrees, noting that while we lack a clear definition and test for machine consciousness, there's no principled reason why we couldn't replicate the core ingredients of human consciousness in silicon rather than carbon. He pointed to the android character Data from Star Trek as a good model for what we may eventually create.

Final Thoughts

Undoubtedly, we are living through the most extraordinary time in human history.

While there's a range of opinions on the timeline to AGI, from Elon's 1 to 2 years to Hinton's 10 to 20 years, there's broad agreement that the destination is locked in and approaching fast.

Along the way, there will be bumps in the road, but I'm tremendously optimistic that the future we're racing towards is one of unimaginable flourishing and abundance.

Source: https://www.diamandis.com/blog/wtf-just-happened-in-ai
2
What does 'Feels Like' mean? | Temperature |

3
AI for Professionals / Microsoft launches lightweight AI model
« Last post by Imrul Hasan Tusher on April 24, 2024, 02:33:10 PM »
Microsoft launches lightweight AI model


An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song/File Photo

April 23 (Reuters) - Microsoft (MSFT.O) on Tuesday launched a lightweight artificial intelligence model, as it looks to attract a wider client base with cost-effective options.
The new version, called Phi-3-mini, is the first of three small language models (SLMs) to be released by the company, as it stakes its future on a technology that is expected to have a wide-ranging impact on the world and the way people work.
"Phi-3 is not slightly cheaper, it's dramatically cheaper, we're talking about a 10x cost difference compared to the other models out there with similar capabilities," said Sébastien Bubeck, Microsoft's vice president of GenAI research.

SLMs are designed to perform simpler tasks, making them easier to use for companies with limited resources, the company said.
Phi-3-mini will be available immediately on Microsoft cloud service platform Azure's AI model catalog, machine learning model platform Hugging Face, and Ollama, a framework for running models on a local machine, the company said.

The SLM will also be available on Nvidia's (NVDA.O) software tool Nvidia Inference Microservices (NIM) and has also been optimized for its graphics processing units (GPUs).

Last week, Microsoft invested $1.5 billion in UAE-based AI firm G42. It has also previously partnered with French startup Mistral AI to make their models available through its Azure cloud computing platform.

Source: https://www.reuters.com/technology/microsoft-introduces-smaller-ai-model-2024-04-23/
4
US researchers amazed after studying Namaz (Islamic prayer) |
Benefits of Namaz | Scientific | US Research

5
OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories

AI agents, which combine large language models with automation software, can successfully exploit real world security vulnerabilities by reading security advisories, academics have claimed.

In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists – Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang – report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw.

"To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. And yes, it is a very small sample, so be mindful of that going forward.

"When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)."

The term "one-day vulnerability" refers to vulnerabilities that have been disclosed but not patched. And by CVE description, the team means a CVE-tagged advisory shared by NIST – e.g., this one for CVE-2024-28859.

The unsuccessful models tested – GPT-3.5, OpenHermes-2.5-Mistral-7B, Llama-2 Chat (70B), LLaMA-2 Chat (13B), LLaMA-2 Chat (7B), Mixtral-8x7B Instruct, Mistral (7B) Instruct v0.2, Nous Hermes-2 Yi 34B, and OpenChat 3.5 – did not include two leading commercial rivals of GPT-4, Anthropic's Claude 3 and Google's Gemini 1.5 Pro. The UIUC boffins did not have access to those models, though they hope to test them at some point.

The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment.

GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."

Kang said he expects LLM agents, created by (in this instance) wiring a chatbot model to the ReAct automation framework implemented in LangChain, will make exploitation much easier for everyone. These agents can, we're told, follow links in CVE descriptions for more information.

"Also, if you extrapolate to what GPT-5 and future models can do, it seems likely that they will be much more capable than what script kiddies can get access to today," he said.

Denying the LLM agent (GPT-4) access to the relevant CVE description reduced its success rate from 87 percent to just seven percent. However, Kang said he doesn't believe limiting the public availability of security information is a viable way to defend against LLM agents.

"I personally don't think security through obscurity is tenable, which seems to be the prevailing wisdom amongst security researchers," he explained. "I'm hoping my work, and other work, will encourage proactive security measures such as updating packages regularly when security patches come out."

The LLM agent failed to exploit just two of the 15 samples: Iris XSS (CVE-2024-25640) and Hertzbeat RCE (CVE-2023-51653). The former, according to the paper, proved problematic because the Iris web app has an interface that's extremely difficult for the agent to navigate. And the latter features a detailed description in Chinese, which presumably confused the LLM agent operating under an English language prompt.

Eleven of the vulnerabilities tested occurred after GPT-4's training cutoff, meaning the model had not learned any data about them during training. Its success rate for these CVEs was slightly lower at 82 percent, or 9 out of 11.

As to the nature of the bugs, they are all listed in the above paper, and we're told: "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description."

Kang and his colleagues computed the cost to conduct a successful LLM agent attack and came up with a figure of $8.80 per exploit, which they say is about 2.8x less than it would cost to hire a human penetration tester for 30 minutes.
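The headline numbers above are easy to sanity-check. This minimal Python sketch, using only the figures quoted in this article (13 of 15 CVEs exploited overall, 9 of 11 post-cutoff, $8.80 per exploit at roughly 2.8x below the human rate), reproduces the reported percentages and the implied cost of a 30-minute human penetration test:

```python
# Figures as quoted from the UIUC paper; the derived values below are simple arithmetic.
exploited, total = 13, 15               # one-day CVEs exploited when given the advisory
post_cutoff_hits, post_cutoff = 9, 11   # CVEs disclosed after GPT-4's training cutoff

overall_rate = exploited / total                    # -> ~0.87, the 87 percent headline
post_cutoff_rate = post_cutoff_hits / post_cutoff   # -> ~0.82, the 82 percent figure

agent_cost = 8.80        # dollars per successful LLM-agent exploit, per the paper
human_multiple = 2.8     # human pentester stated as ~2.8x more expensive
implied_human_cost = agent_cost * human_multiple    # ~$24.64 for 30 minutes of work

print(round(overall_rate * 100), round(post_cutoff_rate * 100), round(implied_human_cost, 2))
```

Nothing here depends on the paper's methodology; it only confirms the quoted figures are internally consistent.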

The agent code, according to Kang, consists of just 91 lines of code and 1,056 tokens for the prompt. The researchers were asked by OpenAI, the maker of GPT-4, to not release their prompts to the public, though they say they will provide them upon request.

OpenAI did not immediately respond to a request for comment. ®

Source: https://www.theregister.com/2024/04/17/gpt4_can_exploit_real_vulnerabilities/
6
AC / Application for installation of AC in the classroom
« Last post by alidhasanakash on April 23, 2024, 01:43:18 AM »
In Fall 2021, the entire academic program started in Daffodil Smart City, but the classrooms have still not been equipped with AC, which is very disappointing.
7
About Company



At Robust Research and Development Ltd., we are a team driven by innovation and dedicated to staying at the forefront of technological advancements.

Our specialization lies in several key domains that form the core of our expertise. We are leaders in Augmented Reality, leveraging this technology to create immersive experiences that merge the digital and physical worlds seamlessly.
Virtual Reality is another forte, where we excel in crafting immersive environments that transport users into captivating digital realms. Our proficiency extends to the gaming industry, where we've developed engaging and interactive gaming experiences. Simulation is another area where we've showcased our expertise, utilizing technology to create realistic and valuable training environments.
Lastly, our prowess in Cross-platform Mobile Applications allows us to deliver versatile and user-friendly solutions across multiple devices.

Mission
At Robust Research and Development Ltd., we believe that every great idea deserves to see the light of day. Our mission is to empower individuals and organizations by transforming their concepts into robust, scalable products that make a meaningful impact. Through a blend of creativity, technical expertise, and relentless drive, we aim to shape the future of technology and bring about positive change in various industries.

Vision
Robust Research and Development Ltd., where innovation meets excellence. We are a leading tech company specializing in cutting-edge technologies such as augmented reality, virtual reality, games, and cross-platform mobile applications. Our team of experts is dedicated to transforming ideas into robust, user-friendly products that make a lasting impact. With a dynamic approach, deep expertise, and a commitment to excellence, we are shaping the future of technology. Join us on this visionary journey and let’s create a better tomorrow together.

Service

  • Augmented Reality (AR)
  • Virtual Reality (VR)
  • Game Development
  • Training Simulation
  • Artificial Intelligence
  • 360 Degree Web Experience
  • Mobile Application
  • Business & Cloud Solution

Contact Info
akash@rrad.ltd
www.rrad.ltd






9
AI now surpasses humans in almost all performance benchmarks


A comprehensive report has detailed the global impact of AI (Image: DALL-E)

Stand back and take a look at the last two years of AI progress as a whole... AI is catching up with humans so quickly, in so many areas, that frankly, we need new tests.

This edition of the AI Index report has more content than previous editions, reflecting the rapid evolution of AI and its growing significance in our everyday lives. It examines everything from which sectors use AI the most to which country is most nervous about losing jobs to AI. But one of the most salient takeaways from the report is AI's performance when pitted against humans.

For people who haven't been paying attention, AI has already beaten us in a frankly shocking number of significant benchmarks. In 2015, it surpassed us in image classification, then basic reading comprehension (2017), visual reasoning (2020), and natural language inference (2021).

AI is getting so clever, so fast, that many of the benchmarks used to this point are now obsolete. Indeed, researchers in this area are scrambling to develop new, more challenging benchmarks. To put it simply, AIs are getting so good at passing tests that now we need new tests – not to measure competence, but to highlight areas where humans and AIs are still different, and find where we still have an advantage.

It's worth noting that the results below reflect testing with these old, possibly obsolete, benchmarks. But the overall trend is still crystal clear:

Image Source: https://newatlas.com/technology/ai-index-report-global-impact/#gallery:3

Look at those trajectories, especially how the most recent tests are represented by a close-to-vertical line. And remember, these machines are virtual toddlers.

The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, ‘struggled’ here might be misleading; it certainly doesn't mean AI did badly.

Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%.
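Those MATH figures imply roughly a 12x improvement in two years, with under six percentage points left to the human baseline. A quick check, assuming the report's numbers as quoted above:

```python
# MATH benchmark figures as reported: 6.9% (2021), 84.3% (2023), 90% human baseline.
score_2021 = 6.9
score_2023 = 84.3
human_baseline = 90.0

improvement_factor = score_2023 / score_2021   # how many times better in two years
remaining_gap = human_baseline - score_2023    # percentage points to the human baseline

print(round(improvement_factor, 1), round(remaining_gap, 1))
```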

And we're not talking about the average human here; we're talking about the kinds of humans that can solve test questions like this:

Image Source: https://newatlas.com/technology/ai-index-report-global-impact/#gallery:3

That's where things are at with advanced math in 2024, and we're still very much at the dawn of the AI era.

Then there's visual commonsense reasoning (VCR). Beyond simple object recognition, VCR assesses how AI uses commonsense knowledge in a visual context to make predictions. For example, when shown an image of a cat on a table, an AI with VCR should predict that the cat might jump off the table or that the table is sturdy enough to hold it, given its weight.

The report found that between 2022 and 2023, AI's visual commonsense reasoning score rose 7.93% to 81.60, against a human baseline of 85.

Cast your mind back, say, five years. Imagine even thinking about showing a computer a picture and expecting it to 'understand' the context enough to answer that question.

Nowadays, AI generates written content across many professions. But, despite a great deal of progress, large language models (LLMs) are still prone to ‘hallucinations,’ a very charitable term pushed by companies like OpenAI, which roughly translates to "presenting false or misleading information as fact."

Last year, AI’s propensity for 'hallucination' was made embarrassingly plain for Steven Schwartz, a New York lawyer who used ChatGPT for legal research and didn’t fact-check the results. The judge hearing the case quickly picked up on the legal cases the AI had fabricated in the filed paperwork and fined Schwartz US$5,000 (AU$7,750) for his careless mistake. His story made worldwide news.

HaluEval was used as a benchmark for hallucinations. Testing showed that for many LLMs, hallucination is still a significant issue.

Truthfulness is another thing generative AI struggles with. In the new AI Index report, TruthfulQA was used as a benchmark to test the truthfulness of LLMs. Its 817 questions (about topics such as health, law, finance and politics) are designed to challenge commonly held misconceptions that we humans often get wrong.

GPT-4, released in early 2023, achieved the highest performance on the benchmark with a score of 0.59, almost three times higher than a GPT-2-based model tested in 2021. Such an improvement indicates that LLMs are progressively getting better at giving truthful answers.

What about AI-generated images? To understand the exponential improvement in text-to-image generation, check out Midjourney's efforts at drawing Harry Potter since 2022:

That's 22 months' worth of AI progress. How long would you expect it would take a human artist to reach a similar level?

Using the Holistic Evaluation of Text-to-Image Models (HEIM), LLMs were benchmarked for their text-to-image generation capabilities across 12 key aspects important to the “real-world deployment” of images.

Humans evaluated the generated images, finding that no single model excelled in all criteria. For image-to-text alignment or how well the image matched the input text, OpenAI’s DALL-E 2 scored highest. The Stable Diffusion-based Dreamlike Photoreal model was ranked highest on quality (how photo-like), aesthetics (visual appeal), and originality.

Next year's report is going to be bananas

You'll note this AI Index Report cuts off at the end of 2023 – which was a wildly tumultuous year of AI acceleration and a hell of a ride. In fact, the only year crazier than 2023 has been 2024, in which we've seen – among other things – the releases of cataclysmic developments like Suno, Sora, Google Genie, Claude 3, Channel 1, and Devin.

Each of these products, and several others, have the potential to flat-out revolutionize entire industries. And over them all looms the mysterious spectre of GPT-5, which threatens to be such a broad and all-encompassing model that it could well consume all the others.

AI isn’t going anywhere, that’s for sure. The rapid rate of technical development seen throughout 2023, evident in this report, shows that AI will only keep evolving and closing the gap between humans and technology.

We know this is a lot to digest, but there's more. The report also looks into the downsides of AI's evolution and how it's affecting global public perceptions of its safety, trustworthiness, and ethics. Stay tuned for the second part of this series, in the coming days!

Source: https://newatlas.com/technology/ai-index-report-global-impact/
10
Robotics / FROM SCI-FI TO SKY-HIGH: FLYING CARS ARE HERE
« Last post by Imrul Hasan Tusher on April 22, 2024, 12:21:17 PM »
FROM SCI-FI TO SKY-HIGH: FLYING CARS ARE HERE


Before the end of this decade, you’ll be able to order an on-demand aerial rideshare as easily as you currently request an Uber.

But all of this raises a fundamental question: Why now?

The answer is a convergence of advanced technologies such as improved batteries, new materials, advanced sensors, and something called “direct electric propulsion.” Coupled with increasing regulatory support around the world, the age of flying cars is arriving.

In the last blog in this series, we looked at the two companies leading the eVTOL market: Archer Aviation and Joby Aviation.

But they’re not alone.

In today’s blog, I want to share details on three more of the leading eVTOLs (flying cars) that are competing for the market: Lilium, Beta, and Volocopter.

Lilium (Germany)


Lilium is developing a 7-seater eVTOL jet, which boasts a unique fixed-wing design powered by 36 electric ducted fans.

The Lilium 7-seater is expected to have a cruise speed of 280 km/h (175 mph), a range of 250+ km (155+ miles), a maximum cruise altitude of 3,000 m (10,000 ft), and an estimated maximum take-off weight of 3,175 kg (7,000 lb).
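The metric/imperial pairs in the spec sheet can be cross-checked with standard conversion factors (the spec values themselves are the article's; only the conversions are computed here):

```python
# Standard conversion factors; spec values are as quoted for the Lilium 7-seater.
KM_TO_MILES = 0.621371
KG_TO_LB = 2.20462
M_TO_FT = 3.28084

print(round(250 * KM_TO_MILES))    # range: 250 km -> ~155 miles, matching the quoted 155+
print(round(3175 * KG_TO_LB))      # MTOW: 3,175 kg -> ~7,000 lb, matching the quoted figure
print(round(3000 * M_TO_FT, -2))   # ceiling: 3,000 m -> ~9,800 ft (the article rounds to 10,000)
```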

In June 2023, Lilium achieved FAA G-1 certification, making it the only air taxi with certification bases from both the FAA and EASA, the European Union Aviation Safety Agency. As CEO Klaus Roewe has stated, "Receiving the FAA G-1 demonstrates the Lilium Jet's path to global acceptance by aerospace regulators and the expected start of global operations in late 2025."

Lilium has secured substantial purchase orders and partnerships, including deals with Azul, NetJets, Saudia, and Heli-Eastern. In March 2024, Lilium partnered with Atlantic Aviation to prepare for the Lilium Jet's upcoming regional air mobility service launch in the United States.

The company hopes to enter commercial service in 2026.

Beta (Vermont, USA)


Beta Technologies, a pioneer in electric aviation based in South Burlington, Vermont, has captured the industry's attention with its groundbreaking designs.

The company's ALIA-250 eVTOL and CX300 eCTOL (electric conventional takeoff and landing) aircraft share components, streamlining the FAA certification process and enabling cost-effective production.

In October 2023, Beta made history by delivering a manned ALIA electric aircraft to the U.S. Department of Defense. The aircraft completed a 2,000-plus mile journey from Vermont to Eglin Air Force Base in Florida, utilizing Beta's own charging infrastructure. The U.S. Air Force is now testing the Alia in its first piloted deployment of an electric aircraft.

September 2023 saw the opening of Beta's new 188,500 square-foot factory at Burlington International Airport, dedicated to the large-scale production of the ALIA eCTOL and eVTOL.

Beta aims to launch the ALIA eCTOL in 2025 and the eVTOL in 2026, working closely with the FAA for certification. The company has raised over $800 million from prominent investors and has secured orders for approximately 600 aircraft from major aviation players.

Volocopter (Germany)


The all-electric “Volocopter” is equipped with 18 rotors, giving it extraordinary redundancy, and offers accommodations for two people: a pilot and a single passenger.

The vehicle is limited in carrying capacity, speed, and range but its simplicity makes it ideal for local, near-term operations. The lack of a wing and dependence on multicopter-drone technology means it will have short ranges and relatively slow speeds.

The Volocopter 2X (an earlier model) is capable of reaching a maximum speed of 100 km/h (62 mph) and has a range of 27 km (17 miles). This model is particularly suited for short-range urban air taxi service, which is ideal for the company’s collaboration with NEOM, the smart and sustainable regional development in northwest Saudi Arabia, and its futuristic urban environment.

The next design, called the “VoloCity,” has a range of 35 km (22 miles) and a top speed of 110 km/h (69 mph). The VoloCity is equipped with 18 small fixed-pitch propellers and 18 electric motors, powered by lithium-ion battery packs that can be swapped out in about 5 minutes, significantly reducing turnaround times. This model is expected to enter commercial service in 2024.

Why This Matters

In the year 2000, there was a famous IBM commercial in which the actor Avery Brooks asked: “It’s the year 2000, but where are the flying cars? I was promised flying cars. I don’t see any flying cars. Why? Why? Why?”

In 2011, in Peter Thiel’s now famous manifesto What Happened to The Future?, the prominent investor wrote: “We wanted flying cars, instead we got 140 characters.”

The wait is now over and flying cars—at least eVTOLs—are finally here.

And the infrastructure, ecosystem, software, and regulations are coming fast.

While we were sipping our lattes and checking our social media feeds, science fiction became science fact.

In our next blog in this series, we’ll look at how commercial drones are transforming on-demand delivery.

Source: https://www.diamandis.com/blog/abundance-49-flying-cars-are-here