NVIDIA’s AI Model to Save Earth Grabs Funding From NASA

It’s a breathtaking sight when meteor showers light up the night sky. However, the threat of larger celestial objects colliding with Earth poses a real danger. To counteract this potential catastrophe, a team led by physics professor Philip Lubin and his undergraduates at the University of California, Santa Barbara (UCSB) is working on a groundbreaking PI-Terminal Planetary Defense initiative. Their goal is to detect and mitigate space threats more efficiently, and they have recently received phase II funding from NASA for their research. NVIDIA has provided the team with an NVIDIA RTX A6000 graphics card through their Applied Research Accelerator Program to aid them in their mission. Let’s dive into the details of this innovative AI project that aims to safeguard our planet from cosmic hazards.

Also Read: Alien-Inspired Spacecraft Design: NASA’s Bold Leap Into Space’s Future

Pulverizing Space Threats

The core objective of the PI-Terminal Planetary Defense initiative is to detect relevant threats sooner and take decisive action to minimize their impact. In the face of an impending collision, the UCSB team plans to utilize an array of hypervelocity kinetic penetrators. These specialized devices are designed to pulverize and disassemble an asteroid or small comet, effectively neutralizing the threat before it reaches Earth’s surface. By breaking down these celestial bodies, the potential damage and risk to life on Earth can be greatly minimized.

Detecting Impending Catastrophe

Recognizing threats is the first crucial step in protecting Earth. Lubin and his students have harnessed the power of artificial intelligence (AI) to analyze vast amounts of astrophysical data. While modern surveys collect massive amounts of data, processing and analyzing this information at the required speed is challenging. To overcome this hurdle, the UCSB team is designing a large-scale survey tailored to planetary defense. This survey will generate even more data, which needs to be rapidly processed and analyzed.

Training an AI Sentinel

Lubin’s group has trained a “You Only Look Once” (YOLO) neural network using the Darknet framework. This near real-time object detection system processes an image in less than 25 milliseconds. Trained on a large dataset of labeled images, the network identifies low-level geometric features such as lines, edges, and circles, as well as threats like asteroids and comets. Early results indicate that the AI-powered source extraction process is up to 10 times faster and nearly 3 times more accurate than traditional methods.
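To make the detection step concrete, here is a minimal, illustrative sketch of running a Darknet-trained YOLO model on a single survey frame with OpenCV’s DNN module. The file names, input size, and confidence threshold are placeholder assumptions, not the UCSB team’s actual pipeline.

```python
# Illustrative sketch only: running a Darknet-trained YOLO detector on one frame.
# "detector.cfg" and "detector.weights" are hypothetical file names.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("detector.cfg", "detector.weights")
# If OpenCV is built with CUDA support, inference can run on the GPU:
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

image = cv2.imread("survey_frame.png")
height, width = image.shape[:2]

# Normalize and resize the frame into the network's input blob.
blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0, size=(416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row is [cx, cy, w, h, objectness, class scores...].
for detection in np.vstack(outputs):
    scores = detection[5:]
    class_id = int(np.argmax(scores))
    confidence = float(scores[class_id])
    if confidence > 0.5:
        cx, cy, bw, bh = detection[:4] * np.array([width, height, width, height])
        print(f"candidate class {class_id} at ({cx:.0f}, {cy:.0f}), "
              f"confidence {confidence:.2f}")
```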

Also Read: AI Discovers New Planet Outside the Solar System, Scientists Failed to Find

Supercharging Processing Speed

To accelerate their image analysis process, the UCSB team has incorporated the NVIDIA RTX A6000 GPU and the CUDA parallel computing platform. The team initially faced challenges in reducing the processing time and meeting GPU memory requirements. However, with the RTX A6000’s 48GB of memory, they can handle complex graphics and large datasets without sacrificing performance. By implementing new CuPy-based algorithms, the team significantly reduced their subtraction and identification time, allowing the entire pipeline to run in just six seconds.
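As a rough illustration of the kind of GPU-side work CuPy enables (not the UCSB team’s actual code), the sketch below subtracts a reference frame from a new frame on the GPU and flags pixels that deviate strongly from the residual background; the array inputs and the 5-sigma cutoff are assumptions made for the example.

```python
# Illustrative sketch of GPU-accelerated difference imaging with CuPy.
# `new_frame` and `reference_frame` are assumed NumPy arrays of equal shape.
import numpy as np
import cupy as cp

def find_candidate_sources(new_frame: np.ndarray, reference_frame: np.ndarray,
                           sigma: float = 5.0) -> np.ndarray:
    new_gpu = cp.asarray(new_frame, dtype=cp.float32)
    ref_gpu = cp.asarray(reference_frame, dtype=cp.float32)

    # Difference image: static stars cancel out, moving or new objects remain.
    diff = new_gpu - ref_gpu

    # Flag pixels far above the mean residual level.
    threshold = diff.mean() + sigma * diff.std()
    candidate_mask = diff > threshold

    # Return candidate pixel coordinates back on the host.
    return cp.asnumpy(cp.argwhere(candidate_mask))
```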

Tackling Technical Challenges

As the project grows and accumulates more training data, the team faces the challenge of handling increasingly large file sizes. The RTX A6000’s generous memory capacity enables the team to handle datasets of images with resolutions of approximately 100 megapixels. This powerful GPU eliminates the data transfer bottleneck, ensuring smooth processing and analysis.

Realistic Simulations for Precise Solutions

The UCSB team conducts simulations to demonstrate various aspects of their project. These simulations include modeling the ground effects of shock waves and optical light pulses emitted by fragments burning up in Earth’s atmosphere. The team develops custom codes in multithreaded, multiprocessor C++ and Python for local simulations. For more intensive visualizations, such as the hypervelocity intercept of threat fragments, the team relies on the NASA Advanced Supercomputing (NAS) facility at the NASA Ames Research Center. Equipped with Intel Xeon CPUs and NVIDIA RTX A6000 GPUs, the NAS supercomputers provide over 13 petaflops of computing performance.

Also Read: NVIDIA Builds AI SuperComputer DGX GH200

Our Say

NASA’s decision to invest in AI for space exploration seems to be a step in the right direction. The PI-Terminal Planetary Defense initiative, led by Professor Philip Lubin and his team at UCSB, represents an innovative approach to safeguarding Earth from space threats. The project pairs cutting-edge AI with powerful hardware such as the NVIDIA RTX A6000 GPU and with innovative data processing and analysis methods, making it capable of detecting and mitigating cosmic hazards faster and more efficiently than ever before. With their ongoing research and development, the team brings us one step closer to a safer future in which we can confidently admire meteor showers without fearing the unknown.


NASA Shares The First Images From Landsat 9

In late September 2021, NASA and the US Geological Survey launched the latest satellite in the Landsat fleet, Landsat 9. The satellite has now collected and shared its first images of Earth. The images were taken on October 31 and show how the mission will help manage resources and track how climate change is affecting the planet. The Landsat series of satellites has been in operation for nearly five decades.

Landsat 9’s first images have given NASA and USGS critical data about the planet’s landscapes and coastlines as seen from space. NASA says it will continue working with the USGS to strengthen and improve access to Landsat data for decision-makers in the US and around the world. Both organizations hope the images gathered by the satellite will help decision-makers understand the climate crisis, manage agricultural practices, preserve natural resources, and respond more effectively to natural disasters.

Among the locations shown in the first images shared by NASA are Detroit, Michigan, and nearby Lake St. Clair, along with cities and beaches on the coastline of Florida and Navajo County, Arizona, where NASA says the data helps monitor crop health and manage irrigation water. The new images also capture the changing landscape of the Himalayas in High Mountain Asia and the coastline of northern Australia.

NASA says Landsat 9 is quite similar to Landsat 8, which has been in orbit since 2013 and is still operating. However, the new satellite has multiple improvements, including the ability to send data with higher radiometric resolution back to Earth for study. With improved radiometric resolution, the satellite can detect more subtle differences in the landscape than older satellites, particularly over water or dense forests.

Landsat 9 can differentiate more than 16,000 shades in a given wavelength, thanks to its 14-bit data. While Landsat 8 remains in orbit, the satellite Landsat 9 is replacing is Landsat 7, which, with 8-bit data, can detect only 256 shades. Landsat 9 project scientist Jeff Masek says that first light for the satellite is a significant milestone, and he notes that the images look fantastic.

The satellite carries two instruments to gather images. The first, the Operational Land Imager 2, detects visible, near-infrared, and shortwave-infrared light across nine wavelengths. The second, the Thermal Infrared Sensor 2, detects thermal radiation across two wavelengths and is used to measure Earth’s surface temperature and its changes.

Currently, the team is conducting a 100-day check-out that has them testing satellite systems and subsystems and calibrating instruments. Once the calibration checkout is over, the mission will be turned over to USGS in January. USGS will operate Landsat 9 and Landsat 8 together. The two satellites are expected to collect around 1500 images of the surface of the planet daily and cover the entire globe every eight days.

NASA notes that all data collected by Landsat 9 will be available to the public for free from the USGS website once the satellite begins normal operations. Landsat 9 launched on September 27, 2021, at 2:12 PM Eastern time on a United Launch Alliance Atlas V rocket. While the new satellite is the latest in the Landsat series, the very first launched way back in 1972.

How To Save Videos From Twitter On iPhone

When it came into being, Twitter was just a source of text-rich content, but over the years the microblogging platform has grown into a full-fledged social media app where you can share images, videos, and links. If you see something fascinating within the app, Twitter lets you save images from a tweet or share the tweet across other apps on your phone. But what if the content you wish to save is a Twitter video?

In this post, we’ll explain different ways you can save a video from a tweet on your iPhone. 

Can you save Twitter videos natively on your iPhone?

The short answer is no. From the Share via option in Twitter’s share menu you can access iOS’s native Share Sheet, which offers options like opening the tweet in Safari, adding it to Safari’s Reading List, and using other tools you may have configured the Share Sheet with. However, there’s no built-in option to store videos from Twitter on your iPhone.

How to save Twitter videos on iOS (2 ways)

Since there’s no native way to save videos from Twitter on your iPhone, you will need to rely on external resources to get it to work. The following are two methods you can use to download videos from Twitter directly on iOS without installing third-party apps. 

Method #1: Using TVDL Shortcut

Although the Twitter app lacks an inbuilt download tool, what if we told you that you can download a Twitter video directly from your iOS Share Sheet? Sounds too good to be true? Surprisingly, there’s a way to add a download option for Twitter videos to the iOS Share Sheet: a user-made Siri shortcut. If you’ve previously used Siri Shortcuts to get stuff done, you’ll know how easy they are to add to your iOS device and use.

To get started with this method, download the TVDL Shortcut on your iPhone by going to this link and tapping on the Get the Shortcut option on the webpage that opens. 

When you do that, you’ll see the TVDL shortcut appear inside the Shortcuts app. 

You can install this shortcut on your iPhone by tapping on Add Shortcut at the bottom. 

You should now be able to see the TVDL shortcut appear inside the My Shortcuts tab on the Shortcuts app. This means the option to download Twitter videos has now been added to your iOS share sheet. 

Next, open the Twitter app, go to the tweet containing the video you want to save, and tap the Share icon below it. When Twitter’s share menu appears, tap on Share via.

This should open the iOS Share Sheet on your screen. From this screen, tap on the TVDL option marked with the version name of the shortcut. In this instance, you should see the option marked as “TVDL v3.1” because we’re using version 3.1 of the shortcut. 

TVDL will now grab the video you wish to download and suggest different options based on the quality of video you want to save. Select an option between High, Medium, and Low to download the video at your preferred quality. 

In the next prompt that appears, select Allow. 

Method #2: Using TwitterVideoDownloader 

In case you don’t prefer the above method for downloading Twitter videos, there’s another easy way to achieve similar results. While there are plenty of websites that let you save Twitter videos, we’re using TwitterVideoDownloader in this method. The tool is free and lets you download Twitter videos in different qualities, which is why we chose it for this example. If there’s another website you prefer, the process will be more or less the same.

Before you can download a Twitter video, you need to launch the Twitter app on iOS and open the video you wish to download. When the tweet with a video is open, tap on the Share icon below the tweet. 

When Twitter’s share menu appears, select Copy Link. 

The Twitter app will now show that the link to the tweet has been copied to your clipboard. 

Now, open the Safari app on your iPhone and go to the TwitterVideoDownloader website. On this webpage, tap on the text box under “Paste Tweet URL Here”.

You can tap on this box once more and then select Paste to paste the URL of the tweet you copied earlier. 

Once the URL has been pasted, tap on Download. 

The Twitter video that you wanted to download should now load up on the next page. 

Scroll down on this page and you should see the Download Video links appear adjacent to different resolutions for the video in descending order of quality. To download a video, tap and hold on any one of these Download Video links. 

Now, select the Download Linked File option from the overflow menu that appears. 

The video is saved to Safari’s Downloads folder, which you can open from the Downloads button in Safari or from the Files app. When you select the downloaded video, it should start playing on the same screen, and you get additional options to crop, trim, and share it from there.

That’s all you need to know about saving Twitter videos on an iPhone. 

3 Easy Steps To Save Yourself From Stupid Passwords

Passwords are stupid.

Yet what’s stupid about passwords is not that they are inherently insecure, but that they allow users, and in fact encourage them, to do insecure things. When faced with creating, and subsequently memorizing, a new password, most users fall back on the same stupid, easy-to-remember password they’ve used elsewhere. That’s just the kind of vulnerability hackers are looking for.

Don’t be that victim. You can turn all your stupid passwords into safer ones that are easier to manage, in three easy steps.

1. Acknowledge you have a password problem 

Everyone has stupid passwords. Take the findings of managed security firm Trustwave, which regularly tests the security of its clients to find vulnerabilities. During its security tests in 2014, the company collected 625,000 password hashes (the scrambled form in which passwords are stored), and its researchers tried to break them. Within two minutes, more than half—54 percent—fell to common password guessing techniques. In a month, the company had recovered 92 percent of the passwords.

The most common passwords? “Password1,” followed by “Hello123” and, yes, “password.”

“The inherent problem with passwords is that they give the users far too much ability to do something stupid, but good security controls should not allow users to do stupid things,” says Charles Henderson, vice president of Trustwave.

No wonder tech companies and online services are looking for alternatives. The recent announcement by Yahoo! that the company will allow devices to store and send passwords—thus, eliminating the need for the user to remember them—is one example. Adding a second factor, such as the fingerprint sensor on Apple’s TouchID or the facial recognition of Windows 10, is another.

The most common password patterns are six letters followed by a number, or six numbers, according to Trustwave; nearly 30 percent of passwords follow one of those two patterns. (Source: Trustwave 2014 Business Password Analysis)

Yet, these solutions have their own problems. Consumer-level biometrics are often easy to defeat, because companies trade security for convenience. Apple’s TouchID fell to hackers within months, and other fingerprint sensors have had similar problems.

“Everyone in the security community agrees that passwords stink, but we are not going to get rid of passwords anytime soon,” says Henderson.

2. Use a password manager to create new codes

Creating secure passwords means using long strings of characters, numbers and special characters. While passwords are stored as one-way “hashes,” attackers have learned a variety of tricks to crunch through millions of possibilities very quickly, making complex passwords a necessity.

But let’s be honest: You can’t create them all by yourself. A variety of password managers—from LastPass to Dashlane to 1Password to KeePass—allow users to generate complex passwords, manage them across devices, and autofill login forms. There are even mobile-app password managers readily available. 
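For illustration, here is a minimal sketch of what “generating a complex password” amounts to, using Python’s standard secrets module. This is not how any particular password manager implements it, and the 20-character length is just an assumption for the example.

```python
# Minimal sketch: generate a long, random password the way a password
# manager might, using Python's cryptographically secure `secrets` module.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```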

3. Different account, different password

The average user holds between 30 and 60 online accounts. With so many breaches of online services, there’s every reason to have a different password for each service. Otherwise, a breach at one site allows an attacker to try the same username and password on other sites.

Assigning a unique password to each account, however, means the number of tricky passwords or passphrases that people have to remember has skyrocketed, according to password-management service Dashlane. “Now, we not only need several tens of passwords, but we also need to use them on various devices at different times,” says Emmanuel Schalit, CEO of Dashlane. “The complexity has blown up and become too much for human beings to manage.”

This is the other reason to use a password manager. Just remember to use it for good: don’t store the same bad passwords in your password manager. Instead, create the longest, most complex password possible, and a different one for every account.

Business Analytics In 2023: From BI To AI

Collectively, humans now generate 2.5 quintillion bytes of new data per day; the data we create in a single year dwarfs everything produced over the rest of recorded history. In other words, the BI tools of the past can hardly be expected to keep up with today’s demands.

Not only is the overall amount of data increasing, the number of data types is growing, as is the number of applications that store and generate data. Older BI tools can’t cope with larger volumes of data, and they also struggle to process data from new applications; it often takes a lot of manual adjustment to make an old BI tool fit a new app. As a result, companies relying on such tools may miss out on data-driven insights that are now available.

Exploring the Six Main Differences Between AI and BI

Traditional BI can no longer cope with the volume, variety and velocity of enterprise data. It’s time for new AI-powered tools to pick up the slack. But how is this new generation of tools different from what came before?

Data Collection and Integration

Within five years, 80 percent of your data will be unstructured. This data resists classification in databases, which makes it hard to tag, search and edit. With traditional BI tools, unstructured data sits in silos and is analyzed slowly, if at all. Data scientists spend about 80 percent of their time preparing this data before it can be analyzed.

With modern BI tools, preparation is faster and automatic. No matter what kind of data you need to analyze, these new tools can sort and categorize them within a single seamless data lake, making silos a thing of the past. These tools are self-service, making it possible for data scientists to begin receiving actionable intelligence in just hours or days, without involving IT operations.

Metric Coverage

Traditional KPIs, the ones that you set and research manually, only cover about three percent of the metrics in play within your organization. If your KPI dashboards cover a hundred metrics, that means you’re missing roughly 3,300 others. In fact, for a modern enterprise, 3,300 metrics would be on the small side.

If something goes wrong in a user-facing application, the overwhelming likelihood is that it will go wrong in a metric that you aren’t currently covering. As long as the KPIs you’re monitoring don’t drop, you won’t be able to detect the error or outage until your customers let you know about it.

By contrast, AI tools assume that it’s impossible for any organization to monitor all of its KPIs manually, so they take that problem out of your hands. No matter how many metrics your company generates, the order of magnitude doesn’t matter: these tools can ingest millions of metrics at a time and still provide near-instantaneous feedback if something goes wrong.

Thresholds and Baselines

Traditional manual alerting practices require data scientists to set thresholds for KPIs. When a KPI drops below a certain threshold or above another one, it sets off an alarm. Unfortunately, metrics tend to spike and drop unpredictably, even during normal behavior. Even if you set your baselines above and below those spikes, this discounts the possibility that abnormal behavior could still occur within the thresholds that you’ve set.

This practice also ignores seasonality, which is a normal variation in certain metrics that occurs on a daily, weekly or monthly cycle. To a traditional BI program, all seasonality looks like an anomaly, leading to a slew of false positives and false negatives.

Modern analytics platforms take a completely autonomous approach to baselining. They rely on machine learning algorithms to learn your metrics’ normal behavior and identify their baselines, eliminating the need for manual thresholding.
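To make the contrast concrete, here is a minimal, illustrative sketch of seasonality-aware baselining with pandas. It assumes an hourly metric series indexed by timestamp and a simple hour-of-week profile; production platforms use far more sophisticated learned models, but the idea is the same: flag deviations from the metric’s own learned baseline rather than from a fixed threshold.

```python
# Illustrative sketch of learned, seasonality-aware baselining (not any
# product's actual algorithm). Assumes `series` is an hourly pandas Series
# of a single metric, indexed by a DatetimeIndex.
import pandas as pd

def flag_anomalies(series: pd.Series, n_sigmas: float = 4.0) -> pd.Series:
    # Seasonal key: hour-of-week (0..167) captures daily and weekly cycles.
    hour_of_week = series.index.dayofweek * 24 + series.index.hour

    # Learn the "normal" level and spread for each seasonal bucket from history.
    baseline_mean = series.groupby(hour_of_week).transform("mean")
    baseline_std = series.groupby(hour_of_week).transform("std")

    # A point is anomalous if it deviates strongly from its own seasonal
    # baseline, not from a single static threshold.
    deviation = (series - baseline_mean).abs()
    return deviation > n_sigmas * baseline_std
```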

Detection and Alerting

Setting up traditional BI systems with manual alerting has a natural consequence – too many alerts. Alert fatigue is a real issue. In some disciplines, such as information security, personnel can experience over a million alerts per day. This makes it difficult for analysts to tell real emergencies apart from noise in the data.

With AI-driven reporting, there are no manual thresholds. The only alerts are “real” alerts – genuinely odd behavior in a metric. Even on its own, this behavior cuts down on false positives immensely. AI goes further than that, however. Modern BI tools give you the ability to alert on only the most severe deviations, allowing your response teams to focus only on what’s most important.

Root Cause Analysis

Anomalies don’t occur on their own. Using a traditional dashboard, you may be able to see an anomaly occur in one of the three percent of metrics you’re monitoring. Unfortunately, you won’t be able to see where else that anomaly shows up. This, in turn, means that it will take longer for you to understand where an anomaly is occurring and how to fix it.

By contrast, Autonomous Analytics reports on the full context of every alert. If two anomalies take place at the same time in related metrics, your alerting will reflect this. If these anomalies happen to coincide with a patch, an equipment failure or Black Friday, your reporting will reflect this as well. This makes it much easier to detect and mitigate anomalies.

Forecasting

Forecasting is different from anomaly detection, but with traditional BI the same difficulties apply. It takes a long time to prepare data for forecasting, which is unfortunate when the business needs forecasts sooner rather than later. And since traditional analytics tools are constrained by the number of metrics they can ingest, your forecast will fail to consider all of the metrics that could potentially affect the business. In short, you get a less accurate forecast that takes longer to prepare.

With autonomous analytics, you get the forecast you need when you want it. Not only will autonomous analytics provide forecasts in seconds, the forecasts get more accurate every time you run them. The model will automatically compare its forecasts to subsequent events and then refine its conclusions based on what it got right or wrong – the longer it runs, the more accurate it becomes.
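As a toy illustration of a forecast that refines itself as new observations arrive (not the algorithm any particular vendor uses), simple exponential smoothing updates its estimate after every data point, so the forecast keeps adapting the longer it runs:

```python
# Toy illustration: a forecast that updates itself with each new observation
# (simple exponential smoothing). Real products use far richer models.
def update_forecast(previous_forecast: float, observation: float,
                    alpha: float = 0.3) -> float:
    """Blend the latest observation with the previous forecast."""
    return alpha * observation + (1 - alpha) * previous_forecast

forecast = 100.0  # initial guess for the metric's next value
for observed in [102.0, 98.5, 110.0, 107.5]:
    error = observed - forecast          # how wrong the last forecast was
    forecast = update_forecast(forecast, observed)
    print(f"observed {observed}, error {error:+.1f}, next forecast {forecast:.1f}")
```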

What Kind of AI do You Need?

Autonomous Analytics programs eliminate the friction between data and analysis. Under a traditional solution, data doesn’t go where it should and needs to be massaged before it can be processed. It’s become too vast for humans or limited tools to process, and its metrics vary unexpectedly. In short, data is too large and varies too rapidly for the previous generation of tools to understand.

Leading solutions in the BI space are adding AI features to their existing products in order to keep up, but not every solution is created equal. Incumbents are bolting these features on piecemeal, without the completeness of green-field AI projects. Other vendors provide anomaly detection, but only for infrastructure data, which doesn’t give the complete picture your company needs.

Only a fully autonomous anomaly detection and forecasting solution can provide you with the scale and speed you need to cope with the full velocity, volume and variety of your data. Whether you’re a seasoned data analyst or an inexperienced business user, these tools will help you get the actionable insights you need to compete in a changing environment.

Author Bio:

Amit Levi is VP of product and marketing at Anodot. He is passionate about turning data into insights. Over the past 15 years, he’s been proud to accompany the development of the analytics market.

Difference Between Incremental Model And Waterfall Model

The Waterfall model and the Incremental Model are widely used in software development. The objective of having these models is to ensure that the software is developed in a systematic, organized and efficient manner. Read this article to find out more about the Waterfall model and the Incremental model and how they are different from each other.

What is the Incremental Model?

The Incremental Model is a software development model in which the work is divided into various sub-development phases, with a corresponding testing phase practiced for each development phase. Development and testing execute sequentially within each increment, so the model is sequential/parallel in nature. Because each of these sequential phases needs to be functional, the cost of development is higher than that of the Waterfall Model.

The complexity of the Incremental Model is also higher than that of the Waterfall Model. However, the probability of defects in the developed application is lower, because testing is done in parallel with development.

The incremental model of software development involves breaking a project down into smaller parts, known as “increments”, which can be easily managed. Each “increment” builds on the previous one, adding new functionality and features until the final product is complete. It provides more flexibility because the updates can be easily incorporated into the development process.

What is Waterfall Model?

The Waterfall Model is the classical model of software development, in which each phase of application development is completed in a linear fashion. The complete process is divided into several phases that follow a linear, sequential approach, with each phase of the project being completed before moving on to the next. Testing is done at the end of development. The Waterfall Model is also known as the classical or traditional model, and it is generally not regarded as suitable for large projects.

Difference between Incremental Model and Waterfall Model

The following comparison highlights how the Incremental Model of software development differs from the Waterfall Model:

Definition

Incremental Model: The development is divided into various sub-development phases, and a corresponding testing phase is practiced for each of them. For every stage in the development cycle, there is an associated testing phase, planned in parallel with it.

Waterfall Model: The application is developed first, and the different kinds of testing take place afterwards. The complete process is divided into several phases, each flowing into the next after its completion, with testing done at the end of development.

Type/Nature

Incremental Model: Development and testing execute sequentially within each increment, so the process is sequential/parallel in nature.

Waterfall Model: A linear, sequential design approach in which each phase must be completed before the next can begin, so the model is continuous in nature.

Testing and Validation

Incremental Model: Each development phase is followed by its own testing, so any required validation can be implemented within that phase.

Waterfall Model: Testing is carried out after development is completed, so if a missing validation is identified, the relevant development phase has to be revisited before the validation can be implemented.

Cost and Complexity

Incremental Model: Because the sequential phases need to be functional, the cost is higher than that of the Waterfall Model, and the complexity is also greater.

Waterfall Model: Due to linear development, only one phase is operational at a time, so the cost and complexity are lower than those of the Incremental Model.

Defects

Incremental Model: The probability of defects in the developed application is low, because testing is done in parallel with development.

Waterfall Model: The probability of defects is high, because testing is done after development.

Conclusion

The most significant difference to note here is that in the Incremental Model the entire development effort is divided into several sub-development phases, each with its own corresponding testing phase, whereas in the Waterfall Model each phase flows into the next after its completion, and all of the testing is left until the end of development.
