6 Steps to Implementing Cloud Security Automation

Cloud security automation is crucial for protecting your team’s cloud environment from today’s ever-changing threat landscape. Automating security protocols can be overwhelming — especially if your team is new to cybersecurity. Luckily, a straightforward six-step process can take you from default security protocols to a customized, automated cloud security framework.

1. Evaluation and Risk Assessment

The first step to automate cloud security is a thorough evaluation and risk assessment. Before automating anything, you need to understand how your cloud environment is running. This first stage will identify key automation opportunities, highlighting vulnerabilities and risk factors. That data will be the foundation of your cloud security automation strategy.

If you or your organization have not run a cybersecurity risk assessment before, a basic five-step approach can prevent confusion. While the assessment should cover all of the organization's systems, prioritize cloud-related data and infrastructure. Keep in mind that an app can be highly secure and still be high risk.

A risk assessment should highlight the threats facing your organization’s most important data, apps, systems, and infrastructure. Cybersecurity risk rankings indicate what could occur in the case of compromise. Ideally, all high-risk systems and data are highly protected. Take note whenever the risk assessment reveals something is both high risk and highly vulnerable.

At this stage, it’s also important to establish your organization’s goals for cloud security. After thoroughly reviewing the risk assessment results, pinpoint a few measurable areas for improvement. For example, you may want to automate some system updates using scripting or implement an automated API security scanner.

These targets will be the foundation of your cloud security automation strategy. It may even be helpful to rank a few goals from highest to lowest priority. This will provide a starting point for your team to focus on as you begin implementing automated cloud security solutions.

2. Expand Cloud Visibility

A crucial part of effective cybersecurity is visibility, but it can be easy to miss things in a cloud environment due to its dispersed nature. Securing the cloud effectively requires expanding your visibility of your cloud resources.

During the risk assessment stage, you may have even stumbled on risks or opportunities you didn’t realize you had. Those are signs you need to improve your visibility of your cloud environment. Building out a cloud asset management platform can pool all your cloud resources into one hub where you can keep an eye on things.

A cloud asset management platform acts as a control center for your cloud environment. It includes all the devices, apps, services, servers, and systems running in your cloud environment — and any critical data, such as usage statistics.

Remember to include physical devices in your management platform. It’s easy to concentrate on software when working with the cloud, but an increasing number of cloud systems rely on input from physical technologies. Those same devices may depend on the cloud to operate correctly.

A great example of this is IoT appliances. These devices are great for automating data collection from sensors, but they are also highly vulnerable to DDoS attacks and often suffer from poor visibility. IoT devices have notoriously weak default security parameters, as well. As a result, it is crucial to have high visibility of IoT devices’ activity and connections to ensure tight security.

Many pre-built cloud asset management platforms are available today, although building your own is possible. Check with your cloud provider before purchasing or building one, though. Some providers include a management platform with your subscription or offer partnerships and discounts for third-party platforms.

3. Automated Cloud Security Basics

Once you have a clear understanding of the principal risks and priorities in your cloud environment and a way of monitoring all of it, you can begin implementing automation. It is often a good idea to start with basic automated cloud security measures. This includes automation that covers high-risk gaps and establishes a minimum security level for the whole cloud environment.

For example, every cloud environment should use encryption, and most of today's leading cloud providers offer it at some level. You should encrypt your cloud data in three states — in transit, at rest, and in use. This protects your data from unauthorized use, even if it is somehow intercepted or compromised at any stage.

Encryption itself does not automate any processes, but it ensures data is safe as it moves through your cloud environment. This allows you to implement automated strategies with less anxiety about potentially putting your data at risk.
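To make the at-rest piece concrete, here is a minimal sketch using the Python cryptography package's Fernet interface, assuming you layer application-level encryption on top of whatever your provider offers. Key management, provider-side encryption settings, and TLS for data in transit are separate concerns not shown here.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key once and store it in a secrets manager,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_record(plaintext: bytes) -> bytes:
    """Encrypt a record before writing it to cloud storage (data at rest)."""
    return cipher.encrypt(plaintext)

def decrypt_record(token: bytes) -> bytes:
    """Decrypt a record after reading it back from storage."""
    return cipher.decrypt(token)

if __name__ == "__main__":
    secret = encrypt_record(b"customer-id=1234; plan=enterprise")
    print(decrypt_record(secret))
```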

Automated cloud data backups are another crucial security measure to implement. Data backups to the cloud are becoming more common today, but you can also back up data already in the cloud. Automating regular backups is a crucial part of any disaster recovery plan, including natural disasters and cyber-attacks.

The cloud is more resilient to natural disasters than on-prem servers, but accidents can still happen. Whether it’s the result of a cyber-attack or an unfortunate accident, losing crucial data causes about 60% of small businesses to go under within six months of the loss. So, ensure your cloud data is backed up in a different server location than the data center your cloud resources usually run from. You could even store backups in on-premises data storage. The important part is to make sure backups are happening autonomously at scheduled intervals.
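A scheduled backup job can be as simple as a short script run by cron or a cloud scheduler. The sketch below assumes an AWS environment with the boto3 SDK; the bucket name, region, and dump path are placeholders, and the key point is that the destination lives somewhere other than your primary workloads.

```python
import datetime
import boto3  # AWS SDK; the bucket, region, and paths below are placeholders

BACKUP_BUCKET = "example-backups-us-west-2"   # hypothetical bucket in a different region
SOURCE_DUMP = "/var/backups/app-db.dump"      # hypothetical nightly database export

def run_backup() -> None:
    """Upload the latest export to an off-region bucket with a timestamped key."""
    s3 = boto3.client("s3", region_name="us-west-2")
    key = f"db/{datetime.date.today().isoformat()}/app-db.dump"
    s3.upload_file(SOURCE_DUMP, BACKUP_BUCKET, key)
    print(f"Backup stored as s3://{BACKUP_BUCKET}/{key}")

if __name__ == "__main__":
    # Schedule this script with cron, e.g. "0 2 * * *" for a 2 a.m. run.
    run_backup()
```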

Access control is the third must-have protocol to implement before automating security on a larger scale. It is all too easy for unauthorized users to move through cloud environments since they are dispersed and untethered to physical devices. Effective access control automates the process of denying access to unauthorized users and accounts.

4. Implement Case-Specific Cloud Security Automation

Now that some basic cloud security measures are in place, you can automate more complex processes. At this stage, refer to the goals you established in the first step of the cloud security automation process. Use those aims to identify what you want to automate first, and focus on one or two new integrations at a time.

In this stage, your team will automate higher-risk, more complex security protocols beyond the basics. Each organization’s cloud security automation strategy will differ significantly depending on your unique risk factors and cloud environment.

For example, your team might use a lot of APIs in your workflows. APIs are great for getting different apps and services to work well together but can also be big security risks. Luckily, you can automate API security scans to verify that the tools your team is using are trustworthy. Workload security scans can also be automated.
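A basic automated API check might look something like the sketch below, which uses the requests library to confirm each endpoint is served over TLS and returns common security headers. The endpoint URLs and header list are illustrative; dedicated API scanners also test authentication, rate limiting, and injection flaws.

```python
import requests  # the endpoints below are placeholders for your own APIs

ENDPOINTS = [
    "https://api.example.com/v1/health",
    "https://billing.example.com/v2/status",
]

REQUIRED_HEADERS = ["Strict-Transport-Security", "X-Content-Type-Options"]

def scan_endpoint(url: str) -> list[str]:
    """Return a list of basic findings for one API endpoint."""
    findings = []
    if not url.startswith("https://"):
        findings.append("endpoint is not served over TLS")
    resp = requests.get(url, timeout=5)
    for header in REQUIRED_HEADERS:
        if header not in resp.headers:
            findings.append(f"missing security header: {header}")
    return findings

for endpoint in ENDPOINTS:
    issues = scan_endpoint(endpoint)
    print(f"{endpoint}: {'OK' if not issues else '; '.join(issues)}")
```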

Similarly, you can use MFA and 2FA to automate identity verification and strengthen your access control. Scripting is another excellent cloud security automation tool to try out. Scripting can automate repetitive security processes like configuration or server updates.

Certain circumstances may also warrant unique cloud security automation tactics. For example, if some of your team members work remotely, you face unique cloud security risks. Multi-factor authentication and automated security updates using scripting will be especially helpful in this situation.

What if you want to automate specific processes on some cloud applications but not others? In this case, you can separate your cloud environment into isolated segments. You don't need a private cloud to do this, either. You can use a hypervisor to create a virtual private server in any cloud environment, even shared public clouds.

A virtual private server allows you to customize the security protocols of different chunks of your cloud environment. In fact, segmenting your cloud resources can even improve cybersecurity. It prevents bad actors from gaining complete access to your cloud resources and limits the potential blast radius of a cyber attack.

5. Integrate Automated Threat Monitoring

Threat monitoring is a critical component of any cloud security automation strategy. Automating it is a high-stakes step, so it is best to implement automated threat monitoring on its own, without other distractions. When trusting an AI to keep an eye on your cloud environment, you must dedicate time and effort to ensuring you use a trustworthy algorithm.

Many organizations are diving into AI tools today, including cybersecurity algorithms. Running AI in the cloud allows you to use those tools without intensive on-prem computing resources. AI can be helpful for employees, customers, maintenance, security, and more, but it does come with some risks.

For example, poorly trained AI models can suffer from outdated data, compromised data, or even data bias. Researching an AI model and its developer carefully is crucial before investing in any AI security tools. Look for an algorithm trained on a large data set that gets updates regularly. Timely updates are vital for preventing zero-day attacks.

Schedule a pilot program once you identify an AI threat monitoring program that fits your cloud environment well. There are many ways to go about this. For instance, you could automate threat monitoring in one segment of your cloud environment and continue manual monitoring in others. Closely track and analyze the algorithm’s performance during this testing stage.

You can integrate AI into your cloud environment if it is more effective than manual monitoring. If the algorithm’s performance is disappointing, don’t be afraid to try out other AI threat monitoring tools. Take your time to find the model that gives your cloud resources the best protection possible.
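One simple way to track performance during the pilot is to score the algorithm's alerts against analyst verdicts. This rough sketch, using made-up pilot data, computes precision, recall, and false positive rate; how you weigh those numbers against the cost of manual monitoring is a judgment call for your team.

```python
def evaluate_pilot(alerts: list[bool], analyst_verdicts: list[bool]) -> dict:
    """Compare the model's alerts against analyst ground truth from the pilot."""
    tp = sum(a and v for a, v in zip(alerts, analyst_verdicts))
    fp = sum(a and not v for a, v in zip(alerts, analyst_verdicts))
    fn = sum((not a) and v for a, v in zip(alerts, analyst_verdicts))
    tn = sum((not a) and (not v) for a, v in zip(alerts, analyst_verdicts))
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Hypothetical pilot data: True = flagged/confirmed threat, False = benign.
model_alerts = [True, False, True, True, False, False, True, False]
analyst_truth = [True, False, False, True, False, False, True, False]
print(evaluate_pilot(model_alerts, analyst_truth))
```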

6. Track, Evaluate, and Adjust

Each time you integrate a new automated cloud security measure, carefully track and evaluate its performance. Ideally, automated tools will save time and catch more suspicious activity. If something is hurting the network or simply not practical, take time to adjust it or replace it with a different automated security tool.

Automating security in the cloud is an ongoing process. It requires regular check-up sessions to evaluate success and identify what needs updating. Remember — the cloud threat landscape is always changing. Some automation solutions may eventually go out of date or become obsolete. Carefully monitor security news and emerging threats, and analyze your automation strategy for ways to stay ahead of hackers.

Automating Security in the Cloud

As more and more operations, businesses, tools, and computing environments move to the cloud, building resilient cloud security is increasingly important. You can use these six steps to go from zero cloud security to a robust and flexible automated cloud security system. Continuous improvement is critical to adapting to emerging threats, so repeat this process periodically and closely monitor automated security performance.

Featured Image Credit: Photo by Ola Dapo; Pexels; Thank you!

Best UEBA Use Cases to Implement in Healthcare

Security is essential for all industries, but healthcare faces more pressure than most. Hospitals store vast amounts of highly sensitive information, making them ideal targets for cybercrime, so their defenses must be extensive. User and entity behavioral analytics (UEBA) are one of the most helpful tools in that endeavor.

The medical sector is no stranger to artificial intelligence, but most medical AI applications focus on patient care or administrative work. Applying it to cybersecurity in the form of UEBA is a crucial step forward.

What Is User and Entity Behavioral Analytics?

User and entity behavioral analytics use machine learning to detect threats like breached accounts or ransomware. While protections like multi-factor authentication try to prevent attacks, UEBA instead focuses on stopping threats that slip through the cracks before they can cause much damage.

UEBA analyzes how different users and entities — like routers or Internet of Things (IoT) devices — behave on a network. After establishing baselines for normal behavior, machine learning tools can detect suspicious activity. They may see an account trying to access a database it rarely needs or downloading something at an odd time and flag it as a potential breach.

This process is similar to how your bank may freeze your credit card if you make a few unusual purchases. However, it applies the concept to network behavior and uses AI to make it faster and more accurate.
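As a toy illustration of that baseline idea, the sketch below trains scikit-learn's IsolationForest on a handful of invented session features and flags a 3 a.m. bulk download as an outlier. Commercial UEBA platforms use far richer features, peer-group comparisons, and continuously updated models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [login hour, records accessed, MB downloaded]
baseline_sessions = np.array([
    [9, 12, 4], [10, 15, 5], [8, 9, 3], [14, 20, 6], [11, 14, 4],
    [13, 18, 5], [9, 11, 4], [15, 22, 7], [10, 13, 4], [12, 16, 5],
])

# Learn what "normal" looks like for this user or device.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline_sessions)

new_sessions = np.array([
    [10, 14, 5],     # typical behavior
    [3, 480, 900],   # 3 a.m. bulk download of hundreds of records
])
flags = model.predict(new_sessions)  # -1 = anomaly, 1 = normal
for session, flag in zip(new_sessions, flags):
    print(session, "ANOMALY" if flag == -1 else "normal")
```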

UEBA Benefits

UEBA use cases have many benefits spanning multiple applications. Here's a brief look at some of the most significant.

Accuracy

Behavioral analytics systems are highly accurate. Machine learning can pick up on trends and patterns in data humans may miss, so UEBA tools can outperform human analysts when determining what is and isn’t suspicious. When properly applied, UEBA can also yield false positive rates as low as 3%, ensuring security teams don’t waste their time or resources.

UEBA can achieve higher accuracies than rule-based monitoring systems because it’s adaptive. Machine learning algorithms continually gather new data and adjust their decision-making as trends shift. That way, they can account for nuances like users slowly adopting new habits or activities being normal in some situations but not others.

Efficiency

Another benefit of UEBA is it’s fast. Machine learning tools can detect and classify anomalies almost instantly when it may take a human a few minutes. Even if those time savings are just a few seconds, they can make a considerable difference when dealing with cyber threats.

UEBA tools can often detect suspicious behavior before an account or breached device causes any real damage. By identifying and isolating threats earlier, they can dramatically reduce the impact of an attack. IBM found reducing data breach response timelines saves organizations $1.12 million on average.

Versatility

UEBA is also versatile compared to similar security tools. Some organizations employ user behavior analytics (UBA), which provides similar benefits but only looks at user activity. By also including entities, UEBA expands its detection capabilities to IoT attacks and other hardware breaches, helping prevent a broader range of incidents.

Machine learning tools like UEBA are also more versatile than rule-based anomaly detection. AI models can adapt to changing situations and account for situational differences, which rule-based systems can’t. That flexibility is vital for healthcare organizations, as telehealth has grown 38 times over its pre-COVID levels, meaning more medical staff may access systems from changing locations.

UEBA Use Cases in Healthcare

These benefits are impressive, but how much medical companies experience them depends on how they apply this technology. In that spirit, here are the five best user and entity behavior analytics use cases in healthcare.

1. Automating Risk Management

Risk management automation is one of healthcare organizations' most beneficial UEBA use cases. IT monitoring is crucial in this industry, but many organizations lack the time or staff to manage it manually. Cybersecurity talent faces a skills gap across all sectors, and over 70% of medical workers say they already work more hours because of electronic health records (EHRs).

UEBA reduces that burden by handling network threat detection without manual input. Hospitals don’t need large security teams to monitor their systems 24/7 because AI will do it for them.

Because UEBA is so accurate and efficient, medical staff can use electronic systems with less friction. There will be fewer verification stops or interruptions caused by false positives, helping reduce the burden of EHRs. Those time savings improve both cybersecurity and patient care.

2. Detecting EHR Breaches

UEBA has many advantageous specific use cases under the automation umbrella, too. One of the most relevant for healthcare organizations is detecting and responding to breaches in EHR systems.

Electronic records make it far easier to manage patient data, but they also introduce significant security risks. There were over 700 health record breaches of 500 records or more in 2022 alone, with an average of almost two breaches daily. Given how common and severe this issue is, UEBA is an indispensable tool.

UEBA can recognize when an app or account is accessing an unusual amount of records or interacting with them atypically. It can then lock the user or entity in question before it can delete, download, or share these files, preventing a breach.
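Under the hood, the simplest version of this check compares today's access volume to a user's own historical baseline. The sketch below is a toy example with invented numbers; real UEBA systems weigh many signals at once rather than a single threshold.

```python
import statistics

# Hypothetical daily record-access counts for one clinician over recent weeks.
baseline_counts = [22, 18, 25, 20, 19, 23, 21, 24, 20, 22]

def is_suspicious(todays_count: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag access volumes far outside the user's own historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    z_score = (todays_count - mean) / stdev
    return z_score > threshold

for count in (26, 410):
    if is_suspicious(count, baseline_counts):
        print(f"{count} records accessed today: lock session and alert security")
    else:
        print(f"{count} records accessed today: within normal range")
```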

3. Stopping Ransomware Attacks

Ransomware prevention is another leading UEBA use case in healthcare. The rise of ransomware-as-a-service has made these attacks increasingly common, and the medical industry is a prime target.

Ransomware attacks against healthcare organizations have more than doubled between 2016 and 2021. Stopping these incidents early is critical to minimizing damage and protecting patients’ privacy. UEBA provides that speed.

Before ransomware can steal or lock any files, it must first access them. However, UEBA will notice an unknown program suddenly trying to access a large amount of data. It can then restrict access and isolate the file, account, or device from which the ransomware spreads before it can encrypt anything. That way, hospitals can stop ransomware before losing any sensitive information.

4. Preventing Insider Threats

UEBA is also a valuable tool for addressing insider threats, which are particularly prevalent in healthcare. In fact, insider error accounts for more than twice as many breached medical records as malicious activity. Because UEBA detects all anomalies — not just those from outsiders — it can help find and prevent these mistakes.

If a doctor, nurse or other staff member tried to access something they don’t usually need, UEBA would flag it as suspicious. If it were just an accident, this stoppage would bring the issue to the employee’s attention, letting them see and correct their mistake; if it were a malicious insider, UEBA would stop them from abusing their privileges.

UEBA can detect more than just unusual access activity too. It can also identify and stop actions like sharing credentials or attempts to send files to unauthorized users. That way, it can prevent employees from falling for phishing attempts, which account for most insider threats.

5. Securing IoT Endpoints

As IoT adoption in healthcare grows, IoT security becomes an increasingly advantageous UEBA use case. The IoT falls outside the scope of traditional user behavior analytics use cases because UBA systems don't account for devices, only people. By contrast, UEBA includes endpoints, so it can address IoT concerns.

Just as UEBA spots irregular behavior in user accounts, it can detect unusual connections or access attempts from IoT devices. Consequently, it can stop hackers from using a smart device with low built-in security as a gateway to more sensitive systems and data.

Stopping this lateral movement is crucial, as IoT devices typically have weak security, and hospitals use a lot of them. More than half of all medical IoT devices also feature critical known vulnerabilities, so improving IoT security is essential for the industry.

Behavioral Analytics Are a Must for Healthcare

These UEBA use cases scratch the surface of what this technology can do for medical organizations. As EHR adoption and cybercrime both rise, capitalizing on these applications will become all the more important.

The healthcare industry must take cybercrime seriously. User and entity behavioral analytics systems are some of the most effective tools for that goal.

Featured Image Credit: Provided by the Author; Pexels; Thank you!

Ethical Considerations in IoT Data Collection

Last year, a court determined Richard Dabate — who police had found with one arm and one leg zip-tied to a folding chair in his home — was guilty of his wife’s murder. His elaborate story of a home invasion might have held water had it not been for Connie Dabate’s Fitbit, which showed her moving around for an hour after the alleged intruder took her life.

Few would argue this was a case of unethical data collection, but ethics and privacy have a complicated, at times sordid history. Rising from the ashes of cases like the Henrietta Lacks cell line, in which a biologist cultured a patient's cancer cells without her knowledge or consent, a new era of privacy ethics is taking shape — and it has people questioning right from wrong.

What Is IoT?

The Internet of Things (IoT) is shorthand for the vast, interconnected network of smart devices that collect and store information online. Projected to be worth over $1 trillion by 2030, it includes appliances people use at home — like TVs, voice assistants, and security cameras — as well as infrastructure like smart streetlights and electric meters. Many businesses use IoT to analyze customer data and improve their operations.

Unethical Data Collection and Use

There’s no question that IoT data is helpful. People use it for everything from remotely turning off the AC to drafting blueprints for city streets, and it has enabled significant improvements in many industries. However, it can also lead to unethical data collection and applications.

For example, using a person’s demographic information without their consent or for purposes beyond marketing and product development can feel like a breach of trust. Data misuse includes the following violations.

1. Mishandling Data

Collecting and storing vast amounts of data brings ethics and privacy into question. Some 28% of companies have experienced a cyberattack due to their use of IoT infrastructure, and these breaches often expose people’s sensitive or confidential information.

The average data breach cost in 2022 was $4.35 million — and a loss of consumer trust. For example, hospital network hacks can reveal patients’ medical history, credit card numbers, and home addresses, leaving already-struggling people even more vulnerable to financial woes. The loss of privacy can make people wary about using a service again.

Mishandling data isn’t unique to IoT devices, of course — 40% of salespeople still use informal methods like email and spreadsheets to store customer info, and these areas are also targets for hackers. But IoT devices often collect data beyond what you’d find on a spreadsheet.

2. Collecting Highly Personal Info

Home IoT devices are privy to uniquely private data. Although 55% of consumers feel unseen by the brands they interact with, many people would be shocked at how much businesses actually know about them.

Some smartwatches use body temperature sensors to determine when a user is ovulating, guess their fertility levels, or predict their next period. Smart toothbrushes can reduce dental insurance rates for people who brush regularly and for the recommended two minutes.

In many cases, smart devices collect as much information as a doctor would, but without being bound by pesky HIPAA privacy laws. As long as users consent, companies are free to use the data for research and marketing purposes.

It’s an easy way to find out what customers really want. Like hidden trail cameras capturing snapshots of elusive animals, smart devices let businesses into the heart of the home without resorting to customer surveys or guesswork.

3. Not Following Consent and Privacy Ethics

It’s one thing to allow your Alexa speaker to record you when you say its name; most users know about this feature. However, few realize Amazon itself holds onto the recordings and uses them to train the algorithm. There have also been cases where an Amazon Echo secretly recorded a conversation and sent it to random people on the user’s contact list, provoking questions about unethical data collection and privacy ethics.

Getting explicit consent is crucial when collecting, analyzing, and profiting off of user data. Many companies bury their data use policies deep in a terms-and-conditions list they know users won’t read. Some use fine print many people struggle to make out.

Then, there’s the question of willing consent. If users have to sign up for a specific email service or social media account for work, do they really have a choice of whether to participate in data collection? Some of the most infamous cases of violating privacy ethics dealt with forced participation.

For example, U.S. prisoners volunteered to participate in studies that would help the war effort during World War II, but they could not fully consent because they were physically trapped in jail. They tested everything from malaria drugs to topical skin treatments. Some volunteered in exchange for cigarette money or to potentially shorten their sentences.

Even when users give explicit consent, most people now consider it unethical to collect data — medical or otherwise — by coercing people into providing it. Collecting data from people who are unaware they’re giving away sensitive information is also an ethics and privacy violation.

Characteristics of Ethical Data Use

How can data scientists, marketers, and IoT manufacturers keep users’ best interests in mind when collecting their data?

1. Ask for Permission

It’s crucial to always ask before using someone’s data — and ensure they heard you. IoT devices should come with detailed information about how the device will collect data, how often it will do so, how it will use the information, and why it needs it in the first place. These details should be printed in a clear, legible, large font and not be buried deep in a manual heavy enough to use as a paperweight.

2. Gather Just Enough

Before collecting information, decide if you really need it. How will it help advance your company’s objectives? What will you and your customers gain from it? Only gather data relevant to the problem at hand, and avoid collecting potentially sensitive information unless absolutely necessary.

For example, smart beds can track users’ heart rates, snoring, and movement patterns, but they can also collect data about a person’s race or gender. How many of these metrics are necessary for marketing and product development purposes?

3. Protect Privacy

After gathering data, keep it hidden. Strong cybersecurity measures like encryption and multi-factor authentication can hide sensitive data from prying eyes.

Another way to protect consumer privacy is to de-identify a data set. Removing all personally identifiable information from a data set and leaving just the numbers behind ensures that even if someone leaks the data, no one can connect it to real people.

4. Examine Outcomes

How might your data be used — intentionally or not — for other purposes? It’s important to consider who your data could benefit or harm if it leaves the confines of your business.

For example, if the data becomes part of an AI training set, what overall messages does it send? Does it contain any inherent biases against certain groups of people or reinforce negative stereotypes? Long after you gather data, you must continually track where it goes and its effects on the world at large.

Prioritizing Ethics and Privacy

Unethical data collection has a long history, and IoT plays a huge role in the continued debate about privacy ethics. IoT devices that occupy the most intimate of spaces — the smart coffee maker that knows you’re not a morning person, the quietly humming, ever-vigilant baby monitor — give the most pause when it comes to data collection, making people wonder if it’s all worth it.

Manufacturers of smart devices are responsible for protecting their customers’ privacy, but they also have strong incentives to collect as much useful data as possible, so IoT users should proceed with caution. It’s still a wild west for digital ethics and privacy laws. At the end of the day, only you can decide whether to unwind with a smart TV that might be watching you back — after all, to marketing companies, you are the most interesting content.


How Important Is Explainability in Cybersecurity AI?

Artificial intelligence is transforming many industries but few as dramatically as cybersecurity. It’s becoming increasingly clear that AI is the future of security as cybercrime has skyrocketed and skills gaps widen, but some challenges remain. One that’s seen increasing attention lately is the demand for explainability in AI.

Concerns around AI explainability have grown as AI tools and their shortcomings have spent more time in the spotlight. Does it matter as much in cybersecurity as in other applications? Here’s a closer look.

What Is Explainability in AI?

To know how explainability impacts cybersecurity, you must first understand why it matters in any context. Explainability is the biggest barrier to AI adoption in many industries, mainly for one reason — trust.

Many AI models today are black boxes, meaning you can’t see how they arrive at their decisions. By contrast, explainable AI (XAI) provides complete transparency into how the model processes and interprets data. When you use an XAI model, you can see its output and the string of reasoning that led it to those conclusions, establishing more trust in this decision-making.

To put it in a cybersecurity context, think of an automated network monitoring system. Imagine this model flags a login attempt as a potential breach. A conventional black box model would state that it believes the activity is suspicious but may not say why. XAI allows you to investigate further to see what specific actions made the AI categorize the incident as a breach, speeding up response time and potentially reducing costs.
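To make the contrast concrete, here is a toy sketch of a transparent login classifier whose per-feature contributions can be read off directly. The features and training data are invented, and production XAI tooling handles far more complex models, but the principle of tracing a score back to specific signals is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical login features: [failed attempts, new device?, off-hours?, new country?]
feature_names = ["failed_attempts", "new_device", "off_hours", "new_country"]
X_train = np.array([
    [0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [2, 0, 1, 0],   # benign logins
    [6, 1, 1, 1], [8, 1, 1, 0], [5, 0, 1, 1], [7, 1, 0, 1],   # confirmed breaches
])
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

suspicious_login = np.array([[6, 1, 1, 1]])
probability = model.predict_proba(suspicious_login)[0, 1]
contributions = model.coef_[0] * suspicious_login[0]  # per-feature pull toward "breach"

print(f"breach probability: {probability:.2f}")
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -p[1]):
    print(f"  {name}: contribution {value:+.2f}")
```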

Why Is Explainability Important for Cybersecurity?

The appeal of XAI is obvious in some use cases. Human resources departments must be able to explain AI decisions to ensure they’re free of bias, for example. However, some may argue that how a model arrives at security decisions doesn’t matter as long as it’s accurate. Here are a few reasons why that’s not necessarily the case.

1. Improving AI Accuracy

The most important reason for explainability in cybersecurity AI is that it boosts model accuracy. AI offers fast responses to potential threats, but security professionals must be able to trust it for these responses to be helpful. Not seeing why a model classifies incidents a certain way hinders that trust.

XAI improves security AI’s accuracy by reducing the risk of false positives. Security teams could see precisely why a model flagged something as a threat. If it was wrong, they can see why and adjust it as necessary to prevent similar errors.

Studies have shown that security XAI can achieve more than 95% accuracy while making the reasons behind misclassification more apparent. This lets you create a more reliable classification system, ensuring your security alerts are as accurate as possible.

2. More Informed Decision-Making

Explainability offers more insight, which is crucial in determining the next steps in cybersecurity. The best way to address a threat varies widely depending on myriad case-specific factors. You can learn more about why an AI model classified a threat a certain way, getting crucial context.

A black box AI may not offer much more than classification. XAI, by contrast, enables root cause analysis by letting you look into its decision-making process, revealing the ins and outs of the threat and how it manifested. You can then address it more effectively.

Just 6% of incident responses in the U.S. take less than two weeks. Considering how long these timelines can be, it’s best to learn as much as possible as soon as you can to minimize the damage. Context from XAI’s root cause analysis enables that.

3. Ongoing Improvements

Explainable AI is also important in cybersecurity because it enables ongoing improvements. Cybersecurity is dynamic. Criminals are always seeking new ways to get around defenses, so security trends must adapt in response. That can be difficult if you are unsure how your security AI detects threats.

Simply adapting to known threats isn’t enough, either. Roughly 40% of all zero-day exploits in the past decade happened in 2021. Attacks targeting unknown vulnerabilities are becoming increasingly common, so you must be able to find and address weaknesses in your system before cybercriminals do.

Explainability lets you do precisely that. Because you can see how XAI arrives at its decisions, you can find gaps or issues that may cause mistakes and address them to bolster your security. Similarly, you can look at trends in what led to various actions to identify new threats you should account for.

4. Regulatory Compliance

As cybersecurity regulations grow, the importance of explainability in security AI will grow alongside them. Privacy laws like the GDPR or HIPAA have extensive transparency requirements. Black box AI quickly becomes a legal liability if your organization falls under this jurisdiction.

Security AI likely has access to user data to identify suspicious activity. That means you must be able to prove how the model uses that information to stay compliant with privacy regulations. XAI offers that transparency, but black box AI doesn’t.

Currently, regulations like these only apply to some industries and locations, but that will likely change soon. The U.S. may lack federal data laws, but at least nine states have enacted their own comprehensive privacy legislation. Several more have at least introduced data protection bills. XAI is invaluable in light of these growing regulations.

5. Building Trust

If nothing else, cybersecurity AI should be explainable to build trust. Many companies struggle to gain consumer trust, and many people doubt AI’s trustworthiness. XAI helps assure your clients that your security AI is safe and ethical because you can pinpoint exactly how it arrives at its decisions.

The need for trust goes beyond consumers. Security teams must get buy-in from management and company stakeholders to deploy AI. Explainability lets them demonstrate how and why their AI solutions are effective, ethical, and safe, boosting their chances of approval.

Gaining approval helps deploy AI projects faster and increase their budgets. As a result, security professionals can capitalize on this technology to a greater extent than they could without explainability.

Challenges With XAI in Cybersecurity

Explainability is crucial for cybersecurity AI and will only become more so over time. However, building and deploying XAI carries some unique challenges. Organizations must recognize these to enable effective XAI rollouts.

Costs are one of explainable AI’s most significant obstacles. Supervised learning can be expensive in some situations because of its labeled data requirements. These expenses can limit some companies’ ability to justify security AI projects.

Similarly, some machine learning (ML) methods simply do not translate well to explanations that make sense to humans. Reinforcement learning is a rising ML method, with over 22% of enterprises that are adopting AI beginning to use it. Because reinforcement learning typically takes place over a long stretch of time, with the model free to make many interrelated decisions, it can be hard to gather every decision the model has made and translate it into an output humans can understand.

Finally, XAI models can be computationally intense. Not every business has the hardware necessary to support these more complex solutions, and scaling up may carry additional cost concerns. This complexity also makes building and training these models harder.

Steps to Use XAI in Security Effectively

Security teams should approach XAI carefully, considering these challenges and the importance of explainability in cybersecurity AI. One solution is to use a second AI model to explain the first. Tools like ChatGPT can explain code in human language, offering a way to tell users why a model is making certain choices.

This approach is helpful if security teams are already using black box tools. The slower alternative is to build transparent models from the beginning. These alternatives require more resources and development time but will produce better results. Many companies now offer off-the-shelf XAI tools to streamline development. Using adversarial networks to understand an AI’s training process can also help.

In either case, security teams must work closely with AI experts to ensure they understand their models. Development should be a cross-department, more collaborative process to ensure everyone who needs to can understand AI decisions. Businesses must make AI literacy training a priority for this shift to happen.

Cybersecurity AI Must Be Explainable

Explainable AI offers transparency, improved accuracy, and the potential for ongoing improvements, all crucial for cybersecurity. Explainability will become more critical as regulatory pressure and trust in AI become more significant issues.

XAI may heighten development challenges, but the benefits are worth it. Security teams that start working with AI experts to build explainable models from the ground up can unlock AI’s full potential.

Featured Image Credit: Photo by Ivan Samkov; Pexels; Thank you!

How to Use Automation to Reduce Your Attack Surface

Companies are diversifying their resources and data silos. Some enterprises move this information to cloud providers, while others swear by on-site hardware. Internet of Things (IoT)-connected devices and digital nomadism are expanding the number and type of devices attached to a business, so it’s no wonder hackers are finding more avenues to breach sensitive data stores. Organizations must reduce their attack surface area to stay protected.

Automation is an invaluable addition to a risk prevention and remediation strategy when reducing the attack surfaces in an organization. What are these tactics, and how can they relieve the burdens of stressed analysts?

What Is an Attack Surface in Cybersecurity?

Several buzzphrases float around to describe points of entry for cybercriminals. Attack surfaces encapsulate every pathway and vulnerability a threat actor could exploit. Experts refer to those individual pathways as attack vectors. The more attack vectors there are, the larger the attack surface is — expanding how much confidential and sensitive data is up for grabs by malicious individuals.

Every attack vector allows ransomware, phishing, or malware to creep in, compromising identities and infrastructure. These are some of the most common gateways businesses may not even acknowledge as entryways for criminals:

  • Weak or compromised credentials
  • Outdated software that requires patching
  • Utility connections
  • Remote desktop connections
  • Social engineering to produce insider threats
  • Email or text message inboxes
  • Third-party vendors and suppliers
  • IoT-connected devices and sensors
  • Security systems and cameras
  • Data centers

Attack surfaces take physical and digital forms, making protection methods diverse. These are only a few examples, but they shed light on how many forms an attack vector can take.

Overseeing every digital and physical corner to prevent threats would require more power than most companies can justify. Automation can handle countless mundane scans and tasks to aid workforces in defending each path, especially as attack surfaces are more varied than ever.

What Are the Best Ways to Minimize Attack Surfaces With Automation?

Reducing the attack surface can take many forms, but automation can make the most of time and financial investment in a few high-value ways.

1. Execute Scheduled Data Minimization

Data minimization and inventory management — digitally and physically — are the top recommendations in the cybersecurity landscape, especially as regulations become a hot topic for world governments. The EU’s General Data Protection Regulation (GDPR) and the proposed American Data Privacy and Protection Act (ADPPA) in the U.S. outline how corporations must rein in and be transparent about data collection and use.

The fewer data stores and programs that handle that information, the better. Instead of manually combing through countless bytes daily, automation could perform minimization practices on a schedule with proactive programming and secure code, such as:

  • Deleting ex-employee or outdated, irrelevant data
  • Performing automated data backups to segmented or isolated systems
  • Removing data that doesn’t include what’s necessary for operations
  • Limiting employee or customer input when gathering data via forms

However, a strategy like this could be a double-edged sword. Programmers and cybersecurity experts may schedule scripts to perform these tasks, but every additional program running expands the surface area. Experts must consolidate and optimize these scripts so the surface area remains minimal.
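As a small illustration, a scheduled minimization job might look like the sketch below, which purges records past a retention window from a database. The table name, retention period, and storage backend are placeholders; the point is that one audited script replaces daily manual combing.

```python
import sqlite3
from datetime import datetime, timedelta

RETENTION_DAYS = 365  # hypothetical policy: keep form submissions for one year

def purge_stale_records(conn: sqlite3.Connection) -> int:
    """Delete submissions older than the retention window; return rows removed."""
    cutoff = (datetime.now() - timedelta(days=RETENTION_DAYS)).isoformat()
    cursor = conn.execute(
        "DELETE FROM form_submissions WHERE submitted_at < ?", (cutoff,)
    )
    conn.commit()
    return cursor.rowcount

if __name__ == "__main__":
    # In-memory demo; point this at your real database and run it from a scheduler.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE form_submissions (email TEXT, submitted_at TEXT)")
    conn.executemany(
        "INSERT INTO form_submissions VALUES (?, ?)",
        [
            ("old@example.com", "2020-01-05T10:00:00"),
            ("new@example.com", datetime.now().isoformat()),
        ],
    )
    print(f"Removed {purge_stale_records(conn)} stale records")
```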

2. Leverage AI and Machine Learning Data

Incorporating AI into a cybersecurity strategy could save companies around $3.05 million per breach for a comparatively modest upfront investment. They must do more than purchase an AI system and hope for the best — it must integrate with an organization’s current technological ecosystem. Otherwise, it could present more attack vectors in the surface area than intended.

Using AI with appropriate tech could remove some drawbacks, including false positives. With well-curated oversight and data management, machine learning could adapt to productive learning environments over time.

AI and machine learning data can funnel into a centralized program to provide more holistic visibility into potential attack vectors. A localized view of the attack surface, backed by data showing what’s most threatening, can guide analysts to eliminate or update these pain points proactively rather than reacting after a breach.

Real-time data can also reveal trends over time, letting IT professionals see how attack vectors evolve as companies implement new tech or adopt digital strategies. It can show how many attempts hackers made against redundant legacy software versus cloud servers. It can gather historical data about vulnerabilities from misconfigurations or out-of-date software to change patching and update schedules. This automation will be invaluable for budget allocation and task prioritization.

3. Reduce Access With Zero Trust

Sometimes a tech stack has to be expansive to cover every service and task, and shrinking it could compromise efficiency or service availability. In those cases, automation can enforce zero trust instead, minimizing threat vectors by automatically denying unverified access and packet requests. It is still an impactful way to maximize security while keeping tech assets and building walls against attack vectors.

Automation can analyze requests based on the time of day and the habits of the credential holder. It could require multiple authentication checkpoints before allowing entry, even for someone who already has access. By questioning each request, it reduces the chance of hackers taking advantage of human error.

Combining this with the principle of least privilege gets the best of both automation worlds. Automation can assign access controls based on role responsibilities, and zero trust can analyze those assignments to determine safety. It can minimize the 79% of breaches that are identity-related, a share that will undoubtedly rise if automation doesn’t home in on authorization and access.
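A toy policy engine combining the two ideas might look like the sketch below: requests are denied by default, checked against least-privilege role assignments, and challenged based on MFA status and time of day. The roles, actions, and hours are invented examples, not a production ruleset.

```python
from dataclasses import dataclass

ROLE_PERMISSIONS = {          # least-privilege assignments (illustrative)
    "nurse": {"read_chart"},
    "billing": {"read_invoice", "write_invoice"},
    "admin": {"read_chart", "read_invoice", "write_invoice", "manage_users"},
}

@dataclass
class AccessRequest:
    role: str
    action: str
    hour: int          # 0-23, local time of the request
    mfa_verified: bool

def evaluate(request: AccessRequest) -> str:
    """Deny by default; every request must pass role, MFA, and time-of-day checks."""
    allowed = ROLE_PERMISSIONS.get(request.role, set())
    if request.action not in allowed:
        return "deny: action outside role's least-privilege set"
    if not request.mfa_verified:
        return "deny: step-up authentication required"
    if request.hour < 6 or request.hour > 22:
        return "review: off-hours request routed to an analyst"
    return "allow"

print(evaluate(AccessRequest("nurse", "read_chart", hour=14, mfa_verified=True)))
print(evaluate(AccessRequest("nurse", "manage_users", hour=14, mfa_verified=True)))
print(evaluate(AccessRequest("billing", "write_invoice", hour=3, mfa_verified=True)))
```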

4. Perform Vulnerability Scanning and Management

Many corporations undergo penetration testing, using internal or third-party services to actively try to break through their digital barriers. Hopefully, the testers don’t find any vulnerabilities. Either way, playing the role of an attacker can reveal mismanaged priorities or efforts.

Automated vulnerability scans do not provide the same depth as manual penetration testing, but they can fill the time between trials. They can highlight the most critical issues first so organizations know where to focus efforts between more human-driven defensive exercises. The scans can also perform asset discovery, revealing attack vectors companies never previously recognized and allowing them to close the hole or eliminate it from the equation altogether.
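At its simplest, an automated check of this kind can sweep known assets for risky legacy services, as in the sketch below. The host list and port set are placeholders, you should only scan systems you are authorized to test, and dedicated vulnerability scanners go far deeper than open-port checks.

```python
import socket

# Hypothetical asset inventory; in practice this list comes from asset discovery.
HOSTS = ["10.0.1.10", "10.0.1.22"]
RISKY_PORTS = {21: "FTP", 23: "Telnet", 3389: "RDP"}

def scan_host(host: str, timeout: float = 0.5) -> list[str]:
    """Report legacy or remote-access ports left open on a host."""
    findings = []
    for port, service in RISKY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the port accepted a connection
                findings.append(f"{service} open on port {port}")
    return findings

for host in HOSTS:
    issues = scan_host(host)
    print(host, issues if issues else "no risky ports detected")
```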

Recent research revealed these figures about attack surface discovery that vulnerability scans could assist with:

  • 72% of respondents claim executing attack surface discovery takes more than 40 hours
  • 62% say surfaces have expanded in the last several years
  • 56% don’t know which attack vectors are critical to the business, meaning there is little direction on what to protect

What if Companies Don’t Reduce Attack Surfaces?

What is an attack surface other than an opportunity? Increasing the number of attack vectors benefits nobody except for the offensive side of the digital battle. Therefore, defenders must minimize them to prevent the worst from happening.

It’s more complicated because humans have developed tech landscapes past what perimeter security can guard. Companies that don’t attempt to make their nebulous digital borders tangible will be misguided about how protected they are.

The price of cybersecurity breaches rises yearly, especially as businesses move to remote operations inspired by the pandemic. Massive media scandals from careless data breaches reflect the damage one unaddressed attack vector can do to a company. It potentially jeopardizes decades of enterprise-building and risks employees’ livelihoods.

A company that’s hacked could lose its reputation as publications spread the word about its inability to protect consumers, employees, or third-party relationships. Bad press equates to lost revenue, forcing public relations and marketing departments to work overtime to salvage a situation that automation could have done a reasonably accurate job of preventing.

Reduce Attack Surfaces to Eliminate Hackers’ Options

Minimize the attack surface in an organization’s tech stack with intelligently deployed automation tools. They can take many forms, either as external equipment or software, but it will always come back to how well programmers crafted the tools and how attentively analysts oversee them.

Automation can relieve stress and perform many tasks with high accuracy, but it must align with dedicated professionals who take care of these systems for optimization.

Featured Image Credit: Pexels; Thank you!

Can We Trust AI Decision-Making in Cybersecurity?

As technology advances and becomes a more integral part of the modern world, cybercriminals will learn new ways to exploit it. The cybersecurity sector must evolve faster. Could artificial intelligence (AI) be a solution for future security threats?

What is AI Decision-Making in Cybersecurity?

AI programs can make autonomous decisions and implement security measures around the clock. They analyze far more risk data at any given time than a human mind can. Networks or data storage systems under an AI program’s protection gain continually updated defenses that are always learning from responses to ongoing cyber-attacks.

People need cybersecurity experts to implement measures that protect their data or hardware against cyber criminals. Crimes like phishing and denial-of-service attacks happen all the time. While cybersecurity experts need to do things like sleep or study new cybercrime strategies to fight suspicious activity effectively, AI programs don’t have to do either.

Can People Trust AI in Cybersecurity?

Advancements in any field have pros and cons. AI protects user information day and night while automatically learning from cyber attacks happening elsewhere. There’s no room for human error that could cause someone to overlook an exposed network or compromised data.

However, AI software could be a risk in itself. Attacking the software is possible because it’s another part of a computer or network’s system. Human brains aren’t susceptible to malware in the same way.

Deciding if AI should become the leading cybersecurity effort of a network is a complicated decision. Evaluating the benefits and potential risks before choosing is the smartest way to handle a possible cybersecurity transition.

Benefits of AI in Cybersecurity

When people picture an AI program, they likely think of it positively. It’s already active in the everyday lives of global communities. AI programs are reducing safety risks in potentially dangerous workplaces so employees are safer while on the clock. AI also has machine learning (ML) capabilities that collect real-time data to recognize fraud before people can click links or open documents sent by cybercriminals.

AI decision-making in cybersecurity could be the way of the future. In addition to helping people in numerous industries, it can improve digital security in these significant ways.

It Monitors Around the Clock

Even the most skilled cybersecurity teams have to sleep occasionally. When they aren’t monitoring their networks, intrusions and vulnerabilities remain a threat. AI can analyze data continuously to recognize potential patterns that indicate an incoming cyber threat. Since global cyber attacks occur every 39 seconds, staying vigilant is crucial to securing data.

It Could Drastically Reduce Financial Loss

An AI program that monitors network, cloud, and application vulnerabilities would also prevent financial loss after a cyber attack. The latest data shows companies lose over $1 million per breach, given the rise of remote employment. Home networks stop internal IT teams from completely controlling a business’s cybersecurity. AI would reach those remote workers and provide an additional layer of security outside professional offices.

It Creates Biometric Validation Options

People accessing systems with AI capabilities can also opt to log into their accounts using biometric validation. Scanning someone’s face or fingerprint creates biometric login credentials instead of or in addition to traditional passwords and two-factor authentication.

Biometric data is also saved as encrypted numerical values instead of raw data. If cybercriminals hacked into those values, they’d be nearly impossible to reverse-engineer and use to access confidential information.

It’s Constantly Learning to Identify Threats

When human-powered IT security teams want to identify new cybersecurity threats, they must undergo training that could take days or weeks. AI programs learn about new dangers automatically. They’re always ready for system updates that inform them about the latest ways cybercriminals are trying to hack their technology.

Continually updating threat identification methods means network infrastructure and confidential data are safer than ever. There’s no room for human error due to knowledge gaps between training sessions.

It Eliminates Human Error

Someone can become the leading expert in their field but still be subject to human error. People get tired, procrastinate, and forget to take essential steps within their roles. When that happens with someone on an IT security team, it could result in an overlooked security task that leaves the network open to vulnerabilities.

AI doesn’t get tired or forget what it needs to do. It removes potential shortcomings due to human error, making cybersecurity processes more efficient. Lapses in security and network holes won’t remain a risk for long, if they happen at all.

Potential Concerns to Consider

As with any new technological development, AI still poses a few risks. It’s relatively new, so cybersecurity experts should remember these potential concerns when picturing a future of AI decision-making.

Effective AI Needs Updated Data Sets

AI also requires an updated data set to remain at peak performance. Without input from computers across a company’s entire network, it wouldn’t provide the security expected by the client. Sensitive information could remain more at risk of intrusions because the AI system doesn’t know it’s there.

Data sets also include the latest upgrades in cybersecurity resources. The AI system would need the newest malware profiles and anomaly detection capabilities to provide adequate protection consistently. Providing that information can be more work than an IT team can handle at one time.

IT team members would need training to gather and provide updated data sets to their newly installed AI security programs. Every step of upgrading to AI decision-making takes time and financial resources. Organizations lacking the ability to do both swiftly could become more vulnerable to attacks than before.

Algorithms Aren’t Always Transparent

Some older methods of cybersecurity protection are easier for IT professionals to take apart. They could easily access every layer of security measures for traditional systems, whereas AI programs are much more complex.

AI isn’t easy for people to take apart for minor data mining because it’s supposed to function independently. IT and cybersecurity professionals may see it as less transparent and more challenging to manipulate to a business’s advantage. It requires more trust in the automatic nature of the system, which can make people wary of using it for their most sensitive security needs.

AI Can Still Present False Positives

Machine learning (ML) algorithms are a core part of AI decision-making. People rely on them to identify security risks, but even computers aren't perfect. Because the algorithms depend on their data and the technology is still maturing, they can make anomaly detection mistakes.

When an AI security program detects an anomaly, it may alert security operations center experts so they can manually review and remove the issue. However, the program can also remove it automatically. Although that’s a benefit for real threats, it’s dangerous when the detection is a false positive.

The AI algorithm could remove data or network patches that aren't a threat, leaving the system more vulnerable to real security issues, especially if no watchful IT team is monitoring what the algorithm is doing.

If events like that happen regularly, the team could also become distracted. They’d have to devote attention to sorting through false positives and fixing what the algorithm accidentally disrupted. Cybercriminals would have an easier time bypassing both the team and the algorithm if this complication lasted long-term. In this scenario, updating the AI software or waiting for more advanced programming could be the best way to avoid false positives.
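
As a rough sketch of how a team can keep humans in the loop, the snippet below uses scikit-learn's IsolationForest, one common anomaly detection approach rather than the algorithm any particular AI security product uses, and routes flagged events to an analyst instead of removing anything automatically. The traffic numbers are invented for illustration.

  # Sketch: flag anomalies for human review rather than auto-removing them.
  from sklearn.ensemble import IsolationForest

  # Hypothetical feature rows: [requests_per_min, bytes_out_mb, failed_logins]
  baseline = [[60, 1.2, 0], [55, 1.0, 1], [62, 1.4, 0], [58, 1.1, 0], [61, 1.3, 1]]
  incoming = [[59, 1.2, 0], [400, 55.0, 30]]  # the second row looks suspicious

  model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

  for row, label in zip(incoming, model.predict(incoming)):
      if label == -1:  # -1 means the model considers this row anomalous
          # Queue for a security analyst instead of deleting anything outright,
          # since the detection may be a false positive.
          print("Review needed:", row)
      else:
          print("Looks normal:", row)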

Prepare for AI’s Decision-Making Potential

Artificial intelligence is already helping people secure sensitive information. If more people begin to trust AI decision-making in cybersecurity for broader uses, it could offer real protection against future attacks.

Understanding the risks and rewards of implementing the technology in new ways is essential. With that understanding, cybersecurity teams can put AI to work without opening their systems to new weaknesses.

Featured Image Credit: Photo by cottonbro studio; Pexels; Thank you!

The post Can We Trust AI Decision-Making in Cybersecurity? appeared first on ReadWrite.

Don’t Forget Hardware in IoT Security https://readwrite.com/dont-forget-hardware-in-iot-security/ Wed, 19 Apr 2023 15:00:36 +0000 https://readwrite.com/?p=223628 Hardware in IoT Security

Hardware in IoT Security

It’s easy to find cybersecurity software solutions for Internet of Things (IoT) devices to make your life effortless. However, many people forget about the hardware.

Software solutions can monitor and manage your IoT devices but don’t necessarily address the underlying problem — these items aren’t secure in their own right. That’s why you should look at ways to protect your technology from hardware attacks.

The Importance of Hardware Security for IoT

Hardware security in IoT devices is necessary to protect the data those devices collect from users. IoT gadgets have become increasingly common, and it's expected there will be 75 billion connected devices by 2025. This introduces a new set of security challenges.

These devices are often inexpensive, and their manufacturers do not always have the expertise to ensure they are secure, so they are increasingly vulnerable to attacks. Being connected to the internet also makes them ideal targets for hackers, and attackers who compromise one device can often pivot to other gadgets on the same network.

The consequences can devastate end users and businesses that rely on these devices for critical functions such as manufacturing or health care. A hacker could steal sensitive information or tamper with data without being detected by anyone who uses the system, so problems may not surface until it's too late.

Why Do You Need Hardware Alongside Software Security?

Hardware is necessary alongside software security because it provides a layer of protection that software alone cannot.

For starters, some software applications use standard systems and services that come with a device’s operating system — alongside other apps installed on top of the base OS. These can be vulnerable to attack. Problems often arise from how these programs interact with hardware components controlling access to data or other sensitive information.

Another reason hardware matters in IoT security is how easy the devices are to compromise. In fact, the number of gadgets at risk is so great that organizations can no longer rely on traditional software security solutions alone.

A 2019 security breach proves this point: a vulnerability in WhatsApp let hackers install spyware on users' devices, putting the personal information of its 1.5 billion users at risk.

Security breaches occur because many companies use off-the-shelf components for their products and lack the in-house expertise to design secure software for those parts. They might not see the need because they overlook how much damage just one compromised piece of hardware can do.

Types of Hardware Attacks on IoT

There are various attacks hackers use to compromise IoT devices. The most common ones are:

  • Side-channel attacks: This type of cyberattack exploits information a device leaks physically rather than flaws in its software. For example, attackers may analyze the electromagnetic radiation a device gives off or the timing of its operations to extract secrets and gain access.
  • Brute-force attacks: This trial-and-error method is used to access data by trying many passwords or PINs until the automated software guesses the right one.
  • Rowhammer attacks: This attack abuses DRAM by rapidly and repeatedly accessing ("hammering") a row of memory cells, inducing bit flips in adjacent rows that can corrupt data or let an attacker escalate privileges.
  • Fuzzing attacks: This involves sending random or malformed data to an IoT device until it crashes or fails to function properly (a minimal sketch follows this list).
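
To make the fuzzing idea concrete, here is a minimal sketch that throws random payloads at a device's network service and notes when it misbehaves. The target address, port, and payload sizes are hypothetical; real fuzzers such as AFL or boofuzz are far more sophisticated.

  # Minimal fuzzing sketch: send random data to a device service and log failures.
  import os
  import socket

  TARGET = ("192.168.1.50", 8080)  # hypothetical IoT device address and port

  for attempt in range(100):
      payload = os.urandom(64 + attempt)  # random bytes of varying length
      try:
          with socket.create_connection(TARGET, timeout=2) as sock:
              sock.sendall(payload)
              sock.recv(256)  # a hang or reset here hints at a crash worth a closer look
      except OSError as err:
          # Crashes and unexpected resets are exactly what a fuzzer looks for.
          print(f"Attempt {attempt}: device misbehaved ({err}); keep payload for replay")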

How to Improve Hardware Security in IoT Devices

Organizations should take the following hardware security measures to protect endpoint devices.

1. Remotely Update the Firmware

IoT devices are increasingly used in critical systems, from smart cars to medical equipment. These systems are becoming significantly more complex and often include hundreds of different components that must communicate with each other. As these systems become more intricate, it becomes harder for manufacturers to ensure all pieces are working correctly and that there are no security vulnerabilities.

Updating the firmware in these devices can enhance hardware security. However, this is usually done by sending new code over a network connection. If someone else can access that connection, they can send malicious code.

A properly secured remote update mechanism, on the other hand, protects against such attacks by ensuring only authorized parties can push new firmware to your system. This makes it much more challenging for hackers or unauthorized users to slip malicious code onto your devices.
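
One common way to enforce that authorization is to require firmware images to be cryptographically signed. Below is a minimal sketch, assuming an Ed25519 key pair where the private key stays on the manufacturer's build server and only the public key ships inside the device; the cryptography library is used for illustration, and real update systems add version checks, rollback protection, and secure key storage.

  # Sketch: a device accepts new firmware only if its signature checks out.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # Manufacturer side: the private key never leaves the build environment.
  private_key = Ed25519PrivateKey.generate()
  public_key = private_key.public_key()      # this half is baked into the device

  firmware = b"\x7fELF...new-firmware-image..."   # placeholder image bytes
  signature = private_key.sign(firmware)

  # Device side: verify before writing anything to flash.
  def apply_update(image: bytes, sig: bytes) -> bool:
      try:
          public_key.verify(sig, image)
      except InvalidSignature:
          print("Rejected: signature mismatch, refusing to flash")
          return False
      print("Signature valid, flashing firmware")
      return True

  apply_update(firmware, signature)                # accepted
  apply_update(firmware + b"tampered", signature)  # rejected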

2. Lock All Devices After Deployed Into Production

Locking IoT devices is a crucial step in improving hardware security. It's a simple concept, yet one many companies pay too little attention to when maintaining a device's protection.

IoT devices are vulnerable to attack once they're deployed into production. The longer a device stays connected, the more exposed it becomes to malicious activity. The only way to protect the gadget is to implement strict security measures, and they should be in place before it goes into use.

Locking down an IoT device involves restricting access by requiring users to enter an authentication code or password every time they want to log in. This eliminates any unwanted access attempts and keeps hackers at bay.

3. Use Tamper Pins to Implement Hardware-Based Authentication

Tamper pins are a simple but effective way to improve the security of IoT devices.

The IoT is one of the fastest-growing technology markets today, and demand for IoT hardware is skyrocketing. However, as with other technologies, this growth can lead to serious security breaches if you don't take proper precautions. Installing tamper pins on your devices is one way to stay safe in such an environment.

Certain hardware attacks require the attacker to physically open the device to reach debug ports or memory channels. Tamper pins enhance hardware security by detecting when someone attempts to break into the enclosure.

Once tampering is detected, the tamper pin instructs the processor to run a protective routine, such as wiping sensitive data from memory and rebooting the device.
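
As a rough illustration, here is what a tamper-pin handler might look like on a MicroPython-capable microcontroller. The GPIO number, key file path, and wipe routine are assumptions made for the sketch; production designs usually rely on dedicated tamper-detection hardware and battery-backed registers.

  # Sketch (MicroPython): wipe secrets and reboot when the tamper switch trips.
  import os
  import machine

  TAMPER_PIN = 4                 # hypothetical GPIO wired to the enclosure switch
  KEY_FILE = "/flash/keys.bin"   # hypothetical location of sensitive material

  def on_tamper(pin):
      # Overwrite and delete stored secrets, then restart into a locked state.
      try:
          size = os.stat(KEY_FILE)[6]
          with open(KEY_FILE, "wb") as f:
              f.write(b"\x00" * size)   # overwrite before deleting
          os.remove(KEY_FILE)
      except OSError:
          pass                          # nothing stored; still reboot defensively
      machine.reset()

  tamper = machine.Pin(TAMPER_PIN, machine.Pin.IN, machine.Pin.PULL_UP)
  tamper.irq(trigger=machine.Pin.IRQ_FALLING, handler=on_tamper)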

4. Use a Trusted Platform Module (TPM) Chip to Store Cryptographic Keys

A trusted platform module (TPM) chip in your IoT device can secure your data and keep it safe from hackers.

A TPM is a secure cryptoprocessor that runs independently of your computer’s or other devices’ main processor. It stores sensitive information, such as encryption keys, passwords, and digital certificates.

You can use TPMs in IoT devices to ensure they boot into a trusted state and that cryptographic secrets stay protected even if the operating system is compromised by malware. This prevents attackers from accessing sensitive data on your system without your knowledge.

The TPM chip is also used to protect cryptographic keys and passwords so unauthorized users cannot steal them.

5. Leverage a Secure Boot Process

Another way to improve IoT security is by leveraging a secure boot process. This ensures your device is running the correct operating system and that nothing has tampered with or compromised it. The process also helps protect the hardware against malicious modifications throughout its life cycle.

The secure boot process starts when you first turn on your device. At this point, the hardware checks itself for any signs of tampering. Then, it verifies the integrity of all software components within it. It also ensures firmware components are up-to-date and authentic.

You can implement a secure boot process in several ways. One method involves storing a master key within the device before shipping it to customers. The device then uses this key to verify that firmware and updates are legitimate before applying them.
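
Here is a simplified sketch of that verification step, using a table of known-good SHA-256 measurements for each boot component. Real secure boot chains rely on signed images and a hardware root of trust rather than a script, so treat the file names and hashes as placeholders for the concept.

  # Conceptual sketch: check each boot component against a known-good hash
  # before handing control to the next stage.
  import hashlib

  # Hypothetical measurements recorded when the device was manufactured.
  KNOWN_GOOD = {
      "bootloader.bin": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
      "kernel.img": "fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9",
  }

  def measure(path: str) -> str:
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  def verify_boot_chain() -> bool:
      for component, expected in KNOWN_GOOD.items():
          if measure(component) != expected:
              print(f"Halt: {component} does not match its known-good measurement")
              return False
      print("All components verified; continuing boot")
      return True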

Protect Your Endpoint Devices With Hardware Security

It is important to remember that hardware plays a central role in IoT security. Failing to consider the potential risks in your device may put your customers in danger, and the legal implications can be serious. The last thing you want is to suffer an attack you could have prevented.

However, if you keep these tips in mind and implement them properly, you will be able to ensure the safety of all users.

Featured Image Credit: Provided by the Author; Pexels; Thank you!

The post Don’t Forget Hardware in IoT Security appeared first on ReadWrite.

How Does a Ransomware Negotiation Work? https://readwrite.com/how-does-a-ransomware-negotiation-work/ Wed, 08 Feb 2023 18:04:24 +0000 https://readwrite.com/?p=220799 Ransomware Negotiation Work

Ransomware Negotiation Work

Criminals have always held people hostage to get what they want. In the modern digital world, they prefer stealing data to force consumers or corporations to pay top dollar for its return — and unfortunately, ransomware isn’t going away anytime soon. Often, victims need their data back; but without backups, their options typically dwindle to either paying the full price or negotiating.

This is how a ransomware negotiation works and everything you need to know to stay safe in the digital age.

What Is a Ransomware Attack?

Cryptoviral extortion doesn't always involve breaking into a business to steal computers, and it doesn't even require the theft of hard drives. Many ransomware criminals simply send malicious software (malware) in messages that appear to come from a trusted person or company; once opened, the malware locks up or steals data and demands payment for its return. (https://www.itproportal.com/features/the-four-most-popular-methods-hackers-use-to-spread-ransomware/)

What Do Ransomware Attacks Look Like?

When someone clicks a malicious link, attachment, or photo in a phishing email, the malware searches their computer for valuable, sensitive data. That can include information such as:

  • Passwords
  • Social Security numbers
  • Credit card numbers
  • Banking information
  • Phone numbers

Cybercriminals want this data to extort victims. They know people need that information to pay bills, keep the lights on, and access food, so they present a short-term deadline to pay a ransom and get the data back.

If people don’t pay the amount requested, the ransomware attackers may steal money from the victim’s bank accounts and publish private data so others can do the same.

How to Protect Against Ransomware Attacks

There are a few ways people can keep ransomware attacks from happening to themselves, their loved ones, or their co-workers. Use these tips to keep your data safe.

1. Use Strong Passwords

A study found that 80% of hacking-related breaches happen because people use weak passwords or reuse the same ones across multiple accounts. Your preferred passwords may be too short and uncomplicated to protect your sensitive data adequately.

Experts recommend that anyone with a digital presence use 16-character passwords that mix letters and numbers with special characters like exclamation marks or ampersands. You can also use an encrypted password manager to store your complicated passwords and autofill them when you need to log into websites.
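
For example, Python's secrets module can generate the kind of 16-character password described above. This is a quick sketch, not a substitute for a password manager, and the symbol set is an arbitrary choice.

  # Sketch: generate a 16-character password with letters, digits, and symbols.
  import secrets
  import string

  SYMBOLS = "!&#$%@?"
  ALPHABET = string.ascii_letters + string.digits + SYMBOLS

  def generate_password(length: int = 16) -> str:
      while True:
          candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
          # Keep regenerating until every character class is represented.
          if (any(c.islower() for c in candidate)
                  and any(c.isupper() for c in candidate)
                  and any(c.isdigit() for c in candidate)
                  and any(c in SYMBOLS for c in candidate)):
              return candidate

  print(generate_password())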

2. Attend Phishing Training Classes

Every workplace should hold annual training classes to teach everyone how to spot and avoid phishing scams. Whether the training is in person or online, don't miss the valuable education.

If your workplace doesn’t currently have phishing training, speak with your manager or the business owner about starting it. The latest research shows that this type of training reduces clicking on phishing links by nearly half, from a 47.5% click-through rate to a 24.5% rate.

3. Talk About Cybersecurity Automation

Automated cybersecurity is another layer of protection between people and cybercriminals. Talk with your boss or other company leaders about investing in a program, or make the investment directly if you're the business owner.

Automated cybersecurity provides multiple benefits, including automated testing of and responses to potential ransomware links before any employee can click on them. It also immediately alerts designated users if ransomware is activated during an attack.

4. Only Open Verified Emails

It's always a good idea to only open emails from people you know personally. Check each sender's address to ensure it isn't a copycat address or a spam sender pushing high-risk content.

You can also check with the person who supposedly sent the email to verify they really sent the link or attachment. It only takes a moment to determine if something is safe to open, and the extra effort will keep you or your company from paying the average $1.4 million ransom (sophos.com) to get your sensitive data back.
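
As a rough illustration of that address check, the snippet below compares a sender's domain against a small allowlist and flags near-misses that differ by only a character or two, a classic copycat trick. The trusted domains are placeholders, and real mail filters use far richer signals.

  # Sketch: flag sender domains that almost, but not quite, match trusted ones.
  from difflib import SequenceMatcher

  TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}  # placeholder allowlist

  def check_sender(address: str) -> str:
      domain = address.rsplit("@", 1)[-1].lower()
      if domain in TRUSTED_DOMAINS:
          return "trusted"
      for known in TRUSTED_DOMAINS:
          # A very similar, but not identical, domain is a classic copycat sign.
          if SequenceMatcher(None, domain, known).ratio() > 0.85:
              return f"suspicious lookalike of {known}"
      return "unknown sender"

  print(check_sender("invoices@examp1e.com"))  # suspicious lookalike of example.com
  print(check_sender("hr@example.com"))        # trusted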

5. Install Anti-Malware Software

Anti-malware software is easy to install and works behind the scenes while you spend time online. It automatically tests each link, attachment, and downloaded content before you can click on anything. Your chosen software may also remove any suspected malware so you can’t accidentally open it in the months or years ahead.

Should Attack Victims Engage in Ransomware Negotiation?

The U.S. Federal Bureau of Investigation (FBI) recommends that anyone involved in a ransomware attack submit an online tip or call their local field office for assistance. It's best to get advice from people professionally trained to handle this type of situation, since it may save you from paying anything at all.

Most of the time, law enforcement recommends that victims avoid paying the fee for their data. It only teaches the hackers that you’re willing to hand your money over, so they’ll likely return.

There's also a real chance they'll take your money and never return your sensitive information. A 2021 report found that only 4% of ransomware victims who paid the fee actually got all their data back.

How a Ransomware Negotiation Works

When ransomware hits, an incident response team or trained professional will verify how the attacker got your information, kick them off your network, and assess whether the attacker is credible, meaning whether they actually hold the data and are likely to return it. They'll also contact law enforcement for additional response guidance.

It's also in your best interest to contact any insurance provider that holds a digital security policy with you and ask for approval to retain legal counsel and, potentially, to pay the ransom.

Attackers usually require that victims use a specified communication channel for all conversations. People must then decide if it will cost more to keep their network down and allow law enforcement to track the cybercriminals or if they need to get back up immediately.

The second option is often what seems best for organizations like hospitals that need their software to treat emergency cases or surgical patients.

Tips to Negotiate a Ransomware Attack

If you believe you should engage in a ransomware negotiation with the attackers, use these tips to make the experience as seamless as possible.

1. Contact the FBI

Always follow the recommendations of law enforcement from the start of a ransomware attack. Filing a tip or calling your local FBI field office will connect you with experts who have handled similar situations. You’ll get the best results and legal advice if you don’t manage the problem alone.

2. Find Out What the Hackers Stole

Through their preferred chat system, the attackers should tell you exactly what information they stole and how much of it they have. They'll name a price for the data and may decrypt a file or two as proof of what they hold.

3. Look for Backups

Individuals and businesses should back up their data regularly to protect against loss. If a ransomware attack occurs or someone breaks their computer, you can restore your data from the latest backup and take control of the situation without losing money.

Even if your business has regularly scheduled backups, be sure to monitor them continuously. Many businesses think they’ve backed up their crucial data, but an average of 10-15% of that data is never backed up due to preventable errors.
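
Here is a small sketch of what that monitoring can look like in practice: walk the source directory, note which files are missing from the backup or older there than the original, and report the share left unprotected. The paths are assumptions for the example.

  # Sketch: report files missing from the backup or stale compared to the source.
  import os

  SOURCE = "/data/critical"        # hypothetical directory that must be protected
  BACKUP = "/mnt/backup/critical"  # hypothetical backup destination

  def backup_gaps(source: str, backup: str) -> float:
      total, unprotected = 0, 0
      for root, _dirs, files in os.walk(source):
          for name in files:
              total += 1
              src_path = os.path.join(root, name)
              dst_path = os.path.join(backup, os.path.relpath(src_path, source))
              if not os.path.exists(dst_path) or os.path.getmtime(dst_path) < os.path.getmtime(src_path):
                  unprotected += 1
                  print("Not safely backed up:", src_path)
      return 100 * unprotected / total if total else 0.0

  print(f"{backup_gaps(SOURCE, BACKUP):.1f}% of critical files lack a current backup")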

4. Weigh Your Options

You’ll have to weigh your other options if you don’t have a data backup. Companies under immense time pressure — like hospitals that need to access digitally locked medications or businesses that provide essential services like natural gas distribution — may be unable to wait through negotiations.

Say you need to pay the ransom. The attackers may work with you if you provide proof that you don't have enough money. Many ransomware hackers will lower their original demand because they'll take any payment over none at all.

5. Find a Data Recovery Service

Many data recovery services can save some, if not all, of your lost information without paying hackers. The fee may be extensive, but it could be less than the cybercriminals are demanding. Look into your options and get quotes before giving thieves any money.

Learn More About Ransomware Negotiation

It's much easier to take preventive steps after learning how a ransomware negotiation works. Invest in anti-malware software, upgrade your passwords, and look into insurance policies. They'll minimize your risk and keep your information safe.

Featured Image Credit: Provided by the Author; Pexels; Thank you!

The post How Does a Ransomware Negotiation Work? appeared first on ReadWrite.
