News



March 2024

Spear Phishing and Employer Branding: Creating a Safe and Aware Work Environment

In today's rapidly changing digital world, protecting against spear phishing not only safeguards a company's confidential information but also strengthens its employer branding. Creating a secure and conscious work environment translates into trust and loyalty of employees, as well as the perception of the company as a responsible and safe employer.

How Does Spear Phishing Impact Employer Branding?
1. Employee Trust: Effective protection against cyber threats shows employees that you care about their safety and privacy.
2. Company Reputation: A responsible approach to cybersecurity positively influences the company's image, which is important when recruiting new talent.
3. Culture of Cybersecurity Awareness: Building a culture that promotes digital security strengthens the internal sense of community and responsibility.

Strategies for Combining Spear Phishing Protection with Employer Branding:
1. Communication and Education: Regular training and workshops on cybersecurity not only increase employees' awareness but also build trust and show that the company invests in their development.
2. Openness and Transparency: Clearly communicate the steps the company takes to protect against cyberattacks. This builds trust and shows that the company is proactive in risk management.
3. Engaging Employees in Security Processes: Encourage employees to actively participate in creating a safe work environment. This can include reward programs for reporting suspicious activities.
4. The Importance of Good Leadership: The company's leadership should model cybersecurity practices, which strengthens the corporate culture.
5. Building a Strong Employer Brand: Regularly communicate how your cybersecurity efforts contribute to creating a safe workplace. Use these stories in external and internal communication.

Conclusion:
Managing the risk of spear phishing in a company is crucial not only for data protection but also for building a strong employer brand. Creating a culture based on safety, awareness, and trust is invaluable in building a positive image of the company in the eyes of both current and future employees.

19 tips on AI for OSINT research

If you're looking for guidance and tools to improve your use of AI in open-source intelligence investigations, look no further! This is a great roundup if you're just starting with AI or want to expand its use in OSINT.

AI technology has radically changed how organizations and individuals use the internet. Since the release of ChatGPT, AI tools have witnessed a boost among internet users and companies alike. In the OSINT area, researchers have begun to use AI technologies to speed up the intelligence cycle.

In this article, I will discuss how AI tools can aid OSINT researchers in enhancing their capabilities during OSINT gathering activities. The article will mention use cases and give links to AI-powered tools that can help OSINT gatherers during their search. However, before we start, it is worth noting the main benefits of using AI in OSINT investigations.

How can AI technology assist OSINT gatherers in their work?

There are numerous cases in which AI-powered tools can be leveraged in OSINT. Before I list some tools, let me introduce the various use cases in which OSINT researchers can benefit from AI-powered tools in their gathering activities:

Web scraping activities

AI techniques can be used to harvest large volumes of online data from various sources, such as social media platforms, blogs, discussion forums and deep web databases. The collected information can be used later in different scenarios, such as tracking entities over time or monitoring public behavior. Machine learning models can be trained to extract specific types of information automatically, such as people's comments and replies on social media.
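As a minimal sketch of this kind of collection step, assuming the widely used requests and BeautifulSoup libraries (the URL and the CSS selector below are hypothetical placeholders, not a real source):

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical forum page; only scrape sources you are allowed to collect from
    URL = "https://forum.example.com/topic/123"

    response = requests.get(URL, timeout=30)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")

    # Assume each post body sits in a <div class="post-body"> element (site-specific)
    posts = [div.get_text(strip=True) for div in soup.select("div.post-body")]

    # The harvested posts can later be passed to ML models for entity extraction or sentiment analysis
    for post in posts:
        print(post)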

Pattern recognition

Machine learning (ML) techniques can identify entities within large data sets, for example by analyzing a large number of files and finding relationships that link different entities. Those entities could be an individual's name, a company name, an address, an email, a phone number or any other piece of data.
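A minimal illustration of such entity extraction, assuming the spaCy library with its small English model installed (python -m spacy download en_core_web_sm); the sample text is invented:

    import spacy

    # Load a small pre-trained English pipeline that includes a named-entity recognizer
    nlp = spacy.load("en_core_web_sm")

    text = (
        "John Smith, CFO of Example Corp, met Anna Kowalska in Berlin "
        "on 12 March to discuss the Acme Ltd acquisition."
    )

    doc = nlp(text)

    # Print every entity the model recognizes together with its label (PERSON, ORG, GPE, DATE, ...)
    for ent in doc.ents:
        print(ent.text, ent.label_)

Running the same extraction over many files and grouping documents that mention the same entities is one simple way to surface the linking relationships described above.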

Content summary

Natural language processing algorithms can be used to summarize large amounts of data. OSINT gatherers can use the summarization feature to analyze and extract information from datasets; for example, we can ask an AI summarization tool to extract all company names mentioned in a set of PDF files running to hundreds of pages. In the same way, we can train ML models to summarize large amounts of text or social media posts.
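A minimal sketch of such a summarization step, assuming the Hugging Face transformers library; the input file and length limits are placeholders, and the default model is downloaded on first use:

    from transformers import pipeline

    # General-purpose summarization pipeline (uses a default pre-trained model)
    summarizer = pipeline("summarization")

    # Hypothetical text extracted beforehand from a PDF or a batch of social media posts
    with open("report.txt", encoding="utf-8") as f:
        long_text = f.read()

    # Truncate the input roughly to the model's context size and summarize
    summary = summarizer(long_text[:3000], max_length=120, min_length=30, do_sample=False)
    print(summary[0]["summary_text"])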

Sentiment analysis

A significant benefit of AI technology is its ability to interpret human emotions from text (e.g., social media posts). OSINT researchers may not be able to accurately identify the emotional state of a particular user merely by reading what they write online (such as their social media posts and comments). Another aspect is knowing the prevalent attitude of a group of people or of society at large. For example, we can monitor internet users' comments on social media platforms and customer reviews on online marketplaces about a specific brand or product to predict their future buying behavior. The same approach can be applied at a larger scale (e.g., all people within a country) to understand opinion about political, economic or sports matters.
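A minimal sketch of such a sentiment check, again assuming the transformers library; the comments are invented examples:

    from transformers import pipeline

    # Default sentiment-analysis pipeline (pre-trained model downloaded on first use)
    classifier = pipeline("sentiment-analysis")

    # Hypothetical harvested comments about a brand or product
    comments = [
        "The new phone is fantastic, best purchase this year!",
        "Terrible battery life, I regret buying it.",
    ]

    # Each result contains a label (POSITIVE/NEGATIVE) and a confidence score
    for comment, result in zip(comments, classifier(comments)):
        print(result["label"], round(result["score"], 2), comment)

Aggregating these labels over thousands of posts gives the kind of group-level attitude estimate described above.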

Image recognition

Computer vision is a subfield of AI that interprets the information in digital media files, such as images and video content. In the OSINT arena, computer vision can provide unparalleled power when conducting OSINT investigations that involve analyzing digital media, such as:

  • Face recognition: Identify human faces using AI and track a person's activities across different media, such as street surveillance camera footage.
  • Metadata analysis: Almost all digital files carry some metadata. Manually extracting metadata from harvested digital files can be daunting; an AI solution can simplify this task considerably (a minimal sketch follows this list).
  • Reverse image search: AI tools can speed up reverse image search significantly. They can also detect pictures produced using deepfake technology.
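As a small illustration of the metadata point above, EXIF fields can be read with the Pillow library (the file name is a placeholder); an AI layer would then classify or cross-reference fields such as GPS coordinates and camera model at scale:

    from PIL import Image, ExifTags

    # Hypothetical harvested image
    img = Image.open("photo.jpg")

    # Read the EXIF block, if present, and map numeric tag IDs to readable names
    exif = img.getexif()
    for tag_id, value in exif.items():
        tag_name = ExifTags.TAGS.get(tag_id, tag_id)
        print(tag_name, value)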

AI detection

AI technology is not used only for good; threat actors have begun to utilize AI tools to facilitate their criminal activities. In turn, AI technology can be used to detect content, such as images, videos and text, produced by AI tools.
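As one hedged illustration, a text classifier can be run over harvested content to flag passages that look machine-generated; the model name below is an assumption about a publicly available detector on the Hugging Face hub, not an endorsement, and such detectors are far from reliable:

    from transformers import pipeline

    # Assumed public detector model; swap in whichever detector you trust
    detector = pipeline(
        "text-classification",
        model="openai-community/roberta-base-openai-detector",
    )

    sample = "This is a short passage whose origin (human or AI) we want to estimate."
    result = detector(sample)[0]
    print(result["label"], round(result["score"], 2))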

The advantages of leveraging AI in OSINT tasks seem apparent; however, we must still consider the ethical, security and privacy aspects of using AI-powered tools to investigate and process sensitive data. This article will not cover the legal and privacy aspects; however, AI-powered tools should be used with caution when conducting online investigations, for two main reasons:

  • Avoid revealing the investigation or the investigator's identity.
  • Avoid exposing sensitive information to the AI tools. For example, an investigator could upload a confidential file to ChatGPT to get a summary. ChatGPT is not fully private and may retain or expose the files and text prompts that investigators submit.

Now that we have summarized the key areas where AI can be used in OSINT investigations, it's time to mention some AI-powered tools that can help OSINT investigators in their research.

Searching the internet

AI-powered tools can speed up searching internet resources and return more accurate results. Some tools can be used to search within specific niches.

Tapesearch

Tapesearch is a service that searches within podcast transcripts. At the time of writing, the tool could search within 915,468 AI-generated transcriptions from 4,670 podcasts. The tool promises to add new transcripts every day.

DorkGPT

DorkGPT is a convenient tool for creating Google Dorks queries. All you need to do is provide your proposed search query using plain English, and the service will create the equivalent Google dork (see Figure 1).
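For illustration only (this is a hypothetical example, not output reproduced from the service), a plain-English request such as "find PDF reports about phishing published on government sites" might be translated into a dork along the lines of:

    phishing report filetype:pdf site:gov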
 https://authentic8.com/blog/19-tips-ai-for-osint

Announcing Nowacomm's Official Partnership with Authentic8: Advancing Digital Security in the European Union and Ukraine

We are extremely pleased to announce that Nowacomm has become the official partner of the American company Authentic8, founded by former executives of Postini, which was acquired by Google in 2007. As a partner of Authentic8, Nowacomm will be responsible for distributing their innovative products in the European Union and in Ukraine.

Authentic8 leads in creating a secure digital environment, offering comprehensive solutions that go beyond standard IT tools. In an era where the web browser is a key tool in business processes, Authentic8 provides security and IT control.

Their patented platform enables not only data isolation but also transfers key business operations to a secure cloud environment. We look forward to a collaboration that will allow companies in our region to safely use the internet, in accordance with the highest industry standards.

This is a milestone for Nowacomm and our clients, striving for a higher level of digital security.

Building Trust in AI: The Importance of Transparency and Accountability

Trust in AI systems is crucial, but their increasing complexity often leads to a lack of transparency and understanding, creating doubts and skepticism among stakeholders.
According to ISACA's 2023 Generative AI Survey, only 10% of organizations have implemented formal policies for generative AI, highlighting the existing hesitation and lack of trust in AI adoption.
ISACA's white paper, "The Promise and Peril of the AI Revolution: Managing Risk," offers a comprehensive framework to address these concerns and establish trust in AI systems.
The framework emphasizes the need for transparency, accountability, and ethics in AI, urging organizations to open the "black box" and provide mechanisms for understanding AI decision-making processes.
By implementing transparent AI systems, organizations can enable stakeholders to scrutinize and understand the outputs, instilling confidence and building trust in AI-driven decisions.
Additionally, holding individuals and organizations accountable for the outcomes of AI systems and defining clear governance mechanisms will further enhance trust and reliability in AI.
#TrustInAI #TransparencyMatters #AIethics #BuildingTrust #AIaccountability #ManagingRisk

https://www.infosecurity-magazine.com/blogs/building-digital-trust-ai-risk/

The Most Secure Messaging Apps in 2024

Secure messaging apps stop snoops from peeking at your conversations or stealing your personal data. The best messaging apps use encryption to ensure your chats are truly private. Check out our roundup of the most secure messaging apps on the market. And get a VPN to encrypt all your web traffic, no matter which messaging app you use. 

https://www.avast.com/c-most-secure-messaging-apps

Who you gonna believe? Me or your own lying AIs?

Jonathan Kolber, the author of "A Celebration Society: Solving the Coming Automation Crisis", aptly noted that "Deepfake will completely obscure the difference between facts and falsehood, a distinction that few citizens are able to make even now." This statement not only reflects the current state of technology but also poses a challenge that will become increasingly significant in our society.


 Oct 24, 2023

Fake accounts on social media platforms

According to Facebook data, about 10% of accounts on this platform are fake. This means that roughly one in ten accounts we follow may not be what it claims to be. How do you recognize a fake account? Here are the most common warning signs:

  1. Unusual biography: Fake accounts may have descriptions and bios copied from other places and stitched together, which often leads to language errors. Also pay attention to typos, excessive use of emojis and strange, artificial language that may indicate a bot.
  2. Scammers spinning intrigues: Scammers may use fake profiles on social media, much like on dating sites, to establish a romantic relationship online with the victim and, in the next step, ask them to send money. It is worth verifying their profile pictures with an image search engine and ceasing conversation as soon as suspicious behavior is noticed.
  3. Lack of proportion between "followers" and "following": This is a red flag, especially on Instagram, where spam accounts automatically follow hundreds or thousands of users but have few followers of their own (a short heuristic sketch follows this list).
  4. Using profile pictures of close acquaintances: One mechanism commonly used by scammers is cloning the accounts of our close acquaintances or family. They then send a fake message, e.g. about financial troubles, with a request for money. Unfortunately, many users still fall for this type of scam. Always verify an account that contacts us with an unusual request, and confirm via another channel, e.g. by phone, whether the acquaintance really sent the message.
  5. Spam in direct messages: Scammers often use private messages to send fake offers, encouraging users to forward such spam or click on malicious links. Fake accounts are used to spread these messages, and their topics vary, from cryptocurrency investment scams to fake sales offers.
  6. Lack of a verification mark: Instagram, Facebook, TikTok and X (Twitter) have marks that identify the authenticity of accounts of companies or public figures. If an account supposedly belongs to a well-known organization or person but lacks such a mark, it is likely a scam.
  7. Frequency of activity: Fake accounts often post a lot of content at once, often with similar or identical captions, and then go silent, or they might not post at all. Check the amount, quality and frequency of posts and verify whether they look natural for the people who contact us.
  8. Offers of free gifts: Beware of accounts that offer gifts and/or cash, for example in exchange for completing a survey. Such fake profiles often impersonate well-known brands and are created mainly to steal users' personal data on social media.
  9. Heavily discounted products: Fake accounts may also promote luxury items at heavy discounts. Remember, if something seems too good to be true, it usually is.
  10. Random comments: If an account leaves comments under your posts that are unrelated to their content, it is most likely fake as well.
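A few of the signals above, such as the follower/following proportion and posting bursts, can be checked programmatically. The sketch below is only an illustrative heuristic with made-up thresholds, not a reliable detector:

    def fake_account_signals(followers: int, following: int, posts_last_week: int) -> list[str]:
        """Return heuristic warning signs for a social media account."""
        signals = []

        # Account follows far more people than follow it back
        if following > 0 and followers / following < 0.1:
            signals.append("suspicious follower/following proportion")

        # Burst of activity or no activity at all
        if posts_last_week == 0 or posts_last_week > 50:
            signals.append("unnatural posting frequency")

        return signals

    # Example: a profile following thousands of accounts with almost no followers
    print(fake_account_signals(followers=12, following=4300, posts_last_week=0))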

 


Oct 15, 2023

Median Human

Sam Altman, by using the term "median human," might be suggesting that future AGI (Artificial General Intelligence) is meant to replace "mediocrity" among us. But can the ubiquity of technology really diminish our human value and replace our skills?

https://futurism.com/sam-altman-replace-normal-people-ai


Sept 29, 2023

Election manipulations by Google?

 

Another voice, and more research, on manipulation by tech giants points to alarming practices. Dr. Robert Epstein, a Harvard-trained psychologist, compares Google in his studies to a dictator with unprecedented power, using manipulation techniques never before seen in human history.

His research reveals that Google uses ephemeral manipulation tools, which are hard to detect and trace. The Search Engine Manipulation Effect (SEME) is just one of many subtle tools that Epstein's team has uncovered.

SEME is a research model that describes how consumer and electoral preferences shift under the influence of internet search engines. Unlike search engine optimization, where websites strive to improve their ranking in search results, SEME focuses on the potential actions of the companies that own the search engines, such as Google. In 2020, according to Epstein, Google could have swayed at least 6 million votes without people's knowledge.

In 2016, their algorithm could have influenced 2.6 million to 10.2 million votes in favor of Hillary Clinton. Despite his left-wing political beliefs, Epstein emphasizes that he cannot support such actions by Google. His research sheds new light on the power and influence of tech giants on our society.

https://www.lifesitenews.com/opinion/google-manipulates-search-results-to-censor-information-and-swing-elections/


Sep 21, 2023

Social Media - Report and Resources

[Toolkit] 6 resources every social media manager needs in 2024

https://sproutsocial.com/


Sep 20, 2023

Google's recent shifts have deeper implications for your marketing.

Here are some revelations:

1) Google is pulling back on the "HowTo" and "FAQ" rich results we see in searches. It might look like a small step back, but it's really a big hint about where things are going.

While many might see this as a setback, at NOWACOMM we see it as an opportunity in disguise.

2) Have you heard about Google upping their ad prices quietly?
It's more than just a price tag change.
It shows us where the online world is heading.

But here's the thing: it's not just about these changes.
It's about what they mean to you and how you can use them to your advantage.



Nowacomm Sp. z o.o., KRS 0001030927, NIP 8741808863, Offices: WARSAW, GDYNIA, BRODNICA