Professional Documents
Culture Documents
Section: E2
Cyber security
It's not just malicious actors, either. Employees can unwittingly sabotage
systems and create computer security threats through sheer ignorance.
Simple mistakes such as clicking rigged links in emails, messaging apps and
advertisements invite hackers to surveil companies and organizations with
massive consequences. The following are some key terms to remember when
considering how to prevent computer security threats from insiders.
Virus. A computer virus is malicious code that can steal passwords, spam
contacts, corrupt files, log keystrokes and even take over the infected device.
A device becomes infected when someone purposely or accidentally spreads
the infection. The city of Akron, Ohio, suffered a virus attack in January 2019
that was traced back to ransomware set off after two employees opened fake
invoices sent through spam emails.
Users can take preventative measures by reading terms and conditions before
installing software, avoiding pop-up ads and only downloading software from
trusted sources. Last year, Amnesty International became a victim of the Pegasus
spyware when an employee clicked on a rigged WhatsApp message. The
resulting spyware installation allowed the employee's device to be remotely
monitored while granting hackers access to messages, calendars, contacts
and the microphone.
With so many other high-profile cases of phishing schemes in the news, such
as the 2018 DNC hack and 2016 Russian election meddling, it's no wonder
insider threats keep security personnel up at night. As the case of the health
insurer Anthem shows, it only takes one person clicking the wrong link to
open the breach floodgates.
Your organization could be next. What can you do about it? Here are 10 tips
to help you develop and implement an insider threat mitigation strategy.
Some may be complex and costly over the long haul, but others simply involve
reviewing your processes and policies and applying best practices. The main
point is to turn your information security radar inward.
Chat rooms -- like e-mail, instant messaging (IM) and online social
networks -- are virtual extensions of real-world human interaction: online
spaces where users communicate with one another through text-based
messages. A chat room is like a virtual cocktail party, where strangers gather
to flirt, argue about politics and sports, ask for advice, talk about shared
hobbies and interests, or simply hang out.
A search engine is a service that allows Internet users to search for content
via the World Wide Web (WWW). A user enters keywords or key phrases into
a search engine and receives a list of Web content results in the form of
websites, images, videos or other online data. The list of content returned
via a search engine to a user is known as a search engine results page
(SERP). To simplify, think of a search engine as two components. First, a
spider (web crawler) crawls the web for content that is added to the search
engine's index. Then, when a user queries a search engine, relevant results
are returned based on the search engine's algorithm. Early search engines
were based largely on page content, but as websites learned to game the
system, algorithms have become much more complex and search results
returned can be based on literally hundreds of variables.
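The two components described above can be sketched as a toy in Python: a crawler that builds an inverted index from a small in-memory "web", and a query step that consults that index. The page data and scoring here are invented for illustration, not how any real engine works.

```python
# Toy sketch of the two search-engine components: a crawler that
# builds an inverted index, and a query step that consults it.
# The "web" here is a hypothetical in-memory dict, not real sites.

WEB = {
    "site-a.example": "coffee shops and cafes in town",
    "site-b.example": "movie times and cinema listings",
    "site-c.example": "best cafes for coffee lovers",
}

def crawl(web):
    """Visit every page and record which URLs mention each word."""
    index = {}
    for url, text in web.items():
        for word in set(text.split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Return URLs ranked by how many query words they contain."""
    scores = {}
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    return sorted(scores, key=lambda u: -scores[u])

index = crawl(WEB)
print(search(index, "coffee cafes"))  # the two cafe pages match; the movie page does not
```

Real algorithms weigh hundreds of variables rather than a simple word count, but the crawl-then-query split is the same.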
There used to be many search engines with significant market share.
Currently, Google and Microsoft's Bing control the vast majority of the
market. (While Yahoo generates many queries, its back-end search
technology is outsourced to Microsoft.)
Search engines work by crawling hundreds of billions of pages using their own web
crawlers. These web crawlers are commonly referred to as search engine bots or
spiders. A search engine navigates the web by downloading web pages and following
links on these pages to discover new pages that have been made available.
The index includes all the discovered URLs along with a number of relevant
key signals about the contents of each URL such as:
The keywords discovered within the page’s content – what topics does the page
cover?
The type of content that is being crawled (using microdata called Schema) – what is
included on the page?
The freshness of the page – how recently was it updated?
The previous user engagement of the page and/or domain – how do people interact
with the page?
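One way to picture what a per-URL index entry might hold for the signals above is a simple record. The field names here are hypothetical; real index schemas are proprietary.

```python
# Illustrative record of the per-URL signals listed above.
# Field names are hypothetical; real index schemas are proprietary.
from dataclasses import dataclass

@dataclass
class IndexEntry:
    url: str
    keywords: list           # topics discovered in the page's content
    content_type: str        # e.g. "Article", via Schema microdata
    last_updated: str        # freshness signal (ISO date)
    engagement_score: float  # prior user interaction with the page/domain

entry = IndexEntry(
    url="https://example.com/guide",
    keywords=["search", "index", "crawler"],
    content_type="Article",
    last_updated="2021-03-01",
    engagement_score=0.72,
)
print(entry.content_type)  # Article
```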
WHAT IS THE AIM OF A SEARCH ENGINE ALGORITHM?
The aim of a search engine's algorithm is to present a relevant set of
high-quality search results that will fulfill the user's query/question as
quickly as possible.
The user then selects an option from the list of search results and this
action, along with subsequent activity, then feeds into future learnings
which can affect search engine rankings going forward.
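One hypothetical way this feedback loop could be modeled — a sketch, not how any real engine updates rankings — is to nudge a page's score toward 1.0 each time users click it:

```python
# Sketch of click feedback nudging future rankings.
# The scores and the update rule are invented for illustration.

scores = {"page-a": 0.50, "page-b": 0.50}

def record_click(scores, url, lr=0.1):
    """Move the clicked page's score a small step toward 1.0."""
    scores[url] += lr * (1.0 - scores[url])

def ranked(scores):
    """Pages ordered from highest to lowest score."""
    return sorted(scores, key=lambda u: -scores[u])

# Users repeatedly choose page-b from the results...
for _ in range(3):
    record_click(scores, "page-b")

print(ranked(scores))  # ['page-b', 'page-a']
```

The design point is simply that user selections accumulate into a signal that reorders later result pages.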
The algorithms used to rank the most relevant results differ for each search
engine. For example, a page that ranks highly for a search query
in Google may not rank highly for the same query in Bing.
In addition to the search query, search engines use other relevant data to
return results, including:
Location – Some search queries are location-dependent e.g. ‘cafes near me’ or
‘movie times’.
Language detected – Search engines will return results in the language of the user,
if it can be detected.
Previous search history – Search engines will return different results for a
query depending on what the user has previously searched for.
Device – A different set of results may be returned based on the device from which
the query was made.
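These contextual signals can be pictured as extra inputs to result selection. The following is a minimal sketch with made-up result data and matching rules:

```python
# Sketch: narrowing a result pool by context signals such as the
# user's detected language and location. Data is illustrative only.

RESULTS = [
    {"url": "cafe-berlin.example", "lang": "de", "city": "Berlin"},
    {"url": "cafe-paris.example",  "lang": "fr", "city": "Paris"},
    {"url": "cafe-nyc.example",    "lang": "en", "city": "New York"},
]

def contextual_results(results, language, city=None):
    """Prefer results matching the user's language and, if known, location."""
    matches = [r for r in results if r["lang"] == language]
    if city is not None:
        matches = [r for r in matches if r["city"] == city]
    return [r["url"] for r in matches]

print(contextual_results(RESULTS, language="en"))            # ['cafe-nyc.example']
print(contextual_results(RESULTS, language="de", city="Berlin"))
```

Real engines blend these signals into ranking rather than hard-filtering, but the inputs are the same kind.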
Pages can also lose rankings or drop out of the index entirely, for example
when search engine algorithms judge a page to be of low quality, to have
thin content or to contain duplicate content, or when the URL returns an
error page (e.g. a 404 Not Found HTTP response code).
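A crawler's decision to keep or drop an errored URL can be sketched as a simple status-code check. The rule below is an assumption for illustration; real pipelines handle redirects, retries, and soft errors with far more nuance.

```python
# Sketch: deciding whether a URL stays in the index based on the
# HTTP status code returned when it was last crawled.
# The keep/drop rule here is a simplifying assumption.

def keep_in_index(status_code):
    """Keep successfully served pages; drop error responses like 404."""
    return 200 <= status_code < 300

print(keep_in_index(200))  # True  - page served normally
print(keep_in_index(404))  # False - Not Found, dropped from the index
print(keep_in_index(500))  # False - server error, dropped from the index
```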