
Anudeep Metuku

Mrs. Bagley
I/M-GT
27 October 2017
Annotated Source List

Amir, Sharif M. “It’s Written in the Cloud: The Hype and Promise of Cloud Computing.”
Journal of Enterprise Information Management, vol. 23, no. 2, 2010, pp. 131-34.
Emerald Insight. Accessed 10 Dec. 2017.

Cloud models could be perfect one day, with minimal risk, high-speed Internet, and a
well-versed team that manages the infrastructure and networks. Without a well-informed
economy, or simply “the customers,” the cloud model cannot really evolve. The cloud is a
phenomenon that directly affects the consumers instead of going through other components of
the economy. In contrast, historically corporations like IBM, Microsoft, and Oracle have
believed that it is optimal for them to give their customers new and more powerful products. If
handled properly, the author believes that cloud computing can create even more independent
sectors of the technology industry, but it seems to be incorrectly seen as a revolution of its own.
Some call it Web 3.0, a step ahead of 2.0, the phase of the Internet that allowed massive
amounts of user-generated content, but it is rare to find users with any real power over
compliance and policies. Also, looking at the big picture of enterprises today, only a
handful are communicating with vendors to outsource, stimulate the economy, buy
new hardware, and also manage and secure it. When the cloud does reach that universal level, it
would significantly reinvent the nature of IT as new skills would be in demand since entire new
groups will have to work together and manage such a big platform.
This paper reflects solely the thoughts of the author, that the cloud is deeply
misunderstood and can function as a much larger revolution if research focuses on how people
think of the cloud. There is little research evidence, yet there are ample references to debates and
opposing viewpoints about various cloud companies. With a growing number of Internet users,
consumer behavior is especially important as multi-factor authentication, strong passwords, and
simply choosing the better cloud service providers are becoming the decisions of the end users
and, to an extent, enterprises, rather than results of the quality of the products themselves. The
contemporary rise in the study of behavioral finance reveals that “non-technical specialists,”
like behavioral analysts and economists, are also key to researching the effects of the cloud
because the cloud is not just a computer science or IT field. Thus far in my internship, the servers
themselves, a small part of the cloud infrastructure, could open up several ideas to study, such as
the efficiency of the hard drives or how long it takes to scan them. As other academic works
indicate, it is not easy to classify cloud computing because of the number of factors that make it
possible. This paper corroborates that view by stating that the consumers and the vendors for enterprise
cloud models are essentially the same, so by definition, the consumers would have enormous
control over cloud resources even though vendors are not true customers. Overall, the paper
agrees with the general academic perspective that it will take the cloud several more years to
both become a functional component of the entire Internet and be a process that researchers can
assess the benefits of. As for my research, GRACE is a medium-sized business, so it will take
about four more months to complete the turnaround, but it may still be a while before I can
effectively and wholly determine the net benefits, though its immediate effects on IT are easier to observe.

“AWS.” Amazon, Amazon Web Services, aws.amazon.com/. Accessed 24 Oct. 2017.

Amazon Web Services (AWS) is a page within the Amazon website that details all the
services and terms of Amazon’s cloud offerings. Amazon has forty-four data center locations globally.
AWS allows IT departments and companies worldwide to have connections. The very
comprehensive site has information about migration, suggestions to groups from small startups
to big corporations, customer control of compliance and traditional programs, and career options.
AWS manages the “security of the cloud,” but it is ultimately the customers’ responsibility to
control their security in the cloud, as they must still protect their own data and
networks.
Low-cost models for certain industries in large companies allow them to implement those
new features at scale. The oldest location dates from 2006 in Virginia, which may be a
location I can potentially visit or read more about. I want to use AWS to compare with
working on data centers for GRACE because a lot of Amazon information is public on the site.
There are also specific cloud services not there at my internship, so I will also be able to see what
else could be done at GRACE. A large part of the information on the site is generic, but it is
impressive nevertheless to see how a company has so many connected cloud applications and
services, which the site explains in more detail. AWS supports Microsoft applications and SAP,
both of which are used by GRACE in two different sectors of the IT department, with the latter
used to manage and edit business data more than for data storage and hardware monitoring. I
will have to frequently compare Amazon’s versions of these services and reference the immense
amount of information, though one limitation for my work will be that the technical information
is programmer-oriented, as there is only a little scripting I may ever have to work with.

“Beyond Hosting: ‘Everything as a Service’ Is the Next Big Cloud Opportunity.” Software
World, vol. 48, no. 4, July 2017, p. 14. Student Resources in Context. Accessed 2 Oct.
2017.

As the IT field grows, the hardware that is used, which is largely unknown to the end
user, will be ever more complex and expensive. “Everything as a service,” or aaS, can simplify
this complexity. It is generally said to be a subset of cloud computing, essentially making
everything available as an online service. For example, purchases can be made online, and
specific information about them can also be accessed online, so there is no time gap
between each action. By “transforming data” itself, hardware needs will decrease. There will also
be more roles to be played in IT: a wider range of management positions will need to work to
make this transition. 451 Research has shown that the “channel,” or IT information distribution,
will play a big role in the future. The article gives an overview of how 451 Research’s Managed
Services and Hosting research channel will cover a range of services like analyzing trends of
vendor activity to aid the continuity of an aaS model.
For most organizations, the complete change to cloud models is time-consuming and
expensive. Making everything available through the Internet can pose challenges such as
keeping certain parts of the cloud’s information functional. At WR GRACE, which is
currently in the transition, the employees’ emails cannot always be accessed and edited
normally; this can require restarting their systems. In this case, maintenance should be a large
priority. A large part of the move to the cloud is seeing that it goes smoothly, and it is important
to be aware that my work to assist in it will largely be to find and fix errors.

Bird, Taylor. “Data Center Migration to the Cloud: Approach & Key Considerations.” Nimbo,
digital ed., Mar. 2015.

With each company moving to the cloud, the scope of that migration and associated costs
vary with each enterprise. Amazon is a large one, and this article is a whitepaper of sorts that
addresses cloud migrations for companies and uses examples of the Amazon move as part of a
general discussion of what to do and avoid. In addition to applying computing principles, it is
important for companies to leverage their relationships with vendors and brokers to be careful
and spend optimally. Many small companies are not aware of the risks, yet they also do not use their
business relations. Building on risks, every migration needs a recovery plan, and each process
needs to be adaptable, and employees should be made to use their cloud systems months before
“deployment” to confirm their functionality.
Regarding availability for end users during the migration, tradeoffs are involved: to
maintain high availability (HA), the paper suggests running multiple instances for application
needs on various data centers as an “easy win”; however, cost will also be a factor in deciding
how much of a priority HA will be for end users. These safeguards further prolong the
migration. As far as I have seen in my internship, this is a necessary practice; a different type of
data center maintenance done by specialized teams on a VM software must be done on a daily
basis several times a day to manage the efficiency of data centers in various parts of Columbia.
International company branches would have to also manage their geographic data centers
similarly. An important part of cloud computing the article describes is that it is a “service-
oriented resource” and should be used as such. End users are, at least for Amazon, ultimately,
those who benefit from ease of access and information storage.
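The “easy win” of running multiple instances across data centers can be illustrated with a simple availability calculation; this is only a sketch with hypothetical figures and independence assumptions, not something taken from the paper itself:

```python
# Availability of an application when several independent instances run
# in separate data centers (hypothetical figures, independent failures assumed).
def combined_availability(single: float, instances: int) -> float:
    """Probability that at least one instance is up."""
    return 1 - (1 - single) ** instances

# One instance that is up 99% of the time, versus three instances:
print(combined_availability(0.99, 1))  # ~0.99
print(combined_availability(0.99, 3))  # ~0.999999
```

The jump from roughly two nines to six nines is why the paper calls redundancy an easy win, even though each extra instance adds cost.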

Breeding, Marshall. “Elevating Tech Skills for the Cloud.” Information Today, vol. 37, no. 8,
Oct. 2017, pp. 15-17. Gale Virtual Reference Library. Accessed 24 Oct. 2017.

In recent years, cloud computing has been one of the biggest movements in technology. It
is also one of the most complex. It requires compliance and resources paid for by end users,
enterprises, providers, vendors, and the IT departments of companies. There is also a division in
how each group uses it: IaaS (Infrastructure as a Service), the most-used storage concept;
SaaS (Software as a Service), which deals with software being readily available; and
application-service providers. Organizations with large numbers of data centers will be able
to have the optimal redundancy and management of hardware. Libraries, for example, are largely
a SaaS and can use other data centers to run application services in addition. Microservices
architecture is the deployment of individual applications in the same system and servers, which
increases the complexity of functionality of the data centers and “eliminates” the need to rebuild
entire systems.
Bigger companies will be able to be more successful. Google and Amazon are commonly
cited as the forefathers of moving to the cloud. Because they already had such a large base of
users, the move could adapt to them, and with a cheap cost model, an ambitious migration
was successful for both companies. WR GRACE, in comparison, is not as sophisticated or as
large, but as it grows it also uses, within its scope, a diverse set of cloud services. A
similar effect is often seen in smaller companies that do not yet have a cloud component. The O365
suite used in GRACE, which is also compared to school systems like HCPSS online tools, is an
example of microservices architecture. The article explains distinctly why it is important to have
the same suite of applications to increase functionality and redundancy.

Bruzzese, Peter. “Skype for Business Is Starting to Get Good.” InfoWorld, 20 Jan. 2017.
Accessed 15 Oct. 2017.

This article reviews Skype for Business, a version of Skype used by many organizations for
workplace communication. Microsoft has worked on the technology to allow it to function with no dependence on
location. Skype for business allows unlimited calls, messaging, meetings, and screen sharing for
effective communication in everyday office activities. However, the mobile version of Skype has
not had the same quality, so videoconferences and such would not be of similar calibre for those
using different devices.
At GRACE, four big technologies used are Skype for Business, Sharepoint, Onedrive,
and Office 365. They are all associated with the Microsoft suite for business tools. Bruzzese
compares the position of other such Microsoft tools to Skype, which gives a model for me as to
how each platform is lacking. At the time of the publishing, Skype was not “compelling” enough
for firms to use with Office 365, but in the past year, changes have allowed these tools to
function in unison under an O365 plan, as is being used at GRACE.

CBS News. “The Heart of the Cloud Is in Virginia.” Sunday Morning, 22 Oct. 2017. Accessed
24 Oct. 2017.

The general population talks about the cloud constantly and is not directly affected by any
single failure in a data center location because of redundancy, but in reality, seventy
percent of the world’s information passes through Virginia’s Amazon Web Services (AWS) data
centers. One of AWS’ problems is that it cannot build as many data centers as it
needs while growing 40% each year, especially in an area like Loudoun County. Each data center
currently available has to be used efficiently because they are all full with digital information
passing through them. Another point raised is that if all cloud data centers were combined as a
nation, their energy usage would rank fifth among all countries. However, a University of
Michigan professor, Tung-Hui Hu, said that data centers are mostly idle, 96% of the time, until
there is a spike in traffic.
That idleness is a sign of a surplus of resources in a fully functional cloud system, but
relatively speaking, it is quite possible that some data centers do nothing at all. If
the cloud uses so much energy, it can burn through the limited energy resources available and
become very expensive. In the not-so-distant future, for the good of cloud computing, there
should be an uptick in powerful software that makes the best use of hardware and minimizes
energy costs. Also, on-premise and non-cloud storage can be equally expensive. Regardless, the
article only glosses over how to combat general power costs, as it strongly emphasizes speed
and flexibility. For my
internship, a visit to the AWS data centers would allow me to see how data centers work in the
larger digital world.

“Cloud Compliance: What It Is and How to Achieve It.” ComputerWeekly.com, May 2011.
Accessed 24 Oct. 2017.

Cloud compliance means providers agreeing to and following the rules that they and their
customers have set together. End users should know where their data resides, how it is protected,
and who has access to it. Mathieu Gorge, the CEO of VigiTrust, a security firm specializing in
cloud computing, says that users are the owners of their data and are responsible for knowing
these factors. The user should also have a disaster plan, as the company cannot always be held
responsible, especially for natural disasters, which increases the required knowledge and
compliance of end users and enterprises.
An interesting idea that the article implicitly raises regarding large companies is the
federal regulation and standardization of cloud compliance across various states. For GRACE,
based in several countries, there can easily be different ways of securing the cloud infrastructure
at each location. Though a well-established company has a standard protocol, for a starting
business this can be a time-expensive issue on which branches must reach compliance. It is
important for end users, including companies but also general consumers, to have a tested
backup plan covering everything from legal obligations to the condition of the data. In
companies, one part of this necessity would be agreements with vendors to use resources as
agreed upon and establishing good cooperation to conduct a migration and maintain the systems.

“Cloud Momentum; More Firms Are Embracing the Technology.” Accounting Today, vol. 31,
no. 9, 1 Sept. 2017. Student Resources in Context. Accessed 2 Oct. 2017.

Cloud-based organizations are growing rapidly, and if they haven’t already, accounting
firms will switch at some point soon. Risk can be reduced by using cloud software, and the
scalability of the systems can allow the clients to easily access their services. The reduced labor
will allow for better work output, financial advising, and true accounting. There is an increased
importance of the education of consumers as the switch becomes bigger because they will have
to know more about where and how their data is stored.
The development of cloud technology reduces labor by making materials easily
accessible, but this article glosses over the element of risk, which is mentioned but never
explained in relation to consumers, and which is said to be reduced through cloud storage. This
is true to the extent that it is objectively easier to protect digital files stored off site, though a
tremendous amount of work is required to reduce the risk for all clients and employees. The
article is written to advocate cloud storage, so not surprisingly, there is far less mention of the
trade-offs involved. However, it is acknowledged that organizations should move to the cloud
with specific purposes so that they have a goal to work towards, or else they will not be able to
secure their impromptu actions accordingly. WR GRACE has Office 365, for documents,
which is a standard software that goes hand in hand with Azure and Sharepoint. There are
several other cloud schemes, each with their own purpose. Some are to increase scalability and
size of business or to increase efficiency, so looking at the general purpose of cloud storage for
the long term will be important.

Datanational. www.datanat.com. Accessed 15 Oct. 2017.

Off-site data centers are groups of networked servers stored in a secured location
away from an organization’s premises. They have lower power costs due to their separate
location. This idea applies to many other factors: capital expenditures, auditing, risk
management, fire protection, security, bandwidth requirements, and business mobility. External
storage always allows a “spare hand” in energy and capacity, referred to as “Need + 1,” or
“N+1.” If such data centers are built instead of on-site ones, or even multiple sites for primary
and secondary data centers, cost can be greatly reduced in the long term while providing
security against natural and unexpected physical disasters.
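The “N+1” idea amounts to a small capacity calculation; the sketch below uses hypothetical unit sizes for illustration and is not taken from the Datanational site:

```python
# "N + 1" redundancy: provision one spare unit beyond what the load
# strictly requires (capacity figures are hypothetical).
import math

def n_plus_one_units(load_kw: float, unit_capacity_kw: float) -> int:
    """Units needed to carry the load, plus one spare."""
    needed = math.ceil(load_kw / unit_capacity_kw)  # N
    return needed + 1                               # N + 1

# A 90 kW load served by 25 kW power/cooling units:
# N = ceil(90 / 25) = 4, so N + 1 = 5 units.
print(n_plus_one_units(90, 25))  # 5
```

The spare unit is what lets maintenance or a single failure happen without taking the whole facility down.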
One of the things several companies like GRACE are currently working on is having
such a physical component for their disaster recovery plans. GRACE still has data centers on
site, but redundancy can be established by having other locations to store the information. As
everything moves to the cloud, inevitably more groups will realize that third-party data center
sites are necessary, so at some point, they will all make the move, either out of foresight or as a
correction to a mistake that results in data loss or corruption. Additionally, access can be
determined separately and specifically for these data centers, which allows companies to add
more security to their communications.

Farber, Dan. “The New Geek Chic: Data Centers.” CNET, 9 Sept. 2008. Accessed 22 Oct. 2017.

Every technological concept has a starting point in time, and with cloud computing it was
around 2008 that data centers became the desired commodities. Cloud computing was theorized in the
1960s to connect users regardless of time and location, but it was not until recently that it could
be executed at such a large scale. In 2008, AMR Research stated that in the next five years,
through 2013, the Internet would change more dramatically than in previous periods due to the
“convergence” of different technological trends. The economic downturn would force cloud
computing to be more widely used, making effective use of resources, speeding up safety
implementations, and making more data readily available to businesses that drive the economy.
AMR’s vice president, Jonathan Yarmis, stated that a big concern would be to address issues and recover from
them, a vital part of the maintenance and recovery plans used today. Google and Amazon are two
companies with low cost models with necessary technological advances that can pave the way
for cloud computing.
The article succinctly explains the economic aspect of the cloud and its origin.
Nevertheless, it was not until after 2013 that more companies began to adopt the cloud model.
This process is in progress, but the inclusion of both hardware and software advancements may
be part of the answer to why not everyone has adopted it and why it is expensive in time and
resources. Yarmis’ prediction has indeed proved true, and the article serves as a goal of the past
that can be revisited to evaluate how the aims of cloud computing are changing, though Yarmis’
predictions relate more to how the Internet would be revolutionized, whereas my involvement at
WR GRACE is specifically about business implementation.

Ferrill, Paul. “Ipswitch WhatsUp Gold.” PC Mag, 13 Apr. 2016. Accessed 10 Dec. 2017.

There are many ways to administer cloud migrations for IT departments, and WhatsUp
Gold (WUG) by Ipswitch is an effective option. It covers only infrastructure
management, application performance management (APM), and network monitoring, but these
are the essential parts of a cloud model for a medium-sized company. Only mid-sized groups of
services can be managed, while services like Amazon Web Services and Microsoft Azure are
too large to manage with WUG. In theory, server pieces of WUG on AWS and the corporate
network connected with a VPN could allow infrastructure monitoring of bigger services. Another
downside is that between four and thirty-two gigabytes of RAM are needed for smooth
operation, in addition to fifteen gigabytes of disk space, which should actually be higher to store
files for a longer amount of time. This footprint means the application can run only on
premises instead of in the cloud. As for the general interface, there are many distracting tools and
settings, but they offer several modes of customization. For example, the APM could be adjusted
to be run based on user defined policies. For analysts, these customizations can allow data
extraction to study trends and report server behaviors. Alerts also make management easy, with
an efficient method to identify problems with specific servers, which are also grouped based on
the types of problems.
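WUG’s user-defined alert policies and function-based server groups resemble a simple polling check; the sketch below is a generic illustration, not Ipswitch’s actual API, and the server names and threshold are hypothetical:

```python
# Generic sketch of policy-based server monitoring, loosely in the spirit
# of WUG's user-defined alert policies (not Ipswitch's real interface).
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    group: str          # functional cluster, e.g. "SAP", "Switches", "Linux"
    cpu_percent: float  # most recent polled reading

def check_alerts(servers, cpu_threshold=90.0):
    """Return servers breaching a user-defined CPU policy, grouped by function."""
    alerts = {}
    for s in servers:
        if s.cpu_percent > cpu_threshold:
            alerts.setdefault(s.group, []).append(s.name)
    return alerts

fleet = [Server("sap-01", "SAP", 95.0),
         Server("lnx-07", "Linux", 42.0),
         Server("sw-03", "Switches", 97.5)]
print(check_alerts(fleet))  # {'SAP': ['sap-01'], 'Switches': ['sw-03']}
```

Grouping alerts by function mirrors how WUG clusters servers, which is what makes it easy to see whether a problem is isolated to, say, the SAP servers.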
This review of WUG provides benefits and flaws that are very applicable to GRACE.
Since GRACE is only a medium-sized company, it is indeed beneficial to use WUG for
management. The requirement for large disk space and memory poses an interesting
problem. Working with VMs on three different levels over the Internet, with the middle
level of computer requiring the most space, could easily cause problems, especially for
employees without a sufficiently powerful personal computer. Additionally, the second level to
connect to, an on-premise computer in the middle layer, must meet these requirements. The
deeper third layer, for administration to access the VMware ViewCenter, is essentially an onsite
computer, which reflects a jump from cloud to onsite to another onsite computer. Therefore,
WUG is a unique tool to observe the power of VMs in a cloud model. Updating the servers
requires only WUG in addition to ViewCenter, so no other downloads and checkpoints are
required except for the mandatory portal log-ins at each level of the VM I am using. WUG is
also separate from the remote desktop connection that allows access to the management
computer, so if one of the systems runs slowly, the other is unaffected. However, this separation
can cause inconsistent displays of information between WUG and ViewCenter, such as duplicate
servers, incorrect or missing IP addresses, and missing servers. The information is also grouped
differently, and somewhat ambiguously, as WUG groups servers into clusters based on
function, such as SAP, switches, or Linux devices, which also reflects the business/technical
duality of cloud management. Because it considers both physical and virtual machines, there can
be confusing information to work with; some servers refer to physical boxes that may not be
activated.

Gast, Taylor. “Cyberattack Highlights the Costs of Breach Response and the Need for
Preparation.” Lexology, 19 Oct. 2016. Accessed 23 Oct. 2017.

Seventy-five percent of companies do not meet disaster readiness expectations, according
to the Disaster Recovery Preparedness Council in 2014. More than $5,000 can be lost each minute from
any type of disaster, which excludes lost productivity and reputation damage. When hacker
groups demand ransoms, the information or cost that is traded can also greatly increase monetary
damages. For optimal security, at various points of a cloud migration, the needs for security will
change and increase, so IT should frequently update the plan and adapt to even environmental
changes.
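The $5,000-per-minute figure lends itself to a quick outage-cost estimate; a sketch in which the outage length is hypothetical:

```python
# Rough outage-cost estimate using the article's figure of more than
# $5,000 lost per minute (the outage length below is hypothetical).
COST_PER_MINUTE = 5000  # dollars, per the Disaster Recovery Preparedness Council figure

def outage_cost(minutes: float) -> float:
    """Direct cost of an outage, excluding lost productivity and reputation damage."""
    return minutes * COST_PER_MINUTE

# A two-hour outage:
print(outage_cost(120))  # 600000
```

Even this lower-bound arithmetic shows why testing recovery of critical applications, as the article suggests, pays for itself quickly.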
With a strong and well-functioning plan, a breach can be negligible, as specialized teams
can quickly identify and close it. One unique part of the suggested DR plan is to test “critical
applications” more frequently to determine how much downtime the recovery requires; the
specific parts of each company’s plan are not always publicly available but certainly include
this. With the knowledge that critical applications should be secured more than others,
downtime can be significantly reduced. There is a distinct informal tone in the article, which
cites fictional foreign hackers who take advantage of data important to the U.S. economy. Based
on my work, this is indeed a reality, as the article corroborates. Regarding updating DR plans,
there will always be a variety of potential natural disasters, as there are GRACE branches in
Germany and the Philippines as well. With a large system already in place, a holistic approach
to DR is forced as a matter of fact.

GRACE. W.R. Grace, grace.com/en-us. Accessed 23 Oct. 2017.

WR GRACE is an American chemical company centered in Columbia, Maryland, with
branches in over sixty nations. GRACE manufactures oils for the food industry, rubbers,
coatings, and metals. In its plants, there are several buildings for the chemical, business, and IT
departments. The company is known for declaring bankruptcy in 2001 after several lawsuits
regarding the harmful effects of asbestos yet has expanded to various plants and ranks in the top
two in global sales of various catalysts and silica based materials, its main products. In 2015, it
split into Catalysts Technologies and Materials Technologies segments.
A lot of information about this largely unknown company is not shown on the website for
proprietary reasons, and there is also little information specifically about cloud migration. Like
other companies still migrating or planning to, GRACE is primarily in a non-technology
industry, so it has fewer resources and thus less information about it, at least available to the
public. The reduction in its employees after the asbestos incident is also a limiting factor.

Kamara, Seny, and Kristin Lauter. “Cryptographic Cloud Storage.” Financial Cryptography and
Data Security, Jan. 2010. ResearchGate. Accessed 3 Oct. 2017.

Research teams are working on ways to provide security to cloud storage users while
retaining its functions and ease of use. Kamara and Lauter examine the tradeoffs of several cloud
architecture models in pursuit of building the best service for users and corporations. It is in
verifying data that complexity arises, because third parties have to verify a lack of tampering
and data corruption. This is how consumer architecture works, which is relatively simple.
Enterprise architecture requires the union of a cloud provider, the main corporation, and another
corporation, which sends a verification or data-processing keyword to the original corporation;
the provider oversees this exchange and lets the users of the corporations safely use their data.
Kamara and Lauter use the term “cryptographic” frequently, as they strongly stress the security
of the models. The research paper explains some expected actions on the users’ part in detail to
show that both parties are essential to maintaining the integrity of the system.
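The tamper-verification idea can be sketched with a simple cryptographic fingerprint; this illustrates the general principle only, not the paper’s actual protocol, and the file contents are hypothetical:

```python
# Sketch of integrity verification for cloud-stored data, in the spirit of
# the paper's tamper-detection idea (not Kamara and Lauter's protocol).
import hashlib

def digest(data: bytes) -> str:
    """Fingerprint the data owner keeps locally before uploading."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report-contents"
stored_fingerprint = digest(original)  # kept by the owner, not the provider

# Later, after downloading the file back from the provider:
retrieved = b"quarterly-report-contents"
assert digest(retrieved) == stored_fingerprint  # matches: no tampering detected

tampered = b"quarterly-report-contents!"
print(digest(tampered) == stored_fingerprint)  # False
```

Because the fingerprint is kept by the owner, even the cloud provider cannot alter the data without the mismatch being detectable, which is the core of the third-party verification the paper describes.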
It will be useful to analyze the Microsoft cloud systems with the example enterprise
models. Since security will be a large part of my work, the cryptographic aspect to this paper is
beneficial. Kamara and Lauter are associated with Microsoft and discuss how Azure, a storage
service that also has recovery option benefits, can provide customers with “scalable and dynamic
storage.” Azure usually comes with Office 365, and while the paper does not specifically relate
to Microsoft applications, many of the proposed models are possibly based on the cloud systems
that are similar to the ones Microsoft storage services use.

Khajeh-Hosseini, Ali, and Ian Sommerville. “Research Challenges for Enterprise Cloud
Computing.” 1st ACM Symposium, 2010. Cornell University Library. Accessed 10 Dec.
2017.

Cloud computing is a very expensive and “disruptive” process that is largely an
unappealing idea for many stakeholders of enterprises. The cost of building a cloud
infrastructure, securing it with a disaster recovery plan, and the early performance decrease of
moving to a new system are negative effects of a migration. For researching the cloud in this
context, there should be careful consideration of such limiting factors since many new or newly
migrated enterprises would be affected by these negative factors. It is also difficult to assess the
net benefits of a migration due to so many factors, which companies should be aware of because
the turnaround can take approximately six months. It would take nearly ten to fifteen years to
reduce that time to a single night. Furthermore, an increasing number of companies rely on a
hybrid infrastructure, so for each component of the cloud model, including compliance of
information, there is no guarantee of a safe information transfer. Files with sensitive information
could be distributed between onsite and cloud devices and result in malware spreading. To truly
minimize this disturbance, the enterprise would have to commit to having all of its services
hosted on the cloud. Cost is actually the primary factor in determining a migration, but unlike
risks and benefits, it can be determined almost immediately and is not a technical factor but an
economic one. However, despite the many issues that limit viewing the cloud in a single light,
the growing quantity of conferences and academic work can build enough knowledge to better
understand the overall benefits and risks.
It would take many years to reduce migration time to a day, but certainly by then, the
effects of a cloud model would be clear. It is important for those researching the cloud to be
mindful of this obstacle, since the research would be from the lens of the present, whereas with
broader research topics like data centers or the Internet in general, there is a larger time span to
study. The business perspectives of a migration and the technical ones bridge science and
business, which is a crucial point to note as it essentially creates a new field of study but one
which requires applications from multiple groups. For WR Grace, there is an equal emphasis on
understanding how the business will work because of managing servers for the migration. Each
updated server can be used to improve accessibility for working with a number of applications in
the cloud. Consequently, consulting more business oriented IT professionals on site would be
helpful to better assess the broader effects of cloud infrastructure on the business. My mentor’s
colleagues are better versed in some of the business side, which has helped me to properly
update servers with WhatsUp Gold and ViewCenter, which are business and technical
management software, respectively, reflecting the dual nature of the cloud model.

Lacy, Eric, and Steven R. Reed. “BWL Cyberattack Bills Reach Nearly $2M.” Lansing State
Journal, 22 Sept. 2017. Accessed 2 Oct. 2017.

As data moves to the cloud, it is important to secure that information in some way, as
losses can be permanent. Disaster recovery plans have always been a large part of a computer-
based business model, but some groups, especially newer ones, have not maintained that level of
security. In Lansing a year ago, the Board of Water & Light (BWL) was held for ransom over its
information, and the ransom was allegedly paid. The State Journal reports that it is uncertain
how the damages will be paid for. It was speculated that by focusing on securing its power
generation, the BWL would have less secure everyday office communication, which is, in fact,
the problem that caused these spending issues. With no security or recovery plan, other parts of
a company can be strongly hindered as well, in addition to just rebuilding or changing the data
centers.
Cost is said to be the biggest incentive for companies in deciding whether to have a Data Loss Prevention (DLP) plan. Here, little was spent to secure BWL’s communication systems, but the cost of fixing the mistake was potentially far larger. These costs were all hardware related, and a new system was established to prevent future incidents like it. It is important to have secure data centers; most companies have them, though they must be off site or strongly protected to guarantee safety. Well-managed off-site data centers increase redundancy and can offer a financially viable solution to this problem.

McKenzie, Dennis. Interview. 14 Sept. 2017.


Dennis McKenzie is an IT professional at WR GRACE in Columbia. He is involved in
running and maintaining the local GRACE data centers for the company’s move to the cloud.
There are several periodic issues and projects that his team tries to solve.
I will be working with him in testing and using the cloud migration for the company. He obtained a license and account for me to log on to the company virtual machine and then to the network monitoring software. The migration has been in progress for several months now, and the Microsoft platform suite (O365, Skype, OneDrive, and OneNote) is also something I will be looking at, especially securing it on top of making sure it runs as needed.

Metuku, Ashwini. Interview. 23 Oct. 2017.

SAP stands for Systems, Applications, and Products in Data Processing; enterprises use it to plan resources, manipulate data, and manage real-time reporting and transactions. Problems arise with access when employees from various companies want to work together, and using this Enterprise Resource Planning (ERP) software, an SAP expert can resolve these issues as they affect the organization. Ashwini Metuku specializes in SAP and worked with it at IBM for several years before GRACE. His job consists of using ERPs to resolve problems through the functional component, SAP, while others use ABAP, its programming aspect.
My work will have little programming in it, but as an ERP, SAP is worth considering for study because it helps the continuity of businesses. Ashwini Metuku’s involvement is to functionally assist in running an enterprise, while I will aid in managing the general, broader necessities that are data centers. Therefore, while his work does not apply to the cloud, some of the data center issues my mentor works with can indirectly be linked to SAP. He also has many connections in his division of the IT department, any of whom can be of great value in helping me.

Morgenson, Gretchen. “Consumers, but Not Executives, May Pay for Equifax Failings.” The
New York Times, 13 Sept. 2017. Accessed 23 Oct. 2017.

Equifax is one of the United States’ “top three credit reporting agencies.” As such it should have strong security for its data, but it does not. A breach disclosed in September 2017 rendered millions of its consumers vulnerable to identity theft, “monetary losses,” and frustration. The stock fell 35 percent soon after the company disclosed the breach, a roughly $6 billion loss in the value shareholders hold in the company. Immense amounts of data such as Social Security numbers and driver’s license numbers were stolen. The agency’s accounting expert argued that a disaster resulting from poor maintenance, unlike a natural disaster, is not forgivable.
Equifax does not concern cloud computing but shows the effects of lacking a disaster recovery plan, which is even more essential to any cloud model. I do not intern for a public-facing organization like Equifax, but part of the chief information that needs protection at WR GRACE is its chemical formulas and practices. If they were lost or stolen, the value of the company would decrease further. Equifax provides an example of what happens when the monitoring of a company’s practices is “lax.” Also, since Equifax is a more directly consumer-based company, the effects of its breach will be seen in the losses of its customers. It is a large company, and though it has suffered damages, it can recover, judging from previous incidents involving it and other agencies. If it were smaller, like GRACE or a startup, it would not have that advantage.

Mosca, Patrick, and Yanpin Zhang. “Cloud Security: Services, Risks, and a Case Study
on Amazon Cloud Services.” SciRes, Dec. 2014. Scientific Research. Accessed 10 Dec.
2017.

Many different ways exist to research the cloud model, and companies that have already
made the migration can serve as representative models for study. Amazon Web Services (AWS)
is one such company that has already popularized the cloud and can be used to assess the current
state and future possibilities of the cloud. Security issues are the most important to consider before widespread cloud use, and they stem from the cloud’s defining features: outsourcing, multi-tenancy (a basic measure nearly any cloud service provider takes), and massive data with intensive computation. Each of these features brings its own challenges. Outsourcing leaves customers little physical control over the hardware storing their data. Multi-tenancy stores data efficiently by sharing hardware among several users but makes all of those users vulnerable to attacks. The four general security concerns that have arisen and could arise again are DDoS attacks, unencrypted data, data loss, and data integrity, that is, maintaining that only end users have access. Massive data and intensive computation over the large amounts of stored data significantly limit security measures from applying to all of the data efficiently. Combined with the integrity requirement, massive data makes security a seemingly impossible achievement because it is inevitable for programs to access the
data at some point. One example of an attack is the 2010 cross-site scripting attack on Amazon that resulted in widespread infections by a Trojan virus; it was not entirely Amazon’s failure, because the attackers infiltrated through another “vulnerable” domain. Therefore, even a single vulnerable link of a network can affect the whole cloud system. Google, Facebook, and Twitter have also had this problem. Multi-factor authentication began to become a bigger feature, but it requires more work from end users. The Snowden leaks of 2013 could also have ramifications for the cloud: the NSA can access a large amount of information, not by singling out an individual but by scanning a network for activity.
Amazon’s scheme is suitable for the general public to store their data, but even from an enterprise perspective, DDoS attacks and data leaks are large threats that are known yet not discussed as much in academic works, and the cloud can actually accentuate them. In the future, the cloud should be safe and accessible for end users, who must also be knowledgeable if they want their data secure. Outsourcing is very applicable to GRACE, however. Many of its devices come from across the world, as in several companies, and the IT departments constantly have to work out solutions to get the systems in place first, so even if those groups do not yet access information on the cloud, setting up the system still involves several of them. In this respect, an enterprise model is surprisingly safer than a public service. From my server work with updates for GRACE, it is evident that its network is not as big as other companies’ and certainly not Amazon’s, although it has a fair degree of sophistication to manage several departments and their information. The big data computation aspect makes GRACE’s current cloud architecture worthwhile to study because it allows security to be examined without endangering a large network.

“Office 365 Security and Compliance.” Microsoft, e-book, Microsoft, 2016.

Companies’ public materials contain plenty of information about how their cloud computing works. Microsoft uses several of the accepted and NIST standards for its growing cloud services, as outlined by this whitepaper from 2016. There are three generic security layers: physical, logical, and data, with the first being of most importance to cloud architecture. Administrators of a network use role-based access control (RBAC) to monitor and manage which level of employee or end user has access to certain information. The general ideology for O365’s security strategy is a set of four pillars of thought: prevention, detection, response, and recovery.
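The RBAC idea described above can be illustrated with a minimal toy model; the role, permission, and user names here are hypothetical placeholders, not Microsoft's actual O365 roles:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles map to sets of permissions; users are assigned roles,
# and an access check asks whether any of a user's roles
# grants the requested permission.

ROLE_PERMISSIONS = {
    "global_admin": {"read_mail", "reset_password", "view_audit_log"},
    "helpdesk":     {"reset_password"},
    "end_user":     {"read_mail"},
}

USER_ROLES = {
    "alice": {"global_admin"},
    "bob":   {"helpdesk", "end_user"},
}

def has_access(user: str, permission: str) -> bool:
    """Return True if any role assigned to `user` grants `permission`."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(has_access("bob", "reset_password"))  # True: the helpdesk role grants it
print(has_access("bob", "view_audit_log"))  # False: none of bob's roles grant it
```

The point of the design is that permissions are attached to roles rather than to individual users, so an administrator changes what a whole class of employees can do by editing one role.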
Expanding upon the security ideas for the Microsoft suite of business products is one of my tasks under Mr. McKenzie. There is not a single security consensus across the applications in this suite, so it is important to know how each tool works, which this paper explains, albeit generically; as the legal and representative position of Microsoft, this whitepaper can be my reference source for studying how these cloud tools are securely used in a business environment. Another idea for consideration is mobile device management (MDM) for each of these tools: a lot of functionality is lost when using different devices, namely mobile ones. For example, the online meetings’ presentation features do not offer the same wide range of options, such as syncing information between Outlook and Skype. RBAC differs with the environment in which it is used. At GRACE, some employees have higher-level access to certain applications, but often there are actions a user can take that would be preferred to be out of reach, largely without the user knowing; fixing these issues in a cloud environment is a possible part of my involvement. The recovery pillar is important to my work because, as a worker on the migration, specifically with data centers, it is response and recovery to maintain the systems that takes the most work, not the earlier pillars, since the company is still in migration and is not yet end-user based.

Quora. “What’s Stopping Legacy Large Companies from Switching to Cloud Technology?”
Forbes, 24 July 2017. Accessed 23 Oct. 2017.

For cloud computing to work, undoubtedly, there will be a hierarchy of access to information and control of the cloud system. In a business in the 21st century, equality among
employees, especially budding startups, is greatly valued. IT, however, can disrupt that way of
thinking. Another issue is that cyber threats have grown tremendously, and for young companies,
migration and maintenance will be even more difficult. The costs of securing a cloud model and maintaining it are great, and fear, the other factor, augments uncertainty in a small group. In non-tech companies, IT will be a necessary factor for sustainability, and when cloud computing is added, resources and demand that should go to the essential components of the business will go to IT instead. The bottom line is that the risk and cost of moving to the cloud in a risky startup, in an ever more competitive environment, are detrimental to a company’s growth.
Once a company does move to the cloud, however, productivity and information security will unequivocally increase in the long run, an idea not touched on by this article. If a company exceeds its IT budget for its cloud move, a fear factor inhibits further maintenance and security, but the work already invested in the move at that point could encourage continuing the process. GRACE is already a smaller company than it once was, and it is in migration, aware of the risks; but looking at the long run, where data and productivity will increase, the move actually has net benefits, as several experts insist when recommending cloud migration. They might argue that much of the information in this article describes paradoxical “catch-22” situations based on fear and the growth of a business. If a company hires more employees, then surely it must be able to grow its IT department as well, but there will always be risk and a need to also focus on the chemical manufacturing, taking GRACE as a hypothetical startup.

Rafter, Michelle V. “Plugging in.” Workforce, vol. 96, no. 5, Sept. 2017. Student Resources in
Context. Accessed 2 Oct. 2017.

Human resources staff who manage the switch to cloud storage will be a big part of companies that make the switch. They are valued commodities in the long term, although they are not limited to IT experts. The shift allows HR staff to increasingly “function as business analysts,” which adds a different dimension to existing jobs. Time will bring the realization that combining business analysis with IT is a skill enterprises need. The author cites several businesses that pay close attention to their HR staffing. A New Jersey county HR administrator attested that by switching to the cloud, the department had “backfilled” with the right skillsets.
Cloud computing will connect business analysis and IT even more than they already are. Employees with better access to large amounts of information will be able to do their jobs better and in a more enjoyable manner, as the article advocates. This is a positive effect, but the risks are not appropriately addressed: hospitals’ use of cloud-based management, for instance, is dangerous without the correct security. The reach of cloud computing extends beyond the convenience of data storage: employees work more efficiently, which makes the experts valuable and makes my internship a great source of experience. However, this will be difficult to observe because changes take a long time, as Parkinson’s Law of Data, not alluded to in the article, suggests: data expands to fill the space available for storage. Physically, it is impossible to sustain the expansion of cloud storage indefinitely, so in the near future the value of IT professionals will peak, and possibly keep rising, as physical computer engineering would have to receive more focus from most corporations.

Rouse, Margaret. “Parkinson’s Law.” TechTarget, Apr. 2015, whatis.techtarget.com/definition/Parkinsons-Law. Accessed 22 Oct. 2017.

Computers have been growing very fast, an observation maintained for decades through Parkinson’s Law of 1955 and Moore’s Law. The latter states that computing power tends to double every eighteen months. According to this article, the former observes that, essentially, the more computing power and storage there is, the more work will be done and the more data will be stored. There are other applications of this idea, such as productivity. If a task is given an hour, an individual may take close to an hour to complete it, but with slightly increased effort, he can complete it in forty-five minutes. By gradually decreasing the amount of time allotted to a task, you can encourage productivity. The four rules of this process are to determine the task, determine the average time, decrease the allotted time, and verify the quality of the work, repeating the cycle with less time until an optimal level is reached.
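The four-rule process above can be sketched as a small loop; the 10% reduction factor, the stopping floor, and the quality check are hypothetical placeholders for what would, in practice, be human judgments:

```python
def optimize_allotted_time(avg_minutes, work_is_acceptable, reduction=0.9, floor=1.0):
    """Toy model of Parkinson's-Law time reduction: start from the
    measured average time for the task, repeatedly shrink the allotment,
    and stop once the quality check fails or the allotment would fall
    below a sensible floor. Returns the last acceptable allotment."""
    allotted = avg_minutes
    while allotted * reduction >= floor and work_is_acceptable(allotted * reduction):
        allotted *= reduction  # rule 3: decrease the allotted time
    return allotted  # optimal level reached (rule 4 verified each step)

# Pretend the work stays acceptable as long as at least 45 minutes are given.
result = optimize_allotted_time(60.0, lambda t: t >= 45.0)
print(round(result, 1))  # 48.6: one more 10% cut would dip below 45 minutes
```

The loop is only an illustration of the rules as the article states them; in reality, "verifying the quality of the work" is a managerial judgment, not a function call.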
Concern over data storage is what leads to cloud storage, a desired, efficient way to keep individuals’ and groups’ data somewhere secure. Relating to IT, resources will always be
be tested. Since cost is the primary incentive to how cloud systems are planned and
implemented, it is important to know that resource allocation will always be an issue. As several
other whitepapers and company statements say, making efficient use of resources is crucial. It is
not physically possible to test the effects of reduced costs without harming the continuity of a
business, for example, but Parkinson’s Law is definitely useful in the planning or early
implementation stages of processes like cloud migrations, where a majority of companies are.

Ruparelia, Nayan B. Cloud Computing. Cambridge, MIT Press, 2016.

Most works on any computer technology are highly technical with complex mathematical
and scientific concepts, but this MIT press book intends to inform the general public in a simple
way how cloud storage works. The book explains the mechanics, concerns, and implications of
the concept. The author, Ruparelia, takes a largely neutral stance, providing knowledge of how
the service providers operate while addressing security and ethical controversies, many of which
have already affected corporations and governments. He also mentions how it affects the job
market and computer economy. Though computing power increases tremendously, the costs of operating it tend to lessen; still, cost is in fact the main incentive and concern when moving to the cloud. People do not want to spend too much on infrastructure, yet software is moving ahead much faster, which puts IT and the cloud in a bizarre situation. Many principles of
cloud computing vary with each provider, but Ruparelia cites from the National Institute of
Standards and Technology (NIST), which provides objective cloud computing principles. A
large part of the book explains the business application of cloud computing and legal issues like
privacy. The final chapters discuss the author’s own views on the future of cloud computing.
The book is essentially a large outline of the principles of cloud computing with
significant details, so it will be helpful when I have to reference IT terminology, ideas, and
historical events I will look back on for my internship. The knowledge from the NIST will be
helpful in evaluating the cloud system at GRACE I will be working on.

Wang, Cong. “Privacy-Preserving Public Auditing for Secure Cloud Storage.” IEEE
Transactions on Computers, vol. 62, no. 2, 2013. Accessed 3 Oct. 2017.

Third party inspection of user data in a cloud model is a common principle, but how to
make the auditing process efficient is the subject of this paper. Random masking and the
“aggregation and algebraic properties” of the auditing system allow for more efficient
authentication. Total outsourcing of both the data and the inspection is a possible solution because it significantly reduces the chance that the content of the stored data becomes known. The research team ran several experiments to verify that the scheme works, and based on the graphs, complete outsourcing may produce fewer data leaks. Making complete outsourcing work is very complex because the abstraction of the process is reduced, but the scheme is also shown to make storing and accessing data efficient.
Total outsourcing is not as compatible with corporations as it is with consumers; that is, it makes intercorporate relationships more complex and fragile, both digitally and generally. This paper, however, does show a possible path to a future that depends on nearly completely automated auditing systems, with less IT work done by hand. Complete outsourcing is an ethical standard to aim for, because it would be the most secure auditing method if it were ever implemented successfully.
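The paper's random-masking scheme is too involved for a short sketch, but the basic idea of a third party verifying stored data on the owner's behalf can be illustrated with a simple hash-based integrity check. This is a toy stand-in, not the authors' actual protocol:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """The data owner computes a fingerprint before uploading to the cloud."""
    return hashlib.sha256(data).hexdigest()

def audit(stored_data: bytes, expected_fingerprint: str) -> bool:
    """A third-party auditor re-hashes what the cloud returns and compares
    it to the owner's fingerprint. The auditor checks integrity without
    having to understand or interpret the content."""
    return hashlib.sha256(stored_data).hexdigest() == expected_fingerprint

original = b"chemical process parameters"
tag = fingerprint(original)

print(audit(original, tag))                # True: stored data is intact
print(audit(b"tampered parameters", tag))  # False: integrity violated
```

A real privacy-preserving scheme like the paper's goes much further: it hides the data's content from the auditor and avoids transferring the whole file for each audit, neither of which this toy check does.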

Woods, Jack. “The Evolution of the Data Center: Timeline from the Mainframe to the Cloud.”
Silicon Angle, 5 Mar. 2017. Accessed 23 Oct. 2017.

Data centers are central to modern information technology. They were first introduced in the late 1950s and were used to automate the storing and processing of passengers’ flight information, but after the 1960s, they became more enterprise focused. In 2013, Google undertook what was reportedly the biggest “construction effort” in its history to expand its global data center network. Currently, data centers’ storage and connection to the Internet enable the cloud, and data centers are shifting to a “software ownership model.” To meet the demands of end users, these data centers’ capabilities have to rise accordingly. With cost control and cloud computing, data centers have been changing for greater efficiency.
It is essential to understand the change in cloud computing through data centers. Off-premise data centers connected to the Internet, allowing information to be stored efficiently in large amounts, are essentially the business definition of the cloud. In addition, I work on data centers in Columbia with my mentor’s team to resolve routine issues over the course of a day. Computers have grown from the 1940s to the current period, and data centers have to grow at a similar rate to keep the IT industry running. Many companies’ moves to the cloud are thankfully encouraging this, as more money and resources are planned and set aside to migrate. According to the timeline, WR GRACE can be argued to be late relative to other companies, but considering the majority, the small groups and startups, it is on track. With more knowledge of the migration process, resources can also be efficiently allocated.
