
Unit Name Secondary Curriculum 2A

Unit Code 102090


Assignment Title Assignment 2
Semester 2018 2H

Student Name Wenye Feng


Student ID 19143910
Student Email 19143910@student.westernsydney.edu.au
Part A
Formal Assessment Task
Business Studies
Year 12
Finance

Assessment Task No. 2: Financial Statement Analysis Report
Duration: 3 weeks    Weighting: 30%    Total Marks: 30

CONTEXT (OR PURPOSE) OF THE TASK


This task provides you with the opportunity to examine the financial position of an
Australian company for decision-making purposes.
Your findings will be presented in a business report format. As required for the HSC, you
need to be able to present logical and well-structured responses.

OUTCOMES TO BE ASSESSED
H6 evaluates the effectiveness of management in the performance of businesses
H7 plans and conducts investigations into contemporary business issues
H8 organises and evaluates information for actual and hypothetical business
situations
H9 communicates business information, issues and concepts in appropriate formats
H10 applies mathematical concepts appropriately in business situations
TASK
You need to:
- download the 2017 Annual Report of Coca-Cola Amatil Limited (CCA) via
  https://www.ccamatil.com/-/media/Cca/Corporate/Files/Annual-Reports/2018/Annual-Report-2017.ashx
- go to Coca-Cola Amatil Annual Report 2017, pages 79-84, and evaluate CCA's
  financial position using the following four objectives:
  - Liquidity
  - Solvency
  - Profitability
  - Efficiency
- construct a business report on CCA's business performance. The report should
  be around 1000 words (+/- 10%).

Submission Details:
You must submit a clearly written business report of your analysis within the word
limit.
Your report needs to include the following information:
- Calculate key financial ratios (a worked sketch of the calculations follows this list):
  Liquidity: Current Ratio
  Solvency: Gearing (Debt to Equity Ratio)
  Profitability: Gross Profit Ratio (GPR), Net Profit Ratio (NPR), Return on Owner's Equity (ROE)
  Efficiency: Expenses Ratio, Accounts Receivable Turnover Ratio

- Assess business performance using ratio analysis


- Recommend strategies to improve financial performance
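
The sketch below shows one way of laying out the seven ratio calculations before writing them up. It is a minimal Python illustration using standard ratio definitions; every dollar figure is a hypothetical placeholder rather than CCA's actual 2017 data, so substitute the amounts you extract from pages 79-84 of the annual report.

```python
# Minimal sketch of the seven required ratios.
# All figures are hypothetical placeholders ($ millions) -- replace them
# with the values taken from CCA's 2017 financial statements (pp. 79-84).

current_assets = 3000.0        # hypothetical
current_liabilities = 2000.0   # hypothetical
total_liabilities = 4500.0     # hypothetical
total_equity = 2500.0          # hypothetical; used as owner's equity
sales_revenue = 5000.0         # hypothetical
cost_of_goods_sold = 3000.0    # hypothetical
net_profit = 450.0             # hypothetical
total_expenses = 4550.0        # hypothetical
accounts_receivable = 900.0    # hypothetical

gross_profit = sales_revenue - cost_of_goods_sold

ratios = {
    "Current Ratio": current_assets / current_liabilities,
    "Gearing (Debt to Equity Ratio)": total_liabilities / total_equity,
    "Gross Profit Ratio": gross_profit / sales_revenue,
    "Net Profit Ratio": net_profit / sales_revenue,
    "Return on Owner's Equity": net_profit / total_equity,
    "Expenses Ratio": total_expenses / sales_revenue,
    "Accounts Receivable Turnover": sales_revenue / accounts_receivable,
}

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
```

Expressing the profitability and expenses ratios as percentages (multiply by 100) in the report will make comparison against benchmarks and prior years easier.
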
ASSESSMENT CRITERIA
- Accuracy and completeness of financial ratios (7 marks)
- Evaluation of the business performance (12 marks)
- Strategies to improve financial performance (6 marks)
- Communicates in a cohesive business report format, using appropriate business
  terminology and concepts (5 marks)

MARKING GUIDELINES
Marks 25-30
- At least 6 out of 7 financial ratios are correctly calculated
- Demonstrates clear and accurate analysis of business performance
- Provides three to four specific and appropriate suggestions to improve Coca-Cola Amatil's financial performance
- Communicates in a cohesive business report format, using appropriate business terminology and concepts

Marks 19-24
- 4 to 5 financial ratios are correctly calculated
- Demonstrates fair analysis of business performance
- Makes three to four suggestions to improve Coca-Cola Amatil's financial performance
- Communicates in a business report format, using appropriate business terminology and concepts

Marks 14-18
- 4 to 5 financial ratios are correctly calculated
- Demonstrates effort to analyse business performance
- Makes some suggestions to improve Coca-Cola Amatil's financial performance
- Communicates in report format, using some business terminology and concepts

Marks 7-13
- Fewer than 4 financial ratios are correctly calculated
- Shows some effort to analyse business performance
- Makes vague suggestions to improve Coca-Cola Amatil's financial performance
- May not complete the report within the length specified / may not complete the task in a report format

Marks 1-6
- May not identify the correct elements in financial statements
- May mention the business performance
- May suggest improvements for financial performance
- Shows limited communication skills

Part B: Personal Reflection

In all educational settings, assessment, and the feedback associated with it, plays a significant role in promoting student motivation and learning and in ensuring that students actively improve their learning (NSW Education Standards Authority, 2017a). Assessment is the process of gathering information and making judgements about student achievement (Board of Studies, n.d.), as well as a process of quality improvement (Rockman, 2002). Proper assessment assists student learning by enabling students to demonstrate what they know and can do, and by clarifying their understanding of curriculum content (NESA, 2017a). At the same time, it gives teachers opportunities to gather evidence about student achievement in relation to syllabus outcomes (Board of Studies, n.d.). Assessment feedback helps students take responsibility for their own learning: it is individualized, and it clearly and constructively indicates students' strengths and weaknesses (NESA, n.d.). According to Australian Professional Standards for Teachers 5.1 and 5.2, qualified teachers should assess student learning and provide students with feedback on their learning (Australian Institute for Teaching and School Leadership, 2011). It is also teachers' responsibility to make consistent and comparable judgements (AITSL, 2011).

Assessment is a broad tool that takes many forms, from formal high-stakes and standardized testing to the informal assessment that occurs daily in classrooms (NESA, 2017a). Formative assessment, which gauges students' progress as learning occurs, enables teachers to gather ongoing information from the classroom and make timely adjustments to their practice accordingly (Greenstein, 2010). There is no doubt that all teachers should use formative assessment in their daily practice and provide students with appropriate feedback. However, the issue that educators have been debating, and that this reflection will focus on, is the impact of high-stakes testing, with specific reference to the HSC examination. The external HSC examination is a standardized high-stakes test that measures student achievement against a range of syllabus outcomes (NESA, 2017b). The question educators need to consider is whether the HSC examination provides a valid and reliable measurement of students' demonstration of the knowledge, understanding and skills described for each course.

High-stakes testing outcomes can have significant impacts on the test-taker (Jones & Ennes, 2018). Testing becomes high-stakes when it is used for decision-making purposes such as graduation, promotion and admission, and it is often associated with public reporting of results (Jones & Ennes, 2018). Globally, high-stakes assessment programs are increasing (Lingard et al., 2013; Smith, 2014), and Australia is no exception. It is believed that the Australian government's push for standardized high-stakes testing can be attributed to the desire to maintain public confidence in the quality of schooling, demonstrate transparency and meet public accountability requirements (Klenowski & Wyatt-Smith, 2012).

Large-scale standardized testing can have significant impacts on individual learners, teachers and schools. Academics in NSW surveyed seven Sydney schools and found that 42% of students suffered from high levels of anxiety, and 6% showed symptoms of extremely severe anxiety (Smith, 2015). Factors unrelated to the content being tested, such as anxiety, pressure and test-taking skill, can affect the validity and reliability of a test, and there have been rising concerns about the reliability and fairness of admission tests. Meta-analytic evidence shows that test anxiety is negatively correlated with test performance (Sommer & Arendasy, 2015). Under such circumstances, test fairness is compromised: two groups of test-takers of equal ability who differ in cognitive factors such as task-irrelevant thinking and anxiety may not receive identical expected results (Millsap, 2011; Lubke, Dolan, Kelderman, & Mellenbergh, 2003; Sommer & Arendasy, 2015). Test results can also be distorted by test preparation, leading to invalid interpretations of learning gains (Jones & Ennes, 2018). Test-taking skills play a key role in standardized testing: it is not uncommon for students who do well on extended-response questions to struggle with multiple-choice questions, or for students who are good at abstract thinking to fail to communicate their thoughts clearly on the exam paper. Substantial research shows that, under this pressure, teachers in many cases “teach to the test” by incorporating explicit instruction in test-taking skills into their teaching (Amrein & Berliner, 2002). High-stakes tests such as the HSC also shape the content teachers teach: teachers struggle to cover the vast content listed in state standards documents, including NSW syllabuses, in a short time, and students struggle to remember it (Reich & Bally, 2010).

At the state and school level, governments push the use of high-stakes tests as a mechanism to label and rank schools, and test scores are used as indicators of the quality of schools and instructional programs (Jones & Ennes, 2018). Under some state policies, schools are rewarded or sanctioned based on their rankings, that is, on their students' performance on standardized tests. This gives schools an incentive to raise their performance “at all costs” (Groves, 2002; Winters, Trivitt, & Greene, 2010; Nichols & Berliner, 2007). The Australian Primary Principals Association (APPA, 2013) has identified the unintended consequences that are emerging in Australia. These include pressure on principals to lift performance, effects on school funding, more attention given to the subjects that count in the accountability system and to the students who are more likely to achieve better grades, the dismissal of low-achieving students, and increased instances of cheating (APPA, 2013).

On the other hand, high-stakes testing applies a consistent standard to all students regardless of their ethnicity, socioeconomic status, gender or location (Reich & Bally, 2010). The tests allow fair comparison among peers and schools, and clearly identify learning outcomes over a period of time (Reich & Bally, 2010). Further, the reporting of scores can motivate teachers and students to achieve better grades. While countless journal articles can be found criticizing standardized and high-stakes testing, few can be found supporting it. From my perspective, though many researchers claim there is an urgent need for education reform, state-wide testing should not and cannot be replaced: it is an effective tool for measuring educational progress and monitoring schools' performance. The real issue for educators across the country is how to meet accountability demands while maintaining high-quality, high-equity teaching and learning (Klenowski & Wyatt-Smith, 2012).

The HSC uses both school-based (internal) assessment and external examinations to measure student learning against syllabus outcomes, which helps to reduce bias (NESA, 2017b). Schools in NSW have been given greater autonomy and freedom in designing their programs and school-based assessments, and are taking more responsibility for their own performance (Smith, 2005). In an evaluation of assessment practices in NSW, researchers found, somewhat surprisingly, that state-wide tests have come to be valued by most schools, teachers and parents, with little hostility remaining (Eltis, 2003). They are valued for their diagnostic scope and for their ability to locate the performance of schools and students relative to other schools and students across the State. The experience in NSW is that the proper combination of different assessment types can strike a balance between the twin goals of development and accountability (Smith, 2005). Governments should also ensure the ethical use of rewards and sanctions to prevent the unintended consequences of pressure on schools (Klenowski & Wyatt-Smith, 2012).

In my future practice, it will be important to develop realistic, clear, coherent, reliable and fair assessments, both formal and informal, that are integral to lessons and to the institution as a whole, so that they yield meaningful results. As a teacher, I will improve my assessment strategies and reflect on my practice according to the following principles. First, assessment activities should be designed and differentiated so that all students can access and participate in them on the same basis, regardless of their backgrounds (NESA, 2017b). Second, assessment should be a coherent part of the curriculum. Third, formative and summative assessment should be ongoing. Finally, and most importantly, assessment should draw on a variety of approaches and sources of information and should reflect the complexity of student learning and the full range of curriculum goals.

Reference List:

Amrein, A. L., & Berliner, D. C. (2002). An analysis of some unintended and negative consequences of high-stakes testing. Education Policy Studies Laboratory: Education Policy Research Unit. Retrieved from: https://nepc.colorado.edu/sites/default/files/EPSL-0211-125-EPRU.pdf

Australian Institute for Teaching and School Leadership (AITSL). (2011). Australian Professional Standards for Teachers. Retrieved from: https://www.aitsl.edu.au/docs/default-source/general/australian-professional-standands-for-teachers-20171006.pdf?sfvrsn=399ae83c_12

Board of Studies NSW. (n.d.). Advice on assessment. Retrieved from: http://educationstandards.nsw.edu.au/wps/wcm/connect/77bf10ac-aa30-4904-b65b-834ea4acb42f/advice_on_assessment_guide_web.pdf?MOD=AJPERES&CVID=

Eltis, K. (2003). Time to teach, time to learn: Report on the evaluation of outcomes assessment and reporting in NSW government schools. NSW Department of Education and Training.

Greenstein, L. (2010). What teachers really need to know about formative assessment. Retrieved from: https://ebookcentral.proquest.com/lib/uwsau/detail.action?docID=56413

Groves, P. (2002). 'Doesn't it feel morbid here?' High-stakes testing and the widening of the equity gap. Educational Foundations, 16(2), 15-31. Retrieved from: https://eric.ed.gov/?id=EJ660207

Jones, M. G., & Ennes, M. (2018). High-stakes testing. Oxford Bibliographies. DOI: 10.1093/OBO/9780199756810-0200

Klenowski, V., & Wyatt-Smith, C. (2012). The impact of high stakes testing: The Australian story. Assessment in Education: Principles, Policy & Practice, 19(1), 65-79. DOI: 10.1080/0969594X.2011.592972

Lingard, B., Martino, W., & Rezai-Rashti, G. (2013). Testing regimes, accountabilities and education policy: Commensurate global and national developments. Journal of Education Policy, 28(5), 1-18. DOI: 10.1080/02680939.2013.820042

Lubke, G. H., Dolan, C. V., Kelderman, H., & Mellenbergh, G. J. (2003). On the relationship between sources of within- and between-group differences and measurement invariance in the common factor model. Intelligence, 31(6), 543-566. DOI: 10.1016/S0160-2896(03)00051-5

Millsap, R. E. (2011). Statistical approaches to measurement invariance. New York: Routledge.

Nichols, S. L., & Berliner, D. C. (2007). Collateral damage: How high-stakes testing corrupts America's schools. Cambridge, MA: Harvard Education Press.

NSW Education Standards Authority (NESA). (2017a). Principles of assessment for Stage 6. Retrieved from: https://syllabus.nesa.nsw.edu.au/assets/global/files/years-11-12-assessment-advice.pdf

NSW Education Standards Authority (NESA). (2017b). Assessment and reporting in Business Studies Stage 6. Retrieved from: https://educationstandards.nsw.edu.au/wps/wcm/connect/402ca131-58ce-47c7-b6c3-6e9c1504414c/assessment-and-reporting-in-business-studies-stage-6.PDF?MOD=AJPERES&CVID=

NSW Education Standards Authority (NESA). (n.d.). Using feedback to support student learning. Retrieved from: http://educationstandards.nsw.edu.au/wps/portal/nesa/11-12/Understanding-the-curriculum/assessment/assessment-in-practice/feedback

Reich, G., & Bally, D. (2010). Get smart: Facing high-stakes testing together. The Social Studies, 101(4), 179-184. DOI: 10.1080/00377990903493838

Rockman, I. (2002). The importance of assessment. Reference Services Review, 30(3), 181-182. DOI: 10.1108/00907320210435455

Smith, A. (2015). HSC 2015: Gifted girls suffer the most stress, study finds. The Sydney Morning Herald. Retrieved from: https://www.smh.com.au/education/hsc-2015-gifted-girls-suffer-the-most-stress-study-finds-20151009-gk5glw.html

Smith, M. (2005). Data for schools in NSW: What is provided and can it help? Retrieved from: https://research.acer.edu.au/cgi/viewcontent.cgi?article=1011&context=research_conference_2005

Smith, W. C. (2014). The global transformation toward testing for accountability. Education Policy Analysis Archives, 22(116), 1-34. DOI: http://dx.doi.org/10.14507/epaa.v22.1571

Sommer, M., & Arendasy, M. E. (2015). Further evidence for the deficit account of the test anxiety-test performance relationship from a high-stakes admission testing setting. Intelligence, 53, 72-80. DOI: 10.1016/j.intell.2015.08.007

The Australian Primary Principals Association (APPA). (2013). Canvass report – Primary principals: Perspectives on NAPLAN testing & assessment. Retrieved from: https://www.appa.asn.au/wp-content/uploads/2015/08/Primary-Principals-Perspectives-NAPLAN.pdf

Winters, M. A., Trivitt, J. R., & Greene, J. P. (2010). The impact of high-stakes testing on student proficiency in low-stakes subjects: Evidence from Florida's elementary science exam. Economics of Education Review, 29(1), 138-146. DOI: 10.1016/j.econedurev.2009.07.004
