Стратегии на образователната и научната политика

2013/6, pp. 769–806

NATIONAL AND INTERNATIONAL ASSESSMENTS OF STUDENT ACHIEVEMENT

Educational Assessment

Vincent Greaney, Thomas Kellaghan

Abstract: This introductory book describes the main features of national and international assessments, both of which became extremely popular tools for determining the quality of education in the 1990s and 2000s. This increase in popularity reflects two important developments. The purposes and main features of national assessments are described in chapter 2. The reasons for carrying out a national assessment are considered in chapter 3, and the main decisions that have to be made in the design and planning of an assessment are covered in chapter 4. Issues (as well as common errors) to be borne in mind in the design, implementation, analysis, reporting, and use of a national assessment are identified in chapter 5. In chapter 6, international assessments of student achievement, which share many procedural features with national assessments (such as sampling, administration, background data collected, and methods of analysis), are described.

Keywords: national and international assessment, analysis, students, achievement

Contents
1. Introduction
2. National Assessments of Student Achievement
3. Why Carry Out a National Assessment?
4. Decisions in a National Assessment
5. Common Mistakes in the Design, Implementation, Analysis, and Reporting of a National Assessment
6. International Assessments of Student Achievement

1. Introduction

In this introductory volume we describe the main features of national and international assessments, both of which have become extremely popular in assessing the quality of education in the 1990s and 2000s. This increase in popularity reflects two important developments in the use of assessment data. First, it represents a shift from the traditional use of achievement tests to assess individual students to their use to obtain information about the achievements of the system of education as a whole (or a clearly defined part of it). And secondly, it reflects a shift in emphasis in assessing the quality of education from a concern with inputs (e.g., student participation rates, physical facilities, curriculum materials, teacher training) to a concern with outcomes, that is, the knowledge and skills that students have acquired as a result of their exposure to schooling (Kellaghan & Greaney, 2001).

National assessments are used to describe student learning achievement at a particular point in time and also to monitor changes in achievement over time. As part of its management function, a ministry of education should be able to describe national levels of learning achievement especially in key subject areas. It should also be able to use this information to compare achievement levels of key sub-groups, such as boys and girls, urban and rural students, and public and private school students. By having good quality achievement data at different points in time a ministry can also support or refute claims (and counter-claims) about rising or falling standards of student achievement.

The present series of booklets is designed to introduce readers to the complex technology that has grown up around the administration of national and international assessments. In this introductory volume, the purposes and main features of national assessments are described. Reasons for carrying out a national assessment are considered, as are the main decisions that have to be made in the design and planning of an assessment. Common mistakes in the design, implementation, analysis, and reporting of a national assessment are identified.

In the final chapter, international assessments of student achievement, which share many procedural features with national assessments (sampling, administration, the kind of background data collected, methods of analysis), are described. The main point of difference between the two is at once a strength and a weakness of an international assessment. It is a strength in that it provides data from a number of countries, allowing each individual country to compare its results with the results achieved by students in other countries; it is a weakness in that the requirement that test instruments be acceptable in all participating countries means that the instruments may not accurately reflect the achievements of students in individual countries.

A further feature of international assessments is the fact that participating countries usually carry out internal analyses based on data collected within a country. Thus, the data collected for the international study are used for what is in effect a national assessment. However, the practice is not without its problems, and data that are collected in this way may be less appropriate than if they had been collected for a dedicated national assessment.

An intermediate procedure that lies between national assessments in individual countries and large-scale international studies that span the globe is regional studies, in which a number of countries in a region that may share many socioeconomic and cultural features collaborate in a study.

A further variation is a subnational assessment, in which an assessment is confined to a province or state within a country. Subnational assessments have been carried out in a number of large countries (e.g., Argentina, Brazil, the United States of America) to meet local or regional information needs. These differ from national assessments in which all participants use the same instruments and procedures, but which allow (if numbers are large enough) disaggregation of data for regions within the country in reporting results.

In a series of appendices to the present volume, descriptions of the main features of national assessments in eight countries are provided, followed by descriptions of three international studies and three regional studies.

Details in the design and implementation of a national assessment are considered in subsequent volumes in this series. The issues addressed are:

– instrument development;

– sampling, data preparation, and management;

– data analysis;

– logistics;

– reporting and using national assessment results.

2. National assessments of student achievement

A national assessment is designed to describe the achievement of students in a curriculum area aggregated to provide an estimate of the achievement level in the education system as a whole at a particular age or grade level. It provides data for a type of national education audit carried out to inform policy makers about key aspects of the system. Normally, it involves administration of achievement tests either to a sample or a population of students, and may focus on a particular sector in the system (such as fifth-grade or 13-year-old students). Teachers and others (e.g., parents, principals) may be asked to provide background information, usually in questionnaires, which, when related to student achievement, can provide insights about how achievement is related to such factors as levels of teacher training, attitudes towards curriculum areas, and availability of teaching and learning materials.

National assessment systems in various parts of the world tend to have common features. All include an assessment of students’ language/literacy and mathematics/numeracy. Some assess students’ achievements in science, a second language, art, music, or social studies. In practically all national assessment systems, students at the primary-school level are assessed. In many systems, national assessments are also carried out at secondary school, usually during the period of compulsory education.

National assessment systems also differ from country to country. They differ in the frequency with which assessments are carried out. In some, an assessment is carried out every year, though the curriculum area that is assessed may vary from year to year. In other systems, assessments are less frequent. A variety of agencies have been employed to carry out a national assessment: the Ministry of Education, a national research center, a consortium of educational bodies, a university, and an examination board. Participation by a school may be voluntary, or may be mandated. When voluntary, non-participation of some schools may bias the results and lead to an inaccurate reflection of achievement levels in the education system.

While most industrialized countries have had systems of national assessment for some time, it was only in the 1990s that the capacity to administer assessments became available in developing countries. This followed a shift in emphasis from educational inputs to outcomes in the assessment of quality after the 1990 Jomtien Declaration (World Declaration on Education for All). Article 4 of the Jomtien Declaration states that the focus of basic education should be “on actual learning acquisition and outcome, rather than exclusively upon enrolment, continued participation in organized programs and completion of certification requirements” (p. 5). More recently, the Dakar Framework for Action (UNESCO, 2000), produced at the end of the ten-year follow-up to Jomtien, again highlighted the importance of learning outcomes. Among its list of seven agreed goals was, by 2015, to improve “all aspects of the quality of education… so that recognised and measurable outcomes are achieved by all, especially in literacy, numeracy and essential life skills” (7, iv).

These statements imply that, for countries pledged to achieving the goals of Education For All (EFA), efforts to enhance the quality of education will have to be accompanied by procedures that will provide information on students’ learning. As a result, national governments and donor agencies have greatly increased support for monitoring student achievement through national assessments. Ironically, the expectation that EFA and regular monitoring of achievement levels would result in an improvement in learning standards has not materialized. This may be because, while EFA led to rapid increases in numbers attending school, larger numbers may not have been matched by increased resources (especially trained teachers).

All assessments seek answers to one or more of the following questions.

– How well are students learning in the education system (with reference to general expectations, the aims of the curriculum, or preparation for life)?

– Is there evidence of particular strengths and weaknesses in students’ knowledge and skills?

– Do particular subgroups in the population perform poorly? Are there, for example, disparities between the achievements of boys and girls, of students in urban and rural locations, of students from different language or ethnic groups, of students in different regions of the country?

– What factors are associated with student achievement? To what extent does achievement vary with characteristics of the learning environment (e.g., school resources, teacher preparation and competence, type of school) or with students’ home and community circumstances?

– Are government standards being met in the provision of resources (e.g., textbooks, desks, and other quality inputs)?

– Do the achievements of students change over time? This question may be of particular interest if reforms of the education system are being undertaken. To answer the question, it will be necessary to carry out assessments that yield comparable data at different points in time (Kellaghan & Greaney, 2001, 2004).

Most of these questions were addressed in the design and implementation of Ethiopia’s national assessment (Box 2.1).

Box 2.1. Ethiopia: National Assessment Objectives
1. To determine the level of student academic achievement and attitude development in Ethiopian primary education.
2. To analyze variations in student achievement by region, gender, location and language of instruction.
3. To explore factors that influence student achievement in primary education.
4. To monitor the improvement of student learning achievement from the first baseline study in 1999/2000.
5. To build the capacity of the education system in national assessment.
6. To create reliable baseline data for the future.
7. To generate recommendations for policy making to improve quality education.

Source: Ethiopia. National Organization for Examinations (2005).

A feature of Vietnam’s approach to national assessment, in addition to student achievement, was a strong focus on key inputs such as physical conditions in schools, access to educational materials, and teacher qualifications (Box 2.2).

Box 2.2. Examples of Questions from Vietnam’s National Assessment
Questions related to inputs:
– What are the characteristics of Grade 5 pupils?
– What are the teaching conditions in Grade 5 classrooms and in primary schools?
– What is the general condition of school buildings?
Questions related to standards of educational provision. Were Ministry standards met regarding
– class size?
– classroom furniture?
– qualifications of staff?
Questions related to equity of school inputs. Was there equity of resources among provinces and among schools within provinces in terms of
– material resource inputs?
– human resource inputs?
Questions related to achievement:
– What percentage of pupils reached the different levels of skills in reading and mathematics?
– What was the level of Grade 5 teachers in reading and mathematics?
Questions related to influences on achievement:
– What were the major factors accounting for variance in reading and mathematics achievement?
– What were the major variables that differentiated between the most and least effective schools?

Source: World Bank (2004). Vietnam: Reading and mathematics assessment study. Washington: Author.

What are the main elements in a national assessment? While national assessments can vary in how they are implemented, they tend to have a number of common elements (Box 2.3) (Kellaghan & Greaney, 2001, 2004).

Box 2.3. Main Elements of a National Assessment
– The Ministry of Education (MOE) appoints an implementing agency either within the Ministry or an independent external body (e.g., a university department or a research organization) and provides funding.
– Policy needs to be addressed in the assessment are determined by the Ministry, sometimes in consultation with key educational stakeholders (e.g., teachers’ representatives, curriculum specialists, business people, parents).
– The MOE, or a steering committee nominated by it, identifies the population to be assessed (e.g., fourth grade students).
– The area of achievement to be assessed is determined (e.g., literacy, numeracy).
– The implementing agency prepares achievement tests, supporting questionnaires, and administration manuals.
– The tests and supporting documents are pilot-tested and subsequently reviewed to determine curricular and technical adequacy.
– The implementing agency selects the targeted sample (or population) of schools/students, arranges for printing of materials, and establishes communication with selected schools.
– Test administrators (e.g., classroom teachers, school inspectors, or graduate university students) are trained by the implementing agency.
– Survey instruments (tests and questionnaires) are administered in schools on a specified date.
– Survey instruments are collected, returned to the implementing agency, cleaned, and prepared for analysis.
– Data analysis is carried out.
– Draft reports are prepared and reviewed.
– The final report(s) is prepared and disseminated.

It is clear from the list of elements in Box 2.3 that a good deal of thought and preparation is required before students respond to assessment tasks. A body with responsibility for collecting data has to be appointed, decisions have to be made about the policy issues to be addressed, and tests and questionnaires have to be designed and tried out. In preparation for the actual testing, samples (or populations) of schools and of students have to be identified, schools have to be contacted, and test administrators selected and trained. In some countries (e.g., Vietnam and some African countries), teachers respond to the assessment tasks taken by their students. Following test administration, a lot of time and effort will be required to prepare data for analysis, to carry out analyses, and to write reports.

It is important that the student achievements that are assessed are considered to be important outcomes of schooling; that the method of sampling ensures that data that are collected adequately represent the achievements of the education system as a whole (or of a clearly identified part of it); and that analyses identify and describe the main features of the data that have been collected, including relationships between significant variables. All these activities require considerable resources and political support.
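The sampling requirement can be quantified with the standard (Kish) design-effect formula. The sketch below is purely illustrative: the sample sizes, the 20-student cluster size, and the intraclass-correlation values are hypothetical choices, not figures from the text.

```python
import math

def design_effect(avg_cluster_size, icc):
    """Kish design effect for a school-clustered sample:
    deff = 1 + (m - 1) * rho, where m is the average number of students
    tested per school and rho is the intraclass correlation (the share
    of achievement variance lying between schools)."""
    return 1 + (avg_cluster_size - 1) * icc

def required_sample(n_srs, avg_cluster_size, icc):
    """Number of students a clustered sample needs to match the
    precision of a simple random sample of n_srs students."""
    n = n_srs * design_effect(avg_cluster_size, icc)
    return math.ceil(round(n, 9))  # round off float noise before ceiling

# Hypothetical illustration: 400 students would suffice under simple
# random sampling, and 20 students are tested per sampled school.
low_icc = required_sample(400, 20, 0.1)   # deff = 2.9 -> 1160 students
high_icc = required_sample(400, 20, 0.3)  # deff = 6.7 -> 2680 students
print(low_icc, high_icc)
```

The larger the share of achievement variance lying between schools (rho), the more a clustered school sample must grow to match the precision of a simple random sample, which is why systems with high between-school variation need larger samples.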

It should be recognized that low-income countries encounter problems over and above those encountered by other countries in attempting to carry out a national assessment. To begin, education budgets may be meager; some countries devote less than 1% of GDP to education (e.g., Azerbaijan, Cambodia, Central African Republic, Congo, Georgia, Haiti, Guinea, Guinea-Bissau, Sudan) compared to over 5% in most middle-income countries. Competing demands within the education sector for activities such as school construction, teacher training, and provision of educational materials can result in non-availability of funds for monitoring educational achievement. Furthermore, many low- and indeed middle-income countries have weak institutional capacity for carrying

out a national assessment. They may also have to face additional administrative problems due to inadequate roads, mail, and telephone services. Finally, the very high between-school variation in student achievement typically found in low-income countries requires a larger sample than is required in more developed countries.

How does a national assessment differ from public examinations? It should not be assumed that public examinations can provide the information that a national assessment can, and that, if a country has a public examination system, there will not be a need for a national assessment system.

A number of features of public examinations mean that they cannot provide the kind of information that a national assessment seeks to provide. First, since public examinations play a major role in the selection of students (for the next highest level in the education system, and sometimes for jobs), they may not provide adequate coverage of the curriculum. Second, examinations, and the characteristics of students who take them, change from year to year, making comparisons over time very difficult. Third, the fact that “high stakes” are attached to performance (i.e., how students do on an examination has important consequences for them, and perhaps for teachers) has implications for the validity of the examinations. Although there are some exceptions, decisions about individual students, teachers, or schools are not normally made following a national assessment.

Fourth, information on student achievement is usually required at an earlier age than that at which public examinations are held. Fifth, the kind of contextual information (about teaching, resources, students and their homes) that is used in the interpretation of achievement data collected in national assessments is not available to interpret public examination results.

Box 2.4 summarizes the major differences between national assessments and public examinations.

Box 2.4. Differences between National Assessments and Public Examinations
– Purpose. National assessments: to provide feedback to policy makers. Public examinations: to certify and select students.
– Frequency. National assessments: for individual subjects offered on a regular basis (e.g., every four years). Public examinations: annually, and more often where the system allows for repeats.
– Duration. National assessments: one or two days. Public examinations: can extend over a few weeks.
– Who is tested? National assessments: usually a sample of students at a particular grade or age level. Public examinations: all students who wish to take the examination at the examination grade level.
– Format. National assessments: usually multiple-choice and short-answer. Public examinations: usually essay and multiple-choice.
– Stakes (importance for students, teachers, etc.). National assessments: low importance. Public examinations: great importance.
– Coverage of curriculum. National assessments: generally confined to one or two subjects. Public examinations: cover main subject areas.
– Effect on teaching. National assessments: very little direct effect. Public examinations: major; teachers tend to teach what is expected on the examination.
– Additional tuition sought for students. National assessments: very unlikely. Public examinations: frequently.
– Do students get results? National assessments: seldom. Public examinations: yes.
– Is additional information collected from students? National assessments: frequently, in student questionnaires. Public examinations: seldom.
– Scoring. National assessments: usually involves statistically sophisticated techniques. Public examinations: usually a simple process based on a predetermined marking scheme.
– Impact on level of student attainment. National assessments: unlikely to have impact. Public examinations: poor results or the prospect of failure can lead to early dropout.
– Usefulness for monitoring trends in achievement levels over time. National assessments: appropriate if tests are designed with this in mind. Public examinations: not appropriate, as examination questions and candidate populations vary from year to year.

How does a national assessment differ from classroom assessment? Classroom assessment is an integral part of the teaching-learning process. In addition to ongoing teacher observation, it also includes classroom questioning, quizzes, and marking of homework. It occurs during learning and is designed to assess students’ level of knowledge, to diagnose problems, and make decisions about the next instructional step. If well done, it can provide regular evidence of the learning of individual students and can cover a range of areas apart from memory work. A national assessment, on the other hand, is not primarily concerned with the achievements of individual students, or with any particular problems they may be experiencing. Data from individual students are aggregated to provide a picture of achievement in the education system in general. Furthermore, a

national assessment usually provides a measure of student achievement in only one, two, or three core curriculum areas, using a limited range of question formats, as infrequently as once every three or four years.

3. Why carry out a national assessment?

There are a variety of reasons why a decision might be made to carry out a national assessment.

First, an assessment may primarily play a symbolic role to legitimate state action by embracing internationally accepted models of modernity and imbuing the policy-making process with the guise of scientific rationality (Benveniste, 2000; Benveniste, 2003; Kellaghan, 2003). When this is the motivation, the act of assessment is of greater significance than its outcomes. When a national assessment is carried out simply to meet the requirement of a donor agency, it may have little more than symbolic value, and its findings may not be seriously considered in management of the education system or in policy making.

Secondly, a national assessment may reflect the efforts of a government to “modernize” its educational system by introducing a business management (corporatist) approach. This approach draws on concepts used in the world of business, such as strategic planning and a focus on deliverables and results, and may involve accountability based on performance. Viewed from this perspective, a national assessment is a tool for providing feedback on a limited number of outcome measures considered to be of interest to policy makers, politicians, and the broader educational community.

An important aspect of this approach is simply to provide information on the operation of the education system. Many governments lack basic information on key aspects of the system, including achievement levels and basic inputs to the system. National assessments can provide such information, a key prerequisite for sound policy making. For example, Vietnam’s national assessment helped establish that many classrooms lacked basic resources. In a similar vein, Zanzibar’s assessment reported that 45% of pupils lacked a place to sit. Bhutan’s national assessment noted that some students had to spend a number of hours each day travelling to and from school. Namibia’s assessment showed that many teachers had limited mastery of basic skills in English and mathematics.

An extension of describing conditions is to determine whether standards improve, deteriorate, or remain static over time. A series of studies carried out in Africa established that between 1995/96 and 2000/01 there was a significant decline in reading literacy scores in Malawi, Namibia, and Zambia. In the United States, the National Assessment of Educational Progress (NAEP) monitored levels of reading achievement over almost three decades. It found that while nine-year-old black and Hispanic children initially reduced the achievement gap with whites, the test score

differential remained fairly constant thereafter (Figure 3.1). In the U.S. also, NAEP helped identify the changing levels of reading achievement in states (Figure 3.2).

Figure 3.1. The Achievement Gap in the United States: Age 9 Students, NAEP Reading Assessment, 1971–1999

Figure 3.2. 4th Grade Students At or Above Proficient in Reading, NAEP, 1992–2003

Source: Winograd & Thorstensen, 2004.

The use that can be made of national assessment data depends on whether data were collected in a sample of schools or in a census in which information is available about all (or most) schools. In both cases, results can be used to provide direction to policy makers interested in enhancing educational quality. They can help governments identify the strength of the association between the quality of student learning and various factors over which they have some control (e.g., availability of textbooks, class size, and number of years of teacher pre-service training).

Analysis of findings can lead to decisions affecting the provision of resources in the education system in general (e.g., for the reform of curricula and textbooks, or for teacher development) or in categories of school with particular characteristics (e.g., schools in rural areas, schools serving students in socioeconomically disadvantaged areas). Many examples can be found of the use of the findings of national and international assessments for these purposes. For example, they have been used to give direction to the providers of inservice education for teachers in Africa; they have prompted curriculum reform in Eastern Europe, helped divert financial resources to poorer schools in Chile, and promoted teacher professionalism in Uruguay.

The results of a national assessment may also be used to change practice in the classroom. However, getting information to teachers and effecting changes in their behaviour that will substantially raise the achievements of students is not an easy task. The pressure on schools and classrooms to change is greatest when the results of a national assessment based on a census are available and high stakes are attached to performance. In some cases, no specific action may be taken beyond the publication of information about performance (e.g., in league tables). In others, sanctions may be attached to performance. These can take the form either of inducements for improved performance (e.g., schools and/or teachers receive money if students achieve a specific target) or of ‘punishments’ for poor performance (e.g., non-promotion of students, dismissal of teachers).

When a national assessment obtains information that can be used to make stakeholders accountable for student learning, usually it is assumed that teachers and schools bear the major responsibility. The focus is seldom on the accountability of those who are responsible for determining educational policies or for providing the funding to implement policies and programs.

4. Decisions in a national assessment

In this section, we consider the series of decisions that are involved in planning a national assessment.

1. Who should give policy guidance for the national assessment?

The Ministry of Education should appoint a National Steering Committee (NSC) to provide overall guidance to the committee which will actually carry out the assessment. Such a committee can help ensure that the national assessment

has status and that key policy questions of interest to the Ministry and others are addressed. It could also help resolve serious administrative and financial problems that might arise during the implementation of the national assessment. Giving the NSC a degree of ownership over the direction and intent of the national assessment also increases the likelihood that the results of the assessment will play a role in future policy making.

The composition of a NSC will vary from country to country depending on the power structure within the education system. In addition to the Ministry of

Education, NSCs might include representatives of major ethnic, religious, and linguistic groups as well as those whose members will be expected to act on the results, such as teacher trainers, teachers, school inspectors, and curriculum personnel. Addressing the information needs of these various stakeholders should help to ensure that the national assessment exercise does not result in a report that is criticized or ignored because of its failure to address the "correct" questions.

The NSC should not be overburdened with meetings, nor should it be required to address routine implementation tasks related to the national assessment. In some cases, it may provide direction at the initial stage by identifying the purpose and rationale of the assessment, determining the subjects and grade level(s) to be assessed, or selecting the agency or agencies to conduct the assessment, though these matters may also be decided before the committee is established. The NSC is likely to be most active at the start of the assessment exercise, while the implementing agency will be responsible for most of the detailed work, such as instrument development, sampling, analysis, and reporting. The agency, however, should provide the NSC with draft copies of tests and questionnaires and descriptions of proposed procedures, so that committee members can provide guidance and ensure that the information needs that prompted the assessment in the first place are being adequately addressed. NSC members should also review draft reports prepared by the implementing agency.

2. Who should carry out the national assessment?

A national assessment should be carried out by a credible team or organization whose work can command respect and enhance the likelihood of broad-scale acceptance of the findings. Various countries have assigned responsibility for national assessments to groups ranging from teams set up within the Ministry of Education, to autonomous bodies (universities, research centers), to non-national technical teams. One would expect a variety of factors to influence such a decision, including levels of national technical capacity as well as administrative and political circumstances. For example, while autonomous bodies may have a higher degree of functional independence and technical legitimacy, they may be disconnected from ministry information needs, and so have little impact on policy. Some potential advantages and disadvantages of different categories of implementing agencies

which merit consideration in deciding who should carry out an assessment are listed in Box 4.1.

In some cases, traditions and legislation may impose restrictions on the freedom of a Ministry of Education in choosing an implementing agency. In Argentina, for example, provinces must authorize the curricular contents to be evaluated in the national assessment. Initially, provinces were asked to produce test items; however, many provinces lacked the technical capacity. At a later stage, provinces were presented with a set of sample questions for their endorsement, and the National Direction of

Evaluation (NDE) constructed the final assessment instruments from the pool of pre-approved test items. More recently, test items have been designed independently by university personnel and approved by the national Federal Council. The NDE remains responsible for the design of achievement tests, the analyses of results, and the general co-ordination of annual assessment activities.

Box 4.1. Options for Implementing a National Assessment

1. Drawn from staff of Ministry of Education.
Advantages: Likely to be trusted by Ministry. Ready access to key personnel, materials, and data (e.g., school population data). Funds for staff time may not have to be secured.
Disadvantages: Findings might be subject to political manipulation, including suppression. May be viewed skeptically by other stakeholders. Staff may be required to undertake many other tasks. Technical capacity may be lacking.

2. Drawn from staff of Public Examination Unit.
Advantages: Usually credible. Experience in running secure assessments. Funds for staff time may not have to be secured. Some skills (e.g., test development) can be transferred to enhance the Examination Unit. More likely to be sustainable than some other models.
Disadvantages: Staff may be required to undertake many other tasks. Technical capacity may be weak. May lack ready access to data. Public examination experience may result in test items that are too difficult.

3. Drawn from research-university sector.
Advantages: Findings may be more credible with stakeholders. Greater likelihood of having some technical competence. May use data for further studies of the education system.
Disadvantages: Have to raise funds to cover staff costs. May be less sustainable than some other models. May come into conflict with Education Ministry.

4. Recruited as foreign technical assistance (TA).
Advantages: More likely to recruit a technically competent team. Nature of funding can help ensure timely completion.
Disadvantages: Likely to be expensive. May not be sensitive to educational context. Difficult to ensure assessment sustainability. Possibly little national capacity enhancement.

5. Made up of a national team supported with some international technical assistance (TA).
Advantages: Can improve technical capacity of nationals. May ensure timely completion. May add credibility to the results.
Disadvantages: Possibly difficult to coordinate work of national team members and TA. Might be difficult to ensure skill transfer to nationals.

6. Ministry team supported with national TA.
Advantages: Can ensure Ministry support while obtaining national TA. Less expensive than international TA.
Disadvantages: National TA may lack the technical capacity. Other potential disadvantages outlined in No. 1 above may apply.


It is worth reflecting on the wide variety of skills required to carry out a national assessment when deciding who should be given responsibility for the task. Furthermore, a national assessment is fundamentally a team effort. The team should be flexible, willing to work under pressure and in a collaborative manner, and prepared to learn new assessment and technological approaches. The team leader, sometimes termed the National Research Coordinator (NRC), should have strong managerial skills. He or she will be required to organize staff, coordinate and schedule activities, support training, and arrange and monitor finance. Given the need to report to a national steering committee and to liaise with national, regional, and in some instances district-level government bodies, as well as with representatives of stakeholders such as teachers and religious bodies, the coordinator should be politically astute.

The team should have high-level implementation or operational skills. Tasks to be completed include the development of training materials; the organization of workshops for item writers and test administrators; arranging for printing and distribution of tests, questionnaires, and manuals; contacting schools; and collecting and recording data. A small dedicated team of test developers will be needed to analyze the curriculum, develop tables of specifications or a test blueprint, draft items, select items after pre-testing or piloting, and advise on scoring. Following test administration, open-ended and multiple-choice questions have to be scored.
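The table of specifications (test blueprint) mentioned above can be sketched as a grid of content areas by cognitive levels, each cell holding a target item count. The content areas, cognitive levels, and counts below are hypothetical illustrations, not values prescribed in the text:

```python
# A hypothetical test blueprint for a primary mathematics test:
# target item counts for each (content area, cognitive level) cell.
blueprint = {
    ("Number", "Knowledge"): 8,
    ("Number", "Application"): 6,
    ("Measurement", "Knowledge"): 5,
    ("Measurement", "Application"): 5,
    ("Shape", "Knowledge"): 4,
    ("Shape", "Application"): 4,
    ("Data Representation", "Knowledge"): 4,
    ("Data Representation", "Application"): 4,
}

# Summaries help check that the blueprint matches curriculum emphasis.
total_items = sum(blueprint.values())
by_area = {}
for (area, _level), n in blueprint.items():
    by_area[area] = by_area.get(area, 0) + n

print(total_items)        # 40 items in the final test
print(by_area["Number"])  # 14 of them allocated to Number
```

Item writers would then draft and pilot enough items to fill each cell, so that the final instrument reflects the intended curriculum weighting rather than whatever items happened to survive piloting.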

The team will require support from one or more people with statistical and analytical competence in selecting samples, in weighting data, in data input and

file preparation, in item analysis of test data as well as general statistical analysis of the overall results, and in preparing data files for others (e.g., academics and post-graduate students) to carry out secondary analyses.

The team should have the necessary personnel to draft and disseminate results, press releases, and focused pamphlets or newsletters. It might also be reasonably expected to play a key role in organizing workshops for teachers and other educational officials to discuss the importance of the results and their implications for teaching and learning.
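The item analysis referred to here can be illustrated with two classical indices: an item's facility (the proportion of students answering it correctly) and its discrimination (the correlation between the item score and the total test score). This is a minimal sketch using invented response data, not the procedure of any particular assessment:

```python
# Scored responses (1 = correct, 0 = incorrect): rows are students,
# columns are items. The data are invented for illustration.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
]

n_students = len(responses)
n_items = len(responses[0])
totals = [sum(row) for row in responses]

def facility(item):
    """Proportion of students answering the item correctly."""
    return sum(row[item] for row in responses) / n_students

def discrimination(item):
    """Pearson correlation between item score and total score."""
    x = [row[item] for row in responses]
    mx, my = sum(x) / n_students, sum(totals) / n_students
    cov = sum((a - mx) * (b - my) for a, b in zip(x, totals))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in totals) ** 0.5
    return cov / (sx * sy)

print([facility(i) for i in range(n_items)])  # item 0 facility = 0.8
```

Items with very low facility (too difficult) or low discrimination would normally be revised or dropped after piloting.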

Most members of the team may be part-time and employed “as needed”. This category could include item writers, especially practicing teachers with a good knowledge of the curriculum. It might also include experts in areas such as sampling and statistical analysis. Team members might be recruited from outside the education sector. For example, a national census bureau can be a good source of sampling expertise. Computer personnel with relevant experience could help with data cleaning, and journalists with drafting catchy press releases.

3. Who will administer the tests and questionnaires?

National administrative traditions and perceptions of levels of trust, as well as sources of finance, tend to influence the selection of personnel responsible for administering the national assessment tests and questionnaires. Practice varies. For example, Colombia has used graduate students, while Zambia has involved school inspectors and Ministry officials in test and questionnaire administration. In Argentina, each province selects its own independent proctors. In the Maldives, a test administrator must be a staff member of a school located on an island other than the island where the targeted school is located.

4. At what level of schooling will students be assessed?

Policy makers want information about the knowledge and skills of students at selected points in their educational careers. A decision that has to be made is whether populations are defined on the basis of age or grade, or indeed a combination of age and grade. In countries in which students vary widely in the age at which they enter school, and in which policies of nonpromotion are in operation, students of similar age will not be concentrated in the same grade. In this situation, a strong argument can be made for targeting grade level rather than age.

The grade to be assessed should normally be dictated by the information needs of the Ministry of Education. If, for example, the Ministry is interested in finding out about the learning achievement levels of students completing primary school, it might request that a national assessment be carried out towards the end of the last year of primary school (5th or 6th grade in many countries). The Ministry could also request a national assessment in grades 3 or 4 if it needs data on how students are performing midway through the basic education cycle. This information could then be used to introduce remedial measures (e.g., in-service courses for teachers)

to address problems with specific aspects of the curriculum identified in the assessment.

Target grades for national assessments have varied from country to country. In the United States, student achievement levels are assessed in grades 4, 8, and 12; in Colombia, achievement is assessed at grades 3 and 5; in Uruguay, at grades 1, 2, and 6; in Sri Lanka, at grades 4, 8, and 10. In Anglophone Africa, a regional consortium of educational systems, the Southern Africa Consortium for Monitoring Educational Quality (SACMEQ), assessed grade 6 students.

Countries in the Francophone Africa consortium, Programme d’Analyse des Systèmes Educatifs de la CONFEMEN (PASEC), assess students in grades 2 and 5.

Sometimes pragmatic considerations dictate grade selection. The Nigerian Federal Ministry of Education decided to assess students in grade 4 as any lower level would have required translation of tests into 270 local languages. More senior grades were not considered suitable as students and teachers would be focused on secondary school entrance examinations.

Relatively few countries conduct large-scale assessments in grades 1 to 3. Students at this level might not be able to follow instructions, or cope with the cognitive tasks of the assessment or with the challenge of completing multiple-choice tests. A Jamaican study noted that a sizeable number of grade 1 students were unable to recognize the letters of the alphabet (Lockheed & Harris, 2005).

5. Will a whole population or a sample be assessed?

The purpose of an assessment is key to determining whether to test a sample or the entire population of targeted students. Cost is also a factor. For obvious reasons, samples are less expensive than population-based approaches. Most national studies, and all regional and international studies, use sample-based approaches in determining national achievement levels. Some national assessments use both census and sample-based approaches, while most subnational assessments collect census data.

The decision to involve an entire population in a national assessment may reflect an intention to foster school, teacher, or even student accountability. It facilitates the use of sanctions (incentives, penalties), the provision of feedback on performance to individual schools, and the publication of league tables, as well as the identification of schools with the greatest need for assistance (e.g., in Chile and Mexico). The sample-based approach, on the other hand, will only permit the detection of problems at the system level. It will not identify specific schools in need of support, though it can identify types or categories of school (e.g., small rural schools) that require attention. It can also identify problems relating to gender or ethnic equity.

In some countries, it may be difficult to define the population of schools or subpopulations to be assessed because of lack of up-to-date information on functioning

schools. Information on schools may not be accurate. Functioning schools may not appear on official lists, while non-existent (ghost) schools may be listed. Detailed information on pupil enrolment in individual grades may also be lacking or inaccurate.

6. What will be assessed?

All national assessments measure cognitive outcomes of instruction or scholastic skills (language/literacy and mathematics/numeracy), a reflection of the importance of these outcomes for basic education. In some countries, science and social studies are included in an assessment. Whatever the domain of the assessment, it

is important that an appropriate framework be provided, in the first instance for the construction of assessment instruments and afterwards for the interpretation of results. The framework may be available in a curriculum if, for example, the curriculum provides expectations for learning which are clearly prioritized and operationalized. In most cases, however, such a framework will not be available, and those charged with the national assessment will have to construct it. In this task, close cooperation will be required between the assessment agency, those responsible for curricula, and other stakeholders.

An alternative to basing an assessment instrument on curriculum-embedded expectations or even prescriptions, which is feasible in the case of older students, is to build a test to reflect the knowledge and skills that students are likely to need and build on in adult life. Thus, for example, the Programme for International Student Assessment (PISA) set out to assess the ‘mathematical literacy’ of 15-year-olds, defined as the ‘capacity to identify and understand the role that mathematics plays in the world, to make well-founded judgements and to use and engage with mathematics in ways that meet the needs of the individual’s life as a constructive, concerned and reflective citizen’ (OECD, 2003, p. 24). This approach fitted well in an international study, since the alternative of devising an assessment instrument that would be equally appropriate to a variety of curricula is obviously problematic. However, it might also be used in a national assessment.

A few national assessments have collected information on affective outcomes (e.g., student attitudes to school, student self-esteem). However, measures of these outcomes tend to be unreliable, and analyses based on them have proved difficult to interpret.

Most national assessments collect information on student, school, and home factors which are considered relevant to student achievement (e.g., student gender and educational history, including grade repetition; resources in schools, including the availability of textbooks; level of teacher education/qualifications; socioeconomic status of students’ families). The information is normally collected in questionnaires (and sometimes in interviews) administered to students, teachers, principal teachers, and parents at the same time as the assessment instruments are administered.

Identification of contextual factors related to student achievement can help in the identification of manipulable variables (e.g., time allocated to curriculum areas, the availability of textbooks) that may affect student learning and in the determination of policy relating to the allocation of financial resources.

In some assessments, teachers’ (as well as pupils’) achievements have been assessed. In Vietnam and a number of African countries in the SACMEQ studies, teachers were required to take the same test items as their students to gain some insight into their levels of subject mastery. In Uganda, information was obtained on the extent to which teachers claimed to be familiar with key official curriculum documents.

7. How will achievement be assessed?

It is necessary to devise an instrument or instruments that will provide the information that the national assessment was designed to obtain. Since the purposes and proposed uses of national assessments vary, so too will the instruments used in the assessments.

A major distinction is often drawn between norm-referenced and criterion-referenced tests. If an assessment is designed primarily to compare the achievements of different groups, it might follow the procedures for the construction of norm-referenced tests. It should be noted that the use of such tests does not preclude the possibility of examining student achievements in some detail (e.g., categorized by achievement domain). More explicit efforts may be made in instrument construction, however, to identify the extent to which students have acquired expected knowledge and skills (with perhaps judgments about the proportion of students whose achievements can be described as ‘minimum’ or ‘satisfactory’), in which case a more criterion-referenced approach will be adopted.

In practice, instrument development often involves aspects of both a norm-referenced and a criterion-referenced approach. Thus, test development might begin with specification of a framework in which expectations for learning are posited, following which test items are written to assess the extent to which students meet those expectations. However, if items do not meet certain criteria when tried out, in particular criteria relating to difficulty level, they may not be included in the final assessment instrument. Whatever approach is adopted, care should be taken to ensure that important curriculum objectives are reflected in an assessment, even if all or no students in the tryout provide evidence of achieving them.

Most national and international assessments rely to a considerable extent on the multiple-choice format in their instruments. However, these items will often be supplemented by open-ended items that require the student to write a word, phrase, or sentence.

In several national (e.g., US NAEP, Ireland) and international assessments (e.g., TIMSS, PISA), students respond to only a fraction of the total number of items used

in an assessment. This approach increases overall test coverage of the curriculum without placing too great a burden on individual students. It also allows for the use of extended passages (e.g., a short story or a lengthy newspaper article) in the assessment of reading comprehension. In other assessments, all students respond to the same set of items. While there are advantages associated with having students respond to only a fraction of items, there are also disadvantages, particularly for countries beginning a national assessment program. Administration (e.g., printing, distribution) is more complex, as is scoring and scaling of scores, while analyses

involving individual student or school data can be problematic (see Sofroniou & Kellaghan, 2004).
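The design in which each student responds to only a fraction of the items is commonly implemented by rotating blocks of items across booklets. The sketch below shows one simple rotation; the block labels and two-blocks-per-booklet layout are hypothetical, chosen so that consecutive booklets share a block (which supports later linking of scores across booklets):

```python
# Seven hypothetical item blocks rotated into seven booklets.
blocks = ["A", "B", "C", "D", "E", "F", "G"]

def make_booklets(blocks):
    """Each booklet holds two consecutive blocks (wrapping around)."""
    n = len(blocks)
    return [(blocks[i], blocks[(i + 1) % n]) for i in range(n)]

booklets = make_booklets(blocks)
print(booklets[0])  # ('A', 'B')
print(booklets[6])  # ('G', 'A')

# Each block appears in exactly two booklets, so any one student sees
# only 2/7 of the item pool while the whole pool is still covered.
appearances = {b: sum(b in bk for bk in booklets) for b in blocks}
print(appearances["A"])  # 2
```

Real assessments such as TIMSS and PISA use considerably more elaborate balanced designs, but the underlying idea of overlapping booklets is the same.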

8. How frequently will assessments be carried out?

The frequency with which a national assessment is carried out varies from country to country, from every year to every ten years. There may be a temptation to assess achievement in the same curriculum areas and in the same population every year. This is unnecessary, as well as very expensive, if the aim is to monitor national standards. In the US, reading and mathematics are assessed every second year and other subjects less frequently. The international assessment of reading literacy (PIRLS) had a five-year span between the first and second administrations (2001-2006). In Japan, achievement in core curriculum areas is assessed every ten years to guide curriculum and textbook revision (Ishino, 1995).

If the aim of an assessment is to hold teachers, schools, and even students accountable for their learning, testing may be carried out every year. Furthermore, since such an assessment focuses on the performance of individuals, as well as performance at the system level, all (or most) students in the education system will be assessed. This system has been operated in Chile and in England.

However, if the purpose of an assessment is only to provide information on the performance of the system as a whole, an assessment of a sample of students in a particular curriculum area every three to five years would seem adequate. As educational systems do not change rapidly, more frequent assessments would be unlikely to register change. Over-frequent assessments would more than likely limit the impact of the results as well as incurring unnecessary costs.

9. How should student achievement be reported?

A variety of procedures have been used to describe student achievements in national assessments (see Volume 5 of this series). These range from the simple reporting of the percentage of students answering individual items correctly to the percentage of students performing at varying levels of ‘performance standards’ (e.g., ‘basic’, ‘proficient’, ‘advanced’). Between these two extremes, performance may be reported in terms of curriculum domains or content areas (see Figure 4.1) or of ‘mastery’ of curriculum objectives.

[Figure 4.1 is a bar chart of facility (percentage of correct responses, 0-80) by content area: Number, Measurement, Shape, and Data Representation.]

Source: Examinations Council of Lesotho &
National Curriculum Development Centre (2006)

Figure 4.1

Lesotho: Grade 6 Math Performance by Content Area
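Reporting the percentage of students at each performance standard, as described above, amounts to classifying scores against agreed cut points. The cut scores, scale, and sample scores below are invented for illustration; in practice, setting the cut points is itself a substantial standard-setting exercise:

```python
# Hypothetical cut scores on a 0-500 scale, using the
# 'basic'/'proficient'/'advanced' labels mentioned in the text.
cuts = [("below basic", 0), ("basic", 240), ("proficient", 300), ("advanced", 360)]

def classify(score):
    """Return the highest standard whose cut score the student reached."""
    label = cuts[0][0]
    for name, cut in cuts:
        if score >= cut:
            label = name
    return label

scores = [210, 250, 310, 305, 400, 230, 290]  # invented sample
counts = {name: 0 for name, _ in cuts}
for s in scores:
    counts[classify(s)] += 1

# Percentage of students at each standard.
pct = {name: round(100 * c / len(scores), 1) for name, c in counts.items()}
print(pct)
```

The same classification, applied to real scaled scores, yields the kind of distribution-by-standard tables that national assessment reports typically present.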

10. What kinds of statistical analysis should be carried out?

Some analyses will be dictated by the policy questions that prompted the assessment in the first instance. Most national assessments provide evidence on achievement by gender, region, urban/rural location, ethnic or language group membership, and type of institution attended (public/private). Analyses may also throw light on the school and background factors that contribute to achievement, and so merit consideration in educational policy-making and decision-making.

The limitations of these analyses, and the problems of inferring causation from studies in which data on achievement and other variables are collected at the same time, should be recognized. It is difficult to disentangle the effects of community, home, and school factors on students’ learning. This has not deterred some investigators from assuming that data collected in national and international assessments can be interpreted causally. For example, in a study using TIMSS data in South Africa entitled ‘System-level evaluation: Language and other factors affecting mathematics achievement’, the question posed for analysis was: ‘What factors in school level, class level and student level influence pupils’ performance in mathematics?’ (Howie, 2005, p. 177). Use of the words ‘affecting’ and ‘influence’ implies a cause-and-effect relationship that is not warranted.

The fact that students in larger classes achieve at a higher level than students in smaller classes is sometimes interpreted to mean that a reduction in class size would do nothing to improve achievement. However, again the conclusion may not be warranted if, for example, closer analysis revealed that smaller classes are found in rural areas and have poorer resources, including less qualified teachers, while larger classes, located in urban schools, have better resources, including more highly qualified teachers. They probably also include more students from relatively advantaged home backgrounds.

11. How should the results of a national assessment be communicated and used?

If the reports of a national assessment are to have an impact on national educational policy, they should be produced as soon as possible after the completion of data analysis. In the past, technical reports which featured a considerable amount of data and technical terminology tended to be the sole form of reporting. It is now increasingly recognized that other forms of reports are required. These include short summary reports for busy policy makers which focus on the main findings, press releases, special reports for radio and television, and separate reports for groups such as curriculum developers and teacher trainers. In some countries (e.g. Sri Lanka), separate reports are prepared for each province. The information needs of readers should determine the contents of additional reports. Volume 5 in this series has an extensive section on report writing.

National assessment results have been used to set benchmarks for monitoring learning achievement levels (e.g., Lesotho), to support curriculum reform, to provide baseline data on the amount and quality of educational materials in schools (e.g., Vietnam), to identify correlates of achievement, and to diagnose aspects of the curriculum which are not being mastered by students. Uruguay, for instance, used its national assessment results to help prepare teacher guides and to identify the curriculum content and behavioural areas that subsequently helped direct a large-scale teacher in-service program.

5. Common mistakes in the design, implementation, analysis, and reporting of a national assessment

Serious mistakes can undermine the degree of confidence placed in results and can render them of limited value, or even worthless, for policy making. In this section, we identify some common mistakes made at the planning, implementation, analysis, report-writing, and results-use stages of national assessments carried out to date. National assessment teams should study these and take steps to avoid making similar mistakes in their own national studies.

Planning Mistakes

– Failing to make adequate financial provision for key aspects of a national assessment, including report writing and dissemination.

– Failing to set up a national steering committee, and to use it as a source of information and guidance during the course of the national assessment.

– Lack of government ownership of and commitment to the process of national assessment, reflected in the absence of a national steering committee or in simultaneous competing ongoing national assessments (often supported by external donors).

– Failing to involve key stakeholders (e.g. teachers’ representatives, teacher trainers) in the planning of the national assessment.

– Omitting a key sub-group from the population assessed (e.g., private schools, students in remote areas).

– Setting unrealistic test score targets (e.g., 25% increase in scores over a four-year period).

– Allowing inadequate time for test development.

Implementation Mistakes

– Assigning test development tasks to people who are unfamiliar with the likely levels of student performance (e.g., academics), resulting in tests that are too difficult.

– Inadequate representation of the curriculum in tests.

– Failing to pilot test items.

– Using an inadequate number of test items in the final version of the test.

– Failing to design items that strike a balance between what students should know and what they actually know, resulting in items that are too difficult.

– Failing to give a clear definition of the construct being assessed (e.g., reading).

– Including an insufficient number of sample items for students who are unfamiliar with the testing format.

– Not encouraging students to seek clarification from the test supervisor prior to taking the test.

– Failing to give adequate notification to printers of tests, questionnaires, and manuals.

– Paying insufficient attention to proof reading tests, questionnaires, and administrative manuals.

– Using inadequate/out-of-date national data on pupils and school numbers for sampling.

– Failing to carry out proper sampling procedures, including selecting a predetermined percentage of schools (e.g., 5%).

– Allowing local administrators and school officials too much flexibility in selecting schools and students to participate in the national assessment.

– Giving inadequate training to test and questionnaire administrators.

– Allowing outside intervention (e.g., principal sitting in the classroom) during test administration.

– Allowing students to sit close to each other during the assessment (encourages copying).

– Failing to establish a tradition of working outside normal work hours to complete key tasks on time.

Analysis Mistakes

– Using inappropriate statistical analyses, including failing to weight sample data in analysis.

– Basing results on small numbers (e.g., a minority of sampled teachers might have responded to a particular question).

– Contrasting student performance in different subject areas and claiming that students are doing better in one subject area based on mean score differences.

– Failing to emphasize the arbitrary nature of selected test score cut-off points (e.g., mastery/non mastery, pass/fail), dichotomizing results, and failing to recognize the wide range of test scores in a group.

– Not reporting standard errors associated with individual statistics.

– Computing and publicizing school rankings based on achievement test results without taking into account key contextual factors which contribute to the ranking. Different rankings emerge when school performances are compared using unadjusted performance scores, scores adjusted for contextual factors (e.g., the percentage of students entitled to free school meals), and scores adjusted for earlier achievement.

– Inferring causation where it might not be justified, e.g., attributing differences in learning achievement to one variable (e.g., private school administration, class size).

– Comparing test results over two time periods even though non-equivalent test items were used.

– Comparing test results over two time periods without reporting the extent to which important background conditions (e.g., curriculum, enrollment, household income, or level of civil strife) might have changed in the interim. While most education-related variables tend not to change rapidly over a short time period (e.g., 3-4 years), some countries have introduced policies which have resulted in major changes in enrollment. Uganda, prompted by the EFA agenda, greatly increased the number of students enrolling in schools. Malawi’s policy of dropping school fees also led to a rapid enrollment increase.

– Limiting analysis in the main to a listing of mean scores of geographical or administrative regions.
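Two of the analysis mistakes listed above, failing to weight sample data and omitting standard errors, can be illustrated together. This sketch computes a weighted mean score and a simple standard error; the scores and weights are invented, and a real national assessment would compute errors that reflect stratification and clustering (e.g., by jackknife or balanced repeated replication) rather than the naive formula used here:

```python
import math

# Invented student scores and sampling weights (the inverse of each
# student's selection probability). Over-sampled groups get smaller
# weights so estimates reflect population proportions.
scores  = [52.0, 61.0, 47.0, 70.0, 58.0]
weights = [1.0, 2.0, 1.0, 4.0, 2.0]

def weighted_mean(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

mean = weighted_mean(scores, weights)

# A deliberately simple standard error of the weighted mean; complex
# sample designs require replication or linearization methods instead.
n = len(scores)
var = sum(w * (s - mean) ** 2 for s, w in zip(scores, weights)) / sum(weights)
se = math.sqrt(var / n)

print(round(mean, 2))             # weighted estimate: 61.7
print(round(sum(scores) / n, 2))  # unweighted mean differs: 57.6
```

The gap between the weighted and unweighted means shows why ignoring weights biases national estimates, and the standard error is what makes reported differences between groups interpretable.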

Report-Writing Mistakes

– Writing overly-technical reports.

– Failing to highlight a few main findings.

– Making recommendations in relation to a specific variable even though the analysis questioned the validity of the data on that variable.

– Failure to relate assessment results to curriculum, textbook, and teacher training issues.

– Not acknowledging that factors outside the control of the teacher and the school contribute to test score performance.

– Failing to recognize that differences between mean scores may not be statistically significant.

– Producing the report too late to influence relevant policy decisions.

– Doing an over-extensive review of literature in the assessment report.

– Failing to publicize the key relevant messages of the report to separate stakeholder audiences.

Mistakes with Results

– Ignoring the results when it comes to policy making.

– Failure of key stakeholders (e.g., teacher trainers, curriculum personnel) to consider the implications of the national assessment findings.

– Failure of the national assessment team to reflect on lessons learned and to take note of these in follow-up assessments.

Box 5.1 contains a summary of points which might be used as a check list to help improve the overall quality of national assessment design, implementation, analysis, report writing, and follow-on activities. Not all of these are likely to be appropriate for every national assessment, as education systems tend to operate under different political, financial, and administrative constraints.

Box 5.1 Checklist of Suggestions for Improving the Quality and Impact of National Assessments

1. Involve senior policy makers from the outset to ensure political support and to help frame the assessment design.
2. Address the information needs of policy makers when selecting aspects of the curriculum, grade levels, and population subgroups (e.g., by region, by gender) to be assessed.
3. Obtain teacher support by involving their representatives in assessment-related policy decisions.
4. Be aware that attaching high stakes to students' performance may lead to teacher opposition, and a narrowing of the effective curriculum to subjects or aspects of subjects that are assessed.
5. Define precisely the constructs and skills to be assessed.
6. Secure the services of a person/unit with sampling expertise.
7. Specify the defined target population (the population from which a sample will actually be drawn) (the sampling frame), and the excluded population (e.g., elements of the population that are too difficult to reach or that would not be able to respond to the instrument).
8. Ensure that the proposed sample is representative and is of sufficient size to provide information on populations of interest with an acceptable level of error.
9. Select members of the sample from the sampling frame according to known probabilities of selection.
10. Entrust test development to personnel who are familiar both with curriculum standards and learning levels of students (especially practicing teachers).
11. Use assessment instruments that adequately assess the knowledge and skills about which information is required, and which will provide information on sub-domains of knowledge or skills (e.g., problem solving) rather than just an overall score.
12. Develop clear and unambiguous test and questionnaire items and present them in a clear and attractive manner.
13. Pilot test and revise items, questionnaires, and manuals.
14. Proof-read all materials carefully.
15. Follow a standard procedure when administering tests and questionnaires. Prepare an administration manual.
16. Ensure that test administrators are thoroughly familiar with the contents of tests, questionnaires, and manuals, and with administrative procedures.
17. Prepare and implement a quality assurance mechanism to ensure that administration procedures are followed.
18. Secure competent statistical services.
19. Prepare a codebook with specific directions for preparing data for analysis.
20. Check/clean data to remove errors (e.g., relating to numbers, out-of-range scores, and mismatches between data collected at different levels).
21. Calculate sampling errors taking into account complexities in the sample, such as stratification and clustering.
22. Weight data so that the contribution of the various sectors of the sample to aggregate achievement scores reflects their proportions in the target population.
23. Identify the percentage of students who met defined acceptable levels/standards.
24. Report results by gender and by region, if sample design permits.
25. Analyze assessment data to identify factors that might account for variation in student achievement levels or for research studies to help frame policy making.
26. Analyze results by curriculum domain. Provide information on the sub-domains of a curriculum area (e.g., aspects of reading, mathematics).
27. Recognize that it is not usually possible to make direct comparisons between performances in different curriculum areas such as reading and mathematics.
28. Recognize that a variety of measurement, curricular, and social factors may account for student performance.
29. Prepare reports in a timely manner with the needs of clients in mind, and present them in a format that is readily understood by interested parties, especially those in a position to make decisions.
30. Provide adequate information in the report or in a technical manual to allow for replication of the assessment.
31. Use results to provide direction for pre- and in-service teacher education courses and for curriculum authorities.
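Checklist items 21 and 22 can be made concrete with a minimal sketch: when one stratum of the sample is drawn at a higher rate than its share of the target population, an unweighted mean misstates national achievement, while weighting each stratum by its known population share corrects the estimate. All strata, shares, and scores below are hypothetical illustration values, not data from any actual assessment.

```python
# Hypothetical sampling frame shares and sampled scores: urban schools were
# oversampled (3 of 5 sampled students) relative to their 40% population share.
population_share = {"urban": 0.40, "rural": 0.60}
sample_scores = {
    "urban": [520, 540, 500],
    "rural": [470, 490],
}

def weighted_mean(shares, scores):
    """Weight each stratum's mean score by its share of the target population."""
    return sum(shares[s] * (sum(scores[s]) / len(scores[s])) for s in shares)

# Unweighted mean treats every sampled student as equally representative.
unweighted = (sum(sum(v) for v in sample_scores.values())
              / sum(len(v) for v in sample_scores.values()))
weighted = weighted_mean(population_share, sample_scores)
print(round(unweighted, 1), round(weighted, 1))  # oversampling urban inflates the unweighted mean
```

The same logic applies with many strata; operational assessments also attach design weights at the school and student level and compute sampling errors that respect stratification and clustering (item 21), which this sketch does not attempt.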

6. International Assessments of Student Achievement

An international assessment of student achievement is similar in many ways to a national assessment. Both exercises make use of similar procedures (in instrument construction, sampling, scoring, and analysis). They also may have similar purposes: to determine how well students are learning in the education system; to identify particular strengths and weaknesses in the knowledge and skills that students have acquired; to compare the achievements of subgroups in the population (e.g., defined in terms of gender or location); to determine the relationship between student achievement and a variety of characteristics of the school learning environment and of homes and communities. Furthermore, both exercises may attempt to establish if student achievements change over time (Kellaghan & Greaney, 2004).

The main difference between an international and a national assessment is that the former is carried out in more than one country and has the objective of providing policy makers, educators, and the general public with information about their education system in relation to one or more other systems. It is hoped that the information will contribute to a greater understanding of the factors (that vary from country to country) that contribute to differences in student achievement. Much of the interest in international assessments can also be attributed to the belief that human capital (in particular those aspects of it represented by mathematics and science achievements) plays an important role in economic growth. As a consequence, education policy around the world has increasingly focused on improving aggregate student achievement as a means to increase economic growth. There is some research evidence to support this approach, though it is not entirely consistent across countries or over time (Hanushek & Kimko, 2000; Ramirez, Luo, Schofer, & Meyer, 2006).

The strength of the belief in the importance of human capital, and that international studies adequately assess it, is reflected in the fact that since the 1960s, over 60 countries have participated in international studies of achievement in one or more of a variety of curriculum areas: reading, mathematics, science, writing, literature, foreign languages, civic education, and computer literacy. The best known international assessments are the IEA Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS) and the OECD Programme for International Student Assessment (PISA). Regional international assessments in reading and mathematics have been carried out in Southern and Eastern Africa (SACMEQ), Francophone Africa (PASEC), and Latin America (LLECE).

The results of international assessments (TIMSS and PISA) and regional assessments can be, and have been, used to prepare separate national reports on country-level performance. International databases can be accessed to carry out such analyses.

Countries vary considerably in the extent to which they rely on international and national assessment results for policy making. Many developed countries conduct their own national assessments as well as participating in international assessments. The United States has its own National Assessment of Educational Progress (NAEP) for grades 4, 8, and 12 and also participates in international assessments of achievement. Some countries participated in international assessments but did not conduct national assessments (e.g., the Russian Federation, Germany). India, on the other hand, has run large-scale national assessments for some decades but, with one exception, has not participated in a major international assessment. Many of the world’s poorest countries do not participate in international assessments or carry out national assessments, though the situation has changed in recent years.

The curriculum areas that have attracted the largest participation rates in international studies over the years are reading comprehension, mathematics, and science. Studies have been carried out at primary and secondary school levels. Usually, a combination of grade and age is used to determine who will participate (e.g., students in two adjacent grades that contain the largest proportions of 9-year olds and 13-year olds in TIMSS; students in the grade levels containing most 9-year olds and most 14-year olds in the IEA Study of Reading Literacy; the upper of two adjacent grades with the most 9-year olds in PIRLS). In another study, separate age and grade samples were selected (the 1964 IEA First Mathematics Study). In yet another, students of a particular age were selected (15-year olds in PISA).

The number of countries participating in international studies has increased over the years. While typically fewer than 20 countries participated up to the 1980s, the IEA Reading Literacy Study attracted 32 countries in 1991. In 2003, 52 countries participated in TIMSS and 41 in PISA (30 member states of the OECD and 11 'partner' countries). Furthermore, international studies in recent years have accorded a major focus to monitoring performance over time. All three major current international assessments (TIMSS, PIRLS, PISA) are administered on a cyclic basis and are now described as 'trend' studies.

Participation by non-industrialized countries in international studies has generally been low. However, in line with the general increase in the number of countries that have taken part in international studies, the number has increased over the years. TIMSS attracted the largest number in 2003 (seven from Africa and nine from Asia and the Middle East). As was the case generally in international studies, non-industrialized countries have shown a greater interest in taking part in studies of mathematics and reading than in studies of other curriculum areas.

Advantages of International Assessments

A variety of reasons have been proposed to encourage countries to participate in an international assessment of student achievement. Perhaps the most obvious is that international studies provide a comparative framework in which to assess student achievement and curricular provision in a country. In several studies, comparisons of curricula have led to decisions to increase the focus on a curriculum area or to include or exclude curriculum material. In South Africa, for example, poor achievement results from TIMSS led to increased allocation of resources for science and mathematics (Reddy, 2005). The results of an international assessment may also identify curriculum areas (e.g., formal grammar, problem solving in mathematics, science) which differ from curricula in other countries, or have not kept up to date with developments in other countries. Following such an analysis, countries can decide to alter aspects of the curriculum or, having reflected on alternatives, to stick with those aspects of the curriculum that are considered appropriate for local conditions.

By comparing results from different countries, assessment results can also be used to help define what is achievable, how achievement is distributed, and relationships between average achievement and its distribution. For example, can high average achievement co-exist with narrow disparities in performance?

International studies are likely to attract the attention of the media and of a broad spectrum of stakeholders (politicians, policymakers, academics, teachers, the public). Differences between countries in levels of achievement will be obvious in the descriptive statistics provided in reports of the studies and, indeed, these are usually highlighted in ‘league tables’ in which countries are ranked in terms of their mean level of achievement. The comparative data provided in the studies will have more ‘shock value’ than the results of a national assessment. Poor results can encourage debate, which in turn may provide politicians and other policymakers with a rationale for increased budgetary support for the education sector, particularly if poor results are associated with a lower level of expenditure on education.

Data on achievement provide only limited information. On the basis that international studies can capitalize on the variability that exists across education systems, broadening the range of conditions that can be studied beyond those operating in any one country, analyses of data collected in international studies routinely consider associations between achievement on the one hand and a wide range of contextual variables (system-wide, school-level, and student-level) on the other. The range of variables considered includes curriculum content, time spent on school work, teacher training, class size, and organization of the education system. Clearly, the value of international studies is enhanced to the extent that they provide researchers and policy makers with information that suggests hypotheses about the reasons students differ in their achievements from country to country, as well as a basis for the evaluation of policy and practices.

International studies also have the potential to bring to light concepts for understanding education that have been overlooked in a country (e.g., in definitions of literacy, in conceptualizing curricula in terms of intention, implementation, and achievement). They can also help identify and lead to questioning assumptions that may be taken for granted (e.g., the value of comprehensive vs. selective education; that smaller class sizes are associated with higher achievement; that grade repetition benefits students).

International studies may also have a role to play in monitoring trends in achievement over time. This, of course, can also be achieved in a national assessment.

Finally, studies may contribute to the development of local capacity in a variety of technical activities: sampling, defining achievements, developing tests, statistical analysis, and report writing. Furthermore, staffing requirements and costs (for example, for instrument development, data cleaning, and analysis) may be lower than in national assessments since costs are shared with other countries.

Problems with International Assessments

Despite these obvious advantages, a number of problems associated with international assessments merit consideration before making a decision whether or not to participate in one.

First, it is difficult to design an assessment procedure that will adequately measure the outcomes of a variety of curricula. Any assessment can only incorporate a partial representation of any aspect of educational achievement. Although there are common elements in curricula across the world, particularly at the primary school level, there are also considerable differences between countries in what is taught and in expected standards of achievement.

South Africa's review of TIMSS items showed that only 18% of the science items matched the national curriculum of grade 7, while 50% matched the grade 8 curriculum (Howie & Hughes, 2000). The greater the difference between the curricula and levels of achievement of countries participating in an international assessment, the more difficult it is to devise an assessment procedure that will suit all countries and the more doubtful the validity of any inferences that are made about comparative achievements. Qualitative differences between curricula and in the structure of students' achievements will not be adequately reflected in quantitative comparative data.

One would expect an achievement test based on the content of a national curriculum to provide a more valid measure of curriculum mastery than one which was designed to serve as a common denominator of the curricula on offer in 30 to 40 countries. For example, a national curriculum authority and the designers of an international assessment might assign quite different weights of importance to a skill such as drawing inferences from a text. A national assessment, as opposed to an international assessment, can also test curricular aspects which are unique to individual countries.

It may be noted that assessment studies generally reflect the values of industrialized countries and so may not take sufficient account of the national goals of participating developing countries, which, in the case of South Africa, for example, include transformation goals of access, redress, equity, and quality (Reddy, 2005).

It would seem more difficult to devise a common assessment instrument for some curriculum areas (e.g., science, social studies) than for others (e.g., reading). In the case of science, for example, achievement patterns have been found to be more heterogeneous than in mathematics. Furthermore, a greater number of factors is required to account for student performance differences in science than in mathematics. Thus, it is difficult to envisage a science test that would be appropriate for a variety of education systems.

A second problem with international studies is that while early studies had the ambitious aim of assessing the relative importance of a variety of school resources and instructional processes, in practice this turned out to be very difficult to do. A major problem is that, since most studies are cross-sectional, their findings cannot be interpreted as representing causal relationships. This means that data from the studies relating achievement to background variables can, at best, only provide clues about what variables might be worth considering for manipulation. Further, they cannot give any assurance on what the effects of any manipulation might be.

Here, it may be noted that school and out-of-school contextual and socioeconomic factors can be very different in developing countries from those that prevail in industrialized countries, and include poverty, nutritional and health factors, and poor educational infrastructure and resourcing. However, a study designed for industrialized countries may not accord these adequate recognition.

A third problem relates to difficulties in analysis and interpretation arising from the complexity of interdependencies that exist among variables. A related issue is how to present the results of analysis that may be extremely complex in a form that will be intelligible to non-specialists. This is also a concern in national assessments, but the fact that the relative effect of variables depends on the context in which they are embedded adds to the complexity in international studies. Thus, it cannot be assumed that practices associated with high achievement in one country will show a similar relationship in another. In fact, the strength of correlations between background factors and achievement has been found to vary from country to country.

Fourth, analytic problems are also created by the fact that in international assessments (and increasingly in national assessments) each student takes only a fraction of a large number of assessment tasks. While this has the advantage that it is possible to extend the range of curriculum coverage in tests without increasing the burden on individual respondents, it has the disadvantage that the sample of achievement data obtained from individual students may be less than satisfactory when used in analyses which seek to describe relationships between the achievements of individual students and other factors (see Sofroniou & Kellaghan, 2004).
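The rotated-booklet design described above can be sketched in a few lines: the item pool is divided into blocks and each student answers only a subset, so curriculum coverage grows without lengthening any individual test. This is a minimal illustration, not the actual TIMSS or PISA booklet design; the pool size and block layout are hypothetical.

```python
# Hypothetical pool of 90 items, split into six 15-item blocks.
ITEM_POOL = list(range(1, 91))
BLOCKS = [ITEM_POOL[i:i + 15] for i in range(0, 90, 15)]

def booklet(student_index):
    """Assign each student two adjacent blocks, cycling through the pool."""
    first = student_index % len(BLOCKS)
    second = (first + 1) % len(BLOCKS)
    return BLOCKS[first] + BLOCKS[second]

covered = set()
for s in range(6):                  # six students between them see every item,
    covered.update(booklet(s))      # though each answers only 30 of the 90
print(len(booklet(0)), len(covered))
```

The sketch also makes the disadvantage visible: any single student's record covers only part of the domain, which is why analyses relating an individual's achievement to background factors rest on a thinner sample of that student's performance.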

Fifth, a particular problem arises in both international and national assessments if it is necessary to translate the instruments of the assessment into one or more languages. If comparisons are to be made between performances assessed in different languages, it should be realized that the differences that may emerge may be attributable to language-related differences in the difficulty of assessment tasks. The issue is partly addressed by changing words. For example, in South Africa, words such as gasoline (petrol) and flashlight (torch) had to be changed. Ghana replaced the word "snow" with "rain". Problems involving calendar time can pose a difficulty in Ethiopia where the Coptic calendar has a thirteen-month year. It is not always possible to ensure, however, that the way questions are phrased and the cultural appropriateness of content are equivalent in all language versions of an assessment task. For example, material which is context-appropriate for African students, covering hunting, the local market place, agricultural pursuits, and local games, might be relatively unfamiliar in middle and high income countries.

Sixth, the populations and samples of students participating in international assessments may not be strictly comparable. Differences in performance might arise because countries differ in the extent to which categories of students are removed from mainstream classes and so may be excluded from an assessment (e.g., students in special programmes, students in schools in which the language of instruction differs from the language of the assessment). The problem is most obvious where retention and dropout rates differ from one country to another, and is particularly relevant in studies in which industrialized and developing countries participate. In some developing countries, large proportions of students have dropped out well before the end of the period of compulsory schooling. While primary school net enrolment ratios for Western Europe and North America are almost 100 per cent, the ratios for countries in Sub-Saharan Africa are, on average, less than 60 per cent (UNESCO, 2002). Patterns of early drop-out can differ from country to country. In Latin American and Arab countries, boys are more likely than girls not to complete Grade 5; the reverse is true in some African countries (e.g. Guinea, Mozambique).

Seventh, variation in test score performance is an important factor in determining correlates of learning achievement. While carefully designed national tests can help ensure a relatively wide distribution of test scores, many items in international assessments have been too difficult for students from less developed countries, resulting in restricted test score variance. This is reflected in the data presented in Figure 6.1, which is based on the results of a selection of countries that participated in the TIMSS 2003 study. The data show the percentage of Grade 8 students that reached levels or benchmarks of performance based on all students who took the test. Roughly three-quarters of those who took the mathematics test achieved the "low" international benchmark, one-half the "intermediate" benchmark, 23% the "high" benchmark, and 7% the "advanced" benchmark. In sharp contrast, 9% of Ghanaian students achieved the low benchmark and 2% the intermediate benchmark. Zero per cent achieved the "advanced" and "high" international benchmarks. Similarly, on PISA 2003, the limited utility of the assessment for internal policy making was underscored by the lack of test score variance in a number of participating countries; the majority of 15-year olds in Brazil, Indonesia, and Tunisia scored below Level 1. (It has been suggested that Level 2 be considered a minimum requirement for students entering the world of work and further education.)

Figure 6.1
Percentage of Students Reaching TIMSS International Benchmarks1 in Mathematics, Grade 8: High- and Low-Scoring Countries.

Countries            Advanced   High   Intermediate   Low
Singapore               44       77        93          99
Chinese Taipei          38       66        85          96
Rep. of Korea           35       70        90          98
Internat. Average        7       23        49          74
Philippines              0        3        14          39
Bahrain                  0        2        17          51
South Africa             0        2         6          10
Tunisia                  0        1        15          55
Morocco                  0        1        10          42
Botswana                 0        1         7          32
Saudi Arabia             0        0         3          19
Ghana                    0        0         2           9

1 Definitions used in TIMSS 2003:

Advanced: Students can organize information, make generalizations, solve nonroutine problems, and draw and justify conclusions from data.

High: Students can apply their understanding and knowledge in a wide variety of relatively complex situations.

Intermediate: Students can apply basic mathematical knowledge in straightforward situations.

Low: Students have some basic mathematical knowledge.

Source: Mullis et al (2004), p. 64

Publisher source: http://timss.bc.edu/PDF/t03_download/T03_M_Chap2.pdf

Eighth, a problem arises when the primary focus in reporting the results of an international assessment is on the ranking of countries in terms of the average scores of their students, usually the main interest of the media. Rankings in themselves tell us nothing about the many factors that may underlie differences between countries in performance. Furthermore, rankings can be misleading when the statistical significance of mean differences in achievement is ignored, and rankings can vary depending on the countries that participate, an important consideration when rankings over time are compared. Thus, for example, if the number of traditionally high achieving countries decreases and the number of traditionally low achieving countries increases, a country's ranking may increase without necessarily implying an improvement in achievement.
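The point about statistical significance can be made concrete with a small sketch: two countries whose mean scores differ by a few points may be indistinguishable once sampling error is taken into account, so their league-table order carries no real information. The means and standard errors below are hypothetical; in practice, standard errors for complex samples come from the assessment's technical documentation, not from a simple formula.

```python
import math

# Hypothetical country means and standard errors of those means.
country_a = {"mean": 505.0, "se": 3.2}
country_b = {"mean": 499.0, "se": 3.5}

def significantly_different(a, b, z=1.96):
    """Two-sided test at the 5% level on the difference of two country means."""
    diff = a["mean"] - b["mean"]
    se_diff = math.sqrt(a["se"] ** 2 + b["se"] ** 2)
    return abs(diff) > z * se_diff

# A 6-point gap looks decisive in a league table, but here it lies within
# sampling error, so ranking A above B is not statistically defensible.
print(significantly_different(country_a, country_b))
```

A gap of 20 points or more with the same standard errors would, by contrast, clear the threshold, which is why published reports flag which pairwise differences are significant rather than presenting raw ranks alone.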

Ninth, poor performance in an international assessment (as well as in a national assessment) can carry with it some political risks for key officials associated with the delivery of education, including Ministers and Secretaries of Education. The risk is likely to be greater when the international rank of a country is lower than that of a traditional rival country. Some countries have actually collected data and refused to allow them to be included in between-country published comparisons when they saw the results. Such an approach possibly hinders the development of a culture of rigorous assessment and evaluation within a country.

Obtaining comparative data for neighboring countries or countries within a region would seem more appropriate than obtaining data for countries across the world that differ greatly in their level of socioeconomic development. Thus, ten Latin American and Caribbean countries jointly carried out an assessment of basic competencies in language and mathematics in 1997. The SACMEQ assessments, carried out in southern and eastern Africa by a network of ministries in the 1990s, also allowed for international comparisons at a regional level.

Tenth, the primary purpose of both national and international assessments is to provide objective information on key aspects of the quality of the education system to enhance policy making. The likelihood that policy makers will use the results is enhanced when they have contributed to the design and implementation of the assessment. National assessments can be more easily tailored than international assessments to address the policy needs of key policy makers. For example, national policy makers can ensure that the assessment is directed at the grade level of most interest to them (e.g., the final year of primary school). Up to now, PISA has assessed the achievement levels of 15-year olds; in many developing countries, the majority of young people have left school by this age.

Eleventh, the demands of meeting deadlines may prove very difficult in countries that lack administrative personnel and have to cope with a poor communications infrastructure (see Box 6.1). Time allowed for carrying out various tasks (e.g., printing, distribution of booklets) associated with an international assessment, which may be deemed reasonable in developed countries, may be insufficient given the range of basic problems, including poor communication systems, that exist in many developing countries. Getting support staff can be difficult, especially where Ministries of Education are slow in releasing funds to pay for services.

Box 6.1 South Africa’s Experience with International Assessments

South Africa’s experience with TIMSS and TIMSS-R underlines the problems facing implementers of international assessments. Deadlines imposed by organizers can be difficult, if not impossible, to meet in situations where there may be no mail or telephone services or funds for travel to schools. Other problems include lack of accurate population data on schools; poor management skills; insufficient attention to detail, especially in editing, coding, and data capture; lack of funding to support project workers; and difficulty in securing quality printing on time. Instructions to test administrators, for example to walk up and down the aisle, are obviously inappropriate when classrooms do not have an aisle.

Finally, in considering participation in an international assessment, countries with limited resources have to decide if the use of financial and human resources can be justified when a great many other needs are demanding attention (e.g., the provision of teachers, running water, or electricity in schools). There are substantial costs associated with participation in an international study. A country participating in TIMSS for grade 8 was expected to pay US$40,000 plus all costs associated with printing, distribution, test administration, data entry, and scoring. It should also be recognized, of course, that national assessments also have considerable associated costs.

REFERENCES

Benveniste, L. (2002). The political structuration of assessment: Negotiating state power and legitimacy. Comparative Education Review, 46, 89–111.

Bernard, J-M (1999). Les enseignants du primaire dans cinq pays du Programme d'Analyse des Systèmes Educatifs de la CONFEMEN: Le rôle du maître dans le processus d'acquisition des élèves. Rapport réalisé pour le groupe de travail sur la profession enseignante, Section francophone de l'ADEA.

Carroll, D. (1996). The grade 3 and 5 assessment in Egypt. In P. Murphy et al (Eds.), National assessments: Testing the system (pp. 157–165). Washington DC: World Bank.

CONFEMEN (1999). Les facteurs de l'efficacité dans l'enseignement primaire: Les résultats du programme PASEC sur neuf pays d'Afrique et de l'Océan Indien. Dakar: Author.

Elley, (2005)

Elley, W.B. (Ed.). (1994). The IEA Study of Reading Literacy: Achievement and Instruction in Thirty-two School Systems. Oxford: Pergamon.

Ferrer, G. (2006). Educational assessment systems in Latin America: Current practice and future challenges. Washington DC: Partnership for Educational Revitalization in the Americas (PREAL).

Hanushek, E.A., & Kimko, D.D. (2000). Schooling, labor-force quality, and the growth of nations. American Economic Review, 90, 1184-1208.

Howie, S. (2005). System-level evaluation: Language and other factors affecting mathematics achievement. Prospects, 35, 175–186.

Ishino, T. (1995). Japan. In Performance standards in education. In search of quality, (pp. 149-161). Paris: OECD.

Johnson, E.G. (1992). The design of the National Assessment of Educational Progress. Journal of Educational Measurement, 29, 95-110.

Kellaghan, T. (2003). Local, national and international levels of system evaluation. Introduction. In T. Kellaghan & D.L. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 873–882). Dordrecht: Kluwer Academic.

Kellaghan, T., & Greaney, V. (2001). Using assessment to improve the quality of education. Paris: UNESCO: International Institute for Educational Planning.

Kulpoo, D., & Coustère, P. (1999). Developing national capacities for assessment and monitoring through effective partnerships. In Partnerships for capacity building and quality improvements in education: Papers from the ADEA 1997 biennial meeting, Dakar. Paris: ADEA (Association for the Development of Education in Africa).

Mullis, I.V.S., Martin, M.O., Gonzalez, E.J., & Chrostowski, S.J. (2004). TIMSS 2003 International Mathematics Report: Findings From IEA’s Trends in International Mathematics and Science Study at the Fourth and Eighth Grades. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.

Mullis, I.V.S., Martin, M.O., Gonzalez, E.J., & Kennedy, A.M. (2003). PIRLS 2001 international report: IEA’s study of reading literacy achievement in primary schools. Chestnut Hill MA: PIRLS International Study Center, Boston College.

Murimba, S. (2005a). The impact of the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ). Prospects, 35, 91–108

Murimba, S. (2005b). The Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ): Mission approach and projects. Prospects, 35,75-89.

OECD (Organisation for Economic Co-operation and Development). (2003). The PISA 2003 assessment framework: Reading, mathematics, science and problem solving knowledge and skills. Paris: Author.

OECD. (2004). Learning for tomorrow’s world. First results for PISA 2003. Paris: Author.

Postlethwaite, T. N. (2004). What do international assessment studies tell us about the quality of schools systems? Background paper for EFA Global Monitoring Report 2005.

Prakash, V., Gautam, S.K.S., & Bansal, I.K. (2000). Student achievement under MAS: Appraisal in Phase-II States. New Delhi: National Council of Educational Research and Training.

Ramirez, F.O., Luo, X., Schofer, E., & Meyer, J.W. (2006). Student achievement and national economic growth. American Journal of Education, 113, 1-29.

Ravella, P. (2005b). Personal communication, March 2.

Štraus, M. (2005). International comparisons of student achievement as indicators for educational policy in Slovenia. Prospects, 35, 187–198.

Task Force on Education Reform in Central America (2000). Tomorrow is too late. http://thedialogue.org/publications/preal/tomorrow.pdf

UNESCO. (2000). The Dakar Framework for Action. Education for All: Meeting our collective commitments. Paris: Author.

UNESCO. (2001). Technical report of the first international comparative study. Santiago, Chile: OREALC

UNESCO. (2002). EFA global monitoring report, 2002: Is the world on track? Paris: Author.

UNESCO. (2003). Monitoring Learning Achievement (MLA) Project. Update. Paris: Author.

UNESCO. (2005) Education for All global monitoring report 2005 – The Quality Imperative. Paris: Author.

U.S. National Center for Education Statistics. (2005). National Assessment of Educational Progress: The nation’s report card, Reading 2005. Washington DC: Author.

U.S. National Center for Education Statistics. (2006) NAEP overview. http://nces.ed.gov/nationsreportcard/about/

Greaney, Vincent; Kellaghan, Thomas. 2008. Assessing National Achievement Levels in Education. © Washington, DC: World Bank. https://openknowledge.worldbank.org/handle/10986/6904 License: CC BY 3.0 Unported.

Vincent Greaney, Thomas Kellaghan


PATENT PROTECTION OF DIGITAL TWINS

Dr. Vladislava Pаcheva, Chief Assist. Prof.

Книжка 1

МНОГОСТРАНЕН ПОДХОД ЗА ИЗСЛЕДВАНЕ РАВНИЩЕТО НА ДИГИТАЛИЗАЦИЯ В ПОДГОТОВКАТА НА БЪДЕЩИ УЧИТЕЛИ

Доц. д-р Бистра Мизова, проф. д-р Румяна Пейчева-Форсайт Проф. д-р Харви Мелър

2024 година
Книжка 6s
DISRUPTIVE TECHNOLOGIES RISK MANAGEMENT

Dr. Miglena Molhova-Vladova, Dr. Ivaylo B. Ivanov

THE DUAL IMPACT OF ARTIFICIAL INTELLIGENCE: CATALYST FOR INNOVATION OR THREAT TO STABILITY

Prof. Diana Antonova, Dr. Silvia Beloeva, Assist. Prof., Ana Todorova, PhD student

MARKETING IN TOURISM: PRACTICAL EVIDENCES

Dr. Fahri Idriz, Assoc. Prof.

DEVELOPMENT OF THE INFORMATION ECONOMY CONCEPT AND THE TRANSITION TO INDUSTRY 5.0

Dr. Dora Doncheva, Assist. Prof., Dr. Dimitrina Stoyancheva, Assoc. Prof.

THE GLOBAL MARKET AS A PROJECTION OF THE INFORMATION ECONOMY

Dr. Vanya Hadzhieva, Assist. Prof. Dr. Dora Doncheva, Assist. Prof.

ACADEMIC ENTREPRENEURSHIP: PRACTICAL RESULTS AND TRAINING

Prof. Nikolay Sterev, DSc., Dr. Daniel Yordanov, Assoc. Prof.

Книжка 6
AN INTEGRATIVE APPROACH TO ORGANIZING THE FORMATION OF STUDENTS’ COGNITIVE INDEPENDENCE IN CONDITIONS OF INTENSIFICATION OF LEARNING ACTIVITIES

Dr. Albina Volkotrubova, Assoc. Prof. Aidai Kasymova Prof. Zoriana Hbur, DSc. Assoc. Prof. Antonina Kichuk, DSc. Dr. Svitlana Koshova, Assoc. Prof. Dr. Svitlana Khodakivska, Assoc. Prof.

ИНОВАТИВЕН МОДЕЛ НА ПРОЕКТНО БАЗИРАНО ОБУЧЕНИЕ НА ГИМНАЗИАЛНИ УЧИТЕЛИ: ДОБРА ПРАКТИКА ОТ УниБИТ

Проф. д-р Жоржета Назърска, доц. д-р Александър Каракачанов, проф. д-р Магдалена Гарванова, доц. д-р Нина Дебрюне

Книжка 5s
КОНЦЕПТУАЛНА РАМКА ЗА ИЗПОЛЗВАНЕ НА ИЗКУСТВЕНИЯ ИНТЕЛЕКТ ВЪВ ВИСШЕТО ОБРАЗОВАНИЕ

Акад. д.н. Христо Белоев, проф. д.н. Валентина Войноховска, проф. д-р Ангел Смрикаров

ИЗКУСТВЕНИЯТ ИНТЕЛЕКТ В БИЗНЕСА – ФИНАНСОВИ, ИКОНОМИЧЕСКИ И МАРКЕТИНГОВИ АСПЕКТИ

Проф. д-р Андрей Захариев, доц. д-р Драгомир Илиев Гл. ас. д-р Даниела Илиева

RECENT TRENDS AND APPLICATIONS OF THE ARTIFICIAL INTELLIGENCE IN THE EDUCATION

Prof. Dr. Plamen Zahariev, Prof. Dr. Georgi Hristov, Prof. Dr. Ivan Beloev

COMPARATIVE ANALYSIS OF UTILIZING POPULAR INTELLIGENT COMPUTER SYSTEMS IN EDUCATION

Dr. Galina Ivanova, Assoc. Prof. Dr. Aleksandar Ivanov, Assoc. Prof.

CONCEPTUAL MODEL OF TRAINING IN REMOTE VIRTUAL SUPERVISION IN SOCIAL WORK

Dr. Silviya Beloeva, Assist. Prof. Dr. Nataliya Venelinova, Assist. Prof.

ИЗСЛЕДВАНЕ ПРИЛОЖИМОСТТА НА БЛОКОВИ ВЕРИГИ ОТ ПЪРВО НИВО (L1) В СИСТЕМА ЗА ЕЛЕКТРОННО ОБУЧЕНИЕ

Андриан Минчев, проф. Ваня Стойкова, гл. ас. д-р Галя Шивачева Доц д-р Анелия Иванова

DIGITAL DISCRIMINATION RISKS IN THE TRANSFORMATION OF HIGHER EDUCATION

Dr. Silviya Beloeva, Assist. Prof. Dr. Nataliya Venelinova, Assist. Prof.

OPPORTUNITIES, CHALLENGES AND SOLUTIONS FOR DIGITAL TRANSFORMATION OF THE EDUCATIONAL PROCESSES THROUGH 3D TECHNOLOGIES

Prof. Georgi Hristov, Prof. Plamen Zahariev, Dr. Diyana Kinaneva, Assist. Prof., Georgi Georgiev, Assist. Prof.

ДИГИТАЛНОТО ПОКОЛЕНИЕ VS. СЛЯТОТО, ПОЛУСЛЯТОТО И РАЗДЕЛНОТО ПИСАНЕ

Доц. д-р Владислав Маринов, ас. Анита Тодоранова

OPPORTUNITIES AND CHALLENGES FOR THE EDUCATION OF STUDENTS WITH SPECIAL EDUCATIONAL NEEDS IN THE DIGITAL ENVIRONMENT: THE NEW NORMAL

Prof. Julia Doncheva, DSc., Dr. Galina Ivanova, Assoc. Prof. Dilshod Tojievich Oblokulov

ИЗГРАЖДАНЕ НА КОМПЕТЕНЦИИ ЗА РАЗРАБОТВАНЕ НА STEM ОБУЧИТЕЛНИ РЕСУРСИ У БЪДЕЩИ УЧИТЕЛИ ПО ПРИРОДНИ НАУКИ

Доц. д-р Евгения Горанова, проф. д.н. Валентина Войноховска, проф. д-р Ангел Смрикаров

APPLICATION OF ZSPACE TECHNOLOGY IN THE DISCIPLINES OF THE STEM CYCLE

Boyana Ivanova, Assist. Prof. Dr. Kamelia Shoilekova, Assoc. Prof. Dr. Desislava Atanasova, Assoc. Prof. Dr. Rumen Rusev, Assoc. Prof.

TEACHERS' ADAPTATION TO CHANGES IN AN INCREASINGLY COMPLEX WORLD THROUGH THE USE OF AI

Prof. Zhanat Nurbekova, Kanagat Baigusheva, Kalima Tuenbaeva, Bakyt Nurbekov Prof. Tsvetomir Vassilev

АТОСЕКУНДНОТО ОБУЧЕНИЕ – МЕТАФОРА НА ДНЕШНОТО ОБРАЗОВАНИЕ

Проф. д.н. Юлия Дончева, Денис Асенов, проф. д-р Ангел Смрикаров проф. д-р Цветомир Василев

APPLICATION AND ASSESSMENT OF DIGITAL RESOURCES IN THE EDUCATION OF FUTURE PEDAGOGUES

Dr. Galina Ivanova, Assoc. Prof., Dr. Milena Velikova, Assist. Prof.

IDENTIFYING PLAYER TYPES IN THE CLASSROOM FOR EFFECTIVE GAMIFICATION

Dr. Desislava Atanasova, Assoc. Prof., Viliana Molnar

DEVELOPMENT AND INTEGRATION OF AUDIO AND VISUAL MICRO-RESOURCES IN THE LEARNING PROCESS THROUGH THE USE OF ARTIFICIAL INTELLIGENCE SYSTEMS

Dr. Petya Stefanova, Assist. Prof., Dr. Assist. Elitsa Ibryamova, Assist. Prof., Prof. Angel Smrikarov, Dr. Galina Ivanova, Assoc. Prof.

АНАЛИЗ НА ПРОГРАМНИТЕ МОДЕЛИ ЗА АВТОМАТИЗИРАНЕ НА КОГНИТИВНИ ПРОЦЕСИ

Доц. д-р Валентин Атанасов Доц. д-р Анелия Иванова

Книжка 5
MANAGING A POSITIVE AND LIFE-SKILLS DEVELOPMENT IN THE SCHOOL-BASED CURRICULA: A LITERATURE REVIEW ON THE SUSTAINABLE EDUCATION

Dr. Lindita Durmishi, Assoc. Prof., Dr. Ardian Durmishi Prof. Milena Filipova Dr. Silva Ibrahimi

APPLICATION OF THE COMPETENCY MODEL IN BUSINESS ADMINISTARATION HIGHER EDUCATION IN HORIZON 2030

Prof. Nadya Mironova, Dr. Tatyana Kicheva, Assoc. Prof., Dr. Miglena Angelova, Assoc. Prof.

Книжка 4s
THE EDUCATION AND RESEARCH IN THE QUADRUPLE HELIX AND THE REGIONAL INNOVATION PROSPECTS

Prof. Dr. Milen Baltov Dr. Stela Baltova, Assoc. Prof. Dr. Vilyana Ruseva, Assoc. Prof.

Книжка 4
ATTITUDES OF STUDENTS – FUTURE TEACHERS, FOR THE APPLICATION OF GENERATIVE ARTIFICIAL INTELLIGENCE

Assoc. Prof. Nikolay Tsankov, DSc. Dr. Ivo Damyanov, Assist. Prof.

EDUCATIONAL NEEDS OF THE JUDICIAL ADMINISTRATION IN THE CONTEXT OF DIGITALIZATION

Dr. Diana Dimitrova, Dr. Darina Dimitrova, Assoc. Prof., Dr. Velina Koleva

MANAGERIAL ASPECTS OF COOPERATION AMONG HIGHER EDUCATION INSTITUTIONS AND THEIR STAKEHOLDERS

Prof. Olha Prokopenko, DSc. Dr. Svitlana Perova, Assoc. Prof. Prof. Tokhir Rakhimov, DSc.

APPLICATION OF EDUCATIONAL STRATEGIES IN STUDYING THE DYNAMICS OF STATE POWER STRUCTURES: IMPLEMENTATION OF FORMAL AND INFORMAL MECHANISMS OF INFLUENCE

Prof. Stoyan Denchev, DSc. Dr. Miriyana Pavlova, Assist. Prof. Dr. Steliana Yordanova, Assist. Prof.

ДИАГНОСТИКА НА ФОРМИРАНАТА ПРОФЕСИОНАЛНА КОМПЕТЕНТНОСТ НА БЪДЕЩИ ИНЖЕНЕРИ ПО ЕНЕРГЕТИКА

Гл. ас. д-р Надя Илиева Доц. д-р Елена Бояджиева Ивалина Маринова

Книжка 3s
A MODEL FOR CALCULATING THE INDIRECT ADDED VALUE OF AI FOR BUSINESS

Dr. Petya Biolcheva, Assoc. Prof., Prof. Nikolay Sterev, DSc.

AI EFFECTIVENESS AND RISK ASSESSMENT OF INVESTMENTS IN HIGH-RISK START-UPS

Sotir Ivanov, PhD Student, Dr. Petya Biolcheva, Assoc. Prof.

COMPETITIVENESS OF TEXTILE PRODUCERS IN DIGITAL BUSINESS ERA

Prof. Nikolay Sterev, DSc., Dr. Vyara Milusheva, Assoc. Prof.

CHALLANGES OF USING ARTIFICIAL INTELLIGENCE IN MANAGEMENT DECISION MAKING

Dr. Bozhana Stoycheva, Assist. Prof. Dr. Pavel Vitliemov, Assoc. Prof.

THE SIGNIFICANCE OF ERASMUS+ MOBILITY IN BUSINESS EDUCATION: AN EXAMINATION OF A SUCCESSFUL BULGARIAN-MEXICAN COLLABORATION

Dr. Lyudmila Mihaylova, Assoc. Prof. Dr. Emil Papazov, Assoc. Prof. Dr. Diana E. Woolfolk Ruiz

Книжка 3
ИГРОВИ ПОДХОДИ В ОБУЧЕНИЕТО: УНИВЕРСИТЕТСКИ КОНТЕКСТ

Проф. д.н. Цветан Давидков Силвия Тонева, докторант

Книжка 2
FORMATION OF PROFESSIONAL SKILLS OF AGRICULTURAL ENGINEERS DURING LABORATORY PRACTICE WHEN STUDYING FUNDAMENTAL SCIENCE

Dr. Ivan Beloev, Assoc. Prof. Dr. Oksana Bulgakova, Assoc. Prof., Dr. Oksana Zakhutska, Assoc. Prof., Dr. Maria Bondar, Assoc. Prof. Dr. Lesia Zbaravska, Assoc. Prof.

ИМИДЖ НА УНИВЕРСИТЕТА

Проф. д.п.н. Галя Христозова

Книжка 1s
COMPETITIVENESS AS A RESULT OF CREATIVITY AND INNOVATION

Dr. Nikolay Krushkov, Assoc. Prof. Dr. Ralitza Zayakova-Krushkova

INNOVATION, TECHNICAL PROGRESS AND ECONOMIC DEVELOPMENT

Dr. Aleksandar Aleksandrov, Assist. Prof.

ENHANCING ECONOMIC SECURITY THROUGH INTELLECTUAL PROPERTY

Dr. Dimiter Gantchev, Assist. Prof.

INTELLECTUAL PROPERTY AND SECURITY IN THE INTEGRATED CIRCUITS INDUSTRY

Dr. Ivan Nachev, Dr. Yuliana Tomova, Iskren Konstantinov, PhD student, Marina Spasova, student

GREEN TRADEMARKS AND SUSTAINABILITY

Dr. Silviya Todorova, Assist. Prof.

ARTIFICIAL INTELLIGENCE AND ITS PROTECTION AS AN INVENTION

Dr. Vladislava Pаcheva, Assist. Prof.

Книжка 1
PROBLEMS AND PERSPECTIVES FOR SOCIAL ENTREPRENEURSHIP IN HIGHER EDUCATION

Prof. Dr. Milena Filipova Prof. Dr. Olha Prokopenko Prof. Dr. Igor Matyushenko, Dr. Olena Khanova, Assoc. Prof. Dr. Olga Shirobokova, Assoc. Prof. Dr. Ardian Durmishi

RESEARCH OF USING THE SYSTEM APPROACH TO INCREASE PROFESSIONAL COMPETENCE OF STUDENTS IN THE PROCESS OF STUDYING NATURAL SCIENCES

Dr. Ivan Beloev, Assoc. Prof. Dr. Іnna Savytska, Assoc. Prof., Dr. Oksana Bulgakova, Assoc. Prof. Prof. Iryna Yasinetska, Dr. Lesia Zbaravska, Assoc. Prof.

2023 година
Книжка 6s
TRANSFORMING MARITIME EDUCATION FOR A DIGITAL INDUSTRY

Dr. Christiana Atanasova, Assist. Prof.

DEVELOPMENT OF A COMMON INFORMATION SYSTEM TO CREATE A DIGITAL CAREER CENTER TOGETHER WITH PARTNER HIGHER SCHOOLS

Prof. Dr. Yordanka Angelova, Dr. Rossen Radonov, Assoc. Prof. Vasil Kuzmov, Assist. Prof. Stela Zhorzh Derelieva-Konstantinova

DRAFTING A DIGITAL TRANSFORMATION STRATEGY FOR PROJECT MANAGEMENT SECTOR – EMPIRICAL STUDY ON UAE

Mounir el Khatib, Shikha al Ali, Ibrahim Alharam, Ali Alhajeri Dr. Gabriela Peneva, Assist. Prof., Prof. Jordanka Angelova, Mahmoud Shanaa

VOYAGE OF LEARNING: CRUISE SHIPS WEATHER ROUTING AND MARITIME EDUCATION

Prof. Svetlana Dimitrakieva, Dr. Dobrin Milev, Assist. Prof., Dr. Christiana Atanasova, Assist. Prof.

RESEARCH ON THE SUSTAINABLE DEVELOPMENT COMPETENCES OF THE LANDSCAPE ARCHITECT IN PRACTICE

Land. arch. Elena Dragozova, Assoc. Prof., Dr. Stanislava Kovacheva, Assoc. Prof.

STUDY OF THE KEY FACTORS INFLUENCING THE EFFECTIVE PLANNING AND UTILIZATION OF PRODUCTION FACILITIES IN THE INDUSTRIAL ENTERPRISE

Dr. Tanya Panayotova, Assoc. Prof., Dr. Krasimira Dimitrova, Assoc. Prof., Neli Veleva, PhD student

SIMULATOR TRAINING – UNIQUE POWERFUL INSTRUMENT FOR EDUCATING, SKILLS CREATING, MITIGATING SKILLS AND RESILIENCE CREATING

Prof. Dimitar Dimitrakiev, Vencislav Stankov, Assist. Prof., Dr. Christiana Atanasova, Assist. Prof.

СТРУКТУРНИ ПРОМЕНИ В ОБУЧЕНИЕТО НА МЕНИДЖЪРИ ЗА ИНДУСТРИЯ 5.0

Доц. д-р Недко Минчев, доц. д-р Венета Христова, гл. ас. д-р Иван Стоянов

RESEARCH OF THE INNOVATION CAPACITY OF AGRICULTURAL PRODUCERS

Dr. Siya Veleva, Assoc. Prof.; Prof. Dr. Eng. Margarita Mondeshka Dr. Anka Tsvetanova, Assoc. Prof.,

Книжка 6
Книжка 5s
ПРЕСЕЧНАТА ТОЧКА НА СПОРТА, СИГУРНОСТТА И КРИПТО ФЕН ТОКЕНИТЕ

Полк. доц. Георги Маринов Доц. Милена Кулева

ВИДОВЕ ТРАВМИ В ПАРАШУТИЗМА И ПРЕВЕНЦИЯТА ИМ

Капитан III ранг Георги Калинов

ОБУЧЕНИЕ В ХОДЕНЕ С ПОМОЩНИ СРЕДСТВА – РИСКОВЕ И СИГУРНОСТ ЗА ПАЦИЕНТА

Атанас Друмев Доц. д-р Данелина Вачева, доц. д-р Искра Петкова

Книжка 5
ПОДХОДИ ЗА ПСИХОСОЦИАЛНА ПОДКРЕПА НА УНИВЕРСИТЕТСКИ ПРЕПОДАВАТЕЛИ В УСЛОВИЯ НА КРИЗА

Доц. д.н. Цветелина Търпоманова, доц. д.н. Веселина Славова

Книжка 4s
DETERMINING THE DEGREE OF DIGITALIZATION OF A HIGHER EDUCATION INSTITUTION

Acad. DSc. Hristo Beloev, Prof. Dr. Angel Smrikarov, Assoc. Prof. DSc. Valentina Voinohovska, Assoc. Prof. Dr. Galina Ivanova

A STUDY ON THE POSSIBILITIES TO INTEGRATE THE MODERN 3D TECHNOLOGIES IN THE SCIENTIFIC ACTIVITIES OF THE HIGHER EDUCATION INSTITUTIONS

Prof. Dr. Georgi Hristov, Assoc. Prof. Dr. Ivan Beloev, Assoc. Prof. Dr. Plamen Zahariev, Assist. Prof. Dr. Diyana Kinaneva, Assist. Prof. Georgi Georgiev

THE ROLE OF THE UNIVERSITIES AS ACCELERATORS FOR THE INTEGRATION OF THE STEM LEARNING METHODS IN THE PRIMARY AND SECONDARY SCHOOLS

Prof. Dr. Georgi Hristov, Assoc. Prof. Dr. Ivan Beloev, Assoc. Prof. Dr. Plamen Zahariev, Assist. Prof. Georgi Georgiev

ОТ STEM КЪМ BEST: ДВА СТАНДАРТА, ЕДНА ЦЕЛ

Проф. д-р Андрей Захариев, проф. д-р Стефан Симеонов, гл. ас. д-р Таня Тодорова

ВЪЗМОЖНОСТИ ЗА ПРИЛОЖЕНИЕ НА БЛОКЧЕЙН ТЕХНОЛОГИЯТА В ОБРАЗОВАНИЕТО

Докторант Андриан Минчев, доц. д-р Ваня Стойкова

ПРЕДИЗВИКАТЕЛСТВА НА ДИГИТАЛНАТА ТРАНСФОРМАЦИЯ В ОБРАЗОВАНИЕТО – СРАВНИТЕЛЕН АНАЛИЗ НА СТУДЕНТСКОТО МНЕНИЕ

Гл. ас. д-р Мирослава Бонева, доц. д-р Антон Недялков, проф. д.н. Милена Кирова

CHALLENGES, REQUIREMENTS, OPPORTUNITIES AND SOLUTIONS FOR THE DIGITAL TRANSFORMATION OF THE TRANSPORT EDUCATION

Prof. Dr. Georgi Hristov, Assoc. Prof. Dr. Ivan Beloev, Assoc. Prof. Dr. Plamen Zahariev

Книжка 4
EFFECT OF RESILIENCE ON BURNOUT IN ONLINE LEARNING ENVIRONMENT

Dr. Radina Stoyanova, Prof. Sonya Karabeliova, Petya Pandurova, Dr. Nadezhda Zheckova Dr. Kaloyan Mitev

STATE AND PROSPECTS OF DEVELOPMENT OF ACADEMIC MOBILITY IN THE SYSTEM OF TRAINING A SPECIAL EDUCATION SPECIALIST

Dr. Tetiana Dokuchyna, Assoc. Prof., Prof. Dr. Svitlana Myronova, Dr. Tetiana Franchuk, Assoc. Prof.

Книжка 3s
STRATEGIES AND POLICIES TO SUPPORT THE DEVELOPMENT OF AI TECHNOLOGIES IN EUROPE

Assoc. Prof. Miglena Molhova, Assoc. Prof. Petya Biolcheva

BULGARIA'S TECHNOLOGICAL DEVELOPMENT THROUGH THE PRISM OF HIGHER EDUCATION POLICIES

Assoc. Prof. Ivaylo B. Ivanov, Assoc. Prof. Miglena Molhova

INTELLIGENT ANIMAL HUSBANDRY: FARMER ATTITUDES AND A ROADMAP FOR IMPLEMENTATION

Prof. Dr. Dimitrios Petropoulos, Koutroubis Fotios Assoc. Prof. Petya Biolcheva Evgeni Valchev

EFFECTIVE MANAGEMENT OF HUMAN RESOURCES IN TOURISM THROUGH MOTIVATION

Assoc. Prof. Fahri Idriz Assoc. Prof. Marin Geshkov

Книжка 3
САМООЦЕНКА НА ОБЩООБРАЗОВАТЕЛНИТЕ И РЕСУРСНИТЕ УЧИТЕЛИ ЗА РАБОТА В ПАРАДИГМАТА НА ПРИОБЩАВАЩОТО ОБРАЗОВАНИЕ

Проф. д.н. Милен Замфиров, проф. Емилия Евгениева, проф. Маргарита Бакрачева

STUDY OF THE DEVELOPMENT OF THE USE OF COMMUNICATIVE TECHNOLOGIES IN THE EDUCATIONAL PROCESS OF ENGINEERS TRAINING

Assoc. Prof. Ivan Beloev, Assoc. Prof. Valentina Vasileva Assoc. Prof. Sergii Bilan, Assoc. Prof. Maria Bondar, Assoc. Prof. Oksana Bulgakova, Assoc. Prof. Lyubov Shymko

SAFETY THROUGH ARTIFICIAL INTELLIGENCE IN THE MARITIME INDUSTRY

Assoc. Prof. Petya Biolcheva Evgeni Valchev, PhD student

Книжка 2
РАЗПОЛОЖЕНИЕ НА ВИСШИТЕ УЧИЛИЩА В БЪЛГАРИЯ В КОНТЕКСТА НА ФОРМИРАНЕ НА ПАЗАРА НА ТРУДА

Гл. ас. д-р Цветелина Берберова-Вълчева, доц. д-р Камен Петров, доц. д-р Николай Цонков

CHARACTERISTICS AND COMPONENTS OF THE CYBER HYGIENE AS A SUBCLASS OF CYBER SECURITY IN MILITARY ENVIRONMENT AND EDUCATIONAL ISSUES

Prof. Boyan Mednikarov, DSc. Prof. Yuliyan Tsonev Dr. Borislav Nikolov, Prof. Andon Lazarov, DSc.

Книжка 1
MODERNIZATION OF THE CONTENT OF THE LECTURE COURSE IN PHYSICS FOR TRAINING FUTURE AGRICULTURAL ENGINEERS

Dr. Ivan Beloev, Assoc. Prof., Dr. Valentina Vasileva, Assoc. Prof. Prof. Vasyl Shynkaruk, DSc., Assoc. Prof. Oksana Bulgakova, Assoc. Prof. Maria Bondar Assoc. Prof. Lesia Zbaravska, Assoc. Prof. Sergii Slobodian

THE NEW PANDEMIC NORMAL THROUGH THE EYES OF BULGARIAN STUDENTS

Prof. Vyara Stoilova, Assoc. Prof. Todorka Kineva

2022 година
Книжка 6
ORGANIZATION OF AN INCLUSIVE EDUCATIONAL ENVIRONMENT FOR THE STUDENTS WITH SPECIAL NEEDS

Prof. Halyna Bilavych Prof. Nataliia Bakhmat Prof. Tetyana Pantiuk, Prof. Mykola Pantiuk Prof. Borys Savchuk

ДИГИТАЛИЗАЦИЯ НА ОБРАЗОВАНИЕТО В БЪЛГАРИЯ: СЪСТОЯНИЕ И ОБЩИ ТЕНДЕНЦИИ

Д-р Теодора Върбанова, проф. д-р Албена Вуцова, доц. д-р Николай Нетов

СКРИНИНГ НА ЗРЕНИЕТО – ПРОФИЛАКТИКА И ЕЛЕМЕНТ ОТ ПРАКТИКАТА НА СТУДЕНТИ И ОБУЧЕНИЕТО НА УЧЕНИЦИ

Руска Драганова-Христова, д-р Славена Стойкова, доц. д-р Снежана Йорданова

Книжка 5
ПРАВОТО НА ИЗБОР В ЖИВОТА НА ДЕЦАТА В РЕПУБЛИКА БЪЛГАРИЯ

Проф. д.п.н. Сийка Чавдарова-Костова, гл. ас. д-р Даниела Рачева, ас. Екатерина Томова, доц. д-р Росица Симеонова

SUSTAINABLE PROFESSIONAL DEVELOPMENT THROUGH COACHING: BENEFITS FOR TEACHERS AND LEARNERS

Assoc. Prof. Irina Ivanova, Assoc. Prof. Penka Kozhuharova, Prof. Rumyana Todorova

SELF-ASSESSMENT – A COMPONENT OF THE COMPETENCE-BASED TRAINING IN THE PROFESSION “APPLIED PROGRAMMER”

Assoc. Prof. Ivaylo Staribratov, Muharem Mollov, Rosen Valchev Petar Petrov

Книжка 4
BENCHMARKING FOR DEVELOPMENT OF SPEED AND POWER CHARACTERISTICS

Assist. Prof. Dr. Darinka Ignatova Assoc. Prof. Dr. Alexander Iliev

DIAGNOSIS AS A TOOL FOR MONITORING THE EFFECTIVENESS OF ADDICTION PREVENTION IN ADOLESCENTS

Prof. O.A. Selivanova Assoc. Prof. N.V. Bystrova, Assoc. Prof. I.I. Derecha, Assoc. Prof. T.S. Mamontova, Assoc. Prof. O.V. Panfilova

Книжка 3
ПУБЛИЧНОТО РАЗБИРАНЕ НА НАУКАТА В МРЕЖОВИЯ СВЯТ

Д-р Светломир Здравков, д-р Мартин Й. Иванов, д-р Петя Климентова

ОБРАЗОВАНИЕ ЗА УСТОЙЧИВО РАЗВИТИЕ – ПРАКТИКО-ПРИЛОЖНИ АСПЕКТИ

Гл. ас. д-р Златка Ваклева Проф. д-р Тоня Георгиева

Книжка 2
PREPARATION OF PRIMARY SCHOOL TEACHERS FOR COMMUNICATIVE AND RHETORICAL ACTIVITY IN SCHOOL IN THE CONTEXT OF THEIR PRACTICAL TRAINING

Prof. Halyna Bilavych Prof. Nataliia Bakhmat Prof. Tetyana Pantyuk, Prof. Mykola Pantyuk Prof. Borys Savchuk

ПРОЛЕТНА КОНФЕРЕНЦИЯ НА СЪЮЗА НА МАТЕМАТИЦИТЕ В БЪЛГАРИЯ

(Трявна, 5 – 9 април 2022) Гл. ас. д-р Албена Симова

Книжка 1
ДИГИТАЛНАТА ИНТЕРАКЦИЯ ПРЕПОДАВАТЕЛ – СТУДЕНТ В ОНЛАЙН ОБУЧЕНИЕТО В МЕДИЦИНСКИТЕ УНИВЕРСИТЕТИ

Д-р Миглена Търновска, д-р Румяна Стоянова Доц. Боряна Парашкевова, проф. Юлияна Маринова

2021 година
Книжка 6
Книжка 5
ЕДНА РЕКАПИТУЛАЦИЯ НА ИЗСЛЕДВАНИЯ ВЪРХУ ИНТЕРКУЛТУРНИТЕ ОТНОШЕНИЯ. КАКВО СЛЕДВА ОТ ТОВА ЗА ОБРАЗОВАНИЕТО?

Давидков, Ц., 2019. Изследвания върху културите. Културни ориентири на управлението. София: СУ „Св. Климент Охридски“, ISBN 978-954-9399-52-3 Проф. Пламен Макариев

Книжка 4s
RECOGNITION OF FAKE NEWS IN SPORTS

Colonel Assoc. Prof. Petko Dimov

SIGNAL FOR HELP

Ina Vladova, Milena Kuleva

Книжка 4
PREMISES FOR A MULTICULTURAL APPROACH TO EDUCATION

Dr. Anzhelina Koriakina, Assoc. Prof., Prof. Lyudmila Amanbaeva, DSc.

ПОЗИТИВНА ПСИХОЛОГИЯ: ПРОБЛЕМНИ ОБЛАСТИ И ФОРМИРАНЕ НА ЛИЧНОСТТА

Доц. д-р Стоил Мавродиев, Любомира Димитрова

КНИГА ЗА ИСТОРИЯТА НА БЪЛГАРСКОТО ВИСШЕ ИНЖЕНЕРНО ОБРАЗОВАНИЕ

Сгурев, В., Гергов, С., Иванов, Г., 2019. Положителните науки с приложение към индустрията. История на висшето техническо образование в България. София: Изд. на БАН „Проф. Марин Дринов“, Изд. „Захарий Стоянов“. ISBN 978-619-245-004-5, ISBN 978-954-09-1387-2.

Книжка 3
ENTREPRENEURSHIP AND INTERDISCIPLINARY EDUCATION – SEMIOTIC ASPECTS

Prof. Dr. Christo Kaftandjiev Dr. Diana Kotova

THE PRACTICAL IMPORTANCE OF ACCOUNTING EDUCATION FOR FUTURE MANAGERS

Nataliia Radionova, DSc. Dr. Radostina Stoyanova, Assist. Prof.

ЗА ОБРАЗОВАТЕЛНАТА ИНТЕГРАЦИЯ И ЗАЛОЗИТЕ НА НАСТОЯЩЕТО

Нунев, Й., 2020. Мониторинг на процесите на приобщаване и образователна интеграция и модели за десегрегация на ромското образование. Пловдив: Астарта, ISBN 978-954-350-283-7

Книжка 2
Книжка 1
METHODOLOGY OF SAFETY AND QUALITY OF LIFE ON THE BASIS OF NOOSPHERIC EDUCATION SYSTEM FORMATION

Nataliia Bakhmat Nataliia Ridei, Nataliia Tytova, Vladyslava Liubarets, Oksana Katsero

ОБРАЗОВАНИЕ В УСТОЙЧИВО РАЗВИТИЕ И ВЗАИМОДЕЙСТВИЕ „ДЕТЕ – СРЕДА“

Стоянова, М. (2020). Образование в устойчиво развитие и взаимодействие „дете – среда“ София: Авангард принт. ISBN 978-954-337-408-3

2020 година
Книжка 6
HIGHER EDUCATION AS A PUBLIC GOOD

Yulia Nedelcheva, Miroslav Nedelchev

Книжка 5
НАСЪРЧАВАНЕ НА СЪТРУДНИЧЕСТВОТО МЕЖДУ ВИСШИТЕ УЧИЛИЩА И БИЗНЕСА

Добринка Стоянова, Блага Маджурова, Гергана Димитрова, Стефан Райчев

Книжка 4
THE STRATEGY OF HUMAN RIGHTS STUDY IN EDUCATION

Anush Balian Nataliya Seysebayeva Natalia Efremova Liliia Danylchenko

Книжка 3
ПОМОЩНИ СРЕДСТВА И ТЕХНОЛОГИИ В ПРИОБЩАВАЩОТО ОБРАЗОВАНИЕ

Янкова, Ж. (2020). Помощни средства и технологии за деца и ученици със специални образователни потребности в приобщаващото образование.

Книжка 2
МИГРАЦИЯ И МИГРАЦИОННИ ПРОЦЕСИ

Веселина Р. Иванова

SOCIAL STATUS OF DISABLED PEOPLE IN RUSSIA

Elena G. Pankova, Tatiana V. Soloveva, Dinara A. Bistyaykina, Olga M. Lizina

Книжка 1
ETHNIC UPBRINGING AS A PART OF THE ETHNIC CULTURE

Sholpankulova Gulnar Kenesbekovna

ЗА СВЕТЛИНАТА, КОЯТО ИЗЛЪЧВА… В ПАМЕТ НА ПРОФ. Д.П.Н. АСЕМГУЛ МАЛДАЖАНОВА

Нашата редколегия загуби един все- отдаен и неповторим колега и приятел – проф. д.п.н. Асемгул Малдажанова. Пе- дагог по призвание и филолог по мисия! Отиде си от нас нашият приятел, коле- га и член на редколегията на списанието – професор д.п.н. Асемгул Малдажанова – първи заместник-ректор на Евразийския

2019 година
Книжка 6
EMOTIONAL COMPETENCE OF THE SOCIAL TEACHER

Kadisha K. Shalgynbayeva Ulbosin Zh.Tuyakova

Книжка 5
„ОБРАЗОВАТЕЛНИ КИНОХОРИЗОНТИ“ В ПОЛЕТО НА МЕДИА ОБРАЗОВАНИЕТО

(2018). Образователни кинохоризонти. Международен сборник с научни публи- кации по проект „Естетически и образователни проекции на кинодидактиката“. Бургас: Проф. д-р Асен Златаров. Съставител: Маргарита Терзиева. ISBN 978-954-471-496-3

Книжка 4
ВИСШЕТО МОРСКО ОБРАЗОВАНИЕ В КОНКУРЕНТНА СРЕДА

Бакалов, Я. (2019). Висше морско образование. Лидиране в конкурентна среда. Варна: Стено. ISBN 978-619-241-029-2

Книжка 3
УЧИЛИЩЕТО НА БЪДЕЩЕТО

Наталия Витанова

Книжка 2
КНИГА ЗА УСПЕШНИТЕ НАУЧНИ ПУБЛИКАЦИИ

Кожухаров, А. (2018). Успешните научни публикации. Варна: Тера Балканика. ISBN 978-619-90844-1-0

Книжка 1
POST-GRADUATE QUALIFICATION OF TEACHERS IN INTERCULTURAL EDUCATIONAL ENVIRONMENT

Irina Koleva, Veselin Tepavicharov, Violeta Kotseva, Kremena Yordanova

ДЕЦАТА В КОНСТИТУЦИОННИТЕ НОРМИ НА БЪЛГАРИЯ

Румен Василев, Весела Марева

СЪСТОЯНИЕ НА БЪЛГАРСКОТО ОБРАЗОВАНИЕ

Анелия Любенова Любомир Любенов

ИНТЕРКУЛТУРНИЯТ ТРЕНИНГ КАТО ЧАСТ ОТ СТРАТЕГИЯТА ЗА ГЛОБАЛИЗАЦИОННА ИНТЕГРАЦИЯ

Хубенова, М. (2018). Значение на междукултурната комуникация за направления: политически науки, право, икономика и бизнес. София: Издателски комплекс УНСС. ISBN 978-619-232-072-0

ЕДИН НОВ УЧЕБНИК

Дончева, Ю. (2018). Теоретични и методически основи на запознаване с околния свят в детската градина. Русе: Лени Ан

2018 година
Книжка 6
СТРАТЕГИИ НА ОБРАЗОВАТЕЛНАТА И НАУЧНАТА ПОЛИТИКА НАУЧНО СПИСАНИЕ STRATEGIES FOR POLICY IN SCIENCE AND EDUCATION EDUCATIONAL JOURNAL ГОДИНА XXVI / VOLUME 26, 2018 ANNUAL CONTENTS / ГОДИШНО СЪДЪРЖАНИЕ СТРАНИЦИ / PAGES КНИЖКА 1 / NUMBER 1: 1 – 120 КНИЖКА 2 / NUMBER 2: 121 – 224 КНИЖКА 3 / NUMBER 3: 225 – 336 КНИЖКА 4 / NUMBER 4: 337 – 448 КНИЖКА 5 / NUMBER 5: 449 – 560 КНИЖКА 6 / NUMBER 6: 561 – 664

ДИСКУСИОННО / DISCUSSION 211 – 216: Процедурата за назначаване на ръководител на катедра като причина за вло- шаващото се качество на обучението и микроклимата във висшите учи лища у нас [The Procedure for Appointing a Head of Department as a Reason for the Deteriorating Quality of Education and the Microclimate in the Higher School] / Александър Димит- ров / Alexander Dimitrov

Книжка 5
A NEW AWARD FOR PROFESSOR MAIRA KABAKOVA

The staff of the Editorial board of the journal “Strategies for Policy in Science and Education” warmly and sincerely congratulates their Kazakhstan colleague -

ПРОДЪЛЖАВАЩАТА КВАЛИФИКАЦИЯ НА УЧИТЕЛИТЕ – НОРМАТИВЕН И ИЗСЛЕДОВАТЕЛСКИ ОБЗОР

(научно-теоретично обобщение върху проведени обучения на учители)

ЕТНОЦЕНТРИЗМЪТ И ИНЕРЦИИТЕ ОТ МИНАЛОТО – СЕРИОЗНИ ПРОБЛЕМИ В БЪЛГАРСКАТА ОБРАЗОВАТЕЛНА СИСТЕМА

(Eтнопедагогически аспекти на основното и средното образование) Веселин Тепавичаров

Книжка 4
ХРИСТО БОТЕВ И ПОЗНАВАТЕЛНИЯТ КРЪГОЗОР НА СЪВРЕМЕННИТЕ СТУДЕНТИ ЗА ЕВРОПА

Изследователски разказ за един познавателен подвиг и за една познавателна недостатъчност

Книжка 3
BLENDED EDUCATION IN HIGHER SCHOOLS: NEW NETWORKS AND MEDIATORS

Nikolay Tsankov Veska Gyuviyska Milena Levunlieva

ВЗАИМОВРЪЗКАТА МЕЖДУ СПОРТА И ПРАВОТО

Ивайло Прокопов, Елица Стоянова

НАДНАЦИОНАЛНИ И МЕЖДУПРАВИТЕЛСТВЕНИ МЕТОДИ НА ИНТЕГРАЦИЯ В ОБЛАСТТА НА ПРАВОСЪДИЕТО И СИГУРНОСТТА

(Формиране на обща миграционна политика: парадигми и образователни аспекти) Лора Махлелиева-Кларксън

Книжка 2
Книжка 1
ВЪЗПРИЯТИЯ И НАГЛАСИ НА УЧЕНИЦИТЕ ПО ВАЖНИ ОБЩЕСТВЕНИ ВЪПРОСИ

(Данни от Международното изследване на гражданското образование – ICCS 2016)

2017 година
Книжка 6
ЗНАЧИМОСТТА НА УЧЕНЕТО: АНАЛИЗ НА ВРЪЗКИТЕ МЕЖДУ ГЛЕДНИТЕ ТОЧКИ НА УЧЕНИЦИ, РОДИТЕЛИ И УЧИТЕЛИ

Илиана Мирчева, Елена Джамбазова, Снежана Радева, Деян Велковски

ВЪЗПРИЯТИЯ И НАГЛАСИ НА УЧЕНИЦИТЕ ПО ВАЖНИ ОБЩЕСТВЕНИ ВЪПРОСИ

(Данни от Международното изследване на гражданското образование – ICCS 2016)

СТРАТЕГИИ НА ОБРАЗОВАТЕЛНАТА И НАУЧНАТА ПОЛИТИКА НАУЧНО СПИСАНИЕ STRATEGIES FOR POLICY IN SCIENCE AND EDUCATION EDUCATIONAL JOURNAL ГОДИНА XXV / VOLUME 25, 2017 ANNUAL CONTENTS / ГОДИШНО СЪДЪРЖАНИЕ

СТРАНИЦИ / PAGES КНИЖКА 1 / NUMBER 1: 1 – 112 КНИЖКА 2 / NUMBER 2: 113 – 224 КНИЖКА 3 / NUMBER 3: 225 – 336 КНИЖКА 4 / NUMBER 4: 337 – 448 КНИЖКА 5 / NUMBER 5: 449 – 552 КНИЖКА 6 / NUMBER 6: 553 – 672

Книжка 5
ОРГАНИЗАЦИОННА КУЛТУРА В УЧИЛИЩЕ

Ивайло Старибратов, Лилия Бабакова

Книжка 4
КОУЧИНГ. ОБРАЗОВАТЕЛЕН КОУЧИНГ

Наталия Витанова, Нели Митева

Книжка 3
ТЕХНОХУМАНИЗМЪТ И ДЕЙТЪИЗМЪТ – НОВИТЕ РЕЛИГИИ НА БЪДЕЩЕТО

Harari, Y. N. (2016). Homo Deus. A Brief History of Tomorrow. Harvill Secker. ISBN-10: 1910701874

Книжка 2
Книжка 1
РЕФОРМИТЕ В ОБРАЗОВАНИЕТО – ПЕРСПЕКТИВИ И ПРЕДИЗВИКАТЕЛСТВА

Интервю с Габриела Миткова, началник на Регионалното управление на образованието – Силистра

ЕМПАТИЯ И РЕФЛЕКСИЯ

Нели Кънева, Кристиана Булдеева

2016 година
Книжка 6
СТРАТЕГИИ НА ОБРАЗОВАТЕЛНАТА И НАУЧНАТА ПОЛИТИКА НАУЧНО СПИСАНИЕ STRATEGIES FOR POLICY IN SCIENCE AND EDUCATION EDUCATIONAL JOURNAL ГОДИНА XXIV / VOLUME 24, 2016 ANNUAL CONTENT / ГОДИШНО СЪДЪРЖАНИЕ

СТРАНИЦИ / PAGES КНИЖКА 1 / NUMBER 1: 1 – 120 КНИЖКА 2 / NUMBER 2: 121 – 232 КНИЖКА 3 / NUMBER 3: 233 – 344 КНИЖКА 4 / NUMBER 4: 345 – 456 КНИЖКА 5 / NUMBER 5: 457 – 568 КНИЖКА 6 / NUMBER 6: 569 – 672

Книжка 5
Книжка 4
Книжка 3
Книжка 2
Книжка 1
2014 година
Книжка 6
Книжка 5
КОХЕРЕНТНОСТ НА ПОЛИТИКИ

Албена Вуцова, Лиляна Павлова

Книжка 4
ОБРАЗОВАНИЕТО ПО ПРАВАТА НА ЧОВЕКА ПРЕЗ ПОГЛЕДА НА ДОЦ. ЦЕЦКА КОЛАРОВА

Цецка Коларова. (2013). Образование по правата на човека. София: Авангард Прима. ISBN 978-619-160-234-6

USING THE RESULTS OF A NATIONAL ASSESSMENT OF EDUCATIONAL ACHIEVEMENT

Thomas Kellaghan Vincent Greaney T. Scott Murray Chapter 4 Translating Assessment Findings Into Policy And Action Although the primary purpose of a system of national assessment is to describe students’ learning, its role is not limited to description. To justify the effort and expenditure involved, the information that an assessment provides about the achievements of students, their strengths and weaknesses, and how they are distributed in the population (for example, by gender or location

Книжка 3
Книжка 2
PROFESSIONAL DEVELOPMENT OF UNIVERSITY FACULTY: А SOCIOLOGICAL ANALYSIS

Gulnar Toltaevna Balakayeva Alken Shugaybekovich Tokmagambetov Sapar Imangalievich Ospanov

ЗА ПО-ХУМАНИСТИЧНА ТРАДИЦИОННО- ИНОВАЦИОННА ОБРАЗОВАТЕЛНО-ВЪЗПИТАТЕЛНА СТРАТЕГИЯ У НАС

(КОНЦЕПТУАЛНА РАЗРАБОТКА В ПОМОЩ НА ПОДГОТОВКАТА НА НОВ ЗАКОН ЗА ОБРАЗОВАНИЕТО)

Книжка 1
РЕФЛЕКСИЯТА В ИНТЕГРАТИВНОТО ПОЛЕ НА МЕТОДИКАТА НА ОБУЧЕНИЕТО ПО БИОЛОГИЯ

Иса Хаджиали, Наташа Цанова, Надежда Райчева, Снежана Томова

USING THE RESULTS OF A NATIONAL ASSESSMENT OF EDUCATIONAL ACHIEVEMENT

Thomas Kellaghan Vincent Greaney T. Scott Murray Chapter 1 Factors affecting the use and nonuse of national assessment fi ndings The main objectives of a national assessment, as set out in volume 1 of this series, Assessing National Achievement Levels in Education, are to determine (a) how well students are learning in the education system (with reference to general expectations, aims of the curriculum, and preparation for further learning and for life); (b) whether there is evidence of par

2013 година
Книжка 6
Книжка 5
Книжка 4
QUESTIONNAIRE DEVELOPMENT

ÎÖÅÍßÂÀÍÅÒÎ

РОССИЙСКАЯ СИСТЕМА ОЦЕНКИ КАЧЕСТВА ОБРАЗОВАНИЯ: ГЛАВНЫЕ УРОКИ

В. Болотов / И. Вальдман / Г. Ковалёва / М. Пинская

Книжка 3
MASS MEDIA CULTURE IN KAZAKHSTAN

Aktolkyn Kulsariyeva Yerkin Massanov Indira Alibayeva

РОССИЙСКАЯ СИСТЕМА ОЦЕНКИ КАЧЕСТВА ОБРАЗОВАНИЯ: ГЛАВНЫЕ УРОКИ

В. Болотов / И. Вальдман / Г. Ковалёва / М. Пинская

Книжка 2
ОЦЕНЯВАНЕ НА ГРАЖДАНСКИТЕ КОМПЕТЕНТНОСТИ НА УЧЕНИЦИТЕ: ПРЕДИЗВИКАТЕЛСТВА И ВЪЗМОЖНОСТИ

Светла Петрова Център за контрол и оценка на качеството на училищното образование

Книжка 1
Уважаеми читатели,

вет, както и от международния борд за предоставените статии и студии, за да могат да бъдат идентифицирани в полето на образованието пред широката аудитория от педа- гогически специалисти във всички степени на образователната ни система. Благодаря за техния всеотдаен и безвъзмезден труд да създават и популяризират мрежа от научни съобщества по профила на списанието и да насърчават научните изследвания. Благодаря на рецензентите от национално представените висши училища, на- учни институции и

METHODS FOR SETTING CUT SCORES IN CRITERION – REFERENCED ACHIEVEMENT TESTS

ÎÖÅÍßÂÀÍÅÒÎ COMPARATIVE ANALYSIS OF THE QUALITY OF THE SEPARATE METHODS

ПУБЛИКАЦИИ ПРЕЗ 2012 Г.

СПИСАНИЕ „БЪЛГАРСКИ ЕЗИК И ЛИТЕРАТУРА“

2012 година
Книжка 6
DEVELOPMENT OF SCIENCE IN KAZAKHSTAN IN THE PERIOD OF INDEPENDENCE

Aigerim Mynbayeva Maira Kabakova Aliya Massalimova

Книжка 5
Книжка 4
Книжка 3
СИСТЕМАТА ЗА РАЗВИТИЕ НА АКАДЕМИЧНИЯ СЪСТАВ НА РУСЕНСКИЯ УНИВЕРСИТЕТ „АНГЕЛ КЪНЧЕВ“

Христо Белоев, Ангел Смрикаров, Орлин Петров, Анелия Иванова, Галина Иванова

Книжка 2
ПРОУЧВАНЕ НА РОДИТЕЛСКОТО УЧАСТИЕ В УЧИЛИЩНИЯ ЖИВОТ В БЪЛГАРИЯ

* Този материал е изготвен въз основа на резултатите от изследването „Parental Involvement in Life of School Matters“, проведено в България в рамките на проек- та „Advancing Educational Inclusion and Quality in South East Europe“, изпълняван

ВТОРИ ФОРУМ ЗА СТРАТЕГИИ В НАУКАТА

Тошка Борисова В края на 2011 г. в София се проведе второто издание на Форум за страте- гии в науката. Основната тема бе повишаване на международната видимост и разпознаваемост на българската наука. Форумът се организира от „Elsevier“ – водеща компания за разработване и предоставяне на научни, технически и медицински информационни продукти и услуги , с подкрепата на Министер- ството на образованието, младежта и науката. След успеха на първото издание на Форума за стратегии в науката през

Книжка 1
РЕЙТИНГИ, ИНДЕКСИ, ПАРИ

Боян Захариев