Survey: Public Broadcasting Metadata Survey for the PBCore

Author: CPB
Filter:
Responses Received: 49

email NameFirst NameLast Organization RFC_WG RFC_PMX RFC_PMNX
aiz@unl.edu Art Zygielbaum Nebraska Educational Television Yes No Yes
srvconsult@charter.net Steven Vedro Consultant Yes No Yes
nancy_baldacci@aptonline.org Nancy Baldacci American Public Television Yes No Yes
thom_shepard@wgbh.org? Thom? Shepard? WGBH Yes No Yes
efthimis@u.washington.edu Efthimis Efthimiadis University of Washington Yes Yes No
drc3@psu.edu? Duane? Champion? WPSX Penn State Yes No Yes
jzm10@psulias.psu.edu? Jin? Ma? Penn State Yes No Yes
haarsager@wsu.edu Dennis Haarsager Washington State University Yes No Yes
mpierce@pbs.org Marilyn Pierce PBS Yes No Yes
abaker@mpr.org Alan Baker MPR Yes No Yes
steinbach@wpt.org? James? Steinbach? UWM, WHA? Yes No Yes
bmorse@pbs.org? Bea? Morse? PBS Yes No Yes
gagnew@rci.rutgers.edu Grace Agnew Rutgers University Yes Yes No
rholt@npr.org? Robert? Holt? NPR Yes No Yes
smokey@smokey.com Smokey Forester Smokey Forester, Incorporated No No Yes
Abbe.Wiesenthal@turner.com Abbe Wiesenthal Turner Broadcasting No Yes No
jerome.mcdonough@nyu.edu Jerome McDonough NYU Libraries No Yes No
jmastrobattista@targetanalysis.com John Mastrobattista Target Analysis Group No No Yes
snorton@kpbs.org Scot Norton KPBS Stations No No Yes
mary_ide@wgbh.org Mary Ide WGBH Media Archives No No Yes
tcarter@myersinfosys.com Tracy Carter Myers Information Systems, Inc. (ProTrack) No No Yes
mary_ann_thyken@itvs.org Mary Ann Thyken ITVS (Independent Television Service) No No Yes
cbramhall@gpb.org Chrissy Bramhall Georgia Public Television No No Yes
kdavis@allegiancesoftware.com Ken Davis Allegiance Software, Inc. No No Yes
rkaelberer@state.pa.us Richard Kaelberer Pennsylvania PTV Network No No Yes
dcf@mptv.org David Felland WMVS Yes No Yes
dhamby@acornmedia.com Dan Hamby Acorn Media No No Yes
cfle@loc.gov Carl Fleischhauer Library of Congress No Yes No
laurateeter@scoutis.com Laura Teeter Scout Information Services No No Yes
phanson@afi.com Patricia King Hanson American Film Institute (AFI) No No Yes
kturner@wcpn.org Keith Turner WCPN/WVIZ Ideastream No No Yes
amy@washington.edu Amy Philipson Research Channel No No Yes
mike_tondreau@opb.org Michael Tondreau OPB Yes No Yes
john.tynan@riomail.maricopa.edu John Tynan KJZZ Radio-Tempe, AZ No No Yes
shawn.rounds@mnhs.org Shawn Rounds Minnesota Historical Society, Library and Archives No Yes No
cher@rmpbs.org Cher Skoubo Rocky Mtn PTV No No Yes
bserrick@cac.washington.edu Beth Serrick University of Washington No No Yes
tolson@kqed.org Tim Olson KQED Yes No Yes
spessutti@tribune.com Susie Pessutti Tribune Media Services No No Yes
lmoulton@afassoc.com Lowell Moulton Sony Electronics Inc. No No Yes
million@iptv.org Mark Million Iowa PTV No No Yes
mcundiff@loc.gov Morgan Cundiff Library of Congress No Yes No
jkutzner@pbs.org Jim Kutzner PBS Engineering No No Yes
sguez@dalet.com Stephane Guez Dalet Digital Media Systems No No Yes
mail@makxdekkers.com Makx Dekkers Dublin Core No Yes No
cvaughn@klrn.org Charles Vaughn KLRN No No Yes
maubin@mail.mpt.org Mike Aubin Maryland Public Television THINKPORT No No Yes
steve@prx.org Steve Schultze Public Radio Exchange No No Yes
lisac@uky.edu Lisa Carter University of Kentucky/KET No Yes No


1.1 Tell Us Your Organization

Mean = 5.67, Standard Deviation = 3.66

Response Count Percent
(1) Television station 21 42.9%
(2) Television national producer 11 22.4%
(3) Television national distributor 13 26.5%
(4) Television content consortium 5 10.2%
(5) Television systems vendor 9 18.4%
(6) Radio station 12 24.5%
(7) Radio national producer 9 18.4%
(8) Radio national distributor 7 14.3%
(9) Radio content consortium 2 4.1%
(10) Radio systems vendor 10 20.4%
(11) Educational Organization addressing digital content 15 30.6%
(12) National Organization (non-PB) addressing digital content 4 8.2%

1.2 Scope Of Your Org

Mean = 2.06, Standard Deviation = 1.14

Response Count Percent
(1) Local/Ed 21 43.8%
(2) National/Regional Content 11 22.9%
(3) National/Regional Distribution 8 16.7%
(4) Vendor to 8 16.7%

1.3 Tell Us Your Role

Mean = 2.90, Standard Deviation = 1.15

Response Count Percent
(1) Content/Production/Editing 6 12.5%
(2) Ops/Engineering/Software/Systems Dev 13 27.1%
(3) Archives/Asset Management/Content Dist 12 25.0%
(4) Senior Management 14 29.2%
(5) Sales or Support 3 6.3%

1.4 Agree PB Needs Core

Mean = 4.67, Standard Deviation = 0.55

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 2 4.1%
(4) 4 12 24.5%
(5) 5 35 71.4%

1.4.1 Why PB Needs Core


sharing between stations and educational repositories; for automation
A standardized metadata dictionary will provide a vehicle to identify and/or share the many assets that our community has to offer. It will also provide a standard platform to build from in sharing program and operational information within the PTV community.
1. federated searching 2. central metadata repository 3. metadata for the exchange of program content 4. controlled vocabularies for national broadcast viewer guides such as TiVo and TV Guide
content description & discovery (retrieval)
internal and external reusable content exchange, listing and finding in house assets for production, educational client searches for content
To describe rich media content, to exchange and distribute media content across different systems and institutions.
Broad-scale digital distribution and efficient production impossible without it.
To facilitate the exchange of program data.
To enhance our ability to create new business models for getting the broadest distribution of our materials. Shared metadata standards make it possible to do much more of this without putting an overwhelming staffing burden on the new ventures.
-To allow for easy content exchange among stations -Among other public service organizations -Cost savings - compatible systems; shared R&D and purchasing -Metadata is underpinnings for digital services; standards make this achievable on widespread basis -PTV take the lead in this; possibility of shaping asset management software and practices
To foster effective, efficient communication of program-related information between distribution chain entities.
To ensure interoperability between stations and with other initiatives and to provide semantic uniformity for management, search and retrieval of assets.
So that content can be easily exchanged among all entities, to enhance the content offerings of all involved.
To ensure efficient interchange of information, search for content, and exchange of products.
To find things! It will also help streamline operations. And it's a way to preserve the information for the future.
Due to the high degree of shared content and the obligation PB has to make its content available to the widest possible audience.
A standardized metadata dictionary will simplify the process of exchanging content/metadata between institutions involved in the production of public broadcast material, as well as with other institutions that deal with the broadcast industry (moving image archives, for example).
Would allow consistent coding of audience and donors according to program viewing or listening
It's important that common concepts and terms be well understood.
To ease the delivery and sharing of master programming and production elements.
To facilitate the exchange of content and the information about that content
As we combine our assets we need to describe them as uniformly as possible.
Streamlining processes within the public broadcasting community should save time in conversion, exporting/importing, and crossover between systems, resulting in both financial and man-hour savings, a key benefit for non-profits.
As we continue to evolve into a system that employs automation to manage operations and content distribution, common identification of the content is critical. As we begin to realize the possibilities of repurposing our content, this common identification and management of the metadata - information about the content in a standardized manner - again becomes a critical factor.
To meet the unique needs of public broadcasting.
For consistent tracking and establishment of content libraries; for ease of distribution and licensing of media content and related print materials.
To permit interoperability within PBS and affiliates' community. But the legacy to society, e.g., content to permanent national or other research collections, will also benefit from standardization, especially standardization that is synchronized with other national, archiving metadata standards.
From a software vendor standpoint, it certainly makes things more portable if there is a standardized metadata dictionary. If we want to import and export data between two systems, for example, it's MUCH easier if we don't have to do any translation. Our elements are the same as your elements and our formats are the same as your formats.
Broadcasting, like film and other areas of the arts, will not be able to expand access to its history, archival collections, etc. without standards and the ability to share the information
A common frame of reference allows ease of movement and use of assets between stations, producers, etc.
To facilitate the efficient exchange of information between all organizations affiliated with public broadcasting.
Standardized metadata will not only allow for better management of "objects" over the long term, it will also facilitate sharing and re-use of these objects, thereby increasing your return on investment in monetary as well as intellectual terms. This is important not only for use of these objects within the Public Broadcasting family, but also with interested outside parties.
The value of content is increased as it is able to be used in diverse applications and in new ways. Metadata should enhance the value of the content, allow it to be used in new ways and improve the production process by having needed information immediately accessible.
To streamline content searches. Consistency in data entry.
Public broadcasting needs to be able to aggregate, parse, exchange and distribute digital content among the various platforms and applications of local stations, national organizations, distributors, external partners, vendors and end users.
Help with PSIP, program promotion, archive research, etc.
To make it easier for producers, programmers and end users to discover content they may be interested in accessing.
Doesn't NEED one, but such a dictionary will allow streamlined information transfer.
To facilitate interarchival data exchange and software tool building.
PB needs a standardized metadata dictionary to facilitate the exchange of program and related information across various databases and among different organizations.
To pool content, share resources, exchange material.
Ease of content management, common ground across the sector enabling re-use of solutions and increasing the market for products based on a standard, facilitating exchange of information with other sectors (e.g. government information, industry, etc.).
a standardized dictionary is required to eliminate ambiguity, streamline comparison and search functions and to promote detailed information
Standardized metadata will allow all PBS-related organizations to share data more efficiently. The ability to search for resources with a common language will improve the quality and productivity of the work.
As public radio content becomes increasingly digitized and more easily shared, entities will face more and more situations where they must be able to understand what is being shared with them and communicate what is being shared to others. A shared dictionary could alleviate much of the associated pain.
The more standardized PB metadata is, the more opportunity stations will have to share content. Standards promote greater access to learning objects and will increase the impact that each individual station can have both locally and nationally. Without a standardized metadata dictionary, stations will continue to reinvent the wheel, a distraction from achieving the work we are meant to be doing: creating and distributing enlightening and educational content to learners.


1.4.2 What Applications Need Core


shared content repositories; PBS and local station automation of ops
Production assets (don't know of a current vehicle) Program Scheduling and Operations (producing schedules, through to PSIP)
databases program listings scheduling software
asset management, searching and exchange between education partners, departments, students, and between internal and external clients.
resource sharing, archiving, asset management.
research, production, distribution
Traffic, Library Management, Asset Management, Broadcast Operations, Production Operations.
- Station to station content exchange - discovery and sharing of archival materials
-Program and content exchange between stations -Need something for production; best if related to entire workflow including broadcast (hence, standard dictionary) -Effective datacasting, etc. (these things are possible with a "non-standard" metadata schema, but will be much more effective with a standard)
DAM, DRM, Interconnection (ACE), PSIP
1. Participate in other consortial activities, such as the MIC (Moving Image Collections) portal. Standardized metadata would allow MIC to readily ingest data from any PBS station that chooses to participate. 2. Allow for federated searching of assets across PBS repositories. A Z39.50 profile specific to PBCore could be developed, for example. 3. To develop a K12 portal across all PBS repositories with standardized search and retrieval of assets. 4. To add precision and uniformity to the management and identification of assets for scheduling and other programming activities. 5. To enable PBS to develop collaborative preservation and asset migration strategies.
Content exchange for Radio, TV, Web, Portable devices, etc.
Production, research
Content creation, editing, air and library.
Search, browsing and retrieval of content.
Information retrieval, asset management, rights management.
Constituent Management: helping fundraisers, programmers, underwriting and other parties to use consistent coding to link financial participation to broadcast or other services
Program commissioning, creation, exchange and archiving.
Content production & distribution; Internal and system wide operational needs such as delivery of listings and PSIP information; archival and research
Programs, program elements, ancillary products and data
Accounting, fundraising/membership, traffic, automation/playback, rights management, and content distribution systems, at a minimum.
Distribution, Automation, Content repurposing, production.
Preproduction Production Post Production Broadcast Educational access and delivery
Content sharing, the distribution of content to non-broadcast users (e.g., educational), and ultimate permanent archiving (by PBS, affiliates, or independent research collections).
I would think that a standardized metadata dictionary would be beneficial for any application that would involve data that is used by many different stations. For example, many sites use the PODs data. If there is a standardized metadata dictionary, then regardless of how the sites use the data, or what software system they ingest that data into, the various elements will all have the same meaning.
Archival records; historical research; document retrieval for current use; to become a national database of record
Programming (traffic & automation), production / editing, library functions/asset management, asset distribution
xml feeds for web sites or email newsletters, print newsletters, program-specific pages, schedules. Way to infuse local content into national programs and web sites. Way to improve local offerings on web, email, and in other ways that we do not even know yet.
I'm not sure what is meant by "applications" -- software? programs? other?
Production: content creation, security, rights clearances, conditional access, broadcast data for EPG systems, educational content, workflow, archiving, media interchange, creation and output over multiple platforms and software/hardware manufacturers. Media: broadcast, radio, online services, datacasting, interactive television, EPG, CDs, DVDs, digital libraries, education
Schedule, digital asset management, content management systems, video and audio on demand, archiving, ..
Same as above
To facilitate the development and integration of database search and essence browsing applications.
1. Program information transfer (old PODS, NOLA info) 2. Orion (old REDBOOK) 3. Viewer statistical database
For preservation repository and web dissemination applications.
Sharing of information among different organizations with different databases and applications.
Exchange of material and programs, distribution, pooled archives.
production, editing, re-purposing, broadcast and non-broadcast applications, promotion, rights management
Since I am involved in the education department, I am most concerned with being able to access data from other members in a format that allows me to incorporate their resources into a local delivery system. This may be something as simple as lesson plans designed to support a broadcast program or other entity. I would also like to see the digital broadcast programs use the same protocols so we can repurpose and merge assets to work together. I believe this may also benefit when looking at digital asset management issues. As noted above, the common dictionary allows different applications to share resources and assets.
Sharing between producers, distributors and broadcasters. Each step of the path-to-air must preserve the metadata to avoid confusion and unnecessary work. Shared metadata standards not only help this process, but also communication back from distributors and stations to producers regarding licensing (of essences and sub-essences), carriage, and payment.
Distribution Developing interactivity Management of internal assets Repurposing Licensing


1.4.3 Why PB Does Not Need Core



2.0 Content Metadata User

Mean = 1.18, Standard Deviation = 0.39

Response Count Percent
(1) Yes 40 81.6%
(2) No 9 18.4%

2.01.1.1 Element Title Rating

Mean = 4.88, Standard Deviation = 0.40

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 1 2.5%
(4) 4 3 7.5%
(5) 5 36 90.0%

"Comment" responses:


a NOLA-type code may suffice as standard program identifier with title variations allowable
I would consider the refinement "Segment" or "Component" for titles within a program, such as a segment of a magazine-format show.
Titles are sometimes variable station/station
Usage should be required for this element
What about titles that are title+subtitle (ex: "The Life I Lead: Adventures of a Public Radio Nerd") The first part of the title is not a series and the second part is not an alternate title. Should this be clarified in description and/or examples?
The Definition and Guidelines for Usage for this element need to more accurately indicate or suggest the format of this element, especially when there is a series title involved.


 

2.01.1.2 Element Title Confusing

Mean = 1.90, Standard Deviation = 0.31

Response Count Percent
(1) Yes 4 10.3%
(2) No 35 89.7%

"Comment" responses:


I think the title element should simply be title qualified by a sub-element, title_type. This would allow for unlimited alternative titles and a hierarchical titling structure: for example, project, series, program or episode, segment.
Disagree with the use of "leading articles". This does not work.
try using a slightly less technical approach
While I had no trouble understanding the description, the general description seemed to add too much information
Description seems to assume all material to be in English
If it wasn't such a clearly useful and relevant element I would have answered yes to this question. As important as it is, the usage of Title vs. Title Program and Title Episode is not at all clear.
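One commenter above proposes collapsing the separate title elements into a single title qualified by a title_type sub-element. A minimal sketch of that idea follows; the function, the type vocabulary, and the example titles are all illustrative, not part of the PBCore draft:

```python
# Sketch of one respondent's suggestion: a single "title" entry
# qualified by a "title_type" value, rather than fixed elements such as
# Title.Program or Title.Episode. All names here are illustrative.
ALLOWED_TITLE_TYPES = {"project", "series", "program", "episode", "segment"}

def make_title(value, title_type):
    """Return a title entry, validating its type against the vocabulary."""
    if title_type not in ALLOWED_TITLE_TYPES:
        raise ValueError(f"unknown title_type: {title_type}")
    return {"title": value, "title_type": title_type}

# A record could then carry any number of titles in one flat list:
record_titles = [
    make_title("Masterpiece Theatre", "series"),
    make_title("I, Claudius", "program"),
    make_title("A Touch of Murder", "episode"),
]
```

The appeal, per the comment, is that new title kinds become vocabulary entries rather than new schema elements.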


 

2.01.2.1 Element Title Refinements Rating

Mean = 3.87, Standard Deviation = 1.07

Response Count Percent
(1) 1 1 2.6%
(2) 2 3 7.9%
(3) 3 9 23.7%
(4) 4 12 31.6%
(5) 5 13 34.2%

"Comment" responses:


Okay as far as it goes but may ultimately prove too limited.
I don't necessarily agree with the usage guidelines. While search engines can ignore initial punctuation, it will probably mess up a results display sorted by title, where "The Adventures of..." will display after the title "Theater".
A title.uniform refinement might be valuable, although it might imply a level of cataloging skill that broadcast organizations can't commit to.
Since there are few "rules" or restrictions, I would rate this as somewhat low.
The examples are useful.
Rule-based title construction IS important for classical music.


 

2.01.2.2 Element Title Refinements Confusing

Mean = 1.88, Standard Deviation = 0.33

Response Count Percent
(1) Yes 5 12.5%
(2) No 35 87.5%

"Comment" responses:


I can see that you're trying for "plain speak" in the descriptions, but they're still dense. For example, "Titles typically are not searched as part of complex semantic interpretations" ...could that be simplified?
It was overly complicated, especially for something as straightforward as a title.


 

2.02.1.1 Element Title.Alternative Rating

Mean = 4.13, Standard Deviation = 0.85

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.0%
(3) 3 6 15.0%
(4) 4 17 42.5%
(5) 5 15 37.5%

"Comment" responses:


Alternative titles can effectively serve as thesaurus terms. Still, it may be worth considering a breakdown between alternative series titles and alternative program or episode titles.
Useful to provide variant spellings, such as spelling out numerals. Otherwise, I think the categories provided cover the ground sufficiently.
You may run into legal issues here with contractually obligated AKA titles.
useful for end-user searches where they don't use the exact title phrase
Whoops, forget my comment on 'Title'


 

2.02.1.2 Element Title.Alternative Confusing

Mean = 1.93, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.5%
(2) No 37 92.5%

"Comment" responses:


Again, a bit too much of an explanation; however, it did not specifically appear to offer an explanation for a re-titled or alternate title that might be a working title or formal re-title. For example, The Phil Silvers Show (aka You'll Never Get Rich).


 

2.02.2.1 Element Title.Alternative Refinements Rating

Mean = 3.77, Standard Deviation = 1.16

Response Count Percent
(1) 1 2 5.1%
(2) 2 2 5.1%
(3) 3 13 33.3%
(4) 4 8 20.5%
(5) 5 14 35.9%

"Comment" responses:


See discussion of initial articles under "title"


 

2.02.2.2 Element Title.Alternative Refinements Confusing

Mean = 1.95, Standard Deviation = 0.22

Response Count Percent
(1) Yes 2 5.1%
(2) No 37 94.9%

"Comment" responses:


Examples of use are very important!
Consider guidance to spell out titles with punctuation in the first four words where the punctuation is spelled out (& spelled as "and") or removed: "Human/Animal Relations" becomes "Human Animal Relations."


 

2.03.1.1 Element Title.Series Rating

Mean = 4.70, Standard Deviation = 0.56

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 2 5.0%
(4) 4 8 20.0%
(5) 5 30 75.0%

"Comment" responses:


See earlier notes
Series is a very important concept for television. End users may discover a resource that is part of a series and want to see every other title in that series, for example.
I am concerned that all titles in the series have the same content in title.series
Disagree with use of leading articles. They should follow titles with a comma, e.g. Black Cat, The
This is important to television series.


 

2.03.1.2 Element Title.Series Confusing

Mean = 1.95, Standard Deviation = 0.22

Response Count Percent
(1) Yes 2 5.0%
(2) No 38 95.0%

"Comment" responses:


It was not as encompassing as it should be; it didn't really define what a series is. I didn't get the sense that a series was frequently an ongoing program or group of programs under an umbrella title. The definition seems a bit loose.
TV producers may want a simpler explanation, but the techs will want what you have now.


 

2.03.2.1 Element Title.Series Refinements Rating

Mean = 4.00, Standard Deviation = 1.21

Response Count Percent
(1) 1 2 5.3%
(2) 2 2 5.3%
(3) 3 9 23.7%
(4) 4 6 15.8%
(5) 5 19 50.0%

"Comment" responses:


See comments for title data element
Series is one of those elements that people constantly argue about; refinements in the definition (is there a minimum number of titles to constitute a series, etc.) would be helpful.
Interesting that what I commonly think of as a program is actually a series. It helps to think about this distinction.
I presume it's 'optional' only if not a series
Not clear whether title.series relates to a series in which the resource being described is contained or whether the title of the resource itself is the series.
If the title.series is shared by many titles, it maybe should be more constrained, though this is probably outside of the metadata scheme


 

2.03.2.2 Element Title.Series Refinements Confusing

Mean = 1.92, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.1%
(2) No 34 91.9%

"Comment" responses:



 

2.04.1.1 Element Title.Program Rating

Mean = 4.33, Standard Deviation = 0.92

Response Count Percent
(1) 1 1 2.5%
(2) 2 0 0.0%
(3) 3 6 15.0%
(4) 4 11 27.5%
(5) 5 22 55.0%

"Comment" responses:


I think it is important to distinguish between a program and a series.
Extremely confusing with Title, Title Episode, Title Series
Useful for production and logging video footage. Could be used to access stories within an episode allowing content to be more searchable.
The usefulness and relevance of this element depends on how people choose to use the Title element. Since your description for Title is not entirely clear, people may be confused about how this differs from Title Episode or Title


 

2.04.1.2 Element Title.Program Confusing

Mean = 1.80, Standard Deviation = 0.41

Response Count Percent
(1) Yes 8 20.0%
(2) No 32 80.0%

"Comment" responses:


Unclear what's the difference between a program and an episode. This is crucial if they are contained in distinct fields.
Should clarify the distinction between "Title" and "Title.Program". Should clarify title in re completed works vs smaller pieces (segments, shots, sequences)
Examples which clarify the distinction between title.series and title.program and when each should be applied will probably be helpful to your users.
Yes and No; I think I understand it but some may find a problem differentiating between "Program" and the broader "Title"
I would try to eliminate some of the non-essential words, such as "consequently".
at a glance, confused between title and title.program
What is the difference between title and title.program? The description mentions Title.Segment, Title.Excerpt, and Title.Working, but these do not appear to be part of the 58 elements. What are these, from an extended set above and beyond the core 58?
Again, not clear how different title elements relate to one another.
I don't understand in what context this would be more helpful than the Title element. Can the same Title.Program be assigned to more than one media item or resource?
For example, if you have a program that is the 101st show of Comment on Kentucky and it has no program title of its own, it's just known as the 101st Comment on Kentucky show, how do you formulate a distinctive Title, Title Series, Title Program, etc
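To make the distinctions the comments above are asking about concrete, here is one plausible reading of how the four draft title elements might be filled in for a single episode. The nesting (series containing programs containing episode titles) and all titles shown are assumptions for illustration, not settled by the PBCore draft:

```python
# One assumed assignment of the draft title elements for an episode of
# a series. How Title relates to Title.Episode, and how Title.Program
# differs from both, is exactly what the survey comments flag as
# unclear; this record reflects an illustrative reading, not the spec.
episode_record = {
    "Title": "The River Returns",               # primary display title
    "Title.Series": "Heartland Journal",        # umbrella series title
    "Title.Program": "Heartland Journal #112",  # the broadcast unit
    "Title.Episode": "The River Returns",       # episode-specific title
}

def display_title(record):
    """Build a human-readable label, preferring series + episode titles."""
    series = record.get("Title.Series")
    episode = record.get("Title.Episode") or record.get("Title")
    return f"{series}: {episode}" if series else episode
```

A standalone special under this reading would carry only Title, and display_title would fall back to it.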


 

2.04.2.1 Element Title.Program Refinements Rating

Mean = 3.67, Standard Deviation = 1.26

Response Count Percent
(1) 1 3 7.7%
(2) 2 3 7.7%
(3) 3 12 30.8%
(4) 4 7 17.9%
(5) 5 14 35.9%

"Comment" responses:


See comments for title data element
Seems redundant and adds little value. Don't get it.


 

2.04.2.2 Element Title.Program Refinements Confusing

Mean = 1.90, Standard Deviation = 0.31

Response Count Percent
(1) Yes 4 10.3%
(2) No 35 89.7%

"Comment" responses:


When you specify "Language of the Element = eng" do you need to state if it is American or UK? Also, is it coded in American or UK? Is this for use in multiple countries?
at a glance, confused between title and title.program


 

2.05.1.1 Element Title.Episode Rating

Mean = 4.41, Standard Deviation = 0.82

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.6%
(3) 3 5 12.8%
(4) 4 10 25.6%
(5) 5 23 59.0%

"Comment" responses:


Too close in meaning to Program to be useful. As I've suggested elsewhere, it would be better to simply use Title and Title_type and then develop a vocabulary for Title_type that includes these field values.
Same note as Title.Program ... is this the same? Would we fill in both "titles" for an episode of a series?
I think that episode and program are somewhat redundant. If the series title is always included in the metadata, the fact that a program is an episode is already reflected. While series is a critical issue, this field adds very little value.
Essential for television, interactive and online.
Usage should be required for all shows where more than one episode exists, and recommended for all other shows.
The examples include what looks like the program and/or series title ("I Claudius"). Is this actually part of the episode title?
What is the difference between Title Program and Title Episode? Is it the case in other places that if a program has an individual program title, it also has an individual episode title? This is not the case here and I think we would encounter


 

2.05.1.2 Element Title.Episode Confusing

Mean = 1.97, Standard Deviation = 0.16

Response Count Percent
(1) Yes 1 2.6%
(2) No 38 97.4%

"Comment" responses:


The descriptions are confusing, yes, but not enough for me to rate its usefulness.
it wasn't TOO confusing but does seem to contain two definitions. I think you could get rid of the statement "An Episode Title is one specifically identified by the media production agency or group and exists in order to facilitate discovery and ..."
Again, I think a definition problem that exists among Title, Series and Program is carried on here, although the definition of an Episode is clear
If the pop-up window comes up each time you enter this section it would be annoying.
The description mentions Title.Segment, Title.Excerpt, and Title.Working, but these do not appear to be part of the 58 elements. What are these, from an extended set above and beyond the core 58?
Is this field applicable for non-series programs?
different usage depending on the person doing data entry (which is never the same person). With Title, Title P and Title E, there are too many Title fields for people to choose from for data entry, which will result in confusion and inaccurate entry.


 

2.05.2.1 Element Title.Episode Refinements Rating

Mean = 3.95, Standard Deviation = 1.13

Response Count Percent
(1) 1 1 2.7%
(2) 2 2 5.4%
(3) 3 12 32.4%
(4) 4 5 13.5%
(5) 5 17 45.9%

"Comment" responses:


See comments for title data element


 

2.05.2.2 Element Title.Episode Refinements Confusing

Mean = 1.97, Standard Deviation = 0.16

Response Count Percent
(1) Yes 1 2.6%
(2) No 38 97.4%

"Comment" responses:


It seems there are no controlled vocabularies or encoding schemes being recommended.


 

2.06.1.1 Element Subject Rating

Mean = 4.77, Standard Deviation = 0.48

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 1 2.6%
(4) 4 7 17.9%
(5) 5 31 79.5%

"Comment" responses:


I believe Subject should be the descriptive backbone of a content management system.
It is unclear how title and subject are related. Is one a subset of the other?
Think lists of controlled words needs to be more restricted or else this will become chaotic.
I do have some reservations about making this element mandatory; subject description of some programs may be rather forced and artificial.
subject tags are critical to organizing our content by genre
Required to CREATE the record? Or for posting at some point?


 

2.06.1.2 Element Subject Confusing

Mean = 1.92, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.7%
(2) No 36 92.3%

"Comment" responses:


Not sure what this means: "If the subject of the item is a person or an organization, use the same form of the name that is used by the element Creator."
Not quite as clear as other descriptions, but there are more issues to cover. The links to additional information are valuable.
See above
Can't actually read it because of your late breaking news pop-up


 

2.06.2.1 Element Subject Refinements Rating

Mean = 3.89, Standard Deviation = 1.22

Response Count Percent
(1) 1 2 5.4%
(2) 2 4 10.8%
(3) 3 5 13.5%
(4) 4 11 29.7%
(5) 5 15 40.5%

"Comment" responses:


Good sources and examples, but perhaps this element needs more attention. For some, one field is not enough to capture a subject hierarchy. Consider Discipline, Topic, Sub_Topic, etc.
In the traditional library cataloging system, Library of Congress Subject Headings and Name Authorities are used. However, in the emerging library digital projects/initiatives, these schemes and controlled vocabularies are not strictly followed.
We should suggest some options and a manner to designate what scheme you are using.
This is a good idea: "PBCore is considering adding an element, possibly named Subject.ClassificationSchemeUsed, in which a specific subject authority can be identified."
I would strongly recommend documenting the name and version of any schema that is used, since those schema change frequently. I actually would encourage the development of controlled subject heading lists to add semantic consistency to descriptions.
I would *strongly* endorse adding the subject.ClassificationSchemeUsed element to the vocabulary.
Controlled subject authority is highly recommended
Consider specifying a default classification scheme with the option to choose another if more appropriate. Add the ClassificationSchemeUsed element.
What is the significance of the use of periods, semi-colons, and hyphens separating terms in the examples? The guidelines for usage do not make this info easy to access.
Controlled vocabularies for subject headings ARE important. Keywords produce unreliable results.
Approach as indicated in late breaking news seems to be plainly wrong. Classification Scheme Used should be indicated by a scheme qualifier not an element refinement
Identification of authority used to select keywords should be identifiable within the metadata. Additionally, PBCore should identify and require certain authorities to standardize vocabulary across stations.
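Several of the comments above endorse a Subject.ClassificationSchemeUsed element. A minimal sketch of that idea, carrying the authority name alongside each subject term; the element and attribute names here are hypothetical illustrations, not part of the published PBCore draft:

```python
import xml.etree.ElementTree as ET

def subject_element(term, scheme=None):
    """Build a hypothetical <subject> element; the 'classificationSchemeUsed'
    attribute stands in for the proposed Subject.ClassificationSchemeUsed."""
    el = ET.Element("subject")
    el.text = term
    if scheme:
        el.set("classificationSchemeUsed", scheme)
    return el

el = subject_element("Vietnam War, 1961-1975", scheme="LCSH")
xml_text = ET.tostring(el, encoding="unicode")
```

With the scheme recorded per term, a receiving system can tell an LCSH heading from a local keyword instead of guessing.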


 

2.06.2.2 Element Subject Refinements Confusing

Mean = 1.95, Standard Deviation = 0.22

Response Count Percent
(1) Yes 2 5.1%
(2) No 37 94.9%

"Comment" responses:


It seems ok to follow certain rules instead of using a particular scheme.
Interesting how Subject is actually what I normally think of generically as keyword.
Don't know; can't read it because of the pop-up.


 

2.07.1.1 Element Description Rating

Mean = 4.47, Standard Deviation = 0.72

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 5 12.5%
(4) 4 11 27.5%
(5) 5 24 60.0%

"Comment" responses:


good for searching keywords
I guess I am surprised to see that this data element is mandatory rather than recommended.
This seems focused on pragmatic descriptive text. What about marketing-oriented (pitch?) text?


 

2.07.1.2 Element Description Confusing

Mean = 1.98, Standard Deviation = 0.16

Response Count Percent
(1) Yes 1 2.5%
(2) No 39 97.5%

"Comment" responses:


We should change the description to indicate that at the end of the description we are referencing the Subject element rather than the stated "...subject or topic of an item."
Are there character limits to how much content may be entered or where it is likely to be cut off? Can the content be tagged to be retrieved by various media such as an EPG guide?
Vague usage guidelines and broad range of examples will yield similar results in practice
You might find that in some stations, of course, non-catalogers/archivists/librarians might confuse this field with Subject and not understand the distinction between this and the qualified forms of this element. But I don't see how this can be avoided.


 

2.07.2.1 Element Description Refinements Rating

Mean = 4.00, Standard Deviation = 1.03

Response Count Percent
(1) 1 1 2.7%
(2) 2 2 5.4%
(3) 3 7 18.9%
(4) 4 13 35.1%
(5) 5 14 37.8%

"Comment" responses:


I would add an additional refinement for "evaluation" or "commentary" to allow educators and others to add evaluative descriptions via portal implementations.
separate elements for abstract and table of contents are a good thing.
Again there were none.


 

2.07.2.2 Element Description Refinements Confusing

Mean = 1.97, Standard Deviation = 0.16

Response Count Percent
(1) Yes 1 2.6%
(2) No 38 97.4%

"Comment" responses:



 

2.08.1.1 Element Description.Abstract Rating

Mean = 4.03, Standard Deviation = 1.06

Response Count Percent
(1) 1 2 5.1%
(2) 2 1 2.6%
(3) 3 6 15.4%
(4) 4 15 38.5%
(5) 5 15 38.5%

"Comment" responses:


I don't understand why this is necessary. Doesn't description handle this?
good for keyword searchers
Need to better differentiate this from other description elements
the abstract is longer than the description!!! Very strange.
This element would be highly useful in search engines. Is it possible to require the use of complete sentences as opposed to phrases?
I think "abstract" is a misleading term that invites confusion between this and the more general Description. But your definition, if people use it, pushes toward clarity.


 

2.08.1.2 Element Description.Abstract Confusing

Mean = 1.92, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.7%
(2) No 36 92.3%

"Comment" responses:


This is NOT just a longer form of the description (see my note below) and that needs to be emphasized.
It's important but needs better definition.
Should be called Description Summary
I found this a little confusing, as I had trouble finding something that I am familiar with to compare it to. However, I think I get the "gist" of it.
I think listing a word limit or range of words would further define "short".
Character limit?
Vague (to me) line between this and description.


 

2.08.2.1 Element Description.Abstract Refinements Rating

Mean = 3.54, Standard Deviation = 1.10

Response Count Percent
(1) 1 1 2.7%
(2) 2 5 13.5%
(3) 3 13 35.1%
(4) 4 9 24.3%
(5) 5 9 24.3%

"Comment" responses:


This is a key thing, and should be highlighted: "why an asset or media file is important at all or within certain contexts."
there were none.


 

2.08.2.2 Element Description.Abstract Refinements Confusing

Mean = 1.92, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.7%
(2) No 36 92.3%

"Comment" responses:



 

2.09.1.1 Element Description.TableOfContents Rating

Mean = 4.26, Standard Deviation = 0.88

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.6%
(3) 3 8 20.5%
(4) 4 10 25.6%
(5) 5 20 51.3%

"Comment" responses:


Not sure how this element would be used on item level. You should consider a separate Metadata Dictionary for a Collection-level record, where this might fit.
The TOC should be able to refer to other individual content records-- e.g., a single piece on All Things Considered.
Don't see this as a table of contents. We call these "elements" . Table of Contents has too much of a different meaning in the print world.
This looks like a log to me. Seems strange to me to use bibliographic term for moving image content.
Very useful. Could be used for the production process as well as final output.
would be helpful if included readable in-out points for linear media so application could jump directly to desired segment
Structured information categories should be pre-defined.
Why aren't some of these broken out into their own elements? Content advisories, for example, could be an important search criterion.


 

2.09.1.2 Element Description.TableOfContents Confusing

Mean = 1.92, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.7%
(2) No 36 92.3%

"Comment" responses:


Good description. Liked use of time code. Might also be a way to take teachers to areas that relate to content related to certain educational standards or topics.
The only confusion I had was that coming from a TV entity, I felt confused and mislead by the indication that composers, and play lists would be listed here, as in a TV program the composers would be listed in Contributor. But then I realized radio.


 

2.09.2.1 Element Description.TableOfContents Refinements Rating

Mean = 3.57, Standard Deviation = 1.24

Response Count Percent
(1) 1 3 8.1%
(2) 2 3 8.1%
(3) 3 12 32.4%
(4) 4 8 21.6%
(5) 5 11 29.7%

"Comment" responses:


I would strongly recommend structured formatting for this data element. You give an example, but you don't recommend formatting within the data element.
A standardized format makes a lot of sense
Some subunits should have a standardized structure-- not just free-form text. One of your examples lists the start & end times for the various segments of a program & gives a brief description of each. A standard format here would be helpful.
I think the 'natural language' requirement may prove to be a problem, but I honestly don't have a better answer. At this time.
n/a


 

2.09.2.2 Element Description.TableOfContents Refinements Confusing

Mean = 1.90, Standard Deviation = 0.31

Response Count Percent
(1) Yes 4 10.3%
(2) No 35 89.7%

"Comment" responses:


Interesting to see this listed as a text string. Wouldn't potentially rigidly formatted text like a playlist potentially be exchanged as xml (or even comma delimited text) and not a text string?
Will all meta data transfer between multiple platforms and software programs?
n/a
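The comment above asks whether a rigidly formatted playlist could travel as XML rather than a free-text string. A minimal sketch of what that might look like, using hypothetical element names (the draft defines TableOfContents as a text string, so this is an alternative, not the specified encoding):

```python
import xml.etree.ElementTree as ET

# Segments with in/out timecodes, as in the survey's start/end-time example.
segments = [
    ("00:00:00", "00:08:30", "Opening news block"),
    ("00:08:30", "00:17:00", "Studio interview"),
]

# Hypothetical structured container in place of a free-form text string.
toc = ET.Element("tableOfContents")
for start, end, title in segments:
    seg = ET.SubElement(toc, "segment", start=start, end=end)
    seg.text = title

xml_text = ET.tostring(toc, encoding="unicode")
```

A structure like this would let an application jump directly to a desired segment, as one respondent requests, instead of parsing prose.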


 

2.10.1.1 Element Description.ProgramRelatedText Rating

Mean = 4.13, Standard Deviation = 1.08

Response Count Percent
(1) 1 2 5.1%
(2) 2 1 2.6%
(3) 3 5 12.8%
(4) 4 13 33.3%
(5) 5 18 46.2%

"Comment" responses:


I don't see the usefulness of this as a field element. Better would be to treat related program text as content, not metadata. Individual systems might handle textual content as metadata, but it serves no purpose in a metadata dictionary.
We call these "components". Not all of this is really text.
Clearly identifying related text (and its language and usage) will be of immense help in assisting end-users in locating materials they want.
R
Metadata seems like the wrong place for Program Related Text (PRT). PRT is itself an object that should be linked to (or part of) the main object it's related to and described with its own metadata.
This type of metadata such as closed captions and subtitles would be more useful if linked to essence via time code as an event on a timeline or a metadata track as in MXF and AAF
Content for additional cost should not be included in this element.
Good thing this is repeatable, because with the automated text/speech extraction tools coming into use, this is going to be a popular field. And long in terms of data to be entered.


 

2.10.1.2 Element Description.ProgramRelatedText Confusing

Mean = 1.89, Standard Deviation = 0.31

Response Count Percent
(1) Yes 4 10.5%
(2) No 34 89.5%

"Comment" responses:


A little confused about "Actual types of Program Related Text are identified in the element LANGUAGE.USAGE", will comment under LANGUAGE.USAGE
I found it somewhat difficult to imagine how to include this information in a description field. Would it be important to differentiate the type (transcript, closed caption, etc.) with a label: Closed Caption, Transcript, etc.
But description not very clear
Knowing if a program has closed captions (for example) is important, but I'm not sure if this describes the captioning or is the only indicator that the program is closed captioned. Hopefully, this will be clearer later.
Combining different types of elements that are diverse seems problematic. For example, does SAP mean, what the actual words would be in the SAP text, or simply indicate that there was an SAP?
Would like to see examples in this area as well.
Examples would be useful, even if only an excerpt from an example.
not clear when to use this instead of Relation


 

2.10.2.1 Element Description.ProgramRelatedText Refinements Rating

Mean = 3.59, Standard Deviation = 1.28

Response Count Percent
(1) 1 2 5.4%
(2) 2 6 16.2%
(3) 3 10 27.0%
(4) 4 6 16.2%
(5) 5 13 35.1%

"Comment" responses:


This data element needs work. I suggest you mock up some examples, to see what you are up against.
n/a


 

2.10.2.2 Element Description.ProgramRelatedText Refinements Confusing

Mean = 1.84, Standard Deviation = 0.37

Response Count Percent
(1) Yes 6 15.8%
(2) No 32 84.2%

"Comment" responses:


Examples of how to combine description.programrelatedtext, language and language usage in cases of multiple texts/languages will be helpful.
No examples?
If, as in the above example with closed captioning, this is the only indicator, then I think that something other than free-form text should be used for such coding.
More to cover, but extremely useful for special services for visually or hearing impaired, multiple languages.
n/a


 

2.11.1.1 Element Type Rating

Mean = 4.50, Standard Deviation = 0.82

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.0%
(3) 3 2 5.0%
(4) 4 10 25.0%
(5) 5 26 65.0%

"Comment" responses:


Type as defined here serves as an organizing mechanism for the dictionary itself. That is, fields and values should be broken out by media type.
Not a good description -- should this include items types like sub master, stock, original footage etc.
Pop-up window is annoying.
This element should be required.
Like the problem with Title, with non-cataloger data entry people, I don't see how you are going to get them to fill out Type, Type.Form and Type.Genre. I understand the importance of these 3 fields and of splitting them out, but a producer/ap/editor


 

2.11.1.2 Element Type Confusing

Mean = 1.93, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.5%
(2) No 37 92.5%

"Comment" responses:


Would the PBCore Type List also include "E-commerce and T-commerce"? Just thinking that it may be useful to be able to tag items for easy retrieval or interaction.
is just going to want to say "Moving Image: Drama" here. In fact, in context with Type.Form and Type.Genre, this field looks to be of little use the more narrow your application of the metadata is. "Of course it's a moving image, it's a video"


 

2.11.2.1 Element Type Refinements Rating

Mean = 4.03, Standard Deviation = 0.94

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.3%
(3) 3 10 26.3%
(4) 4 11 28.9%
(5) 5 15 39.5%

"Comment" responses:


I would use still image instead of static image, to more explicitly map to DC. It's important that your list maps to general usage by others for interoperability.
Type, type.form and type.genre need serious reconsideration as to their use. As defined, they have messy and confusing overlap, and controlled vocabs are badly designed.
This is one of those where there is a fine line between too general and too specific-- just an observation.
Some of the types, Animation, Physical object, etc., seem to be quite diverse, creating an apples vs. oranges problem
Good to use pick list
Will there be a catch-all 'other' category? If not, how do we petition to add one?
essential to use controlled vocabulary for this
Picklist should be limited and exclusive
The problem with this element is not in the conception of it or your description, but in the reality of who is going to be putting data into an asset management system. For the non-cataloger, the distinction between .Form and .Genre doesn't seem


 

2.11.2.2 Element Type Refinements Confusing

Mean = 1.92, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.7%
(2) No 36 92.3%

"Comment" responses:


but there seems to be an odd duck in the list. Animation?
Can an interactive resource contain multiple types? Are these types cataloged separately?
relevant. Type seems too general to be useful, unless you are indoctrinated to think enterprise wide about the system. I think there will be lots of misuse of these 3 elements due to lack of understanding why.


 

2.12.1.1 Element Type.Form Rating

Mean = 3.59, Standard Deviation = 1.33

Response Count Percent
(1) 1 5 12.8%
(2) 2 4 10.3%
(3) 3 3 7.7%
(4) 4 17 43.6%
(5) 5 10 25.6%

"Comment" responses:


I think the examples give a good indication of the need for splitting Form into Media types. As it exists now, the list is far too long and ill-defined.
It seems to me that "Type" and "Type.Form" are describing different information, with not much implied hierarchy.
type.genre should cover it.
Seems to me it should exist in addition to type.genre
The elements seem too series-focused -- what about individual pieces within an episode of a series?
Disagree that Form can be moved into Genre.
See Type comments.
It seems to me this confuses genre with form. See for example AMIA form terms.
Big pick list may be cumbersome to use
How is Type.Form="crime drama" different from Type.Form="drama" and Type.Genre="crime"?
Again, I think that the distinction between .Form and .Genre will be lost on the average data entry person. Even the value lists look similar enough that one wonders why you have two different elements to describe this.


 

2.12.1.2 Element Type.Form Confusing

Mean = 1.85, Standard Deviation = 0.37

Response Count Percent
(1) Yes 6 15.4%
(2) No 33 84.6%

"Comment" responses:


Not confusing to me but I suspect will be for others.
A little confused about the definition "Type" and "Type.Form"
I don't think we should delete it but rather leave it for those of us who need it. I think the feeling was that people couldn't understand the difference, so combine them into Genre. Radio folks will use it and need to share this data.
Under "Examples": Interview is misspelled as "Inteview". Missing an "r."
The distinction between this and genre is not clear.
What is to keep people from trying to just use one of these elements and lumping in Type, .Form and .Genre? I think that that is what most stations without catalogers are going to be inclined to do.


 

2.12.2.1 Element Type.Form Refinements Rating

Mean = 3.53, Standard Deviation = 1.43

Response Count Percent
(1) 1 6 15.8%
(2) 2 3 7.9%
(3) 3 6 15.8%
(4) 4 11 28.9%
(5) 5 12 31.6%

"Comment" responses:


See notes under 2.12.1.1. Both Form and Genre may be out of our hands; we may be forced to accept existing broadcast industry standards.
The controlled list will always be open to question (for example, do we really need three versions of "soap" in the PB core?) Is it worth revisiting the "other" category, allowing "other" to be selected in association with another category. Probablto
Very thorough aggregate list
Again, this is one where you need to use caution. You don't want to be too general, but being too specific could cause trouble too (and create a huge list). Also, you don't want Type and Type.Form to become interchangeable.


 

2.12.2.2 Element Type.Form Refinements Confusing

Mean = 1.85, Standard Deviation = 0.37

Response Count Percent
(1) Yes 6 15.4%
(2) No 33 84.6%

"Comment" responses:



 

2.13.1.1 Element Type.Genre Rating

Mean = 4.00, Standard Deviation = 1.21

Response Count Percent
(1) 1 2 5.1%
(2) 2 5 12.8%
(3) 3 1 2.6%
(4) 4 14 35.9%
(5) 5 17 43.6%

"Comment" responses:


Genre is a fundamental concept. Unfortunately it gets too wrapped up in Subject and even Form. Untangling these elements may be fruitless, considering there are already broadcasting standards in place. We need to look outside public broadcasting.
In library, the topical information is described in "Subject", not in "Genre".
Same comment as prior. We'll never cover all the bases ... Should there be some distinction for "Formal Instructional" as distinguished from "how to" and "educational"? Or does "educational" = "instructional" in that sense?
I would combine form and genre as one data element.
See Type.
Was there any consideration of Moving Image Materials: Genre Terms by LC in developing this or referencing?
This should be covered by Subject
Many PB services are being created that aggregate and serve (and sell) content by genre.
Pick list too big
Of the three Type elements, this is the one that makes the most sense. I see, unfortunately, how it's going to be inviting to contaminate this field with terms from the other two fields. The distinction between the elements seems confusing or


 

2.13.1.2 Element Type.Genre Confusing

Mean = 1.90, Standard Deviation = 0.31

Response Count Percent
(1) Yes 4 10.3%
(2) No 35 89.7%

"Comment" responses:


In library, for example, "Genre terms for textual materials designate specific kinds of materials distinguished by the style or technique of their intellectual content (e.g., biographies, catechisms, essays, hymns or reviews)."
controlled vocab should be much smaller
A good genre list
not sure distinction between type.form and type.genre; not clear distinction between type.genre and subject
insignificant. The Type.Genre element is the one that makes most sense and appears to be most valuable. I also see people using Type and Type.Format as repeatable instances of this element instead of how they are supposed to be used.


 

2.13.2.1 Element Type.Genre Refinements Rating

Mean = 4.11, Standard Deviation = 0.99

Response Count Percent
(1) 1 1 2.7%
(2) 2 3 8.1%
(3) 3 1 2.7%
(4) 4 18 48.6%
(5) 5 14 37.8%

"Comment" responses:


A policy statement needs to be made recognizing existing broadcast industry standards.
The description doesn't adequately distinguish between form and genre, and the distinctions may not be relevant to the public. We combined these in the MIC core registry.
How is this different from Type.Form?
Are these pick lists case sensitive? Why are only the first letters capitalized? Are there other potential lists of genres in addition to the Tribune Sports Tags?
Even if not everyone uses the same genre CV, we need XML parsers that can translate one group's list to the other.
Tags for additional genres should also be identified, developed, and made exclusive.
Do you expect stations to shorten the list and just include the genre terms they expect their people to use? The list is too long, and for our purposes about half of the terms are not relevant, but this is going to be different at each station.
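One comment above asks for parsers that can translate one group's genre list into another's. A minimal crosswalk sketch; the local terms and mappings below are invented examples for illustration, not an official PBCore table:

```python
# Hypothetical crosswalk from one station's local genre terms to a shared
# picklist. In practice each station would maintain its own mapping.
LOCAL_TO_PBCORE = {
    "Current Affairs": "News",
    "Doc": "Documentary",
    "Kids": "Children's",
}

def to_pbcore_genre(local_term):
    # Fall back to the local term itself when no mapping is defined,
    # so unmapped values are passed through rather than silently lost.
    return LOCAL_TO_PBCORE.get(local_term, local_term)
```

The same table, inverted, translates in the other direction, which is the essence of the CV-to-CV translation the respondent describes.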


 

2.13.2.2 Element Type.Genre Refinements Confusing

Mean = 1.92, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.9%
(2) No 35 92.1%

"Comment" responses:


this data is important, but element used is confusing; see comments above


 

2.14.1.1 Element Source Rating

Mean = 3.82, Standard Deviation = 1.07

Response Count Percent
(1) 1 0 0.0%
(2) 2 5 12.8%
(3) 3 11 28.2%
(4) 4 9 23.1%
(5) 5 14 35.9%

"Comment" responses:


Though we use this element internally for capturing legacy metadata, I am not convinced of its usefulness for metadata exchange. Too similar to Relation.
I think that this part needs to be fleshed out more. There needs to be a clear definition of entities-- programs, producers, etc, that can be a source.
Could be confused with "creator". Could "Derivation" not be used here to avoid confusion?
How do we account for source being within ptv station/internal department?
Is there a way to uniquely identify other resources that are defined by PBCore?
It is important in your instructions to remind people that this is a Content element. There is no clear field to use for the physical analog source (this was digitized from a 3/4" tape), and people may be inclined to put that information here.


 

2.14.1.2 Element Source Confusing

Mean = 1.84, Standard Deviation = 0.37

Response Count Percent
(1) Yes 6 15.8%
(2) No 32 84.2%

"Comment" responses:


I think an example would help; I assume it means something like the underlying literary work of a broadcast play or musical, but it was not clear in the definition


 

2.14.2.1 Element Source Refinements Rating

Mean = 3.41, Standard Deviation = 1.26

Response Count Percent
(1) 1 4 10.8%
(2) 2 3 8.1%
(3) 3 13 35.1%
(4) 4 8 21.6%
(5) 5 9 24.3%

"Comment" responses:


The examples could easily fit into Annotation.
n/a
Source should have derivation categories. (e.g. books, film, program)


 

2.14.2.2 Element Source Refinements Confusing

Mean = 1.82, Standard Deviation = 0.39

Response Count Percent
(1) Yes 7 18.4%
(2) No 31 81.6%

"Comment" responses:


Can this not only reference multiple derivations, but varied derivations from other 'mutually exclusive' assets?
n/a


 

2.15.1.1 Element Relation.Type Rating

Mean = 3.62, Standard Deviation = 1.07

Response Count Percent
(1) 1 2 5.1%
(2) 2 1 2.6%
(3) 3 17 43.6%
(4) 4 9 23.1%
(5) 5 10 25.6%

"Comment" responses:


Relation does have an important place in the Metadata Dictionary, but its current usefulness is weakened by not including a Collection-level model that would serve as a parent or "packaging" record.
I would tend to use this very sparingly, or not at all. It's a very confusing field.
This seems the wrong way to do this. DC has the option to use Relation refinements (e.g. Relation.IsVersionOf) that contains either text or the identifier of a related resource.
Can the secondary resource also exist as a primary resource within the same asset system or database?
So is this the element where you would refer to the original analog tape a digital object is a surrogate of? Unless you have been a cataloger, I think the idea of primary source you are describing and secondary source it relates to can easily be
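One comment above points to the Dublin Core option of using Relation refinements (e.g. Relation.IsVersionOf) that hold the related resource's identifier directly. A sketch of the two encodings side by side; the field names and identifier value are illustrative only:

```python
# Draft PBCore approach: a paired Relation.Type / Relation.Identifier.
pbcore_pair = {
    "Relation.Type": "Is Version Of",
    "Relation.Identifier": "wgbh_nova_1234",  # hypothetical identifier
}

# Qualified Dublin Core style: the relationship type is folded into the
# element name, and the value is just the identifier.
dc_refined = {"Relation.IsVersionOf": "wgbh_nova_1234"}
```

Both carry the same information; the DC form avoids the risk of a Relation.Type arriving without its matching Relation.Identifier.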


 

2.15.1.2 Element Relation.Type Confusing

Mean = 1.85, Standard Deviation = 0.36

Response Count Percent
(1) Yes 6 15.0%
(2) No 34 85.0%

"Comment" responses:


Very confusing as to what this means!
The description and examples are not as clear as others are. It is still very important. Shouldn't the examples include what it is related to?
Looks cumbersome to use
confused because in our project the primary source is the original tape, the secondary source is the digital object, and the record will serve (in reality, although admittedly, not correctly) to describe both items. It's true in other elements as well


 

2.15.2.1 Element Relation.Type Refinements Rating

Mean = 3.54, Standard Deviation = 1.17

Response Count Percent
(1) 1 2 5.4%
(2) 2 4 10.8%
(3) 3 13 35.1%
(4) 4 8 21.6%
(5) 5 10 27.0%

"Comment" responses:


These values need to be reconsidered, perhaps even dumbed down for practical usage. Also, some of these values are covered by elements like Format.
I think you should evaluate the list and remove any that aren't critical for the PBS community.
I question the need for "is format of" in the controlled vocabulary, given that "is version of" is also present.
but particularly with this element, the "show me" examples are of little help. Describing a potential situation and how the items were described would be more helpful. If we understood how to use "Has Format" then we wouldn't need to see it listed in


 

2.15.2.2 Element Relation.Type Refinements Confusing

Mean = 1.77, Standard Deviation = 0.43

Response Count Percent
(1) Yes 9 23.1%
(2) No 30 76.9%

"Comment" responses:


Needs more work.
How does 'replaced by' eventually become 'deleted', if ever?
the "show me" part. I think you need to be more clear about how this element works with Source and is or is not used to describe master tapes or original sources.


 

2.16.1.1 Element Relation.Identifier Rating

Mean = 3.89, Standard Deviation = 1.01

Response Count Percent
(1) 1 1 2.6%
(2) 2 1 2.6%
(3) 3 12 31.6%
(4) 4 11 28.9%
(5) 5 13 34.2%

"Comment" responses:


It's probably good to specify Identifier but confusing when one looks at the Relation Type values: not all suggest an Identifier value.
r
Absolutely necessary if Relation is used
Is there a tag for EPG retrieval of data? Would be great if we could embed data that could update EPG systems dynamically.
need to include type of identifier; if free text can it be called an identifier? It's questionable whether free text falls under the definition of identifier.
My comments do not fit here. In short, the mechanism proposed for Relation.Type and Relation.Identifier is unnecessary, because it can be expressed through a Relation refinement whose value is the identifier.
Identifier should be required if the asset has an entry for Relation.Type.
PBCore unique identifier?
If people can wrap their heads around the concepts involved in Source and Relation.Type, then this field is a piece of cake and highly relevant if people use it.


 

2.16.1.2 Element Relation.Identifier Confusing

Mean = 1.87, Standard Deviation = 0.34

Response Count Percent
(1) Yes 5 12.8%
(2) No 34 87.2%

"Comment" responses:


Somewhat confusing but I got the general concept.
One example is location -- this is confusing
Somewhat confusing. An example in conjunction with Relation.Type might have helped.
This was quite unclear to me; the examples were mostly numeric, so I could not really understand what this meant.
O.K., now I see how the components are split up for increased flexibility. I like the ability to use the bar code.


 

2.16.2.1 Element Relation.Identifier Refinements Rating

Mean = 3.47, Standard Deviation = 1.25

Response Count Percent
(1) 1 3 8.3%
(2) 2 4 11.1%
(3) 3 12 33.3%
(4) 4 7 19.4%
(5) 5 10 27.8%

"Comment" responses:


See note under 2.16.1.1
I think you need to revisit programrelated text and the relation fields with some examples. I think this needs further work. Not enough guidance is provided here.
I think more emphasis needs to be put on uniquely identifying a related resource. Saying 'x is version of y' and only providing a shelf location for y could be a problem when someone decides to reshelve....
n/a
If there is not a formal way to uniquely identify the resource, it may be unusable!


 

2.16.2.2 Element Relation.Identifier Refinements Confusing

Mean = 1.87, Standard Deviation = 0.34

Response Count Percent
(1) Yes 5 13.2%
(2) No 33 86.8%

"Comment" responses:


What's confusing is the bond between this and Relation Type.
How did you create a bar code using a text string?
n/a


 

2.17.1.1 Element Coverage.Spatial Rating

Mean = 3.50, Standard Deviation = 1.25

Response Count Percent
(1) 1 3 7.9%
(2) 2 5 13.2%
(3) 3 10 26.3%
(4) 4 10 26.3%
(5) 5 10 26.3%

"Comment" responses:


I am on the fence with this element. I would not object if it was covered by qualified Subject (Geographic). However, if it's used to help capture Geospatial camera info, then maybe it belongs within Format.
High importance but term spatial needs to be more focused
Are abbreviations recognized by all data systems? Ex: state or city abbreviations.


 

2.17.1.2 Element Coverage.Spatial Confusing

Mean = 1.92, Standard Deviation = 0.27

Response Count Percent
(1) Yes 3 7.7%
(2) No 36 92.3%

"Comment" responses:


As I mentioned, however, it's hard to pinpoint what is the nature of the metadata and how is it entered or captured?
It could be a bit clearer that this concerns "spatial" elements within the program; at first, it almost seemed like an archival determination of the physical location of the program


 

2.17.2.1 Element Coverage.Spatial Refinements Rating

Mean = 3.46, Standard Deviation = 1.26

Response Count Percent
(1) 1 3 8.1%
(2) 2 4 10.8%
(3) 3 14 37.8%
(4) 4 5 13.5%
(5) 5 11 29.7%

"Comment" responses:


The examples add to my confusion in that they list both descriptive and geospatial metadata, which seems like apples and oranges, subject vs. format data.
Teachers really like to localize resources. I would suggest, at a minimum, using the ISO 3166-1 and -2 country and state codes to provide some searchable uniformity here.
Think it's a mistake to leave this free-form.
I don't think you should discourage the use of gazetteers/controlled vocabularies for place names simply because there isn't one in the public domain.
Really need some sort of thesaurus, or at least rules for entering information.
n/a
Why will you not use thesauri like the Getty TGN or EU NUTS?
Need to develop an authority file outlining form and definition. (i.e. Boston or Boston, MA or Boston, MA USA) This would standardize the format of the resource. A picklist would also be helpful to keep syntax similar.
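Several of the comments above converge on the same request: a controlled form for place names, whether via ISO 3166 codes or an authority file with a "Boston, MA, USA"-style convention. A minimal sketch of what that normalization could look like, assuming a hypothetical lookup table (the `ISO_3166_2_US` dictionary here is a toy subset, not a real gazetteer):

```python
# Hypothetical sketch: normalizing free-text Coverage.Spatial entries to a
# consistent "City, ISO 3166-2 code" form, as respondents suggest.
# The lookup table is illustrative only; a real system would use a full
# gazetteer or the complete ISO 3166-1/-2 code lists.

ISO_3166_2_US = {
    "massachusetts": "US-MA",
    "nebraska": "US-NE",
    "georgia": "US-GA",
}

def normalize_spatial(city, state, country="US"):
    """Return a consistent place-name string for Coverage.Spatial.

    Falls back to the original free text when no code is known,
    rather than guessing."""
    code = ISO_3166_2_US.get(state.strip().lower())
    if code is None:
        return f"{city}, {state}, {country}"
    return f"{city}, {code}"

print(normalize_spatial("Boston", "Massachusetts"))  # Boston, US-MA
```

The fallback branch matters: a suggested-but-not-enforced scheme still accepts free text, it just makes the controlled form the path of least resistance.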


 

2.17.2.2 Element Coverage.Spatial Refinements Confusing

Mean = 1.89, Standard Deviation = 0.31

Response Count Percent
(1) Yes 4 10.5%
(2) No 34 89.5%

"Comment" responses:


See note 2.17.2.1
Interesting that you thought beyond simply making the category "location" or "dateline"
n/a


 

2.18.1.1 Element Coverage.Temporal Rating

Mean = 3.89, Standard Deviation = 1.20

Response Count Percent
(1) 1 2 5.3%
(2) 2 3 7.9%
(3) 3 8 21.1%
(4) 4 9 23.7%
(5) 5 16 42.1%

"Comment" responses:


Again, I think it's possible to move this to Subject as do other cataloging standards but I don't object.
It's a big mistake not to use some kind of scheme such as ISO 8601.
Is this also used for an event? Ex: time stated in which item exists live online. Item is removed after expiration date.
While I think this is an important element, your caution that TC may be sufficiently covered in Subject, Title, or Description will lead to inconsistency in how this element is used.


 

2.18.1.2 Element Coverage.Temporal Confusing

Mean = 1.95, Standard Deviation = 0.22

Response Count Percent
(1) Yes 2 5.1%
(2) No 37 94.9%

"Comment" responses:


Need just a little clarification on above point.


 

2.18.2.1 Element Coverage.Temporal Refinements Rating

Mean = 3.68, Standard Deviation = 1.27

Response Count Percent
(1) 1 3 8.1%
(2) 2 3 8.1%
(3) 3 10 27.0%
(4) 4 8 21.6%
(5) 5 13 35.1%

"Comment" responses:


Good not to require a specific format ... but shouldn't we recommend a common approach when possible?
I think you will be sorry not to provide uniformity here. MARC guidelines can provide uniformity and are readily available at loc.gov if you don't like the ISO standard
Somewhere it would be good to capture if there is an end-date to the products relevancy -- If there is something in the production that really dates it.
see above.
Dates are messy. Allowing free text dates keeps them that way. Note that without applying not-yet-invented artificial intelligence, searches for 1863 won't find the asset labeled '1861-1865'. No good solutions here, unfortunately.
Really need to have rules if not using a coding scheme like ISO 8601
n/a
DATE is very important. It needs to be formalized! Free text is not a good idea for this field.
Time periods should be standardized and not made a free-form text entry.
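The "1863 won't find '1861-1865'" problem raised above is solvable without any AI if periods are stored as structured intervals rather than free text. A minimal sketch, assuming year-only intervals written in the ISO 8601 interval style (`"1861/1865"`); a real implementation would handle full dates and open-ended ranges:

```python
# Sketch of why structured dates matter for Coverage.Temporal: a period
# stored as an ISO 8601-style interval ("1861/1865") can be matched by a
# simple numeric comparison, while the free-text form "1861-1865" cannot.
# Handles year-only values; full dates and open intervals are out of scope.

def parse_interval(value):
    """Split 'YYYY' or 'YYYY/YYYY' into (start_year, end_year)."""
    parts = value.split("/")
    start = int(parts[0])
    end = int(parts[1]) if len(parts) > 1 else start
    return start, end

def covers(value, year):
    """True if the stored temporal value includes the given year."""
    start, end = parse_interval(value)
    return start <= year <= end

print(covers("1861/1865", 1863))  # True: 1863 falls inside the interval
print(covers("1870", 1863))       # False: a single-year value
```

This is the trade-off the refinement comments are circling: free text preserves nuance but forfeits exactly this kind of query.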


 

2.18.2.2 Element Coverage.Temporal Refinements Confusing

Mean = 1.97, Standard Deviation = 0.16

Response Count Percent
(1) Yes 1 2.6%
(2) No 37 97.4%

"Comment" responses:


My notes for spatial coverage generally pertain to temporal coverage, though the geospatial example is missing here, which makes this element more consistent with its definition.
Interesting that you thought beyond simply making the category "date."
n/a


 

2.19.1.1 Element Audience.Level Rating

Mean = 4.05, Standard Deviation = 1.05

Response Count Percent
(1) 1 0 0.0%
(2) 2 3 7.7%
(3) 3 11 28.2%
(4) 4 6 15.4%
(5) 5 19 48.7%

"Comment" responses:


This element really should apply to learning standards. It is far too complicated to be handled by a single element. An educational sub-committee should be formed to deal with how our materials connect to educational standards.
Makes sense to use the Library of Congress system ... but note that some school systems do not have Jr. High, but rather middle school, so 9th grade goes with 10-12.
TV business mandates a rating scheme such as TV-14, MA etc. How does that fit in?
important for certain sets of content, e.g. adult education or children's programming
Good idea, stations will not be happy if there is no further 'refinement'


 

2.19.1.2 Element Audience.Level Confusing

Mean = 1.97, Standard Deviation = 0.16

Response Count Percent
(1) Yes 1 2.6%
(2) No 38 97.4%

"Comment" responses:


The surface has barely been scratched for mapping materials to educational usage. Even beyond educational usage, we'd need a breakdown of material facets with ties to audience rating.
good description that is succinct
I am not sure what "special audience" is. Ex: is it for the hearing impaired or a niche market such as Spanish speakers, or train enthusiasts?


 

2.19.2.1 Element Audience.Level Refinements Rating

Mean = 3.92, Standard Deviation = 1.05

Response Count Percent
(1) 1 1 2.6%
(2) 2 2 5.3%
(3) 3 10 26.3%
(4) 4 11 28.9%
(5) 5 14 36.8%

"Comment" responses:


The values are a good start but this element is far too complicated for a single list or even a single element.
I agree that the LOC picklist is easier to apply than GEM, but GEM is an organization you will want to interoperate with (as is NSDL, which I believe uses GEM)
The controlled vocabulary needs some work. I'd love to hear the explanation of why 'male' and 'female' constitute different 'levels' of audience. Are blacks/muslims/native americans/episcopalians also different 'levels'?
Educational level should be a separate element. PBCore should consider the GEM metadata for recognized levels.


 

2.19.2.2 Element Audience.Level Refinements Confusing

Mean = 1.97, Standard Deviation = 0.16

Response Count Percent
(1) Yes 1 2.6%
(2) No 38 97.4%

"Comment" responses:


This is not exactly the "LOC" list; we are not sure what you mean by that or which list you are referring to. There is a list of "target audiences" from which some of this seems to come, with others added?


 

2.20.1.1 Element Audience.Rating Rating

Mean = 4.21, Standard Deviation = 0.98

Response Count Percent
(1) 1 1 2.6%
(2) 2 1 2.6%
(3) 3 6 15.4%
(4) 4 12 30.8%
(5) 5 19 48.7%

"Comment" responses:


The concept is a good one but does not go far enough.
Gotta have it, but the ratings themselves are so poorly applied that it just perpetuates bad data (and social policy)
I would use this element to replace Audience and drop AudienceRating as a separate element. This gets more to the content without making presumptions about who the audience should be.
if required by FCC, then for sure
importance increases as the degree of problematic content increases!
Element should be required.
Like with other elements, it would be useful if you provided your pick lists in a format where they could be cut and pasted into a database the stations might be using to describe their assets. I know it's early, but if I were creating a database


 

2.20.1.2 Element Audience.Rating Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 39 100.0%

"Comment" responses:


Again, okay as top-level explanation but educators may object to its usefulness for choosing materials appropriate to a classroom.
Links very useful.
now that I would want to interoperate with other systems later, I would want to be able to copy the values for my drop down menus now. If I'm too lazy to type them all out, it may be too late later to go back and fix this element when we get a


 

2.20.2.1 Element Audience.Rating Refinements Rating

Mean = 4.16, Standard Deviation = 1.13

Response Count Percent
(1) 1 2 5.3%
(2) 2 1 2.6%
(3) 3 6 15.8%
(4) 4 9 23.7%
(5) 5 20 52.6%

"Comment" responses:


I have no problem with the values but they should be accompanied by a notes element that specifically cites material that may be objectionable. (war violence, gore, even specific words)
I like the fact that you provided both rating systems. You might want to preface the rating with the rating system, e.g., MPAA: or FCC:
We need to have some equivalent for radio.
The rating schemes are moronic, but that's hardly your folks' fault.
proper system up and running.


 

2.20.2.2 Element Audience.Rating Refinements Confusing

Mean = 1.95, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.4%
(2) No 35 94.6%

"Comment" responses:



 

3.0 Property/Rights Metadata User

Mean = 1.24, Standard Deviation = 0.43

Response Count Percent
(1) Yes 37 75.5%
(2) No 12 24.5%

3.01.1.1 Element Creator Rating

Mean = 4.69, Standard Deviation = 0.52

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 1 2.8%
(4) 4 9 25.0%
(5) 5 26 72.2%

"Comment" responses:


Though Creator should be mandatory, roles need to be established to determine who or what job title is the Creator for a production.
Is there a way to distinguish between a sub-unit (state of Utah, dept of corrections) and an equal partnership?
You're going to have confusion with the "Source" element.
Very important but often only the presenter or distributor is known.


 

3.01.1.2 Element Creator Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


We could give the short version of AACR2 in the description. It's in the usage guidelines, however.


 

3.01.2.1 Element Creator Refinements Rating

Mean = 4.36, Standard Deviation = 0.96

Response Count Percent
(1) 1 1 2.8%
(2) 2 0 0.0%
(3) 3 6 16.7%
(4) 4 7 19.4%
(5) 5 22 61.1%

"Comment" responses:


I strongly recommend the establishment of an authority file for creators.
Enumerated list is for "role" and not creator proper. Which is fine.
You mention AACR2 rules; this gives rules for formulating the name. You might consider using controlled forms of names, i.e. names established by an authority list. Use the LC Name Authority file.


 

3.01.2.2 Element Creator Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 35 100.0%

"Comment" responses:



 

3.02.1.1 Element Creator.Role Rating

Mean = 4.36, Standard Deviation = 0.83

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 2 5.6%
(4) 4 13 36.1%
(5) 5 19 52.8%

"Comment" responses:


This will be subject to change as we use it. For example, "Executive in Charge" might be relevant in some circumstances.
should add application programmer, developer, designer
as a producer, mainly just interested in what I can do with the content rights wise, don't care about the title/role of those who made it
Roles should be expressed through element refinements, not through controlled vocabularies


 

3.02.1.2 Element Creator.Role Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:


Could it include Online Producer, Interactive Producer? We would want the department identified as well as the producer. We also may need "encoder or iTV producer" for TV interactive data.
I think the only thing people might have trouble with is where to draw the line between a Creator and a Contributor. When is a Composer a Creator instead of a Contributor say?


 

3.02.2.1 Element Creator.Role Refinements Rating

Mean = 4.23, Standard Deviation = 0.97

Response Count Percent
(1) 1 0 0.0%
(2) 2 3 8.6%
(3) 3 4 11.4%
(4) 4 10 28.6%
(5) 5 18 51.4%

"Comment" responses:


You are on the right track but roles should be relative to media or content type.
See my comments under "contributor"
Insufficiently broad controlled vocabulary, but I base that in part on my belief that you should eliminate the contributor element
hard due to overlapping, unclear roles
You may have to add additional "roles." For example, there was nothing for some important roles, for example Sound, Art direction, etc.
Some institutions might select just those values from the pick list that they want people to use as creators, since the list is quite long and poses some confusing delineations between Creators and Contributors.


 

3.02.2.2 Element Creator.Role Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:



 

3.03.1.1 Element Publisher Rating

Mean = 4.69, Standard Deviation = 0.52

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 1 2.8%
(4) 4 9 25.0%
(5) 5 26 72.2%

"Comment" responses:


This agent element is fundamental for packaging rights information. One reservation is semantic: many people do not think of broadcasters as publishers. Efforts are needed to change this. Also, I recommend a publisher ID to reference a Vcard domain.
This one will cause some confusion no matter how clearly it's defined.
I think there
confusing with "Distributor"
Shouldn't term be producer/distributor for moving image product
This is often the entity controlling rights
Critical to intellectual property since this is where the "copyright holder" information goes.


 

3.03.1.2 Element Publisher Confusing

Mean = 1.94, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.6%
(2) No 34 94.4%

"Comment" responses:


The definition helps clarify this element, but some users may insist on other, non-DC terms such as Presenter or Distributor.
If Dublin Core states that the Publisher field should not be used when the Publisher is also the Creator, should this be a mandatory field? There may be some instances with just this example, and there's no instruction of how to handle this scenario.
You might want to add some wording to include the idea of a production company which seems even more relevant in broadcasting than Publisher
You state 'some might not have a publisher', then label the field as mandatory.


 

3.03.2.1 Element Publisher Refinements Rating

Mean = 4.14, Standard Deviation = 1.12

Response Count Percent
(1) 1 2 5.7%
(2) 2 1 2.9%
(3) 3 4 11.4%
(4) 4 11 31.4%
(5) 5 17 48.6%

"Comment" responses:


Again, a central authority name file would be useful.
I question the wisdom of suddenly referring people to AACR2 for help in encoding this element. I thought the point here was a scheme that DIDN'T require a professional cataloger....
see comment under Creator
Database of publishers would make entry more consistent and reliable


 

3.03.2.2 Element Publisher Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


See above.


 

3.04.1.1 Element Publisher.Role Rating

Mean = 4.17, Standard Deviation = 1.04

Response Count Percent
(1) 1 0 0.0%
(2) 2 4 11.4%
(3) 3 4 11.4%
(4) 4 9 25.7%
(5) 5 18 51.4%

"Comment" responses:


I think this element is important only to clarify moving image publishers.
Again, using publisher instead of producer is confusing
Copyright holder identification is critical.
See remark for Creator


 

3.04.1.2 Element Publisher.Role Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


Not enough distinction between "source" and "Distributor within publisher role"
You might want to add wording to describe production companies, which are important in all types of broadcasting and is a term used more often than publisher


 

3.04.2.1 Element Publisher.Role Refinements Rating

Mean = 4.08, Standard Deviation = 1.11

Response Count Percent
(1) 1 1 2.8%
(2) 2 3 8.3%
(3) 3 5 13.9%
(4) 4 10 27.8%
(5) 5 17 47.2%

"Comment" responses:


I don't think Copyright Holder belongs here. It should be its own element.


 

3.04.2.2 Element Publisher.Role Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:


This may come up elsewhere, but it would be useful to have identifier(s) associated with intellectual property information, e.g., copyright registration num, or identifier to track owner. Your element 10 might work, but needs enforcement.
See PUBLISHER comment.


 

3.05.1.1 Element Contributor Rating

Mean = 3.75, Standard Deviation = 1.11

Response Count Percent
(1) 1 1 2.8%
(2) 2 4 11.1%
(3) 3 9 25.0%
(4) 4 11 30.6%
(5) 5 11 30.6%

"Comment" responses:


I think it is correct to separate contributor from creator, but rules for doing so must be defined for each media.
I think it is very difficult to assign primary creation importance to one entity over another. Having both creator and contributor is redundant. I'd suggest keeping Creator for mapping to MPEG7 and LOM
The distinction between 'creator' and 'contributor' is a historical artifact deriving from libraries' need to have a single main entry for a work. You don't need to do this in the digital age, and it's particularly nonsensical for A/V materials.
No harm, see comment on Contributor.Role in this IP section.
But it needs to be considered whether this element and Creator should be combined, because of the difficulty in assessing which is primary and which secondary.
Is "contributor" the right term for relations like: "thanks to", advisor/mentor, funder. Do they fit here?


 

3.05.1.2 Element Contributor Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


I understand it but the language is a little sloppy.
A bit confusing; I was not sure where the lines were drawn between creator and contributor; specific examples would help


 

3.05.2.1 Element Contributor Refinements Rating

Mean = 3.86, Standard Deviation = 1.20

Response Count Percent
(1) 1 2 5.6%
(2) 2 3 8.3%
(3) 3 7 19.4%
(4) 4 10 27.8%
(5) 5 14 38.9%

"Comment" responses:


See notes on Creator. I also think we need a Contributor ID to connect to a VCard metadata domain.
see comments under creator
Titles or roles should be pre-defined from an authority list.


 

3.05.2.2 Element Contributor Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 35 100.0%

"Comment" responses:



 

3.06.1.1 Element Contributor.Role Rating

Mean = 3.86, Standard Deviation = 1.13

Response Count Percent
(1) 1 2 5.6%
(2) 2 1 2.8%
(3) 3 10 27.8%
(4) 4 10 27.8%
(5) 5 13 36.1%

"Comment" responses:


Even more than Creator role, Contributor roles need to be accurate because it may include anyone from a cameraperson to an intern.
Should have "Instructional Designer" added
possibly confusing with the category that includes "actor"
The enumerated list is pretty much focused on "creative" aspects and (indirectly?) (not so much?) on copyright-ownership aspects.
I rated the contributor as medium, but if you've already got that, then knowing their role could be somewhat important.
Pick list needs to be shortened
see comments under contributor
See comment for Creator. DCMI has defined a set of MARC relator codes as refinements of Contributor
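The MARC relator approach mentioned above replaces a home-grown role pick list with an established code list. A minimal sketch of how a system could map role labels to relator codes; only a handful of real codes are shown here, and the mapping table itself is illustrative (the full relator list is maintained by the Library of Congress):

```python
# Illustrative mapping from a few Creator/Contributor role labels to MARC
# relator codes, in the spirit of the DCMI refinements the respondent cites.
# Partial, for demonstration only; consult the LC relator list for the
# complete set.

MARC_RELATORS = {
    "Director": "drt",
    "Producer": "pro",
    "Composer": "cmp",
    "Actor": "act",
    "Editor": "edt",
}

def relator_code(role):
    """Return the MARC relator code for a role label, or None if unmapped."""
    return MARC_RELATORS.get(role)

print(relator_code("Director"))  # drt
```

Returning `None` for unmapped labels (rather than inventing a code) keeps local role lists usable while flagging which entries would not interoperate.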


 

3.06.1.2 Element Contributor.Role Confusing

Mean = 1.94, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.6%
(2) No 34 94.4%

"Comment" responses:


Devotion to Dublin Core put you into the "contributor" mood here. But I guess I would be sure to offer the guidance (re: IP) to "repeat publisher" when needed.
This is more clear than the previous definition and seems to answer questions I had; in general, though, the main heading definitions are a bit broad
Again, people might have some trouble determining the difference between a Creator and a Contributor. Where do they draw the line? When is a Cinematographer a Creator and when are they a Contributor?


 

3.06.2.1 Element Contributor.Role Refinements Rating

Mean = 3.83, Standard Deviation = 1.25

Response Count Percent
(1) 1 2 5.6%
(2) 2 4 11.1%
(3) 3 7 19.4%
(4) 4 8 22.2%
(5) 5 15 41.7%

"Comment" responses:


Definitions seem crucial here. Also some policy might be established on what text string is used for these roles: official job titles? on-screen credit? We also must recognize that the list is too long for a drop-down.
The list is quite long, however
I would drop "Contributor" and merge the roles for contributor into Creator. I'd use MARC Relator, which is a very complete list. You may be using that already?
Not so relevant to IP.
see comments under Creator
Element should be required if contributor element is populated
I think stations will refine this list, mostly making it shorter so that it reflects the roles they have in house.


 

3.06.2.2 Element Contributor.Role Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:



 

3.07.1.1 Element Rights.Usage Rating

Mean = 4.89, Standard Deviation = 0.40

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 1 2.8%
(4) 4 2 5.6%
(5) 5 33 91.7%

"Comment" responses:


This element should be mandatory.
Extremely important to define how / when / where content may be used
Boy, we sure do need a "rights description language," don't we. But don't you guys invent it. Let's hope MPEG does a good job!
Element should be required
It seems like it might make sense to break out some optional sub-elements here. Ex: Exerptable, public performance, etc. Also, maybe ability to link to a particular license of particular format? (ex Creative Commons, etc)


 

3.07.1.2 Element Rights.Usage Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 34 97.1%

"Comment" responses:


Some caution about the importance of consistent language might be helpful


 

3.07.2.1 Element Rights.Usage Refinements Rating

Mean = 3.76, Standard Deviation = 1.37

Response Count Percent
(1) 1 4 11.8%
(2) 2 1 2.9%
(3) 3 9 26.5%
(4) 4 5 14.7%
(5) 5 15 44.1%

"Comment" responses:


Though this information should be required, how it is entered is still open for debate. My experience is that a simple drop-down list is too limited. In any case, a Rights Notes element should be added.
I strongly recommend creating a standardized value list to enable interoperability across stations and standard information for end users via public portals. See if Creative Commons terminology can be adapted.
Rights are too important to be left to free-form text entry
Free text: you are doing the best you can. But a suggested (not enforced) vocabulary might be warranted.
From a software standpoint (in which the software checks to make sure that a program is in rights when scheduled), a more formal set of rules than free-form text would be useful.
Need to have a pick list or rules (for example, "broadcast" -- does that mean it's allowed or not?)
I wonder about having this be free form, it is a big jump from what we are 90% used to.
n/a
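Several comments above ask for a "suggested (not enforced)" vocabulary for Rights.Usage so that software can act on the value while free text remains legal. A minimal sketch of that middle ground; the term list here is hypothetical, not part of PBCore:

```python
# Illustrative sketch of a suggested-but-not-enforced Rights.Usage
# vocabulary: values on the pick list are machine-actionable, anything
# else is kept verbatim but flagged as free text for human review.
# The terms below are invented for demonstration.

SUGGESTED_RIGHTS_TERMS = {
    "broadcast permitted",
    "broadcast not permitted",
    "web streaming permitted",
    "educational use only",
}

def classify_rights_usage(value):
    """Return ('controlled', value) if the value is a known term,
    else ('free-text', value) so software can treat it cautiously."""
    if value.strip().lower() in SUGGESTED_RIGHTS_TERMS:
        return ("controlled", value)
    return ("free-text", value)

print(classify_rights_usage("Broadcast permitted"))
```

A scheduling system could then answer the "does 'broadcast' mean allowed or not?" question for controlled values, and route free-text values to a person.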


 

3.07.2.2 Element Rights.Usage Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


n/a


 

3.08.1.1 Element Rights.Reproduction Rating

Mean = 4.50, Standard Deviation = 0.81

Response Count Percent
(1) 1 1 2.8%
(2) 2 0 0.0%
(3) 3 1 2.8%
(4) 4 12 33.3%
(5) 5 22 61.1%

"Comment" responses:


This is too close to Rights Usage. Or perhaps this should be called Rights Notes.
Is this intended to include broadcast?
Given the state of description language for rights, having another free text field will likely be useful. But a little hard to forecast how it will play out in real life.
Any material exported for distribution should have this as a requirement.
Element should be required.
Maybe add optional sub-elements and a way to link to a specific license of specific format?


 

3.08.1.2 Element Rights.Reproduction Confusing

Mean = 1.92, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.3%
(2) No 33 91.7%

"Comment" responses:


How does this element differ significantly from the Rights.Usage element? I think there needs to be greater emphasis on the distinction between use (as in what can you do with this item) and reproduction (making copies) Can stations choose to put


 

3.08.2.1 Element Rights.Reproduction Refinements Rating

Mean = 3.78, Standard Deviation = 1.27

Response Count Percent
(1) 1 2 5.6%
(2) 2 4 11.1%
(3) 3 9 25.0%
(4) 4 6 16.7%
(5) 5 15 41.7%

"Comment" responses:


See comment for rights.usage. I'd standardize a value list.
Free form text is only useful to human readers
both in this element. Consistency in language should be emphasized.


 

3.08.2.2 Element Rights.Reproduction Refinements Confusing

Mean = 1.91, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.6%
(2) No 32 91.4%

"Comment" responses:


I think the whole Rights Elements domain needs to be re-considered and made clear. Each asset has a group of usage rights and each usage has terms and restrictions.


 

3.09.1.1 Element Rights.Access Rating

Mean = 4.42, Standard Deviation = 0.84

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.8%
(3) 3 5 13.9%
(4) 4 8 22.2%
(5) 5 22 61.1%

"Comment" responses:


This is high in concept but more complicated than can be handled by a single Element. Access is not an on/off switch. Access should be associated with Groups. Again, we may want to combine the simple drop-down list with a free text notes field.
It's ok to have a straightforward field Rights.Access, but the information has been included in Rights.Usage and Rights.Reproduction.
Very useful as a flag.
Could there be a "conditional access" if triggered or would this be set-up at another level?
Maybe add optional sub-elements and a way to link to a specific license of specific format?


 

3.09.1.2 Element Rights.Access Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


I do take issue with the proposed definition. Access really is NOT about rights; it is about availability. I think this element works better as Holdings.Status.
I think the key with this one is that this is the field used for mining. Clarifying or highlighting the purpose of this element might be helpful, because otherwise people are going to tend to want to lump all 3 rights elements into one field.


 

3.09.2.1 Element Rights.Access Refinements Rating

Mean = 4.00, Standard Deviation = 1.29

Response Count Percent
(1) 1 2 5.6%
(2) 2 4 11.1%
(3) 3 5 13.9%
(4) 4 6 16.7%
(5) 5 19 52.8%

"Comment" responses:


See 3.09.1.1 The suggested values are okay for this element, but this element needs to be expanded to include additional fields or sub-elements.
Same comment as previous data elements. Create standardized values.
May want to restrict access by group.


 

3.09.2.2 Element Rights.Access Refinements Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.9%
(2) No 32 94.1%

"Comment" responses:


Since there are only two options "Open Access" and "Restricted Access", what about "Apply multiple times, as needed"?
A binary distinction between wide-open and restricted isn't going to be useful to your users or you. More thought needs to go into how more useful access restrictions could be specified. In the meantime, make this free text.
A definition of open/restricted access may have been helpful.
If you apply it both ways ... what will your search engines do?


 

4.0 Instantiation/Format Metadata User - Yes or No

Mean = 1.24, Standard Deviation = 0.43

Response Count Percent
(1) Yes 37 75.5%
(2) No 12 24.5%

4.01.1.1 Element Date.Created Rating

Mean = 4.61, Standard Deviation = 0.69

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 4 11.1%
(4) 4 6 16.7%
(5) 5 26 72.2%

"Comment" responses:


It is important to retain the original creation date of an asset, even when that asset is duplicated or migrated. So it should be absolutely clear that creation date marks the birthdate of an asset.
Should it have subsets of "date originally created" and "date updated" or "date modified"?
Excellent
most relevant for news, content that is timely
essential element


 

4.01.1.2 Element Date.Created Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:


Excellent description.


 

4.01.2.1 Element Date.Created Refinements Rating

Mean = 4.68, Standard Deviation = 0.71

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 5 13.5%
(4) 4 2 5.4%
(5) 5 30 81.1%

"Comment" responses:


Designating the format of a date is crucial for migrating this data to various database systems.
Well written description and usage notes
ISO 8601 is a good idea.
excellent
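Since the refinement comments above endorse ISO 8601 for Date.Created, a minimal validation sketch may be useful: it accepts the common ISO 8601 calendar-date forms at year, month, or day precision (YYYY, YYYY-MM, YYYY-MM-DD) and rejects everything else, which is what keeps dates sortable and portable across database systems.

```python
# Minimal ISO 8601 calendar-date check for Date.Created values.
# Accepts YYYY, YYYY-MM, and YYYY-MM-DD; does not attempt full calendar
# validation (e.g. it will accept Feb 31), which a real system would add.
import re

ISO_8601_DATE = re.compile(
    r"^\d{4}(-(0[1-9]|1[0-2])(-(0[1-9]|[12]\d|3[01]))?)?$"
)

def is_iso8601_date(value):
    """True if the value is an ISO 8601 date at year/month/day precision."""
    return bool(ISO_8601_DATE.match(value))

print(is_iso8601_date("2004-06-15"))   # True
print(is_iso8601_date("June 15, 2004"))  # False
```

Because partial precision is part of the standard, a record whose exact day is unknown can still carry a valid, sortable "2004" or "2004-06".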


 

4.01.2.2 Element Date.Created Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:



 

4.02.1.1 Element Date.Issued Rating

Mean = 4.27, Standard Deviation = 1.04

Response Count Percent
(1) 1 1 2.7%
(2) 2 1 2.7%
(3) 3 7 18.9%
(4) 4 6 16.2%
(5) 5 22 59.5%

"Comment" responses:


Conceptually this is an important element but it really belongs to a LifeCycle element that includes a repeatable Date value.
Would we ever want to be able to distinguish between "date issued" and "date broadcast" ?
why not "date released?" Issued is ambiguous.
We have a "debut date" that could match to this, but our "debut date" may be different from the date of issuance if the "debut date" on our system is not the first date of issuance. Should this be a repeatable element with issuance type specified?


 

4.02.1.2 Element Date.Issued Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


It was not 100% clear that the definition meant the first date issued anywhere; this can be confusing when comparing release and air dates, previews, etc. in different geographical locales


 

4.02.2.1 Element Date.Issued Refinements Rating

Mean = 4.30, Standard Deviation = 1.10

Response Count Percent
(1) 1 1 2.7%
(2) 2 2 5.4%
(3) 3 6 16.2%
(4) 4 4 10.8%
(5) 5 24 64.9%

"Comment" responses:


I like the way MPEG-7 handles this--with date and country of issue combined. The date of issue often varies by country
ISO 8601 always a good idea.


 

4.02.2.2 Element Date.Issued Refinements Confusing

Mean = 1.92, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.3%
(2) No 33 91.7%

"Comment" responses:


I am not sure what format you are recommending if you allow a text string and partial dates.
Language is too redundant, like looking in a dictionary for a definition just to be told "like ..." and then the word again.


 

4.03.1.1 Element Date.AvailableStart Rating

Mean = 4.25, Standard Deviation = 0.97

Response Count Percent
(1) 1 1 2.8%
(2) 2 0 0.0%
(3) 3 7 19.4%
(4) 4 9 25.0%
(5) 5 19 52.8%

"Comment" responses:


I don't think this element needs to be part of the Dictionary.
Comment from an archivist; may be more important in actual broadcast application.
O.K. now I see the answer to a question on a previous page.


 

4.03.1.2 Element Date.AvailableStart Confusing

Mean = 1.89, Standard Deviation = 0.32

Response Count Percent
(1) Yes 4 11.4%
(2) No 31 88.6%

"Comment" responses:


How is this different from date.released?
I think this is too close to Date.Issued to be used effectively.
Need more description between this and Date.Issued
How does this element differ from Date Issued?


 

4.03.2.1 Element Date.AvailableStart Refinements Rating

Mean = 4.47, Standard Deviation = 0.81

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.8%
(3) 3 4 11.1%
(4) 4 8 22.2%
(5) 5 23 63.9%

"Comment" responses:


Again, you should consider pairing this with city, state, country of release, even though this is not DC and would have to be stripped off.
ISO 8601, good.


 

4.03.2.2 Element Date.AvailableStart Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 34 97.1%

"Comment" responses:


Unclear from examples what format or standard is expected.


 

4.04.1.1 Element Date.AvailableEnd Rating

Mean = 4.41, Standard Deviation = 0.98

Response Count Percent
(1) 1 1 2.7%
(2) 2 0 0.0%
(3) 3 7 18.9%
(4) 4 4 10.8%
(5) 5 25 67.6%

"Comment" responses:


This and the previous element probably have more to do with asset management than description. Description might better relate to less ephemeral lifecycle dates.
Date available for broadcast sounds a lot like rights
For a long-term archivist, medium; for a current-broadcast use, may be high.


 

4.04.1.2 Element Date.AvailableEnd Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:



 

4.04.2.1 Element Date.AvailableEnd Refinements Rating

Mean = 4.41, Standard Deviation = 0.90

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.7%
(3) 3 7 18.9%
(4) 4 5 13.5%
(5) 5 24 64.9%

"Comment" responses:


Date available start and end are almost more about programming management than description. It is somewhat ephemeral because presumably, after broadcast, the resource may be available indefinitely for view on demand.
ISO 8601


 

4.04.2.2 Element Date.AvailableEnd Refinements Confusing

Mean = 1.94, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.6%
(2) No 34 94.4%

"Comment" responses:


See notes for Date.AvailableStart


 

4.05.1.1 Element Format.Physical Rating

Mean = 4.78, Standard Deviation = 0.67

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.7%
(3) 3 2 5.4%
(4) 4 1 2.7%
(5) 5 33 89.2%

"Comment" responses:


No problems with this Element.
How would unlisted/new formats be handled?
Video is missing too many HD formats. Images and text need more descriptive items, e.g. jpeg, tiff, wmf, etc.
I will note that the distinction between Format.Physical and Format.Digital is (as you know) "physical" versus "intangible" (or "file"). Digital content exists in BOTH realms.
Need to add D-10 (IMX) and D-11 (HDcam) and others to this pick list


 

4.05.1.2 Element Format.Physical Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:


"...occupies physical space dimensions" is a little goofy, though
Include HD formats
But I might add to the definition: " . . . includes digital content in tangible form, e.g., digiBeta tapes and Kodak PhotoCDs."
You should caution that this element describes the resource you are holding in your hand, because people could confuse it with the original source. Especially considering that you do not in your value lists have an option for a digital copy not stored on a tape. For example, how do you describe a resource that is on a hard drive (along with a bunch of different resources)? You don't even have a physical object to describe. But what if that's the only way it exists (or the only way the item you are describing exists, because the original tape was described in another record)?


 

4.05.2.1 Element Format.Physical Refinements Rating

Mean = 4.35, Standard Deviation = 0.98

Response Count Percent
(1) 1 0 0.0%
(2) 2 3 8.1%
(3) 3 4 10.8%
(4) 4 7 18.9%
(5) 5 23 62.2%

"Comment" responses:


Good breakdown by media type.
Perhaps too specific. I would like to see simple entries like Analog reel and Data CD/DVD.
I would really like to see uniformity between the AMIA-developed format list in the MIC directory and this list. I have printed your list and will do the comparison. The text needs to include transcripts, captions, etc.
There is a nasty conflation of carrier medium and format happening in this element. As a result, format.digital lists nothing but MIME types for computer files.
Will you have a procedure to extend the enumerated lists? As you know, these will need extension as time passes.
A very comprehensive list
There are already more A/V formats - is there a process for adding them?


 

4.05.2.2 Element Format.Physical Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


Here is where problems begin. The use of formats such as Beta SP argues that there may be a source format with many
Your use of source and relation elements doesn't provide adequate guidance for handling different physical instantiations for a work.
The break-out/controlled vocabs here need rethinking. D3 specifies a tape format *AND* a way of storing digital data on that tape. Why not have format.physical be 1/2" tape and format.digital be D3 in that case?
The list for text needs some work and especially definitions. It is mixing various concepts.
Some discussion about how this element is used with Format.Digital would be helpful.


 

4.06.1.1 Element Format.Digital Rating

Mean = 4.70, Standard Deviation = 0.66

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.7%
(3) 3 1 2.7%
(4) 4 6 16.2%
(5) 5 29 78.4%

"Comment" responses:


My only issue with this is the Element name; call it what it is: Format.MimeType
Unable to evaluate specifics but the element is critical
ignore previous comment about formats...they were captured here.
See discussion in format.physical
I like the break-up between application, text, image, etc.
element is important to properly display content on Web
Please add video AAF and video MXF and video GXF to pick list
Is there a way to specify that a particular digital format is "preferred" or "primary" or "for broadcast"? We distribute resources that have "audition" versions and/or higher/lower quality versions.


 

4.06.1.2 Element Format.Digital Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:


Someday this list will be unmanageable


 

4.06.2.1 Element Format.Digital Refinements Rating

Mean = 4.19, Standard Deviation = 1.12

Response Count Percent
(1) 1 2 5.6%
(2) 2 1 2.8%
(3) 3 4 11.1%
(4) 4 10 27.8%
(5) 5 19 52.8%

"Comment" responses:


will always be adding new formats
Needs to be edited but generally okay.
Way too many choices in a flat list, even for professionals to use.
I am not sure that MIME types are the best way to go, although there is not much else out there. But look at the proposed GDFR (http://hul.harvard.edu/formatregistry/). For example, you may need to know the profile/level for MPEG-2 or -4 files.
It should be possible to identify what scheme the value came from.
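Respondents note that Format.Digital values are MIME types. A small sketch, using only Python's standard mimetypes module (the helper name is hypothetical), of how a MIME type splits into the top-level type and subtype that drive the application/text/image groupings discussed above:

```python
import mimetypes

def format_digital(filename: str) -> tuple[str, str]:
    """Guess a MIME type for a filename and split it into the
    top-level type (audio, video, image, text, application, ...)
    and the subtype, as a Format.Digital value might be grouped."""
    mime, _encoding = mimetypes.guess_type(filename)
    if mime is None:
        raise ValueError(f"no MIME type known for {filename!r}")
    toplevel, subtype = mime.split("/", 1)
    return toplevel, subtype

assert format_digital("promo.mp3") == ("audio", "mpeg")
assert format_digital("still.jpg") == ("image", "jpeg")
```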


 

4.06.2.2 Element Format.Digital Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:



 

4.07.1.1 Element Format.Identifier Rating

Mean = 4.46, Standard Deviation = 1.02

Response Count Percent
(1) 1 1 2.7%
(2) 2 1 2.7%
(3) 3 5 13.5%
(4) 4 3 8.1%
(5) 5 27 73.0%

"Comment" responses:


This may be useful for Collection-level but redundant for asset-level.
Again, this is confused with location
applicable to physical librarians only


 

4.07.1.2 Element Format.Identifier Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.7%
(2) No 33 94.3%

"Comment" responses:


Is there any overlap between "Format.Identifier" and "Identifier"?
It is not clear how this differs from Identifier or location.
Add UMID, NOLA code to your usage guidelines?


 

4.07.2.1 Element Format.Identifier Refinements Rating

Mean = 3.94, Standard Deviation = 1.03

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.7%
(3) 3 13 37.1%
(4) 4 5 14.3%
(5) 5 15 42.9%

"Comment" responses:


More emphasis on uniquely identifying the resource, please. Kill the "you'll find it on shelf C" example.
Examples should include tape labels and clip-ids.
For internal control, this ought to work, I guess. But would someone using the data remotely have to put together the creator or publisher name with this ID to actually find the item?
n/a


 

4.07.2.2 Element Format.Identifier Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 33 97.1%

"Comment" responses:


How do you create a bar code from a text string?
n/a


 

4.08.1.1 Element Format.FileSize Rating

Mean = 4.49, Standard Deviation = 0.73

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 5 13.5%
(4) 4 9 24.3%
(5) 5 23 62.2%

"Comment" responses:


Useful for comparing integrity of digital file transfer, but perhaps Dictionary should break out metadata that applies to digital files only.
We need to be sure we can encompass multiple formats per piece.
Question: in the digital realm, you may have parts or segments of something. Will you size them elsewhere, or in repeated elements? *** Come to think of it, how does this metadata control segments, what digital librarians call structural metadata?
Very important
helpful for online applications
May want to add terabytes (TB) to the list as we move to 1080p60 production formats in the next 5 years.
It is probably necessary to have this information, but not clear whether this relates to an intellectual or physical entity (a program could reside in more than one file).


 

4.08.1.2 Element Format.FileSize Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 35 100.0%

"Comment" responses:



 

4.08.2.1 Element Format.FileSize Refinements Rating

Mean = 4.06, Standard Deviation = 1.04

Response Count Percent
(1) 1 0 0.0%
(2) 2 4 11.1%
(3) 3 6 16.7%
(4) 4 10 27.8%
(5) 5 16 44.4%

"Comment" responses:


Bytes is too small a measurement; should be in Mbytes with preceding zeros if small, e.g., 0.020 Mbytes for 20,000 bytes.
I think we need to look at how software captures this info before we impose the "byte" standard.
This is listed as mandatory, yet bytes may not apply to all storage formats
Why limit to bytes? Awkward and not commonly done
n/a
In addition to the converter pointers, you should include the conversion formula
When dealing with digital video, it doesn't make sense to record size in bytes. A video file is always going to be in MB.
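Several comments above ask for a conversion formula rather than raw bytes. One common approach, sketched here under the assumption of decimal units (1 kB = 1,000 bytes; binary units would divide by 1,024 instead), is to store bytes canonically and convert only for display:

```python
def filesize_display(size_bytes: int) -> str:
    """Render a Format.FileSize value (stored canonically in bytes)
    in the largest convenient decimal unit: divide by 1,000 per step."""
    size = float(size_bytes)
    for unit in ("bytes", "kB", "MB", "GB", "TB"):
        if size < 1000 or unit == "TB":
            return f"{size:g} {unit}"
        size /= 1000

assert filesize_display(20000) == "20 kB"
assert filesize_display(1500000) == "1.5 MB"
```

Storing the canonical value in bytes keeps the element machine-comparable; the unit conversion is a presentation concern.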


 

4.08.2.2 Element Format.FileSize Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 35 100.0%

"Comment" responses:


n/a


 

4.09.1.1 Element Format.AudioBitDepth Rating

Mean = 4.30, Standard Deviation = 0.94

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.4%
(3) 3 6 16.2%
(4) 4 8 21.6%
(5) 5 21 56.8%

"Comment" responses:


I have no real issue with this element except that we need to consider where this data comes from. If it is embedded in the content file, is it necessary to maintain it as metadata? Is it useful for measuring data integrity?
No expertise, can't assess, "high" is my default to not skew responses down
This assumes linear PCM audio, no? Will there be a need for more elements for compressed audio data, such things as the use of AAC compression in an MPEG-4 file?
You may want to add 1 bit delta sigma Direct Stream Digital uncompressed format
same comment as filesize


 

4.09.1.2 Element Format.AudioBitDepth Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 35 100.0%

"Comment" responses:



 

4.09.2.1 Element Format.AudioBitDepth Refinements Rating

Mean = 4.32, Standard Deviation = 0.88

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.7%
(3) 3 7 18.9%
(4) 4 8 21.6%
(5) 5 21 56.8%

"Comment" responses:


I would make sure this corresponds to MPEG-7.
I appreciate the effort to use a controlled vocabulary, but the first thing that's going to happen is someone's going to come along with 18-bit sampling. Just make them use an integer value here.
Should not be mandatory.
Fine for linear PCM.


 

4.09.2.2 Element Format.AudioBitDepth Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 34 100.0%

"Comment" responses:



 

4.10.1.1 Element Format.AudioChannelConfiguration Rating

Mean = 4.39, Standard Deviation = 0.93

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 5 13.9%
(4) 4 6 16.7%
(5) 5 23 63.9%

"Comment" responses:


no expertise
Umm. Here I wonder if you are not conflating "sound field" with "channel mapping," i.e., the difference between stereo and English/French, with a need to know that English is on track 1 and French on 2. More elements may be required.


 

4.10.1.2 Element Format.AudioChannelConfiguration Confusing

Mean = 1.92, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.3%
(2) No 33 91.7%

"Comment" responses:


This element seems to describe two fields, number and configuration. I also take issue with Mandatory obligation; pinpointing these values may be difficult (mono to simulated stereo, stereo to mono with added narration or DVS)
I would rather say "less sophisticated than may be needed for the full range of audio content types."
give an example or more detailed definition


 

4.10.2.1 Element Format.AudioChannelConfiguration Refinements Rating

Mean = 3.81, Standard Deviation = 1.19

Response Count Percent
(1) 1 2 5.6%
(2) 2 2 5.6%
(3) 3 11 30.6%
(4) 4 7 19.4%
(5) 5 14 38.9%

"Comment" responses:


I would use MPEG-7 terminology
Should not be mandatory for non-audio assets; a suggested list of values would be good for matching
Limited usefulness, see preceding comments.
n/a
You've likely discussed this to death, but would a pick list (covering 90% of the choices) with an optional free-form entry be easier to use?
If useful or mandatory, this field needs to be more formalized.
Free-form text should not be allowed. A picklist of values categorized by analog or digital needs to be developed.


 

4.10.2.2 Element Format.AudioChannelConfiguration Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 34 100.0%

"Comment" responses:


n/a


 

4.11.1.1 Element Format.AudioDataRate Rating

Mean = 4.22, Standard Deviation = 0.92

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.4%
(3) 3 6 16.2%
(4) 4 11 29.7%
(5) 5 18 48.6%

"Comment" responses:


should be maximum rate, since some files are encoded at high rates but are streamed at lower rates depending on client decoder and connection speed
no expertise
Ummm, again. Is it useful to distinguish variable from fixed bit rates? If not, should this definition indicate "provide average and maximum in two element repetitions"? But then, no qualifier to tell which is average and which is max. Sigh.
You may want to add Direct Stream Digital
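For uncompressed linear PCM, which one respondent notes these elements assume, the data rate is simply the product of the sampling rate, bit depth, and channel count. A worked sketch (function name hypothetical):

```python
def pcm_data_rate_kbps(sampling_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Uncompressed linear PCM data rate in kilobits per second:
    samples/second x bits/sample x channels, divided by 1,000."""
    return sampling_rate_hz * bit_depth * channels / 1000

# CD audio (44,100 Hz, 16-bit, stereo) works out to 1411.2 kbps.
# Professional 48 kHz / 24-bit stereo:
assert pcm_data_rate_kbps(48000, 24, 2) == 2304.0
```

For compressed audio the relationship no longer holds, which is why the elements must be recorded independently in that case.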


 

4.11.1.2 Element Format.AudioDataRate Confusing

Mean = 1.92, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.3%
(2) No 33 91.7%

"Comment" responses:


confused encode rate with delivery rate
Just too simplified, I fear.
In these cases of the format elements where PBC has designated them as Mandatory, could you explain why they are mandatory? More Mandatory than Title? I would think that a title would be more mandatory than the number of kilobits/second any day.


 

4.11.2.1 Element Format.AudioDataRate Refinements Rating

Mean = 4.11, Standard Deviation = 0.96

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.7%
(3) 3 8 22.9%
(4) 4 9 25.7%
(5) 5 16 45.7%

"Comment" responses:


Should you mention the possibility of variable bit rate encoding in the discussion of this element?
Should only be mandatory for certain types (formats) of assets; need better encoding to facilitate searching
I actually find these physical element descriptions very succinct and intelligible, even though this is not something that I deal with normally.
n/a, but may want controlled vocabulary
need to be more formalized


 

4.11.2.2 Element Format.AudioDataRate Refinements Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.7%
(2) No 33 94.3%

"Comment" responses:


This may not be known, for existing assets. I'd make this recommended rather than mandatory
Consider if more is needed here.
n/a


 

4.12.1.1 Element Format.AudioSamplingRate Rating

Mean = 4.31, Standard Deviation = 0.95

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 6 16.7%
(4) 4 7 19.4%
(5) 5 21 58.3%

"Comment" responses:


no expertise
will soon be more important
Especially useful for linear PCM data. Some other elements ("encoding" and "codec" and so on) MAY be equally useful for compressed audio.
You may want to add Direct Stream Digital and also 192 KHz sampling


 

4.12.1.2 Element Format.AudioSamplingRate Confusing

Mean = 1.89, Standard Deviation = 0.32

Response Count Percent
(1) Yes 4 11.4%
(2) No 31 88.6%

"Comment" responses:


If you mean "only linear PCM" perhaps you should say so.
Not exactly sure by the description how the sampling is performed.
Here, you do describe the importance of gathering this data, but again, more Mandatory than a title?


 

4.12.2.1 Element Format.AudioSamplingRate Refinements Rating

Mean = 4.19, Standard Deviation = 0.89

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 11 30.6%
(4) 4 7 19.4%
(5) 5 18 50.0%

"Comment" responses:


MPEG-7 uses Hz rather than KHz. I'd consider standardizing to MPEG-7. I don't know what SMPTE does
Mandatory again!
This applies elsewhere too: if you need to PARSE the data, will you wish the number-value is in one element and that the units of measure (kHz here) are handled as an attribute?
You should consider providing an optional free form entry for odd rates


 

4.12.2.2 Element Format.AudioSamplingRate Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 33 97.1%

"Comment" responses:


For existing assets, this may not be known. I'd make this element recommended rather than mandatory


 

4.13.1.1 Element Format.ImageAspectRatio Rating

Mean = 4.47, Standard Deviation = 0.88

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.8%
(3) 3 6 16.7%
(4) 4 4 11.1%
(5) 5 25 69.4%

"Comment" responses:


Doesn't apply to radio; you may wish to ignore my response to this.
We've had a LOT of trouble with this one internally, since it is not obvious on all masters. We also have a "Y" or "N" option for letterboxing or masking.
Note that SMPTE distinguishes between capture, presentation, and viewport aspect ratios. Do these distinctions have relevance for PBS?
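When frame dimensions are known and pixels are square, the aspect ratio one respondent says is "not obvious on all masters" can be derived by reducing width:height with the greatest common divisor; a sketch (the square-pixel assumption is noted in the code):

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a square-pixel frame size to its display aspect ratio.
    (Non-square-pixel formats such as 720x480 NTSC need a pixel-aspect
    correction before this reduction applies.)"""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

assert aspect_ratio(1920, 1080) == "16:9"
assert aspect_ratio(640, 480) == "4:3"
```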


 

4.13.1.2 Element Format.ImageAspectRatio Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.7%
(2) No 33 94.3%

"Comment" responses:


Do you need to make it clear (if you mean this) that what is here is the desired display aspect ratio, in case there are instances where the actual scan lines cover more ground? Compare to ImageFrameSize.


 

4.13.2.1 Element Format.ImageAspectRatio Refinements Rating

Mean = 4.42, Standard Deviation = 0.77

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.8%
(3) 3 3 8.3%
(4) 4 12 33.3%
(5) 5 20 55.6%

"Comment" responses:


There will be a need to indicate captioning in respect to aspect ratio.
Doesn't apply to radio; you may wish to ignore my response to this.
For existing assets, this may not be known. I'd make any data elements that might not be known recommended rather than mandatory.
Mandatory for all formats? Should include 14:9 letterbox


 

4.13.2.2 Element Format.ImageAspectRatio Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 34 97.1%

"Comment" responses:


Are you not including aspect ratios for older films, i.e., the old 1.33:1? You may need more variations


 

4.14.1.1 Element Format.ImageBitDepth Rating

Mean = 4.17, Standard Deviation = 0.97

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 8 22.2%
(4) 4 8 22.2%
(5) 5 18 50.0%

"Comment" responses:


Doesn't apply to radio; you may wish to ignore my response to this.
no expertise
Shouldn't be mandatory....what if this info is not available?
See next comment . . .
Usefulness seems to be limited to Computer Graphics images. Video images may use 8, 10, 12 or 14 bit quantization and different quantization for luminance and chrominance channels. For example 12 bit Y and 10 bit R-Y and B-Y
Don't understand how this varies from another element with a very similar definition, also in this Format category.


 

4.14.1.2 Element Format.ImageBitDepth Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.7%
(2) No 33 94.3%

"Comment" responses:


Help me out about video, when they say that ITU 601 can be 8 or 10 bit -- is that in terms of RGB (8 per channel?) or in some terms pertaining to the YCC color space? Your definition is written as if RGB, I think. I wish I understood this better.


 

4.14.2.1 Element Format.ImageBitDepth Refinements Rating

Mean = 4.31, Standard Deviation = 0.87

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.9%
(3) 3 6 17.1%
(4) 4 9 25.7%
(5) 5 19 54.3%

"Comment" responses:


Will need to be expanded. Also consider this as a number field rather than text string with "bit" understood.
Doesn't apply to radio; you may wish to ignore my response to this.
See comments about use of mandatory for existing assets where some values are unknown. Also, would this apply to analog assets?
Need to be *extremely* clear in definition what exactly we're referring to with bit depth. When user says '8 bit' is that per sample or per pixel (which may have more than one sample)? Spell this out exactly.
Mandatory?
I think you may need to sort out color space and channel considerations for video. This might do for stills, assuming that you don't ever have RGB-A images.


 

4.14.2.2 Element Format.ImageBitDepth Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 33 97.1%

"Comment" responses:


Seems redundant to previous entry


 

4.15.1.1 Element Format.ImageChannelConfiguration Rating

Mean = 4.08, Standard Deviation = 0.94

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 8 22.2%
(4) 4 11 30.6%
(5) 5 15 41.7%

"Comment" responses:


Doesn't apply to radio; you may wish to ignore my response to this.
No expertise
YIKES! Is this about "layers" (in the definition) or about "segments" (digital library "structural metadata"?) as suggested in the table in your background page?


 

4.15.1.2 Element Format.ImageChannelConfiguration Confusing

Mean = 1.86, Standard Deviation = 0.36

Response Count Percent
(1) Yes 5 14.3%
(2) No 30 85.7%

"Comment" responses:


I am really not clear here. Do you mean quality layers in the JPEG-2000 or MPEG-4 (scalable profile), or do you mean segments? The definition is not clear to me.
Somewhat vague-- image OR video channels. How do you know which?


 

4.15.2.1 Element Format.ImageChannelConfiguration Refinements Rating

Mean = 3.75, Standard Deviation = 1.11

Response Count Percent
(1) 1 0 0.0%
(2) 2 5 13.9%
(3) 3 12 33.3%
(4) 4 6 16.7%
(5) 5 13 36.1%

"Comment" responses:


needs more clarification
Doesn't apply to radio; you may wish to ignore my response to this.
Not particularly useful to public end users. Also, make recommended rather than mandatory
Mandatory?
Can't really tell until the definition gets sorted out.


 

4.15.2.2 Element Format.ImageChannelConfiguration Refinements Confusing

Mean = 1.76, Standard Deviation = 0.44

Response Count Percent
(1) Yes 8 24.2%
(2) No 25 75.8%

"Comment" responses:


Maybe I misunderstood. It seems that knowing WHICH channels is more important than how many. Maybe that's a different field/element.
Examples would be useful, such as "video and linear alpha channel"
The definition needs clarification or examples


 

4.16.1.1 Element Format.ImageColorCode Rating

Mean = 3.92, Standard Deviation = 1.16

Response Count Percent
(1) 1 1 2.8%
(2) 2 3 8.3%
(3) 3 10 27.8%
(4) 4 6 16.7%
(5) 5 16 44.4%

"Comment" responses:


The concept is high for still images but only somewhat useful for moving image materials. Also, the Element name is misleading. Captioning color should also have a place in the Dictionary.
Doesn't apply to radio; you may wish to ignore my response to this.
No expertise
Need to add "tinted" (for silent films) and "colorized"
Helpful to have a switch at this high level of generality.
Could we also specify "RGB" or "CMYK"? Would we be able to add this data to vector-based files stored on a media management system?


 

4.16.1.2 Element Format.ImageColorCode Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 34 97.1%

"Comment" responses:


This Element as defined is not a color "code" in the way Colorspace would be. I also question the repeatable nature of this Element; the list indicates that a 1:1 would suffice.


 

4.16.2.1 Element Format.ImageColorCode Refinements Rating

Mean = 3.97, Standard Deviation = 1.16

Response Count Percent
(1) 1 1 2.8%
(2) 2 3 8.3%
(3) 3 9 25.0%
(4) 4 6 16.7%
(5) 5 17 47.2%

"Comment" responses:


Though I have a problem with these values for a general or overall color description, I recommend the use of Colorspace with values such as RGB, Grayscale, B&W, CMYK, TCLR, YCC, etc. I also think this element is an argument for a Format notes element.
Doesn't apply to radio; you may wish to ignore my response to this.
I am glad you standardized your value list and I like the values you provide. For the digital world, the colorspace (RGB, YCmCr) would be really useful to know. Perhaps an additional data element?
Mandatory? Should include colorized.


 

4.16.2.2 Element Format.ImageColorCode Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 34 97.1%

"Comment" responses:


Are you excluding color elements of the past such as tinting and toning or sepia?
You need clarification of the difference between B&W w/ color and color w/ B&W.


 

4.17.1.1 Element Format.ImageDataRate Rating

Mean = 4.33, Standard Deviation = 0.99

Response Count Percent
(1) 1 0 0.0%
(2) 2 3 8.3%
(3) 3 4 11.1%
(4) 4 7 19.4%
(5) 5 22 61.1%

"Comment" responses:


Doesn't apply to radio; you may wish to ignore my response to this.
No expertise
How constant are these rates throughout programs?
But: did you mean for only compressed files, or for ITU 601 and so on? And what about fixed and variable bit rate encoding?
Sound like previous measurement


 

4.17.1.2 Element Format.ImageDataRate Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.7%
(2) No 33 94.3%

"Comment" responses:


Again, the issue is the max rate the image is coded for; some delivery systems may compress. Given that the "pipeline" may be different sizes, the issue is NOT the rate it is actually sent, but the max rate possible.
Need to answer preceding questions. Need to clarify how to express fixed and variable. Need to consider if, for PARSING, the units of measure are to be considered as an attribute.


 

4.17.2.1 Element Format.ImageDataRate Refinements Rating

Mean = 3.97, Standard Deviation = 1.10

Response Count Percent
(1) 1 0 0.0%
(2) 2 4 11.4%
(3) 3 9 25.7%
(4) 4 6 17.1%
(5) 5 16 45.7%

"Comment" responses:


Doesn't apply to radio; you may wish to ignore my response to this.
as with audio data rate, element definition should discuss how to handle variable bit rate files.
Mandatory?
See preceding comment about unit as attribute.
It's not clear to me why this is mandatory


 

4.17.2.2 Element Format.ImageDataRate Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 35 100.0%

"Comment" responses:


Can't make this mandatory if PBCore is used to also describe analog assets.


 

4.18.1.1 Element Format.ImageFrameRate Rating

Mean = 4.44, Standard Deviation = 0.84

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 2 5.6%
(4) 4 10 27.8%
(5) 5 22 61.1%

"Comment" responses:


Okay if intended for data integrity. Not sure of its relevance as metadata.
Doesn't apply to radio; you may wish to ignore my response to this.
No expertise
What is the difference between 30fps and 60 fields per second....is that not redundant?
But: what about frames versus fields? I don't see a way to sort out progressive 30 fps from interlaced 30 fps. Is that not important in an ATSC environment?
Please add 60 frames per second.


 

4.18.1.2 Element Format.ImageFrameRate Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 33 97.1%

"Comment" responses:


I'd like to see some comment on fields vs frames.


 

4.18.2.1 Element Format.ImageFrameRate Refinements Rating

Mean = 4.31, Standard Deviation = 0.83

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.9%
(3) 3 5 14.3%
(4) 4 11 31.4%
(5) 5 18 51.4%

"Comment" responses:


Doesn't apply to radio; you may wish to ignore my response to this.
I think 30 and 29.97 fps are really the same thing. I wouldn't include both. 7.5 is also a standard fps and should be added.
But do you want to record DF/NDF time code somewhere?
Mandatory?
If I understand the description, you have not captured all the display rates
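One comment above treats 30 fps and 29.97 fps as interchangeable; they are in fact distinct rates (NTSC video runs at exactly 30000/1001 fps), which is why a frame-rate pick list carries both. The difference, worked numerically:

```python
from fractions import Fraction

# NTSC video runs at exactly 30000/1001 frames per second,
# conventionally rounded to "29.97 fps" -- close to, but not, 30 fps.
ntsc = Fraction(30000, 1001)
assert abs(float(ntsc) - 29.97) < 0.001

# Over one hour the two rates drift apart by about 108 frames
# (roughly 3.6 seconds of material at 30 fps):
drift_frames = 30 * 3600 - float(ntsc) * 3600
assert 107 < drift_frames < 109
```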


 

4.18.2.2 Element Format.ImageFrameRate Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 34 100.0%

"Comment" responses:


See previous data elements for discussion of mandatory
Again, are you excluding silent films that were projected at 16 fps?


 

4.19.1.1 Element Format.ImageFrameSize Rating

Mean = 4.31, Standard Deviation = 0.98

Response Count Percent
(1) 1 0 0.0%
(2) 2 3 8.3%
(3) 3 4 11.1%
(4) 4 8 22.2%
(5) 5 21 58.3%

"Comment" responses:


I don't know. Like other technical metadata elements, this is captured by machine, not people. Does this belong in a separate class within the Dictionary?
Doesn't apply to radio; you may wish to ignore my response to this.
No expertise
I wonder if there are too many factors crammed into a single element. Here there is a mention of progressive vs interlaced, but no payoff in the enumerated list. There is also the question of captured frame size versus desired display aspect ratio.
Some static images could have other frame sizes. Are you referring only to broadcast and not to elements such as jpegs?


 

4.19.1.2 Element Format.ImageFrameSize Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 3.0%
(2) No 32 97.0%

"Comment" responses:


I would like to see a little more sorting out of some of these factors. If you leave "i" and "p" in the same definition with rows/samples/pixels, perhaps a bit more can be said.


 

4.19.2.1 Element Format.ImageFrameSize Refinements Rating

Mean = 4.28, Standard Deviation = 0.94

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 6 16.7%
(4) 4 8 22.2%
(5) 5 20 55.6%

"Comment" responses:


I don't know.
Doesn't apply to radio; you may wish to ignore my response to this.
How does your list compare with SMPTE and MPEG-7?
If you need to PARSE this data, it will be difficult with all those example types under the same element.
I strongly disagree with the use of the term 'resolution' - you have provided a selection of spatial sampling frames, but resolution is a perception


 

4.19.2.2 Element Format.ImageFrameSize Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 34 100.0%

"Comment" responses:



 

4.20.1.1 Element Format.TimeStart Rating

Mean = 4.58, Standard Deviation = 0.81

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.8%
(3) 3 4 11.1%
(4) 4 4 11.1%
(5) 5 27 75.0%

"Comment" responses:


I don't know. I am bothered by the mandatory obligation; perhaps a default of zero is acceptable.
IS there room to comment on anomalies ... for example, would there ever be video clips with broken time code (gaps, starts/stops)?
We need to know what the clock says at program start. But this definition hints at segment information (structural metadata). Do you mean that somewhere I would learn the titles for segments 1,2, and 3, and look at this repeating element for times?
What if it is a production element and not on video?


 

4.20.1.2 Element Format.TimeStart Confusing

Mean = 1.94, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.6%
(2) No 34 94.4%

"Comment" responses:


As metadata, it's confusing because it seems like a Time_in attribute, which is useful for breaking moving images into shots for logging.
Not clear how to understand "segment" in this context; examples do not seem to help with this.


 

4.20.2.1 Element Format.TimeStart Refinements Rating

Mean = 4.41, Standard Deviation = 0.83

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.7%
(3) 3 5 13.5%
(4) 4 9 24.3%
(5) 5 22 59.5%

"Comment" responses:


Not sure why two standards are being recommended. Why not just SMPTE?
Time code types are fine.
I understand the need for flexibility in some instances, but that does get a little bit away from the standardization of some elements.
You have not defined where the program starts (1st frame of video or last frame of black) - perhaps that was intended


 

4.20.2.2 Element Format.TimeStart Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 36 100.0%

"Comment" responses:



 

4.21.1.1 Element Format.Duration Rating

Mean = 4.81, Standard Deviation = 0.58

Response Count Percent
(1) 1 0 0.0%
(2) 2 1 2.8%
(3) 3 0 0.0%
(4) 4 4 11.1%
(5) 5 31 86.1%

"Comment" responses:


often requested by end users for setting recording times; also useful in writing program applications that calculate time to view or when the next item starts


 

4.21.1.2 Element Format.Duration Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 34 97.1%

"Comment" responses:


If I have a program that starts with the timecode at "one hour" and runs for one hour, do I give the closing time code (02:00:00), or subtract to provide 01:00:00?
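The question above can be answered with simple timecode arithmetic: duration is end minus start, not the closing timecode. A minimal sketch, assuming 30 fps non-drop-frame SMPTE timecode (the HH:MM:SS:FF layout here is illustrative, not a PBCore-mandated encoding):

```python
# Sketch: duration as end timecode minus start timecode.
# Assumes 30 fps non-drop-frame timecode; illustrative only.
FPS = 30

def tc_to_frames(tc: str) -> int:
    """Convert HH:MM:SS:FF to a total frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(frames: int) -> str:
    """Convert a frame count back to HH:MM:SS:FF."""
    ss, ff = divmod(frames, FPS)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def duration(start_tc: str, end_tc: str) -> str:
    return frames_to_tc(tc_to_frames(end_tc) - tc_to_frames(start_tc))

# A program whose timecode runs from "one hour" to "two hours" has a
# duration of one hour -- not a closing timecode of 02:00:00:00.
print(duration("01:00:00:00", "02:00:00:00"))  # 01:00:00:00
```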


 

4.21.2.1 Element Format.Duration Refinements Rating

Mean = 4.64, Standard Deviation = 0.54

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 1 2.8%
(4) 4 11 30.6%
(5) 5 24 66.7%

"Comment" responses:


My only issue is that we not make this a mandatory format and allow an optional free-text notes element for those who can only round off to nearest hour or minute.
Same issue as 4.20.2.1 - you have not defined duration


 

4.21.2.2 Element Format.Duration Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 35 100.0%

"Comment" responses:



 

4.22.1.1 Element Format.Standard Rating

Mean = 4.69, Standard Deviation = 0.52

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 1 2.8%
(4) 4 9 25.0%
(5) 5 26 72.2%

"Comment" responses:


I have no problem with the concept but the label is too vague. Format.Broadcast? I also wonder why this is repeatable.
MPEG-7 calls this the Media System. I would suggest being congruent.
I think you are going to need to expand both lists, but I would consult clyde.smith@turner.com
But these are broad categories . . . .
Quite a few standards missing including DV DIF, HDcam, AAF audio, Dolby E audio, etc.


 

4.22.1.2 Element Format.Standard Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.7%
(2) No 33 94.3%

"Comment" responses:


You could enhance the definition a bit. "overarching media architecture that circumscribes underlying" makes one's eyes glaze over. Without the pick list, I wouldn't have known what you were trying to say.


 

4.22.2.1 Element Format.Standard Refinements Rating

Mean = 4.44, Standard Deviation = 0.73

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 5 13.9%
(4) 4 10 27.8%
(5) 5 21 58.3%

"Comment" responses:


"standards" come and go
I would drop the word "video" from values (PAL, NTSC...)
Really think the controlled vocabularies need more granularity/specificity. There's a big difference between MPEG1/Layer3 and MPEG4/AAC, both of which appear to be "MPEG" in this vocabulary.
Do you need to sort out MPEG-2 (MP@ML) from MPEG-4 (SNR scalable)? This is a variation on my comments about the inadequacies of MIME type for intangible digital format identification. Perhaps you cover this in Format.Encoding?
You have not included an optional free form entry
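Several of the comments above pull in two directions: a controlled vocabulary for consistency, plus an optional free-form entry for standards not yet on the list. One way to reconcile them is to accept any value but record whether it came from the picklist. A minimal sketch (the vocabulary below is illustrative, drawn from terms mentioned in the comments, not the official PBCore picklist):

```python
# Sketch: validate Format.Standard against a controlled vocabulary while
# still permitting a flagged free-form entry. Illustrative picklist only.
CONTROLLED = {"NTSC", "PAL", "SECAM", "MPEG-1", "MPEG-2", "MPEG-4"}

def classify_standard(value: str) -> tuple:
    """Return (value, source), where source records whether the term
    came from the picklist or was entered free-form."""
    if value in CONTROLLED:
        return value, "controlled"
    return value, "free-form"  # preserved, but flagged for review

print(classify_standard("MPEG-2"))   # ('MPEG-2', 'controlled')
print(classify_standard("Dolby E"))  # ('Dolby E', 'free-form')
```

Flagged free-form entries can then be reviewed periodically to identify needed additions to the vocabulary.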


 

4.22.2.2 Element Format.Standard Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 34 100.0%

"Comment" responses:


Again, you can't make this mandatory.


 

4.23.1.1 Element Format.Type Rating

Mean = 4.03, Standard Deviation = 1.11

Response Count Percent
(1) 1 1 2.9%
(2) 2 3 8.8%
(3) 3 5 14.7%
(4) 4 10 29.4%
(5) 5 15 44.1%

"Comment" responses:


The definition is wrong for these values. This element has little to do with Format.Standard. Rather, they fall under the hierarchy of Type (08.00). I also question repeatable nature of element.
List is way too long and has too many duplicates.
seems to overlap with genre


 

4.23.1.2 Element Format.Type Confusing

Mean = 1.86, Standard Deviation = 0.35

Response Count Percent
(1) Yes 5 13.9%
(2) No 31 86.1%

"Comment" responses:


Yes, the description is too confusing but I can rate the usefulness of the values.
Confusing.
This was changed in our Boston meeting. We need to change the descriptions and examples to reflect this change
The description and the value list are confusing and seem to be an attempt to create a manifest of all the components in the video or audio, indicate the type of manifestation (master, etc.) It's a confusing potpourri and not at all core, IMO.
Seems like this should fall under TypeForm.
Again, the picklist is more informative than the definition. "use or reason"?


 

4.23.2.1 Element Format.Type Refinements Rating

Mean = 4.00, Standard Deviation = 1.21

Response Count Percent
(1) 1 3 8.8%
(2) 2 1 2.9%
(3) 3 3 8.8%
(4) 4 13 38.2%
(5) 5 14 41.2%

"Comment" responses:


My preference would be to use AMIA's Generation Type values.
We changed this
I'd strongly consider deleting. It's a worthy concept but impossible to complete comprehensively and way too detailed for most end users.
do you want a type of "other"?
overlap with genre


 

4.23.2.2 Element Format.Type Refinements Confusing

Mean = 1.91, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.6%
(2) No 32 91.4%

"Comment" responses:



 

4.24.1.1 Element Format.Encoding Rating

Mean = 4.34, Standard Deviation = 0.91

Response Count Percent
(1) 1 1 2.9%
(2) 2 0 0.0%
(3) 3 4 11.4%
(4) 4 11 31.4%
(5) 5 19 54.3%

"Comment" responses:


I do see the usefulness of Element but wonder if a Compression Standard Element with a Compression Rate would be more understandable. Again I ask, why repeatable?
No expertise
I'm not so sure this should be free-form.
We all wish for an enumerated list here . . . but your examples are helpful and make a good start at addressing my previous comments about sorting out "sub-format" information.
Does wrapper/header data go here (ex: BWF, Cartchunk)? How do we make it clear that one encoding or header should be read before another?


 

4.24.1.2 Element Format.Encoding Confusing

Mean = 1.85, Standard Deviation = 0.36

Response Count Percent
(1) Yes 5 15.2%
(2) No 28 84.8%

"Comment" responses:


how different than format.digital?
formats should be in a list to pick from
Why is this free-form? This data element identifies the playback app needed and is useful for transcoding when a format goes obsolete.
Is it worth hinting that for many video items, this instance would occur twice (or more?), at least for the video and the audio details?
I have no idea what this is
The definition is confusing regarding what information you expect to see in this element. It's only when you view the examples that you know what kind of information you are supposed to place in this field, and even then the definition remains hard to follow.


 

4.24.2.1 Element Format.Encoding Refinements Rating

Mean = 3.69, Standard Deviation = 1.32

Response Count Percent
(1) 1 3 8.6%
(2) 2 4 11.4%
(3) 3 7 20.0%
(4) 4 8 22.9%
(5) 5 13 37.1%

"Comment" responses:


how different from the video choices in format.digital?
wasn't this updated in Son of Smackdown?
NEED CONTROLLED VALUE LIST! Same comment about mandatory.
mandatory free form text?
Wish for an extensible list.
Please clarify the description and add examples
Is free form really appropriate?
Encoding needs to be pre-defined by an authority, based on format.type
It seems that it would be very helpful to have a defined list of formats so that users could automatically know how to decode?


 

4.24.2.2 Element Format.Encoding Refinements Confusing

Mean = 1.88, Standard Deviation = 0.33

Response Count Percent
(1) Yes 4 11.8%
(2) No 30 88.2%

"Comment" responses:



 

4.25.1.1 Element Identifier Rating

Mean = 4.57, Standard Deviation = 0.73

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 5 13.5%
(4) 4 6 16.2%
(5) 5 26 70.3%

"Comment" responses:


Of course, this Element as defined is imperative, but the examples seem all over the map. I emphatically do not think shelf location should be used as an identifier. I recommend distinct elements for Identifier.Barcode and Location.PhysicalLocation
How would this work with a broadcast program at a local station?
Isn't this a duplicate to a previous field?
Seems to be another location identifier
Surely this one warrants a qualifier or an attribute for the type of identifier.
critical as primary key to set up exchanges with other systems
V-ISAN is best identifier for video content
Definition is confusing; not until you see the examples do you understand it's along the lines of "tape location"


 

4.25.1.2 Element Identifier Confusing

Mean = 1.91, Standard Deviation = 0.28

Response Count Percent
(1) Yes 3 8.6%
(2) No 32 91.4%

"Comment" responses:


I would like to see qualification or "typing" of this, at least as an option.


 

4.25.2.1 Element Identifier Refinements Rating

Mean = 3.91, Standard Deviation = 1.12

Response Count Percent
(1) 1 1 2.9%
(2) 2 3 8.6%
(3) 3 8 22.9%
(4) 4 9 25.7%
(5) 5 14 40.0%

"Comment" responses:


To repeat myself: shelf # as identifiers make me queasy in your examples.
Ain't none.
How did you create bar code using text entry?
recommend identifying the scheme used
Include UMID or NOLA code in your list of examples


 

4.25.2.2 Element Identifier Refinements Confusing

Mean = 1.85, Standard Deviation = 0.36

Response Count Percent
(1) Yes 5 15.2%
(2) No 28 84.8%

"Comment" responses:


If this Element is repeatable, there needs to be an Identifier_Type Element to qualify these repeated values. I don't think it is useful to include a text explanation in the same field as the Element's value.
n/a
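The suggestion above — qualify each repeated Identifier with a type rather than mixing explanatory text into the value — can be sketched as follows. The field names and the sample NOLA/barcode values are hypothetical, not part of the published PBCore element set:

```python
# Sketch: pair each repeated Identifier value with a scheme qualifier,
# so a repeated element stays unambiguous. Names/values are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identifier:
    scheme: str   # e.g. "NOLA", "barcode", "V-ISAN" -- sample schemes only
    value: str

ids = [
    Identifier("NOLA", "AMDO 000101"),        # hypothetical NOLA code
    Identifier("barcode", "31234000123457"),  # hypothetical barcode
]

# With the scheme carried alongside the value, free-text explanations
# stay out of the value field and lookups by scheme become trivial.
by_scheme = {i.scheme: i.value for i in ids}
print(by_scheme["NOLA"])  # AMDO 000101
```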


 

4.26.1.1 Element Language Rating

Mean = 4.70, Standard Deviation = 0.57

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 2 5.4%
(4) 4 7 18.9%
(5) 5 28 75.7%

"Comment" responses:


Perhaps not necessary to be mandatory if English is assumed to be the default.


 

4.26.1.2 Element Language Confusing

Mean = 1.94, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.6%
(2) No 34 94.4%

"Comment" responses:


shouldn't this be in the Content section as opposed to format, which I thought was about the physical or digital attributes of the medium, not the language of what has been recorded
You could enhance the directions for listing all of the languages present here. How do you distinguish between primary language, the language on the CC and the language in the subtitles?


 

4.26.2.1 Element Language Refinements Rating

Mean = 4.41, Standard Deviation = 0.80

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 7 18.9%
(4) 4 8 21.6%
(5) 5 22 59.5%

"Comment" responses:


Though I am in favor of using standard codes for language, I would want to explore flexibility in using full language name.
If you recommend 3-letter codes, you should use those in ISO 639-2; the official page is the one at LC and should be cited; others may not be updated.
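The two comments above can be reconciled by storing the ISO 639-2 three-letter code and deriving the full language name for display. A minimal sketch; only a handful of codes are shown, and the authoritative list is the ISO 639-2 registry maintained by the Library of Congress:

```python
# Sketch: expand ISO 639-2 codes to full language names for display.
# Partial table for illustration; consult the LC registry for the full list.
ISO_639_2 = {
    "eng": "English",
    "spa": "Spanish",
    "fre": "French",      # ISO 639-2/B code
    "por": "Portuguese",
}

def expand(code: str) -> str:
    return ISO_639_2.get(code, "unknown")

print(expand("eng"))  # English
```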


 

4.26.2.2 Element Language Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.8%
(2) No 35 97.2%

"Comment" responses:


see above comment


 

4.27.1.1 Element Language.Usage Rating

Mean = 4.33, Standard Deviation = 0.89

Response Count Percent
(1) 1 0 0.0%
(2) 2 2 5.6%
(3) 3 4 11.1%
(4) 4 10 27.8%
(5) 5 20 55.6%

"Comment" responses:


How a language is used will be critical for some types of media, but each Language Usage Element must correspond to the Language Element. This is not clear in the Dictionary.
In library, language is used only for language code. Other language information is recorded in "Notes".
May need to be expanded; we have masters with 4 audio tracks, (Eng, Span, Port., Fr) any of which could be primary.
I'm not sure I understand why this is related to the definition of Language
Item should be required if asset has an entry under Description.ProgramRelatedText


 

4.27.1.2 Element Language.Usage Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 33 97.1%

"Comment" responses:


At times the Dictionary blurs Language Usage into the Format domain. I prefer to simplify the Language elements to reflect the overall asset.
The name of this element is misleading


 

4.27.2.1 Element Language.Usage Refinements Rating

Mean = 4.20, Standard Deviation = 1.05

Response Count Percent
(1) 1 1 2.9%
(2) 2 2 5.7%
(3) 3 4 11.4%
(4) 4 10 28.6%
(5) 5 18 51.4%

"Comment" responses:


I really like this data element.
How one has this element, language, description.programrelatedtext, relation.identifier and relation.type working together successfully is EXTREMELY unclear. I know that this is just a metadata dictionary, and not an encoding recommendation, but..
How will this be associated with the associated Description.ProgramRelatedText?? There may be several of both.


 

4.27.2.2 Element Language.Usage Refinements Confusing

Mean = 1.94, Standard Deviation = 0.24

Response Count Percent
(1) Yes 2 5.7%
(2) No 33 94.3%

"Comment" responses:


There are confusing aspects to the Dictionary's guidelines. For example, what are all these DVD Subtitle values?
..you need some examples showing how you *think* these things will be used together.


 

4.28.1.1 Element Annotation Rating

Mean = 3.84, Standard Deviation = 1.14

Response Count Percent
(1) 1 1 2.7%
(2) 2 3 8.1%
(3) 3 12 32.4%
(4) 4 6 16.2%
(5) 5 15 40.5%

"Comment" responses:


Notes will ultimately make or break a metadata exchange initiative. I recommend the addition of an AnnotationType Element with a list that includes other top-level Dictionary Elements (Publisher Notes, Creator Notes, etc)
very important
I would call this a notes field; annotation suggests commentary
You always need a "note" element . . . .
Risky to give people an unstructured notes space - they could get lazy and just use this instead of properly using the other elements. Also would be difficult to search/index. All necessary metadata should be capturable in structured elements.
This combines added information about both the metadata and the resource, so is confusing.
It seems to make sense to monitor use of this element to identify needed additions to the dictionary.


 

4.28.1.2 Element Annotation Confusing

Mean = 1.94, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.6%
(2) No 34 94.4%

"Comment" responses:


I don't know if your examples match your definition. Is this element about the metadata or a general notes field, a place to dump info that doesn't go anywhere else?


 

4.28.2.1 Element Annotation Refinements Rating

Mean = 3.53, Standard Deviation = 1.36

Response Count Percent
(1) 1 4 11.1%
(2) 2 3 8.3%
(3) 3 12 33.3%
(4) 4 4 11.1%
(5) 5 13 36.1%

"Comment" responses:


No encoding recommended, which is fine.
n/a


 

4.28.2.2 Element Annotation Refinements Confusing

Mean = 2.00, Standard Deviation = 0.00

Response Count Percent
(1) Yes 0 0.0%
(2) No 33 100.0%

"Comment" responses:


A good data element but I am not sure I'd consider it core
Although this is an excellent tool, I know from experience it can be overused; perhaps add wording to indicate that it should not be used for all fields
n/a


 

4.29.1.1 Element Location Rating

Mean = 4.19, Standard Deviation = 1.06

Response Count Percent
(1) 1 1 2.8%
(2) 2 2 5.6%
(3) 3 5 13.9%
(4) 4 9 25.0%
(5) 5 19 52.8%

"Comment" responses:


Crucial for indicating where item is physically located but I wonder if this should cross into digital file domain.
I think this can be deleted, Thom should rule.
Clarify re identifier elements; this is what a non-expert would look for first, I think
This also seems redundant to a previous field.
This seems to be broad institutional location.
I guess this is the cousin to Identifier?
Good example would be "tape is located on the floor in the back seat of Fred's car"
Note that this is the same as the MODS metadata schema's location element (comes from MARC). Also used in DC-Library application profile.


 

4.29.1.2 Element Location Confusing

Mean = 1.94, Standard Deviation = 0.23

Response Count Percent
(1) Yes 2 5.6%
(2) No 34 94.4%

"Comment" responses:


The issue is not that it's too confusing to rate, but that it is too similar to other Elements like Identifier and Format.Identifier to nail down its purpose.
Use more television terms to make your point instead of corporate or publishing terms


 

4.29.2.1 Element Location Refinements Rating

Mean = 3.89, Standard Deviation = 1.24

Response Count Percent
(1) 1 3 8.3%
(2) 2 0 0.0%
(3) 3 11 30.6%
(4) 4 6 16.7%
(5) 5 16 44.4%

"Comment" responses:


I recommend adding Elements to Location to handle Organization, Department, etc.
n/a
In a particular case, if I can use either Location or Identifier, should I use one, the other, or both?
You might suggest that the institution create a pick list


 

4.29.2.2 Element Location Refinements Confusing

Mean = 1.97, Standard Deviation = 0.17

Response Count Percent
(1) Yes 1 2.9%
(2) No 33 97.1%

"Comment" responses:


n/a


 

5.1 Agree PBCore Meets Need

Mean = 4.25, Standard Deviation = 0.53

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 2 4.2%
(4) 4 32 66.7%
(5) 5 14 29.2%

5.2 Feel Understand PBCore

Mean = 3.96, Standard Deviation = 0.82

Response Count Percent
(1) 1 0 0.0%
(2) 2 3 6.1%
(3) 3 8 16.3%
(4) 4 26 53.1%
(5) 5 12 24.5%

5.3.1 Other Metadata Schemes Investigated


IEEE LOM
n/a
National Library of Australia Preservation Metadata for Digital Collection is one of the better models: http://www.nla.gov.au/preserve/pmeta.html IMS SCORM CIMI METS OAIS VIDE SMEF many, many others
none; however, I have heard that for education environments Dublin Core may not be the best choice to base a standard on.
Dublin Core VRA Core Categories 3.0 FGDC Content Standard for Digital Geospatial Metadata 2.0
SCORM, CIDOC, Dublin Core, MPEG-7, MPEG-21, LOM, P/Meta, RDF, SMEF, VRA Core
EBU P-META and P-FRA Dublin Core SMEF emerging AES MPEG-7 etc Other broadcasters SMPTE NewsML MARC EAD RLIN OCLC
Dublin core, LOMM
dublin, GEM extensions,
Dublin Core MPEG-7 MODS IEEE LOM CDL CIDOC CSGDM MARC
None
Dublin Core SMEF (BBC) Time Warner Turner (internal) Library of Congress
IEEE LOM, IMS, SCORM, DC, MARC, MODS, METS, MPEG7, MPEG21, VRA Core, PMeta
none
16+ years of database and database application experience.
none
Dublin Core, MARC, SCORM, MPEG 7
We've contemplated the use of XML schema as a way of standardizing imported/exported content, but have hesitated due to the lack of a systemwide standard naming scheme for various fields.
METS, SMPTE RP-210, AES draft administrative and process
In terms of metadata schemes for broadcasting, I have only closely worked with my own company's. I have seen others during data conversions where stations move from another system to our system, but I know ours the best.
My primary area is film and as such I have been responsible for instituting the AFI Catalog's scheme. As a former librarian I also have had some familiarity with general cataloging schemes; I have not previously participated in a similar metadata scheme evaluation such as this
None
RSS/RDF, NewsML, Homegrown scheme specifically for music playlists.
OAIS, MARC, METS, National Archives of Australia's Recordkeeping Metadata Standard, Dublin Core.
Will we include AAF or MXF?
TV-Anytime, PSIP (PMCP), our own internal dbs
SMPTE DMS and EBU P-meta
Dublincore
MODS (http://www.loc.gov/mods) METS (for technical metadata in combination with MODS; http://www.loc.gov/mets) VRA Core EAD
None
G.E.M. - Gateway to Educational Materials National Digital Information Infrastructure and Preservation program
1. Dublin Core generally 2. MPR Metadata 3. Our internal PRX metadata scheme 4. Various ad-hoc schemes
Dublin Core


5.3.2 Other Scheme Benefits


better for education
n/a
The best models show a clear element breakdown that includes Collection, Object, and File as well as clear distinction between media types as Sub-elements. In other words, they understand the need for an Element Domain, which PB_Core lacks.
Packaging. In order to get maximum benefit from sharing material using a "common practice" set of metadata it will be essential that the metadata is stored in a standard way. There are a number of options, including the IMS Content Packaging and SCORM specifications, as well as the possibility of storing the metadata in the objects themselves, for example as META tags in a web page.
• IMS CP: Metadata in an IMS content package is optional and is allowed within several elements of the manifest to more fully describe the contents of a package. Such generality does not help "common practices" to develop. The location of metadata is important when packages are aggregated and disaggregated. Clearly, when resources are removed from one package and inserted into another package, the metadata must be carried with the resource. No advice on handling metadata is given in the aggregation and disaggregation part of the IMS Content Packaging Best Practice Guide. The following extract highlights the lack of direction: "Some Content Packages will have their associated meta-data captured in a separate file. When this is the case, manifests may include an in-line reference to the external meta-data file." This means that metadata can exist but may only be referred to in the manifest where it is expected!
• SCORM: SCORM Content Packaging is based directly on IMS Content Packaging. However, SCORM differentiates between context-specific and context-independent metadata. Context-specific metadata is used to describe the Content Aggregation level in which educational content has been established. Context-independent metadata applies to SCOs, which are intended for reuse in different contexts, and to Assets. In a Content Aggregation, metadata (if it exists) must be in the manifest (inline), although there is also an option to include a reference to metadata external to the manifest - even as a URL to metadata outside the package. An aggregated package may contain several such items. Each item should contain metadata in-line or by reference, as for the top-level metadata. Within the appropriate section of the manifest, each resource should have its context-independent metadata either inline or referred to externally. In all cases the metadata, if it exists, should include as a minimum all the mandatory fields.
• Other: Various applications attempt to embed metadata when they save web pages. There appears to be no consistency between these products. http://www.ukoln.ac.uk/metadata/education/meetings/agendas/2002-04-18/duncan.pdf
These standards can be used as conceptual models for building PBCore. Certain principles can be applied to PBCore. For example, the Dublin Core 1:1 principle: "only one object or resource may be described within a single metadata set".
none
Simplicity and ubiquitous use; specific to education
GEM is more useful in tagging educational resources/assets
Some of the schemas have explicit mapping to other schemas, which I think is very useful, particularly to evaluate the "coreness" of your schema. I also like the use of attributes in other schemas, particularly MODS, MPEG-7 addresses multiple physical manifestations of a single work particularly well
Our internal scheme meets our internal needs better since our business model is unique.
This (and the drawbacks question) isn't something that can answered outside of a particular use context. If I need preservation metadata, I'm going to be looking at PMeta and not PBCore. If I need to ship assets to PBS, on the other hand, I'm probably going to use PBCore and not SCORM.
Using content inventory and coding to associate content to consumption by public broadcasting viewers and listeners
Great benefits for increased access and coordinated management and preservation of assets.
N/A
None; it appears that PBCore has elements of XML schema incorporated.
METS features the structural metadata that is more or less missing from the PBS set, although MPEG-7/MPEG-21 might provide this as well. SMPTE RP-210 has some technical nuances missing from the PBS Instantiation data. Compared to METS "digiProv" and AES "process," the PBS data lacks technical information about how production was carried out, which may be fine for the PBS requirements. I would add that Dublin Core is not as helpful as one would like for associating data types and other needs. MODS is an interesting option but more complex.
There are cases where our vocabularies are better defined, but the structures are actually quite similar.
The only general criticism I would offer of the PBCore is that some of the definitions tend to be overcomplicated on the one hand and somewhat vague on the other; I tried to briefly indicate these sections.
RSS: simplicity, ubiquity. NewsML: none.
Dublin Core is the best choice to base your standard on -- widely used, lowest-common-denominator set, easy to crosswalk to and from.
Interoperability between systems in development and post-productions.
There is a good amount of overlap
The ability to associate metadata to various temporal aspects of the content, such as to the whole program, to clips, to frames, to events and as a "track". They use a public registration authority (www.smpte-ra.org) and are ISO registered. There are over 1,200 items in the SMPTE dictionary, and there are metadata items that describe the technical aspects of the content better than the PB Core technical elements. They use 16-byte labels which are easily encoded and parsed in KLV file formats. DMS is very specific as to how it plugs in to MXF structural metadata. The 1,200-plus items in the SMPTE dictionary are currently being mapped to an XML namespace. They can describe DVD, print, multimedia and other non-video content as well as PB Core.
Wide acceptance, I think.
MODS: allows for hierarchy in expressing elements; Dublin Core is entirely flat. Thus you can associate related elements with each other. Very powerful use of related item to allow for hierarchical descriptions. METS allows you to package different forms of metadata together, including technical, and to encode at various levels.
GEM metadata are specifically referenced to education and are a subset under the Dublin Core. They should be used as an authority for any educational values. G.E.M. also allows for individualized values that can be related to existing values with different nomenclature. NDIIPP is very influential and may have an impact by setting the standards that all other protocols must adopt. This will allow disparate assets and resources to be shared across multiple databases.
1. Dublin Core: already defined ;) 2. MPR: Unsure, not familiar enough with it 3. PRX: Structured and explainable as a relational database, which can be easier to understand in some cases 4. Various ad-hoc schemes: solve specific problems more effectively than a generalized scheme
A little bit more settled
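The KLV (Key-Length-Value) encoding mentioned in the SMPTE response above can be sketched with a minimal packet reader. This assumes the common SMPTE 336M layout of a 16-byte universal label followed by a BER-encoded length; a full parser handles more cases:

```python
# Sketch: minimal reader for one SMPTE-style KLV packet.
# Handles short- and long-form BER lengths; real SMPTE 336M parsing
# (local sets, known-length variants, etc.) is more involved.
def read_klv(buf: bytes, offset: int = 0):
    key = buf[offset:offset + 16]        # 16-byte universal label
    offset += 16
    first = buf[offset]
    offset += 1
    if first < 0x80:                     # short form: byte is the length
        length = first
    else:                                # long form: low 7 bits = count of
        n = first & 0x7F                 # subsequent big-endian length bytes
        length = int.from_bytes(buf[offset:offset + n], "big")
        offset += n
    value = buf[offset:offset + length]
    return key, value, offset + length   # offset of the next packet

packet = bytes(range(16)) + bytes([0x03]) + b"abc"
key, value, nxt = read_klv(packet)
print(value, nxt)  # b'abc' 20
```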


5.3.3 Other Scheme Drawbacks


doesn't address instantiation and tech codes needed for broadcast ops or production
n/a
Some of the other models are too complicated to be useful. They also seem aimed at a specific media type. A few of these use XML as the basis for their Metadata Dictionary, which is hard to read at times. I also think a big plus for PB Core is the attempt to actually recommend values for controlled vocabularies. Finally, though most of the models I've studied explicitly map to Dublin Core, PB Core and VIDE are the only models that started with Dublin Core and built outward.
Dublin Core is too general and simple. The other two schemes are too specific, especially, the FGDC metadata is for digital geospatial data.
Not fully responsive to public broadcasting needs.
Too complicated
not specific to public broadcasting
not targetted to broadcast production/distribution
Many of them are not particularly core, but PBCore is also fairly complete rather than core. PBCore is very well done but I don't think the issue of multiple physical manifestations for a single title is addressed. I think the data elements are generally very complete and very well defined but I don't have the sense that you have cataloged a range of existing assets to really test it. If you did, I think you would find that you have issues with multiple physical manifestations, with the use of "mandatory", etc. You should test it on a range of assets yourself (if you haven't already) and then use volunteers who are cataloging novices, from the metadata creator community you are addressing as "guinea pigs" to test it further. Each data element seems well documented and robust in itself, but the schema as a whole doesn't demonstrate the robustness of a tested schema.
NA
Level of apparent granularity could delay implementation as well as result in huge costs with hard-to-calculate ROI.
Will be a challenge to train users to do exact cataloging.
N/A
None.
Greater complexity.
I think the main drawbacks are exactly part of what you are trying to eliminate-- Sometimes our terminology doesn't exactly match another vendor's terminology, and if we are importing or exporting data, we may have to spend more time clarifying and identifying the data that is needed, and we may have to do some reformatting to match the other system's requirements.
RSS: limited to web addresses.
Not to exclude. Just wondering if anything will be included.
They all share the same problems. The type/form/genre selection is hard to enforce. Any time we tried to come up with a rule, we always came up with examples that broke it. Consistency in the data is hard to achieve.
These are newer standards than Dublin Core.
???
Not as well known.
GEM is somewhat limited in description of varied resources as related to broadcasting.
1. Dublin Core base: obviously not specific enough 2. MPR Metadata: unsure, probably not generalized enough 3. PRX: Not nearly as well-specified, too application-specific (although influenced by early PBCore drafts) 4. Other ad-hoc schemes: not well defined, too specific, etc.
Not at all relevant to television materials


5.4 Rate Original Markup Effectiveness

Mean = 4.04, Standard Deviation = 0.85

Response Count Percent
(1) 1 0 0.0%
(2) 2 3 6.7%
(3) 3 6 13.3%
(4) 4 22 48.9%
(5) 5 14 31.1%

"Comment" responses:


I would never consider using the PB Core Elements as the foundation of my database. For one thing it does not grapple with Collection-level metadata in a practical way.
N/A
Production use not yet settled; and how closely will the ACE system conform to PBCore?
As long as the relationship between series, programs, individual instances of programs and program segments are represented, it will work.
As an archival repository, I need more preservation metadata, and detailed technical metadata, than PBCore provides. But it's quite good overall.
Really need to see coherent examples of known data. Searching for information would be very labor intensive.
My work involves cataloging films only; I am not an archivist
With so many elements, it's a bit cumbersome and daunting to metadata creators. Hard to distinguish some elements from one another.
Every element seems essential to the data we need to produce, archive, retrieve and create.
Would involve a lot of manual translating
For us, we would want a richer, more hierarchical scheme and better incorporation of technical (and structural) metadata. But we could probably convert to be able to use as a recipient of the data.
The important term is 'we', not 'you' - if I were the only user any system would be suitable
(I am not representing a TV/prod organization). Only usage and experience will tell what is missing or what is too detailed or constraining.
There would be several things that it does not describe which are application-specific but essential (ie: "complete" would not work). I'm also not sure about the logistics of mapping PBCore (or generally Dublin Core) onto a relational model.
Certainly some tailoring to our individual situation will need to take place


 

5.5 Rate PBCore Usefulness To You

Mean = 4.07, Standard Deviation = 0.85

Response Count Percent
(1) 1 1 2.3%
(2) 2 0 0.0%
(3) 3 8 18.2%
(4) 4 21 47.7%
(5) 5 14 31.8%

"Comment" responses:


N/A
Not clear how well will fit with museums
MUCH better than no data!!!!
I would like to use it immediately.
I'll advise our customers to use MXF DMS
Actual tools for implementation needed
I don't have any indications about how helpful PBC will be for distribution, or use in production. I don't have any indications that other PB stations are using it, that it's being integrated in the PB initiatives or that there's any expectation that it will be a standard.


 

5.6 Rate Search And Discovery Effectiveness

Mean = 4.11, Standard Deviation = 0.75

Response Count Percent
(1) 1 0 0.0%
(2) 2 0 0.0%
(3) 3 10 22.2%
(4) 4 20 44.4%
(5) 5 15 33.3%

"Comment" responses:


doesn't really address content or contextual keywords
Though I've pointed out some problems with the various descriptive Elements, PB_Core would be very useful for search and retrieval.
depends in part on navigation, wording of query instructions
Need to standardize the term lists for more data elements, particularly the rights data elements.
WITH TRAINING!!!!
Need to extend rights to searchable elements
The number of elements with free-text schemes might make search/discovery difficult. Better to use pick lists and defined rules/schemes for consistency
We could increase descriptive options for some of the still images. Also, data that applies to moving images should not be mandatory for still images or production illustration files.
PB Core requires more synchronous descriptive metadata such as who is in this shot, what did they say in this clip, etc
Using a standard set of elements is important.
A few items need refinement before I'd rate it a "5"
The quality of the data filled in is essential though.
training/orientation required
We have some but not many fields that are not covered


 

5.7.1 Rate Revenue Or Service Likelihood

Mean = 3.39, Standard Deviation = 0.99

Response Count Percent
(1) 1 1 2.3%
(2) 2 8 18.2%
(3) 3 13 29.5%
(4) 4 17 38.6%
(5) 5 5 11.4%

5.7.2 Describe Opportunity Scenarios


Creating efficiencies, by providing source data that would not have to be manipulated at the station/user level.
Federated searching could be a great boon to researchers as well as for selling our original footage. As the DVD market grows, re-use of program materials, along with previously unaired "extras," could be a great source of revenue. The PB_Core could help find "lost" treasures.
search and availability of internal production assets, search availability of assets for external clients
I am not in a public broadcasting organization.
Paid on-demand access to audio and video assets.
As more and more assets become or are born digital, with a standardized descriptive language we will be able to make certain collections of material available to new users or more affordably make them available to existing partners. This means that the costs associated with providing material to partners drop, and makes the barrier to entry lower for any new venture.
Don't know about revenue. Could facilitate content exchange between stations; and between stations and other public service partners. Hope for seamless connection to ACE system.
I am answering this as the MIC developer. We'd use PBCore to map PBS data into MIC. We might also expose MIC metadata in PBCore as a function of the digital video portal. As an MPEG-7 AP developer, I'd seriously consider the PBCore controlled vocabularies for use within MPEG-7. Many of the vocabularies are much better than the ones supplied with MPEG-7.
Being able to easily and cheaply offer web-based data search of library.
Stations could use media asset codes linked to donor transaction codes to better cultivate and communicate with potential funders at all levels of individual, corporate, foundation, and government support.
Provide an opportunity for a user to search for an on-demand asset for purchase and download, or shipment of physical media.
The service opportunity for us is to facilitate the exchange of information, and standards make that easier.
More effective and efficient exchange of material between stations and network center
Searchable HD footage and programs for external customers
I work at the Library of Congress, where we receive some PBS (or PBS-related) content for our collections. Our ability to serve as an archive will be enhanced if content arrives with better data; the PBS metadata would be very welcome.
It is possible that it might facilitate the work that we do with other companies (e.g. automation equipment, PSIP generators, etc.). I'm not sure that it would actually generate new revenue or service opportunities.
We could more easily identify elements of programs or assets that are currently not available in a standard form or possibly not available at all, e.g. usage rights & reproduction
An organization could catalog objects using the PBCore and then offer a searchable web interface / "shopping site" for interested parties. Digital items could be downloadable through the same site either free or for a fee.
I would need to explore this further. There is immediate value to inhouse production. We would need to see how much of our content could be made available to the public. We are exploring options with datacasting and it could help with this.
1. subscription or pay-per-play audio, video on demand services 2. syndication of content to third party (in our outside public broadcasting) for fee
A station could make its assets searchable using a low resolution browse application and then make available or sell the high resolution content to end users.
???
We could only imagine receiving assets and using metadata that comes with them. We would probably convert the metadata to something more consistent with what we already have.
I wouldn't begin to make an effort to generate new revenue based on PB Core - this best serves PB by universal acceptance (and the efforts that fall out might generate new revenue)
public service search tools and access to assets; sale/distribution of assets to other professionals. The schema by itself is not enough. The tools that would exploit the schema are essential, even more important than the schema in its most formalized details.
making archived assets available to qualified users will result in increased revenue from these assets
The ability to share assets within a search is vital in the access of these resources. Metadata would make this possible. While new revenue is not an outcome, the efficiency gained by the PBCore could potentially save money in staff costs as well as increase quality and productivity.
Exchange of assets with other distributors for broader footprint. Eased ingest of assets from various producers.
Right now our legacy materials are pretty inaccessible, first to the staff of the station but even more so to the audience. So any cataloging and provision of access to our materials will generate new revenue and service opportunities for us. Since we have nothing else, PBC is a decent choice and probably will afford new revenue and service opportunities for us, especially if we can get funding to implement it enterprise wide or if PBS were to institute it as a standard.


6.1 Likelihood Implement PBCore

Mean = 2.84, Standard Deviation = 1.33

Response Count Percent
(1) 1. The next 6 months 7 16.3%
(2) 2. 6 months to a year 12 27.9%
(3) 3. 1-2 years 13 30.2%
(4) 4. 2-3 years 3 7.0%
(5) 5. Not likely within the next 3 years 8 18.6%

6.1.1.1 Map Existing Data Fields To PBCore

Mean = 1.16, Standard Deviation = 0.37

Response Count Percent
(1) Yes 27 84.4%
(2) No 5 15.6%

6.1.1.2 Mapping Existing Data Fields Describe How


This is yet to be determined - we're currently exploring our software options to better accommodate our current and future data delivery needs.
Looking at the following options: 1. generate PB_Core XML from our database; 2. creating a PB_Core repository; 3. creating XML/XSLT stylesheets to directly map our asset management schema to a PB_Core schema when one is available.
We have initiatives underway that make use of PBCore as their basis.
I am not in a public broadcasting organization.
Migration from NOLA to the new Broadview
It all depends on what industry-wide use the PBCore is put to. If a subset of fields is required in PBCore format for submitting content to our distributor, then we would do the mapping; otherwise it will be on an individual project basis. Getting the big two organizations, PBS/NPR, to popularize and use PBCore will be vital to its adoption.
Sort of ... in a sense, we're starting from scratch because there's so little metadata related to current tapes (and virtually all of it in the form of labels, etc)
Within the context of MIC, as described earlier. At Rutgers University, I am certain that RU-TV would be very interested in this schema and might want to apply it. In fact, marketing to university television stations is a very good idea for you guys to pursue.
No existing library so it's not an issue, but if I did, I would.
Best case scenario would use a simple tool perfected by a third party and provided at no cost. Short of that...maybe a roomful of chimps with typewriters?
Map based on existing standardized dbs fields.
Incorporate the ability to do this into our application: ProTrack.
We'd probably use .NET classes combined with XML.
If we received PBS metadata with PBS content for archiving, we would crosswalk the metadata to our online cataloging system, and/or our collection management system.
Assuming that we are given access to PBCore, I'm sure that we would implement at least part of it. For the most part, we contain the same elements. There may be naming conventions that are different, and perhaps we'd map those. As for the data formats, we'd probably try to modify some of those that don't currently match PBCore's for the ease of working with other systems.
Write software that will convert current, internal database field identifiers to match PBCore
We would like to however the data is very scattered. Some consists of printed legal releases, some may be electronic. This could be a project, but I would start by implementing the system in content being produced now.
unknown, would need to investigate how. would hope that our major vendors in which we house our data would do it for all their PB clients, including us
PBS Core would become a subset of a larger metadata schema. For example, elements in the PBCore XML namespace could be mapped into SMPTE RP 210 Metadata Dictionary Elements and used in an MXF schema such as DMS.
XML or text-file export into PBCore from existing Scout database.
we have relatively few assets under management currently, but there will be a requirement for manual translation of the existing records; future logging will be directly into the PBCore structure
Unsure of how at this point without further research and evaluation.
When designing our own system we looked at early drafts of PBCore, so hopefully it should not be very painful. We would work to map our fields to the PBCore fields (one-to-one as well as breaking/merging as necessary). We would then work on two tools: one to present our data in PBCore format and one to map data from PBCore to our internal format. We would also look into ways to modify our internal format to make it more compatible with and convertible to PBCore.
Well, we will try. We have so little information right now that there isn't much to map. What is there to map will have to be carefully reviewed after importing the data because there was not consistent data entry in the first place.
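Several of the responses above describe the same basic approach: export existing database fields as PBCore-flavored XML, via XSLT stylesheets, parser/translator scripts, or tool-assisted crosswalks. A minimal sketch of such a crosswalk in Python follows; the internal field names and the simplified flat element set are hypothetical, chosen only to illustrate the mapping step respondents describe, not the full PBCore schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical crosswalk from internal database field names to
# PBCore-style element names. Real mappings would also need the
# breaking/merging of fields that respondents mention.
FIELD_MAP = {
    "prog_title": "title",
    "prog_desc": "description",
    "topic": "subject",
    "air_date": "dateIssued",
}

def record_to_pbcore(record):
    """Convert one internal record (a dict) into a PBCore-style XML element."""
    root = ET.Element("PBCoreDescriptionDocument")
    for internal_name, pbcore_name in FIELD_MAP.items():
        value = record.get(internal_name)
        if value:  # skip empty fields rather than emitting empty elements
            ET.SubElement(root, pbcore_name).text = value
    return root

# Example: map one legacy record and serialize it for exchange.
record = {
    "prog_title": "River City Stories",
    "prog_desc": "Documentary on the flood of 1952.",
    "topic": "local history",
    "air_date": "",  # missing data is common in legacy records
}
xml_text = ET.tostring(record_to_pbcore(record), encoding="unicode")
```

The inverse direction (ingesting PBCore records into an internal format) would invert the same dictionary, which is why several respondents plan to build both an export and an import tool around one mapping table.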


6.1.2.1 Begin Asset Digitize And Manage Using PBCore

Mean = 1.31, Standard Deviation = 0.47

Response Count Percent
(1) Yes 22 68.8%
(2) No 10 31.3%

6.1.2.2 Begin Asset Digitize And Manage Describe How


very small test
This is a "quite possibly yes" - again the software direction we choose will have a lot to do with this decision.
underway now
I am not in a public broadcasting organization.
TBD
Almost all descriptive information is contained within PBCore. You may only be able to gather a small portion of metadata upon capture of the asset, but starting with key PBCore elements ensures you have others in the industry to contact for guidance.
Ask me in a year. We're just getting into this for the station.
Not high enough priority -- too expensive to do yet.
PBCore can be mapped into a relational DB model fairly simply. A web-based front end for entering metadata during digitization processes should be fairly straightforward to create.
require an automation integrator to comply with the standard.
already have asset management program
Perhaps as a beginning step
Within WGBH "reference architecture"
Yes, but only if it will complement our current database and future upgrades.
But we would happily receive the data and transform it.
I'm saying no to this for now, just because we have other asset management systems that we already work with. That's not to say that we couldn't do something in the future.
Will include PBCore as part of evaluation of any asset management software or system.
I would incorporate PBCore into our web site's content management system.
I would use it as a structure to build off. Start by organizing data for each department in the way that pertains to each. Data would be used in present systems as it is applicable. Do you have tech specs, coding guidelines or are we to go to each of the links to retrieve info? Would be helpful. However, the metadata may be embedded differently in different media.
being only a part-time national producer, KQED would need to do cost/benefit analysis of building a DAM system for our content
If this is what the customer wanted then I'd work with ingest, archive, and asset management vendors to develop applications that created PBCore metadata and linked that metadata to content assets.
this is a process that has been put on hold, pending the release of PBCore. We will resume the process when the standard is issued.
Would combine this with any digital assets management or digital rights management systems
I have to qualify my answer. I would begin such a system using PBCore as a guide but adding elements if need be. I would also have to sort out issues regarding mapping PBCore/Dublin Core to a relational model or whatever other data storage model I was using. I doubt that PBCore would be a complete solution, but it would be a very large help and would enable much easier metadata sharing.
We are digitizing our legacy materials. In doing so, we need a metadata scheme to describe the resulting digital files. We will be using PBC to get started, at the very least keeping mapping to it in mind as we create our database.


6.1.3.1 Map Existing Management System To PBCore

Mean = 1.26, Standard Deviation = 0.44

Response Count Percent
(1) Yes 23 74.2%
(2) No 8 25.8%

6.1.3.2 Mapping Existing Management System Describe How


n/a
See earlier notes.
I am not in a public broadcasting organization.
I'd try to, but will have to see.
I think there will eventually be many crosswalks employed within our companies and throughout public broadcasting. Mapping existing data collections to PBCore ensures that you are moving toward an understood standard, and hopefully tools will emerge that add value to assets that are described using PBCore.
Did preliminary for LOMM ... just a paper process at this point
At some point, I will map MIC to PBCore.
If I had one, I would, but I don't.
PBCore is fairly similar to metadata we're creating/storing at the moment. Crosswalking should be a trivial process, and will mostly involve combining certain data fields in our database into a single PBCore element.
Best case scenario would use a simple tool perfected by a third party and provided at no cost. Short of that...maybe a roomful of chimps with typewriters?
Actually not sure if I would start fresh or adapt. We have so little asset info now perhaps we would start from the beginning.
We'd probably use .NET classes along with XML.
If PBS data arrived for archiving with this metadata, we would use the data to develop our catalog information.
again, would look for vendor help. Or write parser/translator ourself, but prefer help
If I was building a system for a facility that required this then I'd do it by mapping the current asset management system to an XML namespace and then that XML namespace would be mapped to the PBCore XML namespace.
Associate corresponding values and transfer fields addressed by them. This is a weird question.
This needs to be worked out by others
many of the fields of our database can be mapped, but there will be a lot of manual translation involved
Our current G.E.M. is setup to accomplish this type of mapping already.
We will. As I mentioned on a previous question, we looked at early drafts of PBCore when designing our own system. We will work to map our fields to the PBCore fields (one-to-one as well as breaking/merging as necessary). We will then work on two tools: one to present our data in PBCore format and one to map data from PBCore to our internal format. We will also look into ways to modify our internal format to make it more compatible with and convertible to PBCore.


6.2 Implementation Requires Significant Changes In Org

Mean = 1.31, Standard Deviation = 0.47

Response Count Percent
(1) Yes 29 69.0%
(2) No 13 31.0%

6.2.1 Explain Implementation Issues In Org


convincing all that the extra work is worth it
We're in a "free text" non-structured environment in capturing our data. Re-keying is common in distributing information to build in structure after the fact. This type of environment will allow a more streamlined, up-front flow, producing natural efficiencies in the process.
Yes, only if my recommendations for specific Elements are not addressed. I don't think PB_Core is currently useful as a day-to-day workflow model, but has potential for basic metadata exchange. More work must be done to grapple with Collection-level metadata.
Input of metadata is a major time issue which may or may not be offset by time savings later. Implementation of organization-wide software and the cost to do so; training users; purchase and maintenance of hardware and software; integrating and interfacing PBCore into the education environment.
I am not in a public broadcasting organization.
Would require building discipline where little currently exists.
- Staff resistance to required processes (fields, formats, etc.). - Broad range of related changes, from logging to media storage ... again, staff resistance (what's in it for me?). - Not enough technology in the house to make it easy; not enough connectivity. - How do we handle legacy material? - Long-term project; getting the stages/steps right will take careful planning.
It's a step that doesn't happen right now, so it will have an impact.
Asset creators typically expend all their energy and resources in the execution of the core requirements of the commission. This leaves collection and management of metadata to a subsequent process.
Extensive training needs and buy in.
The start-up would be time-consuming, the maintenance much easier
Unification of approach to identifying program material and elements among 9 diverse independent entities. There will also undoubtedly be workflow changes that will necessitate expenditure of funds not currently budgeted.
My first instinct was to say no to this, but then I considered our users. Any change to them is often difficult to get used to-- whether it be new terminology, new vocabularies, or new formats for the data. Additionally, with any changes in names or formats, we'd have to modify all existing reports and queries that use those fields/elements.
We catalog films for the public and, as stated above, I am not an archivist; although AFI does have an archive, physically housed at the Library of Congress; the AFI archivist works with LC on their system and maintains a system within the STAR software as well.
Need to identify elements that are of particular interest to our organization or to organizations with which we exchange assets; training - to ensure necessary staff understanding of the standard and how to implement it; quality assurance - to ensure we are consistent in the standard's application; monitoring & discipline - to make sure application of the standard is always complete
The fact that someone can produce a news story doesn't mean they can categorize the information properly. It's likely that we will want to add the task of simply managing metadata to one person's job. Also, our CMS employs a feature called "acquisition" where much of the PBCore could be inherited to all of our content, and would need to be changed only when it applies.
Metadata creators would need training in how to use the standard properly and consistently. People would need to be educated first about the business benefits of undertaking the extra work otherwise they will find "work-arounds", refuse to use it, etc.
It would require more planning on the part of inhouse production. I would try to organize and simplify the PBCore guidelines for the TV producers. They seem to get overwhelmed when even mentioning the word "metadata." We need the dictionary with details as you have them. Would create a simpler outline for producers to make everything as easy as possible. Would create a form template to make it easier. May need to input some data in each department as it moves through production.
any additional workload would be difficult to extract from current staff.
Learning new attributes, populating the data consistently, mappings to our internal structures.
Training operators how to be consistent in using the application that creates the PBCore metadata for an asset. Eventually PBCore metadata will be associated with a file and not a physical tape. When this happens new collaborative workflows will be possible and producers will be able to edit their own programs. Roles will change, specialists will become generalists and everyone will be able to edit and modify content, from the station manager to the custodian. Finding how to manage a technical on air LAN along side an administrative LAN will become an issue, especially keeping viruses out of the digital archives.
Education on how to use PBCore. RE-evaluation of workflow needs.
Workflow changes and learning to read and use the metadata are the issues - I wouldn't elevate the effort to that of a 'cultural change'
Requires discipline, similar to what Librarians are using. Requires training. Requires awareness of the importance of accurate data.
agreement on the ownership of the responsibility to develop metadata content; including metadata generation in the asset development process throughout; managing and enforcing the process
Workflow would be impacted greatly, as would training and other intellectual issues.
Getting people to follow the rules about what goes into what field. Getting people to enter more data than what they have in front of them or know immediately. No one is going to fill out more than 15 fields when they handle a resource; not even the tape library staff will have the patience or see the benefit of carefully entering information in as many of the PBC elements as possible. People are not used to generating metadata for anyone but themselves.


6.2.2.1 Who Needs Training In PBCore


I'm going to go out on a limb here -- the entire Distribution group (all but finance); Programming - from the screening phase to the actual acquisition of distribution rights; Communications - program descriptions; Operations -
Some effort would need to be made to capture and transform technical metadata. Also data would need to be normalized by trained staffed.
Managers would need to see the value and return on the dollar. Producers, directors, editors, videographers would need to see the value of metadata: would it make just one more thing for them to do, or save them time? Identify an individual or individuals to operate, input, and manage the system.
Music, Audio/Video librarians and catalogers
Producers, editors, technicians, web specialists
Library, Traffic and Scheduler type
Reporters/Producers, Content distribution personnel, Archivist
Producers, traffic, editors, MCO's, engineers, unit managers, IT, programming, development
Librarians, Editors, Reporters
Assistant editor and Producer
digital library staff
software application developers
Effective deployment of a comprehensive metadata management process would impact, to varying degree, every member of the organization.
archives and all production and admin. units depositing materials in digital repository
Production, Distribution
Software programmers.
All of technical operations and Scheduling/Traffic
Pre production, production, post production, traffic, broadcast, public information, engineering and operations
Operations staff
Archival accessioning and cataloging staff, to execute the data transformation. And a toolmaker to make a transformation tool.
Anyone who does coding, support, training, documentation, etc. would need to be able to work with the proposed PBCore. That's just in our office. Our primary software users at the stations would need to be aware of using PBCore as well, especially those who like to write their own queries.
The AFI archivist in Washington, New Media ventures staff
program coordinators, producers, tape dubbers
Operations (traffic, master control, librarian); Production (editors, producers, shooters, audio); Operating engineers
Likely the news, jazz and classical staff would need to trained about how to better describe the content that they post to the web.
Presumably resource catalogers. I could also see a connection to records management and collections management.
Online Services, Production, On-Air Graphics, Engineering, Volunteers
TV and radio archivist/librarian program schedule system person
Editorial mostly.
Tape room, feeds, transmission, traffic, editing, archivists, tape librarians, producers, researchers, editorial staff, writers, directors, just about everyone!
1. Production 2. Operations 3. Network control 4. IT
Operations, Library, Scheduling, Traffic, Programming
director of production; producers; editors; broadcast operations management; education services
All departments involved in production, content, and education
Technical implementers of our DAM system.
Producers, AP, Editors, Audio, Library staff, talent, just about everyone who handles tapes. The biggest issue is actually finding someone to designate on staff who will be responsible for managing the system, training staff, and doing quality control.


6.2.2.2 What Levels Of Training In PBCore


Not clear what you are actually looking for in this answer...
I don't know.
A group of core individuals with thorough understanding of the complete system software, hardware, and architecture of PBCore would need to be trained and responsible for the roll-out. Depending upon the organization, all users of the PBCore would need access, training and understanding.
unknown
Exchange topology.
The manager of the archives would need to be highly trained but other staff would only need to know portions of it in depth
Not sure. For most users, it should be like training for a new word processing system. For traffic, and others managing content, higher levels of training.
Several hours.
Not too much.
no real training necessary. DL staff could learn this on their own.
basic
At a basic level, each contributor to the metadata process would require sufficient training to preclude their unintended mucking up of the process. After that it's all gravy.
Using the dictionary and understanding the definitions
Extensive for a few, medium for most
Minimal.
Understanding of the system and building a commitment to abiding by the standardized structured system.
Enough to allow each individual to complete normal tasks
Not sure yet.
Toolmaker would have to analyze the data and build a software tool to transform to normal cataloging and collection management data.
For our office staff, we'd need a pretty thorough understanding. For our primary users, they would need to know the basics. Other users could probably get by with minimal training.
Extensive
Not certain at this point;TBD
Simply reading the PBCore documentation was helpful - it is likely that this would be enough for most station staff. However, I am currently working with rss feeds and am preparing to learn about/work with other flavors of xml.
The catalogers would need the most training since they would be ones populating the fields. The records management and collections management groups would just need to have an understanding of the elements without any technical expertise.
Technical: Online, Engineering Simplified: On-Air Production & Graphics, Volunteers The volunteers are working on a big archival project called "SAM." I would like them to enter this data as it is archived. It would need to be checked by someone in Production to be certain that legal and technical elements are accurate. To make it easier I would start with people entering the data that they have and then researching additional content as time permits.
Overall sell for senior managers; department-specific instruction for program scheduler and librarians; overall concept for Interactive staff, and how it works with the rest of the building
Ingest operators need to understand the subtleties of metadata creation, end users such as researchers need training on the browse and database search applications.
1. Data input. 2. Operations data input and translation, also correction and timeline execution. 3. Monitoring, updating and crisis intervention. 4. IT - maintenance and customization.
TBD
the dir of production, producers and editors will be trained to tag the content as they create it with an eye on serving the needs of the broadcast and education clients who will be involved in retrieving the content
Organization would need at least two highly qualified and trained individuals as well as several other staff with content entry and knowledge skills.
Technical implementers would have to learn PBCore and our internal system. They would have to create a map.
Some medium-high level of explanation about how to use the elements in PBC will be necessary. The key to implementing this on an enterprise-wide basis is going to be to find the DAM software that will use PBC in such a way as to make it appear easy, seamless and necessary to generate and use metadata.


6.3 Most Valuable Form Of PBCore

Mean = 2.04, Standard Deviation = 0.81

Response Count Percent
(1) Application Profile in PDF 27 55.1%
(2) Website Utility Tool 32 65.3%
(3) Database or GUI template 31 63.3%
Other 14 0.0%

"Other" responses:


standardization of a way to carry the metadata with the assets when they move.
csv is lowest common denominator
automated conversion tool.
XML Schema
compiled help file
Artesia Teams
an xslt stylesheet might also be useful
A tool and database/template would be very useful. PDF would be ok, but why not give it to people in RTF as well so that they can cut/paste, etc. for training purposes.
We could use website (#1 ref. tool) & PDF for reference and template in XCEL and FileMaker Pro.
XML
Proxy browse application as part of the database GUI
XML schema so that we can apply an XSLT stylesheet to convert to another metadata format.
Built into the program record creation tools
Template adapted for the AM tool being used. PBCore without a tool to exploit it is of limited use, if any. If first users do not get benefits out of it, they won't use it in the long term. The tools and applications are very important.
web services
XML DTD


 

7.0 Additional Thoughts


see my comments on the confusing fields in terms of sending data rate versus encode rate
Metadata and its role in education will need to be further addressed.
I work as cataloging/metadata librarian in a university library. Though I am not familiar with some concepts in the public broadcasting area, I can understand most elements by reading the guidelines and descriptions, only with some confusion caused by slightly different usage in the library field. Using controlled vocabularies or encoding schemes for item-level digital object can be time-consuming and may need special training especially if using Library of Congress Subject Headings and Name Authorities.
In the time since we began this effort, RSS feeds have taken off and are quickly becoming a major mode of moving content. NPR, for example, is now implementing a beta version of an RSS feed. So far as I know, analysis of the RDF specification played no role in the PB Core deliberations. Because of that, I'd urge the PB Core group, if it hasn't already done so, to analyze what issues there may be in accessing PB assets via the RDF specification used by RSS.
Suggestion for additional element: expertise of person who entered the metadata (archivist? producer? intern?) and purpose for which metadata was entered (for production? for station library?). When looking for material, the quality of the metadata could be an important filter when searching.
Feel free to contact me with any questions: gagnew@rci.rutgers.edu
Need an additional "anything else" field to cover highlights that might not fit within the metadata structure described but might contain keywords that would help someone find the content.
Excellent work! This is very important to do for so many reasons.
An interesting survey. It seemed to require a substantial amount of its own metadata in order to provide the respondent enough information to proceed.
You have done a terrific job and I look forward to this being a success.
Elements marked as mandatory should make sense for all asset types. A method should be found to note elements that are mandatory for certain types of assets. It would be very helpful to have specific examples of assets described completely in PB Core; along with a formal XML schema.
Great project, really needed.
PBCore implementation timetable?
Good luck and best wishes.
Standardization is always a good idea. I hope you plan to make this a public accessible item. That way, anyone working with PBS stations will have an opportunity to be better informed and better able to help the stations.
Sorry if my expertise and experience only seemed to fit within half of your survey. It was well planned and executed. Good luck.
This survey was filled out from an engineering perspective so some of the questions elicited minimal response. However, even from the engineering perspective, it is recognized as vital that public broadcasting fully and completely implement a metadata standard in the very near future.
Simply having a dialog about exchanging information is healthy. Some simple practical applications/examples might be a good next step. Thank you.
Excellent job. It is obvious that much work has gone into this project. Very excited and very interested in using this asap!
The PBCore is a significant step forward for the professional television production and distribution community. The PBMI has done us all a great service in creating this very thoughtful set of 58 or so elements. The PBCore will become the lingua franca by which Public Broadcasters can turn their tape liabilities into digital assets that can be easily located by all end users. I hope that the PBCore metadata will soon be linked to files located on shared network storage systems rather than linked to physical tapes on shelves. I'm looking forward to seeing a PBS MXF Application Specification that plugs PBCore metadata into MXF structural metadata. With PBCore the Public Broadcasters have moved to the cutting edge of television production and distribution technology and are one of the few organizations to understand the commercial potential of making their assets accessible with a common metadata scheme. Congratulations and thanks are due to you folks! Kudos!
Good luck.
Please note that this response has been compiled by both Morgan Cundiff and Rebecca Guenther, Network Development and MARC Standards Office, Library of Congress.
Most of my entries were in section 4 almost all of them rated a '5' in importance. Rating the importance of mostly mandatory elements didn't appear to be useful.
I think the key to making this application successful is to demonstrate the premise that facilitating greater access to content will increase the revenue that can be derived from that content. Stored archives, like dark fiber, generate no income.
I think it's a very worthwhile project. I hope that the dictionary will be dynamic and updated per user needs. I think it would be incredibly valuable to develop some common tools (or templates of tools) for tasks such as reading/writing to/from PBCore. It would also be helpful (perhaps as part of a separate initiative) to define communication protocols between different systems that are PBCore compatible (e.g., XML-based queries).
I think PBS has a long way to go. Many don't know what PB Core is or why they would want to use it. Most don't have the staff in place to get it started or to lead the effort. That staff would have a huge job to do to train and implement the metadata gathering. There's no funding to implement this kind of a project, to buy the software necessary, to do all the data entry or correcting the data entry to get clean metadata. This is not unlike putting in new transmitters for digital television, but I don't see the funding sources for that like I do for the transmitters. Nor do I see the staff support or commitment. People understand what transmitters do, people still don't really understand what media asset managers, DAM systems and metadata do. Good luck.



Generated: 3/1/2004 2:07:00 PM