Selecting the Perfect DAM for Your Organization
22 August 2024
Choosing the right Digital Asset Management (DAM) system can feel overwhelming, especially with the myriad of options and the complexities involved in the selection process. If you’re like many organizations, you may find yourself questioning your choices, feeling uncertain about your requirements, or unsure of how to navigate the procurement landscape. This guide dives deep into the essential steps and considerations for selecting the ideal DAM system for your organization.
Understanding Your Needs
The first step in selecting a DAM solution is understanding your organization’s specific needs. This process involves more than just listing features; it requires a comprehensive assessment of your current assets, workflows, and user requirements.
- Stakeholder Engagement: Engage with different teams to gather insights about their needs and pain points. This ensures that the selected system will cater to the diverse requirements of all users.
- Discovery Process: Conduct interviews and surveys to identify what users expect from the DAM system. This step is crucial as it uncovers needs that might not be immediately obvious.
- Documentation: Document all findings in a clear manner. This will serve as a reference throughout the selection process.
The Importance of a Structured RFP Process
A well-structured Request for Proposal (RFP) is vital in the DAM selection process. It not only communicates your needs to potential vendors but also sets the tone for how they will respond.
- Clarity in Requirements: Clearly outline your requirements using user stories or scenarios (see the sketch after this list). This helps vendors understand the context behind your needs.
- Prioritization: Prioritize your requirements into mandatory, preferred, and nice-to-have categories. This helps vendors focus on what’s most important to your organization.
- Engagement: Allow stakeholders to participate in the RFP process. Their involvement increases the likelihood of buy-in and adoption later on.
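To make the user-story and prioritization ideas above concrete, here is a minimal sketch of how prioritized requirements might be captured in code. The field names, IDs, and example stories are hypothetical illustrations, not a prescribed schema:

```python
# Hypothetical sketch: RFP requirements captured as prioritized user stories.
# Field names, IDs, and stories are illustrative only, not a prescribed schema.
requirements = [
    {
        "id": "REQ-001",
        "persona": "Creative",
        "need": "upload 5 GB video files in under 30 seconds",
        "goal": "campaign production stays on schedule",
        "priority": "mandatory",  # mandatory | preferred | nice-to-have
    },
    {
        "id": "REQ-002",
        "persona": "DAM Manager",
        "need": "be notified when new assets are uploaded",
        "goal": "metadata can be reviewed promptly",
        "priority": "preferred",
    },
]

# Render each requirement in the classic user-story form:
# "As a <persona>, I need to <need> so that <goal>."
for req in requirements:
    print(f'{req["id"]} ({req["priority"]}): As a {req["persona"]}, '
          f'I need to {req["need"]} so that {req["goal"]}.')
```

Writing requirements this way keeps the persona, the need, and the goal attached to each priority, so vendors see not just what is being asked for but why.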
Common Pitfalls in DAM Selection
Many organizations fall into common traps when selecting a DAM system. Avoiding these pitfalls can save you time and money.
- Ignoring User Needs: Skipping the discovery process can lead to selecting a system that does not meet the actual needs of users.
- Over-Reliance on Recommendations: Choosing a system based solely on a colleague’s recommendation can be misleading. What works for one organization may not work for another.
- Underestimating Costs: Focusing only on the initial purchase price without considering implementation, training, and ongoing costs can lead to budget overruns.
Evaluating Vendor Responses
Once you’ve sent out your RFP, the next step is to evaluate the responses from vendors. This involves more than just looking at prices; it requires a thorough analysis of how each vendor meets your specific needs.
- Scoring System: Develop a scoring system to compare vendor responses based on how well they meet your requirements (see the sketch after this list). This allows for an apples-to-apples comparison.
- Demos: Schedule vendor demos focused on your specific use cases. This helps you see how the system performs in real-world scenarios relevant to your organization.
- Qualitative Feedback: Collect feedback from stakeholders who attend the demos to gauge their impressions and preferences.
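One minimal way to implement such a scoring system, sketched below, is to weight each vendor response by requirement priority and by how the vendor says the need is met (out of the box, configuration, customization, or not supported). The weight values here are illustrative assumptions, not a standard:

```python
# Minimal sketch of a priority-weighted vendor scoring system.
# Weight values are illustrative assumptions, not a standard.
PRIORITY_WEIGHT = {"mandatory": 3, "preferred": 2, "nice-to-have": 1}
RESPONSE_SCORE = {
    "out-of-the-box": 3,
    "configuration": 2,
    "customization": 1,
    "not-supported": 0,
}

def score_vendor(responses):
    """Sum weighted scores over (priority, response_type) pairs for one vendor."""
    return sum(
        PRIORITY_WEIGHT[priority] * RESPONSE_SCORE[response]
        for priority, response in responses
    )

# Example: compare two hypothetical vendors on the same three requirements.
vendor_a = [
    ("mandatory", "out-of-the-box"),
    ("preferred", "configuration"),
    ("nice-to-have", "customization"),
]
vendor_b = [
    ("mandatory", "configuration"),
    ("preferred", "out-of-the-box"),
    ("nice-to-have", "not-supported"),
]
print("Vendor A:", score_vendor(vendor_a))  # 3*3 + 2*2 + 1*1 = 14
print("Vendor B:", score_vendor(vendor_b))  # 3*2 + 2*3 + 1*0 = 12
```

Because mandatory requirements and lower-effort responses carry the most weight, an out-of-the-box answer to a mandatory requirement counts for the most, which mirrors the priority-based scoring described above.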
Understanding Customization and Configuration
Many organizations grapple with the concepts of customization and configuration during the DAM selection process. Understanding the difference is crucial.
- Configuration: This involves setting up the system using available features without altering the underlying code. It’s generally easier and cheaper to implement.
- Customization: This entails modifying the software to meet specific needs, which can be more complex and costly. Be sure to inquire about the implications of customization during vendor discussions.
Managing the Implementation Timeline
Timing is everything in the DAM selection process. Many organizations underestimate how long it takes to select and implement a new system.
- Anticipate Delays: Factor in time for vendor responses, stakeholder feedback, and potential procurement delays.
- Implementation Timeline: A typical DAM selection process can take several months, so start early to avoid rushed decisions.
- Post-Selection Support: Ensure that you have a plan for training and onboarding users once the system is selected.
Conclusion: Making the Right Choice
Selecting the right DAM system is a significant decision that can impact your organization for years to come. By following a structured process, engaging stakeholders, and carefully evaluating options, you can make an informed choice that meets your organization’s needs. Remember, the goal is not just to choose a system, but to select a solution that enhances your workflows and improves the management of your digital assets.
For more resources on DAM selection, including checklists and guides, visit AVP’s Free Resources.
Transcript
Chris Lacinak: 00:00
Amy Rudersdorf, welcome to the DAM Right podcast.
Amy Rudersdorf: 02:15
Yeah, thanks for the opportunity.
Chris Lacinak: 02:17
I’m really excited to have you here. So for folks that don’t know, you are the Director of Consulting Operations at AVP. And I’ve asked you to come on today because you’ve just written a piece called Creating a Successful DAM RFP, and you’ve included with it a bunch of really useful handouts. So I wanted to dive into that and have our listeners better understand what the process is, what the value of it is, why it’s important, what happens if you don’t do it, and so on. But I’d love to start, if you could tell us: what is the expertise, experience, and background that you bring to this topic?
Amy Rudersdorf: 02:54
Sure. So before I came to AVP, I was working in government and academic institutions where we had to go through a procurement process to buy large technologies. And so I’ve seen this process from the client side. I know what the challenges are. I know that this can be a really time-consuming process and really challenging if you don’t know how to do it. And then when I came to AVP, I had the opportunity to help guide clients through this process. And over the years, we’ve really refined what I think is a great workflow for ensuring that our clients get the right technology that they need.
Chris Lacinak: 03:33
And you’ve been doing this for years, as you say. So I’m curious, why now? What inspired you to write this piece after refining this for so many years? Why is now a good time to do it?
Amy Rudersdorf: 03:44
I think the main… Well, there are a couple of reasons, but one of them is that in the last couple of years there’s just been a proliferation of systems. There are hundreds of systems out there that we call DAM or MAM or PIM or PAM or digital preservation. There’s all kinds of systems. From a pricing standpoint, DAMs range from as low as $100 a month to six figures annually. And the market is really catering to a diverse set of needs, from B2B to cultural heritage to Martech, and then your general-purpose asset management systems. And I’ve seen organizations recognize that it’s really important to do it right. They want to make sure that when they acquire technology, it’s something that’s going to work for their institution for the long term. But they really struggle with how to do it. So what I hope is that through this piece I can help individuals and organizations with this step-by-step guide to successfully procure their own technology without us, and maybe, in addition, see the value of working with an organization like AVP.
Chris Lacinak: 05:00
And how would you describe who this piece and these checklists are for?
Amy Rudersdorf: 05:06
Well, I would say specifically, they’re for organizations looking to procure a DAM. And this could be your first DAM or moving from a DAM to an enterprise DAM technology or MAM. So that’s the specific audience. But really, if someone’s looking to procure a technology, the process is going to be very similar. And so many of these checklists will be useful to those folks as well.
Chris Lacinak: 05:38
Yeah, I think it is important. You’ve touched a couple of times on the fact that the piece calls out DAM specifically. As you mentioned, and it’s worth reiterating, we’ve talked about it here on the podcast before: we use a very broad interpretation of DAM to include things like MAM, PIM, PAM, digital preservation, and so on. So it’s good to know that for folks looking for any of those technologies in the broader category of DAM, this is useful. For someone out there considering procuring a DAM and thinking, you know, we don’t need an RFP process, or we don’t need to use this complex, time-consuming process, is it still useful for them? Or are there things that they can grab out of this piece, even if they don’t want to go through the full process?
Amy Rudersdorf: 06:32
Well, my initial response to this question is, you should be considering the RFP process. And if not a full RFP process, at least an RFI, which is a request for information as opposed to a request for proposals. The RFI is a much more lightweight approach. But in either case, I feel like this document, this set of checklists is useful for anyone thinking about getting a DAM. Because the checklists step you through not just how to write an RFP, but also how to gather the information you need to communicate to vendors. So if you look at checklist number two, for instance, it really focuses on discovery and how to undertake the stakeholder engagement process, which you’ll want to do whether or not you’re writing an RFP. You really need to understand your user needs before you set out to identify systems that you might want to procure.
Chris Lacinak: 07:39
Yeah, that’s a good point. And maybe it’s worth saying, for smaller organizations that maybe aren’t required to use an RFP process, that what you’ve put down here, when I look at it, is kind of the essential elements of an RFP, right? You might give this to an organization that then wraps a bunch of bureaucratic contractual language around it and things like that. But this is the essence of an organization-centered or user-centered approach to writing an RFP and finding a DAM that fits. So let’s talk about: what are the pitfalls that people run into when they’re procuring a DAM system?
Amy Rudersdorf: 08:25
So I’ll start by saying, I think it’s really important that when you’re procuring a technology, you talk to your colleagues in the field and see what they’re using. But as much as that’s important, I will say that’s also a major pitfall if it’s your only approach. Because you may have a colleague who uses a system they love, it does everything they need it to do, and they say to you, “Yeah, you should definitely buy this system.” But the reality is that that system works for them in their context, and your context and your stakeholders are very different. And so that assumption is, I think, flawed. You have to go through a stakeholder engagement and discovery process where you’re talking to your users and finding out what they need, what their requirements are, in order to communicate to vendors what it is that you need that system to do for you, as opposed to what it’s doing for your colleagues. I’ll say, Kara Van Malssen posted a LinkedIn post a few weeks ago, and it was really useful: the eight worst ways to choose a DAM, based on real-world examples. And one of those is to choose the system that your colleague recommends. And as she says, your organization’s use cases are totally different from theirs. I think there’s also the pitfall of: you went to a conference and met a salesperson, they were really nice, the DAM looked great, it did everything that they said it could do. But when you’re at a conference, that salesperson is on their best behavior, and they’ve got a slick presentation to show you. So taking a multifaceted approach is going to be far more effective than just saying, my colleague likes it, or I saw it at a conference. You combine all of those things together as part of your research to find the system that works for you.
Chris Lacinak: 10:44
Yeah. And that makes me think of requirements and usage scenarios, which I want to dive into. But before we go there, I want to just ask a similar question, but with a different slant, which is, what’s the risk of not getting this right, of selecting the wrong DAM?
Amy Rudersdorf: 11:02
Yeah, so the risk is huge. DAMs are not cheap; I would say that’s the first thing. You do not want to purchase or sign a contract, which is typically multi-year, with a vendor for a system that doesn’t work for you. You will be miserable. And I think more importantly, your users will be miserable. And this will cause work stoppage, potentially loss of assets, and it could be a financial loss to the organization. Not doing this right will have repercussions all the way down the line for the organization, and you’ll be hurting for years to come.
Chris Lacinak: 12:00
Yeah, I think one thing we’ve seen is an organization, maybe they go out and they buy a cheap DAM, and maybe they think, “Well, you know what? It’s cheap. If it doesn’t work, we only spent, what, $15,000 or $20,000,” or whatever the case may be. Not realizing that that might be the cheapest part, right? Because you’ve got to get organizational buy-in, you’ve got to train people, you’ve got to onboard them. And then it goes wrong. And we’ve seen this. We’ve come in on the heels of this, where there’s a loss of trust. There’s poor morale. People don’t believe that it’s going to go right this time. So yeah, there’s a lot to lose there, and it’s more than just the cost of the DAM system, as you point out. So let’s jump back to requirements and usage scenarios. You talk quite a bit about the importance of getting requirements and usage scenarios documented and getting them right. Could you just talk a bit about those two things and how they relate to each other, and then we’ll dive in and I’ll ask you for some examples of each.
Amy Rudersdorf: 13:03
Okay. Well, this is where I’ll probably start to nerd out a little bit. But you’re going to, as the centerpiece of your RFP, communicate your needs. And when I say your, I mean your organizational needs for a new system. So you will be representing the needs of your stakeholders, if you’re doing it right. Their challenges or pain points, their wishlist, all of that needs to be communicated to vendors in a clear and concise manner that they can interpret appropriately and provide answers that are meaningful to you so that you can then analyze the responses in such a way that you can understand whether that system will work for your organization. So structuring your requirements and your usage scenarios, we call them usage scenarios at AVP, lots of people call them use cases, but structuring those correctly is going to be the part that gets you the responses you need in order to make a data-driven decision.
Chris Lacinak: 14:20
And I’ve heard you talk about this before. To that point, we have seen RFPs in which the question posed to the vendor is, can you do this thing? And the vendor simply has to check a yes or no box. From what I’ve seen of your work, you really get to how do you do this thing, so that there’s much more information around it. So it sounds like getting those right and structuring them in the right way is going to give you not a yes or no answer, which is often misleading and unhelpful, but a much more nuanced answer.
Amy Rudersdorf: 14:56
I think the other part is, you want to help the vendor understand. You want to work with them to get the best outcome from this process. And so giving them as much context as you can is important too. And that’s why we structure our requirements the way we do so that the vendor sees what the need is, but also understands why we’re asking for it.
Chris Lacinak: 15:20
That’s a really good point. That is something that you hear vendors complain about with RFPs that they don’t provide enough information. And I want to ask you later about what ruffles the feathers of vendors, but let’s keep on the requirements and usage scenarios. So can I ask, you said most people call them use cases, AVP calls them usage scenarios. Why is that?
Amy Rudersdorf: 15:42
Well, a use case is just a standalone, step-by-step narrative of what the needs are for a system. So you’re telling a story about a user going through a process or a series of processes. A usage scenario offers context beyond that. So you provide some background information: why is this usage scenario important? You’re explaining that we’re asking you to respond to this because this is our problem. And so, again, it’s that context, so the vendor understands why you’re asking for something or why you need something. It just makes their answers better. They’re more informed. I think they feel more confident in their responses. And so it’s just a little bit more context around the use case than a standalone use case.
Chris Lacinak: 16:38
What does a well-crafted requirement look like?
Amy Rudersdorf: 16:44
So at AVP, we use the user story structure, which comes out of the agile development process. It’s basically an informal, general explanation of a software feature that’s written from the perspective of an end user or a customer. And we call them personas. So your user story is a three-part structure: as a persona, or the person that needs something to happen, I need to do something, so that something is achieved. A standard requirement is just a statement of a need. But here you can see there’s a real person. So this is user-driven content. There’s a real person who has a real need because something really needs to be achieved. And I think that structure is really powerful. I just said “really” a lot of times. But there are a lot of examples for how to build user stories on the web. And again, it’s about giving that vendor as much context as possible. You have to think about the fact that you’re handing these vendors 20-page documents that they have to sift through to try to understand what your needs are and how to match them to their system. And so any background you can give them, any context you can give them, is just going to be a win for everyone. It really will impact whether you’re seeing responses from the vendors that align with your needs or not. I think it provides clarity in the process that the standard requirement structure doesn’t offer.
Chris Lacinak: 18:31
Right. So I’ll go out on a limb here and venture a guess that a bad requirements list might be a list of bullet points, something like integrations, video, images, things like search, things like that. Not a lot of context, not a lot of useful information.
Amy Rudersdorf: 18:49
The other thing to keep in mind is these have to be actionable. So you can’t say “fast upload.” Every vendor is going to say “yeah, our upload is super fast.” But you could say, “As a creative, I need to upload five gigabyte video files in under 30 seconds” or something like that. You want them to be something that a vendor can respond to so that you get a useful response.
Chris Lacinak: 19:24
How might you explain what a well-crafted usage scenario looks like?
Amy Rudersdorf: 19:30
Sure. So usage scenarios are, as I said earlier, step-by-step narratives. It’s a story about a user moving through the system. They flow in a logical order. They cover all of the relevant steps, all the decision-making points. They are user-centric, so the scenario should define who the user is. We always use real users in our usage scenarios. So we’ll have identified some of the major personas from the client’s organization. That might be, like I said, a creative. It might be the DAM Manager. It’s real people who work in their organization. We don’t name them by name, but we name them by their title, so that this is truly representing the users. So it’s the story of the user performing tasks. And every usage scenario should have clear objectives, outlining what the user is trying to achieve. And it’s their specific tasks; they’re solving a problem. So it might be, for example (this isn’t exactly how you would write it), a story about a marketing creative who needs to upload assets in batch and needs to ensure that metadata is assigned to those assets automatically every time they’re uploaded, and the DAM Manager is pinged when those new uploads are in the system so that they can review them. So that might be a story that you would tell in a usage scenario. It’s realistic. It’s based on real people. And it represents real challenges that users face.
Chris Lacinak: 21:26
Yeah, that makes a lot of sense. There’s a lot of things that we see in marketing and in communication around the power of stories. I can imagine that that is a more compelling and meaningful way to communicate to vendors. It makes me wonder, in your experience in working with organizations, you craft this story and someone listening might think that’s information that’s at the ready that just simply needs to be put into story form. But I’m curious, you put a lot of emphasis on discovery and talking to different stakeholders. And I’m just curious, how useful is this process to people within an organization coming up with these stories? Are they at the ready? Or is it through the discovery process that they’re able to synthesize and really understand to be able to put it into that form?
Amy Rudersdorf: 22:21
Yeah. I would say, if you take nothing away from this discussion, take the fact that discovery is absolutely necessary as part of your technology procurement process. Discovery is the process of interviewing your users and stakeholders to understand what their needs are, what their current pain points are, and what they wish the system could do. That’s it in a nutshell. And I have never had a core team or the person leading the project on the client side say, “Oh, I already knew all that.” Time and time again, their eyes are opened to new challenges, new needs from these users. So it’s a really powerful process. This is taking it a little off topic, but just to ensure that you have buy-in from your stakeholders, bringing them in at the beginning of the process is key. So it’s a benefit for you in that you learn what they need, you learn how they use systems today and what they need the system to do in the future, but you’ve also got them engaged in the process as well. They see that they’re important and that you’re making decisions on their behalf and thinking of them as the system is being procured. And all of that together, I think, is really powerful and can only make for a better procurement process.
Chris Lacinak: 23:58
Yeah. Wow, it really does point out the value of the process. Earlier I was asking about the pitfalls, or about someone not wanting to go through the RFP process. But let’s say somebody did just throw together an RFP without going through the process; it would be a very different RFP than one written after going through it. And it sounds like the process also solidifies things that don’t manifest in the RFP itself. They actualize through greater adoption, more executive buy-in, and other ways that you wouldn’t have if you didn’t go through this.
Amy Rudersdorf: 24:35
Absolutely.
Chris Lacinak: 24:36
Let me ask about the buy-in side. Well, not so much the buy-in; I’m thinking more about adoption here. One of the challenges has to be: you talk to, let’s say, 10 different people, and each person has many requirements they want to list. Maybe one of those is in creative ops, maybe one is in marketing, maybe one is in a more administrative role. Who knows? They are different stakeholders with different focus points, and they all give you lots of requirements. On one hand, I have to think it’s important for those to be represented, so someone doesn’t look at it and say, “Well, it doesn’t have any of my stuff in there. This system’s not right for me.” On the other hand, it’s got to be such a huge load. It makes me wonder, how do you get to a prioritization that represents everyone but also makes sure that the most important stuff is up front?
Amy Rudersdorf: 25:31
Right. Well, it’s definitely a team process. The first thing I’ll say, just to provide a little context: when we do these requirements, these user stories, in the past we would write 150 requirements. And we try really hard not to do that. It’s really hard on the vendors to ask them to respond to 150 requirements. So we really try to synthesize what the users are telling us and home in on the key needs. Now, that doesn’t mean that we disregard different users’ needs. But in some cases, their need is something that every DAM can meet, so there’s no need to include it in the requirements list. You want to be able to search? They all can search, so that should be fine. But once you have your requirements list, which I think in a healthy RFP is probably in the 50-or-so range, then it’s up to the organization to prioritize those requirements. So as a company, we will write those user stories on behalf of the client. But then we give them that list and say, now prioritize these. This is your part of the process. And typically, this is the core team’s job. So when we work with a client, there are usually two to four people who are part of the client core team. And they are either sitting in on the discovery interviews or reading transcripts, or just really engaged in the process, so they understand what these priorities look like. So by the time they get that list, they should be able to sit down as a group and identify the priorities. And we prioritize as mandatory, preferred, and nice-to-have. So if there’s a requirement that someone is noisy about really wanting to have in the list, we can always just call it nice-to-have. And it’s there, but then it’s not mandatory that the system is able to do it.
Chris Lacinak: 27:59
So it sounds like that’s done through a workshopping or group process where folks are able to discuss those. And that innately means being able to be heard and have the conversation. Even if it’s not called mandatory, you still feel like you got to have the conversation and it’s represented in some way.
Amy Rudersdorf: 28:23
Yeah. And I also wanted to say that gathering these requirements from the users is obviously really important, as I’ve said. But then engaging them throughout this process is also really valuable. So not just asking them at the beginning what they need, but actually letting them come to demos and things like that is important as well. It’s going to make implementation and buy-in much more successful.
Chris Lacinak: 28:51
You’ve got these requirements. You’ve got these usage scenarios. You create a bunch of things to hand over to a vendor. I guess I’m wondering, how do you manage apples to apples comparisons? Because there’s going to be such a wide variety in how they respond to things. And how do you manage comparing pricing to make sure that there are no surprises down the road? How do you manage those things?
Amy Rudersdorf: 29:16
Well, I’m going to set the pricing question aside for a second. The way that we do it at AVP is, I think, a methodology that is unique to the RFP process: we’ve created a qualitative methodology. We create the requirements, and the client prioritizes them. The vendor responds to them in a certain way, and then we’re actually able to score those responses. And it’s based on priority. So if something’s mandatory, it’s going to get a higher score. The vendor may say it works out of the box; they’re going to get a higher score. If they say it has to be customized to do that, they’re going to get a lower score. So we create this scoring structure that allows us to hand the client data that they can look at. They’re actually seeing side-by-side scores for all of the respondents to the RFP. Pricing is really tricky. It is so complicated. Every vendor prices their system completely differently. And so we really have to spend a lot of time digging out the details to understand where the pricing is coming from and what the year-to-year pricing looks like. And then we do actually provide a side-by-side analysis of that as well. It’s really tricky to do, but in the end, the client gets data that they can base their decisions on. And then you asked a question about avoiding surprises when it comes to pricing. I think this is the hardest thing to talk about when you’re buying technology. And this is probably the case for lots of different types of technology, not just the DAM, MAM, PIM, PAM world. But this information is not widely available on the web. You can’t go to a vendor’s website and see how much an annual subscription or license is going to cost you for the year. And the reason for that is that there are so many dependencies around their pricing, including how much storage you need and what that storage growth looks like over time; how many users and what type, since some vendors base their pricing on seats, that is, the number of users you have and the different types of users in their different categories; and SLA (service level agreement) levels. So if you want the gold standard, it’s going to cost this. The costs are going to be unique to your situation. Just to toot our own horn, we know this market really well. So if somebody asks how much an annual license to vendor X costs, I can tell them. But that just comes with years of experience. Otherwise, it’s a wild west out there as far as pricing goes.
Chris Lacinak: 32:48
You know that I know that you want to get your hands on Amy’s how to guide and handouts for DAM selection. Come closer and I’ll tell you where to find it. Closer. I don’t want anyone else to hear this. Okay. It’s weareavp.com/creating-a-successful-dam-rfp. That’s where the guide is. Here’s where you get the handouts. It’s weareavp.com/free-resources. Okay. Now delete those URLs once you download them. I don’t want that getting out to just anyone. All right. Talk to you later. Bye.
Two thoughts here. One is, you talked about the kind of spreadsheet analysis and scoring, but I know you dive deeper than that. Part of your comparative analysis process is demos as well. And that makes me think of a couple of things. One is using those as a tool in the apples to apples comparison. But two, I imagine you have this list of requirements and usage scenarios, and some solutions can probably meet those out of the box, while some probably need custom development, or some sort of workflow development, in order to meet them. So could you just talk a little bit about the role of demos and custom configurations related to pricing?
Amy Rudersdorf: 34:20
Yeah. So a demo is a general term that can mean many different things in this realm. Vendors love to give demos, and they would love to spend an hour and a half with you telling you how great their system is. That’s their job. Their system may be great, and that’s okay, but that’s not how you base a purchasing decision. You go see those sort of bells-and-whistles demos to get a sense of what the system looks like. What we do is, after the RFP comes back, and we’re sort of playing with different ways to do that now, but the way that we’ve done it typically is that after the RFP comes back, let’s say you get six responses, you choose your top three, and then you spend two hours in a demo with the vendor. The vendor does not get to make the agenda. We do. And in that demonstration, they’re going to respond to some of the usage scenarios that we wrote for the RFP. So: for 15 minutes, talk to us about that uploading usage scenario I mentioned earlier. And in order to do that, here are assets from the organization and metadata from the organization that you must use in your examples. So now you’re seeing side-by-side demonstrations of how the systems work with your data. And I think that’s really powerful, because now you’re going to start to see the system maybe move a little slower with that five gigabyte movie that you have, and it’s not quite as slick as the assets they typically use in their demos. So you get to see a real sense of how the system works in that way. And as part of those demonstrations, we always have the clients fill out feedback forms. So again, we’re going to get some qualitative responses, like: what did you like? What other questions do you have? But we’re also going to get quantitative responses: score this vendor on the usage scenario that you saw, from one to five. Did they do what they said the system could do? And so again, we’re trying to set up opportunities for that apples to apples comparison.

Chris Lacinak:
And how about the kind of custom configuration aspects? I guess this really goes back to both pricing and timeline, right? How do you manage that through the RFP process?

Amy Rudersdorf:
I think that’s probably one of the toughest things. The vendors differ on how involved they want to be in customization and configuration. Some systems require lots of configuration, but not so much customization. And maybe we should define those terms. Configuration means, you know, pressing some buttons behind the scenes to make something happen, maybe turning on a feature, turning off a feature. Customization means writing some code to make the system do what you need it to do. So configuration should be cheaper and easier than customization. And from a cost and timeline perspective, configuration is less of a challenge, because typically the vendor can do that, and that’s part of the offering. Customization is different. If something is custom, we ask them to tell us how much time it’s going to take and how much it’s going to cost to do it, so that’s in their proposal as well. Getting to that point can be challenging.
You really have to be very specific and clear about what you need. So an example would be integration, which is something that everyone asks for in an RFP. DAMs aren’t systems that just stand alone in your organization. They integrate with collections management systems or marketing technologies. And so understanding, for instance, who is responsible for building the integration and maintaining the integration, and knowing that upfront, is super important. If a vendor says, yeah, we can do that, make sure they explain how that happens and what the real cost is going to be for you. So I guess I’d just say that you’re your best advocate, and if you have a question, ask it, and ask them to document it.
Chris Lacinak: 39:44
Speaking of vendors, what have you heard as responses from vendors to the RFPs that you’re proposing people do in this process? Do they love them? Do they hate them? How have they been received by vendors, generally speaking?
Amy Rudersdorf: 40:01
Well, we have actually reached out to vendors and asked them this question, and I have heard on a number of occasions that they really like the RFPs that we put together for them, because they’re so clear and they understand what we’re asking and why we’re asking it. And going back to the point I made earlier about not having 150 requirements, the vendors appreciate that as well. It’s a lot of work for them to respond to these, and we don’t want this to be onerous or overly complicated for them. So we’ve really tried to create RFPs that serve the client foremost, but also make the process as pain-free as possible for the vendors. And we’ve gotten feedback from a number of them that they like the way that we present the data.
Chris Lacinak: 41:04
Having been someone that’s been on the responding side of RFPs, I will say, one of the things you worry about when you’re in that position is the customer being able to make an apples to apples comparison, making sure that the appropriate context is there, making sure that they fully understand, and that you have all the right information to be able to provide the right responses. So I guess everybody has a vested interest in being clear and transparent, right? That’s actually helpful to everybody. And I imagine that also helps people opt out. Maybe a vendor looks at that RFP and says, this really is not our strong spot; we should not spend the time on this. So that’s probably helpful to them, to be able to filter out what is and isn’t in their wheelhouse.
Amy Rudersdorf: 41:50
Yeah, absolutely. I think it’s important to recognize that we don’t work in a vacuum. We are vendor-neutral as a company, but we work very closely with the sales teams at lots of different vendor companies, and we want to be partners with them as well. We want them to be successful, whoever they are, and whatever we can do to make sure that they’re able to show off their system as well and as appropriately as possible, that’s a win for everyone. And so I do really keep that perspective in mind when I’m putting these documents together.
Chris Lacinak: 42:40
So what are some of the things that you’ve heard vendors complain about with regard to RFPs? Not ours, of course, other people’s RFPs. Like, what are the things that would turn a vendor off or make them not want to respond or make them feel poorly about an RFP?
Amy Rudersdorf: 42:57
I think I’ve mentioned this now a couple of times: one of the common deterrents is just the overwhelming number of requirements. And when they’re not written as user stories, they can be really confusing, hard to interpret, and probably pretty frustrating to try to respond to. The other challenge, and I talk to clients all the time about this, is that you can’t make every requirement mandatory, because there is not going to be a system out there that can do absolutely everything you want as an out-of-the-box, turnkey solution. It’s unreasonable to ask for that, in my opinion. So making sure that you’re really prioritizing those requirements helps vendors see that you’ve really thought about this and that you understand what you’re asking for and what your needs really are. So maybe that isn’t a deterrent so much as a positive, but flipped around, every requirement being mandatory is probably really frustrating. I would say, too, there’s sort of the flip side of this: there’s excessively detailed or overly complex, and then there’s not enough information to provide a useful response. So finding that sweet spot, where you’re giving them the context, the background, the information they need, but not overwhelming them, is important. And, you know, we’ve all seen a poorly structured RFP, something that lacks clear vision, or is ambiguous or vague, or is filled with grammatical errors and spelling mistakes. It just makes everybody look bad. If I were responding to that, I would question the organization and their dedication to this process.
Chris Lacinak: 45:04
We should point out that you’re a hardcore grammarian.
Amy Rudersdorf: 45:08
I am.
Chris Lacinak: 45:09
So let’s talk about timeline. You talked about who the guide and checklists are for, and you said it’s for people who are maybe getting their first DAM, or maybe people who are getting their second or third DAM; they’ve already got one. When is the right time to start the RFP process?

Amy Rudersdorf:
People are surprised at how long this process takes. At AVP, it is a 20-week process, and so that’s five months. And that is keeping all the milestones tight, moving the process along quickly. That’s just how long it takes. So thinking about things like, oh, my contract is coming up in a year: working backwards from that, you need a solid six months or more for implementation, so you should be working on that RFP now. Expect this process, if you do it right, to take a solid four or five months. And then you also have to build in buffer for your procurement office, and you’ve got your InfoSec reviews. All of these things can take even longer. So yeah, as soon as you realize that you’re going to get a new system, start the work on that RFP.
Chris Lacinak: 46:40
And we should say that the five months you mentioned includes a pause while you wait for vendors to respond to the RFP as well, right? And how long is that period typically?
Amy Rudersdorf: 46:54
I usually say a month; I think it’s sort of inhumane to make them respond in less than a month. These are complicated. They want to make sure they’re getting it right, and we want them to get it right. So a month is a solid amount of time. We also build in time where they can ask questions, and they can’t really start working on it until they get the answers back. So a month, I think, is the sweet spot there.
Chris Lacinak: 47:28
You’re right. I feel like we frequently run into people who hear five months and are a little put off by how long that sounds. And then it is extraordinarily common that contracting and security take significantly longer than that to get through. So that is something I think people often underestimate, especially if they’re not used to working through procurement. That’s something people really need to consider as part of their timeline.
Amy Rudersdorf: 47:57
This is real; it happened yesterday. I have a client who was very adamant that we shorten that time frame by a month. So we had compacted all of our work into four months, and they came back to me and said, you know, we’re going to need more time, so let’s go with your original timeline. And then he said, “It’s like you know what you’re doing.”
Chris Lacinak: 48:24
Yeah, yeah, yeah. That happens sometimes. There has been this concept lately, though, that I’ve heard repeated consistently: a fast-track selection process, something that’s significantly faster. And I wonder, do you have thoughts about that? Is that a realistic thing? Does it sacrifice too much? Is it possible as long as you’re willing to accept X, Y, and Z risks? What’s the fastest you might be able to do a selection process if someone really pushed you?
Amy Rudersdorf: 49:00
That’s a tough question. I mean, there are people who talk about this fast-track process. I think you’re putting yourself at risk if you don’t at least spend the time you need to with your end users and stakeholders. Whatever else you do around this process to make it go faster, whether it’s skipping the RFP and just inviting vendors to do their demos, I still think spending the time with your stakeholders is going to be really important, and drafting their requirements in some way that communicates those to the vendors, so that when you have them demo, they demo with your user needs in mind. You know, I think you could do that. I’m not entirely sold on it. I think our process works really well. But if someone came to us and said, “We want to do it a different way,” I think we’d be willing to discuss other methods.
Chris Lacinak: 50:08
It makes me think, you know: say I’m thinking of a shape, Amy, and I want you to draw it, and I’m going to give you a number of dots to draw the shape that I’m thinking of, right? If I give you three dots, the chances of you getting that shape right are pretty slim. If I give you 50 dots, you’re more likely to get the shape that I’m thinking of, right? You can draw the line and connect the dots. And it seems that if you fast-track it, you’re going to miss some dots and you’re less likely to get it right. And as we talked about earlier, now that we’re talking about it, it’s weighing the risk and reward here. Okay, let’s say that the fastest you could do this, with some level of certainty that everybody was willing to accept, was three months, but you increase your chance of getting it wrong by 30%. We talked earlier about what the risks of getting it wrong are. It just seems on its face obvious that that’s not worth it. It’s not just the cost of the DAM system, because you’d have to get all those stakeholders back again, do discovery again, go through the process again. Everybody’s burnt. They’re unhappy about it. The thing didn’t work. It failed. Now that we talk about it, it just seems obvious that it’s not a great idea.
Amy Rudersdorf: 51:25
You’ve broken their trust. And I’ve seen this in implementation too: if you invite your stakeholders into the system before it’s ready, or it’s not doing what they need it to do, they’re going to hesitate to come back. And if you have to go through this process all over again, you’re going to lose their trust.
Chris Lacinak: 51:56
You can imagine the Slack message already. “Hey, did you see that? I went in there and nothing’s in there. I couldn’t find anything.” All of a sudden that starts creating poor morale around the system.
Amy Rudersdorf: 52:07
Yeah, it gives me shivers.
Chris Lacinak: 52:10
Yeah. In your piece, you go through the whole RFP process. We haven’t gone through that here because it’s rather lengthy and I think that it’s a lot to talk about. So we’ll leave that to folks to see in the piece. But I’m curious if you could tell us, when you see people do this on their own and they don’t have the advantage of having an expert like yourself guiding them along, what’s the number one most important part of the process that you see people skip?
Amy Rudersdorf: 52:40
Well, I think it’s the discovery process. It’s getting in front of your users and stakeholders. Without that information, you don’t know what you need. And you can only guess at what you need based on your personal experience.
Chris Lacinak: 53:00
So people think, “Oh, I know what my users need. I’ve been working with these people for years. I can tell them.” Or maybe like, “I know what we need better than anybody else. I’m just going to write it down.”
Amy Rudersdorf: 53:08
I don’t know if I said this already, but I’ve never had anyone say, “Oh yeah, I knew all that,” after they went through the discovery process. Time and again, they’re like, “Wow, I had no idea.”
Chris Lacinak: 53:18
I bet.
Amy Rudersdorf: 53:20
Yeah. It’s pretty interesting to talk to the core team after the discovery process is complete. Because they often sit in on these interviews and you can just see their eyes pop when they hear certain things that they had no idea about. And that happens every time we go through this process.
Chris Lacinak: 53:48
So we’ll put a link to your piece in the show notes here. I’m curious though, if you could tell us when people download the handouts, what’s in there? What can people expect to see?
Amy Rudersdorf: 54:01
It’s six checklists that guide you through the entire RFP process, from developing your problem statement to the point where you’re selecting your finalists. Some of the checklists are things that you need to do, so they step you through discovery and how you structure your RFP. But then there are checklists that you can actually include in your RFP. We always have an overview document that introduces the RFP to the vendors. And there’s a very long checklist that we include, where vendors have to answer the specific questions that are in that download. These are in the actual RFPs that we create as well. So we’re offering a little of our IP to users.
Chris Lacinak: 54:56
So it’s the things that you would use for yourself as part of the process.
Amy Rudersdorf: 55:01
Yeah.
Chris Lacinak: 55:02
That’s great. So for the question I ask everybody on the DAM Right podcast, which is, what is the last song that you added to your favorites playlist?
Amy Rudersdorf: 55:12
Oh, I’ll tell you right now. I have it right in front of me.
Chris Lacinak: 55:17
Great.
Amy Rudersdorf: 55:18
Heart of Gold by Neil Young.
Chris Lacinak: 55:19
What were the circumstances there?
Amy Rudersdorf: 55:21
He’s back on Spotify. He had left Spotify. They pulled all of his stuff off Spotify. And I realized he was back. And so I grabbed that song. Yeah. Four days ago.
Chris Lacinak: 55:32
That’s right. Well, thank you so much for joining me and sharing your expertise and your experience and all this great information. It’s been fun having you on. I really appreciate you taking the time.
Amy Rudersdorf: 55:41
Yeah. Thanks again for the opportunity to talk about a topic that some people might not find very exciting, but I do.
Chris Lacinak: 55:52
You know that I know that you want to get your hands on Amy’s how-to guide and handouts for DAM selection. Come closer and I’ll tell you where to find it. Closer. I don’t want anyone else to hear this. Okay. It’s weareavp.com/creating-a-successful-dam-rfp. That’s where the guide is. Here’s where you get the handouts. It’s weareavp.com/free-resources. Okay. Now delete those URLs once you download them. I don’t want that getting out to just anyone. All right. Talk to you later. Bye.
Manage Your DAM Expectations: A Guide to Successful Implementation
25 July 2024
In the world of digital asset management (DAM), the journey of implementation can often feel overwhelming. However, with the right approach and mindset, organizations can navigate this process smoothly. This guide explores key insights on managing expectations during a DAM implementation, drawing parallels to the experience of moving into a new home.
Understanding the Need for a DAM System
Every successful DAM implementation begins with understanding the reasons behind the need for a new system. Organizations often face several pain points, leading them to seek a more efficient solution. Here are some common reasons organizations consider moving to or adopting a new DAM system:
- Centralization of Assets: Staff frequently struggle to find images or videos scattered across various platforms like Dropbox, Google Drive, and email. A DAM system centralizes these assets, making retrieval easier and faster.
- Control Over Asset Usage: Misuse of assets is a significant concern. Organizations often find their images on social media or websites without proper permissions. A DAM system helps establish control over how assets are used.
- Unlocking Hidden Treasures: Many organizations have digitized assets that remain underutilized. A DAM system can help make these assets available for broader use.
Deciding What You Need
Once the reasons for adopting a DAM system are clear, the next step is to define what is needed from the system. Organizations should identify three to five key differentiators or deal breakers when evaluating potential DAM solutions. Here are some considerations:
- Budget: Always a crucial factor, understanding the financial implications will guide your decisions.
- Technical Requirements: Determine whether you need on-premise hosting or prefer a vendor-hosted solution.
- Format Compatibility: Ensure the DAM can handle specific file formats essential for your operations, such as InDesign files.
- Functional Needs: Identify critical functionalities, like full-text search capabilities, that the system must support.
Planning and Scoping Your MVP
Creating a clear plan and defining a Minimum Viable Product (MVP) are essential steps before implementing a DAM system. Unlike moving into a new home, where the process is somewhat familiar, DAM implementation can be murky for many organizations. Here are common pitfalls to avoid:
- Resource Allocation: Organizations often underestimate the internal resources required for implementation, sometimes needing one person to focus full-time on the migration process.
- Major Migration Planning: If moving a large volume of data, thorough planning is critical. Transferring data from multiple systems can be complex and time-consuming.
- Over-Ambition: Organizations sometimes aim to do too much before going live. Focusing on core features that work well is essential to avoid extending timelines unnecessarily.
Maintenance, Enhancements, and Repairs
Just like maintaining a house, a DAM system requires ongoing care. After implementation, it’s essential to ensure that the system remains organized and functional. Here are tips for effective maintenance:
- Daily Maintenance: Regularly check and tidy up the system to ensure it operates efficiently.
- Ownership and Oversight: Assign someone to oversee the DAM, particularly in the initial months after launch, to address any issues promptly.
- Resource Allocation: As the DAM system grows in popularity, be prepared to allocate more resources to maintain its success.
Don’t Go It Alone
Implementing a DAM system is not a solitary journey. Just as you would seek help from a realtor or lawyer when buying a home, organizations should consider enlisting experts in DAM. Here’s how to find the right support:
- Consultants and Specialists: Engage professionals who have experience in DAM implementation to guide your organization through the process.
- Communication Teams: If you aim to promote the DAM widely, consider involving internal communications teams to help socialize and promote the system.
- Expert Organizations: Partner with firms that have expertise in metadata, taxonomy, and asset management best practices to ensure a smoother implementation.
Conclusion
Implementing a DAM system can be a transformative experience for organizations, but it requires careful planning, resource allocation, and ongoing maintenance. By understanding the reasons for adoption, defining needs, and seeking expert help, organizations can navigate the complexities of DAM implementation successfully. Remember, just like moving into a new home, the journey may have its challenges, but the rewards are well worth the effort.
For further insights and resources on DAM implementation, consider exploring specialized DAM consultants and their offerings.
Transcript
Chris Lacinak: 00:00
Hello, welcome to DAM Right. I’m your host, Chris Lacinak. Today, we’re gonna try something a little different. We’re gonna do a short episode that’s about 10 minutes instead of 60 to 90, and I’d love to know how you feel about it. Let me know at [email protected]. Today’s episode is an interview with Kara Van Malssen, who you know if you’re a listener of the show. If not, I’ll say quickly that Kara is a Partner and Managing Director at AVP, a thought leader in the DAMosphere, and an all-around wonderful person. The interviewer is former AVP Senior Consultant, Kerri Willette. Since doing this interview, Kerri has moved on and is now doing awesome work, no doubt, at Dropbox. Kerri is a super talent and pure delight of a human being. Since we’re keeping this short, I’ll just quickly say that I really love how Kara makes the analogy between DAM implementation and moving into a new home. She grounds the topic of DAM implementation, making it both fun and relatable. I know you’ll enjoy it. Speaking of which, please go like, follow, or subscribe on your platform of choice. And remember, DAM right, because it’s too important to get wrong.
Kerri Willette: 01:07
We’re here today, we’re gonna talk about
some things that were inspired by the article that you wrote for Henry Stewart’s publication
in the Journal of Digital Media Management,
I think it was volume seven.
That article subsequently evolved into a blog post
that I know you wrote after relocating.
And the blog post is called “Manage Your DAM Expectations.
Or How Getting a DAM is Like Buying and Owning a Home.”
All right, so tip one,
there’s usually a good reason for doing it.
Kara Van Malssen: 01:39
Yeah, so we had an opportunity in another city,
my husband got a job offer. So within five months, we had sold a house,
moved, bought a house, and moved again.
It was quite a lot.
Kerri Willette: 01:52
So what are some of the good reasons
that you’ve heard from organizations who are looking to move to or switch
or get a new DAM system for the first time?
Kara Van Malssen: 02:02
Yeah, so it usually falls into a few different buckets.
Like a lot of times it’s around pain points that they’re having.
So it might be things like,
staff’s trying to find images or videos
and they’re rummaging through Dropbox and Google Drive
and email and hard drives and who knows where,
trying to find what they’re looking for.
And it takes forever and they don’t find it.
So centralizing the assets is one good reason.
Another one we see a lot is maybe misuse of assets
where you’ve got people putting images on social media
that they shouldn’t be using or on the website
that they don’t have permission to use for that purpose.
And so trying to kind of get some control
around the usage of the assets
is another reason we see a lot.
And then another reason might just be
to kind of open up like a new treasure trove of assets
that was previously sort of hidden.
Like maybe you digitized a whole bunch of stuff
and you wanna make that available.
So that’s another good reason.
Kerri Willette: 03:06
So the next tip in your post,
you have to decide what you will need. How do you feel like organizations can answer the question
of what they need in a DAM system?
Kara Van Malssen: 03:16
You’ve gotta figure out what those three to five
or four to six like key differentiator things are or the real deal breakers.
And one of those is always gonna be the budget,
but the other things are unique to you.
Maybe it’s technical things
like you need to host this on-premise
or you need to host it in your own
Amazon Web Services account
or maybe you want the vendor to host it for you.
So those might be some of those considerations
or maybe they’re things like format requirements.
Like you want specific support for InDesign files,
for instance.
Or maybe it’s functional things
like you really need full-text search of documents.
Like that’s critical.
So you don’t wanna look at systems that don’t have that.
It’s like that’s one of your deal breakers,
things like that.
So you’ve gotta kind of figure out what are those top fives
that you really need to have in the DAM
and you can use that to sort of narrow down
the candidate solutions.
And then when you start to evaluate those,
you can really look for the kind of nuanced differences
between them and how they actually help you achieve
the goals that you have in mind.
Kerri Willette: 04:28
Yeah, that makes sense.
So tip three in your blog post talks about making a plan and clearly scoping what you call a minimum viable product
or MVP version of what you need.
And you would do that before implementing a DAMS.
We all know that moving requires a lot of planning,
but what are some areas you’ve seen organizations
that you’ve worked with most often not plan well
for implementing a DAM?
Kara Van Malssen: 04:57
There’s a big difference here between moving a house
and moving into a DAM. You kind of know what’s involved
in the moving house situation.
You know, it’s gonna be like a lot of packing
and organizing and then unpacking and organizing.
But with a DAM, a lot of people
haven’t really done this before.
So it’s a little murky,
like what are the things you need to do?
So what we see is, I think, three things that people,
where they might go wrong here.
So one is they’re not allocating enough resources internally
to the implementation and the migration.
And, you know, it’s probably gonna be like
one person’s full-time job for a while.
So just something to keep in mind.
Another is just not really planning
around major migrations.
If you’ve got a lot of data to move
from one system to another,
or from maybe ten systems
or ten different data stores to another,
it’s just, that’s a lot of work.
It takes time and planning.
And then the last one is kind of getting overly ambitious,
maybe not realizing that you’re doing it,
but, you know, trying to kind of do everything
before you go live.
And maybe that’s including like custom integrations,
maybe custom development on top of the
kind of out of the box features of the system.
It’s like if you got a contractor
and you decided to gut renovate the house
before you moved in,
you better expect that’s gonna take you some time.
So you’re not getting in that house really anytime soon.
But this is an organization,
there’s politics, there’s budget,
there’s like, you know, expectations.
And if the thing drags on for too long
before it gets launched,
that can really damage the reputation of this program.
It can kind of lose political will.
So it’s important to kind of scope something
that’s realistic to just get it off the ground
and get those core features working really well.
So things like just making the search work,
the browse work,
making sure the assets are well organized,
making sure they’re well described and tagged,
that people can easily access them when they should
and they can’t access them when they shouldn’t.
So roll out those key features,
get it in the hands of people
who are gonna give you really good feedback
and gonna start with it.
And then you can get those additional things over time.
Kerri Willette: 07:18
Great.
Tip four, maintenance, enhancement and repairs come with the territory.
So Kara, I happen to know
that you recently discovered a gas leak in your new house.
And luckily you were able to get it repaired really quickly,
but it definitely, I think, brings home your point
about allocating resources for future maintenance
and how that relates to home buying for sure.
So how does that relate to your experiences
helping organizations deploy their DAM systems?
Kara Van Malssen: 07:49
Yeah, it’s like with the house,
you’ve kind of got a gamut of kind of home maintenance and repair and improvement that you’re doing.
Like you’re gonna be cleaning every day,
tidying it up, cleaning the kitchen.
You’re gonna be kind of repairing those things that break
and then you’re gonna be making improvements over time.
It’s really the same thing with a DAM.
You’ve gotta have kind of somebody in there
who’s just making sure everything’s tidy and neat
so that the thing continues to work well for the users.
You’d have to make sure that there’s some ownership
and oversight of the DAM from the very beginning,
especially in those critical,
like first few months after launch.
And then over time,
you might find you even need more resources there
than you thought you would
because maybe it becomes really successful and that’s great,
but you’re probably gonna need to throw a bit more manpower
at it to make sure it continues to succeed.
Kerri Willette: 08:43
All right.
Don’t go it alone. What kind of experts, when it comes to DAM systems,
what kind of expert help might be useful?
Kara Van Malssen: 08:53
Yeah, so it’s like, if you’re getting a house,
you know, you’re probably gonna get a realtor, you’re gonna need a lawyer to help with the closing.
You’re gonna probably have a home inspector
come and check it out before you buy it.
Some of those things you might take on yourself,
but sometimes you’re gonna work with others.
And it’s sort of the same thing with a DAM.
A lot of people, I think,
just figure like, I can do this, let’s do this.
But if you’ve never had any experience implementing a DAM
and you kind of don’t know what that path forward looks like
or what the expectations might be
or where you might run into problems,
it can be really hard.
And if you are doing things like a custom integration
with other applications,
you might need people like developers.
You know, if you’re really gonna be promoting this widely,
if you have a lot of users you’re trying to get to adopt it,
you might need like communications folks
maybe within your organization
to kind of help socialize it and promote it.
And also, you know, organizations like ours, AVP.
So we are experienced in this.
We have a lot of expertise in things like metadata,
taxonomy, search and navigation,
asset organization, management, best practices
and things like that.
So we’ve been down this road before.
So we can also help you kind of manage your expectations
a little bit and try to get to as much
of a painless launch as possible.
Kerri Willette: 10:14
Well, thanks, Kara.
This was really great. It was nice talking to you.
Kara Van Malssen: 10:18
Yeah, thanks, Kerri.
Appreciate it. (upbeat music)
The Critical Role of Content Authenticity in Digital Asset Management
11 April 2024
The question of content authenticity has never been more urgent. Digital media has proliferated, and advanced technologies like AI have emerged. Distinguishing genuine content from manipulated material is now crucial in many industries. This blog examines content authenticity, its importance in Digital Asset Management (DAM), and current initiatives addressing these challenges.
Understanding Content Authenticity
Content authenticity means verifying that digital content is genuine and unaltered. This issue isn’t new, but modern technology has intensified the challenges. For example, in 2022 the FBI seized more than twenty-five paintings attributed to Basquiat from the Orlando Museum of Art, demonstrating the difficulty of authenticating artworks. Historical cases, like the fabricated “Protocols of the Elders of Zion,” reveal the severe consequences of misinformation. Digital content’s ubiquity makes it vital for organizations to verify authenticity. Without verification measures in place, organizations cannot demonstrate that the content they hold and publish is trustworthy.
The Emergence of New Challenges
Digital content production has skyrocketed in the last decade. Social media rapidly disseminates information, often without verification. Generative AI tools create highly realistic synthetic content, complicating the line between reality and fabrication. Deepfakes can simulate real people, raising serious concerns about misinformation. Organizations must combine technology with human oversight to navigate this complex environment.
The Role of Technology in Content Authenticity
Technology provides tools to detect and address authenticity challenges. Yet technology alone isn’t enough; human expertise must complement these solutions. The Content Authenticity Initiative (CAI), led by Adobe, is one effort creating standards for embedding provenance data in digital content. The Coalition for Content Provenance and Authenticity (C2PA) develops the open technical specification that defines how these trust signals are embedded in digital files. Together, these efforts enhance content verification and authenticity.
Practical Applications of Content Authenticity in DAM
For organizations managing digital assets, content authenticity is crucial. DAM systems benefit from integrating authenticity protocols. Several practical applications include:
- Collection Development: Authentication techniques help evaluate incoming digital assets.
- Quality Control: Authenticity measures verify file integrity during digitization projects (see the fixity sketch after this list).
- Preservation: Provenance data embedded in files ensures long-term reliability.
- Copyright Protection: Content credentials protect assets when shared externally.
- Efficiency Gains: Automating authenticity data reduces manual errors.
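To make the quality-control item concrete, here is a minimal sketch of a fixity workflow using only the Python standard library: record a checksum when an asset is digitized, then re-verify it at ingest or on a schedule. The sidecar-file layout is an illustrative assumption, not a prescribed format.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large masters never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_fixity(asset: Path, sidecar: Path) -> None:
    """At digitization time, write the checksum to a sidecar file next to the asset."""
    sidecar.write_text(json.dumps({"file": asset.name, "sha256": sha256_of(asset)}))

def verify_fixity(asset: Path, sidecar: Path) -> bool:
    """At ingest (or on a schedule), confirm the asset is byte-identical to what was captured."""
    expected = json.loads(sidecar.read_text())["sha256"]
    return sha256_of(asset) == expected
```

The same pattern underpins preservation-grade fixity checking: if the recorded and recomputed digests ever differ, the file has changed and should be quarantined for review.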
The Risks of Neglecting Content Authenticity
Neglecting content authenticity poses significant risks. Misinformation spreads quickly, damaging brands and eroding public trust, and sharing manipulated content can expose an organization to legal disputes, financial losses, and lasting reputational harm.
Collaboration and the Future of Content Authenticity
Collaboration is vital for achieving content authenticity. Organizations, technology providers, and stakeholders must develop best practices together. The rapidly evolving digital landscape demands ongoing innovation. Investing in authenticity technologies and frameworks will become essential.
Case Studies: Content Authenticity in Action
Organizations are already implementing successful authenticity measures. Media outlets verify user-generated videos and images with specialized tools. Human rights organizations embed authenticity data into witness-captured files, ensuring credibility in court. Museums and archives verify digital assets’ provenance, preserving their integrity.
Conclusion: The Imperative for Content Authenticity
Content authenticity is a societal necessity, not just a technical issue. As digital content grows, verifying authenticity will be vital for maintaining trust. Organizations that prioritize content authenticity will navigate the digital age more effectively. Collaboration and technology will ensure digital assets remain credible, trustworthy, and protected.
Transcript
Chris Lacinak: 00:00
Hello, welcome to DAM Right, Winning at Digital Asset Management. I’m your host, Chris Lacinak, CEO of the digital asset management consulting firm AVP. In the summer of 2022, the FBI seized more than 25 paintings from the Orlando Museum of Art based on a complex, still unclear scheme to legitimize these supposedly lost and then found paintings as the works of Basquiat. In 1903, the Protocols of the Elders of Zion was published, detailing a series of meetings exposing a supposed Jewish conspiracy to dominate the world. It was used in Nazi Germany, and is used by anti-Semites worldwide to this day, as a factual basis to promote and rationalize anti-Semitism. Of the many problematic things regarding this text, one of the biggest is that it was a complete work of fiction. In 2005, an investigation conducted by the UK National Archives identified a number of forged documents, interspersed with authentic documents, posing as papers created by members of the British government and armed services, tying them to leading Nazi figures. No one was convicted, but three books by the author Martin Allen cited these forged documents, and documentation shows that he had access to these specific documents. In 1844, an organized gang was convicted in London for creating forged wills and registering fictitious deaths of bank account holders that the gang had identified as having dormant accounts so that they could collect the remaining funds. As this sampling of incidents demonstrates, content authenticity is not a new problem. It is, however, a growing problem. The proliferation of tools for creating and altering digital content has amplified the authenticity dilemma to unprecedented levels. In parallel, we are seeing the rapid growth and deployment of tool sets for detecting fake and forged content. As is highlighted in this conversation, the line between real and fabricated lies in the intent and context of its creation and presentation. This conundrum signals that technology alone cannot bear the weight of discerning truth from fiction. It can merely offer data points on a file’s provenance and anomalies. As the hyperspeed game of cat and mouse continues on into the foreseeable future, it’s also clear from this conversation that addressing this challenge in any truly effective way requires an integrated and interoperable ecosystem that consists of both people and technology. The stakes are high, touching every industry and corner of society. The ability to assert and verify the authenticity of digital content is on the horizon as a cornerstone of digital asset management, as well as being a social imperative. Amidst this complex landscape of authenticity, integrity, and technological chase, I am excited to welcome a vanguard in the field, Bertram Lyons, to our discussion. As the Co-Founder and CEO of Medex Forensics, a luminary in content authenticity, Bert’s insights are extraordinarily valuable. His journey from a Digital Archivist at the American Folklife Center at the Library of Congress to spearheading innovations at Medex Forensics underscores his deep engagement with the evolving challenges of digital veracity. Bert’s involvement in the Content Authenticity Initiative and the C2PA Working Group, coupled with his active roles in the American Academy of Forensic Sciences and the Scientific Working Group on Digital Evidence, highlights his commitment to shaping a future where digital authenticity is not just pursued, but attained.
Join us as we explore the intricate world of content authenticity, guided by one of its esteemed experts.
Bertram Lyons, Welcome to DAM Right. I’m so excited to have you here today. Uh, um, I’m particularly excited at this moment in time, because I feel like the expertise and experience you bring is going to be a breath of fresh air, um, that gives us a deeper dive into the nuance and details of a topic, content authenticity, which I think is most frequently, uh, experienced as headlines around, uh, kind of bombastic AI sorts of things, and I think that, uh, you’ll, you’ll bring a lot of clarity to the conversation. So thank you so much for being willing to talk with us today. I appreciate it.
Bertram Lyons: 04:27
Thanks Chris.
Chris Lacinak: 04:28
I’d like to start off with just talking a little bit about your background. I think it’s fair to say that you didn’t come to forensics and content authenticity with the most typical background. I’d love to hear a bit about how you arrived here and how the journey, uh, kind of informed what your approach is today.
Bertram Lyons: 04:47
To give you a sense of, you know, where I think I am today: it’s working in the world of, uh, authenticating digital information, uh, specifically video and images. Um, and how I got there, you know, I spent 20-plus years working in the archives industry. That was really what, what I spent my time doing up until a few years ago. Um, I started at, you know, various different kinds of archives. Um, one exciting, um, uh, place that I worked for, for a variety of years, when I first started out, was a place called the Alan Lomax Archive. And that was a really cool audiovisual archive. You know, it had tons of formats from, from the start of recording technology up until the time that, that particular individual, Alan Lomax, stopped recording, which spanned from like the 1920s through the 1990s. So, you know, really a lot of cool recording technology. And I did a lot of A-to-D, analog-to-digital, conversion at that time. Um, and that led me down a path of really ultimately working in the digital side of, of archives and ending up at the Library of Congress in D.C., where, you know, my job was specifically Digital Archivist, and my job there was to learn and understand how historical evidence, um, how it existed in digital form. Um, to document that and to be able to establish strategies and policies for keeping that digital information alive as, as long as possible, both, both the bits on one side and the, um, and the information itself on, on the other side, and ensuring that we can, we can reverse engineer information as needed as, as time goes on, uh, so we don’t lose the information in our, in our historical collections. So, uh, it’s been many years with that, and then, you know, jumped ship from, from LC and started working with you, uh, at AVP, uh, you know, for a number of years. And that was an exciting ride where we applied a lot of that knowledge, you know, I was able to apply a lot of my experience to our, our customers and clients and colleagues there. Um, but ultimately the, the thing that brought me into the digital evidence world where I work now was through a relationship that we developed with the FBI and their Forensic Audio Video Image Analysis Unit, um, in Quantico, where, you know, we were tasked to increase capabilities, you know, help that team there, who were challenged with establishing authenticity of evidence for court, and help them to increase their ability to do that, uh, both manually, using their knowledge about digital file formats, but also ultimately in an automated way, because, unfortunately and fortunately, digital video and image, um, and audio are just everywhere. You know, there’s just so much video, uh, image and audio data around that it becomes the core of almost every investigation that’s happening. Um, any question about what happened in the past, we turn to multimedia.
Chris Lacinak: 07:43
I think back to you sitting at the American Folklife Center and Library of Congress. Did you ever have any inkling that one day you’d be working in the forensics field? Was that something you were interested in at the time or was it a surprise that kind of to you that you ended up where you did?
Bertram Lyons: 07:57
on my mind in that when I, in 2000
Chris Lacinak: 10:22
Transitioning a bit now away from your personal experience, I, I guess in preparing for this conversation, it dawned on me that content authenticity is not a new problem, right? That there’s been forgeries and archives and in museums and in law enforcement situations and legal situations for, for centuries, but but it does seem very new in its characteristics. And I wonder if you could talk a bit about like what’s happened in the past decade that makes this a much more urgent problem now, uh, that it deserves the attention that it’s getting.
Bertram Lyons: 10:57
I think, you know, you say the past decade, a few things that I would put on the table there. One would be the boom, which is more than a decade old, but the boom in social media, and how fast I can put information out into the world and how quickly you will receive it, right? Wherever you are. So it’s just the, the ability for information to spread. And information being, whether it’s, you know, media like image or audio or video, or whether it’s, you know, what I’m saying in text. Those are different things too, right? So just to scope it for this conversation, just thinking about the creative or documentary sharing of image, video, and audio, right? So it’s a little bit different probably when we talk about misinformation on the text side. But when we talk about content authenticity with media things, you know, it can go out so quickly, so easily, from so many people. That’s a, you know, that’s a huge shift from years past where we’re worried about the authenticity of a photograph in a, in a museum, right? The reach and the, uh, the immediacy of that is, is significantly different, um, in today’s world. And then, uh, I would add to that now the ease with which, and this is more of the last decade, individuals have access to creatively manipulate or creatively generate, you know, new media that can be confused, from the creative side to the documentary side, with actual documentary evidence. So, you know, the content’s the same whether I create a video of, you know, of myself, um, you know, climbing a tree or whatever. Um, that’s content and I could create a creative version of that that may not have ever happened. And that’s for fun and that’s great. We love creativity and we like to see creative imagery and video and audio. Or I could create something that’s trying to be documentary. You know, Bert climbed this tree and he fell out of it. Um, and that really happened. I think the challenge is that the world of creating digital content is blending such that you wouldn’t be able to tell whether I was doing that from a creative perspective or from a documentary perspective. And then, you know, I have the ability to share it and claim one or the other, right? And so the, those who receive it now, out in the social media world and the regular media world, you know, have to make a decision. How do I interpret it?
Chris Lacinak: 13:31
Yeah
Bertram Lyons: 13:31
But I think the core challenge that we face off the authentication side is still one of intent by the individual who’s, who’s creating and sharing the content. The tools have always been around to do anything you really want to digital content, um, whether it’s a human doing it or, or asking a machine to do it. In either scenario, what’s problematic is the intent of the person or group of people creating that, and how they’re going to use it.
Chris Lacinak: 14:04
What do you think people misunderstand most about the topic of content authenticity? Is there something that you see repeatedly there?
Bertram Lyons: 14:11
From the way the media addresses it generally, I think one of the biggest misinterpretations is that synthetic media is inherently bad in some way, that we have to detect it because it’s inherently bad, right? You get this narrative, um, that is not true. You know, it’s, it’s a creation process, and inherently, uh, it doesn’t have a bad or a good to it, right? It comes back to that question of intent. Synthetic media or generative AI that’s creating synthetic media is really just allowing a new tool set for creating what you want to create. We’ve been looking at CGI movies for years, and how much of that is ever real? Very little of it, but it’s beautiful and we love it. It’s entertaining. And it comes back to the intent. On the flip side, another really, I think, big misunderstanding in, in this really comes down to people’s understanding of how files work and how they move through the ecosystems that they’re, that they’re stuck in. You know, files themselves don’t live except for within these computing ecosystems. They move around, they get re-encoded, and as they follow that lifecycle, they get interacted with by, by all kinds of things. Um, like by encoders that are changing, uh, the resolution, for example, or encoders that are just changing the packaging. Um, those changes, which are invisible to the, to the average person, those changes are actually extremely detrimental to the ability to detect synthetic media, or anything that you want to detect about, about, you know, that content. As that content gets moved through, it’s being normalized, it’s being laundered, if you will, um, into something that’s very basic. Um, and, and as that laundering happens, that particular content and that particular packaging of the file becomes in some ways useless from a forensic perspective. And I think the average person doesn’t get that yet. That information is available to them. That, that if you want to detect if something’s synthetic and it’s sitting on your Facebook feed, well, it’s too late. Facebook had the chance on the way in, and they didn’t do it, or they did do it. Um, and now we’re stuck with like network analysis stuff. Who posted that? Now we’re going back to the person. Who posted that? Where were they? What was their behavior pattern? Can we trust them? Versus, you know, having any ability to apply any trust analysis, unless it’s a blatantly visual issue, to that particular file.
Chris Lacinak: 16:45
Can you give us some insights into what are some of the major organizations or initiatives that are out there that are focused on the issue of content authenticity? What’s the landscape look like?
Bertram Lyons: 16:55
From the content authenticity perspective, a lot of it’s being led by major technology companies who, who trade in content. So that could be from Adobe, who trades in content creation, to Google, who trades in content distribution and searching. Um, you know, and everybody in between. Microsoft, Sony, you know, organizations who are either creating content, whose tools allow humans and computers to create content, or, uh, organizations who really trade in the distribution of that content. Um, so there’s, there’s an organization that’s composed of a lot of these groups called the Content Authenticity Initiative. Um, and that, that organization is really heavily led by Adobe, but has a lot of other partners involved with it. And then it sort of has become an umbrella for, for, uh, I’d say an ecosystem-based perspective on content authenticity that’s really focused on, um, the ability to embed what they’re calling content credentials, but ultimately to embed signals of some sort, whether it’s actual text-based cryptographic signatures, whether it’s watermarking, there’s other kinds of approaches, but ultimately to embed signatures, or embed signals, in digital content. Such that as it moves through this ecosystem that I mentioned earlier, you know, from creation on the computer, to upload to a particular website, to display on the web, through a browser. It’s really focused on like, can we, can we map the lifecycle of, of a particular piece of content? Um, can we somehow attach signals to it such that as it works its way through, um, those signals can be read, displayed, evaluated, and then ultimately a human can determine how much they trust that content.
Chris Lacinak: 19:00
If I’ve got it right, I think the Content Authenticity Initiative are the folks that are creating what’s commonly referred to as C2PA, or the Coalition for Content Provenance and Authenticity. Is that right?
Bertram Lyons: 19:12
That’s right. Yeah, that’s like the schema,
Chris Lacinak: 19:15
Okay.
Bertram Lyons: 19:15
technical schema.
Chris Lacinak: 19:16
And in my reading of that schema, and you said this, but I’ll just reiterate and try to kind of recap is that it looks to primarily identify who created something. It really focuses on this concept of kind of trusted entities. Um, and it does offer, um, as you said, provenance data that it will automatically and or systematically embed into the, uh, files that it’s creating. And this starts at the creation process, goes through the post production and editing process through the publishing process. Is that a fair characterization? Is there anything that’s kind of salient that I missed about, uh, how you think about or describe that, uh, schema?
Bertram Lyons: 20:03
I think that’s fair. I think the only thing I would change in the way you just presented it is that the C2PA is a schema and not software. So it will never embed anything and do any of the work for you. It will allow you to create software that can do what you just said. C2PA itself is purely like a set of instructions for how to do it. And then if you, uh, you know, want to implement that, you can. If Adobe wants to implement that, they actually already implemented it in Photoshop. If you create something and export it, you will have C2PA data in it, um, in that file. So it’s really creating a specification that can then be picked up by, um, anybody who generates software to read or write, uh, video or images or audio. Actually, it’s really built to be pretty broad, you know. They define ways to package the C2PA data sets into PDFs, into PNGs, into WAVs, you know, generally, um, trying to provide support across a variety of format types.
Chris Lacinak: 21:03
And the provenance data that’s there, or the specification, uh, for, for embedding, uh, creating provenance information is optional, right? It, someone doesn’t have to do it. Is that true?
Bertram Lyons: 21:16
Let me come at it a different way.
Chris Lacinak: 21:18
Okay
Bertram Lyons: 21:18
It depends on what you use. If you use Adobe tools, it will not be optional for you. Right? If you use a, a tool to do your editing that hasn’t implemented C2PA, it will be optional. It won’t even be available to you. Um, that’s why I talk about ecosystem. You know, the, the tools you’re using have to adopt, implement this kind of, um, technology in order to ultimately have the files that you export contain that kind of data in them, right? So it’s optional in that you choose how you’re going to create your content, and you have the choice to buy into that ecosystem or actually to select yourself out of that ecosystem.
Chris Lacinak:
This reminds me of the early days of, just generally speaking, embedded metadata, where before everyone had the ability to edit metadata in Word documents and PDF documents and audio files and video files and all that stuff, it was a bit of a black box that would hold some evidence. And there were cases where folks claimed that they did something on such and such a date, but the embedded metadata proved otherwise. Uh, today that feels naive because it’s so readily accessible to everybody. So in the same way that, um, there was a time and place where not everybody could access and view, or write and edit, uh, embedded metadata in files, this sounds similar, that the tool set and the ecosystem, as you say, has to support those sorts of actions.
Bertram Lyons:
Yeah, they’ll have to be able, you’ll have to support it. And I’ll, just so, so somebody listening doesn’t get the wrong idea, the C2PA spec is very much stronger than the concept of embedded metadata, in that it’s cryptographically signed. So, you know, up until C2PA existed, anybody could go into a file and change the metadata, and then just resave the file and no one would ever know. Potentially. Um, but the goal of C2PA actually is to make embedded metadata stronger. Um, and it’s to generate, um, this package of a manifest. It says, you know, inside of this file, there are going to be some assertions that were made by the tool sets that created the file, and maybe the humans that were involved with the tool set that created the file. They’re going to make some assertions about its history and then they’re going to sign it with the cryptographic signature. They’re going to sign everything that they said such that if anything changes, the signature will no longer be valid, right? So it’s really a goal of trying to lock down inside the file the information that was stated about the file when it was created, and to bind that to the, to the hashing of the content itself. So if I have a picture of me, all the pixels that go into that picture of me get hashed to create a, you know, a single value, um, what we call a checksum. That checksum is then bound to the statements I make about that. I created this on Adobe Premiere, well actually, Adobe Photoshop would make a statement about what I did to create it, you know, it was created by Photoshop, these edits were done, this is what created it, and that’s an assertion. And then I might say, you know, Bert Lyons created it, that’s the author, that’s an assertion. Those assertions are then bound to the checksum of the file, of the image itself, right, and locked in. And if that data sticks around in the file as it goes through, um, its ecosystem, and someone picks it up at the end of the pathway, they can then check.
Bert says he, he created this on this date, using Photoshop. Photoshop said he did X, Y, and Z. Signature matches, nothing’s been changed. Now I have a trust signal, and it’s still going to be up to the human to say, do I trust that? Is C2PA strong? Is the cryptography and the trust framework strong enough, such that nobody really could have changed that?
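To ground the mechanics Bert describes, here is a toy sketch of the underlying pattern: hash the content, bind assertions to that hash, and sign the bundle so that any later change invalidates the signature. This is emphatically not the C2PA specification, which defines certificate chains, JUMBF packaging, and a full trust framework; it only illustrates the concept, using the widely available Python `cryptography` package, with invented manifest fields.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def make_manifest(content: bytes, assertions: dict, key: Ed25519PrivateKey) -> dict:
    """Bind assertions to a content hash and sign the bundle (toy C2PA-like manifest)."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertions": assertions,  # e.g. {"author": "Bert Lyons", "tool": "Photoshop"}
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_manifest(content: bytes, manifest: dict, public_key: Ed25519PublicKey) -> bool:
    """Any change to the content bytes or to the assertions breaks verification."""
    claim = manifest["claim"]
    if hashlib.sha256(content).hexdigest() != claim["content_sha256"]:
        return False  # content no longer matches the hash it was signed against
    try:
        payload = json.dumps(claim, sort_keys=True).encode()
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # assertions were altered after signing

# key = Ed25519PrivateKey.generate()
# m = make_manifest(b"...pixels...", {"author": "Bert Lyons", "tool": "Photoshop"}, key)
# verify_manifest(b"...pixels...", m, key.public_key())  # -> True
```

With a freshly generated key, `verify_manifest` returns True for untouched content and False the moment either the bytes or any assertion changes, which is exactly the kind of trust signal a C2PA-aware platform would surface to a user.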
Chris Lacinak: 25:16
So this C2PA spec then brings kind of this trust, entity-trust level, who created this thing, but it also then has this robust cryptographic, um, signed, uh, kind of provenance data that tells exactly what happened. And it sounds like it is editable, uh, it’s deletable, it’s, it’s creatable, but within the ecosystem that it lives in and how it works, it sounds like there are protection mechanisms that mitigate, um, intentional, uh, augmentation for, you know, malicious purposes, that it mitigates that risk.
Bertram Lyons: 25:56
Yeah, I mean, think about it like this. Like it, it doesn’t take away my ability to just go in and remove all the C2PA data from the file. I, I just did that with a file I created from Adobe, right? I needed to create a file of my colleague Brandon. I wanted to put a fun fake generative background behind him. And I, I created it and I put some fake background behind him and I exported it as a PNG, and I looked in there, because I know, and out of curiosity, and so I was like, oh look, here’s the, here’s the C2PA manifest for this particular file. I just removed it. Nothing stops me from doing that. Resaved the file and moved on. Now, so the way C2PA works, this file now no longer has C2PA data. It can go on, uh, about its life like any other file. And if someone ever wanted to evaluate the authenticity, they’re going to have to evaluate it without that data in it. They’re going to look at the metadata, they’re going to look at where it was posted, where they accessed it, what was said about it, all of that. The same way that we do for everything that we, we interact with today. Um, if that C2PA data had stayed in the file, which, I was just wanting to make sure, I’m always testing C2PA, you know, does the file still work if I removed this, et cetera. Um, but if it stayed in there, it likely would’ve been removed from LinkedIn when I posted it, for example. I posted it up on LinkedIn. It would’ve been removed anyway ’cause the file would’ve been reprocessed by LinkedIn. Uh, but if LinkedIn was C2PA aware, which maybe one day it will be, and if I left the C2PA data in it and I submitted it to, uh, LinkedIn, then LinkedIn would be able to say, oh, look, I see C2PA data. Let me validate it. So it would validate it, and then gimme a report that said, there’s data in here and I validated the checksum, uh, or the, the signature from, from C2PA. And now it could display that provenance data for me. It was created by Bert in Photoshop. Um, and again, it all comes around to communicating back to the end user about the, about the file. Um, now, it still doesn’t stop me from making a malicious change. If I, instead of removing the C2PA data, I went in and tried to change something, what would happen? Like maybe I changed the who-created-it from Bert to Chris. Um, when that hit, if LinkedIn was C2PA aware, when that hit LinkedIn, LinkedIn would say this has a manifest in it, but it’s not valid. So it would alert me to something being different in the metadata, in the C2PA manifest, from when it was originally created. It doesn’t keep me from doing it. But now I’m sending a signal to LinkedIn where they’re going to be able to say there’s something invalid about the manifest. That’s kind of the behavioral pattern that happens. So again, it comes back to you. And I went through that example just to show you that still, no matter what we implement, the human has decisions to make on the creation side, on the sharing side, and on the interpretation side.
Chris Lacinak: 29:04
Right.
Bertram Lyons: 29:04
Nothing really, even at this most advanced technological state, which I think C2PA is probably the strongest effort that’s been put forward so far. You know, if I, if I want to be a bad actor, I’m going to, I can get around it. You know, I could just, well, I can opt out of it. That’s what it comes down to. So the ecosystem is what’s really important about that approach: the more systems that require it, and the less I’m able to opt out of it, the better. Right? So we’re creating this tool, and for it to work, it’s really about the technological community buying in and locking it down such that you can’t share a file on Facebook if it doesn’t have C2PA data in it. If LinkedIn said you can’t share something here if it doesn’t have C2PA data, then once I remove the data, I wouldn’t be able to share it on LinkedIn.
Chris Lacinak: 29:54
Right.
Bertram Lyons: 29:55
That’s what’s missing so far.
Chris Lacinak: 29:57
Thanks for listening to the DAM Right podcast. If you have ideas on topics you want to hear about, people you’d like to hear interviewed, or events that you’d like to see covered, drop us a line at [email protected] and let us know. We would love your feedback. Speaking of feedback, please give us a rating on your platform of choice. And while you’re at it, make sure to follow or subscribe so you don’t miss an episode. If you’re listening to the audio version of this, you can find the video version on YouTube at @DAMRightPodcast and on Aviary at damright.aviaryplatform.com. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go and find some really amazing and free DAM resources from the best DAM consultants in the business at weareavp.com/free-resources. You’ll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the “Get Your DAM Budget” slide deck template. Each resource has a free accompanying guide to help you put it to use. So go and get them now. Let’s move on from C2PA. Um, that sounds like that covers some elements of content authenticity at the organizational level, at the provenance documentation level, some signatures and cryptographic, um, protections. You’re the CEO and Founder of a company that also does, uh, forensics work, uh, as you mentioned, Medex Forensics. Uh, could you tell us about what Medex Forensics does? What does that technology do and how does that fit into the ecosystem of tools that focus on content authenticity?
Bertram Lyons: 31:43
The way we approach, and, and the contributions that we try to make to the forensics field, is from a file format forensics perspective. So if we know how video file formats work, we can accept a video file, we can parse that video file and extract all the data from it and all the different structures and internal sequencing, ultimately to describe the object as an object, as a piece of evidence, like you would if you were handling 3D evidence. Look at it from all the different angles, make sure we’ve evaluated its chemistry, like we really understand every single component that goes to make up this information object called a file. Um, and once we do that, we can then describe how it came to be in that state. How did it come to be as it is right now? If the question was, hey, is this thing an actual original thing from a camera? Was it filmed on a camera and has not been edited? Then we’re going to evaluate it, and we’re not going to say real or fake, true or false. We’re going to say, based on the internal construction of this file, it is, it is consistent with what we would expect from an iPhone 13 camera original file, right? That’s the, that’s the kind of response that we would give back. And that goes back into the interpretation. So if the expectation was, was this an iPhone 13? We’re going to give them a result that matches their expectation. If their expectation was this came from a Samsung Galaxy, and we say it’s consistent with an iPhone 13, that’s going to change their interpretation. They’re going to have to ask more questions. Um, so that’s what we do. We have built a, a methodology, uh, that can track and understand how encoders create video files. Uh, and we use that, that knowledge to automatically match the internal sequencing of a file to what we’ve seen in the past and introduce that data back. So that’s, that’s kind of where we play, um, in that world. I’ll, I’ll point out just a couple of things. So we call that non-content authentication. Um, and you would also want to employ content-based authentication. So maybe critical viewing, just watching it. That’s the standard approach, right? The critical viewing approach. Or analytics on the, on the pixels, with quantification of, you know, uh, are there any cut-and-pastes? Are there any pixel values that jump in ways that they shouldn’t jump? So there’s a lot of algorithms that really focus on, on, uh, the quantification side of, of the pixels in the image. People do analysis based purely on audio, right? Audio frequencies, looking for cuts and splices and things like that. So there’s a lot of ways that people approach content authenticity, um, that ultimately, if used together, can create a pretty strong approach. I mean, it takes a lot of knowledge to learn the different techniques and to understand the pros and cons and how to interpret the data, and that’s probably why there’s not a single, uh, tool out there right now, because the domain knowledge required is, is quite large. So that’s the kind of tool that we are.
Just to tie in where we sit within the question of content credentials in C2PA: we would be a tool that, if we were analyzing your file, would read the C2PA data in it and say, oh, there’s a C2PA manifest in that file, and we would validate it, and we would then report back, there’s a valid C2PA data manifest, and here’s what the manifest says. So we would also be someone who would play in that ecosystem on the, on the, you know, the side of analysis, not on creation. We don’t create or, you know, get involved with creating C2PA, but we recognize, read, and validate C2PA data in a file, for example. Um, we’re looking at all the signals, uh, but that would be one signal that we might evaluate, uh, in an authentication exam.
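As a greatly simplified illustration of this file-format approach, the sketch below walks the top-level boxes (“atoms”) of an MP4/QuickTime file and compares their order against reference patterns. The pattern table is invented for illustration; real forensic reference data of the kind Bert describes covers nested structure and field-level detail, not just top-level box order.

```python
import struct
from pathlib import Path

def top_level_boxes(path: Path) -> list[str]:
    """Return the sequence of top-level box types in an ISO base media (MP4/MOV) file."""
    boxes, offset, file_size = [], 0, path.stat().st_size
    with path.open("rb") as f:
        while offset + 8 <= file_size:
            f.seek(offset)
            length, box_type = struct.unpack(">I4s", f.read(8))
            if length == 1:  # a 64-bit extended size follows the 8-byte header
                length = struct.unpack(">Q", f.read(8))[0]
            elif length == 0:  # box runs to the end of the file
                length = file_size - offset
            if length < 8:
                break  # malformed box; stop rather than loop forever
            boxes.append(box_type.decode("ascii", "replace"))
            offset += length
    return boxes

# Invented reference patterns; real tools match far more than top-level box order.
KNOWN_PATTERNS = {
    ("ftyp", "wide", "mdat", "moov"): "consistent with a camera-original QuickTime file",
    ("ftyp", "moov", "mdat"): "consistent with an edited or re-exported file",
}

def describe(path: Path) -> str:
    """Report consistency with known encoder patterns, never 'real' or 'fake'."""
    return KNOWN_PATTERNS.get(tuple(top_level_boxes(path)), "no match in reference data")
```

Note that, as Bert says, the output is phrased as “consistent with,” a data point for interpretation rather than a verdict.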
Chris Lacinak: 35:28
You said, uh, Medex won’t tell you if something is real or fake, but just to kind of bring this all together, tying into C2PA, uh, let me say what I think my understanding is of how this might work, and you correct me where I get it wrong. But it seems that C2PA may, for instance, say this thing was created on this camera, it was edited in this software on this date by this person, so on and so forth. Medex can say what created it and whether it’s edited or not. Uh, so for instance, if the C2PA data said, uh, this was created in, um, uh, an Adobe product, but Medex purported that it was created in Sora, let’s just say, just throwing anything out there, uh, it wouldn’t tell you this is real or fake, but it would give you some data points that would help the human kind of interpret and understand what they were looking at and, and make some judgment calls about the veracity of that. Does that sound right?
Bertram Lyons: 36:28
Yeah, that’s right. And I’d say the human and/or the, uh, the workflow algorithm that’s taking data in and out. You know, think about it more like a moderation pipeline. C2PA says X, Medex says Y. They conflict, flag it. Or, they don’t conflict, they match. Send it through. You can think about it that way too, from like an automation perspective. Um, but also from a human perspective.
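That moderation pattern reduces to a few lines of logic. A toy sketch, with invented signal names, that treats each analysis as an independent signal and escalates disagreements to a human:

```python
def triage(c2pa_creator: str | None, format_analysis: str | None) -> str:
    """Route an uploaded file based on two independent provenance signals."""
    if c2pa_creator is None or format_analysis is None:
        return "review"  # a missing signal is itself worth a look
    if c2pa_creator == format_analysis:
        return "pass"    # the signals agree; let the file through
    return "flag"        # the signals conflict; escalate to a human analyst

# triage("Adobe Photoshop", "Adobe Photoshop") -> "pass"
# triage("Adobe Photoshop", "OpenAI Sora")     -> "flag"
```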
Chris Lacinak: 36:54
For the listeners of this podcast, which are largely DAM practitioners and people leveraging digital asset management in their organizations, I’d love to bring that back up, you know, bring us back up to the level of, of why should a Walt Disney or a Library of Congress or National Geographic or Museum of Modern Art, why should organizations that are practicing digital asset management with collections of digital files care? You know, we kind of delved into like legal things and social media things. But why should an organization that isn’t, uh, involved in a legal dispute, or, or some of the other things we’ve talked about, why should they care about this? And how does content authenticity play into the digital asset management landscape? Can you help us get some insights into that?
Bertram Lyons: 37:37
Yeah, that’s a great question, that’s near and dear to my heart. And we, we probably need hours to talk about all the reasons why, but let’s try to tee up a couple and then you can help me get to it. You know, I’m gonna, I’m gonna list a set and then we’ll, we’ll hit some of them. So, you know, let’s think about collection development, right? So just on the collection development side, we want to know what we have, what we’re collecting, what’s coming in. And we want to apply, and we do this as best we can today, um, in that community, with triage tools. Like, um, I’ll name one: Siegfried is a good example, built off of the UK National Archives’ PRONOM database. It really focuses on identifying file formats. So, you know, to date, when we’re doing collection development, we want to know what file formats are coming in. Um, but furthermore, actually, when we’re doing collection development, you know, I’m speaking of organizations like, like MoMA and Library of Congress, who are collecting organizations. We’re going to get to National Geographic and, uh, Disney, et cetera, shortly. You know, on that side, we need collection development tools to make sure we know what we have, right? It goes back to your earlier fakes question. We don’t want to let something in that’s different than what we think it is. And authentication techniques are not present, uh, in those organizations today. It’s purely metadata, metadata analysis that’s happening. Just extracting metadata, reviewing the metadata, uh, reviewing the file format based on, uh, these quote-unquote signatures that the UK National Archives has, has produced with the community over the years, which are great. You know, they’re really good at quickly saying this is a doc, a Word doc. This is a PDF. You know, they identify the type of file. They don’t authenticate the content in any way. So that’s one side of it. Um, quality control on big digitization projects is another great way to start to incorporate this. And of course we kind of do that with metadata techniques still. We’re looking for metadata. We don’t look at file structure, for example, so, uh, we don’t know exactly what happened to the file. We know what’s in the file, but we don’t always know what happened to the file. Authentication techniques are focused on that. Um, so I think there’s just ways that that could be added to the current pipelines in those communities. Um, then we think about the file, the content that we’re now storing on the preservation side. We don’t want to necessarily change the hash of files, right? When you’re thinking about libraries and museums and archives. So there’s, there’s probably not a play there to embed C2PA metadata, for example. At least not in the original. There’s probably a play to embed it in the, in the derivatives that are created for access, or, or, et cetera. That’s something to discuss. Um, on the creation side, you think about companies or organizations like Disney or National Geographic. Content credentials are an excellent mechanism, you know, that and watermarking, which is all, which is all part of the same conversation, and this is moving beyond visual watermarking to, uh, non-perceptible watermarking, to, to things like that, which are being paired with, with C2PA these days.
And the, the value there is about protecting your assets. Can you ensure that as this asset goes through its lifecycle, whether it’s in your DAM, um, in which case you want your DAM to be C2PA aware or watermark aware. You want your DAM to read these files and report: the C2PA manifest is here for this asset, it’s valid, and here’s the history. You know, that’s another way of securing your assets internally, but then as they go out of the company, whether into advertisements or, you know, being shared with patrons or however they’re being used out of the company. You know, it’s just another mechanism to ensure your, your copyright’s there, to ensure that you are protecting that asset and, and anything that happens to it is being directed back to you. Um, on the creative production side of the house, these tool sets that are being developed, that are really focused on ensuring content authenticity, they’re, they’re really being built for, for that need, right? They’re being built for you to have some way to protect your assets as they’re out in the world. That’s why I come back to intent again. It gives you, who have an intent to, you know, do this, the ability to do this.
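For the format-identification step in the collection-development pipelines Bert mentions, a DAM could shell out to Siegfried at ingest. A minimal sketch, assuming the `sf` command-line tool is installed; the JSON field names reflect Siegfried’s typical output and should be checked against your installed version:

```python
import json
import subprocess

def identify_format(path: str) -> list[dict]:
    """Ask Siegfried (the `sf` CLI, built on the PRONOM registry) to identify a file."""
    result = subprocess.run(
        ["sf", "-json", path], capture_output=True, text=True, check=True
    )
    report = json.loads(result.stdout)
    # Typical shape: {"files": [{"filename": ..., "matches": [{"ns": "pronom", "id": "fmt/...", ...}]}]}
    return report["files"][0]["matches"]
```

As Bert notes, this tells you what a file is, not what happened to it; authentication techniques fill that second gap.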
Chris Lacinak: 42:06
What is the risk? Let’s say that, um, these organizations, you know, all of which are using digital asset management systems today, choose not to pay attention to content authenticity.
Bertram Lyons: 42:19
It depends on what your company has, you know, what your organization collects and manages, but, you know, with these generative AI tools that are out there, content that makes it out of your company’s hands, if it’s yours and you created it and it has something that has meaning to you, um, if you don’t have any protections inside of those files in any way, it’s very easy for someone to, to take that, move it into another scenario, change the interpretation of it, and put it back out into the world. This happens all the time, right? So the, the why there is about protecting, protecting the reputation of your, of your company. That’s a big one. Um, the other why, there’s a, there’s a why that’s not about, you know, the public. It’s the internal why: increased efficiency and, you know, reducing mistakes. I don’t know how many times we’ve seen, um, companies or organizations that have, uh, misattribution as to what’s the original of, of an object and what’s the, you know, access copy. And in some cases lost the original and are only left with the access copy. And the only way to tell the difference would be some kind of database record, if it exists. If it doesn’t exist, you’d have to have someone with experience do some kind of one-to-one comparison. But with the content credentials, um, there would be no, no question at all between what was the original and what was a derivative of that original. From a file management perspective, I think there’s a lot of efficiencies to be gained there. Um, and then, in essence, potentially reducing labor, right? So if you think about National Geographic, they have photographers out all over the world doing, you know, all kinds of documentary work. If that documentary work, from the beginning, has content-credential-aware tools, there’s, there’s cameras out there, um, etc. Or maybe the content credentials don’t start at the camera, but they start at post-process, right, you know, into, into Adobe. I’m not, I don’t work for Adobe, I’m not trying to sell Adobe here, but I’m just using it as an example. But, you know, it goes into a product like that, that is, that is C2PA aware, for example. And that photographer can create all of that useful provenance data at that moment. As it makes it to National Geographic, if their DAM is C2PA, C2PA aware, imagine all of the reduction in typing and data entry that happens at that point. We trust this data inherently because it was created in this cryptographic way. The DAM just ingests it, creates the records, you know, updates and supplements the records. Um, there’s a lot of opportunity there both for DAM users and for actually DAM providers.
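The data-entry savings Bert describes amount to a mapping step: once a manifest validates, copy its assertions into the catalog record instead of re-keying them. A hypothetical sketch, reusing the toy manifest shape from the earlier example, with invented record fields:

```python
def populate_record(manifest: dict, manifest_is_valid: bool) -> dict:
    """Seed a DAM catalog record from a validated manifest instead of hand-keyed entry."""
    record = {"provenance_verified": manifest_is_valid}
    if manifest_is_valid:
        assertions = manifest["claim"]["assertions"]
        record["creator"] = assertions.get("author")       # hypothetical field names
        record["creating_tool"] = assertions.get("tool")
        record["edit_history"] = assertions.get("edits", [])
    return record
```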
Chris Lacinak: 45:07
Yeah, so to kind of pull it up to the, maybe the most plain-language sort of, uh, statements or questions that, that this answers would be, again, kind of going back to who created this thing. So a bad actor edits something that’s put out there, posts it, you know, maybe under a, uh, an identity that looks like an entity, Walt Disney, for instance, and is trying to say this thing came from Walt Disney. Uh, so this, this sort of suite of tools around content authenticity would let us know who actually created that thing and, and allow us to identify that it was not in fact Walt Disney in that hypothetical. It also sounds like, um, the ability to, um, help identify, you know, something that’s stated as real and authentic, whether it is in fact real and authentic. I’ve got this video, I’ve got this artifact, an image of an artifact. Is this, is this digital object a real thing or not? And vice versa. Someone claiming, and I think we’ll see more and more of this, people claiming that something that is real is AI generated, right? That’s not real. That’s AI generated. That doesn’t exist. Uh, the ability to actually, in fact, prove the veracity of something as well, that’s claimed to be non-authentic. Um, those are kind of three things that I think what we’ve talked about today points at, like, why would this be important for an organization, to be able to answer those questions? And you can imagine, in the list of organizations we listed there, that there could be a variety of scenarios in which answering those questions would be really critical to, to those organizations.
Bertram Lyons: 46:51
You give yourself ability to protect yourself and to protect your assets.
Chris Lacinak: 46:56
Right. So, you have really drilled home today the importance of this ecosystem that exists, a bunch of people playing, working, agreeing on, and building tool sets around an ecosystem. Are you seeing DAM technology providers opt into that ecosystem yet? Are there digital asset management systems, and I know you don’t know all of them, so I’m not asking for a definitive yes or no across the board, but are you aware of any that are adopting C2PA, implementing Medex Forensics, or similar types of tools in their digital asset management platforms?
Bertram Lyons: 47:43
Not yet, Chris. I haven’t seen a DAM company buy into this yet. To be honest, this is very much emerging technology, and I think a lot of people are waiting to see where it goes and what the adoption is. I will say that two years ago, when I started collaborating within the C2PA schema team, I felt there was very little chance of quick uptake. I thought this was a huge mountain to climb: to get technology companies on board to create C2PA-aware technology, whether they make hardware, whether they’re camera or phone companies, whether they’re post-processing companies like Adobe, whether they’re browsers and services like Chrome and Google, whether they’re search engines, whether they’re social media. I thought, this is just a mountain. In two years’ time, however, and I don’t know if it was accelerated by all that’s happened with AI so quickly and the fact that interest has elevated up to the government level (we have a presidential executive order on AI that mentions watermarking and, basically, C2PA), there’s been so much change that all of a sudden that mountain feels a lot smaller to climb. It can be done. Just in the past few months, massive organizations have jumped into the Content Authenticity Initiative, from Intel to NVIDIA; important players in that ecosystem are now coming on board. So I think there’s a chance here, and I think we will see DAM providers taking a much stronger look. I will say that in the digital evidence management community, which we call DEMS, there is definite interest in authentication. It’s already happening in the DEMS world, and I think it will bleed over into the DAM world as well, in that content credentials coming into these systems are another signal the systems can automatically work with to populate and supplement what’s happening behind the scenes. And we know that DAMs work closely with anything they can use to automate their pipelines and make things more efficient for the end user.
Chris Lacinak: 50:18
So I know you’ve done a lot of work. We’ve talked today about the law enforcement and legal components of this, and about digital asset management within collecting institutions and corporations and things like that. But you’ve also done some really fascinating work, I know, within journalism and within human rights. Could you talk a bit about that, and maybe tell us some of the use cases where Medex has been used in those contexts?
Bertram Lyons: 50:52
The context of journalism and human rights organizations is really one of collecting and documenting evidence. On the human rights side, a lot of it is collecting evidence of something that has happened, and that evidence is typically going to be video or images. We have people documenting atrocities, or documenting any kind of rights issues that are happening, and wanting to get that documentation out, and also to have that documentation trusted so it can be believed, so it can actually serve as evidence of something, whether that’s evidence for popular opinion or evidence for a criminal court, or for the UN, both and all. So there are often challenging questions with that kind of evidence around documenting its authenticity. In some ways, things like C2PA have come out of that world. There’s an effort that WITNESS, out of New York, worked on, and I know they had other partners in it whose names I don’t have, so I’ll just say I know it’s not just WITNESS, but they’ve collaborated in efforts for many years to create these camera-focused systems that allow that authentication signal to be stored and processed within the camera upon creation, and then securely shared out from that camera to another organization or location with all of that authentication data present. What I mean when I say authentication data is things like hashes and dates and times. And usually the more challenging thing is to do it without putting the name of the person who created the content in the authentication, because it’s a dangerous thing for some people to have their names associated with evidence of a human rights atrocity. That’s a really challenging scenario to design for, and human rights organizations have been really focused and put a lot of effort into trying to figure it out. You don’t want to reduce people’s ability to document what’s happened by making it too technologically challenging or costly, and you don’t want to add harm: you don’t want the person who created the footage to be exposed. But at the other end of the spectrum, you need someone else to be able to trust it, even though you can’t say who made it. So there’s been a lot of excellent work, and we’ve been involved a lot on the side of helping to provide input into the authentication of video from these kinds of scenarios, to add weight to trust. Ultimately, it’s all around trust. Can we trust it? What signals do we have that allow us to trust it, and do they outcompete any signals that would lead us to distrust it? That’s been really exciting, and that work is continually going on. I know there are a lot of organizations involved, but we’ve partnered closely with WITNESS over the years, and they do excellent work.
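As a rough illustration of that “hashes and dates and times, but no names” design constraint, here is a minimal sketch of the kind of authentication record a capture tool might produce. The record structure and the device-keyed signature are assumptions for illustration only, not the format of any specific project; real systems in this space typically use public-key signatures rather than the shared-secret HMAC shown here.

```python
import hashlib
import hmac
import json
import time

def make_authentication_record(video_bytes: bytes, device_secret: bytes) -> dict:
    """Produce a record that can later prove integrity and capture time
    while carrying nothing that identifies the person who filmed it."""
    record = {
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "captured_at": int(time.time()),  # Unix timestamp at creation
        # Deliberately no name, account, or location trail of the witness.
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # The key belongs to the device, not the person, so verifying the
    # signature confirms the capture without exposing who held the camera.
    record["signature"] = hmac.new(device_secret, payload, hashlib.sha256).hexdigest()
    return record
```

Anyone holding the record and the footage can confirm the bytes are unchanged since capture, which is exactly the “add weight to trust” role Bert describes, without the record ever naming the witness.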
On the journalism side, it’s a little different. There you have journalists who are writing investigative reports, and their job, in a slightly different way, is to receive or acquire documentation of world events or local events, and to quickly assess the veracity of that content so they can make the correct interpretation of it, and also decide the risk of actually using it as evidence in a piece, in an article. We work closely with a variety of organizations; The New York Times is a good example of a group we work closely with. It’s not always easy. Even if you’re receiving evidence from a particular human being in some location, you want to evaluate it with as many tools as you can. You want to watch it, you want to look at its metadata, you want to look at its authentication signals, and you want to ultimately make a decision: are we going to put this in as the key piece of evidence in an article? It’s never first person from the journalist’s perspective; they’re usually not the first person. They’re taking the content from someone who delivered it to them, and they can’t prove that person is first person either. They have to decide how first-person the content in this video or image or audio is. So I don’t know if that answers your question, but you see a lot of need for the question of content authenticity in both of those worlds, and a lot of focus on it.
Chris Lacinak: 55:53
Yeah. So, maybe to pull it up to a hypothetical, or even hint at a real-world example here: let’s say a journalist gets a piece of video out of, say, Ukraine or Russia while reporting on that war, and they’ve gotten that video through Telegram or something like that. Their ability to make some calls about its veracity is really critically important. They could use Medex and other tools to say, for instance: yes, if it looks like cell phone footage, it was in fact recorded on a cell phone; yes, it came through Telegram; no, it was not edited; no, it was not created with an AI generation tool or deepfake software, things like that. That would not tell them definitively that they can or can’t trust it, but it gives them several data points that, together with other information, would be useful for making a judgment call on whether they can trust it and use it in their journalism.
Bertram Lyons: 57:04
That’s right. Yeah, it’s always the human at the end, and I’ve stressed this: as much as I like automated tools, in scenarios like that we really need a human to say, this is my interpretation of all of these data points I’m seeing. And that’s a great example, and a real example; we actually dealt with it. Remember when that war originally broke out, there was fighting around a nuclear facility there. It was still under the control of Ukraine, and there were Ukrainian scientists in the facility sending out Telegram videos saying: we’re here, there’s bombing happening around this nuclear facility, this is extremely dangerous, please stop. The video was coming out through Telegram, but the only way to evaluate it was from a secondary encoded version of a file that initiated somewhere, was passed through Telegram to a Telegram channel, and was then extracted by news agencies, who wanted to say as quickly as possible: is this real? We want to report on this, we want to amplify this information coming out of Ukraine. It’s challenging. In the files we were asked to evaluate in that case, we could say: yes, it was encoded by Telegram, and it has some signals left over, signals that would only be there if this thing originated on a cell phone device, on a Samsung, for example. Maybe that’s all the signal you have, and you have to make a judgment call at that point. Now, in the future, what if Telegram embedded C2PA data, and that was still there? Maybe that’s a stronger signal at that point.
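A small sketch of how those data points might be laid out for the human making the call; the signal names are paraphrased from the example above and invented for illustration. They are not Medex’s actual checks or output.

```python
def signal_report(signals: dict[str, str]) -> str:
    """Format each forensic observation next to what it suggests.
    This only organizes the evidence; the trust decision stays human."""
    width = max(len(name) for name in signals)
    return "\n".join(f"{name.ljust(width)} : {finding}"
                     for name, finding in signals.items())

# Loosely modeled on the Telegram video discussed above.
print(signal_report({
    "container encoder": "consistent with Telegram re-encoding",
    "capture traces": "residue consistent with a Samsung phone pipeline",
    "edit indicators": "none detected in the available copy",
    "C2PA manifest": "absent (Telegram does not embed one today)",
}))
```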
Chris Lacinak: 59:00
Yeah. Or combined. It’s another data point, right?
Bertram Lyons: 59:08
Yeah, it’s just another data point, right?
Chris Lacinak: 59:09
Great. Well, Bert, I want to thank you so much for your time today. In closing, I’m going to ask you a totally different question, one I’m going to ask all of our guests on the DAM Right Podcast, which I think helps shed a little light on the folks we’re talking to and gets us out of the weeds of the technology and the details. And that question is: what’s the last song you liked or added to your favorites playlist?
Bertram Lyons: 59:33
The last song that I added to my liked songs was “Best of My Love” by The Emotions.
Chris Lacinak: 59:43
That’s great. Love it.
Bertram Lyons: 59:46
Ha ha. You know, I’ve actually probably added that three or four times over the years. It’s probably on there in different versions. That’s a great, great track. I used to have a 45 of it. You know that track.
Chris Lacinak: 59:59
Yep. It’s a good one.
Bertram Lyons: 60:00
I recommend you play it as the outro for today’s DAM Right episode.
Chris Lacinak: 60:03
If I had the licensing fees to pay, I would. All right, well, Bert, thank you so much for all of the great insight and contributions you made today. I really appreciate it, and it’s been a pleasure having you on the podcast.
Bertram Lyons: 60:17
Thanks for having me, Chris.
Chris Lacinak: 60:18
Thanks for listening to the DAM Right podcast. If you have ideas on topics you want to hear about, people you’d like to hear interviewed, or events you’d like to see covered, drop us a line at [email protected] and let us know. We would love your feedback. Speaking of feedback, please give us a rating on your platform of choice, and while you’re at it, make sure to follow or subscribe so you don’t miss an episode. If you’re listening to the audio version of this, you can find the video version on YouTube at @DAMRightPodcast and on Aviary at damright.aviaryplatform.com. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go and find some really amazing and free DAM resources from the best DAM consultants in the business at weareavp.com/free-resources. You’ll find things like our DAM Strategy Canvas, DAM Health Scorecard, and the “Get Your DAM Budget” slide deck template. Each resource has a free accompanying guide to help you put it to use. So go and get them now.
Insights from the Henry Stewart DAM LA Conference 2024
28 March 2024
The Henry Stewart DAM LA conference brings together industry professionals to discuss the latest trends, challenges, and innovations in Digital Asset Management (DAM). This year’s conference showcased a vibrant atmosphere filled with networking opportunities, insightful presentations, and engaging discussions. Here’s a look at the key themes and takeaways from the event.
Day One Highlights: The Buzz of Excitement
As the conference kicked off, there was an unmistakable buzz in the air. Attendees from around the world were ready to dive into the rich content and networking opportunities that the conference offered.
Emerging Themes and Trends
Christine Le Couilliard from Henry Stewart shared insights about the evolving landscape of DAM. She noted a significant focus on global expansion, with DAM serving as a catalyst for growth within organizations. The personalization of content outreach was highlighted as a key agenda item, alongside the rise of AI and generative AI technologies.
Attendees expressed a keen interest in how DAM systems can facilitate not just content management but also strategic planning across larger enterprises. This focus on enterprise-wide integration reflects a growing recognition of DAM’s central role in organizational success.
Voices from the Floor: Attendee Insights
The conference featured a variety of perspectives from attendees. Here’s what some of them had to say about their expectations and excitement for the sessions ahead.
Amy Rudersdorf’s Insights
Amy Rudersdorf, Director of Consulting Operations at AVP, shared her enthusiasm for learning how major brands are managing licensed content efficiently. She emphasized the importance of automating rights management to streamline operations and reduce the reliance on manual processes.
Matt Kuruvilla on Rights Management
Matt Kuruvilla from FADEL expressed excitement about the ongoing discussions surrounding licensed content and how brands handle rights management. He highlighted that the growing complexity of content creation necessitates effective automation solutions.
Yonah Levenson’s Perspective
Yonah Levenson, co-academic director of the Rutgers DAM Certificate Program, reflected on the evolution of DAM discussions over the years. She noted that early conversations revolved around basic concepts, while current dialogues focus on integration with other systems and the advanced capabilities of DAM platforms.
New Innovations in AI and Automation
AI and automation were hot topics, with many participants eager to explore how these technologies can enhance DAM workflows. The potential for AI to expedite tagging processes and improve metadata management was a particular area of interest.
Nina Damavandi on AI Applications
Nina Damavandi from USC shared her excitement about leveraging AI and machine learning to expedite tagging processes, a challenge that many organizations face. She highlighted the need for efficient data management to enhance asset utilization.
Leslie Eames on Automation
Leslie Eames from the Maryland Center for History and Culture discussed her interest in automating metadata processes. She emphasized the importance of ethical considerations in using AI tools to ensure equitable benefits from shared data.
Networking and Community Building
Networking opportunities were abundant, with attendees sharing stories, challenges, and successes. The conference fostered a sense of community, reminding participants that they are not alone in their struggles.
Billy Hinshaw on the Value of Networking
Billy Hinshaw from Bissell Homecare emphasized the value of meeting peers and hearing their stories. He noted that these connections are crucial for professional growth and learning from one another’s experiences.
Key Takeaways from Day Two
As the conference progressed into its second day, several themes emerged that encapsulated the evolving nature of DAM.
The DAM Ecosystem
Emily Somach from National Geographic highlighted that DAM systems are central to a broader ecosystem. She stressed the need for seamless integration with various other systems used within organizations, ensuring that all components work harmoniously together.
Christina Aguilera on Community and Collaboration
Christina Aguilera, Vice President of Product for Enterprise Technology at Crunchyroll, shared her multifaceted career journey and the importance of community in professional development. She underscored how the relationships formed at conferences like Henry Stewart can open doors to new opportunities.
Technology Providers’ Perspectives
The conference also featured insights from various DAM platform providers. Each shared their unique offerings and how they differentiate themselves in a competitive landscape.
Orange Logic’s Cortex Platform
Christopher Morgan-Wilson from Orange Logic introduced Cortex, an enterprise-level asset management software that adapts to user needs. He explained how this flexibility allows organizations to consolidate multiple DAM solutions into a single source of truth.
FADEL’s Innovative Solutions
Matt Kuruvilla highlighted FADEL’s focus on automating content rights management, emphasizing the necessity for brands to manage licensed content efficiently.
Bynder’s Usability Focus
Brian Kavanaugh from Bynder discussed the platform’s emphasis on usability and configurability, ensuring that organizations can maximize their DAM investments.
Looking Ahead: The Future of DAM
As the conference drew to a close, discussions turned to the future of DAM. The concept of DAM 5.0 was debated, with attendees speculating on how the landscape might evolve.
AI’s Role in the Future
Participants expressed varying opinions on the role of AI in DAM. While many were optimistic about its potential, there were also concerns about the limitations of current technologies, particularly in terms of metadata entry and contextual understanding.
Content Authenticity and Security Concerns
One topic that received less attention than anticipated was content authenticity. With the rise of generative AI and deep fakes, the need for robust security measures is becoming increasingly critical.
Final Thoughts
The Henry Stewart DAM LA conference not only provided valuable insights into current trends but also fostered a sense of community among professionals in the field. As the industry continues to evolve, the conversations sparked at this event will undoubtedly shape the future of DAM.
For those looking to stay updated on the latest in DAM, it’s essential to engage with this community and share experiences, challenges, and solutions. The power of collaboration and knowledge sharing is the key to navigating the complexities of digital asset management.
Thank you for joining us in this recap of the Henry Stewart DAM LA conference. We look forward to seeing how these discussions will influence the future of DAM in the coming years.
Transcript
CHRIS LACINAK
So here I am at home, about to leave for Henry Stewart DAM LA. I’m excited, although there’s two to three feet of snow on the ground here. I have to say it’s a gorgeous day and I wouldn’t otherwise want to leave, but for Henry Stewart DAM LA, I’m down. Let’s go. Henry Stewart DAM LA, here come AVP and the DAM Right Podcast.
[Music]
All right, here we are, day one of the Henry Stewart conference. Let’s go in and let’s talk to some people. I’m here with Christine Le Couilliard from Henry Stewart. Christine, how’s the conference going so far?
CHRISTINE LE COUILLIARD: 01:15
Oh Chris, it’s great. There’s a real buzz, a real buzz about the place. We’ve seen folks coming from all over, not just the US but overseas, and they’re just hungry for content, hungry for networking, hungry to see what’s next on the agenda.
CHRIS LACINAK: 01:33
Yeah, and you’ve got such unique insights because you help put the program together for Henry Stewart every year. So are there topics or themes that you think are emerging this year that are new?
CHRISTINE LE COUILLIARD: 01:42
Great question. I think we’re looking at global expansion: how DAM is that helpful catalyst within an organization to help companies grow and expand, along with the whole personalization of content, outreach and so on. That is certainly high up on the agenda. Connected to that is AI, generative AI. Where’s that leading? It’s scary, but there’s an opportunity there too, to make it work for you.
CHRIS LACINAK: 02:13
I’m here with Amy Rudersdorf, Director of Consulting Operations at AVP, and we’re starting day one of the Henry Stewart DAM LA Conference. Amy, I’d love to hear: what are you excited to hear about at this conference?
AMY RUDERSDORF: 02:27
I’m really excited to see what the Maple Leafs are doing with their DAM. They are talking about moving from implementation, and moving a million objects into their DAM in a short period of time, to bringing their strategic plan to the larger organization, so bringing it enterprise-wide, all while the Maple Leafs are playing and they’re adding new assets all the time. So it’ll be exciting to see what they’re doing.
CHRIS LACINAK: 02:56
I’m here with Matt Kuruvilla from FADEL. Matt, what are you most excited about hearing about at this conference this year?
MATT KURUVILLA: 03:03
Well, I’m tempted to say spending time with you, Chris, because this has already been so much fun. But I am really excited to see how brands are handling all their licensed content and managing those rights, making sure that’s easy and automating that, because I just feel like that’s a bigger part of how content’s being made nowadays. So you’ve got to have a way to solve it that doesn’t require a whole lot of humans. So I’m excited to hear how people are doing that today.
CHRIS LACINAK: 03:28
I’m here with Yonah Levenson. Yonah, can you tell us a little about who you are first?
YONAH LEVENSON: 03:32
Sure, I am Co-Academic Director of the Rutgers DAM Certificate Program at Rutgers, the State University of New Jersey, along with David Lipsey, who is the other Co-Academic Director. And I’m also a metadata and taxonomy strategy consultant.
CHRIS LACINAK: 03:48
Great, great. And you’ve been coming to Henry Stewart for a long time now.
YONAH LEVENSON: 03:53
This is true.
CHRIS LACINAK: 03:54
What would you say, what are you seeing as some of the themes or trends over these years, and kind of where we are today?
YONAH LEVENSON: 04:01
So way back when, at the beginning, it was “what’s a DAM?” And then it was “how do I update my DAM?” And then it was “how do I replace my DAM?” And then it became “how do I integrate my DAM with other systems?” And now it’s “how do I get my DAM not just to integrate with other systems, but also to push the envelope, and how much can I do within and across my DAM?” And there’s also been, I think, a much bigger interest in metadata and taxonomy, because it’s being recognized that you have to have a way to have commonalities and normalize language across multiple systems if you’re going to do it right. That way, when senior management says, “Hey, can you get me a report on this?” you’re not going to 15 different places and then having to figure out, does this really mean that?
CHRIS LACINAK: 04:57
Right, right. Okay, great. Well, thank you for that insight. I appreciate it.
YONAH LEVENSON: 05:01
You’re welcome.
CHRIS LACINAK: 05:02
All right, I’m here with Phil Seibel from Aldis. Phil, what are you most excited about at this conference this year?
PHIL SEIBEL: 05:08
Yeah, honestly I’m just really excited to see what people are doing, what’s new in the industry, how things are trending. It always feels like at this conference that people are both looking to share everything they’ve learned and find new things and it’s really interesting to see where people have made up ground and where they’re still looking to make up ground in the industry and I really like to feel the pulse of things here so that’s what I’m looking forward to.
CHRIS LACINAK: 05:29
Awesome, all right, here to feel the pulse. Sounds good. Thank you, Phil.
I’m here with Nina Damavandi from USC, Digital Asset Manager. And Nina, I’d love to hear what are you most excited about, any particular topics or sessions or anything at the conference this year?
NINA DAMAVANDI: 05:43
Yeah, I think the main thing I’m excited about is how companies are using AI and machine learning in their DAM workflows to expedite the tagging process. That’s kind of one of our biggest struggles at USC is getting enough data on our assets and so if there is a way to make that faster and I look at our assets they share so much in common like there should be a way to make this easier without so much human labor needed.
CHRIS LACINAK: 06:09
Yeah, and we’re about halfway through the first day so have you gotten any nuggets yet?
NINA DAMAVANDI: 06:15
Yeah, there were a couple of good sessions this morning on the topic of AI like Netflix gave a great presentation so I think they are much further ahead with it than we are but it’s really cool to see what the potential is.
CHRIS LACINAK: 06:27
All right, I’m here with Billy Hinshaw, BISSELL Homecare. Billy, what are you most excited about at the conference this year?
BILLY HINSHAW: 06:32
Just the continuing networking opportunities, meeting so many people, hearing their stories, hearing about what they do and seeing where there’s similarities in terms of the accomplishments and the struggles that they deal with. I think the biggest benefit of attending these conferences is that we realize we’re not alone. We might be on an island, you know, at our particular companies, but that’s not the reality as far as our industry is concerned, nor should it ever be.
CHRIS LACINAK: 06:59
Yeah, well that’s a fantastic summary of the value of Henry Stewart for sure. Now you’re a past presenter at Henry Stewart and you’re presenting this year, popular sessions. Can you tell us a little bit about what you’re presenting on tomorrow?
BILLY HINSHAW: 07:11
I’m presenting on the different responsibilities that DAM professionals have to balance and how to best manage that without losing your mind, basically. Yeah, that’s important, to keep your mind intact.
CHRIS LACINAK: 07:26
Awesome, well thank you Billy, I appreciate it.
BILLY HINSHAW: 07:28
Yep, thank you Chris.
CHRIS LACINAK: 07:29
I’m here with Leslie Eames. Leslie, can you tell us who you are?
LESLIE EAMES: 07:33
Yeah, I’m Leslie. I’m the Director of Digital Collections and Initiatives at the Maryland Center for History and Culture.
CHRIS LACINAK: 07:38
Great, and are there any particular topics or sessions or anything that you’re most excited about this year at the conference?
LESLIE EAMES: 07:45
Yes, I’m really looking for ways to automate our metadata processes so we can ingest more of our data into our DAM. So looking at machine learning and AI tools that can help us, and then also exploring some of the ethical implications behind those, knowing that, you know, we want to be deliberate about who’s benefiting from the data we’re sharing when we use those tools.
CHRIS LACINAK: 08:11
Yeah, the ethics part of that is a very important part of that conversation. That makes sense. We’re about halfway through the first day so far, so have you gotten what you’re looking for yet, or are you hopeful to find it in the coming day and a half?
LESLIE EAMES: 08:25
I feel like it’s coming together slowly. I’m getting pieces here and there from a lot of different sources, so I’ve learned a lot and hoping to learn more and make connections with others that continue to grow my knowledge.
CHRIS LACINAK: 08:39
All right, so I’m here with Emily Somach from National Geographic. Emily, thanks for talking to me. I appreciate it. So we’re nearing the end of the conference on day two. Are there any particular themes or takeaways that you found interesting this year?
EMILY SOMACH: 08:53
Yeah, definitely. I think the biggest takeaway and theme is that the DAM is really at the center of an ecosystem. We all have other systems that are integrating with it and communicating with it, so always keep that in mind when you’re working in the DAM or changing things in the DAM or building a DAM. Just know that eventually it’s going to be connecting and talking to all these other systems that either you or your coworkers or other teams in your organization are using. So I think that’s just an important thing to keep in mind. And then, yeah, always thinking about the next step and the future and what you can do to set yourself up for success. Migration is just a big part of our world, so always know that you might be having to migrate down the road or bring stuff in from another system eventually, and keep that in mind, making sure everything works together and is standardized.
CHRISTINA AGUILERA: 09:43
I am Christina Aguilera and I have multiple jobs. So we’ll start off with my most recent. I just joined Crunchyroll, so I’m the Vice President of Product for Enterprise Technology, and Enterprise Technology to Crunchyroll is basically the entire studio workflow. So it is amazing the way that we incorporate asset management into the operations of getting content published to a platform. So that’s an incredible opportunity. I’m also the president of Women in Technology Hollywood Foundation. That is my nonprofit where I get to spend all my passion. We do a lot of professional development opportunities. We’ve got mentorship programs. We do live events in the spring and the fall; the spring is technology focused, the fall is leadership focused. So it’s a great combination and a great network. And then also I am launching a new business with some incredible women out there. In March, on International Women’s Day, we launched the brand, and it’s called Enough. Basically, we are going out there to all of those women leaders globally and making sure they know they are enough. So this is a professional development platform as well as a community, and that platform launches April 17th. So that is our brand reveal, our brand launch that’s happening in April. And it’s really, really exciting. I think it’s going to change the world.
CHRIS LACINAK: 11:11
Wow.
CHRISTINA AGUILERA: 11:12
Yeah.
CHRIS LACINAK: 11:12
Wow. Wow. So you’re a powerhouse.
CHRISTINA AGUILERA: 11:14
I love it.
CHRIS LACINAK: 11:15
You’re doing all kinds of things. That’s amazing.
CHRISTINA AGUILERA: 11:17
I’m about this close to publishing a book too.
CHRIS LACINAK: 11:19
Fantastic. That’s amazing. You’ll have to tell us how you do all these things at some point.
CHRISTINA AGUILERA: 11:23
Very little sleep.
CHRIS LACINAK: 11:24
And so what do you think the value of coming to the Henry Stewart Conference is?
CHRISTINA AGUILERA: 11:29
You know, I’ve been involved with the Henry Stewart Conference for probably over 20 years now. I’ve known them throughout my entire career, and the biggest value to me is the people: the people you meet, the people that you grow to connect with and build relationships with. You don’t know when you’re first meeting somebody if they’re gonna open that future door for you.
CHRIS LACINAK: 11:55
Yeah.
CHRISTINA AGUILERA: 11:55
So my career has taken so many different paths, and the people that I’ve met at Henry Stewart have opened many of those doors. So it’s an incredible community of people. It’s a great place to come and connect on like ideas and like concepts, and it doesn’t matter what industry we’re in or what our job title is, because we all have similar problems in the workplace and we come here to commiserate and build relationships and help each other evolve in our careers.
CHRIS LACINAK: 12:24
Thanks for listening to the DAM Right podcast. If you have people you want to hear from, topics you’d like to see us talk about, or events you want to see us cover, please send us an email at [email protected]. That’s [email protected]. Speaking of feedback, please go to your platform of choice and give us a rating. We would absolutely appreciate it. While you’re at it, go ahead and follow or subscribe to make sure you don’t miss an episode. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go and find some really amazing and free resources focused just on DAM at weareavp.com/free-resources. That’s weareavp.com/free-resources. You’ll find things there like our DAM Strategy Canvas, our DAM Health Scorecard, and the Get Your DAM Budget slide deck template. Each one of those also has a free accompanying guide to help you put it to use. So go get them now.
Let’s turn to the DAM platforms in the room now. I’m gonna ask them each a series of questions, and I’m gonna edit it so that you can hear their answers side by side. Before we get into the questions, I’ll introduce you to each of them. Christopher Morgan-Wilson from Orange Logic. Shannon DeLoach from Censhare. Melanie Chalupa from Frontify. John Bateman from Tenovos. Brian Kavanaugh from Bynder. Bróna O’Connor from MediaValet. Jake Athey from Acquia. Tell us about your platform and what differentiates you from the other platforms in the room today.
CHRISTOPHER MORGAN-WILSON: 14:01
Orange Logic has created Cortex. Cortex is an enterprise-level asset management software that’s actually able to adapt the way it presents itself depending on the user. If you think about it, a lot of companies out there will buy multiple DAM solutions; their teams and departments will kind of go rogue and buy different software. But now there’s a big push for companies to consolidate all that into one central source of truth, and that’s where Orange Logic comes in with Cortex.
SHANNON DELOACH: 14:28
Censhare is an omni-channel DAM, PIM, and CMS platform. What differentiates us is that it’s fully integrated out of the box, so there are no outside integrations needed to get that full functionality. Those three functionalities, DAM, PIM, and CMS, are built on a common structure, so it’s very flexible. Really where we stand out is if you need DAM and PIM; our niche is that you can buy one platform and have them both. So that’s what we’re very proud of.
MELANIE CHALUPA: 14:57
Frontify is a brand-centered solution that’s focused on all facets of your brand. Of course, a critical element of your brand is going to be the DAM itself, but on top of that we also have the ability to digitize all of your guidelines. And a lot of our clients will also include a multi-portal setup. So looking at your corporate brand, the assets that are associated with that, as well as the guidelines, sometimes campaign toolkits, but also being able to support product brands, employer brand, really every facet of your brand. So that’s kind of our unique differentiator.
JOHN BATEMAN: 15:24
Tenovos is about a five-year-old company, so relatively young in the company of a lot of legacy DAM providers. We like to think that we’re differentiated because of the architecture of our platform, built on microservices, APIs, and very flexible, modern, very scalable technology. So that sets us apart. It really means that we can fit into different ecosystems in people’s MarTech stacks, so it’s very easy to connect with other platforms and other technologies. I think that’s one of the key differentiators.
BRIAN KAVANAUGH: 16:11
Bynder is a leading digital asset management platform according to Forrester, as well as our G2 customer reviews. And I would say what sets us apart is first and foremost use cases in the enterprise, but when you look at Bynder, it’s really usability and configurability of the platform, the most integrations and the biggest marketplace in terms of plugging into other platforms, and then a leading AI strategy centered around search as well as generative AI. So those are three things that come to mind, Chris, but there’s certainly more as well.
BRÓNA O’CONNOR: 16:40
MediaValet is a Canadian DAM. We are a digital asset management vendor built on Microsoft Azure; we are the only platform built exclusively on Microsoft Azure. We help customers across a variety of industries, whether they are higher ed, non-profit, manufacturing, or media and entertainment, of all sizes, from SMB through to enterprise, and we work with those organizations to deliver content at scale. So very much a core DAM platform that delivers seamlessly through integration, so that your users can work in the systems that they love but have a great DAM platform at its base. In terms of setting us apart, I think we’re very proudly rated the highest-security vendor for DAM; we’ve got a 99% security rating, so we’re in a league of our own in that area.
JAKE ATHEY: 17:28
Acquia is the open digital experience platform, and we provide content management, digital asset management, product information management, and customer data management solutions. Acquia acquired Widen in 2021, which is where I come from as one of the early pioneers in the DAM space, and I’ve been in this space for 20 years. To focus on some of our strengths: our strengths are in flexibility and adaptability, really leaning into that open promise of Acquia, and the fact that we integrate with anything, which is really key among our roadmap priorities as well; having a scalable performance and governance model; and being one of the few combined DAM and PIM platforms on the market.
CHRIS LACINAK: 18:12
For this next question, taking AI off the table, what in your roadmap is your company most focused on or most excited about?
CHRISTOPHER MORGAN-WILSON: 18:20
Orange Logic. One of the big things right now is different file formats. Last year there was a huge push for MAM, media asset management, or I guess multimedia asset management, so video. We’re seeing a lot of requests for working with 3D files, project files, and resource management, so not only being able to handle the assets but the people working on those assets, their time, the budget. Again, it’s that central source of truth where everything regarding the asset, from ideation to creation, all the way to final approval and pushing out to other platforms, is handled within the DAM.
SHANNON DELOACH: 18:55
Censhare. We’re most focused on our cloud initiative, right? We’re going cloud native. It’s going to offer much more flexibility and faster speed to deployment for our customers. So that’s really the aim, to get our customers a usable system more quickly.
MELANIE CHALUPA: 19:13
Frontify. Something we’ve been focused a lot on lately is templates. We have a template offering within our brand portal solution, and of course templating, and being able to scale production across several channels, is such a critical part of leveraging and getting the most out of your assets while staying brand compliant. So something we’re looking to do right now is to further enhance that tool and be able to include things like video templates, and being able to adapt templates for each channel in one go. I think that’s something that has been really resonating with our clients, and we’re looking forward to offering more in that realm.
JOHN BATEMAN: 19:46
Tenovos. When the company set about developing a DAM platform, in the back of our minds was how people can get value from the assets, and how you derive the most value. Previously, you couldn’t really see how things were performing out in the wild once they left the DAM. So our ideas, really from the start, from the inception of Tenovos, have been around that smarter use of the assets, and smarter use of your resources, guided by the data that you’re pulling back from the assets through all your different channels, whether it’s your social, your e-commerce, etc. So that’s a big focus for us at the moment.
BRIAN KAVANAUGH: 20:36
Bynder. Composable architecture, and just using a best-in-breed approach, for sure.
CHRIS LACINAK: 20:42
Okay, that’s a lot of big words. Can you break that down for us a little bit?
BRIAN KAVANAUGH: 20:47
What we’re most excited about is organizations taking what we call a best-in-breed approach to their MarTech stack: not being dependent on a single suite provider or single platform, but identifying needs and capabilities for DAM and also for adjacent technologies around CMS, marketing automation, what have you, using the best vendor for each and integrating their platforms using APIs. And another big theme tied right into that is delivery of assets: being more intelligent, more automated, more sophisticated about how assets get delivered out of the DAM to the downstream platforms that the customer touches.
BRÓNA O’CONNOR: 21:24
MediaValet. We’ve got an exciting roadmap ahead of us this year, which we are finalizing and building out in its different components, but something that’s coming up very soon that I think you’re going to hear from us about is templating. We’re working with a great partner called Mark, and we will be releasing a templating solution in Q2 which will really enable our marketing customers to drive better impact by enabling their teams to work efficiently with their campaign materials, drive more campaigns out the door, and then leverage their other resources on more strategic initiatives. So it’s really about empowering your team to do more, which, in the times we’re in, is really important for our marketing organizations, to drive that efficiency.
JAKE ATHEY: 22:03
Acquia. Top non-AI priorities of 2024: we have the priority of integrated workflows. We want more native integrations and more partnerships to really help our customers optimize their content operations, as well as to connect assets and metadata across the digital experience. We also have new insights, analytics, and reporting capabilities coming, with new data visualizations and more analytics API endpoints, so that customers can work with their DAM data and their DAM reports within whatever business analytics tools they use. We also have a new search experience coming with enhanced usability, accessibility, and some added features. And of course we’re advancing our PIM and DAM combination with added PIM and syndication capabilities that we’re very excited about for our customers that are makers and marketers of products.
CHRIS LACINAK: 22:57
Putting AI back on the table, which of the following are you most focused on in the application of AI: content generation, search, or tagging and description?
CHRISTOPHER MORGAN-WILSON: 23:07
Orange Logic. So it’s a good mix of everything. Right from the get-go we’ve always focused heavily on search, because there’s really no point in having a DAM if people can’t find what they’re looking for. I used to be an asset manager on Disney’s AFV for about seven years, so I was the one doing the tagging, and it’s so hard to know what people are going to search for. So if you use AI for the tagging and the searching, that kind of gives you a level up on surfacing those assets. And then the third one: we are now starting to focus on content generation, whether that’s actual images based off of other assets in your DAM, or document creation, like being able to create a brief before you kick off that project. So you’ve cheated and said all three; I asked you to pick one. Oh, I’m sorry. That’s okay, that’s fine. I think we’ll assume that searching is the most important. Search, okay, all right, fair enough.
SHANNON DELOACH: 23:57
Censhare. Oh, content generation for sure. Okay, and can you tell us at all about how you’re focused on content generation? Yeah, so generative AI, right? Creating product descriptions: you have a great product and you want to quickly create those descriptions, and we want to generate that for you. Generating images, even videos; the whole concept of create once, use many, but now let’s just do it with AI so you can do it faster. And actually using AI to find specific areas within content that you may want to reuse. So I said a mouthful there, but really a lot of our clients are using it for creating those quick things, you know, “give me three bullets on my new product,” and boom, we can generate it; now that’s in the DAM, and now you can use that and push it out to your online channel or whatever other platform.
MELANIE CHALUPA: 24:54
Frontify. Probably search at the moment. We’ve recently rolled out our brand AI assistant, which is going to help our clients’ end users enter their portal, search for assets and through their guidelines, and kind of chat with this bot to find what they need, and also have the bot generate answers for them that might not even involve going further into the system. So we’re really looking at improving that speed-to-search timeline as well. We do have some other exciting things around the other elements that you mentioned. Okay, tell us about it. Okay, yeah, so we’re also rolling out a plug-in with OpenAI where you can generate images within the DAM, so on the generative image topic, that’s what we’re doing there. And we already have AI tagging, which has been really great in helping our clients cast that wide net, so that whatever their end users search for has the most likelihood of producing results for them.
JOHN BATEMAN: 25:43
Tenovos. Content creation, I think. The generative stuff is very interesting at the moment, and things like localization of assets seem to be very prevalent among some of the big global brands that we’re working with; that’s a big thing at the moment. And then things like cropping and creating different derivatives of assets for different formats and that sort of thing. The latter two, search and tagging, I think we feel have been done for a number of years and work; they’ve kind of matured. But the content creation side seems to be evolving at an exciting pace now, particularly around the generative stuff.
BRIAN KAVANAUGH: 26:35
Bynder. I think when it comes to tangible applications, and what our customers are getting ROI out of every single day and discovering new use cases for, I would start with search, because of this whole philosophy that a great place to start with AI in your organization is maximizing existing data. And what is existing data for a DAM? It’s usually the volume of assets you’ve built up over time, where if you can apply AI to it, there’s an added level of discoverability and an added level of efficiency you’re going to get, which every organization right now is focused on when it comes to efficiency, or getting more out of what they’ve already created. So I know generative is exciting, and I know there’s probably a lot to unlock from here on out, but if I think of the here and now, it’s really search that represents the most efficiency.
BRÓNA O’CONNOR: 27:23
MediaValet. I would say you’re going to hear more from us on search very soon as we develop that area. Tagging is a huge one, especially for our customers that have huge libraries: they’re ingesting a ton of content into the DAM, and automatic tagging with AI has been essential for them to get through and utilize their catalogs. Something related to that that we’re very excited about, and that I was speaking with one of our customers here about, is how the Jane Goodall Institute leverages video intelligence. That’s another AI capability they’re using, and it’s really about extracting content from their video for reuse. Using AI to generate transcripts and social quotes and everything has been really important for that customer, and it’s a great story we talked about yesterday at Henry Stewart DAM LA.
JAKE ATHEY: 28:07
Acquia. I want to say all three, because we have all three among our roadmap priorities for the next year: smart tags is one of those roadmap priorities, smart tagging and search. Effectively, search is the desired outcome. We also have this concept of automatic video transcription, and automatic video generation and templates, so we are excited about the generation capabilities there. But I’m going to go with search if I have to pick just one, because that’s really fundamental to DAM. Or should I say funDAMental, if I may.
CHRIS LACINAK: 28:41
Got to get the DAM pun in.
JAKE ATHEY: 28:43
Indeed, never gets old.
CHRIS LACINAK: 28:44
Now there are a few providers in the room that are not DAM platforms. They’re add-ons, they’re partners, they’re technologies that work alongside DAM, and I’d like to ask them some questions. They’re a bit different, so I’m going to approach this a little bit differently and just talk to each one for a few minutes. Reinhard Holzner from Smint.io: we see that you are not a DAM, so can you tell us what you are?
REINHARD HOLZNER: 29:11
Hey Chris, yeah, so we are not a DAM, but we work with your DAM. Imagine you have your favorite DAM and you want to give it different experiences for different audiences. We say the DAM is not the right place for every audience. So if you want to reach other audiences like partners or the press or your employees, you might need a different experience, and that’s what we do with our content portals. You can build a brand portal, you can build a media center, you can build a download area; you can build all those different experiences on top of your DAM that you can’t do with your DAM alone.
CHRIS LACINAK: 29:47
Could you give us an example, and I don’t know if you’re allowed to use client names or not, but if not you can just anonymize it, of how one of your clients uses Smint.io?
REINHARD HOLZNER: 29:56
So we have several clients that we can name. For example, in Europe we have Ferrero Group, which is one of the largest retailers in Europe. They use this for their internal product imagery portal, so all the employees can access the imagery they need through an easy-to-use, simple, mobile-enabled interface, and they don’t need to go to the DAM, which is very complicated, for example. Or we have two of the largest sports organizations in the world as our clients, which I cannot name, but I can tell the story: they reach the press and the media through our portals because, for example, the DAM that they use is not really mobile-enabled, so they put the content from the DAM in front of the media when there are tournaments and events that need to be covered. Or we have companies like Somfy, a big manufacturer of home automation devices, doing partner portals and providing all the content, like product imagery and data sheets, to their partners and resellers. And we have a beverages vendor from the US who is using this as a product information portal, bringing together content from the DAM with content from their Salsify PIM in this case, and providing that data to their departments, for example to see which marketing material is missing for which market. So there are a lot of different use cases, and you see a lot of different audiences with different requirements that can’t necessarily be covered with the DAM alone.
CHRIS LACINAK: 31:34
Great. And can I ask what you are particularly excited about in DAM in 2024, or at Henry Stewart DAM LA, or anything that’s caught your attention or that you’re particularly focused on?
REINHARD HOLZNER: 31:47
Hmm, good question. What’s happening in DAM, I think, is that everything is professionalizing, everything is growing a lot. We also see transactions in the marketplace, mergers going on, companies acquiring other companies. I hope that in the future this goes in an even more interesting direction, where we see larger players in the marketplace that have more influence. The thing is, we have a very fragmented DAM marketplace right now, with, I think, over 220 vendors out there competing, and it will be very interesting to see if this consolidates, because that would probably make things easier for the clients: they would have a more complete offering for all those different needs that are out there.
CHRIS LACINAK: 32:37
David Sultan from OneTeg. David, thanks for agreeing to talk to me.
DAVID SULTAN: 32:41
Nice to see you.
CHRIS LACINAK: 32:42
Could we start off by you just telling me about OneTeg and what you guys do?
DAVID SULTAN: 32:46
Sure, so OneTeg is an integration platform as a service, and what we do is connect any system to any system, kind of like Zapier, but our focus is on digital asset management, product information, and e-commerce. So we’re able to make integration a lot easier, a lot faster, easy to maintain, easy to deal with upgrades, and we make the level of effort a lot easier for your customers to manage. So instead of having big projects, it’s a lot of smaller projects, and you can predict them a little bit more.
CHRIS LACINAK: 33:17
That sounds like a good goal. It sounds like you’re creating more predictability and efficiency around the integration process, which can be unwieldy and carry a lot of risk in terms of cost and time. That’s great.
Could you give us an example of maybe how, and you don’t have to use names, it’s okay if you want to anonymize it, but just how a customer has used OneTeg? Give us an example of that.
DAVID SULTAN: 33:43
So we have a customer who uses OpenText as their DAM and Syndigo as their syndication engine, for whenever they need to go to Amazon or any of those other marketplaces; they sell beverages. We had to connect the assets from their DAM to a separate, in-house PIM system, and then syndicate everything to Syndigo. So we’re basically marrying all of that information, a very complicated flow, and ensuring all of the information is married up between the product and the images, into the website and into the marketplaces.
CHRIS LACINAK: 34:21
Okay, all right, that’s great. Thank you, that’s helpful. And what’s one of the features that’s on your product roadmap that you’re most excited about?
DAVID SULTAN: 34:29
So when we first launched a couple of years ago, it was really about being more of a generic iPaaS solution focusing on DAM and PIM, and we still are, but what we’ve realized is that a lot of our customers really want a quick way to get into a project. So we’ve started building a lot of templates, which we call recipes. Say, for example, you want to connect inriver to MediaValet, a PIM and a DAM: we can very easily spin up a recipe that already has all of the hooks between those two systems, and then you can use that template to expand into your own flow that you need to build in your environment; the sketch below illustrates the idea. That’s a big thing we’re doing. And then longer term, we’re also looking at AI to help the developers, or whoever is actually building the flows, generate a flow from prompts. That’s a bit further out on the roadmap.
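To illustrate the recipe idea David describes, here is a minimal sketch of a declarative mapping between two systems. Everything in it, the event names, the field names, and the `run_recipe` helper, is hypothetical; it shows the concept of pre-wired hooks plus a customer-specific mapping, not OneTeg’s actual configuration format.

```python
# A hypothetical "recipe": pre-wired hooks between a PIM and a DAM that a
# team then extends with its own field mappings.
recipe = {
    "source": {"system": "pim", "event": "product.updated"},
    "target": {"system": "dam", "action": "update_asset_metadata"},
    "mapping": {  # source field -> target field
        "sku": "product_code",
        "product_name": "title",
        "hero_image_url": "preview_url",
    },
}

def run_recipe(recipe: dict, event: dict) -> dict:
    """Translate a source-system event into a target-system payload."""
    return {target: event.get(source)
            for source, target in recipe["mapping"].items()}

# A product update flowing out of the PIM becomes a DAM metadata update.
payload = run_recipe(recipe, {
    "sku": "BEV-1042",
    "product_name": "Sparkling citrus, 12 pack",
    "hero_image_url": "https://example.com/bev-1042/hero.jpg",
})
print(payload)
```

The appeal of templating integrations this way is that the hooks (authentication, event wiring, retries) ship pre-built, and only the mapping block changes from customer to customer.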
Okay, interesting. Yeah, that’s an interesting use of AI. It makes sense; it’s going to be different than how the platforms are using it. So that’s interesting to hear. Eric Wengrowski, CEO of Steg AI. Eric, can you tell us a little bit about Steg AI and what you do?ERIC WENGROWSKI: 35:41
Yeah, sure. So Steg is a state-of-the-art watermarking company. So we do watermarks for a variety of use cases, everything from leak protection to identifying generative AI, deep fakes, things like that, and we do it all with state-of-the-art watermarking technology that we’ve developed in-house and we’ve patented. We work with many of the DAMs here at Henry Stewart to bring our tech to customers.CHRIS LACINAK: 36:06
Great, and can you tell me, in your roadmap, what are you most excited about that’s on the horizon that you can talk about?ERIC WENGROWSKI: 36:17
Yeah, sure. So, you know, the benefit that Steg brings to our customers is primarily around security, and so, you know, with the explosion of deep fakes and generative AI, seeing is no longer believing. I mean, like, I’ve been working in this field and developing AI algorithms, you know, for 10 years now.CHRIS LACINAK: 36:38
Okay.ERIC WENGROWSKI: 36:39
And a lot of the time I can't tell the difference between something that came out of a camera and something that came out of an algorithm. So it's getting to the point where, you know, even people better than me, forensic experts, aren't going to be able to tell the difference, and given the sheer volume of content that people consume over social media and things like that, we really need tools to help understand what's real, what's trustworthy, what's synthetic, what's organic, without labeling something as just good or bad, just telling us more about the provenance. So, you know, we've created tools to help identify the origin of content, what's trustworthy. This is for everybody from generative AI companies to federal governments that want to ensure there's a sort of clean communication channel between them and their nationals.CHRIS LACINAK: 37:31
Great. Yeah. And maybe you could help us wrap our heads around it a bit more with a case study? You don't have to name names, you can anonymize it if you need to, but just help us understand how some of your customers are putting your technology to use.ERIC WENGROWSKI: 37:46
Yeah, sure. So a couple of years ago, we were approached by a company that was experiencing leaks costing a million dollars on average for every product they had launched over the past three years, and they were having multiple launches a year, all leaking ahead of time. This is a consumer electronics company. They were working with a DAM, which we decided to partner with, and that was great, but the problem was they really couldn't tell where these leaks were coming from. Was it internal, people on their own team? Was it any of their vendors, partners, anything like that? So we integrated Steg's watermarking technology with their DAM, so that automatically, in the background, whenever they shared assets out, or at any step in the creation process, we were applying new watermarks every time. So if anything leaked, we could always go back and identify the source. And when leaks happened, and they've happened many times, we've always been able to trace back, identify the source, and help the customer plug this extremely costly problem.
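The tracing pattern Eric describes can be sketched in a few lines: give every outbound copy its own mark, record who received it, and decode the mark when a leak surfaces. The embed/decode pair below is a deliberately naive stand-in (it just appends an ID to the file bytes) so the sketch runs end to end; a real forensic watermark like Steg AI's is imperceptible and survives re-encoding, cropping, screenshots, and so on.

```python
import uuid

REGISTRY: dict[str, str] = {}  # watermark ID -> who received that copy
MARKER = b"::wm::"

def embed_watermark(data: bytes, mark_id: str) -> bytes:
    # Toy stand-in for a real watermarking SDK call.
    return data + MARKER + mark_id.encode()

def decode_watermark(data: bytes) -> str:
    # Toy stand-in for a real watermark detector.
    return data.rsplit(MARKER, 1)[-1].decode()

def share_asset(data: bytes, recipient: str) -> bytes:
    """Give every outbound copy its own ID, recorded against the recipient."""
    mark_id = uuid.uuid4().hex
    REGISTRY[mark_id] = recipient
    return embed_watermark(data, mark_id)

def trace_leak(leaked: bytes) -> str:
    """Recover the ID from a leaked copy and look up who received it."""
    return REGISTRY.get(decode_watermark(leaked), "unknown source")

# Each partner gets a uniquely marked copy; a leak traces straight back.
copy_a = share_asset(b"<asset bytes>", "agency-a")
print(trace_leak(copy_a))  # -> agency-a
```

CHRIS LACINAK: 38:47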
And last but not least, what’s the last song you added to your favorites playlist?CHRISTOPHER MORGAN-WILSON: 38:52
Orange Logic. Dance, Dance by Ryan Prewett.SHANNON DELOACH: 38:55
Censhare. An oldie but a goodie, it was Public Enemy and then the Hour of Chaos. So for some reason, I just had a hankering for that song, I added it to my playlist.MELANIE CHALUPA: 39:05
Frontify. Do What I Want by Kid Cudi somehow wasn’t in my playlist before today, and now it is.JOHN BATEMAN: 39:11
Tenovos. Iron Maiden, Run to the Hills, that’s one you probably haven’t gotten.BRIAN KAVANAUGH: 39:16
Bynder. Square One by none other than Tom Petty, and so I’m a big Tom Petty fan but that’s not one that I’d heard, and so I added it this past weekend.BRÓNA O’CONNOR: 39:25
MediaValet. Billie Eilish, What Was I Made For, and that was because I saw her perform it at the Oscars a week ago, so that was that.CHRIS LACINAK: 39:33
You were at the Oscars yourself?BRÓNA O’CONNOR: 39:35
No. I wish.CHRIS LACINAK: 39:36
Let’s just say you were. Let’s just say you were.BRÓNA O’CONNOR: 39:39
Yeah, I was there.JAKE ATHEY: 39:40
Acquia. I’m a girl dad, so I’m gonna go with Taylor Swift, and one that really gets me revved up is Ready For It, and that’s from the Reputation album.REINHARD HOLZNER: 39:47
Smint. It’s that Elton John, Dua Lipa song.CHRIS LACINAK: 39:51
Okay, all right, all right, great, wouldn’t have guessed.DAVID SULTAN: 39:55
OneTeg. So I like John Prine. I think he died a few years ago, but I love his music, it's country music, and the song is called That's the Way That the World Goes 'Round.ERIC WENGROWSKI: 40:09
Steg AI. All right, so I didn't add it to my favorites playlist, but my wife and I just had a baby a few months ago, and as a present while she was still pregnant, I took her to see Taylor Swift here in LA.CHRIS LACINAK: 40:27
Best husband award of the year.ERIC WENGROWSKI: 40:29
Yeah, I’ll take that for this one. So, you know, I’m, I would not describe myself as a Swiftie, I’m definitely not a hater, but you know, my wife is a real Swiftie, and so I was like, hey, you know, I’ll go, it’ll be fun. Best concert I’ve ever been to, hands down. Yeah, SoFi, it was awesome.CHRIS LACINAK: 40:47
All right, so give me a favorite Taylor Swift song.ERIC WENGROWSKI: 40:51
Oh, I like Colors.CHRIS LACINAK: 40:52
Now, there's a fun session that happens at every Henry Stewart conference I've been to, called Stump the DAM Consultant. It's hosted by Jarrod Gingras from the Real Story Group. A number of brave consultants get on stage, the audience asks a bunch of questions in an app, and Jarrod looks at the upvotes to see which questions have the highest priority, or have been voted on the most, and puts those to the consultants. All the consultants put on headphones with music so they can't hear the other consultants answering, and one at a time, they answer; then the audience votes on who has the best answer. And because I don't have the approval of all the consultants on the stage, or of Jarrod, I'm going to include just the answers from Kara Van Malssen from AVP, to give you a little taste of what that looks and sounds like; it's a fun event. So, a little bonus for you here.
If we’re currently in DAM 4.0, what will DAM 5.0 be?KARA VAN MALSSEN: 41:53
Okay, so my answer is, I don't think there will be a DAM 5.0. Luckily, I did my homework and went to Jarrod's session earlier, and it got me thinking about this exact question, because as he was describing it, it seemed more and more to not be DAM anymore, but a kind of content convergence, where, you know, we have these beautiful and massive content orchestration engines. The concept of DAM as we know it today, DAM or MAM, that idea makes it a silo in and of itself, and I think that puts it into a corner which I just don't see the future being in. So, I just don't know if there is a DAM 5.0. I think it's an evolution. If you have a kid that has Pokemon, you know how the Pokemon work: they go from the basic Pokemon to evolution 1, 2, and I think by the time you get to evolution VMAX, it's not even the same character anymore, and that's the reality.CHRIS LACINAK: 43:02
When will AI tagging actually work right?KARA VAN MALSSEN: 43:08
Okay, so my question is, who's your DAM vendor? Because it should already be working. So if you don't have it working, you come see me and we'll help you find a new one. Just kidding. Okay, in all honesty, I think the maturity in that space is pretty good for specific types of use cases, so you have to get specific about what you want it to do. If you're trying to do things that are more visual, object recognition, computer vision, what's in the photo, what colors are in this photo, what's that object, things like that, there are pretty good capabilities there now that are readily available. I think the harder part, and I'm not sure if this is what you're trying to get at, is whether we'll ever be able to have humans do no metadata entry at all. I don't know if we'll ever be quite there. There's certain metadata, contextual information, provenance information, what campaign was this part of, what project was this a part of, what are the rights to this image, what's the credit line, should it credit the AI that created it, all of those kinds of things, and I don't think we're necessarily ever going to get there. So there's a certain amount that AI tagging can and can't do. But I think there's a level of maturity that is pretty solid right now for certain use cases. So I'll just say it's limited, but it's evolving.
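For a sense of what the readily available "what's in the photo" capability looks like, here is a minimal sketch using an off-the-shelf ImageNet classifier. It illustrates the category of tool Kara is describing, not any particular DAM vendor's tagging pipeline; the file path and confidence threshold are arbitrary.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# An off-the-shelf classifier standing in for a DAM's auto-tagging step.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

def suggest_tags(path: str, min_score: float = 0.2) -> list[str]:
    """Return label suggestions above a confidence threshold."""
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(batch).softmax(dim=1)[0]
    top = scores.topk(5)
    return [weights.meta["categories"][int(i)]
            for i, s in zip(top.indices, top.values) if s >= min_score]

print(suggest_tags("photo.jpg"))  # e.g. ['golden retriever']
```

The harder metadata Kara lists, rights, campaign context, provenance, has no equivalent off-the-shelf model, which is exactly the distinction she is drawing.

CHRIS LACINAK: 44:39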
What’s the easiest AI win for a DAM when your boss is forcing a quick AI answer?KARA VAN MALSSEN: 44:44
The quickest AI win right now… Okay, well, I think it's kind of similar to the last question, which was about some of that tagging. But I actually think the very easiest one you can unlock pretty fast is speech-to-text for video and audio. That's pretty good now. You know, you might have to do some editing. What's so funny back there? Okay, vote for Kara. So, speech-to-text, that's an easy one, and you get all of that transcription of your audio and video, and then you have so much searchable text. Boom. Easy win. Go for it. Do it tomorrow.
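As a minimal sketch of that win, assuming the open-source Whisper model as the speech-to-text engine (Kara doesn't name a specific tool): transcribe the media file, then feed the text to whatever search index your DAM exposes. The indexing call at the end is a hypothetical placeholder.

```python
import whisper  # the open-source openai-whisper package

# Transcribe once, index the text, and every spoken word becomes searchable.
model = whisper.load_model("base")

def transcribe_for_search(path: str) -> str:
    """Turn an audio or video file into plain searchable text."""
    return model.transcribe(path)["text"]

text = transcribe_for_search("interview.mp4")
# index_in_dam(asset_id, text)  # hypothetical: push into your DAM's search index
print(text[:200])
```

Whisper loads audio through ffmpeg, so the same call covers both audio and video assets.

CHRIS LACINAK: 45:30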
If you had to use a song to describe a DAM, what song would you pick?KARA VAN MALSSEN: 45:37
The first word of this song title is a curse word, but it’s “b” with those, you know, special characters, better have my money. It’s expensive, right?CHRIS LACINAK: 45:51
So here we are at the end of the Henry Stewart DAM LA conference. It's been a great conference. What are some of the takeaways and themes from this year? One is that a lot of people were talking about portals. Last year, that word was being used, but we mostly heard it from the DAM and technology provider side. This year, I heard a lot about it from users, people talking about real use cases: wanting to create seamless user experiences on both the download and the upload side, speaking to very specific audiences both internal and external to their organizations. It felt like a thing that was new in a practical way.

Speaking of practical, another thing that felt new this year was how we heard about AI. Last year felt a bit more wide-eyed than this year. This year, people had clearly put it to use. They had grappled with the issues more. There was skepticism, but mixed with healthy enthusiasm, and we heard a lot about real-world AI applications: conversations happening in organizations, proofs of concept, and day-to-day use. We still heard a mix of perspectives, but it felt like a new mix, a healthy mix, something that I think represents the progress of how organizations are using AI. That was interesting and fun to hear about.

Lastly, I'll just say that the vibe in general was really good. It felt like there was more energy this year than last year, and not to say last year was bad, but there was just something this year, a momentum. There was a lot of great engagement. I think the content and the program were really good this year. It felt cohesive. It had people talking in the coffee breaks and at the lunches; there was a lot of conversation around the program, which told me they nailed the authenticity of the topics and that it was resonating with people. Whoever did the programming did a great job.

I will say one thing was missing. There were a few companies representing it, but it just wasn't a topic that came up much: content authenticity. I heard about it in one session that I attended. There was one vendor, Steg AI, that had a booth. FADEL was here, and there was one other company, I think they were called Verify, that was here in the audience, focused on rights management and one or two use cases for content authenticity. But I was surprised there wasn't more. Now, it's not a super sexy topic, security is not the most fun thing to talk about, but it's been bubbling up so much this year, and with the massive amounts of content generation that's happening, with the questions around content authenticity, calling real things fake and calling fake things real, the meaning and potential impact for DAMs and archives is huge. So I was just surprised there wasn't more about that, but I bet it will be a conversation we hear a lot more about next year. That's my prediction for next year, so we'll see. Anyway, it's been a great time. I hope you've enjoyed the content around the Henry Stewart DAM LA recap, and remember, DAM right, because it's too important to get wrong.
Thanks for listening to the DAM Right podcast. If you have people you want to hear from, topics you’d like to see us talk about, or events you want to see us cover, please send us an email at [email protected]. That’s [email protected]. Speaking of feedback, please go to your platform of choice and give us a rating. We would absolutely appreciate it. While you’re at it, go ahead and follow or subscribe to make sure you don’t miss an episode. You can also stay up to date with me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. And finally, go and find some really amazing and free resources focused just on DAM at weareavp.com/free-resources. That’s weareavp.com/free-resources. You’ll find things there like our DAM Strategy Canvas, our DAM Health Score Card, and the Get Your DAM Budget slide deck template. Each one of those also has a free accompanying guide to help you put it to use. So go get them now.
Guide To Developing A Request For Proposal For The Digitization Of Video (And More)
8 September 2018
Clear articulation and understanding of goals and specifications is essential to ensuring the success of any project. Whether performing digitization work in-house or using a vendor, a statement of work or request for proposal serves as the foundation of the project.
This resource is intended to guide organizations in thinking critically about and discussing, internally and with vendors, the salient aspects of a request for proposal and the details within. Although this guide uses video as its focus, it is relevant and applicable to all media types.
A Study of Embedded Metadata Support in Audio Recording Software
1 October 2017
This report presents the findings of an ARSC Technical Committee study, coordinated and authored by AVPS, which evaluates support for embedded metadata within and across a variety of audio recording software applications. The work addresses two primary questions: (1) how well does embedded metadata persist, and is its integrity maintained, as it is handled by various applications, and (2) how well is embedded metadata handled during the process of creating a derivative? The report concludes that persistence and integrity issues are prevalent across the audio software applications studied. In addition to the report, test methods and reference files are provided for download, enabling the reader to perform metadata integrity testing; a small scripted sketch of that kind of before-and-after comparison follows the download links below.
- A STUDY OF EMBEDDED METADATA (PDF)
- THE TEST METHOD (PDF)
- TEST 1 REFERENCE FILES (ZIP)
- TEST 2 REFERENCE FILES (ZIP)
- TEST 3 REFERENCE FILES (ZIP)
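For readers who want to try this on their own files, here is a minimal sketch of the before-and-after comparison the study formalizes, using the Python mutagen library to read embedded tags. This illustrates the general technique, not the ARSC study's actual test harness, and the file names are placeholders.

```python
from mutagen import File

# Read the embedded tags of a reference file and of the same file after a
# round trip through an application, then report any fields that changed
# or disappeared.
def embedded_tags(path: str) -> dict:
    audio = File(path)
    return dict(audio.tags) if audio and audio.tags else {}

before = embedded_tags("reference.wav")
after = embedded_tags("exported_by_app.wav")

for key in sorted(set(before) | set(after)):
    if before.get(key) != after.get(key):
        print(f"{key}: {before.get(key)!r} -> {after.get(key)!r}")
```

The downloadable test methods and reference files above formalize this comparison across applications and derivative-creation scenarios.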
Cloud Storage Vendor Profiles
26 January 2017
Part of our Feet on The Ground: A Practical Approach to The Cloud series, these profiles break down the offerings of third-party cloud storage providers from a preservation point of view. Assessment points include Data Management, Reporting/Metadata, Redundancy, Accessibility, Security, End of Service, and adherence to the NDSA's Levels of Preservation.
Digital Preservation Standards: Using ISO 16363 For Assessment
8 June 2016
This presentation, given by Amy Rudersdorf at the 2016 American Library Association's Preservation Administrators Interest Group meeting, provides a higher-level discussion of the use of standards for digital preservation and repository management and assessment. Particular focus is given to ISO 16363, Audit and Certification of Trustworthy Digital Repositories, and its usefulness beyond an audit tool, as a means of performing assessments that identify both gaps and strengths in digital repository practice.
Applying Digital Preservation Standards For Assessment And Planning
9 March 2016
In this presentation, Bertram Lyons demonstrates a methodology for employing ISO 16363, Audit and Certification of Trustworthy Digital Repositories, as a tool to help an organization plan for continued improvement of its digital preservation services.
Guide To Developing A Request For Proposal For The Digitization Of Audio
18 June 2015
Whether outsourcing or digitizing in-house, collection managers need to be able to define the parameters and specifications for preservation reformatting in order to properly care for their assets and to control and understand the outcomes of the digitization process. In association with the ARSC Guide to Audio Preservation, AVP is releasing this Guide to RFPs for the Digitization of Audio, along with recommendations for technical and preservation metadata to collect during the process and a sample spreadsheet for obtaining estimated pricing from digitization vendors. Every digitization project and every organization's requirements are different; this guide is a starting point for creating an RFP specific to those needs.