Article
Choosing a Digital Asset Management System: The Final Decision
27 August 2025
After months of evaluating platforms, the moment has arrived: it’s time to make a decision on your digital asset management (DAM) system. Your choice will shape how your teams access, manage, and use content for years. Our goal is to help you move forward with confidence.
We assume you’ve already done the necessary legwork: aligning stakeholders, identifying requirements, evaluating right-fit vendors, and running demos and a POC tailored to your assets and workflows. If not, consider revisiting those steps—take a look at our previous posts in this series.
Reconnect with Your Digital Asset Management System Goals
Before comparing feature lists or pricing tables, revisit why you began this process. What problems are you trying to solve? What does success look like a year from now? Make sure your final decision is rooted in those goals. Your task is to choose the digital asset management system that best supports your organization, not just the one with the flashiest interface.
Evaluate DAM Vendors Using a Structured Framework
A decision of this magnitude benefits from objectivity. Using a structured scoring model or decision matrix can help your team make a transparent, evidence-based selection. This approach allows you to evaluate each platform against consistent criteria, assign weights based on your priorities, and compare options side by side. It also creates documentation that supports internal alignment and future reference.
Ten Dimensions to Evaluate Each Digital Asset Management System Vendor Finalist:
1. Value
Does the platform deliver the functionality you need? Does it offer capabilities that significantly improve how your organization produces, manages, and shares content? Focus on alignment with your current and future needs, not the total number of features.
2. Feasibility
Can you implement and maintain the platform with your available resources? Consider implementation effort, integration complexity, and ongoing management. A great-looking system may require infrastructure or capacity you don’t currently have.
3. Usability
How easy is the system to use for different user groups—admins, content creators, and end users? If these groups weren’t included in demos, or didn’t participate in a proof of concept, go back a step. Be sure to get input from the people who will be affected most. Don’t forget to test admin functionality too.
4. Affordability
Is the pricing model sustainable? In addition to license fees, consider implementation (including integration and migration), training, support, storage, and feature add-ons. Don’t forget to look at the cost of utilizing AI services, too. We recommend projecting costs over at least three years to get a clear picture of the total cost of ownership.
5. Scalability
Will the platform grow with you? Think about asset volume, metadata complexity, user numbers, and geographic spread. If you have a particularly large collection or number of users, ask the vendors what their largest deployments are. Review whether the vendor’s roadmap aligns with your growth trajectory.
6. Security & Compliance
Does the platform meet your organization’s security and compliance requirements? Evaluate encryption, access controls, audit trails, and alignment with standards like GDPR or SOC 2. Consider both technical and policy aspects.
7. Ecosystem Fit
How well does the platform integrate with your current systems? Assess APIs, connectors, plugin availability, and the vendor’s experience with relevant third-party tools. Custom integration can quickly become a significant area of cost and complexity, so look for vendors that plug into your ecosystem easily.
8. Social Proof
Have similar organizations (in industry, size, scale, complexity) adopted this platform successfully? Are they growing with it over time? Review case studies, references, and testimonials. Speak directly with current customers to learn about the vendor’s strengths and limitations.
9. Trust
Does the vendor seem like a reliable long-term partner? Look at financial stability, delivery track record, and support reputation. Review SLAs, support channels, and upgrade policies. You’ll get great insights when you speak to other customers.
10. Exit Path
If your needs change, can you move on easily? Ask vendors how they support full export of assets, metadata, vocabularies, and user data in open formats. Understand the terms and costs of a potential exit.
Assign Weights and Score Objectively
Not all criteria carry the same weight. A nonprofit with limited IT support may prioritize feasibility and security, while a global brand may focus on integration and scalability. Assign weights to reflect your priorities, then score each option accordingly.
Final DAM evaluation using weighted scoring
Include a cross-functional team in the process to reflect diverse perspectives and build alignment. Document your evaluation so you can refer back to it as needed.
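If it helps to make the arithmetic concrete, here is a minimal sketch of a weighted decision matrix in Python; a spreadsheet works just as well. The dimension names follow the ten criteria above, but every weight and score shown is an illustrative placeholder, not a recommendation.

```python
# Minimal weighted-scoring sketch. Dimension names follow the ten criteria
# above; every weight and score is an illustrative placeholder.
WEIGHTS = {
    "Value": 0.15, "Feasibility": 0.10, "Usability": 0.15, "Affordability": 0.10,
    "Scalability": 0.10, "Security & Compliance": 0.10, "Ecosystem Fit": 0.10,
    "Social Proof": 0.05, "Trust": 0.10, "Exit Path": 0.05,
}  # weights sum to 1.0

# Each finalist is scored 1-5 per dimension by the evaluation team.
scores = {
    "Vendor A": {"Value": 4, "Feasibility": 3, "Usability": 5, "Affordability": 3,
                 "Scalability": 4, "Security & Compliance": 4, "Ecosystem Fit": 3,
                 "Social Proof": 4, "Trust": 4, "Exit Path": 3},
    "Vendor B": {"Value": 5, "Feasibility": 4, "Usability": 3, "Affordability": 4,
                 "Scalability": 3, "Security & Compliance": 5, "Ecosystem Fit": 4,
                 "Social Proof": 3, "Trust": 4, "Exit Path": 4},
}

def weighted_total(vendor_scores):
    """Sum of weight x score across all ten dimensions (maximum 5.0)."""
    return sum(WEIGHTS[dim] * score for dim, score in vendor_scores.items())

for vendor in sorted(scores, key=lambda v: weighted_total(scores[v]), reverse=True):
    print(f"{vendor}: {weighted_total(scores[vendor]):.2f} / 5.00")
```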
Avoid Common Final-Decision Pitfalls
Even with a strong evaluation process, watch out for these missteps:
- Letting brand recognition or peer adoption sway your decision
- Letting cost outweigh actual needs
- Underestimating implementation, integration, and migration effort
- Failing to thoroughly vet vendor support and services
Get Internal Buy-In and Document the Decision
Before finalizing, make sure all key stakeholders are aligned. Review the decision rationale with leadership, legal, procurement, and IT to surface any final concerns. And as a reminder, don’t forget to talk to your chosen vendor’s current customers (and not just the ones they suggest you talk to!).
Document your decision, including priorities and tradeoffs. This record will be valuable during implementation and future reviews.
Final Thoughts
Selecting a DAM system is more than a software purchase. It’s a strategic decision that will shape how your organization manages content for years. Use comprehensive evaluation criteria and a collaborative process to choose with confidence.
When implementation begins, you’ll be glad you did.
Digital Asset Management Demos and Proof of Concepts
27 August 2025
Digital asset management demos and POCs are where things get real. A demo is a live, guided walkthrough of your specific usage scenarios—ideally using your actual assets. A proof of concept (POC) goes further, giving your team hands-on access to test how the system performs with real workflows. Together, they offer a grounded, honest look at whether a system fits, not just how it looks in a sales deck.
A structured, goal-driven approach to managing these activities is the best way to move from feature lists to informed decisions.
Before the Demo: Set Your Foundation
Start by defining what matters most to your organization. Common areas to evaluate in a DAM system include:
- Workflow automation
- Metadata structure and taxonomy
- Permissions and user roles
- Search and discovery
- Upload and download processes
- User interface and experience (UI/UX)
- Integrations with other systems (e.g., CMS, PIM, MAM)
Also consider what makes your organization unique. Do you manage large volumes of high-resolution images, video, or audio (rich media)? Do you need to preserve or migrate older, inconsistent, or incomplete metadata (often referred to as legacy metadata)? These factors should inform the usage scenarios you ask vendors to demonstrate or support during a proof of concept (POC).
If you haven’t created usage scenarios yet, now’s the time. A usage scenario is a short, structured description of a key task a user needs to perform in the system. Each should include:
- A clear title
- The goal or objective
- The user role
- A brief narrative of the scenario
- Success criteria
Aim for 6 to 8 scenarios that reflect your core needs across different user types. A focused set like this keeps digital asset management demos and POCs grounded in what really matters to your team and ensures a more meaningful evaluation.
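To keep scenarios consistent across the team, it can help to capture each one as a small structured record. Here is a minimal sketch of what one might look like; the field names mirror the list above, and the scenario content is purely hypothetical.

```python
# One usage scenario as a structured record. Field names mirror the list above;
# the scenario content is hypothetical, not a prescribed template.
usage_scenario = {
    "title": "Find and download approved campaign images",
    "role": "Marketing Coordinator",
    "goal": "Locate final, rights-cleared images for a regional campaign",
    "narrative": (
        "The coordinator searches by campaign name and region, filters to "
        "approved assets only, previews the results, and downloads web-ready "
        "renditions without contacting the creative team."
    ),
    "success_criteria": [
        "The correct assets appear on the first page of results",
        "Only approved, in-license assets can be downloaded",
        "Downloads arrive in the required format and size",
    ],
}
```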
Preparing for the Demo
Give vendors a chance to show how their system handles your real-world needs. In a two-hour demo session, ask them to walk through 4–5 key tasks your users need to perform.
About two weeks before the demo, send each vendor a small sample of your actual content—around 25 assets in a mix of file types and sizes—along with a simple spreadsheet describing those files (titles, descriptions, dates, etc.). If you work with items made up of multiple files (like a book with individual page scans), include one or two of those as well.
The goal is to see how the system performs with your materials—not polished demo content—so you can better understand how it might work for your team.
Digital Asset Management Demo Participation and Structure
Invite a diverse group:
- Core users
- Edge users with atypical needs
- Technical staff
- Decision-makers
Suggested agenda:
- 30 minutes – Slide-based intro and vendor context
- 60 minutes – Live walkthrough of your usage scenarios
- 30 minutes – Open Q&A
Distribute a feedback form before the demo so your teams can rate the system and each usage scenario in real time. Collect quantitative scores (e.g., “On a scale of 1–5, how well did the system support this scenario?”) to make it easier to compare vendors side by side. Include a few qualitative prompts as well, such as “What surprised you?” or “What did you like or find confusing?” Keep the form short and focused—if it’s too long, people won’t fill it out.
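As a rough sketch of how that quantitative feedback can be tallied, the snippet below averages 1–5 ratings per scenario and per vendor; the scenario names and numbers are invented for illustration.

```python
# Average 1-5 demo ratings per scenario so vendors can be compared side by side.
# Scenario names and ratings are hypothetical.
from statistics import mean

ratings = {  # vendor -> scenario -> individual 1-5 ratings from attendees
    "Vendor A": {"Search & download": [4, 5, 4], "Upload & tagging": [3, 3, 4]},
    "Vendor B": {"Search & download": [5, 4, 4], "Upload & tagging": [4, 5, 4]},
}

for vendor, by_scenario in ratings.items():
    for scenario, scenario_scores in by_scenario.items():
        print(f"{vendor} | {scenario}: {mean(scenario_scores):.1f}")
    overall = mean(s for scenario_scores in by_scenario.values() for s in scenario_scores)
    print(f"{vendor} | overall: {overall:.1f}")
```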
Running the POC
Once you’ve identified a finalist, it’s time for hands-on testing. A two-week POC is ideal—short enough to keep momentum, long enough to explore.
Set expectations upfront. Testers must dedicate focused time. The POC isn’t a background task. If people delay or casually click around, you won’t get meaningful results.
Check with the vendor about potential POC costs. Some vendors charge if their team invests heavily and you don’t purchase. Ask early.
Prepare for a successful POC:
- Give vendors ~3 weeks to configure the system with your content and workflows. Share usage scenarios and access needs early.
- Assign clear roles, for example:
- End Users – Test search, discovery, and downloads
- Creators – Test uploads, tagging, and editing metadata
- Admins – Test permissions, structure, workflows, and configuration
- Create a task-based script aligned with your usage scenarios. Ask testers to log their experience, pain points, and surprises.
- Schedule three vendor touchpoints:
- Kickoff (60 min): Introduce the vendor, ensure everyone has access, clarify roles, and walk through the POC goals and script.
- Midpoint Check-in (30 min): Surface blockers or confusion while there’s still time to fix them. Encourage open questions: “How do I…?” or “Why isn’t this working?”
- Wrap-up (30 min): Review what worked and what didn’t. Ask the vendor to walk through anything missed. Preview post-purchase support and onboarding to help gauge confidence in next steps.
Reminder: This is not a sandbox. Stick to the script, test with intention, and focus on how the system performs in a real working scenario.
Decision Making
Pull your team together while the experience is still fresh.
Start with the structured feedback:
- Compare rubric scores across categories like usability, metadata, permissions, and admin tools.
- Look for patterns or outliers: did some roles struggle more than others?
- Discuss gaps, friction points, and what’s non-negotiable.
If your group is large, collect final thoughts via a form and summarize for review.
Document your decision—not just which system you chose, but why. Connect it to your business goals, priorities, and user needs. This not only strengthens your recommendation, but also provides valuable context for onboarding new users and teams. When people understand the reasons behind the choice, they’re more likely to engage with the system and use it effectively. It also gives you a foundation for measuring success after launch.
Final Thoughts
Digital asset management demos and POCs don’t just validate vendor claims, they clarify your priorities, surface assumptions, and test how ready your team is for change. They help you figure out not just if a system works, but how it works for you.
A well-run process builds alignment, fosters engagement, and reduces risk by exposing critical gaps early. Most importantly, it sets the stage for a smoother implementation.
When you choose a system based on real tasks, real users, and real feedback, you’re not just buying software. You’re investing with confidence.
Conducting Market Research and Shortlisting Digital Asset Management Vendors
27 August 2025
Choosing a Digital Asset Management (DAM) system is one of the most critical decisions an organization can make for managing digital content. But diving into the DAM market without guidance can be overwhelming. Dozens of vendors offer similar feature sets, and without a clear plan, it’s easy to get lost in marketing jargon or swayed by a sleek demo that doesn’t reflect your real-world needs.
This process isn’t just about picking a product. It’s about starting a long-term relationship with a vendor who will support your team, evolve with your workflows, and play a role in your digital strategy. That’s why thoughtful market research and intentional shortlisting are essential.
Begin with Requirements, Not Features
Effective vendor research starts with clarity about your needs. Before browsing solutions, define what your organization actually requires from a DAM platform. Consider:
- Who your primary users are and what they need to do with assets
- What types of assets you manage (images, video, audio, documents)
- Metadata standards and requirements
- Integration needs (CMS, PLM, PIM, creative tools, cloud storage, preservation)
- Permission models and access control
- Reporting, analytics, and training needs
List “must-have” and “nice-to-have” features, then use that as your rubric. This helps you stay focused on what matters and avoid shiny features that don’t advance your goals.
Navigating the Digital Asset Management Marketplace
A web search is a fine place to start, but it’s not enough. Vendor websites offer a polished view, but few provide meaningful detail about true differentiators, limitations, or ideal usage scenarios.
Sites like G2, Trustpilot, and Capterra offer user-generated reviews and side-by-side comparisons, which can be helpful for spotting trends or potential red flags. That said, be aware that many listings are paid placements, and reviews often lean toward the extremes—either very positive or very negative. Also, many of the tools listed on these sites aren’t actually full-featured DAM systems. Some, like Canva or Airtable, offer DAM-like features but may not meet the broader needs of your organization. This can make it tricky to distinguish between tools that support part of the workflow and those that can truly serve as a centralized DAM solution.
For deeper and more balanced insight, explore:
- DAM News – Offers industry-specific news, vendor updates, and interviews with practitioners.
- CMSWire – Covers a range of digital workplace topics, including strong, up-to-date content on DAM.
- LinkedIn – A powerful resource where DAM professionals share real-world insights, lessons learned, and vendor experiences. Connect with industry peers who have already implemented a DAM and ask for honest feedback and recommendations.
Research Firms & Case Studies
- Reports from Gartner, Forrester, and Real Story Group provide in-depth vendor evaluations and market analysis. (You can typically find these linked from vendor websites.)
- Seek out case studies from vendor websites to understand how specific solutions perform in real-world contexts.
Industry Events
Consider attending a Henry Stewart DAM Conference, which gathers DAM professionals and vendors for learning and networking. These take place annually in:
- London (June)
- New York City (October)
- Sydney (November)
- Los Angeles (March)
These events offer the opportunity to demo different systems and meet digital asset management vendors in person, attend expert panels, and hear directly from other organizations about their selection and implementation journeys.
Learn from Peers, with Context
Colleagues can be a great source of insight. Ask what systems they use, what worked well or poorly, and what they’d do differently. These conversations reveal how vendors behave during implementation and long-term support.
But keep in mind: a DAM that works well for your pal over at their organization may not be right for you. Your users, workflows, and digital strategy are unique. A negative experience elsewhere might reflect poor alignment rather than a flawed system. Treat peer feedback as helpful context, not universal truth.
Consult the Experts
If you lack time or in-house expertise, consider hiring a DAM consultant. Specialists know the landscape, can translate your needs into actionable requirements, and can help you run a disciplined selection process. They can also facilitate internal conversations neutrally to surface user needs and pain points, ensuring decisions are informed by real requirements and aligned with strategic goals.
Digging into DAM Differentiators
Most DAMs claim to offer robust features—AI, metadata support, flexible permissions, and more. These terms sound impressive, but they rarely reveal how the system actually works in practice. Real differentiators are found in the details across all functionality areas.
For example:
- “AI” alone isn’t helpful. One platform might offer basic auto-tagging, another facial recognition, or full generative AI descriptions and AI-driven workflows tied to metadata.
- “Controlled vocabularies” are standard. A system with the ability to support complex taxonomies, multilingual thesauri, or ontology integration might stand out if this is what your organization needs.
- “Permissions” are expected. Granular controls, field-level restrictions, and automated rights management are worth noting.
Ask vendors for documentation that shows actual configuration options, not just marketing overviews. In demos, go beyond checklists. Ask how it performs at scale, supports your asset types, and adapts to real-world workflows. If you don’t push, vendors may not volunteer specifics.
Engage Digital Asset Management Vendors with Purpose
Once you reach out to digital asset management vendors, you’re signaling interest. Sales reps will follow up. That’s expected. Many will work hard to win your business, and that can be a good thing. But this isn’t just a sales transaction. If you choose their system, you’ll likely be working closely with that company for years.
Pay attention to how vendors engage with you. Do they ask thoughtful questions about your needs? Offer strategic guidance? Or are they focused only on closing the deal? You want a partner, not just a product.
Ask tough, specific questions. Request use-case examples. Involve your users early so they can determine if the system fits their actual workflows.
Early demos can help you understand layout and navigation. But once you’re seriously considering a system, ask for tailored demonstrations using your scenarios and assets. This helps you evaluate both product fit and vendor fit—their responsiveness, flexibility, and support philosophy. And if you really want to get under the hood, consider doing a proof of concept with your top 1-2 finalist vendors.
Building the Shortlist
A shortlist should include only those digital asset management vendors who align with your requirements, fall within your budget, and seem like a cultural fit. Aim for five to six vendors for your Request for Information (RFI) or Request for Proposal (RFP).
After reviewing the vendors’ responses, narrow the list to two or three finalists. Invite them for detailed demos, reference calls, and technical Q&A. Note that at this point, you’re evaluating the partnership as much as the platform.
What Makes Digital Asset Management Vendors Shortlist-Worthy
A vendor becomes shortlist-worthy not just by meeting your technical and functional requirements, but by demonstrating alignment with your organization’s broader context and strategic direction. Beyond feature fit, consider factors like company size and funding stability—these can indicate whether a vendor is likely to support and evolve their platform over the long term. Geographic location may matter for support hours, data residency, or language requirements. Longevity and client retention can signal maturity and reliability, but don’t discount newer vendors if they show strong responsiveness and innovation. Experience within your industry or with similar organizations can also be a valuable indicator of how well the vendor understands your needs and challenges. Most importantly, assess cultural and strategic fit: does the vendor listen actively, offer thoughtful insights, and seem invested in your success? A good partner should feel like an extension of your team, not just a service provider.
Final Thoughts
DAM market research is both a filtering and discovery process. It takes effort, but the payoff is a well-aligned solution that fits your organization and your future.
Stay focused on your goals. Be curious, but critical. Ask hard questions. A solid selection process sets you up for long-term success—not just with the tool, but with the vendor team that supports it and the users who rely on it every day.
Documenting Your Digital Asset Management Criteria
1 August 2025
Choosing a Digital Asset Management (DAM) system isn’t just about comparing feature lists from vendor websites. It starts with understanding your organization’s specific digital asset management criteria: what assets you manage, how your teams work, what’s not working, and where you’re headed. To make good decisions, you need clear documentation that captures those needs in a reusable, structured format.
This article offers practical guidance to help you build that foundation, with examples and templates you can reuse throughout your planning process, including RFP development, vendor evaluations, and internal alignment.
1. Start with a Centralized, Collaborative Document
Use a collaborative tool like Google Sheets, SharePoint, Excel, or Airtable to keep your documentation organized and visible to stakeholders. Create tabs that reflect the key areas in this article (e.g., Stakeholders, Usage Scenarios, Assets, Metadata), and structure your notes in a clear, sortable format. This makes it easier to spot patterns, prioritize shared needs, and track where each requirement came from. Your spreadsheet becomes a central source of truth for drafting your RFP, comparing vendors, and aligning internally.
2. Interview Stakeholders and Track Themes
Record short interviews (with permission) with stakeholders in marketing, creative, archives, IT, legal, and other teams that work with digital assets. Focus on what tools they use, where processes break down, and what they wish were easier.
Skip surveys. Interviews offer deeper insight into workflows, pain points, and expectations, and help you capture the language people actually use. These conversations will ground your future steps, ensuring the DAM supports real-world needs.
Tip: “Role” refers to the type of user experiencing the need (e.g., Designer, Archivist), while “Source” refers to the specific person or department who shared that insight during interviews (e.g., Design Lead, Archives Manager). This helps you see how broadly a need applies and trace it back to the original stakeholder if you need more context later.
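For illustration, a single row in that tracking sheet might look something like the sketch below; the column names follow the structure described in this article, and the content is hypothetical.

```python
# One row of a requirements tracking sheet. Column names follow the structure
# described above; the content is a hypothetical example.
requirement_row = {
    "requirement": "Bulk-edit metadata on a selected set of assets",
    "role": "Archivist",                # type of user experiencing the need
    "source": "Archives Manager",       # person or department who raised it
    "priority": "Mandatory",            # Mandatory / Preferred / Nice to Have
    "theme": "Metadata management",
    "notes": "Current process requires editing records one at a time.",
}
```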
3. Inventory Your Digital Assets (Rough Counts Are Fine)
You don’t need a full audit, just a rough idea of what you have, where it is, and who uses it. Include file types, volume estimates, and storage sizes.
This information is essential for planning migration and estimating storage needs, and vendors will need a summarized version to provide accurate costs in their proposals.
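As a hypothetical example of the level of detail that is useful here, the sketch below captures a few inventory rows and totals the estimated storage; all figures are invented.

```python
# Rough asset inventory: order-of-magnitude counts and sizes are enough at this
# stage. All values below are hypothetical.
asset_inventory = [
    {"type": "Images (TIFF/JPEG)",   "location": "Shared drive",    "files": 150_000, "size_tb": 4.0,  "owner": "Creative"},
    {"type": "Video (MOV/MP4)",      "location": "External drives", "files": 8_000,   "size_tb": 35.0, "owner": "Video team"},
    {"type": "Documents (PDF/DOCX)", "location": "SharePoint",      "files": 60_000,  "size_tb": 0.5,  "owner": "Marketing"},
]

total_tb = sum(row["size_tb"] for row in asset_inventory)
total_files = sum(row["files"] for row in asset_inventory)
print(f"Estimated total: {total_files:,} files, {total_tb:.1f} TB")
```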
4. Look for Metadata (Even If You Don’t Call It That)
Even if you’re not using a formal metadata system yet, your team is probably tracking important information about your assets, like who created them, what they’re about, or how they can be used. That’s metadata.
Start by identifying what kind of information you already track and where it lives. It could be:
- In filenames or folder names
- In a spreadsheet
- Stored inside the file itself (like photo properties and technical information about the file)
You might also hear terms like:
- Metadata schema: This just means a consistent set of fields used to describe your assets, for example, “Photographer,” “Date Taken,” or “Usage Rights.” If you’re not using one yet, that’s okay. Start by listing what you are tracking.
- Embedded metadata: This is metadata that’s saved inside the file itself. For example, a photo might include the date it was taken, the camera model, or GPS location.
You might be tracking more metadata than you realize. Look around, especially in shared drives, naming patterns, or that old spreadsheet someone still updates manually. This will help you decide what metadata to keep as-is, what to standardize, and what metadata to capture automatically (with AI) once your DAM is in place.
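To make “metadata schema” concrete, here is a minimal sketch of a starting field list drawn from the kinds of fields mentioned above; the field names and rules are illustrative, not a standard.

```python
# A minimal starting metadata schema: a consistent set of fields used to
# describe assets. Field names and rules are illustrative only.
metadata_schema = {
    "title":        {"required": True,  "type": "text"},
    "description":  {"required": False, "type": "text"},
    "photographer": {"required": False, "type": "text"},
    "date_taken":   {"required": True,  "type": "date"},
    "usage_rights": {"required": True,  "type": "controlled",
                     "allowed_values": ["Unrestricted", "Internal only", "Licensed", "Unknown"]},
}
```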
5. Document Integration Needs Across Systems
Most DAM systems won’t stand alone. They often need to connect to tools your team already uses. These could include your website CMS, creative tools from Adobe, or archives and records systems.
Think about what other tools or systems it should work with.
Start by making a list of all the software your team already uses—like design programs, content management systems, cloud storage, or social media tools. Then, for each one, ask:
“What do we need the DAM to do with this system?”
For example:
- Your designers might want to pull images straight from the DAM while working in Adobe Creative Cloud, without switching between tools.
- Your marketing team might need the DAM to automatically send approved images to your website or social media platform.
Making this list now will help you choose a digital asset management (DAM) system that plays nicely with the rest of your tech setup—and saves your team time down the line.
Even if you’re not sure how the integration will work yet, noting your needs now gives vendors and IT something concrete to work with later.
6. Capture Technical Requirements Up Front
Before you choose a digital asset management system, it’s important to document any technical expectations your IT team or organization has. These might include how users will log in, where the system is hosted, or what kind of security and accessibility standards it needs to meet.
Start with questions like:
- Does your organization require Single Sign-On (SSO)?
- Do you prefer a cloud-based system or one hosted internally?
- Are there file size limits you need to support?
- Do you have accessibility or compliance requirements?
No need for a technical spec. Just capture the basics to share with vendors.
Final Thoughts
Take your time with documenting your digital asset management needs. It can be tempting to jump straight into vendor conversations, but a clear, well-documented foundation will save time, reduce confusion, and support better decisions later on.
And don’t try to do it alone. Involve the people who will use the DAM every day. Their input will save you from surprises later, and probably make the system better for everyone.
Appendix A. DAM Selection Planning Checklist
Once you’ve done some of this early prework, like interviewing stakeholders and identifying your assets, you can move on to this checklist. It’s comprehensive and may feel overwhelming at first, but you don’t have to tackle it all at once. Take it step by step. Collaborate with your main stakeholders. Check in with IT. Use this list to structure your planning, shape your RFP, and guide vendor conversations.
The good news is, if you’ve done the work above, this list will feel much more manageable and actionable.
Strategic Foundation
- What purpose will your DAM system serve, and what problems is it meant to solve?
- What does success look like, and how will you measure it?
- What does Phase 1 (Minimum Viable Product) look like?
Users & Stakeholders
- Who are your key users and stakeholders?
- Have you conducted recorded interviews with them?
- What pain points and needs did they share?
- Have you tracked themes across roles and prioritized them?
- Who will administer the DAM system?
Usage Scenarios & Requirements
- Have you written future-focused usage scenarios for core roles?
- Have you written user stories that describe desired functionality?
- Are your requirements categorized as Mandatory / Preferred / Nice to Have?
- Are sources (departments, individuals) attributed to each requirement?
Assets & Storage
- What types of digital assets do you manage? (e.g., images, videos, audio, 3D)
- Where are they stored now? (shared drives, cloud storage, hard drives)
- What’s the estimated volume (e.g., number of files) and storage size (e.g., in TB)?
- Who uses or owns each asset type?
- Are any assets at risk (e.g., no backups, fragile storage media)?
Metadata & Organization
- What metadata do you track, even informally (e.g., in file names or spreadsheets)?
- Where does that metadata live (e.g., embedded, folder structures, Excel)?
- Do you have consistent file naming conventions?
- Do you use any controlled vocabularies or taxonomies?
Workflow & Lifecycle
- Who creates, reviews, approves, and publishes digital assets?
- What do your current workflows look like, and where are the pain points?
- Do you distinguish between Work in Progress (WIP) and Final assets?
- How are assets currently tagged and ingested?
- Who will manage migration and tagging into the new DAM?
Digital Preservation
- Do any assets need long-term preservation beyond active use?
- Are there embargoing, archiving, or retention policy requirements?
- Will the DAM integrate with a preservation system or strategy?
Licensing & Rights
- Are you currently tracking usage rights and license information?
- Do you know which channels, regions, and formats assets are approved for?
- Are any licenses expired, missing, or uncertain?
- How will user roles, permissions, and security be defined in the DAM?
UX / UI
- What should the user experience be like for search, upload, and browsing?
- Do you need features like thumbnails, preview players, or 3D viewers?
- Do you need multilingual interface support?
- How will different user types (e.g., casual vs. power users) interact with the system?
Integration Requirements
- What systems should the DAM integrate with (e.g., CMS, PIM, Adobe CC)?
- What kind of integrations do you need (e.g., push/pull assets, metadata sync)?
- Are any integrations vendor-supported or likely to require customization?
- Which integrations are Mandatory, Preferred, or Nice to Have?
Technical Requirements
- Do you require SaaS (cloud-based) or on-premise deployment?
- Is SSO (Single Sign-On) required (e.g., via SAML or OAuth2)?
- Are there preferred storage providers or data residency requirements?
- What is the max file size or upload threshold?
- Do you need accessibility compliance (e.g., WCAG 2.1 AA)?
- Will the DAM need to support public delivery of assets with secure access?
Timeline & Budget
- What is your ideal timeline for selection, contracting, and go-live?
- What is your estimated first-year cost?
- What is your projected ongoing cost (e.g., storage, licensing, support)?
- Will implementation be phased or rolled out all at once?
A DAMn Good Investment
24 June 2025
When the going gets tough, the tough get investing.
With economic instability, the pressure is on leaders to tighten belts yet remain top of mind for target markets. In 2025, the global economy has been wildly unpredictable, with tariffs, layoffs, and unstable consumer confidence. And one of the biggest mistakes I see business leaders make during times of uncertainty is cutting their marketing and advertising budgets altogether. To unlock the full potential of a company’s data for informed decision-making, it is essential that data be accurately recorded, securely stored, and properly analyzed. This becomes especially critical during economic downturns, when financial scrutiny intensifies and every margin matters. Data presented to prospects and existing customers must be precise to ensure that services and differentiators are clearly and correctly communicated. Internally, the accuracy of data shared with executives and analysts can directly influence client retention, strategic direction, and budget planning.
This is also a matter of operational efficiency. Even with effective employee training, the benefits can only be realized if teams are working from a consistent and reliable source of truth … DAM. Establishing this foundation is an investment that relies more on strategic time allocation than significant capital expenditure. To position itself for future growth, a company cannot afford to be complacent when evaluating potential technology investments. In a fast-moving digital landscape, organizations that delay improvements during slow periods risk falling behind. In contrast, companies that make deliberate investments—whether through new systems or by dedicating employee time to development and training—will be better prepared to seize emerging opportunities and showcase their competitive advantages as conditions improve.
This is a good time to invest in DAM.
Change is a Good Investment
Change is as present as it is pervasive. It is good to recognize, acknowledge, and accept that change is happening in business, and to learn not only what that means for you and your team but also how to be ready for new opportunities. So, why do we change?
- We change to advance forward.
- We change to make ourselves stronger.
- We change to adapt to new situations.
Without change, there would be no improvements. If business is about growing, expanding and making things better for your customers, then what changes are you making? As many of us begin to see future recovery, I too look to the horizon and know that better days are ahead for us all. Whether you call it an improvement, an upgrade, or a modernization, any such effort is holistic by design, encompassing all aspects of the business. Many businesses have taken this time to focus on improving the aspects of their business that affect people, process, and technology. This is about access to information from many systems that enables our work rather than hinders it. Watch for signs and respond well. Improvement for all is a good thing. In business, we always aspire to stability but need to be prepared for the opposite. This is about both insurance and investment.
Invest in DAM
The demand to deliver successful and sustainable business outcomes with our DAM systems often collides with transitioning business models within marketing operations, creative services, IT, or the enterprise. You need to take a hard look at the marketing and business operations and technology consumption with an eye toward optimizing processes, reducing time to market for marketing materials, and improving consumer engagement and personalization with better data capture and analysis.
Time to Transform
To respond quickly to these expectations, we need DAM to work within an effective transformational business strategy that involves the enterprise. Whether you view digital transformation as technology, customer engagement, or marketing and sales, intelligent operations coordinate these efforts towards a unified goal. DAM is strengthened when working as part of an enterprise digital transformation strategy, which considers content management from multiple perspectives, including knowledge, rights and data. Used effectively, DAM can deliver knowledge, measurable cost savings, time-to-market gains, and greater brand voice consistency: valuable and meaningful effects for your digital strategy foundation.
Future-Proof your Content
Consider the opportunity in effective metadata governance: do you have documented workflows for metadata maintenance? Are you future-proofing your evergreen content and data? Remember to listen to your users, to keep up to date and aware of your digital assets, and leverage good documentation, reporting, and analytics to help you learn, grow and be prepared. If you are not learning, you are not growing. If you are not measuring, then you are not questioning, and then you are truly not learning.
Conclusion
Keep the lights on. Now is the time to get smart and strategic with your money to ensure you can weather the current unpredictability and even come out ahead. Tariffs, recession fears, rising prices, and potential layoffs dominate headlines right now. As you look to the second half of the year, this might be causing you to take a close look at budget forecasts and reevaluate spending.
Play the long game. Marketing is a long-term strategy, and DAM is a cornerstone of marketing efforts and operations. More than ever, there is a direct need for DAM to serve as a core application within the enterprise to manage marketing assets. The need for DAM remains strong and continues to support strategic organizational initiatives at all levels. DAM provides value in:
- Reducing Costs
- Generating new revenue opportunities
- Improving market or brand perception and competitiveness
- Reducing the cost of initiatives that consume DAM services
The decision to implement a DAM isn’t one to take lightly. It is a step in the right direction to gain operational and intellectual control of your digital assets. DAM is essential to growth as it is responsible for how the organization’s assets will be efficiently and effectively managed in its daily operations.
A DAMn good investment to me.
What you need to know about Media Asset Management
10 December 2024
While marketing strategies vary by company size or industry, they likely have one thing in common: a lot of content.
Every stage of the customer journey is powered by marketing content — from digital ads and social media posts to web pages and nurture emails. And if all the related workflows are going to run smoothly, all of the supporting assets need to be organized effectively.
That’s where media asset management comes in. Let’s take a look at this practice and how media asset management software can help teams achieve their content goals.
What is media asset management?
Media asset management (MAM) is the process of organizing assets for successful storage, retrieval, and distribution across the content lifecycle.
This includes any visual, audio, written, or interactive piece of content that supports a marketing goal. The list of possible marketing assets is long and can include:
- E-books
- Whitepapers
- Customer stories
- Reports and guides
- Infographics
- Webinars
- Explainer videos
- Product demos
- Podcasts
- And more!

In addition, all of these assets are produced with the help of many smaller creative elements, like images and graphics. The volume of these files grows exponentially…after all, one photoshoot alone can result in hundreds of images.
And if teams don’t have a centralized repository for their assets, finding a specific file requires a tedious search across shared drives, hard drives, and other devices — which can prove impossible without knowing the filename. Audio and video files can be particularly challenging to manage not only because they tend to be large but also because they are difficult to quickly scan. Sometimes files simply can’t be found and have to be recreated.
This content chaos all adds up to a lot of wasted time and resources…and frustration.
MAM software addresses this problem by providing teams with a single, searchable repository to store and organize all creative files — making asset retrieval a breeze.
How is media asset management software used?
While content management is important for all kinds of teams, MAM focuses on marketing assets and workflows. And the benefits of MAM are numerous — let’s explore some through common use cases.
Distributed marketing teams, one brand
It takes a village to bring a marketing strategy to life, and that village often includes numerous regional offices, remote workers, contracted agencies, and external partners. And all of these content creators and communicators need to be working towards a unified brand experience.
MAM software makes global brand management possible by providing a centralized platform to store and manage files, including brand guidelines and standards. So not only are dispersed team members working from the same playbook, but they are also using the same brand-approved assets — helping to ensure consistency across customer touchpoints.
Ease of use, powered by metadata
Marketing assets are central to workflows across an organization. For example, sales reps need current product materials for deal advancement and customer success managers use the same assets to support and educate existing customers. Their ability to work with agility depends on having these resources at their fingertips.
MAM software includes flexible metadata capabilities that power robust search tools, allowing users to locate an asset with just a few clicks — even in a repository of tens of thousands of files. Further, because MAM software offers easy-to-use versioning capabilities, users can be confident that assets are current. This efficiency accelerates workflows and fuels revenue growth.
Integrated systems and automated processes
A modern marketing technology (martech) stack includes numerous platforms to store, produce, and publish content, many of which include their own asset libraries.
By positioning a MAM system as the central source of truth for all content, teams can simplify content management and consolidate redundant tools. Powerful APIs and out-of-the-box connectors automate the flow of content from a MAM platform to other systems to ensure the same assets are used across digital destinations, without the need for manual updates across the content supply chain.
What are media asset management software options?
Organizations shopping for a MAM platform have an abundance of choices to consider. A simple search for “media asset management” on G2 — a large, online software marketplace — lists 82 products!
All of these platforms have big things in common. For example, most, if not all, are software as a service (SaaS) solutions in the cloud (versus on-premise software that is installed locally).
However, the specific features that each vendor offers can vary quite a bit. So the first step in any MAM software search is to clearly understand and outline the functionality needed for your unique workflows.
From there, you can really begin your research in earnest — or even start drafting a request for proposal (RFP).

The difference between MAM and DAM
In today’s enormous martech landscape, media asset management overlaps with several other disciplines, including digital asset management (DAM).
While these two solution categories are similar (in name and practice), there are key differences.
DAM refers to the business process of storing and organizing all types of content across a company. This could mean files from the finance department, legal team, human resources, or other business units.
MAM, on the other hand, really focuses on assets that the marketing department requires, including large video and audio files.
So finding the solution that’s right for your team really starts with clarifying your functionality needs, including the types of files you want to store in your system and how you need to manage and distribute them.
Successful technology selection, with AVP
Is a MAM or DAM platform right for your organization? Further, which vendor is the best match for your marketing goals? While answering these questions can be hard, we can help.
AVP’s consultants have worked with hundreds of organizations to select the software partner that best fits their workflow and technology needs. If you’re dealing with content chaos, we’d love to hear from you.
Contact us to learn more about AVP Select — and how we can work together to achieve your content management goals, faster.
5 Warning Signs Your DAM Project is at Risk
12 September 2024
In the world of Digital Asset Management (DAM), recognizing early warning signs can mean the difference between success and failure. Chris Lacinak, founder and CEO of AVP, shares valuable insights from his extensive experience in the field. Here are the five key warning signs that indicate your DAM project may be at risk.
No Internal Champion
The absence of an internal champion can jeopardize your DAM project. This champion should possess the necessary expertise and experience to guide the initiative effectively. In practice, the warning sign sounds like relying too heavily on external consultants, or team members assuming they can manage without a dedicated point person. This reliance signals a potential failure in project management.
So, what does an effective internal champion look like? They need to:
- Have a solid understanding of the domain.
- Dedicate time solely to this project.
- Maintain the knowledge and context after external parties depart.
Having someone who can coordinate resources and make informed decisions is crucial for the sustainability and success of the project.
Insufficient Organizational Buy-In
Lack of organizational buy-in is another critical warning sign. If key stakeholders, especially leadership, do not understand the DAM initiative or fail to see its significance, the project is likely to struggle. This might manifest as leadership not being involved or departments feeling excluded from the process.
To foster buy-in, it’s vital to establish executive sponsorship early on. This sponsorship can come from various levels, including directors or C-level executives, who can advocate for the project and ensure it aligns with the organization’s strategic vision. Engaging with key stakeholders will help ensure they feel heard and included in the process, reducing potential pushback during implementation.
Inability to Articulate Pain Points
Another red flag is the inability to clearly articulate pain points. Statements like “we’re just a mess” or “we need a new DAM” indicate a lack of understanding of specific challenges. It’s essential to identify precise pain points to effectively address them.
Using the “Five Whys” technique can help drill down to the root of the problem. For instance, if a team is losing money, asking why repeatedly can reveal that the core issue might be difficulty in finding digital assets, leading to unnecessary recreations. This approach emphasizes that pain points are human problems, not just technological ones, and should be treated as such.
Unclear Definition of Success
Not knowing what success looks like or what the impact of solving the problem would be can lead to project derailment. If stakeholders cannot envision the outcome of a successful DAM implementation, it suggests a lack of direction and clarity.
To establish a strong business case, it’s crucial to articulate what success entails. Consider questions like:
- What will you be able to do that you couldn’t do before?
- What improvements will you see in workflows or team morale?
- How will this align with the organization’s strategic goals?
A well-defined vision of success helps secure leadership buy-in and provides a roadmap for measuring progress.
Skipping Critical Steps in the Process
Finally, wanting to skip critical steps or prematurely determining solutions can be detrimental. Statements like “we did discovery a couple of years ago” or “we just need a new DAM” indicate a lack of thoroughness in the planning process.
Discovery is essential for gathering updated information and engaging stakeholders. If stakeholders feel involved in the process, they are more likely to support the initiative and its outcomes. Rushing through this phase can lead to poor decisions and wasted resources, ultimately putting the project’s success at risk.
Conclusion
Identifying these five warning signs early on can help mitigate risks associated with your DAM project. Establishing an internal champion, ensuring organizational buy-in, articulating pain points, defining success, and taking the time to conduct thorough discovery are all critical steps toward a successful DAM implementation. By addressing these areas proactively, you can set your project up for success and avoid common pitfalls.
If you found this information helpful or have further questions, feel free to reach out to Chris Lacinak at AVP for more insights into managing your DAM projects effectively.
Transcript
Chris Lacinak: 00:00
Hey, y’all, Chris Lacinak here.
If you’re a listener on the podcast, you know me as the host of the DAM Right Podcast.
You may not know me as the Founder and CEO of digital asset management consulting firm, AVP. I founded the company back in 2006.
And I have learned what the early indicators are that are likely to make a project successful or a failure.
And I’m gonna share with you today five warning signs that your project is at risk.
So let’s jump in.
Number one, there is no internal champion to see things through, or there’s an over-reliance on external parties.
Now, what’s that sound like pragmatically?
That sounds like, “that’s why we’re hiring a consultant”, or “between the three of us, I think we should be able to stay on top of things.”
You might think it’s funny that myself as a consultant is telling you that you should not have an over-reliance on external parties, but the truth of the matter is, is that if you are over-reliant on us, and you are dependent on us, we have failed to do our job.
Chris Lacinak: 01:06
That is a sign of failure.
But let’s talk about the champion.
What’s the champion look like?
Well, first and foremost, it’s someone who has the right expertise and experience.
We wanna set this person up for success.
They need to have an understanding of the domain.
They don’t have to be the most expert person, but they need to understand, they need to be conversant, they need to understand the players, the parts, how things work.
They need to be knowledgeable enough that they’re able to do the job.
Second, they need to have the time.
This can’t be, you know, one of ten things that this person is doing, part of their job.
It needs to be dedicated.
And it can’t be something that’s shared across three, four, five people.
That’s not gonna work either.
Things will slip through the cracks.
Now, why is this important?
Well, it’s important because it mitigates the reliance on external parties, as I’ve already said.
But what’s the other significance?
The other significance is that once the consultant leaves, or once the main project team is done doing what they’re doing, whether that’s an internal project team or external project team, this person is gonna be the point person that is going to maintain the knowledge, the history, the context of the project.
Chris Lacinak: 02:14
They’re gonna have an understanding of what the strategy, what the roadmap is, what the plan is, and they’re gonna help execute that.
They’re gonna be the point person for coordinating the various resources, the people.
And, you know, it’s gonna depend what this position looks like as to what authority they have, what budget control they have, things like that, about exactly what it looks like.
But more or less, this person is either gonna be, you know, the main point of recommendations and influence, or they might even be the budget holder and actually be making the calls and decisions.
But one way or another, you need somebody that is gonna see this through, that’s internal to your organization in order for it to be sustainable and for it to succeed.
Number two, not enough organizational buy-in or poor socialization.
What’s that sound like in practice?
Well, it might sound like “leadership doesn’t understand, they just don’t get it.”
Or “there’s issues with that department that we don’t need to go into here, but we don’t need to include them.
They’ll have to fall into place once we do this.”
Or “we haven’t talked with folks about this yet, but we know it’s a problem that needs to be fixed.
It’s obvious.”
Chris Lacinak: 03:14
Those are all signs that there’s poor socialization and that you haven’t gotten the appropriate buy-in from the organizational key stakeholders.
Now, who are the key stakeholders?
Well, let’s start with executive sponsorship.
It’s critically important that there’s an executive sponsor.
Now, executive can mean a number of things.
It could be director level, it could be C-level.
Essentially, it’s someone who is making decisions, is a key part of fulfilling strategy for the organization, the department, the business unit, has budget and is making budget calls.
So why is this important and how do you respond to this?
Well, it’s important because executive sponsorship is looking out for the vision, the strategy, the mission, and the budget of the organizational unit.
Chris Lacinak: 03:59
You can’t be sneaky and successfully slip a DAM into an organization, right?
There’s no such thing as a contraband DAM.
It’s not like that bag of chips you sneak into the grocery cart when your significant other is looking the other way.
It’s an operation, it’s a program.
It requires executive sponsorship.
It requires budget.
It requires a tie-in into the strategy, the vision, and the mission.
It’s not a bag of chips.
It’s all that and a bag of chips.
Who are the other key stakeholders?
Well, your key stakeholders are gonna be the other people that are either contributors, users, or supporters of the DAM in some way.
‘Cause it is an operation.
A DAM has implications to workflows, to policies, to behaviors.
It touches so many different parts of the organization.
So it’s critically important that your key stakeholders are included as early on in the process so that they feel heard, they feel included, they feel represented.
And when it’s time to roll out that DAM, you don’t have people going and looking backwards, right?
Chris Lacinak: 04:57
Everybody has an understanding of where you are, why you’ve arrived there, and how you’re moving forward.
And even if they don’t agree, they understand and they’re on board.
And you want concerns and objections.
You want those, as I said, not at the point at which you’re trying to do the thing, but you want it early on.
You wanna be able to respond to those.
You wanna be able to address them.
You hope they come out as early as possible so that you can build allies and trust as early on in the process as possible.
Now, do not confuse this with getting consensus and doing everything by consensus.
That is not what I mean.
And in fact, that could stand on its own as a major warning of potential failure here.
So doing everything by consensus is a huge downfall.
Do not go that route.
You want a robust, diligent system and a process for planning and executing the project that uses and addresses feedback of people along the way, but not one that requires everybody to agree on something.
Chris Lacinak: 06:01
And not one where everybody’s wants and wishes are treated equally, right?
That’s just not how organizations work.
If you do that, you’re gonna end up with a system that makes nobody happy ’cause you’re not doing anything particularly well.
So it’s a setup for failure.
So do not do that.
And you might think, well, if I don’t give people what they want, if I tell people no, if we say their issue is a priority three instead of a priority one, they’re gonna object to the system or the program.
And that’s not true.
Actually, what people want, they wanna feel heard, right?
They wanna be able to raise their concerns, their objections, their wishes.
Then they wanna know that you hear them.
They want to understand why whoever’s making the decisions are making the decisions the way they are.
So if their issue or their wish or their request is number three instead of number one, people can live with that.
Chris Lacinak: 07:02
If they understand why you’ve made that decision, you’ve addressed it, it’s transparent, and it serves the greater mission, vision, or purpose of what you’re trying to do.
So that is critical.
You need to lay out that mission, vision, purpose early on.
And again, if you look at our DAM operational model, that is at the center of the operational model.
Once you have people on board with that, then you can start to get people to organize around that.
And even if they don’t get everything they want, they’re willing to be on board and be a productive member of obtaining that greater goal.
Number three, unable to clearly articulate the pain points.
So what’s that sound like in practice?
Well, it sounds like something like, “we’re just a mess.
My friend works at such and such organization, they have their act together, we need to be like them.”
Or “we definitely just need a new DAM.
Jerry was in charge of getting the one we have now and nobody likes Jerry.”
Chris Lacinak: 08:01
So that’s not often a good place to start.
You know, well, and maybe you do start there.
It’s not a good place to end.
You don’t act off of that point.
That begs more questions.
And here’s what I’ll say: you know, I’m a buyer of services in areas where I’m not an expert.
So there are things that I know I don’t know.
And I might have trouble talking to, you know, a service provider, not understanding what the landscape of service offerings is or exactly what I need, right?
You don’t need to know what you need or what the solution is.
It’s okay that you don’t know what you don’t know.
That’s not the problem.
That’s something where the service provider you’re talking to can guide you, and if you’re talking to AVP, we can guide you on that.
We will get enough context and understanding to be able to guide you on that.
Chris Lacinak: 09:02
What we need to know is what your pain points are.
So imagine going to the doctor, you have a hurt knee and maybe that hurt knee has you worried.
So your stomach’s upset with worry and you’re feeling down in the dumps because you’re not feeling well.
You need to be able to articulate the relevant pain points to the doctor, right?
You can’t just go in and say, “Oh, I feel awful.”
“Well, what’s wrong?”
“Everything.”
No, that’s not gonna help anybody.
You need to be able to say at least my knee is hurting.
That’s the main problem.
If there’s information and context on what caused that, that’s great and useful.
You know, “I had a fall.”
“I twisted it when I was on the trail”, whatever the case may be.
That might be helpful information, but you don’t need to know why your knee hurts and you don’t need to know what the solution is.
You just need to be able to point the doctor in the right direction of where the pain is.
Similarly with DAMS, right?
So let me give you a tip for identifying pain points.
I talked about, I gave those examples of what it sounds like up front.
And I said, you know, that might be an okay place to start.
I mean, it might be.
Chris Lacinak: 10:01
Sometimes you’re just frustrated, you’re overwhelmed, right?
And that’s the thing that comes out.
But that’s not the place you end.
So there’s something called the five whys.
It comes out of root cause analysis, and I think it can be really useful here.
I’ll give it to you as a tool to use for drilling down on what the pain points are.
So let’s give an example.
Let’s state a problem.
And then you ask why five times to get down to the core of the problem.
So let’s say we start at, “we’re just a mess.”
Well, why?
“Well, we’re losing money.”
Why is that?
“Because we keep missing deadlines and going over budget on production expenses.”
Well, why is that?
“Well, because people continuously have to recreate assets that we’ve already created in the past, and that just takes more time and more money.”
Well, why is that?
“Because people can’t find what they’re looking for.
They have to recreate it.
Or maybe it’s lost and we have to recreate it.”
Well, why?
“Well, because when they search for things using the terms that are meaningful to them, they don’t get the right results.
Chris Lacinak: 11:00
They don’t get the things they are looking for.
And it takes too much time and it’s too hard.”
Now that is extremely useful, right?
That’s where we get down to the pain points.
People can’t find what they’re looking for.
That gives us something to work with.
And remember here too, you put humans first.
Pain points are not technology problems, they’re human problems.
And we’re aiming to solve human problems.
Now, technology has a role to play in solving these problems, but technology problems are not what we’re aiming to solve.
The problem is not that you don’t have a DAM.
The problem is that the digital assets can’t be found easily.
Digital assets are being lost and recreated.
Licensed content is being misused.
Brand guidelines are being violated.
All of these things cause pain to people in the way of time, money, frustration, excellence, et cetera.
So one solution to this may be a DAM technology, but there’s more to it than just that.
And again, I’m gonna point you to the DAM Op model.
Number four, you’re unable to understand what success looks like or what the impact of solving the problem would be.
Chris Lacinak: 12:01
And that sounds like, (crickets chirping) crickets.
I always like to ask, if you could solve all of these pain points and problems, and if this project is a total success, imagine, what does it look like?
What are you able to do that you couldn’t do before?
What do you have that you didn’t have before?
And this ties back to pain points ultimately, and you put them together to make what’s called a business case.
And while it’s always a great idea for a business case to get down to dollars, it doesn’t have to.
So don’t be distracted by the money.
Let’s go for the money, that’s important, and I’ll talk about why.
But don’t be distracted by that.
Let’s talk about the other things too, ’cause there are qualitative factors that are meaningful and important as well.
For instance, you might say, “if we could solve the search problem, we’d be able to come in on or under deadline and budget.
The team would have a much better work experience, people would feel happier, less frustrated.
The CEO or executive director would be ecstatic because we’d be able to support three of their five key strategies over the next year.
We’d be able to reduce storage costs by 200%.
Chris Lacinak: 13:01
We could cut the legal budget for license fee violations by 90%”, right?
And I’m gonna link to a business case post and slide deck template that we have for you to help you there.
And I’m gonna encourage you to go and check that out.
But the point here is that you need to be able to articulate what success looks like.
And when it’s tied into the vision and strategies of the organization and leadership, that puts so much wind in the sails of your DAM project.
Less pain is one thing, but more joy is even better.
If you don’t have a strong business case and you can’t speak to what success looks like or what the impact will be, I’m gonna say it’s unlikely that you truly have the buy-in of leadership.
Or that there’s even a sustainable path forward that is at least clear today.
Why?
Well, because leadership, whether it’s at a Fortune 500 or a nonprofit or higher ed institution, prioritizes where they spend their resources based on how well it supports their vision, strategies and mission.
Chris Lacinak: 14:04
If you can’t convince them and demonstrate how your DAM project will do that, then you’re not going to get anything more than play money.
What do I mean by play money?
It’s something that keeps you busy and out of their hair while they go about realizing their vision, strategy and mission.
It’s not a sustaining revenue source.
It’s not a sustaining funding source, I should say, in order to support a DAM project and program.
Also, if you don’t know what you’re aiming for, how can you measure, track, report, prove, and improve?
In order to keep the attention of leadership, you need to be able to consistently demonstrate the value of the DAM program.
And aside from leadership, it also creates direction and orientation for the organization.
Number five, wanting to skip critical steps or predetermining that you need something you don’t.
What’s this sound like in practice?
It might sound like, “We did discovery a couple of years ago.
We’ll use that so we can do this faster or cheaper.”
Or, “We know we just need a new DAM.
Let’s just focus on that.”
Chris Lacinak: 15:03
Or it might sound like, “We’re just a mess.
My friend works at such and such organization and they have Acme DAM and it works great.
And nobody likes our DAM vendor anyway.
We just need a new DAM.”
Well, let’s start with the discovery part.
So the reality is that discovery serves multiple purposes.
One purpose is information, right?
And in six months or twelve months or two years, things change and can change dramatically.
So for just informational purposes, you’re setting the foundation up here for your strategy, your plan, your implementation, whatever it is, you don’t wanna take a risk on getting that wrong.
Make sure that your information is up to date, it’s accurate, it’s robust, right?
I wouldn’t use information from six months ago or two years ago as a stand-in for today for that reason.
But discovery serves other purposes.
Discovery gets stakeholders, key stakeholders specifically, sitting down at the table and engaging.
Chris Lacinak: 16:07
That’s critically important for change management, for buy-in.
Earlier, we talked about getting those objections and concerns out on the table as early as possible.
It does that.
It gets people talking, people feel included, they feel part of the process.
It greatly increases, from a human and organizational perspective, the probability of success.
So you wanna get people down and engaging in this process early on.
The other funny thing about this is when someone comes in having predetermined, we just need a new DAM.
They’re coming to an expert that they have sought out, seeking that expert’s expertise and experience.
In that regard, they have acknowledged that they don’t have the appropriate expertise and experience themselves.
On the other hand, they are sure that they know best, better than the expertise and experience of the person they’ve sought out.
Chris Lacinak: 17:02
So let’s start with, “we’re just a mess.
My friend works at such and such organization and they have Acme DAM.”
They might go on to say, “we know we need Acme DAM and we just need you to convince procurement that we need Acme DAM and get them to get it for us.”
This is like going to your doctor with a hurt knee and saying, “my friend got a knee replacement and it did them wonders.
I know I just need a knee replacement and I need you to convince the insurance company to pay for it.”
Now, it’s possible you need a knee replacement, just like it’s possible that this organization needs a new DAM.
And it’s within the realm of possibility that this organization would benefit from Acme DAM and you would benefit from a new knee.
But if that doctor says, “okay, let’s look at the surgery schedule, how’s noon today work?”
You should run.
Well, maybe don’t run.
You do have a hurt knee after all, walk briskly out of there and don’t go back.
Similarly, if a consultant says, “okay, let’s get to work on getting you that Acme DAM” after that conversation, you should run, which is okay in this scenario because you don’t have a hurt knee as far as I know.
Chris Lacinak: 18:03
The real deal is that there are lots of reasons that DAM operations and programs don’t work.
That experienced doctor is gonna look at your gait, at how you hold your body, ask questions about your activities, look at whether it’s a bone or a tendon issue, et cetera.
They’re gonna look systematically and holistically before making a judgment call in the best course of action.
And that’s exactly what you or your DAM consultant should do in this scenario in order to stand the best chance of getting to success in the fastest and most cost-effective manner.
Because the disaster scenario is that you get the new knee or you get the new DAM and not only does it not make things better, but it makes them worse.
In the knee situation, you’ve only hurt yourself.
It’s still unfortunate, but you’ve only hurt yourself.
In the DAM scenario, you’ve likely wasted hundreds of thousands of dollars.
You stand to lose more.
You’ve lost trust.
You’ve hurt morale.
You’ve possibly put your job at risk.
So there’s a lot to lose.
And I’m gonna link to a blog that we wrote about the cost of getting it wrong, just so that you can understand a little bit better why you don’t wanna go that route.
Chris Lacinak: 19:02
So those are the five warning signs that your DAM project is at risk.
I hope that you have found this extremely helpful.
Please email me at [email protected].
Leave comments, like, and subscribe.
Let me know how you liked it.
And let me know if you’d like to see more content like this or hear more content like this.
Thanks for joining me today.
Look forward to seeing you at the next DAM Right Podcast.
Exploring the Future of Object Storage with Wasabi AiR
8 August 2024
In today’s data-driven world, object storage is revolutionizing how we manage digital assets. Wasabi AiR, an innovative platform, uses AI-driven metadata to enhance this storage method, making it more efficient and accessible. This blog explores how Wasabi AiR is reshaping data management, the benefits it offers, and what the future holds for AI in this field.
How Wasabi AiR Transforms Object Storage
Wasabi AiR integrates AI directly into storage systems, automatically generating rich, searchable metadata. This feature allows users to find, manage, and utilize their data more effectively. By enhancing storage with AI, Wasabi AiR helps organizations streamline data retrieval, boosting overall productivity and efficiency.
The Evolution of Metadata in Object Storage
While AI-generated metadata has existed for nearly a decade, its adoption in data storage has been slow. Wasabi AiR simplifies this integration, allowing organizations to leverage automation without complexity.
Aaron Edell’s Vision for AI in Storage
Aaron Edell, Senior Vice President of AI at Wasabi, leads the Wasabi AiR initiative. His vision is to make AI a seamless part of data management, enabling organizations to generate metadata effortlessly and manage digital assets more efficiently.
Advanced Technology in Wasabi AiR
Wasabi AiR uses advanced AI models, including speech recognition, object detection, and OCR, to create detailed metadata. This capability enhances the storage system by making data more searchable and accessible. One standout feature is timeline-based metadata, enabling users to locate specific moments within videos or audio files stored in their systems.
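To make timeline-based metadata concrete, here is a minimal sketch (in Python) of what a timecoded metadata record for a single stored video might look like, and how an editor could jump straight to the moments where something appears. The field names and values are illustrative assumptions, not Wasabi AiR’s actual schema or API.

# Illustrative only: a hypothetical shape for timecoded metadata on one stored object.
asset_metadata = {
    "object_key": "footage/2024-03-10_match.mp4",
    "transcript": [
        {"start": "00:12:03.2", "end": "00:12:05.9", "text": "What a goal!"},
    ],
    "detections": [
        {"start": "00:12:01.0", "end": "00:12:07.5", "type": "logo", "label": "Wasabi"},
        {"start": "00:45:10.0", "end": "00:45:12.3", "type": "face", "label": "Unknown person 7"},
    ],
}

def moments_with(label, metadata):
    # Return the start timecodes where a label appears, so an editor can jump right to them.
    return [d["start"] for d in metadata["detections"] if d["label"] == label]

print(moments_with("Wasabi", asset_metadata))  # ['00:12:01.0']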
Use Cases: How Wasabi AiR Benefits Different Sectors
Wasabi AiR has numerous applications across industries, improving data handling in:
- Media and Entertainment: It helps create highlight reels quickly, as seen with Liverpool Football Club’s use of Wasabi AiR to boost fan engagement.
- Legal Firms: Law firms save time by managing extensive video and audio records efficiently.
- Education and Research: Institutions make their archived content more accessible through AI-driven metadata.
Cost Efficiency of AI-Powered Data Storage
Wasabi AiR offers a cost-effective, predictable model: standard Wasabi storage is priced at $6.99 per terabyte per month, and at launch, storage with AiR’s metadata indexing was offered at $12.99 per terabyte per month. This straightforward pricing makes it easier for organizations to predict costs while benefiting from AI-enhanced capabilities.
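As a rough way to project those numbers, the small sketch below multiplies the quoted per-terabyte rates out over three years for a hypothetical, growing library. The 50 TB starting size and 20% annual growth are assumptions for illustration only; substitute your own figures.

# Back-of-the-envelope projection using the per-terabyte rates quoted above.
def three_year_cost(tb_start, monthly_rate_per_tb, annual_growth=0.20, years=3):
    total, tb = 0.0, tb_start
    for _ in range(years):
        total += tb * monthly_rate_per_tb * 12  # twelve months at the current library size
        tb *= 1 + annual_growth                 # assume the library grows each year
    return round(total, 2)

print(three_year_cost(50, 6.99))   # storage alone
print(three_year_cost(50, 12.99))  # storage plus AiR metadata indexing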
Activating Wasabi AiR
Setting up Wasabi AiR is simple. Users connect it to their existing system, and the platform begins generating metadata immediately, enhancing value and usability without requiring complex configurations.
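Because Wasabi is S3-compatible object storage (Aaron compares it to S3 in the interview below), getting assets into a bucket for AiR to index can look like an ordinary S3 upload. This is a minimal sketch: the endpoint URL, bucket name, and credentials are placeholders, and enabling AiR itself happens on the Wasabi side rather than in this code.

# Minimal sketch: upload an asset to a Wasabi bucket via the S3-compatible API (boto3).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",   # assumed Wasabi service endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)
s3.upload_file("interview_raw.mp4", "my-media-bucket", "footage/interview_raw.mp4")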
The Future with AI
As data continues to grow, efficient management is increasingly important. Wasabi AiR is set to play a key role by enhancing searchability and usability through AI-driven solutions.
Integration and Interoperability
Wasabi AiR supports integration with other data management systems, enhancing workflows. Its APIs allow seamless metadata export to Digital Asset Management (DAM) or Media Asset Management (MAM) systems, making data handling more efficient.
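As a sketch of what that kind of integration might look like in practice, the snippet below pulls metadata for one object from a hypothetical AiR-style endpoint and posts selected fields to a DAM’s API. Both URLs, the authentication scheme, and the payload shapes are invented for illustration; the real APIs on either side will differ.

# Illustrative only: hypothetical endpoints and payloads, not real Wasabi AiR or DAM APIs.
import requests

AIR_API = "https://air.example.com/v1"       # placeholder metadata API
DAM_API = "https://dam.example.com/api/v2"   # placeholder DAM API

meta = requests.get(
    f"{AIR_API}/objects/footage/interview_raw.mp4/metadata",
    headers={"Authorization": "Bearer AIR_TOKEN"},
).json()

resp = requests.post(
    f"{DAM_API}/assets/12345/metadata",
    headers={"Authorization": "Bearer DAM_TOKEN"},
    json={"transcript": meta.get("transcript"), "tags": meta.get("detections")},
)
resp.raise_for_status()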
Ethical AI Considerations
Ethical considerations are crucial when implementing AI in data management. Wasabi AiR emphasizes data security and transparency, building trust and supporting responsible AI use.
Conclusion: Elevating Data Management with AI
Wasabi AiR is a game-changer, enhancing how we manage, search, and utilize data. By pairing AI-generated metadata directly with object storage, organizations can significantly improve efficiency, accessibility, and data management. As digital data management continues to evolve, Wasabi AiR positions itself as a leader, offering a future where data isn’t just stored, it’s actively leveraged for success.
Transcript
Chris Lacinak: 00:00
The practice of using AI to generate metadata has been around for almost a decade now.
Even with pretty sophisticated and high-quality platforms and tools, it’s still fair to say that the hype has far outpaced the adoption and utilization.
My guest today is Aaron Edell from Wasabi.
Aaron is one of the folks that is working on making AI so easy to use that we collectively glide over the hurdle of putting effort into using AI and find ourselves happily reaping the rewards without ever having had to do much work to get there.
It’s interesting to note the commonalities in approach between Aaron and the AMP Project folks who I spoke with a couple of episodes ago.
Both looked at this problem and aimed to tackle it by bringing together a suite of AI tools
into a platform that orchestrates their capabilities to produce a result that is greater than the
sum of their individual parts.
Aaron is currently the SVP of AI at Wasabi.
Prior to this, he was the CEO of GrayMeta, served as the Global Head of Business and
GTM at Amazon Web Services, and was involved in multiple AI and ML businesses in founding
and leadership roles.
Aaron’s current focus is on the Wasabi AiR platform, which they announced just before
I interviewed him.
I think you’ll find his insights to be interesting and thought-provoking.
He’s clearly someone who has thought about this topic a lot, and he has a lot to share
that listeners will find valuable and fun.
Before we dive in, I would really appreciate it if you would take two seconds to follow,
rate, or subscribe on your platform of choice.
And remember, DAM Right, because it’s too important to get wrong.
Aaron Edell, welcome to the DAM Right podcast.
Great to have you here.
Aaron Edell: 01:38
It’s an honor. Thank you for having me.
Chris Lacinak: 01:40
I’m very excited to talk to you today for a number of reasons.
One, you’ve recently announced a really exciting development at Wasabi. Can’t wait to talk about that.
But also, our career paths have paralleled and intersected in kind of strange ways over
the past couple decades.
We both have a career start and an intersection around a guy by the name of Jim Lindner, who
was the founder of Vidipax, a place that I worked for a number of years before I started
AVP, and who was also the founder of SAMMA, where you kind of, I won’t say you started
there, you had a career before that, but that’s where our intersection started.
But I’d love for you to tell me a bit about your history and your path that brought you
to where you are today.
Aaron Edell: 02:34
Yeah, definitely.
The other funny thing about Jim is that he is a fellow tall person. So folks who are listening to this can’t tell, but I’m six foot six, and I believe Jim is
also six six or maybe six seven.
So when you get to that height, there’s a little Wi-Fi that goes on between people of
similar height that you just make a little connection.
You kind of look at each other and go, “I know your pain.
I know your back hurts.”
So my whole life growing up, ever since really I was five years old, I loved video, recording,
shooting movies, filming things.
I eventually went to college for it.
I did it a lot in high school.
And this is back in the early 90s when video editing was hard.
And the kid in high school who knew how to do it and had the Mac who could do it was
kind of the only person able to actually create content.
So I was rarefied, I guess, in that sense.
So I would go to film festivals and all sorts, and it was just great time.
And I was never very good at it.
I just really loved it.
And when you love something, especially when you’re young, you learn all of the things
you need to know to accomplish that.
So I learned a lot about digital video just because I had to figure out how to get my
stupid Mac to record and transcode.
And then I got introduced to nonlinear editing very early on and learning that.
So when I went to college, I went there for film and video, really.
That was what I thought I wanted to be when I grew up was a filmmaker.
My father was talent for KGO television and ABC News for a long time.
So I had some familial– and my mother was the executive producer of his radio show.
So I had a lot of familial, sort of, media and entertainment world around and was very
supported in that way, I suppose.
By the time I got– so I went to college, and I loved my college.
Hampshire College is a fantastic institution.
It has no tests, no grades.
It has a– you design your own education, which is not something I was prepared for,
by the way, when I went there.
I’m so thrilled I went there because all of my entrepreneurial success is because of what
I learned there.
But at the time, I had no appreciation for that.
And I just thought, well, this is strange.
I’m here for film and video, and they’re like, here’s a camera.
Here’s a recording button.
And I thought, mm, this is an expensive private college in Massachusetts and probably need
to make it a little bit harder.
So my father is a physician, so I thought pre-med.
And I did it.
I went full on pre-med.
I was going to be a doctor.
I was going to apply to medical school.
But I was also working on documentaries and producing stuff and acting in other people’s
films and things like that.
So I still– that love, that passion never went away.
I was just kind of being creative about how to do it.
And my thesis project ended up being a documentary about a medical subject, which was kind of
perfect.
Because at the end of the day, my father, he’s a physician, but he’s actually a medical reporter.
And that’s a whole separate field that fascinated me.
So when I graduated, I was like, OK.
I went and actually got a job producing and editing a show for PBS, which was super cool
in New York City.
And that was around 2000.
I was doing it for a couple of years.
And we were– it was a PBS show, so we were very reliant on donations and whatnot.
And in 2008, it dried up.
We ran out of money.
And I was looking for a job.
And I worked on a couple of movies that were being shot in the city.
And I found this job at this weird company called SAMMA Systems on 10th Avenue and 33rd
Street or something that was Jim Lindner’s company.
That, I came to learn later.
But they were making these robotic systems that would migrate videocassette tapes to
a digital format.
So think of a bunch of tape decks on top of each other with a gripper going up and down
and pulling videotapes out of a library, putting them in, waiting for them to be digitized,
taking them out, cleaning them– not in that order, but essentially that way.
And I was just fascinated.
I mean, it was so cool.
Building robots.
Chris Lacinak: 07:02
Yeah.
Aaron Edell: 07:03
You know, video.
It was everything I loved kind of in one. And the rest is just really history from there.
Chris Lacinak: 07:09
Yeah.
So we have another intersection that I didn’t know about, which was Hampshire College, although I was denied by Hampshire College.
So you definitely one-upped me on that.
I taught at NYU in the MIAP program, and Bill Brand, who also taught there, also taught
at Hampshire College.
And I told him that I was denied by Hampshire College.
And he said, I didn’t know they denied people from Hampshire College.
Aaron Edell: 07:30
Oh, that makes it worse.
Chris Lacinak: 07:32
Anyway, all things happen for a reason.
It was all good. But that’s very cool.
That is a great school.
And what a fascinating history there.
So it’s not– I mean, I still think there’s– let’s connect the dots between working for
a company that was doing mass digitization of audiovisual and where you are today at
Wasabi.
Like, that is not necessarily easy to fill in that gap.
So tell us a little bit about how that happened.
Aaron Edell: 08:00
Yes.
Well, as my father likes to say, you know, life is simply a river. You just jump in and kind of flow down and you end up where you end up.
I don’t think I could have engineered or controlled this.
You know, SAMMA, this was 2008.
If I could jump back, you know, and say to myself back then, this is where you’re going
to end up, I would just been like, how?
How do you do that?
How is that possible?
So this is what happened.
I mean, I– you know, SAMMA was very quickly acquired by a company called Front Porch Digital in 2009, or very close to it.
And Front Porch Digital, you know, created these products that were– the core product
was called DIVA Archive, which still exists today, although it’s owned by Telestream.
But essentially, it is– you know, you’ve got your LTO tape robot and you’ve got your
disk storage and you have– you’re a broadcaster.
And you need some system to keep track of where all of these files and digital assets
live and exist.
And you’ve got to build in rules.
Like, take it off spinning disk if it’s old.
Make sure that there’s always two or three LTO tape backups.
You know, transcode a copy for my man over here.
Automation wants some video clip for the news segment.
You know, pull it off tape and put it here.
All of that kind of stuff was the DIVA Archive software.
And I’m oversimplifying.
But through that process, you know, I was– I joined as the– I was kind of bottom of
the rung, like, support engineer.
And I had delivered some SAMMA systems, you know, installed some and did a little product
managing just because we were– you know, we needed it.
We were only eight people.
And I was probably the most knowledgeable of the system other than one or two people
at the time.
And so by the time I got to Front Porch Digital, you know, I was doing demos and I was– I
was architecting solutions for customers.
So I was promoted to a solutions architect.
And that’s kind of where I learned, you know, business, just like generic business stuff,
emails, quotes.
I learned about the tech industry and media and entertainment industry in particular and
how, you know, how sales works in those industries and how it doesn’t work sometimes.
And all of the products that are– that are involved.
So I was kind of, you know, getting a real good crash course of just how media and entertainment
works from a tech perspective and how to be a vendor in the space.
I did a brief stint at New Line.
For those of you who don’t know New Line, I don’t think it exists anymore, but it was
a company based in Long Island that kind of pioneered some of the like set-top box digital
video fast channel stuff.
And then– but I was more or less at Front Porch for about seven years.
And then Front Porch was acquired by Oracle.
And working at Oracle was a very different experience.
You know, they are a very, very large company and they have a lot of products.
And I don’t know, I just– it just didn’t feel like I could do my scrappy startup thing,
which I had kind of spent the last 10 years honing.
So that is– so, you know, that is kind of at the point where I– a sales guy that I
had worked with at Front Porch named Tim Stockhaus went off to California to start this company
called GrayMeta based on this idea that we were all kind of floating around, which is,
man, metadata is a real problem in the industry right now, especially as it relates to archives
and finding things.
So GrayMeta was founded on that idea.
When I joined there, I was the first or second employee.
So it was– we were building it from scratch.
And I mean building everything, not just the product or the technology, but the sales motions
that go to market.
And that’s where I learned all that stuff.
I quit GrayMeta about two years in to go start my own startup because I just wanted to do
it.
I wanted to be a founder.
I wanted to know what that was like.
And I, at that point, had learned a lot about machine learning and how it applies to the
media and entertainment industry, specifically around things like transcription and AI tags.
And a couple of my coworkers at GrayMeta had this really great idea that let’s build our
own machine learning models and make them Docker containers that have their own API
built in and their own interface and just make them run anywhere, run on-prem, run in
the cloud, wherever you want.
Because it solved a lot of the problems at the time.
So we jumped ship.
We built the company.
It exploded.
I was the CEO.
My founders were the technical leaders.
And between the three of us, man, we were doing everything– sales, marketing, building,
tech support, all of it.
And gosh, what a learning experience.
Also as a founder and CEO, you’re raising money.
You’ve got to figure out how the IRS works.
You need to figure out how to incorporate stuff.
So a whole other learning experience for me.
We were acquired by a company called Veritone in 2019.
Changed our lives.
I mean, we went through an acquisition.
We walked away with a lot of money.
And it was a whole new world.
Things open up, I guess, when that happens to you in business.
And I actually got recruited to join AWS.
And the funny thing is that it had nothing to do with media and entertainment or AI at
all.
AWS said, hey, you have a lot of experience taking situations where there’s a lot of data
and simplifying it for people or building products to simplify it for people and make
it more consumable and understandable.
AWS has that problem with their cost and usage data.
Chris Lacinak: 13:47
Oh, interesting.
Aaron Edell: 13:49
Yeah. And you get a, especially if you use a lot of the cloud and you’re a big company, you
get a bill. It’s not really a bill.
You get like this data dump that’s not human readable.
It’s billions of lines long, has hundreds of columns.
You can’t even open it in Excel.
It’s like, how do I use this?
So AWS was like, go figure this out, man.
So I mean, gosh, it was such a great experience.
We built a whole business based on this idea.
We built a product.
We built a go to market function.
We changed how AWS and actually I think how the world consumes cloud spending.
I think we had that big of an impact, not to toot our own horn, but it was for me, for
my career and my learning as a human, wow.
Like seeing how you can impact the whole world.
Chris Lacinak: 14:39
Yeah.
Well, as a consumer of AWS web services, I’ll say thanks because the billing definitely improved dramatically over the past several years.
So I know exactly what you mean.
And I see the manifestation of your work there.
I didn’t realize though that that’s what you’re doing at AWS.
I did always have in my mind that it was on the AI front.
So that’s really interesting that you kind of left that.
So in some ways, your role now is kind of a combination of the two in the sense that
Wasabi is a competitor to AWS, but you are very much in the AI space.
So tell us about what you’re doing at Wasabi now.
Aaron Edell: 15:17
Yeah, absolutely.
Well, you know, your point is really spot on, because one of the biggest problems for customers of the cloud, and I learned this thoroughly, is that it’s not forecastable and it’s really hard to actually figure out what you’re spending money on.
And it can also be expensive if you do it wrong.
There really is a right way and a wrong way to do cloud.
And it’s not always obvious how to navigate that.
So when I first came up– so, you know, the board of GrayMeta called me while I was at AWS,
you know, kind of chugging along and said, “Hey, Aaron, why don’t you come and be CEO?”
And I thought, you know what, that’s scary.
But it’s also like, it’s perfect because the GrayMeta story, I feel like we never got to
finish telling it.
I left, you know, before we got to finish telling it.
And so I came back and I said, “Guys, I’ve now had the experience of creating our own
machine learning models and running a machine learning company, like one that actually makes
AI and solves problems.
Let’s do that.”
So that’s when I met Wasabi, was very shortly after I came back.
And you are totally right, because when I met Wasabi, it was like a door opening with
all this, you know, heavenly light coming through in terms of cloud FinOps.
Because Wasabi is, you know, cloud object storage, just like S3 or Microsoft Blob, that
is just $7.99 per terabyte and, sorry, $6.99 per terabyte and just totally predictable.
Like, you don’t get charged for API fees, you don’t get charged for egress, which is
where the kind of complexity comes in for other hyperscalers in terms of cost optimization
and understanding your cloud use and cloud spend.
That’s all the unpredictable stuff.
That’s what makes it not forecastable.
So the fact that, you know, Wasabi has just like a flat per terabyte per month pricing
and there’s just nothing else.
It’s just elegant and simple and beautiful and very compelling for the kind of experience
I had in the, and we call it the FinOps space or cloud FinOps space, where for three and
a half years, all I heard were problems that this solves, right?
So it just pinged in my brain immediately.
The connection with AI, you know, goes back even further in the sense that I had always
advocated for, I always believed fundamentally that the metadata for an object and the object
itself should be as closely held together as possible.
Because when you start separating them and they’re serviced by different vendors or whatever,
that’s where the problems can seep in.
And one of the best analogies for this that I can think of is, you know, our Wasabi CEO,
Dave Friend, I love how he put it because he always refers to, you know, a library needs
a card catalog, right?
You go into the library and the card catalog is in the library.
You don’t go across the street to a different building for the card catalog, right?
It’s the same concept.
I mean, it’s obviously, you know, the physical world versus the virtual computer world, but
similar concept in the sense that, you know, the metadata that describes your content,
it should be as close to the content as possible because if it’s not, you know, you are at
risk of losing data at the end of the day.
I mean, I’ve talked to so many customers that have these massive libraries, sometimes they’re
LTO libraries, sometimes there are other kinds of libraries where they’ve lost the database,
right?
And, you know, in LTO, like you need a database.
You need to know what objects are written on what tape.
It’s gone.
I mean, what do you do, right?
You’re in such a bad, it’s such a bad spot to be in.
So hopefully we’re addressing that.
Chris Lacinak: 19:18
Yeah.
So that’s, I remember reading something on your website or maybe a spec sheet or something for AiR, which said object storage without a catalog is like the internet without a search
engine or something.
So, and to take that, to tie that to your other analogy, it’s like a library without
a card catalog, right?
You walk in, you just have to start pulling books off the shelves and seeing what you
find.
Although there, we have a lot of text-based information.
When you pull a tape out of a box or a file off of a server, there’s a lot more research
to do than there is maybe even with a book.
So yeah.
Aaron Edell: 19:55
Yes.
Chris Lacinak: 19:56
So tell me, what does AiR stand for?
It’s a capital A lowercase i capital R. Tell us about that. What’s that mean?
Aaron Edell: 20:05
So I believe it stands for AI recognition.
Chris Lacinak: 20:08
Okay.
Aaron Edell: 20:09
And so the idea is that the product Wasabi AiR is this new product, and it’s, you know,
the kind of combination of the acquisition. So I guess we skipped the important part, which is that Wasabi acquired the Curio product
and some of the people, including myself came over and the Curio product really was this
platform.
We called it a data platform, if you will, that when you pointed at video files and libraries
and archives, it literally, it would do the job of opening up each file, like you just
said and watch essentially watching it, you know, logging, you know, taking it, making
a transcript of all the speech, looking at OCR information.
So, you know, recognizing text on screen, recording that down, pull, you know, pulling
down faces, object recognition, basically creating a kind of rich metadata entry for
each file.
So this is where I think the, the, the kind of marriage between that technology and Wasabi
comes in, because we now have a way of– essentially, with Wasabi AiR, it’s, you know,
it’s your standard object storage bucket.
Now you can just say anything that’s in that bucket.
I want it, I want a metadata index for that.
We’ll just do that automatically with machine learning, and you have access to that and you can search
and you can see the metadata along a timeline, which is really kind of turning out to be
quite unique.
I’m surprised that I don’t see that at a lot of other places, specifically seeing the metadata along the timeline.
And that’s important because the whole point, it’s not just search, it’s not just, I want
to find assets where there’s a guy wearing a green shirt with the Wasabi logo.
I want to know where in that asset those things appear because I’m an editor and I need to
jump to those moments quickly.
Chris Lacinak: 21:54
Right, right, right.
Aaron Edell: 21:56
So that’s what we’re doing at Wasabi with Wasabi AiR.
And that’s why AiR stands for AI recognition: because, you know, we’re essentially running AI against all your assets and recognizing objects, logos, faces, people, and sounds.
So I want to dive into that, but before we do that, on the acquisition front, did Wasabi acquire
a product from GrayMeta, or did Wasabi acquire GrayMeta?
Wasabi acquired the product, the Curio product.
So GrayMeta still exists.
In fact, it’s really quite, is thriving with the Iris product and the SAMMA product, which
we talked about SAMMA.
That was the other piece I skipped over that too.
When I, when they called me and said, come be CEO of GrayMeta, it really made sense because
SAMMA was part of that story.
And that, that was like a connection to my first job in tech, which was wonderful because
I love, I love the SAMMA product.
I mean, we were preserving the world’s history, you know: the National Archives, the US Library of Congress, the Shoah Foundation, the UN criminal tribunal for the Rwandan genocide, like, just history.
So anyway, I digress.
Chris Lacinak: 23:07
Well, no, I mean, actually, as we sit here today and talk, the last episode that aired was with the Fortunoff Video Archive for Holocaust Testimonies, which was, I think, one of the first, if not the first, SAMMA users.
So that, that definitely ties in.
Was that around, I think it was 2015, maybe 2016?
I remember wandering around the NAB floor and, and for the past several months had been
having conversations with Indiana university about this concept of a project around, you
know, this, this, they, they had just digitized or actually were in the process of digitizing
hundreds of thousands of hours of content, video, film, audio.
And they had the problem that they had to figure out metadata for it.
You know, they had some metadata in some cases, in other cases, they didn’t have any, in other
cases it wasn’t dependable.
So we, we were working on a project that was how does Indiana university and others tackle
the challenge of the generation of massive amounts of metadata that is meaningful.
And so we, that, that was the spawning of this project, which became known as AMP.
And by the time this episode airs, we will have aired an episode about AMP, but I was
wandering around the NAB floor.
I come across GrayMeta.
As I remember, it was like in the back, up against the wall.
And I’m like, oh my God, this is the thing we’ve been talking about.
Like it was kind of like this amazing realization that you know, other folks were doing great
work on that front as well.
I think at the same time there was maybe Perfect Memory.
I mean, they’re, they’re one of the ones who I see doing metadata on the timeline and in
a kind of a similar way that you’re talking about, but but yeah, there weren’t, there
weren’t a lot of folks that were tackling that issue.
So it’s really cool one to have seen the evolution.
Do I have that timeline right?
Was it about like 2015?
You had a product at that point?
I remember seeing it.
So like you had been working.
Aaron Edell: 25:08
Yeah, so I, so we, I joined, I was, like I said, the second employee at GrayMeta, which
would have been August of 2015.
Right.
It must have been.
Chris Lacinak: 25:23
Yep.
Aaron Edell: 25:24
Yes.
So we, we did have a big booth and we had a product. I can’t remember exactly when it was we introduced machine learning for the tagging, but it’s possible it was by then.
Yeah.
But it wasn’t right away; originally we were just scraping EXIF and header data from files and sort of putting that in its own database, which, yeah, it’s cool.
It’s useful.
But when, when machine learning came out, holy cow, I mean just speech to text alone.
Yeah.
Think of the searchability.
Yeah.
What was definitely a problem in 2016, and so for many years, was that your only option was to use the machine-learning-as-a-service capabilities from the hyperscalers, and they were great, but they were very expensive.
Chris Lacinak: 26:13
Yeah.
Aaron Edell: 26:14
And talk about like cost optimization.
You know, even as testers, we would get bills from these cloud providers that shocked us after running the machine learning.
So that’s why we started Machine Box: because we just didn’t think that that was the only way to do it.
And, and it was a problem.
Like we were having trouble getting customers because it was just too expensive.
That’s all been solved now.
But that’s why I think this is interesting: it’s really good validation that you guys, that other people, had come up with the same idea.
That to me is a great sign.
Whenever I see that when independently different organizations and different people kind of
come to the same conclusion that, yeah, this is a problem.
We can solve it this way.
But I think it’s taken this long to do it in a way that’s affordable, honestly, and
secure.
And also the accuracy has really improved since those early days.
Chris Lacinak: 27:15
Yeah.
Aaron Edell: 27:16
It’s gotten to the point where it’s like, actually this, I can use this.
This is a pretty, the transcripts in particular are sometimes 90 to 99 to a hundred percent accurate even with weird accents and in different languages and all sorts.
Chris Lacinak: 27:29
Yeah.
I agree. It’s come a long way to where it’s production ready in many
ways.
Let me ask you though, from a different angle, from the, from the customer angle, do you,
what are your thoughts on whether consumers are ready to put this level of sophistication
to use?
What do you see out there?
Do you see wide adoption?
Are you struggling with that?
What’s that look like?
Aaron Edell: 27:54
So do you mean, you mean from the perspective of like, Hey, I’ve got a Dropbox account or something and I want to, I want to process it with AI?
Chris Lacinak: 28:01
Well, I think there’s, I think about it in a few ways.
One is, are people prepared? And here let’s think about logistics and technology.
They have their files in a given place.
They know what they know, what they want to do.
They can provide access, they can do all those things.
But the other is kind of policy-wise: leveraging the outputs of something like Wasabi
AiR to be able to really put it to use in service of their mission and providing access,
preservation, whatever those goals are.
I guess I’m wondering about readiness on both those fronts.
Do you, do you see that as a challenge or do you find people are diving in whole hog
here?
What do you think?
Aaron Edell: 28:40
I think, I think people are diving in.
I think we’ve really reached the point now where I do think it’s kind of, it’s a combination of the accuracy and the sort of cost to do it.
Because if it’s not very accurate and very expensive, that’s a problem.
If it’s very accurate and very expensive, it’s still a problem.
But but we’re at a point now where we can do it inexpensively and accurately.
And so I’ll mention that even just today, which, which, you know, by the time folks
listen to this, it’ll probably be a few weeks in the past now or so.
But Fortune magazine published a post about Wasabi AiR and the Liverpool Football Club.
And they, what I, what I love is that they make it very clear, right?
Their use case, which is we want our, the fans of the football club to be able to go
onto an app and just watch highlights of, you know, Mohamed Salah crushing Man U, right?
Manchester United.
And just get it like a quick 30 second compilation of like all the goals or whatever, you know,
just just fan engagement.
And in order to accomplish that, you know, Liverpool has unbelievable amounts of video
content from every game from multiple cameras.
They’re, you know, they’re, I think people imagine that there’s there’s like a whole
bank of editors sitting around with nothing better to do.
It’s not really true.
They don’t, they don’t have that many editors.
And these editors have to, you know, create content from all of this library and archive
constantly, and basically Wasabi AiR makes them do it so much faster that they can actually have an abundance of content ready for their app, which helps increase fan engagement.
And it’s that simple for them.
And I like the quote in the article from Drew Crisp, who is their senior vice president of digital; it says that that’s how they think about applying AI.
You know, we want to solve this use case.
We want to be able to create this 30 second compilation of all these goals.
Maybe it’s against a specific team or whatever the context is.
But we can’t sit around for hours and hours and hours watching every single second and
maybe manually logging things or tagging things or, you know, it’s always like, it’s always
a, it always happens after the fact, right?
You’ve recorded it all.
Okay, now it’s on a, it’s safe on, it’s on a disk.
I’ve got all my footage.
And then maybe you, you know, you in the file name, you put the team you played, but that’s
not enough metadata.
So, yeah, so I think they are ready.
I think, you know, it’s, it’s, um, people have to think about it the right way.
You know, this is a productivity boost.
This is a time-saving boost.
This is a, what hidden gems do I have in my archive boost?
You know, that latter, that latter use case, by the way, is, is really spectacular, but
very hard to put a number to and hard to measure.
You know, how much money do I make from the hidden gems?
The things that I didn’t even know I had in the first place.
Chris Lacinak: 31:57
And I, and sports organizations are interesting.
They’ve always kind of been at the leading edge, I think when it comes to, um, creation and utilization of metadata in service of analytics, statistics, fan experience.
I mean, we think about Major League Baseball was always doing great stuff.
NBA has done some great stuff.
I mean, it’s, and, and they have something going for them, which is a certain amount
of consistency, right?
There’s a structure to the game that allows there’s, there’s known names and entities
and things.
So, um, so that does make a lot of sense.
And it seems like it’s just ripe, uh, for, for really making the most of something like
Wasabi AiR.
I can just see that being a huge benefit to, to organizations like that.
Um, are you seeing, can you give us some examples?
Are there other, um, maybe non-sports organizations or use cases that are using Wasabi AiR?
Aaron Edell: 32:53
Yeah, definitely.
Um, I’ll give you one more sports one first though, because there there’s, you know, the, the use case I gave you is, is about creating content and marketing content for channels
and for consumption of consumers.
But they also are, you know, especially teams, individual teams are very brand heavy in the
sense that they, you know, they seek sponsorship for logo placement in the field or the stadium
or whatever.
And AiR is used by sports teams to look at that data and basically roll up, hey, the
Wasabi logo appeared in 7% of this game and the Nike logo appeared in 4% of this game.
And then you can go to Nike and say, Hey, do you want to be 7%?
You should buy this logo stanchion or whatever.
So really interesting use cases there, but non-sports use cases.
So one of my all time favorites is a, uh, a company called, uh, Video Fashion and Video
Fashion has a very large library.
I think it’s on the, to the tune of 30,000 hours of video footage of the fashion industry
going back as long as video can go back.
And they, um, and, and a lot of this was on videotape and needed to be digitized.
And I think they still have a lot that still needs to be digitized, but they used Wasabi
AiR back when it was called Curio, um, basically to kind of, you know, auto tag and catalog
these things so that when they get a request for, and they licensed this footage, right?
So this is how they make money.
This is how they monetize it.
This is why I like this use case because it’s a very clear cut monetization use case where
they sell the, you know, they licensed this footage per, I want to say per second probably.
And they, and so Apple TV Plus came to them one day as just an example and said, Hey,
we’re making a documentary.
It’s called Supermodels.
Do you have any footage of Naomi Campbell in the nineties?
It took them like five seconds, right?
To bust out every single piece of content they have where not only does Naomi Campbell
appear, but her name is written across the screen.
Somebody talks about her, right?
So it, it’s literally like a couple seconds.
Yeah.
And then they just, they license it, right?
So they, they get all this revenue and have very little cost associated with servicing
that revenue.
And that’s exactly the kind of thing we want Wasabi AiR to empower.
You know, it’s time is money, my friend.
Yeah.
We’re saving time.
Chris Lacinak: 35:21
I love, one of the things I really like about Wasabi AiR is that it allows you to do sophisticated
search where you can say, I want to see Naomi Campbell. I want it in this geographic location.
I want it at this facility and wearing this color of clothing or something, right?
Like you can put together these really sophisticated searches and come up with the results that
match that, which I think is just fantastic.
I think that is, that is the realization of what the ideal vision is for being able to
search through audio visual content in the same way that we search through Word documents
and PDFs today.
I mean, that’s, that’s, that’s fantastic.
I’d love to dig into like, let’s dig, let’s make this a little bit more concrete for people.
We haven’t really talked about exactly what it is.
We’ve got this high level description.
But let’s jump in a little bit more.
So, so folks that are going to use Wasabi AiR would be clients that store their assets
in Wasabi, in Wasabi storage.
Is that a true statement?
Aaron Edell: 36:15
Yes, they, they can be existing customers or, you know, new customers. But yes, you need to, you need to put your stuff in Wasabi storage.
Chris Lacinak: 36:23
You’ve got your assets in Wasabi storage.
How do you turn Wasabi AiR on? Is it something that’s in the admin panel?
How does that work?
Aaron Edell: 36:31
Not yet.
I mean, that is, that’s where we’re working towards. Right now, you reach out to us, you know, reach out to your sales representative or,
you know, honestly, on our website, I think we’ve got a submission form, you say, I’m
interested, this is how much content I have.
And you don’t have to be a Wasabi customer when you reach out, right?
Like, we’ll help you sort that, sort that out.
But essentially, when we will, we’ll just, we’ll create an instance for you of Wasabi
AiR.
And when we do that, we’ll attach your buckets from your Wasabi account, and it’ll start
processing and basically, you’ll get an email or, you know, probably an email with a URL
and credentials to log in.
And when you click on that URL and log in, you’ll have a user interface that looks a
lot like Google, right?
It’s, it’s, there’s, you know, some buttons and things on the side, but essentially, right
in the center is just a search bar.
And we want it to be intuitive, of course, obviously happy to answer questions from folks,
but you should be able to just start searching, you know, we’ll be processing the background
and maybe you want to wait for it to complete processing, it’s up to you, but you can just
start searching, and you’ll get results.
And those results will sort of tell you, you know, some some basic metadata about each
one, there’ll be a little thumbnail.
And then let’s say you search for the word Wasabi.
And maybe you specified just logos.
I just want where the logo is a Wasabi, not the word or somebody saying Wasabi.
When you get the search results, let’s say you click on the first one, you’ll have a
little preview window and you can play the asset if it’s a video or audio file, right?
We have a nice little, you know, proxy in the browser.
And then you’re going to see all this metadata that’s all timeline, timecode accurate along
the side.
And you can kind of toggle between looking at the speech to text or looking at the object
tags, and then on the bottom will be a timeline kind of like a nonlinear editor, be this long
timeline and your search term Wasabi for the logo, you’ll see all these little like kind
of tick marks where it found that logo.
So you can just click a button and jump right to that moment.
And what I like about that is so let’s say, let’s say in the use case, you’re trying to
you’re trying to quickly scan through some titles for bad words, or for nudity or violence
or something like that.
Those, you know, those things will show up and you can just in five seconds, you can
just, you know, make go through them and make sure they’re either okay or not, right?
Like sometimes, for example, it’ll, you know, it’ll give you a false positive.
That’s just what happens with machine learning.
But it doesn’t take you very long.
In fact, it takes you almost no time at all to just clear it and just, you know, go through
and then if you want, you can even edit it and just remove it or add a tag or something.
So, hopefully that gives a good picture.
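A minimal sketch of the kind of compound, timecoded search described above, run against a hypothetical metadata index (a list of records shaped like the example earlier in this post). The field names and query style are illustrative assumptions, not Wasabi AiR’s actual search API.

# Illustrative only: filter a hypothetical index of metadata records by type and label.
def search(index, required_labels, detection_type=None):
    # Return (object_key, start_timecode) pairs for objects whose detections
    # include every requested label, optionally restricted to one detection type.
    hits = []
    for record in index:
        dets = [d for d in record["detections"]
                if detection_type is None or d["type"] == detection_type]
        if set(required_labels) <= {d["label"] for d in dets}:
            hits.extend((record["object_key"], d["start"])
                        for d in dets if d["label"] in required_labels)
    return hits

# e.g. "only logo detections, only the Wasabi logo":
# search(index, ["Wasabi"], detection_type="logo")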
Chris Lacinak: 39:19
Yeah, so, well, I’ll ask this question, because Wasabi is so transparent about pricing.
You’ve mentioned $6.99 per terabyte. Is there transparency on that level yet with AiR?
Or is this still something that’s in motion?
Or?
Aaron Edell: 39:34
Yeah, we’re still working on it.
But we did come out with pricing for NAB; we’re calling it the NAB show special.
So you know, get it while it’s hot, I guess, because we probably will have to change it.
But it’s just $12.99 a terabyte per month.
So think of it almost like a different tier of storage, although, you know, it’s, it’s
the same storage, it’s just that you have now all this indexed metadata.
Chris Lacinak: 39:58
And is that $12.99 per month on top of the $6.99 per month? Or is that inclusive, so $12.99 total?
Aaron Edell: 40:05
Exactly.
Yeah, which is still cheaper than I think the 20 or 30 bucks per terabyte per month for just the storage for some of the hyperscalers.
So you know, even even if you didn’t use air, and you were just paying for the storage,
it’s still a lot, a lot less expensive.
And there’s no egress and no API fees and all that.
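For a quick sense of scale, here is the arithmetic on the per-terabyte figures mentioned in this exchange. The 100 TB library size and the $25/TB hyperscaler rate are assumptions for illustration, and real hyperscaler bills would also add egress and API request fees, which are left out here.

# Monthly cost at the quoted per-terabyte rates for an assumed 100 TB library.
tb = 100
print(f"Wasabi storage only:      ${tb * 6.99:,.2f} / month")
print(f"Wasabi storage + AiR:     ${tb * 12.99:,.2f} / month")
print(f"Hyperscaler storage only: ${tb * 25.00:,.2f} / month (assumed rate, before egress/API fees)")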
Chris Lacinak: 40:23
Yeah.
So, I mentioned the project that I was working on before called AMP, where we came up with the term MGM, which stands for metadata generation mechanisms.
And this is to say speech to text or object recognition or facial recognition, as would
all be things we called MGMs, right?
Do you have a term for those?
What do you call those
so I can refer to them the way you do?
So this is so funny you ask, because when we started GrayMeta, we had so
much fun trying to come up with that term.
And the original product was called haystack.
Because we thought you’re going to find the needles and I like that.
Right?
Chris Lacinak: 41:01
I like that.
Aaron Edell: 41:02
Yes.
So how do you find a needle in a haystack? Bring a big old magnet.
So we called those things magnets at first.
You’d have a magnet for speech to text or whatever.
I think they were still called magnets by the time I left.
When I came back, we were calling them harvesters, or maybe, gosh, extractors, maybe extractors.
But since we joined Wasabi, I think we’ve just been referring to them as models, honestly. Not all of them are machine learning models, but you know.
Chris Lacinak: 41:36
Well, just so we have a term for this discussion, I’ll use the term models then to talk about that. So can you tell us what models you have built into AiR right now?
Aaron Edell: 41:46
Yes.
So right now, we have speech to text, which is outstanding and understands, I think, 50 languages, and will even translate to English as well as do a transcription in the native language.
We have an audio classification engine, which, you know, basically tries to tell you what
sounds it hears, you know, coughing, screaming, gunshot, blah, blah, blah.
We have a logo and brand detection system, which we just trained ourselves from scratch, and it’s very good, actually. I’m really surprised, because when we were doing this before, it was a really hard problem to solve.
It still is, but we actually got it working.
Then we have an object recognition model, which will essentially try to tag things that it sees: lamp post, shirt, beard, that kind of thing.
And then we’ve got OCR, optical character recognition.
So words that appear on the screen get turned into metadata.
And then we’ve got what we call technical cues.
So this is very specific to the M&E industry, but bars and tone, slate, titles, that sort
of thing.
And then faces and people.
So, you know, we will detect faces, and then, kind of like how in iPhoto on your phone, it’ll say, who’s this?
Right.
Here’s a bunch of photos of this person.
Who is this?
Same thing.
Right.
We group unknown faces together.
You can type in who they are.
And then going forward, you basically have names associated with faces.
Right.
So it’s a very, very simple system.
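As a rough illustration of the grouping Aaron describes, here is a minimal sketch that assumes face embeddings have already been extracted by some face-recognition model. The embeddings, the greedy grouping strategy, and the threshold are all made up for the example; this is not GrayMeta's or Wasabi's implementation.

import numpy as np

# Toy face embeddings; in practice these come from a face-recognition model.
embeddings = {
    "frame_0012": np.array([0.90, 0.10, 0.00]),
    "frame_0045": np.array([0.88, 0.12, 0.02]),
    "frame_0230": np.array([0.10, 0.90, 0.05]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_faces(embs, threshold=0.95):
    """Greedily group detections whose embeddings are close to a group's seed."""
    groups = []  # each group: {"members": [...], "seed": vector}
    for frame, vec in embs.items():
        for g in groups:
            if cosine(vec, g["seed"]) >= threshold:
                g["members"].append(frame)
                break
        else:
            groups.append({"members": [frame], "seed": vec})
    return groups

# Unknown groups get labeled once ("Who is this?") and the name then applies
# to every member, which is the workflow described above.
groups = group_faces(embeddings)
labels = {0: "Chris Lacinak"}
for i, g in enumerate(groups):
    print(labels.get(i, f"unknown person {i}"), "->", g["members"])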
Chris Lacinak: 43:31
And if I remember right from the demo, you can also upload images of individuals that you know are going to be in your collection and identify them proactively. Right.
Like for myself, I could upload three photos of myself, say this is Chris Lacinak, and then it’ll use that.
You can do it ahead of time.
Aaron Edell: 43:52
Exactly.
Yeah. Yes.
So if you know the people ahead of time, you can do it too, which is really useful.
Chris Lacinak: 43:57
I like that feature.
I mean, another thing that is similar with AMP is the concept of using non-audiovisual materials to train models that describe audiovisual objects.
So the OK, that’s great.
And are all of those models things that Wasabi has built and that are owned by Wasabi?
Or are you connecting to other providers of AI services?
Aaron Edell: 44:24
We built all our own models, all homegrown.
This was my big change when I came back to GrayMeta, because I had experience doing it.
I knew it was possible and I didn’t think that relying on third party models was a good
idea.
I mean, obviously, for intellectual property reasons, but also it’s just really expensive
to do it that way.
We wanted to make it, I don’t want to say cheap, but we wanted to make it economical for people.
Right.
Because that was a major barrier.
If you are, you know, a large library, you could have millions of hours of footage.
And if you’re paying the hyperscalers, which charge like 50 bucks an hour in some cases, I mean, what are you going to do, spend 120 million dollars on AI tagging? Probably not.
So we built all our own.
And the reason we were able to do that, and by the way, like, don’t think you can just
go on to Hugging Face and pull down a model off the shelf and just pop it into production.
I have seen that you can’t do that.
And the reason is because, you know, a lot of those models are not trained on media and entertainment.
They’re trained on other real-world things, and they don’t work.
Their accuracy drops when you’re talking to people like the Library of Congress, or, you know, pick your broadcaster, pick your network, pick your post house.
When you’re talking about media and entertainment content, they need to be trained for that.
And then you got to build in pipelines and we had to do all kinds of stuff to make it
efficient, because there’s a lot of really cool machine learning out there that’s very
advanced but very expensive and compute intensive to run.
And that’s also not going to work for customers.
They can’t spend 50 bucks an hour on their machine learning tagging.
It’s a no go.
So we’ve put years of experience into our models, and also into understanding what to expect on the other end.
There’s a guy who works for me, Jesse Graham.
He’s been doing this for so long that you can give him any machine learning model now, and he knows the pieces of content that are going to throw it for a loop, and he can see the results and he knows customers are going to either be OK with this
or not.
Chris Lacinak: 46:43
Yeah.
Aaron Edell: 46:44
And that experience is so valuable to us because it lets us quickly iterate.
It lets us go to market with production models that actually work for customers. They’re not just cool demos.
You know, they’re not just kind of fluffy fun things.
They’re real.
They have real value.
And that’s why we spend so much time building our own models.
Chris Lacinak: 47:04
Do you have feedback or requests for the DAM Right podcast?
Hit me up and let me know at [email protected]. Looking for amazing and free resources to help you on your DAM journey?
Let the best DAM consultants in the business help you out
at weareavp.com/free-resources.
And finally, stay up to date with the latest and greatest from me and the DAM Right podcast
by following me on LinkedIn at linkedin.com/in/clacinak.
And let me ask, related to that, talking about training it based on media and entertainment broadcast content.
Have you done testing on archival content, stuff that’s not necessarily production broadcast quality, that’s highly variable, maybe lower quality audio and video? How is it performing on that sort of content?
Aaron Edell: 48:00
Surprisingly well, actually, I’ll give you an example.
So Steamboat Willie, which is now in the public domain, you know, practically an ancient piece of animated content featuring, I think, the original appearance of Mickey Mouse, although I don’t think he was called Mickey Mouse back then.
Anyway, it correctly identifies the boat as a boat.
The object recognition, surprisingly, is able to tag things that are animated and in black and white.
I have also seen it pick up logos that are almost undetectable by human eyes.
So we had so much fun showing this off at NAB, because Wasabi sponsored the Fenway Bowl recently.
So we had the Fenway Bowl footage, and we ran it through Wasabi AiR.
And there’s obviously a ton of logos everywhere.
And there was this one logo, I think it was Gulf Oil or something like that.
And I would show it.
So I’d pull it up on the screen and I would click and jump to that moment.
And I would say, OK, everybody who’s watching me do this demo right now, tell me when you
see the Gulf Oil logo in the video.
And they’re like squinting and, you know, most people don’t see it.
But if you kind of expand it and zoom in, it’s just there, teeny tiny little thing in
the background.
So, yeah, I’ve been really pleased with how far machine learning has come in terms of the research that’s gone on behind it.
You know, the embeddings and weights that people are open sourcing and making available.
It’s just extraordinary.
Chris Lacinak: 49:38
Yeah.
Let me dive into the weeds a little bit here about the models and things. One of the things that we developed in AMP, and I know you had to have thought about this, so I’m curious where you’ve arrived and what you’re thinking about for the future, is the concept of workflows.
It sounds like, and correct me if I’m wrong once I’m done saying this, I have my videos and my audio and things stored in Wasabi.
I turn on Wasabi AiR and it runs these models.
It sounded like seven or eight-ish models, I think, in parallel.
But let’s say that I wanted to create something that does speech to text and then runs it through named entity recognition, sentiment analysis.
Right.
I want to take outputs of one model, plug them into another model, and create workflows instead of just getting the output of a single model.
Where are you at?
Does that exist today?
Is that on the horizon?
What’s that like?
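To make the kind of chained workflow Chris describes concrete, here is a minimal sketch. Every function and the sample transcript are placeholders standing in for real models or services; this illustrates the idea of feeding one model's output into the next, not any actual AMP or Wasabi AiR API.

# Minimal sketch of a chained workflow: speech to text -> named entity
# recognition -> sentiment. All functions below are stand-ins for real models.

def speech_to_text(media_path: str) -> str:
    return "Wasabi announced a new service at NAB in Las Vegas."  # placeholder transcript

def named_entities(text: str) -> list[tuple[str, str]]:
    # Stand-in for an NER model: returns (entity, type) pairs found in the text.
    known = {"Wasabi": "ORG", "NAB": "EVENT", "Las Vegas": "LOC"}
    return [(name, kind) for name, kind in known.items() if name in text]

def sentiment(text: str) -> str:
    return "positive" if "announced" in text else "neutral"  # toy heuristic

def run_workflow(media_path: str) -> dict:
    transcript = speech_to_text(media_path)           # model 1
    return {
        "transcript": transcript,
        "entities": named_entities(transcript),       # model 2 consumes model 1's output
        "sentiment": sentiment(transcript),            # model 3 consumes model 1's output
    }

print(run_workflow("interview_001.mov"))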
Aaron Edell: 50:44
Yeah.
So I’ve experimented with that in some way or another at actually several different companies. In fact, I think at Veritone, we even had like a workflow builder that you could do
where you could sort of drag nodes in and go output from this to there.
I think the way that we’re thinking about it today is we just don’t want you to even have to do that.
So let’s pick apart why you’re doing that.
So named entity recognition based on speech to text.
It’s a really good example.
Like maybe I want to search by places.
Speech to text, particularly the one that we’ve developed, is shockingly good at proper nouns and proper names for things.
This is where speech to text has always fallen down in the past.
But it’s just text.
So the way we think about it is instead of you having to come up with that use case for
that workflow, we’re just going to build that in.
So when you’re running product and you’re thinking about, “Okay, how do I solve these
problems?”
This is a great thing I learned from working at Amazon: just put yourself in the customer’s shoes, be customer obsessed.
Think about, okay, the editor is sitting down, they’ve got to do their job.
They want to get shots of the Eiffel Tower or something, or maybe, I don’t know, I’m trying to think of a better example of that, because if you search for Eiffel Tower, it just shows up.
But named entity recognition is like companies or something like that.
Maybe I’m looking for people.
Okay, I got it.
When people are talking about Wasabi the company and not wasabi the sushi sauce, right?
I want to differentiate.
So normally, if I search for the word Wasabi, obviously, all references will show up.
We are going to give you an experience where that is just seamless, right?
It’s a new option.
Just search for Wasabi the company or I’m doing named entity recognition on the speech
to text.
That’s how we might solve it in the back end.
We may solve it some other way.
The whole machine learning pipeline thing is what’s really evolving.
For example, our audio classification and speech to text are one multimodal model.
So there’s this kind of newer world of these end to end neural networks that are really
good at doing different things.
Instead of in the old way, which is what you described where we would kind of have the
output of one and go and make it be input in another, that kind of ends up being like
a Xerox of a Xerox of a Xerox sometimes.
So we’re building kind of more capabilities around combining these things into one neural
network so that A, it’s way more efficient and B, it’s more accurate.
So that you’re going to see from us in the coming months, a lot of innovations around
that and with the express goal of doing what you described, which is just better search,
better, more contextual, more accurate search.
Chris Lacinak: 53:56
Well, I have to hand it to you.
I mean, I think what you gain with sophistication, you kind of add a burden of complexity. And right now I’ve seen the demo of Wasabi AiR and it is elegant in its simplicity.
I can totally understand aiming for simplicity.
That’s going to be a better user experience.
So yeah, that’s interesting.
It’d be good to maybe, and I don’t want to bore our listeners with that, have a sidebar sometime offline where we can talk about that.
And another question in the weeds here. One of the things that we grappled with in the AMP project, and that I’d love to know what you’re thinking about or how you’re managing, if you’re able to share, is on the front of processing efficiency, right?
The concept of running, for instance, speech to text where there’s nobody talking, it’s
music or BPM analysis on music where there’s somebody talking, right?
Facial recognition where there aren’t people.
You got the idea here, but trying to really only feed segments of relative things to models,
using your term in order to create more efficient and cost-effective processing.
Is that so negligible, or so processor intensive, that it doesn’t really pay off? Or is that an actual model, and now I’m using the word model in a different way, that works?
Aaron Edell: 55:26
I know what you mean.
Yeah, I think it does add up. So in the true FinOps cost optimization fashion, once you take out the big things, you go after
the little things because they just add up, right?
If I can reduce some fee that’s one cent or something to half a cent, that in theory would
add up.
So it’s worth it to think about it.
We do have some of that.
So for example, you mentioned a really good example of that, which is don’t run speech
to text if there’s nobody talking.
So we actually have a separate model that we call, I think it’s called the voice activity
detector or something like that.
So this is what I mean.
It’s such a good example of what I was trying to convey, which is that you have to think
about these things when you’re doing this in production.
And these are the things that drive efficiency to make it actually viable for customers to
pay for and use.
So when we first started building our own speech to text, we just plopped it in, we
ran it and my goodness, it was so slow.
And the accuracy was great, but it just was not going to work.
Over time, we built the pipeline better.
We introduced VAD that greatly improved the accuracy of the timecode markers for the speech
to text, as well as improved the overall efficiency.
I mean, I don’t want to get in trouble for this, but I think we improved the efficiency
by a hundred times.
Think about that.
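Here is a minimal sketch of the idea just described: run a voice activity detector first and only send the speech segments to the transcription model. The segment data and both functions are placeholders; this is not GrayMeta's or Wasabi's actual pipeline.

# Only transcribe the parts of the timeline where a voice activity detector
# (VAD) says someone is speaking. Everything below is illustrative.

def detect_voice_segments(media_path: str) -> list[tuple[float, float]]:
    # Stand-in for a VAD model: (start_seconds, end_seconds) of detected speech.
    return [(12.0, 48.5), (120.0, 131.2)]

def transcribe(media_path: str, start: float, end: float) -> dict:
    # Stand-in for speech to text on one segment; keeps the original timecodes.
    return {"start": start, "end": end, "text": "..."}

def transcribe_speech_only(media_path: str) -> list[dict]:
    segments = detect_voice_segments(media_path)
    return [transcribe(media_path, s, e) for s, e in segments]

results = transcribe_speech_only("archive_tape_017.mov")
total = sum(seg["end"] - seg["start"] for seg in results)
print(f"Transcribed {total:.1f} seconds of speech instead of the whole file.")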
Chris Lacinak: 57:00
Yeah.
Aaron Edell: 57:01
That’s a huge, huge difference.
And that’s just basically trial by fire in some ways. I mean, I believe in iterative product design.
I don’t like to sit around for six months and try to build the perfect product.
I like to build little things and iterate quickly and learn.
And that was one of the first things that we learned when we first started doing speech
to text.
And we just iterated it and made it faster, faster, faster until we got to this super
efficient state.
So yeah, for an in-the-weeds question, that was a really poignant one, because it is where I think the value of AiR comes from, and perhaps other systems that are trying to accomplish the same thing. When you build your own machine learning, there’s a lot of things you’ve got to think about, and it’s hard to know what they’re going to be up front.
And it’s taken us years to get it right.
Now, it doesn’t necessarily mean it’ll take everybody years.
You can always learn, but it’s a trial by fire.
Chris Lacinak: 57:56
It makes me think of Formula One racing with hundreds of little tweaks to these vehicles
to make things just get a 10th of a second faster or something. Right.
Let me jump over to the questions around ethics and AI.
And I’m going to break that into a couple categories to kind of go off of here.
I guess, you know, when it’s come up, typically there’s one around bias: how do the AI models perform across a variety of contexts?
Another is around intellectual property.
Like here we think of ChatGPT now, where I can buy the business license in which the content I’m feeding it is not going to train the model, right?
As opposed to the free or the cheap one, where the data I feed it goes to train the larger model.
Can you talk about how you are thinking about and acting on those sorts of ethical questions
today?
Aaron Edell: 58:57
Absolutely.
You know, for me, machine learning is a means to an end, not an end in itself. So I kind of like to use the analogy that, you know, I don’t go around talking about
how Wasabi AiR is built on electricity, right?
Like that doesn’t make sense.
Electricity is a technology that we kind of take for granted.
Machine learning solves the real problem that I’m trying to solve, which is I don’t want
people to have to lose content in their archives.
I want people to be able to find stuff quickly and be able to get it out the door.
And I want editors to just have a wonderful life, not be miserable.
And so I think about machine learning in that sense.
I don’t think about it as a, hey, we’re going to try and scrape as much data and make the
best overall models and make money by selling machine learning, if that makes sense.
So I think your motivations for your ethical use of AI start there.
The bias thing is really interesting, and I have to hand it to my Machine Box co-founders, who I mentioned before, Mat Ryer and David Hernandez.
David Hernandez, brilliant computer scientist, really taught me a lot.
And one of the things that he pointed out to me, and this was back when we were doing Machine Box, in, I don’t know, 2017, is how embeddings turn words into vectors.
And this is important because for the listeners who don’t know what that means, basically
take the word frog and take the word toad.
Now instinctually as humans, we know that those are a lot closer together in concept
than the word frog and the word curiosity.
So vectors attempt to kind of do the same thing.
We take basically every word, and this is, you have to picture a thousand dimensions,
right?
It’s not a three-dimensional thing, it’s like thousands of dimensions.
But basically in these thousands of dimensions, we can do math to figure out that the word
frog and the word toad are very close together.
And this helps us in search.
So if I search for toad, I get pictures of frogs because they’re very relevant.
Now those systems, a lot of these embedding and vectorization systems were trained, at
least back in the day, and I’m pretty sure this has been addressed, but they were trained
on news articles and written material from humanity ranging all the way back.
So what happened was that if you actually look at the distance between, for example, the word doctor and the word man, it’s much closer than between doctor and woman, and the inverse was true for nurse and man versus nurse and woman.
Now that’s a bias.
That bias came from the training data, which is again, I think was a lot of news articles
written over the last 70 years or something like that.
So what you end up with is a machine learning system that’s just as biased as humans are
or have been in the past.
And they don’t necessarily reflect our inclusive nature and how we want our society to exist
where we don’t want that bias.
That’s not something we want in our machine learning because we’re using our machine learning
to solve problems in the real world and it doesn’t reflect the real world.
So I think about that a lot and I think about how can we improve our machine learning.
Now it’s the training data.
It’s not the machine learning models.
It’s the training data.
So we as humans have to go back and fix that in the training data and do our best to think
of those things ahead of time.
And there’s ways, there’s tools to process your training data in certain ways and look
at patterns and things like that.
And you can detect that kind of thing.
So I’m always thinking about that and I always want to make it better.
And it’s probably an ongoing challenge that’s never going to really end, but something that
we have to pay attention to.
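To make the frog/toad and doctor/nurse points concrete, here is a minimal sketch with made-up low-dimensional vectors standing in for the thousands of dimensions a real embedding model learns. The numbers are fabricated purely to reproduce the patterns described above, not taken from any real model.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Part 1: semantic closeness. Hand-picked toy vectors just to show the math.
words = {
    "frog":      np.array([0.90, 0.10, 0.05]),
    "toad":      np.array([0.88, 0.12, 0.07]),
    "curiosity": np.array([0.05, 0.20, 0.95]),
}
print("frog~toad:     ", round(cosine(words["frog"], words["toad"]), 3))       # near 1.0
print("frog~curiosity:", round(cosine(words["frog"], words["curiosity"]), 3))  # much lower

# Part 2: auditing for the biased pattern described above. These vectors are
# fabricated so that the bias shows up in the distances.
emb = {
    "doctor": np.array([0.7, 0.3]),
    "nurse":  np.array([0.3, 0.7]),
    "man":    np.array([0.9, 0.1]),
    "woman":  np.array([0.1, 0.9]),
}
for job in ("doctor", "nurse"):
    gap = cosine(emb[job], emb["man"]) - cosine(emb[job], emb["woman"])
    print(f"{job}: closer to {'man' if gap > 0 else 'woman'} by {abs(gap):.3f}")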
Ethically, like any technology, any new technology, what I’m about to say could be applied to
nuclear physics.
It could be applied to electricity.
It could be applied to taking metal and making it sharper.
Don’t use it for bad things.
Your intentions, like I mentioned before, my intention is to make people’s lives at
their jobs, in particular media and entertainment editors and marketing people and these professionals,
I don’t want them to have to sit around trying to find stuff.
I want to make them immediately find the thing they’re looking for and deliver the content
and the value that they want.
That’s my purpose.
If your purpose is to go around electrocuting people or dropping nuclear bombs or stabbing
people, you’re going to use these technologies in the wrong way.
So I don’t mean to say that we all have to just be responsible for our own actions.
I think we do, but there are also the rules that we come up with; scientists have rules around bioengineering, for example.
There are laws that say you can’t patent certain molecules, you can’t patent DNA.
Those things are being challenged all the time.
But I do think that we can collectively as a society agree that we’re not going to use
AI for these purposes, even though some people will.
You can’t legislate bad guys out of existence.
They will be there and they will test it.
But I think the more educated we are about it, the more we can tackle it.
But I don’t think that means we have to stop using AI or ML or we can’t innovate and we
shouldn’t innovate and we shouldn’t see where this can go.
I think that’s equally as dangerous.
Chris Lacinak: 64:43
I’ve got a question that’s a little bit out there, but if you don’t have a response to
this, I don’t know anybody who does. So you’re the best person I can think of to ask this question.
And that is about the prospect of a future in which the machine learning models, and
here I’m not talking about models as in things that generate metadata, but the machine learning
model as in the thing that you train over time, are interoperable.
Is there a future in which I go to Wasabi as an organization, my data is there, and I spend years training and cultivating it, not just the data or the output metadata, but the machine learning that we do over time, training the models, giving it feedback, and maybe triangulating that data, so that, God forbid Wasabi goes out of business in 20 years, I could take that and transfer it to another entity that has machine learning?
Is there a future in which such a thing exists or is that not even on the horizon?
Aaron Edell: 66:04
Well, today, I’m a very customer obsessed person.
I mentioned that already. And I think if I’m the customer, when I spend effort and time training a machine learning
model, let’s say in Wasabi AiR, which you can do, you can train it on people and soon
you’ll be able to train it on other things.
I’m putting my effort and my data into that.
I should own that.
And I believe in that.
So we segment that all off.
We don’t aggregate people’s data.
We don’t look at the training data and make our own models better.
You own it.
It’s your data.
If you trained it, it’s yours.
But I think that it would be hard, just the nature of the technology itself, it’s hard
to take all that training and shuffle it off somewhere else.
I mean, I guess in theory, there’s like embeddings and vectors and stuff like that and you could.
I think more likely over time, you won’t have to train it.
I think our models will get better at context.
They will be larger.
They’ll have more parameters.
But I also think that they’ll get more specific and I kind of like this agent approach that’s
kind of emerging where, let me put it this way.
I do not think that artificial general intelligence is anywhere near happening.
I mean, I think people will change their definition of what that means to kind of fit their predictions.
But I don’t think that we’re in danger of one very large AI model that just does everything
and takes over humanity and kills us all.
Or I don’t know, who knows, maybe they’ll do something wonderful, like help us explore
other planets, whatever.
I think what’s more likely is that we will get better at segmenting off specific tasks
and making machine learning models that are just very, very, very good at that and then
orchestrating that, which is kind of what Wasabi AiR does today.
But the need for training it is interesting, because if you had asked me this question back in 2017, I would have told you the rule of thumb in machine learning at the time, which is that your machine learning model should be trained on the data that it’s expected to run against.
You should not be able to tell the difference.
And this was kind of at the time when synthetic training data was emerging and you can’t beat
a human curated, really, really clean, really good data set.
You can’t beat it.
And today I think that that might be changing a little bit and that the need to train models
to be more specific or to train it on your own data is not heading up.
I think it’s probably going down.
In fact, we already see some of it a little bit.
Take, okay, great example, the Steamboat Willie example.
It used to be that you would have to train your object recognition system to recognize animated objects as kind of custom objects.
We have been experimenting with some machine learning that we haven’t put into AiR yet, but might at some point, where you don’t have to do that anymore.
In fact, it actually interprets your search in a different way.
Let me put it this way: it would process a picture.
Let’s say it takes a picture of the two of us talking, and I have a beard and you don’t have a beard.
And I send it to this system and process it.
Instead of coming back with brown hair, beard, blue shirt, microphone, right, this whole
list of things, it just sits there.
Then you ask it, is there a microphone in this picture?
Yes, there is.
Here it is.
Is there, and this is what I like about it because the words that we use can be very
different.
So is there a mustache?
Yes, there’s a mustache.
And it draws a line just around this part of my beard.
Instead of saying the whole thing is a beard, right?
It’s using an LLM to interpret the question rather than requiring custom training.
And it has a fundamental deep understanding of the picture in a way that we don’t understand
as humans, right?
It’s broken it down into vectors and things that are just basically math.
And when you ask it, is there a green shirt here?
It interprets your question and goes, okay, this vector over here kind of looks like a
green shirt.
I’m going to say there’s a 60% chance that that’s what it is and draw a bounding box
around it and there you go.
I think that’s the future.
I think that’s where we’re going.
Machine learning models that are specific, but way more contextual and understand images
and video and data in ways that we can’t, but can be mapped to concepts that we as humans
think about.
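Here is a minimal sketch of the query-driven behavior Aaron describes: match a text query against precomputed region embeddings of a picture and return regions above a confidence threshold. The regions, embeddings, threshold, and the text-embedding function are all illustrative placeholders; a real system would use a vision-language model to produce them.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend a vision-language model already produced one embedding per region of
# the picture, plus its bounding box (x, y, width, height). All values are toy.
regions = [
    {"box": (40, 60, 120, 180), "embedding": np.array([0.9, 0.1, 0.1])},
    {"box": (300, 80, 90, 90),  "embedding": np.array([0.1, 0.9, 0.2])},
]

def embed_text(query: str) -> np.ndarray:
    # Stand-in for the text side of the same model.
    return np.array([0.85, 0.15, 0.1]) if "shirt" in query else np.array([0.1, 0.8, 0.3])

def ask(query: str, threshold: float = 0.6):
    """Return (confidence, bounding box) for regions that plausibly match the query."""
    q = embed_text(query)
    hits = [(cosine(q, r["embedding"]), r["box"]) for r in regions]
    return [(round(score, 2), box) for score, box in hits if score >= threshold]

print(ask("is there a green shirt here?"))
print(ask("is there a microphone here?"))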
Chris Lacinak: 71:22
And somewhat related, kind of pulling several of these strings together, is the question around humans in the loop. We’ve done a lot of work with the Library of Congress and Indiana University, and that AMP project kind of had humans in the loop at its core as far as these workflows go.
And some of that was quantitative.
It was about, for instance, taking the output in a given workflow, taking the output of
speech to text, reviewing it by a human, editing, correcting, and then feeding it back sort
of thing.
Some of it’s qualitative.
It’s about ethics.
There are some sensitive collections that need to be reviewed and make sure that they’re
described properly and accordingly and things.
And I guess I wonder, do you think about that in the work that you’re doing?
One, it sounds like some of what you just said makes it sound like the quantitative
aspect of that is becoming less and less important as things improve dramatically.
But I wonder, do you think about humans in the loop with regard to what Wasabi offers,
or do you think about that as something that’s up to the user post-Wasabi processing to manage
themselves?
Aaron Edell: 72:31
No, I think about it all the time.
In fact, one of the bigger initiatives that we have, and we are still working on it very much, is a frictionless human in the loop process with your data.
So in spite of what I just said, I still think that you need to be able to teach it things
based on your data and correct it, and it should learn.
We do that with faces today, for example.
That’s a really good example of this, but it’s solved.
Where we want to take it is some of the other things you mentioned.
So improving proper noun and proper name detection, improving the way it detects certain objects
and things in your data, because maybe you’re NASCAR or something, and you just have very specific content with objects that are, in the broader perspective, kind of strange, but from your perspective are very set and usually the same.
You should be able to use your own data and say, “Yeah, that’s what this is.
This is a tire.
This is this car.”
And we actually do have it in the system.
We’ve just disabled it for now because I want to make it so seamless that you don’t even
really know what you’re … You don’t even really think about it as training machine
learning.
Just like … I really love the Apple Photos example.
They just do such a good job with faces.
I don’t know if you have an iPhone.
I’m sure Android does the same thing.
Just go in your photos and it’s like, “Hey, who is this guy?
Who is that?”
Brilliant.
It should be very similar.
“What is this?
I don’t know what this is.
Tell me what this is.”
So I think about that a lot.
I definitely see … There is just no better arbiter for accuracy in machine learning and
data sets than humans, ironically.
You have to, as a human, make some decisions.
For example, back in 2016 I thought, “I bet I could train a classification engine to tell if a news article was fake news or not fake news.”
Fake news was a big problem in 2016, so I went about trying to train it.
Basically that meant creating a data set of fake news and not fake news.
I wrote a lengthy blog post about the details, so I won’t reiterate it here.
What I ended up figuring out was that, as a human, I have to decide what is fake news.
How do I … Is it satire?
Is it factually incorrect information?
There’s all these subcategories.
I just had to figure out where to draw the line.
The machine learning ended up working best when the line I drew in the data set was bias.
What I was really doing was training a bias detection system.
So it was able to tell if this article was written in a biased way or an unbiased way
and rank it.
That journey for me was really telling about how data sets get made to train these machine
learning systems in the first place.
You really cannot afford to mess it up.
This is where the human in the loop question can become a problem you have to think about.
If I am surfacing, “Hey, what is this logo?” and you get it wrong and the next guy gets
it right five times, you’ve caused a problem in your machine learning because you now have
a dirty data set.
So you need to think about that.
How do I keep it clean?
How do I check that this work that’s been done is actually accurate?
That’s part of the reason why we’re spending so much time thinking about it is we want
to get that experience right.
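One simple way to protect a training set from the "one wrong correction" problem described above is to require agreement before a human label is accepted. This is a generic sketch of that idea, not how Wasabi AiR actually handles it; the detection IDs, vote counts, and thresholds are made up.

from collections import Counter

# Corrections submitted by reviewers for one detection (e.g. "what is this logo?").
# Only commit a label to the training data once enough reviewers agree.
submissions = {
    "asset_42/frame_0310/logo_1": ["Gulf Oil", "Gulf Oil", "Shell", "Gulf Oil"],
}

def accepted_label(votes: list[str], min_votes: int = 3, min_share: float = 0.6):
    """Return the winning label only when it has enough votes and enough share."""
    label, count = Counter(votes).most_common(1)[0]
    if count >= min_votes and count / len(votes) >= min_share:
        return label
    return None  # keep it out of the training data until there is consensus

for detection, votes in submissions.items():
    print(detection, "->", accepted_label(votes) or "needs more review")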
Chris Lacinak: 76:26
So that’s on the horizon, it sounds like.
That’s great. Look forward to seeing that.
And for users of Wasabi AiR, you have, as we mentioned, a user interface within Wasabi’s GUI, but are there APIs that can push this out to other systems?
If people generate the metadata in Wasabi AiR, can they push it to their DAM?
Aaron Edell: 76:49
Absolutely.
In fact, we’re in talks with several MAM systems right now. I think at IBC, which is in September, we’ll be able to announce some of them, but we want
people to do that.
The vision for Wasabi AiR and for Curio prior to the acquisition was always that this is
a sort of data platform with APIs.
In fact, our whole UI consumes our own APIs.
That was really important for us and that was a wise decision that was made before I
came back to GrayMeta because at the end of the day, you know this, in the DAM world,
in the MAM world in particular, man, you can go in a lot of directions with a MAM.
You can get bogged down in the tiny features and all of the requests that customers want.
And I think that’s why so many MAMs today are kind of like rubber band balls.
They have a lot of features and they’re all different and they all have different buttons
and they can be very confusing.
It’s really hard to keep something simple when you’re sort of serving all of those use
cases and trying to build a thousand features, one for each customer.
I don’t want to be in that business.
So I think we’ve got a great tool that gets you what you need off the ground right away.
Some customers have described it as a great C-level tool as well.
We just need some insight into this archive for our managers or for these certain groups
of people.
But the people who use MAMs and DAMs and really use them, they should have access to the metadata
too.
And so they will.
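As a rough sketch of the integration pattern being described, the snippet below pulls metadata from one system and posts it to a DAM's ingest endpoint. Both URLs, the token, the payload shape, and the auth scheme are entirely hypothetical placeholders, not Wasabi AiR's or any MAM's actual API.

import json
import urllib.request

# Hypothetical endpoints; substitute your source system's and your DAM's real APIs.
SOURCE_URL = "https://example-metadata-source.invalid/assets/asset-123/metadata"
DAM_INGEST_URL = "https://example-dam.invalid/api/assets/asset-123/metadata"
API_TOKEN = "REPLACE_ME"

def fetch_metadata(url: str) -> dict:
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def push_to_dam(url: str, metadata: dict) -> int:
    body = json.dumps(metadata).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    metadata = fetch_metadata(SOURCE_URL)
    print("DAM responded with HTTP", push_to_dam(DAM_INGEST_URL, metadata))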
Chris Lacinak: 78:27
Yeah.
Well, let’s talk, I think what I see when I look at Wasabi AiR is a blurring of the lines between what has been storage and DAM and MAM, but also between storage and other
storage providers that offer AI and ML tools.
Right?
So I’d like to, let’s touch on each of those for a minute.
Wasabi AiR brings to the table something that is in many ways, not new, right?
Google Cloud and AWS, they have a suite of tools that you can use to process your materials,
but it is new that you turn on the switch and it does it automatically.
You don’t have to go deploy this tool and that tool and put together workflows and things
like that.
Is that the main difference between, is that how you would describe the difference between
what Wasabi is doing today and what AWS is doing today?
Aaron Edell: 79:17
Absolutely.
Yes. I mean, I feel like I don’t even need to continue talking, but I will, because I think you described
it pretty perfectly.
We want it to be very simple and elegant and we kind of want to redefine what object storage
is.
What is, especially cloud object storage, like what criteria defines cloud object storage?
And having metadata and an index that’s searchable is, we’re hoping, going to be the new definition, because it is really hard to solve this in other ways.
I mean, there are other similar tools, but yeah, if you use the hyperscalers, first of
all, it’s an API call.
You still have to process your video, transcode it, and in some cases, chop it up, post each
of those pieces, or in other cases, you can send the whole file, but I think it depends,
to an API endpoint, get back that metadata and then what, right?
Like it’s a JSON file.
And then, so if you want to view this metadata on a timeline and make it searchable, there’s
a whole stack you need to build with open search or some kind of search index incorporated.
You need to build a UI.
You have to process and collate all that metadata.
You have to keep track of where it came from, especially if you’re chopping stuff up into
segments.
And yeah, you end up building a MAM.
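To give a sense of the "and then what?" problem Aaron describes, here is a minimal sketch of collating per-segment JSON results back onto a single searchable timeline. The segment results are placeholders standing in for whatever a tagging API might return; a real build would also need transcoding, a search index, and a UI on top.

# Collate per-segment tagging results (the JSON a hyperscaler-style API might
# return for each chunk of a long video) back onto one searchable timeline.
# The results below are placeholders; real responses vary by service.

segment_results = [
    {"segment_start": 0.0,   "labels": [{"t": 4.2,  "label": "boat"}]},
    {"segment_start": 600.0, "labels": [{"t": 12.8, "label": "Gulf Oil logo"}]},
]

def collate(results):
    """Convert per-segment offsets into absolute timecodes on one timeline."""
    timeline = []
    for seg in results:
        for hit in seg["labels"]:
            timeline.append({"timecode": seg["segment_start"] + hit["t"],
                             "label": hit["label"]})
    return sorted(timeline, key=lambda h: h["timecode"])

def search(timeline, term):
    return [h for h in timeline if term.lower() in h["label"].lower()]

timeline = collate(segment_results)
print(search(timeline, "gulf"))   # -> jump points for every match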
Chris Lacinak: 80:40
It’s complicated.
Aaron Edell: 80:41
Yeah, it’s complicated.
Exactly. I do think that the value of just being able to just turn it on, like here’s my storage,
press a button, and now I’ve got this insight.
And if I want, I can hit the API, get the metadata into my existing MAM, but I also
have an interface, a search bar, a Google search bar into my archive just without having
to do anything.
I like that.
I like that solution.
Chris Lacinak: 81:07
Yeah.
It makes a lot of sense. And I suspect that there will be others that follow suit.
Aaron Edell: 81:16
Probably.
Chris Lacinak: 81:17
So tell me about the blurring of the lines between the DAMs of the world and Wasabi, because this creates an overlap of sorts. How are you thinking about that?
What do you think it means to the evolving landscape of digital asset management?
Aaron Edell: 81:33
Yes.
It’s definitely a heady topic. And I think that the MAM world has always been a world that both fascinates me and terrifies
me at the same time.
When we were at Front Porch Digital, for example, we integrated with all the MAMs that existed
at the time.
And I remember going to various customer sites and they would show me their MAMs and I was
just like, “Oh my God, this is so complicated.
I don’t know how do you use this?
There must be all kinds of training and everything.”
And they were very expensive.
Very, very expensive to implement.
We built our own MAM light, which we always called a MAM light, called DIVA Director.
And DIVA Director is kind of where I think I get my idea of what a MAM should be from, but it’s not a full MAM.
MAMs have a purpose.
There’s a whole world of moving files around, keeping track of high res and low res and
edits and all that, that I am willfully ignoring at this point because that is important.
And it is complicated and there are wonderful MAM tools out there to solve all that.
But when I think about these customers that I spent so much time with, Library and Archives Canada, the United States Library of Congress, the Fortunoff Archive, the USC Shoah Foundation, all of these archives have a kind of somewhat finite archive.
Now there’s stuff that’s new, that’s born digital, and maybe they have parts of what
they do that, if you think about like, I don’t know, NBC Universal are always making new
stuff, but they also have an archive.
And the people who are thinking about and maintain the archive have kind of different
use cases from other people.
So when I think about blurring the lines, I really think about the customer.
Like what do they need?
When they wake up and they go to work, what do they have to do with their fingers and
their hands and their brains on their computer?
And if it’s, you know, manage an archive, be the person who can fulfill requests for
content, help other business units find things.
I think an application like Wasabi AiR is probably sufficient.
Now there’s always new, there’s always features and things that can be added and improvements,
but I don’t want to take it beyond that.
Like I don’t want to go further into the MAM and DAM world because I think that those existing
systems are way better than anything we could build for those purposes.
Chris Lacinak: 84:16
So it sounds, yeah, I mean, you look at a lot of DAMs, you know, there’s complex permission structures and a lot of implementation of governance and things like that, that Wasabi AiR doesn’t do.
So in those cases, it sounds like Wasabi AiR could serve the purposes of some folks who don’t need a DAM or MAM otherwise.
And in other cases, Wasabi AiR is populating those DAMs or MAMs to help them, give them
the handholds, the metadata for improving search and discovery within their own systems.
Aaron Edell: 84:46
Exactly.
It’s a source of more metadata, and it’s sort of a window into your objects that maybe your other MAMs don’t have.
The other important thing too, is if you flip it, if you think about like S3, right?
If I have, and we’ve had customers who have had S3 buckets with hundreds of millions of
objects in them.
If you go into the AWS console, into the S3 console, there’s no search bar, right?
That’s not part of object storage, you know, because it’s a separate concept.
I mean, you have to solve it with technology.
You can’t just search your object storage with no indices or anything like that; otherwise it’d take a million years.
So I feel like that’s where we sit.
We are saying Wasabi AiR, Wasabi object storage now has a search bar.
That’s it.
Chris Lacinak: 85:43
We focused heavily on audio and video today. Does Wasabi AiR also work with PDFs, Word documents, images, just the same?
Aaron Edell: 85:52
It does. Okay. It does.
And it’s a good point, because being able to process those opens up whole other worlds, you know, that we don’t spend a lot of time thinking about, but we will, we’re going to start.
Because, you know, video and audio too are not just limited to media and entertainment.
I like to think of, for example, law firms and, you know, maybe there’s a case and there’s
discovery and they get a huge dump of data.
And that data might include security camera footage of a pool gate, or video of interviews and depositions, and not just all the PDFs.
And I think, you know, if you were opposing counsel and you wanted to give this law firm a really hard time, send them boxes of documents; you can’t search boxes and boxes of documents, right?
There’s no insight into that.
Or say, oh yeah, I’ll scan it for you.
You scan it and send them PDFs, but they’re just pictures, still not searchable.
So I think making PDFs searchable, making Word docs searchable, pulling out, you know, images that might be embedded in these things, and processing those with object detection and logo recognition and all sorts is a very valuable space that Wasabi AiR serves today.
You just got to put it in the bucket.
Chris Lacinak: 87:14
Well, Aaron, it has been so fun talking to you today, geeking out.
Just it’s really exciting. And your career path and your recent accomplishments have been just, you know, game changing, I
think.
Thank you for sharing your insights and being so generous with your time today.
I do have one final question for you that I ask all the guests on the DAM Right podcast,
which is what is the last song you added to your favorites playlist?
Aaron Edell: 87:43
Oh boy.
You know, I have to admit something that’s going to be, that’s going to divide your audience in an extraordinary way, which is that I actually own a Cybertruck.
I’m also a child of the eighties.
So the whole Cybertruck aesthetic really pleases me.
In fact, if you were to just crack open my brain and dive inside, it’s like basically
would be the interior of the Cybertruck.
And the music that would be playing is the kind of a whole genre that I’ve only recently
discovered because of the truck is sort of eighties synth wave.
So I’ve recently added to my favorites, some very obscure eighties synth wave music that
I could look up.
Chris Lacinak: 88:25
Yeah, please.
Please do. We have a soundtrack where I add all of these songs to a playlist that we share.
Aaron Edell: 88:33
So recently I added a song called Haunted by a group called Power Glove.
Chris Lacinak: 88:40
Okay. Awesome.
Aaron Edell: 88:41
And the Power Glove has a space in it. It’s Power Glove.
Chris Lacinak: 88:45
Good to know.
Aaron Edell: 88:46
Because there’s also a band called Powerglove that doesn’t have a space.
Chris Lacinak: 88:51
Good to know.
We learned yet another thing right at the tail end of the podcast. Awesome.
Well, Aaron, thank you so much.
I’m very grateful for your time and your insights today.
I really appreciate it.
Aaron Edell: 89:01
It’s my pleasure.
Chris Lacinak: 89:02
Do you have feedback or requests for the DAM Right podcast?
Hit me up and let me know at [email protected]. Looking for amazing and free resources to help you on your DAM journey?
Let the best DAM consultants in the business help you out.
Visit weareavp.com/free-resources.
And finally, stay up to date with the latest and greatest from me and the DAM Right podcast
by following me on LinkedIn at linkedin.com/in/clacinak.
AMPlifying Digital Assets: The Journey of the Audiovisual Metadata Platform
11 July 2024
The digital landscape has transformed dramatically in the last decade. AI has reemerged as a powerful tool for asset description. This evolution has enabled previously hidden assets to be discovered and utilized. However, AI tools have often operated in isolation, limiting their full potential. This blog discusses the Audiovisual Metadata Platform (AMP) at Indiana University, a groundbreaking project creating meaningful metadata for digital assets.
Context and Genesis of AMP
Many organizations are digitizing their audiovisual collections, which has highlighted the need for a unified platform. Indiana University, with Mellon Foundation funding, initiated the AMP project. Its goal was to help describe over 500,000 hours of audiovisual content and support other organizations facing similar challenges.
The Need for Metadata
Digitization efforts produce petabytes of digital files. Effective metadata is essential to make these collections accessible. AMP addresses this need by integrating AI tools and human expertise for efficient metadata generation.
The Role of AI in Metadata Creation
AI helps automate metadata generation, but integrating various AI tools into one workflow has been challenging. AMP was designed to combine these tools, incorporating human input for more accurate results.
Building Custom Workflows
AMP allows collection managers to build workflows combining automation and human review. This flexibility suits different types of collections, such as music, oral histories, or ethnographic content. Managers can tailor workflows to their collection’s needs.
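As a loose illustration of what "building a workflow" can mean in practice, here is a hypothetical workflow definition mixing automated steps with human review gates. It is not AMP's actual configuration format, just a sketch of the idea.

# Hypothetical workflow for an oral-history collection: automated steps
# interleaved with human review. Illustrative only, not AMP's real schema.
oral_history_workflow = [
    {"step": "speech_to_text",           "human_review": False},
    {"step": "transcript_correction",    "human_review": True},   # editor fixes the transcript
    {"step": "named_entity_recognition", "human_review": False},
    {"step": "entity_review",            "human_review": True},   # curator confirms people/places
    {"step": "publish_metadata",         "human_review": False},
]

for step in oral_history_workflow:
    gate = "requires human review" if step["human_review"] else "automated"
    print(f"{step['step']}: {gate}")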
The User Experience with AMP
Collection managers are the main users of AMP. They often face complex workflows. AMP simplifies this with an intuitive interface, making it easier to manage audiovisual collections.
Integrating Human Input
Human input remains essential in AI-driven workflows. AMP ensures that human expertise refines the metadata generated by AI tools, preventing AI from replacing traditional cataloging roles.
Ethical Considerations in AI
Ethical considerations are crucial in AI projects. AMP addresses issues like privacy and bias, ensuring responsible AI implementation in cultural heritage contexts.
Privacy Concerns
Archival collections often contain sensitive materials. AMP has privacy measures, especially for AI tools used in facial recognition. Collection managers control these tools, ensuring ethical responsibility.
Collaboration and Community Engagement
AMP is designed to be a collaborative platform. It aims to engage with institutions, sharing tools and insights for audiovisual metadata generation.
Partnerships and Testing
AMP has partnered with various institutions to test its functionalities. These collaborations provided valuable feedback, refining the platform to meet diverse user needs.
Future Directions for AMP
AMP’s journey continues as technology evolves. New AI tools like Whisper for speech-to-text transcription are being integrated.
Expanding Capabilities
AMP aims to enhance its metadata generation process with more functionalities. It seeks to improve existing workflows and incorporate advanced AI models for accuracy.
Conclusion
AMP represents a significant advancement in audiovisual metadata generation. By integrating AI and human expertise, it offers efficient management of digital assets. As it evolves, AMP will continue providing value to cultural heritage institutions.
Resources and Further Reading
- AMP Project Site
- Mellon Foundation
- BBC Transcript Editor
- INA Speech Segmenter
- Galaxy Project
- Kaldi ASR
The Importance of Broadcast Archives: Insights from Brecht Declercq
27 June 2024
Broadcast archives are an invaluable resource for understanding the cultural, social, and political history of the 20th and early 21st centuries. In this blog post, we will explore the significance of these archives, the challenges they face, and the future of broadcast operations as we transition to a digital age.
Understanding Broadcast Archives
Imagine you are a future archaeologist trying to comprehend the lives of people from the early 1900s to the early 2000s. What would be the most robust source of information? While institutions like the Library of Congress and various landfills have their merits, broadcast collections produced by radio and television broadcasters are arguably the most comprehensive source. These collections hold stories that document the formation of nations, cultural shifts, and significant political events.
The Unique Role of Broadcasters
Broadcasting entities have amassed vast collections over the years, capturing the essence of their respective societies. Through a mix of entertainment, news, and cultural programming, they have created a historical record that includes comedy, drama, and sports. These archives are not just a collection of shows; they are a reflection of the times, offering insights into the prevailing attitudes and events of various eras.
Meet Brecht Declercq
Brecht Declercq is a leading expert in the field of broadcast archives and has served as the President of FIAT/IFTA, the International Federation of Television Archives. His extensive experience in the field has equipped him with invaluable insights into the challenges and opportunities facing broadcast archives today.
The Role of FIAT/IFTA
FIAT/IFTA is the world’s leading professional association for those engaged in the preservation and exploitation of broadcast archives. The organization focuses on creating and exchanging expert knowledge while promoting awareness of future media archiving. With membership spanning public broadcasters, commercial entities, and audiovisual archives, FIAT/IFTA aims to build a global community dedicated to preserving audiovisual heritage.
Surveying the Landscape
One of the organization’s key initiatives is conducting surveys to gauge the state of broadcast archives worldwide. These surveys provide crucial insights into the evolution of archiving practices and highlight the challenges faced by institutions across different regions. For instance, the “Where Are You on the Timeline?” survey allows members to assess their progress in digitization and other archival practices.
The State of Broadcast Archives Worldwide
While some regions enjoy advanced archival practices, others struggle with significant challenges. In wealthier countries, many broadcasters have completed digitization efforts, preserving their audiovisual heritage. However, in less affluent regions, many archives remain in a state of disrepair, risking the loss of critical historical documents.
The Impact of Economic Factors
The financial health of a country plays a significant role in the preservation of its broadcast archives. In economically disadvantaged areas, the degradation and obsolescence of audiovisual carriers are prevalent. This leads to a situation where important historical records may be lost forever, resulting in a gap in our understanding of history.
AI and the Future of Archiving
The advent of artificial intelligence (AI) has introduced new possibilities for managing broadcast archives. AI can enhance the efficiency of cataloging and metadata generation, making it easier to access and utilize archival materials. As the technology continues to evolve, it is likely that AI will play an increasingly prominent role in the archiving process.
Broadcast Archive Operations
Understanding how broadcast archives operate is essential for appreciating their value. Typically, a broadcast archive is divided into several key functions: acquisition, preservation, documentation, and access. Each of these areas plays a crucial role in ensuring that the collections remain relevant and accessible to future generations.
Staffing and Organization
The staffing structure within a broadcast archive can vary widely, depending on the size of the institution and the scope of its collection. For example, Brecht’s current organization, RSI, has a dedicated team of approximately forty staff members. In contrast, larger institutions may employ hundreds of individuals to manage their extensive collections.
Collaboration with Production Teams
Broadcast archives often work closely with production teams to ensure that valuable content is preserved. This collaboration may involve integrating archival processes with production asset management systems (PAM) and media asset management systems (MAM). By connecting these systems, archives can efficiently manage the flow of content from production to preservation.
The Shift to Streaming and On-Demand Services
The rise of streaming services has fundamentally changed the landscape of broadcasting. As audiences increasingly turn to on-demand content, the role of traditional broadcast archives is evolving. The lines between archives and streaming platforms are becoming blurred, with many archives now offering their collections through digital platforms.
Ethical Considerations in Archiving
As broadcast archives transition to digital platforms, ethical considerations come to the forefront. Archives must navigate the complexities of rights management while ensuring that historical content is accessible. This includes addressing potentially problematic content and providing context to users, allowing for a more comprehensive understanding of history.
Preserving History for Future Generations
Ultimately, the mission of broadcast archives is to preserve history for future generations. As Brecht points out, it is vital to acknowledge both the positive and negative aspects of history. By maintaining transparency and providing context, archives can ensure that their collections serve as valuable resources for education and reflection.
Conclusion
Broadcast archives are pivotal in shaping our understanding of history and culture. As we navigate the challenges of digitization, AI, and the transition to streaming services, the importance of these archives cannot be overstated. With leaders like Brecht Declercq at the helm of organizations like FIAT/IFTA, the future of broadcast archives looks promising as they continue to adapt and evolve in the digital age.
For more insights on this topic and to stay updated on the latest developments in broadcast archiving, consider following FIAT/IFTA and engaging with the broader community of media archivists.
Transcript
Chris Lacinak: 00:00
Imagine you’re a future archaeologist trying to understand humans from the early 1900s through the early 2000s. What do you think the most robust, compelling, comprehensive source for obtaining that understanding might be? If I were you, I might be thinking the Library of Congress or landfills perhaps. And in truth, both of those do hold portions of the collections I’m thinking of, but that’s not what I’m going for. I think there’s a strong argument to say that broadcast collections produced and/or held by broadcasting entities across the world is the answer to this question. Radio and television broadcasters have held a unique place in the hearts of people around the globe over the past century and more. In their mission to entertain, document, and inform, they have amassed some of the largest and most important collections throughout the world. Each collection providing deep insights into the time and place in which they were broadcast. Broadcast collections hold the stories of the forming of countries and governments. They hold documentary evidence of culture and politics. They store the comedy, the drama, and the sports that captivated the audiences they reached. Leveraging the power of audio, film, and video, there is arguably no greater record of humanity for this period than the culmination of these broadcast collections.
I’m delighted to have Brecht Declercq join me on the episode today.
Brecht has served on the board of FIAT-IFTA for seven years. In English, this stands for the International Federation of Television Archives, and they are self-described as the world’s leading professional association for those engaged in the preservation and exploitation of broadcast archives. Brecht has served as the president of the organization for the past four years, giving him in-depth knowledge on the state of affairs with regard to broadcasting entities throughout the world. Brecht has also worked with and in broadcast archives for his entire career. Currently, Brecht serves as the head of archives for RSI. The Italian-speaking Swiss public broadcaster. Brecht’s experiences and insights are so interesting and valuable, and I’m excited to be able to share his thoughts and voice with the DAM Right listeners. Remember, DAM Right, because it’s too important to get wrong.
Brecht Declercq, welcome to the DAM Right podcast.
It’s an honor to have you here today. I wanted to have you on the podcast to get a peek inside of radio and television broadcast archives, and you bring a lot to the table there for a variety of reasons. You have worked on and in radio and television archives, and you have been the president of FIAT-IFTA for years now. So I’m really excited for you to bring a sneak peek inside of radio and television archives for our listeners that have not had the opportunity to work within those archives. So thank you so much for joining me today. I really appreciate it.
Brecht Declercq: 02:52
It’s a pleasure.
Chris Lacinak:
So I’d love to start off with getting some insight into your background, and I’d like to maybe pinpoint: is there one thing from your past, your history, that you bring to the table that you think really informs your approach and how you work today?
Brecht Declercq: 03:12
Well, yeah, it’s of course a difficult question, because I’ve been active in this field for 20 years. And I think if you’d ask me, okay, what was that decisive moment in which you said, okay, this is the kind of career that I could make, that I could feel good in, that decisive moment was in fact in 2010, when I attended a big international conference for the first time. In 2010, the International Association of Sound and Audiovisual Archives, together with the Association of Moving Image Archivists in the US, organized their joint conference in Philadelphia. And that was four days of very immersive encounters, I would say, immersive experiences, attending all these presentations, meeting all these passionate people. And I was very lucky to be there, because I had submitted a proposal without even asking my boss. And that was the moment in which I said to myself, sometimes it’s better to ask to be forgiven than to ask for permission. And that’s the one lesson that I drew from that experience. And it was so motivating that it kept me going for quite a few years, based on those four days in the US alone.
Chris Lacinak: 04:39
And if I remember right, you and I first met at that conference, I believe.
Brecht Declercq: 04:44
Yeah, that’s true. It’s actually quite an ironic anecdote, I would say. I was speaking about a workflow to migrate the content of DAT tapes, digital audio tapes. And I remember the room was packed and I was very proud of that. It was not a big room, definitely not, with maybe 30 or 40 people sitting in that room. And you asked me a question at the end of my presentation; you were asking whether I had ever heard of interstitial errors. And I was so ashamed at that moment that I had to say no, standing there in front of that audience, thinking, okay, this guy is asking me one question and I don’t even know how to answer it. But then you reassured me and you said, don’t worry, many people in this room won’t have heard about it. So yeah, that was our first meeting, Chris.
Chris Lacinak: 05:34
An advantageous moment, yes. And I remember your energy. You were very energetic, very into the conference, as was I, but I just remember that about you. You and I spoke afterwards and you were super energetic about it. It’s funny to think back; that was quite a while ago. So we later came to meet again when you were at an organization whose name my lazy American accent will always get wrong. I’ve said this word a million times, so you’ll have to forgive me. meemoo is the organization; when we started working together, it was called VIAA. But I’d love it if you could talk about the work that you did there and what was unique about that initiative and that work.
Brecht Declercq: 06:23
Yeah, I think I’ve explained VIAA, now called meemoo, several times around the globe. And I think it can best be explained by pointing to the pain, the pain that meemoo was solving, or is still solving. And that is that the audiovisual heritage of many countries is spread amongst a variety of institutions: libraries, archives, museums, public broadcasters, commercial broadcasters, smaller and bigger ones. And if you thoroughly think about it, and if your national government decides to take responsibility for that, which is not always the case across the globe, then this becomes a very cumbersome duty, I would say. And there is a risk that it becomes a very expensive one. And there is also a need to do it in a very professional way. meemoo is the answer of the Flemish community, the Dutch-speaking northern part of Belgium, to the questions of obsolescence, of degradation of audiovisual carriers, and of the increased demand for audiovisual heritage. So what they decided to do is set up, with a government subsidy of course, large-scale digitization projects for audiovisual heritage, collecting, in fact, all those tapes and cassettes and films, et cetera, that were present at so many institutions. We started off in 2013 with around 40 institutions, about 10 broadcasters and 30 libraries, archives, and museums. And by now they are at, I think, almost 180 of them. And I am proud to say that when I left meemoo about a year and a half ago, about 80% of that whole volume, estimated at around 600,000 to 650,000 objects, was digitized. So there is still some stuff to be done, mainly film, but the bulk of it is done. And it was not only about digitization. meemoo also provided sustainable digital storage, because that too can be a cumbersome task for, say, a small museum or a small library with just a few hundred audiovisual carriers. So they provided that kind of professional storage as well. You could call it a public cloud; you could somewhat compare it to that. And then they also said, what is the value of all this material if we don’t valorize it? Not in a financial way, but in terms of what I always refer to as the return on society. So they decided to set up, for example, an educational platform, shortening almost literally the distance between the archival vault and the classroom to, let’s call it, a few weeks, maybe a few months in some cases, a few days in an extreme case, so that teachers can use those materials in the classroom. And it is indeed a unique construction, but because I so thoroughly believed in it, I still keep on spreading that word, because on, let’s say, a daily basis, I’m confronted with the situation of audiovisual heritage in the world these days. And the number one basic question for so many archives is: how are we going to fund our functioning? How can we provide certainty? How can we tackle all these huge challenges without certainty about our funding, et cetera? And I think that if you manage to convince a government of a very efficient way of dealing with this, then the Flemish example is one that many governments can be interested in. We’ve seen that in India, for example, and we’ve seen it as well in New Zealand. Those are the only other countries where they, I wouldn’t say copied the Flemish example, but rather got inspired by it.
Chris Lacinak: 10:44
Yeah. Well, it’s certainly one of the most masterful, comprehensive digital transformations that I’ve seen, in that it addresses such a variety of cultural heritage, material types, and content. It addresses digital preservation, digital asset management, and, as you said, the outreach and engagement to classrooms and to the public, with tons of metrics and tracking around that to measure success. And it’s really a phenomenal initiative, I guess is the word. I’m not sure if initiative is the right word or not, but program, entity, whatever, the effort has been, I think, really phenomenal. So I appreciate you filling us in about that. And we’ll share a link in the show notes to the organization so people can go and check that work out.
Brecht Declercq: 11:36
Yeah. The nice thing about the approach, I also want to stress, is that a lot of the information and the knowledge that they created while doing all this, they’re sharing for free, and in an English version as well, on their website. I really want to stress this, because it was one of the goals to share their experience even beyond the Flemish borders, deliberately deciding to translate material into English and thereby contributing to the spread of this kind of knowledge throughout the globe.
Chris Lacinak: 12:00
Yeah. Well, I wanted to ask you to talk about that because I think it does inform, it’s an important part of your background and kind of the context that you come from. Could you talk a bit about what you’ve done since being at meemoo?
Brecht Declercq:
Well, as I said, a year and a half ago I decided to leave meemoo, not because I wasn’t having a good time. I was having a great time, absolutely having a great time, but it has always been on my mind to take on a challenge abroad. I am Belgian, but I have always had this international outlook. And then a vacancy came up here in Switzerland at RSI. And I have kind of known this organization for a while. In 2011, the World Conference of the International Federation of Television Archives, of which I am now the president, took place in Turin in Italy. And I went there by car, because some people will know that I have this kind of passion for everything that’s Italian in my spare time. And when driving back, I came in contact with the Head of Archives here at RSI. And the road from Turin to Belgium actually crosses the town where I’m now living. So I decided to make a stopover and to visit that same RSI. And I was stunned by what I saw, because the reason that I stopped was that I wanted to see a very nice innovation, in my opinion, and that was a robot, a robot to digitize their video cassette collection, a three-dimensional robot refurbished from the car industry. You know, those orange ones you always see in footage. They had refurbished that, and that machine had an autonomy of five days. So for five days, it could continue to digitize tapes, clean tapes, get them out, et cetera. And I found that a marvelous innovation, and I wanted to see that. But when I arrived here at RSI, they wanted to show me something else. And that was their speech-to-text, fully integrated with their documentation processes. So we are talking about an artificial intelligence that was already implemented almost 15 years ago. They started off with that in 2009. So on not one, but two levels, they were, as far as I know, forerunners on a global scale. And I said, that must be a marvelous organization to be able to work for. So I decided to apply, and I’m now Head of Archives. Head of Archives meaning that all the archival departments, whether it’s the radio archive or the television archive, are under my responsibility here in Italian-speaking Switzerland.
Chris Lacinak: 15:04
I want to come back to RSI later, but I want to sidestep and talk about FIAT/IFTA for a bit first. You’ve touched on the organization in what you’ve just said, but I’d love it if you can tell us a bit more. What’s the mission of the organization? What’s the makeup of the organization? And tell us a bit more about how the organization works.
Brecht Declercq: 15:26
Yeah. First of all, FIAT/IFTA is a double abbreviation: International Federation of Television Archives, Fédération Internationale des Archives de Télévision. So that’s the French.
Chris Lacinak: 15:36
That sounds much better that way.
Brecht Declercq: 15:40
Okay. Well, formally our mission is: FIAT/IFTA actively creates and exchanges expert knowledge and promotes and raises awareness of future media archiving by building and maintaining an international network and its broader community, organizing events, developing trusted resources, and taking challenging initiatives for those engaged in the field of media archives. I have to admit that I read that, so I don’t know it by heart. So yeah, that’s actually what we’re doing. We’re trying to form a global community for all those engaged in media archives. Our membership typically consists of around 40 to 50% public broadcasters, 10 to 15% commercial broadcasters, and then 10 to 15% of very active, what I would call, national audiovisual archives or national archives and national libraries that are involved in the preservation of audiovisual heritage in their country as well. And then, ever more, we also have members from the industry. They have a special membership called supporting membership. And then we have a broad plethora of other members, such as FIFA, the International Football Association; the New York Times was a member of ours for a while; and several others. So it goes in several directions, but I would say the stronghold, the real focus, is really media archives, traditionally television, but ever more venturing into radio and video at large, all these kinds of things. Yeah.
Chris Lacinak: 17:27
And when you say the industry, what do you mean when you refer to the industry?
Brecht Declercq:
Yeah. Good question. I’d say companies like AVP, or all kinds of services and goods providers: digitization service providers, but also consultants, software developers, and ever more companies in the field of artificial intelligence, MAM, and DAM, obviously; they’re very closely connected to our community. So yeah, that’s what I mean by the industry.
Chris Lacinak: 18:05
So you’ve given us a picture that it’s a global organization. Can you offer some sort of breakdown of members?
Brecht Declercq: 18:12
Yeah. As I said, our stronghold and our historical background is mainly in Europe, that’s for sure. So we’re talking about, once again, 40 to 50% European members. But I want to stress that amongst our founding members were also American companies, American broadcasters, such as NBC and CNN. Later on, we also got CBC Radio-Canada, for example, as a member. In Latin America, we’re also in the realm of public broadcasting, but there are commercial broadcasters too; for example, Globo, the Globo Group, which is the largest commercial broadcaster of Latin America, is a member of ours. Then if you go to Africa, you are typically, once again, with public broadcasters, the South African public broadcasting organization, for example. And if we look at the Middle East, then you have Al-Arabiya, Al-Jazeera. Towards the other parts of Asia, the Japanese public broadcaster was one of our early members, and ABC in Australia. So we really have a global outlook, but I do want to recognize that we are mainly Eurocentric, I regret to say, because the ambition is to be global.
Chris Lacinak: 19:38
It still sounds like, I mean, I have attended FIAT/IFTA conferences and they definitely are attended by participants worldwide. They feel very global. So I appreciate the transparent Eurocentric admission there, but I would say that FIAT/IFTA is probably doing a lot better than a lot of organizations in global representation. I know that you have done surveys in the past; I think even before you were president, you were involved in a working group that did some surveys of the FIAT/IFTA membership. And I think since you’ve been president, you’ve done some of these. I don’t want to ask you to go into all of them, but I wonder, are there any that were particularly interesting in their findings? And would you be willing to share maybe the gist of the questions and what the findings were?
Brecht Declercq: 20:32
It’s true that we do love surveys as an instrument, because it’s interesting to our members and also to our broader stakeholder group. And a survey that we do on an annual basis is called, “Where are you on the timeline?” And that really says it all, in the sense that we’re doing it now this year, probably for the 15th consecutive time. And it’s a really short survey, six or seven questions; I should check that. I’ve run it personally for three or four years. It really asks five, six, seven questions in a very concise way. And it allows the respondents to answer with a multiple choice, so they just pick the answer that fits or that describes their situation best. And the answers are formulated in a progressive way. So you just indicate what stage you are in, within what we considered, when we drafted this survey, a logical evolution of things. And that survey really allows us to see and to monitor the evolution of our members and beyond, because responding is not restricted to our membership, and what stage archives are in. And we’ve seen things evolving up to the point where we are even saying now, we should add extra options to our scale, because things have evolved so much. And we see so many archives reaching those final stages that we had foreseen, I would say, so many years ago, that we really have to extend that survey again. So that timeline survey is really a nice one, but there have been others. We have been doing surveys about media asset management systems, for example, and about metadata creation and the way organizations create their metadata, how they look at that and the evolutions they expect there. And sometimes we also give it a regional focus, and that’s also very enlightening, because that’s when our members really say, okay, this allows me to compare, but really with comparable situations. So yeah.
Chris Lacinak: 23:03
I want to come back to the regional focus later. That’s an interesting point. I’d like to ask, I guess, about the surveys you’ve done. We’re about halfway between FIAT/IFTA World Conferences; the last one was in October, so we’re about halfway to the next one and halfway past the last one. But between what you see happening at the conference and those surveys, could you give us some sort of summary of what you see? And of course, it’s a large body of members. So any insights that you could share about the state of affairs related to broadcast archives across the world?
Brecht Declercq: 23:45
Yeah. It’s hard to answer that question in a mono-directional way.
Chris Lacinak: 23:53
It’s a very unfair question. Yes.
Brecht Declercq: 23:56
Yeah. On a global scale, the situation is very different. I’ve been privileged enough to travel the world and to see broadcasters’ archives on every continent. And the situation can be very different, even within one region. It often depends on the, well, let’s say it like it is, the financial and budgetary wealth of a certain country. But apart from that, the evolution that I’ve seen throughout the years, and people who have been active a bit longer in this field will definitely recognize it, is that real wave of digitization that has swept our field, I would say. And digitization not only in terms of the digitization of working methods and the whole environment in which media is produced, but also in terms of archival digitization. So already in the mid-2000s, there were some alarm bells going off everywhere in the world, like, okay, this is happening. And then around 2010, 2013, if I’m not mistaken, a few very prominent audiovisual archivists in the world, I always quote Richard Wright from the BBC and Mike Casey from Indiana University there, were warning and saying, beware, dear colleagues, because somewhere around 2023 to 2028, digitizing large quantities of magnetic media, either audio or video, will become practically unaffordable. Not impossible, in the sense that some machines will technically stay around. But if you have a huge collection of several hundreds of thousands of these audiovisual carriers, such as radio and television stations typically have, then things might become unaffordable. It’s going to cost so much money to have those carriers digitized that you’re not going to be able to pay it anymore. And actually that wave is now, I would say, coming to an end in some parts of the world. There are several broadcasters in the FIAT/IFTA membership, for example, that have finished digitization. At my own employer here in Switzerland, RSI, we have practically finished almost everything. I think we’re at 98% or so. We’re just thinking of re-digitizing some film material, but that’s it. But there are indeed many broadcasters in the world that haven’t digitized everything yet. I was in Tunisia a few weeks ago, and I hesitate to say this because I don’t want to blame anybody, but we have to look reality straight in the eyes. And the reality is that we are losing that battle. We are losing that battle, and it’s important to be aware of that. In the poorer parts of the world, what Mike Casey, I already mentioned his name, has called degralescence, this portmanteau concept of degradation and obsolescence, is striking, and it’s striking first in the poorest parts of the world. I thought at first it was a coincidence, but when I started thinking about it, it wasn’t. In the last two days, I received two notifications, two emails from broadcasters, and I won’t mention their names because that doesn’t make any sense, but from poorer parts of the world, asking whether I considered it possible in their country to have two-inch open-reel videotapes digitized, and my clear and honest answer was no, not even in your neighboring countries. So yeah, that degralescence is striking. We are coming to that point now that was predicted so many years ago by so many people. So that’s an important evolution that I want to point to. Another one, and it’s partially overlapping now, is that AI wave. It’s undeniably so. It has been predicted for so long in the broadcast world.
As I said, as early as the early 2000s, we were all talking about it. The world was buzzing: there is this new technology that’s going to take over the documentalist’s job. And then the strange thing is that we had to wait for it so long that some in the media archiving world already started to doubt. They said, “Isn’t it all rumors? Isn’t it all almost like fake news?” And my personal answer was always: it’s not a question of if, it’s a question of when. And seeing now how quickly things are going, I am still convinced that broadcast archives were amongst the first parts of the media industry to adopt artificial intelligence. And we were very aware of what was coming, but still we were surprised by the speed it has actually gathered throughout the last, let’s say, two years. After the launch of ChatGPT and DALL-E, everything changed, of course. So that’s the other wave that I’ve seen coming.
Chris Lacinak: 29:57
You’re right. I remember in the early to mid-2000s, a lot of hype around AI and just major disappointment in the execution and delivery on the promise. And it did take a while. Yeah, it took 10, 15 years before it came back with something that was impressive enough to grab people’s attention. Although we did see lots of organizations doing smaller, interesting kinds of proofs of concept along the way. I want to go back to something you touched on, and this also touches on the regional nature of your surveys. You talked about how countries with fewer resources are suffering from a lack of digitization. Can you help people understand what’s lost if these materials are lost to degradation and obsolescence? As you look across the globe, what are some things that we miss out on, both regionally and globally, in our understanding of the world that goes along with the media that’s lost?
Brecht Declercq: 31:04
Yeah, that’s a very good question, because every now and then I have to give that answer to make people aware. But I’m going to give you a very, very simple answer. Let’s focus for a second on Africa. The African wave of independence started off around the mid-50s in Ghana with Kwame Nkrumah, a great, charismatic African leader. And I’m not going to tell the whole story of the independence of Ghana, but my point is that’s where it all started, and it continued up until the 70s, that wave of independence. But that is also the era in which broadcasting, television production, was gradually switching from film recording to video recording. So that era is the era from which we have the oldest videotapes. Also, you have to be aware that the countries those African countries became independent of were mainly, as we know, Western European countries: France, Great Britain, Belgium, my own country. And the television systems in those countries had been installed by those colonizers. So they were also the ones that provided the technology and that decided about the technology, and that was, ever more, videotape. And after independence, those broadcasters, those public broadcasters, became independent institutions under the wings of their governments, of course. And they are still preserving their archives now. But once again, I’m not blaming anyone here, I’m just describing a few facts. In many African countries that became independent in the 50s, 60s, and 70s, those archives are in a dreadful state. So what these archives are losing, and what their countries are losing, is the audiovisual documentation of their birth.
Chris Lacinak: 33:25
Wow.
Brecht Declercq: 33:26
So take a second to think about that. Take the American Declaration of Independence. Can you imagine that you would say, “Ah, sorry, we can’t read it anymore”? That’s what’s happening now in Africa, as we speak. That’s what’s happening. And then take this to a global scale, and I would say, “Okay, let’s make a little comparison.” Try to imagine today’s world and the importance of audiovisual media, and try to be aware that throughout the course of the 20th century, many, many historical evolutions were documented on radio and television. Television and radio were amongst the most popular and the most influential media of the 20th century. You cannot explain the rise to power of Adolf Hitler without acknowledging the role of radio. So try to imagine that we would lose that kind of heritage. Try to imagine that we’d have to explain history without having access to radio and television as historical sources. It would simply be impossible. And now I rest my case.
Chris Lacinak: 34:44
Yeah, wow. Well, you can imagine. So, I mean, just to kind of reiterate and follow up on what you just said: fast forward 50, 100 years, and I would say even with the presence of archives, it can be difficult to represent the true narrative of history. But at least the source material is there, right? Now imagine the picture you’ve just painted. In many cases across the world, the source material is lost. Just think what a major shaping of the historical narrative takes place from that; I would say it’s probably likely to misrepresent what’s happened historically across the globe. That’s major. You make a very good case.
Brecht Declercq: 35:36
Can I point to one simple example as well? Just a very small state on the globe, it’s called Timor-Leste, Portuguese for East Timor. It’s a small island close to Indonesia. And that country became independent in the 90s. And there was, if I’m not mistaken, a French-German cameraman called Max Stahl, and he documented all that was going on in the independence war, because that country became independent from Indonesia. Filming there, that cameraman captured a lot of the violence of the Indonesian army throughout that war of independence. That archive in itself documents the birth of Timor-Leste in the 90s. Luckily, that archive was saved at some point, also thanks to the intervention of INA, the French National Audiovisual Institute. But that is another example of a country that could have lost the documentation of its birth, paired with, let’s say it like it is, crimes against humanity during that war of independence. So it demonstrates once again that unique documentary role, not only of media corporations, of course, but of audiovisual heritage in general.
Chris Lacinak: 37:10
Do you have feedback or requests for the DAM Right podcast? Hit me up and let me know at [email protected]. Looking for amazing and free resources to help you on your DAM journey? Let the best DAM consultants in the business help you out. Visit weareavp.com/free-resources. Stay up to date with the latest and greatest from me and the DAM Right podcast by following me on LinkedIn at linkedin.com/in/clacinak. I want to shift away from this specific topic, but stay in the general theme of differences and discrepancies across the globe. I’ll ask you to paint a picture for us, but maybe we can use Europe versus the United States as a place to focus in on in particular. I think a lot of people who haven’t been in the field may not recognize just how different broadcast operations look in various countries. And here I think of both the commercial versus the non-commercial nature, the public, kind of government-backed broadcasters versus commercial broadcasters. Can you paint a picture for people of what some of those differences look like: how they operate, how they’re funded, and what that means?
Brecht Declercq: 38:39
Yeah, it’s true what you say, that there is often a very big difference between, I would say, profit-driven and non-profit organizations in that respect. From what I see, or what I know from my daily experience, I haven’t worked for a commercial broadcaster yet, but what I know firsthand, from testimony by people who work there, is that typically a commercial broadcaster has less of that heritage perspective. And that’s okay, that’s perfectly legitimate; I’m not saying that they should. But when you are in a public broadcaster, there is always this double perspective. On the one hand, and this is something they have in common with commercial broadcasters, broadcasters’ archives are always there in the first place to support their own production, their own production departments. And that’s what they typically cater for, I would say. But at the same time, there is always this perspective of a contribution to society. A public broadcaster’s archive is always supposed to help external customers as well. And external customers often don’t have a commercial perspective at all: libraries, museums, whether they want to access those archives in a small kind of way, just asking for one or two tapes or one or two clips or so, or whether they want to use it on a structural scale, to open it up to the whole educational world and the whole school system, et cetera. And as a public broadcast archivist, you can hardly say no to that kind of request. And it’s not the intention either. I mean, I always say, without use, a public broadcaster’s archive, its shelves are empty, if you understand what I mean. This is what I call a heritage perspective: contributing with the archives to society’s needs without the requirement of earning money from that. That is a perspective that is always present in a public broadcaster’s archive. In a commercial broadcaster’s archive, and I’ve seen that several times, that kind of perspective is absent or close to absent. And that gives them the liberty to take decisions with their archive that I, as a historian, sometimes regret. But you can hardly blame them for that, because in many countries there is no such thing as what is called a legal deposit, the legal obligation to deposit a copy of what you have broadcast to some kind of institution that then preserves it and, in the longer run, respecting copyright, et cetera, gives access to it, such as happens with books. So many countries in the world have a legal deposit for books or any kind of written publication. So few countries in the world have a legal deposit when it comes to audiovisual publications, and especially radio and television broadcasts. And that’s the difference in perspective that I see so often. It doesn’t exclude that some commercial broadcasters do have that heritage perspective as well in certain parts of the world, and I really respect them deeply for that, because they are often not obliged to do so. On the contrary, the driver they more often have is a profit-driven one. So they often really consider their archives as a source of income. And once again, that’s perfectly legitimate, but this is a completely different perspective. For them, it’s a way to valorize in a financial way what they have.
They’re really assets in the true sense of the word, on condition, of course, that they’re findable and that they have the rights to exploit them in a financial way. But it’s a completely different perspective. And just as a side note, in FIAT/IFTA we bring those two together, so you can imagine how difficult it can be to unite those two perspectives sometimes.
Chris Lacinak: 43:28
I feel like I have seen instances of broadcast archives that are not commercial also trying to valorize their archives in order to create a more sustainable kind of business model, even when it is a government-based institution. Is that right? Have you seen that as well?
Brecht Declercq:
Yeah, that’s correct. That’s absolutely correct. Let’s not deny that. Many public broadcasters’ financing, which is public financing, is under heavy pressure in many countries. You see it currently, for example, in Slovakia, where the government is heavily threatening the funding of public broadcasting. And so public broadcasters do all they can to mitigate those kinds of effects by searching for other sources of revenue, and selling or licensing archival materials is for many broadcasters one of the ways to counter those effects. And that, for me, doesn’t necessarily mean that it is a big source of revenue. We have to be honest about that. There are not many public broadcasters’ archives that can fund, I would say, even two or three full-time equivalents on an annual basis with what they sell in terms of footage. That’s something to keep in mind. In my opinion, there is no sustainable financing model for public broadcasting based on the licensing of footage or archival material. I’m very sorry for those who believe in that, but I don’t.
Chris Lacinak: 45:19
Yeah. Years ago there was a concept I was running with around the cost of inaction, which was kind of, you know, a different way of looking at the traditional return on investment. Because within organizations of all types, broadcasters, non-broadcasters, universities, you know, all sorts, there was this concept that executives in the organization would usually hold: how can we see a return on investment on our archives? And it just never calculated out to be advantageous. And it seemed to always lack a holistic perspective on what the true value was. It didn’t just come down to dollars. And while funding is obviously important and a critical issue, looking at it alone never seemed to do the issue real justice. And some of the things you talked about earlier really paint a picture of the value of these archives.
Brecht Declercq: 46:17
Yeah, if I can just intervene, because I want to add a perspective. In 2013, there was a piece of research by the Danish public broadcaster’s archive. What they did was, for one week, seven consecutive days, 24/7, they recorded the full broadcasting schedule on their two main channels. And they measured the duration of all the content that was being broadcast, and they made the distinction between content broadcast for the first time and content that was not. They came to the conclusion that 75% of the broadcasting schedule, of the broadcasting time, was not filled with content that was broadcast for the first time. And they said, this means that this content has passed through the archive, 75% of that broadcasting time. And if you take a look from that perspective, you could say, it’s probably not an exact calculation, but you could think: if we had to fill all that time with new productions or with acquired material, broadcasting would probably cost us three to four times as much.
Chris Lacinak: 47:45
That’s interesting. Right.
Brecht Declercq: 47:48
It’s an interesting perspective because you never get to think about things that way. But yeah, and it’s not exact, of course, that measurement, but it switches your mindset.
Chris Lacinak: 47:59
Yeah, absolutely. That’s a good framing. Well, we’ve talked at kind of a high level; I’d like to jump into what a broadcast television archive looks like. And we’ve just talked about all the disparities and differences, so obviously I want to lean on your personal experience here. Can you offer some insights for someone who maybe has worked in digital asset management, has worked in archives, but has never worked in a radio and television archive? And here you’ve had the experience; at meemoo you saw all sorts of organizations, so broadcast was just one source among many others. But you do have some unique perspective here. Can you give us some insights into what a radio and television broadcast archive looks like? How’s it staffed, organized, those sorts of things?
Brecht Declercq: 48:55
Yeah, let’s first start off by saying that the size of the country does not necessarily coincide with the size of the broadcaster or the size of the broadcaster’s archive. The determining factor is how many channels they have had throughout their history. That typically determines the size of the collection, if we talk about that. So typically, in any kind of country, for a long while you had one channel, then a second one, then a third, a fourth, a fifth, and then some regional channels, et cetera. And then television came in the fifties, and they started with one channel, they added a second, sometimes a third or a fourth, et cetera. And then you’re venturing into the 21st century. And typically that created, up until, let’s say, the end of the nineties, the start of the 21st century, collections of about 400,000 to 500,000 hours of film and videotape. And, often taking into account that a lot has been lost, 200,000 to 300,000 hours of broadcast radio content, taking into account as well that typically the music programs are not preserved, because their content is not considered unique. So there you have an idea of the size of those collections. And then take into account that in the 21st century, when the MAM systems came up, television and radio archives were much better prepared and much better able to preserve everything that they were broadcasting. So then you’re really talking about an explosion of content. And these days, it’s absolutely no exception that you come into a broadcaster’s archive and you meet collections of more than a million hours of television content and six, seven, 800,000 hours of radio broadcast content. And then when it comes to the structure of these archives: once again, up until, I would say, the nineties, the early 2000s, many broadcasters, public broadcasters and also commercial ones, if they were making radio as well, had a distinction between television and radio, and they had separate archives. That also had historical backgrounds, and I could talk about that for ages, but I’m not going to do that. But in the 2000s, many of those radio and television archives merged within one organization. They became one, up to a certain extent of course, because there were some differences in the processes. And what do they do? I tend to keep things clear and say that their typical activities are situated in the broad domain of acquisition and preservation. Then they tend to invest a lot of their resources in documentation and cataloging, because those processes were typically the most labor-intensive and therefore also the most expensive. And then a third domain of activities is access and valorization, either internally, by delivering their content to their own production environments, or through footage sales, or by developing all kinds of platforms or websites through which the larger audience or specific target groups within society can access those archives. And there’s a difference, as I said earlier on, between the public and the commercial broadcasters.
Chris Lacinak: 53:01
Yeah.
Brecht Declercq: 53:02
So that gives you an idea. And then maybe what you said about the number of staff. Well, it strongly depends. It strongly depends. Here at RSI, I have a team of about 40. But the General Secretary of FIAT/IFTA, Virginia Bazán, she is now head of archives at the Spanish public broadcaster, RTVE. And if I’m not mistaken, her staff is between 350 and 400 people. So yeah.
Chris Lacinak: 53:32
I want to come back to staff and what your staff does, but I want to touch on something. As you were talking, I just had the thought, we were talking about differences and types of archives. I just want to say, in my experience, you’re talking about preservation and archiving as a role within the organizations you’ve been in, and those have been public broadcasters. I would say there is a big difference I’ve seen between broadcasters. I’m going to guess that the organizations you have worked for have had a mandate or a mission of some sort to preserve and archive. Other broadcasters we’ve worked with may or may not have a mandate, but they might have a very strong business case: they have content that they can monetize, and it’s very popular content. And so they have a business case to preserve and archive, even if they don’t have a mandate, which has implications, because the stuff that is less popular or less monetizable tends to get lesser treatment. A mandate would typically cover things that are both popular and non-popular. So there are implications to having a business case without a mandate. And then there are many organizations we have run across that don’t have a mandate and don’t have a really strong business case, whose collections have either been thrown in dumpsters or saved from dumpsters by a university or some other entity that sees the cultural value and grabs it, even if the organization that created it doesn’t. So I just want to point out that difference across different organizations.
Brecht Declercq: 55:12
Yeah, you’re absolutely right. You’re absolutely right in that. Yeah, definitely. It’s an observation that I’ve made as well. And there are some regional differences in the world as well there, I think.
Chris Lacinak: 55:24
Let’s come back to the staffing. For your organization that has 40, I think you said 40, four zero, right?
Brecht Declercq: 55:30
Yes.
Chris Lacinak: 55:31
Okay. Can you just describe some of the roles, responsibilities, and tasks? I guess I wonder, how does your operation bump up against the production side of the operation? And then on the other side, on the distribution, publishing, and access side, what’s the division of roles and responsibilities in what you all do? And I guess maybe one thing to focus on in particular would be description. How much metadata and description is there on the way in? How much do you all do? And then how much is done afterwards?
Brecht Declercq: 56:04
Yeah, that’s a very good question, because exactly the point that you’re talking about is currently, that’s my feeling, being revolutionized by AI, amongst other things. A typical situation, I would say, in a broadcaster’s archive currently is that there is a production platform, let’s limit ourselves to television only for now, because radio is somewhat parallel there, with several production systems and post-production systems circulating around what you would call a PAM system, a production asset management system. And then on the archives’ side, often connected to that PAM, you have a MAM, a media asset management system. Those two are often connected to each other, and that situation might differ from organization to organization, depending on how they look at things and where they situate their archive exactly: right in the middle of the production process or at the end of the production chain. That is still a point of debate for many broadcasters’ archives. So typically what you see is that broadcasters’ archives try to connect their systems in such a way that as much descriptive, administrative, and technical metadata as possible is inherited by the archival databases from all kinds of production systems. So they connect these systems to each other through APIs and other kinds of protocols, I would say. And then they try, that way, to limit the manual work that still has to be done by the documentalists. That is a typical situation. But as I said, it’s in full evolution, because what is jumping in is AI. What we have seen throughout history is that four big groups of metadata creation have grown up, I would say. The first is the old-school manual work by documentalists that has been around for 80 years, say. Then there is inheritance through production systems, what I just described: connecting PAM and MAM systems and inheriting as much metadata as possible. And then a third group, which is a bit off the radar these days, but nevertheless interesting, is what we used to call user-generated metadata: the metadata that users involved in documentation processes, via any kind of project, for example, could create and then deliver to the archive, but also metadata created in less conscious ways, which I tend to call consumer-generated metadata. The fact that you watch a clip for only 5 seconds and not for 10 seconds is what I would call an interesting piece of consumer-generated metadata for the archive. It all has to do with media companies being data-driven these days. And the fourth way of generating metadata is the broad world of AI, what I would call automatically generated metadata, in some way. Now, what I had been thinking 10 years ago is that those four groups would always be combined, covering for each other’s weaknesses and strengths, and would finally result in a fully filled-up archival database. What I’m seeing now is that the quality of the results of artificial intelligence algorithms is increasing so quickly, and the cost of, for example, connecting MAM systems and PAM systems and all kinds of systems that could provide metadata is so high, that that approach is quickly being overtaken by the evolution of AI algorithms. Also because all those systems within a broadcaster, within a media production environment, have what I call asynchronous life cycles.
Their technologies evolve in their own way, and many, many broadcasters call upon the services of external providers, or they tend to use a plethora of systems, and making them all communicate with each other has become impossible. And then all of a sudden AI is there as well and obtains results that are nearly as good and often cheaper.
Chris Lacinak: 61:26
Could you add some more clarity? I just want to talk a bit more about PAM. For listeners, I’ve heard PAM recently on the CPG, consumer product side, standing for product asset management. This is not that; this is production asset management, which is in the production and post part of the organization. And you mentioned MAM. I wonder, in your experience, where have MAM and DAM lived in an organization, and how does that interact? How does the archive interact with that?
Brecht Declercq: 62:02
That’s a good question as well, because when I first contributed to the development of a MAM system, that was in 2006, 2007, when I was working for the Flemish public broadcaster VRT, the reasoning was that a MAM system would be, I would say, the spinal cord of media production and the archive’s main database at the same time. For the theoretical background to that, I wish to refer to one author in particular, Annemieke de Jong from the Netherlands Institute for Sound and Vision, who did a lot of work around this. And she said, what we see is that the archive evolves from being at the end of the production chain into the center of the production chain. And she was right; her theoretical thesis was absolutely correct. But still, that didn’t really happen. I don’t know why; it’s hard to say why it didn’t happen completely as she predicted. But I do think that many broadcasters have been bringing the expertise of audiovisual archivists into the center of their production environment, because they acutely became aware of the importance of, yeah, I can’t describe it with other words than managing their assets. And whether you do it with the aim of, I would say, storing them for the long term or storing them to be reused the day after, I would almost say, what’s the difference?
Chris Lacinak: 63:52
The practices are the same.
Brecht Declercq: 63:54
Yeah. Yeah, you could add, for the archivist, you could then come up with the whole story of digital preservation and long-term preservation, tens of years, et cetera. That’s a world in itself, I would say. But often, and this is also what makes broadcasters’ archives a bit particular, those kinds of subjects, those kinds of challenges, are tackled by the IT departments. Strangely enough, because radio and television archivists have also always been logistics guys and gals, but the whole digital logistics part is now covered by IT engineers who no longer work for the archives department.
Chris Lacinak: 64:44
And what I’ve seen in broadcast operations too, I mean, you have, of course, scheduling systems, which are their own kind of asset management components. My view is that the landscapes within broadcast operations with regard to digital asset management are typically more complex than in, say, a corporate archive or a corporate entity, where there are some very specific spots where you tend to see DAM, MAM, PAM, those sorts of things. I want to shift a bit towards talking about broadcasters moving more towards on-demand and streaming as the primary driver, I’ll say. What are the implications of that for the archives within these organizations? Are there implications there?
Brecht Declercq: 65:39
Yeah, definitely. I think this is also an evolution that many archivists have been looking forward to, because it stresses the importance of the archive. On an annual basis, I contribute to the call for papers of the FIAT/IFTA World Conference. And this year, and it’s not the first time, I really pushed to have one theme in this call for papers that is: OTT platforms, over-the-top platforms or streaming platforms, versus archival catalogs. What’s the difference? That to me is an intriguing question. We are evolving ever more, with broadcasting, with television, towards a world, and it might even be more the case in the US than it already is over here in Europe, where linear broadcasting is becoming a marginal thing. And I even foresee, within a few years, the closing down of television stations. The director-general of the BBC has announced that there won’t be linear broadcasting by the BBC anymore by 2030. I think that’s realistic. And then the question becomes: what those broadcasters, if you can still call them that, those media companies, are offering is content, right? It’s content on any kind of platform. And what the archive has been offering is content as well. It might not be content that was recently produced, it might be content that was produced a bit earlier, but the border between the two is getting ever more irrelevant. And I remember illustrating that evolution to people who inquired with me about it by asking: for you, when does the archive begin? If you have to count back from now, from one second ago, you’re listening to the radio, watching television, when does the archive begin? And most people then say, hmm, maybe one year ago or 10 years ago. Then my answer is, how can you reasonably sustain such an answer? It doesn’t make sense. For me, the archive begins tomorrow, because in our archive, as we speak, there is, for example, an interview with the Pope that was in our archive a month before it was spread worldwide. So we already have stuff in our archive that has not even been broadcast yet. So it’s coming ever more together. The lines are really blurring there.
Chris Lacinak: 68:45
So linear broadcasting then gets replaced by platforms for watching and listening to content, and the linear component, the curation, I guess, gets replaced by recommendation engines and things like that, which look at the behavior of the consumer and try to feed them content they think they’ll be interested in. Is that what you think the future looks like for broadcast?
Brecht Declercq: 69:14
Yeah, I don’t think I’m saying revolutionary things if I agree with you. Yeah, that’s how I look at things. And then the question for the archivists, but also for the person responsible for filling those platforms, could be: what kinds of things from our huge catalog of recently produced or long-ago produced material are we going to publish today? I want to illustrate this with, in my opinion, a very interesting evolution. In France, the archive of the public broadcaster, and of so many other broadcasters, is managed by the Institut National de l’Audiovisuel, the French National Audiovisual Institute, which is one of the biggest audiovisual archives in the world. And since last year, they have decided to call themselves a media heritage company. They have their own streaming platform. They are, I would say, as much a streaming platform as Netflix is. That says it all to me. It says it all. They’ve just evolved into something Netflix-like or something Disney-like.
Chris Lacinak: 70:39
What are the ethical considerations here? I mean, do you just open up the archive entirely? How do rights play into that? What about content that the station may want to put some sort of moderation or context around, content that’s historic and maybe problematic in some ways? What do you think that looks like?
Brecht Declercq: 71:02
That’s also a very intriguing and very interesting question. I’m really aware of the sensitivity of this subject, because our broadcasting history, our media history, for many people touches upon almost what I would call their identity. And that, as an aside, once again proves how influential television has been throughout its history. If people find the favorite programs from their favorite channels that were broadcast so many years ago and that colored their youth, if they find that so important, well, that shows how impactful television in particular, but radio also, have been. But this might be a bit of a European standpoint. I think in Europe, although it took us some time to learn to deal with this, I think we recognize, and I’m really careful choosing my words here, I think we recognize that broadcasters’ archives undeniably reflect their own history and the history of human conceptions and human ideas throughout history. And if we want to look history in the eyes, we also have to look into the eyes of the more painful parts of our history. And let’s make no mistake: for example, the use of language evolves with humanity. And I always say, who knows which of the words that we pronounce now, without asking ourselves any questions, will be considered very problematic 50 years from now. We don’t know that yet, and the people who pronounced those words 50 years ago were in some cases being disrespectful too. There are some words that were already insulting a hundred years ago and were still used 50 years ago, but they have been used. And as a historian, it’s my opinion that you cannot falsify history. What you can do as an archivist is point to those problematic episodes of your own history and say: look, what we are showing you here is not intended as a source of entertainment, not necessarily. Please consider it as a historical document as well, one that was made in an era with certain values, with the editorial values and editorial guidelines that applied in the era of production. And today we adhere to different norms. And if you think that this would be insulting to you, we’re warning you already that this might occur, but we’re not going to hide it, because it is our own history, and it’s a difficult part of our history today, but it’s there. And you could then argue: do you have to publish it in such a public way? Shouldn’t you just keep it on a sidetrack that is only accessible to historians or so? That’s a different discourse as well.
Chris Lacinak: 74:50
When you say, and I just want to clarify here, when you say you can’t falsify history, I take that to mean you can’t hide the ugly parts away and just show one part; that would be falsifying history. Is that the right interpretation of what you just said?
Brecht Declercq: 75:08
Yeah, correct. Correct. And I realize how problematic this might be, but it’s the historian speaking here. It’s a debate that is not yet finished. I see on OTT and streaming platforms all over the globe that broadcasters and media companies tend to consider this question in different ways, and it also has to do with how they interpret their own role. I find it perfectly legitimate that a company like Disney says: look, our streaming platform is not intended as a historical source; it’s intended as a form of entertainment. For the historians who would want to watch the original material, because for their historical profession it’s important that they can access authentic sources, we have other ways to show it to them. What I mean is, it depends on your mission.
Chris Lacinak: 76:18
Yeah, that’s a very interesting distinction you’ve drawn there. Because it would be easy to look at broadcast as all falling under the entertainment umbrella; I think that’s probably how most people would think of it. So it’s interesting to put a point on it and say that in some cases the organization’s mission includes a historical documentation component, a perspective or lens, and then there’s an entertainment perspective or lens, and those are two different animals that may get treated in two different ways. Yeah. Well, let’s wrap up here. You’ve been very generous with your time, and before we started, you said you’ve got more work to do today and it’s already late where you are, so I don’t want to keep you too much longer. But maybe you could tell the listeners when the next FIAT/IFTA conference is and where it is?
Brecht Declercq: 77:15
The next FIAT/IFTA World Conference takes place from the 15th to the 18th of October in Bucharest, Romania, hosted by the public broadcaster of Romania, TVR.
Chris Lacinak: 77:24
That sounds like an interesting and fun destination to go to as well as a great conference.
Brecht Declercq: 77:30
Yeah, definitely.
Chris Lacinak: 77:30
And I’ll share a link in the show notes to the conference, or to the FIAT/IFTA site, so folks can find that if they’re interested in learning more. I’m going to wrap this with a question that I ask all of our DAM Right guests, which is: what is the last song that you added to your favorites playlist? Feel free to look at your phone.
Brecht Declercq: 77:57
And now this can be a very shameful moment.
Chris Lacinak: 78:00
It lets us…
Brecht Declercq: 78:02
Okay, no, it’s not so shameful. It’s not so shameful. It’s “The Way It Is” by Bruce Hornsby and The Range.
Chris Lacinak: 78:08
All right, a classic, classic song.
Brecht Declercq: 78:10
And also you could say it’s an archival… It has been archivally reused. Several times.
Chris Lacinak: 78:20
What was the circumstance? Did it come up on shuffle or something? You’re like, “Oh, I have to add this to my liked list.” Or did you seek it out because you remembered it? How did it come to end up on your favorites playlist?
Brecht Declercq: 78:32
Yeah, it’s got a great melody, in my opinion. But it’s, you know, that piano. I’m always intrigued by how musicians come up with those kinds of genius melodies, you know? And no, it was just pure coincidence. I was driving in the car and said, “Oh, I want to hear that song.” And then I said, “Let’s add it to my favorites list.”
Chris Lacinak: 78:54
Yeah, that is a great song. Great. Well, Brecht, I really appreciate your time and all the super interesting and valuable insights you’ve shared today. Thank you very much, and thanks for your service to FIAT/IFTA as its President. I think the listeners are going to really love this episode and get a lot out of it. So thank you.
Brecht Declercq: 79:18
It’s been really a pleasure to talk with you, Chris.
Chris Lacinak: 79:21
Do you have feedback or requests for the DAM Right Podcast? Hit me up and let me know at [email protected]. Looking for amazing and free resources to help you on your DAM journey? Let the best DAM consultants in the business help you out. Visit weareavp.com/free-resources. Stay up to date with the latest and greatest from me and the DAM Right Podcast by following me on LinkedIn at linkedin.com/in/clacinak.