AI-Powered Image Recognition in FileMaker (Part 2)

By Kate Waldhauser · Oct 16, 2025 · 9 min read
Tags: FileMaker 2025, image recognition, semantic search, AISemanticFind
AI in FileMaker 2025 — Part 4 of 4
TL;DR: Building on Part 1's foundation, this guide covers batch embedding generation, building a semantic search interface, implementing AISemanticFind, and displaying results with confidence scores. You will have a working system where users type a description and FileMaker finds matching images by meaning.

In Part 1, we covered how to set up and deploy an AI model for image recognition in FileMaker 2025, including configuring the AI account, understanding multi-modal embedding models, and generating your first image embeddings.

Now it’s time to put those embeddings to work. In Part 2, we’ll build a real semantic image search workflow — the kind where a user types a description and FileMaker finds matching images based on meaning, not keywords or filenames.

What We’re Building

By the end of this post, you’ll have a working system where:

  1. Images stored in container fields have vector embeddings generated automatically
  2. Users can search for images using natural language (e.g., “sunset over water” or “team meeting in conference room”)
  3. FileMaker returns visually relevant results ranked by similarity

This is semantic image search — and it’s one of the most compelling AI features in FileMaker 2025.

Prerequisites

Before starting, make sure you have:

  • FileMaker 2025 (Server and Pro)
  • An AI model account configured (from Part 1)
  • A multi-modal embedding model set up (e.g., CLIP-based model)
  • A table with container fields holding images
  • Embedding fields to store the generated vectors

Step 1: Generate Embeddings for All Images

If you followed Part 1, you already know how to generate an embedding for a single image. Now we need to batch-process your entire image library.

Create a script that loops through all records and generates embeddings:

Show All Records
Go to Record/Request/Page [ First ]
Loop
    If [ IsEmpty( Images::embedding_vector ) ]
        # Generate embedding from container
        Set Variable [ $embedding ; Value:
            AIModelEmbedding( "your-model-name" ; Images::photo_container )
        ]
        # Only store and commit if embedding generation succeeded
        If [ not IsEmpty( $embedding ) ]
            Set Field [ Images::embedding_vector ; $embedding ]
            Commit Records/Requests [ With dialog: Off ]
        End If
    End If
    Go to Record/Request/Page [ Next ; Exit after last ]
End Loop

Tips for batch processing:

  • Run this on the server for large image libraries
  • Process during off-hours to avoid performance impact
  • Add error handling for images that fail to embed (corrupt files, unsupported formats)
  • Track progress with a counter so you know where you are
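To make the error handling and progress tracking from the tips above concrete, here is the same batch pattern sketched in Python. This is a conceptual model of the workflow, not FileMaker code; `generate_embedding` is a hypothetical stand-in for whatever call produces your vectors:

```python
def batch_embed(records, generate_embedding):
    # Walk every record, skip ones already embedded, and collect failures
    # (corrupt files, unsupported formats) instead of aborting the run.
    processed, failed = 0, []
    for i, record in enumerate(records, start=1):
        if record.get("embedding") is not None:
            continue  # already embedded; skip
        try:
            record["embedding"] = generate_embedding(record["image"])
            processed += 1
        except ValueError:
            failed.append(record["id"])  # note the failure and keep going
        if i % 100 == 0:
            print(f"progress: {i}/{len(records)}")
    return processed, failed
```

The important design choice is that a single bad image never stops the run: failures are collected for review, and the "skip if already embedded" check makes the script safe to re-run after fixing them.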

Step 2: Build the Search Interface

Create a layout with:

  • A text field for the search query (e.g., “red car” or “person holding a document”)
  • A search button that triggers the semantic search script
  • A portal or list view to display results

The key insight: when the user enters a text query, you generate a text embedding using the same multi-modal model, then compare it against the image embeddings stored in your database.
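Under the hood, "comparing" two embeddings typically means computing cosine similarity between their vectors. A minimal Python sketch of the math (the vectors here are toy 4-dimensional values for illustration; real models produce hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their
    # magnitudes. Returns a value in [-1, 1]; closer to 1 means the
    # text and image are more semantically similar.
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    return dot / (mag_a * mag_b)

# Toy embeddings: one from the text query, one stored for an image
text_embedding = [0.2, 0.8, 0.1, 0.4]
image_embedding = [0.25, 0.75, 0.05, 0.5]

print(round(cosine_similarity(text_embedding, image_embedding), 3))
```

Because a multi-modal model maps text and images into the same vector space, this one comparison works across both kinds of content.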

Step 3: Run the Semantic Search Script

The search script:

  1. Takes the user’s text query
  2. Generates a text embedding from the query
  3. Performs a semantic find against the stored image embeddings
  4. Returns results ranked by similarity

# Get search query
Set Variable [ $query ; Value: Images::search_query ]

# Perform semantic find
Perform Find [
    Restore: AISemanticFind( "your-model-name" ; $query ; Images::embedding_vector )
]

FileMaker 2025’s AISemanticFind handles the similarity comparison and returns results in relevance order.

Step 4: Display Results with Confidence Scores

Each result from a semantic find includes a similarity score. Display this alongside the image to give users a sense of how confident the match is:

  • 0.9+ — Very strong match
  • 0.7–0.9 — Good match, likely relevant
  • 0.5–0.7 — Partial match, may or may not be relevant
  • Below 0.5 — Weak match, likely not what the user wants

Consider setting a threshold (e.g., only show results above 0.6) to keep results useful.

Practical Use Cases

Digital Asset Management

Photographers, designers, and marketing teams can search their entire image library using descriptions instead of relying on manual tags. “Find me all photos with people outdoors” becomes a single search.

Inventory and Product Catalogs

Retail and manufacturing teams can search product images by description — “blue widget with serial number label” — without needing every product meticulously tagged.

Insurance and Claims

Claims adjusters can search photo evidence using descriptions of damage types, locations, or conditions.

Medical and Scientific Records

Research teams can search microscopy images, field photos, or diagnostic images using natural language descriptions.

Responsible Use Considerations

Accuracy Isn’t Perfect

Semantic image search is powerful but imperfect. Models can misinterpret visual content, especially with:

  • Abstract or ambiguous images
  • Cultural context that differs from the model’s training data
  • Low-quality or heavily cropped photos

Recommendation: Always present results as suggestions, not definitive answers. Let humans make the final call.

Privacy and Sensitivity

If your images contain people, sensitive locations, or confidential information:

  • Ensure embedding generation doesn’t send images to external services you haven’t vetted
  • Review your AI model provider’s data handling policies
  • Consider whether facial recognition implications apply to your use case

Bias in Visual Models

Multi-modal models can inherit biases from their training data. They may perform better on certain types of images, demographics, or cultural contexts than others. Test with diverse data and monitor for inconsistencies.

Performance Optimization

  • Embedding size matters — Larger embeddings capture more detail but take more storage and processing time
  • Index your embedding fields — Ensures fast similarity searches
  • Consider caching — If the same searches are run frequently, cache results
  • Server-side processing — Always generate embeddings on the server for batch operations
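The caching idea can be as simple as keying stored results by a normalized query string, so "Red  Car" and "red car" hit the same entry. A minimal in-memory sketch in Python (in FileMaker you might use a cache table keyed the same way):

```python
class SearchCache:
    # Tiny in-memory cache keyed by normalized query text.
    def __init__(self, search_fn):
        self._search_fn = search_fn  # the expensive semantic search call
        self._cache = {}
        self.hits = 0

    def search(self, query):
        # Normalize case and whitespace so equivalent queries share a key.
        key = " ".join(query.lower().split())
        if key in self._cache:
            self.hits += 1
        else:
            self._cache[key] = self._search_fn(query)
        return self._cache[key]
```

For a cache table in FileMaker, remember to invalidate entries whenever new images are embedded, or stale results will linger.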

What’s Next

With text extraction (GetTextFromPDF()) and image search in place, the next frontier is combining them — imagine searching across both documents and images simultaneously, using a single natural language query.

FileMaker 2025 is building toward truly intelligent data interaction. The key is implementing it responsibly, with human oversight at every step.


Need help implementing AI image search in your FileMaker solution? Schedule a free call to discuss your use case.

How AI Was Used in This Post

AI assisted with drafting, technical research, and code example formatting. All content was reviewed against FileMaker 2025 documentation and tested implementations.

Frequently Asked Questions

What is a good similarity score for semantic image search results?

A score of 0.9+ indicates a very strong match. Scores between 0.7 and 0.9 are good matches. Between 0.5 and 0.7 is a partial match. Below 0.5 is typically not relevant. Consider setting a threshold (like 0.6) to keep results useful for your users.

Can I search for images using text descriptions in FileMaker?

Yes. FileMaker 2025's AISemanticFind function lets users enter natural language queries like 'red sports car' or 'damaged roofing.' The system generates a text embedding from the query and compares it against stored image embeddings to find visually relevant results.

How do I batch-process existing images for AI search?

Create a looping script that checks each record for an existing embedding, generates one if missing, and commits the record. Run this on FileMaker Server during off-hours for large image libraries. Add error handling for corrupt or unsupported files.

Are there bias concerns with AI image search?

Yes. Multi-modal models can inherit biases from their training data and may perform better on certain demographics, cultural contexts, or image types than others. Test with diverse data, monitor for inconsistencies, and always present search results as suggestions rather than definitive answers.

Kate Waldhauser
Founder of Violet Beacon. Responsible AI consultant, ISO 42001 Lead Implementer, and Certified Claris Partner with 20+ years of FileMaker expertise.
