Splunk Search

How to manage memory consumption while returning data from a search using the Java SDK?

clomeli
Engager

This may be a silly question, but how does one manage memory while returning data from a search? The results are being placed in a Map, and I think there must be some way of streaming the data or throttling the output to keep memory under control.

1 Solution

clomeli
Engager

I think I understand how this works. I can count the records and then fetch them in smaller increments:

    int resultCount = job.getResultCount(); // Number of results this job returned
    int x = 0;          // Result counter
    int offset = 0;     // Start at result 0
    int count = 10;     // Get sets of 10 results at a time                    

    // Loop through each set of results
    while (offset < resultCount) {
        CollectionArgs outputArgs = new CollectionArgs();
        outputArgs.setCount(count);
        outputArgs.setOffset(offset);

        // Get the search results and display them
        InputStream results = job.getResults(outputArgs);
        ResultsReaderXml resultsReader = new ResultsReaderXml(results);
        HashMap<String, String> event;

        while ((event = resultsReader.getNextEvent()) != null) {
            x++;
            System.out.println("\n***** RESULT " + x + " *****\n");
            for (String key: event.keySet())
                System.out.println("   " + key + ":  " + event.get(key));
        }
        resultsReader.close();

        // Increase the offset to get the next set of results
        offset = offset + count;
    }       
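
Paging with offset and count keeps only one batch of results in memory at a time, which is usually enough. If you want to avoid holding a finished search job on the server at all, the SDK also supports export searches, which stream results back as they are produced. The sketch below is only illustrative, assuming a 1.x SDK where Service.export() and MultiResultsReaderXml are available; the class name, connection settings, and the query are placeholders.

    import java.io.InputStream;

    import com.splunk.Args;
    import com.splunk.Event;
    import com.splunk.MultiResultsReaderXml;
    import com.splunk.SearchResults;
    import com.splunk.Service;
    import com.splunk.ServiceArgs;

    public class ExportSearchSketch {
        public static void main(String[] argv) throws Exception {
            // Placeholder connection settings -- substitute your own host and credentials
            ServiceArgs loginArgs = new ServiceArgs();
            loginArgs.setHost("localhost");
            loginArgs.setPort(8089);
            loginArgs.setUsername("admin");
            loginArgs.setPassword("changeme");
            Service service = Service.connect(loginArgs);

            // An export search streams results back as they are produced,
            // so the full result set is never buffered in a search job
            String query = "search index=_internal | head 1000";   // placeholder query
            Args exportArgs = new Args();
            exportArgs.put("earliest_time", "-1h");
            exportArgs.put("latest_time", "now");
            InputStream exportStream = service.export(query, exportArgs);

            // Export output can contain several <results> sets (previews plus
            // the final set), so use the multi-reader and skip the previews
            MultiResultsReaderXml reader = new MultiResultsReaderXml(exportStream);
            int x = 0;
            for (SearchResults searchResults : reader) {
                if (searchResults.isPreview()) {
                    continue;
                }
                for (Event event : searchResults) {
                    x++;
                    System.out.println("\n***** RESULT " + x + " *****\n");
                    for (String key : event.keySet())
                        System.out.println("   " + key + ":  " + event.get(key));
                }
            }
            reader.close();
        }
    }

The trade-off is that an export search does not leave a job on the server that you can page back into later; results are delivered once, in order, and you process them as they arrive.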

