Splunk Search

Update Macro From Search

Thulasinathan_M
Contributor

Hi,

I have a case where I want to update/append a macro with the results from a lookup. I don't want to do this manually each time, so is there any way I could use a scheduled search to update the macro whenever the lookup has new values?

1 Solution

ITWhisperer
SplunkTrust

Convert your lookup so it has a pattern and a name for the pattern, e.g.

logline | pattern
Deprecated configuration detected in path Please update your settings to use the latest configuration options. | *Deprecated configuration detected in path* Please update your settings to use the latest configuration options.*
Query execution time exceeded the threshold: seconds. Query: SELECT * FROM users WHERE last_login | *Query execution time exceeded the threshold:*seconds. Query: SELECT * FROM users WHERE last_login*
Query execution time exceeded the threshold: seconds. Query: SELECT * FROM contacts WHERE contact_id | *Query execution time exceeded the threshold:*seconds. Query: SELECT * FROM contacts WHERE contact_id*

Then add a lookup definition and use the advanced options to set the match type to WILDCARD(pattern)

Now you can use the lookup on your events to find out which types of logline you have

 

| lookup patterns.csv pattern as _raw
| stats count by logline

 

View solution in original post

ITWhisperer
SplunkTrust

If there were a way to update a macro, it would likely have a REST endpoint, but there doesn't appear to be one. Having said that, even if there were, this sounds like a risky thing to be doing anyway. A better way would be to update a lookup or KV store with the results from your search so that the macro can use those, i.e. keep the processing (defined by the macro) separate from the data (found by the search). What you seem to be asking for smacks of self-modifying code, which, while it may sound like a cool thing to do, is generally not a safe practice.
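A minimal sketch of that separation, assuming a hypothetical lookup file called patterns.csv with a single pattern field; here makeresults/eval stand in for whatever scheduled search actually produces the newly seen values:

| makeresults
| eval pattern="some newly observed pattern"
| fields pattern
| inputlookup append=true patterns.csv
| dedup pattern
| outputlookup patterns.csv

The scheduled search merges new values into the lookup, while the macro itself just reads from the lookup (for example via inputlookup) rather than having the values baked into its definition.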


Thulasinathan_M
Contributor

Thanks @ITWhisperer for your valuable info. My lookup is full of rex patterns (1000s of them), but I don't want to dump all of those into a macro. That's why I thought of updating the macro only when I start seeing new patterns in the result events. If you could help me with this specific use case, it would be very helpful. Thanks in advance.


ITWhisperer
SplunkTrust

Does your lookup have 1000s of patterns, or does your macro, or both?

Where do these patterns come from?

Please explain your use case in a bit more detail, with examples.


Thulasinathan_M
Contributor

I have 1000s of rex patterns already available in a lookup file, but I don't want to put everything into the macro. So I thought of updating the macro only when I start seeing events that match a rex pattern that is in the lookup but not yet in the macro. By doing this I keep a minimal set of rex patterns in the macro (for now I have 232 rex patterns in the macro).


Thulasinathan_M
Contributor

Let's say the below are a few of the rex patterns available in my lookup:

| rex field=LogLine mode=sed "s|(Deprecated configuration detected in path).*( Please update your settings to use the latest configuration options.)|\1 \2|g"
| rex field=LogLine mode=sed "s|(Query execution time exceeded the threshold:).*(seconds. Query: SELECT \* FROM users WHERE last_login).*|\1 \2|g"
| rex field=LogLine mode=sed "s|(Query execution time exceeded the threshold:).*(seconds. Query: SELECT \* FROM contacts WHERE contact_id).*|\1 \2|g"


Below are the search results on which I want to use the above rex patterns:

WARN  ConfigurationLoader - Deprecated configuration detected in path /xx/yy/zz. Please update your settings to use the latest configuration options.
WARN  ConfigurationLoader - Deprecated configuration detected in path /aa/dd/jkl. Please update your settings to use the latest configuration options.
WARN  QueryExecutor - Query execution time exceeded the threshold: 12.3 seconds. Query: SELECT * FROM users WHERE last_login > '2024-01-01'.
WARN  QueryExecutor - Query execution time exceeded the threshold: 21.9 seconds. Query: SELECT * FROM contacts WHERE contact_id > '252'.

 
So if I do stats, I'll get something like the below (a sketch of the full pipeline follows the table):

LogLine | Count
Deprecated configuration detected in path . Please update your settings to use the latest configuration options. | 2
Query execution time exceeded the threshold: seconds. Query: SELECT * FROM users WHERE last_login | 1
Query execution time exceeded the threshold: seconds. Query: SELECT * FROM contacts WHERE contact_id | 1
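For context, a minimal sketch of the full pipeline implied here, with an assumed base search, LogLine assumed to be an already-extracted field, and only the first rex shown (the remaining patterns chain on in the same way):

index=your_index sourcetype=your_sourcetype
| rex field=LogLine mode=sed "s|(Deprecated configuration detected in path).*( Please update your settings to use the latest configuration options.)|\1 \2|g"
| stats count by LogLine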

ITWhisperer
SplunkTrust

Convert your lookup so it has a pattern and a name for the pattern, e.g.

logline | pattern
Deprecated configuration detected in path Please update your settings to use the latest configuration options. | *Deprecated configuration detected in path* Please update your settings to use the latest configuration options.*
Query execution time exceeded the threshold: seconds. Query: SELECT * FROM users WHERE last_login | *Query execution time exceeded the threshold:*seconds. Query: SELECT * FROM users WHERE last_login*
Query execution time exceeded the threshold: seconds. Query: SELECT * FROM contacts WHERE contact_id | *Query execution time exceeded the threshold:*seconds. Query: SELECT * FROM contacts WHERE contact_id*

Then add a lookup definition and use the advanced options to set the match type to WILDCARD(pattern)
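For reference, this is roughly what such a lookup definition might look like in transforms.conf (the stanza name here is an assumption; the same settings can be made through the lookup definition's advanced options in the UI):

[patterns]
filename = patterns.csv
match_type = WILDCARD(pattern)

If the definition name differs from the CSV file name, reference the definition name in the lookup command so that the WILDCARD match type is applied.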

Now you can use the lookup on your events to find out which types of logline you have

 

| lookup patterns.csv pattern as _raw
| stats count by logline
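Since the original goal was to spot patterns that are not yet covered, one possible extension (a sketch of my own, not part of the accepted answer, reusing the same lookup and field names with an assumed base search) is to keep only the events that match none of the known patterns:

index=your_index sourcetype=your_sourcetype
| lookup patterns.csv pattern as _raw OUTPUT logline
| where isnull(logline)
| stats count by _raw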

 

Thulasinathan_M
Contributor

Thank you, @ITWhisperer. It's working as expected 😊


ITWhisperer
SplunkTrust

So let me see if I have understood:

You have 1000s of patterns in a lookup which you use against a set of events, and if any of the events match a pattern in the lookup, you copy that pattern into a macro? And this is the process you want to automate?


Thulasinathan_M
Contributor

Absolutely correct, that's my intention. I'm a bit worried that I would hit a performance impact if I keep updating the macro and it exceeds a limit at some point. Is there any better approach I can take for this use case? Happy to adopt any better approach.
