All Posts

It does work. See this example row/panel, which creates some pseudo data and simulates what you're trying to do:

<row>
  <panel>
    <input type="dropdown" token="func_option">
      <label>Func</label>
      <choice value="Func1">Func1</choice>
      <choice value="Func2">Func2</choice>
      <choice value="Func3">Func3</choice>
    </input>
    <chart>
      <search>
        <query>| makeresults count=1000
| eval _time=_time-random() % 3600
| eval Func=mvindex(split("Func1,Func2,Func3",","), random() % 3)
| eval duration=random() % 1000000 / 1000
| timechart fixedrange=t count avg(duration) by Func
| fields _time avg*$func_option$ count*</query>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisX.abbreviation">none</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.abbreviation">none</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.abbreviation">none</option>
      <option name="charting.axisY2.enabled">1</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.chart">column</option>
      <option name="charting.chart.bubbleMaximumSize">50</option>
      <option name="charting.chart.bubbleMinimumSize">10</option>
      <option name="charting.chart.bubbleSizeBy">area</option>
      <option name="charting.chart.nullValueMode">gaps</option>
      <option name="charting.chart.overlayFields">"avg(duration): $func_option$"</option>
      <option name="charting.chart.showDataLabels">none</option>
      <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
      <option name="charting.chart.stackMode">default</option>
      <option name="charting.chart.style">shiny</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.mode">standard</option>
      <option name="charting.legend.placement">right</option>
      <option name="charting.lineWidth">2</option>
      <option name="refresh.display">progressbar</option>
      <option name="trellis.enabled">0</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">medium</option>
    </chart>
  </panel>
</row>

I can choose Func from the dropdown and it will make that particular Func the chart overlay via the token.
I apologize if the following question might be a bit basic, but I'm confused by the results. When I append the following clause to the "search" line, it returns a shortened list of results (from 47 to 3):

AND ("a" in ("a"))

Original code:

index=main_service ABC_DATASET Arguments.email="my_email@company_X.com"
| rename device_model as hardware, device_build as builds, device_train as trains, ABC_DATASET.Check_For_Feature_Availability as Check_Feature_Availability
| search (Check_Feature_Availability=false) AND ("a" in ("a"))
| table builds, trains, Check_Feature_Availability

I was expecting to see the same number of results. Am I wrong about my expectations, or am I missing something here? TIA
Use this:

index=testing
| timechart max("event.Properties.duration") as maxDuration
| eval maxDuration=round(maxDuration/1000, 3)
It does sound, from your data description of package, current_version, current_date, previous_version, previous_date, that a lookup may be a practical way to maintain the current state of events. Are you planning to run this search once a month, or daily? Either way, one approach might be to have a search that looks for data from the last 30 days. At that point it will have version and date info for package X, and it can then look that package up in the lookup data to get the 'current/previous' info for the package. It's not difficult to manage that lookup and keep it updated, but I'm not clear on what you want from this list. As you don't know how far back a 'previous' patch was installed, you have no way of knowing how far back to search the data, so the lookup will give you all of that immediately. I am guessing you may also want to be looking at host+package, not just package, so depending on how many hosts/packages you have, the lookup could be reasonably big, but much depends on how often you want to read/use this data.
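For illustration, a minimal sketch of that lookup-maintenance pattern, assuming hypothetical names (an index called your_index, fields host, package, version, and a lookup file package_state.csv) — this is not your actual search, just the shape of it:

index=your_index earliest=-30d
| stats latest(version) as current_version latest(_time) as current_date by host package
| lookup package_state.csv host package OUTPUT current_version as previous_version current_date as previous_date
| table host package current_version current_date previous_version previous_date
| outputlookup package_state.csv

The stats keeps the newest version seen per host+package in the last 30 days, the lookup pulls back whatever you stored last time as the 'previous' values, and outputlookup writes the merged state back so the next run has it. Two caveats: the first run will complain that the lookup doesn't exist yet (seed it with an initial outputlookup), and as written it only keeps packages seen in the 30-day window, so you may want to append the old lookup rows before writing it back.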
Thank you @PickleRick . Here is the detailed background of my requirement. I need to refer to the values from the lookup, compare them with the values in events for the same field, and derive the other field: https://community.splunk.com/t5/Splunk-Search/Help-with-splunk-search-query/m-p/685039#M233782
Could someone help me derive a solution for the case below?

Background: We have an app in which we set all our saved searches as durable ones because we don't want to miss any runs. So if a scheduled search fails at its scheduled time due to any issue (infra related or resource related), it will be covered in the next run. I am trying to capture the last status even after the durable logic has been applied.

Let's say I have 4 events. The first two runs of alert ABC (scheduled_time=12345 and scheduled_time=12346) failed. In the third schedule, at 12347, those two are covered, 12347 itself also runs, and all are success:

EVENT 1: savedsearch_name = ABC ; status = skipped ; scheduled_time = 12345
EVENT 2: savedsearch_name = ABC ; status = skipped ; scheduled_time = 12346
EVENT 3: savedsearch_name = ABC ; status = success ; durable_cursor = 12345 ; scheduled_time = 12347
EVENT 4: savedsearch_name = ABC ; status = success ; scheduled_time = 12347

So if I start with a query like this:

.. | stats last(status) by savedsearch_name scheduled_time

I get output like this:

savedsearch_name  last(status)  scheduled_time
ABC               skipped       12345
ABC               skipped       12346
ABC               success       12347

I need to write logic that takes:

A. Jobs whose last status is not success - so here ABC 12345 and ABC 12346.
B. Events where durable_cursor != scheduled_time, i.e. events for that job where multiple runs were covered for the missed duration. In this case it will pick my EVENT 3.
C. Then I have to derive it like this: take the failed saved search name with the scheduled time at which it failed, and check whether that scheduled_time falls between the durable_cursor and scheduled_time of a later run with status=success. Something like:

.. TAKE FAILED SAVEDSEARCH NAME TIME as FAILEDTIME
| where durable_cursor!=scheduled_time
| eval Flag=if(FAILEDTIME>=durable_cursor AND FAILEDTIME<=scheduled_time, "COVERED", "NOT COVERED")

How I derived it so far and where I am stuck: I split this into two reports.

The first report takes all the jobs whose last status is not success and tables the output with fields SAVEDSEARCH_NAME, SCHEDULED_TIME as FAILEDTIME, LAST(STATUS) as FAILEDSTATUS. I save this result in a lookup. This has to run over the last one-hour window.

The second report refers to the lookup, takes the failed saved search names from it, searches only those events in the Splunk internal indexes where durable_cursor!=scheduled_time, and then checks whether that failed scheduled time falls between durable_cursor and the next scheduled_time with status=success.

This works fine if I have one saved search job for one scheduled time, but not for multiple values. Let's say job A itself has four runs in an hour and all except the first are failures. In that case I cannot cover it, because the values referred from the lookup come back as a multivalue field and don't match exactly. Here is the question I posted about that: https://community.splunk.com/t5/Splunk-Search/How-to-retrieve-value-from-lookup-for-multivalue-field/m-p/684637#M233699

If somebody has any alternate or better thoughts on this, can you please throw some light on it?
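If it helps, here is a rough single-search sketch of the "covered" check that avoids the lookup entirely — the base search is a placeholder and I'm assuming durable_cursor, scheduled_time, status and savedsearch_name are all available on the scheduler events, so treat it as a starting point rather than a finished answer:

index=_internal sourcetype=scheduler savedsearch_name=*
| stats last(status) as last_status last(durable_cursor) as durable_cursor by savedsearch_name scheduled_time
| eval window=if(last_status="success" AND durable_cursor!=scheduled_time, durable_cursor.":".scheduled_time, null())
| eventstats values(window) as windows by savedsearch_name
| where last_status!="success"
| mvexpand windows
| eval win_start=tonumber(mvindex(split(windows,":"),0)), win_end=tonumber(mvindex(split(windows,":"),1))
| eval covered=if(scheduled_time>=win_start AND scheduled_time<=win_end, 1, 0)
| stats max(covered) as covered by savedsearch_name scheduled_time
| eval Flag=if(covered=1, "COVERED", "NOT COVERED")

The eventstats collects every durable catch-up window (durable_cursor to scheduled_time of a successful run) per saved search as a multivalue field, mvexpand fans the failed runs out against those windows, and the final stats collapses them back to one COVERED/NOT COVERED flag per failed run. Failed runs with no catch-up window at all may be dropped by mvexpand, so that edge case needs handling.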
Do you actually care what order the data is returned in? You are simply adding it to the summary index. The _time written to the summary will be whatever you want it to be, so just ignore the message; I don't believe it will affect the data in the summary.
Splunk is a time series database, so you cannot have data without _time, unless you store that data in a lookup. If you are using DBXQuery to fetch data and store it in a summary index, it must have _time. However, you can always set _time to the time you query the data, so if you do this daily, then the last 24 hours of _time data will be the most recent copy of the DBXQuery data. By configuring the index retention period, you can control how long the data exists in the summary. Alternatively, you can write the data to a lookup (the outputlookup command), and this will overwrite any existing data in the lookup, so you only ever have the latest copy. Note that there are some size constraints for lookups that affect how they behave, but this could be an option. I am not sure I understand your example of correlating with another index - you cannot use dbxquery on a Splunk index.
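As a rough sketch of both options (the connection name, index and lookup file are placeholders — adjust to your environment):

| dbxquery query="SELECT * FROM Table_Test" connection="my_connection"
| eval _time=now()
| collect index=summary source="dbx_table_test"

or, for the lookup variant, where each run overwrites the previous copy:

| dbxquery query="SELECT * FROM Table_Test" connection="my_connection"
| outputlookup table_test.csv

With the summary-index version, the _time you set with eval is what gets written, so scheduling this daily gives you a fresh "latest 24 hours" copy, and index retention ages out the older ones.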
Hello, I have static data of about 200,000 rows (which will potentially grow) that needs to be moved to a summary index daily.

1) Is it possible to move the data from DBXquery to a summary index and re-write the data daily, so there will not be old data with _time after the re-write?
2) Is it possible to use a summary index without _time and make it work like DBXquery?

The reason I do this is because I want to do data manipulation (split, etc.) and move it to another "placeholder" other than CSV or DBXquery, so I can perform correlation with another index. For example:

| dbxquery query="SELECT * from Table_Test"

The scheduled report for the summary index will add something like this:

summaryindex spool=t uselb=t addtime=t index="summary" file="test_file" name="test" marker="hostname=\"https://test.com/\",report=\"test\""

Please suggest. Thank you for your help.
Hey, I installed the Splunk Enterprise free trial on an Ubuntu server and this is the first time I am using Splunk, so I am following a video. I am having trouble locating the "local event logs" option while adding data to Splunk from a universal forwarder on a Windows server. I want to capture event logs from the Windows server and see them in Splunk. Please help me out as soon as possible. Thank you.
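In case it helps, a minimal sketch of the forwarder-side configuration, assuming the usual setup (the "Local event logs" page in Add Data only appears when Splunk itself runs on Windows; on a universal forwarder you would normally enable the Windows event log inputs in an inputs.conf, for example under etc/system/local) — the index name here is a placeholder:

[WinEventLog://Application]
disabled = 0
index = wineventlog

[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog

Restart the forwarder after the change and make sure the target index exists on the indexer.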
How do I split my query from DBXquery (eg. 200k rows) and push it into a summary index at the same time?

| dbxquery query="SELECT * from Table_Test"

The scheduled report for the summary index will add something like this:

summaryindex spool=t uselb=t addtime=t index="summary" file="test_file" name="test" marker="hostname=\"https://testcom/\",report=\"test\""

Technically, I don't really need the _time because it is static data, but it needs to get updated every day. Thanks
Hello! I have been trying to get some logs into a metric index and I'm wondering if they can be improved with better field extraction. These are what the logs look like:

t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=12
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s6 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s7 type=COUNTER value=2
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9:10 type=COUNTER value=8
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=140
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=3
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=1
t=1713291900 path="/data/p3/p4" stat=s20 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s21 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s22 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p3/p5" stat=s23 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s24 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s25 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p1/p5/p6" stat=s26 type=COUNTER value=253
t=1713291900 path="/data/p1/p5/p6" stat=s27 type=GAUGE value=1

t is the epoch time. path is the path of a URL which is in double quotes, always starts with /data/, and can have anywhere between 2 and 7 (maybe more) subpaths. stat is either a single stat (like s20) OR a colon-delimited string of between 3 and 6 stat names. type is either COUNTER, TIMEELAPSED, or GAUGE. value is the metric.

Right now I've been able to get a metric index set up that:
- Assigns t as the timestamp and ignores t as a dimension or metric
- Makes value the metric
- Makes path, stat, and type dimensions

This is my transforms.conf:

[metrics_field_extraction]
REGEX = ([a-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.\/:-]+)

[metric-schema:cm_log2metrics_keyvalue]
METRIC-SCHEMA-MEASURES = value
METRIC-SCHEMA-WHITELIST-DIMS = stat,path,type
METRIC-SCHEMA-BLACKLIST-DIMS = t

And props.conf (it's basically log2metrics_keyvalue; we need cm_ to match our license):

[cm_log2metrics_keyvalue]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:cm_log2metrics_keyvalue
TRANSFORMS-EXTRACT = metrics_field_extraction
NO_BINARY_CHECK = true
category = Log to Metrics
description = '<key>=<value>' formatted data. Log-to-metrics processing converts the keys with numeric values into metric data points.
disabled = false
pulldown_type = 1

path and stat are extracted exactly as they appear in the logs. However, I'm wondering if it's possible to get each part of the path and stat fields into its own dimension, so instead of:

_time                    path       stat      value  type
4/22/24 2:20:00.000 PM   /p1/p2/p3  s1:s2:s3  500    COUNTER

It would be:

_time                    path1  path2  path3  stat1  stat2  stat3  value  type
4/22/24 2:20:00.000 PM   p1     p2     p3     s1     s2     s3     500    COUNTER

My thinking was that we'd be able to get really granular stats and interesting graphs. Thanks in advance!
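One avenue that might get you there — offered as an untested sketch, since I haven't verified how ingest-time eval interacts with the metric-schema conversion on your version — is to split path and stat into additional indexed fields with INGEST_EVAL in transforms.conf and whitelist those as dimensions. The stanza name and the number of path/stat parts below are made up; extend the pattern to as many parts as you expect:

[split_path_and_stat]
INGEST_EVAL = path1:=mvindex(split(path,"/"),2), path2:=mvindex(split(path,"/"),3), path3:=mvindex(split(path,"/"),4), stat1:=mvindex(split(stat,":"),0), stat2:=mvindex(split(stat,":"),1), stat3:=mvindex(split(stat,":"),2)

Then reference it after the existing extraction in props.conf (TRANSFORMS-EXTRACT = metrics_field_extraction, split_path_and_stat) and add path1,path2,path3,stat1,stat2,stat3 to METRIC-SCHEMA-WHITELIST-DIMS. The offsets start at 2 because split on "/data/p1/p2" produces an empty string and "data" before the interesting parts.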
I'm having issues getting parsing working using a custom config otel specification. The `log.file.path` should be one of these two formats:

1. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log
2. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template.log

One with and one without the pod name. We are doing it this way so that we only index one application log file in a set of directories rather than picking up a ton of kubernetes logs that we will never review, but yet have to store. At the bottom is the full otel config.

We are noticing that regardless of the file path (1 or 2 above), it keeps going to the default option, and in the `catchall` attribute in Splunk, it has the value of log.file.path, which is always the 1st format above (e.g. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log).

- id: catchall
  type: move
  from: attributes["log.file.path"]
  to: attributes["catchall"]

Why is it not going to the route `parse-deep-filepath`, considering the regex should match? We want to be able to pull out the `application name`, the `pod name`, and the `namespace`, which are all reflected in the full `log.file.path`.

receivers:
  filelog/mule-logs-volume:
    include:
      - /splunk-otel/*/app*.log
      - /splunk-otel/*/*/app*.log
    start_at: beginning
    include_file_path: true
    include_file_name: true
    resource:
      com.splunk.sourcetype: mule-logs
      k8s.cluster.name: {{ k8s_cluster_instance_name }}
      deployment.environment: {{ aws_environment_name }}
      splunk_server: {{ splunk_host }}
    operators:
      - type: router
        id: get-format
        routes:
          - output: parse-deep-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/[^/]+/app-[^/]+[.]log$"'
          - output: parse-shallow-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/app-[^/]+[.]log$"'
          - output: nil-filepath
            expr: 'log.file.path matches "^<nil>$"'
        default: catchall
      # Extract metadata from file path
      - id: parse-deep-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<pod_name>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: parse-shallow-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: nil-filepath
        type: move
        from: attributes["log.file.path"]
        to: attributes["nil_filepath"]
      - id: catchall
        type: move
        from: attributes["log.file.path"]
        to: attributes["catchall"]

exporters:
  splunk_hec/logs:
    # Splunk HTTP Event Collector token.
    token: "{{ splunk_token }}"
    # URL to a Splunk instance to send data to.
    endpoint: "{{ splunk_full_endpoint }}"
    # Optional Splunk source: https://docs.splunk.com/Splexicon:Source
    source: "output"
    # Splunk index, optional name of the Splunk index targeted.
    index: "{{ splunk_index_name }}"
    # Maximum HTTP connections to use simultaneously when sending data. Defaults to 100.
    #max_connections: 20
    # Whether to disable gzip compression over HTTP. Defaults to false.
    disable_compression: false
    # HTTP timeout when sending data. Defaults to 10s.
    timeout: 900s
    tls:
      # Whether to skip checking the certificate of the HEC endpoint when sending data over HTTPS. Defaults to false.
      # For this demo, we use a self-signed certificate on the Splunk docker instance, so this flag is set to true.
      insecure_skip_verify: true

processors:
  batch:

extensions:
  health_check:
    endpoint: 0.0.0.0:8080
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
  file_storage/checkpoint:
    directory: /output/
    timeout: 10s
    compaction:
      on_start: true
      directory: /output/
      max_transaction_size: 65_536

service:
  extensions: [pprof, zpages, health_check, file_storage/checkpoint]
  pipelines:
    logs:
      receivers: [filelog/mule-logs-volume]
      processors: [batch]
      exporters: [splunk_hec/logs]
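One thing that might explain the fall-through to the default route — this is an assumption based on how the stanza operator expression language generally works, not something verified against your collector version — is that inside an operator expr the file path is referenced through the attributes map, i.e. attributes["log.file.path"], rather than as a bare log.file.path:

routes:
  - output: parse-deep-filepath
    expr: 'attributes["log.file.path"] matches "^/splunk-otel/[^/]+/[^/]+/app-[^/]+[.]log$"'
  - output: parse-shallow-filepath
    expr: 'attributes["log.file.path"] matches "^/splunk-otel/[^/]+/app-[^/]+[.]log$"'

If the bare name doesn't resolve, the match silently fails and everything drops to default, which would fit the catchall behaviour described above.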
OK. Now it's a bit better described, but:

1. You still haven't shown us a sample of the actual events.
2. Not everything in Splunk can be done (reasonably and effectively) with just a single search. Maybe you could bend over backwards and compose some monster using subsearches and map, but it would definitely not be a good solution - performance would be bad and you might still hit subsearch limits and get wrong results.

It sounds like something that should be done by means of a repeated scheduled search storing intermediate state in a lookup. You might try to search through "all time" and build a huge list of everything that happened in your index only to choose the two most recent changes, but that would consume a lot of memory and is not really a good solution.
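As a very rough illustration of that scheduled-search-plus-lookup pattern (the index, the host/item/value field names and the change_state.csv lookup are all made up — substitute your own), a search scheduled daily could do something like:

index=your_index earliest=-24h
| stats latest(_time) as last_change latest(value) as last_value by host item
| inputlookup append=t change_state.csv
| sort 0 - last_change
| streamstats count as rank by host item
| where rank<=2
| fields - rank
| outputlookup change_state.csv

Each run folds the new changes into the lookup and keeps only the two most recent per host/item, so you never need to search back over all time; you may also want to dedup identical rows before the streamstats.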
@deepakc's was a so-called "run-anywhere" example - a sequence of commands that can be run on its own, without any additional data that you need to search for, meant to show a particular mechanism. It starts with the makeresults command, which creates an "empty" result. This example was not meant to be run as part of your search; you should do something similar with your own data and your own field names.
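For instance, a snippet like this can be pasted into any search bar as-is, because makeresults fabricates the event and the numbers are arbitrary examples:

| makeresults
| eval millis_sec=5000
| eval seconds=round(millis_sec/1000, 3)
| table millis_sec seconds

Once the mechanism is clear, you swap makeresults and the hard-coded value for your real base search and your real field.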
What do you mean by "there is no overlapping"? A 4728 or 4729 event will have an Account Name field. Splunk applies the transforms in a class from left to right and applies them all (if they match). So your event will first match the first transform; if the event is 4728 or 4729, the index will get overwritten to index1, but then Splunk will immediately apply the second transform, which will - for the *.adm accounts - overwrite the index to index2. At least that's how it should work if the regexes are OK (I didn't check that).
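To illustrate the ordering with a generic sketch (the stanza names, sourcetype and regexes here are made up, not your actual config - the point is only that both transforms in the list run, in order):

# props.conf
[WinEventLog:Security]
TRANSFORMS-routing = route_group_events, route_adm_accounts

# transforms.conf
[route_group_events]
REGEX = EventCode=(4728|4729)
DEST_KEY = _MetaData:Index
FORMAT = index1

[route_adm_accounts]
REGEX = Account\sName:\s+\S+\.adm
DEST_KEY = _MetaData:Index
FORMAT = index2

An event matching both regexes ends up in index2, because the last matching transform in the list wins.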
Be aware that subsearches have limitations, and it can be nasty if you hit a limit because the subsearch will be finalized silently - you won't know something's not right. Also, the | dedup host | table host part is quite suboptimal. And in general, be wary when using the dedup command (you have it in the outer search as well) - it might behave differently than you'd expect.
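For example, a subsearch that only needs to return a deduplicated list of hosts is usually better written with stats (a generic sketch with made-up index and filter names):

index=main sourcetype=your_sourcetype
    [ search index=asset_inventory status=active
      | stats count by host
      | fields host ]

The stats count by host both deduplicates and keeps the result set small, and the subsearch still expands to (host="a" OR host="b" OR ...) in the outer search.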
Hello @marnall , I already tested both regexes in regex101 and there is no overlap; this is why I don't understand why it's not working.
You could hit the REST endpoint for approvals. (https://docs.splunk.com/Documentation/SOARonprem/6.2.1/PlatformAPI/RESTApproval) Unfortunately the docs do not include the POST requests for actually approving the task, so you'll have to do an approval in the web interface and then log the POST request using your browser dev tools. Then you can use that POST request to approve tasks without having to log into SOAR. You will need to provide authentication credentials or a token though.
Hi Deepak, I am a bit confused about using the time command.

Field name - event.Properties.duration

How do I execute this in the command? I tried the below, but I'm sure I am missing something:

index=testing
| "event.Properties.duration"="*"
| makeresults
| eval millis_sec = 5000
| eval seconds = millis_sec/1000
| table millis_sec, seconds
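For what it's worth, a sketch of how that attempt could be reshaped, assuming the field really is event.Properties.duration and holds milliseconds (makeresults starts a brand-new search, so it shouldn't sit in the middle of one, and field names containing dots need single quotes on the right-hand side of eval):

index=testing "event.Properties.duration"="*"
| eval seconds=round('event.Properties.duration'/1000, 3)
| table event.Properties.duration, seconds

This keeps the real events from index=testing and just adds a computed seconds field next to the original millisecond value.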