
All Posts

I want to mask some data coming from web server logs, specifically from only one server out of all my web servers. Can I apply my masking rule to only one web server source instead of all the web servers sending to the same sourcetype? If I apply this rule to all web server logs, will it cause high resource usage on my indexer? Thanks
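Scoping a masking rule to a single host is possible in props.conf, which avoids running the replacement over every web server's events. A minimal sketch, assuming the host name webserver01 and a credit-card-style pattern to mask (both are placeholders, not from the post):

# props.conf on the indexer or heavy forwarder (hypothetical host and pattern)
[host::webserver01]
SEDCMD-mask_ids = s/\d{4}-\d{4}-\d{4}-\d{4}/XXXX-XXXX-XXXX-XXXX/g

A [host::...] stanza applies only to events from that host, so the other web servers on the same sourcetype are untouched.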
Hi all, I'm analysing event counts for a specific search criterion and I want to know how the count of values changes over time. The search below is not good enough to see what's going on, as many usernames have a huge number of events while others with small numbers are barely noticeable (I'm interested in the rate of change, not the count itself).

index=test_index "search string"
| timechart span=10m count(field1) by username

So I want to see the rate of change of the count rather than the simple count, by the username field. How can we achieve this?
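One way to get there is to flatten the timechart with untable, compute each username's difference between consecutive buckets with streamstats, and pivot back. A minimal sketch, assuming the same index and search string as above:

index=test_index "search string"
| timechart span=10m count by username
| untable _time username event_count
| streamstats current=f window=1 last(event_count) as prev_count by username
| eval rate_of_change = event_count - prev_count
| xyseries _time username rate_of_change

The streamstats window of 1 carries each username's previous bucket count forward, so rate_of_change is the bucket-to-bucket delta rather than the raw count.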
Neither is working for me. Their search gives an unwieldy table with 100+ columns; yours shows only blanks for avg and max. Splunk 9.1.2.
Thank you for all the updates. Due to the large number of devices I decided to use method #2 from the last post. My SPL looks like:

index=index2 OR (index=index1 sourcetype="metadata" "health.severity"!=NULL)
| eval IP_ADDRESS=if(index=index1, interfaces.address, PRIMARY_IP_ADDRESS)
``` PRIMARY_IP_ADDRESS is from index2, to match interfaces.address from index1 ```
| stats dc(index) as indexes values(DISCOVERED_OS) as DISCOVERED_OS by interfaces.address
| where indexes=2
| table IP_ADDRESS

The query runs with no errors but produces 0 (zero) events. Thank you, Leon
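Two quoting details in that query are worth checking (a hedged sketch, keeping the field and index names from the post): in eval, a field name containing a dot must be wrapped in single quotes, and the index comparison needs a quoted string, otherwise the if() never matches and IP_ADDRESS stays null for index1 events:

index=index2 OR (index=index1 sourcetype="metadata" "health.severity"!=NULL)
| eval IP_ADDRESS=if(index=="index1", 'interfaces.address', PRIMARY_IP_ADDRESS)
| stats dc(index) as indexes values(DISCOVERED_OS) as DISCOVERED_OS by IP_ADDRESS
| where indexes=2
| table IP_ADDRESS

Grouping the stats by the computed IP_ADDRESS also keeps that field available for the final table.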
I am sorry but I don't see any commands. Did you mean to attach them to the post?
Working with just this example (the same applies across the board): in get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name, the .data.*.events.*. part is most likely your problem. Every time your filter block hits a true, you're telling your format block to pull in all of the file names in the event data from get_device_trajectory_2. You'll need to find a way to tell it to pull in only the information from the index of the item you care about, something like get_device_trajectory_2:action_result.data.*.events.X.file.parent.file_name, where X is the index of the item in the list that evaluated true.
How are you measuring / detecting the value of the load? How often do you want to check? Over what period do you want to measure the load?
I have 2 servers (hosts) and I need to create an alert so that when the difference in load between the 2 hosts is greater than 50 percent, it fires an alert.
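For illustration, a minimal SPL sketch of such an alert, assuming a metric-style field named load in an index called os (the index, sourcetype, host, and field names are placeholders, not from the post):

index=os sourcetype=cpu (host=server1 OR host=server2)
| stats avg(load) as avg_load by host
| stats max(avg_load) as high min(avg_load) as low
| eval pct_diff = round(100 * (high - low) / high, 1)
| where pct_diff > 50

Saved as an alert over a suitable time window, this returns a row (and therefore triggers) only when the two hosts' average loads differ by more than 50 percent.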
I'm not familiar with the HTTP app so I can't speak directly to this specific example, but I can answer the question at the end: yes, you are. The way a callback works is that it looks for another playbook block by that name, not for a function defined within the same block. So what you can do is use the standard HTTP action block and move get_action_results into its own playbook block; SOAR will understand that you want to feed in the values from the action calling the callback. Looking at your reply on the other thread, something that may be helpful would be writing a loop to separate the URLs and run through the process one by one. It's slower and more resource intensive, but that way you don't need to worry about keeping track of multiple results at once.
Yes, each method was tested separately so they don't overlap; I just combined them here so it's easier to see what has been tried so far. Here is a sample log (reduced to a few key-value pairs) from the k8s nodes, as pulled by the collector (agent):

2024-03-11T21:04:41.411025006Z stdout F {"time": "2024-03-11T21:04:41+00:00", "upstream_namespace":"system-monitoring", "remote_user": "sample-user"}

If, for example, I just use `attributes/upsert`, it appends to the existing value but does not overwrite it.
Hi @sajo.sam, I'm going to see what I can find for you. In the meantime, have you seen/read this AppD Docs page: https://docs.appdynamics.com/appd/23.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent/auto-instrumentation-configuration
Have you verified that the user in question has permission to access ServiceNow via the API? You could verify that with Postman or a plain curl call.
Could you please activate only the attributes processor in your pipeline "logs", get rid of the transform block, and then verify the functionality? Next time it would be great if we could focus on one configuration that does not work.
Hi @SANDEEP.KUMAR, If the reply helped answer your question, please take 5 seconds to click the “Accept as Solution” button on the reply? This helps the community know your question has been officially answered and builds a bank of knowledge across the community. If the reply did not answer your question, jump back into the conversation to keep it going. 
Try something like this:

``` Reverse the order of events so that earliest is first ```
| reverse
``` Extract the fields ```
| rex "(?<Date>\[[^\]]+\])\s(?<loglevel>\w+)\s-\swire\s(?<action_https>\S+)\sI\/O\s(?<correlationID>\S+)\s(?<direction>\S+)\s(?<message>.*)"
``` Tag the events in order to be able to maintain the sequence ```
| streamstats count as event
``` Create a direction-based grouping for correlationIDs ```
| eval grouping=correlationID.direction
``` Sort so that events for the same correlationID are together and in sequence ```
| sort 0 correlationID event
``` Find where the grouping changes ```
| streamstats count by grouping reset_on_change=t global=f
``` Assign the events to sequence groups ```
| streamstats count(eval(count == 1)) as sequence
``` Gather the field values by sequence group ```
| stats first(Date) as start last(Date) as end list(message) as message by sequence action_https correlationID loglevel
``` Reset Date field to first Date ```
| eval Date=start
``` Calculate the duration from the start and end times ```
| eval duration=round(1000*(strptime(end,"[%F %T,%3N]")-strptime(start,"[%F %T,%3N]")),0)
``` Sort results by Date (this is a string-based sort and works because of the date format used) ```
| sort 0 Date
``` Output table of fields ```
| table Date, loglevel, action_https, correlationID, message, duration
Yes, it does.

service:
  pipelines:
    logs:
      exporters:
        - otlp
      processors:
        - transform
        - attributes/upsert

So far I have tried these options, but none seem to work:

processors:
  attributes/upsert:
    actions:
      - key: upstream_namespace
        action: upsert
        value: "REDACTED_NS"
  transform:
    log_statements:
      - context: log
        statements:
          - replace_all_patterns(attributes, "value", "upstream_namespace", "REDACTED_NS")
          - replace_all_patterns(attributes, "key", "upstream_namespace", "REDACTED_NS")
          - replace_match(attributes["upstream_namespace"], "*", "REDACTED_NS")
          - replace_match(attributes["upstream_namespace"], "system-monitoring", "REDACTED_NS")
          - delete_key(attributes, "upstream_namespace")
          - delete_key(resource.attributes, "upstream_namespace")
          - replace_all_patterns(attributes["upstream_namespace"], "value", "upstream_namespace", "REDACTED_NS")
          - replace_all_patterns(attributes["upstream_namespace"], "value", "system-monitoring", "REDACTED_NS")

The attributes/upsert and set() approaches, however, append to the existing value:

upstream_namespace: REDACTED_NS system-monitoring

Not sure what is missing here; any suggestions to resolve this? Thanks
Hi @Ganesh1, if the number of events is less than the total count in stats, check whether the fields Object and Failure_Message are present in all the events or only in a subset of them, and whether they perhaps do not both appear in the same events. If the number of events is greater than the total count in stats, you probably have multiple values in the same events. The issue is most likely that a stats count BY two fields only counts results where both fields contain a value. Ciao. Giuseppe
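As a quick illustration of that last point (a sketch using the field names from the post and a placeholder index), filling null values before the stats makes every event count once:

index=your_index
| fillnull value="N/A" Object Failure_Message
| stats count by Object Failure_Message

Without the fillnull, any event missing either field is silently dropped from the stats count BY output.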
You should check the difference between add-ons and apps: Apps and add-ons - Splunk Documentation. If the outdated Aruba app does not contain any dashboards, you will have to create them yourself.
Settings on the SH are as follows:

AUTO_KV_JSON = false
KV_MODE = none

Settings on the HF:

AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = json
KV_MODE = none

Values are still getting duplicated. Do you have any more suggestions for us?
Have you considered this article:  https://community.splunk.com/t5/Splunk-Search/How-do-I-find-all-duplicate-events/m-p/9764