All Posts

I'm not familiar with the HTTP app so I can't speak directly to this specific example, but I can answer your question at the end: yes, you are. The way callback works is that it looks for another playbook block by that name, not a function defined within the same block. So what you can do is use the standard HTTP action block and move get_action_results to its own playbook block. SOAR will understand that you want to pass in the values from the action calling the callback. Looking at your reply on the other thread, something that may be helpful would be writing a loop to separate each URL and run through the process one by one. It's slower and more resource-intensive, but that way you don't need to worry about keeping track of multiple results at once.
Yes, each method was tested separately so it doesn't overlap. I just combined it here so it's easier to see what has been tried so far. Here is the sample log (reduced to a few kv pairs) from the k8s nodes that is pulled by the collector (agent):

2024-03-11T21:04:41.411025006Z stdout F {"time": "2024-03-11T21:04:41+00:00", "upstream_namespace":"system-monitoring", "remote_user": "sample-user"}

If, for example, I just use `attributes/upsert`, it appends to the existing value but does not overwrite it.
Hi @sajo.sam, I'm going to see what I can find for you. In the meantime, have you seen/read this AppD Docs page: https://docs.appdynamics.com/appd/23.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent/auto-instrumentation-configuration
Have you verified that the user being used has permissions to access ServiceNow via the API? You could verify that with Postman or a plain curl call.
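For the curl route, a minimal sketch against the standard ServiceNow Table API, assuming basic auth (the instance URL, table, and credentials are placeholders):

# Fetch a single incident record to confirm the account can reach the ServiceNow REST API
curl -u 'api_user:api_password' \
  -H "Accept: application/json" \
  "https://your-instance.service-now.com/api/now/table/incident?sysparm_limit=1"

A 200 response with a JSON body means the credentials and API access are fine; a 401 or 403 points at a permission or role problem for that account.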
Could you please activate only the attributes processor in your "logs" pipeline, get rid of the transform block, and then verify the functionality? Next time it would be great if we could focus on one configuration that does not work.
Hi @SANDEEP.KUMAR, if the reply helped answer your question, could you please take 5 seconds to click the “Accept as Solution” button on the reply? This helps the community know your question has been officially answered and builds a bank of knowledge across the community. If the reply did not answer your question, jump back into the conversation to keep it going.
Try something like this

``` Reverse the order of events so that earliest is first ```
| reverse
``` Extract the fields ```
| rex "(?<Date>\[[^\]]+\])\s(?<loglevel>\w+)\s-\swire\s(?<action_https>\S+)\sI\/O\s(?<correlationID>\S+)\s(?<direction>\S+)\s(?<message>.*)"
``` Tag the events in order to be able to maintain the sequence ```
| streamstats count as event
``` Create a direction-based grouping for correlationIDs ```
| eval grouping=correlationID.direction
``` Sort so that events for the same correlationID are together and in sequence ```
| sort 0 correlationID event
``` Find where the grouping changes ```
| streamstats count by grouping reset_on_change=t global=f
``` Assign the events to sequence groups ```
| streamstats count(eval(count == 1)) as sequence
``` Gather the field values by sequence group ```
| stats first(Date) as start last(Date) as end list(message) as message by sequence action_https correlationID loglevel
``` Reset Date field to first Date ```
| eval Date=start
``` Calculate the duration from the start and end times ```
| eval duration=round(1000*(strptime(end,"[%F %T,%3N]")-strptime(start,"[%F %T,%3N]")),0)
``` Sort results by Date (this is a string-based sort and works because of the date format used) ```
| sort 0 Date
``` Output table of fields ```
| table Date, loglevel, action_https, correlationID, message, duration
Yes, it does.

service:
  pipelines:
    logs:
      exporters:
        - otlp
      processors:
        - transform
        - attributes/upsert

So far I have tried these options but none seem to work.

processors:
  attributes/upsert:
    actions:
      - key: upstream_namespace
        action: upsert
        value: "REDACTED_NS"
  transform:
    log_statements:
      - context: log
        statements:
          - replace_all_patterns(attributes,"value","upstream_namespace", "REDACTED_NS")
          - replace_all_patterns(attributes,"key","upstream_namespace", "REDACTED_NS")
          - replace_match(attributes["upstream_namespace"], "*" , "REDACTED_NS")
          - replace_match(attributes["upstream_namespace"], "system-monitoring" , "REDACTED_NS")
          - delete_key(attributes,"upstream_namespace")
          - delete_key(resource.attributes,"upstream_namespace")
          - replace_all_patterns(attributes["upstream_namespace"],"value","upstream_namespace", "REDACTED_NS")
          - replace_all_patterns(attributes["upstream_namespace"],"value","system-monitoring", "REDACTED_NS")

The attributes/upsert and set() approaches, however, append to the existing value:

upstream_namespace: REDACTED_NS system-monitoring

Not sure what is missing here; any suggestions to resolve this? Thanks
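In case it helps to isolate the behaviour, here is a minimal sketch of the attributes processor on its own, using the update action (which only modifies a key that already exists) instead of upsert; the processor name and the pipeline fragment are placeholders to match the snippets above:

processors:
  attributes/redact:
    actions:
      - key: upstream_namespace
        action: update
        value: "REDACTED_NS"

service:
  pipelines:
    logs:
      processors: [attributes/redact]
      exporters: [otlp]

If the attribute still ends up with both values with only this single processor in the pipeline, the duplication is likely happening outside this processor (for example in how the log body is parsed into attributes) rather than in the upsert itself.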
Hi @Ganesh1, if the number of events is less than the total count in stats, check whether the fields Object and Failure_Message are present in all the events or only in a subset of them, and possibly not both in the same events. If the number of events is greater than the total count in stats, you probably have multiple values in the same events. The likely cause is that a stats count BY two fields only counts results where both fields contain a value. Ciao. Giuseppe
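If you want every event counted even when one of the two fields is missing, a common workaround is to fill the missing field with a placeholder before the stats; a minimal sketch (the base search is a placeholder, the field names are taken from the thread):

index=your_index sourcetype=your_sourcetype
| fillnull value="N/A" Object Failure_Message
| stats count BY Object Failure_Message

With fillnull in place, stats no longer drops events that are missing Object or Failure_Message, and the "N/A" rows show how many events lacked each field.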
You should check the difference between add-ons and apps: Apps and add-ons - Splunk Documentation. If the outdated Aruba app does not contain any dashboards, you will have to create them yourself.
Settings on the SH are as follows:

AUTO_KV_JSON = false
KV_MODE = none

Settings on the HF:

AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = json
KV_MODE = none

Values are getting duplicated; do you have any more suggestions for us?
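One way to confirm which of those settings actually win on each instance is btool; a minimal sketch, run on both the SH and the HF (the sourcetype name is a placeholder):

$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug

The --debug flag shows the app and file each effective setting comes from, which makes it easier to spot another stanza that still enables JSON field extraction.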
Have you considered this article:  https://community.splunk.com/t5/Splunk-Search/How-do-I-find-all-duplicate-events/m-p/9764
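In case it saves a click, a minimal sketch of one common way to surface duplicates in SPL (the base search is a placeholder, and this is not necessarily the exact approach from the article):

index=your_index sourcetype=your_sourcetype
| stats count first(_time) as first_seen last(_time) as last_seen by _raw
| where count > 1

Adjust the by clause to specific fields if duplicates should be judged on something narrower than the whole raw event.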
Thank you, this worked. The only thing I wish I could see is just the matched lines, getting rid of the blank rows.
Hi, it seems that there is no dashboard included with the add-on. My second question: what search (SPL command) would give a table with SNR values vs AP vs users? Thx
Hello @ITWhisperer,

First of all, I'd like to thank you for taking the time to think about my concerns. As you said, if the combinations of correlationIDs and direction are reused it may not give the results I expect. The correlationID and direction are completely random. The correlationID is an ID that SWO2-APIM associates with the request to identify it. The direction means that SWO2-APIM receives or sends the request. In the real log, the first log line is at the bottom and the last log line is at the top. This is what the real logs look like:

[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "0[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "<Message or something>[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "8e[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Connection: close[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Transfer-Encoding: chunked[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Date: Tue, 26 Mar 2024 13:02:16 GMT[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Content-Type: application/xml; charset=UTF-8[\r][\n]"
[2024-03-26 13:02:16,357] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "HTTP/1.1 200 OK[\r][\n]"
[2024-03-26 13:02:16,353] DEBUG - wire HTTPS-Listener I/O dispatcher-4 >> "[\r][\n]"
[2024-03-26 13:02:16,353] DEBUG - wire HTTPS-Listener I/O dispatcher-4 >> "Accept-Encoding: gzip, compressed[\r][\n]"
[2024-03-26 13:02:16,353] DEBUG - wire HTTPS-Listener I/O dispatcher-4 >> "User-Agent: HealthChecker/2.0[\r][\n]"
[2024-03-26 13:02:16,353] DEBUG - wire HTTPS-Listener I/O dispatcher-4 >> "Connection: close[\r][\n]"
[2024-03-26 13:02:16,353] DEBUG - wire HTTPS-Listener I/O dispatcher-4 >> "Host: 10.229.55.71:8243[\r][\n]"
[2024-03-26 13:02:16,353] DEBUG - wire HTTPS-Listener I/O dispatcher-4 >> "GET /services/Version HTTP/1.1[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "0[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "<Message or something>[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "8e[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "Connection: close[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "Transfer-Encoding: chunked[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "Date: Tue, 26 Mar 2024 13:02:11 GMT[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "Content-Type: application/xml; charset=UTF-8[\r][\n]"
[2024-03-26 13:02:11,042] DEBUG - wire HTTPS-Listener I/O dispatcher-3 << "HTTP/1.1 200 OK[\r][\n]"
[2024-03-26 13:02:07,131] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "[\r][\n]"
[2024-03-26 13:02:07,131] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "0[\r][\n]"
[2024-03-26 13:02:07,131] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "<Message or something>[\r][\n]"
[2024-03-26 13:02:07,131] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "8e[\r][\n]"
[2024-03-26 13:02:07,131] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "[\r][\n]"
[2024-03-26 13:02:07,131] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Connection: close[\r][\n]"
[2024-03-26 13:02:07,131] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Transfer-Encoding: chunked[\r][\n]"
[2024-03-26 13:02:07,129] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Date: Tue, 26 Mar 2024 13:02:07 GMT[\r][\n]"
[2024-03-26 13:02:07,129] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "Content-Type: application/xml; charset=UTF-8[\r][\n]"
[2024-03-26 13:02:07,129] DEBUG - wire HTTPS-Listener I/O dispatcher-4 << "HTTP/1.1 200 OK[\r][\n]"

If you look closely at the requests, they are received from bottom to top. And so, I would like to have this kind of output:

Date, loglevel, action_https, correlationID, message, duration
[2024-03-26 13:02:16,357], DEBUG, HTTPS-Listener, dispatcher-4, "HTTP/1.1 200 OK[\r][\n]" "Content-Type: application/xml; charset=UTF-8[\r][\n]" "Date: Tue, 26 Mar 2024 13:02:16 GMT[\r][\n]" "Transfer-Encoding: chunked[\r][\n]" "Connection: close[\r][\n]" "[\r][\n]" "8e[\r][\n]" "<Message or something>[\r][\n]" "0[\r][\n]" "[\r][\n]", 000
[2024-03-26 13:02:16,353], DEBUG, HTTPS-Listener, dispatcher-4, "GET /services/Version HTTP/1.1[\r][\n]" "Host: 10.229.55.71:8243[\r][\n]" "Connection: close[\r][\n]" "User-Agent: ELB-HealthChecker/2.0[\r][\n]" "Accept-Encoding: gzip, compressed[\r][\n]" "[\r][\n]", 000
[2024-03-26 13:02:11,042], DEBUG, HTTPS-Listener, dispatcher-3, "HTTP/1.1 200 OK[\r][\n]" "Content-Type: application/xml; charset=UTF-8[\r][\n]" "Date: Tue, 26 Mar 2024 13:02:11 GMT[\r][\n]" "Transfer-Encoding: chunked[\r][\n]" "Connection: close[\r][\n]" "[\r][\n]" "8e[\r][\n]" "<Message or something>[\r][\n]" "0[\r][\n]" "[\r][\n]", 000
[2024-03-26 13:02:07,129], DEBUG, HTTPS-Listener, dispatcher-4, "HTTP/1.1 200 OK[\r][\n]" "Content-Type: application/xml; charset=UTF-8[\r][\n]" "Date: Tue, 26 Mar 2024 13:02:07 GMT[\r][\n]" "Transfer-Encoding: chunked[\r][\n]" "Connection: close[\r][\n]" "[\r][\n]" "8e[\r][\n]" "<Message or something>[\r][\n]" "0[\r][\n]" "[\r][\n]", 003
I have three tables. Each has one or more ID fields (out of ID_A, ID_B, ID_C) and assigns values Xn, Yn, Zn to these IDs. In effect, the tables each contain a fragment of information from a set of objects 1...5.

Table X:
ID_A  ID_B  X1     X2
A1    B1    X1_1   X2_1
A2    B2    X1_2a  X2_2
A2    B2    X1_2b  X2_2
A3    B3    X1_3   X2_3

Table Y:
ID_A  ID_B  Y1    Y2
A2    B2    Y1_2
A2    B2          Y2_2
A3    B3          Y2_3a
A3    B3          Y2_3b
A4    B4    Y1_4  Y2_4

Table Z:
ID_B  ID_C  Z1
B1    C1    Z1_1
B3    C3    Z1_3
B5    C5    Z1_5

How can I create the superset of all three tables, i.e. reconstruct the "full picture" about objects 1..5 as well as possible? I tried with union and join in various ways, but I keep tripping over the following obstacles:

- The 1:n relation between ID and values (which should remain expanded as individual rows)
- Empty fields in between (bad for stats list(...) or stats values(...) because of different-sized MV results)
- There is no single table that has references to all objects (e.g. object 5 is only present in table Z).

Desired result:
ID_A  ID_B  ID_C  X1     X2    Y1    Y2     Z1
A1    B1    C1    X1_1   X2_1               Z1_1
A2    B2          X1_2a  X2_2  Y1_2  Y2_2
A2    B2          X1_2b  X2_2  Y1_2  Y2_2
A3    B3          X1_3   X2_3        Y2_3a  Z1_3
A3    B3          X1_3   X2_3        Y2_3b  Z1_3
A4    B4                       Y1_4  Y2_4
      B5    C5                              Z1_5

Sample data:

| makeresults
| eval _raw="ID_A;ID_B;X1;X2
A1;B1;X1_1;X2_1
A2;B2;X1_2A;X2_2
A2;B2;X1_2B;X2_2
A3;B3;X1_3;X2_3
"
| multikv forceheader=1
| table ID_A, ID_B, X1, X2
| append
    [ | makeresults
    | eval _raw="ID_A;ID_B;Y1;Y2
A2;B2;Y1_2;
A2;B2;;Y2_2
A3;B3;Y1_3;Y2_3A
A3;B3;Y1_3;Y2_3B
A4;B4;Y1_4;Y2_4
"
    | multikv forceheader=1
    | table ID_A, ID_B, Y1, Y2 ]
| append
    [ | makeresults
    | eval _raw="ID_B;ID_C;Z1
B1;C1;Z1_1
B3;C3;Z1_3
B5;C5;Z1_5
"
    | multikv forceheader=1
    | table ID_B, ID_C, Z1 ]
| table ID_A, ID_B, ID_C, X1, X2, Y1, Y2, Z1
As I mentioned, the problem was that we have an application on the UFs and indexers for SSL log encryption. Someone put the wrong password for the .*pem file in the config file, and because of that forwarders started to disappear from the console as inactive. The first thing you should check: review both indexer and forwarder logs for any connection problems.

Best, Eugene
Hi. Trying with field transformations:

And adding them to the sourcetype:

But it does not work. Is there anything wrong?

Thank you all!! BR
The "rest" in my answer is an SPL command.  The same REST endpoint can be accessed via port 8089 after the port is enabled. ACS will not get you all of the KOs owned by a user.
last one worked!