All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Thanks for the response. I have tried the following, but it times out, so I assume it's a port issue.

[license]
manager_uri = https://servername:8089
I have no idea what that means. Can you give an example of your expected results and explain how you think they should be calculated?
Hi, in general, if you can do it from the UI, there is an undocumented API that will let you do this. For processes, I can see there is a SIM API, e.g. https://<tenant>.saas.appdynamics.com/controller/sim/v2/user/machines/<node>/processes?timeRange=last_1_hour.BEFORE_NOW.-1.-1.60&limit=1000&sortBy=CLASS, so in theory you should be able to iterate this API asynchronously for all the nodes. I can add this to my free Rapport tool to demonstrate.
Hey, check this out, though I'm not sure at the moment how to get an associated service: https://community.splunk.com/t5/Splunk-IT-Service-Intelligence/List-ITSI-entities-with-all-related-aliases-and-informational/m-p/576753
Is there a way to get the difference between today's volume and yesterday's volume as a percentage?

Current SPL:

base search earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by User_Id, Day

Expected result:

User_Id  Today  Yesterday  Percentage_Difference
abc      5      10         100%
xyz      2      4          100%
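For reference, one way the percentage column could be added after the chart. This is a sketch under the assumption that the difference is measured relative to today's count, which matches the expected rows (e.g. (10-5)/5 = 100%):

```spl
base search earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by User_Id, Day
| eval Percentage_Difference=round(abs(Yesterday-Today)/Today*100,0)."%"
```

If the baseline should be yesterday's count instead, swap `Today` for `Yesterday` in the divisor.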
Try setting the value of the dropdowns to be the parts of the search which are different:

label1 = Aruba NetWorks, value1 = node = "Aruba NetWorks" | table node_dns node_ip region
label2 = Cisco, value2 = node = "Cisco" | table Name

Then change your search to use the token like this:

index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src
| eval node=if(isnotnull(node_vendor),"Cisco","Aruba NetWorks")
| search $<dropdown token>$
I was trying something along the lines of dynamic field creation. At issue is that we have multiple dot-notation field names with different prefixes but a common suffix (e.g. file_watch.sgid and execve.sgid). There are about 40 prefixes and 50 or more suffixes, and not all prefixes have all suffixes. What I wanted to do was create a dashboard that would show the prefixes as rows and the suffixes as columns, with an x marking cells with non-null values for prefix.suffix, based on a search over the last 24 hours.
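A sketch of one way to build that matrix, assuming the dotted names are ordinary search-time fields visible to `fieldsummary` (whose per-field count is the number of events with a non-null value), with `your_search` standing in for the real base search:

```spl
your_search earliest=-24h
| fieldsummary
| rex field=field "^(?<prefix>[^.]+)\.(?<suffix>[^.]+)$"
| where isnotnull(prefix) AND count > 0
| eval present="x"
| xyseries prefix suffix present
```

`xyseries` leaves cells blank where a prefix.suffix field never appeared, which gives the x-marks-the-cell layout directly.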
Dear isoutamo: Thanks for your reply! I created the indexes through the MN (manager node) and then pointed the search head nodes at the indexes of the indexer cluster by running "./splunk edit cluster-config -mode searchhead -master_uri <Indexer Cluster Manager URI>" on each search head cluster node. In fact, this approach works. However, my self-developed add-on also has automatic index creation configured. I found this behavior via GPT: if I write data to this index through the API, it is actually written to the same-named index on the indexer cluster. This is also the result I want, because I want to synchronize add-on data between the search head clusters this way. The current situation is that I have three search head nodes, and two of them achieve this effect. The other node still writes data to the index created on the node itself, not the index on the indexer cluster.
Try something like this

<query>
| loadjob savedsearch="mp:search:query name"
| addinfo
| where $pc$ AND $version$ AND strptime(TimeStamp,"%F %T.%3N")&gt;info_min_time AND strptime(TimeStamp,"%F %T.%3N")&lt;info_max_time
</query>
Try something like this | stats count by timestamp, uid
Can you tell me from your experience how to parse my timestamp field so that it can be compared with the earliest and latest parameters, please?
As I said, you need to parse your timestamp field using the strptime() function so that you can compare it with other time values, e.g. earliest and latest. Having said that, you should probably use addinfo to get the min and max times used in the search.
Thanks. I'm trying to do something like that but it doesn't work (my TimeStamp field format is 2023-11-07 16:43:05.227):

<form version="1.1" theme="dark">
  <label>time try</label>
  <search id="bla">
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
    <query>
| loadjob savedsearch="mp:search:query name"
| where $pc$ AND $version$ AND TimeStamp&gt;$field1.earliest$ AND TimeStamp&lt;$field1.latest$
    </query>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-1d@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="pc" searchWhenChanged="true">
      <label>pc</label>
      <choice value="%">All</choice>
      <default>%</default>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>(pc like("</valuePrefix>
      <valueSuffix>"))</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>pc</fieldForLabel>
      <fieldForValue>pc</fieldForValue>
      <search base="bla">
        <query> | where ( $version$) | dedup pc | fields pc </query>
      </search>
    </input>
........
Hi, I need some help removing duplicates from a table. I am querying the accounts which use the plain-port connection (LDAP) for a particular timestamp.

My query:

index=*** host=host1 OR host=host2 source=logpath
| transaction startswith=protocol=LDAP
| search BIND REQ NOT "protocol=LDAPS"
| dedup "uid"

If I use the above query in a table I get two values in a row, and for another timestamp the same value is repeated even though I am using dedup. I have tried consecutive=true, but in the uid column I am still seeing duplicates. The results came out like this:

timestamp                        uid
2023-12-12T05:44:23.000-05:00    abc xyz
2023-12-12T05:45:20.000-05:00    abc efg 123
2023-12-12T05:45:20.000-05:00    xyz 456 efg

I need each value in a single row, and no duplicates should be displayed. Help will be much appreciated!
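One likely cause, assuming the symptoms above: transaction collapses all events of one LDAP exchange into a single result, so uid becomes a multivalue field, and dedup then compares the whole set of values rather than each uid. A sketch that flattens the values before deduplicating (field and host names taken from the query above):

```spl
index=*** host=host1 OR host=host2 source=logpath
| transaction startswith="protocol=LDAP"
| search BIND REQ NOT "protocol=LDAPS"
| mvexpand uid
| dedup uid
| table _time uid
```

`mvexpand uid` emits one row per uid value, so dedup can act on single values and the table shows one uid per row.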
Hi @yuvaraj_m91, you have to use the eval coalesce function to put both field values in the same field, something like this:

<your_search>
| eval Error=coalesce(error_message,error_response)
| stats count BY Error

Ciao. Giuseppe
I have two different logs where the error is captured in a different field in each log message (error_message and error_response). How do I capture both error_message and error_response without dropping the other logs?

Log 1: message:"Lambda execution: exit with failure", message_type:"ERROR", error_message:"error reason update"
Log 2: message:"Lambda execution: exit with failure", message_type:"ERROR", error_response:"updated error reason"

Expected output:

Error                   count
error reason update     1
updated error reason    1
Hi, I want to execute a different SPL query in a Dashboard Studio panel based on a dropdown value. The dropdown has only two items: if "Item1" is selected, a particular panel of the dashboard should execute "Query1"; if "Item2" is selected, the same panel should execute "Query2".

Item1 = "Aruba NetWorks"
Item2 = "Cisco"

Query1 =
index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src
| eval node=if(isnotnull(node_vendor),"Cisco","Aruba NetWorks")
| search node = $<dropdown token>$
| table node_dns node_ip region

Query2 =
index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src
| eval node=if(isnotnull(node_vendor),"Cisco","Aruba NetWorks")
| search node = $<dropdown token>$
| table Name

Kindly guide. Thanks, Abhineet Kumar
When I'm in the dashboard panel I can't seem to find a download button anywhere, so I'm not sure if I can download the results.
Rather than exporting the dashboard, can you download the results from the panel?
I have a dashboard with the event feeds of these browser tests, but when I try to export the dashboard it exports as a JSON file, and I need a CSV or another file format that shows the results of the tests.