All Posts

Sure. For example, a user called abc uploaded two files today, named abc.1 and abc.2. The same user abc uploaded four files yesterday: abc.1, abc.2, abc.3, abc.4. I want to create a table with the user name, the uploaded file counts for today and yesterday, and the count of files missing compared to the previous day. In this scenario:

User    Today    Yesterday    Missing Files from Previous Day    (in Percentage)
abc     2        4            2                                  100%
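A minimal sketch of one way to build that table, extending the chart-by-Day search from the original question (the base search and the User_Id field name are carried over from that post; the percentage is taken as missing files relative to today's count, matching the example above):

base search earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by User_Id, Day
| fillnull value=0 Today Yesterday ``` users with no uploads on one day get a zero count ```
| eval Missing_Files=Yesterday-Today
| eval Percentage=if(Today>0, round((Missing_Files/Today)*100,0)."%", "N/A") ``` guard against division by zero ```

For abc this yields Today=2, Yesterday=4, Missing_Files=2 and Percentage=100%.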
Thanks for that. I created the file in /opt/splunk/etc/system/local/props.conf as follows:

[default]
[host::router.xxxxxxxx]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%D%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true

I am still getting the discrepancy. Perhaps my props.conf file is not in the correct format, or not in the right spot for Splunk to read?
So it's an OPNsense firewall.
Hello Team,

I would like to install the UF on a Linux server, but I got confused. Which should I open: "9997 for the indexer cluster and 8089 for the deployment server" OR "9997 and 8089 for the deployment server"? Can anybody help with the port requirements?
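For reference, a minimal sketch of the two connections a UF typically makes; the hostnames below are placeholders, and 9997/8089 are the defaults rather than fixed requirements:

# outputs.conf on the forwarder: event data is sent to the indexers
# on their receiving port (9997 by default)
[tcpout:primary_indexers]
server = idx1.example.com:9997

# deploymentclient.conf on the forwarder: configuration is pulled from
# the deployment server's management port (8089 by default)
[target-broker:deploymentServer]
targetUri = ds1.example.com:8089

Both connections are outbound from the forwarder, so the ports need to be reachable on the indexers (9997) and on the deployment server (8089), not opened on the UF host itself.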
Thanks!! I used min/max because addinfo didn't work for me, but it doesn't work: when I select a range (for example 4 hours) in the time filter, the data I get is not within that range. Maybe I should do something with $field1.earliest$ and $field1.latest$? My code:

<search id="bla">
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
  <query>| loadjob savedsearch="mp:search:query name"
| eventstats max(_time) as maxtime, min(_time) as mintime
| where $pc$ AND $version$ AND strptime(TimeStamp,"%F %T.%3N")&gt;mintime AND strptime(TimeStamp,"%F %T.%3N")&lt;maxtime</query>
</search>
<fieldset submitButton="true" autoRun="false">
  <input type="time" token="field1">
    <label></label>
    <default>
      <earliest>-1d@h</earliest>
      <latest>now</latest>
    </default>
  </input>
Thanks for the response. I have tried the following, but it times out. I assume it's a port issue.

[license]
manager_uri = https://servername:8089
I have no idea what that means. Can you give an example of your expected results and how you think they should be calculated?
Hi,

In general, if you can do it from the UI, there is an undocumented API that will let you do it. For processes, I can see there is a SIM API, e.g.

https://<tenant>.saas.appdynamics.com/controller/sim/v2/user/machines/<node>/processes?timeRange=last_1_hour.BEFORE_NOW.-1.-1.60&limit=1000&sortBy=CLASS

so in theory you should be able to iterate this API asynchronously for all the nodes. I can add this to my free Rapport tool to demonstrate.
Hey, check this out, though I'm not sure at the moment how to get an associated service: https://community.splunk.com/t5/Splunk-IT-Service-Intelligence/List-ITSI-entities-with-all-related-aliases-and-informational/m-p/576753
Is there a way to get the difference between today's volume and yesterday's volume, as a percentage?

Current SPL:

base search earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by User_Id, Day

Expected result:

User_Id    Today    Yesterday    Percentage_Difference
abc        5        10           100%
xyz        2        4            100%
Try setting the value of the dropdowns to be the parts of the search which are different:

label1 as Aruba NetWorks, value1 as node = "Aruba NetWorks" | table node_dns node_ip region
label2 as Cisco, value2 as node = "Cisco" | table Name

Then change your search to use the token, like this:

index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src
| eval node=if(isnotnull(node_vendor),"Cisco","Aruba NetWorks")
| search $<dropdown token>$
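For instance, a minimal Simple XML sketch of such a dropdown (the token name vendor_filter is an assumption, and the quotes inside the values are XML-escaped):

<input type="dropdown" token="vendor_filter">
  <label>Vendor</label>
  <choice value="node = &quot;Aruba NetWorks&quot; | table node_dns node_ip region">Aruba NetWorks</choice>
  <choice value="node = &quot;Cisco&quot; | table Name">Cisco</choice>
</input>

Since Simple XML token substitution is textual, $vendor_filter$ drops the selected search fragment, pipes included, straight into the query.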
I was trying something along the lines of dynamic field creation. The issue is that we have multiple dot-notation field names with different prefixes but a common suffix (e.g. file_watch.sgid and execve.sgid). There are about 40 prefixes and 50 or more suffixes, and not all prefixes have all suffixes. What I wanted to do was create a dashboard that shows the prefixes as rows and the suffixes as columns, with an x marking cells where prefix.suffix is non-null, based on a search over the last 24 hours.
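A minimal sketch of one way to get that matrix, assuming fieldsummary's per-field count is an acceptable stand-in for "non-null somewhere in the window" (the index name is a placeholder):

index=your_index earliest=-24h
| fieldsummary ``` one row per field, with the count of events where it appears ```
| where count>0 AND like(field, "%.%")
| rex field=field "^(?<prefix>[^.]+)\.(?<suffix>.+)$"
| eval mark="x"
| chart limit=0 values(mark) over prefix by suffix ``` limit=0 keeps all 50-odd suffix columns out of OTHER ```

Cells stay blank where a prefix/suffix combination never appears.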
Dear isoutamo,

Thanks for your reply! I created indexes through the MN (manager node) and then linked the search head nodes to the indexes of the peer cluster by executing the command "./splunk edit cluster-config -mode searchhead -master_uri <Index Cluster Master URI>" on each search head cluster node. In fact, this approach works. However, my self-developed add-on also has automatic index creation configured. I found this behavior via GPT: if I write data to this index through the API, it is actually written to the same-named index in the peer cluster. This is also the result I want, because I want to achieve synchronization of add-on data between search head clusters this way. The current situation is that I have three search head nodes, and two of them have achieved this effect. The other node still writes data to the index created on its own node, not the index in the peer cluster.
Try something like this

<query>| loadjob savedsearch="mp:search:query name"
| addinfo
| where $pc$ AND $version$ AND strptime(TimeStamp,"%F %T.%3N")&gt;info_min_time AND strptime(TimeStamp,"%F %T.%3N")&lt;info_max_time</query>
Try something like this

| stats count by timestamp, uid
Can you show me, from your experience, how to parse my timestamp field so that it can be compared with the earliest/latest parameters, please?
As I said, you need to parse your timestamp field using the strptime() function so that you can compare it with other time values, e.g. earliest and latest. Having said that, you should probably use addinfo to get the min and max times used in the search.

Thanks. I'm trying to do something like that, but it doesn't work. (My TimeStamp field format is: 2023-11-07 16:43:05.227.)

<form version="1.1" theme="dark">
  <label>time try</label>
  <search id="bla">
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
    <query>| loadjob savedsearch="mp:search:query name"
| where $pc$ AND $version$ AND TimeStamp&gt;$field1.earliest$ AND TimeStamp&lt;$field1.latest$</query>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-1d@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="pc" searchWhenChanged="true">
      <label>pc</label>
      <choice value="%">All</choice>
      <default>%</default>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>(pc like("</valuePrefix>
      <valueSuffix>"))</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>pc</fieldForLabel>
      <fieldForValue>pc</fieldForValue>
      <search base="bla">
        <query>| where ($version$) | dedup pc | fields pc</query>
      </search>
    </input>
........
Hi,

I need some help removing duplicates from a table. I am querying the accounts which use the plain-port (LDAP) connection for a particular timestamp.

My query:

index=*** host=host1 OR host=host2 source=logpath
| transaction startswith=protocol=LDAP
| search BIND REQ NOT "protocol=LDAPS" NOT
| dedup "uid"

If I use the above query in a table, I get two values in one row, and for another timestamp the same value is repeated even though I am using dedup. I have tried consecutive=true, but I am still seeing duplicates in the uid column. The results came out like this:

timestamp                        uid
2023-12-12T05:44:23.000-05:00    abc xyz
2023-12-12T05:45:20.000-05:00    abc efg 123
2023-12-12T05:45:20.000-05:00    xyz 456 efg

I need each value in a single row, and no duplicates should be displayed. Help will be much appreciated!!!
Hi @yuvaraj_m91,

you have to use the coalesce function with eval to put both field values in the same field, something like this:

<your_search>
| eval Error=coalesce(error_message,error_response)
| stats count BY Error

Ciao.

Giuseppe