Hi @gcusello  Let me clarify: we receive device status logs every 2 minutes from AWS Cloud. These logs indicate both online and offline statuses. If a device goes offline, we continuously receive offline logs until it comes back online, at which point we receive online logs for that specific device. My requirement is to trigger a critical alert for the end user when a particular device goes offline, and then notify the end user when the device comes back online. I need to create an alert based on this. Is this possible? I have already shared example logs in this conversation. Moreover, this type of alert is working in another observability application, and we are now migrating to Splunk. I hope this clarifies my requirement. Please let me know if anything else is required.
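One possible shape for the "device went offline" alert, as a minimal sketch: it assumes the events carry fields named device_id and status with values "online"/"offline" (the index, sourcetype, and field names are placeholders, since the example logs are not reproduced here). Scheduled every few minutes, it fires for devices whose most recent status is offline; a second, identical search with current_status="online" could drive the recovery notification.

```spl
index=aws_devices sourcetype=device_status earliest=-5m
| stats latest(status) as current_status by device_id
| where current_status="offline"
```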
Hi, thanks for the reply. We can't make changes to the dropdown, as its fields are populated from an SPL query and it is used by other panels in the dashboard. I need assistance to change the panel's SPL query itself so that it picks the query based on the dropdown field. Alternatively, you could help me change the JSON code to manage both SPL queries according to the dropdown menu. Thanks, Abhineet Kumar
Many thanks for your time and insights @ITWhisperer  it works as expected.
We got output in a table, but all the values for each output field are in one column. We want to split the values into rows. Below is the output table for reference. Please help to split it.
base search earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| chart count by User_Id, Day
| eval Percentage_Difference = ((Yesterday - Today) / Yesterday) * 100
Sorry, my bad, it should be 50% variance. Today = 2, yesterday = 4. (Yesterday count - Today count) / Yesterday count * 100 = (4 - 2) / 4 * 100 = 2/4 * 100 = 50%.
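Building on the chart-based search suggested earlier in the thread, a sketch of the last steps with the missing-file count added (the Yesterday and Today field names come from the Day eval in that search). With Today = 2 and Yesterday = 4, this would give Missing_Files = 2 and a 50% difference:

```spl
| chart count by User_Id, Day
| eval Missing_Files = Yesterday - Today
| eval Percentage_Difference = round((Yesterday - Today) / Yesterday * 100, 0)
```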
Hi @parthiban, there's probably a misunderstanding on the condition to check: I understood that you want to check whether status="recovery" or status="down", and I check for these statuses, but what's your requirement? With your search you check status="down" and status="online"; is that the requirement? Ciao. Giuseppe
The command is addinfo, not add_info. The problem with using "$field1.earliest$" and "$field1.latest$" is that they can contain strings rather than epoch times, whereas addinfo provides the epoch times derived from the time picker.
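A quick way to see what addinfo provides, as a throwaway sketch (makeresults just generates a single dummy event to run the command against): addinfo attaches the search's time-range bounds as epoch fields, which you can then compare other timestamps against.

```spl
| makeresults
| addinfo
| table info_min_time info_max_time info_search_time
```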
Universal forwarders (UFs) usually do not need to listen on any port, since they typically read local files. They can opt to read TCP data on any port or Splunk protocol on port 9997. A UF must be able to connect to indexers on port 9997. If you have several UFs, it's a good idea to use a Deployment Server (DS) to manage them. UFs talk to the DS on port 8089.
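The two connections described above map to two config files on the UF, sketched here with placeholder hostnames (idx1/idx2 and ds.example.com are assumptions; substitute your own indexers and deployment server):

```ini
# outputs.conf - UF connects OUT to the indexers on 9997
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# deploymentclient.conf - UF connects OUT to the DS on 8089
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

In both cases the UF initiates the connection, so on the UF host itself no inbound port needs to be opened, only outbound access to 9997 and 8089.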
How is 2 missing 100%? 100% of what?
I tried a different server in the same IP range to test, and the method you mentioned worked just fine. So it was the port being blocked. Thanks for the help.
Make sure the LM is running.  Confirm 8089 is the LM's management port (it's the default).  Verify your firewalls allow connections to that port.
On which instance did you install those settings?  They should be on the indexers (and heavy forwarders, if you have them).  Did you restart the instances after modifying the file?  Are you looking at new data?  The changes will not affect indexed data.  Do you have the correct host name in the stanza?  Have you tried using the sourcetype name instead of host?
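One more thing worth checking in the stanza shared earlier in this thread: the TIME_FORMAT uses %D, which is strptime shorthand for %m/%d/%y, where %S (seconds) looks intended. A sketch of the stanza with that one change (the host pattern is kept as posted; the empty [default] stanza can be dropped):

```ini
[host::router.xxxxxxxx]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
```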
I added the command | addinfo and I think it works. I will do validations, but thanks a lot!
Sure. For example, a user called abc uploaded two files today, named abc.1 and abc.2. The same user abc uploaded four files yesterday: abc.1, abc.2, abc.3, abc.4. I want to create a table with the user name, the uploaded file counts for today and yesterday, and the count of files missing compared with the previous day. In this scenario:

User  Today  Yesterday  Missing Files from Previous Day
abc   2      4          2 (in percentage: 100%)
Thanks for that. I created the file in /opt/splunk/etc/system/local/props.conf as follows:

[default]

[host::router.xxxxxxxx]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%D%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true

I am still getting the discrepancy. Perhaps my props.conf file is not in the correct format, or not in the right spot for Splunk to read?
So it's an OPNsense firewall.
Hello Team, I would like to install the UF on a Linux server, but I got confused. Which ports should I open: "9997 for the indexer cluster and 8089 for the deployment server", or "9997 and 8089 for the deployment server"? Can anybody help with the port requirements?
Thanks!! I used min/max because add_info didn't work for me. But it doesn't work: when I select a range (for example 4 hours) in the time filter, the data I get is not within this range. Maybe I should do something with $field1.earliest$ and $field1.latest$? My code:

<search id="bla">
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
  <query>
    | loadjob savedsearch="mp:search:query name"
    | eventstats max(_time) as maxtime, min(_time) as mintime
    | where $pc$ AND $version$ AND strptime(TimeStamp,"%F %T.%3N")&gt;mintime AND strptime(TimeStamp,"%F %T.%3N")&lt;maxtime
  </query>
</search>
<fieldset submitButton="true" autoRun="false">
  <input type="time" token="field1">
    <label></label>
    <default>
      <earliest>-1d@h</earliest>
      <latest>now</latest>
    </default>
  </input>
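Following the addinfo correction elsewhere in this thread (the command is addinfo, one word), a sketch of how the query element might look: addinfo exposes info_min_time and info_max_time as epoch values taken from the time picker, so the eventstats min/max over loadjob results is no longer needed. The savedsearch name and the $pc$/$version$ tokens are kept from the original post.

```xml
<query>
  | loadjob savedsearch="mp:search:query name"
  | addinfo
  | where $pc$ AND $version$
      AND strptime(TimeStamp,"%F %T.%3N")&gt;info_min_time
      AND strptime(TimeStamp,"%F %T.%3N")&lt;info_max_time
</query>
```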
Thanks for the response. I have tried the following, but it times out. I assume it's a port issue.

[license]
manager_uri = https://servername:8089