All Posts

I am trying to test index-time field extraction and want to know how to refine the field extraction using the SOURCE_KEY keyword. How can I refine my field extraction if I can't use SOURCE_KEY twice?
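For reference, a minimal index-time extraction sketch in transforms.conf (referenced from a TRANSFORMS- class in props.conf); the stanza name, regex, and field name here are hypothetical:

[extract_vendor]
# Match against the source metadata rather than the raw event text
SOURCE_KEY = MetaData:Source
REGEX = /logs/(\w+)/
FORMAT = vendor::$1
# Required so the extracted field is written at index time
WRITE_META = true

Each transform accepts only one SOURCE_KEY, but you can chain several transforms in the same TRANSFORMS- class, each with its own SOURCE_KEY.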
Our network devices send data to a syslog server and then up to our Splunk instance. I have a few TAs that I've requested our SysAdmin team install on my behalf so the logs can be parsed out. However, the TAs aren't parsing the data, and furthermore the network device logs come in with the sourcetype "syslog" rather than the sourcetypes defined in the respective TAs. Where do I need to look, or have the SysAdmins look? (I'm just a power user.)
I'm not sure I understand correctly. Can you please explain a bit more about what you have in mind? Thank you so much.
Please share your full search so we might be able to determine why you are not getting any results.
No. Splunk Cloud "underneath" is essentially the same indexing and searching machinery embedded within some additional management layer, so the delete command works exactly the same - it marks the data as "unreachable". It doesn't change anything else; the data has already been ingested, so it counts against your license entitlement.
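As a minimal sketch of how delete is used (the index and source here are hypothetical, and your role needs the can_delete capability):

index=web source="/var/log/bad_feed.log" | delete

The events stop appearing in search results, but the disk space and licence usage for them are not reclaimed.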
If you're using the timechart command, it generates a zero count for periods where there are no values. Otherwise you need this approach: https://www.duanewaddle.com/proving-a-negative/
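For example, a sketch assuming a hypothetical index named web - timechart emits count=0 for every empty hourly bucket in the time range:

index=web | timechart span=1h count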
Really? You expect your users to search through 25000 choices in your dropdown? You might be better off (or at least your users might be) using another dropdown to provide a filter for the list of 25000, e.g. values beginning with A, values beginning with B, etc.
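As a sketch of that cascading approach, the second dropdown could be populated by a search that filters on the first dropdown's token; the lookup name, field, and token here are hypothetical:

| inputlookup all_hosts.csv | search host="$prefix$*" | dedup host | fields host | sort host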
@ITWhisperer Thank you for your response.  But it did not work.  I don't get any results.
@ITWhisperer I assume that 4624 means the Windows event code, so they are standard Windows event logs. @sujald The easiest approach is indeed the one shown by @ITWhisperer, but it will show you the number of logins aligned to 10-minute periods. So if someone logged in every minute from 10:13 till 10:26, you will get two separate "buckets" of logins - one starting at 10:10, another at 10:20. If you want a moving-window count, you'll need to employ the streamstats command with time_window=10m.
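A minimal sketch of the moving-window variant, assuming a hypothetical index name and the standard user field (time_window requires events ordered by _time, which the default descending search order provides):

index=wineventlog EventCode=4624 | streamstats time_window=10m count as logins_last_10m by user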
Hello, I have a problem with the dropdown menu limit, which displays a maximum of 1000 values. I need to display a list of 22000 values and I don't know how to do it. Thank you so much.
Hi team,

Currently we have implemented the standalone servers below.

Zone-1
Environment | Server Name | IP | Splunk Role
DEV | L4 | | Search Head + Indexer
QA | L4 | | Search Head + Indexer
 | L4 | | Deployment Server

Zone-2
Environment | Server Name | IP | Splunk Role
DEV | L4 | | Search Head + Indexer
QA | L4 | | Search Head + Indexer

In our environment there are only 2 Search Head + Indexer servers on the same instance. How do we implement high availability? Please help me with the process.
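Without more detail it is hard to be prescriptive, but indexer clustering is the usual route to HA for this layout. A minimal server.conf sketch, assuming Splunk 8.1+ setting names and placeholder hostnames/keys:

# On the cluster manager
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = <shared secret>

# On each indexer (cluster peer)
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <shared secret>

[replication_port://9887]

Search head clustering would be the equivalent step for the search tier.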
The forwarders are unable to connect to the indexers and send their logs.  Verify the outputs.conf settings are correct on the forwarders.  Check the URL, the port (9997 is the default), and the certificate (if SSL/TLS is used).  Also, check that the network is allowing connections to the indexers.
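A typical outputs.conf on the forwarders looks roughly like this sketch; the group name and indexer hostnames are placeholders:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997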
Perhaps the alert is not configured as expected.  Please share the savedsearches.conf stanza for the alert so we can check for errors.
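For reference, an alert stanza typically looks something like the sketch below; the stanza name, search, schedule, and recipient are all hypothetical:

[Failed Login Alert]
search = index=wineventlog EventCode=4625
cron_schedule = */30 * * * *
is_scheduled = 1
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
actions = email
action.email.to = ops@example.com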
On my Splunk on Windows the add-on is very slow and I get some error messages:

07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': cfg = cli.getConfStanza("ta_databricks_settings", "logging")
07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': File "D:\apps\Splunk\etc\apps\TA-Databricks\bin\log_manager.py", line 32, in setup_logging
07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': _LOGGER = setup_logging("ta_databricks_utils")

These errors happen for about 60 seconds, then the connection is established and I receive the data.
Because you are using _time as your x-axis, the chart will show all times in your time range. You could change your chart settings so that the lines are not joined. Alternatively, you could rename the _time field to something else, but then you would also have to format the time - you may also have to remove events where the value is null (depending on how your search is set up):

| rename _time as time
| fieldformat time=strftime(time,"%F %T")

However, this is likely to lead to the x-axis values having ellipses in them, so you could rotate the labels.
Hi @tscroggins, how can we represent a server icon for the nodes? Could you please let me know. Thanks in advance!
It is a bit difficult to suggest a solution without knowing what your events look like. Please share some anonymised representative events. Alternatively, if the account name in your events is "user", you could try something like this:

| bin _time span=10m
| stats count by _time user
Hi Team,

An alert is scheduled to run every 2 hours, but it is getting skipped. Per day the alert should run 12 times, so for a week that is 12*7 = 84 runs. Yet the skipped search results show the alert was skipped 3000 times in the last 7 days. How is that possible?

The search below is used to find the skipped searches:

splunk_server=*prod1-heavy index="_internal" sourcetype="scheduler" host=*-prod1-heavy
| eval scheduled=strftime(scheduled_time, "%Y-%m-%d %H:%M:%S")
| lookup search_env_mapping host AS host OUTPUT tenant
| stats count values(scheduled) as scheduled values(savedsearch_name) as search_name values(status) as status values(reason) as reason values(run_time) as run_time values(dm_node) as dm_node values(sid) as sid by savedsearch_name tenant
| sort -count
| search status!=success
| table scheduled, savedsearch_name, status, reason, count, tenant
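One thing worth checking in that search: the stats counts every scheduler event per saved search before the status filter is applied, so the count mixes successes, skips, and other statuses. A sketch that counts only skipped executions (the alert name is a placeholder):

index=_internal sourcetype=scheduler status=skipped savedsearch_name="Your Alert Name"
| timechart span=1d count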
| eval results=if(results=0,"No events Found",results)
Hi, I would like to create a time chart for a specified time window, say 8AM to 2PM, every day for the last 30 days. I am able to chart it; however, in the visualisation the line from 2PM to the next day's 8AM is a straight line. How can we exclude that line for the duration (2PM to next day 8AM) and just show the chart for 8AM to 2PM every day as a single line? Can we exclude the green box line?

Query used (just the conditions):

| eval hour=tonumber(strftime(_time,"%H"))
| where hour >= 8
| where hour <= 14
| fields - hour