All Posts


Howdy all, Perhaps someone can help me remember the SPL query that lists out the datasets as fields in the data model. It's something like: | datamodel SomeDataModel accelerate.... (I don't remember the rest and can't find the relevant notes I had.) Thanks in advance.
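In case it helps jog the memory, the documented forms of the datamodel command go roughly like this (SomeDataModel and SomeDataset are placeholders):

```
| datamodel
| datamodel SomeDataModel
| datamodel SomeDataModel SomeDataset search
```

The first lists all data models, the second returns the model's JSON definition, and the third expands the named dataset into a search whose results carry the dataset's fields. If the half-remembered query involved acceleration, something like `| tstats count from datamodel=SomeDataModel` may be what you're thinking of instead.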
Hello Everyone, I recently installed the Splunk DB Connect app (3.16.0) on my Splunk heavy forwarder (9.1.1). As per the documentation I installed the JRE and the MySQL add-on, and created identities, connections, and inputs. But when I check for the data, it is not getting ingested. So I enabled debug mode, checked the logs, and found an HEC token error. However, the HEC token is configured in the inputs.conf file, and I can see the same token in the Splunk web GUI under Data inputs -> HTTP Event Collector. Could you please help if anyone has faced this error before?

Errors:
ERROR org.easybatch.core.job.BatchJob - Unable to write records java.io.IOException: There are no Http Event Collectors available at this time.
ERROR c.s.d.s.dbinput.recordwriter.CheckpointUpdater - action=skip_checkpoint_update_batch_writing_failed java.io.IOException: There are no Http Event Collectors available at this time.
ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch java.io.IOException: There are no Http Event Collectors available at this time.
It does depend on what values you have in your 25k dropdown list, but if we assume that the list is generated dynamically with some sort of search, your search could include a filter which is based on a token from a much smaller dropdown so that you can limit your results to under 1000.
Dear All, I want to set up an alert on an event. The event contains three timestamps: New Event time, Last update, and startDateTime. These are logs coming from MS365. I want the alert to fire if the field "Incident Resolved = False" is still satisfied even 4 hours after the startDateTime. We receive the first event at startDateTime, but we don't want an alert until 4 hours after it. startDateTime: 2024-07-01T09:00:00Z
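As a starting point, an alert search along these lines might work. This is only a sketch: the index, sourcetype, and the exact field names ('Incident Resolved', startDateTime) are assumptions and will need adjusting to your MS365 data.

```
index=ms365 sourcetype=ms365:incident
| eval start_epoch=strptime(startDateTime, "%Y-%m-%dT%H:%M:%SZ")
| where 'Incident Resolved'="False" AND now() - start_epoch > 4*3600
```

Scheduled every 15 minutes or so, with the alert condition "number of results > 0", this would only trigger once an unresolved incident is more than 4 hours past its startDateTime.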
I am trying to test index-time field extraction and want to know how to refine the field extraction using the SOURCE_KEY keyword. How can I refine my field extraction if I can't use SOURCE_KEY twice?
Our network devices send data to a syslog server and then up to our Splunk instance. I have a few TAs that I've requested our SysAdmin team install on my behalf so the logs can be parsed out. However, the TAs aren't parsing the data, and furthermore, the network device logs come in with the sourcetype "syslog" rather than the sourcetypes defined in the respective TAs. Where do I need to look, or what should I have the SysAdmins check? (I'm just a power user.)
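One common cause of this symptom: the sourcetype is assigned at the input, not by the TA. If the forwarder on the syslog server monitors the files with a generic stanza (or no explicit sourcetype), events arrive as "syslog" and the TA's props never match. A sketch of an inputs.conf stanza on that forwarder — the path and sourcetype name here are assumptions; use the sourcetype your TA's documentation specifies:

```
[monitor:///var/log/syslog-ng/network-devices/*.log]
sourcetype = cisco:asa
index = network
```

It's also worth having the SysAdmins confirm the TAs are installed where parsing happens (the heavy forwarder or indexers), not only on the search head.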
I'm not sure I understand correctly. Can you please explain a bit more about what you have in mind? Thank you so much.
Please share your full search so we might be able to determine why you are not getting any results.
No. Splunk Cloud "underneath" is essentially the same indexing and searching machinery embedded within some additional management layer. So the delete command works exactly the same - it masks the data as "unreachable". It doesn't change anything else - the data has already been ingested so it counts against your license entitlement.
If you're using the timechart command, it generates a zero count for periods where there are no values. Otherwise you need to use this approach: https://www.duanewaddle.com/proving-a-negative/
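For example, with timechart the empty periods show up as zeros automatically (the index, filter, and span here are placeholders):

```
index=web sourcetype=access_combined status=404
| timechart span=1h count
```

Any hour in the time range with no matching events appears in the output with count=0, which is what makes timechart the easy path for "prove the absence" charts.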
Really? You expect your users to search through 25000 choices in your dropdown? You might be better off (or at least your users might be) using another dropdown to provide a filter for the 25000-item list, e.g. values beginning with A, values beginning with B, etc.
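A sketch of what that cascade could look like in Simple XML — the lookup name (items.csv) and field name (item) are assumptions standing in for whatever search populates your dropdown:

```xml
<input type="dropdown" token="prefix">
  <label>First letter</label>
  <choice value="A*">A</choice>
  <choice value="B*">B</choice>
  <default>A*</default>
</input>
<input type="dropdown" token="item">
  <label>Item</label>
  <search>
    <query>| inputlookup items.csv | search item=$prefix$ | dedup item | sort item</query>
  </search>
  <fieldForLabel>item</fieldForLabel>
  <fieldForValue>item</fieldForValue>
</input>
```

The second dropdown's populating search only runs over the filtered subset, keeping it under the 1000-value display limit.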
@ITWhisperer Thank you for your response.  But it did not work.  I don't get any results.
@ITWhisperer I assume that 4624 means the Windows EventCode, so they are standard Windows event logs. @sujald The easiest approach is indeed the one shown by @ITWhisperer, but it will show you the number of logins aligned to 10-minute periods. So if someone logged in every minute from 10:13 till 10:26, you will get two separate "buckets" of logins: one starting at 10:10 and another at 10:20. If you want a moving-window count, you'll need to employ the streamstats command with time_window=10m.
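A sketch of the moving-window variant (the index name is an assumption):

```
index=wineventlog EventCode=4624
| streamstats time_window=10m count AS logins_last_10m
```

Note that streamstats with time_window expects events ordered by _time, which the default (reverse-chronological) search output already satisfies.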
Hello, I have a problem with the dropdown menu limit, which displays a maximum of 1000 values. I need to display a list of 22000 values and I don't know how to do it. Thank you so much.
Hi team,

Currently we have implemented the standalone servers below (server names and IPs were not filled in):

Zone-1:
- DEV, L4: Search Head + Indexer
- QA, L4: Search Head + Indexer
- L4: Deployment Server

Zone-2:
- DEV, L4: Search Head + Indexer
- QA, L4: Search Head + Indexer

In our environment there are only 2 Search Head + Indexer servers, each on a single instance. How do we implement high-availability servers? Please help me with the process.
The forwarders are unable to connect to the indexers and send their logs.  Verify the outputs.conf settings are correct on the forwarders.  Check the URL, the port (9997 is the default), and the cert... See more...
The forwarders are unable to connect to the indexers and send their logs.  Verify the outputs.conf settings are correct on the forwarders.  Check the URL, the port (9997 is the default), and the certificate (if SSL/TLS is used).  Also, check that the network is allowing connections to the indexers.
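A minimal outputs.conf sketch for comparison — the group name, hostnames, and port are placeholders for your own values:

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

On the forwarder, splunkd.log (look for TcpOutputProc entries) will usually say whether the connection is being refused, timing out, or failing the TLS handshake.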
Perhaps the alert is not configured as expected.  Please share the savedsearches.conf stanza for the alert so we can check for errors.
On my Splunk on Windows the add-on is very slow and I get some error messages:

07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': cfg = cli.getConfStanza("ta_databricks_settings", "logging")
07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': File "D:\apps\Splunk\etc\apps\TA-Databricks\bin\log_manager.py", line 32, in setup_logging
07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': _LOGGER = setup_logging("ta_databricks_utils")

These errors occur for about 60 seconds, and then the connection is established and I receive the data.
Because you are using _time as your x-axis, the chart will show all times in your time range. You could change your chart settings so that the lines are not joined. Alternatively, you could rename the _time field to something else, but then you would also have to format the time, and you may also have to remove events where the value is null (depending on how your search is set up):

| rename _time as time
| fieldformat time=strftime(time,"%F %T")

However, this is likely to lead to the x-axis values having ellipses in them, so you could rotate the labels.
Hi @tscroggins, How can we represent a server icon for the nodes? Could you please let me know. Thanks in advance!