All Posts

Ok. Do a simple index=_internal host=your_uf search and run it as a real-time search. That's one of the very few use cases where a real-time search is actually warranted. If you see something, check the validity of the data. A typical problem when you're supposedly ingesting data but don't see it (apart from a non-existent destination index) is a time problem - if the timezone is misconfigured on the source, the data can appear to come from long ago, so it's indexed into the past. Then you won't see it when searching "last 15 minutes" or so.
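For example (a minimal sketch - your_uf is a placeholder for the forwarder's hostname), you can also compare event time with index time to spot timezone skew:

    index=_internal host=your_uf
    | eval lag = _indextime - _time
    | stats min(lag) max(lag) avg(lag) by sourcetype

A large positive lag means events are being indexed long after their claimed timestamp, which usually points at a timezone or clock problem on the source.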
1. Post your searches in a code block or preformatted paragraph - it helps readability.
2. Don't use the join command if you can avoid it (in this case you can probably go with stats instead - a generic sketch follows below).
3. Fields depend on the data you onboard. The only "default" thing about them is when you have them normalized to be CIM-compliant. But I don't see any datamodel applicable here.
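For point 2, a generic sketch of the stats alternative (index and field names are made up, since the original search isn't shown):

    (index=foo some_filter) OR (index=bar other_filter)
    | stats values(fieldA) as fieldA values(fieldB) as fieldB by common_key

stats combines both datasets in a single pass over the events, which avoids join's subsearch limits.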
1. This is a 5-year-old thread. Post your question as a new thread to give it visibility.
2. Be a bit more descriptive. I suspect I know what you mean, but it's nice to be sure that all parties are on the same page.
Clustering is an internal matter of the indexers; from the source's (in this case your search head's) point of view it doesn't matter. You just set the output group to both your indexers and you're good. If your indexers were clustered, they'd replicate the incoming data among themselves. When they're not clustered, only the one directly receiving an event will hold it.
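A minimal outputs.conf sketch for the search head (group name and hostnames are placeholders):

    [tcpout]
    defaultGroup = my_indexers

    [tcpout:my_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997

With both indexers in one target group, the output layer auto-load-balances events between them.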
Is it possible to download the splunkcloud.spl file by using curl?
Hi, we are getting "GnuTLS handshake retry returned error" when trying to communicate with ForeScout. Any suggestions?
Hmmm, are you talking about the role defined within the Monitoring Console? I am having tons of issues resolving this. 
You just need a _time field that isn't multivalued to be present for timechart to work. Assuming you are working with the example you replied to, you would simply add "| eval _time=min(_time) | timechart count" after the where line. The eval ensures you take the earliest time if it is multivalued.
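Putting it together (a sketch - the where clause stands in for whatever filtering the earlier example already has):

    ... | where <your condition>
    | eval _time=min(_time)
    | timechart count

Once _time is a single scalar value per event, timechart can bucket the results normally.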
Well, in that case I am really confused. I can telnet from the UF host to the indexer on port 9997. One more thing: I do see UF host names in metrics.log in my indexer logs. And tcpdump shows traffic being sent from the UF host to the indexer and, on the indexer, traffic being received from the UF host.
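For reference, the checks described would look roughly like this (hostname and interface are placeholders):

    telnet indexer.example.com 9997
    tcpdump -i eth0 host indexer.example.com and port 9997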
This is not working for me. I have the app in place on my DS (/etc/apps/DS_Fix/local/outputs.conf).
@dmarling pretty good explanation! But now I need to go one step further. Based on the result of the stats command I want to create a timechart. First I tried to replace stats with timechart, but it simply does not work. Then I created a table from the stats result and, based on this table, wanted to create a timechart. But I feel, rather than know, that it is not a good way. Would you give an example of how to do it, please? Even one based on the simple example from the source post.
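One common pattern (a hedged sketch, not your exact data - jobNumber is borrowed from elsewhere in this thread) is to bucket time first, keep _time in the stats by-clause, and then pivot:

    index=...
    | bin _time span=1h
    | stats count by _time, jobNumber
    | xyseries _time jobNumber count

For simple groupings, a direct "| timechart count by jobNumber" does the same thing in one step; the bin+stats+xyseries form is useful when the stats part is more involved.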
Ah, I should have expected that.  Try my revised query without IN.
After trying that, it errors out saying "Error in 'search' command: Unable to parse the search: Right hand side of IN must be a collection of literals. '((jobNumber = "3333") OR (jobNumber = "11111") . OR ..."
In a simple use case you can get a similar effect by using the autoregress command. But since streamstats is way more powerful and can also be used in simple cases, people tend to use it even for those simple cases.
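To illustrate on the simplest "previous value" case (value is a placeholder field name):

    ... | autoregress value p=1

produces value_p1 holding the prior event's value, while the streamstats equivalent would be:

    ... | streamstats current=f window=1 last(value) as prev_value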
These are two separate issues. If you have local permissions/SELinux issues you might not be able to ingest "production" data, but you should still be getting events into the _internal index, since these are the forwarder's own logs. Check splunkd.log on the forwarder and see if it's able to connect to the receiving indexer(s). If not, see what the reason is.
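A quick way to check this on the forwarder itself (assuming the default UF install path):

    grep TcpOutputProc /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20

The TcpOutputProc entries show whether the forwarder is connecting to its configured indexers and, if not, why.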
If you mean that you want to ingest data available over some HTTP endpoint, you need to either have a scripted or modular input polling said endpoint, or have an external script pulling the data periodically and either writing it to a file (from which you'd ingest with a normal monitor input) or pushing it to a HEC endpoint - these are the most straightforward options. If I remember correctly, the Add-on Builder can be used to make such a polling input for external HTTP sources.
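A minimal shell sketch of the external-script route (URL, token, and paths are placeholders):

    # option 1: append to a file and pick it up with a monitor input
    curl -s https://api.example.com/data >> /var/log/api_poll/data.json

    # option 2: push straight to a HEC endpoint
    curl -sk https://splunk.example.com:8088/services/collector/event \
        -H "Authorization: Splunk <hec_token>" \
        -d '{"event": {"message": "hello"}}'

Cron (or a scripted input's interval) handles the "periodically" part.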
I am using a curl command to get data from an API endpoint. The data comes in as a single event, but I want to store each event separately as it comes through. I want to get a timechart from that.
I've implemented your suggested logic and enhanced it to detect password spray attempts and also alert when there's a successful login from the same source following a spray attempt. Here's a summary of the changes:

1. Added a check for successful logins:

    dc(eval(if('data.type'="s", 'data.user_name', null()))) AS unique_successful_accounts

2. Categorized alerts: an eval statement differentiates the alert type as "Successful After attempt" if there's a successful login after the failed attempts.

These changes ensure that the query not only detects password spray attempts but also alerts when there's a successful login following the spray attempt. Thank you so much for your help!
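For context, a hedged sketch of how those pieces might fit together (index name, threshold, and the grouping field are assumptions, not the exact query):

    index=auth
    | stats dc(eval(if('data.type'="f", 'data.user_name', null()))) AS unique_failed_accounts
            dc(eval(if('data.type'="s", 'data.user_name', null()))) AS unique_successful_accounts
            by data.src_ip
    | where unique_failed_accounts >= 10
    | eval alert_type=if(unique_successful_accounts > 0, "Successful After attempt", "Spray attempt")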
I built a new index intended for storing a report of some very heavily modified and correlated vulnerability data. I figured the only way to get this data to properly match the CIM requirements was through a lot of evals and lookup correlations. After doing all of that, I planned on spitting it back into a summary index and having that be part of the Vulnerability data model.

Anyway, I scheduled the report and enabled summary indexing, but my new index doesn't show up in the list of indexes. I noticed a few indexes are missing from the list. Also, the filter doesn't even work, lol - indexes that are clearly visible in the list do not filter in when you type the name of the index. Very strange.

I'm an admin and I've done this a few times previously. This particular index is just giving me issues. Not sure what I need to do besides delete it and rebuild it.
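For what it's worth, the summary-indexing step can also be done explicitly in the scheduled search with collect (index name is a placeholder):

    ... <evals and lookups> ...
    | collect index=vuln_summary

though that wouldn't explain the index missing from the picker, which sounds like a UI or permissions quirk.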
Thanks. I tried "index=_internal | stats count by host" but don't see the newly installed UF host name there. Then I tried "./splunk add forward-server <host name or ip address>:<listening port>", but it says it's already there. So I removed both inputs.conf and outputs.conf and reran the above command, which re-created outputs.conf. I also re-added inputs.conf manually and then restarted Splunk, without any success.

I do see errors in splunkd.log on the UF, as shown below. Maybe it's a permission issue.

    TailReader [19453 tailreader0] - error from read call from '/var/log/message'.
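To test the permission theory (assuming the UF runs as user splunk - adjust to your service account - and using the path from the error):

    ls -l /var/log/message
    sudo -u splunk head -1 /var/log/message

If the read fails, fix the permissions/ACLs or run the UF as a user that can read the file. Also double-check the monitor stanza: the standard syslog file is usually /var/log/messages, with an "s".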