All Posts

These are two separate issues. If you have local permissions/SELinux issues, you might not be able to ingest "production" data, but you should still be getting events into the _internal index, since those are the forwarder's own logs. Check splunkd.log on the forwarder and see whether it's able to connect to the receiving indexer(s). If not, the log should tell you why.
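On the forwarder itself, something like this will surface connection errors quickly (the path assumes a default Linux UF install under /opt/splunkforwarder):

grep -iE "TcpOutputProc|Connect" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20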
If you mean that you want to ingest data available over some HTTP endpoint, you need either a scripted or modular input polling said endpoint, or an external script pulling the data periodically and either writing it to a file (from which you'd ingest with a normal monitor input) or pushing it to a HEC endpoint - these are the most straightforward options. If I remember correctly, Add-on Builder can be used to build such a polling input for external HTTP sources.
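As a rough sketch of the HEC option - the source endpoint, hostnames, and token below are placeholders, and it assumes the API returns valid JSON:

# hypothetical endpoint and token; adjust to your environment
DATA=$(curl -s https://api.example.com/status)
# wrap the payload in HEC's {"event": ...} envelope and post it
curl -sk https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec_token>" \
  -d "{\"event\": $DATA, \"sourcetype\": \"api:status\"}"

Run it from cron (or as a scripted input) to get the periodic polling.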
I am using a curl command to get data from an API endpoint. The data comes back as a single event, but I want to store each event separately as the events come through, so that I can build a timechart from them.
I've implemented your suggested logic and enhanced it to detect password spray attempts and also alert when there's a successful login from the same source following a spray attempt. Here's a summary of the changes:

Added a check for successful logins:

dc(eval(if('data.type'="s", 'data.user_name', null()))) AS unique_successful_accounts

Categorized alerts: an eval statement to differentiate the alert type as "Successful After Attempt" if there's a successful login after the failed attempts.

These changes ensure that the query not only detects password spray attempts but also alerts when there's a successful login following the spray attempt. Thank you so much for your help!
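For anyone landing here later, a minimal sketch of how those pieces might fit together - the index, the "f" value for failures, the 10-minute window, and the threshold of 10 accounts are all assumptions, not taken from the original search:

index=auth sourcetype=<your_sourcetype>
| bin span=10m _time
| stats dc(eval(if('data.type'="f", 'data.user_name', null()))) AS unique_failed_accounts
        dc(eval(if('data.type'="s", 'data.user_name', null()))) AS unique_successful_accounts
        BY _time, src_ip
| eval alert_type=case(unique_failed_accounts>=10 AND unique_successful_accounts>0, "Successful After Attempt",
                       unique_failed_accounts>=10, "Spray Attempt")
| where isnotnull(alert_type)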
I built a new index intended for storing a report of some very heavily modified and correlated vulnerability data. I figured the only way to get this data to properly match the CIM requirements was through a lot of evals and lookup correlations. After doing all of that, I planned on feeding it back into a summary index and having that be part of the Vulnerability data model.

Anyway, I scheduled the report and enabled summary indexing, but my new index doesn't show up in the list of indexes. I noticed a few indexes are missing from the list, and the filter doesn't even work - indexes that are clearly visible in the list don't filter in when you type the name of the index. Very strange.

I'm an admin and I've done this a few times previously. This particular index is just giving me issues. Not sure what I need to do besides delete it and rebuild it.
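One quick sanity check, assuming you can run REST searches, is to confirm the index exists and is enabled (<your_index> is a placeholder):

| rest /services/data/indexes splunk_server=local
| search title=<your_index>
| table title, disabled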
Thanks, I tried "index=_internal | stats count by host" but don't see the newly installed UF host name there. Then I tried "./splunk add forward-server <host name or ip address>:<listening port>", but it said it was already there. So I removed both inputs.conf and outputs.conf and reran the above command, which recreated outputs.conf. I also re-added inputs.conf manually and then restarted Splunk, without any success.

I do see errors in splunkd.log on the UF, as shown below:

TailReader [19453 tailreader0] - error from read call from '/var/log/message'

Maybe it's a permission issue.
Thanks, and I see a bunch of lines like the one below:

TailReader [19453 tailreader0] - error from read call from '/var/log/message'

Is that a permission issue?
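That message usually means the user running the forwarder can't read the file. A quick check, assuming the UF runs as the splunk user:

ls -l /var/log/message
sudo -u splunk head -1 /var/log/message
ls -Z /var/log/message    # SELinux context, if SELinux is enforcing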
That did it, thanks for the assist.
Excel (.xls) files cannot be ingested because they are binary files. See if the ADMT logs can be saved in another format (text or CSV, for example).
See if this helps.

index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid]
| bin span=1d _time
| stats dc(_time) as numDays by UserId
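For reference, bin span=1d rounds each _time down to the start of its day, so dc(_time) then counts distinct days per user. An equivalent form with an explicit day field, in case it reads more clearly:

index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid]
| eval day=strftime(_time, "%Y-%m-%d")
| stats dc(day) as numDays by UserId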
The query is:

index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid]

If I add:

| timechart span=1d dc(UserId)

I get unique users per day. Or I can run it with:

| stats count by UserId

to get total logins per user for the period. I am looking to get "unique days per user".
Hi all, my question is regarding ADMT log ingestion to Splunk. ADMT logs are sent to a common blob storage account; the storage account has been created and holds the logs in Excel format. In Splunk Web >> Configuration, I added the Azure storage account in the add-on "Splunk Add-on for Microsoft Cloud Services" by entering the account secret key, then added an input (Azure storage blob) selecting the storage account added above. Now I can see the logs in Splunk, but the format seems wrong.

Attaching a sample event here. Can you advise what needs to be done to get the logs in the correct format? Thanks.
Can you share the current query?
@ITWhisperer Thank you - that worked. Do you have any links/examples for 'streamstats' and the use of 'current' and 'values'?
I am trying to get a table showing the number of days a user was active in the given time period. I currently have a working search that gives me the number of total logins for each user, and one that gives me the number of unique users per day. I am looking for "unique days per user", i.e. if Dave logs in 5x Monday, 3x Tuesday, 0x Wednesday, 2x Thursday, and 0x Friday, I want to show 3 active days, not 10 logins.
We deployed our first Splunk in AWS using the AWS tooling, and we see disallowed outbound traffic from our Splunk instance to beam.scs.splunk.com, which I see is for Splunk Cloud Edge Processor, which we have no intention of using. We would like to disable this traffic, but there is no documentation on it. Can this be done?
When trying to access the documentation for add-on 3088, which should be at https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/About, I am redirected to https://github.com/pages/auth?nonce=cdd5c03e-1d79-4996-9ec8-36e50189986b&page_id=47767351&path=Lw which is unavailable without a login, and also with any of my GitHub logins. What's going on here?
You should be able to remove your local changes by deleting local/props.conf from the AWS add-on directory and restarting Splunk. If you changed default/props.conf (never advised), then re-installing the add-on will restore the defaults.
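Concretely, something like this - the app directory name (Splunk_TA_aws) is an assumption, so check what yours is called under $SPLUNK_HOME/etc/apps:

rm $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/props.conf
$SPLUNK_HOME/bin/splunk restart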
Yes, I started doing a search based on the traceId and spanId:

index=your_index sourcetype=your_sourcetype
| fields trace_id, span_id, parent_span_id, app.name
| rename app.name as current_service
| join type=inner trace_id
    [search index=your_index sourcetype=your_sourcetype
    | fields trace_id, span_id, parent_span_id, app.name
    | rename app.name as parent_service, span_id as parent_span_id]
| where parent_span_id = span_id
| table trace_id, parent_service, current_service

But I'm asking if there are default fields related to microservices in Splunk.
Hi @Narendra.Rao, If you have not seen this AppD Docs page yet, we have a list of all the available APIs - https://docs.appdynamics.com/appd/24.x/latest/en/extend-cisco-appdynamics