All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks. I tried "index=_internal | stats count by host" but don't see the newly installed UF host name there. Then I tried "./splunk add forward-server <host name or ip address>:<listening port>", but it said the forward-server was already there. So I removed both inputs.conf and outputs.conf, ran the above command again (which recreated outputs.conf), re-added inputs.conf manually, and then restarted Splunk, without any success. I do see errors in splunkd.log on the UF as shown below: TailReader [19453 tailreader0] - error from read call from '/var/log/message'. Maybe it's a permission issue.
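For reference, a minimal sketch of the two UF config files being recreated here (the paths, index, sourcetype, and port 9997 are assumptions, not taken from the post; also note the system log on most Linux distributions is /var/log/messages, and the user running the UF needs read permission on it):

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
[monitor:///var/log/messages]
index = main
sourcetype = syslog

# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <host name or ip address>:9997
```

After editing both files, a UF restart is needed for the changes to take effect.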
Thanks, and I see a bunch of lines like the one below: TailReader [19453 tailreader0] - error from read call from '/var/log/message' Is that a permission issue?
That did it, thanks for the assist.
Excel (.xls) files cannot be ingested because they are binary files. See if the ADMT logs can be saved in another format (text or CSV, for example).
See if this helps. index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid] | bin span=1d _time | stats dc(_time) as numDays by UserId
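An equivalent sketch, if binning _time to whole days feels indirect (index, host, and field names are taken from the question as-is): format each event's date as a string and count distinct dates per user.

```
index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid]
| eval day=strftime(_time, "%Y-%m-%d")
| stats dc(day) as numDays by UserId
```

Both versions count each calendar day at most once per user, so five logins on Monday contribute a single active day.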
The query is: index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid] If I add | timechart span=1d dc(UserId), I get unique users per day. Or I can run it with | stats count by UserId to get total logins per user for the period. What I am looking to get is "unique days per user".
Hi all, my question is regarding ADMT log ingestion into Splunk. ADMT logs are sent to a common blob storage account; the storage account is created and holds the logs in Excel format. In Splunk Web, configuration was done by adding the Azure storage account in the "Splunk Add-on for Microsoft Cloud Services" add-on, providing the account secret key, then adding an input (Azure Storage Blob) that selects the storage account added above. Now I am able to see the logs in Splunk, but the format seems wrong. Attaching a sample event here. Can you advise what needs to be done to get the logs in the correct format? Thanks
Can you share the current query?
@ITWhisperer  Thank you - that worked Do you have any links/examples for 'streamstats' and use of 'current' and 'values' clauses?
I am trying to get a table showing the number of days a user was active in a given time period. I currently have a working search that gives me the number of total logins for each user, and one that gives me the number of unique users per day. I am looking for "unique days per user": i.e., if Dave logs in 5x Monday, 3x Tuesday, 0x Wednesday, 2x Thursday, and 0x Friday, I want to show 3 active days, not 10 logins.
We deployed our first Splunk instance in AWS using the AWS tooling for this, and we see unallowed traffic calling out from our Splunk to beam.scs.splunk.com, which I see is for Splunk Cloud Edge Processor, which we have no intention of using. We would like to disable this traffic, but there is no documentation on it. Can this be done?
When trying to access the documentation for add-on 3088, which should be at https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/About, I am redirected to https://github.com/pages/auth?nonce=cdd5c03e-1d79-4996-9ec8-36e50189986b&page_id=47767351&path=Lw which is unavailable without a login, and also with one of my GitHub logins. What's going on here?
You should be able to remove your local changes by deleting local/props.conf from the AWS add-on directory and restarting Splunk. If you changed default/props.conf (never advised), then re-installing the add-on will restore the defaults.
Yes, I started doing a search based on the traceId and spanId: index=your_index sourcetype=your_sourcetype | fields trace_id, span_id, parent_span_id, app.name | rename app.name as current_service | join type=inner trace_id [search index=your_index sourcetype=your_sourcetype | fields trace_id, span_id, parent_span_id, app.name | rename app.name as parent_service, span_id as parent_span_id] | where parent_span_id = span_id | table trace_id, parent_service, current_service But I'm asking if there are default fields related to microservices in Splunk.
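A join keyed only on trace_id pairs every span in a trace with every other span before the where filter prunes them; a sketch of the same parent lookup joined directly on the parent span id instead (index, sourcetype, and field names are the placeholders from the question, not defaults):

```
index=your_index sourcetype=your_sourcetype
| fields trace_id, span_id, parent_span_id, app.name
| rename app.name as current_service
| join type=inner parent_span_id
    [ search index=your_index sourcetype=your_sourcetype
      | fields span_id, app.name
      | rename app.name as parent_service, span_id as parent_span_id ]
| table trace_id, parent_service, current_service
```

This produces one row per child span rather than a cross-product per trace.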
Hi @Narendra.Rao, If you have not seen this AppD Docs page yet, we have a list of all the available APIs - https://docs.appdynamics.com/appd/24.x/latest/en/extend-cisco-appdynamics
What happens if I have multiple tags for the same sourcetype and index? Will it fail to get into the DM?
Hi @BRFZ, you have to go to [Settings > Forwarding and Receiving > Forwarding] on your SHs and configure the forwarding of all logs to your indexers, entering both of your indexers. This should be done on all your Splunk servers except the indexers themselves (e.g. also on the Deployment Server, if you have one). If your indexers are not clustered, the forwarding configuration is the same; obviously, if one of them is down, your searches will return half of the data. Ciao. Giuseppe
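The same forwarding can also be set up directly in configuration; a sketch of outputs.conf on each SH (the hostnames and port 9997 are assumptions — substitute your own indexers' addresses and receiving port):

```
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```

With two servers listed in one group and no cluster, the forwarder simply load-balances events between them, which is why each search loses roughly half the data if one indexer is down.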
Could you help me with how to do this in the case where there are two indexers that are not in a cluster please?
| rex field=source "info_(?<timestamp>\d{8}_\d{6})\.csv" | eval _time=strptime(timestamp,"%Y%m%d_%H%M%S")
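A quick way to sanity-check the extraction before applying it to real data (a sketch with a made-up filename):

```
| makeresults
| eval source="/data/info_20240131_123456.csv"
| rex field=source "info_(?<timestamp>\d{8}_\d{6})\.csv"
| eval _time=strptime(timestamp, "%Y%m%d_%H%M%S")
```

The timestamp field should show 20240131_123456 and _time should render as 2024-01-31 12:34:56 in the results table.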
Hi @BRFZ, it's a best practice to forward all internal logs from Splunk servers to the indexers rather than indexing them locally. Ciao. Giuseppe