All Posts

See if this helps.

index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid]
| bin span=1d _time
| stats dc(_time) as numDays by UserId
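Here bin span=1d _time truncates each timestamp to midnight, so dc(_time) counts distinct days per UserId. An equivalent sketch (same placeholder hostname/appid as above) that makes the day bucketing explicit:

index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid]
| eval day=strftime(_time, "%Y-%m-%d")
| stats dc(day) as numDays by UserId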
The query is:

index=main host=[hostname] Operation="UserLogon" ApplicationId=[appid]

If I add | timechart span=1d dc(UserId) I get unique users per day, or I can run it with | stats count by UserId to get total logins per user for the period. I am looking to get "unique days per user".
Hi all, my question is regarding ADMT log ingestion into Splunk. ADMT logs are sent to a common Azure blob storage account, and the storage account contains the logs in Excel format. In Splunk Web, configuration was done by adding the Azure storage account in the add-on "Splunk Add-on for Microsoft Cloud Services", supplying the account secret key, then adding an input (Azure Storage Blob) that selects the storage account added above. Now I can see the logs in Splunk, but the format seems wrong. Attaching a sample event here. Can you advise what needs to be done to get the logs in the correct format? Thanks
Can you share the current query?
@ITWhisperer Thank you - that worked. Do you have any links/examples for streamstats and the use of 'current' and 'values'?
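For a quick self-contained illustration of both (a sketch on generated data, not your events): streamstats values(user) accumulates the distinct user values seen so far, and current=f excludes the current event from that running set.

| makeresults count=5
| streamstats count as row
| eval user=mvindex(split("alice,bob,alice,carol,bob", ","), row - 1)
| streamstats current=f values(user) as users_seen_before

Run it in a search bar; on row 5, users_seen_before should be alice, bob, carol. The Splunk docs page for streamstats also covers the current, window, and reset options.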
I am trying to get a table showing the number of days a user was active in the given time period. I currently have a working search that gives me the number of total logins for each user, and one that gives me the number of unique users per day. I am looking for "unique days per user"; i.e. if Dave logs in 5x Monday, 3x Tuesday, 0x Wednesday, 2x Thursday, and 0x Friday, I want to show 3 active days, not 10 logins.
We deployed our first Splunk in AWS using the AWS tooling, and we see disallowed traffic calling out from our Splunk to beam.scs.splunk.com, which I see is for Splunk Cloud Edge Processor, which we have no intention of using. We would like to disable this traffic, but there is no documentation on it. Can this be done?
When trying to access the documentation for add-on 3088, which should be at
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/About
I am redirected to
https://github.com/pages/auth?nonce=cdd5c03e-1d79-4996-9ec8-36e50189986b&page_id=47767351&path=Lw
which is unavailable without a login, and is also unavailable with any of my GitHub logins. What's going on here?
You should be able to remove your local changes by deleting local/props.conf from the AWS add-on directory and restarting Splunk. If you changed default/props.conf (never advised), then re-installing the add-on will restore the defaults.
Yes, I started doing a search based on the traceId and spanId:

index=your_index sourcetype=your_sourcetype
| fields trace_id, span_id, parent_span_id, app.name
| rename app.name as current_service
| join type=inner trace_id
    [ search index=your_index sourcetype=your_sourcetype
      | fields trace_id, span_id, parent_span_id, app.name
      | rename app.name as parent_service, span_id as parent_span_id ]
| where parent_span_id = span_id
| table trace_id, parent_service, current_service

But I'm asking if there are default fields related to microservices in Splunk.
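One join-free alternative worth noting, since subsearch joins are capped on result count: emit each span twice, once keyed by its own span_id (as a potential parent) and once by its parent_span_id (as a child), then match the two with stats. A sketch under the same assumed placeholder index/sourcetype and field names; it lists the child services per parent span rather than producing one row per child:

index=your_index sourcetype=your_sourcetype
| rename app.name as service
| eval link_id=mvappend("P|" . span_id, "C|" . parent_span_id)
| mvexpand link_id
| rex field=link_id "^(?<role>[PC])\|(?<id>.+)$"
| eval parent_service=if(role="P", service, null()), current_service=if(role="C", service, null())
| stats values(parent_service) as parent_service, values(current_service) as current_service by trace_id, id
| where isnotnull(parent_service) AND isnotnull(current_service)
| table trace_id, parent_service, current_service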
Hi @Narendra.Rao, if you have not seen this AppD Docs page yet, we have a list of all the available APIs: https://docs.appdynamics.com/appd/24.x/latest/en/extend-cisco-appdynamics
What happens if I have multiple tags for the same sourcetype and index? Will it fail to get into the DM?
Hi @BRFZ, you have to go to [Settings > Forwarding and Receiving > Forwarding] on your SHs and configure forwarding of all logs to your indexers, entering both of your indexers. This should be done on all your Splunk servers except the indexers themselves (e.g. also on the Deployment Server, if you have one). If your indexers are not clustered, the forwarding is configured the same way; obviously, if one of them is down, your searches will return only half of the data. Ciao. Giuseppe
Could you help me with how to do this in the case where there are two indexers that are not in a cluster please?
| rex field=source "info_(?<timestamp>\d{8}_\d{6})\.csv" | eval _time=strptime(timestamp,"%Y%m%d_%H%M%S")
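To sanity-check this against the sample path from the question, a throwaway makeresults test (not part of the final search):

| makeresults
| eval source="D:\\automation\\miscprocess\\test_utilization_info_20240618_195509.csv"
| rex field=source "info_(?<timestamp>\d{8}_\d{6})\.csv"
| eval _time=strptime(timestamp, "%Y%m%d_%H%M%S")
| table source, timestamp, _time

_time should resolve to 2024-06-18 19:55:09. Note this assigns _time at search time; the event's indexed time is unchanged.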
Hi @BRFZ, it's a best practice to forward all internal logs from Splunk servers to the indexers rather than indexing them locally. Ciao. Giuseppe
This is the file content. The content does not have a timestamp for each entry, so I have to use the file timestamp for each entry within the CSV file.
The CSV files are generated by automation, which writes the server status to a file whose name includes the generation time. There is no timestamp in the file itself, so I have to use the file generation timestamp from the naming convention.
Do you mean you have loaded the csv into a lookup or that the csv has been ingested into an index and there is a source field associated with each event with the file name in?
I have events that are generated in CSV format with a timestamp within the file name, as shown below. I need to extract the timestamp from the file name and create a new column as _time; I need a rex query to extract the YYYY-MM-DD HH:MM:SS.

D:\automation\miscprocess\test_utilization_info_20240618_195509.csv