
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I've got a feed that I suspect is sending non-compliant JSON, since spath doesn't work on it. I put together this search:

index=dlp sourcetype=sft:json "{"
| head 1
| eval data='{"time": "2023-07-21T19:10:48+00:00", "pid": 24086, "msec": 1689966648.059, "remote_addr": "aaa.bbb.ccc.ddd", "request_time": 0.005, "host": "sitename.noname.org", "remote_user": "-", "request_filtered": "GET /healthz HTTP/1.1", "status": 200, "body_bytes_sent": 13, "bytes_sent": 869, "request_length": 72, "http_referer_filtered": "", "http_user_agent": "-", "http_x_forwarded_for": "-", "context": "973235423dccda96a385ca21c133891632a28d91"}'
| spath input=data

I'm not seeing any value for data, and thus nothing from spath. Do I need to do something special in the eval to get it to process? TIA, Joe
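In SPL, single quotes inside eval refer to field names rather than string literals, so the assignment above likely never populates data at all. A minimal sketch of the same test using a double-quoted string with the inner quotes escaped (JSON abbreviated here for readability):

| makeresults
| eval data="{\"time\": \"2023-07-21T19:10:48+00:00\", \"pid\": 24086, \"status\": 200}"
| spath input=data

If this extracts fields, the problem is the quoting rather than the JSON itself.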
I have a script that generates JSON-formatted log entries, and I want to get this data into Splunk. What is the best way to write the data to disk to be monitored and ingested? Should I just append JSON events to a single file, or should the log file hold only one entry at a time, with me overwriting/clearing the file each time I need to add new data?
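For what it's worth, Splunk's monitor input tails files and tracks read position, so appending one JSON object per line (newline-delimited JSON) is generally the safer pattern; overwriting or truncating the file can confuse the file tracker and lead to missed or duplicated events. A minimal sketch of an inputs.conf monitor stanza, assuming a hypothetical path /var/log/myscript/events.log:

[monitor:///var/log/myscript/events.log]
# path and index are assumptions; _json is a built-in JSON sourcetype
sourcetype = _json
index = main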
We generally follow a pattern of logging in a key=value format. I am curious whether we should totally avoid logs that are not in that format. Is it not recommended to have logs like:

log.info("Flushing kafka buffer before callback.");
In my Splunk environment I have access to the infrastructure overview, which shows whether an entity is stable, unstable, or inactive. I would like to know if there is a way to put these statuses for individual hosts in a Splunk dashboard so I don't have to filter through them in the infrastructure overview. Thanks
I have a lookup that maps action, category, attributes, and more fields for Windows event codes. However, for each event code, not all the columns have values:

EventCode, action, category, attr, .....
1,allow,,xyx,,,
2,fail,firewall,,....

When I add this to transforms.conf and props.conf and deploy it out to Splunk Cloud, it creates fields even when the matching cell is empty. Is there a way to make sure that the null values are not output, using props.conf and transforms.conf?
Dear All, I have observed that license usage for one of my sourcetypes is high compared to previous days, yet the event count is low compared to previous days. How can I check this in Splunk, and how can I validate the license utilization?

I.e.:
Sourcetype: cisco:asa
12 July '23 - Event count: 16819087, license usage: 21 GB
14 July '23 - Event count: 15722874, license usage: 42 GB
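One way to investigate is to query the license usage log, which records bytes indexed (field b) per sourcetype (field st); comparing daily bytes against daily event counts shows whether the average event size has grown. A sketch, assuming access to the _internal index on the license manager:

index=_internal source=*license_usage.log type="Usage" st="cisco:asa"
| timechart span=1d sum(b) as bytes
| eval GB = round(bytes/1024/1024/1024, 2)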
How do I download the OVA of Splunk SOAR for VMware?
Hello everyone, I'm encountering an issue with the web interface for the deployment instance. When I attempt to access it, it prompts me for a username and password, but after entering the credentials, it gets stuck on a loading gif that keeps spinning indefinitely. Upon inspecting the network tab, I noticed there's only one resource returning a 404 error: https://my-deploy-instance:8000/en-US/splunkd/__raw/services/dmc-conf/settings/settings

In the past, I have successfully performed several Splunk upgrades by provisioning a new instance in my cloud, adding my configuration files, and starting the process. However, in my latest attempt, the process doesn't seem to work correctly anymore. To ensure safety and avoid disruptions on production instances, I adopted a technique of provisioning a separate stack for this work and testing it independently. Strangely, on this new stack, I am encountering license-related complaints, with it stating that the license is revoked. This is puzzling because the license used is exactly the same as the one used in the production version, which is functioning without any problems.

ERROR LMStack - failed to load license from file: splunk.license, err - The license file with signature XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX) has been revoked. Please file a case online at http://www.splunk.com/page/submit_issue

I'm wondering if there have been any changes in this context that might be causing the issue. Could someone kindly advise if I am overlooking something in this process?
The errors indicate that some Python libraries are missing. We managed to fix a couple of issues related to libraries, but this keeps repeatedly failing for multiple library files. Are there any specific Python-related configurations to install so that the jobs can run from PSA? I am encountering the error below in the scheduled jobs on the controller.
Hi, I was wondering whether, on a dashboard, you could click on an item and have it show all the information for that single instance of events. Is that done through drilldowns, or is there another way of doing it?
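For reference, a table drilldown in Simple XML can pass the clicked row's value into a follow-on search. A minimal sketch, assuming a hypothetical field host in the table:

<table>
  <search>
    <query>index=main | stats count by host</query>
  </search>
  <drilldown>
    <link target="_blank">search?q=search index=main host="$row.host$"</link>
  </drilldown>
</table>

Clicking a row then opens a search scoped to that row's host, showing the raw events behind it.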
Hello, I have an alert that sends an email when there are x authentication failures. This works fine and returns user, count, but I'd like to also include a table that contains the below fields when the alert trips: user, src_ip, count. How can we go about doing that?

Current alert:

index=main action=fail* OR action=block* (source="*WinEventLog:Security" OR sourcetype=linux_secure OR tag=authentication) user!=""
| stats count by user src _time
| stats sum(count) as count by user
| where count>200
| sort - count
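One approach, sketched under the assumption that src_ip is already extracted: aggregate by user and src_ip, then use eventstats to compute the per-user total for the threshold while keeping the per-IP rows for the email table:

index=main action=fail* OR action=block* (source="*WinEventLog:Security" OR sourcetype=linux_secure OR tag=authentication) user!=""
| stats count by user src_ip
| eventstats sum(count) as total by user
| where total>200
| sort - total
| table user src_ip count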
Hi people, I wonder whether it is possible to run a query that generates a set of n sample events for each sourcetype in an index? In some sense, if the log data has been ingested and conformed properly, this is perhaps not so problematic; you might build a datamodel or just query across the relevant CIM field (alias). So let's get specific:

index=someIndex sourcetype=someSourceType
| enumerate against some defined key value, say an eventtype
| enumerate all of the eventtypes and pull out any subeventtypes
| list 2-5 events for each subeventtype, else just list 2-5 events for the eventtype
| table _time, eventtype, subeventtype (NULL if blank), event
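As a starting point, dedup can keep the first N events per field value, which gives an n-sample per group in one pass. A sketch that keeps up to 5 events per eventtype (assuming eventtype is populated on the data):

index=someIndex sourcetype=someSourceType
| dedup 5 eventtype
| table _time, eventtype, _raw

The same pattern samples per sourcetype across a whole index with | dedup 5 sourcetype.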
Does anyone have an idea how to fix this? Any suggestions? Thank you
I have a Splunk event in the below format:

{ message { DATE: 2023-07-20T11:53:04 } }

I want to find all the events whose DATE field falls in a particular range. However, the below query is not yielding any results. Is something wrong with it?

BASE SEARCH | message.DATE >= strftime("2023-07-20T11:50:04","%Y-%m-%dT%H:%M:%S") AND message.DATE <= strftime("2023-07-20T11:56:04","%Y-%m-%dT%H:%M:%S")
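Two likely problems: a bare comparison cannot follow a pipe by itself (it needs a command such as where), and strftime formats an epoch time into a string, while strptime parses a string into an epoch, which is what a range comparison needs. A sketch, quoting the dotted field name in single quotes as eval/where expressions require:

BASE SEARCH
| eval date_epoch = strptime('message.DATE', "%Y-%m-%dT%H:%M:%S")
| where date_epoch >= strptime("2023-07-20T11:50:04", "%Y-%m-%dT%H:%M:%S")
  AND date_epoch <= strptime("2023-07-20T11:56:04", "%Y-%m-%dT%H:%M:%S")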
Hi Splunk support, I recently added my second forwarder. Everything went perfectly except for one thing: the newly added forwarder is not listed as a client in forwarder management. Even after a restart, it remains the same. Screen captures are attached herewith. Please advise on a proper solution. Thanks, Ragav
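For comparison, a forwarder only appears in forwarder management after it phones home to the deployment server, which requires a deploymentclient.conf on the forwarder. A minimal sketch, assuming a hypothetical deployment server at ds.example.com:

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

The same setting can be applied from the forwarder's CLI with ./splunk set deploy-poll ds.example.com:8089, followed by a restart.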
Hi, I have a question about the below query:

| makeresults count=730
| streamstats count
| eval _time=_time-(count*86400)
| timechart Count as Timestamp span=1mon
| join type=left _time
    [| savedsearch XYZ
     | eval today = strftime(relative_time(now(), "@d"), "%Y-%m-%d %H:%M:%S.%N")
     | where like(APP_NAME, "Managed iMAP Application") and like(BS_ID, "%") and like(Function, "%") and like(DEPARTMENT_LONG_NAME, "%") and like(COUNTRY, "%") and like(EMPLOYEE_TYPE, "%") and STATUS="active"
     | eval _time = strptime(FROM_DATE, "%Y-%m-%d %H:%M:%S.%N")
     | eval _time!= "2023-07"
     | timechart Count as Created span=1mon
     | streamstats sum(Created) as Createdcumulative]
| join type=left _time
    [| savedsearch XYZ
     | where like(APP_NAME, "Managed iMAP Application") and like(BS_ID, "%") and like(Function, "%") and like(DEPARTMENT_LONG_NAME, "%") and like(COUNTRY, "%") and like(EMPLOYEE_TYPE, "%") and STATUS="inactive"
     | eval _time = strptime(TO_DATE, "%Y-%m-%d %H:%M:%S.%N")
     | timechart Count as Deactivated span=1mon
     | streamstats sum(Deactivated) as Deactivatedcumulative]
| eval Active = Createdcumulative
| eval Deactivated = Deactivatedcumulative
| where _time>=relative_time(now(),"-1y@d")
| fields - Createdcumulative, Deactivatedcumulative, Timestamp

The query fetches the results shown below. I need to restrict the data to end at the previous month, not showing the current month. Can anyone help me modify the query, please?
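To drop the current month, one option is to bound the final filter at the start of the current month, since relative_time(now(), "@mon") snaps to the first day of the month. A sketch of the adjusted final where clause:

| where _time >= relative_time(now(), "-1y@d") AND _time < relative_time(now(), "@mon")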
I've got Splunk Universal Forwarder up and running on my DC-01, and it's set to forward all Windows event logs to Splunk. But there's a catch: it's not forwarding the Security events for some reason! Interestingly, when I installed the UF on a regular Windows PC, everything worked like a charm, and all event types, including Security events, were forwarded without a hitch. I've done my fair share of digging through documentation and troubleshooting cases, but I'm still at a loss. It feels like it might be a permissions or rights issue, but I can't seem to find the root cause. If any of you have encountered a similar issue or have any insights, I'd be incredibly grateful for your help and ideas. Thank you in advance for any guidance you can provide!
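The permissions hunch is plausible: reading the Security log requires elevated rights, so a forwarder running as Local System usually works while one running as a low-privilege domain account does not. One commonly suggested remedy is adding the forwarder's service account to the built-in Event Log Readers group; a hedged PowerShell sketch, with splunkfwd as a hypothetical service account name:

# Run on a domain controller with the ActiveDirectory module available;
# grants the hypothetical service account read access to event logs, including Security
Add-ADGroupMember -Identity "Event Log Readers" -Members "splunkfwd"

A restart of the SplunkForwarder service is typically needed afterwards for the new group membership to take effect.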
I am getting a value from my data that is a number but is actually a duration. How do I convert it into minutes, hours, and days?
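Assuming the number is a count of seconds in a hypothetical field named duration, SPL's tostring can render it directly, or eval arithmetic can split it out:

| eval readable = tostring(duration, "duration")
| eval days = floor(duration/86400), hours = floor((duration%86400)/3600), minutes = floor((duration%3600)/60)

tostring(X, "duration") produces an HH:MM:SS string, with a day count prefixed once the value exceeds 24 hours.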
Splunk App for Lookup File Editing version 3.6.0 works fine, but version 4.0.1 does not use root_endpoint. Downgrading to 3.6.0 makes it work fine.

Settings:

splunk:
  conf:
    - key: web
      value:
        directory: /opt/splunk/etc/system/local
        content:
          settings:
            root_endpoint: /splunk

Expected result: https://server/splunk/ko-KR/app/lookup_editor/lookup_edit......
v4.0.1 result: https://server/app/lookup_editor/lookup_edit......
I'm new to Splunk Enterprise, and my task is to forward logs from a Splunk HF (AWS EC2 instance) to an AWS CloudWatch log group. I tried to export the logs using CLI commands and stored them locally on the Splunk HF server. Then I used the CloudWatch agent to send the logs to the CloudWatch log group. Please refer to the below Splunk CLI command for exporting the logs:

#./splunk search "index::***** sourcetype::linux_audit" -output rawdata -maxout 0 -max_time 5 -auth splunk:***** >> /opt/linux-Test01.log

The challenge I'm facing is that when I run the CLI command from a Linux crontab, it does not export the logs. Are there any other solutions or guidance available to resolve this issue?
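A common culprit is cron's minimal environment: it does not inherit your interactive shell's PATH or working directory, so a relative ./splunk invocation fails silently. A sketch of a crontab entry using the absolute binary path and capturing stderr for diagnosis (the schedule and error-log path are assumptions):

*/15 * * * * /opt/splunk/bin/splunk search "index::***** sourcetype::linux_audit" -output rawdata -maxout 0 -max_time 5 -auth splunk:***** >> /opt/linux-Test01.log 2>> /tmp/splunk_export_err.log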