All Topics


Hi, I need some help. I have a Splunk add-on that worked fine and showed pie charts and single values in a dashboard. I deleted it and re-downloaded it from Splunkbase, and now I am getting this error for the pie charts. The single values work fine. Any idea what is going on and how I can fix it?
A file directory needs to be monitored, but it contains a large amount of historical data. If I only want to collect data from the last three days, is there a parameter that restricts collection to files modified after a specified time?
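If this is a standard file monitor input, inputs.conf has an `ignoreOlderThan` setting that skips files whose modification time is older than a given window. A sketch (the monitored path, index, and sourcetype are placeholders):

```
[monitor:///var/log/myapp]
index = main
sourcetype = myapp:log
# Skip files last modified more than 3 days ago. Caveat: once a file
# is ignored it stays ignored even if it is updated later, so use this
# only for genuinely stale historical files.
ignoreOlderThan = 3d
```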
Hi all, I want to change the timestamp on the event: I want to put createdDateTime into _time (highlighted in yellow). I changed props.conf as follows:

[sourcetype]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = createdDateTime\":
TIME_FORMAT = %FT%TZ

But at the moment it does not work. Any idea? Thanks!
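A sketch of one possible fix (sourcetype name and timestamp shape are assumed from the post): with INDEXED_EXTRACTIONS = json, TIMESTAMP_FIELDS takes the JSON field name itself, not a regex fragment with a quote character:

```
[sourcetype]
INDEXED_EXTRACTIONS = json
# Field name only (use a dotted path for nested JSON, e.g. meta.createdDateTime)
TIMESTAMP_FIELDS = createdDateTime
# Assuming values like 2023-08-17T03:04:34Z; the trailing Z is a literal here
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
```

Because these are index-time settings, they only affect data ingested after the change, and they must live on the first full Splunk instance that parses the data (indexer or heavy forwarder).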
Hi, I need help with creating a table in Splunk that displays all the components below:

[screenshot omitted]

I also need to create another table that gives an overview of the host. The components are:

[screenshot omitted]

I have been looking at this for a while, but the task is difficult, so I am hoping I can find the help I need here. Thank you.
Hi everyone, data coming in from an API is using _indextime as the _time field because the timestamp format being sent is not recognised by Splunk. An example timestamp looks like this:

2016-06-21T01:18:51-07:00

OR

2018-02-16T06:34:31-08:00

As you can see, an offset of -7 or -8 hours is included in the time field. The timestamp format we're currently using for the sourcetype is:

%Y-%m-%dT%H:%M:%S%:z

This is no longer working after the sender made some changes to the timestamp, but I'm not entirely sure how to represent the new format. We are using Splunk Cloud. Any help would be greatly appreciated. Toma.
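For what it's worth, Splunk's strptime extensions distinguish %z (matches -0700) from %:z (matches -07:00), so a props.conf sketch for the values shown above would be as follows. The sourcetype name is a placeholder, and this remains a guess until the sender's new timestamp format is confirmed:

```
[my_api_sourcetype]
# -07:00 style offsets need %:z; a -0700 style offset would need plain %z
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
# Anchor the timestamp so Splunk does not fall back to index time
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
```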
Hi, I have this CSV lookup:

command_Rex,comment_remark
*uname -a,malicious
*arp*,malicious
*tcpdump*,malicious

I want to search events (they have a data.command field that holds the command executed on the Linux server). How can I search for and keep only those events that match one of the patterns in the list?
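One common approach (a sketch; the lookup file name and base search terms are assumptions) is to define the lookup with a wildcard match type in transforms.conf, then use lookup to tag matching events:

```
# transforms.conf (assumes the CSV is uploaded as commands.csv)
[malicious_commands]
filename = commands.csv
match_type = WILDCARD(command_Rex)
```

With that definition saved, the search side would look something like:

```
index=linux sourcetype=command_audit
| lookup malicious_commands command_Rex AS data.command OUTPUT comment_remark
| where comment_remark="malicious"
```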
I tried to add all the IPs (basically audio/video streaming sites) to the IP whitelist in UBA to exclude the UBA alert for excessive data transmission/data exfiltration. However, UBA alerts are still being generated for those IPs. How do I tune those alerts in UBA? I also noticed that those whitelisted IPs are populated in the devices field. Can someone please advise on this?
What would be the query to copy all data from one index to another index in Splunk? We are using Splunk for Jenkins logs.
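A sketch using the collect command (the index names are placeholders). Note that collect writes a copy of the events into the target index rather than moving the original buckets, so the source data remains where it is:

```
index=jenkins_old earliest=0 latest=now
| collect index=jenkins_new
```

For large indexes, run it over bounded time windows rather than all at once, and verify the copy afterwards, e.g. with `| tstats count where index=jenkins_new`.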
I have defined a stacked bar chart in my Splunk Enterprise dashboard. I've been trying to solve these problems but cannot; please help me out. These are the problems I encountered:

1. As you can see in the line `| eval PERCENTAGE = round(PERCENTAGE, 1)`, I've rounded the data. However, for values such as 7 and 10, the ".0" gets dropped for some reason. Our client strongly wants the data to be labelled in a uniform format. Is there a way to force whole numbers to appear with the decimal? 7 & 10 (as-is) -> 7.0 & 10.0 (to-be)

2. Since the color of the bar is dark, I wanted to change the label in the bar to white. I wrote this line: `<option name="charting.fontColor">#ffffff</option>` However, this changes the whole font color, including the axis labels.

3. I also want to make the label in the bar bold with the Times New Roman font family. Is there a way to specify that?

This is my current code:

```XML
<row>
  <panel>
    <chart>
      <search>
        <query>
          index = ~ (TRIMMED DUE TO PRIVACY ISSUE)
          | eval PERCENTAGE = round(PERCENTAGE, 1)
          | fields DOMAIN MONTHS PERCENTAGE
          | chart values(PERCENTAGE) over MONTHS by DOMAIN
        </query>
        <earliest>earliest</earliest>
        <latest>latest</latest>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.visibility">collapsed</option>
      <option name="charting.axisY.abbreviation">none</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.chart">column</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.stackMode">stacked</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.legend.placement">right</option>
      <option name="height">200</option>
      <option name="refresh.display">progressbar</option>
    </chart>
  </panel>
</row>
```

Thank you
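For problem 1, one possible sketch: build an explicit label string with eval printf so whole numbers keep a trailing .0 (PERCENTAGE_LABEL is a name I made up; note the formatted field is a string, so keep the numeric field for the chart itself):

```
| eval PERCENTAGE = round(PERCENTAGE, 1)
| eval PERCENTAGE_LABEL = printf("%.1f", PERCENTAGE)
```

Simple XML data labels render the plotted value, so this only helps where the formatted field can be the one displayed. Styling the in-bar labels themselves (color, bold, font family, problems 2 and 3) is not exposed as charting options in Simple XML as far as I know, and generally requires dashboard CSS instead.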
Hi, one of our use cases gives the error below while sending email to recipients. The use case is configured to run every 20 minutes, and its alert trigger actions are send email and create notable. We cannot see the results in the notable index and we are not receiving the email; if we run the use case manually, we can see the results. I checked the python logs and found nothing about this use case.

Error message:

08-17-2023 03:04:34.681 +0000 ERROR ScriptRunner [6973 AlertNotifierWorker-0] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=https://XXxxx-splunk.com/app/SplunkEnterpriseSecuritySuite/@go?sid=scheduler_c29jX2VzX3JlcG9ydA__SplunkEnterpriseSecuritySuite__RMD5851862a85f91df65_at_1692241200_11388_A08142CF-D931-4BE1-ADEA-3D2962FFE6D6" "ssname=xxxxxx- xxxx-xx-xxx -  - Rule" "graceful=True" "trigger_time=1692241474" results_file="/opt/splunk/var/run/splunk/dispatch/scheduler_c29jX2VzX3JlcG9ydA__SplunkEnterpriseSecuritySuite__"': _csv.Error: line contains NUL External search command 'sendemail' returned error code 1.
Hi, I can see duplicate data in Splunk using the query below:

index="indexname" | stats count by _raw | where count > 1

I checked 10 different indexes, of which 8 have duplicate logs.
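To see whether duplicates arrived in the same ingestion pass or were re-ingested later (a common cause being a re-read input or a forwarder retry), a sketch comparing index times within each duplicate group (the index name is a placeholder):

```
index="indexname"
| stats count min(_indextime) as first_seen max(_indextime) as last_seen by _raw
| where count > 1
| eval spread_secs = last_seen - first_seen
```

A spread of zero suggests duplication at ingestion time; a large spread suggests the same source was read again later.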
Hello there, I would like some help with my query. I want to summarize 2 fields into 2 new columns. One field is unique, but the other is not: the field fhost is not unique. I want the sum of the field "cores" by unique combination of the columns "clname" and "fhost". I am struggling with how to do this properly and how I can sum unique values for the column "fhost".

| makeresults
| eval clname="clusterx", fhost="f-hosta", vhost="v-hosta", cores=2, cpu=1
| append [| makeresults | eval clname="clusterx", fhost="f-hosta", vhost="v-hostb", cores=2, cpu=1 ]
| append [| makeresults | eval clname="clusterx", fhost="f-hostb", vhost="v-hostc", cores=4, cpu=1 ]
| append [| makeresults | eval clname="clusterx", fhost="f-hostc", vhost="v-hostd", cores=6, cpu=1 ]
| eventstats sum(cpu) as total_vhost_cpus by clname
``` This is not working ```
| eventstats sum(cores) as total_fhost_cores by clname fhost

The output should be in table format:

| table clname cores cpu fhost vhost total_vhost_cpus total_fhost_cores

Thank you in advance. Harry
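A sketch of one way to sum cores while counting each (clname, fhost) pair only once (field names taken from the post): zero out the repeats with streamstats before summing. Appended after the makeresults sample above:

```
| streamstats count as occurrence by clname fhost
| eval cores_once = if(occurrence=1, cores, 0)
| eventstats sum(cores_once) as total_fhost_cores by clname
| eventstats sum(cpu) as total_vhost_cpus by clname
| fields - occurrence cores_once
| table clname cores cpu fhost vhost total_vhost_cpus total_fhost_cores
```

With the sample data, total_fhost_cores comes out as 12 (2 + 4 + 6), rather than the 14 a plain sum over all rows would give, because f-hosta's cores are counted once despite appearing on two rows.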
Hi team, I am encountering an issue while trying to enable the Transaction Analytics part for a Python application using the Python agent. The stand-alone agent is working fine; however, I have observed that no analytics-related data is reflected on the controller. I followed the steps from link1 and link2. It is a Django-based application, and below is the appdynamics.cfg file:

[agent]
app = Test
tier = T1
node = N1

[controller]
host = <saas-controller>
port = 443
ssl = true
account = <saas controller-account>
accesskey = <password>

[log]
dir = path/to/directory
level = debug
debugging = on

[services:analytics]
host = https://bom-ana-api.saas.appdynamics.com
port = 9090
ssl = true
enabled = true

To run the agent I use the command 'pyagent proxy start -c <path/to/appdynamics.cfg>'. Please let me know what may be done here.
Hi Olly experts, we have a 3-node Kubernetes cluster that we want to monitor, so I installed the OTel collector using Helm (chart version 3.0). It deployed successfully, but we were not able to find any Kubernetes data in the Olly dashboards. Please suggest how to troubleshoot the problem. Note: the firewall was disabled on the server. Regards, Eshwar
I am using the collect command to transfer data from one index to another. The query is like:

index=A source=sourceA sourcetype=sourcetypeA host=hostA | collect index=B source=sourceA sourcetype=sourcetypeA host=hostA

But some data is missing. Why?
Hi all, I am trying to implement Splunk for a particular use case:

HF (configured with a proxy) > transfer data via the internet > indexer

Kindly share your knowledge. Any further help would be highly appreciated. Thanks.
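A sketch of the outputs.conf side on the heavy forwarder, on the assumption that "proxy" means a SOCKS proxy (host names and ports are placeholders; exact SSL settings vary by Splunk version, so treat this as a starting point rather than a confirmed configuration). Splunk-to-Splunk traffic over the internet should always be encrypted:

```
[tcpout]
defaultGroup = internet_indexers

[tcpout:internet_indexers]
server = indexer.example.com:9997
# Encrypt forwarder-to-indexer traffic across the internet
useSSL = true
# Route the S2S connection through a SOCKS5 proxy, if that is the
# proxy type in play; HTTP proxies are not used for S2S forwarding
socksServer = proxy.example.com:1080
```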
I want to query DB information from DB Connect, but the problem is that each query fetches the entire table. This takes up a lot of storage space. Is there a way to get only new rows without duplicates? The amount of new information differs every day, so I can't just limit the number of rows fetched. Thanks
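This is what DB Connect's "rising column" input mode is for: you pick a monotonically increasing column (an auto-increment id or a timestamp), and DB Connect stores a checkpoint of the highest value it has seen, substituting it for the ? placeholder on each run. A sketch with a hypothetical table and column:

```
-- Rising-column query sketch; app_logs and log_id are made-up names.
-- DB Connect replaces ? with the last checkpoint value at each run,
-- so only rows newer than the checkpoint are fetched.
SELECT * FROM app_logs
WHERE log_id > ?
ORDER BY log_id ASC
```

The ORDER BY on the rising column matters: the checkpoint is taken from the last row returned, so unsorted results can skip or repeat rows.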
Folks, does anyone know, when we configure the advanced settings in Source Type (Settings > Source Types and Edit), where the original configuration file shown in the advanced view lives? I chose the "linux_secure" source type and checked the advanced tab. I saw "src" and "src_ip" in search results for my data that used this source type; however, I couldn't find any settings for these fields. So I thought there were configurations missing from this tab, and I wanted to know the source configuration files for each source type. Please, someone, share your knowledge.
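One way to trace where a sourcetype's effective settings come from is Splunk's btool, which prints each setting together with the file on disk that supplies it (path assumes a default Linux install):

```
# List every effective props.conf setting for the linux_secure
# sourcetype; --debug prefixes each line with the source file path
$SPLUNK_HOME/bin/splunk btool props list linux_secure --debug
```

Note that search-time fields like src and src_ip often come not from props.conf directly but from an add-on's transforms.conf (field extractions or aliases), which the Source Types advanced tab does not display.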
I want to generate a time chart that shows time on the x-axis, results on the y-axis, and a hue (legend) showing the different analytes. So far this is what I have generated, which is not the format I am looking for; my search is below. I probably do not need fieldformat, but was thinking I needed the correct data type. I am used to Python Jupyter notebooks and am quite new to Splunk. Any help would be very appreciated. For example, I am showing a scatter plot from Python that mirrors what I am looking for in Splunk.

[screenshots omitted: the incorrect Splunk chart and the desired Python scatter plot]

|inputlookup $lookupToken$
|where _time <= $tokLatestTime$
|where _time >= $tokEarliestTime$
|search $lab_token$
|search $analyte_token$
|search $location_token$
|sort _time desc
|replace "ND" WITH 0 IN Results
|table _time, Results, Analyte
|fieldformat _time=strftime(_time, "%Y-%m-%d")
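A sketch reusing the tokens and field names from the post: timechart pivots the data so each Analyte value becomes its own series, which is what the chart legend keys on. The span and the avg aggregation are assumptions about how multiple results per day should be combined:

```
| inputlookup $lookupToken$
| where _time >= $tokEarliestTime$ AND _time <= $tokLatestTime$
| search $lab_token$ $analyte_token$ $location_token$
| replace "ND" WITH 0 IN Results
| eval Results = tonumber(Results)
| timechart span=1d avg(Results) by Analyte
```

The tonumber step matters because timechart needs a numeric y-value, and values read from a lookup (especially after replace) may be strings.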
Greetings! I have been Googling, Pluralsighting, and reading the Splunk docs, and I am extremely new to Splunk. I searched the community and didn't find anything close enough to what I need, so I am asking whether anyone here has an idea of how I can find newly created users and then check whether there are also events signifying those users were added to one of two groups. So far what I have is not working; I can't figure out how to take the result set from the first search and fire off a second search with it (like a foreach), or whether I am even thinking about that right. I was thinking the fields command would do it; I have also tried to use "return":

index=wineventlog source="wineventlog:security" eventcode=4720
| fields user_principal_name
| search index=wineventlog source="wineventlog:security" eventcode in (4732,4728) "group1" OR "group2"

I don't get errors, and I can break the first query up and it works, but I am not sure how to take that result and pass it to the second. Most examples feature lookups, and if that is the best way, awesome. I am looking for technique tips as well as search construction help. Thank you in advance!
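One common pattern for this (a sketch, not a drop-in answer) is a subsearch: the bracketed inner search runs first, and the user list it returns becomes a filter on the outer search. The field holding the added account in 4732/4728 events varies by Windows TA version (it is often member_id rather than user_principal_name), so the rename below is an assumption you will need to verify against your data:

```
index=wineventlog source="wineventlog:security" EventCode IN (4732,4728) ("group1" OR "group2")
    [ search index=wineventlog source="wineventlog:security" EventCode=4720
      | stats count by user_principal_name
      | rename user_principal_name AS member_id
      | fields member_id ]
| table _time member_id EventCode
```

The subsearch returns its rows as an OR'd filter (member_id="a" OR member_id="b" ...), which is exactly the "take the first result set and feed the second search" behaviour you described. Subsearches are capped (by default around 10,000 results and a time limit), so for large user populations a lookup-based approach scales better.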