All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I am trying to create a table with the total number of events and the error count. The field 'services.errorCode' is populated when there is an error and empty when the event is a success. The query below gives me the correct Total, but the Error Count is always 0, even though I have verified in the events that many of them have errorCode filled.

index=prod | stats count as "Total", count(eval("services.errorCode"!=null)) as "Error Count" by services.serviceProviderName

Please guide me on how this could be done.
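A likely cause: inside eval, `"services.errorCode"` in double quotes is a string literal (which is never null), and SPL tests for null with isnotnull()/isnull() rather than comparison. Field names containing dots must be wrapped in single quotes. A sketch of a corrected search, using the index and field names from the question:

```spl
index=prod
| stats count as "Total",
        count(eval(isnotnull('services.errorCode'))) as "Error Count"
  by services.serviceProviderName
```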
A Splunk search process is consuming high memory and causing splunkd to crash. How can I collect jemalloc heap data for Splunk Technical Support to troubleshoot high memory growth caused by a specific search process?
Hello everybody! I'm new to Splunk development and I can't get a query to run in my drilldown. I have a map, and when a user clicks on it I want to look up the clicked name in a lookup file, then open my dashboard with these two values as URL parameters. So: click, fetch data from the lookup using the clicked name, then go to the dashboard with the data and the clicked name as parameters. Like this:

<drilldown>
  <set token="TOKEN">| inputlookup lookup_name.csv | search name = $click.name$ | fields nbr</set>
  <link target="_blank">/app/dashboard_$TOKEN$ nom_site=$click.name$</link>
</drilldown>

Thanks a lot!
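In Simple XML, <set> only assigns literal text to a token; it does not execute the search string. One common pattern (a sketch; the token names and dashboard path here are hypothetical) is to have the drilldown set a token, run the lookup in a dashboard <search> driven by that token, and capture the result in <done>:

```xml
<drilldown>
  <!-- store the clicked name; this does NOT execute a search -->
  <set token="click_name">$click.name$</set>
</drilldown>

<!-- elsewhere in the dashboard: runs whenever click_name changes -->
<search>
  <query>| inputlookup lookup_name.csv | search name="$click_name$" | fields nbr</query>
  <done>
    <set token="nbr">$result.nbr$</set>
  </done>
</search>
```

To open another dashboard with both values, a link such as /app/my_app/my_dashboard?form.nom_site=$click_name$&form.nbr=$nbr$ can then be used, assuming the target dashboard defines matching form tokens.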
Hi all, I want to create a new AWS monitoring alert. As a first step I checked the AWS fields and saw that I need to parse the "principalId" field from _raw (I want to base the rule on principalId). Part of the raw: {\"type\": \"Root\", \"principalId\": \"444444444444\", \"arn\"

I'm running this query:

search index=aws userIdentity.type=Root eventName=ConsoleLogin earliest=-1d | rex field=_raw principalId\W\W:\s\W\W(?P<principalId>\d*)

and getting results, but without the new "principalId" field. What am I missing in the query? Thanks!
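Two likely issues: the regular expression passed to rex must be enclosed in double quotes, and Splunk's usual capture-group syntax is (?<name>...). A sketch against the sample raw above (the \W+ simply skips the escaped quotes, colon, and space between the field name and the digits):

```spl
index=aws userIdentity.type=Root eventName=ConsoleLogin earliest=-1d
| rex field=_raw "principalId\W+(?<principalId>\d+)"
```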
Hello All, We created a custom search command in Splunk which calculates a specific metric on all the servers that are part of the distributed search. I wanted to find out whether it is possible to run this custom search on Splunk servers which are not part of the distributed search, via GUI or CLI. When I try running

/opt/splunk/bin/splunk search 'customsearchcommand inputs' -uri https://172.31.2.100:8089

(172.31.2.100 is not part of the distributed search), it says customsearchcommand not found. The custom search command is only present on the search head, and it works perfectly on all the distributed search peers without the command being defined there. Thanks
I used this query to get the count of URI and IP:

index=*index* host="*host*" status = "400" OR "404" OR "500" OR "403" status!="200" status!="NULL" NOT "GoogleBot" status=404 | top limit=10 uri | where count > 9 | append [ search index=*index* source=*source* | top limit=10 Real_IP | where count > 10 ]

I need a search that automatically picks the URI with the highest count and shows the IP address corresponding to that URI, along with the URI.
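Appending two independent top searches produces unrelated rows, so the IPs never correspond to the URIs. One way (a sketch, assuming uri and Real_IP are present on the same events) is to count by the uri/IP pair and keep the highest:

```spl
index=*index* host="*host*" status=404 NOT "GoogleBot"
| stats count by uri, Real_IP
| sort - count
| head 1
```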
Hi, I am trying to train an LSTM time series classifier in the search window in Splunk (step 4 of the Model Development Guide in the Deep Learning Toolkit). It says that the data can be split into a training set and a testing set using the | sample command. To my understanding, the datasets are then constructed by taking events randomly. I do not want datasets constructed from randomly selected events; I want the data sorted by time and then split by a ratio. How can I do this? Splunk version: 7.2.4; Deep Learning Toolkit for Splunk version: 2.3.0
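For a time-ordered split, one approach is to number the sorted events and cut at a ratio instead of using | sample. A sketch assuming an 80/20 split (replace the base search with your own):

```spl
index=my_index
| sort 0 _time
| streamstats count as row
| eventstats count as total
| eval dataset=if(row <= round(total*0.8), "train", "test")
```

The earliest 80% of events land in "train" and the most recent 20% in "test"; sort 0 removes the default result limit so the whole set is ordered.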
Hello Team, I have been working to optimize the data going to Splunk and found that EventCode 4662 with Object Type = Computer is forwarding a huge amount of data. On further investigation I found that events whose Subject user name ends in $ (local/machine accounts) can be blacklisted from being sent to Splunk Cloud. To do so I added the stanza below to the Splunk application on the Deployment Server:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="(?:<Data Name='SubjectUserName'>).+(?:\$)"
blacklist2 = EventCode="4662" Message="Object Type:(?!\s*(groupPolicyContainer|computer|user))"
renderXml = true
index = DC_Events

The regex below works completely fine on sample data:

blacklist1 = EventCode="4662" Message="(?:<Data Name='SubjectUserName'>).+(?:\$)"

This would let me avoid sending events whose Subject User Name ends in $, which could save a lot of space in Splunk, as more than a hundred servers are sending data and the volume will only grow. Thanks & Regards, Pratik Pashte
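A hedged observation: with renderXml = true the forwarded events are rendered as XML, so a Message pattern written against the classic text layout ("Object Type: ...") will never match — which would explain why blacklist1 (written against <Data> elements) works while blacklist2 does not. A sketch of an XML-form equivalent; note that in XML-rendered 4662 events the ObjectType value may be a schema GUID rather than a friendly name, so verify the actual values in your raw events before deploying:

```conf
[WinEventLog://Security]
# Match the XML <Data Name='ObjectType'> element instead of the
# classic "Object Type:" text layout (only the latter never appears
# when renderXml = true).
blacklist2 = EventCode="4662" Message="<Data Name='ObjectType'>(?!\s*(groupPolicyContainer|computer|user))"
```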
Hi, I have installed the Splunk Add-on for Microsoft Cloud Services on my heavy forwarder instance, and the app has been working as expected with 7-8 inputs configured (storage account inputs). Now I am trying to add a new storage account where the container resides in a sub-folder.

My existing inputs' folder pattern in Azure (which works fine): Storage account -> containers -> <my_container_name>

inputs.conf as below:

[mscs_storage_blob://myfirstblob]
account = my_account_name
blob_mode = append
collection_interval = 300
container_name = <my_container_name>
index = index1
sourcetype = mscs:storage:blob:myfirstblob
python.version = python2

I need help configuring this path pattern: Storage account -> containers -> <my_folder> -> <my_container_name>. I have tried multiple patterns like the ones below, but none seem to work:

[mscs_storage_blob://myfirstblob]
account = my_account_name
blob_mode = append
collection_interval = 300
container_name = containers/<my_folder>/<my_container_name> (or /<my_folder>/<my_container_name>)
index = index1
sourcetype = mscs:storage:blob:myfirstblob
python.version = python2
Hi All, I have configured syslog-ng in our company to collect the logs from network devices. I am a little confused about setting up retention on the syslog server.

options {
    chain_hostnames(no);
    create_dirs(yes);
    dir_perm(0755);
    dns_cache(yes);
    keep_hostname(yes);
    log_fifo_size(2048);
    log_msg_size(8192);
    perm(0644);
    time_reopen(10);
    use_dns(yes);
    use_fqdn(yes);
};

source s_network {
    tcp(ip(0.0.0.0) port(514));
    udp(ip(0.0.0.0) port(514));
};

# Destinations
destination d_all {
    file("/var/syslog/logs/catch_all/$HOST/$YEAR-$MONTH-$DAY-$HOUR-catch_all.log" perm(0666) dir_perm(0777) create_dirs(yes));
};

log {
    source(s_network);
    destination(d_all);
};

# Source additional configuration files (.conf extension only)
#@include "/etc/syslog-ng/conf.d/*.conf"
# vim:ft=syslog-ng:ai:si:ts=4:sw=4:et:

# command line audit logging
local1.* -/var/log/cmdline

Please see the above configuration and advise.
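syslog-ng itself has no retention option; for date-stamped destination files like the ones above, retention is usually handled outside syslog-ng, for example with a daily cron job. A sketch (the path matches the destination above; the 30-day period is an assumption to adjust):

```shell
# Remove catch-all logs older than 30 days (run daily from cron).
# Path and retention period are assumptions - adjust to your environment.
find /var/syslog/logs/catch_all -type f -name '*catch_all.log' -mtime +30 -delete
```

Logrotate is an alternative, but since the files are already split by day and hour, a simple age-based cleanup like this is often enough.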
Hi there, I have the Google Import/Export app installed and followed the instructions here (How to setup app) on how to set it up. When I try to configure the app and supply the service account key, there is no confirmation or direction after that step, and the screen doesn't go past the message "The service account key is currently stored in secure storage." How do I sync a spreadsheet from Google Drive to Splunk using this app? I see no interface for doing that in our Splunk Cloud. Can you please shed some light on the process, or tell me if there is a command you have to use when importing a spreadsheet? Also, I have granted edit permission to the service account email on the Google Drive folder and have a test file in it. Would really appreciate it if anyone, or @LukeMurphey, could provide some help here.
Has anyone encountered this problem?
Hi, I installed the Vertica database driver for my DB Connect (3.4.0). I downloaded the driver, extracted it on my local machine, and uploaded only the jar file to $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers; however, although I tried versions 7.1, 7.2, 9.0, and 10, DB Connect didn't recognize the Vertica database driver at all. Did I miss any steps or make a mistake? Thanks
Hello, I am seeing the error below when I try to connect to an MS-SQL server using the "MS-SQL Server Using Generic Driver" connection:

"DBX Server did not respond within 310 seconds. Please make sure it is started and listening on port 9998."

I have installed the Java 8 JDK, entered the JRE installation path as /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.262.b10-0.el7_8.x86_64/jre, and then clicked Save. Do the Task Server JVM options and Query Server JVM options need to be filled in? If yes, can you please help me with the values? Please help me fix this issue. Thanks in advance.
I'm following all the steps from the custom visualization tutorial. After I run "npm run build" in the /opt/splunk/etc/apps/testchart/appserver/static/visualizations/testchart folder, it throws these errors:

ERROR in ./~/xmlhttprequest/lib/XMLHttpRequest.js
Module not found: Error: Cannot resolve module 'child_process' in /opt/splunk/etc/apps/testchart/appserver/static/visualizations/testchart/node_modules/xmlhttprequest/lib
@ ./~/xmlhttprequest/lib/XMLHttpRequest.js 15:12-36

ERROR in ./~/xmlhttprequest/lib/XMLHttpRequest.js
Module not found: Error: Cannot resolve module 'fs' in /opt/splunk/etc/apps/testchart/appserver/static/visualizations/testchart/node_modules/xmlhttprequest/lib
@ ./~/xmlhttprequest/lib/XMLHttpRequest.js 16:9-22

Is this due to code that I've copied incorrectly, or something else? Any input would be greatly appreciated.
In which situations would the persistent queue be used on a UF - only if the indexer is slow at writing, or down for a long time? I mean, would the UF ever have to drop events if the UF itself crashes? If the UF crashed and is then restarted, would it resume sending events from where it had left off reading the files, with the in-memory queue having been written to the persistent queue, or would it read the events from the source again?
Can anyone list the possible reasons why a UF may drop events? There may be many situations, but which known reasons come from practical experience?
In case the indexer is down or slow at writing events to disk, I guess the UF's parsing queue and output queue would fill up; assume dropEventsOnQueueFull = -1. Suppose the indexer came back up the next day - where would the UF resume sending events from? From where it had left off reading the files, or would it drop events? Can I assume from metrics.log that if group=queue and blocked=true, the UF is blocked and may drop events? Or will it send all logs from where it left off once the queue has space and the indexer is up and running? I don't have the option of a persistent queue because I monitor some log files using a monitor stanza.
Is there any way to find out how many events were dropped by a UF in a day? I need a daily report showing how many events were dropped by the UF. Can I get the number of events dropped, or that x MB of events were dropped?
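There is no direct per-event drop counter in a standard report, but a common proxy is to chart blocked queues from the forwarders' own metrics.log in the _internal index. A sketch to run over the last day (whether blocked queues actually led to drops depends on the dropEventsOnQueueFull setting discussed above):

```spl
index=_internal source=*metrics.log* group=queue blocked=true
| timechart span=1h count by host
```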
Hello Team, I hope everyone is fine. I have an issue here that I hope someone can help me with! We have been reporting in Splunk through the ServiceNow API. We are able to report on every field; however, when it comes to fields such as location.site, location.country, or location.city, the information is not available in the reports. It is urgent; can someone please tell me how to rectify the situation? Regards, Nilanjan