All Topics


I have a list of switches on our network, and once in a while some of them stop reporting to Splunk. I need a query that lists the switches that are not reporting, so that I can build a dashboard from it.
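A minimal sketch, assuming you keep the expected switch names in a lookup (hypothetically expected_switches.csv with a host column) and that the switches report into an index called network; any switch missing entirely, or silent for more than four hours, makes the list:

| tstats latest(_time) AS lastSeen WHERE index=network BY host
| append [| inputlookup expected_switches.csv | eval lastSeen=0]
| stats max(lastSeen) AS lastSeen BY host
| where lastSeen < relative_time(now(), "-4h")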
I see logs of connections leaving the proxy to an external IP. How do I find out which internal IP requested that external site/IP?
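A hedged sketch, assuming your proxy logs carry CIM-style src_ip and dest_ip fields (the index name and the IP are placeholders):

index=proxy dest_ip="203.0.113.10"
| stats count BY src_ip user url
| sort - count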
Scripted input not showing up in search results, but running fine on the server
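A first diagnostic sketch: splunkd logs every scripted input execution under the ExecProcessor component, so errors from the script itself usually surface here (the script name is a placeholder). It is also worth confirming that the input's target index exists and that you are actually searching it:

index=_internal source=*splunkd.log* component=ExecProcessor "your_script"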
Hello, does Splunk support sound alerts in Enterprise dashboards, based on a threshold in the query? For example, I have a query that shows (status > 100) or (status < 100). If status >= 100, I would like to get a sound alert. I am displaying these statuses in a Trellis single value. Please let me know whether it is possible to get an alert once the value reaches a certain threshold, and how you would set it up in a Splunk dashboard. Thank you!
I'm having an issue with the authentication.conf file on my search head. I have the file managed in Puppet with the necessary SAML configuration. When Splunk restarts, the user-to-role mapping that was created by users signing into Splunk gets cleared, and I don't want to have to keep re-adding users to that mapping. This results in searches becoming orphaned by "disabled users" even though the users are still valid; when they sign in the next time, the searches are no longer orphaned. What is the proper way to manage this configuration? Should I be placing the SAML configuration in a separate location, or would the file still get modified in a separate location?
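For reference, a hedged sketch of the stanza Splunk writes into authentication.conf for SAML group-to-role mapping, assuming an authSettings stanza hypothetically named saml; if Puppet owns the whole file, these mappings have to live in the managed template as well, or each Puppet run will wipe the changes made at sign-in:

[roleMap_saml]
admin = Splunk Admins
user = Splunk Users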
Hi, please indulge me, as I am relatively new to Splunk. I wish to create a query or report I can run on demand to provide proactive data from our client (Windows) machines, namely battery status, CPU usage, disk space usage, along those lines. I found the search below on Lantern but, pardon my ignorance, have no idea how I would implement it in a Splunk search.

| mstats avg(LogicalDisk.%_Free_Space) AS "win_storage_free" WHERE index="<name of your metrics index>" host="<names of the hosts you want to check>" instance="<names of drives you want to check>" instance!="_Total" BY host, instance span=1m
| eval storage_used_percent=round(100-win_storage_free,2)
| eval host_dev=printf("%s:%s\\",host,instance)
| timechart max(storage_used_percent) AS storage_used_percent BY host_dev

Would appreciate some help and guidance. Thank you in advance!
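A hedged sketch of the same search with the placeholders filled in; the index name win_metrics, the host pattern, and the drive filter are all assumptions to be replaced with your own values. Paste the whole pipeline into the Search & Reporting search bar and it runs as-is:

| mstats avg(LogicalDisk.%_Free_Space) AS "win_storage_free" WHERE index="win_metrics" host="PC-*" instance="*" instance!="_Total" BY host, instance span=1m
| eval storage_used_percent=round(100-win_storage_free,2)
| eval host_dev=printf("%s:%s\\",host,instance)
| timechart max(storage_used_percent) AS storage_used_percent BY host_dev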
How can we ensure that the HTTP Event Collector works correctly: no dropped connections on the HEC endpoint, a solid flow of data, batching implemented correctly, and so on? What are the best practices around this?
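One quick, hedged check: HEC exposes a health endpoint that a load balancer or monitoring job can poll (the hostname is a placeholder; 8088 is the default HEC port, which may differ on Splunk Cloud):

curl -k https://splunk.example.com:8088/services/collector/health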
We are quite close to reaching the license limit, data-wise: about 2 TB off the 20 or so TB allowed. What can we do to ensure that we don't breach the license limit?
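A sketch for spotting what consumes the license, from the license usage log (b and st are how license_usage.log abbreviates bytes and sourcetype):

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS bytes BY st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB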
I have this search query, which will return a single row of data:

index=xyz
| search accountID="1234" instanceName="abcd1"
| table curr_x, curr_y, curr_z, op1_x, op1_y, op1_z, op2_x, op2_y, op2_z, op3_x, op3_y, op3_z
| fields - accountID, instanceName

and I want to display the resulting row of data in a matrix format like:

Option     x        y        z
current    curr_x   curr_y   curr_z
option_1   op1_x    op1_y    op1_z
option_2   op2_x    op2_y    op2_z
option_3   op3_x    op3_y    op3_z

Please note: the field names are indicative; the actual values of the respective fields are to be displayed. Assumption: there will always be only one row for a selected accountID and instanceName. Can someone please help me by letting me know how this can be achieved?
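A sketch of one way to pivot that row, assuming the field names really follow the <option>_<axis> pattern shown: transpose turns the single row into (column, "row 1") pairs, rex splits each field name into its option and axis parts, and xyseries rebuilds the matrix:

index=xyz accountID="1234" instanceName="abcd1"
| table curr_x curr_y curr_z op1_x op1_y op1_z op2_x op2_y op2_z op3_x op3_y op3_z
| transpose
| rex field=column "^(?<Option>.+)_(?<axis>[xyz])$"
| xyseries Option axis "row 1"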
Hi, trying to correlate failed logon attempts (event 4776) with the IIS OWA logs, I realized that the OWA logs are in UTC by default and I am in CEST time (Madrid). According to the official documentation:

To configure time zone settings, edit the props.conf file in $FORWARDER_HOME/etc/system/local/ or in your own custom application directory in $FORWARDER_HOME/etc/apps/.

https://docs.splunk.com/Documentation/Splunk/8.2.5/Data/Applytimezoneoffsetstotimestamps

I deployed several apps on the Exchange server, but only one app, called TA-Windows-Exchange-IIS, is reporting wrongly. So, if I understood correctly, I only need to change the timezone in that specific app. That is what I did, creating a props.conf file in the local path of the app, C:\Program Files\SplunkUniversalForwarder\etc\apps\TA-Windows-Exchange-IIS\local:

[monitor://C:\inetpub\logs\LogFiles\W3SVC1\*.log]
TZ = UTC

[monitor://E:\Program Files\Microsoft\Exchange Server\V15\Logging\Ews]
TZ = UTC

I restarted the splunkforwarder service just in case. The result is that the time from those Exchange events is still taken wrongly, in UTC. Any idea what I am doing wrong? Thanks a lot.
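One likely cause, offered as a hedged sketch: [monitor://...] stanzas are inputs.conf syntax, while props.conf expects [<sourcetype>] or [source::<path>] stanzas. Also, for a universal forwarder, TZ is honored where the events are parsed, which is normally the indexers or a heavy forwarder, not the UF itself. Something like this in props.conf on the parsing tier:

[source::C:\inetpub\logs\LogFiles\W3SVC1\*.log]
TZ = UTC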
Hi all, I am facing an issue related to time zone interpretation. One server is configured with CET and sends logs to Splunk Cloud (to the best of my knowledge, the indexers are placed in the GMT timezone). This server sends syslog to SC4S servers configured with the GMT time zone. The event time value in Splunk is being picked up from the raw event time. Since the Splunk indexers are in GMT and SC4S is in GMT, I am getting a time difference between the event time (server time, CET time zone) and the index time (GMT time zone). Please help: how can I resolve this issue of a huge difference between event time and index time? Thanks, Bhaskar
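A hedged sketch of the usual fix: declare the source's timezone per sourcetype (or per host) in props.conf on the parsing tier, deployed to Splunk Cloud as an app; the sourcetype name and zone below are placeholders. SC4S also documents an SC4S_DEFAULT_TIMEZONE environment variable for sources that send no offset, if memory serves:

[my_cet_sourcetype]
TZ = Europe/Paris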
I am investigating how to set up a continuous build process for our Splunk add-on, and I saw that there are three options:

slim - a CLI tool that is part of Package apps with the Packaging Toolkit | Documentation | Splunk Developer Program
AppInspect - part of Validate quality of Splunk apps | Documentation | Splunk Developer Program
Add-on Builder - Install the Add-on Builder - Splunk Documentation

slim gave me very little output, and it wasn't clear what sort of validations it was running. AppInspect is very configurable and provides rich output, so I'm pretty happy with it. However, I got a recommendation to trust the Add-on Builder's validations. Unfortunately, apart from using some Selenium manipulations to drive the Splunk UI, I wasn't able to identify a way to call its validation logic automatically from an HTTP API or a CLI. Finally, the output of AppInspect and the Add-on Builder differs; I'm currently checking why this is so. Perhaps the validations are completely different. So my questions to the community are:

1. What is the best approach to validate a Splunk add-on?
2. How would you recommend automating at least the validation part of the process?

Thank you so much in advance!
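For the CI side, a hedged sketch using the AppInspect CLI (the pip package is splunk-appinspect; the app filename and tag choice are placeholders):

pip install splunk-appinspect
splunk-appinspect inspect my_addon.tar.gz --included-tags cloud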
Hi Community, we have encountered a weird case with the curl command. One of the users was running a curl command to get a response from a server and run an SPL search on it. Since no time limit was specified in the curl command, the request would go back in time and retrieve all the data ever produced by the URL. This process would take a long time to retrieve the information and would die on the server. We found this once we had a slowness issue with the server and had added the time parameters. The slowness issue is now fixed. However, I would like to check on the possibility of getting the list of curl commands executed in Splunk. Or are there some other alternatives to get the list of curl commands being executed in Splunk? Thanks in advance. Regards, Pravin
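A hedged sketch using the audit index, which records the full search string of every search that runs; this assumes the curl calls were issued through a custom curl search command, so the string shows up in the recorded search text:

index=_audit sourcetype=audittrail action=search info=granted search="*curl*"
| table _time user search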
Hello, I use a very basic search over a short period, like the one below, but I am a little surprised by the quota size used by this search (350 MB for 148,000 events between 7h and 13h):

index=tutu sourcetype="toto" type=x earliest=@d+7h latest=@d+19h
| fields sam
| eval sam=lower(s)
| stats dc(s)

So I am trying to find some leads for reducing the quota size. Does anybody have an idea, please?
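A hedged observation: the query keeps only the field sam with | fields, but the eval and dc() then reference a field named s, which was discarded. A sketch that stays consistent and pushes the distinct count down to the indexers, which should shrink the dispatch artifacts that count against the quota:

index=tutu sourcetype="toto" type=x earliest=@d+7h latest=@d+19h
| fields sam
| stats dc(eval(lower(sam))) AS distinct_sam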
I have a long event from which I tried to extract fields, using Splunk's extract additional fields feature. I chose comma-delimited extraction and named the fields appropriately. I have 117 fields altogether, and when I display the fields with the table command, I notice that there are a couple of data-to-field mismatches. The field3 value is replicated for field5, and the field4 value is replicated for field8. Please refer to the screenshot for better understanding: [screenshot: regex error]. I have checked transforms.conf and it looks fine. I'm not sure how to get past this issue. Any help guiding me towards the right solution will be highly appreciated.
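For comparison, a hedged sketch of what a comma-delimited extraction typically looks like in transforms.conf, with placeholder names; with 117 fields, a single extra or quoted comma inside a value shifts every field after it, which would produce exactly this kind of mismatch:

[my_comma_extraction]
DELIMS = ","
FIELDS = field1,field2,field3,field4,field5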
I have messages like the ones below in my logs, and I want to extract the ErrorCode from those messages. Here the ErrorCode is CIS-46031; however, there could be a space right after ErrorCode or after the colon:

msg: ErrorCode:CIS-46031,ErrorMessage:Some unknown error occurred in outage daemon request. Please check.,Error occurred in CIS domain events outage processing.
msg: ErrorCode : CIS-46032,ErrorMessage:Some unknown error occurred in outage daemon request.
msg: ErrorCode :CIS-46033, ErrorMessage:Some unknown error occurred in outage daemon request.

How can we do this in Splunk?
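A sketch with rex, where \s* tolerates the optional spaces on either side of the colon; the value pattern [A-Z]+-\d+ is an assumption based on the three samples:

... | rex field=_raw "ErrorCode\s*:\s*(?<ErrorCode>[A-Z]+-\d+)"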
Hi, is there a way to connect Splunk Connect for Kubernetes to HEC on a Splunk Cloud instance through an HTTP(S) proxy? Is it possible to use `environmentVar:` in the values.yml file? If yes, what are the variables and the format to use? Regards, Nicolas.
Hello, does anybody know how to set shared axis ranges for multiple metrics using a dual-axis chart? Currently, when I add more than one metric on the right (or left) axis, the widget creates separate ranges for every metric, which is unusable. Use case: I want to use the LEFT axis for the Response Time (line) of my Business Transaction and the RIGHT axis for the Calls per Minute (column) and Errors per Minute (column) of my Business Transaction. As it stands, it looks like my BT has a 100% fail rate, but there were only 2 errors. Thanks for any advice. Tomas B.
How can I find the time of the last event from each of the hosts in the system? The output could be in the format below:

host1|datetime
host2|datetime

Thank you.
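A sketch using tstats, which reads index-time metadata and is therefore cheap; index=* is an assumption, so narrow it to your own indexes:

| tstats latest(_time) AS lastTime WHERE index=* BY host
| eval datetime=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| table host datetime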
Dear all, I need advice from experts who have past experience with Splunk; please do not suggest Splunk Professional Services or partner help. How can I approximately measure how much data a source device generates that I would need to ingest into Splunk? There must be some way to estimate it to some extent; for example, a firewall will generate more logs than a Windows server. Let's assume I am ingesting 300 GB/day into Splunk and I have 5 administrative users using the search head; then is the highlighted sizing below good to follow? If I am adding the Enterprise Security module, does the sizing change? How much additional data ingestion needs to be added, and what is the math behind this? Thanks.
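If comparable devices are already sending data in somewhere you can sample, a hedged sketch for measuring actual daily volume per host from the license usage log (b and h are license_usage.log's abbreviations for bytes and host; the 7-day window is an arbitrary sample):

index=_internal source=*license_usage.log* type=Usage earliest=-7d@d latest=@d
| stats sum(b) AS bytes BY h
| eval GB_per_day=round(bytes/1024/1024/1024/7, 2)
| sort - GB_per_day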