All Topics



Is there any reason why syntax highlighting is not working by default for Splunk logs? When I click the Syntax Highlight option, the logs do get highlighted.
Hi all, I'm getting this message: ERROR ExecProcessor [3700 ExecProcessor] - message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-AzureMfa-AuthZ/AuthZAdminCh'. I've tried numerous combinations in the stanza, such as:
WinEventLog://Microsoft-AzureMfa-AuthZ/AuthZAdminCh
WinEventLog://Microsoft-AzureMfa-AuthZ-AuthZAdminCh
WinEventLog://Microsoft/AzureMfa/AuthZ/AuthZAdminCh
The Windows Event Log chain for the AuthZAdminCh source is in the attachment. I'm just not quite sure where I'm going wrong. I'd appreciate some advice.
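One thing worth checking (a sketch, not a confirmed fix): the channel name in the stanza has to match the name Windows itself reports, exactly as printed by `wevtutil el`, including any embedded slash. A hypothetical inputs.conf fragment:

```
# inputs.conf sketch -- the channel name below is an assumption; verify it
# against the exact string printed by: wevtutil el | findstr AzureMfa
[WinEventLog://Microsoft-AzureMfa-AuthZ/AuthZAdminCh]
disabled = 0
```

If `wevtutil el` shows a different string for that channel, use that string verbatim after `WinEventLog://`.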
Hello, I have a heavy forwarder installed on a server to monitor a certain log file. We used to read that log just fine, but after a bug fix to log generation (on the server side) and a server restart, I can't read that log file at all. Our inputs.conf was:
[monitor:///data/ESB/ACH/LOG/]
disabled = 0
sourcetype = napas.itso.app.achlog
index = napas.ach.app.log
It can still read all the other log files in that directory, but not the one we need. I have restarted the agent, restarted the server, and restarted the splunkd connection, but it still can't read it. We can read /data/ESB/ACH/LOG/iib_log_summary_2022-07-12.log but not /data/ESB/ACH/LOG/iib_log_detail_2022-07-12.log. We checked the read permissions on both files and they are the same. How can I troubleshoot this?
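One possible cause worth ruling out (an assumption, not a diagnosis): Splunk decides whether a file is new by a CRC of its first bytes, so if the regenerated detail log now begins with the same header bytes as an already-indexed file, the monitor skips it as a duplicate. A hedged inputs.conf sketch that salts the CRC with the file path:

```
[monitor:///data/ESB/ACH/LOG/]
disabled = 0
sourcetype = napas.itso.app.achlog
index = napas.ach.app.log
# Make the CRC unique per path so files with identical headers are not
# treated as already-seen. Note: this can re-index existing files once.
crcSalt = <SOURCE>
```

Running `splunk list inputstatus` on the forwarder will also show how the tailing processor classified that specific file.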
Hi, when I run a search against an index in smart/verbose mode, I get the following error with zero results: "Some events were removed by Timeliner because they were missing _time". However, when the same query is run in fast mode, I see results. Is there something wrong with the timestamps of the incoming logs? How should I fix this?
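If the timestamps really are the problem, the usual fix is to tell Splunk explicitly where and how to parse _time at index time. A sketch for props.conf on the indexer or heavy forwarder — the sourcetype name and format string below are placeholders for your actual data:

```
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Comparing _time against _indextime on a few events from fast mode can confirm whether timestamps are being parsed far in the past or future.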
Hi, my team recently set up our ITSI environment and created a few service templates and KPI base searches. When I create a new service and link it to a service template, I tell it to backfill the KPIs. Everything up to this point looks great: the KPIs and the service analysis page show everything in the backfill as healthy. But nothing under the KPIs displays beyond the time I created the service, and eventually everything returns to N/A. I've discovered that the KPI search acceleration using the summary index does not seem to be working; when I run the raw search, everything displays. How do I get the itsi_summary index populated with the correct information?
Hi, I have an event like:
vuln {
    host: some_host
    cve: {
        base_score: 10
        description: "Really nasty"
        references: [link_1, link_2]
    }
    remediation: {
        something: "something"
    }
}
I don't want a table where each JSON key is a column name; I want the event rendered as rows in a table. I would like a dynamic table, something like:
Name        | Value
base_score  | 10
description | Really nasty
references  | link_1, link_2
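One way to get that shape (a sketch — the spath paths assume the event is valid JSON under a top-level vuln key) is to pull out the fields and pivot the single result with transpose:

```
... your base search ...
| spath path=vuln.cve.base_score output=base_score
| spath path=vuln.cve.description output=description
| spath path=vuln.cve.references{} output=references
| eval references=mvjoin(references, ", ")
| table base_score description references
| transpose column_name=Name
| rename "row 1" AS Value
```

transpose pivots one row at a time, so this pattern fits best in a per-event drilldown panel.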
Hello, I have a table on my Splunk dashboard. I have a search to get the data, and I am using <fields> to filter which columns are shown on the dashboard. I observed that even if I filter out a couple of columns from the display, I can still use the hidden column data (column c or d in the example below) for other purposes. This is the format I am using for the Splunk table:
<search>
  <query>| dbxquery connection=connection query="SELECT a, b, c, d FROM t1"</query>
</search>
<fields>["a", "b"]</fields>
Now I have a requirement to create a custom Splunk table using JavaScript. I was able to do so, but I am not sure how to reproduce the <fields> behavior in JavaScript. The problem with the JavaScript table is that if I don't display a particular column, I cannot use that column's data for other operations. I basically want to display just the a and b columns in my JavaScript table while still using the c and d column data for other operations. Kindly suggest a solution.
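The general pattern — independent of the Splunk JS stack, whose exact API I won't assume here — is to keep the full result set in your view's state and derive a trimmed copy only for rendering, which is effectively what <fields> does in Simple XML. A minimal sketch (splitColumns, rows, and displayFields are hypothetical names, not a Splunk API):

```javascript
// Keep the full result set in memory, but derive a trimmed view for
// rendering -- mirrors what <fields> does in Simple XML.
function splitColumns(rows, displayFields) {
  const displayRows = rows.map(row =>
    Object.fromEntries(displayFields.map(f => [f, row[f]]))
  );
  return { displayRows, fullRows: rows };
}

const rows = [
  { a: 1, b: 2, c: 3, d: 4 },
  { a: 5, b: 6, c: 7, d: 8 }
];
const { displayRows, fullRows } = splitColumns(rows, ["a", "b"]);
console.log(displayRows);                    // only a and b get rendered
console.log(fullRows[0].c + fullRows[0].d);  // c and d remain usable
```

In a custom table view you would feed displayRows to the DOM while keeping fullRows around for click handlers and other logic.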
I am trying to include dynamic names for a notable event that I have triggering. When I use $variable$, it just shows the literal text and does not pull the field value.
My search:
index=o365 sourcetype="mscs:azure:eventhub" body.operationName="User Risk Detection" "body.properties.riskLevel"=high
| rename body.properties.userDisplayName AS "Display Name"
My name: Office 365 Risky User Detected - $Display Name$
Can anyone help?
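One thing to try (a guess based on how token substitution usually behaves, not confirmed ES behavior): field names containing spaces are awkward inside $...$ tokens, so renaming to an underscore form may help, and depending on where the title is evaluated, the $result.fieldname$ form may be required instead of $fieldname$:

```
index=o365 sourcetype="mscs:azure:eventhub" body.operationName="User Risk Detection" "body.properties.riskLevel"=high
| rename body.properties.userDisplayName AS Display_Name
```

with a title such as: Office 365 Risky User Detected - $Display_Name$ (or $result.Display_Name$).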
splunk-winevtlog.exe crashes, low throughput, high CPU utilization, and EventCode filtering not working as expected with 8.1.x/8.2.x/9.0
[Blog Post] Differentiate the various size-related parameters in indexes.conf: maxTotalDataSizeMB, maxGlobalRawDataSizeMB, maxGlobalDataSizeMB, maxDataSize, maxVolumeDataSizeMB
Hi all, is there a way to query the API to return only the custom metrics that have been set up in an application/tier/node? Use case: we would like to know how many custom metrics have been created per application, and whether each metric is actually useful and being used. Having hundreds of thousands of custom metrics sitting on the controller is not good practice, I would imagine. We are using a SaaS controller. Thanks to anybody with an answer or suggestion.
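If the metric-browse REST API fits your need, one approach is to walk the metric tree under the "Custom Metrics" folder and count the leaves. This is a sketch only: the endpoint pattern follows the documented /controller/rest/applications/<app>/metrics browse API, the folder/leaf response shape is an assumption, and get_json stands in for an authenticated HTTP call against your controller:

```python
# Sketch: recursively count metric leaves under a folder of the
# AppDynamics metric tree. `get_json` is a stand-in for an authenticated
# HTTP GET that returns the parsed JSON response.
def count_metrics(get_json, app, path="Custom Metrics"):
    """Count metric leaves under `path` in the metric browser tree."""
    items = get_json(f"/controller/rest/applications/{app}/metrics"
                     f"?metric-path={path}&output=JSON")
    total = 0
    for item in items:
        if item.get("type") == "folder":
            # Metric-path segments are joined with "|" in the browse API.
            total += count_metrics(get_json, app, f"{path}|{item['name']}")
        else:
            total += 1
    return total
```

A real call should URL-encode the metric-path value; it is left raw here for readability.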
I am trying to enable Server Certificate Hostname Validation in server.conf. I literally copied and pasted the line
sslVerifyServerName = true # turns on TLS certificate host name validation
from the Splunk documentation, and when I restart Splunk on this on-prem deployment server it says:
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Now, I get that the CLI setting is cliVerifyServerName instead of sslVerifyServerName, but I even tried having both lines there and it still does not like it. I have issued an Enterprise web certificate to this server, and it is still valid for two years, so I am at a total loss here. Please help.
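A few things worth checking (assumptions, since I can't see your full file): the settings must sit inside the [sslConfig] stanza; the pasted trailing comment should move to its own line, since text appended after a value can be read as part of the setting; and the warning refers to a separate setting, cliVerifyServerName, which gates validation for CLI connections. A sketch:

```
[sslConfig]
# Turns on TLS certificate host name validation for server connections.
sslVerifyServerName = true
# The warning in your output refers to this separate setting for the CLI.
cliVerifyServerName = true
```

`splunk btool server list sslConfig --debug` will show which values splunkd actually resolved and from which file.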
Hello, I have several events whose _raw field contains a unique identification number. I would like to replace these with something standard so I can aggregate counts.
Example data:
fi/transaction/card/purchase/tx_2994882028948/refund
fi/transaction/card/purchase/tx_3920496893002/void
fi/transaction/card/purchase/tx_2930540482198/refund
I'd like these all to read:
fi/transaction/card/purchase/trans/refund
fi/transaction/card/purchase/trans/void
fi/transaction/card/purchase/trans/refund
So: replace the unique identifier, but keep the verbiage at the end. I've tried a few of the methods noted in other threads, but to no avail; some don't work at all, and some run but don't replace the values. Thanks!
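A sed-style rex is one way to do this — the pattern below assumes the identifier is always tx_ followed by digits:

```
| rex mode=sed field=_raw "s/tx_\d+/trans/g"
| stats count BY _raw
```

If the value lives in an extracted field rather than _raw, point field= at that field instead.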
Hello community, I am looking at TAs/apps and trying to figure out what to use, where to use it, and how to use it optimally. Are there best practices for this?

To set the stage, let's take an example: collecting only some logs from hosts using a universal forwarder. Using Splunk_TA_nix and setting up a local/inputs.conf that cherry-picks a few sources/folders from default/inputs.conf seems reasonable. But pushing the entire app to a UF seems overkill; much of it would go unused, and not all scripts will be needed.

Then, on the search heads, field extraction should be performed for these sources. I assume you need the app installed there as well for search-time extraction. However, things like bin/, inputs.conf, outputs.conf, etc. seem unnecessary; generally, it seems like keeping excerpts from props.conf and transforms.conf could suffice.

To put this as an answerable question: say I have a few logs/sources being indexed and searchable. On the search heads, assuming I am only interested in field extraction for the affected sourcetypes (no dashboards, reports, or log collection on the SH), should I install the entire app/TA regardless of how much of it will actually be used? Or can I remove the parts that serve no purpose for the function outlined? If so, what is the most efficient way to decide what to keep and what to discard (e.g., is it always enough to keep props.conf/transforms.conf)?

Looking forward to your feedback. All the best.
Hello, some users in my system do not have the Data Summary button (each one has a different role). How can I enable Data Summary for them? Thanks.
I'm looking for a way to collect all custom lists. While I can fetch each Custom List individually with `phantom.get_list()`, I still need to know their names to use that function. So, is there a way to get all Custom List names, or all Custom Lists' contents? As a workaround I tried making a request to "/rest/decided_list", but it doesn't return everything that is accessible through Phantom itself.
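One likely explanation (an assumption worth testing): /rest/decided_list is paginated like other SOAR REST endpoints, so a bare GET only returns the first page of results. A sketch that walks all the pages — get_json stands in for an authenticated requests call against your SOAR base URL:

```python
# Sketch, assuming /rest/decided_list returns {"count": N, "data": [...]}
# and honors `page` / `page_size` query parameters like other SOAR
# REST endpoints. `get_json` is a stand-in for an authenticated HTTP GET.
def fetch_all_custom_lists(get_json, page_size=100):
    """Collect every custom list name by walking the paginated endpoint."""
    names, page = [], 0
    while True:
        resp = get_json(f"/rest/decided_list?page={page}&page_size={page_size}")
        names.extend(item["name"] for item in resp.get("data", []))
        if (page + 1) * page_size >= resp.get("count", 0):
            return names
        page += 1
```

Once you have the names, each list's contents can be fetched with `phantom.get_list()` as you already do.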
Hi all. I want to create an alert for hosts file modification. I found the built-in one here on the forums, but I would like to add a filter that can read inside the file, so that when the file is modified by Docker the change is ignored and the alert doesn't fire. I appreciate the assistance!
I uploaded a new version of my app to Splunkbase. It passed validation, and I was able to update successfully from my Splunk Enterprise test instance. However, my Splunk **cloud** instance is not detecting the new version and offering to update. It did pull the new version when I uninstalled and reinstalled the app; it just isn't detecting it as an update from the older version to the latest. This is a problem, since my current customers aren't receiving this bugfix release. Any help is appreciated.
I have URLs like the below:
1. aa/bb/cc/dd
2. nbcn/hbd/hvhd/hbxn
I need a regular expression to get the following output:
1. aa/bb
2. nbcn/hbd
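A sketch in SPL — assuming the value is in a field called url and the first two segments never contain a slash:

```
| rex field=url "^(?<prefix>[^/]+/[^/]+)"
```

For string 1 this yields prefix=aa/bb, and for string 2 it yields prefix=nbcn/hbd.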
Hi, for testing purposes, we are trying to use the Logstash client command line to send data to a Splunk server instance. The client configuration is the following (based on the steps found in https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Data/UsetheHTTPEventCollector) and it uses an HTTP Event Collector token:

input {
  stdin {
    codec => json
  }
}

output {
  http {
    format => "json"
    http_method => "post"
    url => "https://prd-p-kz4cj.splunkcloud.com:8088/services/collector/raw"
    headers => ["Authorization", "Splunk 26c964a2-c1e8-46e8-96ca-679d3b7542bd"]
  }
  stdout {
  }
}

However, it fails with the following error:

[ERROR] 2022-07-01 14:28:12.794 [[main]>worker5] http - Could not fetch URL {:url=>"https://prd-p-kz4cj.splunkcloud.com:8088/services/collector/raw", :method=>:post, :message=>"Connect to prd-p-kz4cj.splunkcloud.com:8088 [prd-p-kz4cj.splunkcloud.com/54.167.42.214] failed: connect timed out", :class=>Manticore::ConnectTimeout, :will_retry=>true}

And updating the URL above to https://prd-p-kz4cj.splunkcloud.com/en-US:8088/services/collector/raw produces a different error:

[ERROR] 2022-07-01 14:34:26.702 [[main]>worker4] http - Encountered non-2xx HTTP code 404 {:response_code=>404, :url=>"https://prd-p-kz4cj.splunkcloud.com/en-US:8088/services/collector/raw", :event=>#<LogStash::Event:0x199312cb>}

Also, we have noticed that a ping to the Splunk server instance does not return a reply:

ping prd-p-kz4cj.splunkcloud.com
PING prd-p-kz4cj.splunkcloud.com (54.167.42.214) 56(84) bytes of data.

Could it be that a server configuration setting is missing? Kind regards, Moacir Silva
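A couple of hedged observations. The connect timeout suggests nothing is listening on port 8088 at that hostname, and the /en-US:8088 variant is not a valid URL (the port must precede the path), so the 404 there is expected. For Splunk Cloud, HEC traffic typically goes to a dedicated input hostname of the form http-inputs-<stack>.splunkcloud.com on port 443 rather than the stack's web hostname on 8088 — verify the exact host against your stack's HEC documentation. Also, ICMP is commonly blocked, so the failed ping by itself proves nothing. A sketch of the output block under those assumptions:

```
output {
  http {
    format      => "json"
    http_method => "post"
    # Hostname pattern is an assumption -- check your Splunk Cloud HEC docs.
    url         => "https://http-inputs-prd-p-kz4cj.splunkcloud.com:443/services/collector/raw"
    headers     => ["Authorization", "Splunk 26c964a2-c1e8-46e8-96ca-679d3b7542bd"]
  }
}
```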